Mimetic AI Systems: Understanding and Regulating the Use of Generative Models for Impersonation
Norman Bukingolts
TLDR
This paper conducts a normative ethics assessment of mimetic AI systems and proposes several regulatory solutions to support AI governance initiatives addressing the multifaceted threats these systems pose to the integrity, value, and endurance of authentic human expression.
Abstract
Generative artificial intelligence models are being used to imitate the words, voices, bodies, and artistic styles of private and public figures with unprecedented accuracy and scale. Although such use offers cost efficiencies over employing the human counterpart, the associated harms -- scams and fraud, baseless social death or defamation, and an erosion of trust in online information environments -- are growing and approaching criticality. Mimetic AI systems use generative models that leverage knowledge extracted from data provided at training or inference time to capture and reproduce the actions, decisions, and preferences of specific individuals in novel contexts. In this paper, I explain how such systems power the creation and distribution pipelines of deepfakes, digital doubles, voice clones, and other impersonations. I then conduct a normative ethics assessment of these systems and discuss their benefits and risks to key stakeholders: system operators, targets, and audiences, as well as their creators, intermediaries, and regulators. Finally, I propose several regulatory solutions and outline their possible implementation challenges to support initiatives in AI governance aimed at addressing the multifaceted obstacles that mimetic AI systems pose to the integrity, value, and endurance of authentic human expression.
