AI makes fakes real: How deepfakes threaten truth, privacy, and everything in between
This newsletter discusses the rise of deepfakes and the legal and ethical issues they raise around copyright ownership and privacy, and explores potential legislation, industry self-regulation, and anti-deepfake technology
“In January 2019, deep fakes were buggy and flickery. Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.”
- Prof. Hany Farid, deepfake expert, University of California, Berkeley
As interest in generative AI tools has spiked, a parallel concern about intellectual property (IP) ownership and exploitation has sparked conversations in professional and academic realms alike. In the labyrinth of intellectual property, a critical question now looms large: Can the progeny of AI wield the shield of ownership rights, and if so, who shall wield it?
If the source material is ensconced within the fortress of IP protection, do the tendrils of infringement entwine, whether through the use of that material in the development, training, and operation of an AI tool, or through the tool's resultant output?
In a world where authenticity is contested and truth becomes a fleeting specter, the legal realm of IP finds itself entangled in new dilemmas. Intellectual property rights grow murky as AI-generated content blurs the lines of ownership and originality. Privacy concerns escalate as individuals become unwitting subjects of fabricated narratives.
Recent technological leaps have birthed what's now termed "deepfakes" - hyper-realistic videos forged through face swaps, leaving scant traces of manipulation. These creations are the brainchild of artificial intelligence (AI) applications skilled in melding, substituting, and overlaying images and video clips to produce fake videos of uncanny realism.
Deepfake technology possesses the prowess to conjure diverse forms of content—be it humorous, pornographic, or politically charged—featuring individuals uttering statements without their consent. The game-changing facet of deepfakes lies in the technology's breadth, depth, and sophistication, empowering nearly any computer user to craft counterfeit videos virtually indistinguishable from genuine media.
What are “Deepfakes” and what do they do?
In 2017, deepfakes were thrust into the limelight when a Reddit user unveiled videos depicting celebrities in compromising sexual scenarios. These digital manipulations pose a formidable challenge for detection, as they seamlessly integrate real footage, often accompanied by convincingly authentic audio, and are tailor-made for rapid dissemination across social media platforms. Consequently, many viewers unwittingly accept these concocted visuals as genuine, further complicating the landscape of online authenticity.
Deepfakes emerge from the depths of Generative Adversarial Networks (GANs), a duo of artificial neural networks collaborating to craft convincingly realistic video and imagery. Dubbed 'the generator' and 'the discriminator', these networks train on a shared dataset of images, videos, or sounds. The generator endeavors to produce novel samples adept enough to deceive its counterpart, the discriminator, which scrutinizes incoming media for authenticity. In this intricate dance, they spur each other towards refinement. A GAN can sift through countless images of an individual and concoct a fresh portrait that echoes their essence without replicating any single source.
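To make the generator-discriminator "dance" concrete, here is a deliberately tiny sketch of the adversarial training loop, using only NumPy. Real deepfake GANs use deep convolutional networks on images; this toy version uses a one-parameter affine generator and a logistic-regression discriminator on 1D numbers (the "real data" is simply samples from a normal distribution centered at 4.0), which are illustrative assumptions chosen so the core idea fits in a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip to avoid overflow in exp for large logits.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

def real_batch(n):
    # "Real" media stand-in: samples from N(4, 1).
    return rng.normal(4.0, 1.0, n)

# Generator: a single affine map z -> w_g*z + b_g (starts far from the target).
w_g, b_g = 1.0, 0.0
# Discriminator: logistic regression p = sigmoid(w_d*x + b_d).
w_d, b_d = 0.1, 0.0
lr = 0.05

for step in range(2000):
    z = rng.normal(0.0, 1.0, 64)
    fake = w_g * z + b_g
    real = real_batch(64)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    g_real = p_real - 1.0          # d(BCE)/d(logit) with label 1
    g_fake = p_fake                # d(BCE)/d(logit) with label 0
    w_d -= lr * np.mean(g_real * real + g_fake * fake)
    b_d -= lr * np.mean(g_real + g_fake)

    # Generator update: push D(fake) toward 1 (non-saturating GAN loss),
    # backpropagating through the discriminator into the generator's params.
    p_fake = sigmoid(w_d * fake + b_d)
    g_x = (p_fake - 1.0) * w_d     # gradient w.r.t. each generated sample
    w_g -= lr * np.mean(g_x * z)
    b_g -= lr * np.mean(g_x)

# After training, the generator's output distribution has drifted toward
# the real data distribution, even though it never saw a direct target.
samples = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print(f"generated mean ~ {samples.mean():.2f} (real data mean is 4.0)")
```

The point of the sketch is the feedback loop: neither network is told what "real" looks like directly; the generator improves only by exploiting the discriminator's mistakes, and vice versa, which is exactly the dynamic that drives photorealistic face synthesis at scale.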
These digital deceptions infiltrate social media platforms, thriving amidst the tumult of conspiracies and whispers. Users, swayed by the collective current, often succumb to their allure. Meanwhile, an 'infopocalypse' rages, sowing seeds of skepticism towards any information not originating from within one's social orbit—be it from kin, confidants, or ideological allies who echo familiar beliefs. Paradoxically, many willingly embrace content that reinforces their preconceived notions, even if it bears the hallmarks of falsehood. In this turbulent terrain, truth becomes an enigma, shrouded by the mists of bias and the allurement of digital illusions.
Legal and ethical dilemmas of ownership for AI-generated content
The emergence of AI-generated content spanning literary, musical, and visual realms poses a profound challenge to the established contours of authorship within copyright jurisprudence. Traditionally anchored in human-centric creation, copyright law grapples with the intricate question of attribution in the context of AI-driven creativity. Who, indeed, ought to be acknowledged as the rightful author: the programmer who engineers the AI, the user who furnishes the data, or the AI itself? This query fundamentally disrupts conventional notions of creative agency and originality.
Complicating matters further, prevailing legal doctrine denies AI entities the capacity for copyright ownership, thus necessitating a nuanced inquiry into rightful ownership. Should attribution be vested in the AI developer, the user, or regulated under a contractual work-for-hire arrangement? As extant copyright statutes conspicuously omit direct engagement with AI-generated works, a legal lacuna emerges, leaving a proverbial gray area.
One of the most serious risks posed by deepfakes is their ability to spread disinformation on a massive scale. For instance, deepfakes of world leaders like Vladimir Putin and Volodymyr Zelensky have circulated online, often used for propaganda or to spread misinformation about current events. Deepfakes may confuse and sway public opinion by convincingly inserting words into the mouths of prominent individuals and content creators or distorting events with modified imagery. Political discourse, elections, and community trust are all subject to the pernicious effect of synthetic media, which blurs the distinction between reality and fiction.
Recently, Sarah Andersen, Kelly McKernan, and Karla Ortiz, represented by the Joseph Saveri Law Firm, filed a US federal class-action lawsuit in San Francisco against the AI-art companies Stability AI, Midjourney, and DeviantArt for alleged violations of the Digital Millennium Copyright Act, violations of the right of publicity, and unlawful competition. According to the complaint, the artists "seek to end this blatant and enormous infringement of their rights before their professions are eliminated by a computer program powered entirely by their hard work". And this is only the tip of the iceberg: as the sophistication of this technology accelerates, so does the litigation and the burden on courts worldwide.
Beyond the immediate legal ramifications, deepfakes exacerbate issues such as slut-shaming and revenge porn, presenting serious consequences for individuals' reputations and self-image. The emergence and proliferation of deepfake technology herald a profound challenge to the sanctity of personal privacy and the inviolability of individual identity. This technological innovation facilitates the manipulation of visual media, affording the superimposition of facial features onto explicit contexts or fictitious scenarios, and as stated previously, unsuspecting individuals may find themselves entangled in compromising or injurious circumstances. The malevolent exploitation of deepfakes carries the peril of besmirching reputations and inflicting grave emotional and psychological trauma upon its targets. Thus, the advent of deepfake technology constitutes a palpable menace to the sanctity of personal privacy and the preservation of individual identity, underscoring the imperative for resolute measures to mitigate its potential for abuse.
With the escalating sophistication of deepfake technology, the bedrock of trust upon which digital media stands is steadily crumbling. Society finds itself in a precarious position, wrestling with doubts regarding the authenticity of the myriad content encountered in online realms, from news articles to videos, and even personal interactions. This pervasive erosion of trust reverberates across various domains, profoundly impacting the realms of journalism, media, content generation and the broader reliability of information dissemination in this epoch of digitization.
The advent of deepfake technology presents formidable obstacles within legal proceedings, particularly in criminal cases, with far-reaching implications for individuals' personal and professional realms. The prevailing absence of robust mechanisms for authenticating evidence in many legal systems places the burden on defendants or opposing parties to contest manipulations, effectively privatizing a pervasive dilemma. A potential solution to this quandary could involve mandating that evidence be authenticated before it is admitted in court.
Around the globe, extant laws offer some recourse against deepfake issues, albeit hindered by the absence of a precise legal delineation, thus impeding targeted prosecution. The dynamic evolution of deepfake technology exacerbates challenges for automated detection systems, amplifying difficulties, particularly in the face of contextual intricacies. This poses a substantial threat to legal proceedings, potentially elongating trials and heightening the peril of erroneous assumptions.
WIPO’S stance and international contemplations
WIPO recognizes the subtle intricacies posed by deepfakes, which go beyond traditional copyright infringements and include breaches of fundamental human rights such as privacy and personal data protection. This revelation highlights the importance of thoroughly reevaluating the application of copyright to deepfake pictures. WIPO maintains that if deepfake content deviates considerably from the subject's real life, it should be ineligible for copyright protection.
Addressing these complexities, WIPO proposes a paradigm shift: where a deepfake does qualify for copyright, ownership should vest in the creator rather than the subject individual. This proposition stems from the recognition of the creator's agency in the creative process, notwithstanding the absence of intervention or consent from the subject.
WIPO warns that copyright alone may not be an effective tool against deepfakes due to victims' lack of interest in copyrights. Victims are advised to exercise their right to personal data protection. Drawing on Article 5(1) of the EU General Data Protection Regulation (GDPR), which requires the accuracy and currency of personal data, WIPO argues for the prompt removal or correction of irrelevant, incorrect, or misleading deepfake material.
Furthermore, even when deepfake information is factually true, victims have the "right to be forgotten" guaranteed in Article 17 of the GDPR, which allows for the prompt deletion of personal data. WIPO sees this two-pronged approach focusing on personal data protection rights as a more effective technique for countering the various difficulties posed by deepfake material. As a result, WIPO emphasizes the importance of taking a comprehensive strategy that goes beyond standard copyright paradigms to protect persons from the harmful consequences of deepfake technology.
Chief ways to combat deepfakes
With the proliferation of deepfake technology, the potential adverse effects on personal interests, societal institutions, and content generation become increasingly apparent. While extant legal frameworks proscribe harmful deepfakes, challenges in their enforcement persist. The discourse surrounding ex ante regulations, such as prohibiting deepfake technology in consumer markets or mandating legitimacy tests before dissemination, underscores the intricate nature of grappling with this phenomenon.
Concerns arise regarding the enforceability of such regulations, given that individuals can readily access deepfake technology across global platforms. Moreover, the desirability of imposing bans raises fundamental questions concerning societal trust. The efficacy of implementing prohibitive measures is further complicated by the evolving landscape of social and ethical norms, which are subject to gradual development over time. Acknowledging that ex ante rules may not eradicate underlying issues, such as online misogyny and the dissemination of fake news, which persist autonomously, similar reservations apply to prospective regulations constraining online expression.
Concluding from what has been said and heard, there seem to be four main ways to combat deepfakes: first, legislation and regulation; second, corporate policy and voluntary action; third, education and training; and finally, anti-deepfake technology. Each comes with its own opportunities and challenges.
A call for responsible innovation
This is a compelling call to action for entrepreneurs, developers, and investors in the deepfake space, as the potential of deepfakes extends far beyond malicious applications. Imagine educational tools that bring historical figures to life or personalized healthcare experiences with virtual consultations. However, realizing this potential hinges on responsible development and ethical considerations. Below are some ways entrepreneurs, developers, and investors can navigate this exciting yet challenging space, as an extension of the Tech for Good movement:
Transparency by Design: Embed mechanisms within deepfake creation tools that flag manipulated content. This fosters user awareness and discourages misuse.
Prioritizing user control: Empower individuals to manage their digital likeness. Develop systems where users can opt out of having their image used in deepfakes or grant specific permissions for its use.
Collaboration is key: Partner with policymakers, social media platforms, and media outlets to establish clear guidelines for deepfake creation and dissemination. This fosters a unified front against malicious actors.
Investing in detection: Fund research and development of robust deepfake detection tools. The ability to identify manipulated content is crucial for maintaining online trust.
Ethical AI development: Integrate ethical considerations throughout the development process. Prioritize data privacy, user consent, and potential societal impacts from the outset.
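As a rough sketch of the "transparency by design" idea above, here is one way a creation tool could flag its own output: attach a signed provenance manifest declaring the media synthetic, loosely in the spirit of content-credential efforts such as C2PA. Everything here is a hypothetical illustration, not any standard's actual format: the tool name, the manifest fields, and the hard-coded demo key (real systems would use proper key management and public-key signatures rather than a shared HMAC secret).

```python
import hashlib
import hmac
import json

# Hypothetical demo key; a real deployment would never hard-code this.
SIGNING_KEY = b"demo-signing-key"

def tag_synthetic(media_bytes: bytes, tool_name: str) -> dict:
    """Build a provenance manifest declaring the content AI-generated."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": tool_name,
        "synthetic": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_tag(media_bytes: bytes, manifest: dict) -> bool:
    """Check the manifest matches the media and carries a valid signature."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    if body.get("sha256") != hashlib.sha256(media_bytes).hexdigest():
        return False  # media was swapped or altered after tagging
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest.get("signature", ""), expected)

media = b"...fake video bytes..."
m = tag_synthetic(media, "ExampleFaceSwapTool")
print(verify_tag(media, m))        # True: media matches its manifest
print(verify_tag(b"tampered", m))  # False: hash no longer matches
```

The design point is that the flag travels with the content and can be checked downstream by platforms and viewers; stripping it is possible, which is why such tagging complements, rather than replaces, the detection research described above.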
By embracing these principles, you can be a force for good in shaping the future of deepfakes. Together, we can ensure this powerful technology empowers creativity, fosters innovation, and safeguards the integrity of our online world.
---
We will continue our discussion on various aspects of emerging technology in the coming weeks and talk about its reimagined future.
Stay tuned and send this to someone who will enjoy this read!