In March 2022, a widely circulated video appeared to show Ukrainian President Volodymyr Zelenskyy calling on Ukrainians to lay down their weapons and surrender to Russia. The video was fake and was quickly debunked by President Zelenskyy himself, but its reach illustrates the ability of deepfakes to influence the public and potentially affect the political landscape.
Two weeks later, a deepfake video that appeared to show American actor Tom Cruise jumping over the head of a presenter at the American Film Institute awards show went viral, garnering over ten million views and bringing the conversation about fabricated realities back into public debate.
Deepfakes are realistic but fabricated videos made using machine learning and artificial intelligence software. The quality of deepfake videos is steadily improving, making them ever harder to identify as false. Since 2019, a handful of jurisdictions around the world have introduced legislation to address deepfakes, taking divergent approaches that target harm at either the societal or the individual level, and that reach either the producers of deepfakes or the platforms that distribute them. With respect to harm at the societal level, regulations and legislation criminalizing or prohibiting the dissemination of deepfakes target both deepfake producers and distributors. At the individual level, persons whose likeness has been used may have a cause of action against the producer responsible for making the deepfake or against the platform that hosts or disseminates it.
Addressing Societal Harm
Deepfakes fall under the prohibited practices of the EU’s Code of Practice on Disinformation (“Code”). To address the societal harms of false information, the European Commission unveiled a strengthened Code on June 16, 2022, with the goal of developing “very significant commitments to reduce the impact of disinformation online and much more robust tools to measure how these are implemented across the EU in all countries and in all its languages.” European Commission Press Release (Jun. 16, 2022). The European Commission Vice-President for Values and Transparency, Věra Jourová, cited Russia’s weaponization of information and attacks on democracy more broadly as considerations behind the strengthened Code, which is intended to provide a meaningful tool against disinformation and a cohesive set of commitments and shared understandings for platforms. Under the new Code, large technology companies, including Google, Meta (parent company of Facebook, Instagram and WhatsApp), Twitter, and TikTok, must take measures to counter deepfakes and fake accounts on their platforms or face significant fines of up to 6% of their global turnover.
The Code, initially introduced in 2018 as a voluntary self-regulatory instrument by industry players, will now have the backing of the Digital Services Act (“DSA”), a comprehensive set of rules proposed by the European Commission in December 2020 to protect consumers online, establish a framework of transparency and accountability for online platforms, and foster competitiveness. The DSA imposes rules on how platforms moderate content, advertise, and use algorithms. The Code’s signatories will be subject to the DSA’s audit requirements and to dissuasive sanctions if they fail to comply with their obligations.
Platforms that reach more than 10% of the EU population, meaning they have at least 45 million users in the EU, are designated under the DSA as “Very Large Online Platforms” and are subject to a specific set of obligations because of the “systemic risks the platform poses [that] have a disproportionately negative impact in the Union.” The Very Large Online Platforms that signed onto the Code have committed “to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviors, actors and practices not permitted on their services,” including malicious deepfakes, the creation and use of fake accounts, account takeovers and bot-driven amplification, non-transparent paid messages or promotion by influencers, hack-and-leak operations, and other conduct aimed at artificially amplifying the reach of, or perceived public support for, disinformation. Code, Commitment 14. The Code’s 34 signatories, which include Google, Meta, Microsoft, TikTok, Twitter, Vimeo, the World Federation of Advertisers (WFA), the Interactive Advertising Bureau (IAB Europe), and the European Association of Communication Agencies (EACA), have six months to implement the commitments and measures they agreed to and must report on their implementation to the Commission by early 2023.
The approach of addressing deepfakes by targeting the platforms that distribute and host them was previously implemented in China, which in 2019 made it a criminal offense, effective January 1, 2020, to publish deepfake videos created with artificial intelligence or virtual reality. In implementing regulations that require all deepfakes to be prominently labeled and prohibit the use of deepfakes as fake news, the Cyberspace Administration of China explained that deepfakes create risks of “endangering national security, undermining social stability, disrupting social order, and infringing on the legitimate rights and interests of others, causing political security risks, national security, and public security risks, and adversely affecting social stability.” Cyberspace Administration of China, Briefing (Nov. 29, 2019).
Some U.S. jurisdictions target the producer of a deepfake under specific circumstances. For example, in the U.S. state of Texas, it is a crime to make and publish or distribute a deepfake video with the intent of injuring a political candidate or influencing the result of an election. Such laws may prove hard to enforce given the difficulty of identifying the producer of a deepfake video, and even successful enforcement may leave the video online absent additional measures requiring its removal from distribution platforms. Once a violation is established, the platform can be notified to remove the content voluntarily or in accordance with industry self-regulatory commitments.
Addressing Individual Harm
The law has moved more slowly than the technology in addressing harm to individuals caused by deepfakes. Outside of a few jurisdictions, laws have not been introduced that specifically address deepfakes or provide causes of action to individuals whose image or likeness has been used in one.
Within the U.S., for example, the laws that have been introduced are narrowly tailored to address two primary concerns about how deepfakes could be used: (i) to interfere with elections; or (ii) to develop sexually explicit content. Individuals harmed by deepfake technology have, with varying degrees of success, sought redress in court against producers and distributors using existing causes of action that were not developed specifically for deepfakes. For example, subjects of deepfakes may claim that their image, expression, or voice is copyrighted material and that its use violates copyright law. These claims face difficulty overcoming broad exceptions to copyright infringement, such as the “fair use” doctrine in the U.S., which permits the unlicensed use of otherwise protected material if, for example, the borrowed content is sufficiently transformed. In practice, the copyright holder may have more success focusing its efforts on the platform rather than the producer, e.g., by sending a takedown notice or requesting that the platform voluntarily remove or flag the deepfake. Similarly, if the deepfake is used in commercial advertising, an action may lie under the Lanham Act.
Targets of deepfakes may also have a claim for defamation based on injury to their reputation caused by the deepfake, or under various torts such as false light, invasion of privacy, or violation of the right of publicity, depending on the jurisdiction.
As deepfakes evolve, so too do the laws, regulations, and political will to address them, and it remains to be seen how this area will develop. One thing seems certain: the days of unregulated platform content are over.