A Look at Global Deepfake Regulation Approaches

TLDR: Deep synthesis technology, better known as deepfake technology, generates artificial images, video, and audio, and comes with risks that should be mitigated through regulation. In this overview, we look at major regulatory approaches to deepfakes, focusing on Canada, China, the EU, South Korea, the UK, and the US.

You might have come across viral videos called “deepfakes,” which superimpose the faces of politicians or celebrities onto different bodies, making it seem like they are saying or doing something controversial. For instance, one video showed Facebook CEO Mark Zuckerberg appearing to brag about owning users’ stolen data, and another showed Game of Thrones’s Jon Snow apologizing for the disappointing ending of the final season.

Deepfakes use AI to alter videos and images to look frighteningly real. But these videos are not genuine and can be used to spread misinformation with harmful consequences. AI firm Deeptrace identified nearly 15,000 deepfake videos online in 2019, almost double the count from just nine months earlier. Some experts anticipate that as much as 90 percent of digital content could be synthetically generated within just a few years.

How Deepfakes Work

Let’s break down how deepfakes work in simple terms.

Deepfakes are highly realistic video, audio, or image forgeries or replicas generated using AI. The technologies that create deepfakes include Generative Adversarial Networks (GANs) and machine learning (ML).

ML is a subset of AI that enables systems to learn and improve from experience, where experience takes the form of the data the system collects.

GANs are a type of machine learning algorithm that pits two neural networks—a generator and a discriminator—against each other to learn to create data that looks real rather than AI-generated.

The generator produces fake data, such as images or video frames, while the discriminator distinguishes between real and fake data. Over time, the generator gets better at creating fake data, and the discriminator gets better at detecting it—an iterative, adversarial feedback loop that results in highly convincing fakes.
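To make the generator–discriminator loop concrete, here is a minimal sketch of a GAN training step in PyTorch. Everything here—the network sizes, the random stand-in for real training data, and the hyperparameters—is a simplified assumption for illustration; real deepfake systems are far larger and train on curated image or audio datasets.

```python
# Minimal GAN training loop (illustrative sketch, not a production system).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # placeholder sizes

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    noise = torch.randn(n, latent_dim)

    # 1) Train the discriminator to separate real from fake.
    fake = G(noise).detach()  # detach so only D updates in this step
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = bce(D(G(noise)), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Each call pits G against D; over many iterations G's fakes become
# harder for D to tell apart from real data.
for _ in range(1000):
    train_step(torch.randn(32, data_dim))  # random stand-in for real data
```

The detach call is the key design point: when the discriminator trains, gradients must not flow back into the generator, so each network only improves at its own adversarial objective.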

Uses of Deepfakes

But how is this technology used?

In this section, we’ll cover a few examples of harmful and beneficial uses of deep synthesis technology. In both the good and the bad, and in the gray area in between, responsible risk management is needed to mitigate the potential risks of this technology.

Benefits of Deepfake Technology

Despite the harms associated with deepfakes, the technology has its benefits, including accessibility, education, forensics, and artistic expression.

Synthetic media can support the development of critical accessibility tools. For example, Lyrebird is a Canadian company that uses deep synthesis to help ALS patients communicate after they’ve lost the ability to speak. By cloning a patient’s voice, the technology lets them continue to “speak” through the deepfake.

Deepfakes can be used in the entertainment industry. They can create more realistic special effects and allow actors’ likenesses to appear in new roles, even after the actors have passed away. For example, Synthesia is a platform for generating AI videos based on text prompts. Synthesia is well known for having created the synthetic video of soccer star David Beckham sharing a message to spread malaria awareness in nine different languages.

In the field of education, deepfakes can be used to create interactive and engaging simulations and role-playing exercises. In medicine, deepfakes can be used to simulate surgeries or other procedures, providing medical students and professionals with valuable training opportunities. Additionally, deepfakes can be used in virtual reality and augmented reality applications, enhancing the overall user experience.

In journalism, deepfakes can help to recreate historical events and bring attention to important issues. For example, Deep Empathy is a UNICEF and MIT project that simulates how other cities would look if faced with conflicts like those in war-torn Syrian neighborhoods. These synthetic images of New York, London, Boston, and other cities ravaged by the same destabilizing conditions are meant to evoke empathy for real victims in Syria and other regions.

Alongside the need for caution and regulation, deepfakes have the potential to bring about positive change in a variety of industries and applications.

Harms of Deepfake Technology

The widespread availability of deepfake technology poses significant risks and harms to individuals, organizations, and society as a whole.

Deepfakes can be used to spread false information, manipulate public opinion, or damage reputations. They can also be used to create fake pornographic content, harass or blackmail individuals, or even manipulate political elections. For example, in 2022, a deepfake video of Kyiv mayor Vitali Klitschko duped several European politicians. The mayors of Berlin, Madrid, and Vienna joined calls with someone they believed to be the real Klitschko to discuss Ukrainian refugees, yet the audio and video presented on the calls were fake.

Recently, there has been a rise in scams in which an auto-generated recording of a trusted voice—a loved one or a superior—is used to trick victims into sending money, such as when the CEO of an energy company was deceived into handing over almost 250,000 USD.

The ease of creating deepfakes also makes it difficult to determine the authenticity of media, eroding trust in journalism and causing confusion about the truth. Additionally, deepfakes can have negative social and mental health impacts, as people may struggle to separate truth from fiction.

Overall, the harms of deepfakes are far-reaching and require a concerted effort from individuals, organizations, businesses, and governments to mitigate them.

The industry has responded by aiming to create technology that can accurately detect and label deepfakes; most detection tools frame this as a classification problem, as the sketch below illustrates.
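As a hedged illustration of that framing, here is a minimal sketch of a real-versus-fake image classifier built on a standard pretrained vision backbone. The backbone choice, placeholder data, and labels are all assumptions for illustration; production detectors rely on large curated datasets and must be re-evaluated as generation methods evolve.

```python
# Illustrative deepfake-detector sketch: binary real-vs-fake image classifier.
import torch
import torch.nn as nn
from torchvision import models

# Start from a standard pretrained backbone and replace the final layer
# with a single "probability of fake" logit.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(images: torch.Tensor, is_fake: torch.Tensor) -> float:
    """One training step on a batch of images labeled 1 = fake, 0 = real."""
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, is_fake.float())
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Placeholder batch: 8 RGB images at 224x224 with random labels,
# standing in for a real labeled dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
print(train_step(images, labels))
```

So what are governments doing to regulate the use of this technology? Here’s a look at a few countries’ approaches: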

China

In 2019, the Chinese government introduced laws that mandate individuals and organizations to disclose when they have used deepfake technology in videos and other media. The regulations also prohibit the distribution of deepfakes without a clear disclaimer that the content has been artificially generated.

China also recently established provisions for deepfake providers through the Cyberspace Administration of China (CAC). In effect as of 10 January 2023, the law applies to both providers and users of deepfake technology and establishes procedures across the technology’s lifecycle, from creation to distribution.

These provisions require companies and people that use deep synthesis to create, duplicate, publish, or transfer information to obtain consent, verify identities, register records with the government, report illegal deepfakes, offer recourse mechanisms, provide watermark disclaimers, and more.

Canada

Canada’s approach to deepfake regulation features a three-pronged strategy that includes prevention, detection, and response. To prevent the creation and distribution of deepfakes, the Canadian government works to create public awareness about the technology and develop prevention tech. To detect deepfakes, the government has invested in research and development of deepfake detection technologies. In terms of response, the government is exploring new legislation that would make it illegal to create or distribute deepfakes with malicious intent.

Existing Canadian law bans the nonconsensual distribution of intimate images.

Similar to California’s election laws (discussed below), the Canada Elections Act contains language that may apply to deepfakes. Canada has also made other efforts in the past to curb the negative impacts of deepfakes, including its “plan to safeguard Canada’s 2019 election” and the Critical Election Incident Public Protocol, a panel investigation process for deepfake incidents.

EU

The EU has taken a proactive approach to deepfake regulation, calling for increased research into deepfake detection and prevention, as well as regulations that would require clear labeling of artificially generated content. The most relevant European deepfake policy trajectories and regulatory frameworks are:

  • The AI regulatory framework (the proposed AI Act)
  • The General Data Protection Regulation (GDPR)
  • The copyright regime
  • The e-Commerce Directive
  • The Digital Services Act
  • The Audiovisual Media Services Directive
  • The Code of Practice on Disinformation
  • The Action Plan Against Disinformation
  • The European Democracy Action Plan

The EU has proposed laws requiring social media companies to remove deepfakes and other disinformation from their platforms. Updated in June 2022, the EU’s Code of Practice on Disinformation addresses deepfakes through fines of up to 6 percent of global revenue for violators. The code was initially introduced as a voluntary self-regulatory instrument in 2018 but now has the backing of the Digital Services Act. The Digital Services Act, which came into force in November 2022, increases the monitoring of digital platforms for various kinds of misuse. Under the proposed EU AI Act, deepfake providers would be subject to transparency and disclosure requirements.

South Korea

Given its strong technological base, South Korea was one of the first countries to invest in AI research and regulatory exploration.

In January 2016, the South Korean government announced it would invest 1 trillion won (about 750 million USD) in AI research over five years. In December 2019, South Korea announced its National Strategy for AI.

In 2020, South Korea passed a law that makes it illegal to distribute deepfakes that could “cause harm to public interest,” with offenders facing up to five years in prison or fines of up to 50 million won (approximately 43,000 USD).

Advocates push for South Korea to tackle digital pornography and sex crimes through additional measures, such as education, civil remedies, and recourse.

United Kingdom

The UK government has introduced several initiatives to address the threat of deepfakes, including funding research into deepfake detection technologies and partnering with industry and academic institutions to develop best practices for detecting and responding to deepfakes.

The UK has funded research and development and worked to spread awareness about the harms of revenge and deepfake porn through its ENOUGH communications campaign. The UK hasn’t yet passed horizontal legislation banning the creation or distribution of deepfakes with malicious intent. However, in November 2022, the UK announced that deepfake regulation would be included in its much-anticipated, mammoth Online Safety Bill. This step came amid the release of police data indicating that roughly 1 in 14 adults in England and Wales has experienced threats to share their intimate images.

United States

While there are no federal regulations on deepfakes, some states have passed laws governing their use, primarily targeting deepfake pornography.

California and Texas were the first states to pass laws, in 2019. California’s AB 730 made it illegal to distribute deepfakes of political candidates within 60 days of an election; the law sunsetted on 1 January 2023. Around the same time, California passed AB 602, banning pornographic deepfakes made without consent. New York’s deepfake law, S5959D, passed in 2021, provides potential fines, jail time, and civil penalties for the unlawful dissemination or publication of a sexually explicit depiction of an individual. Virginia’s § 18.2-386.2, passed in 2019, criminalizes the creation and dissemination of sexually explicit deepfakes. The law includes exceptions, such as for parodies and political commentary, and requires the Attorney General to establish a working group to further study deepfakes.

On the federal level, the DEEP FAKES Accountability Act, introduced in 2019, seeks to require deepfake creators to disclose their use of the technology, to prevent the distribution of deepfakes intended to deceive viewers during an election or harm an individual’s reputation, and to set potential fines and imprisonment for violators. The bill would also establish a task force within the Department of Homeland Security to analyze and mitigate the impact of deepfakes on national security, and it calls for increased funding for research into detecting and mitigating the harm caused by deepfakes.

Regulatory Challenges and Considerations for Deepfakes

The rise of deepfakes has brought about a new set of regulatory challenges and considerations. Regulators must meet the moment to provide guardrails for the technology as it scales, while negotiating the interests of tech companies, the arts, healthcare, consumers, and other stakeholders.

One of the biggest challenges in enforcement is catching the most malicious users, who often operate anonymously, adapt quickly, and share their synthetic creations through borderless online platforms. Another consideration is the potential for deepfakes to curtail free speech, particularly political speech, where people can use deepfakes to spread false or misleading information.

Recourse mechanisms, such as takedown notices or legal action, can address copyright questions and defamation, though more research is needed into the effectiveness of these mechanisms and into best practices. Standards will also help shape this conversation. For example, the World Intellectual Property Organization (WIPO) published the “Draft Issues Paper On Intellectual Property Policy And Artificial Intelligence” in December 2019, which included recommendations for establishing a system of equitable remuneration for victims of deepfake misuse and for addressing copyright in relation to deepfakes.

Existing consumer law frameworks can apply in some cases, particularly where deepfakes are used to deceive or defraud individuals. Advocacy groups like the Electronic Frontier Foundation, the Coalition for Content Provenance and Authenticity, and Witness Media Lab encourage regulators to rely on existing privacy, copyright infringement, fraud, obscenity, and defamation legal frameworks, alongside public education, to regulate the technology.

In some cases, regulators must evaluate the gaps left by existing law and identify other opportunities to deter human rights violations and to protect privacy, personal data protection rights, and copyright.

Overview of Other Countries’ Approaches

A number of other countries have also invested in AI and/or deepfake research and development (R&D).

Note that the 27 member states in the European Union (EU) are subject to the deepfake regulations of the strengthened Code of Practice on Disinformation and will be subject to the upcoming EU AI Act, which will govern deepfake technology.

Final Thoughts

All in all, deepfakes have both positive and negative implications. While they have beneficial uses in accessibility, education, forensics, and artistic expression, their widespread availability also poses significant risks and harms to individuals, organizations, and society as a whole. Deepfakes can be used to spread misinformation, manipulate public opinion, or damage reputations. They can also erode trust in journalism and the truth.

The regulation of deepfake technology is essential. Some countries, like China, have introduced laws mandating that individuals and organizations disclose the use of deepfake technology in videos and other media. Overall, AI risk management is crucial to mitigate the potential risks of this technology.

About Responsible AI Institute (RAI Institute)

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact

Nicole McCaffrey

Head of Marketing, RAI Institute

nicole@responsible.ai 

+1 (440) 785-3588
