Responsible AI Institute Webinar Recap (Hosted on April 17, 2024)
The proliferation of deepfakes – synthetic media that manipulates audio, video, and images to depict events that never occurred – poses severe risks to businesses and society, but also presents opportunities for innovation and technological advancement. In a recent panel hosted by the Responsible AI Institute, experts delved into the alarming rise of generative AI abuse and deepfakes, while highlighting generative AI’s potential benefits, provided meaningful regulation is put in place.
The panel featured diverse perspectives from:
Cristina López G., Senior Analyst at Graphika, provided insights on deepfakes’ business impacts.
Sophie Compton, Co-founder of #MyImageMyChoice and Director of ANOTHER BODY, shared her research on survivor stories of image-based sexual violence, specifically deepfake abuse.
Ramsey Brown, CEO of Mission Control AI, discussed responsible integration of generative AI.
Anna Blue, Responsible AI Institute Social Impact Fellow, moderated the discussion.
The Business and Societal Threats
The panel opened the conversation by discussing the biggest generative AI issues that companies and society face, as well as their potential vulnerability to deepfakes. López highlighted how the lowered barrier to entry has expanded the field of potential malicious actors, eroding trust in authentic communication for businesses. The very existence of deepfake technology creates doubt about the authenticity of what audiences see and hear.
Perhaps most concerning from a societal standpoint is deepfakes’ exploitation for sexual abuse, predominantly targeting women and minorities. Compton shared harrowing accounts of survivors whose lives were shattered, leading to psychological trauma, withdrawal from social circles, and self-censorship. She stated, “I think that what we really have to think about is the cultural impact. This abuse aims to shame and silence, and it is a very effective tool of silencing.” Compton also acknowledged that deepfake technology has powerful positive applications, such as allowing journalistic sources to tell their stories while protecting their identities through face veiling, or enabling someone who has lost their voice to still speak as themselves. She stressed, however, that responsible adoption of this technology must be a priority.
Generative AI Integration and Regulation
Brown emphasized the need for technical guardrails, cultural shifts, and holistic training to responsibly integrate generative AI into enterprises. He warned that deepfakes could breach corporate security, stressing that synthetic media could undermine trust in basic communications like video calls. At the same time, he noted that generative AI offers tremendous potential benefits and promises to revolutionize industries like media, entertainment, and scientific research when deployed responsibly.
With the 2024 elections looming, the improved quality, scalability, and multilingual capabilities of deepfakes have expanded malicious actors’ reach in influencing voters, exacerbating the challenge of maintaining election integrity as forensic detection tools lag behind technological advancements.
While legislative efforts are underway, the panel expressed skepticism about regulation’s effectiveness due to the decentralized nature of deepfake technology and governments’ reluctance to hamper innovation in a field where the U.S. holds a strategic advantage.
A Call for Collective Action
Recognizing the need to put parameters around this technology, the panelists called for collective action, urging accountability for the tech giants that facilitate deepfake abuse content and advocating for revisions to Section 230 to introduce duties of care and regulatory frameworks. Section 230, enacted in 1996, generally shields participants in the Internet ecosystem, whether service providers or individual users, from liability for illegal content posted online by others.
As generative AI capabilities continue advancing, the risks posed by deepfakes will intensify. Addressing this challenge requires a multifaceted approach involving technological safeguards, improved literacy and training, and a cultural shift toward greater accountability and responsible norms.
Supporting You on Your RAI Journey
Looking to stay informed about regulatory updates and learn how your organization can proactively prepare for upcoming AI regulation? RAI Institute invites new members to join us in driving innovation and advancing responsible AI. Collaborating with esteemed organizations, RAI Institute develops practical approaches to mitigating AI-related risks and fosters the growth of responsible AI practices through our AI assessments and certification program.
About Responsible AI Institute (RAI Institute)
Founded in 2016, Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
Nicole McCaffrey
Head of Marketing, Responsible AI Institute
nicole@responsible.ai
+1 (440) 785-3588
Follow Responsible AI Institute on Social Media
X (formerly Twitter)