Author: Sez Harmon, AI Policy Analyst, Responsible AI Institute
For perhaps the first time in U.S. history, the 2024 election cycle features AI-powered tools as standard, widely available resources for campaign leaders. The evolution of AI from narrow applications to generative AI (GenAI) is fundamentally reshaping political playbooks. Yet as the value of public AI systems rises, heightened risks are emerging alongside it. Today, campaign strategists struggle to determine what responsible AI use looks like in politics, while Americans are burdened with separating fact from sophisticated, AI-generated misinformation. These developments show how political campaigns are taking new shape in 2024 and how democracy is shifting in the age of AI.
AI as a Mainstream Campaign Ally
Global campaigns are using GenAI to personalize voter outreach, tailor speeches and marketing materials, and amplify election narratives. These tools can also support predictive analysis, helping candidates decide where to focus advertising, fundraising, and campaign tours to sway voters in key jurisdictions. Campaigns also leverage AI to prepare politicians for election debates and speaking engagements. For example, GenAI platforms can be used to practice responses to hard-hitting political questions, give candidates role-play opportunities to field insults in the style of their opponents, and even provide advice on how to delicately address topics to maximize voter support.
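As one illustration of the debate-practice use case, the sketch below builds a role-play prompt for a chat-based GenAI model. It assumes the OpenAI Python client purely as an example of such a platform; the opponent persona, question, and model name are hypothetical, and any comparable chat-based system could be substituted.

```python
# Sketch of GenAI-assisted debate practice; assumes the OpenAI Python client
# (pip install openai) as one example of a chat-based GenAI platform.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def debate_drill(candidate_position: str, opponent_style: str, question: str) -> str:
    """Ask the model to role-play a hostile debate exchange so the candidate can rehearse."""
    messages = [
        {"role": "system",
         "content": (f"You are role-playing a debate opponent who argues in the style of: {opponent_style}. "
                     "Pose sharp follow-ups and critiques, then suggest how the candidate could respond.")},
        {"role": "user",
         "content": f"My position: {candidate_position}\nModerator question: {question}"},
    ]
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

# Example drill (hypothetical position and question):
# print(debate_drill("Expand rural broadband funding",
#                    "a fiscally conservative incumbent",
#                    "How will you pay for this without raising taxes?"))
```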
Additionally, a crucial benefit of using AI in political campaigns is that these systems collate information and election resources at a speed and depth that unassisted political aides find hard to match. Some models are trained on years of political strategy data and media coverage that would be arduous for a team of human analysts to parse. These tools are also often free or carry small licensing fees, which makes them accessible to groups across the political spectrum.
However, using AI responsibly in campaigns requires political strategists to deploy these systems ethically and to align their use with democratic values. A common question for campaign teams is how that goal can be practically achieved. A responsible AI approach balances the technological advantages of AI with the imperative to prevent public harm and uphold trust in our political processes. Specifically, this approach requires:
- Accountability and Transparency: Campaigns should clearly disclose when AI-generated content is being used, especially in ads, speeches, or social media posts. This practice includes labeling AI-generated visual and auditory content so voters are aware of the content’s origins (a minimal labeling sketch follows this list). These steps help campaigns avoid misleading the public with synthetic media that mimics real events or people, fostering trust in democratic elections.
- Privacy-Enhanced Data Collection and Use: AI campaign tools often rely on voter data for outreach targeting and message personalization. Responsible use of AI requires that all voter data be gathered and used in compliance with data privacy laws and leading guidance, such as the National Institute of Standards and Technology’s AI Risk Management Framework. Campaigns should avoid using AI to micro-target voters in manipulative, ill-intentioned ways, especially through emotionally charged content. Free and fair elections require the public to be free from voter coercion, including coercion carried out with AI tools.
- Bias Mitigation: AI systems can reproduce unfair biases from their training data, which can lead to discriminatory targeting in campaign advertisements or voter outreach and support. Campaign strategists should actively monitor for these biases in voter datasets and ensure that all demographic groups are treated equitably in campaign messaging strategies. Bias should also be managed in voter education strategies so that certain groups are not marginalized. For example, AI tools used to educate voters on political topics, like chatbots, should be accessible to individuals with disabilities, underserved communities, and speakers of different languages.
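To make the transparency practice above concrete, here is a minimal sketch of how a campaign tool might attach a plain-language disclosure to AI-drafted ad copy before publication. The `Disclosure` record, label text, and review check are hypothetical illustrations, not a standard schema or any specific platform’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical disclosure record; not a standard schema.
@dataclass
class Disclosure:
    tool_name: str        # e.g., the GenAI model used to draft the copy
    generated_at: str     # ISO timestamp, kept for auditability
    human_reviewed: bool  # whether a staffer approved the content

DISCLAIMER = "This content was created or edited with the help of AI."

def label_ai_content(body: str, disclosure: Disclosure) -> str:
    """Append a plain-language AI disclosure to outgoing campaign copy."""
    if not disclosure.human_reviewed:
        raise ValueError("AI-generated copy should be reviewed by a staffer before release.")
    return f"{body}\n\n{DISCLAIMER} (drafted with {disclosure.tool_name}, {disclosure.generated_at})"

if __name__ == "__main__":
    ad = "Join us Saturday to talk about clean energy jobs in our district."
    meta = Disclosure(
        tool_name="ExampleGenAI",  # hypothetical model name
        generated_at=datetime.now(timezone.utc).isoformat(timespec="seconds"),
        human_reviewed=True,
    )
    print(label_ai_content(ad, meta))
```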
Deepfakes: Public Tools or Public Nuisance?
Deepfakes, which are AI-generated photos, videos, and audio recordings intended to look or sound like real people, are commonly associated with malicious actors and false information. While they have the potential to serve as educational resources and training tools, deepfakes are more commonly used to spread misleading narratives about candidates and voting processes during elections. Also, human audiences still struggle to discern deepfakes from live coverage and authentic content; the advanced state of synthetic voice generation and AI-based video editing, along with the subtle manipulation of media through face swapping and lip syncing, has made this challenge even more pronounced.
To mitigate the mistrust deepfakes can engender in voters, campaign leaders need to adopt proactive strategies to counteract the influence of this media. Rapid-response teams dedicated to identifying and debunking AI-manipulated content about political candidates can make or break their public image. Campaigns can also leverage deepfake detection tools to monitor and track such content, notifying social media platforms and news organizations when it gains traction; a rough sketch of this workflow follows. Lastly, it is vital for campaigns to educate their staff and their audiences about the risks of deepfakes, building media literacy that goes beyond content moderation.
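The sketch below illustrates the monitoring side of that rapid-response workflow: candidate-related media is sent to a deepfake-detection service and high-confidence hits are flagged for human review. The endpoint URL, response fields, and score threshold are all assumptions for illustration; a real deployment would use whichever detection vendor or open-source model the campaign has vetted.

```python
import requests

# Hypothetical detection service; the URL, payload, and response schema are
# illustrative placeholders, not a real vendor API.
DETECTION_ENDPOINT = "https://detector.example.org/v1/analyze"
REVIEW_THRESHOLD = 0.85  # assumed confidence above which a human reviewer is alerted

def triage_media(media_urls: list[str]) -> list[dict]:
    """Send candidate-related media to the detection service and collect likely deepfakes."""
    flagged = []
    for url in media_urls:
        resp = requests.post(DETECTION_ENDPOINT, json={"media_url": url}, timeout=30)
        resp.raise_for_status()
        result = resp.json()  # assumed shape: {"manipulation_score": float}
        if result.get("manipulation_score", 0.0) >= REVIEW_THRESHOLD:
            flagged.append({"media_url": url, "score": result["manipulation_score"]})
    return flagged

# Flagged items would then go to the rapid-response team, which decides whether to
# debunk publicly and notify the hosting platform or newsrooms covering the story.
```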
Yet many countries are only just starting to address deepfakes, and the U.S. is no exception. Although nineteen U.S. states have passed deepfake-related laws so far, most of these laws focus on obscenity rather than politics. Here’s a look at several measures that have been passed to address growing concerns about these AI tools:
- In September 2024, California approved amendments to its Elections Code to restrict political deepfakes online. These statutes require tech platforms to remove and report deceptive, digitally altered election content. Florida likewise requires political advertisements and electronic communications to include disclaimers when they contain AI-generated content.
- This past September, the Federal Trade Commission (FTC) finalized a new rule under the FTC Act that extends liability to organizations that provide AI tools used to create deepfakes of government officials and businesses. If an organization provides the means for users to impersonate government officials or businesses for financial gain, and knows that its AI system is being used for these purposes, it could be held liable for fraud.
- The Deepfake Report Act of 2019 requires the U.S. Department of Homeland Security’s Science and Technology Directorate to report regularly on advancements in digital content forgery technology.
- The DEFIANCE Act of 2024 aims to strengthen protections for individuals affected by intimate digital forgeries made without their consent. It is one of the few deepfake bills gaining traction and has passed the Senate, likely because it focuses on obscenity.
In general, it can be difficult for everyday individuals to detect deepfakes, especially as AI technologies advance. While regulation lags behind in addressing the impact of deepfakes on elections, voters should keep the following tips in mind to help identify AI-generated content in their newsfeeds:
- Monitor Visual Inconsistencies: Deepfake technologies often struggle to capture fine details of human behavior and can exhibit imperfect blinking patterns, unnatural body movements, or unusual facial expressions. Although deepfake “tells,” such as unrealistic facial symmetry, constantly shift, noticing an unusual human behavior or trait in a visual resource can be the first hint that the content is synthetic media.
- Check Verified Sources: Before sharing viral media content, trace it back to its original source. Trusted media outlets and fact-checking organizations can be very helpful in separating AI-generated content from authentic news coverage, and simple provenance checks, like the metadata sketch after this list, can add another weak signal.
- Consider Timing: If controversial video or audio surfaces immediately before a major political event, use extra caution before believing or spreading its contents. Deepfakes are often released at politically significant moments to exert a bigger influence over public opinion.
- Use Deepfake Detection Tools: There are open-source tools and browser extensions designed to help the public detect deepfakes. Organizations like Deepware or platforms like Microsoft’s Video Authenticator can help analyze content for signs of manipulation.
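As a small companion to the source-checking tip above, the sketch below reads an image’s EXIF metadata with the Pillow library. Missing capture details or an editing-software tag are weak signals, not proof, that a file is not an original camera capture, since platforms routinely strip metadata on upload; the specific tags inspected here are illustrative.

```python
# Weak provenance heuristic: inspect EXIF metadata with Pillow (pip install Pillow).
# Missing or edited metadata is only a hint, never proof of manipulation.
from PIL import Image
from PIL.ExifTags import TAGS

def provenance_hints(path: str) -> list[str]:
    """Return human-readable notes about an image's EXIF metadata."""
    exif = Image.open(path).getexif()
    if not exif:
        return ["No EXIF metadata found; the file may have been re-encoded, stripped, or generated."]
    fields = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    notes = []
    if "Software" in fields:
        notes.append(f"Edited or exported with: {fields['Software']}")
    if "DateTime" not in fields:
        notes.append("No capture timestamp present.")
    if "Model" not in fields:
        notes.append("No camera model recorded.")
    return notes or ["Basic capture metadata present; still verify the original source."]

# Example: print(provenance_hints("viral_clip_frame.jpg"))
```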
Humans Are the Champions Behind AI
Keeping these AI developments in mind, it’s important to note that human-led analysis is what makes AI tools truly valuable. AI-enabled political teams check the veracity of information provided by GenAI applications and refine political messaging to align with candidate platforms. These critical human checks can make the difference between a campaign based on falsehoods or weak messaging and one built on well-researched concepts, human-centered voter engagement, and savvy political advertising. Without a human in the loop, the scale of information processing achieved with AI can come at the cost of political integrity and effective persuasion.
AI is a Double-Edged Political Sword
The use of AI in political campaigns is a double-edged sword: integrating AI tools can be a positive development, but these systems can clearly be used in harmful ways. There are currently few established guidelines, in the U.S. or elsewhere, for how AI should be used in elections, and scant legal restrictions on the use of artificially generated content in campaigns. Under these circumstances, the stability of democratic processes will erode if elections are decided by manipulated content and data-driven propaganda. Safeguards need to be put in place to protect voters and limit the influence of AI over public opinion. Through responsible AI practices that emphasize human understanding, equity, resilient design, privacy, and security, we can better communicate our ideas and political aspirations with AI tools.

About the Responsible AI Institute
Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, Genpact, KPMG, Kennedys, Ally, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
Nicole McCaffrey
Head of Strategy & Marketing, Responsible AI Institute
nicole@responsible.ai
+1 (440) 785-3588