America’s Approach to Governing AI

Despite increasing political polarization, federal lawmakers generally agree on seven major challenges posed by AI. They are less aligned on how to address these challenges.

1 Psychology – AI systems that interact with people can threaten individual wellness. For example, AI systems with access to large amounts of information about a person can influence that person’s behavior, purchases, political opinions or self-esteem.

2 Prosperity – Since AI is a general-purpose technology that will transform multiple industries, the US is making significant investments in AI-related basic and applied research to remain globally competitive. Additionally, it is upskilling workers and seeking to attract highly skilled workers to the US or to employ them in American companies abroad.

3 Defense – International security competition is increasingly focused on speed and connectivity. So, the US government is developing defensive AI systems that respond to high-speed events and promoting interoperability within the government and with the private sector.

4 Bias – Biased AI systems can discriminate against people based on their backgrounds. On social media, AI systems can systematically amplify the voices of people with favored political and social opinions and silence those of others.

5 Biometrics – AI systems that use face or behavior recognition are controversial and, in many jurisdictions, strictly regulated.

6 Inequality – An organization that develops the most effective AI system for a specific purpose can render competitors’ systems obsolete. The US government is seeking to combat this winner-take-all dynamic to ensure that AI-generated wealth is shared widely.

7 Finance – AI-driven high-frequency trading can dramatically destabilize financial markets and harm individual investors. Since the flash crash of 2010 – in which the stock market temporarily lost roughly a trillion dollars of market value in less than an hour – the federal government’s regulatory and enforcement actions have reduced the volume of high-frequency trading as a percentage of total trading volume.

Federal lawmakers are starting to address the challenges posed by AI through measures like investing in AI, curtailing the power of large technology companies, reining in social media and promoting defense data sharing. The White House’s Office of Science and Technology Policy is studying the use of biometrics for “identity verification, identification of individuals, and inference of attributes including individual mental and emotional states.” Federal agencies are determining how to subject AI systems to anti-discrimination laws, like the Fair Credit Reporting Act, the Civil Rights Act (Title VII) and the Patient Protection and Affordable Care Act (Section 1557).

Despite these nascent efforts, America’s approach to regulating AI is characteristically laissez-faire by global standards, relying primarily on courts to determine how existing regulations apply to AI. Federal regulation modeled on the risk-based approach taken in the proposed EU Artificial Intelligence Act is unlikely. However, the EU approach is still relevant to American companies and will affect them in three ways.

First, as American companies are developing their responsible AI programs, they are incorporating – and contributing to – private governance mechanisms, such as standards and certifications. These mechanisms generally reflect a risk-based approach grounded in specific use cases, which is similar to the EU approach.

Second, the Federal Trade Commission (FTC) – a leading federal agency on privacy matters – is also likely to play a major role in overseeing AI. Since the EU GDPR is the de facto global privacy standard, FTC Commissioners are familiar with EU regulatory developments, including those related to AI. Third, influential state and local jurisdictions – like New York City and California – are likely to eventually enact AI laws that incorporate elements of the EU approach.

The common belief that American lawmakers do not understand the challenges posed by AI is outdated. Rather, the parties have not aligned on how to address many of these challenges. Until they do, courts, agencies, sub-national governments and private governance mechanisms will fill in the gaps.
