What to Expect from Biden on AI

The Biden administration’s AI efforts have matured considerably in 2022. Though sometimes described as slow and uneven, the administration’s methodical progress on AI is remarkable given the other challenges it faces, from inflation to Ukraine.

The administration’s AI efforts can be organized into four pillars:

1. International Cooperation – The Biden administration’s National Security Strategy (NSS), released in October, describes the administration’s intent to use American strengths in AI resources, investments, and talent to “anchor an allied techno-industrial base” and “rally like-minded actors.”

The administration understands that this will require give-and-take. Earlier this month, it declassified an intelligence assessment that describes how allies have become “more critical” of American policies on “the ethics of AI usage.”

The NSS identifies the U.S.-E.U. Trade and Technology Council (TTC) as a key forum for “coordinating approaches to setting the rules of the road on global technology… based on shared democratic values.” At its meeting on December 5, the TTC is expected to release a roadmap that prioritizes AI risk management and security.

2. Research – To ensure that AI research infrastructure is not exclusively accessible to a few large companies, Congress directed the Biden administration to create the National Artificial Intelligence Research Resource (NAIRR), a “shared research infrastructure providing AI researchers and students across all scientific disciplines with access to computational resources, high-quality data, educational tools and user support.” Later this month, the NAIRR Task Force will submit to the President its final recommendations on how to establish and sustain the NAIRR.

3. Leadership – To ensure that the United States Government’s AI plans and investments further its AI competitiveness and leadership, Congress directed the Biden administration to create the National Artificial Intelligence Advisory Committee (NAIAC), which advises the President on an ongoing basis. Since the Department of Commerce announced its members in April, the NAIAC has met twice.

Also within the Department of Commerce, the National Institute of Standards and Technology (NIST) is expected to release its AI Risk Management Framework 1.0 in January 2023.

4. Values – The Biden administration’s Blueprint for an AI Bill of Rights (AIBOR), released on October 4, outlines the Biden administration’s approach to responsible AI deployment that protects civil rights and embodies American values. While non-binding, its publication signals the White House’s intent to direct federal agencies and their suppliers to consider its five principles when using automated systems to deliver public services. Though the AIBOR does not meaningfully discuss several important issues, like AI use in products and devices or AI use by law enforcement, it outlines principles that federal agencies can apply to these contexts.

Since the AIBOR synthesizes inputs from impacted communities, academics, civil society partners, industry stakeholders, and policymakers, it also serves as an authoritative reference for state and local governments.

The administration considers the AIBOR a ‘Blueprint’ for federal agencies. Many federal agencies (including the Pentagon, the Consumer Financial Protection Bureau, Housing and Urban Development, the Federal Trade Commission, Health and Human Services, and the Equal Employment Opportunity Commission) have already announced or adopted policies or strategies that align with the AIBOR’s principles. The AIBOR may not have ‘teeth,’ but AIBOR-aligned federal regulations do.

The administration’s efforts along these four pillars suggest that it is advancing an ambitious but pragmatic, incremental, and sector-specific policy approach to AI.

In the next few months, these AI efforts will culminate in additional policies, strategies, enforcement, and implementation by federal agencies. The results of the administration’s engagement with the EU, including its efforts to navigate differences between the incremental American approach to AI regulation and the more comprehensive EU approach, will also become clearer.
