Biden’s EO on AI is a Signal to Industry and Global Partners
The White House’s Executive Order (EO) on Artificial Intelligence, dated October 30, 2023, is the Biden administration’s most significant AI policy instrument yet. Prior efforts, such as the Blueprint for an AI Bill of Rights, the Voluntary Commitments on AI, and the NIST AI RMF, have been non-binding. By contrast, as a directive from the President to the executive branch, the EO has the force of law.
Yet the Biden administration’s ambitions with the EO extend beyond the executive branch or even the US Government. The EO signals the administration’s regulatory approach to industry and is meant as a model for AI regulation globally.
Two primary takeaways for industry are that:
The NIST AI RMF and its related frameworks, resources and evaluation approaches have been elevated in significance, both in the US and globally, and
The US government intends to provide ongoing, timely and relevant guidance on the use of general-purpose models.
Specifically, among other requirements, the EO:
Requires companies that build powerful, general-purpose models to rigorously test them and report the results to the federal government,
Requires organizations that provide cloud computing infrastructure, such as AWS, Azure and GCP, to report on foreign persons who use large amounts of compute for potentially malicious activity,
Requires the Department of Commerce to report on approaches to determining the provenance of content, such as watermarking,
Directs NIST to develop a version of its AI RMF for generative AI and to develop approaches for evaluating such models,
Directs agencies to take specific actions to make it easier for AI experts to immigrate to the US,
Requires further consideration of how to prevent AI discrimination in employment, education, housing and other contexts, and
Creates new structures for AI coordination in the government and directs a surge in AI-related hiring and upskilling within the government.
The substance and timing of the EO suggest that the administration is putting forth its approach as a model for AI regulation globally. It was issued on the same day as the G7’s release of an AI Code of Conduct, which reflects a similar approach, and two days before the UK AI Safety Summit, at which Vice President Kamala Harris showcased it as a blueprint for AI regulation in other jurisdictions. Some commentators see the EO as a game changer: prior to its issuance, the US government lagged jurisdictions as varied as the EU, UK, China, Singapore and Canada in outlining a clear regulatory approach.
However, the Biden administration’s reliance on an EO, rather than legislation, to advance its approach to AI regulation reflects the lack of broad, bipartisan support in Congress for the administration’s approach to AI regulation or to related issues like privacy. The AI community can continue to expect that federal agencies, rather than lawmakers, will take the lead in voicing and enforcing the administration’s AI priorities.
About The RAI Institute
The Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. The RAI Institute’s conformity assessments and certifications for AI systems support practitioners as they navigate the complex landscape of AI products.
For all media inquiries, please contact Nicole McCaffrey, Head of Marketing & Engagement, at firstname.lastname@example.org.