By Aaron Arnett
Following our recent webinar, “Responsible AI in 2025: What’s Real, What’s Next, and What Matters,” we wanted to explore in greater depth how the United States has taken significant steps in artificial intelligence (AI) governance, an effort spanning multiple administrations and numerous federal agencies. During President Biden’s tenure, the nation issued its first major AI directives, establishing a foundation for future AI regulation. Key initiatives included the Blueprint for an AI Bill of Rights (2022), issued by the White House Office of Science and Technology Policy (OSTP); Executive Order 14110 (2023) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence; and the National Security Memorandum on Artificial Intelligence (2024).
These directives were not developed in isolation. Beyond the White House, efforts to regulate AI have unfolded at local, national, and international levels, creating a complex regulatory environment for organizations to navigate. Looking ahead, future AI regulation under President Trump will likely build on the work of the same government offices and departments that shaped the U.S. AI regulatory landscape under President Biden, while incorporating new personnel appointed by the incoming administration.
This blog post will explore the existing AI directives, the key AI figures in the Biden administration, the potential players in President Trump’s administration, and what these developments mean for the future of AI governance.
President Biden’s AI Directives
President Biden’s administration introduced three major AI directives: the AI Bill of Rights, Executive Order 14110, and the National Security Memorandum.
In October 2022, the OSTP issued the AI Bill of Rights, a non-binding document outlining guidelines for the responsible use of AI. It emphasized five key principles: AI systems should be safe and effective; they must not discriminate; they should protect data privacy; they should provide notice and explain how information is being used; and they should offer human alternatives when appropriate.
Beyond these formal directives, the Biden Administration sought to harness the promise of AI while addressing its risks to Americans’ rights and safety. In July 2023, it secured voluntary commitments from leading AI companies, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, to promote the safe, secure, and transparent development of AI technologies.
In October 2023, President Biden issued Executive Order 14110, launching a government-wide effort involving over 50 federal entities to address AI development and regulation. The order focused on key areas such as safety and security, privacy, and consumer protection. It also tasked the Office of Management and Budget (OMB) with establishing an interagency council to coordinate federal AI use and develop guidance on AI governance.
The National Security Memorandum, issued in 2024, focused on the U.S. government’s national security departments and agencies. It directed these entities to ensure the United States leads in the development of safe, secure, and trustworthy AI; to leverage advanced AI technologies to enhance the national security mission; and to advance international consensus and governance around AI. In this way, the memorandum complemented the Executive Order, which had concentrated on agencies outside the national security apparatus.
Notably, none of these directives introduced specific regulations for AI companies. Instead, they provided best practices for responsible AI use and tasked government departments and agencies with evaluating and potentially developing regulations as needed.
A Look at the Important AI Government Departments
The Biden Administration’s directives were the result of significant interagency collaboration. Many government departments contribute to AI regulation, but two key players are the OSTP and the Department of Commerce, which houses both the National Institute of Standards and Technology (NIST) and the AI Safety Institute.
The OSTP, established in 1976, advises the president on policy formation and budget development. The OSTP director, who also serves as the president’s assistant and chief advisor on science policy, plays a critical role in shaping AI policy due to their proximity to the president. This central position ensures OSTP’s influence on all major AI initiatives and directives.
The Department of Commerce is the primary Cabinet-level department focused on AI regulation. NIST’s mission is to advance measurement science, standards, and technology to enhance economic security and improve quality of life. As part of this mission, NIST has released several reports identifying critical areas for AI standardization. It set a benchmark for AI risk management efforts by releasing the AI Risk Management Framework and accompanying Playbook in early 2023. In 2024, NIST further advanced its efforts with the release of its Generative AI Profile, offering specific guidelines for managing the unique risks associated with generative AI technologies.

In that same year, the Biden Administration expanded NIST’s capacity by establishing the AI Safety Institute within the agency. The institute is tasked with providing the federal government with the technical expertise to evaluate cutting-edge AI systems, ensuring they meet safety, security, and trustworthiness standards. Although still in its infancy, the AI Safety Institute is well-positioned to play a pivotal role in shaping the future of AI governance and policy in the United States.
Who’s Who in President-Elect Trump’s Administration
All departments and agencies involved in AI regulation under the Biden Administration are expected to continue contributing under the Trump administration, albeit with new personnel and potentially shifting priorities.
President Trump has already announced nominations for several key AI leadership roles and introduced new positions to address emerging challenges. For the role of OSTP director, his chief science advisor, he has nominated Michael Kratsios, who previously served as U.S. Chief Technology Officer, a role housed within OSTP, during Trump’s first term. Kratsios is known for his pro-innovation, minimally regulatory approach to AI policy. In the private sector, he was managing director at Scale AI, an AI infrastructure start-up, bringing industry experience that may shape his approach to AI governance. Supporting him as counselor to the OSTP director will be retired computer scientist Lynne Parker of the University of Tennessee, Knoxville. Parker, who led national AI policy efforts at OSTP from 2018 to 2022, brings deep technical expertise and a strong track record in AI policy development. Together, Kratsios and Parker are likely to play instrumental roles in shaping AI policy under the Trump administration.
In addition to formal appointments, President Trump has named Silicon Valley entrepreneur David Sacks as his AI & Crypto Czar, an advisory role outside the government. Sacks, who has expressed concerns about overregulation of AI companies and potential censorship, is expected to advocate for a less restrictive regulatory environment. However, as an advisor without formal authority over government personnel, Sacks’ influence on AI policy will largely depend on his ability to forge strong connections with key decision-makers.
At the Department of Commerce, President Trump has nominated Howard Lutnick, CEO of the financial services firm Cantor Fitzgerald, to lead the department. Lutnick lacks direct experience in AI industries, leaving questions about how AI regulation will evolve under his leadership. Furthermore, as of early January 2025, Trump has yet to nominate leaders for NIST or the AI Safety Institute. This leaves the Department of Commerce’s position on AI policy — along with its critical internal units like NIST and the AI Safety Institute — uncertain at this stage.
Future of AI Policy under President Trump
Like President Biden’s administration, President Trump’s administration is expected to place significant emphasis on AI policy. Based on the views of his OSTP director nominee, Michael Kratsios, and AI & Crypto Czar, David Sacks, Trump is likely to pursue a less stringent regulatory approach compared to his predecessor. For instance, guidelines aimed at mitigating AI “bias” — which some members of Trump’s team view as a form of censorship — may be deprioritized or removed altogether.
The Trump administration is also expected to prioritize policies that bolster America’s position as a global leader in AI. Rather than focusing heavily on regulation, the administration may adopt strategies designed to accelerate innovation and strengthen the competitiveness of American AI companies on the global stage.
President Biden’s administration marked the first presidency to place AI policy at the forefront of its agenda, setting a precedent for its successors. While Trump’s approach may diverge in scope and focus, the emphasis on AI as a critical area of governance and strategy will undoubtedly continue.
Supporting you on your responsible AI journey
Looking to stay informed about regulatory updates and learn how your organization can proactively prepare for the new administration’s AI regulatory trajectory? Join our RAI Hub community or become a member. And if you missed our recent webinar, “Responsible AI in 2025: What’s Real, What’s Next, and What Matters,” you can watch it on demand now.
About the Responsible AI Institute
Founded in 2016, the Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact
Nicole McCaffrey
Head of Marketing, Responsible AI Institute
+1 (440) 785-3588
Connect with RAI Institute