Part I
Co-Authored by Michael Chapman and Hadassah Drukarch
As jurisdictions across the Atlantic adjust their policies, the U.S. is also evolving to meet expectations for trustworthy and responsible AI. Amid these changes, organizations adopting AI are pondering which sources and experts to trust and how to prioritize evolving policies. They need to understand how to respond to these shifting tides and adapt to changing requirements with minimal operational and reputational disruption.
This blog explores how organizations can navigate the evolving U.S. AI policy landscape and sheds light on the influence of state, federal, and international dynamics on policymaking. While a single post cannot fully demystify AI policy, these insights will guide organizations to understand ongoing developments and their interconnections. A close eye on these dynamics can help them anticipate what is around the corner and prepare to adapt.
Focusing on one jurisdiction is not enough
With the “race to regulate” AI in full swing, many jurisdictions are crafting policies to address its ethical, legal, and social implications. This regulatory effort spans local, national, and international levels, forming a complex web of rules that organizations must navigate. Doing so can seem daunting or even impossible.
The sheer volume and pace of regulatory developments make it difficult for any entity to keep up. Depending on their maturity and regulatory experience, organizations may struggle to prioritize the most critical actions amid an abundance of information. Moreover, these regulations are often interlinked, with changes in one jurisdiction triggering adjustments in others. To manage this complexity, organizations might benefit from focusing on key jurisdictions or proposals and leveraging resources like the IAPP’s U.S. State AI Governance Legislation Tracker, White & Case’s AI Watch: Global Regulatory Tracker, the Emerging Technology Observatory’s AI Governance and Regulatory Archive (AGORA), and Purdue University’s Governance and Responsible AI Lab (GRAIL).
Focusing on policies in your organization’s immediate vicinity is insufficient; some regulations have cross-jurisdictional impacts, and even finely targeted policies can create domino effects that permeate other regions. For these reasons, organizations must widen their vision to anticipate future developments, enhance their AI governance capacity, and establish practices that ensure their AI progresses with proper guardrails in place.
State policy can start domino effects and force federal action
Anyone with experience in U.S. state policy will attest that state dynamics can spill across borders. For example, states often adopt ‘model legislation’ from each other, facilitated by executive agencies, advocacy groups, legislators, or cross-state organizations like political parties and industry associations. Once an idea is introduced and legitimized, it can quickly proliferate. For instance, Hawaii set a 100% renewable energy goal in 2015, and within a few years, 24 states, D.C., and Puerto Rico had followed suit. Similarly, early AI regulations in states like California, Colorado, New York, and Virginia are likely to inspire others. In fact, this year a majority of U.S. states have introduced AI bills, and some are adopting resolutions or enacting legislation.
Adopting policies from other states offers several benefits: it grants legitimacy, aligns states sharing similar priorities, and can even stoke competitive spirit – “if state ‘X’ can do this, why can’t we?” It is also easier than creating new regulations from scratch. This means early movers can have tremendous influence as other states look for inspiration. For example, states looking to regulate AI will likely consider Colorado’s definitions of algorithmic discrimination.
Given this dynamic, states can set the tone for national AI regulation and even prompt federal action. Absent comprehensive federal AI legislation, states may erect a fragmented patchwork of incompatible definitions and conflicting requirements. As companies face impossible compliance demands or threaten to leave certain states because of those burdens, Congress might step in for consistency’s sake, using state policies as a guide.
Federal policies override state decisions and reflect international competition
While states can influence federal action, federal policies hold supremacy over state and local policies. They can set minimum standards, ensure consistency, or sometimes complicate or undermine state initiatives. Federal policy extends beyond legislation to include regulations, executive orders, agency guidelines, and other governmental actions that implement and enforce laws.
While Executive Orders have significantly impacted AI governance, policies from individual agencies, such as administrative rules and organizational reforms, are also influential. These structural changes reflect agency priorities and establish processes for industry and government engagement. Unlike broad Executive Orders, agency policies address specific aspects of AI implementation and regulation. For example, the Federal Trade Commission has addressed AI-related concerns by prohibiting digital impersonation and soliciting comments on handling AI platforms used for impersonation.
Federal courts, including the Supreme Court, can also enter the fray, especially where constitutional protections or court precedents are challenged. Recently, the Supreme Court overruled the Chevron doctrine, which required federal courts to defer to federal agencies’ interpretations of ambiguous laws. Though it is unclear how this ruling will play out in practice, agencies now face reduced flexibility in interpreting the policies they are charged with executing. This will prove particularly challenging in new regulatory fields like AI, where the ruling could make it harder to introduce federal requirements for AI use.
Lawmakers, often generalists, juggle numerous issues from education, environment, and health to consumer protection, defense, and technology. They are well-equipped to set direction through policy but often defer to experts within federal agencies to settle technology-specific details. Though there is currently no comprehensive federal law covering AI systems, reduced agency leeway may complicate efforts to pass federal AI legislation, opening the floor to state laws as important sources of guidance on AI governance, since neither states nor their agencies are affected by the Court’s ruling.
However, even organizations that follow every piece of state and federal policy may still have blind spots. For instance, the EU’s General Data Protection Regulation (GDPR) mandates certain safeguards for companies handling the personal data of individuals in the EU, affecting U.S. companies regardless of location. Additionally, national policies in other regions can ‘punch up’ to international levels, as seen with industrial and environmental policies driven by international competition, trade, and geopolitics.
This all shows that national approaches to AI regulation cannot be considered in a vacuum. Policies may stem from ethical or legal reasoning, but they can also reflect a country’s tech industry interests or trade relationships. When new policies emerge, it is therefore essential to consider the regional motivations behind them, as such policies may proliferate to other regions with similar values or parallel tactical goals.
International policies forge alliances and seek cooperation and consistency
Global dynamics often exert pressure on federal policy efforts, but they can also mitigate competitive pressures and foster mutual benefits. By setting standard definitions, requirements, and auditability conditions for AI systems, governments can better address AI’s adverse impacts while providing organizations with the certainty needed for effective AI adoption. This alignment helps countries with different priorities and regulatory dynamics create more coordinated approaches to AI. For example, the EU AI Act reaches beyond EU borders and impacts organizations elsewhere, including in the United States. Through what is commonly termed the “Brussels effect,” EU policies can influence governments around the world, as seen in the spread of the EU’s risk-based approach to AI regulation.
However, many international policies lack the enforcement authority of state or federal regulation, manifesting instead as informal guidance, plans, or recommendations, also known as “soft law.” One example of such an international soft law mechanism is the OECD’s Principles for Trustworthy AI, initially adopted in 2019 and updated in May 2024, which guide AI actors and help policymakers create effective AI policies, fostering interoperability across jurisdictions. Soft law can appear at the national level too: the National Institute of Standards and Technology (NIST) AI Risk Management Framework in the United States, though not legally binding, is an important predictor of future policy developments and helps companies align with expected legal requirements. This is evident in the recently enacted Colorado AI Act, which follows core principles from the NIST framework, including guidance and standards for AI design, development, deployment, and testing. Despite their largely non-binding nature, soft law instruments for AI governance can thus have significant downstream policy impacts.
Effectively navigating U.S. AI policy requires a proactive responsible AI approach
In today’s evolving U.S. AI policy landscape, it is essential for organizations to stay ahead of changing policies and legal obligations by tracking developments at the state, federal, and international levels. Given the dynamic nature of AI regulation, however, simply monitoring these changes is insufficient. To future-proof how they adopt and use AI, organizations must take a proactive responsible AI approach: implementing best practices and policies to strengthen their responsible AI maturity, engaging with industry experts and stakeholders, and investing in continuous education and training for their workforce. By staying informed and proactive, organizations can ensure they are prepared to meet new requirements as they arise.
In this regard, RAI Institute’s conformity assessments and certification program offer organizations tools for validating that they fulfill applicable requirements at both the organizational and system levels. These tools provide evidence of regulatory alignment and build trust and credibility with stakeholders. By integrating comprehensive monitoring, proactive governance, and rigorous validation processes, organizations can navigate the complex web of AI regulations effectively, ensuring they remain compliant, competitive, and well-prepared for the road ahead.
Want to receive Part II direct to your inbox? Sign up here.
Supporting you on your responsible AI journey
Stay tuned for our next blog, where we will delve deeper into the role of conformity assessments and certifications as means to move beyond mere checkbox compliance and legal minimums to foster robust responsible AI practices.
Looking to stay informed about regulatory updates and learn how your organization can proactively prepare for coming AI regulation? Join our RAI Hub community or become a member, and don’t miss our upcoming webinar on September 18, 2024, “Making Sense of the U.S. AI Regulatory Landscape,” which will offer expert insights on tracking these developments and on what AI regulation in the U.S. may look like in the future.
About the Responsible AI Institute
Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
Nicole McCaffrey
Head of Marketing, Responsible AI Institute
+1 (440) 785-3588
Follow Responsible AI Institute on Social Media
X (formerly Twitter)