Introducing the Responsible AI Top-20 Controls

Responsible AI Institute Stewards the RAI Top-20 Controls

Navigating today's landscape, where AI seems omnipresent, while balancing the need for speed, innovation, competitive edge, and responsible practice can be daunting. Sifting through the abundance of information and directives to determine the most critical actions, and where to begin, is especially hard in an emerging field. Enter the Responsible AI Top-20 Controls.
Inspired by similar initiatives in cybersecurity, the Responsible AI Top-20 Initiative was created to provide an open, simple, relevant, thorough, and current set of Controls for users and managers of AI, teams responsible for AI strategy and governance, and Responsible AI practitioners themselves.

The Top-20 Controls were created to quickly and easily jumpstart an organization’s AI governance capacity, anchoring teams to a simple set of actions and best practices to ensure their AI moves forward with the right guardrails. The Controls answer the questions: “What do I do?” and “Where do I start?”

The Top-20 Controls are intentionally non-prescriptive in their implementation. They are intended to serve as a convening forum for RAI experts across organizations and society, providing the most necessary and relevant RAI implementation guidance through a community-focused approach.

Origin Story

Leaders from Booz Allen Hamilton, a leading provider of AI services to the federal government, and Mission Control AI recognized a real need across their respective client bases for more concrete guidance on implementing AI responsibly. This perspective was based on their work with both leading AI innovators and the U.S. federal government, which is arguably the most complex and scrutinized AI operating environment. Development of these proposed controls also emerged as an opportunity to collaborate closely with the Responsible AI Institute, given the group’s leadership in promoting many of these standards.

Workshops were convened at the Leaders in Responsible AI Summit (March 22, 2024), where delegates provided feedback and recommendations that further informed the current list of Controls. The Top-20 will be stewarded by the RAI Institute, governed by a soon-to-be-announced Governance Committee, and supported by an Industry Advisory Group.

The Controls

A small caveat: yes, there are only 15 controls! We worked with the community to establish what we consider to be 15 timeless Controls in a field characterized by change. But we also expect considerable technical developments over the next 18 months (including enhanced multi-modal capabilities and agentic and neuro-symbolic systems, for example) that will dictate what the remaining controls should be. This built-in flexibility is designed to let the list evolve in partnership with both the technology itself and critical feedback from users across sectors and regions as these controls meet reality.

  1. Engage Executives
  2. Align Organizational Values and Incentives
  3. Activate Your AI Governance Team
  4. Integrate RAI into Roles & Responsibilities
  5. Engage Employees
  6. Continuous Education
  7. Establish AI Risk Management Strategy
  8. Inventory Your AI Assets
  9. Conduct Impact Assessments
  10. Implement Adaptive Risk Controls
  11. Continuously Monitor Your AI Lifecycle
  12. Manage Third Parties
  13. Manage (Emerging) Regulatory Compliance
  14. Develop Incident Response Plan
  15. Engage Impacted Stakeholders

More details about the Top-20 Controls can be found in the RAI Hub. There is no cost to join.

Path Forward

In the coming months, implementation pathways will be defined and developed for the Controls. Later in 2024, during the first quarterly review, we will gather feedback from early adopters and adjust the Controls accordingly. We will maintain continuous, iterative development, adoption, and governance efforts, adapting as needed based on feedback and evolving best practices.

The Responsible AI Top-20 is your roadmap to starting this vital journey. In an ever-growing sea of policies and frameworks, we believe these controls will serve as our collective anchor point, orienting us to a core set of actions and best practices that will ensure a baseline level of maturity in our community’s work. Join us! 

The first 15 essential controls are available now, with 5 more on the horizon to address emerging AI developments.

Join the RAI Hub today to access:

    • The current Top 15 Controls, ready for immediate implementation
    • Detailed methodology behind the Controls
    • Comprehensive FAQ to guide your efforts
    • Future updates, including the additional 5 controls as they’re released

Be part of the community shaping responsible AI practices. Visit the RAI Hub now and start implementing these crucial controls.

About Responsible AI Institute (RAI Institute)

Founded in 2016, the Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact

Nicole McCaffrey

Head of Marketing, RAI Institute 

+1 (440) 785-3588

Follow RAI Institute on Social Media: X (formerly Twitter)