
Introducing the Responsible AI Top-20 Controls

Responsible AI Institute Stewards the RAI Top-20 Controls

Navigating today's landscape, where AI seems omnipresent, while balancing the need for speed, innovation, competitive edge, and responsible practice can be daunting. It is challenging to sift through the abundance of information and directives to determine the most critical actions and where to begin, especially in an emerging field. To address this challenge, we present the Responsible AI Top-20 Controls.

Overview 

Inspired by similar initiatives in cybersecurity, the Responsible AI Top-20 Initiative was created to provide an open, simple, relevant, thorough, and current set of Controls for users and managers of AI, teams responsible for AI strategy and governance, and Responsible AI practitioners themselves.

The Top-20 Controls were created to quickly and easily jumpstart an organization’s AI governance capacity, anchoring teams to a simple set of actions and best practices to ensure their AI moves forward with the right guardrails. The Controls answer the questions: “What do I do?” and “Where do I start?”

Implementation of the Top-20 Controls is intentionally non-prescriptive. The Controls are intended to serve as a convening forum in which RAI experts across organizations and society provide the most necessary and relevant RAI implementation guidance through a community-focused approach.

Origin Story

Leaders from Booz Allen Hamilton, a leading provider of AI services to the federal government, and Mission Control AI recognized a real need across their respective client bases for more concrete guidance on implementing AI responsibly. This perspective was based on their work with both leading AI innovators and the U.S. federal government, which is arguably the most complex and scrutinized AI operating environment. Development of these proposed controls also emerged as an opportunity to collaborate closely with the Responsible AI Institute, given the group’s leadership in promoting many of these standards.

Workshops were convened at the Leaders in Responsible AI Summit (March 22, 2024), where delegates provided feedback and recommendations that further informed the current list of Controls. The Top-20 will be stewarded by the RAI Institute, governed by a soon-to-be-announced Governance Committee, and supported by an Industry Advisory Group.

The Controls

A small caveat: yes, there are only 15 controls! We worked with the community to establish what we consider to be 15 timeless, unchanging Controls in a field characterized by change. But we also expect considerable technical developments over the next 18 months (including enhanced multimodal capabilities and agentic and neuro-symbolic systems, for example) that will dictate what the remaining controls should be. This built-in flexibility is designed to let the Controls evolve in partnership with both the technology itself and critical feedback from users across sectors and regions as these controls meet reality.

  1. Engage Executives
  2. Align Organizational Values and Incentives
  3. Activate Your AI Governance Team
  4. Integrate RAI into Roles & Responsibilities
  5. Engage Employees
  6. Continuous Education
  7. Establish AI Risk Management Strategy
  8. Inventory Your AI Assets
  9. Conduct Impact Assessments
  10. Implement Adaptive Risk Controls
  11. Continuously Monitor Your AI Lifecycle
  12. Manage Third Parties
  13. Manage (Emerging) Regulatory Compliance
  14. Develop Incident Response Plan
  15. Engage Impacted Stakeholders

More information and details about the Top-20 Controls can be found in the RAI Hub. There is no cost to join.

Path Forward

In the coming months, implementation pathways will be defined and developed for the Controls. Later in 2024, during the first quarterly review, we will gather feedback from early adopters and adjust the Controls accordingly. We will maintain continuous, iterative development, adoption, and governance efforts, adapting as needed based on feedback and evolving best practices.

The Responsible AI Top-20 is your roadmap to starting this vital journey. In an ever-growing sea of policies and frameworks, we believe these controls will serve as our collective anchor point, orienting us to a core set of actions and best practices that will ensure a baseline level of maturity in our community’s work. Join us! 

The first 15 essential controls are available now, with 5 more on the horizon to address emerging AI developments.

Join the RAI Hub today to access:

    • The current Top 15 Controls, ready for immediate implementation
    • Detailed methodology behind the Controls
    • Comprehensive FAQ to guide your efforts
    • Future updates, including the additional 5 controls as they’re released

Be part of the community shaping responsible AI practices. Visit the RAI Hub now and start implementing these crucial controls.

About Responsible AI Institute (RAI Institute)

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact

Nicole McCaffrey

Head of Marketing, RAI Institute

nicole@responsible.ai 

+1 (440) 785-3588

Follow RAI Institute on Social Media 

LinkedIn 

X (formerly Twitter)

Slack

YouTube

 
