From Compliance Checkbox to Best Practice: The Value of AI Impact Assessments  

Part Two

Co-Authored by Indira Patil and Hadassah Drukarch

AI impact assessments are becoming more widely adopted as both government and private organizations recognize the need to incentivize and assure responsible AI development and deployment. The widespread adoption of AI technologies across industries such as healthcare, finance, transportation, and law has driven significant gains in efficiency, innovation, and productivity. However, these advancements also bring heightened risks and safety concerns, spanning bias and fairness, data privacy and management, transparency and explainability, and inclusion and social justice.

As AI regulations evolve, organizations struggle to make sense of the emerging regulatory landscape and face the challenge of conducting effective risk assessments amid varying requirements. Given AI’s significant impact, robust assessments are essential to analyze and mitigate risks throughout its lifecycle. AI assessments must go beyond technical performance reviews to address issues such as the ethical implications of decision-making processes, accountability, and compliance with legal frameworks. By evaluating systems across their lifecycle, AI assessments:

  • Enable alignment with emerging regulatory mandates and industry best practices;
  • Provide organizations with a pathway to build trust with stakeholders;
  • Help prevent unintended consequences; and
  • Promote accountability and transparency.

As regulations evolve, thorough impact assessments will shift from being merely a legal requirement to a business necessity for responsible AI adoption.

Connecting the Dots Amid Regulatory Complexity and Fragmentation 

The AI regulatory landscape is rapidly changing, with new laws and frameworks emerging to address AI risks and safety challenges. Notable examples include the Office of Management and Budget (OMB) guidance linked to the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI issued in October 2023. Additional developments include the National Institute of Standards and Technology (NIST) AI Risk Management Framework, released in 2023, along with its recent Generative AI Profile. There is also the European Union (EU) AI Act, which entered into force in 2024, and various ISO standards such as ISO/IEC 42001:2023 and ISO/IEC 23894:2023. While the EU AI Act is legally binding, most of these frameworks and guidelines remain voluntary. Finally, state and local governments are also introducing AI regulations, such as Colorado’s Consumer Protections for Interactions with Artificial Intelligence Act, which imposes strict requirements on high-risk AI systems to protect consumers from foreseeable risks of algorithmic discrimination. This fragmented and decentralized regulatory environment poses challenges for organizations as they work to align their AI practices with diverse and evolving requirements.

Amid these varying requirements, organizations face the practical challenge of conducting effective impact assessments. The difficulty is heightened by uncertainty about how new laws will affect AI products, with Fortune 500 executives concerned about rising compliance costs, revenue losses, and penalties. The fragmented nature of AI regulation further complicates matters, as businesses must navigate diverse laws and sector-specific guidelines without a unified standard. In this landscape, organizations must operate amid increasing uncertainty while maintaining high standards of AI safety, security, and trustworthiness. Leveraging existing assessment frameworks and conducting thorough impact assessments can help identify risks and adapt to changing regulations, while demonstrating a commitment to responsible AI, building stakeholder trust, and setting industry best practices.

A Look Behind the Scenes: RAI Institute’s AI System Assessment

The Responsible AI (RAI) Institute offers targeted solutions to help organizations navigate the complex regulatory landscape surrounding AI. Central to these efforts are our assessments, which guide AI maturity at both organizational and system levels. Our System Assessment framework provides a comprehensive approach to evaluating an organization’s governance of AI use cases throughout their lifecycle, drawing from the NIST AI RMF, ISO 42001, and global regulations to translate complex requirements into clear, actionable controls for better risk management and compliance.

Figure 1. A Breakdown of the AI Lifecycle Stages

The assessment framework consists of five key components:

  1. Organizational Governance and Documentation: This first component provides a visual summary of the organization’s AI actors and their responsibilities during AI system scoping, development, deployment, and operation.
  2. Pre-Screening Risk Categorization: This section collates the system’s relevant context, including industry, geography, use case, actors, ecosystem role, level of human involvement, and third-party contribution, to inform the impact assessment that follows.
  3. Conformity Assessment: This phase evaluates how the development of the AI system aligns with its actual and potential impacts. It uses a set of binary criteria statements, organized sequentially across AI lifecycle stages and mapped to relevant regulatory frameworks (see the sketch after this list), to conduct a comprehensive impact analysis based on the system’s characteristics and uses. This analysis forms the basis for developing strategies for implementing effective risk management controls in the following stage.
  4. Risk Determination and Controls Implementation: This section identifies and analyzes operational risks related to the organization’s established practices and safeguards for the AI system. It includes strategies to mitigate these risks throughout all stages of the AI lifecycle, ensuring that risk is minimized by design. This approach helps organizations pinpoint their existing risk mitigation methods and guides them in creating a comprehensive risk and controls matrix that applies not only to the assessed use case, but also to any future AI initiatives.
  5. System-Level Documentation: This section complements the previous components by helping organizations identify the types of documentation that can support their risk mitigation efforts. Proper documentation is essential for ensuring accountability and auditability, and the guidance provided aims to help organizations align their documentation practices with current requirements, industry standards, and public expectations.
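
To illustrate how binary criteria statements of the kind described in the conformity assessment phase might be organized in practice, here is a minimal, hypothetical sketch in Python. The class names, field names, and framework references below are illustrative assumptions made for this post, not part of RAI Institute’s actual assessment tooling.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    SCOPING = "scoping"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    OPERATION = "operation"

@dataclass
class Criterion:
    # One binary criteria statement, mapped to a lifecycle stage
    # and to the framework clauses it draws on (illustrative fields).
    statement: str
    stage: LifecycleStage
    framework_refs: tuple[str, ...]   # e.g. ("NIST AI RMF MAP 1.1",)
    satisfied: bool | None = None     # None = not yet assessed

def conformity_summary(criteria: list[Criterion]) -> dict[LifecycleStage, float]:
    # Share of assessed criteria satisfied, per lifecycle stage.
    summary: dict[LifecycleStage, float] = {}
    for stage in LifecycleStage:
        assessed = [c for c in criteria
                    if c.stage is stage and c.satisfied is not None]
        if assessed:
            summary[stage] = sum(c.satisfied for c in assessed) / len(assessed)
    return summary

# Illustrative usage with made-up criteria statements.
criteria = [
    Criterion("Intended use and affected users are documented.",
              LifecycleStage.SCOPING, ("NIST AI RMF MAP 1.1",), satisfied=True),
    Criterion("Training data provenance is recorded.",
              LifecycleStage.DEVELOPMENT, ("ISO/IEC 42001 A.7",), satisfied=False),
]
print(conformity_summary(criteria))  # scoping: 1.0, development: 0.0
```

In a real assessment the criteria would be far more numerous and their evaluation more nuanced; the point of the sketch is that mapping each statement to a lifecycle stage and a framework clause makes coverage gaps straightforward to surface and report.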

A Holistic Approach to AI Governance Through AI Impact Assessments

RAI Institute’s System Assessment framework stands out for its adaptability and comprehensive nature, providing a robust baseline tool that can be tailored to meet the specific needs of organizations based on their use case, context, and jurisdiction. A notable example of this flexibility was demonstrated when we aligned the assessment framework with the OMB guidance to the Executive Order. During this process, we discovered that several critical components of our assessment framework — such as diverse team composition, detailed documentation, and contingency planning — are not explicitly required by the OMB guidance, but are essential for a holistic approach to AI governance. This highlights the value of utilizing the full scope of the framework to prepare for future regulatory developments, even when compliance requirements are less stringent.

Balancing the costs and benefits of performing AI assessments is a challenge for many organizations, but the advantages — such as reducing risks, preventing costly errors, and building trust with stakeholders — generally outweigh the expenses. RAI Institute’s flexible implementation options for the assessment framework enable organizations to align the assessment depth with their risk appetite and strategic goals. Moreover, its focus on comprehensive documentation ensures that organizations have a clear record of their AI systems, supporting regulatory compliance, stakeholder communication, and ongoing improvement. By leveraging this or similar assessment frameworks, organizations can take a significant step towards implementing responsible AI that meets ethical standards and regulatory requirements.

Supporting Your Responsible AI Journey

Want to get the latest insights into U.S. and Global regulatory frameworks and the evolving policy landscape that underpins our discussion on AI impact assessments? Head over to our previous blog “Cutting through the noise: Navigating AI policy levels in the U.S.” to learn more about the influence of state, federal, and international dynamics on policymaking. 

Looking to stay informed about regulatory updates and learn how your organization can proactively prepare for coming AI regulation? Join our RAI Hub community or become a member, and check out our recent webinar titled “Making Sense of the U.S. AI Regulatory Landscape,” which provides expert insights and guidance on keeping track of everything that is happening, and what AI regulation in the U.S. may look like in the future. 

Become a Member - Responsible AI Institute

About the Responsible AI Institute

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, Genpact, KPMG, Kennedys, Ally, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact

Nicole McCaffrey

Head of Strategy & Marketing 

Responsible AI Institute

nicole@responsible.ai 

+1 (440) 785-3588

Follow Responsible AI Institute on Social Media 

LinkedIn 

X (formerly Twitter)

Slack
