Accelerating Your Responsible AI Journey

Developing a mature responsible AI program in your organization takes commitment, time, and resources. Starting that journey has become increasingly important for all organizations that build, buy, or supply AI systems.

  • Prepare to comply with emerging laws and regulations
  • Accelerate the development and deployment of AI systems
  • Avoid acquiring technical debt created by non-compliant products
  • Mitigate business risk and harms caused by potential incidents

Building AI

  • Organizational Maturity Assessment (OMA) with policy and governance roadmap/recommendations
  • Generic System-Level Assessment (SLA) remediation roadmap
  • Policy and governance foundations

Buying AI

  • Procurement process landscape review to baseline current practice
  • Recommendations and tools based on current maturity
  • Supplier assessment/qualification/checklist
  • Responsible AI procurement templates (MSA, etc.)
  • Documentation requirements
  • Recommendations for contract language

Supplier Preparedness

Most organizations purchasing technology products have added stringent diversity and security requirements to their contracts, and they are expected to soon demand the same for responsible AI. The Supplier Preparedness program helps you identify opportunities to improve your responsible AI practices and to meet, and perhaps even exceed, your clients’ expectations for trusted and responsible AI, giving you a lasting advantage in the market.

Supplying AI

  • Supplier Maturity Assessment (SMA)
  • System-Level Assessment (SLA) for one system in the design phase
  • Policy and governance roadmap including recommendations for remediation
  • Documentation requirements to prepare you for the most common requests in RFPs
  • Contract language recommendations

Responsible AI Conformity Assessments

The RAI Institute’s responsible AI assessments help you demonstrate that your AI systems and organizational practices fulfill the requirements of agreed-upon responsible AI regulations, laws, best practices, specifications, and standards. RAII continuously works with experts to update and validate its Responsible AI Framework, the foundation for all our assessments, which comprises 6 dimensions for evaluating AI systems and 5 dimensions for evaluating organizational maturity.

Types of responsible AI assessments

Our assessments are built with the needs of organizations and regulators in mind, as well as those of their clients, investors and other key stakeholders. We built conformity assessments for high-risk AI systems (or products), for responsible AI practices in organizations, and for the responsible AI practices of technology suppliers.

Responsible AI System-Level Assessment (SLA)

The RAI Institute’s responsible AI SLA evaluates AI systems across 6 dimensions of responsible AI: Data and System Operations, Explainability and Interpretability, Accountability, Consumer Protection, Bias and Fairness, and Robustness.

Responsible AI Organizational Maturity Assessment (OMA)

The RAI Institute’s responsible AI OMA evaluates an organization’s practices across 5 dimensions of responsible AI maturity: Policy and Governance, Strategy and Leadership, People and Training, AI System Documentation, and Procurement Practices.

Responsible AI Supplier Maturity Assessment (SMA)

Based on the same dimensions as the Organizational Maturity Assessment, the responsible AI SMA evaluates a supplier’s organizational responsible AI practices to bring an extra layer of trust between suppliers and their clients.

How Conformity Assessments Work

Conformity assessments can be performed differently depending on the level of assurance demanded by regulators, investors, clients, auditors, or even by your internal compliance team.

Self-Evaluation

By embedding our assessments into your internal policies and processes, you are creating robust mechanisms that will help you to mitigate risk, accelerate the AI lifecycle, and facilitate future compliance efforts. The assessment results in a confident self-attestation that you have met stringent responsible AI requirements.
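As a purely illustrative sketch (not part of the RAI Institute’s actual tooling or scoring methodology), the self-evaluation workflow described above could be tracked as a simple record that scores each of the 6 system-level dimensions and only permits a self-attestation once every dimension has been assessed and meets a chosen threshold. The dimension names come from the SLA description in this document; the scoring scale, threshold, and class names are hypothetical.

```python
from dataclasses import dataclass, field

# The 6 system-level dimensions named in the SLA description above.
SLA_DIMENSIONS = [
    "Data and System Operations",
    "Explainability and Interpretability",
    "Accountability",
    "Consumer Protection",
    "Bias and Fairness",
    "Robustness",
]

@dataclass
class SelfEvaluation:
    """Hypothetical self-evaluation record; scores are 0-100 per dimension."""
    system_name: str
    scores: dict = field(default_factory=dict)

    def record(self, dimension: str, score: int) -> None:
        if dimension not in SLA_DIMENSIONS:
            raise ValueError(f"Unknown dimension: {dimension}")
        self.scores[dimension] = score

    def ready_to_attest(self, threshold: int = 80) -> bool:
        # A confident self-attestation requires every dimension to be
        # assessed and to meet the (illustrative) threshold.
        return (len(self.scores) == len(SLA_DIMENSIONS)
                and all(s >= threshold for s in self.scores.values()))

ev = SelfEvaluation("example-system")
for dim in SLA_DIMENSIONS:
    ev.record(dim, 85)
print(ev.ready_to_attest())  # True under these illustrative scores
```

Embedding a gate like `ready_to_attest` into internal review processes is one way an organization might operationalize the "confident self-attestation" described above; the real assessment questions, scoring, and thresholds are defined in the RAI Institute’s scheme documents.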

Independent Evaluation

The RAI Institute’s assessments can also be performed by those who have an interest in using your products or in doing business with your organization, such as clients, industry associations and investors, providing the basis for a trust-based relationship. Independent evaluation results in an externally validated attestation of conformity.

Independent or Accredited Audits

Some regulations require high-risk AI systems to be evaluated by an independent and accredited auditor. Such a mechanism is currently the only way to deliver the RAI Institute’s Responsible AI Certification Program for AI systems.

The First Independent, Accredited Certification Program for Responsible AI

The RAI Institute’s Responsible AI Certification Program for an AI System is an independently delivered audit that demonstrates alignment with the requirements of RAII’s accredited and specifically calibrated conformity assessments, which account for a system’s tasks, region of operation, and industry-relevant risks and regulations. Learn more by consulting the documentation below:

Scheme Document (Sample)

The scheme document sample illustrates the rationale and intent behind a selection of our certification’s core scheme. It contains all questions, answers, scoring, rationales, thresholds, and references relevant to the requirements of a conformity assessment aligned with the dimensions of the RAI Institute Responsible AI Implementation Framework.

White Paper

This white paper provides an overview of the RAI Institute’s Responsible AI Certification Program and of how it was developed. It explains how certification aligns with AI laws, regulations, principles, research, and practitioner insights, and dives deeper into RAI Institute’s initial areas of focus.

Guidebook

The Guidebook explains the background and requirements for the RAI Institute’s Responsible AI Certification Program, as well as the processes for development, amendment, and governance, and organization- and auditor-specific information related to certification and recertification.