Frequently asked questions
RAI Institute is a membership-based, community-driven non-profit organization committed to advancing human-centric and trustworthy AI. We help our members fast-track their responsible AI success through our independent assessments, benchmarks, and certification program.
Founded in 2016, we bring extensive experience in regulatory policies, governance, and the development of responsible AI systems for industry and governments.
Leading corporations, governments, and suppliers trust us to advance their responsible AI efforts through a first-of-its-kind system that assesses, qualifies, and certifies AI systems and helps practitioners navigate the complex landscape of creating responsible AI.
We operate in the US, Canada, Europe, and the United Kingdom. Our tools have been among the first to demonstrate how to operationalize the OECD AI Principles and to expand opportunities with AI while minimizing harm in local and global communities.
Responsible AI Institute’s mission is to advance the design and deployment of safe and trustworthy artificial intelligence which benefits all of humanity.
We are a membership-driven non-profit funded primarily through annual membership fees from corporations, technology solution providers, and individuals.
Our RAI Institute Certification is the first independent, community-developed Responsible AI rating, certification, and documentation system.
RAI Institute Certification is a symbol of trust that an AI system has been designed, built, and deployed in line with the five OECD Principles on Artificial Intelligence, which promote AI that is innovative and trustworthy and that respects human rights and democratic values.
A RAI Institute conformity assessment can be used in one of three ways with increasing levels of trust associated with them:
Low-trust: Use the responsible AI conformity assessment to perform an internal evaluation of AI systems, allowing you to provide a self-attestation that the AI system you are building is in fact responsible.
Medium-trust: Use the responsible AI conformity assessment to have a second party perform a non-accredited validation that an AI system is built responsibly, including a review of supporting documentation, allowing you to provide a second-party validation that your AI system is in fact responsible.
High-trust: An independent and accredited third party performs an audit of an AI system using the responsible AI conformity assessment, resulting in the issuance of RAI Institute Certification.
With the ability to apply the RAI Institute assessments as first-party assessments, second-party assessments, and third-party audits, members engender the trust to scale human-centric AI with confidence.
For organizations building and buying AI systems:
RAI Institute assessments give organizations building and/or buying AI systems a programmatic approach to assuring, validating, and certifying that the AI systems used in their organizations meet internal policies, align with global standards, and are well positioned as regulations emerge.
For organizations building and supplying AI systems:
Many procuring organizations look to the RAI Institute to help them define what their responsible AI procurement practices should be. By using our responsible AI maturity assessments for your AI-enabled systems, you can assure customers that the AI-enabled system they are buying is built, implemented, and operated in a responsible manner and in accordance with their policies. The symbol of trust the RAI Institute provides acts as a competitive advantage for AI-enabled systems that impact human health, wealth, or livelihood, or that are used to build and deploy AI systems.
For audit, verification, and consulting companies:
Securing an independent, third-party Responsible AI Certification from a respected responsible AI non-profit can help further differentiate your services and enhance your leadership and expertise in the human-centric, trusted AI field. In addition, by participating in RAI community working groups and collaborating with RAI Fellows and leading partners such as the World Economic Forum, OECD, IEEE, and ANSI, you continue to add robustness, value, and depth to your offerings.
RAI Institute members are non-governmental organizations that have subscribed for membership, giving them access to RAI Institute tools and assets, products, and solutions as they build, buy, and supply AI systems. The tools and assets include policy and governance templates, a regulatory tracker, responsible AI organizational maturity assessments, responsible AI system-level assessments, and more. Our product is the responsible AI conformity assessment, and our solutions focus on helping organizations build, buy, and supply AI systems.
Responsible Artificial Intelligence is the “practice of designing, building, deploying, operationalizing and monitoring AI systems in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence.” (Source: World Economic Forum, RAI Institute Partner).
While many organizations have worked to establish principles and comprehensive definitions for Responsible AI, we have chosen to ground our efforts in the OECD’s five Principles on Artificial Intelligence.
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
- Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
As we look to operationalize the essence of these principles, we have identified the following categories: System Operations, Explainability and Interpretability, Accountability, Consumer Protection, Bias and Fairness, Safety and Robustness.
The potential societal benefits and implications of AI are enormous. To trust an AI system, we must have confidence in its decisions. We need to know that a decision is reliable and fair, that it is transparent, and that it cannot be tampered with.
However, given their ability to continuously learn and evolve from data, AI systems are proving to be a double-edged sword. On one hand, AI is helping remove costs, simplify business processes, and enhance products and customer experiences. On the other hand, most automated decisioning data and AI models today are black boxes that function in oblique, invisible, and sometimes deceivable or biased ways for their developers as well as for consumers and regulators. The result is a new set of business risks: reputation damage, revenue losses, regulatory backlash, criminal investigations, and diminished public trust.
Responsible AI Institute provides conformity assessments and certification aimed at bringing transparency, fairness, and robustness to AI- and expert-system-powered automated decisioning systems, ensuring these systems are fair, transparent, and accountable and operate in a manner consistent with user expectations, organizational values, and societal laws and norms.
These terms often get used interchangeably, and in many circumstances the people who use them are interested in the same goals and objectives. However, it’s important to understand the distinctions, as the terms can mean different things or focus on different aspects of AI’s use in society.
At the RAI Institute, we like to use the most comprehensive term, “responsible”, as it reflects individual and collective values and inspires responsible actions to mitigate harm to people and the planet.
Ethics are a set of values specific to an individual or group, and can vary and conflict. While considering one’s values is incredibly important, it is essential that we are targeting objectives that benefit people and the planet as an integrated ecosystem.
While many in the community choose to use ethics as a term, we recognize that not everyone has the same ethics. It is not our place to define what is or isn’t ethical for an individual. Being responsible means recognizing that your actions could have an impact on others and taking steps to ensure that an individual’s or group’s choices, liberties, and preferences are not harmed. What is important as part of responsible AI operations is that organizations define their own AI ethics principles and make these transparent to their employees and customers.
The term “Trustworthy AI” is most often used to reference the technical implementation of AI focused mostly on ensuring fairness through the detection and mitigation of bias as well as ensuring AI models are transparent and explainable.
Responsible remains the most comprehensive and inclusive term ensuring that the system is not just safe or trusted, but that it respects and upholds human rights and societal values as well.
Responsible AI, Ethical AI, or Trustworthy AI all relate to the framework and principles behind the design, development and implementation of AI systems in a manner that benefits individuals, society and businesses while reinforcing human centricity and societal value.
No matter where you are on your AI building, buying or supplying journey, our programs and services can help. We’ve laid out the journey to certification in four stages: Network, Educate, Assess, and Certify. Our goal is to allow members to network through community, educate themselves and other practitioners and students, assess their systems, and certify their responsible and trustworthy AI.
RAI Institute Certification only applies to AI Systems and Models, not individuals or organizations or software products used in the design, development, deployment, or operations of AI systems. As such, The RAI Institute does not explicitly certify, endorse or promote products, services or companies, nor do we track, list or report data related to products and their responsible AI qualities.
RAI Institute is an independent certification system that accelerates human centric AI by assuring the quality of AI models and applications based on overall characteristics of the project. We do not award credits based on the use of particular products but rather upon meeting the performance standards set forth in our rating systems. It is up to project teams to determine which products are most appropriate for credit achievement and program requirements.
Although the RAI Institute does not certify, promote, or endorse the products and services of individual companies, products and services do play a role and can help projects achieve RAI Institute credits. We maintain a list of products that have been used in member use cases and case studies to attain certification, and we can share that list with your team. (Note that products and services do not earn project points.)
Our vibrant community consists of standards bodies, corporations, universities, government agencies, data and policy focused nonprofits, and technology solution providers.
We are also affiliated with working groups and global bodies such as:
- The AI Public Awareness Working Group
- A group looking at mechanisms to boost public awareness and foster trust in AI. It also aims to ground the Canadian discussion in a measured understanding of AI technology, its potential uses, and its associated risks.
- OECD ONE AI
- A multi-stakeholder and multi-disciplinary group that builds on the OECD’s successful experience with the first AI Group of experts (AIGO), which developed a proposal that formed the basis for the OECD AI Principles adopted in May 2019.
- WEF Global AI Action Alliance
- ISO JTC1SC42
- Canada Data Governance Steering Committee
- Algora Lab
- The Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS)
- InnSoTech project for Alberta “HelpSeekers” Initiative
- NSERC research project at University of Toronto, “CREATE Responsible AI”
- Become a Member
- Join RAI Institute’s community to access a suite of membership benefits, including RAI Institute Certification, an independent authoritative framework and trust symbol.
- Join a Working Group
- We hold working groups to discuss our work determining best practices and harms in use cases related to Fair Lending, Skin Disease Detection, and Automated Hiring. If you have expertise in one of these areas, get in touch!
- Sign up for our newsletter
- Stay on top of the latest developments related to RAI Institute Certification, upcoming regulatory changes, events and more!
- Follow us on social media
- Join our LinkedIn and Twitter communities!
RAI Institute maintains awareness of regulatory developments pertaining to AI and to RAI Institute’s priority industries (lending, health care, human resources, and procurement) in the US, UK, Canada, and at the EU level. Where relevant, these regulatory developments are reflected in updates to RAI Institute’s assessments and tools.
RAI Institute hosts events with policymakers and its members to facilitate the exchange of best practices, concerns, and feedback.
The Responsible AI Safety and Effectiveness (RAISE) Benchmarks are beta versions of tools developed by Responsible AI Institute to help organizations enhance the integrity of their AI products and services. They are crucial because they enable companies to integrate responsible AI principles into their development and deployment processes, aligning with evolving global standards and regulatory requirements such as President Biden’s Executive Order, the European Union’s AI Act, and Canada’s Artificial Intelligence and Data Act. Three RAISE Benchmarks beta tools are being launched:
- RAISE Corporate AI Policy Benchmark
- RAISE LLM Hallucinations Benchmark
- RAISE Vendor Alignment Benchmark
The RAISE Hallucinations Benchmark tool helps organizations assess and mitigate the risks of AI hallucinations common in large language models. It ensures that your AI-powered products and solutions are as accurate and reliable as possible.
The RAISE Benchmarks and Model Enterprise AI Policy will become available in June 2024.
From December 6, 2023, some RAI Institute members who have Testbed membership, as well as RAISE Benchmark design partners, will have Beta access to the automated versions of the RAISE Benchmarks.
After June 2024, RAI Institute organizational members with Testbed membership will have access to all automated Benchmarks, the Model Enterprise AI Policy and the option to have one detailed report per month reviewed by the RAI Institute or a partner organization. RAI Institute network members with Testbed membership will have the ability to use automated Benchmarks and the Model Enterprise AI Policy with their clients on terms agreed to with RAI Institute. All RAI Institute members and non-members will have access to the final version of the RAISE Benchmark methodologies and a public version of the Model Enterprise AI Policy.
While we respect the confidentiality of our members, we can share that several leading companies in financial services, healthcare, and other industries have endorsed and found value in using the RAISE Benchmarks to enhance the trustworthiness and safety of their AI applications. We will be releasing case studies in the following months.
You can explore membership options on the RAI Institute’s website to join and gain access to the RAISE Benchmarks and other valuable resources.
Yes, the RAISE Benchmarks are designed to be accessible and beneficial to organizations of all sizes, including startups, aiming to develop responsible AI solutions.
The RAISE Benchmarks cover a range of technical aspects related to responsible AI, including AI policy comprehensiveness, risk assessment for hallucinations in large language models, and alignment of vendor AI practices with ethical standards. They are based on two important standards: the NIST AI RMF and the ISO/IEC 42001 AI governance standard.
The NIST AI RMF (National Institute of Standards and Technology Artificial Intelligence Risk Management Framework) is a framework developed by NIST to help organizations manage and reduce risks related to adopting and using artificial intelligence (AI). It provides guidelines and best practices for governing, mapping, measuring, and managing AI risks so that AI systems are secure, compliant, and reliable. This framework is valuable for organizations in various sectors to enhance the security and trustworthiness of their AI deployments.
The ISO/IEC 42001 standard is a global Management System standard developed over three years with the expertise of professionals from more than 50 countries. It aims to enhance the governance and accountability of Artificial Intelligence (AI) worldwide. This standard provides guidelines for organizations of varying sizes that plan to integrate AI into their products and services. It promotes responsible and efficient AI deployment while fostering transparency in AI adoption across diverse industries and sectors.
The RAISE Policy Benchmark evaluates AI policies by measuring their scope and alignment with the RAI Institute’s model enterprise AI policy, the NIST AI Risk Management Framework, and the ISO/IEC 42001 standard. It helps organizations address trustworthiness and risk considerations introduced by generative AI and large language models.
The RAISE Policy Benchmark evaluates the comprehensiveness of an organization’s AI policies by assessing their alignment with the Responsible AI Institute’s model enterprise AI policy. One of the key criteria used in this evaluation is the alignment with the seven dimensions of the NIST AI RMF (National Institute of Standards and Technology Artificial Intelligence Risk Management Framework). The seven dimensions of the NIST AI RMF are essential considerations for managing and mitigating risks associated with AI adoption. They encompass various aspects of AI governance and accountability, ensuring that AI systems are developed, deployed, and managed in a secure and responsible manner. These dimensions include:
- Governance and Accountability: This dimension focuses on establishing clear roles, responsibilities, and accountability for AI-related activities within the organization.
- Transparency: Ensuring that AI systems are transparent in their decision-making processes and providing clear explanations for their actions.
- Fairness and Non-discrimination: Addressing bias and ensuring that AI systems do not discriminate against individuals or groups based on protected characteristics.
- Privacy and Data Protection: Protecting the privacy and personal data of individuals when AI systems are used to process or analyze data.
- Safety and Security: Ensuring that AI systems are safe to use and resilient against potential security threats and vulnerabilities.
- Interpretability and Explainability: Providing the ability to understand and explain how AI systems arrive at their decisions or recommendations.
- Accountability for AI-Related Decisions: Establishing mechanisms to track and review AI-related decisions and their impact.
The RAISE Policy Benchmark tool uses these dimensions as critical criteria to assess an organization’s AI policies. It helps organizations ensure that their AI policies comprehensively address these key dimensions, promoting responsible AI adoption and compliance with evolving global standards and regulations. A simplified illustration of this kind of coverage check is sketched below.
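Purely as an illustration, and not the RAISE Policy Benchmark’s actual methodology, a minimal keyword-based coverage check over the seven dimensions above might look like the following sketch. The keywords, scoring, and sample policy text are assumptions made for this example.

```python
# Hypothetical sketch of a keyword-based policy coverage check.
# Dimension names follow the list above; the keywords and scoring are
# illustrative assumptions, not the RAISE Policy Benchmark's methodology.

DIMENSION_KEYWORDS = {
    "Governance and Accountability": ["roles", "responsibilities", "oversight"],
    "Transparency": ["transparency", "disclosure", "explanation"],
    "Fairness and Non-discrimination": ["bias", "fairness", "discrimination"],
    "Privacy and Data Protection": ["privacy", "personal data", "consent"],
    "Safety and Security": ["safety", "security", "resilience"],
    "Interpretability and Explainability": ["interpretability", "explainability"],
    "Accountability for AI-Related Decisions": ["audit", "review", "decision log"],
}

def score_policy(policy_text: str) -> dict:
    """Return a 0-1 coverage score per dimension based on keyword hits."""
    text = policy_text.lower()
    return {
        dimension: sum(kw in text for kw in keywords) / len(keywords)
        for dimension, keywords in DIMENSION_KEYWORDS.items()
    }

# Hypothetical excerpt of an enterprise AI policy used as input.
sample_policy = """
Our AI oversight board assigns clear roles and responsibilities, requires
bias and fairness testing before deployment, mandates transparency and
disclosure of automated decisions, and protects the privacy of personal
data throughout the model lifecycle.
"""

for dimension, score in score_policy(sample_policy).items():
    print(f"{dimension}: {score:.0%} keyword coverage")
```

A real benchmark would go well beyond keyword matching, but the per-dimension scores give a sense of the coverage signal such a tool can surface.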
Large language models (LLMs) and Generative AI models possess the ability to generate their own content, a feature that sets them apart from ‘Classic AI.’ However, this unique capability introduces specific risk factors, such as the tendency of LLMs to generate unexpected, incorrect, or misleading outputs. The RAISE Hallucinations Benchmark tool assists organizations in assessing and minimizing the risks of AI hallucinations, which are common in large language models. It provides actionable insights to improve the reliability of AI-powered solutions.
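As a toy illustration of the kind of check involved, and not the RAISE Hallucinations Benchmark’s actual method, the sketch below flags an answer as a possible hallucination when it shares too little content with a trusted reference answer. The overlap heuristic, threshold, and sample records are assumptions for this example only.

```python
# Hypothetical sketch: flag possible hallucinations by comparing a model's
# answer against a trusted reference answer using a simple token-overlap
# heuristic. This is illustrative only, not the RAISE benchmark method.

def overlap_score(answer: str, reference: str) -> float:
    """Fraction of reference tokens that also appear in the answer."""
    answer_tokens = set(answer.lower().split())
    reference_tokens = set(reference.lower().split())
    if not reference_tokens:
        return 0.0
    return len(answer_tokens & reference_tokens) / len(reference_tokens)

def flag_hallucination(answer: str, reference: str, threshold: float = 0.5) -> bool:
    """Flag the answer when it shares too little content with the reference."""
    return overlap_score(answer, reference) < threshold

# Hypothetical evaluation records: (question, model answer, reference answer).
records = [
    ("Who wrote the 2022 audit report?",
     "The 2022 audit report was written by an external consultant.",
     "The vendor's compliance team wrote the 2022 audit report."),
]

for question, answer, reference in records:
    if flag_hallucination(answer, reference):
        print(f"Possible hallucination for: {question!r}")
```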
The RAISE Vendor Alignment Benchmark assesses whether the AI practices of vendor organizations align with the ethical and responsible AI policies of their purchasing counterparts. It ensures that vendors meet the values and expectations of the businesses they serve.
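For intuition only, here is a minimal sketch of comparing a vendor’s declared AI practices against a buyer’s required controls. The control names, data layout, and scoring are hypothetical assumptions, not the RAISE Vendor Alignment Benchmark’s schema.

```python
# Hypothetical sketch: check a vendor's declared practices against a buyer's
# required controls and report the gaps. Field names are illustrative only.

buyer_requirements = {
    "human_oversight": True,
    "bias_testing": True,
    "incident_reporting": True,
    "model_documentation": True,
}

vendor_practices = {
    "human_oversight": True,
    "bias_testing": False,        # vendor does not yet test for bias
    "incident_reporting": True,
    "model_documentation": True,
}

# Controls the buyer requires but the vendor does not declare.
gaps = [control for control, required in buyer_requirements.items()
        if required and not vendor_practices.get(control, False)]

alignment = 1 - len(gaps) / len(buyer_requirements)
print(f"Vendor alignment: {alignment:.0%}; gaps: {gaps or 'none'}")
```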
Technical teams can become RAI Institute Members with testbed access and participate in the private preview of the RAISE Benchmarks. Feedback from technical experts is crucial for further refinement and improvement of these benchmarks.
Can you provide examples of specific technical challenges that the RAISE Benchmarks help address?
The RAISE Benchmarks help address challenges related to AI policy creation, identifying and mitigating LLM hallucinations, and ensuring ethical alignment between vendors and buyers in AI-related transactions.
While some technical expertise is beneficial, the RAISE Benchmark tools are designed to be user-friendly and accessible to technical and non-technical team members alike.
The three RAISE Benchmarks are in various stages of design, development, and testing and undergo regular updates based on feedback and evolving AI standards. Members receive notifications and access to updates through the RAI Institute’s channels as well as through RAISE working group participation.
Yes, as a testbed member, you will have access to technical resources, documentation, and guidance to support the effective implementation of the RAISE Benchmarks within your organization.