Learn more about Responsible Artificial Intelligence (RAI) and how you can get involved.
According to our partner, the World Economic Forum, Responsible Artificial Intelligence is the “practice of designing, building and deploying AI systems in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence.”
While many organizations have worked to establish principles and comprehensive definitions for Responsible AI, we have chosen to ground our efforts in the OECD’s five Principles on Artificial Intelligence.
As we look to operationalize the essence of these principles, we have identified the following categories: explainability and interpretability, bias and fairness, accountability, robustness, and data quality.
These terms are often used interchangeably, and in many circumstances the people who use them share the same goals and objectives. However, it is important to understand the distinctions, as these terms can mean different things or focus on different aspects of AI’s use in society.
At RAI, we prefer the most comprehensive term, Responsible, because it encompasses individual and collective values and calls for responsible action to mitigate harm to people and the planet. Ethics are a set of values specific to an individual or group, and those values can vary and conflict. While considering one’s values is incredibly important, it is essential that we target objectives that benefit people and the planet as an integrated ecosystem.
While many in the community choose to use the term ethics, we recognize that not everyone shares the same ethics, and it is not our place to define what is or is not ethical for an individual. Being responsible means recognizing that your actions could have an impact on others, and taking steps to ensure that an individual’s or group’s choices, liberties, and preferences are not harmed.
Responsible remains the most comprehensive and inclusive term, ensuring that a system is not just safe or trusted, but that it also respects and upholds human rights and societal values.
Responsible AI, Ethical AI, and Trustworthy AI all relate to the frameworks and principles behind the design, development, and implementation of AI systems in a manner that benefits individuals, society, and businesses while reinforcing human centricity and societal value.
Responsible AI Institute (RAI) is a 501(c)(3) non-profit organization committed to advancing human-centric and responsible AI.
We are a community-driven organization focused on building and distributing tangible governance tools that accelerate the design, development, and use of Responsible AI. Now entering our fifth year of operations, we bring extensive experience in regulatory policy, data governance, and the development of trustworthy AI systems for industry and governments.
RAI tools have been among the first to demonstrate how to operationalize the OECD AI principles and expand opportunities with AI while minimizing harm in local and global communities. We are best known for our community-based development of the Responsible AI Design Assistant and the Responsible AI Leadership (RAI) independent rating and certification system.
RAI is not alone in its mission. We are in the company of a vast group of academic institutions, research centres, civil society organizations, government agencies, and companies that make up the larger ecosystem working to ensure the responsible use of AI. RAI often forges partnerships, advisorships, and affiliations with our counterparts working across sectors. These include:
RAI Certification by Responsible AI Institute is the first independent, community-developed Responsible AI rating, certification, and documentation system.
RAI Certification is a symbol of trust that an AI system has been designed, built, and deployed in line with the five OECD Principles on Artificial Intelligence, promoting AI that is innovative and trustworthy and that respects human rights and societal values. We use our five categories of responsible AI (explainability, fairness, accountability, robustness, and data quality) as parameters for the credit elements within the RAI Certification rating system. In all of these areas, RAI acts as a decision-making framework for model developers, model risk managers, auditors, regulators, and consumers. It accelerates and rewards best practices and innovation, and recognizes exemplary responsible AI projects with different levels of RAI certification.
You can learn more about the RAI Certification Beta here.
RAI Assessment is a voluntary process by which a company evaluates internal controls over its data- and model-driven automated decisioning systems. Modeled on the OECD’s five AI Principles, RAI Assessments evaluate data, models, and use cases for AI hazard identification, risk evaluation, and risk control.
RAI Certification by Responsible AI Institute provides value to members across the AI ecosystem.
No matter where you are on your AI journey, our programs and services can help. We’ve laid out the journey to certification in four stages: Network, Educate, Assess, and Certify. Our goal is to allow members to network through community, educate themselves and other practitioners and students, assess their systems, and certify their responsible and trustworthy AI.
You can see the programs and tools listed for each step of the journey here.
We recognize there are growing concerns about, and a lack of trust in, the accelerated adoption of AI in society. We are dedicated to promoting the development of responsible and trustworthy AI systems. Through a diverse community of leading experts, including industry practitioners, policy makers, regulators, consumers, and academics, RAI has a unique and vital perspective on how to effect real change in both government and industry. RAI Members:
Recognizing that a project of this magnitude needs to be built by the community for the benefit of the community, we launched the Certification Working Group in December 2020 with WEF and SRI. The Certification Working Groups will be based on our areas of focus: Fair Lending, Fraud Detection, Automated Diagnosis and Treatment, and Automated Hiring. To join a working group, please contact us.