Learn more about Responsible Artificial Intelligence (RAI) and how you can get involved. Read our RAI Whitepaper.
What is Responsible Artificial Intelligence (RAI)?
According to our partner, the World Economic Forum, Responsible Artificial Intelligence is the “practice of designing, building and deploying AI systems in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence.”
While many organizations have worked to establish principles and comprehensive definitions for Responsible AI, we have chosen to ground our efforts in the OECD’s five Principles on Artificial Intelligence.
AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
As we look to operationalize the essence of these principles, we have identified the following categories: explainability and interpretability, bias and fairness, accountability, robustness, and data quality.
How is Responsible AI related to Ethical AI or Trustworthy AI?
These terms are often used interchangeably, and in many circumstances the people who use them are pursuing the same goals and objectives. However, it is important to understand the distinctions, as these terms can mean different things or focus on different aspects of AI’s use in society.
At RAI, we prefer the most comprehensive term, Responsible, because it speaks to individual and collective values and implies that responsible actions are taken to mitigate harm to people and the planet. Ethics, by contrast, are a set of values specific to an individual or group, and they can vary and conflict. While considering one’s values is incredibly important, it is essential that we target objectives that benefit people and the planet as an integrated ecosystem.
While many in the community choose to use the term ethics, we recognize that not everyone shares the same ethics, and it is not our place to define what is or is not ethical for an individual. Being responsible means recognizing that your actions could have an impact on others, and taking steps to ensure that an individual’s or group’s choices, liberties, and preferences are not harmed.
Responsible remains the most comprehensive and inclusive term, ensuring that a system is not just safe or trusted, but that it also respects and upholds human rights and societal values.
Responsible AI, Ethical AI, and Trustworthy AI all refer to the frameworks and principles behind the design, development and implementation of AI systems in a manner that benefits individuals, society and businesses while reinforcing human centricity and societal value.
Who is the Responsible AI Institute?
The Responsible AI Institute (RAI) is a 501(c)(3) non-profit organization committed to advancing human-centric and responsible AI.
We are a community-driven organization focused on building and distributing tangible governance tools that accelerate the design, development and use of Responsible AI. Now entering our fifth year of operations, we bring extensive experience in regulatory policy, data governance, and the development of trustworthy AI systems for industry and governments.
RAI tools have been among the first to demonstrate how to put the OECD AI Principles into action, expanding opportunities with AI while minimizing harm in local and global communities. We are best known for our community-based development of the Responsible AI Design Assistant and the Responsible AI Leadership (RAI) independent rating and certification system.
Where does RAI sit in the ecosystem of Responsible AI?
RAI is not alone in its mission. We are part of a vast group of academic institutions, research centres, civil society organizations, government agencies and companies that make up the larger ecosystem working to ensure the responsible use of AI. RAI often forges partnerships, advisorships, and affiliations with counterparts working across sectors. These include:
Development of a Responsible AI Certification program with the University of Toronto’s Schwartz Reisman Institute and The World Economic Forum’s Global AI Action Alliance
Expert for OECD ONE AI
A multi-stakeholder and multi-disciplinary group that builds on the OECD’s successful experience with the first AI Group of experts (AIGO), which developed a proposal that formed the basis for the OECD AI Principles adopted in May 2019.
Member of the AI Public Awareness Working Group
A group looking at mechanisms to boost public awareness and foster trust in AI. It also aims to ground the Canadian discussion in a measured understanding of AI technology, its potential uses, and its associated risks.
Special Advisor to Equal AI
Member of ISO JTC1/SC42
Member of the Canada Data Governance Steering Committee
Steering Committee member for Algora Lab
Advisor to NSERC research project at University of Toronto, "CREATE Responsible AI"
What is RAI Certification Beta?
RAI Certification by the Responsible AI Institute is the first independent, community-developed Responsible AI rating, certification, and documentation system.
RAI Certification is a symbol of trust that an AI system has been designed, built, and deployed in line with the five OECD Principles on Artificial Intelligence, promoting AI that is innovative and trustworthy and that respects human rights and societal values. We use our five categories of responsible AI (explainability, fairness, accountability, robustness, and data quality) as parameters for the credit elements within the RAI Certification rating system. In each of these areas, RAI acts as a decision-making framework for model developers, model risk managers, auditors, regulators and consumers, accelerating and rewarding best practices and innovation and recognizing exemplary responsible AI projects with different levels of RAI certification.
You can learn more about the RAI Certification Beta here.
What is RAI Assessment?
RAI Assessment is a voluntary process by which a company evaluates internal controls over its data- and model-driven automated decisioning systems. Modeled on the OECD’s five AI Principles, RAI Assessments evaluate data, models, and use cases for AI hazard identification, risk evaluation, and risk control.
What value does a RAI certification provide?
RAI Certification by the Responsible AI Institute provides value to a range of members.
For AI/ML Application Developers: RAI Certification and Documentation gives your AI/ML developers, model risk management staff, and internal audit groups an independent verification system and a concise framework for identifying and implementing responsible and ethical AI systems through systematic assessment of data, model, and process quality.
For Independent Software Vendors: RAI accreditation of your AI/ML software products helps your products and services get selected. Our team will provide a technical review of your products, facilitate stage-by-stage credit scoring of data, model, and process contributions, and identify and architect API-based integration points for direct access to the RAI Application from your products.
For Audit and Consulting Companies: Securing an independent, third-party Responsible AI Leadership certification given by a respected responsible AI non-profit can help further differentiate your services and enhance your leadership and expertise in the Trusted AI field. In addition, by participating in RAI community work groups, collaborating with RAI Fellows and leading partners such as World Economic Forum, OECD, IEEE, ANSI, etc., you continue to add robustness, value and depth to your offerings.
How can I start the journey to a RAI Certification?
No matter where you are on your AI journey, our programs and services can help. We’ve laid out the journey to certification in four stages: Network, Educate, Assess, and Certify. Our goal is to allow members to network through community, educate themselves and other practitioners and students, assess their systems, and certify their responsible and trustworthy AI.
You can see the programs and tools listed for each step of the journey here.
How can I get involved?
Become a member
We recognize there are growing concerns and a lack of trust accompanying the accelerated adoption of AI in society. We are dedicated to promoting the development of responsible and trustworthy AI systems. Through a diverse community of leading experts, including industry practitioners, policy makers, regulators, consumers, and academics, RAI has a unique and vital perspective on how to effect real change in both government and industry. RAI Members:
Gain access to expertise, principles, and practical tools to guide development of responsible AI systems.
Inform AI regulatory conversations at a broader scale.
Support the leading organization developing independent and authoritative AI Governance tools.
Join Visionary Private, Public, and Academic Leaders as We Promote Open, Ethical AI.
Join a working group
Recognizing that a project of this magnitude must be built by the community for the benefit of the community, we launched the Certification Working Group in December 2020 with WEF and SRI. The Certification Working Groups will be based on our areas of focus: Fair Lending, Fraud Detection, Automated Diagnosis and Treatment, and Automated Hiring. To join a working group, please contact us.