Working Together
for AI We Can Trust

Artificial Intelligence holds great potential and great peril. As practitioners
and stakeholders, the choice is ours: to build a future that will, or will not,
be trusted by everyone. The Responsible AI Institute is working to define
responsible AI with practical tools and expert guidance on data rights,
privacy, security, explainability, and fairness.



AI Offers
Incredible Promise.

AI is, without hyperbole, poised to reshape the world as we know it. It is
expected to add more than $15 trillion to the global economy by 2030,
sparking significant change across industries. By 2022, over 60% of
companies will have incorporated machine learning, big data analytics, and
related AI tools into their operations. As we navigate the intricacies of a
technology already integrated into many of our systems, it is vital that we
do so through a focused, practical lens.


AI's Power
Needs Guardrails.

When not designed in a thoughtful and responsible manner, AI systems
can be biased, insecure, and noncompliant with existing laws, even going
so far as to violate human rights. AI also presents a significant risk of
financial and reputational harm for companies that haven't thought through
their strategies and roadmaps.

More than with any technology we’ve yet encountered, it is imperative that
AI systems be designed and managed responsibly.


A Framework of
Trust Is Imperative

LGBTQ couples were 73 percent more likely to be denied a mortgage than heterosexual couples with comparable financial credentials.

A Florida county sheriff’s office is combining academic data with highly sensitive health department data to label specific children as possible criminals.

A 25-year-old Detroit man was arrested for felony theft after the city’s facial recognition software misidentified him, a common consequence of racial bias when AI isn't designed responsibly.

These jarring examples are only a sampling of thousands of similar cases, but they don't negate the incredible potential of this powerful technology. The right tools and guidance can turn situations like these into great benefits for humanity.


The World's First Independent,
Accredited Certification Program
for Responsible AI

RAI Certification is a symbol of trust that an AI system
has been designed, built, and deployed in line with the
five OECD Principles on Artificial Intelligence, which
promote AI that is innovative and trustworthy and that
respects human rights and societal values.

We use our five categories of responsible AI
(explainability, fairness, accountability, robustness, and
data quality) as parameters for the different credit
elements within the RAI Certification rating system.


Help Advance Trusted AI

Working together, we can create AI systems the world can trust.

“RAI Certification provides the guardrails for use of AI and data in an ethical and responsible manner. Partnering with RAI Institute to build an independent and trusted Responsible AI Certification system helps our customers accelerate adoption and impact from AI systems.”

Matt Sanchez

Founder and CTO, Cognitive Scale

"We’re delighted to be working with RAI toward an independent and community-developed AI certification program. RAI will provide organizations the right guard rails not only to preserve trust and avoid harm, but also enable organizations to innovate and drive better societal outcomes with AI.”

Mark Caine

Lead AI and ML, WEF

"The continued emergence of Artificial Intelligence (AI) technologies presents an exciting opportunity for Anthem to explore the development of next-generation products and services. As one of the initial charter members of RAI, we are thrilled to see this community grow through active collaboration and are pleased to provide our insights and expertise by participating in the RAI Certification beta pilot.”

Rajeev Ronanki

SVP and CDO, Anthem

Join visionary private, public, and academic leaders as we promote open, ethical AI.


Alta ML
Anthem
Cognitive Scale
Jackson
University of Alberta
Algora
Argo Design
Beacon
CIO
CIPS
UT Austin
Deloitte
EY
Hypergiant
Microsoft
Mila
Montreal
Oceanis
Oproma
Oxford Brooks
Prudence AI
PWC
Queens
R/AI
Saxena Foundation
University of Toronto
Strauss Center
 

Learn more about our work and partnerships.