
A Look at Responsible AI: 2022 in Review and 2023 Outlook

The European Union’s whitepaper “On Artificial Intelligence - A European approach to excellence and trust” says it best: “AI should work for people and be a force for good in society.”

With a shared vision, the Responsible AI Institute (RAI Institute) has spent 2022 continuing to ensure a human-centered approach is taken to the design, development, and deployment of AI. We promote the important societal innovations that will inevitably come from AI but recognize this enthusiasm must be balanced by well-defined regulations, standards, and industry best practices.


We have not done this alone. Our rich community ranges from individual contributors and civil society collaborators who alert us to key research and relevant issues, to governments testing out different uses for AI and thinking through new policy measures, to our corporate members who are building, buying, and supplying AI solutions and are concerned with the implications of these systems. Through the hundreds of combined conversations, workshops, assessments, and use cases we’ve completed this year, we have continued to learn from each and every one of you. As a result, we’ve become a central hub for responsible AI research, knowledge, and resources.


Our focus remains on applying this research and knowledge to advance and support the oversight and governance of AI systems. Since our founding in 2017, we have been focused on creating meaningful impact for AI practitioners. Building from internationally agreed-upon principles and policy objectives, we are dedicated to translating these general objectives into workable and easy-to-apply organizational practices. This continues to be done through our development of context-specific responsible AI conformity assessments and certification programs aligned with proposed and enacted regulations.


Global Responsible AI Trends

This year saw a marked shift in responsible AI awareness from government and industry. New AI regulations were drafted or continued to be developed at both the local and national levels across the world. In addition, significant reporting was done on the state of responsible AI in business. For example, MIT Sloan Management Review and BCG produced an aptly titled report, To be a Responsible AI Leader, Focus on Being Responsible, on the importance of organizations taking a proactive approach to responsible AI. In another recent report, From AI compliance to competitive advantage, Accenture surveyed 850 executives from 17 regions and 20 industries and found that “coupled with the opinion that Responsible AI can fuel business performance… over 80% of respondents plan to increase investment in Responsible AI.”


We believe this shift occurred for three main reasons:

  1. Awareness and Scope - The AI ecosystem grew as increased awareness and high-profile AI incidents made responsible AI a C-suite topic, along with the recognition that its scope is holistic and not limited to testing for bias and explainability.

  2. Regulatory Momentum - Proposed AI-focused regulations have continued to emerge, mature, and have even been enacted.

  3. Maturing Standards - Global standards bodies have advanced their efforts to develop standards to address all aspects of AI oversight from high-level risk frameworks to measurable robustness requirements.


That’s why, while AI regulations continue to develop, we are supporting organizations in maturing their responsible AI practices to prepare for certification and regulatory compliance. This important work has proven to reduce risk when building or buying AI, increase customer trust, and avoid costly technical debt. For organizations that are supplying AI solutions, this has demonstrated a distinct competitive advantage as market demand for this type of assurance increases. This approach has the added benefit of providing us with real-life use cases that we can apply to the ongoing development of our conformity assessment schemes, ensuring our work is practical, not theoretical.


RAI Institute Highlights from 2022

In 2022, we built on our strong foundation and continued to do our part to ensure AI systems are fair, safe, and inclusive, ultimately helping to advance society in a positive direction. For a recap of our work this year, please see the summary video from our annual conference, RAISE, or read our RAISE recap blog.


Key activities:


“I am passionate about three things besides my family – mindfulness, equity, and inclusion. These are not just nouns to me, they are also verbs, actions that I take every day that manifest in the form of challenging my own thinking, advocacy, and where I invest my time. Not surprisingly, my personal passions inform my work passions. These manifest in the form of ensuring equal pay for equal work when I lead a team, ensuring diversity is a first-class citizen in the hiring process, and ensuring that technology is developed in a transparent, trustworthy, inclusive, and safe manner. This has all culminated over the last 3 years into my push to change the AI industry from a technology-focused industry to a human-focused industry – making sure that technology, specifically AI-enabled systems, are built, deployed, and implemented so they are transparent, fair, and safe. I strongly advocated for the industry shift to a focus on human-centered AI and responsible AI. Not from a design perspective, but rather from an outcome perspective.


It should not be surprising that when I left my post as IBM’s first-ever Global Chief AI Officer, I did so to be at the epicenter of human-centered, responsible AI. By joining the Responsible AI Institute as President, I am now helping to lead the independent non-profit organization that is poised to become the de facto standard for assessing conformity to these principles of human-centered and responsible AI.”

– Seth Dobrin, on why he joined the RAI Institute



Continuing the Momentum in 2023

2023 will be a transformational year for us and our members. We expect to see another significant increase in membership as these issues, and the need to take action on them, become more important to business leaders. Continuing our work of supporting organizations in their RAI efforts, here are some things you can expect from us in 2023:

  • Expansion of our responsible AI conformity assessment program by focusing on new use cases in new domains supported by additional pilots and regulatory sandboxes.

  • Hosting more regulatory roundtables in more regions to enable necessary dialogue between practitioners and policymakers.

  • Highlighting success stories from our members and partners through increased social media posts, webinars, and conference engagements.

  • Launch of a new procurement campaign with key ecosystem collaborators who are also thinking about the significant implications of buying and selling AI.

  • Officially expanding to Canada! While we have supported Canadian members since the start, we will officially launch our efforts as Canada takes a more active role in standards development and AI regulation.


We are very excited about our future and our potential impact. We welcome you to join us on our journey, so please reach out if you have any questions about how to get involved.


Finally, a huge thank you to our engaged community and members who have been the guiding force to get us here. There continues to be a long road ahead, and we look forward to working with you to continue to evolve and build the responsible AI ecosystem.



Ashley Casovan, Executive Director
Seth Dobrin, Ph.D., President
