2023 Recap & Look Ahead From the RAI Institute Team

In 2023, the landscape of artificial intelligence was marked by sweeping transformation, driven by the mainstream adoption of ChatGPT and ongoing developments in generative AI. The conversation around responsible AI surged, propelled by significant policy developments, including the EU AI Act, the White House Executive Order on Safe, Secure, and Trustworthy AI, the UK AI Safety Summit, and Canada’s AI and Data Act. Amid this fast-paced evolution, the Responsible AI Institute marked its most significant year to date.

A Momentous Year

Our membership grew by over 400%, and we helped new and existing members navigate the complexities of the responsible AI landscape and take steps to implement RAI at scale. We convened important conversations, consortia, leadership groups, strategic academic and corporate partnerships, and Certification pilots at the intersection of civil society, academia, policymakers, regulators, and industry to advance complex discussions about AI safety and assurance. Our team also expanded strategically, adding key roles in policy, marketing, and member engagement to align with our vision and better support our members.

Evolving with Purpose

In 2023, insights from our members, supporters, and the wider community shaped our mission and paved the way for our 2024 roadmap. Recognizing that responsible AI is a collaborative endeavor, we look forward to continued transparency, the launch of our BRAIN member portal (coming in Q1!), and new tools and assets supporting responsible AI implementation at scale. A sneak peek at what we have planned:

Assessments and Assets: Continuously updating our Organizational, AI System, and Vendor Assessments to align with the most relevant regulatory frameworks and organizational best practices, and introducing turn-key guidebooks, templates, and materials to expedite responsible AI implementation and scaling across organizations.

Benchmarks: Testing key benchmarks, beginning with our RAISE Corporate AI Policy Benchmark, aligned with the NIST AI Risk Management Framework, followed by our anticipated RAISE LLM Hallucination and RAISE Vendor Alignment Benchmarks.

Certification and Community: Sharing a community version of our Certification assessment, aligned with the NIST AI RMF, for ongoing feedback, and accelerating adoption with key partners. We will continue to convene thematic, industry-relevant conversations on AI standards, Certification, assurance, and best practices, enabling greater collaboration among our members and across the ecosystem.

Expressing Gratitude

As we wrap up 2023, we extend heartfelt gratitude to our dedicated organizational and individual members for their unwavering support and commitment. Special thanks to our collaborators and peers for the engaging conversations and inspiring challenges that drive continuous improvement. Here’s to a stratospheric 2024, where collaboration, innovation, and responsible AI principles continue to define our path. Thank you for being an integral part of this transformative chapter.

Support the RAI Institute in 2024 by Becoming a Member 

The RAI Institute invites new members to join us in driving innovation and advancing responsible AI. Collaborating with esteemed organizations, the RAI Institute develops practical approaches to mitigating AI-related risks and fosters the growth of responsible AI practices.

About Responsible AI Institute (RAI Institute)

Founded in 2016, the Responsible AI Institute (RAI Institute) is a global, member-driven nonprofit dedicated to enabling successful responsible AI efforts in organizations. The RAI Institute’s conformity assessments and certifications for AI systems support practitioners as they navigate the complex landscape of AI products. Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial, and many others dedicated to bringing responsible AI to all industry sectors.

Media Contacts

Audrey Briers

Bhava Communications for RAI Institute

rai@bhavacom.com

+1 (858) 522-0898

Nicole McCaffrey

Head of Marketing, RAI Institute

nicole@responsible.ai 

+1 (440) 785-3588

