Responsible AI Institute December 11, 2024 RAISE Recap
The Responsible AI Institute’s RAISE 2024 event brought together over 1,000 industry leaders to explore responsible AI development. General Manager Jeff Easley kicked off the event with a State of the Union address, highlighting the organization’s growth. The event featured a forward-looking panel on Responsible AI and AI Governance and concluded with the Leadership in Responsible AI Awards, showcasing the Institute’s commitment to promoting responsible AI practices.
Following Easley’s opening remarks, Hadassah Drukarch, Director of Policy & Delivery, introduced our panel. As AI becomes increasingly integral to organizational infrastructure, the lines between responsible AI practices and governance are evolving rapidly. Our RAISE event panel of industry experts explored this complex landscape, examining the critical intersection of ethical implementation and regulatory compliance. They discussed emerging challenges and strategies for building resilient AI practices in an intricate regulatory environment.
The panel comprised experts from a range of backgrounds:
- Cortnie Abercrombie, Founder and CEO of AI Truth, shared her thoughts from a business perspective and what she has seen evolve through the years.
- Asha Saxena, Founder and CEO of Women Leaders In Data & AI (WLDA) and The AI Factor Institute, provided her insights into why tech leaders are so critical to this conversation.
- Lama Saouma, Senior Policy Advisor, Canadian AI Safety Institute, contributed expertise from a standards and regulatory background, educating our audience on how emerging terminology will impact both regulation and innovation.
- Hadassah Drukarch, Director of Policy & Delivery at Responsible AI Institute, moderated the discussion.
Evolving Governance Frameworks
The panel identified persistent gaps in AI governance, emphasizing the need for comprehensive, end-to-end processes. Abercrombie stressed the importance of a holistic approach that extends from initial use case evaluation to post-deployment crisis management, noting that many organizations lack a complete governance strategy. She stated, “What is our crisis management around an AI product? I don’t see a lot of that whole end-to-end process from the very beginning of the consideration all the way through to the very end, past the point that we release it. We just kind of go, ‘Oh, it’s released, and now we’ll just keep maintaining our part of it,’ but we don’t go so far as to even train users sometimes on how to use these things.”
“You want to make sure that the governance journey starts when you’re thinking about responsible AI,” Saxena remarked. “You want to invest in explainability and transparency tools.” Saxena outlined four stages of responsible AI adoption: 1) Awareness of ethical principles, 2) Experimental pilots aligning AI with business objectives, 3) Policy development and internal governance frameworks, and 4) Operational integration to embed ethical principles throughout implementation. She stressed that investment in explainability and transparency must begin at the outset, not after deployment.
Accountability Across the Lifecycle
The panel stressed that AI governance requires comprehensive accountability across all stakeholders, from developers to end-users. Abercrombie recommended creating dedicated roles like a Chief AI Ethics Officer to oversee governance efforts. Saxena highlighted the critical importance of diversity in development teams, datasets, and decision-making processes, emphasizing that diverse perspectives are key to creating inclusive and unbiased AI systems.
Proactive Regulation and Standardization
Saouma discussed the global push for standardization and regulation and pointed out that while general frameworks provide essential guidance, they must be tailored to specific industries and use cases to be effective. Saouma said, “A lot of the risks that we will see emerging and we see applied are very context specific. Developing the risk threshold would need the input from that specific industry.”
Looking Ahead
The panelists offered actionable advice to help organizations implement responsible AI practices. Abercrombie encouraged embedding ethical principles into every aspect of strategy, culture, and operations. Saxena highlighted the importance of aligning AI ethics with corporate values, while also fostering a culture of continuous learning and adaptation.
As organizations prepare for 2025, collaboration and innovation in responsible AI remain paramount. Saouma emphasized the need for clear delineation of roles within international governance frameworks and stronger accountability across the AI lifecycle. The RAISE 2024 panel reinforced that responsible AI is as much about organizational culture and leadership as it is about technology. These lessons provide a roadmap for aligning ethical principles with business success, ensuring a sustainable and trustworthy AI future.
Leadership in Responsible AI Awards
Following the panel, we presented our 2024 Leadership in Responsible AI Awards. These awards recognize a leading initiative, organization, and individual in the responsible AI ecosystem. Congratulations to all of this year’s winners and nominees!
Outstanding Initiative Winner: OneTrust Copilot
Nominees:
TELUS’ AI Report: The Power of Perspectives
RBC Borealis – RESPECT AI
KPMG AI Impact Initiative
OneTrust Copilot
The Standards Council of Canada ISO 42001 Pilot with ATB Financial
Verizon’s Responsible AI Initiative
Gender Biases in AI
Outstanding Organization Winner: Brown-Forman
Nominees:
The Institute for Experiential AI’s Responsible AI Practice at Northeastern University
TELUS
KPMG
Shell
Brown-Forman
ATB Financial
Boston Consulting Group
Outstanding Individual Winner: Bryan McGowan
Nominees:
Michael Brent, Director of Responsible AI, BCG
Yukun Zhang, Director, AI Governance & Responsible AI, ATB Financial
Amy Challen, Global Head of AI, Shell
Bryan McGowan, Principal, Global Trusted AI Team, KPMG
Luana Lo Piccolo, Responsible AI Governance and Law Expert
Jesslyn Dymond, Director of AI Governance and Data Ethics, TELUS
Cansu Canca, Director of Responsible AI Practice at Institute for Experiential AI, Northeastern U.
The insights and achievements shared at RAISE 2024 showcase the growing momentum behind responsible AI development. As we navigate this transformative era, your voice and expertise are crucial in shaping AI’s ethical future. Become a member of the Responsible AI Institute to access our comprehensive resources, including the Responsible AI Hub, policy templates, and governance frameworks. Connect with industry leaders, participate in groundbreaking initiatives, and help build AI systems that benefit humanity.
Ready to make an impact? Join our community of changemakers and lead the way in responsible AI development. Become a member today.
About Responsible AI Institute (RAI Institute)
Founded in 2016, Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, Genpact, KPMG, Kennedys, Ally, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
For all media inquiries, please contact Nicole McCaffrey, Head of Strategy and Marketing.
+1 440-785-3588
Connect with Responsible AI Institute
X (formerly Twitter)