
Accelerating Responsible AI: Proven Strategies from Regulated Industries

RAI Institute Virtual Event Series

Responsible AI Institute | October 16, 2024 | Webinar Recap

In an era where AI is rapidly transforming industries across the board, the importance of implementing AI responsibly cannot be overstated. A recent panel discussion brought together experts from highly regulated sectors to share their experiences and strategies for accelerating responsible AI adoption. This blog post summarizes the key insights from this enlightening conversation, offering valuable lessons for organizations at any stage of their AI journey.

Our panelists brought expertise from across regulated industries:

  • Brian Allen, Senior Vice President of Emerging Technology Risk Management for BITS at the Bank Policy Institute, provided his perspective on responsible AI implementation in the finance and banking industry.
  • Dr. Julie Novic, AI Strategist at Further, shared her expertise on responsible AI developments in the healthcare sector.
  • Kent Sokoloff, Senior Data Architect on the RAI Operational Team at Chevron, contributed his thoughts on what has been effective when implementing AI solutions in energy.
  • Phil Dawson, Head of Global AI Policy at Armilla AI, provided insights into developing effective AI risk management strategies from an insurance perspective.
  • Greg Woolf, Founder & CEO at AI Reg Risk Think Tank, moderated the discussion.

At the Forefront of Responsible AI: Finance, Energy, and Healthcare

The panel kicked off by exploring how effectively regulated industries have put responsible AI into practice. The financial industry has emerged as a leader in responsible AI implementation, largely due to its highly regulated nature and the maturity of its practices. Its approach to AI adoption offers valuable lessons for others. Allen states, “The financial services industry, with its incredibly mature risk practices, was able to quickly adapt to AI’s rapid adoption because they are programmatic with risk management. This stability allows them to handle speed and change effectively, unlike other industries with more ad hoc approaches.”

In contrast, the energy sector has demonstrated the critical importance of understanding AI use cases and their associated risks, particularly given the industry’s far-reaching impact on society and the environment. Healthcare, while traditionally slower to adopt new technologies, is making significant strides in AI implementation. The industry’s cautious approach stems from the need to balance innovation with patient safety. “In healthcare, while there’s a perception of slowness in adopting AI, there’s actually a lot of innovation happening in what we call efficiency plays, like reducing doctor burnout by helping them process documentation faster,” comments Novic. As AI gains traction in healthcare, it’s becoming clear that the potential benefits are substantial, but so too are the challenges that must be navigated.

Navigating the Challenges of AI Adoption

Adopting AI in regulated industries comes with its own set of unique challenges. Large energy companies, for instance, must carefully navigate the complexities of implementing AI systems while adhering to strict regulatory requirements and managing environmental and safety concerns. Sokoloff says, “For us, the single largest risk is vendor-provided AI. We can manage what our own people are doing, but it’s much harder to get insight into what a vendor is providing.”

In healthcare, the adoption of AI is a delicate balancing act. The industry must weigh the potential for AI to revolutionize patient care against the risks associated with implementing new technologies in life-critical situations. This careful approach underscores the importance of thorough testing, validation, and ongoing monitoring of AI systems in healthcare settings.

The financial sector, despite its relative maturity in AI adoption, still faces significant challenges. Concerns around data privacy, algorithmic bias, and the explainability of AI decisions are at the forefront of discussions in the industry. Financial institutions are actively working to overcome these hurdles through robust governance frameworks and advanced technical solutions.

Building a Robust AI Program

Developing a strong, responsible AI program requires a multifaceted approach. Dawson states, “We’re pioneering insurance solutions for AI risk and performance. This involves automated testing and risk assessments for AI applications, helping insurers understand new risks associated with AI and adapting their traditional risk management approaches.” In healthcare, establishing clear parameters for AI system rollout is crucial. This includes rigorous testing protocols, clear guidelines for AI use in clinical settings, and mechanisms for ongoing evaluation and improvement of AI systems.

For companies in all sectors, staying ahead of the curve in responsible AI practices offers significant advantages. As regulations continue to evolve, organizations that proactively implement responsible AI frameworks are better positioned to adapt quickly and maintain compliance. This proactive approach not only mitigates risks but also builds trust with customers and stakeholders. “For small and medium-sized businesses, it’s a matter of putting the brakes on so you can go faster. Having robust processes in place helps ensure that organizations can safely scale AI adoption,” says Novic.

Energy companies like Chevron have taken inspiration from other industries’ practices, adapting and implementing responsible AI strategies that fit their unique operational context. “Chevron didn’t need to build AI risk management from the ground up because we already had a lot of the necessary practices in place due to the nature of process engineering and being a highly regulated industry,” comments Sokoloff. This cross-industry learning highlights the value of collaboration and knowledge sharing in advancing responsible AI practices across sectors.

Key Takeaways for Effective Responsible AI Implementation

The panel discussion concluded with several crucial insights for organizations aiming to implement responsible AI effectively. Experts emphasized the need to prioritize transparency and explainability in AI systems to build trust and meet regulatory requirements, while also developing robust governance frameworks that cover all aspects of AI implementation. “The conversation with regulators needs to be context-specific,” states Allen. “It’s not just about AI as a whole but about its application and the specific risks associated with each use case.” 

The panelists stressed the importance of investing in ongoing education and training for both technical teams and end-users to ensure responsible AI use throughout the organization. Additionally, the panel highlighted the value of cross-industry collaboration to share best practices and learn from diverse experiences. Finally, they underscored the necessity for organizations to remain agile and prepared to adapt their AI strategies as regulations and technologies continue to evolve in this rapidly changing landscape. “A key part of how we assess AI risk is by performing both qualitative and quantitative evaluations of the AI system. It’s about building a new assessment process that evolves with the technology,” says Dawson.
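Dawson’s point about pairing qualitative and quantitative evaluations can be made concrete with a small sketch. The example below is a hypothetical illustration only: the checks, metrics, weights, and scoring scale are invented for demonstration and do not represent any panelist’s actual assessment methodology.

```python
# Illustrative sketch: blending qualitative governance reviews with
# quantitative model metrics into a single risk score. All names, weights,
# and thresholds are hypothetical examples, not a real assessment standard.

from dataclasses import dataclass

@dataclass
class QualitativeCheck:
    """A governance question answered by reviewers (1 = low risk, 5 = high risk)."""
    question: str
    score: int  # 1-5

@dataclass
class QuantitativeMetric:
    """A measured property of the model, normalized to 0-1 (higher = riskier)."""
    name: str
    value: float  # 0.0-1.0

def composite_risk(checks: list[QualitativeCheck],
                   metrics: list[QuantitativeMetric],
                   qual_weight: float = 0.5) -> float:
    """Blend the qualitative and quantitative tracks into one 0-1 risk score."""
    qual = sum(c.score for c in checks) / (5 * len(checks))  # normalize 1-5 scale to 0-1
    quant = sum(m.value for m in metrics) / len(metrics)
    return qual_weight * qual + (1 - qual_weight) * quant

if __name__ == "__main__":
    checks = [
        QualitativeCheck("Is the intended use case documented?", 2),
        QualitativeCheck("Is there a human-in-the-loop for high-impact decisions?", 3),
    ]
    metrics = [
        QuantitativeMetric("error_rate_on_holdout", 0.12),
        QuantitativeMetric("demographic_parity_gap", 0.08),
    ]
    score = composite_risk(checks, metrics)
    print(f"Composite risk score: {score:.2f}")  # e.g., flag for review above a threshold
```

In practice, the specific checks and metrics, and the weighting between the two tracks, would be tailored to each use case and its regulatory context, and the process itself would evolve alongside the technology, as Dawson notes.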

By embracing these principles and learning from the experiences of regulated industries, organizations across all sectors can accelerate their responsible AI initiatives while mitigating risks and maximizing the benefits of this transformative technology.

Watch the panel discussion on demand. 

Access Suggested Resources

Supporting You on Your RAI Journey

Looking to stay informed about regulatory updates and learn how your organization can proactively prepare for upcoming AI regulations? RAI Institute invites new members to join in driving innovation and advancing responsible AI. Collaborating with esteemed organizations like those featured in this panel, RAI Institute develops practical approaches to mitigate AI-related risks and fosters the growth of responsible AI practices.

Become a RAI Institute Member

About Responsible AI Institute (RAI Institute)

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, Genpact, KPMG, Kennedys, Ally, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact

For all media inquiries, please contact Nicole McCaffrey, Head of Strategy and Marketing.

nicole@responsible.ai

+1 440-785-3588

