Responsible AI Programs: Putting the Pieces Together

Responsible AI Institute March 20, 2024 Webinar Recap

During the most recent installment of Responsible AI Institute’s Virtual Event Series, we convened experts from leading organizations to discuss the critical elements of effective enterprise Responsible AI programs. As artificial intelligence capabilities rapidly advance, pressure is mounting on businesses to establish robust governance frameworks, adopt enabling tools and technologies, and cultivate cultures rooted in responsible AI principles.

Our panelists each brought a unique perspective on responsible AI programs:

      • Steve Mills, Chief AI Ethics Officer at Boston Consulting Group, provided insights into how to start building AI governance into your organization.

      • Saima Shafiq, SVP and Head of Applied AI Transformation at PNC Bank, shared her expertise on applying AI governance. 

      • Monika Viktorova, Product Manager at CEVA Logistics, contributed her thoughts on internal review processes. 

      • Alex Miller, Technology Strategist on the Responsible AI team at Spark92, provided perspective on how different industries develop their AI governance structures.

      • Var Shankar, Executive Director at Responsible AI Institute, moderated the discussion.

    Governance Foundations Are Essential

    The panel reinforced that while building a comprehensive Responsible AI program is a multi-year journey, organizations must start now to stay ahead of the curve. Mills noted that reaching full maturity may take two to three years, but that organizations can learn a great deal and capture significant value along the way. Key initial actions include designating executive leadership, defining risk principles and policies, integrating AI governance into existing processes, and critically evaluating use cases through an ethics lens during design phases.

    Impartiality, accountability, and diverse perspectives are vital for effective internal review of AI systems. Viktorova noted that this approach minimizes conflicts of interest and ensures that an organization’s decision making is grounded in a rational scoping of a project’s full risks and benefits. Smaller teams can creatively leverage third-party tools and partner resources to compensate for limited in-house subject matter expertise. The panel stressed that organizations must make Responsible AI a cross-functional priority, not just a technology team’s concern.

    Tools Accelerate Responsible AI Scaling

    As AI governance programs mature, the right tools become mission-critical enablers for activities like model monitoring, bias testing, risk reporting, and more. While no single vendor platform is likely to satisfy all needs across document AI, conversational AI, generative AI, and other advanced capabilities, a centralized AI governance solution integrated with best-of-breed supporting tools is ideal.

    Tooling is necessary but not sufficient on its own. Shafiq highlighted that while having an accountable party is important, cross-company collaboration and working groups accountable for defining the overall governance are critical. The panel underscored that the enterprises succeeding with Responsible AI are those proactively implementing holistic training programs and fostering cultures of responsible AI practice that permeate every role, from the C-suite to data scientists, business analysts, and beyond. Miller emphasized that Responsible AI must be positioned not just as a technology concern, but as a business concern that impacts everyone in the organization.

    Now Is The Time To Operationalize Responsible AI

    While daunting, operationalizing responsible AI principles is becoming a strategic imperative as regulatory pressures increase, consumer expectations rise, and AI’s societal impacts intensify. Fortunately, a growing array of standards, best practices, and partnership opportunities is available to help pave the way.

    Responsible AI Institute, in collaboration with Boston Consulting Group, recently published an article on putting AI standards into action by establishing a strategic and efficient AI governance structure. The responsible development of AI will prove to be one of this decade’s paramount challenges and opportunities. Those who get started in earnest today will be best positioned for long-term competitive advantage while mitigating risks to customers, shareholders, and society.

    About Responsible AI Institute (RAI Institute)

    Founded in 2016, Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

    Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

    Media Contact
    For all media inquiries, please contact Nicole McCaffrey, Head of Marketing & Engagement.
    [email protected]
    +1 440-785-3588
