Leaders in Responsible AI: A Member’s Story

Philip Dawson, Armilla AI

June 2024

What does your job entail within your organization?

As the Head of AI Policy, I wear many hats at Armilla AI. I lead our work with international organizations, governments and standards bodies, helping to shape responsible AI policy, governance and standardization initiatives in collaboration with Armilla's customers and partners. In our client engagements, I am responsible for ensuring Armilla's AI advisory, assessment and AI/LLM evaluation services meet and exceed emerging rules for Responsible AI, in line with legal requirements and best practices. Given the critical role Armilla AI plays in helping enterprises operationalize AI governance, I also lead client development with AI-first companies, as well as with enterprises in sectors such as financial services, banking, insurance, telecommunications, customer service, and employment and human resources.

Why must organizations prioritize responsible AI governance and systems in their business?

The 2024 Stanford AI Index reported a nearly 30-fold increase in AI incidents since 2012, highlighting that AI risk and safety considerations remain principal barriers to scaling AI in enterprises today. Enterprises that prioritize responsible AI governance and systems are better positioned to accelerate time to market, outcompete rivals and avoid the consequences of AI failures, which include reputational damage, business interruptions, economic losses, class actions, and fines. At Armilla AI, we're proud to be at the forefront of efforts to mitigate and protect companies from AI risk, offering warranty coverage for safe, fair and reliable AI solutions in partnership with global insurers. As a concrete example, when one of our HRTech clients was successfully acquired recently, Armilla's assessment and warranty were highlighted in the press release. That's pretty significant.

What’s a lesson you’ve learned that has shaped your work in responsible AI? Why is this work important to you?

Market-based incentives like enterprise procurement are among the most significant drivers of Responsible AI, far more so than regulation. We're seeing enterprises integrate new requirements for responsible use, including independent AI/LLM assessments, and adapt vendor risk management programs to protect against third-party AI failures. So, Responsible AI is becoming critical to commercial success for vendors as well.

We're really privileged to be working with companies on both sides of this, helping raise the bar for responsible use and mitigate AI risk. It's one of the main reasons we're working with insurers to pioneer new policies covering AI performance and liability. Part of our process for determining coverage eligibility is assessing enterprises against Responsible AI frameworks and standards to understand their risk profile and the mitigations they have put in place, and to make recommendations for remediation. Coverage against AI risk is emerging as another powerful driver of responsible AI governance for enterprises and scale-ups alike. We were thrilled to be highlighted for this work in recent reports by Deloitte and Swiss Re on AI risk and insurance.

About Armilla AI

Armilla AI is a provider of AI/LLM testing, red-teaming and risk transfer solutions, helping enterprises realize the benefits of high-performing, fair and reliable AI. Using industry-leading AI/LLM evaluation technology, we evaluate and quantify the risk level of AI systems to provide AI assessments and warranties, backed by leading reinsurers Swiss Re, Greenlight Re and Chaucer. We've gone through some of the world's most prestigious AI and insurtech accelerators, including Y Combinator, Lloyd's Lab and Betaworks' AI Camp, and are listed in FinTech Global's Insurtech100 and InsureTechConnect's Forward50 rankings of the world's most exciting insurtech startups. We are also the proud winners of the Responsible AI Institute's 2023 award for most outstanding product, for our innovative AI warranty. You can find me on LinkedIn or reach out by email to find out more: phil@armilla.ai

About Responsible AI Institute (RAI Institute)

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Join the Responsible AI Movement

Join the responsible AI movement by becoming a member of the Responsible AI Institute’s RAI Hub. Gain access to a vibrant community, exclusive resources, and opportunities to contribute to the development of responsible AI standards and best practices, while working together to ensure AI is developed and deployed responsibly.

Media Contact

Nicole McCaffrey

Head of Marketing, Responsible AI Institute

nicole@responsible.ai

+1 (440) 785-3588

Follow Responsible AI Institute on Social Media

LinkedIn

X (formerly Twitter)

YouTube  
