Leaders in Responsible AI: A Member’s Story


March 2024

Gerald Kierce-Iturrioz

Trustible

Co-Founder & CEO

What does your job entail within your organization?

As the co-founder & CEO of a responsible AI governance software company, my job is all about vision & execution. I generally divide my responsibilities into three priorities: product development, customer engagement, and market education. I also view my work as deeply collaborative, working closely with many of my passionate colleagues and with external partners, like RAI Institute, to advance the conversation on Responsible AI around the world.

What do you think are the biggest challenges your organization faces related to integrating AI into your business?

In many ways, we are an AI-native company. We have deployed both “old school” AI and new foundation models in our products to help our customers scale their Responsible AI initiatives and automate compliance. We also try to leverage AI software responsibly throughout our business, particularly in marketing, operations, and product.

However, one challenge we often face is determining which model is the best fit for a particular task. We evaluate models primarily on three dimensions: 1) performance on a given task, 2) cost, and 3) risk. On this last point around risk management, it is often hard to understand what risks different models may present. This realization led us to build new capabilities on our own platform that help customers evaluate the big model providers against each other and understand how transparently each of them discloses issues around copyright, data sources, toxicity, risk disclosures, PII in data, and more.

Why must organizations prioritize responsible AI governance and systems in their business?

I think there are three arguments for why organizations should prioritize responsible AI governance:

1) It’s good business, and it leads to better AI products: According to a recent Gartner report, 46% of organizations that have implemented AI governance frameworks have seen increased revenue, and 30% have decreased costs. Done correctly, responsible AI governance can be an accelerant of AI adoption.

2) Regulations will require it: The EU AI Act is around the corner, and its requirements will take time to fully implement. More AI laws are being introduced and enacted around the world. Responsible AI governance is inevitable, and organizations that delay implementation risk higher costs to retrofit compliance.

3) It’s the right thing to do!

What’s a lesson you’ve learned that has shaped your work in responsible AI? Why is this work important to you?

I’ve spoken to hundreds of companies about their AI strategy and one thing has become clear to me: for the most part, people have good intentions. They don’t want to deploy a biased or discriminatory AI system – they often just don’t have the tools, skills, or capabilities to anticipate and mitigate those risks. 

Assuming this positive intent, it follows that organizations both can and want to prioritize Responsible AI. They just need the support.

Our work at Trustible is really important to me because we provide actionable insights and tools for organizations to operationalize their Responsible AI initiatives. This means they don’t just get to talk about Responsible AI; they get to actually do it and prove it, ensuring trust in their AI systems.

Become a RAI Institute Member

About Responsible AI Institute (RAI Institute)

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as ATB Financial, Amazon Web Services, Boston Consulting Group, Yum! Brands, Shell, Chevron, Roche and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact

For all media inquiries, please contact Nicole McCaffrey, Head of Marketing & Engagement, at [email protected] or +1 440.785.3588.


