
Leaders in Responsible AI: A Member’s Story



November 2024

Aisha Tahirkheli

KPMG

Managing Director, Trusted AI

What does your job entail within your organization?

As the Trusted AI leader for our firm, my mission is to drive the ethical and responsible design, development and use of AI to ensure we unlock AI’s full potential with confidence.

In my role, I chair KPMG’s Trusted AI Council, a cross-functional oversight committee that continuously reviews our Trusted AI strategic framework, ethical pillars and guiding principles. We help KPMG stay ahead of the curve, aligning with evolving regulations and technological advancements, and translating our principles into actionable policies, processes, technology, and training.

Collaboration is key. The Council actively engages with internal and external stakeholders, shaping AI governance frameworks and addressing emerging challenges. We are committed to leading the charge in Trusted AI adoption, setting the benchmark for our industry and beyond.

What do you think are the biggest challenges your organization faces related to integrating AI into your business?

The biggest challenge in integrating AI across everything we do isn’t technical – it’s human. With 40,000+ professionals bringing diverse experiences and technical capabilities, effective adoption means meeting our people where they are. That’s why we launched aIQ (AI + human IQ) – our people-first, human-centric approach to KPMG’s AI transformation.

aIQ’s core focus is empowering our people. We’re making AI accessible and understandable to everyone, from partners to knowledge workers, through cutting-edge GenAI tools, personalized learning paths, and mandatory Trusted AI training.

Transparency is also key, as it builds trust, and trust drives adoption. Our patent-pending AI system cards help demystify the “black box,” providing clear explanations of our AI systems’ capabilities, limitations, and performance benchmarks.

It’s not about pushing AI faster – it’s about bringing our people along on this transformative journey to unlock their potential and accelerate value for our clients.

Why must organizations prioritize responsible AI governance and systems in their business?

Our recent KPMG GenAI Consumer Trust survey revealed that 78% of US consumers believe companies using AI must ensure responsible practices, yet fewer than half actually trust them to do so. That gap is an opportunity for businesses to lead the charge in building trust around AI adoption. Successful leaders will implement guardrails, not speed bumps, on the AI highway, paving the way for responsible AI adoption.

The top five steps businesses can take to drive trust in AI include:

1. Define a Governance Framework – Establish a framework and ethical principles to guide responsible use.

2. Mobilize an Oversight Committee – Assemble a diverse council to operationalize principles into guidelines, controls, and tooling.

3. Implement an AI Inventory – Ensure traceability, transparency, and accountability.

4. Launch Mandatory Training – Empower professionals on ethical AI development and use.

5. Publish AI System Cards – Continuously monitor, test, and demonstrate alignment with your Trusted AI framework and principles.

At KPMG, our Trusted AI approach, rooted in being values-led, human-centric, and trustworthy, guides our commitment to using AI responsibly and ethically. Read KPMG’s full Trusted AI statement here.

What’s a lesson you’ve learned that has shaped your work in responsible AI? 

One key lesson is the importance of being action-oriented amid regulatory uncertainty around AI. With no federal legislation and no unified global risk-management standard, organizations can get stuck in analysis paralysis, debating what should or shouldn’t be done. Meanwhile, the technology continues evolving at breakneck speed!

While unknowns exist, there are emerging guideposts we can mobilize around now. We know a risk-based approach to governing AI—similar to the EU AI Act—is expected. The National Institute of Standards and Technology (NIST) has published a comparable framework, the AI Risk Management Framework, that is likely to become the U.S. standard, given NIST’s leading role in implementing White House directives. Additionally, there is consensus around core ethical principles like fairness, accountability and transparency that can anchor responsible AI programs.

So, my advice is to lean on what we know, be predisposed to action, and continuously evolve as the technology and regulations mature. The key is maintaining momentum.

Why is this work important to you?

Trusted AI sits at the intersection of my passion for technology and DEI. My driving force is the reality that the AI solutions we build today will either break down or reinforce society’s barriers tomorrow. When I see AI systems struggling to recognize diverse faces or perpetuating harmful gender stereotypes, it strengthens my resolve. I’m driven by two goals: ensuring AI systems perform equitably, while actively creating pathways for women and underrepresented groups in AI. Whether I’m establishing mentorship programs, implementing fairness metrics, or building diverse teams, each step advances equity.

About KPMG

KPMG LLP is the U.S. firm of the KPMG global organization of independent professional services firms providing audit, tax and advisory services. The firm’s multi-disciplinary approach and deep, practical industry knowledge help clients meet challenges and respond to opportunities.

AI is opening up entirely new avenues for improving experiences, delivering new value streams and transforming business models. The firm’s deep industry expertise and advanced technical skills help our people and clients harness the power of AI – from strategy and design through implementation and ongoing operations.

About the Responsible AI Institute

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, Genpact, KPMG, Kennedys, Ally, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact

Nicole McCaffrey

Head of Strategy & Marketing 

Responsible AI Institute

nicole@responsible.ai 

+1 (440) 785-3588

