
Leaders in Responsible AI: A Member’s Story

Kaytlin Henderson, AI Strategy Implementation Leader

Dow

October 2024

What does your job entail within your organization?

As the AI Strategy Implementation Leader, I lead a team in Dow’s AI Hub, which is part of our Enterprise Data & Analytics (ED&A) organization. I am responsible for shaping and implementing the AI strategy for Dow, which includes key deliverables around a Responsible AI framework, Generative AI capabilities, and the monetization of Digital and AI solutions. Dow follows a hub-and-spoke model for Data and Analytics, which balances centralized services – primarily ED&A – with autonomy for the Dow businesses and functions we support – the spokes. We carry this model over to AI: the AI Hub sets best practices around AI and provides consulting and analytics services as needed to enable the spoke businesses and functions across the company. With AI implementation accelerating across Dow – not to mention the world! – a major AI Hub service I am building out is the governance and systems around Responsible AI for the company. This is a highly collaborative effort that engages leaders from across the company.

What do you think are the biggest challenges your organization faces related to integrating AI into your business?

One of the significant challenges in adopting AI solutions is that users can struggle to understand what an AI solution is doing and how it arrives at its conclusion. This lack of understanding can lead to mistrust and underutilization of AI tools. Dow is taking steps to address this through a Data & Analytics (D&A) Literacy program. A program like this increases adoption by educating users on the fundamentals of AI as well as topics like data quality and data governance. As of the end of June 2024, about 7,000 Dow employees, or approximately 20 percent of our workforce, had taken a D&A Literacy program. When we tie in our Responsible AI Principles, users can see that there is a human in the loop. AI is augmenting jobs, not replacing them. These efforts build the trust in AI needed for truly impactful and valuable outcomes. By demystifying AI processes and enhancing data literacy, these programs are empowering people at Dow to confidently use AI tools, interpret their outputs accurately, and make informed decisions.

Why must organizations prioritize responsible AI governance and systems in their business?

At Dow, adhering to responsible AI principles aligns closely with our core values outlined in the Dow Code of Conduct, which include Respect for People, Integrity, and Protecting our Planet. Prioritizing responsible AI governance builds trust in our AI solutions, which is fundamental to our adoption of AI. Many studies have come out showing that adhering to responsible AI principles enhances innovation and accuracy, leading to better, more reliable results.

I’ll highlight Dow’s Responsible AI Principles. Each has a clear motivation to prioritize responsible AI governance. By addressing biases in algorithms and data, we promote fair and inclusive AI solutions, aligning with ethical standards and broadening the valuable impact of AI. Being transparent and accountable in AI systems increases user understanding and trust, while mechanisms for human oversight and control ensure that we can intervene to stop models that have veered off course, optimizing outcomes.

Dow has had a robust data security and privacy system for many years, and we continue that legacy for new AI solutions to protect sensitive information, maintain compliance with both Dow’s rules and external regulations, and safeguard against breaches. Prioritizing safe and reliable AI systems prevents harm and human endangerment, reflecting our organization’s commitment to a safe and secure operational environment. Embedding these principles into AI governance helps Dow maintain a strong ethical foundation as AI use drives us towards our ambition to be the most innovative, customer-centric, inclusive, and sustainable materials science company in the world.

What’s a lesson you’ve learned that has shaped your work in responsible AI? Why is this work important to you?

I am a chemical engineer with a background in data and analytics, including cloud development, for digital innovation in manufacturing. Dow has been creating and getting value from AI solutions for more than two decades, and transparency has been critical to the adoption of these AI tools and results. I see the tremendous value that these solutions have brought. I am inspired by the use cases we are working on today and the possibilities long-term. And I am proud of Dow’s commitment to do this in a responsible fashion. Dow also has a very robust cybersecurity program, which has long evaluated risk in vendor tools and software development and has expanded to include cloud development as these capabilities have grown. I mention our history and our principles because these existing capabilities map directly to fulfilling our Responsible AI principles.

I came into my role as AI Strategy Implementation Leader as Dow’s Responsible AI Steering Team was publishing the Responsible AI principles I described earlier. What reassures me, and what I share across Dow, is how we were living these Responsible AI principles long before formally recognizing this as Responsible AI.

About Dow

Dow is one of the world’s leading materials science companies, serving customers in high-growth markets such as packaging, infrastructure, mobility and consumer applications. Our global breadth, asset integration and scale, focused innovation, leading business positions and commitment to sustainability enable us to achieve profitable growth and help deliver a sustainable future. We operate manufacturing sites in 31 countries and employ approximately 35,900 people. Dow delivered sales of approximately $45 billion in 2023. Learn more about us and our ambition to be the most innovative, customer-centric, inclusive and sustainable materials science company in the world by visiting www.dow.com.

About Responsible AI Institute (RAI Institute)

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.


Media Contact

Nicole McCaffrey

Head of Strategy & Marketing, RAI Institute

nicole@responsible.ai 

+1 (440) 785-3588
