Leaders in Responsible AI: A Member’s Story

Sarah Curtis


Head of Product, Responsible AI

Booz Allen Hamilton

What does your job entail within your organization?

I lead the creation of tools, frameworks, and guidelines to help clients think through the value and risks of their AI systems. My day-to-day is split between product strategy, product development, and collaboration with key partners (like the Responsible AI Institute) to close the gap between client needs and suitable solutions.

Through market research and communications with cross-functional teams, I also curate and prioritize which solutions should be standardized and taken to market.

What do you think are the biggest challenges your organization faces related to integrating AI into your business?

The federal agencies we serve have diverse missions and must comply with regulations on tight timelines. We are also seeing a broad spectrum of organizational maturity, both in deploying AI systems and in setting up responsible AI best practices. Because of this, our solutions need to be lightweight and standardized enough to make an immediate impact in most client environments, while remaining flexible.

Why must organizations prioritize responsible AI governance and systems in their business?

Our team defines Responsible AI as the convergence of ethics, governance, and safety. Without these three things, the long-term success of AI initiatives in both the public and private sectors doesn’t seem feasible. By embedding responsible AI best practices into the system lifecycle, all AI can become responsible AI.

To successfully innovate, the government must safeguard and maintain the public’s trust in its AI systems. In my experience, designing products while simultaneously thinking through the human impacts tends to generate better outcomes.

What’s a lesson you’ve learned that has shaped your work in responsible AI? Why is this work important to you?

Paying to get your oil changed is cheaper than replacing your whole engine. Elements of Responsible AI such as risk management may be perceived as routine maintenance, which isn’t necessarily exciting and is, therefore, often overlooked.

Consideration of responsible AI practices is paramount because it’s easier to prevent problems before they arise than it is to pick up the pieces after the fact. Placing ethics at the top of the funnel, taking a holistic view, and applying an interdisciplinary approach to AI development will make a difference in the long-term impact of AI tools.

About Booz Allen Hamilton

Booz Allen Hamilton is a 110-year-old strategic consultancy with in-depth expertise in AI and cybersecurity. We leverage the perspectives of our diverse talent to deliver impactful solutions for the nation’s most critical civil, defense, and national security priorities. Our mission is to Empower People to Change the World®.

About Responsible AI Institute (RAI Institute)

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Become a Member - Responsible AI Institute

Media Contact

For all media inquiries, please contact Nicole McCaffrey, Head of Marketing & Engagement, at [email protected].

Social Media

LinkedIn

Twitter

Slack
