Evi Fuelle, Global Policy Director, Credo AI
July 2024
What does your job entail within your organization?
I lead our Policy Team, which on a broad level includes developing Credo AI’s policy philosophy and working with our product and research teams to help translate AI policies into practice. As an AI governance platform, we help companies in financial services, insurance, healthcare, and other sectors develop Responsible AI at scale. A large part of my job involves bringing insights from both the AI research community and our enterprise customers to policymakers, to show them “what works and what doesn’t” in designing and deploying AI responsibly. Conversely, Credo AI’s policy team helps enterprises understand which risks policymakers are concerned about, and how to effectively identify and mitigate those risks with enterprise-wide policies, processes, and standards.
What do you think are the biggest challenges your organization faces related to integrating AI into your business?
One of the biggest challenges we have been working on since Credo AI’s founding in 2020 is defining “what good looks like” when designing and deploying AI systems that are responsible, trustworthy, and transparent. We continue to work with our customers on implementing industry best practices, AI governance standards, and regulatory requirements for Responsible AI systems to help address this challenge.
Building on academic research, Credo AI’s domain expertise, and industry frameworks from leaders like MITRE and NIST, the interdisciplinary team at Credo AI has developed the market’s most extensive library of AI-specific Risk Scenarios and Controls within our platform. This library is designed to anticipate and mitigate negative incidents, enabling the development and deployment of safe and controlled AI systems with unparalleled speed to governance.
We are incredibly proud of this work, which helps enterprises build resilient AI governance structures that enable them to operate in multiple markets and jurisdictions while continuing to innovate responsibly with AI at scale.
Why must organizations prioritize responsible AI governance and systems in their business?
Trust is a competitive differentiator for global enterprises: organizations that invest in AI governance can build trust in the systems they design and deploy in every market in which they operate. Done right, AI governance translates into a more trustworthy brand. Corporate leaders should be building a framework for assessing and managing their AI risks, so that they can continue to innovate with certainty.
Credo AI was recently named to Fast Company’s Annual List of the World’s Most Innovative Companies of 2024. We are incredibly proud of this award for many reasons, but it also validates our conviction in where the market is headed, and reinvigorates our commitment to empower organizations to responsibly build, adopt, procure and use AI at scale.
What’s a lesson you’ve learned that has shaped your work in responsible AI? Why is this work important to you?
Never stop learning! AI continues to be an emerging area of research and an ever-evolving technology, with a myriad of regulatory approaches in different jurisdictions.
Effective AI governance requires us all to continually educate ourselves in order to understand “what is in the realm of the possible,” whether with regard to the capability of large and powerful AI models, or the flexibility and nimbleness institutional governance structures need to keep pace with a rapidly evolving technology. It is especially important that we continue to educate ourselves about the impact of AI on our society as a whole, not just our own individual experiences with it.
This work is important to me because it will determine the type of society in which our future generations live. We owe it to them to ensure that not only will they have access to this incredible technology, but that AI is in service to humanity.
About Credo AI
Credo AI is on a mission to empower organizations to responsibly build, adopt, procure and use AI at scale. Credo AI’s pioneering AI Governance, Risk Management and Compliance platform helps organizations measure, monitor and manage AI risks, while ensuring compliance with emerging global regulations and standards, such as the EU AI Act and NIST and ISO frameworks. Credo AI keeps humans in control of AI, for better business and society. Founded in 2020, Credo AI has been recognized as a CB Insights AI 100 company, a CB Insights Most Promising Startup, a Technology Pioneer by the World Economic Forum, one of Fast Company’s Next Big Things in Tech, and a top Intelligent App 40 by Madrona, Goldman Sachs, Microsoft and Pitchbook. To learn more, visit credo.ai or follow us on LinkedIn.
About Responsible AI Institute (RAI Institute)
Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
Nicole McCaffrey
Head of Marketing, RAI Institute
+1 (440) 785-3588
Follow RAI Institute on Social Media
X (formerly Twitter)