3 Questions to Ask When Buying AI to Assess Responsibility and Trustworthiness

Navigating AI suppliers is a complex challenge for today’s procuring businesses. The vast potential of AI technology is paired with perils and risks that can be obscured behind opaque, “black box” technical explanations.

Yet, as both supply and demand for AI solutions grow, it’s critical for businesses to be able to adopt reliable, credible, and ethical AI solutions.

To help you do so, this guide sheds light on key conversations you should have as you assess AI vendors, drawing on responsible AI (RAI) standards and best practices.

Responsible AI Procurement – Why it Matters

In the digital age, selecting the right AI supplier can be challenging, as each AI option has its own potential rewards and risks. Furthermore, the market size can be overwhelming. Valued at $136 billion in 2022, the global AI market is projected to expand at a compound annual growth rate (CAGR) of 37.3% from 2023 to 2030.

Procurement of AI that isn’t built with a responsible framework comes with serious costs.

The wrong AI partnership can lead to financial, reputational, and societal damages from lawsuits, technical debt, and lost competitive advantage. And given that AI reflects its data and developers’ own biases and objectives, there’s a strong need for accountability, especially in today’s Generative AI landscape.

The Responsible AI Institute (RAI Institute) assists businesses in making responsible AI choices and developing their RAI organizational maturity. This guide presents crucial questions for evaluating AI vendors, drawing from major AI governance standards and frameworks like the upcoming ISO AI Management Standard 42001, EU AI Act, and NIST AI Risk Management Framework. Use this to ensure your AI solutions are responsible, future-ready, and mindful of societal implications.

Questions to Ask When Evaluating AI Suppliers

We’ve distilled many considerations into three key questions to evaluate whether AI suppliers meet responsible AI guidelines and forthcoming regulations. With the AI market still developing, businesses need to identify critical AI practices that align with their goals. Incorporate these questions into your Responsible AI Procurement strategy, making them part of discussions and procurement documentation such as RFPs, RFIs, SoWs, and NDAs.

For a more in-depth understanding of AI ethics and regulations, the RAI Institute provides comprehensive AI vendor reviews based on our RAI Implementation Framework, consolidating hundreds of AI standards, best practices, and legal requirements.

1. Responsibility at the Organizational Level: RAI Policies & Processes

Does your business integrate AI risk management, ethics, and/or responsibility in its protocols and documentation? Is there a regularly updated AI policy?

RAI Gold Standard Answer:

  • Comprehensive grasp of AI risk management, ethics, and responsibility, covering values such as human oversight, transparency, equity, validity, accountability, safety, privacy, explainability, and reliability
  • AI models that prioritize transparency, explainability, and neutrality
  • Recurring staff RAI training on subjects like data operations and fairness
  • AI policy updated to reflect modern RAI and DEI standards
  • Engagement with RAI experts and periodic third-party audits
  • Transparent channels for AI-related user feedback, errors, and concerns

Insufficient Answer:

  • Non-specific responses
  • Lack of documented model risk
  • Ignoring standards or referencing outdated ones
  • No mention of RAI training

Follow-up questions:

  • Is there a distinct RAI development/use policy?
  • How does the AI policy align with other organization policies?
  • Can you describe recent AI policy changes made as a result of feedback?

2. Responsibility at the System Level: AI Development & Management

How do you uphold responsibility in AI systems across their lifecycle, underscoring human oversight, transparency, and reliability? What processes and documentation are part of this strategy?

RAI Gold Standard Answer:

  • Detailed RAI processes spanning the AI system’s lifecycle with governance gates, resourcing, and documentation
  • Human-in-the-loop protocols for key decisions
  • Regular bias audits and debiasing initiatives
  • Explicit roles with an emphasis on ethics
  • Continuous AI accuracy monitoring
  • Adherence to data protection norms and frequent security audits

Insufficient Answer:

  • Piecemeal RAI processes lacking independent reviews, stakeholder engagement, alignment with RAI objectives, etc.
  • Generalized claims without details
  • Neglecting core AI principles
  • Over-reliance on AI, lack of human checks
  • Absence of AI system bias identification

Follow-up questions:

  • How does the organization assign and document AI roles and resources?
  • What guidelines govern AI system operations?
  • How are trustworthy AI objectives and processes outlined?
  • How is AI data managed from collection to preparation?
  • Is there a mechanism to assess AI’s societal and individual impacts?

3. Responsibility in Vendor Handoff: AI Usage and Responsibility Distribution

How is the AI documentation handover managed to ensure responsible AI system use by the client?

RAI Gold Standard Answer:

  • Detailed client onboarding addressing AI system nuances
  • Client-specific, adaptable documentation
  • Updated documentation with clear version tracking
  • Digital access for clients to current documentation
  • Role-based training modules
  • Post-handoff support channels for client assistance

Insufficient Answer:

  • One-size-fits-all documentation approach
  • No recent documentation updates
  • Sparse post-transfer support

Follow-up questions:

  • How does the company ensure AI documentation clarity?
  • How are AI-related incidents relayed?
  • Are there agreements for ethical AI development with suppliers?

References and Additional Resources for Buying AI

These questions stem from five years of expertise at the RAI Institute, drawing on diverse professionals in our community and hundreds of RAI standards. They align with the leading requirements of ISO AIMS 42001, NIST’s AI Risk Management Framework, and the EU AI Act, widely regarded as the preeminent frameworks in this area. ISO AIMS focuses on AI management best practices, NIST emphasizes trustworthy AI development, and the EU AI Act ensures AI aligns with user safety and rights within the European Union.

For more information about the RAI Institute Implementation Framework, check out our whitepaper and guidebook.


How We Can Help

In the bustling AI supplier landscape, expert guidance is essential. The Responsible AI Institute, a premier independent nonprofit, helps businesses navigate AI adoption responsibly and effectively.

We offer a Vendor Assessment tailored to your needs, based on our Responsible AI Implementation Framework. Our evaluations and procurement advice mirror current best practices, standards, and regulations underpinning responsible AI procurement so you can simplify your AI decisions. Looking to enhance your AI strategy? Join us today.

About Us

The RAI Institute is focused on providing tools for organizations and AI practitioners to build, buy, and supply safe and trusted AI systems, including generative AI systems. Our offerings provide assurance that AI systems are aligned with existing and emerging internal policies, regulations, laws, best practices, and standards for the responsible use of technology.

By promoting responsible AI practices, we can minimize the risks of this exciting technology and ensure that the benefits of generative AI are fully harnessed and shared by all.

Media Contact

For all media inquiries, please contact Nicole McCaffrey, Head of Marketing & Engagement, +1 440.785.3588.
