
TAKE THE PLEDGE TO PROCURE RESPONSIBLY
The Responsible Artificial Intelligence Institute (RAI Institute) is calling on all organizations to commit to buying and supplying responsible AI systems that, when feasible, have been audited and certified by an accredited third party.
In collaboration with working groups representing industry, civil society, and academia, the RAI Institute has drafted the “Responsible AI Supplier Clause” for pledge signatories to adapt to their governance structure and incorporate into their Master Service Agreement or equivalent procurement vehicles.
The proposed text is intended to help organizations think about gaps in their AI governance. Signatories are not required to use the text as is; rather, they are encouraged to adapt it to their own style, language, and objectives. The RAI Institute also commits to refreshing the text regularly as AI regulations, standards, and best practices evolve.
We urge innovative organizations to pledge to procure responsibly.
We encourage you to lead by example and make your pledge known.
We also recognize that the field of AI is moving rapidly. If you think we have missed any elements in our draft text, or if you have changes to propose, please let us know in the comments on your pledge form.
The RAI Institute respects privacy and will not make the names of signatories public without consent.
CASE STUDY
Option 1 - NYC
The City of New York will soon require organizations in the region to conduct a bias audit of any automated employment tool before use. Local Law 144 will require audit results to be made publicly available, and organizations using AI in hiring will need to notify all candidates and employees. Failure to comply will result in civil penalties.
Automated tools such as applicant tracking systems, used to rapidly filter resumes, are ubiquitous in modern hiring processes. They can save organizations time and money. However, when unchecked, these systems can exclude candidates through discriminatory algorithms and fail to recognize potential or creativity.
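To make the idea of a bias audit more concrete, the minimal Python sketch below computes per-group selection rates and impact ratios from a hypothetical log of candidate outcomes. The field names, group labels, and data are illustrative assumptions only; they are not language taken from Local Law 144 or from the RAI Institute's clause, and a real audit would follow the categories and methodology the law prescribes.

```python
from collections import defaultdict

# Hypothetical candidate-outcome records; field names and groups are illustrative.
records = [
    {"category": "Group A", "selected": True},
    {"category": "Group A", "selected": False},
    {"category": "Group B", "selected": True},
    {"category": "Group B", "selected": True},
    {"category": "Group C", "selected": False},
    {"category": "Group C", "selected": True},
]

def impact_ratios(records):
    """Selection rate per category, divided by the highest selection rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["category"]] += 1
        selected[r["category"]] += int(r["selected"])
    rates = {c: selected[c] / totals[c] for c in totals}
    best = max(rates.values())
    return {c: rate / best for c, rate in rates.items()}

for category, ratio in impact_ratios(records).items():
    # A ratio well below 1.0 flags a group selected far less often than the
    # most-selected group and warrants closer review.
    print(f"{category}: impact ratio = {ratio:.2f}")
```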
All AI is subject to risk. With regulation at our doorstep and the vision for a safer and more secure world, we can mitigate this risk through teamwork. Using the RAI Institute’s “Responsible AI Supplier Clause” as a foundation, organizations can commit to bolstering and streamlining their own procurement processes.
Option 2 - DoD
The Joint Artificial Intelligence Center (JAIC) within the US Department of Defense (DoD) put their principles into practice and proactively identified the need for more rigorous oversight of risk in their data and AI acquisition contracts.
The RAI Institute worked directly with the DoD and engaged external parties to develop and implement a new procurement vehicle tailored to AI systems. Together we mapped the harms of AI in the defense context, designed role-based training for Pentagon staff, built a public portal for AI procurement, and standardized contract language.
As a result, the DoD’s AI procurement process became faster, more supplier-friendly, more cost-effective, and more transparent.

SIGNATORIES
TAKE PRINCIPLES INTO PRACTICE
OVERVIEW
There is no denying that artificial intelligence (AI) is as much a business as it is a breakthrough. Many organizations procure design elements from others to incorporate into their own AI systems.
However, these partnerships can lack a shared understanding of responsible AI, compromising governance and harming not only business operations on both ends but also the users who place their trust in AI services.
Responsible AI is a joint effort—suppliers, buyers, designers, and regulators of AI systems working together. We can strengthen the AI we share by holding ourselves accountable, encouraging each other to uphold best practices, and agreeing on standards of certification.
Some jurisdictions are already requiring that specific AI systems be audited and made public prior to use. Organizations should not wait for their governments to require the same before living up to their principles and implementing sound review practices.