Responsible AI Institute May 15, 2024 Webinar Recap
Robust procurement practices have emerged as a crucial frontline in fostering responsible AI development and deployment. As part of our May virtual event, “How Procurement Can Shape Responsible AI,” we hosted an expert panel spotlighting both the challenges and critical importance of procurement rigor in upholding AI governance.
The panel featured a variety of perspectives:
Dr. Cari Miller, Co-Founder of the AI Procurement Lab, provided insights on how critical procurement will be as AI continues to advance.
Yukun Zhang, Director of AI Governance & Responsible AI at ATB Financial, shared her thoughts on procurement at the enterprise level.
Harry Chambers, Senior Privacy Analyst at OneTrust, discussed how emerging regulation will impact AI procurement processes moving forward.
Hadassah Drukarch, Director of Policy & Delivery at Responsible AI Institute, moderated the discussion.
Upholding Responsible AI with Rigorous Risk Assessment
Panelists began the conversation by outlining significant hurdles facing procurement teams. From limited vendor availability and skills gaps to a lack of transparency from AI providers, organizations must overcome these obstacles while remaining compliant with emerging regulation. As Miller noted, “Legislators can’t keep up as fast as procurement needs to” in AI’s rapidly evolving landscape.
To develop trustworthy AI systems through procurement, panelists emphasized rigorous risk assessment as the crucial first step. Miller outlined a foundational risk management framework: “Establish legitimate business needs first. During vendor solicitation, interrogate user vulnerabilities and risks. Assess ethics and risk practices, and then create a checklist of required mitigations to bake into contracts.”
At ATB Financial, Zhang described a procurement process that balances benefits against risks. “For high-risk AI, we have stringent assessments scrutinizing ethics, data practices, monitoring performance, and more.” Zhang emphasized the importance of clear guidance and implementation, noting that comprehensive due diligence is pivotal.
Transparency Through Binding Obligations
Panelists agreed that transparency surrounding AI procurement is both an ethical mandate and a pragmatic necessity. Chambers advocated embedding this transparency in contractual requirements that clearly define responsibilities between AI providers and buyers. “Having a transparent relationship with the vendor is something that [organizations] should actively want to seek to distinguish themselves within the market,” Chambers stated. Disclosing testing methods, intended uses, and limitations creates a culture of ethical diligence and helps avoid bias and other potential risks in AI models.
Chambers also cited emerging laws like the EU AI Act and Colorado’s new AI regulations as mandating such disclosures. Zhang added, “Cross-functional collaboration engaging legal, ethical, and technical experts enables fuller transparency assessment.”
A Holistic Governance Imperative Through Collective Action
While pockets of AI governance frameworks have emerged, the panel underscored glaring gaps in robust, enforceable standards. Miller raised alarms about generative AI’s intensifying risks of abuse and disinformation. That said, an overarching tone of optimism persisted. If stakeholders align around stringent collective action to uphold ethics as AI accelerates, they can ensure that AI is safe, trustworthy, and rights-protecting before deployment. Panelists agreed that this is both a moral and a pragmatic imperative.
Serving as discerning gatekeepers, procurement teams bear immense responsibility in shaping responsible AI: through rigorous due diligence and risk assessment, by binding vendors to transparency and accountability, and by engaging diverse stakeholders in cross-functional collaboration. Collective action prioritizing human considerations over technological urgency is vital to cultivating AI’s immense potential while mitigating its most serious risks. Values-driven procurement practices provide a pivotal pathway for organizations to develop AI responsibly.
Supporting You on Your RAI Journey
Looking to stay informed about regulatory updates and learn how your organization can proactively prepare for upcoming AI regulation? RAI Institute invites new members to join in driving innovation and advancing responsible AI. Collaborating with esteemed organizations, RAI Institute develops practical approaches to mitigating AI-related risks and fosters the growth of responsible AI practices through its AI assessments and certification program.
About Responsible AI Institute (RAI Institute)
Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
Nicole McCaffrey
Head of Marketing, Responsible AI Institute
nicole@responsible.ai
+1 (440) 785-3588
Follow Responsible AI Institute on Social Media