RAI Institute is published in the Journal of European Public Policy!

In a new article published in the Journal of European Public Policy’s special issue on The Governance of AI, Graeme Auld, Ashley Casovan, Amanda Clarke, and Benjamin Faveri explore how various private and public actors interact with one another and use private governance initiatives, such as certification programs, during the development of ethical AI standards. The article identifies three distinct pathways for public-private interaction: (1) oppose and fend off states; (2) engage and push states; and (3) lead and inspire states.

In pathway 1, corporations and civil society actors turn to private governance to oppose and fend off state governance interventions. In pathway 2, corporations and civil society actors turn to private governance to engage and push states to institutionalize specific governance rules. In pathway 3, corporations and civil society actors use private governance to lead and inspire states to redefine regulatory possibilities and provide blueprints for future governance reforms.

The article finds that pathway 2 (engage and push states) dominates the current ethical AI standards development space. Examples of pathway 2 include the Responsible AI Institute’s (RAII) convening of AI experts from various sectors and regions, in conjunction with the World Economic Forum’s Global AI Action Alliance, to develop a responsible AI certification program, as well as its continued role in international AI standards development through the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE). Other examples of this pathway include the Partnership on AI’s role in collaboratively fostering AI research, guidelines, principles, and best AI governance practices, and Amnesty International and Access Now’s Toronto Declaration, which pushes governments and private actors to address AI risks together through private governance standards.

While pathway 2 currently dominates the ethical AI standards development space, three sources of instability could shift dominance to pathway 1, pathway 3, or an entirely new pathway: (a) increasing AI governance demands could motivate corporations and civil society actors to oppose and fend off costly state interventions viewed as incapable of addressing AI systems’ risks; (b) focusing events around AI failures could raise the salience of ongoing private governance experiments and redefine the scope and focus of AI governance efforts; and (c) localization effects could multiply new and varied AI standards as sectors and professionals begin addressing AI governance in their specific use cases and as states perceive a misalignment between their interests and the focus of global AI standards.

Many of the organizations RAI Institute has worked with have indicated a preference for industry-specific AI governance mechanisms over more general ones, since general mechanisms often fail to address industry-specific regulatory needs. This preference aligns with two of the instability sources described above, rising AI governance demands and localization effects, and could help shift which pathway dominates.

This article provides useful context for the continuing development of RAI Institute’s certification program. Click here to read the article.
