RAI Institute Published in the Journal of European Public Policy!

In a new article published in the Journal of European Public Policy’s special issue on The Governance of AI, Graeme Auld, Ashley Casovan, Amanda Clarke, and Benjamin Faveri explore how various private and public actors interact with and use private governance initiatives, such as certification programs, during the development of ethical AI standards. The article identifies three distinct pathways for public-private interaction: (1) oppose and fend off states; (2) engage and push states; and (3) lead and inspire states.

In pathway 1, corporations and civil society actors turn to private governance to oppose and fend off state interventions. In pathway 2, they turn to private governance to engage and push states to institutionalize specific governance rules. In pathway 3, they use private governance to lead and inspire states, redefining regulatory possibilities and providing blueprints for future governance reforms.

The article finds that pathway 2 (engage and push states) dominates the current ethical AI standards development space. Examples of pathway 2 include the Responsible AI Institute’s (RAII) convening of AI experts from various sectors and regions, in conjunction with the World Economic Forum’s Global AI Action Alliance, to develop a responsible AI certification program, as well as its continued role in international AI standards development through the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE). Other examples of this pathway include the Partnership on AI’s role in collaboratively fostering AI research, guidelines, principles, and best AI governance practices, and Amnesty International and Access Now’s Toronto Declaration, which pushes governments and private actors to address AI risks together through private governance standards.

While pathway 2 currently dominates the ethical AI standards development space, three sources of instability could shift the dominant pathway to pathway 1 or 3, or to an entirely new pathway: (a) growing AI governance demands could motivate corporations and civil society actors to oppose and fend off costly state interventions viewed as incapable of addressing AI systems’ risks; (b) focusing events around AI failures could raise the salience of ongoing private governance experiments and redefine the scope and focus of AI governance efforts; and (c) localization effects could multiply new and varied AI standards as sectors and professions begin addressing AI governance in their specific use cases and as states perceive a misalignment between their interests and the focus of global AI standards.

Many of the organizations that RAI Institute has worked with have indicated a desire for industry-specific AI governance mechanisms over more general ones, since general AI governance mechanisms often fail to address industry-specific regulatory needs. This desire aligns with the rising AI governance demands and localization effects described above, and could help shift which pathway dominates.

This article provides useful context for the continuing development of RAI Institute’s certification program. Click here to read the article.

