Understanding the UK’s White Paper on AI Regulation

By Amanda Lawson and Alex Bilodeau-Bouchard

On March 29, 2023, the UK’s Department for Science, Innovation and Technology (DSIT) published a white paper, “A pro-innovation approach to AI regulation,” which sets out the country’s plans for regulating AI systems. The paper focuses on the use of AI rather than on the technology itself, acknowledging the challenges of a horizontal, one-size-fits-all regulatory framework.

The paper empowers domain-specific regulators to take the lead in tailoring the implementation of flexible rules and public expectations, and it introduces the idea of a regulatory sandbox in which foundational AI businesses can test how the rules apply to their products before going to market.

Importantly, the paper also highlights that, underpinning this context-driven approach, “tools for trustworthy AI including assurance techniques and technical standards will play a critical role in enabling the responsible adoption of AI and supporting the proposed regulatory framework.” The RAI Institute is closely tracking this emerging field and is equipped to deliver a range of AI assurance services and expertise to foster responsible innovation. To align our assessments and certification program requirements with regulatory objectives, we follow emerging AI laws and guidance in our Regulatory Tracker. It is great to see the UK highlight tools for trustworthy AI and shine a pragmatic light on the way forward.

Among the tools introduced in the paper is a set of five guiding, non-statutory principles for AI regulators: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. These principles mirror the RAI Institute’s own implementation framework, positioning responsible AI as the foundation for innovation and reinforcing our existing collaborations to support interoperable measures.

Along with the principles, a number of key deliverables and investments were announced in the paper:

  • An open consultation period from 29 March 2023 to 21 June 2023 for individuals and businesses to share their feedback on the white paper;
  • An AI Regulation Roadmap, to be published in parallel with the UK’s response to the consultation, which will include details on the implementation of principles and the plan for a £2-million pilot regulatory sandbox or testbed;
  • Practical guidance on AI regulation, including risk assessment templates, to be issued over the next 12 months; and
  • A portfolio of AI assurance techniques, to be launched in spring 2023, that will demonstrate how these techniques are already being applied to real-world use cases.

The RAI Institute is especially interested in Part Four of the paper, which relates directly to our standards development and conformity assessments in its discussion of tools for trustworthy AI to support compliance within industry and civil society. The paper defines assurance techniques as “impact assessment, audit, and performance testing along with formal verification methods,” to be complemented by existing technical standards that help regulators develop “sector-specific approaches to AI regulation by providing common benchmarks and practical guidance to organizations.” We are keenly awaiting the release of assurance techniques and assessment templates and are reflecting updates on the UK’s strategy in our regulatory tracker.

We are excited to see assessment tools for responsible AI brought to the forefront of governance efforts. The paper outlines a three-tiered, “layered” approach to encouraging the sustainable adoption of responsible practices: sector-agnostic standards to support cross-sectoral principles, followed by more tailored standards that address contextual issues such as bias (e.g., ISO/IEC TR 24027:2021) and transparency (e.g., IEEE 7001-2021), and ultimately strengthened by sector-specific technical standards that support compliance with eventual regulation and performance measures.

The paper suggests that existing oversight bodies, such as the UK’s Financial Conduct Authority (FCA), could promote the use of responsible AI tools in their own fields. The RAI Institute recently convened policy leaders, standards developers, industry experts, and researchers from the UK and Canada to compare current AI governance efforts in each jurisdiction. Held as a Regulatory Roundtable, the event was hosted by the FCA with support from the UK’s Foreign, Commonwealth and Development Office. With the objective of building a harmonized strategy for AI in financial services and across industries, we shared ideas, agreed on areas for improvement, and planned future discussions. We will continue to engage in similar collaborations with authorities in other sectors to bring AI principles into practice in a meaningful, measured manner.

The paper outlines clear next steps. Within six to twelve months, DSIT aims to “publish proposals for the design of a central M&E [monitoring and evaluation] framework including identified metrics, data sources, and any identified thresholds or triggers for further intervention or iteration of the framework.” In the meantime, businesses buying, selling, designing, or working with AI systems have ample information to explore the first layer of AI governance and begin applying general principles to their operations.

As we continue to monitor developments in the UK and across the globe, we encourage businesses and individuals to reach out to the RAI Institute for guidance and to consider joining our growing community of members and leaders.
