“Opening Black Boxes: Addressing Legal Barriers to Public Interest Algorithmic Auditing”

This article was written by Consumer Reports. It summarizes their new report on addressing legal barriers to algorithmic audits.

TLDR

  • A new report from Consumer Reports finds that public interest researchers have a major role to play in identifying algorithmic harms.
  • In the report, Nandita Sampath, an AI Policy Analyst at Consumer Reports, describes how some aspects of existing laws can hinder public interest auditors conducting investigatory research on algorithms.
  • CR offers policy recommendations to remove these legal barriers, including providing auditors with safe harbors in certain cases and creating new incentives for organizations to be more transparent about how their algorithms work.

Consumer Reports (CR) has released a new report called “Opening Black Boxes: Addressing Legal Barriers to Public Interest Algorithmic Auditing.” In the report, we outline how public interest auditors attempting to uncover algorithmic harms can run into legal roadblocks that hinder this kind of research.

Given the black-box nature of algorithms and the few US regulations requiring transparency, testing, or auditing, we argue that public interest researchers have a major role to play in identifying algorithmic harms and notifying the public and regulators. The report defines “public interest auditing” as investigatory research into an algorithm intended to discover and inform the public about potential harms caused by that algorithm. Such audits can be performed by academics, public interest groups, journalists, or simply concerned citizens. However, these investigators do not always have access to the information they need to perform effective audits.

Examples of public interest auditing of algorithms include the NYU Ad Observatory’s research into Facebook’s political ad targeting and ProPublica’s investigation into COMPAS, a tool whose developers claim it can predict a criminal defendant’s likelihood of recidivism. These research groups were able to identify bias issues and other potential harms, including misleading claims by the companies developing and using the algorithms.

In particular, we identify four auditing techniques commonly used by public interest auditors: code audits, crowdsourced audits, scraping, and sock puppet audits. Each technique comes with pros and cons, and researchers often choose the type of audit they carry out based on the information available to them about the algorithm.
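To make the scraping technique more concrete, below is a minimal, hypothetical Python sketch of a scraping-style audit. The URL, query terms, and CSS selector are placeholders rather than any real service, and a real audit would need to respect the legal and contractual constraints discussed next.

```python
# Minimal, hypothetical sketch of a scraping-style audit: repeatedly fetch a
# results page and record which items are shown, so runs can later be compared
# across queries or simulated user profiles. All identifiers below are
# placeholders, not a real service or selector.
import csv
import time

import requests
from bs4 import BeautifulSoup

RESULTS_URL = "https://example.com/search"          # placeholder endpoint
QUERIES = ["apartment rentals", "apartamentos en renta"]  # e.g., compare languages


def fetch_results(query: str) -> list[str]:
    """Fetch one results page and return the listed item titles."""
    response = requests.get(RESULTS_URL, params={"q": query}, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # ".result-title" is a placeholder for whatever selector the page actually uses.
    return [node.get_text(strip=True) for node in soup.select(".result-title")]


def main() -> None:
    with open("audit_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "query", "rank", "title"])
        for query in QUERIES:
            for rank, title in enumerate(fetch_results(query), start=1):
                writer.writerow([time.time(), query, rank, title])
            time.sleep(5)  # pause between requests to avoid burdening the site


if __name__ == "__main__":
    main()
```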

The report discusses the practical and legal limitations that researchers can run into when performing different kinds of audits. Laws like the Computer Fraud and Abuse Act (CFAA), written with the intention of criminalizing hacking, have deterred researchers from even attempting to probe algorithms for fear of legal repercussions. Other issues, such as potential copyright infringement when obtaining training data and Terms of Service agreements that attempt to prohibit testing, can also frustrate auditors who lack the funds to fight a case in court.

We also make a number of policy recommendations that would help empower public interest researchers to conduct this needed research while balancing other important values such as privacy and intellectual property. Specifically, we recommend changes in the following areas:

  • Access and publication mandates
  • CFAA and computer trespass
  • Contract law
  • Digital Millennium Copyright Act
  • Copyright
  • Civil rights, privacy, and security
  • Consumer protection

The report also recommends ways to legally incentivize companies to provide the public with more transparency into their algorithms, to encourage internal whistleblowers to report illegal behavior, and to incentivize good-faith research by providing auditors with safe harbors in particular cases.

Please click here for the full report.

Nandita Sampath is an AI Policy Analyst at Consumer Reports.
