Artificial Intelligence at a Crossroads: Compliance as Ethics

David Morar, Molly Nystrom and Shannon Kay

“We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate.” This sentence comes from the document “AI at Google: our principles 2018” and is meant to show the company’s commitment to an ethical understanding of its own work. Google’s famous motto “Don’t be evil” has now matured into a corporate document that outlines fundamental ways the company tries to do good.

While that is a nice sentiment, and certainly an important one for good corporate citizens, ethics means more than just following the law, and can even be antithetical to it. Laws have, for example, promoted segregation and racism; though legally valid in their time, those laws would not be considered ethical, as they violated human rights. Certainly, some laws contain ethical standards, but that is not a given, nor is it an assurance that following the law is ethical in those particular cases. Ethics, much like law, is concerned with what is right and what is wrong, but unlike law, it is not constrained by enforcement or punishment through the might of the state. It is focused instead on virtues, rights, and obligations to and benefits for society. It is a goal one aspires to, a sense of doing the best one can.

Strictly defined, ethics can be described, as dictionaries often do, as the moral principles that govern a person’s behavior or the conducting of an activity. Relatedly, applied ethics is concerned with the specific examples and situations in which those ethical standards are applied.

Meanwhile, compliance implies meeting standards or rules. Ethics and applied ethics demand flexibility and critical thinking. Consider the infamous trolley problem, where one must decide whether to let a trolley run over five people on the tracks or to change the trolley’s path so it runs over one person instead. One must weigh one’s responsibility to every person on the tracks, the consequences of both actions, the aftermath, and how one’s conduct will shape one’s answers in future situations. Compliance, on the other hand, allows little variation in interpretation: it asks for the same actions to be taken regardless of context.

Technology companies function in society and usually hold themselves, or at the very least try to hold themselves, to ethical standards that guide their actions. Our work, studying and understanding the way technology companies conceptualize and operationalize ethics, shows a strong trend throughout the tech industry of conflating ethics with compliance. Throughout the documents we studied, we find heavy use of compliance vocabulary. Compliance vocabulary, as we see it, is phrasing that displaces responsibility for monitoring ethics from the company onto an “other”: the AI itself, the government, and so on. The goal seems to be to create a checklist of what is ethical (that is, what follows the law), check those boxes, and move on. Much like in the quote from Google’s AI Principles, the responsibility to be ethical is then confined within the responsibility toward the law. We see these boundaries in 50-page documents (terms of service, end-user agreements, etc.) that most of us probably don’t read, that even fewer of us understand, but that we all agree to in order to get the product we want. Vague, broad words express these ethical commitments, and the abstraction of responsibility leads to nonsensical statements, such as personifying the AI just in case anything goes wrong. For example, in its document advising developers on conversational AI, Microsoft notes that “[since] bots may have human-like personas, it is especially important that they interact respectfully […].” While the document is aimed at developers, it also includes recommendations such as that “the bots simply steer clear of controversial subjects,” language that personifies the AI and separates both the bot and its actions from the developers.
There are important questions raised by this practice and by many other operationalizations of ethics: where should legal liability reside, who carries ultimate accountability, and, fundamentally, what are the relationships between the company, the product, and the consumer? Certainly these questions have no universal answer; that is precisely the point of ethics. One cannot simply reuse the same checklist to tick all the ethical boxes and move on. Different companies have different goals, different methods of operation, and different products, and thus require different ethical codes of conduct.

When it comes to ethical standards in tech companies, what is missing is a consideration of ethics in general, paired with standards that make clear how each company chooses to adhere to applied ethics. Applied ethics means that specific situations are outlined, definitions are clear, and consumers can consent to or decline the products a company provides because they have a clear understanding of the consequences.

In the text quoted above, Google does a good job of delineating its ethical commitment without fully devolving into a compliance perspective. However, its use of “risk,” “consent,” and “legal norms” shows the close connection between compliance and ethics. Other documents, like Google’s Responsible AI Practices, have a stronger compliance flavor, with words like “metrics” and “concrete goals” getting top billing among its recommended practices. While compliance and ethics overlap, the concerning trend is that the inherent nuances discussed above seem to get lost.

This work opens up the conversation to more fundamental concerns, not just about what each company does in terms of ethics, but about how ethics are operationalized at the level of the industry. Google is not the only company with a code of ethics that leans toward compliance as an ethical standard. Amazon’s description of ethics, both broadly and specifically with regard to its Rekognition AI-based program, implies that compliance with laws amounts to being ethical. As tech companies grow bigger and build technology that requires large amounts of our data, demonstrating an understanding of the ethical implications of their work is paramount. Tech companies should internalize that ethics is not mere compliance, especially when dealing with artificial intelligence. By taking steps toward ethical standards that go deeper than the law, and by outlining the specific situations that demand the application of ethics, companies can get ahead of the curve and become the ethical innovators the rest of the industry looks up to. These standards, as our next blog post will show, are well suited to being translated into industry-wide practices and frameworks.
