Why Bias is Interwoven with Several Other Dimensions of Trusted AI

At CognitiveScale, we have grouped the key aspects of building and deploying trustworthy AI solutions under five pillars, representing five major types of risk that businesses face if these aspects are not properly addressed when employing AI technologies [1]. These five pillars are: (i) Fairness, or lack of bias, (ii) Explainability, (iii) Robustness, (iv) Data Risks, and (v) Transparency, Auditability and Compliance. While different companies may focus on only one or two of these aspects, in this blog we argue that, in general, all five need to be considered together, even if only one of them seems to be of primary concern.

Specifically, we start with the concept of fairness and show how it is integrally tied to the other pillars. For an automated decision-making system (ADMS) to be considered fair, it must be unbiased towards specified subgroups of people who are impacted by its decisions. These subgroups are called “protected classes” and can be defined based on gender, race, age, ethnicity, etc., or combinations of such attributes, depending on the application. An unbiased solution should demonstrate that it does not prejudice the protected classes relative to its treatment of the rest of the population. The issue of fairness has been in the limelight recently in several high-profile cases; for example, Amazon discovered that its AI recruiting algorithm was biased against women, was unable to adequately mitigate this bias, and eventually stated publicly that it would discontinue using the biased solution altogether [2].

Fairness, Explainability and Transparency.

Fairness is ultimately a human perception, so any evaluation of fairness should be human-centric. Simply put, the eventual arbiters of whether an ADMS is unbiased should be the people who are impacted by its decisions. The philosopher John Rawls famously argued that fairness is “a demand for impartiality” and equated fairness with justice. This viewpoint has dominated extensive studies of fairness in both organizational and social psychology over several decades. Two types of justice have emerged as the most pertinent to fairness: (i) distributive justice and (ii) procedural justice [3]. The former, which is concerned with a fair division of outcomes, has been the main focus of recent machine learning work on fairness. For example, an approach based on distributive justice may attempt to ensure that both false negative and false positive rates are matched across all pertinent subgroups of people. Note that the large number of fairness metrics available in IBM’s AIF360 fairness toolkit, an excellent representative of recent work in machine learning, all fall into this category.
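To make the distributive notion concrete, here is a minimal sketch of such a group-conditional error-rate comparison, written in plain NumPy rather than against any particular toolkit; the function name and the toy data are illustrative assumptions, not part of AIF360.

```python
import numpy as np

def error_rate_gaps(y_true, y_pred, group):
    """Compare false positive and false negative rates between two subgroups.

    y_true, y_pred : binary arrays of ground-truth and predicted labels.
    group          : binary array marking membership in the protected class.
    Returns the absolute FPR and FNR gaps between the two subgroups.
    """
    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        fpr = np.mean(yp[yt == 0]) if np.any(yt == 0) else np.nan      # P(pred=1 | true=0)
        fnr = np.mean(1 - yp[yt == 1]) if np.any(yt == 1) else np.nan  # P(pred=0 | true=1)
        return fpr, fnr

    fpr_a, fnr_a = rates(group == 1)   # protected subgroup
    fpr_b, fnr_b = rates(group == 0)   # rest of the population
    return abs(fpr_a - fpr_b), abs(fnr_a - fnr_b)

# Example: gaps near zero on both rates indicate (approximately) equalized odds.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(error_rate_gaps(y_true, y_pred, group))
```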

Procedural justice, on the other hand, considers the perceived fairness of a decision-making process. It turns out that this is determined not only by the outcomes themselves but also by the process through which the outcomes were obtained. Many researchers across multiple domains have shown that humans deem outcomes produced by a transparent system to be fairer than outcomes from an opaque one. In some cases, a substantially less fair system (as measured by distributive justice alone) is judged fairer because its decision-making logic was explainable and exposed to the humans it affected. Such observations have also been made for adverse events. For example, studies show that explaining how layoff decisions were made significantly increases the perceived fairness of those decisions [4]. Hence, providing a human-understandable explanation of a given decision, as well as counterfactual explanations of how a more preferred outcome could be obtained, i.e., feasible paths to “recourse” [5], both improves perceptions of fairness and increases a person’s propensity to accept the outcome of an ADMS. Similarly, systems that are procedurally transparent and accountable are deemed fairer by humans. In other words, both explainability and transparency are relevant to the pursuit of unbiased AI algorithms. Human studies have also shown that the perceived fairness and acceptability of an ADMS can be further enhanced by adding “outcome control” [6], i.e., a provision for some human oversight, with the ability to readjust the final decision.
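As a simple illustration of counterfactual recourse in the spirit of [5], the sketch below computes the smallest change to a single actionable feature that would flip a linear classifier’s decision. The weights, feature values, and function name are hypothetical, and real recourse methods handle multiple features, costs, and feasibility constraints.

```python
import numpy as np

def single_feature_recourse(w, b, x, feature_idx):
    """Minimal change to one actionable feature that flips a linear classifier.

    The classifier grants the favorable outcome when w @ x + b >= 0.
    Returns the required change to x[feature_idx], or None if that feature
    has no influence on the score (w[feature_idx] == 0).
    """
    score = w @ x + b
    if score >= 0:
        return 0.0                      # already receives the favorable outcome
    if w[feature_idx] == 0:
        return None                     # this feature cannot provide recourse
    return -score / w[feature_idx]      # delta such that w @ (x + delta*e_j) + b = 0

# Example: how much must the feature at index 2 (say, income) increase for approval?
w = np.array([0.4, -0.2, 0.8])          # hypothetical linear model weights
b = -1.0
x = np.array([0.5, 1.0, 0.6])           # applicant's current features
print(single_feature_recourse(w, b, x, feature_idx=2))
```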

Bias and Data Risks.

Now, what about “data risks”? Data risks stemming from privacy, provenance, and quality are an integral component of explainability and transparency. In fact, recent European privacy laws have codified a “right to an explanation” for users of platforms that collect data, by mandating that organizations provide a degree of transparency about what data is collected and how it is used [7]. A fair solution should assure us that only proper, and properly collected, data was used in building an AI-based ADMS. Moreover, biases inherent in training data are notoriously difficult to detect and neutralize, which increases the importance of proper data handling in the construction of fair learning systems.
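One simple, illustrative check on training data, sketched below under assumed binary labels and a single protected attribute, is to compare subgroup representation and favorable-label base rates before any model is trained; the function name and the toy data are hypothetical, and such checks catch only the most visible forms of data bias.

```python
import numpy as np

def training_data_audit(labels, group):
    """Basic training-data checks: subgroup representation and label base rates.

    labels : binary array of training labels (1 = favorable outcome).
    group  : binary array marking membership in the protected class.
    Large disparities in either quantity are an early warning that a model
    trained on this data may inherit bias from it.
    """
    protected, rest = (group == 1), (group == 0)
    representation = protected.mean()                                  # fraction of protected rows
    base_rate_gap = abs(labels[protected].mean() - labels[rest].mean())  # favorable-label gap
    return {"protected_fraction": representation,
            "favorable_label_rate_gap": base_rate_gap}

# Example with a small hypothetical training set
labels = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(training_data_audit(labels, group))
```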

Bias and Robustness.

Finally, there is always a concern that a continually learning system may become more biased or unfair over time. This is where robustness becomes relevant. A robust system provides more protection against data poisoning and other data attacks, and also reduces unwanted or unintended drift in system behavior over time due to changing data characteristics. A robustness metric informs the user about the stability of an ADMS, and can alleviate concerns about future behavior.
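One common way to quantify this kind of drift is the population stability index (PSI), computed between a reference sample of a model input (or score) and a more recent sample. The sketch below is a minimal version; the binning, the clipping constant, and the example data are illustrative assumptions rather than a prescribed standard.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Population Stability Index between a reference and a current sample of one feature.

    Both inputs are 1-D arrays of a model input (or score). Values near 0 indicate
    a stable distribution; larger values indicate drift that warrants investigation.
    """
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero / log of zero in sparsely populated bins.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

# Example: model scores at deployment vs. scores observed some months later
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)
today    = rng.normal(0.3, 1.2, 5000)   # shifted distribution
print(population_stability_index(baseline, today))
```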

In conclusion, to comprehensively mitigate the risk of bias in AI solutions, one has to not only employ direct de-biasing approaches, but also consider the explainability, robustness, and transparency of the solution, and address the associated data risks as well.

[1] CognitiveScale. 2018. Responsible AI Framework.
[2] Reuters. 2018. “Amazon scraps secret AI recruiting tool that showed bias against women.” Published Oct 9, 2018.
[3] John W. Thibaut and Laurens Walker. 1975. Procedural Justice: A Psychological Analysis. L. Erlbaum Associates.
[4] Robert J. Bies, Christopher L. Martin, and Joel Brockner. 1993. “Just laid off, but still a ‘good citizen?’ Only if the process is fair.” Employee Responsibilities and Rights Journal 6(3), 227–238.
[5] Berk Ustun, Alexander Spangher, and Yang Liu. 2019. “Actionable recourse in linear classification.” In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 10–19.
[6] M. K. Lee, et al. 2019. “Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation.” Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 182. https://doi.org/10.1145/3359284
[7] Bryce Goodman and Seth Flaxman. 2016. “EU regulations on algorithmic decision-making and a ‘right to explanation’.” In ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY.
