This is the second, final part of our two-part privacy tech-know blog series on algorithmic fairness. In the first part, we introduced the topic of algorithmic fairness and analyzed the three main definitions of it. In this second part, we will further develop our analysis by focusing on topics more specific to its implementation. Throughout this discussion, we assume working-level technical knowledge of artificial intelligence (AI) and machine learning (ML).
Conceptually, algorithmic fairness is straightforward. It is the ethical concept of fairness applied to the domain of AI/ML. This combination has resulted in the creation of a new discipline, whose overarching goal is to work out the conditions under which an AI/ML model may be said to treat equal persons equally and unequal persons unequally.
However, in practice, algorithmic fairness can be difficult to implement. Not only are there multiple, generally incompatible definitions to choose from; each interprets fairness differently and carries its own pros and cons. While algorithmic fairness may have developed a strong theoretical basis, many questions remain when it comes to putting its ideas into practice.
Fortunately, a growing body of research has been devoted to exploring this challenging topic. The practical side of algorithmic fairness can be viewed from two perspectives. On the one hand, there is the “positive” side, which focuses on the concrete fairness-enhancing measures and de-biasing techniques that can be applied to an AI/ML model to help achieve algorithmic fairness. Alternatively, there is the more “critical” side, which focuses on discovering the limitations inherent in mathematical notions of fairness. Of course, this distinction is somewhat artificial. Both the “positive” and “critical” sides of algorithmic fairness in practice contribute to the same underlying goal, namely of getting closer to the truth with respect to its possibilities and limits.
In this second, final part of our blog series, we will complete our analysis of algorithmic fairness by exploring these additional practical aspects of it. To clarify, this document does not provide guidance on the application of algorithmic fairness under federal privacy laws. The aim of this second part is to discuss the various measures and limits of algorithmic fairness to help further contextualize our understanding of it from a technical perspective, as opposed to a legal or policy perspective.
What measures can be applied to help achieve algorithmic fairness?
Depending on which definition of algorithmic fairness is considered most appropriate in the circumstances, different technical measures may be applied to the AI/ML model to help satisfy the criteria. Not all measures are exclusive to a particular definition of algorithmic fairness. In fact, many are cross-cutting and work equally well to support different definitions across multiple contexts. Of course, the details of how best to apply each measure will depend on the context of the data processing.
In what follows, we will discuss various fairness-enhancing measures in terms of the stages of the AI/ML training lifecycle. All AI/ML models undergo training and it is here, during the different stages of the training lifecycle, that fairness-enhancing measures have their most direct impact. In general, there are three stages of the lifecycle: pre-processing, in-processing and post-processing. Each stage has its own set of measures to consider.
Although organized chronologically, in practice the stages form an iterative process whereby each stage influences the others and may be revisited at any time depending on the results of the process.
Pre-processing

At this initial stage, the focus is on the training data itself. Because AI/ML works by attempting to best fit a mathematical model to some set of training data, the more the training data fairly reflects the object or outcome to be predicted, the greater the chances that the model will learn and implement a fair representation of individuals’ past behaviour by default. Accordingly, the measures involved at this stage consist of changing or augmenting the training data so as to better position the AI/ML model to account for fairness. In general, there are three measures to consider:
- Ensure that training data is balanced and representative of the population. AI/ML learns through examples. A consequence of this is that it cannot understand what it has not already been shown. If the data used to train an AI/ML model underrepresents or does not contain enough examples of certain groups who form part of the population to which the AI/ML model will be applied, then the model will, in effect, ignore or overlook the statistical relationships that predict the target variable for them. If these relationships differ from those of the other (over)represented groups, then meaningful disparities in performance may arise. For example, studies have shown that some facial analysis tools for gender estimation exhibit disproportionate error rates across groups defined by race and gender.Footnote 1
- Ensure that the ground truth is objective. In a set of training data, the ground truth or target variable represents what is considered a correct answer to a prediction based on the feature values present alongside it. However, just because a variable is called the “ground truth” does not necessarily mean that it is. Especially in the domain of human behaviour, it is sometimes difficult or even impossible to obtain high-quality objective data regarding labels for supervised learning. In such cases, it may be tempting to substitute a full and accurate portrayal of the object or outcome to be predicted with a proxy that is easier to obtain but less reliable as an indicator. If the proxy variable is too subjective or dependent on human discretion for its fulfillment, it may contain historical or other biases that would then be learned and ultimately reproduced in the AI/ML model. For example, recidivism risk assessment tools have been criticized for using documented arrests as a proxy for actual crime.Footnote 2
- Ensure that features equally predict the target variable across groups. Similarly, not all features in a set of training data may have the same level of statistical importance in terms of predicting the target variable across groups. For some groups, the presence or absence of a particular feature may be a strong indicator of the target variable, but for others, less so or not at all. If the features do not equally predict the target variable across groups, then this may lead to a skewed situation where the AI/ML model overestimates the target variable for groups whose features are less predictive of it and underestimates the target variable for groups whose features are more predictive of it. In such cases, novel solutions may be required. For example, some jurisdictions have implemented gender-specific recidivism risk assessment tools to address concerns with feature bias.Footnote 3
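To make the first and third checks concrete, here is a minimal sketch of how one might audit a training set for group representation and for the per-group predictive power of a feature. All data, names and numbers are synthetic and purely illustrative:

```python
import numpy as np

# Hypothetical training set: one sensitive attribute ("group"),
# one feature and a binary target, for illustration only.
rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])  # imbalanced on purpose
feature = rng.normal(size=n)
# For group A the feature strongly predicts the target; for B it barely does.
signal = np.where(group == "A", 2.0 * feature, 0.2 * feature)
target = (signal + rng.normal(size=n) > 0).astype(int)

# Check 1: is each group adequately represented?
for g in ("A", "B"):
    share = np.mean(group == g)
    print(f"group {g}: {share:.1%} of training data")

# Check 2: does the feature predict the target equally well per group?
for g in ("A", "B"):
    mask = group == g
    r = np.corrcoef(feature[mask], target[mask])[0, 1]
    print(f"group {g}: feature-target correlation = {r:.2f}")
```

In a real audit, a large representation gap or a large per-group difference in predictive power would prompt rebalancing, additional data collection or group-aware feature engineering.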
In-processing

After preparation of the training data, the next step is to begin the actual process of training the AI/ML model. The focus is therefore on the training algorithm. Because the AI/ML training process works by attempting to minimize the value of a cost function, the more the function itself includes criteria to penalize unfairness, the greater the chances that the training process will generate a model with fair statistical relationships. Accordingly, the measures involved at this stage consist of placing constraints on the cost function or training algorithm so as to better account for fairness in the formation of statistical relationships. In general, there are two measures to consider:
- Add one or more fairness-enhancing regularization terms to the cost function. The cost function of an AI/ML model measures how far the model’s behaviour deviates from the desired outcome; training moves the model toward lower cost, that is, toward becoming more “intelligent.” In addition to penalizing prediction errors and overfitting, it is possible to include penalty terms to minimize expressions of unfairness. For example, some researchers have suggested the use of a “prejudice remover” regularization term to penalize the amount of mutual information between classifier scores and sensitive attributes.Footnote 4
- Use fair adversarial learning. Recent research in AI/ML has led to the development of a new method of training whereby the cost function takes the form of a competitive game between two models.Footnote 5 One model, typically called a “generator,” tries to produce examples that outsmart the other model, typically called a “discriminator,” which in turn tries to guess whether the examples have a certain property or not. The two models are connected via a feedback loop that enables both to improve their abilities. Eventually, if the models have enough capacity, the training reaches a point of equilibrium where the discriminator cannot distinguish between examples with or without the property, because the generator has successfully learned to obfuscate it.
When used for fairness, the generator is the AI/ML classifier. The property that the discriminator tries to guess and the generator tries to obscure is the sensitive attribute. In effect, this turns the training process into an optimization problem where the goal is to maximize the AI/ML classifier’s ability to predict the target variable while minimizing the discriminator’s ability to predict the sensitive attribute.Footnote 6
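As a rough sketch of the regularization idea, the toy example below trains a logistic regression whose cost adds a penalty on the covariance between classifier scores and the sensitive attribute. This covariance penalty is a simpler stand-in for the mutual-information “prejudice remover” term mentioned above; all data and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
A = rng.integers(0, 2, size=n).astype(float)    # sensitive attribute
X = np.column_stack([rng.normal(size=n),        # a neutral feature
                     A + rng.normal(size=n)])   # a feature correlated with A
y = (X[:, 0] + 0.5 * A + rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(lam, steps=2000, lr=0.5):
    """Logistic regression whose cost adds lam * cov(score, A)^2."""
    w = np.zeros(X.shape[1])
    Ac = A - A.mean()                            # centred sensitive attribute
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_ce = X.T @ (p - y) / n              # cross-entropy gradient
        cov = np.mean(Ac * p)                    # covariance of scores with A
        grad_cov = X.T @ (Ac * p * (1 - p)) / n  # gradient of that covariance
        w -= lr * (grad_ce + lam * 2 * cov * grad_cov)
    return w

def score_gap(w):
    """Difference in mean predicted score between the two groups."""
    p = sigmoid(X @ w)
    return abs(p[A == 1].mean() - p[A == 0].mean())

w_plain = fit(lam=0.0)
w_fair = fit(lam=50.0)
print(f"mean score gap without penalty: {score_gap(w_plain):.3f}")
print(f"mean score gap with penalty:    {score_gap(w_fair):.3f}")
```

The penalized model should show a smaller gap in mean scores between the two groups, at some cost to raw accuracy; the weight lam controls that trade-off. The adversarial approach replaces the hand-written penalty with a trained discriminator playing the same role.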
Post-processing

After training, the last step is to set an appropriate threshold. Because AI/ML classification tasks often work as regression tasks with a cut-off point to define category boundaries, where the threshold is set has important implications for fairness. Different threshold values will generally change the relative proportion of members correctly and incorrectly assigned to the positive and negative class across groups, thereby affecting the respective predictive values, error rates and total proportion of positive outcomes. Accordingly, the measures involved at this stage consist of setting or modifying the scoring threshold to help make the AI/ML model’s predictions fit some fairness criteria. In general, there are two measures to consider:
- Set the threshold to a value that satisfies your fairness criteria. This measure follows directly from the concept of a threshold. As noted above, different threshold values will generally change the relative proportion of members correctly or incorrectly assigned to the positive and negative class across groups. If the threshold is set to a cut-off point where the predictive values, error rates or total proportion of positive outcomes are equal across groups, then some fairness criteria will be satisfied. This is often achieved by locating the point, if available, at which the lines of a graphical plot intersect. For example, the receiver operating characteristic (ROC) curve plots the true positive rate against the false positive rate at various threshold settings of a binary classifier. The point at which the group-specific curves intersect represents a parity of error rates, thereby satisfying separation.
- Use a separate threshold for each group. This measure is essentially a more extreme version of the previous one. Instead of a single common threshold to define the category boundaries across all groups, a separate threshold can be defined for each group and set to a cut-off point that individually satisfies some shared fairness criteria. For example, in response to Cleary’s initial formulation of sufficiency, some researchers advocated for greater flexibility in determining outcomes for lower-scoring groups and suggested the use of multiple thresholds to achieve a more balanced proportion of results.Footnote 7 However, this measure has also been criticized for shifting responsibility away from the AI/ML developer and onto the end user. For example, in a study it produced on demographic effects across facial recognition algorithms, the U.S. National Institute of Standards and Technology “discount[s]” the idea of group-specific thresholds, since they require the system owner to determine the different threshold levels through some form of appropriate testing and then build additional software to implement the strategy.Footnote 8
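Both threshold measures can be sketched in a few lines. The toy example below, using synthetic scores (all numbers illustrative), compares a shared threshold against group-specific thresholds chosen to equalize true positive rates, in the spirit of separation:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_group(n, shift):
    """Toy scores: positives score higher; one group is shifted lower overall."""
    y = rng.integers(0, 2, size=n)
    scores = rng.normal(loc=y + shift, scale=0.8, size=n)
    return scores, y

s0, y0 = make_group(400, 0.0)
s1, y1 = make_group(400, -0.4)   # this group's scores run systematically lower

def tpr(scores, y, t):
    """True positive rate at threshold t."""
    return np.mean(scores[y == 1] >= t)

# A single shared threshold treats the lower-scoring group's qualified
# members worse.
t = 0.5
print(f"shared t={t}: TPR group 0 = {tpr(s0, y0, t):.2f}, "
      f"group 1 = {tpr(s1, y1, t):.2f}")

# Group-specific thresholds: pick, for each group, the cut-off whose TPR
# is closest to a common target.
target = 0.80
def pick_threshold(scores, y):
    grid = np.linspace(scores.min(), scores.max(), 400)
    return min(grid, key=lambda t: abs(tpr(scores, y, t) - target))

t0, t1 = pick_threshold(s0, y0), pick_threshold(s1, y1)
print(f"group-specific thresholds: t0 = {t0:.2f}, t1 = {t1:.2f}")
```

The lower-scoring group ends up with the lower cut-off, which is precisely the extra testing and implementation burden on the system owner that the NIST study flags.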
What are the limits of mathematical notions of fairness?
Algorithmic fairness shares in the basic properties of fairness, but it also differentiates itself from the ethical concept of fairness by focusing on the treatment of individuals and groups within the context of an AI/ML model. AI/ML is a modern mathematical technology, whose perspective is fundamentally different from that of ethics and fairness. As discussed in the introduction to the first part of this blog series, fairness is characterized by a future-oriented ambiguity, whereas AI/ML exhibits a type of past-is-prologue precision. While these two disciplines can be merged to a certain extent, ultimately their differences cannot be fully resolved.
A consequence of this irresolvable gap is that algorithmic fairness can only ever serve as a proxy for fairness. Because of its mathematical nature and prioritization of calculation over deliberation, algorithmic fairness lacks the fully temporal, context-aware judgement of ethical determinations of fairness. As such, its use comes with some important limitations and caveats concerning the validity of its results. In general, there are four to mention:
- It does not address the ethical implications of the actual task an AI/ML model performs. Algorithmic fairness works by comparing the behaviour that an AI/ML model exhibits across different groups against a certain set of criteria. Based on the results of this comparison, the AI/ML model is deemed to be fair or not. However, when it comes to the ethical implications of the actual task the AI/ML model performs, algorithmic fairness is silent. It does not assess the morality or underlying ethics of the task itself, only the observed characteristics of its after-the-fact application. This ethical gap can lead to paradoxical situations where an automated task that is clearly unethical can still be considered algorithmically “fair.” For example, a group of academics wrote a fictitious satirical paper showing how an algorithmic system designed to mulch elderly people into nutrients can nonetheless adhere to the framework of fairness, accountability and transparency.Footnote 9 A real and ongoing example is how some social media companies design their news feed algorithms to maximize “user engagement” based on click rates, thereby indirectly prioritizing and giving undue influence to (mis)information articles designed specifically to exploit individuals’ anger, fear and resentment.
- It cannot determine which definition is appropriate or “fair” in the circumstances. Algorithmic fairness consists of multiple definitions that are generally incompatible. An important question to ask, therefore, is which definition is appropriate in which circumstances. Which definition among sufficiency, separation and independence should be considered “fair”? The tools of algorithmic fairness cannot answer this. They can only determine whether a particular definition is satisfied, not whether it should be satisfied. Aristotle had a nice way of describing this limitation of technical thinking. He made a distinction between the mode of thinking of “technology” (or “craft”; technê) and that of “practical judgement” (phronêsis). Whereas technology cannot think beyond the products whose utility or efficiency it works to maximize, practical judgement is able to perceive the ultimate human actions conducive to living well as a whole.Footnote 10 Consequently, only something similar in nature to Aristotle’s practical judgement can decide which definition of algorithmic fairness is actually fair in the circumstances.
It is important to note also that this limitation of algorithmic fairness accords with findings from current human rights jurisprudence, in particular, in Canada. For example, in analyzing claims of discrimination leading to adverse effects, the Supreme Court of Canada has stated in its decision Fraser v Canada that “[t]here is no universal measure for what level of statistical disparity is necessary to demonstrate that there is a disproportionate impact, and the Court should not […] craft rigid rules on this issue.”Footnote 11 A full discussion of legal fairness and legal rules about discrimination is beyond the scope of this blog series.
- Its results can be manipulated. Whether an AI/ML model satisfies some definition of algorithmic fairness depends solely on the statistical relationships between three variables: the ground truth Y, the predicted score R and the sensitive attribute A. However, the data used to populate these variables is often generated from observations of real-world processes that are controlled by the same organization whose use or development of an AI/ML model is under scrutiny. This opens the door to potential abuse or manipulation. By modifying the frequency, manner, scope or nature of the process, it is possible for an organization to curate a set of variable values that appear to satisfy some definition of algorithmic fairness but in reality only mimic its statistical relationships. Several academic studies have raised this issue.Footnote 12 For example, an organization could artificially lower the error rate of its facial recognition program by intentionally (re)submitting queries of individuals whose identities it already knows.
- It cannot evaluate its own effects. Algorithmic fairness aims to promote the well-being of individuals or groups by imposing a general demographic criterion on the results of an AI/ML model. However, whether individuals or groups truly benefit from such an intervention ultimately depends on how the decisions affect their lives in the long term. While algorithmic fairness may result in greater short-term benefits, studies indicate that common fairness criteria may not promote improvement over time.Footnote 13 For example, requiring a bank to give out loans to individuals who are less likely to repay them ultimately impoverishes the individuals who end up defaulting as a result. This is an issue that algorithmic fairness cannot address on its own.
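The dependence of any fairness audit on just the three variables Y, R and A can be made concrete. The sketch below (synthetic data, illustrative only) computes the per-group quantities behind independence, separation and sufficiency; because these are the only inputs, anyone who controls how the data is generated controls what the audit sees:

```python
import numpy as np

# The three variables an algorithmic-fairness audit examines: ground truth Y,
# prediction R and sensitive attribute A (toy data, for illustration only).
rng = np.random.default_rng(3)
n = 10_000
A = rng.integers(0, 2, size=n)
Y = rng.integers(0, 2, size=n)
R = np.where(rng.random(n) < 0.8, Y, 1 - Y)  # 80%-accurate classifier, blind to A

def p(event, given):
    """Empirical conditional probability P(event | given)."""
    return np.mean(event[given])

for a in (0, 1):
    g = A == a
    print(f"A={a}: "
          f"P(R=1|A)={p(R == 1, g):.2f}  "                 # independence
          f"P(R=1|Y=1,A)={p(R == 1, g & (Y == 1)):.2f}  "  # separation (TPR)
          f"P(Y=1|R=1,A)={p(Y == 1, g & (R == 1)):.2f}")   # sufficiency (PPV)
```

Here the per-group quantities roughly agree because the toy classifier ignores A; the audit itself has no way of knowing whether the records fed into Y, R and A faithfully reflect the real-world process or were curated to pass.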
In this second, final part of our blog series on algorithmic fairness, we further developed our analysis from a practical perspective and gained additional insights into the nature of algorithmic fairness:
- Measures to help achieve algorithmic fairness can be organized into three groups based on the stages of the AI/ML training lifecycle:
  - Pre-processing:
    - Ensure that training data is balanced and representative of the population
    - Ensure that the ground truth is objective
    - Ensure that features equally predict the target variable across groups
  - In-processing:
    - Add one or more fairness-enhancing regularization terms to the cost function
    - Use fair adversarial learning
  - Post-processing:
    - Set the threshold to a value that satisfies your fairness criteria
    - Use a separate threshold for each group
- Due to its mathematical nature, algorithmic fairness suffers from several ethical limitations:
- It does not address the ethical implications of the actual task an AI/ML model performs
- It cannot determine which definition is appropriate or “fair” in the circumstances
- Its results can be manipulated
- It cannot evaluate its own effects
Based on these additional insights, it is clear that algorithmic fairness is not only a complex discipline but also no panacea. While various practical measures can be applied to help achieve it, ultimately mathematical notions of fairness have inherent ethical limitations that can only be properly dealt with using non-technical thinking. As always when it comes to evaluating ethical uses of technology, the key is to continue the process of deliberation and not let your thinking be replaced by calculation!