
Using predictive algorithms in ethically high-stakes scenarios.

INTRODUCTION

In this essay I will explore two interrelated ethical challenges. The first is deciding whether predictive algorithms can express objective, neutral, quantified judgements. The second is deciding whether their judgements should be preferred to those of human beings when ethically high-stakes decisions are required. The philosophical significance of these challenges lies in the potential for predictive algorithms to inform ethically high-stakes decisions while, in fact, producing unethically biased results. In such a situation, these decisions could create or exacerbate social injustices. I will argue that predictive algorithms should not, at present, be understood as objective or neutral technologies, and as a result should not yet be used in ethically high-stakes decision-making scenarios, such as, as we shall see, decisions over freedom or imprisonment.


CONNECTING TO THE PATTERN CASE STUDY

The PATTERN case study illustrates this argument: it is an example of an algorithm that produced unethically biased judgements which went on to inform ethically high-stakes decisions. I will argue that its use does more harm than good, not only because algorithms like PATTERN can produce results that indirectly express racial bias, but also because the responsibility for, and the justification behind, these decisions can be offloaded onto the algorithms themselves. As I will demonstrate, the algorithms can be assigned the accountability for unethical decisions on the grounds that they are purely mathematical and therefore devoid of normative reasoning. This offloading of accountability can drastically exacerbate social tension and undermine productive engagement with issues of social justice. These algorithms are not yet developed enough to be appealed to as evidence of a neutral, strictly quantitative decision. They can be just as inaccurate and imperfect as human beings, with the added difficulty of being unable to explain their decisions. As such, the decision-making of people should remain primary in ethically high-stakes scenarios, so that explanations and justifications can be given and individuals can be held to account.


In order to illustrate the relation between the PATTERN case study and my normative claim that algorithms should not be understood as neutral, and thus not used in ethically high-stakes decision-making processes, I will introduce the case study and highlight the aspects that support my claim. PATTERN (the Prisoner Assessment Tool Targeting Estimated Risk and Needs) is an algorithm programmed to predict the likelihood of an inmate reoffending upon release, that is, the inmate's recidivism risk (BOP, 2020). The algorithm emerged from the passing of the First Step Act, a piece of legislation concerned with reforming criminal justice procedures (Cyphert, 2020). The results of PATTERN's risk assessment were used to decide whether certain prisoners were eligible for early release from prison. If the algorithm classified a prisoner as having a 'low risk' of reoffending, that prisoner would be eligible for early release; if it did not, the prisoner would not be eligible (Urban Institute, 2021).
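To make these mechanics concrete, below is a minimal illustrative sketch (in Python) of how a risk score and a 'low risk' cut-off might gate eligibility for early release. The features, weights, and cut-off value are invented purely for exposition; they are not PATTERN's actual scoring items or thresholds.

```python
# Illustrative sketch only: the features, weights, and cut-off below are
# invented for exposition and are NOT PATTERN's actual scoring items.

def risk_score(inmate: dict) -> float:
    """Toy linear risk score built from hypothetical features."""
    weights = {
        "prior_convictions": 2.0,
        "age_at_release": -0.1,
        "infractions_in_custody": 1.5,
    }
    return sum(weights[k] * inmate[k] for k in weights)

def eligible_for_early_release(inmate: dict, low_risk_cutoff: float = 5.0) -> bool:
    """A prisoner scoring below the 'low risk' cut-off is deemed eligible."""
    return risk_score(inmate) < low_risk_cutoff

example = {"prior_convictions": 1, "age_at_release": 45, "infractions_in_custody": 0}
print(eligible_for_early_release(example))  # True: score = 2.0 - 4.5 + 0 = -2.5
```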


DISCUSSION ON THE CLAIM THAT ALGORITHMS ARE NOT NEUTRAL AND OBJECTIVE

I will now demonstrate how this case study illustrates and supports my normative claim by briefly discussing some of the results it produced on the recidivism risk of U.S. prison inmates. According to a report on PATTERN produced by the Office of Justice Programs within the U.S. Department of Justice, PATTERN overpredicts the recidivism risk of non-white individuals and underpredicts the recidivism risk of Native Americans (DOJ, 2021). The report reveals that Black and Hispanic populations in particular were overpredicted as having a high recidivism risk. For the purposes of this essay, I will focus on PATTERN's overpredictions. This result demonstrates that the algorithm should not be seen as neutral or objective because, if it were, we should not see predictions that systematically diverge along racial lines. I argue that such biased overprediction provides valid grounds for the claim that the algorithm lacks neutrality and objectivity.
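To clarify what 'overprediction' means here, the following sketch shows how one might compare the rate at which a tool flags a group as high risk against the rate at which that group actually reoffends. The records are entirely fabricated for illustration and are not the DOJ report's data.

```python
# Synthetic illustration of 'overprediction': compare the rate at which a tool
# predicts recidivism within each group against the rate actually observed.
# All numbers below are invented for exposition, not figures from the DOJ report.

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", True, True), ("B", False, False), ("B", False, True), ("B", False, False),
]

def rates_by_group(rows):
    groups = {}
    for group, predicted, observed in rows:
        stats = groups.setdefault(group, {"n": 0, "predicted": 0, "observed": 0})
        stats["n"] += 1
        stats["predicted"] += predicted
        stats["observed"] += observed
    return {g: (s["predicted"] / s["n"], s["observed"] / s["n"]) for g, s in groups.items()}

for group, (pred_rate, obs_rate) in rates_by_group(records).items():
    flag = "overpredicted" if pred_rate > obs_rate else "not overpredicted"
    print(f"group {group}: predicted {pred_rate:.0%}, observed {obs_rate:.0%} -> {flag}")
```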


That being said, it is important to note that this bias is not innate to the algorithm; rather, as has been suggested, PATTERN's racial bias results from the data it is fed to generate its risk scores (Sheehey, 2021). Ruha Benjamin argues, in relation to a similar predictive algorithm, that the data points it uses (such as criminal history and 'neighbourhood characteristics') are structured by 'racial domination' (Benjamin, 2019). In other words, Benjamin suggests that racial bias is embedded in the data categories that are fed into predictive algorithms like PATTERN. For example, evidence suggests that Black people are disproportionately and unfairly stopped and challenged by police in the US (Peeples, 2021). This means they are much more likely to accumulate data points (such as criminal history) that lead to a higher recidivism risk score. It is in this sense that such predictive algorithms should not be seen as objective or neutral: so long as these algorithms are fed data laden with racial bias, they will indirectly produce racially biased judgements.
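Benjamin's point can be made concrete with a small simulation. In the sketch below, two hypothetical groups have identical underlying behaviour, but one is policed twice as intensively; because the risk score depends only on recorded arrests, it ends up systematically higher for the more heavily policed group even though the formula never mentions race. All rates and multipliers are assumptions chosen purely for illustration.

```python
# Illustration of Benjamin's point: a formula that never mentions race can still
# reproduce racial bias if its inputs are shaped by biased practices.
# The policing-intensity multiplier below is an invented assumption for exposition.

import random

random.seed(0)

def simulate_prior_arrests(underlying_offending: float, policing_intensity: float) -> int:
    """Arrests recorded = offending that police actually observe and act on."""
    return sum(random.random() < underlying_offending * policing_intensity for _ in range(10))

def toy_risk_score(prior_arrests: int) -> float:
    return 1.0 * prior_arrests  # 'just math': the score depends only on recorded arrests

# Two groups with the SAME underlying behaviour, but group B is policed twice as heavily.
group_a = [toy_risk_score(simulate_prior_arrests(0.2, 1.0)) for _ in range(1000)]
group_b = [toy_risk_score(simulate_prior_arrests(0.2, 2.0)) for _ in range(1000)]

print(sum(group_a) / len(group_a))  # roughly 2 on average
print(sum(group_b) / len(group_b))  # roughly 4 on average: higher scores, identical behaviour
```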


There is a common challenge to this claim, neatly captured in a statement made by a spokesman for the LAPD, the police department which used the PredPol algorithm (an algorithm similar to PATTERN that attempts to predict crimes). In relation to PredPol, he states, “It is math, not magic, and it’s not racist” (Sheehey, 2021). As noted, arguments such as Ruha Benjamin’s on the systemic nature of racial bias and its embeddedness in data are appropriate responses to such claims. However, there are further arguments against the suggestion that these algorithms are mathematical and therefore normatively neutral. A strong example is presented in N. Spaulding’s paper on human judgement and ‘algorithmic governance’ (Spaulding, 2020). Spaulding responds to the commonly held belief that algorithms function in purely mathematical terms by demonstrating that a multiplicity of human judgements exists at the design level of algorithms, which gives rise to “significant social, political and aesthetic dimensions” (ibid.). Whether it is in “the choice and quality of the training data” or in “translating a task…into a structured formula…”, a wide range of potentially biased or prejudiced human judgements is necessary to train an algorithm (ibid.). So, while algorithms should certainly be understood as mathematical instruments, the essential requirement of human involvement at the design and data level still leaves room for human bias to infiltrate their results.
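A sketch of the kind of design-level choices Spaulding has in mind is given below; every entry is a decision a human must make before any mathematics runs. The specific options listed are hypothetical and are not drawn from PATTERN's documentation.

```python
# Sketch of Spaulding's point: even a 'purely mathematical' pipeline is built from
# human choices. Each field below is a design decision someone had to make; the
# specific options shown are hypothetical, not PATTERN's actual design.

DESIGN_CHOICES = {
    # What counts as 'recidivism'? Rearrest, reconviction, and reincarceration
    # give different answers, and the choice between them is a normative one.
    "outcome_label": "rearrest_within_3_years",
    # Whose records form the training data, from which years and jurisdictions?
    "training_population": "federal_releases_2009_2015",
    # How is the open-ended task 'assess risk' translated into a structured formula?
    "model_form": "weighted_sum_of_scored_items",
    # Where is the line between risk categories drawn?
    "low_risk_cutoff": 5.0,
}

for choice, value in DESIGN_CHOICES.items():
    print(f"{choice}: {value}  (a human judgement, not a mathematical necessity)")
```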


DISCUSSION ON THE USE OF PREDICTIVE ALGORITHMS IN ETHICALLY HIGH-STAKES DECISION-MAKING SCENARIOS

Now that we have established the possibility of bias existing in algorithms, and thus the problem with understanding them as innately objective and neutral, the further challenge is what we, as a society, do with this fact. Some, such as the American Civil Liberties Union, have suggested that the bias in PATTERN’s results is sufficient grounds to discontinue its use in evaluations of recidivism risk (ACLU, 2020). Others, such as M. Hamilton (a professor who studies risk assessments), think that the PATTERN algorithm is ‘worth saving’, as it is likely to still be less biased than human beings (NPR, 2022). Fundamentally, as Berk et al. (2021) express it, the question of what to do with algorithms like PATTERN comes down to making ethical trade-offs, for example: ‘How many unanticipated crimes are worth some specified improvement in conditional use accuracy equality?’ (ibid.). This is clearly an open question, and one that will receive a variety of valid responses. An important factor to consider in answering it is that ideals such as justice and fairness cannot be perfectly mapped onto the framework of predictive algorithms (Fazelpour and Lipton, 2020).
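The trade-off Berk et al. describe can be illustrated with a toy calculation. 'Conditional use accuracy equality' requires, roughly, that a 'high risk' flag be equally accurate across groups (for example, an equal positive predictive value). In the sketch below, which uses invented scores and outcomes, equalising that accuracy between two groups by raising one group's threshold leaves one additional reoffender unflagged, i.e. an extra 'unanticipated crime'.

```python
# Synthetic illustration of the trade-off Berk et al. describe. 'Conditional use
# accuracy equality' asks that a 'high risk' flag mean the same thing for every
# group, e.g. an equal share of flagged people who actually reoffend (PPV).
# All scores and outcomes below are invented for exposition.

group_a = [(7, True), (6, True), (6, False), (3, False), (2, True), (1, False)]
group_b = [(8, True), (7, True), (6, False), (5, True), (5, False), (4, True), (2, False)]

def ppv_and_missed(records, threshold):
    """Return (PPV among those flagged as high risk, reoffenders NOT flagged)."""
    flagged = [(s, r) for s, r in records if s >= threshold]
    missed = sum(r for s, r in records if s < threshold)
    return sum(r for _, r in flagged) / len(flagged), missed

# Same threshold for both groups: the accuracy of a 'high risk' flag differs by group.
print(ppv_and_missed(group_a, 5))  # PPV about 0.67, 1 reoffender missed
print(ppv_and_missed(group_b, 5))  # PPV about 0.60, 1 reoffender missed

# Raising group B's threshold equalises PPV, but one more reoffender goes unflagged:
print(ppv_and_missed(group_b, 6))  # PPV about 0.67, 2 reoffenders missed
```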


It is the argument of this essay that these imperfect algorithms should not yet be relied upon in ethically high-stakes scenarios, and the decision on whether or not to grant a person their freedom is certainly one of these scenarios. Such a decision is life-changing and should, at present, be made by people who can explain and justify it and, ultimately, be held accountable. As we saw from the LAPD spokesperson’s statement, it is too easy for responsibility to be offloaded from human to machine. As such, some scholars have argued that the purpose and function of these algorithms should be reconsidered. C. Barabas has argued that algorithms should be used not to measure ‘criminal proclivities’ but instead to hold state officials to account over their decision-making (Barabas, 2020). Whilst there is productive potential in this suggestion, it, too, seems plagued from the outset by the likelihood of controversial inaccuracies arising. Fundamentally, these algorithms do not yet seem just or fair enough to be relied upon to make any ethically high-stakes decisions. As a result, I argue that, for the moment, human judgement should remain the mainstay.


CONCLUSION

Though significant debate continues, the PATTERN algorithm remains in use at present (DOJ, 2021). Some will argue that this is justified: it is better to continue contending with these algorithms, given that they are ‘here to stay’. Although this is true, it seems cavalier to do so in scenarios as high-stakes as deciding on an individual’s freedom. It is perhaps more morally useful to contend with the behaviour of these algorithms in more forgiving circumstances before consolidating their use in ethically high-stakes scenarios.


In sum, this essay has argued that predictive algorithms are not perfectly neutral and objective and should not yet be used to make ethically high-stakes decisions.







REFERENCES

ACLU. (2020). Coalition letter on the use of the PATTERN risk assessment in prioritizing release in response to the COVID-19 pandemic. [online] Available at: <https://www.aclu.org/letter/coalition-letter-use-pattern-risk-assessment-prioritizing-release-response-covid-19-pandemic> [Accessed 3 March 2022].


Barabas, C. (2020). Beyond Bias: “Ethical AI” in Criminal Law. In: Dubber, M.D., Pasquale, F. and Das, S. (eds.) The Oxford Handbook of Ethics of AI, pp. 1-21. doi: 10.1093/oxfordhb/9780190067397.013.47


Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press. ProQuest Ebook Central, pp. 53-66. Available at: <https://ebookcentral.proquest.com/lib/ed/detail.action?docID=5820427> [Accessed 7 March 2022].

Berk, R. et al. (2021). Fairness in Criminal Justice Risk Assessments: The State of the Art. Sociological Methods & Research, [online] 50(1), pp. 3-44.


Cyphert, A.B. (2020). Reprogramming Recidivism: The First Step Act and Algorithmic Prediction of Risk. Seton Hall Law Review, 51(2), pp. 331-381. Available at: <https://scholarship.shu.edu/cgi/viewcontent.cgi?article=1773&context=shlr> [Accessed 7 March 2022].


Fazelpour, S. and Lipton, Z. (2020). Algorithmic Fairness from a Non-ideal Perspective. AIES 2020: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 57-63.

Federal Bureau of Prisons (BOP). (2020). FSA Update: AG released update on the new Risk and Needs Assessment System. [online] Available at: <https://www.bop.gov/resources/news/20200115_fsa_update.jsp> [Accessed 4 March 2022].


Johnson, C. (2022). Flaws plague a tool meant to help low-risk federal prisoners win early release. National Public Radio (npr.org). [online] Available at: <https://www.npr.org/2022/01/26/1075509175/justice-department-algorithm-first-step-act> [Accessed 3 March 2022].


Peeples, L. (2021). What the data say about police brutality and racial bias — and which reforms might work. Nature.com. [online] Available at: <https://www.nature.com/articles/d41586-020-01846-z> [Accessed 6 March 2022].


Spaulding, N.W. (2020). Is Human Judgment Necessary? Artificial Intelligence, Algorithmic Governance, and the Law. In: Dubber, M.D., Pasquale, F. and Das, S. (eds.) The Oxford Handbook of Ethics of AI. doi: 10.1093/oxfordhb/9780190067397.013.25


Sheehey, B. (2021). Technologies of Incarceration, COVID-19, and the Racial Politics of Death. Blog of the APA. [online] Available at: <https://blog.apaonline.org/2021/06/07/technologies-of-incarceration-covid-19-and-the-racial-politics-of-death/> [Accessed 6 March 2022].


The Urban Institute. (2021). The First Step Act's Risk Assessment Tool. [online] Available at: <https://apps.urban.org/features/risk-assessment/> [Accessed 7 March 2022].


U.S. Department of Justice. (2021). 2021 Review and Revalidation of the First Step Act Risk Assessment Tool. NCJ 303859. National Institute of Justice, [online] pp. 3-46. Available at: <https://www.ojp.gov/pdffiles1/nij/303859.pdf> [Accessed 7 March 2022].


