
Are Automated Weapon Systems evil-in-themselves?

Updated: Apr 7, 2023

Introduction

Automated Weapon Systems (AWS) have long been consigned to the realm of science fiction. However, they are now becoming an increasingly realistic method of waging war in the modern world. This possibility has produced a flourishing literature on their ethicality, with debates developing around such aspects of AWS as their technical limitations and dangers (Asaro 2020, in Liao 2020), the potential responsibility gap between moral agent and weapon system (Sparrow 2007), and the ethical consequences that could result from the economic and social changes that AWS would bring about (Hoffman et al. 2016).


That being said, in this essay I will focus on the ethicality of using these systems in the first place. Whilst a substantial amount of work has been done on the ethical consequences of AWS, there is also an important discussion concerning the ethicality of using these systems per se. One of the scholars leading the way in these debates is Robert Sparrow, who has provided interesting arguments on the intrinsic immorality of AWS. This essay will respond to the idea that AWS are intrinsically immoral or, as Sparrow puts it, mala in se (evil in itself).


The response will feature an explication of mala in se and, in particular, an interpretation of its theoretical foundation. This will allow us to evaluate Sparrow's claim that AWS are evil-in-themselves. The resulting conclusion will be that this understanding of AWS is incorrect, owing to a misunderstanding concerning the presence of agency in the use of AWS, which, as we shall see, is important to evaluations of mala in se. In sum, I will argue that AWS should be understood as possessing extended-agency and, as such, should not be seen as mala in se. These claims have important implications for responsibility in AWS, as they reaffirm the idea that, at present, human beings are still responsible for the behaviour of these systems.


Mala in se

The first task of this paper is to explore and clarify the meaning of the term mala in se and describe how it relates to the debate on the ethicality of using AWS. Latin for 'evil in itself', mala in se is used to describe those actions that are "literally 'wrong in themselves'" (Roth and Roberts, 2014). The term is contrasted with mala prohibita, which means 'wrong as prohibited' (ibid.). Both phrases are rooted in criminal law and are used to distinguish between two categories of criminal action (ibid.). For example, murder is understood as an action that is mala in se, while trespassing on private property would be understood as mala prohibita, given that the action's wrongness results from defying laws that prohibit it. While the meaning of mala in se is simple, it is difficult to pin down the justification behind it. What is being appealed to when something is labelled as evil-in-itself? Divine law, natural law and social consensus have all been offered as foundations for such claims, though each has been criticised and rejected in one way or another (ibid.). As such, this essay will outline an alternative theoretical foundation for the term mala in se, before offering an interpretation of it. This will allow us to situate the term more precisely within the debate on AWS.


Morten Dige, in his paper on the principle of mala in se, describes in detail his interpretation of the basis behind the term, and does so in the context of 'just war' theory (Dige 2012). He explains that this basis can be seen in the related ethical principle of proportionality (ibid.). The principles of proportionality and discrimination guide evaluations of the justness of conduct in war (Lazar, 2020). In brief, if an action does not discriminate between combatants and non-combatants in war, or is not ethically proportionate to the advantage it secures, then it is an unjust action (ibid.). After analysing these principles in relation to mala in se, Dige concludes that 'vertical disproportionality' is most indicative of actions that are mala in se. Vertical disproportionality accounts for actions that are extreme in the depth (as opposed to the breadth) of harm they cause to their victims. Thus, regardless of whether a harmful action discriminates between combatants and non-combatants, or is horizontally proportionate[1] to the advantage of the action, the action is mala in se if the harm caused to the person is excessive (Dige 2012). Dige sees an action as excessively harmful, or vertically disproportionate, when it "[destroys] the basis of humanity in the victim" (ibid.). This idea of 'destroying the basis of humanity in the victim' is important to explain, as it provides the basis behind the term mala in se. This basis can then be used to inform our answer to the question of whether AWS are evil-in-themselves.


Humanity as the basis behind mala in se

In order to begin explaining the way 'humanity' and its destruction provide the basis behind the term mala in se, it will be useful to identify a definition of 'humanity'. I say a definition because the term is generally used to refer to human beings as a collective (Coupland 2001). However, this only shifts the task from defining the term humanity to defining the human being in general. One way to do this is to state that humanity is "that collection of features that make us distinctively human" (Johnson and Cureton 2022). Using this, and the work of other scholars, we can focus on at least one defining feature of the elusive human condition.


This feature can be revealed by interpreting Dige's discussion of the way that one's humanity is destroyed. It is suggested in his paper that rape destroys a person's humanity because it violates their "bodily integrity" and "entitlement to choose" their own "sex partners" (Orend 2001, in Dige 2012). One way to understand how this destroys one's humanity is by first recognising that the act of rape targets a specific aspect of the individual, namely their sexuality. A similar targeting occurs in the act of torture, in that it specifically targets the sensitive aspect of the individual without considering the other aspects of their condition.[2] In both cases, the acts intentionally attack one part of a person and neglect the individual as a whole. In other words, they reduce the individual to only one part or aspect of their condition as a human being. In torture and rape respectively, you are related to as merely a sensitive and a sexual entity.


This sheds light on at least one feature of the human condition: we are made up of many aspects or parts, and we are all of those aspects at once. This idea is implicit in Anthony Coates' conceptualisation of the enemy (Coates, 2006 in Dige, 2012). Coates argues that we have to understand the enemy in a limited sense, for "the enemy is never an enemy in totality" (ibid.). To be clear: while I am my sexuality, I am also my sensitivity, my mortality, my identity as an enemy, as a father, a civilian and a friend. Thus, when you treat an individual as if their whole being were only the part that you are targeting, you disregard their human condition and, resultantly, their humanity.


To conclude my interpretation of Dige's discussion of humanity, it will be useful to summarise two ways in which it can be disregarded. To do this, I will situate Dige's idea of 'rational quarrel' within my summary (Dige, 2012). One disregards the humanity of an individual when they:


1) target an aspect of their identity that they could have no ‘rational quarrel’ with (e.g. sexuality, sensitivity, fatherhood),


and/or


2) target an aspect of their identity that they can have a rational quarrel with (e.g., as enemy combatant), but do so whilst neglecting the fact that it is only one aspect of their condition as a human being (e.g., they are also a father, and a sensitive body).


An important point to extract from this interpretation is that central to something (an act or weapon) being mala in se is the way that it relates to the person. In other words, there is a certain way of relating to another human being that is evil-in-itself. This is the theoretical underpinning of the term mala in se. When an act is 'vertically disproportionate', it can be said to attack and ultimately disregard the humanity of the human being. In doing so, the act becomes evil-in-itself.


AWS as mala in se?

After that important yet lengthy discussion of the basis behind mala in se, it is now possible to explore its connection to AWS. Sparrow argues that the basis for AWS being evil-in-themselves is that AWS disrespect the humanity of the combatant (Sparrow, 2016). Drawing on the work of Nagel, he argues that this disrespect is conveyed by the failure of AWS to relate to combatants as 'subjects' (Sparrow 2016). Sparrow interprets this as meaning that an "interpersonal relationship" with the combatant is required, as it serves to acknowledge the combatant as a human being, and not just as a target (ibid.). Interestingly, both Sparrow and Dige note a connection between Kant's moral philosophy and the importance of acknowledging the humanity of the person (Sparrow, 2016; Dige, 2012). Note how Sparrow's argument on mala in se, like Dige's, is based on relating to a human being in a particular way. The moral importance of this relation is clearly akin to Kant's categorical imperative concerning the treatment of persons as ends and never merely as means. Kant is thus a clear inspiration behind both approaches to understanding mala in se.


Returning to AWS, it could be argued, then, that the reason why AWS are mala in se is that they are unable to relate to the target as a subject, a human being. The argument would go as follows: these weapon systems are programs; they are entities with no agency of their own, and as such will inevitably relate to only one aspect of a person, their existence as a target. However, in the next section of the paper, I will explain why this argument rests on a misconception concerning the lack of agency in AWS. When the argument is reinterpreted, it becomes apparent that these weapon systems possess an extended-agency. This renders AWS an expression of the human agents who program and, ultimately, create them. When this is understood, it appears that the relation between an AWS and a combatant is actually a relation between a programmer and a combatant, with the system acting as a proxy. This challenges the idea that AWS are mala in se because of their inability to relate to the human being as a 'subject'. The necessary relation does exist, and it exists because human agency is extended into these systems.


AWS as extended-agency

It is unquestionable that a human being is morally responsible for a death that results from attacking another human being with a knife. We do not claim that, because the knife has no agency, we are not treating the victim as a subject and are thereby disrespecting their humanity. Yet when it comes to AWS, there is a much more contentious debate surrounding their agency relative to our own. This contention is not unfounded. Owing to advances in machine learning and artificial intelligence more generally, it is becoming increasingly commonplace for machines to make decisions independently of human beings. The 'black box' problem of AI refers to the fact that programmers are not aware of the specific processes by which AI systems accomplish specified goals (Rai, 2020). In this sense, machines are making independent decisions in pursuit of a specified goal. When this goal is killing human beings, it is obvious why ethical concerns are raised.


However, it is the argument of this paper that regardless of the independent process by which AI systems achieve a goal, the goal itself is still entirely specified and constrained by human beings. The relation between programmer and AWS is therefore very strong. The programmer inputs data and detailed algorithms into the system in such a way that the system merely expresses these inputs. Without the programmers, there is no AI system. One could liken the relation between a programmer and an AI system to that between a puppet master and a puppet. In both cases, there is an active and a passive entity, and without the active entity, the passive entity does not function.


One could object that in the case of the programmer and the AI system, there is a behavioural separation: a person programs an AI system which then behaves in ways that the programmer is not aware of. This is not the case in the relation between the puppet and its master. However, the fundamental point is that the programmer, like the master, is a necessary condition for the behaviour of the AI system, or of the puppet. The behaviour of the system is still constrained by the intentional programming it has been given. As such, the intentions, and thus the agency, of the programmer are still manifest in the system.


HYPOTHESIS OF EXTENDED COGNITION

In order to support the argument that human agency can be said to exist within AI systems, it will be useful to discuss the theory of cognitive extension, championed by Andy Clark and David Chalmers in their paper The Extended Mind (Clark and Chalmers, 1998). In short, the theory claims that cognition, and possibly even beliefs, extend out into artifacts external to the human brain. The cause of this extension is the dense cognitive interaction that results from engagement between the brain and objects in the world; such engagement can be said to constitute a coupled cognitive system (ibid.). This challenged the idea that cognition exists solely in the brain. While the stronger claim that artifacts partly constitute cognition remains contested (Adams and Aizawa 2001; Rupert 2004), it is widely accepted that cognition is not confined to the brain and can be scaffolded by artifacts in the environment (Palermos 2014).


The relevance of this theory to the possibility of extended-agency in AWS is that it makes plausible the thought that properties of human activity (like cognition, but also agency) can extend into external artifacts. As such, the claim that human agency is present, by extension, in the behaviour of AWS is a plausible one. The consequence is that AWS should not be seen as mala in se, as the persons they target are apprehended as subjects by the human beings who programmed them. Whether the humanity of the human targets is considered is thus contingent on the way that other human beings, and not AI systems, choose to relate to them.


CONCLUSION

In sum, this essay has argued that, contrary to Sparrow's argument, AWS should not be seen as mala in se. To make this claim, I began by exploring the theoretical foundation of the term. This was captured in the idea of 'vertical disproportionality', a way of harming human beings that disregards or even destroys their humanity. An interpretation of this idea was then explored and related back to the debate on AWS. This gave rise to the conclusion that AWS could be mala in se if they had no agency. However, it was argued that AWS possess an extended-agency, because of the necessary involvement of human agency in configuring the behaviour that AWS exhibit. This argument was supported by appeal to the hypothesis of extended cognition. Ultimately, the intentions and decisions of human beings live within these systems, and their behaviour, whether targeting persons or otherwise, is an expression of this.


BIBLIOGRAPHY


Adams, F. and Aizawa, K. (2001). The bounds of cognition. Philosophical Psychology, 14 (1). 44–63.


Asaro, P. (2020). Autonomous Weapons and the Ethics of Artificial Intelligence. In Liao, S. (ed.) (2020). Ethics of Artificial Intelligence. Oxford University Press. <https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190905033.001.0001/oso-9780190905033-chapter-8>


Clark, A. and Chalmers, D. (1998). The extended mind. Analysis, 58 (1). 7–19. <https://era.ed.ac.uk/handle/1842/1312> (Accessed 2022)


Coupland, R. (2001). Humanity: What is it and how does it influence international law? Revue Internationale de la Croix-Rouge/International Review of the Red Cross, 83 (844). 969–989. <doi:10.1017/S156077550018349X> (Accessed 2022)


Dige, M. (2012). Explaining the Principle of Mala in Se. Journal of Military Ethics, 11 (4). 318–332. <doi:10.1080/15027570.2012.758404> (Accessed 2022)


Hoffman, R., Cullen, T. and Hawley, J. (2016). The myths and costs of autonomous weapon systems. Bulletin of the Atomic Scientists, 72 (4). 247–255.


Johnson, R. and Cureton, A. (Spring 2022 Edition). Kant's Moral Philosophy. The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). <https://plato.stanford.edu/archives/spr2022/entries/kant-moral/> (Accessed 2022)


Lazar, S. (Spring 2020 Edition). War. The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). <https://plato.stanford.edu/archives/spr2020/entries/war/> (Accessed 2022)


Palermos, S. O. (2014). Loops, constitution, and cognitive extension. Cognitive Systems Research, (27). 25–41. <https://doi.org/10.1016/j.cogsys.2013.04.002> (Accessed 2022)


Rai, A. (2020). Explainable AI: from black box to glass box. Journal of the Academy of Marketing Science, 48 (1). 137–141. <https://doi.org/10.1007/s11747-019-00710-5> (Accessed 2022)


Roth, J. and Roberts, J. (2014). Mala in Se and Mala Prohibita. In Arrigo, B. A. (ed.) (2014). Encyclopedia of Criminal Justice Ethics. SAGE Publications, Inc. 566–567. <https://dx.doi.org/10.4135/9781452274102.n203> (Accessed 2022)


Rupert, R. (2004). Challenges to the Hypothesis of Extended Cognition. The Journal of Philosophy, 101 (8). 389–428.

Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24 (1). 62–77.


Sparrow, R. (2016). Robots and Respect: Assessing the Case against Autonomous Weapons Systems. Ethics & International Affairs, 30 (1). 93–116.


Sparrow, R. (2016). Robots as "Evil Means"? A Rejoinder to Jenkins and Purves. Ethics & International Affairs, 30 (3). 401–403.



[1] Proportionate in breadth.

[2] Utilising the Aristotelian understanding of the term: 'sensitive' relates to the human capacity of sensation.
