The Black Mirror episode entitled Hated in the Nation (hereafter referred to as HitN) revealed how easy it becomes for humans to judge one another when aided by the power of social media; people could easily condemn others while hiding behind relative anonymity and camouflaging themselves within the opinion of the majority. HitN turned humans’ proclivity to judge others online into a literally life-threatening tendency by introducing the Granular company and its Autonomous Drone Insects (or ADIs). The ADIs were originally intended to replace bees but were hijacked by a man named Garrett Scholes and used to cause the deaths not only of publicly “hated” individuals, but also of the “haters” (that is to say, those people who condemn others online) themselves. The “haters” determine who the “hated” are through an indirect election system via social media: whoever receives the most #DeathTo tags in a single day is marked for death and is soon killed by a rogue ADI that burrows into that person’s brain. The “haters” were ultimately killed off, however, when it was revealed that using the #DeathTo tag marks one as a target of the ADIs as well, causing the deaths of hundreds of thousands of people.
I argue in this paper that the people who used the hashtag #DeathTo (the “haters”) are at least as morally accountable for the deaths of the “hated” as Granular’s ADIs, if not more so. I will first explain Moor’s idea of degrees of moral agency, as well as Hardalupas’ systemic account of moral agency, before using those two concepts to argue my thesis. This will be followed by a synthesis of the results of my analysis, as well as a conclusion summarizing my main points of argument along with my other insights on the subject of this paper.
The level of connectivity that our world has achieved in the last few decades, thanks to the invention of the internet and social media, has brought humans closer than ever before. It is unfortunate, therefore, that we have managed to bring some of our divisive impulses into this new online frontier as well. This paper aims to show that we are no less accountable for the things that we say and do online than in “real life”, and I hope to impart this realization upon at least a few readers.
Degrees of Moral Agency
Moor’s (2006) paper on machine ethics made a distinction between full moral agency and an indirectly defined “partial” moral agency. His conception of moral agency supposed that it exists on a spectrum, and he posited that there are different degrees to which one could be considered a moral agent. Moor defined full moral agency as the capacity to explicitly formulate moral judgements and rationally defend them, a capability that humans are said to possess, thus making them full moral agents. However, Moor argued that machines could be considered moral agents as well, in that one could program a machine to be implicitly ethical; that is to say, a machine is ethical because its algorithm prevents it from making unethical decisions. A concrete example of a machine acting as an implicit ethical agent is a pacemaker made to correct a patient’s irregular heartbeat; the pacemaker is designed to consistently keep the heart beating at a steady pace because otherwise the patient will die. Another example is the technology behind aircraft autopilot systems. Such a system is designed to correctly maintain an airplane’s flight trajectory without much help from a human pilot, a function that could easily result in a plane crash and the death of hundreds if not executed properly. Though machines’ bases for ethical judgement come from humans, Moor still considers them partial moral agents in that, to some degree, they function with respect to human ethics.
Moor’s conception of machine ethics lays the groundwork for Hardalupas’ (2017) systemic account of moral agency, in that the latter assumes that different degrees of moral agency exist. A background illustrating how an entity could be neither a full moral agent nor a non-moral agent is necessary for one to understand the premise of Hardalupas’ theory of moral agency.
Systemic Account of Moral Agency
Hardalupas’ (2017) account of moral agency veers away from traditional accounts (as per Hardalupas) such as that of Parthemore & Whitby (2013), where moral agency is supposed to apply only to an entity that 1.) has a concept of self and the capability to self-reflect, and 2.) has conceptual agency, that is, the capacity to freely apply its abilities and propositional knowledge in response to different external stimuli. Hardalupas’ systemic account of moral agency builds upon Moor’s (2006) concepts of partial and full moral agency and puts forth four base criteria: when all four hold within the smallest possible set of entities (X) in a given circumstance, that set could be considered a full moral agent of an action (A) even if the set (X) is composed of partial moral agents. The four criteria are as follows, ordered by the ascending degree to which each makes an entity a moral agent:
I. X acts in a way A that is evaluated with respect to moral rules
II. X follows moral rules
III. X has the potential to follow different rules
IV. X has a moral motivator
Additionally, Hardalupas (2017) stated that there are two kinds of moral motivators: weak and strong. A weak moral motivator entails “having a reason to believe the act A of X is moral”, while a strong moral motivator entails “having a reason to believe the rules X follows are moral”.
Consider this adaptation of the Otto and Inga thought experiment originally by Clark and Chalmers (1998). Suppose there are two entities named Jack and Sally. Sally is unable to act upon her moral decisions, so she has resolved to help people through others by creating a book that contains all of the possible moral judgements that any agent could make in response to any circumstance they might face. Jack is unable to make moral decisions by himself and purchases Sally’s book of morals, which he eventually uses in deciding whether or not to help resuscitate a drowned man in need of CPR (action A), an action which Jack soon performs. Under accounts of moral agency such as that of Parthemore & Whitby (2013), neither Jack nor Sally is a moral agent responsible for action (A) because both lack conceptual agency: the former is unable to formulate his own moral decisions and the latter cannot act upon hers. Under the systemic account of moral agency, however, Jack and Sally are components of a system (X) that collectively fulfills the criteria that Hardalupas put forward. Though Jack fulfills only criteria I, II, and III, since he merely follows Sally’s moral book, he is in a system with Sally, who fulfills criteria II, III, and IV, thus making the system (X) to which they both belong a full moral agent responsible for the revival of the drowned man (action A).
Analysis of Thesis Statement
My thesis statement, that the people who used the hashtag #DeathTo (the “haters”) are at least as morally accountable for the deaths of the “hated” as Granular’s ADIs, if not more so, could be proven true using Hardalupas’ account of moral agency because the “haters” (the #DeathTo users) and Granular’s ADIs both belong to the same set (X) that collectively caused the deaths of the “hated” (action A). Consider the following arguments:
I. Can the ADIs’ murder of the “hated” be evaluated with respect to moral rules?
a. Yes. Singer’s (2011) Practical Ethics clearly states multiple moral bases that one could use to gauge the morality of homicide such as preference utilitarianism, human rights, and even hedonism, among others.
II. Do ADIs follow moral rules?
a. Yes. Moor’s (2006) study on machine ethics stated that machines could become implicit moral agents when human programmers create the machines’ operational algorithms in such a way as to prevent them from functioning in a potentially unethical manner. This limiting factor was mentioned by the HitN character Rasmus Sjoberg when he stated that ADIs are designed to simply drop to the ground when they malfunction, virtually removing their potential to cause further harm to others in their broken state.
III. Do the ADIs have the potential to follow different rules?
a. Yes. Garrett Scholes somehow managed to hack the ADIs and change their programming from simply emulating normal honeybees to perpetrating the murders enabled by the “haters’” use of #DeathTo. The extent to which ADIs could potentially follow different rules is, of course, limited to how humans are able to modify their programming (Moor, 2006).
IV. Do the ADIs have a moral motivator?
a. No. Having a moral motivator requires X to have the capacity to believe in something in the first place, as X’s belief in the morality of its actions and of the rules upon which such actions are based is the cornerstone of establishing a moral motivator (Hardalupas, 2017). The ADIs were not designed to carry out such functions as ‘believing’; their sole purpose was to act as a replacement for bees and facilitate the pollination of plants, nothing more.
I. Can the “haters’” indirect contribution to the deaths of the “hated” be evaluated with respect to moral rules?
a. Yes. Dubljević et al. (2018) argued that our moral judgement of others comes from an unconscious moral intuition formed by processing Agents (the ones enacting an action), Deeds (the actions performed by Agents), and Consequences (the outcomes that arise from an Agent’s Deed) through the application of concepts from moral theories like virtue ethics, utilitarianism, and deontology.
II. Do the “haters” follow moral rules?
a. Yes. Both Moor (2006) and Parthemore & Whitby (2013) consider humans to possess consciousness of themselves and of their deeds, and to be free to perform and justify their moral actions. The concepts that humans use to guide their actions are moral rules that the “haters” naturally follow as well.
III. Do the “haters” have the potential to follow different rules?
a. Yes. Parthemore and Whitby (2013) state that one of the factors that clearly separates humans from machines as moral agents is the former’s capability to act differently even when faced with circumstances similar to those they have experienced in the past. This flexibility in humans’ behavioral patterns is a clear indicator of their capability to follow different ethical rules.
IV. Do the “haters” have a moral motivator?
a. Yes. The “haters” have a reason to believe that their part in the murder of the “hated” is moral, as evidenced by the HitN character Liza Bahar’s response when confronted about the death of the journalist Jo Powers, whom she had tagged in a #DeathTo post: “…I know that she is dead. But did you read what she had written?”, the latter sentence referring to Powers’ scathing article about a physically disabled activist. It could be said that Bahar believes, to some extent, that the death of Jo Powers is justified by the latter’s deeds. Additionally, the “haters” also have a reason to believe that the rules they follow are moral: the bases of their moral judgement towards the “hated” rest upon the morally unacceptable actions that they perceive the “hated” to have committed. Jo Powers was condemned because of her negative article about the disabled activist. Tusk, a rapper and another victim of the #DeathTo tag, berated a child fan’s video, saying that the latter “can’t dance for sh*t”. Clara Meades, yet another victim of the #DeathTo tag, was condemned after a picture of her apparently mocking a war memorial surfaced online.
Looking back at Hardalupas’ conception of the systemic account of moral agency, the primary requirement for a set (X) to be considered a full moral agent accountable for an action (A) is for all of the components of set (X) to collectively fulfill all four of the moral agency criteria present in the systemic account. The set (X) composed of the ADIs and the “haters” is a full moral agent: though the ADIs do not fulfill the fourth criterion (the presence of a moral motivator), they belong to the same set (X) as the “haters”, who fulfill all of the criteria, thus making the ADIs and the “haters” full moral agents of the deaths of the “hated” (action A) when categorized as components of the same set (X).
The analysis has also shown that though the “haters” belong to the same set as the ADIs and could already be considered a full moral agent under that premise, Hardalupas (2017) specified that the smallest set (X) that fulfills all four criteria could be considered a full moral agent. If we consider the “haters” to belong to a separate set (Y) of which they are the sole component and ask whether they are fully accountable for the deaths of the “hated” (action A), they could still be considered a full moral agent because they fulfill all of Hardalupas’ criteria on their own. It is in this regard that one could say that the “haters” bear more moral accountability for action (A) than the ADIs.
My thesis statement, that the people who used the hashtag #DeathTo (the “haters”) are at least as morally accountable for the deaths of the “hated” as Granular’s ADIs, if not more so, is justified under Hardalupas’ (2017) systemic account of moral agency if we consider the ADIs and the “haters” to belong to the same set (X). Set (X) is a full moral agent responsible for the deaths of the “hated” (action A): although one of its components is only a partial moral agent (the ADIs are incapable of having a moral motivator and so fail to fulfill criterion IV), the “haters” themselves are already full moral agents (they fulfill all four criteria), thus making every component of the set they belong to a full moral agent as well. In terms of the degree of liability of the components of set (X), the “haters” are more accountable for action (A), seeing as they were already full moral agents themselves even without taking the ADIs into account.
But as terrifying as the ADIs are, they are but a representation of the profound effects that our words can have on other people. Let us always remember that we are not the only ones thinking that a small passing comment online won’t hurt anyone; a heavy storm is nothing but a collection of thousands of tiny raindrops, and an insult that seems minor in our eyes may weigh heavily on a heart already burdened by others’ spite. I am not encouraging anyone to refrain from calling out rape, pedophilia, or government corruption, among other social issues, nor am I against freedom of expression. I am merely reminding every one of us to be mindful of our behavior, whether in an online or a personal setting, and to always be aware of the weight of our words.
Written by Limore Aguhar
References
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
Dubljević, V., Sattler, S., & Racine, E. (2018). Deciphering moral intuition: How agents, deeds, and consequences influence moral judgment. PLoS ONE, 13(10), 1–28. https://doi.org/10.1371/journal.pone.0204631
Hardalupas, M. (2017). A systematic account of machine moral agency. In V. Müller (Ed.), Philosophy and Theory of Artificial Intelligence (pp. 252–254). Cham, CH: Springer.
Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.
Parthemore, J., & Whitby, B. (2013). What makes any agent a moral agent? Reflections on machine consciousness and moral agency. International Journal of Machine Consciousness, 5(2), 105–129.
Singer, P. (2011). Practical ethics. Cambridge University Press.