Lethal Autonomous Weapons (LAWs) are robotic weapon systems designed to identify, target and attack designated threats independently of human control. They are strong candidates to become the weapons of the future, offering possible advantages over human soldiers such as precision targeting and enhanced situational awareness.
This blog assesses the ethical dilemmas an engineer would face in their development, using the contrasting viewpoints of the utilitarian and deontological approaches. Utilitarianism focuses on the overall consequences of an action, while deontology assesses the morality of the action itself.
The Greater Good?
Whenever automated weapons are discussed, there is always mention of I, Robot or Skynet-style uprisings. The reality is very different: self-aware artificial intelligence (AI) is well beyond the scope and requirement of LAWs. However, they could still maximise the wellbeing and happiness of the majority of people, a key aspect of utilitarian thinking. The benefits of LAWs correspond with utilitarian principles by reducing the casualties of war and enhancing national security.
“LAWs can ultimately perform more ethically than human soldiers due to their ability to take a ‘first do no harm’ approach”
(Ronald Arkin, Georgia Institute of Technology)
The lack of deep-rooted survival instincts in a robot creates the potential for more humane warfare. Naturally, a soldier on the front line will follow the logic of ‘kill or be killed’. This pressure can lead to a breakdown of moral judgement, or even breaches of international humanitarian law: ignoring accepted rules of engagement, endangering civilians, or using excessive force. With LAWs, however, programmers have the opportunity to enforce an ethical rationale on the battlefield.
Ronald Arkin, a robotics expert at Georgia Tech, believes that ‘LAWs can ultimately perform more ethically than human soldiers due to their ability to take a first do no harm approach’. LAWs will not act in fear for their own safety or in vengeance for injured comrades, and ultimately cannot break the rules of governance programmed by the engineer. This reduces the chance of harm, and of conflict escalating through unnecessary engagement, which benefits both sides.
Egoism, a consequentialist theory closely related to utilitarianism, judges self-interest as the basis of whether an action is right or wrong. From this perspective, contributing to the development of LAWs could be viewed as acceptable: a government deploying LAWs would be acting in its own self-interest, increasing the security of its own country and therefore enhancing the happiness of its people. Acting on self-interest extends to removing soldiers from direct warfare, preventing physical and mental harm. According to a study by the RAND Corporation1, as of 2014 more than 20% of soldiers who fought in Iraq or Afghanistan had developed PTSD and/or depression. Upon returning home, PTSD has a considerable negative impact on the lives of the soldiers and those around them. The utilisation of LAWs could therefore maximise the happiness of those affected.
A prime example of where this technology could be beneficial is the Indo-Pakistani border, where skirmishes break out through perceived aggression from either side or action taken out of fear for one’s own safety. LAWs patrolling the border would not exhibit emotional responses and could therefore reduce the chance of conflict, leading to an overall increase in the happiness of the more than 1.4 billion people who occupy these states.
Stop the Killer Robots
Elon Musk has described AI as “our biggest existential threat”, while Stephen Hawking warned that “the development of full AI could spell the end of the human race”. Whilst this may sound extreme, there is clear concern from experts that engineering robots with the ability to make ethical decisions may have detrimental consequences for the human race. This has led to the “Stop Killer Robots”2 campaign, which has been in motion since 2013 and has attracted several high-profile supporters, notably Steve Wozniak, Elon Musk and Stephen Hawking. We have already seen semi-autonomous weapons used in the Syrian conflict, where computer-guided drone missiles have missed their targets, resulting in the loss of civilian life and subsequent legal action3. Another case4 details the US ‘targeted killing programme’, in which drone strikes aimed at 41 human targets resulted in the deaths of 1,147 people in total, including civilians.
“Anyone caught in the vicinity is guilty by association… when a drone strike kills more than one person there is no guarantee that those persons deserved their fate… So it is a phenomenal gamble”
(The Drone Papers, The Intercept)
From an ethical perspective, the case against the development of LAWs can be regarded as a deontological argument, where the focus is on the action rather than the consequence: here, the decision by a computerised system to take the life of a human being, regardless of the potential consequences. Murder is immoral in all cases and prohibited by international law. There are, however, exceptions in declared armed conflicts, where killing is lawful in certain circumstances. Whether a killing in combat is regarded as murder depends on the application of concepts such as intention, rights, legitimacy and justice. In basic terms, accountability and responsibility are the main arguments against LAWs.
“A lack of accountability in any situation is bad, but during wartime it is worse”
(The Ethics of Autonomous Weaponry)
Science fiction has often portrayed robots as capable of showing human emotions (see: ‘The Terminator’). However, “it would be impossible for scientists to develop a computer as complex as the human brain, thus emotions of a machine would be extremely limited5”. This makes the concepts mentioned above difficult to define for a robot, and it would therefore be impossible to: 1) apportion blame, and 2) determine whether any resulting death is lawful. Could the engineer be held accountable?
To add to this, the significant advantages of LAWs mean that their development by a single country could trigger an arms race akin to that of the Cold War6. Events of this kind represent the main fear of the “Stop Killer Robots” campaign, which stresses that “Governments need to listen to the experts’ warnings and work with us to tackle this challenge together before it is too late”.
With LAWs becoming a growing reality, engineers face an ethical minefield if they are to contribute to the advancement of autonomous weapons. Following the frameworks presented above, is it justified, under any circumstances, for an engineer to use their skills in the development of LAWs?
Group 60: James Ratcliffe, Daanua; Durrani, Nathan Patel, Hinesh Patel