“Killer Robots” – Weapons of the Future?

Lethal Autonomous Weapons (LAWs) are robotic systems designed to target and attack designated threats independently of human control. They stand a strong chance of becoming the weapons of the future, offering potential advantages over human soldiers such as precision targeting and enhanced situational awareness.

This blog assesses the ethical dilemmas that an engineer would face in their development, using the contrasting viewpoints of the utilitarian and deontological approaches. Utilitarianism focuses on the overall consequences of an action, while deontology assesses the morality of the action itself.

The Greater Good?

Whenever automated weapons are discussed, there is always mention of I, Robot or Skynet-style uprisings. The reality is very different: self-aware artificial intelligence (AI) is well beyond the scope and requirements of LAWs. Nevertheless, they could still maximise the wellbeing and happiness of the majority of people, a key aspect of utilitarian thinking. The benefits of LAWs align with utilitarian principles by reducing the casualties of war and enhancing national security.

“LAWs can ultimately perform more ethically than human soldiers due to their ability to take a ‘first do no harm’ approach”

(Ronald Arkin, Georgia Institute of Technology)

The lack of deep-rooted survival instincts in a robot opens the potential for more humane warfare. Naturally, a soldier on the front line will follow the logic of ‘kill or be killed’. This pressure can lead to a breakdown of moral judgement or even breaches of international humanitarian law: ignoring accepted rules of engagement, endangering civilians, or using excessive force. With LAWs, however, there is an opportunity for programmers to enforce ethical rationale on the battlefield.

Ronald Arkin, a robotics expert at Georgia Tech, believes that ‘LAWs can ultimately perform more ethically than human soldiers due to their ability to take a first do no harm approach’. LAWs will not act out of fear for their own safety or out of vengeance for injured comrades, and ultimately they cannot break the rules of governance programmed into them by the engineer. This reduces the chance of harm, and of escalation through unnecessary engagement, which benefits both sides.
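To make the idea of engineer-programmed ‘rules of governance’ concrete, here is a deliberately simplified, entirely hypothetical sketch of what a pre-engagement check embodying a ‘first do no harm’ default might look like. The class, field names, and confidence threshold are all illustrative assumptions, not any real weapons system or standard:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A detected entity as classified by the system's sensors (hypothetical model)."""
    is_combatant: bool     # positively identified as a lawful military target?
    confidence: float      # classification confidence, 0.0 to 1.0
    civilians_nearby: int  # estimated civilians within the area of effect

def may_engage(contact: Contact, min_confidence: float = 0.99) -> bool:
    """'First do no harm': refuse to engage unless every constraint is satisfied.

    Unlike a soldier acting under 'kill or be killed' pressure, the default
    answer here is always 'do not engage' -- uncertainty means restraint.
    """
    if not contact.is_combatant:
        return False  # never target non-combatants
    if contact.confidence < min_confidence:
        return False  # insufficient certainty defaults to holding fire
    if contact.civilians_nearby > 0:
        return False  # no engagement while civilians are at risk
    return True
```

The point of the sketch is not the specific checks but the asymmetry: every failed condition falls back to non-engagement, whereas a human under fire has no such guaranteed default.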

From the perspective of egoism, a related branch of consequentialist ethics, contributing to the development of LAWs could also be viewed as acceptable. Egoistic ethics take self-interest as the basis for judging whether something is right or wrong. On this view, a government deploying LAWs would be acting in its own self-interest, increasing the security of its country and therefore the happiness of its people. The case from self-interest is strengthened by the removal of soldiers from direct warfare, preventing physical and mental harm. According to a study by the RAND Corporation [1], as of 2014 more than 20% of soldiers who fought in Iraq or Afghanistan developed PTSD and/or depression. Upon returning home, PTSD has a considerable negative impact on the lives of soldiers and those around them. The use of LAWs could therefore maximise the happiness of those affected.

A prime example of where this technology could benefit is the Indo-Pakistani border, where skirmishes break out through perceived aggression from either side or action out of fear for one’s own safety. LAWs patrolling the border would not exhibit emotional responses and could therefore reduce the chance of conflict, leading to an overall increase in the happiness of the more than 1.4 billion people who occupy these states.

Stop the Killer Robots

“AI is our biggest existential threat and the development of full AI could spell the end of the human race” (Stephen Hawking). Whilst this may sound extreme, there is clear concern among experts that engineering robots to carry out ethical decisions may have detrimental consequences for the human race. This has led to the “Stop the Killer Robots” campaign [2], which has been in motion since 2013 and has attracted several high-profile representatives, notably Steve Wozniak, Elon Musk and Stephen Hawking. We have already seen semi-autonomous weapons used in the Syrian conflict, where computer-guided drone missiles have missed their targets, resulting in the loss of civilian life and subsequent legal action [3]. Another case [4] details the US ‘targeted killing programme’, in which drone strikes aimed at 41 human targets resulted in a total of 1,147 deaths, including civilians.

“Anyone caught in vicinity is guilty by association… when a drone strike kills more than one person there is no guarantee that those persons deserved their fate… So it is a phenomenal gamble”

(The Drone Papers, The Intercept)

From an ethical perspective, the case against the development of LAWs can be regarded as a deontological argument, where the focus is on the action rather than the consequence: here, the decision by a computerised system to take the life of a human being, regardless of the potential consequences. Murder is in all cases immoral and prohibited by international law; however, in declared armed conflicts killing is lawful in certain circumstances. Whether a killing in combat is regarded as murder depends on the application of concepts such as intention, rights, legitimacy and justice. In basic terms, accountability and responsibility are the main arguments against LAWs.

“A lack of accountability in any situation is bad, but during wartime it is worse”

(The Ethics of Autonomous Weaponry)

Science fiction has often portrayed robots as capable of showing human emotions (see: ‘The Terminator’). However, “it would be impossible for scientists to develop a computer as complex as the human brain, thus emotions of a machine would be extremely limited” [5]. This makes the concepts mentioned above difficult to define for a robot, and therefore it would be impossible to: 1) apportion blame, and 2) determine whether any resulting death is deemed lawful. Could the engineer be held accountable?


To add to this, the development of LAWs by a single country could, given their significant advantages, lead to an arms race akin to the Cold War [6]. Such scenarios represent the main fears of the “Stop the Killer Robots” campaign, which stresses that “Governments need to listen to the experts’ warnings and work with us to tackle this challenge together before it is too late”.

With the growing reality of LAWs, engineers face an ethical minefield if they are to contribute to the advancement of autonomous weapons. Following the frameworks presented above, is it justified, under any circumstances, for an engineer to utilise their skills in the development of LAWs?

References:


[1] http://www.rand.org/content/dam/rand/pubs/monographs/2008/RAND_MG720.pdf

[2] https://www.theguardian.com/technology/2015/jul/27/musk-wozniak-hawking-ban-ai-autonomous-weapons

[3] https://www.rt.com/uk/316373-cameron-drone-airstrikes-syria/

[4] https://www.theguardian.com/us-news/2014/nov/24/-sp-us-drone-strikes-kill-1147

[5] https://www.academia.edu/3852148/The_Ethics_of_Autonomous_Weaponry_A_philosophical_analysis

[6] http://www.historylearningsite.co.uk/modern-world-history-1918-to-1980/the-cold-war/the-nuclear-arms-race/

 

Group 60: James Ratcliffe, Daanua Durrani, Nathan Patel, Hinesh Patel


34 thoughts on ““Killer Robots” – Weapons of the Future?”

  1. Some really good arguments here! I find it a bit hard to consider technology with ‘lethal’ in the title as something that aims to reduce harm. Yes, the robots can be programmed to enforce ethical rationale, but will they? In a time of war, will this really be a priority for those commanding them?


  2. Good article and a good summary of the pros and cons of LAWs. Often ethics is overlooked when it comes to engineering, but it should really be at the heart of every decision. I believe engineers are accountable for their decisions; not just the benefits to society but also the potential consequences.


  3. Would the implementation of LAWs really decrease accountability? As it stands today, governments can get away with almost anything if they wield enough power. During the Iraq war, soldiers in Abu Ghraib tortured detainees for months on end. Once revealed, the government blamed the entire atrocity on a few ‘bad apples’ and fired a few foot soldiers. None of the higher ups, including the secretary of defence, faced any meaningful punishments.


  4. I’m not entirely sure on this, because artificial intelligence could never have the implicit emotions a human has, no matter how far the field advances. They’ll never be able to replace that human decision making, and therefore we are just going to be relinquishing blame for whoever causes the next disaster (i.e. who would be held accountable?).
    But actually I thought it was a really interesting point that it will reduce the number of people with PTSD and depression. It’s a big thing at the moment, with soldiers being left with minimal care, and it would probably sway me more in favour of using these sorts of weapons if it had such a big effect on soldiers’ lives.


  5. Although I understand there are benefits for these weapons, would they not make military action easier, and thus lead to more deaths? What happens when one country monopolises these weapons? Or worse, when one of these malfunctions and kills civilians, even just one, will the engineer take responsibility for the deaths of innocents? If they are the ones creating a weapon that is programmed to choose its own target, I can’t see anyone else that could be held to blame.


    1. I strongly do not believe that the engineer could possibly be held responsible for any deaths caused at the “hands” of the autonomous robot. Would a bin man be held accountable for the pollution caused by landfill sites? The answer is no! They are an intermediate party paid to do a job and dictated by a higher authority. The accountability lies with the person who ultimately gives the green light on their employment.
      However, that being said, if deaths were caused by issues with the design or software of the autonomous robot, and these were down to negligence or poor engineering, the engineer should be held accountable for those deaths, and it should not just be passed off as a malfunction independent of human error.


  6. Very interesting, but would it really make any difference? They would be subject to the programming and the programmer. When things go wrong (or did they?), the excuses are already there!


  7. Right I’ve got a few points but I’ve no idea whether they will be any use to you or not.

    First off, I would say that the increased use of drones has led to a massive expansion of covert military action over conventional action, meaning that more and more military action is being undertaken in the ‘shadows’, allowing governments to reduce their accountability when something goes wrong. A good example of this happened recently in Yemen. For the past two years America has been using drones to bomb the shit out of people in Yemen, namely Iranian-backed Houthi rebels, more often than not killing civilians, but no-one has really spoken about it. However, Trump recently started using conventional special ops teams, and when they accidentally killed some civilians people went crazy. So governments are held more accountable for their mistakes when human error is involved than when a machine does it. Using machines will therefore allow governments to get away with killing civilians, which you could argue is immoral. It also strengthens the deep state, which lacks democratic legitimacy, but I think that might be a bit too complex to go into for your piece.

    On the India-Pakistan point, I think it would depend whether the machine on patrol was controlled by a person or programmed to be autonomous. If controlled by a person, then I would argue they’re more likely to fight, as the person controlling the drone/robot or whatever won’t be faced with a moral dilemma when choosing to shoot at another robot, whereas a person on patrol contemplating shooting would hesitate as a result of their conscience.

    Another thing to bear in mind is that cross-border scraps are not necessarily a bad thing, which might sound weird, but I’ll explain. Taking a region like Nagorno-Karabakh, which is even more dangerous than the India-Pakistan border, skirmishes provide a means of letting off steam. When people in a country start getting pissed off about the territory issue, the government lobs a few shells and mortars over, then everyone calms down again until next time, when they’ll do the same thing. Introducing drones means this kind of thing won’t happen, meaning people’s anger can stoke up and actually increase the likelihood of a full-blown war. So from that point of view, is it actually making people safer in the long run, or increasing the likelihood of a proper conflict?

    Like I say, no idea if that’s any help to you, but that’s what I thought of.


    1. I agree with the ‘greater good’ argument, there is an opportunity for making things better here.

      Surely the engineer designing the weapon is in a position of power here? They can set the ethical rules. If the technology is sufficiently developed that it supersedes human ability to judge warfare situations, the engineer could impose stricter ethical restrictions on the conflicts it engages in than we have at the moment. Also, if the technology is less prone to making mistakes than conventional methods of warfare, it could significantly reduce incidents such as friendly fire and accidental civilian casualties; this would be a massive benefit!

      I guess that the real ethical dilemma for the engineer is whether the technology they develop is good enough to improve the current situation and good enough to be able to adhere to the ethical restrictions that they impose on it.

      As far as the level of AI goes, there would have to be very strict restrictions on the ‘Intelligence’ programmed into the technology. For example incorporating abilities such as self preservation seem very risky… Again I think the ethical dilemma for the engineer is to decide whether or not the weapon they are designing actually improves warfare and how sure they can be that it won’t go wrong.


  8. Well written and good to see all different sides have been considered. Whilst I definitely agree that humans often let their emotions negatively take over, I think going towards no human decision making is risky generally. Also much easier and therefore maybe more likely to blindly press a button sending the robots off than to be on the front line in person.


  9. Understanding the need for advanced technology within future wars and conflict, I believe there will always be consequences to the welfare of all life. Can a robot be fully programmed to carry out ethical and emotional decisions at the right time without harming the innocent?
    But, if using LAWs will help soldiers and civilians caught within direct warfare, definitely improving their chance of survival, reducing physical and mental harm without any risk of malfunctioning, then this would be hugely beneficial.


  10. Great article, with balanced arguments for both sides. However, I still think we are a few years away from truly deploying and relying on LAWs in warfare. I think the lack of human emotion in robots will lead to more destruction.


  11. Interesting article. I would have thought that the ethical dilemmas for the engineer had been faced already with the decision to work in a profession creating weapons in the first place. Not something I could do. Therefore it then becomes about making weapons that cause the least death for the greatest impact. I can see there are advantages with the weapon, but removing emotions is always worrying. What if a regime puts innocent people in areas where they should not be, which has happened before? Don’t we need to be able to make a judgement call? I am of course assuming the people concerned have strong ethics, which with some of the current world leaders I wouldn’t be sure of. I can see both sides; it’s not a simple black-and-white answer, and I think each case would need a decision, so back to a human call!


  12. I honestly have never considered the moral dilemma of automated weapons, probably because I didn’t really know they were a thing! Having extensively studied the Cold War and the arms race within it, I’d say that anything threatening a further escalation would be a no-go. I also think that whilst the arguments for the weapons seem convincing (especially the one about loss of moral grounding in intense warfare, something that is a huge issue throughout history), they are relatively weak when weighed up against the possible total destruction automated weaponry could cause, and the fact that in my opinion you could never replace a human when it comes to the decision of when to kill and when not to; it’s just too complex. Overall though, a very good discussion; I definitely feel more informed than I did before reading it.


  13. I am become death, the destroyer of worlds.

    J. Robert Oppenheimer – creator of the atomic bomb.

    What about PTSD for the engineers that may possibly create weapons of mass destruction?


  14. One issue I have with the implementation of LAWs, and the ethics in relation to the programming engineers, is placing value on life. These weapons will need to judge when to bomb a target and when not to; if there is a high-value target in a school with hundreds of children, will the weapon choose to kill or not? What is an acceptable level, and who is to blame for the civilian deaths: the engineer who programmed the weapon, or the government who chose the target?

    The government does not value non-combatants, as has been shown by the many times civilians have been bombed for the chance of killing a target; current estimates put civilian casualties in Syria at around 3,000. Therefore we can assume that the programmers will be ‘forced’ to program the autonomous weapons in a certain way, and so should not take the brunt of the blame.

    That said, the programmers not having a choice in how a weapon is programmed does not make them ethically sound; they could simply remove themselves from the entire system, leaving governments without anyone with the technical know-how to implement LAWs.


  15. There have been some really interesting comments already, so what I can add is limited.

    I would only say that there has been a huge effort made to sanitise war and death in order to make them acceptable policy choices. Linguistically, this can be seen in the use of phrases like collateral damage and euthanasia. LAWs, their name giving an implied meaning of justice, could just be another part of that. The 24-hour news would tell us that a supposed threat had been eliminated using the most advanced ethically trained LAWs available in a surgical strike. As a result, we would not face up to the role that power relations and the society we have built play in generating conflict. Finally, war could become a permanent feature of our existence.


  16. I think the key aspect which needs to be addressed is the reliability of the LAWs. Only if the machines are categorically 100% reliable should the use of LAWs be considered. They should not be implemented if there is any doubt about the operating systems.


  17. You make some very compelling arguments for the use of LAWs however, I have some concern with the conclusion you make. How can we ensure that the correct ethical frameworks are implemented? Who decides what ethical framework is correct? And how can we ensure the AI is implemented without the possibility of a nation, or individual, modifying the intelligence for their own personal gain?


  18. Lots of interesting comments about an interesting article.

    One thing that was not discussed was the use of sanitised language to describe certain types of violence. The acronym LAWs does not seem to me to be a coincidence; it could have been designed to generate approval for the violence they have the power to commit. In that way it is similar to phrases like ‘collateral damage’, ‘surgical strike’ or ‘euthanasia’. I think people with science backgrounds can often underestimate the power of language to affect cultural perceptions of conflict. Equally, while LAWs could affect the ways in which wars are conducted, they will not deal with the power dynamics that cause wars and injustice. War has been defined as total societal collapse, and that definition better explains atrocities and war crimes than evil or poor decision making does.


  19. Very interesting debate on the accountability and ethics of engineers in their pursuit of achieving LAWs.
    I believe LAWs could be more beneficial than problematic for humanity.


  20. I am really not in favour of the implementation or even people choosing to have any involvement in the development of autonomous weapons. The biggest worry for me is the lack of accountability that you touch on in this blog. If a human soldier was to go haywire and kill innocent people or even just use excessive force, he would be held accountable for these actions. But if this was to occur with an autonomous weapon it becomes far too easy to pass off the blame to numerous stake holders; engineer, military, government ect. I would feel more at ease if a set of rules were internationally agreed upon before any implementation took place. For example, establishing an agreed hierarchy of accountability in which the government/military were at the top would force their hand at ensuring they were only deployed when extremely necessary and without excessive levels of force.


  21. Very well written and interesting article. Given the rate at which technology is advancing, and how intricate weapons are becoming these days, LAWs don’t seem too far away. I thought the reference to the effects of PTSD was an important topic to note and will be an important point of discussion in the debate on whether LAWs should happen.
    A side point: utilitarianism is often criticised for its open claim of maximising aggregate welfare potentially causing more harm than good, meaning that utilitarians may be willing to go to extremes, such as mass killings, as long as the net outcome is good. Obviously this is an extreme example, but if LAWs were to be programmed through this route of maximising welfare, how could this level of judgement be controlled?


  22. Arkin is reported as stating that ‘the benefits of LAWs correspond with the utilitarian principles, through reducing the casualties of war and enhancing national security’, and that “LAWs can ultimately perform more ethically than human soldiers due to their ability to take a ‘first do no harm’ approach”.

    Although the lack of deep-rooted survival instincts in a robot has the potential for more humane warfare, the only way a robot can act is in accord with the way it has been programmed in the first place. Assuming that a robot is going to be programmed to destroy or kill in a humane way ignores the fact that it can equally be programmed in an inhumane manner. Any nation state’s engagement in killing and destruction always follows the manner in which that state sets its moral compass.

    Although there is indeed an opportunity to programme a LAW to deliver in an ethical manner its programme is only as ethical as the programmer who programmed it. This is no different to the programming of a human soldier by his/ her country.
    There is indeed an opportunity for programmers to enforce ethical rationale on the battlefield but who defines the ethical framework within which a human soldier or a programmed LAW acts?

    The Article is a thought provoking summary of these issues.


  23. An interesting discussion on the ethical issues behind the future development of LAWs. Engineers will always want to push boundaries to further the greater good, but there are often ethical issues when the results of engineering progress are placed into the hands of those who have the power to operate them. A good article dealing with the issues.


  24. A very thought-provoking article. I think it’s interesting that in the Greater Good section a lot of focus is placed on ethical rationale and ‘first do no harm’. In a warzone, although it would be ideal for these two approaches to work, ultimately a lot of people are willing to die for what they believe in, and as such LAWs may have little effect on the loss of life for the other side.
    As well as this, the effort to prevent the escalation of conflict by targeting specific groups with LAWs could have the opposite effect, potentially turning these groups into martyrs and only strengthening the perceived threat.
    Finally, I don’t believe that were a LAW to ‘go rogue’ one engineer should be held accountable. It should be the responsibility of the company or group, especially as it is likely different parts of these LAWs will be designed by different groups of people.


  25. A very well written article with very valid and well discussed points made for both arguments.
    I think it’s interesting that you raised the idea of the possibility of an ‘arms race akin to the cold war’ developing since I can clearly see why this would want to be avoided; however a lot of advancement in science, technology and engineering was carried out during this period due to governments spending so much more money encouraging these sectors to grow. Therefore if this scenario were to be brought about again, would that be such a bad idea seeing as advancements could ultimately bring more benefits to more people?


  26. Interesting piece, well argued, given constraints of space.
    However, the somewhat gloomy cornerstone is the seemingly unchanging fact that human conflict is a given and, furthermore, that in the end it will always be settled by killing more of ‘them’ than they can kill of ‘ours’, however flimsy the moral argument.
    Call me an old hippie, but wouldn’t it be more uplifting to imagine, and perhaps to work towards, a world in which the slaughter of others is not considered a rational, nor dare one say utilitarian, choice?
    I’ve yet to see a low rank computer operating soldier, 5 miles underground in a silo in North Dakota, be charged with the slaughter of an Afghan wedding party; or anyone, for that matter.
    C’est la vie, eh?


  27. Realistically, human conflict will never cease and in my opinion, it’s inevitable that LAWs will be part of all war.

    As to who should be held accountable, International rules can be implemented but boundaries will always be pushed, often broken.

    Calls from ‘high profile intelligence experts’ to ban autonomous weapons are, I feel, futile. If the technology is there, it will be used and developed, ethically correct or not.

