Robot Wars: Are autonomous killing machines no longer science fiction?

In the past decade, robots have become increasingly prevalent in warfare, from the Harpy [1], an Israeli drone that automatically detects and destroys radar emitters, to the LS3 [2], a dog-like robot that carries 160 kg of supplies alongside foot soldiers and is to be deployed by the US military. As these robots move towards increasingly combative roles, engineers and designers need to consider how they fall within the normative ethical frameworks, or whether there is now a desperate need for a new set of ethical frameworks.

[Image: Ario-H762]

The continued rapid development of autonomous robots for use in warfare has been considered hedonistic by many: companies and engineers seek the recognition of creating the first automated combat robots without considering the consequences of their actions. The question that needs to be asked is: do we want to create a robot that is a fully autonomous killing machine? Not everyone is convinced. Because autonomous weapons are being developed rapidly with little consideration of the laws governing their use, experts in the field are fearful of the consequences. Recently, 20,000 physicists, engineers and researchers in artificial intelligence and robotics signed an open letter calling for a treaty to ban lethal autonomous weapons [3]. Prominent figures from the science and technology industry, such as Stephen Hawking, Elon Musk (Tesla) and Steve Wozniak (Apple), have also signed the letter, which may help ignite the debate over the ethics involved in creating these machines.


“(Japan) has no plan to develop robots with humans out of the loop, which may be capable of committing murder”

– Japan’s Ambassador to the LAWS Conference [4]

The main argument used to justify robots in warfare is the consequentialist belief that they will reduce war-related deaths: through more effective and efficient target identification, by carrying out riskier operations, and by acting as deterrents, similar to nuclear weapons. The belief that the use of robots in war zones will save lives is not unfounded: 40% of deaths in Iraq since 2003 were caused by IEDs [5], and the continued introduction of bomb-disposal drones would undoubtedly help to cut this number.

Robots can carry out tasks that would previously have been thought impossible. This may lead to a greater concentration of deaths over a shorter period, which could be argued to violate virtue ethics, but would nevertheless limit the length of a war. However, would the reduction in risk to human life lead to more “artificial” wars fought between robots, and in turn to more innocent civilian casualties?

Robots do not possess the emotions that can cloud a human soldier’s judgement and ability to identify credible targets. Rash, costly and even unethical decisions have been made by highly trained soldiers overcome by fear, frustration, anger and adrenaline in the heat of battle. A number of cases have been documented where soldiers broke strict protocol and disobeyed the Laws of War [6] and Rules of Engagement [7]. A 2006 report reviewing the Iraq war stated that over 10% of soldiers admitted to unnecessarily mistreating noncombatants (damaging property, physical violence) [8]. A robot soldier programmed to make decisions under a deontological ethical framework (in which ethics are expressed as a set of strict rules) would not be able to act in this manner. This means that in certain situations a robot could act more ethically and morally than a human, potentially reducing both friendly and civilian casualties. However, machines making purely deontological decisions without any influence of virtue ethics is a new phenomenon. For example, if robots are programmed never to harm women or children, insurgents could exploit that rule. A utilitarian framework would then have to be applied, but how would the value of an innocent man’s, woman’s or child’s life compare to that of a soldier’s: would it be worth one, two, three, or more?
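To make this concrete, here is a minimal Python sketch of how a deontological veto might sit on top of a utilitarian cost comparison. Every rule, name and weight below is a hypothetical assumption chosen purely for illustration, not a proposal for how such a system should actually be built:

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool
    is_child: bool
    nearby_civilians: int

def violates_hard_rules(target: Target) -> bool:
    """Deontological layer: any rule violation forbids engagement
    outright, regardless of the expected military value."""
    if not target.is_combatant:
        return True   # never engage non-combatants
    if target.is_child:
        return True   # never engage children
    return False

# Utilitarian layer: the paragraph above asks exactly what this weight
# should be -- is a civilian life worth 1, 2, 3 soldiers' lives, or more?
CIVILIAN_WEIGHT = 3.0   # hypothetical value, for illustration only
SOLDIER_WEIGHT = 1.0

def engagement_cost(target: Target, soldiers_at_risk: int) -> float:
    """Net expected cost of engaging versus holding fire, measured in
    'soldier-life' units under the weights above."""
    cost_of_engaging = target.nearby_civilians * CIVILIAN_WEIGHT
    cost_of_holding = soldiers_at_risk * SOLDIER_WEIGHT
    return cost_of_engaging - cost_of_holding

def may_engage(target: Target, soldiers_at_risk: int) -> bool:
    if violates_hard_rules(target):
        return False   # the deontological veto is checked first
    return engagement_cost(target, soldiers_at_risk) < 0
```

The uncomfortable part of this sketch is not the code but the constant: someone has to choose CIVILIAN_WEIGHT, and whoever does so is answering the essay’s question by fiat.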


Another dilemma in the use of autonomous robots in warfare is the question of command: would commanding officers have the ability to override the pre-programmed ethics? If a robot were to receive an order that directly violated its deontological framework (e.g. to target a house containing both enemy soldiers and civilians), should it be programmed to obey or to refuse? If programmed to follow any order in a slavish manner, the robot would have no ethical framework of its own at all, but would merely rely on the ethics of the human in command.
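Reduced to software, the dilemma becomes a single design switch. The sketch below uses entirely hypothetical names and logic; its only point is that one boolean decides whether the machine has an ethical framework of its own:

```python
# Hypothetical design flag: may a human order override the hard rules?
ALLOW_COMMAND_OVERRIDE = False

def execute_order(order_violates_hard_rules: bool) -> str:
    """Decide what the robot does with an order from a commanding officer."""
    if order_violates_hard_rules:
        if ALLOW_COMMAND_OVERRIDE:
            # The robot's ethics collapse into the commander's ethics.
            return "executed under human override"
        # The robot's pre-programmed framework takes precedence.
        return "refused: order violates pre-programmed rules"
    return "executed"
```

Set the flag to True and the robot is ethically inert, merely relaying the commander’s judgement; set it to False and moral responsibility shifts to whoever wrote the rules.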


“Without clear international regulations, the only thing holding arms makers back from selling such machines appears to be the conscience”

– Jungsuk Park, DoDAAM Systems Limited [9]

In terms of consequentialism, the development and deployment of these robots in warfare will cause more social, economic and environmental damage to war zones than human combat does. It is also logical to predict that only developed countries would have the technology and wealth to field these autonomous weapons, so there is a risk that underdeveloped countries could be bullied and intimidated. For example, Middle Eastern countries such as Iraq, with abundant oil resources but comparatively weak militaries, may experience more wars. Furthermore, the need to compete with such military power could push less developed countries to invest in weapons research and development rather than in areas such as health care and education.


In addition, if more and more countries are able to deploy robots in war, this will pose a huge threat to world peace. The majority of people in developed countries would prefer robots to fight wars instead of soldiers, because of the promise of reduced deaths. However, a stronger voice against the use of robots in war may emerge in the developing countries where modern wars most commonly occur, as they will have to adapt to this change and will be the most affected by automated weapons patrolling their streets.

[Image: MQ-9 Reaper UAV]

After considering the points discussed, it is obvious that the manufacture and use of autonomous weapons needs to be under strict regulation. The current laws governing their use are underdeveloped, both for the weapons that exist today and for those their continued development will bring. The proliferation of autonomous weapons has the potential to intensify, rather than reduce, conflicts in unstable regions. As a result, engineers and designers can no longer take a separatist stance without being negligent; they cannot disregard the ethical consequences of creating these autonomous killing machines.

Group 49: Matthew Mckean, Jacob Marlow, Siyu Wang, Wenhao Li 


9 thoughts on “Robot Wars: Are autonomous killing machines no longer science fiction?”

  1. An interesting and eye-opening dissection of the ethical issues raised by the use of such machines. Not to mention an effective consideration of what is needed with regard to an engineer’s role in controlling the extent to which said issues take effect in modern engineering.

  2. This is a really good article which highlights some of the key issues surrounding the use of autonomous robots in warfare. Would the rise of a robotics industry replacing the current arms industry be an area of concern?

  3. A very well written article. I especially like the part about developing countries having a strong voice against the use of robots in war. It seems obvious that certain nations will have advanced technology, which in the future may lead to a few super-elites essentially controlling the rest of the world. In such a society there would be few rights for so-called “smaller countries”. Therefore, do you think this open letter calling for a ban on autonomous weapons will ultimately be listened to, or will a catastrophic event have to take place before it is upheld, much like how nuclear weapons led countries to pledge no-first-use (NFU), or to use them only defensively? Or could we one day live in a world where control, by whatever means, rules?

  4. As this is an efficient answer for military forces, I think the article touches on important aspects to consider in creating a machine to carry out automated attacks. It has homed in well on the ethical duty of man to judge, from the outset, the importance of what the creation means. Since technology conveniently lacks a conscience, it can carry out what a moral person might hesitate over. While better for efficiency, this will equally marginalise civilians in the process. The ranking system is a way to denote an empathetic approach to this; however, it leads to the question of how the distinction is made between civilian and enemy. While creating that distinction can save lives, it does create consternation about what lies ahead, and whether it isn’t actually a more dangerous idea than it seems.

  5. Using a large number of military robots on the battlefield will have a huge impact on existing war ethics. When unmanned weapons with independent attack capability face enemies who have laid down their arms or been disarmed, they cannot identify or judge their true intentions and give the correct response; this is very difficult for robots. British news agency statistics show that from 2004 to February 2014 the United States conducted 376 drone attacks in Pakistan, resulting in the death of 926 civilians. In addition, robots will sometimes target soldiers of their own country instead of the enemy. The arrival of non-human war may result in the abuse of force: because they are not in the battle environment, long-range combatants develop a “game” mentality. Psychological research shows that individuals tend to be more daring and violent in virtual environments, and the operators themselves do not experience the fear and pain of death. As robotic equipment develops, the law lags behind the new technology; even international humanitarian law cannot control UAVs. The United Nations discussed the use of military robots in 2013, demanding a freeze on the development of these weapons and the establishment of a discussion committee. To prevent the abuse of robotic weapons, the international community should establish legislation on military robots as soon as possible, to limit their use and its timing.

  6. A very thought-provoking article into the use of artificial intelligence in warfare, and where our responsibility lies as engineers around this topic. I strongly agree that the manufacture and use of autonomous weapons needs to be, and continue to be, under strict regulation.

  7. This is an interesting article that highlights the importance, and urgency, of developing an ethical framework to delineate the ‘behaviour’ of autonomous killing machines. I will be interested to see how engineers can help to define this framework, and how any regulations could be enforced across the globe.

  8. An interesting and thought-provoking read! It did make me wonder whether the fact that we already hear and see news of drones killing enemy combatants and civilians is already desensitising people to the concept of autonomous killing machines. As seen on TV, it looks just like an extension of the virtual-reality games played on a PlayStation or Xbox.
    The article mentions the use of robots in bomb disposal; it seems to me that this apparently defensive use is ethically different from the use of autonomous machines in an offensive capacity.
