Over the past decade robots have become increasingly prevalent in warfare, from unmanned drones such as the Israeli-developed Harpy, which automatically detects and destroys radar emitters, to the LS3, a dog-like robot designed to carry 160 kg of supplies alongside foot soldiers and due to be deployed by the US military. As these robots move into increasingly combative roles, engineers and designers need to consider whether they fall within existing normative ethical frameworks or whether there is now a desperate need for a new set of ethical frameworks.
The continued rapid development of autonomous robots for use in warfare has been considered hedonistic by many: companies and engineers seek recognition for creating the first automated combat robots without considering the consequences of their actions. The question that needs to be asked is: do we want to create fully autonomous killing machines? Not everyone is convinced. Autonomous weapons are being developed rapidly with little consideration of the laws governing their use, and experts in the field are fearful of the consequences. Recently, 20,000 physicists, engineers and experts from artificial intelligence and robotics research signed an open letter calling for a treaty to ban lethal autonomous weapons. Prominent figures from the science and technology industry, such as Stephen Hawking, Elon Musk (Tesla) and Steve Wozniak (Apple), also signed the letter, which may help ignite the debate into the ethics involved in creating these machines.
“(Japan) has no plan to develop robots with humans out of the loop, which may be capable of committing murder”-
Japan’s Ambassador to the LAWS Conference
The main argument used to justify the use of robots in warfare is the consequentialist belief that they will reduce war-related deaths through more effective and efficient target identification, the ability to carry out riskier operations, and their role as deterrents, similar to nuclear weapons. The belief that the use of robots in war zones will save lives is not unfounded: 40% of deaths in Iraq since 2003 were caused by IEDs, and the continued introduction of bomb-disposal drones would undoubtedly help to cut this number.
Robots can carry out tasks that would previously have been thought impossible. This may lead to a higher concentration of deaths over a shorter period, which could be argued to violate virtue ethics, but would nevertheless limit the length of a war. However, would the reduction in risk to human life lead to more “artificial” wars between robots, and in turn to more innocent civilian casualties?
Robots do not possess the emotions that can cloud a human soldier’s judgement and ability to identify credible targets. Rash, costly and even unethical decisions have been made by highly trained soldiers overcome by fear, frustration, anger and adrenaline in the heat of battle. A number of cases have been documented where soldiers have broken strict protocol and disobeyed the Laws of War and Rules of Engagement. A 2006 report reviewing the Iraq war stated that over 10% of soldiers admitted to mistreating noncombatants (damaging property, using physical violence) when it was not necessary. A robot soldier programmed to make decisions under a deontological ethical framework (in which ethics are codified as a set of strict rules) would not be able to act in this manner. This means that in certain situations a robot could act more ethically and morally than a human, potentially reducing both friendly and civilian casualties. However, machines making purely deontological decisions without any influence of virtue ethics is a new phenomenon. For example, if robots are programmed never to harm women or children, insurgents could exploit that rule. A utilitarian framework might then be applied instead, but how would the value of an innocent man’s, woman’s or child’s life compare to that of a soldier’s: would it be worth 1, 2, 3, or more?
Another dilemma in the use of autonomous robots in warfare is the question of command: would commanding officers have the ability to override the pre-programmed ethics? If a robot received an order from an officer that directly violated its deontological framework (e.g. to target a house containing both enemy soldiers and civilians), would it be programmed to obey or disobey the command? If programmed to follow any order in a slavish manner, the robots would not have an ethical framework of their own at all, but would merely rely on the ethics of the human in command.
“Without clear international regulations, the only thing holding arms makers back from selling such machines appears to be the conscience” –
Jungsuk Park, DoDAAM Systems Limited 
In terms of consequentialism, the development and deployment of these robots in warfare will cause more social, economic and environmental damage to war zones than human combat. It is also logical to predict that only developed countries would have the technology and wealth to field autonomous weapons, so there is a risk that underdeveloped countries could be bullied and intimidated. For example, Middle Eastern countries such as Iraq may experience more wars because their abundant oil resources are not matched by their military power. Furthermore, the need to compete with such military power could lead less developed countries to invest in weapons research and development rather than in areas such as health care and education.
In addition, if more and more countries are able to deploy robots in war, this will pose a serious threat and may negatively affect world peace. The majority of people living in developed countries would prefer robots to fight wars instead of soldiers, because of the promise of fewer deaths. However, opposition to the use of robots in war may be strongest in the developing countries where modern wars most commonly occur, as these countries will have to adapt to the change and will be most affected by automated weapons patrolling their streets.
MQ-9 Reaper UAV.
After considering the points discussed, it is clear that the manufacture and use of autonomous weapons need to be under strict regulation. The current laws governing their use are underdeveloped relative to the present level of these weapons, let alone their predicted continued development. The proliferation of autonomous weapons has the potential to intensify, rather than reduce, conflicts in unstable regions. As a result, engineers and designers can no longer take a separatist stance without being negligent, and they can no longer disregard the ethical consequences of creating these autonomous killing machines.
Group 49: Matthew Mckean, Jacob Marlow, Siyu Wang, Wenhao Li