Weaponised drone technology is one of the most controversial military developments of our time, with many suggesting that it crosses the line of what constitutes ethical warfare. Until now, meaningful human control has always been required before any offensive action could be taken, in order to comply with International Humanitarian Law (IHL) (1). However, the establishment of the Group of Governmental Experts (7), an international forum that discusses the use of Lethal Autonomous Weapon Systems (LAWs) (2), could indicate that human control over weapon systems may soon give way to LAWs.
The development of machines such as the BAE Taranis (3), which is capable of fully autonomous combat, raises serious ethical dilemmas regarding artificial intelligence in warfare. Should control over life and death be relinquished to machines? Or is the decision to kill a person something only another human can make?
We believe this decision to be critical due to the stakeholders involved: the public, the leading governing bodies and everyone in between.
Humans cannot be trusted – Hand the decision to the machine!
An immediate criticism of LAW technology is the potential for civilian casualties, to which such a machine would be far less averse than a human during military engagement. Whilst many may argue that no loss of civilian life should ever be acceptable, utilitarian thinking may counter this. The utilitarian approach assigns a value to each civilian life and deems a loss of life acceptable so long as its total cost is less than the value assigned to neutralising the target. In fast-moving modern conflict, where decisions must be made on the latest intelligence, operations must be carried out quickly and covertly to maximise the chance of success. The use of LAWs, without the requirement for human intervention, offers the most effective means of destroying enemy targets, thereby minimising the risk to the greater population. The reduction in risk to the masses justifies the comparatively small loss of civilian life that may be incurred during missions. This principle can be extended to the sense of security likely felt by the large populations possessing such technologies, which outweighs the greater suffering of the much smaller populations at risk of collateral damage.
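To make this calculus concrete, the comparison it describes can be reduced to a toy model. The sketch below is purely hypothetical: the function name, the numeric values, and indeed the very idea of pricing a life numerically are illustrative assumptions, not a real targeting rule.

```python
# Toy sketch of the utilitarian calculus described above.
# Every name and value here is a hypothetical illustration.

def engagement_justified(target_value: float,
                         expected_civilian_casualties: float,
                         value_per_civilian_life: float) -> bool:
    """Permit engagement only if the value assigned to neutralising
    the target exceeds the expected cost in civilian lives."""
    expected_cost = expected_civilian_casualties * value_per_civilian_life
    return target_value > expected_cost

# A strictly utilitarian rule would permit this strike despite two expected
# civilian casualties, which is exactly what its critics find troubling.
print(engagement_justified(target_value=100.0,
                           expected_civilian_casualties=2.0,
                           value_per_civilian_life=30.0))  # True
```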
History has shown that many individuals have abused their positions of power. This opportunistic behaviour has, on multiple occasions, led to the deaths of millions of people. Placing this power in the hands of an autonomous weapon prevents the decision to kill from resting on an individual’s moral compass, and has positive implications under both the utilitarian and virtue ethics frameworks.
Removing the human element, which has the potential to be selfish and governed by ulterior motives, improves the moral standing of the technology when applying virtue ethics. LAWs could be pre-programmed with certain ‘qualities’, manifesting in the AI (8) decision-making process. These qualities could be determined by a global governing body, ensuring that no unfair advantage is gained and that IHL is upheld. The decision to engage would then be based on globally agreed ethical principles rather than the operator’s opinion, removing individual opinions and beliefs from the process. However, this idea raises serious questions regarding the validity of its practical implementation, as the sketch below suggests.
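As a minimal sketch of how such pre-programmed ‘qualities’ might gate an engagement decision, consider the fragment below. The checks and their names are assumptions invented for illustration and are not drawn from any real weapon system; the point it exposes is that each boolean input hides precisely the judgement (is this person a combatant?) that the system was meant to make.

```python
# Hypothetical sketch: globally agreed 'qualities' as hard gates on engagement.
# The checks and their names are illustrative assumptions, not a real system.

from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool          # positively identified as a combatant
    near_protected_site: bool   # e.g. a hospital or school
    hors_de_combat: bool        # has surrendered or is incapacitated

# Each 'quality' is a named, IHL-inspired check that must pass.
IHL_CHECKS = [
    ("distinction", lambda t: t.is_combatant),
    ("protected sites", lambda t: not t.near_protected_site),
    ("hors de combat", lambda t: not t.hors_de_combat),
]

def may_engage(target: Target) -> bool:
    """Engage only if every globally agreed check passes."""
    return all(check(target) for _, check in IHL_CHECKS)
```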
The vast array of consequences associated with war has led to a consequentialist framework being applied to the decision of whether a machine should kill; in particular, regarding the irreversible damage to mental health and the subsequent economic impact.
“Combat experience, particularly losing a team member, was related to an increase in ethical violations” (4)
The use of LAWs would reduce the number of people forced to kill, or to witness a comrade being killed, in the line of duty. Additionally, relieving soldiers of the responsibility of pulling the trigger could drastically improve their mental health. Under a virtue ethics framework, preserving soldiers’ mental health may also improve the quality of future decisions where human judgement is unavoidable. Furthermore, the reduced requirement for infantry soldiers may prove to have significant economic advantages. This, in turn, has utilitarian benefits, as the money may be spent elsewhere, e.g. on the healthcare system or education.
Human control is required to retain morality and accountability
From a consequentialist viewpoint, the use of fully automated military drones without any human intervention raises important ethical issues. Clearly, an automated drone needs to be able to distinguish between targets and innocent civilians. The ability to distinguish friend from foe rests on a deeper understanding of human behaviour and a higher form of logic, neither of which computers have yet achieved. A case of mistaken identity could therefore result in civilian deaths, an unethical outcome on consequentialist grounds. Mistaken identity, and the subsequent loss of civilian life, is one of the main factors currently limiting the deployment of LAWs, owing to its violation of IHL (1).
The myriad scenarios potentially encountered in warfare could never all be programmed into a drone, and without human judgement, its response to unexpected situations is nigh-on impossible to predict. Would it result in significant civilian fatalities? Who would subsequently be to blame? Previously, it was suggested that removing the operator improves the decision from a virtue ethics standpoint because any prejudices are removed with them. However, compassion is also eliminated, leaving the decision in the hands of an automated drone that is indiscriminate in its decision to kill so long as the target matches its criteria. This binary characteristic may, therefore, be considered immoral.
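This ‘binary characteristic’ can be made explicit with one final hypothetical fragment: once the criteria match, engagement follows, and nothing in the logic can represent compassion, doubt, or wider context. The names here are, again, illustrative only.

```python
# Hypothetical fragment: a purely criteria-driven engagement loop.
# The yes/no match is the entire 'judgement'.

def autonomous_engagement_loop(detected_targets, matches_criteria, engage):
    for target in detected_targets:
        if matches_criteria(target):  # binary match, no graded judgement
            engage(target)            # no hesitation, no appeal to context
```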
Although the utilitarian viewpoints above provide significant arguments in favour of LAWs, such weapons could cause considerable distributive injustice within warfare: the sense of security would accrue to the nations able to afford the technology, while the risks would fall disproportionately on those that cannot, which could be devastating to LEDCs (9).
LAWs are fundamentally designed to kill. Applying Kant’s ethical framework, under which humanity must always be treated as an end and never merely as a means, raises the question of whether such weapons should be used at all, regardless of their method of deployment. This opens the door to a much wider ethical debate.
It is evident that autonomous weapons would be a groundbreaking advancement in warfare technology. That said, affording LAWs complete control over life and death is deemed to be a contravention of the ethical frameworks upon which human civilisation is based. As engineers, we believe that further developing autonomous technologies, which have the potential to identify and neutralise any threat to our friends, family and country, is extremely beneficial. However, comparative analysis of the aforementioned ethical frameworks indicates that the final decision must always be the responsibility of a morally just human operator.
Group 13: Callum Smith, Philip Jackson, Shaun Ellis, Albert Houghton