Lethal Autonomy – The next step in warfare or a step too far?

Weaponised drone technology is one of the most controversial military developments of our time, with many suggesting that it crosses the line of what is considered ethical warfare. Until now, reasonable human control has always been required before any offensive action could be taken, in order to comply with International Humanitarian Law (IHL) (1). However, the establishment of the Group of Governmental Experts (7), a global body that discusses the use of Lethal Autonomous Weapon Systems (LAWs) (2), could indicate that human control over weapon systems may soon give way to full autonomy.

The development of machines such as the BAE Taranis (3), which is capable of fully autonomous combat, raises serious ethical dilemmas about artificial intelligence in warfare. Should control over life and death be relinquished to machines? Or is the decision to kill a person something only another human can make?

We believe this decision to be critical due to the stakeholders involved: the public, the leading governing bodies and everyone in between.

Humans cannot be trusted – Hand the decision to the machine!

An immediate criticism of LAW technology is the potential for civilian casualties, to which such a machine would obviously be less averse than a human during military engagement. Whilst many may argue that no loss of civilian life should ever be acceptable, utilitarian thinking may counter this. The utilitarian approach assigns a value to a civilian life and holds that some loss of life is acceptable so long as its total value is less than that assigned to the target. In fast-moving modern conflict, where decisions have to be made on the latest intelligence, operations must be carried out quickly and covertly to maximise the chance of success. The use of LAWs, without the requirement for human intervention, offers the most effective means of destroying enemy targets, thereby minimising the risk to the greater population. The reduction in risk to the masses justifies the comparatively small loss of civilian life that may be incurred during missions. This principle extends to the sense of security likely felt by large populations possessing such technologies, which outweighs the greater suffering of the much smaller populations at risk of collateral damage.
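To make this utilitarian calculus concrete, the sketch below is a purely hypothetical illustration of the threshold comparison the argument implies: engagement is only deemed acceptable when the value assigned to neutralising the target exceeds the expected cost of collateral harm. The function name, weighting scheme and numbers are all invented for the sake of argument and are not drawn from any real targeting doctrine or system.

```python
# Hypothetical sketch of the utilitarian threshold described above.
# All names and values are invented for illustration; this describes
# no real system or doctrine.

def engagement_acceptable(target_value: float,
                          expected_civilian_casualties: float,
                          value_per_civilian_life: float) -> bool:
    """Acceptable only if the assigned value of neutralising the target
    exceeds the expected cost of collateral harm."""
    expected_collateral_cost = expected_civilian_casualties * value_per_civilian_life
    return target_value > expected_collateral_cost

# Purely illustrative numbers: a target valued at 100 units weighed against
# an expected 0.4 civilian casualties, each valued at 100 units.
print(engagement_acceptable(100.0, 0.4, 100.0))  # True (40.0 < 100.0)
```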

History has shown that many individuals have abused their positions of power; on multiple occasions, this opportunistic behaviour has led to the deaths of millions of people. Placing this power in the hands of an autonomous weapon prevents the decision to kill from resting on an individual’s moral compass, which has positive implications under both the utilitarian and virtue ethics frameworks.

“17% of Soldiers and Marines agreed or strongly agreed that all noncombatants should be treated as insurgents” (4)

Removing the human element, which has the potential to be selfish and governed by ulterior motives, improves the moral standing of the technology when applying virtue ethics. LAWs could be pre-programmed with certain ‘qualities’ that manifest in the AI (8) decision-making process. These qualities could be determined by a global governing body, ensuring that no unfair advantage is gained and that IHL is upheld. The decision to engage would then be based on globally agreed ethical ideologies rather than the operator’s opinion, removing individual beliefs and prejudices from the process. However, this idea raises serious questions regarding the validity of its practical implementation.
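As a purely hypothetical sketch of how such globally agreed ‘qualities’ might manifest in a decision-making process (the field names and thresholds below are invented for illustration and correspond to no real standard), the engagement decision could be reduced to a set of checks that must all pass before any action is permitted:

```python
# Hypothetical sketch of globally agreed, pre-programmed engagement rules.
# Field names and thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Contact:
    confirmed_combatant: bool          # identity confirmed against agreed criteria
    identification_confidence: float   # 0.0 to 1.0
    civilians_in_blast_radius: int
    protected_site_nearby: bool        # e.g. hospital or school

def may_engage(contact: Contact) -> bool:
    """Every globally agreed condition must hold; otherwise the system defers."""
    rules = [
        contact.confirmed_combatant,
        contact.identification_confidence >= 0.99,
        contact.civilians_in_blast_radius == 0,
        not contact.protected_site_nearby,
    ]
    return all(rules)

# A contact that fails the civilian-proximity rule is never engaged.
print(may_engage(Contact(True, 0.995, 2, False)))  # False
```

Whether such a rule set could ever be complete enough for real-world use is, of course, exactly the practical question raised above.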

The vast array of consequences associated with war invites a consequentialist analysis of whether a machine should kill; in particular, regarding the irreversible damage to mental health and the subsequent economic impact.

“Combat experience, particularly losing a team member, was related to an increase in ethical violations” (4)

The use of LAWs would reduce the number of people forced to kill, or to witness a comrade being killed, in the line of duty. Additionally, relieving soldiers of the responsibility of pulling the trigger could drastically improve their mental health. Applying a virtue ethics framework, preserving soldiers’ mental health may also improve the quality of future decisions where human judgement is unavoidable. Furthermore, the reduced requirement for infantry soldiers may bring significant economic advantages. This, in turn, can have utilitarian benefits, as the money may be spent elsewhere, e.g. on healthcare or education.

Human control is required to retain morality and accountability

From a consequentialist viewpoint, the use of fully automated military drones without any human intervention raises some important ethical issues. Clearly, an automated drone needs to be able to distinguish between targets and innocent civilians, yet distinguishing friend from foe rests on a deep understanding of human behaviour and a higher form of judgement that computers have not yet achieved. A case of mistaken identity could result in civilian deaths, which is unethical from a consequentialist viewpoint. Mistaken identity, and the subsequent civilian deaths, is one of the main factors currently limiting the deployment of LAWs, as it violates IHL (1).

The myriad scenarios potentially encountered in warfare could never all be programmed into a drone, and without human judgement its response to unexpected scenarios is nigh-on impossible to predict. Would it result in significant civilian fatalities? Who would subsequently be to blame? Previously, it was suggested that removing the operator improves the decision from a virtue ethics standpoint, because any prejudices are removed with them. However, compassion is also eliminated, leaving the decision in the hands of an automated drone that is indiscriminate in its decision to kill so long as the target matches its criteria. This binary characteristic may, therefore, be considered immoral.

85% of people believe LAWs should not be used for offensive purposes (5)

Although the utilitarian viewpoints above lend significant advantages to LAWs, the technology could cause considerable distributive injustice within warfare, which could be devastating to LEDCs (9).

LAWs are fundamentally designed to kill. Applying Kant’s ethical framework raises the question as to whether weapons should, in fact, be used at all, regardless of their method of deployment. This opens the door for a much wider ethical debate.

It is evident that autonomous weapons would be a groundbreaking advancement in warfare technology. That said, affording LAWs complete control over life and death contravenes the ethical frameworks upon which human civilisation is based. As engineers, we believe that further developing autonomous technologies, which have the potential to identify and neutralise threats to our friends, family and country, is extremely beneficial. However, comparative analysis of the aforementioned ethical frameworks indicates that the final decision must always be the responsibility of a morally just human operator.

Group 13: Callum Smith, Philip Jackson, Shaun Ellis, Albert Houghton

20 thoughts on “Lethal Autonomy – The next step in warfare or a step too far?”

  1. Nice report, I think that your allusion to the public consensus on the issue is a useful one, and no doubt where the debate should go in future years… A government’s actions should be vindicated by a mandate approving these potentially disastrous or potentially helpful machines; public opinion could be vital in swaying political control with regard to the populace/government/armed forces triumvirate.

    1. An interesting comment. Do you feel, then, given that public opinion can carry so much weight in the decision-making process, that as this technology gains momentum and the publicity surrounding it grows, resources need to be devoted to educating the public as to the potential benefits and/or pitfalls of such a technology?

  2. Very interesting article – prompts/provokes an interesting debate on what may be considered a particularly difficult and ‘heavy’ topic.

  3. Interesting read, and I can’t really disagree with the conclusion! However, with current world politics (Donald Trump as President) I think the probability of these systems coming into use is quite high.

    1. Check out 4th Turning theory. If it has any validity, we’re at the point in history where we almost always fall into war, regardless of who’s in control. With regard to LAWS, once an idea is out there, it’s impossible to stuff it back into the bottle. It will end up being implemented regardless of the outcome. The only option is to try to manage it for the best outcome (something governments don’t seem to be able to accomplish with any degree of success.)

  4. Great integration of ethical theories. Would be interesting to consider Hedonism, Contractarianism, Natural Law etc. to aid further, differing, points of view

  5. With the current, technology-driven society in which we now live, do you think this influences the public’s opinion on such matters?
    Are we able to sufficiently conceive a non-technology dependent environment (where such decisions do not impinge upon machines) anymore?

    1. That is an interesting comment, Ollie. I believe the public are much more open to the idea now than they would have been, say, 50 years ago, because of the many technological successes in that period. I believe public opinion may be similar to that for autonomous vehicles, as we are effectively putting our own and other people’s lives in the hands of technology. I think the public would need to be reassured, through testing, that the technology works before they would get behind it. With that said, I believe that LAWs will be implemented eventually because of the technology-driven environment that you have mentioned; however, the timescale is the major uncertainty.

  6. I believe the main issue with the deployment of LAWs is related to the reliability of such machines, as you have mentioned briefly in this article. If we cannot trust these machines to be at least as reliable as a human (i.e. not killing civilians), then it is surely not ethical to implement them. All other arguments supporting LAWs are insignificant compared to this.

    1. I find the concept of comparing the reliability of a machine to that of a human an interesting one, especially on an issue where a moral judgement must take place. A machine, whilst potentially unreliable in function, must adhere to a predetermined set of regulations regarding its decision-making protocols. A human, however, has the ability to turn their back on protocols and act on impulse. Could it not be argued that, in this sense, a machine may be a more reliable option when it comes to preserving civilian life?

  7. Interesting viewpoints from both sides of the argument. It seems a given that reliance on technology and artificial intelligence is an inevitable progression in all areas of life, including warfare. Perhaps, in cases such as this, the better use of time and money is finding ways to reduce the need for killing, and therefore the demand for LAWs, rather than questioning their ethical value, as artificial intelligence in everyday life is only going to become more prominent. Having said this, that is clearly a different angle on the issue, and the engineering side has been covered thoroughly here; a very thought-provoking read.

  8. Three points:

    First, there is an old book, titled (as I recall) “Computers in Battle.” The author makes the point that computer systems cannot be allowed to control the kill switch since it is impossible to guarantee that there are no bugs in their code. A mistake would have serious consequences. One might be willing to accept the mistaken death of a single civilian (although the individual involved would argue otherwise), but a mistake with a more powerful weapon, such as a nuclear warhead, would be much worse. If we humans are to formulate the rules for a killing machine, we would have to be perfect programmers, and we are not. There is always the unforeseen circumstance. It’s been said that it’s impossible to fool-proof anything since fools are so creative. That concept supports this argument.

    Second point: At present, we do not understand how AIs using Deep Learning come to their decisions. They learn by observation and formulating their own set of rules to control their interactions with the environment. Do we really want to give the power of life and death to something that makes its decision based on criteria and rules we cannot comprehend?

    One might argue that it’s not necessary to provide a weaponized drone with such AI power. That argument only provides for one alternative at present, and that leads us directly back to objection number one above.

    The third point is one based more on my personal sense of right and wrong. I’d like to think there is a good reason for humans to reserve the kill decision for themselves. An AI, at least at this time, has no consciousness and cannot conceive of the value of life to the individual. Humans can (although, sadly, they often do not take the time to consider that element). A human could opt not to kill someone based on subjective factors or factors that an AI might not be programmed to or willing to consider due to deep learning factors that we can’t understand.

    Here’s an imaginary example to illustrate my point: Let’s imagine that Albert Einstein was necessary to the development of nuclear fission. An AI might have decided to kill him based on the premise that his knowledge would make the world a vastly more dangerous place. A human might view the situation differently. Hope that humanity would be able to use old Al’s knowledge for good rather than for evil is a human trait that an AI may not be able to emulate.

    Ultimately, I think that an AI is incapable of making a moral decision since such a judgment is solely the province of humanity. (Yeah, I know this is an opinion only. I’m not prepared to argue it at the moment. It’s more of a feeling of right vs. wrong that I have. Sorry.)

    The above points are some of the reasons why I am against autonomous killing machines.

    If pushed, I’d say that the early phases of AI development (the part of the log curve that we’re currently entering) are the most dangerous for humans. Of course, I’m hopelessly naïve in that I’m capable of imagining an AI system that will have empathy (got to be conscious for that, though). If such a thing existed, it might see some value in keeping humans around.

    I hope these ideas provide some heuristic value.

    Namaste

    1. Thank you for such a detailed and in-depth review of our article. If I may, I will discuss each of your three points in order.

      1) I can say, on behalf of both my colleagues and myself who wrote the blog, that we whole-heartedly agree with this point. As engineers, we are very interested in the technological challenges that this topic raises. However, the idea of creating a fool-proof AI, especially with our current understanding of the technology, is unrealistic. This ultimately led to our conclusion that the technology should be developed, but that reasonable human control should always be retained.

      2) Again, you raise a very good question, and an almost impossible one to answer. The argument for the technology was based on the hypothetical assumption that the entire decision-making process was fully understood. Consequently, if the AI was required to kill, and a morally and ethically acceptable code defining every aspect of the AI’s behaviour was implemented, it would do so ethically. However, this raises serious questions as to whether the implementation of such code is even possible. I would be very interested to know your thoughts on whether a decision-making process that can be fully understood and controlled is achievable.

      3) To this, I would like to raise an opposing argument. As you say, the AI may not have any consciousness, but then again neither does a gun or a knife. Could you not consider the LAW to represent just such a tool? And furthermore, could you not consider the code on which the AI is based to be “pulling the trigger”?

      I believe many would agree with the final point you make. This opens the door to a vast and challenging debate: whether the thing that threatens humans the most is humans themselves.

      1. To address your questions above (and they are very good questions), it is necessary to specify what level of AI will be incorporated in the weapons system. Let’s use the terms Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI) for shortcuts.

        ANI is widely in use today in auto braking systems, email spam filters, Amazon’s customer interface, chess, Go, checkers, etc. It specializes in doing one simple task and doing it very well; better, in fact, than a human. As a limited system, it might be too limited to operate successfully in all battlefield/real world situations. I can only see it working in environments where it is set to kill everything that moves, save for those that carry or wear some kind of identification. The objection relating to perfect code applies here. I would disqualify it for that reason.

        Unfortunately, I suspect that others would use it for specifically that reason. It would be relatively inexpensive, single-minded, and always alert. It’s available today and is most certainly exactly what would be used in LAWS creation.

        Note: I agree that the creator of such a thing should be held responsible for its actions as should those who deploy it. I suspect there are those who would argue otherwise. This is a very important point to consider before such things are deployed. I can envision a scorched land with multiple ANI LAWS lying in wait for human enemies who have long since been extinguished.

        As to the question of perfect programming: The estimated number of software bugs in large pieces of software is rather frightening. I’ll leave it to you to research this if you’re interested. However, it isn’t the number of bugs that’s dangerous. It is only the one(s) that create an irreversible error, especially when it comes to human life.

        ANI systems can be programmed with deep learning techniques and, as previously stated, they may develop rules of operating that are non-human to the extent that humans cannot understand them. (Do you understand every aspect of a spider’s prey recognition ability? Some scientists may possibly make that claim, but the ANI isn’t a biological system.)

        AGI is human-level AI. We haven’t yet created such a thing, but we are possibly very close. Even if AGI did not exceed human level performance, it would have advantages over humans. It would never get tired, always pay attention, and never make a mistake (insofar as it would always obey its prime directive). This is the problem. Humans can make exceptions, even in battle and even for their sworn enemies. An AGI system would not be human and, unless it were constructed very carefully, it would make no exceptions.

        To give an example of a worthwhile exception, visualize a non-combatant wearing a scavenged military jacket to keep warm. A human might recognize the reasons for the jacket and hold fire. An AGI probably would not. It’s not human after all and its directive is to kill those who appear to fit a certain class.

        ASI is an entirely different matter. On the intelligence scale, it would be a step above us. Just as we could never explain algebra to a monkey (much less to an insect), it would find the concepts and rationale by which it operates just as difficult to explain to us.

        From a LAWS viewpoint, an ASI could easily invent innumerable ways to kill us that we have never considered. It wouldn’t be necessary to give it a weapon. If it decided that we were in its way, or a threat, or just useful molecules that it needed for a project, we’d be unable to avoid extinction.

        In fact, we may be unable to avoid extinction at the hands of self-created ASI, regardless of its original prime directive. The exponentially advancing curve of development will accelerate out of sight almost instantly once such a system is given the power of recursive self-improvement. When it can rewrite its own code, we will be eclipsed so quickly that we will have no time to respond, and any response we generate will be the equivalent of a butterfly plotting to kill a human by beating him to death with its wings.

        ANIs are useful. AGIs may be very difficult for us to deal with. ASIs — well — we’d better think very carefully before we get to that point.

        Elon Musk’s approach is to try and ensure that we (our brains) are blended with the ASI computer system, so that we become immune to casual extinction by being part of it. I’ve got my doubts about this approach for two reasons: 1. Humans can’t get along with each other, so why do we expect augmented humans to do any better? and 2. An ASI would be quite likely to instantly come to the conclusion that other ASIs would get in its way and would actively work to suppress them. What this would mean to humans who are attached to the systems would make a good plot for a book.

        Hope these points are sensible and provide some things to think about. Once again, I think it is grossly immoral to program machines to kill. If we believe that we’ve got to do such things to each other, then we should take the onus upon ourselves and suffer accordingly. (That’s the problem with war — our leaders never have to fight.)

        Namaste.

  9. Placing a value on human life is a very controversial topic in itself! For an engineer to program an automated weapon to judge whether to kill innocent people in order to save the lives of other innocent people is surely morally wrong on the engineer’s part.
    I appreciate that some form of autonomous weapon is almost inevitable in modern warfare, as things are already heading that way with semi-autonomous drones; however, I do not believe the utilitarian approach is the morally correct one.
    In my opinion, the engineer programming the autonomous weapon to calculate the value of human life, and to use this as a basis for killing, is essentially handing a death sentence to the innocent people judged “not worth saving”, and should be held accountable.

  10. I think there needs to be significant development in the complexity of AI before any form of it is considered for use on a battlefield. As of now, there is nothing close to the processing power of a human brain; yes, it makes mistakes and has a moral compass that can lead it astray, but it is by far the most advanced, calculative piece of “hardware” that currently exists. The current rules and regulations are designed around the fact that humans can make complex decisions, and they account for the fact that mistakes, such as civilian deaths, can happen and may sometimes be a necessary occurrence.
    I’m convinced that no machine or automated soldier could make a better moral judgement on a battlefield than humans currently do, and we’d end up with more civilian casualties and mistakes than ever before. Only when AI is truly close to human intelligence can we utilise it in combat and still apply the same rules, regulations and allowances that we do to humans.

    1. Thank you for your comment. You raise a very valid point as to the capability of current AI to take into consideration all of the necessary factors to make a life-or-death decision. I agree that I would like to see considerable further development beyond current technology before a machine was given such responsibility; however, the extent that this development must reach is, I think, a difficult question. After all, as mentioned in some previous comments, when AI approaches human levels of intelligence, other considerations arise as to the control we could possibly exercise over such an entity.

      I would be interested to hear your view on the extent to which AI should be used in a battlefield situation if the technology reached a suitable level – indeed, whether it should be used at all.

  11. I feel a lot of responsibility is placed on the engineers designing these weapons, in the sense that they are playing God with innocent civilians’ lives. I have viewed articles on semi-autonomous weapons where the loss of civilian lives has exceeded that of the actual targets; surely a fully autonomous weapon would result in greater casualties. The difficulty of designing an autonomous weapon capable of making human ethical decisions surely rests on the engineer, so I feel they should be held accountable.

  12. Interesting and provocative stuff. I would never doubt that very soon all “battles” could be fought with LAWs, but the crux of the issue is who controls the ethical algorithms. We are currently using the precursors of this technology to fight limited asymmetrical conflicts – what would happen if, as in WWII, we had to fight “total war”?
    It would be interesting to see which Allied WWII bombing raids would have been carried out using ethical algorithms to dictate the utilitarian necessity… Hiroshima, Dresden?
