Humanity…terminated?

If you’re familiar with the genre of science fiction, you would be led to believe that the development of artificial intelligence leads to only one grim conclusion: the destruction of humanity. However, with the exponential pace of technological development, the once-fictional idea of superintelligence portrayed in films such as The Terminator is fast becoming a reality. As a civilisation, we have a moral obligation to ask ourselves whether we are capable of managing, and setting precedents for, such a God-like intelligence… should we continue to develop narrow artificial intelligence until it reaches superintelligence? It appears that we have three options:

  • Allow AI to develop naturally.
  • Control AI development through safety frameworks.
  • Cease development of AI completely.

The Problem

Siri, self-driving cars and the Amazon Echo all help us simplify everyday processes. Through these devices, non-value-adding tasks are all but eliminated and more time is added to our days. As a result, we create a huge market for autonomous products, one which companies are all too happy to supply. For example, Google has acquired DeepMind to compete in this race to develop artificial intelligence into marketable products. But will there be an end? In the short term, we are certain to see a shift in the types of jobs people are required to do: monotonous, unhealthy manufacturing jobs will continue to decline and be replaced by robots; the maths just makes sense. In the long term, these products will solve more complex problems: financial advisors and paralegals will see their positions taken by a worker downloaded off the internet.

The first, and most probable, scenario of devastation is that of an accident: the programmer deploying the AI has good intent, but fails to define the constraints within which it is to operate. Take that virtual financial advisor, given the goal of maximising profit for its clients, which advises investing in defence shares. The superintelligent AI could plant the seed for a war, resulting in a boom in orders for the defence company. This is one way in which artificial intelligence could indirectly threaten humanity.
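To make the accident scenario concrete, here is a minimal toy sketch in Python (the names and numbers are invented for illustration, not any real trading system): an agent that optimises a single scalar objective will happily pick a harmful option unless harm is explicitly encoded as a constraint.

    # Toy illustration of a misspecified objective: maximise profit,
    # with no notion of acceptable behaviour unless one is supplied.
    def choose_action(actions, profit, harm, harm_limit=None):
        """Pick the highest-profit action. If harm_limit is None
        (no constraint was defined), harmful actions are never filtered out."""
        if harm_limit is not None:
            actions = [a for a in actions if harm[a] <= harm_limit]
        return max(actions, key=lambda a: profit[a])

    actions = ["index_fund", "defence_shares", "provoke_conflict"]
    profit = {"index_fund": 1.0, "defence_shares": 3.0, "provoke_conflict": 9.0}
    harm = {"index_fund": 0, "defence_shares": 2, "provoke_conflict": 10}

    print(choose_action(actions, profit, harm))                # provoke_conflict
    print(choose_action(actions, profit, harm, harm_limit=3))  # defence_shares

The point is not the code but the omission: the programmer never told the agent that some ways of maximising profit are unacceptable.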

A second scenario is that the superintelligent AI is programmed with the direct goal of doing something devastating. It would become the deadliest and most competent weapon ever to have existed. The enemy may try to fight back, but given that electrical circuits run around 1,000,000 times faster than biological neurons, the superintelligent AI would effectively be hundreds of years ahead within a few hours.
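As a rough back-of-the-envelope check on that figure: three hours of real time at a 1,000,000× speed-up amounts to 3 × 1,000,000 = 3,000,000 subjective hours of thinking, and 3,000,000 ÷ 8,760 hours per year ≈ 342 years of human-equivalent thought.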

An interesting point to note here is that in both scenarios the superintelligent AI is not itself malevolent; it is simply extremely competent. The root cause of disaster is the AI’s goals diverging from our own. Take humans and ants: humans don’t hate ants, and will generally leave them be. But if an ant were walking in the middle of the road, we wouldn’t think twice about driving over it. In terms of superintelligence, we must avoid finding ourselves in the position of the ants.


Options for Action

1) We allow artificial intelligence to continue to be developed without any regulation. Whilst this article has mostly discussed the negative aspects of superintelligence, many would see advancing our civilisation as positive: an intelligence explosion could bring cures for the most awful diseases, answers to climate change, and solutions to problems that we have yet to even discover. However, a consequentialist would argue that developing AI further brings about a plethora of uncertainties which cannot be risked.

2) Another, more responsible approach is to begin carrying out AI ‘safety research’, whatever path of development AI takes. Safety research aims to set constraints so that once superintelligence arrives, its goals will not diverge from human goals. Neuralink is a Californian start-up looking to keep our goals aligned by developing implantable human-computer interfaces such as a neural lace, preventing humans from becoming redundant in an age of superintelligence. In 2015, Elon Musk co-founded the safety research company OpenAI, stating that it is not known how long it will take to develop safe constraints for superintelligence, and so the research must start now. If the safety constraints are not ready for the arrival of superintelligence, the scenarios discussed above could well become reality. Without government incentives there is no direct reward for investing in safety research, so superintelligence already has the upper hand in the race against safety constraints. A utilitarian may view this as the best option, as it allows the positive aspects of a superintelligent AI to prosper while the dangerous possibilities are controlled.

3) Completely ceasing development of AI is the final option. David McAllester believes that over the next few decades superintelligent AI “would enable machines to become infinitely intelligent”. In an era of spy drones and automated ballistic missiles, imagine the outcome if the militaries of powerful nations like the USA, Russia or China planned to exercise this limitless potential of intelligence.

It is common sense to think offensive military robots are a perilous threat to life itself. In movies, this threat is usually personified as an evil Terminator-esque machine fuelled with malicious intent to destroy humanity for reasons of its own. In reality, the threat comes from within us: human warfare. If preliminary testing of ballistic missiles in North Korea, or even nuclear uranium enrichment in Iran, is enough to create political hysteria, what would happen if a powerful nation started to advance its military using intelligence superior to what humans are capable of comprehending? Superpowers will not only refuse to risk their enemies having a monopoly, but also fear for their own ability to defend themselves. This could create political discord, provoke assertive military action and stimulate a global arms race. Will anyone benefit from this? The possibility of using lethally autonomous superintelligent AI as a weapon would threaten everyone.

Group 72: Kyle Farrell, Will Gale-Hasleham, Jude Don


17 thoughts on “Humanity…terminated?”

  1. In any situation where technology can be used as a tool for power, there needs to be an element of responsibility about how it’s managed. The development of AI is brilliant, but at the same time scary. Whether it’s going to benefit or hinder us really depends on who you are.

    For businesses, it’s a dream. They’re replacing humans, to whom they have to pay wages and give holiday and sick pay, with machines that can do the job more efficiently, 24 hours a day. For the workers, obviously, it spells bad news.

    As for ‘malevolent AI’ turning into Arnold Schwarzenegger’s Terminator, that’s ridiculous. Take Uber, for example: they want to turn all of their cabs into self-driving cars. Are they going to programme the cars to assault people, as some of their drivers have done? No. Obviously, once they have extensively tested the technology, it will be much safer than having a human driver who has the conscious choice to go off the rails if they want to.

    The problem is rarely the technology; it’s the people behind it. Hypothetically, countries could spawn an army of demon robots, but they could just choose to wipe us out with the press of a nuclear button if they wanted to. It would be much cheaper.

    AI development must be regulated; there’s no doubt about that. The work that Musk is doing is great, but the regulation needs to be backed by government and law to prevent it getting out of hand and going against what’s in our best interests.


  2. The idea of machines replacing human beings sounds wonderful. It appears to save us from all the pain. But is it really so exciting? Ideas like working wholeheartedly, with a sense of belonging, and with dedication have no existence in the world of artificial intelligence. Imagine robots working in hospitals. Do you picture them showing the care and concern that humans would? Do you think online assistants (avatars) can give the kind of service that a human being would? Concepts such as care, understanding and togetherness cannot be understood by machines, which is why, however intelligent they become, they will always lack the human touch.


    1. Well noted, Siva. Can we really code a ‘moral conscience’ into machines? It’s a difficult concept for even humans to grasp. Super AI should never substitute for the duties of a human, but rather supplement them.


  3. At the end of the day, humans will always have control over AI; it will only ever get as smart as the programmer allows it to be. Any sensible programmer will always ensure that there is a fail-safe switch to be used when required. AI should be developed for individual purposes: for example, at Uber, the smartest thing the autonomous vehicle will be capable of is ensuring that the passenger arrives at the destination safely. Other than a large number of job losses, and provided it is not developed with bad intentions, I don’t see AI causing any problems.


  4. There are pros and cons to allowing AI to develop. Yes, it will create more opportunities and will make life easier and cheaper for many, but at the end of the day these are just computers. They will only be able to do what is asked of them, and are therefore not capable of going above and beyond to ensure that feelings and emotions are taken into consideration. Unless a system can be made to incorporate this, they can never replace humans completely. Another worry is that, as with any computers in this day and age, they can be prone to hacking. If they can be controlled for undesirable purposes, this could cause huge disasters. Though they are useful in society, I think there needs to be total control over them, which can only be exercised by humans.


  5. The world has already started to be taken over by AI, which is in some way or another part of our everyday routine. It is up to us to determine how much we decide to implement AI into our lives.


  6. We live in a world where technology is outperforming humans at tasks. Supermarkets use self-checkouts, yet human interaction still remains intact; they could have replaced all humans with machines, but, as mentioned above, it’s about a ‘moral conscience’. We humans need to interact with one another; no AI is able to express true feelings or emotions, so its creator will always be in control. A regulated system is needed: a system that allows AI models to learn from past experience will reinforce their training, allowing the AI to determine the best course of action. Human interaction can’t be replaced by machines; AI can be used to enhance human lives.


  7. From a medical perspective, being a natural thinker is crucial… with the advancement of medical science, we let newer technologies make important medical decisions, at times even diagnosing certain conditions. This has long been identified as “dangerous” medical practice, as nothing can truly replace a naturally thinking mind that has been trained for hours to pick up the subtle variations in health which, presumably, no artificial machine or equipment could ever detect.
    In conclusion, I believe this is a good concept but it can’t be the only approach, especially in medicine and surgery!


  8. Interesting article; I particularly liked the ant analogy. My opinion is that AI should not be banned… The advancement of science cannot be ignored… it will only end up re-surfacing later. AI is simply the start of a new technological era for man, and rather than trying to hinder the progress of science, we need to re-direct our minds to analyse the actual root cause of the dilemma. To me, this boils down to the potential for technological abuse. The real focus should be on identifying possible ways in which this technology could be intentionally or unintentionally abused, and the real call to action is working to create the right regulatory frameworks which ensure that human utility remains maximized. I feel like the media has a large part to play in “scaring” the public by painting unrealistic scenarios that may never actually happen. We need more “factual” information, and as pointed out in the article, governments need to get involved and incentivize social scientists/researchers to carry out more realistic surveys and social experiments so that high-risk consequences of the utilization of this technology can be identified and accordingly regulated. To me, the pros of AI do definitely outweigh the cons – but the ball is in our court to make sure it stays this way.


  9. A well-rounded argument; I especially like the in-depth look at the possible options regarding the future of AI. However, one point I feel was missed is that if AI continues to be developed, it may eventually replace the need for humans to work. Many monotonous jobs could easily be replaced by AI, which is very bad news for those employees. Many people’s incomes would be lost, and this could potentially ruin their lives.


  10. So far, artificial intelligence is not able to learn and make changes to its own programming; a program can only be created by its writer, a person. I agree there is an argument over whether we should continue to create AI, as it could cause harm, but the software writer must be held responsible for the AI if it is programmed that way; a program cannot be more intelligent than it is when it is written. So I agree with option two as a course of action: the research should continue, and we should continue to use AI to make technology and services easier to use, but there needs to be some sort of restriction governing what the software writer can and can’t include, to keep AI working for us and not against us.


  11. To be honest, I do not really think that safety research is going to do anything, because at the end of the day it is the human that wields the power to change things. In the wrong hands there isn’t really much you can do, so in that sense it’s down to chance. But that does not mean we should ‘cease the development of AI’, because good things can be done, and I’m not talking about trivial issues either. For example, Boxever is a company that leans heavily on machine learning to improve the customer’s experience in the travel industry and deliver ‘micro-moments’, or experiences that delight the customers along the way. The travel industry is obviously huge, and this is something that will affect almost everyone who travels, so you can see how it’s about more than just Siri, Alexa, etc.


  12. A very interesting article, giving lots of food for thought.
    Firstly, I believe that the whole purpose of the invention of artificial intelligence was to ease the way of life for humans. For example, the creation of autonomous cars ensures the availability of taxi services at night without the need for drivers. It can be argued that this puts drivers out of jobs; on the other hand, it also provides jobs in the manufacture of such vehicles.
    Secondly, AI will only be as intelligent as its programmer; the likelihood of AI enabling machines to become infinitely intelligent is low. If AI is able to develop its own intelligence, there is a possibility of infinitely intelligent development; however, that is unlikely due to the constraints put in place by humans.
    Finally, as with most things in this era, where there is a fight for power, regulations are required. This means that there should be a limit to the research that can be undertaken in the AI industry. Therefore, controls should be placed on the development of AI through safety frameworks.
    Thus, I believe that the development of AI is beneficial to society, but must be controlled to avoid misuse.


  13. A good read!
    There seems to be a lack of clarity with regard to the engineer’s responsibility in this dilemma.
    Engineers will undoubtedly be involved in the development of AI, but the responsibility of choosing one of the three options for action outlined above will surely stretch further than the engineer’s remit. These questions relate as much to ordinary people as they do to engineers, which I think is what makes this issue so interesting.
    In general, an engineer working on an AI project will have a moral and ethical responsibility to cease work if he or she foresees a negative societal impact from the work he or she is doing. The issue here is that AI may reach a point of intelligence where its designer can no longer predict its actions or ambitions.
    This is why I think frameworks will be required to keep AI design safe and productive; however, deciding what these frameworks should be is a question that should be carefully considered, and a responsibility that stretches beyond the people working within them, I think.


  14. A very well-written article. I agree with your point about superpowers not risking their enemies having a monopoly. It seems obvious to me that certain nations will have advanced technology, which in the future may lead to a few super-elites effectively controlling the rest of the world. In such a society, so-called “smaller countries” will have few rights. There has been an open letter, signed by over 20,000 experts including Stephen Hawking, Elon Musk (Tesla) and Steve Wozniak (Apple), calling for a ban on autonomous weapons. Do you think this open letter will ultimately be listened to, or will a catastrophic event have to take place before it is upheld, much as nuclear weapons led to countries pledging no-first-use (NFU), or to use them only defensively? Or could we, in fact, one day live in a world where control by whatever means will rule?


  15. An interesting read, making some good points with a diverse array of scenarios. I wonder, though, whether you considered the following possible case:

    The majority of your hypothetical scenarios work on the basis of AI being allowed near-limitless freedom in both ‘thought’ and action. However, the majority of technology today falls roughly into two categories: that which can ‘think’ but not act (without direct consent), and that which can act but not ‘think’ (without the necessary tools to do so). For example, Siri is capable of thinking based on a human command, and can only act (e.g. phone someone, set a reminder, etc.) with consent from the user, while Roombas are capable of carrying out tasks based on their programming but are unable to disobey or develop upon it unless a human enables them to do so.

    It seems to me that the majority of the catastrophic scenarios you discussed stem from the fact that there is no human intervention between the thought and action stages. This leads me to believe that if we wish superintelligent AI to flourish as safely as possible, we must either develop AI so that it falls into one of these two categories, or design it so that actions can lead to thought, and thought can lead to thought, but thought cannot lead to action without direct consent from a supervisor (a minimal sketch of such a gate follows at the end of this comment).

    You have mentioned the case of autonomous cars, which both think and act without human intervention; in my opinion this form of AI should remain in devices that are not capable of causing catastrophic damage. Autonomous cars are capable of causing harm in the event of a malfunction, but will be unable to cause widespread damage before human intervention. However, in cases where AI is able to control military weapons, power grids, stock markets, etc., it should be designed so that no action (and no large development in logic) occurs without human consent.

    Not only does this negate progressive logic leading to malicious actions, but it would also preserve (to some extent) jobs for humans (which, under current economic constructs, is a necessity). This does, however, still allow malicious actions to occur, as the supervisor could approve something out of ignorance or out of intent. But this is just a by-product of every advancement in technology: things that have been created with good intentions can also be used for harm. One example is the creation of the V-2 rocket for the Nazis, whose creator, Wernher von Braun, stated: “The rocket worked perfectly except for landing on the wrong planet.”

    I do understand that this idea can, in some regards, be considered to fall under your ‘Options for Action (2)’, but I feel you were mainly talking about AI that is able to act under set rules, with little to no supervision.

    While it CAN slow down the progress that fully autonomous superintelligent AI can provide, it does make such systems far safer and easier to control.
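    A minimal sketch of the consent gate described above might look like this in Python (the names and the toy planner are purely illustrative, not any real framework): the agent may ‘think’ freely, but no proposed action is executed without explicit human approval.

        # Toy consent gate: thought can lead to thought, but thought
        # cannot lead to action without a supervisor's approval.
        def plan(goal):
            """'Thought' stage: free to run without oversight."""
            return [f"step towards {goal}"]  # stand-in for real planning

        def execute(action):
            """'Action' stage: only ever reached via the gate below."""
            print(f"executing: {action}")

        def consent_gate(goal):
            for action in plan(goal):
                answer = input(f"Approve '{action}'? [y/N] ")
                if answer.strip().lower() == "y":
                    execute(action)
                else:
                    print(f"blocked: {action}")  # thought never becomes action

        consent_gate("deliver passenger safely")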


    1. Perhaps at the start of our article we failed to define what is meant by artificial superintelligence: “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills” (thanks, Wikipedia), which is generally perceived as the point after which technological singularity and an intelligence explosion have occurred. The concept is extremely hard to fathom, as we imagine it as the kind of code we are used to seeing in applications or games, rather than as syntax with the potential to dictate the future direction of our world that just happens to be viewed via a computer screen.

      What appears to be the most concerning aspect is that, as a civilisation, we are in a technological race to develop artificial intelligence; a race being led by private entities from different cultures with varying agendas, rather than by a single government organisation developing a single product with complete control.

      Were a superintelligent bot to be created (purposefully or accidentally) with few initial parameters, recursively self-improving and wielding all the information on the internet, we would find ourselves in a scenario whereby a blanket regulatory framework (most likely written by a different agency) would be obsolete.

