AI: The end of humanity – or merely the beginning?

In order to decide to what extent engineers should continue to work on artificial intelligence, it is first necessary to distinguish between different types of AI. Narrow intelligence can perform only a limited number of tasks, but performs them well. General intelligence (AGI) is defined as AI that is at least as intelligent as humans across the board. A super-intelligent machine is so much more intelligent than humans that its capabilities are unpredictable.

Domo Arigato, Mr Roboto

With artificial intelligence we have the opportunity to elevate the quality of life of every single being on this planet to unimaginable heights: AI will be able to answer many of the “hard problems” humanity faces, such as human development, climate change and averting a Malthusian catastrophe. It is for this reason that this side believes that, from a utilitarian point of view (i.e. that which brings about the greatest good for the greatest number of people), it is ethically imperative that we grasp this opportunity with both hands and pursue AI to the greatest (in)conceivable extent: superintelligence. However, in order to ensure that an AI would bring about these numerous, life-changing possibilities without risk, we must first deal with a number of issues, principally predictability, responsibility and transparency. Once an AI starts to work in social and professional situations, would it inherit the ethical and moral responsibilities that a human would carry? Or would these responsibilities lie elsewhere?

We are dealing with cognitive technology, which is a problem when looking to previous ethical dilemmas for precedent: this technology may act in ways unpredictable to humans. Because specific behaviours may not be predictable, it is not enough to demonstrate a system’s safe behaviour in a number of different operating environments; we must be able to prove exactly what the system is attempting to do. For this reason we must ensure that any AI is made to think like a human being and is not just a product of human beings.

Beyond the technical, a legal precedent must be set regarding who is ultimately responsible for the actions of an AI: is it the end user, the programmer, or the person who decided AI was a suitable option for that particular case? If we assume that engineers and programmers are able to implement a manual override, it is easily foreseeable that a user would rather lay the blame on the AI than risk an override going awry; deontologically, she may be perfectly within her rights to do so if it is decided that an AI inherits professional responsibilities. If she decides to intervene, is she then to blame if the result of the intervention is worse than the outcome she attempted to prevent?

The final aspect to consider is that of transparency. Ideally, any AI would be totally transparent to inspection and amendment, ensuring that any unintended actions can be mitigated. If the system is based on Bayesian networks or decision trees, this is not a particularly arduous task; if, however, the algorithm is based on neural networks or is a genetic algorithm, it could be impossible to work out why it acts the way it does. If we are to progress with AI, we must ensure optimum transparency.
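To make the contrast concrete, here is a minimal sketch, assuming scikit-learn and its bundled Iris dataset (neither appears in the original post, and the model choices are illustrative only): a decision tree can be printed as human-readable rules, while a trained neural network exposes only its weight matrices.

```python
# Minimal sketch of the transparency gap described above.
# Assumes scikit-learn; the Iris dataset is just a stand-in task.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# A decision tree's reasoning can be dumped as if/else rules:
# every prediction follows one inspectable path.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))

# A neural network of similar accuracy offers no such trace:
# its "reasoning" is spread across raw weight matrices.
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                    random_state=0).fit(X, y)
print([w.shape for w in net.coefs_])  # shapes, not reasons
```

The tree's output can be audited and amended rule by rule; auditing the network means staring at arrays of floating-point weights, which is the heart of the transparency worry.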

Nobody is saying that the road to AI is an easy one; it is likely fraught with difficulty, but to turn our backs on such a momentous possibility is akin to standing on the precipice of the Industrial Revolution and deciding it isn’t worth the ephemeral smog.


A Robot Can Never Harm A Human?

When one thinks about the potential dangers of artificial intelligence, one usually imagines a scenario in which super-intelligence is used in warfare. However, the dangers are much more subtle and much harder to prevent in real-world scenarios. A super-intelligence programmed with the goal of collecting stamps may, for example, decide that the most efficient way of getting stamps would be to use humans as raw materials. Renowned scientists have discussed the negative outcomes that may arise through AI, such as weaponisation, monetary exploitation and even human extinction. According to Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.”

The moral issue here is: should engineers contribute to systems which could have such disastrous consequences?

Firstly, we simply aren’t ready for general intelligence. In order to prevent the accidental catastrophes which could arise, a stringent ethical framework would need to be applied to the AI. Such a framework would be both difficult to decide upon, due to the lack of consensus around ethical issues, and difficult to implement, since any programming errors could have disastrous impacts. Additionally, governing bodies would need to introduce regulations on the development and usage of AI, and so development must halt while these issues are resolved.

But would we ever be ready for general and super-intelligence? Consider the question from a utilitarian perspective, in which both the positives and negatives of a decision are weighed to decide whether the net outcome would be good or bad for the general population.

The positives could be extreme, leading to huge advances in every field of science, from simple image recognition to drastically lengthening the human lifespan. The negatives, however, could be effectively infinite, since they include the possibility of the extinction of the human race. Therefore, no matter how small the chance of a negative outcome, the expected net total would always be negative, meaning that a general intelligence should not be created.
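A back-of-the-envelope expected-utility calculation makes the shape of this argument explicit (the notation is ours, not from the original post: p is the assumed probability of catastrophe, U_good any finite benefit, U_bad the catastrophic loss):

```latex
\[
E[U] = (1 - p)\,U_{\text{good}} + p\,U_{\text{bad}},
\qquad U_{\text{bad}} \to -\infty
\;\Longrightarrow\;
E[U] \to -\infty \quad \text{for any } p > 0.
\]
```

However small p is, an unbounded loss swamps any bounded gain, which is precisely why the net total is claimed to be negative.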

If we assume that we won’t produce general intelligence, the next logical question is “How far do we take narrow intelligence?”

Narrow intelligences are already widely used, from speech processing to automated factories. These examples bring a strong benefit to society, but other types of narrow intelligence may not be as benevolent. The UK, USA and Israel are currently developing LAWS (lethal autonomous weapons systems) and have opposed attempts by the UN to ban such systems. Taking a consequentialist approach, LAWS could lead to the accidental killing of innocents even when used properly, and could be exploited for acts of terrorism in the wrong hands. Therefore development of LAWS should also be prevented.

In conclusion, engineers directly involved in the design of artificial intelligence should restrict their work to the bounds of narrow intelligence. Any increase in the calibre of intelligence could lead to unknown consequences, which is particularly worrying in the area of warfare, where human lives are at stake and dangerous technology could end up in the wrong hands.



49: James Jones, David Westcough, Jozef Quinn, Steven Turner

7 thoughts on “AI: The end of humanity – or merely the beginning?”

  1. I totally agree that a great AI would elevate the quality of life for most, and help solve the hardest of problems faster than a collective of human brains could. However, exceeding human intelligence is definitely a dangerous feature of AI; no matter how much control we try to hardcode into the AI, it will eventually break it with the exponential growth of intelligence. In doing so, it could eventually act of its own accord and perform malicious acts out of our control, a scary thing. I also agree that such technology could fall into the wrong hands, especially regarding the warfare part mentioned in this article. Great read! We should learn from the movies…


  2. The places I see AI being most beneficial to humanity, without posing any threat, are in the service sector: something like customer service over the phone, or the drive-through at McDonald’s. Anything beyond this scope would pose a direct threat to humanity. AI should be a tool, a service, a convenience to humans – not a competitor. Stephen Hawking indicated that one of the most dangerous things for the future of humanity is AI. It is inevitable that AI will fall into the wrong hands and be used to do wrong acts, but that should not stop us from developing it, as long as we know the limit which should never be exceeded. From a social standpoint, it has its downsides too. In these modern times people have never been less social. Video games are keeping children (and adults) inside. AI has the potential to create artificial “friends.” This is very bad. People need to spend more time together, off their phones.


  3. AI is definitely a topic that deserves to be developed further. One of the fields where it could find a helpful and smart use is medical research: many experiments could be accelerated and boosted through the use of AI, saving precious time.
    At the same time, it is important to be very careful in developing these technologies, because the whole process could go out of control, endangering human beings and exposing ourselves to something that we can no longer control.


  4. A very thought-provoking article, a bit brief in places but very good overall! While I definitely agree that AI has great potential for benefit, I feel that we need to be careful where and how it is implemented, especially, as stated in the article, for general intelligence systems, as these have the greatest potential for harm.

    However, if we control the instances where it is used, it is a promising technology. Recently Google’s AlphaGo demonstrated the learning capacity of its neural network: in under a year it went from being beaten handily by mid- to high-tier Go players to beating one of the highest-ranked players in the world. From a medical perspective, it is exciting to think of employing this kind of technology to advance our knowledge of certain areas, for example chemical interactions with various target receptors in the human body and the structures of potential medicinal compounds that can interact at these receptors. It has the potential to provide novel candidates for drug development – highly beneficial, especially considering it will be human-level thinking at a far greater speed!

    I definitely agree that the use and availability of this technology should be limited, and probably not used in military applications if possible, particularly as not just misuse but technical glitches could result in loss of life.


  5. The decision to stick to narrow intelligence seems very short-sighted. The choice of empiricism and consequentialism for the two counterarguments seems arbitrary and poorly implemented, especially since a) human fighter pilots are far from infallible and regularly kill innocent people, and b) much of what the human race has been doing for the past 50 years has been potentially destroying the human race, and yet we continue to do it even now, when the evidence that we are damaging the planet is almost irrefutable.

    The world could face the major crises listed imminently, and narrow intelligence, by virtue of the fact that it cannot deal with all the variables involved, could not cope, and could end up making some of the crazy logical leaps, leading to the destruction of the human race, that are discussed. A general intelligence has a much greater chance of being able to deal with these problems (particularly if it wasn’t limited to human-like thinking, as is mentioned here), by being able to factor in all parameters, and thus all eventualities, intelligently. General accountability could not lie with the programmer, because the AI by definition learns and improves itself, but if certain controls were programmed in (probably by law), these could be accountable to the programmer. Full transparency would not be possible here, because there is the potential to think above human understanding, but it would (if the appropriate controls were in place) be possible to audit the outputs to monitor behaviour.


  6. Interesting article! A few points which may be worth considering in your argument.

    1) Why ‘the end of humanity – or merely the beginning?’
    Beyond being a catchy title, your whole article is framed in this supposed dichotomy, without any acknowledgement of a middle way. Sure, AI could eventually lead to the end of the world, but so could any big advancement if it goes drastically wrong. Surely it is more useful to consider the immediate potential negative and positive effects; to look at potential safety precautions etc., and how these would affect the productivity of AI. You do this to an extent, when discussing what ethical framework to consider, but this is dismissed as a problem much too quickly through the challenge of cultural relativism. Cultural relativism is not an unanswerable problem when considering ethical frameworks. It has managed to weather the creation and instigation of a United Nations, a Declaration of Human Rights, the WTO and more; why would the creation of an ethical framework for AI be troublesome when we have been successful in other worldwide ethical advancements?

    2) How much will it help?
    The underlying (and often explicit) assumption in this article is that AI can bring humanity to near-euphoric heights. Why? You use the example of ‘human development’, yet poverty around the world is a product of corruption, institutionalised mistreatment of the Global South by the Global North, natural disasters and more. Maybe this is just my simplistic understanding of AI, but when most global blockades to development (war, poverty, climate change etc.) are problems of a political nature, why assume AI is bound to solve them? I do not see how AI could ever, for example, tackle problems of corruption around the world. As long as we have corruption, we will have war, poverty, climate change (albeit maybe less excessively than currently), and more. So your statement that ‘AI will be able to answer many of the “hard problems” humanity faces’ needs to be expanded upon. These problems are ‘hard’ because they are complex; if we knew the solutions, we would have been able to implement them without AI. Further, you could consider how AI could reinforce or further complicate these problems, as opposed to being a natural solution.

    3) Is AI unique?
    Throughout your article, you have littered references to engineering and political initiatives related to the progress of AI. For example, ‘a legal precedent must be set regarding who is ultimately responsible for the actions of an AI.’ Is this not the same as any other product? If an individual misuses a product – or maliciously uses a product – they are culpable. If the product is mis-sold or made with a fault, the designer and programmer are culpable. I think you need to further your argument as to why AI is a special case – why it requires special laws and political regulation which do not apply to other products. You may be right, but currently this is just your assumption – I don’t quite buy it yet.

    Hope this is useful!


  7. Artificial intelligence is a unique branch of technology; like any technology it can be used for bad or for good. However, this is a special case because it can potentially grow into a ‘conscious being’ greater than ourselves. AI units, unlike humans, will not need biological resources to survive, and may grow to an extent where they are independent of human control. There is no indication that a new ‘species’ of being, AI, given the choice, will work in the interest of serving its creators. Given the current connectivity of the world through the internet, any AI unit could potentially form a hive mind, connecting to any other AI around the world, exchanging information and developing strategies for whatever plot they may deem necessary, in human interest or their own.

    AI would obviously help advance all fields of science where human cognitive processes limit the speed of processing large amounts of data (imagine an AI doing doctoral research), but that could accelerate growing inequalities in the world, as developed countries will have AI access and even more power than they already do. It will be down to humans to use this advancement for the betterment of all humanity or for self-interest. Nonetheless, the field of AI should continually be developed, with the appropriate safety measures, so that it does not become more powerful than its creators.

