Artificial Intelligence: Precursor to Human Immortality or Extinction?

How do you feel about working until your 60s? Would life without work be paradise, or is work an intrinsic part of life? These questions may seem trivial, but they are becoming relevant thanks to Artificial Intelligence (AI).

AI is the application of software to create machines capable of performing tasks that normally require human intelligence. AI surrounds us, from speech-recognition software to autonomous vehicles and financial management services. Development is geared towards creating Artificial General Intelligence (AGI): machines that can learn and become as intelligent as humans. This point is known as the “singularity”, after which AI learns at an exponential rate, transforming into Artificial Superintelligence (ASI) with seemingly limitless ability. Some experts believe this could happen by 2050, with breakthroughs occurring at companies such as Google and Facebook.

AI will have an enormous impact on humanity. Its potential to cause human extinction or enhance mankind has led to fierce scientific and ethical debate.


Impact on Jobs: A Case Study

Machines are increasingly replacing human labour as companies look to boost productivity. Machines do not need food, rest or holidays, giving them significant advantages over human workers. A prime example is Changying Precision Technology Company: this Chinese manufacturer replaced 90% of its human workforce with robots, resulting in a 162.5% increase in production rate and a 20% drop in defects. This is attributable to the consequentialism of capitalism, under which it is ethically acceptable to maximise profits for stakeholders by any lawful means. Nor is this limited to manufacturing: machines are already affecting the service sector, from taxi drivers to waiters.
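The Changying figures imply a striking jump in output per remaining worker, which a back-of-the-envelope sketch makes vivid. The baseline workforce and output numbers below are purely hypothetical; only the quoted percentages come from the case study:

```python
# Back-of-the-envelope check on the Changying figures quoted above.
# The baseline numbers (100 workers, 1000 units) are hypothetical;
# only the percentage changes are taken from the case study.
workers_before, workers_after = 100, 10         # 90% of the workforce replaced
units_before = 1000.0
units_after = units_before * (1 + 1.625)        # 162.5% rise in production rate

per_worker_before = units_before / workers_before   # 10.0 units per worker
per_worker_after = units_after / workers_after      # 262.5 units per worker

# Output per remaining human worker rises by a factor of 26.25
print(per_worker_after / per_worker_before)
```

Whatever baseline is chosen, the ratio depends only on the two quoted percentages (2.625 / 0.1 = 26.25), so the conclusion holds regardless of the hypothetical numbers.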

Can our economic systems adapt to a world where human labour has increasingly lower value? Broadly speaking, in a capitalist society we gain income according to the value we add to the economy. If a machine adds more value than a human ever could, the human becomes redundant; with AGI-capable machines, much of the human workforce could be rendered unnecessary. In our society, redundancy leads to a loss of income and, subsequently, a lower quality of life. Furthermore, the redundant workforce may find themselves unemployable due to a low skillset, causing suffering and further increasing socioeconomic inequality. From a utilitarian perspective, where overall human happiness should be maximised, this renders AI machines unethical. But perhaps we are identifying the wrong fault: if our society were not centred on the economy, becoming redundant might not be such a bad outcome.

‘Work’ takes on different meanings in different cultures: in Scandinavia overtime is frowned upon, while in Japan leaving work on time can be seen as disloyalty! Economic, cultural and legal differences mean the impact of AI will vary. Developed countries are largely capable of retraining workers for different occupations. However, in developing countries where a significant proportion of workers depend on low-skilled manual labour, AI would destroy people’s income streams, worsening inequality. This lends greater credence to the utilitarian argument that AI is unethical.

On the other hand, if humanity could make it past the initial job loss and inequality caused by the transition from a human to an AI-based workforce, the future is bright. Relieving people from the burden of work would allow the pursuit of more fulfilling interests and enable more time to be spent with loved ones. We could view this world as a hedonistic utopia, a world where employment is a choice. Could AI make this utopian concept a reality, or would AI’s impact be far more sinister?

What are our options?

An ethical analysis of AI must go beyond employment and consider its wider impact on humanity. There are several potential courses of action:

  1. Cease development of AI completely.
  2. Do nothing; allow AI to develop naturally, as in previous technological revolutions.
  3. Develop a regulatory framework to guide the development of AI.

Any decision must consider all stakeholders. Governments are heavily investing in AI for economic and military purposes while large corporations are seeking technological and financial benefits. Individual users are developing AI for personal benefits, while investors seek a return on investment. As AI will affect all humanity, the general public must be regarded as a key stakeholder.

Utilitarianism would suggest ceasing development before the singularity is achieved: if AI can be termed humanity’s ‘biggest existential risk’, that risk outweighs any potential benefits. However, given the wide range of stakeholder interests, is this an enforceable option? Probably not.

Can we rely on entrusting AI development to humans with good morals, as virtue ethics would suggest? If so, how can this be ensured, and who should be responsible for vetting AI developers? Even ensuring ethical humans are in charge of developing AI doesn’t guarantee humanity’s safety: creations such as nuclear weapons have caused human suffering and pose an existential threat themselves.

The lack of a regulatory framework and guidelines hinders the use of duty ethics, which proposes following established norms to act ethically. Developing an overarching framework to guide the safe development of AI requires collaboration between all stakeholders, as advocated by care ethics. This means overcoming various obstacles, including cultural differences and the conflicting views of AI experts. How do we amalgamate all considerations into a unified, mutually agreeable global framework? Precedents such as the UN Human Rights charter exist, but their effectiveness is questionable.

Undoubtedly, ASI will lead mankind into uncharted territory, with unpredictable consequences. For example, imagine an intelligent being capable of using mass probability analysis to predict the future. If it knows what will happen, does that erode the human concept of free will?

We tend to be scared of things humanity cannot comprehend. Does this make them ethically wrong? As scientists and engineers, our duty is to explore the unknown for the benefit of humanity. If we make ethical decisions based on fear, are we not constraining humanity? Perhaps, purely in the pursuit of knowledge, AI is our future. But is this argument valid for a subject that risks humanity’s extinction? As Professor Stephen Hawking said, ‘creating Artificial Intelligence would be the biggest event in human history. It may also be the last.’

Alan Middup
Brian Muriithi
Gabriel Bracken
Victor Yuan


17 thoughts on “Artificial Intelligence: Precursor to Human Immortality or Extinction?”

  1. Good read, concise and well cited. It has a nice structure and good explanations of complex ideas, although perhaps more could have been done at the end to explain the differences between ethical systems. Also good consideration of other cultural and global perspectives, which makes it much more relevant to the reality of an issue of this scope. Enjoyed the flair, particularly “the human becomes redundant”!


  2. Really well written and well-rounded approach to the topic. Opens a discussion well, allowing gateways for further points. Very easy read for someone who needs an introduction to the discussion!


  3. This is effective in explaining the basics of AI to someone with little knowledge on the subject, and outlines all the necessary questions. When elaborating on the various options we may have in solving the ethical issues ASI creates, it seems a little more clarification could be useful: For instance, surely the effective (or ineffective) nature of the UN Human Rights Charter is related more to a lack of human communication on a global scale in the past? In other words, if ASI, with its “limitless ability,” leaves its future to be so unpredictable, then any unethical consequences would be impossible to predict, therefore practically impossible to prepare for, and so we cannot simply look to past governmental decisions for an answer. Perhaps more could be investigated into the motives behind using ASI. This is a truly well-rounded and open-minded outline of such a complex concept, yet naturally it leads us to ask so many more questions – I personally would be intrigued by the physical reality of ASI: what would “an intelligent being capable of using mass probability analysis to predict the future” physically look like?


  4. Building machines with embedded moral code (e.g. not to endanger humans) might be an answer to ethical issues, but that could be removed or replaced by hackers. I definitely agree that a framework needs to be established in order to prevent harm.

    The way machines reason and make decisions is also very different. If AI expands and is used to fulfil utilitarian principles, it will be inevitable that machines are responsible for the deaths of humans, as a result of decisions computed in life-or-death situations.

    AI could become like the many tools used worldwide by humans to make life easier, like maths or language. The evidence tells us that AI will be more rational, using statistics in everyday life and removing error created by bias. But the suggestion here is that AI or ASI may grow to have a level of sentience that far surpasses our own.

    Also would the ethics of free will/human rights need to be applied to AI machines? If, somehow, the human brain is understood in such a way that it could be ported to a machine, would we then need to treat that machine differently?

    I thought this was a really great piece. It got me thinking about how AI might be the next step of human evolution. One cannot stem the flow of technological advancement and I have no doubt that AI will change human life as we know it


  5. An interesting discussion on the pros and cons of AI and ASI. As the article says, as engineers do we not have a duty to explore the unknown for the better of humanity? Fail-safes can be put in place when developing AI; we are surrounded by dangerous items on a daily basis that only operate because of the measures taken to make them safe, which is why I don’t think we should be restricted from developing AI in a sensible manner. AI won’t be something that arrives from one day to the next; it will take decades of development and small incremental steps, leading to a greater understanding of what it actually is we are creating, meaning that safe systems can be imposed step by step. It is ironic that humans may find a way to abuse the system and make it a danger to humanity, but that just proves that the only thing we should fear is ourselves.


  6. The questions raised about work productivity and the value that places on human life were very interesting. The application of an ethical principle to this idea was very engaging and rounded out the argument nicely. However, at the beginning of the paragraph you state that we operate in a capitalist society, where ‘we gain income depending on the value we add to the economy’, but then apply utilitarianism as your ethical example. This confused me a little, because utilitarianism centres on human happiness, yet capitalism does not, so it seemed a little incongruous. Unless, of course, the intention was to provide an alternate ethical commentary that concludes AI is morally questionable, but in that case it needs rewording to make that clear.
    I particularly enjoyed the final comment in that paragraph, about how this indicates the potential need for a reevaluation of how we value human life. It was very astute and really brought together your argument before you began expanding on more specific lines of questioning later.
    Overall it was brilliant, I had no prior knowledge about any of the issues surrounding AI and now I feel very clever.


  7. The article is well written and very easy to follow. It’s unique as it delves into some of the often overlooked aspects of AI and provides a beautiful critical analysis of the challenges it brings and the options currently available to deal with them.


  8. Really interesting and well written article. The topic of AI is pretty contentious and a lot of key points have been covered well here.

    We use a fair bit of automation at work (in logistics), particularly for picking and replenishment of parts. The tech used will not make the same mistakes that human workers would in the same roles, such as miscounting or misplacement of stock – provided, of course, it’s all been programmed correctly. An important thing to note is that whilst these machines don’t require rest and holidays per se, they do require downtime for maintenance and often recharging or replacement of a power source. I would question whether this would then create more skilled jobs.

    Overall, a great blog post and one I will be reading more around.


  9. An interesting read! It really touches on the current stage AI is at and how it may develop in the near future. It’s difficult to foresee a future where AI becomes more intelligent than a human and can do things like designing, but when it comes it will shock a lot of us.


  10. Thanks for writing this, it was very thought-provoking and now I feel like I have a deeper understanding of AI.

    I don’t think AI should be something to fear but to embrace, as long as a governing framework (akin to Asimov’s three laws of robotics) is established and policed. Throughout modern industrialised nations we see that automation of processes results in more skilled jobs, accompanied by an upskilling of existing labour. There is no reason we could not see the same with AI. I find it hard to imagine ASI recreating the creative thought that is essential to so many careers. However, ASI could be invaluable in data-heavy industries such as finance, law and fraud prevention, and could revolutionise health care, something which IBM’s Watson has already begun to do.

    Many times we have heard that a new technology spells the end for humanity; so far they’ve all been proved wrong. I hope this one is too.


  11. Very interesting read. I have to say that I fall in the camp where I see the positive impact of AI before fear of the super AI. I think there will always be humans in charge of AI, and ultimately built into any code should be human override and human decision-making; that, in my opinion, would avoid the scenario where AI could act to the detriment of humanity. As for the fact that some jobs will become redundant, that is one of the facts of life in this day and age. I think that we as humans will develop transition methods that will see the population retrained in ethical ways.


  12. This is a really nicely balanced and well constructed analysis of a highly interesting and complex topic.

    The article considers the negative impact robotics has on human jobs in the manufacturing industry, due to the productivity benefits it brings, which is, of course, hugely detrimental to the job security of those involved in physical process work environments. However, it is also important to highlight the increase in productivity AI can enable from humans. For example, AI in self-driving cars allows time typically wasted commuting to be used for mobile working instead. For those with substantial commutes, this will be invaluable; effectively creating more hours in the day to address the work-life balance struggles many people have.

    Overall, an excellent thought-provoking piece on one of the most important issues in modern society.


  13. Interesting blog, coming at a time when artificial intelligence is taken seriously by researchers and industry alike.
    Just adding a quick point against the motion (though my heart is with the motion).
    AI is an interesting notion but needs to be developed strategically. ‘Today’s technology is tomorrow’s junk.’ We often see (in the case of gadgets) that with technological advancements, existing technology becomes obsolete and more often than not feeds into the electronic junk pile. If AI research across the globe is strategically managed, I think solutions that wouldn’t be painful to the environment could be arrived at, potentially serving mankind to the fullest! 🙂


  14. Very well written, interesting and informative read.

    Opens up a wide variety of future issues yet presents the excitement and positive side of the discussion.


  15. Really interesting read that introduced the topic really well. A few extra facts and references would have been useful to help develop the arguments for and against AI further. Will certainly be taking an interest in developments within this field in the future!

