Are We Ready for a World Dictated by Artificial Intelligence?

Traditionally, the use of AI within industry has been limited to areas in which computers have a clear advantage, such as data analysis, weather forecasting, and large-scale information storage. Is now the time to consider extending this technology into areas more conventionally thought of as the “human” domain, such as financial decision making, national defense, and the judicial system?

AI’s Implementation is the Next Logical Step

[Image: Group22pic1]

Go is widely regarded as the world’s toughest board game: there are more potential arrangements of a Go board than there are atoms in the known universe (1). Long after computers overtook humans at Chess, Go remained a game in which humans held the advantage, and it is now often used to validate the efficiency with which artificial systems are capable of learning and compiling information.

AlphaGo, a specialised AI system, built upon its initial programming by iteratively competing against itself, developing to a level where it was able to comprehensively defeat scores of skilled human competitors (2). It is now seen as a potential precedent for the implementation of AI in real-world, complex systems. After all, if it is capable of overturning an in-built disadvantage, learning the game from scratch, and mastering it to the extent that it could go on to crush the playing community, why wouldn’t it be able to reach the same level of success in the spheres of finance or law?
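
AlphaGo’s real training pipeline combined deep neural networks with Monte Carlo tree search, but the self-play loop described above can be illustrated at a much smaller scale. The sketch below is a minimal, purely illustrative example (the toy game of Nim and every name in it are our own choices, not anything from DeepMind’s system): an agent learns a game simply by playing against itself and nudging its value estimates toward the outcomes of those games.

```python
# Minimal self-play sketch (illustrative only): learn the subtraction game Nim
# by repeatedly playing against yourself and updating value estimates.
import random
from collections import defaultdict

HEAP_SIZE = 10          # starting number of stones
MOVES = (1, 2, 3)       # a player removes 1-3 stones; whoever takes the last stone wins


def legal_moves(heap):
    return [m for m in MOVES if m <= heap]


def train(episodes=20000, epsilon=0.1, alpha=0.2):
    """Learn an action-value table purely from games played against itself."""
    q = defaultdict(float)  # (heap, move) -> estimated value for the player to move

    for _ in range(episodes):
        heap, history = HEAP_SIZE, []
        while heap > 0:
            moves = legal_moves(heap)
            if random.random() < epsilon:                      # explore
                move = random.choice(moves)
            else:                                              # exploit current knowledge
                move = max(moves, key=lambda m: q[(heap, m)])
            history.append((heap, move))
            heap -= move

        # The player who made the final move won; rewards alternate going backwards.
        reward = 1.0
        for state, move in reversed(history):
            q[(state, move)] += alpha * (reward - q[(state, move)])
            reward = -reward

    return q


if __name__ == "__main__":
    q = train()
    for heap in range(1, HEAP_SIZE + 1):
        best = max(legal_moves(heap), key=lambda m: q[(heap, m)])
        print(f"heap={heap:2d}  learned move: take {best}")
```

With enough episodes the agent rediscovers the well-known strategy of leaving its opponent a multiple of four stones without ever being told what good play looks like, which is the same bootstrapping idea, at toy scale, that the paragraph above attributes to AlphaGo.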

Engineers have already successfully developed “Celo”, an AI money-management assistant, which interacts with users via instant message, analysing user information to generate personalised advice such as financial management plans (3). The success of AI in the financial industry goes further than a simple assistant, however, with many fund-management companies developing their own AI trading systems, which have quickly come to be regarded as more effective than any human trader (4).
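
Celo’s internals are not public, so the following is only a hypothetical sketch of how such an assistant might turn a user’s transaction history into a short piece of personalised advice; the categories, thresholds, and function names are all invented for illustration.

```python
# Hypothetical sketch of a money-management assistant's advice step (not Celo's actual code).
from dataclasses import dataclass


@dataclass
class Transaction:
    category: str   # e.g. "rent", "eating_out", "savings"
    amount: float   # spend for the month, in GBP


def monthly_advice(income: float, transactions: list) -> str:
    """Summarise spending and return a short, chat-style piece of advice."""
    spend = {}
    for t in transactions:
        spend[t.category] = spend.get(t.category, 0.0) + t.amount

    messages = []
    if sum(spend.values()) > income:
        messages.append("You spent more than you earned this month.")
    # Illustrative rule of thumb: flag any single category above 30% of income.
    for category, amount in spend.items():
        if amount > 0.3 * income:
            messages.append(f"'{category}' took up {amount / income:.0%} of your income.")
    if not messages:
        messages.append("Spending looks on track; consider moving the surplus into savings.")
    return " ".join(messages)


print(monthly_advice(2000, [Transaction("rent", 900), Transaction("eating_out", 700)]))
```

A production assistant would draw on far richer data and models; the point is only that the “personalised advice” in question is, at bottom, computation over the user’s own records.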

In terms of how far this success can go, the sky truly is the limit. There have been major inroads in law, medical diagnosis, and national security, with many military leaders recognizing the potential of AI as a means of outsourcing decision making, information sharing, and even combat participation. With proper programming, an AI would be able to call upon vast data banks and use the stored information to develop the best possible solution to any given problem.

Put simply, AI has reached a stage where it is the perfect tool for rapid problem solving. The responsibility now lies with humans to apply it properly, maximising the good it can do whilst minimising potential harm. Though the nature of AI is often regarded as complex or mysterious, is this not the same standard we would apply to any tool with potential for misuse, such as an axe, or even fire? AI is ready for widespread implementation. All that is needed now is the authorisation.

From a utilitarian perspective, widespread implementation of AI makes perfect sense (5). Full-scale automation of decision making saves time and money, and avoids the mental health risks associated with asking humans to make difficult decisions. Admittedly, it is subject to the occasional oversight, and successful implementation may put some people’s livelihoods at risk, but the overall potential for good (a society which operates on facts, data, and objectivity) means that these drawbacks can, for practical purposes, be set aside.

AI is Not Yet Ready

[Image: Group22pic2]

The idea that recent inroads in AI mean we should now open the floodgates completely is highly problematic. Though a lack of feeling may be conducive to success from an analytical perspective, it does not lend itself to the effective resolution of real-life issues. Human problems are messy and packed with nuance, requiring compassion, understanding, and often creativity, rather than just algorithms and data, to resolve. Unfortunately, as things stand, empathy is not part of the AI toolbox.

When engineers program in “artificial emotion”, the decisions the computer makes are always informed by the biases of the individuals responsible for programming it (1). There are already examples of AI systems, utilized for their so-called objectivity, exhibiting discriminatory behaviors. Northpointe, under government contract, developed a system intended to calculate a score indicating a felon’s likelihood of reoffending, to be used to inform both sentence length and bond amount. It was found that not only was the system “totally unreliable”, possessing an accuracy rate of just 20%, but it also falsely flagged BME defendants at twice the frequency of white ones (2)(3).
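
The disparity described above is usually measured by comparing false-positive rates, that is, the share of people in each group who were flagged as high risk but did not go on to reoffend. The toy audit below uses made-up records (not Northpointe’s data) purely to show how that comparison is computed.

```python
# Illustrative fairness audit: false-positive rate per group, on invented records.
from collections import defaultdict

# Each record: (group, flagged_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("B", True,  False), ("B", False, False), ("B", False, True),  ("B", False, False),
]

flagged_but_did_not_reoffend = defaultdict(int)
did_not_reoffend = defaultdict(int)

for group, flagged, reoffended in records:
    if not reoffended:
        did_not_reoffend[group] += 1
        if flagged:
            flagged_but_did_not_reoffend[group] += 1

for group in sorted(did_not_reoffend):
    rate = flagged_but_did_not_reoffend[group] / did_not_reoffend[group]
    print(f"Group {group}: false-positive rate = {rate:.0%}")
```

On these invented numbers, group A’s false-positive rate is double group B’s, which is exactly the kind of gap that was reported for the real system.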

Such situations are often worsened by the fact that, when things go wrong, the apparent complexity of these systems makes culpability difficult to assign (4). In the Northpointe case, once the system’s failings were publicized, the company outright refused to share its methodology (4). People’s lives should not be placed at the mercy of a program which can neither comprehend the significance of the decisions it is making, nor be held accountable when those decisions are found to be lacking.

Cases such as this, and the shortcomings they highlight, give rise to further concern when we consider the interest various militaries have shown in utilizing AI (5). AI is notoriously bad at processing ambiguity and unpredictable or conflicting information, all of which are endemic to frontline combat. Improper processing of such information could quite easily manifest itself in a failure to differentiate between an ally and an enemy (6). Thus, the potential for, and consequences of, failure would be enormous if this implementation were allowed to go ahead.

From a virtue-based perspective, as well as a common-sense one, wholesale deferral of human decision-making roles to machines is a step in the wrong direction, for now at least. Our gut feeling, substantiated by a steady stream of cases like Northpointe’s, tells us that until clear, fair, and universal standards can be reached, AI’s unmonitored expansion would result in discrimination, profiling, and other such risks to human wellbeing.

References for Part 1

1 – https://blog.google/topics/machine-learning/alphago-machine-learning-game-go/

2 – https://www.newscientist.com/article/2117067-deepminds-alphago-is-secretly-beating-human-players-online/

3 – http://fintechnews.ch/pfm/the-rise-of-chatbot-banking-and-ai-money-managing-assistants/5976/

4 – http://www.turingfinance.com/dissecting-algorithmic-trading/

5 – https://www.utilitarianism.com/utilitarianism.html

References for Part 2

1 – https://motherboard.vice.com/en_us/article/weve-already-taught-artificial-intelligence-to-be-racist-sexist

2 – http://bgr.com/2016/05/23/court-risk-assessment-algorithm-northpointe/

3 – https://motherboard.vice.com/en_us/article/ai-could-resurrect-a-racist-housing-policy

4 – http://theconversation.com/robot-law-what-happens-if-intelligent-machines-commit-crimes-44058

5 – http://www.businessinsider.com/military-capabilities-of-china-investing-in-us-startups-2017-3

6 – https://www.techwalla.com/articles/military-use-artificial-intelligence

Group 22: Guangqiao Zheng, Qi Jin, Xiaochen Mo & George Frankland


12 thoughts on “Are We Ready for a World Dictated by Artificial Intelligence?”

  1. In the era of cyber terrorism, AI surely has a limited application. Obviously, a computer has no free will, and any user that is able to take control, legitimately or otherwise, could potentially create massive problems with no real resistance until the attack has already occurred.

    Until there is a sure-fire way of ensuring that any AI in a position of power is impervious to any sort of ‘hacking’ or unauthorised access from an external source, AI should stay well within the realms of Sci-Fi media.


  2. As science and technology are developing very quickly, solving problems using computers is an unavoidable trend. Artificial intelligence is a tool to save time and increase efficiency, so it is genuinely beneficial to human beings.


  3. It is a shame that Artificial Intelligence does not have the same political backing as other major technological innovations (the Industrial Revolution, electrification, or similar developments). AI and automation would have a beneficial effect on transport, manufacturing, and some elements of skilled professions (such as document scanning).

    One of the major benefits, but one that is not widely supported, would be that it could help introduce a Universal Basic Income.

    Alas time will tell.


  4. Handing the responsibility for decision making in key areas of human society to AI is not something that should ever happen. While there are some areas where AI could have a prominent role, such as finance, where solid facts and figures can be used to give advice, areas that call for human compassion, emotion, or empathy are definitely not suited to AI.


  5. Good points. AI does help us improve efficiency in mass production and smart living. However, it brings many problems as well: will it replace the labour force and thereby increase the unemployment rate?


  6. I am not being cynical, but there is no substitute for a human brain. It can’t be duplicated or cloned. So, while AI has a lot of promise and possibilities, it should be limited to simple tasks. Leave decision-making to humans.


  7. As computing power advances rapidly with computer science at the forefront, we are ready for a world assisted by AI, but definitely not a world dictated by AI. Just as we improved and evolved into human beings, AI continues to develop rapidly in a way that seems to mimic an evolutionary process. I’d like to see a future where we are AI-enhanced rather than AI-dictated, and the ideas in your article are important points of contention.


  8. I think the current stage of AI at an industrial scale is very much comparable to big data 10 years ago. Today, big data is entrenched in our everyday lives as a result of the genius of companies such as Google, Amazon, Uber etc. Throughout human history, wide acceptance of new ideas has always taken place whether we are ready or not. Like any other human discovery, AI is no exception in my opinion.


  9. I think the presumption of this statement is quite interesting: it seems we have acknowledged and accepted that AI will dominate our lives. However, I think it is important to make this clear. Human beings created AI for the purpose of higher efficiency and better service. Under no circumstances should this purpose be changed.

    AI serves people, not the other way around.

    It is true, the presence of Artificial Intelligence may be shaping the way we think, act and function, yet it is crucial for people to understand that we, the human beings, should be the ones who make the ultimate decisions and the ones who bear their consequences. Thus, be vigilant and sharp. Never let AI take away your ability to think critically, act, and make those important decisions.


  10. Brilliant article and quite easy to follow and understand. It really is important to develop AI, as there are numerous examples in the article of scenarios in which human skill can be supplemented with, or replaced by, the use of AI.
    However, with reference to TV shows such as Black Mirror and Humans, and a host of sci-fi movies too, which, though entertaining, really do raise valid concerns about the future effects of AI on people, it seems necessary that AI will need to be limited, in its ability if not in scope. Failsafes ought to be included so that our helpers do not become our oppressors. Again, great read, guys!


  11. Interesting read. What about the issues regarding employment? I feel that automation of factory processes has already resulted in a huge number of job losses, and the implementation of AI machines could feasibly cause a new wave of unemployment as AI systems take over roles that used to belong to sentient human beings. I’d be concerned that AI developments would drive the wealth of the large companies who adopt these techniques, to the detriment of the general public.

