Artificial Intelligence: Ethics, existential risk and apparent consciousness

Alex Charlesworth.

The field of artificial intelligence (AI) was founded in 1956[1], and AI is now used all around us, from satellite navigation to internet search engines. AI is an integral part of modern society, and great efforts are being made to improve it significantly. We have created AI that is incredibly proficient at narrow tasks like playing chess or doing calculus; however, we have not made an AI that is intelligent across the board and can perform any intellectual task that a human is capable of, such as abstract thinking and planning. This level of intelligence is called artificial general intelligence (AGI) and is far more complicated to create. “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’” – Computer scientist Donald Knuth[2].

In order to go from AI to AGI, computing power needs to increase to the point where an AI can complete a similar number of calculations per second (cps) to the human brain, which is approximately 10^16 cps[3]. This level of computing power is currently available only if you have access to the world’s fastest supercomputer, but it should become widely available in the near future thanks to the law of accelerating returns[4] and Moore’s law[5]. Once computing power is no longer a problem, some leading researchers think that AGI could be achieved by giving an AI two key capabilities: the ability to do AI research, and the ability to alter its own code. If successfully carried out, the result could be a large and sudden increase in the AI’s intelligence.
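To get a feel for the timescale, here is a minimal back-of-the-envelope sketch in Python. Only the 10^16 cps target comes from the article; the starting compute figure and the doubling period are illustrative assumptions, not measured values.

```python
import math

# Back-of-the-envelope sketch: years until affordable hardware reaches the
# brain's ~10^16 cps, assuming Moore's-law-style doubling. The starting
# figure and doubling period are illustrative assumptions, not measurements.

TARGET_CPS = 1e16        # approximate compute of the human brain [3]
current_cps = 1e13       # assumed cps of affordable hardware today (illustrative)
doubling_years = 1.5     # assumed doubling time in years (Moore's-law-like)

doublings_needed = math.log2(TARGET_CPS / current_cps)
years_to_target = doublings_needed * doubling_years
print(f"~{doublings_needed:.1f} doublings needed, roughly {years_to_target:.0f} years")
```

With these assumed figures, roughly ten doublings, or about fifteen years, separate affordable hardware from brain-scale compute; different assumptions shift the date but not the shape of the argument.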

As the AI system undergoes recursive self-improvement[6], it can swiftly change from AI to AGI. After this comes an intelligence explosion, which transforms the AGI into an artificial superintelligence (ASI). This is the type of AI that people such as Elon Musk[7] are especially worried about, as it carries an existential risk. An AI pursues its original aim with no regard for any moral code or set of ethics, and once connected to the internet it has the vast wealth of human knowledge with which to teach itself. Because of the number of calculations it can make per second, and because it can function constantly, an ASI can become orders of magnitude smarter than humans extremely quickly. Essentially, once we create an ASI there is no way of going back to life without it. The problem comes when an ASI’s original aim is coded recklessly. For example, if an ASI’s original aim were to make humans happy, as it grew smarter it might deduce that humans are happiest when they are captured, detained, and having their dopamine receptors constantly stimulated. In my opinion, this wouldn’t go down too well with the human race, and it shows that coding an AI’s original aim should be taken very seriously.
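The “no way back” dynamic comes from the feedback loop itself: a system that improves its own ability to improve grows faster than exponentially. The following toy simulation is a sketch of that idea under arbitrary assumed parameters, not a model of any real system.

```python
# Toy model of an intelligence explosion: each cycle, the system's gain is
# proportional to a super-linear power of its current capability, so growth
# accelerates on itself. All numbers are illustrative, not predictions.

capability = 1.0          # arbitrary starting "intelligence" units
gain_rate = 0.1           # assumed improvement per cycle, scaled by capability

for cycle in range(1, 41):
    capability += gain_rate * capability ** 1.5  # super-linear feedback loop
    if capability > 1e12:
        print(f"cycle {cycle:2d}: capability = {capability:.2e} -- runaway growth")
        break
    if cycle % 5 == 0:
        print(f"cycle {cycle:2d}: capability = {capability:.2e}")
```

Capability creeps along for a couple of dozen cycles and then runs away within a few more, which is why the original aim matters so much: there may be no opportunity to correct it once the explosion begins.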

Other than the obvious risks of artificial intelligence, there are many ethical quandaries with regard to the subject, too. One of the main concerns is AI consciousness. If an AI became sufficiently smart, would it be conscious, or would it merely have apparent consciousness? John Searle’s ‘Chinese room’[8] thought experiment refutes the idea that AI has consciousness on the grounds that the program will simply be interpreting data, not understanding it – no matter how intelligent it is. Against this, suppose a brain is scanned layer by layer and an accurate model of it is acquired. When the model is implemented on a powerful computer with such accuracy that the brain’s personality and memory are intact, would the computer wake up as the previous owner of the brain? If the brain’s neurones are analogous to transistors in a computer, then it is difficult to argue that the AI wouldn’t have the personality and memories of the previous owner. John Locke’s memory theory[9] states that what makes a person is their memory and experiences – so surely this new AI would be the same person that previously owned the brain, just in a different form. In that case, does the AI now have emotion and feeling, or is it simply an AI like before, only capable of processing data and outputting appropriate answers? This is a question which cannot currently be answered, but both sides have their merit.

Whether it is ethical to shut down an AI is also a tricky question. Is AI similar enough to human life that turning it off could constitute murder? An important factor here is whether an AI actually has conscious thought – as discussed in the previous paragraph – but also whether it feels. If it is decided that an AGI has conscious thought and can feel emotions such as sadness, surely it should get many of the same rights a human would? The question of whether it is even ethical to code a machine to have feelings also arises, as it could be seen as playing God.

Existential risk aside, AI is already projected to push the unemployment rate in the United States to 50%[10] within thirty years, as simple jobs such as café work and retail assistance are taken over by artificial intelligence. Around 10%[11] of all US jobs involve driving a vehicle, a task an AI could readily perform. Smarter computers also mean that many mid-paying jobs involving data entry and number crunching will be replaced; approximately 7 million business and financial operations jobs could be lost in the United States alone. If we do not prepare for this situation appropriately then, just as the Industrial Revolution was swiftly followed by the Russian and Chinese revolutions, there could be major negative implications for the modern world. Hopefully we can learn our lessons from the past and avoid the potentially grim future we are on track for.
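For a sense of scale, here is a quick calculation using the article’s percentages and an assumed US workforce of roughly 150 million (my assumption, for illustration only):

```python
# Rough scale of the displacement described above. The workforce figure is
# an assumption for illustration; the percentages come from the article.

us_workforce = 150_000_000       # assumed US workforce size (illustrative)
driving_share = 0.10             # ~10% of US jobs involve driving [11]
unemployment_projection = 0.50   # projected unemployment within 30 years [10]

print(f"Driving jobs exposed to automation: ~{us_workforce * driving_share:,.0f}")
print(f"Workers unemployed at a 50% rate:   ~{us_workforce * unemployment_projection:,.0f}")
```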


[1] http://aitopics.org/misc/brief-history

[2] http://www.sfgate.com/science/article/Why-artificial-intelligence-always-seems-so-far-6033907.php

[3] http://www.merkle.com/brainLimits.html

[4] http://www.kurzweilai.net/the-law-of-accelerating-returns

[5] http://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html

[6] http://io9.gizmodo.com/how-artificial-superintelligence-will-give-birth-to-its-1609547174

[7] http://motherboard.vice.com/read/elon-musk-on-superintelligent-robots-well-be-lucky-if-they-enslave-us-as-pets

[8] https://en.wikipedia.org/wiki/Chinese_room

[9] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3115296/

[10] https://www.cnet.com/news/robots-could-make-half-the-world-unemployed-in-30-years-says-prof/

[11] https://www.ft.com/content/063c1176-d29a-11e5-969e-9d801cf5e15b
