
Conscious AI: Philosophical Issues

Updated: Oct 10, 2022

Many AI researchers predict that computers will achieve human-level intelligence within the next few decades. But calling a technology "intelligent" is a pretty vague statement. Do we mean, for example, that a computer can perform the same functions as a human? Or are we saying that a computer is conscious in the same way that a human feels emotions and has a sense of self? These are some of the biggest unanswered questions in philosophical circles.


This blog post reviews some of the popular theories about whether computers can be conscious, and then invites you to develop your own perspective. If you would like to go more in-depth into how people argue for these ideas, see the "further reading" list at the end.


First and foremost, are computers "intelligent"?


For a computer, "intelligence" can be defined as the ability to perform tasks autonomously, without a human telling it exactly what to do. Machine learning works this way: an algorithm is fed some type of data and classifies it based on the patterns it has already seen.


There are two basic types of AI: Narrow AI and Artificial General Intelligence (AGI).


Narrow AI refers to computer systems that are better than humans at one super-specific job (such as playing chess, generating images, or diagnosing cancer). This is the type of AI we already have on a large and powerful scale. In fact, the chess computer Deep Blue beat world champion Garry Kasparov back in 1997! But it has its limitations. If you showed an image of a walrus to a program designed to differentiate between images of cats and dogs, it would still answer "cat" or "dog", because those are the only categories it knows.
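To make the walrus problem concrete, here is a toy nearest-neighbour classifier in Python. The two made-up features (say, ear pointiness and snout length) and the numbers are purely illustrative assumptions, not a real vision model; the point is that the classifier can only ever answer with a label it was trained on.

```python
def nearest_label(sample, training_data):
    """Return the label of the training example closest to `sample`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(training_data, key=lambda item: sq_dist(item[0], sample))
    return label

# Hypothetical 2-D features: (ear pointiness, snout length).
training_data = [
    ((0.9, 0.2), "cat"), ((0.8, 0.3), "cat"),
    ((0.2, 0.9), "dog"), ((0.3, 0.8), "dog"),
]

print(nearest_label((0.85, 0.25), training_data))  # a cat-like sample -> "cat"
print(nearest_label((0.10, 0.20), training_data))  # a "walrus": still forced into "cat" or "dog"
```

However walrus-like the input is, the program has no way to say "I don't know" unless someone explicitly builds that in.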


AGI, on the other hand, is closer to what we imagine AI to be like in science fiction. These computers would be able to learn anything humans can through observation; a perfect one would be able to act just like a person. We don't have computers like this yet, but we do have a well-known method for identifying them.


In 1950, Alan Turing proposed the Turing test to determine whether a computer is intelligent. If the output produced by a computer is indistinguishable from that of a human, then that computer is intelligent.


"Turing test diagram", by Juan Alberto Sánchez Margallo, licensed under CC BY 2.5

In the diagram above, computer A and person B each write a piece of text, which person C then reads. If person C can't tell which was written by the computer, then computer A is considered intelligent.


Will computers ever be conscious?


If something can form its own subjective view of an experience, it is conscious. For example, we can all experience the same event, like being at the beach, but we'll each have different opinions and reactions to the crashing waves or hot sand.


Image by Camille Minouflet

Consciousness is the thought of "what it's like" to feel an experience. But the question is, what causes consciousness in our brains? If we are able to know that, we'll be much better equipped to replicate consciousness in computers.


The two biggest theories about how consciousness arises are the Global Neuronal Workspace (GNW) theory and Integrated Information Theory (IIT).


The GNW theory states that consciousness happens when the brain receives some kind of signal, like a sight or a sound, and that information is then broadcast to all of the brain's other parts. For example, if you saw another person, those visual cues would be sent throughout your brain: to long-term memory, to evaluative systems, and then to the motor systems that carry out the action you want to take. This theory treats consciousness as the brain's flexibility in the functions it carries out, which may be easier to think about in terms of AI.


"Global workspace setup", by Drking1234, licensed under CC BY-SA 4.0

In contrast, processes like your heartbeat are non-conscious because no amount of thinking can directly make your heartbeat start or stop.


The GNW theory offers an explanation of how consciousness happens, but it's still unclear what exactly goes on in the connection mechanism (the middle of the diagram above) that causes us to think and feel like people. If we could program a computer to adaptively perform all of the functions of a human brain, then under the GNW theory that computer could be conscious.
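As a rough illustration of the broadcast idea (not a model of the brain, and all module names here are made up), a global workspace can be sketched as a hub that shares one incoming signal with every registered module:

```python
class GlobalWorkspace:
    """A toy hub that broadcasts one signal to all registered modules."""

    def __init__(self):
        self.modules = {}

    def register(self, name, handler):
        # Each module is just a function that reacts to a broadcast signal.
        self.modules[name] = handler

    def broadcast(self, signal):
        # Send the same signal to every module and collect their responses.
        return {name: handler(signal) for name, handler in self.modules.items()}

workspace = GlobalWorkspace()
workspace.register("memory", lambda s: f"recall events involving {s}")
workspace.register("evaluation", lambda s: f"decide how to react to {s}")
workspace.register("motor", lambda s: f"prepare an action toward {s}")

print(workspace.broadcast("a familiar face"))
```

In this sketch, "consciousness" would correspond to the moment a signal wins access to the workspace and gets shared globally, rather than to anything one module does on its own.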


But, even if a computer could learn to do everything we can do, doesn't that just make it a really good AGI? Sure, the computer may be indistinguishable from a human, but can it actually feel emotions or have morals? That's where IIT comes in.


IIT considers consciousness to be something unique to the human brain because of the way the brain is physically structured.


To explain what consciousness is using IIT, imagine you were in the room pictured below:



Your senses would be picking up a wide variety of information, like the plants outside, the color of the walls and bed, the framed picture on the left, or any noise or smells. Additionally, you'd perceive even deeper information, for example if anything in the scene reminds you of any other room or object that you remember.


According to IIT, consciousness is the integration of all these pieces of perceived information into one complete picture or feeling. If you took any one piece of information out of the scene, the experience would be completely different.


Of course, both theories leave a lot of ambiguity and aren't super intuitive, which makes it difficult for philosophers and scientists to have one concrete explanation as to what creates consciousness, and even less so for recreating it in a machine.


Personally, I feel that the GNW theory is a little easier to understand than IIT and may be better suited for duplicating in AI. Machines are capable of sensory input through cameras, microphones, and the like. They can store memory, short-term in RAM or long-term on disk, and can compare what they are sensing to what they have sensed in the past. What computers don't have, as of now, are thoughts of their own.


Based on what we know computers are capable of doing, most experts don't believe that computers could ever be conscious. We know that they can be intelligent, but that isn't the same as being conscious.


For instance, if your pet passed away, you'd intrinsically feel grief and anguish. An AI program experiencing the same thing would recognize that the pet has died, infer from past data that people normally feel sad, and therefore display sadness. It wouldn't feel anything on the inside, though.


Another example of computers being intelligent but not conscious involves morality. A human would be angry or outraged if they were morally wronged, but a computer would only be able to recognize the situation and identify an appropriate response; it would not feel wronged the way a human would.


Image by Possessed Photography

Although conscious computers don't seem possible with the technology we have or know could exist, that doesn't mean they're impossible. People centuries or even decades ago probably wouldn't have imagined the internet or AI ever existing. You never know what can happen.


Questions to think about:


Does it matter if AI is conscious or not, if its behavior is indistinguishable from that of humans?


Would conscious AI be dangerous to humans?


Further Reading


Chalmers, D. (2007). The hard problem of consciousness. In M. Velmans & S. Schneider (Eds.), The Blackwell companion to consciousness (pp. 225–235). Blackwell Publishing.


Haugeland, J. (1979). Understanding Natural Language. The Journal of Philosophy, 76(11), 619–632. https://doi.org/10.2307/2025695


Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450. https://doi.org/10.2307/2183914


Searle, J. (1990, January). Is the Brain's Mind a Computer Program? Scientific American, 262(1), 26–31.

