Neville Huang

Consciousness – An obstacle to the understanding of intelligence.



In 2017, researchers at DeepMind created AlphaZero, a single program that masters three distinct board games: chess, shogi and Go. To be fair, AlphaZero isn’t the first algorithm that can beat humans at a board game. However, it is the first model to master three games at once, performing far better than professional human players as well as every algorithm in existence at the time. As a matter of fact, AlphaZero is not merely good at these games. People who watched it play described the deep reinforcement learning algorithm as “toying” with its opponents and “tricking” them, even causing one of them (i.e., Stockfish) to crash during a game. This led some observers to believe that AlphaZero is insightful, and that it has actually “understood” the nature of chess, shogi and Go.


But what exactly is “understanding”, and how can we tell whether an artificial system like AlphaZero has it? These are not easy questions to answer. First of all, the concept is poorly defined, and without clear criteria for “understanding”, there is no way to tell whether a system possesses comprehension.


To make things more baffling, “understanding” can be totally silent, meaning that it is not necessarily accompanied by any explicit behavior. When words reach your eyes as visual stimuli, for example, your brain automatically grasps their meanings. You can be just sitting there, doing absolutely nothing, yet still know that “you’ve got it”.


If the questions above keep you wondering, then you’ve come to the right place. In this article, we’ll offer our own opinions on how to define “understanding” and how to determine whether a system has it. And as you’re about to see, the devil that keeps us from understanding “understanding” itself is the notoriously intractable phenomenon known in psychology as consciousness.



The easy & hard problems of consciousness


One of the reasons consciousness is so puzzling is that the term incorporates too many things. In fact, there are at least two common usages of the term, each corresponding to a completely different meaning. They are described below:


  • Example 1: The person regained consciousness.


This usage is common in medical settings, and it indicates that the person is alert to his or her surroundings and can respond sensibly to stimuli.


  • Example 2: I had no consciousness of the pain in my leg.


This usage appears not only in medicine but also in scientific studies. It signifies that the individual is feeling or aware of something; such feeling or awareness is usually called “conscious experience” in psychology, or “qualia” in philosophy. Note that conscious experiences can be low- or high-level. Low-level conscious experiences, such as the seeing of the color red or the feeling of a tickle, correspond to specific physical signals from the outside world (mechanical forces, light waves, sound waves and so on), while high-level conscious experiences, such as the “sense of guilt” or the “sense of knowing”, lack an obvious physical correspondent.


As you can see, the first usage of the term concerns the neural mechanisms in our cerebrum that realize our mental functions (since we can only respond to our surroundings properly if our brain is functioning normally), while the second usage relates to an elusive phenomenon (i.e., our feelings) that is still poorly understood. Accordingly, we can separate the problems of consciousness into two distinct groups, known as the easy problems (corresponding to the first usage) and the hard problems (corresponding to the second usage) of consciousness. Figure 1 explains further what these two types of problems cover:


Figure 1. The easy and hard problems of consciousness. This distinction was first proposed by the Australian philosopher David Chalmers in his 1995 paper “Facing Up to the Problem of Consciousness”.

In short, we must keep in mind that when speaking of consciousness, we are actually talking about two things. The first is neural mechanisms, which can be studied through typical scientific methods (hence easy problems), and the second is conscious experiences, or qualia, which are very difficult, if not impossible, to explore with current experimental approaches (hence hard problems).



An operational definition of conscious experience


Now that the two components of consciousness have been identified, we can work on giving them an even clearer definition for the discussion ahead. Since the first component (i.e., neural mechanisms) is rather straightforward, this section will focus mainly on the second component, conscious experience. Also, keep in mind that the definition discussed here is operational rather than conceptual, meaning that it gives a practical hint about how to tell what is conscious experience and what is not (please refer to this article if you’re unfamiliar with operational definitions).


So, how can we define the term? To us, conscious experience is something that cannot be understood by any means unless one experiences it oneself. For instance, imagine you’re attempting to explain what “red” looks like to an individual who has never seen the color. In this case, the only thing you can do is say something like “it’s the color of a strawberry” or “it’s like seeing a tomato”. However, if no red object is currently available to the individual, or the individual is colorblind, there is no way that he or she can truly know what the color red looks like. In contrast, imagine that you’d like to explain what the “Duhem–Quine thesis” means to others; this can be done entirely through verbal description.


With this operational definition in mind, we can get a deeper grasp of the difference between the easy and hard problems of consciousness. That is, the easy problems can be explained through words (e.g., in a written paper) since they are unrelated to conscious experience, while the hard problems are inexplicable in human language because they are all about conscious experience.



Resolving the Chinese Room Argument


Based on our previous discussions, this section expounds why consciousness, or conscious experience to be exact, is an obstacle to understanding intelligence. This can be done nicely by considering the Chinese Room argument, proposed by the American philosopher John Searle in his 1980 paper “Minds, Brains, and Programs”.


The thought experiment involves the following scenario: a man who does not understand Chinese is sitting in a room. His job is to translate English sentences handed to him through a window into Chinese, and then output the results through another window. Because the man has no comprehension of Chinese, an English–Chinese bilingual outside the room writes him a book with detailed instructions on how to perform the translation, so that the man in the room can do his job smoothly. Furthermore, given that the instructions in the book are perfect, the man can not only complete the task by simply following what the book says, but also produce results so flawless that people outside believe the man in the room is a native Chinese speaker.


The above scenario can be perceived as a metaphor of an artificial-intelligent system: The man in the room represents a computer, the book of instructions is a computer program, and the bilingual stands for a computer programmer. The fact that people think the man in the room understands Chinese symbolizes that the computer has passed the Turing test. Now, please note that no matter how good the man in the room is in doing his job, it does not change the reality that he has absolutely no understanding of Chinese. And because of this, Searle argues that the computer functioning according to the program, as represented by the man in the room, does not have “understanding” of Chinese.
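To make the metaphor concrete, here is a minimal sketch in Python of the room reduced to a program. The rule book, the sentences and the RULE_BOOK table are all hypothetical simplifications invented for illustration; the point is only that producing the correct output requires no grasp of what the symbols mean:

    # Hypothetical sketch: the "book of instructions" reduced to a lookup
    # table, and the "man" reduced to a procedure that applies it mechanically.
    RULE_BOOK = {
        "Hello": "你好",
        "Thank you": "谢谢",
        "Goodbye": "再见",
    }

    def man_in_the_room(english_sentence: str) -> str:
        # Apply the rule book symbol-for-symbol, with no comprehension.
        return RULE_BOOK.get(english_sentence, "?")

    # To observers outside, the output is indistinguishable from that of
    # someone who understands Chinese.
    print(man_in_the_room("Hello"))      # -> 你好
    print(man_in_the_room("Thank you"))  # -> 谢谢

No matter how large the rule book grows, the lookup itself never involves the meanings of the sentences, which is precisely Searle’s point.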


So, does the computer in the Chinese Room argument know Chinese? We believe that to answer the question, we must first recognize the following key point: John Searle treats the conscious experience of understanding (i.e., the feeling that makes one realize he or she has comprehended something) as the only criterion for determining whether a system “understands” something. In other words, to say that a system understands a matter, we must ensure that the system is aware that it understands that matter. To Searle, simply proving that the system has knowledge of the matter (by outputting valid results through the use of proper mechanisms) is insufficient to justify its understanding.


However, as we’ve discussed previously, many of our mental functions, including understanding, consist of two components: neural mechanisms and conscious experience, and in our opinion it’s unfair to make the latter the only criterion for judging “understanding”. To further explain our argument, consider the following scenario: in order to complete his translations without using the book of instructions, the man in the room takes an online course to master Chinese. However, without realizing it, the so-called “Chinese” he has learned is actually Japanese, which also uses Chinese characters in its writing system (i.e., kanji). In this case, the man does have the sense of understanding Chinese (because he doesn’t know that he has learned the wrong language), but his knowledge of Chinese is incorrect. Now, would you say the man “understands” Chinese in this case? We believe the answer should be a definite “no”, and from this example one should see that having the conscious experience of understanding is actually not very good evidence that a system understands something (Figure 2).


Figure 2. Among the four systems shown in this figure, which of them truly understands Chinese in your opinion? Most people are prone to choose (a) and (c), and this result shows that “conscious experience” is not a very good indicator of “understanding”.

All in all, when considering whether a system is intelligent, people are often confused by “conscious experience”. Just as in the Chinese Room argument, people tend to emphasize the importance of “the sense of knowing”, even though it is actually a poor indicator of “understanding”. As a matter of fact, to us, a smartphone that can carry out the appropriate functions according to the user’s inputs is sufficient to be considered as having comprehension (i.e., it understands the user’s commands). If you find this claim hard to accept, please realize that what the device lacks are “conscious experience” and “self-awareness”, not “intelligence”!
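To illustrate this operational sense of “understanding”, here is another hypothetical Python sketch of a device mapping a user’s input to the intended action; the commands and responses below are invented for illustration:

    # Hypothetical command dispatcher: the device "understands" a command,
    # in the operational sense argued above, if it reliably maps the input
    # to the right action -- no qualia required.
    COMMANDS = {
        "set alarm": lambda: print("Alarm set for 7:00"),
        "play music": lambda: print("Playing music"),
    }

    def handle(user_input: str) -> None:
        action = COMMANDS.get(user_input.lower().strip())
        if action is not None:
            action()  # carries out the user's intent
        else:
            print("Sorry, I didn't get that.")

    handle("Set alarm")   # -> Alarm set for 7:00
    handle("take photo")  # -> Sorry, I didn't get that.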



Footnote: Neurozo Innovation provides viewpoints, knowledge and strategies to help you succeed in your quest. If you have any questions for us, please feel free to leave a comment below, or e-mail us at: neville@neurozo-innovation.com. For more articles like this, please join our free membership. Thank you very much for your time, and we wish you a wonderful day!




