Can a machine ever become self-aware?
Date: Unknown
By: Giorgio Buttazzo
Published: "Artificial Humans, an historical retrospective"
"Los Angeles, year 2029. All stealth bombers are upgraded with neural processors, becoming fully unmanned. One of them, Skynet, begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern Time, August 29th"
This is the view of the future described in James Cameron's film "Terminator 2: Judgment Day". Skynet's self-awareness and its attack against humans mark the beginning of a war between robots and humans, which forms the opening scene of the movie.
Since the early Fifties, science fiction movies have depicted robots as very sophisticated machines built by humans to perform complex operations, to work alongside humans in safety-critical missions in hostile environments, or, more often, to pilot and control spaceships on galactic voyages. At the same time, however, intelligent robots have also been depicted as dangerous machines, capable of plotting against man. The most significant example of a robot with these features is HAL 9000, the main character in Stanley Kubrick and Arthur Clarke's 1968 epic film "2001: A Space Odyssey".
In the movie, HAL controls the entire spaceship, talks amiably with the astronauts, plays chess, renders aesthetic judgments on drawings, and recognises the crew's emotions, but also murders four of the five astronauts to pursue a plan elaborated outside its pre-programmed schemes [Sto 97]. In other science fiction movies, such as The Terminator and The Matrix, the view of the future is even more catastrophic: robots will become intelligent and self-aware and will take over the human race.
In very few movies are robots depicted as reliable assistants that genuinely cooperate with humans rather than conspire against them. In Robert Wise's 1951 film "The Day the Earth Stood Still", Gort is perhaps the first robot (extraterrestrial, in this case) who supports Klaatu in his mission to deliver a message to humanity.
Also in Aliens (the second instalment of the successful series, directed by James Cameron in 1986), Bishop is a synthetic android whose purpose is to pilot the spaceship during the mission and protect the crew. Unlike HAL and his android predecessor (encountered in the first Alien film), Bishop is not affected by malfunctions, and he remains faithful to his duty until the end of the movie. Recall one of the final scenes, in which Bishop, his body torn in two after the fight with the alien creature, still works to save Ellen Ripley (Sigourney Weaver), offering his hand to keep her from being sucked out of the ship. Finally, a positive sign of optimism in science and technology from James Cameron.
RoboCop (Paul Verhoeven, 1987) is likewise a dependable robot who cooperates with humans in law enforcement, although he is not a fully robotic system (like the Terminator) but a hybrid cybernetic/biological organism (a cyborg), made by integrating biological parts with artificial components.
The dual connotation often attributed to science fiction robots is a clear expression of the desire and fear that man feels towards his own technology. On the one hand, man projects onto the robot his irrepressible desire for immortality, embodied in a powerful and indestructible artificial being whose intellectual, sensory, and motor capabilities far exceed those of a normal man. On the other hand, there is the fear that an overly advanced technology (almost mysterious to most people) could get out of control and act against man (see Frankenstein, HAL 9000, the Terminator, and the robots in The Matrix). The positronic brain adopted by Isaac Asimov's robots springs from the same feeling: it was the result of a technology so advanced that nobody knew its low-level details any more, although its construction process was fully automated [Asi 68].
Recent progress in computer technology has strongly influenced the features of new science fiction robots. For example, the theories of connectionism and artificial neural networks (aimed at replicating some of the processing mechanisms typical of the human brain) inspired the Terminator robot, who is not only intelligent but can also learn from past experience.
In the movie, the Terminator represents the prototype of the imaginary robot. He can walk, talk, perceive, and behave like a human being. His power cell supplies energy for 120 years, and an alternate power circuit provides fault tolerance in case of damage. But, most important of all, the Terminator can learn! He is controlled by a neural-net processor, a computer that can modify its behaviour based on past experience.
What makes the movie more intriguing, from a philosophical point of view, is that this neural processor is so complex that it begins to learn at an exponential rate and, after a while, becomes self-aware! In this sense, the movie raises an important question about artificial consciousness:
"Can ever a machine become self-aware?"
Before answering this question, we should perhaps ask: "How can we verify that an intelligent being is self-conscious?". In 1950, the computer science pioneer Alan Turing posed a similar problem, though concerning intelligence, and, in order to establish whether a machine can be considered as intelligent as a human, he proposed a famous test, now known as the Turing test: there are two keyboards, one connected to a computer, the other to a person. An examiner types in questions on any topic he likes; both the computer and the human type back responses, which the examiner reads on the respective screens. If he cannot reliably determine which respondent was the person and which the machine, then we say the machine has passed the Turing test. Today, no computer can pass the Turing test, unless we restrict the interaction to very specific topics, such as chess.
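To make the protocol concrete, here is a minimal sketch of such a blind session in Python. The examiner, the questions, and the two respondents are hypothetical stand-ins supplied by the caller, not an actual implementation of Turing's experiment:

    import random

    def turing_test(examiner_guess, human_reply, machine_reply, questions):
        """Run one blind session; return True if the machine fooled the examiner."""
        # Hide the identities: randomly assign the two respondents to channels A and B.
        respondents = [human_reply, machine_reply]
        random.shuffle(respondents)
        labelled = dict(zip("AB", respondents))
        # The examiner sees only the typed transcripts of the two channels.
        transcript = {label: [(q, reply(q)) for q in questions]
                      for label, reply in labelled.items()}
        guess = examiner_guess(transcript)  # the channel the examiner believes is the machine
        actual = next(label for label, reply in labelled.items() if reply is machine_reply)
        return guess != actual              # the machine passes if it was misidentified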
On May 11, 1997 (3:00 p.m. Eastern Time), for the first time in history, a computer named Deep Blue beat the world chess champion, Garry Kasparov, 3.5 to 2.5. Like all current computers, however, Deep Blue does not understand chess: it merely applies rules to find a move that leads to a better position, according to an evaluation criterion programmed by chess experts. Thus, if we accept Turing's view, we can say that Deep Blue plays chess in an intelligent way, but we can also claim that it does not understand the meaning of its moves, just as a television set does not understand the meaning of the images it displays.
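A toy sketch may clarify the distinction: the fragment below mechanically picks whichever move yields the highest score under a hand-programmed evaluation function (here, plain material count). The position encoding and weights are illustrative placeholders, not Deep Blue's actual algorithm:

    PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # classic material weights

    def evaluate(position):
        """Score a position as material balance: own pieces minus the opponent's."""
        return (sum(PIECE_VALUE[p] for p in position["own"])
                - sum(PIECE_VALUE[p] for p in position["theirs"]))

    def best_move(position, legal_moves, apply_move):
        """Return the legal move leading to the highest-scoring resulting position."""
        return max(legal_moves(position),
                   key=lambda m: evaluate(apply_move(position, m)))

    # Toy usage: of two candidate "moves", the rule blindly prefers the queen capture.
    pos = {"own": ["Q", "R"], "theirs": ["Q", "P"]}
    outcomes = {"capture_queen": {"own": ["Q", "R"], "theirs": ["P"]},
                "capture_pawn":  {"own": ["Q", "R"], "theirs": ["Q"]}}
    print(best_move(pos, lambda p: outcomes.keys(), lambda p, m: outcomes[m]))

The program "prefers" the queen capture only because a number is larger, which is precisely the sense in which such a machine plays well without understanding.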
The problem of verifying whether an intelligent being is self-conscious is even more complex. In fact, while intelligence can be the expression of an external behaviour that can be measured by specific tests, self-consciousness is the expression of an internal brain state that cannot be measured.
From a purely philosophical point of view, it is not possible to verify the presence of consciousness in another brain (whether human or artificial), because this is a property that can be verified only by its possessor. Since we cannot enter another being's mind, we cannot be sure about its consciousness. This problem is discussed in depth by Douglas Hofstadter and Daniel Dennett in their book The Mind's I [Hof 85].
From a pragmatic point of view, however, we could follow Turing's approach and say that a being can be considered self-conscious if it is able to convince us of it by passing specific tests. Moreover, among humans, the belief that another person is self-conscious also rests on considerations of similarity: since we have the same organs and a similar brain, it is reasonable to believe that the person in front of us is self-conscious too. Who would question their best friend's consciousness? Nevertheless, if the creature in front of us, although behaving like a human, were made of synthetic tissues, mechatronic organs, and neural processors, our conclusion would perhaps be different.
With the emergence of artificial neural networks, the problem of artificial consciousness becomes even more intriguing, because neural networks replicate the basic electrical behaviour of the brain and provide a suitable substrate for realising a processing mechanism similar to the one adopted by the brain. In his book "Impossible Minds", Igor Aleksander [Ale 97] addresses this topic with depth and scientific rigour.
Although everybody agrees that a computer based on classical processing paradigms can never become self-aware, can we say the same of a neural network? Once the structural diversity between biological and artificial brains is removed, the issue of artificial consciousness can only become a religious one. In other words, if we believe that human consciousness is determined by divine intervention, then clearly no artificial system can ever become self-aware. If instead we believe that human consciousness is an electrical neural state spontaneously developed by sufficiently complex brains, then the possibility of realising an artificial self-aware being remains open. If we support the hypothesis of consciousness as a physical property of the brain, the question becomes:
"When will a computer become self-aware?"
Attempting to provide even a rough answer to this question is hazardous. Nevertheless, it is possible to determine at least a necessary condition, without which a machine cannot develop self-awareness. The idea is based on the simple consideration that, to develop self-awareness, a neural network must be at least as complex as the human brain.
The human brain has about 10^12 neurons, and each neuron makes about 10^3 connections (synapses) with other neurons on average, for a total of about 10^15 synapses. In an artificial neural network, a synapse can be simulated by a floating-point number, which requires 4 bytes of memory to be represented in a computer. Consequently, simulating 10^15 synapses requires a total of 4 x 10^15 bytes (4 million gigabytes). Let us say that simulating the whole human brain requires 8 million gigabytes, including the auxiliary variables that store neuron outputs and other internal brain states. Our question then becomes:
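The arithmetic is easy to reproduce; the short Python fragment below simply restates the estimate above (the factor of two for auxiliary state is the article's own round figure):

    neurons = 10**12                    # neurons in the human brain
    fan_out = 10**3                     # average synapses per neuron
    synapses = neurons * fan_out        # 10**15 synapses in total
    weight_bytes = 4 * synapses         # one 4-byte float per synaptic weight
    total_bytes = 2 * weight_bytes      # doubled for neuron outputs and other state
    print(f"{total_bytes:.0e} bytes")   # 8e+15, i.e. 8 million gigabytes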
"When will such a memory be available in a computer?"
Over the last 20 years, RAM capacity has increased exponentially, by a factor of 10 every 4 years. By interpolation, we can derive the following equation, which gives the typical RAM size (in bytes) as a function of the year:
bytes = 10^((year - 1966)/4)
For example, from the equation we find that in 1990 a personal computer was typically equipped with 1 Mbyte of RAM, that in 1998 a typical configuration had 100 Mbytes of RAM, and so on.
By inverting the relation above, we can predict the year in which a computer will be equipped with a given amount of memory (assuming RAM continues to grow at the same rate):
year = 1966 + 4 log10(bytes)
Now, to find the year in which a computer will be equipped with 8 million gigabytes of RAM, we just substitute that number into the equation above. The answer is:
year = 2029
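As a check, a few lines of Python reproduce both the interpolation and its inversion, using the article's own fit (the 1966 intercept and the factor of 10 every 4 years):

    import math

    def ram_bytes(year):
        """Typical PC RAM (bytes) in a given year, per the interpolated trend."""
        return 10 ** ((year - 1966) / 4)

    def year_for(bytes_needed):
        """Invert the trend: the year a typical PC reaches the given RAM size."""
        return 1966 + 4 * math.log10(bytes_needed)

    print(ram_bytes(1990))       # 1e6  -> 1 Mbyte, as in the text
    print(ram_bytes(1998))       # 1e8  -> 100 Mbytes
    print(year_for(8 * 10**15))  # ~2029.6, i.e. the year 2029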
An interesting coincidence with the date predicted in The Terminator.
To fully understand the meaning of this result, some considerations are in order. First of all, it is worth recalling that the computed date refers only to a necessary, but not sufficient, condition for the development of an artificial consciousness. This means that the existence of a powerful computer equipped with millions of gigabytes of RAM is not, by itself, enough to guarantee that it will magically become self-aware. There are other important factors influencing this process, such as progress in the theories of artificial neural networks and in the basic biological mechanisms of the mind, for which it is impossible to attempt precise estimates. Furthermore, one could argue that the computation above was done for personal computers, which do not represent the cutting edge of the field. Others could object that the same amount of RAM could be made available by a network of computers, or by virtual memory mechanisms that exploit hard disk space. In any case, even if we adopt different numbers, the basic principle of the computation stays the same, and the date could only be brought forward by a few years.
Finally, after such a long discussion of artificial consciousness, one could ask:
"Why building a self-aware machine?"
Ethical issues aside (and they would significantly influence progress in this field), the strongest motivation would certainly come from the innate human desire to discover new horizons and enlarge the frontiers of science. Moreover, developing an artificial brain based on the same principles used by the biological brain would provide a way of transferring our minds onto a faster and more robust substrate, opening a door towards immortality. Freed from a fragile and degradable body, human beings with synthetic organs (including the brain) could represent the next evolutionary step of the human race. Such a new species, the natural result of human technological progress (not the design of a dictatorship), could begin the exploration of the universe, search for alien civilisations, survive the death of the solar system, control the energy of black holes, and move at the speed of light by transmitting to other planets the information necessary for its replication.
Indeed, the exploration of space in search of intelligent civilisations already began in 1972, when the Pioneer 10 spacecraft was launched to leave our solar system with this specific purpose, carrying information about the human race and planet Earth. As with all important human discoveries, from nuclear energy to the atomic bomb, from genetic engineering to human cloning, the real problem has been, and will be, keeping technology under control and making sure that it is used for human progress, not for catastrophic aims. In this sense, the message delivered by Klaatu in the 1951 film "The Day the Earth Stood Still" is still the most timely!
References:
[Ale 97] Igor Aleksander, "Impossible Minds: My Neurons, My Consciousness", World Scientific Publishers, October 1997.
[Asi 68] Isaac Asimov, "I, Robot" (a collection of short stories originally published between 1940 and 1950), Grafton Books, London, 1968.
[Hof 85] Douglas R. Hofstadter and Daniel C. Dennett, "The Mind's I", Bantam Books, 1985.
[Sto 97] David G. Stork (Ed.), "HAL's Legacy: 2001's Computer as Dream and Reality", foreword by Arthur C. Clarke, MIT Press, 1997.