Only philosophers worry about zombies. The “philosophical zombie” is a thought experiment. Imagine a hypothetical being like ourselves in every observable way except one: the philosophical zombie lacks a mind. If the zombie’s behavior and language performance were no different from what we would expect of a real person, how on earth could we ever tell the difference between a real person with a mind and a zombie devoid of inner experience? We couldn’t.
It is fortunate that zombies in the movies always stagger with outstretched arms and blood on their mouths, because that helps us identify them as zombies. If they behaved more normally (as in the recent movie “Fido,” for example, or in the original “Invasion of the Body Snatchers”), we would have difficulty discriminating them from real people. If a zombie acted completely normally, how would we know it had no mind?
The puzzle of the philosophical zombie may seem silly at first, but when you think about it more, you realize it is really just a way to pose a more urgent question: How do we know that any other person has a mind? I only know my own mind; nobody else’s.
As a practical matter, each of us assumes that other people have a mind roughly comparable to our own. This assumption is confirmed by observing that other people’s behavior and verbal output are for the most part as expected. But oddly enough, we don’t actually know whether anyone else has a mind.
It is an odd quirk of nature that each of us has access only to our own mind. It could have been otherwise. I can see your body. I can hear your words. I can watch your behavior. Why can’t I perceive your mind? Why couldn’t evolution have proceeded down that path? That would seem the better design for social animals like us. As it is, you could be a zombie, a perfect one, a philosophical zombie with no inner experience, and I would never know, as long as you acted appropriately.
The same issue underlies a basic problem of artificial intelligence. It seems only a matter of time until robots become so sophisticated that they act and speak normally. When that happens, they will be functional philosophical zombies.
As long as a robot has a metal skin and blinking lights on its head, we will not be too worried. But as soon as such a robot is dressed up in a convincing artificial skin and a good suit of clothes, it will become a perfect philosophical zombie. We will not be able to deny that it has a mind and a full complement of inner experiences and feelings like ours, because we aren’t even sure about each other! If I deny that the robot has a mind, why wouldn’t I also deny that you have a mind?
This puzzle of “other minds” bothered me for a number of years, but no longer. I now believe it arises from a faulty assumption, the assumption that our minds are private. They’re not, at least not completely. They are inherently social. Even introspection is social because it is a kind of thinking, and thinking is social. Thinking is social because language is social. Language is a social invention, arising out of human interaction.
Language does not grow on trees. It is a product of people interacting with each other. You must acquire language from another person, through explicit teaching and learning. If you don’t get that training (as feral children do not), language does not develop spontaneously. There is no pill you can take, no exercise you can do on your own, to acquire language. It is uniquely a social phenomenon.
To the extent that thinking involves language, and introspection involves thinking, it is clear that introspection is fundamentally a social phenomenon, imbued to its core with the values and assumptions embedded in the individual’s community. Therefore, because I speak and understand the same language as you, I do in fact know what is in your mind (more or less) and I know how you think about things (in general), and most importantly, I know you are “in there,” and not a zombie.
With only a little difficulty, we can make a similar argument about visual imagery and other explicit mental representations of sensory experience and expression, such as songs. They are all social conventions, taught and learned.
What about a robot programmed to have completely appropriate language? Could I discriminate it from a real person? That question constitutes the famous “Turing test,” proposed by Alan Turing in 1950. In that test, you hold a typed conversation with a machine and with a person, both hidden from view, and if you cannot tell which is which, the machine passes the test. In that case, to avoid inconsistency, you must admit that it has a mind, albeit an artificial one.
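For concreteness, here is a minimal Python sketch of the test’s structure only, not of any real contestant. Both responder functions are hypothetical stand-ins of my own invention: a judge questions two parties hidden behind the labels A and B, then guesses which one is the machine.

import random

def scripted_robot(prompt):
    # A trivially canned responder; a real contestant would be a full
    # conversational program.
    return "That is an interesting question. Tell me more."

def human_stand_in(prompt):
    # In an actual test, a live person would type the reply here.
    return input("(human) " + prompt + "\n> ")

def turing_test(questions):
    # Hide the two parties behind the labels A and B, in random order.
    parties = [("robot", scripted_robot), ("human", human_stand_in)]
    random.shuffle(parties)
    labels = dict(zip("AB", parties))
    for question in questions:
        print("Judge:", question)
        for label, (_, respond) in labels.items():
            print(label + ":", respond(question))
    guess = input("Which one is the robot, A or B? ").strip().upper()
    actual = [l for l, (name, _) in labels.items() if name == "robot"][0]
    print("The robot passes." if guess != actual else "The robot is identified.")

turing_test(["What did you have for breakfast?", "Describe your mother."])

The point of the sketch is only that the judge sees nothing but text; everything about bodies, curtains, and blinking lights is stripped away.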
Some programs have already passed limited versions of the Turing test, fooling adults, children, experts, psychologists, and many others (see the Loebner Prize: http://www.loebner.net/Prizef/loebner-prize.html). But these tests have been limited in scope. Are we justified in expecting a future robot that qualifies as a perfect linguistic zombie? I think not.
The robot contains the language knowledge of its programmer. In that sense the robot is not a natural language user. It did not acquire its language through the normal course of socialization, which takes many years of daily social interaction. The robot has no family, no peers, no social network out of which language understanding grows. The programmer has all those social connections and is a natural language user. The robot becomes a repository of the programmer’s lexicon and grammar, but not of the programmer’s social history. Consequently, it is impossible in principle for the robot ever to be a perfect linguistic zombie, because genuine language use and understanding arise from living in a community. That is why a linguistic robot will inevitably be identified in an unconstrained Turing test.
Well then, couldn’t a robot be made that does live in a community of humans, and does partake of ordinary social interactions, and does acquire language through interaction like a human does? That would work in principle, but nobody has any idea how to make such a robot, because we don’t even know exactly how the process works for a human being. So in the end, there is no fear of zombie robots.
But more importantly, we can rest assured that if there are any “pod people” among us whose bodies have been snatched, we will know it.
Thursday, December 13, 2007