

The puzzle of the philosophical zombie may seem silly at first, but on reflection it is really just a way of posing a more urgent question: How do we know that any other person has a mind? I only know my own mind; nobody else’s.
As a practical matter, each of us assumes that other people have a mind roughly comparable to our own. This assumption is supported by observing that other people’s behavior and verbal output are, for the most part, what we expect. But oddly enough, we don’t actually know whether anyone else has a mind.

The same issue underlies a basic problem of artificial intelligence. It seems only a matter of time until robots become so sophisticated that they act and speak like ordinary people. When that happens, they will be functional philosophical zombies.
As long as a robot has a metal skin and blinking lights on its head, we will not be too worried. But as soon as such a robot is dressed up in a convincing artificial skin and a good suit of clothes, it will become a perfect philosophical zombie. We will not be able to deny that it has a mind and a full complement of inner experiences and feelings like ours, because we aren’t even sure about each other! If I deny that the robot has a mind, why wouldn’t I also deny that you have a mind?
This puzzle of “other minds” bothered me for a number of years, but it no longer does. I now believe it arises from a faulty assumption: that our minds are private. They’re not, at least not completely. They are inherently social. Even introspection is social, because introspection is a kind of thinking, and thinking is social. Thinking is social because language is social: language is a social invention, arising out of human interaction.

To the extent that thinking involves language, and introspection involves thinking, it is clear that introspection is fundamentally a social phenomenon, imbued to its core with the values and assumptions embedded in the individual’s community. Therefore, because I speak and understand the same language as you, I do in fact know what is in your mind (more or less) and I know how you think about things (in general), and most importantly, I know you are “in there,” and not a zombie.
With only a little difficulty, we can make a similar argument about visual imagery and other explicit mental representations of sensory experience and expression, such as songs. They are all social conventions, taught and learned.

Some conversational programs have already passed a limited version of the Turing test, fooling adults, children, experts, psychologists, and many others (http://www.loebner.net/Prizef/loebner-prize.html). But those tests have been limited in scope. Are we justified in expecting a future robot that qualifies as a perfect linguistic zombie? I think not.
The robot contains the language knowledge of its programmer. In that sense the robot is not a natural language user. It did not acquire its language through the normal course of socialization, which takes many years of daily social interaction. The robot has no family, no peers, no social network out of which language understanding grows. The programmer has all those social connections and is a natural language user. The robot becomes a repository of the programmer’s lexicon and grammar, but not of the programmer’s social history. Consequently, it is not possible, even in principle, for the robot ever to be a perfect linguistic zombie, because genuine language use and understanding arise from living in a community. That is why a linguistic robot will inevitably be unmasked in an unconstrained Turing test.
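A toy example makes the point concrete. The sketch below is hypothetical Python in the spirit of Weizenbaum’s ELIZA, not any actual Loebner Prize entrant: every reply the program can give was written down in advance by its programmer, so the program stores the programmer’s lexicon and grammar but none of the social history behind them.

```python
import re

# Every pattern and reply below is the programmer's own lexicon and
# grammar, fixed in advance; the program learned none of it through
# social interaction.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
    (re.compile(r".*"),                  "Please go on."),  # catch-all fallback
]

def respond(utterance: str) -> str:
    """Return the first canned reply whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())

if __name__ == "__main__":
    print(respond("I feel lonely"))          # -> Why do you feel lonely?
    print(respond("my family is far away"))  # -> Tell me more about your family.
    print(respond("hello"))                  # -> Please go on.
```

Whatever fluency such a system displays is the programmer’s own, frozen at the moment of writing; nothing in the program grew out of a life lived in a community.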

But more importantly, we can rest assured that if there are any “pod people” among us whose bodies have been snatched, we will know it.