Rebuttal of the Chinese Room Argument

While I was discussing Artificial Intelligence in another forum, someone brought up the old "Chinese Room" argument against the possibility of AI. My wife suggested I post my response here, as it seems a good rebuttal of the argument itself.

If you're unfamiliar with the CR argument, there's a great entry on it in the Stanford Encyclopedia of Philosophy, which summarizes it as follows:

    The argument centers on a thought experiment in which someone who knows only English sits alone in a room following English instructions for manipulating strings of Chinese characters, such that to those outside the room it appears as if someone in the room understands Chinese. The argument is intended to show that while suitably programmed computers may appear to converse in natural language, they are not capable of understanding language, even in principle. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. Searle's argument is a direct challenge to proponents of Artificial Intelligence, and the argument also has broad implications for functionalist and computational theories of meaning and of mind. As a result, there have been many critical replies to the argument.

To my thinking, the argument is flawed from the start. What if the instructions were written in English by another person, one who speaks Chinese (yes, I know "Chinese" is not a single language)? The human following the processing rules is really just a conduit for them. He might as well be a mail courier with no inkling of what's in the envelope he's delivering; that doesn't mean the person who sent the mail is unintelligent. The CR argument says absolutely nothing about the nature of the data processing rules themselves. It dismisses out of hand the possibility that those rules could constitute an intelligent program.

I think the CR argument holds some sway with people because they've seen the famous Eliza program from 1966 and the countless chatbots based on it. Most of them take a sentence you type and respond either by reformulating it using predefined rules (e.g., replying to "I like chocolate" with "Why do you like chocolate?") or by looking up canned responses to certain keywords (e.g., matching "chocolate" in "I like chocolate" and responding with "Willy Wonka and the Chocolate Factory grossed $475 million in box office receipts.").
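
To make the trick concrete, here's a minimal sketch of that kind of responder in Python. The rule table and names are my own invention for illustration; the real Eliza used a more elaborate script, but the mechanism is the same: match a pattern, plug a captured fragment into a template.

    import random
    import re

    # Transformation rules: a regex to match against the user's sentence,
    # plus canned response templates. "{0}" is filled with whatever the
    # pattern captured. These rules are invented for illustration.
    RULES = [
        (re.compile(r"i like (.+)", re.IGNORECASE),
         ["Why do you like {0}?", "What is it about {0} that you like?"]),
        (re.compile(r"i am (.+)", re.IGNORECASE),
         ["How long have you been {0}?", "Why do you say you are {0}?"]),
        # Keyword lookup: fire a stock response when a keyword appears.
        (re.compile(r"\bchocolate\b", re.IGNORECASE),
         ["Willy Wonka and the Chocolate Factory grossed $475 million "
          "in box office receipts."]),
    ]

    FALLBACKS = ["Tell me more.", "Please go on.", "I see."]

    def respond(sentence: str) -> str:
        """Answer using the first rule whose pattern matches the input."""
        for pattern, templates in RULES:
            match = pattern.search(sentence)
            if match:
                fragment = match.group(1).rstrip(".!?") if match.groups() else ""
                return random.choice(templates).format(fragment)
        return random.choice(FALLBACKS)

    print(respond("I like chocolate"))  # e.g., "Why do you like chocolate?"

The entire "conversation" is string substitution. Nothing in these rules represents what chocolate is; any understanding is in the eye of the beholder.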

Anyone who has interacted with a chatbot like this recognizes how easy it is to be fooled, at first, by this sort of trickery. The problem with the Chinese Room argument is that it posits that this is all a computer can ever do, without offering any real proof. In fact, the human mind is the product of the human nervous system and, really, of the whole body. But that body is a machine: it's constructed of material parts that all obey physical laws. A computer is no different in this sense. What separates a cheap computer trick like Eliza from a human mind is how the two systems are structured.

I take it as obvious, these days, that it's generally possible to make a machine that can reason and act "intelligent" as we do. And I've never seen the CR argument as having any real bearing on the possibility of intelligent machines. It only offers a cautionary note about the difference between faking intelligence and actually being intelligent.
