There Are No Ghosts

There definitely are ghosts: strong AI.

I don’t know where this comes in, but it did.

Chinese room

The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness,[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled “Minds, Brains, and Programs” and published in the journal Behavioral and Brain Sciences.[1] Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle’s version has been widely discussed in the years since.[2] The centerpiece of Searle’s argument is a thought experiment known as the Chinese room.[3]

In the thought experiment, Searle imagines a person who does not understand Chinese isolated in a room with a book containing detailed instructions for manipulating Chinese symbols. When Chinese text is passed into the room, the person follows the book’s instructions to produce Chinese symbols that, to fluent Chinese speakers outside the room, appear to be appropriate responses. According to Searle, the person is just following syntactic rules without semantic comprehension, and neither the human nor the room as a whole understands Chinese. He contends that when computers execute programs, they are similarly just applying syntactic rules without any real understanding or thinking.[4]

The argument is directed against the philosophical positions of functionalism and computationalism,[5] which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the argument is intended to refute a position Searle calls the strong AI hypothesis:[b] “The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.”[c]

Although its proponents originally presented the argument in reaction to statements of artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research because it does not show a limit in the amount of intelligent behavior a machine can display.[6] The argument applies only to digital computers running programs and does not apply to machines in general.[4] While widely discussed, the argument has been subject to significant criticism and remains controversial among philosophers of mind and AI researchers.[7][8]

Searle’s thought experiment

Suppose that artificial intelligence research has succeeded in programming a computer to behave as if it understands Chinese. The machine accepts Chinese characters as input, carries out each instruction of the program step by step, and then produces Chinese characters as output. The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.[4]

The questions at issue are these: does the machine actually understand the conversation, or is it just simulating the ability to understand the conversation? Does the machine have a mind in exactly the same sense that people do, or is it just acting as if it had a mind?[4]

Now suppose that Searle is in a room with an English version of the program, along with sufficient pencils, paper, erasers and filing cabinets. Chinese characters are slipped in under the door; he follows the program step by step, and it eventually instructs him to slide other Chinese characters back out under the door. If the computer had passed the Turing test this way, it follows that Searle would do so as well, simply by running the program by hand.[4]
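
The purely syntactic character of what Searle does in the room can be made concrete with a small sketch. The Python fragment below is illustrative only: the rule table and example symbol strings are hypothetical stand-ins for Searle’s rule book, and nothing in the code depends on what the symbols mean.

```python
# A minimal sketch of purely syntactic symbol manipulation, in the spirit of
# Searle's rule book. The table is a hypothetical stand-in: it pairs input
# symbol strings with output symbol strings, without reference to meaning.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def chinese_room(symbols: str) -> str:
    """Return whatever the rule book dictates for the given symbols.

    Like the person in the room, this function only matches shapes;
    it has no access to what the symbols are about.
    """
    return RULE_BOOK.get(symbols, "对不起，我不明白。")

if __name__ == "__main__":
    # To a fluent reader outside, the reply can look appropriate;
    # inside, only rule-following has taken place.
    print(chinese_room("你好吗？"))
```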

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing behavior that makes them appear to understand. However, Searle would not be able to understand the conversation. Therefore, he argues, it follows that the computer would not be able to understand the conversation either.[4]

Searle argues that, without “understanding” (or “intentionality”), we cannot describe what the machine is doing as “thinking” and, since it does not think, it does not have a “mind” in the normal sense of the word. Therefore, he concludes that the strong AI hypothesis is false: a computer running a program that simulates a mind would not have a mind in the same sense that human beings have a mind.[4]

History

Gottfried Leibniz made a similar argument in 1714 against mechanism (the idea that everything that makes up a human being could, in principle, be explained in mechanical terms; in other words, that a person, including their mind, is merely a very complex machine). Leibniz used the thought experiment of expanding the brain until it was the size of a mill.[9] He found it difficult to imagine that a “mind” capable of “perception” could be constructed using only mechanical processes.[d]

Peter Winch made the same point in his book The Idea of a Social Science and its Relation to Philosophy (1958), where he provides an argument to show that “a man who understands Chinese is not a man who has a firm grasp of the statistical probabilities for the occurrence of the various words in the Chinese language” (p. 108).

Soviet cyberneticist Anatoly Dneprov made an essentially identical argument in 1961, in the form of the short story “The Game”. In it, a stadium of people act as switches and memory cells implementing a program to translate a sentence of Portuguese, a language that none of them know.[10] The game was organized by a “Professor Zarubin” to answer the question “Can mathematical machines think?” Speaking through Zarubin, Dneprov writes “the only way to prove that machines can think is to turn yourself into a machine and examine your thinking process” and he concludes, as Searle does, “We’ve proven that even the most perfect simulation of machine thinking is not the thinking process itself.”

In 1974, Lawrence H. Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called the China brain, also the “Chinese Nation” or the “Chinese Gym”.[11]

Searle’s version appeared in his 1980 paper “Minds, Brains, and Programs”, published in Behavioral and Brain Sciences.[1] It eventually became the journal’s “most influential target article”,[2] generating an enormous number of commentaries and responses in the ensuing decades, and Searle has continued to defend and refine the argument in many papers, popular articles and books. David Cole writes that “the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear in the past 25 years”.[12]

Most of the discussion consists of attempts to refute it. “The overwhelming majority”, notes Behavioral and Brain Sciences editor Stevan Harnad,[e] “still think that the Chinese Room Argument is dead wrong”.[13] The sheer volume of the literature that has grown up around it inspired Pat Hayes to comment that the field of cognitive science ought to be redefined as “the ongoing research program of showing Searle’s Chinese Room Argument to be false”.[14]

Searle’s argument has become “something of a classic in cognitive science”, according to Harnad.[13] Varol Akman agrees, and has described the original paper as “an exemplar of philosophical clarity and purity”.[15]

Philosophy

Although the Chinese Room argument was originally presented in reaction to the statements of artificial intelligence researchers, philosophers have come to consider it as an important part of the philosophy of mind. It is a challenge to functionalism and the computational theory of mind,[f] and is related to such questions as the mind–body problem, the problem of other minds, the symbol grounding problem, and the hard problem of consciousness.[a]

Strong AI

Searle identified a philosophical position he calls “strong AI”:

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[c]

The definition depends on the distinction between simulating a mind and actually having one. Searle writes that “according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind.”[22]

The claim is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founder Herbert A. Simon declared that “there are now in the world machines that think, that learn and create”.[23] Simon, together with Allen Newell and Cliff Shaw, after having completed the first program that could do formal reasoning (the Logic Theorist), claimed that they had “solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind.”[24] John Haugeland wrote that “AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves.”[25]

Searle also ascribes the following claims to advocates of strong AI:

  • AI systems can be used to explain the mind;[20]
  • The study of the brain is irrelevant to the study of the mind;[g] and
  • The Turing test is adequate for establishing the existence of mental states.[h]

Strong AI as computationalism or functionalism

In more recent presentations of the Chinese room argument, Searle has identified “strong AI” as “computer functionalism” (a term he attributes to Daniel Dennett).[5][30] Functionalism is a position in modern philosophy of mind that holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accurately represent functional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism.

Stevan Harnad argues that Searle’s depictions of strong AI can be reformulated as “recognizable tenets of computationalism, a position (unlike “strong AI”) that is actually held by many thinkers, and hence one worth refuting.”[31] Computationalism[i] is the position in the philosophy of mind which argues that the mind can be accurately described as an information-processing system.

Each of the following, according to Harnad, is a “tenet” of computationalism:[34]

  • Mental states are computational states (which is why computers can have mental states and help to explain the mind);
  • Computational states are implementation-independent—in other words, it is the software that determines the computational state, not the hardware (which is why the brain, being hardware, is irrelevant; see the sketch after this list); and that
  • Since implementation is unimportant, the only empirical data that matters is how the system functions; hence the Turing test is definitive.
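
Harnad’s second tenet, implementation-independence, can be illustrated with a short sketch. The two functions below are hypothetical examples, not drawn from Harnad or Searle: they realize the same computation (the parity of a bit string) on deliberately different “substrates”, which is the sense in which computationalism holds that the hardware does not matter.

```python
# Two deliberately different realizations of the same computation: the parity
# of a bit string. On the computationalist view, any system implementing these
# transitions is in the same computational state, whatever it is made of.

def parity_by_arithmetic(bits: str) -> int:
    # "Substrate" 1: arithmetic on an integer accumulator.
    return sum(int(b) for b in bits) % 2

def parity_by_state_machine(bits: str) -> int:
    # "Substrate" 2: a two-state machine that flips on every '1'.
    state = 0
    for b in bits:
        if b == "1":
            state = 1 - state
    return state

# The implementations differ, but the computation they realize is identical.
assert all(parity_by_arithmetic(s) == parity_by_state_machine(s)
           for s in ["", "1", "1011", "0000", "111111"])
```

The sketch shows only the tenet itself: what fixes the computational state is the program-level description, not the material that runs it.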

Recent philosophical discussions have revisited the implications of computationalism for artificial intelligence. Goldstein and Levinstein explore whether large language models (LLMs) like ChatGPT can possess minds, focusing on their ability to exhibit folk psychology, including beliefs, desires, and intentions. The authors argue that LLMs satisfy several philosophical theories of mental representation, such as informational, causal, and structural theories, by demonstrating robust internal representations of the world. However, they highlight that the evidence for LLMs having action dispositions necessary for belief-desire psychology remains inconclusive. Additionally, they refute common skeptical challenges, such as the “stochastic parrots” argument and concerns over memorization, asserting that LLMs exhibit structured internal representations that align with these philosophical criteria.[35]

David Chalmers suggests that while current LLMs lack features like recurrent processing and unified agency, advancements in AI could address these limitations within the next decade, potentially enabling systems to achieve consciousness. This perspective challenges Searle’s original claim that purely “syntactic” processing cannot yield understanding or consciousness, arguing instead that such systems could have authentic mental states.[36]

Strong AI vs. biological naturalism

Searle holds a philosophical position he calls “biological naturalism”: that consciousness[a] and understanding require specific biological machinery that is found in brains. He writes “brains cause minds”[37] and that “actual human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains”.[37] Searle argues that this machinery (known in neuroscience as the “neural correlates of consciousness”) must have some causal powers that permit the human experience of consciousness.[38] Searle’s belief in the existence of these powers has been criticized.

Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, “we are precisely such machines”.[4] Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using specific machinery. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding. However, without the specific machinery required, Searle does not believe that consciousness can occur.

Biological naturalism implies that one cannot determine if the experience of consciousness is occurring merely by examining how a system functions, because the specific machinery of the brain is essential. Thus, biological naturalism is directly opposed to both behaviorism and functionalism (including “computer functionalism” or “strong AI”).[39] Biological naturalism is similar to identity theory (the position that mental states are “identical to” or “composed of” neurological events); however, Searle has specific technical objections to identity theory.[40][j] Searle’s biological naturalism and strong AI are both opposed to Cartesian dualism,[39] the classical idea that the brain and mind are made of different “substances”. Indeed, Searle accuses strong AI of dualism, writing that “strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn’t matter”.[26]

Consciousness

Searle’s original presentation emphasized understanding—that is, mental states with intentionality—and did not directly address other closely related ideas such as “consciousness”. However, in more recent presentations, Searle has included consciousness as the real target of the argument.[5]

Computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.[41]

— John R. Searle, Consciousness and Language, p. 16

David Chalmers writes, “it is fairly clear that consciousness is at the root of the matter” of the Chinese room.[42]

Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of a Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency or some clever simulation inhabits the room.[43]

Searle argues that this is only true for an observer outside of the room. The whole point of the thought experiment is to put someone inside the room, where they can directly observe the operations of consciousness. Searle claims that from his vantage point within the room there is nothing he can see that could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that can speak Chinese. In Searle’s words, “the computer has nothing more than I have in the case where I understand nothing”.[4]