For those who know what the Chinese Room argument is (for those who don’t, here is a wiki that sums it up nicely: en.wikipedia.org/wiki/Chinese_room_argument), what is your opinion on it? I think that in principle it shows that no symbol manipulator without semantics has a mind, but in practice I do not think it invalidates the Turing Test.
From what I remember of it (which, I admit, was very minimal to begin with), Searle is too fascinated with language. Why go to China when he could have just looked at English symbology? After all, you are understanding the general “sense” of what I am typing here even though there is no real guarantee that any understanding sits behind the symbols. Language is overrated, so why bother with it? Just assume it and let’s get to something worthwhile.
The Chinese Room argument is the same as the ‘appearance and reality’ distinction. No one knows the machine is a machine, so they receive appearances from the machine without knowing it is a machine. But in such a case, where appearance is all we ever receive, appearance is reality.
The man asks: how can we tell appearance from reality when appearance is what gives us reality? This is not imaginable. I mean, for all I know, the posters here might be machines churning out words of English, and there would be no way for me to know. But the man wants to know what he cannot know, i.e. to think what he cannot think. I give full credit to Wittgenstein for solving all the problems of philosophy with:
‘What we cannot speak about we must pass over in silence.’
I’ve never got excited about Searle’s arguments, purely because our cells operate on exactly the same level as the Chinese room.
If Searle’s argument stands against computers, it’ll stand against humans.
His argument is akin to saying that because my nerves aren’t sentient, I’m not. He got distracted by the fact that he, as a part of the machine, didn’t understand, and concluded that therefore there’s no way the machine as a whole could.
The difference between the Chinese machine and a human brain is that the Chinese machine is looking up a specific answer in a table instead of ‘understanding’. Whenever it sees “what color is the sky” it returns “blue”, and because of this it appears to be human and passes the Turing Test. But it only did that because a person sat down, meticulously pre-answered every question, and wrote the answers in the table.
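Just to make that concrete, here is roughly what the table-lookup machine amounts to. The table contents and the normalize step are my own invention, not Searle’s formulation; it’s just the shape of the thing:

```python
# A minimal sketch of the table-lookup "machine" described above.
# The table entries and the normalize() helper are invented for illustration.

RESPONSES = {
    "what color is the sky": "blue",
    "what is two plus two": "four",
}

def normalize(text: str) -> str:
    # Strip punctuation and case so lookups match the table keys.
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace()).strip()

def respond(question: str) -> str:
    # Pure symbol lookup: nothing here models what "sky" or "blue" refer to.
    return RESPONSES.get(normalize(question), "i do not understand")

print(respond("What color is the sky?"))  # -> blue
```

Every answer it will ever give was written down ahead of time by someone who did understand.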
I think Searle basically just created a half-assed AI and said that all AI must fall under the same category. Good AI would be more complicated: instead of simply finding “blue” printed in the table next to “what color is the sky”, the computer would have to know how and when to ask and answer questions like “what is a color”, “what is the sky”, and “what are the words that denote colors”, and to do this it would merely need a pre-programmed understanding of syntax, just like humans have.
If Searle sat inside and watched all this happen in Chinese, he would only know that the original question is “what chang is the chong?” If he sat there long enough, he would see that the word ‘chang’ is always one of five to ten words (the Chinese words for colors). He would also see that whenever ‘chong’ is described by one of those words, it is always a certain one of them, and that the same word is used to describe other blue things, which he could then work out must have something in common with ‘chong’. At that point he would have the same understanding as the computer, and it would not be a complete understanding.
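That kind of pattern-spotting from inside the room is easy to mimic. Here is a toy version; the transcript symbols are completely made up by me:

```python
# A toy version of the pattern-spotting Searle could do from inside the room:
# count which symbols fill the "chang" slot in the answers he passes out.

from collections import Counter

# Pretend answers observed to "what chang is the chong?" style questions.
observed_slot_fillers = ["lan", "lan", "hong", "lan", "lv", "hong", "lan"]

print(Counter(observed_slot_fillers).most_common())
# -> [('lan', 4), ('hong', 2), ('lv', 1)]
# You can learn the distribution of the slot without ever learning what any
# of the words refer to -- syntax statistics with no semantics attached.
```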
But if the computer went outside, and someone programmed into it “‘sky’ is the big area of ‘blue’ in the upward direction, and ‘blue’ is light of a certain frequency”, and the computer has instruments to determine the frequency of light and the direction its eyes are pointing, just like humans do, then how is the computer’s understanding of the situation any different from a person’s? And if the person trying to understand Chinese went outside and learned the same thing with the new words (“‘chong’ is the big area of ‘chang’ above, and ‘chang’ is light of the highest visible frequency”), how is that any different?
Instead of looking in the table next to the question “what color is the sky”, it is possible to program the machine so that, in order to answer “what color” about “sky”, it has to examine the frequency of light when it is outside, pointing its eyes higher than 45 degrees above the horizon, with no water vapor in the way. That’s exactly what a human would do!
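A hedged sketch of that “grounded” version, where the answer is derived from a (stubbed) measurement instead of read off a table. The sensor, the wavelength bands, and the 45-degree condition are all my own assumptions, not anything Searle specified:

```python
# The answer is computed from a measurement rather than looked up.

def sense_sky_wavelength_nm() -> float:
    # Stand-in for a real instrument aimed more than 45 degrees above
    # the horizon on a clear day.
    return 470.0  # pretend reading: scattered sunlight peaks in the blue band

def wavelength_to_color(nm: float) -> str:
    # Crude visible-spectrum bands (approximate, illustration only).
    bands = [(450, "violet"), (495, "blue"), (570, "green"),
             (590, "yellow"), (620, "orange"), (750, "red")]
    for upper_nm, name in bands:
        if nm < upper_nm:
            return name
    return "not a visible color"

def what_color_is_the_sky() -> str:
    # Compose sub-knowledge ("sky is up", "color is light frequency")
    # instead of returning a canned answer.
    return wavelength_to_color(sense_sky_wavelength_nm())

print(what_color_is_the_sky())  # -> blue
```

The point is that nothing in the flow of control changes between this and what a person does; only the hardware doing the sensing differs.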
It just seems like Searle wrote a crappy, incomplete AI simulation. Virtually all computer programs that aren’t specifically trying to fake intelligence are crappy in this same way, but that doesn’t mean I couldn’t try real hard to specify the computer’s knowledge in bits as small as the ones humans have, relying on the same sensory organs and the same pre-defined brain conditions, like syntax. I don’t know, I could be wrong.
No, I don’t think so. I believe it was formulated in the 1980s, when the Computational Theory of Mind was the main theory, at least from what I heard.
Well, a computer can say a word without really understanding all that it means, unlike me. I can say a word and know exactly all the things I mean by it.
Umm, perhaps you should read the argument and get to know it again, because Searle’s main point was not about language (though that’s a large part of it) but about understanding. He was trying to show that machines that are nothing but symbol manipulators have no real understanding, and I think he proves that point quite well.
I don’t see the distinction. Why is looking at a table any different from looking at a bunch of memory ‘tables’ and a bunch of ‘personality trait’ tables and using the two to come up with a response, as our mind may do?
Don’t get obsessed with the fact that Searle rather cheekily simplifies the computation these ‘tables’ would have to do, leading the reader down the garden path to the conclusion that a simple 2-D table could fool anyone into thinking someone understands Chinese.
Don’t forget this Chinese room would have to handle stimuli other than speech: it would have to be able to assess local conditions in case a speaker asked about the weather, commented on a passing man or woman, or asked the speaker’s opinion of a nearby object.
I was eventually trying to agree, Matt. There is no difference between the Chinese machine and a human. We don’t magically ‘understand’ any more than the machine does. We just happen to have a much bigger table, with much smaller contents in each box than you would think of creating in a regular machine. Luckily, our DNA and our body automatically write up those tables for us in the first few decades of our lives. But it’s the same concept. Everyone’s a robot.
And if you had read people like Lacan, Deleuze, Blanchot, etc., you would know that “understanding” (i.e. “sense”) cannot be separated from “language.”
Obw, you are correct that I didn’t mention my “sources” for my argument, but that doesn’t invalidate it or make it less applicable. It was the way I “understood” the argument in a wider range of philosophy, and how I approached it. Nobody said I couldn’t do that.
The Chinese room experiment brings to our attention the crux of the argument for strong artificial intelligence. We have the appearance of sense: the symbols go in, they are organized with reference to the list of instructions, and then they are sent out. Because we understand (or at least, those who read Chinese understand) the resulting sentence or phrase, one might be inclined to say that the machine has “made sense”.
Imagine the machine has a massive number of references in its “look-up table”. Imagine that it could supply coherent responses to every phrase I might ever offer it. That may well be true of artificial intelligence one day. But I don’t think any of us would agree that the machine has “understood”. It has received, referenced, and responded, but it hasn’t felt the meaning of the words in the process. It has not equalled a human cognitive experience. It is not the dreamt-of strong A.I.