The self in itself

I’ve recently been reading about Turing and Searle’s Chinese Room argument, and I think their arguments are inherently invalid and nonsensical.

According to Wikipedia, the Chinese Room argument goes like this:

This is all very speculative, for a lot of assertions have been made, such as the book of rules that enables character translation.

My point is this: Searle did not define what is meant by ‘understanding’. How do we know someone understands, if not by their responses? How do I know everyone I talk to is not a machine? The question is irrelevant.

Aren’t we all machines in that respect? We have inputs, and we have rules by which we produce outputs. We have just forgotten that we do not understand.

Also, does a human translator really understand the content of an author’s work? If not, is he a machine? Isn’t a translator like a machine, applying the rules of translation?

To finish off my rant: I say we do not know whether anyone understands; we ought to focus on their responses. If I tell a machine to turn on by saying ‘on’ and it does turn on, then that’s as far as I go. If I tell a human to turn the machine on and he doesn’t, then he did not turn the machine on. I do not know whether he ‘understands’, only that he did not comply with my instructions.

Understanding is irrelevant. A machine is not a human in that a machine is a machine, but a machine can pretend to be a human just as a human can pretend to be a machine.

Well, your hypothetical book of rules would be more akin to a static program than a learning machine such as a neural network. So you’re right: it may be able to fool people, but it won’t be intelligent.

It’s running the “questions” against a fixed array of rules, but the book is not recording memory or modifying its rule-set in response to the question. The rules may represent some information about the outside world, but if the rule-set is fixed, it will do nothing but provide one output for one input.
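As a toy sketch of that point (my own illustration, not anything from the thread), the static book of rules is just a fixed lookup table: one output per input, no memory, and the same question always produces the same answer.

```python
# Toy sketch of the "static book of rules": a fixed, immutable mapping
# from input strings to output strings. There is no state or memory,
# so behavior can never change between questions.
RULE_BOOK = {
    "how are you?": "I am fine.",
    "what is your name?": "I have no name.",
}

def answer(question: str) -> str:
    """Look the question up in the fixed rule book."""
    return RULE_BOOK.get(question, "I do not understand.")

# Asking the same question 100 times yields 100 identical answers.
replies = {answer("how are you?") for _ in range(100)}
print(replies)  # {'I am fine.'}
```

This is also why, as noted later in the thread, a fixed rule book gives the same answer every single time it is asked the same question.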

If your book instead contained rules to construct a “world model” from that information, then to apply the best-scoring set of rules to the world model and generate its reply accordingly, then to accept feedback and do the same, and then, if the book contained rules to write new rules based on the feedback and the response, and stored these new rules for later reference . . . .

then the book would be learning, and thinking. It would be applying a self-modifying learning algorithm to the problem. Of course, the poor guy who has to follow the book’s uber-complex procedures and make the whole thing work still won’t have a clue what is going on. He would be more akin to a computer processor; it is the book that would represent the intelligent agent (weird, huh?). The Chinese room would probably quickly fill up with paper, though, and would have to eat several trees’ worth of scrap paper for every iteration. :stuck_out_tongue:
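The difference described above can be sketched minimally (again my own invented illustration; the class and method names are made up) as a book that records every exchange and writes new rules for itself from feedback:

```python
# Toy sketch of a self-modifying "book": it still maps inputs to
# outputs, but it also records memory and stores new rules learned
# from feedback, so its future behavior depends on its history.
class LearningBook:
    def __init__(self):
        self.rules = {}    # the mutable rule-set
        self.memory = []   # record of every exchange

    def answer(self, question: str) -> str:
        reply = self.rules.get(question, "I do not know.")
        self.memory.append((question, reply))
        return reply

    def feedback(self, question: str, correct_reply: str) -> None:
        """Write a new rule based on feedback, stored for later reference."""
        self.rules[question] = correct_reply

book = LearningBook()
print(book.answer("what color is the sky?"))  # "I do not know."
book.feedback("what color is the sky?", "Blue.")
print(book.answer("what color is the sky?"))  # "Blue."
```

The person executing these steps by hand still understands nothing; the intelligence, such as it is, lives in the rule-set and the memory.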

The Chinese characters would obey rules. So if you asked the same question twice, you would get exactly the same answer twice; in fact, you could get the same answer 100 times. People do not give the same answer 100 times. So this would not pass the Turing test.

Hi MRM1101, Pincho Paxton,

Let me first extend you both a warm welcome.

Now, maybe I did not explain to your satisfaction in the first post. What I meant to say was that there is no difference between a computer and a human being if you locate the difference in the ability to understand.

My proposition is that we humans do not understand; like a computer, we are simply following rules in communication. When I say ‘the Sun rises in the east’, what does it take to so-called understand the statement? The understanding is revealed in its application: in the morning, if I wish to see the sun and I understand the statement, I will look to the east. If a computer follows a similar rule, which we call logic, the behavior it exhibits is of the same nature as human behavior.
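The “understanding revealed in application” idea can be caricatured as a simple condition-action rule (a toy illustration of my own; the function and strings are invented):

```python
# Toy caricature: "understanding" the statement "the Sun rises in the
# east" is nothing more than applying the rule that maps the goal
# "see the sun" in the morning to the action "look east".
def act(goal: str, time_of_day: str) -> str:
    if goal == "see the sun" and time_of_day == "morning":
        return "look east"
    return "do nothing"

print(act("see the sun", "morning"))  # "look east"
```

On this view, a human looking east and a program returning `"look east"` are following the same kind of rule; nothing extra called “understanding” is doing any work.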

Thus, the Chinese Room argument against AI is flawed. The Turing test is nonsense because we do not know a person, only his appearance, just like the so-called ‘thing in itself’: we know only an appearance of the self.

EDIT: To understand is to be able to follow instructions. Computers can follow instructions, therefore they understand.

actually the Turing test is not flawed at all…

all it states is that the appearance of intelligence is enough for the observer to posit the standing of “intelligent” on whatever it is with which that communication occurs…

in other words: how does one say if a person is intelligent? by observing that person respond (usually linguistically) to stimulation…

if a person responds properly to our linguistic prompts, we call that person intelligent…

there is no difference between a person and a computer in this regard…

if the computer responds exactly as a person would respond, the outside judge of intelligence would have to judge it as being a person, especially if the communicator did not know with what he was conversing…

if the person judging the “intelligence” sees no difference between the response of the computer and a person, he would have no justification to believe that the computer was not “intelligent” and the person was “intelligent” because their responses would be identical…

nowhere in the Turing test is intelligence actually defined or limited… if the quality is demonstrated, the object is judged to possess it… the computer is “intelligent” because it displays intelligence… just as the sky is blue or the grass is green because one perceives them as being such…

-Imp

You are not talking about the Turing test; you are talking about sentience, or self-awareness, which is a different thing.

It is hard to tell whether something is sentient or self-aware. I don’t even know whether the person standing in front of me is self-aware; I just presume that they are.

In that regard, I consider the Turing test rubbish, because you can’t tell whether it is a tape recorder talking or a person speaking into a microphone.

No offense, but isn’t that a weird thought to post on the World Wide Web? 0_0