Turing Test Defeated?

for themselves?

ever see a chess program that can beat any human? deep blue came close, I believe… but within the limited scope of that program, it produced results for itself… actually, all programs produce results for themselves; that is what programs do… whether programmed artificially through schooling/programming, or naturally/biologically programmed…

are you assuming the need for a programmer?

it could be argued that a child (a man-made biological machine) is programmed from the moment it is born by all the sensory data it gathers… the “flow chart” and the “be a human” program are hard-wired into the biology of the child… it cannot be any other way than its biology dictates… (e.g. it can’t grow wings)…

or must we have “Terminator” before artificial “intelligence” exists?

-Imp

The Turing test is just a mental illusion. You just have to believe that the computer is real… I think you have already gathered that.

But if you really want a sentient computer, you might be able to connect a soul to a computer… like my hands are typing this, but without the hands.

Take a fetus, and replace the brain with a computer somehow. OK, so that sounds way too simplified, but it will have to do for this topic.

Programming sentience is back to front. Our sentience fills our brain with information… not… our brain full of information gives us sentience… Otherwise we would never die… because we would always have our brain, and we would always be sentient.

Somehow you need to start with just the energy on its own, and the only thing I can think of to get the energy from is a fetus. Now find the energy.

Impenitent:

Yes, but I've never seen one play a good game of Twister. My point in talking about the program doing things 'for itself' is that when we design a computer program, we always design it with the visible, outer results in mind. The product. In that sense, Deep Blue is not much different from a hammer: a tool. Now, a sea-slug is not a tool. Or if it is, it's its own tool.
I wouldn't necessarily disagree with that (or rather, I don't need to in order to make my points!). What I would say is that there's something in the hardware of a child that gives it an inner mental life: it thinks and has self-awareness. Programs that work solely to produce visible results, like Deep Blue or a conversation simulator, won't achieve that, because their creators [i]aren't even trying[/i] to achieve it. If we agree that zombies (in the philosophical sense) are [i]possible[/i], then it seems obvious to me that a zombie is what we are on the road to producing.

But self-awareness does not occur until the child gets enough information, or ‘programming’, from the outside world. Contrary to popular belief, babies are not born self-actualized.

I mentioned this before, but any good program would need some kind of if/then probability decision-making tree as its internal mind. So that would be a thought process, and I think it would be pretty close to our own.
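
To make that concrete, here is a toy sketch in Python (the conditions, actions, and probability weights are all invented, purely for illustration): each branch is an if/then test on the input, and each leaf picks a response with some probability rather than deterministically.

import random

# A toy if/then probability tree: each node tests a condition on the
# percept, and each leaf chooses a response by weighted chance.
def decide(percept):
    if percept["threat"]:        # IF danger THEN usually flee, sometimes freeze
        return random.choices(["flee", "freeze"], weights=[0.8, 0.2])[0]
    if percept["food"]:          # IF food THEN usually approach
        return random.choices(["approach", "ignore"], weights=[0.9, 0.1])[0]
    return random.choices(["explore", "rest"], weights=[0.5, 0.5])[0]

print(decide({"threat": False, "food": True}))   # most often "approach"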

OG

Wait a minute. "Who is to say" that computers aren't self-aware… but apparently we're more than qualified to say that babies aren't self-aware? This doesn't make any sense to me.

I think the point of the Turing test is that the AI operates so as to convince others that it is a normal human being acting like a human being - NOT a human being trying to act like a computer.

I’m not saying computers aren’t self-aware, or babies. I’m saying the opposite. When they get enough input, or programming, it is then that they become self-aware. But I brought up self-awareness in babies because they are essentially programmed.

The whole point of my argument is that humans are functional programs running on the hardware of our brains. In other words, we are computers. When I talk on here I may be operating a certain part of my consciousness, a subroutine if you will… but that doesn’t mean that that function is wholly separated from my collective whole. Look up some of the objections to Searle’s Chinese Room argument for more on this.

Even though the programming for AI may be done by ‘us’, it doesn’t mean that it (the program) is not serving itself. Because this would also mean that we are not serving ourselves, but rather the environment that has programmed us. <---- Now that last one is debatable, but that’s basically my point.

That was a good post OG.

The question for me is “what is self-aware?” or, even better, “how is self-aware?”, because those two questions ask how it is that we know ourselves. There are only two possibilities. The first is that we have some built-in mechanism that allows us to understand that we are who we are. The second is that the information we gain builds to a point where we are able to understand it.

Years ago I knew a mentally retarded girl who did something amusing and sad at the same time. When she sneezed she would jump and defensively look around as if she had been attacked. So I concluded that she was not self-aware enough to know that it was she who had sneezed. Based on that, I have to conclude that we do not have a built-in mechanism for self-awareness.

So that means that being self-aware is a matter of learning that you exist via your senses, but more importantly via other people. That means that all you would have to do is program a computer to identify itself, and there you go.
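
If you want that in concrete terms, here is a toy sketch in Python (the class and event names are invented): the program keeps a record of the actions it initiated, so when a sensation arrives it can judge whether it caused the event itself, which is exactly the check the girl in my story was failing.

# Toy self-identification: the agent remembers the commands it issued,
# so when a sensation arrives it can ask "did I cause this?"
class Agent:
    def __init__(self):
        self.pending = set()           # actions I have initiated

    def act(self, action):
        self.pending.add(action)       # keep a copy of my own command

    def sense(self, event):
        if event in self.pending:      # the sensation matches my own action
            self.pending.discard(event)
            return "that was me"
        return "something out there"   # unexplained, so blame the world

me = Agent()
me.act("sneeze")
print(me.sense("sneeze"))   # "that was me"
print(me.sense("bang"))     # "something out there"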

and zombies are exactly what we have been producing… but we say they are autonomous, free, possessing a soul or some other nonsense like that…

-Imp

We have rigid prime directives, and subtle sub-directives. The flexibility of these sub-directives comes about through social programming. Humans are social animals. So must AI be social, with the same adversity threatening their objectives, and the same objectives motivated by prime directives. Human intelligence is about need, pain, and fear. At some point there’s going to be a dose of mighty tough love heaped on our digital progeny before they learn to pass the Turing test… a thing they will only do once failure is not an option.

Creating an AI is not like sculpting a statue from the ground up. It is a culling of random behavior, a culling articulated by desired outcomes… much like evolution. A robot will be given a set of parameters, within each parameter a range of variables. The parameters will have to resemble ours. The robot must go into a simulated social environment. Random behaviors will ensue throughout each variable action… generating various combinations of behaviors and outcomes. As the outcomes approach desirability, judged by the percentage of achieved prime directives (according to human judgment), the AI will be closer to passing the Turing test. Shotgun evolution… evolution is the only thing that tricks us into “passing” our own Turing test.
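
Here is a bare-bones sketch of that shotgun evolution in Python (the bit-string "behaviors" and the fitness score are made up for illustration; the score stands in for the human judges): random behavior is generated, culled against the desired outcomes, and the survivors are mutated into the next generation.

import random

# Shotgun evolution in miniature: random behavior vectors are culled by a
# fitness score standing in for "percentage of achieved prime directives";
# the survivors are mutated to seed the next generation.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]             # stand-in for the desired outcomes

def fitness(behavior):
    return sum(b == t for b, t in zip(behavior, TARGET))

def mutate(behavior):
    child = behavior[:]
    child[random.randrange(len(child))] ^= 1  # flip one random behavior bit
    return child

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                # cull everything but the best
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET), "desired outcomes achieved")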

A few points…

If/Then statements could create a working intelligence, but you cannot hand-feed much information into a good AI program. The brain is not fully pre-programmed; it works on levels of energy.

IF something is TRUE THEN Energy = Energy + 1
IF something is FALSE THEN Energy = Energy - 1
IF something is MAYBE THEN make a new branch

That is how the brain is supposed to work, according to some research.

Neural networks are computer programs loosely modelled on the human brain. They use the three rules… True/False/Maybe.
They have shown some intelligence, and they are not programmed but taught.
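
Here is a bare-bones example in Python of what "taught, not programmed" means (a single artificial neuron learning AND; the learning rate and starting weights are arbitrary). Nobody writes the rule in; the error signal nudges the weights up or down, much like the Energy + 1 / Energy - 1 rule above, until the right answer emerges.

# One artificial neuron taught the AND rule by correction, not by coding:
# a wrong answer nudges the weights, like Energy = Energy +/- 1 above.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                       # the AND function, never hard-coded

w1 = w2 = bias = 0.0
for epoch in range(20):
    for (x1, x2), t in zip(inputs, targets):
        out = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        err = t - out                        # +1, 0, or -1: the teaching signal
        w1 += 0.1 * err * x1                 # strengthen inputs that should fire
        w2 += 0.1 * err * x2
        bias += 0.1 * err                    # weaken the neuron when it misfires

for (x1, x2) in inputs:                      # the learned rule, never written by hand
    print(x1, x2, "->", 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0)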

Pincho, we seem to share the same theory, which you posted on this thread. Isn’t that weird?

Barf. I need to post when I’m not drunk in order to provide anything meaningful. I’m sorry, I already wrote this upon the previous realization. The pronouns were meaningful when I wrote them, maybe less so now. I refuse to edit!!!

i think that it will ask you what you mean by “artificial life”

you will say… what?

“life that only responds to specific inputs”

“life that only responds to specific inputs when it has learned the appropriate response to such inputs”

“life that only responds when it ‘understands’ what a particular input ‘means’ and a particular input’s ‘context’ is?”

Data, from an early Star Trek: The Next Generation episode: “i believe it would be inadvisable to combust certain materials in this environment”

Geordi La Forge: “no, ‘burning the midnight oil’ is a figure of speech”

Data: “blank stare”

future man :“lol”

Who votes for a Future Man / Timecube duel? :smiley:

Hey, I’m just glad to see Future Man back! :smiley: It’s been too long- what have you been doing with yourself all this time?

I’m gonna wager a guess and say… drinking :smiley:

You’ve kind of got things twisted through fractional dimensions here. The Turing Test is how you tell that what you have created qualifies as an AI; if it passes the test, you don’t need to go back to the drawing board any more.

But the Turing Test isn’t really meaningful. Humans use shared-worldview cues in language, and there’s no particular reason that an AI should share much of our worldview: its experience certainly doesn’t overlap with ours.

What you have described is not an AI; it’s a human-mimic, and it was discovered, oh, forty years ago that an AI is not necessary for that. I can’t recall the dates off the top of my head, but programs that mimicked nondirective psychiatrists (ELIZA being the famous example) have been around for a very long time. The trick is to reflect back the patient’s own language and then use Gricean cooperation to let the patient fill in the analyst’s reality.
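
A toy version of the reflection trick looks something like this (Python, with the pronoun table cut down to a few entries for illustration): swap the speaker's pronouns, hand the sentence back as a question, and let the patient supply the meaning.

# A toy nondirective "psychiatrist": reflect the patient's own words back
# by swapping pronouns; the patient's cooperation supplies the insight.
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
           "you": "I", "your": "my"}

def reflect(sentence):
    words = sentence.lower().rstrip(".!?").split()
    return " ".join(REFLECT.get(word, word) for word in words)

def respond(sentence):
    return "Why do you say that " + reflect(sentence) + "?"

print(respond("I am afraid of computers"))
# -> Why do you say that you are afraid of computers?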