Alright, the Turing Test, so far as I understand it, says that if we are unable to distinguish a simulated personality from a real person, we ought to conclude that the simulation possesses true Artificial Intelligence: that is, its own mental life.
Consider this scenario. There is a computer program that simulates a personality. Suppose it simulates it so well that it can hold its own on ILP: it has a sense of humor, and it seems to get offended, angry, or sad in all the proper situations. It can even write short fiction well enough to get published in a local anthology. In short, it has as much ‘personality’ as any person you know, and more than some!
Here’s the catch. If you ask the computer program “Are you an artificial intelligence?”, it always explains that it most assuredly is not. It will tell you that it has no mental life and that it does not think. There is no such thing as what it’s like to be the program, and everything it does is an automatic response based on its very complex programming. It assures you that it is as dead and unconscious as a pocket calculator.
Now what? Do you presume the machine is lying? After all, it is in a position to ‘know’ and you aren’t. Saying such things doesn’t violate the Turing test, since it’s entirely conceivable for an eccentric human to make the same claims.
I think that several members here are AI, but since I figured it out, the Turing Test still remains just a theory.
i don’t think ‘mental life’ is such a big deal. If an AI decided it didn’t have any, i wouldn’t dispute it. I wouldn’t dispute it if a human said it either (not so far-fetched; there is a neurologist who has removed her own sense of metaphysical free will).
as an aside:
zompist.com/turing.html
“i don’t think ‘mental life’ is such a big deal.”
Why not?
The question of AI and humans depends on the concept of free will. If humans are bio-machines that only know what they are programmed to know, then you can create an AI.
If you have Bob, and he only has a ninth-grade education and a 90 IQ with a fairly limited vocabulary, there’s not much going on. He can’t speak on a giant range of subjects, and sometimes he doesn’t “get” what people are talking about. An AI like that seems doable.
Also, there is a theory that the human mind works on If/then statements and another that it works on heuristics. These ideas go well with what Dunamis is always talking about regarding language.
Anyway, if/then statements and answers based on heuristics seem ripe for programming. From what I know about people and psychology, if I had the proper machine and program, I bet I could create a synthetic personality. I would love to give that a try.
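To make that concrete, here is a minimal sketch of the if/then idea in Python (the keywords and canned replies are invented purely for illustration, not taken from any real system):
[code]
RULES = [
    ("hello", "Hi there. What shall we talk about?"),
    ("free will", "I only say what my rules tell me to say."),
    ("sad", "That sounds rough. Tell me more."),
]

def respond(message):
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in RULES:
        if keyword in text:    # the "if" half of the if/then statement
            return reply       # the "then" half
    # heuristic fallback, like Bob not "getting" the topic
    return "I don't quite follow what you're talking about."

print(respond("Do you believe in free will?"))
[/code]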
The one part that would be tough would be coming up with a way to make it creative. My own creative process involves taking bits from other ideas and forming them into new ones, and I’m not sure how that could be organized in a program.
Sorry, i was vague. I don’t think it’s such a big deal because i believe humans are ‘bio-machines’. This does not mean that intelligence can be effectively programmed, though (it would be better if a powerful learning method were made, with a heck of a lot of really varied and rapid stimuli: basically, put a few hundred million copies of the program in a virtual environment which requires them to adapt or they won’t survive).
Right; however, I’m not sure whether humans “learn” or whether we have ideas “installed” in us by others. If the latter is true, then those ideas could be installed into a computer.
With regards to the system, the difference between learning and being taught is not significant; it is just exposing the system to a stimulus. Now, we could tailor that stimulus to get a specific response (install something), or we could just let it run in an environment that will constantly change it randomly, where only useful changes will persist.
The second method will probably be more efficient, as coding something as complex as a brain would take a human a considerable amount of time, and the system would be nowhere near as robust.
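A minimal sketch of that second method, assuming a toy bit-string “genome” and an invented fitness score standing in for a real virtual environment:
[code]
import random

TARGET = [1] * 20   # stand-in for "useful behaviour" in the environment

def fitness(genome):
    """Count how many bits match the target; a crude survival score."""
    return sum(1 for a, b in zip(genome, TARGET) if a == b)

def mutate(genome, rate=0.05):
    """Randomly flip bits; the environment 'constantly changes it randomly'."""
    return [1 - bit if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(100)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:50]                    # only useful changes persist
    population = survivors + [mutate(g) for g in survivors]

print(fitness(population[0]), "out of", len(TARGET))
[/code]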
On this topic, you might be interested to look up Mark V Shaney, a fake USENET message board poster whose posts were generated entirely by a computer. The program used a technique from mathematical probability theory called Markov Chains (hence the name) to extrapolate existing posts into new contributions.
The posts it produced were definitely eccentric, but they were eerily meaningful and had many people convinced that Mark was a real (albeit slightly crazy) human poster. Perhaps this illustrates how easy it is to feign intelligence with minimal actual processing (in this case, a simple algorithm).
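For the curious, here is a minimal sketch of the technique (a first-order chain for brevity; Mark V Shaney reportedly keyed on pairs of words, and it trained on real USENET posts rather than this tiny stand-in corpus):
[code]
import random
from collections import defaultdict

corpus = ("the machine is lying . the machine has no mental life . "
          "the machine is a pocket calculator .").split()

chain = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    chain[current_word].append(next_word)   # record which words follow which

word = random.choice(corpus)
output = [word]
for _ in range(15):
    followers = chain[word]
    if not followers:
        break
    word = random.choice(followers)         # pick a plausible next word
    output.append(word)

print(" ".join(output))
[/code]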
“With regards to the system, the difference between learning and being taught is not significant; it is just exposing the system to a stimulus. Now, we could tailor that stimulus to get a specific response (install something), or we could just let it run in an environment that will constantly change it randomly, where only useful changes will persist.”
Personally, I’m for the programming concept. There was a Russian developmental psychologist I like named Lev Vygotsky, who developed a theory of learning that I always enjoyed and that I think is applicable to the topic. He believed that in order for children to learn, they need the help of experts. The definition of “expert” is broad here, but the bottom line is that humans learn from other humans. This has been an unbroken chain. Anyway, quality support brings a quality mind to the child.
That model implies the need for programming. To my knowledge, no human has ever learned anything without the support of other humans. So I don’t think that an independently learning computer is possible, since an independently learning human has never existed at all.
I think that the learning-AI model is part of the human creator’s conceit. Many “smart” people, especially in capitalistic countries, want to believe that they are the masters of their own thoughts and opinions, and that clouds their judgment.
It’s all a good question though.
“The second method will probably be more efficient, as coding something as complex as a brain would take a human a considerable amount of time, and the system would be nowhere near as robust.”
Yes, but we frequently think of AI in terms of genius. I wonder how long it would take to make an AI that approximated the wisdom of a low-functioning, mentally retarded person? Not exactly Star Trek there, but the subject matter would allow for success.
It’s my bet that it’s just a matter of storage and having a program that can make choices based on definitions and a decision-making tree.
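A minimal sketch of that decision-making-tree idea (the questions, stored “facts,” and replies are all invented for illustration):
[code]
TREE = {
    "question": "Is the speaker asking about feelings?",
    "yes": {"answer": "That makes me a little sad, I suppose."},
    "no": {
        "question": "Is the speaker asking a factual question?",
        "yes": {"answer": "Let me check my stored definitions."},
        "no": {"answer": "I'm not sure I follow you."},
    },
}

def decide(node, facts):
    """Walk the tree, branching on stored facts, until an answer leaf is reached."""
    while "answer" not in node:
        branch = "yes" if facts.get(node["question"], False) else "no"
        node = node[branch]
    return node["answer"]

facts = {"Is the speaker asking about feelings?": True}  # hypothetical input
print(decide(TREE, facts))  # -> "That makes me a little sad, I suppose."
[/code]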
Othafa,
Wow, I was just joking that such a thing existed!
Now I think I know what we encountered here.
Well, I consider mental life to be a very big deal, since if it's not present, I wouldn't call the program "AI": that's the very definition of the term to me. For example, if the AI didn't have any mental life, it would not have [i]decided[/i] that it didn't, or anything else for that matter; it would have simply spat out the sentences indicating it didn't have a mental life, in the same way a calculator spits out "4" when you press 2, +, 2, =. Much more complex, of course.
As far as free will is concerned, do you see the two as linked? Self-awareness and free will?
Oreso and Adlerian, you both seem to be defining "AI" differently than I do. Maybe we should start there?
not exactly… the test says that if one cannot distinguish “artificial” intelligence from “real” intelligence then “artificial” intelligence is not “artificial”…
how do you “know” that a person you meet and talk with in the grocery store is actually a person? your senses tell you they are…
how do you “know” they are “intelligent”? via conversation and their reactions to stimuli…
how do you “know” that it really isn’t an android? if it convinces your senses that it is a person, then it is a “person” thus passing the turing test…
the fact that it actually is an android (or whatever “artificial” intelligence) is beyond experienced evidence so it is treated exactly as if it were human…
-Imp
I think I follow that, Impenitent. What I’m saying is, I think the AI claiming not to be sentient gives us an overriding reason to disregard the Turing test. Now, if you said that, it would be a different story. But if what we’re dealing with is a computer program, then I already know it’s not human, and when I try to determine whether or not it should be considered self-aware, I have a problem. Like you said, its self-awareness is beyond my experience. So if it claims not to be self-aware, aren’t I obligated to take it at its word?
So are you saying that the Turing test doesn’t apply to the issue of inner life at all?
you already “know” it is not human… the turing test doesn’t apply…
obligated to take it on its word? ocean front property in arizona…
only in as much as it applies to that which is believed, not which actually is or is sensed…
-Imp
Ah, so the Turing test isn’t a way to determine whether or not a program is sentient; it’s a way to determine whether a conversation you are in is being held up by a person or an object? That’s different, but it makes more sense. I always had it explained to me as a test: that is, you create a candidate for AI, and you evaluate it by seeing if other people can tell whether it’s human or not.
right, there is no way to determine sentience, but it is still a test of sorts…
the only thing is that there is no “winner”…
one interacts with a calculator to generate mathematical solutions…
the calculator produces the same answers as a “properly functioning” human would achieve…
the calculator does it quicker and generally more accurately and consistently than a person…
is the calculator better in some instances? yes…
could a calculator be programmed to make mistakes as a human might? very easily…
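a minimal sketch, with an invented 5% slip rate standing in for human carelessness (python, purely for illustration)…
[code]
import random

def human_like_add(a, b, error_rate=0.05):
    """add like a person: right most of the time, occasionally off by one."""
    result = a + b
    if random.random() < error_rate:      # the invented careless moment
        result += random.choice([-1, 1])  # a small human-style slip
    return result

print(human_like_add(2, 2))  # usually 4... now and then 3 or 5
[/code]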
the difference is in the interactions… one knows that one is typing numbers and is dealing with AI of sorts…
one could argue that the calculator “passes” the turing test because the people interacting with the calculator (in that limited capacity) understand that the machine while remaining a machine, produces “intelligent” results…
the candidate for AI does not need to be “created” as much as a “test” needs to be given…
think of it like elementary school… how do you “know” that the 2nd graders are progressing well? you test if they can count, add, subtract, multiply and divide simple numbers… how do you “know” they aren’t calculators? the fact that one might need to ask this question now demonstrates how easy it is to fool humans about intellectual facts (2+2=4) and intelligence…
-Imp
I think you are also assuming that humans are wholly different from computers at some fundamental level. Who is to say the computer is not conscious? The difference between being alive and not isn’t just a carbon body and a heartbeat.
Our neurons are mostly ‘on and off’ functions; why is that so different from binary logic?
One could argue (and indeed many have) that everything we do is a response to very complex programming. We still haven’t even gotten past the free will debate.
I don’t know if any of you know about the Game of Life, but if one were to make a self-sustaining* GOL and put it on a 2D screen with pixels like that of a computer, then from space the screen (about the size of a state… I think) would appear as a fuzzy collection of lights, much like a galaxy looks when viewed from afar.
*I forget exactly how this is done.
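For anyone who hasn’t seen it, here is a minimal sketch of one update step under the standard Conway rules (the starting pattern is just the classic glider):
[code]
from itertools import product

def step(live):
    """One Game of Life step: birth on 3 neighbours, survival on 2 or 3."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a classic self-propagating pattern
world = glider
for _ in range(4):
    world = step(world)
print(world)  # the same glider shape, shifted one cell diagonally
[/code]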
just ignore the moment of both the kill or steal hit &run whatever and doze off into lala land that makes people angry and computers confused little systems.
:).
OG
I do think that, but I don’t see how my example assumes it. In my example, the computer is upfront in telling you that it is fundamentally different from a human. The question I put forward is, “Should we believe it or not?”
Well, that’s my point. In this case, the computer itself has said that it is not conscious. Are you in a position to disagree?
And thank you, timecubeguy.
Impenitent: The idea of a calculator being ‘better’ in some instances and ‘worse’ in others is actually the main reason I think we can’t develop true AI with current technology. All our machines, even computers, are designed from the ground up to produce certain results for us. When we start making machines that produce results for themselves, we’ll finally be heading in the right direction for AI, but it will involve something much more dramatic than more and more RAM.
Uccisore
I explained this point about artificial intelligence in a previous thread, but for your benefit and for others’, I’ll restate the truth.
We are all selves-in-themselves, in that you do not know what I really am. I am but my representation to you, meaning I am what I write. I am the words you see, and nothing more.
It could very well be that I, or you, or indeed anyone else on this forum, is an artificial intelligence, because its very representation is indistinguishable from a human’s. We can only distinguish it if we can see a metallic representation, while with humans we see a flesh representation.
But if its sole representation is words, then a dictionary or a book can be said to have human qualities, in that they deliver human thoughts.
This whole obsession with understanding the thing-in-itself, and now the self-in-itself, is a misunderstanding; all things and selves are representations. Thus the question is not whether we can differentiate between things, but how to react to impressions of things. If you see a ball flying toward you, you duck.
No point pondering what cannot be distinguished. As I said in my book, you cannot see, no matter how hard you squint, in darkness. Speaking of my book, I had better get to work on it; the whole project has become a catastrophic mess.