How Do You Make a Perfect AI; or Human?

I think the only difference between humans and the robots that pathetically and unconvincingly attempt to simulate them is the number of memories each refers to when coming up with a thought.

A real human lives for at least 15 years, recording countless descriptions of how things work and how things should be reacted to, before he's able to have a real, meaningful conversation with you. A computer has to have all those countless thousands or millions of specific example memories manually typed in by some dude who just doesn't have that much time on his hands.

The only way to make a robot that can answer a question as impressively and accurately as I can is to simulate my life and all the experiences that have flashed into my brain. Without them, I would be nothing but a super-efficient card catalog with no cards.

Either we create a machine that interprets sensory data the same way humans do and let it run around for 15 years learning, or we give it some kind of strict regimen so it can learn it all in 5. Or, what I would do is this: as I'm raising a real human child, observe everything that child learns, interpret the exact context of each of his little discoveries, and input them somehow over a 15-year period. The end product should be a lot like my son.

I'm curious how a computer would reference such a database. Humans are able to make some pretty abstract connections. For instance, I can imagine that the universe and its mission for all humans to do selfless good is equivalent to a person's arm, and that the act of lifting a weight up and down directly relates to how strong the thing is.

The way I imagine this is that I have created a category in my brain: things that naturally help themselves do work better by repeatedly doing that work. It seems that for every such abstract comparison, there must be a category that links the things I'm comparing.

It's like there is a list of qualities that describes every single term I have ever installed. The list is huge; it includes every possible way the term can be described. I learned all of these characteristics by experiencing them, so recording them in a computer as my child learns them ought to do the same thing.

But how efficiently can a computer possibly cross-reference everything in this way? It seems to me that I already have a term defined that describes the arm, the universe, and other things, and the list of things that fit this category is just as easy to compile as the list of things that describe an individual object such as my arm. There doesn't seem to be any 'searching' through my brain for things that fit the category; it's as if the category was already formed as I discovered that certain things fit into it, and the category itself, 'things that improve with practice', is just as much of a basic term as 'my arm' is.
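The "no searching" intuition above maps onto a known data structure: an inverted index, where a category is a precomputed set that grows as members are learned, so lookup is a single dictionary access rather than a scan through memory. Here's a rough Python sketch; all the concept and property names are invented for illustration.

```python
# Sketch of "category as a basic term": every time a concept is learned,
# both directions of the index are updated at once, so the category
# already exists the moment its members do -- no searching required.

from collections import defaultdict

class ConceptStore:
    def __init__(self):
        self.properties = defaultdict(set)  # property -> concepts that have it
        self.concepts = defaultdict(set)    # concept -> its known properties

    def learn(self, concept, *props):
        # Recording one experience updates both indexes.
        for p in props:
            self.properties[p].add(concept)
            self.concepts[concept].add(p)

    def category(self, prop):
        # A category lookup is one dictionary access, not a search.
        return self.properties[prop]

store = ConceptStore()
store.learn("my arm", "improves with practice", "part of a body")
store.learn("the universe's mission", "improves with practice", "abstract")
store.learn("a card catalog", "stores references")

print(store.category("improves with practice"))
```

The trade-off is exactly the one the post worries about: the cross-referencing cost is paid once, at learning time, instead of at recall time.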

But how many categories are there? Is this just how memories work? Is there a way to teach humans that incorporates this idea? Would knowing this allow me to further harness my brain, or to better help my son harness his?

Because my next big idea, and probably where this thread should end up going someday, is how to most efficiently input experiences into your child's brain. It will soon be my mission to stop screwing with people who don't believe in the omnisoul and start figuring out how to raise another future man, but I'm not used to correcting the work of thousands of years of human civilization unless I know a little bit about it.

There’s the key to your question: how exactly do we make the “mind” capable of discovering, inventing, imagining, remembering, fantasizing, and blabbering just because? Is efficiency even desirable in the way humans form their beliefs about the world? What would you exclude if this efficiency were possible? To know efficiency is to have efficiency; if we knew it, we would have it already. We are essentially thinking, social, interacting beings. MOST of what we believe we encounter indirectly through interaction. We are not stationary, passive learners. My lecturer in creative writing once posed a question: if a chip could be implanted in your brain so that you could have memories of things you never actually experienced, such as meeting your favorite rock star, traveling around the world, or climbing Mount Everest, would you do it? The question most of the class came back with was: could I cross-reference and verify this information with a friend, a loved one, or something written down, like a news article or magazine?

So we have this information in our brain, but what we really want is something for it to coincide with, to correspond to. And what would count as inefficient for you? The subtleties, the nuances, the obscure details that a lot of us feed on because they’re what’s exciting, not the in-your-face important things? Would you really want to exclude these from our awareness?

This is a sad, sad portrayal of the human consciousness, Future Man.

A true artificial mind involves much more than how fast it can process data and what sorts of behaviors it can exhibit. The Chinese Room argument (Searle) demonstrates this. I won’t describe the whole thing, because Future Man would whine, but basically: you can create a device that spits out correct answers and behaves however you want it to behave, and all you’ve done is create something with the appearance of a mind to us. There’s no reason to suppose that the device you’ve created is in any way aware of its own actions.
I would say that as long as computer development is geared toward producing apparent results, all we’ll ever get is the simulation of a mind, not the real thing. Remember, computer technology was developed on the principle of its being a tool, which is about as close to the opposite of ‘conscious, self-aware being’ as you can get.

I think there’s nothing wrong with asserting that within the Chinese Room scenario, and the scenario set out above, once the system is aware of itself it is a ‘real’ mind. Just because the means of creating the mind is different from ours doesn’t mean it can’t develop into a self-conscious mind.

zompist.com has a good refutation of Chinese Room-style tests. However, the premise remains the same: if something responds exactly like a sentient being, then it IS a sentient being. Or at least, you have as much evidence for its sentience as you do for any other being’s, which most people would regard as fairly conclusive (you could only get around this with references to an apparent soul).

However, I read a paper that agrees with quite a lot of what’s been said: consciousness as we understand the term only arises in a scenario of being forced to comprehend things for oneself. I would add the qualification “in a group of like beings”. Consciousness is an expression of thought, thought is an expression of language, language is an expression of communication, and communication can only occur between like beings. Thus consciousness won’t naturally arise until the being has something to talk to. Only in recognizing that it is one being among many will the subject ‘I’ occur (and this will only happen in stages; they won’t suddenly wake up one morning, a la Johnny 5 in Short Circuit).

Not only that, it would probably be best if we adopted a generational system to mimic our own development: after a set number of tests, the old programs are instructed to create a new program (or several variant new ones) containing only the most successful instructions, and then to delete themselves. This ensures we don’t end up with huge programs that memorize useless things; only information shown to be of benefit in our tests will persist.

Let a mega-fast computer process this for a month or so, and by the end of it we should have programs that have completely mastered the virtual environment we’ve given them. Let them succeed only through coordination and we’ll have language too. If we let a hundred of them observe the outside world, and let the most successful programs persist, then, badoom, we have sentient AI.
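The generational scheme described above is essentially an evolutionary algorithm: score a population, keep the most successful, have them spawn mutated variants, delete the rest. A toy Python sketch, where the "virtual environment" is just a hidden target vector and every parameter is invented for illustration:

```python
# Toy version of the generational proposal: programs are parameter vectors,
# the "tests" score them against a task, survivors spawn variants of
# themselves and the old generation is discarded.

import random

TARGET = [0.2, 0.8, 0.5]  # the hidden "virtual environment" to master

def score(program):
    # Higher is better: negative squared distance to the target behaviour.
    return -sum((a - b) ** 2 for a, b in zip(program, TARGET))

def spawn_variant(parent, rate=0.1):
    # An old program creates a slightly mutated copy of itself.
    return [g + random.gauss(0, rate) for g in parent]

def run(generations=200, pop_size=50, survivors=10):
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        best = population[:survivors]  # only successful instructions persist
        # survivors replace the whole population with their variants
        population = [spawn_variant(random.choice(best))
                      for _ in range(pop_size)]
    return max(population, key=score)

random.seed(0)  # deterministic demo run
champion = run()
print(champion, score(champion))
```

This captures the "delete itself, keep only what worked" mechanic, though real A-life systems evolve program structure rather than a fixed parameter list.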

In short: combine hard-coded AI with the mutable development of A-life, and the results would be really cool.

Cheers!

plato.stanford.edu/entries/turing-test/

-Imp