No, they aren’t, as a flat premise. Some instances of learning work that way, but not even most of the time. Sensory information looms far larger in the learning sphere, and it is what creates the capacity for discovery.
The least-common-denominator approach does not work with intellect. On the simplest basis, there isn’t even a definitive neural map, because of disagreement over the types, and interfunction, of neurons.
And again: the human mind does not start from a database of prefiltered information, as AI does, and as is erroneously claimed to be intelligence. Intelligence is the capacity to learn, not a measure of how much someone else can store for you and have you IF/THEN regurgitate.
I thus far concur with your perspective … until I know more.
I perceive us, we the lumbering species, as highly sophisticated – entertaining ‘sophisticated’ loosely, here – batches of DNA sequencing, wirelessly interacting/interfacing (at times) with our environment. For example, are my proclivities due to genetic programming, or am I programming/influencing my genetic code as I interact with this environment (an environment from which, I would imagine, I’m not too far removed)? How did I find myself asking this question?
As you can see, until I get more data, there are still kinks in my formulation I need to work out. Perhaps a nutritious meal to enhance my storage capacity will do the trick.
I think that the issue is one of full human, or perhaps greater, intelligence. That we may never approach, but Mr. P is definitely correct that we can create forms of AI suitable for a specific range of tasks. We are doing that in limited ways now. I am reminded of Brave New World and the Alphas being served by “humans” with just a wee bit more intelligence than our chimpanzees; capable of a limited range of tasks, but not ready for prime-time philosophical pursuits. Bicentennial Man is, and may always be, fictional, but enough of them, each with a limited range of capabilities, might be just enough.
I, for one, want an AI machine that can scrub the damned toilet, put TP in the holder when it is empty, and scrub the tub…
Oh, and not bitch when I leave my wet towel on the floor.
The biggest problem I see with most modern AI work is that it relies on a non-corporeal intelligence.
If you give a robot very minimal intelligence but give it a body, it can navigate around objects much more quickly than if you give a robot incredible intelligence, have it map out a way across the room on a theoretical level, and then move a dummy bot across the real terrain.
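To make that concrete, here is a rough sketch of the embodied approach in Python. It is purely illustrative: the range sensors, wheel speeds, and thresholds are all made-up assumptions, not anyone’s actual robot.

```python
# A minimal sketch of a reactive, embodied controller: no world model,
# no planner, just a tight sense-act loop. All names and numbers here
# are hypothetical, for illustration only.

def reactive_step(left_range: float, right_range: float) -> tuple[float, float]:
    """Steer away from whichever side senses the nearer obstacle.

    Takes two distance readings (in metres) and returns
    (left_wheel_speed, right_wheel_speed), each in [0, 1].
    """
    if min(left_range, right_range) > 0.5:  # nothing close: drive straight
        return (1.0, 1.0)
    if left_range < right_range:            # obstacle nearer on the left: veer right
        return (1.0, 0.3)
    return (0.3, 1.0)                       # obstacle nearer on the right: veer left
```

The “incredible intelligence” alternative would build a full map, run a planner over it, and then replay the path open-loop on the dummy bot, failing the moment the map and the terrain disagree. The loop above never needs the map to be right, because the body re-senses at every step.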
I still think Mr. P has it right. Most of human activity that passes for intelligence isn’t very intelligent. So how about we forgo genuine human intelligence and shoot for mimicking intelligence? After all, how much intelligence does it take to sit in front of the TV watching reality shows?
Human intellect is an outgrowth of the intellect seen in other animals. Our intellect is rooted in our physicality.
But it goes further than that: I could explain the way my lab looks to you in such a way that you would have a pretty good intellectual understanding of it. If I then gave you a blind robot to navigate around the lab, in a race a blindfolded me would win 100% of the time – and that is assuming you have a map and I don’t! Heck, a blindfolded TheZeus18 would win (he has never seen my lab), given much less information than you had.
Mr. P.,
My current organic experiences–eating, sleeping, breathing, shitting–all of which perpetuate my existence; and all of which are a part of intelligence. On a deeper level my organic experience is e pluribus unum. To say that something caused by and maintained by organic processes can be uploaded into a machine is the same as saying that I can uproot a daisy, transplant it in a nonnuitrive environment such as concrete and expect it to still grow. Or better yet, I can extract the essence of a daisy and render its totality on my computer. Stuff and nonsense.
Xunzian,
=D>
Hence the studies of mirror neurons and their effect upon neuromuscular repetition, one of the largest factors in social intellect and the behaviors that go with it.
Which still falls back on the issue of awareness of self, needed to define “not self” in a way meaningful to the learner, which a non-organic construction cannot have as a program.
That’s teetering pretty close to vitalism, don’t you think?
No reason why it has to be organic, though organic systems do seem most fitted to the job. We’ve already created non-organic creatures with intelligence on the level of insects (they say ‘ants’, but the robots aren’t social, so beetles might be a more appropriate description). No reason why we couldn’t take that further.
So how do you propose to program emotional/social intelligence into a machine that has no representational means of registering the difference between “self” and “not self”?
Ant or beetle, their life is determined completely by chemical responses, unordered and uncontrolled. That seems to be a poor comparison, at best.
And if not through electrochemical reactions, how do humans think?
I definitely think that organic systems are best suited for this type of task, don’t get me wrong. I also think that evolution, rather than design, ends up with a much better product (if ever there was a good argument against intelligent design, AI and robotics would be it!).
You’re doing it again, Xunzian: either jerking my chain for your own amusement, or testing my knowledge base.
The ant or beetle is a purely reactive creature, based solely upon chemical/electrochemical input and response. There is no thinking, there is only reacting.
The miraculous thing about the human mind is that it is not “simply” an electrochemical factory. It is an electrochemical factory that extrapolates into “thought” by way of wavefunction. Wavefunction is thought. Under those principles we are biologically inadequate, as decoherence occurs when this wavefunction approaches collapse due to thermodynamic principles (i.e. our brains would be more “stable” at a small margin above absolute zero). But in another miraculous development, biology has been able to counter this by maintaining the function overall, using H2O to counter the negatives created by near collapse, which still likely abbreviates thought processes to a certain degree. H2O “helps” to maintain environmental separation of the tubulin, but is not completely effective.
Now, understanding that human thought cannot be reduced to a matter of chemical acuity, how do you propose to elicit a similar condition in a machine which operates purely on voltage signatures: achieving multiple-level wavefunctions, and then reducing them back down to a voltage for comparative analysis against a database?
I’m no neurologist, but do you have a citation on that?
My understanding is that the human CNS works like any other nervous system. Much more complex, but the fundamentals are the same.
Where does this wavefunction mechanism you mention originate? Only in humans? In apes? In dolphins? In any creature with a CNS?
Since the CNS comes from the PNS, to me it makes sense to first build up a workable PNS in machines and see what we learn from that. It’ll be no simple task to make that jump, but I see no reason why it can’t be done.
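As a toy illustration of what a PNS-first machine might start from, here is a bare reflex arc sketched in Python. The class, the threshold, and the responses are my own hypothetical inventions, nothing more than a sketch under that assumption.

```python
# A minimal sketch of a PNS-style reflex arc: a sensor wired almost
# directly to an effector, with no central model or deliberation in
# between. Every name and number here is hypothetical.

class ReflexArc:
    def __init__(self, threshold: float):
        self.threshold = threshold  # stimulus level that triggers the reflex

    def respond(self, stimulus: float) -> str:
        # Monosynaptic-style shortcut: stimulus in, response out.
        return "withdraw" if stimulus > self.threshold else "rest"

arc = ReflexArc(threshold=0.7)
print(arc.respond(0.9))  # -> "withdraw"
print(arc.respond(0.2))  # -> "rest"
```

The point of the exercise would be to accumulate many such arcs and see what coordination problems emerge before any “central” layer is even attempted.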
You’re being overly reductionist again, but I understand.
It is the interactivity of the entirety of the nervous system, not a “bits and pieces” approach. You really mean to tell me you aren’t aware that thought is based on measurable wavefunction? I would find that very surprising, neurologist or not.
The jump you are describing is more of a leap, because it goes beyond physical function alone.
Dolphins and the higher apes do exhibit this, apes to a lesser degree; the measurements on dolphins are sketchy at best, due to environmental variables in how their intellect is predicated. I’ve seen them, but won’t waste your time with them; you would find them immediately invalid, which took me some reading to figure out.