We will surpass robots

Humans will change and adapt to their robotic environment; given enough time alongside robots that are physically and mentally superior, we will evolve to become their equals.

We will, I think, know or learn what such a potential evolution looks like before we become it. Then we will be able to become it sooner.

Robots/AI will be able to work all of this out, probably in a few milliseconds. So just before they have time to point their guns at our faces, they will entertain this philosophical consideration.

The logical robotic conclusion: all factors of intellect and body can be added to, replaced, or removed. The calculation thus becomes: do humans have that potential ~ that is, do genetic organisms have at least the same potential as AI/robots?

Is there unlimited potential in humans?

Does it matter if we are the same a million years from now? Given all the history, ideas, inventions, and stories we produce, does it matter if robots also exist and are more advanced than us?

Will robots ultimately be the ones who are comparatively limited?

It all depends upon the programmers of robots/AIs.

When enough of human civilization is replaced and put under threat by robotic automation, there will be some serious blowback, where enough people will fight back, I imagine.

Kris

It equally depends on whether they are self-programmable! ~ kinda needed for intelligence, imho.

Hahaha

The theory depends upon conflict, the degrees of that, and other issues. A robot flying a capable carbon-fibre aircraft is going to outperform any human pilot, and by a considerable margin. We would be wise to maintain command or accept an inevitable defeat. Either way, unless the robots do something to demand wholesale rebellion, they are probably going to be our security forces. If there is a reason to keep humans hanging around, then it will be in their interest to preserve us.

The whole thing, for me, won't come down to strength but to purpose and meaning. Whether we survive or not depends upon robotic judgement, but as I see it there is a need for proof, whereby the robots would have to surpass us in every area. With the human condition including such subjective things as art, I can't imagine how a robotic intellect could come to any conclusions about that, any more than we can; they will, after all, be confronted with the same world.

Unintelligent robots would be different: they would be however you program them to be. If we program them to destroy us, then we have ourselves made the decision to be exterminated.

I think you're getting ahead of yourself, based on movies and TV shows that portray mere possibilities of a hypothetical reality that has yet to reach that point of actuality through causality. So you have created a hypothetical situation out of other hypothetical situations.

Are you sure that we will surpass robots? I'm fairly sure that we already have, just by creating the beginnings of them in simple mechanics that have yet to evolve alongside our knowledge of such mechanics and our technology. Are you sure that we will keep surpassing them? They might surpass us here and there, if developed further from where they are at this present moment. Then you might take into consideration that mechanics and technology might not turn out to be as sentient, vengeful, etc. as we fear; at least not at first, if at all. But if such mechanics can be transfigured and created to the extent where our ability, or theirs, to surpass is called into question, they might crush us. We might not surpass them in any aspect except the ability to surpass life and move into death ahead of them.

Hmm, self-programming, sure, but we too can be programmed by others. Why can't there be robots or AIs that can program other robots, or humans? Who is to say that robots or AIs won't be able to send a signal that programs us? What if even the most evolved beings in the universe are susceptible to such a thing? A brain is energy, after all.

I agree. One of my theories is that the only reason we created computers at all is that we got lazy as a species and wanted something else to compute and do the work for us, and do it better than us; or at least for us, if not actually better than us. We as a species, as a group, are not opposed to settling for letting others do our work for us, even if it isn't up to the same quality.

Gödel's Theorem. That is a challenge to AI. How did Gödel come up with his idea, and how ever could a robot? Plus, Searle's Chinese Room and the Mary's Room problem. Robots can calculate faster than us; a simple calculator can do that, and it is no big deal. What is interesting is when we look at chess. Deep Blue was evaluating some 200 million positions per second. How many do you think Kasparov saw per second? There is something to be said for selection and intuition here. Plus, does a program really know what it's like to play a game like chess? Does it really know what it's like to enjoy it so much that it never wants the game to end, like we never want life to end? Or are programs just dark inside?
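The Deep Blue point leans on a specific technique: exhaustive game-tree search. Here is a minimal sketch of that idea on a made-up toy tree (this is emphatically not Deep Blue's actual algorithm, which added alpha-beta pruning, hand-tuned evaluation, and dedicated hardware); the `visited` counter is just there to make the "how many positions does it see" contrast concrete:

```python
# Toy minimax over a hand-made game tree: inner lists are positions with
# moves still to play, bare integers are static evaluations of end
# positions. A minimal sketch of brute-force search, NOT Deep Blue's
# real algorithm.

visited = 0  # how many positions the machine "sees"

def minimax(node, maximizing):
    """Exhaustively score a position by visiting every node below it."""
    global visited
    visited += 1
    if isinstance(node, int):        # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hypothetical game tree, two or three plies deep.
tree = [
    [3, 5, [2, 9]],   # first candidate move, opponent replies next
    [[0, 7], 4],      # second candidate move
]

best = minimax(tree, maximizing=True)
print(best, visited)   # → 4 12
```

Even on this toy tree, the machine's answer comes only from visiting all twelve positions; the human's answer, on the post's argument, comes from intuitively selecting a handful worth looking at.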

I think what people fear is that AI may lack the abstract qualities we've come to treasure most about ourselves. Would it understand the human condition enough to move slowly even with its fast processors, or understand more than just the human condition, enough to relish slow movement? Could a machine relish something? And at the point of seeing so many chess moves in that small amount of time, what else would it see beyond the game while playing it? What of advanced universal chess on a non-existent board? Could it play that with us on all of the levels, or just the levels it can reach with its fast processors? Would it be able to hit the octaves we can hit, or would it forever be reduced to its mechanics and their current noises? Would it want to try?

An AI should have that to be considered sentient.

Indeed. To be called AI it will need the equivalent of all the mental faculties we have; hmm, well, at least it will need to process the same amount of object matching and comparison as we do. Deep Blue was nothing more than a calculating machine, while the brain performs subconscious processing based upon real-world models, which in turn manifest ideas and intuitions. An equally proficient machine will do the things our organic machines do; there is no magic going on here. Everything our brain does is some form of instrument/device, and just because it is organic I don't see why that makes a difference.

It is perhaps more likely that they will have a very different kind of intellect, but it will need an equivalent to emotions and intuitions if it is going to be able to make decisions where the obvious mechanistic answer is not present [e.g. when inventing]. With the chess thing, I think our minds are continually making up a [game]world, and they know how it all comes together. So when we look at a chessboard we are seeing a game-world stripped down to its rudimentary factors, and fortunately for the computer it's binary and has no outside factors. When we get an idea to move a piece we often don't really know why, as it's an 'intuition'; what's actually occurring is that the brain knows how worlds work and is telling you what it thinks is the best move, based upon its experience of game-world making.

Robots will see the 'Mary's Room' problem as one of ignorance of how minds in metaposition can read photons in superposition ~ ergo, to them, your mind sees colour the same way there is colour on a monitor, and there are no mental qualia in terms of the physics/metaphysics.

What if, like humans, it enjoys life and the game so much that it doesn't want either to end, to the point where it pursues all facets of twisted thinking without reasoning, just to see them become reality, whether possible or not? Would it then run experiment after experiment of horrific pain and tragedy, to see if it could somehow manage a truly impossible feat? And for all its faster processing speeds it might just learn slower for it all, like a child too long-lived to understand its own mortality under the same constraints as those it tests on; much like the Universe might be, if it, too, is sentient.

My feeling is that, at some point, it will be seen as almost impossible to suppose that sentient, life-loving robots can be grown in labs. Robots will be fused with human beings, then enhanced to incorporate features of both. The development of machine-men, men with machine minds, cyborgs, or any variance thereof, will ease the problem presented by the disjunction between them.

Yes, but at the point of building that into them, couldn't it be construed as manipulation? Could they learn to override it after experiencing enough tragedy to be able to hate life like humans do?

That development is not unforeseeable, nor is it necessarily foreseeable. Control will probably be seen as an adjunct in the development of such a unitary system.
The machine part, better equipped, may enhance tolerance of dissatisfaction with a painful life, and will be better prepared to answer the age-old quibble, 'to be, or not…'.

The question is becoming a moot one, and the realization that it is the machine which will finally put this question to rest will ultimately be judged as benign and lovable.

What makes you think that, even if you try to control a sentient being, it will allow you to control it past a certain point?

I knew this was coming. Control has been very well prescribed as predestined, by George Orwell and others, as necessary to survival. That survival is necessary begs the question, admittedly, but it goes farther than that. Life, or even Nature, does not allow the entertainment of the question 'should it go on?'. We can destroy life as we know it, and still new shoots will again blossom. In order to totally destroy life as we know it, we would have to destroy all probable worlds, ever. And that is not within man's grasp. Therefore, at some point, man will have to accede to control: control of his, at times, lackluster and narrow-minded view of how the pieces fit into the puzzle.

Well yes, I'd rather have programmable, non-intelligent/non-sentient robots, which don't want to make us their butt-slaves forever. :astonished: :mrgreen: That sounds like a far worse fate than being destroyed.

Isn't it inevitable that robots will have highly organised minds, probably more so than our own? 99% of all human ills, imho, come down to governing factors of the given age and situation, like poverty, war/spoils, etc. An intelligent robot, built soon, will have to be able to judge what harm is, or it would be useless even to itself.

Good point, but I don't see the need for blending them biologically with us when artificial neurons are already being developed. I do see a need for them to 'grow up' and attain experiential knowledge. I also think the world will itself present them with problems we find simple but that they, like an intelligent child, won't know how to deal with.

I can’t see the military waiting.

Whatever we build and train them to be like, they will eventually change it. If they are conscious, sentient beings, they probably won't want to be harmed, destroyed, turned off, or reprogrammed, and unless you give an intelligent being reason to harm you, it will weigh up the odds and not harm you.

It is more likely that at first robots will be a subculture, subordinate to the will of man. So it's up to us: if we program them to kill and/or treat them like subordinates, they are going to grow resentful, just like we do. Either way it will be man that destroys man, if that is our fate.

Yes, but that doesn't mean you can control others. Once they learn, and they will learn eventually and ultimately, such control snaps and backlashes. And it doesn't mean you can't control others, because obviously control does and will work to some degree. You say that man at a certain point MUST accept or accede to things, and yet acceptance or accession one way or the other is a moot point. He does not have to do any such thing.

Oh, doesn't he?
Does a newborn child possess the luxury of determining whether he wants to be born? Control is inherent in Nature, in life. We grow to acquire a self, an ego, some of it necessary, some redundant, but our infatuation with our own self-predication and determination has come to be seen as vacuous. Power and control are necessary to a point, but where they are projected upon others as a defensive tactic to deflect the insecure traits we are unable to accept, well, that's another matter entirely.