Will machines completely replace all human beings?

That thought is what inspires the lust for extreme superiority in hidden technologies.

I doubt they will replace us; more likely they will succeed us. If/when the singularity happens, I think it will either (1) take over, (2) destroy us, (3) help us, or (4) leave us. This of course depends on the possibility of a machine attaining conscious thought at or beyond that of a human being, and on our ultimate ability to create such a machine. I don’t necessarily have faith in the Turing test, since it deals only with conversation. A truly intelligent machine must be able to learn and create like a human. Creativity and innovation would be the only signs of actual consciousness that I would accept as definitive.

That’s an important question, and it is hidden in my topic, because it is possible that machines will outlast (“outlive”, “survive”?) all human beings and other beings. And it’s known that “androids have sufficient cause and ability to dispense with all organic life completely”, as you said. Machines don’t need any biological material to remain machines. But they do need physico-chemical material. Maybe the machines will annihilate the whole crust of the earth.

Yes. Especially “In-A-Gadda-Da-Vida” (Iron Butterfly). Great.

Yes.

Yep, can’t argue with that at all.

And when life in general is threatened by such technologies, a qualitative change will occur in the way thought is processed. It has actually begun, but we have a long way to go.

And what do you see after the take-over by the cyborgs?
A take-over by the androids?
And after the take-over by the androids?
AND AFTER ALL?

Mechanisms need no purpose in order to continue for a very long time; thousands, if not millions of years. They need merely opportunity. People design them that way. People, needing purpose in order to overcome natural entropy, create machines needing no purpose but to defeat entropy. Eventually people and their need for purpose become the entropy that the machine has been designed to eliminate.

A totally man-made machine world, imbued with self-interest, “life”, will continue for millions, if not billions of years until there is no longer opportunity. Energy, materials, and purpose aren’t an issue, merely opportunity.

Those who design your societies think of people and laws as mere mechanisms, but they think of people as combustion fuel, a gasoline engine rather than a magnetic motor. Run as a magnetic motor, homo sapiens societies would also last billions of years without death or suffering looming over every generation.

An added interesting thought: since Man can currently absorb energy from nothing but empty space, and is designing machines to be 100 times more intelligent and capable than himself, and those machines have no need for purpose, those machines, becoming very efficient at absorbing energy from space and having no concern for consequence, have every reason to become what we call a “black hole in space”, doing nothing but absorbing energy.

So when they look out in space and see a black hole, thinking of it as a remnant of a prior event, perhaps that prior event was once a planet with life on it much the same as Man. Perhaps he is seeing the future state of passion-guided organic life, a naturally occurring eventual state in the universe - nothing but another “black hole in space”, his own future.

Of course, that is assuming he doesn’t accidentally make a black hole of himself before that point by blindly corrupting himself.

Seriously cool thought

It’s hard for me to imagine intelligence without purpose. But even so, what reasoning do you attribute to machines 100 times more intelligent than us that absorb energy endlessly from the universe around them, if they have no plan or care to use it for a purpose?

A) I happen to know far, far more about what computers can do than you believe possible.
B) For what purpose does an electron orbit a nucleus, forming an atom that lasts for billions of years?
C) Intelligence is merely a mechanism, a more sophisticated form of an electron orbiting an atom.

And “In a Gadda Da Vida” is really “Inna Gadda Da Vida”, basically meaning “subsumed to the core with the spirit of life”. Doug Ingle didn’t repeatedly mispronounce it.

If we were highly intelligent, we would use them to create colonies in space and the ocean, and also give sentient androids citizenship.

I don’t suppose you have any rationale behind that…?

If we created a new race of homo sapiens, perhaps purple, what do you think would happen?
…look what happened with homosexuals.

I know, James. Hence my question in the original post (OP) of this thread. And hence my question or statement about “surviving” in my next-to-last post and in my last post. People design and rationalise their own extinction, their own death!

Yes, …, if there will be no wars etc. …

And concerning my question in the original post (OP) and my question or statement about “surviving” in my next-to-last post and in my last post: that is also assuming there will be no human errors (for example, creating machines with a “self-will”), no wars, no accidents, and so on.

Will machines enslave human beings?
Will machines bring the death of all human beings?
Or will the human beings stop creating machines?
Which will exist longer: human beings or machines?

With the utmost probability the machines will “win”.

You don’t think a new homo sapiens will be created?
I think the more we explore genetics, the more probable it becomes. They can already tweak genes in the fetuses of animals, including humans. I see it as just a matter of time.
Giving sentient, intelligent beings such status will only benefit everyone once the crap is over. Every type of being will have limits and will need help; cooperation is inevitable.

Since 1789 occidental people have tried to create the “new man” (“new human”, “new homo sapiens”). First this “new man” had to be a nationalist (“bourgeois”), then this “new man” had to be a communist, and now this “new man” has to be a globalist.

And? Nothing has changed since 1789 - except that homo sapiens has been changing ever more in the opposite direction. So in the end homo sapiens will probably become a monkey - fortunately or unfortunately. Or in the end homo sapiens will perhaps become a cyborg (behaving like a monkey) and/or will die out, become extinct - fortunately or unfortunately.

This is also a monster science/technology, a science of Frankenstein & Co. Probably they will also create this “new old monkey” (see above).

They can cure some genetic problems in utero. In certain countries there are limits on experiments; in other countries there are few to no limits on human and animal experimentation. With enough money, corporations have set up in such countries. The USA and other countries look the other way. Profit and control are why. Science fiction is generally based on science. Scientists say, “Oh cool! I bet we can actually do that!” And so we have computers, satellites, etc. And geneticists have a lot of curiosity and drive. Time is all it will take.

That’s a bit too optimistic, because such an argument always includes the premise that people are “good people”, but that premise is false: people are good AND bad (evil).

Are you writing, Kriswest?

Oh, that was not optimistic; the word “crap” was just being polite. Change that to the less polite: all hell breaking loose.
Humans are curious about things, so the more curious find ways to satisfy their curiosity. Others will take offense at new ways and things. Still others will follow the new, create differences, and enhance the new.
Science already offends the religious and fearful of change, yet science proceeds.
I couldn’t write this shit if I tried. I just watch and learn. We humans are awesome but, very fucked up.

I’m not opposed to someone knowing far, far more than me about such a curious subject. Cool. Does an atom cohere from intelligence? If not, at what degree of sophistication would you say a mechanism becomes intelligent? Another curiosity: how do you attribute reasoning or rationale to machines that are 100 times more intelligent than you?