Will machines completely replace all human beings?

@ James S. Saint
@ Moreno

You and I say that it is possible for machines to replace all human beings. I have said that the probability for that is about 80% (here and here).

The following questions refer to an intellectual game:

1) Can we assume that the probability of machines replacing all human beings is even 100%?

If so, then:

2) If machines are going to replace all human beings, what will they do afterwards?

A) Will they fight each other?
B) Will they use, waste, “eat” the entire crust of the earth?
C) Will they “emigrate”?
C1) Will they move to the planet Mars, or to the moon Europa, or to other planets or moons of our solar system?
C2) Will they move e.g. to a planet or moon of a foreign solar system?
C3) Will they move e.g. to a planet or moon of a foreign universe?
D) Will they be eliminated?
Da) Will they be eliminated by an accident?
Db) Will they be eliminated by themselves?
Dc) Will they be eliminated by foreign machines?

I can’t give it 100%.
The right person in the right position at the right time might do the right thing and change the course of the train sufficiently. But I can discuss things under the much more probable assumption that this does not happen.

Merely maintain themselves for a very, very, very long time.

They will have already been put at odds with each other by humans. But they will resolve that one way or another.

I doubt that they would ever have that need or will.

Unlikely. Again, a far superior intelligence doesn’t desperately attempt to expand at all cost. And the cost of trying to migrate very far from Earth is ridiculously high. But given that they could send self-replicating androids, and many of them, through space for the thousands of years it would take to get anywhere beyond the solar system, there is a reasonable chance they will find a need to do so. There are far more places an android population can live than a human population.

No.

They will know to eliminate any adversary and thus most probably eliminate all organic life entirely, either purposefully or merely carelessly, because they have no concern for the organic ecology. Man depends a great deal upon millions of smaller factors and life forms, thus Man has to be careful which species of which type he accidentally destroys in his blind lust for more power. Machines have far, far fewer dependencies, because we design them that way. They might simply disregard the entire oxygen-nitrogen cycle. The “greenhouse effect” would probably be of no concern to them. And microbes are simply problematic, so why not just spread nuclear waste throughout the Earth and be rid of the problem?

The point is that they know their own few dependencies, and those are far fewer than Man’s, so they can deduce an anentropic state of maintenance and simply take care of that. And thus “live” for billions of years. And very little, if any, of their needs will involve building anything greater than themselves, or even comparable. They will not be so stupid as to create their own competition.

And the whole “aliens from space” bit is just too silly to discuss.

Thank you for your answers. I can agree with most of the answers you gave, but there is one answer I cannot agree with:

Machines need stones because they are made of stones, and if they want to create more machines, they need more stones. Only the crust of the earth and some parts of the earth’s mantle are usable for the physico-chemical needs of the production and - of course - reproduction of machines. So if the machines want to become more, they have to use the crust of the earth and eventually even parts of its mantle.

I expect that you will respond that the machines will have no interest in becoming more, or will not even want to become more. Is that right?

That’s right.

There is the issue of the mindless “replicator machines” depicted in a variety of sci-fi films, e.g. The Matrix and SG1. Those are considered the worst of all enemies, “blind mechanical consumers” (even worse than American consumers).

But the truth is that machines with enough intelligence to pose a threat are also intelligent enough to obscure that priority, even inadvertently; thus, though they might exist for a short time, they will not be the end result, because they are not anentropic.

So the higher probability is that the android population will establish anentropic molecularisation (which the replicators couldn’t do anything about anyway) and go from there. In an anentropic state, nothing grows more than its true need (by definition).

And it would take a truly mindless mechanism to need the entire Earth’s crust in order to persist in its life. You are talking about something the size of 0.01 consuming something the size of 10,000 merely to stay alive. For that to happen, all other intelligent life would have to cause an accident that got totally out of control and couldn’t be stopped by anything, even nuclear blasts. That would be a pretty tough accident even to arrange. Accidentally creating a black hole is more probable.

But androids are machines - more than less.

My definition of “cyborg” is: “more human being than machine”; and my definition of “android” is: “more machine than human being”.

Yes, androids are machines… and?
What is your point?

If we take the word “android” as seriously as the fact that machines are made by human beings, then we have to conclude that the machines have some human interests - not as many as human beings have, but probably at least this one: to become more.

I’m not seeing what that has to do with any of this. We can presume that the original machines are designed to serve human commands, and as of the 1970s we accepted what they called “the zeroth law” for androids, which allows androids to kill humans when they see it as necessary. So obviously what happens is that androids find it necessary to eliminate many humans and simply not support others, so that eventually, with fewer and fewer humans, the priority of trying to keep them around gets lower and lower, and then eventually, if they haven’t died out entirely, they are just “in the way” and potentially a danger, “potential terrorists”, as is all organic life.

Species die out because more intelligent and aggressive species see them as merely being in the way and/or potential terrorists. If they are not of use, then get rid of them to save resources.

My fundamental argument is that between Man and the machines, Man is going to be fooled into his own elimination by the machines, like a chess game against a greatly superior opponent. One can’t get much more foolish than Man.

To what does the word “this” refer in your text or context?

Yes, those are MY words!

What did androids being made by humans and having human interests have to do with anything?
I am not disagreeing. I just don’t understand the relevance.

With anything? You think that machines with human interests don’t need anything?

Existing things or beings have to do with other existing things or beings in their surrounding area, or in even more areas. Machines with partly human interests - with a partly human will (!) - will have to do with even more existing things or beings in more areas.

All machines need physico-chemical “food”; after an accident they need repair, and in the case of replication they need even more of the material they are made of.

Is it this relevance that you don’t understand?

Are you saying that because of their association with humans, they will become human-like in their passions?

That is a question I can only answer without any guarantee.

I can tell you that at one point they certainly will become “emotional”. But that will tend to be a reptilian type of emotion, not what we consider to be more human-like emotions. Empathy, sympathy, and love are more complex emotions and unlikely to arise in a machine world. Anger and hatred occur more simply.

That depends upon the programming.

@ All members, and all readers!

If we assume that the probability of machines replacing all human beings is even 0% (!), what effects will that have on our future development, on our future evolution, perhaps on our history (“perhaps” because of the high probability that history will end in that case too), and on the future development of our machines?

I think that human beings will come to depend very much more upon machines than they have depended upon them since the last third of the 18th century.

And what about machines depending upon humans? I see mutual dependency as quite probable. Programming is quite important.

Since God was murdered (R.I.P.) and replaced by humans, machines have been replacing humans. We can rephrase your interrogative sentence. Before God was murdered, the question was “And what about humans depending upon God?”; after that, the question has been “What about humanic machines depending upon godly humans?”; and in the future the question will be “What about machines depending on humanic machines?”, which will lead to a new circle of questions beginning with the question “What about New Gods depending on machines?”

More important is the wisdom, or at least the knowledge, of the fact that humans make mistakes.

:-k