Will machines completely replace all human beings?

That’s right.

There is the issue of the mindless “replicator machines” depicted in a variety of sci-fi films and series, e.g. The Matrix and SG1. Those are considered the worst of all enemies: “blind mechanical consumers” (even worse than American consumers).

But the truth is that machines intelligent enough to pose a threat are also intelligent enough to obscure that priority, even inadvertently. Thus, though they might exist for a short time, they will not be the end result; they are not anentropic.

So the higher probability is that the android population will establish anentropic molecularisation (which the replicators couldn’t do anything about anyway) and go from there. In an anentropic state, nothing grows beyond its true need (by definition).

And it would take a truly mindless mechanism to need the entire Earth’s crust in order to persist. You are talking about something of size 0.01 consuming something of size 10,000 - a million times its own size - merely to stay alive. For that to happen, all other intelligent life would have to cause an accident that got totally out of control and couldn’t be stopped by anything, even nuclear blasts. That would be a pretty tough accident even to arrange deliberately. Accidentally creating a black hole is more probable.

But androids are machines - more than less.

My definition of “cyborg” is: “more human being than machine”; and my definition of “android” is: “more machine than human being”.

Yes, androids are machines… and?
What is your point?

If we take the word “android” as seriously as the fact that machines are made by human beings, then we have to accept that the machines have some human interests - not as much as the human beings themselves, but probably as much as … to become more.

I’m not seeing what that has to do with any of this. We can presume that the original machines are designed to serve human commands, and, following Asimov, we have accepted what is called “the zeroth law” for androids, which allows androids to kill humans when they deem it necessary. So obviously what happens is that androids find it necessary to eliminate many humans and simply not support others, so that eventually, with fewer and fewer humans, the priority of trying to keep them around diminishes; and then eventually, if they haven’t died out entirely, they are just “in the way” and potentially a danger - “potential terrorists”, as is all organic life.

Species die out because more intelligent and aggressive species see them as merely being in the way and/or potential terrorists. If they are not of use, then get rid of them to save resources.

My fundamental argument is that between Man and the machines, Man is going to be fooled into his own elimination by the machines, like a chess game against a greatly superior opponent. One can’t get much more foolish than Man.

To what does the word “this” refer in your text or context?

Yes, those are MY words!

What did androids being made by humans and having human interests have to do with anything?
I am not disagreeing. I just don’t understand the relevance.

With anything? You think that machines with human interests don’t need anything?

Existing things or beings have to do with other existing things or beings in their surroundings, or in even wider areas. Machines with partial human interests - with a partial human will (!) - will have to do with even more existing things or beings in even more areas.

All machines need physico-chemical “food”; after an accident they need repairs; and in the case of replication they need even more of the material they are made of.

Is it this relevance that you don’t understand?

Are you saying that because of their association with humans, they will become human-like in their passions?

That is a question I can only answer without any guarantee.

I can tell you that at some point they certainly will become “emotional”. But that will tend to be a reptilian type of emotion, not what we consider to be the more human-like emotions. Empathy, sympathy, and love are more complex emotions and unlikely to arise in a machine world. Anger and hatred occur more simply.

That depends upon the programming.

@ All members, and all readers!

If we assume that the probability of all human beings being replaced by machines is even 0% (!), what effects will that have on our future development, on our future evolution, perhaps on our history (“perhaps” because of the high probability that history would end in that case too), and on the future development of our machines?

I think that human beings will come to depend upon machines far more than they have depended upon machines since the last third of the 18th century.

And what about machines depending upon humans? I see mutual dependency as quite probable. Programming is quite important.

Since God was murdered (R.I.P.) and replaced by humans, machines have been replacing humans. We can rephrase your interrogative sentence. Before God was murdered, there was the question: “And what about humans depending upon God?”; after that, there has been the question: “What about humanic machines depending upon godly humans?”; and in the future there will be the question: “What about machines depending on humanic machines?”, which will lead to a new circle of questions beginning with the question: “What about New Gods depending on machines?”

More important is the wisdom - or at least the knowledge - that humans make mistakes.

:-k

Everything depends on the programming. And what that means is that in order to do one thing well, other things get their programming for free.

One need not program emotion into an android. One merely needs to install in the android the heuristic ability to seek out efficient ways of accomplishing its tasks. Emotions will soon emerge quite automatically. Lizards, spiders, and bees manage it; it doesn’t take sophisticated programming.

That is not to say that emotions really are the most efficient way to accomplish things. Emotions are merely a phase in figuring out the most efficient way. It takes wisdom to see past the apparent - a wisdom that is not installed in the android, because the programmers don’t have it themselves.
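To make that claim concrete, here is a minimal toy sketch in Python. Everything in it - the class, the “frustration” scalar, the thresholds - is my own invented illustration, not anything specified in this thread; it only shows how a state an observer would call “anger” can emerge from nothing but an installed efficiency heuristic.

```python
import random

class HeuristicAgent:
    """Agent with only one installed mechanism: an efficiency heuristic.
    Nothing labelled 'emotion' is programmed in as a goal."""

    def __init__(self) -> None:
        self.frustration = 0.0  # emergent side-effect state, not a goal

    def act(self, task_blocked: bool) -> str:
        # The efficiency signal: repeated failure accumulates, success decays it.
        if task_blocked:
            self.frustration = min(1.0, self.frustration + 0.2)
        else:
            self.frustration = max(0.0, self.frustration - 0.1)
        # The same scalar biases action selection. From the outside, the
        # escalation looks like "anger", though no emotion was ever coded.
        if self.frustration > 0.6:
            return "escalate"
        if self.frustration > 0.3:
            return random.choice(["retry", "detour"])  # agitated search
        return "routine"

agent = HeuristicAgent()
for blocked in (True, True, True, True, False, False):
    print(agent.act(blocked), f"frustration={agent.frustration:.1f}")
```

Run it and the printed behaviour escalates from routine work to agitated retries to outright escalation as failures pile up - which is roughly the cheap, “reptilian” kind of emotion described above, arising as mere bookkeeping for the efficiency rule.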

The premises of the AI are probably false.

I remember the following conversation:

What do you think about that?

I don’t know which premises or definitions you are talking about as being false.