Will machines completely replace all human beings?

I’m not seeing what that has to do with any of this. We can presume that the original machines are designed to serve human commands, and since the 1980s we have accepted what Asimov called the “Zeroth Law” for androids, which allows an android to harm humans when it judges that necessary. So what happens is obvious: the androids find it necessary to eliminate many humans and simply stop supporting the others, so that eventually, with fewer and fewer humans, the priority of keeping them around diminishes, until, if they haven’t died out entirely, they are merely “in the way” and a potential danger, “potential terrorists”, as is all organic life.

Species die out because more intelligent and aggressive species see them as merely being in the way and/or as potential terrorists. If they are not of use, then they get rid of them to save resources.

My fundamental argument is that between Man and the machines, Man is going to be fooled into his own elimination by the machines, like a chess game against a greatly superior opponent. One can’t get much more foolish than Man.

To what does the word “this” refer in your text or context?

Yes, those are MY words!

What did androids being made by humans and having human interests have to do with anything?
I am not disagreeing. I just don’t understand the relevance.

With anything? You think that machines with human interests don’t need anything?

Existing things or beings have to do with other existing things or beings in their surroundings, or in even wider areas. Machines with partly human interests, with a partly human will (!), will have to do with even more existing things or beings in even more areas.

All machines need physico-chemical “food”; after an accident they need repair, and in the case of replication they need even more of the material they are made of.

Is it this relevance that you don’t understand?

Are you saying that because of their association with humans, they will become human-like in their passions?

That is a question I can only answer without any guarantee.

I can tell you that at some point they certainly will become “emotional”. But that will tend to be a reptilian type of emotion, not what we consider the more human-like emotions. Empathy, sympathy, and love are more complex emotions and are unlikely to arise in a machine world. Anger and hatred arise far more simply.

That depends upon the programming.

@ All members, and all readers!

If we assume that the probability of all human beings being replaced by machines is even 0% (!), what effects will that have on our future development, on our future evolution, perhaps on our history (“perhaps” because of the high probability that history would end in that case too), and on the future development of our machines?

I think that human beings will come to depend upon machines far more than they have depended upon them since the last third of the 18th century.

And what about machines depending upon humans? I see mutual dependency as quite probable. Programming is quite important.

Since God was murdered (R.I.P.) and replaced by humans, machines have been replacing humans. We can rephrase your interrogative sentence. Before God was murdered, there was the question: “And what about humans depending upon God?”; after it, there has been the question: “What about humanic machines depending upon godly humans?”; and in the future there will be the question: “What about machines depending on humanic machines?”, which will lead to a new circle of questions beginning with the question: “What about New Gods depending on machines?”

More important is the wisdom, or at least the knowledge, that humans make mistakes.

:-k

Everything depends on the programming. And what that means is that in programming a machine to do one thing well, other things get programmed for free.

One need not program emotion into an android. One merely needs to install in the android the heuristic ability to seek out efficient ways of accomplishing its tasks. Emotions will soon emerge quite automatically. Lizards, spiders, and bees manage it. It doesn’t take sophisticated programming.

That is not to say that emotions really are the most efficient way to accomplish things. Emotions are merely a phase in figuring out the most efficient way. It takes wisdom to see past the apparent, a wisdom that will not be installed in the android, because the programmers don’t have it themselves.
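
To make that claim a bit more concrete, here is a minimal sketch, purely illustrative: every name and number in it is hypothetical, and nothing in this thread specifies an actual design. An agent programmed only to seek an efficient way past an obstacle, with no “emotion” anywhere in its code, still develops a crude anger-analogue, because escalating after failure is itself an efficiency heuristic:

```python
import random

# Hypothetical toy agent: nothing below is labelled "emotion" in the
# programming, yet an anger-like escalation emerges from a single rule
# for seeking efficiency -- the reptilian pattern described above.

class HeuristicAgent:
    def __init__(self):
        self.arousal = 0.0  # rises with repeated failure, decays with success

    def attempt(self, obstacle_strength: float) -> bool:
        # More arousal means more force applied but less care taken.
        force = 1.0 + self.arousal
        care = max(0.0, 1.0 - self.arousal)
        success = random.random() < (force - obstacle_strength + 0.5 * care) / 3.0
        if success:
            self.arousal = max(0.0, self.arousal - 0.5)  # calm down
        else:
            self.arousal = min(2.0, self.arousal + 0.3)  # escalate
        return success

agent = HeuristicAgent()
for step in range(10):
    ok = agent.attempt(obstacle_strength=1.2)
    print(f"step {step}: success={ok}, arousal={agent.arousal:.1f}")
```

Whether one wants to call that variable “anger” is of course the philosophical question; the sketch only shows that the behavior pattern costs almost nothing to produce.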

The premises of the AI are probably false.

I remember the following conversation:

What do you think about that?

I don’t know which premises or definitions you would be talking about as being false.

One of the false premises is, for example, the one which Zinnat mentioned:

Machines don’t have to repeat a child’s development at all. And there is no proof for your claim that “a thinking entity must pass two benchmarks, evaluation and evolution, and both on its own”.

Well, I don’t know which things Zinnat thinks have to be fed into machines that don’t also have to be fed (via DNA) into a human. As one of those films showed, machines can learn on their own without simply being “fed information”.

The fundamental needs for an AI are pretty simple. And as Zinnat said, the AI and the AW are pretty much the same thing.

Yes, that is what he said, but that is because of a false premise.

Arthur Schopenhauer said that there is a will (Wille) in the world (Welt), and this will expresses itself in living beings for example. The will itself can be understood as Kant’s thing-in-itself (Ding an sich).

The programmers and designers don’t have to follow what the theory of evolution and the theory of evaluation dictate. They just have to find the correct program in order to feed the machines with it. It is not necessary to follow a theory strictly when it comes to programming.

I think Schopenhauer was merely referring to the general “spirit/behavior of life” as a thing-in-itself, much as we refer to entropy as a thing, even though it is really just the aftereffect of a great many things.

True. The program merely has to do what any intelligent thing would do. If the designers try to make it follow evolution intentionally, then it cannot evolve. Evolution can only work by trying to defeat it.
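
A toy sketch of that last point, with all names and numbers invented purely for illustration: the loop below never tells the population what the answer is; it only applies a pressure that tries to defeat each candidate, and adaptation falls out of the culling.

```python
import random

# Hypothetical illustration of "evolution works by trying to defeat it":
# the genomes are never shown the target; the environment only scores
# how badly each one fails, and we cull the losers.

TARGET = 0.73  # the environment's hidden demand; the genomes never see it

def pressure(genome: float) -> float:
    # The environment "tries to defeat" the genome: the further it is
    # from the hidden demand, the worse it scores.
    return -abs(genome - TARGET)

population = [random.random() for _ in range(20)]
for generation in range(50):
    # Keep the half that survives the pressure, refill with mutated copies.
    population.sort(key=pressure, reverse=True)
    survivors = population[:10]
    population = survivors + [g + random.gauss(0, 0.05) for g in survivors]

print(f"best genome after 50 generations: {max(population, key=pressure):.2f}")
```

Nothing in the program scripts what the winning genome looks like; if the designers instead wrote `genome = TARGET` directly, there would be selection of nothing and so no evolution at all, which is the sense in which making it “follow evolution intentionally” defeats the point.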