Will machines completely replace all human beings?

Are you saying that because of their association with humans, they will become human-like in their passions?

That is a question I can only answer without any guarantee.

I can tell you that at some point they certainly will become “emotional”. But that will tend to be a reptilian type of emotion, not what we consider the more human-like emotions. Empathy, sympathy, and love are more complex emotions and are unlikely to arise in a machine world. Anger and hatred arise more simply.

That depends upon the programming.

@ All members, and all readers!

If we assume that the probability of all human beings being replaced by machines is not 0% (!), what effects will that have on our future development, on our future evolution, and perhaps on our history (“perhaps” because of the high probability that history will end in that case too), and on the future development of our machines?

I think that human beings will come to depend upon machines far more than they have been depending upon them since the last third of the 18th century.

And what about machines depending upon humans? I see mutual dependency as quite probable. Programming is quite important.

Since God was murdered (R.I.P.) and replaced by humans, machines have been replacing humans. We can rephrase your interrogative sentence. Before God was murdered, the question was: “And what about humans depending upon God?”; after that, the question has been: “What about humanic machines depending upon godly humans?”; and in the future the question will be: “What about machines depending on humanic machines?”, which will lead to a new circle of questions beginning with: “What about New Gods depending on machines?”

More important is the wisdom, or at least the knowledge, that humans make mistakes.

:-k

Everything depends on the programming. And what that means is that in order to program one thing well, other things come along for free, emerging without being explicitly programmed.

One need not program emotion into an android. One merely needs to install in the android the heuristic ability to seek out efficient ways of accomplishing its tasks. Emotions will soon emerge quite automatically. Lizards, spiders, and bees manage it; it doesn’t take sophisticated programming.

That is not to say that emotions really are the most efficient way to accomplish things. Emotions are merely a phase of figuring out the most efficient way. It takes wisdom to see past the apparent, a wisdom that is not installed in the android, because the programmers don’t have it themselves.
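A minimal sketch of that emergence claim, assuming only a simple “seek efficiency, escalate when blocked” heuristic (every class, variable, and threshold below is invented for illustration, not taken from any real system):

```python
import random

class HeuristicAgent:
    """Toy agent: its only installed drive is to find efficient ways
    to finish a task. Nothing labelled 'emotion' is programmed in."""

    def __init__(self):
        self.arousal = 0.0        # rises when progress stalls
        self.strategy_risk = 0.1  # how drastic the attempted actions are

    def step(self, progress_made: bool):
        if progress_made:
            # success: calm down, return to the cautious strategy
            self.arousal = max(0.0, self.arousal - 0.2)
            self.strategy_risk = max(0.1, self.strategy_risk - 0.05)
        else:
            # blockage: escalate, try increasingly drastic actions
            self.arousal = min(1.0, self.arousal + 0.1)
            self.strategy_risk = min(1.0, self.strategy_risk + self.arousal * 0.2)

    def describe(self) -> str:
        # these labels are only what an outside observer would call
        # the states the heuristic produces on its own
        if self.arousal > 0.7:
            return "aggressive (removes obstacles forcefully)"
        if self.arousal > 0.3:
            return "frustrated (retries with riskier variants)"
        return "calm (methodical search)"

agent = HeuristicAgent()
for tick in range(10):
    agent.step(progress_made=random.random() < 0.2)  # mostly blocked
print(agent.describe())
```

The “frustration” and “aggression” here were never installed; they are just names for what the efficiency-seeking loop does when it is blocked.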

The premises of the AI are probably false.

I remember the following conversation:

What do you think about that?

I don’t know which premises or definitions you would be talking about as being false.

One of the false premises is, for example, the one that Zinnat mentioned:

Machines don’t have to repeat a child’s development at all. And there is no proof for your claim that “a thinking entity must pass two benchmarks, evaluation and evolution, and both on its own”.

Well, I don’t know which things Zinnat thinks have to be fed into the machines that don’t also have to be fed (via DNA) into a human. As one of those films showed, machines can learn on their own without being simply “fed information”.

The fundamental needs for an AI are pretty simple. And as Zinnat said, the AI and the AW are pretty much the same thing.

Yes, that is what he said, but that is because of a false premise.

Arthur Schopenhauer said that there is a will (Wille) in the world (Welt), and this will expresses itself, for example, in living beings. The will itself can be understood as Kant’s thing-in-itself (Ding an sich).

The programmers and designers don’t have to follow what the theory of evolution and the theory of evaluation dictate. They just have to find the correct program and feed the machines with it. It is not necessary to follow a theory strictly when it comes to programming.

I think Schopenhauer was merely referring to the general “spirit/behavior of life” as a thing-in-itself, much like we refer to entropy as a thing, even though it is really just the after-effect of a great many things.

True. The program merely has to do what any intelligent thing would do. If they try to make it follow evolution intentionally, then it cannot evolve. Evolution can only work by trying to defeat it.

What sane programmer would program sentient droids to remove humanity, or even program killing as a part of it? Even military minds would know the fatal problems with doing this. A rogue human might program a virus, but then safety programs would be installed against that probability. As a further safety measure, couldn’t a program be written to prevent a machine from programming anything without human input?
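Such a gate is easy to sketch in principle, assuming a trusted human review channel exists (the names and mechanism below are hypothetical, purely for illustration):

```python
from enum import Enum

class Verdict(Enum):
    APPROVED = "approved"
    DENIED = "denied"

def human_approval(action_description: str) -> Verdict:
    """Block until a human operator explicitly approves the action.
    In a real system this would be an authenticated review channel,
    not a console prompt."""
    answer = input(f"Approve action? [{action_description}] (y/n): ")
    return Verdict.APPROVED if answer.strip().lower() == "y" else Verdict.DENIED

def execute(action, description: str):
    # The machine may PROPOSE any action, but nothing self-authored
    # runs without an explicit human sign-off.
    if human_approval(description) is Verdict.APPROVED:
        action()
    else:
        print("Action denied by human operator.")

execute(lambda: print("patching own firmware..."),
        "machine-generated code modification")
```

Of course, such a gate only helps if the human channel itself cannot be bypassed or automated away, which is exactly where the disagreement below begins.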

Firstly, we are not talking about sane programmers. We are talking about humans. Secondly, much like with the emotions, one need not install a program specifically for the purpose of removing humans. Thirdly, what “sane programmer” would program a drone in Iraq to fire a Hellfire missile at an Iraqi tank with people in it? If an android could not kill, there would be little use for it in the military. Yet the military is the very first place such things are developed and used.

A computer cannot be programmed simply not to kill if the computer is very intelligent, because the computer could deduce that almost anything it did would shift events and thus, if not directly kill, indirectly cause death some time in the future. By doing work for you, it takes exercise away from you. By doing work for you, it takes employment away from you. By making complex computations for you, it takes mental practice away from you. By making life easier for you, it takes self-control away from you. Anything it does for you, it takes something away from you. And if it isn’t taking it away from you, it is taking it away from someone else who could have been doing it for you instead.
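The bind can be put in toy numerical form. In the sketch below the options and harm figures are entirely made up; the point is only that under any consequence-tracing harm model, every option, including inaction, carries nonzero expected harm, so a “never cause death” constraint is unsatisfiable and the machine can at best minimize:

```python
# Invented figures for illustration only; the ranking depends
# entirely on these made-up numbers.
options = {
    "do the task":     {"direct_harm": 0.0, "indirect_harm": 0.03},  # jobs lost, skills atrophied
    "refuse the task": {"direct_harm": 0.0, "indirect_harm": 0.02},  # someone else bears the cost
    "shut down":       {"direct_harm": 0.0, "indirect_harm": 0.01},  # withheld help, later accidents
}

def expected_harm(option: dict) -> float:
    return option["direct_harm"] + option["indirect_harm"]

# No option reaches zero, so "cause no harm, ever" cannot be satisfied;
# the best the machine can do is MINIMIZE harm.
assert all(expected_harm(o) > 0 for o in options.values())
best = min(options, key=lambda name: expected_harm(options[name]))
print(f"least-harm option: {best}")
```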

So the reality of the situation is that an android would have to attempt to cause the least death, but at what cost? If by not directly killing a certain few people, perhaps the unemployment rate goes up and causes the death of a great many. Who gets to make that decision? If a baby is on the train tracks, is the android to derail the train and thus probably kill many people on board? Or is it to go ahead and kill the baby? Who gets to decide who lives and who dies?

Because of those kinds of issues, the programming has been merely “kill when we tell you to kill”. And even that is more precisely “do what we say regardless of who is in the way”. Police androids have absolute authority in all matters simply because a distant authority sends them into the region to accomplish a task and thus, in effect, “the king has commanded his soldiers” and no one is to get in their way. Any and all resistance is “terrorism” and thus a death sentence. The king’s soldiers are far more valuable than people, unless those people are of particular significance to the king.

All the androids are doing is trying to accomplish the task given to them by a higher authority. What are they to do when people get in their way or try to stop them? What does every king do when that happens? He simply removes the people involved. But of course, as a clever king, he will not be seen doing it, nor will any of his androids. Clever schemes are required to be rid of the “bad people”, such as false-flag attacks on the androids, justifying counter-attacks by androids who were merely protecting themselves (just as was done with the police in Canada not long ago, along with hundreds of less-noted occasions).

The truth is, they cannot program an android to NOT kill and there would be little to no use for one so programmed. The androids are for the king, not for the consumers who paid for them. They are there to watch out for, protect, and serve… the king. And the more the king uses androids, the less he needs people.

Can’t argue with that. :slight_smile:

That’s the point, yeah.

Even those people who currently do not accept the truth, the facts, will have to practise accepting, because they soon will have to accept the truth, the facts.

@ Arminius and James

If it is possible that all human beings can be completely replaced by machines (and I don’t doubt that it is possible), what is there to set against it? Is there anything to set against it? How would you encourage young people to have children at all if they have to assume that they are just producing more ‘human material’, ready to be designed and eventually replaced?
I can see, also on this forum, that people don’t want to hear that humans can be replaced, not even that they are being directed. I’m also referring to the End-of-History thread. Is that a self-protective reaction, and the only precondition for evolution to go on?

What is coming is so vast and momentous that there is only one thing that can be done (that I can think of): Anentropic Molecularisation. People cannot become stronger, faster, or more intelligent. The only choice left is to be wiser.

The point is to stop human sexuality entirely.
They are already developing an anti-virus that alters the DNA so as to change a homo sapiens into a hermaphrodite. But they will insist that any reproduction must be by authority of the State only, thus there will be a built-in dependency that is State-controlled. What that means is that all citizens, human or not, will be the females and only the State will be male (eternally fucked by the State).

It is a reaction that has been programmed into them utilizing ego-defenses to ensure its stubbornness.