Will machines completely replace all human beings?

This has been my conclusion too for a lot of years, and I’m definitely beyond (make that two beyonds) my prime. But you do it too much honor to call it “a pathetic and sad comedy”. It seems to me more in the nature of a pathetic, disgusting farce. If he screws himself out of existence or degenerates completely, it cannot even be regarded as a tragedy. Farces don’t end in tragedy, and “machines”, whether intelligent or not, have nothing to do with this. It’s not the machines, it’s the genes: the LCD genes, which allow themselves to be whitewashed, brainwashed, and ground into total mediocrity by those who are just slightly smarter or more corrupt than they are.

Power equals leverage for anyone who has two IQ points above average; if you’re ten points below, you may still end up with a far better pension “for services unrendered” - which is the least collateral damage that blank stupidity may produce - than a lot of guys who scored in the genius range. We can’t even score on a god or some deity who doesn’t end up as some perverse SOB according to scripture… and this is the guy we want to be with forever after we’ve served our time on planet earth?

Come to think of it, that’s the most perfect ending imaginable to this demented comic opera.

We are in total agreement. :slight_smile:

Do you really believe in that, especially in peak oil?

But then they wouldn’t and couldn’t be the people they want to be. And I don’t think that their rulers will give up that claim. Merely the mass of human beings would agree to live “under nature instead of trying to dominate it”, if they are wanted to agree; and because of such an agreement they will be endangered by replacement. So the real meaning of their “living under nature” in the future is their disappearance. What remains is the question whether their rulers will later disappear as well or not.

Is that really “necessary and unavoidable”? Do you not have any hope?

Perhaps think of it this way: “Would an android hire a human?”

Currently people are thinking in terms of androids doing small mundane tasks: “pick up this, move that, …” But it is only natural that such tasks will become far more complex: “build a swimming pool in my yard”, “repair my car”, “repave my driveway”.

If an android is tasked to build a house or a swimming pool, it isn’t likely to do all of the labor itself, but rather utilize a team, especially if speed is an issue. So it contracts available subservient droids.

Would it dare trust any part of the labor or design details to a human? Why risk that?

“Repave the highway from Florida to Los Angeles.” Hire humans??? Hell no (federal ordinance).
“Build a building in downtown Chicago.” Hire humans??? Hell no (city ordinance).

The extremely wealthy don’t need an android for mundane tasks, but rather for the greater, more sophisticated tasks. Why would they hire humans either?

From the perspective of a management android, humans are merely in the way, and a serious risk.

Xenophobia? James, I thought you were above name-calling and were able to understand what my brief post meant. Others have hinted at the same thing: machines cannot be self-aware in the human sense of self-awareness. This does not indicate that I see all forms of awareness as human. You are sounding like Lev.
Can machines experience free will?

That is a very rational thought, and androids are created because of rational thoughts; so, with regard to this, the creators and designers of androids have the same thoughts.

Humans have no “free will”, but only a relative free will.

Those people who claim that machines can’t have self-awareness should be expected to prove it, to adduce evidence, at least more than those people who claim that machines do have self-awareness - the former in order to protect people, especially those who claim the latter.

That’s a typical example of Phyllo’s derailing!

He is not interested in this thread, although he is writing more and more in this thread.

Unbelievable!

Here is an example of a constructive contribution to the discussion about the topic of this thread:

What would you do if an android hired androids but not you, because you are a risk and in the way?


Does this picture illustrate an exaggeration or not?

I would build an android to replace that one. :evilfun:

Arminius,
Are there machines, then, who know “relative” free will?

That’s - of course - a good question, and I answer it with: in the future machines will probably know “relative” free will.

The will, as Schopenhauer defined it (as Kant’s “Ding an sich” - “thing in itself”), is a free will, but not the will of the human beings, because human beings depend on the will. Since God has been murdered - at the end of the 18th century - his free will has also been murdered. Since then human beings pride themselves on being like God, on having a free will, but that is a false conclusion.

Ah, and by whom or what exactly?

??? “By whom”???
I don’t understand the question.

And yes, machines experience “relatively-free will”.

Would you replace the other android by one or more androids or by one or more humans? And if the former, then by what? And if the latter, then by whom?

You can build an android and replace the other android by one or more androids or by one or more humans.

I thought Ierellus meant his question in reference to his statement that “machines cannot be self aware in the human sense of self-awareness”:

“Self-awareness”, “experience”, “know”? What would you answer?

Any number of them can be replaced by either, and no one would notice; and even if they did, they would not care. Flash: by the end of the year, driverless cars will be on the road. Some advocates are very much against that. Wonder if they will take off like the electric cars. The hybrids are selling pretty well, though.

The fact is that while we all act like we enjoy being lazy and sitting on our tails thinking about our problems, it is only through continuous habit and ritual that we actually find happiness that isn’t self-destructive. You may think it’s the taste and sweetness of ice cream that makes that ice cream cone sooooo delicious, but how good would it taste if you had had a fridge full every day since birth?

James,
Did you explain how machines have achieved relative free will? Or do I just have to take your word for it?

The less people need people, the more people suffer and die.

I was accepting the scenario of it having to be an android. If given the choice, I would not build a machine superior to humans except in very specific tasks, still requiring just as many humans as before, merely better equipped.

Explaining it is complicated, just as explaining how your computer works in the first place. You don’t have to take anyone’s word for anything. But then again, when you don’t know anything, you shouldn’t be making assertions.

“Free-will” is decision-making. As long as the decision is left up to you, you have “free-will” to the degree of the choices. Free-will is not something that either is there or not there. It comes in degrees determined by the number and types of choices allowed. No free-will means no choices. Total free-will means unlimited choices (followed by insanity).

And: