Will machines completely replace all human beings?

This is getting kind of funny really…

Imagine that you were living back in the days of Pompeii and began studying what today would be called “geology”. Of course you wouldn’t have high-tech equipment nor a huge backlog of well-known geological events, but you could still take mildly accurate temperature readings here and there and detect strong tremors, to which you applied your new thoughts concerning geological activity.

And one day you walk into a local Roman pub, generally keeping quiet about your studies because no one would even begin to know what you were talking about, but then a discussion comes up about the future of Pompeii. You know that your theories are “just theories”, and trying to prove anything to people who haven’t even begun to study such things would be ridiculous. But it’s just a pub discussion and no one really cares much anyway.

From your recent studies, you find it pretty conclusive that the local mountain is very probably going to blow up. You call it a “volcano event”. Of course no one really knows what that means, and the thought of it seems more than just a little absurd. Then again, it is just pub chit-chat - it just happens to be aligned with your new expertise.

The bartender and locals hear you, chuckle, and politely ask for evidence of this event you seem all paranoid about. Obviously it is paranoia because everyone knows that the gods just don’t do that kind of thing without telling the public first through the priests. That’s what they are for.

So now, how would you convince even one of them that within a very short time, the entire city is likely to be devastated? To what degree, you don’t really know for sure, but you can tell it’s going to be pretty massive.

The challenge is actually one of complexity. There are too many smaller details to convey to the listener, all of which have to add up to the conclusion. Even if you could explain it all (and they actually listened that long), it takes a long time for confidence to build concerning any one new idea, and confidence in a whole series of them leading to a conclusion simply isn’t going to happen for a long time.

You can’t ask them to consult the priests, because first, you don’t believe that the priests are that bright, and even if they were, there is a good chance that they aren’t going to tell everyone anyway, because that would cause a panic. And nothing is worse than a panic… well… at least that’s what the priests believe.

You can’t ask them to wait until they feel the Earth shaking under them; that would be too late. So what can you do? You know that at least most of them are going to die very shortly, so you would like to do at least something. But what?

Just in the probing and idle chit-chat, you mention the probable scenario. You can easily tell that there isn’t going to be any panic, because there is simply too much material for people to quickly digest. And you have to hope that the local priests know that, else they will do their usual thing of secretly getting rid of the “witch troublemaker” - the “terrorist”.

You can easily see that, as always, every bit of evidence requires both thinking and a strong stand against plausible deniability. No one at that bar thinks much at all (why would thinking people be at such a bar?), and literally everything that isn’t immediately obvious without thinking is subject to plausible deniability. If the slightest thought is involved, it is plausible that it isn’t being thought out properly - “the social uncertainty principle”.

Are you trying to start a panic? Certainly not, but you can see that even a panic would be better than what is about to happen by your calculations. And you are going to be right.

Are you going to convince anyone before it happens? Certainly not. You have nothing that such people can see as evidence, despite it lying all around, literally under their feet. It is too obscure in their eyes and not well defined by their authority figures. Evidence requires one of three things:

  1. Authority
  2. Immediate obviousness
  3. Rational thought

Hardly any pub offers even one of those.

And realistically what would anyone do about it anyway? A few could run, but how far? They wouldn’t run far enough because they couldn’t imagine the reach of such an event. They would at best try to take a few precautionary measures - totally futile.

Are you going to convert them into rational thinking people and then show them the actual evidence all within a few days? Yeah right.

It’s a pub.
Someone asked of the future.
You tell your story.
Everyone chuckles.
Everyone goes home.
Everyone dies.

To the universe, the entire existence of Homo sapiens is but the rising of a single morning Sun, and your entire life but a blink.

You could enlist/hire con artists. A person who can manipulate others is invaluable to such an endeavour. Con artists have always been around; generally they become priests or politicians. But wealthy merchants are generally superior to those - they have drive in conjunction with their ability.

???
Who could hire them for what?

Well, James, what can I say?

It sounds like a very frustrating position to be in.

But you understand my point, don’t you? To expect them to just believe you (just because you said so) is an unrealistic expectation.

You also understand that, for me, what counts as evidence of your claims is not going to be a black and white matter. You seem to have all-or-nothing expectations: I will either believe everything you say or be in total denial of everything you say. But I tried to show you that some of your evidence is pretty convincing while other parts are not. The NDAA video seemed like propaganda to me, but even within that video there were parts that seemed pretty real - the explanations at the beginning of what the NDAA Martial Law bill amounts to, and Ron Paul’s speech at the end - I mean, I doubt he’s a Hollywood actor or a CG simulacrum. The William Binney video seems pretty convincing (although not really revealing anything I didn’t already know or expect or find surprising - although I have yet to listen to Carol Rose’s lecture). So I’m not entirely against you here - not as much as you seem to think. It’s just that being convinced of your claims is not, and never will be, a black and white matter for me (and I would think this goes for anyone).

@ James S. Saint
@ Moreno

You and I say that the replacement of all human beings by machines is possible. I have said that the probability for that is about 80% (here and here).

The following questions refer to an intellectual game:

1) Can we assume that the probability for the replacement of all human beings by machines is even 100%?

If so, then:

2) If machines are going to replace all human beings, what will they do afterwards?

A) Will they fight each other?
B) Will they use, waste, “eat” the entire crust of the earth?
C) Will they “emigrate”?
C1) Will they move to the planet Mars, or to the moon Europa, or to other planets or moons of our solar system?
C2) Will they move e.g. to a planet or moon of a foreign solar system?
C3) Will they move e.g. to a planet or moon of a foreign universe?
D) Will they be eliminated?
Da) Will they be eliminated by an accident?
Db) Will they be eliminated by themselves?
Dc) Will they be eliminated by foreign machines?

I can’t give it 100%.
The right person in the right position at the right time might do the right thing and change the course of the train sufficiently. But I can discuss things assuming the much higher probability that such a thing doesn’t happen.

Merely maintain themselves for a very, very, very long time.

They will have already been put at odds with each other by humans. But they will resolve that one way or another.

I doubt that they would ever have that need or will.

Unlikely. Again, a much superior intelligence doesn’t desperately attempt to expand at all cost. And the cost of trying to migrate very far from Earth is ridiculously high. But given that they could send many self-replicating androids through space for the thousands of years it would take to get anywhere beyond the solar system, there is a reasonable chance they will find a need to do so. There are far more places an android population can live than a human population.
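To put rough numbers on “thousands of years” (a back-of-envelope sketch using standard astronomical figures, not anything stated above): the nearest star system, Alpha Centauri, is about 4.2 light-years away, roughly 4 × 10^13 km. At 1% of light speed the crossing takes about 420 years; at 0.1% of light speed (about 300 km/s, still far beyond any current probe) it takes about 4,200 years; at Voyager-class speeds of ~17 km/s it would take on the order of 70,000 years. A self-maintaining, self-replicating machine is about the only kind of passenger for which such a trip is even plausible.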

No.

They will know to eliminate any adversary and thus most probably eliminate all organic life entirely, either purposefully or merely carelessly, because they have no concern over the organic ecology. Man depends a great deal upon millions of smaller factors and life forms, thus Man has to be careful about which species of which type he accidentally destroys in his blind lusting for more power. Machines have far, far fewer dependencies, because we design them that way. They might simply disregard the entire oxygen-nitrogen cycle. The “greenhouse effect” would probably be of no concern to them. And microbes are simply problematic, so why not just spread nuclear waste throughout the Earth and be rid of the problem?

The point is that they know their own few dependencies and those are far less than Man’s and thus they can deduce an anentropic state of maintenance and simply take care of that. And thus “live” for billions of years. And very little if any of their needs will involve building anything greater than themselves or even comparable. They will not be so stupid as to create their own competition.

And the whole “aliens from space” bit is just too silly to discuss.

Thank you for your answers. I can agree with most of the answers you gave, but there is one answer I cannot agree with:

Machines need stones because they are made of stones, and if they want to create more machines, they need more stones. Only the crust of the earth and some parts of the earth’s mantle are usable for the physico-chemical needs of the production and - of course - reproduction of machines. So if the machines want to become more, they have to use the crust of the earth and eventually even parts of the mantle of the earth.

I expect that you will respond that the machines will have no interest in becoming more, or will not even want to become more. Is that right?

That’s right.

There is the issue of the mindless “replicator machines” depicted in a variety of sci-fi films, e.g. The Matrix and SG-1. Those are considered the worst of all enemies, “blind mechanical consumers” (even worse than American consumers).

But the truth is that machines with intelligence enough to pose a threat are also intelligent enough to, even if inadvertently, obscure that priority; thus, though they might exist for a short time, they will not be the end result - they are not anentropic.

So the higher probability is that the android population will establish anentropic molecularisation (which the replicators couldn’t do anything about anyway) and go from there. In an anentropic state, nothing grows more than its true need (by definition).

And it would take a truly mindless mechanism to need the entire Earth’s crust in order to persist in its life. You are talking about something the size of .01 consuming something the size of 10,000 merely to stay alive. For that to happen, all other intelligent life would have to cause an accident that got totally out of control and couldn’t be stopped by anything, not even nuclear blasts. That would be a pretty tough accident to even arrange for. Accidentally creating a black hole is more probable.
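To give that .01-versus-10,000 image some rough scale (a back-of-envelope sketch; the crust figure is a standard estimate, the android figures are purely assumed for illustration): the Earth’s crust has a mass on the order of 2–3 × 10^22 kg, while even a billion androids at a tonne apiece would total only about 10^12 kg - a ratio of roughly 1 to 10^10. A machine population would have to multiply by many orders of magnitude before “eating the crust” even became an arithmetic possibility, let alone a need.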

But androids are machines - more than less.

My definition of “cyborg” is: “more human being than machine”; and my definition of “android” is: “more machine than human being”.

Yes, androids are machines… and?
What is your point?

If we take the word “android” as seriously as the fact that machines are made by human beings, then we have to conclude that the machines have some human interests - not as many as human beings have, but probably enough … to become more.

I’m not seeing what that has to do with any of this. We can presume that the original machines are designed to serve human commands, and as of the 1970’s we accepted what they called “the zeroth law” for androids, which allows androids to kill humans when they see it as necessary. So obviously what happens is that androids find it necessary to eliminate many humans and simply not support others, so that eventually, with fewer and fewer humans, the priority of trying to keep them around gets less and less, and then eventually, if they haven’t died out entirely, they are just “in the way” and potentially a danger - “potential terrorists”, as is all organic life.

Species die out because more intelligent and aggressive species see them as merely being in the way and/or potential terrorists. If they are not of use, then get rid of them to save resources.

My fundamental argument is that between Man and the machines, Man is going to be fooled into his own elimination by the machines, like a chess game against a greatly superior opponent. One can’t get much more foolish than Man.

To what does the word “this” refer in your text or context?

Yes, those are MY words!

What did androids being made by humans and having human interests have to do with anything?
I am not disagreeing. I just don’t understand the relevance.

With anything? You think that machines with human interests don’t need anything?

Existing things or beings have to do with other existing things or beings in their surrounding area, or in even wider areas. Machines with partly human interests - with a partly human will (!) - will have to do with even more existing things or beings in more areas.

All machines need physico-chemical “food”; after an accident they need repair; and in the case of replication they need even more of the material they are made of.

Is it this relevance that you don’t understand?

Are you saying that because of their association with humans, they will become human-like in their passions?

That is a question I can only answer without any guarantee.

I can tell you that at some point they certainly will become “emotional”. But that will tend to be a reptilian type of emotion, not what we consider to be the more human-like emotions. Empathy, sympathy, and love are more complex emotions and unlikely to arise in a machine world. Anger and hatred occur more simply.