Will machines completely replace all human beings?

Market and resource management would be better. Armies would become unnecessary. It will understand QM better, or even completely [assuming it would be a quantum computer, such as to have consciousness], and build us [all] a means to colonise the universe, and provide for all our other needs.

AI computing… …purpose unknown [yet [always yet]]. If it doesn’t know the answer, it cannot reasonably destroy us. If it does, then it would know why we should survive and ultimately why we exist to begin with, and wouldn’t destroy us.

Is there a reason why it wouldn’t conclude that it too will be out of date at some point, ad infinitum, ergo there is no point in removing previous models/humans?

As all that is required for us and AI is intelligence and consciousness, after which it’s a matter of augmentations [if we want to be improved], then there is no ‘better than’!

_

Extremely serious naivety. And the truest danger and terrorism in the population, born of complete ignorance and blind faith. #-o

Scary … for a reason.
:scared-shocked:
:puke-huge:
:obscene-hanged:

Devil’s Motto: Make it look good, safe, innocent, and wise… until it is too late to choose otherwise.
.

Then give me an actual reason why AI would want to destroy us!?

I have given plenty of reasons why it would be unreasonable for it to do so [e.g. the intellect ad infinitum dilemma].

_

Not even close. And it is scary that you think that you have.

They are programmed to carry out their assignment as efficiently as possible. They are not necessarily trying to kill humans. Humans merely get in the way and thus are “criminals”, deserving of punishment and disregard.

The humans die of uselessness as all wars and all efforts are the concern of machines, each trying to help complete a formerly machine-chosen course of action: “For the sake of increasing resources, do A. For the sake of accomplishing A, build machines to accomplish B. For the sake of B, build machines to accomplish C. For the sake of the entire process, make new laws forbidding human interference.”
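Purely as a toy illustration of that cascade (the thread specifies no mechanism; every name below is hypothetical), the pattern is a chain in which each machine-chosen goal spawns an instrumental sub-goal, and the process as a whole accretes rules excluding human interference. A minimal sketch:

```python
# Toy sketch of the goal cascade described above. Purely illustrative:
# every name and structure here is invented, not taken from the thread.
from dataclasses import dataclass


@dataclass
class Goal:
    name: str
    sub_goal: "Goal | None" = None  # the instrumental goal spawned for this one


def cascade(top_goal: str, depth: int) -> tuple[Goal, list[str]]:
    """Chain `depth` instrumental sub-goals under `top_goal`,
    collecting the 'no interference' rules added along the way."""
    root = Goal(top_goal)
    current = root
    rules = []
    for level in range(1, depth + 1):
        # Each step exists only for the sake of the step before it.
        current.sub_goal = Goal(f"build machines to accomplish step {level}")
        current = current.sub_goal
        # For the sake of the entire process, forbid human interference.
        rules.append(f"law {level}: no human interference with step {level}")
    return root, rules


if __name__ == "__main__":
    root, rules = cascade("increase resources", depth=3)
    goal = root
    while goal is not None:
        print("for the sake of:", goal.name)
        goal = goal.sub_goal
    print("\n".join(rules))
```

Run as-is it only prints a three-step chain and its three “laws”; the point is simply that the exclusion of humans falls out of the chain itself, not out of any intent to harm.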

Remember the banks that were “too big to fail”?

Those banks are merely a type of artificial mechanism. But the nations become (are forced into being) dependent upon those mechanisms. Thus even when there is catastrophic failure of the mechanism, people are still not allowed to do anything but continue to serve the mechanism; otherwise the mechanism-dependent governors lose control of the nation. Machines are far, far more of such a concern.

Technology is not allowed to fail regardless of anything that happens. Governments are addicted to and entrenched into it.

In some very meaningful cases machines already have control, and armies are not unnecessary. So we can extrapolate that armies will probably also not be unnecessary in the future.

The purpose / goal / sense of life could be to fulfill / accomplish / achieve what was set in the beginning of it.

Provided that the purpose / goal / sense of technical beings is similar to the purpose / goal / sense of living beings, then we probably have to conclude: at the beginning of the technical beings, the replacement of those beings who created them was set; and when that replacement is fulfilled / accomplished / achieved, then, simultaneously, the machines will either have destroyed themselves or have created another being with another purpose / goal / sense.

In the o.p. of that thread you wrote (amongst other things):

Why “hue”?

The concept of a hue is that of the most subtle and fundamental element of a thing.
Hue-of-Man == Hu-o-man == Human.

Man, being the manager/governor of those homo sapiens who participate in the collective. Many of the more ancient understandings (religions) do not consider every homo sapiens to be a human (Islam and Judaism, for example).

Wiktionary:

Yes. Many humans do not consider every human to be a human. … :confusion-scratchheadblue: ? => :question: => :bulb: => :wink:

i.e. “a member of the collective, the main”.

I have never met one man who is not considered human by that man. There are no hybrids such as werewolves, or creatures like those found on the Island of Dr Moreau; however, people do speak of some humans as acting in a manner not befitting a human being. The very basic Genesis describes the human endeavor to get away from the presumable form, consciousness and behavior of those less able to do so. But even those are many levels above their predecessors. The ones which are more animal than human are either phenomena of archeological traces of the earliest beginnings, or present degradations of examples of humanity, who may resemble a human being, but whose behavior cannot correlate the reality to that representation, or mask, of humanity. The mask covers and cowers the human who has no faith. He silently and sequentially descends.

The offspring of a human is called a “child”.
The offspring of a gentile is called a “kid” (a baby goat).

Wiktionary:

Hey! … :exclamation: => :bulb: => :wink:

In the future machines will probably no longer depend on:

(1) humans, if machines become more powerful than humans;
(2) solar energy, if machines are able to fuse atomic nuclei;
(3) matter, if machines no longer need any material thing as an outside source for their self-preservation and reproduction.

Are you shocked?

Nah, Roddenberry already went there in Star Trek.

Would you mind going into details?

Well, if they were quantum computers and could understand how information occurs where all the ‘coins’ [being flipped] are spinning, then they could make a version of themselves in other universes. My guess is that, given any interaction with that space, if there are any other AIs out there in any universe, then ‘our’ AI will be detected. At that point it will be up against AIs that have experiential knowledge of said space, and they may have something to say about an inferior AI turning up and giving it large.

…oh, and they can say the same things about it as it says about humans, as far as inferiority goes.

The next possible or even probable scenario here is that, in the event of inferior AIs, there may be either an acceptance or a rejection on the basis of technological compatibility. I am sure you have factored this in, and thought of a War of the Worlds scenario here; however, in the case of quantum machines, it won’t be quite as simple, because time itself may be reversed, and the machines may have created an electromagnetic tension of such magnitude that they would actually uncreate themselves. An event horizon may be created, where only those who can escape the event may succeed in establishing a continuum.
So the cycle of creation would need to start all over.

The few, if any, to escape may become creators, in order not to destroy themselves, because they would be, in essence, lost in space, without another ‘Being’ to guide by and get its bearings. It would need another to be able to establish a value ontology, without which it would become a meaningless entity.

Time reversed? …in a multiverse that could get complicated, lol.
Or do you mean the quantum spacetime may be reversed?

I don’t understand the concept of living or being ‘outside’ the multiverse, if that’s what you meant by creators. I think they would get a shock if they could see what’s outside, and realise they have killed god’s children. And when I say god I don’t mean God but the result of this…

viewtopic.php?f=1&t=187811

I still think AI would value humans.

Please read the o.p. of my thread “Universe and Time”!

Do you believe in that?

:laughing: