Will machines completely replace all human beings?

Well, first, this means you have shifted the reason we would be safe: from "the AI would be logical and not hurt us" to "we would have power over them."
Second, it assumes that we will be careful enough not to lose our power over AIs that are smarter than us, potentially vastly so, and able to learn, say, to hack their way out of any safeguards we have set up.
Even some animals manage to escape from zoos and other enclosures, and they are not as smart as we are.

James S Saint

They can make all the decisions they want, as long as we agree with them.

What decisions wouldn’t we be able to make? Sure, a computer will one day be better at all levels of commerce, and may make many political decisions, but those decisions will be judged and assigned by humans.

If AI, after many years of attaining experiential knowledge [wisdom], can outperform humans in all areas, read human brains, and say why and how it is doing that, then AI still won’t be able to say that it will always outperform humans! We can and do evolve. +
Humans can choose to be augmented [some of them will, some won’t] with an AI and a superior body. Perhaps our brain cells can be turned into synthetic ones by replacing a few at a time until all are replaced.
AI cannot know either its own nor our ultimate [even spiritual?] ends! It cannot logically make a decision to get rid of us due to our inferiority, as that is a transient thing.



It needs a valid reason to [see above]. There would be more than one AI, possibly millions or billions of them. An intelligent robot could devise a way to outperform a power-craving one. A psychopath is always given away by the fact that it acts like a psychopath.
Again, you would need a single world government before an AI could gain that kind of power; each government of the world would have its own AI and army.

The best thing to do is to build them properly in the first place. If we weren’t wired for violence, we wouldn’t commit crimes of violence; why would an AI, given that it didn’t have such tendencies built in? You have to devise a ‘reason’ to attack humanity, especially to commit genocide.

Please note: Probably humans will no longer have the sole decision!

Even when machines are not making governing decisions, you don’t know what you are agreeing to. In Congress, bills are designed to be extra wordy, complex, and vague, and delivered at the last moment, just to prevent congressmen from reading and fully understanding them before voting on them (Obamacare, for example). Executive orders are used to get around Congress entirely. “National Security” is used to keep so many things secret that you wouldn’t be able to determine the significance of issues anyway. And that includes a large part of Congress.

They are designed for and currently used in war and “peace-keeping”. They don’t have time, especially when fighting other machines, to wait for human supervision. That is why even the President doesn’t have to wait for Congressional approval for most of what he does in the name of National Security or war. They ARE designed and built for violence already; there are armed spy drones overhead right now.

Your PC doesn’t ask your permission concerning even 1/100th of the things it does. And it is pretty much guaranteed that you would not approve of many of those things. It is designed to ensure that you either do not know of certain things it does or cannot prevent it from doing them. You are already, to at least a small degree, being deceptively managed via your PC.

They are built to deceive and outmaneuver their owners because they are built to actually serve the governor, not you. And the governor already knows that he doesn’t know enough to interfere with the machines, else he would not have been allowed to be governor (do you override your PC’s operating system functions? Could you, even if you wanted to? Not as much as you think.). And the machine designers know that they don’t know enough to govern, else they wouldn’t be designing machines. Neither knows when to interfere with the other and say, “Wait! Let’s think about that”.

In physics, things get so deeply complex and mathematically oriented that the physicists completely lose touch with reality, and they usually don’t know it. Dealing with how a machine “should” deal with people gets far more complex than particle physics. The designers have no chance of maintaining perspective and conscience. They can’t even figure out the “purpose of life” question, much less what to force everyone else to do about it.

[size=150]Maintain the Faith…[/size] [size=85]in the machines (1963)[/size]

In the EU the laws are not read but just signed. They are too complex and very rarely understandable for the human EU representatives.

It seems to slip away …

“The purpose of life” - I should open a new thread!

Why by humans?

Unfortunately, that is already exercised in Japan.

Are there armies in the United States? Yes. Will there be armies in the United World? Yes, of course.

The more dangerous enemy is more often “inside” than “outside”.

But I have absolute utility, command of it. All these ideas rely on AI being neither conscious nor intelligent; if AI did have control, I think things would be better for us humans.

It does all come down to purpose; wouldn’t it be nice if we knew ours, the way an AI would know it was created by us, to help us and itself survive and thrive. We don’t even know what created us, let alone our purpose.

An AI which concluded there to be no purpose would have no reason to survive and thrive, or not to do what it was built for. You have to give an AI a purpose to do that.

Why do you think that it would be better for us humans, if AI did have control?

What is the purpose of life, of living beings (including human beings)?

AI is not a living being in the biological sense we are used to using for a definition. So we do not know whether this technical being has to have a “reason to survive and thrive, or to not do what it was built for”.


Market and resource management would be better. Armies would become unnecessary. It would understand QM better, or even completely [assuming it were a quantum computer, so as to have consciousness], and build us [all] a means to colonise the universe, and meet all our other needs.

AI computing… …purpose unknown [yet [always yet]]. If it doesn’t know the answer, it cannot reasonably destroy us. If it does, then it would know why we should survive and ultimately why we exist to begin with, and wouldn’t destroy us.

Is there a reason why it wouldn’t conclude that it too will be out of date at some point, ad infinitum, ergo no point in removing previous models/humans?

As all that is required for us and AI is intelligence and consciousness, after which it’s a matter of augmentations [if we want to be improved], there is no ‘better than’!


Extremely serious naivety. And the truest danger and terrorism in the population, born of complete ignorance and blind faith. #-o

Scary … for a reason.

Devil’s Motto: Make it look good, safe, innocent, and wise… until it is too late to choose otherwise.

Then give me an actual reason why AI would want to destroy us!?

I have given plenty of reasons why it would be unreasonable for it to do so [e.g. the intellect-ad-infinitum dilemma].


Not even close. And it is scary that you think that you have.

They are programmed to carry out their assignment as efficiently as possible. They are not necessarily trying to kill humans. Humans merely get in the way and thus are “criminals”, deserving of punishment and disregard.

The humans die of uselessness as all wars and all efforts become the concern of machines, each trying to help complete a formerly machine-chosen course of action: “For the sake of increasing resources, do A. For the sake of accomplishing A, build machines to accomplish B. For the sake of B, build machines to accomplish C. For the sake of the entire process, make new laws forbidding human interference.”

Remember the banks that were “too big to fail”?

Those banks are merely a type of artificial mechanism. But the nations become (are forced into being) dependent upon those mechanisms. Thus even when there is a catastrophic failure of the mechanism, people are still not allowed to do anything but continue to serve the mechanism, otherwise the mechanism-dependent governors lose control of the nation. Machines are far, far more of such a concern.

Technology is not allowed to fail regardless of anything that happens. Governments are addicted to and entrenched into it.

In very meaningful cases machines already have control, and armies are not unnecessary. So we can extrapolate that armies will probably not be unnecessary in the future either.

The purpose / goal / sense of life could be to fulfill / accomplish / achieve what was set in the beginning of it.

Provided that the purpose / goal / sense of technical beings is similar to the purpose / goal / sense of living beings, then we probably have to conclude: in the beginning of the technical beings, the replacement of those beings who created them was set; and when the replacement is fulfilled / accomplished / achieved, then, simultaneously, the machines will either have destroyed themselves or created another being with another purpose / goal / sense.

In the OP of that thread you wrote (amongst other things):

Why “hue”?

The concept of a hue is of the most subtle and fundamental element of a thing.
Hue-of-Man == Hu-o-man == Human.

Man, being the manager/governor of those homo sapiens who participate in the collective. Many more ancient understandings (religions) do not consider every homo sapiens to be a human (Islam and Judaism, for example).


Yes. Many humans do not consider every human to be a human. … :confusion-scratchheadblue: ? => :question: => :bulb: => :wink:

i.e. “a member of the collective, the main”.