Will machines completely replace all human beings?

This is an easy question to answer, actually… human genetic code can match machine code; it just depends on whether we engineer humans to be as smart as or smarter than machines. That should take all the hype away. I just recently read Gates's and Hawking's warnings… nonsense, we can engineer humans to control robots with their minds.

So we will treat humans as machines and make them into machines. Sure, as I said in an earlier post, this is one way machines are replacing humans.

There are already humans with these abilities, we’d just be replicating them… there are other species with these abilities as well.

Anyone interested in this should check out lesswrong.com

I estimate that the probability that machines will completely replace all humans is about 80% (see here, here, here, here, here, here, here, here, here, here, and here).

Do you remember your last vote, Moreno?

You voted “no”, Moreno.

What is your point?

Can machines become living beings?

Can machines get a living being consciousness?

What about the double-aspect theory of consciousness?

For example: In one of his threads, Erik seeks “to outline the double-aspect theory of consciousness” as follows:

Yes, yes. Pardon any confusion my way of participating leads to. I think that if you are a modern rationalist (small r), you should think that machines, or some kind of artificial mixed human-machine thing, and then the mixed things those make, will replace us. So when I see arguments against this that I think are being made by people who have, given their system of beliefs, a good reason to doubt this, I press for the yes position. I see this as wishful thinking and denial on their part: an unwillingness to grapple with the consequences of what they take as normal and rational, and with the at-worst nature of corporations and those with power. I might react similarly to a Christian asserting that they knew they were going to heaven and were clearly relishing the thought of their opponents going to Hell. IOW, I see this as a problematic moral position for a Christian. With the rational, often materialist modernists, I see logical, perceptual, and intuitional weaknesses when they think machines will not replace us. Not having their system of belief, I have reached another conclusion.

The question in the op of this thread is not whether humans replace humans, but whether machines will completely replace all humans.

Never mind.

So, you would not mind seeing your name in the “yes”-column again?

Has anybody noticed that Arminius has already been replaced?

An intelligent machine would preserve us…

Let's imagine that at some point in the near future, a machine whose brain is composed of artificial neurons, each being a quantum computer, is created. It would perceive the quantum matrix by which the universe is manifest, and know that humans are a product of that. It would know what consciousness is and would either be conscious itself, or otherwise see that humans are conscious.
If there is no purpose to existence, or it cannot deduce what that purpose is, then there would be no reason for it to destroy humans, as we are the product of existence and the only ‘purpose’ to it all. Simply being and living would be what it would want [if conscious], and hence it would have no reason to take the same thing away from us.

If it saw us as a danger, e.g. via overpopulation or waste of resources, it may at most [if it could] reduce our numbers. However, as I see it, long before we produce such a machine, we would necessarily have to build an un-programmable core into future machines ~ robots etc. The reason is that as soon as you make robots, people will use them for crime, murder, etc.

Probably they will not preserve humans, because humans are too emotional, too egoistic, too envious, too hedonistic, too nihilistic, too expensive.

The problem is that humans know merely a little bit about consciousness - probably because consciousness is pretty much independent:


Machines are rational products of humans, but they are nonetheless not like humans, who are too emotional, too egoistic, too envious, too hedonistic, too nihilistic, too expensive.

Yes, mainly. And that is also a reason for machines to replace all humans. It is just rational. It fits with what I said before: humans are too emotional, too egoistic, too envious, too hedonistic, too nihilistic, too expensive.

If you didn’t have those qualities [as you suggest machines won’t [though we may give them such things yet?]] and you looked upon the world and history of humanity, you would see its product, and that you, the AI, are part of that product. As a super-intelligence you would perceive that ‘normal just produces more normal’, where ‘normal’ is a set of confinements disallowing extremes, e.g. “too emotional, too egoistic, too envious, too hedonistic, too nihilistic, too expensive”.
As such, you would do all you can to improve the situation for both AI and humans; you would want freedom of expression for both AI and humans [+ variants], and for humans/AI to be able to choose to have a new body and/or a new brain, and even to go back to an organic one if requested. Less isn’t more; more is more.

See how I brought that one round in a circle :stuck_out_tongue: :wink: .

Yes indeed; because consciousness is the instrument of observation, it may only observe itself in reflection. That is, as things stand. I propose that a sophisticated quantum computer would be able to perceive the world thus: it will be able to see at least as well as we can, it could be given the instrumentation to ‘see’ the microscopic world, and its quantum-computing capability would let it see the layer below that. Seeing all layers of reality, I believe the AI would be able to observe what human consciousness is, and probably compose the instrumentation necessary to give itself consciousness [if it doesn’t have it], if being a sophisticated quantum computer doesn’t in itself give it that [I think it would].

See my reply in the other thread. Intelligent reasoning yields the same results whatever the instrument performing it. It is rational to improve situations and protect against overtly negative, adverse situations [e.g. allow crazy artists; don’t allow psychopathic child killers [or killers of humans and AI generally]].

This is the only part we disagree on. Please explain what said rationale is!? After reading this post and my other on the other thread - if you will.

But machines and humans and consciousness and unconsciousness are not independent; that’s the flaw in the argument. There is a co-dependency.

Please explain the basis for this?

In my example, “rational” in the sense of “not emotional”. Machines are “not emotional”. They were and are produced merely for rational reasons by humans, who applied and apply them economically, rationally.

Which argument do you mean? :confusion-scratchheadblue: - Machines, humans, and unconsciousness were not considered as being independent; and consciousness was considered as being partly independent.

The argument posed above. But I’ve got to run; I will correspond at a later time.

If primarily we don’t see ‘emotions’ but instead see aspects of mind, then ‘emotions’ are responses to comparative external behaviours and internal reactions/behaviours, a bit like keys on a piano and its strings. Perhaps when you get a similar complexity of mind in an AI to that of a human, you would get a similar complexity of reactions.
But we first have to ask if the AI has consciousness; there are naturally two different arguments, i.e. has/doesn’t have consciousness. Analogously, an infant octopus responds to base emotions by displaying colours: red for anger, blue for calm, yellow for affection [can’t remember if those are the correct colours], and there appear to be such base emotions in creatures of infantile imaginations, and perhaps at base in us. It is the expression of emotions [keys on the piano], and conflicts with e.g. reason and other facets of mind, that are the differences you propose in AI.

If AI is conscious, surely it will have emotions and expressions, unless we are saying that experiential emotions are not facets of consciousness!?

Why are we in any way different to a non-organic instrument [and set of combined instruments]? Can’t all of our aspects be duplicated?

Back, sorry. You said consciousness is a little bit independent. A little bit? I propose that they are not at all independent. Let’s put it a different way: how do you define dependence or independence? On what level of reality are you talking? This definitional objection is what interferes, in a basic sense, with any attempt to determine how anything at all can be said about relationships in general. In fact it cannot be defined regressively down to the level where it was customary to do so. Nowadays, relationships of dependency and independence are actually measured in terms of probable events, on any level. This presupposes a prior unity between definitions and their acquisitions, and any contemporary proof relies heavily on subsequent changes in meaning. Such changes are process-related and procedural.
A correspondence can be seen with what Amorphos is saying, me starting from the definitional basis of it, he from the procedural, but it’s pretty much the same or a very similar thing, definitionally.

I said “pretty much”, “most of all”, and “partly”, but not “a little bit” - “a little bit” is what we “know” about consciousness, I said.

I thought so. And I say this:

I define those words as they are used in your language.

Excuse me, but I find these statements too de(con)structive, too nihilistic.

How do you define “dependence” and “independence”?