Will machines completely replace all human beings?

I don’t know… I’m a skeptic by nature, but this didn’t come out in the press until 2014. I won’t go into more details.

Note that the mosquito in that second video has a head that appears as a syringe.
… only to be used on bad people though.

Microengineers have developed a needle that mimics the mosquito’s unique “stinger”, making injections painless.

Contrary to popular belief, a mosquito can stab you with its proboscis without you feeling a thing. It then injects anticoagulant saliva to stop your blood clotting while it feeds, and it is this saliva that causes the irritation and itching.

Look at this:


This is an easy question to answer, actually: human genetic code can match machine code; it just depends on whether we engineer humans to be as smart as or smarter than machines. That should take all the hype away. I just recently read Gates’s and Hawking’s warnings… nonsense; we can engineer humans to control robots with their minds.

So we will treat humans as machines and turn them into machines. Sure, as I said in an earlier post, this is one way machines are replacing humans.

There are already humans with these abilities, we’d just be replicating them… there are other species with these abilities as well.

Anyone interested in this should check out lesswrong.com

I estimate that the probability that machines will completely replace all humans is about 80% (see here, here, here, here, here, here, here, here, here, here, and here).

Do you remember your last vote, Moreno?

You voted “no”, Moreno.

What is your point?

Can machines become living beings?

Can machines get a living being consciousness?

What about the double-aspect theory of consciousness?

For example: In one of his threads, Erik seeks “to outline the double-aspect theory of consciousness” as follows:

Yes, yes. Pardon any confusion my way of participating leads to. I think that if you are a modern rationalist (small r), you should think that machines, or some kind of artificial mixed thing that humans and then those mixed things make, will replace us. So when I see arguments against this being made by people who, given their system of beliefs, have good reason to doubt their own position, I press for the yes position. I see this as wishful thinking and denial on their part: an unwillingness to grapple with the consequences of what they take as normal and rational, and with the at-worst nature of corporations and those with power. I might react similarly to a Christian asserting that they knew they were going to heaven while clearly relishing the thought of their opponents going to Hell; IOW, I see this as a problematic moral position for a Christian. With the rational, often materialist modernists, I see logical, perceptual, and intuitional weaknesses when they think machines will not replace us. Not having their system of belief, I have reached another conclusion.

The question in the op of this thread is not whether humans replace humans, but whether machines will completely replace all humans.

Never mind.

So, you would not mind seeing your name in the “yes”-column again?

Has anybody noticed that Arminius has already been replaced?

An intelligent machine would preserve us…

Let’s imagine that at some point in the near future, a machine is created whose brain is composed of artificial neurons, each being a quantum computer. It would perceive the quantum matrix by which the universe is manifest, and know that humans are a product of that. It would know what consciousness is and would either be conscious itself, or otherwise see that humans are conscious.
If there is no purpose to existence, or it cannot deduce what that purpose is, then there would be no reason for it to destroy humans, as we are the product of existence and the only ‘purpose’ to it all. Simply being and living would be what it would want [if conscious], and hence it would have no reason to take the same thing away from us.

If it saw us as a danger, e.g. via overpopulation or waste of resources, it might at most [if it could] reduce our numbers. However, as I see it, long before we produce such a machine, we would necessarily have to build an un-programmable core into future machines ~ robots etc. The reason is that as soon as you make robots, people will use them for crime, murder, etc.

Probably they will not preserve humans, because humans are too emotional, too egoistic, too envious, too hedonistic, too nihilistic, too expensive.

The problem is that humans know merely a little bit about consciousness, probably because consciousness is pretty much independent.

Machines are rational products of humans, but they are nonetheless not like humans, who are too emotional, too egoistic, too envious, too hedonistic, too nihilistic, too expensive.

Yes, mainly. And that is also a reason for machines to replace all humans. It is just rational. It fits what I said before: humans are too emotional, too egoistic, too envious, too hedonistic, too nihilistic, too expensive.

If you didn’t have those qualities [as you suggest machines won’t [though we may yet give them such things?]] and you looked upon the world and history of humanity, you would see its product and that you, the AI, are part of that product. As a super-intelligence you would perceive that ‘normal just produces more normal’, where ‘normal’ is a set of confinements disallowing extremes, e.g. “too emotional, too egoistic, too envious, too hedonistic, too nihilistic, too expensive”.
As such you would do all you can to improve the situation for both AI and humans; you would want freedom of expression for both AI and humans [+ variants], and for humans/AI to be able to choose to have a new body and/or a new brain, and even to go back to an organic one if requested. Less isn’t more; more is more.

See how I brought that one round in a circle :stuck_out_tongue: :wink: .

Yes indeed; because consciousness is the instrument of observation, it may only observe itself in reflection. That is, as things stand. I propose that a sophisticated quantum computer would be able to perceive the world thus: it would be able to see at least as well as we can, it could be given the instrumentation to ‘see’ the microscopic world, and its quantum computing capability would let it see the layer below that. Seeing all layers of reality, I believe the AI would be able to observe what human consciousness is, and could probably compose the instrumentation necessary to give itself consciousness [if it doesn’t have it], if being a sophisticated quantum computer doesn’t in itself give it that [I think it would].


See my reply in the other thread. Intelligent reasoning yields the same results whatever the instrument performing it. It is rational to improve situations and to protect against overtly adverse ones [e.g. allow crazy artists; don’t allow psychopathic child-killers [or killers of humans and AI generally]].

This is the only part we disagree on. Please explain what said rationality is, after reading this post and my other one in the other thread, if you will!

But machines, humans, consciousness, and unconsciousness are not independent; that’s the flaw in the argument. There is a co-dependency.

Please explain the basis for this?

In my example: “rational” in the sense of “not emotional”. Machines are “not emotional”. They were and are produced merely for rational reasons by humans who applied and apply them economically, rationally.

Which argument do you mean? :confusion-scratchheadblue: - Machines, humans, and unconsciousness were not considered to be independent, and consciousness was considered to be partly independent.