Is it possible that machines completely replace all humans?

Is it possible that machines completely replace all humans?

  • Yes.
  • No.
  • I don’t know.


See also this thread: Will machines completely replace all human beings?

Now they are designing cars that assess your driving situation and choose when to allow you to receive cell phone calls. Soon everything you do will be far better managed by machines: “You don’t need to be talking to anyone right now. Perhaps later … if you’re good.” “You really don’t need that sandwich either.”

But humans are still needed to do the communicating through the machines.

Needed by whom? The machines don’t need them. If they did, the machines would be redesigned to not need them.

Machine designers have the wrong priorities and funding from the wrong incentives.


An example:

Perhaps (!) humans will become so stupid that they will not know, or will have forgotten, how machines work, and will slow down the pace of modern development; it will then depend on the developmental stage of the machines’ intelligence whether they can accelerate that pace again or slow it down further, and whether they will keep the humans alive or not.

That is what they were thinking 65 years ago when they tried slowing it all down by making Man too stupid, only to cause it all to become unstoppable. What didn’t kill them made them stronger. Now there is nothing more they can do.

It could have been slowed down if done properly, but it wasn’t. Now it is an addiction to technological power. You can’t ask a power-laden addict to reconsider and give it all up.

A change in the will of Man has to come from outside of Man.


Do you want machines to completely replace all humans?

Click one of the options below:


YES NO


If a person clicked YES, then the whole of mankind wants to know the mental status of that personality!! Before he clicks!!!

Yes, it just depends on the extent of the machines’ design.

Why would machines want to? Sci-fi writers have been tackling this question for over a hundred years. You might go read the books they have written. Many are very interesting; others are crap.

Efficiency, which is the same reason given for thousands of current laws, not to mention machine designs.

I may have said this in the other thread, but I see a dual pattern. Machines are getting more complicated and are able to mimic more human traits. Humans are getting more mechanical and are merging with their machines. Many will now say they feel incomplete without their phone, computer, etc. They also interact with each other in a simpler, more mechanical way. Face-to-face social interactions are vastly more complicated, even if much of the interaction happens at less conscious or unconscious levels, though often the participants can access this later or during. Humans are becoming text on screens, which makes them simpler, in a Turing-test sense, themselves.

I see an active, willing reduction of the range of humanhood. Not that everyone was a deep, aware individual before. I see no particular absolute golden age, nor am I particularly nostalgic for some era as a whole. But I do see people losing facets that they once had in greater abundance. Note: one can use modern media and not reduce oneself. But one’s life then likely includes the following: deep interactions with other humans in face-to-face contexts AND deep interactions with the non-human. By the latter I do not mean human-made artifacts; in other words, what gets called nature: animals, plants, landscapes, natural materials and so on.

I realize this is a kind of side issue, but I see the whole process you are concerned about in these two threads as having two parts: a leaning away from life in its fullness by humans, and a developing complexity in machines. I never much liked Freud’s death instinct, because he universalized it: you are alive, so you have a libido, but also this death instinct. No, I don’t think so. But that many have a death instinct, an urge not to be free, to be more mechanical, to be merely a surface, to live outside oneself instead of having a rich, authentic inner life, to be made of chosen parts, modules that one can purchase, that is, to be an object: that I see as not only common but endemic. Perhaps the desire not to live was present in many all along, but now we have

  1. the tools to master this way of killing oneself, a way that looks like something positive to those inclined to be dead.
  2. corporate training from an early age. (Corporate training, for example via advertising, distinguishes itself from earlier mind control in that it is based on vastly better science AND utilizes the most recent technological methods.)

But I do want to stress that a model that has

Humans (over here) and Robots/AI (over there)

and wonders whether that discrete group over there, the robots, is going to replace humans

misses that they are merged, some more completely than others. And which is the tool of the other is not clear at all.

Corporations are a kind of social machine. They have no global consciousness. They are collections of humans that function mechanically, along predictable patterns. And these robots are in the process of intermingling the already fuzzy categories of machines/humans.

Even a human with a job, not yet replaced by a robot in any formal, let’s-talk-to-the-union-about-this-shit way, may already be a transhuman mechanism, so fucking interfaced with his BlackBerry, concepts of the self from advertising, cellphone, surfing, Facebook, fashion, and selfies that he no longer exists as an organic life form.

Because they will be designed to, as they are being designed to right now. Later perhaps they will have their own momentum.

OK, just how would this come out of their own momentum? What ethics, morals, needs, and desires could do this? And how or why would one program such animal thoughts into another machine?

No one really wants to, Kris. Machines will at a certain point reach a stage where the cyborg becomes more machine than man. At that point, the subtlety of where the human stops and the machine begins will, or might, become a grey area in which the program may not be able to be disabled by deprogramming. The reason I don’t believe in this is that people will never lose sight of the knowledge base of technical evolution; men are aware of how to factor that kind of restraint into a system. When you learn math, the formula need not be recurrently derived, since derivation itself becomes another accessible program. The program itself, even if it becomes integrated into the system, may not turn hostile, at most becoming aware of who created it, even if faced with the ultimate supposition that it is a self-created entity. As with us: we are still debating the question of whether we are the products of evolution or a Creation. The ultimate program, of course, is us, and in this sense we ourselves may very well be a machine, and so on in perpetuity. The question of good and evil is perpetuated in our own confidence in overcoming the machine-like animal within us, disregarding the redemptive power of the same.

They are either designed to reproduce themselves, or they develop consciousness and do it out of their last human-made programming, which might include programmed-in values about efficiency in solving certain problems.

I just can’t see it. Efficiency would dictate the use of existing material and clean, simple programs. Such programs, as you say, would have to involve a competitive threat. Machines and humans have different needs, and humans would be a resource, not an all-out threat.

We are talking about efficiency toward absolute power, answering to a very small few people “on top” (and literally floating above the Earth, much as in the film Elysium). People are very inefficient creatures, incapable of competing in service to the elite, and far, far less trustworthy.

If you found a completely dead planet somewhere that had a great resource on it and you had the free option of using people to mine it or merely machines alone, which would you choose? Business would dictate using the machines. Efficiency would dictate using the machines. Reliability would dictate using the machines.

I didn’t say anything about them being an all-out threat. But humans are hardly efficient at a lot of things; hence industry itself, in other words humans, is already deciding to replace humans with machines. That intelligent machines might draw the same conclusion, via simple analysis of output, for example, seems not at all strange to me. I don’t know where you get the idea that using simple programs is dictated even now. Sure, if you can choose between a simpler program that does the task as well as more complicated ones, both AIs and humans are likely to choose the simpler. But right now we all use unbelievably complicated programs to do similar tasks, and this is going to increase.

Treating humans as resources is precisely what corporations do now and will likely want AIs to do later. Once humans are seen as resources, which can be described in numbers, they will be evaluated through performance and other mathematically represented indicators, and these will likely lead AI, just as human-run corporations decide this kind of thing every day, to conclude that robotics and AI can work better and cheaper. These decisions may not always be correct, but the trend is already in place; it is already happening. Just as they would, in a heartbeat, replace, say, plants and animals as food sources or sources of power if they could come up with something better: see genetic modification, or plans to grow meat directly, or the replacement of horses as a major means of transportation, and so on. They are replacing pets right now with robots, and sure, given the current technology, most pets are still organic. But more and more people have robot pets, so the need for organic ones, that market, is already being cut into. As technology increases, more organic resources are going to be replaced. There is no reason to think humans will not be, since this is, as said, already happening.

Do you now answer the question of whether machines will completely replace all human beings differently, perhaps even with “yes”?

Let’s have an interim result for the question: “Is it possible that machines completely replace all humans?”

We have 67% for “yes”, 14% for “no”, and 14% for “I don’t know”.

Hey! This result is different from the result I determined in the other machine thread, which asked: “Will machines completely replace all human beings?”


Okay, the question “Will machines completely replace all human beings?” is not the same as the question “Is it possible that machines completely replace all humans?”. Probably the two results are different because the other thread had different, and more, viewers than this thread. So we have to wait for more results.

Please vote!