Will machines completely replace all human beings?

Oh, thank God. Imagine if most people were those rancid, inbred hemophiliacs with the white wigs, snuff and so much boredom they need to be cruel to get wet or hard.

What’s wrong with robots? They haven’t committed any crimes and aren’t likely to be religious, political or hypocritical liars. Why give them a resume of evil before their careers have even started? Since there is so little empathy among humans for humans, and robots are unlikely to feel hate (assuming no human contamination), I can’t really object if that became the future. Far preferable to Muslims, Koran in hand, taking over most of Europe within fifty years and creating their own little robots programmed according to scripture.

Neutral is monstrous.

I would not let a neutral sentient entity be alone with my children.

Robots will, in the end, do what the powerful program them to do AND/OR whatever ‘who knows’ behaviors they develop due to ‘learning’, glitches in programming, errors, failures to foresee, and so on.

It’s like giving a virus a nuke.

I mean, viruses are more or less machines. We might even make nice viruses that are intended to make our cells manufacture antibodies against dangerous viruses.

I wouldn’t let a virus babysit my kid, precisely because it has no human feelings.

For the same reasons I do not want human-made (read: flawed), empathy-less, neutral machines to have a tremendous amount of power.

They might kill everyone because it seemed logical to them.

They might do it in error.

They might do it with as much care and concern as we take turning on a light switch.

Imagine that the Dallas, Texas sheriffs in this video are androids obeying distant (possibly even foreign nationals’) directives:

[youtube]http://www.youtube.com/watch?v=e1YiUNvq5r4[/youtube]

What chance would anyone have at all? No one could be “sued”. No one in charge is even known, much less worried whether anyone got hurt. The androids certainly wouldn’t care. And because of the myth concerning the massive anti-corruption programming, there would be no doubt that only the people were causing anything bad, because androids are programmed to not be able to cause bad.

Currently the police are bad merely because by law, only the police can sit in judgment of the police behavior. In a legal contest between a citizen and an officer, even if found unquestionably guilty, the degree of punishment is entirely up to the police. In one case in Dallas, I think during the 90’s, an officer was convicted of murdering an innocent man (by stomping on his neck after he had been strapped down). The officer then placed a gun next to the body and was convicted of planting false evidence. The officer, being guilty, was then sentenced: 6 months suspension.

It is a formula for absolute power and dis-compassion over the people - Comply or Die (and it’s your fault).

[i]James S Saint » Sat May 24, 2014 7:15 am

Imagine that the Dallas, Texas sheriffs in this video are androids obeying distant (possibly even foreign national’s) directives;
[/i]

Well - they sort of are doing that. Each of them is paid to obey a set of rules and commands, and to follow a set of endemic assumptions and ideological pathways of behaviour.
It would be near impossible to program an android to do that. In the living case all you have to do is put an advert in the media for some thugs who like to follow a boss, and all the engineering and programming is pretty much automatic: you get a long line of naturally evolved automata that go further than necessary to do their job because (unlike an android) they pretty much enjoy breaking heads and giving out tickets.
Why go to the expense of building an android when you have a ready source of cheap, expendable biological organisms to do the job for next to nothing, who are self-maintaining and easy to mould?

Then you mean it not biologically, but culturally. It depends on the semantics of a language, and in both the German and the English language both is possible: (1) “Rasse” / “race” with a biological meaning and (2) “Rasse” / “race” with a cultural meaning. When I said “we should not say „new race“ because androids are not human beings, but machines of human design”, then I meant the word “race” with a biological meaning, in a biological sense.

In German you can say someone “ist rassig” (“is racy”) or “hat Rasse” (“has race”), and that is an example with both meanings due to the fact that someone is racy or has race because of (1) biological attributes, or (2) cultural attributes, or even both.

Machines are a product of human beings; they are not biological, but cultural. They don’t evolve biologically, but culturally. A technique / technology of a certain culture produced, produces, and will produce them, and that includes that machines can also be produced by other machines which are produced by human beings, or by machines which are produced by human beings … and so on.

Because this development is irreversible, especially once the machines take over. Don’t open Pandora’s box!

Yes, but don’t forget Pandora’s box!

Yes, but please don’t forget that emotions have two sides: a good side and a bad side!

Empathy doubtlessly belongs to the good side, but can easily be changed into its contrary.

Yes.

Let’s not call them robots in the first place, which are nothing but mobile computers programmed by humans and therefore dangerous. In this discussion I was thinking more in terms of artificial intelligence machines created to eventually program themselves, only more successfully than we have done to ourselves throughout history.

As for Pandora’s box, that has been open since Adam and Eve (figuratively) and never closed since then. Would AI make the world worse than it already is? I don’t think so. The best thing one can say about humans is that they themselves are nothing more than malfunctioning machines.

Yeah right! I don’t notice anything here that wasn’t done constantly for the last ten thousand years. Actually one can add a few more “human defects” to this summary that even a robot wouldn’t think of.

Why do we fear intelligent machines so much in the first place? Could it be because initially we would be “infecting” them with our own codes, and if that’s the case, what then amounts to the greater evil, us or them?

70% OF HUMANS WILL BE REPLACED BY ROBOTS WITHIN 3 GENERATIONS?

Puzzling assertion. Since 95% of humans are regarded by the economic system as expendable, why buy an expensive android that you have to periodically replace, fuel, program, and maintain? Why not just use humans and throw them away when they are too old?
Humans are self-sustaining and outlast all known machines.

You have to ask who are all these androids serving? If you don’t have any humans, why have robots?

People have been making this sort of claim since the Luddites destroyed the weaving machines, yet it has not happened.

Old people are socialists, so they will develop a social system together with the machines, which will replace their unborn children. That is the only replacement which will happen. And then when they die there will be neither machines nor people. End of story.

How insane.
As people grow old, they also grow conservative, and tend to leave socialist ideas behind, wanting to preserve and justify their looting of the future economy with what they call “pensions”.
Old people are seldom socialist.

Are you a misanthrope or even a misanthropist? I reverse your sentence and say (merely in order to show both sides): The best thing one can say about machines is that they themselves are nothing more than malfunctioning humans.

You’re missing the point. You seemed to be arguing that neutrality and lack of hate are pluses. I don’t think so. Sure, humans have acted horrifically. Should we add a new agent THAT HAS NO EMPATHY AT ALL to a world with the problems you want us to focus on?

You have no idea what a robot would not think of. Perhaps an AI-driven robot would be curious about torturing, killing, and resurrecting someone, or everyone, for millions of years to see what happens to their minds.

[i]Why do we fear intelligent machines so much in the first place?[/i]

Well, I answered that, at least in part.

[i]Could it be because initially we would be “infecting” them with our own codes, and if that’s the case, what then amounts to the greater evil, us or them?[/i]

So you want to focus on blame. I was focusing on consequences. Sure, the fact that humans would create them - and not just humans, but humans at the behest of a tiny segment of the human population, one that has shown repeated disdain for human and other life - that’s a factor. So yes, in part it is the combination of our flaws and hubris that makes me very skeptical about what we would create. Then all the possible errors, then what I focused on about having a lot of entities with great power and no empathy.

Generally with humans you have to teach them NOT to feel empathy, usually using some ideology that classifies other groups as non-human, evil, or sub-human, and this justifies the coming violence. With robots there is no empathy or caution to override in the first place.

I don’t see why you take fear and concern about robots and AIs as some kind of approval of all human behavior. This is a false dilemma. One can be skeptical about the latter and have tremendous concern about the former.

If both conditions are equal, then they cancel each other out… so what’s the problem with AI that we don’t have with humans, since there’s seemingly no remainder to this division?

This makes a good point! But I think there is a very potent difference between the two. To eviscerate empathy from an individual through ideology or whatever means is to purposely create an evil in its wake. What other reason can there be when humans are deliberately deprived of their humanity by other “humans”?

An AI machine, on the other hand, which was never endowed with the requisite emotions in the first place, is not as a consequence brainwashed into committing atrocities. In fact I would think of an AI entity more as a kind of “Data” in Star Trek, attempting to educate itself about any new experience, even those as mysterious as emotion.

Saw this article, relevant to the thread. Some good info tucked into various places.

wtfrly.com/2014/05/23/the-robots … 4E4j3J_t1Y

Thank you for that link, and here comes The Economic Collapse:

"The Robots Are Coming, And They Are Replacing …

47 percent of all U.S. jobs could be automated within the next 20 years.

47 percent?

That is crazy.

What will the middle class do as their jobs are taken away?

The world that we live in is becoming a radically different place than the one that we grew up in.

The robots are coming, and they are going to take millions of our jobs." - The Economic Collapse

I think there will be weird and bad social effects when service-type jobs get taken over. I mean, it already feels like one is dealing with machines, but imagine when checkout at the supermarket, coffee at Starbucks, and so on are all handled by robots. The plastification of everything. The derealization of everything. That choking feeling you get in a mall, where everything is a copy of a copy of a copy, but everywhere. Or that sad, empty, fake, void feeling on suburban streets, but everywhere. “I’ve seen the future and it is murder,” to quote Leonard Cohen. Though this murder is the murder by lack of feeling and realness.