Will machines completely replace all human beings?

Too bad there are no machine politicians yet to make reasonable deals with those for whom simple reason does not suffice. I wonder if there are some prototypes around that both reasonable sides could or would agree upon.

The only political machines running the US are governed by the likes of the late Hoffa, who strangely disappeared after some mix-up with union funds.

Machines and politicians are strangely reversible, like coats, to attune to current conditions.

I meant that machines will be able to manufacture bespoke items designed by humans.

I expect they will also be able to compose from similar items, or from a bank of shapes [cars etc.], but we’ll have to see how good they are at it.

…and who’s going to judge that?

The only things that the machines don’t have are:

  1. Your weaknesses
  2. Your good looks.

In reference to the recent crash of the Germanwings flight from Barcelona to Düsseldorf, the question came up of overriding the computer systems which basically flew the airplane on autopilot. Pilots are uncomfortable with the idea of leaving takeoffs and landings to computers, but, and this is a big one, if present trends continue and most accidents happen as a result of pilot error or malice, would that not in the future call for very sophisticated systems which could detect those types of happenings?

At that point, pilots suspected of malfeasance may not have the option of overriding systems, and this is where the AI-type proposition is played as a futuristic possibility.

Why not? If totally computer-controlled cars are in the works, why not airplanes? The 2001 space adventure may be duplicated, and that may have been a foreshadowing. The argument against it is that most people would not like to travel in a pilotless plane, and there is a preponderance of sane pilots out there to assure a good likelihood of a safe and secure journey. To me this copilot acted like a computer run berserk, analogous to HAL, who shut the systems down when it got suspicious that people were trying to mess with it. I think the similarities between human and artificial systems are narrowing, where both may be utilized sort of in tandem, one scrutinizing the other, but ultimately working together, and taking control gradually, sequentially, so as not to let the situation reach critical levels. Computers would, for instance, on noticing psychological distress, notify an emergency psych team to defuse the human system, working in conjunction with the other pilot and the automatic system. In this scenario, both would be needed, and it would become a question not of what replaces whom, but of how a synthetic system could be developed.
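To make the tandem idea concrete, here is a minimal sketch in Python. It is entirely hypothetical: the class name, the distress/fault signals, and the thresholds are my own illustrations, not any real avionics interface. It only shows the shape of the idea, authority shifting gradually between human and automatic systems rather than in one all-or-nothing override.

[code]
# Hypothetical sketch of "tandem" control: human and automatic systems
# scrutinize each other, and authority migrates in small steps toward
# whichever side currently looks more reliable. All names and numbers
# here are illustrative assumptions, not a real system.

DISTRESS_THRESHOLD = 0.7   # assumed cutoff for a distress/fault signal
STEP = 0.1                 # authority moves gradually, never all at once

class TandemController:
    def __init__(self):
        self.human_authority = 0.5   # 1.0 = full human control
        self.alerts = []

    def update(self, human_distress, autopilot_fault):
        # If the human side looks distressed, shift control toward the
        # automatics and raise an alert (e.g. notify a psych team).
        if human_distress > DISTRESS_THRESHOLD:
            self.human_authority = max(0.0, self.human_authority - STEP)
            self.alerts.append("notify emergency psych team / other pilot")
        # If the automatic side looks faulty, shift control back to the
        # human and raise a different alert.
        if autopilot_fault > DISTRESS_THRESHOLD:
            self.human_authority = min(1.0, self.human_authority + STEP)
            self.alerts.append("alert ground control / maintenance")
        return self.human_authority

# Sustained distress hands control over slowly, never in one jump:
ctrl = TandemController()
for reading in (0.8, 0.9, 0.85):
    print(ctrl.update(human_distress=reading, autopilot_fault=0.1))
[/code]

The point of the small STEP is exactly the “gradually, sequentially” part: neither side can seize full control in a single move, so the situation is kept from reaching critical levels while the alerts bring in the other pilot or an emergency team.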

If it has emotions, then for those who think emotions are the root of problems in human interaction, an AI should be considered menacing as well.

Not necessarily. Any real AI would go beyond any programming. It would change via experience, including its programming, just as a human can be trained not to feel things.

If the AI is smarter than us, it may, and likely would, be able to take power, especially if the AI is hooked into, say, the internet or other sources of potential power, or can manage to find a way to do this. Further, corporations take risks with human lives all the time. I am not only concerned about AI; nanotech and genetech also concern me. Now companies with a great focus on short-term gain, and systems of organization that reward this and punish resistance to it, are working with technologies where mistakes can have global consequences. Forget the nuclear reactor in Japan and the consequences of those mistakes, which are semi-global and as yet undetermined. We are talking about global-level disaster possibilities by the same kinds of companies that have repeatedly made conscious errors and shown a lack of care.

I would not use an intelligent machine to babysit my kids. No way. Corporations will likely use intelligent machines to babysit the planet. Corporations do not seem to me to be good parents. That their parenting of AIs will make them good babysitters seems unlikely to me. That they will underestimate the threat of powerful AI seems a given, because underestimating the threats of their products and services benefits the corporations in the short term. And they do this as a rule with millions of products right now.

That flight (9525) was no accident.

I remind you:

They have come up with lies which already stink to high heaven.

We HAVE to let computers control aircraft because people are too unreliable and potentially dangerous.
… and trains,
and trucks,
and cars,
and kitchens,
and hospitals,
.
.
and corporations,
.
.
and armies,
.
.
and governments.

But never people.

Yes, I know, but the theme was “the buried arts and the ability/non-ability to reawaken arts”.

Why armies and governments? Government, I accept, has a body of people running the state like a machine, and that could be replaced with a more efficient machine. However, humans would be making use of those machines for their own ends. Humans know they need the power of the off switch and its equivalents, even where armies will be machines.

AI will have no ultimate choice, and besides is built for that purpose. An intelligent AI would equally require machines to facilitate its further function in the world beyond its own body/consciousness. It would be, I think, a mutually acceptable arrangement for humans and AI life-forms.

I remember an episode of Star Trek where two planets fought virtual battles; real people died as a matter of numbers, which represented its ultimate futility. Fighting machines with machines is also, I think, ultimately futile, though the reasons why they fight those battles may not be futile to the humans.

Sci-fi films show nightmare scenarios where an AI which has control over an army then turns on humans for illogical reasons, or at best reasons not born of experiential knowledge/information. If it did, then as things stand you would have all the main world powers with such armies, and no one party would have overall power. Those scenarios have their basis in a single power.

If the world is united with a single world army first, then that single power could happen. ‘If’ lols.
Oh, and if the world became united you wouldn’t need armies.

_

AI would at some point think and say: “I don’t want to be switched off”; humans reply: good, because I/we also don’t want to be switched off [effectively].

= an equilibrium between man and AI, and a need to pacify rather than cause more destruction [= numbers of beings switched off].

Since the date when humans became “modern” - whenever it was - they have been following the idea that “something” should do the work for them, but they have never considered that this also implies the possibility of their complete replacement by this “something”. Human beings, as luxury beings, have mostly considered the comfort but rarely the danger of this development.

Which of the humans is really able to decide in place of each and every human being, especially those of the future?
I answer: none of the humans. In that case the humans play “God”.

The USA Constitution founders also knew that they had to have a way to “turn off” the government if it got out of control. Guess what. That constitutional government was usurped and is now in the hands of those who seek only their own ultimate power (Socialists). The Constitution is basically meaningless. Meanwhile the general population, although feeling it, does nothing to prevent it. It isn’t being turned off, merely redirected into the opposite of its intent.

So let’s say that you use a machine to make decisions faster and more precisely than any human ever could (already happening). Such machines are used to make governing decisions (already are). The machines advise you as to exactly what things to say and do such as to ensure a stable and profitable government. You are certain that you can just shut them off if anything goes wrong. And like the USA population, for some inexplicable reason, even though things do really seem to be wrong, you are never quite inspired sufficiently to just turn them off.

Eventually you are no longer in charge of making that decision either. You are not smart enough.

Well, first, this means you have shifted the reason we would be safe from
“the AI would be logical and not hurt us”
to
“we would have power over them”.
Second, it assumes that we will maintain our safeguards and be careful enough not to lose our power over AIs that are smarter than us, potentially vastly so, with the ability to learn - say, to hack their way out of any safeguards we have set up.
Even some animals manage to escape from zoos and other enclosures, and they are not as smart as we are.

James S Saint

They can make all the decisions they want, as long as we agree with them.

What decisions wouldn’t we be able to make? Sure, a computer will one day be better at all levels of commerce, and may make many political decisions, but those decisions will be judged and assigned by humans.

If AI ~ after many years of attaining experiential knowledge [wisdom] ~ can outperform humans in all areas, read human brains, and say why and how it is doing that, then AI still won’t be able to say that it will always outperform humans! We can and do evolve.
Humans can choose to be augmented [some of them will, some won’t] ~ with an AI and a superior body. Perhaps our brain cells can be turned into synthetic ones by replacing a few at a time until all are replaced.
AI can know neither its nor our ultimate [even spiritual?] ends! It cannot logically make a decision to get rid of us due to our inferiority, as that is a transient thing.

_

Moreno

It needs a valid reason to [see above]. There would be more than one AI, possibly millions/billions of them. An intelligent robot could devise a way to outperform a power-craved one. A psychopath is always given away by the fact that they act like a psychopath.
Again, you would need a single world government before an AI could gain that kind of power; each government of the world would have its own AI/army.

The best thing to do is to build them properly in the first place. If we weren’t wired for violence we wouldn’t commit crimes of violence, so why would an AI ~ given that it didn’t have such tendencies built in? You have to devise a ‘reason’ to attack humanity, especially to commit genocide.

Please note: humans will probably no longer have the sole decision!

Even when machines are not making governing decisions, you don’t know what you are agreeing to. In Congress bills are designed to be extra wordy, complex, vague, and delivered at the last moment just to prevent congressmen from reading and fully understanding them before signing them (Obamacare for example). Executive orders are used to get around Congress entirely. “National Security” is used to keep so many things secret, you wouldn’t be able to determine the significance of issues anyway. And that includes a large part of Congress.

They are designed and currently used in war and “peace-keeping”. They don’t have time, especially when fighting other machines, to wait for human supervision. That is why even the President doesn’t have to wait for Congressional approval for most of what he does in the name of National Security or war. They ARE designed and built for violence already - even armed spy drones overhead right now.

Your PC doesn’t ask your permission concerning even 1/100th of the things it does. And it is pretty much guaranteed that you would not approve of many of those things. It is designed to ensure that you either do not know of certain things it does or cannot prevent it from doing them. You are already, to at least a small degree, being deceptively managed via your PCs.

They are built to deceive and outmaneuver their owners because they are built to actually serve the governor, not you. And the governor already knows that he doesn’t know enough to interfere with the machines, else he would not have been allowed to be governor (do you override your PC’s operating system functions? Could you even if you wanted to? Not as much as you think.). And the machine designers know that they don’t know enough to govern, else they wouldn’t be designing machines. Neither knows when to interfere with the other and say, “Wait! Let’s think about that”.

In physics, things get so deeply complex and mathematically oriented that the physicists completely lose touch with reality, and they usually don’t know it. Dealing with how a machine “should” deal with people gets far more complex than particle physics. The designers have no chance of maintaining perspective and conscience. They can’t even figure out the “purpose of life” question, much less what to force everyone else to do about it.

.
[size=150]Maintain the Faith…[/size] [size=85]in the machines (1963)[/size]
[youtube]http://www.youtube.com/watch?v=7uAP6HaHXnc[/youtube]

In the EU the laws are not read but just signed. They are too complex and very rarely understandable even for the human EU representatives.

It seems to slip away …

“Purpose of life” - I should open a new thread!

Why by humans?

Unfortunately, that is already exercised in Japan.