Will machines completely replace all human beings?

A trick is like a sleight of hand, but isn’t all human intercourse like a sleight of hand? Isn’t the most convincing way to go the one most people subscribe to? And how can such subscriptions work, if not by the fiat of those who align themselves with the cause most beneficial to themselves and to those they can convince?
We are all tricksters born of apes, mimicking one another for our own singular benefit while proclaiming the benefit of others. Politics is a trick to get others to do your bidding. Can a machine ever become so altruistic as to align itself with the needs of other machines? I rather doubt it.

Machines have been designing new machines since the 1980s.
Try again.

And machines can adopt any strategy that a human can, including altruism (if they see a reason to). And “emotions” are merely subtle strategies.

And even if so, who is to say that a man, maybe a superman, will not come along to up the ante?

It has been predicted!

I hope so too.

A machine does not have to become altruistic in order to know what “altruistic” means, to draw conclusions, and, according to those conclusions, to decide and act in an “optimal” way. This “optimal” way is no problem for the machines, but it is for the humans.

Hope “dies” last. So yes, we hope and will keep hoping, Obe.

Oh okay, so let’s fling ourselves off a cliff because there is a chance that superman will fly by and save us. It’ll be fun!!

To become altruistic is not to act in accordance with the needs of others so as to optimize the situation; it is to act in order to benefit the largest number of other machines/people. People can differentiate between these two types of behavior, but in order to do that, machines would need to differentiate between qualifying and quantifying the varieties of experience. So far, machines have been restricted to the latter, and I do not see any conceivable technological advance that would overcome this hurdle.

Supermen have occurred in periods of extreme and perilous change in the past. There is no reason to conclude they will not occur again.

Consider that that is only because you know almost nothing about it. I, on the other hand, do. There is absolutely nothing that a person can do that a machine cannot be made to do much, much better, and most of it, if not all, has already been done. You are very far behind the curve.

Remind me to never allow you to pilot my plane. :confused:

Let me give you a simple example:

It is known that economists should be, and sometimes really are, rational humans. And what do economists mostly do? As far as possible, economists try to quantify every quality! But it is also known that economists are humans. Machines are much more rational than humans and their economists. Machines are much more efficient than humans and their economists. Put one and one together: machines are far more rational and far more efficient than humans and their economists; thus machines are also the much better economists.

Technologically speaking, the last two economic crises were caused by machines, although they got their numbers and data from humans, humans with no idea but with power.

I don’t think any machines are rational (which is not to say they are irrational). Their programming may follow logical lines (or not), but “rational”, to me, implies qualities not yet achieved, at least by any publicly revealed device. The computers that beat the best chess players still rely on a great deal of number crunching, even if they have some guiding heuristics. Rationality, it seems to me, includes some kind of overview of context: the ability to set goals, choose what to evaluate and what is outside the scope of the issue, set priorities at that abstract level, and then move in on the specific question involved. Machines may make good choices that they are programmed to make, but I would not call that rationality, nor is it theirs, yet.

It may come, it may come soon, but I haven’t seen any examples of it.

My personal computer is not in any way rational, no more so than my toaster, though it can perform more functions than my toaster.

Machines were created by humans because humans wanted the machines to work rationally for and/or instead of humans. Thus the reason for the existence of machines is a rational one.

If humans knew the exact origin, cause, and reason for their existence, they would give themselves a name which refers to that origin, cause, reason. You may compare it with the Hebrew name for the supposed “first human”: “Adam” = “loam”, “mud”, “clay”; so according to the Bible, the first human originated from loam. Therefore it is appropriate and correct to say: “machines originated from the rationality of humans”. Adam originated from loam; machines originated from the rationality of humans. If humans were not as rational (or as rationally oriented) as they are, then there would be no machines. And what machines do is rational (even when it relates to emotions). So one can really say: “machines are rational”.

Sure, the humans that made them were being rational.

As long as you don’t mean something that parallels the assertion “machines are angry” (IOW, that the adjective describes qualities and capabilities of the machine). That construction in English, with the word “rational”, implies not something about the purposes the makers had, but something about the qualities of the object in question, here a machine.

Humans created machines, but I would not say that machines are creative.

Or

A man created a sculpture out of a pile of rocks. The pile of rocks is not, however, creative.

Some scientists might make a bacterium for specific purposes. The scientists may very well have been rational when they made it. The bacterium, however, is not rational.

It’s a weird thing to say, unless one is making claims about the mind of the machine (or bacterium or pile of rocks) - that it is capable of reasoning.

You have forgotten one point: is what machines do rational or not?

The humans who made machines wanted them to be rational (and nothing else).
The humans who made bacteria for specific purposes wanted them to be such bacteria (and nothing else).

And they also did what humans wanted them to do.

Humans didn’t want machines to be like humans; they wanted them to do, more efficiently (!), what humans do; so they wanted them to be rational.
Humans don’t want bacteria to be like humans or to do what humans do.

Humans who want machines to be rational don’t want them to be exactly like humans; they want them to be more rational than humans.
But what if they replace all humans?

To be rational merely means that one “rations” out his actions in a logical progression toward a chosen goal. Machines, especially these days, are doing it all the time. AI drones even discern and choose targets before organizing an attack utilizing other support drones. They choose which objects to avoid, which of them is to fire first, how many rounds from each, which targets in what order, and so on.
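To make “rationing out” concrete, here is a toy sketch of goal-directed allocation. Every name in it (Target, threat, rounds_needed, plan_engagement) is hypothetical, invented purely for illustration, not any real drone software:

```python
# Toy sketch: "ration out" limited rounds across targets, highest threat first.
# All names and numbers are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    threat: float       # how dangerous the target is, 0..1
    rounds_needed: int  # estimated rounds to neutralize it

def plan_engagement(targets, rounds_available):
    """Order targets by threat and allocate rounds until the budget runs out."""
    plan = []
    for t in sorted(targets, key=lambda t: t.threat, reverse=True):
        if t.rounds_needed <= rounds_available:
            plan.append((t.name, t.rounds_needed))
            rounds_available -= t.rounds_needed
    return plan

targets = [Target("truck", 0.4, 3), Target("turret", 0.9, 5), Target("drone", 0.7, 2)]
print(plan_engagement(targets, rounds_available=8))
# [('turret', 5), ('drone', 2)] -- the chosen goal disciplines every step
```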

A situation is arising wherein machines must out-think and out-maneuver other machines. Battle droids are fighting both humans and other droids, so they must (on each side) get ever faster, more clever, and more insidious in their immediate actions and reactions, because the other side will. Of course, the human enemies die more easily than their machines, so they inadvertently go out first, even though most of the time machines will target the enemy’s machines first. In the end, only the best machines are left to fight it out. And by the time Jones’ machines have finally won, Jones is no longer living.

To be rational is not to restrict the meaning of that term to actions rationed out in a succeeding, progressive way; it is also to be able to evaluate and reprogram such actions, feeding succeeding “new” information into a calculus that reintegrates newer and newer information back into the program, thereby changing the qualitative aspect of the program. In other words, the machine should be able to change the nomenclature, the language, and the function of the program. I do not see how this level of technical feasibility is yet at hand. I am not talking about feeding differential programs of multiple variables, but about the internal modification of the program itself based on a changing functional analysis. This may be futuristically possible, but it is doubtful whether such a scenario would even be allowed to be developed, due to “SAL”-type safety concerns.
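(For what it’s worth, a very weak version of this already exists: a program can re-derive its own decision rule as new information arrives. Below is a minimal sketch, assuming nothing beyond plain Python, with all names hypothetical; whether regenerating a rule like this counts as changing the “nomenclature, language and function” of the program is exactly the question in dispute.)

```python
# Minimal sketch: the decision rule is regenerated from the data seen so far,
# rather than fixed once and for all. All names are hypothetical.

def make_rule(observations):
    """Re-derive the decision rule from everything observed so far."""
    threshold = sum(observations) / len(observations)  # new cut-off each time
    return lambda x: x > threshold

observations = [2.0, 4.0]
rule = make_rule(observations)

for reading in [5.0, 1.0, 9.0]:
    print(reading, "-> act" if rule(reading) else "-> wait")
    observations.append(reading)    # feed the new information back in...
    rule = make_rule(observations)  # ...and regenerate the rule itself
```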

One must “evaluate the actions” in order to ration them out. That is the only purpose in evaluating any action. Being able to update with oncoming new information is no magic trick either. That is what sensors are for. Intellacars are doing that right now in California. Autopilot mechanisms have been doing it for decades. Rewriting algorithms on the fly is far more sophisticated, but it also happens in many systems. Choosing the better way to store or restore altered data is a common AI function. Patterns of common use and anticipated use play a role in choosing which way to store data. Internet systems are doing it every day.
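A toy sketch of that last point, with patterns of use deciding where data lives. The AdaptiveStore class and everything in it are hypothetical, invented for illustration:

```python
# Toy sketch: frequently used keys migrate to fast "hot" storage,
# the rest stay in slow "cold" storage. Hypothetical and much simplified.
from collections import Counter

class AdaptiveStore:
    def __init__(self, hot_limit=2):
        self.hot, self.cold = {}, {}
        self.hits = Counter()
        self.hot_limit = hot_limit

    def put(self, key, value):
        self.cold[key] = value  # everything starts in cold storage

    def get(self, key):
        self.hits[key] += 1
        value = self.hot[key] if key in self.hot else self.cold.get(key)
        self._rebalance()
        return value

    def _rebalance(self):
        """Promote the most-used keys to hot storage; demote the rest."""
        hottest = {k for k, _ in self.hits.most_common(self.hot_limit)}
        for k in list(self.cold):
            if k in hottest:
                self.hot[k] = self.cold.pop(k)
        for k in list(self.hot):
            if k not in hottest:
                self.cold[k] = self.hot.pop(k)

store = AdaptiveStore()
store.put("a", 1); store.put("b", 2); store.put("c", 3)
for key in ["a", "a", "b", "a", "c"]:
    store.get(key)
print(sorted(store.hot))  # ['a', 'b'] -- usage patterns chose the storage
```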

Just to give you a clue: in the early 1980s, I wrote a small operating system for diagnosing and trouble-shooting a telephone system. The intent was to allow a technician to electronically “play with” any part of the hardware so that he could isolate and monitor the telephone system while it was still in use. So the small operating system also had to have a language with which to communicate with the technician. And as the technician told the diagnostic system what he wanted done, the program would figure out which lower-level assembly language codes to use in order to assemble a program to meet the request, then lay that program into place and run it as per the technician’s orders.

The point is that my little operating system and communication program was writing and implementing code for the technician, 30 years ago, so that he wouldn’t have to learn the variety of languages potentially involved. The system was “hardware portable”. And in an automatic mode, it would search through all of the possible ports and memory, testing for any kind of anomaly. It could identify anomalies and ask the technician what he wanted to do about them, and what kind of image he wanted to see on his oscilloscope.
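The flavor of it, reconstructed as a toy interpreter. This is a hypothetical reconstruction for illustration, not the original code: a technician-level request expands into a sequence of lower-level primitives, which are then laid into place and run:

```python
# Toy sketch: high-level technician requests are "assembled" into
# low-level steps. A hypothetical reconstruction, not the 1980s original.

PRIMITIVES = {
    "select": lambda port: f"SEL  {port:02X}",
    "read":   lambda port: f"IN   {port:02X}",
    "write":  lambda port: f"OUT  {port:02X}",
}

# Each technician-level command expands into a list of primitive steps.
COMMANDS = {
    "monitor": ["select", "read"],
    "probe":   ["select", "write", "read"],
}

def assemble(command, port):
    """Expand a high-level request into low-level pseudo-assembly."""
    return [PRIMITIVES[step](port) for step in COMMANDS[command]]

for line in assemble("probe", port=0x1F):
    print(line)
# SEL  1F
# OUT  1F
# IN   1F
```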

Today, things are far, far beyond that little device. There is no decision that you or anyone can make, by any means, no matter how complicated you felt the decision was, for which even I couldn’t have written a program to make it instead and automatically go on to the next. Every single thing you do can be modeled and automatically emulated. Machines today can even do the modeling and write the emulation code themselves. EVERYTHING can be automated, even learning new things to automate (even today).

The software world doesn’t actually have to invent anything new. They like to at times. It’s nice to get things a little faster and/or smaller, but almost every new item is merely a rearrangement of already documented items, in a new package to use or sell in a new market.

They are not inventing androids. They are assembling them to do different things in different ways. They are experimenting with which way they want. It is not an issue of having to invent anything.

That is absolutely right. Actually, software programs are not true inventions, as we understand that word. They are merely more and more complicated presentations of what was invented once, and that was binary language. We can put that into the category of invention, if we like.

New software programs are merely the repetition of that invention in new verticals.

Like the concept of combustion engines: one can use it in a bike. One can use a bigger version in a car, and an even bigger version in a tank too. But the engine of a tank is not a true invention. It is the same concept that was used in the bike.

The same goes for software programs. Unless their basic binary concept changes, they are all the same except in complexity.

with love,
sanjay

Not being a software or hardware specialist in any sense of the word, I can find no flaw in your presentation, James. However, the point, and this is THE critical point in any system, be it human or machine-like, is that we have not yet passed into the stage where machines are ‘taking over’. Now I feel I am letting you down, with an over-generalization as a response to technical analysis, but with a goal of UNIFICATION, of de-differentiating the technical and the logical, the one should have a bearing on the other.

So to imply that absolute control is still sustained by the programmer over the program is, I feel, still a valid assumption, since if it were otherwise, machines would have ways to punish, by override, any attempt at intrusive control. If that were the case, the system would most definitely be shut down, or at least modified to the extent that such an unlikely event could be prevented.

2001: A Space Odyssey is still, at this point, wildly futuristic, in my opinion, but then I am only trying to make philosophical sense.

Obe, in a war where machines are fighting machines (already begun throughout the world), would you program your machines to wait for each decision to be made by you, when the enemy is allowing their machines to make their own decisions much more quickly?

See, I just don’t get it. Machines, like humans, have Achilles’ heels. Programs in a machine can be stopped. A virus soup can be sent to any machine that is programmed to make decisions. Just hack the darn thing with viruses; that will stop it, slow it down, or turn it. You would have to be very careful about which viral commands are sent. A barrage of them would eventually work.