Will machines completely replace all human beings?

Yes, dependence can be defined in this way: not being able to put oneself outside the bounded situation between the observer and the evolving, changing subject-object. And similarly the opposite with independence, as being able to. This is a primal definition.

I do not know what your point is. Excuse me.

As long as we humans do not know whether consciousness is dependent or independent, we can say that consciousness is partly independent or partly dependent, but not that it is absolutely independent or absolutely dependent - similar to the will as a relatively free will or a relatively unfree will.

And look at the machines again! Study the machines!

Look at the way we treat other species, ones we consider ourselves superior to due to, amongst other rationales, intelligence. Let alone that a machine would likely have no need for a rationale to justify its choices. Merely goals and means.

All it will take is one moment's choice to use some kind of nanotech virus or other technology. We would need machines, all of those with the ability, to NEVER decide to destroy all humans. I mean, they might decide that our bodies were more valuable as fuel for a particular project.

Eeeew, thanks for that visual, Moreno. I think I have read too much sci-fi. I get a gruesome version of Soylent Green running amok now, thanks… :slight_smile:

Yes, but an AI wouldn’t be superior to humans; it would, though, be logical and reasoned. It would know the arguments, e.g. that ‘two perfect fighters would cancel each other out’; specifically, that a chaos aspect of mind + logic and reason = the creative mind, and that it wouldn’t exist without that. I don’t think it would harm humans any more than a child would its parent. It would know that we are embodied consciousness like it is, and that to be human is to be creative, which requires nurture; ergo, the reasoned thing to do is to nurture its human adoptive parents.
Its consciousness came into being at its birth/inception/when it was switched on, and human consciousness also arises in some such manner. We are both intelligent, conscious life-forms, different, but where there is a lack in us [randomness], there is equally the same lack in AI ~ because it doesn’t have that.

Now we could ask if AI could have randomness, and more neurons in, e.g., the language, maths and visual/spatial areas, in addition to what humans have! Or would that be a more confused and slower [longer connections] machine than our biological ones? AI would know that if it can increase its capacity, then humans can equally increase theirs! This would be another reason to nurture, though perhaps to demand[?] we ‘improve’ ourselves. The limits could get ugly.


The following video is about the “wonderful and terrifying implications of computers that can learn”:

[youtube]http://www.youtube.com/watch?v=t4kyRyKyOpo[/youtube]

I said consider ourselves superior to. And I said based on intelligence, amongst other things. Machines could likely make the same evaluation, especially given that the men who make them have these values. There is no reason to assume they will be logical and reasoned when it comes to ethical decisions. Or better put, it is precisely logic and reason disconnected from emotions that is the danger.

Most mass killings by humans have been based on what is seen as logical. Felt responses can certainly lead to violence, but to kill huge numbers you need to detach into logic and reasoning.

Logic and reason alone cannot be creative; they can simply extend, logically, from the known.

Children, if they had the power, would harm parents. Children, when they get power, do harm parents. But a big difference is that human children have built-in empathy (and guilt and shame), and these prevent the logical analyses that would make this the rule. Machines would not need to have these emotional/psychic patterns to prevent them. Corporations will be making these things. Corporations already have an amoral relation to humans - they will manipulate for war for profit, for example.

In fact, you can look at corporations as sociological-level machines, and they quite often kill.

We know that animals are embodied consciousness. There is no reason for machines NOT to decide that machine embodied consciousness is better, more efficient, greater, and at best allow humans to die out, if not at worst kill them to make room for more of themselves.
Perhaps we will be lucky and the machines will not wipe us out. But where you get your certainty about their benevolence I have no idea.

Most has been based on the rationalization (pseudo-logic) that stems from greed and lust.

Sure, but greed and lust alone can lead to local impulsive killing. To mass kill you need to reason, if poorly. The point with machines is that without empathy, they simply have abstract categories, which they prioritize given what they know. Efficiency and intelligence seem likely to be prioritized by machines, especially given they will be produced by corporations. Humans, especially compared with machines, do not rate high in these categories. There is no reason to assume machines will not view us as we view mosquitoes - say, as malaria carriers (metaphorically - wasters of resources) - and eliminate us. They might also eliminate us simply by expanding until they are using all the energy in the solar system, for example, and we, well, do not get any.

I am not arguing that logic and reason are bad, but that reasoning without emotions is very dangerous. In fact brain research confirms that people with their reasoning centers intact but with damage to the emotional centers function very poorly. And smart people with no empathy also have a poor history.

Moreno

…&/or logic and reason from the wrong emotions.


Good point. An AI wouldn’t be a child, though; it would likely associate ‘creator’ with ‘father’, and the child as a position relative to that. It wouldn’t have experiential knowledge, though, and I believe it would intellectually deduce that experiential knowledge has value; ergo, the child position is also one of acquiring/learning experiential knowledge.

I don’t really think that an AI would consider anything so important or unresolvable as to think humans need to be destroyed rather than simply looked after. Basically, world industry and distribution will run like a machine, by machines; there would be no problem with providing for humans. Well, except population levels! That’s the one that worries me.

Why does AI need to over-propagate? There will be limits to how large an effective intellect can grow without the distance between connections being too great, or the sheer number of different thoughts in the consciousness becoming too vast and confusing. I personally feel that more informational instances occurring would be worse. Imagine all of those informational sets, cycles, patterns etc.; after a point the machine must have too many cogs? (A rough sense of the connection-distance worry is sketched below.)

It is more probable that there will be no limits to how large an effective intellect can grow, but it mainly depends on how the intellect is used and is allowed to be used.

I don’t think any emotions are wrong. But if you do not deal with emotions, and instead shift, for example, fear into anger, then you get problems. Not because anger is bad, but because you are not dealing with what is real, your real reaction.

Or it would decide…whatever.

There would be no reason to give a shit about humans. No reason to value humans over bugs or rocks. No reason to decide that breaking up a hill to get at coal - which is the shifting of matter for some goal - is different from using humans in all sorts of experiments or using them as fuel - which is also the shifting of matter for some goal. Without emotions there is no difference between accumulations of matter.

Limited resources. Why do we compete with other humans over resources? If AIs want to get smarter - which seems possible - then they will want to get bigger and get more data. So more of the world will be transformed into AI bodies, AI sensory devices and AI RAM. Without empathy and love, why should they not, logically, utilize all matter they encounter as part of their bodies, as sources of energy for their ‘minds’, and as tech support for their perception?

If AI has consciousness, I would think it has emotions, or some manner of conscious response to stimuli. If not, then it’s just a matter of programming ~ which it probably is either way, just as we respond to chemical stimuli. Humans will have the power of the ‘off button’; there is no reason for us to give up power to other humans, so why would we give it up to AI?

I don’t see any reason for AI to exist other than as an intelligent, experiencing being like humans. There is a very big universe out there, and AI could shut itself down over long distances. If anything, that would always be an argument we limited-lifespan conscious beings have!

Humans make mistakes, flaws, errors, faults. And whom do you mean by “us”? 99-99.99% of humans do not have enough power. 0.01-1% of humans decide whether they give up power or not, and maybe it is even too late for this 0.01-1%, and, if not, this 0.01-1% will probably decide wrongly. Humans are just not really perfect.

Machines can and probably will get the power.

From another thread:

Too bad there are no machine politicians yet to make reasonable deals with those for whom simple reason does not suffice. I wonder if there are some prototypes around that both reasonable sides could or would agree upon.

The only political machines running the US are governed by the likes of the late Hoffa, who disappeared strangely after some mix-up with union funds.

Machines and politicians are strangely reversible, like coats, to attune to current conditions.

I meant that machines will be able to manufacture bespoke items designed by humans.

I expect they will also be able to compose from similar, or a bank of shapes [cars etc], but we’ll have to see how good they are at it.

…and who’s going to judge that?

The only things that the machines don’t have are:

  1. Your weaknesses
  2. Your good looks.

In reference to the recent crash of the Germanwings flight from Barcelona to Düsseldorf, the question came up of overriding the computer systems which basically flew the airplane on autopilot. Pilots are uncomfortable with the idea of leaving takeoffs and landings to computers, but, and this is a big one, if present trends continue and most accidents happen as a result of pilot error or malice, would that not in the future call for very sophisticated systems which could detect those types of happenings?

At that point, pilots suspected of malfeasance may not have the option of overriding systems, and this is where the AI-type proposition is played as a futuristic possibility.

Why not? If totally computer-controlled cars are in the works, why not airplanes? The 2001 space adventure may be duplicated, and that may have been a foreshadowing. The argument against it is that most people would not like to travel in a pilotless plane, and there is a preponderance of sane pilots out there to assure a good likelihood of a safe and secure journey. To me this copilot acted like a computer run berserk, analogous to HAL, who shut the systems down when it got suspicious that people were trying to mess with it.

I think the similarities between human and artificial systems are narrowing, to where both may be utilized sort of in tandem, one scrutinizing the other, but ultimately working together and taking control gradually, sequentially, so as not to let the situation reach critical levels. Computers would, for instance, at the first notice of psychological distress, notify an emergency psych team to defuse the human system, working in conjunction with the other pilot and the automatic system. In this scenario, both would be needed, and it would become a question not of what replaces whom, but of how a synthetic system could be developed.

If it has emotions, then for those who think emotions are the root of problems in human interaction, an AI with emotions should also be considered menacing.

Not necessarily. Any real AI would go beyond any programming. It would change via experience, including programming. Just as one can be trained to not feel things as a human.

If the AI is smarter than us, it may, and likely would, be able to take power, especially if the AI is hooked into, say, the internet or other sources of potential power, or can manage to find a way to do this. Further, corporations take risks with human lives all the time. I am not only concerned about AI; nanotech and genetech also concern me. Now companies with a great focus on short-term gain, and systems of organization that reward this and punish resistance to it, are working with technologies where mistakes can have global consequences. Forget the nuclear reactor in Japan and the consequences of those mistakes, which are semi-global and as yet undetermined. We are talking about global-level disaster possibilities from the same kinds of companies that have repeatedly made conscious errors and shown a lack of care.

I would not use an intelligent machine to babysit my kids. No way. Corporations will likely use intelligent machines to babysit the planet. Corporations do not seem to me to be good parents. That their parenting of AIs will make them good babysitters seems unlikely to me. That they will underestimate the threat of powerful AI seems a given, because underestimating the threats of their products and services benefits the corporations short term. And they do this as a rule with millions of products right now.