Will machines completely replace all human beings?

Not the point I was making. I want humans to be more valuable than machines, so it works both ways.

I was speaking of “reproduction interest”, and “reproduction interest” implies the choice to reproduce or replicate and the choice not to reproduce or replicate.

It is clear anyway that machines are faster, stronger, more intelligent, and more reliable. If they were not, then we would not have a single machine and would live as the people of the Stone Age lived.

You “want humans to be more valuable than machines”, yes, but that is more wishful thinking than thinking about reality and the real or probable future. I mean, it is possible to know something or even much about current and coming developments.

I meant that I didn’t mind if machines are worth less than humans. I agree it doesn’t matter what we want, but an intelligent machine would recognise such notions of our destruction as ridiculous, or would not even conceive of them. Only an unintelligent AI would do that.

Having an interest of one's own in reproduction or replication implies something like a simple stimulus-response mechanism or even a consciousness. All cells reproduce or replicate themselves, and the consciousness, if there is one, is able to influence the cells, to suppress the interest in reproduction or replication, to prevent the reproduction or replication (humans are an example of this kind of suppressing and preventing). Are machines already able to do exactly what cells do in the case of the reproduction interest? Is there already a stimulus-response mechanism in, e.g., the nanobots?

I guess that in this case “their immediate choices” includes the immediate choice of each nanobot to reproduce or replicate itself. But is that true? Does each nanobot already reproduce or replicate itself without any human help?

Only the ones designed to do so, such as naturally or artificially forming crystals. Everything responds to its environment. Even human cells will not replicate in the wrong environment (starved of any means). To stop cell reproduction, the environment must change (and does). To stop a nanobot from reproducing, either the environment must change or a signal must be received by the nanobot that alters its reproduction state (merely shifting a molecule out of alignment).

In a sense, nanobots are more capable than cells because they can be signaled to start and stop. How to process that signal is about the only thing holding them up at the moment. Human cells use hormones injected into their environment to alter the speed of reproduction.

Other than a higher decision to inject chemicals, send radio signals, or otherwise alter the environment, there is no consciousness involved with human cells nor nanobots.

Also realize that nanobots are pretty useless unless you have millions of them. That is why there is a need for them to reproduce. It is highly impractical to produce them with a much larger machine.
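The mechanism described above can be sketched in code. This is only an illustrative toy model, not a description of any real nanobot design: replication is gated purely by the environment (available resources) and an internal state that an external signal can flip, with no "consciousness" involved.

```python
# Toy sketch of a stimulus-response replication mechanism.
# All names here are hypothetical, for illustration only.

class Nanobot:
    def __init__(self):
        # The "reproduction state" - analogous to a molecule that
        # can be shifted in or out of alignment.
        self.replication_enabled = True

    def receive_signal(self, enable):
        # An external signal merely flips the internal state;
        # no decision-making is involved.
        self.replication_enabled = enable

    def step(self, resources_available):
        # Pure stimulus-response: replicate only if the environment
        # provides resources AND the internal state allows it.
        if resources_available and self.replication_enabled:
            return Nanobot()  # a new copy
        return None

# A swarm doubles each step while the environment permits it.
swarm = [Nanobot()]
for _ in range(3):
    offspring = [bot.step(resources_available=True) for bot in swarm]
    swarm += [o for o in offspring if o is not None]
print(len(swarm))  # 1 -> 2 -> 4 -> 8
```

Note that stopping the swarm requires exactly the two levers named above: starve the environment (`resources_available=False`) or broadcast a signal (`receive_signal(False)`). Neither lever lives inside the nanobot's "choice".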

Yes, but in your way and according to your definitions/presumptions, not precisely according to my intent of asking.

Now, here you defined cyborgs and androids. Of course I asked this, but the point is whether we have any cyborgs in reality! And if not, how is that any different from sci-fi films?

Here you are still not sure whether machines actually evolve, but you generally say that machines evolve. Forced change/development from outside does not fit well with the intent of evolution, unless one wants to define it that way, which I consider an intrusion.

I have some issues with this too. You can call a cell a unit of the organism, but it is neither the last step of the ontology nor the building block. When you say building block, it gives the impression that everything ends there and no further reduction is possible, which is not true in the case of cells. We are aware of the subsets of a cell.

Secondly, a cell is not an independently viable unit. That is, if you detach a cell from its mother organism, it will not survive. If that is true, how does it become independent?

Yes, that was a linguistic mistake. I apologize for that. I am still finding it difficult to get accustomed to my phone. A laptop is a far better alternative.

How can humans create principle no. 3 (reproduction interest) in machines?

That is James's assumption, and I cannot accept it as a fact unless he can provide some example/evidence. I do not consider the premise that one day it will happen to be a fact. That is a possibility which may or may not come true.

Machines.

But, as I said above, your principle no. 3 is not fulfilled in the case of machines. Then how are you considering them to be evolving?

No, I am not. But I do not see them happening independently of each other either.

Evolution cannot happen without life, and wherever there is life, it evolves by default. It cannot be stopped from evolving by any outside force either, as long as the evolving entity remains alive.

with love,
sanjay

7 real-life human cyborgs

[youtube]http://www.youtube.com/watch?v=EaHh50PHN5M[/youtube]

That isn’t really true either.

That refers largely to RM:AO, which is quite clear to me, but it does not answer my question, because reproduction or replication can be influenced by consciousness. So there are two levels of interest: (a) a kind of stimulus-response mechanism as an interest, and (b) a conscious interest. With “human help” I meant the help given by using the human consciousness (=> b), not the human stimulus-response mechanism (=> a [for example in the human cells]). :wink:

Zinnat, excuse me, but I do not want to answer your question as if you were a young child.

We have many cyborgs. Zinnat, I answered your questions by using the definitions for those words, terms, and concepts you asked me about.

I was very sure. I asked as Socrates asked. Thus it was a little rhetorical question (I knew the answer, of course). You can easily see in that and other posts of mine that I say that machines can evolve and do evolve, although with the help of living beings. Here for example:

=>

Or here for example:

I think I can spare the other examples.

Here you are decontextualising what I said, because I was referring to biology, biological definitions.

Here you are again decontextualising what I said, because I was referring to reproduction in the biological sense.

By programming, thus by consciousness.

There are two levels of reproduction interest: (a) a kind of stimulus-response mechanism as a reproduction interest, and (b) a conscious interest as a reproduction interest. With “human help” I meant the help by using the human consciousness (=> b) not the human stimulus-response mechanism (for example in the human cells). :wink:

But machines are not living beings.

They are fulfilled because of the help (programming) of the humans, thus of the consciousness of the humans. Humans choose and decide via their consciousness (see above: b) and, by programming, whether machines choose or not and decide or not via a stimulus-response mechanism (see above: a). Humans do with machines what humans do with humans. And if machines already choose and decide via their consciousness and, by programming, whether they choose or not and decide or not via a stimulus-response mechanism, then machines influence their reproduction or replication by their consciousness, thus completely by themselves, as much as humans do.

As consciousness is now defined. But that is begging the question of what consciousness is.

No human consciousness, no human cells. Are you sure that machines are already completely independent? (This includes that they also do not depend on a program which is or can be [for example: temporarily] controlled by humans.)

Perhaps observation? Say you take some observing particles; if one pulls out and sees the others as a group, then it has perspective command over the others. If we then build up to a human or artificial brain, there would always be a single observer throughout the process which has ‘consumed’ the others. Naturally, all those observing particles need to be put together in an instrument which utilises a subjective observer, such that one observer stands out as the singular focus. Rocks and other collections probably don’t do this.

For a computer to be more than a ‘rock’ it would require an observer. No amount of processes alone would achieve that; only the correct instrumentation would.
Then the observing instrument would require continuity, otherwise you would be switching observers, whereas conscious processes require a singular experience throughout a given process, such that a full observation of said process occurs = conscious experience.

By whom?

Why?

That is not enough!

Observation needs senses and the possibility of processing, for example in a brain, in order to process the perceptions of the senses. But consciousness (especially human consciousness) is more than that. There are interpretations and interpretations of the interpretations; there is the possibility of thinking about God and the world, about transcendence, about existence and one's own existence, about objectivity and subjectivity, and so on.

If you compare the observation with the whole consciousness (and not just a part of it), then the observation is merely simple.

But conscious experience is merely a part of merely one side of consciousness, and a part of one side of consciousness is not enough, because it is not the whole consciousness (see above).

Consciousness: Remote Recognition

True, it does, but you can have all of that without an observer/perceiver/experiencer; ergo, that appears to be the difference between a conscious and a non-conscious intelligence. Maybe consciousness doesn’t even require intelligence.

True. Which makes me wonder if there is a fundamental consciousness which all life has. There may be spiritual concerns, but my difficulty is with the idea of something coming into and leaving the body. In short, I have concluded that there must be a way to build up to consciousness, e.g. if you keep adding neurons, starting with one or a few. The same applies if those neurons are artificial, naturally.

So “consciousness is now defined” (Orb) as a “remote recognition” by you, James. But how do you define “remote recognition”? You say what and who does not have consciousness as “remote recognition” - but who (and what?) has it? And what does this mean in the context of this thread?

How would you define “consciousness” and “intelligence” then?

Being able to identify a remote object. The ability of your Samsung TV to recognize you and realize when you are not looking at the screen, as well as where on the screen you are looking, makes that TV conscious to that degree (still far from what you would call a “human consciousness”).
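That graded notion of consciousness as remote recognition could be sketched as follows. This is purely an illustrative toy, not a claim about how any real TV's recognition works; all names and "signatures" here are hypothetical:

```python
# Toy sketch: "remote recognition" as the ability to identify
# remote objects from sensor readings. Hypothetical names only.

def recognized_objects(sensor_readings, known_signatures):
    """Return the set of remote objects a device can identify,
    i.e. those whose signature appears in its sensor readings."""
    return {name for name, signature in known_signatures.items()
            if signature in sensor_readings}

# The objects a device could, in principle, recognize.
known = {"viewer_face": "face_pattern", "gaze_on_screen": "gaze_pattern"}

tv_readings = {"face_pattern", "gaze_pattern"}  # the TV sees both
rock_readings = set()                            # a rock senses nothing

print(recognized_objects(tv_readings, known))    # recognizes both objects
print(recognized_objects(rock_readings, known))  # recognizes nothing
```

On this reading, the "degree" of consciousness is simply how much of its remote environment a device can recognize: the TV ranks above the rock, and a human far above the TV.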

Huh?
I said that if an entity has the ability to recognize remote objects, it has consciousness of those objects.

It means that a great many machines already have various degrees of consciousness greater than a human's, and they will only gain more.