Will machines completely replace all human beings?

Only the ones designed to do so, such as naturally or artificially forming crystals. Everything responds to its environment. Even human cells will not replicate if they are in the wrong environment (starved of any means). To stop cell reproduction, the environment must change (and it does). To stop a nanobot from reproducing, either the environment must change or a signal must be received by the nanobot that alters its reproduction state (merely shifting a molecule out of alignment).

In a sense, nanobots are more capable than cells because they can be signaled to start and stop. How to process that signal is about the only thing holding them up at the moment. Human cells use hormones injected into their environment to alter the speed of reproduction.
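Purely as an illustration of that start/stop signalling (not a claim about any real nanobot design; every name below is made up), here is a toy Python sketch of a unit whose replication depends on both its environment and a received signal:

[code]
# Toy model only: a hypothetical replicating unit that copies itself
# while the environment provides resources AND no "stop" signal has
# been received. The signal simply flips the reproduction state,
# analogous to shifting a molecule out of alignment.

class ReplicatorUnit:
    def __init__(self):
        self.replication_enabled = True      # the "reproduction state"

    def receive_signal(self, signal):
        # An external signal switches replication on or off.
        if signal == "stop":
            self.replication_enabled = False
        elif signal == "start":
            self.replication_enabled = True

    def try_replicate(self, environment_resources):
        # Like a cell, it cannot replicate when starved of any means.
        return self.replication_enabled and environment_resources > 0


unit = ReplicatorUnit()
print(unit.try_replicate(environment_resources=10))   # True
unit.receive_signal("stop")
print(unit.try_replicate(environment_resources=10))   # False
[/code]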

Other than a higher decision to inject chemicals, send radio signals, or otherwise alter the environment, there is no consciousness involved with either human cells or nanobots.

Also realize that nanobots are pretty useless unless you have millions of them. That is why there is a need for them to reproduce. It is highly impractical to produce them with a much larger machine.

Yes, but in your way and according to your definitions/presumptions, not precisely according to the intent of my question.

Now, here you defined cyborgs and androids. Of course, I asked about this, but the point is whether we have any cyborgs in reality! And, if not, how is it any different from sci-fi films?

Here, you are still not sure whether machines actually evolve or not, but generally you say that machines evolve. Forced change/development from outside does not go well with the intent of evolution, unless one wants to define it in such a way, which I consider an intrusion.

I have some issues with this too. You can call a cell a unit of the organism, but it is neither the last step of the ontology nor the building block. Saying "building block" gives the impression that everything ends there and no further reduction is possible, which is not true in the case of cells. We are aware of the subsets of a cell.

Secondly, a cell is not an independently viable unit. Meaning, if you detach a cell from its mother organism, it will not survive. If that is true, how does it become independent?

Yes, that was a linguistic mistake. I apologize for that. I am still finding it difficult to get accustomed to my phone. A laptop is a far better alternative.

How can humans create principle no-3 (reproduction interest) in the machines?

That is James's assumption, and I cannot accept it as a fact unless he can provide some example/evidence. I do not consider the premise that "one day it will" to be a fact. That is a possibility which may or may not happen.

Machines.

But, as I said above, your principle no-3 is not fulfilled in the case of machines. Then how are you considering them to be evolving?

No, I am not. But I do not see them happening independently of each other either.

Evolution cannot happen without life, and whenever there is life, it evolves by default. It cannot be stopped from evolving by any outside force either, as long as the evolving entity remains alive.

with love,
sanjay

7 real-life human cyborgs

[youtube]http://www.youtube.com/watch?v=EaHh50PHN5M[/youtube]

That isn’t really true either.

That refers largely to RM:AO, which is quite clear to me, but it does not answer my question, because reproduction or replication can be influenced by consciousness. So there are two levels of interest: (a) a kind of stimulus-response mechanism as an interest, and (b) a conscious interest. With “human help” I meant the help by using the human consciousness (=> b), not the human stimulus-response mechanism (=> a [for example, in the human cells]). :wink:

Zinnat, excuse me, but I do not want to answer your question as if you were a young child.

We have many cyborgs. Zinnat, I answered your questions by using the definitions for those words, terms, and concepts you asked me about.

I was very sure. I asked the way Socrates asked. Thus it was a somewhat rhetorical question (I knew the answer, of course). You can easily see in that and other posts of mine that I say that machines can evolve and do evolve, although with the help of living beings. Here, for example:

=>

Or here for example:

I think I can spare you the other examples.

Here you are decontextualising what I said, because I was referring to biology, biological definitions.

Here you are again decontextualising what I said, because I was referring to reproduction in the biological sense.

By programming, thus by consciousness.

There are two levels of reproduction interest: (a) a kind of stimulus-response mechanism as a reproduction interest, and (b) a conscious interest as a reproduction interest. With “human help” I meant the help by using the human consciousness (=> b) not the human stimulus-response mechanism (for example in the human cells). :wink:

But machines are not living beings.

They are fulfilled because of the help (programming) of the humans, thus of the consciousness of the humans. Humans choose and decide via their consciousness (see above: b) and, by programming, whether machines choose or not and decide or not via a stimulus-response mechanism (see above: a). Humans do with machines what humans do with humans. And if machines already choose and decide via their consciousness and, by programming, whether they choose or not and decide or not via a stimulus-response mechanism, then machines influence their reproduction or replication by their consciousness, thus completely by themselves - as much as humans do.
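To make the two levels a bit more concrete (only a rough sketch under my own assumptions, not a description of any existing machine), here is how a one-time human programming decision (level b) can install a replication rule that afterwards runs as a pure stimulus-response mechanism (level a):

[code]
# Hypothetical example: the "reproduction interest" is set once by a
# human programmer; afterwards replication is triggered purely by a
# built-in stimulus-response rule, without further human interference.

class Machine:
    def __init__(self, wants_to_replicate):
        # Fixed at programming time by a human decision (level b).
        self.wants_to_replicate = wants_to_replicate

    def stimulus_response(self, free_resources):
        # Level (a): an automatic rule, no ongoing human input.
        return self.wants_to_replicate and free_resources >= 1

    def step(self, free_resources):
        # Produce a copy of itself if the rule fires.
        if self.stimulus_response(free_resources):
            return Machine(self.wants_to_replicate)
        return None


parent = Machine(wants_to_replicate=True)   # the human choice, made once
child = parent.step(free_resources=5)       # replication without further help
print(child is not None)                    # True
[/code]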

You can easily see in that and other posts of mine that I say that machines can evolve and do evolve, although with the help of living beings. Here, for example:

=>

Or here for example:

I think I can spare you the other examples.

As consciousness is now defined. But that is begging the question of what consciousness is.

No human consciousness, no human cells. Are you sure that machines are already completely independent? (This includes not depending on a program which is, or can be [for example, temporarily], controlled by humans.)

Perhaps observation? If, say, you take some observing particles and then one pulls out and sees the others as a group, then it has perspective command over the others. If we then build up to a human or artificial brain, there would always be a single observer throughout the process which has 'consumed' the others. Naturally, all those observing particles need to be put together in an instrument which utilises a subjective observer, such that one observer stands out as the singular focus. Rocks and other collections probably don't do this.

For a computer to be more than a ‘rock’ it would require an observer. No amount of processes alone would achieve that, only the correct instrumentation would.
Then the observing instrument would require continuity; otherwise you would be switching observers, whereas conscious processes require a singular experience throughout a given process, such that a full observation of said process occurs = conscious experience.

By whom?

Why?

That is not enough!

Observation needs senses and the possibility of processing, for example in a brain, in order to process the perceptions of the senses. But consciousness (especially human consciousness) is more than that. There are interpretations and interpretations of the interpretations, there is the possibility of thinking about god and the world, about transcendence, about existence and the own existence, about objectivity and subjectivity, and so on.

If you compare observation with the whole of consciousness (and not just a part of it), then observation is merely a simple thing.

But conscious experience is merely a part of merely one side of consciousness, and a part of one side of consciousness is not enough, because it is not the whole consciousness (see above).

Consciousness: Remote Recognition

True, it does, but you can have all of that without an observer/perceiver/experiencer; ergo, that appears to be the difference between a conscious and a non-conscious intelligence. Maybe consciousness doesn't even require intelligence.

True. Which makes me wonder if there is a fundamental consciousness which all life has. There may be spiritual concerns, but my difficulty is with the idea of something coming into and leaving the body. In short, I have concluded that there must be a way to build up to consciousness, e.g. if you keep adding neurons starting with one or a few. The same applies, naturally, if those neurons are artificial.

So “consciousness is now defined” (Orb) as a “remote recognition” by you, James. But how do you define “remote recognition”? You say what and who does not have consciousness as “remote recognition” - but who (and what?) has it? And what does this mean in the context of this thread?

How would you define “consciousness” and “intelligence” then?

Being able to identify a remote object. The ability of your Samsung TV to recognize you and realize when you are not looking at the screen, as well as where on the screen you are looking, makes that TV conscious to that degree (still far from what you would call a “human consciousness”).
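Such remote recognition is already an ordinary programming task. A rough sketch (assuming a webcam at index 0 and using OpenCV's bundled frontal-face detector; a real TV's attention tracking is of course far more elaborate):

[code]
# Rough illustration of minimal "remote recognition": detect whether a
# face is currently turned toward the camera in a single frame.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture(0)        # default webcam (assumed present)
ok, frame = capture.read()
capture.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # The frontal-face detector only fires on faces turned toward the
    # camera, so "no detection" roughly means "nobody is looking".
    print("viewer looking at screen:", len(faces) > 0)
else:
    print("no camera frame available")
[/code]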

Huh?
I said that if an entity has the ability to recognize remote objects, it has consciousness of those objects.

It means that a great many machines already have various degrees of consciousness greater than a human's, and they will only gain more.

No, that is not true for two simple reasons.

1- There is no nanobot (according to the definition of a nanobot) made so far, thus there is no such possibility.
2- When we cannot even make manipulating microbots so far, which is an easier thing to do, how can we make such nanobots?

Arminius, Wikipedia is also a part of the popular media, though certainly slightly better than the other ones. But it is certainly not the word of God, thus it should not be taken as fact but as some loose or general information about the subject. More often than not, experts do not write wiki pages. People like you and me take the work of the experts and quote it on the wiki, imbued with their own understanding of the issue. Thus, when subtlety or precision is involved, it is better to look for particularly devoted sites instead of the wiki. For philosophical issues, for example, The Stanford Encyclopedia of Philosophy is a far better and more reliable source than the wiki.

But when you are saying that they cannot reproduce without outside help, does that not mean that they either have no such interest or are unable to do so?

Certainly, but there is a big "if" in between.

My personal/previous opinion does not matter to me when I revisit any issue. I can throw it out of the window without any hesitation, provided I find a better alternative.

with love,
sanjay

There are certainly limits/conditions for consciousness too, and that is precisely why it cannot be found in/with every complexity.

Secondly, not the whole of nature, but only the conscious part of nature, produces self-replicating nanobots. This appropriate environment has to be a hosting body which entails consciousness. Otherwise, the guest DNA cell will die.

with love,
sanjay

Arminius, I am not sure whether you are asking me something or telling me your reasoning.

with love,
sanjay

Microbots;

I think they have the microbot issue well covered.

Functioning nanobots, but not replicating;

An unwarranted remark. I do not think that I am being childish here by any stretch of the imagination.

Again, where?

Okay. I did not realize that.

How was my question out of context? I was also referring to the definition of an independently viable unit in the biological sense.

You may say so, but I do not think that it could be defined as a reproduction interest in the true sense, as long as it is controlled by any outside entity. Yes, one-time programming is acceptable, but not continuous interference.

with love,
sanjay