Will machines completely replace all human beings?

It would have no mathematical basis. If plants have sentience, why not candy? There are moving chemicals inside of candy.

My friend,

I am not competent enough to argue with you, about this issue at least. I apologise for my shortcoming. Please find a suitable match for yourself.

With love,
sanjay

Plants “make decisions” through auto-responses, much like a thermostat “decides” when to turn on the heater. There is no remote recognition involved, merely direct contact and response.

And a plant dies when it has systemic failure, no longer sustaining its nutrient cycle.

The analogy is very fitting. The plant, corresponding to the auto-responses of a mechanical gadget, further points toward the view that the plant can be looked at as an evolutionary retrograde, most akin to machines, whose composition has anomalous structural ingredients; judging from its actions, the plant can be interpreted as more like a machine than a human being. Therefore, the fact that the reverse appears to be happening, a retrogression into more rather than less conversion from human-ness, seems to further the view that life is more a factor of adaptation than of genetic typing. In other words, the function of a thing or an organism determines it as a type of thing, signifying a chemical or biochemical constitution.

Because they are not moving in the necessary patterns, as living things do. Apparently plants have neurons of sorts, so it seems a matter of how many processes are going on, and whether those processes are sophisticated enough to run concurrently and give neuronal plasticity. In other words, consciousness appears to be continuous, so it has a pattern moving through the other patterns subjectively.

Perhaps it is possible that plants [all life?] have being and consciousness at some rudimentary level.

The problem with all of this is that I think you could have a computer which mimics all of this in machine-like fashion. There is something different about the info at work in my computer compared to the info in my mind, notably a subjective observer.

If you made an instrument exactly like a human, then switched it on, would it have consciousness, or be like someone sleepwalking or some such unconscious thing?

Plants do not have neurons. You will not find any scientific study that says they do. Please provide some evidence that they have neurons.

It would not be less like a “machine”, simply less like a “simple machine”. Humans and AI are complex machines relative to plants.

Not convincing enough, James.

Decision entails discretion. There must be multiple choices available for any entity to choose from; otherwise it cannot be called a decision, but merely a law or default action.

If you drop a ball from your hand, it would hit the ground every time. It would never go towards the sky. So, can we say that the ball made a conscious decision to fall on the ground? Certainly not, because there is no other way in which the ball can react. Falling on the ground is binding on it.

The same is true of the thermostat. Its action is not discretionary but a binding one, thus not a decision.

We eat when we feel hungry. That is our natural action in those particular circumstances, but we can choose not to eat, even till death. That is a decision, because we intentionally opted for another alternative. A thermostat/machine cannot do that. It will always do the same thing it was designed for, unless you change its internal structure. It can change neither its structure nor its behavior on its own, meaning it cannot evolve on its own, but humans/plants can do that. That is the difference.
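To make that contrast concrete, here is a minimal Python sketch (the 20 °C set point and the function name are arbitrary choices for illustration) of why a thermostat's behaviour is binding rather than discretionary: for any given temperature there is exactly one action the rule allows.

[code]
# Illustrative sketch only: a thermostat's "decision" is a fixed rule.
# The set point and the rule are imposed from outside; the device cannot
# opt out of them the way a person can refuse to eat.

SET_POINT = 20.0  # target temperature in degrees Celsius (arbitrary example value)

def thermostat_step(current_temp: float) -> str:
    """Return the only action the rule permits for this temperature."""
    if current_temp < SET_POINT:
        return "heater_on"
    return "heater_off"

# For any given input the output is fully determined; there is no
# alternative for the device to "choose from".
print(thermostat_step(18.0))  # heater_on
print(thermostat_step(22.0))  # heater_off
[/code]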

Secondly, a thermostat behaves in a particular way because we designed it that way. We know that. But do we know why plants/animals behave the way they do? One can argue that they learn and evolve through circumstances. There is nothing wrong in that argument, but then why can a thermostat not learn on its own in the same way? Who is asking it not to learn? Why can it not learn and evolve on its own? What is the difference between the two entities?

Thirdly, plants are not that complex biologically. With the scientific means available now, we can dissect and analyze a plant down to the last part of the cell. Everything is in black and white. But still, we cannot explain its synchronized behavior, why the whole plant acts toward a common goal.

In humans/animals, each and every cell in the body is connected to the CNS through neurons, directly or indirectly. That is necessary for their survival. Even a single cell out of control can cause cancer. Cancer is nothing but a refusal of one or some cells to obey the CNS. Such a cell starts living its own life independently from the rest of the body, and we know the result.

This neural network and CNS in humans/animals integrates cells into a harmonious or unified entity. If this network is broken anywhere in the body, the affected or disconnected portion becomes non-active, and we call it paralysis. Right!

But there is no such communication network in plants. We have not found any; every botanist would agree with that. If that is true, how and why do roots suck water from the earth for the whole of the plant, and why do only leaves prepare food for the whole plant? Why should the stem of a sunflower plant be concerned about keeping its flower facing the sun all the time? How does the stem become aware of the importance of its function? What is the communicating and binding agent between the different organs of the plant? Why does every organ or cell of the plant not declare independence from the main body and start behaving like human cancer cells, given that there is absolutely no governing network?

It is not surprising that a single broken nerve of that governing network can cause the whole human body to become nonfunctional, yet plants many times bigger can survive many times longer than humans without having a governing network at all!

James, there must be some binding agent/mechanism/entity in plants which makes sure that the whole of the plant always behaves as a unified entity. And that is consciousness. That is what creates life in the true sense. Plants have consciousness too, and it is an entity which we have not been able to trace physically so far. But it is there for sure, hidden and integrated with every life form. A live, decision-making, intelligent and evolving entity cannot be created without consciousness. That is why machines will never be able to have AI.

You can say that one day machines will have AI, but this “one day” is not an argument, merely an assumption. It must be established either philosophically or physically to be taken as a fact. Your explanation of forming a particle through RM/VO is perfect, but it explains the formation of non-living matter only, not living matter.

[quote=“James S Saint”]
And a plant dies when it has systemic failure, no longer sustaining its nutrient cycle.
[/quote]

Again, that is not up to the mark.

Why should a plant, or even an animal, die? Why should its system fail? Why can they not live forever after having established themselves properly once in their environment? Why is the death of all living organisms necessary?

James, forget about humans/animals; plants were there for millions of years before them. They had far more time than humans to evolve. And also, look at their journey of evolution from tiny ones to huge ones. How much they have evolved! But they have not yet learned to live forever. Why? If survival is the most important thing for any living entity, why have they not been able to defeat dying so far?

What is the need of dying for plants? Once established, everything works fine. Unlike humans/animals, they do not have a brain which can produce aging hormones. They do not have to fight for essential resources like animals do. That means a lone tree should survive forever. Yes, they cannot grow beyond a certain limit because of the limitation of resources, but that should not cause their death.

with love,
sanjay

No Orb,

You are confusing the issue. Read my reply to the James above.

With love,
Sanjay

Then it would not have anything to do with your former questions. But, okay, if you ask those new questions, I would appreciate it. (Thank you in advance!)

But please note that your new questions refer to another level than the level of biology.

The biological definition of “life” is the best one we have. There are also good definitions of “life” which come from life-philosophy, physics, system-theory, and informatics (mathematics). Those fields, and also ordinary experience with machines, have influenced some interpretations, but not the biological definition of “life”, because it is based on cells, and cells are well known. Cells are not machines, and machines are not cells, although both have similarities and work similarly.


Another question is whether machines can evolve or not.

Evolution is a self-driven, self-organised process, and according to the systemic-evolution-theory its three principles are (1) variation, (2) reproduction (in Darwinism: heredity), and (3) reproduction interest (in Darwinism: selection [but that is partly false]). Self-preservation means preservation of competence during one’s own current life. Variation (=> 1) means that there are, and must be, several units (often called “individuals”) because of mutations, the variances in the genetic code. Reproduction (=> 2) means preservation of competence beyond one’s own life (by having offspring [children]). Reproduction interest (=> 3) means the interest in reproduction (the example of Homo sapiens shows that this interest can be non-existent or even negative). Can machines be, or are they already, part of this self-driven, self-organised process which we call “evolution”? Do the three evolution principles - variation (=> 1), reproduction (=> 2), and reproduction interest (=> 3) - also apply to machines?
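For what it is worth, here is a rough Python sketch of how those three principles could be applied to machine-like “genomes”. The fitness function, the mutation size, and the repro_interest probability are invented assumptions for illustration, not part of the systemic-evolution-theory itself.

[code]
# Toy sketch of the three principles applied to machine "genomes":
# (1) variation via random mutation, (2) reproduction via copying,
# (3) reproduction interest modelled as the probability of reproducing at all.
import random

def fitness(genome):
    # Invented toy fitness: prefer genomes whose values are close to 1.0.
    return -sum((g - 1.0) ** 2 for g in genome)

def evolve(pop_size=20, genome_len=5, generations=50, repro_interest=0.9):
    population = [[random.uniform(-1, 2) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # (3) Reproduction interest: only some individuals try to reproduce.
        parents = [g for g in population if random.random() < repro_interest]
        if not parents:
            break  # no reproduction interest: the lineage simply ends
        parents.sort(key=fitness, reverse=True)
        survivors = parents[: max(2, len(parents) // 2)]
        # (2) Reproduction with (1) variation (each copy is slightly mutated).
        children = []
        while len(children) < pop_size:
            child = list(random.choice(survivors))
            i = random.randrange(genome_len)
            child[i] += random.gauss(0, 0.1)  # mutation = variation
            children.append(child)
        population = children
    return max(population, key=fitness)

print(evolve())
[/code]

Whether running something like this counts as machines taking part in evolution, rather than merely simulating it, is exactly the open question above.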

Is that the only thing you got from that post?

I said neurons “of sorts”…
en.wikipedia.org/wiki/Plant_perc … physiology
Neurochemicals
Plants produce several proteins found in the animal neuron systems such as acetylcholine esterase, glutamate receptors, GABA receptors, and endocannabinoid signaling components. They also use ATP, NO, and ROS like animals for signaling.[7]
plantcell.org/content/14/suppl_1/S3.full
sciencemag.org/site/feature/ … 11-667.pdf

I have seen some German research which suggested the same, but couldn’t find it in a 30-second search.

The Wikipedia article link is dead. It’s not a “neuron of sorts” if it doesn’t belong to any kind of neural network. Plants have no neural network, just a basic pathway for sending rudimentary signals via chemical movements. The fact that a plant uses some common chemicals found in animals does not make it an animal.

Plants have rudimentary behaviors. That is simply why cutting off a leaf doesn’t kill the plant’s behaviors: plant behaviors are very basic and operate at the primitive, genetic level. All of the plant is wired with the same basic behaviors, which are “face the sun when photoreceptors are active” and “grow”. It accomplishes this through a very rudimentary chain of cells, which slowly pass chemicals along the path. There is not some marvelous sentience at play here.
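As a purely illustrative sketch of that point (the rule names and structure are invented for the example, not botany), the same fixed responses can be written as a few lines of Python that every “cell” runs identically, with no central controller:

[code]
# Toy illustration: every cell applies the same hard-coded responses;
# there is no central controller, so removing some cells (a leaf)
# leaves identical rules running everywhere else.

def plant_cell_step(photoreceptor_active, has_nutrients):
    actions = []
    if photoreceptor_active:
        actions.append("orient_toward_light")  # "face the sun"
    if has_nutrients:
        actions.append("grow")
    return actions

print(plant_cell_step(photoreceptor_active=True, has_nutrients=True))
# ['orient_toward_light', 'grow']
[/code]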

If a plant never died, it would never evolve. It would be the equivalent of your thermostat. Second, I’m sure you are aware of a thing called aging: after a while, biological integrity degrades and you die of old age. You believe machines will never develop AI, presumably because of their rigid nature. However, there already exist self-learning and self-repairing robots which function as a hive mind. With quantum computing they won’t be so rigid. When an AI is exposed to the physical world, the physical world functions as an extension of itself, adding the necessary randomness and entropy to overcome its rigid nature.

GreatandWiseTrixie

I largely agree with your basic position, but what do you think it would take to make the jump from machine to conscious machine?

Imagine you are adding extra elements of tech one piece at a time to a machine/computer: at what point does it become conscious? If you added all of man’s knowledge to its database and the software to locate relevant answers, it would be more intelligent than any human. Yet I don’t see a mystery item being added that would make it more than a machine. It would still be a calculator, but with words and meanings instead of numbers.
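To illustrate that “calculator with words” point, here is a toy Python sketch of retrieval without understanding; the tiny knowledge base and the word-overlap matching are invented for the example.

[code]
# Toy sketch: a knowledge base plus lookup software returns relevant
# answers by crude word matching, with no understanding involved.

KNOWLEDGE = {
    "boiling point of water": "100 degrees Celsius at sea level",
    "speed of light": "about 299,792 km per second",
}

def answer(question):
    words = set(question.lower().replace("?", "").split())
    # Pick the entry whose key shares the most words with the question.
    best_key = max(KNOWLEDGE, key=lambda k: len(set(k.split()) & words))
    return KNOWLEDGE[best_key]

print(answer("What is the speed of light?"))  # about 299,792 km per second
[/code]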

I think a snail is more conscious than that. There is something there [like with us] observing, or it even is an observer. Or, where would you draw the line between a conscious and non-conscious creature?
Further, is a dog conscious, yet less intelligent or complex than your PC, which isn’t conscious?


Some sources which may shed some compelling hypotheticals here are the quantum limits of artificial intelligence, correlated with near-death experience.

Well, the universe is reborn in every moment, so it would make sense that it has some way to rebirth the observers experiencing it, that is, if we consider the cyclicity as fundamental and universal. After that, it’s a question of what ‘attracts’ observers. If we built/printed a human from scratch, it would probably be as alive as a human born from a womb. If so, then something is correlating or connecting observers with an appropriate body.

Theoretically it should be possible to give an AI consciousness? I think you are right in that the quantum space is where all that spirituality [rebirthing] takes place. If so then there would be some manner of resonance between form and spirit, a ‘like attracts like’ nature.

We perhaps have to think of consciousness as like an ocean, much as physical reality is at base too. Then that they belong to a further classification where they too are indistinguishable [energy and consciousness]. Interestingly, if that is so, then the whole thing can be brought around in a circle, and consciousness should be able to be arrived at by building up to it.

Do you seriously want to make a thread issue out of this?
It is a bit embarrassing to have to try to explain such things to an adult.

Decisions do not “entail discretion”. Decisions ARE discretion, and vice versa.

You keep habitually injecting teleology where it doesn’t belong and thus can’t help but believe that consciousness is independent of materiality. That is a typically primitive, backward mindset. In a highly sophisticated techno-world (regardless of how evil it might be), such a view is embarrassingly naive. I feel like I am trying to convince a “man” that a machine can actually run faster than his horse.

Yes, monkeys of the world, germs, chemicals, and even machines exist and can kill you even though you may never see them. Virginia, the wolf is real.

I do not think that teleology is a bad thing to imply. Cause and purpose are essential parts of every ontology.

I do not see any real difference between what I said and what you suggested. The intent is still the same.

I have already decided to do so. It is all in my mind, but I need some time to present it systematically. I will write an essay regarding this, along with some peripheral issues, as a new thread. I promised this to iambiguous a long time ago but have not been able to do it so far. You are also welcome to criticize me along with him.

with love,
sanjay

I do not disagree with that, but that is precisely the issue also. Why are cells not machines? What is your benchmark of differentiation?

My argument is that plant cells are not machines because they are alive and governed by the consciousness of the plant. What is your argument?

With love,
sanjay

Yes, there is no perhaps in it. Ontology cannot be completed without that.

If you can do that exactly, it would certainly have consciousness, but the issue is whether you can do that precisely and exactly or not.

The crux of the issue in this question is whether consciousness manifests from the complexity of the entity, or whether it is necessary to build a live entity in the first place!

If complexity can manifest consciousness, then machines will become alive and have AI and consciousness one day for sure, no matter how much time it takes. But if consciousness is necessary to build a live entity at the initial level, then machines are never going to have intelligence or consciousness.

with love,
sanjay