Will machines completely replace all human beings?

I’m afraid you have to face up to the sad reality that there may be more of us who are thinking this way.

We can point to things that behave in ways that can be described algorithmically. “Algorithm” is a linguistic tool, not a reification. So…

I find it plausible that it is caused by processes that can be described algorithmically. Or put into equations. Don’t you?

We know exactly what our own consciousness is. We extrapolate that to other beings based on their similarity to us - I have a better idea of my brother’s experience of something than that of a Kalahari bushman’s, and better that than an orang-utan’s; better that than a crab’s; better that than a flatworm’s. I have to say that the consciousness of a crab is a bit of a mystery, but I assume they feel pain because they respond in ways that indicate it, and I assume an oak tree doesn’t because it doesn’t (although it responds physically to damage and heals).

Consciousness doesn’t necessarily mean human consciousness. I’m perfectly willing to grant that a computer won’t know what it is to be human - or vice versa.

I’m not talking about computers now; we’re at the level of modelling basic invertebrates with a few dozen neurons. I’m trying to get to the root of your argument that “consciousness is consciousness, and computers just aren’t and can never be that.”

If consciousness isn’t the result of neural activity, what is it? It’s certainly easy to drastically modify consciousness by modifying neural activity, and to end it by ending that. It seems a reasonable proposition.

And if it is, why is organically-mediated information processing somehow different from electronically-mediated?

I think that would depend on how and why a computer manages to do so. If it’s by ELIZA-like language manipulation, then no.
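To make “ELIZA-like language manipulation” concrete: the whole trick is surface pattern-matching and reflection, with no representation of meaning anywhere. Here is a minimal sketch; the rules below are invented for illustration and are far simpler than Weizenbaum’s original script:

```python
import re

# Illustrative ELIZA-style rules: match a surface pattern, echo part of it back.
# Nothing here models what any of these words mean.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmother\b", re.I),  "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # stock reply when nothing matches

print(respond("I feel trapped"))    # Why do you feel trapped?
print(respond("It was my mother"))  # Tell me more about your family.
```

The program “converses” only in the sense that it shuffles tokens; that is the distinction being drawn here.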

As Wittgenstein said, “if a lion could talk, we could not understand him.” I’m still willing to grant lions the benefit of the doubt and not torture them for fun, though.

An algorithm is merely a specified process. The entire universe is nothing other than processes. Describe any one of them and you have specified an algorithm that physically exists (more scripturally known as a “spirit”). Regardless of whether Man understands the processes of the mind, because they physically exist, they are necessarily algorithms.

I’ve been studying compositional signatures, consciousness signatures and behavioral signatures for… Hmm… 23 years now. Apply a frequency to the signature and you get output.

We have a superposition process that really is the answer to Turing: if you can’t superimpose, it’s not conscious. Otherwise known as possessions.

Yes, but the cause then is the processes, not the description of them.

Actually I’ve changed my mind: the relevant processes in the brain can’t be described algorithmically or put into equations, because of our limited understanding of matter.

Further thoughts a couple of hours later: aspects of the relevant processes in the brain can be described algorithmically, but not the entire process. But the important point is that this description is not the cause of consciousness.

We can respond to injury without feeling pain (as when we pull away from a hot surface), so I don’t assume that crabs can feel pain.

I’m not willing to grant that a computer will know anything or feel anything.

I believe that consciousness is the result of neural activity, but what is going on in computers has nothing to do with that kind of activity.

When you say you are not talking about computers when modelling basic invertebrates, what are you talking about?

“Information processing” has a similar ontological status to “algorithm” and “equation”. If we look at a description of our visual system for example, we are likely to see sentences like “the optic nerve carries information from the retina to the brain”. But what the optic nerve actually carries is electrochemical impulses. The entire process can be explained without making reference to “information”.

So what kind of process would qualify?

It doesn’t matter if a computer can actually know or feel anything. That’s not what the Turing test is about. The test is about the simulation of thought. Is it convincing? Can it fool you well enough?

So you might say, it can fool me, but it still can’t actually think, so it doesn’t really count. Who’s counting?

When you think of the simulation by a machine, it’s based on human interaction. If you really want to bend to this logic, think about your interaction with other human beings. When they say something to you, you hear the words in your head; you see their lips move as an image in your head. You then tell yourself: they are displaying thoughts behind those actions, they must be thinking, just like me! But all that is simulation.

There is very little beyond that which allows you to tap into their brain, know their exact thoughts like you are riding the waves of your own thought process. You can’t even prove to yourself that you know they have a brain that thinks. Even if you open up their skull and look. It’s a good guess about a series of associated implications. But it’s still a guess.

Just like when a person walks around a corner, we assume they still exist. It’s an assumption we don’t let go of: that others exist and have thoughts. Probably because the alternative is a scary or lonely one. Either way, we don’t know the thoughts of others; it remains a surface assessment. Computers are no different when they beat the Turing test.

Yes, computers will replace… our jobs. But new jobs will replace the old ones, especially when they start teaching kids to code in the first grade.

When I say we will become more like computers, start with the idea that computers are already extensions of ourselves. Our brains compute. I move on to emotions: feelings lead to craving, which leads to suffering, the most human of problems. Since computers solve problems, ending suffering is high on the list. Before that ultimate cure arrives, people will become more logical than emotional. Or more emotionally intelligent. I call it numb as normal. Chrome Novocain. Rusted parts replaced, for a 3D printed heart.

We won’t be replaced. We’ll evolve into bionic men. Our thoughts will be assisted by the thoughts* of a machine. It’s called symbiosis. It occurs in nature.

This reads to me like nonsense, but maybe I am just ignorant. Can you explain what you mean in a little more detail?

Sure, every being has a consciousness signature in the same sense that beings have unique skin creases.

These signatures come in clusters, as everyone thinks multiple things at the same time…

What makes them easy to isolate is the vast amount of unique data that flows through…

You’d think that would make it harder, but it actually makes it easier to isolate unique signatures.

Having a signature in itself is not enough.

You have to send a charge through the signature to activate the consciousness itself.

This is how you develop mind reading software.

I actually don’t care if you think it’s crap.

I was just answering your question.

The Turing test is resolved with superimposition processes. If you have the process down, when it refuses to map, you have a philosophic zombie, or just a behavioral signature.

That’s nearly right: Turing said the question he wanted to answer was “can machines think?”. Not “can machines simulate thought?”.

Well Turing was, apparently.

I don’t think this “Problem of Other Minds” is a serious problem. It’s more of a conundrum, like Zeno’s Arrow Paradox:

There isn’t any real room for doubt that things move, and there isn’t any real room for doubt that other people are conscious.

Do you not think it’s because the alternative is just a bit too silly? It’s science fiction again. Don’t get me wrong, I like science fiction, but it is fiction, not philosophy (and not science either).

I don’t think it matters that we don’t (directly) know the thoughts of others, but in any case we do know about our own thoughts. We know the kind of thing they are and where they come from. They arise from our lived experience, what we feel, see, hear. When we program a computer to simulate human behaviour, we know that it is behaving like that because of the program, not because it is having thoughts like those we have. That is really quite a ridiculous suggestion, although surprisingly widely accepted.

Do you care if it is crap?

I gave you the answer to your question.

I’ve done it before and travelled back in time.

None of you are smart enough to do it, so I’ll explain the outlines.

I’ve destroyed this whole world before.

Yeah I know but it was crap.

Not my issue man.

You really have no clue WHAT you are talking to right now.

You’ll understand someday for sure.

I literally had to reconstruct this entire world to resurrect in it.

I got a second chance.

That sounds great, I’ll be sure to call on you if I need any building work done.

:slight_smile: that’s the smartest thing you’ve said here.

Actually, we all created this together, but I digress.

I messed up big time… So bad, that we actually remade it…

I’m the Tesla of consciousness…

Turns out, if you fuck with it, it fucks everything up…

And I used to think I was so fucking smart!

My favorite things actually…

Talking about the weather

Making jokes with cross dressers and trannies

Being the smartest person is not everything…

Being the best you can be… Is

That’s not relevant to my point at all, though; I just said it’s the process and not the description. I don’t see how you’re not still confusing description and thing. It’s not the description of bits and bytes that causes these words to appear on your screen, it’s the process of charge moving through your laptop (/pc/tablet/phone).

Where does consciousness begin? Vertebrates? Mammals? Primates?

[quote]
I believe that consciousness is the result of neural activity, but what is going on in computers has nothing to do with that kind of activity.
[/quote]
How do they differ? (I’m not saying they don’t, I’m curious)

I meant: I’m not talking about modern-day computers being conscious, as even if consciousness is an emergent property of neural network activity, the limit of neural network modelling is still at a very basic level.
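To give a sense of how basic that level is: whole-circuit models of simple invertebrates are often built from single-compartment units like the leaky integrate-and-fire neuron below. This is a toy sketch with made-up parameter values, not a model of any real organism:

```python
def lif_spikes(currents, threshold=1.0, leak=0.9):
    """Spike times of a toy leaky integrate-and-fire neuron.

    Illustrative parameters: membrane potential decays by `leak` each
    step, accumulates the input current, and resets after crossing
    `threshold`.
    """
    v, spikes = 0.0, []
    for t, i in enumerate(currents):
        v = leak * v + i          # decay, then integrate the input
        if v >= threshold:        # threshold crossing emits a spike
            spikes.append(t)
            v = 0.0               # reset after spiking
    return spikes

print(lif_spikes([0.3] * 10))     # [3, 7]
```

A “few dozen neurons” means a few dozen units of roughly this complexity, wired together; that is a long way from anything one would call an experience.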

Why are dynamic electrochemical processes fundamentally different from dynamic electronic ones? Is consciousness to be found in carbon not silicon, or ions not electrons?

At the very least, some kind of conceptualisation rather than manipulating linguistic tokens.

I’m not sure we are understanding each other correctly. My position is, it’s the electrical circuitry that causes the words to appear on the screen, not an algorithm, and it’s the neuronal activity that causes consciousness and allows us to read and write the words, not an algorithm.

We don’t know precisely. You could ask a similar question about the developing human foetus. When does it first have experiences, and what kind of experiences are they? I would speculate that touch arrives first, it seems somehow more primitive, and the surface of the body is there before the eyes. Maybe feeling your tongue in your mouth, your fingers rubbing together?

I think it’s quite possible that consciousness appears very early in evolution. Maybe worms can feel and could feel millions of years ago?

A computer running the same program can be made from different things, right? Vacuum tubes or transistors on silicon chips? And you can use different media, magnetic coatings or a laser reading bumps and hollows on a disk.

Consciousness arises from (or is) highly specific processes. It seems very likely that these developed from the processes that allow unconscious detection and response. Volvox is a green alga, a plant, which evolved 200 million years ago. It forms spherical colonies. The individual cells have eyespots and whip-like tails, which allow the colony to swim towards the light. As well as photoreceptor proteins, these eyespots contain a large number of complex signalling proteins. Light arriving at the eyespot sets off a photoelectric signal transduction process that ultimately triggers changes in the way the tail moves and causes the movement towards the light source.
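The detect-and-respond loop itself is easy to state as a specified process, which is the earlier point about algorithms. The cartoon below is invented for illustration (a one-dimensional world, a light source at position 0, and a keep-the-brighter-position rule); it is not Volvox biochemistry:

```python
import random

def phototaxis(start, steps=200, seed=0):
    """Toy phototaxis: drift toward a light source at position 0.

    Illustrative model: try a small random move and keep it only if it
    lands in brighter light (i.e. closer to the source).
    """
    rng = random.Random(seed)
    pos = start
    for _ in range(steps):
        trial = pos + rng.choice([-1, 1])
        if abs(trial) < abs(pos):  # accept only moves into brighter light
            pos = trial
    return pos

print(phototaxis(30))  # ends at the light source, position 0
```

Describing the transduction cascade at this level of abstraction specifies an algorithm; whether that description is the cause of anything is the question under dispute.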

Evolution has had 200 million years to build on what was already a complex system, and all that time the system has been becoming more specific.

Is it really credible that the same thing could now be achieved by vacuum tubes, or transistors, reading magnetic coatings, or microscopic bumps and hollows, or punched paper cards, or any number of other materials and technologies you could use? This just seems unscientific to me, irrational.

It’s a great sales gimmick, but really this term “neural” is a bit of a con when applied to computers.

Because they can’t produce the same effects. This is pretty obvious really!

If we can discover the precise causal mechanisms we may be able to produce consciousness in other media. Would we really want to? I think what we want is precisely the opposite. There are already more than enough conscious beings. What we want is unconscious computers.

Thanks Only, it’s been an enjoyable discussion so far. Maybe we can come back to the Turing Test issue later?

If one cannot settle on what the word “consciousness” refers to, I don’t see how one can intelligently discuss where it arises from, what it takes to create it, or within what it might reside.