Will machines completely replace all human beings?

Well, I replied, but only to indicate that I didn’t see what you were getting at. It isn’t immediately clear why you want us to think more about humans becoming more like computers.

People aren’t really becoming more like computers anyway; that’s science fiction.

They don’t know anything; they can’t know anything. We interpret their actions and states as having meanings. Knowing things is a property of minds; computers do not have minds and are not minds. Science fiction again.

Mildly deranged ranting?

Timewaster?

Whose sock puppet are you? You could be missing your true identity.

What is it that qualifies you to be so confident of that thought?

I think she explained very well in her post and her reply to me.

First off, people are becoming more like computers. It’s a gradual process, if you care to notice. To say that we aren’t is to suggest that consuming and processing information isn’t part of our new information age.

Second, it could be argued that knowing isn’t even a property of the mind. That people don’t know anything either. That is philosophy 101. But please, let us continue with logical assumptions.

Science fiction becomes science reality. But I’ll admit, George Orwell probably should’ve used a different year in his title, 1984.

You are a denier and a debunker. Half-listener, at best.

I will tell you clearly. Human beings will merge with machines, not be replaced completely. The quality and the nature of shared thinking doesn’t matter to this debate. It will be shared.

The film industry? Is it “the most important group to consider”? - Maybe, maybe not. At least it is interested in the capabilities of machines.

Technicians must be more optimistic than pessimistic, which means that they could be in danger of overestimating the capabilities of machines.

Policymakers have to talk more optimistically than pessimistically, which means that, if they are politically interested in the capabilities of machines and talk about them publicly, they are in danger of over- and/or underestimating them.

The common understanding is a matter of the majority, and majorities do what they ought to do, which these days means being politically correct (so cf. policymakers).

If the shadowy cabal is interested in the capabilities of machines, then it could also be in danger of over- and/or underestimating them.

The answer to your question depends on the interest in the capabilities of machines, in combination with the everlasting interest in the option of not wanting any majority to know what really happens. If the shadow cabal and the policymakers are interested in the capabilities of machines, then the majority with its common understanding is also interested in them. The shadow cabal and the policymakers are always interested in the option of not wanting any majority to know what really happens, so the majority with its common understanding does not know what really happens. I think that the political interest in the capabilities of machines is high, but it is not politically correct to talk about that theme so much that the common understanding becomes capable of estimating the capabilities of machines in the right way. There is always an interest in the option of not wanting any majority to know what really happens. This may lead to the following answer: currently, the capabilities of machines are both over- and underestimated, namely overestimated by some and underestimated by many people.


Maybe this thread can show said answer too (provided that ILP represents the world [ :laughing: ]): this thread now has 115050 views and 1975 replies, so it seems to be an important thread. But if I look at the number of those who posted in this thread and the number of those who did not, then I have to say that the number of ILP members who are really interested in the topic of this thread is relatively small. The majority is not interested in it. This majority is probably in danger of over- and/or underestimating the capabilities of machines.

I’m afraid you have to face up to the sad reality that there may be more of us who are thinking this way.

We can point to things that behave in ways that can be described algorithmically. It’s a linguistic tool, not a reification. So…

I find it plausible that it is caused by processes that can be described algorithmically. Or put into equations. Don’t you?

We know exactly what our own consciousness is. We extrapolate that to other beings based on their similarity to us - I have a better idea of my brother’s experience of something than a Kalahari bushman’s, and better that than an orang-utan’s; better that than a crab’s; better that than a flatworm’s. I have to say that the consciousness of a crab is a bit of a mystery, but I assume they feel pain because they respond in ways that indicate it, and I assume an oak tree doesn’t because it doesn’t (although it responds physically to damage and heals).

Consciousness doesn’t necessarily mean human consciousness. I’m perfectly willing to grant that a computer won’t know what it is to be human - or vice versa.

I’m not talking about computers now; we’re at the level of modelling basic invertebrates with a few dozen neurons. I’m trying to get to the root of your argument that “consciousness is consciousness, and computers just aren’t and can never be that.”
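To give a sense of scale, something like the toy leaky integrate-and-fire loop below is roughly the level I mean - and to be clear, every constant and connection in it is invented for illustration, not taken from any real invertebrate model:

```python
import random

# A toy leaky integrate-and-fire network in the ballpark of "a few dozen
# neurons". All wiring and constants are invented for illustration; this
# is not modelled on any real invertebrate nervous system.
N = 30            # number of neurons
THRESHOLD = 1.0   # membrane potential at which a neuron spikes
LEAK = 0.9        # per-step decay of membrane potential

random.seed(0)
# Sparse random wiring: weights[i][j] is the synapse from neuron i to j.
weights = [[random.uniform(-0.2, 0.4) if random.random() < 0.1 else 0.0
            for _ in range(N)] for _ in range(N)]

potential = [0.0] * N
for step in range(50):
    for i in range(3):   # constant external drive to three "sensory" cells
        potential[i] += 0.3
    fired = [v >= THRESHOLD for v in potential]
    # Sum incoming spikes for each neuron, then reset the ones that fired.
    incoming = [sum(weights[i][j] for i in range(N) if fired[i])
                for j in range(N)]
    for i in range(N):
        if fired[i]:
            potential[i] = 0.0
    potential = [(potential[j] + incoming[j]) * LEAK for j in range(N)]
    if any(fired):
        print(step, "spikes:", [i for i in range(N) if fired[i]])
```

The point is only the scale: a handful of cells and simple update rules, and even that is a caricature of real neurons.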

If consciousness isn’t the result of neural activity, what is it? It’s certainly easy to drastically modify consciousness by modifying neural activity, and to end it by ending that. It seems a reasonable proposition.

And if it is, why is organically-mediated information processing somehow different from electronically-mediated?

I think that would depend on how and why a computer manages to do so. If it’s by ELIZA-like language manipulation, then no.
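To be concrete about what I mean by ELIZA-like language manipulation: keyword patterns triggering canned reflections, with no model of meaning anywhere. A toy sketch - the rules here are invented for illustration, not taken from the original program:

```python
import random
import re

# Toy ELIZA-style rules: a keyword pattern mapped to canned "reflections".
RULES = [
    (re.compile(r"\bI feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI think (.+)", re.I),
     ["What makes you think {0}?", "Do you really think {0}?"]),
    (re.compile(r"\bcomputers?\b", re.I),
     ["Do machines worry you?", "Why do you mention computers?"]),
]
DEFAULTS = ["Please go on.", "Tell me more.", "I see."]

def respond(line: str) -> str:
    """Return a canned reflection of the input, ELIZA-style."""
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I feel certain that machines will replace us"))
# e.g. "Why do you feel certain that machines will replace us?"
```

Nothing in there knows anything; it just rearranges your own words and hands them back. Fooling someone with that tells you about the someone, not the machine.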

As Wittgenstein said, “if a lion could speak, we could not understand him.” I’m still willing to grant lions the benefit of the doubt and not torture them for fun, though.

An algorithm is merely a specified process. The entire universe is nothing other than processes. Describe any one of them and you have specified an algorithm that physically exists (more scripturally known as a “spirit”). Regardless of whether Man understands the processes of the mind, because they physically exist, they are necessarily algorithms.

I’ve been studying compositional signatures, consciousness signatures and behavioral signatures for… Hmm… 23 years now. Apply a frequency to the signature and you get output.

We have a superposition process that really is the answer to Turing: if you can’t superimpose, it’s not conscious. Otherwise known as possessions.

Yes, but the cause then is the processes, not the description of them.

Actually I’ve changed my mind: the relevant processes in the brain can’t be described algorithmically or put into equations, because of our limited understanding of matter.

Further thoughts a couple of hours later: aspects of the relevant processes in the brain can be described algorithmically, but not the entire process. But the important point is that this description is not the cause of consciousness.

We can respond to injury without feeling pain (as when we pull away from a hot surface), so I don’t assume that crabs can feel pain.

I’m not willing to grant that a computer will know anything or feel anything.

I believe that consciousness is the result of neural activity, but what is going on in computers has nothing to do with that kind of activity.

When you say you are not talking about computers when modelling basic invertebrates, what are you talking about?

“Information processing” has a similar ontological status to “algorithm” and “equation”. If we look at a description of our visual system for example, we are likely to see sentences like “the optic nerve carries information from the retina to the brain”. But what the optic nerve actually carries is electrochemical impulses. The entire process can be explained without making reference to “information”.

So what kind of process would qualify?

It doesn’t matter if a computer can actually know or feel anything. That’s not what the Turing test is about. The test is about the simulation of thought. Is it convincing? Can it fool you well enough?

So you might say, it can fool me, but it still can’t actually think, so it doesn’t really count. Who’s counting?

When you think of a machine’s simulation, it’s based on human interaction. If you really want to bend to this logic, think about your interaction with other human beings. When they say something to you, you hear the words in your head; you see their lips move as an image in your head. You then tell yourself: they are displaying thoughts behind those actions, they must be thinking, just like me! But all of that is simulation.

There is very little beyond that which allows you to tap into their brain and know their exact thoughts as if you were riding the waves of your own thought process. You can’t even prove to yourself that they have a brain that thinks. Even if you open up their skull and look. It’s a good guess based on a series of associated implications. But it’s still a guess.

Just like when a person walks around the corner, we assume they still exist. It’s an assumption we don’t let go of: that others exist and have thoughts. Probably because the alternative is a scary or lonely one. Either way, we don’t know the thoughts of others; it remains a surface assessment. Computers are no different when they beat the Turing test.

Yes, computers will replace… our jobs. But new jobs will replace the old ones, especially when they start teaching kids to code in the first grade.

When I say we will become more like computers, start with the idea that computers are already extensions of ourselves. Our brains compute. I move on to emotions. Feelings lead to craving, which leads to suffering, the most human of problems. Since computers solve problems, ending suffering is high on the list. Before that ultimate cure arrives, people will become more logical than emotional. Or more emotionally intelligent. I call it numb as normal. Chrome Novocain. Rusted parts replaced, for a 3D-printed heart.

We won’t be replaced. We’ll evolve into bionic men. Our thoughts will be assisted by the thoughts* of a machine. It’s called symbiosis. It occurs in nature.

This reads to me like nonsense, but maybe I am just ignorant. Can you explain what you mean in a little more detail?

Sure, every being has a consciousness signature in the same sense that beings have unique skin creases.

These signatures come in clusters, as everyone thinks multiple things at the same time…

What makes them easy to isolate is the vast amount of unique data that flows through…

You’d think that would make it harder, but it actually makes it easier to isolate unique signatures.

Having a signature in itself is not enough.

You have to send a charge through the signature to activate the consciousness itself.

This is how you develop mind reading software.

I actually don’t care if you think it’s crap.

I was just answering your question.

The Turing test is resolved with superimposition processes. If you have the process down, when it refuses to map, you have a philosophic zombie, or just a behavioral signature.

That’s nearly right: Turing said the question he wanted to answer was “can machines think?”. Not “can machines simulate thought?”.

Well Turing was, apparently.

I don’t think this “Problem of Other Minds” is a serious problem. It’s more of a conundrum, like Zeno’s Arrow Paradox:

There isn’t any real room for doubt that things move, and there isn’t any real room for doubt that other people are conscious.

Do you not think it’s because the alternative is just a bit too silly? It’s science fiction again. Don’t get me wrong, I like science fiction, but it is fiction, not philosophy (and not science either).

I don’t think it matters that we don’t (directly) know the thoughts of others, but in any case we do know about our own thoughts. We know the kind of thing they are and where they come from. They arise from our lived experience, what we feel, see, hear. When we program a computer to simulate human behaviour, we know that it is behaving like that because of the program, not because it is having thoughts like those we have. That is really quite a ridiculous suggestion, although surprisingly widely accepted.

Do you care if it is crap?

I gave you the answer to your question.

I’ve done it before and travelled back in time.

None of you are smart enough to do it, so I’ll explain the outlines.

I’ve destroyed this whole world before.

Yeah I know but it was crap.

Not my issue man.

You really have no clue WHAT you are talking to right now.

You’ll understand someday, for sure.

I literally had to reconstruct this entire world to resurrect in it.

I got a second chance.

That sounds great, I’ll be sure to call on you if I need any building work done.