Will machines completely replace all human beings?

Timewaster?

I suggest there is no need for such a word, because the age of cyborgs will eliminate that need. A picture of a world divided in two, machines and human beings, is inconceivable, because a self-programmed machine is antithetical to the equation of conception with creation. The materials will wear out, and it will have to advance to perpetual re-creation on all fronts: material, computational, and communicational, among others.

It is conceivable that such a product will, after all, only be possible with a functional, degradable shell, because of the existence of ultimate, incalculable limits which cannot be overcome, regardless of the idea of the nihilization of such limits by computer-systemic ultra-machines.

These limits are beyond the transformational capacity of programming; the transformers will ultimately cohere with teleological-astronomical limits and, as such, will be subject to the laws of transformational degradation and obsolescence.

The most advanced machines, when reaching this limit of transformational calculus, will rapidly degrade into the dust from which the evolutionary progression will restart, from basic carbon-based one-celled organisms.

Retention of information in hyperspace, like the current cloud, may be possible, and that could conceivably re-integrate with what we call the soul.

But a transitional term of description has no merit, because such a transition is largely unknown; as far as a spatio-temporal configuration is concerned, its variables are too many, and it is logically unsignified and insignificant.

It sounds like you’re confusing the signified and the referent, there. A brown horse may be just a concept in my mind, but it can also be a horse.

That’s a fairly blanket statement; how do you know our qualia aren’t the result of algorithmic processes in our brains and CNS? If they are, it’s a matter of drawing a line at some level of complexity and consigning some existent living beings to “non-conscious” (which we do). If not, what do you propose they are - is consciousness an ineffable mystery, a testable proposition, the ground of existence itself?

And if a machine could act in a way indistinguishable from a conscious being, would it not be advisable to err on the side of caution and ascribe it corresponding rights?

I agree that the greatest obstacle to language learning is a lack of direct lived experience in the world, but I’m also aware (as should we all be) that a lot of my knowledge has no lived experience to back it up either.

The difference between lived and unlived experience is certainly a matter of hyperbole, because only at the very limits of life can a transformational process leading from qualia to quanta result in fewer unknown variables, where the difference results in near-ultimate entropy.

The re-integration of feedback systems devolves toward identifiable concepts, where ultimately the potential regressive progression reverts to identifiable systems, toward the use of more general languages.

Out of the confusion between the signified and the referent does thinking arise non-algorithmically. Therefore such confusion is the fodder of thought, as consciousness. So this confusion is generic, as a re-integrative goal toward an ideal concept. Such a goal need not make existential qualifiers part of the aim of signifying referents.

Grey horses do exist in reality as well as concept, but flying horses do not.

“Technogeek”

Neither ‘geek’ nor ‘nerd’ has a negative connotation any more.

Deleted, mistake.

But the position is significantly different with an algorithm, which can only be a concept: there isn’t anything “out there in the world” you can point to and say “there is the algorithm”. You could point to a representation of an algorithm, but not the algorithm itself. Whereas with a horse, we can have the concept of a horse, a representation of a horse, and also the horse itself.

Well, my suggestion is that an “algorithm” is not an identifiable, tangible process in the brain, but rather an idea about or a description of the processes that go on in the brain. “Algorithm” has a similar ontological status to “equation”. Would you find it plausible that consciousness might be caused by equations?
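To make that distinction concrete, here is a minimal sketch (Python, my own illustrative choice): two plainly different texts that we would nonetheless call representations of “the same” Euclidean algorithm. Each function is something you can point to; the algorithm is the concept they share.

```python
# Two distinct representations of what we call "the same" algorithm:
# Euclid's method for the greatest common divisor.

def gcd_iterative(a: int, b: int) -> int:
    """One representation: a loop."""
    while b != 0:
        a, b = b, a % b
    return a

def gcd_recursive(a: int, b: int) -> int:
    """Another representation: recursion."""
    return a if b == 0 else gcd_recursive(b, a % b)

# The two texts differ, yet we say both "are" Euclid's algorithm -
# which suggests the algorithm itself is a concept, not any
# particular physical object we can point to.
assert gcd_iterative(48, 18) == gcd_recursive(48, 18) == 6
```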

Not when we know that it is acting that way because we set it up to look like it is conscious, which is the case with computers now. I genuinely find it a little scary that you can think like that. It’s like you’ve lost sight of what it means to be a conscious entity.

Computers are getting closer and closer to passing the Turing Test. Are you starting to think they should be ascribed some minimal rights now? Or is that going to happen all of a sudden on the day a computer fools the Turing Test Committee?

Ah yes: but that knowledge is backed up by the lived experience of others whose reports you are able to place in the context of your own experience. If their lived experience is too far removed from your own, you won’t be able to understand their reports. So for example if you are red/green colourblind you will be able to understand that red and green are colours without being able to understand how to distinguish between them. If you were completely blind you wouldn’t be able to understand colour properly at all.

This got no replies, which was somewhat expected. It is the harder pill to swallow. If we are replaced, there are no longer problems to debate.

The way nerds and geeks have become the alpha males in a comfortable society, possessing more logic than emotional extremes, is not that much different from the varying degrees between Cro-Magnons and Neanderthals. They write the code that goes into the programming of a drone that strikes the old-style alpha. The traditional alpha now sits in a recliner three thousand miles away watching the strike. Soon there won’t be the need for the traditional alpha in the recliner. But there will be more programmers. The rest will just follow the programs, perhaps dwindling in time.

Things go extinct. Entire languages disappear. And with them special ways to overcome problems. Problems vanish. New ones arise. Shakespeare will not be talked about in a few thousand years. The sun will burn out. And bionic man will flourish in a far away galaxy.

Eventually computers will know what we think. That sounds dangerous. But it also sounds like one mind.

Well I replied, but only to indicate that I didn’t see what you were getting at. It isn’t immediately clear why you want us to think more about humans becoming more like computers.

People aren’t really becoming more like computers anyway, that’s science fiction.

They don’t know anything; they can’t know anything. We interpret their actions and states as having meanings. Knowing things is a property of minds; computers do not have minds, and are not minds. Science fiction again.

Mildly deranged ranting?

Timewaster?

Whose sock puppet are you? You could be missing your true identity.

What is it that qualifies you to be so confident of that thought?

I think she explained very well in her post and her reply to me.

First off, people are becoming more like computers. It’s a gradual process, if you care to notice. To say that we aren’t is to suggest that consuming information and processing information isn’t a part of our new information age.

Second, it could be argued that knowing isn’t even a property of the mind - that people don’t know anything either. That is philosophy 101. But please, let us continue with logical assumptions.

Science fiction becomes science reality. But I’ll admit, George Orwell probably should’ve used a different year in his title, 1984.

You are a denier and a debunker. Half-listener, at best.

I will tell you clearly. Human beings will merge with machines, not be replaced completely. The quality and the nature of shared thinking doesn’t matter to this debate. It will be shared.

The film industry? Is it “the most important group to consider”? - Maybe, maybe not. At least it is interested in the capabilities of machines.

Technicians must be more optimistic than pessimistic, which means that they could be in danger of overestimating the capabilities of machines.

Policymakers have to talk more optimistically than pessimistically, which means that, if they are politically interested in the capabilities of machines and talk about them publicly, they are in danger of over- and/or underestimating them.

The common understanding is a matter of a majority, and majorities do what they ought to do, which means these days: they are politically correct (so cf. policymakers).

If the shadowy cabal is interested in the capabilities of machines, then it could also be in danger of over- and/or underestimating them.

The answer to your question depends on the interest in the capabilities of machines, in combination with the everlasting interest in the option of not wanting any majority to know what really happens. If the shadow cabal and the policymakers are interested in the capabilities of machines, then the majority with its common understanding is also interested in them. The shadow cabal and the policymakers are always interested in the option of not wanting any majority to know what really happens, so that the majority with its common understanding does not know what really happens. I think that the political interest in the capabilities of machines is high, but it is not politically correct to talk about that theme so much that the common understanding becomes capable of estimating the capabilities of machines in the right way. There is always an interest in the option of not wanting any majority to know what really happens. This may lead to the following answer: currently, the capabilities of machines are over- and underestimated, namely overestimated by some and underestimated by many people.


Maybe this thread can show that answer too (provided that ILP represents the world [ :laughing: ]): this thread now has 115050 views and 1975 replies, so it seems to be an important thread. But if I look at the number of those who posted in this thread and the number of those who did not, then I have to say that the number of ILP members who are really interested in the topic of this thread is relatively small. The majority is not interested in it. This majority is probably in danger of over- and/or underestimating the capabilities of machines.

I’m afraid you have to face up to the sad reality that there may be more of us who are thinking this way.

We can point to things that behave in ways that can be described algorithmically. It’s a linguistic tool, not a reification. So…

I find it plausible that it is caused by processes that can be described algorithmically. Or put into equations. Don’t you?

We know exactly what our own consciousness is. We extrapolate that to other beings based on their similarity to us - I have a better idea of my brother’s experience of something than that of a Kalahari bushman’s, and better that than an orang-utan’s; better that than a crab’s; better that than a flatworm’s. I have to say that the consciousness of a crab is a bit of a mystery, but I assume they feel pain because they respond in ways that indicate it, and I assume an oak tree doesn’t because it doesn’t (although it responds physically to damage and heals).

Consciousness doesn’t necessarily mean human consciousness. I’m perfectly willing to grant that a computer won’t know what it is to be human - or vice versa.

I’m not talking about computers now; we’re at the level of modelling basic invertebrates with a few dozen neurons. I’m trying to get to the root of your argument that “consciousness is consciousness, and computers just aren’t and can never be that.”

If consciousness isn’t the result of neural activity, what is it? It’s certainly easy to drastically modify consciousness by modifying neural activity, and to end it by ending that. It seems a reasonable proposition.

And if it is, why is organically-mediated information processing somehow different from electronically-mediated?

I think that would depend on how and why a computer manages to do so. If it’s by ELIZA-like language manipulation, then no.
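For anyone unfamiliar with the reference: ELIZA-style manipulation is little more than keyword matching and canned substitution. A toy sketch (Python; the patterns and replies are my own invented examples, not ELIZA’s actual script):

```python
import re

# A toy ELIZA-style responder: regex patterns mapped to canned
# templates. It produces plausible replies with no understanding.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am worried about machines"))
# -> Why do you say you are worried about machines?
```

A program like this can sustain a surprisingly long conversation, which is exactly why passing a casual chat test says little about consciousness.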

As Wittgenstein said, “if a lion could speak, we could not understand him.” I’m still willing to grant lions the benefit of the doubt and not torture them for fun, though.

An algorithm is merely a specified process. The entire universe is nothing other than processes. Describe any one of them and you have specified an algorithm that physically exists (more scripturally known as a “spirit”). Regardless of whether Man understands the processes of the mind, because they physically exist, they are necessarily algorithms.
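On that reading, even a stone falling is a specified process. A sketch of what such a description-as-algorithm might look like (Python; the Euler time step and starting height are my own illustrative choices):

```python
# Describing a physical process - a stone falling under gravity -
# as a step-by-step procedure, i.e. as an algorithm in the above sense.

G = 9.81    # gravitational acceleration, m/s^2
DT = 0.001  # time step, s

def time_to_fall(height_m: float) -> float:
    """Step the falling process forward until the stone hits the
    ground, returning the elapsed time in seconds."""
    position, velocity, elapsed = height_m, 0.0, 0.0
    while position > 0.0:
        velocity += G * DT        # the process, one small step at a time
        position -= velocity * DT
        elapsed += DT
    return elapsed

print(f"{time_to_fall(20.0):.2f} s")  # close to sqrt(2*20/9.81), about 2.02 s
```

Whether that description thereby “physically exists” in the way the post claims is, of course, the point under dispute.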

I’ve been studying compositional signatures, consciousness signatures and behavioral signatures for… Hmm… 23 years now. Apply a frequency to the signature and you get output.

We have a superposition process that really is the answer to Turing: if you can’t superimpose, it’s not conscious. Otherwise known as possessions.