Will machines completely replace all human beings?

By whom? I’m sure many capabilities are underestimated by many, and many overestimated too. What’s the most important group to consider - the common understanding, the understanding of policymakers, that of technicians, that of the shadowy cabal running the world? :slight_smile:

In my opinion, machines have already to a good extent replaced human beings. We are all expressing our natures hooked up to this web, we’re ‘online’, and recent studies have shown that a majority of people are more disturbed by a lack of wifi than by a lack of sex or even, to a point, food.

Human beings have not been replaced, but integrated in a nonhuman, perhaps supra-human web of interwoven human drives and effort.

On the OP’s issue I’ll say this: it is certainly not cheapness or economy that makes a species dominant: rather the opposite, waste, excess, the capacity to squander and still come out on top. Look at who and what rules now and has always ruled. The peacock’s tail ‘paradox’, aka self-valuing.

Insofar as mechanical beings may or may not become dominant, I don’t believe they can attain joy, and thus neither the desire to become dominant; I think that discussion is irrelevant to the future.
What is deeply relevant is fighter machines. As we’ve seen, “terminators” are being built, not in walking form, but in flying form and, most scarily, in dog form.

That’s real: barbaric countries may soon be controlling their populations with invulnerable mechanical dogs. A scary prospect.

It isn’t (perhaps not even a new anything).

Translation machines work by translating a source language, A, into an intermediate machine-use language, iX, then into the destination language, B, C, D…
A → iX → B and/or C and/or D …
or
B → iX → A and/or C and/or D …

If you’re paying attention, you’ll see that a translation library between B and C can be constructed merely by translating enough from A-iX-B and from A-iX-C that the associations between B and C become so obvious that iX is no longer needed when translating from B to C or vice versa. Such is hardly new technology, other than having Google do it on the Net.
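
To make that B↔C shortcut concrete, here is a toy sketch in Python. The phrase tables, the iX labels, and the build_direct_table helper are all invented for illustration, not taken from any real translation system: the idea is simply to join an iX→B table and an iX→C table on their shared iX key, which yields direct B→C associations so that iX can be dropped afterwards.

[code]
# Toy sketch of the pivot idea above: join two phrase tables on the shared
# intermediate-language (iX) key to get direct B -> C associations.
# The tables and phrase labels are invented for illustration only.

from collections import defaultdict

ix_to_b = {"GREET-1": ["hello", "hi"], "FAREWELL-1": ["goodbye"]}   # iX -> B
ix_to_c = {"GREET-1": ["hola"], "FAREWELL-1": ["adios", "chau"]}    # iX -> C

def build_direct_table(ix_to_b, ix_to_c):
    """For every iX phrase known in both tables, pair its B phrases
    with its C phrases; the result is a direct B -> C lookup."""
    b_to_c = defaultdict(set)
    for ix_phrase, b_phrases in ix_to_b.items():
        for c_phrase in ix_to_c.get(ix_phrase, []):
            for b_phrase in b_phrases:
                b_to_c[b_phrase].add(c_phrase)
    return dict(b_to_c)

print(build_direct_table(ix_to_b, ix_to_c))
# e.g. {'hello': {'hola'}, 'hi': {'hola'}, 'goodbye': {'adios', 'chau'}}
[/code]

Real systems do this statistically over millions of sentence pairs rather than with hand-written dictionaries, but the join-on-the-pivot idea is the same.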

Machines that produce a conscious from a subconscious certainly do.

What kind of machinery would be able to do that?

Hi Only,

An algorithm can’t think. Thought is an aspect of consciousness, and an algorithm isn’t the kind of thing that could be conscious; for one thing, an algorithm is itself a thought, a concept in our minds. Computers like the ones we are using now should be considered in this context as representations of algorithms, representations of our concepts. The same applies to the computers/programs in the article you kindly linked to.

There really isn’t any question about where we draw the legal and moral lines, and the computer programs described with such breathless enthusiasm and naivety in the New Scientist article do nothing to change that.

Computers will never have the moral or legal status of humans because they can’t feel, see, hear, think, understand. They aren’t conscious. And they haven’t become any more conscious as the result of the programming described in the article.

Computers can’t do human style translation because they can’t feel, see, hear, think, understand. To take a specific example, you can’t understand what “good” means in the appropriate way unless you can feel things like toothache, orgasm, disappointment, joy. This applies throughout human language, because language is essentially based on experience.

The article concludes with this quote:

[i]“To match this human ability, we have to find a way to teach computers some basic world knowledge, as well as knowledge about the specific area of translation, and how to use this knowledge to interpret the text to be translated.”[/i]

The speaker here knows there’s a problem, but he doesn’t realise that this is an insurmountable barrier to human-level translation by computer. The basic world knowledge he is talking about can only be gained through experience, through consciousness, feeling, seeing, hearing, and that is something a computer can never have.

I find it fascinating but also a little scary that so many people seem to believe that computers are moving towards consciousness, and that they might have legal and moral responsibilities.

Technotards will scheme ways to integrate their robots into society more and more, knowing full well that they lack sentience, like the robotic cars that are now driving around without a human directly handling their operation. These technotards’ robots will assume human jobs without sentience, which is an unspoken F.U. to humanity: you will be displaced and, by your own choice, directly controlled by inferior machines. The “smart” people are excited by all these techtard developments, but it will be the average and under-average folks who will have to rebel against tin-can terminators and their technotard inventors.

You seem to be excessively fond of these terms, but I’m afraid you are misusing them.

Techtard
A contraction of “Technological Retard”
Technological + Retard = Techtard
Someone who is so “technologically challenged” that they shouldn’t be allowed within a 10 mile radius of anything electronic.

Technotard
A person who has a significant conceptual, behavioral, or intellectual impairment that makes it impossible for them to understand or use even the most rudimentary of electronic devices.
Jack can’t even program his remote control or the speed dial in his cell phone–what a technotard!

Machines may not yet have replaced humans, but we are already slaves to technology. Social media in particular and the internet in general never close down.
I can spend up to twenty hours a day in front of my computer. Even when I do manage to switch off, it is usually only for a few days before I am back on again.

Pretty much any AI that I would design. An AI gains a “subconscious” by first having to juggle its own priorities and then by not being able to sufficiently accomplish its goal. The most efficient use of mental capacity then requires a division between what you call a conscious and a subconscious. The “conscious” portion builds a completely different imagery to represent the surrounding reality, including the urging to attend more to this or that issue as directed by the original priority juggling. Since those urgings are separate from the conscious, perceived as “remote” from the conscious yet within oneself, they are sensed as “feelings”: joy, hate, love, depression, frustration, …

A social example would be the advent of activist groups in Congress. An AI properly formed, similar to Congress, would naturally form its own activist urgings (the priority juggling) so as to persuade the Senate (the conscious). Those activist urgings are what “emotion” is.
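
Purely as a toy illustration of that priority-juggling picture (the goal names, weights, and functions below are invented, not any established AI architecture): the “subconscious” bookkeeping turns unmet, heavily weighted goals into urgings, and the “conscious” layer sees only those urgings and attends to whichever presses hardest.

[code]
# Toy model of the "priority juggling" described above.
# Goal names, weights, and numbers are invented for illustration only.

goals = {
    "recharge":  {"weight": 0.9, "satisfaction": 0.2},
    "explore":   {"weight": 0.5, "satisfaction": 0.7},
    "socialize": {"weight": 0.3, "satisfaction": 0.9},
}

def urgings(goals):
    """Subconscious bookkeeping: unmet, heavily weighted goals
    produce the strongest urgings."""
    return {name: g["weight"] * (1.0 - g["satisfaction"])
            for name, g in goals.items()}

def conscious_focus(urges):
    """The 'conscious' layer sees only the urgings, not the bookkeeping
    behind them, and attends to whichever one presses hardest."""
    return max(urges, key=urges.get)

urges = urgings(goals)
print(urges)                   # the 'felt' pressures
print(conscious_focus(urges))  # what gets attended to next: 'recharge'
[/code]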

[youtube]https://www.youtube.com/watch?v=9iCd6UHR-3I[/youtube]

This half-time is lasting eons.

[youtube]https://www.youtube.com/watch?v=P1NDsxVCo_Q[/youtube]

Had an epiphany and a climax and a mental spike when this song came on while playing Grand Theft Auto.

Divide. And conquer. For the sake of the whole.

R I S E

I haven’t watched this video, although it looks like it might be amusing, but it doesn’t look like it will be about machines replacing human beings? So that was disappointing in a way, as I had been hoping to discuss that. But you do have 3000 posts; I assume you don’t just post irrelevant links or you’d have been banned(?), so is there some philosophical content in the vids? Or are you just feeling relaxed?

My posts were only loosely associated with machines completely replacing all human beings. The fact that only some lyrics contained in those songs do connect to some leap-frog logic, is on me. Then again, it is the half-time show. Like . . . metaphorically . . . as to our timeline in human history . . . In relation to our mechanical relations … and the star we call a sun’s age . . . damn, that’s convoluted.

Stop. I’m an asshole.

I want to be the first bionic man. Symbiotic re . . . la . . . . tion . . . .ship. Can you hear me say that to you over this?

So NO! Machines won’t replace what is destined to disappear, change, evolve, re-create anyway, no.

I’m hungry for pre-man ape meat, which by the way, is what has added to my asshole irrelevancy. And oh by the way, ban me please. But delete the threads I asked to be deleted first, pleeeeease.

Yes, I am relaxed. Thank you for asking. Bleep Bleep Bloop.

Interesting post, and welcome (back?) to ILP.

Computers can technically feel, see, hear, and even sense phenomena that we can’t via any number of electronic sensors. Their potential on that end is almost as limitless as imagination. But you might ask whether they can sense in the same way that human beings do. Another question is: does an AI need to apprehend the world like a human? Many creatures experience the world with sensory systems that are radically different from ours. And the most fundamental question seems to be: what is consciousness?

Do conscious beings need to share the same biological idiosyncrasies for each to have a concept of good?

I think there is a long and bumpy road ahead of us with respect to AI, and I do not yet think there is one inevitable outcome.

Try thinking more about a human becoming more like a computer than about a computer showing convincing human emotions.

The liver “senses” the lack of insulin, but we do not feel this, there is no associated conscious experience. Computers work in a similar way. They do not feel, see or hear.

They don’t. See above.

They need to if they are to understand human language.

It has been said that that is the only question we can answer: what we don’t understand so well is “what is matter?”

They need to be able to feel.

That depends on whether a conscious has been established within the computer (a map of the terrain from which to choose behavior).

A disappointing start to the day, but I suppose I will have to take the rough with the smooth.

Does anybody have anything interesting to say? I was hoping Onlyhumean might respond.

Why?