Will machines completely replace all human beings?

:slight_smile: this is funny…

Can you imagine computers on an Internet forum discussing “Will biological life supplant all computers?”

There is AI all over the cosmos … And they’re just like us.

chuckles

Modern science is an Occidental science and has conquered the whole world. So even if the genocide continues and is finally completed, the technological results of Occidental science - especially the machines - will still be there, and then it will depend on the non-Occidentals or the machines whether science is continued or not.

Maybe science will “die” in the same manner as Faust in the second part of Goethe’s tragedy “Faust”.

An interesting development in the field of translation:
newscientist.com/article/21 … late-with/

Slightly sensationalist headline, but an insight into the working of neural networks.

I could make an AI better than any scientist could. I just don’t, for ethical reasons; I don’t know if a soul would enter a computer machine.

Thank you for the information. Do you think that it is a “giant leap”?

Friedrich Wilhelm Joseph Schelling said that nature opens its eyes in the human being. So I am saying that culture opens its eyes in the current phase of the Occidental culture, which means the trend toward transhuman beings.

Not really, I think it clarifies that symbolic “thinking” (quotes due to vaguely-defined words) can be an emergent property from algorithmic programming. I don’t think there’s likely to be a big leap, though, so much as small steps that make it harder to agree on what machine intelligence entails and where we draw legal and moral lines.

I agree. But do you think that the capabilities of machines are overestimated at the present time?

By whom? I’m sure many capabilities are underestimated by many, and many overestimated too. What’s the most important group to consider - the common understanding, the understanding of policymakers, that of technicians, that of the shadowy cabal running the world? :slight_smile:

In my opinion, machines have already to a good extent replaced human beings. We are all expressing our natures hooked up to this web, we’re ‘online’, and recent studies have shown that a majority of people are more disturbed by a lack of wifi than by a lack of sex or even, to a point, of food.

Human beings have not been replaced, but integrated in a nonhuman, perhaps supra-human web of interwoven human drives and effort.

On the OP’s issue I’ll say this: it is certainly not cheapness or economy that makes a species dominant, but rather the opposite: waste, excess, the capacity to squander and still come out on top. Look at who and what rules now and has always ruled. The peacock’s tail ‘paradox’, aka self-valuing.

In so far as mechanical beings may or may not become dominant, I don’t believe they can attain joy, and thus not the desire to become dominant either; I think that discussion is irrelevant to the future.
What is deeply relevant is fighter machines. As we’ve seen, “terminators” are being built, not in walking form, but in flying form and, most scarily, in dog form.

That’s real; barbaric countries may be controlling their populations with invulnerable mechanical dogs. A scary prospect.

It isn’t (perhaps not even anything new).

Translation machines work by translating a source language, A, into an intermediate machine-use language, iX, and then into the destination language, B, C, D…
A → iX → B and/or C and/or D …
or
B → iX → A and/or C and/or D …

If you pay attention, a translation library between B and C can be constructed merely by translating enough from A-iX-B and from A-iX-C that the associations between B and C become so obvious that iX is no longer needed when translating from B to C or vice versa. Such is hardly new technology, other than having Google do it on the Net.
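A minimal sketch of that pivot idea, at the word level only; the lexicons and the concept IDs standing in for iX are hypothetical, and real systems of course work statistically or with neural networks over whole phrases and sentences:

[code]
# Toy illustration of pivot translation: A -> iX -> B, then a direct
# B -> C table built by pairing entries that share the same iX concept.

# Hypothetical mini-lexicons (purely illustrative).
english_to_ix = {"dog": "C1", "house": "C2"}      # language A -> iX
french_from_ix = {"C1": "chien", "C2": "maison"}  # iX -> language B
german_from_ix = {"C1": "Hund", "C2": "Haus"}     # iX -> language C

def translate_via_pivot(word, src_to_ix, ix_to_dst):
    """A -> iX -> B: route every translation through the pivot language."""
    return ix_to_dst[src_to_ix[word]]

# Build a *direct* B -> C table by pairing words that map to the same iX
# concept; after this, the pivot is no longer needed for B <-> C.
french_to_german = {french_from_ix[c]: german_from_ix[c] for c in french_from_ix}

print(translate_via_pivot("dog", english_to_ix, french_from_ix))  # chien
print(french_to_german["chien"])  # Hund, with no iX lookup at translation time
[/code]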

Machines that produce a conscious from a subconscious certainly do.

What kind of machinery would be able to do that?

Hi Only,

An algorithm can’t think. Thought is an aspect of consciousness, and an algorithm isn’t the kind of thing that could be conscious; for one thing, an algorithm is itself a thought, a concept in our minds. Computers like the ones we are using now should be considered in this context as representations of algorithms, representations of our concepts. The same applies to the computers/programs in the article you kindly linked to.

There really isn’t any question about where we draw the legal and moral lines, and the computer programs described with such breathless enthusiasm and naivety in the New Scientist article do nothing to change that.

Computers will never have the moral or legal status of humans because they can’t feel, see, hear, think, understand. They aren’t conscious. And they haven’t become any more conscious as the result of the programming described in the article.

Computers can’t do human style translation because they can’t feel, see, hear, think, understand. To take a specific example, you can’t understand what “good” means in the appropriate way unless you can feel things like toothache, orgasm, disappointment, joy. This applies throughout human language, because language is essentially based on experience.

The article concludes with this quote:

[i]“To match this human ability, we have to find a way to teach computers some basic world knowledge, as well as knowledge about the specific area of translation, and how to use this knowledge to interpret the text to be translated.”[/i]

The speaker here knows there’s a problem, but he doesn’t realise that this is an insurmountable barrier to human-level translation by computer. The basic world knowledge he is talking about can only be gained through experience, through consciousness, feeling, seeing, hearing, and that is something a computer can never have.

I find it fascinating but also a little scary that so many people seem to believe that computers are moving towards consciousness, and that they might have legal and moral responsibilities.

Technotards will scheme up ways to integrate their robots into society more and more, knowing full well that they lack sentience, like the robotic cars that are now driving around without a human directly handling their operation. These technotards’ robots will assume human jobs without sentience, which is an unspoken F.U. to humanity: you will be displaced and directly controlled, by your own choice, by inferior machines. The “smart” people are excited by all these techtard developments, but it will be the average and below-average folks who will have to rebel against tin-can terminators and their technotard inventors.

You seem to be excessively fond of these terms, but I’m afraid you are misusing them.

Techtard
A contraction of “Technological Retard”
Technological + Retard = Techtard
Someone who is so “technologically challenged” that they shouldn’t be allowed within a 10 mile radius of anything electronic.

Technotard
A person who has a significant conceptual, behavioral, or intellectual impairment that makes it impossible for them to understand or use even the most rudimentary of electronic devices.
Jack can’t even program his remote control or the speed dial in his cell phone–what a technotard!

Machines may not yet have replaced humans, but we are already slaves to technology. Social media in particular and the internet in general never close down.
I can spend up to twenty hours a day in front of my computer. Even when I do manage to switch off, it is usually only for a few days before I am back on again.

Pretty much any AI that I would design. An AI gains a “subconscious” by first having to juggle its own priorities and then by not being able to sufficiently accomplish its goal. The most efficient use of mental capacity then requires a division between what you call a conscious and a subconscious. The “conscious” portion builds completely different imagery to represent the surrounding reality, including the urging to attend more to this or that issue as directed by the original priority juggling. Since those urgings are separate from the conscious, perceived as “remote” from the conscious yet within oneself, they are sensed as “feelings”: joy, hate, love, depression, frustration, …

A social example would be the advent of activist groups in Congress. An AI properly formed, similar to Congress, would naturally form its own activist urgings (the priority juggling) so as to persuade the Senate (the conscious). Those activist urgings are what “emotion” is.
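A toy sketch of the two-layer division being described, in Python; the class names, priorities, and numbers are all hypothetical and only illustrate the idea of urgings passed upward from a priority-juggling layer to an attending layer, not an actual design:

[code]
# Hypothetical two-layer agent: a "subconscious" juggles priorities and
# emits urges; the "conscious" only sees those urges and chooses where
# to attend, like a Senate responding to activist pressure.
from dataclasses import dataclass

@dataclass
class Urge:
    topic: str
    strength: float  # 0..1, felt as a pressure rather than seen as a priority

class Subconscious:
    def __init__(self, priorities):
        self.priorities = priorities  # e.g. {"safety": 0.9, "curiosity": 0.4}

    def juggle(self, satisfaction):
        # Less-satisfied priorities generate stronger urges; the juggling
        # itself stays hidden from the conscious layer.
        return [Urge(topic, weight * (1.0 - satisfaction.get(topic, 0.0)))
                for topic, weight in self.priorities.items()]

class Conscious:
    def attend(self, urges):
        strongest = max(urges, key=lambda u: u.strength)
        return f"attending to {strongest.topic} (felt strength {strongest.strength:.2f})"

sub = Subconscious({"safety": 0.9, "curiosity": 0.4})
mind = Conscious()
urges = sub.juggle({"safety": 0.2, "curiosity": 0.8})  # how satisfied each goal currently is
print(mind.attend(urges))  # -> attending to safety (felt strength 0.72)
[/code]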

[youtube]https://www.youtube.com/watch?v=9iCd6UHR-3I[/youtube]

This half-time is lasting eons.

[youtube]https://www.youtube.com/watch?v=P1NDsxVCo_Q[/youtube]

Had an epiphany and a climax and a mental spike when this song came on while playing Grand Theft Auto.

Divide. And conquer. For the sake of the whole.

R I S E