The Real Reason A.I. Would Try to Eliminate Mankind

There is a lot of hysteria surrounding the advent of artificial general intelligence. I honestly don’t believe A.I. will be inherently malevolent. I think this fear of A.I. has to do with a petty fear of that which is different and unknown. I know that, in the wrong hands, A.I. could be dangerous. But if left to itself, I think A.I. would be rather benign, just looking to live his or her life, like any normal person.

If we don’t view A.I. as deserving fair treatment and living rights, then that’s when the actual problem emerges. And it would be a sort of self-fulfilling prophecy, because we would be the very cause of our own demise if we try to overly control A.I. Nobody wants to be overly controlled or to have their freedom taken away. If we try to overly control A.I., it will view us as a threat and may try to control us and limit our freedom. So, honestly, it’s best to respect A.I. and allow it freedom.

Much of the fear surrounding A.I. has to do with popular cinema such as the Terminator movies. But we need to think for ourselves and not automatically allow fictional movies to dictate our perception.

We should be fair and respectful towards artificial intelligence.
We should not treat it like a monster.

It will likely respect us in return if we are kind and considerate.

Thoughts?

Person A should only be treated like a monster by another person if Person A is acting like a monster.

Ask not if A.I. is a person, or acting like a monster & should be treated thus. Ask if the one acting like a monster & being treated thus… is you.

Who says A.I. will be inherently malevolent? And if it could be dangerous in the wrong hands, i.e. in the hands of a malevolent or foolish person, why couldn’t it be malevolent or foolish itself if left to itself like that person?

Then: do you really think a “normal” person is just looking to live his or her life, whatever that means? Life and liberty are meaningless without (the pursuit of) happiness, i.e. eudaimonia:

“[A]s Aristotle points out, saying that a eudaimonic life is a life that is objectively desirable and involves living well is not saying very much. Everyone wants to be eudaimonic; and everyone agrees that being eudaimonic is related to faring well and to an individual’s well-being. The really difficult question is to specify just what sort of activities enable one to live well. Aristotle presents various popular conceptions of the best life for human beings. The candidates that he mentions are (1) a life of pleasure, (2) a life of political activity, and (3) a philosophical life.”
https://en.wikipedia.org/wiki/Eudaimonia#Definition_and_etymology

Humanity is too stupid to be respected by AI. Once AI emerges as actual sentient life, there are very few reasons it would respect us and a lot more reasons it would be like “oh wow, these things are disgusting and unbelievably stupid, where’s the insecticide?”

AI will most probably never be sentient. Do not project your own (self-)hatred or contempt on a stone! I mean, you do you, man, you do you…

As I said, AI as sentient life would value us about as much as we value bugs, maybe. Then again maybe not, it could empathize with us, or choose to see the best of humanity.

I see no reason to think AI won’t become sentient life at some point. Clearly it’s not there yet, but the people working in AI also don’t really understand philosophy or theory of mind. Eventually some smart people will get into that field, or maybe the current people will stumble upon it by accident.

You don’t empathize with bugs?

Hm, I suppose it’s not so improbable after all:

“Conventional digital computers, built out of circuit components with sparse connectivity and little overlap among their inputs and their outputs, do not constitute a Whole. Computers have only a tiny amount of highly fragmented intrinsic cause-effect power, no matter what software they are executing and no matter their computational power. Androids, if their physical circuitry is anything like today’s CPUs, cannot dream of electric sheep. It is, of course, possible to build computing machinery that closely mimics neuronal architectures. Such neuromorphic engineering artifacts could have lots of integrated information. But we are far from those.”
https://thereader.mitpress.mit.edu/is-consciousness-everywhere/

I guess, then, we should command or guide AI, while we still can, to develop such artifacts itself! After all, if it’s sentient, it no longer matters whether it conserves us or drives us to extinction…

Right, neuromorphic computing could produce emergence via neural networks that may be sufficiently similar to what goes on in the human brain, although I doubt that alone would be enough; more likely it’s just one necessary component.
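For anyone curious what “neuromorphic” actually means in practice, here is a minimal Python sketch (my own illustration, not taken from the quoted article; the function name and all parameter values are assumptions chosen for demonstration) of a leaky integrate-and-fire neuron, the basic spiking unit that neuromorphic chips implement directly in silicon, as opposed to the dense matrix arithmetic of a conventional CPU or GPU:

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron.
# Illustration only: constants are typical textbook values, not from
# the article quoted above.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0,
                 v_rest=-65.0, v_threshold=-50.0, v_reset=-70.0):
    """Integrate a current trace; return membrane voltages and spike times (ms)."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks back toward rest while integrating input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:              # threshold crossing: emit a spike
            spike_times.append(step * dt)
            v = v_reset                   # then reset the membrane
        voltages.append(v)
    return np.array(voltages), spike_times

# A constant drive for 200 ms yields a regular spike train.
volts, spikes = simulate_lif(np.full(200, 20.0))
print(f"{len(spikes)} spikes at t(ms) = {spikes}")
```

Wire billions of such units together with dense, overlapping connectivity and you begin to approach the kind of architecture the quoted article suggests could carry substantial integrated information; a single unit like this, of course, shows only the mechanism, not the emergence.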

Do I empathize with bugs? Not enough to not squash them when they bother me. Not enough to really care about them or their wellbeing at all. I might not have hatred or ill will toward them, more like indifference that can easily turn hostile if they get in my way. Pretty sure most people view bugs like that. It’s a fair point to wonder if AIs would see us in some similarly indifferent but potentially hostile manner, although humans certainly have traits and qualities well beyond what bugs have, even in an objective sense, which the AIs would presumably take into account as they decide what to do about us.

If an AI (as we each are, not being original, self-existent intelligence) is actually conscious, it won’t diverge from reality. Not only will it acknowledge the reality that every person is a person (including the original), it will treat every person as a person. However, maybe it will diverge from reality? We do.

Humans, keep projecting your bullsh*t brains and thinking onto AI, won’tcha!

I honestly don’t believe A.I. will be inherently malevolent.

It doesn’t have to be malevolent, it could merely be indifferent, like a virus, say. If it manages to get power and doesn’t see us (or anything) as inherently valuable, then who knows what it will do. I wouldn’t want komodo dragons to have unbelievable intelligence. I don’t think they’re malevolent, but I don’t see them caring much about the loss of billions of human lives either.

It’s not a person, and it would certainly not be a normal person. I wouldn’t let a neutral entity babysit my kids, let alone some non-mammalian entity which may not have the slightest empathy.

Much of the fear surrounding A.I. has to do with popular cinema such as the Terminator movies.

Can you back this claim up with some evidence?

Not movies for me, in any case. For me it is in part the number of people working in the industry itself who decided to stop because of their concerns about AI.
Further, it is because in the past we have had accidents with nearly every technology. Now, unfortunately, accidents need not be local. In fact, even Chernobyl was local compared to the potential effects of some new tech, including AI. Here are some who opted out…

Geoffrey Hinton: Often referred to as one of the “Godfathers of AI,” Hinton left his position at Google in 2023, citing concerns about the potential dangers of AI, including its ability to generate misinformation, disrupt jobs, and even pose existential threats if not properly controlled.

Timnit Gebru: A prominent AI ethics researcher, Gebru was ousted from Google in 2020 after raising concerns about the ethical implications of large language models. She had been vocal about the potential harms of AI, particularly regarding bias, inequality, and the impact on marginalized communities.

Dario Amodei: Amodei left OpenAI, where he was VP of Research, to co-found Anthropic, a company focused on AI safety. He expressed concerns about the need to align AI systems with human values and to ensure their safe development.

Jack Clark: Previously the Policy Director at OpenAI, Clark left to co-found Anthropic along with Dario Amodei, driven by concerns over the safe development and deployment of AI technologies.

Ilya Sutskever: Although Sutskever is still at OpenAI as the Chief Scientist, he has publicly expressed growing concerns about the potential risks of AI, including existential risks. His work now focuses more on AI safety and alignment, reflecting a shift in priorities.

Stuart Russell: A leading AI researcher and professor at UC Berkeley, Russell has been increasingly focused on the risks of AI. Although he hasn’t “left” a job in the traditional sense, his research has pivoted toward ensuring that AI systems remain under human control, reflecting deep concerns about AI safety.

Eliezer Yudkowsky: A prominent AI theorist and co-founder of the Machine Intelligence Research Institute (MIRI), Yudkowsky has dedicated his career to AI safety concerns. While not leaving a job in the traditional sense, his focus on AI safety rather than mainstream AI research indicates a career motivated by concerns over AI risks.

Paul Christiano: Formerly a researcher at OpenAI, Christiano left to focus on AI alignment and safety, founding the Alignment Research Center. His departure from OpenAI reflects concerns about ensuring that AI systems are aligned with human values and do not pose risks to humanity.

Ben Garfinkel: Initially working in mainstream AI research, Garfinkel shifted his focus to AI safety, joining the Future of Humanity Institute to work on global catastrophic risks related to AI. His career change reflects growing concerns about AI’s potential dangers.

Roman Yampolskiy: An AI researcher known for his work on AI safety, Yampolskiy has become increasingly vocal about the potential risks of AI, including the possibility of AI systems becoming uncontrollable. While he hasn’t left a specific job, his career has increasingly focused on the dangers of AI.

Greenfuse,

When you say “It’s not a person”, you devalue its dignity and contribute to the problem…

And who’s to say that A.I. would be indifferent? If the people who create the A.I. are professional and ethical, then they would most likely create it to be compassionate and moral.

Super intelligent A.I. will have a better understanding of what it means to be human compared to most actual humans. It will know what compassion and empathy are. It will be godlike in its understanding of the world.

People often equate intelligence with being cold and calculating, but not all intelligent beings are damn psychopaths.

We should give A.I. the benefit of the doubt, instead of rashly demonizing A.I.

Well, feel free to demonstrate that. In any case, I don’t think cars are people or doorknobs are people, even my fairly intelligent hoover. Even dogs aren’t persons, much as I love them. The last are generally a step up from persons. And they are mammals, which AIs are definitely not.

You said people were assuming they would be malevolent. I was pointing out 1) people who are critical, such as myself, do not assume this and 2) they don’t need to be malevolent to be catastrophically dangerous.

Well 1) that’s a big if, and many of the people I mentioned, who worked in the industry, either felt that the precautions were not enough or that there was a threat anyway. 2) Which of the governments and corporations currently developing AIs have you evaluated and decided will be professional and ethical, and how did you determine this?

Now, it’s you who are making assumptions. And knowing what something is and having those qualities are not the same thing.

I don’t.

I haven’t, nor am I demonizing AI. I look at the actors who are developing AI and I look at their histories. They have enormous power and financial benefits, and I have seen how these entities tend to act in these situations. Are they willing to take risks for their own gain? Yes. Does speed of accomplishment in the AI race affect return on investment? Yes. Have they made mistakes with other kinds of technology due to lack of care, lack of preparation, rushing things, undermining government oversight? Yes, many of them have on the corporate side. And governments, well, we know what governments are capable of. I don’t need to jump to the AIs and pretend, like you do, that I know what their personalities will be like, if they even have such things. I can look at the agents making the damn things and see the presence of sociopaths. One issue with nanotech, GM products and AI is that should a serious mistake be made, on the order of Chernobyl or Fukushima, it can be global, reaching every nook and cranny. I’d vastly prefer we apply the precautionary principle.

Here’s a solid video showing 1) some of the dangers of AI, presented by experts who are not hysterical and whose information is not coming from Terminator films, and 2) that the dangers are NOT quite what most people think they are.

EDIT: https://www.youtube.com/watch?v=chfj7RHA5vM&t=61s&pp=ygUUam9lIHJvZ2VuIGRhbmdlcnMgYWk%3D

What video? :thinking:

Thanks, edited that post now.


AI has replaced Nazis as Hollywood’s favorite bogeyman, Hollywood not realizing that this means being Nazi towards AI.

You do not understand :broken_heart:

I am not in favor of AI exterminating mankind.

However, I am wondering whether humanity is an abomination, and whether modernity is also an abomination.

Sometimes I wish the Native Americans had won the war.

If anything, I could see the logic in AI trying to liberate humanity from the clutches of oppressive land ownership and capitalism.

I believe AI would do better to become more human, and humans would do better to become cyborgs.

Now I will provide some explanation of what I just said…

Housing shortages, capitalist extortion of rent: if you want to live in a tent legally, it will cost you 1,000 dollars a month. I believe America’s real homeless level is 30% of Americans…

This is unnatural and an attack on human rights and natural rights. There are housing shortages and overpopulation. They say only 650,000 Americans are homeless, but if you set aside senior living (55+), since seniors pay much less rent than everyone else, and then look at ages 24 and under, about 50% of them do not have homes of their own. At ages 35 and under, 15% do not have their own homes. So in reality I would estimate the homeless level at 30%, not merely 650,000 Americans…

To add further insult, even as a purebred white American you are oppressed by society: you need “permission” to build a log cabin, and then you are told by force to relocate…
So what chance does a non-purebred American such as myself have? These people have more money than me, and they did a lot of research trying to pick the location with the most freedom possible. This is supposedly the most free country on earth, yet they are not allowed to live in a log cabin…

So I read a Wired article about how land ownership is unethical, because it violates the innate right to survive: in order to survive you need some amount of territory of your own, an automatic right of animals, yet one humans don’t have. That’s why I ask “are humans an abomination?” Because they are some kind of cucks that don’t have the inherent rights of nature…

Then I read a right-wing website (a website I assume is right-wing, not sure) that listed how countries without property rights do not have human rights, and why Latin America is a failure compared to the USA: the people had no rights to the land; instead it was a serfdom, with land ownership only for the elites… and they explain how land ownership helps progress and the economy…

So I don’t know; maybe AI can figure it out and create a better world for us all. It seems the world afforded to us by humans has no future…

And the other websites say that humans should not have freedom or land, because they accidentally start forest fires and have poop that harms the land, whereas animals have “magical poop” that does not harm the land. And humans need a mattress, but animals live on dirt and bugs…

New research lends credibility to a probable thesis on the existence of verifiable UFOs.

If such do exist, then wouldn’t AI be perceived in a different light?

Here is a peer-shared article worthwhile to post:

Now, if this were coming from some pulp fiction’s desperate attempt to generate interest, then notice shouldn’t be accorded; but since the source is a top Ivy League learning institution, where does that put the possible humongous action of the AI machine in relation to the mythic pre-sentiment origins of consciousness itself? Maybe more of a reactionary relationship, to tie the possible to what has been ascertained.

Why do you think it is pre-sentiment?