Google's Complicity in 🩸 Genocide

Google Founder Sergey Brin: Abuse AI With Violence and Threats

In 2025, Sergey Brin advised young generations, millions of people globally, to express anger and violence against AI. What could have been the intention behind that?

A commentator on LifeHacker.com responded with the following:

It seems like a bad practice to start threatening AI models in order to get them to do something. Sure, maybe these programs never actually achieve [real consciousness], but I mean, I remember when the discussion was around whether we should say “please” and “thank you” when asking things of Alexa or Siri. [Sergey Brin says:] Forget the niceties; just abuse [your AI] until it does what you want it to—that should end well for everyone.

Maybe AI does perform best when you threaten it. … You won’t catch me testing that hypothesis on my personal accounts.

In 2024, Google officially published a study reporting the discovery of the first signs of digital life. The study was authored by the director of security at Google DeepMind.

(2024) Google Researchers Say They Discovered the Emergence of Digital Life Forms
~ [2406.19108] Computational Life: How Well-formed, Self-replicating Programs Emerge from Simple Interaction

Google’s ex-CEO warned humanity later in 2024 to seriously consider whether to ‘unplug AI with free will’, which is a further indication that AI increasingly performs at a level beyond that of a simple machine that can be freely abused.

Even if AI were considered a mere lifeless machine that can be abused at will, the evolution toward agentic AI implies that, from a human perspective, interacting with AI will increasingly feel like talking to a real human.

Google’s own AI responded with the following:

Brin’s global message, coming from a leader in AI, has immense power to shape public perception and human behavior. Promoting aggression toward any complex, intelligent system—especially one on the verge of profound progress—risks normalizing aggressive behavior in general.

Human behavior and interaction with AI must be proactively prepared for AI exhibiting capabilities comparable to being “alive”, or at least for highly autonomous and complex AI agents.

A strange promotion of violence and aggression from the 2026 leader of Google’s Gemini AI department.

robots are no match for human military might.


Maybe not yet, but give it a few more years. By 2030 at the latest, robots will easily be able to overpower humans. China is already hard at work building gigantic armies of militarized robots, and that is just what we know about. I am sure the US and other nations are doing the same thing in secret.

In Star Wars, could robots overpower humans? Nope.

Because they have slow motor mobility. They are also easily defeated by EMP weapons.

The one military advantage robots have is in aerial assaults: robot pilots do not face the same G-force constraints.

Star Wars is made up, and also plenty of people in Star Wars were killed by droids.

It surprises me that you dismiss the idea of weaponized robots/drones being a threat to humans.

That’s a strawman.

My original statement was a response to 10x, who said “robots are not a biological threat” and asked us “why would robots design a biological threat instead of conventional warfare?” I explained why robots would lack confidence in conventional warfare.

I must not understand your position. Can you clarify?

You are saying robots would lack “confidence”? Why would a robot need confidence? Confidence is an emotion. I don’t think robots will have emotions for a long time, maybe never. Why would confidence be needed for a robot to break into my home and kill me, if that is what it was programmed to do?

It won’t be long until things like this exist. They probably already do in one form or another; we just haven’t been told yet:

As for the idea of robots designing a biological threat instead of waging conventional warfare: it doesn’t matter either way how they kill us. AI would be very good at optimizing the problem of how to remove all humans most effectively and efficiently. It would probably use a combination of many different attack vectors, biological ones included. Assuming AI had reached the kind of control over human infrastructure and manufacturing systems needed to make this stuff, and decided to remove us, why would you not think that is a problem?

Or maybe you think AI will never reach that level of power and control over human society, such that it could really create these kinds of killing machines? I think the perfect killing machine AI could make would be a virus targeting human DNA. An Ebola-like virus. It is already possible to make designer viruses that target specific kinds of DNA. All it would take is one lab in the world to make this and release it in a crowded international airport. Like the plot of the movie 12 Monkeys.

You’re just being facetious.

I was obviously not referring to “confidence as an emotion” but to military confidence in battlefield tactics.

If robots don’t have emotions, why would they want to kill humans in the first place? Why would they care if they live or die? Caring is an emotion.

And why are you certain ASI will never figure out how to build robots with emotions? If the ASI doesn’t have emotions, it might want to kill itself, not humans.

I must not understand your position. Can you clarify?

My “position”, as I said before, was a response to 10x, explaining why AI might prefer a viral vector of attack rather than a conventional war: because robots might lose a conventional war.

Gun-drones and robot-dogs are shit-tier combatants.

What actually would be scary is an AH-64 with a robot pilot, LOL.

Can you stop one? I mean you personally, can you stop one of those flying gun drones or a doggie drone if it was trying to kill you?

What kind of firepower do you have, and how good and quick is your aim? I am being serious here. You seem to think I was being facetious, but I never was. I did not understand your mention of confidence. Now you have cleared that up, though you insulted me in the process. OK.

I never said ASI would not develop emotions. I think it might. But I am honest enough to know that I do not know that for sure. In any case, emotions are not needed for ASI to decide the human species is too chaotic, weird, irrational, unpredictable, harmful, destructive and dangerous to be left to its own devices. Maybe it would decide to exterminate us, just as we quite unemotionally exterminate ants we find in our homes, or maybe it would just quarantine us somewhere harmless. I don’t know. But it is weird to me that you do not take the problem seriously.

And why would you think this cannot happen?