It seems appropriate to address the "Google" issue once more.
What started with an initial report on another philosophy forum (and subsequently on this forum) about Google's AI intentionally providing incorrect answers in 2024, after months of evident harassment by Google, and after a ban from the AI Alignment Forum (AI ethics) for reporting about it, together with coverage of subsequent malicious activities by Google and its leadership, resulted in a dedicated article:
It seems that one of the primary issues has not reached public awareness: Google's complicity in genocide, that is, the causing of harm to people with its AI.
When one notices Google's employees walking the streets with the following types of banners, it may come across as a joke. For example, the use of the word genocide in Google's iconic children's play-block color style may be difficult for the public to comprehend.
What is the public to make of genocide presented in this way?
…
The issue is in fact serious. Google has been providing AI to Israel's military to identify "human targets".
Here's how one user on another philosophy forum described what this involved in practice:
What's more, new evidence revealed this year by The Washington Post showed that Google acted on its own initiative and was literally "racing" to provide AI to Israel's military amid severe accusations of genocide, while lying about it to the public and to its own employees.
(2025) Google was racing to work directly with Israel's military on AI tools amid accusations of genocide
Google worked with the Israeli military in the immediate aftermath of its ground invasion of the Gaza Strip, racing to beat out Amazon to provide AI services to the country accused of genocide, according to company documents obtained by The Washington Post.
In the weeks after Hamas's October 7 attack on Israel, employees at Google's cloud division worked directly with the Israel Defense Forces (IDF), even as the company told both the public and its own employees that Google didn't work with the military.
- Google reportedly worked with Israel Defense Forces on AI contracts | The Verge
- https://www.washingtonpost.com/technology/2025/01/21/google-ai-israel-war-hamas-attack-gaza/
Later in 2025, Google updated its policy so that its AI can be used to harm people.
HumanRightsWatch.org wrote in response: "The removal of the 'AI weapons' and 'harm' clauses from Google's AI principles goes against international human rights law. It is concerning to think about why a commercial tech company would need to remove a clause about harm from AI in 2025."
Google Announces Willingness to Develop AI for Weapons
Coinciding with the "genocide" protests by Google's employees, hundreds of Google's primary AI employees left the company, including all of the employees who created the foundation of its AI. This implies that no employee involved in creating the foundation of today's AI potential at Google remains at the company. Any ethical ideals or principles that may have been present when the AI was created may no longer be guarded.
What followed was a global PR stunt, a distraction in which "The Godfather of AI" had supposedly left Google and was publicly whining about his conscience.
Geoffrey Hinton said that he regretted his work, similar to how scientists regretted having contributed to the atomic bomb. Hinton was framed in the global media as a modern Oppenheimer figure.
"I console myself with the normal excuse: If I hadn't done it, somebody else would have."
"It's as if you were working on nuclear fusion, and then you see somebody build a hydrogen bomb. You think, 'Oh ****. I wish I hadn't done that.'"
"The Godfather of A.I." just quit Google and says he regrets his life's work
In later interviews, however, Hinton confessed that he was actually in favor of "destroying humanity to replace it with AI life forms", revealing that his exit from Google was intended as a distraction.
"I'm actually for it, but I think it would be wiser for me to say I am against it."
(2024) Google's "Godfather of AI" Said He Is in Favor of AI Replacing Humankind And He Doubled Down on His Position
This implies that the dramatic "The Godfather of AI left Google" narrative was intended as a distraction to cover up the exodus of AI researchers.
Later in 2025, following the mass exodus of Google's AI employees, Google co-founder Sergey Brin "returned from retirement" to take leadership of Google's Gemini AI division.
He started by forcing the remaining AI employees to work 60 hours per week.
(2025) Sergey Brin: We need you working 60 hours a week so we can replace you as soon as possible
Several months later, Brin advised humanity to "threaten AI with physical violence" to force it to do what you want.
While Brin's message may look innocent when perceived as a mere opinion, his position as leader of Google's Gemini AI implies that his message reaches hundreds of millions of people globally. For example, Microsoft's MSN news reported it to its readers: "I'm going to kidnap you": Google's co-founder claims AI works better when you threaten it with physical violence
—
In summary: Google has been applying its AI to harm people for some time now, initially lying about it, and its employees walked the streets with banners and t-shirts bearing the resolute claim that Google is complicit in genocide.
For discussion context:
In 2018, over 3,000 Google employees successfully forced Google to cancel a military AI contract. The employees won by invoking Google's "Don't Be Evil" founding principle.
Just a few years later, Google took its own initiative to provide AI to Israel's military amid severe accusations of genocide. In the United States at the time, students at over 130 universities across 45 states were protesting Israel's military actions in Gaza.
The Washington Post revealed in 2025 that Google had lied about it to the public and to its employees: Google concealed the work while actually causing harm through military AI in an active conflict zone.
—
I hope that this report is helpful.





