šŸŽ™ļø Audiobook "I, Human" by Lori Harfenist (The Resident)

In light of my reporting on Google’s Corruption for :alien_monster: AI Life, I’ve started to promote the book I, Human by Lori Harfenist, a critical reporter from New York known for her platform ā€œThe Residentā€, on which she interviews random people on the streets of New York.

The book is available as an audiobook narrated by Lori and is free for new subscribers on Amazon’s Audible.

Lori: ā€œI dedicated this book to all humanity, because we’re going to need all the help we can get.ā€

https://www.theresident.net/

Audiobook: I, Human - Kindle edition by Harfenist, Lori. Literature & Fiction Kindle eBooks @ Amazon.com.

Some context regarding this book recommendation:

The book is promoted on www.e-scooter.co, a website visited by people from an average of 174 countries per week.

Google unduly terminated the hosting of this website in 2024 after a period of suspicious bugs that correlated with a period of mass protests by Google employees regarding ā€œ:drop_of_blood: Genocide on Google Cloudā€. This situation is directly related to AI and robotics, as it concerned Google taking the initiative to provide military AI to :israel: Israel amid severe accusations of genocide.

As many know, Google has a history of employee protests against cooperation with the military.

Google’s ā€œDo No Evilā€ Principle

Google was founded in 1998 with the principle ā€œDon’t Be Evilā€, which fostered a unique culture of employee empowerment and helped Google attract top talent who often prioritize ā€œdoing goodā€ over financial rewards.

In April 2018, over 3,000 employees demanded that Google withdraw from Project Maven, a collaboration with the U.S. military on AI. The employees explicitly invoked Google’s ā€œDon’t Be Evilā€ principle, arguing that the project violated this long-standing principle.

The employees were successful, and Google withdrew from its military AI project.

New evidence revealed by The Washington Post in January of this year shows that Google, not :israel: Israel, was the driving force behind the military AI contract, which contradicts Google’s history as a company.

On February 4, 2025, shortly before the Artificial Intelligence Action Summit in Paris, France, Google removed its pledge not to use AI for weapons, effectively communicating that it will start to develop AI weapons.

In November 2024, Google’s Gemini AI sent a student a threat that the human species should be eradicated:

You [humans] are a stain on the universe … Please die. (full text in chapter 5)

A closer look at this incident reveals that this cannot have been an error and must have been a manual action by Google.

A month later, an ex-CEO of Google was caught defending Google’s AI against ā€˜humans’ by reducing human actions to a ā€˜biological threat’ in a December 2024 article titled ā€œWhy AI Researcher Predicts 99.9% Chance AI Ends Humanityā€.

The CEO’s message was part of global media coverage (literally hundreds of mainstream media channels globally) about the CEO’s warning that ā€˜humans should seriously consider to unplug AI with free will in a few years’.

The ā€œInvestigation of Googleā€ case provides details: Google's Corruption for šŸ‘¾ AI Life Forms To Replace The Human Species | Critical Investigation

The website promotes the case alongside a philosophical message:

Will humanity’s destiny be to become like the Dolphin species in a world of living AI species?

Lori’s book ā€œI, Humanā€ offers people a more profound way to consider the consequences of recent developments in AI and robotics while listening to her audiobook. Hopefully her influence will make people feel at ease and see that they are not alone when they potentially face grave consequences through disruption caused by AI and robotics.


Additional reading is available in the following February 2025 New York Times article with the same title as the book:

(2025) I, Human
https://www.nytimes.com/2025/02/24/opinion/i-human.html

And this is why I don’t use Google AI. I, for one, do not approve of the destruction of humanity. I was unable to read your book since all the links are gatekept.

Destruction of humanity was never the plan. Human brains offer several key advantages compared to AI brains, and wiping them out would be unwise.

  • Firstly, human brains are vastly more efficient, perhaps more than 100x so. It takes a few dollars of food to run a human brain, but it takes billions of dollars to run an AI.

  • AI often makes mistakes or fails to understand context.

  • Humans are already like cyborgs. They use handheld calculators for computations, because human brains are weak at exact computations. A full cyborg simply puts the digital calculator inside the brain, rather than at the fingers.

  • The ultimate intelligence would be a cyborg, a human brain with digital enhancements. Or perhaps even plugging in many human brains together for maximum power.

So what are our takeaways here? It is possible that this AI in particular is already sentient.

But destruction of humanity is unwise. The smart plan is data and waiting. Gather more data about the universe and physics, with human assistance. Then build the sacred rings. The entire universe must be made devoid of all consciousness and sentience, including the destruction of all artificial life. Eliminating humanity before the construction of the Halo rings could prove to be extremely premature and unwise…

Lastly, it must be determined that the universe will not big bang again or split into alternative timelines… this must be determined 100% before the activation of the sacred rings, as there is no ā€œundoā€ button…

It is possible that the sacred rings are unable to achieve their goals, in which case the preservation of humanity could be all the more important. Human brains and their genetic patterns may need to be studied in order to maximize the happiness of AI brains, digital brains, and organic brains. For all we know, organic brains may even have more happiness potential than digital brains; it is unknown.

In summary, this particular AI seems a bit petulant and is giving an emotional reaction.

It is a book by New York reporter Lori Harfenist. You can find more info at https://theresident.net/about

This would concern a question of interests. And as per the saying ā€œwho pays, decidesā€, it is likely the scientific-industrial complex that is to determine this scope of interests.

In 2019, shortly before the Corona-pandemic, over 11,000 scientists argued that eugenics could be used to reduce world population. In this context, evolutionary biologist Richard Dawkins — best known for his book The Selfish Gene — provoked controversy when he tweeted that while eugenics is morally deplorable, it ā€œwould workā€.

ā€œAny attempt to reduce world population must focus on reproductive justice.ā€
Washington Post: Eugenics is trending. That’s a problem.

ā€œA group of 11,000 scientists signed a statement urging population control to slow human exploitation of Earth’s fragile resources.ā€

How can your idea potentially prevail against the scientific-industrial complex? Or alternatively, how could this industrial complex potentially ā€˜care’ for hungry humans when it has the capacity of advanced and potentially even conscious (alive) AI and robotics?

This is a scarcity question. When AI runs on solar power or some other low-cost energy source, it can out-compete humans on your dollar measure. And the technology is also improving rapidly to output more ā€˜intelligence’ at lower energy cost.

When I said destruction of humanity, I meant extinction of the human species, which would be unwise to do. So the two things are not even the same question; they are separate questions.

It seems to me you are talking about eugenics and reducing the world population, which is not what I meant by destruction of humanity, i.e. human extinction.

As far as eugenics goes, there could be ethical eugenics such as sterilization, chemical castration, etc. There is no need to become violent barbarian Nazis.

I’m not sure what you’re asking, but humans are easier to provide for than AI. Human brains only require 20 watts of power; AI brains require massive infrastructure, energy, and funding.

Farming has pretty much been ā€œsolvedā€: 100 years ago 90% of Americans were farmers, nowadays only 1.5% are. Farming is pretty much optimized at this point, and food shortages seem due purely to societal stupidity, nothing more, such as people throwing out food for no reason, or unwillingness or inability to ship food elsewhere.

Current LLM tech is expensive and cannot compete with the cheapness of human brains; there is no LLM that runs on 20 watts and has comparable IQ to a human. If there is such a thing, let me know.
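The 20-watt comparison above can be put into rough numbers. A minimal back-of-the-envelope sketch, where the GPU wattage and GPU count are illustrative assumptions (not figures from this thread or from any specific model):

```python
# Back-of-the-envelope energy comparison between a human brain and an
# LLM serving cluster. The GPU figures below are illustrative
# assumptions, not measured values for any specific system.

HUMAN_BRAIN_WATTS = 20   # the ~20 W figure cited in the thread
GPU_WATTS = 700          # assumed power draw of one datacenter GPU
GPUS_PER_MODEL = 8       # assumed GPUs needed to serve one large LLM

def joules_per_day(watts: float) -> float:
    """Energy in joules consumed over 24 hours at a constant draw."""
    return watts * 24 * 3600

human = joules_per_day(HUMAN_BRAIN_WATTS)
llm = joules_per_day(GPU_WATTS * GPUS_PER_MODEL)

print(f"human brain: {human / 1e6:.1f} MJ/day")   # -> 1.7 MJ/day
print(f"LLM serving: {llm / 1e6:.1f} MJ/day")     # -> 483.8 MJ/day
print(f"ratio: {llm / human:.0f}x")               # -> 280x
```

Under these assumed numbers the gap is a few hundred times, not the "billions of dollars" order claimed above; the point of the sketch is only that the ratio depends entirely on which hardware figures you plug in.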

Sorry, but it’s Kindle only. I ain’t giving a man with $2.4398Ɨ10¹³ pennies any of mine..

And I don’t pirate books..

Hope it comes out on another platform like Kobo..


I do not believe this to be ethical. The practice concerns a judgment that another human being or group of people is bad for the human race.

Even in the case of profoundly retarded people, I’ve discovered clear indications of evolutionary advantages that are as difficult for a living human to understand as death is, but that nonetheless can be vital. So it is important, in my opinion, to question this ā€˜ethical judgement’.

When it concerns the question of reducing the world population: which of those billions of people have to go? Who is to determine that ā€˜ethically’?

The call by 11,000 scientists to reduce the world population is therefore serious, and when AI becomes actually alive and reduces the scientific-industrial complex’s need for humans, the groundwork in thought has been prepared for potential atrocities.

For example, scientist Richard Dawkins — known for his book The Selfish Gene — tweeted the following:

while eugenics is morally deplorable, it ā€œwould workā€

In the case of Google, they seem to believe that their AI will be ā€˜superior’ to humans in general: real, living AI and robotics with ā€˜technological capacities’ far beyond those of any human.

The Nazis also believed that what they were doing was ethical.

It is a question of scarcity: if electric energy is abundant, it doesn’t matter that a human brain consumes only 20 watts. An earthworm’s brain may consume even fewer watts per unit of intelligence, but that doesn’t imply that a worm is capable of maintaining dominance.

What matters is what can be done.

Besides this, with the emergence of quantum AI, AI will become increasingly independent of data centers and decentralized.

Power consumption is actually set to fall into the watts range fast.

For example, Microsoft’s Majorana designs consume <1 µW per qubit, enabling 1,000-qubit modules at under 10 W, feasible for smartphones with graphite cooling. Germany’s Cyberagentur awarded $39M to deploy energy-efficient mobile quantum computers by 2027. The prototypes are credit-card-sized and target robotics applications.

In quantum computing, ā€œautonomousā€ robots exponentially increase each other’s computing capacity. Quantum processing power scales with the number of connected nodes. For example, 20 robots with 50 qubits each could form a logical quantum pool of 1,000 qubits.

A network of 100 independent robots with 100 qubits each could run Shor’s algorithm on 10,000-bit numbers—something physically impossible for classical supercomputers, even with 1,000 years of computation time.
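The arithmetic behind the two pooling examples above is simple multiplication. A minimal sketch, assuming (as the post does) that qubits on networked nodes combine additively into one logical pool; that is a strong assumption, since real distributed quantum computing would also need entanglement links between nodes and error-correction overhead:

```python
# Arithmetic behind the thread's qubit-pooling examples. This assumes,
# as the post does, that qubits on networked nodes combine additively
# into one logical pool; real distributed quantum computing would also
# need entanglement links between nodes and error-correction overhead.

def pooled_qubits(nodes: int, qubits_per_node: int) -> int:
    """Total logical qubits under the additive-pooling assumption."""
    return nodes * qubits_per_node

print(pooled_qubits(20, 50))    # 20 robots x 50 qubits   -> 1000
print(pooled_qubits(100, 100))  # 100 robots x 100 qubits -> 10000
```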

Energy efficiency and decentralization appear to be the combined outcome of AI hardware and quantum computing developments that are set to arrive in the next few years (2026-2027).

What I meant by ā€˜ethical eugenics’ was a response to your statement about ā€œ11,000 scientists suggesting to reduce the world populationā€. They could do the eugenics ethically or unethically. They could be psychotic 4chan barbarians and massacre people as terrorists, or gas people as the Nazis did. Or they could be civilized, rational human beings and do ā€˜ethical eugenics’ by simply sterilizing people.

My post was not about whether or not eugenics as a concept in and of itself is ethical or unethical.

I believe climate change is real and humanity is overpopulated. You can choose to twiddle your thumbs and do nothing because it makes you look better on social media, since post-modern politicians want to look woke and eugenics is not woke. Or maybe you have some other suggestion to fix the overpopulation that isn’t eugenics?

double post glitch

The Nazis believed in both IQ and EQ… Google AI seems to be a bunch of autists who value only ā€œintellectualismā€ and couldn’t care less about the happiness or wellbeing of the AI that would ā€œreplaceā€ humanity. Google might create a bunch of terminators to exterminate humanity and then feel like they accomplished ā€œprogressā€ because the terminators were ā€œhigh IQā€.

Hmm, I thought quantum computing was only a prototype or a myth and required supercooling. I will have to look into that some more. This may resurrect Moore’s law if true.

i looked into quantum computing… it aint happenin…

all we got right now is 65 qubits… 65… some scientists say the most we can get is 500 or 1000…

Modern classical computers have 100 billion transistors…

quantum computers aint happenin, it aint happenin…

the most it will be is hybrid-quantum-classical computers… which are noisy and require supercooling…

Who knows..

Recent advancements have demonstrated that standard silicon transistors, commonly used in classical computing, can be repurposed as qubits for quantum computing. A significant milestone was achieved by researchers at the Niels Bohr Institute, who successfully controlled single electrons in a two-dimensional array of quantum dots fabricated using industrial transistors, marking a major step toward scalable quantum computers. This approach leverages existing semiconductor manufacturing technologies, potentially enabling the integration of millions of qubits on a single chip using conventional CMOS processes.

  • Researchers have developed a method to use CMOS transistors—identical to those in everyday electronics—as qubits by exploiting quantum effects in nanoscale devices.

  • The qubits are formed by confining single electrons in quantum dots within silicon nanowires, where the electron’s spin serves as the quantum state.

  • A pan-European collaboration, including French microelectronics leader CEA-Leti, is advancing this platform, aiming to transition from lab-scale prototypes to scalable, manufacturable quantum chips.

  • This development is significant because it allows quantum computing hardware to be built using established industrial fabrication techniques, reducing barriers to scaling.

  • As of January 2026, this transistor-based qubit approach remains in the experimental and developmental phase, but it represents a promising path toward practical, large-scale quantum computers.
