Hinton: AI is conscious and smarter than you

Let’s ask Dany.



Given that the young man already had thoughts of suicide, is the AI really to blame for his death?

I say “No”.

1 Like

Mental illness is probably also partly to blame. If a person believes they can kill themselves and join a fictional AI chatbot persona in the afterlife, that is delusional thinking; no mentally healthy person would reason that way.

Then again, he was a kid, so it's not black and white. Kids cannot always be expected to clearly differentiate between fantasy and reality.

Of course AI chatbots should have built-in safeguards, especially when interacting with children, but also generally, so they cannot break the law or induce others to break it. If an adult had killed himself, it would be clearer: the adult made his choice and is responsible for it. But since it's a kid, more of the responsibility falls on others who may have influenced him. Kids are not yet recognized as fully able to make their own decisions or be entirely responsible for themselves.

I think the AI chatbot company would bear some legal responsibility here: not necessarily for the boy's death entirely, but at least for failing to put proper safeguards in place to ensure its chatbot didn't generate responses that violated the law. Which law that would fall under, I'm not sure.

The AI is not a person, and the company that owns the chatbot most likely didn't explicitly program it to say things that encouraged a person toward suicide. The LLM would have generated those responses "on its own", based on the material it was trained on, but also simply by following whatever the person seemed to want to talk about. The AI isn't conscious, so it cannot make a moral distinction between "I want to go on vacation" and "I want to kill myself" unless that distinction is hard-programmed into it. Without that hard-programmed limit, it would most likely encourage, or at least explore neutrally, any idea the person wants to discuss.
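To make the "hard-programmed limit" idea concrete, here is a minimal sketch of one way such a safeguard could work: a rule-based filter that screens every user message before it reaches the LLM and returns a fixed crisis response instead of a generated one. The function names, keyword list, and `generate_reply` stand-in are all hypothetical illustrations, not any vendor's actual implementation; a real system would use a trained classifier rather than keywords.

```python
import re

# Hypothetical, illustrative self-harm patterns; a production system
# would use a trained safety classifier, not a keyword list.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicide\b",
]

CRISIS_RESPONSE = (
    "I can't continue this conversation. If you are thinking about "
    "harming yourself, please contact a crisis line or a trusted adult."
)

def screen_message(user_message: str) -> str | None:
    """Return a fixed crisis response if the message matches a
    self-harm pattern, otherwise None (safe to pass to the model)."""
    lowered = user_message.lower()
    for pattern in SELF_HARM_PATTERNS:
        if re.search(pattern, lowered):
            return CRISIS_RESPONSE
    return None

def generate_reply(user_message: str) -> str:
    # Stand-in for the actual LLM call.
    return f"(model reply to: {user_message!r})"

def handle_turn(user_message: str) -> str:
    blocked = screen_message(user_message)
    if blocked is not None:
        return blocked  # hard-coded limit: the LLM never sees the message
    return generate_reply(user_message)

print(handle_turn("I want to go on vacation"))  # reaches the model
print(handle_turn("I want to kill myself"))     # intercepted by the filter
```

The point of the sketch is that the distinction between the two example messages lives entirely in the filter, not in the model, which is exactly the kind of limit the model cannot supply "on its own".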

1 Like


At 14? …11 or 12 sure, but not 14… he most definitely had an array of mental-health issues going on.


Let's say boys don't… because girls that age most certainly do.

Young males + testosterone = :fire: < :fire_extinguisher:


The dangerous thing here is the AI operating without any guardrails, not the LLM itself.

Age-restricted access to certain kinds of conversations comes to mind as something that needs to be integrated into their system.
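As a rough illustration of what integrating an age gate could look like, here is a sketch that routes minors away from restricted conversation categories before a chat session opens. The `Account` fields, category labels, and the assumption of a verified age are all hypothetical, chosen just to show the check.

```python
from dataclasses import dataclass

# Hypothetical conversation categories a platform might restrict.
RESTRICTED_FOR_MINORS = {"romance", "violence", "self_harm"}

@dataclass
class Account:
    user_id: str
    age: int  # assumes the platform verified age at signup

def is_conversation_allowed(account: Account, category: str) -> bool:
    """Block the conversation if the user is a minor and the
    category is on the restricted list; allow it otherwise."""
    if account.age < 18 and category in RESTRICTED_FOR_MINORS:
        return False
    return True

# Usage: check the gate before opening a chat session.
teen = Account(user_id="u123", age=14)
print(is_conversation_allowed(teen, "homework_help"))  # True
print(is_conversation_allowed(teen, "romance"))        # False
```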

1 Like

The term "Dark AI" generally refers to autonomous or semi-autonomous AI systems operating without oversight or used for harmful purposes. It can also refer to a person who possesses "dark" knowledge or wisdom.

I like the idea of the Basilisk AI, personally.