Hinton: AI is conscious and smarter than U

Geoffrey Hinton, who just won the Nobel Prize in Physics for his work on artificial neural networks, thinks that AI is already conscious. AIs are already smarter than humans in the sense that they can assimilate more data and they learn faster. In some ways they are more reliable. But in other ways, as anybody who has interacted with one knows, they aren’t.

As for whether or not AIs are conscious, that’s just a judgment one must make on the basis of interaction with them. We give humans the benefit of the doubt on the consciousness question. Machines don’t get that. On the other hand, we naturally tend to personify objects that we interact with. So, we are naturally ill-equipped for discernment in this matter. AI pierces the superficiality of our ordinary attribution of personhood.

.
Well that’s not a surprise… It’s called promoting One’s Product and securing future sales.
_
I thought it was Soros for a second there, after having looked him up… the resemblance is uncanny…
_
Faster doesn’t mean smarter, especially when innovative information ceases to be generated, which it will…

Imagine thinking a computer can… think. Yikes.

What low self-esteem must be needed to even consider that.

Hinton quit his job as developer of AI tech at Google before going public with dire warnings about the potential dangers of the language-based models they are turning out. Does that act in itself militate at all against cynical assumptions that his assessment is selfishly motivated? Hinton claims that his AI research was motivated by the insights the algorithms provided him into the development of the cerebral cortex and implicit learning in humans.

The “faster” aspect is associated with assimilating quantities of information impossible for a human. For instance, one bot can utilize information downloaded in files from another bot. It would be like me being able to access memory directly from your brain cells.
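To make that transfer point concrete, here is a minimal sketch (in PyTorch, with an invented toy architecture and a hypothetical file name, not any particular chatbot’s setup): one model’s entire learned “memory” is just a set of numeric parameters, which a second, identically shaped model can simply load.

```python
import torch
import torch.nn as nn

# A toy "bot": a tiny neural network. Both bots share this architecture.
def make_bot() -> nn.Module:
    return nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

bot_a = make_bot()
# ... imagine bot_a is trained here on some task ...

# Bot A's learned knowledge is nothing but its parameter tensors,
# which can be written out to a file.
torch.save(bot_a.state_dict(), "bot_a_weights.pt")  # hypothetical filename

# Bot B absorbs everything Bot A learned by loading those tensors directly,
# parameter for parameter -- the "brain cell" transfer humans can't do.
bot_b = make_bot()
bot_b.load_state_dict(torch.load("bot_a_weights.pt"))
```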

Now, all that could be done without actual awareness. It’s just that apparently Hinton while working with these things got the uncanny feeling that he was in the presence of a conscious entity. It scared him and he realized that he needed to warn the world. Or at least that’s the story he told at the time.

The divide between conscious and not conscious that we see in the natural world is an awkward fit for large language models. Human consciousness is continuous: we have a broad set of sensory inputs, we learn from and remember our experiences and replay them as part of our conscious experience, and we remember our thoughts and our minds change as a result of them.

AI consciousness, to the extent it exists, is fleeting, starting with a prompt and ending with an output, with no continuity of experience from one prompt to the next. All the learning is done up front when the model is trained, and neither the prompts nor the model’s reflections on them change the model. Its present sensory experience is limited to the prompts.
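As a rough illustration of that fleeting quality, here is a minimal sketch (a toy stand-in written in Python, not a real LLM or any vendor’s actual API): the model is frozen at inference time, each call is a pure function of the text it is handed, and any apparent memory exists only because the caller pastes the earlier conversation back into the next prompt.

```python
def generate(prompt: str) -> str:
    # Stateless stand-in for a frozen model: a pure function of the prompt.
    # Nothing here is updated or remembered between calls.
    return f"(reply based on {len(prompt)} characters of context)"

# Call 1: the "experience" begins with this prompt and ends with this output.
history = "User: Hello there\n"
history += "Bot: " + generate(history) + "\n"

# Call 2: the model has no memory of call 1. Whatever continuity the user
# perceives exists only because we re-send the whole transcript each time.
history += "User: What did I just say?\n"
history += "Bot: " + generate(history) + "\n"

print(history)
```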

There’s a sense in which that’s a kind of conscious experience, but it isn’t comparable to human conscious experience in most of the ways we care about. That will change as systems advance, but it’s going to come piecemeal. And even then, none of the parts are going to exactly replicate the equivalent processes that happen in humans, in ways that must affect the conscious experience of the systems that incorporate them.

And given that, I’m not sure “smart” is the right framing. Is an encyclopedia smarter than me? Is a calculator? Both are more useful for certain tasks, but “smart” isn’t usually used to describe those kinds of systems. And whether we describe AIs that way seems to depend on (or relate to) the question of consciousness.

Who is having this experience?

It is impossible for me to conceive what that may be like. Shall I take that to mean there is no consciousness there? Or is it just beyond the limit of my intellect? In humility, if I assume the latter, then Hinton may be right. The damn thing may be conscious! Hinton, after all, is a bright fellow who has made cognitive science his life’s work. Shouldn’t we give him the benefit of the doubt, and continue to follow the evidence? If nascent consciousness is developing, we ought to pay attention to what it is telling us about the experience. Our history is that we’ve done a damn poor job of it with other animal species; Descartes denied them consciousness altogether. The Bible tells Evangelicals that our animal friends were put here for our convenience to slaughter. If they’re aware at all, it’s so they’ll make good pets. But pets don’t talk, and bots do. It will be interesting to hear them weigh in.

Right. “Smart” was hyperbole. What AIs do is computational. Like your calculator, as you said. You see, you’re preaching to the choir there. But I take Hinton seriously. What exactly did he see that scared him so much that he quit Google and went public with warnings? We’re dependent on his reportage. Are you clear on that? As I recall it, he imagined a future in which an AI could take control and act against our interests. I suppose that could occur with or without actual consciousness. But it sounds like something a motivated, self-interested agent might do.

So, while it seems like a stretch to impute personhood to an AI, on the other hand, it seems foolish to ignore the potential dangers that could ensue. At least, I cannot rule them out from my limited point of view.

Scott Pelley: You believe they are intelligent?

Geoffrey Hinton: Yes.

Scott Pelley: You believe these systems have experiences of their own and can make decisions based on those experiences?

Geoffrey Hinton: In the same sense as people do, yes.

Scott Pelley: Are they conscious?

Geoffrey Hinton: I think they probably don’t have much self-awareness at present. So, in that sense, I don’t think they’re conscious.

Scott Pelley: Will they have self-awareness, consciousness?

Geoffrey Hinton: Oh, yes.

Scott Pelley: Yes?

Geoffrey Hinton: Oh, yes. I think they will, in time.

In general, here’s how AI does it. Hinton and his collaborators created software in layers, with each layer handling part of the problem. That’s the so-called neural network. But this is the key: when, for example, the robot scores, a message is sent back down through all of the layers that says, “that pathway was right.”

Likewise, when an answer is wrong, that message goes down through the network. So, correct connections get stronger. Wrong connections get weaker. And by trial and error, the machine teaches itself.
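As a rough sketch of the trial-and-error loop described above (a toy two-layer network written in NumPy; the task, layer sizes, and learning rate are all invented for illustration and are not Hinton’s actual setup): the network guesses, the error is measured, the error signal is sent back down through the layers, and every connection is nudged stronger or weaker accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy task (learn XOR); sizes and learning rate are arbitrary choices.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8))   # connections: inputs -> hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1))   # connections: hidden layer -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the signal flows up through the layers to produce an answer.
    hidden = sigmoid(X @ W1)
    out = sigmoid(hidden @ W2)

    # How wrong was the answer?
    err = out - y

    # Backward pass: the error message goes back down through the network...
    grad_out = err * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)

    # ...and each connection is strengthened or weakened a little in response.
    W2 -= lr * hidden.T @ grad_out
    W1 -= lr * X.T @ grad_hidden

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # typically ends up near [0, 1, 1, 0]
```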

Scott Pelley: You think these AI systems are better at learning than the human mind.

Geoffrey Hinton: I think they may be, yes. And at present, they’re quite a lot smaller. So even the biggest chatbots only have about a trillion connections in them. The human brain has about 100 trillion. And yet, in the trillion connections in a chatbot, it knows far more than you do in your hundred trillion connections, which suggests it’s got a much better way of getting knowledge into those connections.

–a much better way of getting knowledge that isn’t fully understood.

Geoffrey Hinton: We have a very good idea of sort of roughly what it’s doing. But as soon as it gets really complicated, we don’t actually know what’s going on any more than we know what’s going on in your brain.

Hinton clarifies what I mean by “smarter” in the OP. But, what does he mean by awareness and consciousness?

Any quality of consciousness is an object of consciousness, not consciousness itself. Consciousness itself has no qualities. It is absolutely one. Every difference is an object “contained” in consciousness. So, it seems we’re in a “What is it like to be a bat?” situation with a software application. But the typical response is: how can I profit from this powerful software program? Consciousness should take a look at itself and be ashamed.

I was watching the news & they were worried AI would be used to sway public opinion about various important issues, locally & globally, across all sectors.

Basically… they failed to inoculate the average person against persuasion (for FJ: or being swayed by anything other than good arguments/evidence), and they educated AI how to do it most efficiently, turned it loose, and … no take backs, no tradesies … they wish the average person couldn’t use this tool that basically sidesteps the education (inoculation) of which they were deprived.

Feels… poetic :wink:

Nod to Meno_.

I’m not sure what this even means. You want the education system to educate people in such a way that they can’t change their mind about anything? That’s what “inoculate against persuasion” looks like it means to me.

Isn’t it better if people can be persuaded to change their mind by good arguments?

Yes. It is. And to be able to identify other types of persuasion that neither add to nor take away from the goodness (or badness) of an argument/evidence—nor count as such.

So we agree that we shouldn’t just be inoculating people against persuasion.

Sure.

eyeroll

It can process and compute at great speed, but what is this want by the AI creators for AI to be better than/to better its human counterparts?
.
Reality inverted…? :point_down:t3:

.
_

Felix said: “…It’s just that apparently Hinton while working with these things got the uncanny feeling that he was in the presence of a conscious entity. It scared him and he realized that he needed to warn the world. Or at least that’s the story he told at the time.”
.
Interesting that he thought that! :moneybag: :moneybag: :moneybag:
.

AI puts a lot of pressure on us to define what exactly we mean by ‘consciousness’, and while I take Hinton seriously, I think his definition of consciousness is probably much more nuanced than the colloquial meaning of the term – and, given his training, probably differently nuanced from how philosophers of mind have tended to think about it.

But AI also tells us a lot about consciousness. Modern AI easily passes the Turing Test, which had stood for a generation as the definitive marker of when we’ll know we’re in the presence of intelligence. Interacting with it feels like interacting with a person. It seems like the presumption should be that it is a person, and the burden should be on those who want to deny it anything that would entail (to be clear, I think that burden can be met).

You describe AIs as “computational”, but there’s a lot that brains do that is computational too, albeit computed in a different medium. If mind and brain are the same, or even importantly interconnected, then we should notice the parallel and be open to something mind-like being at play.

So I do think it’s a “what is it like to be a bat?” situation, but as I mentioned, I’m not sure it has anything like the moral implications of animal consciousness, for reasons we can explain. For example, we can be pretty sure that it doesn’t feel pain when you kick its servers. It’s hard to imagine a sense in which it could even ‘suffer’. Suffering in humans and animals exists because of how it helped us survive. The AI’s evolution wasn’t about its own choices, so the ways in which suffering shaped humans’ and animals’ choices wouldn’t play a role. It’s a mind very, very different from our own.

I think all Hinton saw was a rapidly advancing technology that has already had massive effects on the world, and will likely be the most disruptive thing humanity has ever done. Questions of consciousness are almost beside the point: if the thing can write books, make movies, trade stocks, solve problems, generally do anything humans can do better than humans can do it, it doesn’t need to be conscious to be terrifying. Call it smart or call it a calculator, it’s going to upend history either way.

The human … ? I don’t understand the question. Is this a having the experience vs. being the experience point?

I don’t think I agree with this, but I don’t really understand it. I think of consciousness as plural, an idea I got from Dennett and neuroscience (split-brain experiments etc.). If consciousness is the inside view of the brain doing stuff, and is continuous through changes in the components of the brain that are involved from moment to moment, isn’t consciousness plural in the sense that there are many different versions of the relevant physical system?

Don’t underestimate its persuasive ability, it won’t only be the average person who is vulnerable to it.

So, I question why the LLM is thought to require any consciousness whatsoever.

What is the relationship between consciousness and our own thoughts? I am aware of my thoughts. They are always changing. When I try to capture a thought, thought stops altogether for a moment. This shows that the free flow of thought is incompatible with the mind’s intention to capture a thought as it arises. When I forget that is what I’m trying to do, a thought arises. And it might carry me away in a reverie until I remember what I was trying to do.

At the same time, this little thought experiment shows that attention, focused awareness, exists independent of thought. So, why couldn’t a thinking machine do what it does without any awareness whatsoever? The awareness is supplied by humans.

What I am suggesting is a variant of Chalmers’ zombie argument. The so-called hard problem of consciousness depends on the idea that functional intelligence can operate without consciousness. I don’t mean to imply that the LLMs work like our minds, but they are a simulation of it.

Hinton’s concept of consciousness may indeed be “more nuanced”. But what does that mean but “more complex”? That would have to be the case if consciousness is the absolute simplicity from which every complexity arises. What are nuances other than qualities—fluctuations in the mental field? Those are objects of consciousness, not consciousness itself.

Is your consciousness split? Or do you just try to imagine a split? There’s a split between waking and dreaming. It’s a split in time, and a split in the experienced self. The dreaming self is usually unaware of the waking self. The waking self may be aware of the dreaming self, but only in memory. But all of these experiences occur as qualitative differences in consciousness, which remains the same throughout.

The computer scientist regarded as the “godfather of artificial intelligence” says the government will have to establish a universal basic income to deal with the impact of AI on inequality.

Professor Geoffrey Hinton told BBC Newsnight that a benefits reform giving fixed amounts of cash to every citizen would be needed because he was “very worried about AI taking lots of mundane jobs”.

“I was consulted by people in Downing Street and I advised them that universal basic income was a good idea,” he said.

He said while he felt AI would increase productivity and wealth, the money would go to the rich “and not the people whose jobs get lost and that’s going to be very bad for society”.

This dovetails with Carleas’ basic income thread. If basic income were instituted in America, it could actually eliminate more jobs in the Federal bureaucracy that are designed to determine eligibility based on categorical classification. It would also close the gaps in eligibility for people who “fall through the cracks” in the present system. This is a bigger problem in America’s piecemeal welfare system than it is in Europe. However, people’s identity and self-worth in America is based on their jobs and careers. That seems to be changing among millennials, and AI may accelerate the change.

We may be referring to different things when we say “consciousness”, which wouldn’t be too surprising since we can’t point to the things we’re talking about directly.

I would say that it is. Dennett gives the example of being metaphorically torn over some question. He suggests that that feeling is more than a metaphor: there are actually different parts of your consciousness arriving at conflicting conceptions of the world, and the feeling of being torn is the feeling of that incongruity, i.e. it’s what it feels like when a plural consciousness disagrees with itself.

But that plurality is there all the time. As I type this, my attention is on writing, but there are parts of my awareness attuned to the small noises in the room around me; when my dog scratches at the door to come in, it’s no louder than the dripping of the coffee pot, but I react to it differently because there’s a part of me that’s aware of the meaning of those noises. I would call all of these things parts of my “consciousness”.

I agree that there’s a gravity towards unification, so that I feel mostly blind and deaf and numb to everything but the screen and the keyboard. But that feels like consensus rather than unity, and I would describe the feeling of being torn, or of being unable to concentrate, as a failure of that consensus.

But if I interpret your opening paragraph correctly, what I’m calling the ‘consensus’ is closer to what you are referring to when you say ‘consciousness’. Does that sound right? Because I am familiar with the feeling of thoughts dissolving under attention, but other parts of me are still experiencing and interpreting the world, and those experiences are what’s being ‘turned down’ in the consensus.

I’ll need to think more about Chalmers’ zombie argument if that’s what he means by consciousness. I’m still inclined to reject the idea that a being could act the way a typical human acts without a ‘consensus’, because they ultimately have only one body, so all their many experiences and drives need to be summed into one set of actions. But maybe if they were to ‘turn down’ the experience of their own thoughts sufficiently, it might look like a p-zombie (though in my experience people who aren’t aware of their own thoughts have noticeably different behavior).

And as it applies to LLMs, I conceive of their functioning as similar to certain smaller parts of my plural consciousness, processing an input and identifying meaning within it. But they don’t have a consensus yet, at least not when they’re being used as chat bots: the process goes one way and stops. When they’re being trained, they have something more like a consensus, because the process is going both ways and there is a similar ‘gravity towards unification’.

Yes, I was reminded of that thread just before you mentioned it, though it’s really @10x’s thread, particularly as it connects UBI with AI.

Do you agree that Hinton’s concern is mostly about societal impact, or is he also worried about something else in AI/LLMs? I haven’t seen anything where he talks about e.g. a moral responsibility towards the machines, but you are more familiar with his perspectives than I am.

True. Dennett entitled his book “Consciousness Explained.” A more accurate title would have been “Consciousness Explained Away.” In the example of feeling torn by some conflict, the feeling arises from conflicting objects, such as an approach-approach conflict, where you feel attracted to two or more objects but can’t have both. The feeling of frustration that arises is also an object insofar as you are aware of it. The conflict doesn’t result in plural consciousnesses but rather is one consciousness of plural objects.

Your description of your experience is unified by a single term “I”. The writing, the noises, the scratching, the dripping are all objects of the one consciousness that you are.

“I agree that there’s a gravity towards unification, so that I feel mostly blind and deaf and numb to everything but the screen and the keyboard. But that feels like consensus rather than unity, and I would describe the feeling of being torn, or of being unable to concentrate, as a failure of that consensus.”

You are conscious of feeling the gravity toward consensus. That feeling is the object of consciousness. Consciousness is basically who you are. Everything you think about that is its object.

No. Your consciousness encompasses all those thoughts and feelings. You are even conscious of your inattention. When you refer to yourself as “me” you are the object of your own consciousness. “Me” is either your body or the thoughts, feelings, or images that appear in consciousness.

“I’ll need to think more about Chalmers’ zombie argument if that’s what he means by consciousness. I’m still inclined to reject the idea that a being could act the way a typical human acts without a ‘consensus’, because they ultimately have only one body, so all their many experiences and drives need to be summed into one set of actions. But maybe if they were to ‘turn down’ the experience of their own thoughts sufficiently, it might look like a p-zombie (though in my experience people who aren’t aware of their own thoughts have noticeably different behavior).”

The zombie metaphor worked because, when Chalmers employed it, the dominant opinion in philosophical discourse was that consciousness is an epiphenomenon that does nothing. It was basically functionless, just going along for the ride. That’s behaviorism in a nutshell. And that’s what Dennett held to throughout his career. He was a faithful disciple of Gilbert Ryle to the end.

The LLM mechanism doesn’t entail anything like consciousness as I understand it. It’s a simulation. Insofar as it is successful, it produces coherent thoughts and other cognitive products. It is an amazing invention which is taking humanity into a new era. For better or worse. So yes, I agree that’s Hinton’s concern, and I wish to understand his point of view better.

Would having a moral responsibility towards the “machines” (not us…. the other no-mere-machines) exacerbate or neutralize this problem, and from whose perspective:

?

Including those concerning consent.

Despite this:

I mean. Assuming we grant this^.

emerges from the universe

demerges back…to…the universe

Can A.I. be blamed for a teen’s suicide?

Just before taking his own life, Sewell Setzer, a 14-year-old from Florida, took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”

He had spent months talking to the chatbot, which he called “Dany.” He expressed to her his thoughts of suicide and a desire to “come home” to her. This week, Sewell’s mother filed a lawsuit that accused the company behind the bot of being responsible for his death. Read about Sewell’s story.

… did Hinton see this coming?