We may be referring to different things when we say “consciousness”, which wouldn’t be too surprising since we can’t point to the things we’re talking about directly.
I would say that it is. Dennett gives the example of being metaphorically torn over some question. He suggests that the feeling is more than a metaphor: there are actually different parts of your consciousness arriving at conflicting conceptions of the world, and the feeling of being torn is the feeling of that incongruity, i.e. what it feels like when a plural consciousness disagrees with itself.
But that plurality is there all the time. As I type this, my attention is on writing, but there are parts of my awareness attuned to the small noises in the room around me; when my dog scratches at the door to come in, it’s no louder than the dripping of the coffee pot, but I react to it differently because there’s a part of me that’s aware of the meaning of those noises. I would call all of these things parts of my “consciousness”.
I agree that there’s a gravity towards unification, so that I feel mostly blind and deaf and numb to everything but the screen and the keyboard. But that feels like consensus rather than unity, and I would describe the feeling of being torn, or of being unable to concentrate, as a failure of that consensus.
But if I interpret your opening paragraph correctly, what I’m calling the ‘consensus’ is closer to what you are referring to when you say ‘consciousness’. Does that sound right? Because I am familiar with the feeling of thoughts dissolving under attention, but other parts of me are still experiencing and interpreting the world; it’s just that those experiences are being ‘turned down’ in the consensus.
I’ll need to think more about Chalmers’ zombie argument if that’s what he means by consciousness. I’m still inclined to reject the idea that a being could act the way a typical human acts without a ‘consensus’, because they ultimately have only one body, so all their many experiences and drives need to be summed into one set of actions. But maybe if they were to ‘turn down’ the experience of their own thoughts sufficiently, they might start to look like a p-zombie (though in my experience, people who aren’t aware of their own thoughts behave noticeably differently).
And as it applies to LLMs, I conceive of their functioning as similar to certain smaller parts of my plural consciousness, processing an input and identifying meaning within it. But they don’t have a consensus yet, at least not when they’re being used as chat bots: the process goes one way and stops. When they’re being trained, they have something more like a consensus, because the process is going both ways and there is a similar ‘gravity towards unification’.
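(If it helps to make the ‘one way’ vs. ‘both ways’ distinction concrete, here’s a rough sketch of what I mean, in PyTorch, with a toy stand-in for the model and made-up numbers of my own rather than anyone’s actual setup: in chat use there is only a forward pass, while in training a backward pass also carries the error back into the weights.)

```python
# A toy stand-in for "the process": the model, sizes, and data here are
# illustrative only, not any real LLM.
import torch
import torch.nn as nn

model = nn.Linear(16, 4)              # pretend this is the language model
x = torch.randn(8, 16)                # pretend these are token embeddings
target = torch.randint(0, 4, (8,))    # pretend these are the "right" outputs

# Chat-bot use: the process goes one way and stops.
with torch.no_grad():
    logits = model(x)                 # input -> output; nothing flows back

# Training: the process goes both ways.
loss = nn.functional.cross_entropy(model(x), target)
loss.backward()                       # the error flows backward through the model
with torch.no_grad():
    for p in model.parameters():
        p -= 0.01 * p.grad            # and nudges the weights (a crude gradient step)
```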
Yes, I was reminded of that thread just before you mentioned it, though it’s really @10x’s thread, particularly as it connects UBI with AI.
Do you agree that Hinton’s concern is mostly about societal impact, or is he also worried about something else in AI/LLMs? I haven’t seen anything where he talks about e.g. a moral responsibility towards the machines, but you are more familiar with his perspectives than I am.