I’m not 100% sure I follow your argument here. What are the 2 behaviors that match 1 brain state? I would reject that possibility as “speculation beyond what [we] can observe”: the very idea that the same brain state could be observed to occur twice seems wrong.
I’m more open to the idea of 2 brain states matching the same behavior. Mental processes can probably be instantiated in different media (e.g. in brains and in silicon), which would mean that two very different physical brain configurations could produce the same output.
But maybe I’m getting gummed up in language again. When you say ‘behaviors’, do you mean macro behaviors, e.g. picking up an object? Or brain “behaviors”, e.g. pathway xyz being activated?
Is this just solipsism?
One problem I have with this line of reasoning is that it erases what seems like a meaningful distinction that (assuming multiple minds exist) can be communicated from one mind to another, suggesting that it isn’t totally illusory: we talk about the distinction between mind and not-mind, and that difference is understandable and seems useful. At best, aren’t we just pushing back the question? Let’s say we accept that everything is mind. We still have the mind-things of the subtype mind (i.e. ideas, feelings, sensations, emotions), and the mind-things of the subtype not-mind (brains, rocks, hammers, whatever), and we still want to explain the mind-mind things in terms of the mind-not-mind things. And my argument still works for that.
How much of this is a linguistic problem? I grant that the only things we have access to are mind things, e.g. we perceive the world as sense impressions of the world. But are you saying that there is no world behind those impressions? There’s a difference between saying that my car’s backup sensor only delivers 1s and 0s and saying that there’s no garage wall behind me. I’d argue that the most coherent description of the world is one that isn’t dependent on mind, even though, being minds, our experience of the world is always going to be made of mind-stuff.
I guess I do think utility is meaningful. I say “this is mind and this isn’t”, and we can take that statement and test the things against it, so that e.g. the things that are mind only exist to the mind experiencing them and the things that aren’t exist to everyone. The fact that we can draw useful inferences from that distinction suggests the distinction is real.
As I mentioned in my previous reply to Karpel Tunnel, I think this is a difference of degree and not of kind. Good and evil are abstractions of abstractions of abstractions… I am indeed taking the “human brain/mind/consciousness [to be] in itself just nature’s most sophisticated machine”.
I am not sure what other type of arguments you intend here. Are there non-mind/body arguments that compare the structures of brains to the subjective experience of being a brain? Are there non-unity arguments that posit that two seemingly distinct things are in fact the same thing?
I don’t think I do, but I am open to arguments otherwise. I mean ‘experience’ in this context in a non-mental sense, e.g. “during an earthquake, tall buildings experience significant physical stresses.” There’s absolutely a danger of equivocating, i.e. assuming that rocks and humans both ‘experience’ things, and concluding that that experience is the same thing. That isn’t my argument, but I do mean to point to light striking a rock and light striking a retina as the same phenomenon, which only differs in the respective reactions to that phenomenon. Call phenomena like being hit by a photon ‘experiencing a photon’. Similarly, we can aggregate these small experiences, and say that the sunflower experiences the warmth of the sun. In the same way, then, neurons experience signals from other neurons. Whole brain regions experience the activity of other regions. The brain as a whole experiences its own operations. The parts of AlphaGo that plan for future moves experience the parts of AlphaGo that evaluate positions.
If I’m right, the internal experiencing and the external experiencing are in fact the same thing, and qualia etc. are the inside view on the brain experiencing itself experiencing itself … experiencing a photon of a certain wavelength. Qualia are not the incidence of light on your retina, but the incidence of the whole brain processing that event on itself.
I disagree. If we were perfectly informed, only the truth would be plausible.
That’s a bit like saying that the words I’m writing are really just collections of letters. And so they are, but that doesn’t prevent them from being words.
Information is a fuzzy term, which can be used to refer to single molecules of gases transferring energy (“information about their speed”), up to very abstract things like what we might find in the Stanford Encyclopedia of Philosophy page on physicalism (“information about the history of physicalism”). I don’t think either usage is at odds with physicalism.
Given that we can’t ever experience someone else’s consciousness directly, we need to treat “acting like someone is conscious” and “being conscious” as roughly the same thing. I am assuming that other humans are conscious, and that rocks aren’t. I would also assume that anything that can robustly pass the Turing Test would also be conscious.
If we don’t make these assumptions, the hard problem is very different: the question would become “why do I have qualia”, since I am the only consciousness I can directly confirm.
Having “a practical model of intelligence similar to ours” must solve the hard problem at the limit where the intelligence is so similar to ours as to be identical, right? If we’re not ready to say that, we need to establish why we’re willing to accept all these similarly intelligent humans as conscious without better evidence.
This seems like defining the solution to the Hard Problem in such a way that it becomes vulnerable to Gödel’s Incompleteness Theorem: as a mind, it is impossible for us to fully contain a comparable mind within ourselves, so “all that can be known about a mind” can never be known by a single mind. If the Hard Problem is just Incompleteness (and that’s not a totally unreasonable proposal), then we should call it the Provably Impossible Problem.