I only intend to say that “X is plausible” is a necessary condition for “X is the case”, and once we show the former, it’s only our lack of information that prevents us from concluding that X is the case. So, in making an argument that X is plausible, I am making a necessary part of the argument that X is the case.
Why only physicalists? What alternatives allow one to assume the existence of other minds?
My general response here would be that, 1) we should apply consistent standards, so if we’re assuming the existence of other minds in other theories, it’s not a special weakness of this theory to assume the existence of other minds and reason from there, and 2) we do in fact assume the existence of other minds, and the burden is on anyone claiming that other minds don’t exist to make that case (I’d go as far as to say that we don’t really understand what our own minds are except by reference to other minds).
By inside/outside, compare to a computer set up to observe certain aspects of its environment and its own workings. The computer might report that the temperature in the room is X, that its #3 drive is on the fritz, etc. That’s inside-view reporting: it’s taking ‘sense data’ and ‘self-experiencing’ and reporting on its state as it perceives it. A technician working with the computer might note that it has a digital thermometer which translates the expansion of a spring into 0s and 1s, and that it has an algorithm that tries to read and write to locations on its drives and receives an error if the drives are on the fritz. That’s the outside view. In talking about the system, and appealing to either view, we don’t need to assume consciousness.
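To make the analogy concrete, here’s a toy sketch of the two views. All class and function names are hypothetical, invented purely for illustration: the inside view is the machine reporting its state as it perceives it, the outside view is the technician describing the mechanism.

```python
# Toy illustration of the inside-view / outside-view distinction.
# All names here are hypothetical, made up for this sketch.
class Computer:
    def __init__(self, sensor_reading, drive_ok):
        self.sensor_reading = sensor_reading  # raw value from the thermometer
        self.drive_ok = drive_ok              # result of the drive self-check

    def inside_view(self):
        # The machine reports on its own state as it perceives it.
        status = "ok" if self.drive_ok else "on the fritz"
        return f"Room temperature is {self.sensor_reading} C; drive #3 is {status}."

def outside_view(computer):
    # The technician describes the mechanism, without any appeal to 'perception'.
    check = "no error" if computer.drive_ok else "an error"
    return ("A digital thermometer translates the expansion of a spring into "
            f"the number {computer.sensor_reading}; a read/write check on drive #3 "
            f"returned {check}.")

c = Computer(sensor_reading=21, drive_ok=False)
print(c.inside_view())
print(outside_view(c))
```

Both descriptions are of the same system and the same facts; neither requires us to assume the computer is conscious.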
For humans, consciousness is the inside-view of the operation of their brain.
This is what I mean when I say “the experience of experiencing”, and my point is that the awareness is just the cognition about the cognition.
The claim that “this can happen without an experiencer” is question-begging. If I’m right, there’s a rudimentary experiencer in the motor. I’m not claiming that the motor is pondering about the nature of its existence, only that the motor’s perceptions about its own operations are effectively a very limited form of qualia.
I would say that we don’t need any more “intermediary substance” besides air, which allows connections in the form of spoken language. I hope that doesn’t sound too flip; if we’re acknowledging that we only ever have imperfect information about what’s really going on ‘out there’ in the non-mind universe, then the mental experiences that over time we’ve come to describe as ‘air’ and ‘noise’ and ‘language’ are just approximations of a real world we don’t and can’t know other than through the mental experiences. But if our mental experiences of ‘air’ are consistent, if when we communicate about them with these ostensible other minds we find that our experiences are consistent with their experiences, then we can talk about ‘air’ as a useful approximation for the real, unknowable world. There’s already plenty of hocus pocus in there, I don’t see the justification for more hocus pocus that adds more unknowability without any commensurate consistency within and between minds.
I can’t, but… again compare to a computer: let’s say we make a system that analyses its own hardware and software and spits out a model of how it all works. We might ask: can it really propose a coherent description of itself without software? Of course not, the whole description is software-mediated; it can’t escape its own software to see a non-software-mediated version of itself. Still, its model can describe the system without any specific line-item for software, e.g. “these electrical pulses go over here, where they turn on or off these gates, which pass or block other electrical charges, etc.” That’s likely to be a hugely inefficient way to describe what the system is doing, but that description can be complete while completely omitting software (even though the description itself must be software-mediated).
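A toy version of that point, with hypothetical names: below, an XOR is computed and the system logs everything that happened purely as gate events. The log is a complete (if inefficient) account of the computation, and the word “software” appears nowhere in it, even though software produced the log.

```python
# Toy sketch (names hypothetical): describe a computation entirely as
# gate-level events, with no line-item for 'software', even though the
# description itself is software-mediated.
def nand(a, b, log):
    out = 0 if (a and b) else 1
    log.append(f"gate NAND({a},{b}) -> {out}")
    return out

def xor(a, b, log):
    # XOR built from four NAND gates.
    t = nand(a, b, log)
    return nand(nand(a, t, log), nand(b, t, log), log)

log = []
result = xor(1, 0, log)
for line in log:
    print(line)          # a complete gate-level description of what happened
print("result:", result)
```

Every event in the computation is accounted for at the level of gates; the description never needs “software” as a separate item, just as (on my argument) a complete description of the brain never needs “mind” as a separate item.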
So too with mind: my perception is mind, my interpretation of brain activity is mind-mediated, yet I’m arguing that a complete description of the brain can be a complete (if hugely inefficient) description of mind while omitting mind as any line-item in that description.
Mr Reasonable, I think I agree, although I’m open to the possibility that fine-enough grained behaviors are actually tightly tied to brain states. So, appealing one last time to the computer metaphor, two computers can both run Chrome, and ‘behave’ the same way, but when we get down into where and how the processing is taking place, the behaviors become ‘clicking with the pointer on pixel (x,y) and running the code stored at location z’, and can’t be very well abstracted into sets, and we really do have 1-to-1 correspondence. In brains, that becomes more salient, since we might both have the idea of a brain and say “brain”, and yet have a very different set of other ideas associated with it, so that I think of computers next and you think of drug companies, or whatever. But I think that’s just a matter of scale from what you’re saying.