I didn’t want to re-type this, so I’m quoting myself. It’s as good an intro for this thread as it was for the original one.
John Searle has an interesting, and very strange, theory of consciousness which I’d like to debate. It goes as follows:
He says consciousness is causally reducible to brain processes, but not ontologically reducible. That is, the physical processes in the brain are enough to cause consciousness (and that’s good enough to explain how consciousness arises from the brain), but what consciousness exists as is a first-person entity, and that makes it incompatible with an ontological reduction to any third-person entity such as the brain. So while you can say that brain processes cause consciousness, you cannot say that consciousness is just brain processes.
Yet, at the same time, Searle is not a dualist. He’s not saying that consciousness is something separate from the brain. He maintains that consciousness is to the brain as the solidity of a rock is to the molecular structure and activity of that rock.
I’m finding this difficult to wrap my head around. I don’t quite understand how the link between consciousness and the brain can be likened to that between the solidity and the molecular structure/activity of a rock without being a form of ontological reduction. I’m hoping we can flesh this out in this thread. But I should say a few more things about how Searle explains this before we begin:
Searle talks about different kinds of causal relations: You have causal relations through time–where one event causally brings about a subsequent event. You have cotemporaneous, but spatially separate, causal relations–where the actions of one object cotemporaneously affect the actions of another object, like the Earth’s gravity keeping the Moon in its orbit. And you also have reductive causal relations–where properties or activities on a higher level are caused by events on a lower level, like the interactions between molecules causing solidity at a higher level.
The latter kind of causal relation is what Searle thinks is going on in the mind/brain relation. Thus, he calls it causal reduction. But he says that in science, we typically go a step further and make an ontological reduction. We go from saying that solidity is caused by molecular interactions, to saying that solidity is molecular interactions. This further step, he says, is a redefinition of our original terms. We redefine “solidity” as “a certain kind of molecular interaction”.
He says that we can’t do this for consciousness. He says that to redefine “consciousness” as “certain kinds of brain processes” is to defeat the whole point of having the term “consciousness”. The point of having the term, he says, is to refer to this entity as having a first-person mode of existence, and if we redefine it in materialist terms (as brain processes), it will then denote an entity as having a third-person mode of existence. We will thereafter need a new term to refer to this (same?) entity as having a first-person mode of existence, but in that case, why not go back to the term “consciousness”?
He’s not entirely clear as to whether he thinks this is also a problem for things like the relation between molecular interactions and solidity, but he does make the distinction between what he calls “eliminative reduction” and his aforementioned “causal reduction”. Eliminative reduction is the kind of reduction you do on a phenomenon that, in effect, gets rid of that phenomenon. He cites sunsets and rainbows as examples. We now know that the Sun never really “sets” (in the sense of the Earth being still and the Sun “going down” below the horizon). And we also know that rainbows don’t actually exist as material (or ethereal?) objects in the sky. Light just behaves differently as it passes through raindrops, and at certain points in the sky and at certain angles, we get the optical illusion of a colorful arch. But consciousness is real, Searle says, just as solidity is real. Just because we’re able to carry out a causal reduction on a phenomenon, it does not follow that we’ve eliminated it. He also talks about the distinction between solidity as redefined in terms of molecular interactions and solidity as a macroscopic and sensual property of some objects (“our good old friend solidity” he says). The latter, he says, is still real, but it isn’t clear whether he thinks the redefining of this kind of solidity in terms of molecular interactions poses the same kind of problem as the redefining of consciousness in terms of brain processes. In either case, we would still need to retain a word to refer to the original kind of phenomenon (first-person consciousness or “our good old friend solidity”), but in the case of “our good old friend solidity,” that’s still a third-person thing, so go figure.
A few other things Searle says: He says he agrees with a certain watered-down version of emergentism but not the mainstream kind that most emergentists identify themselves with. He agrees with the mainstream view up to a point–namely, that consciousness can be thought of as an emergent property of brain processes–but he disagrees with them when they say that, once consciousness emerges, it “takes on a life of its own” (which is essentially a form of dualism, and to my mind is incompatible with the concept of consciousness as an emergent property, since that would effectively make it a whole entity unto itself).
He also says that he thinks of nature as “wildly contingent” (or was it “radically contingent”?), which is to say that sometimes in nature, cause/effect regularities are not at all necessary. If it happens that certain brain processes result in these subjective first-person experiences that comprise consciousness, that’s just a brute fact about nature. If we don’t understand how that happens, too bad for us. He even goes so far as to say that it need not be the case that there is some necessary way by which nature brings this about and we are merely incapable of understanding it, for he thinks that nature can be, at times, inherently contingent (he cites Hume as an example of how he thinks about this).
This is a subtle, but major, point of confusion for me–I’d go so far as to say it is the crux of the problem–for it seems odd to say that consciousness is not separate from the brain even though we can’t do an ontological reduction from consciousness to the brain, especially if he thinks there need not be any necessary connection between consciousness and the brain. After all, one could understand Searle’s prohibition against ontological reduction as a kind of conceptual difficulty. That is, he might mean that we cannot conceptualize a first-person entity as being ontologically identical to a third-person entity, but if the reality is that they are ontologically the same thing, then one would think there must be a necessary connection between them (isn’t that just Aristotle’s law of identity: all things are identical with themselves?). But if he’s saying that nature is intrinsically contingent at times, then there is no necessary connection between consciousness and brain processes even in nature (not just in our minds). Doesn’t that mean they have to be separate things?
I have no doubt that this all makes sense to Searle, but to me it’s kind of a mess. I’m hoping we can sort it out in this thread.