There Is No Hard Problem

I concede that my language is sloppy, but then this is a sloppy area and language is going to let us down. At some point between “light hitting a photoreceptor” and “functioning brain experiencing the qualia ‘red’”, we get something we would call observation. That will be true for any explanation that accepts mind-brain identity. There may not be a sharp line between observation and not-observation as we abstract up to the whole brain, and that is not necessarily a defeater for a theory of consciousness.

However, some of the sloppiness is of course my own; I will attempt to tighten that up:

I did not mean to suggest that merely by entering the skull a causal chain becomes conscious. I mean to talk about subjective experience as isomorphic to function, i.e. mind state-transitions correspond to brain state-transitions. Cabining such function inside a brain is neither necessary nor sufficient for that isomorphism; it’s incidental. Allow me to clarify this point.

Consider again the sunflower, and compare it to a rock (a piece of graphite, say). We can see that light hitting the sunflower has an effect different from light hitting the rock. In particular, the light that hits the rock imparts some energy in the form of heat, which diffuses uniformly through the rock. Sufficient light will result in a phase change or other reaction. By contrast, when light hits the sunflower, a chemical cascade is initiated, in which energy from other sources is consumed and directed such that the sunflower moves. These reactions are different in kind. We can nitpick how exactly we want to define or express this difference, but I will take it at face value for our purposes here. Furthermore, the reaction of a photoreceptor cell in the eye is similar to the reaction of a photoreceptor cell in the sunflower (although the cell in the eye is more specialized, in the difference-in-kind between rock and flower the eye cell is clearly on the side of the flower). This seems like a non-arbitrary distinction between certain kinds of cause-and-effect relationships and cause-and-effect in general. Yes, there is a cause and effect relationship between the rock and the light, but it is different in kind from that between the flower/retinal cell and the light.

Brains are effectively networks of this latter type of causal connection. The causal relation between the light and the photoreceptor cell is similar to the causal relation between the photoreceptor cell and the neural cells with which it is connected. One conceptual building block I’m using is chains of these causal connections. But these chains aren’t only neurons in series, but also in parallel: each neuron passes a signal to many other neurons, and these subsequent neurons may themselves be interconnected, including with neurons earlier in the chain.

In brain architecture, we can identify more or less discrete subnetworks composed of such chains, e.g. the occipital lobe. The occipital lobe consists of many millions of these chains, all trained to parse the signals from the photoreceptors into information about the world as represented by the light that strikes the retina.

Consciousness enters the picture each time some part of the network is causally influenced by a different part of the network, such that the former part is trained to recognize patterns within the latter part. When this occurs, the former part is “observing” the latter part, in the same sense that the occipital lobe is “observing” patterns in the retinal photoreceptors. It’s pattern matching, in the same way that AlphaGo pattern-matches on the arrangement of playing pieces on a Go board.

Consciousness is the mental experience of observing mental experience, which is what we would expect a system that is wired to pattern-match to patterns in its own pattern-matching to report. At lower levels, the network pattern-matches on photoreceptor cells firing. At higher levels, other parts of the network pattern-match to collections of neurons firing in the photoreceptor-pattern-matching area. This layering continues, with collections of cells reacting to collections of cells reacting to collections of cells, etc. This self-observation within the network is isomorphic to the self-observation of conscious experience.
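To make that layering concrete, here is a minimal toy sketch in Python. It is not a claim about real neural wiring; all of the names, sizes, and weights (retina, w_visual, w_meta) are arbitrary illustrations. One weight matrix pattern-matches on simulated photoreceptor activity, and a second matrix pattern-matches on the first one’s activations, i.e. on the pattern-matching itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights):
    """One stage of pattern-matching: a weighted sum squashed by a nonlinearity."""
    return np.tanh(weights @ inputs)

# Hypothetical sizes, chosen only for illustration.
retina = rng.random(16)                    # "photoreceptor" activations
w_visual = rng.standard_normal((8, 16))    # first-order pattern-matcher on the retina
w_meta = rng.standard_normal((4, 8))       # matcher trained on the matcher's own activity

visual_activity = layer(retina, w_visual)         # patterns in the light
meta_activity = layer(visual_activity, w_meta)    # patterns in the pattern-matching

print(meta_activity)
```

The point of the sketch is only the shape of the arrangement: nothing stops us from stacking further matrices on top of `w_meta`, or wiring later stages back into earlier ones, which is the layered self-observation described above.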

And again, this is all distinct from the rock because the causal chain isn’t merely energy from light diffusing through this causal cascade, but the light starting a causal cascade that uses separate energy, and indeed depends on excluding diffusive energy (most often by residing inside a solid sphere of bone).

From this rough sketch, we need only abstract up to emotional or intellectual reactions, where the layers of self-referential causation permit highly abstracted patterns to be recognized, e.g. (in a super gross simplification) “ideas” made of “words” made of “sounds” made of “eardrum vibrations”.

This does not seem to fit with the observable ways in which purely ‘body’ causes can affect mind. For example, brain damage changes not only the intensity of mind, but the contents and fine-grained functioning. That makes sense if mind is just the working of the brain, but not if mind precedes the brain.

To me the blue is all fine, but then I see no justification in the redded portions above for a creeping in of assumed consciousness. More complex ‘phototropism’ happens in the brain, but why ‘experiencing’ should arise, I don’t think you’ve justified. Computer programs can recognize patterns. Machines can do this. Basically any physiological homeostasis, including the sunflower’s, is recognizing patterns. Are there gradations of consciousness? How do we know where consciousness arises in complexity? How do we know consciousness is in any way tied to complexity or self-relation? We lack a test for consciousness. We only have tests for behavior or, more neutrally, reactions. How do we know which reactions, including the stone’s, have some facet of ‘experiencing’ in them or not?

And we are heavily biased to view entities like us as conscious, or more likely to be conscious. But we have no way of knowing if this is correct. The work in plant intelligence, decision-making, etc. that is now seeping into mainstream science is a sign that some of that bias is wearing off, just as the biases against animal intelligence and consciousness were deeply entrenched - one could say in a faith-based, religious way - well into the second half of the 20th century.

Ah, I think I see the gap in my argument that you’re pointing out.

My aim here is to tie the outside description of the brain (neurons, photoreceptors, networks) to the inside description of consciousness. So that first section you highlight in red is a description of what the experience of consciousness is, rather than something that follows from my argument. My intent there is to frame consciousness in a way that makes the mapping to brain function plausible. “The experience of experiencing” (a trimmed and, as I mean it, equivalent version of the first red section) seems both a reasonable description of consciousness, and a reasonable description of a network that is trained on itself.

When we look at the brain, we see a network configured to receive information about the external world and identify patterns in that information, and also to receive information about its own operations as it does so. When we look at our own conscious experience, we feel ourselves feeling ourselves feeling the world. The subjective “experience of experiencing” and the objective “pattern-matching on pattern-matching” are two descriptions of the same process.

(The second red section is more to what I took to be Iambigous’ point, i.e. that the more abstract parts of experience are more difficult to explain. More abstraction only requires more layers of network. And if my clarification of the first red section is successful, I think it follows that more self-experience entails more abstraction.)

The test of my position here would be to create more and more sophisticated artificial minds that function in basically this way. AlphaGo and its successors are a significant breakthrough in this direction. There is a temptation to compare them to DeepBlue, but they operate in importantly different ways. DeepBlue was spectacular because it was able to read out so many moves ahead, which was novel at the time (though it’s less than 1/10th as powerful as a modern smartphone). But humans aren’t very good at reading ahead, certainly not compared to computers; that’s not how humans play. Rather, humans look for patterns; they abstract based on experience. And that’s what AlphaZero does. It still analyzes a ton of moves, but many fewer than other engines (1/875 from this not-super-awesome source). Reinforcement learning and neural networks are modeled on the human brain and surpass humans in very human endeavors.
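To illustrate that difference in kind, here is a hedged sketch in Python. It is not Deep Blue’s or AlphaZero’s actual code; every function passed in (legal_moves, apply_move, evaluate, policy) is a hypothetical placeholder. The first routine values a position by examining every legal continuation to a fixed depth; the second lets a learned, pattern-based policy rank the moves and searches only the handful it rates most promising.

```python
def brute_force_value(state, depth, legal_moves, apply_move, evaluate):
    """Deep-Blue-style look-ahead: examine every legal move to a fixed depth."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    return max(-brute_force_value(apply_move(state, m), depth - 1,
                                  legal_moves, apply_move, evaluate)
               for m in moves)

def policy_guided_value(state, depth, legal_moves, apply_move, evaluate,
                        policy, top_k=3):
    """AlphaZero-flavoured look-ahead: a learned policy ranks moves by pattern,
    and only the few most promising candidates are searched further."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    candidates = sorted(moves, key=lambda m: policy(state, m),
                        reverse=True)[:top_k]
    return max(-policy_guided_value(apply_move(state, m), depth - 1,
                                    legal_moves, apply_move, evaluate,
                                    policy, top_k)
               for m in candidates)
```

The second routine visits far fewer positions per decision; that is the sense in which AlphaZero’s strength comes from pattern recognition rather than raw look-ahead.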

If I’m right, this should be the field that results in an artificial general intelligence for which the consensus is that it’s conscious. And such an advance should not be too far off.

Reducing mind to body is as easy as employing the old type/token distinction. We’ve only observed so many brain states, but we think there are many more (maybe infinitely many) that are possible, so we assume that behavior and brain states can supervene on each other, or however you want to put it. But then someone comes along and says there are 2 behaviors that match the 1 brain state that we can observe. Then you can either appeal to the whole idea that there are more states that are theoretically necessary and therefore must exist (but you may get accused of speculating beyond what you can observe), or you can say that the brain states are tokens and the behaviors are types, or whatever, and use a bit of the old set theory to settle up your reductive theory of mind. Then you can say, “hey man, I’m not saying I’ve solved the mind body problem, I’m just saying I’ve concluded that the best way to discuss them is by referencing the physical observable stuff and framing it as having a 1:1 correlation with the non-physical stuff.”

I would still say that it does.

Reasoning: how is it possible to know that the brain has been damaged without a mind to observe it? Certainly, a damaged brain can directly result in a damaged mind, but given that the brain is a product of the mind, it’s still the mind being damaged that causes a damaged mind. The middleman “brain” is a part of an observing mind that seems to directly represent a mind; it’s a subset of mind that isn’t actually a fundamental substance in itself (matter), but it is thought of as a substance that can be treated as fundamental for utility’s sake (even though it isn’t). Given what I said about human understanding requiring the object of understanding to differ in kind from the subject (to avoid tautology), it’s necessary for utility’s sake for us to treat the material conception in this way in order for us to attempt understanding of the mind “by proxy”.

This may bring to mind thoughts of trees falling in forests when nobody is around, but as a matter of epistemology, the matter of a tree cannot be known to have fallen until a corresponding conception of a tree in someone’s mind has occurred to confirm it. This is aside from the ontological question of whether “it actually makes a sound” if it falls. But even to the ontological question, I make the same argument: “the reality” of matter independent of an observer is the same product of utility, initially founded in mind and subsequently inverted in the mind such that the “reality” of matter precedes the reality of mind. And who can blame us for thinking in this way, when the evidence looks so much like things are going on even when nobody is around to perceive them? But does the quality of utility override the process that necessarily occurs before that utility is even conceived?

I’m still trying to get my head around your argument, maybe it’s my fault for not being able to, or maybe my version is the correct reasoning why there is no hard problem. Or perhaps as you were hinting, the only hard part of the problem is the language to explain it or the lack of it :wink:

Being good at reading/looking ahead can be measured in the either/or world. If you wish to achieve one or another goal and that involves calculating what you imagine will unfold given all the possible variables involved, you either achieve that goal or you don’t. Or you achieve it faster and more fully than others.

But what machine intelligence is any closer to “thinking ahead” regarding whether the goal can be construed as encompassing good or encompassing evil?

It would seem that only to the extent that the human brain/mind/consciousness is in itself just nature’s most sophisticated machine would that part become moot.

Note that here – en.wikipedia.org/wiki/AlphaZero – this is not touched upon at all.

Of course that part may well only be of interest to folks like me.

I sort of understand. Perhaps it would be good to ask you: how is what you are arguing unique to mind body unity arguments? If it is. I feel like I am missing something, but perhaps I interpreted the title as indicating that you’d found a new angle - even simply new to the ones you’ve read.

That said: I don’t think the phrase “the experience of experiencing” is useful and/or makes sense. If we are experiencing (iow that second part of the phrase) and then experience this experiencing, this cannot be an explanation of that first experiencing that we then notice. Now I assume you meant two different things by experience and experiencing in that sentence. We would be experiencing the reactions and effects in our brains. But I now see you are trying to make a model that is plausible, which is different from making an argument that X is the case.

Or one could, from a physicalist point of view, consider this definition excessive. There is nothing receiving information. Rather, the brain is a very complicated, effective kind of pachinko machine, and when causes hit this incredibly complicated pachinko machine, the machine reacts in specific determined ways. There is no information, just causes and effects. It looks like information is being received because evolution has led to a complicated object that responds in certain ways. But that’s an after-the-fact interpretation. (This is not my position, but I think it is entailed by physicalism, which your posts seem to fit within.)

I think they will soon have things that act like our brains. Whether they will be experiencing is another story. And for all we know they already are. I think we should be very careful about conflating behavior, even the internal types focused on here, and consciousness. We have no idea what leads to experiencing. And we have a bias, at least in non-pagan, non-indigenous cultures, toward assuming it is the exception.

I don’t think you are solving the hard problem, you are just presenting a practical model of intelligence similar to ours and suggesting that this will lead to effective AIs. I agree. However the hard problem is not that…

And like most formulations of the hard problem, this one assumes we know what is not conscious, despite there being a complete lack of a scientific test for consciousness. All we have are tests for behavior/reaction.

There is one test that I coincidentally read about, which consists of the following and is quite recent.

Points of light impinge upon the eye at various intervals using a multi-colored scheme consisting of red and blue. The duration of the test may be factored in as of primary relevance, but that has not been verified at this end.

The crux of its relevance to other considerations consists in the finding that it takes repetition of the incidental light exposures exactly halfway through before a color change is reported in the study. The color change, I believe, results in a shift to green.

Does this not point to a quantifiable relevance, qualifying a tool with which to measure internal and external effects of variable visual input, relating inner and outer sources of experience?

If so, can this be a model of measurement in a more general study?

I am not sure I understand the test. It seems to me that they can measure reactions. They see a reaction. Well, even a stone will react to light. What we cannot test is whether someone experienced something. And then this test seems to be for beings with retinas and we’ve pretty much already decided that creatures with retinas are conscious.

Around the turn of the last century Colin McGinn suggested that consciousness is too complex to be explained by a mind. What have we learned since then that would make such an explanation possible?

In this case, the difference is that the reaction was reported by the test subject, versus, in the case of the stone, the reaction merely being observed by the test giver.

The test subject reported perceived changes he experienced, connecting the test with both the qualitative and quantifiable factors.

I think that does meet the criteria for a test relevant to the problem.

One would need to address this question to the neurological community. And I suspect they would conclude that much is known now that was not known then.

But the hardest part about grappling with “Hard Problem of Consciousness” is still going to revolve around narrowing the gap between what any particular one of us thinks we know about it here and now and all that can be known about it in order to settle the question once and for all.

Here of course some invoke God. But for others that just provokes another Hard Question: How and why and when and where did God come into existence?

I don’t get it. How does the test demonstrate the lack of consciousness, as opposed to the lack of an ability to report what one has experienced? IOW how would it demonstrate that an animal, plant, or rock is not conscious, rather than simply that they do not respond about their experience?

I think consciousness is actually rather simple. But it is complicated to explain how it arises, especially in a physicalist paradigm.

I’m not 100% sure I follow your argument here. What are the 2 behaviors that match 1 brain state? I would reject that possibility as “speculation beyond what [we] can observe”: the very idea that the same brain state could be observed to occur twice seems wrong.

I’m more open to the idea of 2 brain states matching the same behavior. Mental processes can probably be instantiated in different media (e.g. in brains and silicon), which would mean that two very different physical brain configurations can produce the same output.

But maybe I’m getting gummed up in language again. When you say ‘behaviors’, do you mean macro behaviors, e.g. picking up an object? Or brain “behaviors”, e.g. pathway xyz being activated?

Is this just solipsism?

One problem I have with this line of reasoning is that it erases what seems like a meaningful distinction that (assuming multiple minds exist) can be communicated from one mind to another, suggesting that it isn’t totally illusory: we talk about the distinction between mind and not-mind, and that difference is understandable and seems useful. At best, aren’t we just pushing back the question? Let’s say we accept that everything is mind. We still have the mind things of the sub-type mind (i.e. ideas, feelings, sensations, emotions), and the mind-things of the subtype not-mind (brains, rocks, hammers, whatever), and we still want to explain the mind-mind things in terms of the mind-not-mind things. And my argument still works for that.

How much of this is a linguistic problem? I grant that the only things we have access to are mind things, e.g. we perceive the world as sense impressions of the world. But are you saying that there is no world behind those impressions? There’s a difference between saying that my car’s backup sensor only delivers 1s and 0s and saying that there’s no garage wall behind me. I’d argue that the most coherent description of the world is one that isn’t dependent on mind, even though, being minds, our experience of the world is always going to be made of mind-stuff.

I guess I do think utility is meaningful. I say “this is mind and this isn’t”, and we can take that statement and test the things against it, so that e.g. the things that are mind only exist to the mind experiencing them and the things that aren’t exist to everyone. The fact that we can draw useful inferences from that distinction suggests the distinction is real.

As I mentioned in my previous reply to Karpel Tunnel, I think this is a difference of degree and not of kind. Good and evil are abstractions of abstractions of abstractions… I am indeed taking the “human brain/mind/consciousness [to be] in itself just nature’s most sophisticated machine”.

I am not sure what other type of arguments you intend here. Are there non-mind/body arguments that compare the structures of brains to the subjective experience of being a brain? Are there non-unity arguments that posit that two seemingly distinct things are in fact the same thing?

I don’t think I do, but I am open to arguments otherwise. I mean ‘experience’ in this context in a non-mental sense, e.g. “during an earthquake, tall buildings experience significant physical stresses.” There’s absolutely a danger of equivocating, i.e. assuming that rocks and humans both ‘experience’ things, and concluding that that experience is the same thing. That isn’t my argument, but I do mean to point to light striking a rock and light striking a retina as the same phenomenon, which only differs in the respective reactions to that phenomenon. Call phenomena like being hit by a photon ‘experiencing a photon’. Similarly, we can aggregate these small experiences, and say that the sunflower experiences the warmth of the sun. In the same way, then, neurons experience signals from other neurons. Whole brain regions experience the activity of other regions. The brain as a whole experiences its own operations. The parts of AlphaGo that plan for future moves experience the part of AlphaGo that evaluates positions.

If I’m right, the internal experiencing and the external experiencing are in fact the same thing, and qualia etc. are the inside view on the brain experiencing itself experiencing itself … experiencing a photon of a certain wavelength. Qualia are not the incidence of light on your retina, but the incidence of the whole brain processing that event on itself.

I disagree. If we were perfectly informed, only the truth would be plausible.

That’s a bit like saying that the words I’m writing are really just collections of letters. And so they are, but that doesn’t prevent them from being words.

Information is a fuzzy term, which can be used to refer to single molecules of gasses transferring energy (“information about their speed”), up to very abstract things like what we might find in the Stanford Encyclopedia of Philosophy page on physicalism (“information about the history of physicalism”). I don’t think either usage is at odds with physicalism.

Given that we can’t ever experience someone else’s consciousness directly, we need to treat “acting like someone is conscious” and “being conscious” as roughly the same thing. I am assuming that other humans are conscious, and that rocks aren’t. I would also assume that anything that can robustly pass the Turing Test would also be conscious.

If we don’t make these assumptions, the hard problem is very different: the question would become “why do I have qualia”, since I am the only consciousness I can directly confirm.

Having “a practical model of intelligence similar to ours” must solve the hard problem at the limit where the intelligence is so similar to ours as to be identical, right? If we’re not ready to say that, we need to establish why we’re willing to accept all these similarly intelligent humans as conscious without better evidence.

This seems like defining the solution to the Hard Problem in such a way that it becomes vulnerable to Godel’s Incompleteness Theorem: i.e., as a mind, it is impossible for us to fully contain a comparable mind within ourselves, so “all that can be known about a mind” can never be known by a single mind. If the Hard Problem is just Incompleteness (and that’s not a totally unreasonable proposal), then we should call it the Provably Impossible Problem.

Well, let me know when someone is perfectly informed; otherwise I will not conflate plausibility with truth, or even with ‘the only model we can’t falsify at this point’.

Then it would seem to become a matter of how far one takes this. Taking it all the way, this very exchange might be but an inherent manifestation of that which can only be. As would be the distinction between abstractions used to describe phenomenal interactions and the interactions themselves. They happen. They could not not have happened.

Next post. Next set of dominoes.

Of course Godel’s Incompleteness Theorem – simple.wikipedia.org/wiki/G%C3% … s_theorems – is no less entangled in the gap between what it argues about existence and all that can be known about the existence of existence itself.

And that inevitably takes us back around to this:

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.

But how would one go about proving the problem is impossible to solve?

Instead, from my frame of mind, the truly hard problems of consciousness revolve around the question “how ought one to live?” in a world of conflicting goods.

For example: To build or not to build Trump’s wall.

Taking an existential leap to autonomy.

No, because that’s just function. We would know it could function, in general, like us. Does Deep Blue have some limited experiencing? I would guess consensus is no and further we cannot know. Just because we make something that can function like us in many facets of our intelligence does not mean it is experiencing. It might be. It might not be.

Yes, physicalists who consider all contact mediated and interpreted should have that concern. And some do.

It’s not necessarily Solipsism, but it is Idealism.

It certainly seems like minds have no overlap: one’s consciousness doesn’t overlap with others’, which could easily lead to Solipsism - but minds do appear to communicate with one another - the question is whether the separation borders perfectly or if there’s a gap and there’s some intermediary substance that allows the connection (since overlap is out of the question). The latter seems unfalsifiable, and the former seems a little convenient, but given the problems with all other suggestions about fundamental substance the suspiciously convenient seems to be the least contradictory regardless of any seeming lack of probability. Unless of course your argument is perfectly fine and I’m just missing it - which seems more likely than the convenience of what I’m suggesting.

Is it too hocus pocus to say there is no reality in terms of matter behind the impressions? They’re unfalsifiable after all, as useful as they are to propose. Could it be that this reality only occurs once minds connect (without overlapping and with nothing in between) like bubbles? But of course these bubbles burst eventually… Life as some fancy bubble machine, eh? Haha. Can you really propose a coherent description of the world without mind? Is that what you’re attempting?

Either there is an equivocation on the idea of ‘inside’ above, or there is a lack of justification for how the physical sense of ‘inside’ justifies the arising of subjective experience.

What is ‘inside’ to matter?

The equivocation: just because some complicated interaction is happening ‘inside’ is not enough to say there will be the ‘inside’ of subjectivity. Inside the ocean, there are ecosystems of complicated interaction. But the inside-ness of the ocean does not lead to its being conscious - not in most physicalist models.

The lack of justification: why does matter start being an experiencer because causal interactions are inside? Why would topology lead to there being an experiencer? I don’t see the argument; I see a simple flat statement that consciousness is happening when it is inside.

For some reason interconnection inside something leads to matter not just engaging in certain processes, causal chains, but there arises a noticing. I see nothing explaining why this noticing arises. That to me is the hard problem. Not cognition, but awareness.

Another way to put it is this: sure, things within an organism affect each other and can produce responses. But this can happen without an experiencer, as in motors that have feedback for homeostasis. Would you argue that there are the beginnings of consciousness in those motors? I can see saying this is using information from one part of a thing to modify processes in another. But I see nothing explaining an experiencer. Cognition, even, should not be confused with awareness.