There Is No Hard Problem

I would still say that it does.

Reasoning: how is it possible to know that the brain has been damaged without a mind to observe it? Certainly, a damaged brain can directly result in a damaged mind, but given that the brain is a product of the mind, it’s still damage to mind causing a damaged mind. The middleman “brain” is a part of an observing mind that seems to directly represent a mind; it’s a subset of mind that isn’t actually a fundamental substance in itself (matter), but it is thought of as a substance that can be treated as fundamental for utility’s sake (even though it isn’t). Given what I said about human understanding requiring the object of understanding to differ in kind from the subject (to avoid tautology), it’s necessary for utility’s sake to treat the material conception in this way in order for us to attempt understanding of the mind “by proxy”.

This may bring to mind thoughts of trees falling in forests when nobody is around, but as a matter of epistemology, the matter of a tree cannot be known to have fallen until a corresponding conception of a tree in someone’s mind has occurred to confirm it. This is aside from the ontological question of whether “it actually makes a sound” if it falls. But even to the ontological question, I make the same argument that “the reality” of matter independent of an observer is the same product of utility, initially founded in mind and subsequently inverted in the mind such that the “reality” of matter precedes the reality of mind. And who can blame us for thinking in this way when evidence looks so much like things are going on even when nobody is around to perceive it? But does the quality of utility override the process that necessarily occurs before utility is even conceived?

I’m still trying to get my head around your argument, maybe it’s my fault for not being able to, or maybe my version is the correct reasoning why there is no hard problem. Or perhaps as you were hinting, the only hard part of the problem is the language to explain it or the lack of it :wink:

Being good at reading/looking ahead can be measured in the either/or world. If you wish to achieve one or another goal and that involves calculating what you imagine will unfold given all the possible variables involved, you either achieve that goal or you don’t. Or you achieve it faster and more fully than others.

But what machine intelligence is any closer to “thinking ahead” regarding whether the goal can be construed as encompassing good or encompassing evil?

It would seem that only to the extent that the human brain/mind/consciousness is in itself just nature’s most sophisticated machine would that part become moot.

Note that here – en.wikipedia.org/wiki/AlphaZero – this is not touched upon at all.

Of course that part may well only be of interest to folks like me.

I sort of understand. Perhaps it would be good to ask you: how is what you are arguing unique to mind body unity arguments? If it is. I feel like I am missing something, but perhaps I interpreted the title as indicating that you’d found a new angle - even simply new to the ones you’ve read.

That said: I don’t think the phrase “the experience of experiencing” is useful and/or makes sense. If we are experiencing (in other words, that second part of the phrase), and then experience this experiencing, this cannot be an explanation of that first experiencing that we then notice. Now I assume you meant two different things by “experience” and “experiencing” in that sentence. We would be experiencing the reactions and effects in our brains. But I now see you are trying to make a model that is plausible, which is different from making an argument that X is the case.

Or one could, from a physicalist point of view, consider this definition excessive. There is nothing receiving information. Rather a very complicated, effective kind of pachinko machine is the brain, and when causes hit this incredibly complicated pachinko machine, the machine reacts in specific determined ways. There is no information, just causes and effects. It looks like information is being received because evolution has led to a complicated object that responds in certain ways. But that’s an after the fact interpretation. (this is not my position, but I think it is entailed by physicalism, which your posts seem to fit within).

I think they will soon have things that act like our brains. Whether they will be experiencing is another story. And for all we know they already are. I think we should be very careful about conflating behavior, even the internal types focused on here, and consciousness. We have no idea what leads to experiencing. And we have a bias, at least in non-pagan, non-indigenous cultures, to assuming it is the exception.

I don’t think you are solving the hard problem, you are just presenting a practical model of intelligence similar to ours and suggesting that this will lead to effective AIs. I agree. However the hard problem is not that…

And like most formulations of the hard problem here it is assumed they know what is not conscious despite there being a complete lack of a scientific test for consciousness. All we have is tests for behavior/reaction.

There is one test that I coincidentally read, which consists of the following and is quite recent.

Points of light are impinged at various intervals upon the eye using a multi-colored scheme, consisting of red and blue. The duration of the test may be factored in as of primary relevance, but that has not been verified at this end.

The crux of the relevance of other considerations consists in the finding that it takes the repetition of the incidental light exposures exactly halfway through before a color change is reported in the study.
The light change, I believe, results in a shift to green.

Does this not point to a quantifiable relevance, qualifying a tool with which to measure internal and external effects of variable visual input, which relates inner and outer sources of experience?

If so, can this be a model of measurement in a more general study?

I am not sure I understand the test. It seems to me that they can measure reactions. They see a reaction. Well, even a stone will react to light. What we cannot test is whether someone experienced something. And then this test seems to be for beings with retinas and we’ve pretty much already decided that creatures with retinas are conscious.

Around the turn of the last century Colin McGinn suggested that consciousness is too complex to be explained by a mind. What have we learned since then that would make such an explanation possible?

In this case, the reaction was reported by the test subject, whereas in the case of the stone the reaction is merely observed by the test giver; that is the difference.

The test subject reported perceived changes he experienced, connecting the test with both the qualitative and quantifiable factors.

I think that does meet the criteria for a relative test to the problem.

One would need to address this question to the neurological community. And I suspect they would conclude that much is known now that was not known then.

But the hardest part about grappling with “Hard Problem of Consciousness” is still going to revolve around narrowing the gap between what any particular one of us thinks we know about it here and now and all that can be known about it in order to settle the question once and for all.

Here of course some invoke God. But for others that just provokes another Hard Question: How and why and when and where did God come into existence?

I don’t get it. How does the test demonstrate the lack of consciousness as opposed to the lack in the ability to report what one has experienced? IOW how would it demonstrate an animal, plant, rock is not conscious, rather than simply that they do not respond about their experience?

I think consciousness is actually rather simple. But it is complicated to explain how it arises, especially in a physicalist paradigm.

I’m not 100% sure I follow your argument here. What are the 2 behaviors that match 1 brain state? I would reject that possibility as “speculation beyond what [we] can observe”: the very idea that the same brain state could be observed to occur twice seems wrong.

I’m more open to the idea of 2 brain states matching the same behavior. Mental processes can probably be instantiated in different media (e.g. in brains and silicon), which would mean that two very different physical brain configurations can produce the same output.

But maybe I’m getting gummed up in language again. When you say ‘behaviors’, do you mean macro behaviors, e.g. picking up an object? Or brain “behaviors”, e.g. pathway xyz being activated?

Is this just solipsism?

One problem I have with this line of reasoning is that it erases what seems like a meaningful distinction that (assuming multiple minds exist) can be communicated from one mind to another, suggesting that it isn’t totally illusory: we talk about the distinction between mind and not-mind, and that difference is understandable and seems useful. At best, aren’t we just pushing back the question? Let’s say we accept that everything is mind. We still have the mind-things of the sub-type mind (i.e. ideas, feelings, sensations, emotions), and the mind-things of the sub-type not-mind (brains, rocks, hammers, whatever), and we still want to explain the mind-mind things in terms of the mind-not-mind things. And my argument still works for that.

How much of this is a linguistic problem? I grant that the only things we have access to are mind things, e.g. we perceive the world as sense impressions of the world. But are you saying that there is no world behind those impressions? There’s a difference between saying that my car’s backup sensor only delivers 1s and 0s and saying that there’s no garage wall behind me. I’d argue that the most coherent description of the world is one that isn’t dependent on mind, even though, being minds, our experience of the world is always going to be made of mind-stuff.

I guess I do think utility is meaningful. I say “this is mind and this isn’t”, and we can take that statement and test the things against it, so that e.g. the things that are mind only exist to the mind experiencing them and the things that aren’t exist to everyone. The fact that we can draw useful inferences from that distinction suggests the distinction is real.

As I mentioned in my previous reply to Karpel Tunnel, I think this is a difference of degree and not of kind. Good and evil are abstractions of abstractions of abstractions… I am indeed taking the “human brain/mind/consciousness [to be] in itself just nature’s most sophisticated machine”.

I am not sure what other type of arguments you intend here. Are there non-mind/body arguments that compare the structures of brains to the subjective experience of being a brain? Are there non-unity arguments that posit that two seemingly distinct things are in fact the same thing?

I don’t think I do, but I am open to arguments otherwise. I mean ‘experience’ in this context in a non-mental sense, e.g. “during an earthquake, tall buildings experience significant physical stresses.” There’s absolutely a danger of equivocating, i.e. assuming that rocks and humans both ‘experience’ things, and concluding that that experience is the same thing. That isn’t my argument, but I do mean to point to light striking a rock and light striking a retina as the same phenomenon, which only differs in the respective reactions to that phenomenon. Call phenomena like being hit by a photon ‘experiencing a photon’. Similarly, we can aggregate these small experiences, and say that the sunflower experiences the warmth of the sun. In the same way, then, neurons experience signals from other neurons. Whole brain regions experience the activity of other regions. The brain as a whole experiences its own operations. The parts of AlphaGo that plan for future moves experience the part of AlphaGo that evaluates positions.

If I’m right, the internal experiencing and the external experiencing are in fact the same thing, and qualia etc. are the inside view on the brain experiencing itself experiencing itself … experiencing a photon of a certain wavelength. Qualia are not the incidence of light on your retina, but the incidence of the whole brain processing that event on itself.

I disagree. If we were perfectly informed, only the truth would be plausible.

That’s a bit like saying that the words I’m writing are really just collections of letters. And so they are, but that doesn’t prevent them from being words.

Information is a fuzzy term, which can be used to refer to single molecules of gasses transferring energy (“information about their speed”), up to very abstract things like what we might find in the Stanford Encyclopedia of Philosophy page on physicalism (“information about the history of physicalism”). I don’t think either usage is at odds with physicalism.

Given that we can’t ever experience someone else’s consciousness directly, we need to treat “acting like someone is conscious” and “being conscious” as roughly the same thing. I am assuming that other humans are conscious, and that rocks aren’t. I would also assume that anything that can robustly pass the Turing Test would also be conscious.

If we don’t make these assumptions, the hard problem is very different: the question would become “why do I have qualia”, since I am the only consciousness I can directly confirm.

Having “a practical model of intelligence similar to ours” must solve the hard problem at the limit where the intelligence is so similar to ours as to be identical, right? If we’re not ready to say that, we need to establish why we’re willing to accept all these similarly intelligent humans as conscious without better evidence.

This seems like defining the solution to the Hard Problem in such a way that it becomes vulnerable to Gödel’s Incompleteness Theorem. I.e., as a mind, it is impossible for us to fully contain a comparable mind within ourselves, so “all that can be known about a mind” can never be known by a single mind. If the Hard Problem is just Incompleteness (and that’s not a totally unreasonable proposal), then we should call it the Provably Impossible Problem.

Well, let me know when someone is perfectly informed; otherwise I will not conflate plausibility with truth, or even with ‘the only model we can’t falsify at this point’.

Then it would seem to become a matter of how far one takes this. Taking it all the way, this very exchange might be but an inherent manifestation of that which can only be. As would be the distinction between abstractions used to describe phenomenal interactions and the interactions themselves. They happen. They could not not have happened.

Next post. Next set of dominoes.

Of course Gödel’s Incompleteness Theorem – simple.wikipedia.org/wiki/G%C3% … s_theorems – is no less entangled in the gap between what it argues about existence and all that can be known about the existence of existence itself.

And that inevitably takes us back around to this:

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.

But how would one go about proving the problem is impossible to solve?

Instead, from my frame of mind, the truly hard problems of consciousness revolve around the question “how ought one to live?” in a world of conflicting goods.

For example: To build or not to build Trump’s wall.

Taking an existential leap to autonomy.

No, because that’s just function. We would know it could function, in general, like us. Does Deep Blue have some limited experiencing? I would guess consensus is no and further we cannot know. Just because we make something that can function like us in many facets of our intelligence does not mean it is experiencing. It might be. It might not be.

Yes, physicalists who consider all contacted mediated and interpreted should have that concern. And some do.

It’s not necessarily Solipsism, but it is Idealism.

It certainly seems like minds have no overlap: one’s consciousness doesn’t overlap with others’, which could easily lead to Solipsism - but minds do appear to communicate with one another - the question is whether the separation borders perfectly or if there’s a gap and there’s some intermediary substance that allows the connection (since overlap is out of the question). The latter seems unfalsifiable, and the former seems a little convenient, but given the problems with all other suggestions about fundamental substance the suspiciously convenient seems to be the least contradictory regardless of any seeming lack of probability. Unless of course your argument is perfectly fine and I’m just missing it - which seems more likely than the convenience of what I’m suggesting.

Is it too hocus pocus to say there is no reality in terms of matter behind the impressions? They’re unfalsifiable after all, as useful as they are to propose. Could it be that this reality only occurs once minds connect (without overlapping and with nothing in between) like bubbles? But of course these bubbles burst eventually… Life as some fancy bubble machine, eh? Haha. Can you really propose a coherent description of the world without mind? Is that what you’re attempting?

Either there is an equivocation on the idea of ‘inside’ above, or there is a lack of justification for the physical definition of ‘inside’ justifying the arising of subjective experience.

What is ‘inside’ to matter?

The equivocation: just because some complicated interaction is happening ‘inside’ is not enough to say there will be the ‘inside’ of subjectivity. Inside the ocean, there are ecosystems of complicated interaction. But the inside of the oceanness does not lead to this being conscious - not in most physicalist models.

The lack of justification: why does matter start being an experiencer because causal interactions are inside? Why would topology lead to there being an experiencer? I don’t see the argument; I see a simple flat statement that consciousness is happening when it is inside.

For some reason interconnection inside something leads to matter not just engaging in certain processes, causal chains, but there arises a noticing. I see nothing explaining why this noticing arises. That to me is the hard problem. Not cognition, but awareness.

Another way to put it is this: sure, things within an organism affect each other and can produce responses. But this can happen without an experiencer, in motors that have feedback for homeostasis, for example. Would you argue that there are the beginnings of consciousness in those motors? I can see saying this is using information from one part of a thing to modify processes in another. But I see nothing explaining an experiencer. Cognition, even, should not be confused with awareness.

Carleas, it’s either way. But I mean picking up object kinds of behaviors. Maybe someone says there are more of those than brain states, or that there are more brain states than those, but either way you just think of the 2 categories in terms of sets and settle them up that way. So that way even if it’s practically impossible to take snapshots of 2 identical brain states, as well as it being practically impossible to observe the level of nuance necessary to account for all possible brain states, you can resolve the language by talking about them in terms of types of states and types of behaviors. Basically, you just generalize a bit to be able to reduce one to the other. Then if you want you can talk about how this behavior goes with that brain state, or vice versa.

This is basically what the pharmaceutical companies want to do. It’s all just chemical states of the brain! That’s why little Johnny won’t stop abusing animals and taking drugs.

I see where it gets tricky when we refer to mind and body and start reducing mind to body in ways that reference the brain as if it weren’t part of the body. But the same move you make to reduce behaviors like picking up objects to brain states, which is to generalize over brain states or behavior enough to be able to identify them with one another, is the move you’d make to reduce “mind” to brain.

This is what religious people don’t want you to do, because then you don’t need God’s will or any of that stuff to explain what’s being observed. If the mind, or the soul, is separate from the body/brain, then there’s magic out there that can be used to appeal to all sorts of nonsense. But if you can match behavior with brain states, and then you can explain that all we can know about the mind/soul is whatever we can observe by looking at the effects that it has on the brain, then you can throw it all out and just look at the brain since that’s where all your observable shit is anyway.

I only intend to say that “X is plausible” is a necessary condition for “X is the case”, and once we show the former, it’s only our lack of information that prevents us from concluding that X is the case. So, in making an argument that X is plausible, I am making a necessary part of the argument that X is the case.

Why only physicalists? What alternatives allow one to assume the existence of other minds?

My general response here would be that: 1) we should apply consistent standards, so if we’re assuming the existence of other minds in other theories, it’s not a special weakness in this theory to assume the existence of other minds and reason from there, and 2) we do in fact assume the existence of other minds, and the burden is on anyone claiming that other minds don’t exist to make that case (I’d go as far as to say that we don’t really understand our own minds except by reference to other minds).

By inside/outside, compare to a computer set up to observe certain aspects of its environment and its own workings. The computer might report that the temperature in the room is X, that its #3 drive is on the fritz, etc. That’s inside-view reporting: it’s taking ‘sense data’ and ‘self-experiencing’ and reporting on its state as it perceives it. A technician working with the computer might note that it has a digital thermometer which translates the expansion of a spring into 0s and 1s, and that it has an algorithm that tries to read and write to locations on its drives and receives an error if the drives are on the fritz. That’s the outside-view. In talking about the system, and appealing to either view, we don’t need to assume consciousness.
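The computer analogy above can be put in code. This is only a toy sketch: the class and method names (`SelfMonitoringComputer`, `report_state`) are invented for illustration, not drawn from any real system. The same two facts are available both ways: the machine’s own self-report (inside view) and the technician reading the raw internals directly (outside view).

```python
class SelfMonitoringComputer:
    """A system set up to observe its environment and its own workings."""

    def __init__(self, room_temp_c, drive_ok):
        # Raw internals: what the technician (outside view) inspects
        # directly, e.g. a spring-expansion reading and a drive-error flag.
        self._thermometer_reading = room_temp_c
        self._drive_error = not drive_ok

    def report_state(self):
        """Inside view: the system reporting on its state as it perceives it."""
        report = [f"room temperature is {self._thermometer_reading} C"]
        if self._drive_error:
            report.append("drive #3 is on the fritz")
        return report


machine = SelfMonitoringComputer(room_temp_c=21, drive_ok=False)

# Inside view: ask the machine for its self-report.
print(machine.report_state())

# Outside view: read the same facts off the internals without asking it.
print(machine._thermometer_reading, machine._drive_error)
```

Nothing in either view requires assuming the machine is conscious; both are just descriptions of the same mechanism from different vantage points.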

For humans, consciousness is the inside-view of the operation of their brain.

This is what I mean when I say “the experience of experiencing”, and my point is that the awareness is just the cognition about the cognition.

The claim that “this can happen without an experiencer” is question-begging. If I’m right, there’s a rudimentary experiencer in the motor. I’m not claiming that the motor is pondering about the nature of its existence, only that the motor’s perceptions about its own operations are effectively a very limited form of qualia.

I would say that we don’t need any more “intermediary substance” besides air, which allows connections in the form of spoken language. I hope that doesn’t sound too flip; if we’re acknowledging that we only ever have imperfect information about what’s really going on ‘out there’ in the non-mind universe, then the mental experiences that over time we’ve come to describe as ‘air’ and ‘noise’ and ‘language’ are just approximations of a real world we don’t and can’t know other than through the mental experiences. But if our mental experiences of ‘air’ are consistent, if when we communicate about them with these ostensible other minds we find that our experiences are consistent with their experiences, then we can talk about ‘air’ as a useful approximation for the real, unknowable world. There’s already plenty of hocus pocus in there, I don’t see the justification for more hocus pocus that adds more unknowability without any commensurate consistency within and between minds.

I can’t, but… again compare to a computer: let’s say we make a system that analyses its own hardware and software and spits out a model of how it all works. We might ask, Can it really propose a coherent description of itself without software? Of course not, the whole description is software mediated, it can’t escape its own software to see a non-software-mediated version of itself. Still, its model can describe the system without any specific line-item for software, e.g. “these electrical pulses go over here, where they turn on or off these gates, which pass or block other electrical charges, etc.” That’s likely to be a hugely inefficient way to describe what the system is doing, but that description can be complete while completely omitting software (despite that the description must be software-mediated).

So too with mind: my perception is mind, my interpretation of brain activity is mind-mediated, yet I’m arguing that a complete description of the brain can be a complete (if hugely inefficient) description of mind while omitting mind as any line-item in that description.

Mr Reasonable, I think I agree, although I’m open to the possibility that fine-enough grained behaviors are actually tightly tied to brain states. So, appealing one last time to the computer metaphor, two computers can both run Chrome, and ‘behave’ the same way, but when we get down into where and how the processing is taking place, the behaviors become ‘clicking with the pointer on pixel (x,y) and running the code stored at location z’, and can’t be very well abstracted into sets, and we really do have 1-to-1 correspondence. In brains, that becomes more salient, since we might both have the idea of a brain and say “brain”, and yet have a very different set of other ideas associated with it, so that I think of computers next and you think of drug companies, or whatever. But I think that’s just a matter of scale from what you’re saying.
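The set-based reduction discussed above can be sketched as a mapping: many fine-grained state snapshots generalize into a few state types, and each state type is identified with a behavior type. Every name and mapping here is invented purely for illustration, under the assumption (from the posts above) that the generalization step is legitimate.

```python
# Many fine-grained "brain state" snapshots collapse into a few state types.
# All identifiers and data here are made up for illustration.
state_type = {
    "state_a1": "type_A", "state_a2": "type_A", "state_a3": "type_A",
    "state_b1": "type_B", "state_b2": "type_B",
}

# The reduction: each state type is identified with a macro behavior type.
behavior_of = {
    "type_A": "picking up an object",
    "type_B": "running Chrome",
}


def behavior_for(snapshot):
    """Reduce a fine-grained snapshot to its behavior via its state type."""
    return behavior_of[state_type[snapshot]]


# Many distinct snapshots map to one behavior type (a many-to-one reduction).
print(behavior_for("state_a1"))
print(behavior_for("state_a2"))
```

At a fine enough grain, per the Chrome example, the snapshot-to-behavior mapping could instead approach 1-to-1, which is just a matter of how coarsely `state_type` is drawn.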

It is the interface between the two states which may determine the progression (repeatably), until the test repeats itself to exactly one half of the number of cycles.

This is similar in kind to qualitative change by increments, or a fed-back system in pattern re-cognition.

If pattern recognition becomes some constant between a differentiation of variable yet not relatable parts, and by re-integration of patterns where the opposite occurs, two different patterns will become recognizable enough to form an impress of likeness, where the difference will not be noticed while remaining calculable.

I think this is the closest it can get to a description

Sorry Carleas for pre-empting your line of thought, but I owed this to Karpel.

Physicalists have tended to view all contact as mediated. This hits that which impinges on that. So minds are always separated and solipsism is always a possibility (or zombies) for a physicalist. This is likely true for other belief systems. But some belief systems do not consider things as separate first. They can have separate and intermeshed at the same time. Idealists have an easier time with this also.

I am not arguing that there are not other minds. I was pointing out that physicalism leads to certain conclusions and doubts, given its nature.

But again, there is no reason for this to include experiencing. As in, there is subject feeling this or that while experiencing the phenomenon of X. Looking out, looking in, there is no reason one should have an experiencer more than the other.

But it’s not just that. Functional processes need not have awareness. You can say that when cognitive processes have themselves as the object consciousness arises, but this is just a flat assertion. I see no justification for that. Organism X looks at the gull: no consciousness. Organism X wonders about its feeling: consciousness.

OK, good. At least we have that on the table. That is a rare physicalist position. But now we need a definition of “perceptions of its own operations”. Would this take place in a reef, say? Why not consider panqualiaism, while we’re at it? Why is the skin the limit of the organism? We have causal chains coursing through matter, in all directions; why should it honor the skin as a special boundary?

I am still not seeing anything that explains why we do not have a zombie universe. You can say ‘when something focuses on itself, it becomes conscious’, but we don’t know if that is the limit of consciousness. We don’t know the mechanism. But we have an axiom that is very hard, if not impossible, to falsify or verify.