I may reply to you in more detail later, Gib, but at the moment, let me say thanks for showing me that what appears self-evident to me is just another “theory” to you.
No sweat, felix. And for what it’s worth, I see my own views as just “theory” as well. Any time we engage in thinking, we’re engaged in theory.
But for now, I have some new thoughts to post…
===============================
This discussion so far has really helped me to flesh out my views on how the traditional concept of mind evolved. Before starting this thread, my view was that the imagination was the first “mind” that man recognized within himself. This is still my view, but since starting this discussion, I’ve expanded on this and now I would like to spell it out.
For me, the question of how the traditional concept of mind evolved is the question of how we came to recognize things that are “unreal”–anything we consider to be “unreal” we relegate to the category of the “mental”–we say it is “just in the mind”–meaning not in reality. This could be things that we know are unreal (like dragons or wizards or flying pink hippopotamuses (hippopotami?)) or things that we thought were real but turned out not to be (like dreams or false beliefs).
So the question of how the traditional concept of “mind” evolved can be reduced to the question of how the recognition of the “unreal” evolved, and here I think we can go further back than the imagination. I think the first “unreal” experience was memory. Memory is unreal in the sense that it is the reflection of events that are no longer happening. Yet it is not experienced as something outside reality either, but rather as something in the past. In other words, the advent of memory created time (at least retroactively) in order to have a realm into which to project remembered events, a realm other than what we had hitherto considered “reality”. This realm gets incorporated into reality, extends it, but in a way that the one reflecting on his/her memories can say it is not here now (nor will it ever be here now, as the past is permanently gone).
This marks the beginning of an “inner simulation”. It’s fair to say, as far as I’m concerned, that the mind is like an inner simulation of the outer world, an “inner laboratory” where we can simulate things and events that mimic the outer world but that we recognize are not really happening. Memory marks the beginning of this. Yet it wouldn’t have been recognized as an “inner simulation” at first, not as a “mind”, for unless the concept of “mind” had evolved, we would have no way of conceptualizing our experiences as in any way “mental” or “inner”–rather, we would conceptualize all our experiences in their projected forms. Memory would have been conceptualized as the past (or events in the past). And this would have been good enough for recognizing memory as, if not unreal, at least not happening here and now.
So with memory, the brain was well on its way to evolving an imagination. It was already creating new realms of existence, if only as extensions of existence, so it wouldn’t have been that great a leap from projecting the past to projecting the imaginary. All it took was the projection of a new kind of experience, an experience that projected as the imagination–accomplishing the unthinkable, making real the unreal–and this, man eventually came to recognize as his mind, his inner laboratory in which he could conduct any experiment he wanted, put on any act or play or performance, without anyone getting hurt, without any serious consequences, without even anyone else knowing about it. ← You can see how this would have given man a great advantage in the game of survival. But it is only useful for survival if man can control it (and control it like a god), so unlike memory, this realm is not only of the unreal, but of the invented. Man has full control over this realm he calls his imagination, his mind, and so it becomes intimately attached to his self on a level unlike that of memory. Memory is still experienced as the past, which is not something man feels he has any control over, and therefore he feels it is something separate from him, independent of him. Not the imagination. The imagination is his, and his alone, and comes to life directly from his own will. He decides what to imagine.
^ As you can see, the imagination is quite a game changer when the game is natural selection. It at once opens man to a realm of “unreal” things, makes him aware of his own mind, and establishes a sense of self which exerts control, his will power, over this realm, thereby making it a part of him. Most animals have memory, but only Homo sapiens has this elaborate setup.
Now, this brings up my account of “self”–what we mean by “self” and what it is according to my theory–and I’ve said in my book that you need at least two things to have a self: 1) a point of view from which the world is experienced, and 2) the ability to recognize yourself as a person when you look in the mirror. You get a self when these two coincide. Now I think there is a third: having some sense of control, or the ability to exert your will, over the world, or at least yourself. And this third item I think is uniquely human. I think most animals experience the world from a certain point of view (a position in 3D space) and I think most social animals recognize a person (or “person”) when they look at their peers and relatives, maybe even when they see their own reflection in (say) water. Perhaps this counts as a self just as much as the human self, or maybe it is the human self which is the real “self” (now that it’s built upon this third item), and all animal selves should rather be called “proto-selves” or “quasi-selves” or “pre-selves” (I think it would be a shame for animals not to have a full self, even if that makes the human self a “fuller” self, so I know what I’d opt for).
Now, I’d also like to go a bit forward in our evolutionary history–past the emergence of the imagination and the traditional concept of the mind, to aspects of the human mind that are only possible once the concept of one’s own mind is established. For example, epistemic awareness is now possible. Epistemic awareness, I believe, began with one of the simplest, most rudimentary forms of thought–an acknowledgement. The ability to acknowledge his own experiences was the first form of epistemic awareness that man attained. To say “I see blue,” or “I feel pain,” or “I’m hungry” is to acknowledge an experience, to mentally note something you are feeling, to know you feel it. And this is different from acknowledging the existence of some object you see in the outer world–like “oh, there’s a tree”–you are acknowledging your own experiences as experiences, as “seeing”, as “feeling”, as “hunger”.
Now, the emergence of acknowledgements of one’s own experiences may not have evolved immediately after the emergence of the imagination. All that man recognized as his mind at first was his imagination–not his vision, not his memory, not his pains or hungers. I’m skeptical that man even categorized all his experiences under one label called “mind” in the beginning. Maybe vision was its own thing, as was feeling pain, or being hungry. Maybe each was regarded as an action the self carried out (I’m seeing) or a state the self is in (I’m in a state of seeing). Furthermore, for a thing to be mental is quite abstract. The mind is not just a concept but an abstract concept, which means that to recognize something as mental requires having the ability for abstraction, and I’m not sure we can say that came along automatically with the imagination. But then again, if man recognized in his imagination his own mind, are we saying he recognized something abstract? And so maybe abstraction did come along with the imagination? Then again (again), is recognizing a realm of the unreal that’s under your control and maybe a part of you abstract? ← That’s all I’m saying the original concept of mind was, after all–just the recognition of a realm of the unreal–not quite the fully fleshed out concept of mind we have today. And any way you cut it, acknowledgement of one’s own experiences obviously does require some concept of mind, so it still stands to reason that a concept of mind is needed for epistemic awareness–whether that comes prepackaged with the imagination or requires some further evolution is the question.
Or it might require a form of psychological or individual evolution–as opposed to biological evolution which takes generations and generations. Psychological or individual evolution is simply the evolution of one’s own thoughts and understandings of the world as one grows up and learns from experiences, building more and more complex, perhaps more abstract, concepts along the way. For example, when one wakes from a dream and realizes the dream world wasn’t real, how does one process that? How does one process the having of experiences which seemed vividly real only to find out they weren’t? Well, if one knows about a realm of unreal things, one could reason that those experiences probably came out of there. They were “imaginary”. Your mind made them up. So then dreams become regarded as another form of the mental or an extension of the imagination. It probably wouldn’t take too long for a developing child to arrive at this insight, probably by the time he or she is 6 or 7. And this may become an ongoing process in the development of the child’s thoughts–the relegating of experiences of unreal things to the “mental” category. Eventually, one comes to regard misperceptions, mistaken beliefs, hallucinations, drug-induced altered states, as all being mental.
But then the question is, does one go further and regard even true beliefs, unmistaken perceptions, and drug-induced insights that one still holds onto as mental? After all, once one has relegated a false belief (for example) to the realm of the imaginary, one has processed the experience and no longer has a need to resolve anything, certainly not to contemplate the reality of one’s remaining true beliefs. They can, and should, be regarded as true–one would certainly think so, and would take that for granted. This is why a philosopher must step in here. It gets philosophical beyond this point, and definitely abstract, and requires the likes of Descartes, Kant, Berkeley, Hegel, and so on to come up with ideas, work out theories, and convince us all that everything we experience is in the mind. Whether a real world exists outside the mind, and whether that real world resembles in any way the appearance of the world in the mind, is an entirely different question.

But I think this is a mistake. I think it’s a mistake to relegate anything that turns out to be false or unreal to the “mental”, or at least to the imagination. The imagination is the realm of experiences that project as unreal, that are experienced as such. The whole reason we are deceived by dreams, hallucinations, false beliefs, etc., is that these don’t carry the same characteristic. They aren’t experienced as unreal. Rather, as young under-developed thinkers, or as prehistoric thinkers, we simply couldn’t find any other way to explain how we can experience things as real yet find out that they’re not. What else could you have been experiencing then? Something imaginary was our best guess. But my theory offers an alternative that we can forgive young thinkers and prehistoric man for not stumbling upon. Everything, according to my theory, is real but also relative. Things are real relative to the reality we experience them under but false relative to a different reality in which they don’t exist (as when we make a reality transition–for example, from the dream to the waking reality). Under this approach, we don’t have to label any of our experiences as “imaginary” or “mental” (according to the traditional concept of “mental” that originates from the “imaginary”) but as the real things they appear to be (this is, of course, mental according to my alternative way of conceptualizing mind and experience–as a combination of quality, being, and meaning–but this conceptualization removes the “unreal” from the mental and allows experiences to be the things they project as). The only reason I still call these “mental” is that they are still characterized at their core by “feeling”–that is, in order for an instance of quality, being, and meaning to arise, it must be felt. The feeling is part of its being, and in fact captures exactly what esse est percipi means.
So while it’s hard to say the exact order in which all our cognitive capacities evolved, it seems clear that at least this order holds: first came memory, then the imagination, then at some point the conceptualization of our other experiences as “mental” (at least when found to be mistaken), and finally, through the classic modern philosophers, the idea of our entire perceptual world being mental without a way to verify a real world beyond it. But I’m not sure, for example, where ordinary thinking fits in. For example, in order for primitive man to use his imagination to simulate a new way of catching fish (say with a net instead of a spear), he must recruit the aid of his memory, thinking back to a time when he tried to spear fish in the river and noted how difficult it was, and he must construct the thought experiment of using a net with logic and knowledge of how the physical world works. This implies some kind of rational thinking was already in place. And if man felt he had full control over his imagination, would that mean he felt he had the same level of control over his rational thinking (considering his rational thinking is being used to control the imagination)? We know we do today, but when did this happen? Could we think rationally about our memories before the imagination evolved? And would that have felt in our control? Would such rational thoughts have felt “mental” to us, or more like the beholding of the logical facts surrounding past events?

And at what point did abstract thought evolve? The ability to logically reason about past events isn’t abstraction, it’s just how our thoughts are processed when thinking rationally. And running mental simulations about alternative ways to catch fish doesn’t require abstraction either, just the ability to visualize hypothetical scenarios in one’s head. So is it fair to assume abstraction evolved later?

And what about acknowledgements of non-experiences? Acknowledging, for example, that a storm is approaching. That’s not something mental, it’s a real physical phenomenon. And it is different from acknowledging one’s visual perception of a storm approaching. These kinds of thoughts should have been possible long before acknowledgements of one’s own mental states or experiences, and maybe even before the imagination. And why shouldn’t these kinds of acknowledgements count as a form of epistemic awareness? Was I wrong above to state that the concept of mind made epistemic awareness possible? There had to be a point when “a storm is approaching” became “I see that a storm is approaching,” bringing with it not just epistemic awareness of the storm but of one’s seeing and indeed one’s “I”. ← That certainly had to wait for the first crude concept of mind to evolve.
I’m almost certain that a sense of control over one’s thoughts had to begin with the imagination as that is an essential requirement of the imagination, an imagination that would be useless or too chaotic without it. In order to be used as an inner laboratory, one must have complete control over it, one must be able to manifest anything within it at will. This is not so true for memory or mere thinking (rational or otherwise), and so the imagination, as far as I’m concerned, is where the third attribute of the self I talked about above comes into play–i.e. having some sense of control, or the ability to exert your will, over at least your thoughts and imagination–for only with such a sense of control could one conceptualize the source of that control as “me” and ownership of such thoughts as belonging to “me”. In fact, a more precise way of describing it might be to say control over one’s thoughts is simply the recognition that “I” am thinking–that is, I put myself in a state of thinking, or I am engaging in the activity of thinking, and I am doing so willfully, with full control–just as I would be when speaking, except this is an inner monologue. If I’m correct in thinking this is unique to man (because it seems only man has such a vivid imagination), then the feeling that one is in control of one’s mental states and experiences (even if that’s limited only to one’s thoughts) must also be unique to man. This means that if other animals also have forms of cognition, they don’t necessarily feel in control of them–not that they necessarily feel a lack of control, but that they don’t experience such cognition as belonging to them, as an activity they engage in, but rather as the things such cognitions project as (truths, logic, facts, past events, etc.)–more as the world around them than the inner world within them.
Anyway, that’s all I have to say on the subject of mind, the self, and the evolution of the two. Perhaps as this discussion continues, I will have more to say later.
Herein lies the problem. To me the Absolute is not merely a theory. It’s my experience. That said, it cannot be communicated directly, so I do appreciate your feedback on how I’m doing. In language, communicating parties always stand in relation to each other. Therefore, it is intrinsically dualistic. In language, the absolute is reduced to a symbol for that which transcends it. As the Buddhists say, language is, at best, like a finger pointing at the moon.
No, I was saying that we perceive through the organs of sense. We experience the organs of sense themselves as well. We’re conscious of them and we are conscious of the world through them. They are not mere hypothetical constructs. We are immediately aware of them and the data they mediate to consciousness.
Mental images can be considered as real as, or more real than, anything perceived by the sense organs. Denied expression while awake, they will dominate your dreams. They tend to persist until they find full expression in waking life.
Thus, they are the mothers of invention. (Good name for a band!) The examples are myriad but I’ll give just one: the image of ourselves flying. The image persisted until it was expressed in airplane and space travel. Think of the mind as a sense organ that perceives a dimension that the other five do not.
Now, one can discriminate which sensory organ is vibrating. Call the discriminative sense the intellect. I agree with you that differentiating the inner and outer world seems early. But perhaps pleasure and pain preceded even that distinction. And even that was preceded by forgetting. Because, if what we are fundamentally is consciousness, that is what we first forget before we begin to imagine ourselves as an individual organism. The descent into ignorance preceded our evolutionary journey.
So, instead of your supposition that the mind simulates the phenomenal world, consider the possibility that the world is a simulation or projection of the mind. Only such forms as the mind can imagine are projected. What is projected is based on one’s desires and aversions, out of which the mind constructs one’s world.
But all that is downstream in the individual consciousness which is a mere part of the whole which is consciousness at large. Here’s how analytic idealist Bernardo Kastrup imagines consciousness evolved:
“… in the very beginning, the membrane of mind was at rest. It didn’t move or vibrate. Its topography and topology were as simple as possible: an entirely flat membrane without any bumps, protrusions, or loops of any sort. As such, not only was there no self-reflectiveness, but also no experience, since experience consists in the vibrations of the membrane. Only an infinite abyss of experiential emptiness existed; the deep, dreamless sleep of nature. Yet, such unending emptiness was not nothing, for there was inherent in it the potential for something. At some point, some part of the membrane moved, like in an involuntary spasm. Instantly, this movement was registered by the one subject of existence as a very faint experience. There is a significant sense in which an experience concretizes–brings into existence–its very subject. The membrane realized, at that moment, that there was something. It is not difficult to imagine that such a realization could lead to a kind of surprise and agitation that immediately translated into more spasmodic movements, more experiences. Shortly the membrane of mind was boiling with vibrations. And the more vibrations there were, the higher the agitation, and the more vibrations, etc., in a chain reaction of rising experience. The metaphor of a great explosion and inflationary expansion–a ‘Big Bang’–doesn’t seem that inappropriate here.
But since there were still no loops in the medium of mind, there was no self-reflective awareness. Existence was still a confusing maelstrom of instinctive experiences in which the subject was completely immersed. The subject was its own uncontrollable flow of passions and images, with no ability to step out and ponder about what was going on; no ability to make sense of its own predicament. Like a startled man in the middle of a giant, precariously balanced domino field, the subject was unaware that it was its own instinctive thrashing about amidst the falling dominoes that caused them to fall in ever-greater numbers. The one subject of existence was still a prisoner of its own instinctive unfolding. Love, hate, bliss, terror, color and darkness were all morphing into each other uncontrollably, like a storm. But all was still one. At some point, the thrashing about of the membrane caused a small part of its surface to fold in on itself, closing a hollow loop. Suddenly, there was a hint of self-reflective awareness. And it was enough: the idea of ‘I am’ arose in mind for the first time. And the questions ‘What am I? What is going on?’ followed suit. A fundamental awakening happened and a creod–a developmental path–was discovered: a path to self-reflective awareness. From here on, a still somewhat chaotic refinement and expansion of that creod was the name of the game. Folds and loops began to emerge elsewhere in the membrane in a precarious attempt to replicate and expand on the original event. And today, we may still be living through this process.”
Your order of development might pick up at that point, beginning with memory and so on. Kastrup assumes that self-reflective consciousness only evolved with humans. That may be true with regard to evolving animal species. But based on my experiences, I picture self-reflective consciousness encompassing the process all along.
Now Donald Hoffman argues that evolution has not provided us with anything like an accurate picture of consciousness at large. He proposes a “Fitness-Beats-Truth (FBT) theorem, which states that evolution by natural selection does not favor true perceptions—it routinely drives them to extinction. Instead, natural selection favors perceptions that hide the truth and guide useful action.” That is what evolution has done. It has endowed us with senses that hide the truth and display the simple icons we need to survive long enough to raise offspring. “Space, as you perceive it when you look around, is just your desktop—a 3D desktop. Apples, snakes, and other physical objects are simply icons in your 3D desktop. These icons are useful, in part, because they hide the complex truth about objective reality. Your senses have evolved to give you what you need. You may want truth, but you don’t need truth. Perceiving truth would drive our species extinct. You need simple icons that show you how to act to stay alive. Perception is not a window on objective reality. It is an interface that hides objective reality behind a veil of helpful icons.”
Hoffman’s interface theory of perception (ITP) claims that evolution shaped our senses to be a user interface, tailored to the needs of our species. Our interface hides objective reality and guides adaptive behavior in our niche. Spacetime is our desktop, and physical objects, such as spoons and stars, are icons of the interface of Homo sapiens. Our perceptions of space, time, and objects were shaped by natural selection not to be veridical—not to reveal or reconstruct objective reality—but to let us live long enough to raise offspring.
Says Hoffman: “A spoon is an icon of an interface, not a truth that persists when no one observes. My spoon is my icon, describing potential payoffs and how to get them. I open my eyes and construct a spoon; that icon now exists, and I can use it to wrangle payoffs. I close my eyes. My spoon, for the moment, ceases to exist because I cease to construct it. Something continues to exist when I look away, but whatever it is, it’s not a spoon, and not any object in spacetime. For spoons, quarks, and stars, ITP agrees with the eighteenth-century philosopher George Berkeley that esse is percipi—to be is to be perceived.”
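To give a feel for the FBT claim, here is a toy cartoon of the kind of game it describes. This is my own illustration, not Hoffman’s actual theorem, proof, or simulations: the payoff function and numbers are invented, as is the assumption that the “truth” perceiver simply prefers larger quantities. Resources have a true quantity, but fitness is non-monotonic in that quantity (too little or too much is bad), so an agent that perceives only the fitness “icons” out-forages one that tracks the true quantities:

```python
import random

def payoff(quantity):
    """Invented non-monotonic fitness: best near quantity 50, bad at extremes."""
    return max(0.0, 1.0 - ((quantity - 50.0) / 30.0) ** 2)

def choose_truth(q1, q2):
    """Sees the true quantities and (by assumption) prefers more."""
    return q1 if q1 > q2 else q2

def choose_interface(q1, q2):
    """Sees only the fitness 'icons' and picks the higher payoff."""
    return q1 if payoff(q1) > payoff(q2) else q2

random.seed(0)
truth_total = interface_total = 0.0
for _ in range(10_000):
    q1, q2 = random.uniform(0, 100), random.uniform(0, 100)
    truth_total += payoff(choose_truth(q1, q2))
    interface_total += payoff(choose_interface(q1, q2))

print(f"truth perceiver's accumulated fitness:     {truth_total:8.1f}")
print(f"interface perceiver's accumulated fitness: {interface_total:8.1f}")
# The interface strategy reliably out-scores the truth strategy here, which
# is the flavor of the FBT claim: tracking fitness beats tracking facts.
```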
Hi felix,
I appreciate your post. But before I respond to it, I have a whole other post continuing my thoughts on how human thought and the concept of mind evolved. So I will post that first. Then respond to you.
====================================
And so my thoughts on thinking continue. Since my last post I’ve been asking how thought started–not how the concept of mind started, but thought itself. Here’s what I came up with…
I’m convinced there had to be thought before the imagination. And not just memory which, as I said in my last post, was a precursor to the imagination, but the full process of thinking such that man (or animal) could figure things out using logic, analysis, and learning. If this process existed before the imagination, and it’s more than just remembering, I can see no other form of thinking except analyzing tangible things or situations while engaged in hands-on problem solving. Take, for example, building a shelter. If one is engaged in building a structure that will provide protection from the elements, withstand wind and rain, make it hard for animals to get in, etc., then there is no way one could do this without processing information on how it can be done while actually doing it. I don’t think mere conditioning could do the trick–conditioning as in what would happen after a long series of trial and error or by watching others–for there are bound to be new and unique challenges with each task. But I also don’t think this would have been full-blown imagining of how to do the task, as we’re all familiar with what it’s like to lose ourselves in a task or be in the flow of an activity that requires our focused attention–we’re focused only on the task at hand, not on our imaginations. It’s the opposite of withdrawing into our imaginations. This type of thinking happens while engaged. And I don’t think it’s practicing a deeply ingrained habit either, as habits can permit withdrawing into one’s imagination, even daydreaming of things totally unrelated (e.g. thinking of what you want to do on the weekend while driving your car). No, these are thoughts that spring into action while one is focused on a tangible task right here and now and needs to think it through in order to analyze the state of the setup or to figure out how to get something done that one hasn’t encountered before.
In fact, I have an argument about the origins of this type of thinking–task-focused thinking–that drives the point home. Here’s where I want to introduce what I think must have been the first thoughts–acknowledgements. I talked about acknowledgements in the last post–they are essentially the thought “X is happening” or “there is an X”–some examples being “a storm is coming,” or “I’m hungry”. I also made a distinction between acknowledgements of things in the world and acknowledgements of one’s own experiences, the latter requiring a concept of mind, but the former requiring only attention to things in the outer world. Putting my thoughts together for this post, I wanted to propose that acknowledgements of the former variety were the first thoughts to ever evolve–the ability to focus on something and say to oneself “there it is” or “it’s an X”. And this is also the first form of epistemic awareness, the first kind of “knowing”. But tying this back to the point here, acknowledgements, it seems obvious to me, require attention to something in the outer world–an approaching storm, hunger sensations, or a task you’re working on. For example, when picking fruit from a tree, one must be able to acknowledge whether a fruit is rotten or not. One must be able to say “Ah, that peach is rotten. I’ll pass.” But that doesn’t require withdrawing into an inner world of thought–rather, it comes about by focusing on the outer world and occurs while focused. So at least with acknowledgements, we have an example of a form of thought that comes with, or is involved in, focused attention on outer tasks. And I propose that from acknowledgements evolved task-focused thinking in general.
Now, in my last post, I spoke about how memory can be thought of as a precursor to the imagination. But is it a precursor to task-focused thinking? To acknowledgements? Might it have evolved after? Well, I think it’s pretty clear that memory is an epically ancient cognitive function and animals since the Cambrian Explosion have relied on memory of some sort in order to survive. But I don’t necessarily think that thinking evolved from memory. It’s possible, even likely as far as I’m concerned, that memory evolved in parallel with cognition, initially independently of it but eventually intertwined with it.
Now, it’s important to distinguish between two kinds of memory–what we might call “reflexive memory” and “cognitive memory”. Reflexive memory would be like basic conditioning. It would consist of a series of experiences leading to changes in behavior that indicate anticipation of the same experience in the looming future. For example, ring a bell and feed a dog a treat over and over again, and the dog will eventually salivate at the sound of the bell alone–as if it anticipates a treat coming. This obviously is a form of memory–some encoding in the brain of the experiences of the bell being paired with the treat, such that the brain can eventually process the bell ringing as an indication that a treat is coming. However, it doesn’t imply any cognitive processing of the memory, or any consciousness of the memory at all. The dog may very well think (or feel) that a bell ringing simply means a treat is coming without reflecting at all on any memories he might have of past bells and past treats. Most likely, some of the earliest kinds of memory were of this sort, giving the organism the ability to anticipate things without necessarily the ability to reflect (cognitively) on any memories of those things.
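To caricature the distinction, reflexive memory can be pictured as nothing more than a stored association strength that each pairing nudges up or down. Here is a minimal sketch, entirely my own invention for illustration (the update rule, learning rate, and names are hypothetical, not a claim about how real brains implement conditioning):

```python
# Toy sketch of "reflexive memory" as bare associative conditioning:
# a Rescorla-Wagner-style nudge toward the observed outcome, with no
# reflection on past events anywhere in the process.

LEARNING_RATE = 0.2  # invented value, just for the demo

def update_association(strength, treat_followed_bell):
    """Move the bell->treat association toward 1.0 when the treat follows
    the bell, and toward 0.0 when it doesn't. The 'memory' here is just
    this stored number; nothing is ever re-experienced or reflected on."""
    target = 1.0 if treat_followed_bell else 0.0
    return strength + LEARNING_RATE * (target - strength)

strength = 0.0
for trial in range(1, 21):  # bell paired with treat on every trial
    strength = update_association(strength, treat_followed_bell=True)
    print(f"trial {trial:2d}: salivation tendency = {strength:.2f}")
```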
This is where “cognitive memory” comes in. This is memory in the ordinary sense in which we use the term “memory” in casual language–that is, memory as reflecting (in your head) on past events–which actually does require withdrawal into an “inner world” (though the organism wouldn’t necessarily recognize it as such–they probably would recognize it as “seeing the past”–or maybe just “the past”). At the very least, it requires pulling your attention away from the here and now, away from whatever task you’re focused on, and into “another world”–in this case, the world of the past–thinking, that is, of past events instead of what you’re working on.
While memory and task-focused thinking (probably) evolved in parallel, it is hard to say when reflexive memory gave way to cognitive memory–before or after the advent of task-focused thinking–but at some point the two must have become enmeshed, if for no other reason than that we function like this today. We sometimes fall back on memories (cognitive memories) while working on a task in order to remember how to solve certain problems or take certain actions; we sometimes even go into our imaginations (now that we have them) and come up with scenarios that were never encountered before. It has become part of a single process. This single process inherits the logic and analytic skills from task-focused thinking and the “inner simulation” from cognitive memory. The only thing to be added for the imagination to emerge was to extract the “unreal” from the past (and from time in general) and to allow for visualizations (inner simulations) of things that not only don’t exist but aren’t even remembered to have existed.
If this is how cognitive memory gets integrated into thinking, then you can see how it wouldn’t have taken much for the imagination to complement memory and play the same role. It would have been especially useful for problem solving while engaged in task-focused activities. One can also see how control and will get incorporated into the imagination–if what it evolves from (in part) is task-focused thinking, then it inherits the sense of control and will one has over the task when engaged in it.
I’ve also been thinking about where abstraction fits in. When did abstract thinking evolve? In my book, I argued that the first abstract concept must have been the concept of reality itself. Why? Because unlike concepts of things in reality, it raises the question of what reality itself is suspended in. At least with things in reality, one can say they are suspended in reality, but what is reality suspended in? Well, it is hard to say what primitive man thought of this question (a world beyond reality?), but forming the concept of reality is like forming the concept of the set of all real things (specifically in the context of set theory), such that one could conceptualize reality as a set with all real things being members of that set. What would that make all unreal things (things we only imagine)? They would be members outside the set, not belonging to it, perhaps belonging instead to the set of all unreal things. In any case, it gives one a crude answer to the question of what reality is suspended in: the realm of the unreal. What lies outside reality? Unreal things!
I reasoned (in my book) that the concept of reality must be abstract since the division between the realm of real things and that of unreal things (whether that’s conceptualized in terms of set theory or just as a recognized difference between one’s senses and one’s imagination) cannot map to anything in reality or anything in the realm of the imaginary. The divide is what stands between these two worlds, and therefore if we are to say it exists at all, it must exist on a higher order than either reality or the imaginary–a higher order in which both reality and the imaginary reside. To understand this divide is to think on a “meta” level just above that of reality. It is to think “beyond” reality, which is another way of describing the abstract (this is not to say all abstractions are “beyond reality”, but that the ability to think “beyond reality” is the groundwork for the abstract).
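Put crudely in set notation (just a gloss to make the picture concrete, and obviously nothing primitive man would have had):

\[
R = \{\, x \mid x \text{ is real} \,\}, \qquad
U = \{\, x \mid x \text{ is unreal} \,\}, \qquad
R \cap U = \varnothing .
\]

The divide between \(R\) and \(U\) is not a member of \(R\) or of \(U\), so if it exists at all, it exists one level up, in whatever realm contains both, and that level is what I mean by the abstract.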
Now, it is said that sets (from set theory) are abstract–and I agree–so it is worth asking whether man had the ability for abstraction even before forming the concept of reality and being able to contrast it with the unreal things of the imagination. A flock of birds, after all, is a set of birds. Surely we’re not saying man was incapable of thinking of a flock of birds before he had an imagination. But I’ll maintain that before the imagination, man’s concepts of sets (or collections, or groups) weren’t as abstract as our concepts of the same today. Rather, they were most likely based on Gestalt psychology. For example, draw a set of dots clustered in one area of a piece of paper and another set of dots clustered in a distinctly separate area of that paper, and the mind will automatically group each cluster of dots, perceiving the clusters almost as objects unto themselves–groups of dots that are, at the same time, collections of simple objects but also whole objects themselves. We are not forming the concept of a set of dots in abstraction but perceiving higher-order objects. This is what the Gestalt psychologists were all about, what they studied–our tendencies to perceive (not conceive) objects based on the arrangement and layout of smaller objects–and quite automatically. For a more obvious example, imagine your refrigerator. When you look at it, you see a whole object–even though you also see the many parts it’s made of. Do we therefore think of the fridge as an abstract set of parts? No, we think of it just as much as a concrete object as the parts–made of the parts but an object in its own right nonetheless. For an even more obvious example, consider the visual processing circuits in the brains of certain species of monkeys. It has been shown that there is hard wiring therein for perceiving objects as belonging together in groups when they move in the same direction and at the same speed–birds flying overhead, for example. There may be other birds in the area flying in different directions or at different speeds, but they get excluded from the group in the monkey’s perception. It is an artifact of how the brain forms or recognizes objects. And without abstraction, I am saying, this is all a “set” would have been to primitive man (or animal)–not an abstraction, but a larger, perhaps more dispersed or rarefied, object.
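To show how mechanical this kind of grouping is, here is a toy sketch of grouping by proximity alone. It is my own illustration (the threshold and coordinates are invented, and real visual systems are vastly more complicated); the point is that the “objects” fall out of raw distances, with no abstract concept of a set anywhere in the loop:

```python
# Toy sketch of Gestalt-style grouping by proximity (hypothetical demo).
# Dots within `threshold` of some member of a group join that group, so
# each cluster of dots comes out as a single perceived "object".

def group_by_proximity(points, threshold=1.5):
    """Merge points into connected groups under the distance threshold."""
    groups = []
    for p in points:
        # Find every existing group that p "touches".
        touching = [g for g in groups
                    if any((p[0] - q[0])**2 + (p[1] - q[1])**2 <= threshold**2
                           for q in g)]
        merged = [p]
        for g in touching:  # fuse all touched groups with p
            merged.extend(g)
            groups.remove(g)
        groups.append(merged)
    return groups

dots = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0),   # cluster in one corner
        (8.0, 8.0), (8.5, 9.0), (9.0, 8.2)]   # cluster in another
for i, group in enumerate(group_by_proximity(dots), start=1):
    print(f"perceived object {i}: {group}")
```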
In fact, I don’t see how primitive man, before he had an imagination, could have formulated the concept of reality. Without an imagination, the only objects man could have recognized were those he perceived with his senses. And while he could form concepts of larger objects made from smaller objects (like your refrigerator and its parts), he could only do so because the larger objects were presented to him by his senses. What object is there that man could perceive and call “reality”? The ground? The Earth? Perhaps, but then what did man make of the stars in the sky? The Moon? The Sun? The clouds? No, without an imagination, Gestalt psychology was all that man could rely on to perceive groups of things as larger objects. And the key is, they had to be grouped (stand in proximity to each other, move in the same direction at the same speed, have the same color, etc.). The world, to primitive man, I’m convinced, was never perceived as a whole singular thing, but as a multitude of things (the Earth, the stars, the Sun, the Moon, and all other objects in the fray). In fact, if reality is an object, it spans beyond the boundaries of our visual field, that hazy area where our visual acuity tapers off into non-existence–and this makes it difficult for our brains to process it as an object. There is no place we can turn to finally see the contours of reality–it always extends into this hazy area of our visual field and beyond it–which falls short of the brain’s requirements for processing it as an object. This is not to say that man had no word for reality, but that reality could not be conceptualized as one singular thing–more than the sum of its parts–like a set or an abstract object. A multitude of things is as deep as man could go with the concept of reality until the advent of the imagination, which allowed for abstraction.
This ties directly into my treatment of concepts (which I write about in my book, and also went into in detail in a post here). Concepts project as essences, you might recall, and essences are injected into things–making my coffee mug my “coffee mug” rather than an arbitrary arrangement of sense data. Essences are what make an arbitrary arrangement of sense data more than the sum of its parts–they make it its own thing–a mug!–a single object, not just an arbitrary arrangement of sense data. Here, I am saying the same thing about how primitive man (or animal) conceptualized “reality” in contrast to how we do. We definitely have a concept of reality, which is to say we project an essence into all the things we see and experience (even if that means recruiting the imagination to visualize all the things in reality), making it all more than the sum of its parts, making it into “Reality” proper. Primitive man (or animal), before he had an imagination, didn’t have this ability. When he looked at everything real around him, he just saw everything real around him. He was unable to see an object encapsulating it all. But the imagination forces us to draw a line between what’s real and what isn’t. This line is used to demarcate the boundaries of the set we call “reality”, drawing a division between the real things in it and the imaginary things outside it. The demarcation dividing the former from the latter is what finally takes the concept of reality and makes it abstract. It gives it definition, a sort of contour, like a Venn diagram, and this renders it like its own object. But since it’s “suspended” in a sea of unreal things (or a higher-order realm), it cannot be an actual physical object, and this makes it the first abstract object.
So what does this say about where abstraction fits into the evolution of human thought? It seems obvious that it stems from the imagination and the ability to distinguish between the real and the unreal. Because the imagination makes possible the conceptualization of things outside even reality itself, man gains the ability to think beyond the concrete and physical and ventures into the abstract–not so much into the imagination but into comparing and contrasting the imagination with reality and thereby recognizing a higher realm in which they must both subsist, a realm that must necessarily be abstract. And it seems obvious that this would have happened more or less immediately–that is, it wouldn’t have required an extra several thousand or million years to evolve after the imagination, but is a direct consequence of the imagination and our ability to recognize it as the realm of the unreal. That is to say, to recognize things as unreal is already to have one foot outside reality, which implies a higher order realm that can’t be described in any other way than “abstract”. And since we are arguing that to recognize the imagination is to recognize the first type of “mind” man was familiar with (an abstract concept), we can say that man recognized his own mind as soon as he was capable of imagining things period.
Now, the last point I want to end with is some elaboration on how the concept of mind evolved from the imagination to the concepts we have today. Consider Berkeley’s idealism, for example. Berkeley is saying far more than just that the imagination is mind; he is saying that everything is mind. How did we go from the imagination to everything? Well, as I speculated in my last post, there must be a gradual lumping of all our experiences into the category of “mind” as we develop. A young girl, for example, not long after learning to walk, probably starts to question her perceptions. She notices that when she closes her eyes, the world disappears. But she knows it hasn’t really disappeared–it’s still there–she just can’t see it. And yet, she knows that something disappeared, something that was there when she had her eyes open but vanished the minute she closed them. Thus is born a rudimentary concept of “seeing” in the young girl’s mind. ← It’s seeing that disappears. But what informs the young girl that this “seeing” is “mental”? Well, there are a few more things a girl of her age has yet to figure out, and soon will. That the contents of one’s vision may differ from person to person, for example. Her mom might see a man on stage, but the little girl can’t because she’s too short to see over the towering adults in the audience surrounding her. So she learns that vision, unlike the objects of the outer world, may differ from person to person, and is therefore personal, maybe even subjective (however long it takes to develop that concept). So vision, it turns out, belongs intimately to her. She learns all these things about all her other senses, and soon about her thoughts and feelings too, and soon learns to categorize them–as “sensation” in general, or “perception” in general.
What about mind in general? Well, most of these category labels will, in the vast majority of cases, be passed down to the child from other sources–teachers, parents, media–and so they might learn that “mind” is the top level category for all our experiences–but I’m interested in how these concepts evolved, how they first came about, not how they’re passed down–so the question for me is how, through our evolution and history, the concept of mind became the general category label for all our experiences. I hesitate to say “mind in general” is just a label for the highest level category of all our subjective experiences as there is this one snag: we think of the mental as just mental–as in, perceived but not necessarily real. As I said in my previous post, we inherit this from categorizing mistaken perceptions (or experiences) as just the imagination, the “inner simulation”, thus inheriting its “unreal-ness”. Why, then, would a top-level category for all our experiences inherit the character of “unrealness” from just one experience? Why wouldn’t the actual realness of our other experiences dominate? After all, it’s one thing to realize that one’s experiences are personal and subjective, but it’s another to suspect they are wrong. The only thing I can think of is that the maneuver of categorizing mistaken perceptions and experiences as just the imagination must be the grounds on which we, at some point in our evolution and history, came to associate that which is personal and subjective (our experiences) with the unreal. It doesn’t take much, in other words, to put two and two together and conclude that perceptions and experiences must therefore also be instances of mind (as we understand “mind” at that stage)–being personal and subjective makes them “inner”–the same place as the imagination. They’re probably one and the same–or so the child (young adult?) thinks.
I say “young adult” here because to come to this conclusion–that perception too must be a form of mind–is quite philosophical and abstract. Before this, a child might recognize his or her experiences and perceptions as subjective and belonging to him/herself, but not necessarily as wrong–that is, a real-world/perception divide is recognized, but the perception is still considered a faithful image of the real world–which is not to say the child is unfamiliar with mistaken perceptions or beliefs, just that he/she doesn’t question his/her perceptions or beliefs the way a young adult or philosopher would (not before they’re shown to be mistaken). And to stretch this further into a kind of Berkeleyan idealism or Kantian phenomenology definitely requires a fully grown, philosophically inclined mind. Once these philosophically minded people present these ideas and make them public (at least among scholars), it doesn’t take much for such ideas to be whittled down and simplified for the average person to comprehend and digest. This is where the child, at some point in their development, ends up learning these concepts rather than figuring them out for him/herself. ← And that’s how we get here.
In contrast, my theory of consciousness suggests that we could have taken a different route. At the point of questioning the reality of our experiences–supposing, as we would have, that we imagined them or that they came from the same place as our imagination (the mind)–we could have insisted that they are all real, that there is no distinction between perception and reality, and adopted a relativistic view of reality, supposing that our mistaken perceptions, experiences, and beliefs are real, always were real, but apply to reality as we understood it at the time. A dream, we might say, was not “just in our heads” but was a reality unto itself, a reality we were in the middle of, the full existence of which we vividly experienced, but that upon waking we transitioned from–from one reality to another. (I recall learning about indigenous peoples whose creed actually held this–that the dream isn’t an artifact of the mind or the imagination but truly another world that our souls travel to when we sleep.) And there may still be some logical/philosophical wrinkles to work out–like how do reality transitions work? Should we take the idea of reality transitions literally or metaphorically?–and I do offer my answers to these questions in my book, showing that such a philosophy can be worked out. But such a perspective doesn’t come naturally to the human brain. We aren’t relativists by nature, and so we take the other path, creating a divide between reality and perception. But had we taken the path that my theory lays out, these philosophical wrinkles could be worked out, and are worked out, in a book like mine. The likes of Berkeley, Kant, Descartes, etc. might have had an easier go at their careers if we had taken this path, and I propose that had they (or philosophers since) put an earnest effort into working it out (by “it” I mean idealism and subjectivism), it could have been done. ← That’s what I set out to prove in my book.
felix dakat:
Herein lies the problem. To me the Absolute is not merely a theory. It’s my experience. That said, it cannot be communicated directly, so I do appreciate your feedback on how I’m doing. In language, communicating parties always stand in relation to each other. Therefore, it is intrinsically dualistic. In language, the absolute is reduced to a symbol for that which transcends it. As the Buddhists say, language is, at best, like a finger pointing at the moon.
Fair enough, but I wouldn’t insist that someone’s views (like yours) are merely theory, just that they are at least theory. You may have had experiences that complement your theories, thereby bolstering them, but all I mean to point out is that my relativism rests on being able to say that any statement you make is true according to you (or according to your theory). This says nothing about whether such statements are true or false, just who they are true or false relative to. And this, I don’t think anyone can deny. Even the most staunch objectivists and absolutists will have to admit that what they believe is true according to them. What’s the alternative? That it’s true in reality, objectively, absolutely, but not according to them?
I have come to think of the absolute as not opposite to, not mutually exclusive with, relativism. I have come to think of it as a special case of relativism. As counter-intuitive as this may sound, I have come to think of the absolute as meaning “relative to that which I believe is absolute”. So a materialist may believe that matter is the absolute reality, but I don’t see how he can deny that this is true according to his concept of absolute material reality–that is, relative to his concept of absolute material reality–so long as I don’t insist that his concept of absolute reality is false. Or, to take another example, consider how Darth Vader is Luke Skywalker’s father. This is true–one can even say absolutely true–in the world of Star Wars. The absoluteness of something, in other words, is a characteristic of something within one’s idea of reality, its nature and how it works–but that reality, the idea of it, the “design” of it, is what the thing is absolute relative to. And this holds whether such a reality design is true or false.
Now when you say “herein lies the problem,” I actually see the root of the problem being elsewhere. I see it in the way we seem to conceptualize differently the distinction between substance and the forms that substance takes. I pointed this out before, and I’ll repeat it here: substance is the ultimate “stuff” from which everything else is made–and we therefore call everything else its “forms”–but unless you’re an eliminativist (tell me if you are), this doesn’t make the forms less real than the substance. A banana is no less real than matter just because it’s a form that matter takes. And this is how I think of consciousness and the things consciousness experiences–they aren’t distinct things–a thing that one experiences is consciousness taking the form of that thing–and what I argue in my theory is that the reason the thing seems real is because the reality of consciousness (which we both agree is absolute) subsists even in its forms (just as the matter that the banana is made of comes from the matter qua substance that everything is made of). In fact, I’m not sure we can have pure substance apart from its forms. Doesn’t matter, in order to exist, have to take a particular form? What does it mean for matter to exist without a form? Likewise, I’m not sure consciousness can exist without a form, without something being experienced. Therefore, if consciousness is the absolute real “stuff”, it can only be so in one form or another, and wouldn’t that make the forms just as real? ← This is what my argument here pivots around. I’m not arguing against the absolute status of consciousness (within the context of your and my theory), but that absolute as it might be, its forms (the things we experience) are no less real. I’m arguing for the reality of the things experienced–if not full reality then at least as real as consciousness itself.
This is true, but it’s hard to reconcile with a “refurbished” concept of mind and perception like that of my theory. The idea of perceptions being processed through the sense organs lends itself very well to the conventional notion of “mind”–that passed down to us from Descartes and philosophers since, even that which seems to be the most natural, intuitive, and “default” notion of mind to common folk (as John Searle says, “The common man on the street is a dualist”). It is the concept of mind as something produced by the brain (or by the sense organs) and therefore a phenomenon in material reality, not the framework within which we experience reality (consciousness within reality as opposed to reality within consciousness, as you pointed out more than once in this thread). This is why I’m surprised you would argue this. You agreed, elsewhere in this thread, that we have mistaken concepts of mental things like thoughts, emotions, and perceptions–that they are not what we think they are–yet you seem to treat them in the typical conventional way when it comes to their relation to the sense organs. This is not to say your concept of these mental things is just the typical conventional “default” concept that most people believe in with their folk psychology–you may yet have your own twist on it–but apparently not in this way. My particular twist on it makes mind and perception the basis for everything, and the sense organs we perceive just are these perceptions, except metamorphosed and transformed into a different experience–a different form for the substance of consciousness to take–the forms of organic and neural matter processing electrical and chemical signals, but still representing the perceptions the individual experiences (in the same way the shadows on the wall represent real figures in Plato’s cave). There’s still a correlation, in other words, between perception and neural/organic processing of signals, but which is the foundation of which is reversed in my theory.
This is a good point, but then I would just say that the way we experience the imagination (while dreaming) must have to do with more than just how the imagination feels, but (perhaps) with how we interpret the images of our imagination. That being said, I don’t think dreams are merely our imagination run amok without our realizing it. I’ve had lucid dreams in which I took the opportunity to examine my experiences, take in my surroundings, and what I remember is that it doesn’t feel like imaginary images–it actually feels quite real–not the same as when we’re awake, but real in its own way (like being high on a drug or something). We might see evidence of this by conducting brain scans comparing the neural activity of a dreaming brain with that of a brain lost in its imagination. I’ll bet they’re different. Dreams are their own experience.
Yes, the things we imagine often come to fruition in material reality, but these are still two distinct experiences. When we invent flight–whether the modern airplane or space travel–we can clearly see that these inventions are real. But when we simply imagine an invention that might one day be, we know that it isn’t real yet. In other words, if I imagine a star ship capable of warp drive (like in Star Trek) I never confuse my image of warp drive technology with actual, already-invented, warp drive technology. I’m always able to distinguish between the things I imagine and the things that are real (even if they originated in the imagination)–I never imagine a tiger, for example, and leap back and run away because I think the tiger is real–and this is due solely to the way each experience feels. So I’m talking about the way our experiences feel in the moment, not their potential to become real in the future.
Yet, you bring up an interesting point. I said in my previous posts that images from our imaginations are experienced as “unreal” and that at best they may be images of memories, of events we know happened in the past but aren’t real now. As such, we know they aren’t real (now), and can never be real again as the past is, well, in the past. But the future is a different story. We never know what might be real in the future. We never know whether a unicorn, for example, is doomed to be forever unreal or whether some mad genetic scientist might implement some crazy idea to create unicorns by manipulating the DNA of horses. So while we will always recognize the imagination as unreal, and while we will always recognize that the past can never be relived, this is not so for the future. As far as the future is concerned, we can never say for sure that the things we conjure up in our imagination won’t ever be real at some point in time (this is, after all, one of the most ancient uses of the imagination–trying to picture what could be–especially useful for task-focused problem solving, I’m sure you would agree). But I do have to say, we still don’t confuse imagined objects with already-invented objects–we still clearly have the ability to know when we’re experiencing one and not the other.
Do you mean that we, at some point in our evolutionary journey, forgot the basics of pain and pleasure–what causes them–and therefore proceeded full throttle into ways of living that exacerbated both?
I’m not sure that projections are fundamentally based on desires and aversions, but I think it’s both: a simulation of the phenomenal world inside the mind, and the world being a simulation of the mind. We clearly have imaginations–and clearly this is an “inner simulation” of things that could exist in the outer world–but the mind itself (our perceptions and experiences of the world) is also a simulation–or, as I like to put it, a representation–of a world of mind beyond our individual minds. All our sense data are representations of the world beyond us–as X’s and O’s drawn on a chalkboard might be representations of football players on a field–and therefore function as “simulations” projected by the mind but representative of what the universe is experiencing and communicating to us (albeit in this altered form).
felix dakat:
But all that is downstream in the individual consciousness which is a mere part of the whole which is consciousness at large. Here’s how analytic idealist Bernardo Kastrup imagines consciousness evolved:
“… in the very beginning, the membrane of mind was at rest. It didn’t move or vibrate. Its topography and topology were as simple as possible: an entirely flat membrane without any bumps, protrusions, or loops of any sort. As such, not only was there no self-reflectiveness, but also no experience, since experience consists in the vibrations of the membrane. Only an infinite abyss of experiential emptiness existed; the deep, dreamless sleep of nature. Yet, such unending emptiness was not nothing, for there was inherent in it the potential for something. At some point, some part of the membrane moved, like in an involuntary spasm. Instantly, this movement was registered by the one subject of existence as a very faint experience. There is a significant sense in which an experience concretizes–brings into existence–its very subject. The membrane realized, at that moment, that there was something. It is not difficult to imagine that such a realization could lead to a kind of surprise and agitation that immediately translated into more spasmodic movements, more experiences. Shortly the membrane of mind was boiling with vibrations. And the more vibrations there were, the higher the agitation, and the more vibrations, etc., in a chain reaction of rising experience. The metaphor of a great explosion and inflationary expansion–a ‘Big Bang’–doesn’t seem that inappropriate here.

But since there were still no loops in the medium of mind, there was no self-reflective awareness. Existence was still a confusing maelstrom of instinctive experiences in which the subject was completely immersed. The subject was its own uncontrollable flow of passions and images, with no ability to step out and ponder about what was going on; no ability to make sense of its own predicament. Like a startled man in the middle of a giant, precariously balanced domino field, the subject was unaware that it was its own instinctive thrashing about amidst the falling dominoes that caused them to fall in ever-greater numbers. The one subject of existence was still a prisoner of its own instinctive unfolding. Love, hate, bliss, terror, color and darkness were all morphing into each other uncontrollably, like a storm. But all was still one. At some point, the thrashing about of the membrane caused a small part of its surface to fold in on itself, closing a hollow loop. Suddenly, there was a hint of self-reflective awareness. And it was enough: the idea of ‘I am’ arose in mind for the first time. And the questions ‘What am I? What is going on?’ followed suit. A fundamental awakening happened and a creod–a developmental path–was discovered: a path to self-reflective awareness. From here on, a still somewhat chaotic refinement and expansion of that creod was the name of the game. Folds and loops began to emerge elsewhere in the membrane in a precarious attempt to replicate and expand on the original event. And today, we may still be living through this process.”
While I find this fascinating, I can’t say I understand it all that well. Kastrup definitely needs some context here–context that, I’m sure, can only be provided by his full body of writing. But thanks for the insights.
Do you mean to say that consciousness–the ultimate foundation of existence–is not just consciousness, but consciousness of consciousness? Consciousness of itself? To be honest with you, I could easily agree with this, as it is the only way that consciousness could possibly serve as the ultimate substratum of existence (consciousness must always be consciousness of something, after all, must always take some form, so why not itself? Why not what it is in essence?). But when it comes to individual human beings (or animals), do you think this is an instance of our “forgetting”? That we can become unconscious of our consciousness?
I have no issue with this. It’s like the metaphor of X’s and O’s representing football players. True, the football players aren’t really X’s and O’s, but the X’s and O’s on the chalkboard do represent them, and furthermore, the X’s and O’s are made of real chalk on a real chalkboard. Reach out and touch it and you will find tangible sensation (and chalk) on your fingers. People typically don’t see beyond the reality of their sensations, but their sensations do serve a representative function on top of being real, whether they realize it or not.
Sure, only I say the interface is just as real as that which it represents. They are different kinds of things, sure, but not made from wholly different substances–“real” and “unreal”.
Well, here I have to plead ignorance as to what Berkeley meant by “perceive”–I always thought, in the spirit of true idealism, that “perceive” could be generalized to any experience, any “feel”, that the mind was capable of. This includes concepts and beliefs, like that of the world persisting even when we close our eyes. In other words, I disagree with Hoffman that when we close our eyes, the spoon disappears. For the average man who doesn’t believe (or understand) such intricate philosophy, the world continues even when he closes his eyes. Why? Because he still believes in the world. Belief can sustain the reality of things just as much as sensory perception. Hoffman hints that even he can’t deny this when he admits that “something continues to exist when I look away” but he seems to think it’s not the spoon as such, but as far as I’m concerned, if the average man believes the spoon exists even when not seeing it, then it does exist (for him). (This is supported by my theory of concepts which says things are projected from concepts as the essence of things, which don’t necessarily need a sensory image to be projected into, but can be said to exist purely from belief.)
Anyway, I appreciate your thoughts, felix. Maybe this discussion will end in us agreeing to disagree, or maybe by some miracle we’ll end up realizing all our disagreements are based on a misunderstanding of each other’s views. I hope the way I’ve articulated my responses here sheds light on how I view things and makes my position clearer to you now.
Like a headache, your spoon only ever exists for you! Nevertheless, I agree with you that, like all your phenomena, your spoon is real. In neoplatonic terms, it “participates in being.” Still, the form it takes is determined by your sensory equipment, learning experiences, etc.–which, as I understand it, you would call “epistemic consciousness.” In other words, the spoon is ultimately consciousness, as are all the objects of consciousness.
It’s like we’re programmed (with default & real-time play) crayons who try to get at the real spoon as if we were just multi-playing cameras, but can’t help but “mess with it” with all our feelings (default… & real-time conditioned… & messed with).
Well, I think we finally agree on something.
By “epistemic awareness” I mean awareness by knowing as opposed to awareness by experiencing. If you experience something, then you are at least experientially aware of that something. But if that experience makes its way to the cognitive centers of the brain (the “knowing” centers) such that you can say “Ah, I know I’m experiencing X,” then you are also epistemically aware.
…voluntarily?
Hello again folks,
It’s time for the next installment of my blog on my theory of consciousness and why it’s so awesome. This time, I want to get quantum (rhyme unintended). I want to talk about quantum physics and why it poses a threat to my theory and how I resolve it.
Why is quantum physics a threat to my theory? Because my theory depends on necessity. This was the driving argument in part 2 of my OP–that mind serves as a better foundation for existence than matter because it manifests necessity, not contingency. Contingency, the reader might recall, is found in matter. The only place where we find necessity is in the mind. And given my supposition that mind always projects and becomes the real thing it feels like it is, necessity, it turns out, is real. But quantum physics is infamous for introducing indeterminism into the world–an indeterminism grounded on probability, not necessity. So it clashes with my theory.
Think of it this way: the necessity with which experiences flow–one experience entailing the next by necessity–would have us conclude that whenever one experience is had, the subsequent experience must always be had… by necessity. So every time I think “2 + 2”, the next thought I must have is “4”. Of course, human thinking is a lot more complicated than this–obviously, I could think “2 + 2 = 5” (not that I would believe it), or any manner of other thoughts could follow “2 + 2”–but let’s assume for the sake of argument that the thought “4” always followed the thought “2 + 2”–by necessity–as though we had no control over our thoughts, no free will. This necessity with which we think is represented by physical processes in the brain–neurons firing and stimulating other neurons into firing–and the necessity we feel in this thought process is represented by the laws of nature governing the flow of these brain processes. In other words, the laws of nature are the physical representation of the necessity we feel in experience.
(One could imagine, if one were so inclined, a brain built with computer circuitry instead of mushy biological material… in this scenario, it’s much easier to imagine a brain whose thought processes are governed by necessity. This is not to say the mushy biological material isn’t governed by the laws of nature, just that they are way more complicated than computer circuitry, and so the necessity gets lost in the weeds so to speak.)
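If it helps, here is a minimal sketch of that circuitry picture in Python–a toy illustration of my own, with a made-up transition table standing in for the circuits, showing what “necessity” looks like as code: the same thought always yields the same next thought.

# A toy "thought machine" (hypothetical, for illustration only): each state
# determines its successor with necessity -- same input, same output, always.
transitions = {
    "2 + 2": "4",
    "4": "done",
}

def next_thought(thought: str) -> str:
    """Return the successor thought; deterministic, so no alternatives exist."""
    return transitions.get(thought, "done")

thought = "2 + 2"
while thought != "done":
    print(thought)
    thought = next_thought(thought)

Run it as many times as you like and the output never varies–that invariance is the kind of necessity I have in mind.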
But quantum physics defies natural law. It is known in scientific circles as the field that destroyed our notion of a perfect Newtonian world–one where things always obeyed the laws of physics perfectly–and replaced it with a world of probabilities. Nothing is absolutely certain. There is no necessity. There may be trends–some with extremely high likelihoods of holding every time, like a dropped apple falling to the ground–but they are nonetheless just trends. There is a minute, almost infinitesimal, chance that the next time you let go of an apple, it will fall up.
How do I square this with my theory?
Simple: things can be partially real. That’s it! Reality isn’t binary. A thing isn’t categorically real or categorically unreal–it varies along a gradient. Things can be half real.
Now, this obviously has implications for the two other elements in my definition of “experience”–quality and meaning–for if being can vary along a gradient, so too can quality and meaning.
In the case of quality, it means the quality of a thing is never perfectly precise–that is, perfectly precisely one specific and clearly defined quality. Take the color orange for example. What it means for the quality of being orange to be not so perfectly precise is just that it isn’t exactly orange. But it’s not exactly a different color either, like red or yellow. It’s not that it becomes a bit more red or a bit more yellow, it’s that it becomes a bit more not-a-color, not-anything, undefined.
This fits perfectly with what happens to being. Being, as I said, becomes less real (there’s less of it), and this seems to correspond to a loss of definition in its quality. It isn’t anything precisely (though not nothing either–it’s still “half” something). Incidentally, this sort of does mean the orange in question will become a little more red and a little more yellow. You see, for technical reasons having to do with quantum physics (which I explain in detail in my book), one can talk about the quality of an experience undergoing quantum superposition (or rather, its neurons undergoing quantum superposition) sort of “spreading out” its quality, in a way taking on those of its neighbors (red and yellow in the case of orange), but with each instance of color harboring only a portion of the total being (a being that would otherwise manifest as perfectly precisely orange–and nothing else–if only it were able to contain all of itself in that precise color). So the orange is a little bit red, but a red that isn’t quite red-as-such.
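To make the “spreading out” idea a bit more concrete, here is a minimal sketch in Python–a toy illustration only (the color names and numbers are made up), assuming the “portions of being” behave like the squared amplitudes of a quantum superposition:

import math

# Hypothetical amplitudes over neighboring color qualities (toy numbers).
amplitudes = {"red": 0.3, "orange": 0.9, "yellow": 0.3}

# Normalize so the squared amplitudes -- the "portions of being" -- sum to 1.
norm = math.sqrt(sum(a * a for a in amplitudes.values()))
portions = {color: (a / norm) ** 2 for color, a in amplitudes.items()}

for color, portion in portions.items():
    print(f"{color}: {portion:.2f} of the total being")

Here orange carries about 0.82 of the total being while red and yellow each hold about 0.09–no single color is fully real, which is just what I mean by the quality losing definition.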
And when it comes to meaning (the third element of experience), the meaning of an experience simply becomes less clear. Simple as that. If the quality and being of an experience becomes less “anything” then what they are telling you, what the experience means, is unclear (fuzzy). The being and quality of the experience vary in proportion to the clarity of its meaning.
So for each element of experience, we have different (but complementary) descriptions of what happens to the experience of systems undergoing quantum superposition. They become less real (being), less defined (quality), and less clear (meaning). Do I have a singular precise way to express this for the substance as a whole? No. If I did, I would probably also have a singular precise way to define experience. But instead, I have a 3-part definition: experience is quality, being, and meaning. And what experience is “as a whole” is, in some way, a mix of these. I cannot even fathom what such a mix would look like as a single thing, or how to conceptualize it abstractly, but I maintain that as long as we think of it in these 3 ways–3 different ways of describing what experience is–then that’s good enough for my theory.
So it seems this substance, this experience, goes through these “in and out” phases–one minute becoming less real, less defined, and less clear, the next becoming more real, more defined, and more clear (but possibly not as the same thing it was before). This is the journey of the thing we apparently are a part of–streaming through time, going in and out, flickering, but never quite fully in and never quite fully out. If this is God, then God is uncertain of himself.
As scary as that might sound, I feel it resolves the conflict between my theory and quantum indeterminacy–essentially, my theory yields. It gives up necessity in exchange for a sort of “partial” necessity. And as it turns out, my theory can survive on “partial” necessity, so long as it still provides experience with being, quality, and meaning to the degree that we feel these in experience (felix should be happy).

Now you might say, “But gib, wasn’t necessity a cornerstone of your theory? Wasn’t that your whole reason for suggesting that mind is the basis for matter?” And you’d be right. But that was in the scenario where we didn’t take quantum effects into consideration–our picture of the universe might as well have been Newtonian. Now that we are bringing quantum effects into the picture (indeterminism and superposition, to be exact), matter isn’t much help here either. Mind only loses necessity to such a degree when matter also loses something to the same degree: a particle’s position, its momentum, its energy, and all the other measures that can undergo quantum superposition. In other words, matter fails in these instances to qualify as a perfect substratum of all existence just as much as mind does. This is a problem for science, not my theory. Or to put it another way: mind doesn’t have to supply the necessity on which all else (matter, energy, physicality) rests, since it isn’t the foundation for matter, energy, or physicality as such, but rather for a watered-down version of them–a “less real” version, a “less defined” version. To the degree that mind fails to be the ground on which matter, energy, and physicality stand, matter, energy, and physicality also fail to stand. And so, in the end, it is what you’d expect–just another way of stating my theory: if necessity is the ground on which existence stands, then to the degree you take it away, existence fails to stand to the same degree, and it happens that this latter scenario describes the universe science is painting for us.
Now, the superposition of particles is not the only quandary one might think needs reconciling with my theory; there is also the indeterminacy of the wavefunction collapse–the fact that a particle, once it collapses from its state of superposition, seems to have its states (position, momentum, energy, etc.) randomly determined. That is, for example, if a particle doesn’t have a precise position, but instead a “fuzzy” region in space where it is most likely to show up when measured (a region that takes the form of a spreading wave), that position, when it does show up due to a measurement, “collapses” to a more precise position–but what that position happens to be seems, according to all experiments, random (or indeterminate). How does my theory account for that? Well, it doesn’t need to. Once again, this is not a problem for my theory but for science. Science itself is telling us that these wavefunction collapses are indeterminate, so the same carries over to experience, that which the collapsing wavefunction represents (and why wouldn’t it, if it’s not driven by full necessity?). My theory waits for science to tell us what’s happening during the wavefunction collapse (or with superposition, for that matter) before it renders its own interpretation of what that represents vis-à-vis experience. If science hasn’t shed any light on this yet, we shouldn’t worry that our theory hasn’t either. My aim here is not to answer these questions but simply to reconcile the conflict between my theory and quantum superposition, and I believe the “partial” necessity compromise does the trick nicely.
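To picture the kind of randomness science is reporting here, consider a minimal sketch in Python–toy outcomes and probabilities of my own invention, with the weights playing the role the squared amplitudes play in the standard Born rule:

import random

# Toy measurement outcomes with made-up Born-rule-style probabilities.
outcomes = ["position A", "position B", "position C"]
probabilities = [0.7, 0.2, 0.1]  # must sum to 1

# Each "measurement" draws one outcome at random; no rule of necessity
# determines which one shows up, only the weights.
for trial in range(5):
    result = random.choices(outcomes, weights=probabilities, k=1)[0]
    print(f"measurement {trial + 1}: {result}")

Two runs of this will generally disagree, and nothing deeper decides which outcome each measurement produces–that is exactly the indeterminacy my theory concedes.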
Wow. The walls of text.
Everyone has to be different for existence to exist. Existence can’t unexist.
Consciousness occurs because infinity is trying to be itself. It can’t.
This causes self-recursion… which is consciousness.
“Do or do not. There is no try.”
However, the mind, which, in my view, is the content of consciousness, creates the experience of time. (See Kant.) The past doesn’t exist any more, and the future doesn’t exist yet. They are both useful fictions of the mind. (See Parmenides.)
Being is absolutely necessary. And it is absolutely free. Quantum theory reflects that reality. Which is why it contradicts common sense.
Being and consciousness are absolutely one. Relativity originates with the mind. The world is our representation. (See Schopenhauer)
The past & future are as real as the present or they could not be blended “before” the “beginning” … all of which “began” whole.
There is no past, present or future before “the beginning” which is itself a temporal concept. Time is an a priori structure of the human mind which it superimposes on the empirical world.
I put before in scarequotes. I also put beginning & began in scarequotes.
Kant has two senses of time. One of them is God.
Where does Kant say time is God? And, if you think that’s true, how so?
Ask me again after I finish what I’m currently working on.
Based on what he says about time in the Critique of Pure Reason and the Prolegomena to Any Future Metaphysics, that would seem to be a remarkable and prima facie contradiction on Kant’s part. I haven’t read his early works (his “dogmatic slumber” years), but those are said to be consistently unoriginal, so I wouldn’t expect to find such a proposition there. If he said it later, I would expect his contemporaries to have called him on it. That is the kind of metaphysical claim he argues against in the CPR. I think you’re mistaken.