Note: I wrote this up to address the assertion of the impossibilism of free will in the “Critique My Philosophy of Life?” thread. However it didn’t feel right to initiate this level of analysis in that thread, so I’ve put it in the “Free Will” thread instead. Any counterarguments and discussion can thus be maintained in a more appropriately topic’d thread.
Analysis/argument regarding free will (ultimately concluding as pro-free will)
We (for the sake of this approach) live in a deterministic universe. “A implies B” means that some unrelated result Q isn’t going to randomly pop up in place of B. We actually want this to be the case: if it’s determined that action B is good to do in situation A, I don’t want a random (indeterminate) result to be generated in situation A, I want action B. Free will would dissolve in the purely indeterminate case, because there’s no way to predict what would happen, and it’s entirely possible that something I don’t want to happen would happen. Indeterminacy essentially robs free will of the “will” half.
With a deterministic universe, there appear to be two ways to argue away free will: predictability and responsibility. Predictability looks forward, responsibility looks back.
The predictability argument goes like this: if one could create a computer that completely modeled your brain (which can’t be done even in theory because of the no-cloning problem, but ignore that for now), and that model would always answer any question exactly the same way as you would, that would imply you don’t have inherent free will.
Aside from the cloning problem, this amounts to being able to predict state S+1 based on state S. To do that, the model would need to know the entirety of state S. However knowing state S (which technically should encompass the entire universe, though even that isn’t necessary for this argument) is fundamentally impossible due to both the Observer Effect and the Uncertainty Principle. Therefore, for multiple reasons, it’s literally impossible to disprove (or prove) free will based on predictability.
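To make the predictability point a bit more concrete, here is a toy sketch (my own illustration, not part of the original argument, and obviously not a model of a brain): a fully deterministic update rule whose later states still cannot be predicted by any model that measures state S with even a tiny error. The logistic map stands in for “the rules”, and the 1e-10 offset stands in for the measurement limits imposed by the Observer Effect / Uncertainty Principle.

```python
def step(s):
    """Deterministic rule: state S -> state S+1."""
    return 3.9 * s * (1.0 - s)   # logistic map in its chaotic regime

actual = 0.123456789             # the true state S
model  = actual + 1e-10          # the predictor's best possible measurement of S

for t in range(60):
    actual = step(actual)
    model = step(model)

print(f"actual S+60: {actual:.6f}")
print(f"model  S+60: {model:.6f}")   # typically differs wildly despite full determinism
```

The rule itself is perfectly deterministic; what fails is the predictor’s access to state S, which is the same gap the argument above relies on.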
So next we’d look at the responsibility aspect, though even that needs a bit more clarity.
A small aside on free will: In order to better understand and argue for or against the existence of free will, we need a better description of what free will actually is. Current definitions are almost tautological (or whatever the reverse of that would be, when being disproved), and they shift most of the definition onto an ambiguous external element called “responsibility”.
More generally, “free will” is an aspect of behavior, and so we ought to be able to describe what free will does that affects behavior, rather than simply saying that behavior is the ‘result’ of free will.
Now, responsibility can be considered at two levels: as a concept within the framework of entities that are even capable of discussing free will, or as the “cause” of an effect. If we limit ourselves to the frame of reference that allows conscious entities to exist, we can approach it solely as a human (ie: self-aware entity)-based cause/effect system (and assume that the non-human cause/effect layer supports the human layer). Responsibility then encompasses genetics, environment, instruction, and so on.
However ‘cause’ can only be considered transitive for as long as the same rules apply. If the rules change, the causal chain stops. One might argue that the laws of physics don’t ever change, and that would be correct – but only at the most fundamental (ie: quantum) level. Wetness or viscosity, for example, can’t be measured in an individual atom. Such a property only exists when multiple atoms are arranged together, and depends on other things, such as collective energy states, that are likewise only relevant to the collective, not the individual.
In other words, the rules that apply at the collective/macroscopic level are a superset of the rules that apply at the quantum level. The quantum rules still apply, but some other, additional rules -also- apply. This is largely described as emergence – that complete knowledge of the behavior of the components of a system (in isolation) is insufficient to describe the behavior of the system as a whole, which can only be described with respect to the interaction of the components, not the components themselves.
Any given action is thus composed of and described by rules at its native tier, each of which can be broken down in a compositional manner into rules from lower tiers (a conceptual stratification similar to supervenience). However, while those lower tiers behave in a deterministically causal manner, they cannot themselves be considered causal for the effects of the higher-tier event, which depends not only on the lower-tier behavior, but on the structure and order in which those lower-tier elements are used, which can only be described at the higher tier.
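As an analogy of my own (not the author’s example, and only meant to make the “collective tier” idea concrete): some properties are well defined for an arrangement of components but meaningless for any single component, in the same way wetness isn’t a property of one atom.

```python
import random

# Toy "gas": each particle has only mass and speed; no particle has a temperature.
particles = [{"mass": 1.0, "speed": random.gauss(0.0, 1.0)} for _ in range(10_000)]

def temperature(ensemble):
    """Defined only on the collection: mean kinetic energy across the ensemble."""
    return sum(0.5 * p["mass"] * p["speed"] ** 2 for p in ensemble) / len(ensemble)

print(temperature(particles))  # meaningful for the ensemble
# temperature([particles[0]]) still runs, but it just returns that one particle's
# kinetic energy; "the temperature of a single particle" is not a real property,
# any more than "the wetness of a single atom" is.
```

The component-level rules (mass, speed) are all still obeyed; the higher-tier property only exists once the components are taken together, which is the sense of “additional rules” used above.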
The mind (brain) is a self-modifying rule system that has to inherently work with incomplete information. Since information is incomplete, it -cannot- function without being able to change. Genetics gives it a default rule set that grows as we do.
At initialization, all the ‘software’ the brain has is genetic – some simple rules and the basic blueprint that allows it to change. External elements may be used as input in order to instantiate change. However if external factors were the only means of change, there would be no free will, since changes to the rule sets would be completely outside of any individual’s control; ie: the individual would have no choice in the construction of the rule set, and thus no choice in ultimate behavior.
Fortunately that’s not the only means of modification. Self-awareness, introspection, hypotheticals, and other imaginaries allow us to simulate state scenarios and evaluate the value of any choice we could make based on predicted outcomes. We then use that information to modify our own rule set. And that, I believe, is where free will comes in.
Free will is the ability to evaluate the rule set that will be applied to state S, and determine if the outcome of using that rule set is desirable, or if the rule set should be modified, and, if it should be modified, -how- it should be modified. Exerting free will is modifying and/or using that new rule set.
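Read as a computational caricature, the loop described in the last few paragraphs can be sketched like this. This is a toy of my own construction (the names and the numeric “preferences” are invented for illustration); it is only meant to make the “evaluate the rule set, then modify it, then act” sequence concrete, not to claim minds work like ten lines of Python.

```python
class Agent:
    def __init__(self, rule):
        self.rule = rule                      # current rule set: state -> action

    def simulate(self, rule, state):
        """Imagine the outcome of applying a rule set, without actually acting."""
        return rule(state)

    def evaluate(self, outcome):
        """The agent's own preferences over predicted outcomes (assumed for the toy)."""
        return -abs(outcome - 10)             # this agent prefers outcomes near 10

    def deliberate_and_act(self, state, candidate_rules):
        # Compare the current rule set against self-generated alternatives...
        best = max([self.rule] + candidate_rules,
                   key=lambda r: self.evaluate(self.simulate(r, state)))
        self.rule = best                      # ...self-modify: adopt the preferred rule set...
        return self.rule(state)               # ...then exert the (possibly new) rule set.

agent = Agent(rule=lambda s: s + 1)
print(agent.deliberate_and_act(state=3, candidate_rules=[lambda s: s * 2, lambda s: s + 7]))
# -> 10: the agent rewrote its own rule to "s + 7" because its predicted outcome was preferred.
```

The deliberation step (simulate, evaluate, self-modify) is the part being identified with free will here; the final call to the rule is just the deterministic execution of whatever rule set survived that step.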
This, I believe, is a solid description of most people’s understanding of free will.
What about causal effects that impact the initiation of this action? Well, the transitive barriers show that those can only occur within the scope of the frame of reference that includes free will itself. Free will can only exist within the rule tier that creates the rules that allow free will to operate (and, by extension, any higher tiers); it cannot exist within lower tiers. Because of that, the lower tiers cannot be considered causes acting on free will. It also follows that if you examine activity below the free-will layer, there will be no evidence of free will, since free will literally cannot exist even as a concept at that point.
Because of this, ‘ultimate’ responsibility (ie: the base causal effect) for an action can only be described within the framework of the rule set under which the action is initiated. Actions humans take are thus only attributable within the rule set that the human mind uses (conscious and non-conscious), and responsibility can only propagate as far back as the basic rule set that was used to determine that an action be taken. Since such rule sets are not shared (ie: no two people are physically using the exact same rule set, even if what those rule sets represent is identical), ‘ultimate’ responsibility can only be tracked as far back as the individual.
Free will allows us to modify those rule sets, and thus the broader concept of responsibility is that the entity capable of modifying that rule set bears responsibility for the action that was done (in the cause-effect sense; moral responsibility is a separate aspect, possibly a separate layer, and not something I’m going to attempt to integrate at this time).
Note, however, that it still generally follows that, at any given point in time, with a specific state S of the universe (and thus a fixed rule set in place within the mind), the action one takes will (probably) not change. You will not have “done something different” until the rule set being acted on was changed (either via external conditions or free will), or unless the rule set includes some quantum indeterminacy at a fundamental (compositional) level.
Counterargument: Well, now you’ve changed the question of responsibility from “Could one have chosen to take a different action?” to “Could one have chosen to implement a different rule set?”. That is, is free will -itself- free?
I would say yes, because there are multiple chains of causality to follow: internal (all self-modifications of rule sets); same-level-external (all actions by others that modified your rule sets, occurring at the same rule tier); natural-external (all ‘natural’ events that modified your rule sets). Because there is no single, fully transitive causal chain, and because natural-external events include quantum indeterminacy (even if we exclude the possibility of quantum indeterminacy at the brain level), it’s impossible for there to be some ultimate causal element. At best, you can describe some event that occurred before all other events, but that event cannot be considered the -reason- that all later events occurred; at most, it is the reason for events up to a transition boundary.
When one claims that there is some ultimate causality, one implicitly asserts complete predictability as well: A implies B, B implies C, and so forth, such that -because- of that original event, all other events must necessarily follow, and ultimately that your current action must necessarily follow from all previous events.
If one only considered the internal causal chain (within the rule-set scope that includes free will), that could perhaps be the case. If the causal chain were purely transitive, that might be the case. If the causal chain were fully predictable, that might be the case. However, the first condition fails on completeness and the other two fail outright, so the denial of free will on the basis of ultimate responsibility cannot be considered valid.
Secondary musings:
From there, I would say that free will is -limited- by the incomplete information problem. The less that is known, the fewer ways that free will can manipulate the rule set. Likewise, predictability becomes easier with limited option sets, though that hardly matters on the larger scale. However the fundamental existence of free will as both a behavior pattern and as a choice without an ultimate cause outside of oneself seems valid.
From a minor point brought up in the "Demonstrating Free Will" thread --
Descriptive view of what "lack of free will" (in the sense of being able to self-modify behavior rule sets) would mean:
That one would be unable to change one's own behavior on one's own prerogative. That one would only be able to act based on rule sets defined by external stimulus, which would imply that all learning would essentially be Pavlovian. That choices could be explicitly predicted based on which external events (that correlate to the current event) one had been exposed to. That it probably would not be possible to convey knowledge to other individuals.
The above is put together more on the grounds of humans lacking free will altogether. If only a single individual lacked free will, then their learning and rule sets would still be open to modification, but only from outside – subject to what other people wanted to teach them.
One could also consider this to be intrinsically linked with self-awareness. Self-awareness means that one can treat one's own mind as both a subject and an object, and rewriting one's own rule sets would clearly be an act requiring that perspective.