"Free will"

The idea that we have a single will is false; we have multiple wills that we weigh against each other.

It is our intellect that grants us a certain degree of freedom from our own internal drives, since it is this intellect that we use to choose between the competing, caused options available to our wills.

The degree of our intellect is therefore the limit of our freedom to choose. Choice based on knowledge is the only real freedom.

This obviously implies that the more intelligent a person is, the freer he is. So ideas like being “free as a bird” are wrong thinking.

I disagree. I think the amount of power you have in relation to others with the same objective determines your freedom.

Intellect is more often a trap than a key to freedom, as it asks the question ‘why?’, which is the most limiting question in existence. Only wisdom, the proper application of intellect to a real situation, can be an agent of freedom.

negative vs positive “freedom”.

word games

The freedom of the self is entirely about self control, not control of others (necessarily).

“Why?” is the first step in realising that there may be freedom in self-restriction. So, for instance, asking why one should consider the cost to others with regard to one’s actions is the first step in realising that one’s own interests may not be the foundation of personal freedom. For instance, your freedom depends on the will of others.

Mine is the positive …obviously. :sunglasses:

Words are only games if you are not sure of their meaning. My meaning is clear.

I will choose free will- Rush

I would say choose Free Willy… that whale’s been beached since the 90s…

My meanings are clear, and I am still playing games with the words they derive from. How do I know? The child I played with told me she understood what I meant.

Multiple vs single will sounds like a disagreement on terminology. Will is more generally considered taking action on a choice among competing desires, where those desires vary among ‘baser’ desires, intellectual desires, etc.

So saying that you have multiple wills is really just restating the above idea, except naming each individual desire as a ‘will’ just to make things more confusing.

From my own understanding, “free will” is a description of the idea that you are free to choose to take action on any of those competing desires without any intrinsic predetermination of what that choice will be.

Alternatively, if you take the chaos theory approach, you could say that all actions are deterministic – that is, given the same set of initial inputs, you will always reach the same output. However you can only predict the next input if you have complete knowledge of all of the given inputs, all the way back to the ultimate initiation of the system; and further, that you still cannot predict what the next state is unless you run the entire system from its initial state up to its current state, again.

Essentially, that while the universe may be completely and absolutely deterministic, it’s impossible to predict a future state without running the entire universe through its entire existence. It is deterministic, but it is not predetermined.
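The deterministic-but-not-predetermined point can be illustrated with a toy chaotic system (my own example, not from the thread): the logistic map is fully deterministic, yet it has no general closed-form shortcut, so knowing a far-future state requires iterating through every intermediate state, and any imprecision in your knowledge of the initial state grows until prediction fails.

```python
# A deterministic system that is not practically predetermined.
# The logistic map x' = r*x*(1-x) with r = 3.9 is chaotic: identical
# inputs always give identical outputs, but there is no shortcut to
# state N other than computing all N steps in sequence.

def logistic_step(x, r=3.9):
    return r * x * (1.0 - x)

def run(x0, steps):
    x = x0
    for _ in range(steps):
        x = logistic_step(x)
    return x

# Determinism: the same initial state always yields the same outcome.
assert run(0.5, 100) == run(0.5, 100)

# Unpredictability: an observer whose knowledge of the initial state is
# off by one part in a billion diverges from the true trajectory.
exact = run(0.500000000, 100)
approx = run(0.500000001, 100)
print(abs(exact - approx))  # the tiny input error has been amplified
```

The same-input/same-output assertion is the “deterministic” half; the divergence under imperfect knowledge of the state is the “not predetermined” half, mirroring the Observer Effect point made later in the thread.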

Going back to the free will half: If you were to somehow be able to repeatedly scrub back and forth through a point in time, crossing a period where you made a decision, you would always make the same decision (ie: deterministic). However neither you nor anyone else would be able to say what that decision would be until after the decision was made (ie: not predetermined).

Whether that counts as “free” will is somewhat debatable, and probably a semantic argument.

You then seem to cross over from a discussion on free will to a discussion on free choice: the ability to analyze and choose among the various competing options the will is acting on, in the furtherance of some overall goal. So free will is the ability to act without predetermination, while free choice is the breadth and understanding of the options you can choose to act on (as well as external limits on the ability to act on those choices, whether due to other people or physical constraints or whatever).

It would not seem that greater intellect implies greater freedom, in the sense of “free will”. No matter how limited or expansive your intellect, it doesn’t affect your ability to make a choice, only the options from which you can choose.

“Free will” would seem to be a constant concept – either it exists, or it does not. No matter what choice you do or do not make, the free will aspect of it always remains the same. Your freedom of choice, on the other hand, is definitely impacted by other things, both in knowing what choices are available to you, and to what degree you are allowed to take action.

I don’t think that it is confusing to state that we are not creatures of single wills, that in fact we are creatures of constant, multiple wills whose freedom involves being able to select the individual will that best suits our aims. That selection process involves subjugating wills to intellect (knowledge).

A wild animal does not have the same level of intellect as man, therefore that animal has far less freedom to be able to choose the right action for its aims…nor does it have the ability to moderate its aims through intellectualised (theoretical) knowledge.

An action willed via knowledge is an action that is not blind causation; it is no slave to causation, rather it has become it.

Hope that’s clear, I’m a bit tired at the mo. :sunglasses:

Note: I wrote this up to address the assertion in the “Critique My Philosophy of Life?” thread of the impossibilism of free will. However it didn’t feel right to initiate this level of analysis in that thread, so I’ve put it in the “Free Will” thread instead. Any counterarguments and discussion can thus be maintained in a more appropriately topic’d thread.

Analysis/argument regarding free will (ultimately concluding as pro-free will)

We (for the sake of this approach) live in a deterministic universe. “A implies B” means Q isn’t going to randomly pop up when it has no relation to A. We actually want this to be the case. If it’s determined that action B is good to do in situation A, I don’t want a random (indeterminate) result to be generated in situation A, I want action B. Free will would dissolve in the purely indeterminate case because there’s no way to predict what would happen, and it’s entirely possible that something that I don’t want to happen would happen. It essentially robs the “will” half of free will.

With a deterministic universe, there appear to be two ways to obviate free will: predictability and responsibility. Predictability looks forward, responsibility looks back.

The predictability argument is thus: If one could create a computer that completely modeled your brain (which can’t be done even theoretically because of the cloning problem, but ignoring that), and that model would always answer any question exactly the same way as you would, that would imply you don’t have inherent free will.

Aside from the cloning problem, it could be described as being able to predict state S+1 based on state S. To do that, it would need to know the entirety of state S. However knowing state S (which technically should encompass the scale of the entire universe, but even that isn’t necessary for this argument) is fundamentally impossible due to both the Observer Effect and the Uncertainty Principle. Therefore, for multiple reasons, it’s literally impossible to disprove (or prove) free will based on predictability.

So next we’d look at the responsibility aspect, though even that needs a bit more clarity.

A small aside on free will: In order to better understand and argue for or against the existence of free will, we need a better description of what free will actually is. Current definitions are almost tautological (or whatever the reverse of that would be, when being disproved), and they shift most of the definition onto an ambiguous external element called “responsibility”.

More generally, “free will” is an aspect of behavior, and so we ought to be able to describe what free will does that affects behavior, rather than simply saying that behavior is the ‘result’ of free will.

Now, responsibility can be considered at two levels: the conceptual entity that exists within the framework that is even capable of discussing free will, or the “cause” of an effect. If we limit ourselves to the frame of reference that allows for conscious entities to exist, we can approach it solely as a human (ie: self-aware entity)-based cause/effect system (and assume that the non-human cause/effect layer supports the human layer). Responsibility then encompasses genetics, environment, instruction, and so on.

However ‘cause’ can only be considered transitive for as long as the same rules apply. If the rules change, the causal chain stops. One might argue that the laws of physics don’t ever change, and that would be correct – but only at the most fundamental (ie: quantum) level. Wetness or viscosity, for example, can’t be measured in an individual atom. It only even exists when multiple atoms are arranged together, and depends on other things such as collective energy states that are also only relevant to the collective, not the individual.

In other words, the rules that apply at the collective/macroscopic level are a superset of the rules that apply at the quantum level. The quantum rules still apply, but some other, additional rules -also- apply. This is largely described as emergence – that complete knowledge of the behavior of the components of a system (in isolation) is insufficient to describe the behavior of the system as a whole, which can only be described with respect to the interaction of the components, not the components themselves.

Any given action is thus composed of and described by rules at its native tier, each of which can be broken down in a compositional manner into rules from lower tiers (a conceptual stratification that can be seen as similar to supervenience). However while those lower tiers behave in a deterministically causal manner, they cannot themselves be considered causal for the effects of the higher tier event, which depends not only on the lower tier behavior, but on the structure and order of using those lower tier elements, which can only be described at the higher tier.

The mind (brain) is a self-modifying rule system that has to inherently work with incomplete information. Since information is incomplete, it -cannot- function without being able to change. Genetics gives it a default rule set that grows as we do.

At initialization, all the ‘software’ the brain has is genetic – some simple rules and the basic blueprint that allows it to change. External elements may be used as input in order to instantiate change. However if external factors were the only means of change, there would be no free will, since changes to the rule sets would be completely outside of any individual’s control. IE: The individual has no choice in the construction of the rule set, and thus no choice in ultimate behavior.

Fortunately that’s not the only means of modification. Self-awareness, introspection, hypotheticals, and other imaginaries allow us to simulate state scenarios and evaluate the value of any choice we could make based on predicted outcomes. We then use that information to modify our own rule set. And that, I believe, is where free will comes in.

Free will is the ability to evaluate the rule set that will be applied to state S, and determine if the outcome of using that rule set is desirable, or if the rule set should be modified, and, if it should be modified, -how- it should be modified. Exerting free will is modifying and/or using that new rule set.
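The rule-set model described above can be sketched in code. Everything here (the situation names, actions, and scoring) is my own hypothetical construction, meant only to make the post’s structure concrete: an agent holds a rule set mapping situations to actions, simulates the outcome of the rule it is about to apply, and, if a different action predicts a better outcome, rewrites its own rule set before acting, which is the “free will” step in the post’s description.

```python
# Hypothetical toy model of a self-modifying rule system.
# Outcomes each action produces in this toy world, and their desirability.
OUTCOMES = {"grab": -5, "ask": 3, "wait": 1}

class Agent:
    def __init__(self):
        # Default rule set ("genetics" in the post's terms).
        self.rules = {"sees_food": "grab"}

    def simulate(self, action):
        # Imagine applying the action and score the predicted outcome.
        return OUTCOMES[action]

    def act(self, situation):
        action = self.rules[situation]
        # Evaluate the rule about to be applied against the alternatives;
        # if another action predicts a better outcome, modify the rule
        # set itself, then act on the new rule.
        best = max(OUTCOMES, key=self.simulate)
        if self.simulate(best) > self.simulate(action):
            self.rules[situation] = best  # the self-modification step
            action = best
        return action

agent = Agent()
print(agent.act("sees_food"))    # the default "grab" rule is rewritten
print(agent.rules["sees_food"])  # the rule set itself has changed
```

The key design point matching the post: the choice is not merely among actions, but about whether to keep or rewrite the rule that generates the action.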

This, I believe, is a solid description of most people’s understanding of free will.

What about causal effects that impact the initiation of this action? Well, the transitive barriers show that those can only occur within the scope of the frame of reference that includes free will itself. Free will can only exist within the rule tier that creates the rules that allows free will to operate (and, by extension, any higher tiers); it cannot exist within lower tiers. Because of that, the lower tiers cannot be considered as causative effects on free will. It also follows that if you examine activities below the free will layer, there will be no evidence of free will since free will literally cannot exist even as a concept at that point.

Because of this, ‘ultimate’ responsibility (ie: the base causal effect) for an action can only be described within the framework of the rule set under which the action is initiated. Actions humans take are thus only attributable within the rule set that the human mind uses (conscious and non-conscious), and responsibility can only propagate as far back as the basic rule set that was used to determine that an action be taken. Since such rule sets are not shared (ie: no two people are physically using the exact same rule set, even if what those rule sets represent is identical), ‘ultimate’ responsibility can only be tracked as far back as the individual.

Free will allows us to modify those rule sets, and thus the broader concept of responsibility is that the entity capable of modifying that rule set bears responsibility for the action that was done (in the cause-effect sense; moral responsibility is a separate aspect, possibly a separate layer, and not something I’m going to attempt to integrate at this time).

Note, however, that it still generally follows that, at any given point in time, with a specific state S of the universe (and thus a fixed rule set in place within the mind), the action one takes will (probably) not change. You will not have “done something different” until the rule set being acted on was changed (either via external conditions or free will), or unless the rule set includes some quantum indeterminacy at a fundamental (compositional) level.

Counterargument: Well, now you’ve changed the question of responsibility from “Could one have chosen to take a different action?” to “Could one have chosen to implement a different rule set?”. That is, is free will -itself- free?

I would say yes, because there are multiple chains of causality to follow: internal (all self-modifications of rule sets); same-level-external (all actions by others that modified your rule sets, occurring at the same rule tier); natural-external (all ‘natural’ events that modified your rule sets). Because there is no single causal, non-transitive chain, and because natural-external events include quantum indeterminacy (even if we exclude the possibility of quantum indeterminacy at the brain level), it’s impossible for there to be some ultimate causal element. At best, you can describe some event that occurred before all other events, but that event cannot be considered the -reason- that all later events occurred; only all events up to a transition boundary.

When one claims that there is some ultimate causality, it implicitly asserts that there is a complete predictability as well. A implies B, B implies C, and so forth, such that -because- of that original event, all other events must necessarily follow, and ultimately that your current action must necessarily follow from all previous events.

If one only considered the internal causal chain (within the rule set scope that includes free will), that could perhaps be the case. If the causal chain was purely transitive, that might be the case. If the causal chain was fully predictable, that might be the case. However, since each of these fails in assertion, or in the case of the first, fails in completeness, the denial of free will on the basis of ultimate responsibility cannot be considered valid.



Secondary musings:


From there, I would say that free will is -limited- by the incomplete information problem.  The less that is known, the fewer ways that free will can manipulate the rule set.  Likewise, predictability becomes easier with limited option sets, though that hardly matters on the larger scale.  However the fundamental existence of free will as both a behavior pattern and as a choice without an ultimate cause outside of oneself seems valid.


From a minor point brought up in the "Demonstrating Free Will" thread --

Descriptive view of what "lack of free will" (in the sense of being able to self-modify behavior rule sets) would mean:

That one would be unable to change one's own behavior on one's own prerogative.  That one would only be able to act based on rule sets defined by external stimulus, which would imply that all learning would essentially be Pavlovian.  That choices could be explicitly predicted based on which external events (that correlate to the current event) one had been exposed to.  That it probably would not be possible to convey knowledge to other individuals.

The above is put together more on the grounds of if humans lacked free will altogether.  If only a single individual lacked free will then their learning and rule sets are more open to modification, subject only to what other people wanted to teach them.



One could also consider this to be intrinsically linked with self-awareness.  Self-awareness means that one can treat one's own mind as both a subject and an object, and rewriting one's own rule sets would clearly be an act requiring that perspective.

I understand that the fundamental aspect of the above argument is the non-transitivity of causation across rule boundaries (it can already be shown that causality may not be transitive in some circumstances). If that assertion is treated as false, the entire argument fails. So I need to look at that element more closely.

Looking at one example of logical weirdness in transitivity, since I saw variants on it brought up a few times:

A = a fire started
B = a fire sprinkler set off
C = the house was not burned down

A → B, and B → C, so A → C: a fire started, so the house was not burned down

The problem with this one, I think, is that “the house was not burned down” was not actually an effect; it was the state of the house before the fire started, so nothing changed. So in reality, the only causal chain here is A → B. C is just an observation after the fact, and the transitive weirdness was a flaw in the original logical construction.

The original C would have been better stated as something like, “I observed that the fire was prevented from burning the house down.” In fact, I think most premise/conclusion elements of state are actually shorthand for the observation of said state, not the state itself. The observation can be considered an action, preserving the transitivity. [Aside: must the cause of a cause->effect be an action? I can’t think of a valid non-action cause, offhand.]
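The “C was a state, not an effect” diagnosis can be made concrete with a toy model (my own framing): if causes are modeled as state *transitions*, then “the house was not burned down” never appears in the effect log at all, because it changes nothing; the only genuine causal chain is A → B.

```python
# Toy model of the fire/sprinkler chain: an effect is a change of state,
# so events that change nothing produce no entry in the causal log.

state = {"fire": False, "sprinkler": False, "house_burned": False}
log = []  # only genuine state changes are recorded as effects

def apply_event(name, changes):
    for key, new_value in changes.items():
        if state[key] != new_value:       # an effect = a change of state
            state[key] = new_value
            log.append((name, key, new_value))

apply_event("A: fire starts", {"fire": True})
apply_event("B: sprinkler triggers", {"sprinkler": True, "fire": False})
apply_event("C: house not burned", {"house_burned": False})  # no change!

print(log)  # C produced no entry: it changed nothing, so it caused nothing
```

Run it and the log contains only the A and B transitions, matching the claim above that C was merely an observation of an unchanged state.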

What about rule boundaries? Well, you cannot apply a rule when that rule cannot exist. Emergent phenomena are things that happen due to rules that only exist when certain other conditions are met (most commonly seen when individuals act as part of a collective). That is, the rules are contingent on other conditions.

So A → B, but only if condition/rule set Q is true, because Q is what allows you to even try to map A to B. A → B can’t be evaluated as either true or false without the existence of condition Q. One could simplify it as A & Q → B, though that’s still not complete.

Basically, Q cannot be evaluated with the rule set that applies only to A. It needs a rule set that can encompass both A and Q, since it would be logically inconsistent to apply different rule sets to each premise (in other words, the rule set for Q must be a superset of the rule set for A). The question is whether the rule set that can apply to Q intrinsically exists, or if it can only exist when the circumstances surrounding Q exist (eg: in a universe with only a single atom, do the rules for viscosity exist?).
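A small sketch (entirely my own construction) of the point that A → B is not merely *false* without condition Q, but *unevaluable*: if rule sets are keyed by the context in which they exist, then in a context lacking Q the rule is simply absent, which is a third status distinct from true or false.

```python
# Rules exist only within the context (rule tier) that supports them.
RULE_SETS = {
    # The collective-level context (where condition Q holds) carries
    # rules that the individual-level context lacks entirely.
    "individual": {},
    "collective": {"A": "B"},
}

def evaluate(context, premise):
    rules = RULE_SETS[context]
    if premise not in rules:
        return None          # undefined: the rule does not exist here
    return rules[premise]

print(evaluate("collective", "A"))  # the rule maps A to B
print(evaluate("individual", "A"))  # not False -- simply no rule at all
```

Returning `None` rather than `False` is the point: at the lower tier the implication cannot even be posed, which parallels the claim that free will leaves no evidence at tiers where it cannot exist as a concept.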

Unfortunately at this point I have no validatable answer (and musings so far only give tests that are impossible to perform, so non-falsifiable), and I’m not sure where to even begin looking to see if anyone had done any research on this concept. My own conclusion is consistent with the idea that the rule sets do not exist without the conditions that give rise to them, but I can’t think of a way to prove that. The counterargument would be that they must necessarily exist a priori else they wouldn’t manifest when the conditions are met; if they didn’t already exist, you couldn’t be sure that the same rule sets would manifest in variously diverse places.

A counter-counterargument would be an appeal to quantum information transfer, as seen in entanglement experiments – that the information of what the rule set was, once observed, becomes used by all other manifestations of that rule set. On the other hand, that still presupposes a fixed state of information that can be transmitted, and of a limited set of possibilities, which would imply at least the possibility of the existence of the rule set before the probability collapse.

Overall, needs more thought.

I disagree.