Game Theory and Morality

So, I’ve been giving this a lot of thought, and I need a sounding board from people who spend more time on this to gauge the apparent validity of the concept. While this deals with math, it’s more about philosophy, so I’m putting it in this forum. I’ll try to avoid rambling too much, but I do want to be specific about details that differ from the commonly referenced uses.

First, the application of game theory to ethics. This paper does a pretty good job of detailing most of the mechanics of it (though it’s missing a few bits), so to avoid trying to repeat all that, I’ll just point to it for the basic mathematics of the idea for anyone who is unfamiliar with it.

The paper does touch on one thing that is a large basis for my theorizing: evolutionary game theory. Namely, you can reach a globally optimal (Pareto-efficient) outcome if all the people participating in the decision process agree to act in a collectively beneficial way. However, the paper faults this approach for being unable to define what we consider to be moral results. It treats that as a failure, when in actuality it seems to be missing the point of what morals actually are. My assertion is that morals are not ‘right’ or ‘wrong’, they are ‘optimal’, and the failure in the paper was in thinking that morals have to somehow be justifiable, and also that they must necessarily apply solely to the utility functions of individuals.

Essentially, ‘morality’ itself can be defined as an optimization mechanism for the decision-making process that has already been ‘agreed on’, without needing to actively negotiate the decision parameters with each person each time; alternatively, it is the means by which people are motivated to select the globally optimal outcome rather than the Pareto-inefficient rational one. When viewed this way, many other conclusions immediately manifest.
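
To make the ‘rational vs. globally optimal’ distinction concrete, here is a minimal sketch of the standard prisoner’s dilemma (the payoff numbers are the usual textbook ones, chosen purely for illustration; nothing below comes from the paper):

```python
# Minimal prisoner's dilemma sketch (illustrative textbook payoffs).
# Each player picks 'C' (cooperate) or 'D' (defect); the tuple gives
# (first player's payoff, second player's payoff).

payoffs = {
    ('C', 'C'): (3, 3),  # both cooperate: best total outcome (6)
    ('C', 'D'): (0, 5),  # sucker's payoff vs. temptation to defect
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),  # both defect: the 'rational' equilibrium (total 2)
}

for (a, b), (pa, pb) in payoffs.items():
    print(f"{a} vs {b}: payoffs ({pa}, {pb}), total {pa + pb}")

# Defecting dominates individually (5 > 3 and 1 > 0, whatever the other
# player does), yet (D, D) yields a total of 2, while the 'moral'
# agreement (C, C) yields 6: the globally optimal, Pareto-efficient outcome.
```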



1. Basic definitions


* Since morality is a decision optimization mechanism for making constrained choices among 2 or more individuals, morality itself becomes an explicitly emergent phenomenon. It is not a 'natural' aspect of the universe (ie: something that is objectively true regardless of the observer); rather, it is something that can only exist (but also [i]must[/i] exist) when multiple entities making choices interact. The very fact that there is an optimal choice that can be reached means that there is a 'moral' which could define how to agree to reach that optimal choice. Note that 'optimal' is only within the bounds of the knowledge of the decision makers; if they do not know that there is a more optimal outcome, they cannot consciously attempt to achieve it.

* It's very easy to show where selfish/rational decisions lead to 'immoral' choices being made (within the current common conception of morality). It's also easy to show where those same scenarios allow for a greater global benefit if the 'moral' choice was made. The rational act benefits the self to the greatest degree, while the moral act benefits the whole to the greatest degree.

* The 'whole' is an arbitrary and abstract distinction. It is the group within which the act is being evaluated. It might be the two men of a prisoner's dilemma, a husband and wife, a family, a gang, a village, a country, etc. Each of these defines its own individual application of the decision optimization.

* From the above, it's clear that 'moral' acts are defined to the benefit of the whole.  However, 'benefit' must be defined over evolutionary timescales, not instantaneous ones.  From that, one can conclude that 'better' morals are ones that better optimize the decision paths of the parties within the whole in comparison to all other wholes.  That is, the rules that most optimally benefit the whole, and, for efficiency's sake, benefit all wholes that an individual is a part of to the greatest total degree, are the decision patterns that are 'most' moral.

* Note that optimization does not require 100% participation.  In fact, optimization must necessarily account for less than 100% participation, and therefore the optimum might only actually be optimum when there is less than full participation.  More generally, a moral might only be optimum within a specific range of participation (eg: 70%-100%), and not be optimum outside of that range.
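
As a toy illustration of that last point (the model and numbers are my own construction, purely for illustration): suppose upholding a norm costs each participant something, while the group benefit saturates once participation reaches a threshold. Total per-capita welfare then peaks below full participation:

```python
# Toy 'participation range' model (my own construction, illustrative only).
# A fraction p of the group pays `cost` to uphold a norm; the group
# benefit saturates once participation reaches `threshold` (70% here).
# Past the threshold, extra participation adds cost but no extra benefit.

def group_welfare(p, benefit=10.0, cost=2.0, threshold=0.7):
    """Average per-capita welfare at participation rate p (0..1)."""
    provided = benefit * min(p / threshold, 1.0)  # saturating group benefit
    return provided - cost * p                    # minus the cost paid

for p in [0.0, 0.3, 0.5, 0.7, 0.9, 1.0]:
    print(f"participation {p:4.0%}: welfare {group_welfare(p):5.2f}")

# Welfare climbs up to 70% participation and then declines, so the
# optimum sits in a range short of full compliance, as argued above.
```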


2. Progression


* Every instance of a whole is competing with every other possible version of that whole. For example, things such as nuclear families, single-parent families, extended families, clans, etc, as collectives for raising children, can be seen as competing with each other using their own optimization paths, and the one with the greatest total optimality generally wins out over the others, with the other options 'dying out' (though not necessarily going extinct).

* From that, one can see that there is no immediate pressure to achieve the 'best' optimum system; only one that is better than all other competing systems. Thus what is considered acceptable (moral) changes over time as more knowledge is gained and new options are tried, and changes more quickly when there is more competition from other groups at the same social layer.

* Moving aside slightly: There are always competing decision processes. An optimal (ie: moral) one should (usually) not fail when presented with a competing choice; rather, it should out-compete it. In terms of evolutionary game theory, a decision pattern should be stable. A strongly stable pattern should push to extinction all other patterns. A weakly stable pattern should remain dominant even in the face of competition.

Note that a weakly stable pattern may be preferred if the competition that it allows to exist is strongly stable against competition that would destroy the main pattern. That is, suppose type A mostly beats B, type B always beats C, and type C largely beats A. With that mix of options, type B will generally fall to A, and type C will always fall to B, whereas type A can largely hold its own, because the type Bs that it allows to exist will destroy any type Cs that show up. (A rough numerical sketch of this follows at the end of this list.)

* From that, one can understand that not following moral rules is not a weakness of the moral structure. It tests the validity and stability of the dominant structure, keeping it healthy when faced with alternative threats, while also allowing new patterns to emerge that may prove to be more optimal than the current standard. If a pattern can only remain stable as long as there is no anti-pattern (ie: no one who believes differently than the defined moral pattern), it must inevitably crumble.
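
Circling back to the A/B/C example above with rough numbers (the win odds are my own illustrative choices): a quick invasion-fitness check shows why the Bs that A tolerates are what keep C out. Every pairing here is zero-sum (a pair's two win probabilities sum to 1), so the residents' average fitness is always exactly 0.5, and a rare invader gains a foothold only if its expected win rate against the resident mix exceeds that baseline:

```python
# Invasion-fitness sketch for the A/B/C story (toy numbers of my own).
# W[x][y] is the chance that type x beats type y in a pairwise contest.

W = {
    'A': {'A': 0.5, 'B': 0.8, 'C': 0.2},  # A mostly beats B
    'B': {'A': 0.2, 'B': 0.5, 'C': 1.0},  # B always beats C
    'C': {'A': 0.8, 'B': 0.0, 'C': 0.5},  # C largely beats A
}

def invasion_fitness(invader, residents):
    """Expected win rate of a rare invader against a resident mix."""
    return sum(share * W[invader][r] for r, share in residents.items())

print(invasion_fitness('C', {'A': 1.0}))            # 0.80 > 0.5: C overruns pure A
print(invasion_fitness('C', {'A': 0.6, 'B': 0.4}))  # 0.48 < 0.5: the Bs repel C
```

Note that the A/B mix is not itself static: with no Cs around, A slowly erodes B. B persists precisely because Cs keep appearing, which is the same point as the bullet above about anti-patterns keeping the dominant structure healthy.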


3. Application


* From all of the above, it becomes clear that moral patterns define the evolutionary health and strength of a societal group, and are used in the competition between said groups to determine which is likely to persist to the next generation, carrying those morals forward.

* It also provides a basis for religion. Religion is a separate societal layer whose duty is to propagate a specific moral structure. The religious heads are the talekeepers, the literate, the ones who can remember what was learned so that it isn't lost or distorted or forgotten (which also explains their fading usefulness today, as everyone's ability to communicate and remember vastly outstrips past eras). It's also a meta-layer in society. It competes with other religious layer groups for the 'best' moral optimizations, but does not directly compete with most of the social groups that apply those optimizations. Put another way, religion itself is a moral optimization. The main limitation of religion is that it tends to stagnate, refusing to pick up new moral optimizations; on the other hand, it also makes it difficult for destabilizing elements to enter the system.

* For individuals, the 'moral' choice is often less personally optimal than the rational choice. However the rational choice is often more harmful to the other party than it is beneficial to the rational individual. Thus the whole sees less total benefit, and will often try to punish the outlier as a means to get them to adopt the choice that benefits the whole more. (This entire aspect gets very complicated, so I won't get into it much.)
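
To put toy numbers on the punishment point (a standard public-goods setup, with illustrative values of my own choosing): defection pays the individual more than cooperation until the group attaches a sufficient penalty to it:

```python
# Toy public-goods round with optional punishment (illustrative numbers).
# Each cooperator contributes `stake` to a pot; the pot is multiplied by
# `mult` and split evenly among all n players, defectors included.

def payoff(others, i_cooperate, fine=0.0, n=4, stake=10, mult=1.6):
    """My payoff when `others` of the n-1 other players cooperate."""
    cooperators = others + (1 if i_cooperate else 0)
    share = cooperators * stake * mult / n   # everyone gets a pot share
    if i_cooperate:
        return share                         # contributed the stake
    return stake + share - fine              # kept the stake, maybe fined

print(payoff(3, True), payoff(3, False))            # 16.0 vs 22.0: defecting wins
print(payoff(3, True), payoff(3, False, fine=8.0))  # 16.0 vs 14.0: the fine flips it

# Each defection gains the defector 6 but costs the other three players
# 12 between them (they lose their shares of the withheld contribution):
# exactly the 'more harmful to others than beneficial to self' pattern,
# which is why the whole has an incentive to impose the fine.
```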




~~~~ Original version ~~~~

1) Since morality is a decision optimization mechanism when making constrained choices among 2 or more individuals, morality itself becomes an explicitly emergent phenomenon.  It is not a 'natural' aspect of the universe (ie: something that is objectively true regardless of the observer); rather, it is something that can only exist (but also [i]must[/i] exist) when multiple entities making choices interact.  The very fact that there is an optimal choice that can be reached means that there is a 'moral' which could define how to agree to reach that optimal choice.  Note that 'optimal' is only within the bounds of the knowledge of the decision makers; if they do not know that there is a more optimal outcome, they cannot consciously attempt to achieve it.

2) It's very easy to show where selfish/rational decisions lead to 'immoral' choices being made.  It's also easy to show where those same scenarios allow for a greater global benefit if the 'moral' choice was made.  The rational act benefits the self to the greatest degree, while the moral act benefits the whole to the greatest degree.

3) The 'whole' in #2 is an arbitrary and abstract distinction.  It is the group within which the act is relevant, and affects.  It might be the two men of a prisoner's dilemma, a husband/wife, a family, a gang, a village, a country, etc.  Each of these define their own individual application of the decision optimization.

3) From the above, it's clear that 'moral' acts are defined to the benefit of the whole.  As such, it can be implied that that which benefits the whole to the greatest degree should also be an evolutionary pressure that improves the survivability of the whole when compared with other wholes.  From that, one can conclude that 'better' morals are ones that better optimize the decision paths of all parties within the whole in comparison to all other wholes.  That is, the rules that most optimally benefit the whole, and, for efficiency's sake, benefit all wholes that an individual is a part of to the greatest total degree, are the decision patterns that are 'most' moral.

4) From that, every instance of the whole is competing with every other possible version of that whole.  For example, things such as nuclear families, single-parent families, extended families, clans, etc, as collectives for raising children, can be seen as competing with each other using their own optimization paths, and the one with the greatest total optimality generally wins out over the others.  The 'moral imperative' (not Kant's) would thus suggest adhering to the most optimal result.

5) From that, one can see that there is no immediate pressure to achieve the 'best' optimum system; only one that is better than all other competing systems.  Thus what is considered acceptable (moral) changes over time, and changes more quickly when there is more competition from other groups at the same social layer.

6) Moving aside slightly: There are always competing decision processes.  An optimal (ie: moral) one should (usually) not fail when presented with a competing choice; rather, it should out-compete it.  In terms of evolutionary game theory, a decision pattern should be stable.  A strongly stable pattern should push to extinction all other patterns.  A weakly stable pattern should remain dominant even in the face of competition.  Note that a weakly stable pattern may be preferred if the competition that it allows to exist is strongly stable against competition that would destroy the main pattern.  

That is, type A mostly beats B, and type B always beats C, while type C largely beats A.  With that mix of options, type B will always fall to A, and type C will always fall to B, whereas type A can largely hold its own because the type Bs that it allows to exist will destroy any type Cs that show up.

7) From that, one can understand that not following moral rules is not a weakness of the moral structure.  It tests the validity and stability of the dominant structure, keeping it healthy when faced with alternative threats, while also allowing new patterns to emerge that may prove to be more optimal than the current standard. If a pattern can only remain stable as long as there is no anti-pattern (ie: no one who believes differently than the defined moral pattern), it must inevitably crumble.

8) From all of the above, it becomes clear that moral patterns define the evolutionary health and strength of a societal group, and is used in the competition between said groups to determine which is likely to persist to the next generation, carrying those morals forward.

9) It also provides a basis for religion.  Religion is a separate societal layer whose duty is to propagate a specific moral structure.  The religious heads are the talekeepers, the literate, the ones who can remember what was learned so that it isn't lost or distorted or forgotten (which also explains their fading usefulness today, as everyone's ability to communicate and remember vastly outstrips past eras).  It's also a meta-layer in society.  It competes with other religious layer groups for the 'best' moral optimizations, but does not directly compete with most of the social groups that apply those optimizations.  Put another way, religion itself is a moral optimization.  The main limitation of religion is that it tends to stagnate, refusing to pick up new moral optimizations; on the other hand, it also makes it difficult for destabilizing elements to enter the system.

10) For individuals, the 'moral' choice is often less personally optimal than the rational choice.  However the rational choice is often more harmful to the other party than it is beneficial to the rational individual.  Thus the whole sees less total benefit, and will often try to punish the outlier as a means to get them to adopt the  choice that benefits the whole more. (This entire aspect gets very complicated, so I won't get into it much.)


Note that none of the above defines what, exactly, is moral, only the process by which one can determine if something can be said to be moral.  Also note that it is not the same as "the greatest good for the greatest number" (despite it possibly seeming similar to that in some of the points I wrote).  Rather, it is defined by the evolutionary fitness of the societal group which employs that moral structure, when compared against all other possible options.  It may result in a lesser total 'good' than other options, or benefit fewer people, or perhaps have the 'good' deferred to a later time, or other contrary effects.  However the only thing that matters in the end is what works.

Likewise, no single act can ever be defined as intrinsically moral.  An act can only be considered moral with respect to the entirety of the moral structure within which the act is being compared (since optimality is a complex calculation), and mostly only evaluated within the scope of the societal group to which it is being applied (since the societal group influences the moral structure being used, and the decision process is explicitly made towards the optimization of said societal group).  The efficiency of a moral directive (how well it applies across multiple societal groups) does give it a certain degree of a more global evaluation, though.  In addition, certain moral guidelines tend to have an extremely high weight with respect to optimality (eg: murder), which means certain morals will almost always manifest in any societal structure.

This also is a system for which there is no 'motive' for individual morality, any more than there is 'motive' for someone to be right-handed, or have dark skin, or whatever.  There is no need for any individual to act morally, but a group that does not will tend to fall to evolutionary pressures from those that are part of a more optimal/moral structure.  That is, those that are non-optimal will tend to die out, while those who are optimal will tend to thrive and pass on that optimality.


Essentially, I see this as the next step up the ladder of normative ethics, which progresses from virtue ethics to deontology to consequentialism to amoralism.  This takes amoralism and shows that, even though there's no such thing as morals in an empty universe, nor can acts truly be said to be 'right' or 'wrong' absent of any context, morals do still actually exist.  It's just that the math to actually understand how to study and analyze them is a recent thing, and even now is inordinately complex for any real-world analysis.


So, are there any egregious errors in my thought process here?

I assume morality is life, and life is morality.
If there was no being or consciousness, there would be no desire, no feeling, etc.
People cut ties with morality to try to make it look like it can stand on its own.
Well, morality needs life and life needs morality.
Most people dualize and appeal to banal trends of thought.

I agree with most of the points above.

I agree with the evolutionary basis and its emergence with regard to morality.
The critical point is that humans have evolved with a necessary servomechanism.

A servomechanism is a control system that needs a reference to be controlled against, regardless of whether that reference is fixed or fuzzy.

I agree with the principle of optimality.
In order for the servomechanism to work, we thus need a sort of ‘dynamic’ objective optimality [not an oxymoron in this case] to start with.
What follows is that all the sub-optimals must align with the master optimal.
For example, all the sub-optimal functions of all the organs and systems must align with the total human system to facilitate its optimal survival.

The system of morality should follow the above principles and mechanism.
The system of morality needs to cover all, i.e. humans, all living things, the Earth and the Universe.

That has been the only true issue. It has never been an issue of those people being right and these people being wrong. It is merely that the issue of morality and ethics is too complex for Homo sapiens to agree upon. And without agreement, the complexity exponentially increases. The world is saved only when agreement is sought (hence an inherent moral/ethical act).

The concept of “optimal” must include the limits of information on hand. The optimal decision must be the decision that was made considering all information at hand within the time and talent of the decision maker, else the optimal could never be reached by anyone in the real world, ever.

The distinction is not one of “rational” vs “global”, but one of immediate reward vs long term reward.

Morality/ethics takes on the role of being a compass toward the greater long-term optimization. When individual decisions are globally made in specific directions, the baseline good of all individuals increases. That increase is often exponential, because it leads to greater optimizations. But more importantly, it lasts for a greater length of time through each person’s life and thus yields a greater total benefit for each individual over their lifetime.

A bit contradictory.

A) “it’s clear that ‘moral’ acts are defined to the benefit of the whole.”
B) “one can conclude that ‘better’ morals are ones that better optimize the decision paths of all parties within the whole in comparison to all other wholes”

The whole can sustain itself most readily by sacrificing any and all individuals for the sake of maintaining the amassing into a whole (government dependency through fear). One simply must not sacrifice all of them at once (or let them know). False flagging, intimidation, and terrorism are used expressly for that purpose.

What is best for the individuals and what is best for the whole can be antithetical (and in most nations today, are).

You offer contrary proposals.
A) “every instance of the whole is competing with every other possible version of that whole”
B) “nuclear families, single-parent families, extended families, clans, etc, as collectives for raising children, can be seen as competing with each other using their own optimization paths”

Proposal A proposes that the compass toward the most moral/ethical behavior is formed by comparing each possible behavior against each alternative. That seems to be an obviously sensible thing. But proposal B suggests that competition within the whole (nation) by its parts (groups and individuals demanding confrontation, conflict, hostilities, brutality, misery, and death) yields the better decision making process through trial and error … to the death.

You seem to be proposing the age old “let them fight it out” mentality for global decision making as an optimal ethic. There is and always has been an inherent error in that philosophy. But again, the issue is the complexity of such an issue.

The decision to let them fight it out as an optimal must itself be compared to the alternatives. But that cannot be done in the midst of real warring. Thus it is immoral to “let them fight it out” because it prevents the more optimal decision making process of comparing against alternatives. Science does not rise amidst social chaos without being corrupted into anti-science. The stronger infant cannot be found by dropping each into the volcano to see which climbs back out.

That seriously depends on what you are claiming as “better than”. Many systems are entrapping. Once they are established, one cannot discover that something else would be much better and change to it. Yet those systems are often “better than” the incumbent system. The establishment of the FED is a prime example; “something is better than nothing” … depending on which future it brings.

That is just simple minded and not true.

Game theory was proposed out of war games and mathematics. It inherently presumes eternal conflict; “kill or be killed” mentality. It slyly pits everyone against everyone else by subtly suggesting that there is no alternative, such as “just don’t play the game”. It is intentionally established as a part of a higher game called “divide and conquer” used for thousands of years (eg “Gladiators”). It is what is keeping the extremely, unimaginably wealthy very, very high above any and all potential competitors.

You are already in their game. There is only one way to win it.
Stop playing it. Seek agreement.

…great topic, btw. :sunglasses:

Moral results are about optimal outcomes, which we generally place in the category of innovations that reduce undesirable outcomes. You need social cohesion to develop these innovations at optimal paces… and conflict interferes with this process. War takes up so much cognitive space (which is finite on this planet) that it’s almost impossible for someone to figure out how to cure HIV and AIDS, for example; for one thing, it makes it almost impossible to even educate people.

So engaging in war may give you or someone who you rely upon AIDS and this may interfere with your optimal outcome. The best war game theory is to kill someone in their sleep after they’ve confessed to going out to kill, torture and rape people… nobody would ever confess to going to war, and groups would never get together to engage in it, and we’d have no war. Since men are basically the ones that do war, if women were more like the black widow with war confessors (and gender dimorphism plays no role when the person is asleep… they are extremely easy to kill no matter how big or small)… we’d have no war on earth, innovation and education would accelerate and undesirable outcomes would diminish at an accelerated pace.

Simple.

The most intelligent and sensitive people are often the ones who commit suicide, so you want to keep these people here for sure! The way you do this is to reduce sexual stratification, the modification of which will determine the suicide rates in a gender’s population. And boom, world peace.

Simple.

The thing about game theory in general is that prevention always works better than tit for tat.

I would say the complexity increases regardless of agreement. The fact that you agreed on something says nothing about the complexity of the decision process.

Also, saying the world is “saved” by agreement is ambiguous and not well supported, and possibly even nonsensical within the current discussion context.

I think I left that out of this particular attempt to write things up (during the many edits), but yes, there was a clarification that ‘optimal’ is in evolutionary terms for the social grouping that the decision is being made in, not necessarily optimal for the individual or immediate effect.

Also, woops on the misnumbering.

Pretty much agree, there.

To the death, yes, but the death of the social group, not the individual. Taking the example given in reference to the above quote: if nuclear families are more optimal than a communal system of raising children, then the communal system should ‘die out’. Also, there is no need for explicit hostilities between the social groups, only that some of them do ‘better’ (survive, prosper, propagate) than others.

I’m trying to work out if this is actually contradictory, or just a misunderstanding. Remember that there is no single ‘whole’. Family, church group, neighborhood, game club, online forum, etc, etc, all exist, in addition to local and national government. Every action is evaluated within every single social group that it can apply to.

Also, you’re evaluating things at a false level. Your disagreement has to be considered within the scope of long-term evolutionary development in general; the behavior of all other individuals (countries) within the evaluation; whether the specific non-compliance is healthy with respect to the larger compliance pattern or instead dominates it; and how it fares against all other patterns that other individual countries might adopt. Just because one ‘can’ behave that way doesn’t mean it’s optimal; nor does the optimal mean that no one can behave otherwise; nor does having others behave otherwise, even in the face of a large majority selecting the ‘optimal’, mean that the optimal is not optimal.

Basically, ‘optimal’ does not require that 100% of individuals comply, and may only be optimal when less than 100% of individuals comply. This gets into the math side of things, which is difficult to explain concisely in prose.

That is correct, though it gets into an aspect that I’d noted in only a simple manner. From point #2, every single social group is itself an individual in a larger group, so the entire process can be carried out on the group as a whole just as it is for individuals. So how one social group interacts with another is itself a choice, and that choice is analyzed as part of the collective of individual social groups.

The truly optimal method would be to just punch some numbers into a computer and let it run the simulation, and then spit out the results. The next best would be thought experiments that would allow us to select the best choice. After that we’re left with trial and error, pretty much. Since we don’t yet have the first as an option, we’re mainly stuck with the latter two, for now. Or just stick with whatever worked before.
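
For a small taste of what that first option looks like in practice (an illustrative sketch in the spirit of Axelrod's iterated prisoner's dilemma tournaments; the strategies and payoffs are textbook standards, not anything specific to this thread):

```python
# Tiny round-robin tournament of decision patterns in an iterated
# prisoner's dilemma (textbook payoffs; self-play included).

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(opp): return 'C'
def always_defect(opp):    return 'D'
def tit_for_tat(opp):      return opp[-1] if opp else 'C'    # mirror their last move
def grudger(opp):          return 'D' if 'D' in opp else 'C' # never forgive a defection

def match(s1, s2, rounds=200):
    """Total payoff earned by s1 when the two strategies play repeatedly."""
    h1, h2, total = [], [], 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)       # each reacts to the other's record
        total += PAYOFF[(m1, m2)][0]
        h1.append(m1)
        h2.append(m2)
    return total

strategies = [always_cooperate, always_defect, tit_for_tat, grudger]
totals = {s.__name__: sum(match(s, other) for other in strategies)
          for s in strategies}
print(totals)
# The retaliatory-but-cooperative patterns (tit_for_tat, grudger) end up
# ahead of pure defection here: the sort of result such a run 'spits out'.
```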

Well, this is an evolutionary system, not an individual one. There is no requirement that a given individual be able to change to another option, only that other options must exist, and that all of them are evaluated over time. One might consider that a meta-morality: that attempts to rig the system so that it can’t work properly are considered ‘bad’.

Umm… This is mathematically provable? Not sure what your exact objection here is.

It only presumes eternal conflict if you decide to define every decision made as a form of conflict. Game theory is not a ‘game’ in the sense that you can opt in or out; it’s a descriptive framework for analyzing interdependent choices. You can’t “not play the game” unless you both never make a decision, and never interact with another person (even indirectly, simply due to your presence or knowledge of your existence).

While it might be used to analyze “divide and conquer” or whatever, that’s not what it ‘is’. The modern study of game theory was founded on research into economics, not war games. And I’m not even sure where you’re going with the bit on the wealthy.

Basically, that entire paragraph is a whole lot of nonsense.


Rewriting things to clarify some bits and address points brought up (unfortunately, the forum doesn't appear to allow spoilers for hiding long bits of text).  Not sure I got everything, but hopefully can clarify more later.  Will update the original post rather than add yet more wall-of-text here.

I'll try to address just single points at a time in further posts.

Kinemaxx,

Families don’t work better for raising children than communities. It’s bad game theory to use family psychology. People die, and people are ill-equipped psychologically to raise people responsibly; a community can absorb whatever a family cannot… families are clans and tribes, and this causes war; communities don’t have clans and tribes.

That was an example to attempt to show comparable/competitive social elements, to try to illustrate the point being made, not an assertion of fact. Plus, you’re mixing up multiple contexts; if you’re considering tribes and war between tribes, then the optimality parameters are notably different from those of the modern day. Remember that I’m positing an evolutionary model for morality, and what was optimal at one point in time may not be optimal at another point in time, as both knowledge and context change.

That’s bullshit… optimal game theory holds for all contexts (meaning times).

Game theory includes the understanding of limited information, which is the case for all non-omniscient beings. Different times will have different levels of information available to them, and thus reach different results.

Also, different contexts (ie: different times, capabilities, and resources) will have different optimality constraints, so even with omniscience, you can still end up with different optimal results.

So your statement is flawed from multiple perspectives.

“Everyone go out and try to kill everyone else. Whoever is left alive defines morality!!” :open_mouth:

:icon-rolleyes:

Cute joke, but not a legitimate interpretation. On the other hand, I’m not entirely certain of your intent, nor exactly to whom you were addressing it (limits of internet forums and such), so am explicitly acknowledging it as such.

This is called the homicidal tension of the environment. Another way I’ve seen this phrased is: “War doesn’t decide who is right, only who is left.” You’d have to have complete faith in evolution to trust homicidal tension to determine fitness. But anyone with a basic understanding of evolution knows that there is runaway sexual selection that drives species to extinction.