So, I’ve been giving this a lot of thought, and I need a sounding board from people who spend more time on this to gauge whether the concept actually holds up. While this deals with math, it’s more about philosophy, so I’m putting it in this forum. I’ll try to avoid rambling too much, but I do want to be specific about the details that differ from the commonly referenced uses.
First, the application of game theory to ethics. This paper does a pretty good job of detailing most of the mechanics of it (though it’s missing a few bits), so to avoid trying to repeat all that, I’ll just point to it for the basic mathematics of the idea for anyone who is unfamiliar with it.
The paper does touch on one thing that is a large basis for my theorizing: evolutionary game theory. Namely, you can reach a globally optimal (Pareto-efficient) outcome if all the people participating in the decision process agree to act in a collectively beneficial way. However, the paper faults this approach for its inability to define what we consider to be moral results. It treats that as a failure, when in actuality it seems to miss the point of what morals actually are. My assertion is that morals are not ‘right’ or ‘wrong’, they are ‘optimal’, and the failure in the paper was in thinking that morals have to somehow be justifiable, and that morals must necessarily apply solely to the utility functions of individuals.
Essentially, ‘morality’ itself can be defined as an optimization mechanism for the decision-making process that has already been ‘agreed on’, without needing to actively negotiate the decision parameters with each person each time; alternatively, it is the means by which to motivate people to select the globally optimal outcome rather than the Pareto-inefficient ‘rational’ one. When viewed this way, many other conclusions follow immediately.
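To make that gap concrete, here is a minimal sketch of the standard prisoner's dilemma (the payoff numbers are my own illustrative choice, not taken from the paper), showing how the individually 'rational' reply lands both players on the Pareto-inefficient outcome while mutual cooperation is the globally better one:

```python
# Toy prisoner's dilemma. Payoffs are (row player, column player); the
# numbers are illustrative, not taken from the paper linked above.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
ACTIONS = ("cooperate", "defect")

def best_response(opponent_action):
    """The individually 'rational' reply to a fixed opponent choice."""
    return max(ACTIONS, key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Whatever the opponent does, defecting is the best reply...
print({opp: best_response(opp) for opp in ACTIONS})
# {'cooperate': 'defect', 'defect': 'defect'}

# ...so the 'rational' outcome is mutual defection with total payoff 2,
# while mutual cooperation -- the 'moral' choice -- totals 6.
print(sum(PAYOFFS[("defect", "defect")]), sum(PAYOFFS[("cooperate", "cooperate")]))
```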
1. Basic definitions
* Since morality is a decision optimization mechanism when making constrained choices among 2 or more individuals, morality itself becomes an explicitly emergent phenomenon. It is not a 'natural' aspect of the universe (ie: something that is objectively true regardless of the observer); rather, it is something that can only exist (but also [i]must[/i] exist) when multiple entities making choices interact. The very fact that there is an optimal choice that can be reached means that there is a 'moral' which could define how to agree to reach that optimal choice. Note that 'optimal' is only within the bounds of the knowledge of the decision makers; if they do not know that there is a more optimal outcome, they cannot consciously attempt to achieve it.
* It's very easy to show where selfish/rational decisions lead to 'immoral' choices being made (within the current common conception of morality). It's also easy to show that those same scenarios allow for a greater global benefit if the 'moral' choice is made instead. The rational act benefits the self to the greatest degree, while the moral act benefits the whole to the greatest degree.
* The 'whole' is an arbitrary and abstract distinction. It is the group within which the act is being evaluated. It might be the two players of a prisoner's dilemma, a husband and wife, a family, a gang, a village, a country, etc. Each of these defines its own individual application of the decision optimization.
* From the above, it's clear that 'moral' acts are defined to the benefit of the whole. However, 'benefit' must be defined at evolutionary timescales, not instantaneous ones: whatever benefits the whole to the greatest degree acts as an evolutionary pressure that improves the survivability of that whole when compared with other wholes. From that, one can conclude that 'better' morals are ones that better optimize the decision paths of the parties within the whole in comparison to all other wholes. That is, the rules that most optimally benefit the whole, and, for efficiency's sake, benefit all wholes that an individual is a part of to the greatest total degree, are the decision patterns that are 'most' moral.
* Note that optimization does not require 100% participation. In fact, optimization must necessarily account for less than 100% participation, and therefore the optimum might only actually be optimum when there is less than full participation. More generally, a moral might only be optimum within a specific range of participation (eg: 70%-100%), and not be optimum outside of that range. (A toy sketch of this follows the list.)
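As a rough illustration of that last point, here is a minimal sketch using a threshold public-goods game; the cost, benefit, and 70% cutoff are hypothetical numbers chosen only to mirror the range mentioned above:

```python
# A minimal sketch of the participation-range point, using a threshold
# public-goods game. The cost, benefit, and 70% threshold are hypothetical,
# chosen only to mirror the 70%-100% range mentioned above.

def average_payoff(participation, cost=1.0, benefit=4.0, threshold=0.7):
    """Average per-member payoff when `participation` is the fraction
    following the 'contribute' norm. Each follower pays `cost`; the shared
    `benefit` is only produced if participation reaches `threshold`."""
    produced = benefit if participation >= threshold else 0.0
    return produced - participation * cost

for p in (0.0, 0.3, 0.6, 0.7, 0.9, 1.0):
    print(f"participation={p:.0%}  average payoff={average_payoff(p):+.2f}")
# Below the threshold the norm is a net loss (followers pay the cost for
# nothing); from 70% up it clearly beats having no norm at all, i.e. the
# rule is only 'optimal' within a particular participation range.
```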
2. Progression
* Every instance of a whole is competing with every other possible version of that whole. For example, things such as nuclear families, single-parent families, extended families, clans, etc., as collectives for raising children, can be seen as competing with each other using their own optimization paths, and the one with the greatest total optimality generally wins out over the others, with the other options 'dying out' (though not necessarily going extinct). The 'moral imperative' (not Kant's) would thus suggest adhering to the most optimal result.
* From that, one can see that there is no immediate pressure to achieve the 'best' optimum system; only one that is better than all other competing systems. Thus what is considered acceptable (moral) changes over time as more knowledge is gained and new options are tried, and changes more quickly when there is more competition from other groups at the same social layer.
* Moving aside slightly: there are always competing decision processes. An optimal (ie: moral) one should (usually) not fail when presented with a competing choice; rather, it should out-compete it. In terms of evolutionary game theory, a decision pattern should be evolutionarily stable. A strongly stable pattern should push all other patterns to extinction. A weakly stable pattern should remain dominant even in the face of competition.
Note that a weakly stable pattern may be preferred if the competition that it allows to exist is strongly stable against competition that would destroy the main pattern. That is, type A mostly beats B, type B always beats C, and type C largely beats A. With that mix of options, type B will always fall to A, and type C will always fall to B, whereas type A can largely hold its own because the type Bs that it allows to exist will destroy any type Cs that show up. (A toy simulation of this follows the list.)
* From that, one can understand that not following moral rules is not a weakness of the moral structure. Such deviation tests the validity and stability of the dominant structure, keeping it healthy when faced with alternative threats, while also allowing new patterns to emerge that may prove to be more optimal than the current standard. If a pattern can only remain stable as long as there is no anti-pattern (ie: no one who believes differently than the defined moral pattern), it must inevitably crumble.
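Here is a toy replicator-dynamics sketch of the A/B/C story above. The payoff matrix is my own illustrative choice (not derived from any source): A does somewhat better than B head-to-head, B crushes C, and C does much better than A. With these particular numbers, a mostly-A population that tolerates a B minority drives C out, while a pure-A population with no B gets overrun:

```python
import numpy as np

# Row strategy's payoff against the column strategy (illustrative numbers).
#          vs A  vs B  vs C
PAYOFF = np.array([
    [3.0, 5.0, 1.0],   # A: mostly beats B, loses badly to C
    [4.0, 1.0, 6.0],   # B: always beats C
    [4.0, 0.0, 2.0],   # C: largely beats A
])

def evolve(x, steps=20000, dt=0.01):
    """Euler-discretized replicator dynamics: dx_i/dt = x_i * (f_i - mean fitness)."""
    x = np.array(x, dtype=float)
    for _ in range(steps):
        fitness = PAYOFF @ x          # each strategy's average payoff
        x += dt * x * (fitness - x @ fitness)
    return x

# Mostly-A population that tolerates a B minority: C is driven out and A
# stays dominant (roughly 80% A / 20% B with these numbers).
print(evolve([0.7, 0.1, 0.2]).round(3))

# The same mostly-A population with no B at all: C invades and takes over,
# illustrating why the 'weakly stable' pattern can be preferable.
print(evolve([0.9, 0.0, 0.1]).round(3))
```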
3. Application
* From all of the above, it becomes clear that moral patterns define the evolutionary health and strength of a societal group, and are used in the competition between said groups to determine which is likely to persist to the next generation, carrying those morals forward.
* It also provides a basis for religion. Religion is a separate societal layer whose duty is to propagate a specific moral structure. The religious heads are the talekeepers, the literate, the ones who can remember what was learned so that it isn't lost or distorted or forgotten (which also explains their fading usefulness today, as everyone's ability to communicate and remember vastly outstrips past eras). It's also a meta-layer in society. It competes with other religious layer groups for the 'best' moral optimizations, but does not directly compete with most of the social groups that apply those optimizations. Put another way, religion itself is a moral optimization. The main limitation of religion is that it tends to stagnate, refusing to pick up new moral optimizations; on the other hand, it also makes it difficult for destabilizing elements to enter the system.
* For individuals, the 'moral' choice is often less personally optimal than the rational choice. However, the rational choice is often more harmful to the other party than it is beneficial to the rational individual. Thus the whole sees less total benefit, and will often try to punish the outlier as a means to get them to adopt the choice that benefits the whole more. (This entire aspect gets very complicated, so I won't get into it much; a small sketch of the punishment effect follows the list.)
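As a minimal sketch of that punishment effect (reusing the same illustrative prisoner's-dilemma numbers from earlier, with a hypothetical group-imposed sanction): once the expected penalty for defecting exceeds the temptation gain, cooperating becomes the individually rational choice as well as the 'moral' one.

```python
# Row player's payoff in the toy prisoner's dilemma used earlier.
BASE = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def my_payoff(me, them, sanction):
    # The group deducts `sanction` from anyone who defects.
    penalty = sanction if me == "defect" else 0
    return BASE[(me, them)] - penalty

def best_response(them, sanction):
    return max(("cooperate", "defect"), key=lambda me: my_payoff(me, them, sanction))

for sanction in (0, 3):
    replies = {them: best_response(them, sanction) for them in ("cooperate", "defect")}
    print(f"sanction={sanction}: {replies}")
# With no sanction, 'defect' is always the best reply; with a sanction of 3
# (larger than the 2-point temptation gain), 'cooperate' becomes the best
# reply, so the cooperative norm can hold without constant renegotiation.
```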
Note that none of the above defines what, exactly, is moral, only the process by which one can determine whether something can be said to be moral. Also note that it is not the same as "the greatest good for the greatest number" (despite it possibly seeming similar in some of the points above). Rather, it is defined by the evolutionary fitness of the societal group which employs that moral structure, when compared against all other possible options. It may result in a lesser total 'good' than other options, or benefit fewer people, or perhaps have the 'good' deferred to a later time, or other contrary effects. However, the only thing that matters in the end is what works.
Likewise, no single act can ever be defined as intrinsically moral. An act can only be considered moral with respect to the entirety of the moral structure within which the act is being compared (since optimality is a complex calculation), and, for the most part, only evaluated within the scope of the societal group to which it is being applied (since the societal group influences the moral structure being used, and the decision process is explicitly made towards the optimization of said societal group). The efficiency of a moral directive (how well it applies across multiple societal groups) does give it a certain degree of a more global evaluation, though. In addition, certain moral guidelines tend to have an extremely high weight with respect to optimality (eg: prohibitions on murder), which means certain morals will almost always manifest in any societal structure.
This is also a system in which there is no 'motive' for individual morality, any more than there is a 'motive' for someone to be right-handed, or have dark skin, or whatever. There is no need for any individual to act morally, but a group that does not will tend to fall to evolutionary pressures from those that are part of a more optimal/moral structure. That is, those that are non-optimal will tend to die out, while those who are optimal will tend to thrive and pass on that optimality.
Essentially, I see this as the next step up the ladder of normative ethics, which progresses from virtue ethics to deontology to consequentialism to amoralism. This takes amoralism and shows that, even though there's no such thing as morals in an empty universe, nor can acts truly be said to be 'right' or 'wrong' absent of any context, morals do still actually exist. It's just that the math to actually understand how to study and analyze them is a recent thing, and even now is inordinately complex for any real-world analysis.
So, are there any egregious errors in my thought process here?