Functional Morality

Totally understandable, given that our conversation has become spaced out. I don’t think it harms the discussion, and your response was still good and well appreciated.

I’d like to start with something you say halfway through, because it’s a nice analogy and touches on many of your other points:

This is a good analogy because it distinguishes our positions well. My attempt here is to provide “the best description” of morality. You say that you are “not sure why [you] have an obligation to go against [your] nature and view morality or preferred social relations as to be evaluated only in terms of survival”, and my response is that that is just what it means to have a moral obligation. Insofar as “morality” means anything, insofar as saying that one “ought” to do something means anything, it means that doing that thing will advance survival, for oneself or one’s tribe.

And I agree with your observation that “[w]hat got selected for was a species that framed moral issues in other ways”. So too was flavor selected for rather than nutrients, and instinctive fear rather than insect biology, and pleasure and pain rather than reproduction and anatomy. And just as we have used the study of nutrition to recognize that some things that taste good are nonetheless harmful, and that some insects that scare us are nonetheless harmless, and that some things that feel good are bad and others that hurt are good, so too can we decide to overcome our moral intuitions in favor of an explicit morality that, while devoid of romance, is empirically rigorous.

I’ve been reluctant to narrowly define survival for two reasons:

  1. I don’t think it matters. If there’s a moral instinct, it comes from where all of our innate traits come from: a heritable pattern of thought and behavior that led our ancestors to survive. Regardless of how much of that is genetic and how much cultural, how much it operates on the individual and how much on the group, and regardless of the many particulars of what such survival may entail, inherited traits can only be inherited where they lead to there being an heir to inherit them.

  2. I am unsure of where morality functions, i.e. what thing’s survival it’s influencing. On the one hand, certain parts of the inheritance must be genetic, but I am unsure how much. I am unsure, for example, whether a group of people left to their own devices would benefit from the inherited mental machinery that, when it develops within a culture, leads to a positive survival impact. If the group itself is part of the context for which the moral machinery of the brain evolved, then it’s not just the genes that produce that machinery that matter; the group itself also matters. I tend to think that’s the case (thus my concern that the “society” continue, and not just genetic humans), but I’m uncertain about it. That uncertainty leads me to want to leave this as an open question. Does this undermine point #1?

First, I’ll note that this is a bit question-begging. A solution is dystopian in part for violating some moral principle, so to some extent this smuggles in intuitive morality as a given.

Second, as I said above, I think intuitive morality will fail us more and more frequently as time goes on. To use a near-term example that you bring up: in the past, we just didn’t know what genetic pairings would produce good or bad outcomes, so we left it to chance and instinct. But chance and instinct frequently misled us, and we ended up with immense suffering over the course of history as a result. Pre-modern societies simply killed children who didn’t develop right, and many women died in childbirth as the result of genetic abnormalities in their developing babies. So if we suggest that greater deliberation about or intervention in genetic pairings is somehow immoral going forward, we need to weigh that judgment against the immense suffering that still happens when we leave these outcomes to chance.

I’m not arguing in favor of such intervention; rather, I mean to say that merely knowing, merely developing the ability to predict genetic outcomes in advance, requires us to make a moral decision that we never had to make before. It may be creepy to centrally control or regulate genetic pairing, but if we know that (a + b) will create a resource-hungry and burdensome locus of suffering, and (a + c) will create a brilliant and productive self-actualized person who will spread happiness wherever she goes, there is at least as strong an argument for the creepiness of not intervening. (Note that I don’t use “creepy” in the pejorative sense here; I intend it as shorthand for the intuitive moral reaction, and, subjectively, I think it captures what intuitive moral rejection feels like.)

So, I reiterate the point I made above: our intuitions are bad at the future, because they are the intuitions of savanna apes, and not of globe-spanning manipulators of genetic inheritance. We will need more than intuition to make sense of these questions.

My response is as you would expect: I think those things aren’t particularly functional, since a large underclass of people without “dignity, sense of self, fairness”, etc. leads to things like the current collapse of global institutions (and, relevant to my discussion of the meaning of ‘survival’ above, institutions are beneficial to group survival). I think that’s always likely to be the case. Moreover, using fully functional humans, whose brains are some of the most powerful computers available, to do busywork is a waste of resources. I expect a society optimized to plug in all of humanity will be both functional and generally pleasant for its inhabitants.

But functional morality is ultimately a meta-ethical system; it admits of a lot of debate about which specific moral positions are permitted or best achieve its goals. I think most nightmare scenarios are likely either to fail to optimize functionality or to be scenarios with which all moral systems struggle equally (see the discussion of the consequences of genetic intervention above).