The Ends of Utility

If happiness and suffering are instrumental to fitness at all, they aren’t merely so. The lived experience of human life, the fulfillment and suffering therein, seems much more fundamental to my sense of morality than yesteryear’s fitness markers.

One could conceive of an environment where to be fit is to be a slave, to suffer indignities and misfortune by design. What then is the value of fitness?

Also, evolutionary fitness is a good-enoughness for the present environment. Fitness is not optimization at all. And I should hope there is more to goodness than just whatever would lead to biological immortality and self-reproduction.

Sauwelios, your objection seems to be to utilitarianism itself, simply rejecting the idea that it is happiness and suffering which form the basis of morality, and instead presenting the ‘will to power’ or some other form of individual strength as the proper basis. Is that right?

If so, whatever validity that argument has, I think it’s beside the point of my argument here. My argument here is that, given that utilitarianism is correct and we should value utility as a good, the next logical step is to look at the sources of utility, and when we do we find fitness etc. as their aim. While I think my argument does somewhat undermine utilitarianism (since it relies in part on the places where utilitarians have always qualified pure utility, e.g. with the distinction between high and low pleasures), I really do mean to make the case that it flows from utilitarianism itself, rather than to challenge the utilitarian premises.

I’m not sure what relation fitness and will to power have. Arguably they are the same, although above I described my conclusion as being that utilitarianism requires that we value “fitness-for-all”, and that seems incompatible with an individualistic notion like what I think you’re getting at. Maybe you have in mind a reductio argument that would require us to be Straussian individualists, even starting from utilitarianism’s premises? If so, I’d be interested to see it. It doesn’t seem obvious to me from here.

I think that’s right. But my point here is to place happiness and suffering, as subjective experiences (as well as the empathetic experience of the happiness and suffering of others), into the amoral evolutionary context. At some point in human evolutionary history, maximizing utility would have been synonymous with maximizing fitness: utility as a subjective experience evolved because it improved the fitness of our ancestors.

But now, the alignment is a bit off. We eat too much sugar, we do too many drugs, we live in a world of abundance with a subjective experience evolved for a world of scarcity. Utilitarianism takes this into account; Mill rejects hedonism because it’s the wrong kind of happiness, and modern utilitarians will refer to the discount rate to justify long-term planning over short-term experience. But those asterisks are really just to bring utility back in line with its original aim, i.e. fitness.

For those purposes, I think we can remain agnostic about the situations you present. Whether chess playing is good depends on whether it’s a high or low pleasure, right? Is a pickup artist getting more pleasure than the pain they’re causing attractive women? How much value do we give to the future subjective happiness of the products of stealthing, relative to the pain incurred by an unwanted pregnancy? These are hard questions for utilitarianism, so it’s not a defeater if they’re also hard to resolve on the grounds of fitness.

Again, I’ll plead agnostic. Fitness is often a hard question. If someone is dying, preventing them from dying is going to improve their fitness, ceteris paribus. But whether freedom will improve happiness or fitness is not an easy question. I’d think (though this may owe more to my political biases than to any cogent reasoning) that the human evolutionary niche is one in which civil liberties, mutual respect, cooperation, and other liberal values are the most likely to increase the net fitness-for-all. But there are other evolutionary strategies, and certain forms of authoritarianism might be better (but then, arranged marriages seem by some metrics to be happier than freely chosen pairings).

Excellent hook.

Doesn’t this apply equally to utility, though? Or is the Brave New World a utopia, where to be a happy epsilon-minus is to be a slave, to suffer indignities and misfortune (literally) by design?

But my point here is just that the thing utilitarians are using as the hallmark of goodness just is something that “would lead to biological immortality and self-reproduction”. It’s just that utilitarianism picks out one or two of those things, rather than everything.

No, I certainly don’t think that is simply right. So for the sake of argument, I will just say “No, that’s not right.”

I agree with this, but don’t think it goes far enough. I think there is also an even further logical step.

I don’t think the term “Straussian individualist” hits the mark. For me it’s about the species:

“What we have to determine upon, Carleas, is what makes the propagation of our species good.”

But what makes the species good, in my (“our”) view, is a choice of its individuals:

“And indeed, we can come up with no better answer than our pleasure in our picture of those who have inscribed their names and the question marks behind those names most intriguingly into the annals of history.”

Morality, the true morality, human morality, is the imposition of the ideal (εἶδος, speciēs) of man these individuals represent.

Unless such imposition is not a matter of choice. In your own words, it predates such. Who can make such a distinction? Is that a question of the will? The power to will? The will to power is further down the line and, as such, withstands analysis.

I do what I can.

Though it’s beside the point, I’ve always found utilitarianism problematic. But how would you answer the original question?

Regardless, evolutionary fitness lacks dimension as a moral standard. Fitness doesn’t even require a nervous system, much less consciousness.

Sorry about that; I’m not sure what role I intended the word “simply” to play in my response, but it now reads as unnecessary at best, and condescending at worst.

I agree. I now think that talking about fitness at all was a mistake on my part, since it’s clearly been a distraction from the point I wanted to make here, which is just that utility is a demonstrably instrumental good, and that that poses problems for utilitarianism on its own terms. What utility is instrumentally achieving is beside the point.

And while fitness is easiest to show, I agree that we can go further, though I think we disagree on the next step. I read your description of true morality to still be a form of challenge to the premises of utilitarianism. My next step would be to continue in the same way, still basically accepting utility and even fitness, but looking for the more general concept of which evolutionary fitness is an instance.

I answer it only by pointing out that the question is a challenge to utilitarianism, not to my ‘ends of utility’ argument. That’s a bit of a cop-out, but I think it’s legitimate for my purposes. Utilitarianism can be criticized for leading to absurd results (like that epsilon-minuses are a net good), and anywhere that that’s the case I would expect that whatever system follows from my argument here could be criticized in the same way.

So, if you aren’t convinced by utilitarianism, you aren’t likely to find my argument convincing. But if you take utilitarianism to be true, my argument should still work, because it assumes the truth of utilitarianism. And there should be some critiques of utilitarianism that are resolved in systems that follow from my argument here, e.g. that the distinction between high and low pleasures is ad hoc, and better justified by fitness than by utility.

I agree, but I don’t find that to be a compelling criticism. Our demand for a nervous system and consciousness in order to recognize moral worth is anthropocentric.

I didn’t feel condescended to, so no apology necessary.

Not sure what such a concept would look like, but I think that would be even worse. I can bear with utilitarianism to an extent, as I too think happiness and suffering must form the basis of morality–though I may be thinking of a different kind of happiness and suffering than utilitarians tend to. In fact, it’s by filling in the kind of happiness (in case it’s left open) or changing it (in case it’s not) that I wish to correct utilitarianism.

My problem with utilitarianism is that it would be fine with man’s evolving into something non-human–devolving into something sub-human, if we dare make a value judgment. And now you seem to suggest it would, or should, even be fine with man’s changing into something inorganic (“the more general concept of which evolutionary fitness is an instance”), as long as it or its type persists (a type of stone, for instance).

Utilitarianism is not that simple, and I think you add more premises with your new proposal that utilitarians can reasonably object to. I also don’t see how a case like the epsilon-minuses is necessarily resolved by the fitness proposal, which seems to me to broaden even further the domain of that which is “moral.” What’s more, I don’t imagine it would be very difficult for a utilitarian to argue that epsilon-minuses have a lower quality of happiness–had they not been stunted by design, they would have contributed more high-quality happiness. How can the fitness argument resolve the epsilon-minus absurdity?

The comment was not about moral worth, but moral agency and the capacity to care at all–to have a “moral life.” What would you say is required to be a moral agent? Must moral agents be aware of themselves or their environment? Or must they merely react to stimuli?

Finally, do you argue that fitness is an intrinsic good? If not, then fitness for the sake of life, and life for the sake of what? What is it about life that justifies its continuance in any possible form?

I think this question is going to be forced in the not-too-distant future: are super-intelligent machines more morally worthy than humans? If we assume (and I think it’s a reasonable assumption) that any human quality we prize can be instantiated ‘in silico’, is it possible to create an artificial life or mind that has more moral value than a human? Or must some kind of loyalty to our genes be baked into our conception of moral worth?

This seems particularly interesting to me as we approach a time when humans may be generally less able than machines for the most important tasks. But since you mention that a moral project that produces sub-humans might be rejected, I wonder what you think of one that would value super-humans, especially when created rather than evolved.

To start, do you agree that utility in humans is instrumental, i.e. we experience it because it is useful? If not, why not? If so, do you think we can draw any conclusions from that about the role of utility in moral reasoning? My argument here is just that 1) utility is instrumental, 2) it’s instrumental to fitness, and therefore 3) fitness must supersede utility in moral reasoning (or put another way, that arguments in favor of utility are actually arguments in favor of fitness).

To your question, I think going to fitness changes, but doesn’t resolve, the epsilon-minus problem. I agree there are ways of avoiding the epsilon-minus problem for utilitarianism, although they have some problematic consequences (see Singer on the mentally disabled). Fitness admits of similar moves, and they are similarly problematic.

Isn’t this a separate question? Children are at best partial moral agents, and most animals aren’t moral agents at all, even though we have every reason to believe they are conscious and have subjective experiences of pleasure and pain. The question of who is a moral agent does not seem to track (and probably shouldn’t track) the question of what gives something moral worth. This again is as much a problem for a system centered on utility as for one centered on fitness.

No, I wouldn’t say fitness is an intrinsic good. As I suggested in my response to Sauwelios earlier, I would use a similar line to argue that fitness, like utility, is instrumental to some other end. And I don’t think that’s a problem for my position: if it’s instrumental towards something else, that other thing must be where we should look for the intrinsic good.

Only if super-human qualities can be instantiated “in silico”–that is, if it’s possible to evolve (“sursumvolve”, evolve upward) beyond the human in the essential respects. Otherwise, it would at most be possible to create an artificial life or mind that has as much moral value as a human.

Not to our genes, I think, no; to our “phenes” (appearance, shape, build), perhaps.

What do you mean by “the most important tasks”? I think the most important task is grappling with the most important questions–i.e., philosophy.

Anyway, I think what’s crucial is the value judgment that distinguishes between human and sub-human, and thereby also, potentially, between human and super-human. A moral project that would value super-humans already distinguishes between human and non-human in a way that ranks the former above the latter. A super-human being would simply be a being that’s even more human than “a human being”.

In my view, the all-important question is: What is human, in the positive sense (i.e., not in the sense of “only human” or “human, all-too-human”)?