The Equality of Time: An Opportunity for Inter-Personal Comparisons of Utility?

As an enthusiastic amateur approaching philosophy from a background in engineering race cars, I’ve taken a real interest in the problem of interpersonal comparisons of utility, which strikes me as a problem that must have a solution. Below is my proposal for a possible resolution, on which I’m looking for feedback, or enough criticism to stop me pursuing it further!

Short summary:

  • Starting from Bentham’s dictum, which states that every life should count equally in a utilitarian framework, it seems we need something to anchor our utility comparisons to: something that all humans can be thought to value equally. For me, this is ‘life itself’.

  • This doesn’t feel like a controversial position when held up against an idea like ‘equal access to healthcare for all’, which is the norm in most modern liberal societies. Similarly, attempts to argue for preserving one person’s life over another’s solely because of their traits or circumstances (as opposed to, say, their ability to help others) tend not to hold much weight.

  • From here, we can argue that the units of a life’s conscious experience are the units of time: seconds, minutes or hours.

  • Here we run into a problem: we can’t take any specific period of someone’s life as our anchor, because those periods almost certainly vary in value to the person experiencing them. A trip to the dentist is likely valued less than a wedding day, and time spent doing admin at home is valued differently from being paid to do something equally dull at work. However, if we think about time in more abstract terms, treating an hour of a life as made up of equal proportions of every hour that person is alive, we can arrive at an average ‘value’ for that time (formalised in the sketch just after this list).

  • I contend that the opportunity cost of losing a fixed length of someone’s life, thought of in these more abstract terms, should be considered equal in value across individuals.

  • How might this help in making interpersonal comparisons practically? Consider a question of the form ‘how much would you pay to avoid needing an extra 100 hours of sleep, distributed evenly throughout the rest of your life?’ In this scenario, the other activities in your life would have to shrink to accommodate this extra period of unconsciousness, limiting your time at work, with friends and family, or doing your hobbies. This can be thought of as a pure waste of life’s experiences. Now imagine a billionaire puts a monetary value of $100,000 on this loss of conscious life, while someone on a more modest income values it at $100. We can then state that these amounts of money have equal subjective value to the individuals in question, such that if we wanted to cause each of them equal harm (perhaps through taxation), we would require them to pay those amounts.

  • From this point, we can create models for the value of other quantities of money, or of any other commodity, through conventional means, all anchored to this point of comparison (one possible sketch follows below).
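To make the bullets above concrete, here is a minimal formal sketch; the symbols are mine, not established notation. Write $U$ for the total utility of a person’s conscious life and $T$ for its length in hours, so the abstract ‘average hour’ of the third and fourth bullets is worth

$$\bar{v} = \frac{U}{T}.$$

The anchoring claim of the fifth bullet is that $\bar{v}$ is the same for everyone, so a loss of $\Delta T$ hours, spread evenly across a life, costs each person the same $\Delta T \cdot \bar{v}$. The sleep example then reads each stated payment $p_i$ as the sum solving

$$u_i(w_i) - u_i(w_i - p_i) = \Delta T \cdot \bar{v},$$

where $u_i$ is person $i$’s utility-of-money curve and $w_i$ their wealth, giving $p = \$100{,}000$ for the billionaire and $p = \$100$ for the person of modest means.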
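And a minimal code sketch of the last bullet, assuming (purely for illustration) log utility of wealth and invented wealth figures; neither assumption comes from the proposal itself:

```python
import math

# Anchor: losing 100 hours of evenly-distributed life is defined as
# exactly 1 "anchor util" for everyone. Log utility of wealth is an
# illustrative assumption, not part of the original proposal.

def calibrate_k(wealth, payment):
    """Scale factor k so that k * (ln(wealth) - ln(wealth - payment)) = 1."""
    return 1.0 / (math.log(wealth) - math.log(wealth - payment))

def loss_in_anchor_utils(amount, wealth, k):
    """Convert losing `amount` dollars into anchor utils for this person."""
    return k * (math.log(wealth) - math.log(wealth - amount))

# Invented wealth levels; the payments are the ones from the post.
k_rich   = calibrate_k(wealth=1_000_000_000, payment=100_000)
k_modest = calibrate_k(wealth=50_000,        payment=100)

print(loss_in_anchor_utils(100_000, 1_000_000_000, k_rich))    # 1.0 by construction
print(loss_in_anchor_utils(100,     50_000,        k_modest))  # 1.0 by construction

# Other sums can now be compared across the two people on one scale:
print(loss_in_anchor_utils(10_000, 1_000_000_000, k_rich))     # ~0.1
```

Once the two curves are calibrated at the anchor point, any other dollar amount can be translated into the common unit, which is what the last bullet is claiming.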

This is, of course, not particularly rigorous, but I hoped it would be a suitable introduction for a forum like this. Happy to expand on any area if it would help!

Sounds like a good idea, and you owe me $0.10.


Interesting idea here. I am not one to go in for utilitarianism, because I see it as a deeply flawed way of looking at the world, especially where human life and society are concerned. But your attempt to find a common point of reference, and then to show how it could be compared through personal valuations of cost, is pretty cool, I must say.

I would only critique it in the sense that adding more hours of sleep to one’s life is not always a bad thing. Not everyone wants to minimize sleep as much as possible so they can “do more stuff” during their waking hours. A lot of people enjoy sleep and feel better (higher quality of life and more positive emotions) with more of it rather than less.

What if, instead, you did the following: “How might this help in making interpersonal comparisons across individuals practically? Let’s consider a question of the form ‘how much would you pay to avoid needing to do X, distributed evenly throughout the rest of your life, where X is something you personally very strongly dislike doing and that makes you unhappy or miserable?’”

Maybe something like avoiding being very sick, or avoiding a certain kind of injury that is painful and interferes with your life, but is not fatal and will eventually heal. If having people assign a dollar value to avoiding something like that could be standardized, then I think it would be more useful as a unit of interpersonal comparison.


I agree with your point on sleep. What I was really after here is something that is a completely pure waste of time, and extra sleep (that you don’t really need) felt close enough to that, without being totally alien in the way that being in a coma, unconscious, or under anaesthetic might be. I also considered a time-consuming chore where the satisfaction of completing it exactly offsets the pain of undertaking it.

I don’t think the painful experience works, because it introduces another commodity (a bad one) beyond time alone, muddying the comparison (you might have a hypochondriac who would be terrified of falling ill, for example).

Why you consider utilitarianism a dangerous meta-ethic is one for another day; I’m fully bought in!

If you’re going to measure time directly and for its own sake, maybe we can think about a thought experiment where future humans are able to “buy more time” for themselves: to live longer. Suppose that in the future it becomes possible to pause, or very much slow down, aging, but that this comes at a cost; it might be expensive. Let’s imagine it in terms of current valuations: maybe it costs $1000 per month to buy access to a service that slows the shortening of our telomeres almost to a stop. We are required to pay this each month, and if we do not pay it, then the service turns off and we start aging normally again.

I don’t know if that makes sense in your paradigm here, but we might be able to ask people: “how much would you pay per month to not age?”
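Back-of-envelope, and purely under my own rough assumption that each month of paused aging buys about one extra month of life at the far end, the service would put an implied price on life-time of

$$\frac{\$1000 \text{ per month}}{\approx 730 \text{ hours per month}} \approx \$1.37 \text{ per hour of life},$$

which could then serve as the kind of concrete anchor the opening post is after.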

On utilitarianism specifically, I find the problem of quantifying values almost insurmountable. You are making a nice attempt at it here! Overall, though, I don’t think human lives can be compared in value 1:1. To say it is always better to save three lives instead of two, a claim that must be accepted under utilitarianism, seems deeply wrong to me. Wouldn’t it depend entirely on the lives in question, and on a whole series of other considerations as well? As an easy example, imagine the three lives are those of serial killers and rapists, while the two are those of innocent children. I would choose to save the two children over the three killers any day.

In fact this goes deeper, because I am very skeptical of the idea that human-scale valuation can be objectified beyond the individual person doing the valuing. How you value those two or three lives will differ from how I do, or how anyone else does. That might be “wrong” from a more objective view, but should we be trying to remove the human subjectivity from valuation, to emulate something like a purely impersonal accounting of the worth of a human being? That seems at risk of removing the very thing that gives human beings worth in the first place, since it is not as if the universe itself has any care or concern or value for humans. It is only humans who value other humans; so why should we idealize a supposedly objective position? I see this as inaccurate and dismissive of so many of the actual truths involved, to the point where, yes, it could indeed become dangerous.

On the other side, though, it is a noble attempt to universalize valuation in this way, to remove errors and inconsistencies, so we might find something like a common ground that can be used comparatively as a unit of ‘meaning currency’ on which to base our social or moral systems. As it stands right now, no such common unit or currency exists, and this is a source of so many problems.


Yes, I think the question you’ve proposed is valid in this scheme, provided that when you say extending life, we’re talking about stretching it out rather than just adding extra on the end. I suspect there will be problems with any question, but this is also true of any empirical measurement, so I don’t see the imperfections as fatal to the whole idea.

I hope I can reassure you on the first point you raise: you can justify saving the children’s lives within a utilitarian meta-ethic. I find a lot of the critiques come from scenarios where the system is entirely closed. Take the idea that you could kill someone and donate their organs to save multiple other lives. I think this would hold were they the only people on Earth (who would do the surgery would be another matter…). But in reality, were people to hear this was going on, you would imagine it would cause them real distress to think that an authority figure could come along and kill them to use their organs in this way. Multiply this by the number of people who might hear of it, and I’m confident you’d have enough unhappiness to offset the life-extending gains of the original patients. I see this as a synthesis of utilitarianism with the more Kantian ‘what if everyone did this?’ approach.
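A toy version of that offsetting calculation, with symbols of my own invention: if $N$ people hear of the practice and each suffers an average disutility $d$ from the fear of it, the practice is net-negative whenever

$$N \cdot d > \sum_{i=1}^{k} g_i,$$

where $g_1, \dots, g_k$ are the utility gains of the $k$ organ recipients. With $N$ in the millions, even a tiny $d$ swamps the right-hand side.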

The biggest problem left, for me, is how you might go about making calculations on this scale, which I suspect is beyond our practical ability in all but very simple scenarios. However, I’m optimistic that our friends at OpenAI, Google etc. could help us in the near future. Perhaps you could even use these axioms as a basis on which to build an Artificial General Intelligence, removing the fear that it will start sacrificing people for the benefit of others!

Yes, that is true: utilitarianism is more complex and able to handle situations like that. But the issue I see is how to standardize that between people, and how to capture it in pure quantification. For the theory of utilitarianism to be valid, it would need to define or describe the exact specifications that alter valuation; for example, in the case of the three lives vs the two lives, there must be a clear and knowable standard showing when the two lives become more valuable than the three. The example I picked is fairly easy, since it is more of an edge case, but what if the example is more realistic? The three people are average adults and the two people are kids. But what if the two are 18 years old and the three are elderly? What if the three are in their 50s, not elderly but with a lot less life ahead of them than the 18-year-olds? What if one of the 18-year-olds has cancer and only a 50/50 chance of beating it? And on and on. I see no way for utilitarianism to systematize any of this; it seems more likely that it just acts on an intuitive level in the background to aid our value calculations.

And in this sense we already operate with a degree of utilitarian calculation; it is integrated in us at the semi-conscious and instinctive/emotional levels. How we assign value to one person vs another is very personal, so while we already do use “utility”, I don’t see how a system can arise that can translate and account for all the individual differences in how this is done across all people. Valuing is deeply subjective; even as this subjectivity extends into more objective reality through its broader causal effects, it remains a phenomenon of the individual doing the evaluating, and everyone is different.

Yet it is not as if we should not attempt to quantify and compare different values, because we do that all the time and must do it. Decisions constantly come up, this vs that, which involve utility-based calculation of the value of different things. Saving three lives vs two: all things being equal, we choose the three, but if other factors appear, this might shift our calculus. All of that can be encompassed under a utility lens, I agree; my problem is when this is universalized and standardized entirely into a system. I don’t think there is a basis for that, not even with AI. If we wanted to rely on AI or AGI to do this, then we would simply be handing our powers of judgment and evaluation over to the machines, trusting them to do a better job than we would. Maybe they would sometimes; in fact, I’d say they definitely would sometimes. But not always. And the cost of removing or downgrading ourselves in this way would probably not be worth it in the long run.

Again, I’m not entirely sure this is where you wanted to go with this topic; I don’t mean to pull things in a different direction. I would like to keep exploring your idea of developing interpersonal comparisons of utility around time-extension or some other metric.


I thought about this and never reached a clear conclusion. I think the Trolley problem is easier.

Because the organ scenario is far too vague. What if the people who need organs need them because of poor life choices, such as eating junk food, toxins, etc.? What if the people who need organs are total assholes? What if they are saints? None of this seems to be defined.

It also seems to lead towards a “random murders for the so-called greater good” paradigm, or an anarchic society that becomes dystopian and easy to corrupt.


As to your original question, I would say time wasted = minimum wage. Or you could look at how jury duty pay is interpreted: what people on jury duty are deemed to deserve.

Some engineer working on high-tech machines necessary for society should probably be paid more for jury duty than some random pleb on minimum wage.
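For contrast with the equal anchor in the opening post, a minimal sketch of this wage-anchored rule (the figures are invented examples):

```python
def wasted_time_value(hours, hourly_wage):
    """Wage-anchored valuation: lost time is priced at the loser's wage."""
    return hours * hourly_wage

# Invented example: 100 wasted hours at a minimum-wage rate vs an engineer's rate.
print(wasted_time_value(100, 7.25))  # 725.0
print(wasted_time_value(100, 60.0))  # 6000.0
```

Note that this prices the engineer’s hour above the minimum-wage earner’s, which is exactly the opposite of the equal-anchor claim in the opening post.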


So I think you touch on a subject very relevant to the original proposition: the value we should put on the lives of people at different ages. This forms part of my more detailed justification for thinking that the units of utility should be like those of time.

I think you can consider human beings as utility maximisers, in that everyone has a strategy for giving themselves the best possible life. It is, of course, fantastically flawed, and the outcome is basically impossible to predict, but we’re all doing the best we can. Because there is some homeostasis to welfare (sensitivity to money diminishes as wealth grows, as described by prospect theory), you can expect preference satisfaction to stabilise, meaning people don’t diverge significantly from one another in overall satisfaction. A commonly cited example is that lottery winners are about as happy as quadriplegics two years after the event.
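The diminishing-sensitivity point is usually drawn as a curve that flattens as amounts grow; prospect theory’s value function, with Tversky and Kahneman’s estimated parameters, is one standard way to write it down (quoted here from memory, so treat the numbers as indicative):

$$v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda(-x)^{\beta} & x < 0 \end{cases} \qquad \alpha \approx \beta \approx 0.88,\ \lambda \approx 2.25,$$

where $x$ is a gain or loss relative to the status quo: both matter less at the margin the larger they get, and losses loom about twice as large as gains.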

By varying the length of a life, you therefore vary a person’s strategy for gaining utility, as now they have more opportunity to do the things they want to do, be it earn more money, spend more time with friends and family, or whatever else they like. It’s this that gives utility its sensitivity to the length of a life.

If you consider the expected utility ‘potential’ of someone’s life strategy, based on this anchor of sensitivity to changes in life length, you have a basis for the utility value of the rest of their life. A loss of life is therefore a loss of this utility potential (plus the pain incurred by those around them).
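In symbols (again my own): if $\bar{v}$ is the common average value of an hour from the opening post and $T_{\mathrm{rem}}$ the hours someone has left, the utility potential of the rest of their life is roughly

$$U_{\mathrm{potential}} \approx \bar{v} \cdot T_{\mathrm{rem}},$$

so the loss from a death scales with remaining time (plus the grief of those left behind), rather than being one fixed constant per life.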
