I intend to offer an answer to this question: we judge by comparing prices.
We can fiddle with a thousand knobs in the thought experiment, and when we adjust them by, e.g., putting Einstein on one side and some lowlifes on the other, we can create situations where the question becomes harder (Phyllo makes this point as well). That’s not a problem for my proposed solution, though: it’s harder to decide whether to trade stock X for stock Y when the prices of X and Y are nearly equal than when X >> Y. Similarly, we should expect moral hypotheticals to become harder as we fiddle with knobs that make the prongs of the dilemma more balanced, i.e., when we bring the moral prices closer together.
Here’s a variation that’s useful for getting at the moral pricing question: suppose there’s one person on each side of the tracks, but each has a terminal illness for which we have the cure. Suppose further that one person’s treatment will cost $X, and the other’s will cost $Y > $X. Shouldn’t a consequentialist have an obvious preference about the outcome here?
Some people are in a coma. Others are sleeping. Clearly the term “non-sequitur” has no meaning because people who are in a coma or sleeping are unable to think it.
Sorry to be glib, but “some people can’t think it” is just not relevant to how well math can describe the world. People don’t have to understand general relativity or statistics for those things to accurately describe the world. And people don’t have to think about how much money something would be worth to them for there to be an amount of money that the thing is worth to them. If someone thinks “I’d rather die than kill a stranger,” it follows that the price of their life is less than the price of their killing a stranger. We don’t need to know what either price is, and the person doesn’t need to think about it explicitly for it to be the case: the person values A over B, price mediates value, therefore the price of A is higher than the price of B.
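The inference in that last sentence can be written out as a one-line derivation. This is my own formalization, not notation from the exchange: I read “price mediates value” as the assumption that price p(·) is a strictly increasing function of value v(·).

```latex
% Assumed premises (my formalization of the prose argument):
%   (1) the person values A over B:            v(A) > v(B)
%   (2) price is strictly increasing in value: v(x) > v(y) \implies p(x) > p(y)
% Conclusion:
v(A) > v(B),\quad \bigl(\forall x, y:\ v(x) > v(y) \implies p(x) > p(y)\bigr)
\;\;\vdash\;\; p(A) > p(B)
```

Note that the conclusion needs only the monotonicity of p, not its actual values, which is exactly the point: neither we nor the person need to compute either price for the inequality to hold.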
But let’s set this question even further aside, and just ignore the people who can’t think this way, and ignore whatever their moral obligations might be. If someone is mathy and familiar with pricing things, is it morally permissible for that person to do the mathing and pricing and come up with a break-even price where they can accept money to kill a stranger conditional on their then donating it to some value-maximizing cause?
This is interesting. I think the trolley problem is useful primarily as an intuition pump, and that was its original use. I know it is widely used in surveys of moral intuitions among the lay public, but that wasn’t the context in which I learned about it, and it isn’t why I bring it up now. I invoke it as an intuition pump for moral intuitions about consequentialism and the role of action in morality.
Sorry for any confusion that’s causing; the problem has a dual life in abstract moral philosophy and in various areas of social science, and I should have clarified what role it was playing here.
Of course the problems are different. The question is whether their solutions are commensurable, and I hold that they are. If price can mediate how I value chewing gum and an appendectomy and my daughter’s education and how much pain I’m in from that car accident, then it can mediate pulling the switch and pushing the fat man too. The problems are different, maybe “fundamentally” (whatever work that word does here), but so long as what’s at stake in each is a form of value, it can be mediated by price.
-
I took you to be taking the position that World A is not better than World B. But if Person B here is you, shouldn’t you be agnostic between them? If you can’t value a person in dollars, then you can’t know which is better. Your answer to the question, “Is X dollars worth more than P’s life” must be “mu”: the answer isn’t yes or no, because yes or no both imply weighing the value of a person against the value of dollars.
-
This isn’t anything like how I’ve read this part of our exchange so far. I read it more like:
Person A: Is World A better than World B?
Person B: World A-plus-a-bunch-of-other-things is really really bad!
Person A: That’s not answering the question.
Person B: Here’s a dialog we didn’t have.