Not necessarily. I think that there is a price in theory for nearly all humans.
Let me take a step back here. We can say with certainty that some people’s stated prices are irrational, right? Experiments on loss aversion and framing show that people evaluate prices differently depending on how they’re presented: e.g., being given a larger amount of money from which some is taken away vs. being given a smaller amount to which more is added; paying a single fee vs. a sum of several fees; a single fee at one time vs. many fees in installments. People’s subjective choices in these situations are internally logically inconsistent.
Similarly here: very few people would accept an explicit deal of $X for committing murder, but most would accept Y% of $X to commit an act with a Z% chance of killing someone, and even more would only be willing to pay a finite amount to avoid killing someone. But logically, when Y = Z those are the same choice: each puts the same expected price on a life.
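To make that equivalence concrete, here’s a small expected-value sketch. The specific numbers are hypothetical; the point is only that the certain deal and the risky deal imply the same dollars-per-expected-death ratio whenever Y = Z.

```python
# Hypothetical numbers: the equivalence holds for any X as long as Y == Z.
X = 1_000_000   # price demanded for certainly killing someone
Z = 0.01        # 1% chance the risky act kills someone
Y = Z           # fraction of X accepted for the risky act

# Framing 1: $X for 1.0 expected deaths.
price_per_expected_death_certain = X / 1.0

# Framing 2: $(Y*X) for Z expected deaths.
price_per_expected_death_risky = (Y * X) / Z

# Both framings price one statistical life at $X.
assert price_per_expected_death_certain == price_per_expected_death_risky
```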
A car designer who sells enough units is guaranteed – literally guaranteed – to have someone die in one of their cars due to some missing safety precaution or test. Even if they’re operating at a six-sigma fault tolerance (conventionally about 3.4 defects per million units), once someone sells 34 million units there are going to be defects, and for a product like a car, some defects mean deaths. That’s inescapable, and it logically entails exchanging someone else’s death for money.
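The arithmetic behind “literally guaranteed” can be sketched directly. This assumes the common industrial reading of six sigma as roughly 3.4 defects per million opportunities; the unit count is the one from the paragraph above.

```python
# Defect arithmetic under the standard "six sigma" convention
# of ~3.4 defects per million opportunities (DPMO).
p = 3.4e-6          # per-unit defect probability
n = 34_000_000      # units sold

# Expected number of defective units.
expected_defects = n * p

# Probability that at least one unit is defective.
p_at_least_one = 1 - (1 - p) ** n
```

At these numbers `expected_defects` is about 115, and the probability of zero defects is so small that `p_at_least_one` is 1.0 to machine precision: “guaranteed” in every practical sense.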
So it isn’t a matter of reading minds. A position that says “no amount, ever” for explicit murder, but also makes decisions about health insurance, auto safety, and other similar choices, is almost certainly logically inconsistent: those various claims, if taken as premises, lead to some a and b where both a = b and a ≠ b.
The argument that it’s a category error doesn’t work, because in practice we plug the same values into considerations we make all the time. It can’t be the case that [life] = $X is a category error, but Z% × [life] = Y% × $X isn’t. And the latter is implicit in day-to-day decision making.
This seems like deontology, right? In consequentialist terms, there shouldn’t be a moral difference between “definitely kill one person” and “do something 34 million times and as a result definitely kill at least one person”. It’s not even a difference of intention, since you can know going into a non-zero-risk venture that a death will inevitably result, and so that outcome is intentional.
Deontologically, I think this difference makes sense. For many, this may be an argument for deontology. But I am OK with the bitter pills of consequentialism.
I think this is right. I assumed coming into this thread a certain philosophy of prices that, it turns out, is anything but accepted. I do think that my approach to prices is right, and consistent with decreasing marginal utility, changes in behavior in the presence of monetary priming, etc.
I think it’s notable that no one is suggesting that e.g. Microsoft’s acquisition of Github for $7.5 billion is nonsensical, and doesn’t Microsoft realize that it will just lead to mo’ problems? It seems like we’re generally fine with the idea that things that sell all the time for incomprehensible amounts of money can really be worth those amounts. It seems like special pleading to reject large prices here, where we’re also tempted to reject the whole premise that the thing can be sold at all.
I think this is a good distinction, but I don’t think it’s exhaustive. Take something like math: the fact that only humans do math, and have only been doing math for a few thousand years, doesn’t seem to tell us very much about the ontological status of math. One way of testing this category: if we meet self-aware aliens who evolved elsewhere, are they likely to be aware of math? I think the answer is clearly yes. Basic math developed independently in multiple human cultures; it seems to have an existence independent of humans even though it didn’t exist before humans developed it.
I would categorize certain parts of economics into that same class. If we meet self-aware aliens (that are biologically independent, etc.), they are very likely to have something like money. Like math, money developed independently in multiple human cultures; it’s a generalization of barter. We should expect any society where strangers interact and exchange goods to also have a concept of a liquid medium of exchange, a.k.a. money.
They would probably be able to tell you how much they saved from the lack of safety precautions; that’s just a math problem. And they may hate the question, but they would know whether it was worth it.