A lot of the lessons from Fukushima have been obvious for a while. Nuclear safety is a global challenge, and every country has to learn from the best practices of others. These best practices include retrofitting passive safety features *wherever possible*, and continuing to update safety measures in response to our changing understanding of the plant's environment. -- Ars Technica (emphasis added)

This advice certainly sounds right to me. But when I check it against consequentialism, the words "wherever possible" give me pause. On the one hand, it seems that our only question should be "What effect does a given safety measure have on actual safety?" But a proper consequentialist analysis is two-sided, so we must also ask "How much will this cost relative to the expected number of lives saved? Is it worth it?"
I don't like this conclusion.
It certainly seems logical enough: if we're to make a change, we must do a proper cost-benefit analysis. But imagine this hypothetical: you're offered an investment opportunity. The expected value is better than any other investment you're likely to find, and you're sure it's legitimate. However, it's a risky investment, with a chance of returning nothing at all, not even the money you put in. I don't think any sane person would invest all of their money in such a scheme, and I don't think this falls under irrational risk aversion either. Yet that is exactly what basic economics, with its rule of maximizing expected value (a rule utilitarianism shares), says we ought to do: perhaps keep some money aside for basic living expenses, but certainly no safety net beyond that. If someone did invest everything this way and ended up bankrupt, they would have only themselves to blame.
Of course, most real financial analysts would never recommend this, preferring instead to diversify. But I can't see how you get from "put all your money on the best expected value" to "diversify", and I suspect diversification is just a kludge bolted on to make the system give sane answers.
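The gap between the two rules is easy to see numerically. Below is a minimal simulation; the bet's odds, the 20% fraction, and the function name are my own illustrative choices, not anything from the article. A repeated 1:1 bet that wins 60% of the time has positive expected value, so the "maximize expected value" rule says to stake everything every round. Doing so ends in ruin almost every time, while staking a fixed 20% of wealth (the Kelly fraction for these odds) grows steadily and can never hit zero.

```python
import random

def simulate(rounds, fraction, p_win=0.6, trials=10_000, seed=0):
    """Bet a fixed fraction of wealth each round on a 1:1 bet that
    wins with probability p_win. Returns (mean final wealth, ruin rate).
    All numbers here are illustrative, not from the quoted article."""
    rng = random.Random(seed)
    ruined = 0
    total = 0.0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(rounds):
            stake = wealth * fraction
            if rng.random() < p_win:
                wealth += stake
            else:
                wealth -= stake
            if wealth < 1e-9:  # bankrupt: nothing left to bet
                ruined += 1
                break
        total += wealth
    return total / trials, ruined / trials

# All-in: the highest expected value, but nearly every run goes bust.
mean_all_in, ruin_all_in = simulate(rounds=25, fraction=1.0)

# Fixed 20% stake: slower growth, but ruin is impossible.
mean_kelly, ruin_kelly = simulate(rounds=25, fraction=0.2)
```

The all-in strategy really does have the highest *expected* final wealth, but that expectation is carried entirely by a vanishing sliver of lucky runs. That is exactly where "maximize expected value" and "don't go bankrupt" come apart.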
What can we distil from this? I think it's clear that we need to consider the worst-case scenario in any evaluation. But which one? A situation can always be worse. So, to keep this reasonable, we need to limit ourselves to outcomes related to the proposed action: those whose probabilities the action significantly affects. Of those outcomes, we should focus on the very worst and ask ourselves "Is this outcome tolerable? What effect does our decision have on its likelihood?"
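The filter above can be sketched as a decision procedure. This is only my own illustration of the rule; the outcome representation, the significance threshold, and the tolerance level are all hypothetical parameters, not something the argument fixes: keep only the outcomes whose probability the action meaningfully shifts, take the worst of those, and veto the action if that outcome is intolerable and the action makes it more likely.

```python
def screen(outcomes, tolerance, shift=0.01):
    """Worst-case screen for a proposed action.

    outcomes: list of (utility, p_without_action, p_with_action).
    tolerance: the lowest utility we are willing to call tolerable.
    shift: minimum probability change that counts as "significantly
    affected" by the action. All parameters are illustrative.
    Returns True if the action passes the screen, False if vetoed.
    """
    # Only outcomes whose probability the action significantly moves.
    relevant = [o for o in outcomes if abs(o[2] - o[1]) >= shift]
    if not relevant:
        return True  # the action doesn't meaningfully change anything

    # Focus on the very worst relevant outcome.
    utility, p_without, p_with = min(relevant, key=lambda o: o[0])

    # Pass if the worst case is tolerable, or if the action
    # actually makes that worst case less likely.
    return utility >= tolerance or p_with < p_without
```

For example, an action that raises the probability of a catastrophic outcome from near-zero to 20% would be vetoed even if it also makes a good outcome more likely, while an action that *reduces* the catastrophe's probability passes regardless of how bad the catastrophe itself is.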
But what of utilitarianism? This rule doesn't seem compatible with it. I think we're free to revise utilitarianism if we find that necessary; I may explain why in a later post.