Monday, January 5, 2015

Death is bad, 2

I fully expected the last post to be a one-shot, but then Scott Alexander wrote a thing on ethics offsets:

Some people buy voluntary carbon offsets. Suppose they worry about global warming and would feel bad taking a long unnecessary plane trip that pollutes the atmosphere. So instead of not doing it, they take the plane trip, then pay for some environmental organization to clean up an amount of carbon equal to or greater than the amount of carbon they emitted. They’re happy because they got their trip, future generations are happy because the atmosphere is cleaner, everyone wins.
We can generalize this to ethics offsets. Suppose you really want to visit an oppressive dictatorial country so you can see the beautiful tourist sights there. But you worry that by going there and spending money, you’re propping up the dictatorship. So you take your trip, but you also donate some money to opposition groups and humanitarian groups opposing the dictatorship and helping its victims, at an amount such that you are confident that the oppressed people of the country would prefer you take both actions (visit + donate) than that you take neither action.
The concept is probably unappealing to a certain sort of person, but not to me. My sort-of-utilitarian, definitely-consequentialist mind is 100% on board with the idea. Or at least, it was, until:

GiveWell estimates that $3340 worth of donations to malaria prevention saves, on average, one life.
Let us be excruciatingly cautious and include a two-order-of-magnitude margin of error. At $334,000, we are super duper sure we are saving at least one life.
So. Say I’m a millionaire with a spare $334,000, and there’s a guy I really don’t like…
(Scott further specifies that you are a master criminal who will never get caught, that the death looks natural so you don't waste police time, etc., or that you further offset those costs with more and more donations, as one in principle could.)
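Just to make the arithmetic explicit, here is Scott's margin-of-error calculation as a quick sketch (the $3340 figure is GiveWell's estimate as quoted above; everything else is just multiplication):

```python
# Scott's back-of-the-envelope offset arithmetic, as quoted above.
cost_per_life = 3340      # GiveWell's estimated cost of saving one life (USD)
safety_factor = 10 ** 2   # two orders of magnitude of excruciating caution

offset_cost = cost_per_life * safety_factor
print(offset_cost)        # 334000: at this price we are "super duper sure"
                          # the donation saves at least one life
```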

So. As is its wont, my brain broke down on that one. One part of my mind says, "Well, which world would you rather live in? The one where this mysterious millionaire saved all those lives, at the expense of killing one person, or the one where he did neither? By any reasonable standard, the first is the better world: if death is bad, then saving lives is good, and saving more lives is better." The other part mostly yells, "But murder is bad!"

The key insight here, as far as I can tell, is that my intuitions about morality break down somewhere in the vicinity of murder. I can be OK with the idea of killing one person to save many others (e.g. the trolley problem), because you didn't put the person on the tracks. I can even be OK with the fat man version of the trolley problem, because it's not your fault that's the only way to save five people. But I'm not OK with this. Where's the difference?

The obvious candidate is "But you have another available course of action: not murdering anyone, and donating the money anyway. That's clearly better." And that's true, and it applies equally to all ethics offsets, not just the murder one. And I agree: were it up to me, the obvious ethical decision would be to murder no one and donate almost all my money to the most efficient charity. No question. But people don't actually take the most ethical action if it's too inconvenient.

Suppose I am building an ethics system to be used by imperfect humans, and some of those humans happen to be murderous millionaires. Suppose that those murderous millionaires would obey "don't murder anyone" as a rule, and would also obey "if you want to murder someone, donate X amount of money to charity to offset your murder", but they would not accept "don't murder anyone, and also donate all your money to charity". Sitting in this position, it seems to me, my brain can relax and think straight: this is a trolley problem. The trolley was set in motion by some very peculiar quirks of the psychology of hypothetical millionaires, but it's no less trolley-ish. I still have several lives on one metaphorical track and one on another. Sucks for the one.
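To see why this reads as a trolley problem, here is a toy comparison of the only two worlds this hypothetical millionaire will actually bring about (the numbers are Scott's; the feasibility constraint is the stipulated quirk of his psychology, and the function is mine, purely for illustration):

```python
# Toy model: net lives in each world the murderous millionaire would
# actually choose between, given his stipulated psychology.
COST_PER_LIFE = 3340        # GiveWell figure quoted above (USD)
OFFSET_DONATION = 334_000   # Scott's two-orders-of-magnitude offset

def net_lives(murders, donated):
    """Net change in lives: expected lives saved by donating, minus lives taken."""
    return donated / COST_PER_LIFE - murders

no_murder_no_donation = net_lives(murders=0, donated=0)             # 0.0
murder_plus_offset = net_lives(murders=1, donated=OFFSET_DONATION)  # 99.0

# The dominant option (donate and don't murder, net +100) is off the
# table by stipulation; between the two feasible worlds, the offset
# world comes out ahead, which is exactly the trolley-ish structure.
```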

I'm not sure what this means for the problem I discussed last post (i.e., finding a good way to ground "killing people is bad" without it collapsing into life maximization), other than further confirmation that I can't trust my intuitions about killing people to be consistent.


