I'm literally the only person who cares, since (as established many times) all my readers are imaginary. If somehow one of you is not imaginary and forgot to tell me, this is for your benefit too, I guess. Also if anyone happens by this blog on an accidental google search and for some unaccountable reason likes what I write and wants more.
I'm not the same person who started this blog, almost seven years ago. And that's a good thing, too; it would be pretty fucking sad if my thought process hadn't changed since I was 18. Most of the things I complained about in this blog don't really bother me anymore, or at least not enough to write about. Many of the thoughts I've posted I've since changed my mind about, or restated in different ways, or they just don't seem relevant anymore.
This once tried to be an atheist blog. I'm still an atheist, but that seems much less important now. That one question has been answered; let's move on.
When I realised how limited the atheist memespace was, I tried my hand at scepticism in general, and later at the rationalitysphere. I'm still working on that last one, but the writing I have here is not really a good example of it (especially everything from before I read Less Wrong, but a good deal of what came after as well).
At one point this tried to be a writing blog; clearly that failed, and although I intend to go back to that, I've been "intending to go back to that" on and off for years. No reason to expect this time will be the one. And all the fiction I've posted here sucks, but I knew that even back then.
The point is: when I read back, I cringe a bit. I think that's about as healthy a reaction as one can have to a younger self. I've posted here a bit recently, testing the waters, but there's no reason to keep at it.
I have a Tumblr account now, which I got basically because it was a convenient way to follow some people. Still, having it sit empty bothered me, so I started using it, and I've since posted a couple of longer-form things I would usually have posted here. Which means I now have a platform for thinking out loud that isn't burdened with my past dumb stuff (yet!).
So yeah. It would seem Untheism is no more. I liked it while it lasted.
As a final note, if for some reason you need to interact with my online presence, I'm on twitter, the aforementioned tumblr, and I'm usually hanging out on the forum I admin, FQA. Also, anyone using the name "Sigmaleph" anywhere on the internet is probably me.
Bye!
Monday, January 5, 2015
Death is bad, 2
I fully expected the last post to be a one-shot, but then Scott Alexander wrote a thing on ethics offsets:
Some people buy voluntary carbon offsets. Suppose they worry about global warming and would feel bad taking a long unnecessary plane trip that pollutes the atmosphere. So instead of not doing it, they take the plane trip, then pay for some environmental organization to clean up an amount of carbon equal to or greater than the amount of carbon they emitted. They’re happy because they got their trip, future generations are happy because the atmosphere is cleaner, everyone wins.
We can generalize this to ethics offsets. Suppose you really want to visit an oppressive dictatorial country so you can see the beautiful tourist sights there. But you worry that by going there and spending money, you’re propping up the dictatorship. So you take your trip, but you also donate some money to opposition groups and humanitarian groups opposing the dictatorship and helping its victims, at an amount such that you are confident that the oppressed people of the country would prefer you take both actions (visit + donate) than that you take neither action.
The concept is probably unappealing to a certain sort of person, but not me. My sort-of-utilitarian, definitely-consequentialist mind is 100% on board with the idea. Or at least, it was, until:
GiveWell estimates that $3340 worth of donations to malaria prevention saves, on average, one life.
Let us be excruciatingly cautious and include a two-order-of-magnitude margin of error. At $334,000, we are super duper sure we are saving at least one life.
So. Say I’m a millionaire with a spare $334,000, and there’s a guy I really don’t like…
(Scott further specifies that you are a master criminal who will never get caught, that the death looks natural so you don't waste police time, etc., or that you offset those costs too with more and more donations, as one in principle could.)
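To spell out the arithmetic in the quoted passage, here's a minimal back-of-the-envelope sketch. The figures are the ones Scott quotes; nothing below adds new data, and the variable names are just mine.

```python
# Back-of-the-envelope version of the offset arithmetic quoted above.
# All figures come straight from the quoted passage; none of this is new data.

cost_per_life_saved = 3_340   # dollars, the quoted GiveWell-style malaria estimate
safety_margin = 100           # the "two-order-of-magnitude margin of error"

offset_donation = cost_per_life_saved * safety_margin
print(offset_donation)                                          # 334000

# At the face-value estimate, that donation saves ~100 lives;
# even under the pessimistic margin, it saves at least one.
print(offset_donation / cost_per_life_saved)                    # 100.0
print(offset_donation / (cost_per_life_saved * safety_margin))  # 1.0
```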
So. As is its wont, my brain broke down on that one. One part of my mind says "Well, which world would you rather live in? The one where this mysterious millionaire saved all those lives at the expense of killing one person, or the one where he didn't? By any reasonable standard, the first is a better world to live in: if death is bad, then saving lives is good, and saving more lives is better." The other part mostly yells "but murder is bad!"
The key insight here, as far as I can tell, is that my intuitions about morality break down somewhere in the vicinity of murder. I can be OK with the idea of killing one person to save many others (e.g. the trolley problem) because you didn't put the person on the tracks. I can even be OK with the fat man version of the trolley problem, because it's not your fault that's the only way to save five people. But I'm not OK with this. Where's the difference?
The obvious candidate is "But you have another available course of action: not murdering anyone, and donating the money anyway. That's clearly better." And that's true, but it applies equally to all ethics offsets, not just the murder one. And I agree: if it were up to me, the obvious ethical decision is not to murder anyone and to donate almost all my money to the most efficient charity. No question. But people don't actually take the most ethical action if it's too inconvenient.
Suppose I am building an ethics system to be used by imperfect humans, and some of those humans happen to be murderous millionaires. Suppose that those murderous millionaires would obey "don't murder anyone" as a rule, and would also obey "if you want to murder someone, donate X amount of money to charity to offset your murder", but they would not accept "don't murder anyone, and also, donate all your money to charity". From that position, it seems to me, my brain can relax and think straight: this is a trolley problem. The trolley was set in motion by some very peculiar quirks of the psychology of hypothetical millionaires, but it's no less trolley-ish. I still have several lives on one metaphorical track, and one on another. Sucks for the one.
I'm not sure what this means for the problem I discussed last post (i.e. finding a good way to ground "killing people is bad" without it collapsing into life-maximisation), other than further confirmation that I can't trust my intuitions about killing people to be consistent.
Friday, January 2, 2015
Death is bad (but I'm not sure why)
(thoughts prompted by this post on utilitarianism and abortion)
I would urge you to read the linked article on its own, especially if you self-identify as an effective altruist or utilitarian or somewhere in that philosophical area. But for the purposes of this blog, there's an argument there that goes like this: Murder is very bad. Most people who support abortion are sure that killing a 1st-trimester foetus is not murder, but they should also be aware that there are a lot of people who disagree with them. Therefore, they should not be 100% certain* that abortion is not murder. Therefore, if you admit something like a 1% chance that abortion is in fact murder, and therefore very bad, and if you put numbers on "very bad" (that's where the utilitarianism comes in), it's very hard to make the math come out to "abortion is good" (unless you are dealing with extreme cases like abortion to save the life of the mother, a foetus that will not survive, etc.).
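To make the expected-value step concrete, here's a rough sketch of the shape of that argument. The 1% figure is the one conceded above, and the ~76-year figure is the QALY number that comes up later in this post; treat both as illustrative placeholders rather than as the linked post's actual calculation.

```python
# Rough expected-value illustration of the argument above.
# The probability and QALY figures are illustrative placeholders.

p_abortion_is_murder = 0.01   # the "something like a 1% chance" conceded above
qaly_lost_if_murder = 76      # average remaining life-years, per the QALY figure below

expected_cost = p_abortion_is_murder * qaly_lost_if_murder
print(expected_cost)          # 0.76 expected QALY lost per abortion

# For the math to come out "abortion is good", the expected benefit would have
# to outweigh ~0.76 QALY (roughly nine months of healthy life), which is a
# demanding bar outside the extreme cases mentioned above.
```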
Political disclaimer: My support for legal abortion has less to do with "abortion is morally good" and more with "abortion will happen anyway, but if it's legal it's safer" and "we should probably give people the right to decide how their bodies are used as a matter of principle, even if they will decide to do bad things with them". "If it's bad it should be illegal" is not a principle I endorse in the general case. So no, I'm not trying to make or endorse an argument for banning abortion.
Back to the argument. There are a number of obvious responses, like:
"I'm not an utilitarian and I don't think you can do math on life and death", in which case I would love to have a longer argument with you on the subject but this post is not the place, or
"I am, in fact, very much certain that foetuses are not people and killing them is not murder, less than 0.01% chance I'm wrong", or
"Well, obviously in conclusion abortion is wrong", which I think are both interesting positions and I'll address in a moment.
In the "put numbers on very bad" part above, the author uses Quality-Adjusted Life Years (QALY). The argument is that if it turns out abortion is murder, it costs the foetus ~76 years of life, on average. Here's the part where my brain, and my philosophy, goes nuts:
What if a foetus isn't a person? Aren't we missing out on exactly those same ~76 QALY anyway?
There's a philosophy that says no. You can only care about real people, not potential people; those 76 QALY only matter if the person who would have lived them already exists.
If you accept that, yay. You can go back one level to the previous argument and try to figure out the expected personhood of a foetus, which I'm sure must be a barrel of laughs. My problem is that I'm not sure I can accept it.
If I'm certain of anything in meta-ethics, it's consequentialism: the idea that "good" or "bad" is about states of the world. The right action is the one that results in the best state of the world, and nothing else. Not which laws you follow, not which virtues you exercise, just how the world is.
In particular, if "person X exists" is a good state of the world, we should bring it about; if not, we shouldn't. But the "fuck you, potential people" principle says otherwise: If you already exist, then states of the world where you don't mean you were killed, so that's bad. But if you don't already exist, then states of the world where you don't exist are neutral. There's no reason to care about you in the future if you don't exist now.
It seems like a very weird twist on consequentialism: The same state of the world can go from good to bad depending on when you ask the question. That's a very ugly feature I don't really want in my metaethics.
But if you reject that, not only do you have to worry about abortion, but suddenly everything from contraception to not having sex falls in the same bucket: not taking action to make a person come into existence is the same as taking action to remove a person from existence (since they both result in world states where a person doesn't exist), ergo you are a murderer.
Which brings us to a nasty conundrum: If I want to be remotely consistent about ethics, then either I admit that murder is not always that bad, or I have to stop blogging right now and go impregnate as many women as possible. Since the second option sounds like a lot of work and would probably end badly for everyone involved (except our future children, who are being saved from counterfactual murder!), let's look at the first one.
Why is killing people bad?
... honestly, I'm not sure. I'm far more certain of the fact of "don't murder" than of any philosophical justification for it, presumably because hominid brains evolved an innate sense of morality in which we don't kill each other all the time, since social animals that kill each other all the time don't really work too well.
Like, there's the making-people-sad argument: if I kill you, your friends and family will be very sad, and making people sad is bad, therefore don't do it. And that's all well and good, except that if that's all there is to it, it should be alright to kill people with no friends, or people who have lots of enemies who would be happy to see them die. It does seem to allow not having sex, since people who don't exist yet don't have friends to care, so that's at least a point in its favour.
There's the preferences argument: as a general rule, people's preferences being fulfilled is good, all other things being equal. People prefer not to die, ergo, don't kill them.
But that falls prey to the potential people problem just as well: hypothetical people would most likely also enjoy existing, ergo, if we care about their preferences we should bring them into being. Should we only care about the preferences of people who exist right now? If so, then that raises intriguing questions about the future: are we supposed to stop caring about what happens to the planet after the last currently-living person dies? After all, the people who would be alive then have no moral weight right now.
It seems to me that I intuitively care about people who don't exist: for instance, I would think it very bad if the world a thousand years from now were one where every human being lives a miserable existence. But I don't worry about my potential children not existing. My brain parses "not existing" and "it existed and then stopped" as very different things, even though the end result is the same.
It would then seem there are two choices:
Existence is not inherently valuable, and I need a good ethical grounding for why murder is bad that I don't have,
or,
We are morally obligated to maximise the number of people who exist, and will exist.
I'm currently defaulting to the first one, hence the title. This just might be because the second one is weird and uncomfortable, and I would really like a good answer for this. But I don't have it.
*As a general rule, you should not be literally 100% certain of anything, for reasons I may have gone over in the past. Here, though, I don't mean just "technically this could all be an illusion created by a trickster demon" but "there is a small but measurable chance you are wrong".