This was going to be about a paradox of Calvinism and Newcomb's Problem. I might even write that post tomorrow. But I was looking through old posts (searching for the one wherein I first mentioned Newcomblike problems) and found this. And I think I was wrong, in more than one way.
First, in order of increasing importance: I don't quite like the way it's written. Yes, yes, mostly irrelevant, but as long as I'm listing what's wrong with it...
Second, and more embarrassingly, the first example was badly designed. The actual logical chain as originally written had the consequences of the dilemma backwards. I can usually spot that sort of thing, so yeah, shame on me.
But that's details. The real reason for this post is that I think my entire point was wrong.
Now, I'm sure there are contrived scenarios where letting the two wrongs cancel out is the right thing to do, much in the same way that there are contrived scenarios in which killing a million people is the right thing to do (if it saves two million, say). But that's not a way of thinking that's useful for solving the actual moral dilemmas you are likely to encounter. Similarly, the analysis I made of those two problems was bad as a general policy, and probably mistaken in the particular cases as well.
"But wait," myself from 6 months ago says, "let's look at case 1. From the specification of the problem, the consequences of the action volunteer-the-evidence are a punishment you are opposed to. It follows inescapably that your ethics have to consider the action wrong, unless you went deontological sometime in the last half year"
I didn't, but thanks for your concern. The problem is that the consequences go further than that. When you decide that your personal ethics override the general societal system in place, you are in essence undermining it. A cooperation-based system that everyone ignores whenever it conflicts with their personal feelings collapses.
"But, of course my own personal ethics override the system! That's exactly what makes them my ethics, they are the standard to which I measure whether something is right or wrong. If I'm going to ignore them in favour of the system in place just because it's the system in place, then they are meaningless. My actual ethics would just be 'follow the crowd'."
Ignoring your ethics makes them pointless, yes, but that's not what I'm saying you should do. Rather, your ethics are acting on two levels. When you evaluate the system, you find that you want it to improve by not using the death penalty. But the advantage of having the system in place is also important, by your ethics, so the final calculation has to be influenced by what happens if the system collapses.
"The system won't collapse because I didn't help put someone to death-"
Massively Multiplayer Prisoner's Dilemma. If everyone defects from the common system, then the outcome is the worst one globally. I suppose you didn't understand Timeless Decision Theory back then (not that I'm an expert on any kind of decision theory now), but think about it. Doesn't the idea that society collapses when every individual does the right thing sound off?
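(An aside from present-me: a toy calculation might make the structure clearer. The numbers below are purely illustrative, mine rather than anything from the original post: each player benefits from every other cooperator but pays a small cost to cooperate, so defecting always looks better individually, yet universal defection is the worst outcome for everyone.)

```python
# Minimal sketch of an N-player Prisoner's Dilemma (illustrative numbers only).
# Each player gains 2 for every *other* cooperator, and pays 1 if they themselves cooperate.
# Defecting is therefore always individually better, yet all-defect is the worst collective outcome.

def payoff(my_choice, others_cooperating):
    """my_choice: 'C' or 'D'; others_cooperating: number of cooperators among the other players."""
    return 2 * others_cooperating - (1 if my_choice == 'C' else 0)

N = 100
everyone_cooperates = payoff('C', N - 1)  # 2*99 - 1 = 197 for each player
everyone_defects    = payoff('D', 0)      # 0 for each player
lone_defector       = payoff('D', N - 1)  # 198 -- defecting is always tempting individually

print(everyone_cooperates, everyone_defects, lone_defector)
```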
"I see what you mean, though that an idea sounds off is not really a counter-argument."
And yet you would call it the right thing to do to cooperate in the prisoner's dilemma, would you not?
"In the original scenario, perhaps, but it doesn't generalise to all PD-like situations. The reason I would call it right is that right involves an element of caring about people other than myself instead of just self-benefiting. In the case under consideration, I am caring for other people, namely the guy who'll be killed depending on my actions, when I defect"
Fair enough, but to the extent you have a sense of what 'right' means, doesn't it need to be good that most people do the right thing?
"Provisionally accepted, but I don't have a strict definition of right to compare it to"
You need to stop thinking so much in terms of strict definitions, especially when you don't have them. But back on topic, there's the honesty angle to consider. Not just honesty as a terminal value, but from the instrumental point of view. If you implement the general policy of acting within the common system, then people know they can trust you to be a cooperative agent. The price you pay when you defect is that other people, who work within the common system, must regard you as not-trustworthy. You become the act-equivalent of the little boy that cried wolf.
"Wait, what? The boy crying wolf is not doing the right thing by his ethics, he's just bored. He's in the wrong because he values his own time more than that of the other people, and that's not what I'm doing"
The point I'm making is that, aside from being a jerk, he's being stupid by sacrificing his trustworthiness. In the same way, it's unwise to take the risk of saying "Hey, I don't play by those rules" by cancelling wrongs with other wrongs.
"But that's not something you signal in either case, the scenarios are such that only you know what you chose, or even that you had a choice."
"But, of course my own personal ethics override the system! That's exactly what makes them my ethics, they are the standard to which I measure whether something is right or wrong. If I'm going to ignore them in favour of the system in place just because it's the system in place, then they are meaningless. My actual ethics would just be 'follow the crowd'."
Ignoring your ethics makes them pointless, yes, but that's not what I'm saying you should do. Rather, your ethics are acting on two levels. When you evaluate the system, you find that you want it to improve by not using the death penalty. But the advantage of having the system in place is also important, by your ethics, so the final calculation has to be influenced by what happens if the system collapses.
"The system won't collapse because I didn't help put someone to death-"
Massive Multiplayer Prisoner's Dilemma. If everyone defects from the common system, then the situation is the global worst. I suppose you didn't understand Timeless Decision Theory back then (not that I'm an expert on any kind of decision theory now), but think about it. Does the idea that when every individual does the right thing society collapses not sound off?
"I see what you mean, though that an idea sounds off is not really a counter-argument."
And yet you would call it the right thing to do to cooperate in the prisoner's dilemma, would you not?
"In the original scenario, perhaps, but it doesn't generalise to all PD-like situations. The reason I would call it right is that right involves an element of caring about people other than myself instead of just self-benefiting. In the case under consideration, I am caring for other people, namely the guy who'll be killed depending on my actions, when I defect"
Fair enough, but to the extent you have a sense of what 'right' means, doesn't it need to be good that most people do the right thing?
"Provisionally accepted, but I don't have a strict definition of right to compare it to"
You need to stop thinking so much in terms of strict definitions, especially when you don't have them. But back on topic, there's the honesty angle to consider. Not just honesty as a terminal value, but from the instrumental point of view. If you implement the general policy of acting within the common system, then people know they can trust you to be a cooperative agent. The price you pay when you defect is that other people, who work within the common system, must regard you as not-trustworthy. You become the act-equivalent of the little boy that cried wolf.
"Wait, what? The boy crying wolf is not doing the right thing by his ethics, he's just bored. He's in the wrong because he values his own time more than that of the other people, and that's not what I'm doing"
The point I'm making is that, aside of being a jerk, he's being stupid by sacrificing his trustworthiness. In the same way, it's unwise to take the risk of saying "Hey, I don't play by those rules" by cancelling wrongs with other wrongs.
"But that's not something you signal in either case, the scenarios are such that only you know what you chose, or even that you had a choice."
Which is unrealistic, and part of what makes the exercise one of low applicability. You might as well say you're saving a million people from the death penalty; it doesn't generalise.
"It's a thought experiment, the terms of it are the terms of it"
Let's not go there. Instead, I have one more angle you haven't considered: the possibility that you might be wrong.
"Of course I might be wrong, but that's true of any argument, that doesn't invalidate them"
I mean within the experiment. Perhaps, in fact, the death penalty is the right thing to do; if you automatically override society with your personal ethics, you lose the chance to update on that information.
"Truth is not a democracy, lots of people being in favour of something doesn't make it right. And, again, you want me to sacrifice my personal beliefs for the sake of fitting those of other humans. Humans which, you know as well I do, commit a thousand and one errors in thinking."
Indeed we do, "we" being key. If your opinion disagrees with the majority, well, that doesn't mean you're wrong, but it does mean that you should give the other side's view serious consideration. If you dismiss every popular idea that seems wrong on the basis of human stupidity alone, then you forget that you are human too, and your cognitive machinery is prone to failure.