Earlier, I discussed my view of morality as a massive multiplayer prisoner's dilemma. Recently, I read quite a fascinating paper on something called Timeless Decision Theory (TDT). I knew a bit about it from Less Wrong, but reading about it at length, in a more complete format, both helped me understand what it's about and prompted more reflection on the subject.
A warning: TDT's prescriptions, in some instances of the prisoner's dilemma amongst other things, go against the predominant view in decision theory, which follows something called causal decision theory (CDT). While I find the arguments for TDT extremely persuasive, do keep in mind that this is not (yet) settled science.
The relevant version is something called the Twin Prisoner's Dilemma. In it, the person you're playing against is not another prisoner, but an exact copy of yourself. As usual, you both have the option of cooperating or defecting, and you must make your decisions without any interaction. If both cooperate, you get 3 years of sentence each; if both defect, 4 years each. If one defects and the other cooperates, the defector gets only 1 year and the cooperator 6. Presumably, you both want to minimise the time you spend in jail.
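To keep the numbers straight, here's that payoff matrix as a minimal Python sketch (the names and encoding are mine, purely for illustration):

```python
# Payoff matrix from the setup above: years of sentence, lower is better.
# Keys are (my_choice, their_choice); values are (my_years, their_years).
SENTENCE = {
    ("cooperate", "cooperate"): (3, 3),
    ("defect",    "defect"):    (4, 4),
    ("defect",    "cooperate"): (1, 6),
    ("cooperate", "defect"):    (6, 1),
}
```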
Now, according to CDT, making the other player an exact copy of yourself changes nothing. No matter what the other player chooses, you get a shorter sentence by defecting, so the "rational" choice, they argue, is to defect. And of course the copy reasons the same way, so both defect and both land in the globally worse outcome.
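CDT's dominance argument can be spelled out against that matrix; this is just a sketch of the reasoning, not anyone's official implementation:

```python
def cdt_choice():
    """Dominance reasoning: hold the other player's choice fixed and
    compare my sentence under each of my options."""
    for theirs in ("cooperate", "defect"):
        # Whatever they do, defecting gives me a shorter sentence:
        assert SENTENCE[("defect", theirs)][0] < SENTENCE[("cooperate", theirs)][0]
    return "defect"  # defection dominates, so CDT defects
```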
Now, as I argued when I first thought about this: if the other player is an exact copy of yourself, you can expect, with reasonably high probability, that whatever decision you come to, the copy will decide the same. This rules out the "one cooperates, one defects" scenarios, meaning you're left to choose between both cooperating and both defecting. And since both cooperating is better, globally and individually, the rational choice is to cooperate.
This is exactly what TDT says. Specifically, when dealing with a problem that contains an element whose output matches your decision algorithm's, you should decide as if your choice determined the output of that element. Such an element could be, for example, an exact copy of you who will decide as you do, or a sufficiently advanced brain scanner that can determine what you will decide before you do.
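Under that assumption, only the two "both decide alike" outcomes are reachable, and the choice becomes a simple minimisation. A hedged sketch of this reasoning, reusing the SENTENCE table from above:

```python
def tdt_choice():
    """Decide as if my choice also determines the copy's output: only
    the (cooperate, cooperate) and (defect, defect) outcomes are live."""
    diagonal = {c: SENTENCE[(c, c)][0] for c in ("cooperate", "defect")}
    return min(diagonal, key=diagonal.get)  # 3 years beats 4 years
```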
CDT says that, since your making your decision doesn't cause your copy to act as you do, you can't decide as if it did. I think CDT is wrong about this, but you really should read the paper linked above for an exhaustive analysis of why TDT works better.
Now, in day-to-day life you aren't going to find exact copies of yourself, so this might seem like a silly thought experiment and nothing else. Some might argue that human thought is so random that people are unpredictable. I strongly disagree. If the way humans make decisions were actually unpredictable, then how the hell would we manage to make correct decisions? Surely people, being thinking beings and not dice, actually base their choices on something? If I offer you a million dollars versus a thousand, I can predict that you will very likely take the million. That's not random. Sure, we make mistakes, more so the more complicated the problem becomes, but it is possible to approach a systematic way of making correct decisions. That is, in fact, the whole point of decision theory.
So, suppose you are in the real world, where we cannot have exact copies of ourselves yet. What we can have, though, is people who approach decisions more or less rationally. Suppose two people who both implement timeless decision algorithms and have common knowledge of that fact. Put them in a typical prisoner's dilemma scenario. They both know that their decisions are the result of the same algorithm, so each knows they are in the same situation as in the twin dilemma, and TDT says to cooperate in the twin prisoner's dilemma. So they both cooperate, reaching the individual and global maximum utility.
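As a toy demonstration, under the (strong) assumption that both players literally run the tdt_choice sketch above on the same knowledge:

```python
# Both players run the same algorithm, so the situation collapses to
# the twin dilemma and both cooperate:
alice, bob = tdt_choice(), tdt_choice()
print(alice, bob, SENTENCE[(alice, bob)])  # cooperate cooperate (3, 3)
```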
Now, if that doesn't blow your mind with meaningfulness, pretend you're me and that you see the prisoner's dilemma as one of the big things that get in the way of a good society. Think about it. A society of rational timeless decision theorists cooperates naturally, without the need for an outside enforcing authority, because they know the choice is between all cooperating and all defecting. I cannot properly emphasise how significant this is.
I realise that you can't reduce humans to simple decision algorithms. But the point is not to suggest that we are; it's to show what rationality really means. It's about dispelling the myth that selfish rationalists will take the path of everyone for themselves and collapse civilization.
It's about how, as always, if your smart choice predictably and systematically underperforms another, stupider choice, then it's probably not that smart.
Edit: I noticed I made a mistake in the setup of the problem. It's not enough that each party knows that the other implements a timeless decision algorithm, since the guarantee that they'll arrive at the same conclusion only helps if they both know it holds. Which is to say, they have to know not only that they are both TDT agents, but also that their knowledge of each other is identical (in the aspects relevant to the decision). I think, anyway; the moment the problem hits enough levels of recursion I get lost.