There are many tricky questions in decision theory. In this post, I’ll argue that the choice between CDT and EDT isn’t one of them.
Causal decision theory (CDT) evaluates expected utilities under causal interventions, while evidential decision theory (EDT) evaluates conditional expected utilities. Humans tend to have strong intuitions in favor of CDT, but I’ll argue that CDT is only reasonable insofar as it is an approximation to EDT that degrades more gracefully given certain kinds of reasoning errors.
- EDT is supported by a simple argument (but not “why ain’cha rich?”), and absent some counterargument or objection we should prefer it.
- CDT degrades more gracefully than EDT in certain cases where we cannot condition on all available information. This explains the “failure” of EDT in cases like smoking lesion.
- CDT is simpler to implement and almost always agrees with EDT in the evolutionary environment; this probably explains human intuitions in favor of CDT. So those intuitions should not be interpreted as additional support for CDT in cases where the two theories disagree and where we are able to condition on all inputs to the decision process (as is the case whenever we make decisions explicitly).
- “Why ain’cha rich?” arguments support neither EDT nor CDT; instead they support variants of updateless decision theory (UDT). Interpreting these arguments is subtle, as philosophers correctly recognize in the case of the EDT vs CDT debate, and it’s not obvious where on the spectrum between EDT and UDT you should end up.
- Starting from examples where both CDT and EDT perform poorly, we can easily construct cases where CDT makes a better choice “by coincidence” (including an example by Arntzenius, and “XOR blackmail”). These cases do not provide support for CDT any more than they provide support for procedures like “pick the alphabetically first option” which also sometimes make the correct decisions.
- There do not seem to be any remaining strong arguments in CDT’s favor to counterbalance the simple pro-EDT argument. We are left with a difficult philosophical problem of deciding between EDT and UDT (which are endpoints of a single spectrum).
Most of these points have been made either in the philosophy literature or in the rationalist community (e.g. see Abram here). My main contribution is to put it all together and to be aggressively overconfident about the conclusion.
The simple argument for EDT
Suppose I am faced with two options, call them L and R. From my perspective, there are two possible outcomes of my decision process. Either I pick L, in which case I expect the distribution over outcomes P(outcome|I pick L), or I pick R, in which case I expect the distribution over outcomes P(outcome|I pick R). In picking between L and R I am picking between these two distributions over outcomes, so I should pick the action A for which E[utility|I pick A] is largest. There is no case in which I expect to obtain the distribution of outcomes under causal intervention P(outcome|do(I pick L)), so there is no particular reason that this distribution should enter into my decision process.
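The rule this argument recommends can be written down in a few lines. The joint distribution and utilities below are purely illustrative, not part of the argument:

```python
# Minimal sketch of the EDT decision rule: pick the action A that maximizes
# E[utility | I pick A]. The joint distribution here is an illustrative stand-in
# for the agent's beliefs (including uncertainty about its own choice).

joint = {  # P(action, outcome)
    ("L", "win"): 0.40, ("L", "lose"): 0.10,
    ("R", "win"): 0.10, ("R", "lose"): 0.40,
}
utility = {"win": 1.0, "lose": 0.0}

def conditional_eu(action):
    """E[utility | I pick `action`] under the joint distribution."""
    p_action = sum(p for (a, _), p in joint.items() if a == action)
    total = sum(p * utility[o] for (a, o), p in joint.items() if a == action)
    return total / p_action

best = max(["L", "R"], key=conditional_eu)
print(best)  # the action with the higher conditional expected utility
```

Nothing here references causal structure: the agent only needs its joint beliefs over (action, outcome) pairs.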
This is a very simple argument, but simple arguments are often the best kind.
The reason most people have a hard time choosing between EDT and CDT is not that they expect to find a more satisfying argument than this one, but that they think the simple argument is countered by equally strong arguments and intuitions in favor of CDT. In subsequent sections I’ll explain why I think those arguments and intuitions don’t hold up.
Divergence of CDT and EDT
CDT and EDT make different recommendations, so there is a real choice to make.
Consider the following example. There is a box and a predictor. You have the opportunity to give the predictor $1000; whether or not you do, you then open the box and take its contents. Before you arrived, the predictor predicted whether you would give them $1000. If they predicted you would pay, then they put $10,000 in the box, otherwise they put $100 in the box. The predictor is known to be very accurate. Before opening the box, do you pay the predictor?
In this case, the CDT agent believes that they will receive $9000 conditioned on paying up and $100 conditioned on not paying up (provided they have any uncertainty about their own decision—if they put probability 0 on paying up then the conditional probability can be undefined). But despite having those beliefs, the CDT agent doesn’t pay up, and so predictably receives $100. The EDT agent has the same beliefs, but actually pays up and predictably receives $9000.
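We can make the divergence concrete with a small calculation. The predictor accuracy (99%) and the agent’s prior over its own choice are assumptions added for illustration; the payoffs come from the example:

```python
# The predictor/box example. ACC and P_PAY are illustrative assumptions.

ACC = 0.99          # P(prediction matches the actual choice)
P_PAY = 0.5         # agent's prior uncertainty about its own choice
BOX = {True: 10_000, False: 100}   # box contents given "predicted pay?"
COST = 1_000

def edt_value(pay):
    """E[money | I pay]: the prediction is correlated with the actual choice."""
    p_pred_pay = ACC if pay else 1 - ACC
    ev_box = p_pred_pay * BOX[True] + (1 - p_pred_pay) * BOX[False]
    return ev_box - (COST if pay else 0)

def cdt_value(pay):
    """E[money | do(I pay)]: intervening cuts the link back to the prediction,
    so the prediction keeps its prior (marginal) probability."""
    p_pred_pay = P_PAY * ACC + (1 - P_PAY) * (1 - ACC)
    ev_box = p_pred_pay * BOX[True] + (1 - p_pred_pay) * BOX[False]
    return ev_box - (COST if pay else 0)

print(edt_value(True), edt_value(False))  # EDT: paying looks far better
print(cdt_value(True), cdt_value(False))  # CDT: not paying is $1000 better
```

Under the intervention the box contents are fixed no matter what, so CDT sees paying as a pure $1000 loss; EDT compares the two conditional distributions and pays.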
Conditioning on inputs to the decision procedure
The smoking lesion problem seems to be the most common reason for rejecting EDT.
In this problem there are two kinds of people:
- Those who like smoking and will probably get lung cancer (whether or not they smoke).
- Those who don’t like smoking and probably won’t get lung cancer (whether or not they smoke).
We observe that 99% of people who smoke get lung cancer, and only 1% of people who don’t smoke get lung cancer.
An EDT agent who likes smoking will reason “if I don’t smoke, I only have a 1% chance of getting lung cancer, so I shouldn’t smoke.” This leads the EDT agent to incorrectly avoid smoking, while a CDT agent will correctly realize that they might as well smoke since they like it and it has no negative effects.
The reason that EDT does poorly is very simple: the EDT agent believes that they won’t get a tumor if they don’t smoke. But we know that the EDT agent likes smoking, and so will in fact get a tumor regardless of whether they smoke. The EDT agent errs because it is ignorant about a critical fact about the situation—the fact that it likes to smoke.
The EDT agent’s problem shouldn’t be blamed on EDT. No matter how good your decision procedure is, if you don’t know a critical fact about the situation then you can make a decision that looks bad (to an evaluator who does know the critical fact). This is actually a particularly egregious failure, since the EDT agent’s decision procedure is using the fact that they like to smoke, but somehow not conditioning on it when evaluating utilities.
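A toy version of the calculation makes the diagnosis precise. The population split and the utilities (+1 for smoking if you like it, -100 for cancer) are illustrative assumptions:

```python
# Illustrative smoking lesion model. Cancer risk depends only on the agent's
# type, never on whether it smokes. Utilities are assumed for illustration.

P_CANCER = {"likes": 0.99, "dislikes": 0.01}   # independent of smoking
U_SMOKE, U_CANCER = 1.0, -100.0

def eu(smokes, like_type):
    """Expected utility given your type (which fixes your cancer risk)."""
    u = U_SMOKE if (smokes and like_type == "likes") else 0.0
    return u + P_CANCER[like_type] * U_CANCER

def naive_edt(smokes):
    """EDT that fails to condition on its own taste for smoking: it treats
    'I smoke' as evidence about its type (smokers are almost all 'likes')."""
    t = "likes" if smokes else "dislikes"
    return eu(smokes, t)

print(naive_edt(True), naive_edt(False))      # naive EDT: don't smoke
print(eu(True, "likes"), eu(False, "likes"))  # conditioned on type: smoke
```

Once the agent conditions on the fact that it likes smoking, its cancer risk is the same either way, and smoking wins by exactly its direct utility.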
(See also: smoking lesion steelman.)
CDT as an approximation to EDT
The EDT algorithm is more complex than the CDT algorithm: in order to arrive at the right decision, the EDT agent needs to condition on all the inputs to their decision procedure. The CDT agent only needs to look at the causal structure of the situation, which makes the correct answer obvious.
So it seems like we have something to learn from CDT, even if we don’t consider this a strong objection to EDT. Why does CDT get the right answer more easily?
Consider the following simple assumption:
The CDT assumption: The output of my decision process is conditionally independent of facts I care about, given the actual decisions I make and the inputs to my decision process.
Under this assumption, CDT and EDT are equivalent. Taking a causal intervention surgically removes the update “backwards” from a decision to the output of the decision process. But given that the agent should already be able to update on the inputs to its decision process, and that we are intervening on the actual decision, the CDT assumption implies that screening off this update doesn’t affect the outcome.
(Technically we need to do a backwards induction on “the actual decisions I make” if there are multiple, but this isn’t really key.)
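This equivalence can be checked numerically in a toy model. Below, the outcome (cancer) depends only on the agent’s type, not on the output of its decision process, so the CDT assumption holds; all the probabilities are illustrative:

```python
from itertools import product

# Toy check of the CDT assumption: when the outcome depends only on the
# inputs to the decision process (the agent's "type") and the actual action,
# conditioning on the action adds nothing beyond intervening on it.

P_TYPE = {"likes": 0.5, "dislikes": 0.5}
P_SMOKE_GIVEN_TYPE = {"likes": 0.9, "dislikes": 0.1}   # noisy decision
P_CANCER = {"likes": 0.99, "dislikes": 0.01}           # depends on type only

def p_joint(t, smokes, cancer):
    p_s = P_SMOKE_GIVEN_TYPE[t] if smokes else 1 - P_SMOKE_GIVEN_TYPE[t]
    p_c = P_CANCER[t] if cancer else 1 - P_CANCER[t]
    return P_TYPE[t] * p_s * p_c

def p_cancer_conditional(t, smokes):
    """P(cancer | type, smokes): the EDT quantity, after conditioning on inputs."""
    num = p_joint(t, smokes, True)
    return num / (num + p_joint(t, smokes, False))

def p_cancer_do(t, smokes):
    """P(cancer | type, do(smokes)): intervening severs the type->smoke arrow."""
    return P_CANCER[t]

for t, s in product(P_TYPE, [True, False]):
    assert abs(p_cancer_conditional(t, s) - p_cancer_do(t, s)) < 1e-12
print("conditional and interventional probabilities agree given the inputs")
```

If the outcome had an extra dependence on the decision process itself (as in Newcomb-like cases), the assertion would fail, which is exactly where CDT and EDT come apart.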
Debunking intuitions for CDT
The calculation in EDT can be very complicated, since the characteristics that determine a decision can themselves be complicated. An ideal Bayesian would of course have already updated on all of these characteristics, and so there would be no further calculation to perform—but humans are not ideal agents.
So CDT can degrade more gracefully given the kinds of errors that humans may make. If we operated in an evolutionary environment where CDT and EDT always agreed, then we’d expect humans to have evolved to use CDT rather than EDT, since it yields the correct behavior and is more robust to this particular error.
The CDT assumption is violated in cases like Newcomb’s problem, where others’ predictions of you are correlated with your decision and also with your utility. Humans handle these cases via a patchwork of heuristics like vengefulness, honor, generosity, etc. Does this give evidence against EDT?
In particular, if EDT were right, should we expect humans to use EDT rather than this patchwork of heuristics? I think the answer is “no,” and so we shouldn’t take human intuitions as any evidence at all about the correctness of EDT vs CDT.
One problem is that EDT itself isn’t actually a great decision procedure, and so an EDT agent needs these heuristics anyway. (This is discussed in the next section.) As a simple example, both EDT and CDT agents give in to extortion in single-shot games, and therefore are more likely to be the target of extortion. So either kind of agent needs to have a heuristic in favor of vengefulness to protect themselves. And once you have heuristics that patch the holes in EDT, they also patch the holes in CDT (in fact EDT can sometimes “double count,” taking too much vengeance because it both has decision-theoretic value and is satisfying).
A deeper issue is that Newcomb-style correlations between our behavior and others’ predictions of our behavior are typically weak, and so superficially similar cases are mostly about reputation and repeated interactions. The real role of these heuristics is mostly to replace complicated reasoning about iterated games, not to implement complicated decision theories. That just means the evolutionary environment is even less likely to contain cases in which EDT and CDT come apart, and so we should interpret pro-CDT intuitions as even weaker evidence about CDT.
If CDT was great in the evolutionary environment, should we keep using it?
If we have an intuition in favor of CDT because it’s simpler and works just as well in the evolutionary environment, maybe we should keep using CDT for the same reasons—even if that isn’t much evidence about the actual correctness of EDT.
A first question is whether we can actually condition on the inputs to our decision procedure—if we can’t, then that’s an advantage for CDT. For implicit decisions I think this is a bit unclear, and it might be better to use CDT. For explicit decisions we do have access to all of the inputs into the decision process, since we had to make them explicit, and so should just condition on them rather than using CDT.
Given that, in any particular case where we believe that CDT and EDT come apart (e.g. weird cases with multiverse-wide cooperation), we ought to prefer the EDT recommendation. Moreover, once we accept EDT over CDT it becomes more plausible that we should move some of the way towards UDT, which differs from CDT in more cases (though still mostly agrees).
I think these cases aren’t very common, so that CDT is normally fine even in the modern environment. But if you are having a debate about decision theory, I don’t think the fact that CDT and EDT usually agree gives a good reason to prefer the CDT recommendation.
“Why ain’cha rich?” arguments against EDT
“Why ain’cha rich?” is usually presented as an argument in favor of EDT, but I think it actually provides one of the strongest arguments against EDT. Sometimes this fact is used to support CDT, but that appears to be a straightforward error.
UDT does better than EDT or CDT
EDT and CDT are both reflectively inconsistent, in the sense that an agent using either one of them would prefer to stop as soon as possible. For example, everyone would prefer to have a precommitment not to give in to extortion, in cases where having such a precommitment would decrease the odds of extortion. And so CDT and EDT agents alike would prefer to adopt a decision theory with the clause “be bound by a precommitment X whenever having that precommitment would have led to higher utility.”
Updateless decision theory (UDT) is a natural formalization of this idea. An EDT agent with background information X decides whether to take action A by evaluating E[U|X, I pick A given info X]. A UDT agent with background information X instead makes that decision by evaluating E[U|I pick A given info X]. It’s the same procedure; we just don’t update.
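The difference between the two formulas shows up concretely in extortion cases. In the sketch below, all the numbers are assumptions: the extortionist predicts your policy and mostly targets agents who would pay, so the policy affects whether you get extorted at all:

```python
# Illustrative extortion model contrasting EDT with UDT. The probabilities
# and dollar amounts are assumptions chosen to make the contrast visible.

P_EXTORTED = {"pay": 0.50, "refuse": 0.001}  # predictor targets likely payers
COST_PAY, COST_BURN = 1_000, 100_000

def edt_value(policy):
    """E[U | X, I pick A], where X includes 'I am already being extorted'."""
    return -COST_PAY if policy == "pay" else -COST_BURN

def udt_value(policy):
    """E[U | I pick A given info X]: evaluated without updating on X, so the
    policy's effect on whether you get extorted in the first place counts."""
    loss = COST_PAY if policy == "pay" else COST_BURN
    return -P_EXTORTED[policy] * loss

print(max(["pay", "refuse"], key=edt_value))  # once extorted, EDT pays
print(max(["pay", "refuse"], key=udt_value))  # ex ante, UDT refuses
```

The EDT agent, having updated on being extorted, correctly sees that paying is cheaper now; the UDT agent evaluates the policy from the earlier epistemic state and refuses, which makes extortion unlikely to happen at all.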
The biggest philosophical difficulty is exactly how much you don’t update. We can define a whole spectrum of views: at one extreme is EDT, which updates on everything; at the other is UDT, a hypothetical theory that updates on “as little as possible” (hypothetical only because no one really knows how to formulate “update as little as possible” for logical facts). In between are views that update on some partial information.
At any point in time, an EDT agent in epistemic state X would decide to replace themselves with an agent that makes decisions, given info Y, by evaluating E[U|X, I pick A given info Y]. (This decision theory is sometimes called “son-of-EDT.”)
If you haven’t considered this argument in advance, or weren’t able to change your decision procedure, then it’s not clear how you should decide. This is the philosophically sophisticated analog of the usual debate about “why ain’cha rich?” arguments for EDT vs CDT—everyone would prefer to be the kind of person who uses UDT, but it’s not clear whether that gives us a reason in the moment to prefer UDT to EDT. After all, by the time we are making the decision (e.g. by the time we are actually facing the extortion) it’s too late, and it feels weird to make the decision to benefit some hypothetical version of ourselves.
Overall I think the decision between EDT and UDT is difficult. Of course, it’s obvious that you should commit to using something-like-UDT going forward if you can, and so I have no doubts about evaluating decisions from something like my epistemic state in 2012. But it’s not at all obvious whether I should go further than that, or how much. Should I go back to 2011 when I was just starting to think about these arguments? Should I go back to some suitable idealization of my first coherent epistemic state? Should I go back to a position where I’m mostly ignorant about the content of my values? A state where I’m ignorant about basic arithmetic facts?
Constructing a “why ain’cha rich?” argument for CDT
There are many cases where a CDT or EDT agent underperforms a UDT agent (e.g. when they pay an extortionist who was able to make good predictions about their behavior).
Given that both EDT and CDT are making a mistake, and CDT sometimes makes an extra mistake, we can create cases where the two mistakes cancel out, and where CDT makes the recommendation that we would have wanted to commit to (though for reasons completely unrelated to the actual commitment).
For example, suppose that I encounter an extortionist who is able to make good predictions about my behavior. However, they decide to extort me via a weird Newcomb-like case:
- There is a button which I can press to pay them $1000.
- They’ve privately committed to a prediction about whether I’ll press the button or not.
- If they predicted that I wouldn’t press the button, then they’ll burn down my house.
In this case CDT will not recommend pressing the button: it would gladly pay up to extortion, but it believes that it can’t pay the extortionist because the payment is mediated by a prediction. Meanwhile, the EDT agent will correctly realize that its behavior is correlated with the extortionist’s behavior, so will pay up.
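Plugging in assumed numbers (a 99% accurate predictor, a $100,000 house) makes the two recommendations explicit:

```python
# Sketch of the button case. Predictor accuracy, the house's value, and the
# agent's prior over its own choice are illustrative assumptions.

ACC = 0.99
HOUSE, BUTTON = 100_000, 1_000
P_PRESS_PRIOR = 0.5   # agent's uncertainty about its own choice

def edt_value(press):
    """The prediction is evidence-correlated with the actual choice."""
    p_pred_press = ACC if press else 1 - ACC
    return -(BUTTON if press else 0) - (1 - p_pred_press) * HOUSE

def cdt_value(press):
    """Intervening on the choice leaves the prediction at its prior."""
    p_pred_press = P_PRESS_PRIOR * ACC + (1 - P_PRESS_PRIOR) * (1 - ACC)
    return -(BUTTON if press else 0) - (1 - p_pred_press) * HOUSE

print(max([True, False], key=edt_value))  # EDT presses (pays up)
print(max([True, False], key=cdt_value))  # CDT refuses: pressing loses $1000
```

Because the prediction is fixed under the intervention, CDT sees pressing as a pure $1000 loss and refuses, which happens to be the behavior we would have wanted to commit to.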
In this case the CDT agent gets the right answer. But it’s effectively a coincidence: the error of ignoring effects on predictors exactly offsets the error of being willing to pay extortion. (Arntzenius gives a different example involving a predictor and betting on baseball. Rationalists sometimes call a similar case “XOR blackmail.” These seem to be isomorphic dilemmas, but I think it’s a bit clearer what’s going on in this case.)
Some people take these cases as evidence that CDT and EDT are on symmetrical footing with respect to actually achieving good outcomes, and we should not update from the superficially compelling argument for EDT. I agree that they are on symmetrical footing with respect to “why ain’cha rich?” arguments, in the sense that such arguments don’t support either CDT or EDT. But I don’t think these examples undermine the basic argument for EDT, and shouldn’t be considered an argument for CDT.
I think the main reason to endorse CDT is our intuition that we should decide based on cause and effect. This intuition can be explained away by two observations: the CDT assumption is approximately true in the evolutionary environment (the outputs of decision processes are conditionally independent of outcomes given the decision itself and the inputs to the decision process), and under this assumption CDT is a simple approximation to EDT that degrades more gracefully when you fail to condition correctly.
After explaining away that intuition, we are left with no really strong arguments for CDT. Meanwhile, EDT has a simple and relatively compelling argument in its favor.
The strongest argument against EDT is the “why ain’cha rich?” argument for UDT. This shows convincingly that EDT agents should try to change their own decision procedure to use something like UDT. However, when actually faced with a dilemma, it’s unclear exactly how far back in time we should “un-update” (and similarly, if you are changing your decision procedure, it’s not clear whether you should un-update or simply freeze updating).
Different answers to that question carve out a range of views between EDT and UDT. I think this roughly defines the spectrum of plausible views about decision theory.