EDT vs CDT

There are many tricky questions in decision theory. In this post, I’ll argue that the choice between CDT and EDT isn’t one of them.

Causal decision theory (CDT) evaluates expected utilities under causal interventions, while evidential decision theory (EDT) evaluates conditional expected utilities. Humans tend to have strong intuitions in favor of CDT, but I’ll argue that CDT is only reasonable insofar as it is an approximation to EDT that degrades more gracefully given certain kinds of reasoning errors.

My claims:

  • EDT is supported by a simple argument (but not “why ain’cha rich?”), and absent some counterargument or objection we should prefer it.
  • CDT degrades more gracefully than EDT in certain cases where we cannot condition on all available information. This explains the “failure” of EDT in cases like smoking lesion.
  • CDT is simpler to implement and almost always agrees with EDT in the evolutionary environment; this probably explains human intuitions in favor of CDT. So those intuitions should not be interpreted as additional support for CDT in cases where the two theories disagree and where we are able to condition on all inputs to the decision process (as is the case whenever we make decisions explicitly).
  • “Why ain’cha rich?” arguments support neither EDT nor CDT, and instead support variants of updateless decision theory (UDT). Interpreting these arguments is subtle, as philosophers correctly recognize in the case of the EDT vs CDT debate, and it’s not obvious where on the spectrum between EDT and UDT you should end up.
  • Starting from examples where both CDT and EDT perform poorly, we can easily construct cases where CDT makes a better choice “by coincidence” (including an example by Arntzenius, and “XOR blackmail”). These cases do not provide support for CDT any more than they provide support for procedures like “pick the alphabetically first option” which also sometimes make the correct decisions.
  • There do not seem to be any remaining strong arguments in CDT’s favor to counterbalance the simple pro-EDT argument. We are left with a difficult philosophical problem of deciding between EDT and UDT (which are endpoints of a single spectrum).

Most of these points have been made either in the philosophy literature or in the rationalist community (e.g. see Abram here). My main contribution is to put it all together and to be aggressively overconfident about the conclusion.

The simple argument for EDT

Suppose I am faced with two options, call them L and R. From my perspective, there are two possible outcomes of my decision process. Either I pick L, in which case I expect the distribution over outcomes P(outcome|I pick L), or I pick R, in which case I expect the distribution over outcomes P(outcome|I pick R). In picking between L and R I am picking between these two distributions over outcomes, so I should pick the action A for which E[utility|I pick A] is largest. There is no case in which I expect to obtain the distribution of outcomes under causal intervention P(outcome|do(I pick L)), so there is no particular reason that this distribution should enter into my decision process.

This is a very simple argument, but simple arguments are often the best kind.
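
To make the rule concrete, here is a minimal sketch in Python (the joint distribution over my action and the outcome, and the utilities, are made up for illustration) of picking the action that maximizes E[utility | I pick A]:

```python
# A minimal sketch of the EDT rule from the simple argument above.
# The joint distribution over (my action, outcome) and the utilities are made up.
joint = {
    ("L", "good"): 0.40, ("L", "bad"): 0.10,
    ("R", "good"): 0.15, ("R", "bad"): 0.35,
}
utility = {"good": 1.0, "bad": 0.0}

def conditional_expected_utility(action):
    """E[utility | I pick `action`], computed from my beliefs about my own decision."""
    p_action = sum(p for (a, _), p in joint.items() if a == action)
    return sum(p * utility[o] for (a, o), p in joint.items() if a == action) / p_action

# EDT: pick the action with the largest conditional expected utility.
best = max(("L", "R"), key=conditional_expected_utility)
print({a: round(conditional_expected_utility(a), 2) for a in ("L", "R")}, "->", best)
```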

The reason most people have a hard time choosing between EDT and CDT is not because they expect to find a more satisfying argument than this one, but because they think the simple argument is countered by equally strong arguments/intuitions in favor of CDT. In subsequent sections I’ll explain why I think those arguments and intuitions don’t hold up.

Divergence of CDT and EDT

CDT and EDT make different recommendations, so there is a real choice to make.

Consider the following example. There is a box and a predictor. You have the opportunity to give the predictor $1000; whether or not you do, you then open the box and take its contents. Before you arrived, the predictor predicted whether you would give them $1000. If they predicted you would pay, then they put $10,000 in the box, otherwise they put $100 in the box. The predictor is known to be very accurate. Before opening the box, do you pay the predictor?

In this case, the CDT agent believes that they will receive $9000 conditioned on paying up and $100 conditioned on not paying up (provided they have any uncertainty about their own decision—if they put probability 0 on paying up then the conditional probability can be undefined). But despite having those beliefs, the CDT agent doesn’t pay up, and so predictably receives $100. The EDT agent has the same beliefs, but actually pays up and predictably receives $9000.
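
To spell out those numbers, here is a small sketch. The post only says the predictor is “very accurate,” so the 99% accuracy and the credences fed to the causal calculation below are assumptions for illustration; the $9000 vs $100 figures correspond to an essentially perfect predictor.

```python
# The predictor/box example with assumed numbers (99% accuracy is not stated above).
ACCURACY, PAYMENT, BIG, SMALL = 0.99, 1_000, 10_000, 100

def evidential_value(pay):
    """E[money | I pay] or E[money | I don't pay]."""
    p_predicted_pay = ACCURACY if pay else 1 - ACCURACY
    box = p_predicted_pay * BIG + (1 - p_predicted_pay) * SMALL
    return box - (PAYMENT if pay else 0)

def causal_value(pay, credence_predicted_pay):
    """Causal EV: intervening on my choice can't change the prediction, so the box
    contents depend only on my prior credence, and paying is always $1000 worse."""
    box = credence_predicted_pay * BIG + (1 - credence_predicted_pay) * SMALL
    return box - (PAYMENT if pay else 0)

print("EDT :", round(evidential_value(True)), "if pay,", round(evidential_value(False)), "if not")
for credence in (0.1, 0.5, 0.9):
    print("CDT :", round(causal_value(True, credence)), "vs", round(causal_value(False, credence)))
```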

Conditioning on inputs to the decision procedure

Smoking lesion

The smoking lesion problem seems to be the most common reason for rejecting EDT.

In this problem there are two kinds of people:

  • Those who like smoking and will probably get lung cancer (whether or not they smoke).
  • Those who don’t like smoking and probably won’t get lung cancer (whether or not they smoke).

We observe that 99% of people who smoke get lung cancer, and only 1% of people who don’t smoke get lung cancer.

An EDT agent who likes smoking will reason “if I don’t smoke, I only have a 1% chance of getting lung cancer, so I shouldn’t smoke.” This leads the EDT agent to incorrectly avoid smoking, while a CDT agent will correctly realize that they might as well smoke since they like it and it has no negative effects.

The reason that EDT does poorly is very simple: the EDT agent believes that they won’t get a tumor if they don’t smoke. But we know that the EDT agent likes smoking, and so will in fact get a tumor regardless of whether they smoke. The EDT agent errs because it is ignorant about a critical fact about the situation—the fact that it likes to smoke.

The EDT agent’s problem shouldn’t be blamed on EDT. No matter how good your decision procedure is, if you don’t know a critical fact about the situation then you can make a decision that looks bad (to an evaluator who does know the critical fact). This is actually a particularly egregious failure, since the EDT agent’s decision procedure is using the fact that they like to smoke, but somehow not conditioning on it when evaluating utilities.
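
Here is a toy version of the numbers (all made up, chosen so that smoking perfectly tracks the lesion and reproduces the observed 99%/1% rates), showing how conditioning on “I like smoking” removes the spurious inference:

```python
# Toy smoking lesion numbers (made up); cancer risk depends only on the lesion.
P_CANCER = {"likes": 0.99, "dislikes": 0.01}       # independent of whether you smoke

def naive_edt_cancer_prob(smoke, p_likes_given_smoke=0.99):
    """P(cancer | smoke?) for an agent that fails to condition on its own type:
    smoking carries spurious evidence about which type it is."""
    p_likes = p_likes_given_smoke if smoke else 1 - p_likes_given_smoke
    return p_likes * P_CANCER["likes"] + (1 - p_likes) * P_CANCER["dislikes"]

def conditioned_edt_cancer_prob(smoke, my_type):
    """P(cancer | smoke?, my type): once I condition on the fact that I like smoking,
    the decision itself carries no further evidence about cancer."""
    return P_CANCER[my_type]

print("naive EDT  :", round(naive_edt_cancer_prob(True), 3), "vs", round(naive_edt_cancer_prob(False), 3))
print("conditioned:", conditioned_edt_cancer_prob(True, "likes"), "vs", conditioned_edt_cancer_prob(False, "likes"))
```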

(See also: smoking lesion steelman.)

CDT as an approximation to EDT

The EDT algorithm is more complex than the CDT algorithm: in order to arrive at the right decision, the EDT agent needs to condition on all the inputs to their decision procedure. The CDT agent only needs to look at the causal structure of the situation, which makes the correct answer obvious.

So it seems like we have something to learn from CDT, even if we don’t consider this a strong objection to EDT. Why does CDT get the right answer more easily?

Consider the following simple assumption:

The CDT assumption: The output of my decision process is conditionally independent of facts I care about, given the actual decisions I make and the inputs to my decision process.

Under this assumption, CDT and EDT are equivalent. Taking a causal intervention surgically removes the update “backwards” from a decision to the output of the decision process. But given that the agent should already be able to update on the inputs to its decision process, and that we are intervening on the actual decision, the CDT assumption implies that screening off this update doesn’t affect the outcome.

(Technically we need to do a backwards induction on “the actual decisions I make” if there are multiple, but this isn’t really key.)
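
Here is a minimal numerical check of that equivalence on a toy model (all probabilities made up): a latent type T drives both the decision D and the outcome O. Naive conditioning on D alone disagrees with the causal intervention, but once we condition on the input T the two agree.

```python
# Toy check of the CDT assumption (all numbers made up): a latent type T drives
# both my decision D and the outcome O.  Without conditioning on T, ordinary
# conditioning and causal intervention disagree; conditioning on T reconciles them.
P_T = {0: 0.5, 1: 0.5}
P_D_given_T = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}              # P(D=d | T=t)
P_O1_given_TD = {(0, 0): 0.3, (0, 1): 0.3, (1, 0): 0.8, (1, 1): 0.8}  # O ignores D here

def p_o1_conditional(d):
    """P(O=1 | D=d): conditioning without the inputs lets D carry evidence about T."""
    num = sum(P_T[t] * P_D_given_T[t][d] * P_O1_given_TD[(t, d)] for t in P_T)
    den = sum(P_T[t] * P_D_given_T[t][d] for t in P_T)
    return num / den

def p_o1_do(d):
    """P(O=1 | do(D=d)): intervening cuts the T -> D edge, so T keeps its prior."""
    return sum(P_T[t] * P_O1_given_TD[(t, d)] for t in P_T)

def p_o1_conditional_given_t(d, t):
    """P(O=1 | D=d, T=t): with the input T known, this matches the interventional value."""
    return P_O1_given_TD[(t, d)]

print("no inputs :", round(p_o1_conditional(0), 3), round(p_o1_conditional(1), 3))
print("do()      :", round(p_o1_do(0), 3), round(p_o1_do(1), 3))
print("with T=1  :", p_o1_conditional_given_t(0, 1), p_o1_conditional_given_t(1, 1))
```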

Debunking intuitions for CDT

The calculation in EDT can be very complicated, since the characteristics that determine a decision can themselves be complicated. An ideal Bayesian would of course have already updated on all of these characteristics, and so there would be no further calculation to perform—but humans are not ideal agents.

So CDT can degrade more gracefully given the kinds of errors that humans may make. If we operated in an evolutionary environment where CDT and EDT always agreed, then we’d expect to use CDT rather than EDT since it yields the correct behavior and is more robust to this particular error.

The CDT assumption is violated in cases like Newcomb’s problem, where others’ predictions of you are correlated with your decision and also with your utility. Humans handle these cases via a patchwork of heuristics like vengefulness, honor, generosity, etc. Does this give evidence against EDT?

In particular, if EDT was right, should we expect humans to use EDT rather than this patchwork of heuristics? I think the answer is “no,” and so we shouldn’t take human intuitions as any evidence at all about the correctness of EDT vs CDT.

One problem is that EDT itself isn’t actually a great decision procedure, and so an EDT agent needs these heuristics anyway. (This is discussed in the next section.) As a simple example, both EDT and CDT agents give in to extortion in single-shot games, and therefore are more likely to be the target of extortion. So either kind of agent needs to have a heuristic in favor of vengefulness to protect themselves.  And once you have heuristics that patch the holes in EDT, they also patch the holes in CDT (in fact EDT can sometimes “double count,” taking too much vengeance because it both has decision-theoretic value and is satisfying).

A deeper issue is that Newcomb-style correlations between our behavior and others’ predictions of our behavior are typically weak, and so superficially similar cases are mostly about reputation and repeated interactions. The real role of these heuristics is mostly to replace complicated reasoning about iterated games, not complicated decision theories. That just means that the evolutionary environment is even less likely to contain cases in which EDT and CDT come apart, and so we should interpret pro-CDT intuitions as even less evidence about CDT.

If CDT was great in the evolutionary environment, should we keep using it?

If we have an intuition in favor of CDT because it’s simpler and works just as well in the evolutionary environment, maybe we should keep using CDT for the same reasons—even if that isn’t much evidence about the actual correctness of EDT.

A first question is whether we can actually condition on the inputs to our decision procedure—if we can’t, then that’s an advantage for CDT. For implicit decisions I think this is a bit unclear, and it might be better to use CDT. For explicit decisions we do have access to all of the inputs into the decision process, since we had to make them explicit, and so should just condition on them rather than using CDT.

Given that, in any particular case where we believe that CDT and EDT come apart (e.g. weird cases with multiverse-wide cooperation), we ought to prefer the EDT recommendation. Moreover, once we accept EDT over CDT it becomes more plausible that we should move some of the way towards UDT, which differs from CDT in more cases (though still mostly agrees).

I think these cases aren’t very common, so that CDT is normally fine even in the modern environment. But if you are having a debate about decision theory, I don’t think the fact that CDT and EDT usually agree gives a good reason to prefer the CDT recommendation.

“Why ain’cha rich?” arguments against EDT

“Why ain’cha rich?” is usually presented as an argument in favor of EDT, but I think it actually provides one of the strongest arguments against EDT. Sometimes this fact is used to support CDT, but that appears to be a straightforward error.

UDT does better than EDT or CDT

EDT and CDT are both reflectively inconsistent, in the sense that an agent using either one of them would prefer to stop using it as soon as possible. For example, everyone would prefer to have a precommitment to not give in to extortion, in cases where having such a precommitment would decrease the odds of extortion. And so CDT and EDT agents alike would prefer to commit to a decision theory with the clause “be bound by a precommitment X whenever having that precommitment would have led to higher utility.”

Updateless decision theory (UDT) is a natural formalization of this idea. An EDT agent with background information X chooses whether to take action A by evaluating E[U | X, I pick A given info X]. A UDT agent with background information X instead makes that decision by evaluating E[U | I pick A given info X]. It’s the same procedure, we just don’t update.
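
As a toy illustration of “same procedure, just don’t update,” here is a sketch of the extortion case. The demand, the damage, and how the predictor’s willingness to extort depends on your policy are all made-up numbers for illustration.

```python
# Updated EDT vs un-updated (UDT-style) evaluation of the same expectation.
DEMAND, DAMAGE = 1_000, 10_000
P_EXTORTED = {"pay if extorted": 0.9, "never pay": 0.05}   # assumed predictor behavior

def payoff(policy, extorted):
    if not extorted:
        return 0
    return -DEMAND if policy == "pay if extorted" else -DAMAGE

def edt_choice_once_extorted():
    """Updated EDT: condition on already facing the extortionist, then compare."""
    return max(P_EXTORTED, key=lambda pol: payoff(pol, extorted=True))

def udt_choice():
    """UDT: the same expectation, evaluated without updating on being extorted."""
    def prior_value(policy):
        p = P_EXTORTED[policy]
        return p * payoff(policy, True) + (1 - p) * payoff(policy, False)
    return max(P_EXTORTED, key=prior_value)

print("EDT (after updating):", edt_choice_once_extorted())   # pays the demand
print("UDT (no updating):   ", udt_choice())                 # refuses, so is rarely extorted
```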

The biggest philosophical difficulty is exactly how far you don’t update. We can define a whole spectrum of views: EDT, which updates on everything, sits at one extreme; UDT, a hypothetical theory that updates on “as little as possible,” sits at the other (it’s only hypothetical because no one really knows how to formulate “update as little as possible” for logical facts). In between are views that update on some partial information.

At any point in time, an EDT agent in epistemic state X would decide to replace themselves with an agent that makes decisions, given info Y, by evaluating E[U|X, I pick A given info Y]. (This decision theory is sometimes called “son-of-EDT.”)

If you haven’t considered this argument in advance, or weren’t able to change your decision procedure, then it’s not clear how you should decide. This is the philosophically sophisticated analog of the usual debate about “why ain’cha rich?” arguments for EDT vs CDT—everyone would prefer to be the kind of person who uses UDT, but it’s not clear whether that gives us a reason in the moment to prefer UDT to EDT. After all, by the time we are making the decision (e.g. by the time we are actually facing the extortion) it’s too late, and it feels weird to make the decision to benefit some hypothetical version of ourselves.

Overall I think the decision between EDT and UDT is difficult. Of course, it’s obvious that you should commit to using something-like-UDT going forward if you can, and so I have no doubts about evaluating decisions from something like my epistemic state in 2012. But it’s not at all obvious whether I should go further than that, or how much. Should I go back to 2011 when I was just starting to think about these arguments? Should I go back to some suitable idealization of my first coherent epistemic state? Should I go back to a position where I’m mostly ignorant about the content of my values? A state where I’m ignorant about basic arithmetic facts?

Constructing a “why ain’cha rich?” argument for CDT

There are many cases where a CDT or EDT agent underperforms a UDT agent (e.g. when they pay an extortionist who was able to make good predictions about their behavior).

Given that both EDT and CDT are making a mistake, and CDT sometimes makes an extra mistake, we can create cases where the two mistakes cancel out, and where CDT makes the recommendation that we would have wanted to commit to (though for reasons completely unrelated to the actual commitment).

For example, suppose that I encounter an extortionist who is able to make good predictions about my behavior. However, they decide to extort me via a weird Newcomb-like case:

  • There is a button which I can press to pay them $1000.
  • They’ve privately committed to a prediction about whether I’ll press the button or not.
  • If they predicted that I wouldn’t press the button, then they’ll burn down my house.

In this case CDT will not recommend pressing the button: it would gladly give in to extortion, but it believes that pressing the button can’t help, because whether the house burns is mediated by a prediction that has already been made. Meanwhile, the EDT agent will correctly realize that its behavior is correlated with the extortionist’s prediction, and so will pay up.
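
Concretely (assuming, say, a 90%-accurate predictor and a $100,000 house; neither number is in the setup above):

```python
# Expected values in the button case, with assumed predictor accuracy and house value.
ACCURACY, DEMAND, HOUSE = 0.9, 1_000, 100_000

def evidential_value(press):
    """E[money | press?]: pressing is strong evidence that the prediction was 'press',
    in which case the house is safe."""
    p_predicted_press = ACCURACY if press else 1 - ACCURACY
    return -(1 - p_predicted_press) * HOUSE - (DEMAND if press else 0)

def causal_value(press, credence_predicted_press):
    """Causal EV: the prediction is already fixed, so pressing only costs $1000."""
    return -(1 - credence_predicted_press) * HOUSE - (DEMAND if press else 0)

print("EDT :", round(evidential_value(True)), "if press,", round(evidential_value(False)), "if not")
for credence in (0.1, 0.5, 0.9):
    print("CDT :", round(causal_value(True, credence)), "vs", round(causal_value(False, credence)))
```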

In this case the CDT agent gets the right answer. But it’s effectively a coincidence—the error of ignoring effects on predictors exactly offsets the error of being willing to pay into extortion. (Arntzenius gives a different example involving a predictor and betting on baseball. Rationalists sometimes call a similar case “XOR blackmail.” These seem to be isomorphic dilemmas, but I think it’s a bit clearer what’s going on in this case.)

Some people take these cases as evidence that CDT and EDT are on symmetrical footing with respect to actually achieving good outcomes, and we should not update from the superficially compelling argument for EDT. I agree that they are on symmetrical footing with respect to “why ain’cha rich?” arguments, in the sense that such arguments don’t support either CDT or EDT. But I don’t think these examples undermine the basic argument for EDT, and shouldn’t be considered an argument for CDT.

Conclusion

I think the main reason to endorse CDT is our intuition that we should decide based on cause and effect. This intuition can be explained away by the observation that the CDT assumption is approximately true in the evolutionary environment (that the output of a decision process is conditionally independent of outcomes given the decision itself and the inputs to the decision process) and that under this assumption CDT is a simple approximation to EDT that degrades more gracefully as you fail to condition correctly.

After explaining away that intuition, we are left with no really strong arguments for CDT. Meanwhile, EDT has a simple and relatively compelling argument in its favor.

The strongest argument against EDT is the “why ain’cha rich?” argument for UDT. This shows convincingly that EDT agents should try to change their own decision procedure to use something like UDT. However, when actually faced with a dilemma, it’s unclear exactly how far back in time we should “un-update” (and similarly, if changing your decision procedure it’s not clear if you should un-update or simply freeze updating).

Different answers to that question carve out a range of views between EDT and UDT. I think this roughly defines the spectrum of plausible views about decision theory.


22 thoughts on “EDT vs CDT”

  1. One argument for CDT over EDT that you didn’t mention in this post: Suppose you live in a deterministic universe and know your own source code. Suppose you are deciding between taking a 5 dollar bill and a 10 dollar bill. Suppose your world model says you take the 5 dollar bill with 100% probability. Now conditioning on taking the 10 dollar bill gives you complete garbage, since you are conditioning on a probability 0 event. If you use EDT, then depending on other details of your decision procedure, this could lead to you always taking the 5 dollar bill. So then your world model would be accurate. (This is the “5 and 10” problem often discussed at MIRI; I don’t know if it has been written up anywhere)
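
    Here’s a toy sketch of that failure mode (the deterministic world model and the fallback value used for the undefined conditional are of course made up):

```python
# 5-and-10 sketch: the world model is already certain the agent takes the $5.
p_take_10 = 0.0

def conditional_value(action):
    """E[$ | I take `action`]; undefined (None) when the action has probability 0."""
    if action == "take_5":
        return 5.0
    return 10.0 if p_take_10 > 0 else None

def edt_choice(fallback=-1.0):
    """EDT with an arbitrary fallback for the undefined conditional: a bad fallback
    makes it take the $5, which then 'confirms' the world model."""
    values = {a: conditional_value(a) if conditional_value(a) is not None else fallback
              for a in ("take_5", "take_10")}
    return max(values, key=values.get), values

print(edt_choice())   # ('take_5', {'take_5': 5.0, 'take_10': -1.0})
```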

    CDT never generates undefined expected utility estimates like EDT does. It takes the 10 dollar bill in this problem. However, if it always takes the 10 dollar bill, then its counterfactual for taking the 5 dollar bill is strange because it is one in which a physical law is violated. The violation of a physical law could have important consequences other than which action the agent takes.

    Both decision theories have trouble with this problem, but at least CDT always produces a defined answer.

    Here’s another way of thinking about this problem. A fully Bayesian version of EDT must construct all possible worlds and then condition on taking a certain action. But each of these possible worlds contains a running copy of the EDT algorithm. So, absent some defined method for taking a fixed point, this leads to an infinite loop, and you can’t actually have a fully Bayesian version of EDT.

    (What if you use reflective oracles to allow EDT to select some fixed point? We could specify that the reflective oracle returns arbitrary results when asked to condition on a probability 0 event (I think this is what the most natural way to emulate conditional queries on a reflective oracle results in, but I haven’t checked). Now there are multiple possible reflective oracles (i.e. fixed points); it’s possible to always take the 10 dollar bill and think bad things will happen conditional on taking the 5 dollar bill, and it’s also possible to always take the 5 dollar bill and think bad things will happen conditional on taking the 10 dollar bill.)

    A fully Bayesian version of CDT must construct all possible counterfactuals. Each of these counterfactuals contains a running copy of CDT, so one might think the same problem applies. But in each of these counterfactuals, the output of the CDT algorithm is “thrown away”, since the agent’s action is controlled by a magic counterfactual intervention rather than its algorithm. So, if the CDT algorithm is sandboxed, the CDT’s world model can simply ignore the running CDT algorithm, as it has no effect. Thus, at least in single-agent problems (with no predictors etc), a fully Bayesian version of CDT is possible in principle, though obviously not in practice.

    1. This seems like the best argument for CDT so far. I normally just think of it as a reason that EDT is hard to operationalize, and like you I consider CDT hard to operationalize (in a way that’s not terrible) for similar reasons. But the EDT problem does seem harder to ignore and so I should have mentioned it in the post.

      My tentative view is that beliefs are constrained by more than consistency conditions, and so you shouldn’t become certain about the output of your decision process before running it (and indeed we don’t have any concrete reason to think that you would become certain). This effect obviously becomes stronger if you are more updateless, since then you only have a problem if you are certain about your decision already in your un-updated state. But I do see this as a problem, since we don’t have an argument for why such a bad fixed point wouldn’t occur.

      On this view the argument in the post is more like: if not for the conceptual difficulty with EDT in cases where you become certain about your own decisions, it would be clearly more correct than CDT. That’s a motivation to focus on that conceptual difficulty.

      1. Update: I wrote a post about solving this problem using a conditional version of reflective oracles: https://www.greaterwrong.com/posts/Rcwv6SPsmhkgzfkDw/edt-solves-5-and-10-with-conditional-oracles

  2. How about you first explain what it would mean for CDT, EDT or UDT to be right. Which question you ask will make a huge difference in what decision theory gives the best outcomes.

    Indeed, I’d argue the whole assumption that there is a factual question about which decision theory is right is mistaken. No time now but will post later.

    1. OP’s use of ‘which decision-theory is correct’ seems to be something pretty close to ‘here are my goals and here is the situation I’m in, what can I do to advance my goals.’ I feel like responding to ‘here are my goals and here is the situation I’m in, what can I do to advance my goals’ with ‘this talk is meaningless unless you stipulate your criteria for evaluating decision-theories’ doesn’t really dissolve the problem.

      1. Indeed I think we SHOULD think of it like you suggest. But if we take that seriously then there is no place for this abstract argument about decision theories. Once you fully specify a situation and goals I’d argue that the answers become clear, and that the whole debate is a pseudo-problem exactly because we don’t make clear exactly what we want and what we model ourselves as being able to choose in order to bring it about.

      2. To clarify a bit: the problem is that the question “what should I do?” is fundamentally vague. In the cases where EDT and CDT come apart you exactly DON’T have anything resembling a choice in Newcomb problems (by hypothesis you are causally determined to act a certain way). The only difficulty comes from the unclarity between asking:

        Would it be better to be the sort of person who chooses one box.

        And

        Supposing we imagine (counterfactually) that you can make a choice other than that which is causally determined (i.e. model the choice as a minor miracle), which would be best.

        Both have a precise answer. You tell me which question you’re asking and I’ll tell you which decision theory to use.

  3. Rationalists sometimes call a similar case “XOR blackmail.” These seem to be isomorphic dilemmas, but I think it’s a bit clearer what’s going on in this case.

    Can you explain why these are isomorphic? In the case of XOR blackmail, the researcher has no way to burn down your house, they just know some news before you do.

    1. I just mean the payoff matrices are basically the same:

      • Probability of receiving the letter is high if you pay and low if you don’t.
      • Paying is not causally connected to anything.
      • If you receive the letter and pay, you lose a small amount.
      • If you receive the letter and don’t pay, you lose a big amount.

      I agree that XOR blackmail is a worse vulnerability in the sense that the attacker needs fewer resources.

  4. “everyone would prefer to be the kind of person who uses UDT, but it’s not clear whether that gives us a reason in the moment to prefer UDT to EDT. After all, by the time we are making the decision (e.g. by the time we are actually facing the extortion) it’s too late, and it feels weird to make the decision to benefit some hypothetical version of ourselves.”

    How is this different from the CDT vs EDT comparison in Newcomb’s paradox? By the time we are making the actual decision, i.e. whether to one-box or two-box, it’s also too late.

    “In picking between L and R I am picking between these two distributions over outcomes, so I should pick the action A for which E[utility|I pick A] is largest. There is no case in which I expect to obtain the distribution of outcomes under causal intervention P(outcome|do(I pick L)), so there is no particular reason that this distribution should enter into my decision process.”

    This is probably too much of a tangent, but what does it mean here to say “I should pick the action A”? The claim that what we’re picking between is E[utility | I pick A] and not E[utility | do(I pick A)] seems to imply a rejection of free will (which I assume is your view anyways). Then, once you condition on Z describing the type of person you are / your reasoning processes, isn’t E[utility | I pick L, Z] = E[utility | I pick R, Z] = E[utility | Z]? (If you take into account non-determinism that goes into decision procedures, then this isn’t right, but that seems unlikely to me to be the crux of the confusion).
    I realize this isn’t really related to EDT vs CDT, feel free to just address the first question.

    1. Why is it too late by the time you are deciding whether to one-box or two-box? It’s only “too late” because you are thinking of causality as the decision-theoretically relevant concept, which I’m arguing is (a) unnatural, and (b) has no real arguments in its favor other than the intuition (which I think can be explained away).

      E[utility | I pick L, Z] is not E[utility | Z], because you have logical uncertainty over the output of your own decision.

      1. Hm sounds like I misunderstand what you originally wrote. I thought the argument was (a) an EDT agent should prefer to commit to UDT going-forward, if they have a way to do so, but (b) there are no commitment mechanisms for this so (c) by the time you get to an actual decision which depends on your choice between EDT and UDT, it’s too late, and so in that moment, maybe the benefits of EDT outweigh UDT.

        It feels like substituting EDT for UDT, and CDT for EDT everywhere here makes just as much sense.

        Maybe one misunderstanding – for someone who’s thought about UDT, do you think they should always follow UDT going forwards? My impression based on the post was “no.”

        The argument about logical uncertainty makes sense to me. Is there also a “standard view” on this among decision theorists? Since EDT vs CDT is a fairly standard topic to my understanding, and logical uncertainty is a new concept.

        1. It’s clear that an EDT agent would commit to stop updating if they had the chance. This is very similar to the usual “why ain’cha rich” argument for preferring EDT over CDT in Newcomb-like cases. However:

          a. There is a further reason to prefer EDT to CDT, that doesn’t depend on this (rightly controversial / unpopular) style of argument. Namely, EDT is the right formalization of the choice that a rational agent faces, at the time they are making it, from their perspective. Causality has no place in the analysis; it only comes in because of a human intuition about causality (that we can understand and explain as a pragmatic approximation to EDT).

          b. EDT doesn’t consistently outperform CDT, so this isn’t an unambiguous argument for EDT (e.g. consider XOR blackmail, where a law-abiding predictor sends you a letter saying “your house burned down while you were at work today XOR I predict that you are about to pay me $1000”). I describe that situation as “EDT and CDT both make a mistake, and then CDT makes a further mistake that cancels it out,” which I think is a fine way to understand what’s going on but doesn’t really salvage the why ain’cha rich argument.
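
          Concretely, with made-up numbers (a 1% chance of a fire and a $100,000 house): after receiving the letter EDT pays and CDT refuses, but ex ante the paying policy is strictly worse.

```python
# XOR blackmail with assumed numbers.
P_FIRE, DEMAND, HOUSE = 0.01, 1_000, 100_000

def value_given_letter(pay):
    """E[money | letter received, my policy]. The letter is true, so for a payer it
    implies there was no fire, and for a refuser it implies there was one."""
    fire = not pay
    return -(HOUSE if fire else 0) - (DEMAND if pay else 0)

def prior_value(pay):
    """Ex-ante value of the policy: the house burns with probability P_FIRE either
    way, and a payer additionally pays whenever there is no fire (letter arrives)."""
    return -P_FIRE * HOUSE - (1 - P_FIRE) * (DEMAND if pay else 0)

print("after the letter:", value_given_letter(True), "if pay,", value_given_letter(False), "if not")
print("ex ante policies:", prior_value(True), "if payer,", prior_value(False), "if refuser")
```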

          UDT seems to be (roughly) reflectively stable, so I do think a UDT agent will continue to use UDT even if they have the opportunity to self-modify. But really UDT refers to a spectrum of views based on how far you un-update. I think that everyone ought to use some view on this spectrum, but all the points are reflectively stable and so reflective stability as a desideratum doesn’t give us any more guidance. I think this is the most interesting and important open question in decision theory.

          Logical uncertainty is not a new concept amongst philosophers (I think considering the question formally in mathematical logic is only maybe 15 years old—Gaifman has a paper from 2004—but the basic idea of being uncertain about analytic facts is very old). I haven’t seen discussion of this decision theory + logical uncertainty in the philosophy literature, but I suspect that’s just because I have only a passing familiarity with that literature. The SEP entry doesn’t mention this at all, and in general seems somewhat cringe-inducing, so probably the literature is as well: https://plato.stanford.edu/entries/decision-causal/ (though it does have pointers to more careful versions of many of the arguments made in this post, which I probably should have referenced).

            1. I don’t think I understand the point in the original post, but the point that the “why ain’cha rich” arguments do not justify either CDT or EDT is well-taken.

            Is the point that an EDT agent would commit to UDT if given the option, an argument in favor of UDT? To me, it seems it should be, but I’m not entirely clear on this. In particular, while this doesn’t decide between “UDT using Paul 5 years ago” vs. “UDT using Paul now” it seems to rule out “keep updating as usual, and follow EDT.”

            I’m confused about the emphasis on the “how far to un-update” question, and why it seems so important to you. I agree that I can pick any prior distribution over worlds, and plug that into UDT, and get a decision theory. But I don’t see why different choices of the prior would really change what UDT would tell us in any significant ways. I’m happy to think more about this, and write that out, but if there’s an easy example of a scenario where this choice would lead to substantially different decisions, it’s probably easier to ground the discussion in that.

          2. Forgot to reply about logical induction. Fair enough, thanks!

            First paragraph: “Suppose that a student is considering whether to study for an exam. He reasons that if he will pass the exam, then studying is wasted effort. Also, if he will not pass the exam, then studying is wasted effort. He concludes that because whatever will happen, studying is wasted effort, it is better not to study.”

            Seems legit, guess I can’t argue with that (:

  5. Good overview!

    Most of these points have been made either in the philosophy literature or in the rationalist community (e.g. see Abram here). My main contribution is to put it all together and to be aggressively overconfident about the conclusion.

    It’s a bit of a shame, though, that you give almost no references. Here are some for interested readers.

    Your argument about the Smoking Lesion is one version (the most plausible one in my opinion) of the so-called tickle defense. See, e.g., Arif Ahmed: Evidence, Decision and Causality, sect. 4.3.

    I think the “CDT assumption” is similar to what Ahmed discusses as “Ramsey’s thesis” in Evidence, Decision and Causality, ch. 8.

    Humans tend to have strong intuitions in favor of CDT

    I wonder what you mean by this. Do you mean that philosophers tend to favor CDT? Because among the general population, one-boxing is slightly more popular in Newcomb’s problem than two-boxing. (See the overview of surveys here: https://casparoesterheld.com/2017/06/27/a-survey-of-polls-on-newcombs-problem/ ) Maybe you’re referring to the average person’s reaction to the Smoking lesion, but maybe most people would agree with the tickle defense you give?

    Failure of CDT

    The case you mention differs only insignificantly from Newcomb’s problem (right?) and the argument you give is essentially “Why Ain’cha Rich?” (right?). As you know, a lot of people (including most decision theorists, unfortunately) think two-boxing is the way to go in Newcomb’s problem (and would also think that not paying is the way to go in your problem).

    If you want to modify Newcomb’s problem to make it more persuasive, you might like the following scenario, which I am currently writing a paper about (with Vincent Conitzer):

    “Two boxes, B1 and B2, are on offer. A risk-neutral buyer may purchase one or none of the boxes but not both. Each of the two boxes costs $1. Yesterday, the seller put $3 in each box which she predicted the buyer not to acquire. Both the seller and the buyer believe the seller’s prediction to be accurate with probability 0.75. It is assumed that no randomization device is available to the buyer (or at least no randomization device not predictable by the seller).”

    Regardless of its credences about the seller’s prediction, CDT recommends buying one of the boxes. But — as we can see by assuming the perspective of the seller — this loses money in expectation.
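
    (A quick sketch of those numbers, assuming “accurate with probability 0.75” means the prediction matches the buyer’s actual choice with probability 0.75:)

```python
# The two-box buying game with the stated prices, prize, and 0.75 accuracy.
ACCURACY, PRICE, PRIZE = 0.75, 1, 3

def evidential_value(buy):                 # buy in {"B1", "B2", "none"}
    """Expected profit given the buyer's own choice (the seller's perspective):
    with probability 0.75 the chosen box was predicted, and so is empty."""
    return 0.0 if buy == "none" else -PRICE + (1 - ACCURACY) * PRIZE

def causal_value(buy, credence):
    """Causal EV given credences over the seller's prediction; the purchase can't
    change what is already in the boxes."""
    if buy == "none":
        return 0.0
    return -PRICE + (1 - credence[buy]) * PRIZE   # $3 sits in every box not predicted

print("evidential:", {b: evidential_value(b) for b in ("B1", "B2", "none")})
# Whatever the credences, at least one box has positive causal EV, so CDT buys:
for credence in ({"B1": 0.5, "B2": 0.5}, {"B1": 0.9, "B2": 0.05}, {"B1": 1/3, "B2": 1/3}):
    print("causal    :", {b: round(causal_value(b, credence), 2) for b in ("B1", "B2", "none")})
```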

    It’s still a “Why Ain’cha Rich?”-type of argument, but in this case CDT voluntarily loses money.

    1. It’s a bit of a shame, though, that you give almost no references. Here are some for interested readers.

      Thanks!

      I wonder what you mean by this. Do you mean that philosophers tend to favor CDT? Because among the general population, one-boxing is slightly more popular in Newcomb’s problem than two-boxing. (See the overview of surveys here: https://casparoesterheld.com/2017/06/27/a-survey-of-polls-on-newcombs-problem/ ) Maybe you’re referring to the average person’s reaction to the Smoking lesion, but maybe most people would agree with the tickle defense you give?

      I mean that humans have strong intuitions about the importance of causal reasoning, such that deliberative people tend to favor CDT, at least typically (if they conclude that the why ain’cha rich argument is bogus), and then defend this view by what amounts to an intuitive appeal.

      The case you mention differs only insignificantly from Newcomb’s problem (right?) and the argument you give is essentially “Why Ain’cha Rich?” (right?). As you know, a lot of people (including most decision theorists, unfortunately) think two-boxing is the way to go in Newcomb’s problem (and would also think that not paying is the way to go in your problem).

      Yes on both counts. The point of this example was that there is a difference between CDT and EDT, so that there is a real question to be settled here. I’m mostly leaning on the “simple argument for EDT” as my actual argument for EDT.

      If you want to modify Newcomb’s problem to make it more persuasive, you might like the following scenario, which I am currently writing a paper about (with Vincent Conitzer):

      I think that’s a good example. I usually use rock paper scissors as the most intuitive example, since people can imagine opponents who would beat them at rock paper scissors, and can easily see the madness of CDT wanting to play over and over again despite losing most of the time.
