(*Subsumed by: Timeless Decision Theory, EDT=CDT)*

People sometimes object to evidential decision theory by saying: “It seems like the distinction between correlation and causation is really important to making good decisions in practice. So how can a theory like EDT, with no role for causality, possibly be right?”

Long-time readers probably know my answer, but I want to articulate it in a little bit more detail. This is essentially identical to the treatment of causality in Eliezer Yudkowsky’s manuscript Timeless Decision Theory, but much shorter and probably less clear.

## Causality and conditional independence

If a system is well-described by a causal diagram, then it satisfies a complex set of statistical relationships. For example:

- In the causal graph A ⟶ B ⟶ C, the variables A and C are independent given B.
- In the graph A ⟶ B ⟵ C, the variables A and C are independent, but are dependent given B.
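These two patterns are easy to check numerically. Here is a small sketch (my own illustration, not part of the original argument) using linear-Gaussian variables, with partial correlation standing in for conditional dependence:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def partial_corr(x, y, z):
    """Correlation of x and y after linearly regressing z out of both."""
    rx = x - z * (x @ z) / (z @ z)
    ry = y - z * (y @ z) / (z @ z)
    return np.corrcoef(rx, ry)[0, 1]

# Chain A -> B -> C: A and C are correlated, but independent given B.
A = rng.normal(size=n)
B = A + rng.normal(size=n)
C = B + rng.normal(size=n)
corr_chain = np.corrcoef(A, C)[0, 1]        # ~0.58: dependent
pcorr_chain = partial_corr(A, C, B)         # ~0: independent given B

# Collider A -> B <- C: A and C are independent, but dependent given B.
A2 = rng.normal(size=n)
C2 = rng.normal(size=n)
B2 = A2 + C2 + rng.normal(size=n)
corr_collider = np.corrcoef(A2, C2)[0, 1]   # ~0: independent
pcorr_collider = partial_corr(A2, C2, B2)   # ~-0.5: dependent given B

print(corr_chain, pcorr_chain, corr_collider, pcorr_collider)
```

The collider case is the classic "explaining away" effect: learning B makes A and C informative about each other even though they were generated independently.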

To an evidential decision theorist, these kinds of statistical relationships are the whole story about causality, or at least about its relevance to decisions. We could still ask *why* such relationships exist, but the answer wouldn’t matter to what we should do.

## EDT = CDT

Now suppose that I’m making a decision X, trying to optimize Y.

And suppose further that there is a complicated causal diagram containing X and Y, such that my beliefs satisfy all of the statistical relationships implied by that causal diagram.

Note that this diagram will necessarily contain *me* and all of the computation that goes into my decision, and so it will be (much) too large for me to reason about explicitly.

Then I claim that an evidential decision theorist will endorse the recommendations of CDT (using that causal diagram):

- EDT recommends maximizing the conditional expectation of Y, conditioned on all the inputs to X. Write Z for all of these inputs.
- It might be challenging to condition on all of Z, given limits on our introspective ability, but we’d recommend doing it *if possible*. (At least for the rationalist’s interpretation of EDT, which evaluates expected utility conditioned on a fact of the form “I decided X given inputs Z.”)
- So if we can describe a heuristic that gives us the same answer as conditioning on all of Z, then an EDT agent will want to use it.
- I’ll argue that CDT is such a heuristic. **Edited to add**: This is wrong, or at least badly incomplete. I don’t think it matters to the main point of this post (that EDT does “normal-looking causal inference” in normal cases), but it’s pretty central to the actual live philosophical debates about EDT vs. CDT vs. TDT.
    - In particular, it’s true that we’d like to condition on all of Z, but if we lack introspective access to parts of Z then this procedure won’t do that: it ignores effects via Z but doesn’t actually know the values in Z, so there’s no real reason to ignore those effects. Actually handling this issue is very subtle and has been discussed a lot. I think it’s fine if you use any algorithm A that conditions on A() = X, but in general it’s very messy to talk about algorithms that take facts as inputs without knowing those facts.
    - This is a lot of what people are interested in when discussing the EDT vs. CDT comparison, since I think everyone understands the basic point in this post. I think the logical facts about what EDT outputs are on a different footing than the other inputs in Z (since e.g. you *can’t* build an agent that knows them, whereas updating on all the inputs in Z is extremely reasonable), but they are often treated similarly in the decision theory literature.

- In a causal diagram, there is an easy graphical condition (d-connectedness) to see whether (and how) X and Y are related given Z:
    - We need to have a path from X to Y that satisfies certain properties: the path can start out moving upstream (i.e. against the causal arrows); it may switch from moving upstream to downstream at any node outside Z (including at the start); it *must* switch direction whenever it hits a node in Z; and it may only switch from moving downstream to upstream when it hits a node in Z (or a node with a descendant in Z).

- If Z includes exactly the causal parents of X, then it’s easy to check that the only way for X and Y to be d-connected is by a direct downstream path from X to Y.
- Under these conditions, it’s easy to see that intervening on X is the same as conditioning on X. (Indeed you could check this more directly from the definition of a causal intervention, which is structurally identical to conditioning in cases where we are already conditioning on all parents.)
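The path rule can be turned into a few lines of code. Below is a minimal d-connection checker (my own sketch; the graph and names are invented for illustration): a path is active given Z iff every non-collider on it lies outside Z and every collider on it is in Z or has a descendant in Z. Conditioning on X's parent then leaves only the direct downstream path:

```python
def descendants(dag, v):
    """All nodes reachable from v by following the causal arrows."""
    out, stack = set(), [v]
    while stack:
        for w in dag.get(stack.pop(), []):
            if w not in out:
                out.add(w)
                stack.append(w)
    return out

def active_paths(dag, x, y, z):
    """All d-connecting (undirected) paths between x and y given z.

    dag maps each node to a list of its children."""
    edges = {(a, b) for a in dag for b in dag[a]}
    nbrs = {}
    for a, b in edges:
        nbrs.setdefault(a, set()).add(b)
        nbrs.setdefault(b, set()).add(a)

    def is_active(path):
        for i in range(1, len(path) - 1):
            prev, v, nxt = path[i - 1], path[i], path[i + 1]
            if (prev, v) in edges and (nxt, v) in edges:   # collider at v
                if v not in z and not (descendants(dag, v) & z):
                    return False
            elif v in z:                                   # chain or fork in z
                return False
        return True

    results, stack = [], [(x,)]
    while stack:
        path = stack.pop()
        if path[-1] == y:
            if is_active(path):
                results.append(path)
            continue
        for w in nbrs.get(path[-1], set()):
            if w not in path:
                stack.append(path + (w,))
    return results

# Confounded graph: U -> X, U -> Y, X -> Y, so U is X's only causal parent.
dag = {"U": ["X", "Y"], "X": ["Y"], "Y": []}
both = sorted(active_paths(dag, "X", "Y", set()))
only_direct = active_paths(dag, "X", "Y", {"U"})
print(both)         # [('X', 'U', 'Y'), ('X', 'Y')]: back-door path is open
print(only_direct)  # [('X', 'Y')]: conditioning on U blocks the back door
```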

Moreover, once the evidential decision-theorist’s problem is expressed this way, they can remove all of the causal nodes upstream of X, since those nodes have no effect on the decision. This is particularly valuable because the upstream part of the graph contains all of the complexity of their own decision-making process (which they had no hope of modeling anyway).

So *if* the EDT agent can find a causal structure that reflects their (statistical) beliefs about the world, then they will end up making the same decision as a CDT agent who believes in the same causal structure.
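The key identity here, that intervening on X coincides with conditioning on X once we also condition on all of X’s parents, is easy to check by simulation. A toy structural model (my own illustration, with made-up coefficients) where Z is the single parent of X:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

def sample(do_x=None):
    # Structural model: Z -> X, Z -> Y, X -> Y (Z is X's only parent).
    z = rng.random(n) < 0.5
    x = rng.random(n) < np.where(z, 0.8, 0.2)
    if do_x is not None:
        x = np.full(n, do_x)            # intervention: cut the Z -> X edge
    y = 2.0 * x + 3.0 * z + rng.normal(size=n)
    return z, x, y

z, x, y = sample()                      # observational data
zi, xi, yi = sample(do_x=True)          # interventional data, do(X=1)

cond_on_parents = y[x & z].mean()       # E[Y | X=1, Z=1]
interv = yi[zi].mean()                  # E[Y | do(X=1), Z=1]
naive = y[x].mean()                     # E[Y | X=1], Z not conditioned on
interv_marginal = yi.mean()             # E[Y | do(X=1)]

print(cond_on_parents, interv)          # agree (~5.0 each)
print(naive, interv_marginal)           # disagree (~4.4 vs. ~3.5)
```

Conditioning on X together with its parent Z reproduces the interventional expectation exactly, while conditioning on X alone is confounded by the back-door path through Z.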

## Whence subjective causality?

You might think: causal diagrams encode a very specific kind of conditional independence structure. Why would we see that structure in the world so often? Is this just some weird linguistic game we are playing, where you can rig up some weird statistical structure that happens to give the same conclusions as more straightforward reasoning from causality?

Indeed, one easy way to get such statistical relationships is to have “metaphysically fundamental” causality: if a world contains many variables, each of which is an independent stochastic function of its parents in some causal diagram, then those variables will satisfy all the conditional independencies implied by that causal diagram.

If this were the only way that we got subjective causality, then there’d be no difference between EDT and CDT, and no one would care about whether we treated causality as subjective or metaphysically fundamental.

But it’s not. There are other sources for similar statistical relationships. And moreover, the “metaphysically fundamental causality” *isn’t* actually consistent with the subjective beliefs of a logically bounded agent.

We can illustrate both points with the calculator example from Yudkowsky’s manuscript:

- Suppose there are two calculators, one in Mongolia and one on Neptune, each computing the same function (whose value we don’t know) at the same instant.
- Our beliefs about the two calculators are correlated, since we know they compute the same function. This remains true after conditioning on all the physical facts about the two calculators.
- But in the “metaphysically fundamental” causal diagram, the results of the two calculators should be d-separated once we know the physical facts about them (since there isn’t even enough time for causal influences to propagate between them).
- We can recover the correct conditional independencies by adding a common cause of the two calculators, representing “what is the correct output of the calculation?” We might describe this as “logical” causality.
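A toy simulation of the calculator story (my own sketch, not from the manuscript): treating the unknown correct answer as a latent “logical” common cause reproduces exactly the right statistics. The two outputs are strongly correlated, but independent once we condition on the logical fact:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Our uncertainty over the logical fact "the correct output" (a digit 0-9):
f = rng.integers(0, 10, size=n)

def calculator(truth):
    # Each calculator reports the truth, with an independent 1% glitch.
    glitch = rng.random(n) < 0.01
    return np.where(glitch, rng.integers(0, 10, size=n), truth)

mongolia = calculator(f)   # the calculator in Mongolia
neptune = calculator(f)    # the calculator on Neptune

corr = np.corrcoef(mongolia, neptune)[0, 1]   # ~0.98: strongly correlated
corr_given_f = np.corrcoef(mongolia[f == 0], neptune[f == 0])[0, 1]
print(corr, corr_given_f)  # high marginally, ~0 given the logical fact
```

No physical signal passes between the two machines; the correlation flows entirely through the common “logical” parent f.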

This kind of “logical” causality can lead to major deviations from the CDT recommendation in cases where the EDT agent’s decision is highly correlated with other facts about the environment through non-physically-causal channels. For example: if there are two identical agents, or if someone else is reasoning about the agent’s decision sufficiently accurately, then the EDT agent would be inclined to say that the logical facts about their decision “cause” physical facts about the world (and hence induce correlations), whereas a CDT agent would say that those correlations should be ignored.

## Punchline

EDT and CDT agree under two conditions: (i) we require that our causal model of the world and our beliefs agree in the usual statistical sense, i.e. that our beliefs satisfy the conditional independencies implied by our causal model, (ii) we evaluate utility conditioned on “I make decision X after receiving inputs Z” rather than conditioning on “I make decision X in the current situation” without including relevant facts about the current situation.

In practice, I think the main way CDT and EDT differ is that CDT ends up in a complicated philosophical discussion about “what really *is* causality?” (and so splinters into a host of theories) while EDT picks a particular answer: for EDT, causality is completely characterized by condition (i), that our beliefs and our causal model agree. That makes it obvious how to generalize causality to logical facts (or to arbitrary universes with very different laws), while recovering the usual behavior of causality in typical cases.

I believe the notion of causality that is relevant to EDT is the “right” one, because causality seems like a concept developed to make and understand decisions (both over evolutionary time and more importantly over cultural evolution) rather than something ontologically fundamental that is needed to even *define* a correct decision.

If we take this perspective, it doesn’t matter whether we use EDT or CDT. I think this perspective basically accounts for intuitions about the importance of causality to decision-making, as well as the empirical importance of causality, while removing most of the philosophical ambiguity about causality. And it’s a big part of why I don’t feel particularly confused about decision theory.