On SETI

Previously I’ve assumed that SETI is irrelevant from an EA perspective. On further reflection I think that’s probably right, but it’s not as obvious as I’d assumed. In this post I’ll make two relevant arguments:

  • If SETI rules out 50% of the possible places aliens could be, you shouldn’t conclude that searching the other 50% is a waste of time—at least not if you use UDT with aggregative values. I’d previously accepted the waste-of-time conclusion based on a mistaken argument.
  • If we have a significant risk of extinction then the positive impact of receiving an alien message might be relatively large, perhaps as good as reducing extinction risk by one percentage point. If we have a substantially higher risk of AI misalignment than our neighbors, then the positive impact may be even larger.

I think the biggest open question is: if there is a large, very distant, civilization trying to send a message, how confident should we be that we would have already seen it? And if we haven’t seen it, how plausible is it that we’d notice with a further 10x increase in sensitivity?

Unfortunately I think it’s very likely we would have seen a large civilization if one were trying to contact us, even if it were very far away and had invested only a small fraction of its energy in signaling (thanks to Anders Sandberg for providing some background facts). If that’s true, the prospects for SETI seem pretty bleak. I couldn’t easily reach a confident verdict on this myself, but someone probably has.

It’s probably worth doing or finding a more thorough analysis of this question. If I’m wrong and the prospects aren’t as bleak as they look, I think it would be a reasonable topic for an EA to think about at some length. (I don’t think it’s likely to be worthwhile for EAs to fund SETI in any case, since it seems very expensive, but it may be worth trying to influence whether and how SETI is conducted.)

Are we alone?

Suppose that SETI has searched 50% of the places that we might find an alien civilization. In the past I accepted the following argument without thinking about it too much:

  • A priori we had a roughly log-uniform distribution over the density of mature civilizations.
  • After observing that at least 50% of the universe is empty, the expected number of mature civilizations per Hubble volume is probably <4, and is roughly equally likely to be 2, 1, 1/2, 1/4, 1/8, 1/16…
  • Only in the first few buckets {2, 1, 1/2} should we expect to find a neighbor. But almost all of the mass is in the smaller buckets, since there are so many of them.

A similar argument goes through even if we’ve only checked 10% of the places that aliens could be. In fact, our expectation of finding anything should drop very quickly towards zero as we examine the local universe (or even note that Earth has not yet been colonized).

The problem with this argument is that our decisions have a lot of influence over universes where the density is ~1, but almost no influence over universes where the density is <0.001 (since those universes are doomed to be barren regardless of what we decide).

If we cut off the long tail of universes where civilization is very rare, in addition to the long tail where we would have already seen it, we end up with significant probability on the buckets {2, 1, 1/2}. Since our decisions only affect these worlds, we might as well behave as if we live in one of them, and hence as if there is a significant probability of another civilization in the observable universe.

I’m generally skeptical about making astronomically large anthropic updates, but in this case the update is only something like ~100x, which doesn’t seem outlandish. Everything is still fishy, but after marginalizing out uncertainty and fishiness I provisionally think we should take seriously the prospect of having exactly one other civilization in our Hubble volume.

Note that the same argument doesn’t suggest an overwhelming prior presumption in favor of universes with very high densities of life—our decisions have roughly the same impact on those universes as in sparsely populated universes. The impact only begins to decrease once we drop below a density of 1 per Hubble volume, since at those values large parts of the universe end up barren.
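To see how the numbers shake out, here is a toy version of this argument in Python (my own illustration, not anything from a real analysis; the bucket structure and the weighting function min(ρ, 1) are crude simplifications):

```python
# Toy version of the argument above (my own illustration). Prior: log-uniform
# over densities rho = 2, 1, 1/2, 1/4, ... civilizations per Hubble volume,
# with the high end already capped at ~2 by the observational update.
# Decision weight: our choices matter about equally in universes with
# rho >= 1 and proportionally less below that, i.e. w(rho) ~ min(rho, 1).

densities = [2.0 / 2**k for k in range(40)]     # 2, 1, 1/2, 1/4, ...
weights = [min(rho, 1.0) for rho in densities]  # equal prior mass per bucket

total = sum(weights)
near = sum(w for w, rho in zip(weights, densities) if rho >= 0.5)
print(f"decision-weighted mass on the buckets {{2, 1, 1/2}}: {near / total:.2f}")
# -> ~0.83: the sparse tail, which dominated the naive posterior, carries
#    almost none of the decision-relevant probability.
```

The exact number doesn’t matter; the point is that the long sparse tail contributes almost nothing once it’s weighted by how much our decisions matter.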

Would contact matter?

Even if we saw aliens, would it matter to a utilitarian?

I think it would:

  • If space colonization proceeds significantly slower than the speed of light, then we can see a distant civilization long before colonists can reach Earth. In the extreme case, where we observe a civilization near the edge of our Hubble volume, they may never be able to reach us.
  • If we kill ourselves, then by the time colonists reach Earth a substantial part of the currently-accessible universe will have become inaccessible. I’d estimate that the reachable value falls at a rate of about 1/(5 billion) per year.
  • If we receive a message from sophisticated aliens, it would probably eliminate the risk of extinction and allow us to start expanding almost immediately, by transferring technologies or ideas that effectively determine the course of our civilization.
    • A sophisticated civilization could have run trillions of simulations of worlds like ours in order to estimate the effect of different possible messages. Such a message would be by far the most optimized thing we’ve ever encountered. Imagining it as coming from a civilization kind of like ours seems insane.
    • (I think it’s realistic for aliens to transmit an AI that, when run with a modest amount of computation, effectively takes over the world. I expect the outcome would be good for complicated reasons, but figuring out the sign is also an important problem.)

For example, if space colonization proceeds at 50% the speed of light, we observe aliens 6 billion light years away, and then we go extinct, the colonization wave won’t reach Earth for more than 3 billion years. In that time, I think about half of the universe we could potentially reach will have receded too far to reach. The aliens will be able to reach about half of that anyway (since they can go there directly, they don’t have to go via Earth), but the other half will be forever inaccessible to them.

If we have a 10% risk of extinction, then this translates into an expected loss of 2.5% of our future light cone. Receiving an alien message now would effectively eliminate this risk by transferring control from us to an alien civilization, and so would be about as good as reducing extinction risk by 2.5 percentage points.

I think 50% of the speed of light would be a surprisingly slow rate of colonization (based on correspondence with Anders Sandberg and my background techno-optimism), but even at 90% of the speed of light we would have a ~0.5 billion year wait, which would lose ~5% of the value of the future and make SETI equivalent to a 0.5 percentage point effective reduction in extinction risk. Overall I think that a 1% absolute reduction in extinction risk is a not-crazy estimate for the value of successful SETI in a sparsely populated universe.
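Putting the pieces together, the arithmetic looks roughly like this (a minimal sketch using the rough inputs above; the decay rate and the “aliens recover half” factor come from the estimates in the text, and the function itself is just for illustration):

```python
# Back-of-envelope for the examples above (my own sketch; the 1/(5 billion
# years) decay rate and the "aliens recover half" factor are the rough
# estimates from the text, the rest is illustrative).

def equivalent_risk_reduction(extinction_risk, wait_gyr,
                              decay_per_gyr=1 / 5.0, alien_recovery=0.5):
    """Value of an alien message, as an equivalent cut in extinction risk."""
    receded = min(1.0, wait_gyr * decay_per_gyr)  # fraction receding while we wait
    lost = receded * (1.0 - alien_recovery)       # aliens still reach about half
    return extinction_risk * lost

# ~50% of c, ~3 Gyr wait: 0.10 * (0.6 * 0.5) = 3%, the ballpark of the 2.5% above
print(equivalent_risk_reduction(extinction_risk=0.10, wait_gyr=3.0))
# ~90% of c, ~0.5 Gyr wait: 0.10 * (0.1 * 0.5) = 0.5%
print(equivalent_risk_reduction(extinction_risk=0.10, wait_gyr=0.5))
```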

If we are likely to build misaligned AI, then an alien message could also be a “hail Mary:” if the aliens built a misaligned AI then the outcome is bad, but if they built a friendly AI then I think we should be happy with that AI taking over Earth (since from behind the veil of ignorance we might have been in their place). So if our situation looks worse than average with respect to AI alignment, SETI might have positive effects beyond effectively reducing extinction risk.

The preceding analysis takes a cooperative stance towards aliens. Whether that’s correct is a complicated question. For the most part, I think growing the pie, by enabling intelligence to access more of the universe, is probably the first-order term here. That might be justified either by moral arguments (from behind the veil of ignorance we’re as likely to be them as us) or by some weird thing with acausal trade (which I think is actually relatively likely).

The preceding analysis also implicitly presumes aggregative values—for which “twice as big is twice as good.” With respect to more easily satiable values, like the desire to live out a happy life, I suspect that SETI is an even better deal. I’d expect the message to be something that allows humanity to keep living a happy life and to benefit from significant technological acceleration, since the benefits of killing us are de minimis and we’d have been willing to pay a lot to prevent it.

Other weird stuff

Predicting the outcome of contact is really complicated and I wouldn’t want to gamble my life (or our future) without thinking about it. There seems to be a significant risk that passive SETI leads directly to extinction; that risk isn’t as broadly appreciated as I’d like.

If there are other aliens in the observable universe, I have no idea what their actual motives are. I think the naive argument above captures the largest expected impact we can foresee, but “something we haven’t thought of” is probably a larger expected impact still.

In general the possibility of weird stuff only makes me more excited to spend time thinking through the details of SETI, since there is some probability it would turn out to be even more important than it looks. I think there’s a reasonable chance that SETI would turn out to be quite bad, either because some other impact dominates or because it’s a mistake to take a cooperative stance towards aliens.

Why I’m not optimistic about finding anything

My impression is that if a distant civilization used 1/millionth of a galaxy’s power to broadcast a message with a similar spectrum to a star, we’d be able to see that message from land-based telescopes, so it probably would have been noticed already, either by early SETI efforts or incidentally (a crude version of this estimate is sketched after the list below). Moreover:

  • By focusing on a narrow part of the spectrum, the aliens could be louder.
  • By using energy other than starlight, they could be louder.
  • By transmitting less frequently, they could afford to be louder when they do transmit, which might make the signal easier for us to notice.
  • I expect a mature alien civilization would have more creative ways to send easy-to-spot signals.
  • If the argument in this post is right, and a weak signal wouldn’t be detectable, I expect a sophisticated civilization would be willing to spend more than 1/millionth of a galaxy’s power on transmission.
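
Here is the kind of crude flux estimate I have in mind (entirely my own back-of-envelope; the galaxy luminosity, distance, bandwidth, and sensitivity comparison are all loose assumptions that someone who knows the field should check):

```python
import math

# Crude inverse-square estimate (my own back-of-envelope; every constant
# below is a loose assumption, not a measured value).
GALAXY_POWER_W = 1e37        # very rough luminosity of a large galaxy
FRACTION = 1e-6              # share of that power spent on the broadcast
DISTANCE_M = 1e9 * 9.46e15   # 1 billion light years, in meters
BANDWIDTH_HZ = 1e3           # signal concentrated in a narrow band

power = GALAXY_POWER_W * FRACTION               # ~1e31 W transmitted
flux = power / (4 * math.pi * DISTANCE_M ** 2)  # W/m^2 arriving at Earth
flux_density_jy = flux / BANDWIDTH_HZ / 1e-26   # 1 Jansky = 1e-26 W/m^2/Hz

print(f"flux density at Earth: ~{flux_density_jy:.0f} Jy")
# -> roughly 1e3 Jy; radio surveys routinely catalog sources many orders of
#    magnitude fainter, so a narrowband signal like this would be hard to miss.
```

If anything like this is right, even this baseline broadcast lands far above routine survey sensitivities, and the bullets above only strengthen the conclusion.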

That said, there are lots of possible problems in this argument. I don’t know how carefully we’ve looked at each distant galaxy, I don’t know how obvious such signals would be if you weren’t looking in the right place, I don’t know what other obstructions there might be to transmission, I don’t totally trust the analysis of brightness and don’t really know anything about the area, etc.

So I’d really love to see an analysis by someone who is much more familiar with SETI: if there were a distant civilization transmitting with some unknown fraction of a galaxy’s power (somewhere between 1e-9 and 1e-3, say), how inevitable is it that we’d have seen them already?

Probably this exists somewhere on the internet but I couldn’t find it. If you know where to find it, or happen to have the relevant background knowledge and find the question interesting enough to figure it out, I’ll be in your debt.

