On SETI

Previously I’ve assumed that the search for extraterrestrial intelligence (SETI) is irrelevant to altruists. On further reflection I think that’s probably right, but it’s not as obvious as I’d assumed. In this post I’ll argue:

  • If SETI rules out 50% of the possible places aliens could be, you shouldn’t conclude that searching the other 50% is a waste of time—at least not if you use SIA or UDT with aggregative values. I’d previously accepted the conclusion that further search is a waste of time, based on a mistaken argument.
  • If we have a significant risk of extinction then the positive impact of receiving an alien message might be relatively large, perhaps as good as reducing extinction risk by one percentage point. If we have a substantially higher risk of AI misalignment than our neighbors, then the positive impact may be even larger.
    (Note: there is also a significant risk that noticing aliens is extremely bad, and without thinking more I believe the expected value is negative. Moreover, given that a group could run SETI unilaterally, I think that the topic is actually pretty fraught.)

I think the biggest open question is: if there is a large, very distant, civilization trying to send a message, how confident should we be that we would have already seen it? And if we haven’t seen it, how plausible is it that we’d notice with a further 10x increase in sensitivity?

Unfortunately I think it’s very likely we would have seen a large civilization if one was trying to contact us, even if it was very far away and had only invested a small fraction of its energy in signaling (thanks to Anders Sandberg for providing some background facts). If true, the prospects for SETI seem pretty bleak. I couldn’t easily reach a confident verdict on this but someone probably has.

Are we alone?

Suppose that SETI has searched 50% of the places that we might find an alien civilization. In the past I accepted the following argument without thinking about it too much:

  • A priori we had a roughly log uniform distribution over the density of mature civilizations.
  • After observing that the 50% of places we’ve searched are empty, the expected number of mature civilizations per Hubble volume is probably <4, and is roughly equally likely to be 2, 1, 1/2, 1/4, 1/8, 1/16…
  • Only in the first few buckets {2, 1, 1/2} should we expect to find a neighbor. But almost all of the mass is in the smaller buckets, since there are so many of them.

A similar argument goes through even if we’ve only checked 10% of the places that aliens could be. In fact, our expectation of finding anything should very quickly drop towards zero as we examine the local universe (or even note that Earth has not yet been colonized).

The problem with this argument is that our decisions have a lot of influence over universes where the density is ~1, but almost no influence over universes where the density is <0.001 (since those universes are doomed to be barren regardless of what we decide).

If we cut off the long tail of universes where civilization is very rare, in addition to the long tail where we would have already seen it, we end up with significant probability on the buckets {2, 1, 0.5}. Since our decisions only affect these worlds, we might as well behave as if we live in one of them, and hence that there is a significant probability of another civilization in the observable universe.

I’m generally skeptical about making astronomically large anthropic updates, but in this case the update is only something like ~100x, which doesn’t seem outlandish. Everything is still fishy, but after marginalizing out uncertainty and fishiness I provisionally think we should take seriously the prospect of having exactly one other civilization in our Hubble volume.

Note that the same argument doesn’t suggest an overwhelming prior presumption in favor of universes with very high densities of life—our decisions have roughly the same impact on those universes as in sparsely populated universes. The impact only begins to decrease once we drop below a density of 1 per Hubble volume, since at those values large parts of the universe end up barren.
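
To make this reweighting concrete, here is a toy numerical sketch. The specific parameterization (a log-uniform prior from 2^-40 up to 4 civilizations per Hubble volume, a Poisson model for the searched half, and an influence weight of min(n, 1)) is my own illustration rather than anything from the post, but it shows how weighting by influence moves most of the decision-relevant probability onto the {2, 1, 1/2} buckets.

```python
import math

# Toy numbers, not from the post: a log-uniform prior over the density n of
# mature civilizations per Hubble volume, from 2^-40 up to 4.
densities = [2.0 ** k for k in range(-40, 3)]
prior = [1.0 / len(densities)] * len(densities)

# Chance of having searched half the volume and seen nothing, treating the
# number of detectable civilizations in the searched half as Poisson(n / 2).
likelihood = [math.exp(-n / 2) for n in densities]

# Influence weight: below a density of ~1 per Hubble volume most of the
# universe is barren no matter what we decide, and above ~1 our impact stops
# growing, so weight each hypothesis by roughly min(n, 1).
influence = [min(n, 1.0) for n in densities]

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

naive = normalize([p * l for p, l in zip(prior, likelihood)])
decision_weighted = normalize([p * l * i for p, l, i in zip(prior, likelihood, influence)])

neighbor_buckets = {2.0, 1.0, 0.5}  # densities where we might still find a neighbor
naive_mass = sum(w for n, w in zip(densities, naive) if n in neighbor_buckets)
weighted_mass = sum(w for n, w in zip(densities, decision_weighted) if n in neighbor_buckets)

print(f"naive posterior mass on {{2, 1, 1/2}}:             {naive_mass:.3f}")
print(f"decision-weighted posterior mass on {{2, 1, 1/2}}: {weighted_mass:.3f}")
# How big the boost is depends on how far the tail of implausibly low
# densities extends; the wider the prior, the larger the update.
```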

Would contact matter?

Even if we saw aliens, would it matter to a utilitarian?

I think it would:

  • If space colonization proceeds significantly slower than the speed of light, then we can see a distant civilization long before colonists can reach Earth. In the extreme case, where we observe a civilization near the edge of our Hubble volume, they may never be able to reach us.
  • If we kill ourselves, then by the time colonists reach Earth a substantial part of the currently-accessible universe will have become inaccessible. I’d estimate that the reachable value falls at a rate of about 1/(5 billion) per year.
  • If we receive a message from sophisticated aliens, it would probably eliminate the risk of extinction and allow us to start expanding almost immediately, by transferring technologies or ideas that effectively determine the course of our civilization.
    • A sophisticated civilization could have run trillions of simulations of worlds like ours in order to estimate the effect of different possible messages. Such a message would be by far the most optimized thing we’ve ever encountered. Imagining it as coming from a civilization kind of like ours seems insane.
    • (I think it’s realistic for aliens to transmit an AI that, when run with a modest amount of computation, effectively takes over the world. I expect the outcome would be good for complicated reasons, but figuring out the sign is also an important problem.)

For example, if space colonization proceeds at 50% the speed of light, we observe aliens 6 billion light years away, and then we go extinct, the colonization wave won’t reach Earth for more than 3 billion years. In that time, I think about half of the universe we could potentially reach will have receded too far to reach. The aliens will be able to reach about half of that anyway (since they can go there directly, they don’t have to go via Earth), but the other half will be forever inaccessible to them.

If we have a 10% risk of extinction, then this translates into an expected loss of 2.5% of our future light cone. Receiving an alien message now would effectively eliminate this risk by transferring control from us to an alien civilization, so would be about as good as reducing extinction risk by 2.5 percentage points.

I think 50% of the speed of light would be a surprisingly slow rate of colonization (based on correspondence with Anders Sandberg and my background techno-optimism), but even at 90% of the speed of light we would have a 0.5 billion year wait, which would lose ~5% of the value of the future and make SETI equivalent to a 0.5 percentage point reduction in extinction risk. Overall I think that a 1 percentage point reduction in extinction risk is a not-crazy estimate for the value of successful SETI in a sparsely populated universe.
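
For concreteness, here is a minimal sketch of the arithmetic above, using the rough figures already given as inputs rather than any independent estimate: the expected loss is the extinction risk, times the fraction of the reachable universe that recedes while we wait for the colonization wave, times the fraction of that region the aliens couldn’t have reached directly anyway.

```python
def expected_loss(extinction_risk, fraction_receded, fraction_aliens_cannot_reach=0.5):
    """Expected fraction of our future light cone lost to extinction, relative to
    the counterfactual where the observed aliens eventually colonize our region.
    Inputs are the rough figures from the post, not independent estimates."""
    return extinction_risk * fraction_receded * fraction_aliens_cannot_reach

# Colonization at 50% of c: roughly half of the reachable universe recedes
# during the multi-billion-year wait for the colonization wave.
print(f"{expected_loss(extinction_risk=0.10, fraction_receded=0.5):.1%}")  # 2.5%

# Colonization at 90% of c: a ~0.5 billion year wait, at ~1/(5 billion) of the
# reachable value lost per year, means ~10% recedes.
print(f"{expected_loss(extinction_risk=0.10, fraction_receded=0.1):.1%}")  # 0.5%
```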

If we are likely to build misaligned AI, then an alien message could also be a “hail Mary:” if the aliens built a misaligned AI then the outcome is bad, but if they built a friendly AI then I think we should be happy with that AI taking over Earth (since from behind the veil of ignorance we might have been in their place). So if our situation looks worse than average with respect to AI alignment, SETI might have positive effects beyond effectively reducing extinction risk.

The preceding analysis takes a cooperative stance towards aliens. Whether that’s correct or not is a complicated question. It might be justified by either moral arguments (from behind the veil of ignorance we’re as likely to be them as us) or some weird thing with acausal trade (which I think is actually relatively likely).

However, it may be a significant moral mistake to “hand over” our part of the universe to randomly selected aliens. In that case, the damage from SETI could be nearly as large as extinction. Overall I think there is a significant chance of both positive and negative impacts, but the expected value is probably negative because the downside is so much larger, at least if you don’t preserve the option value of thinking about SETI more and doing it only if it looks like a good idea.

Other weird stuff

Predicting the outcome of contact is really complicated and I wouldn’t want to gamble my life (or our future) without thinking about it. There seems to be a significant risk that passive SETI leads directly to extinction; that risk isn’t as broadly appreciated as I’d like.

If there are other aliens in the observable universe, I have no idea what their actual motives are. I think the naive argument above captures the largest expected impact we can currently identify, but “something we haven’t thought of” probably has an even larger expected impact.

In general the possibility of weird stuff only makes me more excited to spend time thinking through details of SETI, since there is some probability it would turn out to be even more important than it looks. I think there’s a reasonable chance that SETI would turn out to be quite bad, either because some other impact dominates or because it’s a mistake to take a cooperative stance towards aliens.

Why I’m not optimistic about finding anything

My impression is that if a distant civilization used 1/millionth of a galaxy’s power to broadcast a message with a similar spectrum to a star, we’d be able to see that message from land-based telescopes, and so it probably would have been noticed by early SETI efforts or incidentally by other surveys. Moreover:

  • By focusing on a narrow part of the spectrum, the aliens could be louder.
  • By using energy other than starlight, they could be louder.
  • By transmitting less frequently they could be louder, which might make it easier for us to hear.
  • I expect a mature alien civilization would have more creative ways to send easy-to-spot signals.
  • If the argument in this post is right (that contact would be valuable), and a weak signal wouldn’t be detectable, I expect a sophisticated civilization would be willing to spend more than 1/millionth of a galaxy’s power on transmission.

That said, there are lots of possible problems in this argument. I don’t know how carefully we’ve looked at each distant galaxy, I don’t know how obvious such signals would be if you weren’t looking in the right place, I don’t know what other obstructions there might be to transmission, I don’t totally trust the analysis of brightness and don’t really know anything about the area, etc.

So I’d really love to see an analysis by someone who is much more familiar with SETI: if there were a distant civilization transmitting with some unknown fraction of a galaxy’s power (somewhere between 1e-9 and 1e-3, say), how inevitable is it that we’d have seen them already?

Probably this exists somewhere on the internet but I couldn’t find it. If you know where to find it, or happen to have the relevant background knowledge and find the question interesting enough to figure it out, I’ll be in your debt.
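
In the meantime, here is the crudest possible starting point for that analysis: an inverse-square estimate of how bright such a beacon would appear. Every number below is a placeholder assumption of mine (a ~10^37 W galaxy, a source one billion light years away, an isotropic broadband signal), and it ignores beaming, narrowband or pulsed transmission, cosmological expansion, and survey coverage, which are exactly the factors that could change the answer by many orders of magnitude.

```python
import math

# Placeholder assumptions (mine, not the post's):
GALAXY_LUMINOSITY_W = 1e37    # roughly a large galaxy's total power output
DISTANCE_LY = 1e9             # a source one billion light years away
METERS_PER_LY = 9.46e15
FLUX_AT_MAG_ZERO = 2.518e-8   # W/m^2 for apparent bolometric magnitude 0

def apparent_magnitude(fraction_of_galaxy_power):
    """Apparent bolometric magnitude of an isotropic broadband beacon that uses
    the given fraction of the galaxy's power, under the assumptions above."""
    distance_m = DISTANCE_LY * METERS_PER_LY
    flux = fraction_of_galaxy_power * GALAXY_LUMINOSITY_W / (4 * math.pi * distance_m ** 2)
    return -2.5 * math.log10(flux / FLUX_AT_MAG_ZERO)

for fraction in (1e-3, 1e-6, 1e-9):
    print(f"fraction {fraction:.0e} of the galaxy's power: apparent magnitude ~{apparent_magnitude(fraction):.0f}")
# Concentrating the same power into a narrow band, a tight beam, or rare bright
# pulses (as the bullets above suggest) could make the signal stand out far
# more than this broadband estimate implies.
```

This only says how bright the beacon would be; how thoroughly existing surveys would have noticed something at that brightness (or at the corresponding narrowband radio flux) is the part that needs someone familiar with the field.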


One thought on “On SETI”

  1. You say:
    “A sophisticated civilization could have run trillions of simulations of worlds like ours in order to estimate the effect of different possible messages. Such a message would be by far the most optimized thing we’ve ever encountered. Imagining it as coming from a civilization kind of like ours seems insane.”

    Maybe this is naive, but wouldn’t that mean that most of the simulations they run would not receive an optimized message, and in turn, neither would we? Would they re-run the simulations a gajillion times once they have an optimized message, to compensate? Would doing that actually add “measure”? If it does, wouldn’t they just intervene in the simulations as directly as possible?

    Semi-relatedly, could our decision theories be so underdeveloped that we’re not worth bothering with? (the whole “how often do you send optimized messages to ants” question, although presumably anyone who’s actively working on AGI thinks we’re over some critical threshold)

    In the absence of simulation working out, what’s the best message you can come up with that is likely to be unpacked by anything intelligent?

    I feel like some solution in the vein of “prime numbers” is a relic of affective death spirals around platonism. At best, that would look like some “ordered” bit of physical stuff, but why should that be interpreted as anything more interesting than logarithmic spirals in nature, or gravitational waves, or complex numbers in ontologies? In fact, given that there is so much order around, maybe an apparent violation of the laws of physics is much more attention-grabbing, if such a feat is manageable (and that is an easy assumption to make). I haven’t read it, but I think this is what Schrodinger suggested in What is Life? as a definition for life – just something weird.

    This seems too hard to me even for a very tiny pocket in the space of mind-designs. It doesn’t feel like there would be a natural attractor, and then, a natural protocol.

    And if they can talk to us precisely anyway, isn’t it more likely that they’d directly program us? I suppose you’re saying something similar with “AI transmission”. Unless some kind of “autonomy”, whatever that means, is seen as non-negotiable ethically (in which case it’s hard to say whether giving us hints/spoilers is any less invasive), directly modifying our behavior seems like the way to go. Leaving it at just contact seems like it would be plausible only if they wanted to keep us as trade partners because we can do something better than they can.
