Sending a message to the future

If we humans manage to kill ourselves, we may not take all life on Earth with us. That leaves hope for another intelligent civilization to arise; if they do, we could potentially help them by leaving carefully chosen messages.

In this post I’ll argue that despite sounding kind of crazy, this could potentially compare favorably to more conventional extinction risk reduction.

Rough calculation

Here’s an incredibly speculative back-of-the-envelope calculation. The rest of the post fleshes out the various estimates in this calculation.

  1. If humanity drives itself extinct (without AI), I think there is a ~1/2 chance that another intelligent civilization evolves while the earth remains habitable.
  2. From an impartial and long-term perspective, I think that would lead to a future ~1/2 as good (in expectation) as if we ourselves had survived.
  3. I’d guess that this new civilization would face an expected existential risk of ~20% (including risk from AI).
  4. By leaving carefully constructed messages I could easily imagine reducing that risk by 5-10%, i.e. by 1 to 2 percentage points.
  5. I’d expect to achieve those benefits for a total cost of ~$10M.

That suggests that by spending ~$10M, we can reduce the badness of non-AI extinction events by ~1/300. If we only cared about x-risk, that would be roughly as much bang for the buck as eliminating all non-AI extinction risk for $3B. Comparing to e.g. Open Philanthropy grantmaking in biosecurity, that looks like it would probably be a good deal.
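To make the arithmetic explicit, here is a minimal sketch of the back-of-the-envelope calculation above in Python, using the point estimates from the numbered list (with the 5-10% reduction taken at its midpoint). It is just the same multiplication written out, not a model.

```python
# Point estimates from the numbered list above; the 5-10% reduction is taken at its midpoint.
p_reevolve = 0.5            # 1. chance another intelligent civilization evolves
relative_value = 0.5        # 2. value of their future relative to ours (in expectation)
successor_risk = 0.20       # 3. existential risk the new civilization faces
relative_reduction = 0.075  # 4. messages cut that risk by 5-10%; use 7.5%
cost = 10e6                 # 5. total cost of the program, in dollars

abs_risk_reduction = successor_risk * relative_reduction   # ~1.5 percentage points
badness_reduction = p_reevolve * relative_value * abs_risk_reduction
print(f"reduction in badness of non-AI extinction: ~1/{1 / badness_reduction:.0f}")
# -> about 1/270, i.e. the ~1/300 quoted above after rounding

equivalent_price = cost / badness_reduction
print(f"equivalent to eliminating all non-AI extinction risk for ~${equivalent_price / 1e9:.1f}B")
# -> about $2.7B, i.e. the ~$3B figure above
```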

If you just spent $10M on the program right now, I wouldn’t expect it to work out that well. But if we perform a low-cost initial investigation, we have the option value of spending the money only if it looks good. Given that, my guess is that initial investment in this area is actually more cost-effective than eliminating all extinction risk for $3B.

The intervention

What could we say?

Foresight. We could tell our successors what happened to our civilization, what problems and ideas ended up being important, and what we would have done differently. It’s not clear how similar our successors’ trajectory will be, but there probably are at least some lessons they can learn.

We could be especially helpful when discussing the likely causes of our own extinction (though we’d avoid disclosing risks that are themselves information hazards).

We could leave some opaque warnings about what to do or not to do, some small fraction of which might be heeded.

Differential science/technology progress. If we just hand our successors Wikipedia, we could accelerate their technological progress across the board, potentially quite a lot. But we might be able to do more good by being selective about what capabilities we give them.

In particular, a civilization’s ability to really mess itself up seems closely linked to its physical technology and infrastructure: their ability to make fast computers, big bombs, or novel bioweapons. Accelerating other kinds of intellectual progress relative to physical technology generally seems like a good idea in terms of long-run trajectory.

Some potentially useful kinds of differential progress:

  • Improving social science and institutional technology: economics, political science, game theory, evolution, cognitive science. Providing evidence about the institutional arrangements tried on Earth and the results.
  • Improving computer science, AI, mathematics, logic, formal philosophy.
  • Differential progress amongst physical technologies. I don’t know what physical technologies are better or worse, and I don’t know how reliably we could tell. But we could potentially have a significant effect on the trajectory of a civilization by giving them particular carefully chosen technologies.

Philosophy and values. Our collective values have changed a lot over the last 500 years. Much of this is due to technological or economic progress, but some part is due to information and arguments. We can communicate enlightenment ideals, a modern understanding of epistemology and scientific method, our best moral philosophy, etc. We can send the most convincing writings of our time. A lot of this content is clearly idiosyncratic and won’t be convincing to our successors, but some part of it might be.

Futurism and consequentialism. Today there are many people who direct their energy based on careful arguments about where the world is going and how to do the most good. Understanding relevant moral arguments, as well as futurist topics like x-risk and the trajectory change argument, the simulation argument, coordination across civilizations, the nature of AI risk, etc., might increase the number of forward-looking consequentialists amongst our successors and move them forward on the “figuring things out” timeline. This can be combined with foresight by explaining what interventions we think would have worked well in our past and why we now believe that.

Signs and portents. Information in any of these categories might be taken more seriously if we also included some cool technology to prove that we are sophisticated and to ensure that the message ends up seeming important to the recipients. If our society had discovered Maxwell’s equations on a giant stone tablet, I think many people would have very seriously considered any other views discussed on that tablet.

How much would that help?

I feel like we could have a big effect on our successors’ overall trajectory. It’s harder to know whether we could have a reliably positive effect.

The impact is particularly hard to estimate because I haven’t thought of most of the cleverest things to send.

One way to think about the impact is to ask how much humanity might have benefited if in 1718 we had received a message from EAs in 2018. This will significantly overestimate the real effect, since the actual recipients would be so much more alien, but we can try to adjust for that.

Some plausible positive consequences:

  • Some researchers would likely have worked on AI alignment over a much longer period of time. We’d be much more likely to have a deep understanding of the area by the time we are building AI, in the sense that we currently have a deep understanding of many questions in theoretical computer science that won’t matter for a very long time if ever.
  • We would likely have invested more, and sooner, in extinction risk reduction. We’d have been more likely to coordinate to avoid tail risks from MAD if the risks had been credibly described well in advance, and perhaps better prepared for pandemics.
  • Something like EA / forward-looking consequentialism would likely have existed earlier and had better ideas.
  • We’d have modestly better institutions and policy. Governments, research institutions, philanthropists, etc. in the new world could reflect lessons from our history.
  • Scholars in 2018 might be able to make relatively specific recommendations about how to form new governments or how to negotiate international relations in a way that would help with long-term peace. (I think this would have had a modest effect.)
  • We might have a better policy response on AI if the issue had been credibly raised to attention centuries earlier as one of a short list of long-term consequentialist priorities.
  • Possibly greater credibility for recommendations in the message like “try not to kill yourselves this time,” given that it is accompanied by great technological expertise and was sent by some people who really did kill themselves.

Taking all of this together, and with some optimism about other clever things that we might think of, I feel like a 10-20% reduction in the total risk of extinction is reasonable.

I’m inclined to shrink the impact by ~2x if we are communicating with an alien civilization instead of our own, so a 5-10% reduction in total extinction risk.

Could we send a message?

All of the details in this section and the next are a bit silly; they involve several areas way outside of my expertise. Overall I’m pretty optimistic that it’s possible to send durable messages cheaply, and I’m being a tiny bit concrete to make that claim more credible. It would still be great to get comments about where this proposal is unrealistic or where there are much better alternatives; I suspect that you could do radically better than what I describe.

Over >100M years, I think that most contemporary written records would be destroyed. However, I suspect it is possible to preserve reasonable amounts of information over 500M years with modest cost.

As a crude lower bound, the fossil record preserves large amounts of information over much more than 500M years. Many naturally occurring fossils preserve fine detail. From looking at photos on Wikipedia, some fossils seem to have enough detail for >1 word / 5 mm^2, which would let you fit a normal encyclopedia’s worth of words (~20M words) on 100 square meters.
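As a quick check on that figure, here is the arithmetic written out (the 5 mm^2-per-word density is just the rough guess from the fossil photos):

```python
# Rough density check for the fossil comparison above.
area_per_word_mm2 = 5                # guessed from fossil photos: >1 word per 5 mm^2
slab_area_mm2 = 100 * 1_000_000      # 100 m^2 expressed in mm^2
words = slab_area_mm2 / area_per_word_mm2
print(f"~{words / 1e6:.0f}M words")  # -> ~20M words, about one encyclopedia
```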

I bet that we could reliably preserve much more information even if we just carved things in stone and set up optimal conditions for preservation. We could do even better by getting more creative. But even carving tiny words on 1000 square meters of stone doesn’t sound that bad.

If we are sending many millions of words and making an effort to be understood, I would be surprised if language or cultural barriers are a dealbreaker.

Preserving messages with high information content may be somewhat expensive, so we don’t want to be too redundant. So in addition to the big messages with real content, we could distribute many small messages indicating the locations of the others, such that if you found one you could mechanically find all of them. The smaller messages would be cheap enough to scatter widely, making them easier to find, and we aren’t as sad if individual copies get lost.
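As a toy illustration of this scheme, here is a sketch of what a small “pointer” record might contain: every copy lists the coordinates of all the full archives, so recovering any single copy is enough to mechanically locate the rest. The record format and coordinates are made up for illustration.

```python
# Toy illustration of the redundancy scheme: a few expensive "content" archives plus
# many cheap, identical pointer records that each list the location of every archive.
# All coordinates and the record format below are made up for illustration.

CONTENT_SITES = [            # (latitude, longitude, burial depth in meters)
    (35.0, -112.0, 30),
    (-23.5, 133.9, 30),
    (61.2, 73.4, 30),
]

def pointer_record() -> str:
    """Text of one small pointer message; finding any single copy is enough
    to mechanically locate all of the full archives."""
    lines = ["FULL ARCHIVES ARE BURIED AT:"]
    for lat, lon, depth in CONTENT_SITES:
        lines.append(f"  lat {lat:+.1f}, lon {lon:+.1f}, depth {depth} m")
    return "\n".join(lines)

# Scatter many cheap copies; losing most of them doesn't matter.
copies = [pointer_record() for _ in range(10_000)]
print(copies[0])
```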

Finding buried messages seems quite difficult, so we’d ideally place large obvious markers to tell people where to dig. I don’t know how to most easily make an obviously unnatural marker that will last 600M years. In many places, I’d guess that you can just take a really big rock that weathers slowly (lots of quartz?) and has an unusual appearance.

This information would only be available once people located and dug up the messages, and were able and inclined to understand them. If European civilization had been sent such messages, I would expect this to have happened sometime between 1000 and 1800, depending on the details and chance. That leaves a long time between uncovering the messages and the developments they are most relevant to.

One of the easiest ways for messages to be destroyed would be for someone to dig them up and then have them degrade or get lost. We could potentially avoid this by making the messages durable enough to survive at least a few centuries after being dug up (unless someone actively wanted to destroy them), or by ensuring enough redundancy and potentially burying some records in hard-to-reach places (e.g. just putting them deeper).

Costs

I don’t think it would be plausible to run a good version of the overall project on a budget significantly less than $10M, given the need for research and basic organizational work. So I’m happy to ignore anything that costs far less than $10M as small relative to the other costs.

I think that buying land distributed around the world, digging deep enough holes, and creating long-lived obvious markers is probably cheap. The digging cost is negligible; a site definitely needs < 1 acre and can be in the middle of nowhere, so you could get 100 sites for well under $1M.

I haven’t thought about how you’d spread large numbers of cheap messages to these locations. I think that the main expense would be visibly marking their locations. I could easily imagine a cost of $10,000 or more for a clearly visible marker that will probably last 600M years, in which case you couldn’t afford that many of them.

A big question in costs is how much total information you need to transmit. That really depends on what we’d say, and I don’t really know. Intuitively, I feel like distributing a few encyclopedias worth of content (~100M words) around the world should be enough for almost any reasonable proposal, as long as a reasonably large fraction ends up getting preserved.

My greatest uncertainty is about the cost of preserving large amounts of information. The cost of the materials seems negligible. I’d guess that typesetting is the main cost, and that preservation is pretty cheap (e.g. you just pour something over the message or treat it in some other way, then bury it). Typesetting can probably involve working with something easy, then making a mold out of some material that will keep well (or making an implement that you use to efficiently etch something that will keep well, or something). I’d be quite shocked if there wasn’t some way to get the cost down to < $0.01/character = $5M for 100M words.
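Pulling the guesses in this section together into one rough roll-up (every per-unit number below is an assumption taken from, or consistent with, the text above, and the $3M research/organization line is my own placeholder):

```python
# Rough cost roll-up; every per-unit number is a guess from (or consistent with) the text above.
words = 100_000_000               # a few encyclopedias' worth of content
chars = words * 5                 # assume ~5 characters per word
typesetting = chars * 0.01        # < $0.01 per character -> ~$5M

sites = 100                       # burial sites around the world
land_and_digging = sites * 5_000  # < 1 acre of remote land plus digging, per site (guess)
markers = sites * 10_000          # a visible 600M-year marker at ~$10,000 each

research_and_org = 3_000_000      # message research and organizational work (placeholder)

total = typesetting + land_and_digging + markers + research_and_org
print(f"typesetting ${typesetting / 1e6:.1f}M, sites ${(land_and_digging + markers) / 1e6:.1f}M,"
      f" research/org ${research_and_org / 1e6:.1f}M, total ${total / 1e6:.1f}M")
# -> on the order of the ~$10M figure used in the calculation above
```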

Background calculations

Probability of life re-evolving

I expect the Earth to remain habitable for at least another 600M years or so (Wikipedia).

600M years is a bit more than the amount of time since the Cambrian explosion, and I think it’s roughly the time of the last common ancestor of humans and cephalopods. Based on the timing of subsequent developments and independent evolution (e.g. cephalopods vs. vertebrates and birds vs. primates), I think it’s unlikely that the intervening time includes evolutionary or cultural “hard steps” that only happened by luck. (H/T Tegan McCaslin.)

So my best guess is that with 600M years we would be able to roughly recapitulate this part of humanity’s history. That suggests intelligent civilization would probably appear again unless we destroy essentially all complex multicellular life, including the tiny worms (or destroy enough of the ecosystem that complex life can’t survive).

It seems to me that most plausible extinction events wouldn’t be nearly that bad. For example, I would expect some small vertebrates to survive most biorisk, nuclear war, or climate disasters (even conditioned on killing all large life). And “small vertebrates” is a long way from tiny worms.

(If there is a significant risk that we destroy all sexually reproducing life, it might be interesting to try designing very long-lived “arks” that could introduce simple life after a delay of thousands of years. If we could effectively recolonize Earth in this way the value could be very high, and it seems to me like it might be possible.)

I come away with something like a 50% chance of intelligent life re-evolving.

How happy are we with the next civilization?

I’m inclined to be altruistic towards other civilizations that evolve naturally, for golden-rule-style reasons similar to my sympathy for aliens or certain kinds of AI: from behind the veil of ignorance, I don’t feel much more likely to be Earth’s “first attempt” than Earth’s “second attempt,” so would want the first attempt to behave altruistically towards the second attempt.

This issue has been popping up for me a lot recently, so it seems worth thinking about more seriously, and is near the top of my list of important moral philosophy questions.

Another issue is that 600M years of delay may make space colonization less valuable. I think the first-order effect is probably the expansion of the universe, which pushes some distant galaxies out of our reach. I’d put that at <20% reduction in value over 600M years. I expect the real delay would be smaller than that, and so this seems less important than the moral uncertainty question.

Overall, I’d estimate ~50% as good.

How much risk does that civilization face?

I think that AI x-risk alone is around 10-20%. I don’t have a strong view about whether our descendants would be better or worse prepared for AI risk by default.

If we condition on us killing ourselves with technology X, I think that our successors would face a significant risk of also killing themselves with X. And there is a reasonable chance that our successors would be Earth’s last shot (if the next civilization takes ~300M years to appear, only ~300M years would remain after that, a lot less than 600M years, so we might not have time for two more cycles).

I think there are other trajectory risks beyond AI and extinction, both unknown unknowns and more subtle forms of value drift (though that’s more complicated to discuss in the context of aliens).

Because I think our absolute level of risk is much closer to 0% than 100%, I think that greater uncertainty about our successors tends to increase total x-risk.

Overall, I feel comfortable with 20% as an estimate of risk. Making this estimate higher would increase our overall estimates for the value of messages to the future.

GCR vs. extinction risk

The analysis above applies only to reducing extinction risk. Most potential extinction risks carry a much larger risk of global catastrophe, and it’s been argued that global catastrophe is bad enough, even from a purely future-focused perspective, that we should not neglect the catastrophic risk. If true, that would make messages to the future less attractive relative to catastrophic risk reduction.

I personally think that catastrophic risks mostly cause harm by increasing the probability of eventual extinction, for example by exposing us to a longer period with access to catastrophe-inducing technologies. There are other plausible mechanisms by which GCRs could pose an x-risk, raised by Nick in this article, but I don’t find them convincing:

  • Civilization might stagnate rather than either recovering or going extinct. I think this is possible but quite unlikely. If civilization had a significant probability of stagnating, I would expect to see some reasonably long historical periods of stagnant or slowing growth. But there don’t seem to be any long stagnant periods; the closest thing is a period of 3-4 economic doublings with relatively steady growth between the agricultural and industrial revolutions. I do not see how to reconcile the historical record with any model where civilization has a significant probability of stagnating over millions of years. (I actually assume this point as part of the argument “given the speed of development, there are probably no hard steps between 600MYA and now” above. If this weren’t the case, then we could potentially have a larger impact by leaving messages designed to un-stagnate our successors.)
  • We may currently have an unusually good civilization given our level of technological maturity, e.g. we may have an unusual emphasis on scientific method or on liberal democratic values. Overall, I don’t think there is much evidence that our civilization is unusually good such that we should expect regression to the mean to go in the bad direction rather than the good direction.

Overall I feel comfortable approximately equating the total cost of non-AI catastrophic risks to the total risk of extinction from non-AI sources (including extinction risks that are indirectly caused by non-extinction catastrophes). This makes “message to the future” look better compared to conventional catastrophic risk reduction.

On an alternative perspective, where the main purpose of biorisk prevention is to improve our civilization’s ability to cope with AI risk or to avoid value drift, a message to the future would look less attractive relative to biorisk.

In scenarios where civilization is scarred by a GCR and only goes extinct later, future people might be in a better position to leave records. However, it seems quite likely that they would face other severe problems and have much less spare capacity than we do for this kind of wacky stuff, so I don’t think our effort would be wasted in that scenario.

Conclusion

This rough BOTEC suggests $10M for reducing non-AI extinction risk by 1/300. That estimate could easily be improved (and the $10M price point is pretty made up), but it sounds kind of promising. My current guess is a ~10% chance that a more careful analysis would conclude that a project like this is worth doing with ~$10M of Open Philanthropy’s last dollars.

Taking a real stab at answering “what message to send?” may raise the real price above $10M, if it requires lots of time from people with high hourly opportunity costs. Moreover, without someone competent who is excited about the project, I don’t think you could do any reasonable version of it for a reasonable amount of money. So overall I think the probability that this gets done is well under 10%, but I’m glad we live in a weird enough world that this is at least worth thinking about.


7 thoughts on “Sending a message to the future”

  1. “We may currently have an unusually good civilization given our level of technological maturity, e.g. we may have an unusual emphasis on scientific method or on liberal democratic values.”

    Civilizations coming back with less in the way of fossil fuels could be poor for a given level of technological maturity, and closer to a Malthusian world at the time of key developments. That could be offset by pre-mined metals, pre-domesticated crops, and especially preserved knowledge. Currently liberal democratic values are associated with less Malthusian conditions, although the causality is unclear (Hanson argues for per capita wealth->liberal democratic values).

    1. By the same token, they would be more technologically sophisticated at a given level of total output—e.g. if AI took X FLOPs to train, they’d have more sophisticated technology at the time they built that AI. It’s not super clear to me which way this goes.

  2. We could also help alien civilizations by sending them a message. If we fear that such a message might reveal us to dangerous aliens, we could create long-lived satellites with dead-man switches that would send messages after our civilization falls. The messages could contain the news events right before our fall to give a clue as to what got us.
    https://www.deepdyve.com/lp/elsevier/the-fermi-paradox-bayes-rule-and-existential-risk-management-egXAbHGMbq

  3. Hi Paul, if you’re a reader I recommend the Three-Body Problem series by Liu Cixin; a lot of what you talk about is raised throughout the series, especially towards the end, and it’s a great read!
