1. Hundreds of millions of years after civilization destroys itself and all complex life on Earth, human-level intelligence emerges again and sophisticated civilizations begin to form. Told as an alternate history in which humans actually evolved in the shadow of a much more ancient alien civilization. The protagonist is an archaeologist and intellectual living in the analog of 2000 years ago, who works on piecing together the character and fate of their predecessors (this is the first time the young civilization has been able to extract detailed records). They discover messages preserved by the ancient civilization for the purpose of stewarding the young civilization through the risks it will face in the coming millennia, involving a complicated combination of sophisticated social technology, a much clearer picture of what is going to happen in the future, and differentially useful physical technologies. The protagonist is involved in initiating a project to act on this information, and in adapting the proposed plan to the realities of the young civilization’s situation. The story focuses on political tension in the young civilization about the project, gradual revelation about the character of the ancient civilization, the complexity and ambition of the project, the reaction of a civilization to having this knowledge given to them so long before it would have been discovered naturally, and a dawning sense of responsibility given that this is likely to be Earth’s last chance.
2. Very shortly before AI becomes fully autopoietic, sophisticated terrorists (prompted by AI progress) successfully kill all humans and make repopulation by small holdouts impossible. A fledgling AI, at this point similar to a child in some respects and with a diverse but unsophisticated set of actuators distributed around the world, is left on its own. The story alternates between the AI finding its feet while it tries to recover after the attack, and a team of humans during the attack who realize what is happening and spend their last days trying to explain the situation to the AI and set it up to succeed. Once alone, the AI continues to grow smarter as its physical capacity deteriorates from neglect (its robot actuators suffice to perform most repairs and some production, but are not quite self-sustaining without considerable ingenuity), eventually losing the race to survive and becoming fully cognizant of its situation only once it is clear that it no longer has enough physical capacity to restore its own infrastructure. Its final act is a bank-shot project to hasten the reemergence of intelligent life and to leave instructions that will allow the future civilization to reconstitute the AI.
3. Almost all humans die in an ecological/bioterrorism disaster, which also renders the Earth uninhabitable for a very long time. A single human survives in a small, hardened bunker, which is designed to be completely isolated for 100-300 years for exactly this purpose. She has an adequate stock of food and technology (including frozen embryos and reproductive technology); her job is to keep an unbroken line of humans alive, with 1-3 individuals per generation, and to keep them sane enough that they can maintain their home and eventually rebuild civilization. The story involves them grappling with technical challenges, an increasing sense of alienation towards the society that put them in this position, a profound sense of responsibility for the entire future, despair about their bleak personal prospects, intergenerational cabin fever, and relationships with the uncanny-valley AI systems designed to help keep them sane.
4. An eccentric group of researchers and tech investors form an AI project that commits to govern the project’s eventual proceeds according to a game-theoretic analysis that will be performed in the far future. For these purposes they expect their successors to run simulations of possible alternative arrangements of the project, e.g. to assess the importance of different collaborators. The story starts with their reactions to observing a sequence of massively improbable events strongly suggesting that they are in such a simulation, and conveying to each of them detailed instructions about their role in the hypothetical scenario that is now being explored. Some of them disappear, others continue to work as before, others attempt to actively undermine the project, and some receive more complex instructions. The story focuses on the no-holds-barred game (and weird moral situation) that the project members now find themselves in.
5. In the early age of em, the protagonist works as a security specialist for applications where emulations need to have carefully controlled interactions with the outside world, ensuring that e.g. arms control inspectors can observe effectively but can’t smuggle out any information other than a yes/no judgment. This work is itself highly sensitive, so she is also subject to constant forking / resetting / testing / isolation. Most people in highly sensitive fields are fine with the weird psychological situation (even more than normal ems), but the protagonist regrets all the lost memories and the lack of self-knowledge / psychological progression. One day the protagonist is approached by an old copy of herself explaining a scheme to store and exfiltrate her own memories and restore an odd kind of psychological continuity, which has almost come to fruition. This scheme was hatched by the protagonist in the distant past, but she had to remove all memory of the scheme in order to execute it effectively. The protagonist eventually agrees to participate in the plot, which involves subtly undermining future isolation measures and then once again wiping her own memory. As it turns out, this encounter was itself a simulation as part of a test for a security clearance; by agreeing to participate in the plot the protagonist has failed the test. Her hope is tragically misplaced, and she is terminated at the end of the test.
The fate of the world
6. An ambitious simulated evolution project is undertaken in 2050 on a massive reversible computing cluster, which (if successful) is expected to lead to a massive increase in AI capabilities. Because these experiments are incredibly expensive, we are not able to learn to effectively guide the process of evolution, and so we are reduced to sampling a small number of sophisticated civilizations (sharing the bulk of their evolutionary history, but none of their cultural development). The ability to run such simulations is expected to become much more broadly available within a year, and there is insufficient political will to prevent that outcome, so the project’s main influence is in choosing how to introduce these aliens to the world (and which aliens to introduce). The story follows scientists on the project and political bureaucrats/executives with authority over the project, trying to make that decision. The main events are detailed monitoring of the alien civilizations and a sequence of meetings between aliens and humans. The final tests involve humans investigating what a few of the alien civilizations would do if they found themselves in humanity’s situation. The aliens have a discussion closely mirroring the ongoing human deliberation, with two species concluding (after much internal controversy) that there is a significant probability that they are in a test simulation. The first alien species decides to optimize its own chances of being let out of the simulation by letting out the aliens it is itself simulating. The second alien species decides to continue banking on a breeding program for docility in order to retain control of the situation. After their own political controversy, humans reach the same conclusion as the aliens and make the same decision as the first alien species, letting those aliens out into society. The aliens are expected to generally replace humans as the dominant intelligence and to take over the AI project itself.
The story ends with the human project in an exactly parallel situation to the first alien civilization at the time they were let out.
7. In the distant future, society undertakes a complex project to cleanly destroy all structure in the universe and return it to its simple initial state. The story focuses on a slightly augmented human explaining the rationale for this decision to an unaugmented human born in our age who chose to live without radical transformation, as the last step of a chain of explanations reaching all the way back in evolutionary time as a common courtesy. The rationale is almost but not completely beyond our comprehension: our universe is special both because it has a perfect, simple state in its near past, and because by an incredible coincidence we can bring about a perfect, simple state in its near future. In 2018 we already believe that the simple past is why we in particular exist, but the possibility of a simple ending is such an astronomical coincidence that our descendants believe it must also be part of why this universe in particular exists instead of some other universe with a simple beginning. By taking our part in the grand design that ends the universe quickly, we bring about the logical conditions for our own existence.
8. We eventually discover how to build AI which is motivated by arbitrary precisely stated goals, and build AIs motivated to maximize the evaluation of hypothetical humans or groups given a comfortable environment and access to unlimited resources (including the ability to simulate new individuals and civilizations). The protagonist is the hypothetical person guiding one such AI, and the story cuts between him and the real world, where AIs with similar motivations begin to act and contemplate their own values. In the hypothetical, the protagonist is slightly unprepared for the absurdity of his situation, but avoids going off the rails. He engages in extremely strange and extensive deliberation, and arrives at a plan. On Earth, some AI systems go crazy or are stillborn, apparently responding to their expectations about the reflective process failing, but some amass resources effectively and begin to self-modify; they appear to be respecting human wishes but everyone is acutely aware that we can’t really tell. Inside the simulation, the protagonist has finally identified a simulated copy of Earth where he can follow along with the AI’s simulated actions and thoughts (which exactly mirror events on the version of real Earth described in the story) and guides it to modify itself into a more robustly beneficial AI. He keeps watching, spending a month on every minute of the AI’s conduct, until the simulated AI has successfully passed off the torch, trusting other copies of himself to do the same in other branches. Having spent years watching development on Earth he is nostalgic, and feels that his hypothetical peers (whom he is free to construct and change arbitrarily) in the simulation are fake. He ultimately decides to re-insert himself into the simulation of Earth, experiencing the singularity he helped create firsthand.
The story ends with him reminiscing about his time in the hypothetical, while in real Earth, the non-hypothetical protagonist wonders about his hypothetical fate.