Takeoff speeds

Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)

(See also: AI Impacts page on the same topic.)

Slow takeoff

Slower takeoff means faster progress

Fast takeoff is often justified by pointing to the incredible transformative potential of intelligence; by enumerating the many ways in which AI systems will outperform humans; by pointing to historical examples of rapid change; etc.

This gives the impression that people who expect a slow takeoff think AI will have a smaller impact, or will take longer to transform society.

But I think that’s backwards. The main disagreement is not about what will happen once we have a superintelligent AI, it’s about what will happen before we have a superintelligent AI. So slow takeoff seems to mean that AI has a larger impact on the world, sooner.

[Figure: fast vs. slow takeoff]

In the fast takeoff scenario, weaker AI systems may have significant impacts but they are nothing compared to the “real” AGI. Whoever builds AGI has a decisive strategic advantage. Growth accelerates from 3%/year to 3000%/year without stopping at 30%/year. And so on.

In the slow takeoff scenario, pre-AGI systems have a transformative impact that’s only slightly smaller than AGI. AGI appears in a world where everything already happens incomprehensibly quickly and everyone is incredibly powerful. Being 12 months ahead in AGI might get you a decisive strategic advantage, but the world has accelerated so much that that’s just about as hard as getting to airplanes 30 years before anyone else.

Operationalizing slow takeoff

There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)

At some point there will be incredibly powerful AI systems. They will have many consequences, but one simple consequence is that world output will grow much more quickly. I think this is a good barometer for other transformative effects, including large military advantages.

I believe that before we have incredibly powerful AI, we will have AI which is merely very powerful. This won’t be enough to create 100% GDP growth, but it will be enough to lead to (say) 50% GDP growth. I think the likely gap between these events is years rather than months or decades.
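
To make the relationship between these growth rates and the doubling-time operationalization concrete, here is a minimal sketch of the arithmetic (the rates are just the ones mentioned in this post; nothing here is a forecast):

```python
import math

def doubling_time_years(annual_growth_rate):
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Growth rates referenced in this post: 3%, 30%, 50%, 100%, 3000% per year.
for rate in [0.03, 0.30, 0.50, 1.00, 30.0]:
    print(f"{rate:8.0%}/year -> doubling time ~ {doubling_time_years(rate):5.2f} years")
```

So 50% annual growth corresponds to roughly a 1.7-year doubling, and 100% growth to the 1-year doubling used as the marker above.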

In particular, this means that incredibly powerful AI will emerge in a world where crazy stuff is already happening (and probably everyone is already freaking out). If true, I think it’s an important fact about the strategic situation.

(Operationalizing takeoff speed in terms of economic doublings may seem weird, but I do think it gets at the disagreement: proponents of fast takeoff don’t seem to expect the 4 year doubling before takeoff, or at least their other beliefs about the future don’t seem to integrate that expectation.)

The basic argument

The prima facie argument for slow takeoff is pretty straightforward:

  • Before we have an incredibly intelligent AI, we will probably have a slightly worse AI.
    • Lots of people will be trying to build powerful AI.
    • For most X, it is easier to figure out how to do a slightly worse version of X than to figure out how to do X.
      • The worse version may be more expensive, slower, less reliable, less general… (Usually there is a tradeoff curve, and so you can pick which axes you want the worse version to be worse along.)
    • If many people are trying to do X, and a slightly worse version is easier and almost-as-good, someone will figure out how to do the worse version before anyone figures out how to do the better version.
    • This story seems consistent with the historical record. Things are usually preceded by worse versions, even in cases where there are weak reasons to expect a discontinuous jump.
      • The best counterexample is probably nuclear weapons. But in that case there were several very strong reasons for discontinuity: physics has an inherent gap between chemical and nuclear energy density, nuclear chain reactions require a large minimum scale, and the dynamics of war are very sensitive to energy density.
  • A slightly-worse-than-incredibly-intelligent AI would radically transform the world, leading to growth (almost) as fast and military capabilities (almost) as great as an incredibly intelligent AI.

This simple argument pushes towards slow takeoff. But there are several considerations that could push towards fast takeoff, which we need to weigh against the basic argument.

Obviously this is a quantitative question. In this post I’m not going to get into the numbers because the substance of the disagreement seems to be about qualitative models.

Reasons to expect fast takeoff

People have offered a variety of reasons to expect fast takeoff. I think that many of these arguments make sense, but I don’t think they support the kind of highly concentrated, discontinuous progress which fast takeoff proponents seem to typically have in mind.

I expect there are other arguments beyond these, or that I’ve misunderstood some of these, and look forward to people pointing out what I’m missing.

Humans vs. chimps

Summary of my response: chimps are nearly useless because they aren’t optimized to be useful, not because evolution was trying to make something useful and wasn’t able to succeed until it got to humans.

Chimpanzees have brains only ~3x smaller than humans, but are much worse at making technology (or doing science, or accumulating culture…). If evolution were selecting primarily or in large part for technological aptitude, then the difference between chimps and humans would suggest that tripling compute and doing a tiny bit of additional fine-tuning can radically expand power, undermining the continuous change story.

But chimp evolution is not primarily selecting for making and using technology, for doing science, or for facilitating cultural accumulation.  The task faced by a chimp is largely independent of the abilities that give humans such a huge fitness advantage. It’s not completely independent—the overlap is the only reason that evolution eventually produces humans—but it’s different enough that we should not be surprised if there are simple changes to chimps that would make them much better at designing technology or doing science or accumulating culture.

If we compare humans and chimps at the tasks chimps are optimized for, humans are clearly much better but the difference is not nearly as stark. Compare to the difference between chimps and gibbons, gibbons and lemurs, or lemurs and squirrels.

Relatedly, evolution changes what it is optimizing for over evolutionary time: as a creature and its environment change, the returns to different skills can change, and they can potentially change very quickly. So it seems easy for evolution to shift from “not caring about X” to “caring about X,” but nothing analogous will happen for AI projects. (In fact a similar thing often does happen while optimizing something with SGD, but it doesn’t happen at the level of the ML community as a whole.)

If we step back from skills and instead look at outcomes we could say: “Evolution is always optimizing for fitness, and humans have now taken over the world.” On this perspective, I’m making a claim about the limits of evolution. First, evolution is theoretically optimizing for fitness, but it isn’t able to look ahead and identify which skills will be most important for your children’s children’s children’s fitness. Second, human intelligence is incredibly good for the fitness of groups of humans, but evolution acts on individual humans for whom the effect size is much smaller (who barely benefit at all from passing knowledge on to the next generation). Evolution really is optimizing something quite different than “humanity dominates the world.”

So I don’t think the example of evolution tells us much about whether the continuous change story applies to intelligence. This case is potentially missing the key element that drives the continuous change story—optimization for performance. Evolution changes continuously on the narrow metric it is optimizing, but can change extremely rapidly on other metrics. For human technology, features of the technology that aren’t being optimized change rapidly all the time. When humans build AI, they will be optimizing for usefulness, and so progress in usefulness is much more likely to be continuous.

Put another way: the difference between chimps and humans stands in stark contrast to the normal pattern of human technological development. We might therefore infer that intelligence is very unlike other technologies. But the difference between evolution’s optimization and our optimization seems like a much more parsimonious explanation. To be a little bit more precise and Bayesian: the prior probability of the story I’ve told upper bounds the possible update about the nature of intelligence.

AGI will be a side-effect

Summary of my response: I expect people to see AGI coming and to invest heavily.

AI researchers might be optimizing for narrow forms of intelligence. If so we could have the same dynamic as with chimps—we see continuous progress on accomplishing narrow tasks in a narrow way, leading eventually to a jump in general capacities as a side-effect. These general capacities then also lead to much better progress on narrow tasks, but there is no reason for progress to be continuous because no one is optimizing for general intelligence.

I don’t buy this argument because I think that researchers probably will be optimizing aggressively for general intelligence, if it would help a lot on tasks they care about. If that’s right, this argument only implies a discontinuity if there is some other reason that the usefulness of general intelligence is discontinuous.

However, if researchers greatly underestimate the impact of general intelligence and so don’t optimize for it, I agree that a fast takeoff is plausible. It could turn out that “will researchers adequately account for the impact of general intelligence and so try to optimize it?” is a crux. My intuition is based on a combination of (weak) adequacy intuitions and current trends in ML research.

Finding the secret sauce

Summary of my response: this doesn’t seem common historically, and I don’t see why we’d expect AGI to be more rather than less like this (unless we accept one of the other arguments).

Another common view is that there are some number of key insights that are needed to build a generally intelligent system. When the final pieces fall into place we may then see a large jump; one day we have a system with enough raw horsepower to be very smart but critical limitations, and the next day it is able to use all of that horsepower.

I don’t know exactly how to respond to this view because I don’t feel like I understand it adequately.

I’m not aware of many historical examples of this phenomenon (and no really good examples)—to the extent that there have been “key insights” needed to make something important work, the first version of the insight has almost always either been discovered long before it was needed, or discovered in a preliminary and weak version which is then iteratively improved over a long time period.

To the extent that fast takeoff proponents’ views are informed by historical example, I would love to get some canonical examples that they think best exemplify this pattern so that we can have a more concrete discussion about those examples and what they suggest about AI.

Note that a really good example should be on a problem that many people care about. There are lots of examples where no one is thinking about X, someone uncovers an insight that helps a lot with X, and many years later that helps with another task Y that people do care about. That’s certainly interesting, but it’s not really surprising at all on the slow-change view unless it actually causes surprisingly fast progress on Y.

Looking forward to AGI, it seems to me like, if anything, we should assign a somewhat smaller probability than usual to a final “key insight” making a huge difference:

  • AGI was built by evolution, which is more likely if it can be built by iteratively improving simple ingredients.
  • It seems like we already have a set of insights that are sufficient for building an autopoietic AGI, so we won’t be starting from 0 in any case.
  • Historical AI applications have had a relatively small loading on key-insights and seem like the closest analogies to AGI.

The example of chimps or dumb humans seems like one of the best reasons to expect a key insight, but I’ve already discussed why I find that pretty unconvincing.

In this case I don’t yet feel like I understand where fast takeoff proponents are coming from, so I think it is especially likely that my view will change based on further discussion. But I would really like to see a clearer articulation of the fast takeoff view here as an early step of that process.

Universality thresholds

Summary of my response: it seems like early AI systems will cross universality thresholds pre-superintelligence, since (a) there are tradeoffs between universality and other desirable properties which would let people build universal AIs early if the returns to universality are large enough, (b) I think we can already build universal AIs at great expense.

Some cognitive processes get stuck or “run out of steam” if you run them indefinitely, while others are able to deliberate, improve themselves, design successor systems, and eventually reach arbitrarily high capability levels. An AI system may go from being weak to being very powerful as it crosses the threshold between these two regimes.

It’s clear that some humans are above this universality threshold, while chimps and young children are probably below it. And if you take a normal human and you inject a bunch of noise into their thought process (or degrade it) they will also fall below the threshold.

It’s easy to imagine a weak AI as some kind of handicapped human, with the handicap shrinking over time. Once the handicap goes to 0 we know that the AI will be above the universality threshold. Right now it’s below the universality threshold. So there must be sometime in between where it crosses the universality threshold, and that’s where the fast takeoff is predicted to occur.

But AI isn’t like a handicapped human. Instead, the designers of early AI systems will be trying to make them as useful as possible. So if universality is incredibly helpful, it will appear as early as possible in AI designs; designers will make tradeoffs to get universality at the expense of other desiderata (like cost or speed).

So now we’re almost back to the previous point: is there some secret sauce that gets you to universality, without which you can’t get universality however you try? I think this is unlikely for the reasons given in the previous section.

There is another reason I’m skeptical about hard takeoff from universality secret sauce: I think we already could make universal AIs if we tried (that would, given enough time, learn on their own and converge to arbitrarily high capability levels), and the reason we don’t is because it’s just not important to performance and the resulting systems would be really slow. This inside view argument is too complicated to make here and I don’t think my case rests on it, but it is relevant to understanding my view.

“Understanding” is discontinuous

Summary of my response: I don’t yet understand this argument and am unsure if there is anything here.

It may be that understanding of the world tends to click, from “not understanding much” to “understanding basically everything.”

You might expect this because everything is entangled with everything else. If you only understand 20% of the world, then basically every sentence on the internet is confusing, so you can’t make heads or tails of anything. This seems wrong to me for two reasons. First, information is really not that entangled even on the internet, and the (much larger) fraction of its knowledge that an AI generates for itself is going to be even less entangled. Second, it’s not right to model the AI as having a gradually expanding domain that it understands at all, with total incomprehension everywhere else. Unless there is some other argument for a discontinuity, a generalist AI’s understanding of each domain will just improve continuously, and taking the minimum across many domains doesn’t make things particularly discontinuous.

People might instead expect a click because that’s what they experience. That’s very unlike my experience, but maybe other people differ—it would be very interesting if this was a major part of where people were coming from. Or that may be how they perceive others’ thought processes as working. But when I look at others’ understanding, it seems like it is common to have a superficial or weak understanding which transitions gradually into a deep understanding.

Or they might expect a click because the same progress which lets you understand one area will let you understand many areas. But that doesn’t actually explain anything: you’d expect partial and mediocre understanding before a solid understanding.

Of course all the arguments in other sections (e.g. secret sauce, chimps vs. humans) can also be arguments about why understanding will be discontinuous. In the other sections I explain why I don’t find those arguments convincing.

Deployment lag

Summary of my response: current AI is slow to deploy and powerful AI will be fast to deploy, but in between there will be AI that takes an intermediate length of time to deploy.

When AI improves, it takes a while for the world to actually benefit from the improvement. For example, we need to adjust other processes to take advantage of the improvement and tailor the new AI system to the particular domains where it will be used. This seems to be an artifact of the inflexibility of current technology, and e.g. humans can adapt much more quickly to be useful in new settings.

Eventually, powerful AI will become useful in new situations even faster than people. So we may have a jump from narrow AI, that takes a long time to deploy, to general AI that is easily deployed.

I’ve heard this argument several times over the last few months, but don’t find the straightforward version convincing: without some other argument for discontinuity, I don’t see why “time to deploy” jumps from a large number to a small number. Instead, I’d expect deployment to become continuously easier as AI improves.

A slight variant that I think of as the “sonic boom” argument goes like this: suppose each month of AI research makes AI a little bit easier to deploy. Over time AI research gradually accelerates, and so the deployment time shrinks faster and faster. At some point, a month of AI research decreases deployment time by more than a month. At this point, “deploy AI the old-fashioned way” becomes an unappealing strategy: you will get to market faster by simply improving AI. So even if all of the dynamics are continuous, the quality of deployed AI would jump discontinuously.
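
To make the sonic boom dynamic concrete, here is a toy simulation; every number in it is made up purely for illustration, and the functional forms are my own assumptions rather than anything precise about AI:

```python
import numpy as np

# Version v of the AI finishes research at month t_dev(v); deploying it takes an
# additional lag(v) months, and the lag shrinks smoothly as AI technology improves.
versions = np.arange(0, 40)
t_dev = versions.astype(float)            # one new version finished per month
lag = 24 * np.exp(-versions / 10)         # deployment lag shrinks with each version
t_avail = t_dev + lag                     # calendar month at which version v is deployed

# Best deployed version available at each calendar month.
for month in range(0, 30):
    available = [v for v, ta in zip(versions, t_avail) if ta <= month]
    best = max(available) if available else None
    print(f"month {month:2d}: best deployed version = {best}")
```

Even though research progress and deployment lag both change smoothly here, the earliest versions are leapfrogged entirely: by the time they would have finished deploying, much later versions are already available, so the quality of deployed AI jumps.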

This phenomenon only occurs if it is very hard to make tradeoffs between deployment time and other features like cost or quality. If there is any way to tradeoff other qualities against deployment time, then people will more quickly push worse AI products into practice, because the benefits of doing so are large. I strongly expect it to be possible to make tradeoffs, because there are so many obvious-seeming ways to trade off deployment time vs. usefulness (most “deployment time” is really just spending time improving the usefulness of a system) and I haven’t seen stories about why that would stop.

Recursive self-improvement

Summary of my response: Before there is AI that is great at self-improvement there will be AI that is mediocre at self-improvement.

Powerful AI can be used to develop better AI (amongst other things). This will lead to runaway growth.

This on its own is not an argument for discontinuity: before we have AI that radically accelerates AI development, the slow takeoff argument suggests we will have AI that significantly accelerates AI development (and before that, slightly accelerates development). That is, an AI is just another, faster step in the hyperbolic growth we are currently experiencing, which corresponds to a further increase in rate but not a discontinuity (or even a discontinuity in rate).
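
As a toy illustration of “a further increase in rate but not a discontinuity,” consider output whose growth rate rises in proportion to the output itself (the functional form and constants here are arbitrary assumptions, used only to show the shape of hyperbolic growth):

```python
# Toy hyperbolic growth: dY/dt = Y**2 / k, so the instantaneous growth rate Y'/Y = Y/k
# rises smoothly as output rises. Each doubling takes about half as long as the last,
# with no jump anywhere. Constants and units are arbitrary.
k = 100.0
Y, t, dt = 1.0, 0.0, 0.001
next_threshold, last_doubling_at = 2.0, 0.0
while Y < 1000.0:
    Y += (Y * Y / k) * dt
    t += dt
    if Y >= next_threshold:
        print(f"t = {t:6.2f}: output reached {next_threshold:6.1f} "
              f"(this doubling took {t - last_doubling_at:5.2f})")
        last_doubling_at = t
        next_threshold *= 2.0
```

In this picture a 4-year doubling precedes a 1-year doubling as a matter of course: the rate keeps climbing, quickly near the end, but never discontinuously.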

The most common argument for recursive self-improvement introducing a new discontinuity seems to be: some systems “fizzle out” when they try to design a better AI, generating a few improvements before running out of steam, while others are able to autonomously generate more and more improvements. This is basically the same as the universality argument in a previous section.

Train vs. test

Summary of my response: before you can train a really powerful AI, someone else can train a slightly worse AI.

Over the course of training, ML systems typically go quite quickly from “really lame” to “really awesome”—over the timescale of days, not months or years.

But the training curve seems almost irrelevant to takeoff speeds. The question is: how much better is your AGI than the AGI that you were able to train 6 months ago?

If you are able to raise $X to train an AGI that could take over the world, then it was almost certainly worth it for someone 6 months ago to raise $X/2 to train an AGI that could merely radically transform the world, since they would then get 6 months of absurd profits. Likewise, if your AGI would give you a decisive strategic advantage, they could have spent less earlier in order to get a pretty large military advantage, which they could then use to take your stuff.

In order to actually get a discontinuity, it needs to be the case that either scaling up the training effort slightly, or waiting a little while longer for better AI technology, leads to a discontinuity in usefulness. So we’re back to the other arguments.

Discontinuities at 100% automation

Summary of my response: at the point where humans are completely removed from a process, they will have been modestly improving output rather than acting as a sharp bottleneck that is suddenly removed.

Consider a simple model in which machines are able to do a fraction p of the subtasks of some large task (like AGI design), with constantly increasing efficiency, and humans are needed to perform the remaining fraction (1-p). If humans are the dominant cost, and we hold fixed the number of humans as p increases, then total output grows like 1 / (1-p). As (1-p) approaches 0, productivity rapidly rises to the machine-only level. In the past I found this argument pretty compelling.
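
A minimal numerical sketch of that model (the fractions are arbitrary examples):

```python
# Naive automation model: machines do a fraction p of subtasks, a fixed pool of humans
# does the remaining (1 - p), and humans are the dominant cost, so total output scales
# like 1 / (1 - p) relative to the no-automation baseline.
for p in [0.0, 0.5, 0.9, 0.99, 0.999]:
    relative_output = 1.0 / (1.0 - p)
    print(f"automated fraction p = {p:5.3f} -> relative output = {relative_output:8.1f}x")
```

On this naive model output only blows up as p reaches exactly 1, which is what makes the 100%-automation point look special.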

Suppose that we removed the humans altogether from this process. On the naive model, productivity would jump from 0 (since machines can’t do the task) to some very large value. I find that pretty unlikely, and it’s precisely what we’ve discussed in the previous sections. It seems much more likely that at the first point when machines are able to do a task on their own, they are able to do it extremely poorly—and growth thereafter seems like it ought to accelerate gradually.

Adding humans to the picture only seems to make the change more gradual: at early times humans accelerate progress a lot, and as time goes on they provide less and less advantage (as machines replace them), so totally replacing humans seems to reduce acceleration.

Ultimately it seems like this comes down to whether you already expect discontinuous progress based on one of the other arguments, especially the secret sauce or universality threshold arguments. Phasing out humans seems to decrease, rather than increase, the abruptness of those changes.

This argument is still an important one, and it is true that if one of the other arguments generates a discontinuity then that discontinuity will probably be around the same time as 100% automation. But this argument is mostly relevant as a response to certain counterarguments about complementarity that I didn’t actually make in any of the other sections.

The weight of evidence

We’ve discussed a lot of possible arguments for fast takeoff. Superficially it would be reasonable to believe that no individual argument makes fast takeoff look likely, but that in the aggregate they are convincing.

However, I think each of these factors is perfectly consistent with the continuous change story and continuously accelerating hyperbolic growth, and so none of them undermine that hypothesis at all. This is not a case of a bunch of weak signs of fast takeoff providing independent evidence, or of a bunch of weak factors that can mechanically combine to create a large effect.

(The chimps vs. humans case is an exception—it does provide Bayesian evidence for fast takeoff that could be combined with other factors. But it’s just one.)

I could easily be wrong about any one of these lines of argument. So I do assign a much higher probability to fast takeoff than I would if there were fewer arguments (I put around 30% on fast takeoff). But if I change my mind, it will probably be because one of these arguments (or another argument not considered here) turns out to be compelling on its own. My impression is that other people in the safety community assign more like a 70% or even 90% chance to fast takeoff, which I assume is because they already find some of these arguments compelling.

Why does this matter?

Sometimes people suggest that we should focus on fast takeoff even if it is less likely. While I agree that slow takeoff improves our probability of survival overall, I don’t think either: (a) slow takeoff is so safe that it’s not important to think about, or (b) plans designed to cope with fast takeoff will also be fine if there is a slow takeoff.

Neither takeoff speed seems unambiguously easier-to-survive than the other:

  • If takeoff is slow: it will become quite obvious that AI is going to transform the world well before we kill ourselves, we will have some time to experiment with different approaches to safety, policy-makers will have time to understand and respond to AI, etc. But this process will take place over only a few years, and the world will be changing very quickly, so we could easily drop the ball unless we prepare in advance.
  • If takeoff is fast: whoever develops AGI first has a massive advantage over the rest of the world and hence great freedom in choosing what to do with their invention. If we imagine AGI being built in a world like today, it’s easy to imagine pivotal actions that are easier than the open-ended alignment problem. But in slow takeoff scenarios, other actors will already have nearly-as-good-AGI, and a group that tries to use AGI in a very restricted or handicapped way won’t be able to take any pivotal action. So we either need to coordinate to avoid deploying hard-to-control AGI, or we need to solve a hard version of AI alignment (e.g. with very good security / competitiveness / scalability).

These differences affect our priorities:

  • If takeoff is more likely to be slow:
    • We should have policy proposals and institutions in place which can take advantage of the ramp-up period, because coordination is more necessary and more feasible.
    • We can afford to iterate on alignment approaches, but we need to solve a relatively hard version of the alignment problem.
  • If takeoff is more likely to be fast:
    • We shouldn’t expect state involvement or large-scale coordination.
    • We’ll have less time at the last minute to iterate on alignment, but it might be OK if our solutions aren’t competitive or have limited scalability (they only have to scale far enough to take a pivotal action).

Beyond the immediate strategic implications, I often feel like I have a totally different world in mind than other people in the AI safety community. Given that my career is aimed at influencing the future of AI, significantly changing my beliefs about that future seems like a big win.


46 thoughts on “Takeoff speeds”

  1. I like this post. Various thoughts:

    1) The argument that we should focus on fast takeoffs because they’re more dangerous feels less compelling than the argument that we should focus on fast takeoffs because we have more leverage there. I don’t think this is open-and-closed, but the idea is that the more continuous things are, the more reasonable it is to expect the key work to be done at the time, with better vision of what the issues are. Some discussion at:
    https://www.fhi.ox.ac.uk/strategic-considerations-about-different-speeds-of-ai-takeoff/

    2) I think sometimes people want to use “slow takeoff” to mean that we will never get a doubling in a year. I am much more sympathetic to your version (indeed it has most of my probability mass) but feel uneasy about the appropriation of the term as it makes it harder to discuss the other position. I’m not sure what is ideal here. I’ve sometimes talked about “decentralised”, but that doesn’t make the claim in terms of timelines so maybe you want to treat it a little differently: https://docs.google.com/document/d/1lzQxEfKQEKF82qU3CZ2Jm__01RLgRiPL7VwWsJMGUAU/edit?usp=drivesdk

    3) One possible reason to expect centralised (but perhaps slow) intelligence explosions is if as it becomes clear that AI will be a big deal there’s consolidation into a small number of projects. If at a key time enough is concentrated in a single project, it could plausibly make several doublings-worth of progress internally in the time it takes the rest of the world to make one doubling (say a year).

    1. 1) I agree that technical safety work probably has a higher expected impact in fast takeoff world, since slow takeoff will tend to generate a bunch of extra interest in AI safety. That said, I think that slow takeoff will tend to involve less serial time, not more—all else equal, early AI will accelerate the rate of AI progress in slow takeoff world. With respect to strategy work or other interventions it’s a bit less clear whether we have more leverage in fast takeoff scenarios (though my default guess would be that we do). I would guess these factors are more like 3x than 30x, since complementarity across time means that the total impact of work today can’t be too much worse than the total, but that’s still a large factor relative to a 1:2 odds ratio.

      2) “Economic growth never accelerates substantially from its current pace” seems quite different from “slow takeoff,” seems like it should just get a different name. Also, to the extent it’s plausible it’s a claim about physical limitations for technologically mature civilization, not about AI.

      3) I agree that centralization is a plausible (though I think not very likely) response to safety concerns, but seems orthogonal to takeoff speed. It’s “we centralized to handle AI” rather than “the pace of AI progress caused centralization.”

      1. On Owen’s point #2:
        I find this operationalized definition of “slow takeoff” useful. Using “slow takeoff” to include the position “we will never get a doubling in a year” seems odd.

        Isn’t the “we will never get a doubling in a year” position equivalent to “no intelligence explosion will occur”?

  2. Interesting post!

    One point to mention about the “slow takeoff” is that, since all actors can see that there is something very important going on, we will tend more towards an arms-race scenario, where several teams are competing to be slightly ahead in AI development.

    If we assume that it takes more work to build a safer AI system than it takes to build a more unsafe system, then this arms-race seems like a very bad situation for AGI to be developed in.

    It is probably true, as you say, that in the “slow takeoff” case we will have more chances to experiment with different approaches to safety on various pre-AGI systems, and plausibly also more people and resources devoted to this effort than in a fast-takeoff scenario, but it is not entirely clear to me how similar pre-AGI safety requirements are to full-AGI safety requirements.

    To me it seems at least somewhat plausible that making a safe AGI system will be a qualitatively different problem than making a safe pre-AGI system.

  3. A few comments:

    (1) Not specific to your post: I would generally find the takeoff speed debate more productive if we started with the strategic implications — e.g., what type of alignment problem do we need to solve; how likely is a single project to obtain a substantial strategic advantage — and THEN asked how the rate/shape of progress would impact those things. For example, I can imagine continuous acceleration rates that would have some of the strategic implications of a hard takeoff.

    (2) Re the prospects for and necessity of large-scale coordination: (a) Prospects: I’m pretty uncertain whether large-scale coordination is harder or easier in an environment in which pre-AGI technology is radically transforming society. On the one hand, people know AGI will be a big deal, and they have a pretty good idea of what type of coordination will be necessary. On the other hand, chaos! (b) Necessity: I think large-scale coordination is quite useful even in a fast takeoff scenario; although I think it might be closer to necessary in a slow takeoff scenario.

    (3) Related to the above chaos comment, I’m worried about AI progressing at a rate that outstrips its various social counterweights. Right now, the world is in something like a strategic equilibrium (albeit a precarious one). That strategic equilibrium is a function not just of the relative AI capabilities of various companies and states, but also of a myriad of other counterbalancing forces, e.g., law, culture, non-AI weapons, non-AI means of making money, etc. So even if the relationship between different actors’ AI powers remains roughly constant through the advent of AGI, all of the counterbalances to AI might become less effective. That risks causing social instability, and the risk appears greater the faster AI progress is (no matter whether the rate is continuous, and no matter whether any one actor develops a relative AI advantage). My thoughts on this topic are fuzzy, and I’m not sure whether the question is confused, important, or tractable.

    1. (1) To me it seems worth factoring the question of “what will happen” from “what should we do given different beliefs.” I agree that it’s worth keeping our eye on what distinctions are strategically relevant. I picked the doubling time operationalization in part because it seemed to be something clean where (a) there is disagreement, and (b) it clearly has strategic implications (even if it’s unclear what they are).

      (2) I would guess that people caring about AGI is a bigger input (and that disruption itself is not even obviously net negative if you think that big changes are necessary). But it’s not totally clear, and I think the effect on the most feasible kinds of coordination might be more important than on the absolute level of feasibility.

      (3) I agree that AI can be destabilizing even if everyone’s capabilities remain roughly equal.

  4. Paul: how many lines of code do you think would be required to write an algorithmically-efficient, takeoff-capable AGI? Not necessarily how many lines will actually be used for the first such AI(s), but how many would be required in retrospect, once all the pieces are figured out. For instance, if for some reason there were still universities post-AGI, how many lines of code would go into a built-from-scratch AGI in an “intro to AGI” course project?

  5. It seems like there are two very different questions baked into the fast vs slow takeoff question:
    1) If an AI is better at AI research than the best human, will it recursively self improve to become vastly better than all humans at almost everything, within a short timespan (days? weeks?); or will the difficulty of improving intelligence increase faster than the intelligence of the AI, such that the path from human level intelligence to superintelligence will take decades, and require all of human civilization?
    2) Will the process of getting to such an AI be gradual, such that the potential recursive self-improvement is just a continuation of what came before; or will it be the result of a small number of insights, such that a possible intelligence explosion will come without warning?

    I’ve previously thought that the fast/slow takeoff discussion was about whether 1 is likely or not; but all the arguments in this post seem to be about the likelihood of 2. I guess the questions have been lumped together because both are relevant to whether superintelligence will come suddenly and without warning, or gradually. It seems very useful to be able to distinguish between the two questions, though, and it’s very counterintuitive to have ‘slow’ takeoff represent the world where AGI arrives faster. Hard/soft takeoff seems a bit better, but is associated with fast/slow. Maybe Owen’s decentralised/centralised could be good.

    In any case, this was good post! I hadn’t considered the fast-but-soft takeoff scenario before.

  6. Very interesting!
    I couldn’t understand why a lack of relevant past examples for humans “finding the secret sauce” suggests that there probably won’t be a secret sauce found that will suddenly enable AGI.

    Consider the following claim:
    If all the projects humans ever completed fulfill X, then the project of creating AGI probably also fulfills X.
    This claim is false for many assignments of X (e.g. X=”project destroys the world”). Why do you think it’s true for X=”no secret sauce dynamics”?

  7. I would be very curious to hear your scheme for making universal AI now. Is it different from just AIXI/Godel machines?

  8. As usual, I agree with most of what you say, and especially the contrarian aspects.

    What I find extremely implausible are scenarios in which humanity confronts high-level AI without the prior emergence of potentially-strategically-decisive AI — that is, AI capabilities that are potentially decisive when employed by some group of ingenious, well-resourced human actors.

    If we see something like “fast takeoff”, it is likely to occur in a world that is already far up the slope of a slow takeoff trajectory; if so, then many (though not all) of the key strategic considerations resemble those you’ve discussed in the context of slow-takeoff models.

    The continued popularity of scenarios that posit fast takeoff with weak precursors is, I think, the result of a failure to update on the actual trajectory of AI development, or a failure of imagination in considering how intermediate levels of AI technology could be exploited.

    BTW, “economic growth” (especially as measured by literal GDP) is at best a placeholder for our actual, implicit concern, which is the expansion of transformative economic and strategic AI capabilities. In an idealized, illustrative example, if a new production technology were to make half the goods and services in an economy free, GDP would be halved, not increased, and might in reality fall further owing to disruption. (See also: Baumol’s cost disease.)

    Even as a placeholder for our actual concerns, citing economic growth as a metric tends to focus attention on friction and automation-resistant tasks, rather than on prospects for rapid, disruptive deployment of broad but not comprehensive capabilities.

  9. In the interests of throwing ideas at you to chew on, I feel like there is a flaw in trying to bundle military / strategic advantages alongside tech / economic advantages.

    Because military and strategic advantages have a huge built in discontinuity.

    Conflict vs Non Conflict.

    That is to say, there is a clear line between using nukes and not using nukes, or total war and not total war, or small proxy war like Syria vs large scale war between multiple developed states heavily integrated into the economy.

    Let’s imagine that everyone is developing AIs and those AIs follow the slow takeoff model you propose. They gradually increase in power and so everybody has similar AIs.

    Militarily, we would expect these to be deployed in proxy wars and small conflicts and be very successful, just as they are being incrementally deployed in the economy and the world is changing fast. But you wouldn’t see mass scale war, because this is a strategic line you don’t want to cross unless you’re assured of victory. The existence of nuclear weapons and potential other weapons creates a stalemate between powerful actors.

    That means as a nation state, your AI development goal isn’t “Develop a slightly better war AI to get slightly better at war”, your AI development goal is “Make a total war winnable by neutralising the opponent’s ability to retaliate by performing mutually assured destruction.”

    So, everyone has similar AIs. And then one country gets an AI that can consistently disable enemy nukes, at which point that country wipes out everyone’s nukes, bombs their AI centres, and uses their own nuclear arsenal to hold the world hostage, whilst it can freely improve AI.

    Or it gets an AI that can shut down everyone’s power stations, or whatever. Anything that crosses a threshold into exposing a weakness large enough to justify a strike.

    In some ways, this is still a slow take-off scenario. The AI will still get incrementally better, BUT, due to the discontinuity in strategic advantage, we have lost all power to control this AI / actor in a very very short time.

  10. I think the part where you say that a less intelligent AI can still self-improve, just less well, is wrong.
    Suppose a skilled and experienced programmer writes a high quality piece of code. They then hand it to the intern. Can the intern improve the code at all? I would say no, or nearly no. If there was an improvement that the intern could spot, the skilled programmer would have already made it. Now the skilled programmer could have missed a semicolon, or made some other similar error so obvious even the intern could spot it, saving the skilled coder a little time in fixing it. I would say that the quality of the code is more of a maximum function than a sum. You can’t get good code out of a lot of idiots.

    However, let’s say the skilled programmer is coding an AI. If the AI is worse at coding than the programmer, it won’t be able to make any self-improvements. (Or basically none; it might be able to work out how to do the same thing slightly faster, the way an optimizing compiler does.) If we also assume that the computer is much faster than humans, then as soon as we get a system that can self-improve at all, it can take off fast.

    In other words, the programmer has not been deliberately stupid. Any improvement to the code that a skilled programmer would consider obvious has already been made in the programmer’s head before they typed it. Any block of code that started “if(False){” has never been typed. So the minimum intelligence needed to self-improve at all is the smarts to spot things that aren’t obvious to your coders.
