When we think about improving the long term future of humanity, two ideas stand out: maximising the rate of economic growth and minimising the likelihood of catastrophes. Ostensibly, these are competing goals: do we move fast and break things, or do we move slow and fix things? But it would be a mistake to assume these goals are in tension, or that we must shift resources from one to the other; rather, progress and catastrophe-minimisation are natural complements.

Now we know from Leopold that:

> It is stagnation that is risky and it is growth that leads to safety.

So if you’re concerned about catastrophic risks, you should be receptive to accelerating growth. But I think the complementarity works in the other direction too! Why? Because economic growth is best seen, not as great booms in output, but as the absence of shrinkages and busts. That means minimising catastrophic risks is a *first-order concern* in achieving progress.

### Shrink theory of growth

To understand this “shrink theory of growth”, we need to head back to the 19th century. At the time, the Industrial Revolution had just kicked off, ushering in the defining economic transition of humankind.

That graph might make you think that these enormous improvements in economic output came from a faster rate of growth. But this is deceptive, because it hides the massive high-frequency oscillations in growth before the Industrial Revolution.

Consider the following decomposition of overall growth^{1}: the average growth rate \(g\) is the frequency of growing years \(f(+)\) times the growth rate in those years \(g(+)\), plus the frequency of shrinking years \(f(-)\) times the (negative) growth rate in those years \(g(-)\).

\[ g = f(+) g(+) + f(-) g(-) \]

What we find across countries is that places like the UK and the Netherlands did not overtake places like Italy and Spain because of greater growth during growing periods. In fact, their average growth rate in growing periods fell! What changed is that the UK and the Netherlands went from having contractions in half of all years to only a third of years after the Industrial Revolution. And in those contracting years, their GDP fell by less. Even in the post-war era, developed countries have continued to benefit from rarer and shallower shrinkages than their developing counterparts^{2}.
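To make the decomposition concrete, here is a quick sketch with made-up numbers (not the actual historical estimates): holding growth in good years roughly fixed, shrinking less often and less deeply is enough to quadruple the long-run average.

```python
# Hypothetical numbers to illustrate g = f(+)g(+) + f(-)g(-).
# Pre-industrial: growth in half the years, contraction in the other half.
pre = 0.5 * 0.04 + 0.5 * (-0.03)

# Post-industrial: a slightly *lower* growth rate in good years,
# but contractions in only a third of years, and shallower ones.
post = (2 / 3) * 0.035 + (1 / 3) * (-0.01)

print(f"pre-industrial average growth:  {pre:.3%}")   # 0.5% per year
print(f"post-industrial average growth: {post:.3%}")  # 2.0% per year
```

The average growth rate quadruples even though the growth rate in growing years went down.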

### O-rings in production

This asymmetric view of growth has its roots in the O-ring theory of economic development^{3}.

Suppose we have a production function that requires two workers indexed by 1 and 2. The production function \(Y = q_1 q_2\) depends on the quality of the workers, where quality can take a low or high level \(q_L, q_H \in [0,1]\) with \(q_L < q_H\). Thus production exhibits complementarity in worker quality, and quality is not substitutable: you can’t just replace one high-skilled worker with many low-skilled workers.

If we have two low-quality and two high-quality workers, we can pair them in two ways. Either we pair the high-quality workers with the low-quality workers, producing \(2 q_H q_L\). Or we put workers of the same quality together, giving us \(q_L^2 + q_H^2\), which is greater: the difference between the two is \((q_H - q_L)^2 \geq 0\).
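A quick numerical sketch of the two pairings, with illustrative quality levels:

```python
q_L, q_H = 0.5, 0.9  # illustrative low/high quality levels

mixed = 2 * q_H * q_L          # pair high-quality with low-quality
assorted = q_L**2 + q_H**2     # pair like with like

print(mixed, assorted)
# The gap is exactly (q_H - q_L)**2 >= 0,
# so assortative matching is always weakly better.
print(assorted - mixed, (q_H - q_L) ** 2)
```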

The idea is that for modern economic production, you need many activities each done well for the output to be valuable. That means to get these cooperative gains, everything needs to go right and nothing can go wrong. This fragility gives us the asymmetric pattern we see above.

While we always had expansions arising from better productivity and technology, these were held back by bad patches. The Industrial Revolution didn’t herald a sudden onslaught of new ideas; rather, it meant we were avoiding more of these mishaps.

### Jensen’s inequality

Once you start looking, this sort of asymmetry is everywhere.

In Jenga, it’s a lot easier to knock over a tower of blocks than to build one up.

In population ecology, the greater variability of a prey’s abundance will reduce the population of the predator relative to a stable average. This is because there’s an upper limit to the predator’s foraging rate, so it cannot take full advantage of the good times, while suffering in the bad times when prey are scarce.

In business cycle macroeconomics, this is the plucking model^{4}. Recessions aren’t bad in the same way economic booms are good. The two don’t just cancel out. Consider a Wicksellian triangle: Alice wants bananas from Bob, Bob wants cherries from Charlie and Charlie wants apples from Alice. The volume of economic exchange depends on the person who least wants to transact.

One negative shock is enough to slow this down, but you need a positive shock to affect all three in order to raise it up. So if we stabilised economic output along the trend, as opposed to letting it fluctuate up and down, the average level of output would be higher^{5}.
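Here is a toy simulation of this asymmetry, under the simplifying assumption that the volume of exchange is just the minimum of the three traders’ desires to transact:

```python
import random

random.seed(0)

def volume(desires):
    # Exchange only happens up to the least-willing trader.
    return min(desires)

# One negative shock to a single trader drags down total exchange...
assert volume([0.7, 1.0, 1.0]) == 0.7

# ...but a positive shock to a single trader changes nothing:
assert volume([1.3, 1.0, 1.0]) == 1.0

# Averaged over symmetric shocks, volume falls below the no-shock level:
draws = [volume([1 + random.uniform(-0.3, 0.3) for _ in range(3)])
         for _ in range(10_000)]
print(sum(draws) / len(draws))  # below 1.0: fluctuations lower the average
```

Even though the shocks are symmetric around zero, the min() makes the downside bite harder than the upside helps.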

In unemployment data, we see this cyclical asymmetry too. It’s a lot harder to search and match with a new employee than to fire them. So the rate and magnitude of job destruction induced by bad times is not matched by the job creation of good times^{6}.

I am loath to attribute these “Murphy’s law” phenomena to entropy. As von Neumann put it:

> Nobody knows what entropy really is.

So perhaps an alternative is to think about the world as subject to Jensen’s inequality. For some random variable \(X\) and concave function \(y=f(x)\), it tells us:

\[ E[f(X)] \leq f(E[X]) \]

Reducing the variance of \(X\), even as its mean stays the same, will raise the expected value of \(y\).
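A quick Monte Carlo check, taking \(f(x) = \sqrt{x}\) as the concave function and a uniform distribution whose spread we can vary while holding the mean fixed:

```python
import math
import random

random.seed(1)

f = math.sqrt  # a concave function
mean = 4.0

def expected_f(spread, n=100_000):
    # X uniform on [mean - spread, mean + spread]:
    # same mean for every spread, but different variance.
    return sum(f(random.uniform(mean - spread, mean + spread))
               for _ in range(n)) / n

wide, narrow = expected_f(3.0), expected_f(0.5)

# Jensen: E[f(X)] <= f(E[X]), and shrinking the variance closes the gap.
print(wide, narrow, f(mean))
```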

Why might this be?

The Fisher-Orr model from evolutionary biology gives us a neat answer: small changes are more likely to be beneficial than large changes, especially as the trait in question becomes more complex and multi-dimensional.

Suppose some trait \(x\) which we care about is defined as a point in the vector space \(R^n\). We assume without loss of generality that the optimal point is the zero vector \(\mathbf{0}\), so the trait is characterised by its distance \(||x||\) from the origin. Now suppose some random change \(\epsilon\) occurs. The new trait is \(x' = x + \epsilon\).

As it turns out, a change with a large magnitude \(||\epsilon||\) is more likely to be deleterious and take you further away from \(\mathbf{0}\) than one with a smaller magnitude. This is easiest to see in \(R^2\), but it is actually true in general.

And as our goal has more and more dimensions, a large change becomes worse and worse. This is because it becomes less and less likely that a random change moves along the correct dimensions in the correct way to improve outcomes.
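We can check this with a small Monte Carlo sketch: perturb a trait at unit distance from the optimum by a random step of fixed length, and estimate how often the step is beneficial as the step size and the dimension grow. (The setup and parameters here are illustrative, not taken from the original papers.)

```python
import math
import random

random.seed(2)

def p_beneficial(n, step, start_dist=1.0, trials=20_000):
    """Probability that a random step of length `step` in R^n moves a
    trait at distance `start_dist` closer to the optimum at the origin."""
    wins = 0
    for _ in range(trials):
        # Uniform random direction on the sphere via normalised Gaussians.
        v = [random.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(c * c for c in v))
        eps = [step * c / norm for c in v]
        # By rotational symmetry, start the trait at (start_dist, 0, ..., 0).
        new = [start_dist + eps[0]] + eps[1:]
        if math.sqrt(sum(c * c for c in new)) < start_dist:
            wins += 1
    return wins / trials

# Small steps succeed nearly half the time; large steps and
# high dimensions make beneficial changes increasingly rare.
print(p_beneficial(n=2, step=0.1), p_beneficial(n=2, step=1.0))
print(p_beneficial(n=20, step=0.1), p_beneficial(n=20, step=1.0))
```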

### Chesterton’s fence

How does this relate to long-term growth?

For far too long, economists have focused on the *deep magic* of growth: how proximate factors like capital accumulation and fundamental factors like institutions raise the average growth rate. But the *deeper magic (from before the dawn of time)* is about flattening the topography of growth: reducing the magnitude and frequency of negative growth episodes. So a full understanding of economic growth demands a focus on how we dampen growth reversals too^{7}!

And if we think many of these occur due to bad exogenous shocks described by a Poisson process, then aiming to reduce the likelihood of extinction and catastrophic risks necessarily reduces the frequency and magnitude of smaller setbacks. If we obsess about growth above all else without keeping this in mind, we will end up the most ambitious corpses in Potter’s field.

To be clear, our wariness of risk isn’t an argument for sitting around and twiddling our thumbs either. At the risk of restating Leopold’s argument that sluggish safetyism raises risk, I want to point out that the Fisher-Orr model doesn’t decry large changes. In fact, large beneficial mutations are crucial to the adaptation of real organisms^{8}. This is because small beneficial mutations tend to get swept away by genetic drift. So the Aschenbrenner argument for growth still applies.

What it does mean is that to fulfil our duty of underwriting the wellbeing of future *homo sapiens*, we need to balance the twin flames of innovation and institutions. If we want to keep reaping the rewards of growth, we need the institutional capacity necessary to cushion against disruption from the things we break. Progress and x-risk minimisation are two sides of the same coin, part of the same virtuous cycle. It’s our job to kick this off!