Tis Better to Have Grown and Inflated (Than Never to Have Grown At All)

A scratchpad estimate of central bank loss functions.

Trevor Chow

By some accounts, inflation is “the largest risk in the near term US macro outlook”, and these sentiments have been echoed across the pond. As the former Chief Economist of Threadneedle Street wrote in a recent New Statesman op-ed,

The inflation tiger is never dead … [and] an ounce of inflation prevention is worth a pound of cure.

These comments reflect the important premise behind why independent central banks exist in much of the developed world: to keep inflation low and stable. But most central banks don’t just look at inflation. In the USA, those in the Eccles Building are given a Congressional mandate under the Federal Reserve Act. According to the Statement on Longer-Run Goals and Monetary Policy Strategy, their dual mandate of maximum employment and stable prices is interpreted as follows:

The maximum level of employment is a broad-based and inclusive goal … [Inflation is] measured by the annual change in the price index for personal consumption expenditures … the Committee seeks to achieve inflation that averages 2 percent over time.

So it isn’t as simple as just putting a lid on inflation. Rather, an ounce of inflation prevention may lead to a pound of suffering in the form of below-potential employment, if the Fed is too cautious. One way of thinking about these costs is by considering the extent to which the Fed undershot its targets in the post-2008 pre-covid era.

Measuring Loss

A common approach to evaluating monetary policy is with a quadratic loss function. The loss function L_t below is a simplified version of Mike Woodford’s contribution in the 2010 Handbook of Monetary Economics. It treats deviations of inflation \pi_t above and below the inflation target \pi_t^\ast as symmetric in their harms, and likewise for deviations of log output y_t away from log potential output y_t^\ast. The coefficient \lambda represents how much the central bank prioritises output deviations relative to inflation deviations.

L_t = (\pi_t - \pi_t^\ast)^2 + \lambda (y_t - y_t^\ast)^2 \; \text{where} \; y_t = \ln(Y_t)
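As a quick illustration, the loss function is easy to write down directly. The numbers below are purely hypothetical, chosen only to show its shape:

```r
# Quadratic central bank loss: squared inflation deviation plus lambda times
# the squared (log) output gap. All inputs here are hypothetical examples.
loss <- function(pi, pi_star, y, y_star, lambda) {
  (pi - pi_star)^2 + lambda * (y - y_star)^2
}

# 3% inflation against a 2% target, output 1% below potential, lambda = 0.05
loss(pi = 0.03, pi_star = 0.02, y = log(99), y_star = log(100), lambda = 0.05)
```

Because the deviations enter as squares, a one-point overshoot of the inflation target is penalised exactly as much as a one-point undershoot.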

Minimising the loss function subject to the New Keynesian Phillips Curve produces an optimal monetary policy rule: substitute the Phillips Curve into the loss function, take the first-order condition with respect to y_t, and rearrange.

\pi_t = \beta E_t(\pi_{t+1}) + \kappa (y_t - y_t^\ast) + \epsilon_t

L_t = (\beta E_t(\pi_{t+1}) + \kappa (y_t - y_t^\ast) + \epsilon_t - \pi_t^\ast)^2 + \lambda (y_t - y_t^\ast)^2

\frac{dL_t}{dy_t} = 2\kappa(\beta E_t(\pi_{t+1}) + \kappa (y_t - y_t^\ast) + \epsilon_t - \pi_t^\ast) + 2\lambda(y_t - y_t^\ast) = 0

\pi_t - \pi_t^\ast = - \frac{\lambda}{\kappa} (y_t - y_t^\ast)

By taking the Great Moderation as an exemplar of optimal policy, we can calibrate the coefficient \lambda by regressing the inflation deviation on the output gap, assuming \kappa is 0.0062 (in line with recent estimates of the slope of the US Phillips Curve). This tells us what the Fed’s implicit loss function is.
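Concretely, the targeting rule can be put directly into regression form: defining the regressor x_t as the output gap scaled by -1/\kappa, the OLS slope on x_t is an estimate of \lambda.

\pi_t - \pi_t^\ast = \lambda x_t \; \text{where} \; x_t = -\frac{1}{\kappa}(y_t - y_t^\ast)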

The Business Cycle Dating Committee of the National Bureau of Economic Research tells us that the last recession before 2008 which persisted for over a year ended in November 1982. It dates the Great Recession as starting in December 2007 and ending in June 2009. So we can take 1983 to 2007 as the Great Moderation. The data on PCE inflation comes directly from FRED, while the Fed’s estimates of the output gap come from the “Greenbook” datasets provided by the Philadelphia Fed.

From this, we find that \lambda is zero to three decimal places. This may appear surprising, since a natural prior is that controlling inflation and output variation matter equally, i.e. \lambda equal to 1. However, modern macroeconomic research, as outlined in Woodford’s canonical Interest and Prices, gives a similarly small value of 0.05, based on a microfounded loss function. The intuition is straightforward: when the economy mostly faces demand-side shocks, which push inflation and output in the same direction, stabilising inflation is approximately the same as stabilising output.

Inflation: Then vs Now

With this in mind, we can focus on how inflation has behaved since the crisis. The first diagram shows what the path of the price level would be if inflation had stayed at 2% every year, and we can see clearly that since 2008, the Fed has consistently undershot inflation. The second diagram focuses on what’s happened since the pandemic: although inflation fell sharply in the early stages, it has since recovered and the price level is now above the 2% path.
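The 2% path in these diagrams is just the starting index level compounded at 2% a year. As a minimal sketch, using the January 2008 PCEPI level of 93.102 (2012 = 100) that also appears in the plotting code below:

```r
# Counterfactual price level under steady 2% annual inflation from a base level
two_percent_path <- function(base, years) base * 1.02^years

two_percent_path(93.102, 0:12)   # Jan 2008 through Jan 2020, one value per year
```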

I think it’s reasonable to suggest that the degree of inflation panic now is far greater than in the aftermath of the Great Recession. But these diagrams suggest this asymmetry makes little sense. After 2008, the price level drifted downwards and never recovered to its original path. All the while, Ben Bernanke was formalising his central bank’s 2% inflation target. By contrast, it is not yet clear that we will meaningfully shift away from the original path, given much of the inflation right now is due to supply-side bottlenecks. And the Fed now has more flexibility with its Flexible Average Inflation Targeting strategy, and is obliged to make up for past shortfalls in inflation. So I am befuddled as to why a less permanent deviation away from a more flexible target is raising louder alarm bells.

One explanation is that inflation begets inflation, and so even an initially small rise could cause prices to spiral upwards. The problem with this is that inflation expectations remain reasonably well-anchored, and the Fed has been clear that it will stay on course. More importantly, these costs aren’t asymmetric: if the economy is running below capacity and inflation is below target, there are harms too.

Firstly, inflation expectations may become unanchored downwards, reducing the room for monetary policy - indeed, the Fed’s Statement notes that “downward risks to employment and inflation have increased”. Secondly, the underemployment of capital and labour can cause hysteresis effects where workers become discouraged and participate less in the labour force, as well as reducing the incentive for firms to invest in the capital stock. This may in turn slow the growth of the economy at large, with learning-by-doing or AK models implying that a lot of technical change comes from capacity utilisation. Thirdly, the period preceding the pandemic saw the longest economic expansion in US history - if we didn’t see signs of overheating past full employment then, it makes me wonder if we ever got there in the past.

So does that mean we should be entirely unconcerned? Of course not. As Professor Moody says, “constant vigilance”! But unless the inflation rate shows no sign of adjusting back to 2% across the next few months and unless longer-term inflation expectations start shifting meaningfully above 2%, our focus should be on running the economy hot, rather than worrying if the engine will overheat.


The code used to produce the regression and diagrams is given below.

# Regression

library(fredr)    # FRED API wrapper; assumes an API key has been set via fredr_set_key()
library(readxl)   # for reading the Greenbook spreadsheet

base_dir = "/home/tmychow/Desktop/"
greenbook_file = paste0(base_dir, "outputgap.xlsx")

# Quarterly PCE inflation (percent change, end-of-period) from 1983 to 2019
inflation = fredr(series_id = "PCEPI",
                  observation_start = as.Date("1983-01-01"),
                  observation_end = as.Date("2019-10-01"),
                  frequency = "q",
                  aggregation_method = "eop",
                  units = "pch")

# Download the Philadelphia Fed's Greenbook output gap estimates
download.file("https://www.philadelphiafed.org/-/media/frbp/assets/surveys-and-data/greenbook-data/greenbook_output_gap_dh_web.xlsx?la=en&hash=FFA675CD9C77F04E3F2BAA2D5657276D", destfile = greenbook_file)
output = read_excel(greenbook_file)

# Align the spreadsheet rows with the sample and keep a single Greenbook vintage of the gap
output = output[33:176,]
output = output["GBgap_151209"]
inflation$date = as.Date(inflation$date)
inflation = inflation[c("date","value")]

# Restrict to the Great Moderation sample (1983-2007)
moderation = cbind(inflation[1:100,],output[1:100,])
colnames(moderation) = c("time", "inflation", "output")

# Deviation from a 2% annual target (compounded quarterly), and the regressor
# -(1/kappa)(y - y*) implied by the targeting rule, with kappa = 0.0062
moderation$inflation = moderation$inflation - ((1.02)^0.25 - 1)
moderation$output = moderation$output*(-1/0.0062)

# The slope on output is the estimate of lambda
lm(inflation ~ output, moderation)

# Before Covid Diagram

library(ggplot2)
library(ggthemes)   # for theme_stata()

PCEpath = fredr(series_id = "PCEPI",
                observation_start = as.Date("2000-01-01"),
                observation_end = as.Date("2020-01-01"))

# Shaded region between the 1% and 3% paths from the Jan 2008 index level
band = data.frame(x = c(as.Date("2008-01-01"), as.Date("2020-01-01"), as.Date("2020-01-01")),
                  y = c(93.102, 93.102*1.01^12, 93.102*1.03^12))

ggplot(PCEpath, aes(x = date, y = value)) +
  geom_line(color = "blue") +
  labs(title = "Personal Consumption Expenditure Price Index",
       subtitle = "January 2000 to January 2020 (2012 = 100)",
       caption = "Made by @tmychow. 2% Path from Jan 2008. Shaded Region Covers 1% to 3%.",
       x = "Year", y = "Index") +
  geom_segment(mapping = aes(x = as.Date("2008-01-01"), y = 93.102,
                             xend = as.Date("2020-01-01"), yend = 93.102*1.02^12),
               color = "red", linetype = "dashed") +
  theme_stata() +
  geom_polygon(band, mapping = aes(x = x, y = y), fill = "grey", alpha = 0.4)

# After Covid Diagram

crisis = fredr(series_id = "PCEPI",
               observation_start = as.Date("2018-01-01"),
               observation_end = as.Date("2021-05-01"))

# Shaded region between the 1% and 3% paths from the Jan 2020 index level
ait = data.frame(x = c(as.Date("2020-01-01"), as.Date("2022-01-01"), as.Date("2022-01-01")),
                 y = c(110.917, 110.917*1.01^2, 110.917*1.03^2))

ggplot(crisis, aes(x = date, y = value)) +
  geom_line(color = "blue") +
  labs(title = "Personal Consumption Expenditure Price Index",
       subtitle = "January 2018 to May 2021 (2012 = 100)",
       caption = "Made by @tmychow. 2% Path from Jan 2020. Shaded Region Covers 1% to 3%.",
       x = "Year", y = "Index") +
  geom_segment(mapping = aes(x = as.Date("2020-01-01"), y = 110.917,
                             xend = as.Date("2022-01-01"), yend = 110.917*1.02^2),
               color = "red", linetype = "dashed") +
  theme_stata() +
  geom_polygon(ait, mapping = aes(x = x, y = y), fill = "grey", alpha = 0.4)