Output, Interest and Prices

27 May 2021

NB: The accompanying twitter video summary is available here. This has been a long time in the making, and at around 14,000 words, it’s turned out way longer than I expected it would be. Apologies in advance for the wordiness, but it was hard to do justice to the breadth of macroeconomics in fewer words than I’ve written. The inline text is commentary on the historical evolution of macroeconomics, or amusing snark and savagery between economists at the time: lots of really sassy quotes, especially in sections 5, 6 and 7.


The history of macroeconomic thought is a story of how growth theory, monetary theory and business cycle theory came together into one discipline. Throughout this, there are three macroeconomic variables that have stood the test of time as being central: output, interest rates and prices. As such, this history can be understood by examining the evolution of the equations describing and relating these variables. And doing so makes it apparent that, although our methodology has advanced by leaps and bounds, the increase in our knowledge regarding substantive macroeconomic questions has been far less than many might believe.

Table of Contents

  1. Before Macroeconomics: The Old Classicals
  2. Explaining the Great Depression: The Keynesian Revolution
  3. Unifying Macroeconomics: The Neoclassical Synthesis
  4. Counterrevolution 1: The Monetarists
  5. Counterrevolution 2: The New Classicals
  6. Counterrevolution 3: The Real Business Cycle Theorists
  7. Salvaging Keynes: The New Keynesians
  8. Resuscitating Growth: Endogenous Growth Theory
  9. Reunifying Macroeconomics: The New Neoclassical Synthesis
  10. This Post Takes the Old Classicals Seriously

Before Macroeconomics: The Old Classicals

Before macroeconomics was a field unto itself, its precursors were the fields of political economy, monetary theory and business cycle theory as expounded upon by the Old Classical economists. Adam Smith and Jean-Baptiste Say were two of the leading political economists. In his 1776 The Wealth of Nations, Smith famously said that “it is not from the benevolence of the butcher, the brewer or the baker, that we expect our dinner, but from their regard to their own interest”. In doing so, the father of economics set up the notion that markets are able to align private incentives with the public interest. His most well-known analogy was that of the invisible hand, noting that a producer is “led by an invisible hand to promote an end which was no part of his intention … By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it”.

Smith went on to argue that “the quantity of every commodity brought to market naturally suits itself to the effectual demand … The natural price, therefore is, as it were, the central price to which the prices of all commodities are continually gravitating. Different accidents may sometimes keep them suspended a good deal above it, and sometimes force them even somewhat below it. But whatever may be the obstacles which hinder them from settling in this center of repose and continuance, they are constantly tending towards it”. In saying this, Smith described the central tenet of Old Classical economics - that via the price mechanism, markets would naturally gravitate towards an efficient equilibrium.

This was summed up by Say’s Law of Markets in his 1803 Traité d’économie politique: “When the producer has put the finishing hand to his product, he is most anxious to sell it immediately, lest its value should diminish in his hands. Nor is he less anxious to dispose of the money he may get for it, for the value of money is also perishable. But the only way of getting rid of money is in the purchase of some product or other. Thus the mere circumstance of creation of one product immediately opens a vent for other products”. Put more succinctly, supply creates its own demand. It is worth being clear that the common caricature of Say’s Law as suggesting there can never be a demand shortfall is mistaken. Indeed, Say went on to say that “a glut can take place only when there are too many means of production applied to one kind of product and not enough to another”. So a misallocation of resources between markets could cause problems, but Say did reject the idea of a general glut.

As for the determination of real output $Y$, The Wealth of Nations described it as coming from “the annual produce of the land and labour”. In the long run, this could only be “increased in value by no other means, but by increasing either the number of its productive labourers, or the productive powers of those labourers”. That is, real output was seen by Smith as a function of land $T$ and labour $L$, with increases in productivity $A$ driving long-run growth of output per capita.

Meanwhile, Old Classical monetary theory had its three paragons in David Hume, Irving Fisher and Knut Wicksell. Hume’s “Of Money” in 1752 gave one of the first accounts of the uses of money, arguing that “money is … only the instrument which men have agreed upon to facilitate the exchange of one commodity for another”, establishing its role as the medium of exchange. Hume went on to say that “if we consider any one kingdom by itself, it is evident that the greater or less plenty of money is of no consequence, since the prices of commodities are always proportioned to the plenty of money”. And he was also able to identify that “if the coin be locked up in chests, it is the same thing with regard to prices, as if it were annihilated” - that is, the demand for money matters too, alongside its supply. He repeated this in his 1752 “Of the Balance of Trade”, noting that there is a fallacy of composition when it comes to an increase in the money supply: “we fancy, because an individual would be much richer were his stock of money doubled, that the same good effect would follow were the money of every one increased, not considering that this would raise as much the price of every commodity and reduce every man, in time, to the same condition as before”.

In doing so, he set out the two key ideas in long-run monetary theory: the neutrality of money and the quantity theory of money. The former says that the nominal and real parts of the economy are separate in the long run i.e. you can’t print yourself rich, because the productive potential of the economy $Y^\ast$ is determined by the aforementioned factors of production, not the quantity of money. The latter says that the long-run effect of increasing the money supply $M$ is to raise the general level of prices $P$ by the same amount. These claims were corroborated by the writing of other classical economists, with Smith noting in The Wealth of Nations that for any good “its real price may be said to consist in the quantity of the necessaries and conveniences of life which are given for it; its nominal price, in the quantity of money”. And in the vein of the quantity theory, David Ricardo described in his 1810 The High Price of Bullion the effects of an increase in the money supply as one where “the circulating medium would be lowered in value and goods would experience a proportionate rise”.

Meanwhile, Hume went on to “trace the money in its progress through the commonwealth … it must first quicken the diligence of every individual, before it increases the price of labour”. In doing so, he outlined the important short-run idea in monetary theory i.e. the short-run non-neutrality of money, where changes in the supply of money $M$ temporarily affect the level of real output $Y$. Consequently, he argued for creeping inflation: that “the good policy of the magistrate consists only in keeping the quantity of money, if possible, still increasing; because, by that means, he keeps alive a spirit of industry”.

The breadth of Hume’s work extended beyond just discussing money and prices - he also produced “Of Interest” in 1752, where he argued that high interest rates arose from “a great demand for borrowing and little riches to supply that demand”. In doing so, he established the loanable funds model of interest rate determination - that is, the interest rate $r$ is the price at which the demand for loanable funds i.e. investment is equal to the supply of loanable funds i.e. savings. He also clarified that “though both these effects, plenty of money and low interest, naturally arise from commerce and industry, they are altogether independent of each other” - that is, the quantity of money does not determine the interest rate.

Questions regarding money and interest rates continued to be pursued by Fisher and Wicksell. Fisher’s defining work was his 1930 The Theory of Interest. In this book, he set up three concepts that still live on till this day. Firstly, he defined capital $K$ as an asset that would produce a flow of income - as such, the value of such an asset would be the net present value of the future income stream. Secondly, he saw interest rates $r$ as “an index of a community’s preference for a dollar of present over a dollar of future income”. This time preference theory of interest rate determination is reflected by the fact that the full title of his book was The Theory of Interest as Determined by Impatience to Spend Income and Opportunity to Invest - that is, the interest rate was set by how impatient people were to spend right now versus how much return people would get on the capital they saved and invested in. Thirdly, he produced the Fisher equation, which says that the nominal interest rate $i$ equals the real interest rate $r$ plus the inflation rate $\pi$.

Beyond interest rates, Fisher also elucidated two broader relationships. The first was the equation of exchange $MV = PT$, where nominal spending equals the money supply multiplied by the velocity of money i.e. the number of times a unit of money is used. Nominal spending also equals the price level multiplied by the real value of transactions. This equation of exchange would be echoed in a different form by the Cambridge economists Alfred Marshall and Arthur Pigou, who thought that the demand for money was equal to some fraction $k(i)$ of nominal output $PY$, since people held money to spend it. If nominal output were higher, people would want to hold more money. Equally, if the nominal interest rate were higher, people would want to hold less money due to the higher opportunity cost of holding cash in lieu of interest-bearing assets - hence why $k$ was a decreasing function of the nominal interest rate $i$. When the supply of money equals the demand for money, we get that $M=k(i)PY$ - and by taking $V=\frac{1}{k(i)}$, we get the equation of exchange $MV = PY$, except that this describes the volume of output rather than of transactions.
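
To make the bookkeeping concrete, here is a minimal numerical sketch of the Cambridge version of the equation of exchange. The functional form of $k(i)$ and every number below are illustrative assumptions rather than anything estimated, but it shows how money-market clearing pins down the price level and why $MV = PY$ holds by construction.

```python
# A minimal numerical sketch of the Cambridge money-demand version of the
# equation of exchange. The form of k(i) and all numbers are illustrative
# assumptions, not estimates.

def k(i):
    """Fraction of nominal output people want to hold as money, decreasing in i."""
    return 0.5 / (1 + 10 * i)

M = 1000.0   # money supply
Y = 2000.0   # real output
i = 0.05     # nominal interest rate

# Money-market clearing: M = k(i) * P * Y, so the price level is
P = M / (k(i) * Y)
V = 1 / k(i)                      # velocity implied by the Cambridge k
print(P, V, M * V, P * Y)         # MV equals PY by construction
```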

And the second of Fisher’s relationships was in a 1926 paper titled “A Statistical Relation between Unemployment and Price Changes”, where he noted a “close correspondence between unemployment and changes in the purchasing power of money”. In particular, he described the negative relationship between inflation and unemployment in the US. Importantly, he made clear the distinction between “the price level and changes in the price level” - while the former had little to do with whether full employment was achieved, the latter did have such an effect. One reason for this non-neutrality was outlined in his 1928 The Money Illusion, where he noted that people often faced a “failure to perceive that the dollar, or any other unit of money, expands or shrinks in value”. As such, changes in nominal quantities e.g. inflation, could affect real magnitudes via this monetary confusion.

As for Wicksell, his approach was put forward in his 1898 Interest and Prices. He claimed that the quantity theory held true in the long run, with changes in the price level originating “outside the commodity market”. Rather, “the total volume of money instruments … in relation to the quantity of commodities exchanged, was the regulator of commodity prices”, and “the conditions of production and consumption … affect only exchange values or relative prices”. However, he posited that the quantity theory failed to explain how money demand and supply were equilibrated in the period while prices were still adjusting - as such, he incorporated interest rates as the adjusting variable that equalised the two in the meantime. Thus “the quantity theory is correct, insofar as it is true that an increase or relative diminution in the stock of money must always tend to raise or lower prices - by its opposite effect in the first place on rates of interest”.

He elaborated on what this meant by clarifying two interest rates. The natural rate of interest $r^\ast$ is the interest rate determined in the production sphere i.e. the loanable funds market, where the interest rate equals the marginal product of capital $MPK$. After all, suppliers of loanable funds shouldn’t be willing to lend at a lower rate than that which they can get from buying capital, while demanders of loanable funds shouldn’t be willing to borrow for more than the returns on capital they would be receiving. Meanwhile, the market rate of interest $r$ is the interest rate determined in the financial sphere i.e. whatever banks and other financial institutions lent and borrowed at. When the market rate is different from the natural rate, there is a Wicksellian “cumulative process” - for example, Wicksell noted that if $r<r^\ast$, people would borrow to spend, causing a rise in output and prices. As such, “the demand for money loans is consequently increased, and as a result of a greater need for cash holdings, the supply is diminished. The consequence is that the rate of interest is soon restored to its normal level, so that it again coincides with the natural rate”. Thus Wicksell provided a way of reconciling the loanable funds and financial theories of interest rate determination.
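
A toy simulation helps illustrate the cumulative process. In the sketch below, inflation is assumed to be proportional to the gap between the natural and market rates, and the drain on bank reserves is assumed to push the market rate back towards the natural rate each period - both functional forms and all the parameters are made up purely for illustration.

```python
# A toy run of Wicksell's cumulative process: prices keep rising for as long
# as the market rate r sits below the natural rate r_star, while the drain on
# bank reserves slowly pushes r back up. All parameters and functional forms
# here are illustrative assumptions.

r_star = 0.04           # natural rate, i.e. the marginal product of capital
r = 0.02                # market rate after a credit expansion
P = 1.0                 # price level
phi, adjust = 2.0, 0.2  # inflation sensitivity to the gap; speed r drifts back

for t in range(10):
    gap = r_star - r
    inflation = phi * gap      # borrowing to spend bids up prices while r < r_star
    P *= 1 + inflation
    r += adjust * gap          # shrinking reserves pull the market rate back up
    print(f"t={t}: r={r:.3f}, inflation={inflation:.2%}, P={P:.3f}")
```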

This distinction between the natural and market interest rate was sufficiently important that in his 1906 Lectures on Political Economy, Wicksell warned against reasoning from a change in interest rates - while “a fall in loan rates caused by increased supplies of real capital should thus in itself cause neither a rise nor a fall in the average price level”, a fall in the market rate due to “artificial capital created by credit” would cause a rise in the price level. That is, the market rate could fall due to a secular fall in the natural rate, or because it was deviating from the natural rate due to a monetary injection. Wicksell also addressed the Humean notion of creeping inflation, noting in the lectures that “sometimes, it is true, we hear it said that … a progressive rise in commodity prices might be preferred” because it “would act as a stimulus to enterprise”. However he argued that this was “evidently naïve … if this fall in the value of money is the result of our own deliberate policy, or indeed can be anticipated and foreseen, then these supposed beneficial effects will never occur, since the approaching rise in prices will be taken into account … What is contemplated is, therefore, unforeseen rises in price”.

The logical implication of all of this for central bank policy was Wicksell’s rule: “so long as prices remain unaltered, the bank’s rate of interest is to remain unaltered. If prices rise, the rate of interest is to be raised; and if prices fall, the rate of interest is to be lowered”. The need for price stability was further elaborated upon by John Maynard Keynes in his 1923 A Tract on Monetary Reform, where he said that “inflation is unjust and deflation inexpedient”. This was because rising prices would redistribute away from people earning nominally fixed incomes, while falling prices would reduce expectations of earnings and increase real debt burdens, hurting economic growth. Crucially, this was best done by an independent central bank, because “the impecuniosity of governments and the superior political influence of the debtor class” would incentivise seigniorage i.e. “taxation by currency depreciation”.

Finally, the field of business cycle theory was still rather nascent. Compendiums of the field focused on measuring business cycles and on hypotheses about potential causes, as exemplified by NBER founder Wesley Mitchell’s 1913 Business Cycles as well as Gottfried Haberler’s 1937 Prosperity and Depression. Some suggested explanations included the unstable provision of money and credit, the possibility of overinvestment causing a misallocation of real resources, the redistributive effects of deflation towards creditors reducing consumption, output growing too fast and leading to underconsumption, and unstable expectations regarding future prosperity.

So on the eve of the Keynesian revolution, the still nebulous field of macroeconomics had begun coalescing. In the long run, output depended on the factors of production i.e. technology, capital (subsuming land) and labour. The natural rate of interest was the marginal product of capital and the price level was given by the equation of exchange.

\[Y^\ast = F(A,K,L)\] \[r^\ast = MPK\] \[P^\ast = \frac{MV}{Y}\]

In the short run, the Wicksellian framework allowed the integration of goods and financial markets. The Wicksellian difference determined the deviation of output from its long-run potential. Monetary theory explained how the market interest rate deviated from its natural rate. Inflation was a function of output as per Fisher or a function of the Wicksellian difference - these two are consistent, because the Wicksellian difference determines output fluctuations. But it was also a function of expected inflation, since Wicksell noted that only unanticipated inflation would affect output. And with inflation determined, the nominal interest rate was set by the Fisher equation.

\[Y = G(Y^\ast, r - r^\ast)\] \[r = L(r^\ast, M)\] \[\pi = P(Y,E(\pi))\] \[i = r + \pi\]

This set the stage for a hodgepodge of business cycle theories explaining how various factors affected the natural rate or the market rate, and thereby caused changes in output, interest and prices. It meant that the government had a role in improving the productive potential of the economy and there was a role for an independent central bank to stabilise prices and the business cycle. But although the degree of formal modelling was limited and this was by no means a complete and consistent explanation of growth, money or business cycles, many important ideas had been set out. And as we will see, these ideas have stood the test of time.

Explaining The Great Depression: The Keynesian Revolution

The Great Depression would shake this Old Classical approach to its core, catalysing the Keynesian revolution and heralding the beginnings of macroeconomics as a unified and coherent field. Although the Old Classicals were well-aware of the potential for business cycle fluctuations, the underlying philosophy remained one of equilibration. The depth and persistence of the Great Depression meant that many found this view of economies naturally returning to their efficient equilibrium unsatisfying, especially because so much of it came from ad hoc suggestions as opposed to a unified framework. Indeed, Keynes had declared in his 1923 Tract that “in the long run, we are all dead. Economists set themselves too easy, too useless a task, if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again”.

So when the Depression arrived, Keynes produced his magnum opus in 1936: The General Theory of Employment, Interest and Money, which represented a dramatic turn from the writings of Old Classicals (including his own previous works in that tradition). In this book, he expounded upon how involuntary unemployment could occur and persist as was happening during the Depression. In particular, he argued that investment was especially volatile, because most investment decisions “can only be taken as a result of animal spirits - of a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantified benefits … Thus if the animal spirits are dimmed and spontaneous optimism falters, leaving us to depend on nothing but a mathematical expectation, enterprise will fade and die”. This rather behavioural economics-esque explanation meant that “the level of employment at any time depends … not merely on the existing state of expectation but on the states of expectation which have existed over a certain past period … embodied in today’s capital equipment”. Consequently, there was a role for government intervention when such an adverse shock occurred.

Unfortunately, Keynes’s text is verbose and it is consequently difficult to parse what this means in a model - the canonical interpretation is by John Hicks in his 1937 “Mr. Keynes and the ‘Classics’”. In this paper, Hicks defined the Keynesian story in terms of the IS-LM model. The IS curve relates to the equalisation of investment and savings, while the LM curve relates to the equalisation of liquidity preference and the money supply.

Savings are an increasing function of output, since people tend to save more when their incomes are larger. “The amount of investment … depends on the rate of interest” according to Hicks, since a lower real interest rate makes it cheaper to borrow to invest. The preference for liquidity (or money) is increasing in nominal output and decreasing in the interest rate (for the same reasons as in the Cambridge equation of exchange), while the money supply is just that. Many of these ideas are not new - indeed, the disagreements between Old Classical monetary theorists were precisely about how to reconcile the fact that the interest rate was determined both in the goods and the money market. Hume had focused on the goods market in terms of loanable funds, Fisher had directed his attention to the time preference explanation in the money market, while Wicksell had argued for two interest rates. In this respect, the key Keynesian insight was the notion of simultaneous determination: because both real output and real interest rates were involved in the two equations, both would adjust simultaneously to equilibrate investment with savings and liquidity preference with the money supply.

\[S(Y) = I(r)\] \[M = L(PY,r)\]

Notice that it is possible to rearrange the first equation, since savings are simply total output minus the consumption of individuals $C(Y)$ and the government’s net spending $G-T$ i.e. government expenditure minus taxes. And it’s possible to rearrange the second equation by looking at real money balances instead of nominal money balances.

\[Y = C(Y) + I(r) + (G-T)\] \[\frac{M}{P} = L(Y,r)\]

Suppose there were an adverse shock to investment as in a recession, due to pessimistic expectations of the future. Keynes, rightly or wrongly, characterised the Old Classical position as one where the real interest rate would fall to equilibrate and ensure real output stayed at its original level. By contrast, he argued that because the interest rate was equilibrating the money market, real output would be the variable that adjusts and falls. One response to this would be to increase the money supply. This would mean that, given that the preference for liquidity stays the same, people would put this money into interest-bearing assets, lowering the real interest rate. As such, there would be a rise in investment and real output, counteracting the effects of the recession. But if interest rates were at 0 or if investment wasn’t very interest-elastic, one might struggle to induce a rise in investment via monetary expansion. It is in those circumstances that Keynes pushed for fiscal policy, raising net government spending $G-T$ directly to prevent real output from falling. In particular, he posited in The General Theory that he expected “the State, which is in a position to calculate the marginal efficiency of capital-goods on long views and on the basis of the general social advantage, taking an ever greater responsibility for directly organising investment”.
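
For concreteness, here is a minimal linear IS-LM sketch of the comparative statics just described. The linear functional forms and every coefficient are illustrative assumptions (nothing here comes from Hicks or Keynes), but it shows output and the interest rate being determined simultaneously, an adverse investment shock dragging both down, and a monetary expansion lowering the interest rate and propping output back up.

```python
import numpy as np

# A minimal linear IS-LM sketch. The linear functional forms and every
# coefficient below are illustrative assumptions, not values from the post.
#   IS:  Y = c0 + c1*Y + i0 - i1*r + G_net     (goods market: S(Y) = I(r))
#   LM:  M/P = l1*Y - l2*r                     (money market: liquidity preference)

def solve_islm(i0=200.0, G_net=100.0, M=400.0, P=1.0,
               c0=150.0, c1=0.6, i1=1000.0, l1=0.5, l2=2000.0):
    # Rearranged into A @ [Y, r] = b and solved simultaneously,
    # which is the "simultaneous determination" insight in the text.
    A = np.array([[1 - c1,  i1],
                  [l1,     -l2]])
    b = np.array([c0 + i0 + G_net, M / P])
    Y, r = np.linalg.solve(A, b)
    return round(Y, 1), round(r, 4)

print(solve_islm())                    # baseline: Y = 1000.0, r = 0.05
print(solve_islm(i0=150.0))            # adverse investment shock: both Y and r fall
print(solve_islm(i0=150.0, M=440.0))   # monetary expansion: r falls further, Y recovers
```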

It is worth noting that there are many interpretations of Keynes, especially with respect to the question of why within nominal output, it is real output that falls as opposed to prices. The hydraulic Keynesianism of IS-LM assumes that the price level is fixed, perhaps due to sticky wages where workers refuse to accept nominal wage cuts - this was most persuasively put forward by Franco Modigliani in his 1944 “Liquidity Preference and the Theory of Interest and Money”. But there are other stories too.

For example, the post-Keynesian school has focused more on animal spirits, arguing that the stickiness of prices and wages is not essential to Keynes’s story. Rather, it is inconsistent expectations about the future that prevent the economy from reaching its full productive potential - this is best exemplified in Roger Farmer’s work, but a simple story of why flexible prices do not guarantee full output goes as follows.

Consider a world with perfect competition and flexible prices. It is still possible to have unemployed resources, because price rigidity or frictions in market structures aren’t the only things to cause suboptimal resource allocation. If we think of our macroeconomic model as made up of various supply and demand models in different markets, the expectations of producers and consumers feed into their supply and demand curves. If they have inconsistent expectations, this will shift the two curves by different amounts, causing them to intersect at a lower level of output than is optimal. So in that sense, it is not prices that are stopping the market from clearing; rather, the market clears at a point where mutually beneficial trades are foregone due to inconsistent expectations.

The focus of Keynes’s 1937 “The General Theory of Employment”, in which he replied to comments regarding his 1936 magnum opus, lends credence to this line of argument, since it overwhelmingly focused on expectations and did not mention stickiness or rigidity. Indeed, Keynes argued that the Old Classical theories were “incapable of dealing with the general case where employment is liable to fluctuate”, with the implication being that full output was a special case, and that the economy may remain below potential due to inconsistent expectations even in the long run when prices have fully adjusted.

Another avenue of interpreting Keynes is the disequilibrium approach explored by the likes of Don Patinkin, Robert Clower and Axel Leijonhufvud. They focused on the idea that recessions and falls in real output were disequilibrium phenomena - that is, the fact that markets do not clear continuously via some centralised Walrasian auctioneer means there is room for difficulties of coordination within a decentralised economy. In particular, the speed of adjustment was one where prices were sluggish and quantities were rapid, rather than the usual idea of prices adjusting quickly. Although this has not caught on much within mainstream economics nowadays (bar a brief revival from Robert Barro and Herschel Grossman), the ideas of search theory are in many ways in a similar vein of rejecting the Walrasian auctioneer. But regardless of one’s personal choice of exegesis and whether one saw Hicks as having bastardised Keynes, the next few decades of macroeconomics would be dominated by the IS-LM approach, where short-run output and interest rates were described by the two aforementioned equations.

Although it is true that macroeconomics as a field was born out of the Great Depression and Keynes, I am quite skeptical of the extent to which he was revolutionary. Methodologically, the main shift was away from more hand-waving discussions of business cycles over time towards the simultaneous determination of macroeconomic variables within a system of equations. That was certainly a change, and Pigou in his 1950 Keynes’s General Theory: A Retrospective View argued that “nobody before Keynes, as far as I know, had brought all the relevant factors, real and monetary at once, together in a single formal scheme, through which their interplay could be coherently investigated”.

But in terms of the substantive content, Hicks concedes in his 1937 paper that the Keynesian description was only “completely out of touch with the classical world” in the special case of a liquidity trap i.e. where interest rates were at 0 and fiscal policy was required. So I think Keynes actually oversold his contributions by caricaturing the Old Classicals as more faithful adherents to Say’s Law than they actually were.

Unifying Macroeconomics: The Neoclassical Synthesis

The reason IS-LM dominated was a convergence that was beginning to take shape in macroeconomics. While business cycle theory was perhaps the most prominent strand of macroeconomics in the aftermath of the Great Depression, economists behind the scenes had continued to work on issues of long-run growth as well as microeconomic foundations. And by the 1950s, these had been brought together into one coherent story that would be the cornerstone of the research agenda for the next few decades.

In business cycle theory, the final piece of the puzzle was the addition of a supply-side relationship between output and prices to the IS-LM model. This was provided by William Phillips, who published “The Relation Between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom” in 1958. In doing so, he reminded the profession of the forgotten finding by Fisher back in 1926: the negative relationship between unemployment and wage inflation. He suggested that since “when the demand is low relative to the supply we expect the price to fall … it seems plausible that this principle should operate as one of the factors determining the rate of change of money wage rates”. When Paul Samuelson and Robert Solow went on to provide the theoretical explanation behind “the menu of choice between different degrees of unemployment and price stability” in their 1960 “Analytical Aspects of Anti-Inflation Policy”, the profession had closed out a model of output, interest rate and price determination. And since the model was dependent upon the idea of sluggish price adjustment, it could be relegated to being a short-run story only.

\[\pi = P(Y)\]

Meanwhile, the Smithian invisible hand had been formalised in microeconomics by Léon Walras’s 1874 Éléments d’économie politique pure, Kenneth Arrow and Gérard Debreu’s 1954 “Existence of an Equilibrium for a Competitive Economy” as well as Lionel McKenzie’s 1954 “On Equilibrium in Graham’s Model of World Trade and Other Competitive Systems”. These set up the foundations of microeconomic theory, which would be used to combine macroeconomics with general equilibrium theory as well as to formalise the microeconomic foundations of macroeconomics. This was best exemplified by the 1956 book Money, Interest and Prices by Don Patinkin, which brought together the three equations with microeconomic theory. Crucially, the combination of microfoundations and the explicitly short-run nature of the IS-LM-PC model allowed Samuelson to coin one of the most famous phrases regarding economic methodology in his canonical undergraduate textbook Economics: “I have set forth what I call a grand neoclassical synthesis … Its basic tenet is this: solving the vital problems of monetary and fiscal policy … will validate and bring back into relevance the classical verities. This neoclassical synthesis … heals the breach between aggregative macroeconomics and traditional microeconomics and brings them into complementing unity”.

So what was happening in growth theory during all of this? Although the Old Classicals had already set up an explanation of how the factors of production fed into the long-run productive potential of an economy, they did not have a clear model of the dynamics of economic growth over time. The Keynesian theory of growth was put forward by Roy Harrod in his 1939 “An Essay in Dynamic Theory” as well as Evsey Domar in his 1946 “Capital Expansion, Rate of Growth and Employment”. The Harrod-Domar model assumed a production function that only includes capital and has the marginal product of capital as constant - as such, capital is a fixed proportion of real output. The capital stock depends upon new investment and what is left of the capital stock after depreciation. Investment is equal to savings, which is taken as a fixed proportion of income.

\[Y_t = F(K_t)\] \[K_t = \frac{Y_t}{MPK}\] \[K_{t+1} = I_t + (1-\delta)K_t\] \[I_t = S_t = sY_t\]

What they found was that this implied the growth rate of output was given by the savings rate, the marginal product of capital and the depreciation rate. Although this offered plausible policy prescriptions for increasing the growth rate, it also had a deeply Keynesian insight regarding the long run. If the population grew at a fixed rate $n$, the possibility of full employment would be an entirely knife-edge situation, since $n>g$ would imply that the population is growing faster than real output, meaning that there was no tendency for reverting to full employment.

\[g = \frac{Y_{t+1} - Y_t}{Y_t} = sMPK - \delta\]
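
Spelling out the algebra behind this result: since output is proportional to capital with a constant marginal product, output and capital grow at the same rate, and substituting $I_t = sY_t = s \, MPK \, K_t$ into the accumulation equation gives

\[K_{t+1} = s\,MPK\,K_t + (1-\delta)K_t \implies \frac{Y_{t+1}-Y_t}{Y_t} = \frac{K_{t+1}-K_t}{K_t} = s\,MPK - \delta\]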

And so in 1956, Robert Solow and Trevor Swan wrote “A Contribution to the Theory of Economic Growth” and “Economic Growth and Capital Accumulation” respectively. Their Solow-Swan model of growth relaxed a range of assumptions that the Harrod-Domar model had depended upon: it included labour and technology as factors of production, it set the factors of production as having decreasing rather than constant marginal products and it did not force the capital-output ratio to be fixed. It kept the process of capital accumulation, while also incorporating the growth of the population and technology over time. The Cobb-Douglas functional form of this model is given below.

\[Y = K^\alpha (AL)^{1-\alpha}\] \[K_{t+1} = I_t + (1-\delta)K_t\] \[I_t = S_t = sY_t\] \[L_{t+1} = (1+n) L_t\] \[A_{t+1} = (1+g) A_t\]

The result was that, because the factors of production had diminishing marginal products, there was a steady state level of output per capita $y^\ast$ which the economy would converge to. In the long run, the growth rate of output per capita $\dot{y}^\ast$ would be solely determined by the rate of technological progress $g$, i.e. the growth rate of Total Factor Productivity. This was a powerful rejoinder against the Harrod-Domar model - not only did it better track the six stylised facts about growth which Nicholas Kaldor had proposed in his 1957 “A Model of Economic Growth”, it did not have weird knife-edge conditions and it did not imply the absurd claim that it was possible to perpetually achieve economic growth by raising the savings rate. Later on, the combination of Frank Ramsey’s 1928 “A Mathematical Theory of Saving”, David Cass’s 1965 “Optimum Growth in an Aggregative Model of Capital Accumulation” and Tjalling Koopmans’s 1965 “On the Concept of Optimal Economic Growth” would provide a microfounded version of the Solow model by explaining how the savings rate was determined as opposed to assuming it was a fixed exogenous value. In any case, the Solow model and its microfounded cousin, the Ramsey-Cass-Koopmans model, would become the mainstays of neoclassical growth theory.
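
To see the convergence result in action, here is a minimal discrete-time simulation of the Solow-Swan equations above. The parameter values are illustrative assumptions chosen only to demonstrate the mechanics, not a calibration: whatever the starting capital stock, growth in output per capita settles down to roughly $g$.

```python
# A minimal discrete-time simulation of the Solow-Swan model above. All
# parameter values are illustrative assumptions, chosen only to show
# convergence rather than to match any data.

alpha, s, delta, n, g = 0.3, 0.25, 0.05, 0.01, 0.02
K, L, A = 1.0, 1.0, 1.0

prev_y = None
for t in range(301):
    Y = K ** alpha * (A * L) ** (1 - alpha)
    y = Y / L                                    # output per capita
    if prev_y is not None and t % 100 == 0:
        # per-capita growth settles at roughly g (here 2%) near the steady state
        print(f"t={t}: per-capita growth = {y / prev_y - 1:.4f}")
    prev_y = y
    K = s * Y + (1 - delta) * K                  # capital accumulation
    L *= 1 + n                                   # population growth
    A *= 1 + g                                   # technological progress
```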

The combination of all of this meant that up till the 1970s, the neoclassical synthesis reigned supreme in macroeconomics. Long-run potential output was still determined by the factors of production as per the Old Classicals, but we now had a formal model for how it grew over time i.e. at the rate of Total Factor Productivity improvements. The natural rate of interest and price level were consistent with Old Classical explanations.

\[Y^\ast = F(A,K,L)\] \[\dot{y}^\ast = g\] \[r^\ast = MPK\] \[P^\ast = \frac{MV}{Y}\]

The short run was described by a combination of the IS-LM model and the Phillips curve. The main change from the Old Classicals was that we now had a clearer sense of the actual equations and of how everything was determined together. But we also lost some insights: the role of expectations in the Phillips curve was de-emphasised, as was the idea of the Wicksellian difference. Although to be fair, Samuelson and Solow did make clear in their 1960 paper that “it would be wrong, though, to think that our menu … will maintain its same shape in the longer run. What we do in a policy way during the next few years might cause it to shift in a definite way”.

\[Y = C(Y) + I(r) + (G-T)\] \[\frac{M}{P} = L(Y,r)\] \[\pi = P(Y)\] \[i = r + \pi\]

Regardless, this spurred on the next decade or so of research.

Macroeconomists mostly focused on understanding the various functions implicit in the IS-LM-PC model. In particular, the literature focused on the consumption function $C(Y)$, the investment function $I(r)$, the money demand function $L(Y,r)$ and the price-setting function $P(Y)$. The most important contribution to the consumption function was the permanent income hypothesis, which argued that people’s consumption depended on their expected path of income across their entire lifetime, rather than just their current income. This notion that people smooth their consumption across their lifetime was independently proposed by Milton Friedman in his 1957 A Theory of the Consumption Function as well as Franco Modigliani and Richard Brumberg’s 1954 “Utility Analysis and the Consumption Function”. On the question of the investment function, Jorgenson’s 1963 “Capital Theory and Investment Behaviour” as well as James Tobin’s 1969 “A General Equilibrium Approach to Monetary Theory” were two defining papers in that field. Jorgenson focused on the idea that in a frictionless world, firms invest to maximise the net present value of their profits. By contrast, Tobin looked at Tobin’s q i.e. the market value of the firm’s assets divided by the replacement cost of the firm’s assets, with a higher q suggesting that the firm’s market value is greater than the value of its capital and thus incentivising more investment. With respect to money demand, the 1952 “The Transactions Demand for Cash” by William Baumol and the 1956 “The Interest Elasticity of Transactions Demand For Cash” by Tobin formalised the ideas of liquidity preference that were already present in the IS-LM model.

At the same time, the rising computational power of computers meant that large scale macroeconometric models were being developed. By taking these functions and estimating them mathematically, economists were developing models aimed at predicting the effects of various policies on the economy. One of the first examples was Lawrence Klein and Arthur Goldberger’s 1955 An Econometric Model for the United States, with 20 simultaneous equations to describe the US economy. Augmented by work from the Cowles Commission, the Brookings Institution and others, this approach reached its apex with the Federal-Reserve-Board-MIT-Penn (FMP) model, which combined the IS-LM-PC model with the neoclassical growth theories in the background to produce a model with hundreds of equations.

But in spite of this seemingly united research program, not all was well with macroeconomics, and this neoclassical synthesis didn’t go unchallenged. In fact, it was facing a challenge from Monetarists in the 1960s, and this would be followed up in the 1970s and 1980s by critiques from the New Classicals and Real Business Cycle theorists.

Counterrevolution 1: The Monetarists

The flagbearer of monetarism was Friedman. Although he had already influenced macroeconomics via his permanent income hypothesis, his much more significant contributions were to come in the ideas of monetarism. There were three main ones: the stability of the money demand function, the role of money supply fluctuations in the business cycle and the expectations-augmented Phillips curve.

Friedman’s 1956 Studies in the Quantity Theory of Money set up “a theory of the demand for money” - in particular, he argued that the demand for real money balances was a function of the level of someone’s permanent income, the interest rate on a range of financial and physical assets as well as expected inflation. Compared to the original Keynesian view, this replaced current income with expected future income, incorporated a wider range of assets beyond just long-term bonds and included the role inflation would have in eroding the value of money.

\[\frac{M}{P} = L(E(Y),r,E(\pi))\]

The consequence is that money demand and velocity would be more stable, since permanent income is less volatile than current income and since changes in the money supply are unlikely to affect returns on all possible assets. The stability of money velocity implies “substantial changes in prices or nominal income are almost invariably the result of changes in the nominal supply of money”. This is buttressed by the portfolio adjustment mechanism, whereby a change in the money supply which causes excess money balances would lead to people buying up not just bonds but all sorts of assets, allowing monetary impulses to have a much larger effect across a range of markets. This was especially due to the fact that “monetary changes have their effect only after a considerable lag and over a long period and that the lag is rather variable”, as described in his 1960 book A Program for Monetary Stability. In this way, Friedman brought the equation of exchange back into the forefront of macroeconomics, and especially in relation to short-run business cycles. His 1963 book with Anna Schwartz, A Monetary History of the United States, applied this concept to the Great Depression. They argued that the Great Depression was caused by the failure of the Federal Reserve to respond appropriately to a fall in money demand.

The third contribution came in his analysis of the Phillips curve - by this point, many were treating the Phillips curve as a menu of inflation and output options that a central bank could choose from. Clearly this seemed in tension with the classical dichotomy - how was it possible that a monetary choice could let central banks pick output beyond the productive potential of the economy? Alongside Edmund Phelps, Friedman provided an answer to this paradox. In Phelps’s 1967 “Phillips Curves, Expectations of Inflation and Optimal Unemployment over Time” and 1968 “Money Wage Dynamics and Labour Market Equilibrium”, an expectations-augmented Phillips curve was set up. The idea was that inflation depended upon the deviation from the natural level of output and expectations of inflation $E(\pi)$. If central banks attempted to exploit the Phillips curve to get higher output with higher inflation, people would soon adapt their inflation expectations - that is, there was not perpetual money illusion. Friedman modelled expectations as adaptive i.e. equal to the inflation in the previous period - as such, with the change in inflation being related to the deviation of output, pegging output above its natural level permanently would lead to ever-rising inflation.

\[\pi = E(\pi) + P(Y-Y^\ast)\] \[\Delta \pi = P(Y-Y^*)\]
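
A toy sketch of the accelerationist logic, assuming a linear Phillips slope and Friedman-style adaptive expectations (the slope and all the numbers are made up): pegging output above its natural level makes inflation ratchet up period after period.

```python
# A toy accelerationist Phillips curve with adaptive expectations. The slope
# and all numbers are illustrative assumptions.
#   pi_t = E_t(pi) + beta * (Y - Y_star) / Y_star,   with E_t(pi) = pi_{t-1}

beta, Y_star = 0.5, 100.0
Y = 102.0            # output pegged 2% above its natural level
expected_pi = 0.02   # last period's inflation

for t in range(6):
    pi = expected_pi + beta * (Y - Y_star) / Y_star
    print(f"t={t}: inflation = {pi:.1%}")   # rises by 1 percentage point every period
    expected_pi = pi                        # adaptive expectations
```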

Friedman summarised the implications of monetarism in his 1968 address “The Role of Monetary Policy”. He saw the Depression as a “tragic testimony to the power of monetary policy - not, as Keynes and so many of his contemporaries believed, evidence of impotence”. Coupled with his skepticism of central bank finetuning and discretion due to long and variable lags, he argued that the role of monetary policy should be to “prevent money itself from being a major source of economic disturbance” as well as to “provide a stable background for the economy”, with the idea of “offsetting major disturbances in the economic system” being “far more limited than is commonly believed”. Certainly, it was not possible for monetary policy to take advantage of the Phillips curve and peg real values like output at a level above the natural level. That is, “there is always a temporary trade off between inflation and unemployment; there is no permanent trade off”. This was because “the temporary trade-off comes not from inflation per se, but from unanticipated inflation, which generally means, from a rising rate of inflation”. As such, the best thing monetary policy could do was to follow a clear rule about the money growth rate which would keep the path of the price level stable and predictable.

The monetarist counterrevolution did successfully land a few powerful blows on the neoclassical synthesis. It highlighted the importance of monetary policy as well as its limitations. The revival of the quantity theory in Friedman’s 1970 The Counterrevolution in Monetary Theory reiterated that “inflation is always and everywhere a monetary phenomenon in the sense that it is and can be produced only by a more rapid increase in the quantity of money than in output”.

Friedman’s work with Schwartz provided an account of the Great Depression that contradicted the Keynesian story, one which is still taken as canon to this day. The Keynesian story invoked the idea that the lowering of interest rates implied expansionary monetary policy, and so the lack of a recovery meant fiscal policy was needed. What they showed was that monetary policy was actually still quite contractionary - in Wicksellian terms, the natural rate had fallen faster than central banks had lowered the market rate. And therein lies one of the costs of forgetting about the Wicksellian difference.

As for the expectations-augmented Phillips curve, although again not entirely new given Wicksell’s observations regarding unanticipated inflation, it was a powerful rejoinder to the neoclassical consensus. According to Greg Mankiw and Ricardo Reis in their 2018 “Friedman’s Presidential Address in the Evolution of Macroeconomic Thought”, his use of this to predict the 1970s stagflation where inflation rose while output stagnated was “one of the greatest successes of out-of-sample forecasting by a macroeconomist”.

However, monetarism did not replace the neoclassical synthesis. For one, Friedman conceded in his 1970 “A Theoretical Framework for Monetary Analysis”, where he translated monetarism into IS-LM language, that “the basic differences among economists are empirical not theoretical”. The dramatic change in the velocity of money that soon followed would limit the validity of his views too. And most importantly, it takes a model to beat a model - Friedman’s unwillingness to do general equilibrium macroeconomic modelling means that monetarism did not provide a sufficient alternative to take over. Nonetheless, it was clear that the neoclassical synthesis faced flaws, spurring on the New Classicals to do the formal modelling that would replace it.

Counterrevolution 2: The New Classicals

The research program of the New Classicals represented the birth of modern macroeconomics as it is now done - in that sense, it was not so much a counterrevolution as a Kuhnian revolution in itself. Although its contributions were incredibly wide-ranging, the core message the New Classicals offered was methodological: rather than having aggregate relationships and expectations described in an ad hoc fashion, the New Classicals argued that all of this ought to be derived from basic microeconomic principles in general equilibrium.

The groundwork for the New Classical revolution was laid by John Muth in 1961 with his “Rational Expectations and the Theory of Price Movements”. This paper introduced the idea of rational expectations i.e. the idea that consumers and firms formed expectations in a manner that would be “essentially the same as the predictions of the relevant economic theory”. In other words, people’s expectations were consistent with the model and they could not be systematically wrong. Notice that with adaptive expectations, it was possible for people to be permanently fooled since their expectations were entirely backwards looking. In the case of inflation, rational expectations meant that expected inflation would equal actual inflation plus an unpredictable error term $\epsilon$.

\[E(\pi)=\pi+\epsilon\]

The importance of this was underlined by Robert Lucas’s 1976 “Econometric Policy Evaluation: A Critique”. In this canonical paper, he put forward what is now known as the Lucas critique. He noted that in the neoclassical synthesis, “theorists suggest forms for consumption, investment, price and wage setting functions separately; these suggestions, if useful, influence individual components … The aggregate behaviour of the system then is whatever it is”. The problem was that using these models to predict outcomes when policy changed involved assuming people’s decisions did not vary with policy choices. As Lucas put it, “everything we know about dynamic economic theory indicates that this presumption is unjustified … to assume stability under alternative policy rules is thus to assume that agents’ views about the behaviour of shocks to the system are invariant under changes in the true behavior of these shocks”. Insofar as “optimal decision rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models”.

Methodologically, this was a big change, since it implied that the only way to avoid this was by modelling from microfoundations - that is, structural parameters which were invariant to policy i.e. the tastes and technology available to consumers and firms as well as the constraints which implied tradeoffs in their preferences. So instead of having a consumption function that related output to consumption mechanically, it was necessary to model how consumers optimised intertemporally. Instead of having a Phillips curve that related output to price mechanically, it was necessary to model how firms might set prices based on their expectations of the future. In this way, Lucas set up the modern method of macroeconomic modelling: dynamic stochastic general equilibrium models. Dynamic meant that the model occurred over time with agents being forward-looking, stochastic meant there were random shocks, general meant considering the entire economy simultaneously and equilibrium meant thinking about how consumers and firms optimised their behaviour for their objectives subject to constraints.

Lucas deployed the insights of rational expectations in his seminal 1972 paper, “Expectations and the Neutrality of Money”, providing a general equilibrium explanation that reconciled the Samuelson-Solow Phillips curve with the Friedman-Phelps version. He would go on to build the first business cycle model with these features in his 1973 “Some International Evidence on Output Inflation Tradeoffs”. The idea in both was the Lucas Islands model, which was inspired by the work of Phelps et al. in their 1970 Microeconomic Foundations of Employment and Inflation. In short, there are producers on individual islands who can only see the nominal prices of their goods but not the general price level. As such, they would have to go through a process of signal extraction, trying to figure out if a rise in nominal prices was due to nominal shocks i.e. a rise in the general price level or due to real shocks i.e. a rise in demand for their good. In the former case, they shouldn’t do anything, but in the latter case, they ought to raise real output. Even with rational expectations and no money illusion, this imperfect information meant that they would produce more when faced with a nominal shock, producing an output and inflation relationship. However, their rational expectations meant that they could not be tricked by a systematic central bank policy of using nominal shocks to drive up real output. The consequence of making clear the difference between anticipated and unanticipated inflation is that there would be a temporary but not permanent Phillips curve in the form below, where output is a function of its natural level, of the deviation of inflation from expectations and of random shocks.

\[Y = Y^\ast + P(\pi - E(\pi)) + \epsilon\]
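
Here is a toy Monte Carlo sketch of that logic. The slope of the supply curve, the shock sizes and the policy choices are all illustrative assumptions, but the point survives: with expectations equal to the systematic part of policy, only the unanticipated component moves output, so cranking up systematic inflation buys nothing on average.

```python
import random

# A toy Monte Carlo version of the Lucas supply curve under rational
# expectations. The slope b, shock sizes and policy choices are illustrative
# assumptions. Expected inflation equals the systematic part of policy, so
# only the surprise component moves output.

random.seed(0)
b, Y_star = 2.0, 100.0

def average_output(systematic_inflation, trials=100_000):
    total = 0.0
    for _ in range(trials):
        surprise = random.gauss(0, 0.01)        # unanticipated monetary shock
        pi = systematic_inflation + surprise
        expected_pi = systematic_inflation      # rational expectations
        eps = random.gauss(0, 0.5)              # real shock
        total += Y_star + b * (pi - expected_pi) + eps
    return total / trials

print(average_output(0.02))   # ~100 with low systematic inflation
print(average_output(0.10))   # ~100: more systematic inflation buys no extra output
```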

He would go on to build a more complete business cycle model in this vein with his 1975 “An Equilibrium Model of the Business Cycle” and expound upon that in his 1977 “Understanding Business Cycles”. This was a model of business cycles built on Walrasian general equilibrium with microfoundations for the behaviour of rational and maximising agents. And crucially, Lucas described “the central problem in macroeconomics”, given that “business cycles are all alike”, as finding a framework of showing monetary non-neutrality without the existence of persistently unexploited opportunities for mutual gain. By producing business cycle fluctuations without the need to resort to non-clearing markets or disequilibrium, Lucas did exactly that.

This combination of rational expectations, the natural rate hypothesis and general equilibrium microeconomics spawned the rest of the New Classical literature, concentrated among “freshwater” universities near the Great Lakes i.e. the likes of Chicago, Northwestern, Pennsylvania, Rochester and Carnegie-Mellon. For one, Thomas Sargent and Neil Wallace wrote “Rational Expectations, the Optimal Monetary Instrument and the Optimal Money Supply Rule” in 1975, where they argued for the policy ineffectiveness proposition i.e. monetary policy could not systematically manage output and employment levels, since any systematic policy actions would be anticipated by rational agents. Another contribution would come in 1977 by Finn Kydland and Edward Prescott, who published “Rules Rather than Discretion”. In this, they noted that even if central banks had perfect information about economic shocks and how to deal with them, the problem of time inconsistency meant they might not succeed. For example, consider a central bank which promises to lower inflation to a certain level - in the next period, once people’s inflationary expectations are lowered, it has an incentive to renege on its promise and exploit the Phillips curve relationship. As such, rational agents anticipating this will not believe that the promise to lower inflation is credible. The implication of this for central bank policy was explored by Robert Barro and David Gordon in two 1983 papers, “Rules, Discretion and Reputation in a Model of Monetary Policy” and “A Positive Theory of Monetary Policy in a Natural Rate Model”. They pushed for the use of systematic rules that would constrain central banks, ensuring there wouldn’t be this inflationary bias where central banks attempted to exploit the use of inflation in the short run.
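
A textbook-style sketch of the time inconsistency problem, under assumptions that are mine rather than Kydland and Prescott's or Barro and Gordon's (a quadratic loss function, a Lucas-style supply curve and made-up parameters): under discretion the equilibrium has a positive inflation bias, yet output ends up at its natural level anyway, which is exactly why a binding rule looks attractive.

```python
# A textbook-style sketch of the inflation bias under discretion. The loss
# function, supply curve and parameters are illustrative assumptions, not
# anything taken from the papers above.
#   Loss: pi**2 + lam * (Y - k * Y_star)**2, with k > 1 (bank wants Y above Y*)
#   Supply: Y = Y_star + b * (pi - E(pi))

b, lam, k, Y_star = 1.0, 0.5, 1.02, 1.0

def best_response(expected_pi):
    """Inflation the bank picks once expectations are fixed (from the first-order condition)."""
    return (lam * b**2 * expected_pi + lam * b * (k - 1) * Y_star) / (1 + lam * b**2)

# Rational agents anticipate the bank's choice, so equilibrium is the fixed
# point where E(pi) equals the bank's best response to E(pi).
expected_pi = 0.0
for _ in range(200):
    expected_pi = best_response(expected_pi)

pi = best_response(expected_pi)
Y = Y_star + b * (pi - expected_pi)
print(pi, Y)   # positive inflation bias (~1% here), yet output stays at its natural level
```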

I think it is incredibly difficult to overstate how much of a change the New Classicals represented - if there was ever a father of modern macroeconomics, it is Lucas. In his 1991 “Macroeconomics in Disarray”, Greg Mankiw described the New Classicals as “young Turks intent on destroying the consensus”, who “led a revolution in macroeconomics that was as bloody as any intellectual revolution can be”.

As we have seen, many of these ideas aren’t necessarily new in their entirety: the natural rate hypothesis, the importance of expectations in the Phillips curve and even the policy ineffectiveness proposition naturally follow from the commentary of Wicksell and Friedman. But it is the way of doing macroeconomics that was revolutionary!

Firstly, the formalisation of general equilibrium macroeconomics was a rejection of the increasing tendency within the neoclassical synthesis to go equation by equation without treating the system as a whole, due to the focus on large macroeconometric models. Secondly, the use of microeconomic foundations represented a complete rejection of the more ad hoc neoclassical approach. Thirdly, the idea of rational expectations provided a formal basis for including expectations within macroeconomic models that was consistent across all of its components. Combined together, they represent an entirely new paradigm in terms of writing down models and producing theory.

And they weren’t quiet about it. In 1978, Lucas and Sargent turned up to a conference at the Federal Reserve Bank of Boston. It was there, in the heart of the neoclassical hegemony, that they gave their contribution, titled “After Keynesian Macroeconomics”. This piece remains one of the harshest and fiercest rebuttals of Keynesian macroeconomics to date. They declared the notions that Keynesian predictions were “wildly incorrect” and that “the doctrine on which they were based is fundamentally flawed, are now simple matters of fact”. Their intent was to show that these flaws were “fatal” - in other words, “modern macroeconomic models are of no value … and that this condition will not be remedied by modifications along any line”. They spoke of the “spectacular failure of the Keynesian models in the 1970s” as “econometric failure on a grand scale”. In that sense, “Keynesian policy recommendations have no sounder basis, in a scientific sense, than recommendations of non-Keynesian economists or, for that matter, non-economists”. The job now was of “sorting through the wreckage”. As Lucas commented in a 1998 interview with Brian Snowdon and Howard Vane, they were “in the enemy camp and were trying to make a statement that we weren’t going to be assimilated”.

Unsurprisingly, the old guard didn’t take this sitting down. Even at the 1978 conference, Benjamin Friedman (no relation to Milton) responded by claiming that Lucas and Sargent had “declined to answer substantive questions raised about their equilibrium business cycle theory”, decrying their contribution as merely an “unfocused rhetorical attack”. Solow joined in too, noting that they seemed “to regard the postulate of optimising behaviour as self-evident and the postulate of market-clearing behaviour as essentially meaningless … the one that they think is self-evident I regard as meaningless and the one that they think is meaningless, I regard as false”. He continued this rejection of New Classical economics by noting that “Lucas and Sargent say after all there is no evidence that labour markets do not clear, just the unemployment survey. That seems to me to be evidence”. And Tobin commented in his 1980 Asset Accumulation and Economic Activity that “obviously, we do not live in an Arrow-Debreu world”, so a “literal application of the market-clearing idea” as implied by the New Classicals was a “severe draft on credulity”.

Nonetheless, this polemical approach continued in Lucas’s 1980 “The Death of Keynesian Economics”. He posited that “Keynesian economics is dead” in a sociological sense. That is, “one cannot find a good, under-40 economist who identifies himself and his work as Keynesian. Indeed, people even take offence if referred to in this way. At research seminars, people do not take Keynesian theorising seriously any more – the audience starts to whisper and giggle to one another”. And he was in many ways correct for a while - but it wasn’t New Classical economists who took over.

Counterrevolution 3: The Real Business Cycle Theorists

Although the methodological innovations of the New Classicals were important, it was soon clear that business cycle models of monetary misperceptions were unable to match the magnitudes of real world fluctuations. And it seemed absurd that rational individuals would not simply find ways to figure out the growth of the money supply, given that the information was publicly available. So the need to account for business cycle fluctuations remained, and other freshwater economists took on this challenge. Charles Nelson and Charles Plosser argued in their 1982 “Trends and Random Walks in Macroeconomic Time Series” that “macroeconomic models that focus on monetary disturbances as a source of purely transitory fluctuations may never be successful in explaining a large fraction of output variation and that stochastic variation due to real factors is an essential element of any model of macroeconomic fluctuations”. Their argument relied on an empirical analysis of macroeconomic time series - if real shocks were unimportant, we ought to see output and other variables return to their trend levels after a disturbance. Instead, they found that many shocks were persistent and shifted the trend level itself. Buttressed by the very real nature of the 1970s oil shocks, real disturbances were put back at the forefront of business cycle analysis. Kydland and Prescott would be the first to model this, publishing “Time to Build and Aggregate Fluctuations” in 1982. They were quickly followed by John Long and Charles Plosser, in their 1983 “Real Business Cycles”. And thus real business cycle theory kicked off.
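
To see the distinction Nelson and Plosser were drawing, it helps to write down the two kinds of processes (a minimal sketch, not their exact specification). If output is trend-stationary, shocks wash out and output returns to its deterministic trend; if it contains a unit root, every shock permanently shifts the path:

\[y_t = a + bt + \varepsilon_t \qquad \text{(trend-stationary: shocks are transitory)}\] \[y_t = \mu + y_{t-1} + \varepsilon_t \qquad \text{(random walk with drift: shocks are permanent)}\]

Their finding that most US macroeconomic series looked far more like the second process than the first was what put real, permanent disturbances back on the table.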

The basic idea behind Kydland and Prescott’s paper was that exogenous changes in technology could provide an impulse for fluctuations. These would be amplified by the lags in the investment process, the desire by workers to substitute their labour across time and the desire by consumers to smooth their consumption. The result was a model which matched the stylised facts and replicated the real world data in the US between 1950 and 1975. Without any sort of rigidities or frictions or the need for money, they had managed to produce a realistic-looking model of the business cycle. However, it is worth noting that they rejected traditional econometric tests as those “would have resulted in the model being rejected”, instead arguing for calibration, where the goal was simply to figure out which parameter values allowed the model to best fit the data. And when Prescott went on to integrate this into the Solow growth model in his 1986 “Theory Ahead of Business Cycle Measurement”, with the technological shocks being ones which affected the marginal product of labour, the title itself is a testament to their view of econometric measurement. The implications of RBC theories were enormous - if true, they meant that involuntary unemployment wasn’t a thing, that all business cycles were efficient and that the role of government was simply to improve technological progress and not to stabilise the business cycle.
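
Stripped of the time-to-build detail, the core of this class of models is a stochastic growth problem along the following lines (a minimal sketch of the standard RBC setup rather than Kydland and Prescott’s exact model): a representative household chooses consumption, labour and next period’s capital to maximise expected utility, subject to a technology hit by persistent productivity shocks.

\[\max \; E_0 \sum_{t=0}^{\infty} \beta^t u(c_t, 1-l_t)\] \[\text{s.t.} \quad c_t + k_{t+1} = z_t k_t^{\alpha} l_t^{1-\alpha} + (1-\delta)k_t, \qquad \ln z_{t+1} = \rho \ln z_t + \varepsilon_{t+1}\]

Calibration then amounts to choosing $\beta$, $\alpha$, $\delta$ and $\rho$ from microeconomic evidence and long-run averages, simulating the model, and asking whether the moments of output, consumption, investment and hours resemble those in the data.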

It was soon clear that RBC theory, at least in its simplest form, could not possibly be accurate. Some of this is common sense - the Great Depression just wasn’t everyone deciding to go on holiday, and the microeconomic evidence on labour supply just didn’t match the theory. Technological diffusion is slow, and it’s not really plausible that society suddenly loses some technology. Money is very much non-neutral, which just wasn’t accounted for. Inflation is often countercyclical to output, unlike the procyclical implications of RBC models. And the persistence of shocks could be accounted for by other mechanisms, such as the ideas of hysteresis and learning-by-doing. The former is the notion that after a long negative shock, the productivity of workers may have worsened due to a long period of unemployment, while the latter is about the fact that productivity often rises as a result of workers and firms producing a lot and getting better at it. But in spite of all of these empirical difficulties, real business cycle theorists did have lasting contributions. The most notable is the idea put forward by Thomas Cooley in his 1995 book Frontiers of Business Cycle Research, which aimed at summarising the RBC research program: “growth and fluctuations are not distinct phenomena to be studied with separate data and different analytical tools”.

As with the New Classicals, the RBC theorists provided a huge upgrade in methodological firepower. And the basic Kydland and Prescott model would be the core of macroeconomic modelling going forward - in that sense, it surpassed the New Classicals, whose information imperfection models of the business cycle would not be taken further. Indeed, Lucas conceded in his 2001 “Professional Memoir” that “the Bald Peak conference … marked the beginning of the end for my attempts to account for the business cycle in terms of monetary shocks”. This was the 1978 conference where Kydland and Prescott’s model was presented. And consequently, he confessed to John Cassidy in a 1996 New Yorker interview that “monetary shocks just aren’t that important”. So perhaps the best analogy for what RBC theory did for the New Classicals is what the neoclassical synthesis did for Keynesians: it transformed a few revolutionary ideas into an entire field.

Salvaging Keynes: The New Keynesians

So given the implausibility of RBC theory, did Keynesians just go quietly into the night? No. While the methodological implications of the New Classicals and the RBC theorists were damning for Old Keynesian theory, a new generation of researchers kept up the criticism of RBC theory while looking for ways to revive Keynesian insights about recessions, keeping rational expectations and microeconomic foundations. They were located in coastal “saltwater” institutions, such as Harvard, Stanford, Yale and Berkeley. And whatever had been seen so far in the disagreements over the New Classical counterrevolution paled in comparison to what arose in this period - this remains by far the most bitter and vicious period in macroeconomics to this day, with freshwater and saltwater economists duking it out from the early 1980s till the late 1990s.

One of the first salvos came from Larry Summers’s 1986 response to Prescott’s paper arguing for theory over measurement - in this piece titled “Some Skeptical Observations on Real Business Cycle Theory”, he gave his view that “real business cycle models of the type urged on us by Prescott have nothing to do with the business cycle phenomena observed in the United States or other capitalist economies”. The fact that it could be calibrated to fit the facts was unimportant, since “extremely bad theories can predict extremely well”. Indeed, “many theories can approximately mimic any given set of facts; that one theory can does not mean that it is even close to right”. Summers suggested that “the image of a big loose tent flapping in the wind comes to mind” - that is, RBC theory being tied to reality by a few calibrated parameters did not make it an accurate theory. More importantly, it simply “defies credulity” not to think about exchange failures in explaining large-scale recessions - in that sense, “nothing could be more counterproductive in this regard than a lengthy professional detour into the analysis of stochastic Robinson Crusoes”.

Referring to New Classicals and RBC theorists under the same broad umbrella, Alan Blinder followed suit, declaring in his 1988 “The Fall and Rise of Keynesian Economics” that “the ascendancy of new classicism in academia was instead a triumph of a priori theorising over empiricism, of intellectual aesthetics over observation and, in some measure, of conservative ideology over liberalism”. Solow rejoined the fray too with his 1987 Nobel Lecture/1988 paper titled “Growth Theory and After”, noting that the setup of RBC models means that “any kind of market failure is ruled out from the beginning by assumption … I find none of this convincing”.

Plosser duly responded, fending off these complaints that freshwater macroeconomics was too idealised in his 1989 “Understanding Real Business Cycles”. He said that “it is logically impossible to attribute an important portion of fluctuations to market failure without an understanding of the sorts of fluctuations that would be observed in the absence of the hypothesised market failure. Keynesian models started out asserting market failures (like unexplained and unexploited gains from trade) and thus could offer no such understanding”.

So the New Keynesians had their work cut out: they needed to find the rigidities which could explain rather than simply assert market failures. In pursuing that, they focused on microfounding four types of rigidity: nominal wages, nominal prices, real wages and real prices.

The first generation of New Keynesians looked at nominal wages, with Stanley Fischer’s 1977 “Long-Term Contracts, Rational Expectations and the Optimal Money Supply Rule”, Edmund Phelps and John Taylor’s “Stabilising Powers of Monetary Policy under Rational Expectations” in the same year, as well as Taylor’s 1980 “Aggregate Dynamics and Staggered Contracts”. The basic idea in all of these papers was that even with rational expectations, there could be rigidities if wages were determined via longer-term contracts rather than spot markets. While spot markets might be appropriate where buyers and sellers could be anonymous i.e. financial assets, or where the product was homogeneous i.e. agricultural produce, neither was true of labour. Instead, the cost of transactions and negotiations meant that it was easier to stick to a contract, creating nominal wage rigidity.

This was followed by explanations of nominal price rigidity. One important suggestion was of menu costs, by Mankiw in his 1985 “Small Menu Costs and Large Business Cycles”, by George Akerlof and Janet Yellen in two 1985 papers, “A Near Rational Model of the Business Cycle, with Wage and Price Inertia” and “Can Small Deviations from Rationality Make Significant Differences to Economic Equilibria?”, as well as by Julio Rotemberg’s 1987 The New Keynesian Microfoundations. The notion was that it was costly to reset prices, not only in the literal physical effort required to change them, but also because it was expensive to spend time renegotiating purchase contracts with suppliers and sales contracts with customers. Of course, price-setting only makes sense as an idea if firms have some sort of market power - as such, New Keynesians modelled this by assuming monopolistic competition with the help of the technical work in Avinash Dixit and Joseph Stiglitz’s 1977 “Monopolistic Competition and Optimum Product Diversity”. What Olivier Blanchard and Nobuhiro Kiyotaki outlined in 1987 in “Monopolistic Competition and the Effects of Aggregate Demand” was that this provided a strong foundation for explaining how small nominal price rigidities could affect real output, since the private incentive for changing prices differed from the social cost of not having price changes.
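
The Dixit-Stiglitz device, in its now-standard form (a sketch of the textbook setup rather than their original notation), gives each firm a downward-sloping demand curve by having consumers value a CES bundle of differentiated goods:

\[C = \left(\int_0^1 c_i^{\frac{\epsilon-1}{\epsilon}} di\right)^{\frac{\epsilon}{\epsilon-1}}, \qquad c_i = \left(\frac{p_i}{P}\right)^{-\epsilon} C, \qquad P = \left(\int_0^1 p_i^{1-\epsilon} di\right)^{\frac{1}{1-\epsilon}}\]

Because each firm faces a demand elasticity of $\epsilon$, it optimally charges a markup over marginal cost - so prices are choices rather than givens, which is exactly the market power that menu-cost stories require.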

The importance of these nominal rigidities did not seem sufficient, but as Laurence Ball and David Romer noted in their 1990 “Real Rigidities and the Non-Neutrality of Money”, they could be amplified by real rigidities into realistic shocks. In terms of real wage rigidity, there were three main theories. The first was the idea of implicit contracts, put forward independently in Martin Baily’s 1974 “Wages and Employment under Uncertain Demand”, Donald Gordon’s 1974 “A Neoclassical Theory of Keynesian Unemployment” as well as Costas Azariadis’s 1975 “Implicit Contracts and Underemployment Equilibria”. Because firms had access to capital and insurance markets, they were better able to weather economic fluctuations than workers - consequently, workers would be willing to accept a stable wage which was on average lower, as a way to insure against a more variable income stream. The second was the insider-outsider theory proposed by Assar Lindbeck and Dennis Snower in their 1988 The Insider-Outsider Theory of Employment and Unemployment, whereby the costs of hiring new employees ensured that existing employees had some insider power to extract rents in the form of higher wages. And the third was the efficiency wage hypothesis as summarised by Akerlof and Yellen in their 1986 Efficiency Wage Models of the Labour Market, which posited that labour productivity is dependent on wages. This could be for a variety of reasons. One suggested by Andrew Weiss’s 1980 “Job Queues and Layoffs in Labor Markets with Flexible Wages” was adverse selection: firms are unwilling to hire workers willing to take lower wages, since that signals they aren’t very qualified. Another was shirking a la Carl Shapiro and Joseph Stiglitz’s 1984 “Equilibrium Unemployment as a Worker Discipline Device”: the principal-agent problem of monitoring worker productivity could only be ameliorated if firing was a credible threat i.e. if workers could not earn those wages elsewhere. And a third explanation suggested by Akerlof in his 1982 “Labour Contracts as Partial Gift Exchange” was the notion of fairness: workers who feel screwed over by low wages will be lazy. All of these real wage rigidities meant that insofar as “in equilibrium an individual firm’s production costs are reduced if it pays a wage in excess of market clearing … there is equilibrium involuntary unemployment”.
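
The common logic behind these efficiency wage stories can be put in one line (a textbook sketch rather than any single paper’s model): if worker effort $e(w)$ rises with the wage, a firm minimising its cost per unit of effective labour, $w/e(w)$, sets the wage where the elasticity of effort with respect to the wage equals one, rather than cutting pay to the market-clearing level:

\[\min_w \frac{w}{e(w)} \quad\Rightarrow\quad \frac{w\, e'(w)}{e(w)} = 1\]

At that wage the firm has no incentive to cut pay even when unemployed workers would work for less - which is precisely how involuntary unemployment can persist in equilibrium.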

And for real price rigidities, several were suggested. One outlined in “Markups and the Business Cycle” by Julio Rotemberg and Michael Woodford in 1991 was that monopolistically competitive firms faced more competition in booms - by contrast, they were able to engage in implicit collusion more during recessions. As such, the markup of price above marginal cost was countercyclical, resulting in a real friction. Another would be the idea of thick markets, whereby there are lower search costs within markets during booms due to the market being more packed - again, there would be a countercyclical marginal cost that would cause a real rigidity. A further option might be the fact that it is more expensive to seek external financing than internal financing, due to the information asymmetry between borrowers and lenders - in recessions, the availability of internal funds is limited and so the costs of borrowing, as Ben Bernanke and Mark Gertler argued in their 1989 “Agency Costs, Net Worth and Business Fluctuations”, would rise as firms shifted to external financing. And finally, Stiglitz posited that people might judge quality based on the price in his 1987 “The Causes and Consequences of the Dependence of Quality on Price”, resulting in another real rigidity against price flexibility.

In Mankiw and Romer’s 1991 book, New Keynesian Economics, they collated all of these aspects together, arguing that “a distinguishing feature of the new Keynesian economics” was the interaction between real imperfections, where desired relative prices were not perfectly responsive to changes in demand, and nominal rigidities, where nominal prices were not perfectly responsive to desired relative prices.

So with their own research program blossoming, the New Keynesians doubled down on their criticism of the RBC theorists. Mankiw capitalised on his New Keynesian work by publishing “A Sticky Price Manifesto” with Ball in 1994, where they described “two kinds of macroeconomists … One kind believes that price stickiness plays a central role in short-run economic fluctuations … The other kind does not”. Those who did not conform to the traditional view that price rigidities mattered were “heretics”, and they claimed that “a macroeconomist faces no greater decision than whether to be a traditionalist or a heretic. This paper explains why we choose to be traditionalists”.

The RBC theorists were by no means quiet about their discontent. Lucas produced a furious and scathing commentary piece on Ball and Mankiw’s manifesto, asking “why do I have to read this? This paper contributes nothing - not even an opinion or belief - on any of the substantive questions of macroeconomics. What fraction of US real output variability in the postwar period can be attributed to monetary instability? … a very difficult one. Ball and Mankiw have nothing to offer on this question, beyond saying, trivially, that they believe the answer is a positive number and suggesting, falsely and dishonestly, that others have asserted it is zero. Yet monetary non-neutrality is the intended subject of their paper! One can speculate about the purposes for which this paper was written - a box in the Economist? - but obviously it is not an attempt to engage other macroeconomic researchers in debate over research strategies”.

Barro would also get involved, with a 1989 paper titled “New Classicals and Keynesians, or the Good Guys and the Bad Guys”. With the title itself seething with pettiness, it is no surprise that the rest of the piece is just as hard-hitting. With respect to New Keynesianism, he said “it was hard to see how these ideas constitute a well-defined area of research that will actually rehabilitate Keynesian analysis”. As such, “macroeconomic research seems to be evolving into two camps: could it be the good guys versus the bad guys”?

In light of these responses, Mankiw proceeded to keep up the critiques with his 1989 “Real Business Cycles: A New Keynesian Perspective”, where he said that “real business cycle theory does not provide an empirically plausible explanation of economic fluctuations”. This was because “if society suffered some important adverse technological shock, we would be aware of it” - by contrast, “it seems undeniable that the level of welfare is lower in a recession”, making RBC theory’s explanations and implications dubious. In the same year, Taylor published his “The Evolution of Ideas in Macroeconomics”, where he referred to “this extreme view” of RBC theory as “far from reality”. He reiterated this view in his 2007 “Thirty Five Years of Model Building for Monetary Policy Evaluation”, where he described this period of domination by RBC theorists as a “dark age”.

Tobin piled on too in 1996, describing RBC theory as the “elegant fantasies” of “Robinson Crusoe macroeconomics” in his book Full Employment and Growth. And econometrician Chris Sims, who was no fan of the neoclassical synthesis (saying it “was corrupt” and “deserved its fate” in his 2011 Nobel Lecture), would nevertheless posit that “it is fair to say that most RBC research has ignored most of the known facts about the business cycle”, in his 1996 “Macroeconomics and Methodology”.

The weight of the rebuttal was such that by 1994, even Lucas had accepted in his “Review of Milton Friedman and Anna Schwartz’s A Monetary History of the United States” that RBC theory was best viewed not as “a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not”. This certainly did not satisfy everyone - Maurice Obstfeld and Kenneth Rogoff expressed their dissatisfaction in their 1996 textbook Foundations of International Macroeconomics: “a theory of business cycles that has nothing to say about the Great Depression is like a theory of earthquakes that explains only small tremors”.

Nonetheless, it was clear by the mid-1990s that the New Keynesian research program, despite the best efforts of RBC theorists, was here to stay. But with such a huge array of possible frictions, it was still unclear what the agenda would be going forward.

Resuscitating Growth: Endogenous Growth Theory

Meanwhile, what were growth theorists doing in all of this? The insights of the New Classicals and the Real Business Cycle theorists had shifted the focus in macroeconomics away from stabilisation policy and towards the long-run growth of an economy’s productive potential. As Lucas put it in his 1984 Marshall Lecture/1988 paper “On the Mechanics of Economic Development”, “the consequences for human welfare involved in questions like these are simply staggering: once one starts to think about them, it is hard to think about anything else”. Coupled with this was the inability of the neoclassical growth models to explain growth in the long run. In a theoretical sense, neoclassical models implied that output per capita would converge to a steady state, with the only driver of long-run growth being Total Factor Productivity, which was taken as exogenous. But crucially, there was also an empirical lacuna - it was increasingly clear that the neoclassical prediction of convergence between countries was not occurring and many developing countries were actively falling behind in living standards. As such, the field of growth was ripe for research.

Paul Romer would fire the opening shot with his 1986 “Increasing Returns and Long Run Growth”. He posited that technological progress occurred endogenously as a result of the learning-by-doing process - that is, the investment by firms in capital had positive spillover benefits for TFP. And since knowledge spills over to others, this would raise the TFP level in the economy as a whole, offsetting diminishing returns so that the economy did not settle into a steady state. Meanwhile, Lucas’s approach in his 1988 paper was to include human capital, which when combined with physical capital was not subject to diminishing returns.
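
A stripped-down way to see the Romer mechanism (a textbook-style AK sketch, not his exact formulation) is to let each firm’s productivity depend on the economy-wide capital stock, so that diminishing returns at the firm level wash out in the aggregate:

\[y_i = A k_i^{\alpha} \bar{k}^{\,1-\alpha} \quad\Rightarrow\quad y = Ak \;\text{in symmetric equilibrium}, \qquad \frac{\dot{k}}{k} = sA - \delta\]

Because the aggregate marginal product of capital no longer falls as capital accumulates, growth need not peter out into a steady state, and anything that raises the savings rate raises the growth rate itself.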

The second generation of these endogenous growth models would come from the endogenous process of TFP creation, rather than simply looking at how side benefits from broadly-defined capital accumulation could provide constant returns to scale. Romer’s 1990 “Endogenous Technological Change” reiterated that “once the cost of creating a new set of instructions has been incurred, the instructions can be used over and over again at no additional cost” - but unlike his first paper, this one emphasised the endogenous decision to engage in research and development as a way of creating new blueprints and ideas. Gene Grossman and Elhanan Helpman proposed that R&D could allow firms to produce an expanding variety of products in their 1991 Innovation and Growth in the Global Economy. And Philippe Aghion alongside Peter Howitt, in their 1992 “A Model of Growth Through Creative Destruction”, spoke of the notion of a quality ladder - that is, innovation to make better and better products. All three of these models of endogenous growth provided explanations for observed patterns of world growth, and they have broadly fared well against the facts.

Reunifying Macroeconomics: The New Neoclassical Synthesis

With the growth literature chugging along, let’s return to the question of business cycles. By the mid-1990s, we had the two parallel research programs of RBC theorists and of New Keynesians: could short-run macroeconomics be reunified as it had once been by the neoclassical synthesis? This was the goal of the new neoclassical synthesis. As Jordi Galí put it in his 2008 textbook Monetary Policy, Inflation and the Business Cycle, this NNS had a “core structure that corresponds to an RBC model on which a number of elements characteristic of Keynesian models are superimposed”.

The basic premise of the model was built on RBC theory: a utility-maximising representative agent optimising over consumption-savings and labour-leisure decisions across time, subject to their budget constraints and the production technology. What made it New Keynesian was that firms were taken as monopolistically competitive as per the Dixit-Stiglitz model, and only a fraction of firms could change their prices every period. This method of pricing was suggested by Guillermo Calvo in his 1983 “Staggered Prices in a Utility Maximising Framework” and was taken as a way to approximate the nominal price rigidity previously discussed. The fact of a utility-maximising representative agent meant that it was possible to talk about the welfare of the agent in relation to the optimal monetary policy. That’s exactly what Rotemberg and Woodford did in their 1997 “An Optimisation Based Econometric Framework for the Evaluation of Monetary Policy”. What followed was the widespread adoption of John Taylor’s rule as the canonical benchmark for monetary policy rules. In essence, what he suggested in his 1993 “Discretion Versus Policy Rules in Practice” was that central banks set their interest rate based on the natural rate, the deviation of output from its natural level and the deviation of inflation from its target level.
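
In its original 1993 calibration, the rule is strikingly simple: the policy rate responds one-for-one to inflation, plus half-weights on the inflation and output gaps (Taylor set both $r^\ast$ and $\pi^\ast$ at 2 per cent):

\[i_t = r^\ast + \pi_t + 0.5(\pi_t - \pi^\ast) + 0.5(y_t - y_t^\ast)\]

The crucial property is that the nominal rate moves more than one-for-one with inflation, so the real rate rises when inflation is above target - the so-called Taylor principle.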

This benchmark model of monopolistic competition, sticky prices and the Taylor rule would be summarised in Richard Clarida, Jordi Galí and Mark Gertler’s 1999 “The Science of Monetary Policy” as well as Woodford’s 2003 Interest and Prices. The sine qua non of this approach was the idea of inflation targeting described by Woodford, since the “efficient level of output is the same as the level of output that eliminates any incentive for firms on average to either raise or lower prices … it may well be more convenient for a central bank to concern itself simply with monitoring the stability of prices”. It would reach its pre-2008 apex in the form of Frank Smets and Raf Wouters’s large-scale New Keynesian DSGE model which they built in their 2007 “Shocks and Frictions in US Business Cycles”.

So on the eve of the financial crisis, we had reached and were building on a new neoclassical synthesis. Compared to the neoclassical synthesis of the 1960s, the main change in the long run was the fact that endogenous growth theory meant we didn’t just take TFP growth $g$ as given, but had an explanation for how it occurred.

\[Y^\ast = F(A,K,L)\] \[\dot{y}^\ast = f(g)\] \[r^\ast = MPK\] \[P^\ast = \frac{MV}{Y}\]

In the short run, the IS-LM-PC model was replaced block by block. The determination of real output is now linked to potential output (as the RBC theorists had reminded us), expectations of output (as the monetarists had noted with the permanent income hypothesis) and the Wicksellian difference. The real interest rate is set by central banks based on a Taylor rule (as proposed by the New Keynesians). And the price-setting equation takes on a New Classical flavour, being set as a function of expected inflation, the output gap and random shocks.

\[Y = G(Y^\ast,E(Y),r-r^\ast)\] \[r = L(r^\ast,Y-Y^\ast,\pi-\pi^\ast)\] \[\pi = P(E(\pi),Y-Y^\ast,v)\] \[i = r + \pi\]

This Post Takes the Old Classicals Seriously

Ultimately, we can see how all five research programs that followed the neoclassical synthesis contributed to the new neoclassical synthesis which dominates macroeconomics today. According to “What Do We Know about Macroeconomics that Fisher and Wicksell Did Not?”, which Blanchard wrote in 2000, there has been a “surprisingly steady accumulation of knowledge”. But in his 1999 “Revolution and Evolution in Twentieth Century Macroeconomics”, Woodford noted that “the degree to which there has been progress over the course of the century is sufficiently far from transparent”.

And as Keynes put it in 1936, “the world is ruled by little else” other than “the ideas of economists”. Since we “are usually the slaves of some defunct economist”, I want to go back further than the neoclassical synthesis and reflect on the progress we’ve made since the Old Classicals, because I think at each stage we’ve overestimated the degree to which things are new.

Let’s remind ourselves of the pinnacle macroeconomics had reached before Keynes. By then, it was already understood that output depended on factors of production and output growth depended on TFP improvements. The role of monetary policy was to stabilise prices and in doing so, output - it did so by affecting the Wicksellian difference. And it could not systematically trick people, because only unanticipated inflation drives real output. What have we added since? We have a better sense of how TFP growth actually occurs and a model to describe all of the long-run stuff. In a similar vein, we have actual models for the short run now, within which we can better model expectations and behaviour. From that, we have more theoretically robust reasons to target inflation, to use monetary rules over discretion, to follow the Taylor rule as well as to respect the natural rate and not exploit the Phillips curve - all of which allow us to achieve a reduction in price and output variability.

Notice that substantively, the ideas are quite similar. Indeed, Woodford described his approach in his 2003 textbook as neo-Wicksellian and as “an attempt to resurrect a view that was influential among monetary economists prior to the Keynesian revolution”. So the main differences are methodological: by doing the rigorous legwork, we have a much more serious modelling approach which we can quantitatively check - and hopefully this means we won’t forget insights as we did between the Old Classicals and now.

Friedman once said at a 1975 AEA presentation that it is hard “to specify what we … have learned in the past two hundred years”, commenting that “we have advanced beyond Hume in two respects only: first, we have a more secure grasp on the quantitative magnitudes involved; second, we have gone one derivative beyond Hume”. But unlike Hume, Wicksell already knew that it wasn’t the derivative of the price level i.e. inflation that mattered, it was the rate of change of inflation and whether it was anticipated. So to misquote Friedman, I contend that we have advanced beyond Wicksell in two respects only: first, we have a more secure grasp on the quantitative magnitudes involved; second, we have gone one layer of microfoundations beyond Wicksell.

[ economics  effortposts  ]