For example, can a government make the requisite distinction between demand and supply shocks, especially when such shocks are not independent but interdependent? (see Friedman, 1992).
Further support for keeping an open mind on the causes of aggregate instability is provided by G.M. Caporale (1993). In an investigation of business cycles in the UK, France, Finland, Italy, Norway, Sweden and West Germany, Caporale found that neither demand nor supply shocks alone could account for economic fluctuations. Recent empirical research by Temin (1998) also finds a variety of causes responsible for US business cycles. Temin suggests a four-way classification of the causes of US business cycles over the twentieth century, namely Domestic Real, Domestic Monetary, Foreign Real and Foreign Monetary. According to Temin’s data it appears that domestic causes far outweigh foreign shocks (16.5 v. 7.5), and real disturbances dominate monetary disturbances (13.5 v. 10.5). The real domestic shocks are diverse and include all manner of real demand disturbance. Temin concludes that ‘all four types of shock have acted as a source of the American business cycle’ and the dominant conclusion of his enquiry is that ‘the sources of instability are not homogeneous’. In his study of large recessions in the twentieth century, Dow (1998) reports three major findings. First, major recessions and growth slowdowns are mainly due to aggregate demand shocks. Second, these demand shocks can be identified; for example, the 1979–82 recession in the UK was largely the result of a decline in exports brought about by an appreciation of the exchange rate in response to the new monetary and fiscal regime under Prime Minister Margaret Thatcher. Third, recessions are not predictable given economists’ present state of knowledge.
A balanced conclusion from the above discussion would seem to point towards the advantage of taking an eclectic approach when analysing the causes of business cycles. There is no evidence that the business cycle is dead or that governments now have the ability and knowledge to offset the various shocks that buffet every economy. While governments can never hope to eliminate the business cycle, they should have the knowledge and capacity to avert another Great Depression or Great Inflation.
In a recent assessment of the contribution of REBCT to twentieth-century macroeconomics (see Snowdon, 2004a), Axel Leijonhufvud commented:
I think the legacy of Ed Prescott’s work will be in terms of the analytical machinery available to technically minded economists, although those techniques are not always appropriate and you cannot always apply them. In particular, you cannot meaningfully run these dynamic stochastic general equilibrium models across the great catastrophes of history and hope for enlightenment.
Even taking into account these important deficiencies, there is no doubt that the REBCT research programme has been ‘extremely influential’ and current work in the field is ‘a far cry’ from the initial representative agent competitive (and unique) equilibrium model constructed by Kydland and Prescott in the early 1980s (Williamson, 1996). However, while noting that new and controversial ideas are often the most fruitful, even when false, Hartley et al. (1998) conclude that ‘real business cycle models are bold conjectures in the Popperian mould and that, on the preponderance of the evidence, they are refuted’. At the same time, for those economists who reject the real business cycle view that stabilization policy has no role to play, there remains the theoretical difficulty of explaining in a coherent way why markets fail to clear.
Beginning in the late 1970s, and continuing thereafter, many economists have taken up this challenge of attempting to explain why the adjustment of prices and wages in many markets is sluggish. ‘The rationality of rigidities’ theme is a major feature of the research of new Keynesian economists and it is to this work that we turn in the next chapter.
EDWARD C. PRESCOTT
Edward Prescott was born in 1940 in Glens Falls, New York and obtained his BA (Maths) from Swarthmore College in 1962, his MS (Operations Research) from Case Institute of Technology in 1963 and his PhD from Carnegie-Mellon University in 1967. He was Assistant Professor of Economics at the University of Pennsylvania (1966–71), Assistant Professor (1971–2), Associate Professor (1972–5) and Professor of Economics (1975–80) at Carnegie-Mellon University, and Regents’ Professor at the University of Minnesota (1980–2003). Since 2003 he has been Professor of Economics at Arizona State University.
Professor Prescott is best known for his highly influential work on the implications of rational expectations in a variety of contexts and more recently the development of stochastic dynamic general equilibrium theory. He is widely acknowledged as a leading advocate of the real business cycle approach to economic fluctuations. In 2004 he was awarded, with Finn Kydland, the Nobel Memorial Prize in Economics for ‘contributions to dynamic macroeconomics: the time consistency of economic policy and the driving forces behind business cycles’. Among his best-known books are:
Recursive Methods in Economic Dynamics (Harvard University Press, 1989), co-authored with Nancy Stokey and Robert E. Lucas Jr, and Barriers to Riches (MIT Press, 2000), co-authored with Stephen Parente. His most widely read articles include: ‘Investment Under Uncertainty’, Econometrica (1971), co-authored with Robert E. Lucas Jr; ‘Rules Rather Than Discretion: The Inconsistency of Optimal Plans’, Journal of Political Economy (1977), co-authored with Finn Kydland; ‘Time to Build and Aggregate Fluctuations’, Econometrica (1982), co-authored with Finn Kydland; ‘Theory Ahead of Business Cycle Measurement’, Federal Reserve Bank of Minneapolis Quarterly Review (1986); ‘Business Cycles: Real Facts and a Monetary Myth’, Federal Reserve Bank of Minneapolis Quarterly Review (1990), co-authored with Finn Kydland; ‘The Computational Experiment: An Econometric Tool’, Journal of Economic Perspectives (1996), co-authored with Finn Kydland; and ‘Prosperity and Depression’, American Economic Review (2002).
We interviewed Professor Prescott in Chicago, in his hotel room, on 3 January 1998, while attending the annual conference of the American Economic Association.
Background Information
Where and when did you first study economics?
I first studied economics as a graduate student in 1963 at Carnegie-Mellon, which was then the Carnegie Institute of Technology. As an undergraduate I initially started out as a physics major – back then it was the Sputnik era and that was the glamorous field. I had two boring laboratory courses, which I didn’t enjoy, so I transferred into math.
What was it about economics that attracted you?
Having transferred from physics to math I first considered doing applied math – I got my degree in operations research. Then I went to an interdisciplinary programme and it seemed to me that the smartest, most interesting people were doing economics. Bob Lucas was a new assistant professor when I arrived at Carnegie-Mellon. My mentor, though, was Mike Lovell, a wonderful person.
Apart from Bob Lucas and Mike Lovell, did any of your other teachers stand out as being particularly influential or inspirational?
Sure. Morrie DeGroot, a great Bayesian statistician.
With respect to your own research which economists have had the greatest influence?
I would say Bob Lucas. Also Finn Kydland, who was a student of mine – perhaps my two most important papers were written with Finn [Kydland and Prescott, 1977, 1982].
For over 20 years you have had a very productive relationship with Finn Kydland. When did you first meet him?
My first position after leaving Carnegie-Mellon was at the University of Pennsylvania. When I came back to Carnegie-Mellon Finn was an advanced graduate student there, ready to work on research. We had a very small economics programme with approximately seven faculty members and seven students. It was a good programme where students worked quite closely with faculty members. Bob Lucas and I had a number of joint students – unlike Bob I didn’t scare the students [laughter].
Development of Macroeconomics
You have already mentioned that Bob Lucas was very influential on your own thinking. Which other economists do you regard as being the most influential macroeconomists since Keynes?
Well, if you define growth as being part of macroeconomics Bob Solow has to be up there. Peter Diamond, Tom Sargent and Neil Wallace have also been very influential.
What about Milton Friedman?
Well, I know Bob Lucas regards Friedman as being incredibly influential to the research programme in the monetary area. Friedman’s work certainly influenced people interested in the monetary side of things – Neil Wallace, for example, was one of Friedman’s students. But I’m more biased towards Neil Wallace’s programme, which is to lay down theoretical foundations for money. Friedman’s work in the monetary field with Anna Schwartz [1963] is largely empirically orientated. Now when Friedman talked about the natural rate – where the unit of account doesn’t matter – that is serious theory. But Friedman never accepted the dynamic equilibrium paradigm or the extension of economic theory to dynamic stochastic environments.
You were a graduate student at a time when Keynesianism ‘seemed to be the only game in town in terms of macroeconomics’ [Barro, 1994]. Were you ever persuaded by the Keynesian model? Were you ever a Keynesian in those days?
Well, in my dissertation I used a Keynesian model of business cycle fluctuations. Given that the parameters are unknown, I thought that maybe you could apply optimal statistical decision theory to better stabilize the economy. Then I went to the University of Pennsylvania. Larry Klein was there – a really fine scholar. He provided support for me as an assistant professor, which was much appreciated. I also had an association with the Wharton Economic Forecasting group. However, after writing the paper on ‘Investment under Uncertainty’ with Bob Lucas [Econometrica, 1971], plus reading his 1972 Journal of Economic Theory paper on ‘Expectations and the Neutrality of Money’, I decided I was not a Keynesian [big smile]. I actually stopped teaching macro after that for ten years, until I moved to Minnesota in the spring of 1981, by which time I thought I understood the subject well enough to teach it.
Business Cycles
The study of business cycles has itself gone through a series of cycles. Business cycle research flourished from the 1920s to the 1940s, waned during the 1950s and 1960s, before witnessing a revival of interest during the 1970s. What were the main factors which were important in regenerating interest in business cycle research in the 1970s?
There were two factors responsible for regenerating interest in business cycles. First, Lucas beautifully defined the problem. Why do market economies experience recurrent fluctuations of output and employment about trend? Second, economic theory was extended to the study of dynamic stochastic economic environments. These tools are needed to derive the implications of theory for business cycle fluctuations. Actually the interest in business cycles was always there, but economists couldn’t do anything without the needed tools. I guess this puts me in the camp which believes that economics is a tool-driven science – absent the needed tools we are stymied.
Following your work with Finn Kydland in the early 1980s there has been considerable re-examination of what are the stylized facts of the business cycle. What do you think are the most important stylized facts of the business cycle that any good theory needs to explain?
Business cycle-type fluctuations are just what dynamic economic theory predicts. In the 1970s everybody thought the impulse or shock had to be money and were searching for a propagation mechanism. In our 1982 Econometrica paper, ‘Time to Build and Aggregate Fluctuations’, Finn and I loaded a lot of stuff into our model economy in order to get propagation. We found that a prediction of economic theory is that technology shocks will give rise to business cycle fluctuations of the nature observed. The magnitude of the fluctuations and persistence of deviations from trend match observations. The facts that investment is three times more volatile than output, and consumption one-half as volatile, also match, as does the fact that most business cycle variation in output is accounted for by variation in the labour input. This is a remarkable success. The theory used, namely neoclassical growth theory, was not developed to account for business cycles. It was developed to account for growth.
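The relative-volatility facts Prescott cites here are computed by detrending each log series and comparing the standard deviations of the cyclical components. The following is a minimal sketch of those mechanics, using the Hodrick–Prescott filter with the conventional smoothing parameter (λ = 1600 for quarterly data); the series are synthetic placeholders calibrated to mimic the stylized ratios, not Kydland and Prescott’s actual data.

```python
# Illustrative computation of relative cyclical volatilities in the
# Kydland-Prescott style: HP-filter each log series, then compare the
# standard deviations of the cyclical components.
# The data below are synthetic placeholders, not actual US series.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
T = 200  # quarters

def ar1_cycle(sigma, rho=0.9):
    """AR(1) process standing in for a cyclical component."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + sigma * rng.standard_normal()
    return x

trend = 0.004 * np.arange(T)        # common deterministic growth trend
log_y = trend + ar1_cycle(0.009)    # log output
log_c = trend + ar1_cycle(0.005)    # log consumption (smoother)
log_i = trend + ar1_cycle(0.028)    # log investment (more volatile)

sd = {}
for name, series in (("y", log_y), ("c", log_c), ("i", log_i)):
    cycle, _ = hpfilter(series, lamb=1600)  # lambda = 1600 for quarterly data
    sd[name] = cycle.std()

# By construction of the placeholder data these ratios come out near the
# stylized values Prescott quotes: consumption roughly half as volatile
# as output, investment roughly three times as volatile.
print(f"sd(c)/sd(y) = {sd['c'] / sd['y']:.2f}")
print(f"sd(i)/sd(y) = {sd['i'] / sd['y']:.2f}")
```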
Were you surprised that you were able to construct a model economy which generated fluctuations which closely resembled actual experience in the USA?
Yes. At that stage we were still searching for the model to fit the data, as opposed to using the theory to answer the question – we had not really tied down the size of the technology shock and found that the intertemporal elasticity of labour supply had to be high. In a different context I wrote a paper with another one of my students, Raj Mehra [Mehra and Prescott, 1985], in which we tried to use basic theory to account for the difference in the average returns on stocks and bonds. We thought that existing theory would work beforehand – the finance people told us that it would [laughter]. We actually found that existing theory could only account for a tiny part of the huge difference.
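The logic of that finding can be compressed into one standard approximation (a textbook sketch under the usual CRRA and lognormality assumptions, not Mehra and Prescott’s exact calculation). In the consumption-based asset pricing model the predicted premium of equity over the risk-free rate is approximately

```latex
\mathbb{E}[r^{e}] - r^{f} \;\approx\; \gamma \,\operatorname{cov}\!\left(\Delta \ln c,\; r^{e}\right),
```

where γ is the coefficient of relative risk aversion. Because aggregate consumption growth is smooth and only weakly correlated with equity returns, the right-hand side is a small fraction of a percentage point for any plausible γ, whereas the measured historical US premium is on the order of six percentage points – the ‘equity premium puzzle’.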
How do you react to the criticism that there is a lack of available supporting evidence of strong intertemporal labour substitution effects?
Gary Hansen [1985] and Richard Rogerson’s [1988] key theoretical development on labour indivisibility is central to this. The margin that they use is the number of people who work, not the number of hours of those that do work. This results in the stand-in or representative household being very willing to intertemporally substitute even though individuals are not that willing. Labour economists using micro data found that the association between hours worked and compensation per hour was weak for full-time workers. Based on these observations they concluded that the labour supply elasticity is small. These early studies ignore two important features of reality. The first is that most of the variation in labour supply is in the number working – not in the length of the workweek. The second important feature of reality ignored in these early studies is that wages increase with experience. This suggests that part of individuals’ compensation is this valuable experience. Estimates of labour supply are high when this feature of reality is taken into account. The evidence in favour of high intertemporal labour supply elasticity has become overwhelming. Macro and micro labour economics have been unified.
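The mechanism Prescott describes can be sketched in a few lines (our notation, following Hansen’s formulation). Each person either works a fixed shift of $\bar h$ hours or not at all, and employment is allocated by lottery with full consumption insurance:

```latex
% Individual period utility is u(c) + v(1-h), with h \in \{0, \bar h\}.
% With an employment lottery (probability n of working) and full insurance,
% expected period utility is
U = u(c) + n\,v(1-\bar h) + (1-n)\,v(1)
  = u(c) - B\,n, \qquad B \equiv v(1) - v(1-\bar h) > 0.
% Aggregate hours are N = n\bar h, so the stand-in household's utility is
% linear in labour supplied: its intertemporal elasticity of labour supply
% is infinite, whatever the elasticity of the individuals behind it.
```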
Many prominent economists such as Milton Friedman [see Snowdon and Vane, 1997b], Greg Mankiw [1989] and Lawrence Summers [1986] have been highly critical of real business cycle models as an explanation of aggregate fluctuations. What do you regard as being the most serious criticisms that have been raised in the literature against RBC models?
I don’t think you criticize models – maybe the theory. A nice example is where the Solow growth model was used heavily in public finance – some of its predictions were confirmed, so we now have a little bit more confidence in that structure and what public finance people say about the consequences of different tax policies. Bob Lucas [1987] says technology shocks seem awfully big and that is the feature he is most bothered by. When you look at how much total factor productivity changes over five-year periods and you assume that changes are independent, the quarterly changes have to be big. The difference between total factor productivity in the USA and India is at least 400 per cent. This is a lot bigger than if, in say a two-year period, the shocks are such that productivity growth is a couple of per cent below or above average. This is enough to give rise to a recession or boom. Other factors are also influential – tax rates matter for labour supply and I’m not going to rule out preference shocks either. I can’t forecast what social attitudes will be, I don’t think anybody can – for example, whether or not the female labour participation rate will go up.
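The arithmetic behind the ‘shocks have to be big’ point is simple (an illustrative calculation in our notation). If quarterly changes in log total factor productivity are independent with standard deviation σ, the change over a five-year span of 20 quarters has standard deviation

```latex
\sigma_{5\mathrm{yr}} \;=\; \sqrt{20}\,\sigma \;\approx\; 4.5\,\sigma,
```

so, read in reverse, even moderate five-year swings in productivity imply quarterly innovations that are individually sizeable; for example, σ = 0.007 per quarter compounds to a five-year standard deviation of roughly 3 per cent.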
In your 1986 Federal Reserve Bank of Minneapolis paper, ‘Theory Ahead of Business Cycle Measurement’, you concluded that attention should be focused on ‘determinants of the average rate of technological advance’. What in your view are the main factors that determine the average rate at which technology advances?
What determines total factor productivity is the question in economics. If we knew why total factor productivity in the USA was four times bigger than in India, I am sure India would immediately take the appropriate actions and be as rich as the USA [laughter]. Of course the general rise throughout the world has to be related to what Paul Romer talks about – increasing returns and the increase in the stock of usable knowledge. But there is a lot more to total factor productivity, particularly when you look at the relative levels across countries or different experiences over time. For example, the Philippines and Korea were very similar in 1960 but are quite different today.
How important are institutions?
Very. The legal system matters and matters a lot, particularly the commercial code and the property rights systems. Societies give protection to certain groups of specialized factor suppliers – they protect the status quo. For example, why in India do you see highly educated bank workers manually entering numbers into ledgers? In the last few years I have been reading quite a lot about these types of issues. However, there seem to be more questions than answers [laughter].
When it comes to the issue of technological change, are you a fan of Schumpeter’s work?
The old Schumpeter, but not the new [laughter]. The new suggests that we need monopolies – what the poor countries need is more competition, not more monopolies.
In your 1991 Economic Theory paper, co-authored with Finn Kydland, you estimated that just over two-thirds of post-war US fluctuations can be attributed to technology shocks. A number of authors have introduced several modifications of the model economy, for example Cho and Cooley [1995]. How robust is the estimate of the contribution of technology shocks to aggregate fluctuations to such modifications?
The challenge to that number has come from two places. First, the size of the estimate of the intertemporal elasticity of labour supply. Second, are technology shocks as large as we estimated them to be? You can have lots of other factors and they need not be orthogonal – there could be some moving in opposite directions that offset each other or some moving in the same direction that amplify each other. Are the shocks that big? Marty Eichenbaum [1991] tried to push them down and came up with a 0.005 number for the standard deviation of the total factor productivity shocks. My number is 0.007. I point out to Marty that Ian Fleming’s secret agent 005 is dead. Agent 007 survives [laughter].
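The shocks whose standard deviation is being disputed here are measured, following Prescott [1986], as Solow residuals. A sketch of the standard construction (our notation):

```latex
% Technology z_t is backed out from the aggregate production function
% Y_t = z_t K_t^{\theta} H_t^{1-\theta}, where \theta is capital's income share:
\ln z_t = \ln Y_t - \theta \ln K_t - (1-\theta)\ln H_t,
% and is modelled as a persistent process whose innovation size is at issue:
\ln z_{t+1} = \rho \ln z_t + \varepsilon_{t+1},
  \qquad \varepsilon_{t+1} \sim N\!\left(0, \sigma_{\varepsilon}^{2}\right),
% with \sigma_{\varepsilon} \approx 0.007 on Prescott's measurement and
% \approx 0.005 on Eichenbaum's.
```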
How do you view the more recent development of introducing nominal rigidities, imperfect credit markets and other Keynesian-style features into RBC models?
I like the methodology of making a theory quantitative. Introducing monopolistic competition with sticky prices has been an attempt to come up with a good mechanism for the monetary side. I don’t think it has paid off as much as people had hoped, but it is a good thing to explore.
The new classical monetary-surprise models developed in the 1970s by Lucas, Sargent, Wallace and others were very influential. When did you first begin to lose faith in that particular approach?
In our 1982 paper Finn and I were pretty careful – what we said was that in the post-war period if the only shocks had been technology shocks, then the economy would have been 70 per cent as volatile. When you look back at some of Friedman and Schwartz’s [1963] data, particularly from the 1890s and early 1900s, there were financial crises and associated large declines in real output. It is only recently that I have become disillusioned with monetary explanations. One of the main reasons for this is that a lot of smart people have searched for good monetary transmission mechanisms but they haven’t been that successful in coming up with one – it’s hard to get persistence out of monetary surprises.
How do you now view your 1977 Journal of Political Economy paper, co-authored with Finn Kydland, in which monetary surprises, if they can be achieved, have real effects?
Finn and I wanted to make the point about the inconsistency of optimal plans in the setting of a more real environment. The pressure to use this simple example came from the editor – given the attention that paper has subsequently received, I guess his call was right [laughter].
What do you regard as the essential connecting thread between the monetary-surprise models developed in the 1970s and the real business cycle models developed in the 1980s?
The methodology – Bob Lucas is the master of methodology, as well as defining problems. I guess when Finn and I undertook the research for our 1982 piece we didn’t realize it was going to be an important paper. Ex post we see it as being an important paper – we certainly learnt a lot from writing it and it did influence Bob Lucas in his thinking about methodology. That paper pushed the profession into trying to make macroeconomic theory more quantitative – to say how big things are. There are so many factors out there – most of them we have got to abstract from, the world is too complex otherwise – we want to know which factors are little and which are significant.
Turning to one of the stylized facts of the business cycle, does the evidence suggest that the price level and inflation are procyclical or countercyclical?
Finn and I [Kydland and Prescott, 1990] found that in the USA prices since the Second World War have been countercyclical, but that in the interwar period they were procyclical. Now if you go to inflation you are taking the derivative of the price level and things get more complex. The lack of a strong uniform regular pattern has led me to be a little suspicious of the importance of the monetary facts – but further research could change my opinion.
What is your current view on the relationship between the behaviour of the money supply and the business cycle?
Is it OK to talk about hunches? [laughter]. My guess is that monetary and fiscal policies are really tied together – there is just one government with a budget constraint. In theory, at least, you can arrange to have a fiscal authority with a budget constraint and an independent monetary authority – in reality some countries do have a high degree of independence of their central bank. Now I’ve experimented with some simple closed economy models which unfortunately get awfully complicated, very fast [laughter]. In some of those models government policy changes do have real consequences – the government ‘multipliers’ are very different from those in the standard RBC model. Monetary and fiscal policy are not independent – there is a complex interaction between monetary and fiscal policy with respect to debt management, money supply and government expenditure. So I think that there is a
