David Orrell has written an excellent book for the general reader. It doesn't explain economics. It isn't theoretical. But it does explain why current economic theory and policy-making are wrong-headed. It does explain what needs to be done to put economics on a solid foundation. It is good at explaining how economics was ill-founded by economists deluded by a kind of "physics envy" during the late 19th century.
The book is extensive and should give you a good understanding of why the recent financial crisis and bank panic have forced a reset in economic theory-making.
The following bits give you a flavour of the book...
The fact that money and happiness are totally different concepts somehow knocks the steam out of neoclassical theory, which assumes precise and mathematical relationships between utility and price. If the economic machine isn't maximising utility, then what is it maximising?

Orrell traces the 19th-century origins of economics and has much to say about it. For example:
The answer, of course, is nothing. The economy is what it is. Free markets have many splendid attributes, which must be protected. They are the best way that we have come up with to make a wide variety of economic decisions. They offer individuals and companies the opportunity to either succeed, or fail and make room for others, in the process that Joseph Schumpeter called "creative destruction." Markets are a basic form of human interaction that existed before economics was invented, and they don't need neoclassical theories to justify them -- any more than human cooperation must be justified by Marxism. But if we are to base our quest for the good life on empirical facts, rather than corny 19th-century ideas, then we need to rebalance our priorities. ...
In unequal societies, though, it is hard for anyone to be satisfied with what they have in material terms. So the first priority is to reduce inequality, which, as discussed earlier, is also strongly correlated with a range of social problems. ...
Another plank of happiness is freedom from excessive levels of economic stress. So it is desirable that the economy be reasonably stable. Economic shocks such as financial crashes or unemployment have powerful emotional consequences. ...
As mentioned in Chapter 6, a large amount of labour -- including much done by women -- is unpaid. This work, which is performed according to social norms, is vitally important for maintaining the happiness of society. ...
Finally, we have to acknowledge as a society that money and happiness are completely different quantities. The reason that GDP has soared in Western economies over the last few decades, but reported happiness levels have remained relatively static, is because they are not the same thing.
It might seem strange that control theory has not had wider influence on neoclassical theory, given that they were both founded at the same time; and that market economies are famed for their creativity and dynamism, which often seems the opposite of stability. It makes sense only when you think of economics, not as a true scientific theory, but as an encoding of a particular story or ideology about money and society. Viewed in this way, equilibrium theory is attractive for three reasons. Firstly, it implies that the current economic arrangement is in some sense optimal (if the economy were in flux, then at some times it must be more optimal than at others). ...

The following is one of my favourite bits that tweaks the self-confident "authority" of economists:
Secondly, it keeps everyone else in the game. As discussed further in Chapter 7, the benefits of increased productivity in the last few decades have flowed not to workers, but to managers and investors. If academics and the government were to let out the fact that the economy is unstable and non-optimal, then the workers might start to question their role in keeping it going.
Thirdly, it allows economists to retain some of their oracular authority. If the market were highly dynamic and changeable, then the carefully constructed tools of orthodox economics would be of little practical use. Efficient market theory, for example, makes no sense unless equilibrium is assumed. ...
A property of complex systems like the economy is that they can often appear relatively stable for long periods of time. However, the apparent stability is actually a truce between strong opposing forces -- those positive and negative feedback loops. When change happens, it often happens suddenly -- as in earthquakes, or financial crashes.
It might seem that financial markets are so complex that they are impossible to regulate. However, this impression is largely due to the carefully maintained myths that markets are efficient and optimal, so any attempt to interfere with their function will be counterproductive.

And this is a close second:
Our current approach to the economy is schizophrenic. We design an unregulated system that is economically and ecologically unstable; model it using techniques that assume stability; try to make predictions of the future; and then react in surprise when something goes wrong. If instead we acknowledge that the system is unstable, that opens up the opportunity to actively improve it, rather than passively try to guess its next move.

This is an excellent book to understand the current economic mess. It is written by an outsider -- an applied mathematician -- and it succeeds because understanding the crash requires an approach not compromised by years of "education" within the current theory. A mathematician has the intellectual distance and depth of sophistication to truly understand the gross errors at the foundations of economics. Others are too easily intimidated by the pseudo-scientific formalizations of modern economics. Read this book.
Update 2011oct29: If you look at the comments to this post you find some claims about economics that strike me as incredible in their ideological obtuseness. I prefer a world in which people are willing to respect viewpoints and learn from each other. In that spirit, I offer the following post from an interview with Mark Thoma, a left wing economist who shows respect for right wing views. This post is also interesting because it opens up the idea of validating economic theory (and hints at how very hard that is):
FiveBooks Interviews: Mark Thoma on Econometrics

If you read the above, you will find that David Orrell falls in the "heterodox" camp, i.e. among those who don't like equilibrium models and are not happy with the contemporary tools and techniques of traditional economics:
It’s a discipline popular with the Nobel prize committee and mysterious to most of the rest of us. So we asked an econometrician to explain what he does, and why there’s such a battle of ideas (and models) in economics
A number of econometricians – economists who use statistical and mathematical methods – have won the Nobel prize in economics. We’re going to talk about the ones that influenced you in your work. Before we start, what got you excited about econometrics in the first place?
I was in high school at the time of the oil price crisis of the 1970s and I asked my mom, “What causes inflation?” She didn’t know. I think it was from then on that I started wondering about money and inflation and what caused it – how the Fed was involved et cetera. When I got to college I took economics and started answering those questions. Then when I got to graduate school I learned theories about why it happened, and I got interested in how you test those theories. How do you know if this theory is right or that theory is right? How can we figure it out? That’s where econometrics comes in. Time-series econometrics was a set of tools I could use to answer that longstanding question I had about how the Federal Reserve impacts the economy, how it creates inflation and all those things.
Time-series econometrics is a particular area of focus for you. Can you give me an example of how it works?
Right now there is this big question about how much impact the Fed can have on the economy and whether or not deficit spending, or a change in government spending, creates employment. Is there a government-spending multiplier? You can use time-series econometrics to go back and look at data from the past, then use that data to estimate a model, and from that model you can figure out whether or not government spending works, and if it does work, when it works, when it doesn’t – all sorts of questions. Almost any policy question that you can formulate, you can take to the data. That’s when the econometric technique comes in, and allows you to answer it.
So it’s more empirical than straight economics?
Exactly. Our field is divided into two groups – there’s the theorists and the applied people. I’m more of an applied person. I apply and test the theories that the more highbrow, theoretical types build. They might build two different models of the economy, and then you want to know which one fits the data best. Which one is the best explanation for the data? You can use econometrics to sort out those competing theoretical models.
What then is your conclusion on the government-spending multiplier?
There’s a lot of uncertainty about it, so I can’t say for sure whether it’s one or two, or somewhere in between. There’s a fairly wide range of estimates, but I think it’s pretty clear that it’s somewhere in that range. I would say it’s about 1.5 presently.
The first ever Nobel prize for economics was awarded to two econometricians, Jan Tinbergen and Ragnar Frisch. Have econometricians been well represented in the Nobel prize for economics?
I think they have. The committee has been very good at rewarding the people who are building the tools that allow us to test these models, so it’s not unusual at all for them to give Nobel prizes to econometricians.
Of more recent awards, prior to this year’s award to Thomas Sargent and Christopher Sims, there was Robert Engle and Clive Granger in 2003, and before that Daniel McFadden and James Heckman in 2000.
Yes, those are all econometricians. The older cohort are more micro-economists. This morning I was wondering whether this year’s prize was something of a make-up for the macro people, but I don’t think it was. They built their own tools and techniques that were important.
Let’s start by talking about New York University’s Thomas Sargent, who along with Lars Ljungqvist is the author of your first book choice, Recursive Macroeconomic Theory. What is important about Sargent’s work?
Sargent went into the engineering literature, he took the tools and techniques that engineers use to do things like make sure your television is clear, and he brought them over into economics. There were all these tools that scientists were using to control systems. So an engineer might build a TV that has a feedback mechanism. Somehow, it can look at the picture and if the picture isn’t right, it can go back and adjust the inputs in a way that clarifies it. If there is a horizontal scroll or a vertical scroll, it does something internally to stabilise the system, and makes sure it doesn’t get out of whack.
All the same tools that engineers use to stabilise your television picture can be used to stabilise the economy. The difference – this is an important difference, and where Sargent’s contribution comes in – is that when I do something with a TV, it doesn’t try to get out of the way to protect itself. It doesn’t say, “Oh! I don’t like having a little more green in my colour, so I’m just going to turn it back down.” Unlike TVs, people have brains and they respond to policies. If you try to tax them, they try to get out of the way of taxes. If you try to tax a TV, it has no way of getting out of the way.
That’s where rational expectations comes in. When you’re trying to control a person instead of a TV, you have to take into account their expectations – how they’re going to respond to the things you do as a policymaker. Sargent took all these tools and techniques for optimal control that were being used in engineering, all this heavy mathematics, brought it over into economics, added rational expectations to it and made it even harder to use than it already was in the engineering literature.
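The engineering feedback idea Thoma describes can be sketched in a few lines. This is my own toy illustration, not code from Sargent's work or the interview: a proportional controller that repeatedly measures the gap between output and target and corrects a fraction of it, despite random shocks.

```python
# A proportional feedback controller -- my own toy sketch of the
# engineering idea Thoma describes, not code from Sargent's work.
import random

def stabilise(target=100.0, gain=0.5, steps=50, seed=42):
    """Nudge a noisy output toward its target, a fraction of the gap at a time."""
    random.seed(seed)
    output = 80.0                        # start well off-target
    for _ in range(steps):
        error = target - output          # measure the deviation
        adjustment = gain * error        # feedback: correct part of it
        shock = random.gauss(0, 1)       # outside disturbance each period
        output += adjustment + shock
    return output

print(round(stabilise(), 1))  # ends up close to the target of 100
```

The analogy breaks down exactly where Thoma says it does: here the system never anticipates the controller, whereas adding rational expectations means the thing being controlled reacts to the rule itself.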
And that’s what he won the Nobel for?
Yes. It wasn’t so much that he was this grand theorist – although he is certainly good at that – it was more the tools and techniques that he brought to the profession.
I get the feeling that the Nobel prize in economics is awarded on somewhat political grounds – based on what’s happening in the world at that particular moment – and not purely on intellectual heft. Is Sargent’s work especially relevant to the economic crisis we now face?
He gave us the tools and techniques we need to analyse the crisis and build the models that we need to build to understand it. But it’s not as though he built those models themselves. He built the hammer, not the house.
Let’s talk a bit more about the book, Recursive Macroeconomic Theory.
This is the textbook I use in my PhD macro courses. In fact, it’s used in almost all of the major PhD programmes in the US right now – any respectable programme most likely uses this book. It’s a book of tools and techniques to solve what are called recursive macroeconomic models.
I think you’d better explain.
Modern macroeconomics uses DSGE models – dynamic stochastic general equilibrium models. That’s a bit of a mouthful, but all dynamic means is that it’s a model which explains how things move through time. So it explains what GDP will be today, tomorrow, the next day and the day after that, and how it might change if the government does different things or if certain things happen in the economy. It’s really a model of how the economy transitions from one point to the next – how it goes into recessions et cetera. Those models are extraordinarily difficult to solve.
What this book shows us – and this is the recursive part in the title – is a way of breaking down this really hard problem into little tiny pieces. You can actually solve a much simpler problem when you only have to look at how the economy moves from today to tomorrow. I don’t have to look at how it moves from tomorrow to the next day and all the way out as far as I can see. We can just break down these really hard problems into a recursion between today and tomorrow.
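The recursive trick Thoma describes -- reducing an infinite-horizon problem to a single today-versus-tomorrow step -- is the Bellman equation at the heart of these models. As a toy illustration (my own, not an example from the textbook), here is value-function iteration on a simple "cake-eating" problem: each period you choose how much cake to eat today and how much to leave for tomorrow.

```python
# Value-function iteration for a "cake-eating" problem -- my own toy
# example of the recursive (Bellman) step, not taken from the textbook.
#   V(cake) = max over tomorrow's cake y <= cake of
#             sqrt(cake - y) + beta * V(y)
import math

def value_iteration(grid_size=51, beta=0.95, iters=400):
    grid = [i / (grid_size - 1) for i in range(grid_size)]  # cake sizes in [0, 1]
    V = [0.0] * grid_size                                   # initial guess
    for _ in range(iters):
        # one Bellman step: only "today versus tomorrow", never the whole horizon
        V = [max(math.sqrt(grid[k] - grid[j]) + beta * V[j]  # leave grid[j] for tomorrow
                 for j in range(k + 1))
             for k in range(grid_size)]
    return grid, V

grid, V = value_iteration()
print(round(V[-1], 2))  # a whole cake is worth more than none: V is increasing
```

Repeating that one-step update until the value function stops changing is exactly the recursion between today and tomorrow that the book is about.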
If you’re doing a PhD in macroeconomics, how important is this textbook? Is it one of 10 you have to work through?
It’s probably one of two. In macro there really aren’t very many. There’s this book, and there’s a book by David Romer of Berkeley. So this is sort of the Bible right now.
Let’s go onto what will count as your next two books, volumes one and two of Rational Expectations and Econometric Practice, edited by Thomas Sargent and Robert Lucas of Chicago, a leader of the rational expectations revolution and winner of the 1995 Nobel prize.
These are the books I used when I first came out of grad school and was trying to get tenure as a young assistant professor. It’s a collection of papers, a lot of them by Sargent, Sims and Granger. Nobel prize-winning economists are the dominant authors in it.
The second volume, especially, taught me how to use econometrics to test a brand new class of models. At the time, there were models coming out called new classical models and real business cycle models. Both involved rational expectations, which, it turns out, made estimation techniques really hard. These books were crucial in, firstly, giving me all the theoretical models behind what we were trying to test – there’s a whole section on theory – and, secondly, going through all the econometric techniques one needs to test those classes of models.
What surprised me, looking down your list, was that I think of you as very much on the left-wing side of the economics divide. Aren’t Lucas and Sargent at the opposite end of the ideological spectrum?
They are, and that’s an important point. Both sides of the spectrum within economics use the same tools and techniques. So I can honour everything they’ve done to allow me to do econometrics and understand theory without endorsing the way they’ve used those particular tools.
So these books are about the tools rather than the conclusions they reach?
There are conclusions in there, but they are more by way of example. Once you’ve learned the techniques, you’re all set to do anything. Lucas I would peg as very conservative. Sargent is a bit more open-minded. He’s done learning models with a colleague, and I sometimes go down the hall and he’ll be sitting in his office. Sargent’s dad lived in Eugene, Oregon for a long time (though he passed away recently) so he’d show up at our apartment quite often. That’s another reason why I like him.
Before I spoke to Paul Krugman, I’d never even heard the term “freshwater economist”, which I understand refers to the University of Chicago, the Minneapolis Federal Reserve, and various places near the Great Lakes which produce economists with a right-wing bent. I gather there’s a big divide between them and the “saltwater economists” who come from universities along the seaboards – Princeton, Harvard, Berkeley, Oregon – and who are more liberal.
There’s three groups really. There’s the divide between the new Keynesians and the new monetarists, or the real business cycle economists. That’s people like Lucas and Sargent versus people like me, Krugman, Brad DeLong and others. Then there’s another, much smaller group that don’t think any of us have a clue. Those are the heterodox economists. They don’t like the tools and techniques we use, they don’t like equilibrium models. It’s people like Jamie Galbraith, who don’t agree with either side.
Is Joe Stiglitz verging on the heterodox these days?
I would describe him as traditional minded. He certainly uses the same tools, though he pushes them to a bigger extreme than others would. He’s not saying: Throw out the toolbox, throw out everything we’ve learned in the last 30 years and let’s take a completely different tack using different kinds of models altogether. Though maybe that’s where we’ll end up as a result of this crisis.
I remain concerned at how economists can disagree so much. Doctors don’t disagree about how to treat a cancer patient.
Economists don’t disagree about certain things. And doctors do disagree about things – like whether cholesterol is good. There’s a big controversy right now about whether you should take vitamins or not, whether it’s helpful or harmful. When doctors have difficulty testing things experimentally, they run into the same issues as we do. When there’s just historical data, like we have – if, for example, they try to figure out heart disease by looking back at people’s lives – then a lot of the time they get things as wrong as we do. Doctors have changed their advice many times.
Going back to fiscal stimulus, which you mentioned at the beginning as something time-series econometrics can test, I take it there isn’t overwhelming evidence in its favour? Even though you’re on the Keynesian side, do you think people that question whether it works have a point?
Yes I do, completely. The reason is that we don’t have data for historical episodes like this one. The Great Depression was like this, but our data pretty much ends in 1945. We can’t go back any further with anything close to reliable data. As an econometrician I can estimate these multipliers, but they’re for good times not bad times. I don’t have the data that I need. I don’t have enough big recessions like this one in my data set to give a precise answer.
Your next book is by the winners of the 2003 Nobel prize, Robert Engle – who is now emeritus professor at UC San Diego – and the British economist Clive Granger, who sadly has since died. What was their big insight and contribution? Tell me about their book, Long-Run Economic Relationships.
This book is about a subject for which the technical term is cointegration. What it means in everyday language is variables that are tied together in the long run, that are related in some way. For example, you might think that consumption and income are tied together in the long run. If consumption takes a big left-turn at some point in the data, income ought to take a big left-turn in the data as well. One thing they got the Nobel prize for is how, within our models, we can tie these two variables together in a way that makes sense. It sounds easy, but it’s actually a very hard econometric problem.
The other thing they got their Nobel prize for was something called ARCH, or autoregressive conditional heteroskedasticity models. What that means is that for income, over time, we can measure the variance of income – how variable it is. There is some mean of income over time that follows some trend, and the variation around that trend is the variance. They showed us how to write down economic models that track that variance through time. So it’ll tell us what’s causing the variance of a financial asset to change. The variance of a financial asset would be how risky it is. If you’re looking at a financial asset, the mean would be the expected return on the asset and the variance is its risk profile.
These tools that they developed within this ARCH framework were then used – Engle says inappropriately – to do what were called value-at-risk calculations, prior to the crisis. When you hear about all these financial firms looking at their portfolios and doing risk assessments, at the heart of what they were doing were the models that Engle and Granger built, which allowed you to estimate a time series of variances and see how that variance, or risk profile, changes over time.
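A minimal sketch of the ARCH(1) idea -- my own illustration, not Engle and Granger's code: today's variance is a constant plus a multiple of yesterday's squared shock, so a large shock raises tomorrow's risk and volatility arrives in clusters.

```python
# Simulating an ARCH(1) process -- my own illustrative sketch of the
# time-varying-variance idea, not Engle and Granger's code.
import math
import random

def simulate_arch1(n=2000, omega=0.1, alpha=0.6, seed=7):
    random.seed(seed)
    returns, variances = [], []
    prev = 0.0
    for _ in range(n):
        variance = omega + alpha * prev ** 2          # the ARCH(1) recursion
        r = math.sqrt(variance) * random.gauss(0, 1)  # conditionally normal return
        returns.append(r)
        variances.append(variance)
        prev = r
    return returns, variances

returns, variances = simulate_arch1()
# Volatility clusters: quiet stretches, then bursts of high variance
print(round(min(variances), 3), round(max(variances), 3))
```

The `variances` series is exactly the kind of estimated, moving risk profile that value-at-risk calculations were built on.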
I’m presuming that in the run-up to the crisis, their models did not flag a big risk at these banks?
Yes, and Engle would say that the reason why that happened is that they weren’t using his models correctly. In some sense he’s just protecting himself. The important point here is that at the heart of all the risk analysis for the financial industry was their models. Prior to the crash, and even after the crash, if you wanted to know how risky the portfolio that Bear Stearns was holding was, you would use those techniques.
In spite of this, you’ve kept the book on your list…
The book is more about the first topic I mentioned, cointegration, though the other stuff is in there as well. Cointegration is important because it allows us to do a better job of looking at things like causality between variables. They showed that if you have variables that are tied together over time, then the standard tools and techniques that were in use at that time would be wrong. It would be inappropriate to use them, you have to use a completely different estimation technique. They showed us how to do tests to find out if you have this problem of a long-run relationship in your data, and if you have this problem how to fix it within the models. That was an important step forward, because we learned that these relationships are all over the place. We had probably been estimating our models wrongly up to that time.
If they hadn’t been awarded the Nobel in 2003, could they have won it now, post-crisis?
The crisis wouldn’t have changed anyone’s view of cointegration. I think the value of the ARCH models might have been questioned a bit more than they were at the time, because they didn’t signal the risks in advance like we expected them to.
Your last choice is an older textbook, Macroeconomic Theory, again by Thomas Sargent. What does this one bring to the table?
This was the first book I ever had in grad school, in my very first macro class. It was a good book for me to learn macroeconomics as it existed in the early 1980s. It actually presents a very Keynesian model, because that was the dominant model of the time and the beginnings of new classical models.
The reason I like it, and still use it, is that it shows me a lot of ways to solve expectational difference equations. These are just equations that have expectations in them. You might say that GDP today is equal to some function of government spending, of interest rates and the money supply – and it might be a function of expected income tomorrow. So income today depends on what you expect to happen tomorrow. Once you put those expectations into that equation, it’s really really hard to solve. In this book, Sargent begins showing us how to solve those problems in a way that’s general and works in a lot of different cases. So he brings a brand new technology to the literature that opens up a lot of questions you couldn’t ask before.
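As a toy illustration of an expectational difference equation (my own sketch, not an example from Sargent's book): suppose x_t = beta * E[x_{t+1}] + g_t, and the path of g is known in advance, so expectations are just the future values themselves. Solving forward makes today's x a discounted sum of all future g's.

```python
# Solving a simple expectational difference equation forward -- my own
# toy sketch, not an example from Sargent's book.
#   x_t = beta * E[x_{t+1}] + g_t
# With a known finite path for g, expectations equal realised values,
# so we can substitute forward from the end of the horizon.

def solve_forward(g, beta=0.9):
    x = [0.0] * len(g)
    # work backwards so x[t+1] is already known when computing x[t]
    for t in range(len(g) - 1, -1, -1):
        future = x[t + 1] if t + 1 < len(g) else 0.0  # nothing after the horizon
        x[t] = beta * future + g[t]
    return x

path = solve_forward([1.0] * 5)     # a constant spending path, say
print([round(v, 3) for v in path])  # income today embeds all expected future g's
```

Even in this trivial case you can see the punchline: today's outcome depends on the entire expected future, which is what makes these equations hard once expectations are genuinely uncertain.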
Isn’t this covered in the other books?
The book he wrote with Lars Ljungqvist is an updated, expanded and better version of this older book. But this book is still really good at solving models that have expectations in them. I still assign one chapter of it to my students. Particularly those tools for solving difference equations – they’re called expectational difference equations – are just as good as they ever were. It’s still the best source that I know of.
How much are these econometrics methodologies tied to the rational expectations assumption? If economic agents are boundedly rational and not capable of solving dynamic optimisation problems, do these econometric methods still apply or do they need to be fundamentally modified? If people follow simple rules of thumb, would these methods still work?
There’s a lot of tools in there that would still work. My colleague George Evans does exactly what you say. He builds learning models, and he doesn’t assume agents are rational. He then sees whether by using simple learning rules the models converge to a rational solution over time. He still uses quite a few of the techniques that Sims uses, like impulse response functions, causality testing, all those kinds of techniques. There’s a set of things that aren’t very model dependent, things that you can bring to any set of data to learn about it.
But there’s another set of techniques where the techniques themselves depend on implications of the rational expectations hypothesis. The rational expectations hypothesis, for instance, will tell you that some variables have to be uncorrelated. Stock returns have to be uncorrelated over time, because if you could predict stocks tomorrow, any rational agent would arbitrage that, make money and take away the predictability. What that gives you is a zero correlation between yesterday and today. That fact that that correlation is zero is often exploited in some of these techniques, to make it work. So if your rational expectations hypothesis falls apart, a lot of what I would call the more structural-based econometric techniques would fall apart with it, because they rely on the implications of rational expectations.
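That zero-correlation implication is easy to illustrate -- my own sketch, not from the interview: simulate unpredictable returns and check that the lag-1 autocorrelation comes out essentially zero.

```python
# Checking the zero-autocorrelation implication -- my own sketch, not
# from the interview: if returns are unpredictable, yesterday's return
# tells you nothing about today's, so the lag-1 correlation is ~0.
import random

def lag1_autocorr(xs):
    n = len(xs)
    mean = sum(xs) / n
    cov = sum((xs[t] - mean) * (xs[t - 1] - mean) for t in range(1, n))
    var = sum((x - mean) ** 2 for x in xs)
    return cov / var

random.seed(0)
returns = [random.gauss(0, 1) for _ in range(10_000)]
print(round(lag1_autocorr(returns), 3))  # close to zero
```

A technique that exploits this zero correlation works on the simulated series above; feed it a predictable series instead and, as Thoma says, the technique's foundation falls apart.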
In your field of econometrics, and in economics in general, is there a lot of change going on as a result of the crisis?
There should be. But not as much as I would like to see. Since I was in grad school – I graduated in 1986 so it’s been about 25 years – we’ve probably gone through two or three generations of models. When I started it was very Keynesian, then it was new classical, then we got something called the real business cycle models, then we got the new Keynesian models, and today there is an emerging set of models called the monetarist models. Within the field there’s been a lot of churning of models. The reason those first sets of models didn’t survive was because they didn’t stand up to the data.
The models that Lucas got his Nobel prize for – the new classical models, where expectations play a fundamental role, only unexpected money matters and things like that – had some really strong implications. We took that model to the data and it couldn’t explain the size, the magnitude and the duration of business cycles. It got rejected. Then we went to real business cycle models. They did better. But they had trouble explaining great depressions and other sorts of things, so we rejected those models and went to new Keynesian models. Those models were doing great, right up to the crisis. Then they did horridly. You don’t need advanced econometrics to reject that class of models – it’s clear that they just didn’t handle the crisis. So we’re going to reject those too. There’s been a lot of change, and I expect that change will speed up. I wish it was even faster, because it’s very clear to me that the models we were using prior to the crisis are not going to get the job done.
You mentioned your colleague George Evans. Are there other economists you admire, in terms of what they’re trying to do to find new, convincing models?
I like George’s learning models. I also like what John Geanakoplos is doing at Yale. Eric Maskin mentioned him in his interview with you. What was wrong prior to the crisis is that the macroeconomy wasn’t connected to the financial sector. There’s a technical reason for that which has to do with representative agent models – we just didn’t have any way to connect financial intermediation to the macro model. And we didn’t think we needed to. We didn’t think that was an important question, we didn’t think there was any reason to worry about the kind of meltdown we had in the 1930s happening in the US today. So no one bothered to build these models. Nevertheless, even before the crisis, Geanakoplos was building models that tried to explain how we could have these endogenous cycles. I really like that, because it uses the same tools and techniques that we’ve been using all along, but it puts them together in a different way, and in a way that I think makes a lot more sense.
Do you think there’s too much disdain among academic economists for what’s happening in the real world?
A little bit. It’s partly that, and it’s partly that the answers you get as an econometrician aren’t always that precise. Because of that lack of precision and the lack of ability to experiment, you often find people getting different values for the multiplier, getting different answers with different data sets – and that makes it look, perhaps correctly, that we really don’t have any answers.
What happened is that the theorists retreated into their deductive world, where they weren’t taking their models to the data enough. When they did take them to the data, and found that they didn’t work, they just said, “Oh it’s because of bad econometrics, the model is logically correct so we’re going to stick with it.” I think the arrogance of the theorists, and the lack of ability to do experiments, combined to make them way too insular in terms of taking their models and forcing them to interact with the actual world.
Interview by Sophie Roell
Published on Oct 28, 2011
Then there’s another, much smaller group that don’t think any of us have a clue. Those are the heterodox economists. They don’t like the tools and techniques we use, they don’t like equilibrium models. It’s people like Jamie Galbraith, who don’t agree with either side.

Not being an economist, I feel free to express my opinion. I think the future lies with the heterodox camp. I think David Orrell's criticisms are dead right.