Economists suck at forecasting. Discuss.

01 Oct 2009

Professor Don Harding

Originally published in The Australian on 1 October, 2009

Recently, the Australian and international media have seen economists with a high public profile, but little evidence of current research output, criticise those active in research for failing to predict the global financial crisis and the subsequent recession. What follows is a response from one research-oriented economist.

One of the most fundamental economic propositions is that one should not expect to regularly make abnormally large returns from forecasting crises or asset prices. The reasoning is simple: as soon as a better forecasting method is developed, it is incorporated into asset prices, policies and behaviour in a way that makes it hard to earn excess returns from applying that method.

Economists who developed this proposition accurately predicted their own and anyone else’s inability to make excess returns forecasting crises and asset prices. It’s a prediction that has stood up pretty well to econometric tests and, more important, to events. What’s disturbing is how many commentators misrepresent failure to predict such crises as evidence of the failure of economics as a profession.

In the future there will continue to be financial crises and recessions, all of which economists, bankers, finance gurus, bureaucrats, politicians and the public will fail to predict. To be sure, there will be some who claim predictive success. They will fall into two groups. Most will be single-issue tragics who predict the next calamity in the same way that I, an Essendon tragic, successfully predicted their win over St Kilda in round 20 by predicting that Essendon would win every match. The only information in such predictions is about the tragics who make them.

The smaller group comprises those who have a better technique for prediction. Once that better method is publicly known it is incorporated into asset prices, policies and behaviour, thereby making it hard to earn excess returns from forecasts made using that technique and publicly available information. This doesn’t mean that economists aren’t in the business of prediction, it just means that when we develop improved forecasting methods, they are incorporated into the economy as described earlier.

One reaction is that this seems to make economics and economists less useful. Let me show why the opposite is true, by applying these ideas to present policy debates.

Starting in the early 1980s, there was a modest reduction in the volatility of gross domestic product and several other variables in many economies, an event labelled the great moderation. There has been ongoing research focused on whether the great moderation reflected better forecasts, better policy, structural change or luck in the form of smaller macroeconomic shocks.

In a 2003 paper, two of the world’s most noted econometricians, Mark Watson and James Stock, presented central bankers at the Jackson Hole conference in the US with research that suggested that the great moderation was largely the result of good luck. Their inability to find convincing evidence of what caused the good run of luck was in effect a prediction that the run of luck could end unexpectedly. That’s precisely what happened with the present crisis.

Rather than try to understand and apply the results of research such as that just cited, we get continual criticism: the models are too mathematical; you assume rational behaviour; you don’t allow for frictions; you focus too much on equilibrium; you don’t allow adequately for judgment.

This last criticism came from a former opposition leader who lacked the judgment to win the 1993 election.

Research economists have answers to these criticisms. One is that economics is a broad church and researchers are working on each of these areas. Another is that we have learned from hard experience that more mathematics, rather than less, is necessary if we are to allow for frictions, learning and departures from rationality.

The big cost of the criticisms is that they allow governments and their advisers to avoid the unpleasant and inconvenient truths that emerge from research and mainstream economic thought. The result is the macroeconomic policy mess that we are now in.

Take the fiscal stimulus as an example. The fundamental difficulty in forecasting the future means that when the fiscal stimulus was designed there was a high probability that the forecasts used to calibrate the size of the stimulus would fail to eventuate and thus would need to be revised.

The public was denied insight into the likely magnitude of future revisions because Treasury continued to depart from best practice and chose not to put evidence-based standard errors on its forecasts and projections.

Had they provided standard errors the evident uncertainty about the forecasts would have caused many to pause and think about the wisdom of so large a stimulus.
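The standard errors the article asks for can be derived from the track record of past forecasts. The sketch below is purely illustrative, with made-up numbers: it takes a set of hypothetical historical forecast errors, computes their root-mean-square error as an evidence-based standard error, and reports an approximate 95 per cent band around a hypothetical point forecast.

```python
# Illustrative sketch only: all numbers here are invented, not Treasury data.
import math

# Hypothetical past forecast errors (actual minus forecast, percentage points)
past_errors = [-1.2, 0.8, -0.5, 1.5, -2.0, 0.9, -0.3, 1.1]

# Root-mean-square error of past forecasts serves as the standard error
rmse = math.sqrt(sum(e * e for e in past_errors) / len(past_errors))

point_forecast = 2.75  # hypothetical GDP growth forecast, per cent
lo, hi = point_forecast - 2 * rmse, point_forecast + 2 * rmse
print(f"forecast {point_forecast:.2f}%, approx 95% band [{lo:.2f}%, {hi:.2f}%]")
```

A band this wide around a point forecast is exactly the kind of disclosure that would have prompted the pause the article describes.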

Some have said that such standard errors don’t help because the revisions can be positive or negative. Not true.

William Brainard’s seminal 1967 work addresses the question of whether it is optimal to fully implement a policy in the face of such uncertainty. Often the optimal approach is to only implement part of the policy and then implement the remainder of the policy if conditions warrant it. That’s pretty much the principle that guides monetary policy adjustments.
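Brainard’s attenuation result can be sketched in standard textbook notation (this is a common rendering of the idea, not Brainard’s own 1967 notation):

```latex
% Output y responds to a policy instrument g through an uncertain
% multiplier k; the policymaker targets y^*.
\[
  y = k g + u, \qquad k \sim (\bar{k}, \sigma_k^2), \qquad u \sim (0, \sigma_u^2).
\]
% Minimising the expected squared deviation from the target,
\[
  \min_g \; \mathbb{E}\!\left[(y - y^*)^2\right]
  = (\bar{k} g - y^*)^2 + \sigma_k^2 g^2 + \sigma_u^2,
\]
% gives the attenuated optimum
\[
  g^* = \frac{\bar{k}\, y^*}{\bar{k}^2 + \sigma_k^2}
  \;<\; \frac{y^*}{\bar{k}} \quad \text{whenever } \sigma_k^2 > 0,
\]
% i.e. uncertainty about the multiplier makes it optimal to implement
% less than the certainty-equivalent policy and hold some in reserve.
```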

We never received credible official analysis of this issue for fiscal policy.

What we got instead of analysis was the slogan, "Go early. Go hard. Go households." That slogan is based on one data point, the recession of the 1990s.

It reminds me of the old joke that the difference in econometric skill between the minister, the secretary and the deputy secretary is this: a deputy secretary can draw a regression line between two points and reach an evidence-based conclusion; the secretary can draw a regression line through one data point and reach such a conclusion; and the minister can choose the location of the data point.

What we do have a good understanding of from research is that in a small economy, open to international trade and capital flows, with a floating exchange rate, such as Australia, an important impact of a fiscal expansion is to cause an appreciation of the exchange rate, which chokes off domestic activity.

Since October 28, 2008, we have had one of the largest and most aggressive fiscal expansions in the Organisation for Economic Cooperation and Development and the Australian dollar has appreciated by 32 per cent against a trade-weighted basket of currencies and has appreciated 42 per cent against the US dollar.

The effect is to make our traded goods sector less competitive.

How is it good economic policy to run up a deficit and spend the proceeds in a way that hurts our traded goods sector? How does this protect jobs? Public policy standards are so low in Australia that my expectation is that we won’t get well-researched, evidence-based answers to these questions from either the bureaucrats or the politicians. Instead we will get spin, vitriol and blame shifting.
