Economic forecasting is a long-standing joke, but the laughter has turned harsh and bitter in the wake of the credit crisis. The conventional wisdom seems to be that economic forecasting is impossible, and that economic forecasters are charlatans.
“In that case,” asked Professor David Hendry in a spring lecture at the Royal Economic Society, “why am I wasting my time on this?”
For one of Britain’s most respected economists, Hendry gives the strong impression of a man ploughing a lonely furrow.
His choice of field – the theory of economic forecasting – is to blame. It is viewed with scepticism not only by laymen but by most academic economists, too. But his research – a heady mix of bewildering computer-assisted mathematics and straightforward common sense – has convinced me that economic forecasting shouldn’t be consigned to the realm of quackery quite yet.
There is a simple reason why most economic forecasts are useless, which is that forecasting is hard. We don’t fully understand the underlying economic processes that produce the results we wish to forecast (growth, inflation, house prices), nor can we measure all the variables accurately, nor anticipate the sudden shifts caused by politics or technological change. Some forecasts – notably of the price of shares and other assets – are intrinsically self-defeating, because if it were obvious that share prices would rise, then they would have risen already.
But one of Hendry’s insights – developed with his co-author Michael Clements – is that not all of these difficulties produce bad forecasts. What really screws up a forecast is a “structural break”, which means that some underlying parameter has changed in a way that wasn’t anticipated in the forecaster’s model.
These breaks happen with alarming frequency, but the real problem is that conventional forecasting approaches do not recognise them even after they have happened. Oil-price forecasters have been predicting since 2000 that the oil price will fall; all the while it has been climbing. The reverse problem applied during the 1980s: oil prices collapsed, but the expert consensus was that the price would recover soon. That consensus persisted for years. The pound appreciated sharply in 1997; for the next eight years, forecasters predicted this appreciation would soon be reversed.
In all these cases, the forecasts were wrong because they had an inbuilt view of the “equilibrium” oil price or sterling exchange rate. In each case, the equilibrium changed to something new, and in each case, the forecasters wrongly predicted a return to business as usual, again and again. The lesson is that a forecasting technique that cannot deal with structural breaks is a forecasting technique that can misfire almost indefinitely.
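To see how an equilibrium-based forecast can misfire indefinitely, consider a toy simulation (my own illustration, not Hendry’s model): a price series whose true equilibrium jumps from 20 to 60 halfway through, forecast by a model that keeps predicting reversion to the old level of 20.

```python
import random

random.seed(0)

# Hypothetical price series: the "equilibrium" level jumps from 20 to 60
# at t = 50 -- a structural break the forecaster's model knows nothing about.
equilibrium = [20.0] * 50 + [60.0] * 50
price = [level + random.gauss(0, 2) for level in equilibrium]

# A naive equilibrium-reversion forecaster: it always predicts partial
# drift back towards its assumed equilibrium of 20.
ASSUMED_EQUILIBRIUM = 20.0

def revert_forecast(last_price, speed=0.5):
    """Forecast next period's price as partial reversion to the assumed level."""
    return last_price + speed * (ASSUMED_EQUILIBRIUM - last_price)

errors_before = [abs(revert_forecast(price[t]) - price[t + 1]) for t in range(49)]
errors_after = [abs(revert_forecast(price[t]) - price[t + 1]) for t in range(50, 99)]

print(sum(errors_before) / len(errors_before))  # modest errors before the break
print(sum(errors_after) / len(errors_after))    # persistently large errors after it
```

After the break the model is wrong in the same direction every single period – the “return to business as usual” that never comes.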
Hendry’s ultimate goal is to forecast structural breaks. That is almost impossible: it requires a parallel model (or models) of external forces – anything from a technological breakthrough to a legislative change to a war.
Some of these structural breaks will never be predictable, although Hendry believes forecasters can and should do more to try to anticipate them.
But even if structural breaks cannot be predicted, that is no excuse for nihilism. Hendry’s methodology has already produced something worth having: the ability to spot structural breaks as they are happening. Even if Hendry cannot predict when the world will change, his computer-automated techniques can quickly spot the change after the fact.
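One simple way to spot a break after the fact – a minimal sketch of the general idea, not Hendry’s actual algorithm – is a CUSUM-style monitor: accumulate the forecast errors, and raise an alarm when their cumulative sum drifts past a threshold.

```python
# Minimal break-detection sketch (illustrative only): a series whose mean
# jumps from 10 to 14 at t = 60, monitored with a basic one-sided CUSUM rule.
import random

random.seed(1)

series = [random.gauss(10, 1) for _ in range(60)] + [random.gauss(14, 1) for _ in range(40)]

MEAN_ESTIMATE = 10.0   # the model's pre-break view of the level
DRIFT = 0.5            # allowance so ordinary noise doesn't accumulate
THRESHOLD = 8.0        # tuning parameter: how much drift triggers an alarm

cusum = 0.0
break_at = None
for t, value in enumerate(series):
    cusum = max(0.0, cusum + (value - MEAN_ESTIMATE) - DRIFT)
    if cusum > THRESHOLD and break_at is None:
        break_at = t

print(break_at)  # flagged shortly after the true break at t = 60
```

Nothing here predicts the break; the point is that the alarm sounds within a handful of observations, rather than years later.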
That might sound pointless.
In fact, given that traditional economic forecasts miss structural breaks all the time, it is both difficult to achieve and useful.
Talking to Hendry, I was reminded of one of the most famous laments to be heard when the credit crisis broke in the summer. “We were seeing things that were 25-standard deviation moves, several days in a row,” said Goldman Sachs’ chief financial officer. One day should have been enough to realise that the world had changed.
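A quick back-of-envelope calculation shows why one such day should have sufficed. Under a normal distribution – the assumption behind quoting moves in standard deviations – even a three-sigma day is rare, and a 25-sigma day is so unlikely that the probability underflows to zero in ordinary computer arithmetic:

```python
# Upper-tail probability of a standard normal variable, via the error function.
from math import erf, sqrt

def upper_tail(sigmas):
    """P(Z > sigmas) for a standard normal Z."""
    return 0.5 * (1 - erf(sigmas / sqrt(2)))

print(upper_tail(3))    # a "bad day": roughly one in a thousand
print(upper_tail(25))   # effectively zero at double precision
```

Observing such a move at all means the model’s measure of a standard deviation – not the world – was wrong: the parameters had shifted.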
Also published at ft.com, subscription free.