At last the con has been taken out of econometrics

27th March, 2010

In 1983, Edward Leamer published an article with contents that would become almost as celebrated as its title. “Let’s Take the Con Out of Econometrics” began with an analogy that remains useful. Imagine an agricultural researcher who tests the effectiveness of a new fertiliser by dividing land into strips and spreading the new fertiliser only on a randomly chosen selection of those strips. Because of the randomisation, any systematic difference in yield between the treated and untreated strips can be attributed to the fertiliser.

Contrast this scrupulous scientist, continued Leamer, with two agricultural econometricians. One notices that crops grow under trees and, after taking careful measurements, announces that bird droppings increase crop yields; the other has noticed the same phenomenon and declares that it can, with confidence, be credited to the benign effects of shade.

This is the “identification problem” – working out whether a statistical pattern really has the cause we believe lies behind it. It muddies any statistical analysis of data that have not been generated by a controlled experiment, and it particularly plagues econometricians, the statistical wing of the economics profession. But, complained Leamer, throughout the 1970s they too rarely cared, and much of their work was dubious at best. Leamer was not alone. David Hendry showed in 1980 that by using the standard methods of the day, he could demonstrate that rainfall caused inflation. Or was it that inflation caused rainfall?

That was then. Now Joshua Angrist of the Massachusetts Institute of Technology and Jörn-Steffen Pischke of the London School of Economics have published a new working paper arguing that econometrics has undergone a “credibility revolution”. The identification problem, they argue, is now being faced head-on, and for many questions it is being solved. Modern econometrics works.

Given the recent financial crisis, I pause for sceptical chuckles, but academic econometrics is rarely used for forecasting. Instead, econometricians set themselves the task of figuring out past relationships. Have charter schools improved educational standards? Did abortion liberalisation reduce crime? What has been the impact of immigration on wages?

More data and more powerful computers explain some of the improvement, but the real progress has come through better techniques. One is the use of an “instrumental variable”, some outside force that partly mimics the effect of a proper randomised trial. Angrist, for instance, looks at lotteries that allow children to go to oversubscribed charter schools. The lotteries are imperfect, because some winners do not take up their places, and some losers manage to win places somehow anyway. But allowances can be made for that.
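
To make the logic concrete, here is a minimal sketch in Python of two-stage least squares, the workhorse instrumental-variable estimator. The data are simulated and the names (a lottery win, school attendance, an unobserved ability score) are purely illustrative, not drawn from the actual charter-school studies; the point is simply that the lottery draw, being as good as random, lets us recover a causal effect that a naive regression gets wrong.

```python
# Minimal two-stage least squares (2SLS) sketch with simulated data.
# All variables here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved ability affects both who ends up attending and the final score.
ability = rng.normal(size=n)

# The instrument: winning the admission lottery, as good as random.
win = rng.integers(0, 2, size=n)

# Attendance is driven by the lottery, but imperfectly: some winners do not
# enrol, and some higher-ability losers find their way in anyway.
p_attend = 0.1 + 0.6 * win + 0.2 * (ability > 0.5)
attend = (rng.random(n) < p_attend).astype(float)

# True causal effect of attending is 2.0 points.
score = 2.0 * attend + 1.5 * ability + rng.normal(size=n)

def ols(y, X):
    """Least-squares coefficients of y on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)

# Naive regression of score on attendance: biased upwards, because
# attendance is correlated with unobserved ability.
naive = ols(score, np.column_stack([ones, attend]))

# Two-stage least squares.
# Stage 1: predict attendance from the instrument alone.
first_stage = ols(attend, np.column_stack([ones, win]))
attend_hat = np.column_stack([ones, win]) @ first_stage
# Stage 2: regress the outcome on predicted attendance.
iv = ols(score, np.column_stack([ones, attend_hat]))

print(f"naive OLS estimate of attending: {naive[1]:.2f}")  # drifts above 2.0
print(f"2SLS (IV) estimate of attending: {iv[1]:.2f}")     # close to 2.0
```

In practice researchers use purpose-built IV routines with proper standard errors and diagnostic tests; the hand-rolled two stages above are only meant to show where the identification comes from.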

Another technique is to look for sudden jumps in a variable that are caused by some arbitrary rule rather than by anything connected to the question being studied. For instance, struggling children are often put into small classes, making it hard to work out whether small classes help children learn. But Israel has a law that no class can exceed 40 children, so a school year with 39 children will have one large class while a year with 41 children will have two smaller classes. If, as seems likely, the smaller class sizes here have nothing to do with the raw ability of the children, this serves as a good quasi-experiment.
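
In the same spirit, below is a minimal Python sketch of this approach, which econometricians call a regression discontinuity design, built around a 40-pupil cap. The enrolment figures, test scores and the true class-size effect are all simulated for the example, not taken from the actual Israeli data; the point is that year groups just below and just above the cutoff are nearly identical in every respect except class size, so any jump in scores at the cutoff can be laid at the door of class size.

```python
# Minimal regression-discontinuity sketch around a 40-pupil cap.
# All numbers are simulated and illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_schools = 5_000

enrolment = rng.integers(20, 61, size=n_schools)   # pupils in the year group
n_classes = np.ceil(enrolment / 40)                 # the cap forces a split above 40
class_size = enrolment / n_classes

# Assumed true model: bigger classes hurt scores a little (-0.3 per pupil),
# while enrolment itself has a smooth, mild association with scores that
# must not be mistaken for a class-size effect.
scores = 70 - 0.3 * class_size + 0.05 * enrolment + rng.normal(0, 2, n_schools)

# Compare year groups just below and just above the cutoff: enrolment barely
# differs, but average class size jumps from about 38 down to about 21.
below = (enrolment >= 36) & (enrolment <= 40)
above = (enrolment >= 41) & (enrolment <= 45)

jump_in_scores = scores[above].mean() - scores[below].mean()
jump_in_size = class_size[above].mean() - class_size[below].mean()

print(f"class size falls by {-jump_in_size:.1f} pupils at the cutoff")
print(f"implied effect of one extra pupil: {jump_in_scores / jump_in_size:.2f}")
# The ratio recovers roughly the true -0.3 per-pupil effect, because the
# smooth enrolment trend barely changes across this narrow window.
```

Real studies typically fit local regressions on either side of the cutoff rather than comparing simple window averages, but the comparison above captures the idea.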

Whether laymen will be persuaded by these sophisticated – often incomprehensibly sophisticated – techniques is not clear. But where experiments are impossible, as they often are, econometricians are at least starting to convince each other. That is probably progress.

Also published at ft.com.
