Why we’re all far too sure of ourselves
At least since President Truman asked for a “one-handed economist” – who presumably would be unable to say, “on the other hand” – politicians have demanded the appearance of certainty where certainty cannot exist.
Economists and other academics tend to respond to this demand if they want to be heard in the corridors of power. They do so in a wide variety of ways: at a recent Leverhulme lecture at the Institute for Fiscal Studies in London, the economic statistician Charles Manski laid out a typology of unreasonable “certitudes”.
A memorable example is the “conventional certitude”, in which a spuriously precise number becomes the focus for all debate. In the US, the Congressional Budget Office estimates how much each piece of putative legislation will alter the budget deficit over the following decade. CBO estimates are about a hundred times too precise: they are reported to the nearest billion dollars, when a range of several hundred billion would be more reasonable for major legislation.
But they are then adopted as gospel by almost all politicians, analysts and media outlets. In the UK, the Treasury’s – now the Office for Budget Responsibility’s – forecasts are treated with a little more scepticism, but only a little. The central forecast of the budget deficit tends to be universally accepted. It just seems simpler that way.
Being open about uncertainty is not just a case of reporting some kind of statistically derived “margin of error”. There are many ways for a conclusion to look statistically robust but be wrong. What is needed is to be clear about the underpinning assumptions and open-minded about what would happen if those assumptions were mistaken.
It is not clear why we enjoy certitude so much – certitude being the subjective experience of feeling certain. In contrast – as Kathryn Schulz observes in her wonderful book Being Wrong – there is simply no psychological experience of “being wrong” at all, only the lurching realisation of having been wrong until a moment ago.
Manski argues that analysts should be far more open about the extent of their doubts, and that politicians can and should be able to cope. I am more pessimistic. Politicians are creatures of certitude: they join a tribe of like-minded people, convinced that the tribe on the other side is wicked and stupid. The media love certitude, too. Newspaper editors hate headlines with “may” or “might” in them.
For these reasons, the scholar who is honest about her doubts will find her work ignored in favour of some clever-sounding chap who just seems to know so much more about how the world works. (How else could he be so certain?) Brilliant scholars with strong, clear views, such as Milton Friedman, John Maynard Keynes and Paul Krugman, enjoy larger followings than brilliant scholars who deal in doubts and complications, such as Elinor Ostrom and Thomas Schelling.
Manski might seem quixotic in his request that serious policy analysis be presented with more humility, given that neither politicians nor the media have much appetite even for overly certain serious policy analysis.
But there is a serious cost to excess certainty. Whenever an analyst or academic presents a number or a conclusion with too much precision, they reduce the demand for better evidence. Why run a pilot or set up a proper survey if the answer is already known to three decimal places?
The fact is that our political system simply does not take evidence seriously. If I had to suggest one single reason for that, it’s our love of certitude. Evidence is the way to reduce honest doubts. Stuffed on a fattening diet of certitude, who has room for doubt? And if we have no doubts, who needs evidence?
Also published at ft.com.