From The Tipping Point to Nudge, the rise of pop-social science has been a noticeable feature of the past decade in publishing. Not everyone is impressed. I recently interviewed a professor of education who is an expert in policy evaluation. She lamented the fact that politicians tend to get their facts from popular social science books containing inaccuracies. A couple of hours later, I interviewed a politician who was fizzing with excitement about a popular social science book. If only I’d been able to introduce them, the explosion would have been something to see.
I think the professor was right to worry about ministerial exposure to authors such as Malcolm Gladwell, Dan Ariely, Richard Thaler and even Tim Harford – but not for quite the right reasons. The problem is not that such authors are inaccurate. I’m not sure that they are. Gladwell has plenty of critics, but I find him a careful reporter. Ariely is a respected academic; Thaler – also a professor – is widely tipped for a Nobel prize. And what can we say about Tim Harford? I am told he is all but infallible.
Yet infallibility is not enough. It’s perfectly possible for an author to do nothing but weave together credible, peer-reviewed research and yet produce a highly partial view of reality. Different pieces of research invariably point in different directions. Dan Ariely’s Predictably Irrational is full of examples of irrational behaviour. My own Logic of Life is full of examples of rational behaviour. Occasionally I am asked to explain the contradiction, but if there is a contradiction, it is a subtle one.
If Ariely describes a rainy day and I describe a sunny one, we are not really contradicting each other. We each offer our spin, but it’s really about whether most people expect sunshine or rain: Dan says that it’s rainier than we tend to think, while I say the sun shines more often than anyone would credit. A serious review of this metaphorical evidence would count up the rainy days and the sunny ones. It might describe different climates in different parts of the world, the degree of variability, and whether there was any way to forecast rain.
For real policy questions, such reviews exist. They are called systematic reviews. They are increasingly standard practice in medicine, although they are new and scarce in social policy. They should be the first port of call for anyone wanting to understand what works. But they are not exactly bestsellers in airport bookshops.
Quite apart from the fact that nobody wants to read all the evidence, there is a deep problem with the way evidence is selected throughout academia. Even a studiously impartial literature review will be biased towards published results. Many findings are never published because they just aren’t very intriguing. Alas, boring or disappointing evidence is still evidence. It is dangerous to discard it, but let’s not blame Malcolm Gladwell just because he doesn’t stick it on page one.
There’s a hierarchy of evidence here. The systematic review tries to track down unpublished research as well as what makes it into the journals. A less careful review will often be biased towards results that are interesting. A peer-reviewed article presents a single result, while a popular social-science book will highlight a series of results that tell a tale. The final selection mechanism is the reader, who will half-remember some findings and forget the rest.
Those of us who tell ourselves we are curious about the world are actually swimming in “evidence” that has been filtered again and again in favour of interestingness. It’s a heady and perhaps toxic brew, but we shouldn’t blame popularisers alone for our choice to dive in.
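The repeated filtering described above can be made concrete with a toy simulation. This is a minimal sketch, not a model of any real literature: it assumes a question where the true effect is nil, journals that publish only large estimates, and a populariser who picks the most striking published findings. The thresholds and sample sizes are arbitrary illustrations.

```python
import random

random.seed(1)

TRUE_EFFECT = 0.0   # assume the honest answer is "no effect"
NOISE = 1.0         # sampling noise in each individual study

def mean_abs(xs):
    """Average size of the estimated effects in a collection of studies."""
    return sum(abs(x) for x in xs) / len(xs)

# Simulate 1,000 studies of the same question, each a noisy estimate.
studies = [TRUE_EFFECT + random.gauss(0, NOISE) for _ in range(1000)]

# Filter 1: journals tend to publish the "interesting" (large) results.
published = [e for e in studies if abs(e) > 1.5]

# Filter 2: a populariser highlights the most striking published findings.
popularised = sorted(published, key=abs, reverse=True)[:10]

print(f"all studies:  mean |effect| = {mean_abs(studies):.2f}")
print(f"published:    mean |effect| = {mean_abs(published):.2f}")
print(f"popularised:  mean |effect| = {mean_abs(popularised):.2f}")
```

Each filter keeps the content it passes through truthful, yet the surviving picture drifts further from the underlying reality of zero effect at every stage.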
Also published at ft.com.