Tim Harford
The Undercover Economist
    “Every Tim Harford book is cause for celebration. He makes the ‘dismal science’ seem like an awful lot of fun.”
    – Malcolm Gladwell, author of “The Tipping Point”
  • Adapt – Why Success Always Starts with Failure

    “A highly readable argument... Very impressive”
    – Nassim N Taleb, author of “The Black Swan”
  • Dear Undercover Economist
    “Harford has a knack for explaining economic principles and problems in plain language and, even better, for making them fun”
    – The New York Times
  • The Logic of Life
    “Highly readable, funny and daringly contentious... a whopping good time.”
    – The San Francisco Chronicle
  • The Undercover Economist
    “Required reading”
    – Stephen J. Dubner, co-author of “Freakonomics”
Undercover Economist

There’s more to life than money

Too often the debate over public policy becomes a toy argument, dressed up as the grown-up version

Scottish voters are in the middle of an unseemly bidding war. With the referendum for independence scheduled for September, Scotland’s first minister Alex Salmond is trying to tempt the Scots by promising that they will each be £1,000 a year better off after independence. From London, the UK Treasury has a better offer: it forecasts a dividend of £1,400 per Scot per year if Scotland stays part of the United Kingdom.

As it happens, the big difference between the two forecasts is that the Scottish government assumes productivity growth will be 0.3 percentage points higher each year in an independent Scotland. (That is a lot.) A smaller difference is that while both forecasts assign most North Sea oil revenue to Scotland, the UK Treasury is pessimistic about the value of the dwindling resource.
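A 0.3 percentage point growth gap sounds trivial, but it compounds. Here is a minimal sketch of the arithmetic, assuming a made-up 1.5 per cent baseline growth rate and a 30-year horizon; neither figure comes from the forecasts themselves:

```python
# Illustrative compounding, not the official forecasts.
base_growth = 0.015   # assumed baseline productivity growth: 1.5% a year
bonus = 0.003         # the Scottish government's extra 0.3 percentage points
years = 30            # an arbitrary generation-length horizon

base = (1 + base_growth) ** years
boosted = (1 + base_growth + bonus) ** years
advantage = boosted / base - 1   # how much larger the faster-growing economy ends up
print(f"After {years} years the faster-growing economy is {advantage:.1%} larger")
```

Small annual differences, sustained for decades, are what make the two headline dividends diverge.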

But the weaselly details of all this need not delay us. It’s astonishing that instead of being wooed by romantic ideals expressed with passion, Scots are being promised cash. The debate over the future of the country is being conducted in a style worthy of a clearance sale at a furniture showroom. One can only imagine what politicians are like on a date – presumably they pull out a roll of banknotes and haggle over the hourly rate.

You might ask why an economist, of all people, is shocked by such behaviour. I think the reason is that it’s a superficial impersonation of what economics really is. My two-year-old son happily imitates mum and dad at the stove, but while a knee-high plastic kitchen range may look like the real deal to him, it is not. Too often the debate over public policy becomes a toy argument, dressed up to resemble the grown-up version with financial forecasts serving as the sparkly accessories.

Stated plainly, the Scottish government’s case is that an independent Scotland would enjoy high economic growth thanks to better economic policies. Any costs would be swamped by the benefits of this growth. That is not an absurd claim, although not everyone will find it persuasive. Economic numbers could, in principle, serve as a sanity check – but that is not why the numbers are there. Instead, they’re designed to divert scrutiny away from the plausibility of the underlying argument.

Scottish independence is one of countless examples of toy-oven economic analysis. Consider the old standby that some illnesses – diabetes, dementia, breast cancer – cost “the economy” billions of pounds per year. For example, the Alzheimer’s Society reports that dementia “costs the UK over £23bn a year” – a statement that could mean all sorts of things. Yale’s Rudd Center says that “obesity-related direct and indirect economic costs exceed $100bn annually”, which makes a bit more sense.

In the UK the cliché is that some disease is problematic because it costs the National Health Service money, as if an instant cure for all cancers were desirable largely because it would allow us to stop paying salaries to the radiologists.

There is certainly merit in conducting a cost-benefit analysis of medical treatments. If we understand how well they work and how severe are the symptoms they alleviate, we can set priorities. But something has gone wrong when we say that the problem with a heart attack is that it will be an expensive nuisance for the ambulance service.

Where did we go astray? Three sensible propositions from economics have somehow been crumpled into a mess of public relations and politics.

The first is that opportunity costs matter. Time, money and attention that are poured into something cannot also be lavished on something else. For this reason it’s good to get a sense of how much a proposal is likely to cost and what the benefits might be. But the cost-benefit figures often convey a sense of certitude that is absurd: they are only as solid as the assumptions and forecasts that go into them.

The second proposition is about reducing everything to money. It follows from the first: if you are going to compare the costs and benefits of different things, you need some common unit of measurement. This unit doesn’t have to be money. It would be just as true for the UK Treasury to say that independence will cost every Scot the equivalent of one knickerbocker glory a fortnight. But money is a more convenient yardstick than an ice-cream sundae.

The third proposition is that it’s worth paying special attention to spillover costs and benefits. In arguing over HS2, the fantastically controversial proposal to build a faster railway line between London and Manchester, people speculated over the value to passengers of a faster journey. Economics suggests that’s the last thing we should fret about, because passengers can make those benefits known by buying tickets. It’s the costs and benefits for those who don’t buy tickets that need more scrutiny.

Costs and benefits matter, money is a handy measuring rod, and spillovers deserve special attention. These three principles should be respected – but that does not mean the way to make good policy is to stick a price tag on everything.

Also published at ft.com.

Undercover Economist

The four lessons of happynomics

‘Happiness is surely important, but the case for letting economists loose on the subject is less clear’

The discipline of happynomics (or, to give it an academically respectable title, “the economics of subjective well-being”) is booming. Respected economists have joined the field, from Lord Layard in the UK to White House appointees such as Alan Krueger and Betsey Stevenson. Several were lured in by Daniel Kahneman, a widely admired psychologist with a Nobel Memorial Prize in economics.

Happiness is surely important, but the case for letting economists loose on the subject is less clear. So are the happynomists discovering things that will put a song in your heart and a smile on your face? Perhaps. After reading a stack of books about the economics of happiness, and seeking advice from some of the researchers involved, allow me to present four tips for happiness from the dismal science.

Number one: don’t be distracted by the obvious. When buying a new car, it’s natural to imagine yourself thrilling to its acceleration. When buying a new house, it’s only human to ponder the pleasure of hosting guests at a summer barbecue on the patio. But such thoughts fall prey to what psychologists call “the focusing illusion”. Most of our time will be spent neither throwing summer parties nor overtaking lorries. Yet we’re swayed by these attractions because we’re focusing on them just at the moment that we decide.

The focusing illusion was splendidly captured by a pair of questions asked of college students by researcher Norbert Schwarz and others: “How happy are you?” and “How many dates did you have last month?” The research team discovered that people having a lot of dates also say they’re feeling very happy – but only if asked about the dates first. If the happiness question comes first, there’s a far smaller correlation between the answers. Those students asked initially about their love lives continued to think about them when pondering their happiness. Otherwise they might have been worrying about money, career or exams.

The focusing illusion lies in wait for us whenever we make a decision. Nattavudh “Nick” Powdthavee, an economist and author of The Happiness Equation, says that we have to try to “look beyond what’s salient about an experience”. Don’t just think about the obvious when making decisions; think about how day-to-day life is likely to change as a result – if it changes at all.

The second piece of advice is to pay attention selectively. It turns out that we grow accustomed to some conditions, happy or unhappy, but not to all.

The study which sparked the idea that we can get used to almost anything was published by Philip Brickman, Dan Coates and Ronnie Janoff-Bulman in 1978. It compared the happiness of paraplegic and quadriplegic accident victims to that of lottery winners – and discovered that the disabled people were scarcely less happy than the millionaires. Apparently we can bounce back from some awful experiences. (It is sad and troubling that a few years after making this discovery, Brickman killed himself.)

But how exactly is this apparent process of habituation supposed to work? Here’s where happiness economics has the long-run data to help. Consider bereavement: we cope by paying less attention as time goes by. A friend said to me, months after my mother and his father had both died, “You don’t get any less sad when you think about them but you think about them less often.”

The same is true, alas, for the nice things in life: we begin to take them for granted too. But there are experiences – unemployment is one of them; an unhappy marriage another – that depress us for as long as they last. What those experiences seem to have in common is the ability to hold our attention. Commuting, although shorter and less serious, is a classic case – annoying but also stimulating enough that we keep noticing the annoyance.

This suggests that we should look for the opposite of commuting: positive new experiences that are engaging enough to keep being noticed. In this case “count your blessings” is perfect advice.

Third, try nudging yourself to happiness with the techniques of behavioural economics. Paul Dolan, another economist and the author of the forthcoming Happiness by Design, advocates doing some preparation to make it easier to do what brings us joy. If you wish to read more, for example, set your browser home page to a book review site, leave books lying around your house and make a commitment to visit a literary festival. Make the new habit easy to do and hard to ignore.

Finally we must keep a sense of what’s possible. I asked Daniel Kahneman himself for his advice, and he made some evidence-based suggestions about the importance of friends and family. But he also pointed out that much of our happiness seems genetically predetermined. It’s possible to give good advice, he said, but “we shouldn’t expect a depressive person to suddenly become extroverted and leaping with joy”. In happiness, as in life, we economists must know our limits.

Also published at ft.com.

Undercover Economist

An astonishing record – of complete failure

‘In 2008, the consensus from forecasters was that not a single economy would fall into recession in 2009’

In the 2001 issue of the International Journal of Forecasting, an economist from the International Monetary Fund, Prakash Loungani, published a survey of the accuracy of economic forecasts throughout the 1990s. He reached two conclusions. The first was that forecasts are all much the same. There was little to choose between those produced by the IMF and the World Bank, and those from private sector forecasters. The second conclusion was that the predictive record of economists was terrible. Loungani wrote: “The record of failure to predict recessions is virtually unblemished.”

Now Loungani, with a colleague, Hites Ahir, has returned to the topic in the wake of the economic crisis. The record of failure remains impressive. There were 77 countries under consideration, and 49 of them were in recession in 2009. Economists – as reflected in the averages published in a report called Consensus Forecasts – had not called a single one of these recessions by April 2008.

This is extraordinary. Bear in mind that this is not the famous complaint from the Queen that nobody saw the financial crisis coming. The crisis was firmly established when these forecasts were made. The Financial Times had been writing exhaustively about the “credit crunch” since the previous summer. Northern Rock had been nationalised in the UK and Bear Stearns had collapsed in the US. It did not take a genius to see that trouble was on the way for the wider economy.

More astonishing still, when Loungani extends the deadline for forecasting a recession to September 2008, the consensus remained that not a single economy would fall into recession in 2009. Making up for lost time and satisfying the premise of an old joke, by September of 2009, the year in which the recessions actually occurred, the consensus predicted 54 out of 49 of them – that is, five more than there were. And, as an encore, there were 15 recessions in 2012. None were foreseen in the spring of 2011 and only two were predicted by September 2011.

Predictions from multinational organisations such as the IMF and the Organisation for Economic Co-operation and Development have remained very similar to the private sector consensus – similarly bad, that is.

We should not blame economics alone for our inability to peer into the future of a complex world. In 2005, Philip Tetlock, a psychologist, published a landmark work with the title Expert Political Judgment. Tetlock found that throughout the 1980s and 1990s, political and geopolitical forecasts had been scarcely better than guesswork. It made little difference whether the forecaster was an academic, journalist or diplomat, a historian or a political scientist. Forecasting is difficult, it turns out. (Supply your own punchline.) As for Tetlock, he is currently conducting a follow-up study to see whether forecasting practices can be saved.

Why are forecasts so poor? The chief explanation is that the economy is complicated and we don’t understand it well enough to make forecasts. We don’t even fully understand recent economic history.

Ben Chu, economics editor of The Independent, recently took a look at the UK recession of the 1990s in the light of two decades of data revisions. From the vantage point of 1995, the economy in late 1992 was slightly smaller than the economy in early 1988. But today’s best guess is that the economy of late 1992 was almost 6 per cent larger than in early 1988. The Office for National Statistics has substantially revised its view.

Not only is it difficult to forecast the future, then – forecasting the past isn’t straightforward either. What chance does any prognosticator have?

A second explanation for forecasting’s fallibility is that there is little incentive to do better. The kind of institutional chief economist whose pronouncement makes it into Consensus Forecasts will stick to the middle of the road. Most countries, most of the time, are not in recession, so a safe strategy is never to forecast one. Of course there are the mavericks who receive media attention for making provocative predictions and are lionised when they are right. Their incentives are different but it is unclear that their overall track record is any better.

The obvious conclusion is that forecasts should not be taken seriously. There is not a lot of point asking an economist to tell you what will happen to the economy next year – nobody knows. It is still a source of constant wonder to me that the demand for forecasts – in economics and elsewhere – remains undiminished.

John Maynard Keynes famously looked forward to a day when “economists could manage to get themselves thought of as humble, competent people, on a level with dentists”.

It’s a nice piece of self-deprecation, but it’s also an analogy worth exploring. We don’t expect a dentist to be able to forecast the pattern of tooth decay. We expect that she will offer good practical advice on dental health and intervene to fix problems when they occur. We should demand much the same from economists: proven advice about how to keep the economy working well and solutions when the economy malfunctions. And economists should bear in mind that no self-respecting dentist would be caught dead forecasting when your teeth will fall out.

Also published at ft.com.

Undercover Economist

Don’t blame statisticians for counting the wages of sin

Officials used to size up economies like butchers size up cows, writes Tim Harford

The good news is that the UK economy is about to surge by 5 per cent. The bad news is that this surge does not actually reflect economic growth but a change in statistical definitions. And the weird news is that this is happening in part because EU statistical guidelines demand that spending on cocaine and hookers be reflected in the UK’s official statistics.

Feel free to raise one eyebrow at this point. Recall that Greece’s economy grew overnight by 25 per cent after the country’s official statisticians included sex work and illegal drugs as part of their estimates of economic activity – a move that, by an astonishing coincidence, flattered the country’s budget deficit. That was in September 2006; the story has not gone well since then.

It is certainly true that the Office for National Statistics has only the vaguest sense of what is going on in the British economy’s nether regions. (Individual statisticians with a sense of adventure may know more, of course, but their observations are unlikely to be statistically reliable.) Most recreational drugs are illegal in the UK, while sex work is on the margins of legality. One suspects that not every transaction for sex or drugs is faithfully recorded on a tax return.

The ONS has made valiant assumptions in estimating that 60,879 sex workers are each employed 1,300 times a year at an average rate of £67.16. If true, that is an industry big enough to allow every man in the country between the ages of 15 and 64 to visit a sex worker every three months. We might quibble that the ONS has no idea, really, how big these black-market businesses are. Dr Brooke Magnanti, author of The Sex Myth – and pseudonymously of various memoirs of her time as a sex worker – is not impressed with the ONS’s methodology. “Why not ask escorts themselves? It’s not as if we’re hard to find,” she has written.
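The “every three months” claim is easy to check with back-of-envelope arithmetic. The three ONS figures are from the text above; the population of men aged 15 to 64 (roughly 20 million) is my own rough assumption, not an ONS number:

```python
# Sanity-checking the ONS assumptions quoted above.
workers = 60_879                 # estimated number of sex workers (ONS)
visits_per_worker = 1_300        # transactions per worker per year (ONS)
avg_price = 67.16                # average rate in pounds (ONS)

total_visits = workers * visits_per_worker        # ~79 million visits a year
industry_value = total_visits * avg_price         # ~GBP 5.3bn a year

men_15_to_64 = 20_000_000        # rough assumption, not an ONS figure
visits_per_man = total_visits / men_15_to_64      # ~4 a year, i.e. quarterly
print(f"{total_visits:,} visits a year, worth about £{industry_value / 1e9:.1f}bn; "
      f"{visits_per_man:.1f} visits per man per year")
```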

She may be right – but even if the ONS does not know exactly how big the sex industry is, we can be pretty sure that the answer is not “zero”. The ONS estimates are surely an improvement on what went before.

Critics will feel this is missing the point. Why should we celebrate drugs and sex work by immortalising them in the national accounts? Are politicians now to subsidise Rizla and Durex in the hope of boosting our economy further? But such criticism is confused in a way that affects far more than this particular statistical revision.

We need to understand three things about gross domestic product statistics. First, GDP itself is ineffable – an attempt to synthesise, for practical purposes, something that defies description. Second, the national accounts are not designed to give a round of applause to the good stuff and a loud raspberry to the bad stuff. They are supposed to measure economic transactions. And, third, anyone who thinks politicians try to maximise GDP has not been paying much attention to politicians.

A hundred years ago, if you had asked someone, “how is the economy doing?”, nobody would have understood what you were saying. Back then economists might have tried to track the production of coal or the number of people with jobs. Yet the idea of putting all economic transactions into one big conceptual pile and trying to measure how big the pile was – that is a newer and quite radical invention. When economists such as Simon Kuznets first tried to calculate national income back in the 1930s, they were trying to understand the malfunctions of the Great Depression, and to measure the productive potential of an economy that was gearing up for war.

This number-crunching has always had a purpose. Governments used to size up economies like a butcher sizes up a cow; suddenly they were taking measurements the way a doctor takes a pulse. William the Conqueror’s Domesday Book, the definitive record of land ownership in Norman times, was replaced with the ONS’s Blue Book of national accounts. The organisation’s new efforts are designed to figure out how much money is being earned and spent – partly for the purpose of international comparison – and it is perfectly right that this includes all voluntary transactions, even undeclared or outright illegal ones. Let others argue over whether sex work and drug taking should be prosecuted or liberalised.

This error goes back at least as far as a speech given by Robert Kennedy in March 1968. The presidential candidate pointed out that official measures of economic output include napalm, nuclear warheads, cigarette advertising and jails – but not “the beauty of our poetry or the strength of our marriages”. It is a wonderful speech but like many wonderful speeches contains a rhetorical bait-and-switch. If poetry is dying and divorce is too common, that is not the fault of the statisticians.

If politicians truly aimed to maximise GDP, George Osborne, UK chancellor of the exchequer, would never have launched an austerity drive, there would be no subsidies for renewable energy, unemployment benefits would expire quickly if they existed at all, and everybody would be clamouring to increase immigration. There is more to life than mere prosperity and there is more to prosperity than GDP growth – and much as our politicians are a woeful gaggle of incompetents, even they seem to grasp that, both in their words and in their actions.

The critics of GDP give it too much credit. It is a painstaking attempt to try to measure the total production of the economy. It is not the guiding star for economic policy, public morality, or anything else.

Sex work and mind-altering substances have been around a long time. After this statistical tweak, it cannot be long before someone starts pointing at the pimps and the pushers, and blaming their existence on the Office for National Statistics.

Also published at ft.com.

Undercover Economist

Gary Becker – the man who put a price on everything

The Nobel Prize winner believed that no matter what the subject, economics always had something insightful to add

Perhaps it was inevitable that there would be something of the knee-jerk about the reaction to the death of the Nobel Prize-winning economist Gary Becker. Published obituaries acknowledged his originality, productivity and influence, of course. But there are many who lament Becker’s economic imperialism – the study of apparently non-economic aspects of life. It is now commonplace for those in the field to consider anything from smoking to parenting to the impact of the contraceptive pill. That is Gary Becker’s influence at work.

Becker makes a convenient bogeyman. It did not help that he could be awkward in discussing emotional issues – despite his influence inside the economics profession, he was not a slick salesman outside it. So it is easy to caricature a man who writes economic models for discrimination, for suicide and for the demand for children. How blinkered such a man must be, the critics say; how intellectually crude and emotionally stunted.

The criticism is unfair. Gary Becker’s economic imperialism was an exercise in soft power. Becker’s view of the world was not that economics was the last word on all human activity. It was that no matter what the subject under consideration, economics would always have something insightful to add. And for many years it fell to Becker to find that insight.

Consider his first book, still one of his most famous works, on the economics of discrimination (1957). There’s a lot to say about discrimination and Becker doesn’t even begin to claim that economics says it all. Instead, he heads straight to where an economist can make a difference. Assume that some people have a prejudicial distaste for others, says Becker. Assume bigoted workers would rather turn down a pay rise than work in a racially mixed environment. Bigoted employers would prefer to employ a white worker for much the same reason that lecherous employers prefer an attractive young personal assistant: because office life is about satisfying desires other than purely financial ones. Some other employers may prefer to hire a white worker because they know their other workers might be prejudiced against a non-white colleague; or because their customers don’t want to look at a non-white face or talk on the telephone to someone with the wrong accent.

Assume for a moment that we live in this wretched world – which we do, and did so even more in the 1950s. In which case, Becker asks: what happens? How do markets unfold? What happens to the wages for the victims of discrimination? Will competitive forces eliminate the wage gap between whites and non-whites, as non-bigoted employers see a chance to make money? In short, how is a social or psychological prejudice translated into economic outcomes?

These questions are neither reductive nor simplistic, and economics is well placed to provide some answers. Bigots can make life upsetting and dangerous for minority groups even if the bigots themselves are not numerous. But Becker showed that to have an impact on minority wages, bigotry would have to be widespread – otherwise unprejudiced companies would simply hire all the minority workers. Market forces tend to separate the bigots from the victims of their bigotry. Relative wages for minorities are determined not by the average level of prejudice in the population but by the most prejudiced employer who nevertheless ends up hiring someone from a minority.

Four decades after Becker’s original book, two economists, Kerwin Kofi Charles and Jonathan Guryan, published a research paper examining his theoretical predictions. They concluded: “We find strong support for all of the key predictions from Becker about the relationship between prejudice and racial wage gaps.”

The research paper from Charles and Guryan is one of many to have looked at Becker’s theories over the decades. He had a tendency to inspire or to provoke others to respond. In 2003 Steven Levitt and Pierre-André Chiappori conducted a survey of recent papers in empirical economics, looking for the economic theorists who had inspired the most empirical research. It was Gary Becker who topped their list – and his nearest rivals weren’t close.

Superficially, Becker appears to stand for the opposite of modern behavioural economics, which these days seems to be the acceptable face of the economics profession. After all, while the behavioural economists bring psychological insights into an analysis of markets, Becker did the opposite, imposing a rational-choice model on non-market situations such as marriage and parenting. Behavioural economists love empirical data but Becker was a theorist. Is he not, then, the opposite of all that is cool and forward-looking in economics?

That criticism only survives the most casual acquaintance with Becker’s work. His Nobel speech, for instance, opens with the comment: “I have tried to pry economists away from narrow assumptions about self-interest. Behaviour is driven by a much richer set of values and preferences.”

Or consider a research paper from the prehistory of behavioural economics (1962), with the title “Irrational Behaviour and Economic Theory”. The paper looks at what impact an “extremely wide class of irrational behaviour” might have on some of the key theorems of economics. It also declares that the ultimate defence of economic theory “is an empirical one”. Theories are powerful but in the end it is the facts that are definitive. These are key questions in the behavioural economics literature today.

The author, of course, was Gary Becker.

Also published at ft.com.

Undercover Economist

When a man is tired of London house prices

‘Since Londoners cannot seem to stop asking, “Is there a bubble?”, I’ve been trying to figure out the answer’

I predicted the UK house price slump of 2007. I was even planning to devote an episode of my BBC2 series to the subject back in 2006, until my producers demanded a different topic. They argued that house prices would assuredly keep rising, so I would seem silly. The replacement theme was “It’s hard to predict the future”; prices duly fell by 15 per cent in real terms.

This triumph should be set in context: I had been forecasting a slump in 2002, 2003, 2004 and 2005. Nevertheless, since Londoners cannot seem to stop discussing the question, “Is there a house price bubble?”, I’ve been trying to figure out the answer, both for Londoners and others.

This isn’t an easy question for economists to answer – bubbles are a matter of psychology, and psychology is not our strong suit. But we can attempt a diagnosis by looking at economic fundamentals. The price of any investment asset should be related to the future income you can derive from that asset, whether it’s the rent you can earn as a landlord, the dividends from corporate shares, or the interest payments on a bond.

A good working definition of a bubble is that the price of an asset has become detached from fundamentals, which means the only way to make money in a bubble is to find a bigger fool to take the thing off your hands.

What are fundamentals telling us? The most straightforward comparison here is of the price of buying a house versus the price of renting one. In the UK, both prices and rents are at high levels relative to income. This is no surprise: everyone knows that the UK has been building too few houses for many years.

But have prices outpaced rents, suggesting a bubble? It seems so. US house prices are at historical norms relative to rents, while in Germany and Japan it is unusually cheap to buy rather than rent. Yet in the UK, house prices are one-third above their long-term value relative to rents. And in London, gross rental yields are lower than in other UK regions, at a slim 5-ish per cent. Such returns look low, given the costs of being a landlord. Logically, either rents should soar or prices should fall.
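The link between rental yields and the price-to-rent ratio in that paragraph is mechanical, and a toy calculation makes it visible. All the numbers here are hypothetical round figures, not market data:

```python
# Hypothetical illustration of yield-versus-price arithmetic.
monthly_rent = 2_000            # made-up rent for an imaginary London flat
annual_rent = 12 * monthly_rent
gross_yield = 0.05              # the "slim 5-ish per cent" mentioned above

# A gross yield is annual rent divided by price, so a given yield implies a price:
implied_price = annual_rent / gross_yield
print(f"Price consistent with a {gross_yield:.0%} yield: £{implied_price:,.0f}")

# If prices rise one-third above that level while rents stand still,
# the yield falls mechanically:
new_price = implied_price * 4 / 3
new_yield = annual_rent / new_price
print(f"Yield after the price rise: {new_yield:.2%}")
```

Either rents rise or prices fall: the two numbers cannot drift apart indefinitely.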

Yet London prices have lost touch with London earnings and with the price of houses in London’s commuter belt, and they continue to rise quickly. All this seems unsustainable, and when interest rates finally rise, surely the distressed sales will begin?

But there are three counter-arguments. The first is that housing is different. The second is that London itself is different. The third is that this time around is different.

Let’s dispense with the argument that housing is somehow bubble-proof. Bricks and mortar seem reassuring but there is no law of economics that says money is safe in housing. Real Japanese house prices have almost halved since 1992. Real house prices in the US have soared and slumped and are now cheaper than they were in the late 1970s.

But what if London itself has a housing market that never falls? We only need to go back 25 years to see that this isn’t true. According to Nationwide, a UK mortgage lender, the average London home was selling for just under £98,000 in the summer of 1989. Prices then fell by one-third and didn’t top £100,000 for nine years. Cumulative inflation over the same period was well over 50 per cent. London housing in the late 1980s was a disastrous investment.
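The scale of that 1990s loss is worth quantifying. A rough sketch, using the nominal figures from the paragraph above and treating “well over 50 per cent” inflation as a flat 50 per cent, a conservative assumption of mine:

```python
# Rough real-terms arithmetic for London housing, 1989-1998.
price_1989 = 98_000           # average London home, summer 1989 (Nationwide)
price_1998 = 100_000          # roughly where prices stood nine years later
cumulative_inflation = 0.50   # "well over 50 per cent", taken at 50% to be cautious

# What the 1989 price would need to have become just to keep pace with inflation:
breakeven = price_1989 * (1 + cumulative_inflation)
real_loss = 1 - price_1998 / breakeven
print(f"Real-terms loss over nine years: about {real_loss:.0%}")
```

Even with the cautious inflation figure, the real loss is around a third, which is what makes late-1980s London housing such a bruising counter-example.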

All we have left, then, is the argument that there is something different about London’s housing market this time around, because London has become an investment hotspot for wealthy foreigners seeking a safe haven for their cash.

Surprisingly, there is truth in this story. Research by Cristian Badarinza and Tarun Ramadorai, economists at the University of Oxford, finds that trouble spots across the world are correlated with hotspots in local London property markets. When the Greek economy imploded, for example, areas of London with a higher proportion of Greek residents saw a measurable pick-up in demand. The thinking here is that if a Greek wants to get his money out of Greece and into London, he’ll pick a place where he already knows people who can scout out property and where he might someday want to live himself.

Yet the laws of supply and demand have a habit of reasserting themselves eventually. Londoners might want to glance at New York – another ludicrously expensive city, and another magnet for money and people from across the world. House prices in New York have fallen by about one-third since 2006 and are at about the same level as in the mid-1980s relative to rents, income or inflation. In the long run, why should London be any different?

The final counter-argument is the most depressing. It’s that returns on all assets will be low in future because the world has entered a secular slump. That means that house prices should be expensive because other assets are expensive too. As Oxford economist Simon Wren-Lewis points out, the secular slump theory should apply globally if it is true. And while some housing markets, including those in Australia, Canada, France and Sweden, also look expensive, others do not.

If the secular slump tale is true, London housing is sensibly priced and property in many other parts of the world has yet to catch up. Heaven help us.

Also published at ft.com.

Undercover Economist

Healthcare: the final reckoning

Somebody, somewhere has to be able to say, “That’s great – but it just costs too much”

This piece incorporates a correction made on 9 May 2014.

If the American right is looking for a “death panel” ruling to complain about, one has just appeared: trastuzumab emtansine, a breast-cancer treatment produced by the Swiss pharmaceutical company Roche, looks unlikely to be endorsed by the UK’s National Institute for Health and Care Excellence (Nice). That is not because the drug doesn’t work – Nice thinks it does – but because it costs too much.

The death-panel fantasy has mutated over time. It originally raised the prospect that Barack Obama’s healthcare reforms would require bureaucrats to decide who was worthy of treatment and who would be left to die. Death panels do not exist, so now the allegation has shifted to the idea that the president’s reforms involve the rationing of healthcare.

So far there is little evidence of that, either. Yet deep beneath the scaremongering is a kernel of truth: if you want to keep costs under control, somebody, somewhere has to be able to say, “That’s great – but it just costs too much.” In the UK, that someone is Nice. Trastuzumab emtansine can still be supplied by the NHS through a special fund. In general, however, when Nice approves or rejects a treatment on the grounds of cost-effectiveness, that ruling will determine whether the NHS will or will not provide that treatment.

How can such decisions be made? The obvious standard is bang for the buck. If $10,000 will extend someone’s life by 10 years, that is better than spending $10,000 to extend someone’s life by 10 hours. Keep spending money on the most cost-effective treatments until all the money is gone. Simple.

Except it is not simple. There is more to a medical treatment than postponing the funeral. Some treatments delay death but impose unpleasant side effects; a hip replacement may improve life without extending it.

Enter the Qaly, or quality-adjusted life-year. The Qaly was dreamt up in 1956 by two health economists, Christopher Cundell and Carlos McCartney. [This claim has been made both in academic papers and here in the New Yorker, which might make you think it is true. It’s not. See here for a discussion.] The idea is straightforward enough: it introduces a way of comparing completely healthy years of life with years of life spent in pain or with limited independence. Living for four years with profound deafness might be – hypothetically – as good as living for three years in perfect health. If so, curing that deafness for four years would be worth one Qaly.
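The Qaly arithmetic in the deafness example can be written out directly. The three-quarters quality weight is the column’s own hypothetical; the £20,000 treatment cost is an invented number:

```python
# A minimal Qaly calculation. Four years with profound deafness is taken
# (hypothetically) to be as good as three years in perfect health, so the
# quality weight for deafness is 3/4.
def qalys(years: float, quality_weight: float) -> float:
    """Quality-adjusted life-years: calendar years scaled by quality of life."""
    return years * quality_weight

weight_deaf = 3 / 4
with_deafness = qalys(4, weight_deaf)  # 3.0 Qalys
cured = qalys(4, 1.0)                  # 4.0 Qalys
gain = cured - with_deafness           # 1.0 Qaly from four years of cure

# Cost-effectiveness then reduces to cost per Qaly gained
# (the treatment cost here is invented):
cost_per_qaly = 20_000 / gain
```

A body such as Nice can then rank treatments by cost per Qaly gained and fund the cheapest gains first.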

No doubt you can see the difficulties already. Are we really so sure we know how unpleasant it is to be deaf? The answer is subjective and fraught with politics: the worse the experience of a profoundly deaf person is deemed to be, the more resources the Qaly standard will mobilise in search of effective treatments. Yet the logic of the Qaly also says that if life is terrible for the profoundly deaf, curing cancer in people who can hear is more cost-effective than curing cancer for people who cannot. Ouch.

In practice this is unlikely to pose a problem: Nice will have either approved the cancer therapy or not, and nobody is going to deny it to the deaf by referring to a Qaly table. Still, the example is troubling.

There seems to be little prospect of cost-based rationing in the US. That is a shame: Americans may not realise quite how much this aversion to cost-effectiveness is costing them.

The UK system, dominated by the taxpayer-funded NHS, is far cheaper than the US healthcare system. That’s no surprise. But the astonishing thing about the US system, long caricatured by both its critics and its defenders as a bastion of free-market provision, is that the US taxpayer spends far more per person on healthcare than the UK taxpayer does. (This was true long before anyone had ever heard of Barack Obama.)

Indeed the US government spends more per person on healthcare than almost anywhere in the world. Norway, Luxembourg and Monaco can plausibly claim to have more generously funded public healthcare systems than the US but nowhere else comes close. That’s the cost of, well, not caring about cost.

If the US healthcare system is financially incontinent and the UK system is reliant on a centralised and philosophically troubling cost-benefit analysis, is there some better way?

. . .

Singapore offers an intriguing model. The aim of the country’s healthcare system is to get patients to face some of the costs of their own treatment. Healthcare is part-nationalised and somewhat subsidised. Individual citizens have a compulsory savings account to build up a nest egg for medical expenses, and there’s an insurance programme to deal with the most expensive treatments. But, broadly, the idea is that you have money in a healthcare account, and it’s up to you to decide how you want to spend it.

This makes healthcare more like a regular consumer market. If you have the relevant form of breast cancer and you want to give trastuzumab emtansine a go, then in a Singapore-style system it’s your money and it’s your choice.

I can’t see the idea catching on in the UK and probably not in the US either. People are too used to the idea that someone else – the state or an insurer – will pay the bill. Free choice is nice but what everyone seems to prefer is free treatment.

Also published at ft.com.

Other Writing

Gary Becker, 1930-2014

Gary Becker passed away on Saturday. My obituary for the Financial Times is below.

Gary Becker, the man who led the movement to apply economic ideas to areas of life such as marriage, discrimination and crime, died on May 3 after a long illness. He was 83.
Born in a coal-mining town in Pennsylvania, raised in Brooklyn and with a mathematics degree summa cum laude from Princeton, it was not until Becker arrived at the University of Chicago that he realised “I had to begin to learn again what economics is all about”.
He had considered taking up sociology, but found it “too difficult”. Yet he was to return to the questions of sociology again and again over the years, taking pleasure in wielding the rigorous yet reductive mathematical tools of economics. This approach was to win him the Nobel Memorial Prize in Economics in 1992, and make him one of the most influential and most cited economists of the 20th century.
His doctoral dissertation was on the economics of discrimination – how to measure it and what effects it might have. Becker showed that discrimination was costly for the bigot as well as the victim. This seemed strange material for an economist, and Becker attracted little attention for his ideas when he published a book on discrimination in 1957.
This didn’t seem to worry him. “My whole philosophy has been to be conventional in things such as dress and so on,” he told me in 2005. “But when it comes to ideas, I’ll be willing to stick my neck out: I can take criticism if I think I’m right.”
He received plenty of that criticism over the years for daring to develop economic theories of crime and punishment, of the demand for children, and of rational addicts who may quit in response to a credible threat to raise the price of cigarettes. His idea that individuals might think of their education as an investment, with a rate of return, caused outrage. Yet nobody now frets about the use of the phrase “human capital”, the title of one of Becker’s books.
That exemplifies the way that Becker’s approach has changed the way that economists think about what they do, often without explicitly recognising his influence. He was economically omnivorous: colleagues such as Lars Peter Hansen, a fellow Nobel laureate, would find Becker quizzing them and providing penetrating comments even on research that seemed far removed from Becker’s main interests.
“He will be remembered as a person who in a very creative way broadened the scope of economic analysis,” said Professor Hansen, “And as one of the very best economists of the 20th century.”
Becker’s life-long affection was for the subject he transformed. On weekend afternoons, he would often be found in his office, writing or answering questions from young academics six decades his junior. He continued to write a blog with the legal scholar Richard Posner until a few weeks before his death.
“He loved economics,” said Kevin Murphy, who taught a course alongside Becker for many years, “and he inspired so many economists.” Perhaps the most likely result of a class with Becker was not mastering a particular formal technique, but acquiring that distinctive economist’s outlook on the world.
That worldview was on display when on the way to his Lunch with the FT, Gary Becker parked illegally. On cross-examination, he cheerfully told me that after weighing the risks and benefits, this was a rational crime.
“That sounds like Gary to me,” said Prof Murphy. “He decided to give you a practical lesson in economics.”
Becker was widowed in 1970, and remarried in 1980 to a Chicago history professor, Guity Nashat. She survives him, as does a daughter, Catherine Becker; a sister, Natalie Becker; a stepson and two grandsons.

You can read my lunch with Gary Becker, or read more about his ideas in The Logic of Life. It was clear, speaking to his colleagues, that he will be greatly missed.

5 May 2014
Undercover Economist

The random risks of randomised trials

‘There are perils to treating patients not as human beings but as means to some glorious end’

The backlash against randomised trials in policy has begun. Randomised controlled trials (RCTs) are widely accepted as the foundation for evidence-based medicine. Yet a decade ago, they were extremely rare in other contexts such as economics, criminal justice or social policy. That is changing.

In the UK, Downing Street’s newly privatised Behavioural Insights Team has made it cool to test new ideas for conducting policy by running experiments in which many thousands of participants receive various treatments at random. The Education Endowment Foundation, set up with £125m of UK government money, has begun 59 RCTs involving 2,300 schools. In the aid industry, RCTs have been popularised by MIT’s Poverty Action Lab, which celebrated its 10th anniversary last summer – one estimate is that 500 RCTs are under way in the field of education policy alone.

With such a dramatic expansion of the use of randomised trials, it’s only right that we ask some hard questions about how they are being used. The World Bank’s development impact blog has been hosting a debate about the ethics of these trials; they have been criticised in The New York Times and in an academic article by economists Steve Ziliak and Edward Teather-Posadas.

Objections to the idea of randomisation aren’t new. The great epidemiologist Archie Cochrane once ran an RCT of coronary care units, with the alternative treatment being care at home. He was vigorously attacked by cardiologists: how could he justify randomly denying treatment to patients? The counter-argument is simple: how could we justify prescribing treatments without knowing whether or not they work?

Yet that should not give carte blanche for evaluators to do whatever they like. Hanging in the background of this debate are awful abuses such as the “Tuskegee Study of Untreated Syphilis in the Negro Male”, which began in 1932. Researchers went to extraordinary lengths to ensure 400 African-American men with syphilis went untreated, although a proven treatment was available from 1947. When the experiment ended in 1972, many men were dead, 40 wives had been infected and 19 children with congenital syphilis had been born.

The Tuskegee study was not a randomised trial, but it demonstrates the perils of treating patients not as human beings but as means to some glorious end. This topic is rightly sensitive in development aid, as there is a clear power imbalance between the agencies who pay for new interventions and the poverty-stricken citizens on the receiving end.

In a perfect world, everyone involved in a trial would give informed consent, and everyone in the control group would receive the best available alternative to the approach being tested. (These are the basic guidelines laid out for medical trials by the World Medical Association’s “Helsinki” declaration.)

Yet compromises are common. Dean Karlan is professor of economics at Yale and founder of Innovations for Poverty Action, which evaluates development projects using randomisation. He points out that telling participants too much about the trial destroys the validity of the results by changing everyone’s behaviour.

Then there is the question of who consents. Camilla Nevill of the Education Endowment Foundation says that trials are often agreed to and conducted by schools. Trying to persuade every parent to agree explicitly to the trial “decimates” the number of participants, she says.

Is this ethically troubling? At first glance, yes. But there is a risk of a double standard. Without the EEF funding, some schools would adopt the new teaching approach anyway. It is only when a researcher proposes a meaningful evaluation that suddenly there is talk of informed consent.

Ben Goldacre, an epidemiologist and author of Bad Pharma, says “it’s reasonable to hold researchers to a higher standard” if only to protect the reputation of rigorous research. But how high a standard is high enough?

Steve Ziliak, a critic of RCTs, complains about one conducted in China in which some visually-impaired children were given glasses while others received nothing. The case against the trial is that we no more need a randomised trial of spectacles than we need a randomised trial of the parachute.

The case for the defence is that we know that spectacles work but we don’t know how important it might be to pay for spectacles rather than, say, textbooks or vitamin supplements. None of these children was in line to receive glasses anyway, so what harm have the researchers inflicted?

I should leave the final word to Archie Cochrane. In his trial of coronary care units, run in the teeth of vehement opposition, early results suggested that home care was at the time safer than hospital care. Mischievously, Cochrane swapped the results round, giving the cardiologists the (false) message that their hospitals were best all along.

“They were vociferous in their abuse,” he later wrote, and demanded that the “unethical” trial stop immediately. He then revealed the truth and challenged the cardiologists to close down their own hospital units without delay. “There was dead silence.”

The world often surprises even the experts. When considering an intervention that might profoundly affect people’s lives, if there is one thing more unethical than running a randomised trial, it’s not running the trial.

Also published at ft.com.


Big Data: Are we making a big mistake?

Five years ago, a team of researchers from Google announced a remarkable achievement in one of the world’s top scientific journals, Nature. Without needing the results of a single medical check-up, they were nevertheless able to track the spread of influenza across the US. What’s more, they could do it more quickly than the Centers for Disease Control and Prevention (CDC). Google’s tracking had only a day’s delay, compared with the week or more it took for the CDC to assemble a picture based on reports from doctors’ surgeries. Google was faster because it was tracking the outbreak by finding a correlation between what people searched for online and whether they had flu symptoms.

Not only was “Google Flu Trends” quick, accurate and cheap, it was theory-free. Google’s engineers didn’t bother to develop a hypothesis about what search terms – “flu symptoms” or “pharmacies near me” – might be correlated with the spread of the disease itself. The Google team just took their top 50 million search terms and let the algorithms do the work.
The success of Google Flu Trends became emblematic of the hot new trend in business, technology and science: “Big Data”. What, excited journalists asked, can science learn from Google?
As with so many buzzwords, “big data” is a vague term, often thrown around by people with something to sell. Some emphasise the sheer scale of the data sets that now exist – the Large Hadron Collider’s computers, for example, store 15 petabytes a year of data, equivalent to about 15,000 years’ worth of your favourite music.
But the “big data” that interests many companies is what we might call “found data”, the digital exhaust of web searches, credit card payments and mobiles pinging the nearest phone mast. Google Flu Trends was built on found data and it’s this sort of data that interests me here. Such data sets can be even bigger than the LHC data – Facebook’s is – but just as noteworthy is the fact that they are cheap to collect relative to their size, they are a messy collage of datapoints collected for disparate purposes and they can be updated in real time. As our communication, leisure and commerce have moved to the internet and the internet has moved into our phones, our cars and even our glasses, life can be recorded and quantified in a way that would have been hard to imagine just a decade ago.
Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote “The End of Theory”, a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”.
Unfortunately, these four articles of faith are at best optimistic oversimplifications. At worst, according to David Spiegelhalter, Winton Professor of the Public Understanding of Risk at Cambridge university, they can be “complete bollocks. Absolute nonsense.”
Found data underpin the new internet economy as companies such as Google, Facebook and Amazon seek new ways to understand our lives through our data exhaust. Since Edward Snowden’s leaks about the scale and scope of US electronic surveillance, it has become apparent that the security services are just as fascinated by what they might learn from our data exhaust.
Consultants urge the data-naive to wise up to the potential of big data. A recent report from the McKinsey Global Institute reckoned that the US healthcare system could save $300bn a year – $1,000 per American – through better integration and analysis of the data produced by everything from clinical trials to health insurance transactions to smart running shoes.
But while big data promise much to scientists, entrepreneurs and governments, they are doomed to disappoint us if we ignore some very familiar statistical lessons.
“There are a lot of small data problems that occur in big data,” says Spiegelhalter. “They don’t disappear because you’ve got lots of the stuff. They get worse.”
. . .
Four years after the original Nature paper was published, Nature News had sad tidings to convey: the latest flu outbreak had claimed an unexpected victim – Google Flu Trends. After reliably providing a swift and accurate account of flu outbreaks for several winters, the theory-free, data-rich model had lost its nose for where flu was going. Google’s model pointed to a severe outbreak but when the slow-and-steady data from the CDC arrived, they showed that Google’s estimates of the spread of flu-like illnesses were overstated by almost a factor of two.
The problem was that Google did not know – could not begin to know – what linked the search terms with the spread of flu. Google’s engineers weren’t trying to figure out what caused what. They were merely finding statistical patterns in the data. They cared about correlation rather than causation. This is common in big data analysis. Figuring out what causes what is hard (impossible, some say). Figuring out what is correlated with what is much cheaper and easier. That is why, according to Viktor Mayer-Schönberger and Kenneth Cukier’s book, Big Data, “causality won’t be discarded, but it is being knocked off its pedestal as the primary fountain of meaning”.
But a theory-free analysis of mere correlations is inevitably fragile. If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down. One explanation of the Flu Trends failure is that the news was full of scary stories about flu in December 2012 and that these stories provoked internet searches by people who were healthy. Another possible explanation is that Google’s own search algorithm moved the goalposts when it began automatically suggesting diagnoses when people entered medical symptoms.
Google Flu Trends will bounce back, recalibrated with fresh data – and rightly so. There are many reasons to be excited about the broader opportunities offered to us by the ease with which we can gather and analyse vast data sets. But unless we learn the lessons of this episode, we will find ourselves repeating it.
Statisticians have spent the past 200 years figuring out what traps lie in wait when we try to understand the world through data. The data are bigger, faster and cheaper these days – but we must not pretend that the traps have all been made safe. They have not.
. . .
In 1936, the Republican Alfred Landon stood for election against President Franklin Delano Roosevelt. The respected magazine The Literary Digest shouldered the responsibility of forecasting the result. It conducted a postal opinion poll of astonishing ambition, with the aim of reaching 10 million people, a quarter of the electorate. The deluge of mailed-in replies can hardly be imagined but the Digest seemed to be relishing the scale of the task. In late August it reported, “Next week, the first answers from these ten million will begin the incoming tide of marked ballots, to be triple-checked, verified, five-times cross-classified and totalled.”
After tabulating an astonishing 2.4 million returns as they flowed in over two months, The Literary Digest announced its conclusions: Landon would win by a convincing 55 per cent to 41 per cent, with a few voters favouring a third candidate.
The election delivered a very different result: Roosevelt crushed Landon by 61 per cent to 37 per cent. To add to The Literary Digest’s agony, a far smaller survey conducted by the opinion poll pioneer George Gallup came much closer to the final vote, forecasting a comfortable victory for Roosevelt. Mr Gallup understood something that The Literary Digest did not. When it comes to data, size isn’t everything.
Opinion polls are based on samples of the voting population at large. This means that opinion pollsters need to deal with two issues: sample error and sample bias.
Sample error reflects the risk that, purely by chance, a randomly chosen sample of opinions does not reflect the true views of the population. The “margin of error” reported in opinion polls reflects this risk and the larger the sample, the smaller the margin of error. A thousand interviews is a large enough sample for many purposes and Mr Gallup is reported to have conducted 3,000 interviews.
But if 3,000 interviews were good, why weren’t 2.4 million far better? The answer is that sampling error has a far more dangerous friend: sampling bias. Sampling error is when a randomly chosen sample doesn’t reflect the underlying population purely by chance; sampling bias is when the sample isn’t randomly chosen at all. George Gallup took pains to find an unbiased sample because he knew that was far more important than finding a big one.
The Literary Digest, in its quest for a bigger data set, fumbled the question of a biased sample. It mailed out forms to people on a list it had compiled from automobile registrations and telephone directories – a sample that, at least in 1936, was disproportionately prosperous. To compound the problem, Landon supporters turned out to be more likely to mail back their answers. The combination of those two biases was enough to doom The Literary Digest’s poll. For each person George Gallup’s pollsters interviewed, The Literary Digest received 800 responses. All that gave them for their pains was a very precise estimate of the wrong answer.
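A toy simulation makes Gallup’s lesson concrete: a small random sample beats a huge biased one. The population shares and sample designs below are stylised assumptions loosely matching the 1936 story, not the actual polling data:

```python
import random

random.seed(1936)

# Stylised electorate: 61 per cent Roosevelt (1), 39 per cent Landon (0).
population = [1] * 61_000 + [0] * 39_000

# Gallup: a small but genuinely random sample.
gallup = random.sample(population, 3_000)
gallup_share = sum(gallup) / len(gallup)  # close to the true 0.61

# The margin of error of a random sample shrinks roughly like 1/sqrt(n):
margin = 1 / (3_000 ** 0.5)  # under 2 percentage points for 3,000 interviews

# The Digest: 800 times more responses, but drawn so that Landon voters
# (car owners, telephone subscribers, keener responders) are over-represented.
digest = [0] * 1_400_000 + [1] * 1_000_000
digest_share = sum(digest) / len(digest)  # about 0.42 - precisely wrong
```

The biased sample’s error does not shrink as it grows; it just becomes a more precise estimate of the wrong answer.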
The big data craze threatens to be The Literary Digest all over again. Because found data sets are so messy, it can be hard to figure out what biases lurk inside them – and because they are so large, some analysts seem to have decided the sampling problem isn’t worth worrying about. It is.
Professor Viktor Mayer-Schönberger of Oxford’s Internet Institute, co-author of Big Data, told me that his favoured definition of a big data set is one where “N = All” – where we no longer have to sample, but we have the entire background population. Returning officers do not estimate an election result with a representative tally: they count the votes – all the votes. And when “N = All” there is indeed no issue of sampling bias because the sample includes everyone.
But is “N = All” really a good description of most of the found data sets we are considering? Probably not. “I would challenge the notion that one could ever have all the data,” says Patrick Wolfe, a computer scientist and professor of statistics at University College London.
An example is Twitter. It is in principle possible to record and analyse every message on Twitter and use it to draw conclusions about the public mood. (In practice, most researchers use a subset of that vast “fire hose” of data.) But while we can look at all the tweets, Twitter users are not representative of the population as a whole. (According to the Pew Research Internet Project, in 2013, US-based Twitter users were disproportionately young, urban or suburban, and black.)
There must always be a question about who and what is missing, especially with a messy pile of found data. Kaiser Fung, a data analyst and author of Numbersense, warns against simply assuming we have everything that matters. “N = All is often an assumption rather than a fact about the data,” he says.
Consider Boston’s Street Bump smartphone app, which uses a phone’s accelerometer to detect potholes without the need for city workers to patrol the streets. As citizens of Boston download the app and drive around, their phones automatically notify City Hall of the need to repair the road surface. Solving the technical challenges involved has produced, rather beautifully, an informative data exhaust that addresses a problem in a way that would have been inconceivable a few years ago. The City of Boston proudly proclaims that the “data provides the City with real-time information it uses to fix problems and plan long term investments.”
Yet what Street Bump really produces, left to its own devices, is a map of potholes that systematically favours young, affluent areas where more people own smartphones. Street Bump offers us “N = All” in the sense that every bump from every enabled phone can be recorded. That is not the same thing as recording every pothole. As Microsoft researcher Kate Crawford points out, found data contain systematic biases and it takes careful thought to spot and correct for those biases. Big data sets can seem comprehensive but the “N = All” is often a seductive illusion.
. . .
Who cares about causation or sampling bias, though, when there is money to be made? Corporations around the world must be salivating as they contemplate the uncanny success of the US discount department store Target, as famously reported by Charles Duhigg in The New York Times in 2012. Duhigg explained that Target has collected so much data on its customers, and is so skilled at analysing that data, that its insight into consumers can seem like magic.
Duhigg’s killer anecdote was of the man who stormed into a Target near Minneapolis and complained to the manager that the company was sending coupons for baby clothes and maternity wear to his teenage daughter. The manager apologised profusely and later called to apologise again – only to be told that the teenager was indeed pregnant. Her father hadn’t realised. Target, after analysing her purchases of unscented wipes and magnesium supplements, had.
Statistical sorcery? There is a more mundane explanation.
“There’s a huge false positive issue,” says Kaiser Fung, who has spent years developing similar approaches for retailers and advertisers. What Fung means is that we didn’t get to hear the countless stories about all the women who received coupons for babywear but who weren’t pregnant.
Hearing the anecdote, it’s easy to assume that Target’s algorithms are infallible – that everybody receiving coupons for onesies and wet wipes is pregnant. This is vanishingly unlikely. Indeed, it could be that pregnant women receive such offers merely because everybody on Target’s mailing list receives such offers. We should not buy the idea that Target employs mind-readers before considering how many misses attend each hit.
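Fung’s false-positive point is base-rate arithmetic. The numbers below are invented to illustrate it; they are not Target’s:

```python
# Why an accurate-sounding pregnancy predictor still mostly flags shoppers
# who are not pregnant. All rates here are invented for illustration.
base_rate = 0.02             # share of mailing-list shoppers who are pregnant
sensitivity = 0.80           # chance of being flagged, given pregnant
false_positive_rate = 0.05   # chance of being flagged, given not pregnant

flagged_pregnant = base_rate * sensitivity            # 0.016 of all shoppers
flagged_not = (1 - base_rate) * false_positive_rate   # 0.049 of all shoppers
precision = flagged_pregnant / (flagged_pregnant + flagged_not)
# precision is roughly 0.25: with these numbers, about three out of four
# baby-coupon books go to women who are not pregnant at all.
```

Because pregnant shoppers are rare to begin with, even a modest false-positive rate swamps the genuine hits.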
In Charles Duhigg’s account, Target mixes in random offers, such as coupons for wine glasses, because pregnant customers would feel spooked if they realised how intimately the company’s computers understood them.
Fung has another explanation: Target mixes up its offers not because it would be weird to send an all-baby coupon-book to a woman who was pregnant but because the company knows that many of those coupon books will be sent to women who aren’t pregnant after all.
None of this suggests that such data analysis is worthless: it may be highly profitable. Even a modest increase in the accuracy of targeted special offers would be a prize worth winning. But profitability should not be conflated with omniscience.
. . .
In 2005, John Ioannidis, an epidemiologist, published a research paper with the self-explanatory title, “Why Most Published Research Findings Are False”. The paper became famous as a provocative diagnosis of a serious issue. One of the key ideas behind Ioannidis’s work is what statisticians call the “multiple-comparisons problem”.
It is routine, when examining a pattern in data, to ask whether such a pattern might have emerged by chance. If it is unlikely that the observed pattern could have emerged at random, we call that pattern “statistically significant”.
The multiple-comparisons problem arises when a researcher looks at many possible patterns. Consider a randomised trial in which vitamins are given to some primary schoolchildren and placebos are given to others. Do the vitamins work? That all depends on what we mean by “work”. The researchers could look at the children’s height, weight, prevalence of tooth decay, classroom behaviour, test scores, even (after waiting) prison record or earnings at the age of 25. Then there are combinations to check: do the vitamins have an effect on the poorer kids, the richer kids, the boys, the girls? Test enough different correlations and fluke results will drown out the real discoveries.
There are various ways to deal with this but the problem is more serious in large data sets, because there are vastly more possible comparisons than there are data points to compare. Without careful analysis, the ratio of genuine patterns to spurious patterns – of signal to noise – quickly tends to zero.
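The drowning-out effect is easy to demonstrate by simulation. The sketch below (with invented parameters) checks 200 vitamin-style comparisons in which the treatment does nothing at all; at the conventional 5 per cent significance threshold, a handful of them will come back "significant" by pure chance:

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

def null_trial(n=50):
    """One outcome of a trial in which the treatment has NO real effect:
    both groups are drawn from the same distribution."""
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    se = ((statistics.variance(treated) + statistics.variance(control)) / n) ** 0.5
    z = (statistics.mean(treated) - statistics.mean(control)) / se
    return abs(z) > 1.96  # "statistically significant" at the 5% level

# Check 200 different outcomes (height, weight, test scores, subgroups...)
spurious_hits = sum(null_trial() for _ in range(200))
print(f"Spurious 'discoveries' out of 200 comparisons: {spurious_hits}")
```

Roughly 5 per cent of the comparisons come back significant even though nothing is going on; the more outcomes and subgroups a researcher checks, the more such flukes pile up.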
Worse still, one of the antidotes to the multiple-comparisons problem is transparency, allowing other researchers to figure out how many hypotheses were tested and how many contrary results are languishing in desk drawers because they just didn’t seem interesting enough to publish. Yet found data sets are rarely transparent. Amazon and Google, Facebook and Twitter, Target and Tesco – these companies aren’t about to share their data with you or anyone else.
New, large, cheap data sets and powerful analytical tools will pay dividends – nobody doubts that. And there are a few cases in which analysis of very large data sets has worked miracles. David Spiegelhalter of Cambridge points to Google Translate, which operates by statistically analysing hundreds of millions of documents that have been translated by humans and looking for patterns it can copy. This is an example of what computer scientists call “machine learning”, and it can deliver astonishing results with no preprogrammed grammatical rules. Google Translate is as close to a theory-free, data-driven algorithmic black box as we have – and it is, says Spiegelhalter, “an amazing achievement”. That achievement is built on the clever processing of enormous data sets.
But big data do not solve the problem that has obsessed statisticians and scientists for centuries: the problem of insight, of inferring what is going on, and figuring out how we might intervene to change a system for the better.
“We have a new resource here,” says Professor David Hand of Imperial College London. “But nobody wants ‘data’. What they want are the answers.”
To use big data to produce such answers will require large strides in statistical methods.
“It’s the wild west right now,” says Patrick Wolfe of UCL. “People who are clever and driven will twist and turn and use every tool to get sense out of these data sets, and that’s cool. But we’re flying a little bit blind at the moment.”
Statisticians are scrambling to develop new methods to seize the opportunity of big data. Such new methods are essential but they will work by building on the old statistical lessons, not by ignoring them.
Recall big data’s four articles of faith. Uncanny accuracy is easy to overrate if we simply ignore false positives, as with Target’s pregnancy predictor. The claim that causation has been “knocked off its pedestal” is fine if we are making predictions in a stable environment but not if the world is changing (as with Flu Trends) or if we ourselves hope to change it. The promise that “N = All”, and therefore that sampling bias does not matter, is simply not true in most cases that count. As for the idea that “with enough data, the numbers speak for themselves” – that seems hopelessly naive in data sets where spurious patterns vastly outnumber genuine discoveries.
“Big data” has arrived, but big insights have not. The challenge now is to solve new problems and gain new answers – without making the same old statistical mistakes on a grander scale than ever.

This article was first published in the FT Magazine, 29/30 March 2014.
