Tim Harford The Undercover Economist

Other Writing

Articles from the New York Times, Forbes, Wired and beyond – any piece that isn’t one of my columns.


The Undoing Project – Book Review

Michael Lewis could spin gold out of any topic he chose but his best work has shone a spotlight into corners of the world that weren’t getting enough attention until Lewis came along. Liar’s Poker described bond trader Lewis Ranieri and the way securitisation revolutionised Wall Street in the 1980s. Moneyball covered baseball manager Billy Beane and anticipated the “quants” taking over the world. And The Big Short depicted Steve Eisman and Michael Burry, the men who spotted the financial crisis coming and bet vast sums on it.

The Undoing Project, then, is a departure, because it’s a biography of two well-established figures: Daniel Kahneman and Amos Tversky, the Israeli psychologists whose partnership produced the foundations of what we now call behavioural economics. Despite an introduction by Lewis declaring that he hadn’t heard of them until 2003, neither man remotely counts as an unknown.

When Tversky died young, in 1996, he was on the secret shortlist for a Nobel memorial prize in economics, and received a detailed obituary in The New York Times. Kahneman won the Nobel economics prize in 2002 and published his own bestselling book, Thinking, Fast and Slow, in 2011. Their ideas are everywhere; it’s almost impossible to find a book in the “smart thinking” section of a bookshop that doesn’t cite Kahneman and Tversky: an irony, since their work highlights many of the ways in which our thinking isn’t smart at all.

For example, they identified the “representativeness heuristic” — our tendency to make judgments by comparing an example to some mental model. When we meet a nervous, geeky-looking gentleman we note that he matches our stereotype of a programmer and, therefore, probably is a programmer. We forget that most of the people we meet are not, in fact, programmers, no matter how much they might resemble them.

This matters, because when judging probabilities we often skip over the real question, “Is this likely?”, in favour of a representativeness question: “Does this match my preconceptions?”. “Is the lump likely to be a malignant tumour?” becomes “Does the lump match my idea of what a malignant tumour looks like?”. It’s a reasonable rule of thumb that can lead us seriously astray.
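How far astray can the rule of thumb lead us? A back-of-envelope Bayes calculation, with numbers invented purely for illustration, shows how a strong stereotype match can still leave the odds against:

```python
# Base-rate neglect illustrated with hypothetical numbers.
# Suppose 1% of the people we meet are programmers, that 60% of
# programmers match the nervous-geek stereotype, and that 5% of
# everyone else does too.
p_prog = 0.01
p_match_given_prog = 0.60
p_match_given_other = 0.05

# Bayes' rule: P(programmer | matches the stereotype)
p_match = p_match_given_prog * p_prog + p_match_given_other * (1 - p_prog)
p_prog_given_match = p_match_given_prog * p_prog / p_match

print(round(p_prog_given_match, 3))  # about 0.108
```

Even when the stereotype fits twelve times better than it fits everyone else, the person in front of us is still, nine times out of ten, not a programmer, because programmers are rare to begin with.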

All this is well known to anyone who has read Kahneman himself or popularisations of his work, so what does Lewis add? He’s a far better writer than most, meaning that even the familiar is fresh. And there is a great deal here that feels new. Lewis has done his homework; he has evidently talked to the right people — with the inevitable omission of the much-missed Tversky — and he knows how to tell a story simply, powerfully and with an eye for the telling detail.

Yet The Undoing Project gets off to a shaky start with a chapter discussing the selection of basketball players and the way in which basketball scouts commit various cognitive errors. Perhaps the success of Moneyball encouraged Lewis and his editor to think this was wise but it adds very little to our appreciation of the main characters, and much of the chapter is baffling unless one happens to be a fan of American sports.

All is forgiven in chapter two, when we meet the young Danny Kahneman, a Paris-raised Jew whose family spent the war dodging the Nazis and their sympathisers. No matter how many accounts of such horrors one reads, they still fill the reader with sadness and a kind of awe at the survivors. At the age of seven, Danny was caught on the streets after curfew by an SS soldier. The man didn’t notice the yellow star under his sweater; instead, he hugged little Danny and, full of emotion, showed him a photograph of another young boy. Then he gave Danny some money and sent him on his way. “People were endlessly complicated and interesting,” Kahneman recalled.

Tversky is no less deftly portrayed: as a child, he was so bullish that he was willing to leap from a high diving board despite being unable to swim — he simply arranged for a bigger boy to be on hand to drag him to safety. As a soldier, Tversky saw a comrade pull the pin on a grenade-like explosive, then faint. As his commanding officer yelled orders to stay put, Tversky dashed forward, dragged the stricken man a few yards away, then dived to cover him, taking the shrapnel into his own body. Yet he berated his own men for carelessly taking risks. “If a bullet is going to kill me, it has my name on it anyway,” they would say. Tversky, a quick wit, reminded them that many bullets were addressed “To Whom It May Concern”.

Today, Kahneman and Tversky’s view of human psychology is widely accepted, and thanks to his longevity and his Nobel Prize, Kahneman is a more famous figure than Tversky. But Lewis takes us back in time, conjuring up the 1970s, when their ideas were new and controversial, they were operating in the backwater of Israeli academia, and when it was the mesmerising Amos rather than the quiet Danny who won all the attention.

Behavioural economics itself is not a major part of the book. Richard Thaler, the most important intellectual conduit between Kahneman and Tversky and economics, does not appear in the story until the closing chapters. While Tversky loved to have an intellectual foe to slay, it would diminish his work with Kahneman to define it merely as a takedown of textbook economics. By writing less about behavioural economics Lewis gives Kahneman and Tversky’s ideas room to breathe.

Lewis admires his subjects and believes they are right about everything important. He has no time for rational economic man, and brutally dismisses one noted critic of Kahneman, the psychologist Gerd Gigerenzer. But this isn’t a hagiography. Tversky is depicted as intellectually aggressive, contemptuous of many academics and perversely compelled to needle the vulnerable Kahneman. Meanwhile, a new side to Kahneman emerges. In my limited personal experience, Kahneman seems wise, kindly and stoic in the face of his advancing years. But Lewis describes the younger Kahneman as depressed, envious of his celebrated partner and desperately needy.

Nevertheless, it is clear that Lewis is cheering our heroes on, and the reader cannot help but join him. The story he tells of their intellectual love affair, and its painful disintegration, is vivid, original and hard to forget.
Written for and first published in the Financial Times.

My new book “Messy” is now out and available online in the US and UK or in good bookshops everywhere.

Free email updates

(You can unsubscribe at any time)

23rd of December, 2016

Thomas C. Schelling, 1921 – 2016

Thomas C. Schelling, who died on December 13 at the age of 95, was a self-described “errant economist” who worked as a Cold War strategist and won the most prestigious prize of his profession.

Schelling was a popular winner of the Nobel memorial prize for economics. Journalists found his lively prose and counterintuitive ideas easier to describe than the complex equations of his fellow laureate Robert Aumann.

But the Californian-born economist was an unlikely laureate. His ideas were rich and influential — and easily expressed in plain English. He highlighted weaknesses in standard economic approaches, deploying vivid thought experiments more suited to moral philosophy than to economics, and rarely cited other academics.

Instead, Schelling used academia as a vantage point from which to advise the administrations of Presidents Kennedy, Johnson and Nixon. He was at Harvard University for 31 years, and said of one role there that it had given him a decade of “freedom to write and to consult, and I spent much of my time, especially during the summer, doing advisory work for the government.”

That advisory work drew on one discipline in particular.

Game theory had been dreamt up by the mathematician John von Neumann, as an attempt to model in mathematical terms human interactions from poker through to strikes or cartels.

The Hungarian-born von Neumann was a hawk (“If you say why not bomb [the Soviets] tomorrow, I say why not today?”) but Schelling took game theory in a new direction. He emphasised that even the most implacable foes could find areas of common interest — most obviously, during the Cold War, the necessity of avoiding mutual annihilation.

To this end, in the late 1950s and the 1960s, Schelling’s advisory work and his publications focused on issues of effective deterrence, communication, and the strategic limitation of arms. He was a consultant for Stanley Kubrick’s 1964 film Dr Strangelove, a nuclear annihilation comedy which introduces a “doomsday device”. The device is the ultimate deterrent: it will be triggered automatically in the case of war. Alas, it’s a secret, which limits the deterrent effect.

The doomsday device was the perfect illustration of some of Schelling’s favourite themes: strategic commitment, miscommunication, and unintended consequences. It is no coincidence that it was Schelling who insisted that Washington and Moscow establish a secure hotline and work out protocols for ensuring it was tamper-proof. This attention to details that others overlook was a spark for his best academic work. It is also one reason why nuclear weapons have not been used since 1945.

Schelling ended his advisory work with a letter opposing the 1970 US military campaign in Cambodia. He later worked on the problem of addiction, developing models of self-control that were precursors to what is today known as behavioural economics. This research was partly motivated by his own long and eventually successful struggle to stop smoking. And in 1980, at the request of President Carter, he became one of the first economists to work on the problem of human-induced climate change.

He also anticipated the use of complexity science in economics with a celebrated “chessboard” model of segregation. This showed how two racial groups could completely segregate from each other in a chain reaction despite being quite comfortable in a mixed neighbourhood.

These days such modelling is done on a computer, but Schelling originally explored the idea in a notebook doodle on a long flight. “It was hard to do with pencil and paper,” he told the FT in a 2005 interview. “You had to do a lot of erasing.”
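Schelling’s doodle is easy to reproduce on a computer. The sketch below makes some illustrative assumptions that are not Schelling’s exact numbers: a 20x20 grid, two equal groups, some empty squares, and agents who relocate to a random vacancy whenever fewer than 30 per cent of their neighbours share their colour. Even that mild preference drives the average same-group neighbour share well above its starting level:

```python
import random

def like_share(grid, n, r, c):
    """Share of this agent's occupied neighbours who belong to its own group."""
    me, same, occupied = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0) or not (0 <= r + dr < n and 0 <= c + dc < n):
                continue
            other = grid[r + dr][c + dc]
            if other is not None:
                occupied += 1
                same += other == me
    return same / occupied if occupied else 1.0

def mean_similarity(grid, n):
    shares = [like_share(grid, n, r, c)
              for r in range(n) for c in range(n) if grid[r][c] is not None]
    return sum(shares) / len(shares)

def schelling(n=20, threshold=0.3, empty=40, rounds=100, seed=1):
    rng = random.Random(seed)
    agents = n * n - empty
    cells = ["X"] * (agents // 2) + ["O"] * (agents - agents // 2) + [None] * empty
    rng.shuffle(cells)
    grid = [cells[i * n:(i + 1) * n] for i in range(n)]
    before = mean_similarity(grid, n)
    for _ in range(rounds):
        # Find everyone with too few like-coloured neighbours...
        unhappy = [(r, c) for r in range(n) for c in range(n)
                   if grid[r][c] is not None and like_share(grid, n, r, c) < threshold]
        if not unhappy:
            break
        # ...and move each to a randomly chosen empty square.
        vacancies = [(r, c) for r in range(n) for c in range(n) if grid[r][c] is None]
        for r, c in unhappy:
            vr, vc = vacancies.pop(rng.randrange(len(vacancies)))
            grid[vr][vc], grid[r][c] = grid[r][c], None
            vacancies.append((r, c))
    return before, mean_similarity(grid, n)

before, after = schelling()
print(f"average same-group neighbour share: {before:.2f} -> {after:.2f}")
```

The neighbourhoods end up far more segregated than any individual agent demanded, which was Schelling’s unsettling point.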

Thomas Crombie Schelling was born in Oakland, California, on 14 April 1921. His father was in the US Navy, but despite Thomas Schelling’s crew cut, square jaw and family history, he did not fight in the war. For medical reasons, the military would not accept him. Instead he studied economics at the University of California, Berkeley and earned his PhD at Harvard. After a spell working on the Marshall Plan, he taught at Yale, Harvard and finally the University of Maryland.

Schelling married Corinne Saposs in 1947. After that four-decade marriage ended in divorce, he married Alice Coleman, who survives him, as do four sons, two stepsons and his younger sister, Nancy.

Written for and first published in the Financial Times.

17th of December, 2016

How the shock of Brexit could make the British economy stronger

In 1975, the American jazz pianist Keith Jarrett found himself in an unenviable position. Shortly before beginning one of his improvised solo performances, he discovered that some backstage bungle had left him with an old rehearsal piano. It was out of tune, tinny and had sticky keys and pedals. After protesting and realising nothing could be changed, he decided to play anyway. The flaws in the piano pushed him to play in a new style, discovering fresh ways to express himself. And against all expectations — certainly against Jarrett’s — the result was a masterpiece: The Köln Concert album.

I have been thinking about the unplayable piano a lot since Britain voted to leave the EU. By any conventional analysis Brexit was an act of economic self-harm. But by any conventional analysis, a creaking little piano does not make for great music either. Might the UK economy somehow burst into a display of unexpected virtuosity in unpromising circumstances? Let us review the sticky keys and see what fresh tunes might be playable.

First, immigration. The debate on this has taken a xenophobic turn but the pure economics of tighter immigration also looks challenging, particularly for agriculture, catering, the National Health Service and higher education. Since EU migrants have more than paid their way, discouraging them will also weigh on public finances.

Second, trade. We don’t know what the post-Brexit trade landscape will look like but the UK will find it harder to remain an open economy. It will be more difficult to integrate with pan-EU supply chains, the costs of imports will rise and, while exporters benefit from a weaker pound, they may find themselves facing higher tariffs and, more important, non-tariff barriers.

Third, financial services. London will be a less attractive financial centre if it cannot be used as a base to provide financial services across the EU. US banks, in particular, may find Dublin, Frankfurt, Paris or New York to be more sensible vantage points from which to serve the EU market.

These, then, are the obstacles. What are the opportunities? As labour markets tighten, companies may invest more in skills and particularly in capital: better tools, smarter software and more robots. We may see a more productive economy with higher wages, at least for those who can manage the robots rather than be replaced by them.

If the UK economy cannot integrate smoothly with EU suppliers, that will raise costs but it may also stimulate more local networks. This import-substituting strategy is often associated with the policies of Latin American strongmen but it has occasionally worked.

Is there a bright side from a weaker City? Perhaps. A country that exports a lot of a commodity such as oil can start to suffer from the “Dutch disease”, a condition resulting from a currency so strong that it becomes almost impossible to do anything except pump oil and spend the earnings. In principle, the same thing might occur with a very concentrated industry such as the high finance of the City of London. If the oil — or the high finance — dries up then the exchange rate weakens and other industries can flourish. Perhaps this is part of what we are seeing now as the pound falters, and perhaps the misfortune of the City will be beneficial to other industries such as software or high-tech manufacturing.

There is also the possibility of building affordable houses. Once the country’s tabloid press can no longer blame Brussels for red tape, it may turn its fire on the British regulatory thicket holding back the economy: land use restrictions. If we had built more houses where people wished to live, fewer people would be feeling left behind and blaming Lithuanians for troubles that were engineered in Westminster.

All this suggests a British economy with a larger presence as a producer and consumer of high-tech software and robotics: the Japan of Europe, although hopefully without the quarter-century of economic stagnation. It is not impossible. Data collected by Massachusetts Institute of Technology’s Atlas of Economic Complexity project suggest that the UK has untapped capacity in industries such as cars and precision engineering.

I do not believe in “economic models”. Models are all very well when we are talking about Lego. When it comes to a major 21st-century economy, things are too complicated for that. We will have to see what emerges. The situation looks unpromising but so, too, did Keith Jarrett’s unplayable piano.

First written for the FT’s “Future of Britain” project.



31st of October, 2016

Why everyone should give a TED talk and how to do it

I found out the hard way that bad public speaking is contagious. As a schoolboy I was pretty good at speeches, in a schoolboyish way. I won competitions; being a sharp, witty speaker was a defining part of who I felt myself to be.

Then I grew up and started a corporate job, and something strange happened. My talks sagged into “presentations”, burdened by humourless clip art and plodding bullet points. The reason? I was surrounded by people who were stuck in the same beige offices giving the same beige presentations. Like many workplaces, we had reached an unspoken consensus that giving bad talks was just the way things were done.

Aside from tradition — and it is a powerful one — why else are most talks bad talks? One reason is fear. Being afraid does not itself make a speech bad; fear can make a talk electrifying or touching. But most speakers take the coward’s way out. Afraid of running out of words, they overstuff their speeches. And they prop themselves up by projecting their speaking notes on the wall behind them, even though everyone knows that providing rolling spoilers for your speech is a terrible idea.

A second reason is lack of preparation. Most speakers rehearse neither their argument nor their performance. That is understandable. Practising in front of a mirror is painful. Practising in front of a friend is excruciating. Rehearsing offers all the discomfort of giving a speech without any of the rewards of doing so. But it will make the end result much better.

For these reasons, I think you should give a TED talk. Almost anyone can. All you need is 18 minutes, a topic and an audience — if only your cat. No matter how often or how rarely you usually speak in public, the act of trying to give a talk in the tradition of TED will change the way you think and feel about public speaking.

As with anything popular, TED talks have their critics, but it is hard to deny that the non-profit organisation behind the videoed presentations on subjects from science to business has helped reinvent the art of the public speech.

TED talks are vastly more entertaining than traditional lectures, and more thought-provoking than most television. But that is TED from the point of view of the audience. From the point of view of an aspiring speaker, the lesson of TED is that most speakers could raise their game. A few TED talks are by professional politicians or entertainers such as Al Gore or David Blaine. Most are not.

There are more than 1,000 talks on the TED website with more than 1m views, typically delivered by writers, academics or entrepreneurs who have been giving mediocre talks as a matter of habit, and who have been suddenly challenged to stop being mediocre. Faced with the obligation to deliver the talk of their lives, they decided to do the work and take the necessary risks.

These speakers have been offered good advice by the organisers of TED, but that advice has never been a secret. It is now available to anyone in the form of TED Talks, a guide to public speaking from Chris Anderson, the TED boss. It is excellent; easily the best public speaking guide I have read. (I should admit a bias: I have spoken twice at TED events and benefited from the platform that TED provides.) Unlike many in the genre, Anderson’s book is not a comprehensive guide to going through the motions of wedding toasts and votes of thanks. Instead, it focuses on the stripped-down TED-style challenge: an audience, a speaker, plenty of time to prepare, and 18 minutes to say something worth hearing.

There is no formula for a great talk, insists Mr Anderson, but there are some common elements. First and most important: there is a point, an idea worth hearing about. Second, the talk has a “throughline” — meaning that most of what is said in some way supports that idea. There may be stories and jokes, even surprises — but everything is relevant.

Third, the speaker connects with those listening — perhaps through humour, stories, or simply making eye contact and speaking frankly. Finally, the speech explains concepts or advances arguments by starting from what the audience understand, and proceeding step by step through more surprising territory. It can be very hard for a speaker to appreciate just how much she knows that her audience do not. One reason to rehearse is that an audience can tell you when they get lost.

Most speakers are able to do some of this, some of the time — an interesting anecdote, a funny line, an educational explanation. We are social beings, after all. We have had a lot of practice talking.

Much of what turns a half-decent talk into a brilliant one is the ruthless excision of the fluff — the throat-clearing introduction, the platitudes, the digressions, the additional points that obscure the central message, and the “er, that’s about it” conclusion. With an audience of 60 people, for instance, every minute you waffle is an hour of other people’s time you are wasting. Sharpen up.

My only quibble is that the book offers less to a speaker who is short of preparation time. Because Mr Anderson is so keen to tell speakers how to prepare, he does not fully engage with the challenge of improvised speaking or debating.

Marco “Rubot” Rubio’s presidential dreams may have been snuffed out because he seemed over-rehearsed and unable to improvise. And Martin Luther King Jr’s greatest moment as a speaker — the second half of “I have a dream” — was unscripted. Sometimes the improvised response is more powerful than a prepared speech can ever be.

Instead, Mr Anderson’s aim is to help readers give a full-blown TED talk, despite the hard work that entails. Fair enough. Preparing to give a high-stakes speech is like training for a marathon or studying for an exam: even if you only do it once, the process will teach you things you will always remember.

Written for and first published in the Financial Times.



A short-cut to speeches

A TED-style talk takes weeks of preparation. What if you have hours, or minutes, to prepare?

• Say something worth hearing. “It’s not about you,” says Chris Anderson, who warns that business presentations are often sales pitches or boasts. He adds that the same information will land much better if it is “here’s what we’ve learnt” rather than “look how great we’ve been”.

• Less is more. Once you have found something worth saying, focus. Strip it down to a single core point. Everything about your speech — stories, jokes, statistics, graphics — should connect to that point.

• Your speaking notes should not intrude. Bullet points are a good idea if they are written on handheld cards, but not when projected on the wall behind you. If your speech is scripted, do not try to memorise it if you have no time, but become familiar with it. “There’s a big difference between being 90 per cent down in the script, and 60 per cent up and connected,” says Anderson.

• You are usually your own best visual aid. By all means use pictures, diagrams or video when they are good. But do not use substandard slides as wallpaper; when you have nothing to show, show nothing. Hit “B” to blank the screen and focus attention on you, or use empty slides.

• Practise. Even one run-through with a friend will help. Or find an empty room and record yourself on your phone. It is awkward but worth it.

• First and final impressions last. Improvised talks often suffer from a slow start and a limp finish. Think of a good opening and closing, and practise them. If you can start and finish strongly, you and your audience will both feel better.

11th of May, 2016

How Politicians Poisoned Statistics

We have more data — and the tools to analyse and share them — than ever before. So why is the truth so hard to pin down?

In January 2015, a few months before the British general election, a proud newspaper resigned itself to the view that little good could come from the use of statistics by politicians. An editorial in the Guardian argued that in a campaign that would be “the most fact-blitzed in history”, numerical claims would settle no arguments and persuade no voters. Not only were numbers useless for winning power, it added, they were useless for wielding it, too. Numbers could tell us little. “The project of replacing a clash of ideas with a policy calculus was always dubious,” concluded the newspaper. “Anyone still hankering for it should admit their number’s up.”

This statistical capitulation was a dismaying read for anyone still wedded to the idea — apparently a quaint one — that gathering statistical information might help us understand and improve our world. But the Guardian’s cynicism can hardly be a surprise. It is a natural response to the rise of “statistical bullshit” — the casual slinging around of numbers not because they are true, or false, but to sell a message.

Politicians weren’t always so ready to use numbers as part of the sales pitch. Recall Ronald Reagan’s famous suggestion to voters on the eve of his landslide defeat of President Carter: “Ask yourself, ‘Are you better off now than you were four years ago?’” Reagan didn’t add any statistical garnish. He knew that voters would reach their own conclusions.

The British election campaign of spring last year, by contrast, was characterised by a relentless statistical crossfire. The shadow chancellor of the day, Ed Balls, declared that a couple with children (he didn’t say which couple) had lost £1,800 thanks to the government’s increase in value added tax. David Cameron, the prime minister, countered that 94 per cent of working households were better off thanks to recent tax changes, while the then deputy prime minister Nick Clegg was proud to say that 27 million people were £825 better off in terms of the income tax they paid.

Could any of this be true? Yes — all three claims were. But Ed Balls had reached his figure by summing up extra VAT payments over several years, a strange method. If you offer to hire someone for £100,000, and then later admit you meant £25,000 a year for a four-year contract, you haven’t really lied — but neither have you really told the truth. And Balls had looked only at one tax. Why not also consider income tax, which the government had cut? Clegg boasted about income-tax cuts but ignored the larger rise in VAT. And Cameron asked to be evaluated only on his pre-election giveaway budget rather than the tax rises he had introduced earlier in the parliament — the equivalent of punching someone on the nose, then giving them a bunch of flowers and pointing out that, in floral terms, they were ahead on the deal.

Each claim was narrowly true but broadly misleading. Not only did the clashing numbers confuse but none of them helped answer the crucial question of whether Cameron and Clegg had made good decisions in office.

To ask whether the claims were true is to fall into a trap. None of these politicians had any interest in playing that game. They were engaged in another pastime entirely.

Thirty years ago, the Princeton philosopher Harry Frankfurt published an essay in an obscure academic journal, Raritan. The essay’s title was “On Bullshit”. (Much later, it was republished as a slim volume that became a bestseller.) Frankfurt was on a quest to understand the meaning of bullshit — what was it, how did it differ from lies, and why was there so much of it about?

Frankfurt concluded that the difference between the liar and the bullshitter was that the liar cared about the truth — cared so much that he wanted to obscure it — while the bullshitter did not. The bullshitter, said Frankfurt, was indifferent to whether the statements he uttered were true or not. “He just picks them out, or makes them up, to suit his purpose.”

Statistical bullshit is a special case of bullshit in general, and it appears to be on the rise. This is partly because social media — a natural vector for statements made purely for effect — are also on the rise. On Instagram and Twitter we like to share attention-grabbing graphics, surprising headlines and figures that resonate with how we already see the world. Unfortunately, very few claims are eye-catching, surprising or emotionally resonant because they are true and fair. Statistical bullshit spreads easily these days; all it takes is a click.

Consider a widely shared list of homicide “statistics” attributed to the “Crime Statistics Bureau — San Francisco”, asserting that 81 per cent of white homicide victims were killed by “blacks”. It takes little effort to establish that the Crime Statistics Bureau of San Francisco does not exist, and not much more digging to discover that the data are utterly false. Most murder victims in the United States are killed by people of their own race; the FBI’s crime statistics from 2014 suggest that more than 80 per cent of white murder victims were killed by other white people.

Somebody, somewhere, invented the image in the hope that it would spread, and spread it did, helped by a tweet from Donald Trump, the current frontrunner for the Republican presidential nomination, that was retweeted more than 8,000 times. One can only speculate as to why Trump lent his megaphone to bogus statistics, but when challenged on Fox News by the political commentator Bill O’Reilly, he replied, “Hey, Bill, Bill, am I gonna check every statistic?”

Harry Frankfurt’s description of the bullshitter would seem to fit Trump perfectly: “He does not care whether the things he says describe reality correctly.”

While we can’t rule out the possibility that Trump knew the truth and was actively trying to deceive his followers, a simpler explanation is that he wanted to win attention and to say something that would resonate with them. One might also guess that he did not check whether the numbers were true because he did not much care one way or the other. This is not a game of true and false. This is a game of politics.

While much statistical bullshit is careless, it can also be finely crafted. “The notion of carefully wrought bullshit involves … a certain inner strain,” wrote Harry Frankfurt but, nevertheless, the bullshit produced by spin-doctors can be meticulous. More conventional politicians than Trump may not much care about the truth but they do care about being caught lying.

Carefully wrought bullshit was much in evidence during last year’s British general election campaign. I needed to stick my nose in and take a good sniff on a regular basis because I was fact-checking on behalf of the BBC’s More or Less programme. Again and again I would find myself being asked on air, “Is that claim true?” and finding that the only reasonable answer began with “It’s complicated”.

Take Ed Miliband’s claim before the last election that “people are £1,600 a year worse off” than they were when the coalition government came to power. Was that claim true? Arguably, yes.

But we need to be clear that by “people”, the then Labour leader was excluding half the adult population. He was not referring to pensioners, benefit recipients, part-time workers or the self-employed. He meant only full-time employees, and, more specifically, only their earnings before taxes and benefits.

Even this narrower question of what was happening to full-time earnings is a surprisingly slippery one. We need to take an average, of course. But what kind of average? Labour looked at the change in median wages, which were stagnating in nominal terms and falling after inflation was taken into account.

That seems reasonable — but the median is a problematic measure in this case. Imagine nine people, the lowest-paid with a wage of £1, the next with a wage of £2, up to the highest-paid person with a wage of £9. The median wage is the wage of the person in the middle: it’s £5.

Now imagine that everyone receives a promotion and a pay rise of £1. The lowly worker with a wage of £1 sees his pay packet double to £2. The next worker up was earning £2 and now she gets £3. And so on. But there’s also a change in the composition of the workforce: the best-paid worker retires and a new apprentice is hired at a wage of £1. What’s happened to people’s pay? In a sense, it has stagnated. The pattern of wages hasn’t changed at all and the median is still £5.

But if you asked the individual workers about their experiences, they would all tell you that they had received a generous pay rise. (The exceptions are the newly hired apprentice and the recently retired worker.) While this example is hypothetical, at the time Miliband made his comments something similar was happening in the real labour market. The median wage was stagnating — but among people who had worked for the same employer for at least a year, the median worker was receiving a pay rise, albeit a modest one.
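The toy example above can be replayed in a few lines. This is a sketch using the same hypothetical £1-to-£9 wages, not real labour-market data:

```python
from statistics import median

# Nine hypothetical workers, paid £1, £2, ..., £9.
wages_before = [1, 2, 3, 4, 5, 6, 7, 8, 9]

# Everyone still employed gets a £1 pay rise...
raised = [w + 1 for w in wages_before]
# ...then the top earner (now on £10) retires and a new
# apprentice is hired on £1, changing the workforce's composition.
raised.remove(10)
wages_after = sorted(raised + [1])

print(median(wages_before))  # 5
print(median(wages_after))   # 5 -- the median hasn't moved,
# yet every worker present in both years received a £1 rise.
```

The composition effect is doing all the work: the distribution of wages is identical before and after, so any distribution-wide statistic, not just the median, would show stagnation.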

Another source of confusion: if wages for the low-paid and the high-paid are rising but wages in the middle are sagging, then the median wage can fall, even though the median wage increase is healthy. The UK labour market has long been prone to this kind of “job polarisation”, where demand for jobs is strongest for the highest and lowest-paid in the economy. Job polarisation means that the median pay rise can be sizeable even if median pay has not risen.
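A similarly tiny sketch, with made-up numbers, shows how polarisation can pull the median wage down even while the median pay rise stays positive:

```python
from statistics import median

# Three hypothetical workers (£k a year): low-, middle- and high-paid.
# Pay rises at the bottom and the top; the middle sags.
before = [10, 20, 30]
after  = [12, 18, 34]

rises = [a - b for a, b in zip(after, before)]  # [2, -2, 4]

print(median(before), median(after))  # 20 18 -- median pay has fallen
print(median(rises))                  # 2 -- yet the median pay rise is positive
```

The two statistics answer different questions: "what does the worker in the middle earn?" versus "what rise did the typical worker get?" Polarisation lets them point in opposite directions.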

Confused? Good. The world is a complicated place; it defies description by sound bite statistics. No single number could ever answer Ronald Reagan’s question — “Are you better off now than you were four years ago?” — for everyone in a country.

So, to produce Labour’s figure of “£1,600 worse off”, the party’s press office had to ignore the self-employed, the part-timers, the non-workers, compositional effects and job polarisation. They even changed the basis of their calculation over time, switching between different measures of wages and different measures of inflation, yet miraculously managing to produce a consistent answer of £1,600. Sometimes it’s easier to make the calculation produce the number you want than it is to reprint all your election flyers.


Such careful statistical spin-doctoring might seem a world away from Trump’s reckless retweeting of racially charged lies. But in one sense they were very similar: a political use of statistics conducted with little interest in understanding or describing reality. Miliband’s project was not “What is the truth?” but “What can I say without being shown up as a liar?”

Unlike the state of the UK job market, his incentives were easy to understand. Miliband needed to hammer home a talking point that made the government look bad. As Harry Frankfurt wrote back in the 1980s, the bullshitter “is neither on the side of the true nor on the side of the false. His eye is not on the facts at all … except insofar as they may be pertinent to his interest in getting away with what he says.”

Such complexities put fact-checkers in an awkward position. Should they say that Ed Miliband had lied? No: he had not. Should they say, instead, that he had been deceptive or misleading? Again, no: it was reasonable to say that living standards had indeed been disappointing under the coalition government.

Nevertheless, there was a lot going on in the British economy that the figure omitted — much of it rather more flattering to the government. Full Fact, an independent fact-checking organisation, carefully worked through the paper trail and linked to all the relevant claims. But it was powerless to produce a fair and representative snapshot of the British labour market that had as much power as Ed Miliband’s seven-word sound bite. No such snapshot exists. Truth is usually a lot more complicated than statistical bullshit.

On July 16 2015, the UK health secretary Jeremy Hunt declared: “Around 6,000 people lose their lives every year because we do not have a proper seven-day service in hospitals. You are 15 per cent more likely to die if you are admitted on a Sunday compared to being admitted on a Wednesday.”

This was a statistic with a purpose. Hunt wanted to change doctors’ contracts with the aim of getting more weekend work out of them, and bluntly declared that the doctors’ union, the British Medical Association, was out of touch and that he would not let it block his plans: “I can give them 6,000 reasons why.”

Despite bitter opposition and strike action from doctors, Hunt’s policy remained firm over the following months. Yet the numbers he cited to support it did not. In parliament in October, Hunt was sticking to the 15 per cent figure, but the 6,000 deaths had almost doubled: “According to an independent study conducted by the BMJ, there are 11,000 excess deaths because we do not staff our hospitals properly at weekends.”

Arithmetically, this was puzzling: how could the elevated risk of death stay the same but the number of deaths double? To add to the suspicions about Hunt’s mathematics, the editor-in-chief of the British Medical Journal, Fiona Godlee, promptly responded that the health secretary had publicly misrepresented the BMJ research.

Undaunted, the health secretary bounced back in January with the same policy and some fresh facts: “At the moment we have an NHS where if you have a stroke at the weekends, you’re 20 per cent more likely to die. That can’t be acceptable.”

All this is finely wrought bullshit — a series of ever-shifting claims that can be easily repeated but are difficult to unpick. As Hunt jumped from one form of words to another, he skipped lightly ahead of fact-checkers as they tried to pin him down. Full Fact concluded that Hunt’s statement about 11,000 excess deaths had been untrue, and asked him to correct the parliamentary record. His office responded with a spectacular piece of bullshit, saying (I paraphrase) that whether or not the claim about 11,000 excess deaths was true, similar claims could be made that were.

So, is it true? Do 6,000 people — or 11,000 — die needlessly in NHS hospitals because of poor weekend care? Nobody knows for sure; Jeremy Hunt certainly does not. It’s not enough to show that people admitted to hospital at the weekend are at an increased risk of dying there. We need to understand why — a question that is essential for good policy but inconvenient for politicians.

One possible explanation for the elevated death rate for weekend admissions is that the NHS provides patchy care and people die as a result. That is the interpretation presented as bald fact by Jeremy Hunt. But a more straightforward explanation is that people are only admitted to hospital at the weekend if they are seriously ill. Less urgent cases wait until weekdays. If weekend patients are sicker, it is hardly a surprise that they are more likely to die. Allowing non-urgent cases into NHS hospitals at weekends wouldn’t save any lives, but it would certainly make the statistics look more flattering. Of course, epidemiologists try to correct for the fact that weekend patients tend to be more seriously ill, but few experts have any confidence that they have succeeded.

A more subtle explanation is that shortfalls in the palliative care system may create the illusion that hospitals are dangerous. Sometimes a patient is certain to die, but the question is where — in a hospital or a palliative hospice? If hospice care is patchy at weekends then a patient may instead be admitted to hospital and die there. That would certainly reflect poor weekend care. It would also add to the tally of excess weekend hospital deaths, because during the week that patient would have been admitted to, and died in, a palliative hospice. But it is not true that the death was avoidable.

Does it seem like we’re getting stuck in the details? Well, yes, perhaps we are. But improving NHS care requires an interest in the details. If there is a problem in palliative care hospices, it will not be fixed by improving staffing in hospitals.

“Even if you accept that there’s a difference in death rates,” says John Appleby, the chief economist of the King’s Fund health think-tank, “nobody is able to say why it is. Is it lack of diagnostic services? Lack of consultants? We’re jumping too quickly from a statistic to a solution.”


This matters — the NHS has a limited budget. There are many things we might want to spend money on, which is why we have the National Institute for Health and Care Excellence (Nice) to weigh up the likely benefits of new treatments and decide which offer the best value for money.

Would Jeremy Hunt’s push towards a seven-day NHS pass the Nice cost-benefit threshold? Probably not. Our best guess comes from a 2015 study by health economists Rachel Meacock, Tim Doran and Matt Sutton, which estimates that the NHS has many cheaper ways to save lives. A more comprehensive assessment might reach a different conclusion but we don’t have one because the Department of Health, oddly, hasn’t carried out a formal health impact assessment of the policy it is trying to implement.

This is a depressing situation. The government has devoted considerable effort to producing a killer number: Jeremy Hunt’s “6,000 reasons” why he won’t let the British Medical Association stand in his way. It continues to produce statistical claims that spring up like hydra heads: when one claim is discredited, Hunt’s office simply asserts that another one can be found to take its place. Yet the government doesn’t seem to have bothered to gather the statistics that would actually answer the question of how the NHS could work better.

This is the real tragedy. It’s not that politicians spin things their way — of course they do. That is politics. It’s that politicians have grown so used to misusing numbers as weapons that they have forgotten that used properly, they are tools.

“You complain that your report would be dry. The dryer the better. Statistics should be the dryest of all reading,” wrote the great medical statistician William Farr in a letter in 1861. Farr sounds like a caricature of a statistician, and his prescription — convey the greatest possible volume of information with the smallest possible amount of editorial colour — seems absurdly ill-suited to the modern world.

But there is a middle ground between the statistical bullshitter, who pays no attention to the truth, and William Farr, for whom the truth must be presented without adornment. That middle ground is embodied by the recipient of William Farr’s letter advising dryness. She was the first woman to be elected to the Royal Statistical Society: Florence Nightingale.

Nightingale is the most celebrated nurse in British history, famous for her lamplit patrols of the Barrack Hospital in Scutari, now a district of Istanbul. The hospital was a death trap, with thousands of soldiers from the Crimean front succumbing to typhus, cholera and dysentery as they tried to recover from their wounds in cramped conditions next to the sewers. Nightingale, who did her best, initially believed that the death toll was due to lack of food and supplies. Then, in the spring of 1855, a sanitary commission sent from London cleaned up the hospital, whitewashing the walls, carting away filth and dead animals and flushing out the sewers. The death rate fell sharply.

Nightingale returned to Britain and reviewed the statistics, concluding that she had paid too little attention to sanitation and that most military and medical professions were making the same mistake, leading to hundreds of thousands of deaths. She began to campaign for better public health measures, tighter laws on hygiene in rented properties, and improvements to sanitation in barracks and hospitals across the country. In doing so, a mere nurse had to convince the country’s medical and military establishments, led by England’s chief medical officer, John Simon, that they had been doing things wrong all their lives.

A key weapon in this lopsided battle was statistical evidence. But Nightingale disagreed with Farr on how that evidence should be presented. “The dryer the better” would not serve her purposes. Instead, in 1857, she crafted what has become known as the Rose Diagram, a beautiful array of coloured wedges showing the deaths from infectious diseases before and after the sanitary improvements at Scutari.


The Rose Diagram isn’t a dry presentation of statistical truth. It tells a story. Its structure divides the death toll into two periods — before the sanitary improvements, and after. In doing so, it highlights a sharp break that is less than clear in the raw data. And the Rose Diagram also gently obscures other possible interpretations of the numbers — that, for example, the death toll dropped not because of improved hygiene but because winter was over. The Rose Diagram is a marketing pitch for an idea. The idea was true and vital, and Nightingale’s campaign was successful. One of her biographers, Hugh Small, argues that the Rose Diagram ushered in health improvements that raised life expectancy in the UK by 20 years and saved millions of lives.

What makes Nightingale’s story so striking is that she was able to see that statistics could be tools and weapons at the same time. She educated herself using the data, before giving it the makeover it required to convince others. Though the Rose Diagram is a long way from “the dryest of all reading”, it is also a long way from bullshit. Florence Nightingale realised that the truth about public health was so vital that it could not simply be recited in a monotone. It needed to sing.

The idea that a graph could change the world seems hard to imagine today. Cynicism has set in about statistics. Many journalists draw no distinction between a systematic review of peer-reviewed evidence and a survey whipped up in an afternoon to sell biscuits or package holidays: it’s all described as “new research”. Politicians treat statistics not as the foundation of their argument but as decoration — “spray-on evidence” is the phrase used by jaded civil servants. But a freshly painted policy without foundations will not last long before the cracks show through.

“Politicians need to remember: there is a real world and you want to try to change it,” says Will Moy, the director of Full Fact. “At some stage you need to engage with the real world — and that is where the statistics come in handy.”

That should be no problem, because it has never been easier to gather and analyse informative statistics. Nightingale and Farr could not have imagined the data that modern medical researchers have at their fingertips. The gold standard of statistical evidence is the randomised controlled trial, because using a randomly chosen control group protects against biased or optimistic interpretations of the evidence. Hundreds of thousands of such trials have been published, most of them within the past 25 years. In non-medical areas such as education, development aid and prison reform, randomised trials are rapidly catching on: thousands have been conducted. The British government, too, has been supporting policy trials — for example, the Education Endowment Foundation, set up with £125m of government funds just five years ago, has already backed more than 100 evaluations of educational approaches in English schools. It favours randomised trials wherever possible.

The frustrating thing is that politicians seem quite happy to ignore evidence — even when they have helped to support the researchers who produced it. For example, when the chancellor George Osborne announced in his budget last month that all English schools were to become academies, making them independent of the local government, he did so on the basis of faith alone. The Sutton Trust, an educational charity which funds numerous research projects, warned that on the question of whether academies had fulfilled their original mission of improving failing schools in poorer areas, “our evidence suggests a mixed picture”. Researchers at the LSE’s Centre for Economic Performance had a blunter description of Osborne’s new policy: “a non-evidence based shot in the dark”.

This should be no surprise. Politicians typically use statistics like a stage magician uses smoke and mirrors. Over time, they can come to view numbers with contempt. Voters and journalists will do likewise. No wonder the Guardian gave up on the idea that political arguments might be settled by anything so mundane as evidence. The spin-doctors have poisoned the statistical well.

But despite all this despair, the facts still matter. There isn’t a policy question in the world that can be settled by statistics alone but, in almost every case, understanding the statistical background is a tremendous help. Hetan Shah, the executive director of the Royal Statistical Society, has lost count of the number of times someone has teased him with the old saying about “lies, damned lies and statistics”. He points out that while it’s easy to lie with statistics, it’s even easier to lie without them.

Perhaps the lies aren’t the real enemy here. Lies can be refuted; liars can be exposed. But bullshit? Bullshit is a stickier problem. Bullshit corrodes the very idea that the truth is out there, waiting to be discovered by a careful mind. It undermines the notion that the truth matters. As Harry Frankfurt himself wrote, the bullshitter “does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.”


Written for and first published in the FT Magazine

Free email updates

(You can unsubscribe at any time)

20th of April, 2016

Three pieces of Brexit Bullshit

A referendum on UK membership of the European Union is scheduled for June 23: dodgy statistics ahoy.

“Ten Commandments — 179 words. Gettysburg address — 286 words. US Declaration of Independence — 1,300 words. EU regulations on the sale of cabbage — 26,911 words”

Variants of this claim have been circulating online and in print. It turns out that the “cabbage memo” is a longstanding urban myth that can be traced back to the US during the second world war. Variants have been used to berate bureaucrats on both sides of the Atlantic ever since.

Part of the bullshit here is that nobody ever stops to ask how many words might be appropriate for rules on fresh produce. Red Tractor Assurance, the British farm and food standards scheme, publishes 56 different protocols on fresh produce alone. The cabbage protocol is 28 pages long; there is a separate 28-page protocol on pak choi and choi sum. None of this has anything to do with the EU.

Three million jobs depend on the EU

This claim is popular among “Remain” advocates — most famously the former deputy prime minister Nick Clegg. What makes this claim bullshit is that it could easily be true, or utterly false, and it all hangs on the definition of “depend”.

The claim is that “up to 3.2 million jobs” were directly linked to exports of goods and services to other EU countries. That number passes a quick reality check: it’s about 10 per cent of UK jobs, and UK exports to the EU are about 10 per cent of the UK economy.

But even if “up to” 3.2 million jobs depend on trade with the EU, that does not mean they depend on membership of the EU. Nobody proposes — or expects — that trade with the EU will just stop. Three million jobs might well be destroyed if continental Europe was to sink beneath the waves like Atlantis, but that is not what the referendum is about.

EU membership costs £55m a day

This one is from Ukip leader Nigel Farage, who says membership costs more than £20bn a year. In fact, the UK paid £14.3bn to the EU in 2014 and got £6bn back. The net membership fee, then, was £8.3bn — less than half Farage’s number.

But even the correct number is little use without context. It is, for example, just over 1 per cent of UK public spending. Not nothing, but not everything either. And non-member states such as Norway and Switzerland pay large sums to the EU to retain access to the single market, so Brexit would not make this bill disappear.
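The arithmetic above can be replayed as a quick sanity check. This is a sketch: the gross and net figures are from the article, but the £740bn total for UK public spending is an approximate outside estimate used purely for context:

```python
# Figures quoted in the article (UK/EU, 2014, £bn).
gross = 14.3    # UK payment to the EU
rebate = 6.0    # money the UK got back
net = gross - rebate            # net membership fee: £8.3bn

# Farage's "£55m a day" versus the net figure on a daily basis.
per_day = net * 1000 / 365      # roughly £23m a day, well under £55m

# Context: net fee as a share of UK public spending
# (~£740bn in 2014 -- an approximate figure, not from the article).
share = net / 740               # just over 1 per cent

print(round(net, 1), round(per_day, 1), round(share, 3))
```

Even the corrected daily figure only means something against a denominator, which is the article’s point: £8.3bn is small relative to the plausible costs and benefits of membership itself.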

The membership fee is small relative to the plausible costs and benefits of EU membership, positive or negative. If EU membership is good for Britain then £8.3bn is cheap. And if the EU is holding Britain back, then a few billion on membership is the least of our worries.


Written as a sidebar for “How Politicians Poisoned Statistics“, and first published in the FT Magazine.


16th of April, 2016

A modest proposal – let’s just abolish the Budget

Once upon a time, it made sense to have an annual Budget speech. When the central economic fact of the year was whether the harvest had failed or not, it behoved the Chancellor to declare how he planned to spend whatever he happened to have in his coffers. But a vital institution for the pre-industrial age has mutated into a mere circus for the post-industrial one. The central question that this Budget provoked in my mind was this: why on earth do we still have a Budget?

Skim through the transcript of yesterday’s speech — if you can bear to — and you’ll find that the items fall into a few categories: (1) trivial; (2) responses to silly self-imposed rules; (3) economic forecasts that will later be wrong; (4) pure rhetoric; (5) worse than useless; (6) irrelevant.

Mr Osborne opened with a list of all the ways in which the UK economy is strong, skimmed over all the ways in which it is weak, and blamed the Labour party or foreigners for everything. (Rhetoric.)

Then he ran through the latest outlook from the Office for Budget Responsibility, an institution that represents all that is best about rigorous independent economic forecasting — and is therefore bound to be wrong. (Bad forecasts.)

He admitted that he had broken his own rather odd rule about the ratio of debt to gross domestic product, before announcing that according to a different, nonsensical metric, he looked rather good. (Silly self-imposed rules.)

Mr Osborne threw the usual glitter-bomb of little presents. (Trivial.)

Consider the £19m for “community-led housing schemes” in the south-west of England. Very nice, but if every £1m of spending earned one word from the chancellor, his Budget speech would be considerably longer than War and Peace. Then there’s the donation of tampon VAT revenues to charity, the halving of the Severn Bridge toll, and a tax break for touring museum exhibitions. Isn’t it strange how the treats are emphasised and the multibillion pound cuts and tax rises are always relegated to the appendices?

Then there’s the Chancellor’s odd tradition of pencilling in an increase to fuel duty year after year and, regardless of circumstances, coming up with an excuse to cancel the increase one more time. (Worse than useless.)

We also have major reforms to the school system. (Nothing to do with the Budget.)

What are we left with? The bodged introduction of a sugar tax and yet another wheeze to reform pension saving, the Lifetime Isa. Both policies would have been far better separated from the rabbit-out-of-hat Budget show, and considered on their merits.

There have been more dramatic Budgets than this, of course, but even then the drama has too often fallen into the “worse than useless” category.

Would the country’s economic policy really be harmed if the chancellor set out his fiscal direction at the beginning of the parliament and left it unchanged unless extraordinary circumstances intervened?

We should abolish 100 per cent of Autumn Statements and 80 per cent of Budgets — that’s a fiscal rule that I could really get behind.

Written for and first published in the Financial Times.


17th of March, 2016

Why Osborne’s sugar tax is half baked

I’m all in favour of a sugar tax, as I wrote in the FT Magazine on Saturday. It’s a shame, then, that — despite the headlines to the contrary — George Osborne hasn’t introduced one.

His proposal instead is to tax the manufacturers and importers of a particular variety of sugary drink. I am no dentist or dietitian, but it seems strange to take the view that sugar in general poses no risk to the nation’s teeth or waistline, unless it comes in a soft drink.

Coke and Pepsi are a problem, apparently. But it seems that sugar lumps in tea or coffee are not. Neither are cartons of chocolate milk. Nor syrupy concoctions from Starbucks and Costa. Nor soft drinks produced by boutique producers. Mars bars are fine. So are cakes. So are Coco Pops and Frosties, and for that matter the remarkable quantities of sugar that infuse cereals such as Bran Flakes, or are buried in the recipes of many ready meals. All these forms of sugar will continue to reach our taste buds free of a sugar tax.

Mr Osborne’s proposal will work, after a fashion. There is abundant evidence that people adjust their behaviour in response to financial incentives, whether through the window tax-avoiding architecture of 18th-century Britain or the inheritance tax-avoiding feat of Australians in postponing the date of their deaths to a more tax-efficient time.

So yes, as Mr Osborne expects, large companies will try to put less sugar in their soft drinks, or raise the prices of those drinks, or both. Sugar consumption from those sources will fall. But they may well rise elsewhere. For many people, a chocolate bar and a fizzy drink are substitutes. If the fizzy drink gets more expensive, the chocolate bar is a tasty alternative for the sweet-toothed consumer. It has more fat in it, too.

It’s clear enough why the chancellor has opted for this approach. He wants to blame large companies, not voters, and hide the fact that ultimately consumers will pay the tax. A broad-based tax on sugar itself would have been simpler, braver and far more effective. But Mr Osborne wanted his Budget to leave voters with a sweeter taste in the mouth.

Written for and first published in the Financial Times.


17th of March, 2016

Multi-tasking: how to survive in the 21st century

Modern life now forces us to do a multitude of things at once — but can we? Should we?

Forget invisibility or flight: the superpower we all want is the ability to do several things at once. Unlike other superpowers, however, being able to multitask is now widely regarded as a basic requirement for employability. Some of us sport computers with multiple screens, to allow tweeting while trading pork bellies and frozen orange juice. Others make do with reading a Kindle while poking at a smartphone and glancing at a television in the corner with its two rows of scrolling subtitles. We think nothing of sending an email to a colleague to suggest a quick coffee break, because we can feel confident that the email will be read within minutes.

All this is simply the way the modern world works. Multitasking is like being able to read or add up, so fundamental that it is taken for granted. Doing one thing at a time is for losers — recall Lyndon Johnson’s often bowdlerised dismissal of Gerald Ford: “He can’t fart and chew gum at the same time.”

The rise of multitasking is fuelled by technology, of course, and by social change as well. Husbands and wives no longer specialise as breadwinners and homemakers; each must now do both. Work and play blur. Your friends can reach you on your work email account at 10 o’clock in the morning, while your boss can reach you on your mobile phone at 10 o’clock at night. You can do your weekly shop sitting at your desk and you can handle a work query in the queue at the supermarket.

This is good news in many ways — how wonderful to be able to get things done in what would once have been wasted time! How delightful the variety of it all is! No longer must we live in a monotonous, Taylorist world where we must painstakingly focus on repetitive tasks until we lose our minds.

And yet we are starting to realise that the blessings of a multitasking life are mixed. We feel overwhelmed by the sheer number of things we might plausibly be doing at any one time, and by the feeling that we are on call at any moment.

And we fret about the unearthly appetite of our children to do everything at once, flipping through homework while chatting on WhatsApp, listening to music and watching Game of Thrones. (According to a recent study by Sabrina Pabilonia of the US Bureau of Labor Statistics, for over half the time that high-school students spend doing homework, they are also listening to music, watching TV or otherwise multitasking. That trend is on the increase.) Can they really handle all these inputs at once? They seem to think so, despite various studies suggesting otherwise.

And so a backlash against multitasking has begun — a kind of Luddite self-help campaign. The poster child for uni-tasking was launched on the crowdfunding website Kickstarter in December 2014. For $499 — substantially more than a multifunctional laptop — “The Hemingwrite” computer promised a nice keyboard, a small e-ink screen and an automatic cloud back-up. You couldn’t email on the Hemingwrite. You couldn’t fool around on YouTube, and you couldn’t read the news. All you could do was type. The Hemingwrite campaign raised over a third of a million dollars.

The Hemingwrite (now rebranded the Freewrite) represents an increasingly popular response to the multitasking problem: abstinence. Programs such as Freedom and Self-Control are now available to disable your browser for a preset period of time. The popular blogging platform WordPress offers “distraction-free writing”. The Villa Stéphanie, a hotel in Baden-Baden, offers what has been branded the “ultimate luxury”: a small silver switch beside the hotel bed that will activate a wireless blocker and keep the internet and all its temptations away.

The battle lines have been drawn. On one side: the culture of the modern workplace, which demands that most of us should be open to interruption at any time. On the other, the uni-tasking refuseniks who insist that multitaskers are deluding themselves, and that focus is essential. Who is right?

The ‘cognitive cost’

There is ample evidence in favour of the proposition that we should focus on one thing at a time. Consider a study led by David Strayer, a psychologist at the University of Utah. In 2006, Strayer and his colleagues used a high-fidelity driving simulator to compare the performance of drivers who were chatting on a mobile phone to drivers who had drunk enough alcohol to be at the legal blood-alcohol limit in the US. Chatting drivers didn’t adopt the aggressive, risk-taking style of drunk drivers but they were unsafe in other ways. They took much longer to respond to events outside the car, and they failed to notice a lot of the visual cues around them. Strayer’s famous conclusion: driving while using a mobile phone is as dangerous as driving while drunk.

Less famous was Strayer’s finding that it made no difference whether the driver was using a handheld or hands-free phone. The problem with talking while driving is not a shortage of hands. It is a shortage of mental bandwidth.

Yet this discovery has made little impression either on public opinion or on the law. In the United Kingdom, for example, it is an offence to use a hand-held phone while driving but perfectly legal if the phone is used hands-free. We’re happy to acknowledge that we only have two hands but refuse to admit that we only have one brain.

Another study, by Strayer, David Sanbonmatsu and others, suggested that we are also poor judges of our ability to multitask. The subjects who reported doing a lot of multitasking were also the ones who performed poorly on tests of multitasking ability. They systematically overrated their ability to multitask and they displayed poor impulse control. In other words, wanting to multitask is a good sign that you should not be multitasking.

We may not immediately realise how multitasking is hampering us. The first time I took to Twitter to comment on a public event was during a televised prime-ministerial debate in 2010. The sense of buzz was fun; I could watch the candidates argue and the twitterati respond, compose my own 140-character profundities and see them being shared. I felt fully engaged with everything that was happening. Yet at the end of the debate I realised, to my surprise, that I couldn’t remember anything that Brown, Cameron and Clegg had said.

A study conducted at UCLA in 2006 suggests that my experience is not unusual. Three psychologists, Karin Foerde, Barbara Knowlton and Russell Poldrack, recruited students to look at a series of flashcards with symbols on them, and then to make predictions based on patterns they had recognised. Some of these prediction tasks were done in a multitasking environment, where the students also had to listen to low- and high-pitched tones and count the high-pitched ones. You might think that making predictions while also counting beeps was too much for the students to handle. It wasn’t. They were equally competent at spotting patterns with or without the note-counting task.

But here’s the catch: when the researchers then followed up by asking more abstract questions about the patterns, the cognitive cost of the multitasking became clear. The students struggled to answer questions about the predictions they’d made in the multitasking environment. They had successfully juggled both tasks in the moment — but they hadn’t learnt anything that they could apply in a different context.

That’s an unnerving discovery. When we are sending email in the middle of a tedious meeting, we may nevertheless feel that we’re taking in what is being said. A student may be confident that neither Snapchat nor the live football is preventing them taking in their revision notes. But the UCLA findings suggest that this feeling of understanding may be an illusion and that, later, we’ll find ourselves unable to remember much, or to apply our knowledge flexibly. So, multitasking can make us forgetful — one more way in which multitaskers are a little bit like drunks.

Early multitaskers

All this is unnerving, given that the modern world makes multitasking almost inescapable. But perhaps we shouldn’t worry too much: multitasking had a distinguished history long before it became ubiquitous.

In 1958, a young psychologist named Bernice Eiduson embarked on a long-term research project — so long-term, in fact, that Eiduson died before it was completed. Eiduson studied the working methods of 40 scientists, all men. She interviewed them periodically over two decades and put them through various psychological tests. Some of these scientists found their careers fizzling out, while others went on to great success. Four won Nobel Prizes and two others were widely regarded as serious Nobel contenders. Several more were invited to join the National Academy of Sciences.

After Eiduson died, some of her colleagues published an analysis of her work. These colleagues, Robert Root-Bernstein, Maurine Bernstein and Helen Garnier, wanted to understand what determined whether a scientist would have a long productive career: a combination of genius and longevity.

There was no clue in the interviews or the psychological tests. But looking at the early publication record of these scientists — their first 100 published research papers — the researchers discovered a pattern: the top scientists were constantly changing the focus of their research.

Over the course of these first 100 papers, the most productive scientists covered five different research areas and moved from one of these topics to another an average of 43 times. They would publish, and change the subject, publish again, and change the subject again. Since most scientific research takes an extended period of time, these projects must have overlapped. The secret to a long and highly productive scientific career? It’s multitasking.

Charles Darwin thrived on spinning multiple plates. He began his first notebook on “transmutation of species” two decades before The Origin of Species was published. His A Biographical Sketch of an Infant was based on notes made after his son William was born; William was 37 by the time Darwin published it. Darwin spent nearly 20 years working on climbing and insectivorous plants. And Darwin published a learned book on earthworms in 1881, just before his death. He had been working on it for 44 years. When two psychologists, Howard Gruber and Sara Davis, studied Darwin and other celebrated artists and scientists they concluded that such overlapping interests were common.

Another team of psychologists, led by Mihaly Csikszentmihalyi, interviewed almost 100 exceptionally creative people from jazz pianist Oscar Peterson to science writer Stephen Jay Gould to double Nobel laureate, the physicist John Bardeen. Csikszentmihalyi is famous for developing the idea of “flow”, the blissful state of being so absorbed in a challenge that one loses track of time and sets all distractions to one side. Yet every one of Csikszentmihalyi’s interviewees made a practice of keeping several projects bubbling away simultaneously.

Just internet addiction?

If the word “multitasking” can apply to both Darwin and a teenager with a serious Instagram habit, there is probably some benefit in defining our terms. There are at least four different things we might mean when we talk about multitasking. One is genuine multitasking: patting your head while rubbing your stomach; playing the piano and singing; farting while chewing gum. Genuine multitasking is possible, but at least one of the tasks needs to be so practised as to be done without thinking.

Then there’s the challenge of creating a presentation for your boss while also fielding phone calls for your boss and keeping an eye on email in case your boss wants you. This isn’t multitasking in the same sense. A better term is task switching, as our attention flits between the presentation, the telephone and the inbox. A great deal of what we call multitasking is in fact rapid task switching.

Task switching is often confused with a third, quite different activity — the guilty pleasure of disappearing down an unending click-hole of celebrity gossip and social media updates. There is a difference between the person who reads half a page of a journal article, then stops to write some notes about a possible future project, then goes back to the article — and someone who reads half a page of a journal article before clicking on bikini pictures for the rest of the morning. “What we’re often calling multitasking is in fact internet addiction,” says Shelley Carson, a psychologist and author of Your Creative Brain. “It’s a compulsive act, not an act of multitasking.”

A final kind of multitasking isn’t a way of getting things done but simply the condition of having a lot of things to do. The car needs to be taken in for a service. Your tooth is hurting. The nanny can’t pick up the kids from school today. There’s a big sales meeting to prepare for tomorrow, and your tax return is due next week. There are so many things that have to be done, so many responsibilities to attend to. Having a lot of things to do is not the same as doing them all at once. It’s just life. And it is not necessarily a stumbling block to getting things done — as Bernice Eiduson discovered as she tracked scientists on their way to their Nobel Prizes.

The fight for focus

These four practices — multitasking, task switching, getting distracted and managing multiple projects — all fit under the label “multitasking”. This is not just because of a simple linguistic confusion. The versatile networked devices we use tend to blur the distinction, serving us as we move from task to task while also offering an unlimited buffet of distractions. But the different kinds of multitasking are linked in other ways too. In particular, the highly productive practice of having multiple projects invites the less-than-productive habit of rapid task switching.

To see why, consider a story that psychologists like to tell about a restaurant near Berlin University in the 1920s. (It is retold in Willpower, a book by Roy Baumeister and John Tierney.) The story has it that when a large group of academics descended upon the restaurant, the waiter stood and calmly nodded as each new item was added to their complicated order. He wrote nothing down, but when he returned with the food his memory had been flawless. The academics left, still talking about the prodigious feat; but when one of them hurried back to retrieve something he’d left behind, the waiter had no recollection of him. How could the waiter have suddenly become so absent-minded? “Very simple,” he said. “When the order has been completed, I forget it.”

One member of the Berlin school was a young experimental psychologist named Bluma Zeigarnik. Intrigued, she demonstrated that people have a better recollection of uncompleted tasks. This is called the “Zeigarnik effect”: when we leave things unfinished, we can’t quite let go of them mentally. Our subconscious keeps reminding us that the task needs attention.

The Zeigarnik effect may explain the connection between facing multiple responsibilities and indulging in rapid task switching. We flit from task to task to task because we can’t forget about all of the things that we haven’t yet finished. We flit from task to task to task because we’re trying to get the nagging voices in our head to shut up.

Of course, there is much to be said for “focus”. But there is much to be said for copperplate handwriting, too, and for having a butler. The world has moved on. There’s something appealing about the Hemingwrite and the hotel room that will make the internet go away, but also something futile.

It is probably not true that Facebook is all that stands between you and literary greatness. And in most office environments, the Hemingwrite is not the tool that will win you promotion. You are not Ernest Hemingway, and you do not get to simply ignore emails from your colleagues.

If focus is going to have a chance, it’s going to have to fight an asymmetric war. Focus can only survive if it can reach an accommodation with the demands of a multitasking world.

Loops and lists

The word “multitasking” wasn’t applied to humans until the 1990s, but it has been used to describe computers for half a century. According to the Oxford English Dictionary, it was first used in print in 1966, when the magazine Datamation described a computer capable of appearing to perform several operations at the same time.

Just as with humans, computers typically create the illusion of multitasking by switching tasks rapidly. Computers perform the switching more quickly, of course, and they don’t take 20 minutes to get back on track after an interruption.

Nor does a computer fret about what is not being done. While rotating a polygon and sending text to the printer, it feels no guilt that the mouse has been left unchecked for the past 16 milliseconds. The mouse’s time will come. Being a computer means never having to worry about the Zeigarnik effect.
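The time-slicing idea can be sketched in a few lines of Python — a toy illustration of my own, not anything drawn from the research above. Each task runs one small slice of work, hands back control, and rejoins the back of the queue until it is finished, with no lingering attachment to what remains undone.

```python
# Toy round-robin scheduler: each "task" is a generator that yields
# control back after one small slice of work, the way an OS time-slices.
def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"  # do one slice, then hand back control

def round_robin(tasks):
    """Cycle through the tasks, running one slice of each until all finish."""
    log = []
    while tasks:
        current = tasks.pop(0)
        try:
            log.append(next(current))
            tasks.append(current)  # unfinished: back of the queue
        except StopIteration:
            pass  # task complete: dropped with no Zeigarnik-style lingering
    return log

print(round_robin([task("print", 2), task("mouse", 2)]))
# → ['print:0', 'mouse:0', 'print:1', 'mouse:1']
```

The slices come out perfectly interleaved: the illusion of simultaneity, achieved by very fast switching and an untroubled willingness to forget.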

Is there a lesson in this for distractible sacks of flesh like you and me? How can we keep a sense of control despite the incessant guilt of all the things we haven’t finished?

“Whenever you say to someone, ‘I’ll get back to you about that’, you just opened a loop in your brain,” says David Allen. Allen is the author of a cult productivity book called Getting Things Done. “That loop will keep spinning until you put a placeholder in a system you can trust.”

Modern life is always inviting us to open more of those loops. It isn’t necessarily that we have more work to do, but that we have more kinds of work that we ought to be doing at any given moment. Tasks now bleed into each other unforgivingly. Whatever we’re doing, we can’t escape the sense that perhaps we should be doing something else. It’s these overlapping possibilities that take the mental toll.

The principle behind Getting Things Done is simple: close the open loops. The details can become rather involved but the method is straightforward. For every single commitment you’ve made to yourself or to someone else, write down the very next thing you plan to do. Review your lists of next actions frequently enough to give you confidence that you won’t miss anything.

This method has a cult following, and practical experience suggests that many people find it enormously helpful — including me (see below). Only recently, however, did the psychologists E J Masicampo and Roy Baumeister find some academic evidence to explain why people find relief by using David Allen’s system. Masicampo and Baumeister found that you don’t need to complete a task to banish the Zeigarnik effect. Making a specific plan will do just as well. Write down your next action and you quiet that nagging voice at the back of your head. You are outsourcing your anxiety to a piece of paper.
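The principle is simple enough to sketch in code. Here is a minimal, hypothetical version in Python — the project names and next actions are invented for illustration, and this is only the core idea of GTD, not Allen’s actual system: every open loop is paired with a concrete next action, and a review simply reads the whole list back.

```python
# A minimal sketch of "close the open loops": every commitment (project)
# is stored with its concrete next action, never left as a vague worry.
commitments = {
    "tax return": "find last year's P60",
    "spare room": "email the builder for a quote",
    "sales meeting": "draft three slides on the quarterly figures",
}

def add_commitment(commitments, project, next_action):
    """Opening a loop is only safe if a next action is written down with it."""
    commitments[project] = next_action

def weekly_review(commitments):
    """Read the whole list back; trusting this review is what quiets
    the Zeigarnik-style nagging."""
    return sorted(f"{project}: {action}"
                  for project, action in commitments.items())

add_commitment(commitments, "car", "book a service for Tuesday")
for line in weekly_review(commitments):
    print(line)
```

The anxiety-reduction lives in `weekly_review`: the loop stays open, but it is now held by the list rather than by you.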

A creative edge?

It is probably a wise idea to leave rapid task switching to the computers. Yet even frenetic flipping between Facebook, email and a document can have some benefits alongside the costs.

The psychologist Shelley Carson and her student Justin Moore recently recruited experimental subjects for a test of rapid task switching. Each subject was given a pair of tasks to do: crack a set of anagrams and read an article from an academic journal. These tasks were presented on a computer screen, and for half of the subjects they were presented sequentially — first solve the anagrams, then read the article. For the other half of the experimental group, the computer switched every two-and-a-half minutes between the anagrams and the journal article, forcing the subjects to change mental gears many times.

Unsurprisingly, task switching slowed the subjects down and scrambled their thinking. They solved fewer anagrams and performed poorly on a test of reading comprehension when forced to refocus every 150 seconds.

But the multitasking treatment did have a benefit. Subjects who had been task switching became more creative. To be specific, their scores on tests of “divergent” thinking improved. Such tests ask subjects to pour out multiple answers to odd questions. They might be asked to think of as many uses as possible for a rolling pin or to list all the consequences they could summon to mind of a world where everyone has three arms. Involuntary multitaskers produced a greater volume and variety of answers, and their answers were more original too.

“It seems that switching back and forth between tasks primed people for creativity,” says Carson, who is an adjunct professor at Harvard. The results of her work with Moore have not yet been published, and one might reasonably object that such tasks are trivial measures of creativity. Carson responds that scores on these laboratory tests of divergent thinking are correlated with substantial creative achievements such as publishing a novel, producing a professional stage show or creating an award-winning piece of visual art. Those who insist that great work can only be achieved through superhuman focus should think long and hard about this discovery.

Carson and colleagues have found an association between significant creative achievement and a trait psychologists term “low latent inhibition”. Latent inhibition is the filter, present in all mammals, that allows them to tune out apparently irrelevant stimuli. It would be crippling to listen to every conversation in the open-plan office and the hum of the air conditioning, while counting the number of people who walk past the office window. Latent inhibition is what saves us from having to do so. These subconscious filters let us walk through the world without being overwhelmed by all the different stimuli it hurls at us.

And yet people whose filters are a little bit porous have a big creative edge. Think on that, uni-taskers: while you busily try to focus on one thing at a time, the people who struggle to filter out the buzz of the world are being reviewed in The New Yorker.

“You’re letting more information into your cognitive workspace, and that information can be consciously or unconsciously combined,” says Carson. Two other psychologists, Holly White and Priti Shah, found a similar pattern for people suffering from attention deficit hyperactivity disorder (ADHD).

It would be wrong to romanticise potentially disabling conditions such as ADHD. All these studies were conducted on university students, people who had already demonstrated an ability to function well. But their conditions weren’t necessarily trivial — to participate in the White/Shah experiment, students had to have a clinical diagnosis of ADHD, meaning that their condition was troubling enough to prompt them to seek professional help.

It’s surprising to discover that being forced to switch tasks can make us more creative. It may be still more surprising to realise that in an age where we live under the threat of constant distraction, people who are particularly prone to being distracted are flourishing creatively.

Perhaps we shouldn’t be entirely surprised. It’s easier to think outside the box if the box is full of holes. And it’s also easier to think outside the box if you spend a lot of time clambering between different boxes. “The act of switching back and forth can grease the wheels of thought,” says John Kounios, a professor of psychology at Drexel University.

Kounios, who is co-author of The Eureka Factor, suggests that there are at least two other potentially creative mechanisms at play when we switch between tasks. One is that the new task can help us forget bad ideas. When solving a creative problem, it’s easy to become stuck because we think of an incorrect solution but simply can’t stop returning to it. Doing something totally new induces “fixation forgetting”, leaving us free to find the right answer.

Another is “opportunistic assimilation”. This is when the new task prompts us to think of a solution to the old one. The original Eureka moment is an example.

As the story has it, Archimedes was struggling with the task of determining whether a golden wreath truly was made of pure gold without damaging the ornate treasure. The solution was to determine whether the wreath had the same volume as a pure gold ingot with the same mass; this, in turn, could be done by submerging both the wreath and the ingot to see whether they displaced the same volume of water.

This insight, we are told, occurred to Archimedes while he was having a bath and watching the water level rise and fall as he lifted himself in and out. And if solving such a problem while having a bath isn’t multitasking, then what is?
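The bath-time insight reduces to simple arithmetic, sketched below using standard density figures; the 1kg wreath and the 30 per cent silver adulteration are invented purely for illustration. Equal masses of gold and silver displace different volumes of water, so a debased wreath gives itself away.

```python
# Archimedes' test as arithmetic: equal masses of different materials
# displace different volumes of water. Densities in g/cm^3 (standard values).
GOLD, SILVER = 19.3, 10.5

def displaced_volume(mass_g, density):
    """Volume of water pushed aside by a fully submerged object, in cm^3."""
    return mass_g / density

mass = 1000.0  # a 1kg wreath and a 1kg reference ingot
ingot = displaced_volume(mass, GOLD)
# Suppose the goldsmith secretly replaced 30% of the gold with silver:
wreath = displaced_volume(mass * 0.7, GOLD) + displaced_volume(mass * 0.3, SILVER)
print(f"ingot: {ingot:.1f} cm^3, wreath: {wreath:.1f} cm^3")
# prints: ingot: 51.8 cm^3, wreath: 64.8 cm^3
```

The adulterated wreath displaces some 13 cubic centimetres more water than the pure ingot, and the fraud is exposed without a scratch on the treasure.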

Tim Harford is an FT columnist. His latest book is ‘The Undercover Economist Strikes Back’. Twitter: @TimHarford

Six ways to be a master of multitasking

1. Be mindful

“The ideal situation is to be able to multitask when multitasking is appropriate, and focus when focusing is important,” says psychologist Shelley Carson. Tom Chatfield, author of Live This Book, suggests making two lists, one for activities best done with internet access and one for activities best done offline. Connecting and disconnecting from the internet should be deliberate acts.

2. Write it down

The essence of David Allen’s Getting Things Done is to turn every vague guilty thought into a specific action, to write down all of the actions and to review them regularly. The point, says Allen, is to feel relaxed about what you’re doing — and about what you’ve decided not to do right now — confident that nothing will fall through the cracks.

3. Tame your smartphone

The smartphone is a great servant and a harsh master. Disable needless notifications — most people don’t need to know about incoming tweets and emails. Set up a filing system within your email so that when a message arrives that requires a proper keyboard to answer — ie 50 words or more — you can move that email out of your inbox and place it in a folder where it will be waiting for you when you fire up your computer.

4. Focus in short sprints

The “Pomodoro Technique” — named after a kitchen timer — alternates focusing for 25 minutes and breaking for five minutes, across two-hour sessions. Productivity guru Merlin Mann suggests an “email dash”, where you scan email and deal with urgent matters for a few minutes each hour. Such ideas let you focus intensely while also switching between projects several times a day.
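As a sketch of how those sprints tile into a session (the two-hour window and the 25/5 split come from the technique as described above; the code itself is merely my illustration):

```python
def pomodoro_schedule(session_minutes=120, work=25, rest=5):
    """Lay out alternating focus/break blocks until the session is full."""
    blocks, elapsed = [], 0
    while elapsed + work <= session_minutes:
        blocks.append(("focus", work))
        elapsed += work
        if elapsed + rest <= session_minutes:
            blocks.append(("break", rest))
            elapsed += rest
    return blocks

for kind, minutes in pomodoro_schedule():
    print(kind, minutes)
```

A two-hour session comes out as four 25-minute focus sprints, each followed by a five-minute break: intense focus, yet several natural points a day at which to switch projects.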

5. Procrastinate to win

If you have several interesting projects on the go, you can procrastinate over one by working on another. (It worked for Charles Darwin.) A change is as good as a rest, they say — and as psychologist John Kounios explains, such task switching can also unlock new ideas.

6. Cross-fertilise

“Creative ideas come to people who are interdisciplinary, working across different organisational units or across many projects,” says author and research psychologist Keith Sawyer. (Appropriately, Sawyer is also a jazz pianist, a former management consultant and a sometime game designer for Atari.) Good ideas often come when your mind makes unexpected connections between different fields.

Tim Harford’s To-Do Lists

David Allen’s Getting Things Done system — or GTD — has reached the status of a religion among some productivity geeks. At its heart, it’s just a fancy to-do list, but it’s more powerful than a regular list because it’s comprehensive, specific and designed to prompt you when you need prompting. Here’s how I make the idea work for me.

Write everything down. I use Google Calendar for appointments and an electronic to-do list called Remember the Milk, plus an ad hoc daily list on paper. The details don’t matter. The principle is never to carry a mental commitment around in your head.

Make the list comprehensive. Mine currently has 151 items on it. (No, I don’t memorise the number. I just counted.)

Keep the list fresh. The system works its anxiety-reducing magic best if you trust your calendar and to-do list to remind you when you need reminding. I spend about 20 minutes once a week reviewing the list to note incoming deadlines and make sure the list is neither missing important commitments nor cluttered with stale projects. Review is vital — the more you trust your list, the more you use it. The more you use it, the more you trust it.

List by context as well as topic. It’s natural to list tasks by topic or project — everything associated with renovating the spare room, for instance, or next year’s annual away-day. I also list them by context (this is easy on an electronic list). Things I can do when on a plane; things I can only do when at the shops; things I need to talk about when I next see my boss.

Be specific about the next action. If you’re just writing down vague reminders, the to-do list will continue to provoke anxiety. Before you write down an ill-formed task, take the 15 seconds required to think about exactly what that task is.

Written for and first published at ft.com.



The myth of the robot job-ocalypse

“The number of jobs lost to more efficient machines is only part of the problem . . . In the past, new industries hired far more people than those they put out of business. But this is not true of many of today’s new industries.”

This sentiment, from Time magazine, dates from the early weeks of John Kennedy’s presidency. Yet it would slot nicely into many a contemporary political speech. Like any self-respecting remorseless killer robot from the future, our techno-anxiety just keeps coming back.

Arnold Schwarzenegger’s Terminator was science fiction — but so, too, is the idea that robots and software algorithms are guzzling jobs faster than they can be created. There is an astonishing mismatch between our fear of automation and the reality so far.

How can this be? The highways of Silicon Valley are sprinkled with self-driving cars. Visit the cinema, the supermarket or the bank and the most prominent staff you will see are the security guards, who are presumably there to prevent you stealing valuable machines. Your computer once contented itself with correcting your spelling; now it will translate your prose into Mandarin. Given all this, surely the robots must have stolen a job or two by now?

Of course, the answer is that automation has been destroying particular jobs in particular industries for a long time, which is why most westerners who weave clothes or cultivate and harvest crops by hand do so for fun. In the past that process made us richer.

The worry now is that, with computers making jobs redundant faster than we can generate new ones, the result is widespread unemployment, leaving a privileged class of robot-owning rentiers and highly paid workers with robot-compatible skills.

This idea is superficially plausible: we are surrounded by cheap, powerful computers; many people have lost their jobs in the past decade; and inequality has risen in the past 30 years.

But the theory can be put to a very simple test: how fast is productivity growing? The usual measure of productivity is output per hour worked — by a human. Robots can produce economic output without any hours of human labour at all, so a sudden onslaught of robot workers should cause a sudden acceleration in productivity.

Instead, productivity has been disappointing. In the US, labour productivity growth averaged an impressive 2.8 per cent per year from 1948 to 1973. The result was mass affluence rather than mass joblessness. Productivity then slumped for a generation and perked up in the late 1990s but has now sagged again. The picture is little better in the UK, where labour productivity is notoriously low compared with the other G7 leading economies, and it has been falling further behind since 2007.

Taking a 40-year perspective, the impact of this long productivity malaise on typical workers in rich countries is greater than that of the rise in inequality, or of the financial crisis of 2008. In an age peppered with economic disappointments, the worst has been the stubborn failure of the robots to take our jobs.

Why, then, is so much commentary dedicated to the opposite view? Some of this is a simple error: it has been a tough decade, economically speaking, and it is easy to blame robots for woes that should be laid at the door of others, such as bankers, austerity enthusiasts and eurozone politicians.

It is also true that robotics is making impressive strides. Gill Pratt, a robotics expert, recently described a “Cambrian explosion” for robotics in the Journal of Economic Perspectives. While robots have done little to cause mass unemployment in the recent past, that may change in future.

Automation has also undoubtedly changed the shape of the job market — economist David Autor, writing in the same journal, documents a rise in demand for low-skilled jobs and highly skilled jobs, and a hollowing out of jobs in the middle. There are signs that the hollow is moving further and further up the spectrum of skills. The robots may not be taking our jobs, but they are certainly shuffling them around.

Yet Mr Autor also points to a striking statistic: private investment in computers and software in the US has been falling almost continuously for 15 years. That is hard to square with the story of a robotic job-ocalypse. Surely we should expect to see a surge in IT investment as all those machines are installed?

Instead, in the wake of the great recession, managers have noted an ample supply of cheap human labour and have done without the machines for now. Perhaps there is some vast underground dormitory somewhere, all steel and sparks and dormant androids. In a corner, a chromium-plated robo-hack is tapping away at a column lamenting the fact that the humans have taken all the robots’ jobs.

24th of August, 2015
