Tim Harford – The Undercover Economist

Other Writing

Articles from the New York Times, Forbes, Wired and beyond – any piece that isn’t one of my columns.


Why everyone should give a TED talk and how to do it

I found out the hard way that bad public speaking is contagious. As a schoolboy I was pretty good at speeches, in a schoolboyish way. I won competitions; being a sharp, witty speaker was a defining part of who I felt myself to be.

Then I grew up and started a corporate job, and something strange happened. My talks sagged into “presentations”, burdened by humourless clip art and plodding bullet points. The reason? I was surrounded by people who were stuck in the same beige offices giving the same beige presentations. Like many workplaces, we had reached an unspoken consensus that giving bad talks was just the way things were done.

Aside from tradition — and it is a powerful one — why else are most talks bad talks? One reason is fear. Being afraid does not itself make a speech bad; fear can make a talk electrifying or touching. But most speakers take the coward’s way out. Afraid of running out of words, they overstuff their speeches. And they prop themselves up by projecting their speaking notes on the wall behind them, even though everyone knows that providing rolling spoilers for your speech is a terrible idea.

A second reason is lack of preparation. Most speakers rehearse neither their argument nor their performance. That is understandable. Practising in front of a mirror is painful. Practising in front of a friend is excruciating. Rehearsing offers all the discomfort of giving a speech without any of the rewards of doing so. But it will make the end result much better.

For these reasons, I think you should give a TED talk. Almost anyone can. All you need is 18 minutes, a topic and an audience — if only your cat. No matter how often or how rarely you usually speak in public, the act of trying to give a talk in the tradition of TED will change the way you think and feel about public speaking.

As with anything popular, TED talks have their critics, but it is hard to deny that the non-profit organisation behind the videoed presentations on subjects from science to business has helped reinvent the art of the public speech.

TED talks are vastly more entertaining than traditional lectures, while more thought-provoking than most television. But that is TED from the point of view of the audience. From the perspective of an aspiring speaker, the lesson of TED is that most speakers could raise their game. A few TED talks are by professional politicians or entertainers such as Al Gore or David Blaine. Most are not.

There are more than 1,000 talks on the TED website with more than 1m views, typically delivered by writers, academics or entrepreneurs who have been giving mediocre talks as a matter of habit, and who have been suddenly challenged to stop being mediocre. Faced with the obligation to deliver the talk of their lives, they decided to do the work and take the necessary risks.

These speakers have been offered good advice by the organisers of TED, but that advice has never been a secret. It is now available to anyone in the form of TED Talks, a guide to public speaking from Chris Anderson, the TED boss. It is excellent; easily the best public speaking guide I have read. (I should admit a bias: I have spoken twice at TED events and benefited from the platform that TED provides.) Unlike many in the genre, Anderson’s book is not a comprehensive guide to going through the motions of wedding toasts and votes of thanks. Instead, it focuses on the stripped-down TED-style challenge: an audience, a speaker, plenty of time to prepare, and 18 minutes to say something worth hearing.

There is no formula for a great talk, insists Mr Anderson, but there are some common elements. First and most important: there is a point, an idea worth hearing about. Second, the talk has a “throughline” — meaning that most of what is said in some way supports that idea. There may be stories and jokes, even surprises — but everything is relevant.

Third, the speaker connects with those listening — perhaps through humour, stories, or simply making eye contact and speaking frankly. Finally, the speech explains concepts or advances arguments by starting from what the audience understand, and proceeding step by step through more surprising territory. It can be very hard for a speaker to appreciate just how much she knows that her audience do not. One reason to rehearse is that an audience can tell you when they get lost.

Most speakers are able to do some of this, some of the time — an interesting anecdote, a funny line, an educational explanation. We are social beings, after all. We have had a lot of practice talking.

Much of what turns a half-decent talk into a brilliant one is the ruthless excision of the fluff — the throat-clearing introduction, the platitudes, the digressions, the additional points that obscure the central message, and the “er, that’s about it” conclusion. With an audience of 60 people, for instance, every minute you waffle is an hour of other people’s time you are wasting. Sharpen up.

My only quibble is that the book offers less to a speaker who is short of preparation time. Because Mr Anderson is so keen to tell speakers how to prepare, he does not fully engage with the challenge of improvised speaking or debating.

Marco “Rubot” Rubio’s presidential dreams may have been snuffed out because he seemed over-rehearsed and unable to improvise. And Martin Luther King Jr’s greatest moment as a speaker — the second half of “I have a dream” — was unscripted. Sometimes the improvised response is more powerful than a prepared speech can ever be.

Instead, Mr Anderson’s aim is to help readers give a full-blown TED talk, despite the hard work that entails. Fair enough. Preparing to give a high-stakes speech is like training for a marathon or studying for an exam: even if you only do it once, the process will teach you things you will always remember.

Written for and first published in the Financial Times.


A short-cut to speeches

A TED-style talk takes weeks of preparation. What if you have hours, or minutes, to prepare?

• Say something worth hearing. “It’s not about you,” says Chris Anderson, who warns that business presentations are often sales pitches or boasts. He adds that the same information will land much better if it is “here’s what we’ve learnt” rather than “look how great we’ve been”.

• Less is more. Once you have found something worth saying, focus. Strip it down to a single core point. Everything about your speech — stories, jokes, statistics, graphics — should connect to that point.

• Your speaking notes should not intrude. Bullet points are a good idea if they are written on handheld cards, but not when projected on the wall behind you. If your speech is scripted, do not try to memorise it if you have no time, but become familiar with it. “There’s a big difference between being 90 per cent down in the script, and 60 per cent up and connected,” says Anderson.

• You are usually your own best visual aid. By all means use pictures, diagrams or video when they are good. But do not use substandard slides as wallpaper; when you have nothing to show, show nothing. Hit “B” to blank the screen and focus attention on you, or use empty slides.

• Practise. Even one run-through with a friend will help. Or find an empty room and record yourself on your phone. It is awkward but worth it.

• First and final impressions last. Improvised talks often suffer from a slow start and a limp finish. Think of a good opening and closing, and practise them. If you can start and finish strongly, you and your audience will both feel better.

11th of May, 2016

How Politicians Poisoned Statistics

We have more data — and the tools to analyse and share them — than ever before. So why is the truth so hard to pin down?

In January 2015, a few months before the British general election, a proud newspaper resigned itself to the view that little good could come from the use of statistics by politicians. An editorial in the Guardian argued that in a campaign that would be “the most fact-blitzed in history”, numerical claims would settle no arguments and persuade no voters. Not only were numbers useless for winning power, it added, they were useless for wielding it, too. Numbers could tell us little. “The project of replacing a clash of ideas with a policy calculus was always dubious,” concluded the newspaper. “Anyone still hankering for it should admit their number’s up.”

This statistical capitulation was a dismaying read for anyone still wedded to the idea — apparently a quaint one — that gathering statistical information might help us understand and improve our world. But the Guardian’s cynicism can hardly be a surprise. It is a natural response to the rise of “statistical bullshit” — the casual slinging around of numbers not because they are true, or false, but to sell a message.

Politicians weren’t always so ready to use numbers as part of the sales pitch. Recall Ronald Reagan’s famous suggestion to voters on the eve of his landslide defeat of President Carter: “Ask yourself, ‘Are you better off now than you were four years ago?’” Reagan didn’t add any statistical garnish. He knew that voters would reach their own conclusions.

The British election campaign of spring last year, by contrast, was characterised by a relentless statistical crossfire. The shadow chancellor of the day, Ed Balls, declared that a couple with children (he didn’t say which couple) had lost £1,800 thanks to the government’s increase in value added tax. David Cameron, the prime minister, countered that 94 per cent of working households were better off thanks to recent tax changes, while the then deputy prime minister Nick Clegg was proud to say that 27 million people were £825 better off in terms of the income tax they paid.

Could any of this be true? Yes — all three claims were. But Ed Balls had reached his figure by summing up extra VAT payments over several years, a strange method. If you offer to hire someone for £100,000, and then later admit you meant £25,000 a year for a four-year contract, you haven’t really lied — but neither have you really told the truth. And Balls had looked only at one tax. Why not also consider income tax, which the government had cut? Clegg boasted about income-tax cuts but ignored the larger rise in VAT. And Cameron asked to be evaluated only on his pre-election giveaway budget rather than the tax rises he had introduced earlier in the parliament — the equivalent of punching someone on the nose, then giving them a bunch of flowers and pointing out that, in floral terms, they were ahead on the deal.

Each claim was narrowly true but broadly misleading. Not only did the clashing numbers confuse but none of them helped answer the crucial question of whether Cameron and Clegg had made good decisions in office.

To ask whether the claims were true is to fall into a trap. None of these politicians had any interest in playing that game. They were engaged in another pastime entirely.

Thirty years ago, the Princeton philosopher Harry Frankfurt published an essay in an obscure academic journal, Raritan. The essay’s title was “On Bullshit”. (Much later, it was republished as a slim volume that became a bestseller.) Frankfurt was on a quest to understand the meaning of bullshit — what was it, how did it differ from lies, and why was there so much of it about?

Frankfurt concluded that the difference between the liar and the bullshitter was that the liar cared about the truth — cared so much that he wanted to obscure it — while the bullshitter did not. The bullshitter, said Frankfurt, was indifferent to whether the statements he uttered were true or not. “He just picks them out, or makes them up, to suit his purpose.”

Statistical bullshit is a special case of bullshit in general, and it appears to be on the rise. This is partly because social media — a natural vector for statements made purely for effect — are also on the rise. On Instagram and Twitter we like to share attention-grabbing graphics, surprising headlines and figures that resonate with how we already see the world. Unfortunately, very few claims are eye-catching, surprising or emotionally resonant because they are true and fair. Statistical bullshit spreads easily these days; all it takes is a click.

Consider a widely shared list of homicide “statistics” attributed to the “Crime Statistics Bureau — San Francisco”, asserting that 81 per cent of white homicide victims were killed by “blacks”. It takes little effort to establish that the Crime Statistics Bureau of San Francisco does not exist, and not much more digging to discover that the data are utterly false. Most murder victims in the United States are killed by people of their own race; the FBI’s crime statistics from 2014 suggest that more than 80 per cent of white murder victims were killed by other white people.

Somebody, somewhere, invented the image in the hope that it would spread, and spread it did, helped by a tweet from Donald Trump, the current frontrunner for the Republican presidential nomination, that was retweeted more than 8,000 times. One can only speculate as to why Trump lent his megaphone to bogus statistics, but when challenged on Fox News by the political commentator Bill O’Reilly, he replied, “Hey, Bill, Bill, am I gonna check every statistic?”

Harry Frankfurt’s description of the bullshitter would seem to fit Trump perfectly: “He does not care whether the things he says describe reality correctly.”

While we can’t rule out the possibility that Trump knew the truth and was actively trying to deceive his followers, a simpler explanation is that he wanted to win attention and to say something that would resonate with them. One might also guess that he did not check whether the numbers were true because he did not much care one way or the other. This is not a game of true and false. This is a game of politics.

While much statistical bullshit is careless, it can also be finely crafted. “The notion of carefully wrought bullshit involves … a certain inner strain,” wrote Harry Frankfurt but, nevertheless, the bullshit produced by spin-doctors can be meticulous. More conventional politicians than Trump may not much care about the truth but they do care about being caught lying.

Carefully wrought bullshit was much in evidence during last year’s British general election campaign. I needed to stick my nose in and take a good sniff on a regular basis because I was fact-checking on behalf of the BBC’s More or Less programme. Again and again I would find myself being asked on air, “Is that claim true?” and finding that the only reasonable answer began with “It’s complicated”.

Take Ed Miliband’s claim before the last election that “people are £1,600 a year worse off” than they were when the coalition government came to power. Was that claim true? Arguably, yes.

But we need to be clear that by “people”, the then Labour leader was excluding half the adult population. He was not referring to pensioners, benefit recipients, part-time workers or the self-employed. He meant only full-time employees, and, more specifically, only their earnings before taxes and benefits.

Even this narrower question of what was happening to full-time earnings is a surprisingly slippery one. We need to take an average, of course. But what kind of average? Labour looked at the change in median wages, which were stagnating in nominal terms and falling after inflation was taken into account.

That seems reasonable — but the median is a problematic measure in this case. Imagine nine people, the lowest-paid with a wage of £1, the next with a wage of £2, up to the highest-paid person with a wage of £9. The median wage is the wage of the person in the middle: it’s £5.

Now imagine that everyone receives a promotion and a pay rise of £1. The lowly worker with a wage of £1 sees his pay packet double to £2. The next worker up was earning £2 and now she gets £3. And so on. But there’s also a change in the composition of the workforce: the best-paid worker retires and a new apprentice is hired at a wage of £1. What’s happened to people’s pay? In a sense, it has stagnated. The pattern of wages hasn’t changed at all and the median is still £5.

But if you asked the individual workers about their experiences, they would all tell you that they had received a generous pay rise. (The exceptions are the newly hired apprentice and the recent retiree.) While this example is hypothetical, at the time Miliband made his comments something similar was happening in the real labour market. The median wage was stagnating — but among people who had worked for the same employer for at least a year, the median worker was receiving a pay rise, albeit a modest one.
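If it helps to see the arithmetic laid out, here is a minimal sketch in Python of the nine-worker example above. The wages are the illustrative figures from the text, not real earnings data.

```python
# The nine-worker example: the median wage stagnates even though every
# continuing worker receives a £1 pay rise.
from statistics import median

before = [1, 2, 3, 4, 5, 6, 7, 8, 9]   # wages in pounds

# Everyone gets a £1 rise...
after = [w + 1 for w in before]          # [2, 3, ..., 10]
# ...but the best-paid worker (now on £10) retires and a new
# apprentice is hired on £1.
after.remove(10)
after.append(1)

print(median(before))   # 5
print(median(after))    # 5 -- the median hasn't moved,
                        # yet every worker who stayed is £1 better off
```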

Another source of confusion: if wages for the low-paid and the high-paid are rising but wages in the middle are sagging, then the median wage can fall, even though the median wage increase is healthy. The UK labour market has long been prone to this kind of “job polarisation”, where demand for jobs is strongest for the highest and lowest-paid in the economy. Job polarisation means that the median pay rise can be sizeable even if median pay has not risen.

Confused? Good. The world is a complicated place; it defies description by sound bite statistics. No single number could ever answer Ronald Reagan’s question — “Are you better off now than you were four years ago?” — for everyone in a country.

So, to produce Labour’s figure of “£1,600 worse off”, the party’s press office had to ignore the self-employed, the part-timers, the non-workers, compositional effects and job polarisation. They even changed the basis of their calculation over time, switching between different measures of wages and different measures of inflation, yet miraculously managing to produce a consistent answer of £1,600. Sometimes it’s easier to make the calculation produce the number you want than it is to reprint all your election flyers.


Such careful statistical spin-doctoring might seem a world away from Trump’s reckless retweeting of racially charged lies. But in one sense they were very similar: a political use of statistics conducted with little interest in understanding or describing reality. Miliband’s project was not “What is the truth?” but “What can I say without being shown up as a liar?”

Unlike the state of the UK job market, his incentives were easy to understand. Miliband needed to hammer home a talking point that made the government look bad. As Harry Frankfurt wrote back in the 1980s, the bullshitter “is neither on the side of the true nor on the side of the false. His eye is not on the facts at all … except insofar as they may be pertinent to his interest in getting away with what he says.”

Such complexities put fact-checkers in an awkward position. Should they say that Ed Miliband had lied? No: he had not. Should they say, instead, that he had been deceptive or misleading? Again, no: it was reasonable to say that living standards had indeed been disappointing under the coalition government.

Nevertheless, there was a lot going on in the British economy that the figure omitted — much of it rather more flattering to the government. Full Fact, an independent fact-checking organisation, carefully worked through the paper trail and linked to all the relevant claims. But it was powerless to produce a fair and representative snapshot of the British labour market that had as much power as Ed Miliband’s seven-word sound bite. No such snapshot exists. Truth is usually a lot more complicated than statistical bullshit.

On July 16 2015, the UK health secretary Jeremy Hunt declared: “Around 6,000 people lose their lives every year because we do not have a proper seven-day service in hospitals. You are 15 per cent more likely to die if you are admitted on a Sunday compared to being admitted on a Wednesday.”

This was a statistic with a purpose. Hunt wanted to change doctors’ contracts with the aim of getting more weekend work out of them, and bluntly declared that the doctors’ union, the British Medical Association, was out of touch and that he would not let it block his plans: “I can give them 6,000 reasons why.”

Despite bitter opposition and strike action from doctors, Hunt’s policy remained firm over the following months. Yet the numbers he cited to support it did not. In parliament in October, Hunt was sticking to the 15 per cent figure, but the 6,000 deaths had almost doubled: “According to an independent study conducted by the BMJ, there are 11,000 excess deaths because we do not staff our hospitals properly at weekends.”

Arithmetically, this was puzzling: how could the elevated risk of death stay the same but the number of deaths double? To add to the suspicions about Hunt’s mathematics, the editor-in-chief of the British Medical Journal, Fiona Godlee, promptly responded that the health secretary had publicly misrepresented the BMJ research.

Undaunted, the health secretary bounced back in January with the same policy and some fresh facts: “At the moment we have an NHS where if you have a stroke at the weekends, you’re 20 per cent more likely to die. That can’t be acceptable.”

All this is finely wrought bullshit — a series of ever-shifting claims that can be easily repeated but are difficult to unpick. As Hunt jumped from one form of words to another, he skipped lightly ahead of fact-checkers as they tried to pin him down. Full Fact concluded that Hunt’s statement about 11,000 excess deaths had been untrue, and asked him to correct the parliamentary record. His office responded with a spectacular piece of bullshit, saying (I paraphrase) that whether or not the claim about 11,000 excess deaths was true, similar claims could be made that were.

So, is it true? Do 6,000 people — or 11,000 — die needlessly in NHS hospitals because of poor weekend care? Nobody knows for sure; Jeremy Hunt certainly does not. It’s not enough to show that people admitted to hospital at the weekend are at an increased risk of dying there. We need to understand why — a question that is essential for good policy but inconvenient for politicians.

One possible explanation for the elevated death rate for weekend admissions is that the NHS provides patchy care and people die as a result. That is the interpretation presented as bald fact by Jeremy Hunt. But a more straightforward explanation is that people are only admitted to hospital at the weekend if they are seriously ill. Less urgent cases wait until weekdays. If weekend patients are sicker, it is hardly a surprise that they are more likely to die. Allowing non-urgent cases into NHS hospitals at weekends wouldn’t save any lives, but it would certainly make the statistics look more flattering. Of course, epidemiologists try to correct for the fact that weekend patients tend to be more seriously ill, but few experts have any confidence that they have succeeded.
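To make the case-mix point concrete, here is a toy calculation in Python. The risk shares and mortality rates are invented for illustration and are not NHS figures; the point is simply that identical care given to two different mixes of patients produces two different crude death rates.

```python
# Invented numbers: how case mix alone can create an "excess" weekend
# death rate even when the care given to each kind of patient is identical.
def crude_death_rate(share_high_risk: float,
                     mortality_low: float = 0.02,
                     mortality_high: float = 0.20) -> float:
    """Crude mortality for a mix of low-risk and high-risk admissions."""
    share_low_risk = 1 - share_high_risk
    return share_low_risk * mortality_low + share_high_risk * mortality_high

weekday = crude_death_rate(share_high_risk=0.2)  # mostly routine admissions
weekend = crude_death_rate(share_high_risk=0.6)  # only the seriously ill come in

print(f"weekday: {weekday:.1%}, weekend: {weekend:.1%}")
# weekday: 5.6%, weekend: 12.8% -- same care, different case mix
```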

A more subtle explanation is that shortfalls in the palliative care system may create the illusion that hospitals are dangerous. Sometimes a patient is certain to die, but the question is where — in a hospital or a palliative hospice? If hospice care is patchy at weekends then a patient may instead be admitted to hospital and die there. That would certainly reflect poor weekend care. It would also add to the tally of excess weekend hospital deaths, because during the week that patient would have been admitted to, and died in, a palliative hospice. But it is not true that the death was avoidable.

Does it seem like we’re getting stuck in the details? Well, yes, perhaps we are. But improving NHS care requires an interest in the details. If there is a problem in palliative care hospices, it will not be fixed by improving staffing in hospitals.

“Even if you accept that there’s a difference in death rates,” says John Appleby, the chief economist of the King’s Fund health think-tank, “nobody is able to say why it is. Is it lack of diagnostic services? Lack of consultants? We’re jumping too quickly from a statistic to a solution.”


This matters — the NHS has a limited budget. There are many things we might want to spend money on, which is why we have the National Institute for Health and Care Excellence (Nice) to weigh up the likely benefits of new treatments and decide which offer the best value for money.

Would Jeremy Hunt’s push towards a seven-day NHS pass the Nice cost-benefit threshold? Probably not. Our best guess comes from a 2015 study by health economists Rachel Meacock, Tim Doran and Matt Sutton, which estimates that the NHS has many cheaper ways to save lives. A more comprehensive assessment might reach a different conclusion but we don’t have one because the Department for Health, oddly, hasn’t carried out a formal health impact assessment of the policy it is trying to implement.

This is a depressing situation. The government has devoted considerable effort to producing a killer number: Jeremy Hunt’s “6,000 reasons” why he won’t let the British Medical Association stand in his way. It continues to produce statistical claims that spring up like hydra heads: when one claim is discredited, Hunt’s office simply asserts that another one can be found to take its place. Yet the government doesn’t seem to have bothered to gather the statistics that would actually answer the question of how the NHS could work better.

This is the real tragedy. It’s not that politicians spin things their way — of course they do. That is politics. It’s that politicians have grown so used to misusing numbers as weapons that they have forgotten that, used properly, they are tools.

“You complain that your report would be dry. The dryer the better. Statistics should be the dryest of all reading,” wrote the great medical statistician William Farr in a letter in 1861. Farr sounds like a caricature of a statistician, and his prescription — convey the greatest possible volume of information with the smallest possible amount of editorial colour — seems absurdly ill-suited to the modern world.

But there is a middle ground between the statistical bullshitter, who pays no attention to the truth, and William Farr, for whom the truth must be presented without adornment. That middle ground is embodied by the recipient of William Farr’s letter advising dryness. She was the first woman to be elected to the Royal Statistical Society: Florence Nightingale.

Nightingale is the most celebrated nurse in British history, famous for her lamplit patrols of the Barrack Hospital in Scutari, now a district of Istanbul. The hospital was a death trap, with thousands of soldiers from the Crimean front succumbing to typhus, cholera and dysentery as they tried to recover from their wounds in cramped conditions next to the sewers. Nightingale, who did her best, initially believed that the death toll was due to lack of food and supplies. Then, in the spring of 1855, a sanitary commission sent from London cleaned up the hospital, whitewashing the walls, carting away filth and dead animals and flushing out the sewers. The death rate fell sharply.

Nightingale returned to Britain and reviewed the statistics, concluding that she had paid too little attention to sanitation and that most military and medical professionals were making the same mistake, leading to hundreds of thousands of deaths. She began to campaign for better public health measures, tighter laws on hygiene in rented properties, and improvements to sanitation in barracks and hospitals across the country. In doing so, a mere nurse had to convince the country’s medical and military establishments, led by England’s chief medical officer, John Simon, that they had been doing things wrong all their lives.

A key weapon in this lopsided battle was statistical evidence. But Nightingale disagreed with Farr on how that evidence should be presented. “The dryer the better” would not serve her purposes. Instead, in 1857, she crafted what has become known as the Rose Diagram, a beautiful array of coloured wedges showing the deaths from infectious diseases before and after the sanitary improvements at Scutari.


The Rose Diagram isn’t a dry presentation of statistical truth. It tells a story. Its structure divides the death toll into two periods — before the sanitary improvements, and after. In doing so, it highlights a sharp break that is less than clear in the raw data. And the Rose Diagram also gently obscures other possible interpretations of the numbers — that, for example, the death toll dropped not because of improved hygiene but because winter was over. The Rose Diagram is a marketing pitch for an idea. The idea was true and vital, and Nightingale’s campaign was successful. One of her biographers, Hugh Small, argues that the Rose Diagram ushered in health improvements that raised life expectancy in the UK by 20 years and saved millions of lives.
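For readers curious how such a chart is put together, here is a rough sketch of a Nightingale-style rose diagram using Python’s matplotlib. The monthly figures are invented placeholders rather than the Scutari data; the detail worth noting is that the area of each wedge, not its radius, is drawn in proportion to the deaths.

```python
# A Nightingale-style "rose" (polar area) chart with invented monthly figures.
import numpy as np
import matplotlib.pyplot as plt

months = ["Apr", "May", "Jun", "Jul", "Aug", "Sep",
          "Oct", "Nov", "Dec", "Jan", "Feb", "Mar"]
deaths = [110, 150, 230, 320, 400, 380, 290, 180, 120, 90, 60, 40]  # invented

angles = np.linspace(0, 2 * np.pi, len(months), endpoint=False)
width = 2 * np.pi / len(months)

ax = plt.subplot(projection="polar")
# Area, not radius, should be proportional to the count, as in the original.
ax.bar(angles, np.sqrt(deaths), width=width, bottom=0,
       color="steelblue", edgecolor="white")
ax.set_xticks(angles)
ax.set_xticklabels(months)
ax.set_yticks([])
ax.set_title("Nightingale-style rose diagram (illustrative data)")
plt.show()
```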

What makes Nightingale’s story so striking is that she was able to see that statistics could be tools and weapons at the same time. She educated herself using the data, before giving it the makeover it required to convince others. Though the Rose Diagram is a long way from “the dryest of all reading”, it is also a long way from bullshit. Florence Nightingale realised that the truth about public health was so vital that it could not simply be recited in a monotone. It needed to sing.

The idea that a graph could change the world seems hard to imagine today. Cynicism has set in about statistics. Many journalists draw no distinction between a systematic review of peer-reviewed evidence and a survey whipped up in an afternoon to sell biscuits or package holidays: it’s all described as “new research”. Politicians treat statistics not as the foundation of their argument but as decoration — “spray-on evidence” is the phrase used by jaded civil servants. But a freshly painted policy without foundations will not last long before the cracks show through.

“Politicians need to remember: there is a real world and you want to try to change it,” says Will Moy, the director of Full Fact. “At some stage you need to engage with the real world — and that is where the statistics come in handy.”

That should be no problem, because it has never been easier to gather and analyse informative statistics. Nightingale and Farr could not have imagined the data that modern medical researchers have at their fingertips. The gold standard of statistical evidence is the randomised controlled trial, because using a randomly chosen control group protects against biased or optimistic interpretations of the evidence. Hundreds of thousands of such trials have been published, most of them within the past 25 years. In non-medical areas such as education, development aid and prison reform, randomised trials are rapidly catching on: thousands have been conducted. The British government, too, has been supporting policy trials — for example, the Education Endowment Foundation, set up with £125m of government funds just five years ago, has already backed more than 100 evaluations of educational approaches in English schools. It favours randomised trials wherever possible.

The frustrating thing is that politicians seem quite happy to ignore evidence — even when they have helped to support the researchers who produced it. For example, when the chancellor George Osborne announced in his budget last month that all English schools were to become academies, making them independent of the local government, he did so on the basis of faith alone. The Sutton Trust, an educational charity which funds numerous research projects, warned that on the question of whether academies had fulfilled their original mission of improving failing schools in poorer areas, “our evidence suggests a mixed picture”. Researchers at the LSE’s Centre for Economic Performance had a blunter description of Osborne’s new policy: “a non-evidence based shot in the dark”.

This should be no surprise. Politicians typically use statistics like a stage magician uses smoke and mirrors. Over time, they can come to view numbers with contempt. Voters and journalists will do likewise. No wonder the Guardian gave up on the idea that political arguments might be settled by anything so mundane as evidence. The spin-doctors have poisoned the statistical well.

But despite all this despair, the facts still matter. There isn’t a policy question in the world that can be settled by statistics alone but, in almost every case, understanding the statistical background is a tremendous help. Hetan Shah, the executive director of the Royal Statistical Society, has lost count of the number of times someone has teased him with the old saying about “lies, damned lies and statistics”. He points out that while it’s easy to lie with statistics, it’s even easier to lie without them.

Perhaps the lies aren’t the real enemy here. Lies can be refuted; liars can be exposed. But bullshit? Bullshit is a stickier problem. Bullshit corrodes the very idea that the truth is out there, waiting to be discovered by a careful mind. It undermines the notion that the truth matters. As Harry Frankfurt himself wrote, the bullshitter “does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.”

 

Written for and first published in the FT Magazine


20th of April, 2016

Three pieces of Brexit Bullshit

A referendum on UK membership of the European Union is scheduled for June 23: dodgy statistics ahoy.

“Ten Commandments — 179 words. Gettysburg address — 286 words. US Declaration of Independence — 1,300 words. EU regulations on the sale of cabbage — 26,911 words”

Variants of this claim have been circulating online and in print. It turns out that the “cabbage memo” is a longstanding urban myth that can be traced back to the US during the second world war. Variants have been used to berate bureaucrats on both sides of the Atlantic ever since.

Part of the bullshit here is that nobody ever stops to ask how many words might be appropriate for rules on fresh produce. Red Tractor Assurance, the British farm and food standards scheme, publishes 56 different protocols on fresh produce alone. The cabbage protocol is 28 pages long; there is a separate 28-page protocol on pak choi and choi sum. None of this has anything to do with the EU.

Three million jobs depend on the EU

This claim is popular among “Remain” advocates — most famously the former deputy prime minister Nick Clegg. What makes this claim bullshit is that it could easily be true, or utterly false, and it all hangs on the definition of “depend”.

The claim is that “up to 3.2 million jobs” were directly linked to exports of goods and services to other EU countries. That number passes a quick reality check: it’s about 10 per cent of UK jobs, and UK exports to the EU are about 10 per cent of the UK economy.

But even if “up to” 3.2 million jobs depend on trade with the EU, that does not mean they depend on membership of the EU. Nobody proposes — or expects — that trade with the EU will just stop. Three million jobs might well be destroyed if continental Europe was to sink beneath the waves like Atlantis, but that is not what the referendum is about.

EU membership costs £55m a day

This one is from Ukip leader Nigel Farage, who says membership amounts to more than £20bn a year. In fact, the UK paid £14.3bn to the EU in 2014 and got £6bn back. The net membership fee, then, was £8.3bn, less than half Farage’s number.
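The gap between the headline number and the corrected one is easy to check. Here is a back-of-the-envelope sketch in Python, using only the figures quoted above:

```python
# Quick sanity check of the "£55m a day" claim, using the 2014 figures above.
gross_contribution = 14.3e9   # paid to the EU
receipts = 6.0e9              # money that came back
net_contribution = gross_contribution - receipts   # about £8.3bn

print(f"net per year: £{net_contribution / 1e9:.1f}bn")
print(f"net per day:  £{net_contribution / 365 / 1e6:.0f}m")      # ~£23m, not £55m
print(f"Farage's £20bn a year, per day: £{20e9 / 365 / 1e6:.0f}m") # ~£55m
```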

But even the correct number is little use without context. It is, for example, just over 1 per cent of UK public spending. Not nothing, but not everything either. And non-member states such as Norway and Switzerland pay large sums to the EU to retain access to the single market, so Brexit would not make this bill disappear.

The membership fee is small relative to the plausible costs and benefits of EU membership, positive or negative. If EU membership is good for Britain then £8.3bn is cheap. And if the EU is holding Britain back, then a few billion on membership is the least of our worries.

 

Written as a sidebar for “How Politicians Poisoned Statistics“, and first published in the FT Magazine.


16th of April, 2016

A modest proposal – let’s just abolish the Budget

Once upon a time, it made sense to have an annual Budget speech. When the central economic fact of the year was whether the harvest had failed or not, it behoved the Chancellor to declare how he planned to spend whatever he happened to have in his coffers. But a vital institution for the pre-industrial age has mutated into a mere circus for the post-industrial one. The central question that this Budget provoked in my mind was this: why on earth do we still have a Budget?

Skim through the transcript of yesterday’s speech — if you can bear to — and you’ll find that the items fall into a few categories: (1) trivial; (2) responses to silly self-imposed rules; (3) economic forecasts that will later be wrong; (4) pure rhetoric; (5) worse than useless; (6) irrelevant.

Mr Osborne opened with a list of all the ways in which the UK economy is strong, skimmed over all the ways in which it is weak, and blamed the Labour party or foreigners for everything. (Rhetoric.)

Then he ran through the latest outlook from the Office for Budget Responsibility, an institution that represents all that is best about rigorous independent economic forecasting — and is therefore bound to be wrong. (Bad forecasts.)

He admitted that he had broken his own rather odd rule about the ratio of debt to gross domestic product, before announcing that according to a different, nonsensical metric, he looked rather good. (Silly self-imposed rules.)

Mr Osborne threw the usual glitter-bomb of little presents. (Trivial.)

Consider the £19m for “community-led housing schemes” in the south-west of England. Very nice, but if every £1m of spending earned one word from the chancellor, his Budget speech would be considerably longer than War and Peace. Then there’s the donation of tampon VAT revenues to charity, the halving of the Severn Bridge toll, and a tax break for touring museum exhibitions. Isn’t it strange how the treats are emphasised and the multibillion-pound cuts and tax rises are always relegated to the appendices?

Then there’s the Chancellor’s odd tradition of pencilling in an increase to fuel duty year after year and, regardless of circumstances, coming up with an excuse to cancel the increase one more time. (Worse than useless.)

We also have major reforms to the school system. (Nothing to do with the Budget.)

What are we left with? The bodged introduction of a sugar tax and yet another wheeze to reform pension saving, the Lifetime Isa. Both policies would have been far better separated from the rabbit-out-of-hat Budget show, and considered on their merits.

There have been more dramatic Budgets than this, of course, but even then the drama has too often fallen into the “worse than useless” category.

Would the country’s economic policy really be harmed if the chancellor set out his fiscal direction at the beginning of the parliament and left it unchanged unless extraordinary circumstances intervened?

We should abolish 100 per cent of Autumn Statements and 80 per cent of Budgets — that’s a fiscal rule that I could really get behind.

Written for and first published in the Financial Times.


17th of March, 2016

Why Osborne’s sugar tax is half baked

I’m all in favour of a sugar tax, as I wrote in the FT Magazine on Saturday. It’s a shame, then, that — despite the headlines to the contrary — George Osborne hasn’t introduced one.

His proposal instead is to tax the manufacturers and importers of a particular variety of sugary drink. I am no dentist or dietitian, but it seems strange to take the view that sugar in general poses no risk to the nation’s teeth or waistline, unless it comes in a soft drink.

Coke and Pepsi are a problem, apparently. But it seems that sugar lumps in tea or coffee are not. Neither are cartons of chocolate milk. Nor syrupy concoctions from Starbucks and Costa. Nor soft drinks produced by boutique producers. Mars bars are fine. So are cakes. So are Coco Pops and Frosties, and for that matter the remarkable quantities of sugar that infuse cereals such as Bran Flakes, or are buried in the recipes of many ready meals. All these forms of sugar will continue to reach our taste buds free of a sugar tax.

Mr Osborne’s proposal will work, after a fashion. There is abundant evidence that people adjust their behaviour in response to financial incentives, whether through the window tax-avoiding architecture of 18th-century Britain or the inheritance tax-avoiding feat of Australians in postponing the date of their deaths to a more tax-efficient time.

So yes, as Mr Osborne expects, large companies will try to put less sugar in their soft drinks, or raise the prices of those drinks, or both. Sugar consumption from those sources will fall. But it may well rise elsewhere. For many people, a chocolate bar and a fizzy drink are substitutes. If the fizzy drink gets more expensive, the chocolate bar is a tasty alternative for the sweet-toothed consumer. It has more fat in it, too.

It’s clear enough why the chancellor has opted for this approach. He wants to blame large companies, not voters, and hide the fact that ultimately consumers will pay the tax. A broad-based tax on sugar itself would have been simpler, braver and far more effective. But Mr Osborne wanted his Budget to leave voters with a sweeter taste in the mouth.

Written for and first published in the Financial Times.


17th of March, 2016

Multi-tasking: how to survive in the 21st century

Modern life now forces us to do a multitude of things at once — but can we? Should we?

Forget invisibility or flight: the superpower we all want is the ability to do several things at once. Unlike other superpowers, however, being able to multitask is now widely regarded as a basic requirement for employability. Some of us sport computers with multiple screens, to allow tweeting while trading pork bellies and frozen orange juice. Others make do with reading a Kindle while poking at a smartphone and glancing at a television in the corner with its two rows of scrolling subtitles. We think nothing of sending an email to a colleague to suggest a quick coffee break, because we can feel confident that the email will be read within minutes.

All this is simply the way the modern world works. Multitasking is like being able to read or add up, so fundamental that it is taken for granted. Doing one thing at a time is for losers — recall Lyndon Johnson’s often bowdlerised dismissal of Gerald Ford: “He can’t fart and chew gum at the same time.”

The rise of multitasking is fuelled by technology, of course, and by social change as well. Husbands and wives no longer specialise as breadwinners and homemakers; each must now do both. Work and play blur. Your friends can reach you on your work email account at 10 o’clock in the morning, while your boss can reach you on your mobile phone at 10 o’clock at night. You can do your weekly shop sitting at your desk and you can handle a work query in the queue at the supermarket.

This is good news in many ways — how wonderful to be able to get things done in what would once have been wasted time! How delightful the variety of it all is! No longer must we live in a monotonous, Taylorist world where we must painstakingly focus on repetitive tasks until we lose our minds.

And yet we are starting to realise that the blessings of a multitasking life are mixed. We feel overwhelmed by the sheer number of things we might plausibly be doing at any one time, and by the feeling that we are on call at any moment.

And we fret about the unearthly appetite of our children to do everything at once, flipping through homework while chatting on WhatsApp, listening to music and watching Game of Thrones. (According to a recent study by Sabrina Pabilonia of the US Bureau of Labor Statistics, for over half the time that high-school students spend doing homework, they are also listening to music, watching TV or otherwise multitasking. That trend is on the increase.) Can they really handle all these inputs at once? They seem to think so, despite various studies suggesting otherwise.

And so a backlash against multitasking has begun — a kind of Luddite self-help campaign. The poster child for uni-tasking was launched on the crowdfunding website Kickstarter in December 2014. For $499 — substantially more than a multifunctional laptop — “The Hemingwrite” computer promised a nice keyboard, a small e-ink screen and an automatic cloud back-up. You couldn’t email on the Hemingwrite. You couldn’t fool around on YouTube, and you couldn’t read the news. All you could do was type. The Hemingwrite campaign raised over a third of a million dollars.

The Hemingwrite (now rebranded the Freewrite) represents an increasingly popular response to the multitasking problem: abstinence. Programs such as Freedom and Self-Control are now available to disable your browser for a preset period of time. The popular blogging platform WordPress offers “distraction-free writing”. The Villa Stéphanie, a hotel in Baden-Baden, offers what has been branded the “ultimate luxury”: a small silver switch beside the hotel bed that will activate a wireless blocker and keep the internet and all its temptations away.

The battle lines have been drawn. On one side: the culture of the modern workplace, which demands that most of us should be open to interruption at any time. On the other, the uni-tasking refuseniks who insist that multitaskers are deluding themselves, and that focus is essential. Who is right?

The ‘cognitive cost’

There is ample evidence in favour of the proposition that we should focus on one thing at a time. Consider a study led by David Strayer, a psychologist at the University of Utah. In 2006, Strayer and his colleagues used a high-fidelity driving simulator to compare the performance of drivers who were chatting on a mobile phone to drivers who had drunk enough alcohol to be at the legal blood-alcohol limit in the US. Chatting drivers didn’t adopt the aggressive, risk-taking style of drunk drivers but they were unsafe in other ways. They took much longer to respond to events outside the car, and they failed to notice a lot of the visual cues around them. Strayer’s infamous conclusion: driving while using a mobile phone is as dangerous as driving while drunk.

Less famous was Strayer’s finding that it made no difference whether the driver was using a handheld or hands-free phone. The problem with talking while driving is not a shortage of hands. It is a shortage of mental bandwidth.

Yet this discovery has made little impression either on public opinion or on the law. In the United Kingdom, for example, it is an offence to use a hand-held phone while driving but perfectly legal if the phone is used hands-free. We’re happy to acknowledge that we only have two hands but refuse to admit that we only have one brain.

Another study by Strayer, David Sanbonmatsu and others, suggested that we are also poor judges of our ability to multitask. The subjects who reported doing a lot of multitasking were also the ones who performed poorly on tests of multitasking ability. They systematically overrated their ability to multitask and they displayed poor impulse control. In other words, wanting to multitask is a good sign that you should not be multitasking.

We may not immediately realise how multitasking is hampering us. The first time I took to Twitter to comment on a public event was during a televised prime-ministerial debate in 2010. The sense of buzz was fun; I could watch the candidates argue and the twitterati respond, compose my own 140-character profundities and see them being shared. I felt fully engaged with everything that was happening. Yet at the end of the debate I realised, to my surprise, that I couldn’t remember anything that Brown, Cameron and Clegg had said.

A study conducted at UCLA in 2006 suggests that my experience is not unusual. Three psychologists, Karin Foerde, Barbara Knowlton and Russell Poldrack, recruited students to look at a series of flashcards with symbols on them, and then to make predictions based on patterns they had recognised. Some of these prediction tasks were done in a multitasking environment, where the students also had to listen to low- and high-pitched tones and count the high-pitched ones. You might think that making predictions while also counting beeps was too much for the students to handle. It wasn’t. They were equally competent at spotting patterns with or without the note-counting task.

But here’s the catch: when the researchers then followed up by asking more abstract questions about the patterns, the cognitive cost of the multitasking became clear. The students struggled to answer questions about the predictions they’d made in the multitasking environment. They had successfully juggled both tasks in the moment — but they hadn’t learnt anything that they could apply in a different context.

That’s an unnerving discovery. When we are sending email in the middle of a tedious meeting, we may nevertheless feel that we’re taking in what is being said. A student may be confident that neither Snapchat nor the live football is preventing them taking in their revision notes. But the UCLA findings suggest that this feeling of understanding may be an illusion and that, later, we’ll find ourselves unable to remember much, or to apply our knowledge flexibly. So, multitasking can make us forgetful — one more way in which multitaskers are a little bit like drunks.

Early multitaskers

All this is unnerving, given that the modern world makes multitasking almost inescapable. But perhaps we shouldn’t worry too much. Long before multitasking became ubiquitous, it had a long and distinguished history.

In 1958, a young psychologist named Bernice Eiduson embarked on a long-term research project — so long-term, in fact, that Eiduson died before it was completed. Eiduson studied the working methods of 40 scientists, all men. She interviewed them periodically over two decades and put them through various psychological tests. Some of these scientists found their careers fizzling out, while others went on to great success. Four won Nobel Prizes and two others were widely regarded as serious Nobel contenders. Several more were invited to join the National Academy of Sciences.

After Eiduson died, some of her colleagues published an analysis of her work. These colleagues, Robert Root-Bernstein, Maurine Bernstein and Helen Garnier, wanted to understand what determined whether a scientist would have a long and productive career: a combination of genius and longevity.

There was no clue in the interviews or the psychological tests. But looking at the early publication record of these scientists — their first 100 published research papers — researchers discovered a pattern: the top scientists were constantly changing the focus of their research.

Over the course of these first 100 papers, the most productive scientists covered five different research areas and moved from one of these topics to another an average of 43 times. They would publish, and change the subject, publish again, and change the subject again. Since most scientific research takes an extended period of time, the subjects must have overlapped. The secret to a long and highly productive scientific career? It’s multitasking.

Charles Darwin thrived on spinning multiple plates. He began his first notebook on “transmutation of species” two decades before The Origin of Species was published. His A Biographical Sketch of an Infant was based on notes made after his son William was born; William was 37 by the time Darwin published it. Darwin spent nearly 20 years working on climbing and insectivorous plants. And Darwin published a learned book on earthworms in 1881, just before his death. He had been working on it for 44 years. When two psychologists, Howard Gruber and Sara Davis, studied Darwin and other celebrated artists and scientists, they concluded that such overlapping interests were common.

Another team of psychologists, led by Mihaly Csikszentmihalyi, interviewed almost 100 exceptionally creative people, from the jazz pianist Oscar Peterson to the science writer Stephen Jay Gould to the double Nobel laureate physicist John Bardeen. Csikszentmihalyi is famous for developing the idea of “flow”, the blissful state of being so absorbed in a challenge that one loses track of time and sets all distractions to one side. Yet every one of Csikszentmihalyi’s interviewees made a practice of keeping several projects bubbling away simultaneously.

Just internet addiction?

If the word “multitasking” can apply to both Darwin and a teenager with a serious Instagram habit, there is probably some benefit in defining our terms. There are at least four different things we might mean when we talk about multitasking. One is genuine multitasking: patting your head while rubbing your stomach; playing the piano and singing; farting while chewing gum. Genuine multitasking is possible, but at least one of the tasks needs to be so practised as to be done without thinking.

Then there’s the challenge of creating a presentation for your boss while also fielding phone calls for your boss and keeping an eye on email in case your boss wants you. This isn’t multitasking in the same sense. A better term is task switching, as our attention flits between the presentation, the telephone and the inbox. A great deal of what we call multitasking is in fact rapid task switching.

Task switching is often confused with a third, quite different activity — the guilty pleasure of disappearing down an unending click-hole of celebrity gossip and social media updates. There is a difference between the person who reads half a page of a journal article, then stops to write some notes about a possible future project, then goes back to the article — and someone who reads half a page of a journal article before clicking on bikini pictures for the rest of the morning. “What we’re often calling multitasking is in fact internet addiction,” says Shelley Carson, a psychologist and author of Your Creative Brain. “It’s a compulsive act, not an act of multitasking.”

A final kind of multitasking isn’t a way of getting things done but simply the condition of having a lot of things to do. The car needs to be taken in for a service. Your tooth is hurting. The nanny can’t pick up the kids from school today. There’s a big sales meeting to prepare for tomorrow, and your tax return is due next week. There are so many things that have to be done, so many responsibilities to attend to. Having a lot of things to do is not the same as doing them all at once. It’s just life. And it is not necessarily a stumbling block to getting things done — as Bernice Eiduson discovered as she tracked scientists on their way to their Nobel Prizes.

The fight for focus

These four practices — multitasking, task switching, getting distracted and managing multiple projects — all fit under the label “multitasking”. This is not just because of a simple linguistic confusion. The versatile networked devices we use tend to blur the distinction, serving us as we move from task to task while also offering an unlimited buffet of distractions. But the different kinds of multitasking are linked in other ways too. In particular, the highly productive practice of having multiple projects invites the less-than-productive habit of rapid task switching.

To see why, consider a story that psychologists like to tell about a restaurant near Berlin University in the 1920s. (It is retold in Willpower, a book by Roy Baumeister and John Tierney.) The story has it that when a large group of academics descended upon the restaurant, the waiter stood and calmly nodded as each new item was added to their complicated order. He wrote nothing down, but when he returned with the food his memory had been flawless. The academics left, still talking about the prodigious feat; but when one of them hurried back to retrieve something he’d left behind, the waiter had no recollection of him. How could the waiter have suddenly become so absent-minded? “Very simple,” he said. “When the order has been completed, I forget it.”

One member of the Berlin school was a young experimental psychologist named Bluma Zeigarnik. Intrigued, she demonstrated that people have a better recollection of uncompleted tasks. This is called the “Zeigarnik effect”: when we leave things unfinished, we can’t quite let go of them mentally. Our subconscious keeps reminding us that the task needs attention.

The Zeigarnik effect may explain the connection between facing multiple responsibilities and indulging in rapid task switching. We flit from task to task to task because we can’t forget about all of the things that we haven’t yet finished. We flit from task to task to task because we’re trying to get the nagging voices in our head to shut up.

Of course, there is much to be said for “focus”. But there is much to be said for copperplate handwriting, too, and for having a butler. The world has moved on. There’s something appealing about the Hemingwrite and the hotel room that will make the internet go away, but also something futile.

It is probably not true that Facebook is all that stands between you and literary greatness. And in most office environments, the Hemingwrite is not the tool that will win you promotion. You are not Ernest Hemingway, and you do not get to simply ignore emails from your colleagues.

If focus is going to have a chance, it’s going to have to fight an asymmetric war. Focus can only survive if it can reach an accommodation with the demands of a multitasking world.

Loops and lists

The word “multitasking” wasn’t applied to humans until the 1990s, but it has been used to describe computers for half a century. According to the Oxford English Dictionary, it was first used in print in 1966, when the magazine Datamation described a computer capable of appearing to perform several operations at the same time.

Just as with humans, computers typically create the illusion of multitasking by switching tasks rapidly. Computers perform the switching more quickly, of course, and they don’t take 20 minutes to get back on track after an interruption.

Nor does a computer fret about what is not being done. While rotating a polygon and sending text to the printer, it feels no guilt that the mouse has been left unchecked for the past 16 milliseconds. The mouse’s time will come. Being a computer means never having to worry about the Zeigarnik effect.

Is there a lesson in this for distractible sacks of flesh like you and me? How can we keep a sense of control despite the incessant guilt of all the things we haven’t finished?

“Whenever you say to someone, ‘I’ll get back to you about that’, you just opened a loop in your brain,” says David Allen. Allen is the author of a cult productivity book called Getting Things Done. “That loop will keep spinning until you put a placeholder in a system you can trust.”

Modern life is always inviting us to open more of those loops. It isn’t necessarily that we have more work to do, but that we have more kinds of work that we ought to be doing at any given moment. Tasks now bleed into each other unforgivingly. Whatever we’re doing, we can’t escape the sense that perhaps we should be doing something else. It’s these overlapping possibilities that take the mental toll.

The principle behind Getting Things Done is simple: close the open loops. The details can become rather involved but the method is straightforward. For every single commitment you’ve made to yourself or to someone else, write down the very next thing you plan to do. Review your lists of next actions frequently enough to give you confidence that you won’t miss anything.

This method has a cult following, and practical experience suggests that many people find it enormously helpful — including me (see below). Only recently, however, did the psychologists E J Masicampo and Roy Baumeister find some academic evidence to explain why people find relief by using David Allen’s system. Masicampo and Baumeister found that you don’t need to complete a task to banish the Zeigarnik effect. Making a specific plan will do just as well. Write down your next action and you quiet that nagging voice at the back of your head. You are outsourcing your anxiety to a piece of paper.

A creative edge?

It is probably a wise idea to leave rapid task switching to the computers. Yet even frenetic flipping between Facebook, email and a document can have some benefits alongside the costs.

The psychologist Shelley Carson and her student Justin Moore recently recruited experimental subjects for a test of rapid task switching. Each subject was given a pair of tasks to do: crack a set of anagrams and read an article from an academic journal. These tasks were presented on a computer screen, and for half of the subjects they were presented sequentially — first solve the anagrams, then read the article. For the other half of the experimental group, the computer switched every two-and-a-half minutes between the anagrams and the journal article, forcing the subjects to change mental gears many times.

Unsurprisingly, task switching slowed the subjects down and scrambled their thinking. They solved fewer anagrams and performed poorly on a test of reading comprehension when forced to refocus every 150 seconds.

But the multitasking treatment did have a benefit. Subjects who had been task switching became more creative. To be specific, their scores on tests of “divergent” thinking improved. Such tests ask subjects to pour out multiple answers to odd questions. They might be asked to think of as many uses as possible for a rolling pin or to list all the consequences they could summon to mind of a world where everyone has three arms. Involuntary multitaskers produced a greater volume and variety of answers, and their answers were more original too.

“It seems that switching back and forth between tasks primed people for creativity,” says Carson, who is an adjunct professor at Harvard. The results of her work with Moore have not yet been published, and one might reasonably object that such tasks are trivial measures of creativity. Carson responds that scores on these laboratory tests of divergent thinking are correlated with substantial creative achievements such as publishing a novel, producing a professional stage show or creating an award-winning piece of visual art. Those who insist that great work can only be achieved through superhuman focus should think long and hard about this discovery.

Carson and colleagues have found an association between significant creative achievement and a trait psychologists term “low latent inhibition”. Latent inhibition is the filter that all mammals have that allows them to tune out apparently irrelevant stimuli. It would be crippling to listen to every conversation in the open-plan office and the hum of the air conditioning, while counting the number of people who walk past the office window. Latent inhibition is what saves us from having to do so. These subconscious filters let us walk through the world without being overwhelmed by all the different stimuli it hurls at us.

And yet people whose filters are a little bit porous have a big creative edge. Think on that, uni-taskers: while you busily try to focus on one thing at a time, the people who struggle to filter out the buzz of the world are being reviewed in The New Yorker.

“You’re letting more information into your cognitive workspace, and that information can be consciously or unconsciously combined,” says Carson. Two other psychologists, Holly White and Priti Shah, found a similar pattern for people suffering from attention deficit hyperactivity disorder (ADHD).

It would be wrong to romanticise potentially disabling conditions such as ADHD. All these studies were conducted on university students, people who had already demonstrated an ability to function well. But their conditions weren’t necessarily trivial — to participate in the White/Shah experiment, students had to have a clinical diagnosis of ADHD, meaning that their condition was troubling enough to prompt them to seek professional help.

It’s surprising to discover that being forced to switch tasks can make us more creative. It may be still more surprising to realise that in an age where we live under the threat of constant distraction, people who are particularly prone to being distracted are flourishing creatively.

Perhaps we shouldn’t be entirely surprised. It’s easier to think outside the box if the box is full of holes. And it’s also easier to think outside the box if you spend a lot of time clambering between different boxes. “The act of switching back and forth can grease the wheels of thought,” says John Kounios, a professor of psychology at Drexel University.

Kounios, who is co-author of The Eureka Factor, suggests that there are at least two other potentially creative mechanisms at play when we switch between tasks. One is that the new task can help us forget bad ideas. When solving a creative problem, it’s easy to become stuck because we think of an incorrect solution but simply can’t stop returning to it. Doing something totally new induces “fixation forgetting”, leaving us free to find the right answer.

Another is “opportunistic assimilation”. This is when the new task prompts us to think of a solution to the old one. The original Eureka moment is an example.

As the story has it, Archimedes was struggling with the task of determining whether a golden wreath truly was made of pure gold without damaging the ornate treasure. The solution was to determine whether the wreath had the same volume as a pure gold ingot with the same mass; this, in turn, could be done by submerging both the wreath and the ingot to see whether they displaced the same volume of water.
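
A minimal sketch of the arithmetic, in modern notation; the approximate densities are added here purely for illustration and are not anything Archimedes measured.

% Both objects have the same mass m, so each has volume V = m / \rho.
V_{\text{wreath}} = \frac{m}{\rho_{\text{wreath}}}, \qquad V_{\text{ingot}} = \frac{m}{\rho_{\text{gold}}}
% They displace equal volumes of water only if the wreath is pure gold:
V_{\text{wreath}} = V_{\text{ingot}} \iff \rho_{\text{wreath}} = \rho_{\text{gold}} \approx 19{,}300\ \text{kg/m}^3
% Mixing in silver (roughly 10{,}500 kg/m^3) lowers the wreath's density,
% so an adulterated wreath displaces more water than the ingot of equal mass.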

This insight, we are told, occurred to Archimedes while he was having a bath and watching the water level rise and fall as he lifted himself in and out. And if solving such a problem while having a bath isn’t multitasking, then what is?

Tim Harford is an FT columnist. His latest book is ‘The Undercover Economist Strikes Back’. Twitter: @TimHarford

Six ways to be a master of multitasking

1. Be mindful

“The ideal situation is to be able to multitask when multitasking is appropriate, and focus when focusing is important,” says psychologist Shelley Carson. Tom Chatfield, author of Live This Book, suggests making two lists, one for activities best done with internet access and one for activities best done offline. Connecting and disconnecting from the internet should be deliberate acts.

2. Write it down

The essence of David Allen’s Getting Things Done is to turn every vague guilty thought into a specific action, to write down all of the actions and to review them regularly. The point, says Allen, is to feel relaxed about what you’re doing — and about what you’ve decided not to do right now — confident that nothing will fall through the cracks.

3. Tame your smartphone

The smartphone is a great servant and a harsh master. Disable needless notifications — most people don’t need to know about incoming tweets and emails. Set up a filing system within your email so that when a message arrives that requires a proper keyboard to answer — ie 50 words or more — you can move that email out of your inbox and place it in a folder where it will be waiting for you when you fire up your computer.

4. Focus in short sprints

The “Pomodoro Technique” — named after a kitchen timer — alternates focusing for 25 minutes and breaking for five minutes, across two-hour sessions. Productivity guru Merlin Mann suggests an “email dash”, where you scan email and deal with urgent matters for a few minutes each hour. Such ideas let you focus intensely while also switching between projects several times a day.

5. Procrastinate to win

If you have several interesting projects on the go, you can procrastinate over one by working on another. (It worked for Charles Darwin.) A change is as good as a rest, they say — and as psychologist John Kounios explains, such task switching can also unlock new ideas.

6. Cross-fertilise

“Creative ideas come to people who are interdisciplinary, working across different organisational units or across many projects,” says author and research psychologist Keith Sawyer. (Appropriately, Sawyer is also a jazz pianist, a former management consultant and a sometime game designer for Atari.) Good ideas often come when your mind makes unexpected connections between different fields.

Tim Harford’s To-Do Lists

David Allen’s Getting Things Done system — or GTD — has reached the status of a religion among some productivity geeks. At its heart, it’s just a fancy to-do list, but it’s more powerful than a regular list because it’s comprehensive, specific and designed to prompt you when you need prompting. Here’s how I make the idea work for me.

Write everything down. I use Google Calendar for appointments and an electronic to-do list called Remember the Milk, plus an ad hoc daily list on paper. The details don’t matter. The principle is never to carry a mental commitment around in your head.

Make the list comprehensive. Mine currently has 151 items on it. (No, I don’t memorise the number. I just counted.)

Keep the list fresh. The system works its anxiety-reducing magic best if you trust your calendar and to-do list to remind you when you need reminding. I spend about 20 minutes once a week reviewing the list to note incoming deadlines and make sure the list is neither missing important commitments nor cluttered with stale projects. Review is vital — the more you trust your list, the more you use it. The more you use it, the more you trust it.

List by context as well as topic. It’s natural to list tasks by topic or project — everything associated with renovating the spare room, for instance, or next year’s annual away-day. I also list them by context (this is easy on an electronic list). Things I can do when on a plane; things I can only do when at the shops; things I need to talk about when I next see my boss.

Be specific about the next action. If you’re just writing down vague reminders, the to-do list will continue to provoke anxiety. Before you write down an ill-formed task, take the 15 seconds required to think about exactly what that task is.

Written for and first published at ft.com.

Other Writing

The myth of the robot job-ocalypse

“The number of jobs lost to more efficient machines is only part of the problem . . . In the past, new industries hired far more people than those they put out of business. But this is not true of many of today’s new industries.”
This sentiment, from Time magazine, dates from the early weeks of John Kennedy’s presidency. Yet it would slot nicely into many a contemporary political speech. Like any self-respecting remorseless killer robot from the future, our techno-anxiety just keeps coming back.
Arnold Schwarzenegger’s Terminator was science fiction — but so, too, is the idea that robots and software algorithms are guzzling jobs faster than they can be created. There is an astonishing mismatch between our fear of automation and the reality so far.
How can this be? The highways of Silicon Valley are sprinkled with self-driving cars. Visit the cinema, the supermarket or the bank and the most prominent staff you will see are the security guards, who are presumably there to prevent you stealing valuable machines. Your computer once contented itself with correcting your spelling; now it will translate your prose into Mandarin. Given all this, surely the robots must have stolen a job or two by now?
Of course, the answer is that automation has been destroying particular jobs in particular industries for a long time, which is why most westerners who weave clothes or cultivate and harvest crops by hand do so for fun. In the past that process made us richer.
The worry now is that, with computers making jobs redundant faster than we can generate new ones, the result is widespread unemployment, leaving a privileged class of robot-owning rentiers and highly paid workers with robot-compatible skills.
This idea is superficially plausible: we are surrounded by cheap, powerful computers; many people have lost their jobs in the past decade; and inequality has risen in the past 30 years.
But the theory can be put to a very simple test: how fast is productivity growing? The usual measure of productivity is output per hour worked — by a human. Robots can produce economic output without any hours of human labour at all, so a sudden onslaught of robot workers should cause a sudden acceleration in productivity.
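To make that test concrete, here is a rough back-of-the-envelope version; the numbers are invented purely for illustration and are not official statistics.
% Labour productivity is output divided by hours of human work:
\text{productivity} = \frac{\text{output } Y}{\text{human hours } H}
% If robots allowed 10 per cent more output with 10 per cent fewer human hours,
% measured productivity would jump by roughly 22 per cent in one step:
\frac{1.10\,Y}{0.90\,H} \approx 1.22 \times \frac{Y}{H}
% No jump of that kind appears in the US or UK data, which is the point that follows.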
Instead, productivity has been disappointing. In the US, labour productivity growth averaged an impressive 2.8 per cent per year from 1948 to 1973. The result was mass affluence rather than mass joblessness. Productivity then slumped for a generation and perked up in the late 1990s but has now sagged again. The picture is little better in the UK, where labour productivity is notoriously low compared with the other G7 leading economies, and it has been falling further behind since 2007.
Taking a 40-year perspective, the impact of this long productivity malaise on typical workers in rich countries is greater than that of the rise in inequality, or of the financial crisis of 2008. In an age peppered with economic disappointments, the worst has been the stubborn failure of the robots to take our jobs. Then why is so much commentary dedicated to the opposite view? Some of this is a simple error: it has been a tough decade, economically speaking, and it is easy to blame robots for woes that should be laid at the door of others, such as bankers, austerity enthusiasts and eurozone politicians.
It is also true that robotics is making impressive strides. Gill Pratt, a robotics expert, recently described a “Cambrian explosion” for robotics in the Journal of Economic Perspectives. While robots have done little to cause mass unemployment in the recent past, that may change in future.
Automation has also undoubtedly changed the shape of the job market — economist David Autor, writing in the same journal, documents a rise in demand for low-skilled jobs and highly skilled jobs, and a hollowing out of jobs in the middle. There are signs that the hollow is moving further and further up the spectrum of skills. The robots may not be taking our jobs, but they are certainly shuffling them around.
Yet Mr Autor also points to a striking statistic: private investment in computers and software in the US has been falling almost continuously for 15 years. That is hard to square with the story of a robotic job-ocalypse. Surely we should expect to see a surge in IT investment as all those machines are installed?
Instead, in the wake of the great recession, managers have noted an ample supply of cheap human labour and have done without the machines for now. Perhaps there is some vast underground dormitory somewhere, all steel and sparks and dormant androids. In a corner, a chromium-plated robo-hack is tapping away at a column lamenting the fact that the humans have taken all the robots’ jobs.

24th of August, 2015
Other Writing

George Osborne’s Magic Has Us Fooled, For Now

The chancellor can alter the law but cannot make costly workers worth hiring, says Tim Harford

He has mastered the art of misdirection as well as any stage magician. Everyone knew George Osborne was going to butcher the tax credit system on Wednesday, more or less halving the income at which they begin to fall away. But few expected him to announce a much higher minimum wage, and he did it with such an extravagant flourish that no one clearly remembers seeing him wield the cleaver.
For most of the poorer working households who qualify for tax credits, the combined effect of Mr Osborne’s Budget will be to make them worse off financially, and to push them away from the labour force by raising the effective rate of tax they pay.
Monique Ebell of the National Institute of Economic and Social Research reckons that a single mother working 30 hours a week at the minimum wage will be more than £1,000 a year worse off in two years’ time than she is today, despite the increase in the wage she must legally be paid.
That assumes, of course, that she keeps her job at all. This is the big question about the minimum wage: will it increase the earnings of low-paid workers, or price them out of the job market entirely? Should we expect to see these workers laid off and replaced with one-touch espresso machines, automatic checkouts and call-centre workers from India? The minimum wage is a delicate balance, and Mr Osborne has put his thumb on the scale.
The chancellor’s aim is to raise the minimum wage for those over 25 beyond £9 by 2020, from £6.50 today. That is dramatic, although not quite as dramatic as it first seems. Mr Osborne is setting the minimum wage where it might be if the economic crisis of 2008, and the long stagnation that followed, had never happened. He is hoping that employment will not suffer. He has a few other countries to look to as a precedent. France is one example, and it is not encouraging. Australia is a more hopeful case.
Mr Osborne’s move would once have been unthinkable from a Conservative chancellor. A quarter of a century ago, the conventional wisdom was that the idea of a minimum wage was absurd at any level. The logic of that position was simple enough. If the minimum wage was below the market-clearing wage — at which employees want to work the same number of hours that businesses want to hire them for — it would be irrelevant; if it was above, it would be worse than useless. Productive workers do not need a minimum wage because they will anyway be well paid. Less productive workers will be harmed by a minimum wage because employers would rather sack them than pay more than they are worth. One does not simply repeal the laws of supply and demand.
The world has moved on since then, and we know that while supply and demand matter, there is more to the labour market than the simple story above.
Some employers have market power and could pay higher wages if they were forced to; the higher minimum wage may simply redistribute from employers to low-paid employees. Another possibility is that if forced to pay higher wages, employers will invest in training and equipment to justify the labour expense. On this view, wages do not need to follow productivity; productivity can be led by wages.
A third explanation is that since many low wage jobs are in non-traded sectors such as retail, employers will simply put up prices, spreading the burden of the higher minimum wage across all consumers, and possibly reducing inequality.
There is also the argument that higher wages can encourage workers to show up more often and smile at the customers. This is true, but in most cases managers will have reached that conclusion by themselves without the need for a legal minimum.
A large body of empirical evidence suggests either that reasonable minimum wages do not destroy jobs at all, or that they do not destroy very many. The evidence is, of course, mixed and contested.
Much of it comes from the US and concerns the experience of teenagers, who, in the words of Alan Manning of the London School of Economics, “represent about 2 per cent of hours worked and 98 per cent of the studies of the minimum wage”. But it is clear enough that if modest increases in the minimum wage were disastrous for jobs, we would know that by now.
Whether the chancellor’s wage rise counts as “modest” is far more questionable. Professor Manning is guardedly optimistic: he thinks that the bold increase in the minimum wage is worth a try. But he is nervous, and so am I. We are at the edge of what the data can tell us. Mr Osborne is about to provide a fascinating new case study.
The best scenario is that the minimum wage helps to drive up British productivity, which has long languished. Employers invest in training, and rather than replacing workers with machines they give them the latest tools to do their jobs.
To the extent that productivity does not rise, employers absorb the costs or pass them on to consumers, equitably bearing the burden of giving hard-working people a decent wage.
A gloomier scenario seems more probable for some sectors, especially social care. The law of supply and demand turns out to matter after all. Faced with a sharp increase in the minimum wage that runs well ahead of what the Low Pay Commission has felt able to endorse, employers lay off many workers and reduce the hours of others. The welfare bill rises and — as so often in the past — it proves much harder to create jobs than to destroy them.
My own bet is somewhere in the middle. We will discover that Mr Osborne has pushed too hard, and that the minimum wage must be allowed to slip back again relative to median earnings. Some jobs will be lost, a lesson will be learned, and Mr Osborne’s political purposes will have been served. He will be hoping to have upgraded his own job to that of prime minister by then, which may be appropriate: he is a masterful politician but has never shown much grasp of economics.

Written for and first published at FT.com

13th of July, 2015
Other Writing

George Osborne’s gamble with jobs

My response to the Summer Budget went up on the FT website yesterday:

The sharp hike in the minimum wage in the Budget was a shock, but it was true to form for the UK chancellor of the exchequer: clever politics and dubious economics. It is telling that, where the Low Pay Commission used to consider the evidence and carefully balance the risks and rewards of a higher minimum wage, it must now recommend whatever George Osborne tells it to recommend.

The risk is clear: forced to pay up to £9 an hour, many businesses will find that they would rather find other ways to conduct their affairs — buying robots, offshoring key functions or moving overseas entirely. Bankruptcy is, of course, another option.

 

Mr Osborne’s gamble is that some businesses will simply eat the cost of higher wages (unlikely), or train their workers better and give them better tools so that the higher wages can be justified with higher productivity. It is possible this may work. It is enormously risky, and if the move is the wrong one it will be hard to reverse. The lesson of the 1980s is that, once lost, jobs are not easy to find again.

One might ask why the chancellor is willing to take such risks and to order the Low Pay Commission to do his bidding rather than be guided by evidence. The answer is not hard to find: Mr Osborne needs political cover. He is hacking away at the welfare state, notably the system of tax credits that was designed to encourage people to work rather than stay at home.

One can only guess what Milton Friedman, one of the inspirations behind the Thatcherite revolution, would have made of all this. In place of a carefully designed system of incentives for people to go to work, we are to be offered a wage increase set by a politician’s whim. Friedman knew that, even in the complex market for jobs, one does not simply abolish the laws of supply and demand.

Mr Osborne promised a Budget for working people but reality does not match that sound bite. The biggest tax break was for people inheriting expensive homes from their parents; and, while benefits for the working poor were being squeezed, those for pensioners were — as always — protected. Those who hoped for radical and logical tax reform have been bitterly disappointed.

As for working people, many will thank the chancellor as their wages rise. Others will become unaffordable and will lose their jobs. No doubt they will be scapegoated as scroungers in some future Budget speech. It is possible that Mr Osborne’s gamble will pay off. It is even possible, although unlikely, that it will pay off spectacularly. But it is reckless, and it is not his job that is on the line.

9th of July, 2015
Marginalia

Paying to Get Inside A Restaurant

Me, writing in May’s edition of The Atlantic:

The next time you’re fortunate enough to have dinner at a high-end restaurant, take a moment to enjoy not only the food and wine, but the frisson of a really good puzzle: Why do restaurants price things the way they do?

The markup on food makes sense. It takes time and skill to prepare the perfect cold-smoked salmon with balsamic-vinegar sorbet. But why are the wine prices so inflated? How hard can it be to pop open a bottle? Meanwhile, restroom access is free and unlimited for customers—a curious cross-subsidy.

Most mysterious of all: When reservations at hot new restaurants are so sought-after, why are they simply given away?

Why indeed? The full article is here and free to read online.

23rd of April, 2015