Tim Harford The Undercover Economist

Highlights

From the geeks who took over poker to the nuclear safety experts who want to prevent the next banking meltdown, these are my favourite long-form articles.


“Basic income is about the freedom to say no” – Rutger Bregman goes bouldering

“You have the instinct for it,” says Rutger Bregman, as I haul myself up an indoor climbing wall, nestled under the arches at Vauxhall station in London. “Shit, this is some talent!” he bellows, as I reach the top. I am inwardly delighted, even though I realise the praise is absurd: I have climbed about four metres and it’s a beginner’s route.

Bregman has suggested that we go bouldering together. Bouldering is a variety of rock climbing, done over short distances without safety ropes. Coming from Bregman, it seems a curious choice. The young Dutch historian and author is most famous for advocating a universal basic income — a regular cash grant to every single person, given unconditionally, to support them and provide a minimum standard of living, no matter what might go wrong.

His book, Utopia for Realists (UK) (US), has been a surprise bestseller, finding an audience eager for radical yet plausible policy ideas. Yet this celebrated advocate of unconditional handouts has chosen a sport that is all about self-reliance, and the ultimate departure from the principle of the safety net.

“There is a safety net — look!” says Bregman, pointing at the crash mats. I am not totally convinced. It doesn’t take long before I fall off — a combination of lack of skill and lack of fitness. As I peel myself off the mat, I realise the skin of one elbow has not remained with me.

Bregman’s contention is that a basic income would be the logical and perfectly affordable next step for a human race that has already taken huge leaps forward since before the industrial revolution, when, he writes, “nearly everyone, everywhere was still poor, hungry, dirty, afraid, stupid, sick and ugly”.

Bregman himself looks the picture of health, possibly because, at 29, he’s 15 years younger than me, and possibly because he’s been practising. He climbs twice a week; his T-shirt says Sterk, the name of his local bouldering hall in Utrecht. The word means “strong” in Dutch. My limited experience of rock climbing with my daughters has taught me that the legs take the strain. Bouldering, however, requires more upper-body strength.

“It’s more explosive,” I am told. And within 15 minutes, I’m done: the tendons below my wrist have given up and I am close to doing the same. The first three routes were exhilarating but without a rope, even the short climbs under the arches of VauxWall are starting to feel vertiginous. I’m losing my nerve as well as my strength. Bregman, on the other hand, is just getting started.

“How long is a typical session?” I ask. “Fifteen minutes or an hour or . . .” I can’t imagine anyone keeping this up for an hour.

“Two, two-and-a-half hours, if I have the time. Which I usually don’t,” he says. “If you warm up slowly, not like today, then you are at your peak after 45 minutes, and then you can keep that up for another 45 minutes.”

I spend much of the next hour watching Bregman solve one route after another. Sometimes he is dangling loosely off an overhang, as though resting in an invisible hammock. Sometimes he is moving laterally, his legs as high as his arms in a spiderlike scurry across the wall. Once, he hangs vertically as he works his way from left to right across a whimsical hold: a huge pair of pouting lips in one corner, just below the roof. He took up the sport three years ago. “I didn’t like to exercise at all. It’s so soul-destroying. But this is different.”

Bregman sees soul-destroying activity in much of modern life. Too many people, he says, are doing jobs they dislike or see as pointless, because they have no alternative. A basic income would liberate people: perhaps a minimum of €1,000 a month, given unconditionally as a cash grant, or through the tax system as a negative income tax.

Bregman has branded a basic income as “venture capital for the people”. A good line, I congratulate him. But what does it mean?

“OK, so basic income is all about the freedom to say no. That’s a privilege for the rich right now. With a basic income, you can say no to a job you don’t want to do. You can say no to a city in which you no longer want to live. You can say no to an employer who harasses you at work . . . that’s what real freedom looks like.”

Part of the impetus for a basic income has come from the sense that the robots are coming for our jobs — maybe not today, maybe not tomorrow, but soon. The venture capital firm Y Combinator is funding research into basic income, which seems to be a popular idea in Silicon Valley. But Bregman has no patience for the idea that technological change underpins the case for basic income.

“This is not about AI,” he insists. “You go back to the 1960s, and all the economists, all the philosophers, all the sociologists said we’re going to be working less and less and less and less and boredom is going to be the great challenge of the future. Didn’t happen . . . mostly because we have this ideological obsession with creating new jobs.”

Advocates of basic income have included two rather different Nobel laureates: the civil rights activist Martin Luther King Jr and the free-market evangelist Milton Friedman. The idea draws support from leftwingers who see an opportunity to redistribute and to give workers more bargaining power, and rightwingers who see an opportunity to dismantle paternalistic bureaucracies and empower ordinary people to make their own choices.

Bregman’s own sympathies seem to lie more with the left. At one point I tease him about the fact that he is in London on Valentine’s Day while his wife Maartje (a photographer and collaborator) is not. His response is spat out with a vehemence that might have been for comic effect, and might not: “You know that Valentine’s Day is just a capitalist scam to make you buy stuff you don’t need, to impress people you don’t like, right?”

But like Friedman, Bregman is clearly no fan of paternalistic bureaucracies. “Nowhere you’ll find as much support for something like basic income as [among] people who work for unemployment agencies,” he says. “In Holland I did a couple of lectures for those groups and they just give me a standing ovation when you say that we should abolish their jobs.”

It is the unconditional nature of the cash transfer that particularly appeals to him. With the transfer of money, no strings attached, there is a transfer of dignity, of bargaining power, and of responsibility. People have to make their own choices.

Again, I venture a connection between the basic income idea and bouldering: it’s a solo sport in which individuals need to find their own path, judging risks for themselves?

“If I would make this sport political, what I like about it is that it is competitive, but with yourself. So you’re not competing with anyone else, you’re just trying to do better yourself. And it’s a puzzle, every time it’s different. It’s a very creative sport, I guess.”

Utopia for Realists was itself a slowly assembled puzzle. The early drafts were articles in De Correspondent, an online crowdfunded news website founded by a Dutch pop-philosopher and columnist, Rob Wijnberg. “It’s an anarchist-idealist collective of journalists who don’t follow the news,” Bregman explains.

This may explain why Utopia for Realists is such a curiously enjoyable read. The title sums up Bregman’s belief that evidence-based pragmatism should not rule out provocative, ambitious ideas. The book is lively, well researched and full of unlikely pieces of history, from the Speenhamland system of poor relief, developed in England in 1795, to US President Richard Nixon’s flirtation with the idea of a basic income in 1969. (Bregman studied history rather than economics or politics.) It is also perfectly orthogonal to anything one might read in a newspaper. The book was published in Dutch by De Correspondent, built a following slowly, then was self-published in English.

“I was my own PR employee at that point. I was emailing everyone — no interviews, no reviews. Nothing.” Yet when Bregman emailed me out of the blue with the English translation and a request for my support, I was sufficiently impressed to endorse the book. Steven Pinker also gave it a glowing cover quote. And as Bregman and his colleagues were pondering giving up, the project suddenly took off. While not quite Fifty Shades of Grey, in a short space of time Utopia for Realists went from brave failed experiment to international bestseller, due to be published in 28 languages.

“Ideas always start on the fringe and then they move towards the centre,” he says. “Then I was invited to come to Davos this year. Like, yeah, that’s pretty much it, right? My first lectures about basic income were for anarchists with long hair, and smelly.”

Did he go to Davos? “No, I had to go to a book fair in Colombia.” He did, however, give a talk at TED last year, and seems aware of the irony of advocating the dismantling of an entire class of do-gooders.

“You’re talking for an audience of 1,500 people, many of them involved in kinds of charities. The CEO of Toms, for example, was there.” Toms donates a pair of shoes to a poor family for every pair purchased; Bregman isn’t impressed. “Buy one shoe, give one shoe. That is just a horrible, horrible idea.”

He got a huge round of applause when he proposed scrapping aid bureaucracies and replacing them with direct cash transfers. The rapturous reception struck him as odd. “I was saying we should hand over the salaries of all these paternalistic bureaucrats and give them to the poor, who are the real experts on their own lives. And they were all clapping and laughing, and I was thinking on stage, ‘But I’m talking about you! It’s you!’”

It’s a good talk, I tell him. “I like to prepare for these things. I knew it off by heart three months before I went on stage.”

I press him on the details of the talk. He skips a little too lightly between the idea of replacing international development aid with direct cash transfers to poor people, and the idea of overhauling modern western welfare states to place unconditional cash payments at their heart. The two ideas are cousins, not identical twins, I suggest. Adding a dollar a day, no strings attached, to a non-existent social safety net might be transformative in rural India or Africa. A resident of London is going to want a little more than that before she willingly gives up her housing benefit. Bregman agrees: his focus now is on welfare reform.

Another question mark is over the evidence base for a basic income. Bregman mentions “dozens of experiments” but, arguably, there has never been a completely satisfactory randomised trial of a long-term basic income. (A literature review by the charity GiveDirectly counted six shorter-term randomised trials; policymakers should conduct many more.)

One promising episode — a four-year trial in Manitoba, Canada, in the 1970s — received little attention. When the economist Evelyn Forget managed to get hold of the mothballed archives in 2009, they were on the verge of being discarded. There is a new study in Kenya, funded by GiveDirectly. With 5,000 recipients getting a basic income for 12 years, that trial shows real ambition — but the income in question is just over $20 a month. This is unlikely to tell us much about reforming a European welfare state. Nor is a much-hyped but rather small trial in Finland, which will last just two years and is focused only on those already receiving unemployment benefits.

Other trials have been excitedly announced but have yet to begin, let alone conclude. We are still waiting for a study large and patient enough to tell us much about a basic income in a developed economy. So what are these “dozens of experiments”?

Bregman says that the experiments he has in mind are less evaluating a full basic income scheme, and more exploring the impact of cash transfers in development aid. That is indeed a well-studied area, although not quite the same thing. Those experiments provide encouragement for proponents of a basic income: households tend to put the money to good use, and reap long-term benefits.

By now, we’re talking over a coffee, my enfeebled hands thankfully strong enough to grip a mug. My final question is about one of his other ideas: dramatically liberalising immigration rules.

“Every utopian system is obviously grounded in the injustices of the present,” he says. “What’s the biggest injustice in the world right now? It’s pretty easy to see. It’s borders: apartheid on a global scale.”

But while basic income seems to be having a day in the sun, an end to passport control is hardly in tune with the Trumpian zeitgeist, is it? “Well that’s almost my problem with basic income right now. I get questions during lectures, people say, ‘Is this really a radical idea?’ So I’m like, I should move on. Because utopias are meant to make people angry.”

Fair enough: as in bouldering, so in utopian politics. Once you’ve solved one puzzle, it is time to move on to a new challenge.

 

 
Written for and first published in the Financial Times on 9 March 2018.

My book “Messy: How To Be Creative and Resilient in a Tidy-Minded World” is now available in paperback both in the US and the UK – or through your local bookshop.


Your handy postcard-sized guide to statistics

Statistics on a postcard

 

“The best financial advice for most people would fit on an index card.” That’s the gist of an offhand comment in 2013 by Harold Pollack, a professor at the University of Chicago. Pollack’s bluff was duly called, and he quickly rushed off to find an index card and scribble some bullet points — with respectable results.

When I heard about Pollack’s notion — he elaborated upon it in a 2016 book — I asked myself: would this work for statistics, too? There are some obvious parallels. In each case, common sense goes a surprisingly long way; in each case, dizzying numbers and impenetrable jargon loom; in each case, there are stubborn technical details that matter; and, in each case, there are people with a sharp incentive to lead us astray.

The case for everyday practical numeracy has never been more urgent. Statistical claims fill our newspapers and social media feeds, unfiltered by expert judgment and often designed as a political weapon. We do not necessarily trust the experts — or more precisely, we may have our own distinctive view of who counts as an expert and who does not.  Nor are we passive consumers of statistical propaganda; we are the medium through which the propaganda spreads. We are arbiters of what others will see: what we retweet, like or share online determines whether a claim goes viral or vanishes. If we fall for lies, we become unwittingly complicit in deceiving others.

On the bright side, we have more tools than ever to help weigh up what we see before we share it — if we are able and willing to use them. In the hope that someone might use it, I set out to write my own postcard-sized citizens’ guide to statistics. Here’s what I learnt.

 

Professor Pollack’s index card includes advice such as “Save 20 per cent of your money” and “Pay your credit card in full every month”. The author Michael Pollan offers dietary advice in even pithier form: “Eat Food. Not Too Much. Mostly Plants.” Quite so, but I still want a cheeseburger.  However good the advice Pollack and Pollan offer, it’s not always easy to take. The problem is not necessarily ignorance. Few people think that Coca-Cola is a healthy drink, or believe that credit cards let you borrow cheaply. Yet many of us fall into some form of temptation or other. That is partly because slick marketers are focused on selling us high-fructose corn syrup and easy credit. And it is partly because we are human beings with human frailties.

With this in mind, my statistical postcard begins with advice about emotion rather than logic. When you encounter a new statistical claim, observe your feelings. Yes, it sounds like a line from Star Wars, but we rarely believe anything because we’re compelled to do so by pure deduction or irrefutable evidence. We have feelings about many of the claims we might read — anything from “inequality is rising” to “chocolate prevents dementia”. If we don’t notice and pay attention to those feelings, we’re off to a shaky start. What sort of feelings? Defensiveness. Triumphalism. Righteous anger. Evangelical fervour. Or, when it comes to chocolate and dementia, relief.

It’s fine to have an emotional response to a chart or shocking statistic — but we should not ignore that emotion, or be led astray by it. There are certain claims that we rush to tell the world, others that we use to rally like-minded people, still others we refuse to believe. Our belief or disbelief in these claims is part of who we feel we are.

“We all process information consistent with our tribe,” says Dan Kahan, professor of law and psychology at Yale University.

In 2005, Charles Taber and Milton Lodge, political scientists at Stony Brook University, New York, conducted experiments in which subjects were invited to study arguments around hot political issues. Subjects showed a clear confirmation bias: they sought out testimony from like-minded organisations. For example, subjects who opposed gun control would tend to start by reading the views of the National Rifle Association.

Subjects also showed a disconfirmation bias: when the researchers presented them with certain arguments and invited comment, the subjects would quickly accept arguments with which they agreed, but devote considerable effort to disparaging opposing arguments.

Expertise is no defence against this emotional reaction; in fact, Taber and Lodge found that better-informed experimental subjects showed stronger biases. The more they knew, the more cognitive weapons they could aim at their opponents.

“So convenient a thing it is to be a reasonable creature,” commented Benjamin Franklin, “since it enables one to find or make a reason for everything one has a mind to do.”

This is why it’s important to face up to our feelings before we even begin to process a statistical claim. If we don’t at least acknowledge that we may be bringing some emotional baggage along with us, we have little chance of discerning what’s true. As the physicist Richard Feynman once commented, “You must not fool yourself — and you are the easiest person to fool.”

 

The second crucial piece of advice is to understand the claim. That seems obvious. But all too often we leap to disbelieve or believe (and repeat) a claim without pausing to ask whether we really understand what the claim is. To quote Douglas Adams’s philosophical supercomputer, Deep Thought, “Once you know what the question actually is, you’ll know what the answer means.”

For example, take the widely accepted claim that “inequality is rising”. It seems uncontroversial, and urgent. But what does it mean? Racial inequality? Gender inequality? Inequality of opportunity, of consumption, of educational attainment, of wealth? Within countries or across the globe? Even given a narrower claim, “inequality of income before taxes is rising” (and you should be asking yourself, since when?), there are several different ways to measure this.

One approach is to compare the income of people at the 90th percentile and the 10th percentile, but that tells us nothing about the super-rich, nor the ordinary people in the middle. An alternative is to examine the income share of the top 1 per cent — but this approach has the opposite weakness, telling us nothing about how the poorest fare relative to the majority.  There is no single right answer — nor should we assume that all the measures tell a similar story. In fact, there are many true statements that one can make about inequality. It may be worth figuring out which one is being made before retweeting it.
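To see how two reasonable measures can point in opposite directions, here is a minimal Python sketch using synthetic, made-up incomes (nothing here comes from real data): between two hypothetical years, the 90/10 ratio falls while the top 1 per cent share rises.

```python
import numpy as np

def p90_p10_ratio(incomes):
    p90, p10 = np.percentile(incomes, [90, 10])
    return p90 / p10

def top_1pct_share(incomes):
    incomes = np.sort(incomes)
    cutoff = int(len(incomes) * 0.99)
    return incomes[cutoff:].sum() / incomes.sum()

rng = np.random.default_rng(0)
year1 = rng.lognormal(mean=10, sigma=0.6, size=100_000)  # hypothetical incomes

# Year 2: compress proportional gaps below the very top (less 90/10 inequality),
# then boost the top 1 per cent of incomes by half (more concentration at the top).
year2 = np.sort(year1) ** 0.95
year2[int(len(year2) * 0.99):] *= 1.5

for label, incomes in [("year 1", year1), ("year 2", year2)]:
    print(label,
          "90/10 ratio:", round(p90_p10_ratio(incomes), 2),
          "top 1% share:", f"{100 * top_1pct_share(incomes):.1f}%")
```

Both numbers are true statements about the same (invented) distribution; they simply answer different questions.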

Perhaps it is not surprising that a concept such as inequality turns out to have hidden depths. But the same holds true of more tangible subjects, such as “a nurse”. Are midwives nurses? Health visitors? Should two nurses working half-time count as one nurse? Claims over the staffing of the UK’s National Health Service have turned on such details.

All this can seem like pedantry — or worse, a cynical attempt to muddy the waters and suggest that you can prove anything with statistics. But there is little point in trying to evaluate whether a claim is true if one is unclear what the claim even means.

Imagine a study showing that kids who play violent video games are more likely to be violent in reality. Rebecca Goldin, a mathematician and director of the statistical literacy project STATS, points out that we should ask questions about concepts such as “play”, “violent video games” and “violent in reality”. Is Space Invaders a violent game? It involves shooting things, after all. And are we measuring a response to a questionnaire after 20 minutes’ play in a laboratory, or murderous tendencies in people who play 30 hours a week?

“Many studies won’t measure violence,” says Goldin. “They’ll measure something else such as aggressive behaviour.” Just like “inequality” or “nurse”, these seemingly common sense words hide a lot of wiggle room.

Two particular obstacles to our understanding are worth exploring in a little more detail. One is the question of causation. “Taller children have a higher reading age,” goes the headline. This may summarise the results of a careful study about nutrition and cognition. Or it may simply reflect the obvious point that eight-year-olds read better than four-year-olds — and are taller. Causation is philosophically and technically a knotty business but, for the casual consumer of statistics, the question is not so complicated: just ask whether a causal claim is being made, and whether it might be justified.

Returning to this study about violence and video games, we should ask: is this a causal relationship, tested in experimental conditions? Or is this a broad correlation, perhaps because the kind of thing that leads kids to violence also leads kids to violent video games? Without clarity on this point, we don’t really have anything but an empty headline.

We should never forget, either, that all statistics are a summary of a more complicated truth. For example, what’s happening to wages? With tens of millions of wage packets being paid every month, we can only ever summarise — but which summary? The average wage can be skewed by a small number of fat cats. The median wage tells us about the centre of the distribution but ignores everything else. Or we might look at the median increase in wages, which isn’t the same thing as the increase in the median wage — not at all. In a situation where the lowest and highest wages are increasing while the middle sags, it’s quite possible for the median pay rise to be healthy while median pay falls.
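A tiny worked example, with made-up wages for five workers, shows how the median pay rise can be healthy even as the median wage falls:

```python
import statistics

last_year = [10, 20, 30, 40, 50]   # hypothetical wages, in thousands of pounds
this_year = [12, 22, 25, 44, 55]   # low and high wages rise; the middle sags

rises = [new - old for old, new in zip(last_year, this_year)]  # [2, 2, -5, 4, 5]

print(statistics.median(last_year))  # 30
print(statistics.median(this_year))  # 25  -> the median wage has fallen
print(statistics.median(rises))      # 2   -> yet the median pay rise is +2
```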

Sir Andrew Dilnot, former chair of the UK Statistics Authority, warns that an average can never convey the whole of a complex story. “It’s like trying to see what’s in a room by peering through the keyhole,” he tells me.

In short, “you need to ask yourself what’s being left out,” says Mona Chalabi, data editor for The Guardian US. That applies to the obvious tricks, such as a vertical axis that’s been truncated to make small changes look big. But it also applies to the less obvious stuff — for example, why does a graph comparing the wages of African-Americans with those of white people not also include data on Hispanic or Asian-Americans? There is no shame in leaving something out. No chart, table or tweet can contain everything. But what is missing can matter.

 

Channel the spirit of film noir: get the backstory. Of all the statistical claims in the world, this particular stat fatale appeared in your newspaper or social media feed, dressed to impress. Why? Where did it come from? Why are you seeing it?

Sometimes the answer is little short of a conspiracy: a PR company wanted to sell ice cream, so paid a penny-ante academic to put together the “equation for the perfect summer afternoon”, pushed out a press release on a quiet news day, and won attention in a media environment hungry for clicks. Or a political donor slung a couple of million dollars at an ideologically sympathetic think-tank in the hope of manufacturing some talking points.

Just as often, the answer is innocent but unedifying: publication bias. A study confirming what we already knew — smoking causes cancer — is unlikely to make news. But a study with a surprising result — maybe smoking doesn’t cause cancer after all — is worth a headline. The new study may have been rigorously conducted but is probably wrong: one must weigh it up against decades of contrary evidence.

Publication bias is a big problem in academia. The surprising results get published, the follow-up studies finding no effect tend to appear in lesser journals if they appear at all. It is an even bigger problem in the media — and perhaps bigger yet in social media. Increasingly, we see a statistical claim because people like us thought it was worth a Like on Facebook.

David Spiegelhalter, president of the Royal Statistical Society, proposes what he calls the “Groucho principle”. Groucho Marx famously resigned from a club — if they’d accept him as a member, he reasoned, it couldn’t be much of a club. Spiegelhalter feels the same about many statistical claims that reach the headlines or the social media feed. He explains, “If it’s surprising or counter-intuitive enough to have been drawn to my attention, it is probably wrong.”

 

OK. You’ve noted your own emotions, checked the backstory and understood the claim being made. Now you need to put things in perspective. A few months ago, a horrified citizen asked me on Twitter whether it could be true that in the UK, seven million disposable coffee cups were thrown away every day.  I didn’t have an answer. (A quick internet search reveals countless repetitions of the claim, but no obvious source.) But I did have an alternative question: is that a big number? The population of the UK is 65 million. If one person in 10 used a disposable cup each day, that would do the job.
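The back-of-the-envelope check takes one line. The 65 million figure is the one quoted above; the one-in-ten figure is simply a guess chosen to test whether the claim is plausible.

```python
uk_population = 65_000_000
cups_per_person_per_day = 1 / 10   # if one person in ten used one disposable cup a day
print(uk_population * cups_per_person_per_day)   # 6,500,000: roughly seven million
```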

Many numbers mean little until we can compare them with a more familiar quantity. It is much more informative to know how many coffee cups a typical person discards than to know how many are thrown away by an entire country. And more useful still to know whether the cups are recycled (usually not, alas) or what proportion of the country’s waste stream is disposable coffee cups (not much, is my guess, but I may be wrong).

So we should ask: how big is the number compared with other things I might intuitively understand? How big is it compared with last year, or five years ago, or 30? It’s worth a look at the historical trend, if the data are available.

Finally, beware “statistical significance”. There are various technical objections to the term, some of which are important. But the simplest point to appreciate is that a number can be “statistically significant” while being of no practical importance. Particularly in the age of big data, it’s possible for an effect to clear this technical hurdle of statistical significance while being tiny.  One study was able to demonstrate that unborn children exposed to a heatwave while in the womb went on to earn less as adults. The finding was statistically significant. But the impact was trivial: $30 in lost income per year. Just because a finding is statistically robust does not mean it matters; the word “significance” obscures that.
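To illustrate the point, here is a small sketch with invented numbers loosely in the spirit of the heatwave study (not its actual data): given big enough samples, a $30-a-year difference in earnings sails past the conventional significance threshold.

```python
import math
from scipy import stats

effect = 30.0        # a $30-a-year difference in mean earnings (hypothetical)
sd = 10_000.0        # spread of individual earnings (hypothetical)
n = 5_000_000        # people in each group (hypothetical)

se = sd * math.sqrt(2 / n)           # standard error of the difference in means
z = effect / se
p = 2 * stats.norm.sf(z)             # two-sided p-value
print(f"z = {z:.1f}, p = {p:.1g}")   # z ~ 4.7, p ~ 2e-06: 'significant', yet trivial
```

The p-value says the difference is unlikely to be a fluke; it says nothing about whether $30 a year matters.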

 

In an age of computer-generated images of data clouds, some of the most charming data visualisations are hand-drawn doodles by the likes of Mona Chalabi and the cartoonist Randall Munroe. But there is more to these pictures than charm: Chalabi uses the wobble of her pen to remind us that most statistics have a margin of error. A computer plot can confer the illusion of precision on what may be a highly uncertain situation.

“It is better to be vaguely right than exactly wrong,” wrote Carveth Read in Logic (1898), and excessive precision can lead people astray. On the eve of the US presidential election in 2016, the political forecasting website FiveThirtyEight gave Donald Trump a 28.6 per cent chance of winning. In some ways that is impressive, because other forecasting models gave Trump barely any chance at all. But how could anyone justify the decimal point on such a forecast? No wonder many people missed the basic message, which was that Trump had a decent shot. “One in four” would have been a much more intuitive guide to the vagaries of forecasting.

Exaggerated precision has another cost: it makes numbers needlessly cumbersome to remember and to handle. So, embrace imprecision. The budget of the NHS in the UK is about £10bn a month. The national income of the United States is about $20tn a year. One can be much more precise about these things, but carrying the approximate numbers around in my head lets me judge pretty quickly when — say — a £50m spending boost or a $20bn tax cut is noteworthy, or a rounding error.

My favourite rule of thumb is that since there are 65 million people in the UK and people tend to live a bit longer than 65, the size of a typical cohort — everyone retiring or leaving school in a given year — will be nearly a million people. Yes, it’s a rough estimate — but vaguely right is often good enough.

 

Be curious. Curiosity is bad for cats, but good for stats. Curiosity is a cardinal virtue because it encourages us to work a little harder to understand what we are being told, and to enjoy the surprises along the way.

This is partly because almost any statistical statement raises questions: who claims this? Why? What does this number mean? What’s missing? We have to be willing — in the words of UK statistical regulator Ed Humpherson — to “go another click”. If a statistic is worth sharing, isn’t it worth understanding first? The digital age is full of informational snares — but it also makes it easier to look a little deeper before our minds snap shut on an answer.

While curiosity gives us the motivation to ask another question or go another click, it gives us something else, too: a willingness to change our minds. For many of the statistical claims that matter, we have already reached a conclusion. We already know what our tribe of right-thinking people believe about Brexit, gun control, vaccinations, climate change, inequality or nationalisation — and so it is natural to interpret any statistical claim as either a banner to wave, or a threat to avoid.

Curiosity can put us into a better frame of mind to engage with statistical surprises. If we treat them as mysteries to be resolved, we are more likely to spot statistical foul play, but we are also more open-minded when faced with rigorous new evidence.

In research with Asheley Landrum, Katie Carpenter, Laura Helft and Kathleen Hall Jamieson, Dan Kahan has discovered that people who are intrinsically curious about science — they exist across the political spectrum — tend to be less polarised in their response to questions about politically sensitive topics. We need to treat surprises as a mystery rather than a threat.  Isaac Asimov is thought to have said, “The most exciting phrase in science isn’t ‘Eureka!’, but ‘That’s funny…’” The quip points to an important truth: if we treat the open question as more interesting than the neat answer, we’re on the road to becoming wiser.

 

In the end, my postcard has 50-ish words and six commandments. Simple enough, I hope, for someone who is willing to make an honest effort to evaluate — even briefly — the statistical claims that appear in front of them. That willingness, I fear, is what is most in question.

“Hey, Bill, Bill, am I gonna check every statistic?” said Donald Trump, then presidential candidate, when challenged by Bill O’Reilly about a grotesque lie that he had retweeted about African-Americans and homicides. And Trump had a point — sort of. He should, of course, have got someone to check a statistic before lending his megaphone to a false and racist claim. We all know by now that he simply does not care.

But Trump’s excuse will have struck a chord with many, even those who are aghast at his contempt for accuracy (and much else). He recognised that we are all human. We don’t check everything; we can’t. Even if we had all the technical expertise in the world, there is no way that we would have the time.

My aim is more modest. I want to encourage us all to make the effort a little more often: to be open-minded rather than defensive; to ask simple questions about what things mean, where they come from and whether they would matter if they were true. And, above all, to show enough curiosity about the world to want to know the answers to some of these questions — not to win arguments, but because the world is a fascinating place.
Written for and first published in the Financial Times on 8 February 2018.

My recent book is “Fifty Inventions That Shaped The Modern Economy”. Grab yourself a copy in the US or in the UK (slightly different title) or through your local bookshop.


What We Get Wrong About Technology

Blade Runner (1982) is a magnificent film, but there’s something odd about it. The heroine, Rachael, seems to be a beautiful young woman. In reality, she’s a piece of technology — an organic robot designed by the Tyrell Corporation. She has a lifelike mind, imbued with memories extracted from a human being.  So sophisticated is Rachael that she is impossible to distinguish from a human without specialised equipment; she even believes herself to be human. Los Angeles police detective Rick Deckard knows otherwise; in Rachael, Deckard is faced with an artificial intelligence so beguiling, he finds himself falling in love. Yet when he wants to invite Rachael out for a drink, what does he do?

He calls her up from a payphone.

There is something revealing about the contrast between the two technologies — the biotech miracle that is Rachael, and the graffiti-scrawled videophone that Deckard uses to talk to her. It’s not simply that Blade Runner fumbled its futurism by failing to anticipate the smartphone. That’s a forgivable slip, and Blade Runner is hardly the only film to make it. It’s that, when asked to think about how new inventions might shape the future, our imaginations tend to leap to technologies that are sophisticated beyond comprehension. We readily imagine cracking the secrets of artificial life, and downloading and uploading a human mind. Yet when asked to picture how everyday life might look in a society sophisticated enough to build such biological androids, our imaginations falter. Blade Runner audiences found it perfectly plausible that LA would look much the same, beyond the acquisition of some hovercars and a touch of noir.

Now is a perplexing time to be thinking about how technology shapes us. Some economists, disappointed by slow growth in productivity, fear the glory days are behind us. “The economic revolution of 1870 to 1970 was unique in human history,” writes Robert Gordon in The Rise and Fall of American Growth (UK) (US). “The pace of innovation since 1970 has not been as broad or as deep.” Others believe that exponential growth in computing power is about to unlock something special. Economists Erik Brynjolfsson and Andrew McAfee write of “the second machine age” (UK) (US), while the World Economic Forum’s Klaus Schwab favours the term “fourth industrial revolution”, following the upheavals of steam, electricity and computers. This coming revolution will be built on advances in artificial intelligence, robotics, virtual reality, nanotech, biotech, neurotech and a variety of other fields currently exciting venture capitalists.

Forecasting the future of technology has always been an entertaining but fruitless game. Nothing looks more dated than yesterday’s edition of Tomorrow’s World. But history can teach us something useful: not to fixate on the idea of the next big thing, the isolated technological miracle that utterly transforms some part of economic life with barely a ripple elsewhere. Instead, when we try to imagine the future, the past offers two lessons. First, the most influential new technologies are often humble and cheap. Mere affordability often counts for more than the beguiling complexity of an organic robot such as Rachael. Second, new inventions do not appear in isolation, as Rachael and her fellow androids did. Instead, as we struggle to use them to their best advantage, they profoundly reshape the societies around us.

 

 

To understand how humble, cheap inventions have shaped today’s world, picture a Bible — specifically, a Gutenberg Bible from the 1450s. The dense black Latin script, packed into twin blocks, makes every page a thing of beauty to rival the calligraphy of the monks. Except, of course, these pages were printed using the revolutionary movable type printing press. Gutenberg developed durable metal type that could be fixed firmly to print hundreds of copies of a page, then reused to print something entirely different.  The Gutenberg press is almost universally considered to be one of humanity’s defining inventions. It gave us the Reformation, the spread of science, and mass culture from the novel to the newspaper. But it would have been a Rachael — an isolated technological miracle, admirable for its ingenuity but leaving barely a ripple on the wider world — had it not been for a cheap and humble invention that is far more easily and often overlooked: paper.

The printing press didn’t require paper for technical reasons, but for economic ones. Gutenberg also printed a few copies of his Bible on parchment, the animal-skin product that had long served the needs of European scribes. But parchment was expensive — 250 sheep were required for a single book. When hardly anyone could read or write, that had not much mattered. Paper had been invented 1,500 years earlier in China and long used in the Arabic world, where literacy was common. Yet it had taken centuries to spread to Christian Europe, because illiterate Europe no more needed a cheap writing surface than it needed a cheap metal to make crowns and sceptres.

Paper caught on only when a commercial class started to need an everyday writing surface for contracts and accounts. “If 11th-century Europe had little use for paper,” writes Mark Kurlansky in his book Paper (UK) (US), “13th-century Europe was hungry for it.” When paper was embraced in Europe, it became arguably the continent’s earliest heavy industry. Fast-flowing streams (first in Fabriano, Italy, and then across the continent) powered massive drop-hammers that pounded cotton rags, which were being broken down by the ammonia from urine. The paper mills of Europe reeked, as dirty garments were pulped in a bath of human piss.

Paper opened the way for printing. The kind of print run that might justify the expense of a printing press could not be produced on parchment; it would require literally hundreds of thousands of animal skins. It was only when it became possible to mass-produce paper that it made sense to search for a way to mass-produce writing too. Not that writing is the only use for paper. In his book Stuff Matters (UK) (US), Mark Miodownik points out that we use paper for everything from filtering tea and coffee to decorating our walls. Paper gives us milk cartons, cereal packets and corrugated cardboard boxes. It can be sandpaper, wrapping paper or greaseproof paper. In quilted, perforated form, paper is soft, absorbent and cheap enough to wipe, well, anything you want. Toilet paper seems a long way from the printing revolution. And it is easily overlooked — as we occasionally discover in moments of inconvenience. But many world-changing inventions hide in plain sight in much the same way — too cheap to remark on, even as they quietly reorder everything. We might call this the “toilet-paper principle”.

 

 

It’s not hard to find examples of the toilet-paper principle, once you start to look. The American west was reshaped by the invention of barbed wire, which was marketed by the great salesman John Warne Gates with the slogan: “Lighter than air, stronger than whiskey, cheaper than dust.” Barbed wire enabled settlers to fence in vast areas of prairie cheaply. Joseph Glidden patented it in 1874; just six years later, his factory produced enough wire annually to circle the world 10 times over. Barbed wire’s only advantage over wooden fencing was its cost but that was quite sufficient to cage the wild west, where the simple invention prevented free-roaming bison and cowboys’ herds of cattle from trampling crops.  Once settlers could assert control over their land, they had the incentive to invest in and improve it. Without barbed wire, the American economy — and the trajectory of 20th-century history — might have looked very different.

There’s a similar story to be told about the global energy system. The Rachael of the energy world — the this-changes-everything invention, the stuff of dreams — is nuclear fusion. If we perfect this mind-bendingly complex technology, we might safely harvest almost limitless energy by fusing variants of hydrogen. It could happen: in France, the ITER fusion reactor is scheduled to be fully operational in 2035 at a cost of at least $20bn. If it works, it will achieve temperatures of 200 million degrees Celsius — yet will still only be an experimental plant, producing less power than a coal-fired plant, and only in 20-minute bursts. Meanwhile, cheap-and-cheerful solar power is quietly leading a very different energy revolution. Break-even costs of solar electricity have fallen by two-thirds in the past seven years, to levels barely more than those of natural gas plants. But this plunge has been driven less by any great technological breakthrough than by the humble methods familiar to anyone who shops at Ikea: simple modular products that have been manufactured at scale and that snap together quickly on site.

The problem with solar power is that the sun doesn’t always shine. And the solution that’s emerging is another cheap-and-cheerful, familiar technology: the battery. Lithium-ion batteries to store solar energy are becoming increasingly commonplace, and mass-market electric cars would represent a large battery on every driveway. Several giant factories are under construction, most notably a Tesla factory that promises to manufacture 35GWh worth of batteries each year by 2020; that is more than the entire global production of batteries in 2013. Battery prices have fallen as quickly as those of solar panels. Such Ikea-fication is a classic instance of toilet-paper technology: the same old stuff, only cheaper.

Perhaps the most famous instance of the toilet-paper principle is a corrugated steel box, 8ft wide, 8.5ft high and 40ft long. Since the shipping container system was introduced, world merchandise trade (the average of imports and exports) has expanded from about 10 per cent of world GDP in the late 1950s to more than 20 per cent today. We now take for granted that when we visit the shops, we’ll be surrounded by products from all over the globe, from Spanish tomatoes to Australian wine to Korean mobile phones.

“The standard container has all the romance of a tin can,” says historian Marc Levinson in his book The Box (UK) (US). Yet this simple no-frills system for moving things around has been a force for globalisation more powerful than the World Trade Organisation. Before the shipping container was introduced, a typical transatlantic cargo ship might contain 200,000 separate items, comprising many hundreds of different shipments, from food to letters to heavy machinery. Hauling and loading this cornucopia from the dockside, then packing it into the tightest corners of the hull, required skill, strength and bravery from the longshoremen, who would work on a single ship for days at a time. The container shipping system changed all that.

Loading and unloading a container ship is a gigantic ballet of steel cranes, choreographed by the computers that keep the vessel balanced and track each container through a global logistical system. But the fundamental technology that underpins it all could hardly be simpler. The shipping container is a 1950s invention using 1850s know-how. Since it was cheap, it worked. The container was a simple enough idea, and the man who masterminded its rise, Malcom McLean, could scarcely be described as an inventor. He was an entrepreneur who dreamed big, took bold risks, pinched pennies and deftly negotiated with regulators, port authorities and the unions.

McLean’s real achievement was in changing the system that surrounded his box: the way that ships, trucks and ports were designed. It takes a visionary to see how toilet-paper inventions can totally reshape systems; it’s easier for our limited imaginations to slot Rachael-like inventions into existing systems.  If nuclear fusion works, it neatly replaces coal, gas and nuclear fission in our familiar conception of the grid: providers make electricity, and sell it to us. Solar power and batteries are much more challenging. They’re quietly turning electricity companies into something closer to Uber or Airbnb — a platform connecting millions of small-scale providers and consumers of electricity, constantly balancing demand and supply.

 

 

Some technologies are truly revolutionary. They transcend the simple pragmatism of paper or barbed wire to produce effects that would have seemed miraculous to earlier generations. But they take time to reshape the economic systems around us — much more time than you might expect. No discovery fits that description more aptly than electricity, barely comprehended at the beginning of the 19th century but harnessed and commodified by its end. Usable light bulbs had appeared in the late 1870s, courtesy of Thomas Edison and Joseph Swan. In 1881, Edison built electricity-generating stations in New York and London and he began selling electricity as a commodity within a year. The first electric motors were used to drive manufacturing machinery a year after that. Yet the history of electricity in manufacturing poses a puzzle. Poised to take off in the late 1800s, electricity flopped as a source of mechanical power with almost no impact at all on 19th-century manufacturing. By 1900, electric motors were providing less than 5 per cent of mechanical drive power in American factories. Despite the best efforts of Edison, Nikola Tesla and George Westinghouse, manufacturing was still in the age of steam.

Productivity finally surged in US manufacturing only in the 1920s. The reason for the 30-year delay? The new electric motors only worked well when everything else changed too. Steam-powered factories had delivered power through awe-inspiring driveshafts, secondary shafts, belts, belt towers, and thousands of drip-oilers. The early efforts to introduce electricity merely replaced the single huge engine with a similarly large electric motor. Results were disappointing.

As the economic historian Paul David has argued, electricity triumphed only when factories themselves were reconfigured. The driveshafts were replaced by wires, the huge steam engine by dozens of small motors. Factories spread out; there was natural light. Stripped of the driveshafts, the ceilings could be used to support pulleys and cranes. Workers had responsibility for their own machines; they needed better training and better pay. The electric motor was a wonderful invention, once we changed all the everyday details that surrounded it.

David suggested in 1990 that what was true of electric motors might also prove true of computers: that we had yet to see the full economic benefits because we had yet to work out how to reshape our economy to take advantage of them. Later research by economists Erik Brynjolfsson and Lorin Hitt backed up the idea: they found that companies that had merely invested in computers in the 1990s had seen few benefits, but those that had also reorganised — decentralising, outsourcing and customising their products — had seen productivity soar.

Overall, the productivity statistics have yet to display anything like a 1920s breakthrough. In that respect we are still waiting for David’s suggestion to bear fruit. But in other ways, he was proved right almost immediately. People were beginning to figure out new ways to use computers and, in August 1991, Tim Berners-Lee posted his code for the world wide web on the internet so that others could download it and start to tinker. It was another cheap and unassuming technology, and it unlocked the potential of the older and grander internet itself.

 

 

If the fourth industrial revolution delivers on its promise, what lies ahead? Super-intelligent AI, perhaps? Killer robots? Telepathy: Elon Musk’s company, Neuralink, is on the case. Nanobots that live in our blood, zapping tumours? Perhaps, finally, Rachael? The toilet-paper principle suggests that we should be paying as much attention to the cheapest technologies as to the most sophisticated. One candidate: cheap sensors and cheap internet connections. There are multiple sensors in every smartphone, but increasingly they’re everywhere, from jet engines to the soil of Californian almond farms — spotting patterns, fixing problems and eking out efficiency gains. They are also a potential privacy and security nightmare, as we’re dimly starting to realise — from hackable pacemakers to botnets comprised of printers to, inevitably, internet-enabled sex toys that leak the most intimate data imaginable. Both the potential and the pitfalls are spectacular.

Whatever the technologies of the future turn out to be, they are likely to demand that, like the factories of the early 20th century, we change to accommodate them. Genuinely revolutionary inventions live up to their name: they change almost everything, and such transformations are by their nature hard to predict. One clarifying idea has been proposed by economists Daron Acemoglu and David Autor. They argue that when we study the impact of technology on the workplace, we should view work in bite-sized chunks — tasks rather than jobs.

For example, running a supermarket involves many tasks — stacking the shelves, collecting money from customers, making change, and preventing shoplifters. Automation has had a big impact on supermarkets, but not because the machines have simply replaced human jobs. Instead, they have replaced tasks done by humans, generally the tasks that could be most easily codified. The barcode turned stocktaking from a human task into one performed by computers. (It is another toilet-paper invention, cheap and ubiquitous, and one that made little difference until retail formats and supply chains were reshaped to take advantage.)

A task-based analysis of labour and automation suggests that jobs themselves aren’t going away any time soon — and that distinctively human skills will be at a premium. When humans and computers work together, says Autor, the computers handle the “routine, codifiable tasks” while amplifying the capabilities of the humans, such as “problem-solving skills, adaptability and creativity”. But there are also signs that new technologies have polarised the labour market, with more demand for both the high-end skills and the low-end ones, and a hollowing out in the middle. If human skills are now so valuable, that low-end growth seems like a puzzle — but the truth is that many distinctively human skills are not at the high end. While Jane Austen, Albert Einstein and Pablo Picasso exhibited human skills, so does the hotel maid who scrubs the toilet and changes the bed. We’re human by virtue not just of our brains, but our sharp eyes and clever fingers.

So one invention I’m keen to observe is the “Jennifer unit”, made by a company called Lucas Systems. Jennifer and the many other programmes like her are examples of a “voice-directed application” — just software and a simple, inexpensive earpiece. Such systems have become part of life for warehouse workers: a voice in their ear or instructions on a screen tell them where to go and what to do, down to the fine details. If 13 items must be collected from a shelf, Jennifer will tell the human worker to pick five, then five, then three. “Pick 13” would lead to mistakes. That makes sense. Computers are good at counting and scheduling. Humans are good at picking things off shelves. Why not unbundle the task and give the conscious thinking to the computer, and the mindless grabbing to the human? Like paper, Jennifer is inexpensive and easy to overlook. And like the electric dynamo, the technologies in Jennifer are having an impact because they enable managers to reshape the workplace. Science fiction has taught us to fear superhuman robots such as Rachael; perhaps we should be more afraid of Jennifer.
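The underlying logic is simple enough to sketch. The snippet below is purely illustrative and is not Lucas Systems’ code (their real software is far more sophisticated): the computer does the counting and hands the human the picks in chunks small enough to count at a glance.

```python
def pick_instructions(total_items, chunk_size=5):
    """Split a pick into chunks small enough for a human to count reliably."""
    chunks = []
    while total_items > 0:
        take = min(chunk_size, total_items)
        chunks.append(take)
        total_items -= take
    return chunks

print(pick_instructions(13))   # [5, 5, 3]: "pick five, then five, then three"
```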

 

 
Written for and first published in the FT Magazine on 8 July 2017.

My new book is “Fifty Things That Made The Modern Economy” – now out! Grab yourself a copy in the US (slightly different title) or in the UK or through your local bookshop.


The Problem With Facts

 

1.

Just before Christmas 1953, the bosses of America’s leading tobacco companies met John Hill, the founder and chief executive of one of America’s leading public relations firms, Hill & Knowlton. Despite the impressive surroundings — the Plaza Hotel, overlooking Central Park in New York — the mood was one of crisis.

Scientists were publishing solid evidence of a link between smoking and cancer. From the viewpoint of Big Tobacco, more worrying was that the world’s most read publication, The Reader’s Digest, had already reported on this evidence in a 1952 article, “Cancer by the Carton”. The journalist Alistair Cooke, writing in 1954, predicted that the publication of the next big scientific study into smoking and cancer might finish off the industry.

It did not. PR guru John Hill had a plan — and the plan, with hindsight, proved tremendously effective. Despite the fact that its product was addictive and deadly, the tobacco industry was able for decades to fend off regulation, litigation and the idea in the minds of many smokers that its products were fatal.

So successful was Big Tobacco in postponing that day of reckoning that their tactics have been widely imitated ever since. They have also inspired a thriving corner of academia exploring how the trick was achieved. In 1995, Robert Proctor, a historian at Stanford University who has studied the tobacco case closely, coined the word “agnotology”. This is the study of how ignorance is deliberately produced; the entire field was started by Proctor’s observation of the tobacco industry. The facts about smoking — indisputable facts, from unquestionable sources — did not carry the day. The indisputable facts were disputed. The unquestionable sources were questioned. Facts, it turns out, are important, but facts are not enough to win this kind of argument.

 

2.

Agnotology has never been more important. “We live in a golden age of ignorance,” says Proctor today. “And Trump and Brexit are part of that.”

In the UK’s EU referendum, the Leave side pushed the false claim that the UK sent £350m a week to the EU. It is hard to think of a previous example in modern western politics of a campaign leading with a transparent untruth, maintaining it when refuted by independent experts, and going on to triumph anyway. That performance was soon to be eclipsed by Donald Trump, who offered wave upon shameless wave of demonstrable falsehood, only to be rewarded with the presidency. The Oxford Dictionaries declared “post-truth” the word of 2016. Facts just didn’t seem to matter any more.

The instinctive reaction from those of us who still care about the truth — journalists, academics and many ordinary citizens — has been to double down on the facts. Fact-checking organisations, such as Full Fact in the UK and PolitiFact in the US, evaluate prominent claims by politicians and journalists. I should confess a personal bias: I have served as a fact checker myself on the BBC radio programme More or Less, and I often rely on fact-checking websites. They judge what’s true rather than faithfully reporting both sides as a traditional journalist would. Public, transparent fact checking has become such a feature of today’s political reporting that it’s easy to forget it’s barely a decade old.

Mainstream journalists, too, are starting to embrace the idea that lies or errors should be prominently identified. Consider a story on the NPR website about Donald Trump’s speech to the CIA in January: “He falsely denied that he had ever criticised the agency, falsely inflated the crowd size at his inauguration on Friday . . . ” It’s a bracing departure from the norms of American journalism, but then President Trump has been a bracing departure from the norms of American politics.

Facebook has also drafted in the fact checkers, announcing a crackdown on the “fake news” stories that had become prominent on the network after the election. Facebook now allows users to report hoaxes. The site will send questionable headlines to independent fact checkers, flag discredited stories as “disputed”, and perhaps downgrade them in the algorithm that decides what each user sees when visiting the site.

We need some agreement about facts or the situation is hopeless. And yet: will this sudden focus on facts actually lead to a more informed electorate, better decisions, a renewed respect for the truth? The history of tobacco suggests not. The link between cigarettes and cancer was supported by the world’s leading medical scientists and, in 1964, the US surgeon general himself. The story was covered by well-trained journalists committed to the values of objectivity. Yet the tobacco lobbyists ran rings round them.

In the 1950s and 1960s, journalists had an excuse for their stumbles: the tobacco industry’s tactics were clever, complex and new. First, the industry appeared to engage, promising high-quality research into the issue. The public were assured that the best people were on the case. The second stage was to complicate the question and sow doubt: lung cancer might have any number of causes, after all. And wasn’t lung cancer, not cigarettes, what really mattered? Stage three was to undermine serious research and expertise. Autopsy reports would be dismissed as anecdotal, epidemiological work as merely statistical, and animal studies as irrelevant. Finally came normalisation: the industry would point out that the tobacco-cancer story was stale news. Couldn’t journalists find something new and interesting to say?

Such tactics are now well documented — and researchers have carefully examined the psychological tendencies they exploited. So we should be able to spot their re-emergence on the political battlefield.

“It’s as if the president’s team were using the tobacco industry’s playbook,” says Jon Christensen, a journalist turned professor at the University of California, Los Angeles, who wrote a notable study in 2008 of the way the tobacco industry tugged on the strings of journalistic tradition.

One infamous internal memo from the Brown & Williamson tobacco company, typed up in the summer of 1969, sets out the thinking very clearly: “Doubt is our product.” Why? Because doubt “is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing a controversy.” Big Tobacco’s mantra: keep the controversy alive.

Doubt is usually not hard to produce, and facts alone aren’t enough to dispel it. We should have learnt this lesson already; now we’re going to have to learn it all over again.

 

3.

Tempting as it is to fight lies with facts, there are three problems with that strategy. The first is that a simple untruth can beat off a complicated set of facts simply by being easier to understand and remember. When doubt prevails, people will often end up believing whatever sticks in the mind. In 1994, psychologists Hollyn Johnson and Colleen Seifert conducted an experiment in which people read an account of an explosive warehouse fire. The account mentioned petrol cans and paint but later explained that petrol and paint hadn’t been present at the scene after all. The experimental subjects, tested on their comprehension, recalled that paint wasn’t actually there. But when asked to explain facts about the fire (“why so much smoke?”), they would mention the paint. Lacking an alternative explanation, they fell back on a claim they had already acknowledged was wrong. Once we’ve heard an untrue claim, we can’t simply unhear it.

This should warn us not to let lie-and-rebuttal take over the news cycle. Several studies have shown that repeating a false claim, even in the context of debunking that claim, can make it stick. The myth-busting seems to work but then our memories fade and we remember only the myth. The myth, after all, was the thing that kept being repeated. In trying to dispel the falsehood, the endless rebuttals simply make the enchantment stronger.

With this in mind, consider the Leave campaign’s infamous bus-mounted claim: “We send the EU £350m a week.” Simple. Memorable. False. But how to rebut it? A typical effort from The Guardian newspaper was headlined, “Why Vote Leave’s £350m weekly EU cost claim is wrong”, repeating the claim before devoting hundreds of words to gnarly details and the dictionary definition of the word “send”. This sort of fact-checking article is invaluable to a fellow journalist who needs the issues set out and hyperlinked. But for an ordinary voter, the likely message would be: “You can’t trust politicians but we do seem to send a lot of money to the EU.” Doubt suited the Leave campaign just fine.

This is an inbuilt vulnerability of the fact-checking trade. Fact checkers are right to be particular, to cover all the details and to show their working. But that is also why fact-checking can only ever be part of ensuring that the truth is heard.

Andrew Lilico, a thoughtful proponent of leaving the EU, told me during the campaign that he wished the bus had displayed a more defensible figure, such as £240m. But Lilico now acknowledges that the false claim was the more effective one. “In cynical campaigning terms, the use of the £350m figure was perfect,” he says. “It created a trap that Remain campaigners kept insisting on jumping into again and again and again.”

Quite so. But not just Remain campaigners — fact-checking journalists too, myself included. The false claim was vastly more powerful than a true one would have been, not because it was bigger, but because everybody kept talking about it.

Proctor, the tobacco industry historian turned agnotologist, warns of a similar effect in the US: “Fact checkers can become Trump’s poodle, running around like an errand boy checking someone else’s facts. If all your time is [spent] checking someone else’s facts, then what are you doing?”

 

4.

There’s a second reason why facts don’t seem to have the traction that one might hope. Facts can be boring. The world is full of things to pay attention to, from reality TV to your argumentative children, from a friend’s Instagram to a tax bill. Why bother with anything so tedious as facts?

Last year, three researchers — Seth Flaxman, Sharad Goel and Justin Rao — published a study of how people read news online. The study was, on the face of it, an inquiry into the polarisation of news sources. The researchers began with data from 1.2 million internet users but ended up examining only 50,000. Why? Because only 4 per cent of the sample read enough serious news to be worth including in such a study. (The hurdle was 10 articles and two opinion pieces over three months.) Many commentators worry that we’re segregating ourselves in ideological bubbles, exposed only to the views of those who think the same way we do. There’s something in that concern. But for 96 per cent of these web surfers the bubble that mattered wasn’t liberal or conservative, it was: “Don’t bother with the news.”

In the war of ideas, boredom and distraction are powerful weapons. A recent study of Chinese propaganda examined the tactics of the paid pro-government hacks (known as the “50 cent army”, after the amount contributors were alleged to be paid per post) who put comments on social media. The researchers, Gary King, Jennifer Pan and Margaret Roberts, conclude: “Almost none of the Chinese government’s 50c party posts engage in debate or argument of any kind . . . they seem to avoid controversial issues entirely . . . the strategic objective of the regime is to distract and redirect public attention.”

Trump, a reality TV star, knows the value of an entertaining distraction: simply pick a fight with Megyn Kelly, The New York Times or even Arnold Schwarzenegger. Isn’t that more eye-catching than a discussion of healthcare reform?

The tobacco industry also understood this point, although it took a more highbrow approach to generating distractions. “Do you know about Stanley Prusiner?” asks Proctor.

Prusiner is a neurologist. In 1972, he was a young researcher who’d just encountered a patient suffering from Creutzfeldt-Jakob disease. It was a dreadful degenerative condition then thought to be caused by a slow-acting virus. After many years of study, Prusiner concluded that the disease was caused instead, unprecedentedly, by a kind of rogue protein. The idea seemed absurd to most experts at the time, and Prusiner’s career began to founder. Promotions and research grants dried up. But Prusiner received a source of private-sector funding that enabled him to continue his work. He was eventually vindicated in the most spectacular way possible: with a Nobel Prize in Medicine in 1997. In his autobiographical essay on the Nobel Prize website, Prusiner thanked his private-sector benefactors for their “crucial” support: RJ Reynolds, maker of Camel cigarettes.

The tobacco industry was a generous source of research funds, and Prusiner wasn’t the only scientist to receive both tobacco funding and a Nobel Prize. Proctor reckons at least 10 Nobel laureates are in that position. To be clear, this wasn’t an attempt at bribery. In Proctor’s view, it was far more subtle. “The tobacco industry was the leading funder of research into genetics, viruses, immunology, air pollution,” says Proctor. Almost anything, in short, except tobacco. “It was a massive ‘distraction research’ project.” The funding helped position Big Tobacco as a public-spirited industry but Proctor considers its main purpose was to produce interesting new speculative science. Creutzfeldt-Jakob disease may be rare, but it was exciting news. Smoking-related diseases such as lung cancer and heart disease aren’t news at all.

The endgame of these distractions is that matters of vital importance become too boring to bother reporting. Proctor describes it as “the opposite of terrorism: trivialism”. Terrorism provokes a huge media reaction; smoking does not. Yet, according to the US Centers for Disease Control, smoking kills 480,000 Americans a year. This is more than 50 deaths an hour. Terrorists have rarely managed to kill that many Americans in an entire year. But the terrorists succeed in grabbing the headlines; the trivialists succeed in avoiding them.

Tobacco industry lobbyists became well-practised at persuading the media to withhold or downplay stories about the dangers of cigarettes. “That record is scratched,” they’d say. Hadn’t we heard such things before?

Experienced tobacco watchers now worry that Trump may achieve the same effect. In the end, will people simply start to yawn at the spectacle? Jon Christensen, at UCLA, says: “I think it’s the most frightening prospect.”

On the other hand, says Christensen, there is one saving grace. It is almost impossible for the US president not to be news. The tobacco lobby, like the Chinese government, proved highly adept at pointing the spotlight elsewhere. There are reasons to believe that will be difficult for Trump.

 

5.

There’s a final problem with trying to persuade people by giving them facts: the truth can feel threatening, and threatening people tends to backfire. “People respond in the opposite direction,” says Jason Reifler, a political scientist at Exeter University. This “backfire effect” is now the focus of several researchers, including Reifler and his colleague Brendan Nyhan of Dartmouth.

In one study, conducted in 2011, Nyhan, Reifler and others ran a randomised trial in which parents with young children were either shown or not shown scientific information debunking an imaginary but widely feared link between vaccines and autism. At first glance, the facts were persuasive: parents who saw the myth-busting science were less likely to believe that the vaccine could cause autism. But parents who were already wary of vaccines were actually less likely to say they’d vaccinate their children after being exposed to the facts — despite apparently believing those facts.

What’s going on? “People accept the corrective information but then resist in other ways,” says Reifler. A person who feels anxious about vaccination will subconsciously push back by summoning to mind all the other reasons why they feel vaccination is a bad idea. The fear of autism might recede, but all the other fears are stronger than before.

It’s easy to see how this might play out in a political campaign. Say you’re worried that the UK will soon be swamped by Turkish immigrants because a Brexit campaigner has told you (falsely) that Turkey will soon join the EU. A fact checker can explain that no Turkish entry is likely in the foreseeable future. Reifler’s research suggests that you’ll accept the narrow fact that Turkey is not about to join the EU. But you’ll also summon to mind all sorts of other anxieties: immigration, loss of control, the proximity of Turkey to Syria’s war and to Isis, terrorism and so on. The original lie has been disproved, yet its seductive magic lingers.

The problem here is that while we like to think of ourselves as rational beings, our rationality didn’t just evolve to solve practical problems, such as building an elephant trap, but to navigate social situations. We need to keep others on our side. Practical reasoning is often less about figuring out what’s true, and more about staying in the right tribe.

An early indicator of how tribal our logic can be was a study conducted in 1954 by Albert Hastorf, a psychologist at Dartmouth, and Hadley Cantril, his counterpart at Princeton. Hastorf and Cantril screened footage of a game of American football between the two college teams. It had been a rough game. One quarterback had suffered a broken leg. Hastorf and Cantril asked their students to tot up the fouls and assess their severity. The Dartmouth students tended to overlook Dartmouth fouls but were quick to pick up on the sins of the Princeton players. The Princeton students had the opposite inclination. They concluded that, despite being shown the same footage, the Dartmouth and Princeton students didn’t really see the same events. Each student had his own perception, closely shaped by his tribal loyalties. The title of the research paper was “They Saw a Game”.

A more recent study revisited the same idea in the context of political tribes. The researchers showed students footage of a demonstration and spun a yarn about what it was about. Some students were told it was a protest by gay-rights protesters outside an army recruitment office against the military’s (then) policy of “don’t ask, don’t tell”. Others were told that it was an anti-abortion protest in front of an abortion clinic.

Despite looking at exactly the same footage, the experimental subjects had sharply different views of what was happening — views that were shaped by their political loyalties. Liberal students were relaxed about the behaviour of people they thought were gay-rights protesters but worried about what the pro-life protesters were doing; conservative students took the opposite view. As with “They Saw a Game”, this disagreement was not about the general principles but about specifics: did the protesters scream at bystanders? Did they block access to the building? We see what we want to see — and we reject the facts that threaten our sense of who we are.

When we reach the conclusion that we want to reach, we’re engaging in “motivated reasoning”. Motivated reasoning was a powerful ally of the tobacco industry. If you’re addicted to a product, and many scientists tell you it’s deadly, but the tobacco lobby tells you that more research is needed, what would you like to believe? Christensen’s study of the tobacco public relations campaign revealed that the industry often got a sympathetic hearing in the press because many journalists were smokers. These journalists desperately wanted to believe their habit was benign, making them ideal messengers for the industry.

Even in a debate polluted by motivated reasoning, one might expect that facts will help. Not necessarily: when we hear facts that challenge us, we selectively amplify what suits us, ignore what does not, and reinterpret whatever we can. More facts mean more grist to the motivated reasoning mill. The French dramatist Molière once wrote: “A learned fool is more foolish than an ignorant one.” Modern social science agrees.

On a politically charged issue such as climate change, it feels as though providing accurate information about the science should bring people together. The opposite is true, says Dan Kahan, a law and psychology professor at Yale and one of the researchers on the study into perceptions of a political protest. Kahan writes: “Groups with opposing values often become more polarised, not less, when exposed to scientifically sound information.”

When people are seeking the truth, facts help. But when people are selectively reasoning about their political identity, the facts can backfire.

 

6.

All this adds up to a depressing picture for those of us who aren’t ready to live in a post-truth world. Facts, it seems, are toothless. Trying to refute a bold, memorable lie with a fiddly set of facts can often serve to reinforce the myth. Important truths are often stale and dull, and it is easy to manufacture new, more engaging claims. And giving people more facts can backfire, as those facts provoke a defensive reaction in someone who badly wants to stick to their existing world view. “This is dark stuff,” says Reifler. “We’re in a pretty scary and dark time.”

Is there an answer? Perhaps there is.

We know that scientific literacy can actually widen the gap between different political tribes on issues such as climate change — that is, well-informed liberals and well-informed conservatives are further apart in their views than liberals and conservatives who know little about the science. But a new research paper from Dan Kahan, Asheley Landrum, Katie Carpenter, Laura Helft and Kathleen Hall Jamieson explores the role not of scientific literacy but of scientific curiosity.

The researchers measured scientific curiosity by asking their experimental subjects a variety of questions about their hobbies and interests. The subjects were offered a choice of websites to read for a comprehension test. Some went for ESPN, some for Yahoo Finance, but those who chose Science were demonstrating scientific curiosity. Scientifically curious people were also happier to watch science documentaries than celebrity gossip TV shows. As one might expect, there’s a correlation between scientific knowledge and scientific curiosity, but the two measures are distinct.

What Kahan and his colleagues found, to their surprise, was that while politically motivated reasoning trumps scientific knowledge, “politically motivated reasoning . . . appears to be negated by science curiosity”. Scientifically literate people, remember, were more likely to be polarised in their answers to politically charged scientific questions. But scientifically curious people were not. Curiosity brought people together in a way that mere facts did not. The researchers muse that curious people have an extra reason to seek out the facts: “To experience the pleasure of contemplating surprising insights into how the world works.”

So how can we encourage curiosity? It’s hard to make banking reform or the reversibility of Article 50 more engaging than football, Game of Thrones or baking cakes. But it does seem to be what’s called for. “We need to bring people into the story, into the human narratives of science, to show people how science works,” says Christensen.

We journalists and policy wonks can’t force anyone to pay attention to the facts. We have to find a way to make people want to seek them out. Curiosity is the seed from which sensible democratic decisions can grow. It seems to be one of the few cures for politically motivated reasoning but it’s also, into the bargain, the cure for a society where most people just don’t pay attention to the news because they find it boring or confusing.

What we need is a Carl Sagan or David Attenborough of social science — somebody who can create a sense of wonder and fascination not just at the structure of the solar system or struggles of life in a tropical rainforest, but at the workings of our own civilisation: health, migration, finance, education and diplomacy.

One candidate would have been Swedish doctor and statistician Hans Rosling, who died in February. He reached an astonishingly wide audience with what were, at their heart, simply presentations of official data from the likes of the World Bank.

He characterised his task as telling people the facts — “to describe the world”. But the facts need a champion. Facts rarely stand up for themselves — they need someone to make us care about them, to make us curious. That’s what Rosling did. And faced with the apocalyptic possibility of a world where the facts don’t matter, that is the example we must follow.

Written for and first published in the Financial Times.

My book “Messy” is available online in the US and UK or in good bookshops everywhere.

17th of March, 2017

What makes the perfect office?

In 1923, the father of modern architecture, Le Corbusier, was commissioned by a French industrialist to design some homes for workers in his factory near Bordeaux. Le Corbusier duly delivered brightly hued concrete blocks of pure modernism. The humble factory workers did not take to Le Corbusier’s visionary geometry. They added rustic shutters, pitched roofs, and picket-fenced gardens. And they decorated the gardens in the least modernist way imaginable: with gnomes.
Companies no longer hire star architects to design housing for an industrial workforce. The architects are instead put to work producing the most magazine-shoot worthy office spaces. A pioneer was the uber-cool advertising agency, Chiat-Day, which in 1993 hired the playful Italian architect Gaetano Pesce to create a New York space for them (hot-lips mural, luminous floor, spring-loaded chairs). Their Los Angeles office (four-storey binoculars, brainstorming pods commandeered from fairground rides) was designed by Frank Gehry, whom Chiat-Day’s boss, Jay Chiat, had spotted before Gehry created the Guggenheim Bilbao and became the most famous architect on the planet.
Jay Chiat believed that design was for the professionals. Give workers control over their own space and they would simply clutter up Frank Gehry’s vision, so Jay Chiat decreed that his employees be given tiny lockers for “their dog pictures, or whatever”.
Now everyone is hiring the high priests of architecture. Google has asked Thomas Heatherwick, creator of the 2012 Olympic torch, to create a new Googleplex. Apple’s new headquarters will be a gigantic glass donut over a mile around, designed by Norman Foster and partners.
The most famous corporate architect was not an architect at all: the late Steve Jobs, the boss of Apple, owned much of the film studio Pixar and stamped his taste all over Pixar’s headquarters. Jobs pored over the finest details, choosing an Arkansas steel mill that produced steel of the perfect hue (bolted, not welded).
Jobs believed that a building could shape the way people interacted with each other, and hit upon the notion that Pixar would have just a single pair of washrooms, just off the main lobby. Every time nature called, there was only one place for the entire company to go, and serendipitous new connections would be made.
But what if all these efforts are basically repeating Le Corbusier’s error? What if the ideal office isn’t the coolest or the most aesthetically visionary? What if the ideal office is the one, dog pictures and gnomes and all, that workers make their own?
In 2010, two psychologists conducted an experiment to test that idea. Alex Haslam and Craig Knight set up simple office spaces where they asked experimental subjects to spend an hour doing simple administrative tasks. Haslam and Knight wanted to understand what sort of office space made people productive and happy, and they tested four different layouts.
Two of the layouts were familiar. One was stripped down – bare desk, swivel chair, pencil, paper, nothing else. Most participants found it rather oppressive. “You couldn’t relax in it,” said one. The other layout was softened with pot plants and tasteful close-up photographs of flowers, faintly reminiscent of Georgia O’Keeffe paintings. Workers got more and better work done there, and enjoyed themselves more.
The next two layouts produced dramatically different outcomes – and yet, photographs of the spaces would offer few clues as to why. They used the same basic elements and the same botanical decorations. But the appearance wasn’t what mattered; what mattered was who got to decide.
In the third and fourth layouts, workers were given the plants and pictures and invited to use them to decorate the space – or not – before they started work. But in the fourth, the experimenter came in after the subject had finished setting everything out to her satisfaction, and then rearranged it all. The office space itself was not much different, but the difference in productivity and job satisfaction was dramatic.
When workers were empowered to design their own space, they had fun and worked hard and accurately, producing 30 per cent more work than in the minimalist office and 15 per cent more than in the decorated office. When workers were deliberately disempowered, their work suffered and of course, they hated it. “I wanted to hit you,” one participant later admitted to an experimenter.
Haslam and Knight have confirmed what other researchers have long suspected – that lack of control over one’s physical environment is stressful and distracting. But this perspective is in stark contrast to those who see office design as too important to be left to the people who work in offices.
At least Le Corbusier had a vision, but many office spaces are ruled instead by an aesthetic that is mean and petty. The Wall Street Journal reported on Kyocera’s clipboard-wielding “inspectors” not only enforcing a clear-desk policy, but pulling open drawers and cabinets, photographing messy contents and demanding improvements. The Australian Financial Review published an 11-page clean-desk manual leaked from the mining giant BHP Billiton; apparently copper and coal cannot be mined if office workers do not respect the limit of one A5 picture frame on each desk. (The frame could display a family photo or an award certificate, but not both.) Haslam and Knight told of a Sydney-based bank that changed the layout of its IT department 36 times in four years at the whim of senior management.
It is unclear why any of this top-down design is thought desirable. Official explanations are often empty or circular: that clean desks are more professional, or look tidier. In some cases, streamlined practices from the production line have been copied mindlessly into general office spaces, where they serve no purpose. Whatever the reason, it is folly. It can be satisfying to straighten up all the pens on your desk; but to order an underling to straighten their own pens is sociopathic.
When the likes of Steve Jobs or Frank Gehry are in charge, we can at least expect a workplace that will look beautiful. But that does not make it functional. Truly creative spaces aren’t constantly being made over for photoshoots in glossy business magazines. Just ask veterans of M.I.T., many of whom will identify as their favourite and most creative space a building that didn’t even have a proper name, which was designed in an afternoon and built to last just a couple of years. Building 20 was 200,000 square feet of plywood, cinderblock and asbestos, a squat, dusty firetrap originally designed to accommodate the wartime radar research effort, but which eked out an existence as M.I.T.’s junk-filled attic until 1998.
Building 20 was an unbelievably fertile mess. The successes started with the wartime RadLab, which produced nine Nobel prizes and the radar systems that won the second world war. But the outpouring continued for more than half a century. The first commercial atomic clock; one of the earliest particle accelerators; Harold Edgerton’s iconic high-speed photographs of a bullet passing through an apple – all sprang from Building 20. So did computer hacking and the first arcade video game, Spacewar. So did the pioneering technology companies DEC, BBN, and Bose. Cognitive science was revolutionised in Building 20 by the researcher Jerry Lettvin, while Noam Chomsky did the same for linguistics.
All this happened in the cheapest, nastiest space that M.I.T. could offer. But that was no coincidence. Building 20 was where the university put odd projects, student hobbyists and anything else that didn’t seem to matter, producing new collaborations.
And Building 20’s ugliness was functional. The water pipes and cabling were exposed, running across the ceilings in brackets. Researchers thought nothing of tapping into them for their experimental needs – or, for that matter, of knocking down a wall. When the atomic clock was being developed, the team removed two floors to accommodate it. This was the result not of design but of neglect. In the words of Stewart Brand, author of How Buildings Learn, “nobody cares what you do in there.”
And that was all Building 20’s residents wanted: to be left alone to create, to make whatever mess they wanted to make. When, inevitably, M.I.T. finally replaced Building 20 with a $300m structure designed by Frank Gehry himself, its former residents held a memorial wake. The new building might have been cutting-edge architecture, but one unhappy resident summed up the problem perfectly: “I didn’t ask for it.”
Of course nobody cares what the people who actually do the work might want or need. Chief executives exult in bold architectural statements, and universities find it easier to raise money for new buildings than for research. And so the grand buildings continue to be built, especially by the most profitable companies and the most prestigious seats of learning.
But we often get the causation backwards here, believing that great architecture underpins the success of great universities, or that Google flourishes because of the vibrancy of the helter skelters and ping pong tables in the Googleplex. A moment’s reflection reminds us that the innovation comes first, and the stunt architecture comes later.
Remember that for the first two years of Google’s history, there were no headquarters at all. The company’s founders, Sergey Brin and Larry Page, made the breakthroughs at Stanford University. Then came the cliché of a garage in Menlo Park, with desks made from doors set horizontally across sawhorses. The company grew and grew, into one crude space after another – and with engineers always free to hack things about. One knocked down the wall of his office, decided he didn’t like the results, and rebuilt the wall. That made for an ugly space – but a space that worked for the people who worked in it. The spirit of Building 20 lived on at Google.
So how should the ideal office look? In the most prestigious offices at the most prestigious companies, the ones which are being photographed by Wired, the answer to that question is: this place should look the way the boss’s pet architect wants it to look.
But Building 20, and Google’s early offices, and some of the great creative spaces all over the world, suggest a very different answer to the same question: how this place looks doesn’t matter.
Back in 1977, the editor of Psychology Today, T George Harris, put his finger on the problem:
“The office is a highly personal tool shop, often the home of the soul… this fact may sound simple, but it eludes most architects… They have a mania for uniformity, in space as in furniture, and a horror over how the messy side of human nature clutters up an office landscape that would otherwise be as tidy as a national cemetery.”
Harris scoured the academic literature for any evidence that good design helped people to get things done, or to be happier in the office. He couldn’t find it. “People suddenly put into ‘good design’ did not seem to wake up and love it,” he wrote. What people love, instead, is the ability to control the space in which they work – even if they end up filling the space with kitsch, or dog photos, or even – shudder – garden gnomes.
Strangely enough, it was Steve Jobs himself – notorious as a dictatorial arbiter of good taste – who came to appreciate this at Pixar. When he unveiled his plan for the single pair of serendipity-inducing uber-bathrooms, he faced a rebellion from pregnant women at Pixar who didn’t want to have to make the long walk ten times a day. Jobs was aghast that people didn’t appreciate the importance of his vision. But then he did something unexpected: he backed down and agreed to install extra bathrooms.
Steve Jobs found other ways to encourage serendipitous interactions. More importantly, he showed that even on a question that mattered deeply to him, junior staff were able to defy him. Milled Arkansas steels be damned: it is the autonomy that really matters.
“The animators who work here are free to – no, encouraged to – decorate their work spaces in whatever style they wish,” explains Pixar’s boss Ed Catmull in his book Creativity, Inc. “They spend their days inside pink dollhouses whose ceilings are hung with miniature chandeliers, tiki huts made of real bamboo, and castles whose meticulously painted, fifteen-foot-high Styrofoam turrets appear to be carved from stone.”
I suspect that there may be a garden gnome in there, too.

 

The ideas in this article are adapted from my book “Messy“, which is available online in the US and UK or in good bookshops everywhere.

16th of February, 2017

Newsnight film featuring Brian Eno

I made a short film for BBC’s Newsnight.  Lots of fun:

(If the embed function doesn’t work you can try this.)

1st of November, 2016

There’s magic in mess: Why you should embrace a disorderly desk

In 1726, during a long voyage from London to Philadelphia, a young printer hatched the idea of using a notebook to systematically chart his efforts to become a better man. He set out 13 virtues — including industry, justice, tranquillity and temperance — and his plan was to focus on each in turn in an endless quest for self-improvement, recording failures with a black spot in his journal. The virtue journal worked, and the black marks became scarcer and scarcer.

Benjamin Franklin kept up this practice for his entire life. What a life it was: Franklin invented bifocals and a clean-burning stove; he proved that lightning was a form of electricity and then tamed it with the lightning conductor; he charted the Gulf Stream. He organised a lending library, a fire brigade and a college. He was America’s first postmaster-general, its ambassador to France, even the president of Pennsylvania.

And yet the great man had a weakness — or so he thought. His third virtue was Order. “Let all your things have their places; let each part of your business have its time,” he wrote. While all the other virtues were mastered, one by one, Franklin never quite managed to get his desk or his diary tidy.

“My scheme of Order gave me the most trouble,” he reflected six decades later. “My faults in it vexed me so much, and I made so little progress in amendment, and had such frequent relapses, that I was almost ready to give up the attempt.” Observers agreed. One described how callers on Franklin “were amazed to behold papers of the greatest importance scattered in the most careless way over the table and floor”.

Franklin was a messy fellow his entire life, despite 60 years of trying to reform himself, and remained convinced that if only he could learn to tidy up, he would become a more successful and productive person. But any outsider can see that it is absurd to think such a rich life could have been yet further enriched by assiduous use of a filing cabinet. Franklin was deluding himself. But his error is commonplace; we’re all tidy-minded people, admiring ourselves when we keep a clean desk and uneasy when we do not. Tidiness can be useful but it’s not always a virtue. Even though Franklin never let himself admit it, there can be a kind of magic in mess.

Why is it so difficult to keep things tidy? A clue comes in Franklin’s motto, “Let all your things have their places … ” That seems to make sense. Humans tend to have an excellent spatial memory. The trouble is that modern office life presents us with a continuous stream of disparate documents arriving not only by post but via email and social media. What are the “places”, both physical and digital, for this torrent of miscellanea?

Categorising documents of any kind is harder than it seems. The writer and philosopher Jorge Luis Borges once told of a fabled Chinese encyclopaedia, the “Celestial Emporium of Benevolent Knowledge”, which organised animals into categories such as: a) belonging to the emperor, c) tame, d) sucking pigs, f) fabulous, h) included in the present classification, and m) having just broken the water pitcher.

Borges’s joke has a point: categories are difficult. Distinctions that seem practically useful — who owns what, who did what, what might make a tasty supper — are utterly unusable when taken as a whole. The problem is harder still when we must file many incoming emails an hour, building folder structures that need to make sense months or years down the line. Borgesian email folders might include: a) coming from the boss, b) tedious, c) containing appointments, d) sent to the entire company, e) urgent, f) sexually explicit, g) complaints, h) personal, i) pertaining to the year-end review, and j) about to exceed the memory allocation on the server.

Regrettably, many of these emails fit into more than one category and while each grouping itself is perfectly meaningful, they do not fit together. Some emails clearly fit into a pattern, but many do not. One may be the start of a major project or the start of nothing at all, and it will rarely be clear which is which at the moment that email arrives in your inbox. Giving documents — whether physical or digital — a proper place, as Franklin’s motto recommends, requires clairvoyance. Failing that, we muddle through the miscellany, hurriedly imposing some kind of practical organising principle on what is a rapid and fundamentally messy flow of information.

When it comes to actual paper, there’s a beautiful alternative, invented in the early 1990s by Yukio Noguchi, an emeritus professor at Hitotsubashi University in Tokyo and author of books such as Super Organised Method. Noguchi doesn’t try to categorise anything. Instead, he places each incoming document in a large envelope. He writes the envelope’s contents neatly on its edge and lines the envelopes up on a bookshelf, their labelled edges visible like the spines of books. Now the moment of genius: each time he uses an envelope, Noguchi places it back on the left of the shelf. Over time, recently used documents shuffle themselves towards the left, and never-used documents accumulate on the right. Archiving is easy: every now and again, Noguchi removes the documents on the right. To find any document in this system, he simply asks himself how recently he has seen it. It is a filing system that all but organises itself.

But wait a moment. Eric Abrahamson and David Freedman, authors of A Perfect Mess, offer the following suggestion: “Turn the row of envelopes so that the envelopes are stacked vertically instead of horizontally, place the stack on your desktop, and get rid of the envelopes.” Those instructions transform the shelf described in Super Organised Method into an old-fashioned pile of papers on a messy desk. Every time a document arrives or is consulted, it goes back on the top of the pile. Unused documents gradually settle at the bottom. Less elegant, perhaps, but basically the same system.

Computer scientists may recognise something rather familiar about this arrangement: it mirrors the way that computers handle their memory systems. Computers use memory “caches”, which are small but swift to access. A critical issue is which data should be prioritised and put in the fastest cache. This cache management problem is analogous to asking which paper you should keep on your desk, which should be in your desk drawer, and which should be in offsite storage in New Jersey. Getting the decision right makes computers a lot faster — and it can make you faster too.

Fifty years ago, the computer scientist Laszlo Belady showed that one of the fastest and most effective simple algorithms is to wait until the cache is full, then eject whichever item has gone unused for the longest time. This rule is called “Least Recently Used”, or LRU — and it works because in computing, as in life, the fact that you’ve recently needed something is a good indication that you will need it again soon.

As Brian Christian and Tom Griffiths observe in their recent book Algorithms to Live By, while a computer might use LRU to manage a memory cache, Noguchi’s Super Organised Method uses the same rule to manage paper: recently used stuff on the left, stuff that you haven’t looked at for ages on the right. A pile of documents also implements LRU: recently touched stuff on the top, everything else sinks to the bottom.
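
If you are curious to see the rule written down, here is a minimal sketch of an LRU cache in Python. It is an illustration only, not code from any of the books or studies mentioned here: the names LRUCache, get and put are mine, and the eviction step plays the part of Noguchi’s occasional cull of envelopes from the far end of the shelf.

from collections import OrderedDict

class LRUCache:
    """A toy least-recently-used cache: the digital cousin of Noguchi's
    shelf of envelopes, or of a pile of papers on a desk."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # recently used entries are kept at one end

    def get(self, key):
        # Consulting a document moves it back to the recently-used end,
        # just as Noguchi returns an envelope to the left of the shelf.
        if key not in self._items:
            return None
        self._items.move_to_end(key)
        return self._items[key]

    def put(self, key, value):
        # A new document goes straight to the recently-used end; if the
        # cache is full, whatever has gone longest untouched is archived.
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)

desk = LRUCache(capacity=3)
desk.put("tax return", "...")
desk.put("conference notes", "...")
desk.put("gym timetable", "...")
desk.get("tax return")              # recently consulted, so it stays near the top
desk.put("electricity bill", "...") # the conference notes, untouched longest, drop out

Nothing is ever deliberately filed, and yet whatever goes longest without being touched is exactly what falls off the end.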

This isn’t to say that a pile of paper is always the very best organisational system. That depends on what is being filed, and whether several people have to make sense of the same filing system or not. But the pile of papers is not random. It has its own pragmatic structure based simply on the fact that whatever you’re using tends to stay visible and accessible. Obsolete stuff sinks out of sight. Your desk may look messy to other people but you know that, thanks to the LRU rule, it’s really an efficient self-organising rapid-access cache.

If all this sounds to you like self-justifying blather from untidy colleagues, you might just be a “filer” rather than a “piler”. The distinction between the two was first made in the 1980s by Thomas Malone, a management professor at the Massachusetts Institute of Technology. Filers like to establish a formal organisational structure for their paper documents. Pilers, by contrast, let pieces of paper build up around their desks or, as we have now learnt to say, implement an LRU-cache.

To most of us, it may seem obvious that piling is dysfunctional while filing is the act of a serious professional. Yet when researchers from the office design company Herman Miller looked at high-performing office workers, they found that they tended to be pilers. They let documents accumulate on their desks, used their physical presence as a reminder to do work, and relied on subtle cues — physical alignment, dog-ears, or a stray Post-it note — to orient themselves.

In 2001, Steve Whittaker and Julia Hirschberg, researchers at AT&T Labs, studied filers and pilers in a real office environment, and discovered why the messy approach works better than it seemingly has any right to. They tracked the behaviour of the filers and pilers over time. Who accumulated the biggest volume of documents? Whose archives worked best? And who struggled most when an office relocation forced everyone to throw paperwork away?

One might expect that disciplined filers would have produced small, useful filing systems. But Whittaker and Hirschberg found, instead, that they were sagging under the weight of bloated, useless archives. The problem was a bad case of premature filing. Paperwork would arrive, and then the filer would have to decide what to do with it. It couldn’t be left on the desk — that would be untidy. So it had to be filed somewhere. But most documents have no long-term value, so in an effort to keep their desks clear, the filers were using filing cabinets as highly structured waste-paper baskets. Useful material was hemmed in by well-organised dross.

“You can’t know your own information future,” says Whittaker, who is now a professor of psychology at University of California Santa Cruz, and co-author of The Science of Managing Our Digital Stuff. People would create folder structures that made sense at the time but that would simply be baffling to their own creators months or years later. Organisational categories multiplied. One person told Whittaker and Hirschberg: “I had so much stuff filed. I didn’t know where everything was, and I’d found that I had created second files for something in what seemed like a logical place, but not the only logical place … I ended up having the same thing in two places or I had the same business unit stuff in five different places.”

As for the office move, it was torture for the filers. They had too much material and had invested too much time in organising it. One commented that it was “gruesome … you’re casting off your first-born”. Whittaker reminds me that these people were not discarding their children. They weren’t even discarding family photographs and keepsakes. They were throwing away office memos and dog-eared corporate reports. “It’s very visceral,” Whittaker says. “People’s identity is wrapped up in their jobs, and in the information professions your identity is wrapped up with your information.” And yet the happy-go-lucky pilers, in their messy way, coped far better. They used their desks as temporary caches for documents. The good stuff would remain close at hand, easy to use and to throw away when finished. Occasionally, the pilers would grab a pile, riffle through it and throw most of it away. And when they did file material, they did so in archives that were small, practical and actively used.

Whittaker points out that the filers struggled because the categories they created turned out not to work well as times changed. This suggests that tidiness can work, but only when documents or emails arrive with an obvious structure. My own desk is messy but my financial records are neat — not because they’re more important but because the record-keeping required for accountancy is predictable.

One might object that whatever researchers have concluded about paper documents is obsolete, as most documents are now digital. Surely the obvious point of stress is now the email inbox? But Whittaker’s interest in premature filing actually started in 1996 with an early study of email overload. “The thing we observed was failed folders,” he says. “Tiny email folders with one or two items.”

It turns out that the fundamental problem with email is the same as the problem with paper on the desk: people try to clear their inbox by sorting the email into folders but end up prematurely filing in folder structures that turn out not to work well. In 2011, Whittaker and colleagues published a research paper with the title “Am I Wasting My Time Organizing Email?”. The answer is: yes, you are. People who use the search function find their email more quickly than those who click through carefully constructed systems of folders. The folder system feels better organised but, unless the information arrives with a predictable structure, creating folders is laborious and worse than useless.

So we know that carefully filing paper documents is often counterproductive. Email should be dumped in a few broad folders — or one big archive — rather than a careful folder hierarchy. What then should we do with our calendars? There are two broad approaches. One — analogous to the “filer” approach — is to organise one’s time tightly, scheduling each task in advance and using the calendar as a to-do list. As Benjamin Franklin expressed it: “Let each part of your business have its time.” The alternative avoids the calendar as much as possible, noting only fixed appointments. Intuitively, both approaches have something going for them, so which works best?

Fortunately we don’t need to guess, because three psychologists, Daniel Kirschenbaum, Laura Humphrey and Sheldon Malett, have already run the experiment. Thirty-five years ago, Kirschenbaum and his colleagues recruited a group of undergraduates for a short course designed to improve their study skills. The students were randomly assigned one of three possible pieces of coaching. There was a control group, which was given simple time-management advice such as, “Take breaks of five to 10 minutes after every ½-1½ hour study session.” The other two groups got those tips but they were also given much more specific advice as to how to use their calendars. The “monthly plan” group were instructed to set goals and organise study activities across the space of a month; in contrast, the “daily plan” group were told to micromanage their time, planning activities and setting goals within the span of a single day.

The researchers assumed that the planners who set quantifiable daily goals would do better than those with vaguer monthly plans. In fact, the daily planners started brightly but quickly became hopelessly demotivated, with their study effort collapsing to eight hours a week — even worse than the 10 hours for those with no plan at all. But the students on the monthly plans maintained a consistent study habit of 25 hours a week throughout the course. The students’ grades, unsurprisingly, reflected their work effort.

The problem is that the daily plans get derailed. Life is unpredictable. A missed alarm, a broken washing machine, a dental appointment, a friend calling by for a coffee — or even the simple everyday fact that everything takes longer than you expect — all these obstacles proved crushing for people who had used their calendar as a to-do list.

Like the document pilers, the monthly planners adopted a loose, imperfect and changeable system that happens to work just fine in a loose, imperfect and changeable world. The daily planners, like the filers, imposed a tight, tidy-minded system that shattered on contact with a messy world.

Some people manage to take this lesson to extremes. Marc Andreessen — billionaire entrepreneur and venture capitalist — decided a decade ago to stop writing anything in his calendar. If something was worth doing, he figured, it was worth doing immediately. “I’ve been trying this tactic as an experiment,” he wrote in 2007. “And I am so much happier, I can’t even tell you.”

Arnold Schwarzenegger has adopted much the same approach. He insisted on keeping his diary clear when he was a film star. He even tried the same policy when governor of California. “Appointments are always a no-no. Planning ahead is a no-no,” he told The New York Times. Politicians, lobbyists and activists had to treat him like a popular walk-up restaurant: they showed up and hoped to get a slot. Of course, this was in part a pure status play. But it was more than that. Schwarzenegger knew that an overstuffed diary allows no room to adapt to circumstances.

Naturally, Schwarzenegger and Andreessen can make the world wait to meet them. You and I can’t. But we probably could take a few steps in the same direction, making fewer firm commitments to others and to ourselves, leaving us the flexibility to respond to what life throws at us. A plan that is too finely woven will soon lie in tatters. Daily plans are tidy but life is messy.

The truth is that getting organised is often a matter of soothing our anxieties — or the anxieties of tidy-minded colleagues. It can simply be an artful way of feeling busy while doing nothing terribly useful. Productivity guru Merlin Mann, host of a podcast called Back To Work, has a telling metaphor. Imagine making sandwiches in a deli, says Mann. In comes the first sandwich order. You’re about to reach for the mayonnaise and a couple of slices of sourdough. But then more orders start coming in.

Mann knows all too well how we tend to react. Instead of making the first sandwich, we start to ponder organisational systems. Separate the vegetarian and the meat? Should toasted sandwiches take priority?

There are two problems here. First, there is no perfect way to organise a fast-moving sandwich queue. Second, the time we spend trying to get organised is time we don’t spend getting things done. Just make the first sandwich. If we just got more things done, decisively, we might find we had less need to get organised.

Of course, sometimes we need a careful checklist (if, say, we’re building a house) or a sophisticated reference system (if we’re maintaining a library, for example). But most office workers are neither construction managers nor librarians. Yet we share Benjamin Franklin’s mistaken belief that if only we were more neatly organised, then we would live more productive and more admirable lives. Franklin was too busy inventing bifocals and catching lightning to get around to tidying up his life. If he had been working in a deli, you can bet he wouldn’t have been organising sandwich orders. He would have been making sandwiches.
Image by Benjamin Swanson. This article was first published in the Financial Times magazine and is inspired by ideas from my new book, “Messy“. (US) (UK)

20th of October, 2016

How Politicians Poisoned Statistics

We have more data — and the tools to analyse and share them — than ever before. So why is the truth so hard to pin down?

In January 2015, a few months before the British general election, a proud newspaper resigned itself to the view that little good could come from the use of statistics by politicians. An editorial in the Guardian argued that in a campaign that would be “the most fact-blitzed in history”, numerical claims would settle no arguments and persuade no voters. Not only were numbers useless for winning power, it added, they were useless for wielding it, too. Numbers could tell us little. “The project of replacing a clash of ideas with a policy calculus was always dubious,” concluded the newspaper. “Anyone still hankering for it should admit their number’s up.”

This statistical capitulation was a dismaying read for anyone still wedded to the idea — apparently a quaint one — that gathering statistical information might help us understand and improve our world. But the Guardian’s cynicism can hardly be a surprise. It is a natural response to the rise of “statistical bullshit” — the casual slinging around of numbers not because they are true, or false, but to sell a message.

Politicians weren’t always so ready to use numbers as part of the sales pitch. Recall Ronald Reagan’s famous suggestion to voters on the eve of his landslide defeat of President Carter: “Ask yourself, ‘Are you better off now than you were four years ago?’” Reagan didn’t add any statistical garnish. He knew that voters would reach their own conclusions.

The British election campaign of spring last year, by contrast, was characterised by a relentless statistical crossfire. The shadow chancellor of the day, Ed Balls, declared that a couple with children (he didn’t say which couple) had lost £1,800 thanks to the government’s increase in value added tax. David Cameron, the prime minister, countered that 94 per cent of working households were better off thanks to recent tax changes, while the then deputy prime minister Nick Clegg was proud to say that 27 million people were £825 better off in terms of the income tax they paid.

Could any of this be true? Yes — all three claims were. But Ed Balls had reached his figure by summing up extra VAT payments over several years, a strange method. If you offer to hire someone for £100,000, and then later admit you meant £25,000 a year for a four-year contract, you haven’t really lied — but neither have you really told the truth. And Balls had looked only at one tax. Why not also consider income tax, which the government had cut? Clegg boasted about income-tax cuts but ignored the larger rise in VAT. And Cameron asked to be evaluated only on his pre-election giveaway budget rather than the tax rises he had introduced earlier in the parliament — the equivalent of punching someone on the nose, then giving them a bunch of flowers and pointing out that, in floral terms, they were ahead on the deal.
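
A toy calculation makes the trick plain. The figures below are invented purely for illustration; they are not the numbers behind any of the claims above, but they show how the same household can be described in contradictory-sounding ways, each narrowly true.

# All figures are hypothetical, chosen only to illustrate the framing.
extra_vat_per_year = 300        # suppose a household pays this much more VAT each year
income_tax_cut_per_year = 400   # and this much less income tax each year
years = 4                       # length of the parliament being described

# Sum one tax over several years and ignore the other:
loss_headline = extra_vat_per_year * years              # "families have lost 1,200 pounds"

# Quote only the tax that was cut, per year:
gain_headline = income_tax_cut_per_year                 # "families are 400 pounds a year better off"

# Net the two off over the same period, which is closer to what the household feels:
net_change = (income_tax_cut_per_year - extra_vat_per_year) * years   # 400 pounds better off over four years

print(loss_headline, gain_headline, net_change)

Each line is arithmetically sound; the mischief lies in which one ends up on the poster.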

Each claim was narrowly true but broadly misleading. Not only did the clashing numbers confuse but none of them helped answer the crucial question of whether Cameron and Clegg had made good decisions in office.

To ask whether the claims were true is to fall into a trap. None of these politicians had any interest in playing that game. They were engaged in another pastime entirely.

Thirty years ago, the Princeton philosopher Harry Frankfurt published an essay in an obscure academic journal, Raritan. The essay’s title was “On Bullshit”. (Much later, it was republished as a slim volume that became a bestseller.) Frankfurt was on a quest to understand the meaning of bullshit — what was it, how did it differ from lies, and why was there so much of it about?

Frankfurt concluded that the difference between the liar and the bullshitter was that the liar cared about the truth — cared so much that he wanted to obscure it — while the bullshitter did not. The bullshitter, said Frankfurt, was indifferent to whether the statements he uttered were true or not. “He just picks them out, or makes them up, to suit his purpose.”

Statistical bullshit is a special case of bullshit in general, and it appears to be on the rise. This is partly because social media — a natural vector for statements made purely for effect — are also on the rise. On Instagram and Twitter we like to share attention-grabbing graphics, surprising headlines and figures that resonate with how we already see the world. Unfortunately, very few claims are eye-catching, surprising or emotionally resonant because they are true and fair. Statistical bullshit spreads easily these days; all it takes is a click.

Consider a widely shared list of homicide “statistics” attributed to the “Crime Statistics Bureau — San Francisco”, asserting that 81 per cent of white homicide victims were killed by “blacks”. It takes little effort to establish that the Crime Statistics Bureau of San Francisco does not exist, and not much more digging to discover that the data are utterly false. Most murder victims in the United States are killed by people of their own race; the FBI’s crime statistics from 2014 suggest that more than 80 per cent of white murder victims were killed by other white people.

Somebody, somewhere, invented the image in the hope that it would spread, and spread it did, helped by a tweet from Donald Trump, the current frontrunner for the Republican presidential nomination, that was retweeted more than 8,000 times. One can only speculate as to why Trump lent his megaphone to bogus statistics, but when challenged on Fox News by the political commentator Bill O’Reilly, he replied, “Hey, Bill, Bill, am I gonna check every statistic?”

Harry Frankfurt’s description of the bullshitter would seem to fit Trump perfectly: “He does not care whether the things he says describe reality correctly.”

While we can’t rule out the possibility that Trump knew the truth and was actively trying to deceive his followers, a simpler explanation is that he wanted to win attention and to say something that would resonate with them. One might also guess that he did not check whether the numbers were true because he did not much care one way or the other. This is not a game of true and false. This is a game of politics.

While much statistical bullshit is careless, it can also be finely crafted. “The notion of carefully wrought bullshit involves … a certain inner strain,” wrote Harry Frankfurt but, nevertheless, the bullshit produced by spin-doctors can be meticulous. More conventional politicians than Trump may not much care about the truth but they do care about being caught lying.

Carefully wrought bullshit was much in evidence during last year’s British general election campaign. I needed to stick my nose in and take a good sniff on a regular basis because I was fact-checking on behalf of the BBC’s More or Less programme. Again and again I would find myself being asked on air, “Is that claim true?” and finding that the only reasonable answer began with “It’s complicated”.

Take Ed Miliband’s claim before the last election that “people are £1,600 a year worse off” than they were when the coalition government came to power. Was that claim true? Arguably, yes.

But we need to be clear that by “people”, the then Labour leader was excluding half the adult population. He was not referring to pensioners, benefit recipients, part-time workers or the self-employed. He meant only full-time employees, and, more specifically, only their earnings before taxes and benefits.

Even this narrower question of what was happening to full-time earnings is a surprisingly slippery one. We need to take an average, of course. But what kind of average? Labour looked at the change in median wages, which were stagnating in nominal terms and falling after inflation was taken into account.

That seems reasonable — but the median is a problematic measure in this case. Imagine nine people, the lowest-paid with a wage of £1, the next with a wage of £2, up to the highest-paid person with a wage of £9. The median wage is the wage of the person in the middle: it’s £5.

Now imagine that everyone receives a promotion and a pay rise of £1. The lowly worker with a wage of £1 sees his pay packet double to £2. The next worker up was earning £2 and now she gets £3. And so on. But there’s also a change in the composition of the workforce: the best-paid worker retires and a new apprentice is hired at a wage of £1. What’s happened to people’s pay? In a sense, it has stagnated. The pattern of wages hasn’t changed at all and the median is still £5.

But if you asked the individual workers about their experiences, they would all tell you that they had received a generous pay rise. (The exceptions are the newly hired apprentice and the recent retiree.) While this example is hypothetical, at the time Miliband made his comments something similar was happening in the real labour market. The median wage was stagnating — but among people who had worked for the same employer for at least a year, the median worker was receiving a pay rise, albeit a modest one.

Another source of confusion: if wages for the low-paid and the high-paid are rising but wages in the middle are sagging, then the median wage can fall, even though the median wage increase is healthy. The UK labour market has long been prone to this kind of “job polarisation”, where demand for jobs is strongest for the highest and lowest-paid in the economy. Job polarisation means that the median pay rise can be sizeable even if median pay has not risen.
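
To make the arithmetic concrete, here is a minimal Python sketch of both effects. The wages are invented for illustration, echoing the nine-worker example above; they are not drawn from any real payroll data.

```python
# Illustrative only: invented wages, echoing the nine-worker example above.
from statistics import median

before = [1, 2, 3, 4, 5, 6, 7, 8, 9]        # nine workers, median wage = £5

# Composition effect: everyone gets a £1 rise, the top earner retires
# and a new apprentice is hired on a wage of £1.
after = [w + 1 for w in before]             # every individual is better off
after.remove(10)                            # the best-paid worker retires
after.append(1)                             # the new apprentice joins
print(median(before), median(after))        # 5 5 -- the median wage "stagnates"

# Job polarisation: pay rises at the bottom and the top, sagging in the middle.
polarised = [2, 3, 4, 3, 3, 3, 10, 11, 12]  # invented post-change wages
rises = [new - old for old, new in zip(before, polarised)]
print(median(polarised), median(rises))     # 3 1 -- the median wage falls,
                                            # yet the median pay rise is +1
```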

Confused? Good. The world is a complicated place; it defies description by sound bite statistics. No single number could ever answer Ronald Reagan’s question — “Are you better off now than you were four years ago?” — for everyone in a country.

So, to produce Labour’s figure of “£1,600 worse off”, the party’s press office had to ignore the self-employed, the part-timers, the non-workers, compositional effects and job polarisation. They even changed the basis of their calculation over time, switching between different measures of wages and different measures of inflation, yet miraculously managing to produce a consistent answer of £1,600. Sometimes it’s easier to make the calculation produce the number you want than it is to reprint all your election flyers.

Such careful statistical spin-doctoring might seem a world away from Trump’s reckless retweeting of racially charged lies. But in one sense they were very similar: a political use of statistics conducted with little interest in understanding or describing reality. Miliband’s project was not “What is the truth?” but “What can I say without being shown up as a liar?”

Unlike the state of the UK job market, his incentives were easy to understand. Miliband needed to hammer home a talking point that made the government look bad. As Harry Frankfurt wrote back in the 1980s, the bullshitter “is neither on the side of the true nor on the side of the false. His eye is not on the facts at all … except insofar as they may be pertinent to his interest in getting away with what he says.”

Such complexities put fact-checkers in an awkward position. Should they say that Ed Miliband had lied? No: he had not. Should they say, instead, that he had been deceptive or misleading? Again, no: it was reasonable to say that living standards had indeed been disappointing under the coalition government.

Nevertheless, there was a lot going on in the British economy that the figure omitted — much of it rather more flattering to the government. Full Fact, an independent fact-checking organisation, carefully worked through the paper trail and linked to all the relevant claims. But it was powerless to produce a fair and representative snapshot of the British labour market that had as much power as Ed Miliband’s seven-word sound bite. No such snapshot exists. Truth is usually a lot more complicated than statistical bullshit.

On July 16 2015, the UK health secretary Jeremy Hunt declared: “Around 6,000 people lose their lives every year because we do not have a proper seven-day service in hospitals. You are 15 per cent more likely to die if you are admitted on a Sunday compared to being admitted on a Wednesday.”

This was a statistic with a purpose. Hunt wanted to change doctors’ contracts with the aim of getting more weekend work out of them, and bluntly declared that the doctors’ union, the British Medical Association, was out of touch and that he would not let it block his plans: “I can give them 6,000 reasons why.”

Despite bitter opposition and strike action from doctors, Hunt’s policy remained firm over the following months. Yet the numbers he cited to support it did not. In parliament in October, Hunt was sticking to the 15 per cent figure, but the 6,000 deaths had almost doubled: “According to an independent study conducted by the BMJ, there are 11,000 excess deaths because we do not staff our hospitals properly at weekends.”

Arithmetically, this was puzzling: how could the elevated risk of death stay the same but the number of deaths double? To add to the suspicions about Hunt’s mathematics, the editor-in-chief of the British Medical Journal, Fiona Godlee, promptly responded that the health secretary had publicly misrepresented the BMJ research.

Undaunted, the health secretary bounced back in January with the same policy and some fresh facts: “At the moment we have an NHS where if you have a stroke at the weekends, you’re 20 per cent more likely to die. That can’t be acceptable.”

All this is finely wrought bullshit — a series of ever-shifting claims that can be easily repeated but are difficult to unpick. As Hunt jumped from one form of words to another, he skipped lightly ahead of fact-checkers as they tried to pin him down. Full Fact concluded that Hunt’s statement about 11,000 excess deaths had been untrue, and asked him to correct the parliamentary record. His office responded with a spectacular piece of bullshit, saying (I paraphrase) that whether or not the claim about 11,000 excess deaths was true, similar claims could be made that were.

So, is it true? Do 6,000 people — or 11,000 — die needlessly in NHS hospitals because of poor weekend care? Nobody knows for sure; Jeremy Hunt certainly does not. It’s not enough to show that people admitted to hospital at the weekend are at an increased risk of dying there. We need to understand why — a question that is essential for good policy but inconvenient for politicians.

One possible explanation for the elevated death rate for weekend admissions is that the NHS provides patchy care and people die as a result. That is the interpretation presented as bald fact by Jeremy Hunt. But a more straightforward explanation is that people are only admitted to hospital at the weekend if they are seriously ill. Less urgent cases wait until weekdays. If weekend patients are sicker, it is hardly a surprise that they are more likely to die. Allowing non-urgent cases into NHS hospitals at weekends wouldn’t save any lives, but it would certainly make the statistics look more flattering. Of course, epidemiologists try to correct for the fact that weekend patients tend to be more seriously ill, but few experts have any confidence that they have succeeded.
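
The selection effect is easier to see with a toy simulation. The sketch below uses invented risk numbers, not NHS data: every patient faces exactly the same risk of death for their condition whatever the day, yet the raw weekend death rate comes out higher simply because weekend admissions are assumed to be sicker on average.

```python
# A toy simulation with invented numbers, not NHS data: the death risks are
# identical on every day of the week -- only the case mix differs.
import random

random.seed(1)

def admission_death_rate(n, share_seriously_ill):
    """Simulate n admissions; the seriously ill die at 10%, the rest at 1%."""
    deaths = 0
    for _ in range(n):
        seriously_ill = random.random() < share_seriously_ill
        risk = 0.10 if seriously_ill else 0.01
        deaths += random.random() < risk
    return deaths / n

weekday = admission_death_rate(100_000, share_seriously_ill=0.20)  # routine cases too
weekend = admission_death_rate(100_000, share_seriously_ill=0.50)  # urgent cases only

print(f"weekday mortality: {weekday:.2%}")   # roughly 2.8%
print(f"weekend mortality: {weekend:.2%}")   # roughly 5.5%, with identical care
```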

A more subtle explanation is that shortfalls in the palliative care system may create the illusion that hospitals are dangerous. Sometimes a patient is certain to die, but the question is where — in a hospital or a palliative hospice? If hospice care is patchy at weekends then a patient may instead be admitted to hospital and die there. That would certainly reflect poor weekend care. It would also add to the tally of excess weekend hospital deaths, because during the week that patient would have been admitted to, and died in, a palliative hospice. But it is not true that the death was avoidable.

Does it seem like we’re getting stuck in the details? Well, yes, perhaps we are. But improving NHS care requires an interest in the details. If there is a problem in palliative care hospices, it will not be fixed by improving staffing in hospitals.

“Even if you accept that there’s a difference in death rates,” says John Appleby, the chief economist of the King’s Fund health think-tank, “nobody is able to say why it is. Is it lack of diagnostic services? Lack of consultants? We’re jumping too quickly from a statistic to a solution.”

This matters — the NHS has a limited budget. There are many things we might want to spend money on, which is why we have the National Institute for Health and Care Excellence (Nice) to weigh up the likely benefits of new treatments and decide which offer the best value for money.

Would Jeremy Hunt’s push towards a seven-day NHS pass the Nice cost-benefit threshold? Probably not. Our best guess comes from a 2015 study by health economists Rachel Meacock, Tim Doran and Matt Sutton, which estimates that the NHS has many cheaper ways to save lives. A more comprehensive assessment might reach a different conclusion but we don’t have one because the Department of Health, oddly, hasn’t carried out a formal health impact assessment of the policy it is trying to implement.

This is a depressing situation. The government has devoted considerable effort to producing a killer number: Jeremy Hunt’s “6,000 reasons” why he won’t let the British Medical Association stand in his way. It continues to produce statistical claims that spring up like hydra heads: when one claim is discredited, Hunt’s office simply asserts that another one can be found to take its place. Yet the government doesn’t seem to have bothered to gather the statistics that would actually answer the question of how the NHS could work better.

This is the real tragedy. It’s not that politicians spin things their way — of course they do. That is politics. It’s that politicians have grown so used to misusing numbers as weapons that they have forgotten that used properly, they are tools.

“You complain that your report would be dry. The dryer the better. Statistics should be the dryest of all reading,” wrote the great medical statistician William Farr in a letter in 1861. Farr sounds like a caricature of a statistician, and his prescription — convey the greatest possible volume of information with the smallest possible amount of editorial colour — seems absurdly ill-suited to the modern world.

But there is a middle ground between the statistical bullshitter, who pays no attention to the truth, and William Farr, for whom the truth must be presented without adornment. That middle ground is embodied by the recipient of William Farr’s letter advising dryness. She was the first woman to be elected to the Royal Statistical Society: Florence Nightingale.

Nightingale is the most celebrated nurse in British history, famous for her lamplit patrols of the Barrack Hospital in Scutari, now a district of Istanbul. The hospital was a death trap, with thousands of soldiers from the Crimean front succumbing to typhus, cholera and dysentery as they tried to recover from their wounds in cramped conditions next to the sewers. Nightingale, who did her best, initially believed that the death toll was due to lack of food and supplies. Then, in the spring of 1855, a sanitary commission sent from London cleaned up the hospital, whitewashing the walls, carting away filth and dead animals and flushing out the sewers. The death rate fell sharply.

Nightingale returned to Britain and reviewed the statistics, concluding that she had paid too little attention to sanitation and that most military and medical professionals were making the same mistake, leading to hundreds of thousands of deaths. She began to campaign for better public health measures, tighter laws on hygiene in rented properties, and improvements to sanitation in barracks and hospitals across the country. In doing so, a mere nurse had to convince the country’s medical and military establishments, led by England’s chief medical officer, John Simon, that they had been doing things wrong all their lives.

A key weapon in this lopsided battle was statistical evidence. But Nightingale disagreed with Farr on how that evidence should be presented. “The dryer the better” would not serve her purposes. Instead, in 1857, she crafted what has become known as the Rose Diagram, a beautiful array of coloured wedges showing the deaths from infectious diseases before and after the sanitary improvements at Scutari.
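
For readers curious about how such a chart is built, here is a minimal matplotlib sketch of the polar-area (“coxcomb”) form. The monthly figures are invented placeholders, not Nightingale’s data; the one faithful detail is that the area of each wedge, rather than its radius, represents the count.

```python
# A sketch of a Nightingale-style polar-area chart with invented monthly
# death counts -- not her actual figures. Area encodes the count, so the
# radius of each wedge is the square root of the value.
import numpy as np
import matplotlib.pyplot as plt

months = ["Apr", "May", "Jun", "Jul", "Aug", "Sep",
          "Oct", "Nov", "Dec", "Jan", "Feb", "Mar"]
deaths = [120, 150, 300, 450, 520, 610, 480, 350, 200, 150, 90, 60]  # made up

theta = np.linspace(0.0, 2 * np.pi, len(months), endpoint=False)
radii = np.sqrt(deaths)
width = 2 * np.pi / len(months)

ax = plt.subplot(projection="polar")
ax.bar(theta, radii, width=width, bottom=0.0, alpha=0.6, edgecolor="black")
ax.set_xticks(theta)
ax.set_xticklabels(months)
ax.set_yticks([])
ax.set_title("Nightingale-style rose diagram (invented data)")
plt.show()
```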

The Rose Diagram isn’t a dry presentation of statistical truth. It tells a story. Its structure divides the death toll into two periods — before the sanitary improvements, and after. In doing so, it highlights a sharp break that is less than clear in the raw data. And the Rose Diagram also gently obscures other possible interpretations of the numbers — that, for example, the death toll dropped not because of improved hygiene but because winter was over. The Rose Diagram is a marketing pitch for an idea. The idea was true and vital, and Nightingale’s campaign was successful. One of her biographers, Hugh Small, argues that the Rose Diagram ushered in health improvements that raised life expectancy in the UK by 20 years and saved millions of lives.

What makes Nightingale’s story so striking is that she was able to see that statistics could be tools and weapons at the same time. She educated herself using the data, before giving it the makeover it required to convince others. Though the Rose Diagram is a long way from “the dryest of all reading”, it is also a long way from bullshit. Florence Nightingale realised that the truth about public health was so vital that it could not simply be recited in a monotone. It needed to sing.

The idea that a graph could change the world seems hard to imagine today. Cynicism has set in about statistics. Many journalists draw no distinction between a systematic review of peer-reviewed evidence and a survey whipped up in an afternoon to sell biscuits or package holidays: it’s all described as “new research”. Politicians treat statistics not as the foundation of their argument but as decoration — “spray-on evidence” is the phrase used by jaded civil servants. But a freshly painted policy without foundations will not last long before the cracks show through.

“Politicians need to remember: there is a real world and you want to try to change it,” says Will Moy, the director of Full Fact. “At some stage you need to engage with the real world — and that is where the statistics come in handy.”

That should be no problem, because it has never been easier to gather and analyse informative statistics. Nightingale and Farr could not have imagined the data that modern medical researchers have at their fingertips. The gold standard of statistical evidence is the randomised controlled trial, because using a randomly chosen control group protects against biased or optimistic interpretations of the evidence. Hundreds of thousands of such trials have been published, most of them within the past 25 years. In non-medical areas such as education, development aid and prison reform, randomised trials are rapidly catching on: thousands have been conducted. The British government, too, has been supporting policy trials — for example, the Education Endowment Foundation, set up with £125m of government funds just five years ago, has already backed more than 100 evaluations of educational approaches in English schools. It favours randomised trials wherever possible.

The frustrating thing is that politicians seem quite happy to ignore evidence — even when they have helped to support the researchers who produced it. For example, when the chancellor George Osborne announced in his budget last month that all English schools were to become academies, making them independent of the local government, he did so on the basis of faith alone. The Sutton Trust, an educational charity which funds numerous research projects, warned that on the question of whether academies had fulfilled their original mission of improving failing schools in poorer areas, “our evidence suggests a mixed picture”. Researchers at the LSE’s Centre for Economic Performance had a blunter description of Osborne’s new policy: “a non-evidence based shot in the dark”.

This should be no surprise. Politicians typically use statistics like a stage magician uses smoke and mirrors. Over time, they can come to view numbers with contempt. Voters and journalists will do likewise. No wonder the Guardian gave up on the idea that political arguments might be settled by anything so mundane as evidence. The spin-doctors have poisoned the statistical well.

But despite all this despair, the facts still matter. There isn’t a policy question in the world that can be settled by statistics alone but, in almost every case, understanding the statistical background is a tremendous help. Hetan Shah, the executive director of the Royal Statistical Society, has lost count of the number of times someone has teased him with the old saying about “lies, damned lies and statistics”. He points out that while it’s easy to lie with statistics, it’s even easier to lie without them.

Perhaps the lies aren’t the real enemy here. Lies can be refuted; liars can be exposed. But bullshit? Bullshit is a stickier problem. Bullshit corrodes the very idea that the truth is out there, waiting to be discovered by a careful mind. It undermines the notion that the truth matters. As Harry Frankfurt himself wrote, the bullshitter “does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.”

 

Written for and first published in the FT Magazine


20th of April, 2016 | Highlights | Other Writing
Highlights

My TED talk on how distractions make us more creative

11th of January, 2016 | Highlights | Speeches | Video
Highlights

Multi-tasking: how to survive in the 21st century

Modern life now forces us to do a multitude of things at once — but can we? Should we?

Forget invisibility or flight: the superpower we all want is the ability to do several things at once. Unlike other superpowers, however, being able to multitask is now widely regarded as a basic requirement for employability. Some of us sport computers with multiple screens, to allow tweeting while trading pork bellies and frozen orange juice. Others make do with reading a Kindle while poking at a smartphone and glancing at a television in the corner with its two rows of scrolling subtitles. We think nothing of sending an email to a colleague to suggest a quick coffee break, because we can feel confident that the email will be read within minutes.

All this is simply the way the modern world works. Multitasking is like being able to read or add up, so fundamental that it is taken for granted. Doing one thing at a time is for losers — recall Lyndon Johnson’s often bowdlerised dismissal of Gerald Ford: “He can’t fart and chew gum at the same time.”

The rise of multitasking is fuelled by technology, of course, and by social change as well. Husbands and wives no longer specialise as breadwinners and homemakers; each must now do both. Work and play blur. Your friends can reach you on your work email account at 10 o’clock in the morning, while your boss can reach you on your mobile phone at 10 o’clock at night. You can do your weekly shop sitting at your desk and you can handle a work query in the queue at the supermarket.

This is good news in many ways — how wonderful to be able to get things done in what would once have been wasted time! How delightful the variety of it all is! No longer must we live in a monotonous, Taylorist world where we must painstakingly focus on repetitive tasks until we lose our minds.

And yet we are starting to realise that the blessings of a multitasking life are mixed. We feel overwhelmed by the sheer number of things we might plausibly be doing at any one time, and by the feeling that we are on call at any moment.

And we fret about the unearthly appetite of our children to do everything at once, flipping through homework while chatting on WhatsApp, listening to music and watching Game of Thrones. (According to a recent study by Sabrina Pabilonia of the US Bureau of Labor Statistics, for over half the time that high-school students spend doing homework, they are also listening to music, watching TV or otherwise multitasking. That trend is on the increase.) Can they really handle all these inputs at once? They seem to think so, despite various studies suggesting otherwise.

And so a backlash against multitasking has begun — a kind of Luddite self-help campaign. The poster child for uni-tasking was launched on the crowdfunding website Kickstarter in December 2014. For $499 — substantially more than a multifunctional laptop — “The Hemingwrite” computer promised a nice keyboard, a small e-ink screen and an automatic cloud back-up. You couldn’t email on the Hemingwrite. You couldn’t fool around on YouTube, and you couldn’t read the news. All you could do was type. The Hemingwrite campaign raised over a third of a million dollars.

The Hemingwrite (now rebranded the Freewrite) represents an increasingly popular response to the multitasking problem: abstinence. Programs such as Freedom and Self-Control are now available to disable your browser for a preset period of time. The popular blogging platform WordPress offers “distraction-free writing”. The Villa Stéphanie, a hotel in Baden-Baden, offers what has been branded the “ultimate luxury”: a small silver switch beside the hotel bed that will activate a wireless blocker and keep the internet and all its temptations away.

The battle lines have been drawn. On one side: the culture of the modern workplace, which demands that most of us should be open to interruption at any time. On the other, the uni-tasking refuseniks who insist that multitaskers are deluding themselves, and that focus is essential. Who is right?

The ‘cognitive cost’

There is ample evidence in favour of the proposition that we should focus on one thing at a time. Consider a study led by David Strayer, a psychologist at the University of Utah. In 2006, Strayer and his colleagues used a high-fidelity driving simulator to compare the performance of drivers who were chatting on a mobile phone to drivers who had drunk enough alcohol to be at the legal blood-alcohol limit in the US. Chatting drivers didn’t adopt the aggressive, risk-taking style of drunk drivers but they were unsafe in other ways. They took much longer to respond to events outside the car, and they failed to notice a lot of the visual cues around them. Strayer’s infamous conclusion: driving while using a mobile phone is as dangerous as driving while drunk.

Less famous was Strayer’s finding that it made no difference whether the driver was using a handheld or hands-free phone. The problem with talking while driving is not a shortage of hands. It is a shortage of mental bandwidth.

Yet this discovery has made little impression either on public opinion or on the law. In the United Kingdom, for example, it is an offence to use a hand-held phone while driving but perfectly legal if the phone is used hands-free. We’re happy to acknowledge that we only have two hands but refuse to admit that we only have one brain.

Another study by Strayer, David Sanbonmatsu and others, suggested that we are also poor judges of our ability to multitask. The subjects who reported doing a lot of multitasking were also the ones who performed poorly on tests of multitasking ability. They systematically overrated their ability to multitask and they displayed poor impulse control. In other words, wanting to multitask is a good sign that you should not be multitasking.

We may not immediately realise how multitasking is hampering us. The first time I took to Twitter to comment on a public event was during a televised prime-ministerial debate in 2010. The sense of buzz was fun; I could watch the candidates argue and the twitterati respond, compose my own 140-character profundities and see them being shared. I felt fully engaged with everything that was happening. Yet at the end of the debate I realised, to my surprise, that I couldn’t remember anything that Brown, Cameron and Clegg had said.

A study conducted at UCLA in 2006 suggests that my experience is not unusual. Three psychologists, Karin Foerde, Barbara Knowlton and Russell Poldrack, recruited students to look at a series of flashcards with symbols on them, and then to make predictions based on patterns they had recognised. Some of these prediction tasks were done in a multitasking environment, where the students also had to listen to low- and high-pitched tones and count the high-pitched ones. You might think that making predictions while also counting beeps was too much for the students to handle. It wasn’t. They were equally competent at spotting patterns with or without the note-counting task.

But here’s the catch: when the researchers then followed up by asking more abstract questions about the patterns, the cognitive cost of the multitasking became clear. The students struggled to answer questions about the predictions they’d made in the multitasking environment. They had successfully juggled both tasks in the moment — but they hadn’t learnt anything that they could apply in a different context.

That’s an unnerving discovery. When we are sending email in the middle of a tedious meeting, we may nevertheless feel that we’re taking in what is being said. A student may be confident that neither Snapchat nor the live football is preventing them taking in their revision notes. But the UCLA findings suggest that this feeling of understanding may be an illusion and that, later, we’ll find ourselves unable to remember much, or to apply our knowledge flexibly. So, multitasking can make us forgetful — one more way in which multitaskers are a little bit like drunks.

Early multitaskers

All this is unnerving, given that the modern world makes multitasking almost inescapable. But perhaps we shouldn’t worry too much. Long before multitasking became ubiquitous, it had a long and distinguished history.

In 1958, a young psychologist named Bernice Eiduson embarked on a long-term research project — so long-term, in fact, that Eiduson died before it was completed. Eiduson studied the working methods of 40 scientists, all men. She interviewed them periodically over two decades and put them through various psychological tests. Some of these scientists found their careers fizzling out, while others went on to great success. Four won Nobel Prizes and two others were widely regarded as serious Nobel contenders. Several more were invited to join the National Academy of Sciences.

After Eiduson died, some of her colleagues published an analysis of her work. These colleagues, Robert Root-Bernstein, Maurine Bernstein and Helen Garnier, wanted to understand what determined whether a scientist would have a long productive career, a combination of genius and longevity.

There was no clue in the interviews or the psychological tests. But looking at the early publication record of these scientists — their first 100 published research papers — researchers discovered a pattern: the top scientists were constantly changing the focus of their research.

Over the course of these first 100 papers, the most productive scientists covered five different research areas and moved from one of these topics to another an average of 43 times. They would publish, and change the subject, publish again, and change the subject again. Since most scientific research takes an extended period of time, the subjects must have overlapped. The secret to a long and highly productive scientific career? It’s multitasking.

Charles Darwin thrived on spinning multiple plates. He began his first notebook on “transmutation of species” two decades before The Origin of Species was published. His A Biographical Sketch of an Infant was based on notes made after his son William was born; William was 37 by the time it was published. Darwin spent nearly 20 years working on climbing and insectivorous plants. And Darwin published a learned book on earthworms in 1881, just before his death. He had been working on it for 44 years. When two psychologists, Howard Gruber and Sara Davis, studied Darwin and other celebrated artists and scientists, they concluded that such overlapping interests were common.

Another team of psychologists, led by Mihaly Csikszentmihalyi, interviewed almost 100 exceptionally creative people, from the jazz pianist Oscar Peterson to the science writer Stephen Jay Gould to the physicist John Bardeen, a double Nobel laureate. Csikszentmihalyi is famous for developing the idea of “flow”, the blissful state of being so absorbed in a challenge that one loses track of time and sets all distractions to one side. Yet every one of Csikszentmihalyi’s interviewees made a practice of keeping several projects bubbling away simultaneously.

Just internet addiction?

If the word “multitasking” can apply to both Darwin and a teenager with a serious Instagram habit, there is probably some benefit in defining our terms. There are at least four different things we might mean when we talk about multitasking. One is genuine multitasking: patting your head while rubbing your stomach; playing the piano and singing; farting while chewing gum. Genuine multitasking is possible, but at least one of the tasks needs to be so practised as to be done without thinking.

Then there’s the challenge of creating a presentation for your boss while also fielding phone calls for your boss and keeping an eye on email in case your boss wants you. This isn’t multitasking in the same sense. A better term is task switching, as our attention flits between the presentation, the telephone and the inbox. A great deal of what we call multitasking is in fact rapid task switching.

Task switching is often confused with a third, quite different activity — the guilty pleasure of disappearing down an unending click-hole of celebrity gossip and social media updates. There is a difference between the person who reads half a page of a journal article, then stops to write some notes about a possible future project, then goes back to the article — and someone who reads half a page of a journal article before clicking on bikini pictures for the rest of the morning. “What we’re often calling multitasking is in fact internet addiction,” says Shelley Carson, a psychologist and author of Your Creative Brain. “It’s a compulsive act, not an act of multitasking.”

A final kind of multitasking isn’t a way of getting things done but simply the condition of having a lot of things to do. The car needs to be taken in for a service. Your tooth is hurting. The nanny can’t pick up the kids from school today. There’s a big sales meeting to prepare for tomorrow, and your tax return is due next week. There are so many things that have to be done, so many responsibilities to attend to. Having a lot of things to do is not the same as doing them all at once. It’s just life. And it is not necessarily a stumbling block to getting things done — as Bernice Eiduson discovered as she tracked scientists on their way to their Nobel Prizes.

The fight for focus

These four practices — multitasking, task switching, getting distracted and managing multiple projects — all fit under the label “multitasking”. This is not just because of a simple linguistic confusion. The versatile networked devices we use tend to blur the distinction, serving us as we move from task to task while also offering an unlimited buffet of distractions. But the different kinds of multitasking are linked in other ways too. In particular, the highly productive practice of having multiple projects invites the less-than-productive habit of rapid task switching.

To see why, consider a story that psychologists like to tell about a restaurant near Berlin University in the 1920s. (It is retold in Willpower, a book by Roy Baumeister and John Tierney.) The story has it that when a large group of academics descended upon the restaurant, the waiter stood and calmly nodded as each new item was added to their complicated order. He wrote nothing down, but when he returned with the food his memory had been flawless. The academics left, still talking about the prodigious feat; but when one of them hurried back to retrieve something he’d left behind, the waiter had no recollection of him. How could the waiter have suddenly become so absent-minded? “Very simple,” he said. “When the order has been completed, I forget it.”

One member of the Berlin school was a young experimental psychologist named Bluma Zeigarnik. Intrigued, she demonstrated that people have a better recollection of uncompleted tasks. This is called the “Zeigarnik effect”: when we leave things unfinished, we can’t quite let go of them mentally. Our subconscious keeps reminding us that the task needs attention.

The Zeigarnik effect may explain the connection between facing multiple responsibilities and indulging in rapid task switching. We flit from task to task to task because we can’t forget about all of the things that we haven’t yet finished. We flit from task to task to task because we’re trying to get the nagging voices in our head to shut up.

Of course, there is much to be said for “focus”. But there is much to be said for copperplate handwriting, too, and for having a butler. The world has moved on. There’s something appealing about the Hemingwrite and the hotel room that will make the internet go away, but also something futile.

It is probably not true that Facebook is all that stands between you and literary greatness. And in most office environments, the Hemingwrite is not the tool that will win you promotion. You are not Ernest Hemingway, and you do not get to simply ignore emails from your colleagues.

If focus is going to have a chance, it’s going to have to fight an asymmetric war. Focus can only survive if it can reach an accommodation with the demands of a multitasking world.

Loops and lists

The word “multitasking” wasn’t applied to humans until the 1990s, but it has been used to describe computers for half a century. According to the Oxford English Dictionary, it was first used in print in 1966, when the magazine Datamation described a computer capable of appearing to perform several operations at the same time.

Just as with humans, computers typically create the illusion of multitasking by switching tasks rapidly. Computers perform the switching more quickly, of course, and they don’t take 20 minutes to get back on track after an interruption.

Nor does a computer fret about what is not being done. While rotating a polygon and sending text to the printer, it feels no guilt that the mouse has been left unchecked for the past 16 milliseconds. The mouse’s time will come. Being a computer means never having to worry about the Zeigarnik effect.
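
A toy sketch makes the idea concrete. This is not how any real operating system is written, but it shows a single worker producing the appearance of multitasking by switching rapidly between tasks, each one resumed exactly where it left off and forgotten the moment it is complete.

```python
# A toy round-robin scheduler: one processor, many tasks, rapid switching.
def task(name, steps):
    for i in range(1, steps + 1):
        # Yield control after each small slice of work.
        yield f"{name}: step {i} of {steps}"

def round_robin(tasks):
    """Run each task for one step, then move to the next, until all finish."""
    queue = list(tasks)
    while queue:
        current = queue.pop(0)
        try:
            print(next(current))
            queue.append(current)   # not finished: back of the queue
        except StopIteration:
            pass                    # finished: simply forgotten, no Zeigarnik effect

round_robin([task("rotate polygon", 3),
             task("send text to printer", 2),
             task("check the mouse", 4)])
```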

Is there a lesson in this for distractible sacks of flesh like you and me? How can we keep a sense of control despite the incessant guilt of all the things we haven’t finished?

“Whenever you say to someone, ‘I’ll get back to you about that’, you just opened a loop in your brain,” says David Allen. Allen is the author of a cult productivity book called Getting Things Done. “That loop will keep spinning until you put a placeholder in a system you can trust.”

Modern life is always inviting us to open more of those loops. It isn’t necessarily that we have more work to do, but that we have more kinds of work that we ought to be doing at any given moment. Tasks now bleed into each other unforgivingly. Whatever we’re doing, we can’t escape the sense that perhaps we should be doing something else. It’s these overlapping possibilities that take the mental toll.

The principle behind Getting Things Done is simple: close the open loops. The details can become rather involved but the method is straightforward. For every single commitment you’ve made to yourself or to someone else, write down the very next thing you plan to do. Review your lists of next actions frequently enough to give you confidence that you won’t miss anything.

This method has a cult following, and practical experience suggests that many people find it enormously helpful — including me (see below). Only recently, however, did the psychologists E J Masicampo and Roy Baumeister find some academic evidence to explain why people find relief by using David Allen’s system. Masicampo and Baumeister found that you don’t need to complete a task to banish the Zeigarnik effect. Making a specific plan will do just as well. Write down your next action and you quiet that nagging voice at the back of your head. You are outsourcing your anxiety to a piece of paper.

A creative edge?

It is probably a wise idea to leave rapid task switching to the computers. Yet even frenetic flipping between Facebook, email and a document can have some benefits alongside the costs.

The psychologist Shelley Carson and her student Justin Moore recently recruited experimental subjects for a test of rapid task switching. Each subject was given a pair of tasks to do: crack a set of anagrams and read an article from an academic journal. These tasks were presented on a computer screen, and for half of the subjects they were presented sequentially — first solve the anagrams, then read the article. For the other half of the experimental group, the computer switched every two-and-a-half minutes between the anagrams and the journal article, forcing the subjects to change mental gears many times.

Unsurprisingly, task switching slowed the subjects down and scrambled their thinking. They solved fewer anagrams and performed poorly on a test of reading comprehension when forced to refocus every 150 seconds.

But the multitasking treatment did have a benefit. Subjects who had been task switching became more creative. To be specific, their scores on tests of “divergent” thinking improved. Such tests ask subjects to pour out multiple answers to odd questions. They might be asked to think of as many uses as possible for a rolling pin or to list all the consequences they could summon to mind of a world where everyone has three arms. Involuntary multitaskers produced a greater volume and variety of answers, and their answers were more original too.

“It seems that switching back and forth between tasks primed people for creativity,” says Carson, who is an adjunct professor at Harvard. The results of her work with Moore have not yet been published, and one might reasonably object that such tasks are trivial measures of creativity. Carson responds that scores on these laboratory tests of divergent thinking are correlated with substantial creative achievements such as publishing a novel, producing a professional stage show or creating an award-winning piece of visual art. Those who insist that great work can only be achieved through superhuman focus should think long and hard on this discovery.

Carson and colleagues have found an association between significant creative achievement and a trait psychologists term “low latent inhibition”. Latent inhibition is the filter that all mammals have that allows them to tune out apparently irrelevant stimuli. It would be crippling to listen to every conversation in the open-plan office and the hum of the air conditioning, while counting the number of people who walk past the office window. Latent inhibition is what saves us from having to do so. These subconscious filters let us walk through the world without being overwhelmed by all the different stimuli it hurls at us.

And yet people whose filters are a little bit porous have a big creative edge. Think on that, uni-taskers: while you busily try to focus on one thing at a time, the people who struggle to filter out the buzz of the world are being reviewed in The New Yorker.

“You’re letting more information into your cognitive workspace, and that information can be consciously or unconsciously combined,” says Carson. Two other psychologists, Holly White and Priti Shah, found a similar pattern for people suffering from attention deficit hyperactivity disorder (ADHD).

It would be wrong to romanticise potentially disabling conditions such as ADHD. All these studies were conducted on university students, people who had already demonstrated an ability to function well. But their conditions weren’t necessarily trivial — to participate in the White/Shah experiment, students had to have a clinical diagnosis of ADHD, meaning that their condition was troubling enough to prompt them to seek professional help.

It’s surprising to discover that being forced to switch tasks can make us more creative. It may be still more surprising to realise that in an age where we live under the threat of constant distraction, people who are particularly prone to being distracted are flourishing creatively.

Perhaps we shouldn’t be entirely surprised. It’s easier to think outside the box if the box is full of holes. And it’s also easier to think outside the box if you spend a lot of time clambering between different boxes. “The act of switching back and forth can grease the wheels of thought,” says John Kounios, a professor of psychology at Drexel University.

Kounios, who is co-author of The Eureka Factor, suggests that there are at least two other potentially creative mechanisms at play when we switch between tasks. One is that the new task can help us forget bad ideas. When solving a creative problem, it’s easy to become stuck because we think of an incorrect solution but simply can’t stop returning to it. Doing something totally new induces “fixation forgetting”, leaving us free to find the right answer.

Another is “opportunistic assimilation”. This is when the new task prompts us to think of a solution to the old one. The original Eureka moment is an example.

As the story has it, Archimedes was struggling with the task of determining whether a golden wreath truly was made of pure gold without damaging the ornate treasure. The solution was to determine whether the wreath had the same volume as a pure gold ingot with the same mass; this, in turn, could be done by submerging both the wreath and the ingot to see whether they displaced the same volume of water.

This insight, we are told, occurred to Archimedes while he was having a bath and watching the water level rise and fall as he lifted himself in and out. And if solving such a problem while having a bath isn’t multitasking, then what is?

Tim Harford is an FT columnist. His latest book is ‘The Undercover Economist Strikes Back’. Twitter: @TimHarford

Six ways to be a master of multitasking

1. Be mindful

“The ideal situation is to be able to multitask when multitasking is appropriate, and focus when focusing is important,” says psychologist Shelley Carson. Tom Chatfield, author of Live This Book, suggests making two lists, one for activities best done with internet access and one for activities best done offline. Connecting and disconnecting from the internet should be deliberate acts.

2. Write it down

The essence of David Allen’s Getting Things Done is to turn every vague guilty thought into a specific action, to write down all of the actions and to review them regularly. The point, says Allen, is to feel relaxed about what you’re doing — and about what you’ve decided not to do right now — confident that nothing will fall through the cracks.

3. Tame your smartphone

The smartphone is a great servant and a harsh master. Disable needless notifications — most people don’t need to know about incoming tweets and emails. Set up a filing system within your email so that when a message arrives that requires a proper keyboard to answer — ie 50 words or more — you can move that email out of your inbox and place it in a folder where it will be waiting for you when you fire up your computer.

4. Focus in short sprints

The “Pomodoro Technique” — named after a kitchen timer — alternates focusing for 25 minutes and breaking for five minutes, across two-hour sessions. Productivity guru Merlin Mann suggests an “email dash”, where you scan email and deal with urgent matters for a few minutes each hour. Such ideas let you focus intensely while also switching between projects several times a day.

5. Procrastinate to win

If you have several interesting projects on the go, you can procrastinate over one by working on another. (It worked for Charles Darwin.) A change is as good as a rest, they say — and as psychologist John Kounios explains, such task switching can also unlock new ideas.

6. Cross-fertilise

“Creative ideas come to people who are interdisciplinary, working across different organisational units or across many projects,” says author and research psychologist Keith Sawyer. (Appropriately, Sawyer is also a jazz pianist, a former management consultant and a sometime game designer for Atari.) Good ideas often come when your mind makes unexpected connections between different fields.

Tim Harford’s To-Do Lists

David Allen’s Getting Things Done system — or GTD — has reached the status of a religion among some productivity geeks. At its heart, it’s just a fancy to-do list, but it’s more powerful than a regular list because it’s comprehensive, specific and designed to prompt you when you need prompting. Here’s how I make the idea work for me.

Write everything down. I use Google Calendar for appointments and an electronic to-do list called Remember the Milk, plus an ad hoc daily list on paper. The details don’t matter. The principle is never to carry a mental commitment around in your head.

Make the list comprehensive. Mine currently has 151 items on it. (No, I don’t memorise the number. I just counted.)

Keep the list fresh. The system works its anxiety-reducing magic best if you trust your calendar and to-do list to remind you when you need reminding. I spend about 20 minutes once a week reviewing the list to note incoming deadlines and make sure the list is neither missing important commitments nor cluttered with stale projects. Review is vital — the more you trust your list, the more you use it. The more you use it, the more you trust it.

List by context as well as topic. It’s natural to list tasks by topic or project — everything associated with renovating the spare room, for instance, or next year’s annual away-day. I also list them by context (this is easy on an electronic list). Things I can do when on a plane; things I can only do when at the shops; things I need to talk about when I next see my boss.

Be specific about the next action. If you’re just writing down vague reminders, the to-do list will continue to provoke anxiety. Before you write down an ill-formed task, take the 15 seconds required to think about exactly what that task is.
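
For the curious, here is a minimal Python sketch of the “context as well as topic” idea. The tasks, projects and contexts are hypothetical; the point is simply that one trusted master list can be filtered by where you are as well as by what the task is for.

```python
# A minimal sketch of a context-tagged to-do list. All entries are hypothetical.
from dataclasses import dataclass

@dataclass
class NextAction:
    description: str   # the specific next physical action, not a vague reminder
    project: str       # topic: what the task is for
    context: str       # context: where, or with whom, it can be done

todo = [
    NextAction("Measure the spare room for shelving", "spare room", "at home"),
    NextAction("Buy paint samples", "spare room", "at the shops"),
    NextAction("Draft agenda for the away-day", "away-day", "on a plane"),
    NextAction("Agree away-day budget", "away-day", "with my boss"),
]

def actions_for(context):
    """Return only the actions that can be done in the given context."""
    return [t.description for t in todo if t.context == context]

print(actions_for("at the shops"))   # everything you can do while out shopping
print(actions_for("with my boss"))   # everything to raise at the next meeting
```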

Written for and first published at ft.com.
