Tim Harford The Undercover Economist

Highlights

From the geeks who took over poker to the nuclear safety experts who want to prevent the next banking meltdown, these are my favourite long-form articles.

Steve Levitt plays poker with the FT

“I used to play poker a ton and then I quit. It’s too time consuming and toooo boring.” There’s something boyish about the way Steve Levitt drags out the word. But then his inner economist reasserts itself: “What you come to realise about poker over time is that the ratio of luck to skill in the short term is too high to make it feel productive.”

Here’s what you need to know about Levitt. He used to be a rising star in academia, with prestigious positions at Harvard and then Chicago. He picked unusual topics: cheating sumo wrestlers; the link between legal abortion and falling crime. His detective work with data was influential. In 2003, when he was just 35, Levitt won the John Bates Clark medal, often a precursor to the Nobel memorial prize. The journalist Stephen Dubner profiled him in The New York Times Magazine; a book deal followed for the pair, and the result, Freakonomics, sold four million copies. So, I’m playing poker with a data-savvy millionaire genius, a game I understand only in the sense that I’ve written about it. The good news is that Levitt doesn’t play any more. The bad news is that on his last outing, five years ago, he was within one hand of the final table at the World Series of Poker … I am doomed.

We’re at a casino in Mayfair: just me, Levitt, and the dealer, JD. At 47, Levitt has greyed since I first interviewed him nine years ago. But he still looks young and he’s better dressed than he used to be, in a silver-grey jacket and a midnight-blue shirt. JD, who deals for the poker professionals on late-night TV, looks the part in a black suit and waistcoat. Your correspondent has just come from a radio studio and is dressed accordingly. The game is Texas Hold’Em, the modern standard for poker, in which each player constructs a hand from his two concealed cards plus five communal cards on the table. The stakes: £100 each, winner takes all.

Like any good economist, I understand how to play poker in theory but am not sure how to do it in practice. (It takes me a couple of dry runs to figure out whose turn it is to bet.) We have 10,000 chips each and I have a king in my very first hand. The “flop” of three communal cards reveals a second king so, after a couple of small raises, I go in hard with a bet of 2,000 chips. Levitt chuckles, which is unnerving. After pausing, he folds. I get the impression he’s not convinced of my expertise – but I’ve won the first pot, even if it is tiny.

I’m trying to write down all the hands for posterity but that quickly becomes ludicrous. So, too, is the idea of conducting an interview while playing. Concentration is required – from me at least. I guess that Levitt wouldn’t break sweat if he had to play and chat simultaneously. I fold, and Levitt opts to show me his cards. I ask what the thinking is behind showing me that he was bluffing. Both JD and Levitt rush to explain that he had two pairs, and wasn’t bluffing at all. I realise that I have no idea what’s happening. This could be a long afternoon. Or, more likely, a short and expensive one.

“I think the statistics of poker are actually probably overrated,” Levitt says. “Most of poker is based on pretty easy rules of thumb. In a game like this there’s not many hard calculations to do.” He tells me about some research he conducted in Las Vegas, with a range of poker players including 18 winners of World Series events. “Almost all of them continued to use the rules of thumb that you use in regular poker. Even though they were not the right rules to use in the game we ran.” He concludes that experts can quickly be undone. “If you change the rules or the incentives, they tend to do very poorly.”

As our appointed break time approaches, Levitt’s getting into his stride. He’s more aggressive than I am, pushing me out of hands. But when I do stay in until our first showdown, I lose: it’s a pair of fives against Levitt’s pair of sevens. There have been no dramatic moments, yet I am slowly bleeding chips. JD is encouraging. “The play hasn’t been that bad,” he says. Levitt agrees. Still, I am losing.

Then, the very last deal before the hour, I have a decent hand: two pairs. The pot’s not a bad size and Levitt might be drawing to get a flush, so I decide to shut things down: I bet big. Levitt folds, and, as we break, I’m not far behind, with 8,900 chips to Levitt’s 11,100. I observe that since I first met Levitt, he has become a celebrity. He snorts. He’s relieved that nobody ever recognises him because he looks “so generic”.

“The nice thing is, the perks that come with the success of our book are opportunities. People come to me all the time with great opportunities.” Such as what? Money? Secrets? Power? For Levitt, the answer is simple: fun. That could mean anything from a round of golf at Augusta to working with the US Defense Advanced Research Projects Agency to prevent sex trafficking. He designs algorithms to catch credit-card fraud, and for horseracing syndicates. “The horseracing is the most fun thing.”

At one point, Levitt talks about his academic career in the past tense. “I view everything I do as a hobby now,” he says. “I no longer feel like an adult. I feel like what has happened is that I’ve been given so many opportunities that I am somehow back into a very childlike phase that I’m in the candy store and I get to pick and choose whatever I want.”

Immediately after the restart, I’m dealt a 9-7; it’s trash. But Levitt doesn’t raise the stakes so I stay in and see the flop: 4-6-8. Now either a 5 or a 10 will give me a straight; I call Levitt’s diffident bet. The next card is the 5. I do have a straight. It’s a monster hand in two-player poker. With 4, 5, 6 and 8 on the table, Levitt might have a straight too. But I know something he doesn’t: I have a 9, so my straight will be higher than his. Levitt comes in with a solid bet, I raise, and he calls. Then the final card comes: another 9. That’s annoying because it might allow Levitt to split the pot with me. If Levitt has the 10 and 7, he will beat me. But that’s vanishingly unlikely. I go all in. Levitt calls.

If I win, I’ll be 18,000 chips to 2,000 chips ahead. If Levitt wins, game over. And … he has the 10 and 7 of spades. I’ve lost it all. Just like that.

“He called with the nuts,” says JD. JD and Levitt are quick to commiserate. Levitt had no idea that I had him beaten all along. JD admires how I reeled Levitt in. Levitt says that I was incredibly unlucky; that last card, the 9, killed me. And the 10-7 was the only combination in the deck that could have beaten me.
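
For the statistically minded, here is a minimal sketch of the arithmetic behind “vanishingly unlikely”: a simple count of the 10-7 combinations that, as described above, were the only holdings that could beat me, ignoring whatever Levitt’s betting might have revealed.

```python
from math import comb

# From my seat: 52 cards minus my two hole cards (a 9 and a 7) and the
# five board cards (4, 6, 8, 5, 9) leaves 45 unseen cards.
unseen = 45

# I hold one of the 7s, so three 7s remain unseen; all four 10s do too.
beating_combos = 4 * 3                 # holdings with exactly a 10 and a 7
possible_holdings = comb(unseen, 2)    # all two-card holdings: 990

print(f"{beating_combos} of {possible_holdings} holdings beat me "
      f"({beating_combos / possible_holdings:.1%})")
# 12 of 990 holdings beat me (1.2%)
```

Roughly one holding in eighty, in other words, before the betting is taken into account.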

“Now that’s a hand worth writing about,” says JD. I’m feeling pretty good: I’ve lost to a “bad beat” in true poker-pro fashion. But gradually the congratulation fades into criticism. Levitt points out that I should never have gone all-in. It was a small risk but a pointless one, because Levitt would never have called me except in the unlikely event that he had the 10.

“That’s an essential rule of thumb you need to know,” Levitt says. “But that’s not Poker 101. That’s Poker 403. That’s Master’s Level Poker. PhD level Poker.” Maybe. But I’ve been schooled.

 

Written for and first published in the Financial Times on 27 June 2014. (I’m not sure why I didn’t post it at the time, but you might enjoy it now.) Levitt was promoting his book Think Like A Freak.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.

“If you want people to do something, make it easy.” Richard Thaler has Lunch with the FT

The Anthologist doesn’t serve cashew nuts, so I order a bowl of smoked almonds instead. When they arrive, caramelised and brown as barbecue sauce, I ask for them to be put right in front of Richard Thaler. He protests that the waiter isn’t in on the joke.

The readers will be, I assure him. “The educated ones, perhaps,” he concedes.

Those educated readers may know that Professor Thaler is a Nobel laureate economist, but even more famous as the co-author of Nudge. They may even know — from his later book, Misbehaving: The Making of Behavioural Economics — that the 73-year-old is fond of telling an anecdote about a bowl of cashew nuts that sheds light on his approach to economics.

He served the notorious bowl to some guests while dinner was roasting in the oven, then watched everyone compulsively munch on the nuts and gradually spoil their appetites. So Thaler decided to remove the temptation by hiding the cashews in the kitchen. His guests thanked him.

It would be an unremarkable tale, except that such behaviour simply does not fit the rational economic model of human behaviour. Either eat the cashews or don’t eat the cashews, says classical economics, but don’t thank the person who moves them out of easy reach.

Reflecting on such stories helped Thaler create “behavioural economics” — a branch of the discipline that aims at psychological realism. Doing so also helped him with the equally difficult task of persuading other economists to take the behavioural view seriously.

True, it’s just a story about cashews — but if you don’t think short-termism and weak willpower are economically significant in the grand scheme of things, I have a payday loan, a pension drawdown scheme and an auto-renewing gym membership to sell you.

And, sure enough, Thaler’s ideas about the importance of realistic human behaviour have permeated into the economic mainstream, particularly the study of finance. His policy proposals have influenced tax collection, organ donation, energy efficiency drives — and most notably pensions, where participation in workplace schemes dramatically increases when people must explicitly opt out if they are not to be automatically enrolled.

Thaler cultivates a happy-go-lucky persona, a man whose own weaknesses help him understand the weaknesses of others. “You assume that the agents in the economy are as smart as you are,” he once told Robert Barro, one of the pillars of the economics establishment, “and I assume that they’re as dumb as me.” Barro was happy to agree with that.

This sunny July, however, Thaler is a model of self-control. “Notice how many nuts I’ve had so far,” he announces, 20 minutes into our conversation. He gestures for emphasis. “Zero.”

I’m not surprised by that, although I am when Thaler — who struck me as a bon vivant — admits that he has been skipping lunch entirely. He’s in London for a fortnight, teaching a course at the London campus of the University of Chicago Booth School of Business, and after a generous breakfast he says he has neither the need nor the time for lunch.

This may also explain his lack of interest in the restaurant itself. We meet at the business school, and he’s chosen the closest place — announcing “it’s me again” to the waitress who stands outside. I don’t even glimpse the interior of The Anthologist, because she promptly directs us to a pavement table, which has a large masonry wall on one side and on the other — if you squint — a view down Gresham Street to a back corner of the Bank of England. The scooters and trucks roar past a couple of yards away, but Thaler has no trouble making himself heard.

He used to squeeze more out of his annual fortnights in London. “I would spend the morning with the Behavioural Insights Team” — the famous “nudge” unit established by David Cameron and inspired by Thaler’s book with the law professor Cass Sunstein — “then come and teach all afternoon. And then half the nights there would be dinners with friends. And I was comatose at the end of the first week.”

He does admit to having a few dinners planned, though — and to timing his visit to coincide with the Wimbledon Men’s Final. He and his wife, the photographer France Leclerc, had Centre Court tickets. Was he a fan of Djokovic or Federer?

“We support Rafa. Although if he had been playing in a match like that it might have got too much for my wife. She would have been hiding somewhere by the fifth set.”

It was the same on election night: the Trump/Clinton contest reduced his wife to a nervous wreck. “And who were you supporting in that one?” I ask. He gives me a withering look. “At least credit me with sentience.”

President Barack Obama seemed to appreciate behavioural economics and gave Thaler’s co-author, Cass Sunstein, a senior appointment. The Trump administration, observes Thaler, has no interest in behavioural economics. “Look, there’s no demand for expertise of any sort . . . The lack of competence and expertise is like nothing anyone has ever seen.”

Whitehall’s Behavioural Insights Team seems to be displaying more longevity than the White House equivalent. “The key move they made very early on was to extricate themselves from government.”

They’re now a semi-autonomous social enterprise in which the Cabinet Office retains a stake. They made that move, of course, before Cameron’s referendum-induced autodefenestration. “I will say that David Cameron never talked to anybody at the Behavioural Insights Team about the Brexit referendum.”

And what should they have said if he had? “One thing for sure is Remain is a horrible name. It’s weak. Whereas Leave is strong.”

Thaler has written about the referendum before in the Financial Times. He reminds me that Theresa May said, before the referendum: “The reality is that we do not know on what terms we would have access to the single market.”

The waiter interrupts us and presses Thaler to order some wine. He waves him away. “No, I have to teach for the next three hours.”

We return to May, and her explanation that a vote to Leave would be a vote for something undefined and unknowable. Yet as prime minister, she felt that it was quite sufficient to declare that Brexit means Brexit. “Brexit means Brexit — that is one of the dumbest statements that has ever been uttered by a head of state. And I’m aware that there are thousands of tweets one could compare it with. I mean, it’s simultaneously meaningless and wrong.”

The waiter finally manages to get us to order something. Thaler goes for a crispy duck salad. “It’s called salad, you know it has at least the illusion of being healthy”. I’m tempted by the Wagyu beef burger but feel ashamed (social pressure means nothing to homo economicus but is a powerful nudge for human beings), so I order some cod with samphire.

The waiter is keen to upsell. Spritzer? Some halloumi? Thaler and I are baffled by the suggestion of halloumi with cod and duck, although I would have cracked if the waiter had tried to sell us French fries.

We turn to the state of economics, and how it became so wrapped up in the idea of rational agents. Some of those models have a hypnotic pull, I suggest: they’re so ingenious, so difficult, and once you’ve understood how they work you don’t want to abandon them in favour of the bowl-of-cashews guy.

I’m recalling a time I was reading a classic article by Barro — in the emergency room, having dislocated my jaw after a spectacular yawn, which I protest was unconnected to the research paper in question. I don’t get far. “You should change this story!” hoots Thaler. “It should be that you read this paper and, literally, your jaw dropped.”

It’s a reminder that Thaler is a storyteller as well as a sharp theorist. Misbehaving is full of stories. “I decided to just start writing things that would amuse me,” he says — including an account of a huge academic bunfight over the allocation of corner offices at the University of Chicago economics department that cannot fail to provoke Schadenfreude.

“I sent that to my friend Michael Lewis. I said, ‘How much of the book could be like this?’ and he said ‘All’.”

Lewis (whom I interviewed here) isn’t a bad sounding board: he’s the author of Liar’s Poker, Moneyball and The Big Short. He also wrote a biography of Thaler’s friends and colleagues, the psychologists Daniel Kahneman and Amos Tversky. I wouldn’t mind getting him to look over my first drafts.

When it arrives, the cod is pleasant enough, but there isn’t much of it. I’m regretting not ordering the fries. The smoked almonds look tasty, but they’re across the table sitting beside Thaler’s left hand. He hasn’t so much as twitched towards them.

The key message of Nudge was that governments could improve the health and wellbeing of their citizens without infringing on their liberty, simply by more thoughtfully designing their rules, procedures, or even labelling. “If you want people to do something, make it easy.” Put the cashews in the kitchen and the fruit by the cafeteria checkout.

More recently, Thaler has been thinking and writing about what he calls “sludge”. It’s the same procedure in reverse: if you want people not to do something, make it difficult. Reaching for an example, Thaler has a bone to pick with The Times.

The first review of Misbehaving was published there, and Thaler’s editor sent him a link. “And I can’t get past the paywall without subscribing.” But then he notices there’s an offer of a month’s trial subscription at an introductory rate. “But I read further, having written a book about this, and I see that it will be automatically renewed.”

Not only that, it will be renewed at full price, “and that in order to quit, I have to give them 14 days’ notice. So the one month free trial is actually two weeks. And I have to call London [from Chicago] in London business hours, not on a toll free line.”

He pauses and chides me to check that the FT isn’t placing similar sludge in the way of readers who wish to unsubscribe. I assure him that nobody would ever want to unsubscribe, but in any case such knavery would be beneath us. But part of me wonders. “Check your policy at the FT,” he advises. (Later, I check. The FT offers a very similar introductory offer, but I am relieved to discover that the newspaper offers regional phone numbers and you can also cancel online.)

While we’re talking about the consumption of digital goods, I am keen to ask him about how he deals with email, smartphones and social media. We’re in the middle of a colossal set of experiments in behavioural manipulation that would have been hard to imagine when Sunstein and Thaler wrote Nudge over a decade ago. Google, Apple, Facebook and Amazon are constantly running little trials to see what we do in response.

“The world has changed. I remember that while we were writing the book, I got my first iPhone.”

But does it tempt him? Distract him? An iPhone, it seems to me, is a bottomless bowl of digital cashews. But he’s not worried. “I’m not on Facebook at all . . . I am on Twitter and I find much of it to be quite useful. There’s a growing academic economics Twitter that’s fantastic. There’s almost no ad hominem. There are people live-tweeting conferences. Fantastic. There are people who will give a 10-tweet summary of some new paper.”

Thaler stops eating his salad — he’s managed to get most of it down, in between his answers. I’ve long since finished my little piece of fish. The smoked almonds have somehow migrated into the centre of the table, easily within my reach. They are untouched. “Let the record be noted that my consumption so far is zero,” he declares.

Thaler isn’t interested in coffee or dessert, but says he has time if I want something. I order espresso. After it arrives, I take a sip, and then my hand moves instinctively towards the almonds before I catch myself. He laughs. “That was a trembling hand.”

My involuntary slip prompts us to start talking about accidents. “Here’s something I was thinking about this morning,” he says. “All these announcements to mind the gap. Can that conceivably be useful?”

“Mind the gap” is part of the sonic wallpaper of the London Underground, a reminder not to accidentally stumble into the space between Tube train and platform. I wonder if Transport for London has run an experiment. “I’m wondering that too.” Although we both doubt it.

“Now here’s my hypothesis. 99.9 per cent of the people on the Tube have blocked this out long ago. And whatever the percentage of tourists is, half of them have no idea what ‘mind the gap’ means. It could be ‘cheerio’.”

In short, the people who might conceivably benefit from the warning probably don’t understand it. So why not experiment with some different approaches to see if that reduces accidents?

The proposal is typical Thaler. He’s noticed a feature of everyday life that most of us either overlook or take for granted — and he’s turned it into an easily implementable experiment that might actually make the world a better place.

It’s time for him to go and teach. We shake hands, and then he reaches forward, slowly and deliberately, for a smoked almond. He holds it up in front of me as though displaying a fine diamond.

“One!” he says. Then he pops it into his mouth, and ambles off towards the business school. Only when his back is turned do I dare grab one myself.

The Anthologist 58 Gresham St, London EC2

Smoked almonds £3.75

Crispy duck salad £11.50

Cod with samphire £14.95

Sparkling water £3.95

Double espresso £2.90

12.5 per cent service £4.63

Waiter rounds up the bill (a nudge?) £0.32

Total £42.00

 

Written for and first published in the Financial Times on 2 August 2019.

How behavioural economics helped me kick my smartphone addiction

The year 2011 was a big one for me. My son was born. We moved to a new city. I published a book. But something else happened that was in some ways more significant: on February 9 2011, I bought my first smartphone.

It didn’t feel like a milestone in my life at the time. I didn’t note it down in a diary or commit the date to memory. Only finding a copy of the receipt helped pin down the day. Yet I have come to realise that the phone was a very big deal indeed.

Daniel Kahneman, Nobel laureate and author of Thinking, Fast and Slow, distinguishes between the “experiencing self” and the “remembering self”. My remembering self dwells upon the landmark moments such as the new baby. But my experiencing self is all about the phone.

I spend more time interacting with it than I do interacting with my children. I am in the presence of the device more than I am in the presence of my wife, although at least I have my priorities straight as to which I go to bed with.

As Cal Newport puts it in a new book, Digital Minimalism, we didn’t sign up for this. My first email account (1994) received a handful of messages a day, most of them newsletters I subscribed to in order to prevent cobwebs forming in my inbox. Facebook (2004) was a curiosity, less interesting than the latest computer game.

The first iPhone (2007) had no app store and was originally conceived as an iPod that made phone calls — although since “crackberry” had just been named the word of the year by Webster’s New World Dictionary, perhaps we should have seen what was coming.

But we didn’t. The hardware and software of the mobile age have gradually and profoundly entangled themselves in most parts of most people’s lives. If you are anything like me, you pick up your phone much more often than you pick up a knife and fork, and spend far longer reading email than reading books.

Not that I wish to grumble. These tools are enormously powerful. Without them I’d need to hire a secretary, spend hours playing phone tag and give up on working during long journeys by train and plane. Yes, they may occasionally distract me during the school nativity play, but the alternative would have been to miss the play entirely, because the office and the school are 50 miles apart.

I am not entirely happy with the role these technologies play in my life, but neither do I want to relinquish them. I know I’m not alone. For several years now, I’ve been dispensing sporadic advice about email overload both to readers and — if I am honest — to myself.

But late last year, I decided to do something more radical: to deploy everything I knew about economic theory and behavioural science, along with a few hard-won practical discoveries, to rebuild my relationship with the digital world from scratch. This is the story of what I learnt.

The power of the status quo
Inertia is always the first obstacle. Richard Thaler, who won a Nobel Memorial Prize for his contributions to behavioural economics, coined the term “endowment effect” to label the behaviour of an oenophile economist.

The economist had snapped up some Bordeaux wines for $10 a bottle, only to see them appreciate in value to $200 each. The economist wouldn’t have dreamt of paying $200 for a bottle of wine, but didn’t want to sell the wine for $200 either. He was happy to drink it on special occasions instead.

This behaviour is illogical: either the economist should prefer $200 or he should prefer the wine, and which he actually possesses should make no difference. Yet his actions seem perfectly natural, and Thaler and colleagues were able to demonstrate similar behaviour in laboratory experiments.

We like what we have, and these experiments suggest that we have no better reason for liking what we have other than that we have it: the disadvantages of choosing something else often loom larger than the advantages. As a result, we are reluctant to relinquish what we have — including the digital tools we’ve grown accustomed to using.

For this reason, digital sceptics such as Cal Newport and Jaron Lanier suggest that the first step in a reassessment of your digital habits should be a sharp temporary break.

Lanier, a pioneer of virtual reality and the author of Ten Arguments for Deleting Your Social Media Accounts Right Now, advises at least a six-month break from all social media. Newport suggests a briefer but broader ban: not only no social media, but no Netflix, no Google Maps, no smartphones — no digital tools at all for 30 days, apart from whatever is professionally essential.

The point here is not a “detox”. There is no intrinsic benefit to taking a month off from computers any more than one might recommend a brief, invigorating break from smoking or opiates.

The aim is to change the status quo to allow a reassessment. It’s only after you put down the electronic rucksack overflowing with digital possibility and stroll off unencumbered that you’re in a position to make a sensible decision about whether you really want to carry it around all day long.

So, I stripped various apps off my smartphone. The first time I dragged an icon to the “uninstall” bin felt like a big step, but it soon became a giddy pleasure. Off went the news apps, and a blog reader called Feedly that absorbed a huge amount of my time and attention. I already eschew games on my phone, but would have removed them too with gusto.

I spared the Financial Times app (which surely passes Newport’s test of professional necessity), and also retained Google Maps, a podcast player, The Economist’s “Espresso” app, the camera and the weather. Newport would have been more radical but I felt satisfied with my choices.

The big question was: what to do with my social media accounts? Facebook was simply too troublesome to delete, especially since my personal account is connected in opaque ways to a “Tim Harford” page maintained by my publishers. But I never had Facebook on my phone and after briefly unfollowing or muting all my contacts, I had no problem staying logged out.

My Twitter habit is more of a problem. I have 145,000 followers, gently persuaded over 10 years and 40,000 tweets to follow me — that’s about 10 books’ worth, or 20 years of weekly columns. This alone was a reminder of just what an effort Twitter could be; but deleting the account felt like the nuclear option.

So what could I do? Two years ago, I hid the “mentions” column so that I don’t see what other people say about me on Twitter. (Much is friendly, some hurtful and almost all superfluous.) Yet I was still wasting a lot of time noodling around there for no obvious gain. So I deleted the smartphone app and on November 23 2018, I tweeted that I was planning to “get off Twitter for a bit”. By a pleasing coincidence, the last person I interacted with before logging out was the man who named the endowment effect, Richard Thaler.

Time for what?
One of the most important — and misunderstood — ideas in economics is that of opportunity cost. Everything we do is an implicit decision not to do something else. If you decide to go to an evening lecture, you’re also deciding not to be at home reading a bedtime story. If you spend half an hour browsing news websites, that’s half an hour you can’t spend watching football. Those 40,000 tweets cost me something, but I am not sure what and I certainly didn’t ponder the cost while tweeting them.

This neglect of opportunity cost is a very human trait; we often fail to bring to mind the opportunity costs of our choices. One fun if slightly dated illustration of this is the choice between a £1,000 high-end CD player and a slightly less excellent £700 unit.

A difficult choice — until it is phrased as a choice between a top-notch £1,000 CD player and a £700 player plus £300 worth of CDs. At that point, most people clearly prefer the second option. The opportunity cost of the more expensive player could hardly be more obvious, and yet bringing the obvious to our attention changes our decisions.

For this reason I was determined not simply to cut back on my digital activities, but to fill the freed-up time and energy with something else. I focused on three activities. First, more exercise: I replaced Twitter with an exercise app that could run me through some brief, vigorous training sessions.

Second, more fun: I looked up some old friends and invited them to play role-playing games with me every other Sunday evening, rolling dice and pretending to be wizards. (I realise that Dungeons & Dragons isn’t cool. But neither am I, so I don’t care.)

And third, since social media is supposed to be about connecting with far-flung people, and since Christmas was looming, I decided to start writing letters to include with Christmas cards. I couldn’t write properly to everyone but I did manage to write serious letters to nearly 30 old friends, most of whom I’d not seen for a while. I reflected on our long friendships, brought to mind good times long past and, in particular, recalled important moments shared just by the two of us, nobody else. The letters were the antithesis of clicking “Like” on Facebook.

The experiment was beginning to get interesting.

Swiping, fast and slow
As Daniel Kahneman explained in Thinking, Fast and Slow: “When faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.” Rather than asking whether we should buy shares in Amazon, we ask, “Do I like to shop with Amazon?” Instead of pondering the leadership and managerial qualities of a presidential candidate, we ask ourselves whether we’d enjoy having a beer with them.

Tristan Harris, executive director of the Center for Humane Technology, argues that the digital services we use often perform this substitution for us. Imagine, says Harris, a group of friends on a night out, trying to figure out where they can go to keep the conversation flowing. They turn to their phones for a recommendation and find themselves gawping at images of cocktails on Instagram.

The phones, says Harris, replace the question, “Where can we go to keep talking?” with, “What’s a bar with good photos of cocktails?” Phones simply do not suggest options such as going back to someone’s apartment or strolling along the waterfront.

This happens all the time, and we often don’t notice the substitution. Looking for love, we swipe through faces on Tinder rather than searching for local clubs or volunteering activities. Pick up a phone to check the time in the morning, and the question “What’s the time?” is quickly substituted with “What did I miss while sleeping?”

While writing the last paragraph, I was confronted with the perfect example. It started to rain. Wanting to know whether the shower would last, I typed “weather” into Google. I was given an instant answer to my question, but I was also shown a list of weather presenters. Human faces! They are always eye-catching.

An old university acquaintance became a TV weather presenter; I wondered how she was doing. Who wouldn’t? Of course Google substituted an easier question: What does she look like these days? Other photos of weather presenters were also offered and, 30 seconds later, I was looking at pictures of a completely different weather personality, Tomasz Schafernaker, stripped to the waist.

Fifteen years ago, I would have struggled to explain this sequence of events to my wife. But nowadays, no explanation is really needed. We all know how swiftly and easily “When will it stop raining?” can lead to “What do Tomasz Schafernaker’s nipples look like?”

Trying to get some work done with an internet-enabled device is like trying to diet when there’s a mini-fridge full of beer and ice cream sitting on your desk, always within arm’s reach. You can crack open a can and take a swig before you’ve even realised what you’re doing.

Perhaps even worse, the tempting rewards are unpredictable. The psychologist BF Skinner once found himself trying to eke out a supply of food pellets he’d been using to reward rats. To his surprise, he found that “intermittent reinforcement” — sometimes the rats would get a pellet, sometimes not — was more motivating than reliable rewards. Unpredictable treats are highly addictive, just like email, social media or clickbait headlines.

So what to do about this problem? It’s not easy: by definition an intuitive response occurs before we have time to stop and think. The obvious solution is to create some friction. I installed a software plug-in called Strict Workflow on my desktop browser. With one click, it blocks time sinks such as Twitter, YouTube and various clickbait news websites for a period of 25 minutes.

It’s astonishing how many times during those 25 minutes I reflexively check, see the blocking message instead and go back to work. I’m hopeful that a few weeks or months with this blocker may break this fast-twitch habit, but in any case the software works.

Meanwhile, by uninstalling news apps, Twitter and Feedly, I’d made my phone less like a sweet shop. As a testimony to the power of unconscious habit, after uninstalling Feedly, I deleted a few incoming emails, then unthinkingly tried to find it. It took a moment for me to realise I was searching for an app that I’d deleted less than a minute earlier.

It was a reminder that there’s more going on here than poor or short-sighted decision-making: often when we use our phones, we’re not really making any conscious decision at all.

Spillover benefits
Paul Romer won a Nobel Memorial Prize recently for analysing the way different innovations would spill over, enabling other innovations and the process of economic growth itself. Four weeks into my experiment, I was noticing some unexpected spillover benefits myself. The phone was still tempting, but decreasingly so. I took my children to see a Christmas film and, for the first time in years, didn’t feel the urge to check it.

I was getting a real sense of the mutually reinforcing nature of the distraction ecosystem — and how I’d failed to see it clearly when inside it. In November, for example, I would have been scrolling through Feedly looking for interesting material. I told myself I was looking for things to read, but really I was looking for things to tweet about. If pushed for time, I’d sometimes tweet things instead of reading them. This foolishness was evidence of a seriously bad habit.

But having uninstalled Twitter, I found myself less tempted to go and look at my Twitter stats (nothing to see) and also less tempted to flick through the blogs. After all, if I wasn’t going to tweet about them, why not read a book instead? Each new app that I removed from my phone weakened my tendency to pick up the device; often, it made other apps less useful or less appealing. I hadn’t seen this effect coming, but I wasn’t complaining.

Adapting to events
The first of January is usually the date for turning over a new leaf but, with hindsight, beginning my experiment in late November instead was an accidental masterstroke. The run-up to Christmas is a different kind of busy: the volume of email declines, replaced by Christmas cards and shopping lists. It’s a time when we often see people face-to-face instead of on Facebook.

By unplugging various digital services, I was moving with the wind at my back; doing firmly and deliberately what I might anyway have drifted towards.

The experiment was working well. I wasn’t missing Twitter at all. I was spending much less time with the phone. Some old friends were emerging from the woodwork to tell me how much they enjoyed receiving my letter. A few fretted that I was going through some kind of crisis, but overall the letters felt like a vastly better way to contact people than through Facebook.

When I did see friends and family, I found it easier to give them my full attention. Sherry Turkle, author of Reclaiming Conversation, has found that people initially used texts as an add-on to face-to-face conversation, but the texts soon became a substitute: more convenient, more controllable.

The problem with real conversation, one high-school senior told her, was that “it takes place in real time and you can’t control what you’re going to say”.

I sympathise, and we probably all had face-to-face conversations over Christmas that we wish could have been conducted from a thousand miles away. But while real conversation can be tiring, it is also vastly richer and more meaningful than a few dozen bytes of text. The less distracting I found my phone, the more I enjoyed talking to the people in front of me.

At the end of December came a strange and unexpected test: I was awarded an OBE in the New Year honours list. Suddenly the digital hush of the year’s twilight was interrupted by a steady stream of congratulatory messages.

I was out walking with some old friends, catching up on the news of the past few months and chatting about the year ahead. In my pocket, my phone was pinging, and I felt increasingly anxious about letting the messages go unanswered. I snatched moments here and there to type responses, offering slightly embarrassed excuses to my companions.

It’s not an experience I’m likely to repeat, but it taught me a few lessons. First, even friendly digital messages can provoke anxiety. I was fearful of appearing ungrateful by not replying promptly. This was silly. A delay would not have bothered anyone. But I couldn’t help myself. I should have left the phone at home.

Second, it’s easy to reactivate bad habits. After a couple of weeks in which I checked my phone a few times a day instead of several times an hour, the influx of messages pushed me back into the habit of checking my phone like a rat hoping for a food pellet. It took several days more to regain some calm.

Third, and more positively, the investment in spurning social media was paying dividends. I did buckle and log into Facebook for the first time in weeks, not wanting to ignore messages of congratulation. It was completely silent. People had worked out, it seems, that Facebook wasn’t a good way to reach me. I managed to resist logging into Twitter completely.

Still, I did start to wonder whether the new regime would survive contact with the normal working routines of January. I called Jocelyn Glei, author of Unsubscribe and host of the Hurry Slowly podcast. “The notion that you’re going to change all your habits and be done is absurd,” she cheerfully warned me. Fair enough — but then how to sustain the new pattern?

Glei’s advice was to remain vigilant. It’s one thing to check out at Christmas, another to do so in September. It makes sense to stay off Twitter while writing a book; less sense, perhaps, while marketing it. Each new project, she advised, required a quick re-evaluation of where to draw the digital boundaries. The digital reset was going to be a work in progress.

Lessons learnt
The point of the break was to allow a thoughtful assessment of which digital services were worth letting back into my life. So as the new year starts up and emails start to flow freely again, what did I learn?

First, I didn’t miss being plugged into Twitter at all. I’ve been ignoring notifications for years — thus missing some of the benefit and much of the aggravation of the platform — but have still been tweeting away out of some strange combination of duty and inertia.

My new plan is to log in for a few hours on Friday, set up some links to my columns and other projects that may interest some people, and log out again. If I ever see a good reason to use the platform more intensively, I’ll be back.

Second, I enjoyed having a more boring phone. With very little on it now but an easily emptied email inbox and the FT app, I pick it up less often and for less time, and am more likely to do something useful with it when I do check it.

I did reinstall Feedly — which I find essential for my job — but will keep an eye on my usage. With no tweets to send, the app has become more useful. I read for the sake of learning rather than for the sake of tweeting.

Third, the “strict workflow” blocker worked so well in saving me from my fast-twitch impulses that I added my email inbox to the blocked list. I’d had limited success with an email blocker before, but this time was much more successful, perhaps because the blocker was part of a larger plan.

Finally, it was good to focus on the upside of the digital decluttering. Although it was partly an exercise in habit-breaking or self-denial, it was much more useful to think of it as spending time and attention on things that mattered.

Some old friends seemed genuinely touched to receive a real letter; nobody has ever been touched by a Facebook “Like”. I felt in better shape at the beginning of January than at the beginning of December, which is hardly the usual Christmas experience. I walked, talked, ate and drank with old friends. I even battled a few imaginary wizards.

I’ve no desire to give all this up to spend more time with my phone.

 

Written for and first published in the Financial Times on 17 January 2019.

My TED talk on the power of slow-motion multitasking (video, 26 January 2019).

Why big companies squander brilliant ideas

J F C Fuller did not invent the tank.

That distinction should probably fall to L E de Mole, an Australian who approached the British war office in 1912 with a design that was — in the words of historians Kenneth Macksey and John Batchelor — “so convincingly similar to those which finally went into service that one wonders why it was never adopted from the outset”.

But when the British army eventually introduced the tank, it was J F C Fuller, chief staff officer of what would later become the tank corps, who understood what to do with it. At 39 years old, Fuller was a small man with a neatly trimmed moustache and a hairline that had retreated over his crown and was beginning to march down the back of his head. He could have passed for a butler in a costume drama, but his appearance belied an inner radicalism. (He had been friends — and then enemies — with the occultist Aleister Crowley.)

Late in 1917, after almost 400 British tanks had, with modest success, lumbered across the German lines at the battle of Cambrai, Fuller applied his radical streak to the problem of using the tank effectively.

A new and much faster tank, the Medium D, could travel 200 miles at a speed of 20 miles per hour. Fuller proposed that these tanks would attack the German army’s brain — the string of German headquarters miles behind the front line.

A Medium D could roll across the trenches and be on the German command posts in an hour; Fuller’s attack would come from nowhere. Air support would disrupt German road and rail travel.

“Bad news confuses, confusion stimulates panic,” wrote Fuller. His idea was dubbed Plan 1919. By striking suddenly at the German command, Plan 1919 would cause the German army to disintegrate. It would, Fuller declared, be “the winning of the war in a single battle”.

His astonishing idea became “the most famous unused plan in military history”, according to his biographer Brian Holden Reid. But, of course, that is not entirely true. It was used to great effect, in 1940 — by the Germans. J F C Fuller had invented blitzkrieg.

 

The story might be a historical curiosity, had echoes of it not been repeated so frequently since the British army stuffed Fuller’s plans for blitzkrieg into a desk drawer. Organisations from newspapers to oil majors to computing giants have persistently struggled to embrace new technological opportunities, or recognise new technological threats, even when the threats are mortal or the opportunities are golden. Why do some ideas slip out of the grasp of incumbents, then thrive in the hands of upstarts?

In 1970, the photocopying giant Xerox established the Palo Alto Research Center, or Parc. Xerox Parc then developed the world’s first personal computer, with a graphical user interface, windows, icons and a mouse. Bill Gates of Microsoft and Steve Jobs of Apple observed developments at Xerox Parc with great interest. Xerox still makes photocopiers.

In 1975, a 24-year-old engineer named Steven Sasson built the world’s first digital camera — a patched-together device scavenging a lens from a Super-8 camera, magnetic tape in a portable cassette recorder and a TV screen. Sasson worked for Eastman Kodak, where in 1989 he and his colleagues also constructed the first modern digital SLR camera. Kodak built a sizeable line of business in digital photography, and earned a small fortune from the patents. Yet Kodak could not adjust to a world in which every phone contained a camera. The company filed for bankruptcy in 2012.

In 1999, Sony launched the “Memory Stick Walkman”, one of the world’s first digital music players. Sony was armed with the iconic Walkman brand, some of the world’s best consumer electronics engineers and the talent-soaked Sony-BMG music label. The Memory Stick Walkman went nowhere and, two years later, it was eclipsed by a product that transformed the fortunes of a struggling Apple: the iPod.

And in 1918, Britain had the best tanks in the world, a clear vision of how to use them and, in Fuller, one of the best military strategists to emerge from the British army. The German army was forbidden to use tanks at all; it was scarcely more than a collection of officers, a head without a body. Heinz Guderian, later one of the leading Panzer commanders, had not even seen the inside of a tank until he managed to go on manoeuvres with the Swedish army in 1929. Yet by the late 1930s, the British had conceded technical and tactical superiority to Hitler’s new army.

There is an obvious explanation for all of these failures and missed opportunities: people are idiots. “Now we can get back to some real soldiering,” remarked one senior officer to Fuller at the end of the first world war — as though defending Britain in an existential struggle had been a frivolous distraction from tending to noble horses, bright buckles and shiny boots. The army blocked publication of Fuller’s books for several years; they were seen as insubordinate.

When Steve Jobs visited Xerox Parc in 1979, and saw a windows-and-mouse interface for the first time, he couldn’t contain himself, according to Malcolm Gladwell. “Why aren’t you doing anything with this?” he yelled. “This is the greatest thing. This is revolutionary!” If Jobs had been teleported into the British war office in the 1920s, he might well have said the same thing.

Idiocy is a tempting explanation and not without merit. The top man in the British army, Field Marshal Sir Archibald Montgomery-Massingberd, responded to the threat of Nazi militarisation by increasing the amount spent on forage for horses by a factor of 10. Cavalry officers would be provided with a second horse; tank officers would get a horse too. As I say: people are idiots.

But there is something about the “idiot” theory that feels too glib. Consider Xerox Parc: how is it that a corporation could be smart enough to establish such a superb research centre, but then fail to take advantage? Was Sony really run by idiots in the 1990s? Even Montgomery-Massingberd is too casually caricatured. These organisations stumbled for a reason.

Management theorists have a word for it: disruption. “Disruption describes what happens when firms fail because they keep making the kinds of choices that made them successful,” says Joshua Gans, an economist at the Rotman School of Management in Toronto and author of The Disruption Dilemma. Successful organisations stick to their once-triumphant strategies, even as the world changes around them. More horses! More forage!

Why does this happen? Easily the most famous explanation comes from Clayton Christensen of Harvard Business School. Christensen’s 1997 book, The Innovator’s Dilemma, told a compelling story about how new technologies creep up from below: they are flawed or under-developed at first, so do not appeal to existing customers. Holiday snappers do not want to buy digital cameras the size of a shoebox and the price of a car.

However, Christensen explains, these technologies do find customers: people with unusual needs previously unserved by the incumbent players. The new technology gets better and, one day, the incumbent wakes up to discover that an upstart challenger has several years’ head start — and once-loyal customers have jumped ship.

Christensen’s story is an elegant one and fits some cases brilliantly. But there are many examples that do not fit — such as the failure of Xerox to exploit the cutting-edge research at Parc. The mouse and the graphical user interface aren’t a low-end competitor to the photocopier. They’re from a completely different universe.

The iPod didn’t sneak up on Sony from below: the company had seen the potential of a digital music player and moved quickly. Dominant organisations often see the disruptive technologies coming. “Kodak and Blockbuster weren’t caught by surprise,” Joshua Gans tells me. “They knew what the future looked like. They didn’t know later than everybody else, they knew ahead of everybody else.” They knew; but they were unable to put together the right response.

There is also a striking counter-example to Christensen’s idea that disruptive technologies begin as flawed or low-quality options. The iPhone was priced as a premium product with never-before-seen capabilities. It devastated Nokia and Research In Motion — now simply named BlackBerry Ltd in an echo of its once-iconic offering.

Christensen has tried to fit the iPhone into his theories. At first he predicted that incumbents would easily respond, and later he recast it as a disruption in a different industry altogether: “It was intended to disrupt the laptop. And that’s the way it happened.”

The laptop? Tell that to Nokia and BlackBerry.

Anyway, is the tank a low-end competitor to the horse? That’s a stretch. When a theory needs to be made this elastic, it may be time to look for another theory.

 

In 1990, a young economist named Rebecca Henderson published an article with her supervisor Kim Clark that presented a different view of why it is hard to do new things in old organisations. The relevant word is “organisations”.

Dominant organisations are prone to stumble when the new technology requires a new organisational structure. An innovation might be radical but, if it fits the structure that already existed, an incumbent firm has a good chance of carrying its lead from the old world to the new.

Consider, for example, IBM — the giant of mainframe computing. IBM is a survivor. It predates the digital computer by more than three decades. While the performance of computers was being revolutionised by the semiconductor, the integrated circuit, the hard drive and the compiler, IBM maintained a dominant position without breaking stride. This was because the organisational challenge of making and selling a sophisticated mainframe computer to a bank in the 1970s was not greatly different from the organisational challenge of making and selling a mechanical tabulating machine to a bank in the 1930s. Change was constant but manageable.

When computers started to be bought by small businesses, hobbyists and even parents, IBM faced a very different challenge. It did build a successful business in PCs, but was unable to maintain its old dominance, or bring to bear its historical strengths. In fact, the PC division prospered only as long as it was able to snub the rest of the organisation, often partnering with component suppliers and distributors that directly competed with IBM divisions. Internal politics soon asserted itself.

A case study co-authored by Henderson describes the PC division as “smothered by support from the parent company”. Eventually, the IBM PC business was sold off to a Chinese company, Lenovo. What had flummoxed IBM was not the pace of technological change — it had long coped with that — but the fact that its old organisational structures had ceased to be an advantage. Rather than talk of radical or disruptive innovations, Henderson and Clark used the term “architectural innovation”.

“An architectural innovation is an innovation that changes the relationship between the pieces of the problem,” Henderson tells me. “It can be hard to perceive, because many of the pieces remain the same. But they fit together differently.”

An architectural innovation challenges an old organisation because it demands that the organisation remake itself. And who wants to do that?

 

The armies of the late 19th century were organised — as armies had long been — around cavalry and infantry. Cavalry units offered mobility. Infantry offered strength in numbers and the ability to dig in defensively.

Three technologies emerged to define the first world war: artillery, barbed wire and the machine gun. They profoundly shaped the battlefield, but also slipped easily into the existing decision-making structures. Barbed wire and machine guns were used to reinforce infantry positions. Artillery could support either cavalry or infantry from a distance.

Tanks, however, were different. In some ways they were like cavalry, since their strength lay partly in their ability to move quickly. In other ways, they fitted with the infantry, fighting alongside foot soldiers. Or perhaps tanks were a new kind of military capability entirely; this was the view taken by J F C Fuller.

These discussions might seem philosophical — but in the light of Henderson’s ideas, they are intensely practical. “You have to find an organisation that will accept the new bit of technology,” says Andrew Mackay. Mackay runs an advisory firm, Complexas, but was also the commander of British and coalition forces in Helmand, Afghanistan, in 2008. “The organisational question is deeply unsexy, but it’s fundamental.”

A more recent example: is the helicopter basically a kind of aeroplane, and therefore an asset of the Royal Air Force? Or something quite different? Who should be in charge of drones today?

So it was with the tank. If it was to prosper, it needed an organisational home. Someone would have to argue for it, someone would have to pay for it, and someone would have to make it all work, technologically and tactically.

Perhaps the two most obvious places to put the tank were as a standalone unit (since it offered quite new capabilities) or in cavalry regiments (since it was highly mobile and the horse was becoming obsolete). There were traps along either route: the established regiments would resist a standalone structure for tanks, which would compete for resources while the postwar army was shrinking. A new tank regiment would lack both allies and the heft of historical tradition.

After various twists and turns, it was the cavalry that ended up as the organisational home of the tank. And cavalry officers certainly understood a highly mobile strike capability. But they were never really organised around the concept of “mobility”. They were organised around horses. The cavalry officer loved his horse and rode it with skill. His regiment was devoted to feeding and caring for the horses. Would he not resist the usurper tank with every fibre of his being?

 

Xerox Parc developed or assembled most of the features of a user-friendly personal computer, but Xerox itself did not have the organisational architecture to manufacture and market it. Xerox Parc did develop the laser printer, a product that matched the company’s expertise nicely. As Gladwell pointed out, this easily paid for the entire Parc project. The laser printer was like artillery or the machine gun for Xerox: it was an exciting new technology, but it was not a challenge to the organisation’s architecture. The personal computer was like the tank.

The same is true for Sony and the Memory Stick Walkman. As Sony expanded, it produced radios and televisions, video recorders and camcorders, computers, game consoles and even acquired a film and music empire. But to keep this sprawl manageable, Sony’s leaders divided it into silos. As Gillian Tett explains in The Silo Effect (US) (UK), the silo that produced the PlayStation had almost nothing to do with the silo that produced portable CD players. The Memory Stick Walkman was like the tank: it didn’t fit neatly into any category. To be a success, the silos that had been designed to work separately would have to work together. That required an architectural change that Sony tried but failed to achieve.

And for IBM, the shift from a mechanical tabulator to a mainframe digital computer was like the shift from rifles to the machine gun: an awesome step up in firepower, but a modest adjustment to organisational capacity. The personal computer was like the tank: it may have been a logical step forward given the technology available, but it required a different organisational architecture — one that bypassed and threatened the existing power centres of Big Blue. That was the problem.

The politics of organisational change are never easy. In the case of the tank, they were brutal. The British public never wanted to fight another war in mainland Europe, and the tank represented an admission that they might have to. The armed forces were starved of cash in the 1920s and 1930s. In 1932, the British army ordered just nine tanks — delicate four-tonners. The total weight of this entire force was less than a single German Tiger tank. But at a time of declining budgets, who could justify buying more?

It did not help that the tank enthusiasts were often politically naive. Since an architectural innovation requires an organisational overhaul, it is a task that demands skilful diplomacy. Fuller was no diplomat. His essays and books were dotted with spiky critiques of senior military officers. After a while, even the junior officers who admired his wit began to tire of his “needlessly offensive” lecturing.

Despite alienating the army top brass, Fuller was handed a unique opportunity to advance the cause of tanks in the British army: he was offered the command of a new experimental mechanised force in December 1926. There was just one problem: he would have to step away from his single-minded focus on the tank, also taking command of an infantry brigade and a garrison. In short, Fuller would have to get into the organisational headaches that surround any architectural innovation.

He baulked, and wrote to the head of the army demanding that these other duties be carried out by someone else, eventually threatening to resign. The position was awarded to another officer, and Fuller’s career never recovered. His petulance cost him — and the British army — dearly. Architectural innovations can seem too much like hard work, even for those most committed to seeing them succeed.

 

Within academia, Rebecca Henderson’s ideas about architectural innovation are widely cited, and she is one of only two academics at Harvard Business School to hold the rank of university professor. The casual observer of business theories, however, is far more likely to have heard of Clayton Christensen, one of the most famous management gurus on the planet. That may be because Christensen has a single clear theory of how disruption happens — and a solution, too: disrupt yourself before you are disrupted by someone else. That elegance is something we tend to find appealing.

The reality of disruption is less elegant — and harder to solve. Kodak’s position may well have been impossible, no matter what managers had done. If so, the most profitable response would have been to vanish gracefully. “There are multiple points of failure,” says Henderson. “There’s the problem of reorganisation. There’s the question of whether the new idea will be profitable. There are cognitive filters. There is more than one kind of denial. To navigate successfully through, an incumbent organisation has to overcome every one of these obstacles.”

In an email, she added that the innovators — like Fuller — are often difficult people. “The people who bug large organisations to do new things are socially awkward, slightly fanatical and politically often hopelessly naive.” Another point of failure.

The message of Henderson’s work with Kim Clark and others is that when companies or institutions are faced with an organisationally disruptive innovation, there is no simple solution. There may be no solution at all. “I’m sorry it’s not more management guru-ish,” she tells me, laughing. “But anybody who’s really any good at this will tell you that this is hard.”

Almost a decade after resigning from a senior position in the British army, Andrew Mackay agrees: “I’d love to think that there could be a solution, but I don’t think there is.”

 

If I had to bet on the most significant disruption occurring today, I would point to the energy industry.

Chris Goodall is a longtime observer of the renewable energy scene and author of The Switch, a book about breakthroughs in solar panel technology. Goodall points out that solar photovoltaics have enjoyed a dramatic fall in costs, one that shows no sign of abating. Solar PV electricity is now cheaper than electricity generated by gas or coal in the sunny climes where most of the planet’s population live. A few more years and that advantage will seem overwhelming, which is great news for the planet and terrible news for incumbents.

Consider General Electric, which this year disappeared from the Dow Jones Industrial Average. In little more than a year, the old industrial titan’s share price had halved. One of the key culprits for its woes was a precipitous collapse in the demand for large gas turbines, which, in turn, was the result of a fall in the cost of solar power cells that had been relentless, predictable and ignored.

This possibility has been clear to the fossil fuel industry for a while. I know: I used to work in long-range scenario planning for Shell International. Back in 2001, my Shell colleagues and I were discussing thin solar films that could be printed cheaply and applied to windows or hung as wallpaper. We could see the threat of exponentially cheaper solar power — but recall what Joshua Gans said about Kodak and Blockbuster: “They knew what the future looked like. They didn’t know later than everybody else, they knew ahead of everybody else.”

They knew. But they could not act. Because what is an oil company to do in a world of abundant, cheap solar energy? Offshore wind farms play to some oil-company strengths; they know a lot about large metal structures in the North Sea. But solar energy is an architectural innovation. The pieces just don’t fit together like an oil rig or a refinery. As a mass-market, manufactured product it is closer to the skill set of Ikea than Exxon.

The implication of Christensen’s theory is that oil companies should have set up solar subsidiaries decades ago. Many of them did, without much success. The implication of Henderson’s theory is that the oil companies are in big trouble.

Chris Goodall thinks the oil companies should rescue what they can — for example, by developing synthetic hydrocarbons derived from water, atmospheric carbon dioxide and solar energy. Such products would play to oil-company strengths. But for most of their business lines, Goodall says, “The best strategy for the oil companies is almost certainly gradual self-liquidation.”

Or as BP’s chief executive Bob Dudley candidly admitted to the Washington Post recently, “If someone said, ‘Here’s $10bn to invest in renewables,’ we wouldn’t know how to do it.”

 

Despite all the obstacles, the British army continued to develop both tanks and tank tactics throughout the 1920s and 1930s. Yet the internal politics proved toxic. The Germans, meanwhile, watched and learnt. If the British were hamstrung by their inability to reorganise what was, after all, a victorious army in the first world war, the Germans had the opposite problem: they had barely any army, and no status quo to defend. There was no organisational architecture to get in the way. When Adolf Hitler came to power in 1933 and began to expand the German army and invest in tanks, he encountered a German military that had been watching, thinking and experimenting for 14 years.

On his 50th birthday in 1939, Hitler celebrated with a parade of Germany’s newly reconstructed army through Berlin. “For three hours,” wrote one witness, “a completely mechanised and motorised army roared past the Führer.”

This witness was a guest of honour at the celebrations. His name: J F C Fuller. After quitting the British army in frustration, he had thrown his lot in with the British fascists of Oswald Mosley. He wrote vitriolic attacks on Jews. Some observers wondered whether this was simply an attempt to win favour with the world’s tank superpower, Nazi Germany. One of Fuller’s biographers, Mark Urban, doubts this: “The facility with which Fuller made anti-Jewish jibes in letters and books suggests pleasure rather than duty.”

Nobody doubts, however, that Fuller was obsessed by German tanks. After all, there was one army that had really understood and embraced his ideas: that of Adolf Hitler. After the parade, Major General Fuller met Hitler himself in a receiving line at the Chancellery.

The Führer grasped Fuller’s hand and asked, “I hope you were pleased with your children?”

“Your Excellency,” Fuller replied, “they have grown up so quickly that I no longer recognise them.”

This article was first published as a cover story in the FT Magazine on 8/9 September 2018. 

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


“Basic income is about the freedom to say no” – Rutger Bregman goes bouldering

“You have the instinct for it,” says Rutger Bregman, as I haul myself up an indoor climbing wall, nestled under the arches at Vauxhall station in London. “Shit, this is some talent!” he bellows, as I reach the top. I am inwardly delighted, even though I realise the praise is absurd: I have climbed about four metres and it’s a beginner’s route.

Bregman has suggested that we go bouldering together. Bouldering is a variety of rock climbing, done over short distances without safety ropes. Coming from Bregman, it seems a curious choice. The young Dutch historian and author is most famous for advocating a universal basic income — a regular cash grant to every single person, given unconditionally, to support them and provide a minimum standard of living, no matter what might go wrong.

His book, Utopia for Realists (UK) (US), has been a surprise bestseller, finding an audience eager for radical yet plausible policy ideas. Yet this celebrated advocate of unconditional handouts has chosen a sport that is all about self-reliance, and the ultimate departure from the principle of the safety net.

“There is a safety net — look!” says Bregman, pointing at the crash mats. I am not totally convinced. It doesn’t take long before I fall off — a combination of lack of skill and lack of fitness. As I peel myself off the mat, I realise the skin of one elbow has not remained with me.

Bregman’s contention is that a basic income would be the logical and perfectly affordable next step for a human race that has already taken huge leaps forward since before the industrial revolution, when, he writes, “nearly everyone, everywhere was still poor, hungry, dirty, afraid, stupid, sick and ugly”.

Bregman himself looks the picture of health, possibly because, at 29, he’s 15 years younger than me, and possibly because he’s been practising. He climbs twice a week; his T-shirt says Sterk, the name of his local bouldering hall in Utrecht. The word means “strong” in Dutch. My limited experience of rock climbing with my daughters has taught me that the legs take the strain. Bouldering, however, requires more upper-body strength.

“It’s more explosive,” I am told. And within 15 minutes, I’m done: the tendons below my wrist have given up and I am close to doing the same. The first three routes were exhilarating but without a rope, even the short climbs under the arches of VauxWall are starting to feel vertiginous. I’m losing my nerve as well as my strength. Bregman, on the other hand, is just getting started.

“How long is a typical session?” I ask. “Fifteen minutes or an hour or . . . I can’t imagine anyone keeping this up for an hour.”

“Two, two-and-a-half hours, if I have the time. Which I usually don’t,” he says. “If you warm up slowly, not like today, then you are at your peak after 45 minutes, and then you can keep that up for another 45 minutes.”

I spend much of the next hour watching Bregman solve one route after another. Sometimes he is dangling loosely off an overhang, as though resting in an invisible hammock. Sometimes he is moving laterally, his legs as high as his arms in a spiderlike scurry across the wall. Once, he hangs vertically as he works his way from left to right across a whimsical hold: a huge pair of pouting lips in one corner, just below the roof. He took up the sport three years ago. “I didn’t like to exercise at all. It’s so soul-destroying. But this is different.”

Bregman sees soul-destroying activity in much of modern life. Too many people, he says, are doing jobs they dislike or see as pointless, because they have no alternative. A basic income would liberate people: perhaps a minimum of €1,000 a month, given unconditionally as a cash grant, or through the tax system as a negative income tax.

Bregman has branded a basic income as “venture capital for the people”. A good line, I congratulate him. But what does it mean?

“OK, so basic income is all about the freedom to say no. That’s a privilege for the rich right now. With a basic income, you can say no to a job you don’t want to do. You can say no to a city in which you no longer want to live. You can say no to an employer who harasses you at work . . . that’s what real freedom looks like.”

Part of the impetus for a basic income has come from the sense that the robots are coming for our jobs — maybe not today, maybe not tomorrow, but soon. The venture capital firm Y Combinator is funding research into basic income, which seems to be a popular idea in Silicon Valley. But Bregman has no patience for the idea that technological change underpins the case for basic income.

“This is not about AI,” he insists. “You go back to the 1960s, and all the economists, all the philosophers, all the sociologists said we’re going to be working less and less and less and less and boredom is going to be the great challenge of the future. Didn’t happen . . . mostly because we have this ideological obsession with creating new jobs.”

Advocates of basic income have included two rather different Nobel laureates: the civil rights activist Martin Luther King Jr and the free-market evangelist Milton Friedman. The idea draws support from leftwingers who see an opportunity to redistribute and to give workers more bargaining power, and rightwingers who see an opportunity to dismantle paternalistic bureaucracies and empower ordinary people to make their own choices.

Bregman’s own sympathies seem to lie more with the left. At one point I tease him about the fact that he is in London on Valentine’s Day while his wife Maartje (a photographer and collaborator) is not. His response is spat out with a vehemence that might have been for comic effect, and might not: “You know that Valentine’s Day is just a capitalist scam to make you buy stuff you don’t need, to impress people you don’t like, right?”

But like Friedman, Bregman is clearly no fan of paternalistic bureaucracies. “Nowhere you’ll find as much support for something like basic income as [among] people who work for unemployment agencies,” he says. “In Holland I did a couple of lectures for those groups and they just give me a standing ovation when you say that we should abolish their jobs.”

It is the unconditional nature of the cash transfer that particularly appeals to him. With the transfer of money, no strings attached, there is a transfer of dignity, of bargaining power, and of responsibility. People have to make their own choices.

Again, I venture a connection between the basic income idea and bouldering: it’s a solo sport in which individuals need to find their own path, judging risks for themselves?

“If I would make this sport political, what I like about it is that it is competitive, but with yourself. So you’re not competing with anyone else, you’re just trying to do better yourself. And it’s a puzzle, every time it’s different. It’s a very creative sport, I guess.”

Utopia for Realists was itself a slowly assembled puzzle. The early drafts were articles in De Correspondent, an online crowdfunded news website founded by a Dutch pop-philosopher and columnist, Rob Wijnberg. “It’s an anarchist-idealist collective of journalists who don’t follow the news,” Bregman explains.

This may explain why Utopia for Realists is such a curiously enjoyable read. The title sums up Bregman’s belief that evidence-based pragmatism should not rule out provocative, ambitious ideas. The book is lively, well researched and full of unlikely pieces of history, from the Speenhamland system of poor relief, developed in England in 1795, to US President Richard Nixon’s flirtation with the idea of a basic income in 1969. (Bregman studied history rather than economics or politics.) It is also perfectly orthogonal to anything one might read in a newspaper. The book was published in Dutch by De Correspondent, built a following slowly, then was self-published in English.

“I was my own PR employee at that point. I was emailing everyone — no interviews, no reviews. Nothing.” Yet when Bregman emailed me out of the blue with the English translation and a request for my support, I was sufficiently impressed to endorse the book. Steven Pinker also gave it a glowing cover quote. And as Bregman and his colleagues were pondering giving up, the project suddenly took off. While not quite Fifty Shades of Grey, in a short space of time Utopia for Realists went from brave failed experiment to international bestseller, due to be published in 28 languages.

“Ideas always start on the fringe and then they move towards the centre,” he says. “Then I was invited to come to Davos this year. Like, yeah, that’s pretty much it, right? My first lectures about basic income were for anarchists with long hair, and smelly.”

Did he go to Davos? “No, I had to go to a book fair in Colombia.” He did, however, give a talk at TED last year, and seems aware of the irony of advocating the dismantling of an entire class of do-gooders.

“You’re talking for an audience of 1,500 people, many of them involved in kinds of charities. The CEO of Toms, for example, was there.” Toms donates a pair of shoes to a poor family for every pair purchased; Bregman isn’t impressed. “Buy one shoe, give one shoe. That is just a horrible, horrible idea.”

He got a huge round of applause when he proposed scrapping aid bureaucracies and replacing them with direct cash transfers. The rapturous reception struck him as odd. “I was saying we should hand over the salaries of all these paternalistic bureaucrats and give them to the poor, who are the real experts on their own lives. And they were all clapping and laughing, and I was thinking on stage, ‘But I’m talking about you! It’s you!’”

It’s a good talk, I tell him. “I like to prepare for these things. I knew it off by heart three months before I went on stage.”

I press him on the details of the talk. He skips a little too lightly between the idea of replacing international development aid with direct cash transfers to poor people, and the idea of overhauling modern western welfare states to place unconditional cash payments at their heart. The two ideas are cousins, not identical twins, I suggest. Adding a dollar a day, no strings attached, to a non-existent social safety net might be transformative in rural India or Africa. A resident of London is going to want a little more than that before she willingly gives up her housing benefit. Bregman agrees: his focus now is on welfare reform.

Another question mark is over the evidence base for a basic income. Bregman mentions “dozens of experiments” but, arguably, there has never been a completely satisfactory randomised trial of a long-term basic income. (A literature review by the charity GiveDirectly counted six shorter-term randomised trials; policymakers should conduct many more.)

One promising episode — a four-year trial in Manitoba, Canada, in the 1970s — received little attention. When the economist Evelyn Forget managed to get hold of the mothballed archives in 2009, they were on the verge of being discarded. There is a new study in Kenya, funded by GiveDirectly. With 5,000 recipients getting a basic income for 12 years, that trial shows real ambition — but the income in question is just over $20 a month. This is unlikely to tell us much about reforming a European welfare state. Nor is a much-hyped but rather small trial in Finland, which will last just two years and is focused only on those already receiving unemployment benefits.

Other trials have been excitedly announced but have yet to begin, let alone conclude. We are still waiting for a study large and patient enough to tell us much about a basic income in a developed economy. So what are these “dozens of experiments”?

Bregman says that the experiments he has in mind are less evaluating a full basic income scheme, and more exploring the impact of cash transfers in development aid. That is indeed a well-studied area, although not quite the same thing. Those experiments provide encouragement for proponents of a basic income: households tend to put the money to good use, and reap long-term benefits.

By now, we’re talking over a coffee, my enfeebled hands thankfully strong enough to grip a mug. My final question is about one of his other ideas: dramatically liberalising immigration rules.

“Every utopian system is obviously grounded in the injustices of the present,” he says. “What’s the biggest injustice in the world right now? It’s pretty easy to see. It’s borders: apartheid on a global scale.”

But while basic income seems to be having a day in the sun, an end to passport control is hardly in tune with the Trumpian zeitgeist, is it? “Well that’s almost my problem with basic income right now. I get questions during lectures, people say, ‘Is this really a radical idea?’ So I’m like, I should move on. Because utopias are meant to make people angry.”

Fair enough: as in bouldering, so in utopian politics. Once you’ve solved one puzzle, it is time to move on to a new challenge.

 

 
Written for and first published in the Financial Times on 9 March 2018.

My book “Messy: How To Be Creative and Resilient in a Tidy-Minded World” is now available in paperback both in the US and the UK – or through your local bookshop.


Your handy postcard-sized guide to statistics

Statistics on a postcard

 

“The best financial advice for most people would fit on an index card.” That’s the gist of an offhand comment in 2013 by Harold Pollack, a professor at the University of Chicago. Pollack’s bluff was duly called, and he quickly rushed off to find an index card and scribble some bullet points — with respectable results.

When I heard about Pollack’s notion — he elaborated upon it in a 2016 book — I asked myself: would this work for statistics, too? There are some obvious parallels. In each case, common sense goes a surprisingly long way; in each case, dizzying numbers and impenetrable jargon loom; in each case, there are stubborn technical details that matter; and, in each case, there are people with a sharp incentive to lead us astray.

The case for everyday practical numeracy has never been more urgent. Statistical claims fill our newspapers and social media feeds, unfiltered by expert judgment and often designed as a political weapon. We do not necessarily trust the experts — or more precisely, we may have our own distinctive view of who counts as an expert and who does not.

Nor are we passive consumers of statistical propaganda; we are the medium through which the propaganda spreads. We are arbiters of what others will see: what we retweet, like or share online determines whether a claim goes viral or vanishes. If we fall for lies, we become unwittingly complicit in deceiving others.

On the bright side, we have more tools than ever to help weigh up what we see before we share it — if we are able and willing to use them. In the hope that someone might use it, I set out to write my own postcard-sized citizens’ guide to statistics. Here’s what I learnt.

 

Professor Pollack’s index card includes advice such as “Save 20 per cent of your money” and “Pay your credit card in full every month”. The author Michael Pollan offers dietary advice in even pithier form: “Eat Food. Not Too Much. Mostly Plants.” Quite so, but I still want a cheeseburger.

However good the advice Pollack and Pollan offer, it’s not always easy to take. The problem is not necessarily ignorance. Few people think that Coca-Cola is a healthy drink, or believe that credit cards let you borrow cheaply. Yet many of us fall into some form of temptation or other. That is partly because slick marketers are focused on selling us high-fructose corn syrup and easy credit. And it is partly because we are human beings with human frailties.

With this in mind, my statistical postcard begins with advice about emotion rather than logic. When you encounter a new statistical claim, observe your feelings. Yes, it sounds like a line from Star Wars, but we rarely believe anything because we’re compelled to do so by pure deduction or irrefutable evidence. We have feelings about many of the claims we might read — anything from “inequality is rising” to “chocolate prevents dementia”. If we don’t notice and pay attention to those feelings, we’re off to a shaky start. What sort of feelings? Defensiveness. Triumphalism. Righteous anger. Evangelical fervour. Or, when it comes to chocolate and dementia, relief.

It’s fine to have an emotional response to a chart or shocking statistic — but we should not ignore that emotion, or be led astray by it. There are certain claims that we rush to tell the world, others that we use to rally like-minded people, still others we refuse to believe. Our belief or disbelief in these claims is part of who we feel we are.

“We all process information consistent with our tribe,” says Dan Kahan, professor of law and psychology at Yale University.

In 2005, Charles Taber and Milton Lodge, political scientists at Stony Brook University, New York, conducted experiments in which subjects were invited to study arguments around hot political issues. Subjects showed a clear confirmation bias: they sought out testimony from like-minded organisations. For example, subjects who opposed gun control would tend to start by reading the views of the National Rifle Association.

Subjects also showed a disconfirmation bias: when the researchers presented them with certain arguments and invited comment, the subjects would quickly accept arguments with which they agreed, but devote considerable effort to disparage opposing arguments.

Expertise is no defence against this emotional reaction; in fact, Taber and Lodge found that better-informed experimental subjects showed stronger biases. The more they knew, the more cognitive weapons they could aim at their opponents.

“So convenient a thing it is to be a reasonable creature,” commented Benjamin Franklin, “since it enables one to find or make a reason for everything one has a mind to do.”

This is why it’s important to face up to our feelings before we even begin to process a statistical claim. If we don’t at least acknowledge that we may be bringing some emotional baggage along with us, we have little chance of discerning what’s true. As the physicist Richard Feynman once commented, “You must not fool yourself — and you are the easiest person to fool.”

 

The second crucial piece of advice is to understand the claim. That seems obvious. But all too often we leap to disbelieve or believe (and repeat) a claim without pausing to ask whether we really understand what the claim is. To quote Douglas Adams’s philosophical supercomputer, Deep Thought, “Once you know what the question actually is, you’ll know what the answer means.”

For example, take the widely accepted claim that “inequality is rising”. It seems uncontroversial, and urgent. But what does it mean? Racial inequality? Gender inequality? Inequality of opportunity, of consumption, of educational attainment, of wealth? Within countries or across the globe? Even given a narrower claim, “inequality of income before taxes is rising” (and you should be asking yourself, since when?), there are several different ways to measure this.

One approach is to compare the income of people at the 90th percentile and the 10th percentile, but that tells us nothing about the super-rich, nor the ordinary people in the middle. An alternative is to examine the income share of the top 1 per cent — but this approach has the opposite weakness, telling us nothing about how the poorest fare relative to the majority.

There is no single right answer — nor should we assume that all the measures tell a similar story. In fact, there are many true statements that one can make about inequality. It may be worth figuring out which one is being made before retweeting it.
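To make the contrast concrete, here is a minimal sketch in Python, using randomly generated incomes that are purely illustrative rather than real data, of how the two measures answer different questions:

    import numpy as np

    # Made-up incomes drawn from a skewed (log-normal) distribution, for illustration only.
    rng = np.random.default_rng(42)
    incomes = rng.lognormal(mean=10.0, sigma=0.8, size=100_000)

    # Measure 1: the 90/10 ratio. Says nothing about the very top or the middle.
    ratio_90_10 = np.percentile(incomes, 90) / np.percentile(incomes, 10)

    # Measure 2: the share of all income going to the top 1 per cent.
    # Says nothing about how the poorest fare relative to the majority.
    top_1_share = incomes[incomes >= np.percentile(incomes, 99)].sum() / incomes.sum()

    print(f"90/10 ratio: {ratio_90_10:.1f}")
    print(f"Top 1% share of income: {top_1_share:.1%}")

Both numbers are true of the same data; which one gets quoted depends on who is doing the quoting.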

Perhaps it is not surprising that a concept such as inequality turns out to have hidden depths. But the same holds true of more tangible subjects, such as “a nurse”. Are midwives nurses? Health visitors? Should two nurses working half-time count as one nurse? Claims over the staffing of the UK’s National Health Service have turned on such details.

All this can seem like pedantry — or worse, a cynical attempt to muddy the waters and suggest that you can prove anything with statistics. But there is little point in trying to evaluate whether a claim is true if one is unclear what the claim even means.

Imagine a study showing that kids who play violent video games are more likely to be violent in reality. Rebecca Goldin, a mathematician and director of the statistical literacy project STATS, points out that we should ask questions about concepts such as “play”, “violent video games” and “violent in reality”. Is Space Invaders a violent game? It involves shooting things, after all. And are we measuring a response to a questionnaire after 20 minutes’ play in a laboratory, or murderous tendencies in people who play 30 hours a week?

“Many studies won’t measure violence,” says Goldin. “They’ll measure something else such as aggressive behaviour.” Just like “inequality” or “nurse”, these seemingly common sense words hide a lot of wiggle room.

Two particular obstacles to our understanding are worth exploring in a little more detail. One is the question of causation. “Taller children have a higher reading age,” goes the headline. This may summarise the results of a careful study about nutrition and cognition. Or it may simply reflect the obvious point that eight-year-olds read better than four-year-olds — and are taller. Causation is philosophically and technically a knotty business but, for the casual consumer of statistics, the question is not so complicated: just ask whether a causal claim is being made, and whether it might be justified.

Returning to this study about violence and video games, we should ask: is this a causal relationship, tested in experimental conditions? Or is this a broad correlation, perhaps because the kind of thing that leads kids to violence also leads kids to violent video games? Without clarity on this point, we don’t really have anything but an empty headline.

We should never forget, either, that all statistics are a summary of a more complicated truth. For example, what’s happening to wages? With tens of millions of wage packets being paid every month, we can only ever summarise — but which summary? The average wage can be skewed by a small number of fat cats. The median wage tells us about the centre of the distribution but ignores everything else. Or we might look at the median increase in wages, which isn’t the same thing as the increase in the median wage — not at all. In a situation where the lowest and highest wages are increasing while the middle sags, it’s quite possible for the median pay rise to be healthy while median pay falls.
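For readers who like to see the arithmetic, here is a toy sketch in Python with three invented wage packets, chosen only to illustrate the point:

    from statistics import median

    # Three workers, one year apart: the lowest and highest wages rise, the middle sags.
    last_year = [10_000, 20_000, 30_000]
    this_year = [12_000, 19_000, 33_000]

    rises = [new - old for old, new in zip(last_year, this_year)]

    print(median(rises))                           # 2000: the median pay rise looks healthy
    print(median(this_year) - median(last_year))   # -1000: yet median pay has fallen

Same payroll, two honest summaries, opposite impressions.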

Sir Andrew Dilnot, former chair of the UK Statistics Authority, warns that an average can never convey the whole of a complex story. “It’s like trying to see what’s in a room by peering through the keyhole,” he tells me.

In short, “you need to ask yourself what’s being left out,” says Mona Chalabi, data editor for The Guardian US. That applies to the obvious tricks, such as a vertical axis that’s been truncated to make small changes look big. But it also applies to the less obvious stuff — for example, why does a graph comparing the wages of African-Americans with those of white people not also include data on Hispanic or Asian-Americans? There is no shame in leaving something out. No chart, table or tweet can contain everything. But what is missing can matter.

 

Channel the spirit of film noir: get the backstory. Of all the statistical claims in the world, this particular stat fatale appeared in your newspaper or social media feed, dressed to impress. Why? Where did it come from? Why are you seeing it?

Sometimes the answer is little short of a conspiracy: a PR company wanted to sell ice cream, so paid a penny-ante academic to put together the “equation for the perfect summer afternoon”, pushed out a press release on a quiet news day, and won attention in a media environment hungry for clicks. Or a political donor slung a couple of million dollars at an ideologically sympathetic think-tank in the hope of manufacturing some talking points.

Just as often, the answer is innocent but unedifying: publication bias. A study confirming what we already knew — smoking causes cancer — is unlikely to make news. But a study with a surprising result — maybe smoking doesn’t cause cancer after all — is worth a headline. The new study may have been rigorously conducted but is probably wrong: one must weigh it up against decades of contrary evidence.

Publication bias is a big problem in academia. The surprising results get published, the follow-up studies finding no effect tend to appear in lesser journals if they appear at all. It is an even bigger problem in the media — and perhaps bigger yet in social media. Increasingly, we see a statistical claim because people like us thought it was worth a Like on Facebook.

David Spiegelhalter, president of the Royal Statistical Society, proposes what he calls the “Groucho principle”. Groucho Marx famously resigned from a club — if they’d accept him as a member, he reasoned, it couldn’t be much of a club. Spiegelhalter feels the same about many statistical claims that reach the headlines or the social media feed. He explains, “If it’s surprising or counter-intuitive enough to have been drawn to my attention, it is probably wrong.”

 

OK. You’ve noted your own emotions, checked the backstory and understood the claim being made. Now you need to put things in perspective. A few months ago, a horrified citizen asked me on Twitter whether it could be true that in the UK, seven million disposable coffee cups were thrown away every day.

I didn’t have an answer. (A quick internet search reveals countless repetitions of the claim, but no obvious source.) But I did have an alternative question: is that a big number? The population of the UK is 65 million. If one person in 10 used a disposable cup each day, that would do the job.

Many numbers mean little until we can compare them with a more familiar quantity. It is much more informative to know how many coffee cups a typical person discards than to know how many are thrown away by an entire country. And more useful still to know whether the cups are recycled (usually not, alas) or what proportion of the country’s waste stream is disposable coffee cups (not much, is my guess, but I may be wrong).

So we should ask: how big is the number compared with other things I might intuitively understand? How big is it compared with last year, or five years ago, or 30? It’s worth a look at the historical trend, if the data are available.

Finally, beware “statistical significance”. There are various technical objections to the term, some of which are important. But the simplest point to appreciate is that a number can be “statistically significant” while being of no practical importance. Particularly in the age of big data, it’s possible for an effect to clear this technical hurdle of statistical significance while being tiny.

One study was able to demonstrate that unborn children exposed to a heatwave while in the womb went on to earn less as adults. The finding was statistically significant. But the impact was trivial: $30 in lost income per year. Just because a finding is statistically robust does not mean it matters; the word “significance” obscures that.
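A back-of-envelope calculation shows how a trivial effect can clear the significance hurdle once the sample is large enough. The numbers below are my own illustrative assumptions (a $30-a-year gap, a $10,000 spread in individual earnings, five million people per group), not the figures from the heatwave study:

    from math import sqrt, erf

    effect = 30.0         # assumed difference in average annual earnings ($)
    sd = 10_000.0         # assumed spread (standard deviation) of individual earnings ($)
    n = 5_000_000         # assumed number of people in each group

    se = sd * sqrt(2 / n)                        # standard error of the difference in means
    z = effect / se                              # works out at roughly 4.7
    p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # two-sided p-value, far below 0.001

    print(f"z = {z:.1f}, p = {p:.2g}")

The result is “significant” by any conventional threshold, yet $30 a year is still $30 a year.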

 

In an age of computer-generated images of data clouds, some of the most charming data visualisations are hand-drawn doodles by the likes of Mona Chalabi and the cartoonist Randall Munroe. But there is more to these pictures than charm: Chalabi uses the wobble of her pen to remind us that most statistics have a margin of error. A computer plot can confer the illusion of precision on what may be a highly uncertain situation.

“It is better to be vaguely right than exactly wrong,” wrote Carveth Read in Logic (1898), and excessive precision can lead people astray. On the eve of the US presidential election in 2016, the political forecasting website FiveThirtyEight gave Donald Trump a 28.6 per cent chance of winning. In some ways that is impressive, because other forecasting models gave Trump barely any chance at all. But how could anyone justify the decimal point on such a forecast? No wonder many people missed the basic message, which was that Trump had a decent shot. “One in four” would have been a much more intuitive guide to the vagaries of forecasting.

Exaggerated precision has another cost: it makes numbers needlessly cumbersome to remember and to handle. So, embrace imprecision. The budget of the NHS in the UK is about £10bn a month. The national income of the United States is about $20tn a year. One can be much more precise about these things, but carrying the approximate numbers around in my head lets me judge pretty quickly when — say — a £50m spending boost or a $20bn tax cut is noteworthy, or a rounding error.

My favourite rule of thumb is that since there are 65 million people in the UK and people tend to live a bit longer than 65, the size of a typical cohort — everyone retiring or leaving school in a given year — will be nearly a million people. Yes, it’s a rough estimate — but vaguely right is often good enough.

 

Be curious. Curiosity is bad for cats, but good for stats. Curiosity is a cardinal virtue because it encourages us to work a little harder to understand what we are being told, and to enjoy the surprises along the way.

This is partly because almost any statistical statement raises questions: who claims this? Why? What does this number mean? What’s missing? We have to be willing — in the words of UK statistical regulator Ed Humpherson — to “go another click”. If a statistic is worth sharing, isn’t it worth understanding first? The digital age is full of informational snares — but it also makes it easier to look a little deeper before our minds snap shut on an answer.

While curiosity gives us the motivation to ask another question or go another click, it gives us something else, too: a willingness to change our minds. For many of the statistical claims that matter, we have already reached a conclusion. We already know what our tribe of right-thinking people believe about Brexit, gun control, vaccinations, climate change, inequality or nationalisation — and so it is natural to interpret any statistical claim as either a banner to wave, or a threat to avoid.

Curiosity can put us into a better frame of mind to engage with statistical surprises. If we treat them as mysteries to be resolved, we are more likely to spot statistical foul play, but we are also more open-minded when faced with rigorous new evidence.

In research with Asheley Landrum, Katie Carpenter, Laura Helft and Kathleen Hall Jamieson, Dan Kahan has discovered that people who are intrinsically curious about science — they exist across the political spectrum — tend to be less polarised in their response to questions about politically sensitive topics. We need to treat surprises as a mystery rather than a threat.

Isaac Asimov is thought to have said, “The most exciting phrase in science isn’t ‘Eureka!’, but ‘That’s funny…’” The quip points to an important truth: if we treat the open question as more interesting than the neat answer, we’re on the road to becoming wiser.

 

In the end, my postcard has 50-ish words and six commandments. Simple enough, I hope, for someone who is willing to make an honest effort to evaluate — even briefly — the statistical claims that appear in front of them. That willingness, I fear, is what is most in question.

“Hey, Bill, Bill, am I gonna check every statistic?” said Donald Trump, then presidential candidate, when challenged by Bill O’Reilly about a grotesque lie that he had retweeted about African-Americans and homicides. And Trump had a point — sort of. He should, of course, have got someone to check a statistic before lending his megaphone to a false and racist claim. We all know by now that he simply does not care.

But Trump’s excuse will have struck a chord with many, even those who are aghast at his contempt for accuracy (and much else). He recognised that we are all human. We don’t check everything; we can’t. Even if we had all the technical expertise in the world, there is no way that we would have the time.

My aim is more modest. I want to encourage us all to make the effort a little more often: to be open-minded rather than defensive; to ask simple questions about what things mean, where they come from and whether they would matter if they were true. And, above all, to show enough curiosity about the world to want to know the answers to some of these questions — not to win arguments, but because the world is a fascinating place.
Written for and first published in the Financial Times on 8 February 2018.

My recent book is “Fifty Inventions That Shaped The Modern Economy”. Grab yourself a copy in the US or in the UK (slightly different title) or through your local bookshop.


What We Get Wrong About Technology

Blade Runner (1982) is a magnificent film, but there’s something odd about it. The heroine, Rachael, seems to be a beautiful young woman. In reality, she’s a piece of technology — an organic robot designed by the Tyrell Corporation. She has a lifelike mind, imbued with memories extracted from a human being.

So sophisticated is Rachael that she is impossible to distinguish from a human without specialised equipment; she even believes herself to be human. Los Angeles police detective Rick Deckard knows otherwise; in Rachael, Deckard is faced with an artificial intelligence so beguiling, he finds himself falling in love. Yet when he wants to invite Rachael out for a drink, what does he do?

He calls her up from a payphone.

There is something revealing about the contrast between the two technologies — the biotech miracle that is Rachael, and the graffiti-scrawled videophone that Deckard uses to talk to her. It’s not simply that Blade Runner fumbled its futurism by failing to anticipate the smartphone. That’s a forgivable slip, and Blade Runner is hardly the only film to make it. It’s that, when asked to think about how new inventions might shape the future, our imaginations tend to leap to technologies that are sophisticated beyond comprehension. We readily imagine cracking the secrets of artificial life, and downloading and uploading a human mind. Yet when asked to picture how everyday life might look in a society sophisticated enough to build such biological androids, our imaginations falter. Blade Runner audiences found it perfectly plausible that LA would look much the same, beyond the acquisition of some hovercars and a touch of noir.

Now is a perplexing time to be thinking about how technology shapes us. Some economists, disappointed by slow growth in productivity, fear the glory days are behind us. “The economic revolution of 1870 to 1970 was unique in human history,” writes Robert Gordon in The Rise and Fall of American Growth (UK) (US). “The pace of innovation since 1970 has not been as broad or as deep.” Others believe that exponential growth in computing power is about to unlock something special. Economists Erik Brynjolfsson and Andrew McAfee write of “the second machine age” (UK) (US), while the World Economic Forum’s Klaus Schwab favours the term “fourth industrial revolution”, following the upheavals of steam, electricity and computers. This coming revolution will be built on advances in artificial intelligence, robotics, virtual reality, nanotech, biotech, neurotech and a variety of other fields currently exciting venture capitalists.

Forecasting the future of technology has always been an entertaining but fruitless game. Nothing looks more dated than yesterday’s edition of Tomorrow’s World. But history can teach us something useful: not to fixate on the idea of the next big thing, the isolated technological miracle that utterly transforms some part of economic life with barely a ripple elsewhere. Instead, when we try to imagine the future, the past offers two lessons. First, the most influential new technologies are often humble and cheap. Mere affordability often counts for more than the beguiling complexity of an organic robot such as Rachael. Second, new inventions do not appear in isolation, as Rachael and her fellow androids did. Instead, as we struggle to use them to their best advantage, they profoundly reshape the societies around us.

 

 

To understand how humble, cheap inventions have shaped today’s world, picture a Bible — specifically, a Gutenberg Bible from the 1450s. The dense black Latin script, packed into twin blocks, makes every page a thing of beauty to rival the calligraphy of the monks. Except, of course, these pages were printed using the revolutionary movable type printing press. Gutenberg developed durable metal type that could be fixed firmly to print hundreds of copies of a page, then reused to print something entirely different.

The Gutenberg press is almost universally considered to be one of humanity’s defining inventions. It gave us the Reformation, the spread of science, and mass culture from the novel to the newspaper. But it would have been a Rachael — an isolated technological miracle, admirable for its ingenuity but leaving barely a ripple on the wider world — had it not been for a cheap and humble invention that is far more easily and often overlooked: paper.

The printing press didn’t require paper for technical reasons, but for economic ones. Gutenberg also printed a few copies of his Bible on parchment, the animal-skin product that had long served the needs of European scribes. But parchment was expensive — 250 sheep were required for a single book. When hardly anyone could read or write, that had not much mattered. Paper had been invented 1,500 years earlier in China and long used in the Arabic world, where literacy was common. Yet it had taken centuries to spread to Christian Europe, because illiterate Europe no more needed a cheap writing surface than it needed a cheap metal to make crowns and sceptres.

Paper caught on only when a commercial class started to need an everyday writing surface for contracts and accounts. “If 11th-century Europe had little use for paper,” writes Mark Kurlansky in his book Paper (UK) (US), “13th-century Europe was hungry for it.” When paper was embraced in Europe, it became arguably the continent’s earliest heavy industry. Fast-flowing streams (first in Fabriano, Italy, and then across the continent) powered massive drop-hammers that pounded cotton rags, which were being broken down by the ammonia from urine. The paper mills of Europe reeked, as dirty garments were pulped in a bath of human piss.

Paper opened the way for printing. The kind of print run that might justify the expense of a printing press could not be produced on parchment; it would require literally hundreds of thousands of animal skins. It was only when it became possible to mass-produce paper that it made sense to search for a way to mass-produce writing too.

Not that writing is the only use for paper. In his book Stuff Matters (UK) (US), Mark Miodownik points out that we use paper for everything from filtering tea and coffee to decorating our walls. Paper gives us milk cartons, cereal packets and corrugated cardboard boxes. It can be sandpaper, wrapping paper or greaseproof paper. In quilted, perforated form, paper is soft, absorbent and cheap enough to wipe, well, anything you want. Toilet paper seems a long way from the printing revolution. And it is easily overlooked — as we occasionally discover in moments of inconvenience. But many world-changing inventions hide in plain sight in much the same way — too cheap to remark on, even as they quietly reorder everything. We might call this the “toilet-paper principle”.

 

 

It’s not hard to find examples of the toilet-paper principle, once you start to look. The American west was reshaped by the invention of barbed wire, which was marketed by the great salesman John Warne Gates with the slogan: “Lighter than air, stronger than whiskey, cheaper than dust.” Barbed wire enabled settlers to fence in vast areas of prairie cheaply. Joseph Glidden patented it in 1874; just six years later, his factory produced enough wire annually to circle the world 10 times over. Barbed wire’s only advantage over wooden fencing was its cost, but that was quite sufficient to cage the wild west, where the simple invention prevented free-roaming bison and cowboys’ herds of cattle from trampling crops.

Once settlers could assert control over their land, they had the incentive to invest in and improve it. Without barbed wire, the American economy — and the trajectory of 20th-century history — might have looked very different.

There’s a similar story to be told about the global energy system. The Rachael of the energy world — the this-changes-everything invention, the stuff of dreams — is nuclear fusion. If we perfect this mind-bendingly complex technology, we might safely harvest almost limitless energy by fusing variants of hydrogen. It could happen: in France, the ITER fusion reactor is scheduled to be fully operational in 2035 at a cost of at least $20bn. If it works, it will achieve temperatures of 200 million degrees Celsius — yet will still only be an experimental plant, producing less power than a coal-fired plant, and only in 20-minute bursts.

Meanwhile, cheap-and-cheerful solar power is quietly leading a very different energy revolution. Break-even costs of solar electricity have fallen by two-thirds in the past seven years, to levels barely more than those of natural gas plants. But this plunge has been driven less by any great technological breakthrough than by the humble methods familiar to anyone who shops at Ikea: simple modular products that have been manufactured at scale and that snap together quickly on site.

The problem with solar power is that the sun doesn’t always shine. And the solution that’s emerging is another cheap-and-cheerful, familiar technology: the battery. Lithium-ion batteries to store solar energy are becoming increasingly commonplace, and mass-market electric cars would represent a large battery on every driveway. Several giant factories are under construction, most notably a Tesla factory that promises to manufacture 35GWh worth of batteries each year by 2020; that is more than the entire global production of batteries in 2013. Battery prices have fallen as quickly as those of solar panels. Such Ikea-fication is a classic instance of toilet-paper technology: the same old stuff, only cheaper.

Perhaps the most famous instance of the toilet-paper principle is a corrugated steel box, 8ft wide, 8.5ft high and 40ft long. Since the shipping container system was introduced, world merchandise trade (the average of imports and exports) has expanded from about 10 per cent of world GDP in the late 1950s to more than 20 per cent today. We now take for granted that when we visit the shops, we’ll be surrounded by products from all over the globe, from Spanish tomatoes to Australian wine to Korean mobile phones.

“The standard container has all the romance of a tin can,” says historian Marc Levinson in his book The Box (UK) (US). Yet this simple no-frills system for moving things around has been a force for globalisation more powerful than the World Trade Organisation. Before the shipping container was introduced, a typical transatlantic cargo ship might contain 200,000 separate items, comprising many hundreds of different shipments, from food to letters to heavy machinery. Hauling and loading this cornucopia from the dockside, then packing it into the tightest corners of the hull, required skill, strength and bravery from the longshoremen, who would work on a single ship for days at a time. The container shipping system changed all that.

Loading and unloading a container ship is a gigantic ballet of steel cranes, choreographed by the computers that keep the vessel balanced and track each container through a global logistical system. But the fundamental technology that underpins it all could hardly be simpler. The shipping container is a 1950s invention using 1850s know-how. Since it was cheap, it worked. The container was a simple enough idea, and the man who masterminded its rise, Malcom McLean, could scarcely be described as an inventor. He was an entrepreneur who dreamed big, took bold risks, pinched pennies and deftly negotiated with regulators, port authorities and the unions.

McLean’s real achievement was in changing the system that surrounded his box: the way that ships, trucks and ports were designed. It takes a visionary to see how toilet-paper inventions can totally reshape systems; it’s easier for our limited imaginations to slot Rachael-like inventions into existing systems.

If nuclear fusion works, it neatly replaces coal, gas and nuclear fission in our familiar conception of the grid: providers make electricity, and sell it to us. Solar power and batteries are much more challenging. They’re quietly turning electricity companies into something closer to Uber or Airbnb — a platform connecting millions of small-scale providers and consumers of electricity, constantly balancing demand and supply.

 

 

Some technologies are truly revolutionary. They transcend the simple pragmatism of paper or barbed wire to produce effects that would have seemed miraculous to earlier generations. But they take time to reshape the economic systems around us — much more time than you might expect. No discovery fits that description more aptly than electricity, barely comprehended at the beginning of the 19th century but harnessed and commodified by its end. Usable light bulbs had appeared in the late 1870s, courtesy of Thomas Edison and Joseph Swan. In 1881, Edison built electricity-generating stations in New York and London and he began selling electricity as a commodity within a year. The first electric motors were used to drive manufacturing machinery a year after that. Yet the history of electricity in manufacturing poses a puzzle. Poised to take off in the late 1800s, electricity flopped as a source of mechanical power with almost no impact at all on 19th-century manufacturing. By 1900, electric motors were providing less than 5 per cent of mechanical drive power in American factories. Despite the best efforts of Edison, Nikola Tesla and George Westinghouse, manufacturing was still in the age of steam.

Productivity finally surged in US manufacturing only in the 1920s. The reason for the 30-year delay? The new electric motors only worked well when everything else changed too. Steam-powered factories had delivered power through awe-inspiring driveshafts, secondary shafts, belts, belt towers, and thousands of drip-oilers. The early efforts to introduce electricity merely replaced the single huge engine with a similarly large electric motor. Results were disappointing.

As the economic historian Paul David has argued, electricity triumphed only when factories themselves were reconfigured. The driveshafts were replaced by wires, the huge steam engine by dozens of small motors. Factories spread out; there was natural light. Stripped of the driveshafts, the ceilings could be used to support pulleys and cranes. Workers had responsibility for their own machines; they needed better training and better pay. The electric motor was a wonderful invention, once we changed all the everyday details that surrounded it.

David suggested in 1990 that what was true of electric motors might also prove true of computers: that we had yet to see the full economic benefits because we had yet to work out how to reshape our economy to take advantage of them. Later research by economists Erik Brynjolfsson and Lorin Hitt backed up the idea: they found that companies that had merely invested in computers in the 1990s had seen few benefits, but those that had also reorganised — decentralising, outsourcing and customising their products — had seen productivity soar.

Overall, the productivity statistics have yet to display anything like a 1920s breakthrough. In that respect we are still waiting for David’s suggestion to bear fruit. But in other ways, he was proved right almost immediately. People were beginning to figure out new ways to use computers and, in August 1991, Tim Berners-Lee posted his code for the world wide web on the internet so that others could download it and start to tinker. It was another cheap and unassuming technology, and it unlocked the potential of the older and grander internet itself.

 

 

If the fourth industrial revolution delivers on its promise, what lies ahead? Super-intelligent AI, perhaps? Killer robots? Telepathy: Elon Musk’s company, Neuralink, is on the case. Nanobots that live in our blood, zapping tumours? Perhaps, finally, Rachael? The toilet-paper principle suggests that we should be paying as much attention to the cheapest technologies as to the most sophisticated. One candidate: cheap sensors and cheap internet connections. There are multiple sensors in every smartphone, but increasingly they’re everywhere, from jet engines to the soil of Californian almond farms — spotting patterns, fixing problems and eking out efficiency gains. They are also a potential privacy and security nightmare, as we’re dimly starting to realise — from hackable pacemakers to botnets made up of printers to, inevitably, internet-enabled sex toys that leak the most intimate data imaginable. Both the potential and the pitfalls are spectacular.

Whatever the technologies of the future turn out to be, they are likely to demand that, like the factories of the early 20th century, we change to accommodate them. Genuinely revolutionary inventions live up to their name: they change almost everything, and such transformations are by their nature hard to predict. One clarifying idea has been proposed by economists Daron Acemoglu and David Autor. They argue that when we study the impact of technology on the workplace, we should view work in bite-sized chunks — tasks rather than jobs.

For example, running a supermarket involves many tasks — stacking the shelves, collecting money from customers, making change, and deterring shoplifters. Automation has had a big impact on supermarkets, but not because the machines have simply replaced human jobs. Instead, they have replaced tasks done by humans, generally the tasks that could be most easily codified. The barcode turned stocktaking from a human task into one performed by computers. (It is another toilet-paper invention, cheap and ubiquitous, and one that made little difference until retail formats and supply chains were reshaped to take advantage.)

A task-based analysis of labour and automation suggests that jobs themselves aren’t going away any time soon — and that distinctively human skills will be at a premium. When humans and computers work together, says Autor, the computers handle the “routine, codifiable tasks” while amplifying the capabilities of the humans, such as “problem-solving skills, adaptability and creativity”. But there are also signs that new technologies have polarised the labour market, with more demand for both the high-end skills and the low-end ones, and a hollowing out in the middle. If human skills are now so valuable, that low-end growth seems like a puzzle — but the truth is that many distinctively human skills are not at the high end. While Jane Austen, Albert Einstein and Pablo Picasso exhibited human skills, so does the hotel maid who scrubs the toilet and changes the bed. We’re human by virtue not just of our brains, but our sharp eyes and clever fingers.

So one invention I’m keen to observe is the “Jennifer unit”, made by a company called Lucas Systems. Jennifer and the many other programmes like her are examples of “voice-directed applications” — just software and a simple, inexpensive earpiece. Such systems have become part of life for warehouse workers: a voice in their ear or instructions on a screen tell them where to go and what to do, down to the fine details. If 13 items must be collected from a shelf, Jennifer will tell the human worker to pick five, then five, then three. “Pick 13” would lead to mistakes. That makes sense. Computers are good at counting and scheduling. Humans are good at picking things off shelves. Why not unbundle the task and give the conscious thinking to the computer, and the mindless grabbing to the human? Like paper, Jennifer is inexpensive and easy to overlook. And like the electric dynamo, the technologies in Jennifer are having an impact because they enable managers to reshape the workplace. Science fiction has taught us to fear superhuman robots such as Rachael; perhaps we should be more afraid of Jennifer.
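
For the curious, that batching logic is simple enough to sketch in a few lines of Python. This is an illustration of the idea only, with a made-up function name and an assumed batch size of five; it is not Lucas Systems’ actual software.

    # Illustrative sketch only: split a pick quantity into small batches,
    # mirroring the "pick five, then five, then three" style of instruction.
    def chunk_pick(quantity, batch_size=5):
        """Split a pick quantity into batches small enough to count at a glance."""
        batches = []
        while quantity > 0:
            take = min(batch_size, quantity)
            batches.append(take)
            quantity -= take
        return batches

    print(chunk_pick(13))  # prints [5, 5, 3]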

 

 
Written for and first published in the FT Magazine on 8 July 2017.

My new book is “Fifty Things That Made The Modern Economy” – now out! Grab yourself a copy in the US (slightly different title) or in the UK or through your local bookshop.


29th of August, 2017
Highlights

The Problem With Facts

 

1.

Just before Christmas 1953, the bosses of America’s leading tobacco companies met John Hill, the founder and chief executive of one of America’s leading public relations firms, Hill & Knowlton. Despite the impressive surroundings — the Plaza Hotel, overlooking Central Park in New York — the mood was one of crisis.

Scientists were publishing solid evidence of a link between smoking and cancer. From the viewpoint of Big Tobacco, more worrying was that the world’s most read publication, The Reader’s Digest, had already reported on this evidence in a 1952 article, “Cancer by the Carton”. The journalist Alistair Cooke, writing in 1954, predicted that the publication of the next big scientific study into smoking and cancer might finish off the industry.

It did not. PR guru John Hill had a plan — and the plan, with hindsight, proved tremendously effective. Despite the fact that its product was addictive and deadly, the tobacco industry was able, for decades, to fend off regulation, litigation and the idea in the minds of many smokers that its products were fatal.

So successful was Big Tobacco in postponing that day of reckoning that their tactics have been widely imitated ever since. They have also inspired a thriving corner of academia exploring how the trick was achieved. In 1995, Robert Proctor, a historian at Stanford University who has studied the tobacco case closely, coined the word “agnotology”. This is the study of how ignorance is deliberately produced; the entire field was started by Proctor’s observation of the tobacco industry. The facts about smoking — indisputable facts, from unquestionable sources — did not carry the day. The indisputable facts were disputed. The unquestionable sources were questioned. Facts, it turns out, are important, but facts are not enough to win this kind of argument.

 

2.

Agnotology has never been more important. “We live in a golden age of ignorance,” says Proctor today. “And Trump and Brexit are part of that.”

In the UK’s EU referendum, the Leave side pushed the false claim that the UK sent £350m a week to the EU. It is hard to think of a previous example in modern western politics of a campaign leading with a transparent untruth, maintaining it when refuted by independent experts, and going on to triumph anyway. That performance was soon to be eclipsed by Donald Trump, who offered wave upon shameless wave of demonstrable falsehood, only to be rewarded with the presidency. The Oxford Dictionaries declared “post-truth” the word of 2016. Facts just didn’t seem to matter any more.

The instinctive reaction from those of us who still care about the truth — journalists, academics and many ordinary citizens — has been to double down on the facts. Fact-checking organisations, such as Full Fact in the UK and PolitiFact in the US, evaluate prominent claims by politicians and journalists; they judge what’s true rather than faithfully reporting both sides as a traditional journalist would. (I should confess a personal bias: I have served as a fact checker myself on the BBC radio programme More or Less, and I often rely on fact-checking websites.) Public, transparent fact checking has become such a feature of today’s political reporting that it’s easy to forget it’s barely a decade old.

Mainstream journalists, too, are starting to embrace the idea that lies or errors should be prominently identified. Consider a story on the NPR website about Donald Trump’s speech to the CIA in January: “He falsely denied that he had ever criticised the agency, falsely inflated the crowd size at his inauguration on Friday . . .” It’s a bracing departure from the norms of American journalism, but then President Trump has been a bracing departure from the norms of American politics.

Facebook has also drafted in the fact checkers, announcing a crackdown on the “fake news” stories that had become prominent on the network after the election. Facebook now allows users to report hoaxes. The site will send questionable headlines to independent fact checkers, flag discredited stories as “disputed”, and perhaps downgrade them in the algorithm that decides what each user sees when visiting the site.

We need some agreement about facts or the situation is hopeless. And yet: will this sudden focus on facts actually lead to a more informed electorate, better decisions, a renewed respect for the truth? The history of tobacco suggests not. The link between cigarettes and cancer was supported by the world’s leading medical scientists and, in 1964, the US surgeon general himself. The story was covered by well-trained journalists committed to the values of objectivity. Yet the tobacco lobbyists ran rings round them.

In the 1950s and 1960s, journalists had an excuse for their stumbles: the tobacco industry’s tactics were clever, complex and new. First, the industry appeared to engage, promising high-quality research into the issue. The public were assured that the best people were on the case. The second stage was to complicate the question and sow doubt: lung cancer might have any number of causes, after all. And wasn’t lung cancer, not cigarettes, what really mattered? Stage three was to undermine serious research and expertise. Autopsy reports would be dismissed as anecdotal, epidemiological work as merely statistical, and animal studies as irrelevant. Finally came normalisation: the industry would point out that the tobacco-cancer story was stale news. Couldn’t journalists find something new and interesting to say?

Such tactics are now well documented — and researchers have carefully examined the psychological tendencies they exploited. So we should be able to spot their re-emergence on the political battlefield.

“It’s as if the president’s team were using the tobacco industry’s playbook,” says Jon Christensen, a journalist turned professor at the University of California, Los Angeles, who wrote a notable study in 2008 of the way the tobacco industry tugged on the strings of journalistic tradition.

One infamous internal memo from the Brown & Williamson tobacco company, typed up in the summer of 1969, sets out the thinking very clearly: “Doubt is our product.” Why? Because doubt “is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing a controversy.” Big Tobacco’s mantra: keep the controversy alive.

Doubt is usually not hard to produce, and facts alone aren’t enough to dispel it. We should have learnt this lesson already; now we’re going to have to learn it all over again.

 

3.

Tempting as it is to fight lies with facts, there are three problems with that strategy. The first is that a simple untruth can beat off a complicated set of facts simply by being easier to understand and remember. When doubt prevails, people will often end up believing whatever sticks in the mind. In 1994, psychologists Hollyn Johnson and Colleen Seifert conducted an experiment in which people read an account of an explosive warehouse fire. The account mentioned petrol cans and paint but later explained that petrol and paint hadn’t been present at the scene after all. The experimental subjects, tested on their comprehension, recalled that the petrol and paint hadn’t actually been there. But when asked to explain facts about the fire (“why so much smoke?”), they would mention the paint. Lacking an alternative explanation, they fell back on a claim they had already acknowledged was wrong. Once we’ve heard an untrue claim, we can’t simply unhear it.

This should warn us not to let lie-and-rebuttal take over the news cycle. Several studies have shown that repeating a false claim, even in the context of debunking that claim, can make it stick. The myth-busting seems to work but then our memories fade and we remember only the myth. The myth, after all, was the thing that kept being repeated. In trying to dispel the falsehood, the endless rebuttals simply make the enchantment stronger.

With this in mind, consider the Leave campaign’s infamous bus-mounted claim: “We send the EU £350m a week.” Simple. Memorable. False. But how to rebut it? A typical effort from The Guardian newspaper was headlined, “Why Vote Leave’s £350m weekly EU cost claim is wrong”, repeating the claim before devoting hundreds of words to gnarly details and the dictionary definition of the word “send”. This sort of fact-checking article is invaluable to a fellow journalist who needs the issues set out and hyperlinked. But for an ordinary voter, the likely message would be: “You can’t trust politicians but we do seem to send a lot of money to the EU.” Doubt suited the Leave campaign just fine.

This is an inbuilt vulnerability of the fact-checking trade. Fact checkers are right to be particular, to cover all the details and to show their working. But that’s why the fact-checking job can only be a part of ensuring that the truth is heard.

Andrew Lilico, a thoughtful proponent of leaving the EU, told me during the campaign that he wished the bus had displayed a more defensible figure, such as £240m. But Lilico now acknowledges that the false claim was the more effective one. “In cynical campaigning terms, the use of the £350m figure was perfect,” he says. “It created a trap that Remain campaigners kept insisting on jumping into again and again and again.”

Quite so. But not just Remain campaigners — fact-checking journalists too, myself included. The false claim was vastly more powerful than a true one would have been, not because it was bigger, but because everybody kept talking about it.

Proctor, the tobacco industry historian turned agnotologist, warns of a similar effect in the US: “Fact checkers can become Trump’s poodle, running around like an errand boy checking someone else’s facts. If all your time is [spent] checking someone else’s facts, then what are you doing?”

 

4.

There’s a second reason why facts don’t seem to have the traction that one might hope. Facts can be boring. The world is full of things to pay attention to, from reality TV to your argumentative children, from a friend’s Instagram to a tax bill. Why bother with anything so tedious as facts?

Last year, three researchers — Seth Flaxman, Sharad Goel and Justin Rao — published a study of how people read news online. The study was, on the face of it, an inquiry into the polarisation of news sources. The researchers began with data from 1.2 million internet users but ended up examining only 50,000. Why? Because only 4 per cent of the sample read enough serious news to be worth including in such a study. (The hurdle was 10 articles and two opinion pieces over three months.) Many commentators worry that we’re segregating ourselves in ideological bubbles, exposed only to the views of those who think the same way we do. There’s something in that concern. But for 96 per cent of these web surfers the bubble that mattered wasn’t liberal or conservative, it was: “Don’t bother with the news.”

In the war of ideas, boredom and distraction are powerful weapons. A recent study of Chinese propaganda examined the tactics of the paid pro-government hacks (known as the “50 cent army”, after the amount contributors were alleged to be paid per post) who put comments on social media. The researchers, Gary King, Jennifer Pan and Margaret Roberts, conclude: “Almost none of the Chinese government’s 50c party posts engage in debate or argument of any kind . . . they seem to avoid controversial issues entirely . . . the strategic objective of the regime is to distract and redirect public attention.”

Trump, a reality TV star, knows the value of an entertaining distraction: simply pick a fight with Megyn Kelly, The New York Times or even Arnold Schwarzenegger. Isn’t that more eye-catching than a discussion of healthcare reform?

The tobacco industry also understood this point, although it took a more highbrow approach to generating distractions. “Do you know about Stanley Prusiner?” asks Proctor.

Prusiner is a neurologist. In 1972, he was a young researcher who’d just encountered a patient suffering from Creutzfeldt-Jakob disease. It was a dreadful degenerative condition then thought to be caused by a slow-acting virus. After many years of study, Prusiner concluded that the disease was caused instead, unprecedentedly, by a kind of rogue protein. The idea seemed absurd to most experts at the time, and Prusiner’s career began to founder. Promotions and research grants dried up. But Prusiner received a source of private-sector funding that enabled him to continue his work. He was eventually vindicated in the most spectacular way possible: with a Nobel Prize in Medicine in 1997. In his autobiographical essay on the Nobel Prize website, Prusiner thanked his private-sector benefactors for their “crucial” support: RJ Reynolds, maker of Camel cigarettes.

The tobacco industry was a generous source of research funds, and Prusiner wasn’t the only scientist to receive both tobacco funding and a Nobel Prize. Proctor reckons at least 10 Nobel laureates are in that position. To be clear, this wasn’t an attempt at bribery. In Proctor’s view, it was far more subtle. “The tobacco industry was the leading funder of research into genetics, viruses, immunology, air pollution,” says Proctor. Almost anything, in short, except tobacco. “It was a massive ‘distraction research’ project.” The funding helped position Big Tobacco as a public-spirited industry but Proctor considers its main purpose was to produce interesting new speculative science. Creutzfeldt-Jakob disease may be rare, but it was exciting news. Smoking-related diseases such as lung cancer and heart disease aren’t news at all.

The endgame of these distractions is that matters of vital importance become too boring to bother reporting. Proctor describes it as “the opposite of terrorism: trivialism”. Terrorism provokes a huge media reaction; smoking does not. Yet, according to the US Centers for Disease Control, smoking kills 480,000 Americans a year. This is more than 50 deaths an hour. Terrorists have rarely managed to kill that many Americans in an entire year. But the terrorists succeed in grabbing the headlines; the trivialists succeed in avoiding them.

Tobacco industry lobbyists became well-practised at persuading the media to withhold or downplay stories about the dangers of cigarettes. “That record is scratched,” they’d say. Hadn’t we heard such things before?

Experienced tobacco watchers now worry that Trump may achieve the same effect. In the end, will people simply start to yawn at the spectacle? Jon Christensen, at UCLA, says: “I think it’s the most frightening prospect.”

On the other hand, says Christensen, there is one saving grace. It is almost impossible for the US president not to be news. The tobacco lobby, like the Chinese government, proved highly adept at pointing the spotlight elsewhere. There are reasons to believe that will be difficult for Trump.

 

5.

There’s a final problem with trying to persuade people by giving them facts: the truth can feel threatening, and threatening people tends to backfire. “People respond in the opposite direction,” says Jason Reifler, a political scientist at Exeter University. This “backfire effect” is now the focus of several researchers, including Reifler and his colleague Brendan Nyhan of Dartmouth.

In one study, conducted in 2011, Nyhan, Reifler and others ran a randomised trial in which parents with young children were either shown or not shown scientific information debunking an imaginary but widely feared link between vaccines and autism. At first glance, the facts were persuasive: parents who saw the myth-busting science were less likely to believe that the vaccine could cause autism. But parents who were already wary of vaccines were actually less likely to say they’d vaccinate their children after being exposed to the facts — despite apparently believing those facts.

What’s going on? “People accept the corrective information but then resist in other ways,” says Reifler. A person who feels anxious about vaccination will subconsciously push back by summoning to mind all the other reasons why they feel vaccination is a bad idea. The fear of autism might recede, but all the other fears are stronger than before.

It’s easy to see how this might play out in a political campaign. Say you’re worried that the UK will soon be swamped by Turkish immigrants because a Brexit campaigner has told you (falsely) that Turkey will soon join the EU. A fact checker can explain that no Turkish entry is likely in the foreseeable future. Reifler’s research suggests that you’ll accept the narrow fact that Turkey is not about to join the EU. But you’ll also summon to mind all sorts of other anxieties: immigration, loss of control, the proximity of Turkey to Syria’s war and to Isis, terrorism and so on. The original lie has been disproved, yet its seductive magic lingers.

The problem here is that while we like to think of ourselves as rational beings, our rationality didn’t just evolve to solve practical problems, such as building an elephant trap, but to navigate social situations. We need to keep others on our side. Practical reasoning is often less about figuring out what’s true, and more about staying in the right tribe.

An early indicator of how tribal our logic can be was a study conducted in 1954 by Albert Hastorf, a psychologist at Dartmouth, and Hadley Cantril, his counterpart at Princeton. Hastorf and Cantril screened footage of a game of American football between the two college teams. It had been a rough game. One quarterback had suffered a broken leg. Hastorf and Cantril asked their students to tot up the fouls and assess their severity. The Dartmouth students tended to overlook Dartmouth fouls but were quick to pick up on the sins of the Princeton players. The Princeton students had the opposite inclination. They concluded that, despite being shown the same footage, the Dartmouth and Princeton students didn’t really see the same events. Each student had his own perception, closely shaped by his tribal loyalties. The title of the research paper was “They Saw a Game”.

A more recent study revisited the same idea in the context of political tribes. The researchers showed students footage of a demonstration and spun a yarn about what it was about. Some students were told it was a protest by gay-rights protesters outside an army recruitment office against the military’s (then) policy of “don’t ask, don’t tell”. Others were told that it was an anti-abortion protest in front of an abortion clinic.

Despite looking at exactly the same footage, the experimental subjects had sharply different views of what was happening — views that were shaped by their political loyalties. Liberal students were relaxed about the behaviour of people they thought were gay-rights protesters but worried about what the pro-life protesters were doing; conservative students took the opposite view. As with “They Saw a Game”, this disagreement was not about the general principles but about specifics: did the protesters scream at bystanders? Did they block access to the building? We see what we want to see — and we reject the facts that threaten our sense of who we are.

When we reach the conclusion that we want to reach, we’re engaging in “motivated reasoning”. Motivated reasoning was a powerful ally of the tobacco industry. If you’re addicted to a product, and many scientists tell you it’s deadly, but the tobacco lobby tells you that more research is needed, what would you like to believe? Christensen’s study of the tobacco public relations campaign revealed that the industry often got a sympathetic hearing in the press because many journalists were smokers. These journalists desperately wanted to believe their habit was benign, making them ideal messengers for the industry.

Even in a debate polluted by motivated reasoning, one might expect that facts will help. Not necessarily: when we hear facts that challenge us, we selectively amplify what suits us, ignore what does not, and reinterpret whatever we can. More facts mean more grist to the motivated reasoning mill. The French dramatist Molière once wrote: “A learned fool is more foolish than an ignorant one.” Modern social science agrees.

On a politically charged issue such as climate change, it feels as though providing accurate information about the science should bring people together. The opposite is true, says Dan Kahan, a law and psychology professor at Yale and one of the researchers on the study into perceptions of a political protest. Kahan writes: “Groups with opposing values often become more polarised, not less, when exposed to scientifically sound information.”

When people are seeking the truth, facts help. But when people are selectively reasoning about their political identity, the facts can backfire.

 

6.

All this adds up to a depressing picture for those of us who aren’t ready to live in a post-truth world. Facts, it seems, are toothless. Trying to refute a bold, memorable lie with a fiddly set of facts can often serve to reinforce the myth. Important truths are often stale and dull, and it is easy to manufacture new, more engaging claims. And giving people more facts can backfire, as those facts provoke a defensive reaction in someone who badly wants to stick to their existing world view. “This is dark stuff,” says Reifler. “We’re in a pretty scary and dark time.”

Is there an answer? Perhaps there is.

We know that scientific literacy can actually widen the gap between different political tribes on issues such as climate change — that is, well-informed liberals and well-informed conservatives are further apart in their views than liberals and conservatives who know little about the science. But a new research paper from Dan Kahan, Asheley Landrum, Katie Carpenter, Laura Helft and Kathleen Hall Jamieson explores the role not of scientific literacy but of scientific curiosity.

The researchers measured scientific curiosity by asking their experimental subjects a variety of questions about their hobbies and interests. The subjects were offered a choice of websites to read for a comprehension test. Some went for ESPN, some for Yahoo Finance, but those who chose Science were demonstrating scientific curiosity. Scientifically curious people were also happier to watch science documentaries than celebrity gossip TV shows. As one might expect, there’s a correlation between scientific knowledge and scientific curiosity, but the two measures are distinct.

What Kahan and his colleagues found, to their surprise, was that while politically motivated reasoning trumps scientific knowledge, “politically motivated reasoning . . . appears to be negated by science curiosity”. Scientifically literate people, remember, were more likely to be polarised in their answers to politically charged scientific questions. But scientifically curious people were not. Curiosity brought people together in a way that mere facts did not. The researchers muse that curious people have an extra reason to seek out the facts: “To experience the pleasure of contemplating surprising insights into how the world works.”

So how can we encourage curiosity? It’s hard to make banking reform or the reversibility of Article 50 more engaging than football, Game of Thrones or baking cakes. But it does seem to be what’s called for. “We need to bring people into the story, into the human narratives of science, to show people how science works,” says Christensen.

We journalists and policy wonks can’t force anyone to pay attention to the facts. We have to find a way to make people want to seek them out. Curiosity is the seed from which sensible democratic decisions can grow. It seems to be one of the only cures for politically motivated reasoning but it’s also, into the bargain, the cure for a society where most people just don’t pay attention to the news because they find it boring or confusing.

What we need is a Carl Sagan or David Attenborough of social science — somebody who can create a sense of wonder and fascination not just at the structure of the solar system or struggles of life in a tropical rainforest, but at the workings of our own civilisation: health, migration, finance, education and diplomacy.

One candidate would have been Swedish doctor and statistician Hans Rosling, who died in February. He reached an astonishingly wide audience with what were, at their heart, simply presentations of official data from the likes of the World Bank.

He characterised his task as telling people the facts — “to describe the world”. But the facts need a champion. Facts rarely stand up for themselves — they need someone to make us care about them, to make us curious. That’s what Rosling did. And faced with the apocalyptic possibility of a world where the facts don’t matter, that is the example we must follow.

Written for and first published in the Financial Times.

My book “Messy” is available online in the US and UK or in good bookshops everywhere.


17th of March, 2017
Highlights

What makes the perfect office?

In 1923, the father of modern architecture, Le Corbusier, was commissioned by a French industrialist to design some homes for workers in his factory near Bordeaux. Le Corbusier duly delivered brightly-hued concrete blocks of pure modernism. The humble factory workers did not take to Le Corbusier’s visionary geometry. They added rustic shutters, pitched roofs, and picket-fenced gardens. And they decorated the gardens in the least modernist way imaginable: with gnomes.
Companies no longer hire star architects to design housing for an industrial workforce. The architects are instead put to work producing the most magazine-shoot worthy office spaces. A pioneer was the uber-cool advertising agency, Chiat-Day, which in 1993 hired the playful Italian architect Gaetano Pesce to create a New York space for them (hot-lips mural, luminous floor, spring-loaded chairs). Their Los Angeles office (four-storey binoculars, brainstorming pods commandeered from fairground rides) was designed by Frank Gehry, whom Chiat-Day’s boss, Jay Chiat, had spotted before Gehry created the Guggenheim Bilbao and became the most famous architect on the planet.
Jay Chiat believed that design was for the professionals. Give workers control over their own space and they would simply clutter up Frank Gehry’s vision, so Jay Chiat decreed that his employees be given tiny lockers for “their dog pictures, or whatever”.
Now everyone is hiring the high priests of architecture. Google has asked Thomas Heatherwick, creator of the 2012 Olympic torch, to create a new Googleplex. Apple’s new headquarters will be a gigantic glass donut over a mile around, designed by Norman Foster and partners.
The most famous corporate architect was not an architect at all: the late Steve Jobs, the boss of Apple, owned much of the film studio Pixar and stamped his taste all over Pixar’s headquarters. Jobs pored over the finest details, choosing an Arkansas steel mill that produced steels of the perfect hue (bolted, not welded).
Jobs believed that a building could shape the way people interacted with each other, and hit upon the notion that Pixar would have just a single pair of washrooms, just off the main lobby. Every time nature called, there was only one place for the entire company to go, and serendipitous new connections would be made.
But what if all these efforts are basically repeating Le Corbusier’s error? What if the ideal office isn’t the coolest or the most aesthetically visionary? What if the ideal office is the one, dog pictures and gnomes and all, that workers make their own?
In 2010, two psychologists conducted an experiment to test that idea. Alex Haslam and Craig Knight set up simple office spaces where they asked experimental subjects to spend an hour doing simple administrative tasks. Haslam and Knight wanted to understand what sort of office space made people productive and happy, and they tested four different layouts.
Two of the layouts were familiar. One was stripped down – bare desk, swivel chair, pencil, paper, nothing else. Most participants found it rather oppressive. “You couldn’t relax in it,” said one. The other layout was softened with pot plants and tasteful close-up photographs of flowers, faintly reminiscent of Georgia O’Keeffe paintings. Workers got more and better work done there, and enjoyed themselves more.
The next two layouts produced dramatically different outcomes – and yet, photographs of the spaces would offer few clues as to why. They used the same basic elements and the same botanical decorations. But the appearance wasn’t what mattered; what mattered was who got to decide.
In the third and fourth layouts, workers were given the plants and pictures and invited to use them to decorate the space – or not – before they started work. But in the fourth, the experimenter came in after the subject had finished setting everything out to her satisfaction, and then rearranged it all. The office space itself was not much different, but the difference in productivity and job satisfaction was dramatic.
When workers were empowered to design their own space, they had fun and worked hard and accurately, producing 30 per cent more work than in the minimalist office and 15 per cent more than in the decorated office. When workers were deliberately disempowered, their work suffered and of course, they hated it. “I wanted to hit you,” one participant later admitted to an experimenter.
Haslam and Knight have confirmed what other researchers have long suspected – that lack of control over one’s physical environment is stressful and distracting. But this perspective is in stark contrast to the views of those who see office design as too important to be left to the people who work in offices.
At least Le Corbusier had a vision, but many office spaces are ruled instead by an aesthetic that is mean and petty. The Wall Street Journal reported on Kyocera’s clipboard-wielding “inspectors” not only enforcing a clear-desk policy, but pulling open drawers and cabinets, photographing messy contents and demanding improvements. The Australian Financial Review published an 11-page clean-desk manual leaked from the mining giant BHP Billiton; apparently copper and coal cannot be mined if office workers do not respect the limit of one A5 picture frame on each desk. (The frame could display a family photo or an award certificate, but not both.) Haslam and Knight told of a Sydney-based bank that changed the layout of its IT department 36 times in four years at the whim of senior management.
It is unclear why any of this top-down design is thought desirable. Official explanations are often empty or circular: that clean desks are more professional, or look tidier. In some cases, streamlined practices from the production line have been copied mindlessly into general office spaces, where they serve no purpose. Whatever the reason, it is folly. It can be satisfying to straighten up all the pens on your desk; but to order an underling to straighten their own pens is sociopathic.
When the likes of Steve Jobs or Frank Gehry are in charge, we can at least expect a workplace that will look beautiful. But that does not make it functional. Truly creative spaces aren’t constantly being made over for photoshoots in glossy business magazines. Just ask veterans of M.I.T., many of whom will identify as their favourite and most creative space a building that didn’t even have a proper name, which was designed in an afternoon and built to last just a couple of years. Building 20 was 200,000 square feet of plywood, cinderblock and asbestos, a squat, dusty firetrap originally designed to accommodate the wartime radar research effort, but which eked out an existence as M.I.T.’s junk-filled attic until 1998.
Building 20 was an unbelievably fertile mess. The successes started with the wartime RadLab, which produced nine Nobel prizes and the radar systems that won the second world war. But the outpouring continued for more than half a century. The first commercial atomic clock; one of the earliest particle accelerators; Harold Edgerton’s iconic high-speed photographs of a bullet passing through an apple – all sprang from Building 20. So did computer hacking and the first arcade video game, Spacewar. So did the pioneering technology companies DEC, BBN, and Bose. Cognitive science was revolutionised in Building 20 by the researcher Jerry Lettvin, while Noam Chomsky did the same for linguistics.
All this happened in the cheapest, nastiest space that M.I.T. could offer. But that was no coincidence. Building 20 was where the university put odd projects, student hobbyists and anything else that didn’t seem to matter, producing new collaborations.
And Building 20’s ugliness was functional. The water pipes and cabling were exposed, running across the ceilings in brackets. Researchers thought nothing of tapping into them for their experimental needs – or, for that matter, of knocking down a wall. When the atomic clock was being developed, the team removed two floors to accommodate it. This was the result not of design but of neglect. In the words of Stewart Brand, author of How Buildings Learn, “nobody cares what you do in there.”
And that was all Building 20’s residents wanted: to be left alone to create, to make whatever mess they wanted to make. When, inevitably, M.I.T. finally replaced Building 20 with a $300m structure designed by Frank Gehry himself, its former residents held a memorial wake. The new building might have been cutting-edge architecture, but one unhappy resident summed up the problem perfectly: “I didn’t ask for it.”
Of course nobody cares what the people who actually do the work might want or need. Chief executives exult in bold architectural statements, and universities find it easier to raise money for new buildings than for research. And so the grand buildings continue to be built, especially by the most profitable companies and the most prestigious seats of learning.
But we’re often guilty of confusing cause and effect here, believing that great architecture underpins the success of great universities, or that Google flourishes because of the vibrancy of the helter skelters and ping pong tables in the Googleplex. A moment’s reflection reminds us that the innovation comes first, and the stunt architecture comes later.
Remember that for the first two years of Google’s history, there were no headquarters at all. The company’s founders, Sergey Brin and Larry Page, made the breakthroughs at Stanford University. Then came the cliché of a garage in Menlo Park, with desks made from doors set horizontally across sawhorses. The company grew and grew, into one crude space after another – and with engineers always free to hack things about. One knocked down the wall of his office, decided he didn’t like the results, and rebuilt the wall. That made for an ugly space – but a space that worked for the people who worked in it. The spirit of Building 20 lived on at Google.
So how should the ideal office look? In the most prestigious offices at the most prestigious companies, the ones which are being photographed by Wired, the answer to that question is: this place should look the way the boss’s pet architect wants it to look.
But Building 20, and Google’s early offices, and some of the great creative spaces all over the world, suggest a very different answer to the same question: how this place looks doesn’t matter.
Back in 1977, the editor of Psychology Today, T George Harris, put his finger on the problem:
“The office is a highly personal tool shop, often the home of the soul… this fact may sound simple, but it eludes most architects… They have a mania for uniformity, in space as in furniture, and a horror over how the messy side of human nature clutters up an office landscape that would otherwise be as tidy as a national cemetery.”
Harris scoured the academic literature for any evidence that good design helped people to get things done, or to be happier in the office. He couldn’t find it. “People suddenly put into ‘good design’ did not seem to wake up and love it,” he wrote. What people love, instead, is the ability to control the space in which they work – even if they end up filling the space with kitsch, or dog photos, or even – shudder – garden gnomes.
Strangely enough, it was Steve Jobs himself – notorious as a dictatorial arbiter of good taste – who came to appreciate this at Pixar. When he unveiled his plan for the single pair of serendipity-inducing uber-bathrooms, he faced a rebellion from pregnant women at Pixar who didn’t want to have to make the long walk ten times a day. Jobs was aghast that people didn’t appreciate the importance of his vision. But then he did something unexpected: he backed down and agreed to install extra bathrooms.
Steve Jobs found other ways to encourage serendipitous interactions. More importantly, he showed that even on a question that mattered deeply to him, junior staff were able to defy him. Milled Arkansas steels be damned: it is the autonomy that really matters.
“The animators who work here are free to – no, encouraged to – decorate their work spaces in whatever style they wish,” explains Pixar’s boss Ed Catmull in his book Creativity, Inc. “They spend their days inside pink dollhouses whose ceilings are hung with miniature chandeliers, tiki huts made of real bamboo, and castles whose meticulously painted, fifteen-foot-high Styrofoam turrets appear to be carved from stone.”
I suspect that there may be a garden gnome in there, too.

 

The ideas in this article are adapted from my book “Messy”, which is available online in the US and UK or in good bookshops everywhere.


16th of February, 2017