Tim Harford: The Undercover Economist

Highlights

From the geeks who took over poker to the nuclear safety experts who want to prevent the next banking meltdown, these are my favourite long-form articles.


Can the pandemic help us fix our technology problem?

We have a technology problem.

By that, I mean that we currently lack the technology to deal with the coronavirus pandemic. We don’t have a cheap, easy, self-administered test. We lack effective medicines. Above all, we don’t have a vaccine.

But I also mean something vaguer and more diffuse. We have a technology problem in the sense that scientific and technological progress has been sputtering for a while. That is evident in the data: productivity growth in the UK over the 2010-19 decade was the lowest for a couple of centuries, and coronavirus can take no blame for that.

If productivity statistics do not speak to your poetic soul, go into your kitchen and look around. You’ll see little there that you couldn’t have seen 50 years ago. The same could not be said of, say, the 50 years between 1920 and 1970. Or ponder air travel, if you can remember what that is like. Between 1920 and 1970, we went from aviator goggles and fabric-covered biplanes to the Boeing 747 and Concorde. Not only have we failed to surge forward since then, one could even argue that we’ve gone backward.

Given how much we keep being told about the disruptive pace of innovation and the boundless creativity of Silicon Valley, the reality is both surprising and disappointing. After several years pondering the history of inventions and inventors, I wondered whether these two problems might shed light on each other — what can we learn from the pandemic about technology, and what does the history of technology teach us about the pandemic?

Get the incentives right

In 1795, the French government offered a prize of 12,000 francs for inventing a method of preserving food. Napoleon Bonaparte was an ambitious general when the prize was announced. By the time it was awarded, he was France’s emperor, and two years away from his disastrous invasion of Russia. Napoleon may or may not have said: “An army marches on its stomach,” but he was keen to broaden his soldiers’ provisions from smoked and salted meat.

One of the hopefuls who tried his hand at winning the prize was Nicolas Appert, a Parisian grocer and confectioner credited with the development of the stock cube and — less plausibly — the recipe for chicken Kiev. Through trial and error, Appert found that if you put cooked food in a glass jar, plunged the jar into boiling water and then sealed it with wax, the food would keep — all this before Louis Pasteur was born.

Having solved the problem, Monsieur Appert duly claimed his reward.

This is by no means the only example of an innovation prize, a policy tool that has waxed and waned over the years. The most famous was the 1714 Longitude Prize, offered for a practical way of establishing how far east or west a ship lay. The Royal Society for the encouragement of Arts, Manufactures and Commerce, the RSA, also awarded prizes frequently, often for safety measures that were regarded as unprofitable but socially valuable. Anton Howes, author of Arts and Minds, a history of the RSA, reckons that the society awarded more than 2,000 innovation prizes between the mid-1700s and the mid-1800s. Some were “bounties”, ad hoc recognition for good ideas; many, however, were classic innovation prizes like the one awarded to Appert, which posed an important problem and promised to reward whoever solved it.

Nowadays such prizes are out of fashion. Governments tend to favour a combination of direct support for researchers and the award of an intellectual monopoly, in the form of a patent, to those who develop original ideas. But just like the innovations the RSA rewarded, rapid vaccines can be unprofitable but socially valuable. So a group of the world’s leading economists believes that if we are to maximise the chances of producing that vital coronavirus vaccine at the speed and scale that is required, we need to bring innovation prizes back in a big way.

This team, known as “Accelerating Health Technologies”, includes Susan Athey, the first woman to win the prestigious John Bates Clark medal, and Michael Kremer, a Nobel laureate.

“Whoever discovers the vaccine first is going to get such a big hug,” joked the Financial Times cartoonist Banx. It’s safe to say that they would get much more than that, but would they get enough? Major pharmaceutical companies have been scarred by earlier experiences, where they sank money into vaccines for diseases such as Zika or Sars, or in 2009 rushed to fulfil large orders for flu vaccines, only to find that demand had ebbed.

The problem is that most vaccine research programmes do not produce successful vaccines, and so companies — understandably — try to keep a lid on their spending until one is proven to work. Anthony Fauci, director of the US’s National Institute of Allergy and Infectious Diseases, lamented the problem in February: “Companies that have the skill to be able to do it are not going to just sit around and have a warm facility, ready to go for when you need it,” he told an Aspen Institute panel.

We need the leading vaccine contenders to invest vastly more in trials and production than they normally would, even though much of that investment will ultimately be wasted. And of course, they already are investing more — up to a point. That is partly an act of good corporate citizenship and partly in response to subsidies from governments or the Gates Foundation. But it may not be sufficient.

After all, the cost of failure will be borne mainly by the companies involved, while the benefits of success will be enjoyed by all of us: the IMF estimates the benefits are more than $10bn for every day that widespread vaccine delivery is hastened. Any inducement the rest of us can offer might be money well spent. So Athey, Kremer and their colleagues have proposed a kind of prize called an “advanced market commitment”, a promise to buy hundreds of millions of doses of a vaccine for a premium price.

This is not an untried idea. In 2004, Kremer and Rachel Glennerster, the current chief economist of the UK’s Department for International Development, proposed the concept of an advanced market commitment (AMC). In 2010, donors promised $1.5bn as an AMC for a pneumococcal vaccine for low-income countries; this dramatically accelerated the rollout of successful vaccines and saved hundreds of thousands of lives. But the AMC is really just a sophisticated variant on the innovation prizes of the 18th and 19th centuries, such as the one claimed by Nicolas Appert. Incentives are not the only thing that matters — but matter they do. If we want a solution that badly, we shouldn’t hesitate to commit to rewarding those who produce it.

It is not such a leap from food preservation to a vaccine.

Don’t overlook what seems simple

On August 4 1945, as the US and USSR were manoeuvring for position in a postwar world, a group of boys from the Young Pioneer Organisation of the Soviet Union made a charming gesture of friendship. At the US embassy in Moscow, they presented a large, hand-carved ceremonial seal of the United States of America to Averell Harriman, the US ambassador. It was later to become known simply as “the Thing”. Harriman’s office checked the heavy wooden ornament for signs of a bug, but concluded that, with neither wires nor batteries, it could do no harm. Harriman mounted the Thing proudly on the wall of his study. From there, it betrayed his private conversations for the next seven years.

Eventually, a British radio operator stumbled upon the US ambassador’s conversations being broadcast over the airwaves. These broadcasts were unpredictable: scan the embassy for radio emissions, and no bug was in evidence. It took yet more time to discover the secret. The listening device was inside the Thing. And it was so subtle, so simple, as to have proved almost undetectable.

The Thing had been designed — under duress in a Soviet prison camp — by none other than Léon Theremin, famous even then for his eponymous musical instrument. Inside it was little more than an antenna attached to a cavity with a silver diaphragm over it, serving as a microphone. There were no batteries or any other source of power. The Thing didn’t need them. It was activated by radio waves beamed at the US embassy by the Soviets, at which point it would broadcast back, using the energy of the incoming signal. Switch off that signal, and it would go silent.

The US agents who examined the Thing for bugs did not understand its potential to do them harm. It seemed too simple, too primitive, to matter. And I worry that we often make the same mistake. When we think about technology, we think of the flashy, sophisticated stuff. We overlook the cheap and the simple. We celebrate the printing press that produced the Gutenberg Bibles, but not the paper that many of those Bibles were printed on. Alongside paper and the RFID tag, place the brick, the postage stamp and, for that matter, the humble tin can: inventions that are transformative not because they are complicated but because they are simple.

We should remember the same lesson when it comes to the innovations that fuel public health. The simplest technologies — such as soap and gloves, and, it seems increasingly likely, cloth masks — have proved invaluable, and are much-missed when in short supply.

And those are just the obvious technologies. The UK and the US stumbled in their efforts to scale up testing in the crucial early weeks of the epidemic. It will take post-pandemic inquiries to establish exactly why — and incompetence is clearly one explanation — but reporters highlighted a shortage of the chemical reagents necessary to conduct the test, the protective gear needed to shield the medical staff and even something as simple as cotton swabs.

Even now, it is too easy to dismiss the potential of truly cheap and simple testing. The economist Paul Romer, another Nobel memorial prize winner, argues that if everyone in a country could be tested twice a month — the equivalent, in the UK, of more than four million tests a day — that should provide enough information to suppress the virus whenever there was an outbreak. That is a vast leap beyond our current testing capacity — but the benefits could be enormous. Imagine a reliable test that was cheap and self-administered, like a pregnancy test or a thermometer. Highly sophisticated is good, but being cheap has a sophistication of its own.
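Romer’s number is easy to sanity-check. Here is a minimal back-of-the-envelope sketch; the population figure of roughly 67 million is my assumption, not a number from the article.

```python
# Back-of-the-envelope check of Paul Romer's testing arithmetic.
# Assumption (not from the article): a UK population of roughly 67 million.
uk_population = 67_000_000
tests_per_person_per_month = 2
days_per_month = 30

tests_per_day = uk_population * tests_per_person_per_month / days_per_month
print(f"Tests needed per day: {tests_per_day:,.0f}")  # roughly 4.5 million
```

Testing everyone twice a month does indeed work out at well over four million tests a day.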

Contact tracing is another simple but vital approach. An age-old idea that requires little more than a phone, a notebook and a small army of persistent and diplomatic people, it was abandoned in the UK for the three gravest months of the crisis, apparently on the basis that the army had yet to be recruited and so the tracing system could cope with no more than five new cases a week. Since the lockdown was eased, we have well over a thousand a day.

Then there are the everyday logistical miracles made possible by other simple inventions, the barcode and the shipping container. Nobody cares about logistics until things go wrong. It has been remarkable to see how resilient retail supply chains have been in the face of the most extraordinary disruption. At a time when much of the world’s population was told not to venture beyond their own front doors, we saw little more than a brief awkwardness in sourcing flour, pasta and toilet paper.

But it has not been so straightforward to duplicate this feat when it comes to testing. Embarrassed by the early deficiency, the UK government set ambitious targets. Ministers then claimed to hit them, first by including testing kits that had merely been posted out, and then by bragging about “capacity”. Meanwhile, the government simply stopped reporting how many people had been tested at all. The logistics of conducting, or even counting, the tests proved challenging enough that for the purposes of meeting targets, logistical problems were simply assumed away.

In our desperation to develop high-tech solutions such as drugs or contact-tracing apps, there is a risk that we ignore the simple technologies that can achieve a lot. As Averell Harriman discovered, it is a mistake to overlook technologies that seem too simple to matter.

Manufacturing matters too

There is more to innovation than a good idea. The food-preserving “Appertisation” technology did not stay in France for long — it migrated across the Channel to seek London’s entrepreneurialism and venture capital, allowing production to scale up. (This was a time when the British were, evidently, not too proud to borrow a good idea from the French.) Appert himself was also trying to expand his operations. He invested his prize money in a food-preservation factory, only to see it destroyed by invading Prussian and Austrian armies. Ideas matter, but factories matter too.

Factories are likely to prove equally fateful for vaccine production. Developing a successful vaccine is far more than just a manufacturing problem, but manufacturing is undoubtedly the kind of challenge that keeps experts awake at night. The candidate vaccines are sufficiently different from each other that it is unfeasible to build an all-purpose production line that would work for any of them, so we need to build several in parallel.

“Imagine that your life depended on completing a home construction project on time,” Susan Athey told the Planet Money podcast. “Anyone who’s ever done a construction project knows that none of them had ever been completed on time . . . literally, if your life depended on it, you might try to build five houses.”

Or to put it another way, if your life depends on a letter being delivered on time, send multiple copies of the letter by as many methods as you can find.

In the case of a coronavirus vaccine, setting up multiple redundant production lines costs money — tens of billions of dollars. But remember that an accelerated vaccine is worth more than $10bn a day. Any reasonable subsidy would be value for money, assuming it increased the probability of quick success. Some subsidies are already available — for example, as part of the US “Warp Speed” project, and from the Gates Foundation. But Michael Kremer wants to see more international co-ordination and more ambition. “We think the scale of the problem and the risks associated with each candidate warrant pursuing a substantially larger number of candidates,” he told me.
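To see why almost any plausible subsidy clears the bar, here is a minimal expected-value sketch. Only the $10bn-a-day figure comes from the IMF estimate cited above; the subsidy cost, the days saved and the probability that the extra capacity proves useful are illustrative assumptions.

```python
# Illustrative expected-value sketch of subsidising redundant vaccine factories.
# Only the $10bn-per-day benefit is taken from the IMF estimate in the text;
# every other number is an assumption for illustration.
value_per_day_saved = 10e9       # benefit of accelerating widespread vaccination ($/day)
subsidy_cost = 30e9              # assumed cost of subsidising parallel production lines ($)
days_saved_if_useful = 90        # assumed acceleration if the pre-built capacity is used
prob_capacity_useful = 0.5       # assumed chance the capacity matches a winning vaccine

expected_benefit = prob_capacity_useful * days_saved_if_useful * value_per_day_saved
print(f"Expected benefit: ${expected_benefit / 1e9:.0f}bn versus cost: ${subsidy_cost / 1e9:.0f}bn")
```

Even with deliberately cautious assumptions, the expected benefit comes out at hundreds of billions of dollars against a subsidy measured in tens of billions.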

Alex Tabarrok, another member of the team, added: “Bill Gates is doing the right thing but even Gates can’t do it all. Governments are acting too slowly. Every week that we delay a vaccine costs us billions.”

Athey, Kremer, Tabarrok and the rest of the team behind the Advanced Market Commitment proposal want to supplement it with generous 85 per cent subsidies for the immediate construction of vaccine factories. The calculation here is that firms are the best judges of their own prospects. A firm with a marginal vaccine will not build much capacity, even with an 85 per cent subsidy. But anyone with a decent chance at producing a vaccine will see the prize on offer, and the subsidies, and start building factories at once.
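That self-selection logic can be sketched as a toy calculation. Only the 85 per cent subsidy rate comes from the proposal as described above; the capacity cost, the value of the purchase commitment and the success probabilities are illustrative assumptions.

```python
# Toy model of self-selection under a capacity subsidy: a firm builds factories
# only if its expected payoff from the advance market commitment exceeds its own
# post-subsidy share of the construction cost. The 85% subsidy rate comes from
# the text; every other number is an illustrative assumption.
subsidy_rate = 0.85
capacity_cost = 2e9           # assumed cost of building capacity at risk ($)
amc_payoff_if_success = 5e9   # assumed value of the purchase commitment to a winner ($)

def builds_capacity(prob_success: float) -> bool:
    """True if the firm's expected prize exceeds its share of the building cost."""
    own_share_of_cost = (1 - subsidy_rate) * capacity_cost
    return prob_success * amc_payoff_if_success > own_share_of_cost

for p in (0.02, 0.10, 0.30):
    print(f"Success probability {p:.0%}: builds capacity? {builds_capacity(p)}")
# A marginal candidate (2%) still stays out; plausible candidates (10%+) build at once.
```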

On the principle of not overlooking what seems simple, even the most sophisticated vaccines rely on ingredients that are all too easy to take for granted. Consider the supply of glass vials. Several doses can be included in a single vial, but that still suggests a demand for hundreds of millions of them if a successful vaccine is made. The vaccine industry is used to operating at scale, but this would be something new: vaccines simply aren’t given to everyone in the world all at once.

Or perhaps the hold-up won’t be the glass, but something else. James Robinson, a vaccine manufacturing expert, told the science writer Maggie Koerth: “A vaccine manufacturer . . . might source several thousand ingredients to make a vaccine. But each material is coming from factories with hundreds of sources, and those sources have sources.”

For example, GlaxoSmithKline uses an extract from the soap-bark tree to produce a vaccine-enhancing ingredient called an adjuvant; for some of the vaccines now in development, the adjuvant may enhance their effectiveness or make a certain quantity stretch to more doses. As Koerth noted, however, the bark is harvested in Peru, Chile and Bolivia during the summer months of the southern hemisphere. Last year’s crop was harvested before the coronavirus had become a household name; this year’s harvest will not begin until November.

Disruption can help

The past few decades are not the first time that apparently remarkable technologies have made an underwhelming impression on the productivity figures. Consider the history of electrification in American factories. In the 1890s, the potential for electricity seemed clear. Thomas Edison and Joseph Swan independently invented usable lightbulbs in the late 1870s. In 1881, Edison built electricity-generating stations at Pearl Street in Manhattan and Holborn in London. Things moved quickly: within a year, he was selling electricity as a commodity; a year later, the first electric motors were used to drive manufacturing machinery.

Yet by 1900, less than 5 per cent of mechanical drive power in US factories was coming from electric motors. Most factories were still in the age of steam. This was because when manufacturers replaced large steam engines with large electric motors, they were disappointed with the results.

I’ve written about the work of economic historian Paul David before. He argued it wasn’t enough merely to replace steam engines with electric motors. The capabilities of those new motors could only be used fully if the factories were redesigned.

While replacing a large steam engine with a large electric motor had achieved very little, electric motors could be efficient at a smaller scale. That meant that each worker could have a small motor at their bench. Wires could replace driveshafts; factories could spread out into lighter, airier spaces; the flow of product could be optimised, rather than being constrained by proximity to the power source.

But a fascinating part of David’s argument is that all this was catalysed by a crisis. After 1914, workers became more expensive thanks to a series of new laws that limited immigration into the US from a war-torn Europe. Manufacturing wages soared and hiring workers became more about quality, and less about quantity. It was worth investing in training — and better trained workers were better placed to use the autonomy that electricity gave them. The recruitment problem created by the immigration restrictions helped to spark new thinking about the design of the American factory floor.

Some of the modern parallels are obvious. We have had email, internet and affordable computers for years — and more recently, video-conferencing. Yet until the crisis hit, we had been slow to explore online education, virtual meetings or telemedicine. 3D printing and other agile manufacturing techniques have moved from being curiosities to life-saving ways to meet the new demand for medical equipment. We are quickly learning new ways to work from a distance because suddenly we have had no choice. And we are learning about resilience.

There is no guarantee that a crisis always brings fresh ideas; sometimes a catastrophe is just a catastrophe. Still, there is no shortage of examples of necessity proving the mother of invention, sometimes many times over.

The Economist points to the case of Karl von Drais, who invented an early model of the bicycle in the shadow of “the year without a summer” — when in 1816 European harvests were devastated by the after-effects of the gargantuan eruption of Mount Tambora in Indonesia. Horses were starved of oats; von Drais’s “mechanical horse” needed no food.

It is a good example. But one might equally point to infant formula and beef extract, both developed by Justus von Liebig in response to the horrifying hunger he had witnessed in Germany as a teenager in 1816. Or, if we are to recognise art as well as science, there is Mary Shelley’s masterpiece Frankenstein, written that same rainy summer beside Lake Geneva; the creature’s isolation mirrors that of the starving peasants she saw, begging for food. One crisis may lead to many creative responses.

The same may be true of this pandemic. Disruptions — even calamitous ones — have a way of bulldozing vested interests and tearing up cosy assumptions, jolting people and organisations out of the status quo.

It is just possible that future generations will point to 2020 as the year the innovation slowdown ended. Even economists need to be able to hope.

Written for and first published in the Financial Times on 11 June 2020.

My NEW book The Next Fifty Things That Made the Modern Economy is NOW OUT; this article is based in part on ideas in that book. Details, and links to order on Hive, Blackwells, Amazon or Waterstones. Bill Bryson comments, “Endlessly insightful and full of surprises — exactly what you would expect from Tim Harford.”



Why we fail to prepare for disasters

You can’t say that nobody saw it coming. For years, people had warned that New Orleans was vulnerable. The Houston Chronicle reported that 250,000 people would be stranded if a major hurricane struck, with the low-lying city left 20ft underwater. New Orleans’s Times-Picayune noted the inadequacy of the levees. In 2004, National Geographic vividly described a scenario in which 50,000 people drowned. The Red Cross feared a similar death toll. Even Fema, the Federal Emergency Management Agency, was alert: in 2001, it had stated that a major hurricane hitting New Orleans was one of the three likeliest catastrophes facing the United States.

Now the disaster scenario was becoming a reality. A 140mph hurricane was heading directly towards the city. More than a million residents were warned to evacuate. USA Today warned of “a modern Atlantis”, explaining that the hurricane “could overwhelm New Orleans with up to 20ft of filthy, chemical-polluted water”.

The city’s mayor, Ray Nagin, begged people to get away. He was reluctant to make evacuation mandatory because more than 100,000 people had no cars and no way of leaving. The roads out were jammed, anyway. Thousands of visiting conference delegates were stranded; the airport had been closed. There were no emergency shelters. Nagin mooted using a local stadium, the Louisiana Superdome, as a temporary refuge — but the Superdome was not necessarily hurricane-proof and Nagin was warned that it wasn’t equipped to be a shelter.

But then, the storm turned aside. It was September 2004, and New Orleans had been spared. Hurricane Ivan had provided the city, and the nation, with a vivid warning. It had demonstrated the need to prepare, urgently and on a dozen different fronts, for the next hurricane.

“In early 2005, emergency officials were under no illusions about the risks New Orleans faced,” explain Howard Kunreuther and Robert Meyer in their book The Ostrich Paradox. But the authorities did not act swiftly or decisively enough. Eleven months later, Hurricane Katrina drowned the city — and many hundreds of its residents. As predicted, citizens had been unable or unwilling to leave; levees had been breached in over 50 places; the Superdome had been an inadequate shelter.

Surely, with such a clear warning, New Orleans should have been better prepared to withstand Hurricane Katrina? It’s easily said. But as the new coronavirus sweeps the globe, killing thousands more people every day, we are now realising that New Orleans is not the only place that did not prepare for a predictable catastrophe.

*

In 2003, the Harvard Business Review published an article titled “Predictable Surprises: The Disasters You Should Have Seen Coming”. The authors, Max Bazerman and Michael Watkins, both business school professors, followed up with a book of the same title. Bazerman and Watkins argued that while the world is an unpredictable place, unpredictability is often not the problem. The problem is that faced with clear risks, we still fail to act.

For Watkins, the coronavirus pandemic is the ultimate predictable surprise. “It’s not like this is some new issue,” he says, before sending over the notes for a pandemic response exercise that he ran at Harvard University. It’s eerily prescient: a shortage of masks; a scramble for social distance; university leaders succumbing to the illness. The date on the document is October 12 2002. We’ve been thinking about pandemics for a long time.

Other warnings have been more prominent. In 2015, Bill Gates gave a TED talk called “The next outbreak? We’re not ready”; 2.5 million people had watched it by the end of 2019. In 2018, the science journalist Ed Yong wrote a piece in The Atlantic titled “The Next Plague Is Coming. Is America Ready?” Now we know the answer, and it wasn’t just the Americans who were unprepared.

Officialdom had also been sounding the alarm. The World Health Organization and the World Bank had convened the Global Preparedness Monitoring Board (GPMB), chaired by Elhadj As Sy of the Red Cross and Gro Harlem Brundtland, a former director of the WHO. The GPMB published a report in October warning of “a cycle of panic and neglect” and calling for better preparation for “managing the fallout of a high-impact respiratory pathogen”. It noted that a pandemic “akin to the scale and virulence of the one in 1918 would cost the modern economy $3 trillion”.

Alongside these authoritative warnings were the near misses, the direct parallels to Hurricane Ivan: Sars in 2003; two dangerous influenza epidemics, H5N1 in 2006 and H1N1 in 2009; Ebola in 2013; and Mers in 2015. Each deadly outbreak sparked brief and justifiable alarm, followed by a collective shrug of the shoulders.

It is understandable that we have too few doctors, nurses and hospital beds to cope with a pandemic: spare doctors are expensive. It is less clear why we have so few masks, are so unprepared to carry out widespread testing and didn’t do more to develop coronavirus vaccines after the Sars epidemic of 2003, which involved a strain related to the current outbreak. (There was a flurry of activity, but interest waned after 2004.) We were warned, both by the experts and by reality. Yet on most fronts, we were still caught unprepared. Why?

*

Wilful blindness is not confined to those in power. The rest of us should acknowledge that we too struggled to grasp what was happening as quickly as we should. I include myself. In mid-February, I interviewed an epidemiologist, Dr Nathalie MacDermott of King’s College London, who said it would likely prove impossible to contain the new coronavirus, in which case it might well infect more than half the world’s population. Her best guess of the fatality rate at the time was a little under one per cent. I nodded, believed her, did the maths in my head — 50 million dead — and went about my business. I did not sell my shares. I did not buy masks. I didn’t even stock up on spaghetti. The step between recognising the problem and taking action was simply too great.

Nor did the broadcast of my radio interview with MacDermott on the BBC seem to spark much in the way of disaster planning. Psychologists describe this inaction in the face of danger as normalcy bias or negative panic. In the face of catastrophe, from the destruction of Pompeii in AD79 to the September 11 2001 attacks on the World Trade Center, people have often been slow to recognise the danger and confused about how to respond. So they do nothing, until it is too late.

Part of the problem may simply be that we get our cues from others. In a famous experiment conducted in the late 1960s, the psychologists Bibb Latané and John Darley pumped smoke into a room in which their subjects were filling in a questionnaire. When the subject was sitting alone, he or she tended to note the smoke and calmly leave to report it. When subjects were in a group of three, they were much less likely to react: each person remained passive, reassured by the passivity of the others.

As the new coronavirus spread, social cues influenced our behaviour in a similar way. Harrowing reports from China made little impact, even when it became clear that the virus had gone global. We could see the metaphorical smoke pouring out of the ventilation shaft, and yet we could also see our fellow citizens acting as though nothing was wrong: no stockpiling, no self-distancing, no Wuhan-shake greetings. Then, when the social cues finally came, we all changed our behaviour at once. At that moment, not a roll of toilet paper was to be found.

Normalcy bias and the herd instinct are not the only cognitive shortcuts that lead us astray. Another is optimism bias. Psychologists have known for half a century that people tend to be unreasonably optimistic about their chances of being the victim of a crime, a car accident or a disease, but, in 1980, the psychologist Neil Weinstein sharpened the question. Was it a case of optimism in general, a feeling that bad things rarely happened to anyone? Or perhaps it was a more egotistical optimism: a sense that while bad things happen, they don’t happen to me. Weinstein asked more than 250 students to compare themselves to other students. They were asked to ponder pleasant prospects such as a good job or a long life, and vivid risks such as an early heart attack or venereal disease. Overwhelmingly, the students felt that good things were likely to happen to them, while unpleasant fates awaited their peers.

Robert Meyer’s research, set out in The Ostrich Paradox, shows this effect in action as Hurricane Sandy loomed in 2012. He found that coastal residents were well aware of the risks of the storm; they expected even more damage than professional meteorologists did. But they were relaxed, confident that it would be other people who suffered.

While I realise some people are paranoid about catching Covid-19, it’s egotistical optimism that I see in myself. Although I know that millions of people in the UK will catch this disease, my gut instinct, against all logic, is that I won’t be one of them. Meyer points out that such egotistical optimism is particularly pernicious in the case of an infectious disease. A world full of people with the same instinct is a world full of disease vectors. I take precautions partly because of social pressure and partly because, intellectually, I know they are necessary. But my survival instinct just isn’t doing the job, because I simply do not feel my survival is at stake.

The fact that the epidemic started in China, among ethnically Asian people, can only have deepened the sense of personal invulnerability in the west. As epidemiologist Neil Ferguson told the FT: “What had happened in China was a long way away, and it takes a certain type of person to take on board that this might actually happen here.”

The virus started to feel real to Europeans only when Europeans were suffering. Logically, it was always clear that the disease could strike middle-class people who enjoy skiing holidays in Italy; emotionally, we seemed unable to grasp that fact until it was too late.

A fourth problem, highlighted by Meyer’s co-author Howard Kunreuther, is what we might call exponential myopia. We find exponential growth counterintuitive to the point of being baffling — we tend to think of it as a shorthand for “fast”. An epidemic that doubles in size every three days will turn one case into a thousand within a month — and into a million within two months if the growth does not slow.
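The doubling arithmetic is easy to verify; the three-day doubling time is the article’s own illustrative figure.

```python
# The arithmetic behind "one case into a thousand within a month, and into a
# million within two months" for an epidemic that doubles every three days.
doubling_time_days = 3

for days in (30, 60):
    doublings = days // doubling_time_days
    cases = 2 ** doublings
    print(f"After {days} days: {cases:,} cases from a single case")
# After 30 days: 1,024 cases; after 60 days: 1,048,576 cases.
```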

Donald Trump’s boast, on March 9, that there had been only 22 deaths in the US, was ill-judged in light of what we know about exponential growth, but he is hardly the only person to fail to grasp this point. In 1975, the psychologists William Wagenaar and Sabato Sagaria found that when asked to forecast an exponential process, people often underestimated by a factor of 10. The process in that study was much slower than this epidemic, doubling in 10 months rather than a few days. No wonder we find ourselves overtaken by events.

Finally, there’s our seemingly limitless capacity for wishful thinking. In a complex world, we are surrounded by contradictory clues and differing opinions. We can and do seize upon whatever happens to support the conclusions we wish to reach — whether it’s that the virus is being spread by 5G networks, is a hoax dreamed up by “the Dems” or is no worse than the flu.

Both Robert Meyer and Michael Watkins made an observation that surprised me: previous near misses such as Sars or Hurricane Ivan don’t necessarily help citizens prepare. It is all too easy for us to draw the wrong lesson, which is that the authorities have it under control. We were fine before and we’ll be fine this time.

This, then, is why you and I did not see this coming: we couldn’t grasp the scale of the threat; we took complacent cues from each other, rather than digesting the logic of the reports from China and Italy; we retained a sunny optimism that no matter how bad things got, we personally would escape harm; we could not grasp what an exponentially growing epidemic really means; and our wishful thinking pushed us to look for reasons to ignore the danger.

*

The true failure, however, surely lies with our leaders. We are humble folk, minding our own business; their business should be safeguarding our welfare, advised by expert specialists. You or I could hardly be expected to read Gro Harlem Brundtland’s October Global Preparedness Monitoring Board report, and if we did, it is not clear what action we could really take. Surely every government should have someone who is paying attention to such things?

Margaret Heffernan, the author of Uncharted, warns that the same mental failings that blind us to certain risks can do the same to our leaders.

“We hang around with people like ourselves and if they’re not fussed, we’re not fussed,” she says. “Gro Harlem Brundtland lives inside a global health institution, so she cares. Most politicians don’t.”

While politicians have access to the best advice, they may not feel obliged to take experts seriously. Powerful people, after all, feel sheltered from many everyday concerns.  Heffernan argues that this sense of distance between the powerful and the problem shaped the awful response to Hurricane Katrina. Leaked emails show the response of Michael Brown, then the director of Fema.

One subordinate wrote: “Sir, I know that you know the situation is past critical. Here some things you might not know. Hotels are kicking people out, thousands gathering in the streets with no food or water… dying patients at the DMAT tent being medivac. Estimates are many will die within hours…”

Brown’s response, in its entirety, was: “Thanks for update. Anything specific I need to do or tweak?” That’s a sense of distance and personal impunity distilled to its purest form.

Sometimes, of course, the feeling of invulnerability is an illusion: in early March, the British Prime Minister Boris Johnson jovially declared that people would be “pleased to know” that he was shaking hands with everybody at a hospital tending to patients with coronavirus, and inviting people to make their own decisions about such matters. It was a shamefully irresponsible thing to say — but it also spoke volumes about his misplaced intuition that he could come to no harm. Within weeks, the story of Johnson had become a classical tragedy, the hero laid low by his own larger-than-life qualities.

*

We should acknowledge that even foreseeable problems can be inherently hard to prepare for. A pandemic, for example, is predictable only in broad outline. The specifics are unknowable. “What disease? When? Where?” says Heffernan. “It’s inherently unpredictable.”

The UK, for example, ran a pandemic planning exercise in October 2016, dubbed “Exercise Cygnus”. That forethought is admirable, but also highlights the problem: Cygnus postulated a flu pandemic, perhaps a strain of the H1N1 virus that killed tens of thousands in 2009, and many millions in 1918. Covid-19 is caused by a coronavirus instead, a relative of the Sars-Cov strain from the 2003 outbreak. Some of the implications are the same: we should stockpile personal protective equipment. Some, such as the danger of flu to young children, are different.

In any case, those implications seem broadly to have been ignored. “We learnt what would help, but did not necessarily implement those lessons,” wrote Professor Ian Boyd in Nature in March. Boyd had been a senior scientific adviser to the UK government at the time. “The assessment, in many sectors of government, was that the resulting medicine [in terms of policy] was so strong that it would be spat out.”

Being fully prepared would have required diverting enormous sums from the everyday requirements of a medical system that was already struggling to cope with the nation’s needs. The UK’s National Health Service was short of staff before the crisis began, seems to have had woefully inadequate stores of protective equipment for doctors and nurses, and has long pursued a strategy of minimising the use of hospital beds.

It’s this quest for efficiency above all else — in the NHS, and modern organisations in general — that leaves us vulnerable. The financial crisis taught us that banks needed much bigger buffers, but few carried the lesson over to other institutions, such as hospitals.

“On a good day, having 100 per cent of your intensive care beds in use looks efficient. The day a pandemic strikes is the day you realise the folly of efficiency. You’ve got to have a margin,” says Heffernan.

These margins are hard to maintain, though. In 2006, Arnold Schwarzenegger — then governor of California — announced an investment of hundreds of millions of dollars in medical supplies and mobile hospitals to deal with earthquakes, fires and particularly pandemics. According to the Los Angeles Times, emergency response teams would have access to a stockpile including “50 million N95 respirators, 2,400 portable ventilators and kits to set up 21,000 additional patient beds wherever they were needed”.  It was impressive.

But after a brutal recession, Schwarzenegger’s successor, Jerry Brown, cut the funding for the scheme, and the stockpile is nowhere to be found. Brown isn’t the only one to look for something to cut when funds are tight. Managers everywhere have long been promoted on their ability to save money in the short term.

I spoke to a friend of mine, a senior NHS consultant who had contracted Covid-19 as he tended his patients. Recovering in self-isolation, he reminisced about the days that he was told to find cuts of five to 10 per cent — and the fact that his hospital was no longer providing coffee for staff meetings as a cost-saving exercise. That seems like a memo from another era — but it was just a few weeks ago. As the cost-saving measures were being introduced in the UK, Italians had started to die.

*

The pandemic has offered us few easy choices so far. Nor are there many easy answers to the general problem of preparing for predictable catastrophes. It is too tempting to look at a near miss like Hurricane Ivan or Sars and conclude that since the worst did not happen then, the worst will not happen in the future. It is tempting, too, to fight the last war: we built up reserves in banking after the financial crisis, but we did not pay attention to reserve capacity in health, vaccine production and social care.

Preparedness is possible. Margaret Heffernan points to Singapore, a tiny country with front-line experience of Sars, acutely aware of its geographical vulnerability.

“The foresight unit in Singapore is the best I’ve ever encountered,” she says. “There are serious people working through very serious scenarios, and there’s a diversity of thinking styles and disciplines.”

Serious scenarios are useful, but as the UK’s Exercise Cygnus demonstrated, serious scenarios are no use if they are not taken seriously. That means spending money on research that may never pay off, or on emergency capacity that may never be used. It is not easy to justify such investments with the day-to-day logic of efficiency.

Singapore isn’t the only place to have prepared. Almost four years ago, philanthropists, governments and foundations created the Coalition for Epidemic Preparedness Innovations. Cepi’s mission is to support and develop technologies and systems that could create vaccines more quickly. While the world chafes at the idea that a vaccine for the new coronavirus might take more than a year to deploy, such a timeline would have been unthinkably fast in the face of earlier epidemics. If such a vaccine does arrive within a year — there is no guarantee it will arrive at all — that will be thanks to the likes of Cepi.

Still, we are left wondering what might have been if Cepi had existed just a few years earlier. In October 2019, for example, it started funding vaccine “platform” technologies to enable a more agile, rapid response to what it called “Disease X… a rapidly moving, highly lethal pandemic of a respiratory pathogen killing 50 [million] to 80 million people and wiping out nearly 5 per cent of the world’s economy”. That’s preparedness; alas, Disease X may have arrived just a little too soon for the preparedness to bear fruit.

*

And what of New Orleans? In the summer of 2017, it was underwater again. A vast and expensive system of pumps had been installed, but the system was patchy, under-supplied with power and unable to cope with several weeks of persistent rain. It does not inspire confidence for what will happen if a big hurricane does strike.

Robert Meyer says that while the city has learnt a lot about preparation, “Katrina was not close to the worst-case scenario for New Orleans, which is a full category-five storm hitting just east of the city”.

The same may be true of the pandemic. Because Covid-19 has spread much faster than HIV and is more dangerous than the flu, it is easy to imagine that this is as bad as it is possible to get. It isn’t. Perhaps this pandemic, like the financial crisis, is a challenge that should make us think laterally, applying the lessons we learn to other dangers, from bioterrorism to climate change. Or perhaps the threat really is a perfectly predictable surprise: another virus, just like this one, but worse. Imagine an illness as contagious as measles and as virulent as Ebola, a disease that disproportionately kills children rather than the elderly.

What if we’re thinking about this the wrong way? What if instead of seeing Sars as the warning for Covid-19, we should see Covid-19 itself as the warning?

Next time, will we be better prepared?

Written for and first published in the FT Magazine on 18/19 April 2020.

My NEW book The Next Fifty Things That Made the Modern Economy is out in the UK in two weeks and available to pre-order; please consider doing so online or at your local bookshop – pre-orders help other people find the book and are a BIG help.



Why we fall for cons

There may be times and places where it’s a good idea to talk back to a military officer — but Germany in 1906 wasn’t one of them. So the young corporal didn’t. The corporal — let’s call him Muller — had been leading his squad of four privates down Sylterstrasse in Berlin, only to be challenged by a captain.  Captain Voigt was in his fifties, a slim fellow with sunken cheeks, the outline of his skull prominent above a large, white moustache. Truth be told, he looked strangely down on his luck — but Muller didn’t seem to take that in. Like any man in uniform, Captain Voigt appeared taller and broader thanks to his boots, smart grey overcoat and Prussian-blue officer’s cap. His white-gloved hand rested casually on the hilt of his rapier.

“Where are you taking these men?” he barked.

“Back to barracks, sir,” replied Muller.

“Turn them around and follow me,” ordered Voigt. “I have an urgent mission from the ‘all-highest’ command.”

Direct orders from the kaiser himself!

As the small group marched towards Putlitzstrasse station, the charismatic Captain Voigt saw another squad and ordered them to fall in behind. He led his little army on a train ride towards Köpenick, a charming little town just south-east of the capital.

On arrival, the adventure continued: bayonets were to be fixed for inspection. It had been an extraordinary day for Corporal Muller and his men. But it was going to get a lot more extraordinary: what they were about to do would be the talk of newspapers around the world.

 

Captain Voigt’s impromptu strike force burst into Köpenick town hall and into the office of the mayor, a man named Georg Langerhans. Langerhans, a mild-looking fellow in his mid-thirties with pince-nez spectacles, a pointed goatee and a large, well-groomed moustache, stood up in astonishment and demanded an explanation. Voigt promptly placed him under arrest, by order of the kaiser.

“Where is your warrant?” stammered Langerhans.

“My warrant is the men I command!”

Voigt ordered the town treasurer to open the safe for inspection: fraud was suspected. The safe contained three thousand, five hundred and fifty-seven marks and forty-five pfennigs. Captain Voigt was punctilious about the count, confiscated the money, and handed over a receipt to be stamped.

It was nearly a quarter of a million dollars in today’s money.

Captain Voigt sent a pair of soldiers to find and detain Mayor Langerhans’s wife. She, too, was a suspect. He then searched the town hall office while his men kept the officials under arrest. Failing to find what he sought, he decided to wrap up the mission. The officials were to be driven to a police station where they would be detained and interrogated.

Captain Voigt himself walked to Köpenick railway station. He collected a package from the left-luggage office, and stepped into a toilet cubicle. A minute or two later, he stepped out again — and he was almost unrecognisable, having changed into shabby civilian clothes. He ambled, bandy-legged, across the station concourse. This anonymous fellow boarded the train back to Berlin, with his uniform neatly folded under one arm, and a bag of money under the other. Just like that, the “Captain of Köpenick” was gone.

Meanwhile, Corporal Muller dutifully presented his prisoners at the police station in central Berlin. The situation quickly became baffling to all concerned. Nobody had heard anything about the “all-highest” demanding the interrogation of the Mayor of Köpenick — nor his wife. After a phone call to headquarters, the head of the German general staff himself, General Helmuth von Moltke the Younger, arrived to resolve the situation. But nobody had received any orders from the kaiser. Nobody could see any reason to detain the mayor, or his wife, or his treasurer. And nobody could recall ever having met a “Captain Voigt” before.  No wonder. Except in the minds of the bemused soldiers and their civilian prisoners, Captain Voigt never existed. They met instead Herr Wilhelm Voigt, an ex-convict, an ex-shoemaker, a nobody, who possessed nothing more than a confident manner . . . and a very nice uniform.

 

The tale I just told you is a famous one in Germany. It became a play, and an Oscar-nominated film. (The most comprehensive English-language account I could find is by the historian Benjamin Carter Hett.) When the Germans tell the story they tend to linger on the prelude to the heist. What kind of a man does this? Who was Wilhelm Voigt, and what inspired his audacious confidence trick? Voigt was a crook, no doubt about it — his crimes included armed robbery. But the judicial system had treated him harshly, stuffing a legitimate appeal into a filing cabinet. In this version of the story, Voigt was persecuted by a cruel bureaucracy, driven to ransacking the mayor’s office looking not for money but for the paperwork he needed to get a job. No wonder he became seen as a sympathetic figure in German literature.

The English-speaking world drew a different lesson from the reports that filled its newspapers: that the Germans are suckers for a shouty man in a uniform. The Morning Post named Voigt “the most humorous figure of the century”. The writer GK Chesterton could scarcely contain his glee upon reading the “comic” reports from Köpenick of the “absurd fraud (at least, to English eyes)”. An Englishman, mused Chesterton, would have seen through the bluster immediately.

Yet four years later, a group of young upper-class pranksters including the novelist Virginia Woolf and the artist Duncan Grant managed to arrange for a tour of the Royal Navy’s flagship, HMS Dreadnought, by putting on turbans, brown make-up and fake beards, and claiming to be from the royal family of Abyssinia.

“Bunga bunga!” they boomed as they greeted each other, and when they had to improvise further, they spoke scrambled fragments of ancient Greek poetry they’d learnt at school. Faced with this ridiculous, and to our modern eyes profoundly offensive prank, the Royal Navy responded with a commensurate display of ignorance: it treated the visitors with all the honour it could muster, including the flag and anthem of the nation of Zanzibar rather than Abyssinia. That was apparently close enough to satisfy everyone.

It’s easy to laugh — as GK Chesterton did — when it happens to someone else. But the closer I looked at the story of the Captain of Köpenick, the less funny it seemed. Faced with the right con, we’re all vulnerable. Any one of us could have been the hapless Corporal Muller. And if we don’t understand how the trick worked, Wilhelm Voigt’s modern-day successors will do far more damage than he could ever have imagined.

 

Since Wilhelm Voigt persuaded people to obey orders that they should not have obeyed, you may already be thinking about Stanley Milgram. Milgram is the psychologist who, in the 1960s, conducted the most famous and controversial psychological experiment of all time — an experiment that I think we tend to misunderstand. Milgram recruited unsuspecting members of the American public — all men — to participate in a “study of memory”. On showing up at the laboratory, in a basement at Yale University, they met a man — apparently a scientist, just as Voigt had apparently been a Prussian army captain — dressed in a tie and grey lab coat.

“Very straightforward and professional, just what you’d expect from Yale,” one participant recalled. (Gina Perry’s book Behind The Shock Machine is an authoritative account of the experiments.)

The man-dressed-as-a-scientist supervised proceedings. Participants would be assigned the role either of “teacher” or “learner”. The learner was then strapped into an electric chair while the teacher retreated into another room to take control of a machine with switches labelled with terms including: “slight shock”, “moderate shock”, “danger: severe shock” and, finally, “XXX”.

As the learner failed to answer questions correctly, the teacher was asked to administer steadily increasing electric shocks. Although the teachers had received a painful shock themselves as a demonstration and had witnessed the learner complaining of a heart condition, many proved willing to deliver possibly fatal shocks while listening to screams of pain from the other side of the wall.  Of course, there were no shocks; both the screaming “learner” and the scientific supervisor were actors. The true experiment was studying the “teachers”: how far would they go when following direct orders?

In the best known study, 65 per cent of experimental subjects went all the way to 450 volts, applying shocks long after the man in the other room had fallen silent. Under the guise of science, Stanley Milgram had perpetrated yet another of these grim hoaxes.

Milgram’s research agenda was influenced by the shadow of the Holocaust and a desire to understand how it had been possible. He made the link explicit, and argued that his experiment was all about “obedience to authority”. But modern scientists no longer see Milgram’s research in quite that way.

There’s a lot we could say about those experiments — about their ethics, and about the more than 20 experimental variations. But the most fundamental objection is that these experiments may not be about obedience at all. Alex Haslam, a psychologist who has re-examined the studies in recent years, found that when the man in the lab coat gave direct orders, they backfired. One pre-scripted instruction produced universal disobedience: “you have no other choice . . . you must continue”. Experimental subjects concluded that this was simply untrue; nobody continued after that order. People need to be persuaded, not bullied, into participating.

So if these experiments weren’t about blind obedience, what were they about? Here’s a detail that is usually overlooked: Milgram’s shock machine had 30 settings, fine increments of 15 volts. It’s hard to object to giving someone a tiny 15-volt shock. And if you’ve decided that 15 volts is fine, then why draw the line at 30 volts? Why draw the line at 45? Why draw the line at all?

At 150 volts, the “learner” yelled out in distress. Some people stopped at that point. But those who continued past 150 volts almost always kept going to the full 450 volts. They were in too deep. Refusing to administer a shock of 225 volts would be an implicit admission that they had been wrong to deliver 210. Perhaps Stanley Milgram’s experiments weren’t a study of obedience so much as a study of our unwillingness to stop and admit that we’ve been making a dreadful mistake. We’re in too deep; we’re committed; we can’t turn back.

Think back to that day in Berlin, in 1906. Voigt stopped Corporal Muller in the street and demanded to know where he and his men were going. What was Muller to do? Demand proof of identification? Of course not. Muller didn’t want to risk a court martial over answering a simple question.

Voigt then asked Muller’s squad to follow him. That’s a bit more of a stretch, but Muller had already obeyed one order, already addressed this stranger-in-a-uniform as “sir”. Marching down the street behind him was just one small action further.

The pattern repeated itself with the second squad: when they first saw Captain Voigt, he was already at the head of half a dozen men; that was the evidence he was who he said he was. Why not fall in? Why not get the train to Köpenick? Why not fix bayonets for inspection? It’s really only at the moment that they burst into the town hall that the doubts might occur.

But by then, the whole business was already well beyond the 210-volt mark. They had travelled all the way across Berlin. They had been following Wilhelm Voigt’s instructions for a couple of hours. It would have been very late in the day for Corporal Muller, or anyone else, to have the presence of mind to stop, think and challenge their new captain.

Georg Langerhans, the young mayor, saw the situation very differently — he immediately demanded to see a warrant. Langerhans, of course, was effectively being asked to apply a 450-volt shock without preamble. No wonder he was sceptical.

At first glance, then, Wilhelm Voigt’s con and Milgram’s shock experiments are evidence for the idea that we’ll do anything for a figure of authority wearing the right outfit. But look deeper and they’re evidence for something else — that we’re willing to help out with reasonable requests, and that step by step we can find ourselves trapped in a web of our own making. Each small movement binds us more tightly to the con artist. We become complicit; breaking free becomes all but impossible.

That said, the right outfit matters. And here I want to think bigger than the world of the con artist. Yes, we fall for cons. But we fall for all kinds of other superficial things that shouldn’t matter, like a nice uniform, and those superficial things are constantly influencing our decisions — including decisions that we may later come to regret.

 

Almost exactly 110 years after Wilhelm Voigt’s audacious heist, Hillary Clinton and Donald Trump squared off in one of three televised debates. You might remember it. In a town-hall format, the candidates were able to roam the stage. And Trump certainly did roam, following Clinton around as she answered questions, looming behind her, always on camera, clearly visible over the top of Clinton’s head.

After the debate, that was all anyone could talk about. Was it an attempt at intimidation? Perhaps. But there’s something else about that footage of Donald Trump stalking Hillary Clinton: he towers over her.

Voters were being offered all kinds of choices in that election but one that was never really articulated was this: would you like to elect the third-tallest president ever, or the shortest president since James Madison two centuries ago?

There’s not much doubt that some voters were influenced by the disparity in height. The US does elect a lot of tall presidents. Trump was taller than Hillary Clinton. Obama was taller than McCain. Bill Clinton and George Bush Sr were the same height — towering over tiny Ross Perot, the feisty independent challenger they beat into third place. Bush Sr was taller than Dukakis. Reagan was taller than Carter, Nixon was taller than Humphrey, Kennedy was taller than Nixon, Truman taller than Dewey. Lyndon Johnson was taller than pretty much everyone. Are we electing a president here, or picking a basketball team? Of course there are some exceptions to the rule: when Carter beat Ford, it was a victory for the little guy.

But serious statistical analysis concludes that taller presidential candidates are more likely to win the election, more likely to win re-election, and more likely — unlike Donald Trump — to win the popular vote. Since the dawn of the television age, the only person ever to have overcome a height deficit of more than three inches was the incumbent George W Bush running against John Kerry.

Hillary Clinton would have been the first female president, true. She would also have been the first candidate since 1812 to win the presidency despite a 10in height disadvantage. Americans may not have elected any female presidents over the years — but they haven’t elected any short men, either — not in a long, long time.

This isn’t just about presidential elections and it isn’t just about height. Across the world, voters favour candidates based on the most superficial characteristics imaginable. For example, one study — by economists Daniel Benjamin and Jesse Shapiro — found that people were fairly good at predicting the victor of an election for state governor after being shown a brief clip of a gubernatorial debate with the sound turned off: just looking at the candidates seemed to be enough to judge who voters would pick. In fact, giving people the audio as well actually made the predictions worse, presumably because it distracted them from what mattered: appearances.

We hairless apes seem to go for simple proxies when judging someone’s capacity for leadership. That 400-page manifesto? We’re not going to read it. But we pay close attention, whether we realise it or not, to the fine details of a candidate’s posture, styling, clothes — and, of course, height.  Corporal Muller and his men were completely taken in by Wilhelm Voigt’s appearance and mannerisms. But they’re not the only ones to pay attention to appearances.

Consider the advertising classic, “I’m not a doctor but I play one on TV.”  And then, as though it was the most natural thing in the world, the man who admits he isn’t a doctor goes on to tell us what brand of cough syrup to buy. Even Wilhelm Voigt would not have been quite as audacious as to announce: “I’m not a captain, I’m just wearing the uniform.”

And yet the advertisements work. We buy the cough syrup from the man who tells us, “I only look like a doctor”. That’s how powerful appearances can be. And what about “I’m not a successful businessman, but I play one on TV?” Oh — I think I know that guy.

 

Fraudsters using the playbook of Wilhelm Voigt trick people every day. First, they get the appearances right. Maybe it’s a text message that looks like it’s from your bank — the phone number is right, after all. Maybe the doorbell rings and the man is standing there with an official-looking ID; he wants to come and check your electricity meter. That ID does look genuine. Maybe it’s a smooth-talking politician with a good suit. Milgram well understood the need to get the clothes right. In a variation where the experimenter didn’t wear a lab coat, few people went to 450 volts.

Second, fraudsters put people into what psychologists call a “hot state”. We don’t think so clearly when we’re hungry, or angry, or afraid. Wilhelm Voigt yelled at Corporal Muller. A politician who wanted to put people into a hot state might announce that the country was being taken over by gangs and terrorists, and that his opponent should be locked up. Whatever works.

Third, they pull the heist one small step at a time. They start with the request for information: where are you taking these men? You are Ms Jane Doe, aren’t you? I’m sorry to report that your bank account has been compromised, Ms Doe. Just enter your password and username — just like you usually do — and we’ll sort it out for you.  Give us someone who looks or sounds the part; apply a bit of fear, anger, lust or greed; and then proceed in salami slices from the reasonable to the insane, so smoothly that we don’t stop to think. That’s how Wilhelm Voigt fooled Corporal Muller. But it’s how he would have fooled any of us, if he caught us at the wrong moment.

At first it looked as though Voigt would enjoy the fruits of his acting skills in peace. But as he relaxed with his money, a former accomplice of his saw the reports of the daring heist in all the newspapers and remembered a prison conversation in which Voigt had dreamt of such a coup. He promptly reported Voigt to the authorities.

When four detectives burst into his apartment at six o’clock in the morning, they found Voigt enjoying breakfast. He protested that the timing was inconvenient. “I should like a moment to finish my meal.”

So the detectives watched him break open another crusty white roll, spread on a thick layer of butter, and wash it down with his coffee. You can’t help but admire the audacity.

At trial, Voigt became a folk hero. The judge sympathised with the way he had been treated, gave him an unexpectedly short sentence, then took off his judge’s cap and stepped down to clasp Voigt by the hand. “I wish you good health throughout your prison term, and beyond.”

The German authorities felt that — in light of the popularity of the Captain of Köpenick — even more ostentatious clemency was required. They pardoned him after less than two years in jail. The kaiser himself was said to have chuckled “amiable scoundrel” when he heard of the deed.

Statues of Voigt were erected and waxworks made of him — including one in Madame Tussauds in London. He was paid to record his story so that people could listen to him recount his deeds. He went on tour, posing in his uniform and signing photographs of himself for money.

A local restaurateur begged him to come and dine as often as he wanted, free of charge, knowing that his presence would attract other customers. A wealthy widow gave him a pension for life. Never let it be said that the Germans lack a sense of humour. But while the comedy is undeniable, we should not be too fond of the Prussian prankster. Perhaps Wilhelm Voigt’s adventure did little harm in the long run. The same cannot be said for some of the con artists who followed in his footsteps. It is exciting to read about a fraud — from a distance. It is not so funny to live through one.

 

This article is based on Episode 2 of my new podcast, “Cautionary Tales”. [Apple] [Spotify] [Stitcher]

Published in FT Magazine, 16/17 November 2019.

 

Further reading

The best English-language account I could find of the Köpenick story is by Benjamin Carter Hett: “The ‘Captain of Köpenick’ and the Transformation of German Criminal Justice, 1891-1914,” Central European History 36 (1), 2003.

I first read about the story in Nigel Blundell’s The World’s Greatest Mistakes. Other accounts are at Strange History and The Rags of Time. Koepenickia offers various contemporary German newspaper accounts. There are many small differences in the accounts but the overall story remains just as remarkable.

The definitive account of Stanley Milgram’s experiments is Gina Perry’s Behind the Shock Machine, and Alex Haslam was interviewed by Radiolab in a great episode about the same topic.

An overview of the evidence on tall presidents is Gert Stulp, Abraham P. Buunk, Simon Verhulst and Thomas V. Pollet, “Tall claims? Sense and nonsense about the importance of height of US presidents,” The Leadership Quarterly, Volume 24, Issue 1, 2013.

The study of gubernatorial elections is Daniel J. Benjamin and Jesse M. Shapiro, “Thin-Slice Forecasts of Gubernatorial Elections,” The Review of Economics and Statistics, vol. 91(3), pages 523-536, 2009.

Daniel Hamermesh’s Beauty Pays looks at the overall evidence that appearances matter – including in politics.


Highlights

Cautionary Tales Ep 1 – DANGER: Rocks Ahead!

 

Torrey Canyon was one of the biggest and best ships in the world – nevertheless its captain and crew needlessly steered it towards a deadly reef known as The Seven Stones. This risky manoeuvre seems like utter madness, but the thinking behind it reflects a mistake we are all prone to make when we fixate on a goal and a plan to get us there.

Featuring: Enzo Cilenti, Ed Gaughan, Rufus Wright and Melanie Gutteridge.

Producers: Ryan Dilley and Marilyn Rust. Sound design/mix/musical composition: Pascal Wyse. Fact checking: Joseph Fridman. Editor: Julia Barton. Recording: Wardour Studios, London. GSI Studios, New York. PR: Christine Ragasa.

Thanks to the team at Pushkin Industries, Heather Fain, Mia Lobel, Carly Migliori, Jacob Weisberg, and of course, the mighty Malcolm Gladwell.

[Apple] [Spotify] [Stitcher]

 

Further reading

Two authoritative books were written about Torrey Canyon shortly after the events described in the podcast, and I’ve relied on both of them.

One is Oil and Water by Edward Cowan, and the other is The Black Tide by Richard Petrow. Both excellent, both long out of print.

For a more contemporary discussion of plan continuation bias I strongly recommend Meltdown by Chris Clearfield and András Tilcsik.

This book also contains a good description of Marlys Christianson’s study of plan continuation bias in emergency rooms, “More and less effective updating” in Administrative Science Quarterly 2018.

The study of landings at Hartsfield-Jackson airport is Chris A. West’s “The Barn Door Effect”.

The dark tale of Sir Cloudesley Shovell is told in Dava Sobel’s hugely enjoyable book Longitude, although I am sceptical Sir Cloudesley actually survived long enough for anyone to murder him.

 

 


Highlights

Steve Levitt plays poker with the FT

“I used to play poker a ton and then I quit. It’s too time consuming and toooo boring.” There’s something boyish about the way Steve Levitt drags out the word. But then his inner economist reasserts itself: “What you come to realise about poker over time is that the ratio of luck to skill in the short term is too high to make it feel productive.”

Here’s what you need to know about Levitt. He used to be a rising star in academia, with prestigious positions at Harvard and then Chicago. He picked unusual topics: cheating sumo wrestlers; the link between legal abortion and falling crime. His detective work with data was influential. In 2003, when he was just 35, Levitt won the John Bates Clark medal, often a precursor to the Nobel memorial prize. The journalist Stephen Dubner profiled him in The New York Times Magazine; a book deal followed for the pair, and the result, Freakonomics, sold four million copies. So, I’m playing poker with a data-savvy millionaire genius, a game I understand only in the sense that I’ve written about it. The good news is that Levitt doesn’t play any more. The bad news is that on his last outing, five years ago, he was within one hand of the final table at the World Series of Poker … I am doomed.

We’re at a casino in Mayfair: just me, Levitt, and the dealer, JD. At 47, Levitt has greyed since I first interviewed him nine years ago. But he still looks young and he’s better dressed than he used to be, in a silver-grey jacket and a midnight-blue shirt. JD, who deals for the poker professionals on late-night TV, looks the part in a black suit and waistcoat. Your correspondent has just come from a radio studio and is dressed accordingly. The game is Texas Hold’Em, the modern standard for poker, in which each player constructs a hand from his two concealed cards plus five communal cards on the table. The stakes: £100 each, winner takes all.

Like any good economist, I understand how to play poker in theory but am not sure how to do it in practice. (It takes me a couple of dry runs to figure out whose turn it is to bet.) We have 10,000 chips each and I have a king in my very first hand. The “flop” of three communal cards reveals a second king so, after a couple of small raises, I go in hard with a bet of 2,000 chips. Levitt chuckles, which is unnerving. After pausing, he folds. I get the impression he’s not convinced of my expertise – but I’ve won the first pot, even if it is tiny.

I’m trying to write down all the hands for posterity but that quickly becomes ludicrous. So, too, is the idea of conducting an interview while playing. Concentration is required – from me at least. I guess that Levitt wouldn’t break sweat if he had to play and chat simultaneously. I fold, and Levitt opts to show me his cards. I ask what the thinking is behind showing me that he was bluffing. Both JD and Levitt rush to explain that he had two pairs, and wasn’t bluffing at all. I realise that I have no idea what’s happening. This could be a long afternoon. Or, more likely, a short and expensive one.

“I think the statistics of poker are actually probably overrated,” Levitt says. “Most of poker is based on pretty easy rules of thumb. In a game like this there’s not many hard calculations to do.” He tells me about some research he conducted in Las Vegas, with a range of poker players including 18 winners of World Series events. “Almost all of them continued to use the rules of thumb that you use in regular poker. Even though they were not the right rules to use in the game we ran.” He concludes that experts can quickly be undone. “If you change the rules or the incentives, they tend to do very poorly.”

As our appointed break time approaches, Levitt’s getting into his stride. He’s more aggressive than I am, pushing me out of hands. But when I do stay in until our first showdown, I lose: it’s a pair of fives against Levitt’s pair of sevens. There have been no dramatic moments, yet I am slowly bleeding chips. JD is encouraging. “The play hasn’t been that bad,” he says. Levitt agrees. Still, I am losing.

Then, the very last deal before the hour, I have a decent hand: two pairs. The pot’s not a bad size and Levitt might be drawing to get a flush, so I decide to shut things down: I bet big. Levitt folds, and, as we break, I’m not far behind, with 8,900 chips to Levitt’s 11,100. I observe that since I first met Levitt, he has become a celebrity. He snorts. He’s relieved that nobody ever recognises him because he looks “so generic”.

“The nice thing is, the perks that come with the success of our book are opportunities. People come to me all the time with great opportunities.” Such as what? Money? Secrets? Power? For Levitt, the answer is simple: fun. That could mean anything from a round of golf at Augusta to working with the US Defense Advanced Research Projects Agency to prevent sex trafficking. He designs algorithms to catch credit-card fraud, and for horseracing syndicates. “The horseracing is the most fun thing.”

At one point, Levitt talks about his academic career in the past tense. “I view everything I do as a hobby now,” he says. “I no longer feel like an adult. I feel like what has happened is that I’ve been given so many opportunities that I am somehow back into a very childlike phase that I’m in the candy store and I get to pick and choose whatever I want.”

Immediately after the restart, I’m dealt a 9-7; it’s trash. But Levitt doesn’t raise the stakes so I stay in and see the flop: 4-6-8. Now either 5 or 10 will give me a straight; I call Levitt’s diffident bet. The next card is the 5. I do have a straight. It’s a monster hand in two-player poker. With 4, 5, 6 and 8 on the table, Levitt might have a straight too. But I know something he doesn’t: I have a 9, so my straight will be higher than his.  Levitt comes in with a solid bet, I raise, and he calls. Then the final card comes: another 9. That’s annoying because it might allow Levitt to split the pot with me. If Levitt has the 10 and 7, he will beat me. But that’s vanishingly unlikely. I go all in. Levitt calls.

If I win, I’ll be 18,000 chips to 2,000 chips ahead. If Levitt wins, game over. And … he has the 10 and 7 of spades. I’ve lost it all. Just like that.

“He called with the nuts,” says JD. JD and Levitt are quick to commiserate. Levitt had no idea that I had him beaten all along. JD admires how I reeled Levitt in. Levitt says that I was incredibly unlucky; that last card, the 9, killed me. And the 10-7 was the only combination in the deck that could have beaten me.
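
A quick brute-force check bears this out. The sketch below is a minimal illustration rather than anything used at the table: it enumerates every two-card holding the opponent could have held against the 4-6-8-5-9 board. It ignores suits, which are not reported and which do not matter here since no flush was in play, and it skips the ace-low straight, impossible on this board.

```python
from collections import Counter
from itertools import combinations

# Rank-only evaluation of the final hand. Suits are not reported and no
# flush was in play, so only straights, sets, pairs and high cards are
# scored; the ace-low "wheel" straight is ignored, being impossible here.

def score(ranks):
    """Score five card ranks; a higher tuple means a better hand."""
    cnt = Counter(ranks)
    counts = sorted(cnt.values(), reverse=True)
    distinct = sorted(set(ranks))
    straight = len(distinct) == 5 and distinct[-1] - distinct[0] == 4
    if counts[0] == 4:
        cat = 7              # four of a kind
    elif counts[:2] == [3, 2]:
        cat = 6              # full house
    elif straight:
        cat = 5
    elif counts[0] == 3:
        cat = 3              # three of a kind
    elif counts[:2] == [2, 2]:
        cat = 2              # two pair
    elif counts[0] == 2:
        cat = 1              # one pair
    else:
        cat = 0              # high card
    kickers = sorted(ranks, key=lambda r: (cnt[r], r), reverse=True)
    return (cat, kickers)

def best(hole, board):
    """Best five-card score from two hole cards plus the board."""
    return max(score(list(c)) for c in combinations(list(hole) + board, 5))

board = [4, 6, 8, 5, 9]      # flop 4-6-8, turn 5, river 9, as reported
hero = [9, 7]                # the 9-7 described above

deck = [r for r in range(2, 15) for _ in range(4)]   # 11=J ... 14=A
for card in board + hero:
    deck.remove(card)        # remove one copy of each known card

beats_hero = {tuple(sorted(v, reverse=True))
              for v in combinations(deck, 2)
              if best(v, board) > best(hero, board)}

print(beats_hero)            # {(10, 7)}: only ten-seven makes a higher straight
```

The only holding it finds is ten-seven, matching the verdict at the table.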

“Now that’s a hand worth writing about,” says JD. I’m feeling pretty good: I’ve lost to a “bad beat” in true poker-pro fashion. But gradually the congratulations fade into criticism. Levitt points out that I should never have gone all-in. It was a small risk but a pointless one, because Levitt would never have called me except in the unlikely event that he had the 10.

“That’s an essential rule of thumb you need to know,” Levitt says. “But that’s not Poker 101. That’s Poker 403. That’s Master’s Level Poker. PhD level Poker.” Maybe. But I’ve been schooled.

 

Written for and first published in the Financial Times on 27 June 2014. (I’m not sure why I didn’t post it at the time, but you might enjoy it now.) Levitt was promoting his book Think Like A Freak.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


12th of October, 2019
Highlights

“If you want people to do something, make it easy.” Richard Thaler has Lunch with the FT

The Anthologist doesn’t serve cashew nuts, so I order a bowl of smoked almonds instead. When they arrive, caramelised and brown as barbecue sauce, I ask for them to be put right in front of Richard Thaler. He protests that the waiter isn’t in on the joke.

The readers will be, I assure him. “The educated ones, perhaps,” he concedes.

Those educated readers may know that Professor Thaler is a Nobel laureate economist, but is even more famous as the co-author of Nudge. They may even know — from his later book, Misbehaving: The Making of Behavioural Economics — that the 73-year-old is fond of telling an anecdote about a bowl of cashew nuts that sheds light on his approach to economics.

He served the notorious bowl to some guests while dinner was roasting in the oven, then watched everyone compulsively munch on the nuts and gradually spoil their appetites. So Thaler decided to remove the temptation by hiding the cashews in the kitchen. His guests thanked him.

It would be an unremarkable tale, except that such behaviour simply does not fit the rational economic model of human behaviour. Either eat the cashews or don’t eat the cashews, says classical economics, but don’t thank the person who moves them out of easy reach.

Reflecting on such stories helped Thaler create “behavioural economics” — a branch of the discipline that aims at psychological realism. Doing so also helped him with the equally difficult task of persuading other economists to take the behavioural view seriously.

True, it’s just a story about cashews — but if you don’t think short-termism and weak willpower are economically significant in the grand scheme of things, I have a payday loan, a pension drawdown scheme and an auto-renewing gym membership to sell you.

And, sure enough, Thaler’s ideas about the importance of realistic human behaviour have permeated into the economic mainstream, particularly the study of finance. His policy proposals have influenced tax collection, organ donation, energy efficiency drives — and most notably pensions, where participation in workplace schemes dramatically increases when people must explicitly opt out if they are not to be automatically enrolled.

Thaler cultivates a happy-go-lucky persona, a man whose own weaknesses help him understand the weaknesses of others. “You assume that the agents in the economy are as smart as you are,” he once told Robert Barro, one of the pillars of the economics establishment, “and I assume that they’re as dumb as me.” Barro was happy to agree with that.

This sunny July, however, Thaler is a model of self-control. “Notice how many nuts I’ve had so far,” he announces, 20 minutes into our conversation. He gestures for emphasis. “Zero.”

I’m not surprised by that, although I am when Thaler — who struck me as a bon vivant — admits that he has been skipping lunch entirely. He’s in London for a fortnight, teaching a course at the London campus of the University of Chicago Booth School of Business, and after a generous breakfast he says he has neither the need nor the time for lunch.

This may also explain his lack of interest in the restaurant itself. We meet at the business school, and he’s chosen the closest place — announcing “it’s me again” to the waitress who stands outside. I don’t even glimpse the interior of The Anthologist, because she promptly directs us to a pavement table, which has a large masonry wall on one side and on the other — if you squint — a view down Gresham Street to a back corner of the Bank of England. The scooters and trucks roar past a couple of yards away, but Thaler has no trouble making himself heard.

He used to squeeze more out of his annual fortnights in London. “I would spend the morning with the Behavioural Insight Team” — the famous “nudge” unit established by David Cameron and inspired by Thaler’s book with the law professor Cass Sunstein — “then come and teach all afternoon. And then half the nights there would be dinners with friends. And I was comatose at the end of the first week.”

He does admit to having a few dinners planned, though — and to timing his visit to coincide with the Wimbledon Men’s Final. He and his wife, the photographer France Leclerc, had Centre Court tickets. Was he a fan of Djokovic or Federer?

“We support Rafa. Although if he had been playing in a match like that it might have got too much for my wife. She would have been hiding somewhere by the fifth set.”

It was the same on election night: the Trump/Clinton contest reduced his wife to a nervous wreck. “And who were you supporting in that one?” I ask. He gives me a withering look. “At least credit me with sentience.”

President Barack Obama seemed to appreciate behavioural economics and gave Thaler’s co-author, Cass Sunstein, a senior appointment. The Trump administration, observes Thaler, has no interest in behavioural economics. “Look, there’s no demand for expertise of any sort . . . The lack of competence and expertise is like nothing anyone has ever seen.”

Whitehall’s Behavioural Insight Team seems to be displaying more longevity than the White House equivalent. “The key move they made very early on was to extricate themselves from government.”

They’re now a semi-autonomous social enterprise in which the Cabinet Office retains a stake. They made that move, of course, before Cameron’s referendum-induced autodefenestration. “I will say that David Cameron never talked to anybody at the Behavioural Insight Team about the Brexit referendum”.

And what should they have said if he had? “One thing for sure is Remain is a horrible name. It’s weak. Whereas Leave is strong.”

Thaler has written about the referendum before in the Financial Times. He reminds me that Theresa May said, before the referendum: “The reality is that we do not know on what terms we would have access to the single market.”

The waiter interrupts us and presses Thaler to order some wine. He waves him away. “No, I have to teach for the next three hours.”

We return to May, and her explanation that a vote to Leave would be a vote for something undefined and unknowable. Yet as prime minister, she felt that it was quite sufficient to declare that Brexit means Brexit. “Brexit means Brexit — that is one of the dumbest statements that has ever been uttered by a head of state. And I’m aware that there are thousands of tweets one could compare it with. I mean, it’s simultaneously meaningless and wrong.”

The waiter finally manages to get us to order something. Thaler goes for a crispy duck salad. “It’s called salad, you know it has at least the illusion of being healthy”. I’m tempted by the Wagyu beef burger but feel ashamed (social pressure means nothing to homo economicus but is a powerful nudge for human beings), so I order some cod with samphire.

The waiter is keen to upsell. Spritzer? Some halloumi? Thaler and I are baffled by the suggestion of halloumi with cod and duck, although I would have cracked if the waiter had tried to sell us French fries.

We turn to the state of economics, and how it became so wrapped up in the idea of rational agents. Some of those models have a hypnotic pull, I suggest: they’re so ingenious, so difficult, and once you’ve understood how they work you don’t want to abandon them in favour of the bowl-of-cashews guy.

I’m recalling a time I was reading a classic article by Barro — in the emergency room, having dislocated my jaw after a spectacular yawn, which I protest was unconnected to the research paper in question. I don’t get far. “You should change this story!” hoots Thaler. “It should be that you read this paper and, literally, your jaw dropped.”

It’s a reminder that Thaler is a storyteller as well as a sharp theorist. Misbehaving is full of stories. “I decided to just start writing things that would amuse me,” he says — including an account of a huge academic bunfight over the allocation of corner offices at the University of Chicago economics department that cannot fail to provoke Schadenfreude.

“I sent that to my friend Michael Lewis. I said, ‘How much of the book could be like this?’ and he said ‘All’.”

Lewis (whom I interviewed here) isn’t a bad sounding board: he’s the author of Liar’s Poker, Moneyball and The Big Short. He also wrote a biography of Thaler’s friends and colleagues, the psychologists Daniel Kahneman and Amos Tversky. I wouldn’t mind getting him to look over my first drafts.

When it arrives, the cod is pleasant enough, but there isn’t much of it. I’m regretting not ordering the fries. The smoked almonds look tasty, but they’re across the table sitting beside Thaler’s left hand. He hasn’t so much as twitched towards them.

The key message of Nudge was that governments could improve the health and wellbeing of their citizens without infringing on their liberty, simply by more thoughtfully designing their rules, procedures, or even labelling. “If you want people to do something, make it easy.” Put the cashews in the kitchen and the fruit by the cafeteria checkout.

More recently, Thaler has been thinking and writing about what he calls “sludge”. It’s the same procedure in reverse: if you want people not to do something, make it difficult. Reaching for an example, Thaler has a bone to pick with The Times.

The first review of Misbehaving was published there, and Thaler’s editor sent him a link. “And I can’t get past the paywall without subscribing.” But then he notices there’s an offer of a month’s trial subscription at an introductory rate. “But I read further, having written a book about this, and I see that it will be automatically renewed.”

Not only that, it will be renewed at full price, “and that in order to quit, I have to give them 14 days’ notice. So the one month free trial is actually two weeks. And I have to call London [from Chicago] in London business hours, not on a toll free line.”

He pauses and chides me to check that the FT isn’t placing similar sludge in the way of readers who wish to unsubscribe. I assure him that nobody would ever want to unsubscribe, but in any case such knavery would be beneath us. But part of me wonders. “Check your policy at the FT,” he advises. (Later, I check. The FT offers a very similar introductory offer, but I am relieved to discover that the newspaper offers regional phone numbers and you can also cancel online.)

While we’re talking about the consumption of digital goods, I am keen to ask him about how he deals with email, smartphones and social media. We’re in the middle of a colossal set of experiments in behavioural manipulation that would have been hard to imagine when Sunstein and Thaler wrote Nudge over a decade ago. Google, Apple, Facebook and Amazon are constantly running little trials to see what we do in response.

“The world has changed. I remember that while we were writing the book, I got my first iPhone.”

But does it tempt him? Distract him? An iPhone, it seems to me, is a bottomless bowl of digital cashews. But he’s not worried. “I’m not on Facebook at all . . . I am on Twitter and I find much of it to be quite useful. There’s a growing academic economics Twitter that’s fantastic. There’s almost no ad hominem. There are people live-tweeting conferences. Fantastic. There are people who will give a 10-tweet summary of some new paper.”

Thaler stops eating his salad — he’s managed to get most of it down, in between his answers. I’ve long since finished my little piece of fish. The smoked almonds have somehow migrated into the centre of the table, easily within my reach. They are untouched. “Let the record be noted that my consumption so far is zero,” he declares.

Thaler isn’t interested in coffee or dessert, but says he has time if I want something. I order espresso. After it arrives, I take a sip, and then my hand moves instinctively towards the almonds before I catch myself. He laughs. “That was a trembling hand.”

My involuntary slip prompts us to start talking about accidents. “Here’s something I was thinking about this morning,” he says. “All these announcements to mind the gap. Can that conceivably be useful?”

“Mind the gap,” is part of the sonic wallpaper of the London Underground, a reminder not to accidentally stumble into the space between Tube train and platform. I wonder if Transport for London has run an experiment. “I’m wondering that too.” Although we both doubt it.

“Now here’s my hypothesis. 99.9 per cent of the people on the Tube have blocked this out long ago. And whatever the percentage of tourists is, half of them have no idea what ‘mind the gap’ means. It could be ‘cheerio’.”

In short, the people who might conceivably benefit from the warning probably don’t understand it. So why not experiment with some different approaches to see if that reduces accidents?

The proposal is typical Thaler. He’s noticed a feature of everyday life that most of us either overlook or take for granted — and he’s turned it into an easily implementable experiment that might actually make the world a better place.

It’s time for him to go and teach. We shake hands, and then he reaches forward, slowly and deliberately, for a smoked almond. He holds it up in front of me as though displaying a fine diamond.

“One!” he says. Then he pops it into his mouth, and ambles off towards the business school. Only when his back is turned do I dare grab one myself.

The Anthologist 58 Gresham St, London EC2

Smoked almonds £3.75

Crispy duck salad £11.50

Cod with samphire £14.95

Sparkling water £3.95

Double espresso £2.90

12.5 per cent service £4.63

Waiter rounds up the bill (a nudge?) £0.32

Total £42.00

 

Written for and first published in the Financial Times on 2 August 2019.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


Highlights

How behavioural economics helped me kick my smartphone addiction

The year 2011 was a big one for me. My son was born. We moved to a new city. I published a book. But something else happened that was in some ways more significant: on February 9 2011, I bought my first smartphone.

It didn’t feel like a milestone in my life at the time. I didn’t note it down in a diary or commit the date to memory. Only finding a copy of the receipt helped pin down the day. Yet I have come to realise that the phone was a very big deal indeed.

Daniel Kahneman, Nobel laureate and author of Thinking, Fast and Slow (UK) (US), distinguishes between the “experiencing self” and the “remembering self”. My remembering self dwells upon the landmark moments such as the new baby. But my experiencing self is all about the phone.

I spend more time interacting with it than I do interacting with my children. I am in the presence of the device more than I am in the presence of my wife, although at least I have my priorities straight as to which I go to bed with.

As Cal Newport puts it in a new book, Digital Minimalism (UK) (US), we didn’t sign up for this. My first email account (1994) received a handful of messages a day, most of them newsletters I subscribed to in order to prevent cobwebs forming in my inbox. Facebook (2004) was a curiosity, less interesting than the latest computer game.

The first iPhone (2007) had no app store and was originally conceived as an iPod that made phone calls — although since “crackberry” had just been named the word of the year by Webster’s New World Dictionary, perhaps we should have seen what was coming.

But we didn’t. The hardware and software of the mobile age have gradually and profoundly entangled themselves in most parts of most people’s lives. If you are anything like me, you pick up your phone much more often than you pick up a knife and fork, and spend far longer reading email than reading books.

Not that I wish to grumble. These tools are enormously powerful. Without them I’d need to hire a secretary, spend hours playing phone tag and give up on working during long journeys by train and plane. Yes, they may occasionally distract me during the school nativity play, but the alternative would have been to miss the play entirely, because the office and the school are 50 miles apart.

I am not entirely happy with the role these technologies play in my life, but neither do I want to relinquish them. I know I’m not alone. For several years now, I’ve been dispensing sporadic advice about email overload both to readers and — if I am honest — to myself.

But late last year, I decided to do something more radical: to deploy everything I knew about economic theory and behavioural science, along with a few hard-won practical discoveries, to rebuild my relationship with the digital world from scratch. This is the story of what I learnt.

The power of the status quo
Inertia is always the first obstacle. Richard Thaler, who won a Nobel Memorial Prize for his contributions to behavioural economics, coined the term “endowment effect” to describe the behaviour exemplified by an oenophile economist.

The economist had snapped up some Bordeaux wines for $10 a bottle, only to see them appreciate in value to $200 each. The economist wouldn’t have dreamt of paying $200 for a bottle of wine, but didn’t want to sell the wine for $200 either. He was happy to drink it on special occasions instead.

This behaviour is illogical: either the economist should prefer $200 or he should prefer the wine, and which he actually possesses should make no difference. Yet his actions seem perfectly natural, and Thaler and colleagues were able to demonstrate similar behaviour in laboratory experiments.

We like what we have, and these experiments suggest that we have no better reason for liking what we have other than that we have it: the disadvantages of choosing something else often loom larger than the advantages. As a result, we are reluctant to relinquish what we have — including the digital tools we’ve grown accustomed to using.

For this reason, digital sceptics such as Cal Newport and Jaron Lanier suggest that the first step in a reassessment of your digital habits should be a sharp temporary break.


Lanier, a pioneer of virtual reality and the author of Ten Arguments for Deleting Your Social Media Accounts Right Now (UK) (US), advises at least a six-month break from all social media. Newport suggests a briefer but broader ban: not only no social media, but no Netflix, no Google Maps, no smartphones — no digital tools at all for 30 days, apart from whatever is professionally essential.

The point here is not a “detox”. There is no intrinsic benefit to taking a month off from computers any more than one might recommend a brief, invigorating break from smoking or opiates.

The aim is to change the status quo to allow a reassessment. It’s only after you put down the electronic rucksack overflowing with digital possibility and stroll off unencumbered that you’re in a position to make a sensible decision about whether you really want to carry it around all day long.

So, I stripped various apps off my smartphone. The first time I dragged an icon to the “uninstall” bin felt like a big step, but it soon became a giddy pleasure. Off went the news apps, and a blog reader called Feedly that absorbed a huge amount of my time and attention. I already eschew games on my phone, but would have removed them too with gusto.

I spared the Financial Times app (which surely passes Newport’s test of professional necessity), and also retained Google Maps, a podcast player, The Economist’s “Espresso” app, the camera and the weather. Newport would have been more radical but I felt satisfied with my choices.

The big question was: what to do with my social media accounts? Facebook was simply too troublesome to delete, especially since my personal account is connected in opaque ways to a “Tim Harford” page maintained by my publishers. But I never had Facebook on my phone and after briefly unfollowing or muting all my contacts, I had no problem staying logged out.

My Twitter habit is more of a problem. I have 145,000 followers, gently persuaded over 10 years and 40,000 tweets to follow me — that’s about 10 books’ worth, or 20 years of weekly columns. This alone was a reminder of just what an effort Twitter could be; but deleting the account felt like the nuclear option.

So what could I do? Two years ago, I hid the “mentions” column so that I don’t see what other people say about me on Twitter. (Much is friendly, some hurtful and almost all superfluous.) Yet I was still wasting a lot of time noodling around there for no obvious gain. So I deleted the smartphone app and on November 23 2018, I tweeted that I was planning to “get off Twitter for a bit”. By a pleasing coincidence, the last person I interacted with before logging out was the man who named the endowment effect, Richard Thaler.

Time for what?
One of the most important — and misunderstood — ideas in economics is that of opportunity cost. Everything we do is an implicit decision not to do something else. If you decide to go to an evening lecture, you’re also deciding not to be at home reading a bedtime story. If you spend half an hour browsing news websites, that’s half an hour you can’t spend watching football. Those 40,000 tweets cost me something, but I am not sure what and I certainly didn’t ponder the cost while tweeting them.

Neglecting opportunity costs is a very human trait; we often fail to bring them to mind when making our choices. One fun if slightly dated illustration of this is the choice between a £1,000 high-end CD player and a slightly less excellent £700 unit.

A difficult choice — until it is phrased as a choice between a top-notch £1,000 CD player or a £700 player plus £300 worth of CDs. At that point, most people clearly prefer the second option. The opportunity cost of the more expensive player could hardly be more obvious, and yet bringing the obvious to our attention changes our decisions.


For this reason I was determined not simply to cut back on my digital activities, but to fill the freed-up time and energy with something else. I focused on three activities. First, more exercise: I replaced Twitter with an exercise app that could run me through some brief, vigorous training sessions.

Second, more fun: I looked up some old friends and invited them to play role-playing games with me every other Sunday evening, rolling dice and pretending to be wizards. (I realise that Dungeons & Dragons isn’t cool. But neither am I, so I don’t care.)

And third, since social media is supposed to be about connecting with far-flung people, and since Christmas was looming, I decided to start writing letters to include with Christmas cards. I couldn’t write properly to everyone but I did manage to write serious letters to nearly 30 old friends, most of whom I’d not seen for a while. I reflected on our long friendships, brought to mind good times long past and, in particular, recalled important moments shared just by the two of us, nobody else. The letters were the antithesis of clicking “Like” on Facebook.

The experiment was beginning to get interesting.

Swiping, fast and slow
As Daniel Kahneman explained in Thinking, Fast and Slow: “When faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.” Rather than asking whether we should buy shares in Amazon, we ask, “Do I like to shop with Amazon?” Instead of pondering the leadership and managerial qualities of a presidential candidate, we ask ourselves whether we’d enjoy having a beer with them.

Tristan Harris, executive director of the Center for Humane Technology, argues that the digital services we use often perform this substitution for us. Imagine, says Harris, a group of friends on a night out, trying to figure out where they can go to keep the conversation flowing. They turn to their phones for a recommendation and find themselves gawping at images of cocktails on Instagram.

The phones, says Harris, replace the question, “Where can we go to keep talking?” with, “What’s a bar with good photos of cocktails?” Phones simply do not suggest options such as going back to someone’s apartment or strolling along the waterfront.

This happens all the time, and we often don’t notice the substitution. Looking for love, we swipe through faces on Tinder rather than searching for local clubs or volunteering activities. Picking up a phone to check the time in the morning, we find the question “What’s the time?” quickly substituted with “What did I miss while sleeping?”

While writing the last paragraph, I was confronted with the perfect example. It started to rain. Wanting to know whether the shower would last, I typed “weather” into Google. I was given an instant answer to my question, but I was also shown a list of weather presenters. Human faces! They are always eye-catching.

An old university acquaintance became a TV weather presenter; I wondered how she was doing. Who wouldn’t? Of course Google substituted an easier question: What does she look like these days? Other photos of weather presenters were also offered and, 30 seconds later, I was looking at pictures of a completely different weather personality, Tomasz Schafernaker, stripped to the waist.

Fifteen years ago, I would have struggled to explain this sequence of events to my wife. But nowadays, no explanation is really needed. We all know how swiftly and easily “When will it stop raining?” can lead to “What do Tomasz Schafernaker’s nipples look like?”

Trying to get some work done with an internet-enabled device is like trying to diet when there’s a mini-fridge full of beer and ice cream sitting on your desk, always within arm’s reach. You can crack open a can and take a swig before you’ve even realised what you’re doing.

Perhaps even worse, the tempting rewards are unpredictable. The psychologist BF Skinner once found himself trying to eke out a supply of food pellets he’d been using to reward rats. To his surprise, he found that “intermittent reinforcement” — sometimes the rats would get a pellet, sometimes not — was more motivating than reliable rewards. Unpredictable treats are highly addictive, just like email, social media or clickbait headlines.

So what to do about this problem? It’s not easy: by definition an intuitive response occurs before we have time to stop and think. The obvious solution is to create some friction. I installed a software plug-in called Strict Workflow on my desktop browser. With one click, it blocks time sinks such as Twitter, YouTube and various clickbait news websites for a period of 25 minutes.

It’s astonishing how many times during those 25 minutes I reflexively check, see the blocking message instead and go back to work. I’m hopeful that a few weeks or months with this blocker may break this fast-twitch habit, but in any case the software works.
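
The same friction is easy to improvise even without the plug-in. The sketch below is not Strict Workflow itself, just a minimal illustration of the idea: it points a handful of time sinks at the local machine for 25 minutes by editing the hosts file, then restores it. The site list, file path and timing are assumptions for the sake of the example, and it needs to be run with administrator rights.

```python
import time

# A minimal sketch of the "create some friction" idea -- not the Strict
# Workflow plug-in. It blocks a few distracting sites for 25 minutes by
# pointing them at the local machine in the hosts file, then restores it.
# Assumes a Unix-like hosts file path and administrator rights.

HOSTS = "/etc/hosts"          # on Windows: C:\Windows\System32\drivers\etc\hosts
SITES = ["twitter.com", "www.twitter.com", "youtube.com", "www.youtube.com"]
MARKER = "# temporary-block"  # tag our lines so we can remove them cleanly
BLOCK_MINUTES = 25

def block():
    # Append one loopback entry per site, tagged with the marker.
    with open(HOSTS, "a") as f:
        for site in SITES:
            f.write(f"127.0.0.1 {site} {MARKER}\n")

def unblock():
    # Rewrite the hosts file without the tagged lines.
    with open(HOSTS) as f:
        lines = f.readlines()
    with open(HOSTS, "w") as f:
        f.writelines(line for line in lines if MARKER not in line)

if __name__ == "__main__":
    block()
    try:
        time.sleep(BLOCK_MINUTES * 60)   # one 25-minute focused stretch
    finally:
        unblock()                        # always restore the hosts file
```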

Meanwhile, by uninstalling news apps, Twitter and Feedly, I’d made my phone less like a sweet shop. As a testimony to the power of unconscious habit, after uninstalling Feedly, I deleted a few incoming emails, then unthinkingly tried to find it. It took a moment for me to realise I was searching for an app that I’d deleted less than a minute earlier.

It was a reminder that there’s more going on here than poor or short-sighted decision-making: often when we use our phones, we’re not really making any conscious decision at all.

Spillover benefits
Paul Romer won a Nobel Memorial Prize recently for analysing the way different innovations would spill over, enabling other innovations and the process of economic growth itself. Four weeks into my experiment, I was noticing some unexpected spillover benefits myself. The phone was still tempting, but decreasingly so. I took my children to see a Christmas film and, for the first time in years, didn’t feel the urge to check it.

I was getting a real sense of the mutually reinforcing nature of the distraction ecosystem — and how I’d failed to see it clearly when inside it. In November, for example, I would have been scrolling through Feedly looking for interesting material. I told myself I was looking for things to read, but really I was looking for things to tweet about. If pushed for time, I’d sometimes tweet things instead of reading them. This foolishness was evidence of a seriously bad habit.

But having uninstalled Twitter, I found myself less tempted to go and look at my Twitter stats (nothing to see) and also less tempted to flick through the blogs. After all, if I wasn’t going to tweet about them, why not read a book instead? Each new app that I removed from my phone weakened my tendency to pick up the device; often, it made other apps less useful or less appealing. I hadn’t seen this effect coming, but I wasn’t complaining.

Adapting to events
The first of January is usually the date for turning over a new leaf but, with hindsight, beginning my experiment in late November instead was an accidental masterstroke. The run-up to Christmas is a different kind of busy: the volume of email declines, replaced by Christmas cards and shopping lists. It’s a time when we often see people face-to-face instead of on Facebook.

By unplugging various digital services, I was moving with the wind at my back; doing firmly and deliberately what I might anyway have drifted towards.

The experiment was working well. I wasn’t missing Twitter at all. I was spending much less time with the phone. Some old friends were emerging from the woodwork to tell me how much they enjoyed receiving my letter. A few fretted that I was going through some kind of crisis, but overall the letters felt like a vastly better way to contact people than through Facebook.

When I did see friends and family, I found it easier to give them my full attention. Sherry Turkle, author of Reclaiming Conversation (UK) (US), has found that people initially used texts as an add-on to face-to-face conversation, but the texts soon became a substitute: more convenient, more controllable.

The problem with real conversation, one high-school senior told her, was that “it takes place in real time and you can’t control what you’re going to say”.

I sympathise, and we probably all had face-to-face conversations over Christmas that we wish could have been conducted from a thousand miles away. But while real conversation can be tiring, it is also vastly more rich and meaningful than a few dozen bytes of text. The less distracting I found my phone, the more I enjoyed talking to the people in front of me.

At the end of December came a strange and unexpected test: I was awarded an OBE in the New Year honours list. Suddenly the digital hush of the year’s twilight was interrupted by a steady stream of congratulatory messages.

I was out walking with some old friends, catching up on the news of the past few months and chatting about the year ahead. In my pocket, my phone was pinging, and I felt increasingly anxious about letting the messages go unanswered. I snatched moments here and there to type responses, offering slightly embarrassed excuses to my companions.

It’s not an experience I’m likely to repeat, but it taught me a few lessons. First, even friendly digital messages can provoke anxiety. I was fearful of appearing ungrateful by not replying promptly. This was silly. A delay would not have bothered anyone. But I couldn’t help myself. I should have left the phone at home.

Second, it’s easy to reactivate bad habits. After a couple of weeks in which I checked my phone a few times a day instead of several times an hour, the influx of messages pushed me back into the habit of checking my phone like a rat hoping for a food pellet. It took several days more to regain some calm.

Third, and more positively, the investment in spurning social media was paying dividends. I did buckle and log into Facebook for the first time in weeks, not wanting to ignore messages of congratulation. It was completely silent. People had worked out, it seems, that Facebook wasn’t a good way to reach me. Twitter I managed to resist entirely.

Still, I did start to wonder whether the new regime would survive contact with the normal working routines of January. I called Jocelyn Glei, author of Unsubscribe (UK) (US) and host of the Hurry Slowly podcast. “The notion that you’re going to change all your habits and be done is absurd,” she cheerfully warned me. Fair enough — but then how to sustain the new pattern?

Glei’s advice was to remain vigilant. It’s one thing to check out at Christmas, another to do so in September. It makes sense to stay off Twitter while writing a book; less sense, perhaps, while marketing it. Each new project, she advised, required a quick re-evaluation of where to draw the digital boundaries. The digital reset was going to be a work in progress.

Lessons learnt
The point of the break was to allow a thoughtful assessment of which digital services were worth letting back into my life. So as the new year starts up and emails start to flow freely again, what did I learn?

First, I didn’t miss being plugged into Twitter at all. I’ve been ignoring notifications for years — thus missing some of the benefit and much of the aggravation of the platform — but have still been tweeting away out of some strange combination of duty and inertia.

My new plan is to log in for a few hours on Friday, set up some links to my columns and other projects that may interest some people, and log out again. If I ever see a good reason to use the platform more intensively, I’ll be back.

Second, I enjoyed having a more boring phone. With very little on it now but an easily emptied email inbox and the FT app, I pick it up less often and for less time, and am more likely to do something useful with it when I do check it.

I did reinstall Feedly — which I find essential for my job — but will keep an eye on my usage. With no tweets to send, the app has become more useful. I read for the sake of learning rather than for the sake of tweeting.

Third, the “strict workflow” blocker worked so well in saving me from my fast-twitch impulses that I added my email inbox to the blocked list. I’d had limited success with an email blocker before, but this time was much more successful, perhaps because the blocker was part of a larger plan.

Finally, it was good to focus on the upside of the digital decluttering. Although it was partly an exercise in habit-breaking or self-denial, it was much more useful to think of it as spending time and attention on things that mattered.

Some old friends seemed genuinely touched to receive a real letter; nobody has ever been touched by a Facebook “Like”. I felt in better shape at the beginning of January than at the beginning of December, which is hardly the usual Christmas experience. I walked, talked, ate and drank with old friends. I even battled a few imaginary wizards.

I’ve no desire to give all this up to spend more time with my phone.

 

Written for and first published in the Financial Times on 17 January 2019.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


Highlights

My TED talk on the power of slow-motion multitasking

26th of January, 2019
Highlights

Why big companies squander brilliant ideas

J F C Fuller did not invent the tank.

That distinction should probably fall to L E de Mole, an Australian who approached the British war office in 1912 with a design that was — in the words of historians Kenneth Macksey and John Batchelor — “so convincingly similar to those which finally went into service that one wonders why it was never adopted from the outset”.

But when the British army eventually introduced the tank, it was J F C Fuller, chief staff officer of what would later become the tank corps, who understood what to do with it. At 39 years old, Fuller was a small man with a neatly trimmed moustache and a hairline that had retreated over his crown and was beginning to march down the back of his head. He could have passed for a butler in a costume drama, but his appearance belied an inner radicalism. (He had been friends — and then enemies — with the occultist Aleister Crowley.)

Late in 1917, after almost 400 British tanks had, with modest success, lumbered across the German lines at the battle of Cambrai, Fuller applied his radical streak to the problem of using the tank effectively.

A new and much faster tank, the Medium D, could travel 200 miles at a speed of 20 miles per hour. Fuller proposed that these tanks would attack the German army’s brain — the string of German headquarters miles behind the front line.

A Medium D could roll across the trenches and be on the German command posts in an hour; Fuller’s attack would come from nowhere. Air support would disrupt German road and rail travel.

“Bad news confuses, confusion stimulates panic,” wrote Fuller. His idea was dubbed Plan 1919. By striking suddenly at the German command, Plan 1919 would cause the German army to disintegrate. It would, Fuller declared, be “the winning of the war in a single battle”.

His astonishing idea became “the most famous unused plan in military history”, according to his biographer Brian Holden Reid. But, of course, that is not entirely true. It was used to great effect, in 1940 — by the Germans. J F C Fuller had invented blitzkrieg.

 

The story might be a historical curiosity, had echoes of it not been repeated so frequently since the British army stuffed Fuller’s plans for blitzkrieg into a desk drawer. Organisations from newspapers to oil majors to computing giants have persistently struggled to embrace new technological opportunities, or recognise new technological threats, even when the threats are mortal or the opportunities are golden. Why do some ideas slip out of the grasp of incumbents, then thrive in the hands of upstarts?

In 1970, the photocopying giant Xerox established the Palo Alto Research Center, or Parc. Xerox Parc then developed the world’s first personal computer, with a graphical user interface, windows, icons and a mouse. Bill Gates of Microsoft and Steve Jobs of Apple observed developments at Xerox Parc with great interest. Xerox still makes photocopiers.

In 1975, a 24-year-old engineer named Steven Sasson built the world’s first digital camera — a patched-together device scavenging a lens from a Super-8 camera, magnetic tape in a portable cassette recorder and a TV screen. Sasson worked for Eastman Kodak, where in 1989 he and his colleagues also constructed the first modern digital SLR camera. Kodak built a sizeable line of business in digital photography, and earned a small fortune from the patents. Yet Kodak could not adjust to a world in which every phone contained a camera. The company filed for bankruptcy in 2012.

In 1999, Sony launched the “Memory Stick Walkman”, one of the world’s first digital music players. Sony was armed with the iconic Walkman brand, some of the world’s best consumer electronics engineers and the talent-soaked Sony-BMG music label. The Memory Stick Walkman went nowhere and, two years later, it was eclipsed by a product that transformed the fortunes of a struggling Apple: the iPod.

And in 1918, Britain had the best tanks in the world, a clear vision of how to use them and, in Fuller, one of the best military strategists to emerge from the British army. The German army was forbidden to use tanks at all; it was scarcely more than a collection of officers, a head without a body. Heinz Guderian, later one of the leading Panzer commanders, had not even seen the inside of a tank until he managed to go on manoeuvres with the Swedish army in 1929. Yet by the late 1930s, the British had conceded technical and tactical superiority to Hitler’s new army.

There is an obvious explanation for all of these failures and missed opportunities: people are idiots. “Now we can get back to some real soldiering,” remarked one senior officer to Fuller at the end of the first world war — as though defending Britain in an existential struggle had been a frivolous distraction from tending to noble horses, bright buckles and shiny boots. The army blocked publication of Fuller’s books for several years; they were seen as insubordinate.

When Steve Jobs visited Xerox Parc in 1979, and saw a windows-and-mouse interface for the first time, he couldn’t contain himself, according to Malcolm Gladwell. “Why aren’t you doing anything with this?” he yelled. “This is the greatest thing. This is revolutionary!” If Jobs had been teleported into the British war office in the 1920s, he might well have said the same thing.

Idiocy is a tempting explanation and not without merit. The top man in the British army, Field Marshal Sir Archibald Montgomery-Massingberd, responded to the threat of Nazi militarisation by increasing the amount spent on forage for horses by a factor of 10. Cavalry officers would be provided with a second horse; tank officers would get a horse too. As I say: people are idiots.

But there is something about the “idiot” theory that feels too glib. Consider Xerox Parc: how is it that a corporation could be smart enough to establish such a superb research centre, but then fail to take advantage? Was Sony really run by idiots in the 1990s? Even Montgomery-Massingberd is too casually caricatured. These organisations stumbled for a reason.

Management theorists have a word for it: disruption. “Disruption describes what happens when firms fail because they keep making the kinds of choices that made them successful,” says Joshua Gans, an economist at the Rotman School of Management in Toronto and author of The Disruption Dilemma. Successful organisations stick to their once-triumphant strategies, even as the world changes around them. More horses! More forage!

Why does this happen? Easily the most famous explanation comes from Clayton Christensen of Harvard Business School. Christensen’s 1997 book, The Innovator’s Dilemma, told a compelling story about how new technologies creep up from below: they are flawed or under-developed at first, so do not appeal to existing customers. Holiday snappers do not want to buy digital cameras the size of a shoebox and the price of a car.

However, Christensen explains, these technologies do find customers: people with unusual needs previously unserved by the incumbent players. The new technology gets better and, one day, the incumbent wakes up to discover that an upstart challenger has several years’ head start — and once-loyal customers have jumped ship.

Christensen’s story is an elegant one and fits some cases brilliantly. But there are many examples that do not fit — such as the failure of Xerox to exploit the cutting-edge research at Parc. The mouse and the graphical user interface aren’t a low-end competitor to the photocopier. They’re from a completely different universe.

The iPod didn’t sneak up on Sony from below: the company had seen the potential of a digital music player and moved quickly. Dominant organisations often see the disruptive technologies coming. “Kodak and Blockbuster weren’t caught by surprise,” Joshua Gans tells me. “They knew what the future looked like. They didn’t know later than everybody else, they knew ahead of everybody else.” They knew; but they were unable to put together the right response.

There is also a striking counter-example to Christensen’s idea that disruptive technologies begin as flawed or low-quality options. The iPhone was priced as a premium product with never-before-seen capabilities. It devastated Nokia and Research In Motion — now simply named BlackBerry Ltd in an echo of its once-iconic offering.

Christensen has tried to fit the iPhone into his theories. At first he predicted that incumbents would easily respond, and later he recast it as a disruption in a different industry altogether: “It was intended to disrupt the laptop. And that’s the way it happened.”

The laptop? Tell that to Nokia and BlackBerry.

Anyway, is the tank a low-end competitor to the horse? That’s a stretch. When a theory needs to be made this elastic, it may be time to look for another theory.

 

In 1990, a young economist named Rebecca Henderson published an article with her supervisor Kim Clark that presented a different view of why it is hard to do new things in old organisations. The relevant word is “organisations”.

Dominant organisations are prone to stumble when the new technology requires a new organisational structure. An innovation might be radical but, if it fits the structure that already existed, an incumbent firm has a good chance of carrying its lead from the old world to the new.

Consider, for example, IBM — the giant of mainframe computing. IBM is a survivor. It predates the digital computer by more than three decades. While the performance of computers was being revolutionised by the semiconductor, the integrated circuit, the hard drive and the compiler, IBM maintained a dominant position without breaking stride. This was because the organisational challenge of making and selling a sophisticated mainframe computer to a bank in the 1970s was not greatly different from the organisational challenge of making and selling a mechanical tabulating machine to a bank in the 1930s. Change was constant but manageable.

When computers started to be bought by small businesses, hobbyists and even parents, IBM faced a very different challenge. It did build a successful business in PCs, but was unable to maintain its old dominance, or bring to bear its historical strengths. In fact, the PC division prospered only as long as it was able to snub the rest of the organisation, often partnering with component suppliers and distributors that directly competed with IBM divisions. Internal politics soon asserted itself.

A case study co-authored by Henderson describes the PC division as “smothered by support from the parent company”. Eventually, the IBM PC business was sold off to a Chinese company, Lenovo. What had flummoxed IBM was not the pace of technological change — it had long coped with that — but the fact that its old organisational structures had ceased to be an advantage. Rather than talk of radical or disruptive innovations, Henderson and Clark used the term “architectural innovation”.

“An architectural innovation is an innovation that changes the relationship between the pieces of the problem,” Henderson tells me. “It can be hard to perceive, because many of the pieces remain the same. But they fit together differently.”

An architectural innovation challenges an old organisation because it demands that the organisation remake itself. And who wants to do that?

 

The armies of the late 19th century were organised — as armies had long been — around cavalry and infantry. Cavalry units offered mobility. Infantry offered strength in numbers and the ability to dig in defensively.

Three technologies emerged to define the first world war: artillery, barbed wire and the machine gun. They profoundly shaped the battlefield, but also slipped easily into the existing decision-making structures. Barbed wire and machine guns were used to reinforce infantry positions. Artillery could support either cavalry or infantry from a distance.

Tanks, however, were different. In some ways they were like cavalry, since their strength lay partly in their ability to move quickly. In other ways, they fitted with the infantry, fighting alongside foot soldiers. Or perhaps tanks were a new kind of military capability entirely; this was the view taken by J F C Fuller.

These discussions might seem philosophical — but in the light of Henderson’s ideas, they are intensely practical. “You have to find an organisation that will accept the new bit of technology,” says Andrew Mackay. Mackay runs an advisory firm, Complexas, but was also the commander of British and coalition forces in Helmand, Afghanistan, in 2008. “The organisational question is deeply unsexy, but it’s fundamental.”

A more recent example: is the helicopter basically a kind of aeroplane, and therefore an asset of the Royal Air Force? Or something quite different? Who should be in charge of drones today?

So it was with the tank. If it was to prosper, it needed an organisational home. Someone would have to argue for it, someone would have to pay for it, and someone would have to make it all work, technologically and tactically.

Perhaps the two most obvious places to put the tank were as a standalone unit (since it offered quite new capabilities) or in cavalry regiments (since it was highly mobile and the horse was becoming obsolete). There were traps along either route: the established regiments would resist a standalone structure for tanks, which would compete for resources while the postwar army was shrinking. A new tank regiment would lack both allies and the heft of historical tradition.

After various twists and turns, it was the cavalry that ended up as the organisational home of the tank. And cavalry officers certainly understand a highly mobile strike capability. But they were never really organised around the concept of “mobility”. They were organised around horses. The cavalry officer loved his horse and rode it with skill. His regiment was devoted to feeding and caring for the horses. Would he not resist the usurper tank with every fibre of his being?

 

Xerox Parc developed or assembled most of the features of a user-friendly personal computer, but Xerox itself did not have the organisational architecture to manufacture and market it. Xerox Parc did develop the laser printer, a product that matched the company’s expertise nicely. As Gladwell pointed out, this easily paid for the entire Parc project. The laser printer was like artillery or the machine gun for Xerox: it was an exciting new technology, but it was not a challenge to the organisation’s architecture. The personal computer was like the tank.

The same is true for Sony and the Memory Stick Walkman. As Sony expanded, it produced radios and televisions, video recorders and camcorders, computers, game consoles and even acquired a film and music empire. But to keep this sprawl manageable, Sony’s leaders divided it into silos. As Gillian Tett explains in The Silo Effect, the silo that produced the PlayStation had almost nothing to do with the silo that produced portable CD players. The Memory Stick Walkman was like the tank: it didn’t fit neatly into any category. To be a success, the silos that had been designed to work separately would have to work together. That required an architectural change that Sony tried but failed to achieve.

And for IBM, the shift from a mechanical tabulator to a mainframe digital computer was like the shift from rifles to the machine gun: an awesome step up in firepower, but a modest adjustment to organisational capacity. The personal computer was like the tank: it may have been a logical step forward given the technology available, but it required a different organisational architecture — one that bypassed and threatened the existing power centres of Big Blue. That was the problem.

The politics of organisational change are never easy. In the case of the tank, they were brutal. The British public never wanted to fight another war in mainland Europe, and the tank represented an admission that they might have to. The armed forces were starved of cash in the 1920s and 1930s. In 1932, the British army ordered just nine tanks — delicate four-tonners. The total weight of this entire force was less than a single German Tiger tank. But at a time of declining budgets, who could justify buying more?

It did not help that the tank enthusiasts were often politically naive. Since an architectural innovation requires an organisational overhaul, it also demands skilful diplomacy. Fuller was no diplomat. His essays and books were dotted with spiky critiques of senior military officers. After a while, even the junior officers who admired his wit began to tire of his “needlessly offensive” lecturing.

Despite alienating the army top brass, Fuller was handed a unique opportunity to advance the cause of tanks in the British army: he was offered the command of a new experimental mechanised force in December 1926. There was just one problem: he would have to step away from his single-minded focus on the tank, also taking command of an infantry brigade and a garrison. In short, Fuller would have to get into the organisational headaches that surround any architectural innovation.

He baulked, and wrote to the head of the army demanding that these other duties be carried out by someone else, eventually threatening to resign. The position was awarded to another officer, and Fuller’s career never recovered. His petulance cost him — and the British army — dearly. Architectural innovations can seem too much like hard work, even for those most committed to seeing them succeed.

 

Within academia, Rebecca Henderson’s ideas about architectural innovation are widely cited, and she is one of only two academics at Harvard Business School to hold the rank of university professor. The casual observer of business theories, however, is far more likely to have heard of Clayton Christensen, one of the most famous management gurus on the planet. That may be because Christensen has a single clear theory of how disruption happens — and a solution, too: disrupt yourself before you are disrupted by someone else. That elegance is something we tend to find appealing.

The reality of disruption is less elegant — and harder to solve. Kodak’s position may well have been impossible, no matter what managers had done. If so, the most profitable response would have been to vanish gracefully. “There are multiple points of failure,” says Henderson. “There’s the problem of reorganisation. There’s the question of whether the new idea will be profitable. There are cognitive filters. There is more than one kind of denial. To navigate successfully through, an incumbent organisation has to overcome every one of these obstacles.”

In an email, she added that the innovators — like Fuller — are often difficult people. “The people who bug large organisations to do new things are socially awkward, slightly fanatical and politically often hopelessly naive.” Another point of failure.

The message of Henderson’s work with Kim Clark and others is that when companies or institutions are faced with an organisationally disruptive innovation, there is no simple solution. There may be no solution at all. “I’m sorry it’s not more management guru-ish,” she tells me, laughing. “But anybody who’s really any good at this will tell you that this is hard.”

Almost a decade after resigning from a senior position in the British army, Andrew Mackay agrees: “I’d love to think that there could be a solution, but I don’t think there is.”

 

If I had to bet on the most significant disruption occurring today, I would point to the energy industry.

Chris Goodall is a longtime observer of the renewable energy scene and author of The Switch, a book about breakthroughs in solar panel technology. Goodall points out that solar photovoltaics have enjoyed a dramatic fall in costs, one that shows no sign of abating. Solar PV electricity is now cheaper than electricity generated by gas or coal in the sunny climes where most of the planet’s population live. A few more years and that advantage will seem overwhelming, which is great news for the planet and terrible news for incumbents.
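Goodall’s point is essentially about compounding: a cost that falls by a steady percentage each year undercuts a flat-cost rival sooner than intuition suggests. Here is a minimal sketch of that arithmetic in Python; the starting costs and the 10 per cent annual decline are my own assumptions for illustration, not figures from Goodall or from this article.

```python
# Illustrative only: a steadily falling cost versus a flat one.
# All numbers below are assumptions for the sketch, not data from the article.

def years_until_cheaper(falling_cost, flat_cost, annual_decline):
    """Count the years a cost falling at a fixed annual rate needs to undercut a flat rival."""
    years = 0
    while falling_cost >= flat_cost:
        falling_cost *= (1 - annual_decline)
        years += 1
    return years

# Hypothetical figures: solar at $70/MWh falling 10% a year, gas flat at $50/MWh.
print(years_until_cheaper(70.0, 50.0, 0.10))  # prints 4 under these assumptions
```

Under those assumed numbers the crossover arrives in about four years, which is the sense in which a steady decline quickly turns a modest advantage into an overwhelming one.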

Consider General Electric, which this year disappeared from the Dow Jones Industrial Average. In little more than a year, the old industrial titan’s share price had halved. One of the key culprits was a precipitous collapse in demand for large gas turbines, which in turn was the result of a fall in the cost of solar power cells that had been relentless, predictable and ignored.

This possibility has been clear to the fossil fuel industry for a while. I know: I used to work in long-range scenario planning for Shell International. Back in 2001, my Shell colleagues and I were discussing thin solar films that could be printed cheaply and applied to windows or hung as wallpaper. We could see the threat of exponentially cheaper solar power — but recall what Joshua Gans said about Kodak and Blockbuster: “They knew what the future looked like. They didn’t know later than everybody else, they knew ahead of everybody else.”

They knew. But they could not act. Because what is an oil company to do in a world of abundant, cheap solar energy? Offshore wind farms play to some oil-company strengths; they know a lot about large metal structures in the North Sea. But solar energy is an architectural innovation. The pieces just don’t fit together like an oil rig or a refinery. As a mass-market, manufactured product it is closer to the skill set of Ikea than Exxon.

The implication of Christensen’s theory is that oil companies should have set up solar subsidiaries decades ago. Many of them did, without much success. The implication of Henderson’s theory is that the oil companies are in big trouble.

Chris Goodall thinks the oil companies should rescue what they can — for example, by developing synthetic hydrocarbons derived from water, atmospheric carbon dioxide and solar energy. Such products would play to oil-company strengths. But for most of their business lines, Goodall says, “The best strategy for the oil companies is almost certainly gradual self-liquidation.”

Or as BP’s chief executive Bob Dudley candidly admitted to the Washington Post recently, “If someone said, ‘Here’s $10bn to invest in renewables,’ we wouldn’t know how to do it.”

 

Despite all the obstacles, the British army continued to develop both tanks and tank tactics throughout the 1920s and 1930s. Yet the internal politics proved toxic. The Germans, meanwhile, watched and learnt. If the British were hamstrung by their inability to reorganise what was, after all, a victorious army in the first world war, the Germans had the opposite problem: they had barely any army, and no status quo to defend. There was no organisational architecture to get in the way. When Adolf Hitler came to power in 1933 and began to expand the German army and invest in tanks, he encountered a German military that had been watching, thinking and experimenting for 14 years.

On his 50th birthday in 1939, Hitler celebrated with a parade of Germany’s newly reconstructed army through Berlin. “For three hours,” wrote one witness, “a completely mechanised and motorised army roared past the Führer.”

This witness was a guest of honour at the celebrations. His name: J F C Fuller. After quitting the British army in frustration, he had thrown his lot in with the British fascists of Oswald Mosley. He wrote vitriolic attacks on Jews. Some observers wondered whether this was simply an attempt to win favour with the world’s tank superpower, Nazi Germany. One of Fuller’s biographers, Mark Urban, doubts this: “The facility with which Fuller made anti-Jewish jibes in letters and books suggests pleasure rather than duty.”

Nobody doubts, however, that Fuller was obsessed by German tanks. After all, there was one army that had really understood and embraced his ideas: that of Adolf Hitler. After the parade, Major General Fuller met Hitler himself in a receiving line at the Chancellery.

The Führer grasped Fuller’s hand and asked, “I hope you were pleased with your children?”

“Your excellency,” Fuller replied, “They have grown up so quickly that I no longer recognise them.”

This article was first published as a cover story in the FT Magazine on 8/9 September 2018. 

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.



 

6th of October, 2018 · Highlights · Other Writing
Highlights

“Basic income is about the freedom to say no” – Rutger Bregman goes bouldering

“You have the instinct for it,” says Rutger Bregman, as I haul myself up an indoor climbing wall, nestled under the arches at Vauxhall station in London. “Shit, this is some talent!” he bellows, as I reach the top. I am inwardly delighted, even though I realise the praise is absurd: I have climbed about four metres and it’s a beginner’s route.

Bregman has suggested that we go bouldering together. Bouldering is a variety of rock climbing, done over short distances without safety ropes. Coming from Bregman, it seems a curious choice. The young Dutch historian and author is most famous for advocating a universal basic income — a regular cash grant to every single person, given unconditionally, to support them and provide a minimum standard of living, no matter what might go wrong.

His book, Utopia for Realists, has been a surprise bestseller, finding an audience eager for radical yet plausible policy ideas. Yet this celebrated advocate of unconditional handouts has chosen a sport that is all about self-reliance, and the ultimate departure from the principle of the safety net.

“There is a safety net — look!” says Bregman, pointing at the crash mats. I am not totally convinced. It doesn’t take long before I fall off — a combination of lack of skill and lack of fitness. As I peel myself off the mat, I realise the skin of one elbow has not remained with me.

Bregman’s contention is that a basic income would be the logical and perfectly affordable next step for a human race that has already taken huge leaps forward since before the industrial revolution, when, he writes, “nearly everyone, everywhere was still poor, hungry, dirty, afraid, stupid, sick and ugly”.

Bregman himself looks the picture of health, possibly because, at 29, he’s 15 years younger than me, and possibly because he’s been practising. He climbs twice a week; his T-shirt says Sterk, the name of his local bouldering hall in Utrecht. The word means “strong” in Dutch. My limited experience of rock climbing with my daughters has taught me that the legs take the strain. Bouldering, however, requires more upper-body strength.

“It’s more explosive,” I am told. And within 15 minutes, I’m done: the tendons below my wrist have given up and I am close to doing the same. The first three routes were exhilarating but without a rope, even the short climbs under the arches of VauxWall are starting to feel vertiginous. I’m losing my nerve as well as my strength. Bregman, on the other hand, is just getting started.

“How long is a typical session?” I ask. “Fifteen minutes or an hour or . . . I can’t imagine anyone keeping this up for an hour.”

“Two, two-and-a-half hours, if I have the time. Which I usually don’t,” he says. “If you warm up slowly, not like today, then you are at your peak after 45 minutes, and then you can keep that up for another 45 minutes.”

I spend much of the next hour watching Bregman solve one route after another. Sometimes he is dangling loosely off an overhang, as though resting in an invisible hammock. Sometimes he is moving laterally, his legs as high as his arms in a spiderlike scurry across the wall. Once, he hangs vertically as he works his way from left to right across a whimsical hold: a huge pair of pouting lips in one corner, just below the roof. He took up the sport three years ago. “I didn’t like to exercise at all. It’s so soul-destroying. But this is different.”

Bregman sees soul-destroying activity in much of modern life. Too many people, he says, are doing jobs they dislike or see as pointless, because they have no alternative. A basic income would liberate people: perhaps a minimum of €1,000 a month, given unconditionally as a cash grant, or through the tax system as a negative income tax.
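Since Bregman mentions both delivery routes, it may help to see why, arithmetically, an unconditional grant and a flat-rate negative income tax come to the same thing. A minimal sketch follows; the €1,000 guarantee is the figure floated above, while the 50 per cent flat tax rate is purely an assumption of mine for illustration.

```python
import math

GRANT = 1_000     # monthly guarantee in euros (the figure floated in the article)
TAX_RATE = 0.5    # assumed flat tax rate, chosen only for illustration

def net_with_basic_income(gross):
    """Everyone receives the grant; all earnings are taxed at the flat rate."""
    return GRANT + (1 - TAX_RATE) * gross

def net_with_negative_income_tax(gross):
    """Tax owed is rate * gross minus the guarantee; a negative bill is paid out as a top-up."""
    return gross - (TAX_RATE * gross - GRANT)

for gross in (0, 800, 2_500, 6_000):
    bi, nit = net_with_basic_income(gross), net_with_negative_income_tax(gross)
    assert math.isclose(bi, nit)
    print(f"gross {gross:>5} euros -> net {bi:,.0f} euros under either scheme")
```

Under a single flat rate, the two schemes differ only in plumbing: one pays the grant to everyone and claws it back through tax, the other nets the two off inside the tax system.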

Bregman has branded a basic income as “venture capital for the people”. A good line, I congratulate him. But what does it mean?

“OK, so basic income is all about the freedom to say no. That’s a privilege for the rich right now. With a basic income, you can say no to a job you don’t want to do. You can say no to a city in which you no longer want to live. You can say no to an employer who harasses you at work . . . that’s what real freedom looks like.”

Part of the impetus for a basic income has come from the sense that the robots are coming for our jobs — maybe not today, maybe not tomorrow, but soon. The venture capital firm Y Combinator is funding research into basic income, which seems to be a popular idea in Silicon Valley. But Bregman has no patience for the idea that technological change underpins the case for basic income.

“This is not about AI,” he insists. “You go back to the 1960s, and all the economists, all the philosophers, all the sociologists said we’re going to be working less and less and less and less and boredom is going to be the great challenge of the future. Didn’t happen . . . mostly because we have this ideological obsession with creating new jobs.”

Advocates of basic income have included two rather different Nobel laureates: the civil rights activist Martin Luther King Jr and the free-market evangelist Milton Friedman. The idea draws support from leftwingers who see an opportunity to redistribute and to give workers more bargaining power, and rightwingers who see an opportunity to dismantle paternalistic bureaucracies and empower ordinary people to make their own choices.

Bregman’s own sympathies seem to lie more with the left. At one point I tease him about the fact that he is in London on Valentine’s Day while his wife Maartje (a photographer and collaborator) is not. His response is spat out with a vehemence that might have been for comic effect, and might not: “You know that Valentine’s Day is just a capitalist scam to make you buy stuff you don’t need, to impress people you don’t like, right?”

But like Friedman, Bregman is clearly no fan of paternalistic bureaucracies. “Nowhere you’ll find as much support for something like basic income as [among] people who work for unemployment agencies,” he says. “In Holland I did a couple of lectures for those groups and they just give me a standing ovation when you say that we should abolish their jobs.”

It is the unconditional nature of the cash transfer that particularly appeals to him. With the transfer of money, no strings attached, there is a transfer of dignity, of bargaining power, and of responsibility. People have to make their own choices.

Again, I venture a connection between the basic income idea and bouldering: it’s a solo sport in which individuals need to find their own path, judging risks for themselves?

“If I would make this sport political, what I like about it is that it is competitive, but with yourself. So you’re not competing with anyone else, you’re just trying to do better yourself. And it’s a puzzle, every time it’s different. It’s a very creative sport, I guess.”

Utopia for Realists was itself a slowly assembled puzzle. The early drafts were articles in De Correspondent, an online crowdfunded news website founded by a Dutch pop-philosopher and columnist, Rob Wijnberg. “It’s an anarchist-idealist collective of journalists who don’t follow the news,” Bregman explains.

This may explain why Utopia for Realists is such a curiously enjoyable read. The title sums up Bregman’s belief that evidence-based pragmatism should not rule out provocative, ambitious ideas. The book is lively, well researched and full of unlikely pieces of history, from the Speenhamland system of poor relief, developed in England in 1795, to US President Richard Nixon’s flirtation with the idea of a basic income in 1969. (Bregman studied history rather than economics or politics.) It is also perfectly orthogonal to anything one might read in a newspaper. The book was published in Dutch by De Correspondent, built a following slowly, then was self-published in English.

“I was my own PR employee at that point. I was emailing everyone — no interviews, no reviews. Nothing.” Yet when Bregman emailed me out of the blue with the English translation and a request for my support, I was sufficiently impressed to endorse the book. Steven Pinker also gave it a glowing cover quote. And as Bregman and his colleagues were pondering giving up, the project suddenly took off. While not quite Fifty Shades of Grey, in a short space of time Utopia for Realists went from brave failed experiment to international bestseller, due to be published in 28 languages.

“Ideas always start on the fringe and then they move towards the centre,” he says. “Then I was invited to come to Davos this year. Like, yeah, that’s pretty much it, right? My first lectures about basic income were for anarchists with long hair, and smelly.”

Did he go to Davos? “No, I had to go to a book fair in Colombia.” He did, however, give a talk at TED last year, and seems aware of the irony of advocating the dismantling of an entire class of do-gooders.

“You’re talking for an audience of 1,500 people, many of them involved in kinds of charities. The CEO of Toms, for example, was there.” Toms donates a pair of shoes to a poor family for every pair purchased; Bregman isn’t impressed. “Buy one shoe, give one shoe. That is just a horrible, horrible idea.”

He got a huge round of applause when he proposed scrapping aid bureaucracies and replacing them with direct cash transfers. The rapturous reception struck him as odd. “I was saying we should hand over the salaries of all these paternalistic bureaucrats and give them to the poor, who are the real experts on their own lives. And they were all clapping and laughing, and I was thinking on stage, ‘But I’m talking about you! It’s you!’”

It’s a good talk, I tell him. “I like to prepare for these things. I knew it off by heart three months before I went on stage.”

I press him on the details of the talk. He skips a little too lightly between the idea of replacing international development aid with direct cash transfers to poor people, and the idea of overhauling modern western welfare states to place unconditional cash payments at their heart. The two ideas are cousins, not identical twins, I suggest. Adding a dollar a day, no strings attached, to a non-existent social safety net might be transformative in rural India or Africa. A resident of London is going to want a little more than that before she willingly gives up her housing benefit. Bregman agrees: his focus now is on welfare reform.

Another question mark is over the evidence base for a basic income. Bregman mentions “dozens of experiments” but, arguably, there has never been a completely satisfactory randomised trial of a long-term basic income. (A literature review by the charity GiveDirectly counted six shorter-term randomised trials; policymakers should conduct many more.)

One promising episode — a four-year trial in Manitoba, Canada, in the 1970s — received little attention. When the economist Evelyn Forget managed to get hold of the mothballed archives in 2009, they were on the verge of being discarded. There is a new study in Kenya, funded by GiveDirectly. With 5,000 recipients getting a basic income for 12 years, that trial shows real ambition — but the income in question is just over $20 a month. This is unlikely to tell us much about reforming a European welfare state. Nor is a much-hyped but rather small trial in Finland, which will last just two years and is focused only on those already receiving unemployment benefits.

Other trials have been excitedly announced but have yet to begin, let alone conclude. We are still waiting for a study large and patient enough to tell us much about a basic income in a developed economy. So what are these “dozens of experiments”?

Bregman says that the experiments he has in mind are less evaluating a full basic income scheme, and more exploring the impact of cash transfers in development aid. That is indeed a well-studied area, although not quite the same thing. Those experiments provide encouragement for proponents of a basic income: households tend to put the money to good use, and reap long-term benefits.

By now, we’re talking over a coffee, my enfeebled hands thankfully strong enough to grip a mug. My final question is about one of his other ideas: dramatically liberalising immigration rules.

“Every utopian system is obviously grounded in the injustices of the present,” he says. “What’s the biggest injustice in the world right now? It’s pretty easy to see. It’s borders: apartheid on a global scale.”

But while basic income seems to be having a day in the sun, an end to passport control is hardly in tune with the Trumpian zeitgeist, is it? “Well that’s almost my problem with basic income right now. I get questions during lectures, people say, ‘Is this really a radical idea?’ So I’m like, I should move on. Because utopias are meant to make people angry.”

Fair enough: as in bouldering, so in utopian politics. Once you’ve solved one puzzle, it is time to move on to a new challenge.

 

 
Written for and first published in the Financial Times on 9 March 2018.

My book “Messy: How To Be Creative and Resilient in a Tidy-Minded World” is now available in paperback both in the US and the UK – or through your local bookshop.


28th of March, 2018 · Highlights · Other Writing