Tim Harford The Undercover Economist

My weekly column in the Financial Times on Saturdays, explaining the economic ideas around us every day. This column was inspired by my book and began in 2005.

Undercover Economist

Why it’s too tempting to believe good news about the coronavirus

Wishful thinking is a powerful thing. When I read about a new disease-modelling study from the University of Oxford, I desperately wanted to believe. It is the most prominent exploration of the “tip-of-the-iceberg hypothesis”, which suggests that the majority of coronavirus infections are so mild as to have passed unrecorded by the authorities and perhaps even unnoticed by the people infected.

If true, many of us — perhaps most of us in Europe — have already had the virus and probably developed some degree of immunity. If true, the lockdowns have served a valuable purpose in easing an overwhelming strain on intensive care units, but they will soon become unnecessary. If true.

But is it true? If it is, it stands in stark contrast to the far grimmer modelling from a group at Imperial College London, which concluded that if the epidemic was not aggressively contained, half a million people would die in the UK — and more than 2m in the US. Models such as this one helped to persuade the British government to follow much of continental Europe in putting the economy into a coma.

The differing perspectives are made possible by the fact that the data we have so far are not very good. Testing has been sporadic — in some places, shambolic — and everyone agrees that large numbers of cases never reach official notice. We do have solid statistics about deaths, and as the epidemiologist Adam Kucharski, author of The Rules of Contagion, observes, a wide variety of scenarios are consistent with the deaths we’ve seen so far. Perhaps Covid-19 is uncommon and deadly; perhaps it is ubiquitous and kills only a tiny proportion of those it affects. Deaths alone cannot tell us.

This uncertainty is unnerving. John Ioannidis, an iconoclastic epidemiologist, wrote on March 17 that Covid-19 “might be a one-in-a-century evidence fiasco”. Prof Ioannidis’s argument is that some infections are being missed, and we have little idea how many. Therefore we have little idea how deadly Covid-19 really is.

He speculates that the fatality rate could plausibly lie between one in 100 and one in 2,000 cases. Either way, it is dangerous; but the difference is vast. And if the scale of our ignorance about coronavirus seems hard to swallow, bear in mind that the fatality rate for the H1N1 swine flu pandemic in 2009 was still being debated years later.
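The range matters more than it might seem, and the arithmetic is easy to check. The death figure below is a round hypothetical, not real data; only the fatality-rate range comes from Prof Ioannidis:

```python
# The same death toll is consistent with wildly different epidemic sizes,
# depending on the fatality rate. The death figure is a round hypothetical;
# the range of fatality rates is the one Prof Ioannidis suggests.

deaths = 10_000  # hypothetical observed deaths

# "one death per N infections", for the two ends of the range
for infections_per_death in (100, 2_000):
    implied_infections = deaths * infections_per_death
    print(f"1 death in {infections_per_death:,}: ~{implied_infections:,} infections")

# 1 death in 100: ~1,000,000 infections
# 1 death in 2,000: ~20,000,000 infections
```

A twentyfold difference in the implied number of infections, from the same pile of death certificates: that is why deaths alone cannot settle the tip-of-the-iceberg question.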

Prof Ioannidis has form: 15 years ago he published a study with the title “Why Most Published Research Findings Are False”. That claim seemed outrageous at the time, but subsequent efforts to reproduce famous experiments in psychology have revealed that he was on to something important. We know less than we think.

But we are not completely ignorant. Alongside the death total, there are other clues to the truth. For example, thousands of people were evacuated from Wuhan city in late January and February and most of them were tested. A few tested positive and several were indeed symptom-free, but not the large majority that the Oxford version of the tip-of-the-iceberg hypothesis would imply.

The entire population of the town of Vò in Italy was repeatedly tested and, while half of the positive cases were asymptomatic, that is still much less than the Oxford model might lead us to expect.

So while it is possible that most of us could have been infected without ever knowing — and that herd immunity is within easy reach — it is not likely. That may explain why neutral experts have responded to the Oxford study with caution, and some concern that it might provoke a reckless response from individuals or policymakers.

So, what now? First: stay indoors if you want to save many lives and prevent health systems from being overwhelmed. The bitter experience of Italy and Spain demonstrates the importance of flattening the peak of the epidemic. That remains true even if, as we might hope, the epidemic is much milder and more widespread than we currently believe. It might have been tempting to wait and gather more evidence — but faced with an exponentially rising pile of corpses, “wait and see” is not an option.

Second: health systems should expand capacity, buying more ventilators and more protective equipment for doctors and nurses. In all but the most optimistic scenarios we will need them now, we will need them later in the year and we will need them from time to time in the future. This crisis is teaching us that we should have had more spare capacity all along, despite the cost.

Third: test, test, test — and not only using the current tests to detect infection, but new ones for antibodies that should show whether people have already had the virus and have developed some degree of immunity. Sunetra Gupta, a professor on the Oxford team, says that such tests may start to produce results in a matter of days.

The epidemiologists are doing their best, but they are not omniscient. They need facts with which to work. Gathering those facts systematically is one of many urgent tasks ahead of us.

 

Written for and first published in the Financial Times on 27 March 2020.

My NEW book The Next Fifty Things That Made the Modern Economy is out in the UK in May and available to pre-order; please consider doing so online or at your local bookshop – pre-orders help other people find the book and are a huge help.

Undercover Economist

Why the crisis is a test of our capacity to adapt

“It’s really quiet,” said the proprietor of Oxford’s best falafel stall when I popped over to buy lunch on Monday. It is even quieter now. Meanwhile, my wife emailed friends to ask if we could help: both of them are doctors and they have three children and a parent undergoing treatment for cancer. “Thanks. We will be in touch,” came the reply. No time for more. It may be quiet for the falafel man, but not for them.

There, in miniature, is the economic problem that the coronavirus pandemic has caused, even in its early stages. For everyone who is overworked, someone else has little to do but wait. The supermarkets have struggled to meet a rush of demand for some goods, but that should pass. “We are not going to run out of food, so chill,” Yossi Sheffi tells me. He’s an MIT professor and an authority on supply chains.

While the pressure on the supermarkets may ease, the strain on the healthcare system will not. It is already intense and will get much worse. Yet while clinicians are overstretched, others wonder where the next job is coming from. From the falafel seller to the celebrity chef, the hotel porter to the millionaire motivational speaker, many tens of millions of people around the world are fit and eager to work, yet unable to.

This is a test of flexibility and imagination. Gourmet restaurants are shifting to takeaway service; conference speakers are building portable studios. Best of all is when we find ways to turn idle resources into weapons in the fight against the virus. It is hard not to cheer when reading tales of distillers turning their stills to the task of producing hand sanitiser, or hoteliers offering their empty rooms to doctors and nurses.

But it is a much tougher task, for example, to make more urgently needed ventilators. In the mid-20th century, William Morris, a man who made his fortune manufacturing British cars, turned his workshops to the task of producing “iron lungs” for people paralysed by polio. It’s an inspiring precedent for his successors at Meggitt, McLaren and Nissan scrambling to emulate him by building ventilators to use in the current crisis, but even Morris’s conversion took time.

Prof Sheffi reckons that it would be straightforward for, say, an automobile parts supplier to retool in a matter of months, and having many thousands of extra ventilators by the autumn would certainly be better than nothing. But to produce complex equipment from scratch in weeks, perhaps using 3D printing, would be a miraculous achievement even if regulations are loosened, as they should be.

Harder still is finding more nurses and doctors; intensive care units do not operate themselves. And even for less specialist staff, the task is larger than it might seem because of what the late Thomas Schelling, a Nobel laureate economist, called “the acceleration principle”. Let’s say that Europe has 10m hospital orderlies, with an annual turnover of 30 per cent. That means 3m need to be trained each year, 1m at a time on a four-month training course.

Now let us aim to expand gently to 11m over the next four months. It doesn’t sound much — just a 10 per cent increase. Yet the training programme must double in scale to accommodate it, because now 2m rather than 1m orderlies are enrolled in the same four-month window. The same logic applies to anything we need more of, from the personal protective equipment that is in desperately short supply in our hospitals, to the internet bandwidth that we will all be using more of, while working from home.
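Schelling’s arithmetic can be checked in a few lines. The figures are the column’s illustrative ones, not real workforce data:

```python
# Schelling's "acceleration principle", using the column's illustrative
# figures (not real workforce data).

workforce = 10_000_000        # hospital orderlies across Europe (hypothetical)
annual_turnover_pct = 30      # 30% leave each year
course_months = 4

# Steady state: replacements trained per year, and per four-month cohort.
trained_per_year = workforce * annual_turnover_pct // 100    # 3,000,000
steady_cohort = trained_per_year * course_months // 12       # 1,000,000

# Expanding the workforce by just 10% within one course period means the
# extra 1m must all be in training alongside the usual replacement cohort.
expansion = 1_000_000
expansion_cohort = steady_cohort + expansion                 # 2,000,000

print(f"training load rises {expansion_cohort // steady_cohort}x")
# training load rises 2x
```

A 10 per cent increase in the stock of workers demands a 100 per cent increase in training capacity, because the whole expansion must pass through the training pipeline at once.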

The task, then, is immense. But we must try. Under any conceivable scenario, we would not regret trying to expand emergency medical care several times over. If it is impossible, so be it. But if it is merely expensive and difficult, such costs are trivial compared to the costs of suspending everyday life for weeks or months.

And there is some hope: efforts are already under way to persuade doctors and nurses who have retired or switched careers to return, and to put medical students to work at once. We could quickly train new medical support staff to perform focused and limited roles. I can only imagine the breadth of the skills needed to be an intensive care nurse, but if we cannot have more experienced nurses with complex skills, let us at least support them with people who can quickly be trained to change an oxygen tank or turn a patient in bed.

Even those apparently ill-suited to intensive care duty — the 75-year-old retired doctor, the community volunteer with first aid training, or even furloughed airline crews — could indirectly support health systems. While medical professionals staff the wards, I would gladly pay taxes to fund online advice from a retired doctor, a virus test administered by an air steward, or stitches and bandages from a St John Ambulance volunteer.

Killing two birds with one stone never sounded easy to me. But there is no excuse now not to be radical. This crisis is a test of many things. Not least among them is our capacity to adapt.

 

Written for and first published in the Financial Times on 20 March 2020.

Undercover Economist

Will economists ever be as good at forecasting as meteorologists?

The UK’s national weather service, the Met Office, is to get a £1.2bn computer to help with its forecasting activities. That is a lot of silicon. My instinctive response was: when do we economists get one?

People may grumble about the weather forecast, but in many places we take its accuracy for granted. When we ask our phones about tomorrow’s weather, we act as though we are gazing through a window into the future. Nobody treats the latest forecasts from the Bank of England or the IMF as a window into anything.

That is partly because politics gets in the way. On the issue of Brexit, for example, extreme forecasts from partisans attracted attention, while independent mainstream forecasters have proved to be pretty much on the money. Few people stopped to praise the economic bean-counters.

Economists might also protest that nobody asks them to forecast economic activity tomorrow or even next week; they are asked to describe the prospects for the next year or so. True, some almanacs offer long-range weather forecasts based on methods that are secret, arcane, or both — but the professionals regard such attempts as laughable.

Enough excuses; economists deserve few prizes for prediction. Prakash Loungani of the IMF has conducted several reviews of mainstream forecasts, finding them dismally likely to miss recessions. Economists are not very good at seeing into the future — to the extent that most argue forecasting is simply none of their business. The weather forecasters are good, and getting better all the time. Could we economists do as well with a couple of billion dollars’ worth of kit, or is something else lacking?

The question seemed worth exploring to me, so I picked up Andrew Blum’s recent book, The Weather Machine, to understand what meteorologists actually do and how they do it. I realised quickly that a weather forecast is intimately connected to a map in a way that an economic forecast is not.

Without wishing to oversimplify the remarkable science of meteorology, one part of the game is straightforward: if it’s raining to the west of you and the wind is blowing from the west, you can expect rain soon. Weather forecasts begin with weather observations: the more observations, the better.

In the 1850s, the Smithsonian Institution in Washington DC used reports from telegraph operators to patch together local downpours into a national weather map. More than a century and a half later, economists still lack high-definition, high-frequency maps of the economic weather, although we are starting to see how they might be possible, tapping into data from satellites and digital payments. An example is an attempt — published in 2012 — by a large team of economists to build a simulation of the Washington DC housing market as a complex system. It seems a long way from a full understanding of the economy, but then the Smithsonian’s paper map was a long way from a proper weather forecast, too.

Weather forecasters could argue that they have a better theory of atmospheric conditions than economists have of the economy. It was all sketched out in 1904 by the Norwegian mathematician Vilhelm Bjerknes, who published “The problem of weather prediction”, an academic paper describing the circulation of masses of air. If you knew the density, pressure, temperature, humidity and the velocity of the air in three dimensions, and plugged the results into Bjerknes’s formulas, you would be on the way to a respectable weather forecast — if only you could solve those computationally demanding equations. The processing power to do so was to arrive many decades later.

The missing pieces, then: much better, more detailed and more frequent data. Better theory too, perhaps — although it is striking that many critiques of the economic mainstream seem to have little interest in high-resolution, high-frequency data. Instead, they propose replacing one broad theory with another broad theory: the latest one I have seen emphasises “the energy cost of energy”. I am not sure that is the path to progress.

The weather forecasters have another advantage: a habit of relentless improvement in the face of frequent feedback. Every morning’s forecast is a hypothesis to be tested. Every evening that hypothesis has been confirmed or refuted. If the economy offered similar daily lessons, economists might be quicker to learn. All these elements are linked. If we had more detailed data we might formulate more detailed theories, building an economic map from the bottom up rather than from the top down. And if we had more frequent feedback, we could test theories more often, making economics more empirical and less ideological.

And yet — does anyone really want to spend a billion pounds on an economic simulation that will accurately predict the economic weather next week? Perhaps the limitations of economic forecasting reflect the limitations of the economics profession. Or perhaps the problem really is intractable.

 
Written for and first published in the Financial Times on 21 February 2020.

Catch up on the first season of my podcast “Cautionary Tales” [Apple] [Spotify] [Stitcher]

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.

Undercover Economist

Why moonshots matter

Tim Bradshaw, head of the Russell Group of leading UK universities, has a curious tale to tell about failure. A few years ago he visited the Cambridge office of an admired Japanese company to find them fretting about the success rate of their research and development. At 70 per cent, it was far too high: the research teams had been risk-averse, pursuing easy wins at the expense of more radical and risky long-shots.

The late Marty Sklar, a Disney veteran, once told me a similar tale — if his colleagues weren’t failing at half of their endeavours, they weren’t being brave or creative enough. My boss at the World Bank 15 years ago had the same worry that too many projects were succeeding.

When the same concern arises in such wildly different contexts, we may be worrying about a common problem: a systematic preference for marginal gains over long shots. It’s not hard to see why. It is much more pleasant to experience a steady trickle of small successes than a long drought while waiting for a flood that may never come.

While marginal gains add up, they need to be refreshed by the occasional long-shot breakthrough. Major innovations such as the electric motor, the photovoltaic cell or the mobile phone open up new territories that the marginal-gains innovators can then explore.

With this in mind, it’s hard not to sympathise with the UK Conservative party’s promise to establish “a new agency for high-risk, high-pay-off research, at arm’s length from government” — a British version of the much-admired US Defense Advanced Research Projects Agency.

Originally known as Arpa, now Darpa, it is most famous for creating Arpanet, the precursor to the internet. It also supported early research into satellite navigation and the windows-and-mouse system for operating a computer. And it helped to catalyse interest in self-driving cars. With successes like that, nobody seems to mind that Arpa’s failure rate is often said to be around 85 per cent. High-risk, high-pay-off indeed.

A collection of essays published recently by the think-tank Policy Exchange concurs that we need an Arpa for the UK. I’ve long argued for the importance of long-shots — the subtitle of one of my books is “why success always starts with failure” — so I can’t help but agree. Yet if this was easy, the UK would have an Arpa already.

At the casino it is easy to double the rewards by doubling the risk, but in the world of research, the trade-off is not so straightforward. While a low failure rate may indeed signal a lack of originality and ambition, we cannot simply decide to fail more often in the hope that originality will follow.

Arpa itself has approached this problem by hiring high-quality scientists for short stints — often two or three years — and giving them control over a programme budget to commission research from any source they wish.

Meanwhile, the Howard Hughes Medical Institute, a foundation, deliberately looks for projects with an unusual or untried approach, but a large potential pay-off. One study suggested that HHMI gets what it pays for — more failures, but larger successes, compared with other grant-makers funding researchers of a similar calibre.

Another large obstacle looms: how long will politicians find failure to be a sign of boldness and originality? Eventually, they will simply call it failure. Now that Arpa has a 62-year record, it is easy to forget that the agency was initially written off by some critics.

A new UK agency will face pressure to deliver. That sits uneasily with the desire to support risk-taking. Consider Arpa’s younger sibling, Arpa-E, created in 2009 to fund new energy projects. As of this week, the section of the Wikipedia entry on Arpa-E entitled “Accomplishments” is empty. Ouch.

The problem is more acute for a UK Arpa, because it is likely to have less funding — perhaps £200m a year. Is that enough? When Arpa’s head Charles Herzfeld heard the initial pitch for the proto-internet, in 1965, he responded, “Great idea . . . Get it going. You’ve got a million dollars more in your budget right now. Go.”

That is $10m-$30m in today’s money — depending on how one adjusts for spending power. It is hard to imagine a modern-day Herzfeld blowing a tenth of the UK-Arpa’s budget after a 20-minute meeting. We are on the Triceratops-horns of a trilemma. Be cautious, or fund lots of risky but tiny projects, or fund a few big, risky projects from a modest budget and accept that every single one may flop.

Keeping this new agency “at arm’s length from government” is essential. Indeed, Safi Bahcall — the author of Loonshots — persuasively argues that such agencies need to be at arm’s length not just from government but from everybody. Yet somehow they must focus on real, practical, front-line problems. Not too close, not too distant. Not too many successes, but not too many failures, either. It’s quite a balancing act. Still, I’d pay for a ticket to this circus. Let’s give it a try.

 
Written for and first published in the Financial Times on 14 February 2020.

Undercover Economist

The statistics behind the spread of ideas

Everyone loves a good idea. It’s even better when the idea becomes a tangible innovation, a better mousetrap that anyone can use and every mouse should fear. The awkward truth, however, is that even in a polished form, good ideas can be slow to spread.

Anaesthetic and antiseptic offer an instructive contrast. Both were developed in the mid-1800s. Anaesthetic spread faster than a hula-hooping craze. As Atul Gawande explained in the New Yorker, “within seven years, virtually every hospital in America and Britain had adopted the new discovery”. Antiseptic, in contrast, took a generation to catch on.

“The puzzle is why,” noted Dr Gawande, before conceding that it is not a puzzle at all. Anaesthetic solves an immediate problem: a patient screaming and writhing in agony. Antiseptic is a defence against an invisible killer, infection, that acts only with a delay.

Unfortunately, many innovations are more like antiseptic than anaesthetic: they solve problems that can only be seen through a statistical lens. People are slow to embrace what they cannot see. A few years ago, researchers at the OECD looking at the diffusion of global productivity gains concluded that there was a growing gap between productive companies and the laggards. The gulf was huge — typically a fivefold productivity gap per worker, even after adjusting for differences in the equipment available.

Whether the innovation is a hardier variety of seed, a safer pharmaceutical compound or a more reliable manufacturing process, the benefits will rarely be as obvious as slumbering through surgery. Such ideas often spread all too slowly.

There are other barriers to the diffusion of innovation. If people feel they can’t adapt a new idea to their own purposes, or try it out on a small scale, they will resist. One major obstacle is social: evangelists for innovation are often rather different kinds of people from their audience. Agronomists are not farmers; pharmaceutical sales representatives are not general practice doctors; inventors are different from the rest of us.

We will gladly imitate our peers, although that still raises the question of who will go first. One influential early study of hybrid corn in Iowa between 1926 and 1941 found that a few farmers would experiment with the new seed in small quantities to see how things worked out. Even the early adopters took things cautiously, while others watched. Farmers would then eventually copy their neighbours.

It is tempting to shrug and conclude that this is simply a tough problem. But there is no need to despair. Late last year, the British Medical Journal published a study that caught my attention, in part because of the cross-disciplinary team of authors: Alex Walker and Ben Goldacre (epidemiologists), Felix Pretis (an economist) and Anna Powell-Smith (a data scientist) — but also because those authors were looking at the diffusion of innovation in an innovative way.

The study examined how quickly National Health Service general practice clinics in England caught up with best practice in prescribing two types of drug. In one case, the birth-control pill Cerazette came off patent in 2012, at which point patients should generally have been prescribed cheaper generic versions of the drug, desogestrel. In the other, national guidelines were changed to recommend a different antibiotic for urinary tract infections.

NHS England publishes anonymised data, every month, describing the drugs being prescribed by GPs across 8,000 clinics. If you have time, you can noodle around on OpenPrescribing.net — a platform developed by Ms Powell-Smith and Dr Goldacre — looking for patterns.

And since that sounds like hard work, the BMJ study uses a statistical tool to spot whenever a clinic seems to have changed its clinical practice, and whether they did so promptly or gradually, or suddenly but after a delay, or not at all. The patterns are clear to the naked eye once pulled out of the mass of data: here’s a clinic that swiftly and sharply switched to the cheaper generic drug; here’s a clinic that never read the email. A follow-up study performs a similar analysis for statins.
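The underlying idea can be sketched in a few lines. To be clear, this is not the study’s actual method — the authors published their own statistical tools — and the prescribing series below is invented. It is a minimal single-changepoint detector: find the month at which splitting a clinic’s series into “before” and “after” segments best fits the data.

```python
# A minimal single-changepoint detector, illustrating the kind of tool the
# BMJ study describes. NOT the authors' actual method; the data are invented.

def find_changepoint(series):
    """Return the index splitting `series` into two segments with the
    lowest total squared deviation from each segment's mean."""
    def sse(xs):
        if not xs:
            return 0.0
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)

    best_k, best_cost = None, float("inf")
    for k in range(1, len(series)):
        cost = sse(series[:k]) + sse(series[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Hypothetical clinic: share of prescriptions using the cheap generic drug,
# month by month. This clinic switches abruptly at month 6.
generic_share = [0.05, 0.04, 0.06, 0.05, 0.05, 0.04,
                 0.90, 0.92, 0.91, 0.93, 0.92, 0.94]
print(find_changepoint(generic_share))  # → 6
```

Run over 8,000 clinics, even a crude detector like this would separate the swift switchers from the clinics that changed late, gradually, or never.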

What’s remarkable about all this is how unremarkable it really is. The diffusion of innovation could once only be studied in small settings and by taking considerable pains. But this is the 21st century: the NHS has made the data available to allow us to watch a good idea spreading across the nation, or not, almost in real time.

This is, of course, an atypical situation. It is unusual to be able to collect such a large set of high-quality data, showing who has or has not embraced a new idea. And it is unusual to have such sharply defined innovations: either the doctor prescribes the new drug to patient X or she does not. Still, being able to observe leaders and laggards in the NHS is no small thing. It should be straightforward to prod the laggards — and to ask the leaders how they do it. And Dr Goldacre’s group have published their statistical tools. Hopefully, it won’t take too long for the idea of using them to spread.

 

Written for and first published in the Financial Times on 7 February 2020.

Undercover Economist

Why we need to disagree

A few days after Christmas in 1978, United Airlines Flight 173 ran into trouble on its descent into Portland, Oregon. The landing gear should have descended smoothly and an indicator light blinked on to indicate all was secure. Instead, there was a loud bang and no light.

While the crew tried to figure out whether the landing gear was in position or not, the plane circled and circled. The engineer mentioned that fuel was running low, but didn’t manage to muster enough forcefulness to convey the urgency to the captain, who was focused on the landing gear. Finally, when the first officer said “we’re going to lose an engine, buddy”, the captain asked, “why?”

The plane crashed shortly afterwards. Ten people died. The lesson: sometimes we can’t bring ourselves to speak up, even when lives are at stake.

It might seem strange, in this politically divided age, to call for people to speak out if they see things differently. But our current political discourse doesn’t quite qualify. (Abuse is not an argument, as any Monty Python fan knows.)

Useful dissent means serious engagement with people who see the world differently — or, perhaps, the courage to puncture the consensus of one’s own tribe. It is far more common to see people seeking out like-minded groups, while politicians are happy to deliver hellfire sermons to their own choirs.

That is a shame. Within a cohesive group, the mere demonstration that disagreement is possible can have liberating effects. Charlan Nemeth, a psychologist at the University of California, Berkeley, studies dissent. (Her recent book is titled, No!: The Power of Disagreement in a World that Wants to Get Along – or in the US, In Defense of Troublemakers; at least we can reliably expect transatlantic disagreement over titles.) When she arrived at the university she found her office a little too austere, and decided to put down a rug.

“These offices are all the same for a reason,” remonstrated a colleague. She kept the rug anyway — and before long, her colleagues started putting rugs in their offices, too. Apparently, few people had liked the austere offices but nobody was willing to admit that. It took Prof Nemeth’s low-level troublemaking to shatter the illusion of consensus.

Prof Nemeth has studied disagreement during brainstorming sessions. One rule of brainstorming is not to criticise the ideas of others. When she and colleagues ran their sessions, they found that groups produced more ideas if the “do not criticise” rule was reversed, encouraging participants to “debate and even criticise each other’s ideas”.

Dissent can free us to place rugs in our offices, or express our individuality in more important ways. It can also stimulate our ideas and creativity. And — as the case of Flight 173 suggests — when we hesitate to disrupt a group conversation forcefully, we can deny others a vital piece of information.

Matthew Syed, in his book Rebel Ideas (this one also has a different title in the US; there’s something in the air…) draws the same conclusion from a disastrous attempt on Everest in 1996. Mr Syed argues that junior members of the expedition had useful pieces of information about the weather and their equipment but tended to stay silent, deferring to the team leaders.

A similar dynamic is at play in lower-stakes environments. One study, conducted by Garold Stasser and William Titus, asked undergraduates to discuss hypothetical candidates for a student society president.

The researchers gave each participant a different fact sheet; some facts were given to everyone in the discussion, but others were disclosed to only one person. People rarely spoke up about their private information, and the conversation revolved — redundantly — around what the whole group knew already rather than trying to find out what wasn’t widely known. There was an opportunity for everyone to learn from everybody else, but it proved more comfortable to focus on knowledge that they all had in common.

The truth is that disagreement is hard. We find it unpleasant to be disagreed with, and it can be painful to be a dissenter. Prof Nemeth notes that when she hired actors to play the role of dissenters in experiments studying group dynamics, the actors found it distressing to be on the receiving end of hostility. Some even asked for “combat pay”.

Even in gentler settings, we underestimate the benefit of friction. One study of problem solving (conducted by Katherine Phillips, Katie Liljenquist and Margaret Neale) simply contrasted small groups of friends with those of three friends plus a stranger. The groups with an outsider did much better at solving the problems, even though the strangers had no special expertise: their mere presence raised everyone’s game.

Nevertheless, the groups of friends enjoyed themselves more and had more confidence in their answers — confidence that was, of course, badly misplaced.

We rarely appreciate it when someone is speaking out rather than fitting in. But whether it is as trivial as a rug, or as vital as a fuel gauge in a circling aircraft, we need people who see things that we don’t. We need them to speak up. And we also need to listen when they do.

 
Written for and first published in the Financial Times on 31 January 2020.

Catch up on the first season of my podcast “Cautionary Tales” [Apple] [Spotify] [Stitcher]

Receive these posts by email

(You can unsubscribe at any time)

Undercover Economist

The prisoner’s dilemma at 70 – and what we get wrong about it

Once upon a time, a pianist was arrested by the secret police and accused of spying. He was carrying sheets of paper covered with a mysterious code. Despite protesting that it was merely the sheet music for Beethoven’s Moonlight sonata, the poor man was marched to the cells. A couple of hours later, a sinister interrogator walked in. “You’d better tell us everything, comrade,” he announced with a thin smile. “We have caught your friend Beethoven. He is already talking.”

This sets up the most famous problem in game theory: the prisoner’s dilemma. The interrogator explains that if one man confesses and the other does not, the talkative prisoner will go free and the other will do 25 years in a gulag. If they both remain silent, they will each spend five years in prison. If they both confess, 20 years each. The dilemma is clear enough: each would do better to confess, regardless of what the other does; yet collectively they could profit by sticking together.
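The dominance logic can be checked mechanically. Here is a minimal sketch in Python using the sentence lengths quoted above; the labels and the helper function are mine, not the column’s:

```python
# Payoff matrix from the story, in years of prison (lower is better).
# Moves: 0 = stay silent, 1 = confess. Entries are (pianist, Beethoven).
SILENT, CONFESS = 0, 1

years = {
    (SILENT, SILENT): (5, 5),
    (SILENT, CONFESS): (25, 0),
    (CONFESS, SILENT): (0, 25),
    (CONFESS, CONFESS): (20, 20),
}

def best_response(other):
    """The pianist's best move given Beethoven's choice (fewest years)."""
    return min((SILENT, CONFESS), key=lambda me: years[(me, other)][0])

# Confessing is a dominant strategy: it is the best reply to either choice.
assert best_response(SILENT) == CONFESS   # 0 years beats 5
assert best_response(CONFESS) == CONFESS  # 20 years beats 25

# Yet mutual silence (5 years each) beats the equilibrium (20 years each).
assert years[(SILENT, SILENT)][0] < years[(CONFESS, CONFESS)][0]
```

The same check works for any payoff numbers: whenever one move is the best reply to everything the other player might do, no amount of deliberation changes the incentive.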

The dilemma is now 70 years old — it was developed in a simple mathematical form in 1950 by mathematicians Merrill Flood and Melvin Dresher and wrapped in a story by Albert Tucker. (My own retelling owes a debt to economists Avinash Dixit and Barry Nalebuff.)

Dresher, Flood and Tucker worked at the Rand think-tank. The prisoner’s dilemma distilled the tension between selfishness and co-operation into a potent form, making it emblematic of the risk of nuclear destruction and much more besides. The dilemma received a second burst of attention in 1981, after the publication of “The Evolution of Cooperation” by political scientist Robert Axelrod and evolutionary biologist William Hamilton. Their article is not only the most cited in political science, but as popular as the next three works put together.

I hope readers will forgive my dredging up such a venerable idea, because it remains relevant, instructive, and widely misunderstood. One common misunderstanding is that the problem is one of communication: if only the pianist and Beethoven could get together and agree a strategy, they’d figure out that they should stick together. Not so. Communication doesn’t solve anything. The attraction of teaming up is obvious; so is the temptation to betray. Those who believe talking helps much should watch Golden Balls, a game show based on a modified prisoner’s dilemma. What makes the show fun to watch is the emptiness of the promises contestants make to each other.

More problematic is the mistaken belief that the prisoner’s dilemma means we are doomed to selfish self-destruction. Moral philosophers have tied themselves in knots trying to refute it, to show that it is somehow rational to collaborate in a one-shot prisoner’s dilemma. It isn’t. Fortunately, most human interaction is not a one-shot prisoner’s dilemma. The 1981 paper — and subsequent book — may have pushed the pendulum too far in an optimistic direction. Prof Axelrod ran tournaments in which computer programs competed against each other, playing the prisoner’s dilemma hundreds of times. Repeating the game allows co-operation to be enforced through the threat of punishment — something game theorists had known since the 1950s. When Prof Axelrod enshrined that idea in a simple program called “Tit for Tat”, it routinely triumphed.

Tit for Tat responds to co-operation with co-operation, and betrayal with betrayal. Whatever you do to it, it does right back. Prof Axelrod highlighted the fact that although the program was tough, it was “nice” — it tried co-operation first. And he drew broader parallels, arguing that the success of the strategy explains why soldiers in the trenches of the first world war were able to agree informal ceasefires. His inspiring message was that in the worst possible circumstances, nice guys finish first — provided they have an inner steel.

But that goes too far. A simpler explanation of “live and let live” in the trenches is that popping up to shoot at the enemy is nothing like ratting out Beethoven. It is dangerous. One needs no game theory to explain why soldiers might prefer to lie low.

Prof Axelrod also set far too much store by Tit for Tat’s “niceness”. Other strategies prosper in prisoner’s dilemma tournaments, depending on details of the rules. Among them is “Pavlov”, a strategy that tries to exploit suckers and changes tactics when it encounters a punishing response. It can be co-operative, sure — but it is hardly “nice”.
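For the curious, the tournament dynamics are easy to reproduce. Below is a toy sketch, using the standard Axelrod payoffs rather than anything from the column (3 each for mutual co-operation, 1 each for mutual defection, 5 against 0 when one side defects on a co-operator); the strategy implementations are my own simplified versions:

```python
# Standard iterated prisoner's dilemma payoffs (Axelrod's values).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Co-operate first, then copy the opponent's last move.
    return their_hist[-1] if their_hist else "C"

def pavlov(my_hist, their_hist):
    # "Win-stay, lose-shift": repeat the last move after a good payoff
    # (3 or 5), otherwise switch tactics.
    if not my_hist:
        return "C"
    last_payoff = PAYOFF[(my_hist[-1], their_hist[-1])][0]
    if last_payoff >= 3:
        return my_hist[-1]
    return "D" if my_hist[-1] == "C" else "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(a, b, rounds=200):
    """Run one repeated match and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a, hist_b), b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a); hist_b.append(move_b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

# Two Tit for Tats lock into co-operation; against Always Defect,
# Tit for Tat loses only the first round.
print(play(tit_for_tat, tit_for_tat))    # (600, 600)
print(play(tit_for_tat, always_defect))  # (199, 204)
```

Which strategy wins a full round-robin depends on exactly who else is in the field and on details such as noise and match length, which is the point: Tit for Tat’s triumph was a feature of Prof Axelrod’s particular tournaments, not a law of nature.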

Prisoner’s dilemmas do exist. The most pressing example today is climate change. Every nation and every individual benefits if others restrain their pollution, but we all prefer not to have to restrain our own. It would be foolish to hope that Tit for Tat will save the day here — and we don’t have to. We have tools available to us: domestically, taxes and regulations; internationally, treaties and alliances. Such tools change the incentives. We could and should be using them more. The pianist and his suspected accomplice were trapped. We are not. Unlike them, we can change the game.

Written for and first published in the Financial Times on 24 January 2020.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.



Why my purchase choices have the kiss of death

Steve Eisman, the investment manager made famous by Michael Lewis’s The Big Short, did a lot of homework in his quest for terrible assets to bet against. But when he was introduced to another investment manager — Wing Chau — he saw the opportunity to accelerate the decision-making process: “Whatever that guy is buying, I want to short it.”

For Mr Eisman, Wing Chau was the equivalent of a watch that is six hours off: a perfect guide, as long as you realise that you need to look at the opposite side of the deal — or the clock face.

A few years ago, four business school academics won attention for the discovery that the same logic may work for retail products. Eric Anderson, Song Lin, Duncan Simester and Catherine Tucker found what they called “harbingers of failure” — consumers who simply adored the Ford Edsel, the Betamax video format, or those squeezy bottles loaded with Heinz EZ Squirt ketchup in bright blue, green and purple, a kind of edible paint. These people thought nothing cried out “sophisticated lady” more loudly than a packet of Bic disposable knickers.

Product development teams have long prized the idea that “lead customers” could give them insight into where the mass market might be going. A celebrated example is the mountain bike, a product assembled by enthusiasts who, starting in the early 1970s, modified old bikes by adding balloon tyres and motorcycle brakes to cope with demanding off-road conditions. Fifteen years later, the mountain bike was a mainstream retail product.

Pointing to such examples, Eric von Hippel, a professor at MIT, argued that companies shouldn’t just show product ideas to focus groups made up of generic, average consumers. They should find the early adopters and the trend setters, and pay particular attention to them.

But the “harbingers of failure” study reminds us that we could equally seek customers with the opposite quality: an unerring nose for products that the mass market will despise. Perhaps it shouldn’t be a surprise that such people exist. Prof Anderson and his colleagues suggested that companies could identify harbinger customers by examining their purchase decisions, and then use them as a guide to what not to stock in future. They also concluded that these customers provided a strong signal of a product’s prospects: “The more they buy, the less likely the product will succeed.”

Recently, the plot thickened like a glob of EZ Squirt: a research paper from professors Simester and Tucker and Clair Yang reported on “The Surprising Breadth of Harbingers of Failure”. This study found that “not only are there customers who are harbingers, but there are also harbinger zip codes”.

People in these accursed neighbourhoods buy doomed products, and also niche products that nearby zip codes don’t find attractive. The tendency is broad-based: they buy unpopular products at a big-box warehouse store, but they also buy unpopular garments at a clothing retailer. This is rather convenient for market researchers — Prof Simester and colleagues argue that zip codes provide all the information needed to learn from the harbinger effect. Harbinger zip codes even pick losers in electoral campaigns: residents are more likely to donate money to political candidates who lose, and less likely to donate to popular ones.

And then I realised: they’re talking about me. While I’d prefer not to reveal too much about my voting habits, it has been a very long time since I was on the winning side: I didn’t vote for Boris Johnson, I didn’t vote for David Cameron and I didn’t vote for Tony Blair. I was on the losing side in all the referendums, too. Politically, I am Crystal Pepsi. I am Colgate ready meals.

Come to think of it, as a student I did go through a phase of drinking the monumentally unsuccessful soft drink, Tab. I’ve never owned an iPhone and when my wife bought me an iPad, I sent it back because I couldn’t figure out how to make it work. In the pool, I wear Speedos. I am the Wing Chau of retail and politics: come study me, oh trendspotters and psephologists, for a glimpse into what the future does not hold.

All this makes me wonder: what makes a harbinger of failure, and why is our taste for the unpopular so wide-ranging? Why would someone who admires Clairol’s Touch of Yogurt shampoo feel the same way about the Liberal Democrats’ Vince Cable? Perhaps we harbingers are open-minded, happy to take a risk on something new and unusual? Perhaps; but harbingers don’t just try Frito-Lay lemonade, we swig it down and then come back for more.

Perhaps the answer is that ordinary, well-adjusted people notice what other people are doing, and fit in. In contrast, we harbingers are simply oblivious. Jacket and jeans? Socks and sandals? Why not? I have yet to see a completely convincing explanation — or even to be fully persuaded that the whole idea isn’t one big statistical fluke. But if anyone in market research would like to follow me around a supermarket, get in touch.

 

Written for and first published in the Financial Times on 17 January 2020.



Two cheers for the dematerialising economy

If past trends continue, the world’s gross domestic product will be about twice as big by 2040 as it is today. That’s the sort of growth rate that translates to 30-fold growth over a century, or by a factor of a thousand over two centuries. Is that miraculous, or apocalyptic? In itself, neither. GDP is a synthetic statistic, invented to help us put a measuring rod up against the ordinary business of life. It measures neither the energy and resource consumption that might worry us, nor the things that really lead to human flourishing.
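The compounding arithmetic behind those figures is worth making explicit: a doubling every two decades corresponds to annual growth of about 3.5 per cent, and the century-scale multiples follow directly.

```python
# A doubling of GDP in 20 years implies an annual growth rate g
# satisfying (1 + g)**20 == 2.
g = 2 ** (1 / 20) - 1            # about 3.5 per cent a year

century = (1 + g) ** 100         # = 2**5, the "30-fold" figure
two_centuries = (1 + g) ** 200   # = 2**10, the "factor of a thousand"

print(f"annual growth: {g:.1%}")           # annual growth: 3.5%
print(f"over a century: {century:.0f}x")   # over a century: 32x
print(f"over two centuries: {two_centuries:.0f}x")  # over two centuries: 1024x
```

Five doublings in a century give a 32-fold rise and ten doublings give a 1,024-fold rise, which is why the column can round these to “30-fold” and “a factor of a thousand”.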

That disconnection from what matters might be a problem if politicians strove to maximise GDP, but they don’t — otherwise they would have hesitated before imposing austerity in the face of a financial crisis, launching trade wars or getting Brexit done. Economic policymaking has flaws, but an obsession with GDP is not one of them.

Nevertheless the exponential expansion of GDP is indirectly important, because GDP growth is correlated with things that do matter, good and bad. Economic growth has long been associated with unsustainable activities such as carbon dioxide emissions and the consumption of metals and minerals.

But GDP growth is also correlated with the good things in life: in the short run, an economy that is creating jobs; in the long run, more important things. GDP per capita is highly correlated with indicators such as the Social Progress Index. The SPI summarises a wide range of indicators, from access to food, shelter, health and education to vital freedoms, including freedom of choice and freedom from discrimination. All the leading countries in the Social Progress database are rich. All the strugglers are desperately poor.

So the prospect of a doubling of world GDP matters, not for its own sake, but for what it implies — an expansion of human flourishing, and the risk of environmental disaster. So here’s the good news: we might be able to enjoy all the good stuff while avoiding the unsustainable environmental impact. The link between economic activity and the use of material resources is not as obvious as one might think. There are several reasons for this.

The first is that for all our seemingly insatiable desires, sometimes enough is enough. If you live in a cold house for lack of money, a pay rise lets you take off the extra cardigan and turn up the radiators. But if you win the lottery, you are not going to celebrate by roasting yourself alive.

The second is that, while free enterprise may care little for the planet, it is always on the lookout for ways to save money. As long as energy, land and materials remain costly, we’ll develop ways to use less. Aluminium beer cans weighed 85 grammes when introduced in the late 1950s. They now weigh less than 13 grammes.

The third reason is a switch to digital products — a fact highlighted back in 1997 by Diane Coyle in her book The Weightless World. The trend has only continued since then. My music collection used to require a wall full of shelves. It is now on a network drive the size of a large hardback book. My phone contains the equivalent of a rucksack full of equipment.

Dematerialisation is not automatic, of course. As Vaclav Smil calculates in his new book, Growth, US houses are more than twice as large today as in 1950. The US’s bestselling vehicle in 2018, the Ford F-150, weighs almost four times as much as 1908’s bestseller, the Model T. Let’s not even talk about the number of cars; Mr Smil reckons the global mass of automobiles sold has increased 2,500-fold over the past century.

Still, there is reason for hope. Chris Goodall’s research paper “Peak Stuff” concluded that, in the UK, “both the weight of goods entering the economy and the amounts finally ending up as waste probably began to fall from sometime between 2001 and 2003”. That figure includes the impact of imported goods.

In the US, Jesse Ausubel’s article “The Return of Nature” found falling consumption of many commodities, including iron ore, aluminium, copper, steel and paper. Agricultural land has become so productive that some of it is being allowed to return to nature.

In the EU, carbon dioxide emissions fell 22 per cent between 1990 and 2017, despite the economy growing by 58 per cent. Only some of this fall is explained by the offshoring of production. (For a good summary of all this research, try Andrew McAfee’s book More From Less.)

Can we, then, relax? No. To pick a single obvious problem, global carbon dioxide emissions may be rising more slowly than GDP — but they are rising nevertheless, and they need to fall rapidly. Yet the fact that dematerialisation is occurring is heartening. We all know what the basic policies are that would tilt the playing field in favour of smaller, lighter, lower-emission products and activities. Adopting those policies means we might actually be able to save the planet, preserve human needs, rights and freedoms — and still have plenty of fun into the bargain.

 

Written for and first published in the Financial Times on 10 January 2020.



Will the 2020s be the decade that the robots finally come for our jobs?

Will this decade finally be the one in which the machines take our jobs? Such concerns have been aired many times over the centuries and they have always been wrong. But they are not intrinsically absurd.

In 1979, the economist Wassily Leontief pointed to the fate of the horse. Horses had long been of vital economic importance, but faded in the second half of the 20th century as the internal combustion engine became the dominant source of horsepower. Horses still have a niche, but will never outcompete engines, no matter how cheap oats become.

Might large numbers of human workers go the way of the horse? In 2003, the economists David Autor, Frank Levy and Richard Murnane published a study of the economics of technological change that made two influential observations.

First, they pointed out (correctly) that it is misleading to talk of robots — or any other technology — taking jobs. Instead, machines perform tasks, a narrower unit of work. Since most jobs involve many different tasks, robots do not take jobs, but they may radically reshape them. A robot accountant is not C-3PO; it’s Excel or QuickBooks. As with the horse, there is no wage at which human calculators can compete with a computer at the task of adding up a spreadsheet. Still, human accountants exist in large numbers. Their jobs simply look very different today.

Second, argued Profs Autor, Levy and Murnane, the tasks that machines took on were best described not as “skilled” or “unskilled” but as “routine” or “non-routine”. Recalculating a spreadsheet is a skilled but routine task, easily automated. Cleaning a toilet requires little skill — even I can do it — but is non-routine and therefore hard to automate.

This way of looking at the world proved very useful. It explained why technology could disrupt our jobs without destroying them. And why both the low-paid and high-paid ends of the labour market were proving robust, while the middle, packed with skilled-yet-routine tasks, was hollowed out.

But in a new book, A World Without Work, Daniel Susskind argues that the second part of the Autor-Levy-Murnane perspective is proving more questionable. He observes that the boundaries of the “routine” are blurring fast. Consider, for example, CloudCV, a system that answers open-ended questions about images. Upload an image and ask any question you like.

One photograph showed some 20-somethings sitting on a sofa with white wine and cans of Kronenbourg lager in front of them, with one fellow standing in a dramatic pose. “What are they doing?” I asked the computer. “Playing Wii,” it replied, correctly. “What are they drinking?” Probably beer, it said. “How’s the weather?” I asked of an outdoor snapshot. “Cloudy.” It was.

The system gives accurate answers to informally phrased questions about random photographs. Is that task routine? Hardly.

Neither is the performance of AlphaZero, the game-playing algorithm developed by DeepMind, a sister company of Google. In 2017, AlphaZero trained itself in a few hours to thrash the best chess-playing engine and the best Go program, both of which easily beat the best humans. Some claim this performance is less impressive than it first appears — but 10 years ago the mere idea that a computer could beat a human at Go seemed implausible. What DeepMind’s supercomputers can do today will be achievable on laptops and phones by 2030.

In task after task, the computers are overtaking us. In the Visual Question Answering challenge that CloudCV attempts, humans score 81 per cent. The machines were at 55 per cent as recently as 2016; by the summer of 2019 they were at 75 per cent. It’s only a matter of time before they do a better job than us — just as AlphaZero does.

The Artificial Intelligence Index project, based at Stanford University, tracks a wide variety of benchmarks. The machines are making rapid progress at symbolic achievements — such as playing poker — but also at translation, speech recognition, and classifying diseases such as skin cancer (from images of moles) and diabetes (from images of retinas).

These achievements are real. And despite the fact that there are many things computers cannot do, when an algorithm does a narrow task cheaply and well, we humans end up contorting ourselves to unleash the new capability while sweeping up the tasks the software leaves behind. Just look at the self-checkout at your local supermarket.

So — will the machines take all the jobs in the coming decade? No, and that remains an unhelpful way to phrase the question. Machines encroach on tasks, and we reorganise our jobs in response, becoming more productive as a result. But there is good reason to believe that such reorganisations will be wrenching in the decade to come, and also that some people will be permanently unable to contribute economically in the way they would have hoped and expected. Above all, it is likely that our political institutions will be unable to adapt to the challenge.

 
Written for and first published in the Financial Times on 3 January 2020.

