Tim Harford The Undercover Economist


My weekly column in the Financial Times on Saturdays, explaining the economic ideas around us every day. This column was inspired by my book and began in 2005.

Undercover Economist

Trump, May and the necessary art of brinkmanship

Brinkmanship is an old idea, but not such an old word. It was first used in 1956, after US Secretary of State John Foster Dulles opined that “the ability to get to the verge without getting into the war is the necessary art . . . if you are scared to go to the brink, you are lost.”

Adlai Stevenson, the Democratic presidential nominee, began to use the term “brinkmanship” in response. He did not intend it as a compliment.

Yet we find ourselves surrounded by leaders who believe they have mastered this “necessary art”. The stakes are blessedly lower, but still high enough to deserve examination. In the US, Donald Trump has failed to deliver on his promise to get Mexico to pay for his border wall, and has partly shut down the federal government until Congress agrees that the US taxpayer will fund it instead. Voters will reach their own conclusions as to who is to blame.

In the UK, Theresa May wants parliament to vote for the unappetising Brexit deal she has negotiated with the EU. She offers two simultaneous and mutually exclusive threats, confronting hardliners with the prospect of no Brexit at all, while warning the EU and British moderates that there will be a chaotic “no deal” outcome instead.

Whether we are talking about Brexit, a border wall, or the early stages of the Vietnam war, each situation is different. Yet it is worth pondering similarities in the structure of the problem.

These threats may seem empty. Dulles did not want nuclear war. Mrs May does not want six-day-long traffic jams on the way into Dover. Nevertheless the threat may be made credible enough to achieve results. How?

One option is to use a doomsday machine, made famous by Stanley Kubrick’s dark comedy Dr Strangelove. The doomsday machine is credible because it is automatic. It cannot be switched off, only obeyed. The risks are obvious; in the movie, the doomsday machine destroys civilisation.

Mrs May’s doomsday machine was the Article 50 divorce process, which we were told could not be halted once begun. Without parliamentary approval of a deal, this legal doomsday machine would deliver a disruptive no-deal by default. Triggering Article 50 weakened the prime minister’s negotiating hand with the EU but strengthened it when dealing with those MPs who seem open to reason.

Yet it now transpires that the machine has an off-switch after all. The UK government can simply revoke its notification to leave. Mrs May therefore managed to hobble her bargaining position with the EU while leaving herself hostage to her own party.

The second tactic for gaining credibility is the “madman” strategy: if you are insane, or can fake insanity, then insane threats seem plausible. The strategy was flawlessly executed by Sheriff Bart in the film Blazing Saddles, who managed to escape being lynched by racists by threatening to shoot himself. That achievement is hard to replicate, though. As Bart tells himself, “you are so talented. And they are so dumb!”

Mr Trump is erratic enough to make the madman tactic seem plausible, although he has also frequently backed down. Mrs May does a good line in stubbornness, and is trying hard to make a chaotic no-deal seem as if it is an inescapable force of nature, like an earthquake or a flood. Yet it seems unlikely that she would embrace the chaos when, with a stroke of her pen, she could call it all off. Some leading Brexiters, however, have perfected the madman pose; they’ve convinced me that they simply do not care. Perhaps I’ve been fooled by a brilliant bluff. Perhaps.

There is a third way to make threats credible: create the risk of an accident. Thomas Schelling, cold war strategist and Nobel laureate economist, described handcuffing yourself to your opponent then cavorting on the edge of a cliff. You’re not suicidal, but you are willing to create the risk that things will go terribly wrong. If your counterpart fears that risk more than you, you may extract concessions.

As Schelling and his fellow strategists knew, in situations such as the Cuban missile crisis there was always a risk that something would get out of hand, and all of us would slip off the cliff together. It was this that made world-ending threats plausible.

If you are finding all this discomfiting, you are not alone. Somehow we have managed to produce a situation where democratically elected politicians are threatening substantial harm to their own countries as a bargaining tactic. The tactic is credible because accidents happen. At least we can comfort ourselves that long-range bombers are not involved.

How did we get here? Recall the final scene of Dr Strangelove. With Armageddon inevitable, Strangelove reassures the all-male leadership of the US that they could survive in underground cities. The survival of the human race would be ensured by a ratio of 10 “highly stimulating” women to each man. Everyone seems rather cheered by this thought.

Brinkmanship does not work if it does not create a risk of harm. Yet the people practising the strategy may not be the ones who will experience it.


Written for and first published in the Financial Times on 11 January 2019.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.

Free email updates

(You can unsubscribe at any time)

Undercover Economist

Why there is no need to panic about fake news

A new year’s resolution for all: stop talking about fake news. Perhaps we should have stopped talking about it at the same time as we started. That, according to Google Trends, was the week after Donald Trump won the US presidential election in 2016, which suggests the interest was driven by astonished people looking for an explanation. Fake news was not the only scapegoat but it was, and still is, a popular one. It was even named the Word of the Year in 2017 by Collins Dictionary. Yet the phrase has long since ceased to be useful, and here are five reasons why.

First, fake news doesn’t mean anything — or rather, it means so many different things to different people as to be bewildering. Focus group studies conducted by the Reuters Institute for the Study of Journalism found that people placed various things under the “fake news” umbrella, including annoying pop-up advertisements, politicians making misleading claims, and newspapers with a political slant.

None of these match the original definition of fake news — at least, as I understand it — which referred to stories that were invented to win advertising clicks and impersonated or parodied genuine journalism. The most famous example was when the Pope was “reported” to have endorsed Mr Trump’s presidential candidacy.

Such stories were widely shared, and while some claimed to be humour or satire, the basic motive was monetary. It is cheap to invent lies, and eye-catching lies are a reliable source of clicks and thus advertising dollars. No wonder journalists became irate: for so many outlets, real news had become unprofitable while fake news was a money-spinner.

But for all the people determined to believe that the Pope’s fictional endorsement had swung the election for Mr Trump, there is little evidence that it — or similar clickbait fabrications — did any such thing. While the most popular fake stories were shared at least as widely as the most popular true articles, that is partly because the fakes were unique while each true article had dozens of imitators or parallels.

A study conducted by economists Hunt Allcott and Matthew Gentzkow found that fake news simply wasn’t as widely shared, seen or remembered as many people think. Close as the 2016 election was, it is unlikely that these stories swung it.

That is the second reason to steer clear of the fake news phrase: in its original form it is aggravating and, occasionally, has constituted incitement to serious violence. But despite a certain degree of moral panic, fake news itself does not pose an existential threat either to democracy or the free press.

What does pose such a threat is a draconian response from governments. Is that likely? The fact-checking organisation FullFact has described the response of some governments, internet and media companies as “frightening over-reactions” — although it adds that the UK government has so far avoided rushed or illiberal measures.

It is all too easy to turn legitimate concerns about false information into a situation where the government decides what can be said and who can say it. We need to be careful that the cure is not worse than the disease — a third reason to avoid panicking about fake news.

The fourth reason is that Mr Trump, with his twisted genius for turning a complex issue into a political cudgel, has deployed the term to demonise regular journalists. Given the number of journalists murdered around the world, including in the US, one might hope for some restraint from the president, but in vain.

Other politicians have also embraced the phrase, including UK Prime Minister Theresa May and Labour party leader Jeremy Corbyn. I worry about a world in which many people believe lies, but I worry far more about one in which many people instinctively refuse to believe the truth.

Here is the final reason to calm down about fake news: it feeds into the tempting but smug assumption that the world is full of idiots. People are sometimes taken in by lies, and some spectacular falsehoods have gained more traction on social media than one might hope.

But if we persuade ourselves that Mr Trump was elected by people who wanted to be on the same side as the Pope, we’re not giving voters enough credit. It is true that most people are disengaged from serious news, voting with their guts rather than their heads, and guided by friends rather than by a close reading of policy analysis. That does not make them fools.

There is much to concern me in the current political information environment. I worry (partly selfishly) that it is harder than ever to sustain a business that provides serious journalism. I worry that politicians around the world are doing their best to politicise what should be apolitical, to smear independent analysis and demean expertise.

I worry that there is far too little transparency over political advertising in the digital age: we don’t know who is paying for what message to be shown to whom.

The free press — and healthy democratic discourse — faces some existential problems. Fake news ain’t one.


Written for and first published in the Financial Times on 4 January 2019.


Undercover Economist

Cash-rich, time-poor: Why the wealthy are always in a hurry

Will making more money save you time? Or will it make you feel more rushed than ever? I’ve been pondering this question because a friend challenged me to figure out whether income poverty and time poverty go hand in hand.

There are cash-poor, time-poor people, who juggle multiple shifts with childcare and spend precious hours on long commutes. There are cash-poor, time-rich people — pensioners or job seekers wondering how to fill the day. But, on average, are richer people more or less busy than those with less money?

On one point, the evidence is clear: whether or not people on high incomes are busy, they think they are. In a forthcoming book, Spending Time (UK) (US), economist Daniel Hamermesh looks at “time stress”, which is measured not by looking at a time-use diary but instead by surveying people to ask if they often feel “rushed” or “pressed for time”.

New parents, especially new mothers, are more likely to complain of time-stress. So are people who work longer hours — no surprise there. But what about income? Prof Hamermesh finds that “people who were always or often stressed had the highest earnings . . . earnings were lowest among the never-stressed”. Money goes hand-in-hand with the sense that there aren’t enough hours in the day.

This isn’t just for the obvious reason that high-income people spend more time doing paid work, although on average they do. (They also sleep less and watch less television.) Of people who work the same hours, having a higher income per hour is correlated with feeling pushed for time. Even people who don’t do any paid work at all feel more rushed if they have more money.

On the face of it, this makes little sense: surely, for any given workload, money is a timesaver rather than a time-sink? Logically, yes. Psychologically, no. It seems that people with more money find more things to do with their time, and so feel more time pressure.

For example, someone with money to spare may book nights at the theatre, reserve tables at fancy restaurants and sign up for bespoke courses. With less cash, cheaper options such as watching TV or reading a book seem more practical. A time-use diary would record all of these activities as “leisure”, but curling up at home with a book is not only cheaper than going to the theatre, but induces less of a sense of time stress.

I’m not saying we should shed a tear for the millionaire who feels she doesn’t have enough hours in the day to spend all her money. But perhaps we shouldn’t be surprised that such feelings are common among richer people.

Another perspective comes from comparing education levels to how people spend their time across a week, as the economists Orazio Attanasio, Erik Hurst and Luigi Pistaferri have done.

People with more education — say, more than 12 years — tend to be richer. But do they also tend to be busier? It seems so. We have US surveys from around 1985 and 2005, and they show that less-educated people have more leisure time than those who are highly educated. (They also had more leisure time in 2005 than in the 1980s.) In contrast, the more highly educated group — who already had less free time in the 1980s — have been getting busier since.

There’s a gender dimension here too. Both in the 1980s and the 2000s, the people with the least leisure time were highly educated women, while those with most time to kill were less-educated men.

The gap between these two groups has widened. While less-educated men have gained 2.5 hours of leisure time a week (to a total of 39 hours), the more-educated women have lost two hours a week (bringing them down to a total of 30 hours). Women also feel more time-stressed than men, even after adjusting for other factors.

All these averages, of course, conceal a great deal of variation. The extra 2.5 hours of leisure a week that less-educated men have gained sound rather pleasant. But behind that average is a growing minority with 60, 80 or 100 hours a week of “leisure time” — better described as unemployment. Although research suggests that some young men seem not to mind unemployment, given that computer games are now so awesome, most people hate it.

So while there are many struggling people who are holding down several different gigs, juggling childcare and burning time on long commutes, overall the evidence shows that the rich are time-poor and the poor are time-rich.

Is this any compensation for the other inequities of life? Probably not — although it depends how much you enjoy your leisure and whether you enjoy your job. Recent studies of people doing gig work or shift-work on irregular hours find that a lot of them love the flexibility but many others hate the uncertainty or want more work.

Research on happiness shows that people — on average — tend to prefer leisure to work. On the other hand, it also shows that being unemployed is utterly miserable. Prof Hamermesh writes, “I would be very happy to wager that most people would choose to feel time-poor rather than income-poor.” It’s hard to disagree.


Written for and first published in the Financial Times on 30 November 2018.


Undercover Economist

Technology can be the friend of creativity

By the time you read this, I shall be sitting in a cinema watching a screening of The Box of Delights (US), nearly three hours of vintage television that captivated me as a boy when broadcast on the BBC in the six weeks running up to Christmas 1984. I’ll be charmed by Patrick Troughton, terrified by Robert Stephens, and mildly amused by the special effects, which were record-breakingly lavish at the time but look amateurish now.

Christmas isn’t Christmas without The Box of Delights. Still, the cinema visit is an indulgence, because a presumably illicit copy of the series has been on YouTube for three years. And there lies an interesting question: what has digitisation done to the richness of our popular culture, from TV and film to music and books?

The obvious response is that digitisation is ruining everything: a children’s series that was once the most expensive ever made by the BBC has been pirated. Why would the BBC — or anyone — invest in the next masterpiece if it will inevitably be ripped off?

This problem is starkest for the music industry, since pirated music is not only free but convenient. For the US music industry, annual retail revenues for physical products have fallen in real terms by more than 90 per cent, from about $20bn in the late 1990s to less than $1.5bn last year. Revenue from downloads and particularly streaming has been growing strongly, but the total takings are less than half what they once were.

Such a collapse in revenues does not seem to be a recipe for a creative flowering. The same threat of piracy — or simply competition from videos of cats riding on Roombas — hangs over film, TV and books. (It also hangs over journalism, but that is a topic for another day.)

On the other hand, digitisation makes it easy to find obscure works. The Box of Delights was merely a memory for many years; finding a copy on VHS video would have been its own epic quest. Now I can watch it on a whim, without leaving my desk.

In the early years of ecommerce, economists Erik Brynjolfsson, Yu Hu and Michael Smith estimated that the availability of obscure book titles alone through websites had increased consumer welfare in the US by about $1bn in the year 2000 — a modest $3 per person, but not nothing. That figure may be much more today.

There is a balance here to be struck, and it is one familiar from debates about copyright. Copyright creates an artificial monopoly, rewarding creators to encourage them to create more. The same artificial monopoly raises prices to consumers and restricts remixes, adaptations and derivative work that is valuable in its own right. Copyright can harm the spread of creative ideas by being too weak, or too strong.

At least copyright rules can in principle be optimised — even if in practice copyright terms last much longer than is required.

Technology cannot be so easily tamed by a stroke of the legislator’s pen. So what has new technology done to creative work? Has it been gutted by piracy, or is it flourishing thanks to ever-cheaper means of producing and distributing new ideas?

Joel Waldfogel, an economist and author of a new book, Digital Renaissance (UK) (US), has been trying to figure out the answer to that question. On the question of quantity, there is no doubt: we now have access to vastly more creative works. As well as all the amusing Roomba videos, there is a huge international output, from “Gangnam Style” to the Korean historical dramas that my daughter enjoys so much. Obscure music and out-of-print books can be obtained in digital form within seconds, and YouTube allows me to bore my children with old comedy clips any time I choose.

Isn’t this just strip-mining old assets? No. New releases abound. In the US, 3,000 new movies were released in 2010, up from 500 in 1990. New song publication increased sevenfold between 1988 and 2007, despite plunging revenues. Four hundred thousand books were published in 2012, up from 85,000 in 2008. Much of this new stuff is dreadful, but that doesn’t much matter, since nobody has to watch, hear or read it. What matters is not the average quality, but the quality of the best stuff.

This is hard to assess, but Prof Waldfogel looks at indicators such as reviews — both from professional critics and from online databases — as measures of quality. Among the dross there is an increasing number of both highly rated TV shows and highly rated movies.

Music, too, is doing just fine. Synthesising the ratings of critics suggests that the late 1960s and early 1970s were the golden age for music; any other conclusion would have been a shock. But while more recent music is less highly rated, there is little sign that it is inferior to the highly profitable albums of the 1990s.

This shouldn’t be entirely surprising. Most ideas used to be shut down at an early stage; now many see the light of day. As the late novelist William Goldman reminded us, “nobody knows anything”, so it is no surprise that among these new releases, the occasional gem sparkles. The internet, like the box of delights itself, is full of wonders.


Written for and first published in the Financial Times on 7 December 2018.


Undercover Economist

Stop sniping at central banks and set clear targets

What is the most important job in the world? President? Teacher? Parent? I suggest a post that would not feature on most people’s lists: central banker.

James Carville, who advised US president Bill Clinton, once quipped that he wanted to be reincarnated as the bond market, able to intimidate anybody. But as Ben Bernanke, Mervyn King and more recently Mario Draghi (crisis-era central bank heads in the US, UK and EU respectively) demonstrated, even the bond markets must bow to the desires of a central banker with trillions to spend.

Thank goodness for such financial superpowers. These central bankers have a strong claim to having prevented a rerun of the Great Depression. But the crisis is over, and Mr Draghi is the last of that cohort still standing. Now in the final year of his term as European Central Bank president, he announced this week that eurozone quantitative easing would end this month.

So can central bankers now step out of the spotlight and toast their own intelligence, decisiveness and humility with a decent claret? It seems not. Politicians are spoiling for a fight.

President Donald Trump recently accused the US Federal Reserve of having “gone crazy” (the alleged sign of insanity: nudging interest rates up during a boom). Turkey’s Recep Tayyip Erdogan has been leaning on his central bank. And when the Reserve Bank of India’s governor abruptly resigned this week after weeks of political pressure, nobody was buying his claim that it was “for personal reasons”.

In the UK, Mark Carney, the governor of the Bank of England, has drawn fire both from prime minister Theresa May and Brexiter Jacob Rees-Mogg. Central bankers claim to be uninterested in politics, but politics is interested in them.

To understand how we got here, it is worth remembering why the idea of independent central banking, fashionable in the 1920s, became the received wisdom again in the 1990s.

The aim was simple: credibility. Politicians are always tempted to lower interest rates to keep the economy hot, unemployment low and voters happy. This might achieve short-term benefits but it undermines an economy’s health and stokes inflation. Even a principled politician’s promises to curb inflation would not be believed; bond markets would demand an inflation premium, unions big pay rises.

A flint-hearted technocrat can at times deliver better results for everyone. In the early 1980s, Fed chair Paul Volcker demonstrated the basic idea that inflation could be crushed by a sufficiently badass central banker. New Zealand formalised the arrangement a few years later by giving the Reserve Bank of New Zealand an explicit inflation target, regardless of consequences.

That target was hit. “Inflation will be defeated” seems to be partly a self-fulfilling belief, so credibility is a strong argument for central bank independence.

Yet central bankers have experienced some serious mission creep over the past decade. Paul Tucker, a former deputy BoE governor and author of Unelected Power (UK) (US), notes that we have handed ever more power to them. There were reasons for this. Central banks operate through the banking system and have the ability to create new money without limit. Instead of watching the crisis of 2008 from the sidelines, they rolled up their sleeves and got involved in almost every part of the bond and loan markets. Yet these efforts — from buying up debt to regulating mortgage availability — created winners and losers. That is the natural domain of politics.

And so we have three options. The status quo is to leave powerful unelected officials free to act with wide discretion, while serving as a scapegoat for politicians who have nothing else to offer. That will not do.

Or we could double down on autonomy. The central banks got plenty right during the crisis and, given the sorry state of politics at the moment, technocracy has a certain appeal. If the UK appointed Mr Carney supreme dictator for life and bought him a nice dress uniform, he surely couldn’t do a worse job of running the country than the elected politicians currently attempting to interpret the “will of the people”.

But the long-run health of our democracies demands that our politicians start taking responsibility again. The Brexit referendum demonstrated that it is unwise to turn over direct policymaking power to us voters; we lack the time, expertise and interest. Yet it is undemocratic to place the levers of power two or three steps away from the people.

There is no easy or complete solution, but Mr Tucker is right to demand a return to clear mandates for independent agencies, set and monitored by elected politicians. That is partly to keep the technocrats in check, but also to force politicians to step up.

If we see clearly that responsibility lies with elected officials, then we may start to value expertise for the sake of expertise again. We are only likely to trust technocrats to deal with the technical details if we see that politicians are dealing with the politics. We cannot allow unelected people unlimited discretion. But just as importantly, we cannot tolerate our politicians acting like children and hoping that the grown-ups will tidy up the mess.

Written for and first published in the Financial Times on 14 December 2018.


Undercover Economist

Why good forecasters become better people

So, what’s going to happen next, eh? Hard to say: the future has a lotta ins, a lotta outs, a lotta what-have-yous.

Perhaps I should be more willing to make bold forecasts. I see my peers forecasting all kinds of things with a confidence that only seems to add to their credibility. Bad forecasts are usually forgotten and you can milk a spectacular success for years.

Yet forecasts are the junk food of political and economic analysis: tasty to consume but neither satisfying nor healthy in the long run. So why should they be any more wholesome to produce?

The answer, it seems, is that those who habitually make forecasts may turn into better people. That is the conclusion suggested by a research paper from three psychologists, Barbara Mellers, Philip Tetlock and Hal Arkes.

Prof Tetlock won attention for his 2005 book Expert Political Judgment (UK) (US), which used the simple method of asking a few hundred experts to make specific, time-limited forecasts such as “Will Italy’s government debt/GDP ratio be between 70 and 90 per cent in December 1998?” or “Will Saddam Hussein be the president of Iraq on Dec 31 2002?”

It is only a modest oversimplification to summarise Prof Tetlock’s results using the late William Goldman’s aphorism: nobody knows anything.

Yet Profs Mellers and Tetlock, along with Don Moore, then ran a larger forecasting tournament and discovered that a small number of people seem to be able to forecast better than the rest of us. These so-called superforecasters are not necessarily subject-matter experts, but they tend to be proactively open-minded, always looking for contrary evidence or opinions.

There are certain mental virtues, then, that make people better forecasters. The new research turns the question around: might trying to become a better forecaster strengthen such mental virtues? In particular, might it make us less polarised in our political views?

Of course there is nothing particularly virtuous about many of the forecasts we make, which are often pure bluff, attention-seeking or cheerleading. “We are going to make America so great again” (Donald Trump, February 2016); “There will be no downside to Brexit, only a considerable upside” (David Davis, October 2016); “If this exit poll is right . . . I will publicly eat my hat” (Paddy Ashdown, May 2015). These may all be statements about the future, but it seems reasonable to say that they were never really intended as forecasts.

A forecasting tournament, on the other hand, rewards a good-faith effort at getting the answer right. A serious forecaster will soon be confronted by the gaps in his or her knowledge. In 2002, psychologists Leonid Rozenblit and Frank Keil coined the phrase “the illusion of explanatory depth”. If you ask people to explain how a flush lavatory actually works (or a helicopter, or a sewing machine) they will quickly find it is hard to explain beyond hand-waving. Most parents discover this when faced by questions from curious children.

Yet subsequent work has shown that asking people to explain how the US Affordable Care Act or the European Single Market work prompts some humility and, with it, political moderation. It seems plausible that thoughtful forecasting has a similar effect.

Good forecasters are obliged to consider different scenarios. Few prospects in a forecasting tournament are certainties. A forecaster may believe that parliament is likely to reject the deal the UK has negotiated with the EU, but he or she must seriously evaluate the alternative. Under which circumstances might parliament accept the deal instead? Again, pondering alternative scenarios and viewpoints has been shown to reduce our natural overconfidence.

My own experience with scenario planning — a very different type of futurology from a forecasting tournament — suggests another benefit of exploring the future. If the issue at hand is contentious, it can feel safer and less confrontational to talk about future possibilities than to argue about the present.

It may not be so surprising, then, that Profs Mellers, Tetlock and Arkes found that forecasting reduces political polarisation. They recruited people to participate in a multi-month forecasting tournament, then randomly assigned some to the tournament and some to a non-forecasting control group. (A sample question: “Will President Trump announce that the US will pull out of the Trans-Pacific Partnership during the first 100 days of his administration?”)

At the end of the experiment, the forecasters had moderated their views on a variety of policy domains. They also tempered their inclination to presume the opposite side was packed with extremists. Forecasting, it seems, is an antidote to political tribalism.

Of course, centrism is not always a virtue and, if forecasting tournaments are a cure for tribalism, then they are a course of treatment that lasts months. Yet the research is a reminder that not all forecasters are blowhards and bluffers.

Thinking seriously about the future requires keeping an open mind, understanding what you don’t know, and seeing things as others see them. If the end result is a good forecast, perhaps we should see that as the icing on the cake.


Written for and first published in the Financial Times on 23 Nov 2018.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


Undercover Economist

The economist’s guide to the perfect Christmas

It was snowing even in London this week. Surely now it’s time to get serious about Christmas — and who better to give the perfect yuletide advice than an economist? (I also plan to include the best ideas from psychology, and call them “behavioural economics”; this is a proven formula.)

Economists are much needed at this time of year since Christmas is, more than anything, a consumerist blowout. It has been for well over a century. Joel Waldfogel, economist and author of Scroogenomics, comments that “just as every generation imagines that it invented sex, every generation imagines that it invented the vulgar commercialisation of Christmas”. Prof Waldfogel has tracked the size of the spending boom in the US in December, compared with November and January. It has been sizeable at least since the 1930s and probably much longer than that. If anything, Christmas stands out less in the spending data than it did three generations ago.

As an economist, I have nothing against this rampant consumerism — but I do wonder whether there is a way to enjoy a better Christmas.

Here is my three-point plan.

Step one: beware the efficient presents hypothesis. This is a variant on the efficient markets hypothesis, which says (roughly) that there are no bargains to be had on the stock market because they’ve already been noticed and snapped up. Similarly, the efficient presents hypothesis says that all the most suitable gifts have already been purchased — typically by recipients who have decided to treat themselves.

I have already fallen foul of the efficient presents hypothesis this year; carefully selecting a pair of extra-warm socks in precisely the style and size my wife prefers, I was dismayed when a parcel containing a duplicate pair arrived at our house a few days later. She was one step ahead of me in picking her own presents. The efficient presents hypothesis is not always true, any more than the regular efficient markets hypothesis. It is nevertheless true often enough to take seriously.

Step two: adopt a passive gift-buyer strategy. Again, the parallel with investment should be clear. You can achieve excellent investment results simply by making regular payments into passive index tracker funds. This strategy is dull and unimaginative, leaving no room for flair or good judgment. Nevertheless it works — partly because good judgment is scarce and flair often counterproductive. Active managers are often unable to outperform the stock market by enough to justify their fees. Individual investors tend to trade too often, buy high and sell low. Regular passive investment may be boring but it avoids these traps.

The ultimate passive gift is cash. Just like tracker funds it is utterly unimaginative yet a surprisingly difficult benchmark to beat. Many active gift-buyers swear they can get more than £50-worth of joy out of a £50 present, but Prof Waldfogel has good evidence that most of them fall well short. Gift-givers, like stock pickers, tend to overrate their abilities. (At least gift-givers have an excuse: they receive no feedback. Nobody is going to tell you that they hate the present you bought for them, but if your stock portfolio crashes it is hard not to notice.)

Since giving cash is often socially unacceptable, there is another passive approach that works well: find a wish list, or just ask the recipient what they would like. Just as passive investment in index funds robs the stockpicking game of its daring and mystique, simply consulting a list seems robotic and joyless. But — as Francis Flynn of Stanford and Francesca Gino of Harvard have found — it is rarely perceived that way by recipients. While gift-givers hesitate to fall back on a wish list, recipients prefer items they have indicated that they actually want. They still think of the purchaser as perfectly thoughtful: after all, someone took the trouble to find out what they wanted.

Step three: give the gift of time and attention. With all the effort you’ve saved ordering gifts from wish lists or simply writing cheques, see friends and enjoy the rituals of Christmas. Fresh from her wishlist research, Prof Gino has been part of a team studying the way family rituals influence our experience of seasonal festivities. Whether the rituals are secular or sacred, they are correlated with liking your family, feeling more satisfied with life and paying closer attention to your experiences. Exactly what causes what is not clear, but the idea that a good Christmas tradition brings people together is a sensible one. Too often, we lack the time because we are spending countless hours running around shops buying things that nobody will ever tell you they hated, already owned or both.

One could do worse than the reformed Ebenezer Scrooge, who, Dickens tells us, “knew how to keep Christmas well, if any man alive possessed the knowledge”. On Christmas morning the only physical gift he gives is a prize turkey, having been assured on good ghostly authority that it is much needed. Other than that, he gives time and money, notably a pay rise for Bob Cratchit.

Money! That’s the Christmas spirit. God bless us, every one!

Written for and first published in the Financial Times on 15 December 2017.


Undercover Economist

Brexit, Trump, and how politics loses the capacity to shock

How often do I find myself utterly unsurprised by a news headline that should be shocking? Whether it’s Donald Trump declaring the media to be “the true Enemy of the People” after bombs had been sent to CNN offices, or the UK government planning to charter a flotilla to keep the nation supplied with broccoli and penicillin in a no-deal Brexit scenario, I merely shrug. Of course it’s appalling, I think, but it’s the logical continuation of what has been said and done already.

So let’s talk about the psychologist Stanley Milgram. Milgram is most notorious for his electric shock experiments in the 1960s. He recruited unsuspecting members of the public to participate in a “study of memory”. On showing up at the laboratory, they drew lots with another participant to see who would be “teacher” and who “learner”. Once the learner was strapped into an electric chair, the teacher retreated into another room to take control of a shock machine.

As the learner failed to answer questions correctly, the teacher was asked to administer steadily increasing electric shocks. Many proved willing to deliver possibly fatal shocks — despite having received a painful shock themselves as a demonstration, despite the learner having already complained of a heart condition, despite the screams of pain and the pleadings to be released from the other side of the wall, and despite the fact that the switches on the shock machine read “Danger: Severe Shock, XXX”.

Of course, there were no shocks — the man screaming from the nearby room was pretending. Yet the research exerts a horrifying fascination. In the best known study, 65 per cent of experimental subjects went all the way to 450 volts, applying shocks long after the man in the other room had fallen silent.

In the shadow of the Holocaust, which influenced Milgram’s research agenda, the obvious conclusion was that we will do terrible things if an authority figure requires them. But psychologists no longer draw that lesson from Milgram’s experiment.

Behind the Shock Machine, a history by Gina Perry, reminds us that Milgram’s experimental set-ups varied. In most of them, more than half of participants refused to continue. And Alex Haslam, a psychologist who has re-examined the studies, found that direct orders backfired. When people complied it was not because they were ordered, but because they were persuaded.

One often overlooked detail is that Milgram’s shock machine had 30 settings in increments of 15 volts. It’s hard to object to giving someone a tiny 15-volt shock. And if you’ve decided that 15 volts is fine, then why draw the line at 30 volts? Why draw the line at 45? Why draw the line anywhere?

At 150 volts, the “learner” yelled out in distress. Some people stopped at that point. But those who continued past 150 volts were overwhelmingly likely then to persist to the full 450 volts. They were in too deep. Refusing to administer a shock of 225 volts would be an implicit admission that they had been wrong to deliver 210.

Perhaps we need to turn to another great mid-century psychologist, Leon Festinger, for an explanation. Festinger is best known for the theory of “cognitive dissonance”, the discomfort of holding two contradictory notions — such as “I’m a decent person” and “I just hit that poor guy with 210 volts”.

Festinger demonstrated that we are able to summon up considerable reserves of wishful thinking and selective memory in order to restore consistency. The further people slid into the Milgram experiment, the harder they worked to convince themselves that it was all in a good cause, or that no real harm was being done, or both.

Seeing the experiment described in textbooks half a century later, it still seems perplexing. And perhaps future students of history will be baffled to see recent events concisely summarised. The Republicans, party of “family values”, confirmed a Supreme Court justice nominee after he was accused of sexual assault? Could they really not find someone better?

But these future students will not see the 15-volt increments that got us to this destination. Electing a president who has boasted of his own sexual depredations meant crossing the 150-volt line. Once you’ve found a way to laugh off the issue, it is hard to treat it with gravity thereafter.

Although Brexit is a very different business, it, too, will make little sense to future generations unless they see that we got there 15 volts at a time. It turns out the single market requires free movement of labour? Zap! We’ve discovered that the border with Ireland is a sensitive issue? Zap! The funding of the Leave.EU campaign is being investigated by the National Crime Agency? Zap!

The consolation is that democracies provide us with moments in which we can step back and think about the direction we are taking. The recent US midterm elections were one; there will be others. “I have a choice,” responded one Milgram subject, when ordered to increase the voltage. “I’m not going to go ahead with it.” That is worth remembering.



Written for and first published in the Financial Times on 16 November 2018.



Undercover Economist

Messy desks and benign neglect allow ideas to grow

My daughter is about to receive a new desk, so in order to clear space for it we were obliged to hack our way through the undergrowth of a 12-year-old’s bedroom. We found a half-assembled jigsaw puzzle from last Christmas; three separate sets of worn pyjamas scrunched up and stored in diverse locations; and empty sweet wrappers from Halloween. More alarmingly, there were empty sweet wrappers from Easter.

I am trying my best to treat with equanimity the discovery of a novel ecosystem under my roof. This is because I have come to believe that many spaces work a great deal better if subjected to a sustained period of benign neglect.

Consider the office cubicle. Some people pile their desks with everything from old newspapers to unwashed mugs; others are fastidiously tidy. (I fluctuate.) I’m not saying that people with messy desks are more productive, although there’s some evidence that they are; I’m just saying that if your colleague is a messy-desker then he or she should be allowed to get on with it.

Support for this position comes from a study conducted by two psychologists, Alex Haslam and Craig Knight. A few years ago they set up simple office spaces in which they asked experimental subjects to spend an hour doing administrative tasks. Messrs Haslam and Knight wanted to understand what made people productive and happy, and they tested four arrangements in a randomised trial. One was minimalist: chair, desk, bare walls. A second was softened with tasteful prints and some greenery. Workers were happier there, and got more done.

The kicker comes with the third and fourth arrangements. In each case, workers were invited to rearrange the pictures and pot-plants as they wished before settling down to work. But while some were then left to their labours, others were second-guessed by an experimenter who stepped in and found a pretext to rearrange everything.

This, unsurprisingly, drove people mad. “I wanted to hit you,” one participant later admitted. Empowering people to lay out their own space led to happier, more productive workers. Stripped of that freedom, everyone’s productivity fell and some felt quite ill.

The principle of benign neglect may well operate on a larger scale. Consider Building 20, one of the most celebrated structures at Massachusetts Institute of Technology. The product of wartime urgency, it was designed one afternoon in the spring of 1943, then hurriedly assembled out of plywood, breeze-blocks and asbestos. Fire regulations were waived in exchange for a promise that it would be pulled down within six months of the war’s end; in fact the building endured, dusty and uncomfortable, until 1998.

During that time, it played host not only to the radar researchers of Rad Lab (nine of whom won Nobel Prizes) but one of the first atomic clocks, one of the first particle accelerators, and one of the first anechoic chambers — possibly the one in which composer John Cage conceived 4′33″. Noam Chomsky revolutionised linguistics there. Harold Edgerton took his high-speed photographs of bullets hitting apples. The Bose Corporation emerged from Building 20; so did computing powerhouse DEC; so did the hacker movement, via the Tech Model Railroad Club.

Building 20 was a success because it was cheap, ugly and confusing. Researchers and departments with status would be placed in sparkling new buildings or grand old ones — places where people would protest if you nailed something to a door. In Building 20, all the grimy start-ups were thrown in to jostle each other, and they didn’t think twice about nailing something to a door — or, for that matter, about taking out a couple of floors, as Jerrold Zacharias did when installing the atomic clock.

As Stewart Brand drily remarked in How Buildings Learn, Building 20 worked because “nobody cares what you do in there”.

If benign neglect works for your colleague’s desk and it works for an entire building, what about a grander scale still? What about a city neighbourhood? Up to a point, yes: even cities benefit from being left alone in certain ways. Of course, potholes must be fixed, bins emptied and charging points for electric vehicles installed. But Jane Jacobs argued in The Death and Life of Great American Cities that cities desperately need old buildings, and not just glorious masterpieces but “a good lot of plain, ordinary, low-value old buildings, including some rundown old buildings”.

Her reasoning: cities are always in need of new experiments and economically marginal activities. “Neighbourhood bars . . . good bookshops . . . studios, galleries . . . hundreds of ordinary enterprises” all need somewhere cheap.

There’s nothing wrong with new buildings, argued Ms Jacobs, frustratingly for those who hold her up as a Nimby icon. But they should not be built everywhere all at once. Something has to be neglected and run down, or the city has no soil from which new buds can shoot.

There is always a balance to be struck. Every old building was once new. Every desk needs the occasional wipe. And my daughter is currently engaged in an extended programme of supervised room-tidying. Yet neglect is undervalued. Sometimes we need to learn when to leave well alone.


The ideas in this column are more fully expressed in my book “Messy: How To Be Creative and Resilient in a Tidy-Minded World”. It’s available in paperback both in the US and the UK – or through your local bookshop.


Written for and first published in the Financial Times on 9 November 2018.


Undercover Economist

The untold career value of a little bit of luck

Almost exactly 20 years ago, fresh out of graduate school, I started work at a smallish strategy consulting firm. It was a poor choice, both for them and for me; I am not cut out to be a management consultant. I was even allergic to my own suit.

When one of my fellow recruits learnt this she pointed out that, in this job, “you only need to do two things: talk shit and wear a suit, and you can’t do either of them”. (I like to think that I have since mastered at least one of those skills.)

In any case, I was miserable and useless. My employers generously suggested that I might want to resign, and that if I did they would happily keep paying me for a while. I followed their advice and found a much more conducive job, most of which I was able to perform without embarrassment, wearing blue jeans. I had been very lucky.

But it is only recently that I realised that some of my luck required being born at the right time. Had I been a year younger I’d have crashed out of my job, not in the spring of 1999 but of 2000, just as the dotcom bubble was bursting.

Pity the babies of 1987 and 1990, then, who left school or university around the time that Lehman Brothers collapsed in 2008. Sending around a CV in the middle of the greatest financial crisis since their grandparents were born cannot have been a whole lot of fun.

Of course the financial crisis made life difficult for a lot of people, not just graduates. It is now over, although the scars remain. So here’s a question: are those people who were unfortunate enough to have been looking for their first job as a recession struck still disadvantaged after the recession itself has passed?

In 2006, an economist called Paul Oyer posed that very question about the young academics he was teaching. Let’s say that two equally able young economists, Alexandra and Betsy, are looking for work. Alexandra arrives on the job market during the good times, when departmental budgets are fat, and is hired by the 30th best university in the country. Betsy is a couple of years younger, tries to find a job during a lean year, and can only get a job at the 60th best university.

The question that Professor Oyer posed is: will this matter in the long run? Will the equally talented Betsy find a better job, given a few years? Or will she be trying to catch up with Alexandra for decades?

Prof Oyer assembled data describing the PhD students graduating from seven excellent graduate schools, and he concluded that Betsy would remain at a disadvantage for a long time. Students who graduate in good years are more likely to find good jobs — obviously — but there is also a strong correlation between getting a good job immediately and still having a good job four, eight or even 12 years later.

This makes some sense; if an economist applies for a mid-career job having already been an assistant professor at (say) the Massachusetts Institute of Technology, employers are unlikely to adjust for whether she secured that assistant professorship in an impossibly difficult year or at a somewhat easier time. The halo will shine, regardless. And regardless of whether it was a good year or a bad one, the young economist will have picked up skills from working at MIT, teaching MIT students and rubbing shoulders with Nobel Prize winners. If you want to work in a top research job, it helps a lot to start out in a top research job.

Prof Oyer conducted a similar analysis for MBA graduates looking for high-paying jobs in finance and consulting in the late 1980s. The results were similar. Could this be a problem only for the young elite — students on such a precision-engineered career path that their fate is sensitive to accidents of timing? Or might it be even more serious for those further down the educational pecking order?

A new study from economists Hannes Schwandt and Till Marco von Wachter suggests that the latter is true. Examining young people who entered the US labour market between 1976 and 2015, they find that every group suffers lasting harm if it has to find its first job during a recession, but disadvantaged groups suffer more and for longer. High school dropouts fare particularly badly, both immediately and several years later, as do those from a racial minority. People with a college education suffer less.

Overall, for a typical recession, the unlucky cohorts can expect to lose the equivalent of seven months’ pay over the course of a decade, relative to their more fortunate peers, who are only a couple of years older or younger. That’s no trivial sum. People who can’t find the job of their dreams end up settling for something else, building up skills and contacts in a field that was never their first choice.

Few people, by now, need reminding that the financial crisis has had a lasting impact and that, in many ways (though not all), it is the younger generation that has been left to count the cost. But this research points towards other lessons. It’s easy to overlook luck; but good or bad, a single piece of luck can last in ways that we find it hard even to notice.


Written for and first published in the Financial Times on 2 November 2018.


