Tim Harford The Undercover Economist

Other Writing

Articles from the New York Times, Forbes, Wired and beyond – any piece that isn’t one of my columns.

Other Writing

Fifty Things That Made The Modern Economy – UK Paperback

I’m delighted to report that “Fifty Things That Made The Modern Economy” is out in paperback in the UK this week. (The US edition – Fifty Inventions That Shaped the Modern Economy – is out at the end of August. Sorry you have to wait…)

I had such fun writing this book and people seem to be enjoying reading it, which is great. One of the joys of the book was the freedom to leap across time and topic, picking up under-rated technologies and exploring the unexpected consequences of the things we invent.

“Packed with fascinating detail… Harford has an engagingly wry style and his book is a superb introduction to some of the most vital products of human ingenuity”, said The Sunday Times.

“It’s great fun to dip into… Harford succeeds in teaching… without resorting to technical terminology and intimidating charts and tables. Such a feat requires a kind of inventiveness in itself.” That was The Wall Street Journal.

You can read more reviews and get more information about where to buy the book here.

30th of July, 2018
Highlights

“Basic income is about the freedom to say no” – Rutger Bregman goes bouldering

“You have the instinct for it,” says Rutger Bregman, as I haul myself up an indoor climbing wall, nestled under the arches at Vauxhall station in London. “Shit, this is some talent!” he bellows, as I reach the top. I am inwardly delighted, even though I realise the praise is absurd: I have climbed about four metres and it’s a beginner’s route.

Bregman has suggested that we go bouldering together. Bouldering is a variety of rock climbing, done over short distances without safety ropes. Coming from Bregman, it seems a curious choice. The young Dutch historian and author is most famous for advocating a universal basic income — a regular cash grant to every single person, given unconditionally, to support them and provide a minimum standard of living, no matter what might go wrong.

His book, Utopia for Realists (UK) (US), has been a surprise bestseller, finding an audience eager for radical yet plausible policy ideas. Yet this celebrated advocate of unconditional handouts has chosen a sport that is all about self-reliance, and the ultimate departure from the principle of the safety net.

“There is a safety net — look!” says Bregman, pointing at the crash mats. I am not totally convinced. It doesn’t take long before I fall off — a combination of lack of skill and lack of fitness. As I peel myself off the mat, I realise the skin of one elbow has not remained with me.

Bregman’s contention is that a basic income would be the logical and perfectly affordable next step for a human race that has already taken huge leaps forward since before the industrial revolution, when, he writes, “nearly everyone, everywhere was still poor, hungry, dirty, afraid, stupid, sick and ugly”.

Bregman himself looks the picture of health, possibly because, at 29, he’s 15 years younger than me, and possibly because he’s been practising. He climbs twice a week; his T-shirt says Sterk, the name of his local bouldering hall in Utrecht. The word means “strong” in Dutch. My limited experience of rock climbing with my daughters has taught me that the legs take the strain. Bouldering, however, requires more upper-body strength.

“It’s more explosive,” I am told. And within 15 minutes, I’m done: the tendons below my wrist have given up and I am close to doing the same. The first three routes were exhilarating but without a rope, even the short climbs under the arches of VauxWall are starting to feel vertiginous. I’m losing my nerve as well as my strength. Bregman, on the other hand, is just getting started.

“How long is a typical session?” I ask. “Fifteen minutes or an hour or . . . I can’t imagine anyone keeping this up for an hour.”

“Two, two-and-a-half hours, if I have the time. Which I usually don’t,” he says. “If you warm up slowly, not like today, then you are at your peak after 45 minutes, and then you can keep that up for another 45 minutes.”

I spend much of the next hour watching Bregman solve one route after another. Sometimes he is dangling loosely off an overhang, as though resting in an invisible hammock. Sometimes he is moving laterally, his legs as high as his arms in a spiderlike scurry across the wall. Once, he hangs vertically as he works his way from left to right across a whimsical hold: a huge pair of pouting lips in one corner, just below the roof. He took up the sport three years ago. “I didn’t like to exercise at all. It’s so soul-destroying. But this is different.”

Bregman sees soul-destroying activity in much of modern life. Too many people, he says, are doing jobs they dislike or see as pointless, because they have no alternative. A basic income would liberate people: perhaps a minimum of €1,000 a month, given unconditionally as a cash grant, or through the tax system as a negative income tax.

Bregman has branded a basic income as “venture capital for the people”. A good line, I congratulate him. But what does it mean?

“OK, so basic income is all about the freedom to say no. That’s a privilege for the rich right now. With a basic income, you can say no to a job you don’t want to do. You can say no to a city in which you no longer want to live. You can say no to an employer who harasses you at work . . . that’s what real freedom looks like.”

Part of the impetus for a basic income has come from the sense that the robots are coming for our jobs — maybe not today, maybe not tomorrow, but soon. The venture capital firm Y Combinator is funding research into basic income, which seems to be a popular idea in Silicon Valley. But Bregman has no patience for the idea that technological change underpins the case for basic income.

“This is not about AI,” he insists. “You go back to the 1960s, and all the economists, all the philosophers, all the sociologists said we’re going to be working less and less and less and less and boredom is going to be the great challenge of the future. Didn’t happen . . . mostly because we have this ideological obsession with creating new jobs.”

Advocates of basic income have included two rather different Nobel laureates: the civil rights activist Martin Luther King Jr and the free-market evangelist Milton Friedman. The idea draws support from leftwingers who see an opportunity to redistribute and to give workers more bargaining power, and rightwingers who see an opportunity to dismantle paternalistic bureaucracies and empower ordinary people to make their own choices.

Bregman’s own sympathies seem to lie more with the left. At one point I tease him about the fact that he is in London on Valentine’s Day while his wife Maartje (a photographer and collaborator) is not. His response is spat out with a vehemence that might have been for comic effect, and might not: “You know that Valentine’s Day is just a capitalist scam to make you buy stuff you don’t need, to impress people you don’t like, right?”

But like Friedman, Bregman is clearly no fan of paternalistic bureaucracies. “Nowhere you’ll find as much support for something like basic income as [among] people who work for unemployment agencies,” he says. “In Holland I did a couple of lectures for those groups and they just give me a standing ovation when you say that we should abolish their jobs.”

It is the unconditional nature of the cash transfer that particularly appeals to him. With the transfer of money, no strings attached, there is a transfer of dignity, of bargaining power, and of responsibility. People have to make their own choices.

Again, I venture a connection between the basic income idea and bouldering: it’s a solo sport in which individuals need to find their own path, judging risks for themselves?

“If I would make this sport political, what I like about it is that it is competitive, but with yourself. So you’re not competing with anyone else, you’re just trying to do better yourself. And it’s a puzzle, every time it’s different. It’s a very creative sport, I guess.”

Utopia for Realists was itself a slowly assembled puzzle. The early drafts were articles in De Correspondent, an online crowdfunded news website founded by a Dutch pop-philosopher and columnist, Rob Wijnberg. “It’s an anarchist-idealist collective of journalists who don’t follow the news,” Bregman explains.

This may explain why Utopia for Realists is such a curiously enjoyable read. The title sums up Bregman’s belief that evidence-based pragmatism should not rule out provocative, ambitious ideas. The book is lively, well researched and full of unlikely pieces of history, from the Speenhamland system of poor relief, developed in England in 1795, to US President Richard Nixon’s flirtation with the idea of a basic income in 1969. (Bregman studied history rather than economics or politics.) It is also perfectly orthogonal to anything one might read in a newspaper. The book was published in Dutch by De Correspondent, built a following slowly, then was self-published in English.

“I was my own PR employee at that point. I was emailing everyone — no interviews, no reviews. Nothing.” Yet when Bregman emailed me out of the blue with the English translation and a request for my support, I was sufficiently impressed to endorse the book. Steven Pinker also gave it a glowing cover quote. And as Bregman and his colleagues were pondering giving up, the project suddenly took off. While not quite Fifty Shades of Grey, in a short space of time Utopia for Realists went from brave failed experiment to international bestseller, due to be published in 28 languages.

“Ideas always start on the fringe and then they move towards the centre,” he says. “Then I was invited to come to Davos this year. Like, yeah, that’s pretty much it, right? My first lectures about basic income were for anarchists with long hair, and smelly.”

Did he go to Davos? “No, I had to go to a book fair in Colombia.” He did, however, give a talk at TED last year, and seems aware of the irony of advocating the dismantling of an entire class of do-gooders.

“You’re talking for an audience of 1,500 people, many of them involved in kinds of charities. The CEO of Toms, for example, was there.” Toms donates a pair of shoes to a poor family for every pair purchased; Bregman isn’t impressed. “Buy one shoe, give one shoe. That is just a horrible, horrible idea.”

He got a huge round of applause when he proposed scrapping aid bureaucracies and replacing them with direct cash transfers. The rapturous reception struck him as odd. “I was saying we should hand over the salaries of all these paternalistic bureaucrats and give them to the poor, who are the real experts on their own lives. And they were all clapping and laughing, and I was thinking on stage, ‘But I’m talking about you! It’s you!’”

It’s a good talk, I tell him. “I like to prepare for these things. I knew it off by heart three months before I went on stage.”

I press him on the details of the talk. He skips a little too lightly between the idea of replacing international development aid with direct cash transfers to poor people, and the idea of overhauling modern western welfare states to place unconditional cash payments at their heart. The two ideas are cousins, not identical twins, I suggest. Adding a dollar a day, no strings attached, to a non-existent social safety net might be transformative in rural India or Africa. A resident of London is going to want a little more than that before she willingly gives up her housing benefit. Bregman agrees: his focus now is on welfare reform.

Another question mark is over the evidence base for a basic income. Bregman mentions “dozens of experiments” but, arguably, there has never been a completely satisfactory randomised trial of a long-term basic income. (A literature review by the charity GiveDirectly counted six shorter-term randomised trials; policymakers should conduct many more.)

One promising episode — a four-year trial in Manitoba, Canada, in the 1970s — received little attention. When the economist Evelyn Forget managed to get hold of the mothballed archives in 2009, they were on the verge of being discarded. There is a new study in Kenya, funded by GiveDirectly. With 5,000 recipients getting a basic income for 12 years, that trial shows real ambition — but the income in question is just over $20 a month. This is unlikely to tell us much about reforming a European welfare state. Nor is a much-hyped but rather small trial in Finland, which will last just two years and is focused only on those already receiving unemployment benefits.

Other trials have been excitedly announced but have yet to begin, let alone conclude. We are still waiting for a study large and patient enough to tell us much about a basic income in a developed economy. So what are these “dozens of experiments”?

Bregman says that the experiments he has in mind are less evaluating a full basic income scheme, and more exploring the impact of cash transfers in development aid. That is indeed a well-studied area, although not quite the same thing. Those experiments provide encouragement for proponents of a basic income: households tend to put the money to good use, and reap long-term benefits.

By now, we’re talking over a coffee, my enfeebled hands thankfully strong enough to grip a mug. My final question is about one of his other ideas: dramatically liberalising immigration rules.

“Every utopian system is obviously grounded in the injustices of the present,” he says. “What’s the biggest injustice in the world right now? It’s pretty easy to see. It’s borders: apartheid on a global scale.”

But while basic income seems to be having a day in the sun, an end to passport control is hardly in tune with the Trumpian zeitgeist, is it? “Well that’s almost my problem with basic income right now. I get questions during lectures, people say, ‘Is this really a radical idea?’ So I’m like, I should move on. Because utopias are meant to make people angry.”

Fair enough: as in bouldering, so in utopian politics. Once you’ve solved one puzzle, it is time to move on to a new challenge.

Written for and first published in the Financial Times on 9 March 2018.

My book “Messy: How To Be Creative and Resilient in a Tidy-Minded World” is now available in paperback both in the US and the UK – or through your local bookshop.


28th of March, 2018
Highlights

Your handy postcard-sized guide to statistics

Statistics on a postcard

 

“The best financial advice for most people would fit on an index card.” That’s the gist of an offhand comment in 2013 by Harold Pollack, a professor at the University of Chicago. Pollack’s bluff was duly called, and he rushed off to find an index card and scribble some bullet points — with respectable results.

When I heard about Pollack’s notion — he elaborated upon it in a 2016 book — I asked myself: would this work for statistics, too? There are some obvious parallels. In each case, common sense goes a surprisingly long way; in each case, dizzying numbers and impenetrable jargon loom; in each case, there are stubborn technical details that matter; and, in each case, there are people with a sharp incentive to lead us astray.

The case for everyday practical numeracy has never been more urgent. Statistical claims fill our newspapers and social media feeds, unfiltered by expert judgment and often designed as a political weapon. We do not necessarily trust the experts — or more precisely, we may have our own distinctive view of who counts as an expert and who does not.  Nor are we passive consumers of statistical propaganda; we are the medium through which the propaganda spreads. We are arbiters of what others will see: what we retweet, like or share online determines whether a claim goes viral or vanishes. If we fall for lies, we become unwittingly complicit in deceiving others.

On the bright side, we have more tools than ever to help weigh up what we see before we share it — if we are able and willing to use them. In the hope that someone might use it, I set out to write my own postcard-sized citizens’ guide to statistics. Here’s what I learnt.

 

Professor Pollack’s index card includes advice such as “Save 20 per cent of your money” and “Pay your credit card in full every month”. The author Michael Pollan offers dietary advice in even pithier form: “Eat Food. Not Too Much. Mostly Plants.” Quite so, but I still want a cheeseburger.  However good the advice Pollack and Pollan offer, it’s not always easy to take. The problem is not necessarily ignorance. Few people think that Coca-Cola is a healthy drink, or believe that credit cards let you borrow cheaply. Yet many of us fall into some form of temptation or other. That is partly because slick marketers are focused on selling us high-fructose corn syrup and easy credit. And it is partly because we are human beings with human frailties.

With this in mind, my statistical postcard begins with advice about emotion rather than logic. When you encounter a new statistical claim, observe your feelings. Yes, it sounds like a line from Star Wars, but we rarely believe anything because we’re compelled to do so by pure deduction or irrefutable evidence. We have feelings about many of the claims we might read — anything from “inequality is rising” to “chocolate prevents dementia”. If we don’t notice and pay attention to those feelings, we’re off to a shaky start. What sort of feelings? Defensiveness. Triumphalism. Righteous anger. Evangelical fervour. Or, when it comes to chocolate and dementia, relief.

It’s fine to have an emotional response to a chart or shocking statistic — but we should not ignore that emotion, or be led astray by it. There are certain claims that we rush to tell the world, others that we use to rally like-minded people, still others we refuse to believe. Our belief or disbelief in these claims is part of who we feel we are.

“We all process information consistent with our tribe,” says Dan Kahan, professor of law and psychology at Yale University.

In 2005, Charles Taber and Milton Lodge, political scientists at Stony Brook University, New York, conducted experiments in which subjects were invited to study arguments around hot political issues. Subjects showed a clear confirmation bias: they sought out testimony from like-minded organisations. For example, subjects who opposed gun control would tend to start by reading the views of the National Rifle Association.

Subjects also showed a disconfirmation bias: when the researchers presented them with certain arguments and invited comment, the subjects would quickly accept arguments with which they agreed, but devote considerable effort to disparage opposing arguments.

Expertise is no defence against this emotional reaction; in fact, Taber and Lodge found that better-informed experimental subjects showed stronger biases. The more they knew, the more cognitive weapons they could aim at their opponents.

“So convenient a thing it is to be a reasonable creature,” commented Benjamin Franklin, “since it enables one to find or make a reason for everything one has a mind to do.”

This is why it’s important to face up to our feelings before we even begin to process a statistical claim. If we don’t at least acknowledge that we may be bringing some emotional baggage along with us, we have little chance of discerning what’s true. As the physicist Richard Feynman once commented, “You must not fool yourself — and you are the easiest person to fool.”

 

The second crucial piece of advice is to understand the claim. That seems obvious. But all too often we leap to disbelieve or believe (and repeat) a claim without pausing to ask whether we really understand what the claim is. To quote Douglas Adams’s philosophical supercomputer, Deep Thought, “Once you know what the question actually is, you’ll know what the answer means.”

For example, take the widely accepted claim that “inequality is rising”. It seems uncontroversial, and urgent. But what does it mean? Racial inequality? Gender inequality? Inequality of opportunity, of consumption, of educational attainment, of wealth? Within countries or across the globe? Even given a narrower claim, “inequality of income before taxes is rising” (and you should be asking yourself, since when?), there are several different ways to measure this.

One approach is to compare the income of people at the 90th percentile and the 10th percentile, but that tells us nothing about the super-rich, nor the ordinary people in the middle. An alternative is to examine the income share of the top 1 per cent — but this approach has the opposite weakness, telling us nothing about how the poorest fare relative to the majority.  There is no single right answer — nor should we assume that all the measures tell a similar story. In fact, there are many true statements that one can make about inequality. It may be worth figuring out which one is being made before retweeting it.
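
For the curious, here is a minimal sketch of the difference — in Python, with entirely made-up incomes — showing that the two measures can move independently of one another:

```python
import numpy as np

def p90_p10_ratio(incomes):
    """Ratio of the 90th-percentile income to the 10th-percentile income."""
    return np.percentile(incomes, 90) / np.percentile(incomes, 10)

def top_1pc_share(incomes):
    """Share of total income received by the top 1 per cent."""
    cutoff = np.percentile(incomes, 99)
    return incomes[incomes >= cutoff].sum() / incomes.sum()

rng = np.random.default_rng(0)

# A hypothetical income distribution: a log-normal bulk, with 100 of
# the 100,000 "households" turned into very high earners.
incomes = rng.lognormal(mean=10, sigma=0.5, size=100_000)
incomes[:100] *= 50

print(f"90/10 ratio:  {p90_p10_ratio(incomes):.2f}")
print(f"Top 1% share: {top_1pc_share(incomes):.1%}")

# Doubling incomes at the very top leaves the 90/10 ratio untouched
# but raises the top-1% share: the two measures answer different questions.
incomes[:100] *= 2
print(f"90/10 ratio:  {p90_p10_ratio(incomes):.2f}")
print(f"Top 1% share: {top_1pc_share(incomes):.1%}")
```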

Perhaps it is not surprising that a concept such as inequality turns out to have hidden depths. But the same holds true of more tangible subjects, such as “a nurse”. Are midwives nurses? Health visitors? Should two nurses working half-time count as one nurse? Claims over the staffing of the UK’s National Health Service have turned on such details.

All this can seem like pedantry — or worse, a cynical attempt to muddy the waters and suggest that you can prove anything with statistics. But there is little point in trying to evaluate whether a claim is true if one is unclear what the claim even means.

Imagine a study showing that kids who play violent video games are more likely to be violent in reality. Rebecca Goldin, a mathematician and director of the statistical literacy project STATS, points out that we should ask questions about concepts such as “play”, “violent video games” and “violent in reality”. Is Space Invaders a violent game? It involves shooting things, after all. And are we measuring a response to a questionnaire after 20 minutes’ play in a laboratory, or murderous tendencies in people who play 30 hours a week?

“Many studies won’t measure violence,” says Goldin. “They’ll measure something else such as aggressive behaviour.” Just like “inequality” or “nurse”, these seemingly common sense words hide a lot of wiggle room.

Two particular obstacles to our understanding are worth exploring in a little more detail. One is the question of causation. “Taller children have a higher reading age,” goes the headline. This may summarise the results of a careful study about nutrition and cognition. Or it may simply reflect the obvious point that eight-year-olds read better than four-year-olds — and are taller. Causation is philosophically and technically a knotty business but, for the casual consumer of statistics, the question is not so complicated: just ask whether a causal claim is being made, and whether it might be justified.

Returning to this study about violence and video games, we should ask: is this a causal relationship, tested in experimental conditions? Or is this a broad correlation, perhaps because the kind of thing that leads kids to violence also leads kids to violent video games? Without clarity on this point, we don’t really have anything but an empty headline.

We should never forget, either, that all statistics are a summary of a more complicated truth. For example, what’s happening to wages? With tens of millions of wage packets being paid every month, we can only ever summarise — but which summary? The average wage can be skewed by a small number of fat cats. The median wage tells us about the centre of the distribution but ignores everything else. Or we might look at the median increase in wages, which isn’t the same thing as the increase in the median wage — not at all. In a situation where the lowest and highest wages are increasing while the middle sags, it’s quite possible for the median pay rise to be healthy while median pay falls.
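
The distinction is easy to see with a toy example — mine, with invented wages, not anyone's official data:

```python
import statistics

# Hypothetical wages for the same five workers in two consecutive years:
# the lowest- and highest-paid get raises, while the middle sags.
last_year = [15_000, 22_000, 30_000, 38_000, 60_000]
this_year = [17_000, 24_000, 27_000, 36_000, 66_000]

rises = [new - old for old, new in zip(last_year, this_year)]

print("Median pay rise:", statistics.median(rises))  # +2000: looks healthy
print("Change in median pay:",
      statistics.median(this_year) - statistics.median(last_year))  # -3000: a fall
```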

Sir Andrew Dilnot, former chair of the UK Statistics Authority, warns that an average can never convey the whole of a complex story. “It’s like trying to see what’s in a room by peering through the keyhole,” he tells me.

In short, “you need to ask yourself what’s being left out,” says Mona Chalabi, data editor for The Guardian US. That applies to the obvious tricks, such as a vertical axis that’s been truncated to make small changes look big. But it also applies to the less obvious stuff — for example, why does a graph comparing the wages of African-Americans with those of white people not also include data on Hispanic or Asian-Americans? There is no shame in leaving something out. No chart, table or tweet can contain everything. But what is missing can matter.

 

Channel the spirit of film noir: get the backstory. Of all the statistical claims in the world, this particular stat fatale appeared in your newspaper or social media feed, dressed to impress. Why? Where did it come from? Why are you seeing it?

Sometimes the answer is little short of a conspiracy: a PR company wanted to sell ice cream, so paid a penny-ante academic to put together the “equation for the perfect summer afternoon”, pushed out a press release on a quiet news day, and won attention in a media environment hungry for clicks. Or a political donor slung a couple of million dollars at an ideologically sympathetic think-tank in the hope of manufacturing some talking points.

Just as often, the answer is innocent but unedifying: publication bias. A study confirming what we already knew — smoking causes cancer — is unlikely to make news. But a study with a surprising result — maybe smoking doesn’t cause cancer after all — is worth a headline. The new study may have been rigorously conducted but is probably wrong: one must weigh it up against decades of contrary evidence.

Publication bias is a big problem in academia. The surprising results get published, the follow-up studies finding no effect tend to appear in lesser journals if they appear at all. It is an even bigger problem in the media — and perhaps bigger yet in social media. Increasingly, we see a statistical claim because people like us thought it was worth a Like on Facebook.

David Spiegelhalter, president of the Royal Statistical Society, proposes what he calls the “Groucho principle”. Groucho Marx famously resigned from a club — if they’d accept him as a member, he reasoned, it couldn’t be much of a club. Spiegelhalter feels the same about many statistical claims that reach the headlines or the social media feed. He explains, “If it’s surprising or counter-intuitive enough to have been drawn to my attention, it is probably wrong.”

 

OK. You’ve noted your own emotions, checked the backstory and understood the claim being made. Now you need to put things in perspective. A few months ago, a horrified citizen asked me on Twitter whether it could be true that in the UK, seven million disposable coffee cups were thrown away every day.  I didn’t have an answer. (A quick internet search reveals countless repetitions of the claim, but no obvious source.) But I did have an alternative question: is that a big number? The population of the UK is 65 million. If one person in 10 used a disposable cup each day, that would do the job.
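
That sanity check takes one line of arithmetic — here it is as a Python sketch, using the rough figures above:

```python
cups_per_day = 7_000_000    # the claim
uk_population = 65_000_000  # roughly

print(f"{cups_per_day / uk_population:.2f} disposable cups per person per day")
# ~0.11 -- about one person in ten using a single cup each day,
# which makes the headline number sound entirely plausible.
```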

Many numbers mean little until we can compare them with a more familiar quantity. It is much more informative to know how many coffee cups a typical person discards than to know how many are thrown away by an entire country. And more useful still to know whether the cups are recycled (usually not, alas) or what proportion of the country’s waste stream is disposable coffee cups (not much, is my guess, but I may be wrong).

So we should ask: how big is the number compared with other things I might intuitively understand? How big is it compared with last year, or five years ago, or 30? It’s worth a look at the historical trend, if the data are available.

Finally, beware “statistical significance”. There are various technical objections to the term, some of which are important. But the simplest point to appreciate is that a number can be “statistically significant” while being of no practical importance. Particularly in the age of big data, it’s possible for an effect to clear this technical hurdle of statistical significance while being tiny.  One study was able to demonstrate that unborn children exposed to a heatwave while in the womb went on to earn less as adults. The finding was statistically significant. But the impact was trivial: $30 in lost income per year. Just because a finding is statistically robust does not mean it matters; the word “significance” obscures that.
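
A simulation makes the point concrete. The sketch below is my own illustration (with invented incomes, not the heatwave study's data): with a million people in each group, a $30 gap in average earnings will almost always clear the conventional significance threshold while remaining economically trivial.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1_000_000  # big-data-sized samples

# Hypothetical annual incomes; the "exposed" group earns $30 less on average.
control = rng.normal(loc=30_000, scale=5_000, size=n)
exposed = rng.normal(loc=29_970, scale=5_000, size=n)

t_stat, p_value = stats.ttest_ind(control, exposed)

print(f"p-value: {p_value:.2g}")  # typically far below 0.05: "significant"
print(f"estimated gap: ${control.mean() - exposed.mean():,.0f} a year")  # ~$30: trivial
```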

 

In an age of computer-generated images of data clouds, some of the most charming data visualisations are hand-drawn doodles by the likes of Mona Chalabi and the cartoonist Randall Munroe. But there is more to these pictures than charm: Chalabi uses the wobble of her pen to remind us that most statistics have a margin of error. A computer plot can confer the illusion of precision on what may be a highly uncertain situation.

“It is better to be vaguely right than exactly wrong,” wrote Carveth Read in Logic (1898), and excessive precision can lead people astray. On the eve of the US presidential election in 2016, the political forecasting website FiveThirtyEight gave Donald Trump a 28.6 per cent chance of winning. In some ways that is impressive, because other forecasting models gave Trump barely any chance at all. But how could anyone justify the decimal point on such a forecast? No wonder many people missed the basic message, which was that Trump had a decent shot. “One in four” would have been a much more intuitive guide to the vagaries of forecasting.

Exaggerated precision has another cost: it makes numbers needlessly cumbersome to remember and to handle. So, embrace imprecision. The budget of the NHS in the UK is about £10bn a month. The national income of the United States is about $20tn a year. One can be much more precise about these things, but carrying the approximate numbers around in my head lets me judge pretty quickly when — say — a £50m spending boost or a $20bn tax cut is noteworthy, or a rounding error.

My favourite rule of thumb is that since there are 65 million people in the UK and people tend to live a bit longer than 65, the size of a typical cohort — everyone retiring or leaving school in a given year — will be nearly a million people. Yes, it’s a rough estimate, but vaguely right is often good enough.
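
These yardsticks amount to nothing more than mental arithmetic, but a few lines of Python (using the approximate figures above) show how they work in practice:

```python
# Rough yardsticks, good enough for judging a headline.
nhs_budget_per_month = 10e9  # about GBP 10bn
uk_population = 65e6
typical_lifespan = 70        # people tend to live a bit longer than 65

# Is a (hypothetical) GBP 50m spending boost big news?
boost = 50e6
print(f"Boost vs one month of NHS spending: {boost / nhs_budget_per_month:.1%}")  # 0.5%

# Size of a typical cohort retiring or leaving school in a given year.
print(f"Cohort size: about {uk_population / typical_lifespan / 1e6:.1f} million")  # ~0.9m
```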

 

Be curious. Curiosity is bad for cats, but good for stats. Curiosity is a cardinal virtue because it encourages us to work a little harder to understand what we are being told, and to enjoy the surprises along the way.

This is partly because almost any statistical statement raises questions: who claims this? Why? What does this number mean? What’s missing? We have to be willing — in the words of UK statistical regulator Ed Humpherson — to “go another click”. If a statistic is worth sharing, isn’t it worth understanding first? The digital age is full of informational snares — but it also makes it easier to look a little deeper before our minds snap shut on an answer.

While curiosity gives us the motivation to ask another question or go another click, it gives us something else, too: a willingness to change our minds. For many of the statistical claims that matter, we have already reached a conclusion. We already know what our tribe of right-thinking people believe about Brexit, gun control, vaccinations, climate change, inequality or nationalisation — and so it is natural to interpret any statistical claim as either a banner to wave, or a threat to avoid.

Curiosity can put us into a better frame of mind to engage with statistical surprises. If we treat them as mysteries to be resolved, we are more likely to spot statistical foul play, but we are also more open-minded when faced with rigorous new evidence.

In research with Asheley Landrum, Katie Carpenter, Laura Helft and Kathleen Hall Jamieson, Dan Kahan has discovered that people who are intrinsically curious about science — they exist across the political spectrum — tend to be less polarised in their response to questions about politically sensitive topics. We need to treat surprises as a mystery rather than a threat.  Isaac Asimov is thought to have said, “The most exciting phrase in science isn’t ‘Eureka!’, but ‘That’s funny…’” The quip points to an important truth: if we treat the open question as more interesting than the neat answer, we’re on the road to becoming wiser.

 

In the end, my postcard has 50-ish words and six commandments. Simple enough, I hope, for someone who is willing to make an honest effort to evaluate — even briefly — the statistical claims that appear in front of them. That willingness, I fear, is what is most in question.

“Hey, Bill, Bill, am I gonna check every statistic?” said Donald Trump, then presidential candidate, when challenged by Bill O’Reilly about a grotesque lie that he had retweeted about African-Americans and homicides. And Trump had a point — sort of. He should, of course, have got someone to check a statistic before lending his megaphone to a false and racist claim. We all know by now that he simply does not care.

But Trump’s excuse will have struck a chord with many, even those who are aghast at his contempt for accuracy (and much else). He recognised that we are all human. We don’t check everything; we can’t. Even if we had all the technical expertise in the world, there is no way that we would have the time.

My aim is more modest. I want to encourage us all to make the effort a little more often: to be open-minded rather than defensive; to ask simple questions about what things mean, where they come from and whether they would matter if they were true. And, above all, to show enough curiosity about the world to want to know the answers to some of these questions — not to win arguments, but because the world is a fascinating place.
Written for and first published in the Financial Times on 8 February 2018.

My recent book is “Fifty Inventions That Shaped The Modern Economy”. Grab yourself a copy in the US or in the UK (slightly different title) or through your local bookshop.


8th of March, 2018
Other Writing

Review of The Tyranny of Metrics by Jerry Muller

Jerry Z Muller’s latest book is 220 pages long, not including the front matter. The average chapter is 10.18 pages long and contains 17.76 endnotes. There are four cover endorsements and the book weighs 421 grammes. These numbers tell us nothing, of course. If you want to understand the strengths and weaknesses of The Tyranny of Metrics (UK) (US) you will need to read it — or trust the opinion of someone who has.

Professor Muller’s argument is that we keep forgetting this obvious point. Rather than rely on the informed judgment of people familiar with the situation, we gather meaningless numbers at great cost. We then use them to guide our actions, predictably causing unintended damage.

A famous example is the obsession, during the Vietnam war, with the “body count” metric embraced by US defence secretary Robert McNamara. The more of the enemy you kill, reasoned McNamara, the closer you are to winning. This was always a dubious idea, but the body count quickly became an informal metric for ranking units and handing out promotions, and was therefore often exaggerated. Counting bodies became a risky military objective in itself.

This episode symbolises the mindless, fruitless drive to count things. But it also shows us why metrics are so often used: McNamara was trying to understand and control a distant situation using the skills of a generalist, not a general. Muller shows that metrics are often used as a substitute for relevant experience, by managers with generic rather than specific expertise.

Muller does not claim that metrics are always useless, but that we expect too much from them as a tool of management. For example, if a group of doctors collect and analyse data on clinical outcomes, they are likely to learn something together. If bonuses and promotions are tied to the numbers, the exercise will teach nobody anything and may end up killing patients. Several studies have found evidence of cardiac surgeons refusing to operate on the sickest patients for fear of lowering their reported success rates.

It’s easy to sympathise with this argument, and I do. (I made some similar points in a chapter of my book Messy.) The Tyranny of Metrics does us a service in briskly pulling together parallel arguments from economics, management science, philosophy and psychology along with examples from education, policing, medicine, business and the military. It makes the case for professional autonomy: that metrics should be tools in the hands of teachers, doctors and soldiers rather than tools in the hands of those who would oversee them.

In an excellent final chapter, Muller summarises his argument thus: “measurement is not an alternative to judgement: measurement demands judgement: judgement about whether to measure, what to measure, how to evaluate the significance of what’s been measured, whether rewards and penalties will be attached to the results, and to whom to make the measurements available”.

This is a strong argument, but there are gaps in it. The book does not engage seriously enough with the possibility that the advantages of metric-driven accountability might outweigh the undoubted downsides. Tellingly, Muller complains of a university ratings metric that rewards high graduation rates, access for disadvantaged students, and low costs. He says these requirements are “mutually exclusive”, but they are not. They are in tension with each other, but a college that achieved all three goals would be a triumph rather than a logical absurdity.

The risk of trusting the professionals is that a bad teacher or police officer might coast undetected; or that a coterie of insiders might promote their own protégés, excluding women or ethnic minorities. Data-gathering efforts promise to spot prejudice, incompetence, back-scratching and worse. Perhaps they are doomed to fail, but in a world where insiders have covered up for each other far too often, we should not dismiss them too quickly.

Nor does this book reckon with evidence that mechanical statistical predictions often beat the subjective judgment of experts. This was demonstrated by the psychologist Paul Meehl in his 1954 book Clinical versus Statistical Prediction. Subsequent research has supported his claim, while campaigners for evidence-based medicine such as Archie Cochrane and Sir Iain Chalmers have made a strong case against simply taking expert medical opinion on trust. Overconfident experts have been humbled by statistical methods frequently enough for the phenomenon to have been worth a chapter.

Finally, and perhaps most curiously, there is no discussion of computers, cheap sensors, or big data. In this respect, at least, the book could have been written in the 1980s. It is a strange omission, especially since Muller would surely have much to say about big data and algorithmic management.

All this, however, is criticism from a position of admiration. Many of us have the vague sense that metrics are leading us astray, stripping away context, devaluing subtle human judgement, and rewarding those who know how to play the system. Muller’s book crisply explains where this fashion came from, why it can be so counterproductive and why we don’t learn. It should be required reading for any manager on the verge of making the Vietnam body count mistake all over again.
Written for and first published in the Financial Times on 24 January 2018.

My recent book is “Fifty Inventions That Shaped The Modern Economy”. Grab yourself a copy in the US or in the UK (slightly different title) or through your local bookshop.


15th of February, 2018
Other Writing

Budget 2017 shows a reactive government throwing cash at crises

Firefighting is a brave and essential profession, but for a politician it is not a good look. Successive British governments have found themselves locked in a vicious cycle: some part of the public sector is squeezed for money, manages decline for a while, then cracks under pressure. The crisis is extinguished by a sudden spray of last-minute cash — an expensive way to solve any problem — while money and attention is drained from some other area. So the next crisis smoulders.

Philip Hammond, the chancellor, did not create this situation but neither has he shown much sign of breaking out of it. His very first policy announcement was a slug of money aimed squarely at the civil war in the Conservative party: £3bn to be spent on “Brexit preparations”, whatever they may be.

A similar amount was offered to the National Health Service in England over the next few years, in an act of largesse Mr Hammond emphasised was exceptional and “outside the spending review process”. This prompts an obvious question: what is wrong with the spending review process?

What’s wrong — according to a recent report from the Institute for Government — is that the government has been too reactive. It has squeezed public service funding, hoping nothing goes wrong, and doled out cash when it does. The most obvious culprit is Mr Hammond’s predecessor George Osborne, who presided over a sustained period of austerity.

One could reserve some blame for the Labour government which handed Mr Osborne an economy in disastrous shape — and for civil servants eager to make the numbers add up with optimistic forecasts that quality could be sustained during a funding drought. And let’s not even talk about the chancellor’s boss, a prime minister who triggered Article 50 and called a general election without ever seeming to think seriously about the hard choices involved in Brexit.

Meanwhile public services continue to be stretched. The next crisis looms in the prison system; after that, the police service and the UK’s visa and immigration system are likely candidates to show signs of serious strain. So far they have not done so, which may be why Mr Hammond said nothing about them.

It is hard not to have some sympathy for the chancellor. He stands in the middle of a party in turmoil; he is on the periphery of a predictably difficult Brexit process; he has been handed a huge downgrade in growth and productivity forecasts from the independent Office for Budget Responsibility. None of this is his fault, and he deserves credit for ramping up spending on infrastructure and the housing market. It remains to be seen whether that money will be sufficient, or well spent, but the sums involved are not trivial.

Meanwhile Mr Hammond increasingly resembles a beaten-down dad in the middle of a raucous children’s party, trying to tidy up the mess on his hands and knees while the chaos continues around him. One five-year-old sits kicking on his back and another tries to spread Nutella in his ear. He has reached into his pockets and hurled out a few sweets, but his best efforts are unlikely to be enough.

For now, the cycle of emergency spending continues. There was a telling moment as the chancellor turned to discuss a real fire, the Grenfell Tower disaster. The House of Commons rightly fell silent. The tragedy should never have happened, said Mr Hammond. He then announced that he would be spending some money on the aftermath. Crisis, then cash. Same old story.

 

Written for and first published in the Financial Times on 22 November 2017.

My new book is “Fifty Inventions That Shaped The Modern Economy”. Grab yourself a copy in the US or in the UK (slightly different title) or through your local bookshop.


23rd of November, 2017
Other Writing

Review of “The Square and The Tower” by Niall Ferguson

“The world remains a world of squares and towers,” concludes Niall Ferguson, after skipping across 500 years in about as many pages. The square — the town square, the market square — represents social networks, “the structures that human beings naturally form”. The tower represents hierarchical control and secular authority, the top-down approach to social structure.

The study of how networks compete or co-operate with each other and with hierarchies is a hot topic in the social sciences, and it is easy to see why: think of the US military versus Isis; or Russian intelligence trying to exploit the US media; or Facebook and, well, almost anything.

Yet both networks and hierarchies have been around for a long time, as Ferguson is quick to remind us in The Square and the Tower (UK) (US). Networks flourished in the years 1450 to 1790, he writes; hierarchies reasserted themselves until around 1970, and networks have been making a comeback ever since. The book is a history told with the focus on the way networks and hierarchies shaped events.

This approach is engaging but not always helpful. It is unclear that we gain much by describing Pizarro’s conquistadors and their allies as a network opposing Atahualpa’s hierarchical Inca society.

When it does work, however, it works well. German National Socialism is described as a network that then transformed itself into a crushingly powerful hierarchy. Faced with the power of the German state, the network of Jewish business interests that had loomed so large in the Nazi imagination proved helpless. “After all that had been written about the web of Jewish power,” he writes, “the only networks that really mattered were the ones that enabled emigration, and those were often simple family ties.” The analysis is illuminating, chilling and still relevant today.

While National Socialism was a network that infected a hierarchy, the Soviet infiltration of the British old boys’ club between the 1930s and the 1960s shows that hierarchies can infect networks, too.

No book written by a historian of Ferguson’s gifts is likely to disappoint, but The Square and the Tower does have one obvious weakness: it’s not at all clear that the author takes his own premise seriously. That premise, set out in the first 50 pages of the book, is that by adding the formal social science of networks to the informal descriptive practice of history, we can unlock new insights.

This union of history and social science is an exciting prospect with Ferguson in charge. But the early chapters in which he outlines the science and social science of networks are dutiful literature reviews; though he nods to network scholars such as Ronald Burt, Mark Granovetter and Duncan Watts, those names do not resurface later in the book. Ferguson cites an impressive range of social science research papers; he does not always trouble to explain technical terms as a skilled science writer might. One is left with the impression that he is happy to list every tool in the toolkit but doesn’t actually want to pick up a spanner himself.

The impression is reinforced by the way the author uses diagrams. Network diagrams always look good, whether it’s diagram 22, showing the interconnected nodes of the Bloomsbury Group (“it was . . . sexual relationships that defined the network”, we are told) or, over the page, diagram 23 depicting the evolving connections between the great powers in the late 19th century. These diagrams have been reproduced from other sources, but without sufficient labelling. Those lines mean something yet we can only guess what, unless we consult the original sources directly. The network diagrams, like the network research described early on in the book, appear to be largely decorative. That is a missed opportunity.

Yet that same flip of the page takes us from Virginia Woolf and John Maynard Keynes to a theory of the causes of the first world war outlined by none other than Henry Kissinger. There’s a joy in such leaps — from industrial networks in pre-Victorian Britain to the Taiping Rebellion, from Kissinger’s use of networked influence to how the World Economic Forum reshaped Nelson Mandela’s policy of nationalisation.

“By choice, I am more of a networks guy”, Ferguson tells us early on, and he is convincing in his claim that networks have been playing an important role for centuries. Yet at the end of his freewheeling history, he yearns for someone to take charge: “The lesson of history is that trusting in networks to run the world is a recipe for anarchy.” At best, the Illuminati take control; more likely, the Jacobins.

 

Written for and first published in the Financial Times on 11 October 2017.

My new book is “Fifty Inventions That Shaped The Modern Economy”. Grab yourself a copy in the US or in the UK (slightly different title) or through your local bookshop.


8th of November, 2017
Marginalia

Why Thaler’s Nobel is a well-deserved nudge for behavioural economics

Richard Thaler has won the Nobel memorial prize in economics, an award that had been anticipated for some time. Mr Thaler is a behavioural economist, one of a group of economists who bring insights from psychology, or perhaps plain common sense, into the idealised world of economic modelling.

One trivial behavioural insight that Mr Thaler is fond of mentioning concerns a large bowl of cashew nuts he once served to dinner guests over drinks. Observing his guests hoovering up the contents of the bowl, he removed it to the kitchen so as not to spoil everyone’s appetite. The guests could in principle have stopped of their own accord; nevertheless they were pleased to see temptation removed.

Early in his career, he started making a list of “Dumb Stuff People Do” on the blackboard in his office. The cashew nut example was on the list, and it is a classic piece of Thaler thinking: obvious, trivial, fun and yet completely beyond the scope of traditional economics to model. Mr Thaler’s insight is that such trivia might lead to important analytical and policy insights.

Thomas Schelling, Nobel laureate in 2005, was also a master of these deceptively simple observations of human nature. And Daniel Kahneman — a psychologist, mentor for Mr Thaler, and winner of the prize in 2002 — had with Amos Tversky laid the foundations for behavioural economics.

Mr Thaler advanced the field in two important ways. He campaigned for behavioural economics to be taken seriously within the economics profession. He also brought it into the policy environment with his book Nudge (co-authored with Cass Sunstein) and his support for behavioural policy units in the White House and 10 Downing Street.

Within the profession, Mr Thaler found a pulpit in the Journal of Economic Perspectives, an academic journal supplied to all members of the American Economic Association. His Anomalies column was witty and sharply reasoned, highlighting strange features of the economic and financial world that standard economic theory could not explain, and rigorously debunking unconvincing attempts at rationalisation.

His evangelism for behavioural economics has been successful, at least in microeconomics: it is commonplace to see economic models incorporate psychological realism, and Mr Thaler himself was president of the American Economic Association in 2015.

In the policy world, Mr Thaler’s most famous idea was to use behavioural insights in pensions policy — for example, by enrolling people in a pension scheme by default, while giving them the choice to opt out. The stakes here are much higher than with cashew nuts: default enrolment has, according to the UK pensions regulator, increased participation in private-sector pension schemes from 42 per cent to 73 per cent between 2012 and 2016.

Rational economic man does not care — or even notice — whether a pension is opt-in or opt-out. He simply calculates (instantly) whether it pays to participate and chooses accordingly. Mr Thaler’s insight is not only that people are not perfectly rational (that much is obvious, even to the most traditional of economists) but that apparently small departures from rationality can have outsized impacts.

Mr Thaler’s catch-all advice: whether you’re a business or a government, if you want people to do something, make it easy. This year’s choice of Nobel Prize winner is an easy one to like.

Written for and first published in the Financial Times on 9 October 2017.

My new book is “Fifty Inventions That Shaped The Modern Economy”. Grab yourself a copy in the US or in the UK (slightly different title) or through your local bookshop.


Highlights

What We Get Wrong About Technology

Blade Runner (1982) is a magnificent film, but there’s something odd about it. The heroine, Rachael, seems to be a beautiful young woman. In reality, she’s a piece of technology — an organic robot designed by the Tyrell Corporation. She has a lifelike mind, imbued with memories extracted from a human being.  So sophisticated is Rachael that she is impossible to distinguish from a human without specialised equipment; she even believes herself to be human. Los Angeles police detective Rick Deckard knows otherwise; in Rachael, Deckard is faced with an artificial intelligence so beguiling, he finds himself falling in love. Yet when he wants to invite Rachael out for a drink, what does he do?

He calls her up from a payphone.

There is something revealing about the contrast between the two technologies — the biotech miracle that is Rachael, and the graffiti-scrawled videophone that Deckard uses to talk to her. It’s not simply that Blade Runner fumbled its futurism by failing to anticipate the smartphone. That’s a forgivable slip, and Blade Runner is hardly the only film to make it. It’s that, when asked to think about how new inventions might shape the future, our imaginations tend to leap to technologies that are sophisticated beyond comprehension. We readily imagine cracking the secrets of artificial life, and downloading and uploading a human mind. Yet when asked to picture how everyday life might look in a society sophisticated enough to build such biological androids, our imaginations falter. Blade Runner audiences found it perfectly plausible that LA would look much the same, beyond the acquisition of some hovercars and a touch of noir.

Now is a perplexing time to be thinking about how technology shapes us. Some economists, disappointed by slow growth in productivity, fear the glory days are behind us. “The economic revolution of 1870 to 1970 was unique in human history,” writes Robert Gordon in The Rise and Fall of American Growth (UK) (US). “The pace of innovation since 1970 has not been as broad or as deep.” Others believe that exponential growth in computing power is about to unlock something special. Economists Erik Brynjolfsson and Andrew McAfee write of “the second machine age” (UK) (US), while the World Economic Forum’s Klaus Schwab favours the term “fourth industrial revolution”, following the upheavals of steam, electricity and computers. This coming revolution will be built on advances in artificial intelligence, robotics, virtual reality, nanotech, biotech, neurotech and a variety of other fields currently exciting venture capitalists.

Forecasting the future of technology has always been an entertaining but fruitless game. Nothing looks more dated than yesterday’s edition of Tomorrow’s World. But history can teach us something useful: not to fixate on the idea of the next big thing, the isolated technological miracle that utterly transforms some part of economic life with barely a ripple elsewhere. Instead, when we try to imagine the future, the past offers two lessons. First, the most influential new technologies are often humble and cheap. Mere affordability often counts for more than the beguiling complexity of an organic robot such as Rachael. Second, new inventions do not appear in isolation, as Rachael and her fellow androids did. Instead, as we struggle to use them to their best advantage, they profoundly reshape the societies around us.

 

To understand how humble, cheap inventions have shaped today’s world, picture a Bible — specifically, a Gutenberg Bible from the 1450s. The dense black Latin script, packed into twin blocks, makes every page a thing of beauty to rival the calligraphy of the monks. Except, of course, these pages were printed using the revolutionary movable type printing press. Gutenberg developed durable metal type that could be fixed firmly to print hundreds of copies of a page, then reused to print something entirely different.  The Gutenberg press is almost universally considered to be one of humanity’s defining inventions. It gave us the Reformation, the spread of science, and mass culture from the novel to the newspaper. But it would have been a Rachael — an isolated technological miracle, admirable for its ingenuity but leaving barely a ripple on the wider world — had it not been for a cheap and humble invention that is far more easily and often overlooked: paper.

The printing press didn’t require paper for technical reasons, but for economic ones. Gutenberg also printed a few copies of his Bible on parchment, the animal-skin product that had long served the needs of European scribes. But parchment was expensive — the skins of 250 sheep were required for a single book. When hardly anyone could read or write, that mattered little. Paper had been invented 1,500 years earlier in China and had long been used in the Arabic world, where literacy was common. Yet it had taken centuries to spread to Christian Europe, because illiterate Europe no more needed a cheap writing surface than it needed a cheap metal to make crowns and sceptres.

Paper caught on only when a commercial class started to need an everyday writing surface for contracts and accounts. “If 11th-century Europe had little use for paper,” writes Mark Kurlansky in his book Paper (UK) (US), “13th-century Europe was hungry for it.” When paper was embraced in Europe, it became arguably the continent’s earliest heavy industry. Fast-flowing streams (first in Fabriano, Italy, and then across the continent) powered massive drop-hammers that pounded cotton rags, which had been broken down by the ammonia from urine. The paper mills of Europe reeked, as dirty garments were pulped in a bath of human piss.

Paper opened the way for printing. The kind of print run that might justify the expense of a printing press could not be produced on parchment; it would require literally hundreds of thousands of animal skins. It was only when it became possible to mass-produce paper that it made sense to search for a way to mass-produce writing too. Not that writing is the only use for paper. In his book Stuff Matters (UK) (US), Mark Miodownik points out that we use paper for everything from filtering tea and coffee to decorating our walls. Paper gives us milk cartons, cereal packets and corrugated cardboard boxes. It can be sandpaper, wrapping paper or greaseproof paper. In quilted, perforated form, paper is soft, absorbent and cheap enough to wipe, well, anything you want. Toilet paper seems a long way from the printing revolution. And it is easily overlooked — as we occasionally discover in moments of inconvenience. But many world-changing inventions hide in plain sight in much the same way — too cheap to remark on, even as they quietly reorder everything. We might call this the “toilet-paper principle”.

 

 

It’s not hard to find examples of the toilet-paper principle, once you start to look. The American west was reshaped by the invention of barbed wire, which was marketed by the great salesman John Warne Gates with the slogan: “Lighter than air, stronger than whiskey, cheaper than dust.” Barbed wire enabled settlers to fence in vast areas of prairie cheaply. Joseph Glidden patented it in 1874; just six years later, his factory produced enough wire annually to circle the world 10 times over. Barbed wire’s only advantage over wooden fencing was its cost, but that was quite sufficient to cage the wild west, where the simple invention prevented free-roaming bison and cowboys’ herds of cattle from trampling crops. Once settlers could assert control over their land, they had the incentive to invest in and improve it. Without barbed wire, the American economy — and the trajectory of 20th-century history — might have looked very different.

There’s a similar story to be told about the global energy system. The Rachael of the energy world — the this-changes-everything invention, the stuff of dreams — is nuclear fusion. If we perfect this mind-bendingly complex technology, we might safely harvest almost limitless energy by fusing variants of hydrogen. It could happen: in France, the ITER fusion reactor is scheduled to be fully operational in 2035, at a cost of at least $20bn. If it works, it will achieve temperatures of 200 million degrees Celsius — yet it will still be only an experimental plant, producing less power than a coal-fired plant, and only in 20-minute bursts. Meanwhile, cheap-and-cheerful solar power is quietly leading a very different energy revolution. Break-even costs of solar electricity have fallen by two-thirds in the past seven years, to levels barely above those of natural gas plants. But this plunge has been driven less by any great technological breakthrough than by the humble methods familiar to anyone who shops at Ikea: simple modular products, manufactured at scale, that snap together quickly on site.

The problem with solar power is that the sun doesn’t always shine. And the solution that’s emerging is another cheap-and-cheerful, familiar technology: the battery. Lithium-ion batteries to store solar energy are becoming increasingly commonplace, and mass-market electric cars would put a large battery on every driveway. Several giant factories are under construction, most notably a Tesla factory that promises to manufacture 35GWh of batteries each year by 2020; that is more than the entire global production of batteries in 2013. Battery prices have fallen as quickly as those of solar panels. Such Ikea-fication is a classic instance of toilet-paper technology: the same old stuff, only cheaper.

Perhaps the most famous instance of the toilet-paper principle is a corrugated steel box, 8ft wide, 8.5ft high and 40ft long. Since the shipping container system was introduced, world merchandise trade (the average of imports and exports) has expanded from about 10 per cent of world GDP in the late 1950s to more than 20 per cent today. We now take for granted that when we visit the shops, we’ll be surrounded by products from all over the globe, from Spanish tomatoes to Australian wine to Korean mobile phones.

“The standard container has all the romance of a tin can,” writes historian Marc Levinson in his book The Box (UK) (US). Yet this simple no-frills system for moving things around has been a force for globalisation more powerful than the World Trade Organisation. Before the shipping container was introduced, a typical transatlantic cargo ship might contain 200,000 separate items, comprising many hundreds of different shipments, from food to letters to heavy machinery. Hauling and loading this cornucopia from the dockside, then packing it into the tightest corners of the hull, required skill, strength and bravery from the longshoremen, who would work on a single ship for days at a time. The container shipping system changed all that.

Loading and unloading a container ship is a gigantic ballet of steel cranes, choreographed by the computers that keep the vessel balanced and track each container through a global logistical system. But the fundamental technology that underpins it all could hardly be simpler. The shipping container is a 1950s invention using 1850s know-how. Since it was cheap, it worked. The container was a simple enough idea, and the man who masterminded its rise, Malcom McLean, could scarcely be described as an inventor. He was an entrepreneur who dreamed big, took bold risks, pinched pennies and deftly negotiated with regulators, port authorities and the unions.

McLean’s real achievement was in changing the system that surrounded his box: the way that ships, trucks and ports were designed. It takes a visionary to see how toilet-paper inventions can totally reshape systems; it’s easier for our limited imaginations to slot Rachael-like inventions into existing systems. If nuclear fusion works, it neatly replaces coal, gas and nuclear fission in our familiar conception of the grid: providers make electricity, and sell it to us. Solar power and batteries are much more challenging. They’re quietly turning electricity companies into something closer to Uber or Airbnb — a platform connecting millions of small-scale providers and consumers of electricity, constantly balancing demand and supply.

 

 

Some technologies are truly revolutionary. They transcend the simple pragmatism of paper or barbed wire to produce effects that would have seemed miraculous to earlier generations. But they take time to reshape the economic systems around us — much more time than you might expect. No discovery fits that description more aptly than electricity, barely comprehended at the beginning of the 19th century but harnessed and commodified by its end. Usable light bulbs had appeared in the late 1870s, courtesy of Thomas Edison and Joseph Swan. In 1881, Edison built electricity-generating stations in New York and London, and he began selling electricity as a commodity within a year. The first electric motors were used to drive manufacturing machinery a year after that. Yet the history of electricity in manufacturing poses a puzzle. Poised to take off in the late 1800s, electricity flopped as a source of mechanical power, making almost no impact at all on 19th-century manufacturing. By 1900, electric motors were providing less than 5 per cent of mechanical drive power in American factories. Despite the best efforts of Edison, Nikola Tesla and George Westinghouse, manufacturing was still in the age of steam.

Productivity finally surged in US manufacturing only in the 1920s. The reason for the 30-year delay? The new electric motors only worked well when everything else changed too. Steam-powered factories had delivered power through awe-inspiring driveshafts, secondary shafts, belts, belt towers, and thousands of drip-oilers. The early efforts to introduce electricity merely replaced the single huge engine with a similarly large electric motor. Results were disappointing.

As the economic historian Paul David has argued, electricity triumphed only when factories themselves were reconfigured. The driveshafts were replaced by wires, the huge steam engine by dozens of small motors. Factories spread out and filled with natural light. Stripped of the driveshafts, the ceilings could be used to support pulleys and cranes. Workers had responsibility for their own machines; they needed better training and better pay. The electric motor was a wonderful invention, once we changed all the everyday details that surrounded it.

David suggested in 1990 that what was true of electric motors might also prove true of computers: that we had yet to see the full economic benefits because we had yet to work out how to reshape our economy to take advantage of them. Later research by economists Erik Brynjolfsson and Lorin Hitt backed up the idea: they found that companies that had merely invested in computers in the 1990s had seen few benefits, but those that had also reorganised — decentralising, outsourcing and customising their products — had seen productivity soar.

Overall, the productivity statistics have yet to display anything like a 1920s breakthrough. In that respect we are still waiting for David’s suggestion to bear fruit. But in other ways, he was proved right almost immediately. People were beginning to figure out new ways to use computers and, in August 1991, Tim Berners-Lee posted his code for the world wide web on the internet so that others could download it and start to tinker. It was another cheap and unassuming technology, and it unlocked the potential of the older and grander internet itself.

 

 

If the fourth industrial revolution delivers on its promise, what lies ahead? Super-intelligent AI, perhaps? Killer robots? Telepathy? Elon Musk’s company, Neuralink, is on the case. Nanobots that live in our blood, zapping tumours? Perhaps, finally, Rachael? The toilet-paper principle suggests that we should be paying as much attention to the cheapest technologies as to the most sophisticated. One candidate: cheap sensors and cheap internet connections. There are multiple sensors in every smartphone, but increasingly they’re everywhere, from jet engines to the soil of Californian almond farms — spotting patterns, fixing problems and eking out efficiency gains. They are also a potential privacy and security nightmare, as we’re dimly starting to realise — from hackable pacemakers to botnets made up of printers to, inevitably, internet-enabled sex toys that leak the most intimate data imaginable. Both the potential and the pitfalls are spectacular.

Whatever the technologies of the future turn out to be, they are likely to demand that, like the factories of the early 20th century, we change to accommodate them. Genuinely revolutionary inventions live up to their name: they change almost everything, and such transformations are by their nature hard to predict. One clarifying idea has been proposed by economists Daron Acemoglu and David Autor. They argue that when we study the impact of technology on the workplace, we should view work in bite-sized chunks — tasks rather than jobs.

For example, running a supermarket involves many tasks — stacking the shelves, collecting money from customers, making change, and preventing shoplifters. Automation has had a big impact on supermarkets, but not because the machines have simply replaced human jobs. Instead, they have replaced tasks done by humans, generally the tasks that could be most easily codified. The barcode turned stocktaking from a human task into one performed by computers. (It is another toilet-paper invention, cheap and ubiquitous, and one that made little difference until retail formats and supply chains were reshaped to take advantage.)

A task-based analysis of labour and automation suggests that jobs themselves aren’t going away any time soon — and that distinctively human skills will be at a premium. When humans and computers work together, says Autor, the computers handle the “routine, codifiable tasks” while amplifying the capabilities of the humans, such as “problem-solving skills, adaptability and creativity”. But there are also signs that new technologies have polarised the labour market, with more demand for both the high-end skills and the low-end ones, and a hollowing out in the middle. If human skills are now so valuable, that low-end growth seems like a puzzle — but the truth is that many distinctively human skills are not at the high end. While Jane Austen, Albert Einstein and Pablo Picasso exhibited human skills, so does the hotel maid who scrubs the toilet and changes the bed. We’re human by virtue not just of our brains, but our sharp eyes and clever fingers.

So one invention I’m keen to observe is the “Jennifer unit”, made by a company called Lucas Systems. Jennifer and the many other programs like her are examples of “voice-directed applications” — just software and a simple, inexpensive earpiece. Such systems have become part of life for warehouse workers: a voice in their ear or instructions on a screen tell them where to go and what to do, down to the fine details. If 13 items must be collected from a shelf, Jennifer will tell the human worker to pick five, then five, then three. “Pick 13” would lead to mistakes. That makes sense: computers are good at counting and scheduling; humans are good at picking things off shelves. Why not unbundle the task and give the conscious thinking to the computer, and the mindless grabbing to the human? Like paper, Jennifer is inexpensive and easy to overlook. And like the electric dynamo, the technologies in Jennifer are having an impact because they enable managers to reshape the workplace. Science fiction has taught us to fear superhuman robots such as Rachael; perhaps we should be more afraid of Jennifer.
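The batching rule itself is trivial, which is rather the point. Here is a minimal sketch in Python (an illustration of the idea only, not Lucas Systems’ actual algorithm; the chunk size of five is my assumption): the computer does the counting, so the human never has to.

```python
# A sketch of the pick-batching idea described above -- hypothetical,
# not Lucas Systems' actual code. The maximum chunk size is assumed.

def pick_chunks(total, max_chunk=5):
    """Split a pick of `total` items into chunks of at most `max_chunk`."""
    chunks = []
    while total > 0:
        take = min(max_chunk, total)
        chunks.append(take)
        total -= take
    return chunks

print(pick_chunks(13))  # [5, 5, 3]: pick five, then five, then three
```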

 

 
Written for and first published in the FT Magazine on 8 July 2017.

My new book is “Fifty Things That Made The Modern Economy” – now out! Grab yourself a copy in the US (slightly different title) or in the UK or through your local bookshop.


29th of August, 2017 | Highlights | Other Writing | Comments off
Other Writing

Books about calling statistical bullshit

A friend recently emailed to ask me for books that might help navigate a world full of statistical bullshit. Here are some recommendations.

I can’t think of a better science writer than Ben Goldacre, who burns with righteous mischief. His Bad Science (UK) (US) isn’t always about statistics, but it’s excellent throughout and an essential read for anyone who wants to understand some of the faults of modern health and nutrition journalism. Wonderful book.

Of course you should subscribe to the More or Less podcast, but you could also enjoy The Tiger That Isn’t (UK) (US). This is the unofficial book of the series, written by More or Less founders Andrew Dilnot and Michael Blastland. A highly readable guide to making sense of numbers in the wild.

Also very good – with more US examples – is Stat-Spotting (UK) (US) by Joel Best. Best’s book has given me some of my favourite examples of bad stats, but it currently seems a bit overpriced on Amazon, alas.

The classic of the field is, of course, Darrell Huff’s How To Lie With Statistics (UK) (US). There’s a sad coda that will tarnish your view of Huff; but this is still a terrific book.

The brand new book by the very splendid Evan Davis is called Post Truth (UK) (US) – I haven’t yet read much of it, but it looks good.

And finally try Naked Statistics (UK) (US) by Charles Wheelan, who with wit and clarity wrote the similarly excellent Naked Economics (UK) (US).

Best, Dilnot, Huff and Wheelan all cover quite similar ground. If I were picking just one of them, I’d go for Dilnot for a UK audience and Wheelan in the US.

My new book is “Fifty Things That Made The Modern Economy” – coming very, very soon and available for pre-order. Grab yourself a copy in the US (slightly different title) or in the UK or through your local bookshop.


26th of June, 2017 | Other Writing | Resources | Comments off
Other Writing

Economics lessons from Dr Seuss

Language matters. Any poet can attest to that, as can any lawyer. (One recent court case in the US turned on an ambiguity created by a missing comma.) But it’s less clear that we economists have realised how important it is to write clearly.
One who has is Paul Romer, the Chief Economist of the World Bank. Mr Romer has provoked a staff rebellion by instructing his large team of research economists to sharpen up their language. He’s threatened to block publication of a flagship report if more than 2.6 per cent of the words in it are “and”. Such medicine seems to be too strong for the World Bank: Mr Romer keeps his job title but is to be stripped of his managerial responsibilities.
No doubt the amusing surface of this story hides tedious depths of office politics. But Mr Romer has a point: economists seem to be drawn to obfuscatory polysyllables like wasps to jam. This is true even in comparison with other academics, and even in a medium that encourages brevity: Twitter.
Recently, the economist Marina Della Giusta and colleagues at the University of Reading conducted an as-yet-unpublished linguistic analysis of the tweets of the top 25 academic economists and the top 25 scientists on Twitter. (The top three economists: Paul Krugman, Joseph Stiglitz and Erik Brynjolfsson; the top three scientists: Neil deGrasse Tyson, Brian Cox and Richard Dawkins.)
Della Giusta and her colleagues found that economists tweeted less, were less likely to mention other Twitter users, and mentioned fewer people when they did. This implies that the economists were less likely than the scientists to have Twitter conversations, especially with people they didn’t know. I can’t say I blame them; I avoid using Twitter as a medium for conversation myself. Still, the scientists managed it and the economists did not.
The economists also used less accessible language, with more complex words and more abbreviations. Both their language and their behaviour were less chatty than those of the scientists.
The Bank of England has been pondering this kind of thing, too. Last year on Bank Underground, a blog for Bank of England staff, analyst Jonathan Fullwood compared Bank of England reports to the writings of Dr Seuss.
Mr Fullwood’s analysis uses statistical measures of writing complexity: long words, long sentences and long paragraphs make for more difficult prose. The Cat In The Hat stands at one end of the scale; Bank of England reports stand at the other. Mr Fullwood suggests that this complexity is not a good thing – and his work has been praised this week by Minouche Shafik, who recently left the Bank of England to run the London School of Economics.
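To make that concrete, here is a crude sketch of the kind of complexity measure involved (my own simplification, not Mr Fullwood’s actual method): score a text by its average sentence length and the share of long words.

```python
# A crude readability sketch: longer sentences and longer words push
# both numbers up. Illustrative only, not Fullwood's actual method.
import re

def complexity(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence_length = len(words) / len(sentences)
    long_word_share = sum(len(w) >= 7 for w in words) / len(words)
    return avg_sentence_length, long_word_share

seuss = "I do not like them, Sam-I-Am. I do not like green eggs and ham."
# An invented pastiche of central-bank prose, for comparison:
bank = ("Macroprudential interventions may attenuate procyclicality in "
        "aggregate credit provision across heterogeneous institutions.")
print(complexity(seuss))  # (8.0, 0.0): short sentences, no long words
print(complexity(bank))   # (12.0, 0.67): two-thirds of words are long
```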
We economists should write simpler, clearer prose if we want anybody to pay attention to what we think. But at the World Bank, Paul Romer has another mission. He has long argued that economists need to write clearly to help them think clearly. He also believes that trust depends on clarity. If we economists write prose that sounds complicated but does not tie us down to meaning anything in particular, we cannot be surprised if nobody trusts us.
Mr Romer is much taken by a linguistic analysis from Stanford’s Literary Lab. The analysis, published in 2015, tracks the development of “Bankspeak” in World Bank annual reports since 1948.
These reports once described specific situations (“Congo’s present transport system is geared mainly to the export trade”) and what the World Bank had done to improve them. Now they are more likely to be clouds of feelgood bureaucratese, in which nothing specific ever happens. Projects “are emerging” while “players” are “partnering”. The result is somewhere on the Venn diagram between unobjectionable and incomprehensible.
When I worked at the World Bank, in the early 2000s, I first heard the phrase “Christmas Tree” used to bemoan work sagging under the pet ideas that had been loaded onto it. This explains Mr Romer’s irritation at excessive use of the word “and”. The Stanford analysis has a prime example of a Christmas tree from the 1999 annual report, which wants to “promote corporate governance and competition policies and reform and privatize state-owned enterprises and labor market/social protection reform…”
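Mr Romer’s 2.6 per cent rule is simple enough to check mechanically. Here is a minimal sketch (an illustration of the idea, not the World Bank’s actual tooling), applied to the Christmas-tree sentence quoted above:

```python
# Count what fraction of a text's words are "and" -- a mechanical
# version of Romer's test, illustrative only.
import re

def and_share(text):
    words = re.findall(r"[a-z']+", text.lower())
    return words.count("and") / len(words)

christmas_tree = ("promote corporate governance and competition policies "
                  "and reform and privatize state-owned enterprises and "
                  "labor market/social protection reform")
print(f"{and_share(christmas_tree):.1%}")  # 21.1% -- some way above 2.6%
```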
Such sentences are written by committee. It is surprisingly easy to write like this when you don’t know what you think, or cannot agree, or dare not say.
Mr Romer knows what he thinks and has never been afraid to say it. His focus on clear language can do little harm. It may even do some good, although I fear that too many “ands” are the symptom but not the cause of the trouble.
We should all aspire to write a bit more like Dr Seuss. If we write more clearly, we tend to think more clearly. And since what we say is easy to understand, we must make sure that it is true.
But simplicity alone will not save us.
“We’re going to build a big, beautiful wall and Mexico is going to pay for it,” has the same simple tone as Dr Seuss, although it lacks his compassion. Does it reflect clear, trustworthy thinking? I do not think so, Sam-I-Am.

My new book is “Fifty Things That Made The Modern Economy” – coming soon! If you want to get ahead of the curve you can pre-order in the US (slightly different title) or in the UK or through your local bookshop.


31st of May, 2017 | Other Writing | Comments off