Tim Harford The Undercover Economist

Articles published in August, 2019

What we get wrong about meetings – and how to make them worth attending

I rely on Google Calendar to tell me where I am supposed to be, when and with whom. When the service collapsed for an afternoon last month, it felt like a teachable moment. For a few seconds, I panicked. Then, I realised that with all the meetings gone, I was free to do some real work.

I know I’m not the only person who loves to hate meetings. Will There Be Donuts?, a book by David Pearl, skewers the “Wagner meeting” (of epic length), the “mushroom meeting” (appears suddenly, multiplies rapidly) and the “Stonehenge meeting” (it’s been a fixture for ages but nobody knows why).

Yet Mr Pearl also acknowledges that ineffectual meetings often suit us. Boring meetings make us feel interesting by comparison. Long meetings pass the time. Indecisive meetings postpone painful choices.

Meetings frustrate when they reveal painful disparities in power. For subordinates, meetings are often the things that get in the way of doing their job. For the person with the power — the manager — meetings are the job. The manager can even offload the scheduling on to her secretary. No wonder some staff feel resentful of meetings while their managers are oblivious.

That said, the relentless democracy of a meeting where everybody must be heard is a kind of torture in its own right. Never-ending consultations are a good way to ensure that nothing ever happens and nobody has to take responsibility. Oscar Wilde never said that socialism “would take too many evenings”, but if he’d met UK Labour leader Jeremy Corbyn, he surely would have.

Some meetings are to transfer information, some to allow discussion and some to reach a decision or resolve a problem. There are committee meetings that exist to satisfy some rule or regulation. I am on such a committee, and find it useful as a reminder not to sign up for any other committees. Then there are the meetings that exist purely for the sake of meeting. Don’t dismiss them; there’s nothing wrong with consenting adults enjoying a coffee break together. There doesn’t always need to be a reason.

But nothing undermines a meeting more than a lack of agreement as to why it’s happening. I know a school that invites parents in for curriculum meetings. The teachers think they’re explaining their approach to the parents; the parents are under the misapprehension they’re being asked for their input. Nobody goes away happy.

Yet despite all the well-justified complaints, there are many situations in which there’s simply no substitute for a meeting. For quickly co-ordinating a shared task, it’s perfect. A few minutes is often sufficient. “Agile” working methods call for a “scrum” in which team members briskly report what they did yesterday, what they’re going to do today and if there’s anything stopping them. A newspaper’s morning editorial conference serves a similar purpose, without the funky terminology.

Or perhaps the meeting is a workshop designed to produce ideas. Some people will assert that meetings are creativity killers, and that “a camel is a horse designed by committee”. But this is absurd. We’ve all been in conversations where one idea sparks another. And while an individual can write a novel or paint a portrait, solo creativity is no way to produce nuclear fusion or a new antibiotic. In a world full of specialists, complex projects require collaboration. Meetings can and do generate ideas that no individual could have conceived alone. They do not do so automatically, however.

A persistent myth is that “brainstorming” — the unfiltered, no-wrong-answers, bellowing out of ideas — is a reliable route to innovative brilliance. Psychologist Keith Sawyer, the author of Group Genius, points out that there are many reasons to doubt this. In brainstorming, individuals distract each other, groups fixate on particular topics and some people use the opportunity to stop thinking entirely. Twelve people generate more ideas if they work separately than if they brainstorm together.

The meeting serves a far more important creative purpose when it is time to criticise, evaluate and combine those ideas. Charlan Nemeth, a psychologist at UC Berkeley and author of In Defense of Troublemakers, has found that groups come up with more ideas — and, importantly, better ideas — when they are invited to debate and dissent. It may even be worth breaking the meeting into subgroups with the express purpose of developing competing ideas. “Do not criticise” is a handy rule for new grandparents; it’s not a good approach for innovators.

In a daily scrum, everyone arrives with a brisk update and leaves with a crisp to-do list. In a creative workshop, everyone arrives with a boxful of ideas, ready to discard some and weave the rest together. There’s a big difference between the two types of meeting, but there’s also a clear common thread: people come prepared, have a reason to work together and finish the meeting with a clear sense of what comes next. A good meeting is a good meeting less because of what happens at the time than because of what came before — and most importantly, what comes after.


Written for and first published in the Financial Times on 26 July 2019.

My book Messy has more on the joys of creative tension. Feel free to order online or through your local bookshop.


US health care is literally killing people

It is astonishing how far the debate on healthcare has moved in the US, at least for the Democrats. Not long ago, offering universal, government-funded healthcare was viewed as tantamount to communism; now, it’s a touchstone of many presidential hopefuls.

Not before time. The US healthcare system is a monument to perverse incentives, unintended consequences and political inertia. It is astonishingly bad — indeed, it’s so astonishingly bad that even people who believe it’s bad don’t appreciate quite how bad it is.

I don’t say this out of any great devotion to the UK alternative. The National Health Service works well enough for a vast tax-funded bureaucracy, but it might work better if we didn’t view any attempt at reform as the desecration of a holy institution. Nor do I have bad experiences of US healthcare. My daughter was born in America, where my family had sensitive and expert medical care. But that’s what you’d expect with a good health insurance plan — something that many Americans don’t have.

Around 27m people — 10 per cent of the non-elderly US population — have no insurance at all. That is precarious, given that a serious illness or accident could incur bankruptcy-inducing costs. Yet the astonishingly large number of people living on the edge is still progress: before the passage of the Affordable Care Act under President Barack Obama, the figure was closer to 45m people.

It’s this lack of anything resembling universal access that seems most grotesque to observers from other rich nations. But it’s just the beginning of the costs that the US health system imposes on Americans.

The financial costs are most obvious, and they are truly extraordinary. For a family of four, the US system costs about $13,000 a year more than that of Switzerland, which itself is substantially more expensive than any other country’s. The US system costs more than twice as much, per person, as the universal coverage provided by the UK’s NHS. Even the government-funded part of the US system costs more per capita than the NHS.

Why so expensive? It’s because US doctors prescribe more treatments, and those treatments cost much more than they do elsewhere. Most governments limit the price of treatments, freeriding on the US market to stimulate investment in medicine. American hospitals and drug companies have enormous leeway to raise prices — insurers have limited bargaining power, and uninsured patients even less.

Nor is all this money bringing any obvious reward. Compared with other rich countries, the US ranks at or near the bottom on life expectancy, infant mortality, adolescent pregnancy, sexually transmitted infections, drug-related mortality, obesity, diabetes, heart disease, lung disease and arthritis. No, the healthcare system can’t be blamed for all that — but it is hardly covering itself with glory.

One of the striking tragedies of modern America, brought to light by the research of the economists Anne Case and Angus Deaton, has been the phenomenon of “deaths of despair”, from suicide, alcohol abuse and overdoses. Such deaths go a long way to explaining why mortality rates for middle-aged white Americans have stagnated or perhaps even risen, while falling fast in other rich countries.

I recently had the opportunity to ask Prof Case and Sir Angus to what extent the US healthcare system was to blame. Their answer, in a nutshell: it would be an exaggeration to blame the system entirely but not a gross exaggeration.

The most obvious connection is that the opioids that have played such a role in these deaths of despair were supplied by the healthcare system. Opioids are a simple and profitable palliative for a widespread condition (“I’m in pain”) rather than a cure for anything. Doctors and drug companies made more money if they prescribed more opioids and, human nature being human nature, found ways to justify that decision to themselves.

The dysfunction of the US healthcare system has also eaten away at American wellbeing in other ways. Those extraordinary costs — more than $10,000 per person — must be paid by someone. When they are paid by employers, through workplace health plans, rising healthcare spending becomes a substitute for the rising wages that workers so desperately want.

And those extortionate costs also give employers a powerful reason to jettison staff at every opportunity, employing freelancers and subcontractors in the hope of cutting the cost of employer-sponsored health insurance. As a result, people feel disconnected from the workplace. Jobs become insecure ways to scrape a living, rather than sources of identity and pride. For many, despair follows.

Such problems are easier to diagnose than to cure. Reforming American healthcare will require an almighty effort. With politics gridlocked and soaking in lobbyist money, it’s not obvious that the US government is capable of running the kind of healthcare system that works elsewhere — even if Congress decides to try. But try it must, because the status quo is a tragedy.



Written for and first published in the Financial Times on 12 July 2019.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


The strange power of the idea of “average”

“While nothing is more uncertain than a single life, nothing is more certain than the average duration of a thousand lives.” The statement is often attributed to the 19th-century mathematician Elizur Wright, who not coincidentally was a life insurance geek. But buried in the aphorism is a humdrum word concealing a powerful idea: the “average”.

The idea of taking an average — that is, of adding up (say) a hundred lifespans and dividing the total by a hundred, to produce the arithmetic mean — seems absurdly simple. But Stephen Stigler, a historian of statistics, reckons it is the most radical statistical operation ever devised. I am inclined to agree. The mean has a strange power over the way we think, and not always a benign one.

We do not know who invented the arithmetic mean. The statistician Churchill Eisenhart once tried to trace its history. It was originally used as a way of combining various observations that should be identical, but were not — for example, estimates of the direction of magnetic north.

In 1635 the mathematician Henry Gellibrand used the word “meane” to describe the midpoint between the lowest and highest numbers — not the same thing — but by 1668, a person known as “DB” was quoted in the Philosophical Transactions of the Royal Society describing “taking the mean” of five values casually enough to make it clear that the concept was by then established.

Why is this such a powerful idea? As Prof Stigler puts it in The Seven Pillars of Statistical Wisdom, “you can actually gain information by throwing information away”. This is true in the straightforward sense that too many numbers become confusing: more than 50m people died last year, but if I could somehow show you a hundred-mile long printout of all their ages at death, you might struggle to learn much from it.

But the mean also eliminates errors. In the context that Gellibrand and DB were writing, taking an average cancels out mistakes in the original observations. This was by no means obvious. When confronted with contradictory measurements, the instinct of mathematicians had been to figure out which one was best and to dismiss the rest. But taking the average was a far better way to eliminate error.
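The error-cancelling effect that Gellibrand and DB relied on is easy to see in a toy example (the numbers below are invented for illustration, not drawn from the article): five error-prone readings of a compass bearing, each off by a degree or so, whose average lands far closer to the truth than the typical individual reading.

```python
from statistics import mean

# Hypothetical "true" bearing, in degrees, and five noisy readings of it.
true_bearing = 11.0
observations = [12.3, 9.8, 11.6, 10.2, 12.1]

avg = mean(observations)
individual_errors = [abs(x - true_bearing) for x in observations]

print(f"average reading:        {avg:.2f}")                       # 11.20
print(f"mean individual error:  {mean(individual_errors):.2f}")    # 1.00
print(f"error of the average:   {abs(avg - true_bearing):.2f}")    # 0.20
```

Because the readings scatter on both sides of the true value, the overshoots and undershoots largely cancel: here the average is out by 0.2 degrees, while a single reading is out by a full degree on average.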

And yet in this method lies a trap, because not all variation is error. The trap was sprung in the 1830s by the hugely influential statistician Adolphe Quetelet, who was an astronomer as well as a founder of the idea of “social physics” — using statistical techniques to understand humans and their societies.

Quetelet asked us to imagine that a thousand sculptors had made a copy of a famous statue of a gladiator. Each copy would have some errors or imperfections — but on average, they would be a perfect copy. So far, so good. But then, Quetelet continued, if we measured a thousand real soldiers and averaged their body measurements, we could get the ideal, perfect soldier. “L’homme moyen” — the average man — was Quetelet’s benchmark for perfection. (What about “la femme moyenne”? Well, quite.)

But as Todd Rose points out in The End of Average, Quetelet’s logic isn’t just andro-centric. It’s nonsense. A tall copy of a statue may be an error, but a tall soldier is not — and may well benefit from having superior reach.

Quetelet did not lack for critics. His contemporary, the mathematician and proto-economist AA Cournot, correctly argued that the Average Man probably didn’t exist. Victorian statistician Francis Galton was fascinated by averages, yet asserted: “No statistician dreams of combining objects into the same generic group that do not cluster towards a common centre . . . if we do so the result is monstrous and meaningless.”

But this idea of the average as perfection did not die. In an age of mass production it was too convenient. Production-line managers and modernist architects found it easy to design for the average person.

In 1943, the sculptor Abram Belskie and obstetrician Robert Dickinson turned Quetelet’s metaphor into reality, carving statues of “Normman” and “Norma”, based on the average measurements of 15,000 young adults. The US press loved Norma in particular: she was regarded as female perfection, at least from the male perspective. (It would be a stretch to describe the statue as “monstrous”.) Yet a competition to find an actual woman who matched her dimensions did not succeed. Being precisely average is not the same thing as being perfect — but it is just as rare.

We no longer have to fall into this trap. It is still hard to personalise rather than standardise — but it is not impossible. Drugs are not best evaluated by the average effect over thousands of patients. Social care will fail if it is designed only for the average recipient, or education for the average pupil. A forecasting model that is correct on average may be a very dangerous model indeed.

Elizur Wright was quite correct to declare that a single life is uncertain; but we should never leap to the conclusion that an average life is ideal.



Written for and first published in the Financial Times on 5 July 2019.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.
