Tim Harford The Undercover Economist

Other Writing

Articles from the New York Times, Forbes, Wired and beyond – any piece that isn’t one of my columns.


Multi-tasking: how to survive in the 21st century

Modern life now forces us to do a multitude of things at once — but can we? Should we?

Forget invisibility or flight: the superpower we all want is the ability to do several things at once. Unlike other superpowers, however, being able to multitask is now widely regarded as a basic requirement for employability. Some of us sport computers with multiple screens, to allow tweeting while trading pork bellies and frozen orange juice. Others make do with reading a Kindle while poking at a smartphone and glancing at a television in the corner with its two rows of scrolling subtitles. We think nothing of sending an email to a colleague to suggest a quick coffee break, because we can feel confident that the email will be read within minutes.

All this is simply the way the modern world works. Multitasking is like being able to read or add up, so fundamental that it is taken for granted. Doing one thing at a time is for losers — recall Lyndon Johnson’s often bowdlerised dismissal of Gerald Ford: “He can’t fart and chew gum at the same time.”

The rise of multitasking is fuelled by technology, of course, and by social change as well. Husbands and wives no longer specialise as breadwinners and homemakers; each must now do both. Work and play blur. Your friends can reach you on your work email account at 10 o’clock in the morning, while your boss can reach you on your mobile phone at 10 o’clock at night. You can do your weekly shop sitting at your desk and you can handle a work query in the queue at the supermarket.

This is good news in many ways — how wonderful to be able to get things done in what would once have been wasted time! How delightful the variety of it all is! No longer must we live in a monotonous, Taylorist world where we must painstakingly focus on repetitive tasks until we lose our minds.

And yet we are starting to realise that the blessings of a multitasking life are mixed. We feel overwhelmed by the sheer number of things we might plausibly be doing at any one time, and by the feeling that we are on call at any moment.

And we fret about the unearthly appetite of our children to do everything at once, flipping through homework while chatting on WhatsApp, listening to music and watching Game of Thrones. (According to a recent study by Sabrina Pabilonia of the US Bureau of Labor Statistics, for over half the time that high-school students spend doing homework, they are also listening to music, watching TV or otherwise multitasking. That trend is on the increase.) Can they really handle all these inputs at once? They seem to think so, despite various studies suggesting otherwise.

And so a backlash against multitasking has begun — a kind of Luddite self-help campaign. The poster child for uni-tasking was launched on the crowdfunding website Kickstarter in December 2014. For $499 — substantially more than a multifunctional laptop — “The Hemingwrite” computer promised a nice keyboard, a small e-ink screen and an automatic cloud back-up. You couldn’t email on the Hemingwrite. You couldn’t fool around on YouTube, and you couldn’t read the news. All you could do was type. The Hemingwrite campaign raised over a third of a million dollars.

The Hemingwrite (now rebranded the Freewrite) represents an increasingly popular response to the multitasking problem: abstinence. Programs such as Freedom and Self-Control are now available to disable your browser for a preset period of time. The popular blogging platform WordPress offers “distraction-free writing”. The Villa Stéphanie, a hotel in Baden-Baden, offers what has been branded the “ultimate luxury”: a small silver switch beside the hotel bed that will activate a wireless blocker and keep the internet and all its temptations away.

The battle lines have been drawn. On one side: the culture of the modern workplace, which demands that most of us should be open to interruption at any time. On the other, the uni-tasking refuseniks who insist that multitaskers are deluding themselves, and that focus is essential. Who is right?

The ‘cognitive cost’

There is ample evidence in favour of the proposition that we should focus on one thing at a time. Consider a study led by David Strayer, a psychologist at the University of Utah. In 2006, Strayer and his colleagues used a high-fidelity driving simulator to compare the performance of drivers who were chatting on a mobile phone to drivers who had drunk enough alcohol to be at the legal blood-alcohol limit in the US. Chatting drivers didn’t adopt the aggressive, risk-taking style of drunk drivers but they were unsafe in other ways. They took much longer to respond to events outside the car, and they failed to notice a lot of the visual cues around them. Strayer’s infamous conclusion: driving while using a mobile phone is as dangerous as driving while drunk.

Less famous was Strayer’s finding that it made no difference whether the driver was using a handheld or hands-free phone. The problem with talking while driving is not a shortage of hands. It is a shortage of mental bandwidth.

Yet this discovery has made little impression either on public opinion or on the law. In the United Kingdom, for example, it is an offence to use a hand-held phone while driving but perfectly legal if the phone is used hands-free. We’re happy to acknowledge that we only have two hands but refuse to admit that we only have one brain.

Another study, by Strayer, David Sanbonmatsu and others, suggested that we are also poor judges of our ability to multitask. The subjects who reported doing a lot of multitasking were also the ones who performed poorly on tests of multitasking ability. They systematically overrated their ability to multitask and they displayed poor impulse control. In other words, wanting to multitask is a good sign that you should not be multitasking.

We may not immediately realise how multitasking is hampering us. The first time I took to Twitter to comment on a public event was during a televised prime-ministerial debate in 2010. The sense of buzz was fun; I could watch the candidates argue and the twitterati respond, compose my own 140-character profundities and see them being shared. I felt fully engaged with everything that was happening. Yet at the end of the debate I realised, to my surprise, that I couldn’t remember anything that Brown, Cameron and Clegg had said.

A study conducted at UCLA in 2006 suggests that my experience is not unusual. Three psychologists, Karin Foerde, Barbara Knowlton and Russell Poldrack, recruited students to look at a series of flashcards with symbols on them, and then to make predictions based on patterns they had recognised. Some of these prediction tasks were done in a multitasking environment, where the students also had to listen to low- and high-pitched tones and count the high-pitched ones. You might think that making predictions while also counting beeps was too much for the students to handle. It wasn’t. They were equally competent at spotting patterns with or without the note-counting task.

But here’s the catch: when the researchers then followed up by asking more abstract questions about the patterns, the cognitive cost of the multitasking became clear. The students struggled to answer questions about the predictions they’d made in the multitasking environment. They had successfully juggled both tasks in the moment — but they hadn’t learnt anything that they could apply in a different context.

That’s an unnerving discovery. When we are sending email in the middle of a tedious meeting, we may nevertheless feel that we’re taking in what is being said. A student may be confident that neither Snapchat nor the live football is preventing them taking in their revision notes. But the UCLA findings suggest that this feeling of understanding may be an illusion and that, later, we’ll find ourselves unable to remember much, or to apply our knowledge flexibly. So, multitasking can make us forgetful — one more way in which multitaskers are a little bit like drunks.

Early multitaskers

All this is unnerving, given that the modern world makes multitasking almost inescapable. But perhaps we shouldn’t worry too much. Long before multitasking became ubiquitous, it had a long and distinguished history.

In 1958, a young psychologist named Bernice Eiduson embarked on a long-term research project — so long-term, in fact, that Eiduson died before it was completed. Eiduson studied the working methods of 40 scientists, all men. She interviewed them periodically over two decades and put them through various psychological tests. Some of these scientists found their careers fizzling out, while others went on to great success. Four won Nobel Prizes and two others were widely regarded as serious Nobel contenders. Several more were invited to join the National Academy of Sciences.

After Eiduson died, some of her colleagues published an analysis of her work. These colleagues, Robert Root-Bernstein, Maurine Bernstein and Helen Garnier, wanted to understand what determined whether a scientist would have a long productive career, a combination of genius and longevity.

There was no clue in the interviews or the psychological tests. But looking at the early publication record of these scientists — their first 100 published research papers — the researchers discovered a pattern: the top scientists were constantly changing the focus of their research.

Over the course of these first 100 papers, the most productive scientists covered five different research areas and moved from one of these topics to another an average of 43 times. They would publish, and change the subject, publish again, and change the subject again. Since most scientific research takes an extended period of time, the subjects must have overlapped. The secret to a long and highly productive scientific career? It’s multitasking.

Charles Darwin thrived on spinning multiple plates. He began his first notebook on “transmutation of species” two decades before The Origin of Species was published. His A Biographical Sketch of an Infant was based on notes made after his son William was born; William was 37 by the time Darwin published it. Darwin spent nearly 20 years working on climbing and insectivorous plants. And Darwin published a learned book on earthworms in 1881, just before his death. He had been working on it for 44 years. When two psychologists, Howard Gruber and Sara Davis, studied Darwin and other celebrated artists and scientists, they concluded that such overlapping interests were common.

Another team of psychologists, led by Mihaly Csikszentmihalyi, interviewed almost 100 exceptionally creative people, from the jazz pianist Oscar Peterson to the science writer Stephen Jay Gould to the double Nobel laureate physicist John Bardeen. Csikszentmihalyi is famous for developing the idea of “flow”, the blissful state of being so absorbed in a challenge that one loses track of time and sets all distractions to one side. Yet every one of Csikszentmihalyi’s interviewees made a practice of keeping several projects bubbling away simultaneously.

Just internet addiction?

If the word “multitasking” can apply to both Darwin and a teenager with a serious Instagram habit, there is probably some benefit in defining our terms. There are at least four different things we might mean when we talk about multitasking. One is genuine multitasking: patting your head while rubbing your stomach; playing the piano and singing; farting while chewing gum. Genuine multitasking is possible, but at least one of the tasks needs to be so practised as to be done without thinking.

Then there’s the challenge of creating a presentation for your boss while also fielding phone calls for your boss and keeping an eye on email in case your boss wants you. This isn’t multitasking in the same sense. A better term is task switching, as our attention flits between the presentation, the telephone and the inbox. A great deal of what we call multitasking is in fact rapid task switching.

Task switching is often confused with a third, quite different activity — the guilty pleasure of disappearing down an unending click-hole of celebrity gossip and social media updates. There is a difference between the person who reads half a page of a journal article, then stops to write some notes about a possible future project, then goes back to the article — and someone who reads half a page of a journal article before clicking on bikini pictures for the rest of the morning. “What we’re often calling multitasking is in fact internet addiction,” says Shelley Carson, a psychologist and author of Your Creative Brain. “It’s a compulsive act, not an act of multitasking.”

A final kind of multitasking isn’t a way of getting things done but simply the condition of having a lot of things to do. The car needs to be taken in for a service. Your tooth is hurting. The nanny can’t pick up the kids from school today. There’s a big sales meeting to prepare for tomorrow, and your tax return is due next week. There are so many things that have to be done, so many responsibilities to attend to. Having a lot of things to do is not the same as doing them all at once. It’s just life. And it is not necessarily a stumbling block to getting things done — as Bernice Eiduson discovered as she tracked scientists on their way to their Nobel Prizes.

The fight for focus

These four practices — multitasking, task switching, getting distracted and managing multiple projects — all fit under the label “multitasking”. This is not just because of a simple linguistic confusion. The versatile networked devices we use tend to blur the distinction, serving us as we move from task to task while also offering an unlimited buffet of distractions. But the different kinds of multitasking are linked in other ways too. In particular, the highly productive practice of having multiple projects invites the less-than-productive habit of rapid task switching.

To see why, consider a story that psychologists like to tell about a restaurant near Berlin University in the 1920s. (It is retold in Willpower, a book by Roy Baumeister and John Tierney.) The story has it that when a large group of academics descended upon the restaurant, the waiter stood and calmly nodded as each new item was added to their complicated order. He wrote nothing down, but when he returned with the food his memory had been flawless. The academics left, still talking about the prodigious feat; but when one of them hurried back to retrieve something he’d left behind, the waiter had no recollection of him. How could the waiter have suddenly become so absent-minded? “Very simple,” he said. “When the order has been completed, I forget it.”

One member of the Berlin school was a young experimental psychologist named Bluma Zeigarnik. Intrigued, she demonstrated that people have a better recollection of uncompleted tasks. This is called the “Zeigarnik effect”: when we leave things unfinished, we can’t quite let go of them mentally. Our subconscious keeps reminding us that the task needs attention.

The Zeigarnik effect may explain the connection between facing multiple responsibilities and indulging in rapid task switching. We flit from task to task to task because we can’t forget about all of the things that we haven’t yet finished. We flit from task to task to task because we’re trying to get the nagging voices in our head to shut up.

Of course, there is much to be said for “focus”. But there is much to be said for copperplate handwriting, too, and for having a butler. The world has moved on. There’s something appealing about the Hemingwrite and the hotel room that will make the internet go away, but also something futile.

It is probably not true that Facebook is all that stands between you and literary greatness. And in most office environments, the Hemingwrite is not the tool that will win you promotion. You are not Ernest Hemingway, and you do not get to simply ignore emails from your colleagues.

If focus is going to have a chance, it’s going to have to fight an asymmetric war. Focus can only survive if it can reach an accommodation with the demands of a multitasking world.

Loops and lists

The word “multitasking” wasn’t applied to humans until the 1990s, but it has been used to describe computers for half a century. According to the Oxford English Dictionary, it was first used in print in 1966, when the magazine Datamation described a computer capable of appearing to perform several operations at the same time.

Just as with humans, computers typically create the illusion of multitasking by switching tasks rapidly. Computers perform the switching more quickly, of course, and they don’t take 20 minutes to get back on track after an interruption.

Nor does a computer fret about what is not being done. While rotating a polygon and sending text to the printer, it feels no guilt that the mouse has been left unchecked for the past 16 milliseconds. The mouse’s time will come. Being a computer means never having to worry about the Zeigarnik effect.
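
To make the analogy concrete, here is a minimal, purely illustrative sketch of round-robin task switching in Python. The task names echo the examples above; none of this is taken from a real operating system, which does the same thing far faster and with far more bookkeeping.

```python
# Illustrative sketch: a computer "multitasks" by giving each job a brief
# slice of attention in turn, dropping it without guilt once it is finished.
from collections import deque

def rotate_polygon():
    for step in range(3):
        yield f"rotating polygon, step {step}"

def send_to_printer():
    for page in range(2):
        yield f"sending page {page} to the printer"

def check_mouse():
    for tick in range(3):
        yield f"polling the mouse, tick {tick}"

def round_robin(tasks):
    """Run each task one small step at a time, cycling through the queue."""
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            print(next(task))   # do one slice of work on this task
            queue.append(task)  # put it back at the end of the queue
        except StopIteration:
            pass                # the task is complete; forget it entirely

round_robin([rotate_polygon(), send_to_printer(), check_mouse()])
```

Each generator gets one time slice per pass through the queue, which is all a single processor core is really doing when it appears to do everything at once.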

Is there a lesson in this for distractible sacks of flesh like you and me? How can we keep a sense of control despite the incessant guilt of all the things we haven’t finished?

“Whenever you say to someone, ‘I’ll get back to you about that’, you just opened a loop in your brain,” says David Allen. Allen is the author of a cult productivity book called Getting Things Done. “That loop will keep spinning until you put a placeholder in a system you can trust.”

Modern life is always inviting us to open more of those loops. It isn’t necessarily that we have more work to do, but that we have more kinds of work that we ought to be doing at any given moment. Tasks now bleed into each other unforgivingly. Whatever we’re doing, we can’t escape the sense that perhaps we should be doing something else. It’s these overlapping possibilities that take the mental toll.

The principle behind Getting Things Done is simple: close the open loops. The details can become rather involved but the method is straightforward. For every single commitment you’ve made to yourself or to someone else, write down the very next thing you plan to do. Review your lists of next actions frequently enough to give you confidence that you won’t miss anything.

This method has a cult following, and practical experience suggests that many people find it enormously helpful — including me (see below). Only recently, however, did the psychologists E J Masicampo and Roy Baumeister find some academic evidence to explain why people find relief by using David Allen’s system. Masicampo and Baumeister found that you don’t need to complete a task to banish the Zeigarnik effect. Making a specific plan will do just as well. Write down your next action and you quiet that nagging voice at the back of your head. You are outsourcing your anxiety to a piece of paper.

A creative edge?

It is probably a wise idea to leave rapid task switching to the computers. Yet even frenetic flipping between Facebook, email and a document can have some benefits alongside the costs.

The psychologist Shelley Carson and her student Justin Moore recently recruited experimental subjects for a test of rapid task switching. Each subject was given a pair of tasks to do: crack a set of anagrams and read an article from an academic journal. These tasks were presented on a computer screen, and for half of the subjects they were presented sequentially — first solve the anagrams, then read the article. For the other half of the experimental group, the computer switched every two-and-a-half minutes between the anagrams and the journal article, forcing the subjects to change mental gears many times.

Unsurprisingly, task switching slowed the subjects down and scrambled their thinking. They solved fewer anagrams and performed poorly on a test of reading comprehension when forced to refocus every 150 seconds.

But the multitasking treatment did have a benefit. Subjects who had been task switching became more creative. To be specific, their scores on tests of “divergent” thinking improved. Such tests ask subjects to pour out multiple answers to odd questions. They might be asked to think of as many uses as possible for a rolling pin or to list all the consequences they could summon to mind of a world where everyone has three arms. Involuntary multitaskers produced a greater volume and variety of answers, and their answers were more original too.

“It seems that switching back and forth between tasks primed people for creativity,” says Carson, who is an adjunct professor at Harvard. The results of her work with Moore have not yet been published, and one might reasonably object that such tasks are trivial measures of creativity. Carson responds that scores on these laboratory tests of divergent thinking are correlated with substantial creative achievements such as publishing a novel, producing a professional stage show or creating an award-winning piece of visual art. Those who insist that great work can only be achieved through superhuman focus should think long and hard about this discovery.

Carson and colleagues have found an association between significant creative achievement and a trait psychologists term “low latent inhibition”. Latent inhibition is the filter that all mammals have that allows them to tune out apparently irrelevant stimuli. It would be crippling to listen to every conversation in the open-plan office and the hum of the air conditioning, while counting the number of people who walk past the office window. Latent inhibition is what saves us from having to do so. These subconscious filters let us walk through the world without being overwhelmed by all the different stimuli it hurls at us.

And yet people whose filters are a little bit porous have a big creative edge. Think on that, uni-taskers: while you busily try to focus on one thing at a time, the people who struggle to filter out the buzz of the world are being reviewed in The New Yorker.

“You’re letting more information into your cognitive workspace, and that information can be consciously or unconsciously combined,” says Carson. Two other psychologists, Holly White and Priti Shah, found a similar pattern for people suffering from attention deficit hyperactivity disorder (ADHD).

It would be wrong to romanticise potentially disabling conditions such as ADHD. All these studies were conducted on university students, people who had already demonstrated an ability to function well. But their conditions weren’t necessarily trivial — to participate in the White/Shah experiment, students had to have a clinical diagnosis of ADHD, meaning that their condition was troubling enough to prompt them to seek professional help.

It’s surprising to discover that being forced to switch tasks can make us more creative. It may be still more surprising to realise that in an age where we live under the threat of constant distraction, people who are particularly prone to being distracted are flourishing creatively.

Perhaps we shouldn’t be entirely surprised. It’s easier to think outside the box if the box is full of holes. And it’s also easier to think outside the box if you spend a lot of time clambering between different boxes. “The act of switching back and forth can grease the wheels of thought,” says John Kounios, a professor of psychology at Drexel University.

Kounios, who is co-author of The Eureka Factor, suggests that there are at least two other potentially creative mechanisms at play when we switch between tasks. One is that the new task can help us forget bad ideas. When solving a creative problem, it’s easy to become stuck because we think of an incorrect solution but simply can’t stop returning to it. Doing something totally new induces “fixation forgetting”, leaving us free to find the right answer.

Another is “opportunistic assimilation”. This is when the new task prompts us to think of a solution to the old one. The original Eureka moment is an example.

As the story has it, Archimedes was struggling with the task of determining whether a golden wreath truly was made of pure gold without damaging the ornate treasure. The solution was to determine whether the wreath had the same volume as a pure gold ingot with the same mass; this, in turn, could be done by submerging both the wreath and the ingot to see whether they displaced the same volume of water.
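
The logic of the test can be sketched in a few lines. The numbers below are invented for illustration (pure gold is roughly 19.3 grams per cubic centimetre; the “suspect alloy” figure is simply an assumption), but the reasoning is the one just described: the masses are equal, so if the wreath displaces more water than the ingot, it must be less dense than pure gold.

```python
# Hypothetical figures illustrating Archimedes' displacement test.
mass_g = 1000.0                 # the wreath and the ingot weigh the same
density_gold = 19.3             # g per cubic cm, pure gold
density_suspect_alloy = 16.0    # assumed: gold cut with a lighter metal

ingot_volume = mass_g / density_gold            # water displaced by the ingot
wreath_volume = mass_g / density_suspect_alloy  # water displaced by the wreath

print(f"Ingot displaces  {ingot_volume:.1f} cubic cm")
print(f"Wreath displaces {wreath_volume:.1f} cubic cm")
print("Adulterated!" if wreath_volume > ingot_volume else "Pure gold.")
```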

This insight, we are told, occurred to Archimedes while he was having a bath and watching the water level rise and fall as he lifted himself in and out. And if solving such a problem while having a bath isn’t multitasking, then what is?

Tim Harford is an FT columnist. His latest book is ‘The Undercover Economist Strikes Back’. Twitter: @TimHarford

Six ways to be a master of multitasking

1. Be mindful

“The ideal situation is to be able to multitask when multitasking is appropriate, and focus when focusing is important,” says psychologist Shelley Carson. Tom Chatfield, author of Live This Book, suggests making two lists, one for activities best done with internet access and one for activities best done offline. Connecting and disconnecting from the internet should be deliberate acts.

2. Write it down

The essence of David Allen’s Getting Things Done is to turn every vague guilty thought into a specific action, to write down all of the actions and to review them regularly. The point, says Allen, is to feel relaxed about what you’re doing — and about what you’ve decided not to do right now — confident that nothing will fall through the cracks.

3. Tame your smartphone

The smartphone is a great servant and a harsh master. Disable needless notifications — most people don’t need to know about incoming tweets and emails. Set up a filing system within your email so that when a message arrives that requires a proper keyboard to answer — ie 50 words or more — you can move that email out of your inbox and place it in a folder where it will be waiting for you when you fire up your computer.

4. Focus in short sprints

The “Pomodoro Technique” — named after a tomato-shaped kitchen timer — alternates focusing for 25 minutes and breaking for five minutes, across two-hour sessions. Productivity guru Merlin Mann suggests an “email dash”, where you scan email and deal with urgent matters for a few minutes each hour. Such ideas let you focus intensely while also switching between projects several times a day.

5. Procrastinate to win

If you have several interesting projects on the go, you can procrastinate over one by working on another. (It worked for Charles Darwin.) A change is as good as a rest, they say — and as psychologist John Kounios explains, such task switching can also unlock new ideas.

6. Cross-fertilise

“Creative ideas come to people who are interdisciplinary, working across different organisational units or across many projects,” says author and research psychologist Keith Sawyer. (Appropriately, Sawyer is also a jazz pianist, a former management consultant and a sometime game designer for Atari.) Good ideas often come when your mind makes unexpected connections between different fields.

Tim Harford’s To-Do Lists

David Allen’s Getting Things Done system — or GTD — has reached the status of a religion among some productivity geeks. At its heart, it’s just a fancy to-do list, but it’s more powerful than a regular list because it’s comprehensive, specific and designed to prompt you when you need prompting. Here’s how I make the idea work for me.

Write everything down. I use Google Calendar for appointments and an electronic to-do list called Remember the Milk, plus an ad hoc daily list on paper. The details don’t matter. The principle is never to carry a mental commitment around in your head.

Make the list comprehensive. Mine currently has 151 items on it. (No, I don’t memorise the number. I just counted.)

Keep the list fresh. The system works its anxiety-reducing magic best if you trust your calendar and to-do list to remind you when you need reminding. I spend about 20 minutes once a week reviewing the list to note incoming deadlines and make sure the list is neither missing important commitments nor cluttered with stale projects. Review is vital — the more you trust your list, the more you use it. The more you use it, the more you trust it.

List by context as well as topic. It’s natural to list tasks by topic or project — everything associated with renovating the spare room, for instance, or next year’s annual away-day. I also list them by context (this is easy on an electronic list). Things I can do when on a plane; things I can only do when at the shops; things I need to talk about when I next see my boss.
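
For what it’s worth, the idea is easy to mock up. The sketch below is hypothetical (the tasks, topics and contexts are invented, and this is not how Remember the Milk works internally), but it shows the point: tag each task with both a topic and a context, and the same list can be sliced either way.

```python
# Hypothetical sketch: one list, filterable by topic or by context.
tasks = [
    {"action": "Order paint samples",          "topic": "spare room", "context": "online"},
    {"action": "Book the venue",               "topic": "away-day",   "context": "phone"},
    {"action": "Ask about next year's budget", "topic": "away-day",   "context": "boss"},
    {"action": "Draft the agenda",             "topic": "away-day",   "context": "plane"},
]

def by_context(tasks, context):
    """Return the actions that can be done in a given context."""
    return [t["action"] for t in tasks if t["context"] == context]

print(by_context(tasks, "boss"))   # ["Ask about next year's budget"]
```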

Be specific about the next action. If you’re just writing down vague reminders, the to-do list will continue to provoke anxiety. Before you write down an ill-formed task, take the 15 seconds required to think about exactly what that task is.

Written for and first published at ft.com.


The myth of the robot job-ocalypse

“The number of jobs lost to more efficient machines is only part of the problem . . . In the past, new industries hired far more people than those they put out of business. But this is not true of many of today’s new industries.”
This sentiment, from Time magazine, dates from the early weeks of John Kennedy’s presidency. Yet it would slot nicely into many a contemporary political speech. Like any self-respecting remorseless killer robot from the future, our techno-anxiety just keeps coming back.
Arnold Schwarzenegger’s Terminator was science fiction — but so, too, is the idea that robots and software algorithms are guzzling jobs faster than they can be created. There is an astonishing mismatch between our fear of automation and the reality so far.
How can this be? The highways of Silicon Valley are sprinkled with self-driving cars. Visit the cinema, the supermarket or the bank and the most prominent staff you will see are the security guards, who are presumably there to prevent you stealing valuable machines. Your computer once contented itself with correcting your spelling; now it will translate your prose into Mandarin. Given all this, surely the robots must have stolen a job or two by now?
Of course, the answer is that automation has been destroying particular jobs in particular industries for a long time, which is why most westerners who weave clothes or cultivate and harvest crops by hand do so for fun. In the past that process made us richer.
The worry now is that computers are making jobs redundant faster than we can generate new ones, and that the result will be widespread unemployment, leaving a privileged class of robot-owning rentiers and highly paid workers with robot-compatible skills.
This idea is superficially plausible: we are surrounded by cheap, powerful computers; many people have lost their jobs in the past decade; and inequality has risen in the past 30 years.
But the theory can be put to a very simple test: how fast is productivity growing? The usual measure of productivity is output per hour worked — by a human. Robots can produce economic output without any hours of human labour at all, so a sudden onslaught of robot workers should cause a sudden acceleration in productivity.
Instead, productivity has been disappointing. In the US, labour productivity growth averaged an impressive 2.8 per cent per year from 1948 to 1973. The result was mass affluence rather than mass joblessness. Productivity then slumped for a generation and perked up in the late 1990s but has now sagged again. The picture is little better in the UK, where labour productivity is notoriously low compared with the other G7 leading economies, and it has been falling further behind since 2007.
Taking a 40-year perspective, the impact of this long productivity malaise on typical workers in rich countries is greater than that of the rise in inequality, or of the financial crisis of 2008. In an age peppered with economic disappointments, the worst has been the stubborn failure of the robots to take our jobs. Then why is so much commentary dedicated to the opposite view? Some of this is a simple error: it has been a tough decade, economically speaking, and it is easy to blame robots for woes that should be laid at the door of others, such as bankers, austerity enthusiasts and eurozone politicians.
It is also true that robotics is making impressive strides. Gill Pratt, a robotics expert, recently described a “Cambrian explosion” for robotics in the Journal of Economic Perspectives. While robots have done little to cause mass unemployment in the recent past, that may change in future.
Automation has also undoubtedly changed the shape of the job market — economist David Autor, writing in the same journal, documents a rise in demand for low-skilled jobs and highly skilled jobs, and a hollowing out of jobs in the middle. There are signs that the hollow is moving further and further up the spectrum of skills. The robots may not be taking our jobs, but they are certainly shuffling them around.
Yet Mr Autor also points to a striking statistic: private investment in computers and software in the US has been falling almost continuously for 15 years. That is hard to square with the story of a robotic job-ocalypse. Surely we should expect to see a surge in IT investment as all those machines are installed?
Instead, in the wake of the great recession, managers have noted an ample supply of cheap human labour and have done without the machines for now. Perhaps there is some vast underground dormitory somewhere, all steel and sparks and dormant androids. In a corner, a chromium-plated robo-hack is tapping away at a column lamenting the fact that the humans have taken all the robots’ jobs.

24th of August, 2015

George Osborne’s Magic Has Us Fooled, For Now

The chancellor can alter the law but cannot make costly workers worth hiring, says Tim Harford

He has mastered the art of misdirection as well as any stage magician. Everyone knew George Osborne was going to butcher the tax credit system on Wednesday, more or less halving the income at which they begin to fall away. But few expected him to announce a much higher minimum wage, and he did it with such an extravagant flourish that no one clearly remembers seeing him wield the cleaver.
For most of the poorer working households who qualify for tax credits, the combined effect of Mr Osborne’s Budget will be to make them worse off financially, and to push them away from the labour force by raising the effective rate of tax they pay.
Monique Ebell of the National Institute of Economic and Social Research reckons that a single mother working 30 hours a week at the minimum wage will be more than £1,000 a year worse off in two years’ time than she is today, despite the increase in the wage she must legally be paid.
That assumes, of course, that she keeps her job at all. This is the big question about the minimum wage: will it increase the earnings of low-paid workers, or price them out of the job market entirely? Should we expect to see these workers laid off and replaced with one-touch espresso machines, automatic checkouts and call-centre workers from India? The minimum wage is a delicate balance, and Mr Osborne has put his thumb on the scale.
The chancellor’s aim is to raise the minimum wage for those over 25 beyond £9 by 2020, from £6.50 today. That is dramatic, although not quite as dramatic as it first seems. Mr Osborne is setting the minimum wage where it might be if the economic crisis of 2008, and the long stagnation that followed, had never happened. He is hoping that employment will not suffer. He has a few other countries to look to as a precedent. France is one example, and it is not encouraging. Australia is a more hopeful case.
Mr Osborne’s move would once have been unthinkable from a Conservative chancellor. A quarter of a century ago, the conventional wisdom was that the idea of a minimum wage was absurd at any level. The logic of that position was simple enough. If the minimum wage was below the market-clearing wage — at which employees want to work the same number of hours that businesses want to hire them for — it would be irrelevant; if it was above, it would be worse than useless. Productive workers do not need a minimum wage because they will anyway be well paid. Less productive workers will be harmed by a minimum wage because employers would rather sack them than pay more than they are worth. One does not simply repeal the laws of supply and demand.
The world has moved on since then, and we know that while supply and demand matter, there is more to the labour market than the simple story above.
Some employers have market power and could pay higher wages if they were forced to; the higher minimum wage may simply redistribute from employers to low-paid employees. Another possibility is that if forced to pay higher wages, employers will invest in training and equipment to justify the labour expense. On this view, wages do not need to follow productivity; productivity can be led by wages.
A third explanation is that since many low wage jobs are in non-traded sectors such as retail, employers will simply put up prices, spreading the burden of the higher minimum wage across all consumers, and possibly reducing inequality.
There is also the argument that higher wages can encourage workers to show up more often and smile at the customers. This is true, but in most cases managers will have reached that conclusion by themselves without the need for a legal minimum.
A large body of empirical evidence suggests either that reasonable minimum wages do not destroy jobs at all, or that they do not destroy very many. The evidence is, of course, mixed and contested.
Much of it comes from the US and concerns the experience of teenagers, who, in the words of Alan Manning of the London School of Economics, “represent about 2 per cent of hours worked and 98 per cent of the studies of the minimum wage”. But it is clear enough that if modest increases in the minimum wage were disastrous for jobs, we would know that by now.
Whether the chancellor’s wage rise counts as “modest” is far more questionable. Professor Manning is guardedly optimistic: he thinks that the bold increase in the minimum wage is worth a try. But he is nervous, and so am I. We are at the edge of what the data can tell us. Mr Osborne is about to provide a fascinating new case study.
The best scenario is that the minimum wage helps to drive up British productivity, which has long languished. Employers invest in training, and rather than replacing workers with machines they give them the latest tools to do their jobs.
To the extent that productivity does not rise, employers absorb the costs or pass them on to consumers, equitably bearing the burden of giving hard-working people a decent wage.
A gloomier scenario seems more probable for some sectors, especially social care. The law of supply and demand turns out to matter after all. Faced with a sharp increase in the minimum wage that runs well ahead of what the Low Pay Commission has felt able to endorse, employers lay off many workers and reduce the hours of others. The welfare bill rises and — as so often in the past — it proves much harder to create jobs than to destroy them.
My own bet is somewhere in the middle. We will discover that Mr Osborne has pushed too hard, and that the minimum wage must be allowed to slip back again relative to median earnings. Some jobs will be lost, a lesson will be learned, and Mr Osborne’s political purposes will have been served. He will be hoping to have upgraded his own job to that of prime minister by then, which may be appropriate: he is a masterful politician but has never shown much grasp of economics.

Written for and first published at FT.com

13th of July, 2015

George Osborne’s gamble with jobs

My response to the Summer Budget went up on the FT website yesterday:

The sharp hike in the minimum wage in the Budget was a shock, but it was true to form for the UK chancellor of the exchequer: clever politics and dubious economics. It is telling that, where the Low Pay Commission used to consider the evidence and carefully balance the risks and rewards of a higher minimum wage, it must now recommend whatever George Osborne tells it to recommend.

The risk is clear: forced to pay up to £9 an hour, many businesses will find that they would rather find other ways to conduct their affairs — buying robots, offshoring key functions or moving overseas entirely. Bankruptcy is, of course, another option.


Mr Osborne’s gamble is that some businesses will simply eat the cost of higher wages (unlikely), or train their workers better and give them better tools so that the higher wages can be justified by higher productivity. It is possible that this will work. It is enormously risky, and if the move is the wrong one it will be hard to reverse. The lesson of the 1980s is that, once lost, jobs are not easy to find again.

One might ask why the chancellor is willing to take such risks and to order the Low Pay Commission to do his bidding rather than be guided by evidence. The answer is not hard to find: Mr Osborne needs political cover. He is hacking away at the welfare state, notably the system of tax credits that was designed to encourage people to work rather than stay at home.

One can only guess what Milton Friedman, one of the inspirations behind the Thatcherite revolution, would have made of all this. In place of a carefully designed system of incentives for people to go to work, we are to be offered a wage increase set by a politician’s whim. Friedman knew that, even in the complex market for jobs, one does not simply abolish the laws of supply and demand.

Mr Osborne promised a Budget for working people but reality does not match that sound bite. The biggest tax break was for people inheriting expensive homes from their parents; and, while benefits for the working poor were being squeezed, those for pensioners were — as always — protected. Those who hoped for radical and logical tax reform have been bitterly disappointed.

As for working people, many will thank the chancellor as their wages rise. Others will become unaffordable and will lose their jobs. No doubt they will be scapegoated as scroungers in some future Budget speech. It is possible that Mr Osborne’s gamble will pay off. It is even possible, although unlikely, that it will pay off spectacularly. But it is reckless, and it is not his job that is on the line.

9th of July, 2015

Paying to Get Inside A Restaurant

Me, writing in May’s edition of The Atlantic:

The next time you’re fortunate enough to have dinner at a high-end restaurant, take a moment to enjoy not only the food and wine, but the frisson of a really good puzzle: Why do restaurants price things the way they do?

The markup on food makes sense. It takes time and skill to prepare the perfect cold-smoked salmon with balsamic-vinegar sorbet. But why are the wine prices so inflated? How hard can it be to pop open a bottle? Meanwhile, restroom access is free and unlimited for customers—a curious cross-subsidy.

Most mysterious of all: When reservations at hot new restaurants are so sought-after, why are they simply given away?

Why indeed? The full article is here and free to read online.

23rd of April, 2015

‘The Organised Mind’, by Daniel J Levitin; ‘The Life-Changing Magic of Tidying’, by Marie Kondo

Two very different books claim they can help us to cope with the excess of information, tasks and belongings

The challenge of “getting organised” is both commonplace and oddly particular. Organising my study is a different problem from organising a library, or organising a drinking session in a brewery. Organising things can be practical to the point of banality (what’s the best way to store socks?) but it can also be philosophically deep.

Consider the Celestial Empire of Benevolent Knowledge, a Chinese encyclopedia conjured up in an essay by Jorge Luis Borges. The oriental tome organises animals into categories thus: “(a) belonging to the emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies.”

Daniel Levitin is a professor of psychology and behavioural neuroscience; Marie Kondo is a professional cleaner. Both have written books that promise to help us get organised, although they are quite different. Kondo’s is a brief and bracing practical guide to tidying up your home. Levitin’s sprawling discussion aspires to provide a comprehensive account of the way we think about organising everything from possessions to friends. Levitin offers enough psychological detail to suggest why our mental categories do not match Borges’s encyclopedia. Yet he also frequently dabbles in the dispensing of practical tips. He has studied not only psychology but “HSPs” (Highly Successful Persons) and with his advice, perhaps you can be an HSP too.

Both books agree that everything should have a place and be put back in that place. Levitin expands on that, explaining that our memory for locations is well developed; hence, the well-known trick of committing a list of things to memory by assigning each thing a place in an imaginary physical space, a “memory palace”.

Yet there is a big difference in approach here. Levitin seems to approve of devising organisational systems for a complex world; Kondo believes that before we organise, we must first simplify. Levitin describes a system in which letters are printed in triplicate to allow filing chronologically, by topic and by correspondent. Kondo suggests that we could do without most of our paperwork: even love letters become a drag if you hoard every one.

Levitin’s approach to computers illustrates this disagreement. He makes the familiar argument that multitasking is a distracting problem. His solution is to buy several computers, each with a single purpose: one is for work, one for domestic tasks, one for social networking, one for reading books and so on. He is apologetic about the cost of this, but one wonders if even a millionaire could make the system work.

First, all these devices use the internet, and the internet is inherently multifunctional. Second, these categories are more porous than they seem: how to categorise an email from a work colleague inviting you to a party? Third, it’s not clear at all that maintaining several distinct email accounts helps anyone process email. Many productivity gurus advise pooling your emails as they come in, and not being too fussy about categorising them on the way out. Simple beats well-organised here, and complex filing systems aren’t worth the hassle. As Borges understood very well, they are inevitably a poor fit for the rich mess of reality.

Kondo declares that too much organisation is a trap. Instead of getting rid of things we don’t need, we buy stuff to organise the stuff we already have. None of this really helps, she claims. Once you get too clever about storage you lose track of what you own, and that way lies bloat and frustration.

Levitin’s approach to organisation seems more rational. He is always willing to throw in some talk about dopamine, the prefrontal cortex or inhibitory neurotransmitters. Kondo, eccentrically spiritual, believes in saying thank you to old socks before discarding them. Pragmatically speaking, though, Kondo is both more radical and more persuasive. To the extent that Levitin offers sensible practical advice, it is well-worn (why not carry a notebook and write down ideas and tasks so you don’t strain to remember them?). And when he admiringly describes the way that Joni Mitchell has a custom drawer for Scotch tape, another for masking tape, a third for string and rope, a fourth for mailing and packing products, and so on, one can only imagine Kondo trying to slap both Mitchell and Levitin into sanity.

This predilection for comprehensiveness weighs on Levitin’s book in another way: it’s long, and while it does contain a great deal to intrigue, the parade of disparate topics makes it less than the sum of its parts. There are tens of thousands of words on topics that are perfectly interesting in isolation (Bayes’ rule; how to estimate quantities; how online dating works) and yet they do not seem to fit together.

Consider each thing, writes Kondo, and ask “Does it spark joy?” Only then should you keep it. Levitin might profitably have taken that advice.

The Organised Mind: Thinking Straight in the Age of Information Overload, by Daniel J Levitin, Viking, RRP£20/ Dutton RRP$27.95, 528 pages

The Life-Changing Magic of Tidying: A Simple, Effective Way to Banish Clutter Forever, by Marie Kondo, Vermilion, RRP£7.99/ Ten Speed Press, RRP$16.99, 256 pages

Written for and first published at ft.com.

24th of January, 2015

What if we abolished Christmas?

One possibility is that the economy would be just fine. This is the classical view of macroeconomics, writes Tim Harford

In 1939, Lew Hahn, the head of the Retail Dry Goods Association in the US, noticed something that gave him cause for concern: Thanksgiving would fall on November 30 that year, the latest possible date. Since it was thought poor form to start hawking Yuletide goodies before Thanksgiving was over, this would mean a brief Christmas spending season.

Hahn was concerned that consumers would spend less, damaging an already weak economy, to say nothing of the prosperity of the members of the Retail Dry Goods Association. And so he had a word with the secretary of commerce, Harry Hopkins, who had a word with President Franklin D Roosevelt, who had a word with the nation. He explained that as Thanksgiving was a federal holiday it was the president’s job to select the date — and he was choosing November 23 instead.

The move was controversial. Alfred Landon, the Republican who had been defeated by Roosevelt in the presidential election of 1936, compared FDR’s high-handedness to that of Adolf Hitler, thus beginning a hallowed tradition in US political commentary. For a couple of years, half the country celebrated on the old Thanksgiving date while the other half marked the new “Franksgiving” instead; a couple of states sat on the fence and made both days a holiday.

All this raises a deeper question: what are the macroeconomic consequences of Christmas? The answer depends on your politics. Economic conservatives, from Rick Santorum to Alf Landon to George Osborne, believe Christmas has little effect on the health of the economy; liberals, from Ed Balls to Franklin Roosevelt to Paul Krugman, believe Christmas is macroeconomically invaluable.

I should emphasise that I am making assumptions here. I have not approached any of these people to ask their opinions about Christmas. But the views I am speculating that they hold seem a logical extension of their views on government stimulus spending.

Allow me to explain. Imagine that this Christmas day, the Queen, the Pope and even Oprah Winfrey announced that Christmas would be a purely religious occasion from 2015 onwards. There would be no presents and no feasting. If people respected this declaration, about $75bn-$100bn of extra consumer spending in the US alone would simply not materialise next December. What then?

One possibility is that the economy would be just fine. This is the classical view of macroeconomics: nothing significant would change after the abolition of Christmas. We would retain the same labour force and the same skills, the same factories and the same power stations, the same financial sector and the same logistics networks. The capacity of the economy to produce goods and services would be undiminished, and after a period of adjustment, during which tinsel factories would be retooled and Christmas tree plantations replanted, all would be well.

What would replace nearly $100bn of seasonal consumer spending? Nothing noticeable, but the replacement would happen just the same. The productive capacity freed up by the disappearance of Christmas could be turned to other uses; prices would fall just enough to tempt us to spend our money at other times of the year. Indeed, cancelling Christmas might even provide a modest boost to our prosperity in the longer term, as bunching up all that spending into a few short weeks strains factories and supply chains. Smoothing out our spending would be more efficient.

This classical view of how the economy works is also the view taken by Mr Osborne, the UK chancellor, and by Republicans in the US. Their view is that government stimulus spending does not work; cut it back, they argue, and the economy would adjust as the private sector took up the slack.

On the other side of the debate stands Mr Balls, the UK’s shadow chancellor, as well as American stimulus proponents such as Mr Krugman and Lawrence Summers. Mr Krugman once commented that panic about an attack from aliens would help the economy because it would get the government spending money again. Since aliens are not available, Santa Claus will have to do.

This Keynesian view of how the economy works differs from the classical view in one crucial way: it argues that supply does not always and automatically create demand. When Christmas is abolished (or a financial crisis devastates people’s confidence and their spending power), consumers will plan to spend less. And if consumers plan to spend less, price adjustments may not induce them to change their minds; the price adjustments may not even happen. If Christmas spending disappears, it may take many years for the economy to replace it. Those factories will still be there and the workers will remain available — but they will stand idle.

Who is right? I should confess a bias. I am sceptical about the efficiency of many government spending programmes and of many Christmas purchase decisions. In both cases, too much attention is lavished on appearances and too little on what the recipient might truly want. In the long run, then, I should hope both for a smaller state and for a smaller Christmas.

But that is a matter for the ghost of Christmas yet to come. Despite my own biases, I have to acknowledge that this Christmas interest rates are still close to zero. Until that changes, the liberals will have the better of the argument. Stimulus spending remains effective, regardless of whether the stimulus comes from the Treasury — or from the North Pole.

Written for and first published at ft.com.

24th of December, 2014

Why pilot schemes help ideas take flight

There’s huge value in experiments that help us decide whether to go big or go home

Here’s a little puzzle. You’re offered the chance to participate in two high-risk business ventures. Each costs £11,000. Each will be worth £1m if all goes well. Each has just a 1 per cent chance of success. The mystery is that the ventures have very different expected pay-offs.

One of these opportunities is a poor investment: it costs £11,000 to get an expected payout of £10,000, which is 1 per cent of a million. Unless you take enormous pleasure in gambling, the venture makes no sense.

Strangely, the other opportunity, while still risky, is an excellent bet. With the same cost and the same chance of success, how could that be?

Here’s the subtle difference. This attractive alternative project has two stages. The first is a pilot, costing £1,000. The pilot has a 90 per cent chance of failing, which would end the whole project. If the pilot succeeds, scaling up will cost a further £10,000, and there will be a 10 per cent chance of a million-pound payday.

This two-stage structure changes everything. While the total cost is still £11,000 and the chance of success is still 1 per cent, the option to get out after a failed pilot is invaluable. Nine times out of 10, the pilot will save you from wasting £10,000 – which means that while the simple project offers an expected loss of £1,000, the two-stage project has an expected profit of £8,000.
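
For the sceptical reader, the arithmetic is easy to check. Here is a minimal sketch using exactly the figures from the example above:

```python
# Expected value of the two ventures described above.
payout = 1_000_000

# Simple venture: pay £11,000 up front for a 1% chance of £1m.
simple = -11_000 + 0.01 * payout                       # -£1,000

# Two-stage venture: a £1,000 pilot that fails 90% of the time;
# if it succeeds, spend a further £10,000 for a 10% chance of £1m.
two_stage = -1_000 + 0.1 * (-10_000 + 0.1 * payout)    # +£8,000

print(f"Simple venture:    £{simple:+,.0f}")
print(f"Two-stage venture: £{two_stage:+,.0f}")
```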

In a real project, nobody could ever be sure about the probability of success or its rewards. But the idea behind this example is very real: there’s huge value in experiments that help us decide whether to go big or go home.

We can see this effect in data from the venture capital industry. One study looked at companies backed by US venture capitalists (VCs) between 1986 and 1997, comparing them with a sample of companies chosen randomly to be the same age, size and from the same industry. (These results were published in this summer’s Journal of Economic Perspectives in an article titled “Entrepreneurship as Experimentation”.)

By 2007, only a quarter of the VC-backed firms had survived, while one-third of the comparison group was still in business. However, the surviving VC-backed firms were big successes, employing more than five times as many people as the surviving comparison firms. We can’t tell from this data whether the VCs are creating winners or merely spotting them in advance but we can see that big successes on an aggregate scale are entwined with a very high failure rate.

The option to conduct a cheap test run can be very valuable. It’s easy to lose sight of quite how valuable. Aza Raskin, who was lead designer for the Firefox browser, cites the late Paul MacCready as his inspiration on this point. MacCready was one of the great aeronautical engineers, and his most famous achievement was to build the Gossamer Condor and the Gossamer Albatross, human-powered planes that tore up the record books in the late 1970s.

One of MacCready’s key ideas was to develop a plane that could swiftly be rebuilt after a crash. Each test flight revealed fresh information, MacCready figured, but human-powered planes are so feather-light that each test flight also damages the plane. The most important thing a designer could do was to build a plane that could be rebuilt within days or even hours after a crash – rather than weeks or months. Once the problem of fast, cheap experimentation was solved, everything else followed.

Some professions have internalised this lesson. Architects use scale models to shed light on how a completed building might look and feel. A nicely made model can take days of work to complete but that is not much compared with the cost of the building itself.

Politicians don’t find it so easy. A new policy is hardly a new policy at all unless it can be unveiled in a blaze of glory, preferably as a well-timed surprise. That hardly suits the MacCready approach. Imagine the conference speech: “We’re announcing a new array of quick-and-dirty experiments with the welfare state. We’ll be iterating rapidly after each new blunder and heart-rending tabloid anecdote.”

A subtler problem is that projects need a certain scale before powerful decision makers will take them seriously.

“The transaction costs involved in setting up any aid project are so great that most donors don’t want to consider a project spending less than £20m,” says Owen Barder, director for Europe at the Center for Global Development, a think-tank. I suspect that the same insight applies far beyond the aid industry. Governments and large corporations can find it’s such a hassle to get anything up and running that the big stakeholders don’t want to be bothered with anything small.

That is a shame. The real leverage of a pilot scheme is that although it is cheap, it could have much larger consequences. The experiment itself may seem too small to bother with; the lesson it teaches is not.

Also published at ft.com.

21st of October, 2014

How to see into the future

Billions of dollars are spent on experts who claim they can forecast what’s around the corner, in business, finance and economics. Most of them get it wrong. Now a groundbreaking study has unlocked the secret: it IS possible to predict the future – and a new breed of ‘superforecasters’ knows how to do it

Irving Fisher was once the most famous economist in the world. Some would say he was the greatest economist who ever lived. “Anywhere from a decade to two generations ahead of his time,” opined the first Nobel laureate economist Ragnar Frisch, in the late 1940s, more than half a century after Fisher’s genius first lit up his subject. But while Fisher’s approach to economics is firmly embedded in the modern discipline, many of those who remember him now know just one thing about him: that two weeks before the great Wall Street crash of 1929, Fisher announced, “Stocks have reached what looks like a permanently high plateau.”

In the 1920s, Fisher had two great rivals. One was a British academic: John Maynard Keynes, a rising star and Fisher’s equal as an economic theorist and policy adviser. The other was a commercial competitor, an American like Fisher. Roger Babson was a serial entrepreneur with no serious academic credentials, inspired to sell economic forecasts by the banking crisis of 1907. As Babson and Fisher locked horns over the following quarter-century, they laid the foundations of the modern economic forecasting industry.

Fisher’s rivals fared better than he did. Babson foretold the crash and made a fortune, enough to endow the well-respected Babson College. Keynes was caught out by the crisis but recovered and became rich anyway. Fisher died in poverty, ruined by the failure of his forecasts.

If Fisher and Babson could see the modern forecasting industry, its scale, range and hyperactivity would astonish them. In his acerbic book The Fortune Sellers, former consultant William Sherden reckoned in 1998 that forecasting was a $200bn industry – $300bn in today’s terms – and the bulk of the money was being made in business, economic and financial forecasting.

It is true that forecasting now seems ubiquitous. Data analysts forecast demand for new products, or the impact of a discount or special offer; scenario planners (I used to be one) produce broad-based narratives with the aim of provoking fresh thinking; nowcasters look at Twitter or Google to track epidemics, actual or metaphorical, in real time; intelligence agencies look for clues about where the next geopolitical crisis will emerge; and banks, finance ministries, consultants and international agencies release regular prophecies covering dozens, even hundreds, of macroeconomic variables.

Real breakthroughs have been achieved in certain areas, especially where rich datasets have become available – for example, weather forecasting, online retailing and supply-chain management. Yet when it comes to the headline-grabbing business of geopolitical or macroeconomic forecasting, it is not clear that we are any better at the fundamental task that the industry claims to fulfil – seeing into the future.

So why is forecasting so difficult – and is there hope for improvement? And why did Babson and Keynes prosper while Fisher suffered? What did they understand that Fisher, for all his prodigious talents, did not?

In 1987, a young Canadian-born psychologist, Philip Tetlock, planted a time bomb under the forecasting industry that would not explode for 18 years. Tetlock had been trying to figure out what, if anything, the social sciences could contribute to the fundamental problem of the day, which was preventing a nuclear apocalypse. He soon found himself frustrated: frustrated by the fact that the leading political scientists, Sovietologists, historians and policy wonks took such contradictory positions about the state of the cold war; frustrated by their refusal to change their minds in the face of contradictory evidence; and frustrated by the many ways in which even failed forecasts could be justified. “I was nearly right but fortunately it was Gorbachev rather than some neo-Stalinist who took over the reins.” “I made the right mistake: far more dangerous to underestimate the Soviet threat than overestimate it.” Or, of course, the get-out for all failed stock market forecasts, “Only my timing was wrong.”

Tetlock’s response was patient, painstaking and quietly brilliant. He began to collect forecasts from almost 300 experts, eventually accumulating 27,500. The main focus was on politics and geopolitics, with a selection of questions from other areas such as economics thrown in. Tetlock sought clearly defined questions, enabling him with the benefit of hindsight to pronounce each forecast right or wrong. Then Tetlock simply waited while the results rolled in – for 18 years.

Tetlock published his conclusions in 2005, in a subtle and scholarly book, Expert Political Judgment. He found that his experts were terrible forecasters. This was true both in the simple sense that the forecasts failed to materialise and in the deeper sense that the experts had little idea of how confident they should be in making forecasts in different contexts. It is easier to make forecasts about the territorial integrity of Canada than about the territorial integrity of Syria but, beyond the most obvious cases, the experts Tetlock consulted failed to distinguish the Canadas from the Syrias.

Adding to the appeal of this tale of expert hubris, Tetlock found that the most famous experts fared somewhat worse than those outside the media spotlight. Other than that, the humiliation was evenly distributed. Regardless of political ideology, profession and academic training, experts failed to see into the future.

Most people, hearing about Tetlock’s research, simply conclude that either the world is too complex to forecast, or that experts are too stupid to forecast it, or both. Tetlock himself refused to embrace cynicism so easily. He wanted to leave open the possibility that even for these intractable human questions of macroeconomics and geopolitics, a forecasting approach might exist that would bear fruit.

. . .

In 2013, on the auspicious date of April 1, I received an email from Tetlock inviting me to join what he described as “a major new research programme funded in part by Intelligence Advanced Research Projects Activity, an agency within the US intelligence community.”

The core of the programme, which had been running since 2011, was a collection of quantifiable forecasts much like Tetlock’s long-running study. The forecasts would be of economic and geopolitical events, “real and pressing matters of the sort that concern the intelligence community – whether Greece will default, whether there will be a military strike on Iran, etc”. These forecasts took the form of a tournament with thousands of contestants; it is now at the start of its fourth and final annual season.

“You would simply log on to a website,” Tetlock’s email continued, “give your best judgment about matters you may be following anyway, and update that judgment if and when you feel it should be. When time passes and forecasts are judged, you could compare your results with those of others.”

I elected not to participate but 20,000 others have embraced the idea. Some could reasonably be described as having some professional standing, with experience in intelligence analysis, think-tanks or academia. Others are pure amateurs. Tetlock and two other psychologists, Don Moore and Barbara Mellers, have been running experiments with the co-operation of this army of volunteers. (Mellers and Tetlock are married.) Some were given training in how to turn knowledge about the world into a probabilistic forecast; some were assembled into teams; some were given information about other forecasts while others operated in isolation. The entire exercise was given the name of the Good Judgment Project, and the aim was to find better ways to see into the future.

The early years of the forecasting tournament have, wrote Tetlock, “already yielded exciting results”.

A first insight is that even brief training works: a 20-minute course about how to put a probability on a forecast, correcting for well-known biases, provides lasting improvements to performance. This might seem extraordinary – and the benefits were surprisingly large – but even experienced geopolitical seers tend to have expertise in a subject, such as Europe’s economies or Chinese foreign policy, rather than training in the task of forecasting itself.

“For people with the right talents or the right tactics, it is possible to see into the future after all”

A second insight is that teamwork helps. When the project assembled the most successful forecasters into teams who were able to discuss and argue, they produced better predictions.

But ultimately one might expect the same finding as always: that forecasting such events is basically impossible. Wrong. To connoisseurs of the frailties of futurology, the results of the Good Judgment Project are quite astonishing. Forecasting is possible, and some people – call them “superforecasters” – can predict geopolitical events with an accuracy far outstripping chance. The superforecasters have been able to sustain and even improve their performance.

The cynics were too hasty: for people with the right talents or the right tactics, it is possible to see into the future after all.

Roger Babson, Irving Fisher’s competitor, would always have claimed as much. A serial entrepreneur, Babson made his fortune selling economic forecasts alongside information about business conditions. In 1920, the Babson Statistical Organization had 12,000 subscribers and revenue of $1.35m – almost $16m in today’s money.

“After Babson, the forecaster was an instantly recognisable figure in American business,” writes Walter Friedman, the author of Fortune Tellers, a history of Babson, Fisher and other early economic forecasters. Babson certainly understood how to sell himself and his services. He advertised heavily and wrote prolifically. He gave a complimentary subscription to Thomas Edison, hoping for a celebrity endorsement. After contracting tuberculosis, Babson turned his management of the disease into an inspirational business story. He even employed stonecutters to carve inspirational slogans into large rocks in Massachusetts (the “Babson Boulders” are still there).

On September 5 1929, Babson made a speech at a business conference in Wellesley, Massachusetts. He predicted trouble: “Sooner or later a crash is coming which will take in the leading stocks and cause a decline of from 60 to 80 points in the Dow-Jones barometer.” This would have been a fall of around 20 per cent.

So famous had Babson become that his warning was briefly a self-fulfilling prophecy. When the news tickers of New York reported Babson’s comments at around 2pm, the markets erupted into what The New York Times described as “a storm of selling”. Shares lurched down by 3 per cent. This became known as the “Babson break”.

The next day, shares bounced back and Babson, for a few weeks, appeared ridiculous. On October 29, the great crash began, and within a fortnight the market had fallen almost 50 per cent. By then, Babson had an advertisement in the New York Times pointing out, reasonably, that “Babson clients were prepared”. Subway cars were decorated with the slogan, “Be Right with Babson”. For Babson, his forecasting triumph was a great opportunity to sell more subscriptions.

But his true skill was marketing, not forecasting. His key product, the “Babson chart”, looked scientific and was inspired by the discoveries of Isaac Newton, his idol. The Babson chart operated on the Newtonian assumption that any economic expansion would be matched by an equal and opposite contraction. But for all its apparent sophistication, the Babson chart offered a simple and usually contrarian message.

“Babson offered an up-arrow or a down-arrow. People loved that,” says Walter Friedman. Whether or not Babson’s forecasts were accurate was not a matter that seemed to concern many people. When he was right, he advertised the fact heavily. When he was wrong, few noticed. And Babson had indeed been wrong for many years during the long boom of the 1920s. People taking his advice would have missed out on lucrative opportunities to invest. That simply didn’t matter: his services were popular, and his most spectacularly successful prophecy was also his most famous.

Babson’s triumph suggests an important lesson: commercial success as a forecaster has little to do with whether you are any good at seeing into the future. No doubt it helped his case when his forecasts were correct but nobody gathered systematic information about how accurate he was. The Babson Statistical Organization compiled business and economic indicators that were, in all probability, of substantial value in their own right. Babson’s prognostications were the peacock’s plumage; their effect was simply to attract attention to the services his company provided.

. . .

When Barbara Mellers, Don Moore and Philip Tetlock established the Good Judgment Project, the basic principle was to collect specific predictions about the future and then check to see if they came true. That is not the world Roger Babson inhabited and neither does it describe the task of modern pundits.

When we talk about the future, we often aren’t talking about the future at all but about the problems of today. A newspaper columnist who offers a view on the future of North Korea, or the European Union, is trying to catch the eye, support an argument, or convey in a couple of sentences a worldview that would otherwise be impossibly unwieldy to explain. A talking head in a TV studio offers predictions by way of making conversation. A government analyst or corporate planner may be trying to justify earlier decisions, engaging in bureaucratic defensiveness. And many election forecasts are simple acts of cheerleading for one side or the other.

“Some people – call them ‘superforecasters’ – can predict geopolitical events with an accuracy far outstripping chance”

Unlike the predictions collected by the Good Judgment Project, many forecasts are vague enough in their details to let the mistaken seer off the hook. Even if it were possible to pronounce whether a forecast had come true, only in a few hotly disputed cases would anybody bother to check.

All this suggests that among the various strategies employed by the superforecasters of the Good Judgment Project, the most basic explanation of their success is that they have the single uncompromised objective of seeing into the future – and this is rare. They receive continual feedback about the success and failure of every forecast, and there are no points for radicalism, originality, boldness, conventional pieties, contrarianism or wit. The project manager of the Good Judgment Project, Terry Murray, says simply, “The only thing that matters is the right answer.”

I asked Murray for her tips on how to be a good forecaster. Her reply was, “Keep score.”
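
One simple way to keep score of probability forecasts is the Brier score: the mean squared difference between the probabilities you quoted and what actually happened, with lower being better. The article does not say which metric the Good Judgment Project uses, so the sketch below, with made-up forecasts, illustrates the idea rather than the project’s method.

```python
# Illustrative Brier score: one way to "keep score" of probability forecasts.
# Each forecast is a probability; each outcome is 1 if the event happened, 0 if not.
# Lower is better; always saying 50 per cent scores 0.25.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical record: three questions, the probabilities quoted, and what occurred.
forecasts = [0.9, 0.2, 0.6]
outcomes = [1, 0, 0]

print(round(brier_score(forecasts, outcomes), 3))  # 0.137
```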

. . .

An intriguing footnote to Philip Tetlock’s original humbling of the experts was that the forecasters who did best were what Tetlock calls “foxes” rather than “hedgehogs”. He used the term to refer to a particular style of thinking: broad rather than deep, intuitive rather than logical, self-critical rather than assured, and ad hoc rather than systematic. The “foxy” thinking style is now much in vogue. Nate Silver, the data journalist most famous for his successful forecasts of US elections, adopted the fox as the mascot of his website as a symbol of “a pluralistic approach”.

The trouble is that Tetlock’s original foxes weren’t actually very good at forecasting. They were merely less awful than the hedgehogs, who deployed a methodical, logical train of thought that proved useless for predicting world affairs. That world, apparently, is too complex for any single logical framework to encompass.

More recent research by the Good Judgment Project investigators leaves foxes and hedgehogs behind but develops this idea that personality matters. Barbara Mellers told me that the thinking style most associated with making better forecasts was something psychologists call “actively open-minded thinking”. A questionnaire to diagnose this trait invites people to rate their agreement or disagreement with statements such as, “Changing your mind is a sign of weakness.” The project found that successful forecasters aren’t afraid to change their minds, are happy to seek out conflicting views and are comfortable with the notion that fresh evidence might force them to abandon an old view of the world and embrace something new.

Which brings us to the strange, sad story of Irving Fisher and John Maynard Keynes. The two men had much in common: both giants in the field of economics; both best-selling authors; both, alas, enthusiastic and prominent eugenicists. Both had immense charisma as public speakers.

Fisher and Keynes also shared a fascination with financial markets, and a conviction that their expertise in macroeconomics and in economic statistics should lead to success as an investor. Both of them, ultimately, were wrong about this. The stock market crashes of 1929 – in September in the UK and late October in the US – caught each of them by surprise, and both lost heavily.

Yet Keynes is remembered today as a successful investor. This is not unreasonable. A study by David Chambers and Elroy Dimson, two financial economists, concluded that Keynes’s track record over a quarter century running the discretionary portfolio of King’s College Cambridge was excellent, outperforming market benchmarks by an average of six percentage points a year, an impressive margin.

This wasn’t because Keynes was a great economic forecaster. His original approach had been predicated on timing the business cycle, moving into and out of different investment classes depending on which way the economy itself was moving. This investment strategy was not a success, and after several years Keynes’s portfolio was almost 20 per cent behind the market as a whole.

The secret to Keynes’s eventual profits is that he changed his approach. He abandoned macroeconomic forecasting entirely. Instead, he sought out well-managed companies with strong dividend yields, and held on to them for the long term. This approach is now associated with Warren Buffett, who quotes Keynes’s investment maxims with approval. But the key insight is that the strategy does not require macroeconomic predictions. Keynes, the most influential macroeconomist in history, realised not only that such forecasts were beyond his skill but that they were unnecessary.

Irving Fisher’s mistake was not that his forecasts were any worse than Keynes’s but that he depended on them to be right, and they weren’t. Fisher’s investments were leveraged by the use of borrowed money. This magnified his gains during the boom, his confidence, and then his losses in the crash.

But there is more to Fisher’s undoing than leverage. His pre-crash gains were large enough that he could easily have cut his losses and lived comfortably. Instead, he was convinced the market would turn again. He made several comments about how the crash was “largely psychological”, or “panic”, and how recovery was imminent. It was not.

One of Fisher’s major investments was in Remington Rand – he was on the stationery company’s board after selling them his “Index Visible” invention, a type of Rolodex. The share price tells the story: $58 before the crash, $28 by 1930. Fisher topped up his investments – and the price soon dropped to $1.

Fisher became deeper and deeper in debt to the taxman and to his brokers. Towards the end of his life, he was a marginalised figure living alone in modest circumstances, an easy target for scam artists. Sylvia Nasar writes in Grand Pursuit, a history of economic thought, “His optimism, overconfidence and stubbornness betrayed him.”

. . .

So what is the secret of looking into the future? Initial results from the Good Judgment Project suggest the following approaches. First, some basic training in probabilistic reasoning helps to produce better forecasts. Second, teams of good forecasters produce better results than good forecasters working alone. Third, actively open-minded people prosper as forecasters.

But the Good Judgment Project also hints at why so many experts are such terrible forecasters. It’s not so much that they lack training, teamwork and open-mindedness – although some of these qualities are in shorter supply than others. It’s that most forecasters aren’t actually seriously and single-mindedly trying to see into the future. If they were, they’d keep score and try to improve their predictions based on past errors. They don’t.

“Successful forecasters aren’t afraid to change their minds and are comfortable with the notion that fresh evidence might mean abandoning an old view”

This is because our predictions are about the future only in the most superficial way. They are really advertisements, conversation pieces, declarations of tribal loyalty – or, as with Irving Fisher, statements of profound conviction about the logical structure of the world. As Roger Babson explained, not without sympathy, Fisher had failed because “he thinks the world is ruled by figures instead of feelings, or by theories instead of styles”.

Poor Fisher was trapped by his own logic, his unrelenting optimism and his repeated public declarations that stocks would recover. And he was bankrupted by an investment strategy in which he could not afford to be wrong.

Babson was perhaps wrong as often as he was right – nobody was keeping track closely enough to be sure either way – but that did not stop him making a fortune. And Keynes prospered when he moved to an investment strategy in which forecasts simply did not matter much.

Fisher once declared that “the sagacious businessman is constantly forecasting”. But Keynes famously wrote of long-term forecasts, “About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.”

Perhaps even more famous is a remark often attributed to Keynes. “When my information changes, I alter my conclusions. What do you do, sir?”

If only he had taught that lesson to Irving Fisher.

Also published at ft.com.


Why inflation remains best way to avoid stagnation

The prospect is that central banks will find themselves helpless, writes Tim Harford

People who were not born when the financial crisis began are now old enough to read about it. We have been able to distract ourselves with two Olympics, two World Cups and two US presidential elections. Yet no matter how stale our economic troubles feel, they manage to linger.

Given the severity of the crisis and the inadequacy of the policy response, it should be no surprise that recovery has been slow and anaemic: that is what economic history always suggested. Yet some economists are growing disheartened. The talk is of “secular stagnation” – a phrase which could mean two things, neither of them good.

One fear has been well-aired: that future growth possibilities will be limited by an ageing population or perhaps even technological stagnation.

The second meaning of secular stagnation is altogether stranger: it is that regardless of their potential for growth, modern economies may suffer from a persistent tendency to slip below that potential, sliding into stubborn recessions. The west’s lost decade of economic growth may be a taste of things to come.

This view was put forward most forcefully by Lawrence Summers, who was Treasury secretary under Bill Clinton and a senior adviser to President Barack Obama. It has been discussed at length in a collection of essays published last week by the Centre for Economic Policy Research. But what could it mean?

Normally, when an economy slips into recession, the standard response is to cut interest rates. This encourages us to spend, rather than save, giving the economy an immediate boost.

Things become more difficult if nominal interest rates are already low. Central banks have to employ radical tactics of uncertain effectiveness, such as quantitative easing. Governments could and should borrow and spend to support the economy. In practice they have proved politically gridlocked (in the US), institutionally hamstrung (in the EU) or ideologically blinkered (in the UK). There is not much reason to think the politics of fiscal stimulus would be very different in the future, so the zero-interest rate boundary is a problem.

The awful prospect of secular stagnation is that this is the new normal. Interest rates will be very low as a matter of course, and central banks will routinely find themselves nearly helpless.

“A cut in interest rates encourages us to spend, rather than save, giving the economy an immediate boost”

Before we startle ourselves at shadows, let us ask why Prof Summers might be right. Real interest rates – the rates paid after adjusting for inflation – have been falling. In the US, real rates averaged about 5 per cent in the 1980s, 2 per cent in the 1990s and 1 per cent in the Noughties. (Since Lehman Brothers failed they have been negative, but the long-term trend speaks more eloquently.) Real interest rates have also been declining in the EU for 20 years. The International Monetary Fund’s estimate of global real interest rates has been declining for 30 years.

This does not look good, so why is it happening? The background level of real interest rates is set not by central banks but by supply and demand. Low real rates suggest lots of people are trying to save, and particularly in safe assets, while few people are trying to borrow and invest. Only with rates at a very low level can enough borrowers be found to mop up all the savings.

If secular stagnation is a real risk, we need policies to address it. One approach is to try to change the forces of supply and demand to boost the demand for cash to invest, while stemming the supply of savings, and reducing the bias towards super-safe assets.

This looks tricky. Much policy has pushed in the opposite direction. Consider the austerity drive and long-term goals to reduce government debt burdens; this reduces the supply of safe assets and pushes down real rates. Or the tendency in the UK to push pension risk away from companies and the government, and towards individuals; this encourages extra saving, just in case. Or the way in which (understandably) regulators insist that banks and pension funds hold more safe assets; again, this increases the demand for safe assets and pushes down real interest rates. To reverse all these policies, sacrificing microeconomic particulars for a rather abstract macroeconomic hunch, looks like a hard sell.

There is a simple alternative, albeit one that carries risks. Central bank targets for inflation should be raised to 4 per cent. A credible higher inflation target would provide immediate stimulus (who wants to squirrel away money that is eroding at 4 per cent a year?) and would give central banks more leeway to cut real rates in future. If equilibrium real interest rates are zero, that might not matter when central banks can produce real rates of minus 4 per cent.
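
The arithmetic behind that claim is the rough Fisher relation: the real interest rate is approximately the nominal rate minus inflation, so with nominal rates stuck at zero the inflation target sets the floor under real rates. A minimal sketch with illustrative figures (the 2 per cent case is the conventional target, included only for comparison):

```python
# Rough Fisher relation: real rate ≈ nominal rate - inflation.
# With nominal rates stuck at a floor near zero, the inflation target
# determines how far real rates can fall. Figures are illustrative.

def lowest_real_rate(inflation_target, nominal_floor=0.0):
    """Lowest achievable real rate when nominal rates cannot fall below the floor."""
    return nominal_floor - inflation_target

for target in (0.02, 0.04):
    print(f"Inflation target {target:.0%}: "
          f"real rates can fall to about {lowest_real_rate(target):+.0%}")
```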

If all that makes you feel queasy, it should. As Prof Summers argues, unpleasant things have a tendency to happen when real interest rates are very low. Bubbles inflate, Ponzi schemes prosper and investors are reckless in their scrabble for yield.

One thing that need not worry anyone, though, is the prospect of an inflation target of 4 per cent. It will not happen. That is particularly true in the place where the world economy most needs more inflation: in the eurozone. The German folk memory of hyperinflation in 1923 is just too strong. That economic catastrophe, which helped lay the foundations for Nazism and ruin much of the 20th century, continues to resonate today.

What practical policy options remain? That is easy to see. We must cross our fingers and hope that Prof Summers is mistaken.

Also published at ft.com.

5th of September, 2014

