Tim Harford The Undercover Economist

Other Writing

Articles from the New York Times, Forbes, Wired and beyond – any piece that isn’t one of my columns.

Other Writing

A modest proposal – let’s just abolish the Budget

Once upon a time, it made sense to have an annual Budget speech. When the central economic fact of the year was whether the harvest had failed or not, it behoved the Chancellor to declare how he planned to spend whatever he happened to have in his coffers. But a vital institution for the pre-industrial age has mutated into a mere circus for the post-industrial one. The central question that this Budget provoked in my mind was this: why on earth do we still have a Budget?

Skim through the transcript of yesterday’s speech — if you can bear to — and you’ll find that the items fall into a few categories: (1) trivial; (2) responses to silly self-imposed rules; (3) economic forecasts that will later be wrong; (4) pure rhetoric; (5) worse than useless; (6) irrelevant.

Mr Osborne opened with a list of all the ways in which the UK economy is strong, skimmed over all the ways in which it is weak, and blamed the Labour party or foreigners for everything. (Rhetoric.)

Then he ran through the latest outlook from the Office for Budget Responsibility, an institution that represents all that is best about rigorous independent economic forecasting — and is therefore bound to be wrong. (Bad forecasts.)

He admitted that he had broken his own rather odd rule about the ratio of debt to gross domestic product, before announcing that according to a different, nonsensical metric, he looked rather good. (Silly self-imposed rules.)

Mr Osborne threw the usual glitter-bomb of little presents. (Trivial.)

Consider the £19m for “community-led housing schemes” in the south-west of England. Very nice, but if every £1m of spending earned one word from the chancellor, his Budget speech would be considerably longer than War and Peace. Then there’s the donation of tampon VAT revenues to charity, the halving of the Severn Bridge toll, and a tax break for touring museum exhibitions. Isn’t it strange how the treats are emphasised and the multibillion-pound cuts and tax rises are always relegated to the appendices?

Then there’s the Chancellor’s odd tradition of pencilling in an increase to fuel duty year after year and, regardless of circumstances, coming up with an excuse to cancel the increase one more time. (Worse than useless.)

We also have major reforms to the school system. (Nothing to do with the Budget.)

What are we left with? The bodged introduction of a sugar tax and yet another wheeze to reform pension saving, the Lifetime Isa. Both policies would have been far better separated from the rabbit-out-of-hat Budget show, and considered on their merits.

There have been more dramatic Budgets than this, of course, but even then the drama has too often fallen into the “worse than useless” category.

Would the country’s economic policy really be harmed if the chancellor set out his fiscal direction at the beginning of the parliament and left it unchanged unless extraordinary circumstances intervened?

We should abolish 100 per cent of Autumn Statements and 80 per cent of Budgets — that’s a fiscal rule that I could really get behind.

Written for and first published in the Financial Times.


17th of March, 2016
Other Writing

Why Osborne’s sugar tax is half baked

I’m all in favour of a sugar tax, as I wrote in the FT Magazine on Saturday. It’s a shame, then, that — despite the headlines to the contrary — George Osborne hasn’t introduced one.

His proposal instead is to tax the manufacturers and importers of a particular variety of sugary drink. I am no dentist or dietitian, but it seems strange to take the view that sugar in general poses no risk to the nation’s teeth or waistline, unless it comes in a soft drink.

Coke and Pepsi are a problem, apparently. But it seems that sugar lumps in tea or coffee are not. Neither are cartons of chocolate milk. Nor syrupy concoctions from Starbucks and Costa. Nor soft drinks produced by boutique producers. Mars bars are fine. So are cakes. So are Coco Pops and Frosties, and for that matter the remarkable quantities of sugar that infuse cereals such as Bran Flakes, or are buried in the recipes of many ready meals. All these forms of sugar will continue to reach our taste buds free of a sugar tax.

Mr Osborne’s proposal will work, after a fashion. There is abundant evidence that people adjust their behaviour in response to financial incentives, whether through the window tax-avoiding architecture of 18th-century Britain or the inheritance tax-avoiding feat of Australians in postponing the date of their deaths to a more tax-efficient time.

So yes, as Mr Osborne expects, large companies will try to put less sugar in their soft drinks, or raise the prices of those drinks, or both. Sugar consumption from those sources will fall. But it may well rise elsewhere. For many people, a chocolate bar and a fizzy drink are substitutes. If the fizzy drink gets more expensive, the chocolate bar is a tasty alternative for the sweet-toothed consumer. It has more fat in it, too.

It’s clear enough why the chancellor has opted for this approach. He wants to blame large companies, not voters, and hide the fact that ultimately consumers will pay the tax. A broad-based tax on sugar itself would have been simpler, braver and far more effective. But Mr Osborne wanted his Budget to leave voters with a sweeter taste in the mouth.

Written for and first published in the Financial Times.


17th of March, 2016
Highlights

Multi-tasking: how to survive in the 21st century

Modern life now forces us to do a multitude of things at once — but can we? Should we?

Forget invisibility or flight: the superpower we all want is the ability to do several things at once. Unlike other superpowers, however, being able to multitask is now widely regarded as a basic requirement for employability. Some of us sport computers with multiple screens, to allow tweeting while trading pork bellies and frozen orange juice. Others make do with reading a Kindle while poking at a smartphone and glancing at a television in the corner with its two rows of scrolling subtitles. We think nothing of sending an email to a colleague to suggest a quick coffee break, because we can feel confident that the email will be read within minutes.

All this is simply the way the modern world works. Multitasking is like being able to read or add up, so fundamental that it is taken for granted. Doing one thing at a time is for losers — recall Lyndon Johnson’s often bowdlerised dismissal of Gerald Ford: “He can’t fart and chew gum at the same time.”

The rise of multitasking is fuelled by technology, of course, and by social change as well. Husbands and wives no longer specialise as breadwinners and homemakers; each must now do both. Work and play blur. Your friends can reach you on your work email account at 10 o’clock in the morning, while your boss can reach you on your mobile phone at 10 o’clock at night. You can do your weekly shop sitting at your desk and you can handle a work query in the queue at the supermarket.

This is good news in many ways — how wonderful to be able to get things done in what would once have been wasted time! How delightful the variety of it all is! No longer must we live in a monotonous, Taylorist world, painstakingly focusing on repetitive tasks until we lose our minds.

And yet we are starting to realise that the blessings of a multitasking life are mixed. We feel overwhelmed by the sheer number of things we might plausibly be doing at any one time, and by the feeling that we are on call at any moment.

And we fret about the unearthly appetite of our children to do everything at once, flipping through homework while chatting on WhatsApp, listening to music and watching Game of Thrones. (According to a recent study by Sabrina Pabilonia of the US Bureau of Labor Statistics, for over half the time that high-school students spend doing homework, they are also listening to music, watching TV or otherwise multitasking. That trend is on the increase.) Can they really handle all these inputs at once? They seem to think so, despite various studies suggesting otherwise.

And so a backlash against multitasking has begun — a kind of Luddite self-help campaign. The poster child for uni-tasking was launched on the crowdfunding website Kickstarter in December 2014. For $499 — substantially more than a multifunctional laptop — “The Hemingwrite” computer promised a nice keyboard, a small e-ink screen and an automatic cloud back-up. You couldn’t email on the Hemingwrite. You couldn’t fool around on YouTube, and you couldn’t read the news. All you could do was type. The Hemingwrite campaign raised over a third of a million dollars.

The Hemingwrite (now rebranded the Freewrite) represents an increasingly popular response to the multitasking problem: abstinence. Programs such as Freedom and Self-Control are now available to disable your browser for a preset period of time. The popular blogging platform WordPress offers “distraction-free writing”. The Villa Stéphanie, a hotel in Baden-Baden, offers what has been branded the “ultimate luxury”: a small silver switch beside the hotel bed that will activate a wireless blocker and keep the internet and all its temptations away.

The battle lines have been drawn. On one side: the culture of the modern workplace, which demands that most of us should be open to interruption at any time. On the other, the uni-tasking refuseniks who insist that multitaskers are deluding themselves, and that focus is essential. Who is right?

The ‘cognitive cost’

There is ample evidence in favour of the proposition that we should focus on one thing at a time. Consider a study led by David Strayer, a psychologist at the University of Utah. In 2006, Strayer and his colleagues used a high-fidelity driving simulator to compare the performance of drivers who were chatting on a mobile phone to drivers who had drunk enough alcohol to be at the legal blood-alcohol limit in the US. Chatting drivers didn’t adopt the aggressive, risk-taking style of drunk drivers but they were unsafe in other ways. They took much longer to respond to events outside the car, and they failed to notice a lot of the visual cues around them. Strayer’s famous conclusion: driving while using a mobile phone is as dangerous as driving while drunk.

Less famous was Strayer’s finding that it made no difference whether the driver was using a handheld or hands-free phone. The problem with talking while driving is not a shortage of hands. It is a shortage of mental bandwidth.

Yet this discovery has made little impression either on public opinion or on the law. In the United Kingdom, for example, it is an offence to use a hand-held phone while driving but perfectly legal if the phone is used hands-free. We’re happy to acknowledge that we only have two hands but refuse to admit that we only have one brain.

Another study, by Strayer, David Sanbonmatsu and others, suggested that we are also poor judges of our ability to multitask. The subjects who reported doing a lot of multitasking were also the ones who performed poorly on tests of multitasking ability. They systematically overrated their ability to multitask and they displayed poor impulse control. In other words, wanting to multitask is a good sign that you should not be multitasking.

We may not immediately realise how multitasking is hampering us. The first time I took to Twitter to comment on a public event was during a televised prime-ministerial debate in 2010. The sense of buzz was fun; I could watch the candidates argue and the twitterati respond, compose my own 140-character profundities and see them being shared. I felt fully engaged with everything that was happening. Yet at the end of the debate I realised, to my surprise, that I couldn’t remember anything that Brown, Cameron and Clegg had said.

A study conducted at UCLA in 2006 suggests that my experience is not unusual. Three psychologists, Karin Foerde, Barbara Knowlton and Russell Poldrack, recruited students to look at a series of flashcards with symbols on them, and then to make predictions based on patterns they had recognised. Some of these prediction tasks were done in a multitasking environment, where the students also had to listen to low- and high-pitched tones and count the high-pitched ones. You might think that making predictions while also counting beeps was too much for the students to handle. It wasn’t. They were equally competent at spotting patterns with or without the note-counting task.

But here’s the catch: when the researchers then followed up by asking more abstract questions about the patterns, the cognitive cost of the multitasking became clear. The students struggled to answer questions about the predictions they’d made in the multitasking environment. They had successfully juggled both tasks in the moment — but they hadn’t learnt anything that they could apply in a different context.

That’s an unnerving discovery. When we are sending email in the middle of a tedious meeting, we may nevertheless feel that we’re taking in what is being said. A student may be confident that neither Snapchat nor the live football is preventing them taking in their revision notes. But the UCLA findings suggest that this feeling of understanding may be an illusion and that, later, we’ll find ourselves unable to remember much, or to apply our knowledge flexibly. So, multitasking can make us forgetful — one more way in which multitaskers are a little bit like drunks.

Early multitaskers

All this is unnerving, given that the modern world makes multitasking almost inescapable. But perhaps we shouldn’t worry too much. Long before multitasking became ubiquitous, it had a long and distinguished history.

In 1958, a young psychologist named Bernice Eiduson embarked on a long-term research project — so long-term, in fact, that Eiduson died before it was completed. Eiduson studied the working methods of 40 scientists, all men. She interviewed them periodically over two decades and put them through various psychological tests. Some of these scientists found their careers fizzling out, while others went on to great success. Four won Nobel Prizes and two others were widely regarded as serious Nobel contenders. Several more were invited to join the National Academy of Sciences.

After Eiduson died, some of her colleagues published an analysis of her work. These colleagues, Robert Root-Bernstein, Maurine Bernstein and Helen Garnier, wanted to understand what determined whether a scientist would have a long productive career, a combination of genius and longevity.

There was no clue in the interviews or the psychological tests. But looking at the early publication record of these scientists — their first 100 published research papers — the researchers discovered a pattern: the top scientists were constantly changing the focus of their research.

Over the course of these first 100 papers, the most productive scientists covered five different research areas and moved from one of these topics to another an average of 43 times. They would publish, and change the subject, publish again, and change the subject again. Since most scientific research takes an extended period of time, the subjects must have overlapped. The secret to a long and highly productive scientific career? It’s multitasking.

Charles Darwin thrived on spinning multiple plates. He began his first notebook on “transmutation of species” two decades before The Origin of Species was published. His A Biographical Sketch of an Infant was based on notes made after his son William was born; William was 37 by the time it was published. Darwin spent nearly 20 years working on climbing and insectivorous plants. And Darwin published a learned book on earthworms in 1881, just before his death. He had been working on it for 44 years. When two psychologists, Howard Gruber and Sara Davis, studied Darwin and other celebrated artists and scientists, they concluded that such overlapping interests were common.

Another team of psychologists, led by Mihaly Csikszentmihalyi, interviewed almost 100 exceptionally creative people from jazz pianist Oscar Peterson to science writer Stephen Jay Gould to double Nobel laureate, the physicist John Bardeen. Csikszentmihalyi is famous for developing the idea of “flow”, the blissful state of being so absorbed in a challenge that one loses track of time and sets all distractions to one side. Yet every one of Csikszentmihalyi’s interviewees made a practice of keeping several projects bubbling away simultaneously.

Just internet addiction?

If the word “multitasking” can apply to both Darwin and a teenager with a serious Instagram habit, there is probably some benefit in defining our terms. There are at least four different things we might mean when we talk about multitasking. One is genuine multitasking: patting your head while rubbing your stomach; playing the piano and singing; farting while chewing gum. Genuine multitasking is possible, but at least one of the tasks needs to be so practised as to be done without thinking.

Then there’s the challenge of creating a presentation for your boss while also fielding phone calls for your boss and keeping an eye on email in case your boss wants you. This isn’t multitasking in the same sense. A better term is task switching, as our attention flits between the presentation, the telephone and the inbox. A great deal of what we call multitasking is in fact rapid task switching.

Task switching is often confused with a third, quite different activity — the guilty pleasure of disappearing down an unending click-hole of celebrity gossip and social media updates. There is a difference between the person who reads half a page of a journal article, then stops to write some notes about a possible future project, then goes back to the article — and someone who reads half a page of a journal article before clicking on bikini pictures for the rest of the morning. “What we’re often calling multitasking is in fact internet addiction,” says Shelley Carson, a psychologist and author of Your Creative Brain. “It’s a compulsive act, not an act of multitasking.”

A final kind of multitasking isn’t a way of getting things done but simply the condition of having a lot of things to do. The car needs to be taken in for a service. Your tooth is hurting. The nanny can’t pick up the kids from school today. There’s a big sales meeting to prepare for tomorrow, and your tax return is due next week. There are so many things that have to be done, so many responsibilities to attend to. Having a lot of things to do is not the same as doing them all at once. It’s just life. And it is not necessarily a stumbling block to getting things done — as Bernice Eiduson discovered as she tracked scientists on their way to their Nobel Prizes.

The fight for focus

These four practices — multitasking, task switching, getting distracted and managing multiple projects — all fit under the label “multitasking”. This is not just because of a simple linguistic confusion. The versatile networked devices we use tend to blur the distinction, serving us as we move from task to task while also offering an unlimited buffet of distractions. But the different kinds of multitasking are linked in other ways too. In particular, the highly productive practice of having multiple projects invites the less-than-productive habit of rapid task switching.

To see why, consider a story that psychologists like to tell about a restaurant near Berlin University in the 1920s. (It is retold in Willpower, a book by Roy Baumeister and John Tierney.) The story has it that when a large group of academics descended upon the restaurant, the waiter stood and calmly nodded as each new item was added to their complicated order. He wrote nothing down, but when he returned with the food his memory had been flawless. The academics left, still talking about the prodigious feat; but when one of them hurried back to retrieve something he’d left behind, the waiter had no recollection of him. How could the waiter have suddenly become so absent-minded? “Very simple,” he said. “When the order has been completed, I forget it.”

One member of the Berlin school was a young experimental psychologist named Bluma Zeigarnik. Intrigued, she demonstrated that people have a better recollection of uncompleted tasks. This is called the “Zeigarnik effect”: when we leave things unfinished, we can’t quite let go of them mentally. Our subconscious keeps reminding us that the task needs attention.

The Zeigarnik effect may explain the connection between facing multiple responsibilities and indulging in rapid task switching. We flit from task to task to task because we can’t forget about all of the things that we haven’t yet finished. We flit from task to task to task because we’re trying to get the nagging voices in our head to shut up.

Of course, there is much to be said for “focus”. But there is much to be said for copperplate handwriting, too, and for having a butler. The world has moved on. There’s something appealing about the Hemingwrite and the hotel room that will make the internet go away, but also something futile.

It is probably not true that Facebook is all that stands between you and literary greatness. And in most office environments, the Hemingwrite is not the tool that will win you promotion. You are not Ernest Hemingway, and you do not get to simply ignore emails from your colleagues.

If focus is going to have a chance, it’s going to have to fight an asymmetric war. Focus can only survive if it can reach an accommodation with the demands of a multitasking world.

Loops and lists

The word “multitasking” wasn’t applied to humans until the 1990s, but it has been used to describe computers for half a century. According to the Oxford English Dictionary, it was first used in print in 1966, when the magazine Datamation described a computer capable of appearing to perform several operations at the same time.

Just as with humans, computers typically create the illusion of multitasking by switching tasks rapidly. Computers perform the switching more quickly, of course, and they don’t take 20 minutes to get back on track after an interruption.

Nor does a computer fret about what is not being done. While rotating a polygon and sending text to the printer, it feels no guilt that the mouse has been left unchecked for the past 16 milliseconds. The mouse’s time will come. Being a computer means never having to worry about the Zeigarnik effect.
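
How might that rapid switching look in practice? Below is a minimal sketch in Python of the round-robin idea, rather than of how any real operating system schedules work; the polygon and printer tasks are invented for illustration.

```python
# A toy round-robin scheduler: run one "time slice" of each task,
# then move to the next, creating the illusion of multitasking.

def rotate_polygon():
    for frame in range(3):
        yield f"rotating polygon, frame {frame}"

def print_text():
    for page in range(3):
        yield f"sending page {page} to the printer"

def round_robin(tasks):
    """Give each task one step in turn; drop tasks as they finish."""
    queue = list(tasks)
    while queue:
        task = queue.pop(0)
        try:
            print(next(task))   # run one time slice of this task
            queue.append(task)  # back of the queue; its turn will come
        except StopIteration:
            pass                # finished: forgotten without a twinge

round_robin([rotate_polygon(), print_text()])
```

Run it and the two jobs interleave line by line, each completing without either ever truly running at the same moment as the other.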

Is there a lesson in this for distractible sacks of flesh like you and me? How can we keep a sense of control despite the incessant guilt of all the things we haven’t finished?

“Whenever you say to someone, ‘I’ll get back to you about that’, you just opened a loop in your brain,” says David Allen. Allen is the author of a cult productivity book called Getting Things Done. “That loop will keep spinning until you put a placeholder in a system you can trust.”

Modern life is always inviting us to open more of those loops. It isn’t necessarily that we have more work to do, but that we have more kinds of work that we ought to be doing at any given moment. Tasks now bleed into each other unforgivingly. Whatever we’re doing, we can’t escape the sense that perhaps we should be doing something else. It’s these overlapping possibilities that take the mental toll.

The principle behind Getting Things Done is simple: close the open loops. The details can become rather involved but the method is straightforward. For every single commitment you’ve made to yourself or to someone else, write down the very next thing you plan to do. Review your lists of next actions frequently enough to give you confidence that you won’t miss anything.

This method has a cult following, and practical experience suggests that many people find it enormously helpful — including me (see below). Only recently, however, did the psychologists E J Masicampo and Roy Baumeister find some academic evidence to explain why people find relief by using David Allen’s system. Masicampo and Baumeister found that you don’t need to complete a task to banish the Zeigarnik effect. Making a specific plan will do just as well. Write down your next action and you quiet that nagging voice at the back of your head. You are outsourcing your anxiety to a piece of paper.

A creative edge?

It is probably a wise idea to leave rapid task switching to the computers. Yet even frenetic flipping between Facebook, email and a document can have some benefits alongside the costs.

The psychologist Shelley Carson and her student Justin Moore recently recruited experimental subjects for a test of rapid task switching. Each subject was given a pair of tasks to do: crack a set of anagrams and read an article from an academic journal. These tasks were presented on a computer screen, and for half of the subjects they were presented sequentially — first solve the anagrams, then read the article. For the other half of the experimental group, the computer switched every two-and-a-half minutes between the anagrams and the journal article, forcing the subjects to change mental gears many times.

Unsurprisingly, task switching slowed the subjects down and scrambled their thinking. They solved fewer anagrams and performed poorly on a test of reading comprehension when forced to refocus every 150 seconds.

But the multitasking treatment did have a benefit. Subjects who had been task switching became more creative. To be specific, their scores on tests of “divergent” thinking improved. Such tests ask subjects to pour out multiple answers to odd questions. They might be asked to think of as many uses as possible for a rolling pin or to list all the consequences they could summon to mind of a world where everyone has three arms. Involuntary multitaskers produced a greater volume and variety of answers, and their answers were more original too.

“It seems that switching back and forth between tasks primed people for creativity,” says Carson, who is an adjunct professor at Harvard. The results of her work with Moore have not yet been published, and one might reasonably object that such tasks are trivial measures of creativity. Carson responds that scores on these laboratory tests of divergent thinking are correlated with substantial creative achievements such as publishing a novel, producing a professional stage show or creating an award-winning piece of visual art. Those who insist that great work can only be achieved through superhuman focus should think long and hard on this discovery.

Carson and colleagues have found an association between significant creative achievement and a trait psychologists term “low latent inhibition”. Latent inhibition is the filter that all mammals have that allows them to tune out apparently irrelevant stimuli. It would be crippling to listen to every conversation in the open-plan office and the hum of the air conditioning, while counting the number of people who walk past the office window. Latent inhibition is what saves us from having to do so. These subconscious filters let us walk through the world without being overwhelmed by all the different stimuli it hurls at us.

And yet people whose filters are a little bit porous have a big creative edge. Think on that, uni-taskers: while you busily try to focus on one thing at a time, the people who struggle to filter out the buzz of the world are being reviewed in The New Yorker.

“You’re letting more information into your cognitive workspace, and that information can be consciously or unconsciously combined,” says Carson. Two other psychologists, Holly White and Priti Shah, found a similar pattern for people suffering from attention deficit hyperactivity disorder (ADHD).

It would be wrong to romanticise potentially disabling conditions such as ADHD. All these studies were conducted on university students, people who had already demonstrated an ability to function well. But their conditions weren’t necessarily trivial — to participate in the White/Shah experiment, students had to have a clinical diagnosis of ADHD, meaning that their condition was troubling enough to prompt them to seek professional help.

It’s surprising to discover that being forced to switch tasks can make us more creative. It may be still more surprising to realise that in an age where we live under the threat of constant distraction, people who are particularly prone to being distracted are flourishing creatively.

Perhaps we shouldn’t be entirely surprised. It’s easier to think outside the box if the box is full of holes. And it’s also easier to think outside the box if you spend a lot of time clambering between different boxes. “The act of switching back and forth can grease the wheels of thought,” says John Kounios, a professor of psychology at Drexel University.

Kounios, who is co-author of The Eureka Factor, suggests that there are at least two other potentially creative mechanisms at play when we switch between tasks. One is that the new task can help us forget bad ideas. When solving a creative problem, it’s easy to become stuck because we think of an incorrect solution but simply can’t stop returning to it. Doing something totally new induces “fixation forgetting”, leaving us free to find the right answer.

Another is “opportunistic assimilation”. This is when the new task prompts us to think of a solution to the old one. The original Eureka moment is an example.

As the story has it, Archimedes was struggling with the task of determining whether a golden wreath truly was made of pure gold without damaging the ornate treasure. The solution was to determine whether the wreath had the same volume as a pure gold ingot with the same mass; this, in turn, could be done by submerging both the wreath and the ingot to see whether they displaced the same volume of water.

This insight, we are told, occurred to Archimedes while he was having a bath and watching the water level rise and fall as he lifted himself in and out. And if solving such a problem while having a bath isn’t multitasking, then what is?

Tim Harford is an FT columnist. His latest book is ‘The Undercover Economist Strikes Back’. Twitter: @TimHarford

Six ways to be a master of multitasking

1. Be mindful

“The ideal situation is to be able to multitask when multitasking is appropriate, and focus when focusing is important,” says psychologist Shelley Carson. Tom Chatfield, author of Live This Book, suggests making two lists, one for activities best done with internet access and one for activities best done offline. Connecting and disconnecting from the internet should be deliberate acts.

2. Write it down

The essence of David Allen’s Getting Things Done is to turn every vague guilty thought into a specific action, to write down all of the actions and to review them regularly. The point, says Allen, is to feel relaxed about what you’re doing — and about what you’ve decided not to do right now — confident that nothing will fall through the cracks.

3. Tame your smartphone

The smartphone is a great servant and a harsh master. Disable needless notifications — most people don’t need to know about incoming tweets and emails. Set up a filing system within your email so that when a message arrives that requires a proper keyboard to answer — ie 50 words or more — you can move that email out of your inbox and place it in a folder where it will be waiting for you when you fire up your computer.

4. Focus in short sprints

The “Pomodoro Technique” — named after a tomato-shaped kitchen timer — alternates focusing for 25 minutes and breaking for five minutes, across two-hour sessions. Productivity guru Merlin Mann suggests an “email dash”, where you scan email and deal with urgent matters for a few minutes each hour. Such ideas let you focus intensely while also switching between projects several times a day. (A minimal timer sketch appears after this list.)

5. Procrastinate to win

If you have several interesting projects on the go, you can procrastinate over one by working on another. (It worked for Charles Darwin.) A change is as good as a rest, they say — and as psychologist John Kounios explains, such task switching can also unlock new ideas.

6. Cross-fertilise

“Creative ideas come to people who are interdisciplinary, working across different organisational units or across many projects,” says author and research psychologist Keith Sawyer. (Appropriately, Sawyer is also a jazz pianist, a former management consultant and a sometime game designer for Atari.) Good ideas often come when your mind makes unexpected connections between different fields.
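
As promised under tip 4, here is a bare-bones sketch of the Pomodoro cycle in Python. It assumes nothing beyond the standard library; the 25/5 durations are the classic ones, and you can shrink them to try it out.

```python
import time

# Four pomodoros of 25 minutes' focus and 5 minutes' break
# fill one two-hour session, as described in tip 4.
FOCUS_MINUTES, BREAK_MINUTES, ROUNDS = 25, 5, 4

for pomodoro in range(1, ROUNDS + 1):
    print(f"Pomodoro {pomodoro}: focus until the timer rings")
    time.sleep(FOCUS_MINUTES * 60)   # heads-down work
    print(f"Pomodoro {pomodoro}: take a short break")
    time.sleep(BREAK_MINUTES * 60)   # rest, or do an "email dash"
```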

Tim Harford’s To-Do Lists

David Allen’s Getting Things Done system — or GTD — has reached the status of a religion among some productivity geeks. At its heart, it’s just a fancy to-do list, but it’s more powerful than a regular list because it’s comprehensive, specific and designed to prompt you when you need prompting. Here’s how I make the idea work for me.

Write everything down. I use Google Calendar for appointments and an electronic to-do list called Remember the Milk, plus an ad hoc daily list on paper. The details don’t matter. The principle is never to carry a mental commitment around in your head.

Make the list comprehensive. Mine currently has 151 items on it. (No, I don’t memorise the number. I just counted.)

Keep the list fresh. The system works its anxiety-reducing magic best if you trust your calendar and to-do list to remind you when you need reminding. I spend about 20 minutes once a week reviewing the list to note incoming deadlines and make sure the list is neither missing important commitments nor cluttered with stale projects. Review is vital — the more you trust your list, the more you use it. The more you use it, the more you trust it.

List by context as well as topic. It’s natural to list tasks by topic or project — everything associated with renovating the spare room, for instance, or next year’s annual away-day. I also list them by context (this is easy on an electronic list). Things I can do when on a plane; things I can only do when at the shops; things I need to talk about when I next see my boss.

Be specific about the next action. If you’re just writing down vague reminders, the to-do list will continue to provoke anxiety. Before you write down an ill-formed task, take the 15 seconds required to think about exactly what that task is.
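
For the programmatically minded, the list-by-context idea amounts to tagging each task twice and slicing the list either way. Here is a toy sketch in Python; every task and tag is an invented example, not a prescription.

```python
# Each task carries both a project tag and a context tag, so the
# same list answers "what's left on this project?" and "what can
# I do right now, in this place?".
tasks = [
    {"action": "measure the spare room",    "project": "renovation", "context": "home"},
    {"action": "choose paint colours",      "project": "renovation", "context": "shops"},
    {"action": "draft the away-day agenda", "project": "away-day",   "context": "plane"},
    {"action": "ask about the venue",       "project": "away-day",   "context": "boss"},
]

def slice_by(key, value):
    """Return the next actions matching one project or one context."""
    return [t["action"] for t in tasks if t[key] == value]

print(slice_by("context", "plane"))       # things I can do on a plane
print(slice_by("project", "renovation"))  # everything for the spare room
```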

Written for and first published at ft.com.


Other Writing

The myth of the robot job-ocalypse

“The number of jobs lost to more efficient machines is only part of the problem . . . In the past, new industries hired far more people than those they put out of business. But this is not true of many of today’s new industries.”
This sentiment, from Time magazine, dates from the early weeks of John Kennedy’s presidency. Yet it would slot nicely into many a contemporary political speech. Like any self-respecting remorseless killer robot from the future, our techno-anxiety just keeps coming back.
Arnold Schwarzenegger’s Terminator was science fiction — but so, too, is the idea that robots and software algorithms are guzzling jobs faster than they can be created. There is an astonishing mismatch between our fear of automation and the reality so far.
How can this be? The highways of Silicon Valley are sprinkled with self-driving cars. Visit the cinema, the supermarket or the bank and the most prominent staff you will see are the security guards, who are presumably there to prevent you stealing valuable machines. Your computer once contented itself with correcting your spelling; now it will translate your prose into Mandarin. Given all this, surely the robots must have stolen a job or two by now?
Of course, the answer is that automation has been destroying particular jobs in particular industries for a long time, which is why most westerners who weave clothes or cultivate and harvest crops by hand do so for fun. In the past that process made us richer.
The worry now is that, with computers making jobs redundant faster than we can generate new ones, the result is widespread unemployment, leaving a privileged class of robot-owning rentiers and highly paid workers with robot-compatible skills.
This idea is superficially plausible: we are surrounded by cheap, powerful computers; many people have lost their jobs in the past decade; and inequality has risen in the past 30 years.
But the theory can be put to a very simple test: how fast is productivity growing? The usual measure of productivity is output per hour worked — by a human. Robots can produce economic output without any hours of human labour at all, so a sudden onslaught of robot workers should cause a sudden acceleration in productivity.
Instead, productivity has been disappointing. In the US, labour productivity growth averaged an impressive 2.8 per cent per year from 1948 to 1973. The result was mass affluence rather than mass joblessness. Productivity then slumped for a generation and perked up in the late 1990s but has now sagged again. The picture is little better in the UK, where labour productivity is notoriously low compared with the other leading G7 economies, and it has been falling further behind since 2007.
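To see what that 2.8 per cent a year meant, a line of back-of-envelope arithmetic helps: compounded over a generation, it roughly doubles output per hour worked.

```python
# Compound 2.8% annual productivity growth from 1948 to 1973.
years = 1973 - 1948      # 25 years
factor = 1.028 ** years  # growth factor over the whole period
print(f"Output per hour multiplied by {factor:.2f} over {years} years")
# -> Output per hour multiplied by 1.99 over 25 years
```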
Taking a 40-year perspective, the impact of this long productivity malaise on typical workers in rich countries is greater than that of the rise in inequality, or of the financial crisis of 2008. In an age peppered with economic disappointments, the worst has been the stubborn failure of the robots to take our jobs. Then why is so much commentary dedicated to the opposite view? Some of this is a simple error: it has been a tough decade, economically speaking, and it is easy to blame robots for woes that should be laid at the door of others, such as bankers, austerity enthusiasts and eurozone politicians.
It is also true that robotics is making impressive strides. Gill Pratt, a robotics expert, recently described a “Cambrian explosion” for robotics in the Journal of Economic Perspectives. While robots have done little to cause mass unemployment in the recent past, that may change in future.
Automation has also undoubtedly changed the shape of the job market — economist David Autor, writing in the same journal, documents a rise in demand for low-skilled jobs and highly skilled jobs, and a hollowing out of jobs in the middle. There are signs that the hollow is moving further and further up the spectrum of skills. The robots may not be taking our jobs, but they are certainly shuffling them around.
Yet Mr Autor also points to a striking statistic: private investment in computers and software in the US has been falling almost continuously for 15 years. That is hard to square with the story of a robotic job-ocalypse. Surely we should expect to see a surge in IT investment as all those machines are installed?
Instead, in the wake of the great recession, managers have noted an ample supply of cheap human labour and have done without the machines for now. Perhaps there is some vast underground dormitory somewhere, all steel and sparks and dormant androids. In a corner, a chromium-plated robo-hack is tapping away at a column lamenting the fact that the humans have taken all the robots’ jobs.

24th of August, 2015
Other Writing

George Osborne’s Magic Has Us Fooled, For Now

The chancellor can alter the law but cannot make costly workers worth hiring, says Tim Harford

He has mastered the art of misdirection as well as any stage magician. Everyone knew George Osborne was going to butcher the tax credit system on Wednesday, more or less halving the income at which the credits begin to fall away. But few expected him to announce a much higher minimum wage, and he did it with such an extravagant flourish that no one clearly remembers seeing him wield the cleaver.
For most of the poorer working households who qualify for tax credits, the combined effect of Mr Osborne’s Budget will be to make them worse off financially, and to push them away from the labour force by raising the effective rate of tax they pay.
Monique Ebell of the National Institute of Economic and Social Research reckons that a single mother working 30 hours a week at the minimum wage will be more than £1,000 a year worse off in two years’ time than she is today, despite the increase in the wage she must legally be paid.
That assumes, of course, that she keeps her job at all. This is the big question about the minimum wage: will it increase the earnings of low-paid workers, or price them out of the job market entirely? Should we expect to see these workers laid off and replaced with one-touch espresso machines, automatic checkouts and call-centre workers from India? The minimum wage is a delicate balance, and Mr Osborne has put his thumb on the scale.
The chancellor’s aim is to raise the minimum wage for those over 25 beyond £9 by 2020, from £6.50 today. That is dramatic, although not quite as dramatic as it first seems. Mr Osborne is setting the minimum wage where it might be if the economic crisis of 2008, and the long stagnation that followed, had never happened. He is hoping that employment will not suffer. He has a few other countries to look to as a precedent. France is one example, and it is not encouraging. Australia is a more hopeful case.
Mr Osborne’s move would once have been unthinkable from a Conservative chancellor. A quarter of a century ago, the conventional wisdom was that the idea of a minimum wage was absurd at any level. The logic of that position was simple enough. If the minimum wage were below the market-clearing wage — at which employees want to work the same number of hours that businesses want to hire them for — it would be irrelevant; if it were above, it would be worse than useless. Productive workers do not need a minimum wage because they will be well paid anyway. Less productive workers will be harmed by a minimum wage because employers would rather sack them than pay more than they are worth. One does not simply repeal the laws of supply and demand.
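That textbook logic can be made concrete with a toy model. The linear supply and demand curves below, and every number in them, are invented for illustration; no real labour market is this tidy.

```python
# Toy labour market: demand falls as wages rise, supply rises with
# wages, and employment is set by the short side of the market.

def demand(wage):
    return max(0.0, 100 - 8 * wage)   # hours firms want to hire

def supply(wage):
    return max(0.0, 12 * wage - 20)   # hours people want to work

# Grid-search the market-clearing wage (here it comes out at 6.0).
clearing = min((w / 10 for w in range(0, 201)),
               key=lambda w: abs(demand(w) - supply(w)))

for floor in (clearing - 2, clearing + 2):
    wage = max(floor, clearing)              # a floor binds only above clearing
    hours = min(demand(wage), supply(wage))  # short side of the market
    print(f"minimum wage {floor:.1f}: pay {wage:.1f}, hours worked {hours:.0f}")

# minimum wage 4.0: pay 6.0, hours worked 52   (below clearing: irrelevant)
# minimum wage 8.0: pay 8.0, hours worked 36   (above clearing: hours lost)
```

In the toy model a floor below the clearing wage changes nothing, while a floor above it raises pay but cuts hours.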
The world has moved on since then, and we know that while supply and demand matter, there is more to the labour market than the simple story above.
Some employers have market power and could pay higher wages if they were forced to; the higher minimum wage may simply redistribute from employers to low-paid employees. Another possibility is that if forced to pay higher wages, employers will invest in training and equipment to justify the labour expense. On this view, wages do not need to follow productivity; productivity can be led by wages.
A third explanation is that since many low-wage jobs are in non-traded sectors such as retail, employers will simply put up prices, spreading the burden of the higher minimum wage across all consumers, and possibly reducing inequality.
There is also the argument that higher wages can encourage workers to show up more often and smile at the customers. This is true, but in most cases managers will have reached that conclusion by themselves without the need for a legal minimum.
A large body of empirical evidence suggests either that reasonable minimum wages do not destroy jobs at all, or that they do not destroy very many. The evidence is, of course, mixed and contested.
Much of it comes from the US and concerns the experience of teenagers, who — in the words of Alan Manning of the London School of Economics — “represent about 2 per cent of hours worked and 98 per cent of the studies of the minimum wage”. But it is clear enough that if modest increases in the minimum wage were disastrous for jobs, we would know that by now.
Whether the chancellor’s wage rise counts as “modest” is far more questionable. Professor Manning is guardedly optimistic: he thinks that the bold increase in the minimum wage is worth a try. But he is nervous, and so am I. We are at the edge of what the data can tell us. Mr Osborne is about to provide a fascinating new case study.
The best scenario is that the minimum wage helps to drive up British productivity, which has long languished. Employers invest in training, and rather than replacing workers with machines they give them the latest tools to do their jobs.
To the extent that productivity does not rise, employers absorb the costs or pass them on to consumers, equitably bearing the burden of giving hard-working people a decent wage.
A gloomier scenario seems more probable for some sectors, especially social care. The law of supply and demand turns out to matter after all. Faced with a sharp increase in the minimum wage that runs well ahead of what the Low Pay Commission has felt able to endorse, employers lay off many workers and reduce the hours of others. The welfare bill rises and — as so often in the past — it proves much harder to create jobs than to destroy them.
My own bet is somewhere in the middle. We will discover that Mr Osborne has pushed too hard, and that the minimum wage must be allowed to slip back again relative to median earnings. Some jobs will be lost, a lesson will be learned, and Mr Osborne’s political purposes will have been served. He will be hoping to have upgraded his own job to that of prime minister by then, which may be appropriate: he is a masterful politician but has never shown much grasp of economics.

Written for and first published at FT.com

13th of July, 2015
Other Writing

George Osborne’s gamble with jobs

My response to the Summer Budget went up on the FT website yesterday:

The sharp hike in the minimum wage in the Budget was a shock, but it was true to form for the UK chancellor of the exchequer: clever politics and dubious economics. It is telling that, where the Low Pay Commission used to consider the evidence and carefully balance the risks and rewards of a higher minimum wage, it must now recommend whatever George Osborne tells it to recommend.

The risk is clear: forced to pay up to £9 an hour, many businesses will find that they would rather find other ways to conduct their affairs — buying robots, offshoring key functions or moving overseas entirely. Bankruptcy is, of course, another option.

Mr Osborne’s gamble is that some businesses will simply eat the cost of higher wages (unlikely), or train their workers better and give them better tools so that the higher wages can be justified with higher productivity. It may work. It is enormously risky, and if the move is the wrong one it will be hard to reverse. The lesson of the 1980s is that, once lost, jobs are not easy to find again.

One might ask why the chancellor is willing to take such risks and to order the Low Pay Commission to do his bidding rather than be guided by evidence. The answer is not hard to find: Mr Osborne needs political cover. He is hacking away at the welfare state, notably the system of tax credits that was designed to encourage people to work rather than stay at home.

One can only guess what Milton Friedman, one of the inspirations behind the Thatcherite revolution, would have made of all this. In place of a carefully designed system of incentives for people to go to work, we are to be offered a wage increase set by a politician’s whim. Friedman knew that, even in the complex market for jobs, one does not simply abolish the laws of supply and demand.

Mr Osborne promised a Budget for working people but reality does not match that sound bite. The biggest tax break was for people inheriting expensive homes from their parents; and, while benefits for the working poor were being squeezed, those for pensioners were — as always — protected. Those who hoped for radical and logical tax reform have been bitterly disappointed.

As for working people, many will thank the chancellor as their wages rise. Others will become unaffordable and will lose their jobs. No doubt they will be scapegoated as scroungers in some future Budget speech. It is possible that Mr Osborne’s gamble will pay off. It is even possible, although unlikely, that it will pay off spectacularly. But it is reckless, and it is not his job that is on the line.

9th of July, 2015
Marginalia

Paying to Get Inside A Restaurant

Me, writing in May’s edition of The Atlantic:

The next time you’re fortunate enough to have dinner at a high-end restaurant, take a moment to enjoy not only the food and wine, but the frisson of a really good puzzle: Why do restaurants price things the way they do?

The markup on food makes sense. It takes time and skill to prepare the perfect cold-smoked salmon with balsamic-vinegar sorbet. But why are the wine prices so inflated? How hard can it be to pop open a bottle? Meanwhile, restroom access is free and unlimited for customers—a curious cross-subsidy.

Most mysterious of all: When reservations at hot new restaurants are so sought-after, why are they simply given away?

Why indeed? The full article is here and free to read online.

23rd of April, 2015
Other Writing

‘The Organised Mind’, by Daniel J Levitin; ‘The Life-Changing Magic of Tidying’, by Marie Kondo

Two very different books claim they can help us to cope with the excess of information, tasks and belongings

The challenge of “getting organised” is both commonplace and oddly particular. Organising my study is a different problem from organising a library, or organising a drinking session in a brewery. Organising things can be practical to the point of banality (what’s the best way to store socks?) but it can also be philosophically deep.

Consider the Celestial Empire of Benevolent Knowledge, a Chinese encyclopedia conjured up in an essay by Jorge Luis Borges. This fictional tome organises animals into categories thus: “(a) belonging to the emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies.”

Daniel Levitin is a professor of psychology and behavioural neuroscience; Marie Kondo is a professional tidying consultant. Both have written books that promise to help us get organised, although they are quite different. Kondo’s is a brief and bracing practical guide to tidying up your home. Levitin’s sprawling discussion aspires to provide a comprehensive account of the way we think about organising everything from possessions to friends. Levitin offers enough psychological detail to suggest why our mental categories do not match Borges’s encyclopedia. Yet he also frequently dabbles in the dispensing of practical tips. He has studied not only psychology but “HSPs” (Highly Successful Persons) and with his advice, perhaps you can be an HSP too.

Both books agree that everything should have a place and be put back in that place. Levitin expands on that, explaining that our memory for locations is well developed; hence, the well-known trick of committing a list of things to memory by assigning each thing a place in an imaginary physical space, a “memory palace”.

Yet there is a big difference in approach here. Levitin seems to approve of devising organisational systems for a complex world; Kondo believes that before we organise, we must first simplify. Levitin describes a system in which letters are printed in triplicate to allow filing chronologically, by topic and by correspondent. Kondo suggests that we could do without most of our paperwork: even love letters become a drag if you hoard every one.

Levitin’s approach to computers illustrates this disagreement. He makes the familiar argument that multitasking is a distracting problem. His solution is to buy several computers, each with a single purpose: one is for work, one for domestic tasks, one for social networking, one for reading books and so on. He is apologetic about the cost of this, but one wonders if even a millionaire could make the system work.

First, all these devices use the internet, and the internet is inherently multifunctional. Second, these categories are more porous than they seem: how to categorise an email from a work colleague inviting you to a party? Third, it’s not clear at all that maintaining several distinct email accounts helps anyone process email. Many productivity gurus advise pooling your emails as they come in, and not being too fussy about categorising them on the way out. Simple beats well-organised here, and complex filing systems aren’t worth the hassle. As Borges understood very well, they are inevitably a poor fit for the rich mess of reality.

Kondo declares that too much organisation is a trap. Instead of getting rid of things we don’t need, we buy stuff to organise the stuff we already have. None of this really helps, she claims. Once you get too clever about storage you lose track of what you own, and that way lies bloat and frustration.

Levitin’s approach to organisation seems more rational. He is always willing to throw in some talk about dopamine, the prefrontal cortex or inhibitory neurotransmitters. Kondo, eccentrically spiritual, believes in saying thank you to old socks before discarding them. Pragmatically speaking, though, Kondo is both more radical and more persuasive. To the extent that Levitin offers sensible practical advice, it is well-worn (why not carry a notebook and write down ideas and tasks so you don’t strain to remember them?). And when he admiringly describes the way that Joni Mitchell has a custom drawer for Scotch tape, another for masking tape, a third for string and rope, a fourth for mailing and packing products, and so on, one can only imagine Kondo trying to slap both Mitchell and Levitin into sanity.

This predilection for comprehensiveness weighs on Levitin’s book in another way: it’s long, and while it does contain a great deal to intrigue, the parade of disparate topics makes it less than the sum of its parts. There are tens of thousands of words on topics that are perfectly interesting in isolation (Bayes’ rule; how to estimate quantities; how online dating works) and yet they do not seem to fit together.

Consider each thing, writes Kondo, and ask “Does it spark joy?” Only then should you keep it. Levitin might profitably have taken that advice.

The Organised Mind: Thinking Straight in the Age of Information Overload, by Daniel J Levitin, Viking, RRP£20/ Dutton RRP$27.95, 528 pages

The Life-Changing Magic of Tidying: A Simple, Effective Way to Banish Clutter Forever, by Marie Kondo, Vermilion, RRP£7.99/ Ten Speed Press, RRP$16.99, 256 pages

Written for and first published at ft.com.

24th of January, 2015
Other Writing

What if we abolished Christmas?

One possibility is that the economy would be just fine. This is the classical view of macroeconomics, writes Tim Harford

In 1939, Lew Hahn, the head of the Retail Dry Goods Association in the US, noticed something that gave him cause for concern: Thanksgiving would fall on November 30 that year, the latest possible date. Since it was thought poor form to start hawking Yuletide goodies before Thanksgiving was over, this would mean a brief Christmas spending season.

Hahn was concerned that consumers would spend less, damaging an already weak economy, to say nothing of the prosperity of the members of the Retail Dry Goods Association. And so he had a word with the secretary of commerce, Harry Hopkins, who had a word with President Franklin D Roosevelt, who had a word with the nation. He explained that as Thanksgiving was a federal holiday it was the president’s job to select the date — and he was choosing November 23 instead.

The move was controversial. Alfred Landon, the Republican who had been defeated by Roosevelt in the presidential election of 1936, compared FDR’s high-handedness to that of Adolf Hitler, thus beginning a hallowed tradition in US political commentary. For a couple of years, half the country celebrated on the old Thanksgiving date while the other half marked the new “Franksgiving” instead; a couple of states sat on the fence and made both days a holiday.

All this raises a deeper question: what are the macroeconomic consequences of Christmas? The answer depends on your politics. Economic conservatives, from Rick Santorum to Alf Landon to George Osborne, believe Christmas has little effect on the health of the economy; liberals, from Ed Balls to Franklin Roosevelt to Paul Krugman, believe Christmas is macroeconomically invaluable.

I should emphasise that I am making assumptions here. I have not approached any of these people to ask their opinions about Christmas. But the views I am speculating that they hold seem a logical extension of their views on government stimulus spending.

Allow me to explain. Imagine that this Christmas day, the Queen, the Pope and even Oprah Winfrey announced that Christmas would be a purely religious occasion from 2015 onwards. There would be no presents and no feasting. If people respected this declaration, about $75bn-$100bn of extra consumer spending in the US alone would simply not materialise next December. What then?

One possibility is that the economy would be just fine. This is the classical view of macroeconomics: nothing significant would change after the abolition of Christmas. We would retain the same labour force and the same skills, the same factories and the same power stations, the same financial sector and the same logistics networks. The capacity of the economy to produce goods and services would be undiminished, and after a period of adjustment, during which tinsel factories would be retooled and Christmas tree plantations replanted, all would be well.

What would replace nearly $100bn of seasonal consumer spending? Nothing noticeable, but the replacement would happen just the same. The productive capacity freed up by the disappearance of Christmas could be turned to other uses; prices would fall just enough to tempt us to spend our money at other times of the year. Indeed, cancelling Christmas might even provide a modest boost to our prosperity in the longer term, as bunching up all that spending into a few short weeks strains factories and supply chains. Smoothing out our spending would be more efficient.

This classical view of how the economy works is also the view taken by Mr Osborne, the UK chancellor, and by Republicans in the US. Their view is that government stimulus spending does not work; cut it back, they argue, and the economy would adjust as the private sector took up the slack.

On the other side of the debate stands Mr Balls, the UK’s shadow chancellor, as well as American stimulus proponents such as Mr Krugman and Lawrence Summers. Mr Krugman once commented that panic about an attack from aliens would help the economy because it would get the government spending money again. Since aliens are not available, Santa Claus will have to do.

This Keynesian view of how the economy works differs from the classical view in one crucial way: it argues that supply does not always and automatically create demand. When Christmas is abolished (or a financial crisis devastates people’s confidence and their spending power), consumers will plan to spend less. And if consumers plan to spend less, price adjustments may not induce them to change their minds; the price adjustments may not even happen. If Christmas spending disappears, it may take many years for the economy to replace it. Those factories will still be there and the workers will remain available — but they will stand idle.

Who is right? I should confess a bias. I am sceptical about the efficiency of many government spending programmes and of many Christmas purchase decisions. In both cases, too much attention is lavished on appearances and too little on what the recipient might truly want. In the long run, then, I should hope both for a smaller state and for a smaller Christmas.

But that is a matter for the ghost of Christmas yet to come. Despite my own biases, I have to acknowledge that this Christmas interest rates are still close to zero. Until that changes, the liberals will have the better of the argument. Stimulus spending remains effective, regardless of whether the stimulus comes from the Treasury — or from the North Pole.

Written for and first published at ft.com.

24th of December, 2014
Other Writing

Why pilot schemes help ideas take flight

There’s huge value in experiments that help us decide whether to go big or go home

Here’s a little puzzle. You’re offered the chance to participate in two high-risk business ventures. Each costs £11,000. Each will be worth £1m if all goes well. Each has just a 1 per cent chance of success. The mystery is that the ventures have very different expected pay-offs.

One of these opportunities is a poor investment: it costs £11,000 to get an expected payout of £10,000, which is 1 per cent of a million. Unless you take enormous pleasure in gambling, the venture makes no sense.

Strangely, the other opportunity, while still risky, is an excellent bet. With the same cost and the same chance of success, how could that be?

Here’s the subtle difference. This attractive alternative project has two stages. The first is a pilot, costing £1,000. The pilot has a 90 per cent chance of failing, which would end the whole project. If the pilot succeeds, scaling up will cost a further £10,000, and there will be a 10 per cent chance of a million-pound payday.

This two-stage structure changes everything. While the total cost is still £11,000 and the chance of success is still 1 per cent, the option to get out after a failed pilot is invaluable. Nine times out of 10, the pilot will save you from wasting £10,000 – which means that while the simple project offers an expected loss of £1,000, the two-stage project has an expected profit of £8,000.
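
The arithmetic is easy to verify. Here is a minimal sketch in Python; the figures are the ones from the puzzle above, and the function names are merely illustrative:

def simple_venture():
    # Pay £11,000 up front for a 1-in-100 shot at £1m.
    return 1_000_000 / 100 - 11_000

def two_stage_venture():
    # The pilot costs £1,000 and succeeds 1 time in 10. On success,
    # scaling up costs a further £10,000 and pays £1m with probability
    # 1 in 10. Nine times out of ten the pilot fails and the £10,000
    # scale-up cost is never paid.
    profit_after_pilot = 1_000_000 / 10 - 10_000  # expected, given pilot success
    return -1_000 + profit_after_pilot / 10

print(simple_venture())     # -1000.0: an expected loss of £1,000
print(two_stage_venture())  # 8000.0: an expected profit of £8,000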

In a real project, nobody could ever be sure about the probability of success or its rewards. But the idea behind this example is very real: there’s huge value in experiments that help us decide whether to go big or go home.

We can see this effect in data from the venture capital industry. One study looked at companies backed by US venture capitalists (VCs) between 1986 and 1997, comparing them with a randomly chosen sample of companies of the same age and size, drawn from the same industries. (These results were published in this summer’s Journal of Economic Perspectives in an article titled “Entrepreneurship as Experimentation”.)

By 2007, only a quarter of the VC-backed firms had survived, while one-third of the comparison group was still in business. However, the surviving VC-backed firms were big successes, employing more than five times as many people as the surviving comparison firms. We can’t tell from this data whether the VCs are creating winners or merely spotting them in advance, but we can see that big successes on an aggregate scale are entwined with a very high failure rate.

The option to conduct a cheap test run can be very valuable. It’s easy to lose sight of quite how valuable. Aza Raskin, who was lead designer for the Firefox browser, cites the late Paul MacCready as his inspiration on this point. MacCready was one of the great aeronautical engineers, and his most famous achievement was to build the Gossamer Condor and the Gossamer Albatross, human-powered planes that tore up the record books in the late 1970s.

One of MacCready’s key ideas was to develop a plane that could swiftly be rebuilt after a crash. Each test flight revealed fresh information, MacCready figured, but human-powered planes are so feather-light that every flight also damages the plane. The most important thing a designer could do was to build a plane that could be rebuilt within days or even hours after a crash – rather than weeks or months. Once the problem of fast, cheap experimentation was solved, everything else followed.

Some professions have internalised this lesson. Architects use scale models to shed light on how a completed building might look and feel. A nicely made model can take days of work to complete but that is not much compared with the cost of the building itself.

Politicians don’t find it so easy. A new policy is hardly a new policy at all unless it can be unveiled in a blaze of glory, preferably as a well-timed surprise. That hardly suits the MacCready approach. Imagine the conference speech: “We’re announcing a new array of quick-and-dirty experiments with the welfare state. We’ll be iterating rapidly after each new blunder and heart-rending tabloid anecdote.”

A subtler problem is that projects need a certain scale before powerful decision makers will take them seriously.

“The transaction costs involved in setting up any aid project are so great that most donors don’t want to consider a project spending less than £20m,” says Owen Barder, director for Europe at the Center for Global Development, a think-tank. I suspect that the same insight applies far beyond the aid industry. Governments and large corporations can find it’s such a hassle to get anything up and running that the big stakeholders don’t want to be bothered with anything small.

That is a shame. The real leverage of a pilot scheme is that although it is cheap, it could have much larger consequences. The experiment itself may seem too small to bother with; the lesson it teaches is not.

Also published at ft.com.

21st of October, 2014