Undercover Economist

Peer-to-peer pressure

‘Are these new players providing a valuable new service or are they merely an arbitrage play?’

Peer-to-peer markets used to be simple: there was eBay. If you had a broken laser pointer you wanted to sell, eBay was the place to find a buyer. Then came the local marketplace Craigslist and, before long, peer-to-peer markets were linking buyers and sellers in every market imaginable: crafts (Etsy); chores (TaskRabbit); transport (Uber); accommodation (Airbnb); consumer loans (Zopa); and even booze (Drizly).

It was exciting, for a while, to realise that you could actually get a car home on a Saturday night in San Francisco, or make money renting out your attic, but the backlash has been simmering for some time. That backlash mixes two complaints, elegantly exemplified when a group of taxicab owners and drivers sued Uber in Atlanta a year ago.

“Uber has been operating in Atlanta with little concern about the safety of their passengers and zero concern for the laws that protect them,” said one of the plaintiffs in a statement to The Atlanta Journal-Constitution. “Our incomes have steadily dropped since Uber started and legally licensed drivers are leaving the business.”

In other words, peer-to-peer services such as Uber are said to be hazardous, and they are also unwelcome competition for incumbents. (Several studies have supported the common-sense conclusion that these new competitors threaten the revenue of existing players.)

These might seem very different issues. It’s one thing to worry about signposting fire exits when you let out a spare room on Airbnb. Protecting the profit margins of fine upstanding local hoteliers is another matter.

Yet the two questions are inevitably tangled up, because both touch on the way incumbents are regulated. One would hope that regulators protect consumers, employees and the public by making it more difficult for drunks and sexual predators to drive cars, for firetraps to host unsuspecting tourists, and for employers to exploit workers. But some regulations seem designed more to protect insiders than to protect consumers.

Consider the New York taxi medallion system: you can’t drive a taxicab without one, and they’ve been million-dollar assets at times, often owned by investors and leased to drivers at a rate of $100 or more a day. New kids Uber and Lyft not only compete for passengers, they compete for drivers too, who may prefer paying commission to these new players rather than a flat daily fee to a medallion owner.

Taxi medallions are a scarce asset created purely by a stroke of the regulator’s pen, and you don’t need to be a hardcore libertarian to conclude that, in this case, the regulator is motivated by protecting the value of this asset. Nor does it take a free-market fundamentalist to believe that if consumers think that taxicabs provide a safer service, they will pay for that safer service.

It may help to approach the debate from a different direction. Are these new players providing a valuable new service or are they merely an arbitrage play, using technology to sidestep taxes that others must pay, and to limbo-dance under regulatory hurdles that rivals must jump?

If the economic value is real, then it is up to the regulators to figure out how to unleash that value rather than trying to legislate it out of existence.

A new study of peer-to-peer markets by economists Liran Einav, Chiara Farronato and Jonathan Levin argues that the economic value is there all right. Peer-to-peer markets make two things possible that were previously hard to imagine.

The first is to make arid markets lush and fertile. The quintessential example is eBay, enabling buyers and sellers of the quirkiest products to find each other and gain by trading. Etsy fits the eBay mould, with sellers who will knit you a cuddly toy designed to resemble a dissected frog, a product that seems unlikely to find a niche on the high street.

The second peer-to-peer trick is to introduce part-timers into the market to meet surges in demand. It’s inefficient to build hotels just to cope with the summer rush, or to expand taxi fleets just for New Year’s Eve but, if the demand is there, peer-to-peer markets can pull in a bit of extra supply. As a result, it should be easier to get a cab at 11pm on a Friday, and prices for hotel rooms should be more reasonable during school holidays.

Peer-to-peer markets are well worth having. The challenge for regulators, then, is to catch up. How should Airbnb landlords who let a room for 10 nights a year be placed on a level playing field with regular bed-and-breakfast landlords? Are Uber drivers employees (as a California labour commissioner recently ruled)? Or freelancers using Uber’s software to help them do their jobs (as Uber insists)? Or something else?

James Surowiecki, writing in The New Yorker, recently argued for “something else”, and called for a regulatory overhaul to give “gig-economy workers a better balance of flexibility and security”. That sounds like an admirable aim, although achieving it isn’t straightforward. Giving pensions, vacation rights or unemployment insurance to Uber drivers or TaskRabbit “taskers” would require both clever rules and clever admin systems.

Peer-to-peer markets may once have been simple; now there is more at stake than the occasional broken laser pointer.

Written for and first published at ft.com.

Undercover Economist

What cities tell us about the economy

‘In 1667 the Dutch ceded Manhattan to the British, thinking sugar-rich Suriname was a better bet’

The economic indicators that surround us are familiar, as are the criticisms they attract. The consumer prices index doesn’t fully capture the boon of new products; unemployment figures do not count workers who have given up the job hunt in despair; gross domestic product (GDP) includes bad things if they have a market price, and excludes good things if they don’t.

But there is one fundamental flaw in all these statistics that is rarely discussed: they are almost always applied to countries. It is not impossible to find educated guesses about the GDP of Cambridge, or the inflation rate in Mumbai, and there is nothing conceptually troubling about trying to calculate either. Yet most economic statistics describe the nation state.

This is odd, because the nation state is a political unit, not an economic one. Policy does influence the economy, of course — national authorities can impose a common interest rate, tax rates and regulations. But, as the unorthodox thinker and writer Jane Jacobs used to argue, the natural unit of macroeconomic analysis is not a nation state at all. It is a city and its surrounding region.

Aberdeen, Cardiff, Glasgow and Manchester are subject to some similarities by virtue of their shared participation in something we call “the British economy” but economically they are quite different. Their relative fortunes fluctuate because they are pushed and pulled by different forces.

In her book Cities and the Wealth of Nations , Jacobs zooms in still further, looking at “Shinohata”, a pseudonymous Japanese hamlet a hundred miles north-west of Tokyo. (She relies on a rich description of Shinohata by sociologist Ronald Dore.) Shinohata was initially a subsistence economy, supplemented by woodland foraging and a little silk farming. In the 20th century, the villagers gained some time thanks to improved agricultural techniques, and they used it to produce more silk cocoons. After the war, Tokyo’s expansion pulled Shinohata into its economic orbit. The booming Japanese capital became a market for Shinohata’s fresh fruit and wild oak mushrooms; Tokyo’s government paid for bridges and roads; its capitalists built a factory; its labour market lured young men and women from their village existence. The tale is intricate and unpredictable; Japan’s economic miracle, as recorded in the national statistics, was actually the sum of countless unrecorded stories of local development.

Jacobs is not the only person to argue that economic development may be profitably studied through a magnifying glass. A new research paper from three development economists, William Easterly, Laura Freschi and Steven Pennings, offers “A Long History of a Short Block” — a Shinohata-style tale of the economic development of a single 486ft block of Greene Street, between Houston and Prince Street in downtown Manhattan.

Easterly, a former World Bank researcher, is well known in development circles for his scepticism about how much development can ever be planned, and how much credit political leaders and their expert advisers deserve when things go well.

“Here’s a block where there is no leader; there’s no president or prime minister of this block,” he explained to me. Greene Street, he suggests, offers us a perspective on the more spontaneous, decentralised features of economic development.

Greene Street’s history certainly offers plenty of rapid and surprising changes to observe. The Dutch, who had colonised Manhattan in 1624, decided in 1667 to cede what is now New York to the British, in exchange for guarantees over their possession of what is now Suriname in South America. The Dutch thought sugar-rich Suriname was a better bet but New York City’s economy is now more than a hundred times larger than Suriname’s.

In 1850, Greene Street was a prosperous residential district with several households who would be multimillionaires in today’s terms. Two large hotels and a theatre opened nearby, and prostitutes started to move in. By 1870, the middle classes had fled and the block was at the heart of one of New York City’s largest sex-work districts.

In the late 19th century, perhaps because property values in the red-light area were low, entrepreneurs swooped in to build large cast-iron stores and warehouses for the garment trade. Greene Street’s fortunes waned when the industry moved uptown after 1910, and property values collapsed. In the 1940s and 1950s, urban planners suggested bulldozing the lot and starting again but a community campaign — famously involving Jacobs herself — fought them off. Property values were revived as artists colonised Greene Street in the 1950s and 1960s, attracted by the large, airy and cheap spaces. None of these changes could easily have been predicted; some are rather mysterious even in retrospect.

The lessons of Greene Street? Getting the basic infrastructure right — streets, water, sanitation, policing — is a good idea. Aggressive planning, knocking down entire blocks in response to temporary weakness, is probably not. Predicting the process of economic development at a local level is a game for suckers. Most importantly, even a tremendous development success — the United States and, within it, New York City — is going to show some deep wrinkles to those who get in close.

Written for and first published at ft.com.


Multi-tasking: how to survive in the 21st century

Modern life now forces us to do a multitude of things at once — but can we? Should we?

Forget invisibility or flight: the superpower we all want is the ability to do several things at once. Unlike other superpowers, however, being able to multitask is now widely regarded as a basic requirement for employability. Some of us sport computers with multiple screens, to allow tweeting while trading pork bellies and frozen orange juice. Others make do with reading a Kindle while poking at a smartphone and glancing at a television in the corner with its two rows of scrolling subtitles. We think nothing of sending an email to a colleague to suggest a quick coffee break, because we can feel confident that the email will be read within minutes.

All this is simply the way the modern world works. Multitasking is like being able to read or add up, so fundamental that it is taken for granted. Doing one thing at a time is for losers — recall Lyndon Johnson’s often bowdlerised dismissal of Gerald Ford: “He can’t fart and chew gum at the same time.”

The rise of multitasking is fuelled by technology, of course, and by social change as well. Husbands and wives no longer specialise as breadwinners and homemakers; each must now do both. Work and play blur. Your friends can reach you on your work email account at 10 o’clock in the morning, while your boss can reach you on your mobile phone at 10 o’clock at night. You can do your weekly shop sitting at your desk and you can handle a work query in the queue at the supermarket.

This is good news in many ways — how wonderful to be able to get things done in what would once have been wasted time! How delightful the variety of it all is! No longer must we live in a monotonous, Taylorist world, painstakingly focusing on repetitive tasks until we lose our minds.

And yet we are starting to realise that the blessings of a multitasking life are mixed. We feel overwhelmed by the sheer number of things we might plausibly be doing at any one time, and by the feeling that we are on call at any moment.

And we fret about the unearthly appetite of our children to do everything at once, flipping through homework while chatting on WhatsApp, listening to music and watching Game of Thrones. (According to a recent study by Sabrina Pabilonia of the US Bureau of Labor Statistics, for over half the time that high-school students spend doing homework, they are also listening to music, watching TV or otherwise multitasking. That trend is on the increase.) Can they really handle all these inputs at once? They seem to think so, despite various studies suggesting otherwise.

And so a backlash against multitasking has begun — a kind of Luddite self-help campaign. The poster child for uni-tasking was launched on the crowdfunding website Kickstarter in December 2014. For $499 — substantially more than a multifunctional laptop — “The Hemingwrite” computer promised a nice keyboard, a small e-ink screen and an automatic cloud back-up. You couldn’t email on the Hemingwrite. You couldn’t fool around on YouTube, and you couldn’t read the news. All you could do was type. The Hemingwrite campaign raised over a third of a million dollars.

The Hemingwrite (now rebranded the Freewrite) represents an increasingly popular response to the multitasking problem: abstinence. Programs such as Freedom and Self-Control are now available to disable your browser for a preset period of time. The popular blogging platform WordPress offers “distraction-free writing”. The Villa Stéphanie, a hotel in Baden-Baden, offers what has been branded the “ultimate luxury”: a small silver switch beside the hotel bed that will activate a wireless blocker and keep the internet and all its temptations away.

The battle lines have been drawn. On one side: the culture of the modern workplace, which demands that most of us should be open to interruption at any time. On the other, the uni-tasking refuseniks who insist that multitaskers are deluding themselves, and that focus is essential. Who is right?

The ‘cognitive cost’

There is ample evidence in favour of the proposition that we should focus on one thing at a time. Consider a study led by David Strayer, a psychologist at the University of Utah. In 2006, Strayer and his colleagues used a high-fidelity driving simulator to compare the performance of drivers who were chatting on a mobile phone to drivers who had drunk enough alcohol to be at the legal blood-alcohol limit in the US. Chatting drivers didn’t adopt the aggressive, risk-taking style of drunk drivers but they were unsafe in other ways. They took much longer to respond to events outside the car, and they failed to notice a lot of the visual cues around them. Strayer’s famous conclusion: driving while using a mobile phone is as dangerous as driving while drunk.

Less famous was Strayer’s finding that it made no difference whether the driver was using a handheld or hands-free phone. The problem with talking while driving is not a shortage of hands. It is a shortage of mental bandwidth.

Yet this discovery has made little impression either on public opinion or on the law. In the United Kingdom, for example, it is an offence to use a hand-held phone while driving but perfectly legal if the phone is used hands-free. We’re happy to acknowledge that we only have two hands but refuse to admit that we only have one brain.

Another study, by Strayer, David Sanbonmatsu and others, suggested that we are also poor judges of our ability to multitask. The subjects who reported doing a lot of multitasking were also the ones who performed poorly on tests of multitasking ability. They systematically overrated their ability to multitask and they displayed poor impulse control. In other words, wanting to multitask is a good sign that you should not be multitasking.

We may not immediately realise how multitasking is hampering us. The first time I took to Twitter to comment on a public event was during a televised prime-ministerial debate in 2010. The sense of buzz was fun; I could watch the candidates argue and the twitterati respond, compose my own 140-character profundities and see them being shared. I felt fully engaged with everything that was happening. Yet at the end of the debate I realised, to my surprise, that I couldn’t remember anything that Brown, Cameron and Clegg had said.

A study conducted at UCLA in 2006 suggests that my experience is not unusual. Three psychologists, Karin Foerde, Barbara Knowlton and Russell Poldrack, recruited students to look at a series of flashcards with symbols on them, and then to make predictions based on patterns they had recognised. Some of these prediction tasks were done in a multitasking environment, where the students also had to listen to low- and high-pitched tones and count the high-pitched ones. You might think that making predictions while also counting beeps was too much for the students to handle. It wasn’t. They were equally competent at spotting patterns with or without the note-counting task.

But here’s the catch: when the researchers then followed up by asking more abstract questions about the patterns, the cognitive cost of the multitasking became clear. The students struggled to answer questions about the predictions they’d made in the multitasking environment. They had successfully juggled both tasks in the moment — but they hadn’t learnt anything that they could apply in a different context.

That’s an unnerving discovery. When we are sending email in the middle of a tedious meeting, we may nevertheless feel that we’re taking in what is being said. A student may be confident that neither Snapchat nor the live football is preventing them taking in their revision notes. But the UCLA findings suggest that this feeling of understanding may be an illusion and that, later, we’ll find ourselves unable to remember much, or to apply our knowledge flexibly. So, multitasking can make us forgetful — one more way in which multitaskers are a little bit like drunks.

Early multitaskers

All this is unnerving, given that the modern world makes multitasking almost inescapable. But perhaps we shouldn’t worry too much. Long before multitasking became ubiquitous, it had a long and distinguished history.

In 1958, a young psychologist named Bernice Eiduson embarked on a long-term research project — so long-term, in fact, that Eiduson died before it was completed. Eiduson studied the working methods of 40 scientists, all men. She interviewed them periodically over two decades and put them through various psychological tests. Some of these scientists found their careers fizzling out, while others went on to great success. Four won Nobel Prizes and two others were widely regarded as serious Nobel contenders. Several more were invited to join the National Academy of Sciences.

After Eiduson died, some of her colleagues published an analysis of her work. These colleagues, Robert Root-Bernstein, Maurine Bernstein and Helen Garnier, wanted to understand what determined whether a scientist would have a long, productive career: a combination of genius and longevity.

There was no clue in the interviews or the psychological tests. But looking at the early publication record of these scientists — their first 100 published research papers — the researchers discovered a pattern: the top scientists were constantly changing the focus of their research.

Over the course of these first 100 papers, the most productive scientists covered five different research areas and moved from one of these topics to another an average of 43 times. They would publish, and change the subject, publish again, and change the subject again. Since most scientific research takes an extended period of time, these projects must have overlapped. The secret to a long and highly productive scientific career? It’s multitasking.

Charles Darwin thrived on spinning multiple plates. He began his first notebook on “transmutation of species” two decades before The Origin of Species was published. His A Biographical Sketch of an Infant was based on notes made after his son William was born; William was 37 by the time his father published it. Darwin spent nearly 20 years working on climbing and insectivorous plants. And Darwin published a learned book on earthworms in 1881, just before his death. He had been working on it for 44 years. When two psychologists, Howard Gruber and Sara Davis, studied Darwin and other celebrated artists and scientists they concluded that such overlapping interests were common.

Another team of psychologists, led by Mihaly Csikszentmihalyi, interviewed almost 100 exceptionally creative people, from the jazz pianist Oscar Peterson to the science writer Stephen Jay Gould to the double Nobel laureate physicist John Bardeen. Csikszentmihalyi is famous for developing the idea of “flow”, the blissful state of being so absorbed in a challenge that one loses track of time and sets all distractions to one side. Yet every one of Csikszentmihalyi’s interviewees made a practice of keeping several projects bubbling away simultaneously.

Just internet addiction?

If the word “multitasking” can apply to both Darwin and a teenager with a serious Instagram habit, there is probably some benefit in defining our terms. There are at least four different things we might mean when we talk about multitasking. One is genuine multitasking: patting your head while rubbing your stomach; playing the piano and singing; farting while chewing gum. Genuine multitasking is possible, but at least one of the tasks needs to be so practised as to be done without thinking.

Then there’s the challenge of creating a presentation for your boss while also fielding phone calls for your boss and keeping an eye on email in case your boss wants you. This isn’t multitasking in the same sense. A better term is task switching, as our attention flits between the presentation, the telephone and the inbox. A great deal of what we call multitasking is in fact rapid task switching.

Task switching is often confused with a third, quite different activity — the guilty pleasure of disappearing down an unending click-hole of celebrity gossip and social media updates. There is a difference between the person who reads half a page of a journal article, then stops to write some notes about a possible future project, then goes back to the article — and someone who reads half a page of a journal article before clicking on bikini pictures for the rest of the morning. “What we’re often calling multitasking is in fact internet addiction,” says Shelley Carson, a psychologist and author of Your Creative Brain. “It’s a compulsive act, not an act of multitasking.”

A final kind of multitasking isn’t a way of getting things done but simply the condition of having a lot of things to do. The car needs to be taken in for a service. Your tooth is hurting. The nanny can’t pick up the kids from school today. There’s a big sales meeting to prepare for tomorrow, and your tax return is due next week. There are so many things that have to be done, so many responsibilities to attend to. Having a lot of things to do is not the same as doing them all at once. It’s just life. And it is not necessarily a stumbling block to getting things done — as Bernice Eiduson discovered as she tracked scientists on their way to their Nobel Prizes.

The fight for focus

These four practices — multitasking, task switching, getting distracted and managing multiple projects — all fit under the label “multitasking”. This is not just because of a simple linguistic confusion. The versatile networked devices we use tend to blur the distinction, serving us as we move from task to task while also offering an unlimited buffet of distractions. But the different kinds of multitasking are linked in other ways too. In particular, the highly productive practice of having multiple projects invites the less-than-productive habit of rapid task switching.

To see why, consider a story that psychologists like to tell about a restaurant near Berlin University in the 1920s. (It is retold in Willpower, a book by Roy Baumeister and John Tierney.) The story has it that when a large group of academics descended upon the restaurant, the waiter stood and calmly nodded as each new item was added to their complicated order. He wrote nothing down, but when he returned with the food his memory had been flawless. The academics left, still talking about the prodigious feat; but when one of them hurried back to retrieve something he’d left behind, the waiter had no recollection of him. How could the waiter have suddenly become so absent-minded? “Very simple,” he said. “When the order has been completed, I forget it.”

One member of the Berlin school was a young experimental psychologist named Bluma Zeigarnik. Intrigued, she demonstrated that people have a better recollection of uncompleted tasks. This is called the “Zeigarnik effect”: when we leave things unfinished, we can’t quite let go of them mentally. Our subconscious keeps reminding us that the task needs attention.

The Zeigarnik effect may explain the connection between facing multiple responsibilities and indulging in rapid task switching. We flit from task to task to task because we can’t forget about all of the things that we haven’t yet finished. We flit from task to task to task because we’re trying to get the nagging voices in our head to shut up.

Of course, there is much to be said for “focus”. But there is much to be said for copperplate handwriting, too, and for having a butler. The world has moved on. There’s something appealing about the Hemingwrite and the hotel room that will make the internet go away, but also something futile.

It is probably not true that Facebook is all that stands between you and literary greatness. And in most office environments, the Hemingwrite is not the tool that will win you promotion. You are not Ernest Hemingway, and you do not get to simply ignore emails from your colleagues.

If focus is going to have a chance, it’s going to have to fight an asymmetric war. Focus can only survive if it can reach an accommodation with the demands of a multitasking world.

Loops and lists

The word “multitasking” wasn’t applied to humans until the 1990s, but it has been used to describe computers for half a century. According to the Oxford English Dictionary, it was first used in print in 1966, when the magazine Datamation described a computer capable of appearing to perform several operations at the same time.

Just as with humans, computers typically create the illusion of multitasking by switching tasks rapidly. Computers perform the switching more quickly, of course, and they don’t take 20 minutes to get back on track after an interruption.

Nor does a computer fret about what is not being done. While rotating a polygon and sending text to the printer, it feels no guilt that the mouse has been left unchecked for the past 16 milliseconds. The mouse’s time will come. Being a computer means never having to worry about the Zeigarnik effect.

Is there a lesson in this for distractible sacks of flesh like you and me? How can we keep a sense of control despite the incessant guilt of all the things we haven’t finished?

“Whenever you say to someone, ‘I’ll get back to you about that’, you just opened a loop in your brain,” says David Allen. Allen is the author of a cult productivity book called Getting Things Done. “That loop will keep spinning until you put a placeholder in a system you can trust.”

Modern life is always inviting us to open more of those loops. It isn’t necessarily that we have more work to do, but that we have more kinds of work that we ought to be doing at any given moment. Tasks now bleed into each other unforgivingly. Whatever we’re doing, we can’t escape the sense that perhaps we should be doing something else. It’s these overlapping possibilities that take the mental toll.

The principle behind Getting Things Done is simple: close the open loops. The details can become rather involved but the method is straightforward. For every single commitment you’ve made to yourself or to someone else, write down the very next thing you plan to do. Review your lists of next actions frequently enough to give you confidence that you won’t miss anything.

This method has a cult following, and practical experience suggests that many people find it enormously helpful — including me (see below). Only recently, however, did the psychologists E J Masicampo and Roy Baumeister find some academic evidence to explain why people find relief by using David Allen’s system. Masicampo and Baumeister found that you don’t need to complete a task to banish the Zeigarnik effect. Making a specific plan will do just as well. Write down your next action and you quiet that nagging voice at the back of your head. You are outsourcing your anxiety to a piece of paper.

A creative edge?

It is probably a wise idea to leave rapid task switching to the computers. Yet even frenetic flipping between Facebook, email and a document can have some benefits alongside the costs.

The psychologist Shelley Carson and her student Justin Moore recently recruited experimental subjects for a test of rapid task switching. Each subject was given a pair of tasks to do: crack a set of anagrams and read an article from an academic journal. These tasks were presented on a computer screen, and for half of the subjects they were presented sequentially — first solve the anagrams, then read the article. For the other half of the experimental group, the computer switched every two-and-a-half minutes between the anagrams and the journal article, forcing the subjects to change mental gears many times.

Unsurprisingly, task switching slowed the subjects down and scrambled their thinking. They solved fewer anagrams and performed poorly on a test of reading comprehension when forced to refocus every 150 seconds.

But the multitasking treatment did have a benefit. Subjects who had been task switching became more creative. To be specific, their scores on tests of “divergent” thinking improved. Such tests ask subjects to pour out multiple answers to odd questions. They might be asked to think of as many uses as possible for a rolling pin or to list all the consequences they could summon to mind of a world where everyone has three arms. Involuntary multitaskers produced a greater volume and variety of answers, and their answers were more original too.

“It seems that switching back and forth between tasks primed people for creativity,” says Carson, who is an adjunct professor at Harvard. The results of her work with Moore have not yet been published, and one might reasonably object that such tasks are trivial measures of creativity. Carson responds that scores on these laboratory tests of divergent thinking are correlated with substantial creative achievements such as publishing a novel, producing a professional stage show or creating an award-winning piece of visual art. Those who insist that great work can only be achieved through superhuman focus should think long and hard on this discovery.

Carson and colleagues have found an association between significant creative achievement and a trait psychologists term “low latent inhibition”. Latent inhibition is the filter that all mammals have that allows them to tune out apparently irrelevant stimuli. It would be crippling to listen to every conversation in the open-plan office and the hum of the air conditioning, while counting the number of people who walk past the office window. Latent inhibition is what saves us from having to do so. These subconscious filters let us walk through the world without being overwhelmed by all the different stimuli it hurls at us.

And yet people whose filters are a little bit porous have a big creative edge. Think on that, uni-taskers: while you busily try to focus on one thing at a time, the people who struggle to filter out the buzz of the world are being reviewed in The New Yorker.

“You’re letting more information into your cognitive workspace, and that information can be consciously or unconsciously combined,” says Carson. Two other psychologists, Holly White and Priti Shah, found a similar pattern for people suffering from attention deficit hyperactivity disorder (ADHD).

It would be wrong to romanticise potentially disabling conditions such as ADHD. All these studies were conducted on university students, people who had already demonstrated an ability to function well. But their conditions weren’t necessarily trivial — to participate in the White/Shah experiment, students had to have a clinical diagnosis of ADHD, meaning that their condition was troubling enough to prompt them to seek professional help.

It’s surprising to discover that being forced to switch tasks can make us more creative. It may be still more surprising to realise that in an age where we live under the threat of constant distraction, people who are particularly prone to being distracted are flourishing creatively.

Perhaps we shouldn’t be entirely surprised. It’s easier to think outside the box if the box is full of holes. And it’s also easier to think outside the box if you spend a lot of time clambering between different boxes. “The act of switching back and forth can grease the wheels of thought,” says John Kounios, a professor of psychology at Drexel University.

Kounios, who is co-author of The Eureka Factor, suggests that there are at least two other potentially creative mechanisms at play when we switch between tasks. One is that the new task can help us forget bad ideas. When solving a creative problem, it’s easy to become stuck because we think of an incorrect solution but simply can’t stop returning to it. Doing something totally new induces “fixation forgetting”, leaving us free to find the right answer.

Another is “opportunistic assimilation”. This is when the new task prompts us to think of a solution to the old one. The original Eureka moment is an example.

As the story has it, Archimedes was struggling with the task of determining whether a golden wreath truly was made of pure gold without damaging the ornate treasure. The solution was to determine whether the wreath had the same volume as a pure gold ingot with the same mass; this, in turn, could be done by submerging both the wreath and the ingot to see whether they displaced the same volume of water.

This insight, we are told, occurred to Archimedes while he was having a bath and watching the water level rise and fall as he lifted himself in and out. And if solving such a problem while having a bath isn’t multitasking, then what is?
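The arithmetic behind the legend is simple to check. Here is a minimal sketch using textbook densities for gold and silver; the wreath’s mass and its alloy mix are invented for illustration.

```python
# Densities in g/cm^3 (standard textbook values).
DENSITY_GOLD = 19.3
DENSITY_SILVER = 10.5

def displaced_volume(mass_g, density):
    """Volume of water displaced by a fully submerged object, in cm^3."""
    return mass_g / density

mass = 1000.0  # a hypothetical 1kg wreath

# A pure gold ingot of the same mass:
ingot = displaced_volume(mass, DENSITY_GOLD)  # ~51.8 cm^3

# A wreath secretly made of 70% gold and 30% silver by mass:
alloy_density = 1 / (0.7 / DENSITY_GOLD + 0.3 / DENSITY_SILVER)
wreath = displaced_volume(mass, alloy_density)  # ~64.8 cm^3

print(f"ingot: {ingot:.1f} cm^3, wreath: {wreath:.1f} cm^3")
# The mismatch betrays the fraud without damaging the wreath.
```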

Tim Harford is an FT columnist. His latest book is ‘The Undercover Economist Strikes Back’. Twitter: @TimHarford

Six ways to be a master of multitasking

1. Be mindful

“The ideal situation is to be able to multitask when multitasking is appropriate, and focus when focusing is important,” says psychologist Shelley Carson. Tom Chatfield, author of Live This Book, suggests making two lists, one for activities best done with internet access and one for activities best done offline. Connecting and disconnecting from the internet should be deliberate acts.

2. Write it down

The essence of David Allen’s Getting Things Done is to turn every vague guilty thought into a specific action, to write down all of the actions and to review them regularly. The point, says Allen, is to feel relaxed about what you’re doing — and about what you’ve decided not to do right now — confident that nothing will fall through the cracks.

3. Tame your smartphone

The smartphone is a great servant and a harsh master. Disable needless notifications — most people don’t need to know about incoming tweets and emails. Set up a filing system within your email so that when a message arrives that requires a proper keyboard to answer — ie 50 words or more — you can move that email out of your inbox and place it in a folder where it will be waiting for you when you fire up your computer.

4. Focus in short sprints

The “Pomodoro Technique” — named after a kitchen timer — alternates focusing for 25 minutes and breaking for five minutes, across two-hour sessions. Productivity guru Merlin Mann suggests an “email dash”, where you scan email and deal with urgent matters for a few minutes each hour. Such ideas let you focus intensely while also switching between projects several times a day.

5. Procrastinate to win

If you have several interesting projects on the go, you can procrastinate over one by working on another. (It worked for Charles Darwin.) A change is as good as a rest, they say — and as psychologist John Kounios explains, such task switching can also unlock new ideas.

6. Cross-fertilise

“Creative ideas come to people who are interdisciplinary, working across different organisational units or across many projects,” says author and research psychologist Keith Sawyer. (Appropriately, Sawyer is also a jazz pianist, a former management consultant and a sometime game designer for Atari.) Good ideas often come when your mind makes unexpected connections between different fields.

Tim Harford’s To-Do Lists

David Allen’s Getting Things Done system — or GTD — has reached the status of a religion among some productivity geeks. At its heart, it’s just a fancy to-do list, but it’s more powerful than a regular list because it’s comprehensive, specific and designed to prompt you when you need prompting. Here’s how I make the idea work for me.

Write everything down. I use Google Calendar for appointments and an electronic to-do list called Remember the Milk, plus an ad hoc daily list on paper. The details don’t matter. The principle is never to carry a mental commitment around in your head.

Make the list comprehensive. Mine currently has 151 items on it. (No, I don’t memorise the number. I just counted.)

Keep the list fresh. The system works its anxiety-reducing magic best if you trust your calendar and to-do list to remind you when you need reminding. I spend about 20 minutes once a week reviewing the list to note incoming deadlines and make sure the list is neither missing important commitments nor cluttered with stale projects. Review is vital — the more you trust your list, the more you use it. The more you use it, the more you trust it.

List by context as well as topic. It’s natural to list tasks by topic or project — everything associated with renovating the spare room, for instance, or next year’s annual away-day. I also list them by context (this is easy on an electronic list). Things I can do when on a plane; things I can only do when at the shops; things I need to talk about when I next see my boss.

Be specific about the next action. If you’re just writing down vague reminders, the to-do list will continue to provoke anxiety. Before you write down an ill-formed task, take the 15 seconds required to think about exactly what that task is.
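For the terminally curious, the bones of the system fit in a few lines of code. The sketch below is my own illustration rather than anything Allen prescribes, and the projects and contexts are invented; it captures the two ideas that matter, namely that every commitment gets a concrete next action and that the list can be sliced by context as well as by project.

```python
from dataclasses import dataclass

@dataclass
class NextAction:
    project: str  # the commitment this action belongs to
    action: str   # the specific, concrete next step
    context: str  # where it can be done: "@phone", "@shops", "@boss"...

actions = [
    NextAction("spare room", "call builder for a quote", "@phone"),
    NextAction("away-day", "email venue shortlist", "@computer"),
    NextAction("spare room", "buy paint samples", "@shops"),
]

def by_context(items, context):
    """Slice the list by context: everything doable at the shops, say."""
    return [a for a in items if a.context == context]

for a in by_context(actions, "@shops"):
    print(f"[{a.project}] {a.action}")
```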

Written for and first published at ft.com.

Undercover Economist

Let’s be blunt: criticism works

‘If Amazon encourages its staff to be straight with each other about what should be fixed, so much the better’

Last month’s Amazon exposé in The New York Times evidently touched a white-collar nerve. Jodi Kantor and David Streitfeld described what might euphemistically be called an “intense” culture at Amazon’s headquarters in a feature article that promptly became the most commented-on story in the history of the newspaper’s website. As Kantor and Streitfeld told it, Amazon reduces grown men to tears and comes down hard on staff whose performance is compromised by distractions such as stillborn children, dying parents or simply having a family. Not for the first time, The Onion was 15 years ahead of the story with a December 2000 headline that bleakly satirised a certain management style: “There’s No ‘My Kid Has Cancer’ In Team.”

Mixed in with the grim anecdotes was a tale of a bracingly honest culture of criticism and self-criticism. (Rival firms, we are told, have been hiring Amazon workers after they’ve quit in exasperation, but are worried that these new hires may have become such aggressive “Amholes” that they won’t fit in anywhere else.)

At Amazon, performance reviews seem alarmingly blunt. One worker’s boss reeled off a litany of unachieved goals and inadequate skills. As the stunned recipient steeled himself to be fired, his superior announced, “Congratulations, you’re being promoted,” and gave him a hug.

It is important to distinguish between a lack of compassion and a lack of tact. It’s astonishing how often we pass up the chance to give or receive useful advice. If Amazon encourages its staff to be straight with each other about what should be fixed, so much the better.

We call workplace comments “feedback”. This is an ironic word to borrow from engineering, because while feedback in a physical system is automatic, with a clear link between cause and effect, feedback in a corporate environment is fraught with emotion and there is rarely a clear link between what was done and what is said about it.

The story of the Amazon worker who thought he was about to be fired is instructive. A list of goals not yet accomplished and skills that need improving is actually useful. Yet we’re so accustomed to receiving uninformative compliments — well done, good job — that a specific list sounds like grounds for dismissal.

Consider the contrast between a corporate manager and a sports coach. The manager usually wants to placate workers and avoid awkward confrontations. As a result, comments will be pleasant but too woolly to be of much use. The sports coach is likely to be far more specific: maintain lane discipline; straighten your wrist; do fewer repetitions with heavier weights. Being positive or negative is beside the point. What matters is concrete advice about how to do better.

A similar problem besets meetings. On the surface these group discussions aim at reaching a good decision but people may care more about getting along. People who like each other may find it harder to have sensible conversations about hard topics.

In the mid-1990s, Brooke Harrington, a sociologist, made a study of Californian investment clubs, where people joined together to research possible stock-market investments, debate their merits and invest as a collective enterprise. (The results were published in a book, Pop Finance.) Harrington found a striking distinction between clubs that brought together friends and those with no such social ties.

The clubs made up of strangers made much better investment decisions and, as a fly on the wall, Harrington could see why. These clubs had open disagreements about which investments to make; tough decisions were put to a vote; people who did shoddy research were called on it. All rather Amazonian. The friendlier clubs had a very different dynamic, because here people were more concerned with staying friends than with making good investments. Making good decisions often requires social awkwardness. People who are confused must be corrected. People who are free-riding must be criticised. Disagreements must be hashed out. The friendly groups often simply postponed hard decisions or passed over good opportunities because they would require someone to say out loud that someone else was wrong.

None of this should be a blanket defence of Amazon’s workplace culture — which, if the New York Times exposé is to be believed, sounds dreadful. Nor does it excuse being rude. But the problem is that honest criticism is so rare that it is often misinterpreted as rudeness.

In some contexts, letting politeness trump criticism can be fatal. From the operating theatre to the aeroplane cockpit, skilled professionals are being taught techniques such as “graded assertiveness” — or how to gently but firmly make your boss realise he is about to kill someone by mistake.

Scientists have wrestled with a similar challenge. As the great statistician Ronald Fisher once drily commented, “A scientific career is peculiar . . . its raison d’être is the increase of natural knowledge. Occasionally, therefore, an increase of natural knowledge occurs. But this is tactless, and feelings are hurt . . . it is inevitable that views previously expounded are shown to be either obsolete or false . . . some undoubtedly take it hard.”

Nobody likes to be told that they are wrong. But if there’s one thing worse than someone telling you that you are wrong, it’s no one telling you that you are wrong.

Written for and first published at ft.com.

Undercover Economist

Meet the Flop Pickers

‘If savvy consumers can help predict a product’s success, might there not be consumers whose clammy embrace spells its death?’

Spare a thought for the poor darlings who run your typical transnational, fast-moving, consumer goods company. They invest millions launching an exciting new product (Colgate ready meals, say, or Cosmopolitan brand yoghurt. Or Crystal Pepsi: it sounded like a Class A drug, it looked like water and it tasted pretty much like any other kind of Pepsi. How could it fail?) They give the product to focus groups, who like it. They trial it in a few select stores, and it sells well. Their retail partners are convinced. Then the product is launched to global fanfare, and the ungrateful customers refuse to buy it.

It is no secret that many new products fail. Naturally, companies are on the lookout for ways to identify failures earlier in the process, before they have sunk too much time and money into a product that will eventually collapse. If you’re going to flop, it’s better if you can do it quietly in the changing rooms rather than from the highest diving board with camera bulbs flashing.

Back in the mid-1980s, Eric von Hippel, a management professor, suggested working with what he called “lead” customers — people with more advanced or specialised needs whose demands might predict where the market could be heading. Today’s military technology is tomorrow’s household appliance; today’s professional imaging software is tomorrow’s smartphone app. In the innovative industries that interested von Hippel, paying attention to lead customers would produce great ideas for the mass market.

In more everyday sectors, lead customers might instead be fashionistas whose choices were copied by others. Or lead customers might simply be cutting-edge consumers of music, or coffee, or gluten-free cakes, who are always one step ahead of where the herd is already going. In any case, the strategy for a business is clear: identify these lead customers if you can, and pay attention to what they do and say.

Von Hippel and co-authors wrote in the Harvard Business Review in 1999 that “all processes designed to generate ideas for products begin with information collected from users. What separates companies is the kind of information they collect and from whom they collect it”.

What they meant was that companies should consider the ideas of lead customers, rather than gathering a group of John and Jane Does and showing them a product prototype. But lead customers aren’t the only unusual people who might be worth paying attention to.

If savvy influential consumers can help predict a product’s success, might it not be that there are consumers whose clammy embrace spells death for a product? It’s a counter-intuitive idea at first but, on further reflection, there’s a touch of genius about it.

Let’s say that some chap — let’s call him “Herb Inger” — simply adored Clairol’s Touch of Yogurt shampoo. He couldn’t get enough of Frito-Lay’s lemonade (nothing says “thirst-quenching” like salty potato chips, after all). He snapped up Bic’s range of disposable underpants. Knowing this, you get hold of Herb and you let him try out your new product, a zesty Cayenne Pepper eyewash. He loves it. Now you know all you need to know. The product is doomed, and you can quietly kill it while it is still small enough to drown in your bathtub.

A cute idea in theory — does it work in practice? Apparently so. Management professors Eric Anderson, Song Lin, Duncan Simester and Catherine Tucker have studied people, such as Herb, whom they call “Harbingers of Failure”. (Their paper by that name is forthcoming in the Journal of Marketing Research.) They used a data set from a chain of more than 100 convenience stores. The data covered more than 100,000 customers with loyalty cards, more than 10 million transactions and nearly 10,000 new products. Forty per cent of those products were no longer stocked after three years, and were defined as “flops”.

. . .

The harbinger customers are those who buy lots of flops and, in particular, those who buy flops and then go back for more, repeatedly buying the same unpopular product. It turns out that having identified these flop-loving customers, you can get a good idea of future failures by watching whatever they buy next. This is interesting: in principle there’s no reason why a customer who loves an unpopular flavour of soft drink would be more likely to also love an unpopular brand of shampoo.

In practice, however, the data show a class of people with an eclectic taste in products that others dislike.
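How might a retailer put this to work? Below is a minimal sketch in Python. The data, names and the 50 per cent threshold are all invented for illustration; the method in the paper is more careful, for instance giving extra weight to customers who repeatedly buy the same flop.

```python
import pandas as pd

# Toy loyalty-card history (invented): one row per purchase, flagging
# products that were later withdrawn within three years ("flops").
past = pd.DataFrame({
    "customer": ["herb", "herb", "herb", "ann", "ann", "ann", "bob", "bob"],
    "product":  ["p1",   "p2",   "p3",   "p1",  "p4",  "p5",  "p4",  "p5"],
    "flop":     [True,   True,   True,   True,  False, False, False, False],
})

# A customer's harbinger score: the share of their purchases that flopped.
score = past.groupby("customer")["flop"].mean()
harbingers = set(score[score > 0.5].index)  # here: {'herb'}

# Early buyers of a brand-new product whose fate is still unknown.
new_buyers = ["herb", "ann", "dee"]

# Warning signal: the fraction of early buyers who are known harbingers.
rate = sum(b in harbingers for b in new_buyers) / len(new_buyers)
print(f"harbinger share among early buyers: {rate:.0%}")  # 33%
```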

Anderson and his colleagues reckon that their results are robust within the particular context of the convenience store data. Whether the technique could also be used for films and books, or computers and tablets, remains to be seen. But it’s already a good example of the kind of patterns that emerge from much larger and more detailed data sets than ones traditionally available to social scientists.

A final question is whether you can spot Harbingers of Failure without access to their shopping habits, purely from demographic information. The answer seems to be no. Harbingers of Failure are much like the rest of us. The only difference is that they love the products that we hate.

Written for and first published at ft.com.

Undercover Economist

When it comes to banking, can we have too much of a good thing?

In 1980, the econometrician David Hendry (now Sir David) investigated a key economic question: what causes inflation? Hendry looked to the data for insight. He speculated that a particular variable, X, was largely responsible. He assembled data on variable X, performed a few deft mathematical tweaks and compared his transformed X with the path of consumer prices in the UK. Graphing the result showed an astonishingly close fit.

The only snag: X was cumulative rainfall. Since consumer prices and cumulative rainfall both rise over time, Hendry had an excellent platform for finding his spurious correlation. Statistical sleight of hand did the rest.

Hendry wanted to demonstrate just how easy it was to produce plausible nonsense by misusing the tools of statistics. “It is meaningless to talk about ‘confirming’ theories when spurious results are so easily obtained,” he wrote.
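Hendry’s trick is easy to reproduce. The toy simulation below is a sketch rather than Hendry’s actual procedure: it generates cumulative rainfall and a consumer price index as two unrelated upward-trending series, then shows that their correlation is nonetheless almost perfect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # two hundred quarterly observations, say

# Cumulative rainfall: a running sum of non-negative quarterly rainfall.
cumulative_rainfall = np.cumsum(rng.uniform(50, 150, size=n))

# A price index that drifts upward for entirely unrelated reasons.
quarterly_inflation = rng.normal(0.01, 0.005, size=n)
price_index = 100 * np.cumprod(1 + quarterly_inflation)

# The two series share nothing except an upward trend...
corr = np.corrcoef(cumulative_rainfall, price_index)[0, 1]
print(f"correlation: {corr:.3f}")  # typically above 0.95
```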
All this is by way of preamble, because a hot topic in economics at the moment is the role of finance in the health of the economy. For many years, economists tended to believe that a larger financial sector is good news for economic growth, with statistical evidence to back this up.

It won’t surprise anyone to hear that this belief is now viewed with some scepticism, and the statistical studies back up the scepticism too. Several recent research papers have found that finance can be bad for economic growth.

Given this statistical volte-face, Hendry’s conjuring trick comes to mind. Are our statistical studies simply serving as decoration for our existing prejudices?

A recent note by William Cline of the Peterson Institute for International Economics worries that the new anti-finance research rests on a statistical illusion. Rich countries tend to grow more slowly than poorer ones. But rich countries also have larger banking sectors. A naive analysis, then, would show that large banking sectors are correlated with slower growth. But, Cline points out, the same statistical methods show that doctors are bad for growth, that telephones are bad for growth and even that research and development technicians are bad for growth. In reality, all that is being shown is that being rich already is bad for further growth.
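Cline’s statistical illusion can be demonstrated the same way. In the sketch below (all numbers invented), growth and the size of the banking sector are both driven by income per head; a naive regression finds that banking is “bad for growth”, and the effect vanishes once income is controlled for.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical countries

# Income per head drives everything in this toy world.
log_income = rng.normal(9, 1, size=n)

# Richer countries grow more slowly (catch-up growth)...
growth = 8 - 0.6 * log_income + rng.normal(0, 0.5, size=n)

# ...and richer countries have bigger banking sectors.
banking = 2 + 1.5 * log_income + rng.normal(0, 1, size=n)

def ols(y, *regressors):
    """OLS coefficients of y on the regressors, with an intercept."""
    X = np.column_stack((np.ones(len(y)),) + regressors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols(growth, banking)[1])              # negative: 'banking is bad'
print(ols(growth, banking, log_income)[1])  # roughly zero with income in
```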
Cline makes a good point but a narrow one. It’s not particularly helpful to analyse banking like salt in cooking or water on your vegetable patch, and conclude that “some is good, too much is bad”. Unlike salt and water, banking services are complex and diverse. There’s a difference between a mortgage, a payday loan, life insurance, a credit derivative, a venture capital investment and an equity tracker fund. They’re all financial services, though.

More persuasive analyses of the relationship between finance and growth are asking not just whether finance can grow too big to be helpful but what kind of finance, and why.

In two working papers for DNB, the Dutch central bank, Christiane Kneer explores the idea that the trouble with banking is that it sucks talent away from the rest of the economy. Kneer looked at the process of banking deregulation state by state in the US and found that banks hired skilled individuals away from manufacturing, where labour productivity fell. If Kneer is right, too much finance is bad for growth because the banks are gobbling up too many of the smartest workers.

Another possibility, explored by economists Stephen Cecchetti and Enisse Kharroubi, is that large banking sectors aren’t doing their classic textbook job of funding the most productive investments. Instead, they like to lend money to organisations that already have collateral. Mortgages make attractive loans for this reason. Loans to a business that already owns an office block or an oil refinery are also tempting. But lending to a business with more intangible assets, such as an R&D department or a set of strong consumer relationships, is less attractive. Perhaps it is no surprise when Cecchetti and Kharroubi find that larger banking sectors are correlated with slower growth in R&D-intensive parts of the economy. But it is not encouraging.

. . .

Such research reminds us that we shouldn’t simply bash “banking” or “finance” in some generic way, blaming the banks for anything from the weather to the struggles of bees. We need to look at the details of what the financial services industry is doing, and whether financial regulations are protecting society or making things worse.

The truth is that we desperately need a strong banking sector. This entire research literature on finance and growth was originally kicked off by development economists who had observed that poor countries struggled to develop if they didn’t have decent banks. Thorsten Beck, an economist at Cass Business School, first started studying the effects of finance when he worked at the World Bank. “I didn’t care about the UK or the Netherlands. I cared about Kenya, Chile and Brazil.”

Without a strong and sizeable banking sector to lend money to businesses, it is very hard for a poor country to grow. It may well be that we have more finance sloshing around the economy than we can use. That is a big problem — but it is also a first-world problem.

Other Writing

The myth of the robot job-ocalypse

“The number of jobs lost to more efficient machines is only part of the problem . . . In the past, new industries hired far more people than those they put out of business. But this is not true of many of today’s new industries.”
This sentiment, from Time magazine, dates from the early weeks of John Kennedy’s presidency. Yet it would slot nicely into many a contemporary political speech. Like any self-respecting remorseless killer robot from the future, our techno-anxiety just keeps coming back.
Arnold Schwarzenegger’s Terminator was science fiction — but so, too, is the idea that robots and software algorithms are guzzling jobs faster than they can be created. There is an astonishing mismatch between our fear of automation and the reality so far.
How can this be? The highways of Silicon Valley are sprinkled with self-driving cars. Visit the cinema, the supermarket or the bank and the most prominent staff you will see are the security guards, who are presumably there to prevent you stealing valuable machines. Your computer once contented itself with correcting your spelling; now it will translate your prose into Mandarin. Given all this, surely the robots must have stolen a job or two by now?
Of course, the answer is that automation has been destroying particular jobs in particular industries for a long time, which is why most westerners who weave clothes or cultivate and harvest crops by hand do so for fun. In the past that process made us richer.
The worry now is that, with computers making jobs redundant faster than we can generate new ones, the result is widespread unemployment, leaving a privileged class of robot-owning rentiers and highly paid workers with robot-compatible skills.
This idea is superficially plausible: we are surrounded by cheap, powerful computers; many people have lost their jobs in the past decade; and inequality has risen in the past 30 years.
But the theory can be put to a very simple test: how fast is productivity growing? The usual measure of productivity is output per hour worked — by a human. Robots can produce economic output without any hours of human labour at all, so a sudden onslaught of robot workers should cause a sudden acceleration in productivity.
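The arithmetic of that test is worth making explicit. Measured productivity is output divided by human hours, so if robots were replacing workers while output held up, the ratio would mechanically jump. A toy calculation, with invented numbers:
```python
# Labour productivity = output / hours worked by humans.
# Toy numbers, purely illustrative.
output = 1000.0        # widgets produced per week
human_hours = 500.0    # hours of human labour per week

before = output / human_hours              # 2.0 widgets per hour

# Robots take over a fifth of the work: same output, fewer human hours
human_hours_with_robots = 400.0
after = output / human_hours_with_robots   # 2.5 widgets per hour

print(f"measured productivity jump: {after / before - 1:.0%}")  # 25%
```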
Instead, productivity has been disappointing. In the US, labour productivity growth averaged an impressive 2.8 per cent per year from 1948 to 1973. The result was mass affluence rather than mass joblessness. Productivity then slumped for a generation and perked up in the late 1990s but has now sagged again. The picture is little better in the UK, where labour productivity is notoriously low compared with the other G7 leading economies, and it has been falling further behind since 2007.
Taking a 40-year perspective, the impact of this long productivity malaise on typical workers in rich countries is greater than that of the rise in inequality, or of the financial crisis of 2008. In an age peppered with economic disappointments, the worst has been the stubborn failure of the robots to take our jobs.
Why, then, is so much commentary dedicated to the opposite view? Some of this is simple error: it has been a tough decade, economically speaking, and it is easy to blame robots for woes that should be laid at the door of others, such as bankers, austerity enthusiasts and eurozone politicians.
It is also true that robotics is making impressive strides. Gill Pratt, a robotics expert, recently described a “Cambrian explosion” for robotics in the Journal of Economic Perspectives. While robots have done little to cause mass unemployment in the recent past, that may change in future.
Automation has also undoubtedly changed the shape of the job market — economist David Autor, writing in the same journal, documents a rise in demand for low-skilled jobs and highly skilled jobs, and a hollowing out of jobs in the middle. There are signs that the hollow is moving further and further up the spectrum of skills. The robots may not be taking our jobs, but they are certainly shuffling them around.
Yet Mr Autor also points to a striking statistic: private investment in computers and software in the US has been falling almost continuously for 15 years. That is hard to square with the story of a robotic job-ocalypse. Surely we should expect to see a surge in IT investment as all those machines are installed?
Instead, in the wake of the great recession, managers have noted an ample supply of cheap human labour and have done without the machines for now. Perhaps there is some vast underground dormitory somewhere, all steel and sparks and dormant androids. In a corner, a chromium-plated robo-hack is tapping away at a column lamenting the fact that the humans have taken all the robots’ jobs.

24th of August, 2015
Undercover Economist

London’s turning . . . 

‘London’s excruciating price tag is not just a vulnerability but also a sign of success’

What is happening to London? Is the city devouring itself, its street life disappearing as flat-pack apartment blocks metastasise in once-healthy neighbourhoods? Or are we simply witnessing a process of regeneration and renewal? Rohan Silva, a former adviser to David Cameron, recently told architecture magazine Dezeen that London might lose its creative class because of high rents. In The Observer, Rowan Moore wrote that London “was suffering a form of entropy whereby anything distinctive is converted into property value”.

It is natural that journalists find this an urgent topic: surely the gap between the price of a typical London house and the salary of a typical London journalist has never been higher. But the topic is genuinely puzzling, because London’s excruciating price tag is not just a vulnerability but also a sign of success. It is hard to see how the city can be written off when so many people are willing to pay such extraordinary sums to live there.

It’s worth dismissing some disaster scenarios. Many people fret that the infamous apartments of One Hyde Park stand for the future of London: joyless, unaffordable and empty most of the time. But London is not going to become a gigantic holiday park full of second homes for billionaires — there simply aren’t enough billionaires out there to turn a city of more than eight million souls into the equivalent of a weekend hideaway in Cornwall.

Another concern is that international investors will snap up new-build apartments as investments, then leave them empty. But rental property is a much better investment when one actually rents it out, so this makes sense only if one accepts that most international investors are insane.

Nor has London abandoned its social housing sector — not yet. About a quarter of London’s households live in social rented housing, and the proportion is higher still in inner London. The prevalence of social housing has been falling since the 1980s — but slowly. Social housing is still on offer to almost a million households.

The vision of London as a ghost town can be disproved in an instant by the experience of actually being in London. Try to get on the Tube at Clapham Common at 8am, then tell me that London’s problem is underpopulation.

“London’s population is going up,” says Professor Christine Whitehead of the London School of Economics. “And there’s no indication that the new population is the wrong mix.”

“Mix” is an important word here. More than 50 years ago, in The Death and Life of Great American Cities, Jane Jacobs emphasised the merits of variety in city life. If a neighbourhood had a mix of homes, offices, factories, shops and nightlife, then the streets would be interesting, well used and, therefore, safe for many hours a day. More rigorous zoning might look tidy on a city map but would leave streets (and shops) unusably overcrowded at some times and deadly boring at others. Tedious and perhaps dangerous, such a neighbourhood would be fragile. Jacobs also advocated a mix of different industries so that ideas could spread from one to another. Her most famous example was Ida Rosenthal’s invention of the bra after working not in lingerie but in dressmaking.

Fundamental to all this, wrote Jacobs, was a mix of old and new buildings. Leaving aside unusual cases such as Venice, cities need a mix of higher-rent buildings and more decrepit low-rent buildings, because such buildings house different kinds of activity. Experimental projects, in particular, need somewhere cheap — the Silicon Valley garage, perhaps, or the east London warehouse. “Old ideas can sometimes use new buildings,” Jacobs wrote. “New ideas must use old buildings.”

This is London’s challenge: if only hedge-fund millionaires can afford to live there, then even the hedge-fund millionaires will not wish to. Artists, start-up hopefuls and hipster baristas need not only low-rent places to live but low-rent places to work. If London loses such places, then it will indeed lose its creative edge.

Still, London does not yet seem to be short of hipster baristas. Whitehead says: “I’ve heard that argument once a decade for the past 50 years.” It may yet come true — but so far, so good.

The trouble with creative destruction is that it is always easier to see what is being lost than what is being gained. Notting Hill seems pretty dull to me these days but Clapton Pond is on fine form: brothels have been replaced by bars; murderous dives have been replaced by gastropubs; fried-chicken joints have been replaced by coffee shops. The mix is changing but it’s still a mix, and it’s not obvious that the new mix is disastrous.

What would be disastrous would be if the lot were bulldozed to make way for apartment blocks in which nobody wanted to live because there was nothing to do outside. We must guard against the encroachment of such residential deserts, yet they remain rare in London. The city is, of course, a playground for the super-rich. But, for now, it remains much more than that.

Written for and first published at ft.com.

Undercover Economist

How to level the playing field

‘It costs something like a billion quid to turn a bottom-half Premier League club into one of the best teams in Europe’

They have dominated their national football league, winning 17 times in the past 50 years, far more than any rival. Indeed, only 10 other teams have managed to win the league at all since 1965. Can you guess the club?

You might be thinking Real Madrid — but no, Real have won 21 times in the past half century and Barcelona are not so far behind. Juventus would be a better guess — they have won 19 titles in the past 50 years, with rivals AC Milan boasting 10. Manchester United have 14, just ahead of Liverpool with 12.

But no. I am thinking of Havnar Bóltfelag. Affectionately known as HB, they are the Real Madrid of the Faroe Islands. The club is 111 years old — just a couple of years younger than Real — and just as dominant, albeit on a smaller stage. If Havnar Bóltfelag were ever to play Real, the entire population of the Faroe Islands could rattle around inside Real’s Bernabéu stadium with room to spare for some home fans.

The curious thing is that most European football leagues show a similar pattern of dominance. As sports economist Stefan Szymanski explains in his book Money and Football, if you look just at the distribution of wins — typically 18-22 for the leading team over the past 50 years, and 11 or 12 champions in total — it is hard to tell the European leagues apart.

There are some exceptions (the French and Irish leagues have been more competitive, and the Scottish and Dutch leagues less so) but the regularity is striking. That might seem surprising given that the average revenue per top-tier club in England, Germany, Spain and Italy is more than $100m, while that in the Faroe Islands and Luxembourg is less than $1m. And, in case you were wondering, Luxembourg fits the pattern perfectly. The Real Madrid of Luxembourg (or should that be the Havnar Bóltfelag of Luxembourg?) is Jeunesse Esch, also just over a century old. It has won 20 titles in the past 50 years.

So what explains this widespread pattern in which one or two teams tend to dominate? Not the cost of fielding 11 players, certainly. As Szymanski points out, there are plenty of football teams around. Running a football club does not demand scale, like a mobile phone network or a nuclear power station. It’s more like running a soft drinks company: anyone can do it at any scale but Coca-Cola somehow manages to be the biggest around. (Coincidentally, Coca-Cola’s global market share is the same as Real Madrid’s share of the past 50 La Liga titles and Pepsi’s is the same as Barcelona’s.)

Here is a hypothetical question that might shed some light: why don’t Havnar Bóltfelag borrow some money and hire superstars such as Lionel Messi and Cristiano Ronaldo? Let’s assume that a sufficiently gullible bank could be found — the nearest banking centres, Reykjavik and Edinburgh, suggest that anything is possible.

The answer is that Havnar Bóltfelag is not the best place to showcase the talents of the world’s most expensive players. Partly this is a question of geography. But partly HB’s problem is its reputation: it doesn’t have one. Until the club had more name recognition, good players would be reluctant to join, and would have to be given extra financial incentives. Sponsors would feel the same. And while football spectators do like to watch big names playing attractive football, they are also loyal to their old clubs. Even if Real Madrid and Barcelona were somehow to become feeder clubs for Havnar Bóltfelag, it would take a while for their support to ebb and HB’s to grow.

Now apply the same logic to smaller clubs in large leagues, and the reason for big club dominance starts to become clear. Clubs such as Rayo Vallecano in Madrid or West Ham in London face no geographical handicap in challenging Real Madrid or Chelsea. But they would have to pay over the odds to attract players, staff and fans while simultaneously earning less from advertisers and global TV rights.

It is possible to bridge this brand-name gap if you are willing to lose enough money. “It costs something like a billion quid to turn a club from a bottom-half Premier League team to one of the best teams in Europe,” says Szymanski. Manchester City, Chelsea and Paris Saint-Germain have been transformed into top-flight clubs. They are all now in a position to make money — or at least, to lose no more money than any other top club — but nobody expects their owners to recoup the cost of that transition, at least not through the business of football.

The clubs with a large fan base, long history and global name recognition are the clubs with the most to gain from spending a lot trying to win football matches. For anyone else to challenge them requires very deep pockets. That is why winners keep winning.

But should we care about this sporting dominance? Szymanski points out that European football with its lopsided leagues has far outgrown American sports, which are carefully engineered to ensure competitive balance. Apparently, the global army of fans of Real Madrid, Manchester United and Bayern Munich don’t mind if they win a lot.

Written for and first published at ft.com.

Undercover Economist

Worming our way to the truth

‘Why does such a large policy push need to be based on a handful of clinical trials?’

It was one of the most influential economics studies to have been published in the past 20 years, with a simple title, “Worms”. Now, its findings are being questioned in an exchange that somehow manages to be encouraging and frustrating all at once. Development economics is growing up, and getting acne.

The authors of “Worms”, economists Edward Miguel and Michael Kremer, studied a deworming project in an area of western Kenya where parasitic intestinal worms were a serious problem in 1998. The project was a “cluster randomisation”, meaning that the treatment for worms was randomised between entire schools rather than between children within each school.

Miguel and Kremer concluded three things from the randomised trial. First, deworming treatments produced not just health benefits but educational ones, because healthier children were able to attend school and flourish while in class. Second, the treatments were cracking value for money. Third, there were useful spillovers: when a school full of children was treated for worms, the parasites became less prevalent, so infection rates in nearby schools also fell.
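
To see how a cluster design can pick up such spillovers, consider a toy simulation in the spirit of the study’s design: treatment is randomised across whole schools, and untreated schools that happen to sit near treated ones improve a little too. All the effect sizes and the geography below are invented for illustration; this is not a reconstruction of the actual “Worms” analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_schools = 60

# Cluster randomisation: treatment assigned to whole schools, not pupils
treated = rng.permutation(np.repeat([True, False], n_schools // 2))

# Made-up geography: some untreated schools sit near treated ones
near_treated = rng.random(n_schools) < 0.5

# Invented effects: +0.30 attendance from treatment, +0.10 from spillover
attendance = (rng.normal(0.60, 0.03, n_schools)
              + 0.30 * treated
              + 0.10 * (~treated & near_treated))

print("treated:                 ", round(attendance[treated].mean(), 3))
print("untreated, near treated: ", round(attendance[~treated & near_treated].mean(), 3))
print("untreated, far away:     ", round(attendance[~treated & ~near_treated].mean(), 3))
# The gap between the last two lines is the estimated spillover
```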

The “Worms” study was influential in two very different ways. Activists began to campaign for wider use of deworming treatments, with some success. Development economists drew a separate lesson: that running randomised trials was an excellent way to figure out what worked.

In this, they were following in the footsteps of epidemiologists. Yet it is the epidemiologists who are now asking the awkward questions. Alexander Aiken and three colleagues from the London School of Hygiene and Tropical Medicine have just published a pair of articles in the International Journal of Epidemiology that examine the “Worms” experiment, test it for robustness and find it wanting.

Their first article follows the original methodology closely and uncovers some programming errors. Most are trivial but one of them calls into question the key claim that deworming produces spillover benefits. Their second article uses epidemiological methods rather than the statistical techniques preferred by economists. It raises the concern that the central “Worms” findings may be something of a fluke.

Everyone agrees that there were some errors in the original paper; such errors aren’t uncommon. There’s agreement, too, that it’s very useful to go back and check classic study results. All sides of the debate praise each other for being open and collegial with their work.

But on the key questions, there is little common ground. Miguel and Kremer stoutly defend their findings, arguing that the epidemiologists have gone through extraordinary statistical contortions to make the results disappear. Other development economists support them. After reviewing the controversy, Berk Ozler of the World Bank says: “I find the findings of the original study more robust than I did before.”

Yet epidemiologists are uneasy. The respected Cochrane Collaboration, an independent network of health researchers, has published a review of deworming evidence, which concludes that many deworming studies are of poor quality and produce rather weak evidence of benefits.

What explains this difference of views? Partly this is a clash of academic best practices. Consider the treatment of spillover effects. To Miguel and Kremer, these were the whole point of the cluster study. Aiken, however, says that an epidemiologist is trained to think of such effects as “contamination” — an undesirable source of statistical noise. Miguel believes this may explain some of the disagreement. The epidemiologists fret about the statistical headaches the spillovers cause, while the economists are enthused by the prospect that these spillovers will help improve childhood health and education.

Another cultural difference is this: epidemiologists have long been able to run rigorous trials but, with big money sometimes at stake, they have had to defend the integrity of those trials against the possibility of bias. They place a high value on double-blind methodologies, where neither subjects nor researchers know who has received the treatment and who is in the control group.

Economists, by contrast, are used to having to make the best of noisier data. Consider a century-old intervention, when John D Rockefeller funded a programme of hookworm eradication county by county across the American south. A few years ago, the economist Hoyt Bleakley teased apart census data from the early 20th century to show that this programme had led to big gains in schooling and in income. To an economist, that is clever work. To an epidemiologist, it’s a curiosity and of limited scientific value.

As you might expect, my sympathies lie with the economists. I suspect that the effects that Miguel and Kremer found are quite real, even if their methods do not quite match the customs of epidemiologists. But the bigger question is why so large a policy push needs to be based on a handful of clinical trials. It is absolutely right that we check existing work to see if it stands up to scrutiny but more useful still is to run more trials, producing more information about how, where and why deworming treatments work or do not work.

This debate is a sign that development policy wonks are now serious about rigorous evidence. That’s good news. Better news will be when there are so many strong studies that none of them will be indispensable, and nobody will need to care much about what exactly happened in western Kenya in 1998.

Written for and first published at ft.com.


