Tim Harford The Undercover Economist

Articles published in August, 2014

Monopoly is a bureaucrat’s friend but a democrat’s foe

The challenges from smaller competitors spur the innovations that matter

“It takes a heap of Harberger triangles to fill an Okun gap,” wrote James Tobin in 1977, four years before winning the Nobel Prize in economics. He meant that the big issue in economics was not battling against monopolists but preventing recessions and promoting recovery.

After the misery of recent years, nobody can doubt that preventing recessions and promoting recovery would have been a very good idea. But economists should be able to think about more than one thing at once. What if monopoly matters, too?

The Harberger triangle is the loss to society as monopolists raise their prices, and it is named after Arnold Harberger, who 60 years ago estimated that the costs of monopoly were about 0.1 per cent of US gross domestic product – a few billion dollars these days, much less than expected and much less than a recession.

Professor Harberger’s discovery helped build a consensus that competition authorities could relax about the power of big business. But have we relaxed too much?

Large companies are all around us. We buy our mid-morning coffee from global brands such as Starbucks, use petrol from Exxon or Shell, listen to music purchased from a conglomerate such as Sony (via Apple’s iTunes), boot up a computer that runs Microsoft on an Intel processor. Crucial utilities – water, power, heating, internet and telephone – are supplied by a few dominant groups, with baffling contracts damping any competition.

Of course, not all large businesses have monopoly power. Tesco, the monarch of British food retailing, has found discount competitors chopping up its throne to use as kindling. Apple and Google are supplanting Microsoft. And even where market power is real, Prof Harberger’s point was that it may matter less than we think. But his influential analysis focused on monopoly pricing. We now know there are many other ways in which dominant businesses can harm us.

In 1989 the Beer Orders shook up a British pub industry controlled by six brewers. The hope was that more competition would lead to more and cheaper beer. It did not. The price of beer rose. Yet so did the quality of pubs. Where once every pub had offered rubbery sandwiches and stinking urinals, suddenly there were sports bars, candlelit gastropubs and other options. There is more to competition than lower prices.

Monopolists can sometimes use their scale and cash flow to produce real innovations – the glory years of Bell Labs come to mind. But the ferocious cut and thrust of smaller competitors seems a more reliable way to produce many of the everyday innovations that matter.

That cut and thrust is no longer so cutting or thrusting as once it was. “The business sector of the US economy is ageing,” says a Brookings research paper. It is a trend found across regions and industries, as incumbent players enjoy entrenched advantages. “The rate of business start-ups and the pace of employment dynamism in the US economy has fallen over recent decades . . . This downward trend accelerated after 2000,” adds a survey in the Journal of Economic Perspectives.

That means higher prices and less innovation, but perhaps the game is broader still. The continuing debate in the US over “net neutrality” is really an argument about the least damaging way to regulate the conduct of cable companies that hold local monopolies. If customers had real choice over their internet service provider, net neutrality rules would be needed only as a backstop.

As the debate reminds us, large companies enjoy power as lobbyists. When they are monopolists, the incentive to lobby increases because the gains from convenient new rules and laws accrue solely to them. Monopolies are no friend of a healthy democracy.

They are, alas, often the friend of government bureaucracies. This is not just a case of corruption but also about what is convenient and comprehensible to a politician or civil servant. If they want something done about climate change, they have a chat with the oil companies. Obesity is a problem to be discussed with the likes of McDonald’s. If anything on the internet makes a politician feel sad, from alleged copyright infringement to “the right to be forgotten”, there is now a one-stop shop to sort it all out: Google.

Politicians feel this is a sensible, almost convivial, way to do business – but neither the problems in question nor the goal of vigorous competition are resolved as a result.

One has only to consider the way the financial crisis has played out. The emergency response involved propping up big institutions and ramming through mergers; hardly a long-term solution to the problem of “too big to fail”. Even if smaller banks do not guarantee a more stable financial system, entrepreneurs and consumers would profit from more pluralistic competition for their business.

No policy can guarantee innovation, financial stability, sharper focus on social problems, healthier democracies, higher quality and lower prices. But assertive competition policy would improve our odds, whether through helping consumers to make empowered choices, splitting up large corporations or blocking megamergers. Such structural approaches are more effective than looking over the shoulders of giant corporations and nagging them; they should be a trusted tool of government rather than a last resort.

As human freedoms go, the freedom to take your custom elsewhere is not a grand or noble one – but neither is it one that we should abandon without a fight.

Also published at ft.com.

16th of August, 2014 · Other Writing

Pity the robot drivers snarled in a human moral maze

Robotic cars do not get tired, drunk or angry but there are bound to be hiccups, says Tim Harford

Last Wednesday Vince Cable, the UK business secretary, invited British cities to express their interest in being used as testing grounds for driverless cars. The hope is that the UK will gain an edge in this promising new industry. (German autonomous cars were being tested on German, French and Danish public roads 20 years ago, so the time is surely ripe for the UK to leap into a position of technological leadership.)

On Tuesday, a very different motoring story was in the news. Mark Slater, a lorry driver, was convicted of murdering Trevor Allen. He had lost his temper and deliberately driven a 17 tonne lorry over Mr Allen’s head. It is a striking juxtaposition.

The idea of cars that drive themselves is unsettling, but with drivers like Slater at large, the age of the driverless car cannot come quickly enough.

But the question of how safe robotic cars are, or might become, is rather different from the question of how the risks of a computer-guided car are perceived, and how they might be repackaged by regulators, insurers and the courts.

On the first question, it is highly likely that a computer will one day do a better, safer, more courteous job of driving than you can. It is too early to be certain of that, because serious accidents are rare. An early benchmark for Google’s famous driverless car programme was to complete 100,000 miles driving on public roads – but American drivers in general only kill someone every 100m miles.
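The gap between those two mileage figures can be made explicit with a back-of-the-envelope calculation, using the article's own round numbers (these are illustrative figures, not precise statistics):

```python
# How many fatal accidents would a typical human driver be
# expected to have over Google's early 100,000-mile benchmark?

test_miles = 100_000                # early public-road benchmark
miles_per_fatality = 100_000_000    # roughly one death per 100m miles driven

expected_fatalities = test_miles / miles_per_fatality
print(expected_fatalities)  # 0.001
```

In other words, the benchmark covers about a thousandth of the mileage over which an average human driver would be expected to cause a single death – far too little to judge fatal-accident rates either way.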

Still, the safety record so far seems good, and computers have some obvious advantages. They do not get tired, drunk or angry. They are absurdly patient in the face of wobbly cyclists, learner drivers and road hogs.

But there are bound to be hiccups. While researching this article my Google browser froze up while trying to read a Google blog post hosted on a Google blogging platform. Two seconds later the problem had been solved, but at 60 miles per hour two seconds is more than 50 metres. One hopes that Google-driven cars will be more reliable when it comes to the more literal type of crash.
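The "two seconds at 60 miles per hour" figure above checks out with a line of arithmetic:

```python
# Distance covered during a two-second computer freeze at motorway speed.
METRES_PER_MILE = 1609.344

speed_mph = 60
freeze_seconds = 2

speed_ms = speed_mph * METRES_PER_MILE / 3600   # ~26.8 metres per second
distance = speed_ms * freeze_seconds            # ~53.6 metres
print(round(distance, 1))  # 53.6
```

Comfortably more than 50 metres – about the length of an Olympic swimming pool, travelled blind.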

Yet the exponential progress of cheaper, faster computers with deeper databases of experience will probably guarantee success eventually. In a simpler world, that would be the end of it.

Reality is knottier. When a car knocks over a pedestrian, who is to blame? Our answer depends not only on particular circumstances but on social norms. In the US in the 1920s, the booming car industry found itself under pressure as pedestrian deaths mounted. One response was to popularise the word “jaywalking” as a term of ridicule for bumpkins who had no idea how to cross a street. Social norms changed, laws followed, and soon enough the default assumption was that pedestrians had no business being in the road. If they were killed they had only themselves to blame.

We should prepare ourselves for a similar battle over robot drivers. Assume that driverless cars are provably safer. When a human driver collides with a robo-car, where will our knee-jerk sympathies lie? Will we blame the robot for failing to anticipate human idiosyncrasies? Or the person for being so arrogant as to think he could drive without an autopilot?

When such questions arrive in the courts, as they surely will, robotic cars have a serious handicap. When they err, the error can be tracked back to a deep-pocketed manufacturer. It is quite conceivable that Google, Mercedes or Volvo might produce a robo-car that could avoid 90 per cent of the accidents that would befall a human driver, and yet be bankrupted by the legal cases arising from the 10 per cent that remained. The sensible benchmark for robo-drivers would be “better than human”, but the courts may punish them for being less than perfect.

There are deep waters here. How much space is enough when overtaking a slow vehicle – and is it legitimate for the answer to change when running late? When a child chases a ball out into the road, is it better to swerve into the path of an oncoming car, or on to the pavement where the child’s parents are standing, or not to swerve at all?

These are hardly thought of as ethical questions because human drivers make them intuitively and in an instant. But a computer’s priorities must be guided by its programmers, who have plenty of time to weigh up the tough ethical choices.

In 1967 Philippa Foot, one of Oxford’s great moral philosophers, posed a thought experiment that she called the “trolley problem”. A runaway railway trolley is about to kill five people, but by flipping the points, you can redirect it down a line where it will instead kill one. Which is the right course of action? It is a rich seam for ethical discourse, with many interesting variants. But surely Foot did not imagine that the trolley problem would have to be answered one way or another and wired into the priorities of computer chauffeurs – or that lawyers would second-guess those priorities in court in the wake of an accident.

Then there is the question of who opts for a driverless car. Sir David Spiegelhalter, a risk expert at Cambridge university, points out that most drivers are extremely safe. Most accidents are caused by a few idiots, and it is precisely those idiots, Sir David speculates, who are least likely to cede control to a computer.

Perhaps driverless cars will be held back by a tangle of social, legal and regulatory stubbornness. Or perhaps human drivers will one day be banned, or prohibitively expensive to insure. It is anyone’s guess, because while driving is no longer the sole preserve of meatsacks such as you and me, the question of what we fear and why we fear it remains profoundly, quirkily human.

Also published at ft.com.

7th of August, 2014 · Other Writing

When crime stops paying

To an economist, tougher sentencing in the wake of the 2011 riots offers a fascinating natural experiment

The third anniversary of the 2011 London riots is this week. They erupted so suddenly and spread so quickly across the capital and to other English cities that at the time the disintegration of British society seemed, if unlikely, at least conceivable. In the rear-view mirror, though, the riots are eclipsed by the London Olympics and much diminished by the passage of time.

For parochial reasons, the riots remain vivid to me. My son was born in Hackney just a few days before they started. As violence flared a couple of streets away to the south and to the north of us, my wife and son slept while I stood on the doorstep of our home and watched as a pair of helicopters droned directly overhead.

A year after the riots I wrote a column pointing out that they were essentially random events. They had a cause, of course. The spark was the shooting of Mark Duggan by the Metropolitan Police, and one source of fuel was the perception that police stop-and-search powers were being used crassly and with a racial bias. Yet similar grievances have emerged at other times and in other places without provoking mass civil unrest. Chance plays a major element in such stories.

The criminal justice system responded sharply to the riots. More than 1,000 suspected rioters were charged by the Metropolitan Police during the first week of trouble, and over the same period more than 800 of them made a first appearance in court. By September 2012, 4,600 people had been arrested, out of the 13,000-15,000 people believed to have participated in the trouble in some way. Given the initial sense of impunity, that is a high rate of unwelcome police attention.

More striking was the way in which judges handed out sentences as though they were on steroids. Two people were sentenced to four years in prison each for Facebook postings inviting others to run amok in Cheshire, an unlikely location for a revolutionary uprising. Nobody showed up to “smash dwn in Northwich town” or “riot in Latchford”, so the sentences raised eyebrows. So did the 10-month sentence handed out to a teenager who carried two left-footed trainers out of a shop in Wolverhampton. She thought better of it and immediately dropped them – surely one of the most short-lived thefts in history. Sentences were, in general, more severe than normal. The thinking behind all this was that the true crime that needed to be punished was not theft or incitement but participation in a moment of grave civil peril.

Were these sentences an essential crisis response or a draconian overreaction? To an economist, they are something else: a fascinating natural experiment. With the news full of crushing punishments, it must have seemed plausible that the risks of committing a crime had soared. So did the threat of harsh punishments deter crime?

The usual statistical problem is that sentencing policy might influence crime rates but crime rates might equally influence sentencing policy. Cause and effect are hard to disentangle. In the case of the riots, however, the surge in crime that provoked the crackdown was sudden, unexpected, highly localised and brief. The sentencing response was drawn-out and stories of harsh sentences appeared in the national and London press for months.

. . .

As a result, a mugger or burglar in an area of London entirely unaffected by the riots might still feel conscious that the mood of the judiciary had changed. Three economists, Brian Bell, Laura Jaitman and Stephen Machin, used this sudden change in the judicial wind to measure the impact of tough sentences on crime. Across London, they found a significant drop in “riot crimes” – burglary, criminal damage and violence against the person – over the six months following the riots. Meanwhile, other crimes showed a tendency to increase, as though criminals were substituting away from the “expensive” crimes and towards the “cheaper” ones.

This shouldn’t be too much of a surprise. (I wrote an entire book, The Logic of Life, arguing that the most unlikely people in the most unlikely circumstances turn out to be greatly influenced by simple incentives.) But it’s a useful result because rigorous evidence on such matters is hard to find.

One of my favourite exceptions is an article by two economists, Jonathan Klick and Alex Tabarrok, who examined the impact of periodic terrorism alerts in Washington DC in the couple of years following the attacks of September 11 2001. Whenever alert levels were raised, police officers flooded sensitive locations, most of which (such as the White House and the Capitol) are on or near the National Mall.

Over the 16 months studied, the Mall and surrounding district experienced about 8,500 crimes, often theft from or of cars, not really al-Qaeda territory. Klick and Tabarrok argued that the occasional surges in police numbers were not caused by car thefts but did successfully deter them.

There may well be cheaper, more effective and more humane ways to reduce the crime rate.

But such studies have helped to build confidence that the world isn’t an entirely irrational place. Raise the costs of crime and criminals will respond.

Also published at ft.com.

