Tim Harford The Undercover Economist
  • THE UNDERCOVER ECONOMIST STRIKES BACK
    “Every Tim Harford book is cause for celebration. He makes the ‘dismal science’ seem like an awful lot of fun.”
    – Malcolm Gladwell, author of “The Tipping Point”
  • ADAPT – Why Success Always Starts with Failure
    “Tim Harford could well be Britain’s Malcolm Gladwell... Insightful and clever”
    – Alex Bellos, author of “Alex’s Adventures in Numberland”
  • Dear Undercover Economist
    “Economics becomes cool” – The Times
  • The Logic of Life
    “As lively as it is smart, charming, penetrating and wise”
    – Stephen J. Dubner, co-author of “Freakonomics”
  • The Undercover Economist
    Over one million copies sold
    “Like spending an ordinary day wearing X-ray goggles”
    – David Bodanis, author of “E=mc²”
Undercover Economist

It’s tough turning ideas into gold

‘If innovators make no money at all, they will end up creating for the love of creation rather than for any financial reward’

Alchemy, the ancient art of turning base metals into precious ones, was built on more than one misapprehension. The obvious error is that it is impossible to turn lead into gold. (Not quite impossible, actually. Chemistry will not do the job but a particle accelerator will, although not cheaply. In 1980, researchers bombarded the faintly lead-like metal bismuth and created a few atoms of gold. The cost was a less-than-economical rate of one quadrillion dollars an ounce.)

But there is a subtler mistake — not a scientific one but a matter of economics. “The alchemist fallacy” is the belief that once a simple method is found for turning lead into gold, gold will continue to be precious. We don’t have to rely on economic theory to refute this conclusion, because we have a fascinating case study of a close parallel.

Twenty-six million years ago, some cataclysmic event in the Eastern Sahara raised the temperature of silica sand to well over 1,000 degrees Celsius. The result was a large deposit of a lustrous material the colour of a lemon’s flesh. A fragment of this precious stuff was used to make jewellery for Tutankhamun’s tomb. (The story is well told in Steven Johnson’s book How We Got to Now.) The substance could be quarried but not manufactured.

While alchemists never figured out how to turn lead into gold, other craftsmen did develop a process with much the same economic implications. They worked out how to transform silica sand, one of the most common materials on earth, into the beautiful, versatile material we know as glass. It has an astonishing variety of uses from fibre optics to microscopes to aeroplane fuselages. But while gold remains highly prized, glass is now so cheap that we use it as disposable packaging for water.

When it was possible to restrict access to the secret of glassmaking, the guardians of that knowledge profited. Venetian glassmakers were clustered together on the island of Murano, where sparks from the furnaces would not endanger Venice itself. Venice had less success in preventing the secrets of glassmaking from spreading. Despite being forbidden on pain of death to leave the state of Venice, some of Murano’s glassmakers sought fortunes elsewhere. The wealth that could be earned as a glassmaking monopolist in some distant city must have been worth the risk.

That is the way of new ideas: they have a tendency to spread. Business partners will fall out and set up as rivals. Employees will leave to establish their own businesses. Time-honoured techniques such as industrial espionage or reverse engineering will be deployed. Sometimes innovators are happy to give their ideas away for nothing, whether for noble reasons or commercial ones. But it is very hard to stop ideas spreading entirely.

A few years ago, the economist William Nordhaus tried to estimate just how fallacious the alchemist fallacy is — how much of the social gains from innovation does the innovator manage to keep hold of? If the answer is anywhere near 100 per cent — even 50 per cent — then there is no fallacy at all. But if the answer is near zero, then innovations are swiftly copied, benefiting competitors and above all consumers.

Nordhaus was writing in the wake of the dotcom bubble, and pointed out that the valuations of “new economy” companies could be justified only if they were able to retain about 90 per cent of their value to society. This is a slice of the pie to make the most powerful and sophisticated monopolist dream. Nordhaus, pointing to the rapid demise of many dotcom firms and the tight margins of others, reckoned that the proportion was rather lower than that.

Looking at data from the United States between 1948 and 2001, Nordhaus estimated that corporations were able to keep about 3.7 per cent of the social value of their innovations. The remaining 96.3 per cent went to everyone else, mostly to consumers.

Nordhaus’s estimate is uncertain but has the ring of truth about it. Even a company such as Apple, which has a gift for holding on to a good proportion of the returns from innovation, has seen its iPad, iPhone and MacBook Air relentlessly chased down by competitors.

All this raises a question: should we wish that the innovator’s profit share was higher or lower? There is a balance to be struck. When innovators keep too much money, the benefit of using or recombining new ideas spreads too slowly. But if innovators make no money at all, then they will end up creating for the love of creation rather than for any financial reward. That may be fine for pop songs and poetry but less so for nuclear fusion or an HIV vaccine. Costly research programmes will not be funded.

The right balance depends on the innovation in question, and how expensive it is to develop. We probably need better incentives to create some new medicines. Yet our intellectual property system gives too much protection to ideas that would have been created anyway, such as simple software, business methods and Mickey Mouse. It is no surprise that the Venetian doges tried to keep the glassmakers in Murano. We should be grateful that they failed.

Written for and first published at ft.com.

Undercover Economist

Why wishful thinking doesn’t work

‘Careless nudges are no more welcome in public policy than at a domino-toppling event’

Three years ago, the University of Vermont in Burlington began to experiment with a few nudges towards a healthy, sustainable lifestyle. First, in 2012, campus outlets and the company operating the vending machines were required to make sure that at least 30 per cent of the drinks on offer were wholesome stuff such as vegetable juice, low-fat milk and water. A few months later, selling bottled water on campus was banned outright. The aim, pushed hard by student campaigners, was to encourage students to fill reusable bottles with tap water instead.

So, how did the Vermont experiment go? A study by Elizabeth Berman and Rachel Johnson (of the University’s own Department of Nutrition and Food Sciences) was recently published in the American Journal of Public Health. The researchers found that “per capita shipments of bottles, calories, sugars and added sugars increased significantly when bottled water was removed . . . As bottled water sales dropped to zero, sales of sugar-free beverages and sugar-sweetened beverages increased.”

In other words, the policy backfired with both barrels. Students didn’t switch to tap water; they switched to the likes of Coke and Diet Coke instead. All this would be just an amusing curiosity — one more example of student campaigners who are all heart and no brains — if it weren’t for the fact that more mature policy makers often commit similar blunders on much broader canvases. We would do well to learn some lessons from the University of Vermont’s experience.

The first lesson is that when it comes to saving the planet, people focus on what they can see. Type “environmental impact of concrete” into a search engine and you are likely to see a page filled with scholarly analysis pointing out that the impact is very large indeed, because cement production releases vast volumes of carbon dioxide. Type “environmental impact of bottled water” instead and your search results will be packed with campaigning groups seeking to persuade you to change your ways.

This is understandable: I can’t do much about concrete but I can stop drinking bottled water. But being a logical target for campaigners is not the same as being a logical target for policy action.

The second lesson is that we often struggle to deal with multiple goals. The University of Vermont wanted to reduce the flow of plastic water bottles to landfill but also wanted to encourage students to be healthy. There’s a clear conflict between these goals. Water is as healthy a drink as you can find, yet that was exactly what the University of Vermont was banning from vending machines. Wishful thinking provides a resolution — if everyone just drank tap water then there would be no problem. But wishful thinking is not an excuse for setting no priorities.

We see this sharply in the debate over nuclear power. We want to reduce the greenhouse gas emissions that result from burning fossil fuels. We also want to avoid radioactive waste and the risk of radiation leaks. In response to a genuine policy dilemma, politicians have tended to plump for wishful thinking every time, typically involving wind turbines.

The third lesson is that the much-vaunted notion of “nudging” doesn’t always help navigate a complicated policy maze. Nudging means using default options, information design and similar techniques to achieve policy goals. It can be very successful. But careless nudges are no more welcome in public policy than at a domino-toppling event. If you pick a questionable target (bottled water) and fudge a key policy dilemma (the environment vs health) then nudging isn’t going to solve your problems.

So what can be done? One approach is to try to reach policy goals with the help of market signals. The classic example of this is a carbon tax, levied on fossil fuels to reflect their carbon-dioxide emissions. The advantage of this approach is that it encourages everybody at any stage of production or consumption to take actions that reduce emissions, because those actions will save them money. A truck manufacturer might develop a cleaner engine, a logistics company might find a more efficient delivery algorithm, and the final consumer might decide to consume a little less.

The idea of using the price system to solve environmental problems is widely accepted by economists but, alas, it finds itself stranded in the policy doldrums. Ponder this: the Pope recently argued that climate change was a grave problem but he opposed market-based responses. Meanwhile the US Republican party likes market-based responses but isn’t so convinced about climate change.

One other advantage of using environmental taxes is that people can decide on their own priorities. A lot of what we do has consequences for the planet — including breathing — and so part of the problem we face is deciding what is worth doing anyway.

Perhaps it is time for a confession. I am writing this column on the hottest July day recorded in British history. At my left hand is a glass of chilled sparkling water, and next to the glass is a plastic bottle to top it up. If there had been a tax on that bottle, it is a tax I would willingly have paid.

Written for and first published at ft.com.

Other Writing

George Osborne’s Magic Has Us Fooled, For Now

The chancellor can alter the law but cannot make costly workers worth hiring, says Tim Harford

He has mastered the art of misdirection as well as any stage magician. Everyone knew George Osborne was going to butcher the tax credit system on Wednesday, more or less halving the income at which they begin to fall away. But few expected him to announce a much higher minimum wage, and he did it with such an extravagant flourish that no one clearly remembers seeing him wield the cleaver.
For most of the poorer working households who qualify for tax credits, the combined effect of Mr Osborne’s Budget will be to make them worse off financially, and to push them away from the labour force by raising the effective rate of tax they pay.
Monique Ebell of the National Institute of Economic and Social Research reckons that a single mother working 30 hours a week at the minimum wage will be more than £1,000 a year worse off in two years’ time than she is today, despite the increase in the wage she must legally be paid.
That assumes, of course, that she keeps her job at all. This is the big question about the minimum wage: will it increase the earnings of low-paid workers, or price them out of the job market entirely? Should we expect to see these workers laid off and replaced with one-touch espresso machines, automatic checkouts and call-centre workers from India? The minimum wage is a delicate balance, and Mr Osborne has put his thumb on the scale.
The chancellor’s aim is to raise the minimum wage for those over 25 beyond £9 by 2020, from £6.50 today. That is dramatic, although not quite as dramatic as it first seems. Mr Osborne is setting the minimum wage where it might be if the economic crisis of 2008, and the long stagnation that followed, had never happened. He is hoping that employment will not suffer. He has a few other countries to look to as a precedent. France is one example, and it is not encouraging. Australia is a more hopeful case.
Mr Osborne’s move would once have been unthinkable from a Conservative chancellor. A quarter of a century ago, the conventional wisdom was that the idea of a minimum wage was absurd at any level. The logic of that position was simple enough. If the minimum wage was below the market-clearing wage — at which employees want to work the same number of hours that businesses want to hire them for — it would be irrelevant; if it was above, it would be worse than useless. Productive workers do not need a minimum wage because they will anyway be well paid. Less productive workers will be harmed by a minimum wage because employers would rather sack them than pay more than they are worth. One does not simply repeal the laws of supply and demand.
The world has moved on since then, and we know that while supply and demand matter, there is more to the labour market than the simple story above.
Some employers have market power and could pay higher wages if they were forced to; the higher minimum wage may simply redistribute from employers to low-paid employees. Another possibility is that if forced to pay higher wages, employers will invest in training and equipment to justify the labour expense. On this view, wages do not need to follow productivity; productivity can be led by wages.
A third explanation is that since many low wage jobs are in non-traded sectors such as retail, employers will simply put up prices, spreading the burden of the higher minimum wage across all consumers, and possibly reducing inequality.
There is also the argument that higher wages can encourage workers to show up more often and smile at the customers. This is true, but in most cases managers will have reached that conclusion by themselves without the need for a legal minimum.
A large body of empirical evidence suggests either that reasonable minimum wages do not destroy jobs at all, or that they do not destroy very many. The evidence is, of course, mixed and contested.
Much of it comes from the US and concerns the experience of teenagers, who, in the words of Alan Manning of the London School of Economics, “represent about 2 per cent of hours worked and 98 per cent of the studies of the minimum wage”. But it is clear enough that if modest increases in the minimum wage were disastrous for jobs, we would know that by now.
Whether the chancellor’s wage rise counts as “modest” is far more questionable. Professor Manning is guardedly optimistic: he thinks that the bold increase in the minimum wage is worth a try. But he is nervous, and so am I. We are at the edge of what the data can tell us. Mr Osborne is about to provide a fascinating new case study.
The best scenario is that the minimum wage helps to drive up British productivity, which has long languished. Employers invest in training, and rather than replacing workers with machines they give them the latest tools to do their jobs.
To the extent that productivity does not rise, employers absorb the costs or pass them on to consumers, equitably bearing the burden of giving hard-working people a decent wage.
A gloomier scenario seems more probable for some sectors, especially social care. The law of supply and demand turns out to matter after all. Faced with a sharp increase in the minimum wage that runs well ahead of what the Low Pay Commission has felt able to endorse, employers lay off many workers and reduce the hours of others. The welfare bill rises and — as so often in the past — it proves much harder to create jobs than to destroy them.
My own bet is somewhere in the middle. We will discover that Mr Osborne has pushed too hard, and that the minimum wage must be allowed to slip back again relative to median earnings. Some jobs will be lost, a lesson will be learned, and Mr Osborne’s political purposes will have been served. He will be hoping to have upgraded his own job to that of prime minister by then, which may be appropriate: he is a masterful politician but has never shown much grasp of economics.

Written for and first published at ft.com.

13th of July, 2015
Other Writing

George Osborne’s gamble with jobs

My response to the Summer Budget went up on the FT website yesterday:

The sharp hike in the minimum wage in the Budget was a shock, but it was true to form for the UK chancellor of the exchequer: clever politics and dubious economics. It is telling that, where the Low Pay Commission used to consider the evidence and carefully balance the risks and rewards of a higher minimum wage, it must now recommend whatever George Osborne tells it to recommend.

The risk is clear: forced to pay up to £9 an hour, many businesses will find that they would rather find other ways to conduct their affairs — buying robots, offshoring key functions or moving overseas entirely. Bankruptcy is, of course, another option.

 

Mr Osborne’s gamble is that some businesses will simply eat the cost of higher wages (unlikely), or train their workers better and give them better tools so that the higher wages can be justified by higher productivity. It is possible that this will work. It is enormously risky, and if the move is the wrong one it will be hard to reverse. The lesson of the 1980s is that, once lost, jobs are not easy to find again.

One might ask why the chancellor is willing to take such risks and to order the Low Pay Commission to do his bidding rather than be guided by evidence. The answer is not hard to find: Mr Osborne needs political cover. He is hacking away at the welfare state, notably the system of tax credits that was designed to encourage people to work rather than stay at home.

One can only guess what Milton Friedman, one of the inspirations behind the Thatcherite revolution, would have made of all this. In place of a carefully designed system of incentives for people to go to work, we are to be offered a wage increase set by a politician’s whim. Friedman knew that, even in the complex market for jobs, one does not simply abolish the laws of supply and demand.

Mr Osborne promised a Budget for working people but reality does not match that sound bite. The biggest tax break was for people inheriting expensive homes from their parents; and, while benefits for the working poor were being squeezed, those for pensioners were — as always — protected. Those who hoped for radical and logical tax reform have been bitterly disappointed.

As for working people, many will thank the chancellor as their wages rise. Others will become unaffordable and will lose their jobs. No doubt they will be scapegoated as scroungers in some future Budget speech. It is possible that Mr Osborne’s gamble will pay off. It is even possible, although unlikely, that it will pay off spectacularly. But it is reckless, and it is not his job that is on the line.

9th of July, 2015
Undercover Economist

In search of the perfect match

‘One algorithm had to cope with pairs of romantically attached doctors who wanted two job offers in the same city’

When it comes to finding the perfect match, nobody wants to be left on the shelf but the Arunta — a polygamous aboriginal tribe from the area around Alice Springs — used to take things to extremes. As described by anthropologists in the 1920s, the father of a newborn Arunta boy would get together with the father of a newborn girl to arrange a future marriage. The betrothal was not between the two babies, of course — that would be leaving things far too late. Instead, the engagement was between the baby boy and the first daughter that the baby girl had when she became a mother herself.

This astonishing process is called “market unravelling”, and it is not limited to the Arunta. As described in Alvin Roth’s new book, Who Gets What — and Why, hospitals make early offers to untried junior doctors. Law firms make early offers to first-year law students. Oxford and Cambridge make offers many months before the students in question sit their exams.

This is not a sensible situation because if everybody could agree to wait, then more information would emerge, allowing more compatible matches. Yet there is an incentive to break ranks and make early “exploding” offers. If those time-limited offers are any good, then students will often accept them rather than take the risk of waiting. The logic of the situation pulls these early offers ever earlier, sometimes absurdly so. Everybody loses but no individual can change things.

One response is to agree a rule banning early offers. That is what the US National Association for Law Placement did in the 1980s: it ruled that any job offer made to a first-semester law student had to remain open until the end of that semester. It wasn’t long before the lawyers had found the loophole: mediocre offers paired with massive time-limited signing bonuses.

Another possibility is to use a central clearing house. That is what the Boston school system did. Parents listed at least three schools in order of preference, and the clearing house put every child into their first choice school where possible. Any schools with spare places would then admit students who’d listed the school as second choice, then third choice, and so on. Four out of five students got their first choice, yet parents hated the system. Why?

The problem was that parents had just one shot at a good school. Popular schools filled instantly, making second choices almost irrelevant. Parents who didn’t understand the game might apply for several popular schools and get nothing. Those who understood the problem found themselves second-guessing the clearing house, using their precious first choice on a compromise school rather than the high-risk approach of saying what they truly wanted. The system produced cynical, alienated parents.

The problem is easier to describe than to solve. But there is a way to fix unravelling markets: call Alvin Roth. An engineer by training — albeit one with a Nobel Memorial Prize in economics — Roth designs markets with an engineer’s practical mentality. With his colleagues, Roth has designed stable clearing houses for doctors, fixed the school application systems in Boston and in New York City, and even created kidney donation networks.

At the heart of many well-functioning clearing houses is something called the deferred acceptance algorithm. The algorithm begins with the following input: each student submits a list of their preferred schools, from first choice to last, and each school submits a ranked list of their preferred students. Armed with these rankings, a computer can swiftly handle the rest. First, each school provisionally fills its places with the top students on its list; then each student provisionally accepts the best offer she has received and rejects the others; each school then extends further offers to fill the spaces that these rejections opened up. The process continues (inside the computer) with each student keeping only the best offer received so far, and with each school working down the list of students and making fresh offers as the rejections come in.
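
To make the procedure concrete, here is a minimal sketch in Python of the school-proposing process described above. The schools, students, preference lists and capacities are invented for illustration, and the sketch assumes every student ranks every school; real clearing houses handle many more complications.

```python
# A minimal sketch of school-proposing deferred acceptance, as described above.
# All names, preferences and capacities are hypothetical.

def deferred_acceptance(school_prefs, student_prefs, capacities):
    """Return a stable matching {student: school}.

    school_prefs:  {school: [students, best first]}
    student_prefs: {student: [schools, best first]}  (assumed to rank every school)
    capacities:    {school: number of places}
    """
    next_offer = {school: 0 for school in school_prefs}  # next student each school will try
    held = {}  # offers students are provisionally holding: student -> school

    while True:
        made_offer = False
        for school, prefs in school_prefs.items():
            # How many provisional acceptances does this school currently have?
            filled = sum(1 for s in held.values() if s == school)
            # Work down the list, making fresh offers for any unfilled places.
            while filled < capacities[school] and next_offer[school] < len(prefs):
                student = prefs[next_offer[school]]
                next_offer[school] += 1
                made_offer = True
                ranking = student_prefs[student]
                current = held.get(student)
                # The student keeps only the best offer received so far.
                if current is None or ranking.index(school) < ranking.index(current):
                    held[student] = school
                    filled += 1
        if not made_offer:  # no school can make a new offer, so the matching is final
            return held

# Tiny worked example (hypothetical preferences, one place per school).
schools = {"North": ["Ann", "Bob", "Cat"], "South": ["Ann", "Cat", "Bob"]}
students = {"Ann": ["South", "North"], "Bob": ["North", "South"], "Cat": ["North", "South"]}
print(deferred_acceptance(schools, students, {"North": 1, "South": 1}))
# {'Ann': 'South', 'Bob': 'North'}  (Cat is left unmatched in this toy example)
```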

There are two important features of the deferred acceptance algorithm. The first is that people can safely tell the truth about their favourite schools — there is no disadvantage to aiming high. The second is that the algorithm’s allocation is stable. There will never be a pair of school and student who wish they were matched to each other but whom the algorithm sent elsewhere. This matters because if such pairs exist, they have an incentive to strike side deals, undermining the whole system.

The deferred acceptance algorithm is just the start of a successful market design, because details matter. In New York City, there are different application procedures for certain specialised schools. When assigning hospital residencies, the US National Resident Matching Program needed to cope with pairs of romantically attached doctors who wanted two job offers in the same city. These complexities sometimes mean there is no perfect matching algorithm, and the challenge is to find a system that is good enough to work.

Economists such as Alvin Roth are like engineers or doctors. They cannot settle for understanding a system in theory; they must solve practical problems too. It’s a hopeful direction for economics — and an essential one, if economists aren’t to be left on the shelf themselves.

Also published at ft.com.

Undercover Economist

The psychology of saving

‘There is one dramatic success for behavioural economics — the way it has shaped pensions’

“THERE ARE IDIOTS. Look around.” So began a famous economics paper by Larry Summers — a lauded academic before he became US Treasury secretary. It is perhaps the most concise expression of behavioural economics, the branch of economics that tries to take psychology seriously.

Behavioural economics is appealing not only because it is realistic but also because it is vastly more charming than the traditional variety. Championed by economists such as Richard Thaler (co-author of Nudge and author of a new book, Misbehaving) and psychologists such as Nobel laureate Daniel Kahneman (author of Thinking, Fast and Slow), it has triumphed in the “smart thinking” section of the bookshop and exerted increasing influence in academia.

It can be hard to turn psychological insights into rigorous academic models, and even harder to turn them into good policy. But there is at least one dramatic success for behavioural economics — the way that it has shaped pensions. At a recent Financial Times event, Professor Thaler rightly celebrated this as the field’s greatest triumph.

Other than Thaler’s own evangelism, the reason for this success is twofold. First, when it comes to pensions there is a large gap between what we do and what we should do. Second, bridging that gap is fairly simple: we need to encourage people to save more, and in most cases those savings should flow into simple, low-cost equity tracker funds. The only comparable example that springs to mind is smoking: many smokers are making themselves unhappy and would be better off if they could find a way to stop. And as a classic research paper in behavioural economics concludes, taxes on cigarettes seem to make smokers and potential smokers happier by prompting them to quit, or never start.

Given the problem — people need to be nudged into saving more — the biggest pension policy breakthrough has been automatic enrolment, a cornerstone of modern UK pension policy and widely used in the US too. A typical defined contribution pension invites people to pay money into a pension pot, often enjoying tax advantages and matching contributions from an employer. Yet people procrastinate: money seems tight, retirement is a long way off, and who wants to fill in forms? Automatic enrolment reverses the default, deducting pension contributions from our payslips unless we take active steps to opt out. The process respects our autonomy — you can opt out if you wish — but makes it easy to do what we probably should be doing anyway.

A clever supplement to this approach is “Save More Tomorrow”, a scheme in the US whereby people make an advance commitment to redirect part of their pay rises into the pension. At a 50/50 ratio, for example, a 2 per cent pay rise becomes a 1 per cent pay rise and a 1 percentage point increase in pension contributions. It doesn’t take long for a 3 per cent contribution (which is inadequate but typical in the US) to become something more sensible, such as 10 or 15 per cent. Thaler has been a driving force behind both ideas.
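
A rough sketch of that arithmetic, under assumed numbers (a 3 per cent starting contribution, 2 per cent annual pay rises, half of each rise diverted to the pension, and a 15 per cent cap), looks like this:

```python
# Illustrative "Save More Tomorrow" escalation. The parameters below are
# assumptions for the sketch, not the scheme's actual rules.
def escalate(start=0.03, pay_rise=0.02, share_to_pension=0.5, cap=0.15, years=15):
    rate = start
    rates = [rate]
    for _ in range(years):
        # Half of each pay rise is redirected into the pension contribution.
        rate = min(cap, rate + pay_rise * share_to_pension)
        rates.append(rate)
    return rates

for year, rate in enumerate(escalate()):
    print(f"year {year:2d}: contribution rate {rate:.0%}")
# The 3 per cent default drifts up by one percentage point a year, reaching
# 10 per cent after seven years and the 15 per cent cap after twelve.
```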

Of course, these tactics do not work for everyone. I once spoke at a book festival in Australia and found that a slice of my modest fee had been automatically invested in an Australian pension for me. This benefited nobody except some administrators and the postman who delivered the letters from Australia, detailing the evaporation of my tiny pension pot.

A more serious difficulty is choosing the right level of default contribution. A default that is too aggressive — automatically deducting 25 per cent of salary — jolts most people into opting out. A default that is too low, such as 3 per cent of salary, could conceivably be worse than the old opt-in default of zero. Many people who might have taken an active choice to save 6 or 7 per cent rather than nothing end up settling instead for the default. As mentioned, 3 per cent is a common level for automatic enrolment in the US, for no good reason other than historical accident. Yet it is dangerously low.

There is also a painful conflict of interest at the heart of any corporate pension plan. From the perspective of classical economics, companies will offer generous pensions if they want to attract capable staff. It is expensive to subsidise a pension but staff value the subsidy, making it worthwhile.

From the more realistic perspective of behavioural economics, a tension emerges. A benevolent planner, armed with behavioural insights, would nudge people into a passive pension investment with almost no conscious thought. But a corporate human resources director would want to remind employees how generously their pensions are being subsidised. That means frequent reminders and ample opportunity to admire the pension pot — even if such admiration leads to anxiety about uncertain returns, or expensively trading shares within the pension.

There are approaches that might keep both the behavioural economist and the HR director happy. For example, a pension pot that is expressed in terms of daily or weekly income in retirement, adorned with photographs of cruise ships, seems more appealing than an abstract and rather meaningless lump sum.

We cannot blame behavioural economics for this tension but it is real. As automatic enrolment becomes the norm, it will be important to keep an eye on how corporations respond.

Written for and first published at ft.com.

Undercover Economist

Teamwork gives us added personbyte

‘Complex products require elaborate networks of teamwork, and only a few places manage the trick’

Is economic life getting more complicated? In some ways, no. It’s much easier to use a computer than it used to be, or to make an international phone call, or to buy avocados. But, in many ways, complexity is on the rise.

This is true for products. When once we used to buy simple chunks of matter, such as copper, tin or wheat, now we buy smart watches, movie downloads and ready meals — these are things whose structure is a vital part of their value. You can melt and cool copper, or scatter and then rebuild a pile of wheat, and no great harm will come to either. Put your phone into a blender and you’ll find it has changed in ways that matter a great deal.

Scientific ideas are also becoming more complex. Benjamin F Jones, an economist, has used large databases of academic articles and patents to show that researchers and inventors are getting older and have narrower specialisations. This seems to be because there is much more science to be known, and scientists must devote more time to mastering what is already known before they can contribute original research.

César Hidalgo, a physicist at Massachusetts Institute of Technology and author of Why Information Grows, coins the word “personbyte” to describe the amount of knowledge that one person can reasonably know. The personbyte isn’t getting any smaller but — relative to the knowledge that needs to be mustered to produce a modern scientific paper, or a computer, or a car — the personbyte looks ever more inadequate.

The way to escape the constraint of the personbyte is to work in larger teams, and this is exactly what Jones finds in academic and patent databases: research teams are bigger than they were 40 years ago. This is a natural consequence of the fact that a personbyte isn’t big enough to process the knowledge required for modern science or engineering. One person cannot hold all the necessary know-how in her head, so she must work together with others.

How easy is this collaboration? It depends. Some knowledge is easily copied. Once someone has invented the wheel, anyone else can simply copy the idea and no more elaborate teamwork is needed than that. Other knowledge can be embodied in a product and widely dispersed. I don’t know how to make a laptop computer but I know how to use one. In a sense, I am standing on the shoulders of Ada Lovelace, Alan Turing and Bill Gates. But I do not need to spend any time in meetings with these people, which is just as well. Better yet, my laptop is built from simpler standalone modules, such as the central processing unit, the keyboard, the operating system and the hard drive. Individuals — or teams, or firms — can work on these modules even if no organisation has mastered the skills necessary to build every part of the laptop from scratch. The modular nature of the computer makes it straightforward to use earlier knowledge.

But some knowledge requires far more challenging collaborations. This knowledge is tacit, hard or perhaps impossible to describe. It may be easy to send data around the world but data may not be enough. Knowledge may be weightless in principle but, as César Hidalgo points out, we find it easier to move heavy copper from mines in Chile to factories in Korea than to move manufacturing know-how from Korea to Chile.

Hidalgo argues persuasively that networks of people and companies with such tacit knowledge are an essential part of a modern economy. They form essential capabilities: how to make a plasma display, or champagne, or a financial derivative. Simpler products require simpler networks of collaboration, and can be produced almost anywhere. More complex products require elaborate networks of teamwork, and only a few places manage the trick.

More than 20 years ago, economist Michael Kremer published “The O-ring Theory of Economic Development”. His title refers to a simple seal whose failure destroyed the Challenger space shuttle and killed seven astronauts. Kremer wanted us to think about weak links. A string quartet is not much better than its worst player. A gourmet meal could be ruined by a clumsy chef, a faulty oven, a rude waiter, a decaying ingredient, or a rat scurrying across the dining room. These are O-ring products.

The logic of an O-ring world is that the most skilled workers end up collaborating with each other, using the best equipment. Chef Heston Blumenthal does not work at Burger King. It makes more sense for Joshua Bell to play a Stradivarius and for a street busker to play a worm-eaten fiddle than for the two musicians to swap.
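
A minimal sketch of the intuition, with invented skill numbers rather than anything taken from Kremer's paper: if a team's output is the product of its members' skill levels, matching the best with the best yields more total output than mixing skilled and unskilled workers.

```python
# O-ring intuition with made-up numbers: output is the *product* of skills,
# so one weak link drags down everyone else's contribution.
def team_output(skills):
    out = 1.0
    for q in skills:
        out *= q  # each task must be done well for the product to survive
    return out

skilled, unskilled = 0.95, 0.55

assortative = team_output([skilled, skilled]) + team_output([unskilled, unskilled])
mixed = 2 * team_output([skilled, unskilled])

print(f"assortative matching: {assortative:.4f}")  # 1.2050
print(f"mixed teams:          {mixed:.4f}")        # 1.0450
```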

Inequality soars in O-ring worlds because the more complex a product or service, the greater the value of someone who can avoid errors. And a weak link somewhere in your economy can spread like a cancer. Why should a young person in Nigeria study hard if her efforts will be dissipated by electrical blackouts, criminal gangs or corrupt officials?

The economic world is unlikely to become simpler. But we may rise to the challenge better if we think about both the social and institutional support that helps make complex collaborations possible — and the simple modular engineering that makes complex collaborations unnecessary.

Written for and first published at ft.com.

Undercover Economist

The truth about our norm core

‘Social pressure matters but it is not the only thing that matters. Facts can trump groupthink’

While not quite as infamous as Philip Zimbardo’s Stanford prison simulation, or Stanley Milgram’s obedience research, Solomon Asch’s conformity experiments remain among the most celebrated in psychology. In 1951, Asch’s research showed that our judgments about simple factual matters can be swayed by what people around us say. The finding echoed down the decades. Milgram found in 1961 that people were willing to administer apparently dangerous electric shocks when ordered to do so by an experimenter. In 1971, Zimbardo set up an imitation prison in a Stanford University basement with subjects given the role of guards and prisoners, then observed as the guards humiliated the prisoners.

Between them, the three academic psychologists taught us that in order to fit in with others, we are willing to do almost anything. That, at least, is what we are told. The truth, as so often, is more interesting.

Asch gave his subjects the following task: identify which of three different lines, A, B or C, was the same length as a “standard” line. The task was easy in its own right but there was a twist. Each individual was in a group of seven to nine people, and everyone else in the group was a confederate of Asch’s. For 12 out of 18 questions they had been told to choose, unanimously, a specific incorrect answer. Would the experimental subject respond by fitting in with the group or by contradicting them? Many of us know the answer: we are swayed by group pressure. Offered a choice between speaking the truth and saying something socially convenient, we opt for social convenience every time.

But wait — “every time”? In popular accounts of Asch’s work, conformity tends to be taken for granted. I often describe his research myself in speeches as an example of how easily groupthink can set in and silence dissent. And this is what students of psychology are themselves told by their own textbooks. A survey of these textbooks by three psychologists, Ronald Friend, Yvonne Rafferty and Dana Bramel, found that the texts typically emphasised Asch’s findings of conformity. That was in 1990 but when Friend recently updated his work, he found that today’s textbooks stressed conformity more than ever.

This is odd, because the experiments found something more subtle. It is true that most experimental subjects were somewhat swayed by the group. Fewer than a quarter of experimental subjects resolutely chose the correct line every time. (In a control group, unaffected by social pressure, errors were rare.) However, the experiment found that total conformity was scarcer than total independence. Only six out of 123 subjects conformed on all 12 occasions. More than half of the experimental subjects defied the group and gave the correct answer at least nine times out of 12. A conformity effect certainly existed but it was partial.

This surprised me, and it may surprise others who have read popular accounts of the so-called conformity studies. I doubt that it surprised Asch. Conformity was already a well-established finding by 1951, and his experiments were designed to contrast with earlier research on social norms. This previous research showed that people conformed to social pressure in situations where there was no clear correct answer — for instance, when asked to identify which of two ungrammatical sentences was the most ungrammatical. But Asch wanted to know if peer pressure would also wield influence when the crowd was unambiguously wrong. His research provided an answer: social pressure is persuasive but, for most people, the facts are more persuasive still.

Myths about famous experiments have always grown in the telling. It seems most unlikely that Archimedes ran naked through the streets of Syracuse yelling “Eureka!”, and an apple probably did not strike Newton’s head. But there seems to be something particularly attractive about these famous psychology experiments that paint us all as sheep — even when the experiments may have been flawed, impossible to replicate or (as with Asch’s work) have simply found something much more subtle than the myth would have us believe.

The psychologist Christian Jarrett comments, “the resistance to tyranny shown by many participants in Zimbardo’s prison study has largely been ignored, and so, too, has the disobedience shown by many participants in Milgram’s seminal work.”

Zimbardo’s Stanford prison experiment was shocking stuff, and raised serious questions about research ethics. But we should also ask questions about what Zimbardo really found. By his own admission he gave a strong steer to the guards, and cast himself as their ally in a quest to dehumanise the prisoners. “We’re going to take away their individuality in various ways,” he told them. Other psychologists have suggested that this was more a test of obedience to Zimbardo than a demonstration that sadism blooms given the opportunity.

Few textbook accounts of the study mention Zimbardo’s attempt to influence the guards; nor do they point out that two-thirds of the guards refrained from sadism.

Social pressure matters but it is not the only thing that matters. Solomon Asch showed that facts can trump groupthink. It would be ironic if our own biased recollections of his finding proved him wrong.

Written for and first published at ft.com.

Undercover Economist

Down with mathiness!

‘In the recent UK election campaign, a diet of numbers was stuffed into voters like feed into French ducks’

The American Economic Review isn’t usually the place for trash talk but a brief new article by Paul Romer is the closest academic economics is likely to come to a spat between boxers at a pre-fight weigh-in. Romer, a professor at New York University, is worried that a new trend in economics — “mathiness” — is polluting the discipline. And he names names — including Robert Lucas and Edward Prescott, both Nobel laureates, and inequality guru Thomas Piketty.

In a follow-up comment, “Protecting the Norms of Science in Economics”, Romer says: “I point to specific papers that deserve careful scrutiny because I think they provide objective, verifiable evidence that the authors are not committed to the norms of science.”

Romer adds that if his suspicions are confirmed, such people should be ostracised — suggesting that Nobel Prize winners should be ejected from academic discussion because of their intellectual bad faith. This is strong stuff.

Romer, though, has rarely stuck to the academic script. In the late 1980s he developed a new approach to thinking about economic growth that mathematically modelled the development and spread of ideas, an achievement that many regard as worthy of the Nobel memorial prize in economics. But Romer then drifted away from academia, first founding an online learning platform called Aplia, and then campaigning for a radical development idea, “charter cities”.

Does economics have a mathiness problem? Many casual observers would say, “of course”. Economics has a reputation for producing rigorous nonsense.

But Romer’s attack is much more focused. He doesn’t mean that economics uses too much mathematics but that some economic theorists are pushing an ideological agenda and using fancy mathematics to disguise their intentions. They can redefine familiar words to mean unfamiliar things. They can make unrealistic assumptions. They can take hypothetical conclusions and suggest they have practical significance. And they can do all these things with little fear of detection, behind a smokescreen of equations. If Romer is right, some economics papers are Orwellian Newspeak dressed up as calculus.

In his short essay “Politics and the English Language”, Orwell argued that there was a “special connection between politics and the debasement of language”. While some people wish to communicate clearly, the political writer prefers a rhetorical fog. And the fog can spread. Writers who should know better imitate sloppy writing habits. Readers become jaded and stop hoping that anyone will tell them the truth.

Romer fears a similar rot at the heart of economics. As some academics hide nonsense amid the maths, others will conclude that there is little reward in taking any of the mathematics seriously. It is hard work, after all, to understand a formal economic model. If the model turns out to be more of a party trick than a good-faith effort to clarify thought, then why bother?

Romer focuses his criticism on a small corner of academic economics, and professional economists differ over whether his targets truly deserve such scorn. Regardless, I am convinced that the malaise Romer and Orwell describe is infecting the way we use statistics in politics and public life.

With more statistics around than ever, it has never been easier to make a statistical claim in service of a political argument.

In the recent election campaign in the UK, a diet of numbers was stuffed into voters like feed into French ducks. A fact-checking industry sprang up to scrutinise these numbers — I was part of it — but the truth is that most of the numbers were not false, merely unhelpful. Instead of simply verifying or debunking the latest number, fact checkers found themselves spending much effort attempting to purify muddied waters.

This is infuriating — for the public, for the fact checkers, and for the scientists and statisticians who take such pains to gather evidence. Imagine their dismay when the politicians seize that evidence and hold it up for protection like a human shield. Good statistics matter; without them it is almost impossible to understand the modern world. Yet when statistics are dragged into political arguments, it is not the reputation of politics that suffers but the reputation of statistics. The endgame isn’t pretty: it becomes too much trouble to check statistical claims, and so they are by default assumed to be empty, misleading or false.

Just as the antidote to Newspeak isn’t to stop using language, the antidote to mathiness isn’t to stop using mathematics. It is to use better maths. Orwell wanted language to be short, simple, active and direct. Romer wants economists to use maths with “clarity, precision and rigour”. Statistical claims should be robust, match everyday language as much as possible, and be transparent about methods.

Some critics believe that economics should conduct itself in plain English at all times. This is, I think, unreasonable. Mathematics offers precision that English cannot. But it also offers a cloak for the muddle-headed and the unscrupulous. There is a profound difference between good maths and bad maths, between careful statistics and junk statistics. Alas, on the surface, the good and the bad can look very much the same.

Written for and first published at ft.com.

Undercover Economist

Mind the fair trade gap

‘If fair trade does deliver higher incomes for farmers, it may prove too successful for its own good’

In 2001, the world price of coffee sank to its lowest ebb for decades, threatening dreadful hardship for the often-poor farmers who grow the sainted berry. It was also around that time that fair trade coffee seemed to come of age, with a common certification mark launched in 2002, and the product becoming a familiar sight in supermarkets and coffee chains.

The premise of fair trade is that the disparity between poor coffee farmers and prosperous drinkers presents both a problem and an opportunity. The problem is that farmers often live a precarious existence: geographically isolated and growing a crop with a volatile price. The opportunity is that many western consumers care about the earnings and conditions of the people who grow their coffee, and have some money to spare if only it might reach those people.

Unlike a taxi driver or a waiter, you can’t just tip the guy who grew your coffee. The fair trade answer to the conundrum is a labelling scheme: an inspector verifies that all is well on the farm, with good conditions and a higher price paid for coffee; this information is conveyed to consumers by way of a recognisable trademark, the most famous of which is the Fairtrade logo. It’s an appealing idea — a voluntary scheme that helps people who want to help people. (Or rather, several voluntary schemes: there is more than one fair trade label, alongside diverse certification schemes such as Organic or Rainforest Alliance.) Who wouldn’t want a better deal for farmers who are poor and work hard? But there are problems with the idea too.

The most obvious problem is that this labelling scheme costs money. Flocert, a certification body set up by the Fairtrade Labelling Organization, charges farmer co-operatives €538 merely to apply for certification, plus an initial audit fee of €1,466 even for a small co-op. Cynics might suspect bureaucratic bloat but the costs may well be real. It cannot be cheap to check pay and conditions in some remote Peruvian coffee plantation. But every euro spent on certification is a euro that the farmer cannot spend on his family. And larger co-operatives from richer, better-connected countries are more likely to find it worthwhile to pay for certification. For this reason, economist and fair trade critic Ndongo Sylla says that fair trade benefits “the rich”. That seems too strong; but it is certainly a challenge for the fair trade model to reach the poorest.

A second problem is that fair trade certification cannot guarantee fair trade sales. If coffee importers want to put the Fairtrade mark on their coffee, they must find a Fairtrade certified producer and pay the Fairtrade price, which reflects both a modest premium and a guaranteed minimum price. But importers are not obliged to buy fair trade coffee and may avoid it when it gets too expensive, exactly when the premium is most needed. A study by Christopher Bacon found that during the price slump of 2000 and 2001, Nicaraguan coffee farmers were earning twice as much per pound when selling fair trade coffee as when selling the uncertified stuff. But much of their coffee could not find a buyer at such rates and was sold at market rates instead; as a result, the average price premium, while substantial, was much lower at around a third.
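
The arithmetic behind that gap, using illustrative numbers rather than figures from Bacon's study, is simple: if the fair trade price is double the market price but only part of the harvest finds a fair trade buyer, the average premium across the whole harvest is far smaller than the headline premium per pound.

```python
# Illustrative only: the prices and the share sold on fair trade terms are assumed.
market_price = 1.0        # normalised market price per pound
fair_trade_price = 2.0    # "twice as much per pound"
share_fair_trade = 1 / 3  # assumed share of the harvest sold on fair trade terms

average_price = share_fair_trade * fair_trade_price + (1 - share_fair_trade) * market_price
premium = average_price / market_price - 1
print(f"average premium across the whole harvest: {premium:.0%}")  # about 33%
```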

Another study, by Tina Beuchelt and Manfred Zeller, found that fair trade certified farmers in Nicaragua started at a similar income level to conventional farmers and, if anything, slipped backwards. A recent survey by Raluca Dragusanu, Nathan Nunn, and Daniele Giovannucci was more upbeat but still found the evidence in favour of fair trade “mixed and incomplete”.

A final irony is that if fair trade does deliver higher incomes for farmers, it may prove too successful for its own good. If coffee farmers are able to sell more coffee at a premium price, more people will want to become coffee farmers. One possible result is that the market price for uncertified coffee falls and, on balance, coffee farmers are no better off.

As the development economist Paul Collier once wrote, fair trade certified farmers “get charity as long as they stay producing the crops that have locked them into poverty”. It is a telling point. For all the good I may wish the people who make my coffee, a globalised tip jar makes a precarious foundation for their future prosperity.

Written for and first published at ft.com.
