Tim Harford The Undercover Economist

Other Writing

Articles from the New York Times, Forbes, Wired and beyond – any piece that isn’t one of my columns.

Other Writing

Gary Becker, 1930-2014

Gary Becker passed away on Saturday. My obituary for the Financial Times is below.

Gary Becker, the man who led the movement to apply economic ideas to areas of life such as marriage, discrimination and crime, died on May 3 after a long illness. He was 83.
Born in a coal-mining town in Pennsylvania, raised in Brooklyn and armed with a mathematics degree summa cum laude from Princeton, Becker arrived at the University of Chicago only to realise that “I had to begin to learn again what economics is all about”.
He had considered taking up sociology, but found it “too difficult”. Yet he was to return to the questions of sociology again and again over the years, taking pleasure in wielding the rigorous yet reductive mathematical tools of economics. This approach was to win him the Nobel Memorial Prize in Economics in 1992, and make him one of the most influential and most cited economists of the 20th century.
His doctoral dissertation was on the economics of discrimination – how to measure it and what effects it might have. Becker showed that discrimination was costly for the bigot as well as the victim. This seemed strange material for an economist, and Becker attracted little attention for his ideas when he published a book on discrimination in 1957.
This didn’t seem to worry him. “My whole philosophy has been to be conventional in things such as dress and so on,” he told me in 2005. “But when it comes to ideas, I’ll be willing to stick my neck out: I can take criticism if I think I’m right.”
He received plenty of that criticism over the years for daring to develop economic theories of crime and punishment, of the demand for children, and of rational addicts who may quit in response to a credible threat to raise the price of cigarettes. His idea that individuals might think of their education as an investment, with a rate of return, caused outrage. Yet nobody now frets about the use of the phrase “human capital”, the title of one of Becker’s books.
That exemplifies the way that Becker’s approach has changed the way that economists think about what they do, often without explicitly recognising his influence. He was economically omnivorous: colleagues such as Lars Peter Hansen, a fellow Nobel laureate, would find Becker quizzing them and providing penetrating comments even on research that seemed far removed from Becker’s main interests.
“He will be remembered as a person who in a very creative way broadened the scope of economic analysis,” said Professor Hansen, “And as one of the very best economists of the 20th century.”
Becker’s life-long affection was for the subject he transformed. On weekend afternoons, he would often be found in his office, writing or answering questions from young academics six decades his junior. He continued to write a blog with the legal scholar Richard Posner until a few weeks before his death.
“He loved economics,” said Kevin Murphy, who taught a course alongside Becker for many years, “and he inspired so many economists.” Perhaps the most likely result of a class with Becker was not mastering a particular formal technique, but acquiring that distinctive economist’s outlook on the world.
That worldview was on display when on the way to his Lunch with the FT, Gary Becker parked illegally. On cross-examination, he cheerfully told me that after weighing the risks and benefits, this was a rational crime.
“That sounds like Gary to me,” said Prof Murphy. “He decided to give you a practical lesson in economics.”
Becker was widowed in 1970, and remarried in 1980 to a Chicago history professor, Guity Nashat. She survives him, as does a daughter, Catherine Becker; a sister, Natalie Becker; a stepson and two grandsons.

You can read my lunch with Gary Becker, or read more about his ideas in The Logic of Life. It was clear, speaking to his colleagues, that he will be greatly missed.

5 May 2014
Highlights

Big Data: Are we making a big mistake?

Five years ago, a team of researchers from Google announced a remarkable achievement in one of the world’s top scientific journals, Nature. Without needing the results of a single medical check-up, they were nevertheless able to track the spread of influenza across the US. What’s more, they could do it more quickly than the Centers for Disease Control and Prevention (CDC). Google’s tracking had only a day’s delay, compared with the week or more it took for the CDC to assemble a picture based on reports from doctors’ surgeries. Google was faster because it was tracking the outbreak by finding a correlation between what people searched for online and whether they had flu symptoms.

Not only was “Google Flu Trends” quick, accurate and cheap, it was theory-free. Google’s engineers didn’t bother to develop a hypothesis about what search terms – “flu symptoms” or “pharmacies near me” – might be correlated with the spread of the disease itself. The Google team just took their top 50 million search terms and let the algorithms do the work.
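For the technically minded, here is a minimal sketch, in the spirit of that approach, of what “letting the algorithms do the work” can look like: rank candidate search terms purely by their correlation with official flu counts, then fit a model to the best of them. All the data below are synthetic and the numbers far smaller than Google’s; this is an illustration, not Google’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: weekly flu incidence plus query volumes for many search
# terms. Everything here is made up purely for illustration.
weeks = 200
flu = np.abs(np.sin(np.linspace(0, 12, weeks))) + rng.normal(0, 0.1, weeks)

n_terms = 10_000  # stand-in for Google's 50 million candidate terms
queries = rng.normal(0, 1, (n_terms, weeks))
queries[:50] += flu * rng.uniform(1.0, 3.0, (50, 1))  # a few genuinely flu-linked terms

# "Let the algorithms do the work": rank terms purely by correlation with flu...
corr = np.array([np.corrcoef(q, flu)[0, 1] for q in queries])
top = np.argsort(-corr)[:100]

# ...then fit a linear model on the top-ranked terms, with no theory about
# why any particular term should track influenza.
X = np.c_[np.ones(weeks), queries[top].T]
beta, *_ = np.linalg.lstsq(X, flu, rcond=None)
predicted = X @ beta
print("in-sample correlation with flu:", np.corrcoef(predicted, flu)[0, 1])
```

Nothing in that sketch ever asks why a term should track influenza, which is exactly the strength, and the weakness, that the rest of this piece explores.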
The success of Google Flu Trends became emblematic of the hot new trend in business, technology and science: “Big Data”. What, excited journalists asked, can science learn from Google?
As with so many buzzwords, “big data” is a vague term, often thrown around by people with something to sell. Some emphasise the sheer scale of the data sets that now exist – the Large Hadron Collider’s computers, for example, store 15 petabytes a year of data, equivalent to about 15,000 years’ worth of your favourite music.
But the “big data” that interests many companies is what we might call “found data”, the digital exhaust of web searches, credit card payments and mobiles pinging the nearest phone mast. Google Flu Trends was built on found data and it’s this sort of data that interests me here. Such data sets can be even bigger than the LHC data – Facebook’s is – but just as noteworthy is the fact that they are cheap to collect relative to their size, they are a messy collage of datapoints collected for disparate purposes and they can be updated in real time. As our communication, leisure and commerce have moved to the internet and the internet has moved into our phones, our cars and even our glasses, life can be recorded and quantified in a way that would have been hard to imagine just a decade ago.
Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote “The End of Theory”, a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”.
Unfortunately, these four articles of faith are at best optimistic oversimplifications. At worst, according to David Spiegelhalter, Winton Professor of the Public Understanding of Risk at Cambridge university, they can be “complete bollocks. Absolute nonsense.”
Found data underpin the new internet economy as companies such as Google, Facebook and Amazon seek new ways to understand our lives through our data exhaust. Since Edward Snowden’s leaks about the scale and scope of US electronic surveillance, it has become apparent that the security services are just as fascinated by what they might learn from our data exhaust.
Consultants urge the data-naive to wise up to the potential of big data. A recent report from the McKinsey Global Institute reckoned that the US healthcare system could save $300bn a year – $1,000 per American – through better integration and analysis of the data produced by everything from clinical trials to health insurance transactions to smart running shoes.
But while big data promise much to scientists, entrepreneurs and governments, they are doomed to disappoint us if we ignore some very familiar statistical lessons.
“There are a lot of small data problems that occur in big data,” says Spiegelhalter. “They don’t disappear because you’ve got lots of the stuff. They get worse.”
. . .
Four years after the original Nature paper was published, Nature News had sad tidings to convey: the latest flu outbreak had claimed an unexpected victim: Google Flu Trends. After reliably providing a swift and accurate account of flu outbreaks for several winters, the theory-free, data-rich model had lost its nose for where flu was going. Google’s model pointed to a severe outbreak but when the slow-and-steady data from the CDC arrived, they showed that Google’s estimates of the spread of flu-like illnesses were overstated by almost a factor of two.
The problem was that Google did not know – could not begin to know – what linked the search terms with the spread of flu. Google’s engineers weren’t trying to figure out what caused what. They were merely finding statistical patterns in the data. They cared about correlation rather than causation. This is common in big data analysis. Figuring out what causes what is hard (impossible, some say). Figuring out what is correlated with what is much cheaper and easier. That is why, according to Viktor Mayer-Schönberger and Kenneth Cukier’s book, Big Data, “causality won’t be discarded, but it is being knocked off its pedestal as the primary fountain of meaning”.
But a theory-free analysis of mere correlations is inevitably fragile. If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down. One explanation of the Flu Trends failure is that the news was full of scary stories about flu in December 2012 and that these stories provoked internet searches by people who were healthy. Another possible explanation is that Google’s own search algorithm moved the goalposts when it began automatically suggesting diagnoses when people entered medical symptoms.
Google Flu Trends will bounce back, recalibrated with fresh data – and rightly so. There are many reasons to be excited about the broader opportunities offered to us by the ease with which we can gather and analyse vast data sets. But unless we learn the lessons of this episode, we will find ourselves repeating it.
Statisticians have spent the past 200 years figuring out what traps lie in wait when we try to understand the world through data. The data are bigger, faster and cheaper these days – but we must not pretend that the traps have all been made safe. They have not.
. . .
In 1936, the Republican Alfred Landon stood for election against President Franklin Delano Roosevelt. The respected magazine, The Literary Digest, shouldered the responsibility of forecasting the result. It conducted a postal opinion poll of astonishing ambition, with the aim of reaching 10 million people, a quarter of the electorate. The deluge of mailed-in replies can hardly be imagined but the Digest seemed to be relishing the scale of the task. In late August it reported, “Next week, the first answers from these ten million will begin the incoming tide of marked ballots, to be triple-checked, verified, five-times cross-classified and totalled.”
After tabulating an astonishing 2.4 million returns as they flowed in over two months, The Literary Digest announced its conclusions: Landon would win by a convincing 55 per cent to 41 per cent, with a few voters favouring a third candidate.
The election delivered a very different result: Roosevelt crushed Landon by 61 per cent to 37 per cent. To add to The Literary Digest’s agony, a far smaller survey conducted by the opinion poll pioneer George Gallup came much closer to the final vote, forecasting a comfortable victory for Roosevelt. Mr Gallup understood something that The Literary Digest did not. When it comes to data, size isn’t everything.
Opinion polls are based on samples of the voting population at large. This means that opinion pollsters need to deal with two issues: sample error and sample bias.
Sample error reflects the risk that, purely by chance, a randomly chosen sample of opinions does not reflect the true views of the population. The “margin of error” reported in opinion polls reflects this risk and the larger the sample, the smaller the margin of error. A thousand interviews is a large enough sample for many purposes and Mr Gallup is reported to have conducted 3,000 interviews.
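Under the textbook formula for a simple random sample (a simplification of what any real pollster does), the margin of error shrinks only with the square root of the sample size:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 3_000, 2_400_000):
    print(f"n = {n:>9,}: ±{margin_of_error(n):.1%}")
# n =     1,000: ±3.1%
# n =     3,000: ±1.8%
# n = 2,400,000: ±0.1%
```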
But if 3,000 interviews were good, why weren’t 2.4 million far better? The answer is that sampling error has a far more dangerous friend: sampling bias. Sampling error is when a randomly chosen sample doesn’t reflect the underlying population purely by chance; sampling bias is when the sample isn’t randomly chosen at all. George Gallup took pains to find an unbiased sample because he knew that was far more important than finding a big one.
The Literary Digest, in its quest for a bigger data set, fumbled the question of a biased sample. It mailed out forms to people on a list it had compiled from automobile registrations and telephone directories – a sample that, at least in 1936, was disproportionately prosperous. To compound the problem, Landon supporters turned out to be more likely to mail back their answers. The combination of those two biases was enough to doom The Literary Digest’s poll. For each person George Gallup’s pollsters interviewed, The Literary Digest received 800 responses. All that gave them for their pains was a very precise estimate of the wrong answer.
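A toy simulation makes the point concrete. The figures below are invented purely for illustration: suppose the true Roosevelt share is 61 per cent, but the people who actually mail back a Digest ballot favour him at only 43 per cent.

```python
import numpy as np

rng = np.random.default_rng(1)
true_roosevelt_share = 0.61

# Gallup-style poll: small but (roughly) random sample of voters.
gallup = rng.random(3_000) < true_roosevelt_share

# Digest-style poll: huge sample, but drawn from a pool (car and telephone
# owners who bother to mail back a ballot) that leans towards Landon.
biased_share = 0.43   # invented figure: Roosevelt support among responders
digest = rng.random(2_400_000) < biased_share

print(f"Gallup estimate (n=3,000):      {gallup.mean():.1%}")
print(f"Digest estimate (n=2,400,000):  {digest.mean():.1%}")
# The big sample is far more precise, and precisely wrong.
```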
The big data craze threatens to be The Literary Digest all over again. Because found data sets are so messy, it can be hard to figure out what biases lurk inside them – and because they are so large, some analysts seem to have decided the sampling problem isn’t worth worrying about. It is.
Professor Viktor Mayer-Schönberger of Oxford’s Internet Institute, co-author of Big Data, told me that his favoured definition of a big data set is one where “N = All” – where we no longer have to sample, but we have the entire background population. Returning officers do not estimate an election result with a representative tally: they count the votes – all the votes. And when “N = All” there is indeed no issue of sampling bias because the sample includes everyone.
But is “N = All” really a good description of most of the found data sets we are considering? Probably not. “I would challenge the notion that one could ever have all the data,” says Patrick Wolfe, a computer scientist and professor of statistics at University College London.
An example is Twitter. It is in principle possible to record and analyse every message on Twitter and use it to draw conclusions about the public mood. (In practice, most researchers use a subset of that vast “fire hose” of data.) But while we can look at all the tweets, Twitter users are not representative of the population as a whole. (According to the Pew Research Internet Project, in 2013, US-based Twitter users were disproportionately young, urban or suburban, and black.)
There must always be a question about who and what is missing, especially with a messy pile of found data. Kaiser Fung, a data analyst and author of Numbersense, warns against simply assuming we have everything that matters. “N = All is often an assumption rather than a fact about the data,” he says.
Consider Boston’s Street Bump smartphone app, which uses a phone’s accelerometer to detect potholes without the need for city workers to patrol the streets. As citizens of Boston download the app and drive around, their phones automatically notify City Hall of the need to repair the road surface. Solving the technical challenges involved has produced, rather beautifully, an informative data exhaust that addresses a problem in a way that would have been inconceivable a few years ago. The City of Boston proudly proclaims that the “data provides the City with real-time information it uses to fix problems and plan long term investments.”
Yet what Street Bump really produces, left to its own devices, is a map of potholes that systematically favours young, affluent areas where more people own smartphones. Street Bump offers us “N = All” in the sense that every bump from every enabled phone can be recorded. That is not the same thing as recording every pothole. As Microsoft researcher Kate Crawford points out, found data contain systematic biases and it takes careful thought to spot and correct for those biases. Big data sets can seem comprehensive but the “N = All” is often a seductive illusion.
. . .
Who cares about causation or sampling bias, though, when there is money to be made? Corporations around the world must be salivating as they contemplate the uncanny success of the US discount department store Target, as famously reported by Charles Duhigg in The New York Times in 2012. Duhigg explained that Target has collected so much data on its customers, and is so skilled at analysing that data, that its insight into consumers can seem like magic.
Duhigg’s killer anecdote was of the man who stormed into a Target near Minneapolis and complained to the manager that the company was sending coupons for baby clothes and maternity wear to his teenage daughter. The manager apologised profusely and later called to apologise again – only to be told that the teenager was indeed pregnant. Her father hadn’t realised. Target, after analysing her purchases of unscented wipes and magnesium supplements, had.
Statistical sorcery? There is a more mundane explanation.
“There’s a huge false positive issue,” says Kaiser Fung, who has spent years developing similar approaches for retailers and advertisers. What Fung means is that we didn’t get to hear the countless stories about all the women who received coupons for babywear but who weren’t pregnant.
Hearing the anecdote, it’s easy to assume that Target’s algorithms are infallible – that everybody receiving coupons for onesies and wet wipes is pregnant. This is vanishingly unlikely. Indeed, it could be that pregnant women receive such offers merely because everybody on Target’s mailing list receives such offers. We should not buy the idea that Target employs mind-readers before considering how many misses attend each hit.
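A rough base-rate calculation shows why the hit rate matters. The numbers below are invented, not Target’s: suppose 4 per cent of the women on a mailing list are pregnant, the model spots 80 per cent of them, and it also flags 10 per cent of everyone else.

```python
# Hypothetical illustration of the false-positive problem; none of these
# numbers come from Target.
base_rate = 0.04            # share of customers on the list who are pregnant
sensitivity = 0.80          # chance a pregnant customer is flagged
false_positive_rate = 0.10  # chance a non-pregnant customer is flagged

flagged_pregnant = base_rate * sensitivity
flagged_not_pregnant = (1 - base_rate) * false_positive_rate

precision = flagged_pregnant / (flagged_pregnant + flagged_not_pregnant)
print(f"Share of coupon recipients who are actually pregnant: {precision:.0%}")
# Roughly 25%: even a decent model mostly mails baby coupons to women who are
# not pregnant, simply because most customers are not.
```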
In Charles Duhigg’s account, Target mixes in random offers, such as coupons for wine glasses, because pregnant customers would feel spooked if they realised how intimately the company’s computers understood them.
Fung has another explanation: Target mixes up its offers not because it would be weird to send an all-baby coupon-book to a woman who was pregnant but because the company knows that many of those coupon books will be sent to women who aren’t pregnant after all.
None of this suggests that such data analysis is worthless: it may be highly profitable. Even a modest increase in the accuracy of targeted special offers would be a prize worth winning. But profitability should not be conflated with omniscience.
. . .
In 2005, John Ioannidis, an epidemiologist, published a research paper with the self-explanatory title, “Why Most Published Research Findings Are False”. The paper became famous as a provocative diagnosis of a serious issue. One of the key ideas behind Ioannidis’s work is what statisticians call the “multiple-comparisons problem”.
It is routine, when examining a pattern in data, to ask whether such a pattern might have emerged by chance. If it is unlikely that the observed pattern could have emerged at random, we call that pattern “statistically significant”.
The multiple-comparisons problem arises when a researcher looks at many possible patterns. Consider a randomised trial in which vitamins are given to some primary schoolchildren and placebos are given to others. Do the vitamins work? That all depends on what we mean by “work”. The researchers could look at the children’s height, weight, prevalence of tooth decay, classroom behaviour, test scores, even (after waiting) prison record or earnings at the age of 25. Then there are combinations to check: do the vitamins have an effect on the poorer kids, the richer kids, the boys, the girls? Test enough different correlations and fluke results will drown out the real discoveries.
There are various ways to deal with this but the problem is more serious in large data sets, because there are vastly more possible comparisons than there are data points to compare. Without careful analysis, the ratio of genuine patterns to spurious patterns – of signal to noise – quickly tends to zero.
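A quick simulation shows how fast the flukes pile up. Here a fictitious treatment has, by construction, no effect on any of 1,000 measured outcomes, yet dozens of “significant” findings appear anyway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

n_children, n_outcomes = 200, 1_000   # one treatment, many measured outcomes
treated = rng.normal(0, 1, (n_children, n_outcomes))
control = rng.normal(0, 1, (n_children, n_outcomes))  # identical distribution: no real effect

# Run a separate t-test for every outcome and count "significant" results.
_, p_values = stats.ttest_ind(treated, control)
print("spurious 'significant' findings at p < 0.05:", (p_values < 0.05).sum())
# Expect roughly 50, about one in twenty, despite there being nothing to find.
```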
Worse still, one of the antidotes to the multiple-comparisons problem is transparency, allowing other researchers to figure out how many hypotheses were tested and how many contrary results are languishing in desk drawers because they just didn’t seem interesting enough to publish. Yet found data sets are rarely transparent. Amazon and Google, Facebook and Twitter, Target and Tesco – these companies aren’t about to share their data with you or anyone else.
New, large, cheap data sets and powerful analytical tools will pay dividends – nobody doubts that. And there are a few cases in which analysis of very large data sets has worked miracles. David Spiegelhalter of Cambridge points to Google Translate, which operates by statistically analysing hundreds of millions of documents that have been translated by humans and looking for patterns it can copy. This is an example of what computer scientists call “machine learning”, and it can deliver astonishing results with no preprogrammed grammatical rules. Google Translate is as close to a theory-free, data-driven algorithmic black box as we have – and it is, says Spiegelhalter, “an amazing achievement”. That achievement is built on the clever processing of enormous data sets.
But big data do not solve the problem that has obsessed statisticians and scientists for centuries: the problem of insight, of inferring what is going on, and figuring out how we might intervene to change a system for the better.
“We have a new resource here,” says Professor David Hand of Imperial College London. “But nobody wants ‘data’. What they want are the answers.”
To use big data to produce such answers will require large strides in statistical methods.
“It’s the wild west right now,” says Patrick Wolfe of UCL. “People who are clever and driven will twist and turn and use every tool to get sense out of these data sets, and that’s cool. But we’re flying a little bit blind at the moment.”
Statisticians are scrambling to develop new methods to seize the opportunity of big data. Such new methods are essential but they will work by building on the old statistical lessons, not by ignoring them.
Recall big data’s four articles of faith. Uncanny accuracy is easy to overrate if we simply ignore false positives, as with Target’s pregnancy predictor. The claim that causation has been “knocked off its pedestal” is fine if we are making predictions in a stable environment but not if the world is changing (as with Flu Trends) or if we ourselves hope to change it. The promise that “N = All”, and therefore that sampling bias does not matter, is simply not true in most cases that count. As for the idea that “with enough data, the numbers speak for themselves” – that seems hopelessly naive in data sets where spurious patterns vastly outnumber genuine discoveries.
“Big data” has arrived, but big insights have not. The challenge now is to solve new problems and gain new answers – without making the same old statistical mistakes on a grander scale than ever.

This article was first published in the FT Magazine, 29/30 March 2014. Read it in its original setting here.

Highlights

What next for behavioural economics?

The past decade has been a triumph for behavioural economics, the fashionable cross-breed of psychology and economics. First there was the award in 2002 of the Nobel Memorial Prize in economics to a psychologist, Daniel Kahneman – the man who did as much as anything to create the field of behavioural economics. Bestselling books were launched, most notably by Kahneman himself (Thinking, Fast and Slow, 2011) and by his friend Richard Thaler, co-author of Nudge (2008). Behavioural economics seems far sexier than the ordinary sort, too: when last year’s Nobel was shared three ways, it was the behavioural economist Robert Shiller who grabbed all the headlines.
Behavioural economics is one of the hottest ideas in public policy. The UK government’s Behavioural Insights Team (BIT) uses the discipline to craft better policies, and in February was part-privatised with a mission to advise governments around the world. The White House announced its own behavioural insights team last summer.
So popular is the field that behavioural economics is now often misapplied as a catch-all term to refer to almost anything that’s cool in popular social science, from the storycraft of Malcolm Gladwell, author of The Tipping Point (2000), to the empirical investigations of Steven Levitt, co-author of Freakonomics (2005).
Yet, as with any success story, the backlash has begun. Critics argue that the field is overhyped, trivial, unreliable, a smokescreen for bad policy, an intellectual dead-end – or possibly all of the above. Is behavioural economics doomed to reflect the limitations of its intellectual parents, psychology and economics? Or can it build on their strengths and offer a powerful set of tools for policy makers and academics alike?
A recent experiment designed by BIT highlights both the opportunity and the limitations of the new discipline. The trial was designed to encourage people to sign up for the Organ Donor Register. It was huge; more than a million people using the Driver and Vehicle Licensing Agency website were shown a webpage inviting them to become an organ donor. One of eight different messages was displayed at random. One was minimalist, another spoke of the number of people who die while awaiting donations, yet another appealed to the idea of reciprocity – if you needed an organ, wouldn’t you want someone to donate an organ to you?
BIT devoted particular attention to an idea called “social proof”, made famous 30 years ago by psychologist Robert Cialdini’s book Influence. While one might be tempted to say, “Too few people are donating their organs, we desperately need your help to change that”, the theory of social proof says that’s precisely the wrong thing to do. Instead, the persuasive message will suggest: “Every day, thousands of people sign up to be donors, please join them.” Social proof describes our tendency to run with the herd; why else are books marketed as “bestsellers”?
Expecting social proof to be effective, the BIT trial used three different variants of a social proof message, one with a logo, one with a photo of smiling people, and one unadorned. None of these approaches was as successful as the best alternatives at persuading people to sign up as donors. The message with the photograph – for which the team had high hopes – was a flop, proving worse than no attempt at persuasion at all.
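Evaluating such a trial is statistically straightforward: with eight messages shown at random to more than a million visitors, each message reaches roughly 135,000 people, and any two arms can be compared with a standard two-proportion test. The sign-up figures below are invented for illustration and are not the BIT’s actual results.

```python
import math

def two_proportion_z(signups_a, n_a, signups_b, n_b):
    """z-statistic for comparing sign-up rates in two trial arms."""
    p_a, p_b = signups_a / n_a, signups_b / n_b
    pooled = (signups_a + signups_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented figures: reciprocity message versus the plain control page.
z = two_proportion_z(signups_a=4_250, n_a=135_000, signups_b=3_900, n_b=135_000)
print(f"z = {z:.1f}")  # well above 1.96, so the difference is unlikely to be chance
```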
Three points should be made here. The first is that this is exactly why running trials is an excellent idea: had the rival approaches not been tested with an experiment, it would have been easy for well-meaning civil servants acting on authoritative advice to have done serious harm. The trial was inexpensive, and now that the most persuasive message is in use (“If you needed an organ transplant, would you have one? If so, please help others”), roughly 100,000 additional people can be expected to sign up for the donor register each year.
The second point is that there is something unnerving about a discipline in which our discoveries about the past do not easily generalise to the future. Social proof is a widely accepted idea in psychology but, as the donor experiment shows, it does not always apply and it can be hard to predict when or why.
This patchwork of sometimes-fragile psychological results hardly invalidates the whole field but complicates the business of making practical policy. There is a sense that behavioural economics is just regular economics plus common sense – but since psychology isn’t mere common sense either, applying psychological lessons to economics is not a simple task.
The third point is that the organ donor experiment has little or nothing to do with behavioural economics, strictly defined. “The Behavioural Insights Team is widely perceived as doing behavioural economics,” says Daniel Kahneman. “They are actually doing social psychology.”
. . .
The line between behavioural economics and psychology can get a little blurred. Behavioural economics is based on the traditional “neoclassical” model of human behaviour used by economists. This essentially mathematical model says human decisions can usefully be modelled as though our choices were the outcome of solving differential equations. Add psychology into the mix – for example, Kahneman’s insight (with the late Amos Tversky) that we treat the possibility of a loss differently from the way we treat the possibility of a gain – and the task of the behavioural economist is to incorporate such ideas without losing the mathematically-solvable nature of the model.
Why bother with the maths? Consider the example of, say, improving energy efficiency. A psychologist might point out that consumers are impatient, poorly-informed and easily swayed by what their neighbours are doing. It’s the job of the behavioural economist to work out how energy markets might work under such conditions, and what effects we might expect if we introduced policies such as a tax on domestic heating or a subsidy for insulation.
It’s this desire to throw out the hyper-rational bathwater yet keep the mathematically tractable baby that leads to difficult compromises, and not everyone is happy. Economic traditionalists argue that behavioural economics is now hopelessly patched-together; some psychologists claim it’s still attempting to be too systematic.
Nick Chater, a psychologist at Warwick Business School and an adviser to the BIT, is a sympathetic critic of the behavioural economics approach. “The brain is the most rational thing in the universe”, he says, “but the way it solves problems is ad hoc and very local.” That suggests that attempts to formulate general laws of human behaviour may never be more than a rough guide to policy.
The most well-known critique of behavioural economics comes from a psychologist, Gerd Gigerenzer of the Max Planck Institute for Human Development. Gigerenzer argues that it is pointless to keep adding frills to a mathematical account of human behaviour that, in the end, has nothing to do with real cognitive processes.
I put this critique to David Laibson, a behavioural economist at Harvard University. He concedes that Gigerenzer has a point but adds: “Gerd’s models of heuristic decision-making are great in the specific domains for which they are designed but they are not general models of behaviour.” In other words, you’re not going to be able to use them to figure out how people should, or do, budget for Christmas or nurse their credit card limit through a spell of joblessness.
Richard Thaler of the University of Chicago, who with Kahneman and Tversky is one of the founding fathers of behavioural economics, agrees. To discard the basic neoclassical framework of economics means “throwing away a lot of stuff that’s useful”.
For some economists, though, behavioural economics has already conceded too much to the patchwork of psychology. David K Levine, an economist at Washington University in St Louis, and author of Is Behavioral Economics Doomed? (2012), says: “There is a tendency to propose some new theory to explain each new fact. The world doesn’t need a thousand different theories to explain a thousand different facts. At some point there needs to be a discipline of trying to explain many facts with one theory.”
The challenge for behavioural economics is to elaborate on the neoclassical model to deliver psychological realism without collapsing into a mess of special cases. Some say that the most successful special case comes from Harvard’s David Laibson. It is a mathematical tweak designed to represent the particular brand of short-termism that leads us to sign up for the gym yet somehow never quite get around to exercising. It’s called “hyperbolic discounting”, a name that refers to a mathematical curve, and which says much about the way behavioural economists represent human psychology.
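A minimal sketch of the idea, using the “beta-delta” (quasi-hyperbolic) formulation associated with Laibson’s work: a present-bias parameter shrinks the value of anything that is not immediate, which is enough to make the same workout look worthwhile when planned for next week and not worth the bother when the day arrives. The payoff numbers are invented.

```python
def present_value(payoffs, beta=0.7, delta=0.99):
    """Quasi-hyperbolic (beta-delta) discounting; payoffs[t] arrives t periods from now."""
    return sum((1.0 if t == 0 else beta * delta ** t) * x
               for t, x in enumerate(payoffs))

# Invented payoffs: a workout costs 8 units of effort and pays 10 in health
# benefits two periods later.
plan_ahead = present_value([0, -8, 0, 10])   # cost and benefit both in the future
on_the_day = present_value([-8, 0, 10])      # the same workout, but the cost is now immediate

print(round(plan_ahead, 2), round(on_the_day, 2))
# ~1.25 vs ~-1.14: sign up for next week, then skip when the day comes.
```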
The question is, how many special cases can behavioural economics sustain before it becomes arbitrary and unwieldy? Not more than one or two at a time, says Kahneman. “You might be able to do it with two but certainly not with many factors.” Like Kahneman, Thaler believes that a small number of made-for-purpose behavioural economics models have proved their worth already. He argues that trying to unify every psychological idea in a single model is pointless. “I’ve always said that if you want one unifying theory of economic behaviour, you won’t do better than the neoclassical model, which is not particularly good at describing actual decision making.”
. . .
Meanwhile, the policy wonks plug away at the rather different challenge of running rigorous experiments with public policy. There is something faintly unsatisfying about how these policy trials have often confirmed what should have been obvious. One trial, for example, showed that text message reminders increase the proportion of people who pay legal fines. This saves everyone the trouble of calling in the bailiffs. Other trials have shown that clearly-written letters with bullet-point summaries provoke higher response rates.
None of this requires the sophistication of a mathematical model of hyperbolic discounting or loss aversion. It is obvious stuff. Unfortunately it is obvious stuff that is often neglected by the civil service. It is hard to object to inexpensive trials that demonstrate a better way. Nick Chater calls the idea “a complete no-brainer”, while Kahneman says “you can get modest gains at essentially zero cost”.
David Halpern, a Downing Street adviser under Tony Blair, was appointed by the UK coalition government in 2010 to establish the BIT. He says that the idea of running randomised trials in government has now picked up steam. The Financial Conduct Authority has also used randomisation to develop more effective letters to people who may have been missold financial products. “This shift to radical incrementalism is so much more important than some of the grand proposals out there,” says Halpern.
Not everyone agrees. In 2010, behavioural economists George Loewenstein and Peter Ubel wrote in The New York Times that “behavioural economics is being used as a political expedient, allowing policy makers to avoid painful but more effective solutions rooted in traditional economics.”
For example, in May 2010, just before David Cameron came to power, he sang the praises of behavioural economics in a TED talk. “The best way to get someone to cut their electricity bill,” he said, “is to show them their own spending, to show them what their neighbours are spending, and then show what an energy-conscious neighbour is spending.”
But Cameron was mistaken. The single best way to promote energy efficiency is, almost certainly, to raise the price of energy. A carbon tax would be even better, because it encourages people not only to save energy but also to switch to lower-carbon sources of energy. The appeal of a behavioural approach is not that it is more effective but that it is less unpopular.
Thaler points to the experience of Cass Sunstein, his Nudge co-author, who spent four years as regulatory tsar in the Obama White House. “Cass wanted a tax on petrol but he couldn’t get one, so he pushed for higher fuel economy standards. We all know that’s not as efficient as raising the tax on petrol – but that would be lucky to get a single positive vote in Congress.”
Should we be trying for something more ambitious than behavioural economics? “I don’t know if we know enough yet to be more ambitious,” says Kahneman, “But the knowledge that currently exists in psychology is being put to very good use.”
Small steps have taken behavioural economics a long way, says Laibson, citing savings policy in the US. “Every dimension of that environment is now behaviourally tweaked.” The UK has followed suit, with the new auto-enrolment pensions, directly inspired by Thaler’s work.
Laibson says behavioural economics has only just begun to extend its influence over public policy. “The glass is only five per cent full but there’s no reason to believe the glass isn’t going to completely fill up.”

First published on FT.com, Life and Arts, 22 March 2014

Other Writing

Economic quackery and political humbug

British readers will be well aware that the UK Chancellor George Osborne unveiled his budget statement on Wednesday. Here was the piece I wrote that afternoon:

Has there ever been a chancellor of the exchequer more entranced by the game of politics? Most of George Osborne’s Budget speech was trivial. Some of it was imponderable. The final flurry of punches was substantial. Every word was political.

Consider the substantial first: in abolishing the obligation for pensioners to buy annuities, Mr Osborne has snuck up behind an unpopular part of the financial services industry and slugged it with a sock full of coins. (No doubt he will tell us they were minted in memory of the threepenny bit and in honour of Her Majesty the Queen.) This is a vigorous but carefully calibrated tummy rub for sexagenarians with substantial private-sector pensions.

Nobody else will even understand what has been done. The benefit to pensioners is immediate and real. The cost comes later – but Mr Osborne will be long gone by the time the media begin to wring their hands about some poor pensioner who blew his retirement savings on a boiler-room scam.

His other significant moves were equally calculated. A meaningless and arbitrary cap on the welfare budget is no way to rationalise the welfare state but it is a splendid way to tie Labour in knots. A new cash Isa allowance of £15,000 will benefit only the prosperous, and has political appeal while delivering no real benefit – and no real cost to the Treasury – until long after the 2015 election.

Next, the imponderable. Mr Osborne devoted substantial time to the forecasts of the Office for Budget Responsibility, and no wonder: at last the news is good. But while the OBR is independent it is not omniscient. Like other economic forecasters, it has been wrong before and will be again. Mr Osborne forgot this and spoke of growth in future years being “revised up”. This is absurd. The OBR does not get to decide what growth in future years will be. We can draw mild encouragement from its improved forecasts, nothing more.

Finally, the trivial. Any chancellor must master the skill of announcing policies that have little or no place in the macroeconomic centrepiece of the political calendar. Mr Osborne showed no shame. The news that, for example, police officers who die in the line of duty will pay no inheritance tax is appealing but irrelevant. Police deaths are blessedly rare and, since police officers are usually young and modestly paid, inheritance tax is usually a non-issue even in these rare tragedies.

So let us applaud Mr Osborne for playing his own game well – a game in which economic logic is an irritation, the national interest is a distraction and party politics is everything.

You can read this comment in context at FT.com

Other Writing

The Royal accounts are printed in red and gold

The monarchy costs the same as the milk the nation pours on its cereal, says Tim Harford

‘Britain’s Royal Household spent more than it received last year and is doing too little to improve the management of its finances, a parliamentary watchdog says.’ Financial Times, January 28

What – parliamentarians have condemned deficit spending and poor financial management?

They are the experts on such things, I am sure. This is Margaret Hodge MP’s public accounts committee at work. It has a reputation for shaking things up but I’ve never been able to take Ms Hodge seriously since her complaints about a cap on housing benefit.

What was risible about a cap on housing benefit?

Nothing risible about that as such. But because the cap would particularly affect Londoners claiming the benefit, Ms Hodge was among those complaining that it would change the shape of London. She called it “a massive demographic and social upheaval the likes of which have never been seen before”. Since the London mayor’s office – which also opposed the policy – estimated that fewer than 0.2 per cent of the capital’s families would have to move home as a result, that suggests an alarmingly shaky grasp of the numbers for someone whose job is to oversee value for money in public spending.

You’re an unforgiving sort. In any case, Ms Hodge’s committee is concerned about the way the world’s largest housing benefit cheque is being spent.

Yes, the Royal Household receives £31m – a slice of the income from the Crown Estate.

Isn’t that the Queen’s money?

The Estate is nominally the property of the Queen but George III signed over its revenue to parliament.

Wasn’t he the mad one?

Not at the time he gave up the revenues from the Crown Estate. In any case, the current arrangement is only a couple of years old. The Royal Household gets 15 per cent of the income from the Crown Estate. That income, I might add, is sharply rising.

So all this talk of the Queen being down to her last million is nonsense?

It is obviously jolly amusing and has provoked many enormously original jokes. The Queen’s cash reserve has indeed fallen to £1m – not much relative to the scale of the spending required to run the Royal Household. But since income is rising, both from the Crown Estate and from admissions to the likes of Windsor Castle, and spending has been steadily falling, the Royal Household is about to go into surplus. That is more than you can say for the government.

But you can understand why parliament takes an interest. There’s serious money at stake. Think of the hospital beds you could provide for £31m.

Oh, absolutely. You could keep the English National Health Service running for almost three hours for that kind of money.

I’m getting the sense that you’re a monarchist.

Not particularly, but one thing I’m sure of is that the case for or against the monarchy can’t depend on £31m, which is roughly the cost of the milk the nation pours on its cornflakes each morning, plus a bit of tea and toast. This has been reported in all the papers for roughly the same reason that Kim Kardashian’s latest celebrity exploits are reported everywhere.

Why is that, by the way?

Because we’re all monkeys, and we’re fascinated by other monkeys with higher status than us.

But the public accounts committee thinks the real monkeys are the ones in charge of maintaining Royal Household properties – that whoever is in charge of electrical repairs, repointing the bricks, that sort of thing, has been letting things fall into ruin.

Yes – the committee’s view is also that the Royal Household needs to take responsibility for itself and the Treasury needs to take responsibility for it; that it is successfully saving money but should save more; and that it is successfully raising money from tourists but should raise more from tourists. Basically, just think really hard about whatever you already believe is true about the Royal Household, and I am sure I can spare you the trouble of reading the committee’s report.

And do you agree that maintenance of Household properties is lax?

I don’t know because they’ve never invited me in to poke around the plumbing. But it wouldn’t surprise me. There isn’t much competition, and as the great economist John Hicks said: “The best of all monopoly profits is a quiet life.” That may well be how the courtiers felt – until the public accounts committee came along.

Also published at ft.com.

Other Writing

Unequal societies in a more equal world

Global poverty and rich-world inequality are separate issues, writes Tim Harford

‘The richest 85 people on the globe – who between them control as much wealth as the poorest half of the global population put together – could squeeze on to a single double-decker.’ theguardian.com, January 20

A single double-decker bus? Is that different from a double single-decker?

Oh, don’t be unkind. This snippet comes from The Guardian, which credits global development charity Oxfam, which in turn cites a report from the bank Credit Suisse – and I should probably make clear that neither Oxfam nor Credit Suisse have anything to say about buses. I think that’s what we in the trade call a “scoop of interpretation”.

A striking image, though.

Yes, but a distorted one. In the same vein, here’s a surreal image of my own: my toddler controls more wealth than the poorest one and a half billion people on the planet.

Does he have a rich uncle?

No, but he has no debts. That puts his wealth at zero. The poorest people have more debts than assets; their wealth is less than zero. It’s difficult to know exactly how many people are in that boat – understandably, the data are patchy. Still, James Davies, Anthony Shorrocks and Rodrigo Lluberas, academics who worked on the Credit Suisse report, have suggested elsewhere that the poorest 10 per cent have significant net debt.

This sounds like your usual sophistry.

The sophistry isn’t mine. Oxfam and The Guardian are clearly very keen to draw attention to how rich the very rich are. But something has gone awry when the same reasoning leads you to conclude that my son is richer than the poorest 1.5bn put together. On this measure, he’s also richer than an indebted graduate of Harvard Business School. As Credit Suisse points out, “human capital” or earning power isn’t included in the analysis.

OK, so the single double-decker story is daft. But it points to an important truth: economic inequality is growing sharply.

That’s where Oxfam’s interest in all this is curious. The thrust of Oxfam’s argument is that in a lot of countries, the gap between the incomes of the rich and poor is widening, which is true. They say this is both caused by and causes rent-seeking behaviour – elites shaping the rules to suit themselves. This is very plausible. If you want to make a lot of money in the world, it helps to have a friendly government give you a monopoly.

So Oxfam is right!

Except it’s not clear this is a pressing global development issue. Looking at the world as a whole, income inequality does not seem to be rising and is perhaps even falling a little. Exact details depend on how you measure things but the basic story is that some middle-income or poor countries with large populations – Brazil, India and China, but also the likes of Turkey, Indonesia and Nigeria – have been growing faster than rich countries. This offsets the effect of increasing inequality within countries.

So it’s possible for inequality to be increasing in every country in the world and yet global inequality to be falling.

Not only is it possible, it’s actually not far from the truth. Which is why it’s so baffling that Oxfam has jumped in here feet-first. There are two big trends. One is that there’s a lot of good news in the world of economic development. Poverty rates have fallen and indicators such as infant mortality have been moving in the right direction. The other trend is this sharp rise in the income share of the rich, particularly in the US, the UK and other anglophone countries. It’s misleading to present this as evidence that the plight of the world’s poorest is getting worse.

So what should be done?

Oxfam thinks the answer is for the super-rich to promise to be good: no tax havens, no lobbying, support universal healthcare, support progressive taxation.

What about philanthropy?

Not mentioned.

Strange.

That puzzled me, too.

And what would you do about rising inequality in rich countries?

I think something needs to change but I am at a loss as to what. Universal high-quality education seems to have played an important role in limiting inequality in Scandinavia. But that pushes the problem down the line: how do we get universal high-quality education?

So increasing inequality is real and you don’t know what to do about it.

Quite so; sorry. Maybe Oxfam’s suggestion of asking the plutocrats to play nicely isn’t such a bad idea.

Also published at ft.com.

Other Writing

The robots are coming and will terminate your jobs

In future, there may be people who – despite being fit to work – have no economic value

On August 29 1997, Skynet – a computer system controlling the US nuclear arsenal – became self-aware. Panicking operators tried to deactivate it. Skynet, perceiving the threat, launched its arsenal, killed most of humanity, and ushered in a world in which the robots ruled. So went the backstory of the 1984 movie The Terminator. But computers did not become self-aware in 1997 – the closest they managed was when Deep Blue, a B-list supercomputer, beat Garry Kasparov, the world chess champion. Despite decades of hand-wringing about robots taking over, the robots never quite seem to rise.

But perhaps 2014 will be different. Google certainly seems to think so: early in December it purchased Boston Dynamics, a producer of military prototype robots – with names such as “BigDog”, “WildCat” or “Petman” – that wouldn’t look out of place in the Terminator films. These nightmarish machines will now be brought to you by the folks who host all your email, know what your internet searches are and are tracking your phone’s location.

But while the Skynet-esque combination of Google and Boston Dynamics is unsettling, it is not in itself a reason to expect that robot technologies really will change the world. Yet the talk in the economics profession is increasingly taking that possibility seriously.

The primary cause has been with us a long time: the familiar Moore’s law, which in various guises describes growth in computing power as swift and exponential. We have got used to swift growth, but we can never quite get used to the implications of exponential growth – meaning that whatever has just happened will be eclipsed by whatever is just about to happen.

Moore’s law, loosely applied, is that computers today are twice as powerful as the computers of two years ago, perhaps just 18 months ago. Today’s mobile phone is a match for what was once a cutting-edge gaming console; that gaming console, in turn, outperforms the kind of old-timey supercomputer that the Terminator franchise once imagined taking over the world.
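The compounding is easy to sketch. Assuming, purely for illustration, a doubling every 18 months:

```python
def growth_factor(years, doubling_period_years=1.5):
    """How much more computing power per pound after `years`, if it doubles every period."""
    return 2 ** (years / doubling_period_years)

for years in (2, 10, 20):
    print(f"after {years:>2} years: ~{growth_factor(years):,.1f}x")
# after  2 years: ~2.5x
# after 10 years: ~101.6x
# after 20 years: ~10,321.3x
```

Ten years of such doubling works out at roughly a hundredfold improvement, the scale invoked later in this piece for today’s 10-year-olds.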

Software is also becoming more efficient. We tend to miss this because the bloated copies of Microsoft Word we use do not seem faster than 20 years ago. But a mobile phone running Pocket Fritz 4, a chess program, can now beat grandmasters, despite the phone running far more slowly than Deep Blue did. A chess-playing phone is not about to lead a robot uprising, so why should we care? A growing number of economists – including Massachusetts Institute of Technology’s Erik Brynjolfsson and Andrew McAfee in a new book The Second Machine Age – argue that robots and algorithms are poised to make inroads into labour markets.

Computing power is starting to solve everyday problems – which turn out to be the hardest ones. Computers were laughable drivers in 2004, when a computer-driving competition was “won” by a car that crashed after completing seven miles of a 150-mile course. Now computers drive cars safely.

In 2008, robots still struggled with a problem known as “Slam” – simultaneous localisation and mapping, the process of mentally building up a map of a new location, including hazards, as you move through it. In 2011, Slam was convincingly addressed by computer scientists using Microsoft’s “Kinect” gaming hub, an array of sensors and processors that until recently would have been impossibly costly but is suddenly compact and cheap.

Problems such as language recognition and Slam have so far prevented robots from working alongside humans, or from taking on tasks that are not precisely defined, such as taping up parcels of different sizes or cleaning a kitchen. Perhaps the robots really are now on the rise.

Consider “Baxter”. Traditional industrial robots are major capital investments: vast machines kept apart from human workers for safety reasons. Baxter, by contrast, claims to be able to do much of the same work, is cheaper, safely works with humans, and is – its manufacturers claim – intuitive to reprogramme. And if Baxter fails to live up to the hype, Moore’s law means that the robot’s successors – with a computer eight to 10 times more powerful for the price in five years’ time – will not.

What is sobering is that we have already seen convincing evidence of the impact of technology on the job market. Alan Manning of the London School of Economics coined the term “job polarisation” a decade ago, when he discovered that employment in the UK had been rising for people at the top and the bottom of the income scale. There was more demand for lawyers and burger flippers. It was middle-skill jobs that were disappearing. The same trend is true in the US, and is having the predictable effect on wages: strong gains at the top, some gains at the bottom, stagnation in the middle.

The leading explanation is that technological change has favoured certain skills and displaced others. Typists, clerks, travel agents and bank tellers find their skills less valued. Mechanisation now dominates agriculture, large-scale construction and manufacturing. We tend to imagine that manufacturing jobs have disappeared to China; in fact, manufacturing employment in China has been falling. Even the Chinese must fear the robots.

Of course cheap, ubiquitous computing power has brought many good things – and will bring more. The question is whether we are equipped to deal with the possibility that in future, there will be people who – despite being willing and fit to work – have no economic value as employees. By the time today’s 10-year-olds have their degrees, computers could be a hundred times cheaper and smarter than they are today. A future full of robot servants could be a bright future indeed, but only if we can adapt our institutions quickly enough.

Also published at ft.com.

Other Writing

A Year in a Word: Bitcoin

(noun) A unit of the first and most famous cryptocurrency. After “bit”, a contraction of “binary digit”, the basic unit of digital information.
It all sounds very mysterious. Bitcoin is a currency based on cryptographic principles; it was created by a pseudonymous programmer, or programmers, called Satoshi Nakamoto. New Bitcoins are “mined” and the entire system is maintained by a decentralised peer-to-peer database recording every transaction in the history of the currency.
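To give a flavour of what “mining” means, here is a toy proof-of-work loop: find a number that, appended to some transaction data, gives a SHA-256 hash beginning with a run of zeros. Real Bitcoin mining follows the same principle with a precisely specified block format and a vastly harder target; everything below is simplified for illustration.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Toy proof-of-work: find a nonce whose hash has `difficulty` leading zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("Alice pays Bob 1 BTC")
print("found nonce:", nonce)
# Anyone can verify the work instantly by hashing once, but finding the nonce
# takes, on average, 16**4 (about 65,000) attempts at this toy difficulty.
```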
Bitcoin promises to be a new type of money. Economists like to say money has three properties: it is a medium of exchange, obviating the need for barter; a store of purchasing power; and a stable unit of account that serves as a standard of what is expensive.
As a peer-to-peer processing network, Bitcoin has the potential to become a superb medium of exchange. Our current systems for digital payment require intermediaries such as MasterCard or PayPal, and fees can be higher than consumers realise. The Bitcoin network allows Andy to send money to Belinda and have the transaction verified “in the cloud”, so to speak, at no cost.
As a stable unit of account, however, Bitcoin is a failure. The currency’s volatility would make a dotcom director blush. That need not matter: a price can be listed in dollars, then transacted using the Bitcoin system. The dollar’s superb record as a standard of value can be married with the Bitcoin network’s ability to process payments cheaply. Alas, Bitcoin is currently used for speculation not transactions. Everyone wants to own Bitcoins but few spend them.
Bitcoin appeals to speculators for its frothy price and to libertarians for what it seems to stand against: it is not controlled by large corporations; it is not issued by a central bank; it is not really fiat money at all, even though the commodity that backs Bitcoin is a mathematical abstraction. Perhaps one day Bitcoin will gain momentum not because of the ideals it embodies, but because it turns out to be useful.

First published on FT.com

Other Writing

There is no sacrilege in flogging EU passports

Citizenship auctions are just the ticket for those who lose in the lottery of life, writes Tim Harford

‘UK ministers are under growing pressure to intervene against plans by the island of Malta to sell EU passports for €650,000, allowing buyers immediate rights of residency in all member states.’ – Financial Times, December 9

That’s outrageous!

I know. There has to be a cheaper deal out there. You can get Portuguese residency with €500,000 in your pocket – and you don’t even have to give the money away. You just have to buy a pad in Portugal.

No, it’s outrageous that Malta is selling passports.

Oh. Well, granted, there is an issue here. Given EU rules on freedom of movement, Malta is in effect selling EU citizenship but pocketing the cash. But this sort of problem is in the nature of the EU. Member states will either have to tolerate it or develop some sort of centralised regulator – just as the European Central Bank regulates the shared currency. That has been a tremendous success.

So in your view the main problem is that the sale of EU passports should be centrally administered?

That would be more logical. Since EU passports are close substitutes for each other, we can’t allow member states to pocket the gains of selling passports but impose the costs on each other.

And what are the costs?

Well, if you’re selling passports at €650,000 a pop, the costs have got to be very close to zero. Not many people will be able to afford that – a few hundred applications are reported to be in process – and those who can are unlikely to be a drain on the state. If the price was 50p and a pound of grapes, there would be a very large number of takers and one might reasonably start to ask whether Malta should really be selling the right to move to Paris.

But isn’t there a security risk?

I am not convinced al-Qaeda has been held at bay by an inability to pick up a Maltese passport for €650,000. As for some Russian mobster buying the right to live in Berlin – that is more distasteful than dangerous.

Distastefulness is important, though. Aren’t we cheapening citizenship?

I hardly think that €650,000 cheapens the EU passport. For the typical British citizen, the message Malta is sending is that the passport in your pocket is worth more than your house and your pension pot put together. Which may not be far wrong: take a typical UK citizen, dump her in Calcutta or Dar es Salaam and see how she gets on. EU citizenship is more valuable than most EU citizens realise.

‘Cheapens’ was the wrong word, perhaps. I should have said ‘commoditises’. Don’t you think there are some things that money shouldn’t be able to buy?

I do indeed – but that leaves open the question of whether citizenship is one of them. Malta’s undisguised flogging of EU passports is viewed with outrage, while “investor” programmes elsewhere – including the UK and the US – attract less opprobrium. That suggests we’re willing to exchange citizenship for cash provided the transaction is disguised as something else. This is hypocrisy.

You’d be happy to sell EU citizenship, then?

I would. I’m with Gary Becker, an economist and Nobel memorial prize winner, who has argued for as long as I can remember for the US to auction off the much-in-demand right to be a citizen. The idea has various frills – including rent-to-buy deals and a sort of student-loan system to allow poor immigrants to buy citizenship on the never-never – but is basically sound. Citizenship is being given away in arbitrary and bureaucratic ways, leaving most would-be immigrants disappointed and many existing citizens resentful. An auction system would streamline the process, be more likely to give citizenship to those best able to take advantage of it, and would raise cash and thus reduce anti-immigrant sentiment.

This is close to sacrilege, if you ask me.

That’s a strong word to use in defence of our global border control system. We wring our hands about inequality, but the biggest determinant of your income is your country of birth. Our closed borders entrench that unfairness. The system can be defended on pragmatic grounds, but if you’re going to suggest it’s sacred, I’m happy to be profane.

What next, then? Will Goldman Sachs or Google buy some Caribbean island and start selling corporate citizenship? How much for an Amazon Prime residency? Will we get silver, gold and platinum passports?

Portugal is way ahead of them; its “golden residence” programme was launched last year.

Also published at ft.com.

Other Writing

Amazon’s drones aren’t coming soon. But what will the world look like when they do?

Let’s not ask whether, in two years’ time, Amazon will deliver your last-minute Christmas shopping by octocopter. It won’t – the idea is a glorious publicity stunt.
The current objections to filling the skies with delivery drones are tremendous. The hovering robots are unreliable and likely to bump into things, and each other; the energy costs are high; and, of course, we look forward to Amazon fending off litigants after a miniature helicopter makes a hard landing somewhere inconvenient.
But while we tend to get excited and then disappointed by short-term techno-hype, the longer-term changes are often more profound than we expect. We typically lack the imagination to understand what they might be – nothing ever looks as dated as old science fiction. Regulatory obstacles are there to be surmounted: we already drive heavy chunks of metal around, killing people every day, so it is safe to assume that we will get over any safety concerns about drones.
So let us assume that they do one day cloud our skies. That brings us to a far more interesting question: not whether cheap, safe, legal drones will fetch our shopping, but what would the world look like if they did?
Prediction one: a sharper distinction between the hoarders and the minimalists. As the economist Tyler Cowen has pointed out, fast, cheap drone delivery allows you to keep low inventories. Say goodbye to overstuffed freezers and groaning bookshelves: the Ocado-copter will deliver whatever you need for tonight’s dinner party given 30 minutes’ notice. This is an unclutterer’s dream. But while some will use on-demand delivery to keep their homes free of unnecessary junk, many people are likely to order more than they can possibly use. For most of us, cheap, immediate access to stuff is simply going to mean – well, more stuff.
Prediction two: the high street might do better than you would think. Amazon’s current business model relies on vast, efficient warehouse hubs some distance from consumers. Parcels are dispatched from Amazon’s hub, reorganised at delivery hubs by Royal Mail or other couriers, then delivered to us at home. The speed and short range of drone delivery suggests that Amazon would need many more, smaller dispatch centres much closer to consumers. Other retailers already have these: they are on the high street. It remains to be seen whether a small, internet-only warehouse operating out of a business park around the back of the railway station will actually outcompete high-street shops serving double-duty as traditional retailers and drone-dispatch hubs.
Prediction three: home delivery would be just the start. Amazon would know where you were, and with a few swipes on a smartphone app, you could tell the drone to deliver to your office or, for that matter, to bring a Coke as you pause halfway around your morning jog. When you need that delivery of emergency insulin, you may not actually be at home. Amazon will not be delivering cocaine via drones – but somebody certainly will.
Prediction four: the real benefits may come from sharing logistics, with each individual retailer or restaurateur able to book a slot on the drone mother ship at short notice for a low fee. (Would it be an airship? Or, more prosaically, just a robot-driven lorry that cruises slowly around the neighbourhood?) Amazon and Google – which has pioneered self-driving cars and this week announced a push into robotics – have been pioneers in “cloud computing”, where smaller players rent computer power as a service from the big boys. The swarm of currycopters may be a less metaphorical cloud – a service that anyone needing a quick delivery can rent on a whim.
Prediction five: the suburbs will benefit. Amazon’s publicity video shows a drone flying over open countryside, but the drones seem more likely to work in urban areas. Whether anyone who lives in Brooklyn or Bloomsbury will really get excited about 30-minute delivery times is an open question: you only have to walk out of the front door to have easy access to most of what you might want in a hurry. Yet those who live on the outskirts of big population centres will see the benefits of not having to get in the car and drive to a hypermarket.
Prediction six: maybe things will not really change after all. If, 15 years ago, I promised you cheap next-day delivery of almost any product you could imagine, convenient and reliable grocery delivery, and the online streaming of cheap or free video and music, you would have predicted a retailing earthquake. What is really surprising about the high street is how little it has changed. My own local high street lost a Virgin Megastore and a Borders, but the gaps have been filled by fast-fashion retailers and a Tesco Metro.
In fact, the only prediction of which I am truly confident is that whatever the really important change will be, it will take most of us by surprise.


First published in the Financial Times.
