Tim Harford The Undercover Economist


My weekly column in the FT Magazine on Saturdays, explaining the economic ideas around us every day. This column was inspired by my book and began in 2005.

Undercover Economist

Cigarettes, damn cigarettes and statistics

We cannot rely on correlation alone. But insisting on absolute proof of causation is too exacting a standard

It is said that there is a correlation between the number of storks’ nests found on Danish houses and the number of children born in those houses. Could the old story about babies being delivered by storks really be true? No. Correlation is not causation. Storks do not deliver children but larger houses have more room both for children and for storks.
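
For anyone who wants to see the mechanism at work, here is a minimal simulation (invented numbers, not Danish data) in which house size drives both storks and babies, so a healthy correlation appears between two variables that have no causal link:

```python
# House size drives both the number of storks' nests and the number of
# children, so the two are correlated even though neither causes the other.
# Illustrative numbers only.
import random

random.seed(0)
houses = [random.uniform(50, 400) for _ in range(1000)]   # floor area, m^2
storks = [0.01 * h + random.gauss(0, 0.5) for h in houses]
babies = [0.005 * h + random.gauss(0, 0.5) for h in houses]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(corr(storks, babies))  # strongly positive, yet storks deliver no babies
```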

This much-loved statistical anecdote seems less amusing when you consider how it was used in a US Senate committee hearing in 1965. The expert witness giving testimony was arguing that while smoking may be correlated with lung cancer, a causal relationship was unproven and implausible. Pressed on the statistical parallels between storks and cigarettes, he replied that they “seem to me the same”.

The witness’s name was Darrell Huff, a freelance journalist beloved by generations of geeks for his wonderful and hugely successful 1954 book How to Lie with Statistics. His reputation today might be rather different had the proposed sequel made it to print. How to Lie with Smoking Statistics used a variety of stork-style arguments to throw doubt on the connection between smoking and cancer, and it was supported by a grant from the Tobacco Institute. It was never published, for reasons that remain unclear. (The story of Huff’s career as a tobacco consultant was brought to the attention of statisticians in articles by Andrew Gelman in Chance in 2012 and by Alex Reinhart in Significance in 2014.)

Indisputably, smoking causes lung cancer and various other deadly conditions. But the problematic relationship between correlation and causation in general remains an active area of debate and confusion. The “spurious correlations” compiled by Harvard law student Tyler Vigen and displayed on his website (tylervigen.com) should be a warning. Did you realise that consumption of margarine is strongly correlated with the divorce rate in Maine?

We cannot rely on correlation alone, then. But insisting on absolute proof of causation is too exacting a standard (arguably, an impossible one). Between those two extremes, where does the right balance lie between trusting correlations and looking for evidence of causation?

Scientists, economists and statisticians have tended to demand causal explanations for the patterns they see. It’s not enough to know that college graduates earn more money — we want to know whether the college education boosted their earnings, or if they were smart people who would have done well anyway. Merely looking for correlations was not the stuff of rigorous science.

But with the advent of “big data” this argument has started to shift. Large data sets can throw up intriguing correlations that may be good enough for some purposes. (Who cares why price cuts are most effective on a Tuesday? If it’s Tuesday, cut the price.) Andy Haldane, chief economist of the Bank of England, recently argued that economists might want to take mere correlations more seriously. He is not the first big-data enthusiast to say so.

This brings us back to smoking and cancer. When the British epidemiologist Richard Doll first began to suspect the link in the late 1940s, his analysis was based on a mere correlation. The causal mechanism was unclear, as most of the carcinogens in tobacco had not been identified; Doll himself suspected that lung cancer was caused by fumes from tarmac roads, or possibly cars themselves.

Doll’s early work on smoking and cancer with Austin Bradford Hill, published in 1950, was duly criticised in its day as nothing more than a correlation. The great statistician Ronald Fisher repeatedly weighed into the argument in the 1950s, pointing out that it was quite possible that cancer caused smoking — after all, precancerous growths irritated the lung. People might smoke to soothe that irritation. Fisher also observed that some genetic predisposition might cause both lung cancer and a tendency to smoke. (Another statistician, Joseph Berkson, suggested that people who were tough enough to resist adverts and peer pressure were also tough enough to resist lung cancer.)

Hill and Doll showed us that correlation should not be dismissed too easily. But they also showed that we shouldn’t give up on the search for causal explanations. The pair painstakingly continued their research, and evidence of a causal association soon mounted.

Hill and Doll took a pragmatic approach in the search for causation. For example, is there a dose-response relationship? Yes: heavy smokers are more likely to suffer from lung cancer. Does the timing make sense? Again, yes: smokers develop cancer long after they begin to smoke. This contradicts Fisher’s alternative hypothesis that people self-medicate with cigarettes in the early stages of lung cancer. Do multiple sources of evidence add up to a coherent picture? Yes: when doctors heard about what Hill and Doll were finding, many of them quit smoking, and it became possible to see that the quitters were at lower risk of lung cancer. We should respect correlation, but treat it as a clue to a deeper truth, not the end of our investigations.

It’s not clear why Huff and Fisher were so fixated on the idea that the growing evidence on smoking was a mere correlation. Both of them were paid as consultants by the tobacco industry and some will believe that the consulting fees caused their scepticism. It seems just as likely that their scepticism caused the consulting fees. We may never know.

Written for and first published at ft.com.

Undercover Economist

Online ads: log in, tune out, turn off

How annoying does an ad have to be before a website should refuse to run it?

Online banner ads are not the advertising industry’s most glorious achievement. From the pop-up to the sudden blast of music, the clickbait to the nonsensically animated gifs, the stroboscope to the advert that simply appears to have a spider scurrying across it, there seems to be no end to the ways in which banner advertisements can annoy us.

Up to a point, this is part of the deal. Publishers offer something we want to look at, our attention is worth money to advertisers, and the advertisements help to pay for the content we’re enjoying. But how annoying does an ad have to be before a website should refuse to run it? While the question is obvious, the answer is not: it’s hard for publishers to know how much the adverts may be driving readers away.

Daniel Goldstein, Preston McAfee and Siddharth Suri, all now at Microsoft Research, have run experiments to throw light on this question. (They are, respectively, a psychologist, an economist and a computer scientist; do send in your suggested punchlines.)

The experiments are intriguing as much for the method as for the conclusion. Traditionally, much experimental social science has been conducted with all the participants in the same room, interacting on paper, face to face or through computers. Then the computer-mediated experiments moved online, with researchers such as Goldstein assembling large panels of participants willing to log in and take part in exchange for a modest payment.

Now there is an easier way: Amazon Mechanical Turk. The original Mechanical Turk was an 18th-century chess-playing “robot” which, in reality, concealed a human chess player. Amazon’s Mechanical Turk (MTurk) also uses humans to do jobs we might expect from a computer but which computers cannot yet manage. For example, Turk workers might help train a spam filter by categorising tens of thousands of emails; or they might decide which of several photographs of an item or location is the best.

From the point of view of social-science researchers, MTurk is a remarkable resource, allowing large panels of diligent experimental subjects to be assembled cheaply at a moment’s notice. It is striking and somewhat discomfiting just how little MTurk workers (“Turkers”) are willing to accept — a study in 2010 found an effective median wage of $1.38 an hour. Siddharth Suri says that, because of its speed, flexibility and low cost, MTurk is rapidly becoming a standard tool for experimental social science.

So, back to those annoying ads. First, Goldstein, McAfee and Suri recruited MTurk workers to rate a selection of 72 animated adverts and 72 static ads derived from the final frame of the animations.

It may not surprise you to know that the 21 most annoying adverts were all animated, while the 24 least annoying were static.

The researchers picked the 10 least aggravating and the 10 most excruciating and used them in the second stage of the study.

In this second stage, Goldstein and his colleagues hired Turkers to sort through emails and pick out the spam — they were offered 25 cents as a fixed fee plus a “bonus” that was not specified until after they signed up. The experiment had two variables at play. First, the Turkers were randomly assigned to groups whose workers were paid 10 cents, 20 cents or 30 cents per 50 emails categorised. Second, while the workers were sorting through the emails, they were either shown no adverts, “good” adverts or “bad” adverts. Some workers diligently plodded on while others gave up and cashed out early.

Usually researchers want to avoid people dropping out of their experiments. The wicked brilliance of this experimental design is that the dropout rate is precisely what the experimenters wanted to study.

Unsurprisingly, the experiment found that people will do more work when you pay them a better rate, and they will do less work when you show them annoying adverts. Comparing the two lets the researchers estimate the magnitude of the effect, which is striking: removing the annoying adverts entirely produced as much extra effort as paying an additional $1.15 per 1,000 emails categorised — and effectively $1.15 per 1,000 adverts viewed. But $1.15 per 1,000 views is actually a higher rate than many annoying advertisers will pay — the rate for a cheap advert may be as low as 25 cents per 1,000 views, says Goldstein.

 . . . 

Good adverts are much less destructive. They push workers to quit at an implicit rate of $0.38 per 1,000 views, for an advert that may pay $2 per 1,000 views to the publisher. Generalising for a moment: good adverts seem worth the aggravation but bad adverts seem to impose a higher cost on a website’s readers than the advertisers are willing to pay. It is no wonder that websites hoping for repeat traffic tend to avoid the most infuriating adverts.
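
Put side by side, the arithmetic is stark. A quick back-of-envelope check (the per-1,000-view figures are those reported above; the framing of the comparison is mine):

```python
# Implicit cost to readers vs revenue to the publisher, in dollars per
# 1,000 ad views. Figures as reported in the column; framing is illustrative.
ads = {
    "bad (annoying)": {"cost_to_readers": 1.15, "revenue": 0.25},
    "good (benign)":  {"cost_to_readers": 0.38, "revenue": 2.00},
}
for kind, d in ads.items():
    net = d["revenue"] - d["cost_to_readers"]
    print(f"{kind:15s} net value per 1,000 views: ${net:+.2f}")
# bad ads:  -$0.90 -- they cost readers more than they pay the site
# good ads: +$1.62 -- worth the aggravation
```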

A sting in the tail is that the animated adverts may not even work on their own terms. An eye-tracking study conducted in 2003 by Xavier Drèze and François-Xavier Hussherr found that people avoided looking at banner advertisements in general; in 2005 Moira Burke and colleagues found that people actually recalled less about the animated adverts than the static ones.

How could that be? Perhaps we have all learnt a sound principle for browsing the internet: never pay attention to anything that jiggles around.

Written for and first published at ft.com.

Undercover Economist

Highs and lows of the minimum wage

‘The lesson of all this is that the economy is complicated and textbook economic logic alone will get us only so far’

In 1970, Labour’s employment secretary Barbara Castle shepherded the Equal Pay Act through parliament, with the promise that women would be paid as much as men when doing equivalent jobs. The political spark for the Act came from a famous strike by women at Ford’s Dagenham plant, and the moral case is self-evident.

The economics, however, looked worrisome. The Financial Times wrote a series of editorials praising “the principle” of equality but nervous about the practicalities. In September 1969, for example, an FT editorial observed that “if the principle of equal pay were enforced too rigorously, employers might often prefer to employ men”; and the day after the Act came into force on December 29 1975, the paper noted a new era “which many women may come to regret”.

The economic logic for these concerns is straightforward. Whether because of prejudice or some real difference in productivity, employers were willing to pay more for men than for women. That inevitably meant that if a new law artificially raised women’s salaries, women would struggle to find work at those higher salaries.

The law certainly did raise women’s salaries. Looking at the simple headline measure of hourly wages, women’s pay has gradually risen over the decades as a percentage of men’s, although it remains lower. Typically, this process of catch-up has been gradual but, between 1970 and 1975, the years when the Equal Pay Act was being introduced, the gap narrowed sharply.

Did this legal boost to women’s pay cause joblessness, as some feared? No. Women have steadily made up a larger and larger proportion of working people in the UK, and the Equal Pay Act seems to have had no impact on that trend whatsoever. If any effect can be discerned, it is that the proportion of women in the workforce increased slightly faster as the Act was being introduced; perhaps they were attracted by the higher salaries?

The lesson of all this is that the economy is complicated and textbook economic logic alone will get us only so far. The economist Alan Manning recently gave a public lecture at the London School of Economics, where he drew parallels between the Equal Pay Act and the minimum wage, pointing out that in both cases theoretical concerns were later dispelled by events.

The UK minimum wage took effect 16 years ago this week, on April 1 1999. As with the Equal Pay Act, economically literate commentators feared trouble, and for much the same reason: the minimum wage would destroy jobs and harm those it was intended to help. We would face the tragic situation of employers who would only wish to hire at a low wage, workers who would rather have poorly paid work than no work at all, and the government outlawing the whole affair.

And yet, the minimum wage does not seem to have destroyed many jobs — or at least, not in a way that can be discerned by slicing up the aggregate data. (One exception: there is some evidence that in care homes, where large numbers of people are paid the minimum wage, employment has been dented.)

The general trend seems a puzzling suspension of the law of supply and demand. One explanation of the puzzle is that higher wages may attract more committed workers, with higher morale, better attendance and lower turnover. On this view, the minimum wage pushed employers into doing something they might have been wise to do anyway. To the extent that it imposed net costs on employers, they were small enough to make little difference to their appetite for hiring.

An alternative response is that the data are noisy and don’t tell us much, so we should stick to basic economic reasoning. But do we give the data a fair hearing?

A fascinating survey reported in the most recent World Development Report showed World Bank staff some numbers and asked for an interpretation. In some cases, the staff were told that the data referred to the effectiveness of a skin cream; in other cases, they were told that the data were about whether minimum wages reduced poverty.

The same numbers should lead to the same conclusions but the World Bank staff had much more trouble drawing the statistically correct inference when they had been told the data were about minimum wages. It can be hard to set aside our preconceptions.

The principle of the minimum wage, like the principle of equal pay for women, is no longer widely questioned. But the appropriate level of the minimum wage needs to be the subject of continued research. In the UK, the minimum wage is set with advice from the Low Pay Commission, and it has risen faster than both prices and average wages. A recently announced rise, due in October, is well above the rate of inflation. There must be a level that would be counterproductively high; the question is what that level is.

And we should remember that ideological biases affect both sides of the political divide. In response to Alan Manning’s lecture, Nicola Smith of the Trades Union Congress looked forward to more ambition from the Low Pay Commission in raising the minimum wage “in advance of the evidence”, or using “the evidence more creatively”. I think British politics already has more than enough creativity with the evidence.

Written for and first published at ft.com.

Undercover Economist

The pricing paradox: when diamonds aren’t on tap

‘Diamonds are costly because we desire them. But what if that isn’t true? What if they are desirable because they are costly?’

A glass of water costs very little; a diamond costs a lot. Yet there is nothing more useful than water, while the most prized uses of diamonds are decorative. This apparent paradox has tested some fine minds. Adam Smith’s answer to the paradox was that diamonds were expensive because it was hard work to find them and dig them up. That seems to strike close to the truth but it’s not the way that modern economics approaches the problem.

The usual name for this puzzle is the “paradox of value” or “the water-diamond” paradox but I now prefer to call it the “Button Gwinnett paradox”. (I hadn’t heard of Button Gwinnett until his life was described in a recent episode of the WNYC radio programme Radiolab.) The British-born Gwinnett moved to the colony of Georgia in the mid-1700s. He was a failed businessman, a serial debtor and a B-list politician in the independence movement. But, as it happened, he was one of the 56 signatories of the Declaration of Independence.

Gwinnett might seem a minor figure compared to some of the other men whose names sit beside his: John Hancock, Thomas Jefferson, John Adams and Benjamin Franklin. Despite that, a Button Gwinnett signature is vastly more valuable than a Jefferson or a Franklin. The simple reason for this is that collectors naturally wish to own the complete set of 56 signatures. Ben Franklin lived into his eighties and was a prolific correspondent, so there is no shortage of Franklin signatures.

Gwinnett died in a duel the year after signing the Declaration of Independence. His signature was recently discovered on the parish register of St Peter’s Church in Wolverhampton, where he was married. Most of the other signatures he left behind were on IOUs.

Benjamin Franklin may have been one of the most remarkable human beings in history but when collecting your set of Independence signatures, it’s the Button Gwinnett that will prove the final piece of the jigsaw. Anyone selling a Gwinnett will find few other sellers and many eager buyers.

Which brings us back to water and diamonds. Diamonds are expensive because at the point at which the supply of diamonds dries up, there are plenty of buyers willing to pay handsomely, and they compete with each other. Water is cheap in temperate climes because after satisfying our demand for drinking and cooking, then for washing and for irrigation, and finally for swimming around in, there is still plenty left. The value of the first litres of water may be incalculably high but the marginal value of one more litre is very low, and it’s this value that sets the price.
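
A toy schedule of marginal values makes the logic concrete (the numbers are invented):

```python
# Invented marginal-value schedule for successive litres of water per day.
# The price is pinned down by the value of the last, least-valued litre,
# not by the enormous value of the first.
marginal_value = [1000.0, 100.0, 10.0, 1.0, 0.10, 0.01]  # $ per extra litre

litres_available = 6                       # water is plentiful
price = marginal_value[litres_available - 1]
total_value = sum(marginal_value[:litres_available])

print(f"price of a litre: ${price:.2f}")        # $0.01: set by the marginal litre
print(f"total value:      ${total_value:,.2f}") # $1,111.11: mostly the first litre
```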

Everything so far has assumed that our desire for an object — a diamond, a glass of water, a Button Gwinnett signature — is a given. Diamonds are costly because we desire them, and not the other way around. But what if that isn’t true? What if diamonds are desirable because they are costly?

The economist Thorstein Veblen coined the term “conspicuous consumption” to describe situations where an object is attractive merely because it is expensive. The designer watch or car is valuable because, like a peacock’s tail, it is a credible indicator that you have resources to spare. What was the point of spending so much on that diamond engagement ring otherwise?

Another possibility is “pricing bias”. If we don’t really know a good suit or a good bottle of wine from a bad one, we tend to use the price to give us a clue. This is not strictly logical — after all, anyone can double the asking price of anything they are selling, so price is not by itself a reliable clue to quality. But pricing bias exists. Studies show that people will rate a wine more highly in a taste test if they think it is expensive; even placebo painkillers are more effective if the patients believe they are costly new drugs rather than cheap ones.

. . .

The final word on this should go to a team led by Laurie Santos at Yale’s Comparative Cognition Laboratory. Santos has spent some time teaching capuchin monkeys how to use money, to exchange it for food and to understand the idea that food can have a price that is high or low. In recent work with Robin Goldstein of UC Davis, Santos’s team has been trying to figure out whether the monkeys also display pricing bias.

It seems not. After a series of trials where monkeys were allowed to buy cheap or expensive jelly and ice lollies, they were then let loose on a free buffet to see if they gravitated towards the once costly items. They didn’t; unlike humans, the monkeys couldn’t care less what the item typically cost. They liked what they liked. In this, they differ not only from humans but also from starlings: Alex Kacelnik and Barnaby Marsh, zoologists at Oxford, have found that starlings prefer more costly food.

My guess is that the monkeys would have little interest in a Button Gwinnett signature. And those glossy advertisements for diamonds and designer handbags? They are evidently far too sophisticated for capuchin tastes.

Written for and first published at ft.com.

Undercover Economist

Man v machine (again)

‘The Luddite anxiety has been dormant for many years but has recently enjoyed a resurgence’

I’m writing these words in York, the city in which, two centuries ago, the British justice system meted out harsh punishments — including execution — to men found guilty of participating in Luddite attacks on spinning and weaving machines. By a curious coincidence, I’ve just read Walter Isaacson’s article in the FT explaining how wrong-headed the Luddites were. I’m not so sure.

“Back then, some believed technology would create unemployment,” writes Isaacson. “They were wrong.”

No doubt such befuddled people did exist, and they still do today. But this is a straw man: we can all see, as Isaacson does, that technology has made us richer while employment is as high as ever. (The least appreciated job-creating invention may well have been the washing machine, which helped turn housewives into women with salaries.)

The Luddites themselves had a more subtle view than Isaacson suggests, and one which is as relevant as ever. They believed that the machines were altering economic power in the textile industry, favouring factory owners and low-skilled labourers at the expense of skilled craftsmen. They wanted to defend their interests and they did so violently. As the historian Eric Hobsbawm put it, their frame-breaking activity was “collective bargaining by riot” and “simply a technique of trade unionism” in the days before formal unions existed.

To put it another way, the Luddites weren’t idiots who thought that machines would destroy jobs in general; they were skilled workers who thought that machines would devalue their specific jobs and their specific skills. They were right about that, and sufficiently determined that stopping them required more than 10,000 troops at a time when the British army might have preferred to focus on Napoleon.

The Luddite anxiety has been dormant for many years but has recently enjoyed a resurgence. This is partly because journalists fear for their own jobs. Technological change has hit us in several ways — by moving attention online, where (so far) it is harder to charge money for subscriptions or advertising; by empowering unpaid writers to reach a large audience through blogging; and even by introducing robo-hacks, algorithms that can and do extract data from corporate reports and turn them into financial journalism written in plain(ish) English. No wonder human journalists have started writing about the economic damage the robots may wreak.

Another reason for the robo-panic is concern about the economic situation in general. Bored of blaming bankers, we blame robots too, and not entirely without reason. Inequality has risen sharply over the past 30 years. Many economists believe that this is partly because technological change has favoured a few highly skilled workers (and perhaps also more mundane trades such as cleaning) at the expense of the middle classes.

Finally, there is the observation that computers continue to develop at an exponential pace and are starting to make inroads in hitherto unexpected places — witness the self-driving car, voice-activated personal assistants and automated language translation. It is a long way from the spinning jenny to Siri.

What are we to make of all this? One view is that this is business as usual. We’ve had dramatic technological change for the past 300 years but it’s fine: we adapt, we still have jobs, we are incomparably richer — and the big headache of modernity isn’t unemployment but climate change.

A second view is that this time is radically different: the robots will, before long, render many people economically valueless — simply incapable of earning a living wage in a market economy. There will be plenty of money around but it will flow to the owners of the machines, and maybe also to the government through taxation. In principle, all could be well in such a future but it would require a radical reimagining of how an economy could work. The state, not the market, would be the arbiter of who gets what. Such a world is probably not imminent but, by 2050, who knows?

 . . . 

The third perspective is what we might call the neo-Luddite view: that technology may not destroy jobs in aggregate but rather changes the demand for skills in ways that are real and troubling. Median incomes in the US have been stagnant for decades. There are many explanations for that, including globalisation and the decline of collective bargaining, but technological change is foremost among them.

If the neo-Luddites are right, then the challenge in front of us is simply to adapt. Individual workers, companies and the political system will have to deal with wrenching economic changes as old industries are destroyed and new ones created. That seems a plausible view of the near future.

But there is a final perspective that doesn’t get as much attention as it might: it’s that technological change is too slow, not too fast. The robo-booster theory implies a short-term surge in jobs, as all those lovely new machines are designed and built and installed, followed by a long-term surge in productivity as the robots make the economy ruthlessly efficient. It is hard to see much sign of either trend in the economic statistics. Productivity, in particular, has been disappointing in the US and utterly dismal in the UK. Where are the robots when we need them?

Written for and first published at ft.com.

Undercover Economist

Boom or bust for bitcoin?

Bitcoin appeals to libertarians on the basis that governments cannot arbitrarily make more of it

In a moment, I’ll gaze into the crystal ball and foretell the future of the world’s most famous cryptocurrency, bitcoin. I should first explain what’s happening now.

Bitcoin was developed in 2008 by an unknown programmer or programmers. Confusingly, it is both a payment technology and a financial asset. The asset called bitcoin has no intrinsic value but it has a market price that fluctuates wildly. Like digital gold, it appeals to libertarians on the basis that governments cannot arbitrarily make more of it.

The payment technology called bitcoin is what you might get if you ran the Visa network over a peer-to-peer network of computers. In case that description doesn’t help, it’s a way of sending money anywhere in the world but instead of relying on the authority of a financial intermediary such as Visa or Western Union, it uses a decentralised network to verify that the transaction has occurred. The record of all previous transactions is called the blockchain; it, too, is stored on a decentralised network. The entire process relies on cryptographic techniques to prevent fraud, which is why bitcoin and other currencies like it are called cryptocurrencies.
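
For the technically minded, the heart of the idea fits in a few lines of Python. This is a minimal sketch of a hash chain, not bitcoin’s actual data format; real cryptocurrencies add proof-of-work, digital signatures and peer-to-peer replication on top:

```python
# Each block commits to the hash of its predecessor, so rewriting any old
# transaction breaks every later link in the chain.
import hashlib
import json

def make_block(transactions, prev_hash):
    block = {"transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
block2 = make_block(["bob pays carol 2"], prev_hash=genesis["hash"])

# Tampering with history changes the old block's hash and orphans its heirs:
genesis["transactions"][0] = "alice pays bob 500"
recomputed = hashlib.sha256(json.dumps(
    {"transactions": genesis["transactions"],
     "prev_hash": genesis["prev_hash"]},
    sort_keys=True).encode()).hexdigest()
print(recomputed == block2["prev_hash"])  # False: the chain is broken
```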

This may all seem very esoteric but the internet was esoteric once and it turns out to have become important. So what lies ahead for bitcoin?

Here’s one scenario.

Bitcoin has enjoyed many booms and busts in value, and later in 2015, the price surges again. This will be the biggest yet, drawing more and more people into the market. As the dotcom bubble and railway mania proved, even revolutionary technologies can be overvalued; with bitcoins selling for $2,000, $5,000 and eventually $10,000 each, nemesis is around the corner.

The first sign of trouble will be the scams. A recent research paper by computer scientists Marie Vasek and Tyler Moore identified almost 200 bitcoin scams, in which about 13,000 victims lost $11m. Such scams will only become more common as the stakes become higher and the pool of naive investors deeper. Soon they will be the stuff of mainstream consumer rights phone-ins.

Arguably, scams are a sign that bitcoin has matured — after all, nobody proposes abandoning the dollar because con artists like to be paid in dollars. But they are just a foretaste of what is to come — bitcoin will be gutted by predatory monopolists.

The bitcoin system has always relied on a crowd of people putting their computers to work verifying transactions and writing them into the blockchain, a task which costs money and energy. In a rather confusing analogy with gold, these people are called “miners” and they are compensated in bitcoins, of course. Yet there is a basic inconsistency at the heart of this system, as the economist Kevin Dowd has observed: bitcoin mining needs to be done by a decentralised crowd but is more efficiently done by large arrays of computers owned by a few players. Or possibly just a single one.

Even today, bitcoin mining is a game for the big boys. As the bitcoin mining industry becomes a tight, self-serving oligopoly, the stage is set for bitcoin counterfeiting on a massive scale. In 2018, 10 years after the invention of bitcoin, the system collapses under the weight of its own contradictions.

It’s an intriguing story — but of course, it is just a story. We could give it a name: “BitCon”.

. . .

If you don’t believe that, I have another story for you. The title is “Daisy Chains”. Throughout 2015 and 2016, the price of bitcoins continues to collapse. Speculators lose interest and some of the big miners sell off their computers at a heavy loss. The spotlight moves elsewhere but the true believers in the power of decentralised blockchain processing continue to develop the system.

Bitcoins aren’t the only things that can be transferred using a peer-verified network, after all — you could transfer the digital lock to a smart car; or a financial contract, with pay-offs and penalties automatically adjudicated and paid for by the blockchain. The question is whether the effort of doing all this is more efficient than the current centralised systems using interbank payments.

The answer is yes but only in certain circumstances. A blockchain is a ledger of every digital transaction ever made on the system. This proves far too unwieldy for a universal means of payment. Yet specialised niche systems evolve: by 2018, blockchain processing is common for remittances; by 2019, blockchain processing pays for and controls self-driving taxis. You can even download an out-of-the-box blockchain app for your local babysitting circle — or your prostitution ring. Blockchain approaches don’t replace Western Union and Visa everywhere but they squeeze margins and make inroads for certain applications.

The only disappointment for the true bitcoin enthusiasts is that bitcoin itself, the currency that started it all, fails to catch on. Most people prefer a trusted brand. When a standard of value is used on these disparate blockchain processes, the most popular by far is “FedCoin” — more commonly known by its correct name, the US dollar.

Two stories about the future, and most likely neither one will come true. These are interesting times for cryptocurrencies.

Written for and first published at ft.com.

Undercover Economist

Battle for the web’s ‘last mile’

The fact that a few large players have such influence over vital services should make us all queasy

The cable companies who own the wires that plug us into the internet – particularly the “last mile” along your street and into your house – have a great deal of market power. Small wonder, then, that the notion of “net neutrality” is appealing: the term is usually taken to mean that all data transmitted over the internet should be treated equally. After all, why should Google get a zippier connection than a small rival? Why should Netflix have to pay an additional fee to Big Cable when customers have already paid handsomely to be connected?

Advocates of net neutrality won a famous victory a few weeks ago, when the US Federal Communications Commission announced plans to regulate cable companies as utilities. The aim of this was to enforce net neutrality rules after a vibrant grass-roots campaign.

Small wonder that the campaign became so popular. The idea that cable companies could partition the internet into slow lanes and fast lanes is infuriating. Customers have already paid for access, and they don’t take kindly to the prospect of “throttling” – deliberately degrading a service to extort money from content providers.

This kind of product sabotage is far older than the internet itself. The French engineer and economist Jules Dupuit wrote back in 1849 that third-class railway carriages had no roofs, not to save money but to “prevent the passengers who can pay the second-class fare from travelling third class”. Throttling, 19th-century style.

But imagine that a law was introduced stipulating “railway neutrality” – that all passengers must be treated equally. That might not mean a better deal for poorer passengers. We might hope that everyone would ride in comfort at third-class prices, and that is not impossible. But a train company with a monopoly might prefer to operate only the first-class carriages at first-class prices. Poorer passengers would get no service at all. Product sabotage is infuriating but the alternative – a monopolist who screws every customer equally – is not necessarily preferable.

It is easy to think of outrageous scenarios in which a cable company might exploit market power – favouring campaign videos from politicians who do its bidding, or shutting down rivals who pose a competitive threat.

But it is also easy to think of good reasons to treat different kinds of content differently. An online back-up service for big data sets might prefer a discount for a connection that will run only at quieter times of day. Stream the World Cup final and you’ll want to guarantee uninterrupted coverage; sell the highlights as a download and you might accept a cheaper, more volatile connection if it saves money.

With a mandatory uniform price, the online back-up might be too expensive to operate, the live stream too slow to satisfy customers, and the video download might get a faster connection than it really needs. (There is a formal economic model of this effect courtesy of Benjamin Hermalin and Michael Katz but it seems intuitive to me.)
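
A toy calculation, emphatically not the Hermalin–Katz model itself, gives the flavour (all numbers invented):

```python
# Three uses of a connection value it differently; compare differentiated
# tiers with one mandatory one-size-fits-all tier. All numbers invented,
# and "buy the best affordable tier" is a simplifying assumption.
tier_prices = [1.0, 3.0, 6.0]   # $ per month: cheap/slow, standard, fast
uniform_price = 4.0             # single mandatory "neutral" tier, $ per month
willingness = {
    "overnight back-up": 2.0,   # happy with the slow, off-peak service
    "video download":    4.0,   # standard service is plenty
    "live stream":       8.0,   # needs the fast guaranteed connection
}

for use, w in willingness.items():
    affordable = [p for p in tier_prices if p <= w]
    tiered = f"${max(affordable):.0f} tier" if affordable else "priced out"
    uniform = "buys" if uniform_price <= w else "priced out"
    print(f"{use:18s} tiered: {tiered:10s} uniform: {uniform}")
# With tiers, each use finds a connection that covers its cost; with one
# price, the back-up is priced out and everyone else gets identical service
# whether it suits them or not.
```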

What about the idea that customers have already paid for their internet content, so cable companies shouldn’t be able to demand cash from content providers too? That is not how things work elsewhere. In a shopping mall, customers enter for free and retailers pay to be there. (They pay very different rents, too.) At an industry convention, both the delegates and the exhibitors will pay. There is nothing sacred about the idea that one side of the market pays nothing. Customers may even benefit if content providers must pay, since then the cable company might wish to slash prices to attract them and increase its leverage with the content providers.

Should all content providers be able to connect free of charge? This may not be the best rule for consumers nor the best way to promote innovation. The best defence of such a rule is that it seems to have worked well in the past and, with so much at stake, a change would be risky – not a terrible argument but hardly cast-iron.

Nevertheless I am grateful to the advocates of net neutrality, because they have brought into sharp focus the importance of market power on the internet – both of content providers such as Google and Facebook, and the cable companies who connect us to them. The ability to connect to the internet has become a basic part of living a full economic, social and political life. We use the internet to make our voices heard, to spend money, to access services, to find out the news, to connect with our friends. Increasingly our fridges, cars and pacemakers will use it too. The fact that a few very large players have such influence over such vital services should make us all queasy.

Fast lanes and slow lanes are a symptom of this market power but the underlying cause is much more important. The US needs more internet service providers, and the obvious way to get them is to force cable companies to unbundle the “last mile” and lease it to new entrants.

Alas, in the celebrated statement announcing a defence of net neutrality, the FCC also specifically ruled out taking that pro-competitive step. The share prices of cable companies? They went up.

Written for and first published at ft.com.

Undercover Economist

Overconfidence man

We don’t have a good sense of our own fallibility. Checking my answers, it was the one I felt the most certain of that I got wrong

In 1913 Robert Millikan published the results of one of the most famous experiments in the history of physics: the “oil drop” experiment that revealed both the electric charge on an electron and, indirectly, the mass of the electron too. The experiment led in part to a Nobel Prize for Millikan but it is simple enough for a schoolchild to carry out. I was one of countless thousands who did just that as a child, although I found it hard to get my answers quite as neat as Millikan’s.

We now know that even Millikan didn’t get his answers quite as neat as he claimed he did. He systematically omitted observations that didn’t suit him, and lied about those omissions. Historians of science argue about the seriousness of this cherry-picking, ethically and practically. What seems clear is that if the scientific world had seen all of Millikan’s results, it would have had less confidence that his answer was right.

This would have been no bad thing, because Millikan’s answer was too low. The error wasn’t huge — about 0.6 per cent — but it was vast relative to his stated confidence in the result. (For the technically minded, Millikan’s answer is several standard deviations away from modern estimates: that’s plenty big enough.)

There is a lesson here for all of us about overconfidence. Think for a moment: how old was President Kennedy when he was assassinated? How high is the summit of Mount Kilimanjaro? What was the average speed of the winner of last year’s Monaco F1 Grand Prix? Most people do not know the exact answers to these questions but we can all take a guess.

Let me take a guess myself. JFK was a young president but I’m pretty sure he was over 40 when elected. I’m going to say that when he died he was older than 40 but younger than 60. I climbed Kilimanjaro many years ago and I remember it being 6,090-ish metres high. Let’s say, more than 6,000m but less than 6,300m. As for the racing cars, I think they can do a couple of hundred miles an hour but I know that Monaco is a slow and twisty track. I’ll estimate that the average speed was above 80mph but below 150mph.

Psychologists have conducted experiments asking people to answer such questions with upper and lower bounds for their answers. We don’t do very well. Asked to produce wide margins of error, such that 98 per cent of answers fall within that margin, people usually miss the target 20-40 per cent of the time; asked to produce a tighter margin, such that half the answers are correct, people miss the target two-thirds of the time.

We don’t have a good sense of our own fallibility. Despite the fact that I am well aware of such research, when I went back to check my own answers, it was the one I felt most certain of that I got wrong: Kilimanjaro is just 5,895m high. It seemed bigger at the time.
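
For the technically minded, the scoring is easy to mechanise. Here is a sketch using my three guesses, with the true values as best I know them (the Monaco speed is approximate):

```python
# Score a set of interval guesses against the truth: a well-calibrated
# forecaster aiming for 90% intervals should be right about 9 times in 10.
guesses = [
    # (low, high, truth)
    (40, 60, 46),        # JFK's age at death
    (6000, 6300, 5895),  # height of Kilimanjaro, metres (from the column)
    (80, 150, 89),       # avg speed of the 2014 Monaco GP winner, mph (approx.)
]
hits = sum(low <= truth <= high for low, high, truth in guesses)
print(f"hit rate: {hits}/{len(guesses)}")  # 2/3 -- the Kilimanjaro miss
```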

But there’s another issue here. The charismatic Nobel laureate Richard Feynman pointed out in the early 1970s that the process of fixing Millikan’s error with better measurements was a strange one: “One is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher. Why didn’t they discover the new number was higher right away?”

What was probably happening was that whenever a number was close to Millikan’s, it was accepted without too much scrutiny. When a number seemed off it would be viewed with scepticism and reasons would be found to discard it. And since Millikan’s estimate was too low, those suspect measurements would typically be larger than Millikan’s. Accepting them was a long and gradual process.
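
That gradual acceptance is easy to mimic in a toy model (my construction, not a historical reconstruction): suppose each year’s measurement is unbiased, but the published figure is shaded towards the previous consensus.

```python
# Each new measurement is centred on the truth, yet published values creep
# towards it rather than jumping, because each is anchored on the last.
import random
random.seed(1)

truth = 100.0
published = 94.0       # start anchored too low, Millikan-style
anchoring = 0.7        # weight placed on the previous published value

for year in range(10):
    measurement = truth + random.gauss(0, 1.0)
    published = anchoring * published + (1 - anchoring) * measurement
    print(f"year {year}: published {published:.1f}")
# The published series rises towards 100 only gradually.
```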

Feynman added that scientists have learnt their lesson and don’t make such mistakes any more. Perhaps that’s true, although a paper published by the decision scientists Max Henrion and Baruch Fischhoff, almost 15 years after Feynman’s lecture, found that same pattern of gradual convergence in other estimates of physical constants such as Avogadro’s number and Planck’s constant. From the perspective of the 1980s, convergence continued throughout the 1950s and 1960s and sometimes into the 1970s.

Perhaps that drift continues today even in physics. Surely it continues in messier fields of academic inquiry such as medicine, psychology and economics. The lessons seem clear enough. First, to be open to ourselves and to others about the messy fringes of our experiments and data; they may not change our conclusions but they should reduce our overconfidence in those conclusions. Second, to think hard about the ways in which our conclusions may be wrong. Third, to seek diversity: diversity of views and of data-gathering methods. Once we look at the same problem from several angles, we have more chances to spot our errors.

But humans being what they are, this problem isn’t likely to go away. It’s very easy to fool ourselves at the best of times. It’s particularly easy to fool ourselves when we already think we have the answer.

Written for and first published at ft.com.

Undercover Economist

Is it possible to just click with someone?

‘Whether the computer reckons you’re a love match or not isn’t something that anyone should take seriously’

I’ve occasionally wondered whether the secret to love is mathematics, and I’m not the only one. Mathematics is full of perky ideas about matching or sorting that have a veneer of romantic promise. But for all their beauty and cleverness, one often feels that such ideas are a far better introduction to mathematics than they are to dating and mating.

Consider the Gale-Shapley algorithm, which dates from 1962 but won Lloyd Shapley a Nobel Memorial Prize in economics just a couple of years ago. The algorithm is a way of assigning matching pairs in a stable way. By “stable”, we mean that no two people would do better ignoring the algorithm and instead making a side-arrangement with each other. The Gale-Shapley algorithm can be used for matching students to university places, or kidney donors to kidney recipients. However, it is most famously described as a way of allocating romantic partners. It is, alas, ill suited to this task, since it skips over the possibility of homosexuality, bisexuality, polyamory or even something as simple as divorce. (1962 is on the phone . . . it wants its algorithm back.)
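
For the curious, the algorithm itself is only a few lines. A compact sketch with invented names (proposers work down their preference lists; reviewers trade up when a better offer arrives, and the result is stable):

```python
# Gale-Shapley stable matching: no pair would rather elope with each other
# than keep their assigned partners.
def gale_shapley(proposer_prefs, reviewer_prefs):
    # Reviewers' rankings of proposers; lower index means more preferred.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)           # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                            # reviewer -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # p's best not-yet-tried reviewer
        next_choice[p] += 1
        if r not in match:
            match[r] = p                  # reviewer provisionally accepts
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])         # reviewer trades up; old partner freed
            match[r] = p
        else:
            free.append(p)                # rejected; p tries the next choice
    return match

proposers = {"ann": ["xavier", "yves"], "bea": ["yves", "xavier"]}
reviewers = {"xavier": ["bea", "ann"], "yves": ["ann", "bea"]}
print(gale_shapley(proposers, reviewers))  # {'yves': 'bea', 'xavier': 'ann'}
```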

But if pure mathematics cannot help, surely statistics can? Internet dating promises to move us away from abstractions to the more gritty reality of data. Simply type in everything you have to offer, in great detail, and let the computer algorithm find your match. What could be simpler or more efficient?

Perhaps we should be a little cautious before buying into the hype. After all, such promises have been made before. The journalist Matt Novak has unearthed an article from 1924’s Science and Invention magazine in which the magazine’s publisher Hugo Gernsback explained that humans would soon enjoy the same scientific matchmaking approach then lavished on horses. The science included the “electrical sphygmograph” (it takes your pulse) and a “body odor test” (sniffing a hose attached to a large glass capsule that contains your beau or belle).

Then, in the 1960s, enterprising Harvard students set up “Operation Match”. It was a matchmaking service powered by a punch-card IBM computer. Despite breathless media coverage, this was no more scientific than Gernsback’s sphygmograph. According to Dan Slater’s Love in the Time of Algorithms, the men who founded Operation Match were hoping for the first pick of the women themselves.

One subscriber expressed the advantages and limitations of digital dating very well: “I approve of it as a way to meet people, although I have no faith in the questionnaire’s ability to match compatible people.”

Quite so. Operation Match was a numbers game in the crudest sense. It was an easy way to reach lots of nearby singles. There should be no pretence that the computer could actually pair up couples who were ideally suited to each other.

Perhaps we simply need more data? OkCupid, a dating site with geek appeal and a witty, naughty tone, allows you to answer thousands of questions: anything from “Do you like the taste of beer?” to “Would you ever read your partner’s email?” Users typically answer several hundred such questions, as well as indicating what answer they would hope for from a would-be date, and how important they feel the question is.
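
To see how such scoring might work, here is one plausible scheme, loosely inspired by what dating sites have described publicly and not necessarily OkCupid’s actual formula: score how well each person’s answers satisfy the other’s stated preferences, weight by importance, and combine the two directions.

```python
# A simplified two-way match score. Each person gives, per question:
# (own answer, answer they'd accept from a match, importance).
from math import sqrt

IMPORTANCE = {"irrelevant": 0, "a little": 1, "somewhat": 10, "very": 50}

def satisfaction(asker, answerer):
    earned = possible = 0
    for q, (own, wanted, weight) in asker.items():
        pts = IMPORTANCE[weight]
        possible += pts
        if answerer[q][0] == wanted:
            earned += pts
    return earned / possible if possible else 0.0

def match(a, b):
    # Geometric mean, so one disappointed side drags the score down.
    return sqrt(satisfaction(a, b) * satisfaction(b, a))

alice = {"likes beer?": ("yes", "yes", "somewhat"),
         "reads partner's email?": ("no", "no", "very")}
bob   = {"likes beer?": ("yes", "yes", "a little"),
         "reads partner's email?": ("yes", "no", "very")}
print(f"match: {match(alice, bob):.0%}")  # 41%: bob's snooping costs him
```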

Again, media reaction has been credulous. Every now and then we hear of nerds who are living the dream, playing OkCupid’s algorithms with such virtuosity that love is theirs to command. Wired magazine introduced us to Chris McKinlay, “the math genius who hacked OkCupid”. McKinlay, we are told, downloaded a dataset containing 20,000 women’s profiles and six million questionnaire answers, optimised his own profile and unleashed an army of software bots to draw women in. He was a data-driven love-magnet.

But OkCupid’s own research suggests this is all rather futile. In one controversial experiment, it took a collection of pairs of users who were a poor match, according to the OkCupid algorithm — and then told them instead that they were highly compatible. One might expect that these not-really-compatible couples would find that their conversations quickly fizzled. In fact, they did scarcely less well than couples where the algorithm genuinely predicted a match. In short, whether the computer reckons you’re a love match or not isn’t a piece of information that anyone should take seriously.

. . .

Hannah Fry, author of The Mathematics of Love, expresses the problem neatly. The algorithm, she says, “is doing exactly what it was designed to do: deliver singles who meet your specifications. The problem here is that you don’t really know what you want.”

Quite so. The list of qualities that we might want in a partner — “fascinating, sexy, fun, handsome, hilarious” — is a poor match for the list of qualities one could share with a computer database — “likes beer, boardgames, Malcolm Gladwell and redheads”. If the computer cannot pose the right questions it is hardly likely to produce the right answers.

As for Chris McKinlay, no doubt we all wish him well. He announced his engagement to Christine Tien Wang — the 88th woman he met in person after spending months in the middle of a perfect dating storm. His experience suggests that just as with Operation Match, the matching process is nonsense and the secret to finding love is to date a lot of people.

Written for and first published at ft.com.

Undercover Economist

Why the high street is overdosing on caffeine

‘If Starbucks opens a café just round the corner from another Starbucks, is that really about selling more coffee?’

“New Starbucks Opens in Restroom of Existing Starbucks”, announced The Onion, satirically, in 1998. It was a glimpse of the future: there were fewer than 2,000 Starbucks outlets back then and there are more than 21,000 now. They are also highly concentrated in some places. Seoul has nearly 300 Starbucks cafés, London has about 200 — a quarter of all the Starbucks outlets in the UK — and midtown Manhattan alone has 100. It raises the question: how many Starbucks shopfronts are too many?

Such concerns predate the latte boom. In the late 1970s, Douglas Adams (also satirically) posited the Shoe Event Horizon. This is the point at which so much of the retail landscape is given over to shoe shops that utter economic collapse is inevitable.

And in 1972, the US Federal Trade Commission issued an entirely non-satirical complaint against the leading manufacturers of breakfast cereal, alleging that they were behaving anti-competitively by packing the shelves with frivolous variations on the basic cereals. That case dragged on for years before eventually being closed down by congressional action.

The intuition behind these complaints is straightforward. If Starbucks opens a café just round the corner — or in some cases, across the road — from another Starbucks, could that really be about selling more coffee, or is it about creating a retail landscape so caffeinated that no rival could survive? Similarly, the arrival on the supermarket shelves of Cinnamon Burst Cheerios might seem reasonable enough, were they not already laden with Apple Cinnamon Cheerios and Cheerios Protein Cinnamon Almond and 12 other variants on the Cheerios brand.

Conceptually, there is little difference between having outlets that are physically close together and having products that differ only in subtle ways. But it is hard to be sure exactly why a company is packing its offering so densely, at the risk of cannibalising its own sales.

A crush of products or outlets may arise because apparently similar offerings reflect differences that matter to consumers. I do not much care whether I am eating Corn Flakes or Shreddies — the overall effect seems much the same to me — but others may care very much indeed. It might well be that in midtown Manhattan, few people will bother walking an extra block to get coffee, so if Starbucks wants customers it needs to be on every corner.

But an alternative explanation is that large companies deliberately open too many stores, or launch too many products, because they wish to pre-empt competitors. Firms could always slash prices instead to keep the competition away but that may not be quite as effective — a competitor might reasonably expect any price war to be temporary. It is less easy to un-launch a new product or shut down a brand-new outlet. A saturated market is likely to stay saturated for a while, then, and that should make proliferation a more credible and effective deterrent than low prices.
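
A stylised payoff comparison (my numbers, purely illustrative) captures the credibility point:

```python
# Why proliferation deters entry while a price cut does not: outlets are
# hard to reverse, prices are not, so only the outlets change what the
# entrant expects to face after entering. Numbers invented.
entrant_profit_if_market = {"spacious": 4, "saturated": -2}

# Price war: the incumbent can re-raise prices the day after entry, so the
# entrant still expects a spacious market once it is in.
print("after a price cut:   enter?",
      entrant_profit_if_market["spacious"] > 0)    # True -> deterrence fails

# Proliferation: the extra outlets will still be there after entry.
print("after proliferation: enter?",
      entrant_profit_if_market["saturated"] > 0)   # False -> entry deterred
```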

A recent paper by two economists from Yale, Mitsuru Igami and Nathan Yang, studies this question in the market for fast-food burgers. Igami and Yang used old telephone directories to track the expansion of the big burger chains into local markets across Canada from 1970 to 2005. After performing some fancy analysis, they concluded that big burger chains did seem to be trying to pre-empt competition. If Igami and Yang’s model is to be believed, McDonald’s was opening more outlets, more quickly than would otherwise have been profitable.

It is the consumer who must ultimately pay for these densely packed outlets and products. But perhaps the price is worthwhile. The econometrician Jerry Hausman once attempted to measure the value to consumers of Apple Cinnamon Cheerios. He concluded that it was tens of millions of dollars a year — not much in the context of an economy of $17tn a year, but not nothing either. Perhaps competitors were shut out of the market by Apple Cinnamon Cheerios but that doesn’t mean that consumers didn’t value them.

 . . . 

It may be helpful to consider what life would be like if every café, cereal brand or fast-food joint were owned by a separate company. Steven Salop, an economist at Georgetown University, produced an elegant economic analysis of this scenario in 1979. He found that even a market full of independents will seem a little too crowded. This is because firms will keep showing up and looking for customers until there is not enough demand to cover their costs. The last entrepreneur to enter is the one that just breaks even, scraping together enough customers to pay for the cost of setting up the business. She is indifferent to whether she is in business or doing something else entirely. However, every other entrepreneur in the crowded market is wishing that she had stayed away.
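
The textbook version of Salop’s model can be put in numbers. With a fixed cost F per shop and a travel cost t for customers spread around a circle, free entry delivers √(t/F) shops, while total costs are minimised with half that many; a quick check (parameter values invented):

```python
# Free entry vs cost-minimising entry in Salop's circular city
# (standard textbook comparative statics; parameter values invented).
from math import sqrt

t = 1.0    # travel cost per unit distance around the circle
F = 0.01   # fixed cost of opening one shop

n_free = sqrt(t / F)        # zero-profit number of shops under free entry
n_best = 0.5 * sqrt(t / F)  # number minimising fixed costs plus travel costs

print(f"free entry:   {n_free:.0f} shops")   # 10
print(f"cost-minimum: {n_best:.0f} shops")   # 5 -- half as many
```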

Whether the products are shoes or cereal, lattes or cheeseburgers, markets will often seem wastefully crowded. That perception is largely an illusion, but not entirely. In big city markets, there really are too many cereals, too many cafés and too many fast-food restaurants. But even if they were all mom-and-pop independents, that might still be true.

Written for and first published at ft.com.
