Undercover Economist

How to fuel a rewarding culture

‘Money matters, but sometimes we find financial incentives to be insulting or grubby’

Here’s an age-old management conundrum: who should be rewarded for high performance, and how? As Diane Coyle, the economist and former adviser to the UK Treasury, recently observed in this newspaper, the answer to the question is usually self-serving. Simple and easily monitored jobs, such as flipping burgers, are natural candidates for performance incentives. Yet somehow it’s the inhabitants of the C-suite who tend to pick up bonuses, despite the fact that their complex, hard-to-measure jobs are poorly suited to the crude nature of performance-related pay.

But let’s assume that managers really want to answer the question. The answer is deliciously complex. Money matters, but sometimes we find financial incentives to be insulting or grubby. And we can respond keenly to non-financial rewards such as praise, status or the satisfaction of a job well done.

So managers might try running experiments to see what works in a particular situation. There is a long tradition of this, going back to Harvard professor Elton Mayo’s productivity trials at Western Electric’s Hawthorne works in the 1920s and early 1930s.

The Hawthorne experiments themselves, alas, were flawed and have been mythologised. But more modern experiments are revealing some intriguing results. I reported a few years ago on the curious alliance between “Farmer Smith”, owner of a large British fruit farm, and three economists, Oriana Bandiera, Iwan Barankay and Imran Rasul. Bandiera and her colleagues designed and tested different incentive schemes on Farmer Smith’s farms. (The deal: he got higher productivity; they got the data.)

The fruit farm experiments show that financial incentives do matter, at least for casual immigrant labour on fruit farms. First, a piece-rate scheme boosted productivity by 50 per cent; then, performance pay for the front-line managers ensured that work was no longer assigned as a favour to friends, and productivity increased another 20 per cent; then, a tournament encouraged workers to sort themselves into productive teams, and productivity increased by a further 20 per cent.

In another study by Bandiera (with Nava Ashraf and Kelsey Jack), hair stylists in Zambia’s capital Lusaka were recruited to sell condoms and give advice on HIV prevention. Celebrating the top performers at a public ceremony proved a far better approach than offering financial incentives to sell more condoms.

But sometimes neither a public ceremony nor a financial incentive is appropriate. Consider the case of long-haul airline captains. Unlike part-time condom agents or fruit pickers, these senior pilots have high-status, six-figure salaries and powerful unions to defend their pay and conditions. Nevertheless, a recent experiment conducted by Greer Gosnell, John List and Robert Metcalfe examines what can be done to influence the behaviour of these star players.

Gosnell, List and Metcalfe teamed up with a commercial airline that wanted to encourage captains to save fuel. Broadly, there are three ways to do this: before take-off, by carefully calculating fuel requirements; after landing, by switching off some engines while taxiing; and during the flight, by carefully adjusting the flap settings and negotiating the most efficient altitude, speed and course with air traffic control. The airline’s own data suggested that captains could potentially save 3 to 6 per cent on fuel — a substantial financial and environmental gain. But how to incentivise them?

Gosnell, List and Metcalfe designed an experiment that did not rely on paying bonuses. Instead, the captains were told that their company was running an experiment with the aim of saving fuel, and that the researchers would maintain anonymity for all the captains. There would be no financial incentives and no league tables.

Instead, the captains were split randomly into four groups. The “information” group received monthly feedback reports detailing how often they had saved fuel before, during and after each flight. The “target” group received the same reports but were also set targets to improve their performance. (The reward for hitting the target was a hearty “well done!”) The “incentives” group were told that for each target they hit, £10 would be donated to the charity of their choice — a total donation of £240 was possible if all three targets were hit across the eight months of the study. A control group was simply told that a study into fuel efficiency was taking place.

The most obvious outcome was that there was a large and lasting “observer effect”. Merely telling captains that the experiment was happening prodded them into being more careful and saving a lot of fuel. It is always possible that the sudden switch to fuel-saving behaviour had a cause that was nothing to do with the experiment but there are no apparent alternative explanations.

The second outcome was that all three treatments saved fuel compared with the control group but setting targets (with or without the charitable donation) had a particularly notable effect. And the third outcome was that captains who hit their targets were substantially more satisfied with their jobs.

“I just couldn’t believe the impact we had on job satisfaction,” says Metcalfe. Far from annoying the captains, the fact that the company was taking an interest in fuel saving, and acknowledging success, seemed to delight them.

No performance scheme will fit every occasion but the fuel-saving study does suggest an approach worth trying more broadly. If you want people to do a good job, tell them what success looks like to you — and that you’ve noticed when they’ve achieved it.

Written for and first published at ft.com.

Undercover Economist

The dubious power of power poses

‘Many notable results in psychology are now being questioned because later research has reached different conclusions’

Imagine that a group of researchers set out to explore the idea that adopting a “power pose” could make a real difference to how we thought and acted. High-power poses include standing with hands on hips, feet planted confidently apart, or lounging back in a chair with feet on table and hands behind head; low-power poses include slumped shoulders and folded arms.

The researchers asked 200 people to adopt such poses, then tested the levels of two hormones in their saliva: testosterone, associated with high status, and cortisol, associated with stress.

The astonishing findings? Well, actually, there were no astonishing findings: the power poses seemed to make no difference worth mentioning. High-power poses were correlated with slightly lower testosterone and slightly higher cortisol — the opposite of what might have been expected, but the differences were tiny and statistically indistinguishable from chance.

Now imagine that a second group of researchers re-examined the same hypothesis. There were some small variations, and the study was smaller (42 participants). The new study did produce remarkable findings: high-power poses boosted testosterone and lowered cortisol. Low-power poses had the opposite effect. The scale of the effect was described as “a whopping significant difference” by one of the researchers — more formally, the effects were both practically large and statistically significant. (Statistical significance is a test of whether the result might easily have been a fluke; it’s possible to have small but statistically significant results, or large but statistically insignificant results.)
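
For readers who want to see the mechanics, here is a minimal Python sketch, using simulated numbers rather than anything from either hormone study. It shows why sample size matters: a tiny effect can clear the bar of statistical significance in a large sample, while a much bigger apparent effect can fail to clear it in a sample of 42.

```python
# Illustration with simulated data only, not the numbers from either study.
# Statistical significance depends on sample size as well as effect size:
# a tiny effect can be "significant" in a big sample, while a large apparent
# effect can be statistically indistinguishable from chance in a small one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Tiny true effect (0.05 standard deviations), 20,000 people per group.
big_a = rng.normal(0.00, 1.0, 20_000)
big_b = rng.normal(0.05, 1.0, 20_000)
print(stats.ttest_ind(big_a, big_b).pvalue)      # typically far below 0.05: "significant"

# Much larger true effect (0.5 standard deviations), 21 people per group.
small_a = rng.normal(0.0, 1.0, 21)
small_b = rng.normal(0.5, 1.0, 21)
print(stats.ttest_ind(small_a, small_b).pvalue)  # can easily land above 0.05
```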

Faced with both these research findings, published in reputable journals, what should we think? The natural conclusion is that the second study was a fluke, and that standing in a bold pose for a couple of minutes makes no difference to hormone levels. Being open-minded people, we might also be intrigued by the faint possibility that the second study had uncovered a genuine and important result.

This is a hypothetical scenario, I should emphasise. It hasn’t happened. The studies did take place but not in this order. The smaller study was conducted by Amy Cuddy of Harvard and Dana Carney and Andy Yap of Columbia. It inspired a book, and a TED talk that has been watched 34 million times. The larger study was conducted by a team led by Eva Ranehill. But the smaller Cuddy-Carney-Yap study didn’t come second; it was conducted first. The Ranehill team’s study came later.

This story will sound familiar to some. Many notable results in psychology are now being questioned because later research has reached different conclusions. Last year, the “Reproducibility Project”, a large collaborative effort to replicate 100 studies in psychology, published the unnerving finding that only 36 per cent of the replication attempts had produced statistically significant results.

But it is not easy to know quite what to make of that percentage. Failing to find a statistically significant effect in a replication does not simply discredit the original work. For example, some replications find similar effects to the original studies without achieving statistical significance. That means the replication provides (faint) support for the original study rather than evidence against it.

Wharton psychologist Uri Simonsohn suggests a replication attempt should use a substantially larger sample than the original, so it is likely to estimate effects more precisely. If the replication fails to find an effect, that’s not proof there’s no effect; it does suggest, however, that the original study was a fluke.

Columbia University statistician Andrew Gelman suggests a simple rule of thumb that I followed in the opening paragraphs of this column: mentally reverse the order of the studies. Imagine the “replication” came first, and the “original” study came later. Being published first should not be a privileged position from which our conclusions can only be budged with extraordinary evidence. Gelman’s rule of thumb helps us avoid doggedly sticking to the status quo.

But perhaps the most important lesson is to remember that while “statistical significance” sounds scientific, it’s hardly a cast-iron endorsement of a result. The theory behind statistical significance assumes that a single pre-chosen hypothesis will be tested. In practice, researchers rarely pre-specify their hypothesis. They can test dozens, or hundreds — and sooner or later a pattern will emerge, if only by chance.

Imagine testing the idea that vitamin supplements boost childhood achievement. OK. But only for girls? Only for boys? Only for children suffering a poor diet? Only for under-10s?

An unscrupulous researcher can grind through the myriad combinations until a statistically significant pattern appears. But, says Gelman, there is no reason to think such unethical behaviour is common. More likely, researchers gather the data, look informally at the patterns they see, and only then choose a few hypotheses to test. They will tell themselves — correctly — that they’re being led by the data. That’s fine. But nobody should take seriously a test of statistical significance that emerges from such a research process: it will bring up fluke after fluke.
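
A toy simulation makes the point. The sketch below uses entirely invented data in which the ‘vitamin’ does nothing at all; slicing the sample into 40 arbitrary subgroups still throws up a couple of ‘significant’ findings, just as chance predicts.

```python
# Pure noise: the "vitamin" does nothing, yet slicing the data into enough
# subgroups ("only girls", "only under-10s", and so on) produces some
# "statistically significant" findings by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_children, n_subgroups = 2_000, 40

treated = rng.random(n_children) < 0.5                # random assignment to "vitamins"
outcome = rng.normal(size=n_children)                 # achievement, unrelated to treatment
subgroup = rng.integers(0, n_subgroups, n_children)   # 40 arbitrary ways to slice the sample

flukes = 0
for g in range(n_subgroups):
    in_group = subgroup == g
    p = stats.ttest_ind(outcome[in_group & treated],
                        outcome[in_group & ~treated]).pvalue
    flukes += p < 0.05

print(f"{flukes} of {n_subgroups} subgroups look 'significant' at the 5 per cent level")
# Expect roughly 2, i.e. about 5 per cent of 40, purely by luck.
```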

There are various technical solutions to this problem. But a little common sense also goes a long way. When a study of 42 subjects inspires 34 million people, it’s not unreasonable to go back and check the results.

Written for and first published at ft.com.

Undercover Economist

Worth the wait?

‘If you miss your plane or your train, it hardly matters that the queue was a nice place to chill’

I love queues. Not that I love queueing — I may be English but I’m not that English. But from a safe distance, queues are fascinating. They’re less fun if they cause you to miss your flight. In mid-May, two-hour queues for security at Chicago’s Midway airport had just that effect. Jeh Johnson, the US Secretary of Homeland Security, offered travellers some meditative advice: “Contemplate increased wait times as you travel.” I’d hope we can do a little better than mindful meditation.

There are three very different perspectives on queues: psychological, engineering and economic.

The psychological perspective tells us that much of what makes queues unpleasant is nothing to do with the waiting time. If a queue carries risk (you may or may not make your flight), then it is far more stressful. So are queues that are confrontational or unfair, or that require constant monitoring for queue-jumpers or for the sudden opening up of new lines.

A single serpentine queue, secure against cheats, can be a perfectly civilised place to stand and check email or read a paperback. With a bit of cleverness, the queue may be a pleasure — as at well-designed theme parks — or an unobtrusive virtual version, as when you collect a ticket from the supermarket deli counter and do some shopping while waiting for your number.

There are, however, limits to the psychological approach. When the Eyjafjallajökull eruption shut down air travel across Europe in 2010, I found myself queueing for train tickets in Stockholm Central Station, along with almost everyone else in Sweden. Thankfully, the queue had a counter system: simply take a ticket, and wait for your turn. I sat in a café, sipping espresso and typing on a laptop as I waited. But, after a pleasant three-quarters of an hour, I did some mental arithmetic, and realised that the queue was approximately 14 hours long. In the end, if you miss your plane or your train, it hardly matters that the queue itself was a nice place to chill.

When psychology fails, engineering must take the strain. A well-engineered queue copes gracefully with periods of high demand, and balances the cost of keeping customers waiting against the expense of paying for service staff who would often stand idle.

Queue engineers understand that queues can have strange properties. Imagine the queue at a busy post office. During the mid-morning lull, roughly one person a minute arrives and one person a minute can be served. The queue will fluctuate — and, alas, there will never be a negative number of people in the queue — but we can expect it to stay fairly short. Then, during lunch hour, extra people arrive and the queue starts to lengthen — two people, then four, five, 10. As the rush subsides, the capacity of the post office again begins to match the inflow of customers: one person arrives each minute, and one person is served each minute.

Annoyingly, even though the inflow and outflow of people from the queue is the same as it was in the morning, the afternoon queue is about 10 people long. It will stay 10 people long until the capacity of the post office is greater than the inflow of customers. Once a serious queue has formed, it needs attention or it can linger indefinitely.
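
For the curious, the arithmetic of the post office example can be captured in a few lines of Python. This is a deliberately crude sketch rather than proper queueing theory: service runs at one person a minute all day, arrivals run slightly hotter during a lunch-hour surge, and the backlog built up at lunchtime never drains away.

```python
# A crude sketch of the post office example, not a full queueing model.
# One person a minute is served all day; arrivals also run at one a minute,
# except for a lunch-hour surge. The backlog created by the surge lingers,
# because service capacity never again exceeds the inflow.
import random

random.seed(42)
queue = 0
for minute in range(480):                       # an eight-hour day
    lunch = 180 <= minute < 240                 # the lunch-hour surge
    extra = 1 if lunch and random.random() < 0.2 else 0
    arrivals = 1 + extra                        # usually 1 per minute, a bit more at lunch
    served = min(queue + arrivals, 1)           # one counter, at most one served per minute
    queue = queue + arrivals - served
    if minute % 60 == 59:
        print(f"end of hour {minute // 60 + 1}: {queue} people waiting")
```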

That brings us to the economic perspective on queues. Queues are a terrible, inefficient waste of time. If the resource in question is genuinely limited, then the existence of a queue shows that it is being underpriced. If everyone had to pay to join a queue, the queue itself would be shorter, because some people would decide not to bother. Those who did queue would earn back their entry fee in time saved, while the person selling tickets for the queue would make some cash.

In other cases, however, capacity should expand to keep the queue short. Imagine a line so long that most passengers would pay $50 to skip it — probably a good description of the two-hour queues at Midway. Hiring extra Homeland Security staff would save $50 worth of frustration for every extra person they scan from the line.

. . .

How many people could an extra security team see? One per minute, perhaps? Fifty dollars a minute would surely pay for some extra personnel. The problem is that the security team is unlikely actually to receive the $50. In an alternate universe, passengers would have a whip round, hire more agents, and the line would move just fine.
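
The back-of-envelope sum runs something like this; the $50 figure comes from the argument above, while the hourly cost of an extra screening team is a number invented purely for illustration.

```python
# Back-of-envelope only. The $50 willingness to pay is from the argument above;
# the hourly cost of an extra screening team is an invented figure.
willingness_to_pay = 50        # dollars each passenger would pay to skip a two-hour queue
passengers_per_hour = 60       # one extra passenger screened per minute
team_cost_per_hour = 600       # hypothetical fully loaded cost of an extra team, in dollars

frustration_saved = willingness_to_pay * passengers_per_hour   # $3,000 per hour
print(f"frustration saved: ${frustration_saved:,}/hour versus "
      f"team cost: ${team_cost_per_hour:,}/hour")
```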

But in the world in which we live, queues remain. Part of the cost is imposed on foreigners, whose annoyance barely registers on the system. (This is particularly true of immigration checks.) For example, on a recent trip from South America to London, I chose to change at Madrid rather than at Miami because I’ve had terrible experiences at Miami. That’s bad for the US economy but security screeners, customs officers and immigration officials respond to political signals, not market ones. The US political system is hardly likely to dance to my tune.

Looking on the bright side, I hear that Reagan National Airport, often used by members of Congress as they fly in and out of Washington DC, works like a charm.

Written for and first published at ft.com.

Undercover Economist

How the sense of an ending shapes memory

‘Composers, novelists and film directors try to end on a high. Restaurants keen to manipulate their online reviews have found a similar trick’

Many years ago, I listened to a string quartet perform a challenging piece of contemporary music. The piece, we were told, represented a journey of suffering and redemption. It would descend into discordant screeching for nearly 20 minutes before finally resolving harmoniously. The small concert hall was packed — there were even people seated on stage behind the performers — so there was little choice but to stick it out.

Everything unfolded as promised. The performance sounded like a succession of cats being tossed into a food processor. Eventually, though, the dissonance became resonance, the chaos became calm. It was beautiful.

But then came a sound that had not been in the score; the electronic peal of a mobile phone rang out across the tranquil auditorium. To make matters worse, the beeping arpeggios were emerging from the pocket of an audience member who was sitting on the stage. He was so close to the performers that he could easily have been downed by a solid backhand swing with the viola. It must have been tempting.

The music had been ruined. But it’s curious that 20 minutes of listening can be redeemed or destroyed by what happened in a few moments at the conclusion.

Daniel Kahneman, psychologist and Nobel laureate, tells a similar story about a man enraptured by a symphony recording that is ruined by a hideous screech — a scratch on the vinyl — in the final moments.

“But the experience was not actually ruined,” writes Kahneman, “only the memory of it.” After all, both concerts were almost complete when interrupted. The lived experience had been unblemished until the final moments. The remembered experience was awful.

When we recall things — a concert, a holiday, a bout of flu — we do not play out the recollection minute by minute like a movie in our minds. Instead, we tell ourselves a little story about what happened. And these stories have their own logic in which the order of events makes a difference.

Consider Jenson Button’s 2009 season in Formula One. The British racing driver easily outpaced his rivals in the first seven races of the season, building a vast lead. Then, as the relative performance of the cars changed, Button failed to win any of the remaining 10 races. His rival Sebastian Vettel couldn’t quite catch him, though, and Button became champion with a limp fifth-place finish in the penultimate race. One pundit defended Button against his many doubters with the feeble line: “There have been many less gifted world champions than Jenson Button.”

But imagine if the order of results had been reversed. After being beaten in almost every one of the first 10 races by Vettel, Button would have mounted a magnificent comeback, sealing his world championship with a victory in the final race. The same results in a different order would have told a very different story. And the story matters.

Kahneman and his colleagues have run a number of experiments testing these ideas. In one, people were asked to hold one hand in painfully cold water for 60 seconds. Some subjects then had to keep their hand in the water for another 30 seconds while a hidden valve released fractionally warmer water. So, which experience was worse: 60 seconds of pain, or 60 seconds of pain followed by 30 seconds of somewhat lesser pain? The experimental subjects preferred the longer experience with the happier ending.

In another study with Don Redelmeier, Kahneman surveyed colonoscopy patients every 60 seconds while they underwent a distinctly uncomfortable procedure, producing a minute-by-minute record of just how painful the colonoscopy was. Then, Redelmeier and Kahneman asked the patients to rate “the total amount of pain” they had experienced. The responses were strongly correlated with the average of two factors: the pain experienced at the worst moment, and the pain experienced at the end.

This is summarised as the “peak-end” rule. Our memories of experiences are governed by — of course — the most memorable things about them. Had the doctor left the probe inside the patient, without prodding around, for an extra 10 minutes, the final moments wouldn’t have felt too bad and the entire memory of the procedure would have been less grim.
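
The rule is simple enough to write down. In the sketch below the minute-by-minute pain scores are invented, but they illustrate the uncomfortable implication: adding a gentler final stretch increases the total pain while reducing the remembered pain.

```python
# Toy illustration of the peak-end rule; the pain scores are invented.
def remembered_pain(pain_by_minute):
    """Peak-end rule: memory is roughly the average of the worst moment and the final moment."""
    return (max(pain_by_minute) + pain_by_minute[-1]) / 2

short_procedure = [4, 6, 8, 7, 8]               # ends at a painful moment
long_procedure = short_procedure + [3, 2, 2]    # the same, plus a gentler tail

print(sum(short_procedure), remembered_pain(short_procedure))   # total 33, remembered 8.0
print(sum(long_procedure), remembered_pain(long_procedure))     # total 40, remembered 5.0
```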

No wonder Jenson Button’s 2009 season seemed mediocre: his peak performances were great but his final performances less so. And no wonder that disruptive mobile phone was so aggravating: since the best moment of the music came at the end, one ringtone managed to spoil both the peak and the end.

Of course, it is no coincidence that the best bit of the music was at the finale: composers, like novelists and film directors, try to end on a high.

Restaurants keen to manipulate their online reviews have discovered a similar trick: twice recently I’ve dined at restaurants in unfamiliar towns that were highly rated on TripAdvisor. Both times, the food was good but unremarkable. Both times, the proprietor pressed gifts upon us as we left — a free glass of grappa, a nice corkscrew. It seems that when people thought back and wrote their reviews, they remembered this pleasant send-off. That makes sense: if you want people to remember you fondly, it’s best to engineer things so that the last thing they remember of you is something other than signing a bill.

Written for and first published at ft.com.

Marginalia

What I’ve been reading in May

Angela Duckworth’s new book Grit – you can read some of my thoughts here. Enjoyable, but not the best book I read this month; if you liked the TED talk you’ll like the book.

Felix Martin’s superb book Money – some wonderful stories about the evolution of key pieces of financial technology from the tally stick to international banking. A surprisingly light read for a learned book. Strongly recommended.

Equally strong recommendation for William Goetzmann’s Money Changes Everything – this is a more technical history of finance from Uruk to Rome and beyond. Smaller print, heavier read, but still full of fascinating nuggets and extremely well researched.

I’ve been enjoying Will Gompertz’s Think Like an Artist, which teaches you a lot of art and art history under the guise of a self-help manual. Very nicely done.

Mark Miodownik’s Stuff Matters is a clever reflection on materials science. I loved the chapter on concrete, which tells you something about his lightness of touch.

Next on the pile, Robert Gordon’s magisterial The Rise and Fall of American Growth. Self-recommending; I’ve not read it yet but must hasten.

28th of May, 2016
Undercover Economist

The refugee crisis — match us if you can

‘However many refugees we decide to resettle, there’s no excuse for doing the process wastefully’

Writing in the 1930s, Lionel Robbins, head of LSE’s economics department, defined economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses”. It’s the study of who gets what and why.

That typically means that economists study conventional markets: how prices work, how people respond to them and how the whole system might function or malfunction. But sometimes a market simply will not do. We don’t allocate children to state school places based on their parents’ willingness to pay. Most countries don’t sell passports to the highest bidder. We do not have a legal market in iced kidneys.

Whether we like it or not, the problem of who gets what and why remains. Sometimes it is grubbily resolved by the emergence of parallel markets — for example, children can be placed in desirable schools at taxpayer expense if their parents buy or rent expensive homes in the right areas.

Over the past few decades a small group of economists — most notably Nobel laureate Alvin Roth, author of Who Gets What — and Why (2015) — has been designing “matching mechanisms” to address allocation problems without resorting to traditional markets. A typical problem: matching teaching hospitals with trainee doctors. The doctors want good hospitals and the hospitals want good doctors. Each side will also have a focus on a particular field of medicine, and the doctors may have preferences over location. Some doctors may be dating fellow medics, who are themselves searching for a teaching hospital.

A good matching mechanism tries to satisfy as many of these preferences as possible. And it ends the need for people to second-guess the system. Bad matching mechanisms reward people who say that a compromise option is really their top preference. Such mind-games are alienating and unfair; in a well-designed matching system, they can be eliminated.

Roth and a growing number of his students and colleagues have designed matching mechanisms for schools and hospital placements, and even mechanisms to ensure the best match for donated kidneys. In each case a market is socially unacceptable but ad hoc or lottery-based allocations are also poor solutions. Nobody wants a random kidney, or to be assigned a place on the whim of a well-meaning bureaucrat who doesn’t really understand the situation.

By balancing competing demands, good matching mechanisms have alleviated real suffering in school systems and organ donation programmes. Now two young Oxford academics, Will Jones of the Refugee Studies Centre and Alexander Teytelboym of the Institute for New Economic Thinking, are trying to persuade governments to use matching mechanisms in the refugee crisis.

Most popular discussions of the crisis focus on how many refugees we in rich countries should accept. Yet other questions matter too. Once nations, or groups of countries, have decided to resettle a certain number of refugees from temporary camps, to which country should they go? Or within a country, to which area?

Different answers have been tried over the years, from randomly dispersing refugees to using the best guesses of officials, as they juggle the preferences of local communities with what they imagine the refugees might want.

In fact, this is a classic matching problem. Different areas have different capabilities. Some have housing but few school places; others have school places but few jobs; still others have an established community of refugees from a particular region. And refugee families have their own skills, needs and desires.

This is not so different a problem from allocating trainee doctors to teaching hospitals, or children to schools, or even kidneys to compatible recipients. In each case, we can get a better match through a matching mechanism. However many refugees we decide to resettle, there’s no excuse for doing the process wastefully.

There is no perfect mechanism for matching refugees to communities — there are too many variables at play — but there are some clear parameters: housing is a major constraint, as is the availability of medical care. Simple systems exist, or could be developed, that should make the process more efficient, stable and dignified.

One possibility is a mechanism called “top trading cycles”. This method invites each refugee family to point to their preferred local authority, while each local authority has its own waiting list based on refugee vulnerability. The trading cycles mechanism then looks for opportunities to allocate each family to their preferred location. The simplest case is that, for example, the family at the top of the Hackney waiting list wants to go to Hackney. But if the family at the top of Hackney’s list wants to go to Camden, the family at the top of Camden’s list wants to go to Edinburgh, and the family at the top of Edinburgh’s list wants to go to Hackney, all three families will get their wish.
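
The cycle-finding step is easy to sketch in a few lines of Python, using the Hackney, Camden and Edinburgh example above. A real mechanism would also have to respect capacities, remove each matched cycle and repeat until every family is placed; this is only the core idea.

```python
# A minimal sketch of one round of "top trading cycles", using the example above.
# Each local authority points to the family at the top of its waiting list;
# each family points to its preferred authority. Following the pointers must
# eventually loop, and every family on the loop gets its first choice.
top_of_list = {"Hackney": "family A", "Camden": "family B", "Edinburgh": "family C"}
preference = {"family A": "Camden", "family B": "Edinburgh", "family C": "Hackney"}

def find_cycle(start):
    """Follow authority -> top family -> preferred authority until an authority repeats."""
    seen, authority = [], start
    while authority not in seen:
        seen.append(authority)
        authority = preference[top_of_list[authority]]
    return seen[seen.index(authority):]

for authority in find_cycle("Hackney"):
    family = top_of_list[authority]
    print(f"{family} is matched to {preference[family]}")
# family A goes to Camden, family B to Edinburgh, family C to Hackney:
# all three get their first choice.
```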

Right now, the UK is a promising candidate to pioneer the use of one of these matching mechanisms to place refugees. The government has pledged to resettle 20,000 Syrian refugees now in temporary camps. Local authorities have volunteered to play their part. But to make the best possible matches between the needs of the refugees and the capabilities of these local authorities, it’s time to deploy a little economics.

Written for and first published at ft.com.

Undercover Economist

A billion prices can’t be wrong

‘A “big data” approach to inflation is helping us understand the fundamental question of why recessions happen’

In the dying days of 2015 came news to set any geek’s pulse racing: the declaration of a “statistical emergency” by Mauricio Macri, the new president of Argentina. Macri’s move enabled Jorge Todesca, head of the statistics bureau, to suspend publication of some basic economic data. That might seem extreme but Argentina’s inflation numbers were widely discredited.

The International Monetary Fund censured Argentina in 2013 for its implausible numbers under previous president Cristina Fernández de Kirchner. Government statisticians say they were leaned on by her administration to report low inflation. Todesca himself used to be a private-sector economist, and, in 2011, his firm was fined half a million pesos for publishing numbers that contradicted the official version. (Half a million pesos was about $125,000 at the time; it is $35,000 these days, which rather proves the point.)

But one economist found a way to publish plausible inflation statistics without being prosecuted. His name is Alberto Cavallo, and he realised that by gathering price data published by online retailers, he could produce a credible estimate of Argentine inflation from the safety of Massachusetts. Cavallo’s estimate averaged more than 20 per cent a year between 2007 and 2011; the official figure was 8 per cent.

So began the Billion Prices Project and its commercial arm PriceStats, both collaborations between Cavallo and fellow MIT economics professor Roberto Rigobon. “Billion Prices” sounds hyperbolic but that is the number of prices collected each week by the project, from hundreds of retailers in more than 60 countries.

While the project confirmed that Argentina’s inflation numbers could not be trusted, it also showed that the US inflation numbers published by the US Bureau of Labor Statistics could be. Several maverick commentators had argued that hyperinflation would be the inevitable consequence of money printing at the Federal Reserve. When hyperinflation plainly failed to materialise, some critics suggested the BLS was hiding it — as if nobody would notice.

A second advantage, swiftly noted, was that the daily flow of data from PriceStats was a good predictor of official inflation statistics, which are typically published once a month. Cavallo and Rigobon like to point out that their US online price index started to fall the day after Lehman Brothers declared bankruptcy; the official Consumer Price Index took a month to respond at all, and two months to respond fully.

The BPP is also shedding light on some old economic mysteries. One is the problem of adjusting inflation for changes in quality. To some extent this is an intractable problem. The Edison phonograph cost $20 at the end of the 19th century; an iPod Nano costs about $145 today. What inflation rate does that imply over the past 117 years? There is simply no good answer to that question.

But statistical agencies are always wrestling with smaller slices of the same problem. A new model of washing machine is introduced at a premium price, gradually discounted over the years and eventually sold at clearance prices and replaced with a swankier model. The same thing is happening over differing timescales with computers, summer dresses and cars. If the economic statisticians mishandle these cases, they will get their measure of inflation badly wrong; usually they rely on careful substitutes and clever theory, but success can never be assured.

Cavallo and Rigobon argue that the sheer volume of prices collected by the BPP helps resolve the problem. Every day, the project gathers the prices of hundreds of washing machines. By observing that the availability of the Scrub-O-Mat 9000 overlaps with that of the Cleanado XYZ, it’s possible to adjust as new products are introduced and old products discounted and then phased out.
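
A stylised sketch shows the chaining idea, though it is not the Billion Prices Project’s actual methodology: so long as old and new models overlap for a while, the month-to-month price changes of whatever is on sale in consecutive months can be linked into a continuous index.

```python
# Stylised illustration of the overlap idea, not the BPP's actual method.
# The old washing machine is on sale in months 0 to 2, its replacement in
# months 1 to 4; chaining the price changes of products seen in consecutive
# months carries the index across the model change.
prices = {
    "Scrub-O-Mat 9000": {0: 500, 1: 450, 2: 400},
    "Cleanado XYZ":     {1: 600, 2: 570, 3: 540, 4: 520},
}

index = [100.0]
for month in range(1, 5):
    # average price relative among products observed in both this month and the last
    ratios = [p[month] / p[month - 1]
              for p in prices.values() if month in p and month - 1 in p]
    index.append(index[-1] * sum(ratios) / len(ratios))

print([round(level, 1) for level in index])   # the index drifts down as each model is discounted
```
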
This “big data” approach to inflation is also helping us to understand the fundamental question of why recessions happen. Without opening a big bag of macroeconomics at this stage in the column, one influential school of thought is that recessions happen (in part) because prices don’t adjust smoothly in the face of a slowdown. Like a small rock that starts an avalanche, this price rigidity causes big trouble. Unsold inventory builds up, retailers slash their orders, and manufacturers go bankrupt.

The trouble with the idea that price stickiness causes recessions is that, according to official inflation statistics, prices seem to change routinely, and often by very small amounts, which suggests they are not rigid at all.

But it turns out that many small price changes are statistical illusions. For example, if a product is missing from four monthly inflation surveys and is 1 per cent more expensive when it returns in the fifth month, official statisticians will quite rightly smooth over the gap by imputing a 0.2 per cent rise per month. But it would be a mistake to take this as evidence that retailers did, in fact, repeatedly raise prices by 0.2 per cent. Collecting billions of prices removes the need to fill in these gaps, and in the BPP data very small price changes are rare. Prices will move by several per cent if they move at all. One might guess that in physical stores the cost of relabelling products is higher, and small price changes are even rarer.
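
The arithmetic of that imputation is easy to reproduce. The sketch below uses the numbers from the example above: one genuine 1 per cent jump after a gap becomes five imputed monthly rises of roughly 0.2 per cent that no retailer ever made.

```python
# The imputation described above, using the example's numbers: a product is
# unobserved for four months and returns 1 per cent dearer in month five.
# Smoothing the gap invents five small monthly rises of roughly 0.2 per cent.
observed = {0: 10.00, 5: 10.10}      # price actually seen in month 0 and month 5

monthly_factor = (observed[5] / observed[0]) ** (1 / 5)
for month in range(6):
    imputed = observed[0] * monthly_factor ** month
    print(f"month {month}: imputed price {imputed:.4f}")
# The raw data contain a single 1 per cent jump; the imputed series contains
# five price changes of about 0.2 per cent each.
```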

The BPP’s big data approach has rescued the important macroeconomic idea of price stickiness. It is a reminder that we often gain from having a second opinion — or a billion of them.

Written for and first published at ft.com.

Other Writing

Why everyone should give a TED talk and how to do it

I found out the hard way that bad public speaking is contagious. As a schoolboy I was pretty good at speeches, in a schoolboyish way. I won competitions; being a sharp, witty speaker was a defining part of who I felt myself to be.

Then I grew up and started a corporate job, and something strange happened. My talks sagged into “presentations”, burdened by humourless clip art and plodding bullet points. The reason? I was surrounded by people who were stuck in the same beige offices giving the same beige presentations. Like many workplaces, we had reached an unspoken consensus that giving bad talks was just the way things were done.

Aside from tradition — and it is a powerful one — why else are most talks bad talks? One reason is fear. Being afraid does not itself make a speech bad; fear can make a talk electrifying or touching. But most speakers take the coward’s way out. Afraid of running out of words, they overstuff their speeches. And they prop themselves up by projecting their speaking notes on the wall behind them, even though everyone knows that providing rolling spoilers for your speech is a terrible idea.

A second reason is lack of preparation. Most speakers rehearse neither their argument nor their performance. That is understandable. Practising in front of a mirror is painful. Practising in front of a friend is excruciating. Rehearsing offers all the discomfort of giving a speech without any of the rewards of doing so. But it will make the end result much better.

For these reasons, I think you should give a TED talk. Almost anyone can. All you need is 18 minutes, a topic and an audience — if only your cat. No matter how often or how rarely you usually speak in public, the act of trying to give a talk in the tradition of TED will change the way you think and feel about public speaking.

As with anything popular, TED talks have their critics, but it is hard to deny that the non-profit organisation behind the videoed presentations on subjects from science to business has helped reinvent the art of the public speech.

TED talks are vastly more entertaining than traditional lectures, while more thought provoking than most television. But that is TED from the point of view of the audience. From the view of an aspiring speaker, the lesson of TED is that most speakers could raise their game. A few TED talks are by professional politicians or entertainers such as Al Gore or David Blaine. Most are not.

There are more than 1,000 talks on the TED website with more than 1m views, typically delivered by writers, academics or entrepreneurs who have been giving mediocre talks as a matter of habit, and who have been suddenly challenged to stop being mediocre. Faced with the obligation to deliver the talk of their lives, they decided to do the work and take the necessary risks.

These speakers have been offered good advice by the organisers of TED, but that advice has never been a secret. It is now available to anyone in the form of TED Talks, a guide to public speaking from Chris Anderson, the TED boss. It is excellent; easily the best public speaking guide I have read. (I should admit a bias: I have spoken twice at TED events and benefited from the platform that TED provides.) Unlike many in the genre, Anderson’s book is not a comprehensive guide to going through the motions of wedding toasts and votes of thanks. Instead, it focuses on the stripped-down TED-style challenge: an audience, a speaker, plenty of time to prepare, and 18 minutes to say something worth hearing.

There is no formula for a great talk, insists Mr Anderson, but there are some common elements. First and most important: there is a point, an idea worth hearing about. Second, the talk has a “throughline” — meaning that most of what is said in some way supports that idea. There may be stories and jokes, even surprises — but everything is relevant.

Third, the speaker connects with those listening — perhaps through humour, stories, or simply making eye contact and speaking frankly. Finally, the speech explains concepts or advances arguments by starting from what the audience understand, and proceeding step by step through more surprising territory. It can be very hard for a speaker to appreciate just how much she knows that her audience do not. One reason to rehearse is that an audience can tell you when they get lost.

Most speakers are able to do some of this, some of the time — an interesting anecdote, a funny line, an educational explanation. We are social beings, after all. We have had a lot of practice talking.

Much of what turns a half-decent talk into a brilliant one is the ruthless excision of the fluff — the throat-clearing introduction, the platitudes, the digressions, the additional points that obscure the central message, and the “er, that’s about it” conclusion. With an audience of 60 people, for instance, every minute you waffle is an hour of other people’s time you are wasting. Sharpen up.

My only quibble is that the book offers less to a speaker who is short of preparation time. Because Mr Anderson is so keen to tell speakers how to prepare, he does not fully engage with the challenge of improvised speaking or debating.

Marco “Rubot” Rubio’s presidential dreams may have been snuffed out because he seemed over-rehearsed and unable to improvise. And Martin Luther King Jr’s greatest moment as a speaker — the second half of “I have a dream” — was unscripted. Sometimes the improvised response is more powerful than a prepared speech can ever be.

Instead, Mr Anderson’s aim is to help readers give a full-blown TED talk, despite the hard work that entails. Fair enough. Preparing to give a high-stakes speech is like training for a marathon or studying for an exam: even if you only do it once, the process will teach you things you will always remember.

Written for and first published in the Financial Times.

A short-cut to speeches

A TED-style talk takes weeks of preparation. What if you have hours, or minutes, to prepare?

• Say something worth hearing. “It’s not about you,” says Chris Anderson, who warns that business presentations are often sales pitches or boasts. He adds that the same information will land much better if it is “here’s what we’ve learnt” rather than “look how great we’ve been”.

• Less is more. Once you have found something worth saying, focus. Strip it down to a single core point. Everything about your speech — stories, jokes, statistics, graphics — should connect to that point.

• Your speaking notes should not intrude. Bullet points are a good idea if they are written on handheld cards, but not when projected on the wall behind you. If your speech is scripted, do not try to memorise it if you have no time, but become familiar with it. “There’s a big difference between being 90 per cent down in the script, and 60 per cent up and connected,” says Anderson.

• You are usually your own best visual aid. By all means use pictures, diagrams or video when they are good. But do not use substandard slides as wallpaper; when you have nothing to show, show nothing. Hit “B” to blank the screen and focus attention on you, or use empty slides.

• Practise. Even one run-through with a friend will help. Or find an empty room and record yourself on your phone. It is awkward but worth it.

• First and final impressions last. Improvised talks often suffer from a slow start and a limp finish. Think of a good opening and closing, and practise them. If you can start and finish strongly, you and your audience will both feel better.

11th of May, 2016
Undercover Economist

The odds are you won’t know when to quit

‘The truth is that there are no foolproof methods for knowing when to hold ’em and when to fold ’em’

There is a strong case to be made for persistence. As a child I was told the legend of Robert the Bruce. Cowering and hiding in some dank cave in Scotland, he felt like giving up his struggle against the English. Then he noticed a spider repeatedly failing to spin a web before eventually succeeding. Heartened, King Robert returned to give the English a sound thrashing in 1314. Even for an English boy, it was an inspiring tale. If at first you don’t succeed, try again.

But there is an equally strong case to be made against being stubborn. When Irving Fisher and John Maynard Keynes failed to predict the Wall Street Crash of 1929, the two great economists reacted differently. Fisher stuck to his guns; Keynes shrugged and changed direction. Fisher was ruined; Keynes died a millionaire. If at first you don’t succeed, do something different next time.

Do we tend to quit too soon or quit too late? Are we too stubborn or not determined enough? There has been much excitement recently around the idea of “grit” — a personality trait representing commitment to and enthusiasm for long-term goals, championed by psychologist Angela Duckworth. She argues, plausibly, that grit is more important than talent in predicting a successful life.

The idea is appealing in principle but one must ask what Duckworth’s brief “grit” questionnaire is really measuring. (Perhaps I am just sore because I took the questionnaire and discovered I have less grit than the average marshmallow.)

While Duckworth’s work suggests that perseverance is vital, other psychological research suggests that we sometimes persevere when we should not. Nobel laureate Daniel Kahneman, with the late Amos Tversky, discovered a tendency called “loss aversion”. Loss aversion is a disproportionate dislike of losses relative to gains, and it can lead us to cling on pig-headedly to bad decisions because we hate to stop playing when we’re behind.

My favourite study of loss aversion concerns players of the TV game show Deal or No Deal, in which players must periodically decide whether to keep gambling or accept an offer from the mysterious “Banker” to buy them out of the game. In one notorious Dutch episode, a contestant named Frank was offered €75,000 to stop; he kept playing and lost his next gamble. The Banker’s next offer was just €2,400, which was actually a fair offer. But at that point loss aversion kicked in. With the lost €75,000 in mind, Frank refused all further deals, kept gambling and kept losing. He eventually won just €10.
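
As a rough guide to what ‘fair’ means here: a risk-neutral offer is simply the average of the amounts still in play. The sketch below uses invented amounts rather than those from the Dutch episode, but it shows how a seemingly paltry offer can be perfectly reasonable once the big prizes have gone.

```python
# Illustration only: these remaining amounts are invented, not the Dutch
# episode's. A risk-neutral "fair" offer is the expected value, i.e. the
# simple average, of the amounts still in play.
remaining_amounts = [0.50, 10, 500, 9_000]     # hypothetical boxes left, in euros

fair_offer = sum(remaining_amounts) / len(remaining_amounts)
print(f"fair, expected-value offer: about {fair_offer:,.0f} euros")
```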

A study of Deal or No Deal by behavioural economists including Thierry Post and Richard Thaler found that while Frank’s fate was spectacular, his behaviour was statistically typical. People hate to quit if they feel they’re losing.

Loss aversion warps investment strategies in a similar way. We happily sold our stocks in Google and Apple but clung on to those in Enron and Lehman Brothers. The same tendency affects house prices: we hate to sell for less than we paid. Recent research by Alasdair Brown and Fuyu Yang finds that the same thing is true when people are offered the opportunity to cash in a bet on a sporting event that is still in progress. They are happy to cash out if their team is a goal up, even though that will cut their possible gains, but they will cling on if their team is a goal down even though they could cut their losses.

I was struck by a recent FT article by equity analyst Daniel Davies describing how a portfolio based on expert research recommendations would tend to do badly, but if the same portfolio had a “stop-loss” rule that simply jettisoned stocks after a 10 per cent loss, it would tend to do very well. The stop-loss rule cancelled out the instinctive tendency to hold on stubbornly to losers. Yet Warren Buffett seems to do very well by buying and holding.
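
Here is a minimal sketch of such a mechanical rule, applied to an invented price series rather than to Davies’s actual portfolios. The rule simply sells as soon as a position is 10 per cent below its purchase price, overriding the urge to hang on and get back to even.

```python
# A minimal sketch of a 10 per cent stop-loss rule on an invented price series
# (not Daniel Davies's actual backtest).
def stop_loss_exit(prices, threshold=0.10):
    """Return the exit price: the first price at least `threshold` below entry, else the final price."""
    entry = prices[0]
    for price in prices[1:]:
        if price <= entry * (1 - threshold):
            return price                 # the rule forces a sale here
    return prices[-1]                    # never triggered: still holding at the end

losing_stock = [100, 97, 92, 89, 70, 55]
print(stop_loss_exit(losing_stock))      # sells at 89 rather than riding the fall to 55
```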

The truth is that there are no foolproof methods for knowing when to hold ’em and when to fold ’em. But I have three suggestions. The first is to look resolutely away from sunk costs and towards future prospects. Whether you paid $70 or $130 for your Apple shares should be irrelevant to your decision to sell them today for $100. Bygone profits and losses are a distraction.

The second is to persevere flexibly rather than stubbornly. Angela Duckworth’s family follows a “hard thing” rule: the children have to choose an activity, such as music or athletics, that requires dedication and practice. They’re allowed to quit but only at a natural break point and only if they find an alternative “hard thing”. That seems to steer a course between the Scylla of obstinacy and the Charybdis of laziness.

The third is to view decisions as experiments. Signing up to learn the violin is an experiment; so is moving cities or careers. Of course, one can end an experiment too early or doggedly persist too long. But viewing a decision as an experiment gives a useful perspective because experiments are always designed to teach us something. We can keep asking: what have I learnt? And am I still learning? If a new project or activity keeps teaching us new things, it is probably worth continuing — even if the lessons are sometimes painful.

Written for and first published at ft.com.

Video

How to give a TED talk in a hurry

4th of May, 2016