Tim Harford The Undercover Economist

Articles published in October, 2012

Why aren’t we doing the maths?

The practical implications of misplaced confidence when dealing with statistical evidence are obvious and worrying

A little two-part test for you. Imagine you’re a doctor, considering whether to recommend a particular kind of cancer screening, “A”. You discover that this form of screening improves five-year survival rates from 68 per cent to 99 per cent. (The five-year survival rate is the proportion of patients alive five years after the cancer was discovered.) The question is: does the screening test “A” save lives?

Part two: now you consider an alternative screening test, “B”. You discover that test “B” reduces cancer deaths from two per 1,000 people to 1.6 per 1,000 people. So: does screening test “B” save lives?

The second question is easier. Screening test “B” unambiguously saves lives: to be precise it saves 0.4 lives per 1,000 people. That might not seem a lot – and if the test is expensive or has unpleasant side-effects it might not be worth it – but that is the nature of cancer screening. Most people don’t have the cancer in question so most people cannot be helped by the test.

What about screening test “A”? This question is harder. The numbers look impressive, but survival rates are a treacherous way to evaluate a screening programme. Imagine a group of 60-year-olds who all develop an incurable cancer that will kill them at 70. They have no symptoms until age 67, so the five-year survival rate when they are diagnosed at 67 is, I’m afraid, zero. Introduce a screening programme and you can discover the cancer much earlier, at age 62. The five-year survival rate is now 100 per cent. But the screening hasn’t saved any lives: it’s merely given early warning of a disease that cannot be treated.

In general, screening programmes look impressive when evaluated by survival rates, because the purpose of screening is to detect the cancer earlier. Whether any lives are saved or not is a different issue entirely.

I’ll admit, this is a tricky pair of questions. You’d have to be a doctor, rigorously trained in how to handle the evidence base for medical treatments, to get this sort of thing right. But here’s the bad news: doctors do not get this sort of thing right.

An article published in the Annals of Internal Medicine in March put these questions to a panel of more than 400 doctors with relevant clinical experience. Eighty-two per cent thought they’d been shown evidence that test “A” saved lives – they hadn’t – and of those, 83 per cent thought the benefit was large or very large. Only 60 per cent thought that test “B” saved lives, and fewer than one-third thought the benefit was large or very large – which is intriguing, because of the few people on course to die from cancer, the test saves 20 per cent of them. In short, the doctors simply did not understand the statistics on cancer screening.

The practical implications of this are obvious and worrying. It seems that doctors may need a good deal of help interpreting the evidence they are likely to be exposed to on clinical effectiveness, while epidemiologists and statisticians need to think hard about how they present their discoveries.

The situation could be worse. A recent survey by the Royal Statistical Society’s “getstats” campaign asked MPs to give the probability of getting two heads when tossing a coin twice. More than half failed to answer correctly – including a humiliating three-quarters of Labour MPs.

The answer, of course, is 25 per cent; this is appallingly basic stuff. If I try to translate from numeracy to literacy, I’d say that the doctors’ failure was the equivalent of being unable to write a decent essay about “The Waste Land”, while the MPs’ failure was more like the inability to read a newspaper.
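The arithmetic can be checked by brute enumeration; a throwaway Python sketch:

```python
from itertools import product

# Enumerate the four equally likely outcomes of two tosses: HH, HT, TH, TT.
outcomes = list(product("HT", repeat=2))
p_two_heads = sum(o == ("H", "H") for o in outcomes) / len(outcomes)
print(p_two_heads)   # 0.25
```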

The Royal Statistical Society reported that about three-quarters of MPs said they felt confident when dealing with numbers. This confidence is misplaced.

Also published at ft.com.

The benefits of being in this together

‘George Osborne was warned by Tory MPs on Tuesday to expect a voter backlash and “widespread non-compliance” as the chancellor prepares to roll out his plan to remove child benefit from better-off taxpayers.’

Financial Times, October 24

The revolt of the yummy mummy set!

And the relevant fathers, husbands and boyfriends, yes – although such people are rarely described as “yummy”.

Why the fuss?

Where to begin? The idea is that high-income households will no longer receive child benefit, hitherto a universal right. Either the mother can rescind her claim to child benefit, or a high earner in the household can declare child benefit was received and pay an offsetting tax charge. Some worry about the principle of universal benefits being eroded; some worry it is an odd way to raise more tax; and some worry that administratively, the thing is a regurgitated dog’s breakfast.

How can a policy offend people in so many ways?

George Osborne is a man of unusual talents. I speculate that he wrote down a few useful rules of thumb for what a sensible tax system should look like, and then tried to figure out how to break every one of them.

What rules of thumb do you have in mind?

First, keep marginal tax rates simple and as low as possible to avoid discouraging work. Marginal tax rates were proliferating under the chancellor’s predecessors and now we have a new suite of high and fecundity-dependent marginal tax rates for those earning between £50,000 and £60,000 a year.
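A rough sketch of how those rates arise, assuming the 2012-13 figures (child benefit of £20.30 a week for the eldest child and £13.40 for each other child, withdrawn at 1 per cent per £100 of income between £50,000 and £60,000 – treat the exact numbers as illustrative):

```python
# Illustrative: for a 40 per cent taxpayer in the £50k-£60k band, each
# extra £100 earned also removes 1 per cent of the family's child benefit,
# so the effective marginal rate rises with the number of children.
def child_benefit(children):
    if children == 0:
        return 0.0
    return (20.30 + 13.40 * (children - 1)) * 52   # annual benefit, assumed rates

def marginal_rate(children, base_rate=0.40):
    # extra marginal rate from benefit withdrawal inside the £50k-£60k band
    withdrawal = child_benefit(children) / 10_000
    return base_rate + withdrawal

for kids in (1, 2, 3):
    print(kids, f"{marginal_rate(kids):.1%}")   # rates climb with each child
```

On these assumed figures a parent of two in the band faces a marginal rate of about 57.5 per cent – the “fecundity-dependent” rates in question.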

But isn’t it reasonable that high earners are denied child benefit?

Well, that ignores the second rule of thumb, which is that we should evaluate the tax system as a whole, rather than individual chunks of it. Universal child benefit is already progressive, but if we think the rich should pay more, that’s easy: raise income taxes. Mr Osborne is doing the opposite. Perhaps this is because by targeting this measure on households with children, he shields the over-55s from the worst of it.

Why would he want to do that?

That’s been Conservative party policy for ever. I’ve stopped asking why. Rule number three is the Colbert rule, named after a French finance minister: pluck the goose, but minimise the hissing. Instead, in a display of forehead-smacking incompetence, Mr Osborne plans to write to his core constituents specifically informing them that he is about to take away money they regard as a fundamental right. For an encore he will make an extra half a million people complete a self-assessment tax return, which is no trip to Disneyland. It’s hard to see how the opposition could draw as much attention to this tax increase as Mr Osborne has.

But at least it’s all going to be administratively convenient?

Very droll. The fourth principle is to make up your mind what you’re taxing: households or individuals. Our benefits system applies to households but our tax system has always applied to individuals. The way this child-benefit withdrawal is going to be introduced mixes up the two systems.

Is that a problem?

The obvious objection is that a couple earning £60,000 and £20,000 respectively will lose child benefit, while a couple earning £40,000 each will lose nothing. It also means husbands and wives have to tell each other financial details that for hundreds of years have been regarded as private. More deliciously, HM Revenue & Customs has to take a view on what constitutes “living together as husband and wife (or civil partners)”. This should be fun.

Why so?

If a mother is living with her high-earning husband, no child benefit. If she is a single parent living with her high-earning mother, child benefit is due. If she lives with a high-earning friend, she gets child benefit. If she starts sleeping with the friend, she does not get child benefit. I’ve asked HMRC how they plan to find out who’s having sex with whom.

What did they say?

They said they’d get back to me. I also failed to establish what happens if a mother has two overlapping romantic partners. Nor do we yet know whether tax returns will contain a box to allow people to specify how many weeks they’ve been sexually involved with someone claiming child benefit. It will certainly spice up the old tax return, eh?

Why is the government doing this?

Simple. Mr Osborne thinks that it will be a vote-winner. That appears to be all that matters.

Also published at ft.com.

A short interview at Wired 2012

Believe the hype in hyperinflation

Extremely high price increases are not produced by central bankers but are the result of a total failure of the political system

Despite fringe commentators shrieking about imminent hyperinflation, it has failed to appear. The Fed’s announcement last month of further quantitative easing can be interpreted as an attempt to promise inflation tomorrow in the hope of getting people to spend money today. Rather than creating far too much inflation, the Federal Reserve is struggling to create much inflation at all.

Unexpected inflation moves money from creditors to debtors. It creates a variety of minor costs associated with redrafting contracts and rewriting menus and price lists; but it creates some benefits, too, by eroding stubbornly high wages and promoting job creation.

Hyperinflation, though, is a different beast from ordinary inflation. Hyperinflation shreds every monetary contract, makes much of modern economic life impossible, and all but guarantees that a totally different form of money, anything from foreign dollars to cigarettes, will be pressed into service. Yet the historical record is in some ways encouraging.

A new Cato Institute working paper tries to document every hyperinflationary episode in history. Getting the numbers isn’t easy, as the authors, Steve Hanke and Nicholas Krus of Johns Hopkins University, are at pains to point out. Hyperinflation is a time of civic dysfunction. By the time the price level is doubling every couple of months, or days, people have often stopped collecting credible statistics.

Hanke and Krus report 56 episodes of hyperinflation. A 57th, in North Korea in the past few years, is excluded for lack of sound data. Given that the authors were able to scrape together price indices for the Free City of Danzig in 1923 and for the Japanese-occupied Philippines in 1944, the failure to establish facts about present-day North Korea tells you something about Pyongyang’s attitude to statistical outreach. Hanke says that since the paper was released, the hyperinflation club has a new member: Iran.

A few facts leap out. First, hyperinflation is a phenomenon of the modern era: with a single exception, every hyperinflation has occurred since the end of the first world war. The outlier is revolutionary France, where monthly inflation passed 300 per cent in the summer of 1796.

Second, three-quarters of these hyperinflations – 43 out of 56 – occurred in one of three clear historical clusters. The first cluster is central European states after the first world war. It provides the most famous hyperinflation in history: Weimar Germany. The second cluster is during or immediately after the second world war, and it includes history’s worst: Hungary in 1946. Those inflation rates defy comprehension – 41,900,000,000,000,000 per cent a month, compounded, is (I believe) an annual inflation rate with 178 digits. It makes more sense as 207 per cent a day.
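The daily and monthly figures can be checked against each other; a rough sketch, assuming a 30-day month:

```python
# Rough cross-check of the Hungarian figures: 207 per cent a day,
# compounded over a 30-day month, multiplies prices by roughly 4e14,
# in line with the 41,900,000,000,000,000 per cent quoted above.
daily_rate = 2.07                         # 207 per cent a day
monthly_factor = (1 + daily_rate) ** 30   # price multiple over the month
monthly_pct = (monthly_factor - 1) * 100
print(f"{monthly_pct:.2e} per cent per month")
```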

The third cluster is that of Eastern bloc countries as the Soviet Union disintegrated, and it comprises over half of all the 20th century’s hyperinflations. These are all examples of hyperinflation going hand in hand with an extremely stressed political and social system. Most of the remaining examples, from Zimbabwe to late revolutionary France, exhibit that same stress.

There’s a lesson here: regardless of the fears of some US Republicans and German hard-money fans, hyperinflation is not produced by central bankers. It is the result of a total failure of the political system.

There are a few people who are simultaneously buying gold in expectation of hyperinflation in western economies and stockpiling bullets in anticipation of a calamity for western civilisation. I will give the survivalists this much credit: the scenarios are consistent. But calamity arrives first, and hyperinflation follows.

Also published at ft.com.

How to find a perfect match for a Nobel

‘The Nobel Prize for economics has been awarded to Americans Alvin Roth and Lloyd Shapley for their independent work into how best to bring different parties together for mutual benefit.’

Financial Times, October 16

It’s not really a Nobel Prize, is it?

I’m glad you’ve mentioned that because it’s an important point that is rarely made – except, of course, any time anybody says or types the words “Nobel Prize for economics”, at which point a spontaneous chorus of “there is no Nobel Prize for economics” will break out. No, it is more recent than the original Nobel Prizes, and has a lengthy name. It undoubtedly does not deserve the status of, for instance, the Nobel Prize for literature (never awarded to Leo Tolstoy or James Joyce) or the Nobel Prize for peace (awarded to Henry Kissinger, the EU and most controversially to the Cabbage Patch Kids).

You seem touchy.

Perhaps a bit, yes.

Apparently this week it was awarded to Alvin Roth and some other guy.

I feel a little sorry for Lloyd Shapley, and not just because he should have been awarded the prize back in 1994. He has been somewhat eclipsed by Prof Roth, and this does happen. In 2005, the prize was awarded to Thomas Schelling, a brilliant writer who analysed everything from quitting smoking to winning the cold war, and Robert Aumann, who did something so abstruse and mathematical that nobody knew what it was. Not surprisingly everybody talked about Prof Schelling. It’s similar here: Prof Roth seems an intensely practical economist, charging around writing papers about dwarf-tossing and setting up life-saving kidney transplant operations, while all Prof Shapley produced was some mouldy old maths.

I thought he ran a dating agency, or something?

With the late David Gale, he designed a matching algorithm and used marriage as an example.

You economists are such romantics. You’re making me blush.

The algorithm was not actually designed to arrange marriages. It was to analyse ways to match things up. Imagine a world where husbands and wives were paired up by some automatic, algorithmic process.

They aren’t.

No, but schools and schoolchildren are, for example, as are teaching hospitals and medical students. Prof Shapley and Gale discovered a simple rule that would allow these matches to be made. After the algorithm had done its work, no two people would have an incentive to peel off and do some kind of side deal together, which is important if the algorithm is supposed to function in a free society.

How does the algorithm work?

One simple variant is for each of the men to propose marriage to one of the women. The women must reject all but one, and hold on to the best one temporarily. Each rejected man gets to make a second offer, and again the women get to hold on to the offer they had before, or alternatively reject it and hold a different offer. The point at which no man is rejected is the point at which you can close the market and tie the knot. And the allocation will be stable.

It seems rather traditional. Can’t the women propose instead?

Sure. If they do, it turns out that you will get a different but still stable set of matches, and this set will be one that the women prefer. So this was the theory, and Prof Roth developed the theory and put it into practice.
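A minimal Python sketch of the deferred-acceptance procedure just described, with hypothetical names and preferences; note how which side proposes determines which stable matching emerges:

```python
def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Gale-Shapley: proposers ask in order of preference; each reviewer
    provisionally holds the best offer received so far."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    next_pick = {p: 0 for p in proposer_prefs}   # next preference to try
    free = list(proposer_prefs)
    held = {}                                    # reviewer -> proposer held
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_pick[p]]
        next_pick[p] += 1
        if r not in held:
            held[r] = p                          # hold the first offer
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])                 # reject the old offer
            held[r] = p
        else:
            free.append(p)                       # reject the new offer
    return {p: r for r, p in held.items()}

# Hypothetical preferences in which the two sides disagree:
men = {"Al": ["Xena", "Yve"], "Bob": ["Yve", "Xena"]}
women = {"Xena": ["Bob", "Al"], "Yve": ["Al", "Bob"]}

print(deferred_acceptance(men, women))   # men-proposing matching
print(deferred_acceptance(women, men))   # women-proposing matching
```

Run men-proposing, Al and Bob each get their first choice; run women-proposing, Xena and Yve get theirs. Both outcomes are stable, but each favours the proposing side – exactly the asymmetry described above.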

And it was Prof Roth who set up the dating agency?

He patched up the way in which medical students were matched with their first hospital jobs, which is more complex because some students are couples and want to apply for pairs of jobs in the same city. When Prof Roth looked at the problem, in the mid-1990s, the matching procedure being used was unstable. Offers – or ultimatums – were handed out many months in advance because people preferred their side deals to the centralised system. This is a classic symptom of unstable matches. He fixed it.

And got a Nobel Prize for that?

Yes, along with designing algorithms to match students with schools and kidney donors with recipients. And he deserves it. Prof Roth does three things that are far too rare in economics. First, he really engages with the anthropological reality of how individual markets work, rather than immediately leaping to an abstract model. Second, he recognises ethical or cultural constraints – we don’t like the idea of buying and selling kidneys, for instance. Too many economists have simply argued that the constraints make no sense, shrugged and walked away. Prof Roth accepted them as a fact of life and worked around them.

He’s almost human.

Indeed. And an important point is that many of the markets Prof Roth has designed do not and cannot work perfectly. A lot of theoretical economics is concerned with proving that perfection is possible, or impossible. Prof Roth is interested in answering the pragmatic question “is this good enough?”.

Also published at ft.com.

Lloyd Shapley and Alvin Roth win the Nobel memorial prize in economics

I took a particular interest in the FT leader on the subject:

The award of the Nobel memorial prize in economics to Lloyd Shapley and Alvin Roth is overdue: Mr Shapley should have shared the 1994 prize with John Nash and others, while Mr Roth has been a leading contender in recent years. The choice is a particularly good one because economists have acquired some bad habits, and Mr Roth’s example may serve to break them.
Mr Shapley is one of the key figures in co-operative game theory, which for decades looked both abstract and pointless, a poor relation to regular game theory. However, co-operative game theory is finally coming into its own in a world where computerised auctions are used to award assets or contracts in clusters. Mr Shapley has added a new page to the thick catalogue of useless ideas that turned out to be useful after all.
In contrast, the practical application of Mr Roth’s ideas has never been in doubt. Building on work by Mr Shapley and David Gale, Mr Roth designs algorithms for matching things. Mr Gale and Mr Shapley considered a whimsical problem: how to design a centralised system for allocating husbands and wives in such a way that no possible male-female combination has a mutual desire to elope.
Mr Roth has developed this theory, but he has also put it into practice, allocating medical students to teaching hospitals and children to schools. Along the way he has resolved difficulties such as the fact that some pairs of medical students are married couples and want to live in the same city.
Most famously of all, he is part of a team that has designed kidney exchanges. If a person with kidney failure has a willing donor who is not a biological match, Mr Roth’s kidney exchange finds another pair in a similar situation – or however many pairs are necessary to find everyone a compatible organ.
Mr Roth’s work is clever and useful but he is an example to his fellow economists in other ways. While others argued that it should be legal to buy and sell organs, Mr Roth tried to understand why we find such transactions repugnant, and designed a practical alternative. He has also advocated an engineering approach to economics: rather than simply proving that something can or cannot be done, his ideas on matching ask fuzzier questions such as “will this work in most cases?” or “is this as good as we can get?”
Above all, Mr Roth has understood that if you test theory in a real-world environment, the theory will improve. So may the world itself.

15th of October, 2012

Odds and ends

Review by Tim Harford

An investigation into the art of forecasting argues for a pick-and-mix approach

The Signal and the Noise: The Art and Science of Prediction, by Nate Silver, Allen Lane RRP£25/Penguin Press, RRP$27.95, 544 pages

When it comes to soothsaying, there seem to be two types of person: those who will gladly and glibly opine on anything from the chance of rain tomorrow to the chance of Mitt Romney winning the presidency, and those who think the forecasting game is all but impossible, the exclusive preserve of fools and frauds.

This state of affairs is not good enough, says Nate Silver, a statistician most celebrated for his New York Times blog, FiveThirtyEight, and its forecasts of election results in the US. While prediction is indeed a difficult affair, it is not hopeless. Thoughtful people with serious theories and mathematical nous should get involved, argues Silver in The Signal and the Noise, not only because they have a good chance of raising the bar for forecasters but because prediction is the acid test of their expertise.

Although Silver is a numbers man, and his book is seasoned with graphs, tables and the occasional equation, he advocates a pick-and-mix approach to forecasting. A statistical model may or may not beat expert judgment, and a computer may or may not out-forecast a human, but a judicious combination of approaches will generally outperform any particular method.

What works in forecasting is not always effective when it comes to writing a book. Each chapter picks a different forecasting problem, from terrorism to baseball to climate change. These individual chapters are strong, and some of them are outstanding. The analysis of the subprime crisis is as lucid as any I have read; I was hooked by his account of computer chess; and the explanation of weather forecasting is a revelation.

The whole is less than the sum of the parts, alas. A chapter on poker seems to be there because Silver was once a professional poker player; the chess chapter, while brilliant, seems barely relevant. His pen portraits do not satisfy (do we care about Robert Daum, whose research is briefly cited, now that we know he is “a doctor’s doctor, with a dignified voice, a beard, and a detached sense of humor”?) and at times his efforts to document his extensive research descend into name dropping. “I was told by Tim Berners-Lee,” he writes, of some fact or other. After the first 400 pages of this kind of thing one begins to wish that Silver’s editor had been more assertive.

Despite these frustrations, there is a great deal to admire in the book. It defies easy summary but at its heart is the admonition that we should all think more like Thomas Bayes, an 18th-century minister and mathematician, nonconformist in both roles. Bayes’ theorem, published posthumously, tells us how to combine our pre-existing view of the world with new information in a rational way.

Bayes’ theorem can produce some counterintuitive results. My colleague John Kay once published a pair of columns about the classic game show “Let’s Make A Deal”, in which a grand prize lurks in one of three boxes. The contestant provisionally chooses a box; the host of the show opens a different box to reveal no prize; then the contestant must decide whether to open her chosen box, or to switch at the last minute and open the only alternative. Bayes’ theorem demonstrates quite clearly that the contestant should switch, but Kay’s postbag was testimony to the fact that few people believe this conclusion.
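Kay’s sceptical correspondents could have settled the matter by simulation; a quick Monte Carlo sketch:

```python
import random

# Monte Carlo check of the game-show problem above: switching wins about
# two-thirds of the time, sticking about one-third.
def play(switch, rng):
    prize = rng.randrange(3)
    choice = rng.randrange(3)
    # the host opens a door that is neither the contestant's nor the prize
    opened = next(d for d in range(3) if d != choice and d != prize)
    if switch:
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == prize

rng = random.Random(0)
trials = 100_000
switch_wins = sum(play(True, rng) for _ in range(trials)) / trials
stick_wins = sum(play(False, rng) for _ in range(trials)) / trials
print(f"switch wins {switch_wins:.1%}, stick wins {stick_wins:.1%}")
```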

Silver explains Bayes’ theorem with a dark example: the attacks on the World Trade Center. When the first plane hit the tower, horrified observers instinctively updated the possibility of a terrorist attack that day from “barely thinkable” to “distinctly possible”, although at that stage an accident could not be discounted. Bayes’ theorem shows that when the second plane hit, the chance of terrorism could be updated again, from “distinctly possible” to “all but certain”.

There is no need for a mathematical analysis to tell us that, but Silver argues convincingly that Bayes’ theorem is an important reality check on our efforts to forecast the future. How, for instance, should we reconcile a large body of theory and evidence predicting global warming with the fact that there has been no warming trend over the last decade or so? Sceptics react with glee, while true believers dismiss the new information.

A better response is to use Bayes’ theorem: the lack of recent warming is evidence against recent global warming predictions, but it is weak evidence. This is because there is enough variability in global temperatures to make such an outcome unsurprising. The new information should reduce our confidence in our models of global warming – but only a little.
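That reasoning can be put into numbers; a sketch with purely illustrative likelihoods (none of these figures come from climate science):

```python
# Purely illustrative: start 90 per cent confident in the warming models,
# and suppose a flat decade is twice as likely if the models are wrong
# (0.4) as if they are right (0.2).
prior = 0.90
p_flat_if_right = 0.2
p_flat_if_wrong = 0.4
posterior = (prior * p_flat_if_right) / (
    prior * p_flat_if_right + (1 - prior) * p_flat_if_wrong)
print(f"confidence falls from {prior:.2f} to {posterior:.2f}")
```

Even evidence that favours the sceptics two to one moves confidence only from 0.90 to about 0.82 – a small update, just as the argument above suggests.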

The same approach can be used in anything from an economic forecast to a hand of poker, and while Bayes’ theorem can be a formal affair, Bayesian reasoning also works as a rule of thumb. We tend to either dismiss new evidence, or embrace it as though nothing else matters. Bayesians try to weigh both the old hypothesis and the new evidence in a sensible way. This is good advice, and less technical than it might sound.

Despite its flaws, The Signal and the Noise is a book worth reading. It says something new in a crowded field, it is fun to read, and it’s full of facts you will remember. There is some noise here, but Silver has also produced a signal that is a pleasure to follow.

Also published at ft.com.

13th of October, 2012

There are many ways to price by gender

‘From December 21 . . .  insurers must apply an EU-wide ban on the use of gender to price products, such as motor insurance and annuities.’

Financial Times

He: And not before time. It’s outrageous that I have to pay more for my car insurance than you do. I’m a perfectly safe driver.
She: Of course you are, dear. But you also drive a lot more than I do, which is not unusual for men. Since you drive more miles you are exposing yourself to the risk of more accidents.
He: Am I? Oh.
She: This is one of the reasons that men have more accidents than women. Another, of course, is that some young men are aggressive, overconfident idiots. But in any case you should probably put the money you save into your pension pot because you’re going to need it when you get stuck with the low annuity rates we women have had to put up with.
He: But my life expectancy is shorter. I deserve much higher annuity rates. That’s outrageous.
She: So you’re outraged that discrimination against you hasn’t ended earlier, and equally outraged that discrimination in your favour isn’t going to continue for ever?
He: Hmph. I read Lex when this gender-neutral insurance idea was first floated. Lex said it was “philosophically ignorant”.
She: I hadn’t realised Lex was such a philosopher. But the ruling does raise some interesting questions about the nature of what makes discrimination so objectionable.
He: It does? I mean . . . yes, it does!
She: For example, imagine that black customers were charged more in restaurants. A disgraceful idea, I’m sure you’ll agree.
He: Absolutely. Disgraceful.
She: But why is it disgraceful? Is it because black people are perfectly good restaurant customers and don’t deserve to be charged more?
He: Yes. That’s it.
She: Or is the problem that it’s just wrong to lump people into a category such as “male” or “black”? That people should be treated as individuals rather than defined by their membership of some group?
He: Er, yes – you’ve hit the nail on the head there.
She: Or are we just intellectually lazy, culturally conditioned to break out in a cold sweat if somebody talks about racial discrimination, but complacent about discrimination against men or women?
He: Um, I think . . .
She: Because that brilliant philosopher Lex seemed to think that gender discrimination was fine but racial discrimination was “plainly unfair”.
He: [Opens mouth, shuts it again.]
She: It will be interesting to see what happens to insurance pricing, though.
He: It’s obvious what will happen. Women will pay more for car insurance but men will pay no less; men will get smaller annuities but women won’t benefit.
She: That’s possible, but that would mean corporate profit margins going through the roof. I expect insurers would be keen to grab market share under those circumstances, which will mean premiums falling again. I’d guess that the new gender-neutral pricing will settle near the average of the old rates. That is, until the behavioural response sets in.
He: What behavioural response?
She: It will be a lot cheaper for young men to get car insurance, so expect to see more boy racers on the road. And women may well be more tempted to get their own pensions.
He: But in the end, we’ll all get used to this brave new egalitarian world.
She: We might not get too comfortable. Insurers will start looking at other correlates of risk. The obvious one is how far people drive: men tend to drive more than women. Then there are issues such as the choice of a sports car rather than a people carrier. Such distinctions may carry more weight in determining your premium than they do now. As for annuities, if they can’t pay any attention to your sex they might start paying more attention to your cholesterol.
He: I can see that this might get very intrusive.
She: It might. Or it might get very clumsy. Mortgage lenders used to be accused of using geography as a way of discriminating against minorities in the US, since ethnicity and postcode can be closely correlated. There are modern analogies: since women are on average smaller than men, perhaps in the future premiums will be proportionate to height. Stranger things have happened.

Also published at ft.com.

So many numbers, so little time

The world’s complexity is a symptom of economic success, but it can pose serious risks

The world is a complicated place. When the design student Thomas Thwaites decided to reverse-engineer a toaster, he discovered that it comprised 400 components; when Eric Beinhocker, then of the McKinsey Global Institute, tried to estimate the number of products and services available in a big urban economy such as London, he put the figure in the tens of billions; the Bank for International Settlements reported that at the start of the credit crunch in 2007, the face value of outstanding derivative contracts was more than one quadrillion dollars.

This complexity is a symptom of economic success. But it can pose serious risks, especially when dealing with “tightly coupled” systems, from a web of financial contracts to a nuclear reactor.

We rely on regulators to keep us safe in the face of this complexity, but regulations have themselves become more complex. The original US constitution was less than 5,000 words long; the Acquis Communautaire, the body of EU law to which new countries must sign up, is about 35 million words in English. This difference surely reflects the gap between the 18th century and the 21st more than any peculiarly Eurocratic love of the baroque. After all, the famous Glass-Steagall Act of 1933 was 37 pages long but the recent Dodd-Frank Act, also designed as a response to a great financial crisis, is 848 pages long despite delegating many details to regulators. Andy Haldane of the Bank of England estimated that the eventual Dodd-Frank rules would top out at about 30,000 pages.

This looks like bureaucracy gone mad, yet it also looks inevitable given the complexity of the economy. But perhaps it is not.

First, economic complexity may not be causing the regulatory complexity: in the financial industry, the causation often runs the other way. The first credit default swap, for example, was designed in response to regulations on minimum levels of bank risk capital. The boom in elaborately repackaged sub-prime mortgages was fuelled by the fact that the repackaged products were risky – and thus promised higher returns – but ticked the regulatory box that said “safe”. Complex rules invite complex rule-bending.

Second, complex rules may be a very poor response to complex situations. This is an argument made both by Haldane and by Andrew Zolli and Ann Marie Healy in their recent book, Resilience, though they draw on different traditions.

Haldane cites work on decision-making by the psychologist Gerd Gigerenzer. Gigerenzer has found that in many different settings, simple rules of thumb outperform complex statistical rules unless huge amounts of data are available. With smaller samples, the clever number-crunching "over-fits", finding apparently meaningful patterns in what is, in fact, random noise.
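
The flavour of Gigerenzer's finding can be seen in a toy example (the numbers and models here are illustrative, not his). A complex model flexible enough to fit a small noisy sample perfectly tends to do worse on fresh data than a crude rule of thumb:

```python
import random

random.seed(0)

SIGNAL = 10.0  # the underlying "truth" is just a constant

# A small, noisy training sample: the data-poor setting Gigerenzer describes
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
train = [(x, SIGNAL + random.gauss(0, 1)) for x in xs]

def simple_rule(_x):
    """Rule of thumb: ignore x and predict the training mean."""
    return sum(y for _, y in train) / len(train)

def complex_model(x):
    """Lagrange interpolation: a degree-4 polynomial that fits every
    training point exactly, noise included."""
    total = 0.0
    for i, (xi, yi) in enumerate(train):
        term = yi
        for j, (xj, _) in enumerate(train):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Fresh draws from the same process, to judge out-of-sample accuracy
test = [(x, SIGNAL + random.gauss(0, 1)) for x in [1.5, 2.5, 3.5, 4.5]]

def mse(model):
    """Mean squared error on the held-out test points."""
    return sum((model(x) - y) ** 2 for x, y in test) / len(test)

print(f"simple rule, out-of-sample error:   {mse(simple_rule):.3f}")
print(f"complex model, out-of-sample error: {mse(complex_model):.3f}")
```

The complex model achieves a perfect fit in-sample; the interesting comparison is the out-of-sample error, where the patterns it has memorised are pure noise.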

This may be true in financial regulation, too. Haldane looks at the largest 100 or so banks before the crisis and finds that, with hindsight, simple rules of thumb such as "highly leveraged firms are in danger" were ten times better at predicting a future bail-out than the sophisticated risk-weighting systems the regulators were actually using, although if simpler regulations were introduced, this pattern might disappear.

Zolli, meanwhile, turns to systems theory to argue that thickets of regulations and safety measures nudge us towards “robust-yet-fragile” systems, which are extremely safe in the face of predictable risks, yet crumble completely when something unexpected comes along.

Haldane and Zolli both say it is time for regulators to aim for a simpler financial system as a goal worth pursuing in its own right. And both recognise that if we want simpler banks, simpler bank regulations would be a promising place to start.

Also published at ft.com.

Where maths ends, computers begin

Machines have finally made their mark on economic theory with their use in agent-based modelling and simulations

Computers have transformed economic analysis. Data can be analysed in ways that would have astonished earlier generations of economists. But computers have made less of an impact on economic theory. The typical economic model describes a small number of decision makers whose thought processes (which may or may not be rational) can be boiled down to solving a fairly simple piece of maths.

Macroeconomics is similar, with each decision maker billed as a “representative agent”. The behaviour of all consumers can be summarised by figuring out what a typical consumer would do. Some objections to this approach are obvious, but it has not been easy to find an alternative. In recent years, though, computers have begun to change that, and make their mark on economic theory at last.

Perhaps the most ambitious use of computers is in agent-based modelling. Rather than letting one agent represent all consumers, you create a computer model with lots of agents. Computers aren’t strictly necessary: the most famous agent-based model, created by the great Thomas Schelling, used coins and a chessboard. But the model was highly stylised, with about 40 agents. Modern computers could in principle model every single person in the economy.
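
The flavour of Schelling's chessboard model can be captured in a short sketch (the grid size, mix of agents and tolerance threshold below are illustrative choices, not Schelling's exact setup). Agents of two types relocate whenever too few of their neighbours are like them, and even a mild individual preference tends to produce stark large-scale sorting:

```python
import random

random.seed(1)

SIZE = 10          # a 10x10 board, roughly "coins on a chessboard" scale
THRESHOLD = 0.3    # an agent is unhappy if under 30% of neighbours match it

# Two types of agent plus some empty cells, placed at random
cells = ['A'] * 40 + ['B'] * 40 + [None] * 20
random.shuffle(cells)
grid = {(r, c): cells[r * SIZE + c] for r in range(SIZE) for c in range(SIZE)}

def neighbours(r, c):
    """The eight surrounding cells, wrapping round the edges (a torus)."""
    return [grid[(r + dr) % SIZE, (c + dc) % SIZE]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def same_share(r, c):
    """Share of an agent's occupied neighbouring cells that match its type."""
    occupied = [n for n in neighbours(r, c) if n is not None]
    if not occupied:
        return 1.0
    return sum(n == grid[r, c] for n in occupied) / len(occupied)

def avg_similarity():
    spots = [p for p, v in grid.items() if v is not None]
    return sum(same_share(*p) for p in spots) / len(spots)

before = avg_similarity()
for _ in range(50):                       # rounds of relocation
    for pos in list(grid):
        agent = grid[pos]                 # re-read: earlier moves may have changed it
        if agent is not None and same_share(*pos) < THRESHOLD:
            empties = [p for p, v in grid.items() if v is None]
            dest = random.choice(empties)  # unhappy agents move somewhere empty
            grid[dest], grid[pos] = agent, None
after = avg_similarity()

print(f"average same-type neighbour share: {before:.2f} -> {after:.2f}")
```

The striking point, which carries over from Schelling's coins, is that no agent wants segregation; each merely objects to being heavily outnumbered locally, yet the board ends up far more sorted than anyone intended.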

Econo-physicist Doyne Farmer, computer scientist Robert Axtell, macroeconomist Peter Howitt and microeconomist John Geanakoplos have been trying to create such a model. “We’re trying to get a simulation of the economy that’s faithful to the economy,” says Farmer. “Where you see macroeconomics emerging from the microscopic interactions of individuals.”

The early fruits of this project include a model of the housing bubble in Washington DC. The advantage of studying housing is that a good deal of information is publicly available both about the price history of each house, and about the characteristics of the people doing the buying and selling. The agent-based model can thus be carefully calibrated. The conclusion: the bubble wasn’t driven by low interest rates but by increasing loan-to-value ratios – an important finding for central banks looking to prevent future bubbles.

A less flashy use of computers is to run simulations to estimate an individual's best course of action in an uncertain world. Consider this question: if you opened your mail this morning to discover that someone had sent a cheque for £10,000 – a tax refund, perhaps – how quickly should you spend it? The problem can't be solved mathematically in closed form, because so much depends on unknowns such as future income. Yet we need to know how people will behave, because the relationship between consumption and different sorts of income is fundamental to understanding how the economy works.

Keynesian ideas suggest people will spend such windfalls quickly. Milton Friedman tackled the question in 1957 and developed the idea of "permanent income": people smooth their consumption, spreading a windfall over many years rather than spending it at once. But if you use traditional mathematical methods to model Friedman's ideas, you'll conclude that individuals will spend only 5 per cent or so of the windfall, while Friedman reckoned it was more like a third.

The difference lies in dealing with the uncertainties of life. The economist Christopher Carroll has found that computer simulations can encompass this uncertainty and produce answers very similar to Friedman's educated guess.
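
A minimal sketch of the kind of computation involved (this is not Carroll's model; the discount factor, interest rate and income draws below are illustrative assumptions). It solves a small consumption problem by backward induction, with income uncertain each period, and then measures how much of a one-unit windfall the simulated consumer spends straight away:

```python
import bisect
import math

BETA, R = 0.96, 1.02     # discount factor and gross interest rate (assumed)
T = 25                   # remaining periods of life
INCOMES = [0.5, 1.5]     # equally likely income draws: the uncertainty
GRID = [0.2 * i for i in range(1, 151)]   # cash-on-hand grid, 0.2 to 30

def interp(vals, m):
    """Linear interpolation of a function tabulated on GRID."""
    m = min(max(m, GRID[0]), GRID[-1])
    i = max(bisect.bisect_left(GRID, m), 1)
    x0, x1 = GRID[i - 1], GRID[i]
    return vals[i - 1] + (vals[i] - vals[i - 1]) * (m - x0) / (x1 - x0)

# Final period of life: consume all remaining cash on hand
V = [math.log(m) for m in GRID]    # log utility of consuming everything
policy = list(GRID)

for _ in range(T - 1):             # backward induction through earlier periods
    newV, newpol = [], []
    for m in GRID:
        best, best_c = -1e18, m
        for j in range(1, 41):     # grid search over consumption choices
            c = m * j / 40
            # expected value of next period's cash on hand
            ev = sum(interp(V, R * (m - c) + y) for y in INCOMES) / len(INCOMES)
            val = math.log(c) + BETA * ev
            if val > best:
                best, best_c = val, c
        newV.append(best)
        newpol.append(best_c)
    V, policy = newV, newpol

# Extra first-period spending out of a one-unit windfall,
# for a consumer who starts with two units of cash on hand
mpc = interp(policy, 3.0) - interp(policy, 2.0)
print(f"simulated spend-out of a windfall: {mpc:.2f}")
```

Nothing here can be solved with pencil and paper once income is uncertain; the computer simply works out, period by period, what a prudent consumer would do, which is exactly the gap such simulations fill.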

This is promising yet awkward for economics. A computer simulation cannot be checked in the way an analytical result can, and unless traditional methods are completely superseded, there is bound to be an uncomfortable gap between where the optimising mathematics ends and the computer simulation begins. The future of economics may depend on finding ways to bridge that gap.

Also published at ft.com.


