Machines have finally made their mark on economic theory, through agent-based modelling and simulation
Computers have transformed economic analysis. Data can be analysed in ways that would have astonished earlier generations of economists. But computers have made less of an impact on economic theory. The typical economic model describes a small number of decision makers whose thought processes (which may or may not be rational) can be boiled down to solving a fairly simple piece of maths.
Macroeconomics is similar, with whole classes of decision makers rolled into a single “representative agent”: the behaviour of all consumers is summarised by working out what a typical consumer would do. Some objections to this approach are obvious, but it has not been easy to find an alternative. In recent years, though, computers have begun to change that and to make their mark on economic theory at last.
Perhaps the most ambitious use of computers is in agent-based modelling. Rather than letting one agent represent all consumers, you create a computer model with lots of agents. Computers aren’t strictly necessary: the most famous agent-based model, created by the great Thomas Schelling, used coins and a chessboard. But the model was highly stylised, with about 40 agents. Modern computers could in principle model every single person in the economy.
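To see what an agent-based model looks like in practice, here is a minimal sketch of a Schelling-style segregation model in Python. It is not Schelling’s chessboard original, and the grid size, vacancy rate, tolerance threshold and random-relocation rule are all illustrative assumptions: two types of agent sit on a grid, and any agent with too few like neighbours moves to a randomly chosen empty cell.

```python
# Toy Schelling-style segregation model (illustrative parameters, not Schelling's original).
import random

SIZE, VACANCY, TOLERANCE = 20, 0.2, 0.4   # grid width, share of empty cells, minimum share of like neighbours

def neighbours(grid, x, y):
    """Return the occupants of the eight surrounding cells (wrapping at the edges)."""
    cells = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                occupant = grid[(x + dx) % SIZE][(y + dy) % SIZE]
                if occupant is not None:
                    cells.append(occupant)
    return cells

def unhappy(grid, x, y):
    """An agent is unhappy if fewer than TOLERANCE of its neighbours share its type."""
    nbrs = neighbours(grid, x, y)
    return bool(nbrs) and sum(n == grid[x][y] for n in nbrs) / len(nbrs) < TOLERANCE

# Scatter two types of agent (0 and 1) across the grid, leaving some cells empty.
grid = [[random.choice([0, 1]) if random.random() > VACANCY else None
         for _ in range(SIZE)] for _ in range(SIZE)]

for _ in range(50):  # rounds of moving
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
              if grid[x][y] is not None and unhappy(grid, x, y)]
    if not movers:
        break
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    for x, y in movers:
        ex, ey = random.choice(empties)
        grid[ex][ey], grid[x][y] = grid[x][y], None   # move the agent to the empty cell
        empties.remove((ex, ey))
        empties.append((x, y))

# Measure how sorted the grid has become.
shares = [sum(n == grid[x][y] for n in neighbours(grid, x, y)) / len(neighbours(grid, x, y))
          for x in range(SIZE) for y in range(SIZE)
          if grid[x][y] is not None and neighbours(grid, x, y)]
print(f"average share of like neighbours after sorting: {sum(shares) / len(shares):.2f}")
```

Run it a few times and the average share of like neighbours typically ends up well above the 40 per cent that any individual agent insisted on, which was Schelling’s point: mild individual preferences can produce stark aggregate segregation.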
Econo-physicist Doyne Farmer, computer scientist Robert Axtell, macroeconomist Peter Howitt and microeconomist John Geanakoplos have been trying to create such a model. “We’re trying to get a simulation of the economy that’s faithful to the economy,” says Farmer. “Where you see macroeconomics emerging from the microscopic interactions of individuals.”
The early fruits of this project include a model of the housing bubble in Washington DC. The advantage of studying housing is that a good deal of information is publicly available both about the price history of each house, and about the characteristics of the people doing the buying and selling. The agent-based model can thus be carefully calibrated. The conclusion: the bubble wasn’t driven by low interest rates but by increasing loan-to-value ratios – an important finding for central banks looking to prevent future bubbles.
A less flashy use of computers is to run simulations to estimate an individual’s best course of action in an uncertain world. Consider this question: if you opened your mail this morning to discover that someone had sent a cheque for £10,000 – a tax refund, perhaps – how quickly should you spend it? The question can’t be solved with pencil-and-paper mathematics because so much depends on unknowns, such as future income. Yet we need to know how people will behave, because the relationship between consumption and different sorts of income is fundamental to understanding how the economy works.
Keynesian ideas suggest people will spend such windfalls quickly. Milton Friedman tackled the question in 1957 with his idea of “permanent income”: people base their spending on their long-run income prospects, smoothing windfalls out over time. But if you use traditional mathematical methods to model Friedman’s ideas, you’ll conclude that individuals will spend only 5 per cent or so of the windfall, while Friedman reckoned it was more like a third.
The difference lies in dealing with the uncertainties of life. The economist Christopher Carroll has found that computer simulations can capture this uncertainty and produce answers very similar to Friedman’s educated guess.
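To give a flavour of what such a simulation involves, here is a toy consumption problem in Python. It is emphatically not Carroll’s model: the risk aversion, income distribution, horizon and other parameters are assumptions chosen purely for illustration. A consumer facing uncertain income picks consumption by backward induction over a grid of wealth levels, and the marginal propensity to consume out of a windfall is then read off the resulting policy.

```python
# Toy consumption-under-uncertainty problem (illustrative assumptions, not Carroll's model).
import numpy as np

beta, R, rho = 0.96, 1.03, 2.0            # discount factor, gross interest rate, risk aversion
incomes = np.array([0.7, 1.0, 1.3])       # possible income draws each period
probs = np.array([0.25, 0.5, 0.25])       # probabilities of those draws
T = 60                                    # planning horizon in periods
m_grid = np.linspace(0.1, 10.0, 200)      # grid of cash-on-hand levels

def u(c):
    """CRRA utility."""
    return c ** (1 - rho) / (1 - rho)

# In the final period the consumer simply spends everything.
c_policy, value = m_grid.copy(), u(m_grid)

# Backward induction: at each wealth level, search over consumption choices.
for _ in range(T - 1):
    new_policy, new_value = np.empty_like(m_grid), np.empty_like(m_grid)
    for i, m in enumerate(m_grid):
        c_choices = np.linspace(1e-3, m, 100)         # no borrowing allowed
        expected = np.zeros_like(c_choices)
        for y, p in zip(incomes, probs):
            m_next = R * (m - c_choices) + y          # next period's cash on hand
            expected += p * np.interp(m_next, m_grid, value)  # values beyond the grid are clamped
        total = u(c_choices) + beta * expected
        best = np.argmax(total)
        new_policy[i], new_value[i] = c_choices[best], total[best]
    c_policy, value = new_policy, new_value

# Marginal propensity to consume out of a one-unit windfall, at a modest level of wealth.
m0, windfall = 2.0, 1.0
mpc = (np.interp(m0 + windfall, m_grid, c_policy) - np.interp(m0, m_grid, c_policy)) / windfall
print(f"simulated share of the windfall spent this period: {mpc:.2f}")
```

The point is not the particular number it prints, but that the income uncertainty which defeats the pencil-and-paper algebra can be handled numerically.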
This is promising yet awkward for economics. Computer simulations cannot be checked by working through the maths by hand, and unless traditional methods are completely superseded, there is bound to be an uncomfortable gap between where the optimising mathematics ends and the computer simulations begin. The future of economics may depend on finding ways to bridge that gap.
Also published at ft.com.