‘A “big data” approach to inflation is helping us understand the fundamental question of why recessions happen’
In the dying days of 2015 came news to set any geek’s pulse racing: the declaration of a “statistical emergency” by Mauricio Macri, the new president of Argentina. Macri’s move enabled Jorge Todesca, head of the statistics bureau, to suspend publication of some basic economic data. That might seem extreme but Argentina’s inflation numbers were widely discredited.
The International Monetary Fund censured Argentina in 2013 for its implausible numbers under previous president Cristina Fernández de Kirchner. Government statisticians say they were leaned on by her administration to report low inflation. Todesca himself used to be a private-sector economist, and, in 2011, his firm was fined half a million pesos for publishing numbers that contradicted the official version. (Half a million pesos was about $125,000 at the time; it is $35,000 these days, which rather proves the point.)
But one economist found a way to publish plausible inflation statistics without being prosecuted. His name is Alberto Cavallo, and he realised that by gathering price data published by online retailers, he could produce a credible estimate of Argentine inflation from the safety of Massachusetts. Cavallo’s estimate averaged more than 20 per cent a year between 2007 and 2011; the official figure was 8 per cent.
So began the Billion Prices Project and its commercial arm PriceStats, both collaborations between Cavallo and fellow MIT economics professor Roberto Rigobon. “Billion Prices” sounds hyperbolic but that is the number of prices collected each week by the project, from hundreds of retailers in more than 60 countries.
While the project confirmed that Argentina’s inflation numbers could not be trusted, it also showed that the inflation numbers published by the US Bureau of Labor Statistics could be. Several maverick commentators had argued that hyperinflation would be the inevitable consequence of money printing at the Federal Reserve. When hyperinflation plainly failed to materialise, some critics suggested the BLS was hiding it — as if nobody would notice.
A second advantage, swiftly noted, was that the daily flow of data from PriceStats was a good predictor of official inflation statistics, which are typically published once a month. Cavallo and Rigobon like to point out that their US online price index started to fall the day after Lehman Brothers declared bankruptcy; the official Consumer Price Index took a month to respond at all, and two months to respond fully.
The BPP is also shedding light on some old economic mysteries. One is the problem of adjusting inflation for changes in quality. To some extent this is an intractable problem. The Edison phonograph cost $20 at the end of the 19th century; an iPod Nano costs about $145 today. What inflation rate does that imply over the past 117 years? There is simply no good answer to that question.
But statistical agencies are always wrestling with smaller slices of the same problem. A new model of washing machine is introduced at a premium price, gradually discounted over the years and eventually sold at clearance prices and replaced with a swankier model. The same thing is happening over differing timescales with computers, summer dresses and cars. If the economic statisticians mishandle these cases, they will get their measure of inflation badly wrong; usually they rely on careful substitution and clever theory, but success is never assured.
Cavallo and Rigobon argue that the sheer volume of prices collected by the BPP helps resolve the problem. Every day, the project gathers the prices of hundreds of washing machines. By observing that the availability of the Scrub-O-Mat 9000 overlaps with that of the Cleanado XYZ, it’s possible to adjust as new products are introduced and old products discounted and then phased out.
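The overlap idea can be sketched in a few lines of code. This is only an illustrative toy, not the BPP’s actual methodology, and the product names and prices below are the column’s invented examples plus made-up numbers: the trick is that, during the months when both machines are on sale, the index follows price changes within each model rather than comparing the new model’s launch price with the old model’s clearance price — so the price gap between them is treated as a quality difference, not inflation.

```python
# Toy illustration of overlap-based quality adjustment.
# Prices are invented; None means the product was not yet on sale.
old_model = [500, 480, 460, 440]    # Scrub-O-Mat 9000, being phased out
new_model = [None, None, 600, 590]  # Cleanado XYZ, newly introduced

def chained_inflation(old, new):
    """Chain month-on-month price relatives, switching from the old
    model to the new one only once both prices are observed, so the
    old-to-new price jump never enters the index."""
    factor = 1.0
    for t in range(1, len(old)):
        if new[t] is not None and new[t - 1] is not None:
            factor *= new[t] / new[t - 1]  # overlap: follow the new model
        else:
            factor *= old[t] / old[t - 1]  # otherwise follow the old model
    return factor - 1.0

rate = chained_inflation(old_model, new_model)
print(f"quality-adjusted price change: {rate:.1%}")  # → -9.5%
```

Naively comparing the Cleanado’s 590 with the Scrub-O-Mat’s 500 would register a 18 per cent price rise; chaining through the overlap instead records the discounting that actually happened.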
This “big data” approach to inflation is also helping us to understand the fundamental question of why recessions happen. Without opening a big bag of macroeconomics at this stage in the column, one influential school of thought is that recessions happen (in part) because prices don’t adjust smoothly in the face of a slowdown. Like a small rock that starts an avalanche, this price rigidity causes big trouble. Unsold inventory builds up, retailers slash their orders, and manufacturers go bankrupt.
The trouble with the idea that price stickiness causes recessions is that, according to official inflation statistics, prices routinely change, by amounts large and small — which suggests there is little rigidity at all.
But it turns out that many small price changes are statistical illusions. For example, if a product is missing from four monthly inflation surveys and is 1 per cent more expensive when it returns in the fifth month, official statisticians will quite rightly smooth over the gap by imputing a 0.2 per cent rise per month. But it would be a mistake to take this as evidence that retailers did, in fact, repeatedly raise prices by 0.2 per cent. Collecting billions of prices removes the need to fill in these gaps, and in the BPP data very small price changes are rare: prices move by several per cent if they move at all. One might guess that in physical stores, where the cost of relabelling products is higher, small price changes are rarer still.
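The arithmetic behind that illusion is simple enough to write down. A hedged sketch, using invented numbers and a straightforward geometric interpolation (agencies’ actual imputation methods vary): a product vanishes for four surveys and returns 1 per cent dearer, so smoothing imputes roughly 0.2 per cent per month, even though the retailer most likely made a single 1 per cent change.

```python
# Toy example: how imputation over a survey gap manufactures
# small price changes that never happened. Numbers are invented.
start_price = 100.00       # last observed price
end_price = 101.00         # price when the product reappears, 1% higher
gap_months = 5             # five monthly steps between the two observations

# Geometric interpolation spreads the 1% rise evenly across the gap...
imputed_monthly = (end_price / start_price) ** (1 / gap_months) - 1
print(f"imputed monthly change: {imputed_monthly:.2%}")  # → 0.20%

# ...but the retailer's actual behaviour was plausibly one jump:
actual_changes = [0.0, 0.0, 0.0, 0.0, 0.01]  # four months of nothing, then 1%
```

The imputed series shows five tiny changes; the plausible reality is four months of rigidity and one visible change — exactly the pattern the BPP’s gap-free data reveals.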
The BPP’s big data approach has rescued the important macroeconomic idea of price stickiness. It is a reminder that we often gain from having a second opinion — or a billion of them.
Written for and first published at ft.com.