‘What else might influence portfolio returns? There is literally no limit to the number of variables’
Discomfiting news: most of those financial strategies that claim to beat the market don’t. Even more surprising, many of the financial research papers that claim to have found patterns in financial markets haven’t.
Don’t take my word for it: this is the conclusion of three US-based academics, Campbell Harvey, Yan Liu and Heqing Zhu. What is particularly striking about the way they’ve lobbed a hand grenade into the finance research literature is that Campbell Harvey isn’t some heterodox radical. He’s the former editor of the leading journal in the field, The Journal of Finance.
What’s going on?
Much financial research attempts to figure out what explains the investment returns on financial portfolios. At a first pass, the answer is that prices follow a random walk, so returns are essentially unpredictable. This insight is a century old and we owe it to the mathematician Louis Bachelier. The basic reasoning is that any successful forecast of price movements would be self-defeating: if it were obvious that the price would rise tomorrow, then the price would rise today instead. Therefore, there can be no successful forecast of price movements.
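In symbols – a minimal sketch of the idea rather than anything from Bachelier himself – the random-walk view says that the best forecast of tomorrow's price is simply today's price:

```latex
P_{t+1} = P_t + \varepsilon_{t+1},
\qquad
\mathbb{E}\!\left[\varepsilon_{t+1} \mid \mathcal{I}_t\right] = 0
```

where \mathcal{I}_t stands for everything known at time t. Any predictable part of tomorrow's move would already have been traded on today, leaving only the unpredictable noise \varepsilon.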
A second pass at the problem, courtesy of several researchers in the 1960s, gives us the capital asset pricing model: riskier portfolios will probably offer higher returns. And it seems that they do.
Then, in 1992, Eugene Fama (more recently a Nobel Memorial Prize winner) and Kenneth French found that the returns on a portfolio of shares were explained by three factors: exposure to the market as a whole, exposure to small company stocks, and exposure to “value stocks”.
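In its usual textbook form – a standard rendering, not taken from the column itself – the three-factor regression for a portfolio's excess return looks like this:

```latex
R_i - R_f = \alpha_i
          + \beta_i \,(R_m - R_f)
          + s_i \,\mathrm{SMB}
          + h_i \,\mathrm{HML}
          + \varepsilon_i
```

Here R_m − R_f is the market's excess return, SMB ("small minus big") is the return gap between small and large companies, and HML ("high minus low") is the gap between value stocks and growth stocks.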
This is progress of sorts – but it’s also a can of worms. What else might influence portfolio returns? There is literally no limit to the number of different variables that could be examined, because variables can always be transformed or combined with each other, for instance as ratios or rates of change.
In principle, economic logic might limit the number of combinations to be examined – but in practice both academics and quantitatively minded investment managers have been known to throw in all sorts of possibilities just to see what happens. Why not, for example, use the cube of the market capitalisation of the shares? There’s no economic logic behind that variable – at least, none that I can see – but that hasn’t stopped the quants stirring such things into the mix.
The issue here is what we might call the “jelly bean problem”, after a cartoon by nerd hero Randall Munroe. The cartoon shows scientists testing whether jelly beans cause acne, applying a commonly used statistical test. The test is to assume that jelly beans don’t cause acne, then rethink that assumption if the observed correlation between jelly beans and acne has less than a 5 per cent probability of occurring by chance. The scientists test purple, brown, pink, blue, teal, salmon, red, turquoise, magenta, yellow, grey, tan, cyan, green, mauve, beige, lilac, black, peach and orange jelly beans. It turns out that the green ones are correlated with acne!
This is, of course, no way to perform a statistical analysis. If 20 statistical patterns are analysed and there's no genuine causal relationship behind any of them, we'd still expect one of them to look strikingly correlated. (How strikingly? Striking enough that it would show up by chance only one time in 20 – odds of about 19-1 against.)
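To put rough numbers on the cartoon: if each of the 20 colours is tested at the conventional 5 per cent level, and the tests are treated as independent, the chance of at least one spurious "discovery" is

```latex
1 - 0.95^{20} \approx 0.64,
\qquad
\text{expected false positives} = 20 \times 0.05 = 1
```

In other words, the lone green jelly bean is not a surprise; it is more or less what chance alone would deliver.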
The finance literature has looked at far more than 20 possibilities. Harvey, Liu and Zhu scrutinise 316 different factors that have been explored by a selection of reputable research studies, of which 296 are statistically significant by conventional standards. And that is before counting the factors examined only in minor journals, or never published at all because the results were too boring.
For example, a paper might try to explain stock market returns as a function of media coverage of companies; of corporate debt; of momentum in previous returns; or of the volume of trades.
With 316 factors – and probably many more – under investigation, using a 5 per cent significance standard is absurd. Harvey and his colleagues suggest that after trying to correct for the jelly bean problem (more technically known as the multiple-comparisons problem), more than half the 296 statistically significant variables might have to be discarded. They suggest higher and more discerning statistical hurdles in future, not to mention a more explicit role for variables with some theory behind them, rather than variables that have happened to stick after the entire statistical fruit salad has been hurled at the wall.
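To see what a "higher hurdle" might look like in practice, here is a sketch of the bluntest possible correction, a Bonferroni adjustment, which simply divides the 5 per cent threshold by the number of tests. (This illustrates the general idea, not the specific procedures Harvey, Liu and Zhu apply.)

```python
from scipy.stats import norm

n_tests = 316            # factors examined in the published literature
alpha = 0.05             # conventional significance level

# Bonferroni: require each individual test to clear alpha / n_tests,
# so the chance of even one false positive across all tests stays below alpha.
alpha_adjusted = alpha / n_tests

# Two-sided hurdles expressed as z/t-statistics (normal approximation).
hurdle_conventional = norm.ppf(1 - alpha / 2)         # ~1.96
hurdle_bonferroni = norm.ppf(1 - alpha_adjusted / 2)  # ~3.8

print(f"Conventional hurdle: {hurdle_conventional:.2f}")
print(f"Bonferroni hurdle for {n_tests} tests: {hurdle_bonferroni:.2f}")
```

With 316 tests, the conventional hurdle of roughly 2 rises to nearly 4; Harvey and his colleagues, using less blunt methods, argue that a newly proposed factor ought to clear a t-statistic of roughly 3.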
None of this should astonish us. In 2005 an epidemiologist called John Ioannidis published a research paper that has become famous. It has the self-explanatory title "Why Most Published Research Findings Are False". The reason is partly the multiple-comparisons problem, and partly publication bias: a tendency on the part of researchers and journal editors alike to publish surprising findings and leave dull ones to languish in desk drawers.
Harvey and his colleagues have shown that the Ioannidis critique applies in the finance research literature too. No doubt it applies far more strongly in the advertisements we're shown for financial products. We should always be on the lookout for intriguing patterns in the data. But if we're not careful, our analysis will produce plenty of flukes. And in finance, flukes are just as marketable as the truth.
Written for and first published at ft.com.