The search is on for better ways to measure systemic risk
I don’t propose to predict the next twist in the euro crisis; indeed, given the delay between my writing these magazine columns and your reading them, I’m not even going to hazard a guess as to what has recently happened. We’ve been contemplating a major systemic event: a Greek government default – and worse. A Spanish default? An Italian one? Not imminently likely, but as Eeyore once said, “think of all the possibilities … before you settle down to enjoy yourselves”.
Such events always have nasty but unpredictable consequences. In fact, the nastiness is intimately bound up with the unpredictability. A big financial loss is always likely to have further impacts: anyone holding Greek bonds suffers if the Greeks decide they won’t pay. But what if you’re applying to a bank for an overdraft, and that bank has just been burnt by the Greeks? Or by a collapsing hedge fund that invested in Greece? Or has written an insurance policy – a credit default swap – on Greek bonds? The possibilities multiply: following Eeyore’s advice would take us a long time, and each successive failure can lead to further failures. Because we do not really know who is at risk, financial markets can seize up, as they did at the beginning of the credit crunch and, far more severely, when Lehman Brothers evaporated early one Monday morning in September 2008.
How should regulators deal with all this? First, they should never forget that misconceived regulatory rules have contributed in many ways to the crisis we continue to face: it’s in the nature of regulations to force black-and-white responses – as when many financial institutions are simultaneously obliged to sell a particular asset. But for those who, like me, believe the quest for better regulations is not a hopeless one, the search is on for better ways to measure systemic risk.
A number of interesting approaches to this problem have recently crossed my desk. Tobias Adrian of the Federal Reserve Bank of New York and Markus Brunnermeier of Princeton propose a tool they call “CoVaR”, or “contagion Value at Risk”. Value at Risk is a widely used but controversial risk management and regulatory tool, describing the maximum amount of money a financial institution might expect to lose over a given period of time, such as a day, with (say) 99 per cent confidence. (“What about the other 1 per cent?”, you might ask, and with good reason.) The Adrian-Brunnermeier approach calculates Value at Risk for the entire universe of financial firms, and then asks how that VaR changes if one particular entity – say, Lehman Brothers, or Portugal – finds itself in distress.
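To make that concrete, here is a minimal sketch in Python of how such a calculation might look on made-up data. Adrian and Brunnermeier estimate CoVaR with quantile regressions; the sketch stands in for that with simple conditioning on the firm’s worst days, and every number in it is synthetic.

```python
# A rough illustration of VaR and a CoVaR-style calculation on synthetic
# daily returns. This is a sketch of the idea, not the Adrian-Brunnermeier
# estimation method (they use quantile regressions).
import numpy as np

rng = np.random.default_rng(seed=1)
n_days = 10_000

# Made-up returns: one firm and the wider financial system, driven partly by
# a common factor so the firm's bad days tend to be bad days for everyone.
common = rng.normal(0.0, 0.010, n_days)
firm = common + rng.normal(0.0, 0.012, n_days)
system = 0.2 * firm + 0.8 * common + rng.normal(0.0, 0.006, n_days)

def value_at_risk(returns, confidence=0.99):
    """Historical VaR: the loss exceeded on only (1 - confidence) of days."""
    return -np.quantile(returns, 1.0 - confidence)

# System-wide VaR on days when the firm is having a roughly median day...
deviation = np.abs(firm - np.median(firm))
median_days = deviation <= np.quantile(deviation, 0.10)
covar_normal = value_at_risk(system[median_days])

# ...versus days when the firm is in distress (its worst 1 per cent of days).
distress_days = firm <= np.quantile(firm, 0.01)
covar_distress = value_at_risk(system[distress_days])

print(f"System VaR, firm at its median: {covar_normal:.4f}")
print(f"System VaR, firm in distress:   {covar_distress:.4f}")
print(f"Delta CoVaR:                    {covar_distress - covar_normal:.4f}")
```

The difference between the two conditional figures, which the authors call delta CoVaR, is a gauge of how much one firm’s distress adds to the risk of the system as a whole.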
Alternative approaches borrow techniques from network mapping. Francis Diebold and Kamil Yilmaz have a paper out on “network topology”. Andrew Haldane, the Bank of England’s man in charge of financial stability, with Prasanna Gai and Sujit Kapadia, is also pursuing network modelling techniques to understand how risks can spread.
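The intuition behind such network models can be seen in a toy default cascade, sketched below with a handful of hypothetical banks and made-up exposures; it illustrates the general idea of contagion through balance sheets, not the specific models of Gai, Kapadia, Diebold or Yilmaz.

```python
# A toy default cascade: each bank holds claims on others, and a failure
# wipes out those claims, which can exhaust a creditor's capital and push it
# into failure in turn. All names and numbers are invented.
exposures = {  # exposures[a][b]: the amount bank a is owed by bank b
    "A": {"B": 40, "C": 10},
    "B": {"C": 30, "D": 20},
    "C": {"D": 25},
    "D": {"A": 15},
}
capital = {"A": 30, "B": 35, "C": 20, "D": 25}  # loss-absorbing buffers

def cascade(first_failure):
    """Return the set of banks that end up failing once first_failure defaults."""
    failed = {first_failure}
    changed = True
    while changed:
        changed = False
        for bank, claims in exposures.items():
            if bank in failed:
                continue
            losses = sum(owed for debtor, owed in claims.items() if debtor in failed)
            if losses >= capital[bank]:
                failed.add(bank)
                changed = True
    return failed

for start in capital:
    print(f"{start} fails first -> all failures: {sorted(cascade(start))}")
```

Even in this four-bank example the outcomes are uneven: one initial failure fizzles out on its own, while another brings the whole network down, which is exactly the sort of structure the network approach tries to map.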
Meanwhile, as regulators such as Haldane and Adrian look to abstract approaches in the hope of deeper understanding, an academic, Darrell Duffie of Stanford University, has been advocating what he calls a “10-by-10-by-10” approach, which is pleasingly pragmatic. Duffie suggests stress tests in which 10 financial firms list the impact of 10 unpleasant scenarios on 10 of their key counterparties; the process can be iterative, as each round of testing suggests new firms to include and new scenarios to try.
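The bookkeeping behind such an exercise is simple enough to sketch. Everything below is invented: the point is only the shape of the data a regulator might collect and aggregate, not the detail of Duffie’s proposal.

```python
# A sketch of the reporting behind a 10-by-10-by-10 style exercise: each
# reporting firm states its stressed loss (or gain) against each key
# counterparty under each scenario, and the totals reveal where exposures
# concentrate. Firms, scenarios, counterparties and numbers are all made up.
from collections import defaultdict

# (reporting firm, scenario, counterparty) -> reported loss; negative = gain
reports = {
    ("Firm1", "sovereign default", "CounterpartyX"): 120.0,
    ("Firm1", "sharp rate rise",   "CounterpartyY"): 45.0,
    ("Firm2", "sovereign default", "CounterpartyX"): 80.0,
    ("Firm2", "housing crash",     "CounterpartyZ"): 60.0,
    ("Firm3", "sovereign default", "CounterpartyX"): 150.0,
}

# Aggregate by (scenario, counterparty) to spot concentrations of exposure.
totals = defaultdict(float)
for (firm, scenario, counterparty), loss in reports.items():
    totals[(scenario, counterparty)] += loss

for (scenario, counterparty), loss in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{scenario:<18} {counterparty:<14} total reported loss: {loss:.0f}")

# Counterparties and scenarios that dominate the totals suggest who to add
# to the next, wider round of reporting: the iterative step in the proposal.
```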
One can hardly complain about these efforts to understand more clearly the intricate plumbing of the financial system, but what is becoming most clear is how little we still know. So I particularly applaud one feature of Duffie’s brief working paper: more than a quarter of it is devoted to an exploration of all the ways in which his idea may fail.
Also published at ft.com.