Computers have reduced the cost of buying and selling financial assets, but the gains from further speed seem unclear
In 1987, Thomas Peterffy, an options trader with a background in software, sliced a cable feeding data to his Nasdaq terminal and hacked it into the back of a computer. The result was a fully automated algorithmic trading system, in which Peterffy’s software received quotes, analysed them and executed trades without any need for human intervention.
Not long after, a senior Nasdaq official dropped by Peterffy’s office to meet what he assumed must be a large team of traders. The official was alarmed to be shown that the entire operation comprised a Nasdaq terminal sitting alongside a single, silent computer.
From such humble beginnings, computerised trading has become very big business. High-frequency trades take place on timescales measured in microseconds – for comparison, Usain Bolt’s reaction time in the Olympic 100m final was 165,000 microseconds.
There is a variety of motives for high-speed trading. Statistical arbitrageurs look for pricing patterns that seem anomalous, and bet that the anomaly will be short-lived. Algo-sniffers try to figure out when someone is in the process of placing a big order, and leap in to profit from the price move that order is likely to cause. (Algo-sniffers are likely to fall prey to algo-sniffer-sniffers, and so on ad infinitum.)
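To make the statistical-arbitrage idea concrete, here is a minimal, purely illustrative sketch in Python: it flags the latest price as anomalous when its z-score against a recent moving average crosses a threshold, and bets on reversion. The window, threshold and prices are invented for illustration; real high-frequency systems work on tick-level data, at microsecond latencies, with far richer models.

```python
# Illustrative sketch only: a toy mean-reversion "statistical arbitrage" signal.
# The window, threshold and prices are invented; this is not how any real
# trading firm's system works.
from statistics import mean, stdev

def zscore_signal(prices, window=20, threshold=2.0):
    """Return 'buy', 'sell' or 'hold' for the latest price, betting that a
    large deviation from the recent average will be short-lived."""
    if len(prices) < window + 1:
        return "hold"
    recent = prices[-window - 1:-1]          # the lookback window
    mu, sigma = mean(recent), stdev(recent)
    if sigma == 0:
        return "hold"
    z = (prices[-1] - mu) / sigma            # how anomalous is the latest price?
    if z > threshold:
        return "sell"   # price looks unusually high: bet it falls back
    if z < -threshold:
        return "buy"    # price looks unusually low: bet it bounces back
    return "hold"

# Example: a price that spikes above its recent range triggers a 'sell'.
history = [100.0 + 0.1 * i for i in range(25)] + [106.0]
print(zscore_signal(history))
```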
Then there are quote-stuffers, which produce and immediately withdraw offers to trade, perhaps in the hope of provoking other algorithms, or perhaps with the explicit aim of gumming up trading networks and exploiting the confusion. And the algorithmic trading game is constantly evolving.
If this sounds unnerving to you, then you have something in common with Thomas Peterffy, now a billionaire on the back of his electronic brokerage firm. Peterffy told NPR’s Planet Money team that “whether you can shave three milliseconds off the execution of an order has absolutely no social value”. It’s hard to disagree. Computers have reduced the cost of buying and selling financial assets, but the gains from further speed seem unclear, and must be set against the risks. Several recent financial “accidents”, including the May 2010 “flash crash” and the implosion of Knight Capital in August 2012, attest to the hazards of high-speed trading.
But what is to be done? In a new paper, “Process, Responsibility and Myron’s Law”, the economist Paul Romer argues that we need to start paying attention to the dynamics of how new rules are developed. (“Myron’s Law” is that given enough time, any particular tax code will end up collecting zero revenue, as loopholes are discovered and exploited. Tax codes must therefore adapt. So must many other rules.)
Romer contrasts the box-ticking approach of the Occupational Safety and Health Administration (OSHA) – which has a detailed and somewhat contradictory rule, 1926.1052(c)(3), about the height of stair-rails – with the principles-based approach of the Federal Aviation Administration (FAA), which “simply” requires planes to be airworthy to the satisfaction of its inspectors. Romer argues that financial regulations now resemble OSHA’s rule 1926.1052(c)(3) more closely than the FAA’s “airworthy” principle – and that this is a problem.
Peterffy’s experience is instructive: the Nasdaq official withdrew, consulted a rulebook and declared that the rules required trades to be entered via a keyboard. Peterffy’s team responded by building a robot typist, and he was allowed to continue. Box-tickers everywhere would be proud, and the actual merits of banning Peterffy did not need to trouble anyone.
The real question here is a question about process – about how new rules are developed, and who takes responsibility for the decisions made. Because as the algorithmic traders are demonstrating, even rules that work today will have to adapt tomorrow.
Also published at ft.com.