If you do not like the price you’re being offered when you shop, do not take it personally: many of the prices we see online are being set by algorithms that respond to demand and may also try to guess your personal willingness to pay. What’s next? A logical step is that computers will start conspiring against us. That may sound paranoid, but a new study by four economists at the University of Bologna shows how this can happen.
The researchers allowed two simple artificial intelligence algorithms to compete against each other in a setting where they simultaneously set prices and reaped profits accordingly. The algorithms taught themselves to collude, raising prices from the cut-throat competitive level towards what a monopolist would choose. Price cuts were met with price wars, after which collusion would return. Just because you’re paranoid, it doesn’t mean the computers are not out to get you.
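The mechanics can be illustrated with a toy version of that experiment. This is a sketch only, not the Bologna researchers’ code: the published study used Q-learning agents in a richer demand environment, whereas here two such learners pick from a tiny price grid, split a winner-takes-all market, and condition on last period’s prices.

```python
import random

PRICES = [1, 2, 3, 4, 5]           # illustrative discrete price grid
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def profits(p1, p2):
    """Toy demand: the cheaper firm serves the whole market of size 10;
    a tie splits it. Profit is price times quantity (zero cost)."""
    if p1 < p2:
        return p1 * 10, 0
    if p2 < p1:
        return 0, p2 * 10
    return p1 * 5, p2 * 5

def greedy(q, state):
    """The action with the highest estimated value in this state."""
    return max(PRICES, key=lambda a: q[(state, a)])

def run(periods=50_000, seed=0):
    rng = random.Random(seed)
    states = [(i, j) for i in PRICES for j in PRICES]
    q1 = {(s, a): 0.0 for s in states for a in PRICES}
    q2 = dict(q1)
    state = (rng.choice(PRICES), rng.choice(PRICES))
    for _ in range(periods):
        # Epsilon-greedy choice: mostly exploit, occasionally explore.
        a1 = rng.choice(PRICES) if rng.random() < EPS else greedy(q1, state)
        a2 = rng.choice(PRICES) if rng.random() < EPS else greedy(q2, state)
        r1, r2 = profits(a1, a2)
        nxt = (a1, a2)
        # One-step Q-learning update for each firm, treating the rival
        # as just another part of the environment.
        q1[(state, a1)] += ALPHA * (r1 + GAMMA * max(q1[(nxt, a)] for a in PRICES) - q1[(state, a1)])
        q2[(state, a2)] += ALPHA * (r2 + GAMMA * max(q2[(nxt, a)] for a in PRICES) - q2[(state, a2)])
        state = nxt
    return state  # the last price pair the two learners settled on

final_prices = run()
```

Neither agent is told about the other, and neither is told to collude; each simply learns which price tends to pay off given what happened last period. That indirect channel is all the study’s algorithms needed.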
This is not a surprising result for anyone who — like me — squandered their youth studying the theory of industrial competition. Robert Axelrod’s book The Evolution of Cooperation, published in 1984, described a tournament in which computers played a “prisoner’s dilemma”, a scenario analogous to two competing sellers. The best approaches used the threat of punishment to sustain co-operation. They were also simple: not something that a machine-learning system would struggle to discover.
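Just how simple those winning strategies were is easy to show. Below is a minimal iterated prisoner’s dilemma, with illustrative payoffs, pitting Axelrod’s famous “tit-for-tat” rule — co-operate first, then copy whatever the opponent did last — against an unconditional defector.

```python
# Classic prisoner's dilemma payoffs: each player cooperates (C) or
# defects (D); mutual cooperation beats mutual defection, but each
# player is tempted to defect unilaterally.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return history[-1] if history else 'C'

def always_defect(history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Tit-for-tat sustains co-operation with itself...
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# ...and punishes a defector after a single round of exploitation.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The punishment threat is the whole trick: defect against tit-for-tat and it retaliates forever after, which is exactly the “price cuts were met with price wars” pattern the Bologna bots rediscovered.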
An obvious question is, who — if anyone — should be prosecuted for price fixing when the bots work out how to do it without being told to do so, and without communicating with each other? In the US, where the Federal Trade Commission has been pondering the prospect, the answer seems to be no one, because only explicit collusive agreements are illegal. The bots would only be abetting a crime if they started scheming together. Tacit collusion, apparently, would be fine.
This is a reminder that algorithms can misbehave in all kinds of intriguing ways. None of us can quite shake the image of a Skynet scenario, in which an AI triggers a nuclear war and then uses Arnold Schwarzenegger as the model for a time-travelling robot assassin on a mission to suppress human resistance. At least that strategy is refreshingly direct. The true scope of algorithmic mischief is much subtler and much wider.
We are rightly concerned about algorithms that practise racial or sexual discrimination, by accident or design. I am struck by how quickly tales of racist algorithms have gone from novelty to cliché. The stories may fade but the issue is not going away.
Algorithms that simply magnify human errors now appear almost quaint. In 2012, the Financial Times ran the headline “Knight Capital glitch loss hits $461m”; those were innocent times.
Then there were those T-shirts selling on Amazon a few years ago, offering offensive slogans such as “Keep Calm and Hit Her”, and bizarre ones such as “Keep Calm and Skim Me”. Hundreds of thousands of slogans were assembled by an algorithm and, if any appealed, the vendor would print them on demand. “We didn’t do it, it was the algorithm,” was a weak defence in 2013, but at least it was novel. That is no longer true.
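The pipeline behind those listings is easy to imagine — this is a hypothetical reconstruction with deliberately innocuous word lists, not the vendor’s actual code. The key feature is what is missing: nothing reviews the combinations before they go on sale.

```python
from itertools import product

# Hypothetical slogan generator: mechanically combine word lists into a
# template, then list every result for sale. No human reads the output.
verbs = ['Hug', 'Skim', 'Ping']      # illustrative word lists only
objects = ['Me', 'Them', 'On']

slogans = [f'Keep Calm and {v} {o}' for v, o in product(verbs, objects)]
print(len(slogans))  # 9 listings from two three-word lists
```

Swap in word lists a few hundred entries long and the catalogue balloons into the hundreds of thousands, which is why no human ever saw the offending combinations until a customer did.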
We are also realising that the algorithms can amplify other human weaknesses — witness recommendation engines on YouTube and Facebook that seem to promote disinformation or lead people down the dark tunnels of conspiracy thinking or self-harm.
By no means are all malevolent programs an accident; some are designed with mischief in mind. Bots can be used to generate or spread misinformation. Jamie Bartlett, author of The Dark Net, warns of a future of ultra-personalised propaganda. It is one thing when your internet-enabled fridge knows you’re hungry and orders yoghurt. It’s another when the fridge starts playing you hard-right adverts because they work best when you’re grumpy and low on blood sugar. And unless we radically improve both our electoral laws and our digital systems, nobody need ever know that a particular message was whispered in your ear as you searched for cookies.
Obviously, both the law and regulators must be nimble. But ponder, too, the challenges for corporate public relations and social responsibility departments. The latter is about being a good corporate citizen; PR is about seeming to be so. But who takes corporate responsibility for a harmful or tasteless decision made by an algorithm?
It is not an entirely new problem. Before there was tacit collusion between algorithms, there was tacit collusion between sales directors. Before companies blamed rogue algorithms for embarrassing episodes, they could blame rogue employees, or their suppliers. Can we really blame the bank whose cleaning subcontractor underpays the cleaning staff? Or the sportswear brand opposed to sweatshop conditions, whose suppliers quietly hire children and pay them pennies?
The natural answer is: we can and we do, but subcontracting is a source of both deniability and complexity. Subcontracting to algorithms complicates matters, too. But we are going to have to figure it out.
Written for and first published in the Financial Times on 22 February 2019.