You can’t always get what you want, a young man once sang. It’s a simple aphorism, but one worth remembering. Boris Johnson was widely — and rightly — mocked in 2016 for announcing that “our policy is having our cake and eating it”. That was a dishonest refusal to admit that the Brexit referendum had obliged the UK government to make some painful decisions. But it is not always so easy to see when Mick Jagger’s maxim is in play.
Consider the question of whether algorithms make fair decisions. In 2016, a team of reporters at ProPublica, led by Julia Angwin, published an article titled “Machine bias”. It was the result of more than a year’s investigation into an algorithm called Compas, which was being widely used in the US justice system to make recommendations concerning parole, pre-trial detention and sentencing. Angwin’s team concluded that Compas was much more likely to rate white defendants as lower risk than black defendants. What’s more, “black defendants were twice as likely to be rated as higher risk but not reoffend. And white defendants were twice as likely to be charged with new crimes after being classed as low risk.”
That seems bad. Northpointe, the makers of Compas, pointed out that black and white defendants given a risk rating of, say, 3 had an equal chance of being rearrested. The same was true for black and white defendants with a risk rating of 7, or any other rating. The risk scores meant the same thing, irrespective of race.
Shortly after ProPublica and Northpointe produced their findings, rebuttals and counter-rebuttals, several teams of academics published papers making a simple but surprising point: there are several different definitions of what it means to be “fair” or “unbiased”, and it is arithmetically impossible to be fair in all these ways at once. An algorithm could satisfy ProPublica’s definition of fairness or it could satisfy Northpointe’s, but not both.
Here’s Corbett-Davies, Pierson, Feller and Goel: “It’s actually impossible for a risk score to satisfy both fairness criteria at the same time.”
Or Kleinberg, Mullainathan and Raghavan: “We formalise three fairness conditions . . . and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously.”
This is not just a fact about algorithms. Whether decisions about parole are made by human judges, robots or dart-throwing chimps, the same relentless arithmetic would apply.
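The arithmetic behind this impossibility can be sketched in a few lines. The numbers below are invented for illustration, not drawn from the Compas data: they simply show that a score satisfying Northpointe’s criterion (calibration: a given rating means the same reoffending probability in every group) is forced to violate ProPublica’s criterion (equal false-positive rates) whenever the underlying reoffending rates differ between groups.

```python
def false_positive_rate(prevalence, p_reoffend_high=0.6, p_reoffend_low=0.2):
    """FPR among non-reoffenders, given a calibrated two-level risk score.

    Calibration: P(reoffend | rated high) and P(reoffend | rated low)
    are the same for every group. The share of a group rated "high" is
    then pinned down by that group's reoffending rate (prevalence):
        prevalence = h * p_reoffend_high + (1 - h) * p_reoffend_low
    """
    # Solve for h, the fraction of the group rated high risk.
    h = (prevalence - p_reoffend_low) / (p_reoffend_high - p_reoffend_low)
    # Among people who do NOT reoffend, what fraction were rated high?
    return h * (1 - p_reoffend_high) / (1 - prevalence)

# Two hypothetical groups whose reoffending rates differ:
fpr_a = false_positive_rate(prevalence=0.5)  # group where 50% reoffend
fpr_b = false_positive_rate(prevalence=0.3)  # group where 30% reoffend
print(round(fpr_a, 3), round(fpr_b, 3))  # 0.6 0.143
```

The same calibrated score brands 60 per cent of non-reoffenders in one group as high risk, but only about 14 per cent in the other — which is exactly the kind of disparity ProPublica reported. Nothing about the calculation depends on who, or what, assigns the scores.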
We need more scrutiny and less credulity about the life-changing magic of algorithmic decision making, and ProPublica’s analysis was invaluable for shining a spotlight on the automation of the gravest judgments. But if we are to improve algorithmic decision making, we need to remember Jagger’s aphorism. These decisions cannot be “fair” on every possible metric. When it is impossible to have it all, we will have to choose what really matters.
Painful choices are, of course, the bread and butter of economics. There is a particular type which seems to fascinate economists: the “impossible trinity”. The wisest of all impossible trinities will be well known to fans of Armistead Maupin’s More Tales of the City (1980). It’s “Mona’s Law”: you can have a hot job, a hot lover and a hot apartment, but you can’t have all three at once.
In economics, impossible trinities are more prosaic. The most famous is that while you might want a fixed exchange rate, free movement of capital across borders and an independent monetary policy, you can have at most two of the three. Another, coined by the economist Dani Rodrik, is more informal: you can set rules at a national level, you can be highly economically integrated or you can let the popular vote determine policy, but you can’t do all three at once. An economically integrated national technocracy is possible; so is democratic policymaking at a supranational level. If you don’t fancy either of those, you need to set limits to economic globalisation.
Much like Mona’s Law, these impossible trinities are more like rules of thumb than mathematical proofs. There might be exceptions, but don’t get your hopes up.
Mathematicians call such findings “proof of impossibility”, or just “impossibility results”. Some of them are elementary: we’ll never find the largest prime number, because there is no largest prime number to be found, nor can we express the square root of two as a fraction.
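Euclid’s argument for the first of these fits in a couple of lines. Suppose the primes could be listed in full as $p_1, p_2, \ldots, p_n$, and consider
$$N = p_1 p_2 \cdots p_n + 1.$$
Dividing $N$ by any $p_i$ leaves remainder 1, so no prime on the list divides $N$; yet $N$ must have some prime factor. That factor is a prime missing from the supposedly complete list, so no finite list of primes can ever be complete.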
Others are deeper and more mind-bending. Perhaps the most profound is Gödel’s incompleteness theorem, which in 1931 demonstrated that, for any consistent mathematical system rich enough to express basic arithmetic, there will be true statements in that system that cannot be proved within it. Mathematics is therefore incomplete, and the legions of mathematicians trying to develop a complete, consistent mathematical system had been wasting their time. At the end of the seminar in which Gödel detonated this intellectual bombshell, the great John von Neumann laconically remarked, “it’s all over”.
Nobody likes to be told that they can’t have it all, but a painful truth is more useful than a comforting falsehood. Gödel’s incompleteness theorem was one of the painful truths I studied as a young logician alongside Liz Truss. Perhaps she has finally absorbed the lesson. It is important to understand when something is impossible. That truth frees us from fruitlessly trying to always get what we want and lets us focus instead on getting what we need.
Written for and first published in the Financial Times on 28 October 2022.