How to make the algorithms serve us, not the other way around

4th November, 2021

Life is full of difficult decisions. Who should be hired or fired? What grades should students receive in their exams? Should an accused person awaiting trial be released or held in custody?

An increasingly popular alternative is to delegate the decisions to a data-driven algorithm. The hope is that such algorithms might correct our prejudices, our emotional incontinence and our wild inconsistencies. The risk is that the algorithms automate injustice.

So should we use algorithms to make life-changing decisions? Our response to this question has been not to look at the data, but to respond with prejudice, emotional incontinence and wild inconsistency. How very human.

For an example (there are many) of irrational algophobia, see the CNN article “Math is Racist”. For an example (there are many more) of irrational algophilia, see the UK government’s absurd decision last year to allow an algorithm to assign exam grades to students who had never been given the chance to sit the exam.

A better way forward is to look at the data. How are algorithms working in practice and can they be fixed when they fall short? Jens Ludwig and Sendhil Mullainathan of the University of Chicago examine the problem in a forthcoming article for the Journal of Economic Perspectives.

Ludwig and Mullainathan focus on algorithms used in decisions in criminal justice, such as pre-trial release, sentencing and parole. They argue that decisions made by judges are so transparently flawed that there is plenty of room for algorithms to improve matters. Judges have little ability to predict the risk of repeat offending. Their decisions show distinct statistical evidence of racial bias. Judges are also inconsistent, both with their own prior judgments, and with each other. Some judges are tough, others lenient. Sentencing guidelines are an attempt to control the chaos, but what are such guidelines if not a crude algorithm?

Having laid out this catalogue of human failings, Ludwig and Mullainathan drop the other shoe: algorithms also make terrible decisions. Why? Not because they cannot do better — they can — but because when we humans design, procure and deploy algorithms, we’re not really trying. For example, many algorithms produce decisions with a racist or sexist result because they have been trained on data from a racist, sexist world. This is unacceptable, not because the algorithm is worse than what came before it, but because it could so easily do better. Humans do not come equipped with an “equity dial” designed to balance different conceptions of fairness across class, income, gender, ethnicity, disability or any other category. Algorithms do, if we choose to use it. We are often careless about how algorithms are designed, trained or used.
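The "equity dial" is more than a metaphor: a risk model's decision threshold can be set per group rather than globally. The sketch below is a toy illustration of one possible dial (equalising release rates, one of several competing fairness definitions); the scores and groups are entirely made-up, not drawn from any real system.

```python
# Toy "equity dial": per-group decision thresholds tuned so that a
# risk model's release rates match across groups. All data invented.

def release_rate(scores, threshold):
    """Fraction of cases scored below the risk threshold (released)."""
    return sum(s < threshold for s in scores) / len(scores)

def tune_threshold(scores, target_rate):
    """Pick a threshold that releases roughly target_rate of cases."""
    ranked = sorted(scores)
    k = round(target_rate * len(ranked))
    return ranked[min(k, len(ranked) - 1)]  # release the k lowest-risk cases

# Hypothetical risk scores where the model systematically scores
# group B higher (say, because of biased training data).
group_a = [0.10, 0.20, 0.30, 0.40, 0.55, 0.70]
group_b = [0.25, 0.35, 0.45, 0.55, 0.70, 0.85]

# A single global threshold releases the groups at different rates.
print(release_rate(group_a, 0.50), release_rate(group_b, 0.50))

# Turning the dial: separate thresholds equalise the release rates.
t_a = tune_threshold(group_a, 0.5)
t_b = tune_threshold(group_b, 0.5)
print(release_rate(group_a, t_a), release_rate(group_b, t_b))
```

Whether equal release rates is the right target, as opposed to equal error rates or some other criterion, is exactly the kind of value judgment that designing the dial forces into the open.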

Cathy O’Neil, author of Weapons of Math Destruction, once pointed out to me that in describing an algorithm, I had conflated the risk of reoffending with that of being rearrested. In my defence, so had almost everyone else. It is all too easy to claim the algorithm is doing one thing when in fact it is doing something else, perhaps something both easier and more malign.

We do seem to judge decisions made by humans in a different way from those made by machines. We seem more outraged by biased algorithms than by biased humans, perhaps because we (rightly) expect the algorithm to do better. But that is not the only way in which we hold computers to different standards. Recall the famous “trolley problem” in which a decision to divert a runaway railway trolley will save lives overall, but is also an active decision to kill someone who would otherwise have been safe. Researchers have found that people tend to prefer computers that divert the trolley, but forgive humans who remain inactive. Cool utilitarianism is unsettling in a human, but exactly what we want from an algorithm.

What should be done to allow algorithms to realise their potential? First, recognise that they are simply tools, like hammers. At the moment our polarised discussion seems to view the algo-hammer as either a murder weapon or a cure for cancer. It’s neither, but it’s perfectly good for driving in nails.

Second, as Kate Crawford explains in The Atlas of AI, we need to acknowledge there are questions of power and politics in who gets to design the algorithms and who feels the results. To continue the hammer metaphor, a hammer is one thing to a carpenter and quite another to a nail.

Finally, as I argue in my own book, How To Make The World Add Up / The Data Detective, we need to start subjecting algorithms to the same culture of collaborative scrutiny and replication that defines science, and the same requirement to prove effectiveness that we demand from new medicines.

I am convinced that a well-designed algorithm can make fairer decisions about criminal justice, about whom to invite for a job interview and about how to assign support to vulnerable children. But before we unleash such algorithms, it is only right to expect independent experts to examine their inner workings — and only right to expect proof of effectiveness, for example with a randomised trial. Algorithms, like medicine, can do a lot of good. But before we start dosing each other, let’s check the evidence rather than admiring the pretty label on the bottle.

Written for and first published in the Financial Times on 1 October 2021.

The paperback of “How To Make The World Add Up” is now out. US title: “The Data Detective”.

“One of the most wonderful collections of stories that I have read in a long time… fascinating.” – Steve Levitt (Freakonomics)

“If you aren’t in love with stats before reading this book, you will be by the time you’re done.” – Caroline Criado Perez (Invisible Women)
