Algorithms judge us; how can we judge them?

26th December, 2019

If there was ever a demonstration that people think with their guts, it was the furore over the idea that Apple Card is “a f***ing sexist program”. David Heinemeier Hansson, a successful entrepreneur and programmer, complained on Twitter that his wife had a far lower credit limit than he did, and soon everyone from the US senator Elizabeth Warren to Apple co-founder Steve Wozniak to the New York Department of Financial Services was weighing in to show their support.

The idea of women being treated badly by Big Tech and by banks seems all too plausible. Apple is quite literally an iconic brand. Goldman Sachs, the bank that issues and manages the Apple-branded credit card, is nearly as famous. So the ingredients for a viral story are all there, however thin the anecdotal evidence.

Is the Apple Card actually sexist? One definition of equal treatment for men and women would be that credit was extended equally to both, regardless of the fact that women tend to be paid less than men. Another would be that people with the same income got the same credit, regardless of gender. You might have spotted the problem: it’s impossible to offer both forms of equal treatment simultaneously.

This isn’t just some clever piece of logic-chopping. If two groups of people are measurably different, then any rule about how they are treated — be it an algorithm or human judgment — will end up looking unfair, if not by one measure then by another. Is the Apple Card sexist? Arithmetic suggests that, for one definition of sexism or another, it must be.
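To see the arithmetic, here is a toy illustration (all the numbers are invented, not Goldman’s): if average incomes differ between two groups and credit limits are a fixed multiple of income, the average limits must differ too; force the average limits to be equal instead, and the credit offered per pound of income must differ. A minimal sketch in Python:

```python
# Toy illustration of the impossibility argument, using made-up numbers.
# Suppose credit limits are set purely as a multiple of income, with no
# reference to gender at all (equal treatment per pound of income).

AVG_INCOME = {"men": 60_000, "women": 48_000}  # hypothetical pay gap
MULTIPLIER = 0.2                               # hypothetical limit = 20% of income

limits = {group: MULTIPLIER * income for group, income in AVG_INCOME.items()}
print(limits)  # {'men': 12000.0, 'women': 9600.0} -> unequal average limits

# Alternatively, force the average limits to be equal across the groups...
target = sum(limits.values()) / len(limits)
multipliers = {group: target / income for group, income in AVG_INCOME.items()}
print(multipliers)  # {'men': 0.18, 'women': 0.225} -> unequal treatment per pound of income
```

Whatever rule you pick, one of the two definitions of equal treatment has to give.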

This doesn’t excuse cases where decision processes — algorithmic or otherwise — are grossly biased, grotesquely inaccurate or both. Our problem is that we don’t know which ones they are, so we tend instead to believe emotionally resonant stories about famous brands. In the algorithm-saturated world we are entering, we need a way to distinguish the good from the bad, the ethical from the outrageous. We should be demanding better evidence that the algorithms that shape our lives are doing so fairly and effectively.

Goldman Sachs says that gender, race, age and sexual orientation are never explicitly part of the decision-making process. The company also says that the process is scrutinised both by consultants and by an internal department to ensure that there is no accidental bias. You and I, however, are just going to have to take its word for the robustness of that scrutiny.

Companies are learning the hard way that people now want serious explanations: Goldman claims the Apple Card is unusually transparent, but people evidently want more.

Transparency might help — but it is neither a panacea nor an easy option. Netflix once released anonymised data about movie preferences as part of a competition to improve its recommendations. Alas, because some customers had posted reviews for both Netflix and the Internet Movie Database, it wasn’t hard to link the anonymous serial numbers with real names and intimate film reviews. One woman sued Netflix for potentially revealing her sexual orientation to her husband and children. Transparency is hard; Goldman cannot simply dump its data set and invite us all to poke around. But it could give access to independent assessors.

The philosopher Onora O’Neill argues that anyone who would like to be trusted should be trying to demonstrate trustworthiness. Trustworthiness, she adds, can be bolstered by “intelligent openness”. In the case of algorithms, we should expect a clear and prominent explanation of how the algorithm is making its decision — and perhaps more importantly, we should expect independent experts to be able to assess the claims that are being made.

There are arguably more important algorithms out there than the one that sets your Apple Card credit limit — such as the Facebook news feed or Compas, which is widely used in justice systems to assess the risk that a criminal will reoffend. I am not qualified to assess their fairness or effectiveness. But I know people who could, if they were allowed to see more information.

Compas has now been exhaustively analysed by academics, and worrying features have been exposed. But the analysis was only possible after a team at ProPublica published a painstakingly assembled data set for all to use. It should be easier for independent experts to scrutinise the algorithms that shape our lives.

One reason I am sanguine about the Apple Card is that other credit cards are available. If Goldman is mistakenly turning down creditworthy people, other companies will want their business. That is not a guarantee of fairness but it is, at least, a powerful force pulling in that direction.

In other cases there is no such force: if a criminal is denied parole on the word of an algorithm, there is no option to shop around. When companies peddle software systems that are supposed to identify the best teachers or the worst criminals or the children most at risk of domestic violence, we should demand proof. If we don’t, we will be sold statistical snake-oil.

 

Written for and first published in the Financial Times on 22 November 2019.

My new podcast is “Cautionary Tales” [Apple] [Spotify] [Stitcher]

