Pity the robot drivers snarled in a human moral maze

7th August, 2014

Robotic cars do not get tired, drunk or angry but there are bound to be hiccups, says Tim Harford

Last Wednesday Vince Cable, the UK business secretary, invited British cities to express their interest in being used as testing grounds for driverless cars. The hope is that the UK will gain an edge in this promising new industry. (German autonomous cars were being tested on German, French and Danish public roads 20 years ago, so the time is surely ripe for the UK to leap into a position of technological leadership.)

On Tuesday, a very different motoring story was in the news. Mark Slater, a lorry driver, was convicted of murdering Trevor Allen. He had lost his temper and deliberately driven a 17-tonne lorry over Mr Allen’s head. It is a striking juxtaposition.

The idea of cars that drive themselves is unsettling, but with drivers like Slater at large, the age of the driverless car cannot come quickly enough.

But the question of how safe robotic cars are, or might become, is rather different from the question of how the risks of a computer-guided car are perceived, and how they might be repackaged by regulators, insurers and the courts.

On the first question, it is highly likely that a computer will one day do a better, safer, more courteous job of driving than you can. It is too early to be certain of that, because serious accidents are rare. An early benchmark for Google’s famous driverless car programme was to complete 100,000 miles of driving on public roads – but American drivers in general kill someone only about once every 100 million miles.
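To see how little that benchmark can tell us about safety, a rough back-of-the-envelope calculation (assuming the commonly cited US figure of about one fatality per 100 million vehicle-miles):

\[
\frac{100{,}000 \text{ miles}}{100{,}000{,}000 \text{ miles per fatality}} \approx 0.001 \text{ expected fatalities.}
\]

In other words, a fleet could complete the 100,000-mile benchmark a thousand times over before a typical human driver would, statistically, be expected to cause a single fatal accident.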

Still, the safety record so far seems good, and computers have some obvious advantages. They do not get tired, drunk or angry. They are absurdly patient in the face of wobbly cyclists, learner drivers and road hogs.

But there are bound to be hiccups. While researching this article my Google browser froze up while trying to read a Google blog post hosted on a Google blogging platform. Two seconds later the problem had been solved, but at 60 miles per hour two seconds is more than 50 metres. One hopes that Google-driven cars will be more reliable when it comes to the more literal type of crash.
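A rough check of that figure, using the standard conversion of one mile to roughly 1,609 metres:

\[
60 \text{ mph} = \frac{60 \times 1609 \text{ m}}{3600 \text{ s}} \approx 26.8 \text{ m/s}, \qquad 26.8 \text{ m/s} \times 2 \text{ s} \approx 54 \text{ m.}
\]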

Yet the exponential progress of cheaper, faster computers with deeper databases of experience will probably guarantee success eventually. In a simpler world, that would be the end of it.

Reality is knottier. When a car knocks over a pedestrian, who is to blame? Our answer depends not only on particular circumstances but on social norms. In the US in the 1920s, the booming car industry found itself under pressure as pedestrian deaths mounted. One response was to popularise the word “jaywalking” as a term of ridicule for bumpkins who had no idea how to cross a street. Social norms changed, laws followed, and soon enough the default assumption was that pedestrians had no business being in the road. If they were killed they had only themselves to blame.

We should prepare ourselves for a similar battle over robot drivers. Assume that driverless cars are provably safer. When a human driver collides with a robo-car, where will our knee-jerk sympathies lie? Will we blame the robot for not spotting the human’s idiosyncrasies? Or the person for being so arrogant as to think he could drive without an autopilot?

When such questions arrive in the courts, as they surely will, robotic cars will face a serious handicap. When they err, the error can be traced back to a deep-pocketed manufacturer. It is quite conceivable that Google, Mercedes or Volvo might produce a robo-car that could avoid 90 per cent of the accidents that would befall a human driver, and yet be bankrupted by the legal cases arising from the 10 per cent that remained. The sensible benchmark for robo-drivers would be “better than human”, but the courts may punish them for being less than perfect.

There are deep waters here. How much space is enough when overtaking a slow vehicle – and is it legitimate for the answer to change when running late? When a child chases a ball out into the road, is it better to swerve into the path of an oncoming car, or on to the pavement where the child’s parents are standing, or not to swerve at all?

These are hardly thought of as ethical questions because human drivers answer them intuitively and in an instant. But a computer’s priorities must be guided by its programmers, who have plenty of time to weigh up the tough ethical choices.

In 1967 Philippa Foot, one of Oxford’s great moral philosophers, posed a thought experiment that she called the “trolley problem”. A runaway railway trolley is about to kill five people, but by flipping the points, you can redirect it down a line where it will instead kill one. Which is the right course of action? It is a rich seam for ethical discourse, with many interesting variants. But surely Foot did not imagine that the trolley problem would have to be answered one way or another and wired into the priorities of computer chauffeurs – or that lawyers would second-guess those priorities in court in the wake of an accident.

Then there is the question of who opts for a driverless car. Sir David Spiegelhalter, a risk expert at Cambridge university, points out that most drivers are extremely safe. Most accidents are caused by a few idiots, and it is precisely those idiots, Sir David speculates, who are least likely to cede control to a computer.

Perhaps driverless cars will be held back by a tangle of social, legal and regulatory stubbornness. Or perhaps human drivers will one day be banned, or prohibitively expensive to insure. It is anyone’s guess, because while driving is no longer the sole preserve of meatsacks such as you and me, the question of what we fear and why we fear it remains profoundly, quirkily human.

Also published at ft.com.
