Tim Harford The Undercover Economist

Other Writing

Articles from the New York Times, Forbes, Wired and beyond – any piece that isn’t one of my columns.

Other Writing

Remembering Peter Sinclair

Peter Sinclair died yesterday, after many days in hospital with Covid-19. It’s a heavy blow. Peter was an inspirational economics teacher and a wonderfully kind man. He inspired a generation of great economists and economics journalists, including Dave Ramsden (long-time head of the Government Economic Service), Camilla Cavendish, Tim Leunig, Evan Davis, and Diane Coyle, who has posted her own memories. He also taught David Cameron. I’m envious of all of them, because Peter left for the University of Birmingham before I had the chance to have the full benefit of his teaching. My loss, Birmingham students’ gain.

But even in a few short months he had a profound influence on me. When I was floundering in my Oxford entrance interviews – I hadn’t got a clue what was going on as I was being grilled by the formidable philosophy tutor – Peter was the one beaming and nodding and encouraging, as though everything was going brilliantly. And when I decided to drop economics and specialise in philosophy, Peter took the trouble to send a long, handwritten letter, full of encouragement, gently suggesting that I reconsider. I remember it vividly. I took his advice. It changed my life.

Peter had many friends, and went to great lengths to keep in touch with and support his former students. They will all be grieving today. My thoughts are very much with his wife Jayne, his family, and his close friends.

I bumped into him on the street a couple of years ago. He was smiling, bobbing around, waving enthusiastically, behaving as though there was nobody in the world he’d rather see. That extravagant friendliness was so very like him. It is how I’ll remember him.

1st of April, 2020 | Other Writing
Marginalia

How not to lose your mind in the Covid-19 age

There are as many responses to the Covid-19 pandemic as there are people to respond. Some of us have children to home-school. Some of us have elderly relatives to worry about; some of us are the elderly relatives in question. Some of us have never been busier; others have already lost their jobs.

One experience is common, however: wherever the virus has started to spread, life is changing radically for almost everyone. It’s a strange and anxious time, and some of the anxiety is inevitable. For many people, however, much of the stress can be soothed with – if you will pardon the phrase – one weird trick.

First, a diagnosis. Most of us, consciously or not, have a long list of things to do. As the virus and the lockdowns have spread, many of the items on the to-do list have simply evaporated. At the same time, a swarm of new tasks has appeared, multiplying by the day: everything from the small-yet-unfamiliar (“get toilet paper” and “claim refund on cancelled holiday”) to the huge-and-intimidating (“organise an inspiring home-school curriculum” or “find a new job”).

The change is so fast and comprehensive that for most of us it is unprecedented. Even a divorce or an international relocation is more gradual. The death of a spouse might be the only experience that comes close. No wonder that even those of us who are safe and well and feel loved and financially secure find ourselves reeling at the scale of it all.

To the extent that the problem is that the to-do list is unrecognisable, the solution is oddly simple: get the to-do list back in order. Here’s how.

Get a piece of paper. Make a list of all the projects that are on your mind. David Allen, author of the cult productivity manual Getting Things Done, defines a project as “any multistep outcome that can be completed within a year”. So, yes: anything from trying to source your weekly groceries to publishing a book.

That list should have three kinds of projects on it.

First, there are the old projects that make no sense in the new world. For those that can be mothballed until next year, write them down and file them away. Others will disappear forever. Say your goodbyes. Some part of your subconscious may have been clinging on, and I’m going to guess that ten seconds of acknowledging that the project has been obliterated will spare you a vague sense of unease in the long run.

Second, there are the existing projects, some of which have become more complicated in the mid-pandemic world. Things that you might previously have done on automatic may now require a little thought. Again, a few moments with a pen and paper will often tell you all you need to know: what’s changed? What do I now need to do? What, specifically, is my next action? Write it down.

Third, there are brand new projects. For example, I need to rewrite the introduction to my forthcoming book (How To Make The World Add Up, since you were wondering). It’s going to seem mighty strange without coronavirus references in it. Many of us need to devote more than a little attention to the sudden appearance of our children at home. Some of us need to hunt for new work; others, for a better home-office set-up. Many of us are now volunteering to look after vulnerable neighbours. In each case, the drill is the same: sketch out the project, ask yourself what the very next step is, and write it down.

Occasionally, you may encounter something that’s on your mind but about which nothing can be done – the fate of western civilisation, for example, or the fact that the health service desperately needs more ventilators and more protective equipment. For my family, it’s an elderly relative, suffering from dementia, in a locked-down nursing home. We can’t visit him. He can’t communicate on the phone or comprehend a video chat. There is, for now, literally nothing we can do but wait and hope. Acknowledging that fact – that there is no action to be taken – is itself a useful step.
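For readers who think in code, the whole triage can be sketched in a few lines of Python. This is only an illustration of the shape of the exercise – the project names, status labels and next actions below are invented placeholders, not anything from Allen’s book:

```python
# A minimal sketch of the to-do triage described above.
# Project names, statuses and next actions are invented examples.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Project:
    name: str
    status: str                        # "mothballed", "gone" or "active"
    next_action: Optional[str] = None  # only active projects need one

projects = [
    Project("Plan summer holiday", "mothballed"),
    Project("Book-tour events", "gone"),
    Project("Source weekly groceries", "active", "Book an online delivery slot"),
    Project("Home-school curriculum", "active", "Sketch this week's topics"),
]

# The payoff of the exercise: every live project carries one concrete next
# step, and everything else has been consciously filed away or waved goodbye.
for p in projects:
    if p.status == "active":
        print(f"{p.name} -> next: {p.next_action}")
    else:
        print(f"{p.name} ({p.status})")
```

The point is not the code but the discipline it encodes: every project ends up consciously shelved, consciously abandoned, or paired with a single concrete next action.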

I won’t pretend that in this frightening time, working through your to-do list in a systematic way will resolve all anxieties. It won’t. But you may be surprised at how much mental energy it saves – and at the feeling of relief as all these confusing and barely acknowledged new responsibilities take shape and feel more under your control.

Or so it seems to me. Good luck, and keep safe.

 

Oh – and in case it wasn’t obvious, this week’s Book of the Week is David Allen’s superb Getting Things Done.

My NEW book The Next Fifty Things That Made the Modern Economy is out in the UK in May and available to pre-order; please consider doing so online or at your local bookshop – pre-orders help other people find the book and are a huge help.


29th of March, 2020 | Marginalia | Other Writing | Resources
Marginalia

Book of the week 11: Uncharted by Margaret Heffernan

“The sagacious businessman is constantly forecasting,” said the great economist Irving Fisher, a man thoroughly convinced of the power of data to make the future legible. Fisher transformed economics and made millions as an entrepreneur, but died in penury. He is now best remembered as the tragic figure who, shortly before the cataclysmic Wall Street crash of 1929, informed the nation: “Stocks have reached what looks like a permanently high plateau.”

Poor Professor Fisher appears early on in Uncharted. Margaret Heffernan’s book is less a smackdown of failed forecasts than an engaging ramble across our attempts to predict, control, explore or embrace an uncertain future. Heffernan is admired for books that question the received wisdom of how management works; she is a business guru who brings the stern discipline of good sense to the business book genre. In this book, she turns her attention to a topic that absorbs most business leaders — and the rest of us too: how to think about what the future holds. Gazing into the future is not fruitless, she argues, but it is unnerving and hard work. Lazy and fearful, we are far too quick to reach for overblown gurus, misleading data and other useless guides. Even a good tool, such as GPS, can dull our senses.

“What matters most isn’t the predictions themselves but how we respond to them, and whether we respond to them at all,” she writes. “The forecast that stupefies isn’t helpful, but the one that provides fresh thinking can be.”

And fresh thinking is what Heffernan wishes to provoke, mostly through storytelling, occasionally through rhetoric. Are we trapped by history? Only if we let our own narratives confine us. Can parents use an app to “predict life outcomes and . . . maximise the life-long potential of your child”? No. She finds the idea appalling.

Better, she suggests, to explore, empower, experiment. Whether you’re running a multinational, pondering a career change or being a parent, the same wisdom applies: sometimes things go wrong, or go right, and we don’t know why. Keep your eyes open. Stay engaged. Listen to others. Don’t be afraid to change course. Contribute to your community, and make connections before trouble strikes: “Don’t exchange business cards in a crisis.”

At times, Uncharted resembles a collection of secular sermons, each illustrated with a story. Heffernan stands in the pulpit quietly admonishing us to be a little wiser, to reflect a little more, to do the things that deep down we already know we should be doing.

Moments of counterintuitive astonishment are scarce, but the book is probably better for that. And it largely avoids the usual suspects: Apple, Google, 3M, the US military. Instead, we find ourselves in the shoes of a disillusioned Catholic priest, realising he has fallen in love and getting no help from the Church. Or in a room with a diverse group of Mexicans, from mobsters to senators, as they try to explore the future with a scenario-planning exercise. Or with the management of Nokia, wondering if there is life after cell phones. These are subtle tales of struggle and compromise.

The storytelling is not without its flaws. The physicist Marzio Nessi morphs into a Mr Messi, who is surely a different kind of genius. A discussion of fresh ideas in healthcare required multiple re-readings to sort out who was doing what, where, and whether these were diverse experiments across the nation. More than once I checked the index because I assumed I’d missed something. These are small things, but in a book that tries to flow so freely across so many stories, they are barnacles that produce drag.

That said, Heffernan is generally a deft storyteller and the book’s reliance on such stories is a strength. Bad “smart thinking” books offer 2×2 matrices and jargon; good ones offer theory and evidence. Heffernan steps outside the category entirely. She wants us to engage with the particularities of people, places and the problems they faced — to empathise with them, reflect on our own lives and our own careers, and to draw our own conclusions.

Uncharted is not a book to skim in the business class lounge. Heffernan’s approach is more like a music lover trying to broaden the appreciation of a patient friend. “Here’s an example; listen to this; here’s another. Compare, contrast. Now do you see what I’m getting at?” It is messy, and occasionally frustrating, but wise and appealingly human.

UK: Amazon | Blackwell’s

US: Amazon | Powell’s (publishes Sep 2020)
Written for and first published in the Financial Times on 19 February 2020.

Catch up on the first season of my podcast “Cautionary Tales” [Apple] [Spotify] [Stitcher]

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


16th of March, 2020 | Marginalia | Other Writing
Other Writing

The changing face of economics

Robert Solow, the Nobel laureate economist, says he had long been “bothered” by the fact that most people — even educated people — “had no clear idea of what economics is, and what economists do”.

Solow was born in Brooklyn in 1924, to what he has described as a “lower-middle-class family”, and grew up during the Great Depression.

Although his father always had work, Solow has said that from about the age of eight onwards, he was conscious that his parents were constantly worrying, “and their worries were purely economic: what was going to happen, could they continue to make ends meet”.

This awareness would shape his thinking throughout his life. He won a scholarship to Harvard at 16 and began an academic career that would see him reach the top of his field, winning the Nobel in 1987 for his contributions to the theory of economic growth.

Yet despite such acclaim, Solow, who is now 95, felt that his subject remained frustratingly opaque to the general public.

Then, a few years ago, he was seated by chance next to the photographer Mariana Cook at a friend’s dinner party. Cook had recently completed a project photographing 92 mathematicians, ranging from Fields Medal winners to promising young men and women at the start of their careers.

Solow suggested that she embark on a similar series of portraits, but of economists — and Cook agreed.

As he writes in the introduction to the resulting book, which contains 90 black-and-white portraits shot by Cook over the course of three years: “The idle thought became a reality, and I found myself involved in many ways. Naturally, I had to ask myself: was making a book of portraits of academic economists a useful or reasonable or even a sane thing to do?”

It is a fair question. Economics remains a perplexing discipline. It is often regarded as purely the study of money. (Far from it: indeed, some critics complain that economists aren’t as interested in studying money as they should be.) It is easily caricatured as overly mathematical, full of absurdly unrealistic assumptions, elitist and corrupted by proximity to business and finance.

And, as with any caricature, there is some truth in all of these complaints.

So what actually is economics? Alfred Marshall began his enduringly influential 1890 book Principles of Economics: “Political economy or economics is a study of mankind in the ordinary business of life; it examines that part of individual and social action which is most closely connected with the attainment and with the use of the material requisites of wellbeing.”

“The ordinary business of life.” It is not a bad definition, even now. But economics has changed since Marshall’s day. What is being studied has changed, and how, and even who does the studying.

Start with the “what”. It might seem obvious that economists should stick to the study of the economy — the production and consumption of goods and services that are either traded in markets or could be. They never really did stay in their lane: Thomas Robert Malthus was a proto-environmentalist and an inspiration for Charles Darwin; John Stuart Mill was a philosopher; John Maynard Keynes was intellectually promiscuous.

But it was Gary Becker and his followers who systematically applied the methodological tools of economics to social issues such as racial discrimination, the family and addiction.

Some of the ideas Becker championed — notably the use of education to improve “human capital” — became so mainstream as to be a cliché. Others remain controversial.

But nobody bats an eyelid when the economist Emily Oster publishes books of advice on pregnancy and parenting, when Steven “Freakonomics” Levitt opines on when to rob a bank, or even when the Financial Times publishes a column using economics to give tips on dating and etiquette. Economic imperialism is here to stay.

The “how” is also changing. Twenty years ago, the economist Ed Lazear published a paper, “Economic Imperialism”, with Becker at its centre.

Lazear argued that economic imperialism had been a success because “economics stresses three factors that distinguish it from other social sciences. Economists use the construct of rational individuals who engage in maximising behaviour. Economic models adhere strictly to the importance of equilibrium as part of any theory. Finally, a focus on efficiency leads economists to ask questions that other social sciences ignore.”

This is, I think, a fair summary of the state of play in 1999. But two decades on, economics is no longer quite so taken with the assumption of rationality. With Nobel memorial prizes for behavioural economics going to Daniel Kahneman (2002), Robert Shiller (2013) and Richard Thaler (2017), it has now become perfectly acceptable to publish economics papers with an alternative view of human decision-making.

That is not the only change in the toolkit of economics. The first modern randomised clinical trial was run by a man trained in economics, Austin Bradford Hill, in the late 1940s — but the methodology did not become widespread in economics until the 21st century.

The randomistas — most prominently the 2019 Nobel laureates Abhijit Banerjee, Esther Duflo and Michael Kremer — put the experimental results centre stage; the considerations that Lazear highlighted are not forgotten, but they are left in the wings.

Other economists are broadening the tools of economics by taking advantage of huge datasets and operating on the fringes of computer science. Two prominent examples are Susan Athey — the first female winner of the John Bates Clark Medal — and Raj Chetty, who won the same prize at the tender age of 33. Among the sources of this new data rush are internet traffic, cell-phone metadata, satellite imagery and the ballooning administrative datasets used by large organisations to run their businesses.

If the “how” is changing quickly, the “who” is stubbornly resistant to change. Economists used to be white and male. Now they are mainly white or Asian, and male.

Of course, there are some spectacular exceptions: in 2005, when I began writing my column for the FT, there was no female winner of the Nobel memorial prize in economics. There are now two.

Even more perplexingly — given that the award is for younger researchers — there was no female winner of the John Bates Clark Medal. There are now four, which is progress. Women such as Elinor Ostrom, Claudia Goldin and Janet Yellen have reached the very top of the profession, as did the late Alice Rivlin.

But economics still lacks the diversity it needs to reach its full potential. The Royal Economic Society has launched a “Discover Economics” campaign to address this, but it will take more than a recruitment drive: a 2014 study, “Women in Academic Science”, concluded that while other academic disciplines had been levelling the playing field, economics was an exception. We need to do better.

Economics is a controversial discipline, and that is not likely to change. Whereas scientists only occasionally have to dip their toes into political waters such as climate change or vaccination, most of what economists study — from inequality to immigration, trade to taxation — lies squarely in the middle of the political battlefield.

Still, some of us are doing our best, and all of us are human, as these portraits show. It is nice to be reminded of that.

Mariana Cook’s book is “Economists”.
Written for and first published in the Financial Times on 21 December 2019.

Catch up on the first season of my podcast “Cautionary Tales” [Apple] [Spotify] [Stitcher]


20th of January, 2020 | Other Writing
Other Writing

Extreme Economies – disaster zones with lessons for us all

In the 17th century, a boy named Hugh Montgomery fell from his horse and lost part of his rib cage; doctors replaced it with a metal plate and he survived — with a living heart that could be inspected by the pioneering doctor William Harvey. Phineas Gage survived a metal spike through his head in 1848, and the changes in his character inspired fresh understanding of how the brain works. If we can learn about the healthy human body by studying people who have suffered catastrophic injuries, might a similar trick work for economics?

That is the premise of Richard Davies’s book, in which he reports on economies that he views as unusually resilient, such as Aceh after the dreadful tsunami of 2004, or dysfunctional, such as Glasgow and Kinshasa, or otherwise extreme, such as Akita in Japan, where the average age is 53.

This is an unconventional approach. Economists and business journalists tend to focus on the same broad trends in the same major economies. But Davies suggests, plausibly, that many parts of the world will eventually have the demographics of Akita, the inequality of Santiago or the squandered environment of Darien, Panama, and so a journey to the extremes gives us a glimpse of our own future. Even when it does not, there is always the thrill of exploration.

I sympathise with the conceit. One of my own books, The Undercover Economist Strikes Back, lingers on RA Radford’s remarkable 1945 account of an economic system emerging in a prisoner-of-war camp. Quite apart from the grim fascination of the subject matter, a prison camp teaches us a surprising amount about how a real economy works.

Similarly, Davies studies the irrepressible markets inside the Louisiana State Penitentiary. There’s the mackerel economy — mackerel, being light, standardised and durable, makes a good currency — and the “dot” economy. In the outside world, Green Dot pre-paid plastic cards, as good as cash in most stores, can be loaded with value by purchasing a “MoneyPak”, which is essentially just a 14-digit code, the “dots”. Inside the prison, prisoners can bribe guards or pay each other large sums, untraceably; all they need is for an associate to pass them the “dots”.

Extreme Economies makes two promises: to give us a global tour of disaster and recovery, showing us places we would never see first-hand; and to teach us something about how ordinary economies work by studying extreme ones. Davies delivers impressively on the first promise, with crisp and sensitive reporting from an extraordinary range of inaccessible places.

The lessons, however, are more uneven. Davies notes, for example, that after the Aceh tsunami, the few survivors were able to sell their gold jewellery to local gold traders Harun and Sofi, who could access the international market price. That gold was always intended as saving for hard times, and Davies tells us it worked as intended, in “contrast with the western financial system”. Yet while gold bracelets worked, bank accounts would have worked better — it took three months for the gold traders to be up and running again. If there is a lesson for the reform of the western banks here, Davies does not tell us what it is.

While the post-1945 decline of Glasgow’s shipyards is well described, it is not fully explained: the yards on the Clyde did not invest in dry docks, says Davies, but he does not say why. And in the camp of Zaatari in Jordan, Davies praises the entrepreneurial spirit of Syrian refugees, noting that in 2016 the ratio of new to established firms was world-beating. He fails to acknowledge that since Zaatari was barely four years old at the time, it is surprising that the ratio wasn’t higher. One sure way to have a high ratio of start-ups is to live in a place that until recently did not exist.

That aside, the descriptions of Zaatari are a triumph. Davies takes us inside, introduces us to the residents and deftly sketches both their many struggles and some of the pleasures of life in the camp. The contrast with another camp, Azraq, is unforgettable: Azraq is better planned but much more tightly controlled. Life there is equitable but joyless. As a discussion of the strengths and weaknesses of markets versus planned economies, Extreme Economies is one of the most subtle and surprising I have read. Davies sets the austere modernism of Azraq against the messy improvisations of Zaatari. It’s not just about access to material goods, but the way Azraq is “desolate, empty and depressing”. The homes in Azraq are sturdier, the electricity supply more reliable — and yet few people wish to move from Zaatari to Azraq.

Davies returns to Zaatari, and sits on a rooftop sipping orange soda and eating grilled chicken, contemplating the camp’s joys and sorrows. Here he delivers on his promises, giving us a glimpse into a different world, and a lesson learnt about our own.

Written for and first published in the Financial Times on 6 December 2019.

Catch up on the first season of my podcast “Cautionary Tales” [Apple] [Spotify] [Stitcher]

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


7th of January, 2020 | Other Writing
Highlights

Why we fall for cons

There may be times and places where it’s a good idea to talk back to a military officer — but Germany in 1906 wasn’t one of them. So the young corporal didn’t. The corporal — let’s call him Muller — had been leading his squad of four privates down Sylterstrasse in Berlin, only to be challenged by a captain.  Captain Voigt was in his fifties, a slim fellow with sunken cheeks, the outline of his skull prominent above a large, white moustache. Truth be told, he looked strangely down on his luck — but Muller didn’t seem to take that in. Like any man in uniform, Captain Voigt appeared taller and broader thanks to his boots, smart grey overcoat and Prussian-blue officer’s cap. His white-gloved hand rested casually on the hilt of his rapier.

“Where are you taking these men?” he barked.

“Back to barracks, sir,” replied Muller.

“Turn them around and follow me,” ordered Voigt. “I have an urgent mission from the ‘all-highest’ command.”

Direct orders from the kaiser himself!

As the small group marched towards Putlitzstrasse station, the charismatic Captain Voigt saw another squad and ordered them to fall in behind. He led his little army on a train ride towards Köpenick, a charming little town just south-east of the capital.

On arrival, the adventure continued: bayonets were to be fixed for inspection. It had been an extraordinary day for Corporal Muller and his men. But it was going to get a lot more extraordinary: what they were about to do would be the talk of newspapers around the world.

 

Captain Voigt’s impromptu strike force burst into Köpenick town hall and into the office of the mayor, a man named Georg Langerhans. Langerhans, a mild-looking fellow in his mid-thirties with pince-nez spectacles, a pointed goatee and a large, well-groomed moustache, stood up in astonishment and demanded an explanation. Voigt promptly placed him under arrest, by order of the kaiser.

“Where is your warrant?” stammered Langerhans.

“My warrant is the men I command!”

Voigt ordered the town treasurer to open the safe for inspection: fraud was suspected. The safe contained three thousand, five hundred and fifty-seven marks and forty-five pfennigs. Captain Voigt was punctilious about the count, confiscated the money, and handed over a receipt to be stamped.

It was nearly a quarter of a million dollars in today’s money.

Captain Voigt sent a pair of soldiers to find and detain Mayor Langerhans’s wife. She, too, was a suspect. He then searched the town hall office while his men kept the officials under arrest. Failing to find what he sought, he decided to wrap up the mission. The officials were to be driven to a police station where they would be detained and interrogated.

Captain Voigt himself walked to Köpenick railway station. He collected a package from the left-luggage office, and stepped into a toilet cubicle. A minute or two later, he stepped out again — and he was almost unrecognisable, having changed into shabby civilian clothes. He ambled, bandy-legged, across the station concourse. This anonymous fellow boarded the train back to Berlin, with his uniform neatly folded under one arm, and a bag of money under the other. Just like that, the “Captain of Köpenick” was gone.

Meanwhile, Corporal Muller dutifully presented his prisoners at the police station in central Berlin. The situation quickly became baffling to all concerned. Nobody had heard anything about the “all-highest” demanding the interrogation of the Mayor of Köpenick — nor his wife. After a phone call to headquarters, the head of the German general staff himself, General Helmuth von Moltke the Younger, arrived to resolve the situation. But nobody had received any orders from the kaiser. Nobody could see any reason to detain the mayor, or his wife, or his treasurer. And nobody could recall ever having met a “Captain Voigt” before. No wonder. Except in the minds of the bemused soldiers and their civilian prisoners, Captain Voigt never existed. Instead, they had met Herr Wilhelm Voigt, an ex-convict, an ex-shoemaker, a nobody, who possessed nothing more than a confident manner . . . and a very nice uniform.

 

The tale I just told you is a famous one in Germany. It became a play, and an Oscar-nominated film. (The most comprehensive English-language account I could find is by the historian Benjamin Carter Hett.) When the Germans tell the story they tend to linger on the prelude to the heist. What kind of a man does this? Who was Wilhelm Voigt, and what inspired his audacious confidence trick? Voigt was a crook, no doubt about it — his crimes included armed robbery. But the judicial system had treated him harshly, stuffing a legitimate appeal into a filing cabinet. In this version of the story, Voigt was persecuted by a cruel bureaucracy, driven to ransacking the mayor’s office looking not for money but for the paperwork he needed to get a job. No wonder he became seen as a sympathetic figure in German literature.

The English-speaking world drew a different lesson from the reports that filled its newspapers: that the Germans are suckers for a shouty man in a uniform. The Morning Post named Voigt “the most humorous figure of the century”. The writer GK Chesterton could scarcely contain his glee upon reading the “comic” reports from Köpenick of the “absurd fraud (at least, to English eyes)”. An Englishman, mused Chesterton, would have seen through the bluster immediately.

Yet four years later, a group of young upper-class pranksters including the novelist Virginia Woolf and the artist Duncan Grant managed to arrange for a tour of the Royal Navy’s flagship, HMS Dreadnought, by putting on turbans, brown make-up and fake beards, and claiming to be from the royal family of Abyssinia.

“Bunga bunga!” they boomed as they greeted each other, and when they had to improvise further, they spoke scrambled fragments of ancient Greek poetry they’d learnt at school. Faced with this ridiculous, and to our modern eyes profoundly offensive prank, the Royal Navy responded with a commensurate display of ignorance: it treated the visitors with all the honour it could muster, including the flag and anthem of the nation of Zanzibar rather than Abyssinia. That was apparently close enough to satisfy everyone.

It’s easy to laugh — as GK Chesterton did — when it happens to someone else. But the closer I looked at the story of the Captain of Köpenick, the less funny it seemed. Faced with the right con, we’re all vulnerable. Any one of us could have been the hapless Corporal Muller. And if we don’t understand how the trick worked, Wilhelm Voigt’s modern-day successors will do far more damage than he could ever have imagined.

 

Since Wilhelm Voigt persuaded people to obey orders that they should not have obeyed, you may already be thinking about Stanley Milgram. Milgram is the psychologist who, in the 1960s, conducted the most famous and controversial psychological experiment of all time — an experiment that I think we tend to misunderstand. Milgram recruited unsuspecting members of the American public — all men — to participate in a “study of memory”. On showing up at the laboratory, in a basement at Yale University, they met a man — apparently a scientist, just as Voigt had apparently been a Prussian army captain — dressed in a tie and grey lab coat.

“Very straightforward and professional, just what you’d expect from Yale,” one participant recalled. (Gina Perry’s book Behind The Shock Machine is an authoritative account of the experiments.)

The man-dressed-as-a-scientist supervised proceedings. Participants would be assigned the role either of “teacher” or “learner”. The learner was then strapped into an electric chair while the teacher retreated into another room to take control of a machine with switches labelled with terms including: “slight shock”, “moderate shock”, “danger: severe shock” and, finally, “XXX”.

As the learner failed to answer questions correctly, the teacher was asked to administer steadily increasing electric shocks. Although the teachers had received a painful shock themselves as a demonstration and had witnessed the learner complaining of a heart condition, many proved willing to deliver possibly fatal shocks while listening to screams of pain from the other side of the wall.  Of course, there were no shocks; both the screaming “learner” and the scientific supervisor were actors. The true experiment was studying the “teachers”: how far would they go when following direct orders?

In the best known study, 65 per cent of experimental subjects went all the way to 450 volts, applying shocks long after the man in the other room had fallen silent. Under the guise of science, Stanley Milgram had perpetrated yet another of these grim hoaxes.

Milgram’s research agenda was influenced by the shadow of the Holocaust and a desire to understand how it had been possible. He made the link explicit, and argued that his experiment was all about “obedience to authority”. But modern scientists no longer see Milgram’s research in quite that way.

There’s a lot we could say about those experiments — about their ethics, and about the more than 20 experimental variations. But the most fundamental objection is that these experiments may not be about obedience at all. Alex Haslam, a psychologist who has re-examined the studies in recent years, found that when the man in the lab coat gave direct orders, they backfired. One pre-scripted instruction produced universal disobedience: “you have no other choice . . . you must continue”. Experimental subjects concluded that this was simply untrue; nobody continued after that order. People need to be persuaded, not bullied, into participating.

So if these experiments weren’t about blind obedience, what were they about? Here’s a detail that is usually overlooked: Milgram’s shock machine had 30 settings, fine increments of 15 volts. It’s hard to object to giving someone a tiny 15-volt shock. And if you’ve decided that 15 volts is fine, then why draw the line at 30 volts? Why draw the line at 45? Why draw the line at all?

At 150 volts, the “learner” yelled out in distress. Some people stopped at that point. But those who continued past 150 volts almost always kept going to the full 450 volts. They were in too deep. Refusing to administer a shock of 225 volts would be an implicit admission that they had been wrong to deliver 210. Perhaps Stanley Milgram’s experiments weren’t a study of obedience so much as a study of our unwillingness to stop and admit that we’ve been making a dreadful mistake. We’re in too deep; we’re committed; we can’t turn back.

Think back to that day in Berlin, in 1906. Voigt stopped Corporal Muller in the street and demanded to know where he and his men were going. What was Muller to do? Demand proof of identification? Of course not. Muller didn’t want to risk a court martial over answering a simple question.

Voigt then asked Muller’s squad to follow him. That’s a bit more of a stretch, but Muller had already obeyed one order, already addressed this stranger-in-a-uniform as “sir”. Marching down the street behind him was just one small action further.

The pattern repeated itself with the second squad: when they first saw Captain Voigt, he was already at the head of half a dozen men; that was the evidence he was who he said he was. Why not fall in? Why not get the train to Köpenick? Why not fix bayonets for inspection? It’s really only at the moment that they burst into the town hall that the doubts might occur.

But by then, the whole business was already well beyond the 210-volt mark. They had travelled all the way across Berlin. They had been following Wilhelm Voigt’s instructions for a couple of hours. It would have been very late in the day for Corporal Muller, or anyone else, to have the presence of mind to stop, think and challenge their new captain.

Georg Langerhans, the young mayor, saw the situation very differently — he immediately demanded to see a warrant. Langerhans, of course, was effectively being asked to apply a 450-volt shock without preamble. No wonder he was sceptical.

At first glance, then, Wilhelm Voigt’s con and Milgram’s shock experiments are evidence for the idea that we’ll do anything for a figure of authority wearing the right outfit. But look deeper and they’re evidence for something else — that we’re willing to help out with reasonable requests, and that step by step we can find ourselves trapped in a web of our own making. Each small movement binds us more tightly to the con artist. We become complicit; breaking free becomes all but impossible.

That said, the right outfit matters. And here I want to think bigger than the world of the con artist. Yes, we fall for cons. But we fall for all kinds of other superficial things that shouldn’t matter, like a nice uniform, and those superficial things are constantly influencing our decisions — including decisions that we may later come to regret.

 

Almost exactly 110 years after Wilhelm Voigt’s audacious heist, Hillary Clinton and Donald Trump squared off in one of three televised debates. You might remember it. In a town-hall format, the candidates were able to roam the stage. And Trump certainly did roam, following Clinton around as she answered questions, looming behind her, always on camera, clearly visible over the top of Clinton’s head.

After the debate, that was all anyone could talk about. Was it an attempt at intimidation? Perhaps. But there’s something else about that footage of Donald Trump stalking Hillary Clinton: he towers over her.

Voters were being offered all kinds of choices in that election but one that was never really articulated was this: would you like to elect the third-tallest president ever, or the shortest president since James Madison two centuries ago?

There’s not much doubt that some voters were influenced by the disparity in height. The US does elect a lot of tall presidents. Trump was taller than Hillary Clinton. Obama was taller than McCain. Bill Clinton and George Bush Sr were the same height — towering over tiny Ross Perot, the feisty independent challenger they beat into third place. Bush Sr was taller than Dukakis. Reagan was taller than Carter, Nixon was taller than Humphrey, Kennedy was taller than Nixon, Truman taller than Dewey. Lyndon Johnson was taller than pretty much everyone. Are we electing a president here, or picking a basketball team? Of course there are some exceptions to the rule: when Carter beat Ford, it was a victory for the little guy.

But serious statistical analysis concludes that taller presidential candidates are more likely to win the election, more likely to win re-election, and more likely — unlike Donald Trump — to win the popular vote. Since the dawn of the television age, the only person ever to have overcome a height deficit of more than three inches was the incumbent George W Bush running against John Kerry.

Hillary Clinton would have been the first female president, true. She would also have been the first president to win despite a 10in height disadvantage since 1812. Americans may not have elected any female presidents over the years — but they haven’t elected any short men, either — not in a long, long time.

This isn’t just about presidential elections and it isn’t just about height. Across the world, voters favour candidates based on the most superficial characteristics imaginable. For example, one study — by economists Daniel Benjamin and Jesse Shapiro — found that people were fairly good at predicting the victor of an election for state governor after being shown a brief piece of video of a gubernatorial debate with the sound turned off: just looking at the candidates seemed to be enough to judge who voters would pick. In fact, giving people audio too actually made the predictions worse, presumably because it distracted them from what mattered: appearances.

We hairless apes seem to go for simple proxies when judging someone’s capacity for leadership. That 400-page manifesto? We’re not going to read it. But we pay close attention, whether we realise it or not, to the fine details of a candidate’s posture, styling, clothes — and, of course, height.  Corporal Muller and his men were completely taken in by Wilhelm Voigt’s appearance and mannerisms. But they’re not the only ones to pay attention to appearances.

Consider the advertising classic, “I’m not a doctor but I play one on TV.” And then, as though it was the most natural thing in the world, the man who admits he isn’t a doctor goes on to tell us what brand of cough syrup to buy. Even Wilhelm Voigt would not have been so audacious as to announce: “I’m not a captain, I’m just wearing the uniform.”

And yet the advertisements work. We buy the cough syrup from the man who tells us, “I only look like a doctor”. That’s how powerful appearances can be. And what about “I’m not a successful businessman, but I play one on TV?” Oh — I think I know that guy.

 

Fraudsters using the playbook of Wilhelm Voigt trick people every day. First, they get the appearances right. Maybe it’s a text message that looks like it’s from your bank — the phone number is right, after all. Maybe the doorbell rings and the man is standing there with an official-looking ID; he wants to come and check your electricity meter. That ID does look genuine. Maybe it’s a smooth-talking politician with a good suit. Milgram well understood the need to get the clothes right. In a variation where the experimenter didn’t wear a lab coat, few people went to 450 volts.

Second, fraudsters put people into what psychologists call a “hot state”. We don’t think so clearly when we’re hungry, or angry, or afraid. Wilhelm Voigt yelled at Corporal Muller. A politician who wanted to put people into a hot state might announce that the country was being taken over by gangs and terrorists, and that his opponent should be locked up. Whatever works.

Third, they pull the heist one small step at a time. They start with the request for information: where are you taking these men? You are Ms Jane Doe, aren’t you? I’m sorry to report that your bank account has been compromised, Ms Doe. Just enter your password and username — just like you usually do — and we’ll sort it out for you.  Give us someone who looks or sounds the part; apply a bit of fear, anger, lust or greed; and then proceed in salami slices from the reasonable to the insane, so smoothly that we don’t stop to think. That’s how Wilhelm Voigt fooled Corporal Muller. But it’s how he would have fooled any of us, if he caught us at the wrong moment.

At first it looked as though Voigt would enjoy the fruits of his acting skills in peace. But as he relaxed with his money, a former accomplice of his saw the reports of the daring heist in all the newspapers and remembered a prison conversation in which Voigt had dreamt of such a coup. He promptly reported Voigt to the authorities.

When four detectives burst into his apartment at six o’clock in the morning, they found Voigt enjoying breakfast. He protested that the timing was inconvenient. “I should like a moment to finish my meal.”

So the detectives watched him break open another crusty white roll, spread on a thick layer of butter, and wash it down with his coffee. You can’t help but admire the audacity.

At trial, Voigt became a folk hero. The judge sympathised with the way he had been treated, gave him an unexpectedly short sentence, then took off his judge’s cap and stepped down to clasp Voigt by the hand. “I wish you good health throughout your prison term, and beyond.”

The German authorities felt that — in light of the popularity of the Captain of Köpenick — even more ostentatious clemency was required. They pardoned him after less than two years in jail. The kaiser himself, on hearing of the deed, was said to have chuckled: “amiable scoundrel”.

Statues of Voigt were erected and waxworks made of him — including one in Madame Tussauds in London. He was paid to record his story so that people could listen to him recount his deeds. He went on tour, posing in his uniform and signing photographs of himself for money.

A local restaurateur begged him to come and dine as often as he wanted, free of charge, knowing that his presence would attract other customers. A wealthy widow gave him a pension for life. Never let it be said that the Germans lack a sense of humour. But while the comedy is undeniable, we should not be too fond of the Prussian prankster. Perhaps Wilhelm Voigt’s adventure did little harm in the long run. The same cannot be said for some of the con artists who followed in his footsteps. It is exciting to read about a fraud — from a distance. It is not so funny to live through one.

 

This article is based on Episode 2 of my new podcast, “Cautionary Tales”. [Apple] [Spotify] [Stitcher]

Published in FT Magazine, 16/17 November 2019.

 

Further reading

The best English-language account I could find of the Köpenick story is by Benjamin Carter Hett: “The ‘Captain of Köpenick’ and the Transformation of German Criminal Justice, 1891-1914”, Central European History 36 (1), 2003.

I first read about the story in Nigel Blundell’s The World’s Greatest Mistakes. Other accounts are at Strange History and The Rags of Time; Koepenickia offers various contemporary German newspaper accounts. There are many small differences between the accounts, but the overall story remains just as remarkable.

The definitive account of Stanley Milgram’s experiments is Gina Perry’s Behind the Shock Machine and Alex Haslam was interviewed by Radiolab in a great episode about the same topic.

An overview of the evidence on tall presidents is Gert Stulp, Abraham P. Buunk, Simon Verhulst and Thomas V. Pollet, “Tall claims? Sense and nonsense about the importance of height of US presidents”, The Leadership Quarterly, Volume 24, Issue 1, 2013.

The study of gubernatorial elections is Daniel J. Benjamin and Jesse M. Shapiro, “Thin-Slice Forecasts of Gubernatorial Elections”, The Review of Economics and Statistics, vol. 91(3), pages 523-536, 2009.

Daniel Hamermesh’s Beauty Pays looks at the overall evidence that appearances matter – including in politics.


Marginalia

What’s it like to have lunch with a Nobel laureate?

My recent “Lunch with the FT” with Richard Thaler (Nobel laureate, author of Nudge and Misbehaving) was a lot of fun. I don’t do these formal sit-down interviews often, but over the years I’ve racked up a few.

At the end of the lunch I mentioned to Thaler the other economists I’d lunched with. “Good company”, he said. I think he’s right. So, just in case you missed the other interviews:

Thomas Schelling (1921 – 2016, Nobel Laureate 2005). I interviewed Schelling in his home shortly after he won the Nobel. I was still barely a journalist at all; he was charming and gracious. I find Schelling and his ideas endlessly fascinating. If you’d like to read a Schelling book, perhaps start with Micromotives and Macrobehaviour.

Gary Becker (1930 – 2014, Nobel Laureate 1992). Becker, charmingly, committed a “rational crime” during the interview. His ideas were a big influence on my writing The Logic of Life.

My very first “Lunch with the FT” was with Steven Levitt, just before Freakonomics came out. It feels like a long time ago…

And if you want more, here’s my lunch with blogger, activist and novelist Cory Doctorow; here’s the time Michael Lewis played me at an obscure German boardgame.

 

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


Highlights

“If you want people to do something, make it easy.” Richard Thaler has Lunch with the FT

The Anthologist doesn’t serve cashew nuts, so I order a bowl of smoked almonds instead. When they arrive, caramelised and brown as barbecue sauce, I ask for them to be put right in front of Richard Thaler. He protests that the waiter isn’t in on the joke.

The readers will be, I assure him. “The educated ones, perhaps,” he concedes.

Those educated readers may know that Professor Thaler is a Nobel laureate economist, but even more famous as the co-author of Nudge. They may even know — from his later book, Misbehaving: The Making of Behavioural Economics — that the 73-year-old is fond of telling an anecdote about a bowl of cashew nuts that sheds light on his approach to economics.

He served the notorious bowl to some guests while dinner was roasting in the oven, then watched everyone compulsively munch on the nuts and gradually spoil their appetites. So Thaler decided to remove the temptation by hiding the cashews in the kitchen. His guests thanked him.

It would be an unremarkable tale, except that such behaviour simply does not fit the rational economic model of human behaviour. Either eat the cashews or don’t eat the cashews, says classical economics, but don’t thank the person who moves them out of easy reach.

Reflecting on such stories helped Thaler create “behavioural economics” — a branch of the discipline that aims at psychological realism. Doing so also helped him with the equally difficult task of persuading other economists to take the behavioural view seriously.

True, it’s just a story about cashews — but if you don’t think short-termism and weak willpower are economically significant in the grand scheme of things, I have a payday loan, a pension drawdown scheme and an auto-renewing gym membership to sell you.

And, sure enough, Thaler’s ideas about the importance of realistic human behaviour have permeated into the economic mainstream, particularly the study of finance. His policy proposals have influenced tax collection, organ donation, energy efficiency drives — and most notably pensions, where participation in workplace schemes dramatically increases when people must explicitly opt out if they are not to be automatically enrolled.

Thaler cultivates a happy-go-lucky persona, a man whose own weaknesses help him understand the weaknesses of others. “You assume that the agents in the economy are as smart as you are,” he once told Robert Barro, one of the pillars of the economics establishment, “and I assume that they’re as dumb as me.” Barro was happy to agree with that.

This sunny July, however, Thaler is a model of self-control. “Notice how many nuts I’ve had so far,” he announces, 20 minutes into our conversation. He gestures for emphasis. “Zero.”

I’m not surprised by that, although I am when Thaler — who struck me as a bon vivant — admits that he has been skipping lunch entirely. He’s in London for a fortnight, teaching a course at the London campus of the University of Chicago Booth School of Business, and after a generous breakfast he says he has neither the need nor the time for lunch.

This may also explain his lack of interest in the restaurant itself. We meet at the business school, and he’s chosen the closest place — announcing “it’s me again” to the waitress who stands outside. I don’t even glimpse the interior of The Anthologist, because she promptly directs us to a pavement table, which has a large masonry wall on one side and on the other — if you squint — a view down Gresham Street to a back corner of the Bank of England. The scooters and trucks roar past a couple of yards away, but Thaler has no trouble making himself heard.

He used to squeeze more out of his annual fortnights in London. “I would spend the morning with the Behavioural Insight Team” — the famous “nudge” unit established by David Cameron and inspired by Thaler’s book with the law professor Cass Sunstein — “then come and teach all afternoon. And then half the nights there would be dinners with friends. And I was comatose at the end of the first week.”

He does admit to having a few dinners planned, though — and to timing his visit to coincide with the Wimbledon Men’s Final. He and his wife, the photographer France Leclerc, had Centre Court tickets. Was he a fan of Djokovic or Federer?

“We support Rafa. Although if he had been playing in a match like that it might have got too much for my wife. She would have been hiding somewhere by the fifth set.”

It was the same on election night: the Trump/Clinton contest reduced his wife to a nervous wreck. “And who were you supporting in that one?” I ask. He gives me a withering look. “At least credit me with sentience.”

President Barack Obama seemed to appreciate behavioural economics and gave Thaler’s co-author, Cass Sunstein, a senior appointment. The Trump administration, observes Thaler, has no interest in behavioural economics. “Look, there’s no demand for expertise of any sort . . . The lack of competence and expertise is like nothing anyone has ever seen.”

Whitehall’s Behavioural Insight Team seems to be displaying more longevity than the White House equivalent. “The key move they made very early on was to extricate themselves from government.”

They’re now a semi-autonomous social enterprise in which the Cabinet Office retains a stake. They made that move, of course, before Cameron’s referendum-induced autodefenestration. “I will say that David Cameron never talked to anybody at the Behavioural Insight Team about the Brexit referendum”.

And what should they have said if he had? “One thing for sure is Remain is a horrible name. It’s weak. Whereas Leave is strong.”

Thaler has written about the referendum before in the Financial Times. He reminds me that Theresa May said, before the referendum: “The reality is that we do not know on what terms we would have access to the single market.”

The waiter interrupts us and presses Thaler to order some wine. He waves him away. “No, I have to teach for the next three hours.”

We return to May, and her explanation that a vote to Leave would be a vote for something undefined and unknowable. Yet as prime minister, she felt that it was quite sufficient to declare that Brexit means Brexit. “Brexit means Brexit — that is one of the dumbest statements that has ever been uttered by a head of state. And I’m aware that there are thousands of tweets one could compare it with. I mean, it’s simultaneously meaningless and wrong.”

The waiter finally manages to get us to order something. Thaler goes for a crispy duck salad. “It’s called salad, you know it has at least the illusion of being healthy”. I’m tempted by the Wagyu beef burger but feel ashamed (social pressure means nothing to homo economicus but is a powerful nudge for human beings), so I order some cod with samphire.

The waiter is keen to upsell. Spritzer? Some halloumi? Thaler and I are baffled by the suggestion of halloumi with cod and duck, although I would have cracked if the waiter had tried to sell us French fries.

We turn to the state of economics, and how it became so wrapped up in the idea of rational agents. Some of those models have a hypnotic pull, I suggest: they’re so ingenious, so difficult, and once you’ve understood how they work you don’t want to abandon them in favour of the bowl-of-cashews guy.

I’m recalling a time I was reading a classic article by Barro — in the emergency room, having dislocated my jaw after a spectacular yawn, which I protest was unconnected to the research paper in question. I don’t get far. “You should change this story!” hoots Thaler. “It should be that you read this paper and, literally, your jaw dropped.”

It’s a reminder that Thaler is a storyteller as well as a sharp theorist. Misbehaving is full of stories. “I decided to just start writing things that would amuse me,” he says — including an account of a huge academic bunfight over the allocation of corner offices at the University of Chicago economics department that cannot fail to provoke Schadenfreude.

“I sent that to my friend Michael Lewis. I said, ‘How much of the book could be like this?’ and he said ‘All’.”

Lewis (whom I interviewed here) isn’t a bad sounding board: he’s the author of Liar’s Poker, Moneyball and The Big Short. He also wrote a biography of Thaler’s friends and colleagues, the psychologists Daniel Kahneman and Amos Tversky. I wouldn’t mind getting him to look over my first drafts.

When it arrives, the cod is pleasant enough, but there isn’t much of it. I’m regretting not ordering the fries. The smoked almonds look tasty, but they’re across the table sitting beside Thaler’s left hand. He hasn’t so much as twitched towards them.

The key message of Nudge was that governments could improve the health and wellbeing of their citizens without infringing on their liberty, simply by more thoughtfully designing their rules, procedures, or even labelling. “If you want people to do something, make it easy.” Put the cashews in the kitchen and the fruit by the cafeteria checkout.

More recently, Thaler has been thinking and writing about what he calls “sludge”. It’s the same procedure in reverse: if you want people not to do something, make it difficult. Reaching for an example, Thaler has a bone to pick with The Times.

The first review of Misbehaving was published there, and Thaler’s editor sent him a link. “And I can’t get past the paywall without subscribing.” But then he notices there’s an offer of a month’s trial subscription at an introductory rate. “But I read further, having written a book about this, and I see that it will be automatically renewed.”

Not only that, it will be renewed at full price, “and that in order to quit, I have to give them 14 days’ notice. So the one month free trial is actually two weeks. And I have to call London [from Chicago] in London business hours, not on a toll free line.”

He pauses and chides me to check that the FT isn’t placing similar sludge in the way of readers who wish to unsubscribe. I assure him that nobody would ever want to unsubscribe, but in any case such knavery would be beneath us. But part of me wonders. “Check your policy at the FT,” he advises. (Later, I check. The FT offers a very similar introductory offer, but I am relieved to discover that the newspaper offers regional phone numbers and you can also cancel online.)

While we’re talking about the consumption of digital goods, I am keen to ask him about how he deals with email, smartphones and social media. We’re in the middle of a colossal set of experiments in behavioural manipulation that would have been hard to imagine when Sunstein and Thaler wrote Nudge over a decade ago. Google, Apple, Facebook and Amazon are constantly running little trials to see what we do in response.

“The world has changed. I remember that while we were writing the book, I got my first iPhone.”

But does it tempt him? Distract him? An iPhone, it seems to me, is a bottomless bowl of digital cashews. But he’s not worried. “I’m not on Facebook at all . . . I am on Twitter and I find much of it to be quite useful. There’s a growing academic economics Twitter that’s fantastic. There’s almost no ad hominem. There are people live-tweeting conferences. Fantastic. There are people who will give a 10-tweet summary of some new paper.”

Thaler stops eating his salad — he’s managed to get most of it down, in between his answers. I’ve long since finished my little piece of fish. The smoked almonds have somehow migrated into the centre of the table, easily within my reach. They are untouched. “Let the record be noted that my consumption so far is zero,” he declares.

Thaler isn’t interested in coffee or dessert, but says he has time if I want something. I order espresso. After it arrives, I take a sip, and then my hand moves instinctively towards the almonds before I catch myself. He laughs. “That was a trembling hand.”

My involuntary slip prompts us to start talking about accidents. “Here’s something I was thinking about this morning,” he says. “All these announcements to mind the gap. Can that conceivably be useful?”

“Mind the gap” is part of the sonic wallpaper of the London Underground, a reminder not to stumble accidentally into the space between Tube train and platform. I wonder if Transport for London has run an experiment. “I’m wondering that too.” Although we both doubt it.

“Now here’s my hypothesis. 99.9 per cent of the people on the Tube have blocked this out long ago. And whatever the percentage of tourists is, half of them have no idea what ‘mind the gap’ means. It could be ‘cheerio’.”

In short, the people who might conceivably benefit from the warning probably don’t understand it. So why not experiment with some different approaches to see if that reduces accidents?

The proposal is typical Thaler. He’s noticed a feature of everyday life that most of us either overlook or take for granted — and he’s turned it into an easily implementable experiment that might actually make the world a better place.

It’s time for him to go and teach. We shake hands, and then he reaches forward, slowly and deliberately, for a smoked almond. He holds it up in front of me as though displaying a fine diamond.

“One!” he says. Then he pops it into his mouth, and ambles off towards the business school. Only when his back is turned do I dare grab one myself.

The Anthologist 58 Gresham St, London EC2

Smoked almonds £3.75

Crispy duck salad £11.50

Cod with samphire £14.95

Sparkling water £3.95

Double espresso £2.90

12.5 per cent service £4.63

Waiter rounds up the bill (a nudge?) £0.32

Total £42.00
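As a sanity check on the arithmetic (a purely illustrative sketch, not part of the original column), a few lines of Python confirm that the 12.5 per cent service and the 32p round-up land the total exactly on £42:

```python
# A quick check of the bill's arithmetic (illustrative, not from the article).
items = {
    "Smoked almonds": 3.75,
    "Crispy duck salad": 11.50,
    "Cod with samphire": 14.95,
    "Sparkling water": 3.95,
    "Double espresso": 2.90,
}

subtotal = sum(items.values())                    # £37.05
service = round(subtotal * 0.125, 2)              # 12.5 per cent service: £4.63
round_up = round(42.00 - subtotal - service, 2)   # the waiter's nudge: £0.32

print(f"Subtotal £{subtotal:.2f}, service £{service:.2f}, round-up £{round_up:.2f}")
```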

 

Written for and first published in the Financial Times on 2 August 2019.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


Highlights

How behavioural economics helped me kick my smartphone addiction

The year 2011 was a big one for me. My son was born. We moved to a new city. I published a book. But something else happened that was in some ways more significant: on February 9 2011, I bought my first smartphone.

It didn’t feel like a milestone in my life at the time. I didn’t note it down in a diary or commit the date to memory. Only finding a copy of the receipt helped pin down the day. Yet I have come to realise that the phone was a very big deal indeed.

Daniel Kahneman, Nobel laureate and author of Thinking, Fast and Slow, distinguishes between the “experiencing self” and the “remembering self”. My remembering self dwells upon the landmark moments such as the new baby. But my experiencing self is all about the phone.

I spend more time interacting with it than I do interacting with my children. I am in the presence of the device more than I am in the presence of my wife, although at least I have my priorities straight as to which I go to bed with.

As Cal Newport puts it in a new book, Digital Minimalism, we didn’t sign up for this. My first email account (1994) received a handful of messages a day, most of them newsletters I subscribed to in order to prevent cobwebs forming in my inbox. Facebook (2004) was a curiosity, less interesting than the latest computer game.

The first iPhone (2007) had no app store and was originally conceived as an iPod that made phone calls — although since “crackberry” had just been named the word of the year by Webster’s New World Dictionary, perhaps we should have seen what was coming.

But we didn’t. The hardware and software of the mobile age have gradually and profoundly entangled themselves in most parts of most people’s lives. If you are anything like me, you pick up your phone much more often than you pick up a knife and fork, and spend far longer reading email than reading books.

Not that I wish to grumble. These tools are enormously powerful. Without them I’d need to hire a secretary, spend hours playing phone tag and give up on working during long journeys by train and plane. Yes, they may occasionally distract me during the school nativity play, but the alternative would have been to miss the play entirely, because the office and the school are 50 miles apart.

I am not entirely happy with the role these technologies play in my life, but neither do I want to relinquish them. I know I’m not alone. For several years now, I’ve been dispensing sporadic advice about email overload both to readers and — if I am honest — to myself.

But late last year, I decided to do something more radical: to deploy everything I knew about economic theory and behavioural science, along with a few hard-won practical discoveries, to rebuild my relationship with the digital world from scratch. This is the story of what I learnt.

The power of the status quo
Inertia is always the first obstacle. Richard Thaler, who won a Nobel Memorial Prize for his contributions to behavioural economics, coined the term “endowment effect” to label the behaviour of an oenophile economist.

The economist had snapped up some Bordeaux wines for $10 a bottle, only to see them appreciate in value to $200 each. The economist wouldn’t have dreamt of paying $200 for a bottle of wine, but didn’t want to sell the wine for $200 either. He was happy to drink it on special occasions instead.

This behaviour is illogical: either the economist should prefer $200 or he should prefer the wine, and which he actually possesses should make no difference. Yet his actions seem perfectly natural, and Thaler and colleagues were able to demonstrate similar behaviour in laboratory experiments.

We like what we have, and these experiments suggest we have no better reason for liking it than the fact that we have it: the disadvantages of choosing something else often loom larger than the advantages. As a result, we are reluctant to relinquish what we have — including the digital tools we’ve grown accustomed to using.

For this reason, digital sceptics such as Cal Newport and Jaron Lanier suggest that the first step in a reassessment of your digital habits should be a sharp temporary break.


Lanier, a pioneer of virtual reality and the author of Ten Arguments for Deleting Your Social Media Accounts Right Now, advises at least a six-month break from all social media. Newport suggests a briefer but broader ban: not only no social media, but no Netflix, no Google Maps, no smartphones — no digital tools at all for 30 days, apart from whatever is professionally essential.

The point here is not a “detox”. There is no more intrinsic benefit in taking a month off from computers than there would be in a brief, invigorating break from smoking or opiates.

The aim is to change the status quo to allow a reassessment. It’s only after you put down the electronic rucksack overflowing with digital possibility and stroll off unencumbered that you’re in a position to make a sensible decision about whether you really want to carry it around all day long.

So, I stripped various apps off my smartphone. The first time I dragged an icon to the “uninstall” bin felt like a big step, but it soon became a giddy pleasure. Off went the news apps, and a blog reader called Feedly that absorbed a huge amount of my time and attention. I already eschew games on my phone, but would have removed them too with gusto.

I spared the Financial Times app (which surely passes Newport’s test of professional necessity), and also retained Google Maps, a podcast player, The Economist’s “Espresso” app, the camera and the weather. Newport would have been more radical but I felt satisfied with my choices.

The big question was: what to do with my social media accounts? Facebook was simply too troublesome to delete, especially since my personal account is connected in opaque ways to a “Tim Harford” page maintained by my publishers. But I never had Facebook on my phone and after briefly unfollowing or muting all my contacts, I had no problem staying logged out.

My Twitter habit is more of a problem. I have 145,000 followers, gently persuaded over 10 years and 40,000 tweets to follow me — that’s about 10 books’ worth, or 20 years of weekly columns. This alone was a reminder of just what an effort Twitter could be; but deleting the account felt like the nuclear option.
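Those round numbers hold up, under some assumed word counts. In the sketch below, the per-tweet, per-book and per-column figures are illustrative guesses rather than numbers from the article:

```python
# Back-of-envelope check of "40,000 tweets ≈ 10 books ≈ 20 years of columns".
tweets = 40_000
words_per_tweet = 25        # a typical tweet, assumed
words_per_book = 100_000    # a typical trade book, assumed
words_per_column = 1_000    # a typical weekly column, assumed

total_words = tweets * words_per_tweet         # 1,000,000 words
print(total_words / words_per_book)            # ≈ 10 books
print(total_words / (words_per_column * 52))   # ≈ 19 years of weekly columns
```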

So what could I do? Two years ago, I hid the “mentions” column so that I don’t see what other people say about me on Twitter. (Much is friendly, some hurtful and almost all superfluous.) Yet I was still wasting a lot of time noodling around there for no obvious gain. So I deleted the smartphone app and on November 23 2018, I tweeted that I was planning to “get off Twitter for a bit”. By a pleasing coincidence, the last person I interacted with before logging out was the man who named the endowment effect, Richard Thaler.

Time for what?
One of the most important — and misunderstood — ideas in economics is that of opportunity cost. Everything we do is an implicit decision not to do something else. If you decide to go to an evening lecture, you’re also deciding not to be at home reading a bedtime story. If you spend half an hour browsing news websites, that’s half an hour you can’t spend watching football. Those 40,000 tweets cost me something, but I am not sure what and I certainly didn’t ponder the cost while tweeting them.

This neglect of opportunity cost is a very human trait: we routinely fail to bring to mind what our choices rule out. One fun if slightly dated illustration is the choice between a £1,000 high-end CD player and a slightly less excellent £700 unit.

A difficult choice — until it is phrased as a choice between a top-notch £1,000 CD player and a £700 player plus £300 worth of CDs. At that point, most people clearly prefer the second option. The opportunity cost of the more expensive player could hardly be more obvious, and yet bringing the obvious to our attention changes our decisions.


For this reason I was determined not simply to cut back on my digital activities, but to fill the freed-up time and energy with something else. I focused on three activities. First, more exercise: I replaced Twitter with an exercise app that could run me through some brief, vigorous training sessions.

Second, more fun: I looked up some old friends and invited them to play role-playing games with me every other Sunday evening, rolling dice and pretending to be wizards. (I realise that Dungeons & Dragons isn’t cool. But neither am I, so I don’t care.)

And third, since social media is supposed to be about connecting with far-flung people, and since Christmas was looming, I decided to start writing letters to include with Christmas cards. I couldn’t write properly to everyone but I did manage to write serious letters to nearly 30 old friends, most of whom I’d not seen for a while. I reflected on our long friendships, brought to mind good times long past and, in particular, recalled important moments shared just by the two of us, nobody else. The letters were the antithesis of clicking “Like” on Facebook.

The experiment was beginning to get interesting.

Swiping, fast and slow
As Daniel Kahneman explained in Thinking, Fast and Slow: “When faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.” Rather than asking whether we should buy shares in Amazon, we ask, “Do I like to shop with Amazon?” Instead of pondering the leadership and managerial qualities of a presidential candidate, we ask ourselves whether we’d enjoy having a beer with them.

Tristan Harris, executive director of the Center for Humane Technology, argues that the digital services we use often perform this substitution for us. Imagine, says Harris, a group of friends on a night out, trying to figure out where they can go to keep the conversation flowing. They turn to their phones for a recommendation and find themselves gawping at images of cocktails on Instagram.

The phones, says Harris, replace the question, “Where can we go to keep talking?” with, “What’s a bar with good photos of cocktails?” Phones simply do not suggest options such as going back to someone’s apartment or strolling along the waterfront.

This happens all the time, and we often don’t notice the substitution. Looking for love, we swipe through faces on Tinder rather than searching for local clubs or volunteering activities. Picking up the phone to check the time in the morning, we find “What’s the time?” swiftly replaced by “What did I miss while I was sleeping?”

While writing the last paragraph, I was confronted with the perfect example. It started to rain. Wanting to know whether the shower would last, I typed “weather” into Google. I was given an instant answer to my question, but I was also shown a list of weather presenters. Human faces! They are always eye-catching.

An old university acquaintance became a TV weather presenter; I wondered how she was doing. Who wouldn’t? Of course Google substituted an easier question: What does she look like these days? Other photos of weather presenters were also offered and, 30 seconds later, I was looking at pictures of a completely different weather personality, Tomasz Schafernaker, stripped to the waist.

Fifteen years ago, I would have struggled to explain this sequence of events to my wife. But nowadays, no explanation is really needed. We all know how swiftly and easily “When will it stop raining?” can lead to “What do Tomasz Schafernaker’s nipples look like?”

Trying to get some work done with an internet-enabled device is like trying to diet when there’s a mini-fridge full of beer and ice cream sitting on your desk, always within arm’s reach. You can crack open a can and take a swig before you’ve even realised what you’re doing.

Perhaps even worse, the tempting rewards are unpredictable. The psychologist BF Skinner once found himself trying to eke out a supply of food pellets he’d been using to reward rats. To his surprise, he found that “intermittent reinforcement” — sometimes the rats would get a pellet, sometimes not — was more motivating than reliable rewards. Unpredictable treats are highly addictive, just like email, social media or clickbait headlines.

So what to do about this problem? It’s not easy: by definition an intuitive response occurs before we have time to stop and think. The obvious solution is to create some friction. I installed a software plug-in called Strict Workflow on my desktop browser. With one click, it blocks time sinks such as Twitter, YouTube and various clickbait news websites for a period of 25 minutes.

It’s astonishing how many times during those 25 minutes I reflexively check, see the blocking message instead and go back to work. I’m hopeful that a few weeks or months with this blocker may break this fast-twitch habit, but in any case the software works.
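Strict Workflow is a real browser plug-in; the sketch below is not its mechanism, just a minimal illustration of the same friction idea, assuming a made-up domain list and a hosts file writable with administrator rights:

```python
"""A minimal sketch of the friction idea behind blockers such as Strict
Workflow: make distracting sites unreachable for a 25-minute stretch.

Assumptions (not from the article): the domain list below is invented,
and the script needs administrator rights to edit the hosts file.
"""

import time

HOSTS_PATH = "/etc/hosts"  # on Windows: r"C:\Windows\System32\drivers\etc\hosts"
MARKER = "# focus-block sketch"
DISTRACTIONS = ["twitter.com", "www.twitter.com", "youtube.com", "www.youtube.com"]
WORK_MINUTES = 25  # the Pomodoro-style interval the plug-in uses


def block() -> None:
    """Point each distracting domain at the local machine."""
    with open(HOSTS_PATH, "a") as hosts:
        for domain in DISTRACTIONS:
            hosts.write(f"127.0.0.1 {domain} {MARKER}\n")


def unblock() -> None:
    """Remove only the lines this script added."""
    with open(HOSTS_PATH) as hosts:
        lines = hosts.readlines()
    with open(HOSTS_PATH, "w") as hosts:
        hosts.writelines(line for line in lines if MARKER not in line)


if __name__ == "__main__":
    block()
    try:
        time.sleep(WORK_MINUTES * 60)  # 25 minutes of enforced friction
    finally:
        unblock()  # always restore the hosts file, even on Ctrl-C
```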

Meanwhile, by uninstalling news apps, Twitter and Feedly, I’d made my phone less like a sweet shop. As a testimony to the power of unconscious habit, after uninstalling Feedly, I deleted a few incoming emails, then unthinkingly tried to find it. It took a moment for me to realise I was searching for an app that I’d deleted less than a minute earlier.

It was a reminder that there’s more going on here than poor or short-sighted decision-making: often when we use our phones, we’re not really making any conscious decision at all.

Spillover benefits
Paul Romer won a Nobel Memorial Prize recently for analysing the way innovations spill over, enabling other innovations and powering the process of economic growth itself. Four weeks into my experiment, I was noticing some unexpected spillover benefits myself. The phone was still tempting, but decreasingly so. I took my children to see a Christmas film and, for the first time in years, didn’t feel the urge to check it.

I was getting a real sense of the mutually reinforcing nature of the distraction ecosystem — and how I’d failed to see it clearly when inside it. In November, for example, I would have been scrolling through Feedly looking for interesting material. I told myself I was looking for things to read, but really I was looking for things to tweet about. If pushed for time, I’d sometimes tweet things instead of reading them. This foolishness was evidence of a seriously bad habit.

But having uninstalled Twitter, I found myself less tempted to go and look at my Twitter stats (nothing to see) and also less tempted to flick through the blogs. After all, if I wasn’t going to tweet about them, why not read a book instead? Each new app that I removed from my phone weakened my tendency to pick up the device; often, it made other apps less useful or less appealing. I hadn’t seen this effect coming, but I wasn’t complaining.

Adapting to events
The first of January is usually the date for turning over a new leaf but, with hindsight, beginning my experiment in late November instead was an accidental masterstroke. The run-up to Christmas is a different kind of busy: the volume of email declines, replaced by Christmas cards and shopping lists. It’s a time when we often see people face-to-face instead of on Facebook.

By unplugging various digital services, I was moving with the wind at my back; doing firmly and deliberately what I might anyway have drifted towards.

The experiment was working well. I wasn’t missing Twitter at all. I was spending much less time with the phone. Some old friends were emerging from the woodwork to tell me how much they enjoyed receiving my letter. A few fretted that I was going through some kind of crisis, but overall the letters felt like a vastly better way to contact people than through Facebook.

When I did see friends and family, I found it easier to give them my full attention. Sherry Turkle, author of Reclaiming Conversation, has found that people initially used texts as an add-on to face-to-face conversation, but the texts soon became a substitute: more convenient, more controllable.

The problem with real conversation, one high-school senior told her, was that “it takes place in real time and you can’t control what you’re going to say”.

I sympathise, and we probably all had face-to-face conversations over Christmas that we wish could have been conducted from a thousand miles away. But while real conversation can be tiring, it is also vastly richer and more meaningful than a few dozen bytes of text. The less distracting I found my phone, the more I enjoyed talking to the people in front of me.

At the end of December came a strange and unexpected test: I was awarded an OBE in the New Year honours list. Suddenly the digital hush of the year’s twilight was interrupted by a steady stream of congratulatory messages.

I was out walking with some old friends, catching up on the news of the past few months and chatting about the year ahead. In my pocket, my phone was pinging, and I felt increasingly anxious about letting the messages go unanswered. I snatched moments here and there to type responses, offering slightly embarrassed excuses to my companions.

It’s not an experience I’m likely to repeat, but it taught me a few lessons. First, even friendly digital messages can provoke anxiety. I was fearful of appearing ungrateful by not replying promptly. This was silly. A delay would not have bothered anyone. But I couldn’t help myself. I should have left the phone at home.

Second, it’s easy to reactivate bad habits. After a couple of weeks in which I checked my phone a few times a day instead of several times an hour, the influx of messages pushed me back into the habit of checking my phone like a rat hoping for a food pellet. It took several days more to regain some calm.

Third, and more positively, the investment in spurning social media was paying dividends. Not wanting to ignore messages of congratulation, I did buckle and log into Facebook for the first time in weeks. It was completely silent. People had worked out, it seems, that Facebook wasn’t a good way to reach me. Twitter I managed to resist entirely.

Still, I did start to wonder whether the new regime would survive contact with the normal working routines of January. I called Jocelyn Glei, author of Unsubscribe and host of the Hurry Slowly podcast. “The notion that you’re going to change all your habits and be done is absurd,” she cheerfully warned me. Fair enough — but then how to sustain the new pattern?

Glei’s advice was to remain vigilant. It’s one thing to check out at Christmas, another to do so in September. It makes sense to stay off Twitter while writing a book; less sense, perhaps, while marketing it. Each new project, she advised, required a quick re-evaluation of where to draw the digital boundaries. The digital reset was going to be a work in progress.

Lessons learnt
The point of the break was to allow a thoughtful assessment of which digital services were worth letting back into my life. So as the new year starts up and emails start to flow freely again, what did I learn?

First, I didn’t miss being plugged into Twitter at all. I’ve been ignoring notifications for years — thus missing some of the benefit and much of the aggravation of the platform — but have still been tweeting away out of some strange combination of duty and inertia.

My new plan is to log in for a few hours on Friday, set up some links to my columns and other projects that may interest some people, and log out again. If I ever see a good reason to use the platform more intensively, I’ll be back.

Second, I enjoyed having a more boring phone. With very little on it now but an easily emptied email inbox and the FT app, I pick it up less often and for less time, and am more likely to do something useful with it when I do check it.

I did reinstall Feedly — which I find essential for my job — but will keep an eye on my usage. With no tweets to send, the app has become more useful. I read for the sake of learning rather than for the sake of tweeting.

Third, the “strict workflow” blocker worked so well in saving me from my fast-twitch impulses that I added my email inbox to the blocked list. I’d had limited success with an email blocker before, but this time was much more successful, perhaps because the blocker was part of a larger plan.

Finally, it was good to focus on the upside of the digital decluttering. Although it was partly an exercise in habit-breaking or self-denial, it was much more useful to think of it as spending time and attention on things that mattered.

Some old friends seemed genuinely touched to receive a real letter; nobody has ever been touched by a Facebook “Like”. I felt in better shape at the beginning of January than at the beginning of December, which is hardly the usual Christmas experience. I walked, talked, ate and drank with old friends. I even battled a few imaginary wizards.

I’ve no desire to give all this up to spend more time with my phone.

 

Written for and first published in the Financial Times on 17 January 2019.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


Highlights

Why big companies squander brilliant ideas

J F C Fuller did not invent the tank.

That distinction should probably fall to E L de Mole, an Australian who approached the British war office in 1912 with a design that was — in the words of historians Kenneth Macksey and John Batchelor — “so convincingly similar to those which finally went into service that one wonders why it was never adopted from the outset”.

But when the British army eventually introduced the tank, it was J F C Fuller, chief staff officer of what would later become the tank corps, who understood what to do with it. At 39 years old, Fuller was a small man with a neatly trimmed moustache and a hairline that had retreated over his crown and was beginning to march down the back of his head. He could have passed for a butler in a costume drama, but his appearance belied an inner radicalism. (He had been friends — and then enemies — with the occultist Aleister Crowley.)

Late in 1917, after almost 400 British tanks had, with modest success, lumbered across the German lines at the battle of Cambrai, Fuller applied his radical streak to the problem of using the tank effectively.

A new and much faster tank, the Medium D, could travel 200 miles at a speed of 20 miles per hour. Fuller proposed that these tanks would attack the German army’s brain — the string of German headquarters miles behind the front line.

A Medium D could roll across the trenches and be on the German command posts in an hour; Fuller’s attack would come from nowhere. Air support would disrupt German road and rail travel.

“Bad news confuses, confusion stimulates panic,” wrote Fuller. His idea was dubbed Plan 1919. By striking suddenly at the German command, Plan 1919 would cause the German army to disintegrate. It would, Fuller declared, be “the winning of the war in a single battle”.

His astonishing idea became “the most famous unused plan in military history”, according to his biographer Brian Holden Reid. But, of course, that is not entirely true. It was used to great effect, in 1940 — by the Germans. J F C Fuller had invented blitzkrieg.

 

The story might be a historical curiosity, had echoes of it not been repeated so frequently since the British army stuffed Fuller’s plans for blitzkrieg into a desk drawer. Organisations from newspapers to oil majors to computing giants have persistently struggled to embrace new technological opportunities, or recognise new technological threats, even when the threats are mortal or the opportunities are golden. Why do some ideas slip out of the grasp of incumbents, then thrive in the hands of upstarts?

In 1970, the photocopying giant Xerox established the Palo Alto Research Center, or Parc. Xerox Parc then developed the world’s first personal computer, with a graphical user interface, windows, icons and a mouse. Bill Gates of Microsoft and Steve Jobs of Apple observed developments at Xerox Parc with great interest. Xerox still makes photocopiers.

In 1975, a 24-year-old engineer named Steven Sasson built the world’s first digital camera — a patched-together device scavenging a lens from a Super-8 camera, magnetic tape in a portable cassette recorder and a TV screen. Sasson worked for Eastman Kodak, where in 1989 he and his colleagues also constructed the first modern digital SLR camera. Kodak built a sizeable line of business in digital photography, and earned a small fortune from the patents. Yet Kodak could not adjust to a world in which every phone contained a camera. The company filed for bankruptcy in 2012.

In 1999, Sony launched the “Memory Stick Walkman”, one of the world’s first digital music players. Sony was armed with the iconic Walkman brand, some of the world’s best consumer electronics engineers and the talent-soaked Sony-BMG music label. The Memory Stick Walkman went nowhere and, two years later, it was eclipsed by a product that transformed the fortunes of a struggling Apple: the iPod.

And in 1918, Britain had the best tanks in the world, a clear vision of how to use them and, in Fuller, one of the best military strategists to emerge from the British army. The German army was forbidden to use tanks at all; it was scarcely more than a collection of officers, a head without a body. Heinz Guderian, later one of the leading Panzer commanders, had not even seen the inside of a tank until he managed to go on manoeuvres with the Swedish army in 1929. Yet by the late 1930s, the British had conceded technical and tactical superiority to Hitler’s new army.

There is an obvious explanation for all of these failures and missed opportunities: people are idiots. “Now we can get back to some real soldiering,” remarked one senior officer to Fuller at the end of the first world war — as though defending Britain in an existential struggle had been a frivolous distraction from tending to noble horses, bright buckles and shiny boots. The army blocked publication of Fuller’s books for several years; they were seen as insubordinate.

When Steve Jobs visited Xerox Parc in 1979, and saw a windows-and-mouse interface for the first time, he couldn’t contain himself, according to Malcolm Gladwell. “Why aren’t you doing anything with this?” he yelled. “This is the greatest thing. This is revolutionary!” If Jobs had been teleported into the British war office in the 1920s, he might well have said the same thing.

Idiocy is a tempting explanation and not without merit. The top man in the British army, Field Marshal Sir Archibald Montgomery-Massingberd, responded to the threat of Nazi militarisation by increasing the amount spent on forage for horses by a factor of 10. Cavalry officers would be provided with a second horse; tank officers would get a horse too. As I say: people are idiots.

But there is something about the “idiot” theory that feels too glib. Consider Xerox Parc: how is it that a corporation could be smart enough to establish such a superb research centre, but then fail to take advantage? Was Sony really run by idiots in the 1990s? Even Montgomery-Massingberd is too casually caricatured. These organisations stumbled for a reason.

Management theorists have a word for it: disruption. “Disruption describes what happens when firms fail because they keep making the kinds of choices that made them successful,” says Joshua Gans, an economist at the Rotman School of Management in Toronto and author of The Disruption Dilemma. Successful organisations stick to their once-triumphant strategies, even as the world changes around them. More horses! More forage!

Why does this happen? Easily the most famous explanation comes from Clayton Christensen of Harvard Business School. Christensen’s 1997 book, The Innovator’s Dilemma, told a compelling story about how new technologies creep up from below: they are flawed or under-developed at first, so do not appeal to existing customers. Holiday snappers do not want to buy digital cameras the size of a shoebox and the price of a car.

However, Christensen explains, these technologies do find customers: people with unusual needs previously unserved by the incumbent players. The new technology gets better and, one day, the incumbent wakes up to discover that an upstart challenger has several years’ head start — and once-loyal customers have jumped ship.

Christensen’s story is an elegant one and fits some cases brilliantly. But there are many examples that do not fit — such as the failure of Xerox to exploit the cutting-edge research at Parc. The mouse and the graphical user interface aren’t a low-end competitor to the photocopier. They’re from a completely different universe.

The iPod didn’t sneak up on Sony from below: the company had seen the potential of a digital music player and moved quickly. Dominant organisations often see the disruptive technologies coming. “Kodak and Blockbuster weren’t caught by surprise,” Joshua Gans tells me. “They knew what the future looked like. They didn’t know later than everybody else, they knew ahead of everybody else.” They knew; but they were unable to put together the right response.

There is also a striking counter-example to Christensen’s idea that disruptive technologies begin as flawed or low-quality options. The iPhone was priced as a premium product with never-before-seen capabilities. It devastated Nokia and Research In Motion — now simply named BlackBerry Ltd in an echo of its once-iconic offering.

Christensen has tried to fit the iPhone into his theories. At first he predicted that incumbents would easily respond, and later he recast it as a disruption in a different industry altogether: “It was intended to disrupt the laptop. And that’s the way it happened.”

The laptop? Tell that to Nokia and BlackBerry.

Anyway, is the tank a low-end competitor to the horse? That’s a stretch. When a theory needs to be made this elastic, it may be time to look for another theory.

 

In 1990, a young economist named Rebecca Henderson published an article with her supervisor Kim Clark that presented a different view of why it is hard to do new things in old organisations. The relevant word is “organisations”.

Dominant organisations are prone to stumble when the new technology requires a new organisational structure. An innovation might be radical but, if it fits the structure that already existed, an incumbent firm has a good chance of carrying its lead from the old world to the new.

Consider, for example, IBM — the giant of mainframe computing. IBM is a survivor. It predates the digital computer by more than three decades. While the performance of computers was being revolutionised by the semiconductor, the integrated circuit, the hard drive and the compiler, IBM maintained a dominant position without breaking stride. This was because the organisational challenge of making and selling a sophisticated mainframe computer to a bank in the 1970s was not greatly different from the organisational challenge of making and selling a mechanical tabulating machine to a bank in the 1930s. Change was constant but manageable.

When computers started to be bought by small businesses, hobbyists and even parents, IBM faced a very different challenge. It did build a successful business in PCs, but was unable to maintain its old dominance, or bring to bear its historical strengths. In fact, the PC division prospered only as long as it was able to snub the rest of the organisation, often partnering with component suppliers and distributors that directly competed with IBM divisions. Internal politics soon asserted itself.

A case study co-authored by Henderson describes the PC division as “smothered by support from the parent company”. Eventually, the IBM PC business was sold off to a Chinese company, Lenovo. What had flummoxed IBM was not the pace of technological change — it had long coped with that — but the fact that its old organisational structures had ceased to be an advantage. Rather than talk of radical or disruptive innovations, Henderson and Clark used the term “architectural innovation”.

“An architectural innovation is an innovation that changes the relationship between the pieces of the problem,” Henderson tells me. “It can be hard to perceive, because many of the pieces remain the same. But they fit together differently.”

An architectural innovation challenges an old organisation because it demands that the organisation remake itself. And who wants to do that?

 

The armies of the late 19th century were organised — as armies had long been — around cavalry and infantry. Cavalry units offered mobility. Infantry offered strength in numbers and the ability to dig in defensively.

Three technologies emerged to define the first world war: artillery, barbed wire and the machine gun. They profoundly shaped the battlefield, but also slipped easily into the existing decision-making structures. Barbed wire and machine guns were used to reinforce infantry positions. Artillery could support either cavalry or infantry from a distance.

Tanks, however, were different. In some ways they were like cavalry, since their strength lay partly in their ability to move quickly. In other ways, they fitted with the infantry, fighting alongside foot soldiers. Or perhaps tanks were a new kind of military capability entirely; this was the view taken by J F C Fuller.

These discussions might seem philosophical — but in the light of Henderson’s ideas, they are intensely practical. “You have to find an organisation that will accept the new bit of technology,” says Andrew Mackay, who now runs an advisory firm, Complexas, and who commanded British and coalition forces in Helmand, Afghanistan, in 2008. “The organisational question is deeply unsexy, but it’s fundamental.”

A more recent example: is the helicopter basically a kind of aeroplane, and therefore an asset of the Royal Air Force? Or something quite different? Who should be in charge of drones today?

So it was with the tank. If it was to prosper, it needed an organisational home. Someone would have to argue for it, someone would have to pay for it, and someone would have to make it all work, technologically and tactically.

Perhaps the two most obvious places to put the tank were as a standalone unit (since it offered quite new capabilities) or in cavalry regiments (since it was highly mobile and the horse was becoming obsolete). There were traps along either route: the established regiments would resist a standalone structure for tanks, which would compete for resources while the postwar army was shrinking. A new tank regiment would lack both allies and the heft of historical tradition.

After various twists and turns, it was the cavalry that ended up as the organisational home of the tank. And cavalry officers certainly understand a highly mobile strike capability. But they were never really organised around the concept of “mobility”. They were organised around horses. The cavalry officer loved his horse and rode it with skill. His regiment was devoted to feeding and caring for the horses. Would he not resist the usurper tank with every fibre of his being?

 

Xerox Parc developed or assembled most of the features of a user-friendly personal computer, but Xerox itself did not have the organisational architecture to manufacture and market it. Xerox Parc did develop the laser printer, a product that matched the company’s expertise nicely. As Gladwell pointed out, this easily paid for the entire Parc project. The laser printer was like artillery or the machine gun for Xerox: it was an exciting new technology, but it was not a challenge to the organisation’s architecture. The personal computer was like the tank.

The same is true for Sony and the Memory Stick Walkman. As Sony expanded, it produced radios and televisions, video recorders and camcorders, computers, game consoles and even acquired a film and music empire. But to keep this sprawl manageable, Sony’s leaders divided it into silos. As Gillian Tett explains in The Silo Effect, the silo that produced the PlayStation had almost nothing to do with the silo that produced portable CD players. The Memory Stick Walkman was like the tank: it didn’t fit neatly into any category. To be a success, the silos that had been designed to work separately would have to work together. That required an architectural change that Sony tried but failed to achieve.

And for IBM, the shift from a mechanical tabulator to a mainframe digital computer was like the shift from rifles to the machine gun: an awesome step up in firepower, but a modest adjustment to organisational capacity. The tank was like the personal computer: it may have been a logical step forward given the technology available, but it required a different organisational architecture — one that bypassed and threatened the existing power centres of Big Blue. That was the problem.

The politics of organisational change are never easy. In the case of the tank, they were brutal. The British public never wanted to fight another war in mainland Europe, and the tank represented an admission that they might have to. The armed forces were starved of cash in the 1920s and 1930s. In 1932, the British army ordered just nine tanks — delicate four-tonners. The total weight of this entire force was less than a single German Tiger tank. But at a time of declining budgets, who could justify buying more?

It did not help that the tank enthusiasts were often politically naive. Since an architectural innovation requires an organisational overhaul, it is a task requiring skilful diplomacy. Fuller was no diplomat. His essays and books were dotted with spiky critiques of senior military officers. After a while, even the junior officers who admired his wit began to tire of his “needlessly offensive” lecturing.

Despite alienating the army top brass, Fuller was handed a unique opportunity to advance the cause of tanks in the British army: he was offered the command of a new experimental mechanised force in December 1926. There was just one problem: he would have to step away from his single-minded focus on the tank, also taking command of an infantry brigade and a garrison. In short, Fuller would have to get into the organisational headaches that surround any architectural innovation.

He baulked, and wrote to the head of the army demanding that these other duties be carried out by someone else, eventually threatening to resign. The position was awarded to another officer, and Fuller’s career never recovered. His petulance cost him — and the British army — dearly. Architectural innovations can seem too much like hard work, even for those most committed to seeing them succeed.

 

Within academia, Rebecca Henderson’s ideas about architectural innovation are widely cited, and she is one of only two academics at Harvard Business School to hold the rank of university professor. The casual observer of business theories, however, is far more likely to have heard of Clayton Christensen, one of the most famous management gurus on the planet. That may be because Christensen has a single clear theory of how disruption happens — and a solution, too: disrupt yourself before you are disrupted by someone else. That elegance is something we tend to find appealing.

The reality of disruption is less elegant — and harder to solve. Kodak’s position may well have been impossible, no matter what managers had done. If so, the most profitable response would have been to vanish gracefully. “There are multiple points of failure,” says Henderson. “There’s the problem of reorganisation. There’s the question of whether the new idea will be profitable. There are cognitive filters. There is more than one kind of denial. To navigate successfully through, an incumbent organisation has to overcome every one of these obstacles.”

In an email, she added that the innovators — like Fuller — are often difficult people. “The people who bug large organisations to do new things are socially awkward, slightly fanatical and politically often hopelessly naive.” Another point of failure.

The message of Henderson’s work with Kim Clark and others is that when companies or institutions are faced with an organisationally disruptive innovation, there is no simple solution. There may be no solution at all. “I’m sorry it’s not more management guru-ish,” she tells me, laughing. “But anybody who’s really any good at this will tell you that this is hard.”

Almost a decade after resigning from a senior position in the British army, Andrew Mackay agrees: “I’d love to think that there could be a solution, but I don’t think there is.”

 

If I had to bet on the most significant disruption occurring today, I would point to the energy industry.

Chris Goodall is a longtime observer of the renewable energy scene and author of The Switch, a book about breakthroughs in solar panel technology. Goodall points out that solar photovoltaics have enjoyed a dramatic fall in costs, one that shows no sign of abating. Solar PV electricity is now cheaper than electricity generated by gas or coal in the sunny climes where most of the planet’s population live. A few more years and that advantage will seem overwhelming, which is great news for the planet and terrible news for incumbents.

Consider General Electric, which this year disappeared from the Dow Jones Industrial Average. In little more than a year, the old industrial titan’s share price had halved. One of the key culprits was a precipitous collapse in the demand for large gas turbines, which, in turn, was the result of a fall in the cost of solar power cells that had been relentless, predictable and ignored.

This possibility has been clear to the fossil fuel industry for a while. I know: I used to work in long-range scenario planning for Shell International. Back in 2001, my Shell colleagues and I were discussing thin solar films that could be printed cheaply and applied to windows or hung as wallpaper. We could see the threat of exponentially cheaper solar power — but recall what Joshua Gans said about Kodak and Blockbuster: “They knew what the future looked like. They didn’t know later than everybody else, they knew ahead of everybody else.”

They knew. But they could not act. Because what is an oil company to do in a world of abundant, cheap solar energy? Offshore wind farms play to some oil-company strengths; they know a lot about large metal structures in the North Sea. But solar energy is an architectural innovation. The pieces just don’t fit together like an oil rig or a refinery. As a mass-market, manufactured product it is closer to the skill set of Ikea than Exxon.

The implication of Christensen’s theory is that oil companies should have set up solar subsidiaries decades ago. Many of them did, without much success. The implication of Henderson’s theory is that the oil companies are in big trouble.

Chris Goodall thinks the oil companies should rescue what they can — for example, by developing synthetic hydrocarbons derived from water, atmospheric carbon dioxide and solar energy. Such products would play to oil-company strengths. But for most of their business lines, Goodall says, “The best strategy for the oil companies is almost certainly gradual self-liquidation.”

Or as BP’s chief executive Bob Dudley candidly admitted to the Washington Post recently, “If someone said, ‘Here’s $10bn to invest in renewables,’ we wouldn’t know how to do it.”

 

Despite all the obstacles, the British army continued to develop both tanks and tank tactics throughout the 1920s and 1930s. Yet the internal politics proved toxic. The Germans, meanwhile, watched and learnt. If the British were hamstrung by their inability to reorganise what was, after all, a victorious army in the first world war, the Germans had the opposite problem: they had barely any army, and no status quo to defend. There was no organisational architecture to get in the way. When Adolf Hitler came to power in 1933 and began to expand the German army and invest in tanks, he encountered a German military that had been watching, thinking and experimenting for 14 years.

On his 50th birthday in 1939, Hitler celebrated with a parade of Germany’s newly reconstructed army through Berlin. “For three hours,” wrote one witness, “a completely mechanised and motorised army roared past the Führer.”

This witness was a guest of honour at the celebrations. His name: J F C Fuller. After quitting the British army in frustration, he had thrown his lot in with the British fascists of Oswald Mosley. He wrote vitriolic attacks on Jews. Some observers wondered whether this was simply an attempt to win favour with the world’s tank superpower, Nazi Germany. One of Fuller’s biographers, Mark Urban, doubts this: “The facility with which Fuller made anti-Jewish jibes in letters and books suggests pleasure rather than duty.”

Nobody doubts, however, that Fuller was obsessed by German tanks. After all, there was one army that had really understood and embraced his ideas: that of Adolf Hitler. After the parade, Major General Fuller met Hitler himself in a receiving line at the Chancellery.

The Führer grasped Fuller’s hand and asked, “I hope you were pleased with your children?”

“Your excellency,” Fuller replied, “they have grown up so quickly that I no longer recognise them.”

This article was first published as a cover story in the FT Magazine on 8/9 September 2018. 

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.



 

6th of October, 2018 · Highlights · Other Writing · Comments off