Tim Harford The Undercover Economist


Nominations for a silly economics prize with a deeper purpose

While the Nobel Prizes are no doubt a splendid thing, the Ig Nobels are far more fun. Ig Nobel prizes have been awarded for discovering that every language has a word for “huh?” (Literature, 2015) and for comparing the discomfort of looking at an ugly painting with the discomfort of being shot with a laser beam (Art, 2014), and of course for studying farts (Biology, 2004; Literature, 1998).

They do not seem very important, and indeed Robert May, then the UK’s Chief Scientific Adviser, requested in the 1990s that Ig Nobels not be awarded to British scientists for fear of damaging their reputations.

That seems a little po-faced. It may be better for researchers to laugh at themselves than to let politicians do it for them. The late William Proxmire, a former US senator, relished his “Golden Fleece” awards, a destructive and sometimes inaccurate mockery of research conducted at taxpayers’ expense.

Proxmire did not seem to care that silly research sometimes has serious benefits. The Ig Nobels are designed to “make you laugh, then make you think”, and they have a surprising record of turning up gems. Andre Geim won an Ig Nobel for levitating a live frog, en route to his Nobel Prize in physics for his work on graphene. David Dunning and Justin Kruger received an Ig Nobel prize in psychology for discovering that incompetent people are too incompetent to know they are incompetent. It seemed funny at the time; nobody is laughing these days.

I have been disappointed, however, with the quality of Ig Nobel prizes in economics and business, which have recognised rogue trader Nick Leeson, Lloyd’s of London, Enron, WorldCom, and the entire banking system of Iceland. This is a shame, because if silly-seeming research in physics and biology might lead somewhere intriguing, why not recognise silly research in economics and business?

I have a few candidates in mind. I’d like to nominate Benjamin Scheibehenne, Peter Todd and Rainer Greifeneder for discovering that whether you offer shoppers a choice between a few types of jam, or lots of types of jam, it doesn’t make much difference to whether they buy jam. This finding might seem unremarkable, but the received wisdom in behavioural economics had been that consumers simply stop buying if offered too many choices. Prof Scheibehenne’s team examined 50 studies and concluded that on average, offering more choices made no difference either way.

I also nominate economists Hunt Allcott and Matthew Gentzkow, for studying fake news by inventing fake fake news. They conducted their study immediately after the 2016 presidential election, in an effort to measure how much fake news was around, and how many people had seen it. The use of “fake” fake news was to test people’s recall of “real” fake news stories: some people will say they remember seeing things that they did not, and so Profs Allcott and Gentzkow put fake fake news alongside real fake news and real real news in order to understand what was really going on. Clear?

Perhaps the Ig Nobel committee is concerned that the pair are trespassing on the domain of recent winner Gordon Pennycook (a psychologist) with the economist David Rand. Profs Pennycook and Rand are studying “bullshit receptivity”, a tendency to read profound meanings into randomly generated sentences such as “we are in the midst of a high-frequency blossoming of interconnectedness that will give us access to the quantum soup itself” and “hidden meaning transforms unparalleled abstract beauty”. Highly bullshit-receptive experimental subjects were more likely to believe in fake news headlines, even when part of the study was conducted on April 1.

If all this seems rather obvious, note that there’s an important difference between the kinds of things people believe because they don’t stop to think (for instance, that Pope Francis endorsed US president Donald Trump), and the kinds of things people believe because their political identities depend on it (for instance, that Mr Trump is “draining the swamp”). Anyone trying to restore sanity to political debate needs to understand the distinction. If you think this isn’t an important issue, I have a story about EU cabbage regulations to tell you.

Finally, I nominate Sendhil Mullainathan and Eldar Shafir, for discovering that being “hangry” is a major impediment to economic development. In their book, Scarcity, Profs Mullainathan (an economist) and Shafir (a psychologist) argue that there is a common response to being short of almost anything: money, time, and even food. Scarcity absorbs our mental energies and makes us act in ways that can be deft in the short term but self-defeating over the long haul.

The Ig Nobels glory in the opposite: a surplus of weird ideas that are foolish in the short term but may pay dividends in the end. And if they do not? There’s no harm in being silly.

Written for and first published in the Financial Times on 21 September 2018.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.



 

Why big companies squander brilliant ideas

J F C Fuller did not invent the tank.

That distinction should probably fall to L E de Mole, an Australian who approached the British war office in 1912 with a design that was — in the words of historians Kenneth Macksey and John Batchelor — “so convincingly similar to those which finally went into service that one wonders why it was never adopted from the outset”.

But when the British army eventually introduced the tank, it was J F C Fuller, chief staff officer of what would later become the tank corps, who understood what to do with it. At 39 years old, Fuller was a small man with a neatly trimmed moustache and a hairline that had retreated over his crown and was beginning to march down the back of his head. He could have passed for a butler in a costume drama, but his appearance belied an inner radicalism. (He had been friends — and then enemies — with the occultist Aleister Crowley.)

Late in 1917, after almost 400 British tanks had, with modest success, lumbered across the German lines at the battle of Cambrai, Fuller applied his radical streak to the problem of using the tank effectively.

A new and much faster tank, the Medium D, could travel 200 miles at a speed of 20 miles per hour. Fuller proposed that these tanks would attack the German army’s brain — the string of German headquarters miles behind the front line.

A Medium D could roll across the trenches and be on the German command posts in an hour; Fuller’s attack would come from nowhere. Air support would disrupt German road and rail travel.

“Bad news confuses, confusion stimulates panic,” wrote Fuller. His idea was dubbed Plan 1919. By striking suddenly at the German command, Plan 1919 would cause the German army to disintegrate. It would, Fuller declared, be “the winning of the war in a single battle”.

His astonishing idea became “the most famous unused plan in military history”, according to his biographer Brian Holden Reid. But, of course, that is not entirely true. It was used to great effect, in 1940 — by the Germans. J F C Fuller had invented blitzkrieg.

 

The story might be a historical curiosity, had echoes of it not been repeated so frequently since the British army stuffed Fuller’s plans for blitzkrieg into a desk drawer. Organisations from newspapers to oil majors to computing giants have persistently struggled to embrace new technological opportunities, or recognise new technological threats, even when the threats are mortal or the opportunities are golden. Why do some ideas slip out of the grasp of incumbents, then thrive in the hands of upstarts?

In 1970, the photocopying giant Xerox established the Palo Alto Research Center, or Parc. Xerox Parc then developed the world’s first personal computer, with a graphical user interface, windows, icons and a mouse. Bill Gates of Microsoft and Steve Jobs of Apple observed developments at Xerox Parc with great interest. Xerox still makes photocopiers.

In 1975, a 24-year-old engineer named Steven Sasson built the world’s first digital camera — a patched-together device scavenging a lens from a Super-8 camera, magnetic tape in a portable cassette recorder and a TV screen. Sasson worked for Eastman Kodak, where in 1989 he and his colleagues also constructed the first modern digital SLR camera. Kodak built a sizeable line of business in digital photography, and earned a small fortune from the patents. Yet Kodak could not adjust to a world in which every phone contained a camera. The company filed for bankruptcy in 2012.

In 1999, Sony launched the “Memory Stick Walkman”, one of the world’s first digital music players. Sony was armed with the iconic Walkman brand, some of the world’s best consumer electronics engineers and the talent-soaked Sony-BMG music label. The Memory Stick Walkman went nowhere and, two years later, it was eclipsed by a product that transformed the fortunes of a struggling Apple: the iPod.

And in 1918, Britain had the best tanks in the world, a clear vision of how to use them and, in Fuller, one of the best military strategists to emerge from the British army. The German army was forbidden to use tanks at all; it was scarcely more than a collection of officers, a head without a body. Heinz Guderian, later one of the leading Panzer commanders, had not even seen the inside of a tank until he managed to go on manoeuvres with the Swedish army in 1929. Yet by the late 1930s, the British had conceded technical and tactical superiority to Hitler’s new army.

There is an obvious explanation for all of these failures and missed opportunities: people are idiots. “Now we can get back to some real soldiering,” remarked one senior officer to Fuller at the end of the first world war — as though defending Britain in an existential struggle had been a frivolous distraction from tending to noble horses, bright buckles and shiny boots. The army blocked publication of Fuller’s books for several years; they were seen as insubordinate.

When Steve Jobs visited Xerox Parc in 1979, and saw a windows-and-mouse interface for the first time, he couldn’t contain himself, according to Malcolm Gladwell. “Why aren’t you doing anything with this?” he yelled. “This is the greatest thing. This is revolutionary!” If Jobs had been teleported into the British war office in the 1920s, he might well have said the same thing.

Idiocy is a tempting explanation and not without merit. The top man in the British army, Field Marshal Sir Archibald Montgomery-Massingberd, responded to the threat of Nazi militarisation by increasing the amount spent on forage for horses by a factor of 10. Cavalry officers would be provided with a second horse; tank officers would get a horse too. As I say: people are idiots.

But there is something about the “idiot” theory that feels too glib. Consider Xerox Parc: how is it that a corporation could be smart enough to establish such a superb research centre, but then fail to take advantage? Was Sony really run by idiots in the 1990s? Even Montgomery-Massingberd is too casually caricatured. These organisations stumbled for a reason.

Management theorists have a word for it: disruption. “Disruption describes what happens when firms fail because they keep making the kinds of choices that made them successful,” says Joshua Gans, an economist at the Rotman School of Management in Toronto and author of The Disruption Dilemma. Successful organisations stick to their once-triumphant strategies, even as the world changes around them. More horses! More forage!

Why does this happen? Easily the most famous explanation comes from Clayton Christensen of Harvard Business School. Christensen’s 1997 book, The Innovator’s Dilemma, told a compelling story about how new technologies creep up from below: they are flawed or under-developed at first, so do not appeal to existing customers. Holiday snappers do not want to buy digital cameras the size of a shoebox and the price of a car.

However, Christensen explains, these technologies do find customers: people with unusual needs previously unserved by the incumbent players. The new technology gets better and, one day, the incumbent wakes up to discover that an upstart challenger has several years’ head start — and once-loyal customers have jumped ship.

Christensen’s story is an elegant one and fits some cases brilliantly. But there are many examples that do not fit — such as the failure of Xerox to exploit the cutting-edge research at Parc. The mouse and the graphic user interface aren’t a low-end competitor to the photocopier. They’re from a completely different universe.

The iPod didn’t sneak up on Sony from below: the company had seen the potential of a digital music player and moved quickly. Dominant organisations often see the disruptive technologies coming. “Kodak and Blockbuster weren’t caught by surprise,” Joshua Gans tells me. “They knew what the future looked like. They didn’t know later than everybody else, they knew ahead of everybody else.” They knew; but they were unable to put together the right response.

There is also a striking counter-example to Christensen’s idea that disruptive technologies begin as flawed or low-quality options. The iPhone was priced as a premium product with never-before-seen capabilities. It devastated Nokia and Research In Motion — now simply named BlackBerry Ltd in an echo of its once-iconic offering.

Christensen has tried to fit the iPhone into his theories. At first he predicted that incumbents would easily respond, and later he recast it as a disruption in a different industry altogether: “It was intended to disrupt the laptop. And that’s the way it happened.”

The laptop? Tell that to Nokia and BlackBerry.

Anyway, is the tank a low-end competitor to the horse? That’s a stretch. When a theory needs to be made this elastic, it may be time to look for another theory.

 

In 1990, a young economist named Rebecca Henderson published an article with her supervisor Kim Clark that presented a different view of why it is hard to do new things in old organisations. The relevant word is “organisations”.

Dominant organisations are prone to stumble when the new technology requires a new organisational structure. An innovation might be radical but, if it fits the structure that already existed, an incumbent firm has a good chance of carrying its lead from the old world to the new.

Consider, for example, IBM — the giant of mainframe computing. IBM is a survivor. It predates the digital computer by more than three decades. While the performance of computers was being revolutionised by the semiconductor, the integrated circuit, the hard drive and the compiler, IBM maintained a dominant position without breaking stride. This was because the organisational challenge of making and selling a sophisticated mainframe computer to a bank in the 1970s was not greatly different from the organisational challenge of making and selling a mechanical tabulating machine to a bank in the 1930s. Change was constant but manageable.

When computers started to be bought by small businesses, hobbyists and even parents, IBM faced a very different challenge. It did build a successful business in PCs, but was unable to maintain its old dominance, or bring to bear its historical strengths. In fact, the PC division prospered only as long as it was able to snub the rest of the organisation, often partnering with component suppliers and distributors that directly competed with IBM divisions. Internal politics soon asserted itself.

A case study co-authored by Henderson describes the PC division as “smothered by support from the parent company”. Eventually, the IBM PC business was sold off to a Chinese company, Lenovo. What had flummoxed IBM was not the pace of technological change — it had long coped with that — but the fact that its old organisational structures had ceased to be an advantage. Rather than talk of radical or disruptive innovations, Henderson and Clark used the term “architectural innovation”.

“An architectural innovation is an innovation that changes the relationship between the pieces of the problem,” Henderson tells me. “It can be hard to perceive, because many of the pieces remain the same. But they fit together differently.”

An architectural innovation challenges an old organisation because it demands that the organisation remake itself. And who wants to do that?

 

The armies of the late 19th century were organised — as armies had long been — around cavalry and infantry. Cavalry units offered mobility. Infantry offered strength in numbers and the ability to dig in defensively.

Three technologies emerged to define the first world war: artillery, barbed wire and the machine gun. They profoundly shaped the battlefield, but also slipped easily into the existing decision-making structures. Barbed wire and machine guns were used to reinforce infantry positions. Artillery could support either cavalry or infantry from a distance.

Tanks, however, were different. In some ways they were like cavalry, since their strength lay partly in their ability to move quickly. In other ways, they fitted with the infantry, fighting alongside foot soldiers. Or perhaps tanks were a new kind of military capability entirely; this was the view taken by J F C Fuller.

These discussions might seem philosophical — but in the light of Henderson’s ideas, they are intensely practical. “You have to find an organisation that will accept the new bit of technology,” says Andrew Mackay. Mackay runs an advisory firm, Complexas, but was also the commander of British and coalition forces in Helmand, Afghanistan, in 2008. “The organisational question is deeply unsexy, but it’s fundamental.”

A more recent example: is the helicopter basically a kind of aeroplane, and therefore an asset of the Royal Air Force? Or something quite different? Who should be in charge of drones today?

So it was with the tank. If it was to prosper, it needed an organisational home. Someone would have to argue for it, someone would have to pay for it, and someone would have to make it all work, technologically and tactically.

Perhaps the two most obvious places to put the tank were as a standalone unit (since it offered quite new capabilities) or in cavalry regiments (since it was highly mobile and the horse was becoming obsolete). There were traps along either route: the established regiments would resist a standalone structure for tanks, which would compete for resources while the postwar army was shrinking. A new tank regiment would lack both allies and the heft of historical tradition.

After various twists and turns, it was the cavalry that ended up as the organisational home of the tank. And cavalry officers certainly understand a highly mobile strike capability. But they were never really organised around the concept of “mobility”. They were organised around horses. The cavalry officer loved his horse and rode it with skill. His regiment was devoted to feeding and caring for the horses. Would he not resist the usurper tank with every fibre of his being?

 

Xerox Parc developed or assembled most of the features of a user-friendly personal computer, but Xerox itself did not have the organisational architecture to manufacture and market it. Xerox Parc did develop the laser printer, a product that matched the company’s expertise nicely. As Gladwell pointed out, this easily paid for the entire Parc project. The laser printer was like artillery or the machine gun for Xerox: it was an exciting new technology, but it was not a challenge to the organisation’s architecture. The personal computer was like the tank.

The same is true for Sony and the Memory Stick Walkman. As Sony expanded, it produced radios and televisions, video recorders and camcorders, computers, game consoles and even acquired a film and music empire. But to keep this sprawl manageable, Sony’s leaders divided it into silos. As Gillian Tett explains in The Silo Effect, the silo that produced the PlayStation had almost nothing to do with the silo that produced portable CD players. The Memory Stick Walkman was like the tank: it didn’t fit neatly into any category. To be a success, the silos that had been designed to work separately would have to work together. That required an architectural change that Sony tried but failed to achieve.

And for IBM, the shift from a mechanical tabulator to a mainframe digital computer was like the shift from rifles to the machine gun: an awesome step up in firepower, but a modest adjustment to organisational capacity. The tank was like the personal computer: it may have been a logical step forward given the technology available, but it required a different organisational architecture — one that bypassed and threatened the existing power centres of Big Blue. That was the problem.

The politics of organisational change are never easy. In the case of the tank, they were brutal. The British public never wanted to fight another war in mainland Europe, and the tank represented an admission that they might have to. The armed forces were starved of cash in the 1920s and 1930s. In 1932, the British army ordered just nine tanks — delicate four-tonners. The total weight of this entire force was less than a single German Tiger tank. But at a time of declining budgets, who could justify buying more?

It did not help that the tank enthusiasts were often politically naive. Since an architectural innovation requires an organisational overhaul, it is a task requiring skilful diplomacy. Fuller was no diplomat. His essays and books were dotted with spiky critiques of senior military officers. After a while, even the junior officers who admired his wit began to tire of his “needlessly offensive” lecturing.

Despite alienating the army top brass, Fuller was handed a unique opportunity to advance the cause of tanks in the British army: he was offered the command of a new experimental mechanised force in December 1926. There was just one problem: he would have to step away from his single-minded focus on the tank, also taking command of an infantry brigade and a garrison. In short, Fuller would have to get into the organisational headaches that surround any architectural innovation.

He baulked, and wrote to the head of the army demanding that these other duties be carried out by someone else, eventually threatening to resign. The position was awarded to another officer, and Fuller’s career never recovered. His petulance cost him — and the British army — dearly. Architectural innovations can seem too much like hard work, even for those most committed to seeing them succeed.

 

Within academia, Rebecca Henderson’s ideas about architectural innovation are widely cited, and she is one of only two academics at Harvard Business School to hold the rank of university professor. The casual observer of business theories, however, is far more likely to have heard of Clayton Christensen, one of the most famous management gurus on the planet. That may be because Christensen has a single clear theory of how disruption happens — and a solution, too: disrupt yourself before you are disrupted by someone else. That elegance is something we tend to find appealing.

The reality of disruption is less elegant — and harder to solve. Kodak’s position may well have been impossible, no matter what managers had done. If so, the most profitable response would have been to vanish gracefully. “There are multiple points of failure,” says Henderson. “There’s the problem of reorganisation. There’s the question of whether the new idea will be profitable. There are cognitive filters. There is more than one kind of denial. To navigate successfully through, an incumbent organisation has to overcome every one of these obstacles.”

In an email, she added that the innovators — like Fuller — are often difficult people. “The people who bug large organisations to do new things are socially awkward, slightly fanatical and politically often hopelessly naive.” Another point of failure.

The message of Henderson’s work with Kim Clark and others is that when companies or institutions are faced with an organisationally disruptive innovation, there is no simple solution. There may be no solution at all. “I’m sorry it’s not more management guru-ish,” she tells me, laughing. “But anybody who’s really any good at this will tell you that this is hard.”

Almost a decade after resigning from a senior position in the British army, Andrew Mackay agrees: “I’d love to think that there could be a solution, but I don’t think there is.”

 

If I had to bet on the most significant disruption occurring today, I would point to the energy industry.

Chris Goodall is a longtime observer of the renewable energy scene and author of The Switch, a book about breakthroughs in solar panel technology. Goodall points out that solar photovoltaics have enjoyed a dramatic fall in costs, one that shows no sign of abating. Solar PV electricity is now cheaper than electricity generated by gas or coal in the sunny climes where most of the planet’s population live. A few more years and that advantage will seem overwhelming, which is great news for the planet and terrible news for incumbents.

Consider General Electric, which this year disappeared from the Dow Jones Industrial Average. In little more than a year, the old industrial titan’s share price had halved. One of the key culprits for its woes was a precipitous collapse in the demand for large gas turbines, which, in turn, was the result of a fall in the cost of solar power cells that had been relentless, predictable and ignored.

This possibility has been clear to the fossil fuel industry for a while. I know: I used to work in long-range scenario planning for Shell International. Back in 2001, my Shell colleagues and I were discussing thin solar films that could be printed cheaply and applied to windows or hung as wallpaper. We could see the threat of exponentially cheaper solar power — but recall what Joshua Gans said about Kodak and Blockbuster: “They knew what the future looked like. They didn’t know later than everybody else, they knew ahead of everybody else.”

They knew. But they could not act. Because what is an oil company to do in a world of abundant, cheap solar energy? Offshore wind farms play to some oil-company strengths; they know a lot about large metal structures in the North Sea. But solar energy is an architectural innovation. The pieces just don’t fit together like an oil rig or a refinery. As a mass-market, manufactured product it is closer to the skill set of Ikea than Exxon.

The implication of Christensen’s theory is that oil companies should have set up solar subsidiaries decades ago. Many of them did, without much success. The implication of Henderson’s theory is that the oil companies are in big trouble.

Chris Goodall thinks the oil companies should rescue what they can — for example, by developing synthetic hydrocarbons derived from water, atmospheric carbon dioxide and solar energy. Such products would play to oil-company strengths. But for most of their business lines, Goodall says, “The best strategy for the oil companies is almost certainly gradual self-liquidation.”

Or as BP’s chief executive Bob Dudley candidly admitted to the Washington Post recently, “If someone said, ‘Here’s $10bn to invest in renewables,’ we wouldn’t know how to do it.”

 

Despite all the obstacles, the British army continued to develop both tanks and tank tactics throughout the 1920s and 1930s. Yet the internal politics proved toxic. The Germans, meanwhile, watched and learnt. If the British were hamstrung by their inability to reorganise what was, after all, a victorious army in the first world war, the Germans had the opposite problem: they had barely any army, and no status quo to defend. There was no organisational architecture to get in the way. When Adolf Hitler came to power in 1933 and began to expand the German army and invest in tanks, he encountered a German military that had been watching, thinking and experimenting for 14 years.

On his 50th birthday in 1939, Hitler celebrated with a parade of Germany’s newly reconstructed army through Berlin. “For three hours,” wrote one witness, “a completely mechanised and motorised army roared past the Führer.”

This witness was a guest of honour at the celebrations. His name: J F C Fuller. After quitting the British army in frustration, he had thrown his lot in with the British fascists of Oswald Mosley. He wrote vitriolic attacks on Jews. Some observers wondered whether this was simply an attempt to win favour with the world’s tank superpower, Nazi Germany. One of Fuller’s biographers, Mark Urban, doubts this: “The facility with which Fuller made anti-Jewish jibes in letters and books suggests pleasure rather than duty.”

Nobody doubts, however, that Fuller was obsessed by German tanks. After all, there was one army that had really understood and embraced his ideas: that of Adolf Hitler. After the parade, Major General Fuller met Hitler himself in a receiving line at the Chancellery.

The Führer grasped Fuller’s hand and asked, “I hope you were pleased with your children?”

“Your excellency,” Fuller replied, “they have grown up so quickly that I no longer recognise them.”

This article was first published as a cover story in the FT Magazine on 8/9 September 2018. 




 

The cool tools that are reshaping economics

If Hollywood is to be believed, every mad scientist who ever lived has a laboratory full of bubbling flasks, flashing consoles and glowing orbs. Science writer Philip Ball — who has visited countless research labs — tells me that reality is not so very different: the gear may be more subdued, but the gear is always there.

Science depends on tools, often instruments to detect or measure that which was previously undetectable — think of Galileo’s telescope or Newton’s prisms. Nobel Prizes have often been awarded to the physicists who developed such tools: the cloud chamber (1927); the electron microscope (1986); and LIGO, the Laser Interferometer Gravitational-Wave Observatory (2017).

What, then, of economics? Economics has its own quasi-Nobel Prize, but it is a stretch to find a single example of a prize being awarded for the development of new tools or instruments. Simon Kuznets (laureate in 1971) probably has the best such claim, for developing the ideas behind the gross domestic product measure. Alas, GDP is a broad aggregate with limitations that Kuznets himself understood all too well.

The great Alfred Marshall described economics as being the study of humanity “in the ordinary business of life”. Unfortunately, in Marshall’s day — he died in 1924 — there was no way to observe the ordinary business of life, except perhaps as an anthropologist. Economists spent a lot of time in armchairs, thinking hard about theory rather than measurement.

Some economists now make progress using old tools from other fields. MIT’s Esther Duflo, winner of the prestigious John Bates Clark medal, answers economic questions using randomised controlled trials. RCTs are typically dated back to Austin Bradford Hill’s 1948 trial of streptomycin for tuberculosis. (Hill was trained as an economist, so perhaps we can score that one for the profession.)

But the holy grail is to be able to observe the ordinary business of life in detail, in real time, and at scale — ideally all three at once. That was once an impossible goal, but three new developments put that goal within reach.

The first is the availability of high-resolution satellite images. In the mid-1990s, an economist named Alex Pfaff realised that these images could be used to answer questions about the connection between development projects and deforestation in the Amazon.

Hundreds of others have followed suit. Satellites can easily measure illumination at night, a simple way to track economic activity and patterns of urban development. It is also possible to measure various kinds of air pollution, and to observe the growth of crops. Algorithms are starting to extract subtle information at scale: how many Ethiopian homes have tin roofs? Which roads in Kenya are in good condition? And ever-cheaper small satellites are taking detailed photographs of everywhere, every day.

An even bigger change is that economists are using administrative data. I realise that “economists are using administrative data” is a contender for the most boring sentence uttered in 2018. But over the past two decades or so, this has been a quietly revolutionary move.

Administrative data are the numbers generated by governments or private companies for the purposes of getting things done. Schools keep track of attendance and grades. Tax authorities know your (declared) income — but also where you live, your age, and perhaps who your children are.

As such records have become digitised, they can be used to answer serious questions in research. For example, tax data can tell us the extent to which the children of rich or poor parents grow up to be rich or poor themselves. These detailed data are now at the forefront of empirical economic research.

According to Dave Donaldson, who like Prof Duflo is a John Bates Clark medallist at MIT: “In my field, international trade, I rarely see a paper that doesn’t use customs-level data. Every shipment generates a record which will specify what it is, where it came from, where it’s going, and the tax paid.”

A third measurement tool is the mobile phone. Every time a call is placed, the phone company generates a record of who called whom, when, for how long, and where the phones were, sometimes to within a hundred metres. With that kind of “metadata”, economists and other researchers can ask questions such as: how rapidly are people moving around, and to what extent is that correlated with the spread of an epidemic? Is a city’s transport infrastructure working well? How quickly are refugees integrating into a new society?

This is both an opportunity and a challenge for economists. Data scientist and economist Josh Blumenstock told me that “anyone who graduated with an economics PhD more than five years ago has no idea how to handle this data, and is frantically scrambling.”

Surely the scramble will produce results. At last, it is possible not just to theorise about Marshall’s “ordinary business of life”, but to observe it. Our tools are letting us see something new — and what we can see determines what we can think.

 
Written for and first published in the Financial Times on 13 July 2018.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now (or in the US, very very soon) in paperback – buy online or through your local bookshop.

Free email updates

(You can unsubscribe at any time)

Bread, Brexit, and the power of the third option

Written for and first published in the Financial Times on 18 May 2018.

Imagine that you sell bread-making machines. Your task is complicated by the fact that most people have only a hazy grasp of what a bread-making machine does, let alone the joys and sorrows of owning one.

Nevertheless, there is a simple trick that will help these machines to fly off your shelves: next to what seems to be a perfectly adequate $150 bread-maker, place a $250 bread-maker with a long list of bewildering extra functions. Customers will think to themselves: “I don’t need all that nonsense. The cheaper, simpler bread-maker is the better option.” Some of them will buy it, even though they would not have otherwise.

Itamar Simonson, a marketing professor at Stanford University, attests that the kitchenware company Williams-Sonoma doubled its sales of bread-makers in the early 1990s using this sort of technique. Mr Simonson, along with Amos Tversky, one of the fathers of behavioural economics, demonstrated similar preference reversals in a laboratory setting.

Mr Simonson and Tversky showed that when people are wavering between two options, you can change what they choose by offering a third, unattractive option. A $1,000 camera might seem extravagant unless there’s a $5,000 camera sitting next to it. The grande-sized cup at Starbucks seems restrained when put next to the venti, a Brobdingnagian vat of flavoured warm milk.

All this brings us to Brexit. What we voters feel about different flavours of Brexit (hard, soft, train-crash) depends in part on facts, in part on propaganda, and in part on our prejudices. But it also depends on the comparisons that come readily to mind.

That means that the re-appearance of the European Economic Area is an intriguing development in the debate. The House of Lords recently voted to keep the UK in the EEA, and therefore the single market, after leaving the EU. This “Norway option” seems a popular enough plan: a BMG opinion poll in January found 52 per cent of people in favour of staying in the single market, and only 14 per cent of people against. In these polarised times that is as decisive a margin as one might expect for anything. Nevertheless, both prime minister Theresa May and the leader of the opposition, Jeremy Corbyn, have rejected the single market option, making it unlikely.

This might seem illogical. Why not go for a popular compromise that respects both the close vote and the fact that the Leave campaign won the referendum? But, remembering the tale of the bread-maker, it makes perfect sense that Mr Corbyn and Mrs May, both Euro-sceptics, should fear the Norway option being placed in front of voters.

To most voters, the EU is like a bread-maker: we don’t really understand what it does and we don’t know what to think about it. The Norway option clarifies matters in a way that does not help Leavers. It is very much like being in the EU, except just a little bit worse. If it becomes a salient possibility, it makes staying in the EU look rather attractive by comparison.

A hard Brexit will probably go quite badly for the UK, but it does have the merit of being a very different path to remaining in the EU. A Norway-option Brexit might well work out smoothly, but it is almost guaranteed to underperform the option of not leaving at all. No wonder Brexiters — so cavalier about having their cake and eating it before the vote — are now determined to ensure that the Norway option is taboo. They realise that if the British public decides that staying in the single market is a plausible plan, they might eventually reach the conclusion that staying in the EU itself would be even wiser.

This sort of preference reversal can occur in other circumstances, too. A hard Brexit offers temptations to many voters: control over immigration; an independent trade policy; no more membership fees to Brussels. It also offers obvious risks: leaving the largest single market in the world; damage to the political settlement in Northern Ireland; setbacks to scientific and diplomatic collaboration. Staying in the EU merely offers business as usual.

Do we tend to find a mix of stark risks and clear rewards appealing? That depends on whether the costs or the opportunities seem more salient. During the referendum campaign, the opportunities promised by Leavers seemed expansive, while the costs (“lower GDP by 2030!”) were vague and dull. During the negotiation process, it is the opportunities that are starting to seem vague while the costs are becoming vivid, at least to the small number of people who are paying attention.

None of this makes it likely that Brexit will be reversed. The simple fact that Leave won the referendum is likely to be proof against all sorts of psychological subtleties. Yet these seem to be nerve-racking times for the Brexiters.

It was always clear that asking an absurdly simple question about an absurdly complicated decision was unlikely to work out well. There is one ironic consolation: however befuddled our referendum decision might have been, the divided cabinet is now doing its best to make us, the great British public, seem like philosopher kings by comparison.

My book “Messy: How To Be Creative and Resilient in a Tidy-Minded World” is now available in paperback both in the US and the UK – or through your local bookshop.


Why it’s important to gather the evidence – but easy to forget it even then

Written for and first published in the Financial Times on 11 May 2018.

It is hard to know which is more frustrating: last week’s announcement that over the past nine years, 450,000 British women were accidentally not invited for breast cancer screening; or the widespread indifference of a howling media to the evidence that such screening is of doubtful benefit anyway. Mammograms lengthen the lives of some women and shorten the lives of others: they allow the early detection and treatment of dangerous tumours, but they also produce many false positives, leading to the unnecessary and risky treatment of tumours that would never have caused any problems.

The best evidence we have, after weighing up several high-quality clinical trials, is that the harms and the benefits are finely balanced. When UK women are offered breast screening, they are sent a leaflet explaining the advantages and the risks so that they can make an informed choice. That choice should not have been denied to them by an administrative blunder. Still, we should be grateful that the error did not disrupt cervical cancer screening instead, which has convincingly been shown to save lives.

We should draw two lessons from the affair, beyond the obvious, which is that British institutions need to get a grip. The first lesson is that it pays to collect the best evidence that we can. The second is that having the best evidence is seldom enough.

Still, the evidence is a start. The world is full of sensible-seeming ideas that disappoint — along with some odd-seeming ideas that turn out to work. Among the latter is the idea that lemon juice prevents and cures scurvy, a disease so debilitating that ships could lose half their crews. In 1747, James Lind, a Scottish doctor, conducted one of the most celebrated early clinical trials proving the efficacy of lemon juice. This is not what common sense might have suggested. The mechanism was obscure: a chemical in lemons — later dubbed “vitamin C” — makes the difference between life and death in tiny doses.

Randomised trials have finally become entrenched in medicine as the obvious way to assess what works — as, just as importantly, have reviews that systematically assemble, evaluate and summarise all the available trials in one place. This did not happen easily, since few senior doctors enjoy being embarrassed by an unexpected trial result.

Such trials have also become an important way to assess ideas in education, criminal justice and economic development. Their use is far more patchy and more controversial but is still yielding dividends.

A new book, Randomistas (UK) (US), by Andrew Leigh, an Australian economist turned politician, gives plenty of examples. One — notorious in geek circles — is “Scared Straight”, a programme designed to deter juvenile offenders by taking them to prison to be bullied in short bursts by terrifying inmates. Scared Straight was so fashionable in the late 1970s that a documentary film about the policy won an Oscar; randomised trials showed it to be counterproductive.

That is often the way. Three decades ago the sociologist Peter Rossi quipped that the more rigorously a social programme evaluation was designed, the more likely it was to show a net benefit of zero. Unfortunately, Mr Rossi may well have been right — but showing which ideas do not work is one of the most important roles of high quality trials.

And not every idea fails. While the evaluations of Scared Straight showed that it made matters worse, a randomised trial of police protocols for domestic violence in Minneapolis in 1981 demonstrated that the police needed to be tougher on domestic abusers, arresting them rather than having a quiet word, if they wanted to prevent future assaults.

Randomised trials of cash transfers to entrepreneurs in developing countries have shown some excellent results, including a spectacular trial in which some Nigerian entrepreneurs with high-quality business plans were randomly chosen to receive $50,000 to realise their ideas.

This research is useless, however, if the people making the decisions are not aware of it. The academic’s cliché, “more research is needed”, is not necessarily true. Often all the necessary research has been done, but it has not been assembled and systematically reviewed. Or — as in the case of breast screening — it has been systematically reviewed, but not enough people have noticed.

Lind’s trial of lemon juice is instructive here. As early as 1601, James Lancaster of the East India Company had demonstrated in an informal trial that lemon juice was proof against scurvy. It took two centuries for the Royal Navy to make it part of sailors’ rations — and longer still for other navies to catch on. Yet as voyages grew shorter, and with no convincing theory of why lemon juice vanquished scurvy, the knowledge was simply forgotten. In 1911, 300 years after Lancaster’s demonstration, Robert Scott’s expedition to the South Pole — including a Royal Navy surgeon — did not know how to prevent scurvy. They suffered grievously as a result. Knowledge can be gained; it can also be ignored, or forgotten.

 



The world is not as gloomy, or wonderful, as you may think

Written for and first published in the Financial Times on 20 April 2018.

Is the glass half full, half empty, or laced with cyanide? Last week I wrote about “statistics, fast and slow” — the gap between the world as we intuitively perceive it, and the world as described in spreadsheets. Nowhere is this gap more obvious than when we are invited to reflect on whether things are going well, or badly.

With some telling exceptions, the situation is this: the world is getting better in many of the ways that matter, but we simply don’t realise that this is true. Population growth has slowed dramatically. Most of the world’s children have been vaccinated against at least one disease. Girls are rapidly catching up with boys in their access to education. The world is full of flaws, but progress is not only possible — it is happening.

A new book, Factfulness (UK) (US), by Anna Rosling Rönnlund, Ola Rosling and the late Hans Rosling, describes this knowledge gap, which is at times grotesque: two-thirds of US citizens believe the global proportion of people living in extreme poverty has doubled in the past couple of decades; it has halved. As Hans Rosling used to say, we don’t become this ignorant by accident.

Nor are our misperceptions limited to global development. Surveys by the polling company Ipsos Mori show that citizens of the developed world are also ignorant about our own countries. Most people vastly overestimate the prevalence of crime (which in the UK is dramatically down since the 1990s) and teenage pregnancy (which affects fewer than 1 per cent of 13- to 15-year-old girls). We also seriously overestimate the size of the Muslim population in the west, which suggests that the concerns of tabloid newspapers loom large in our imaginations.

This is not just a statistical phenomenon — it’s a political and psychological puzzle. How worried should we be about unemployment, vandalism, immigration, litter, bad hospitals, or drug dealing? There is no objective answer, but there is a strong tendency for people to be concerned about these issues for their nation, but more relaxed about their local area. We don’t see a serious problem where we live, but we feel strongly that trouble is all around us, just over the horizon. The economist Max Roser — creator of Our World in Data — calls this “local optimism and national pessimism”.

The mismatch is particularly stark when people are asked about their own happiness. Almost all of us are reasonably content: in the UK, 92 per cent of us are “rather happy” or “very happy” with our lives. But we believe that fewer than half of our fellow citizens are in the same cheery situation. The UK is typical in this respect: full of happy people who believe they are surrounded by misery.

This generalised pessimism seems powerful. The one global question that people reliably get right, despite ferocious misinformation campaigns, is the one where the news is bad: do climate experts believe the planet will get warmer over the next century?

So it would be tempting to conclude that we are all systematically too pessimistic about everything except our own experience. That is not quite true. The FT’s chart doctor, Alan Smith, tells me that Saudi Arabians are far too sanguine about the prevalence of obesity: they think a quarter of the nation is overweight or obese, but the true figure is closer to three-quarters. Most people in most countries also underestimate wealth inequality; it’s worse than we think, although, here, the UK is an exception to this belief.

The optimists are not right about everything. Angus Deaton, Nobel laureate in economics, has found that we are too optimistic about our own futures: almost everywhere, people tend to feel that they will be living a strikingly better life in five years’ time. We are doomed to disappointment. Life satisfaction is already high, does not tend to move much, and if anything tends to fall as mid-life approaches.

This misplaced optimism about ourselves is a striking contrast to an equally misplaced despair about our children: across Europe and North America, according to the Pew Research Center, twice as many people believe their children will be worse off financially than they are, rather than better off. Given the past decade of recession and slow recovery, that is not impossible. But economies do tend to grow over the long term, so it is a remarkably grim forecast.

What should we conclude from all this? One plausible hypothesis is that we form many of our impressions about the world from the priorities of the mass media. That would explain why we are pessimistic about most things, but not about obesity, since television loves skinny people.

A second conclusion is that many of us — citizens, the media and mainstream politicians — need to take more interest in the way the world really is. I hardly need to list the political movements that have travelled from the lunatic fringe to positions of power by reinforcing people’s worst fears. But when your policy platform is built on misperceptions, little good is likely to come of it. Optimism and pessimism both have their merits, but right now the world needs a dose of realism.



Books to make you feel better about the world

I recently reviewed the excellent Factfulness (UK) (US) by the late Hans Rosling, his son Ola and his daughter-in-law Anna. It’s an absolutely terrific, inspiring, and wise book, which among many other things is likely to make you feel better about the world. This is not because everything is rosy, but because most people’s perceptions of the world are badly skewed by a mixture of outdated ideas, dramatic media stories, and our own instincts to spot the worst and most frightening facts about the world. Hence “Factfulness” is a relaxing condition.

Bravo – everyone should read this book. But there are some others to look out for.

Charles Kenny, in Getting Better (UK) (US), also points to dramatic progress in achieving some (not all) of the goals that really matter, and in showing the connections between economic growth and progress on health, education, freedom and happiness. He also explores what else needs to be done to get the most out of development aid and to make development work for everyone; this is a nice complement to Factfulness, which is more focused on helping people understand the world.

Steven Pinker, in Enlightenment Now (UK) (US), also reviews this progress. But where Pinker differentiates himself is in Better Angels of Our Nature (UK) (US), which even for an optimist like me is surprising in its message that violence, torture and cruelty – measured in a variety of ways – have been in widespread decline for centuries. Well worth your attention, and I found Pinker persuasive in rebutting many of the obvious objections.

Rebecca Solnit’s A Paradise Built In Hell (UK) (US) takes a different tack, telling stories of the way people respond to disasters such as 9/11, Hurricane Katrina or the Blitz. Solnit argues that the famous “stiff upper lip” is a common response across communities. We scare each other with tales of looting and anarchy, but in fact most communities pull together.

One of the best and most thought-provoking books I’ve ever read – although sadly it did not make me feel as good about the world as the others – is David MacKay’s Sustainable Energy Without The Hot Air (UK) (US). David, who died far too young, goes step by step through the way we consume and produce energy, teaching us how to make estimates, what really matters, and what the most promising sustainable energy sources might be. Spoiler alert: sustainable energy will probably involve some very hard choices. Utterly brilliant book and it is available online as a free resource.

I suppose I should mention my own Fifty Things That Made The Modern Economy (UK) (US) – although I don’t see the book as making an argument for progress as such, it’s impossible to ponder the list of ideas and inventions, from the contraceptive pill to the cold chain, the S-bend to the light bulb, without feeling grateful for those who went before us. It’s true that barbed wire was a bit of a mixed blessing and leaded petrol was an unmitigated disaster – but still, where would we be without paper, or beautiful beautiful concrete? A French journalist told me that the book put me squarely in the category of optimistic Anglo-Saxons, so there.

 


“Basic income is about the freedom to say no” – Rutger Bregman goes bouldering

“You have the instinct for it,” says Rutger Bregman, as I haul myself up an indoor climbing wall, nestled under the arches at Vauxhall station in London. “Shit, this is some talent!” he bellows, as I reach the top. I am inwardly delighted, even though I realise the praise is absurd: I have climbed about four metres and it’s a beginner’s route.

Bregman has suggested that we go bouldering together. Bouldering is a variety of rock climbing, done over short distances without safety ropes. Coming from Bregman, it seems a curious choice. The young Dutch historian and author is most famous for advocating a universal basic income — a regular cash grant to every single person, given unconditionally, to support them and provide a minimum standard of living, no matter what might go wrong.

His book, Utopia for Realists (UK) (US), has been a surprise bestseller, finding an audience eager for radical yet plausible policy ideas. Yet this celebrated advocate of unconditional handouts has chosen a sport that is all about self-reliance, and the ultimate departure from the principle of the safety net.

“There is a safety net — look!” says Bregman, pointing at the crash mats. I am not totally convinced. It doesn’t take long before I fall off — a combination of lack of skill and lack of fitness. As I peel myself off the mat, I realise the skin of one elbow has not remained with me.

Bregman’s contention is that a basic income would be the logical and perfectly affordable next step for a human race that has already taken huge leaps forward since before the industrial revolution, when, he writes, “nearly everyone, everywhere was still poor, hungry, dirty, afraid, stupid, sick and ugly”.

Bregman himself looks the picture of health, possibly because, at 29, he’s 15 years younger than me, and possibly because he’s been practising. He climbs twice a week; his T-shirt says Sterk, the name of his local bouldering hall in Utrecht. The word means “strong” in Dutch. My limited experience of rock climbing with my daughters has taught me that the legs take the strain. Bouldering, however, requires more upper-body strength.

“It’s more explosive,” I am told. And within 15 minutes, I’m done: the tendons below my wrist have given up and I am close to doing the same. The first three routes were exhilarating but without a rope, even the short climbs under the arches of VauxWall are starting to feel vertiginous. I’m losing my nerve as well as my strength. Bregman, on the other hand, is just getting started.

“How long is a typical session?” I ask. Fifteen minutes? An hour? I can’t imagine anyone keeping this up for an hour.

“Two, two-and-a-half hours, if I have the time. Which I usually don’t,” he says. “If you warm up slowly, not like today, then you are at your peak after 45 minutes, and then you can keep that up for another 45 minutes.”

I spend much of the next hour watching Bregman solve one route after another. Sometimes he is dangling loosely off an overhang, as though resting in an invisible hammock. Sometimes he is moving laterally, his legs as high as his arms in a spiderlike scurry across the wall. Once, he hangs vertically as he works his way from left to right across a whimsical hold: a huge pair of pouting lips in one corner, just below the roof. He took up the sport three years ago. “I didn’t like to exercise at all. It’s so soul-destroying. But this is different.”

Bregman sees soul-destroying activity in much of modern life. Too many people, he says, are doing jobs they dislike or see as pointless, because they have no alternative. A basic income would liberate people: perhaps a minimum of €1,000 a month, given unconditionally as a cash grant, or through the tax system as a negative income tax.

Bregman has branded a basic income as “venture capital for the people”. A good line, I congratulate him. But what does it mean?

“OK, so basic income is all about the freedom to say no. That’s a privilege for the rich right now. With a basic income, you can say no to a job you don’t want to do. You can say no to a city in which you no longer want to live. You can say no to an employer who harasses you at work . . . that’s what real freedom looks like.”

Part of the impetus for a basic income has come from the sense that the robots are coming for our jobs — maybe not today, maybe not tomorrow, but soon. The venture capital firm Y Combinator is funding research into basic income, which seems to be a popular idea in Silicon Valley. But Bregman has no patience for the idea that technological change underpins the case for basic income.

“This is not about AI,” he insists. “You go back to the 1960s, and all the economists, all the philosophers, all the sociologists said we’re going to be working less and less and less and less and boredom is going to be the great challenge of the future. Didn’t happen . . . mostly because we have this ideological obsession with creating new jobs.”

Advocates of basic income have included two rather different Nobel laureates: the civil rights activist Martin Luther King Jr and the free-market evangelist Milton Friedman. The idea draws support from leftwingers who see an opportunity to redistribute and to give workers more bargaining power, and rightwingers who see an opportunity to dismantle paternalistic bureaucracies and empower ordinary people to make their own choices.

Bregman’s own sympathies seem to lie more with the left. At one point I tease him about the fact that he is in London on Valentine’s Day while his wife Maartje (a photographer and collaborator) is not. His response is spat out with a vehemence that might have been for comic effect, and might not: “You know that Valentine’s Day is just a capitalist scam to make you buy stuff you don’t need, to impress people you don’t like, right?”

But like Friedman, Bregman is clearly no fan of paternalistic bureaucracies. “Nowhere you’ll find as much support for something like basic income as [among] people who work for unemployment agencies,” he says. “In Holland I did a couple of lectures for those groups and they just give me a standing ovation when you say that we should abolish their jobs.”

It is the unconditional nature of the cash transfer that particularly appeals to him. With the transfer of money, no strings attached, there is a transfer of dignity, of bargaining power, and of responsibility. People have to make their own choices.

Again, I venture a connection between the basic income idea and bouldering: is it not a solo sport in which individuals must find their own path, judging the risks for themselves?

“If I would make this sport political, what I like about it is that it is competitive, but with yourself. So you’re not competing with anyone else, you’re just trying to do better yourself. And it’s a puzzle, every time it’s different. It’s a very creative sport, I guess.”

Utopia for Realists was itself a slowly assembled puzzle. The early drafts were articles in De Correspondent, an online crowdfunded news website founded by a Dutch pop-philosopher and columnist, Rob Wijnberg. “It’s an anarchist-idealist collective of journalists who don’t follow the news,” Bregman explains.

This may explain why Utopia for Realists is such a curiously enjoyable read. The title sums up Bregman’s belief that evidence-based pragmatism should not rule out provocative, ambitious ideas. The book is lively, well researched and full of unlikely pieces of history, from the Speenhamland system of poor relief, developed in England in 1795, to US President Richard Nixon’s flirtation with the idea of a basic income in 1969. (Bregman studied history rather than economics or politics.) It is also perfectly orthogonal to anything one might read in a newspaper. The book was published in Dutch by De Correspondent, built a following slowly, then was self-published in English.

“I was my own PR employee at that point. I was emailing everyone — no interviews, no reviews. Nothing.” Yet when Bregman emailed me out of the blue with the English translation and a request for my support, I was sufficiently impressed to endorse the book. Steven Pinker also gave it a glowing cover quote. And as Bregman and his colleagues were pondering giving up, the project suddenly took off. While not quite Fifty Shades of Grey, in a short space of time Utopia for Realists went from brave failed experiment to international bestseller, due to be published in 28 languages.

“Ideas always start on the fringe and then they move towards the centre,” he says. “Then I was invited to come to Davos this year. Like, yeah, that’s pretty much it, right? My first lectures about basic income were for anarchists with long hair, and smelly.”

Did he go to Davos? “No, I had to go to a book fair in Colombia.” He did, however, give a talk at TED last year, and seems aware of the irony of advocating the dismantling of an entire class of do-gooders.

“You’re talking for an audience of 1,500 people, many of them involved in all kinds of charities. The CEO of Toms, for example, was there.” Toms donates a pair of shoes to a poor family for every pair purchased; Bregman isn’t impressed. “Buy one shoe, give one shoe. That is just a horrible, horrible idea.”

He got a huge round of applause when he proposed scrapping aid bureaucracies and replacing them with direct cash transfers. The rapturous reception struck him as odd. “I was saying we should hand over the salaries of all these paternalistic bureaucrats and give them to the poor, who are the real experts on their own lives. And they were all clapping and laughing, and I was thinking on stage, ‘But I’m talking about you! It’s you!’”

It’s a good talk, I tell him. “I like to prepare for these things. I knew it off by heart three months before I went on stage.”

I press him on the details of the talk. He skips a little too lightly between the idea of replacing international development aid with direct cash transfers to poor people, and the idea of overhauling modern western welfare states to place unconditional cash payments at their heart. The two ideas are cousins, not identical twins, I suggest. Adding a dollar a day, no strings attached, to a non-existent social safety net might be transformative in rural India or Africa. A resident of London is going to want a little more than that before she willingly gives up her housing benefit. Bregman agrees: his focus now is on welfare reform.

Another question mark is over the evidence base for a basic income. Bregman mentions “dozens of experiments” but, arguably, there has never been a completely satisfactory randomised trial of a long-term basic income. (A literature review by the charity GiveDirectly counted six shorter-term randomised trials; policymakers should conduct many more.)

One promising episode — a four-year trial in Manitoba, Canada, in the 1970s — received little attention. When the economist Evelyn Forget managed to get hold of the mothballed archives in 2009, they were on the verge of being discarded. There is a new study in Kenya, funded by GiveDirectly. With 5,000 recipients getting a basic income for 12 years, that trial shows real ambition — but the income in question is just over $20 a month. This is unlikely to tell us much about reforming a European welfare state. Nor is a much-hyped but rather small trial in Finland, which will last just two years and is focused only on those already receiving unemployment benefits.

Other trials have been excitedly announced but have yet to begin, let alone conclude. We are still waiting for a study large and patient enough to tell us much about a basic income in a developed economy. So what are these “dozens of experiments”?

Bregman says that the experiments he has in mind are less about evaluating a full basic income scheme and more about exploring the impact of cash transfers in development aid. That is indeed a well-studied area, although not quite the same thing. Those experiments provide encouragement for proponents of a basic income: households tend to put the money to good use, and reap long-term benefits.

By now, we’re talking over a coffee, my enfeebled hands thankfully strong enough to grip a mug. My final question is about one of his other ideas: dramatically liberalising immigration rules.

“Every utopian system is obviously grounded in the injustices of the present,” he says. “What’s the biggest injustice in the world right now? It’s pretty easy to see. It’s borders: apartheid on a global scale.”

But while basic income seems to be having a day in the sun, an end to passport control is hardly in tune with the Trumpian zeitgeist, is it? “Well that’s almost my problem with basic income right now. I get questions during lectures, people say, ‘Is this really a radical idea?’ So I’m like, I should move on. Because utopias are meant to make people angry.”

Fair enough: as in bouldering, so in utopian politics. Once you’ve solved one puzzle, it is time to move on to a new challenge.

Written for and first published in the Financial Times on 9 March 2018.

My book “Messy: How To Be Creative and Resilient in a Tidy-Minded World” is now available in paperback both in the US and the UK – or through your local bookshop.


Oxfam, #MeToo and the psychology of outrage

This week I overheard someone describing Oxfam as “all a bit Jimmy Savile”. When the UK’s most prominent development charity finds itself being compared to the UK’s most infamous sex offender, it’s safe to say that Oxfam has had a bad week.

The allegations are certainly disturbing: that senior Oxfam staff made liberal use of prostitutes in the wake of the catastrophic Haiti earthquake of 2010 — a crime, as well as an abuse of trust — and that Oxfam quietly showed them the door rather than take a blow to its reputation. The blow has landed now, and it is a heavy one. (Oxfam denies there was any cover-up.)

This is hardly the first wave of outrage to break. Before Oxfam there was the Presidents Club dinner — a men-only fundraiser at which waitresses were treated as sex objects. One FT investigation and the organisation was closed within hours.

There was Harvey Weinstein and the emergence of the #MeToo movement from niche to mainstream. There was the UK parliamentary expenses scandal. Then there are campaigns to take Cecil Rhodes’s statue off an Oxford college, and — from a different political direction — campaigns to ban transgender people from using the public bathroom they prefer.

Where does the outrage come from, and why does it seem to emerge so suddenly? Media reporting is often a trigger, but for every hard-hitting investigation that unleashes a sustained storm, a dozen squalls blow over swiftly.

One clue comes from a large research study of jury-style deliberations, conducted by psychologists Daniel Kahneman and David Schkade, along with Cass Sunstein, who has recently been exploring the dynamics of outrage. (Mr Sunstein was a senior official in the Obama administration, co-author with Richard Thaler of Nudge and is a legal scholar at Harvard Law School.)

This study looked at debates over punitive damage awards against corporations. When individual jurors felt a corporate crime was outrageous, the group displayed a “severity shift”. The group’s verdict could be more severe than any individual’s initial impression. The jurors egged each other on.

But juries could also display a “leniency shift”; if individuals thought the crime was trivial the jury as a whole would often feel even less worried. Sometimes we don’t know how to feel until we see how other people feel. We are, rightly, much more relaxed about gay cabinet ministers than we used to be, and this is partly because everyone sees that everyone else feels there is nothing shameful about it.

The severity shift and the leniency shift contribute to outrage being unpredictable. Our initial impressions are reinforced once we see what other people think.

But not all of these shifts are in favour of progressive causes. One experiment — conducted by economists Leonardo Bursztyn, Georgy Egorov, and Stefano Fiorin — examined people’s willingness to support an apparently xenophobic organisation. In 2016, people often wanted anonymity before they were willing to back the xenophobes.

US president Donald Trump changed that. When people were reminded that Mr Trump was leading in the polls in their state, anonymity no longer mattered. When the experiment was rerun after his election victory, the result was the same: some people were xenophobes and some were not, but in the Trump era, nobody kept their xenophobia in the closet.

The force of these jolts to public opinion is amplified by several other factors. Over the past year, it has become safer to speak out about sexual harassment, but it has also become riskier to make light of it. This reinforces the trend.

And the sudden salience of an issue may bring further problems to light. One woman tells her story of sexual assault at the hands of a famous man, and other women come forward to say that he’s done the same thing to them.

Or, since everyone is now concerned about sexual exploitation by Oxfam staff in Haiti, where else has this happened? How often? Journalists ask questions that could not have been asked a decade ago. Regulators open investigations. Other charities scramble to get ahead of the story.

The self-reinforcing dynamics mean that unpredictability is a feature of the outrage system. They also suggest that we need to learn two lessons.

The first is that we should ask ourselves, is there anything that happens in my profession, industry or community that is taken for granted, but that the wider world might view with sudden outrage? The in-crowd may lure each other into viewing transgressions with a leniency-shifted forgiveness. When everyone else pays attention, the leniency shift may flip to a severity shift.

The second is to beware tribalism. Outrage may be unpredictable, but once it has grown it is easy to manipulate for political ends, whether noble or reprehensible. Surrounded as we are with people who share our sense of outrage, it is easy to wonder why some other group just doesn’t seem to feel the same way.

Righteous outrage is a powerful weapon, and one that has smashed many barriers of injustice. We should pull the trigger of that weapon with care, not with abandon.

Written for and first published in the Financial Times on 16 February 2018.


What AlphaGo Zero teaches us about what’s going wrong with innovation

It is hard not to be impressed — and perhaps a little alarmed — by the progression. In 1997, IBM’s supercomputer Deep Blue beat the world’s greatest chess player, Garry Kasparov. It was a hugely expensive piece of hardware, closely tended and coached by humans.

Go is a far harder game for computers to master than chess. Yet when the AlphaGo programme emerged with muted fanfare in 2016, it comfortably outclassed the world’s best Go players after a few months of training.

Then last week, the AI research firm DeepMind unveiled AlphaGo Zero. It is faster, uses less hardware, beat its predecessor AlphaGo by 100 games to none, and is entirely self-taught. What is more, it achieved this performance after just 72 hours of practice.

The bewildering progress of AlphaGo Zero has fed an already-febrile anxiety about a robot takeover causing mass unemployment. Yet that anxiety sits uneasily with the high employment rates and disappointing productivity growth we see in the US and particularly the UK. There are plenty of jobs, but apparently not a lot of innovation.

There are various possible explanations for this paradox, but the simplest one is this: AlphaGo Zero is an outlier. Productivity and technological progress are lacklustre because the research behind AlphaGo Zero is not typical of the way we try to produce new ideas.

Mr Kasparov’s own perspective on this is fascinating. In his recent book, Deep Thinking, he quotes the late computer scientist Alan Perlis: “Optimization hinders evolution”. In the case of computer chess, Perlis’s maxim describes researchers who chose pragmatic short-cuts for quick results. Deeper, riskier research was neglected. IBM’s priority with Deep Blue was not knowledge, but victory — and victory was a scientific dead end.

That is a shame. Computing pioneers such as Alan Turing and Claude Shannon believed that chess might be a fertile field of research to develop artificial intelligence in more meaningful areas. This hope was quickly sidelined by brute-force approaches that taught us little but played strong chess.

It is easy to see why a commercial company would have had little interest in the early pattern-recognition techniques now refined by AlphaGo. Mr Kasparov describes an attempt to use them in chess; observing that grandmasters promptly won games in which they had sacrificed their queens, the machine concluded that it should sacrifice its own queen at every opportunity. Yet in the end, these pattern-recognition techniques have proved far more powerful and generally applicable than the methods used by the best chess-playing computers; the question is whether we wish to change our world, or merely win a chess game.

This is not just a cautionary tale about chess. Corporations have reined in their ambitions elsewhere. Corporate research laboratories once bankrolled fundamental research of the highest importance. Leo Esaki of Sony and IBM won a Nobel Prize in physics, as did Jack Kilby of Texas Instruments. Irving Langmuir of General Electric won a Nobel in chemistry. Bell Labs boasted too many Nobel laureates to list, and was home to Shannon himself. It was a time when companies weren’t afraid to invest in basic science.

That has changed, as a research paper from three economists — Ashish Arora, Sharon Belenzon, and Andrea Patacconi — shows. Companies still invest heavily in innovation, but the focus is on practical applications rather than basic science, and research is often outsourced to smaller outfits whose intellectual property can easily be bought and sold.

Corporate researchers produce more patents but they are less visible in the pages of learned journals. As Prof Arora puts it, research and development has become “less R, more D”. The AlphaGo research, he says, is an exception. And this matters because most basic research ends up being commercially useful eventually. We like the golden eggs, but we may be starving the golden goose.

All this need not be disastrous if other research bodies such as universities fill in the gap. Yet this is not something to take for granted. As the economist Benjamin F. Jones has documented, new ideas are harder to find. One sign of this is the complexity of research teams, which are larger, full of increasingly specialised researchers and ever costlier.

Perhaps it is naive to simply exhort companies to spend more on fundamental research — but somebody has to. One interesting approach is for governments to fund “innovation prizes” for breakthroughs. Such prizes mobilise public funds and public goals while deploying the agility and diversity of private sector approaches. But such prizes only work in certain situations.

Professional sport has made fashionable the practice of “marginal gains” — rapid optimisation in search of the tiniest edge. It turns out that corporate research took the same turn decades ago. There is nothing wrong with marginal improvements, but they must not be allowed to crowd out more speculative research. Science is a deeper, messier practice than sport. We must continue to devote time, space and money to bigger, riskier leaps.

Written for and first published in the Financial Times on 27 Oct 2017.

My new book is “Fifty Inventions That Shaped The Modern Economy”. Grab yourself a copy in the US or in the UK (slightly different title) or through your local bookshop.

