Tim Harford The Undercover Economist

Other Writing

Articles from the New York Times, Forbes, Wired and beyond – any piece that isn’t one of my columns.

Other Writing

Budget 2017 shows a reactive government throwing cash at crises

Firefighting is a brave and essential profession, but for a politician it is not a good look. Successive British governments have found themselves locked in a vicious cycle: some part of the public sector is squeezed for money, manages decline for a while, then cracks under pressure. The crisis is extinguished by a sudden spray of last-minute cash — an expensive way to solve any problem — while money and attention are drained from some other area. So the next crisis smoulders.

Philip Hammond, the chancellor, did not create this situation but neither has he shown much sign of breaking out of it. His very first policy announcement was a slug of money aimed squarely at the civil war in the Conservative party: £3bn to be spent on “Brexit preparations”, whatever they may be.

A similar amount was offered to the National Health Service in England over the next few years, in an act of largesse Mr Hammond emphasised was exceptional and “outside the spending review process”. This prompts an obvious question: what is wrong with the spending review process?

What’s wrong — according to a recent report from the Institute for Government — is that the government has been too reactive. It has squeezed public service funding, hoping nothing goes wrong, and doled out cash when it does. The most obvious culprit is Mr Hammond’s predecessor George Osborne, who presided over a sustained period of austerity.

One could reserve some blame for the Labour government which handed Mr Osborne an economy in disastrous shape — and for civil servants eager to make the numbers add up with optimistic forecasts that quality could be sustained during a funding drought. And let’s not even talk about the chancellor’s boss, a prime minister who triggered Article 50 and called a general election without ever seeming to think seriously about the hard choices involved in Brexit.

Meanwhile public services continue to be stretched. The next crisis looms in the prison system; after that, the police service and the UK’s visa and immigration system are likely candidates to show signs of serious strain. So far they have not done so, which may be why Mr Hammond said nothing about them.

It is hard not to have some sympathy for the chancellor. He stands in the middle of a party in turmoil; he is on the periphery of a predictably difficult Brexit process; he has been handed a huge downgrade in growth and productivity forecasts from the independent Office for Budget Responsibility. None of this is his fault, and he deserves credit for ramping up spending on infrastructure and the housing market. It remains to be seen whether that money will be sufficient, or well spent, but the sums involved are not trivial.

Meanwhile Mr Hammond increasingly resembles a beaten-down dad in the middle of a raucous children’s party, trying to tidy up the mess on his hands and knees while the chaos continues around him. One five-year-old sits kicking on his back and another tries to spread Nutella in his ear. He has reached into his pockets and hurled out a few sweets, but his best efforts are unlikely to be enough.

For now, the cycle of emergency spending continues. There was a telling moment as the chancellor turned to discuss a real fire, the Grenfell Tower disaster. The House of Commons rightly fell silent. The tragedy should never have happened, said Mr Hammond. He then announced that he would be spending some money on the aftermath. Crisis, then cash. Same old story.

 

Written for and first published in the Financial Times on 22 November 2017.

My new book is “Fifty Inventions That Shaped The Modern Economy”. Grab yourself a copy in the US or in the UK (slightly different title) or through your local bookshop.

Free email updates

(You can unsubscribe at any time)

23rd of November, 2017
Other Writing

Review of “The Square and The Tower” by Niall Ferguson

“The world remains a world of squares and towers,” concludes Niall Ferguson, after skipping across 500 years in about as many pages. The square — the town square, the market square — represents social networks, “the structures that human beings naturally form”. The tower represents hierarchical control and secular authority, the top-down approach to social structure.

The study of how networks compete or co-operate with each other and with hierarchies is a hot topic in the social sciences, and it is easy to see why: think of the US military versus Isis; or Russian intelligence trying to exploit the US media; or Facebook and, well, almost anything.

Yet both networks and hierarchies have been around for a long time, as Ferguson is quick to remind us in The Square and the Tower (UK) (US). Networks flourished in the years 1450 to 1790, he writes; hierarchies reasserted themselves until around 1970, and networks have been making a comeback ever since. The book is a history told with the focus on the way networks and hierarchies shaped events.

This approach is engaging but not always helpful. It is unclear that we gain much by describing Pizarro’s conquistadors and their allies as a network opposing Atahualpa’s hierarchical Inca society.

When it does work, however, it works well. German National Socialism is described as a network that then transformed itself into a crushingly powerful hierarchy. Faced with the power of the German state, the network of Jewish business interests that had loomed so large in the Nazi imagination proved helpless. “After all that had been written about the web of Jewish power,” he writes, “the only networks that really mattered were the ones that enabled emigration, and those were often simple family ties.” The analysis is illuminating, chilling and still relevant today.

While National Socialism was a network that infected a hierarchy, the Soviet infiltration of the British old boys’ club between the 1930s and the 1960s shows that hierarchies can infect networks, too.

No book written by a historian of Ferguson’s gifts is likely to disappoint, but The Square and the Tower does have one obvious weakness: it’s not at all clear that the author takes his own premise seriously. That premise, set out in the first 50 pages of the book, is that by adding the formal social science of networks to the informal descriptive practice of history, we can unlock new insights.

This union of history and social science is an exciting prospect with Ferguson in charge. But the early chapters in which he outlines the science and social science of networks are dutiful literature reviews; though he nods to network scholars such as Ronald Burt, Mark Granovetter and Duncan Watts, those names do not resurface later in the book. Ferguson cites an impressive range of social science research papers, but he does not always trouble to explain technical terms as a skilled science writer might. One is left with the impression that he is happy to list every tool in the toolkit but doesn’t actually want to pick up a spanner himself.

The impression is reinforced by the way the author uses diagrams. Network diagrams always look good, whether it’s diagram 22, showing the interconnected nodes of the Bloomsbury Group (“it was . . . sexual relationships that defined the network”, we are told) or, over the page, diagram 23 depicting the evolving connections between the great powers in the late 19th century. These diagrams have been reproduced from other sources, but without sufficient labelling. Those lines mean something, yet we can only guess what unless we consult the original sources directly. The network diagrams, like the network research described early on in the book, appear to be largely decorative. That is a missed opportunity.

Yet that same flip of the page takes us from Virginia Woolf and John Maynard Keynes to a theory of the causes of the first world war outlined by none other than Henry Kissinger. There’s a joy in such leaps — from industrial networks in pre-Victorian Britain to the Taiping Rebellion, from Kissinger’s use of networked influence to how the World Economic Forum reshaped Nelson Mandela’s policy of nationalisation.

“By choice, I am more of a networks guy”, Ferguson tells us early on, and he is convincing in his claim that networks have been playing an important role for centuries. Yet at the end of his freewheeling history, he yearns for someone to take charge: “The lesson of history is that trusting in networks to run the world is a recipe for anarchy.” At best, the Illuminati take control; more likely, the Jacobins.

 

Written for and first published in the Financial Times on 11 October 2017.



8th of November, 2017
Marginalia

Why Thaler’s Nobel is a well-deserved nudge for behavioural economics

Richard Thaler has won the Nobel memorial prize in economics, an award that had been anticipated for some time. Mr Thaler is a behavioural economist, one of a group of economists who import insights from psychology, or perhaps plain common sense, into the idealised world of economic modelling.

One trivial behavioural insight that Mr Thaler is fond of mentioning concerns a large bowl of cashew nuts he once served to dinner guests over drinks. Observing his guests hoovering up the contents of the bowl, he removed it to the kitchen so as not to spoil everyone’s appetite. The guests could in principle have stopped of their own accord; nevertheless they were pleased to see temptation removed.

Early in his career, he started making a list of “Dumb Stuff People Do” on the blackboard in his office. The cashew nut example was on the list, and it is a classic piece of Thaler thinking: obvious, trivial, fun and yet completely beyond the scope of traditional economics to model. Mr Thaler’s insight is that such trivia might lead to important analytical and policy insights.

Thomas Schelling, Nobel laureate in 2005, was also a master of these deceptively simple observations of human nature. And Daniel Kahneman — a psychologist, mentor for Mr Thaler, and winner of the prize in 2002 — had with Amos Tversky laid the foundations for behavioural economics.

Mr Thaler advanced the field in two important ways. He campaigned for behavioural economics to be taken seriously within the economics profession. He also brought it into the policy environment with his book Nudge (co-authored with Cass Sunstein) and his support for behavioural policy units in the White House and 10 Downing Street.

Within the profession, Mr Thaler found a pulpit in the Journal of Economic Perspectives, an academic journal supplied to all members of the American Economic Association. His Anomalies column was witty and sharply reasoned, highlighting strange features of the economic and financial world that standard economic theory could not explain, and rigorously debunking unconvincing attempts at rationalisation.

His evangelism for behavioural economics has been successful, at least in microeconomics: it is commonplace to see economic models incorporate psychological realism, and Mr Thaler himself was president of the American Economic Association in 2015.

In the policy world, Mr Thaler’s most famous idea was to use behavioural insights in pensions policy — for example, by enrolling people in a pension scheme by default, while giving them the choice to opt out. The stakes here are much higher than with cashew nuts: default enrolment has, according to the UK pensions regulator, increased participation in private-sector pension schemes from 42 per cent to 73 per cent between 2012 and 2016.

Rational economic man does not care — or even notice — whether a pension is opt-in or opt-out. He simply calculates (instantly) whether it pays to participate and chooses accordingly. Mr Thaler’s insight is not only that people are not perfectly rational (that much is obvious, even to the most traditional of economists) but that apparently small departures from rationality can have outsized impacts.

Mr Thaler’s catch-all advice: whether you’re a business or a government, if you want people to do something, make it easy. This year’s choice of Nobel Prize winner is an easy one to like.

Written for and first published in the Financial Times on 9 October 2017.



Highlights

What We Get Wrong About Technology

Blade Runner (1982) is a magnificent film, but there’s something odd about it. The heroine, Rachael, seems to be a beautiful young woman. In reality, she’s a piece of technology — an organic robot designed by the Tyrell Corporation. She has a lifelike mind, imbued with memories extracted from a human being.  So sophisticated is Rachael that she is impossible to distinguish from a human without specialised equipment; she even believes herself to be human. Los Angeles police detective Rick Deckard knows otherwise; in Rachael, Deckard is faced with an artificial intelligence so beguiling, he finds himself falling in love. Yet when he wants to invite Rachael out for a drink, what does he do?

He calls her up from a payphone.

There is something revealing about the contrast between the two technologies — the biotech miracle that is Rachael, and the graffiti-scrawled videophone that Deckard uses to talk to her. It’s not simply that Blade Runner fumbled its futurism by failing to anticipate the smartphone. That’s a forgivable slip, and Blade Runner is hardly the only film to make it. It’s that, when asked to think about how new inventions might shape the future, our imaginations tend to leap to technologies that are sophisticated beyond comprehension. We readily imagine cracking the secrets of artificial life, and downloading and uploading a human mind. Yet when asked to picture how everyday life might look in a society sophisticated enough to build such biological androids, our imaginations falter. Blade Runner audiences found it perfectly plausible that LA would look much the same, beyond the acquisition of some hovercars and a touch of noir.

Now is a perplexing time to be thinking about how technology shapes us. Some economists, disappointed by slow growth in productivity, fear the glory days are behind us. “The economic revolution of 1870 to 1970 was unique in human history,” writes Robert Gordon in The Rise and Fall of American Growth (UK) (US). “The pace of innovation since 1970 has not been as broad or as deep.” Others believe that exponential growth in computing power is about to unlock something special. Economists Erik Brynjolfsson and Andrew McAfee write of “the second machine age” (UK) (US), while the World Economic Forum’s Klaus Schwab favours the term “fourth industrial revolution”, following the upheavals of steam, electricity and computers. This coming revolution will be built on advances in artificial intelligence, robotics, virtual reality, nanotech, biotech, neurotech and a variety of other fields currently exciting venture capitalists.

Forecasting the future of technology has always been an entertaining but fruitless game. Nothing looks more dated than yesterday’s edition of Tomorrow’s World. But history can teach us something useful: not to fixate on the idea of the next big thing, the isolated technological miracle that utterly transforms some part of economic life with barely a ripple elsewhere. Instead, when we try to imagine the future, the past offers two lessons. First, the most influential new technologies are often humble and cheap. Mere affordability often counts for more than the beguiling complexity of an organic robot such as Rachael. Second, new inventions do not appear in isolation, as Rachael and her fellow androids did. Instead, as we struggle to use them to their best advantage, they profoundly reshape the societies around us.

 

 

To understand how humble, cheap inventions have shaped today’s world, picture a Bible — specifically, a Gutenberg Bible from the 1450s. The dense black Latin script, packed into twin blocks, makes every page a thing of beauty to rival the calligraphy of the monks. Except, of course, these pages were printed using the revolutionary movable type printing press. Gutenberg developed durable metal type that could be fixed firmly to print hundreds of copies of a page, then reused to print something entirely different.  The Gutenberg press is almost universally considered to be one of humanity’s defining inventions. It gave us the Reformation, the spread of science, and mass culture from the novel to the newspaper. But it would have been a Rachael — an isolated technological miracle, admirable for its ingenuity but leaving barely a ripple on the wider world — had it not been for a cheap and humble invention that is far more easily and often overlooked: paper.

The printing press didn’t require paper for technical reasons, but for economic ones. Gutenberg also printed a few copies of his Bible on parchment, the animal-skin product that had long served the needs of European scribes. But parchment was expensive — 250 sheep were required for a single book. When hardly anyone could read or write, that had not mattered much. Paper had been invented 1,500 years earlier in China and long used in the Arabic world, where literacy was common. Yet it had taken centuries to spread to Christian Europe, because illiterate Europe no more needed a cheap writing surface than it needed a cheap metal to make crowns and sceptres.

Paper caught on only when a commercial class started to need an everyday writing surface for contracts and accounts. “If 11th-century Europe had little use for paper,” writes Mark Kurlansky in his book Paper (UK) (US), “13th-century Europe was hungry for it.” When paper was embraced in Europe, it became arguably the continent’s earliest heavy industry. Fast-flowing streams (first in Fabriano, Italy, and then across the continent) powered massive drop-hammers that pounded cotton rags, broken down with the help of ammonia from urine. The paper mills of Europe reeked, as dirty garments were pulped in a bath of human piss.

Paper opened the way for printing. The kind of print run that might justify the expense of a printing press could not be produced on parchment; it would require literally hundreds of thousands of animal skins. It was only when it became possible to mass-produce paper that it made sense to search for a way to mass-produce writing too. Not that writing is the only use for paper. In his book Stuff Matters (UK) (US), Mark Miodownik points out that we use paper for everything from filtering tea and coffee to decorating our walls. Paper gives us milk cartons, cereal packets and corrugated cardboard boxes. It can be sandpaper, wrapping paper or greaseproof paper. In quilted, perforated form, paper is soft, absorbent and cheap enough to wipe, well, anything you want. Toilet paper seems a long way from the printing revolution. And it is easily overlooked — as we occasionally discover in moments of inconvenience. But many world-changing inventions hide in plain sight in much the same way — too cheap to remark on, even as they quietly reorder everything. We might call this the “toilet-paper principle”.

 

 

It’s not hard to find examples of the toilet-paper principle, once you start to look. The American west was reshaped by the invention of barbed wire, which was marketed by the great salesman John Warne Gates with the slogan: “Lighter than air, stronger than whiskey, cheaper than dust.” Barbed wire enabled settlers to fence in vast areas of prairie cheaply. Joseph Glidden patented it in 1874; just six years later, his factory produced enough wire annually to circle the world 10 times over. Barbed wire’s only advantage over wooden fencing was its cost, but that was quite sufficient to cage the wild west, where the simple invention prevented free-roaming bison and cowboys’ herds of cattle from trampling crops. Once settlers could assert control over their land, they had the incentive to invest in and improve it. Without barbed wire, the American economy — and the trajectory of 20th-century history — might have looked very different.

There’s a similar story to be told about the global energy system. The Rachael of the energy world — the this-changes-everything invention, the stuff of dreams — is nuclear fusion. If we perfect this mind-bendingly complex technology, we might safely harvest almost limitless energy by fusing variants of hydrogen. It could happen: in France, the ITER fusion reactor is scheduled to be fully operational in 2035 at a cost of at least $20bn. If it works, it will achieve temperatures of 200 million degrees Celsius — yet will still only be an experimental plant, producing less power than a coal-fired plant, and only in 20-minute bursts. Meanwhile, cheap-and-cheerful solar power is quietly leading a very different energy revolution. Break-even costs of solar electricity have fallen by two-thirds in the past seven years, to levels barely more than those of natural gas plants. But this plunge has been driven less by any great technological breakthrough than by the humble methods familiar to anyone who shops at Ikea: simple modular products that have been manufactured at scale and that snap together quickly on site.

The problem with solar power is that the sun doesn’t always shine. And the solution that’s emerging is another cheap-and-cheerful, familiar technology: the battery. Lithium-ion batteries to store solar energy are becoming increasingly commonplace, and mass-market electric cars would represent a large battery on every driveway. Several giant factories are under construction, most notably a Tesla factory that promises to manufacture 35GWh worth of batteries each year by 2020; that is more than the entire global production of batteries in 2013. Battery prices have fallen as quickly as those of solar panels. Such Ikea-fication is a classic instance of toilet-paper technology: the same old stuff, only cheaper.

Perhaps the most famous instance of the toilet-paper principle is a corrugated steel box, 8ft wide, 8.5ft high and 40ft long. Since the shipping container system was introduced, world merchandise trade (the average of imports and exports) has expanded from about 10 per cent of world GDP in the late 1950s to more than 20 per cent today. We now take for granted that when we visit the shops, we’ll be surrounded by products from all over the globe, from Spanish tomatoes to Australian wine to Korean mobile phones.

“The standard container has all the romance of a tin can,” says historian Marc Levinson in his book The Box (UK) (US). Yet this simple no-frills system for moving things around has been a force for globalisation more powerful than the World Trade Organisation. Before the shipping container was introduced, a typical transatlantic cargo ship might contain 200,000 separate items, comprising many hundreds of different shipments, from food to letters to heavy machinery. Hauling and loading this cornucopia from the dockside, then packing it into the tightest corners of the hull, required skill, strength and bravery from the longshoremen, who would work on a single ship for days at a time. The container shipping system changed all that.

Loading and unloading a container ship is a gigantic ballet of steel cranes, choreographed by the computers that keep the vessel balanced and track each container through a global logistical system. But the fundamental technology that underpins it all could hardly be simpler. The shipping container is a 1950s invention using 1850s know-how. Since it was cheap, it worked. The container was a simple enough idea, and the man who masterminded its rise, Malcom McLean, could scarcely be described as an inventor. He was an entrepreneur who dreamed big, took bold risks, pinched pennies and deftly negotiated with regulators, port authorities and the unions.

McLean’s real achievement was in changing the system that surrounded his box: the way that ships, trucks and ports were designed. It takes a visionary to see how toilet-paper inventions can totally reshape systems; it’s easier for our limited imaginations to slot Rachael-like inventions into existing systems.  If nuclear fusion works, it neatly replaces coal, gas and nuclear fission in our familiar conception of the grid: providers make electricity, and sell it to us. Solar power and batteries are much more challenging. They’re quietly turning electricity companies into something closer to Uber or Airbnb — a platform connecting millions of small-scale providers and consumers of electricity, constantly balancing demand and supply.

 

 

Some technologies are truly revolutionary. They transcend the simple pragmatism of paper or barbed wire to produce effects that would have seemed miraculous to earlier generations. But they take time to reshape the economic systems around us — much more time than you might expect. No discovery fits that description more aptly than electricity, barely comprehended at the beginning of the 19th century but harnessed and commodified by its end. Usable light bulbs had appeared in the late 1870s, courtesy of Thomas Edison and Joseph Swan. In 1881, Edison built electricity-generating stations in New York and London and he began selling electricity as a commodity within a year. The first electric motors were used to drive manufacturing machinery a year after that. Yet the history of electricity in manufacturing poses a puzzle. Poised to take off in the late 1800s, electricity flopped as a source of mechanical power with almost no impact at all on 19th-century manufacturing. By 1900, electric motors were providing less than 5 per cent of mechanical drive power in American factories. Despite the best efforts of Edison, Nikola Tesla and George Westinghouse, manufacturing was still in the age of steam.

Productivity finally surged in US manufacturing only in the 1920s. The reason for the 30-year delay? The new electric motors only worked well when everything else changed too. Steam-powered factories had delivered power through awe-inspiring driveshafts, secondary shafts, belts, belt towers, and thousands of drip-oilers. The early efforts to introduce electricity merely replaced the single huge engine with a similarly large electric motor. Results were disappointing.

As the economic historian Paul David has argued, electricity triumphed only when factories themselves were reconfigured. The driveshafts were replaced by wires, the huge steam engine by dozens of small motors. Factories spread out; there was natural light. Stripped of the driveshafts, the ceilings could be used to support pulleys and cranes. Workers had responsibility for their own machines; they needed better training and better pay. The electric motor was a wonderful invention, once we changed all the everyday details that surrounded it.

David suggested in 1990 that what was true of electric motors might also prove true of computers: that we had yet to see the full economic benefits because we had yet to work out how to reshape our economy to take advantage of them. Later research by economists Erik Brynjolfsson and Lorin Hitt backed up the idea: they found that companies that had merely invested in computers in the 1990s had seen few benefits, but those that had also reorganised — decentralising, outsourcing and customising their products — had seen productivity soar.

Overall, the productivity statistics have yet to display anything like a 1920s breakthrough. In that respect we are still waiting for David’s suggestion to bear fruit. But in other ways, he was proved right almost immediately. People were beginning to figure out new ways to use computers and, in August 1991, Tim Berners-Lee posted his code for the world wide web on the internet so that others could download it and start to tinker. It was another cheap and unassuming technology, and it unlocked the potential of the older and grander internet itself.

 

 

If the fourth industrial revolution delivers on its promise, what lies ahead? Super-intelligent AI, perhaps? Killer robots? Telepathy: Elon Musk’s company, Neuralink, is on the case. Nanobots that live in our blood, zapping tumours? Perhaps, finally, Rachael? The toilet-paper principle suggests that we should be paying as much attention to the cheapest technologies as to the most sophisticated. One candidate: cheap sensors and cheap internet connections. There are multiple sensors in every smartphone, but increasingly they’re everywhere, from jet engines to the soil of Californian almond farms — spotting patterns, fixing problems and eking out efficiency gains. They are also a potential privacy and security nightmare, as we’re dimly starting to realise — from hackable pacemakers to botnets made up of printers to, inevitably, internet-enabled sex toys that leak the most intimate data imaginable. Both the potential and the pitfalls are spectacular.

Whatever the technologies of the future turn out to be, they are likely to demand that, like the factories of the early 20th century, we change to accommodate them. Genuinely revolutionary inventions live up to their name: they change almost everything, and such transformations are by their nature hard to predict. One clarifying idea has been proposed by economists Daron Acemoglu and David Autor. They argue that when we study the impact of technology on the workplace, we should view work in bite-sized chunks — tasks rather than jobs.

For example, running a supermarket involves many tasks — stacking the shelves, collecting money from customers, making change, and preventing shoplifters. Automation has had a big impact on supermarkets, but not because the machines have simply replaced human jobs. Instead, they have replaced tasks done by humans, generally the tasks that could be most easily codified. The barcode turned stocktaking from a human task into one performed by computers. (It is another toilet-paper invention, cheap and ubiquitous, and one that made little difference until retail formats and supply chains were reshaped to take advantage.)

A task-based analysis of labour and automation suggests that jobs themselves aren’t going away any time soon — and that distinctively human skills will be at a premium. When humans and computers work together, says Autor, the computers handle the “routine, codifiable tasks” while amplifying the capabilities of the humans, such as “problem-solving skills, adaptability and creativity”. But there are also signs that new technologies have polarised the labour market, with more demand for both the high-end skills and the low-end ones, and a hollowing out in the middle. If human skills are now so valuable, that low-end growth seems like a puzzle — but the truth is that many distinctively human skills are not at the high end. While Jane Austen, Albert Einstein and Pablo Picasso exhibited human skills, so does the hotel maid who scrubs the toilet and changes the bed. We’re human by virtue not just of our brains, but our sharp eyes and clever fingers.

So one invention I’m keen to observe is the “Jennifer unit”, made by a company called Lucas Systems. Jennifer and the many other programmes like her are examples of a “voice-directed application” — just software and a simple, inexpensive earpiece. Such systems have become part of life for warehouse workers: a voice in their ear or instructions on a screen tell them where to go and what to do, down to the fine details. If 13 items must be collected from a shelf, Jennifer will tell the human worker to pick five, then five, then three. “Pick 13” would lead to mistakes. That makes sense. Computers are good at counting and scheduling. Humans are good at picking things off shelves. Why not unbundle the task and give the conscious thinking to the computer, and the mindless grabbing to the human? Like paper, Jennifer is inexpensive and easy to overlook. And like the electric dynamo, the technologies in Jennifer are having an impact because they enable managers to reshape the workplace. Science fiction has taught us to fear superhuman robots such as Rachael; perhaps we should be more afraid of Jennifer.

Written for and first published in the FT Magazine on 8 July 2017.

My new book is “Fifty Things That Made The Modern Economy” – now out! Grab yourself a copy in the US (slightly different title) or in the UK or through your local bookshop.

Free email updates

(You can unsubscribe at any time)

29th of August, 2017
Other Writing

Books about calling statistical bullshit

A friend recently emailed to ask me for books that might help navigate a world full of statistical bullshit. Here are some recommendations.

I can’t think of a better science writer than Ben Goldacre, who burns with righteous mischief. His Bad Science (UK) (US) isn’t always about statistics, but it’s excellent throughout and an essential read for anyone who wants to understand some of the faults of modern health and nutrition journalism. Wonderful book.

Of course you should subscribe to the More or Less podcast, but you could also enjoy The Tiger That Isn’t (UK) (US). This is the unofficial book of the series, written by More or Less founders Andrew Dilnot and Michael Blastland. A highly readable guide to making sense of numbers in the wild.

Also very good – with more US examples – is Stat-Spotting (UK) (US) by Joel Best. Best’s book has given me some of my favourite examples of bad stats, but it currently seems a bit overpriced on Amazon, alas.

The classic of the field is, of course, Darrell Huff’s How To Lie With Statistics (UK) (US). There’s a sad coda that will tarnish your view of Huff; but this is still a terrific book.

The brand-new book by the very splendid Evan Davis is Post Truth (UK) (US) – I haven’t yet read much of it, but it looks good.

And finally try Naked Statistics (UK) (US) by Charles Wheelan, who with wit and clarity wrote the similarly excellent Naked Economics (UK) (US).

Best, Dilnot, Huff and Wheelan all cover quite similar ground. If I were picking just one of them, I’d go for Dilnot for a UK audience and Wheelan for a US one.

My new book is “Fifty Things That Made The Modern Economy” – coming very, very soon and available for pre-order. Grab yourself a copy in the US (slightly different title) or in the UK or through your local bookshop.


26th of June, 2017
Other Writing

Economics lessons from Dr Seuss

Language matters. Any poet can attest to that, as can any lawyer. (One recent court case in the US turned on an ambiguity created by a missing comma.) But it’s less clear that we economists have realised how important it is to write clearly.
One who has is Paul Romer, the Chief Economist of the World Bank. Mr Romer has provoked a staff rebellion by instructing his large team of research economists to sharpen up their language. He’s threatened to block publication of a flagship report if more than 2.6 per cent of the words in it are “and”. Such medicine seems to be too strong for the World Bank: Mr Romer keeps his job title but is to be stripped of his managerial responsibilities.
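Mr Romer’s 2.6 per cent threshold is, at least, easy to check mechanically. A minimal sketch of such a word-share test (the function and the sample text are my own; the sample is adapted from the World Bank sentence quoted later in this piece):

```python
import re

def and_share(text: str) -> float:
    """Fraction of the words in `text` that are 'and'."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return words.count("and") / len(words)

sample = ("promote corporate governance and competition policies and "
          "reform and privatize state-owned enterprises and labor market "
          "reform")
print(f"{and_share(sample):.1%}")  # → 23.5%
```

By that crude measure, committee prose can exceed Romer’s ceiling nearly tenfold.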
No doubt the amusing surface of this story hides tedious depths of office politics. But Mr Romer has a point: economists seem to be drawn to obfuscatory polysyllables like wasps to jam. This is true even when compared to other academics, and even in a medium that encourages brevity: Twitter.
Recently, the economist Marina Della Giusta and colleagues at the University of Reading conducted an as-yet-unpublished linguistic analysis of the tweets of the top 25 academic economists and the top 25 scientists on Twitter. (The top three economists: Paul Krugman, Joseph Stiglitz and Erik Brynjolfsson; the top three scientists: Neil deGrasse Tyson, Brian Cox and Richard Dawkins.)
Della Giusta and her colleagues found that economists tweeted less, were less likely to mention other Twitter users, and mentioned fewer people when they did. This implies that the economists were less likely than the scientists to have Twitter conversations, especially with people they didn’t know. I can’t say I blame them; I avoid using Twitter as a medium for conversation myself. Still, the scientists managed it and the economists did not.
The economists also used less accessible language, with more complex words and more abbreviations. Both their language and their behaviour were less chatty than those of the scientists.
The Bank of England has been pondering this kind of thing, too. Last year on Bank Underground, a blog for Bank of England staff, analyst Jonathan Fullwood compared Bank of England reports to the writings of Dr Seuss.
Mr Fullwood’s analysis uses statistical measures of writing complexity: long words, long sentences and long paragraphs make for more difficult prose. The Cat In The Hat stands at one end of the scale; Bank of England reports stand at the other. Mr Fullwood suggests that this complexity is not a good thing – and his work has been praised this week by Minouche Shafik, who recently left the Bank of England to run the London School of Economics.
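Measures of this kind are simple to compute. Here is a rough sketch of the general approach — average word and sentence length — not Mr Fullwood’s actual methodology:

```python
import re

def complexity(text: str) -> dict:
    """Two crude readability statistics: average word length
    (in characters) and average sentence length (in words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "avg_sentence_len": len(words) / len(sentences),
    }

print(complexity("I do not like them, Sam-I-Am. "
                 "I do not like green eggs and ham."))
```

Run the same function over a central bank report and the numbers land at the other end of the scale.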
We economists should write simpler, clearer prose if we want anybody to pay attention to what we think. But at the World Bank, Paul Romer has another mission. He has long argued that economists need to write clearly to help them think clearly. He also believes that trust depends on clarity. If we economists write prose that sounds complicated but does not tie us down to meaning anything in particular, we cannot be surprised if nobody trusts us.
Mr Romer is much taken by a linguistic analysis from Stanford’s Literary Lab. The analysis, published in 2015, tracks the development of “Bankspeak” in World Bank annual reports since 1948.
These reports once described specific situations (“Congo’s present transport system is geared mainly to the export trade”) and what the World Bank had done to improve them. Now they are more likely to be clouds of feelgood bureaucratese, in which nothing specific ever happens. Projects “are emerging” while “players” are “partnering”. The result is somewhere on the Venn diagram between unobjectionable and incomprehensible.
When I worked at the World Bank, in the early 2000s, I first heard the phrase “Christmas Tree” used to bemoan work sagging under the pet ideas that had been loaded onto it. This explains Mr Romer’s irritation at excessive use of the word “and”. The Stanford analysis has a prime example of a Christmas tree from the 1999 annual report, which wants to “promote corporate governance and competition policies and reform and privatize state-owned enterprises and labor market/social protection reform…”
Such sentences are written by committee. It is surprisingly easy to write like this when you don’t know what you think, or cannot agree, or dare not say.
Mr Romer knows what he thinks and has never been afraid to say it. His focus on clear language can do little harm. It may even do some good, although I fear that too many “ands” are the symptom but not the cause of the trouble.
We should all aspire to write a bit more like Dr Seuss. If we write more clearly we tend to think more clearly. Since what we say is easy to understand we must make sure that it is true.
But simplicity alone will not save us.
“We’re going to build a big, beautiful wall and Mexico is going to pay for it,” has the same simple tone as Dr Seuss, although it lacks his compassion. Does it reflect clear, trustworthy thinking? I do not think so, Sam-I-Am.

My new book is “Fifty Things That Made The Modern Economy” – coming soon! If you want to get ahead of the curve you can pre-order in the US (slightly different title) or in the UK or through your local bookshop.


31st of May, 2017
Marginalia

Books that will help you give a superb talk

Nobody ever mastered a skill by reading books – with the possible exception of reading itself. But books can help. Below are a few that I’ve found helpful over the years. But first, a few observations.

First, a good speech needs to have a purpose. All too often people view speeches the way my daughter sometimes views her school homework: “I’ve got to write an essay about Henry VIII and the dissolution of the monasteries, and it’s got to be at least two pages long.” “I’ve got to give a talk about information security and it’s got to fill 25 minutes.”

If that’s how you look at things, you’re well on your way to a tedious speech. The starting point will be to sit down with a piece of paper (or worse, to fire up PowerPoint) and start listing all the things you can think of that might fill the void.

Instead, start with the question, “what’s the one thing I want people to learn, or feel, or do, as a result of hearing this?”. Everything else – jokes, stories, visual aids, supporting arguments – flows from that.

Second, deliberate practice helps. Each good speech you give tends to improve every future speech: set yourself the task of giving a truly sensational talk just once in your life. You’ll learn a lot. And when you’re preparing for a speech, practise in front of the mirror, or record yourself on your phone, or recruit a friend to listen. Yes, it’s painful, but even one run-through will make an enormous difference.

Third, distinguish between your speaking notes, your handouts, and your visual aids, and decide whether you need any of them. Your speaking notes are a series of bullet-point prompts; PowerPoint is a perfectly decent tool to generate these, but they should be on 3×5 inch cards in your hand, not projected on the screen behind you. Your handouts provide a reminder of what you’ve said, or references, further reading and extra detail. You may not need them at all, but if you do, this is the place for the small print and the footnotes – not the screen. The only thing that should go on the projector screen is a bona fide visual aid – a graph, image, movie or diagram that makes a genuine contribution to the purpose of your speech (remember that?). If no visual aid is appropriate, insert a blank slide or press “B” or “W” to turn the screen black or white.

Okay – lesson over. Here are my recommendations.

The single best book on public speaking I’ve ever read is Chris Anderson’s TED Talks (UK) (US). I reviewed it here; my only caution about the book is that it’s focused on giving the talk of your life. Anyone looking for quick tips to perk up the monthly sales meeting won’t find them here.

A great companion to Anderson’s book is Jonathan Schwabish’s Better Presentations (UK) (US). This is a workmanlike book aimed at academics, and covers all the basics – structure, visual aids, delivery. It’s smart and comprehensive, and even an experienced presenter will learn a thing or two.

A more touchy-feely effort is Garr Reynolds’s Presentation Zen (UK) (US). It wraps its lessons in all kinds of talk about the “mind of a swordsman” and “being present”, which will annoy some people, but underneath the wrapping it is full of sound advice.

If you want to do the McKinsey slide-deck thing with 50 data-packed slides, but do it well, I would suggest Gene Zelazny’s Say It with Charts (UK) (US). This is not the way I present, but it is appropriate for some contexts.

Finally, good advice on design in general, which will perk up any slide, comes from The Non-Designer’s Design Book by Robin Williams (UK) (US).

 
My new book is “Fifty Things That Made The Modern Economy” – coming soon! If you want to get ahead of the curve you can pre-order in the US (slightly different title) or in the UK or through your local bookshop.


22nd of May, 2017
Other Writing

The real answer to the problem of texting while driving

The UK government is — again — cracking down on driving while using a mobile phone. Tougher sanctions and sharper enforcement will no doubt make some difference. But the real risk of driving while impaired — either drunk, or using a phone — is not the risk of losing your licence. It’s the risk of being in a serious accident. That’s not enough to change the behaviour of some people. What will?

A cardinal rule of behaviour change is: make it easy.

A fine example is the idea of the “designated driver”, the person who stays sober and drives his or her inebriated friends home. It’s a clever concept. The designated driver is the hero, “the life of the party”, who makes it possible for everyone else to drink socially. Friends take turns to be the designated driver, tapping into deep habits of reciprocity. And the question, “who’s the designated driver?” reinforces the social norm that drunk-driving just isn’t right.

What’s the equivalent for texting while driving? It’s not immediately obvious. Distracted driving, like drunk-driving, is dangerous. But the parallel is imperfect because the decision-making process is very different. Having some drinks with friends, knowing I must drive later, is one kind of stupidity. Glancing at a phone which pings at me as I drive across town, then impulsively trying to tap out a reply, is a different kind.

Many of us have deeply ingrained habits of checking our phones and responding to their beeps. That’s not an accidental glitch in the interface: our phones are designed to interrupt us. Ad-funded apps need to attract our attention as often as possible. Public safety demands that we “make it easy” to ignore our phones while driving; the phones themselves want the exact opposite.

Most phones have an “airplane mode”, but not an obvious “drive mode”, despite the fact that your phone is vastly more likely to cause an accident in a car than in a plane. That should change. Smartphones should have, as standard, an easily accessible, well-publicised drive mode. Drive modes do exist, and in the US, the National Highway Traffic Safety Administration has been pushing the idea. But they’re not prominent.

Drive-mode phones might automatically read out text messages, automatically reply to such messages with “sorry, I’m driving”, and send incoming calls directly to voice mail — while allowing drivers to play music and use satellite navigation. In short, drive-mode phones would stop pestering us for our attention.
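The behaviour described above amounts to a simple policy table: for each kind of incoming event, decide whether to allow it, transform it, or suppress it. A hypothetical sketch — the event names and action labels here are invented for illustration, not any real phone API:

```python
# Drive-mode policy: route each incoming event to an action.
DRIVE_MODE_POLICY = {
    "text_message": "read_aloud_and_auto_reply",  # "sorry, I'm driving"
    "phone_call": "send_to_voicemail",
    "music": "allow",
    "navigation": "allow",
}

def handle(event: str) -> str:
    """Return the drive-mode action for an event.
    Anything not explicitly allowed is silenced."""
    return DRIVE_MODE_POLICY.get(event, "suppress_notification")

assert handle("phone_call") == "send_to_voicemail"
assert handle("social_media_ping") == "suppress_notification"
```

The design choice is the default: in drive mode, the safe fallback for an unrecognised attention-grab is silence.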

But why aren’t drive modes more popular? Perhaps we’re waiting for a clever marketing campaign: the “designated driver” idea managed to get itself into The Cosby Show and Cheers.

But we also have to recognise the perverse incentives at work. Many of us want to be distracted less by our phones — not just while driving, but in meetings, during conversations, at mealtimes and in the bedroom. The phones themselves want something rather different. Distracted driving is an acute symptom of a wider problem: distracted living.



Written for and first published in the Financial Times.

My book “Messy” is available online in the US and UK or in good bookshops everywhere.


2nd of March, 2017
Other Writing

Kenneth Arrow, economist, 1921-2017

Kenneth Arrow, who has died aged 95 at his home in Palo Alto, California, on Tuesday, was a towering figure in 20th century economics. In 1972, at the age of 51, he won one of the first Nobel memorial prizes in economics, the youngest winner then or since. Yet even a Nobel Prize understates Arrow’s contribution to economic theory. A brilliant mathematician, he ranged widely, breaking ground in areas that have subsequently yielded many further Nobels, including risk, innovation, health economics and economic growth.

Two achievements are particularly celebrated: his impossibility theorem about the paradoxes of social choice, and his welfare theorems, which formalised the most famous intuition in economics — Adam Smith’s idea that a market produces social good from individual selfishness.

Born in New York on August 23 1921 to immigrant parents, Kenneth Joseph Arrow had his formative years shaped by poverty — his businessman father “lost everything” in the Depression. But Arrow flourished at school and received an MA in mathematics from Columbia University at the age of 19. He interrupted his graduate studies to serve as a wartime weather researcher and US Army Air Corps captain.

His doctorate, published in 1951, made up for lost time. The thesis explored the problem of turning individuals’ preferences into a picture of what a society as a whole preferred. Scholars had long known that voting systems could produce perverse results but Arrow went further, showing that the very idea of “what society prefers” was incoherent. He set out four reasonable sounding requirements for building social preferences from individual ones — and proved that no system could satisfy all four of those requirements.
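The flavour of the problem Arrow generalised is captured by the classic Condorcet paradox, which predates his theorem: with three voters and three options, pairwise majority voting can go round in a circle. A small illustrative sketch (this demonstrates the paradox, not Arrow’s proof):

```python
# Three voters' rankings over options A, B, C (most- to least-preferred).
rankings = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x: str, y: str) -> bool:
    """True if a strict majority of voters rank x above y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, yet C beats A.
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```

Each pairwise vote is decided by a clear two-to-one majority, yet no option can top a coherent “social ranking” — which is exactly the sort of incoherence Arrow showed was unavoidable in general.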

Arrow then turned to the familiar problem of supply and demand. In a well-functioning market for a single good such as apples, there is an efficient outcome with a price at which the number of apples supplied would equal the number of apples demanded.
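That single-market intuition can be made concrete with linear curves (the numbers below are invented purely for illustration):

```python
# A hypothetical apple market: demand falls with price, supply rises.
def demand(p: float) -> float:
    return 100 - 2 * p  # quantity demanded at price p

def supply(p: float) -> float:
    return 3 * p  # quantity supplied at price p

# Setting demand equal to supply: 100 - 2p = 3p, so p* = 20, Q* = 60.
p_star = 20
assert demand(p_star) == supply(p_star) == 60
print(f"equilibrium: price {p_star}, quantity {demand(p_star)}")
```

One equation, one unknown, one market. Arrow’s question was what happens when every such equation depends on all the others.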

But that was just one market. It was influenced by the market for pears, for agricultural land, for farm labourers and even for bank loans. Each market pushed and pulled others. What happened when one considered the interactions between every market in the world?

Working at times with the French economist Gérard Debreu, Arrow demonstrated that the intuitions from a single market could be generalised. First, there was a general equilibrium at which prices equalised supply and demand in every market at once. Second, this equilibrium was efficient. And third, any efficient allocation of resources could be reached by redistributing wealth and then letting competitive markets take over. Markets could still fail, but Arrow’s analysis explained the circumstances under which they would succeed.

Alongside such deep theoretical work, Arrow made many contributions to practical economic problems from insurance to healthcare to climate change. On occasion he took an active role on politically contentious issues, and was co-author of the 1997 “Economists’ Statement on Climate Change”, which warned of the dangers of global warming.

He was also noted for his love of gossip and his quick wit. One story tells of Arrow and a colleague waiting for an elevator to take them down, while several passed them going up. The colleague wondered aloud why everyone was going up. The immediate reply: “You’re confusing supply with demand.”

Arrow spent most of his career at Stanford University, apart from an 11-year spell at Harvard. He married Selma Schweitzer in 1947; she died in 2015. He is survived by his sons David and Andrew. He is also survived by his sister Anita, who married Robert Summers, a noted economist and brother of Nobel laureate Paul Samuelson. Her son, Arrow’s nephew, is the former US Treasury secretary Lawrence Summers.
Written for and first published in the Financial Times.

 


23rd of February, 2017
Other Writing

Undercover Friday – Lies and Statistics

A friendly guide to fake news…

Since this seems as topical as ever, a few interesting titbits. Here’s an attempt by two economists (Gentzkow is widely admired; I hadn’t encountered Allcott before) to quantify the electoral impact of fake news stories circulating through social media. Here’s the wonderful Maria Konnikova on “Trump’s Lies vs Your Brain” – although it’s not going to cheer you up. William Davies takes a step back and asks where statistics originally came from, why we no longer seem to trust them, and what comes next. And here’s a brilliant-looking reading list from two academics at the University of Washington on “Calling Bullshit” in the modern age. And my own feature article – almost a year old now, dated in some ways and horrifyingly relevant in others – is “How Politicians Poisoned Statistics“.

Musical recommendation

Try Jimmy Scott, “Holding Back the Years”. (UK) (US) A sublimely restrained cover of Nothing Compares 2U and much else.

Books

I’ve been reading Daniel Levitin’s “A Field Guide to Lies and Statistics” (UK) (US) – so far it does what it says on the tin. Clearly written and clearly explained, although the ideas will be familiar to many readers of this blog.

Or why not build your own Brutal London? (UK) (US) A papercraft kit of London’s best hunks of concrete.

Discovery of the week, though, is the Alan Moore classic “The Ballad of Halo Jones” (UK) (US). It’s weird and sprawling and flawed and I loved it.

My own book “Messy” is now out and available online in the US and UK.


27th of January, 2017