Tim Harford The Undercover Economist

Articles published in January, 2020

Will the 2020s be the decade that the robots finally come for our jobs?

Will this decade finally be the one in which the machines take our jobs? Such concerns have been aired many times over the centuries and they have always been wrong. But they are not intrinsically absurd.

In 1979, the economist Wassily Leontief pointed to the fate of the horse. Horses had long been of vital economic importance, but faded in the second half of the 20th century as the internal combustion engine became the dominant source of horsepower. Horses still have a niche, but will never outcompete engines, no matter how cheap oats become.

Might large numbers of human workers go the way of the horse? In 2003, the economists David Autor, Frank Levy and Richard Murnane published a study of the economics of technological change that made two influential observations.

First, they pointed out (correctly) that it is misleading to talk of robots — or any other technology — taking jobs. Instead, machines perform tasks, a narrower unit of work. Since most jobs involve many different tasks, robots do not take jobs, but they may radically reshape them. A robot accountant is not C-3PO; it’s Excel or QuickBooks. As with the horse, there is no wage at which human calculators can compete with a computer at the task of adding up a spreadsheet. Still, human accountants exist in large numbers. Their jobs simply look very different today.

Second, argued Profs Autor, Levy and Murnane, the tasks that machines took on were best described not as “skilled” or “unskilled” but as “routine” or “non-routine”. Recalculating a spreadsheet is a skilled but routine task, easily automated. Cleaning a toilet requires little skill — even I can do it — but is non-routine and therefore hard to automate.

This way of looking at the world proved very useful. It explained why technology could disrupt our jobs without destroying them. And why both the low-paid and high-paid ends of the labour market were proving robust, while the middle, packed with skilled-yet-routine tasks, was hollowed out.

But in a new book, A World Without Work, Daniel Susskind argues that the second part of the Autor-Levy-Murnane perspective is proving more questionable. He observes that the boundaries of the “routine” are blurring fast. Consider, for example, CloudCV, a system that answers open-ended questions about images. Upload an image and ask any question you like.

One photograph showed some 20-somethings sitting on a sofa with white wine and cans of Kronenbourg lager in front of them, with one fellow standing in a dramatic pose. “What are they doing?” I asked the computer. “Playing Wii,” it replied, correctly. “What are they drinking?” Probably beer, it said. “How’s the weather?” I asked of an outdoor snapshot. “Cloudy.” It was.

The system gives accurate answers to informally phrased questions about random photographs. Is that task routine? Hardly.

Neither is the performance of AlphaZero, the game-playing algorithm developed by DeepMind, a sister company of Google. In 2017, AlphaZero trained itself in a few hours to thrash the best chess-playing engine and the best Go program, both of which easily beat the best humans. Some claim this performance is less impressive than it first appears — but 10 years ago the mere idea that a computer could beat a human at Go seemed implausible. What DeepMind’s supercomputers can do today will be achievable on laptops and phones by 2030.

In task after task, the computers are overtaking us. In the Visual Question Answering challenge that CloudCV attempts, humans score 81 per cent. The machines were at 55 per cent as recently as 2016; by the summer of 2019 they were at 75 per cent. At that pace, roughly six or seven percentage points a year, it is only a matter of time before they do a better job than us — just as AlphaZero does.

The Artificial Intelligence Index project, based at Stanford University, tracks a wide variety of benchmarks. The machines are making rapid progress at symbolic achievements — such as playing poker — but also at translation, speech recognition, and classifying diseases such as skin cancer (from images of moles) and diabetes (from images of retinas).

These achievements are real. And while there are many things computers cannot do, when an algorithm does a narrow task cheaply and well, we humans end up contorting ourselves to unleash the new capability while sweeping up the tasks the software leaves behind. Just look at the self-checkout at your local supermarket.

So — will the machines take all the jobs in the coming decade? No, and that remains an unhelpful way to phrase the question. Machines encroach on tasks, and we reorganise our jobs in response, becoming more productive as a result. But there is good reason to believe that such reorganisations will be wrenching in the decade to come, and also that some people will be permanently unable to contribute economically in the way they would have hoped and expected. Above all, it is likely that our political institutions will be unable to adapt to the challenge.

 
Written for and first published in the Financial Times on 3 January 2020.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


Book of the Week 4: The Body – A Guide for Occupants

A confession: Bill Bryson came to visit the More or Less studios, gave us a signed copy of his book and was wonderfully charming to all of us. I was well-disposed to him before I even cracked the spine.
That said: it’s a wonderful book. I used to read a lot of Bryson books (as a rambler, I found A Walk In The Woods particularly appealing) but it had been a while since I picked one up, and I’d forgotten just what an effortlessly engaging writer he is.
Structurally, The Body: A Guide For Occupants is simple. Bryson tells you all sorts of things about the body – your brain, your skin, your blood, your ears – and while it all seems to be bang up to date, there’s no radical reinterpretation of medical science (thank goodness). Nor is there some too-clever-by-half conceit to the way the tale is told.
Instead, Bryson is just wonderfully fun to read: there’s the perfect blend of crazy medical stories, biographical snippets about scientists, surprising facts and figures, and invitations to reflect on your own experience. A remarkable volume of information is conveyed with the easy charm of a skilled raconteur. Writing a book like this is very, very hard and Bryson makes it look very, very easy.

Catch up on the first season of my podcast “Cautionary Tales” [Apple] [Spotify] [Stitcher]


27th of January, 2020 | Marginalia

Ten years of social media have left us all worse off

The last decade has had plenty of landmark moments — but one big change crept up on us slowly: our experiences in the liminal space of social media. Somewhere between Silicon Valley and our vibrating pockets, between our closest friends and some faceless trolls, our privacy, politics, economy and above all our attention were reshaped by Facebook and its outriders.

Social media existed before 2010, but not as we now know it. Few of us had smartphones in 2009. Facebook’s active user base has grown sevenfold over the past 10 years, and there simply aren’t enough people for that to happen again. Instagram and WhatsApp were both launched about a decade ago, and swiftly absorbed into the mother of all social networks. As for Twitter, let me simply note that Donald Trump only started tweeting in earnest in 2011.

What effect has all this had? It’s plausible to argue that social media enabled major events such as the Arab spring and the election of Mr Trump, although of course there is never a single explanation for such things. There have been some telling little moments, however — such as when the UK Conservative party press office took the low-rent Orwellian step of posing as an independent fact-checking organisation on Twitter. (No doubt they would describe that incident differently, while adding that Oceania has always been at war with Eastasia.)

I should not exaggerate. This isn’t 1984. Partisan news sources were popular long before we self-selected into online echo chambers. Propaganda is not new. And there are benefits from social media: it gives a platform to all sorts of people who deserve to be heard. But it is hard to make the case that social media has led to a more thoughtful, rigorous or compassionate discourse about politics. Amid the bullying, the misogyny, and the endless outrage, it’s hard to tell the bots and the people apart, largely because so many humans have lowered themselves to the level of the bots.

What about the economics? Network effects mean that social media platforms tend to spiral towards monopoly. You want to be where your friends are. It might be hard for a new search engine to displace Google, but if I am tempted by an alternative, I don’t need to persuade my friends and family to move too.

An obvious antitrust measure would be to force Facebook to divest WhatsApp and Instagram, two services that could and should be its competitors. A more radical approach is to require social networks to improve their interoperability and data portability — effectively allowing other services to piggyback, or users to flit among services. If I switch email providers or phone companies I can bring my phone number and contact database with me, or automatically forward messages sent to my old email address. It’s possible to imagine social media working more like that in future, although it would require substantial effort both technologically and legislatively.

Yet none of this solves perhaps the most basic problem. Ten years ago all we had to worry about was email overload. Now we carry around powerful and highly distracting devices. They observe our behaviour, buzz insistently to get attention, and leverage our desire to fit in, communicate and reciprocate. We did not consciously sign up for this, and each of us needs to think carefully about what we really want from social media.

Last Christmas I vowed to spend less time on my smartphone. It worked — until a couple of months ago, when I started using Twitter much more. Why? I had something to sell. That seems wretchedly appropriate. Still, another decade is starting. I cannot break Facebook up by myself, but I can plan to do something more constructive with the time and energy I often spend on social media. I hope I am not the only one.

 

Written for and first published in the Financial Times on 27 December 2019.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


Book of the Week 3: Rebel Ideas

I hesitated to read Matthew Syed’s Rebel Ideas, not because I disapproved, but because I wondered whether I would learn anything new. The territory is familiar: cognitive diversity leads to better decisions. Like attracts like, meaning that we fill our organisational toolkits with hammers and neglect to recruit the screwdrivers, hacksaws and wrenches. That’s a bad idea, no matter how good the hammers are.
But my hesitancy was a mistake: Rebel Ideas is a great book and I’ve learned plenty that’s new, as well as gaining a deeper appreciation of what I thought I already knew.
Matthew Syed does cover some territory that was familiar to me from writing Messy. He cites my book and others that I admire, including Charlan Nemeth’s No!, Sunstein and Hastie’s Wiser, Joseph Henrich’s The Secret of Our Success and Scott Page’s The Difference. If you haven’t read these books, I suggest that you do – you’re in for a treat.
But even if you have, Syed’s synthesis is impressive. His storytelling is breathtaking – he opens with a discussion of the CIA’s failure to spot the 9/11 attacks, and flits across plane crashes, the invention of the wheeled suitcase, and the rise of Silicon Valley. His discussion of a disastrous Everest expedition is particularly hard to put down. This approach – story plus science – is of course standard in the genre, but I can assure you that it’s very hard to do well, and Syed does it very well indeed.
Syed covers collective blindness, constructive dissent, innovation, echo chambers and the evolution of culture itself. My usual book-reading habit of creasing the bottom corner of a page I want to come back to has somewhat backfired – there are dozens of creases because the book is packed with good stuff.
Recommended!

Catch up on the first season of my podcast “Cautionary Tales” [Apple] [Spotify] [Stitcher]


20th of January, 2020 | Marginalia

The changing face of economics

Robert Solow, the Nobel laureate economist, says he had long been “bothered” by the fact that most people — even educated people — “had no clear idea of what economics is, and what economists do”.

Solow was born in Brooklyn in 1924, to what he has described as a “lower-middle-class family”, and grew up during the Great Depression.

Although his father always had work, Solow has said that from about the age of eight onwards, he was conscious that his parents were constantly worrying, “and their worries were purely economic: what was going to happen, could they continue to make ends meet”.

This awareness would shape his thinking throughout his life. He won a scholarship to Harvard at 16 and began an academic career that would see him reach the top of his field, winning the Nobel in 1987 for his contributions to the theory of economic growth.

Yet despite such acclaim, Solow, who is now 95, felt that his subject remained frustratingly opaque to the general public.

Then, a few years ago, he was seated by chance next to the photographer Mariana Cook at a friend’s dinner party. Cook had recently completed a project photographing 92 mathematicians, ranging from Fields Medal winners to promising young men and women at the start of their careers.

Solow suggested that she embark on a similar series of portraits, but of economists — and Cook agreed.

As he writes in the introduction to the resulting book, which contains 90 black-and-white portraits shot by Cook over the course of three years: “The idle thought became a reality, and I found myself involved in many ways. Naturally, I had to ask myself: was making a book of portraits of academic economists a useful or reasonable or even a sane thing to do?”

It is a fair question. Economics remains a perplexing discipline. It is often regarded as purely the study of money. (Far from it: indeed, some critics complain that economists aren’t as interested in studying money as they should be.) It is easily caricatured as overly mathematical, full of absurdly unrealistic assumptions, elitist and corrupted by proximity to business and finance.

And, as with any caricature, there is some truth in all of these complaints.

So what actually is economics? Alfred Marshall began his enduringly influential 1890 book Principles of Economics: “Political economy or economics is a study of mankind in the ordinary business of life; it examines that part of individual and social action which is most closely connected with the attainment and with the use of the material requisites of wellbeing.”

“The ordinary business of life.” It is not a bad definition, even now. But economics has changed since Marshall’s day. What is being studied has changed, and how, and even who does the studying.

Start with the “what”. It might seem obvious that economists should stick to the study of the economy — the production and consumption of goods and services that are either traded in markets or could be. They never really did stay in their lane: Thomas Robert Malthus was a proto-environmentalist and an inspiration for Charles Darwin; John Stuart Mill was a philosopher; John Maynard Keynes was intellectually promiscuous.

But it was Gary Becker and his followers who systematically applied the methodological tools of economics to social issues such as racial discrimination, the family and addiction.

Some of the ideas Becker championed — notably the use of education to improve “human capital” — became so mainstream as to be a cliché. Others remain controversial.

But nobody bats an eyelid when the economist Emily Oster publishes books of advice on pregnancy and parenting, when Steven “Freakonomics” Levitt opines on when to rob a bank, or even when the Financial Times publishes a column using economics to give tips on dating and etiquette. Economic imperialism is here to stay.

The “how” is also changing. Twenty years ago, the economist Ed Lazear published a paper, “Economic Imperialism”, with Becker at its centre.

Lazear argued that economic imperialism had been a success because “economics stresses three factors that distinguish it from other social sciences. Economists use the construct of rational individuals who engage in maximising behaviour. Economic models adhere strictly to the importance of equilibrium as part of any theory. Finally, a focus on efficiency leads economists to ask questions that other social sciences ignore.”

This is, I think, a fair summary of the state of play in 1999. But two decades on, economics is no longer quite so taken with the assumption of rationality. With Nobel memorial prizes for behavioural economics going to Daniel Kahneman (2002), Robert Shiller (2013) and Richard Thaler (2017), it has now become perfectly acceptable to publish economics papers with an alternative view of human decision-making.

That is not the only change in the toolkit of economics. The first modern randomised clinical trial was run by a man trained in economics, Austin Bradford Hill, in the late 1940s — but the methodology did not become widespread in economics until the 21st century.

The randomistas — most prominently the 2019 Nobel laureates Abhijit Banerjee, Esther Duflo and Michael Kremer — put the experimental results centre stage; the considerations that Lazear highlighted are not forgotten, but they are left in the wings.

Other economists are broadening the tools of economics by taking advantage of huge datasets and operating on the fringes of computer science. Two prominent examples are Susan Athey — the first female winner of the John Bates Clark Medal — and Raj Chetty, who won the same prize at the tender age of 33. Among the sources of this new data rush are internet traffic, cell-phone metadata, satellite imagery and the ballooning administrative datasets used by large organisations to run their businesses.

If the “how” is changing quickly, the “who” is stubbornly resistant to change. Economists used to be white and male. Now they are mainly white or Asian, and male.

Of course, there are some spectacular exceptions: in 2005, when I began writing my column for the FT, there was no female winner of the Nobel memorial prize in economics. There are now two.

Even more perplexingly — given that the award is for younger researchers — there was no female winner of the John Bates Clark Medal. There are now four, which is progress. Women such as Elinor Ostrom, Claudia Goldin and Janet Yellen have reached the very top of the profession, as did the late Alice Rivlin.

But economics still lacks the diversity it needs to reach its full potential. The Royal Economic Society has launched a “Discover Economics” campaign to address this, but it will take more than a recruitment drive: a 2014 study, “Women in Academic Science”, concluded that while other academic disciplines had been levelling the playing field, economics was an exception. We need to do better.

Economics is a controversial discipline, and that is not likely to change. Whereas scientists only occasionally have to dip their toes into political waters such as climate change or vaccination, most of what economists study — from inequality to immigration, trade to taxation — lies squarely in the middle of the political battlefield.

Still, some of us are doing our best, and all of us are human, as these portraits show. It is nice to be reminded of that.

Mariana Cook’s book is “Economists”.
Written for and first published in the Financial Times on 21 December 2019.

Catch up on the first season of my podcast “Cautionary Tales” [Apple] [Spotify] [Stitcher]


 

20th of January, 2020 | Other Writing

‘Salvator Mundi’ and the limits of certainty

Mona Lisa may be famously inscrutable, but “Salvator Mundi” has surely replaced her as Leonardo da Vinci’s most enigmatic work. It has been two years since it was reported that the long-lost painting had been sold to a Saudi prince as a gift to the Louvre Abu Dhabi, for an astonishing $450m — two and a half times the previous record for any painting sold at auction.

Since then the unveiling has been postponed without explanation, and the painting’s whereabouts are unknown: on a yacht, says one report; in secure storage in Switzerland, says another.

No doubt the mystery of its whereabouts will be resolved. The mystery of its provenance is deeper. In 2005, “Salvator Mundi” was bought for about $1,000 at an auction in New Orleans by two art dealers, Alexander Parish and Robert Simon. (Mr Parish later told Vulture that they had been willing to go as high as $10,000, but it proved unnecessary.)

On the surface, the painting was worth little: it was in very bad shape. But Messrs Parish and Simon thought it might be by a disciple of Leonardo, in which case it might easily be worth several hundred thousand dollars — a gamble worth taking. As a painting by Leonardo’s studio, with a touch or two by the master himself, it might have been worth $20m.

So what is it? Ben Lewis, author of The Last Leonardo, notes that the debate rages “over whether it belongs in the first division autograph Leonardo category or the second division Leonardo+Workshop category”. Apparently that is a $430m distinction. And the desire for clarity is not merely financial. When we gaze at a painting on a gallery wall, we like to know.

It is hard, too, to disentangle the time-scarred original work from its substantial restoration by Dianne Modestini — which, in turn, was influenced by the close inspection of known works by Leonardo.

Yet as the criminologist Federico Varese points out, it is curious that we insist on a binary distinction. We feel powerfully that the painting is either an autograph Leonardo, or it is not. As a matter of logic that may be true, but as a matter of practicality we do not know and we will never know. There is some evidence of Leonardo’s involvement, but the evidence is circumstantial. We are relying heavily on intuition — albeit the intuition of people with deep expertise. Regrettably but unsurprisingly, the experts differ.

This is partly a problem of knowledge: we cannot travel back in time to see who painted what. But it is also a problem of definition. Philosophers might recognise the “bald man paradox” here. Plucking out a single hair from a full head of hair does not produce a bald man. Keep going, however, and baldness will result. And yet it seems absurd to identify any particular hair as the crucial one that made the difference between baldness and non-baldness. Similarly with “Salvator Mundi”: how many brushstrokes from Leonardo does it take to distinguish a workshop piece from an autograph work?
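To see why the binary verdict is so slippery, here is a minimal sketch of the paradox in code (my own illustration, not anything from the column; the 10,000-hair cutoff is an arbitrary assumption):

```python
# Toy illustration of the bald-man paradox: any binary rule over a gradual
# quantity is forced to flip at one arbitrary, indefensible point.
BALDNESS_THRESHOLD = 10_000  # arbitrary cutoff -- why not 9,999 or 10,001?

def is_bald(hairs: int) -> bool:
    return hairs < BALDNESS_THRESHOLD

hairs = 100_000  # a full head of hair
while not is_bald(hairs):
    hairs -= 1   # pluck a single hair; no one pluck feels decisive

# Yet the verdict flips at exactly one hair:
print(f"is_bald changed between {hairs + 1:,} and {hairs:,} hairs")
```

Any rule counting Leonardo’s brushstrokes would face the same arbitrary flip.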

So “Salvator Mundi” is the Schrödinger’s cat of paintings — perhaps one thing, perhaps another. We can’t know.

Schrödinger’s cat discomfited the Austrian physicist Erwin Schrödinger, for good reason. But to a statistician or a social scientist, this sort of irresolvable uncertainty is part of life.

I just tossed a coin. Did it come up heads or tails? One or the other, clearly. But even after the fact, if you haven’t seen the result it is not absurd to say that there is a 50 per cent chance of either outcome. And if I then put the coin back in my pocket without checking, 50-50 is the closest we will ever get to knowing.
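A quick simulation makes the same point (a toy sketch of my own, not from the column): every flip below has a settled outcome, but an observer who never looks can do no better than 50-50.

```python
import random

random.seed(42)  # reproducible toy example

# 100,000 coins are flipped and pocketed unseen. Each outcome is already
# settled -- heads or tails, one or the other -- yet "50 per cent either
# way" remains the best available description for every unobserved coin.
flips = [random.choice(["heads", "tails"]) for _ in range(100_000)]

share_heads = flips.count("heads") / len(flips)
print(f"Share of unobserved flips that came up heads: {share_heads:.3f}")  # ~0.500
```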

We should be able to live with such fuzziness. When asking a question such as “who is the greatest ever Formula 1 driver?”, we know that we can have a fun argument — Lewis Hamilton, Michael Schumacher, Ayrton Senna, Juan Fangio? And we also know that the argument cannot be resolved.

But we forget this in other parts of life. Who would be the better UK prime minister, for example, Jeremy Corbyn or Boris Johnson? Which Democratic candidate would be most likely to defeat US president Donald Trump in the 2020 elections? Is it Joe Biden, Elizabeth Warren, Bernie Sanders or Pete Buttigieg? How serious a threat is climate change, and how drastic a change is required to deal with it?

The answers matter far more than the question of how much Leonardo contributed to “Salvator Mundi”, if he contributed at all. But we will never know for sure what the answers are.

One approach to all this fuzziness is to demand sharpness. I have often written admiringly about the work of Philip Tetlock, who has examined the problem of forecasting — a field dominated by vague prognostications — by asking forecasters to make verifiable predictions with deadlines.

But there are limits. The world defies our attempts to confine it with neat definitions.

It is not wrong to debate these vast questions of policy and politics. Indeed, it is vital that we do. But it is futile to expect a certain answer.

Written for and first published in the Financial Times on 6 December 2019.

Catch up with the first season of “Cautionary Tales” [Apple] [Spotify] [Stitcher]

For more on the joys of ambiguity, try my book, Messy


Book of the Week 2: Dreyer’s English

Yes, a book about how to write, by a celebrated copy-editor. Benjamin Dreyer offers an enjoyable tour through all the rules of grammar and style that people break, making their prose dull or ridiculous. He also rails against the pedants who insist on rules that any good writer would happily break, such as the prohibition on splitting infinitives.

It’s fun – even funny. Dreyer’s humour is on every page; one reviewer described it as ‘relentless’ but I was glad to have the jokes to keep me company. This is, after all, a book about grammar and linguistic precision. It needs jokes, and some of Dreyer’s are good enough to have me annoying my wife by reading them out to her. (Sorry.)

The book was easily good enough to keep me reading despite the fact that I wasn’t learning very much. I was aware that the book was a combination of advice I already knew and advice I would promptly forget, although one or two observations may stick.

Perhaps I am the wrong reviewer. Not only have I read similar books before, there is the small matter of having been on the receiving end of 14 copy-edits (7 books in the US, 7 in the UK). By now I have absorbed certain predilections of copy-editors by osmosis. I suspect a reader with less of this painful first-hand experience might learn more, but no matter: the point of this book is to be enjoyed, rather than to serve as a style manual. And enjoyable it is.

One thing that was missing from the book is a sense of just what it’s like to be an author on the receiving end of a copy-edit, although Dreyer does mention one author who scrawled in the margin of one edit, “WRITE YOUR OWN FUCKING BOOK”. Just so.

An alternative offering is Steven Pinker’s The Sense of Style, which seemed much briefer to me, and is really two good books in one. The first is all about the cognitive science of why communication is hard, and it’s very good. The second is – again – that list of grammar and style rules that one should obey or ignore. Like Dreyer, Pinker has little patience with old-school pedants; like Dreyer, he’s funny.

Also consider Style: Lessons in Clarity and Grace by Williams and Colomb, if you can find a copy. I haven’t read this book for many years but it made a big impression on me. Williams and Colomb go beyond the tired grammar advice: they pull sentences and paragraphs apart to show why some writing is confusing in its very structure. This book is superb, and a real eye-opener. Fewer jaunty jokes, but more likely to improve your writing.

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


13th of January, 2020 | Marginalia

How Sesame Street set a gold standard for education

The children’s television show Sesame Street just celebrated its 50th birthday. I know my favourite character should be Count von Count, who shares my fondness for numbers. But I’ve always had a soft spot for Mr Snuffleupagus, Big Bird’s best friend.

Mr Snuffy was thought by every adult on Sesame Street to be imaginary despite being as real as Elmo. It’s a good joke: Mr Snuffy, a strange anteater-mammoth hybrid, is colossal. How could the adults not notice him?

After the gag had run for 14 years, the adults finally realised that Mr Snuffleupagus was real, and apologised to Big Bird for doubting him. This was a weighty decision: Sesame Street’s writers were concerned about child abuse, and reflected that it might be unwise to portray the adults as endlessly disbelieving what the childlike Big Bird told them.

This was typically painstaking behaviour from a show that has always had ambitious ideas about helping children. In 1967, a former TV producer named Joan Ganz Cooney wrote a report for the Carnegie Corporation titled “The Potential Uses of Television in Pre-school Education”. She made the case that carefully crafted television could “foster intellectual and cultural development in pre-schoolers”. Two years later, her vision became reality, in the Children’s Television Workshop and Sesame Street.

It was a radical idea: just a few years earlier, Marshall McLuhan had infamously argued that “the medium is the message”. It seemed natural enough to many that television was an inherently superficial medium with, therefore, a superficial message.

By contrast, Sesame Street was a bet that good television could make a real difference to children’s readiness for school, particularly for those starved of other opportunities to learn. Not only would it help them to read and count, but it would be racially integrated. Over the years it would tackle issues including death, divorce, autism, infertility, adoption and HIV.

Researchers swarmed all over Sesame Street, trying to figure out whether it actually worked. This wasn’t as easy as one might think. One early study, conducted by Samuel Ball and Gerry Ann Bogatz, aimed to run a conventional experiment: some families, chosen at random, would be encouraged to sit preschoolers in front of this brand-new show, while a control group of other families would receive no encouragement.

The problem was that Sesame Street became so popular, so quickly, that it became hard to distinguish between the two groups; everyone was watching. Nevertheless, the study authors did the best they could. They found that children who watched more Sesame Street learnt more, and that “in terms of its own stated goals, Sesame Street was in general highly successful”. Perhaps the message is the message after all.

Yet it is hard to be sure about causation. Did Sesame Street help kids learn? Or was the programme attractive to children who were already flourishing?

A recent study by two economists, Melissa Kearney and Phillip Levine, approaches the problem from a different angle. Professors Kearney and Levine noted that in the early years of Sesame Street, some geographical areas simply couldn’t receive the broadcast signals that carried the show. Two-thirds of US children could watch the show, and many did, but one-third could not.

Based on this accidental experiment, Profs Kearney and Levine concluded that the children who had lived in a region where Sesame Street was available were less likely to fall behind at school. The effect was about as large as attending the US Head Start early childhood education programme — impressive, given that TV is so cheap. The benefits were particularly large for children who lived in deprived areas.
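The logic of that accidental experiment can be sketched in a few lines (a stylised toy of my own, with made-up probabilities; it is not Kearney and Levine’s actual method or data): because signal coverage was an accident of geography, comparing the two groups reads as cause and effect rather than self-selection.

```python
import random

random.seed(0)  # reproducible toy data

def falls_behind(signal_available: bool) -> bool:
    """Toy model of one child: True if they fall behind at school.
    The probabilities are assumed for illustration, not estimates."""
    p = 0.16 if signal_available else 0.20  # hypothetical effect of the show
    return random.random() < p

# Two-thirds of children could receive the broadcast; one-third could not.
covered = [falls_behind(True) for _ in range(60_000)]
uncovered = [falls_behind(False) for _ in range(30_000)]

rate = lambda group: sum(group) / len(group)
print(f"fell behind, signal available: {rate(covered):.1%}")
print(f"fell behind, no signal:        {rate(uncovered):.1%}")
# Because coverage depended on geography, not on families' choices, the gap
# between these rates estimates the causal effect of access to the show.
```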

It is hard to read about this study without being reminded that Sesame Street was born in a very different world — one where children received Sesame Street via UHF broadcast, rather than watching Baby Shark on YouTube, where a version produced by the South Korean media brand Pinkfong has nearly 4bn views.

Like the Children’s Television Workshop 50 years ago, Pinkfong has lofty educational goals: its videos are supposed to teach English to Korean children. It has more than twice as many YouTube subscribers as Sesame Street, which struggled financially in recent years before cutting a deal with HBO.

But the vast, cosmopolitan and mysterious world of toddler YouTube seems unlikely to deliver the same educational benefits to children as Sesame Street, which was continually tweaked to help children learn rather than relentlessly optimised for clicks. As Alexis Madrigal observed in a long report for The Atlantic on toddler YouTube, the viral videos tend to be fast-paced and full of superfluous details. These features may attract the attention of preschoolers, but educational experts think they are unhelpful.

I’m an optimist. Online video could surely be even more educational than Sesame Street, given its ability to be interactive and to gather data on an individual child’s progress. But it would have to be carefully designed and tested, in the same way that Sesame Street was. An educational revolution doesn’t happen by accident.

Written for and first published in the Financial Times on 8 November 2019.

My new podcast is “Cautionary Tales” [Apple] [Spotify] [Stitcher]


Extreme Economies – disaster zones with lessons for us all

In the 17th century, a boy named Hugh Montgomery fell from his horse and lost part of his rib cage; doctors replaced it with a metal plate and he survived — with a living heart that could be inspected by the pioneering doctor William Harvey. Phineas Gage survived a metal spike through his head in 1848, and the changes in his character inspired fresh understanding of how the brain works. If we can learn about the healthy human body by studying people who have suffered catastrophic injuries, might a similar trick work for economics?

That is the premise of Richard Davies’s book, in which he reports on economies that he views as unusually resilient, such as Aceh after the dreadful tsunami of 2004, or dysfunctional, such as Glasgow and Kinshasa, or otherwise extreme, such as Akita in Japan, where the average age is 53.

This is an unconventional approach. Economists and business journalists tend to focus on the same broad trends in the same major economies. But Davies suggests, plausibly, that many parts of the world will eventually have the demographics of Akita, the inequality of Santiago or the squandered environment of Darien, Panama, and so a journey to the extremes gives us a glimpse of our own future. Even when it does not, there is always the thrill of exploration.

I sympathise with the conceit. One of my own books, The Undercover Economist Strikes Back, lingers on RA Radford’s remarkable 1945 account of an economic system emerging in a prisoner-of-war camp. Quite apart from the grim fascination of the subject matter, a prison camp teaches us a surprising amount about how a real economy works.

Similarly, Davies studies the irrepressible markets inside the Louisiana State Penitentiary. There’s the mackerel economy — mackerel being light, standardised and durable, it makes a good currency — and the “dot” economy. In the outside world, Green Dot pre-paid plastic cards, as good as cash in most stores, can be loaded with value by purchasing a “MoneyPak”, which is essentially just a 14-digit code, the “dots”. Inside the prison, prisoners can bribe guards or pay each other large sums, untraceably; all they need is for an associate to pass them the “dots”.

Extreme Economies makes two promises: to give us a global tour of disaster and recovery, showing us places we would never see first-hand; and to teach us something about how ordinary economies work by studying extreme ones. Davies delivers impressively on the first promise, with crisp and sensitive reporting from an extraordinary range of inaccessible places.

The lessons, however, are more uneven. Davies notes, for example, that after the Aceh tsunami, the few survivors were able to sell their gold jewellery to local gold traders Harun and Sofi, who could access the international market price. That gold was always intended as saving for hard times, and Davies tells us it worked as intended, in “contrast with the western financial system”. Yet while gold bracelets worked, bank accounts would have worked better — it took three months for the gold traders to be up and running again. If there is a lesson for the reform of the western banks here, Davies does not tell us what it is.

While the post-1945 decline of Glasgow’s shipyards is well described, it is not fully explained: the yards on the Clyde did not invest in dry docks, says Davies, but he does not say why. And in the camp of Zaatari in Jordan, Davies praises the entrepreneurial spirit of Syrian refugees, noting that in 2016 the ratio of new to established firms was world-beating. He fails to acknowledge that since Zaatari was barely four years old at the time, it is surprising that the ratio wasn’t higher. One sure way to have a high ratio of start-ups is to live in a place that until recently did not exist.

That aside, the descriptions of Zaatari are a triumph. Davies takes us inside, introduces us to the residents and deftly sketches both their many struggles and some of the pleasures of life in the camp. The contrast with another camp, Azraq, is unforgettable: Azraq is better planned but much more tightly controlled. Life there is equitable but joyless. As a discussion of the strengths and weaknesses of markets versus planned economies, Extreme Economies is one of the most subtle and surprising I have read. Davies sets the austere modernism of Azraq against the messy improvisations of Zaatari. It’s not just about access to material goods, but the way Azraq is “desolate, empty and depressing”. The homes in Azraq are sturdier, the electricity supply more reliable — and yet few people wish to move from Zaatari to Azraq.

Davies returns to Zaatari, and sits on a rooftop sipping orange soda and eating grilled chicken, contemplating the camp’s joys and sorrows. Here he delivers on his promises, giving us a glimpse into a different world, and a lesson learnt about our own.

Written for and first published in the Financial Times on 6 December 2019.

Catch up on the first season of my podcast “Cautionary Tales” [Apple] [Spotify] [Stitcher]

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


7th of January, 2020 | Other Writing

Book of the Week 1: A World Without Work

I’ve set myself the goal of writing a short book review every week in 2020. Let’s see how that goes. Happy New Year!

Daniel Susskind’s A World Without Work is, primarily, an excellent guide to the economics of automation and to the latest progress in artificial intelligence. Susskind begins by describing “a history of misplaced anxiety” about the machines taking the jobs, before outlining the influential Autor-Levy-Murnane (ALM) paradigm of 2003.

ALM emphasise tasks, rather than jobs: automation is far more likely to encroach on a narrow task (such as adding up the prices of goods at a supermarket checkout) than to completely replace a job such as a checkout assistant. We should therefore expect automation to reshape jobs, not replace them.

So far, so good – and Susskind’s contribution is to deliver a crystal-clear explanation of the received wisdom in economics, with plenty of examples. It’s a model of popular academic writing.

Susskind then moves to argue that many economists are underestimating what automation is now achieving. The ALM idea of “routine” and “non-routine” tasks is starting to break down – consider the progress in image recognition, legal document analysis (something Susskind has studied deeply), or translation. Is Google Translate really performing a “routine” task? What about AlphaZero, the self-trained system that destroyed the best chess and Go players in the world, human or computer?

Susskind’s point is that the ALM paradigm needs re-examining: we can no longer simply assume that large numbers of tasks are “non-routine” and therefore robot-proof. Neither can we assume that almost all humans will find it straightforward to earn a living. We need to adapt to a world where technological unemployment may arrive on a large enough scale to cause real misery and disruption.

Finally Susskind reviews solutions, such as a basic income, education, and – speculatively – a “meaning-creating state”, by which he means a state that is able to produce a sense of purpose, meaning and identity that in the 20th century was provided by our careers. I think he’s right to identify the goal of helping people find a sense of meaning and identity; I’ve no idea, however, what a “meaning-creating state” would really look like. But perhaps that is less a criticism of Susskind and more a recognition of how deep and complex the challenge might become.

Recommended!

Compare Carl Benedikt Frey’s The Technology Trap, a book which is intimidatingly weighty but well-written and accessible. Frey was one of the researchers behind the viral “xx% of jobs are vulnerable to automation” claim, but this book is much more than a book about robots taking jobs – it’s a history of automation from pre-industrial times.

So far I’ve only read the (penultimate) chapter on artificial intelligence; it’s excellently written, full of examples and studies I hadn’t previously encountered, and I learned a lot. It does not obviously contradict Susskind’s book, and it is intriguing that there are so many ideas out there that the overlap between the two is modest.

Catch up on the first season of my podcast “Cautionary Tales” [Apple] [Spotify] [Stitcher]

My book “Fifty Things That Made the Modern Economy” (UK) / “Fifty Inventions That Shaped The Modern Economy” (US) is out now in paperback – feel free to order online or through your local bookshop.


6th of January, 2020 | Marginalia