Worming our way to the truth
‘Why does such a large policy push need to be based on a handful of clinical trials?’
It was one of the most influential economics studies to have been published in the past 20 years, with a simple title, “Worms”. Now, its findings are being questioned in an exchange that somehow manages to be encouraging and frustrating all at once. Development economics is growing up, and getting acne.
The authors of “Worms”, economists Edward Miguel and Michael Kremer, studied a 1998 deworming project in an area of western Kenya where parasitic intestinal worms were a serious problem. The project was a “cluster randomisation”, meaning that the treatment for worms was randomised between entire schools rather than between children within each school.
Miguel and Kremer concluded three things from the randomised trial. First, deworming treatments produced not just health benefits but educational ones, because healthier children were able to attend school and flourish while in class. Second, the treatments were cracking value for money. Third, there were useful spillovers: when a school full of children was treated for worms, the parasites became less prevalent, so infection rates in nearby schools also fell.
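The logic of that third finding is worth pausing on. A toy simulation (entirely hypothetical numbers, not the study's data) shows why spillovers matter for a cluster design: if untreated schools near treated ones also see infections fall, a naive treated-versus-control comparison understates the true benefit of treatment.

```python
import random

random.seed(0)

# Hypothetical illustration: 40 schools, half assigned to deworming at random.
schools = list(range(40))
random.shuffle(schools)
treated = set(schools[:20])

def infection_rate(school, treated_set, spillover=0.15):
    """Made-up infection rates: treatment cuts a school's rate directly,
    and untreated schools benefit a little for each treated neighbour."""
    base = 0.5
    if school in treated_set:
        return base - 0.25  # direct effect of treatment (assumed)
    neighbours = {school - 1, school + 1}
    exposed = len(neighbours & treated_set)
    return base - spillover * exposed  # spillover effect (assumed)

untreated = [s for s in schools if s not in treated]

treated_mean = sum(infection_rate(s, treated) for s in treated) / len(treated)
# Counterfactual: what control schools would look like with no programme at all.
control_no_programme = sum(infection_rate(s, set()) for s in untreated) / len(untreated)
# What the trial actually observes in control schools, spillovers included.
control_observed = sum(infection_rate(s, treated) for s in untreated) / len(untreated)

# Spillovers pull the observed control rate below the true no-programme rate,
# so the simple treated-vs-control gap understates the programme's benefit.
print(treated_mean, control_no_programme, control_observed)
```

This is only a sketch under invented parameters (`base`, the 0.25 direct effect, the 0.15 spillover); the point it illustrates is structural, not numerical — which is why, as the next sections discuss, economists and epidemiologists treat those spillovers so differently.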
The “Worms” study was influential in two very different ways. Activists began to campaign for wider use of deworming treatments, with some success. Development economists drew a separate lesson: that running randomised trials was an excellent way to figure out what worked.
In this, they were following in the footsteps of epidemiologists. Yet it is the epidemiologists who are now asking the awkward questions. Alexander Aiken and three colleagues from the London School of Hygiene and Tropical Medicine have just published a pair of articles in the International Journal of Epidemiology that examine the “Worms” experiment, test it for robustness and find it wanting.
Their first article follows the original methodology closely and uncovers some programming errors. Most are trivial but one of them calls into question the key claim that deworming produces spillover benefits. Their second article uses epidemiological methods rather than the statistical techniques preferred by economists. It raises the concern that the central “Worms” findings may be something of a fluke.
Everyone agrees that there were some errors in the original paper; such errors aren’t uncommon. There’s agreement, too, that it’s very useful to go back and check classic study results. All sides of the debate praise each other for being open and collegial with their work.
But on the key questions, there is little common ground. Miguel and Kremer stoutly defend their findings, arguing that the epidemiologists have gone through extraordinary statistical contortions to make the results disappear. Other development economists support them. After reviewing the controversy, Berk Ozler of the World Bank says: “I find the findings of the original study more robust than I did before.”
Yet epidemiologists are uneasy. The respected Cochrane Collaboration, an independent network of health researchers, has published a review of deworming evidence, which concludes that many deworming studies are of poor quality and produce rather weak evidence of benefits.
What explains this difference of views? Partly this is a clash of academic best practices. Consider the treatment of spillover effects. To Miguel and Kremer, these were the whole point of the cluster study. Aiken, however, says that an epidemiologist is trained to think of such effects as “contamination” — an undesirable source of statistical noise. Miguel believes this may explain some of the disagreement. The epidemiologists fret about the statistical headaches the spillovers cause, while the economists are enthused by the prospect that these spillovers will help improve childhood health and education.
Another cultural difference is this: epidemiologists have long been able to run rigorous trials but, with big money sometimes at stake, they have had to defend the integrity of those trials against the possibility of bias. They place a high value on double-blind methodologies, where neither subjects nor researchers know who has received the treatment and who is in the control group.
Economists, by contrast, are used to having to make the best of noisier data. Consider a century-old intervention, when John D Rockefeller funded a programme of hookworm eradication county by county across the American south. A few years ago, the economist Hoyt Bleakley teased apart census data from the early 20th century to show that this programme had led to big gains in schooling and in income. To an economist, that is clever work. To an epidemiologist, it’s a curiosity and of limited scientific value.
As you might expect, my sympathies lie with the economists. I suspect that the effects that Miguel and Kremer found are quite real, even if their methods do not quite match the customs of epidemiologists. But the bigger question is why so large a policy push needs to be based on a handful of clinical trials. It is absolutely right that we check existing work to see if it stands up to scrutiny, but more useful still is to run more trials, producing more information about how, where and why deworming treatments work or do not work.
This debate is a sign that development policy wonks are now serious about rigorous evidence. That’s good news. Better news will be when there are so many strong studies that none of them will be indispensable, and nobody will need to care much about what exactly happened in western Kenya in 1998.
Written for and first published at ft.com.