In early 1999, Les Roberts traveled to Bukavu, a city of more than 200,000 in the Democratic Republic of the Congo (DRC). The country’s brutal civil war was in full swing, and a nearby region, Katana, had been largely cut off from the outside world for nearly a year. Roberts, a former Centers for Disease Control (CDC) epidemiologist who’d taken over as director of health policy for the International Rescue Committee, wanted to see how the locals were faring.
Every morning for weeks, Roberts and his team rode into the jungle. After finding a spot they’d selected randomly on a map, they approached the people living in the area and asked them about recent deaths in their households. When Roberts finally crunched the numbers, he determined that the mortality rate in Katana was two and a half times the peacetime rate. The next year, using a similar approach, he concluded that the war’s overall death toll in eastern DRC at the time wasn’t 50,000, as widely reported, but a staggering 1.7 million.
Roberts’ results helped boost the reputation of conflict epidemiology, a fledgling discipline that applies the tools of public health research to the surprisingly difficult question of how many civilians die in war zones. Historically, soldiers and journalists have been the main sources of real-time casualty estimates, leaving the truth somewhere between propaganda and a best guess. Researchers are still revising death tolls for wars that ended decades ago; estimates of civilian deaths in Vietnam even now range from 500,000 to 2 million or more. The methods Roberts helped pioneer aimed to end some of that uncertainty.
Advocates and policymakers quickly discovered the power of scientifically valid mortality studies to spur leaders into action. After Roberts presented his Congo data on Capitol Hill in 2001, US aid to the country jumped tenfold. A CDC survey of mortality rates in Kosovo was used as evidence in Slobodan Milosevic’s war crimes trials at The Hague. Multiple such surveys in Darfur contributed to former Secretary of State Colin Powell’s decision to condemn the Sudanese government for facilitating genocide. And Roberts’ 2001 estimate of deaths during Sierra Leone’s civil war has been widely accepted.
But when Roberts took on the challenge of tracking civilian casualties in Iraq, he was quickly reminded that, as with the use of sampling in the US census, statistical methodology can become highly politicized. He found his Iraq work misunderstood, misrepresented, even written off as propaganda. Lifting the fog of war, Roberts discovered, isn’t a question of finding the most accurate number, but of finding one people are willing to accept.
In September 2004, Roberts, then at Johns Hopkins University, arrived in Baghdad to supervise a nationwide mortality survey in collaboration with Gilbert Burnham, codirector of the university’s Center for Refugee and Disaster Response. At first Roberts accompanied the Iraqi physicians who were conducting the interviews. But after police detained two of them, Roberts and the doctors decided he should stay behind. “They all realized that being with an American was something radioactive,” he says.
So while Roberts sat in his hotel room, his research teams fanned out across the country, following the approach he had used in the Congo five years earlier. The technique, known as a two-stage cluster survey, works much like an opinion poll in which interviews with a random sampling of people are extrapolated to reflect the views of an entire population. Roberts’ teams surveyed 990 households located near 33 randomly selected spots, more than the minimum number of “clusters” epidemiologists consider necessary to get an accurate picture of what’s going on in a country.
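The arithmetic behind that extrapolation is simple to sketch. Here’s a minimal Python illustration with entirely invented numbers (none of these figures come from Roberts’ surveys): deaths and person-months observed in the sampled clusters yield a mortality rate, which is then scaled to the whole population.

```python
# Sketch of two-stage cluster-survey extrapolation (all numbers invented).
# Stage 1 picks random cluster locations; stage 2 interviews the households
# there. Observed deaths are converted to a rate and scaled up nationally.

# Hypothetical data: deaths recorded in each of 33 clusters, and the
# person-months of exposure each cluster represents (~30 households
# followed over an 18-month recall period).
cluster_deaths = [1, 0, 0, 2, 0] * 6 + [1, 0, 1]   # 33 invented counts
cluster_person_months = [540] * 33

total_deaths = sum(cluster_deaths)
total_person_months = sum(cluster_person_months)

# Crude mortality rate per 1,000 people per year.
rate_per_1000_per_year = total_deaths / total_person_months * 12 * 1000

# Extrapolate to a hypothetical population of 25 million over 1.5 years.
population = 25_000_000
estimated_deaths = rate_per_1000_per_year / 1000 * population * 1.5

print(f"rate: {rate_per_1000_per_year:.2f} per 1,000 per year")
print(f"extrapolated deaths: {round(estimated_deaths):,}")
```

A real survey adds a second step this sketch omits: comparing the wartime rate against a prewar baseline to isolate “excess” deaths attributable to the conflict.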
Upon returning home, he and Burnham analyzed the data. They were floored: In the 18 months after the American invasion, the numbers suggested, roughly 100,000 Iraqis had died as a result of the war, 60 percent of them violently. That dwarfed the figure from the widely cited website Iraq Body Count, which had tallied no more than 19,061 deaths by scouring press reports and official documents. The Iraqi government’s numbers were also much lower. The researchers sent their study to the prestigious medical journal The Lancet, which published it in October 2004.
The unexpectedly large death toll elicited skepticism, and questions about the methodology. The study had a wide “confidence interval” of 8,000 to 194,000. “This isn’t an estimate. It’s a dart board,” scoffed Slate military writer Fred Kaplan.
But leading epidemiologists and statisticians insist the study is valid. A confidence interval is structured like a bell curve, with the numbers in the bulging middle far more likely to be accurate than those at the tapering ends. It was a larger interval than Roberts and Burnham had hoped for, a consequence of their sample size and the uneven distribution of violence in Iraq. That didn’t render their estimate meaningless, however, just easy to dismiss. “I expected to be criticized,” says Roberts, who has since joined the public health faculty at Columbia University. “I was more struck by the lack of press coverage.”
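That bell-curve intuition can be demonstrated with a small simulation, again using invented numbers rather than anything from the study. Resampling a set of uneven cluster counts (a standard bootstrap) shows how concentrated violence stretches the interval, while estimates near its middle remain far more likely than those near its ends.

```python
import random

random.seed(1)  # reproducible illustration

# Invented per-cluster death counts. Violence is unevenly distributed:
# most clusters saw few deaths, a couple saw many, which stretches the
# spread of possible estimates.
clusters = [0, 0, 1, 0, 2, 0, 0, 14, 1, 0, 0, 3, 0, 1, 0, 0, 9, 0, 1, 0]

def bootstrap_means(data, n=10_000):
    # Resample the clusters with replacement n times, recording each
    # resample's mean deaths per cluster.
    return sorted(
        sum(random.choices(data, k=len(data))) / len(data) for _ in range(n)
    )

means = bootstrap_means(clusters)
lo, mid, hi = means[250], means[5_000], means[9_750]  # 2.5th/50th/97.5th pctile
print(f"95% interval: {lo:.2f} to {hi:.2f} deaths per cluster (median {mid:.2f})")
```

Swapping the two large counts for small ones narrows the printed interval sharply, which is the effect Iraq’s uneven violence had on the study’s range.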
He didn’t help matters by telling reporters he’d opposed the invasion, leading the AP to suggest that the study’s timing was politically motivated. Critics, meanwhile, have questioned Roberts’ decision, in the year following the Lancet article, to launch a short-lived congressional run in upstate New York as a pro-science, anti-war Democrat. Roberts resents the notion that scientists should stay out of politics. “Everyone who writes about public health problems wants them solved,” he says. “No one who writes about measles is neutral.”
Roberts’ politics don’t bother John Tirman, who runs the Center for International Studies at MIT. “I thought it explained as few other things had the origins of the insurgency,” he says of the study. “In a country like Iraq where there are very strong kinship networks, where if someone is attacked and killed it obligates a very large number of men to defend the community, this large scale of violence suggested that there were a large number of Iraqis that were essentially being drawn into the insurgency by the way the invasion and occupation was conducted.”
Tirman (who, full disclosure, served as a board member for Mother Jones’ parent foundation during the 1990s) helped secure funding through MIT for a second, larger survey. In the spring and summer of 2006, the team’s researchers canvassed the country yet again, visiting more than 1,800 households clustered around 47 sites. As of that July, Roberts and Burnham would later estimate, the war had claimed about 655,000 Iraqi lives, suggesting that about 1 in 7 Iraqi families had lost someone because of the ongoing violence. As in the first study, there was a wide confidence interval: plus or minus about 275,000 deaths. But even the low end of the range suggested a death toll far beyond anything previously reported.
That October, after the new findings appeared in The Lancet, the critics pounced, again homing in on what they called fuzzy math. In a Wall Street Journal op-ed, Steven E. Moore, a pollster and former adviser to Coalition Provisional Authority chief Paul Bremer, declared, “I wouldn’t survey a junior high school, no less an entire country, using only 47 cluster points.”
“That’s wrong,” says Jennifer Leaning, a professor at Harvard University’s School of Public Health. “You can sample very large populations with 33 clusters.” Other epidemiologists I contacted agreed.
But there were some valid critiques: By their own admission, Roberts and Burnham had to rely on outdated population estimates to set up the Iraq study, which overlooked war-induced migration as a result. Michael Spagat, an economist at the University of London, argued that the way the interviewers chose their starting points (they’d abandoned the handheld GPS devices used in the first study, deeming them a security risk) would have led them to homes near main thoroughfares, where IED explosions would be more common. And because the lead authors weren’t on hand to monitor the second survey, Spagat and others have suggested that the interviewers simply lied. (That the new results agreed with the team’s earlier findings for the same period suggests that the doctors did their job properly.)
In any case, such problems are common in war zones, according to nearly a dozen leading survey statisticians and epidemiologists I spoke with. “Iraq is not an ideal condition in which to conduct a survey, so to expect them to do the same things that you would do in a survey in the United States is really not reasonable,” says David Marker, a senior statistician with the research corporation Westat. Even if the outdated population data led the researchers to a 20 percent overestimate, Marker explains, the revised death toll would still be at least a couple hundred thousand. “These methodological concerns don’t change the basic message.”
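Marker’s point is easy to verify against the article’s own figures. A back-of-the-envelope Python check (the 20 percent figure is his hypothetical correction, not a measured bias):

```python
# If the 655,000 estimate were inflated 20 percent by outdated population
# data, the revised figures would still be in the hundreds of thousands.
central_estimate = 655_000
ci_low = central_estimate - 275_000  # low end per the reported +/- 275,000

for label, toll in [("central estimate", central_estimate), ("CI low end", ci_low)]:
    revised = toll / 1.2  # back out a 20 percent overestimate
    print(f"{label}: about {round(revised):,} deaths")
```

Even applying the correction to the bottom of the confidence interval leaves a toll above 300,000, which is Marker’s basic message in numbers.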
The White House struck back with its own basic message: The study was bunk. Never mind that Roberts and Burnham had used methods similar to those employed for the Kosovo survey and others approvingly cited by the Bush administration. With the notable exception of This American Life producer Alex Blumberg, most reporters dutifully slapped Roberts’ research with the “controversial” label. And when asked about the study directly, President Bush declared that it had been “pretty well discredited.”
“By whom? By him and his political staff?” snaps Bradley Woodruff, who retired last year from his job as a senior CDC epidemiologist. Woodruff has conducted mortality surveys himself, and considers Roberts’ research solid. But when CBS’s 60 Minutes sought to interview Woodruff about the Lancet study in 2007, the CDC wouldn’t allow it. And when Rep. Dennis Kucinich invited Woodruff to Washington to discuss the study, his bosses nixed that, too. “I never had this kind of censorship under previous administrations,” he says.
More than two years later, the Iraq study remains mired in controversy. But other recent findings suggest that Roberts and Burnham were on the right track. In the summer of 2006, the World Health Organization conducted a large family health survey along with Iraq’s Ministry of Health, interviewing about five times as many people as Roberts and Burnham had, and in a more distributed fashion. In August, Mohamed Ali, a WHO statistician, reported his preliminary results to colleagues at a Denver statistics conference: Nearly 397,000 Iraqis had died because of the war as of July 2006.
That number falls at the low end of Roberts and Burnham’s confidence interval, which ranges from roughly 393,000 to 943,000. But while epidemiologists and statisticians are still pondering questions raised by differences between the two surveys, there’s no longer much doubt among them that Iraq’s civilian casualties number in the hundreds of thousands.
This grim statistic continues to elude most Americans. According to a February 2007 AP poll, Americans’ median estimate of the number of Iraqis killed since the invasion was just 9,890. And while the Pentagon has presented limited estimates of civilian casualties, it has yet to release any numbers for the total toll since the invasion.
Roberts had set out to provide a legitimate number that might be used to inform public policy. For now, at least, that policy has been to keep the truth buried in academic journals, and beneath the sands of Iraq.
Correction appended: An earlier version of this story inaccurately stated that the Iraq Body Count had tallied no more than 23,000 deaths. We had estimated this figure using their published data, and did not obtain the more precise figure of 19,061 until after press time.