EDITOR’S NOTE: Though a landmark, the research on the global burden of disease uses underlying raw data that is of poor quality, according to Victoria Fan, a research fellow at the Center for Global Development. This, she adds, has brought about “uncertainty and inaccuracy in many global health statistics.”
Although counting the sick and dead in a country can seem dull, if not morbid, these facts are critical inputs to designing any national health policy, to say nothing of setting global priorities in health. Yet 85 percent of the world’s population still lacks systems that register births and deaths along with high-quality data on causes of death. The global burden of disease — whose first edition was commissioned by the World Bank in 1991 and whose latest edition came out in December 2012 in the Lancet — was the first systematic attempt to count the sick and dead in a rigorous way. The GBD researchers used all data sources available to them.
And while this work is a landmark in global health history and deserves praise, the underlying data the researchers use are of poor quality. For example, it’s hard to figure out how many deaths were actually counted in the latest GBD, and how many were “extrapolated” using a variety of methods. What’s worse, there has been slow — if any — progress in improving the underlying data since the first GBD two decades ago.
These poor “raw ingredients” — the underlying raw data — are the main reason there is such uncertainty and inaccuracy in many global health statistics. Even with the wide application of new statistical methodologies by top-notch researchers, only so much can be done to correct for biased and missing data. Put another way, even the best recipes and best chefs in the world can’t make a meal out of spoiled (or nonexistent) ingredients.
There are countless examples of global estimates being significantly revised because of these uncertainties. The most recent is the twofold difference between World Health Organization and Institute for Health Metrics and Evaluation estimates of the number of malaria deaths. Another big “guess” was for HIV and AIDS: in 2007, UNAIDS cut its estimate of people living with HIV by over 6 million, to 33.2 million. Maternal mortality estimates may have been previously overestimated, too. The list of embarrassing “whoops, bad guess” moments goes on, and those “whoops” will keep happening under the current business-as-usual scenario.
Meanwhile, critics have faulted the global burden of disease study, particularly for its lack of transparency and lack of consultation with countries (see here). The GBD authors recently gave an entirely predictable explanation for their actions: the academics who own the data need to publish. As a researcher, I can sympathize. And while there is reason for concern about the academic replicability and transparency of the work by folks in Seattle or Geneva, these issues don’t strike at the real heart of the problem: the problem is not with the chefs or their secret recipes. It’s with the bad ingredients.
During a recent meeting in Geneva on the state of global-health statistics, Dr. Richard Horton, editor of the eminent Lancet journal, highlighted this central problem through his always-fascinating stream of tweets:
@richardhorton1: “Theo Vos: the problem is that health data are collected by people who don’t care about health.”
@richardhorton1: “Henk Bekedam: those working in national health information systems have been badly ignored. It’s time we paid them more serious attention.”
@richardhorton1: “Shams El Arifeen: Multiple estimates are not the problem. It is explaining them for local use that is the problem.”
@richardhorton1: “Claudia Stein (WHO EURO): ‘Estimates are really political…good people in national institutions get fired’ if a number gives bad news.”
@richardhorton1: “Ken Hill: ‘country review of estimates is the weakest part of what we do.’”
Horton concluded his stream of tweets with a snapshot (see below) of the “Recommendations on the way forward” produced at this meeting, with notable emphasis on strengthening country health information systems and country capacity. But at first glance, these recommendations do not reflect the lessons of two decades of failed efforts to build country statistical capacity. They need to clarify what exactly “strengthening” health information systems and country capacity means, and why it hasn’t already happened over the past 20 years. Recommendations made by academics and policymakers without extensive leadership from countries run the risk of being ineffective.
Which is why I’m encouraged by the Data for Development working group convened by Alex Ezeh of the African Population and Health Research Center and Amanda Glassman of the Center for Global Development. The working group has brought together a number of “local,” “country” actors, particularly those connected with ministries of statistics, along with donors. The group has focused on the poorly aligned incentives, at every level, to collect these data in the first place, along with the political economy and institutional arrangements that have helped or hindered statistical capacity. Their perspectives, I believe, will shed new light and yield new recommendations for addressing these persistent problems.
Edited for style and republished with permission from the Center for Global Development. Read the original article.