Revising the Risk of Dirty Air
For several years, a team of School researchers has been presenting the results of a massive analysis of air pollution’s daily effect on mortality in the nation’s 90 largest cities over an eight-year span. After five years of work, they had strong evidence linking particulate pollutants to increased deaths and hospitalizations from cardiac and pulmonary conditions.
Principal investigator Jonathan Samet, professor and chair of Epidemiology; Scott Zeger, professor and chair of Biostatistics; Francesca Dominici, assistant professor of Biostatistics; and Aidan McDermott, an assistant scientist in Biostatistics, were nearing their final goal, presentation of the results to an influential Environmental Protection Agency (EPA) committee, when they hit a small but alarming hitch.
Dominici, PhD, and McDermott, PhD, were working on what they thought would be their last paper on the National Morbidity, Mortality, and Air Pollution Study (NMMAPS) when they ran the analysis with an important analytical factor altered. The results should have changed dramatically but, to their amazement, nothing happened. Further dramatic alterations of this factor also produced no change in the results.
Time was excruciatingly short. The team had planned to present the results in July to the Clean Air Scientific Advisory Committee (CASAC), the group responsible for evaluating the EPA’s assessment of air pollution’s effects on health. That assessment (for particulate matter and other pollutants) is mandated every five years and shapes the EPA’s national air pollution control policy for years to come.
In hopes of understanding what was causing the strange lack of effect, Dominici and McDermott took a close look at S-Plus, the popular statistical analysis software they had used in NMMAPS. In what Zeger, PhD, calls “an amazing bit of detective work,” Dominici and McDermott discovered that default values in the S-Plus implementation of a widely used statistical method (called Generalized Additive Models) hadn’t been updated since the early 1990s. The values, known as convergence criteria, determine how precisely the software’s iterative model fitting must settle before the program stops refining its answer. Their unchanged settings meant that a large fraction of currently available computing power, including all the tremendous advances in computing speed and capacity since the early 1990s, wasn’t being brought to bear on the NMMAPS analyses when such power was merited.
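The kind of failure involved is easy to sketch. Below is a minimal, hypothetical Python illustration (nothing here is the actual S-Plus code or its names) of backfitting, the iterative algorithm behind Generalized Additive Models, applied to two highly correlated terms, a situation analogous to the correlated smooth functions of time and weather in daily mortality models. With correlated terms the iterations creep toward the answer, so a loose convergence tolerance stops them while the estimates are still drifting:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two highly correlated predictors, standing in for the correlated
# smooth terms (e.g., time trend and weather) in a mortality model.
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)        # correlation ~0.995
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

def backfit(tol, max_iter=100_000):
    """Gauss-Seidel backfitting with one-variable least-squares 'smoothers'."""
    b1 = b2 = 0.0
    for it in range(1, max_iter + 1):
        b1_new = x1 @ (y - b2 * x2) / (x1 @ x1)   # refit term 1 to partial residual
        b2_new = x2 @ (y - b1_new * x1) / (x2 @ x2)
        change = abs(b1_new - b1) + abs(b2_new - b2)
        b1, b2 = b1_new, b2_new
        if change < tol:                          # the convergence criterion in question
            break
    return it, b1, b2

for tol in (1e-3, 1e-10):                         # loose vs. strict tolerance
    iters, b1, b2 = backfit(tol)
    print(f"tol={tol:g}: {iters:5d} iterations  b1={b1:.4f}  b2={b2:.4f}")
```

Tightening the tolerance costs many more iterations, the extra computation the team brought to bear, and visibly shifts the estimates. In NMMAPS the effect sizes being estimated were themselves fractions of a percent, so precision at exactly that scale mattered.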
Dominici and McDermott spent several sleepless weeks identifying stricter convergence parameters and then redoing the analyses (nearly 30,000 by Zeger’s estimate). They found that there was still a definite connection between increased particulate pollution and higher risk of death or illness.
“The revised results lowered that risk, but we still had strong, consistent evidence of association between air pollution and increased morbidity and mortality across the whole country,” says Zeger. “Our first results weren’t qualitatively wrong, they just weren’t optimal.”
According to Samet, the software issue and the new results were reported to the EPA, other concerned groups, and the scientific community by the Health Effects Institute (HEI), the nonprofit foundation that funded the project. HEI’s “scientific, peer-reviewed communication” on the matter explained the problem and described the new results and plans for further analyses, he says.
When the story hit the media, however, many news reports misinterpreted the revisions as invalidating not only the results of NMMAPS but all previous studies of air pollution’s effects on health (since some prior assessments had used S-Plus as well).
Detractors focused particular attention on the percentage change in risk, with some claiming that the previous results had been reduced by nearly 50 percent. Zeger says this is true from a strictly numerical perspective: the average increase in risk declined from .41 percent to .27 percent for every 10-microgram-per-cubic-meter increase in particulate matter. That difference of .14 percentage points is roughly half of the final figure of .27, which is the comparison behind the critics’ claim. (Taken from another perspective, .27 is about 66 percent of .41, meaning a 34 percent decline in the estimated increase in risk.)
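The two framings can be checked directly from the figures quoted above; the quick Python arithmetic below is just that check, not anything from the study’s own code:

```python
original, revised = 0.41, 0.27   # % increase in risk per 10 µg/m³ of particulates
drop = original - revised        # 0.14 percentage points

# Critics' framing: the drop measured against the *revised* estimate
print(f"drop is {drop / revised:.0%} of the final figure")           # 52%, "nearly half"

# Authors' framing: the drop measured against the *original* estimate
print(f"drop is a {drop / original:.0%} decline from the original")  # 34%
```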
However, emphasizing the change in relative risk can be misleading. Zeger and the other NMMAPS authors find it much more useful to focus on the absolute change in risk—how many people will die? Taking the city of Baltimore as their example, they calculated that their final result means that lowering average particulate pollution levels by 10 micrograms per cubic meter could prevent 20 deaths per year from acute exposure. Original risk estimates would have put this improvement at 30 deaths per year.
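Translating a percentage risk into deaths per year takes only a baseline count of deaths in the exposed population. The sketch below shows the shape of that calculation; the baseline of 7,400 annual deaths is a hypothetical round number chosen because it reproduces the article’s 20- and 30-death figures, not a number taken from the study:

```python
# Hypothetical baseline: annual deaths among the studied Baltimore population.
# 7,400 reproduces the article's figures; it is NOT from the study itself.
baseline_deaths = 7_400

for label, pct_per_10ug in [("revised (.27%)", 0.27), ("original (.41%)", 0.41)]:
    prevented = baseline_deaths * pct_per_10ug / 100
    print(f"{label}: ~{prevented:.0f} deaths per year prevented "
          f"by a 10 µg/m³ reduction")   # ~20 revised, ~30 original
```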
“But most importantly, NMMAPS only estimates the acute effects of exposure to pollution,” emphasizes Zeger. “If we take account of the effects of longer-term exposure in a city like Baltimore, the attributable deaths are an order of magnitude higher.”
“Everybody’s trying to use this update, and this thing now seems to have a life of its own,” says Zeger. “Even after we posted a question and answer document about the revisions on the Internet, people still don’t seem to look to what’s true.” (See the document.)
Dominici presented the revised results in mid-July to the EPA’s CASAC, the panel charged with reviewing and approving the agency’s summary and synthesis of the evidence on particulate matter, and to a National Research Council committee.
“This is very high stakes science,” says Samet, MD, MS, who in addition to being principal investigator on NMMAPS, served as consultant to CASAC for particulate matter and chair of the National Research Council Committee on Research Priorities for Particulate Matter. “There are enormous financial implications for a huge sweep of industry, and that means the evidence is looked at very carefully. It needs to hold up to scrutiny.”
Samet says it will be several years before the final EPA documents and recommendations are completed.