Discoveries
An observation about numbers
By Kelley Meck
Hard data: The very crispness of the words sounds solid and incontrovertible. But when it comes to crunching the complex numbers in observational medical studies, it turns out that different statistical methods can produce dramatically different results.
When DMS researchers, led by Therese Stukel, Ph.D., used four different analytic methods to interpret the same set of observational data, they found that only one accurately predicted the success rate of a routine invasive cardiac procedure. The study, published in the Journal of the American Medical Association, offers researchers—as well as patients and doctors—what Stukel calls "a word of caution."
Sure-fire: In a perfect world, all medical treatments would be based on randomized controlled trials, the gold standard of medical science. Such studies are a nearly sure-fire method for getting good data but are also costly, time-intensive, and sometimes not possible because of ethical concerns. For example, randomly assigning people to smoke or not smoke to study the effects of tobacco use would be unethical because it is already known that smoking is harmful.
So researchers often depend on observational studies because they are less expensive to conduct and the data is easier to collect. But such data can be trickier to interpret. The subjects being compared may differ in all sorts of ways—such as age, income, education, and medications they take. And there may be hidden biases; for example, some physicians may be more likely to choose younger and healthier patients for surgery. So researchers apply statistical models designed to adjust the data and "remove" the differences.
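To make the idea concrete, here is a minimal sketch in Python, using invented numbers rather than anything from the study: a measured trait (age) influences both who gets surgery and who survives, so a naive comparison is biased, but including age in a regression "removes" the difference.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

age = rng.normal(75, 7, n)                            # measured confounder
# doctors favor younger patients for surgery (selection, not chance)
p_surgery = np.clip(0.8 - 0.04 * (age - 75), 0.05, 0.95)
surgery = (rng.random(n) < p_surgery).astype(float)

# assumed truth: surgery cuts one-year death risk by 5 points; age adds risk
p_death = np.clip(0.30 + 0.01 * (age - 75) - 0.05 * surgery, 0.0, 1.0)
death = (rng.random(n) < p_death).astype(float)

# naive comparison is biased: the surgery group is younger and healthier
naive = death[surgery == 1].mean() - death[surgery == 0].mean()

# adjustment: regress death on surgery AND age, read off the surgery term
X = np.column_stack([np.ones(n), surgery, age])
coef, *_ = np.linalg.lstsq(X, death, rcond=None)

print(f"naive difference: {naive:+.3f}")   # overstates the benefit
print(f"age-adjusted:     {coef[1]:+.3f}") # close to the true -0.05

Adjustment works here because age was measured. The trouble, as the study shows, comes when the traits that steer patients toward or away from surgery are hidden from the analyst.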
Stukel's team set out to compare the effectiveness of several such models. They applied three standard methods and one nonstandard method to the same set of data about long-term survival after a heart attack. The subjects were 122,124 Medicare patients hospitalized for a heart attack in 1994-95; some had received an invasive cardiac treatment, and the rest had received one of several nonsurgical therapies. The researchers looked to see if predictions based on the observational data were consistent with actual results from randomized trials.
They weren't. Randomized trials had shown that surgery reduced relative mortality by 8% to 21%, yet the three standard analytic methods estimated one-year relative mortality to be 50% lower for those who got surgery, a result that Stukel termed "clinically implausible." The fourth method, instrumental variable analysis, a technique borrowed from econometric research, predicted a 16% reduction in relative mortality; it was more accurate, the team concluded, because it mimicked randomization.
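To see why an instrumental variable can succeed where ordinary adjustment fails, consider a small Python sketch, again with invented numbers and not the study's actual model. An unmeasured trait (call it frailty) biases the direct comparison, and no statistical adjustment can remove it because it was never recorded. A hypothetical instrument (here, living in a region where surgery is simply performed more often, assumed to affect survival only through the surgery itself) shifts who gets treated much as randomization would:

import numpy as np

rng = np.random.default_rng(1)
n = 200_000

frailty = rng.normal(0, 1, n)            # confounder NOT in the data set
region = rng.random(n) < 0.5             # instrument: high-use region or not

# frailer patients are steered away from surgery; region shifts use
p_surgery = np.clip(0.30 + 0.30 * region - 0.15 * frailty, 0.02, 0.98)
surgery = (rng.random(n) < p_surgery).astype(float)

# assumed truth: surgery cuts death risk by 5 points; frailty raises it
p_death = np.clip(0.30 + 0.08 * frailty - 0.05 * surgery, 0.0, 1.0)
death = (rng.random(n) < p_death).astype(float)

# standard adjustment cannot help: frailty was never measured
naive = death[surgery == 1].mean() - death[surgery == 0].mean()

# Wald/IV estimate: region's effect on death divided by its effect on surgery
iv = ((death[region].mean() - death[~region].mean())
      / (surgery[region].mean() - surgery[~region].mean()))

print(f"naive difference: {naive:+.3f}")  # inflated by hidden frailty
print(f"IV estimate:      {iv:+.3f}")     # close to the true -0.05

Because region is unrelated to frailty, dividing its effect on death by its effect on surgery isolates the effect of surgery itself, which is the sense in which the technique "mimics" randomization.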
The study, says Stukel, should be "a teaching example to show medical researchers to use caution in certain circumstances in interpreting the results of certain observational studies."