Are We Hunting Too Hard?
additional treatment because they have difficulty urinating following surgery; 28% must wear pads because they have the opposite problem—they cannot hold their urine; and more than half are bothered by a loss of sexual function."
In addition to causing harm from unneeded treatment, overdiagnosis can distort our perceptions of how well certain cancer treatments work, says Black. Because very slow-growing and potentially harmless cancers are relatively easy to control and eliminate, finding and treating more of them makes therapies seem more effective. "We see that [such cancers] behave well when we treat them, and we falsely attribute their good behavior to our treatment," explains Black. "By definition, survival only pertains to people who are diagnosed with the disease. So when you have overdiagnosis, survival is very, very misleading. Survival is going to be very long in people who are overdiagnosed," as well as in people whose cancers are found very early. For these reasons, Black, Welch, and others at DMS are critical of using survival rates—such as the oft-cited five-year survival rate—as a measurement of the effectiveness of screening. Looking at death rates is a better way of evaluating screening, they argue, but even that approach has problems.
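Black's point about overdiagnosis inflating survival statistics can be made concrete with a little arithmetic. The numbers below are invented for illustration, not taken from any study: adding harmless, overdiagnosed cancers to the denominator raises the five-year survival rate even though not a single death is prevented.

```python
# Hypothetical numbers (invented for illustration) showing how
# overdiagnosis inflates five-year survival without saving anyone.
def five_year_survival(diagnosed, dead_within_5y):
    """Fraction of diagnosed patients still alive at five years."""
    return (diagnosed - dead_within_5y) / diagnosed

# Suppose 1,000 people have progressive cancer and 400 die within five years.
print(five_year_survival(1000, 400))  # 0.6, i.e. 60% five-year survival

# Now suppose screening also finds 1,000 indolent cancers that would
# never have caused symptoms. Deaths are unchanged (still 400), but the
# denominator doubles, so "survival" jumps with no one actually helped.
print(five_year_survival(2000, 400))  # 0.8, i.e. 80% five-year survival
```

This is exactly why, as Black says, survival "only pertains to people who are diagnosed": the statistic can be driven up simply by diagnosing more people.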
"You can't just look at disease-specific mortality, because we're not sure what causes death in a lot of people," Black explains. "There are a lot of deaths that are difficult to determine the cause of, and you can bias your results strongly in one direction or the other with disease-specific mortality."
The best measure of a cancer screening technique, Black and Welch contend, is total deaths—known as all-cause mortality. Yet the most widely accepted measure remains disease-specific mortality. In 2002, Black and Welch wrote a paper for the Journal of the National Cancer Institute (JNCI) showing just how shaky a foundation disease-specific mortality can be. They looked at the death rates in several screening trials for breast cancer, colon cancer, and lung cancer, each of which included a screened population and a control, or non-screened, population. Of the 12 trials they reviewed, seven showed major inconsistencies between disease-specific and all-cause mortality. In two there was a significant difference in the magnitude of the result, and in five the disease-specific and all-cause results actually contradicted one another—with one method showing a screening benefit and the other a screening harm.
For example, among the trials Black and Welch examined was the well-known 1989 Swedish Two-County mammography study, which reported that mammography reduces breast-cancer mortality. But when Black and Welch looked at the number of deaths from all causes in the screened group and the non-screened group, they found that there were actually slightly more deaths in the screened population. Likewise, Black and Welch examined a 1993 New England Journal of Medicine study that showed a 30% reduction in colon-cancer mortality in people screened with the fecal occult blood test. But the screening benefit disappeared when all-cause mortality was considered.
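The kind of contradiction the Two-County data illustrated can be sketched with hypothetical trial counts. All numbers below are invented, not drawn from the studies above; they simply show how the two yardsticks can point in opposite directions within the same trial.

```python
# Hypothetical trial counts (invented for illustration): equal-sized
# screened and unscreened arms, with cancer deaths and total deaths.
screened   = {"n": 50_000, "cancer_deaths": 60, "total_deaths": 1_050}
unscreened = {"n": 50_000, "cancer_deaths": 80, "total_deaths": 1_030}

def rate(deaths, n):
    """Deaths per person in the arm."""
    return deaths / n

# Disease-specific mortality favors screening...
print(rate(screened["cancer_deaths"], screened["n"])
      < rate(unscreened["cancer_deaths"], unscreened["n"]))  # True

# ...yet all-cause mortality is slightly higher in the screened arm,
# the pattern Black and Welch reported finding in some trials.
print(rate(screened["total_deaths"], screened["n"])
      > rate(unscreened["total_deaths"], unscreened["n"]))  # True
```

Because cancer deaths are a small slice of all deaths, a modest bias in assigning causes of death can reverse the apparent verdict—which is the core of Black's objection to disease-specific mortality.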
Such findings raise an obvious question: if screening doesn't reduce all-cause mortality, why be screened? "Think for a minute about why you might want to be screened," writes Welch in his book. "Is it because you want to reduce your chance of a cancer death? Or is it because you want to reduce your chance of death, period?"
Just asking these questions makes Black and Welch controversial. Nationally, researchers and physicians who are asking such questions are "a small minority, maybe one percent of the entire cancer community," says Black. "And they are actually [considered] extreme in their views relative to everybody else." The skeptics of screening are "not usually the loud voices because nobody wants them to be loud," Black continues. At Dartmouth, however, Black and especially Welch have been very outspoken. Since the release of his book, Welch has been quoted in the Wall Street Journal, the New York Times, Harper's, and many other prominent national publications, often as the voice of opposition against ubiquitous cancer screening.
And to the dismay of some advocates of screening at DMS, Welch and Black have earned Dartmouth a reputation as "the center of antagonism for screening," as one visiting National Cancer Institute senior scientist recently put it. But Welch and Black insist that they are not anti-screening.
"Gil and I aren't saying that people shouldn't be screened," explains Black, clarifying their perspective. "Our main point is that there are pros and cons and none of them, none of the screening tests, is a slam dunk in terms of producing a lot more good than harm."
"I want to emphasize, there is not a right" answer to the screening question, says Welch. "It's about personal preferences. It's about how you weigh the different outcomes. No one is suggesting that no one should get invited [to be screened], but they should be aware it's a close call."
There is also "a real theology here," he says, "and I understand exactly where it comes from. The idea is so appealing: Earlier is better. Prevention is better than cure. Finding small, bad breast cancers must be good." But the closest Welch can come to the "truth" about cancer screening, he says, is that "the effects of screening are probably mixed in general. Very few are helped. And very few are hurt. And most [screenings] have no effect." He repeats the last point for emphasis: "Most have no effect."
In other words, most people who get screened never develop cancer and never would have. Thus for them, screening confers no benefit. Even advocates of screening like Patricia Carney, Ph.D., codirector of the Cancer Control Program