
Truth in Numbers

A husband-and-wife team takes on medical misinformation.

By Cathy Shufro

On a large screen at the front of the auditorium, an advertisement shows a 14-year-old girl with rosy cheeks and a pink bow tied around her ponytail. Below her portrait, the girl has written in colorful block letters, "It would never happen to me. I've got bigger things to worry about, like homework, friends, and all the cute upperclassmen." The bottom of the ad reads, "Rachel Kramer, 14, the day before she was diagnosed with thyroid cancer."

Cathy Shufro is a writer and editor based in New Haven, Conn., and a spring 2012 fellow of the International Reporting Project. She can be reached at cathy.shufro@yale.edu.

It's a punch to the gut. So why have the journalists in this Dartmouth College auditorium exploded into laughter?

Blame it on Geisel professors Lisa Schwartz, M.D., and Steven Woloshin, M.D. Every year, from the first hour of the annual Medicine in the Media course to the last, they promote a few central ideas: Don't trust your gut. Look at the numbers. Scrutinize the studies. Cultivate skepticism.

So it's not that the reporters are heartless. They're just unconvinced that Rachel Kramer's case—unsettling as it is—justifies a scary ad. And they're right: according to the National Cancer Institute (NCI), the chances that a 14-year-old white female in the United States will develop thyroid cancer before her 15th birthday are 2 in 100,000 (or 0.002%). The print at the bottom of the ad reads "Confidence kills," suggesting that Rachel Kramer's life hangs in the balance. Actually, the likelihood that a girl in this demographic group will die at the age of 14 from thyroid cancer is much less than one in a million. So if a teenage girl's parents are casting about for a risk to act upon, they might want to keep their daughter out of a car driven by one of those cute upperclassmen.

Schwartz and Woloshin are spending this summer day in the gloom of an auditorium at Dartmouth to help an international group of journalists recognize what's worth reporting and how to report it well. Lectures, panel discussions, and guest speakers during the three-day course keep the journalists busy from early morning to late evening. It's a lot like boot camp. (In fact, Woloshin and Schwartz teach similar skills to journalists at the Knight Foundation's annual "Medical Evidence Boot Camp" at the Massachusetts Institute of Technology.)

The journalists—reporters and editors from newspapers, websites, magazines, radio, and television—learn how to evaluate research for themselves rather than depending on press releases. They use worksheets to figure out whether claims about a study that appear in a press release or journal article are actually borne out by the numbers. They learn about disease-mongering by drug companies and by screening enthusiasts that turns healthy people into patients. They consider the merits of not reporting on weak or very preliminary findings.

Last summer about 250 journalists competed for 50 spots in the course. "That's impressive, given what's happening in journalism," says Schwartz. During the nine years since Medicine in the Media was initiated, 500 journalists have attended the course, which is sponsored by the National Institutes of Health, the Center for Medicine in the Media at Geisel, and the White River Junction Veterans Affairs Medical Center in Vermont.

Training journalists is just one aspect of an ambitious project that has occupied Woloshin and Schwartz for 17 years: improving the communication of benefits and risks in medicine.

"We want doctors, the public, and policymakers to know what they can and cannot get from various medications, treatments, and interventions, so people can make wise decisions," says Schwartz.

Both patients and doctors urgently need this information, says Sir Iain Chalmers, M.D., a British health services researcher who was knighted for his work in the field. Chalmers says that the influence of Schwartz and Woloshin's work has been so significant that "it's difficult to exaggerate."

The easy-to-understand nutrition labels on the side of cereal boxes inspired Lisa Schwartz and Steven Woloshin to find a better way to communicate vital information about the benefits and risks of medications.

Chalmers helped to establish the nonprofit Cochrane Collaboration, a large network of contributors from more than 100 countries that evaluates evidence about the effects of health-care interventions, and he has worked closely with Schwartz and Woloshin. "Like them," says Chalmers, "I'm very concerned about the poor quality of information available to the patients and indeed to doctors and other health professionals. I would say that Steve and Lisa have a worldwide reputation for having done more to basically address that problem than anyone else who is an actual practicing doctor."

For Woloshin and Schwartz, that work has included frequent talks to physicians and researchers and consultations with government agencies. To reach larger audiences, they have written newspaper columns and op-ed essays and a book for general readers, Know Your Chances: Understanding Health Statistics, co-written with Geisel colleague H. Gilbert Welch, M.D.

Part of Woloshin and Schwartz's evidence comes from their own randomized studies of how medical information is understood, misunderstood, distorted, and accurately communicated. They even did a randomized study of Know Your Chances to test whether an early version of their book helped readers understand statistics—perhaps the first randomized trial of a book. Their study showed that the book did, indeed, help readers answer questions about health data.

Fortunately for any journalist attending Medicine in the Media who finds statistics daunting, the curriculum is deeply informed by Woloshin's lifelong study of the Marx Brothers' oeuvre and by his early immersion in the escapades of Rocky and Bullwinkle. The opening slide for the course shows Virgil leading Dante into hell, with the inscription, "Abandon hope all who enter here." Actually, Woloshin tells the reporters, they needn't abandon hope. "All we ask you to abandon is hype." He adds a characteristic aside: "Feel free to laugh if anything I say is even remotely funny."

Before Lisa Schwartz and Steven Woloshin got interested in communicating risk, they got interested in each other. That was in 1990, when both were residents in internal medicine assigned to Manhattan's Bellevue Hospital. "We met and we have been inseparable since," says Woloshin.

As residents, they'd planned on doing primarily clinical work, not research. But then, while working at two city-run clinics for low-income patients—one in Manhattan's Chinatown, the other on the Lower East Side—they stumbled upon what they saw as bad policy. Nearly all their patients were recent immigrants from China and the Caribbean, and few spoke much English. Yet although the clinics underwrote all sorts of high-tech diagnostics, they employed no interpreters.

"You had to grab somebody from the waiting room, or the janitor, or the patient's child," Woloshin recalls. "It was a terrible situation." They became interested in understanding policies like the ones that authorized payments for x-rays but not for interpreters.

Woloshin and Schwartz married in 1992 and, in 1994, moved to Hanover for a two-year research fellowship. They've been at Dartmouth and the VA Medical Center ever since. At Dartmouth, Schwartz says, "We were challenged in a lot of things we had bought into during training," such as "screening is always good . . . early diagnosis is always better; the more medicine you give the patient the better." Back in New York City, they had thought that "the more screening tests you ordered for people, the more you could show how much you cared about them," Schwartz says. "We've become a lot more skeptical."

That skepticism made them increasingly uneasy about the way health information was communicated. "Every time we'd pick up the newspaper, we'd read something that would get us really mad," Schwartz recalls.

In 1999, they got mad at the U.S. Postal Service when they read about the imminent unveiling of a first-class stamp that would read, "Prostate Cancer Awareness: Annual checkups and tests." Their reaction, as Schwartz recalls it: "How was it that the Postal Service was making screening recommendations?"

They answered that question in an article in the New England Journal of Medicine that criticized the use of postage stamps as vehicles for public health advocacy. Although they conceded that prostate cancer diagnoses had increased in recent years, they noted that most experts agreed that the increase derived primarily from more screening, not more disease. Moreover, they argued, the stamp promoted screening "despite the lack of evidence of a benefit." Today, many former advocates of routine prostate-specific antigen (PSA) screening have come around to their way of thinking. The U.S. Preventive Services Task Force, for example, has concluded that there is not enough evidence to recommend routine PSA tests.

The problem with cancer screening, argue Schwartz and Woloshin, is that even when cells look cancerous under a microscope, pathologists can't always distinguish cells that will proliferate from those that will not. If the cancer would never have multiplied and spread further, says Schwartz, "the only thing the treatment can do for you is harm you."

But Schwartz and Woloshin have found that few Americans share their misgivings. For a study published in the Journal of the American Medical Association (JAMA) in 2004, they interviewed 500 representative adults (women 40 and older and men 50 and older) about their attitudes toward screening. They reported that 87% believed that routine cancer screening is almost always a good idea. Nearly three out of four (74%) believed that finding a cancer early would save a person's life most or all of the time. Two-thirds said they'd want to be tested for cancer even if they could do nothing to treat it.

They also asked whether people would want one of the total-body CT scans that were being "directly and aggressively marketed to consumers." Of those interviewed, 73% said that given a choice between receiving $1,000 in cash and getting a total-body CT scan, they'd choose the scan—even though, as the researchers noted, "There are no data to support the benefit (or even safety) of total-body CT screening."

"We were shocked," Schwartz recalls.

Over the years, Woloshin and Schwartz have come to realize that the blame for exaggerated health claims and misinformation doesn't lie solely with journalists.

"We've learned that the journalists' sources are often a problem," says Woloshin. For example, in a paper published in JAMA in 2002, they looked at press releases from medical journals—perhaps the most direct way that journals communicate with the media. They found that over one-third of the press releases failed to quantify results, and over three-quarters failed to mention any study limitations. More recently, they looked at press releases from academic medical centers (in an article published in Annals of Internal Medicine in 2009). They found not only the same problems but also excessive promotion of preliminary findings based on animal or lab studies with uncertain relevance to human health.

Training journalists is beginning to have an effect, says Barnett "Barry" Kramer, M.D., the director of the NCI's Division of Cancer Prevention. "Before Medicine in the Media, the reporting on cancer screening tests and medical screening tests was almost uniformly positive," he says. "It was presented as very simple, that any test that picks up cancer earlier must have a benefit, and it ignored the very real possibility of overdiagnosis." Kramer helped launch the Medicine in the Media course in 2003, and each summer he leads a session on the facts behind cancer screening.

"We're going to send Barry [Kramer] to every newsroom in the country, so every editor and reporter will wake up in a cold sweat worrying they have overbilled screening," joked Tami Dennis, who took the course in 2009 and returned last summer for a panel discussion of life after Medicine in the Media. As the health and science editor of the Los Angeles Times, Dennis said that she came away from Medicine in Media recognizing that "there's a lot of rubbish out there." When competitors give glowing coverage to bad science, she still feels obligated to have her staff cover the story, too—but only minimally. "We get in and we get out," said Dennis, now vice president of health content for the Tribune Company.

Fellow panelist Natasha Singer of the New York Times said, "The hardest thing is to go up against conventional wisdom." For instance, she once reported that no reliable research had substantiated rumors of the notorious "freshman 15" weight gain among first-year college students. She calls the freshman 15 "the Loch Ness monster of campus life." ("We've heard about it. Some people have spotted it.") Singer has also reported that studies linking cancer to hair dye have shown either small effects or none. "It's sexier to have a headline that says 'Hair dye kills!'" she said. "It's a much better headline than 'You're fine.'"

Overall, said Dennis, "What the course did was to give some spine to my inherent skepticism. I tend not to believe anybody and to doubt everything."

For Woloshin and Schwartz, bolstering the skepticism of reporters and editors is only half their battle: they want patients to resist the hype, too. They've both noticed that often when patients ask about a drug, "The question is not 'Does it work?' but 'How can I get it?'"

Where do patients get this uncritical confidence in drugs? For starters, drug companies spent more than $4 billion on direct-to-consumer advertisements in 2010—ads that contain "some lies, and certainly a lot of exaggeration," according to Woloshin. He and Schwartz use the story of the painkiller Vioxx to illustrate how marketing can make a bestseller out of a drug. The drug company Merck won approval for Vioxx in 1999. By 2003, when worldwide sales of Vioxx reached $2.5 billion, Merck was spending $500 million to advertise the drug. But in 2004, a large study showed that the drug increased the risk of heart attacks and strokes, and Merck took Vioxx off the market.

But Woloshin and Schwartz suspected that drug ads weren't the only problem. There's also a lack of access to relevant information about drugs. And they've realized that it's not just patients who have trouble finding this information; doctors face barriers, too. Even the Physicians' Desk Reference, often called the doctor's bible, provides incomplete information. Woloshin and Schwartz explain that the reference book is simply a compilation of the FDA-approved drug labels that are inserted in every medication package and that these inserts are written by the drug companies themselves, then negotiated with the FDA.

And, they note, those inserts often fail to make important information accessible. In a 2009 article in the New England Journal of Medicine (NEJM), Woloshin and Schwartz used the popular insomnia drug Lunesta as a case study to examine the usefulness of drug inserts. In 2005 and 2006, Lunesta had the "most-remembered" television drug ad of the year, according to a research firm that tracked how well viewers recalled commercials. By 2010, annual U.S. sales of Lunesta were nearly $800 million.

But can a patient who reads the package insert tell how well the drug works? Woloshin and Schwartz say no. The insert states: "Lunesta was superior to placebo on subjective measures of sleep latency [time it takes to fall asleep], total sleep time, and WASO [wake time after sleep onset]." In other words, Lunesta worked better than a sugar pill. But by how much? The insert doesn't say.

To find out, Woloshin and Schwartz tracked down the FDA's medical review and then searched for the numbers. They found them on page 306: In the longest and largest clinical trial, on average, participants who took Lunesta fell asleep 15 minutes faster and slept 37 minutes longer than did participants who took a placebo. But as Schwartz and Woloshin pointed out in their article, participants who took Lunesta still met the criteria for insomnia, and they "reported no clinically meaningful improvement in next-day alertness or functioning."

Effectiveness matters, Schwartz and Woloshin argue, because drugs can cause harm. The Lunesta label approved by the FDA in 2004 warns that the drug "may cause a severe allergic reaction" that could include hives, difficulty breathing, and swelling of the face, lips, tongue, or throat. Other "serious" potential side effects include aggression, agitation, changes in behavior, thoughts of hurting oneself, and hallucinations. A caution added to the insert in 2008 warns that patients taking Lunesta risk carrying out "complex behaviors" while they sleep, such as driving, making phone calls, and having sex. In short, Schwartz and Woloshin noted, information about drugs is available, but it's "practically inaccessible."

Over the past decade, they've been promoting a straightforward solution to this problem. They'd struggled for years to find a handy way to communicate complex information about the risks and benefits of drugs. Then, one morning in 2002, they had a breakfast-table epiphany. Their insight: a label on medications could serve the same function as does the nutrition information box on a package of Cocoa Krispies, Woloshin's childhood breakfast of choice.

The "drug facts box," as they called it, would provide a snapshot of how well a drug actually works and what risks it poses. Just as a nutrition facts box uses a standard, concise format to provide data (calories, grams of fat, and so on), a drug facts box could provide essential data about a drug (effectiveness compared to placebo, likelihood of serious side effects, and so on).

Schwartz and Woloshin tested the effectiveness of drug facts boxes in two randomized trials. In one trial, they used two heartburn drugs, giving each participant a print ad for both Amcid and Maxtor. Half of the participants were also given the brief summary written by the maker of the drug. The other half received drug facts boxes instead of the brief summary. The drug facts boxes led to a better understanding of the effectiveness of each drug.

They described the drug facts box to an FDA official and got an invitation to visit the agency's headquarters in Maryland. They arrived to find that 20 members of the staff had gathered to hear more about the idea. That was the first of many meetings. Support was not unanimous: One FDA official said he worried that if patients saw that a drug had only modest benefits, they might not want to take it. That would be a problem, he said.

"We thought that attitude was a problem," says Woloshin.

The Robert Wood Johnson Foundation considered the drug facts box so promising that it gave the pair a Pioneer Portfolio grant, a type of funding reserved for what the foundation calls "ideas with the potential for exponential change." So far, Schwartz and Woloshin have conducted two randomized trials with 450 subjects in all to test whether a drug facts box helps ordinary people to understand the differences between two competing drugs. In both studies, participants saw actual ads for two real—and competing—medications. Half the participants also received a brief summary of the FDA-sanctioned drug label—the summary required in drug ads. The other half got a one-page drug facts box instead of the brief summary on the drug label.

The drug facts box proved to be far more informative. For instance, in the first study, patients looked at ads for two heartburn medicines. Among participants given drug facts boxes for the medications, 70% responded correctly that one of the two drugs was much more effective than the other. In contrast, just 8% of the control group got that right. The study also showed that those with the drug facts box better understood side effects and were less likely to overestimate a person's baseline risk of the disease targeted by the drug.

In 2009, the FDA's Risk Communication Advisory Committee proposed that the drug facts box become law. In 2010, the health reform law stipulated that the federal Department of Health and Human Services (HHS) decide within a year whether to adopt the drug facts box.

"We were definitely on a high then," recalls Schwartz. But when the year was up, in March 2011, HHS announced it would take at least another three years to deliberate on the matter.

Woloshin and Schwartz responded with an opinion piece in the New York Times. In "Thinking Inside the Box," they called the decision to postpone action on the drug facts box "a strange position, given the body of published peer-reviewed research showing that drug boxes improve consumer decision making."

They don't plan on waiting three years to see how things turn out.

"We've tried to rally some of the forces to push the FDA," says Woloshin. "There are ways to do it without going through Congress." For instance, the FDA could require its reviewers to write drug facts boxes as an executive summary of their review, which would then be available to doctors and patients.

"If you can do that for a product like Cocoa Krispies, why can't you do that for a product like Lunesta or Lipitor, where the stakes are so high?" Woloshin asks.

British health researcher Chalmers shares his colleagues' frustration. Chalmers says that when ordinary people hear of the proposal to provide doctors and the public with basic facts about drugs, "They ask 'Why on earth hasn't this been adopted ages ago?' It's so obviously the right thing to do."

Schwartz and Woloshin now have a second grant from the Robert Wood Johnson Foundation. Its purpose is to find a "parallel track" for moving the drug facts box forward.

"I think everybody knows it's really important," says Schwartz.

Gary Schwitzer, an expert on health-care communication, is more emphatic. "If we don't improve the public understanding and the public dialogue about the tradeoffs of harms and benefits [of drugs], I don't think we have a chance at meaningful health-care reform in this country," he says.

Schwitzer first met Schwartz and Woloshin in the 1990s, when he worked at the Foundation for Informed Medical Decision Making, which was then based at Dartmouth. He now publishes HealthNewsReview.org, a website that evaluates news stories about medical treatments, tests, products, and procedures.

Schwitzer says he feels "deep admiration" for Schwartz and Woloshin. "They are two of my heroes," he says.

For their part, Schwartz and Woloshin joke that they work so closely together that they've become "a brand." They didn't plan it that way. "When we came here, we didn't have this vision of a joint career," says Woloshin. "It just worked out that no matter what I was doing, or what Lisa was doing, the other one thought it was really, really interesting and had all kinds of useful things to say."

When the two work on a paper together, they exchange drafts so often that by the time it's ready for publication it's not clear who the lead author should be. "There's so much back and forth—" Schwartz begins, and then Woloshin completes the thought: "we don't know who wrote what." They take turns listing their names as first authors, and they add a note saying that they have contributed equally. Schwartz says they realized early on that "if we didn't acknowledge that collaboration . . . that was going to be dangerous, because it was going to create competition."

Their partnership extends beyond their research. They teach a course together on survey research methods, they are both part of the Outcomes Group at the VA Medical Center, and they both see patients at the VA a half day each week. And, of course, they collaborate on raising their two children, 14-year-old Emma and 11-year-old Eli.

"It's not that we agree all the time; we're always disagreeing," says Woloshin. "We work really well together, because we don't see things the same way, but we're able to negotiate and move things forward. We both are committed to mutual success, so a lot of the ego stuff has been stripped away, and I think that's why we've been successful.

"We're just having a lot of fun now," he adds. "It's just fabulous."


Questions to ask about health-related messages

The questions below can help you understand claims about the benefits of medical tests and treatments, determine whether the claims apply to you, and decide whether you can believe the claims.

WHAT IS THE BENEFIT?
Ask what outcome is affected by an intervention and think about how much you care about that outcome. Does the message discuss a patient outcome? That is, would you experience the outcome directly, as with symptoms of a disease or even death? Or is the message about a surrogate outcome, such as a blood test or an x-ray result? Be most skeptical of interventions that have been shown to improve only surrogate outcomes, because changes in surrogate outcomes do not always translate into feeling better or living longer. For example, there are drugs that can lower cholesterol levels but do not change—or may even increase—the risk of dying from a heart attack.

HOW BIG IS THE BENEFIT?
Find out your chance of experiencing the outcome if you do not take the action discussed in the message—such as going on a medication or changing your lifestyle—and compare that to your chance of experiencing the outcome if you do take the action. In other words, know your starting risk and your modified risk. This is especially important when you hear something like, "Drug X lowers your risk by 40%." Always ask, "Lower than what?" Unless you know your starting risk (the "lower than what" part), the message does not really tell you anything. It's like buying an item on sale in a store—you have to know the original price to know how significant the sale is. Saving 40% on a flat-screen TV is a lot more significant than saving 40% on a pack of gum.
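
The "lower than what?" point is simple arithmetic, and a short sketch makes it concrete. The function and the baseline numbers below are hypothetical illustrations, not figures from the book: the same "40% lower risk" claim translates into very different absolute benefits depending on the starting risk.

```python
def absolute_risk_reduction(baseline_risk, relative_reduction):
    """Absolute drop in risk, given a starting (baseline) risk and a
    relative risk reduction, both expressed as fractions."""
    modified_risk = baseline_risk * (1 - relative_reduction)
    return baseline_risk - modified_risk

# The same "40% lower" claim against two hypothetical baselines:
high = absolute_risk_reduction(0.10, 0.40)    # starting risk 10%
low  = absolute_risk_reduction(0.0001, 0.40)  # starting risk 0.01%

print(f"High baseline: absolute reduction = {high:.4%}")
print(f"Low baseline:  absolute reduction = {low:.6%}")
```

With a 10% starting risk, the 40% relative reduction spares 4 people per 100; with a 0.01% starting risk, it spares 4 people per 100,000. That is the sale-price analogy in numbers.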

DOES THE BENEFIT REASONABLY APPLY TO ME?
Learn whether the health message is based on studies of people like you (people of your age and sex and whose health is similar to yours). The more similar you are to the participants in the studies, the more likely you are to face the same starting risk and experience the same benefit.

WHAT ARE THE DOWNSIDES THAT COME WITH THE BENEFIT?
Make sure you understand the downsides of the intervention and think about how much they matter to you. Does the intervention have any life-threatening side effects? What are the important symptom side effects? And don't forget to take into account the inconvenience and cost of the intervention.

IS THE BENEFIT WORTH THE DOWNSIDES?
Look at the benefits and downsides side by side. The more compelling the benefit is—a big change in an outcome you really care about—the more downsides you might be willing to tolerate. But a small change in a surrogate outcome may not be worth even a small sacrifice.

WHAT—AND WHO—IS BEHIND THE NUMBERS?
Find out where the information used in the message comes from. Give the most serious consideration to the findings of large, publicly funded, long-term randomized trials that measure an important patient outcome and whose results are published in a peer-reviewed medical journal. Be very skeptical of the findings of small, industry-funded, uncontrolled, or observational studies that measure surrogate outcomes and whose preliminary results are presented at the meeting of a medical or scientific association. You should also be very skeptical of short-term randomized trials (such as many studies of new drugs, which are often conducted for only six months or less). These studies may not include enough participants or last long enough to pick up important life-threatening side effects or even to determine the long-term benefit.

These questions are adapted from Know Your Chances: Understanding Health Statistics, by Lisa Schwartz, Steven Woloshin, and H. Gilbert Welch (University of California Press, 2008).

