
Is Cancer Screening Worth It?

Claims that screening saves lives are likely exaggerated by focusing too narrowly on saving patients from one particular disease while ignoring others.
(Photo: zlikovec/Shutterstock)

Cancer is a scary thing, so it's easy to think the smart choice is to get screened for different sorts of cancer on a regular basis. Campaigns promoting such screenings have been on the ropes lately, however, and today researchers add another reason to be skeptical: Claims that cancer checks save lives are likely overstated, a consequence of how those claims define saving a life.

Among the problems with cancer screenings are startlingly high false positive rates for some tests and potentially unpleasant complications for others. That's not to say that screenings are worthless—organ transplant patients, for example, probably should get screened regularly for skin cancer. But concerns about some of these tests led the American Cancer Society to recommend that women delay routine mammograms until age 45 and that men without symptoms do away with prostate cancer screenings altogether.

"As long as we are unsure of the mortality benefits of screening we cannot provide people with the information they need to make an informed choice. We must be honest about this uncertainty."

Yet screening advocates persist. In one particularly entertaining example, NBC's Today show hosts Matt Lauer and Al Roker underwent prostate cancer screenings on live television, giving Lauer's doctor, David Samadi, a chance to suggest such tests "would save a lot of men"—a year after the ACS updated its recommendations.

It's such "saving lives" claims that oncologist Vinay Prasad, journalist Jeanne Lenzer, and physician David Newman take issue with in a new analysis published in The BMJ. Such claims, they point out, are often based on estimates of "disease-specific mortality," which measures how likely a person is to die from one particular disease within a given time window. Using disease-specific mortality, the team argues, puts too much emphasis on how many people screening could save from a particular cancer—say, the number of lives mammograms could save from breast cancer.

The problem with quoting disease-specific mortality, the authors write, is that saving someone from a given cancer isn't the same as saving that person's life. Here's an example: Suppose you're diagnosed with breast cancer and start chemotherapy—almost always an unpleasant experience. Even if it saves you—and these days your chances aren't so bad—you might soon die of another cancer, heart disease, or even a traffic accident. What's more, the screening itself might lead indirectly to a lower quality of life or even an early death.

The point is, the value of screening and subsequent treatment isn't just a matter of saving you from one specific disease. Screening's real value, Prasad, Lenzer, and Newman argue, depends on how many extra years of life that screening buys you—and measuring that requires overall mortality, the rate of death from any cause, not just from the disease being screened for.
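To see why the two measures can diverge, consider a toy calculation. The numbers below are purely hypothetical (they are not from the BMJ analysis): two cohorts of 1,000 people, where screening averts one death from the targeted cancer but is offset by one extra death from other causes, such as treatment complications.

```python
# Illustrative sketch: disease-specific mortality can fall while
# overall (all-cause) mortality stays flat. All figures are invented.
cohort = 1000

# Unscreened group: deaths over the follow-up window
unscreened_cancer_deaths = 5    # deaths from the screened-for cancer
unscreened_other_deaths = 45    # deaths from all other causes

# Screened group: one cancer death averted, one extra death elsewhere
screened_cancer_deaths = 4
screened_other_deaths = 46

def rates(cancer_deaths, other_deaths, n=cohort):
    """Return (disease-specific, overall) mortality as fractions."""
    disease_specific = cancer_deaths / n
    overall = (cancer_deaths + other_deaths) / n
    return disease_specific, overall

ds_u, ov_u = rates(unscreened_cancer_deaths, unscreened_other_deaths)
ds_s, ov_s = rates(screened_cancer_deaths, screened_other_deaths)

print(f"Unscreened: disease-specific {ds_u:.1%}, overall {ov_u:.1%}")
print(f"Screened:   disease-specific {ds_s:.1%}, overall {ov_s:.1%}")
# Disease-specific mortality looks 20 percent lower with screening,
# yet overall mortality is identical: no net lives saved.
```

In this invented scenario, a campaign quoting disease-specific mortality could honestly claim screening cut cancer deaths by a fifth, even though the same number of people died either way—which is exactly the gap the authors say overall mortality data would expose.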

Prasad, Lenzer, and Newman don't suggest doing away with cancer screening—instead, they suggest doctors and their patients work together to decide whether screening is appropriate. "But as long as we are unsure of the mortality benefits of screening we cannot provide people with the information they need to make an informed choice," they write. "We must be honest about this uncertainty."


Quick Studies is an award-winning series that sheds light on new research and discoveries that change the way we look at the world.