The NY Times Magazine
for Dec. 13th featured an article
that showed how “evidence-based medicine” can be complex, counterintuitive or ambiguous, and the author, a professor of math at Temple, used the recent mammogram testing brouhaha as an example.
The government task force, as you recall, advised that routine screening for asymptomatic women in their 40s was not warranted and that mammograms for women 50 or over should be given biennially rather than annually. The result was fury on the part of many, especially women!
But to understand the task force’s results and the brouhaha that followed, it takes both math and psychology. “Earlier and more frequent screening increases the likelihood of detecting a possibly fatal cancer” – right? Well, no, he says. First, because we don’t know the cumulative effects of all that radiation, and second, think of false positives.
Assume there is a screening test for a certain cancer that is 95% accurate; assume also that if someone doesn’t have the cancer, the test will nonetheless come back positive 1% of the time. Assume further that 0.5% (one out of 200 people) actually have this type of cancer. So if you’ve taken the test and your doctor somberly intones that you’ve tested positive, does that mean you’re likely to have the cancer?
And the math professor answers, “Surprisingly, no.”
And here’s the math: Suppose 100,000 screenings for this cancer are conducted. On average 500 of these 100,000 people (0.5%) will have cancer. And since 95% of these 500 people will test positive, we will have, on average, 475 true positive tests (.95 x 500). Of the 99,500 people without cancer, 1% will also test positive, adding 995 false positives, for a total of 1,470 positive tests (995 + 475 = 1,470). So of all the people who test positive, only 475 out of 1,470, or about 32%, actually have the cancer.
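The arithmetic above can be sketched in a few lines (the numbers are the article’s; the variable names are mine):

```python
# Screening arithmetic: what a positive test actually means.
screened = 100_000
prevalence = 0.005           # 0.5%: one in 200 actually has the cancer
sensitivity = 0.95           # the test catches 95% of real cancers
false_positive_rate = 0.01   # 1% of healthy people test positive anyway

with_cancer = screened * prevalence                      # 500 people
true_positives = with_cancer * sensitivity               # 475 tests
without_cancer = screened - with_cancer                  # 99,500 people
false_positives = without_cancer * false_positive_rate   # 995 tests

total_positives = true_positives + false_positives       # 1,470 tests
p_cancer_given_positive = true_positives / total_positives

print(f"{p_cancer_given_positive:.0%}")  # about 32%
```

Notice that the false positives (995) outnumber the true positives (475) simply because healthy people vastly outnumber sick ones; that is the whole trick of the example.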
But what about the psychology? The math doesn’t touch people’s perceptions. They still think more is better and that earlier screening leads to earlier detection.
The author called the math “trivial”—I thought it was fascinating. But here is an example of “evidence-based medicine” that is not at all “evident” to the general population.