What Is Evidence-Based Medicine?
In medicine, the term “evidence-based” causes more arguments than you might expect. And that’s quite apart from the recent political controversy over why certain words were avoided in Centers for Disease Control and Prevention budget documents. The arguments don’t divide along predictable partisan lines, either.

The mission of “evidence-based medicine” is surprisingly recent. Before its arrival, much of medicine was based on clinical experience. Doctors tried to figure out what worked by trial and error, and they passed their knowledge along to those who trained under them.

Many physicians, myself included, were first introduced to evidence-based medicine through David Sackett’s handbook, first published in 1997. The book taught me how to use test characteristics, like sensitivity and specificity, to interpret medical tests. It taught me how to understand absolute risk versus relative risk. It taught me the proper ways to use statistics in diagnosis and treatment, and in weighing benefits and harms. It also firmly established in my mind the importance of randomized controlled trials, and the great potential for meta-analyses, which pool individual trials for greater impact. This influence is apparent in what I write for The Upshot.
But evidence-based medicine is often described quite differently. Many of its supporters say that using evidence-based medicine can address the problems of cost, quality and access that bedevil the health care system. If we all agree upon best practices — based on data and research — we can reduce unnecessary care, save money and steer patients into pathways that yield better results.

Critics of evidence-based medicine, many of them from within the practice of medicine, point to the weak evidence behind many guidelines. Some believe that medicine is more of an “art” than a “science” and that reducing the practice to a cookbook approach removes focus from the individual patient. Some of these critics (as well as many readers who comment on my articles) worry that guidelines line the pockets of pharmaceutical companies and radiologists by demanding more drugs and more scans. Others worry that evidence-based medicine makes it harder to get insurance companies to pay for needed care. Insurance companies, for their part, worry that evidence-based recommendations put them on the hook for treatment with minimal proven value.

Everyone is a bit right here, and everyone is a bit wrong. This battle isn’t new; it has been going on for some time. It’s the old guard versus the new. It’s the patient versus the system. It’s freedom versus rationing. It’s even the individual physician versus the proclamations of a specialized elite. Because of the tensions in that last conflict, this debate has become somewhat political.
The benefits of evidence-based medicine, when properly applied, are obvious. We can use test characteristics and results to make better diagnoses. We can use evidence from treatments to help people make better choices once diagnoses are made. We can devise research to give us the information we lack to improve lives. And, when we have enough studies available, we can look at them together to make widespread recommendations with more confidence than we otherwise could.

When evidence-based medicine is not properly applied, though, it not only undermines its reasons for existence, but it also can lead to harm. Guidelines — and there are many — are often promoted as “evidence-based” even though they rely on “evidence” unsuited to their application. Sometimes, these guidelines are used by vested interests to advance an agenda or control providers.

Further, too often we treat all evidence as equivalent. I’ve lost track of the number of times I’ve been told that “research” proves I’m wrong. Not all research is the same. A hierarchy of quality exists, and we have to be sure not to overreach. There is a difference between statistical significance and clinical significance. Get a large enough cohort together, and you will achieve the former. That by itself does not ensure that the result achieves clinical significance and should alter clinical practice.
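The point about cohort size and statistical significance can be shown with a small numerical sketch. The scenario is invented for illustration: a drug that lowers systolic blood pressure by a clinically trivial 0.1 mmHg (against a typical standard deviation of 15 mmHg). A two-sample z-test on that effect is nowhere near significant with 100 patients per arm, but becomes highly “significant” with a million per arm — without the effect mattering any more to patients.

```python
import math

def two_sample_z_p(mean_diff, sd, n_per_arm):
    """Two-sided p-value for a two-sample z-test,
    assuming equal standard deviations and equal group sizes."""
    se = sd * math.sqrt(2.0 / n_per_arm)         # standard error of the difference
    z = mean_diff / se
    # 2 * (1 - Phi(|z|)) via the complementary error function
    return math.erfc(abs(z) / math.sqrt(2.0))

# Invented example: a 0.1 mmHg blood-pressure reduction (SD 15 mmHg),
# far too small to matter clinically.
p_small = two_sample_z_p(0.1, 15.0, 100)         # 100 patients per arm
p_large = two_sample_z_p(0.1, 15.0, 1_000_000)   # one million per arm
print(f"n=100 per arm:       p = {p_small:.3f}")
print(f"n=1,000,000 per arm: p = {p_large:.2e}")
```

With n = 100 the p-value is near 1; with n = 1,000,000 it drops below 0.0001. The effect didn’t change — only the sample size did, which is exactly why statistical significance alone should not alter clinical practice.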
Credit: Aaron E. Carroll for The New York Times, 27 December 2017.