High cost is the high-profile villain of American health care, and fall is the season when it sashays onto center stage. It is the time of year when employers and Medicare make annual announcements of the extra bite rising health insurance premiums will take out of next year’s paycheck or retirement income, and, this fall, it is also presidential election season. That means a spotlight on the urgent need to corral high medical costs as part of national health care reform.
Yet if the cost of insurance is an obvious concern, there is another fundamental problem in American medicine at least as disturbing in its implications for both wallet and well-being. That problem centers on what happens to patients once they are inside the doctor’s office or hospital.
One way to highlight what is at stake is with a number: 55 percent. That figure represents the chances you will receive the treatment the medical literature says is best for your illness. Put differently, it’s roughly the flip of a coin — heads, yes; tails, no. The odds for common individual conditions are hardly more encouraging. Hip fracture? Patients receive what is known as evidence-based care a dismal 23 percent of the time, according to a study in The New England Journal of Medicine. Breast cancer treatment? The disease’s high profile helped it finish at the top of the list, and even there, care was evidence-based all of three-quarters of the time (76 percent).
An earlier comprehensive study of evidence-based care that used a slightly different methodology was only slightly more reassuring. While the average rate of evidence-based treatment was higher (60 percent for chronic care and 70 percent for acute care), the authors calculated that patients received “contraindicated” therapy (that’s medicalese for “Don’t do this!”) 20 percent of the time for chronic conditions and 30 percent of the time for acute ones.
If you are tempted to think, “Not my doctor,” think again. Providers in low-income areas perform more poorly on some quality measures, but the broader research shows socioeconomic status provides no sanctuary. The widespread failure to do the right thing for the right patient at the right time is egalitarian in its impact.
By comparison to health care, America’s airlines are paragons of reliable performance. In 2007, one of the worst years on record, the average airline on-time rate was about 74 percent, while the individual airline that was the worst of the bad had an on-time record of 65 percent. The rate of “contraindicated” flights (flying to the wrong city, for example) was negligible, and, perhaps most significantly, the safety record was vastly better.
Experts believe that a stunning 20 to 40 percent of the $2.4 trillion America spends on health care in 2008 will be wasted on misuse (including harmful and fatal errors), overuse (care that’s unnecessary) or underuse (effective care that’s not provided). If you take a midrange figure — let’s say 30 percent — you end up with $720 billion in waste. That’s enough to pay the cumulative costs of the Iraq war (about $560 billion by mid-September 2008) and still have enough cash left over to pay for universal health care and the entire federal education budget. If you simply sent out a rebate check, it would come to some $2,100 for every man, woman and child in the country.
And that’s just one year of savings.
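The back-of-envelope arithmetic above is easy to check. The sketch below uses the article’s own 2008 estimates for spending, the waste rate and the Iraq war’s cumulative cost; it makes no other assumptions:

```python
# Back-of-envelope check of the waste estimate.
# All figures are the article's 2008 estimates, in U.S. dollars.
total_spending = 2.4e12   # annual U.S. health care spending
waste_rate = 0.30         # midrange of the 20-40 percent waste estimate
iraq_war_cost = 560e9     # cumulative cost through mid-September 2008

savings = total_spending * waste_rate
leftover = savings - iraq_war_cost

print(f"Estimated annual waste: ${savings / 1e9:.0f} billion")    # $720 billion
print(f"Left after Iraq war costs: ${leftover / 1e9:.0f} billion")  # $160 billion
```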
The failure to follow best practice carries a price tag in human lives, too, and it is equally enormous. Providing appropriate, effective and safe care where we know how to do it — no “medical mysteries” included — could annually prevent the deaths of hundreds of thousands of Americans in and out of the hospital and millions of injuries.
If the practice of “evidence-based medicine” has so many benefits, and they’re so well known among researchers, why isn’t it an urgent priority of medical and political leaders alike? The answer casts a discomfiting light on how patients think the practice of medicine works, how medical decisions are made in real life and what it will take to bridge that divide.
Evidence-based medicine seeks “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.” At first glance, that might seem to describe what doctors already do. In reality, there is a very substantial difference between explicit and implicit use of evidence and between systematic and episodic attempts to do so.
To give just a few examples, one study showed that the medications a physician prescribed to treat high blood pressure were determined primarily by which medications the doctor was prescribing when he or she began practice. Another study showed that the average innovation in the medical literature can take 17 years to make it into common practice.
A third study, on heart attacks, found that something as simple as giving out aspirin and taking patients off a class of drugs shown to be more harmful than helpful was slow to be adopted despite dramatic and highly publicized evidence that lives could be saved. Three years after the studies first appeared, more than a quarter of heart attack patients still were not being given aspirin, and a third continued to receive the drugs shown to be dangerous.
Although one explanation for this failure to adopt evidence-based practices is the large number of studies appearing in the medical literature, the deeper cause is the way physicians are trained and the culture in which they practice.
Patients often assume that “medical science, medical research and medical practice are identical or, worse, hierarchically arranged,” notes medical sociologist Ann Lennarson Greer. In actuality, she notes, “medical practice does operate under the umbrella of medical science, but medical practitioners do not take practice directives from medical researchers.”
Indeed, practicing physicians may even suspect that the world of controlled studies and careful patient categorization has little relevance to the messy business of everyday care. Eliot Freidson, another prominent medical sociologist, has shown that physicians are trained to work as a consulting profession, not a scholarly one. The professional model of a practicing physician “encourage(s) individual deviation from codified knowledge on the basis of personal, first-hand observation of concrete cases,” Freidson writes. “This deviation is called ‘judgment’ or even ‘wisdom.’… Since it is intimately bound up with the personal life of the knower ... it is no wonder it has a dogmatic edge to it, resisting contradiction by embarrassing facts and contorting itself to reconcile contradictions.”
Or, as Dr. Sherwin Nuland, a Yale School of Medicine professor and respected medical author, confessed in The New York Times, “Physicians flip-flop dramatically, and with unabashed confidence.”
Unfortunately, many patients and doctors alike perceive something very different. Rather than “flip-flops,” they see savvy clinicians responding to clinical uncertainty and individual patient differences. This “art of medicine” explanation certainly has some validity, but it is also a vastly overused alibi for care that deviates from any sort of evidence base. One of my favorite examples comes from a survey of family practitioners in Washington state in the early 1990s. Asked about treating a simple urinary tract infection, one of the most common complaints of women, 82 physicians produced an extraordinary 137 different strategies!
The idiosyncratic rather than scientific nature of medical decision making has been richly documented in the medical practice variation studies pioneered by Dartmouth Medical School’s Dr. John Wennberg, beginning in the early 1970s. Wennberg showed that doctors practicing in the same locale with similar patients varied widely in the treatments they recommended.
Thirty-five years after Wennberg’s first article appeared, the methodology has gotten more sophisticated, but the basic conclusion remains sadly the same: Medical practice varies wildly for reasons having little to do with the patient. Wennberg is still publishing variation studies, but he has been joined by a new generation that includes his son, Dr. David Wennberg, and Dartmouth colleagues such as Dr. Elliott Fisher. Using measures of cost and outcome, they have directly confronted the widespread belief that “more” care is better.
Americans may like to complain about the cost of care, but when we go to our own doctor we are quick to demand a hefty serving of specialist consultations topped off by a good-sized dollop of high technology. However, when Dartmouth researchers examined the pattern of Medicare spending in different regions of the country, they found that patients in places where spending is very high received the same or even lower quality of care (as measured by evidence-based treatments), compared with lower-spending regions. Not only that, the high spenders had the same or even worse health outcomes and satisfaction than the low spenders. Beyond the clinical ramifications, there were financial ones. Had the use of medical specialists, diagnostic tests, hospitalization and intensive care units in the high-spending regions matched the rate in the low-spending ones, Medicare spending would have been slashed 30 percent.
That same theme of “more may be worse” continued when researchers analyzed the treatment of hip fracture, colorectal cancer and heart attack. Higher-spending Medicare regions showed a generally higher risk of care-associated complications. Might that be because “more” care also means more opportunity for unnecessary procedures, unneeded drugs (such as discredited cardiac medications) and unintended harm?
The idea that “less” care is less good was even more pointedly challenged by a separate study involving some of the most famous hospitals in America. The study started with a standardized outcome (death from a serious chronic disease) and then examined cost and resource use for Medicare beneficiaries during their last six months of life. Once again, expensive and unexplained variation reared its head. At one extreme, patients at New York-Presbyterian Hospital spent an average of 23.9 days hospitalized during those six months before dying. At the other extreme, the average hospitalization was 12.9 days at the Mayo Clinic-owned St. Mary’s Hospital in Minnesota. Within California alone, the pre-death length of stay in the hospital for similar patients ranged from an average of 19.2 days at the UCLA Medical Center down to 13.2 days at UCSF Medical Center. Within each hospital there was further variation in the use of expensive services such as ICUs.
This study of stellar hospitals highlights one big problem in addressing the lack of evidence-based practice: There are no easy villains. Inappropriate, ineffective and unsafe care is routinely provided by well-trained, hardworking and caring physicians — in other words, by your doctor and mine — all trying to do the right thing.
Unfortunately, physicians have few feedback mechanisms to tell them whether their decisions are correct. The right treatment does not always lead to a good outcome, and the wrong treatment does not always lead to a poor outcome. Even assessing process measures — did you actually give the right treatment? — has often been resisted. So, too, has the use of information technology to help pinpoint which studies in the medical literature can be applied to the care of which patients.
The premise that individual clinicians can be held accountable for applying medical evidence appropriately in each treatment decision represents a radical change in practice. After all, it was not until the early 20th century that, in the famous words of Harvard University biochemist Lawrence J. Henderson, “a random patient with a random disease, consulting a doctor chosen at random, had, for the first time in the history of mankind, a better than 50-50 chance of profiting from the encounter.”
The notion that physicians can systematically measure and manage what they do started making its way into medicine in the early 1980s, courtesy of the “continuous quality improvement” movement sweeping industry. At a time when U.S. automakers and other industrial companies were hemorrhaging market share to the Japanese, CQI showed that one big difference was the Japanese commitment to consistent high-quality work and to continuous process improvements. The Americans, by contrast, were engaged in a costly game of “make it, then fix it” that took a toll on both reputation and the bottom line.
Continuous quality improvement advocates looked at the wild variance in treating similar patients found by John Wennberg and others and called on doctors to become more consistent in giving the best possible care. Critics countered that they were unconvinced the similarity between auto making and medicine was real and, in any event, preserving individual clinician autonomy was more critical than any short-term cost savings. Pioneering proponents of CQI, such as the Institute for Healthcare Improvement, have had difficulty persuading fellow physicians that “systematization” is not a clinical straitjacket. So has everyone else. The core problem, once again, is the culture of medical practice.
Most physicians in this country are independent agents, free to admit their patients to whichever hospital they wish and answerable to no one other than loose-knit committees of peers. A hospital administrator who forgets that it is satisfied physicians who are responsible for filling his beds with paying patients may experience what a friend of mine calls a “CLE”: a career-limiting event. (The friend, a physician, was fired as a hospital chief executive officer after pushing his own medical staff to move on quality improvement too hard and too fast.)
But there is good news. The advent of the evidence-based medicine movement in the early 1990s placed systematization into a scientific and clinical context that has found better acceptance from doctors. Achieving the “conscientious, explicit and judicious use of current best evidence” requires systematization. But at the same time, evidence-based medicine, premised on work in the early 1970s by Scottish epidemiologist Archie Cochrane, explicitly acknowledges the importance of “integrating individual clinical expertise with the best available … research.”
This embrace of the “individual” has distanced evidence-based medicine from the whiff of the assembly line. EBM advocates emphasize that the doctor can override any practice guideline as long as the reason for and outcome of that decision are documented. Nonetheless, critics still characterize evidence-based medicine as “cookbook medicine.” Given the long reign of “eminence-based medicine” (important doctors each writing his or her own recipe), that’s no surprise.
By contrast, the new medical paradigm values what you do as much as what you know, where you went to school to learn it or the length of your curriculum vitae. The marriage of science and systems thinking is bringing unaccustomed accountability to a profession comfortable in its autonomy and genuinely certain that the old ways work best.
Much of the motivation for change is economic. Controlling what the doctor orders has become the focus of a concerted effort, bringing together large payers (the government, employers and insurers) and some medical leaders. The average doctor may want only to be left alone, but medical leaders increasingly understand that the alternative to evidence-based care is politically driven cost cutting. The high cost of health care is now called a threat to America’s basic economic security by such prominent individuals as the Federal Reserve chairman.
Leaders of corporate America and the medical community are learning to speak each other’s language. A prominent example is the Leapfrog Group, whose founders work for some of the nation’s biggest corporations but whose mission is to prod doctors to practice safer and more evidence-based care. Its first three “leaps”: a reduction in medication errors; more referrals of high-risk patients to hospitals where the evidence showed a likelihood of better results; and the use by hospitals of specially trained ICU specialists called intensivists. Leapfrog calculated a clinical and economic bottom line of more than 65,000 lives saved, more than 907,000 medication errors avoided and up to $41.5 billion less spent on care.
Meanwhile, the prestigious Institute of Medicine of the National Academy of Sciences has learned to talk like an economist. The institute estimated that there are 44,000 to 98,000 deaths each year from preventable errors in hospitals and that the total annual cost of all adverse events resulting in some sort of injury amounted to between $17 billion and $29 billion. This figure included direct health care costs as well as lost income, lost household production and disability costs.
JAMA, the Journal of the American Medical Association, printed an article last year by Harvard Business School professor Michael Porter and the University of Virginia’s Elizabeth Teisberg prescribing a new structure for health care. The authors called for a “value-based” system grounded in three principles. First, the goal is value for patients. Second, care delivery is organized around medical conditions and care cycles. And third, results are measured.
The words “value” (which links clinical outcomes to financial inputs), “organization” and “measurement” are not found in the Hippocratic Oath, but they do highlight a major barrier to change: incentives. The physician who stops writing unnecessary prescriptions risks alienating patients who judge competence and caring by the prescriptions in their hands. Keeping patients healthy risks an appointment book filled with revenue-free blank spots. And cutting back referrals to specialists, or perhaps chiding a colleague about a preventable medical mistake, risks alienating colleagues whose goodwill is central to survival in the closed medical ecosystem.
A steadily building wave of government and private initiatives to address these problems is now under way using such mechanisms as bundled payments among primary care physicians and specialists, “pay-for-performance” incentives for evidence-based clinical behavior and rewards for keeping patients healthy. The programs are still recent, however, and their effectiveness remains unknown.
On the political front, both presidential candidates have expressed support for quality improvement and payment reform, even if the details are sometimes fuzzy. Meanwhile, performance information on hospitals taken from publicly available data and, in some cases, information on individual physicians are becoming a staple of health care Web sites.
Yet sweeping economic and political incentives cannot work unless providers have the ability and willpower to focus on the prosaic details called “microsystems.” In 2002, Ascension Health launched an initiative to eliminate all preventable injuries and deaths in the 65-hospital Catholic system by July 2008. “All” has not been accomplished, but the organization has documented a steep drop in injuries through unrelenting attention to the little things. So, for example, 60 different types of bed frames were replaced with a few standardized models to help hospitals prevent pressure ulcers. Ascension, the nation’s largest Catholic system, calculates that more than 3,000 patient deaths have been avoided through its initiative.
On a smaller scale are the kind of efforts at Grand Rapids, Mich.-based Spectrum Health and a host of similar institutions. Examining the process steps associated with knee replacement surgery, the system found that its 21 orthopedic surgeons varied in the frequency with which they gave patients blood transfusions from 10 percent of the time all the way up to 83 percent of the time — unrelated to patient characteristics. The hospital began working with the surgeons on appropriateness criteria, and the transfusion rate plunged from 31 percent of all cases to 13.5 percent.
The savings at Spectrum weren’t enormous, but they added up to $135,700 in one year from one component of one surgery in one small Midwestern system. In addition, patients were less likely to experience a transfusion-related adverse reaction. There were other small steps, like reducing the rate of infections of cesarean-section wounds from a seemingly low 2 percent to a truly low 0.6 percent. The result: 23 fewer patients infected, $66,000 saved in one year. Think of it as a journey of hundreds of billions of dollars and millions of patient lives that begins with one dollar and one patient at a time.
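The Spectrum numbers also imply a sense of scale. The sketch below derives the annual cesarean-section volume and the cost per avoided infection from the figures the article reports; the derived quantities are inferences, not numbers Spectrum published:

```python
# Inferring scale from the article's cesarean-section figures.
# The delivery volume computed below is derived, not reported.
old_rate, new_rate = 0.020, 0.006   # wound-infection rate before and after
infections_avoided = 23
savings = 66_000                    # dollars saved in one year

# 23 fewer infections at a 1.4-percentage-point lower rate implies the volume:
deliveries = infections_avoided / (old_rate - new_rate)
cost_per_infection = savings / infections_avoided

print(f"Implied C-section volume: ~{deliveries:.0f} per year")
print(f"Savings per avoided infection: ~${cost_per_infection:,.0f}")
```

Roughly 1,600 deliveries and about $2,900 per avoided infection: small numbers individually, which is the point of the “one dollar and one patient at a time” framing.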
It is an apt analogy. For the quality of American medicine to be transformed, a lot of the little things must begin to change together — everywhere. Financial incentives, political pressure, reputational risk and reward, professional pride and the involvement of patients and families all will have a role to play before old habits give way to a new dedication to systematic measurement, management and decision support — as well as the preservation of caring.
The coming destruction of the old ways of doctoring is an unavoidable source of anxiety, but it should not be a source of despair. Patients and caregivers alike should celebrate better days ahead. Destruction often precedes renewal, and it is in the renewal of evidence-based care that the future of American medicine lies.