ATD Blog

Making the Case for Evidence-Based Care

Wednesday, March 23, 2016

Good medical practice relies on the continuous learning and refinement of knowledge. Doing so enables a coherent rationalization of that knowledge, amassing positive and negative experiences from different diseases and treatments. Medical professionals keep the treatments that work, and discard the rest. This is evidence-based medicine.

Here’s how it works: if certain plant compounds seem to provide relief to a patient, they are used again, hopefully to the same positive effect. Likewise, if other compounds seem to have negative effects or produce adverse reactions, they fall out of routine use. Thus begins the incremental, inductive reasoning that forms the foundation of medicine.

But do medical professionals follow the same sort of evidentiary path when developing patient care? They should. In fact, the Institute of Medicine issued a report in 2003 recommending that health professions education programs include evidence-based care as one of five core competencies. It also established a goal that 90 percent of all patient-care decisions should be based on evidence by 2020. Let’s look at where we stand today.

Evolution of Evidence-Based Medicine 

In what is often considered the first controlled clinical trial, James Lind, a naval surgeon aboard the HMS Salisbury, was faced with treating a group of sailors suffering from scurvy. He split them into six groups of two, and gave each group a different treatment:

  • one quart of cider
  • 25 drops of elixir of vitriol (sulfuric acid)
  • six spoonfuls of vinegar
  • half a pint of seawater
  • two oranges and one lemon
  • spicy paste, plus a drink of barley water. 

In the citrus fruits “trial arm,” one sailor was fit for duty while the other had almost recovered. Although Lind’s conclusions weren’t exactly correct, they represented an entirely new way of approaching medicine. By following this method, competing modes of treatment could be directly compared head-to-head to see which option was superior. 

Of course, several decades ago there were no validated biomarkers, nor systematic ways to objectively gauge a treatment’s effect. A physician’s “optimism bias” was as likely to cloud his observation of an effective outcome as the placebo effect was to produce a positive response in the patient. The medical community needed to refine its concept of evidence and evidentiary strength, so that the more objective and repeatable the measure, the stronger its evidentiary value.

When “evidence-based medicine” became a buzzword about two decades ago (the term was coined by epidemiologists at McMaster University in 1988, but wasn’t popularized for a few more years) and began pervading clinical practice, it seemed to bring to the forefront a return to medicine’s scientific roots: evidence, not opinion.

A lot of the robustness of scientific evidence comes from large sample sizes, and it can take years for enough people to be exposed to a treatment for scientists and medical professionals to understand the overall outcomes. This is why post-marketing surveillance is necessary for medical treatments; in some cases, it turns up small rates of unanticipated adverse events. If these occur at a rate of 0.1 percent (and some occur at rates far lower than this), then out of 1,000 people trialed for a treatment, perhaps a single person will show an adverse effect.
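To make that arithmetic concrete, here is a minimal sketch in Python; the 0.1 percent rate and the population sizes are illustrative assumptions, not figures from any particular treatment:

```python
# Illustrative only: expected counts of a rare adverse event at different
# exposure scales. The 0.1 percent rate and the population sizes are
# assumptions chosen for the arithmetic, not data from any real trial.

def expected_adverse_events(rate: float, n_patients: int) -> float:
    """Expected number of adverse events if each patient independently
    experiences the event with probability `rate`."""
    return rate * n_patients

rate = 0.001  # 0.1 percent

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9,} patients -> ~{expected_adverse_events(rate, n):,.0f} expected adverse events")
```

The point is simply that expected counts scale linearly with exposure: an event too rare to register reliably in a trial becomes plainly countable once a treatment reaches a large population.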

But when a specific course of treatment is used on hundreds of thousands of patients over the next 10 years, there may be a few hundred instances of some adverse effect, and that needs to be explored. Worldwide, literally billions of patients receive treatment, and the field amasses hundreds of billions of patient-years’ worth of data to build an evidence base. All of which raises the question: Are healthcare practitioners taking advantage of evidence-based care?

The answer: not exactly. 

What Is the Current State of Evidence-Based Care? 

A large number of providers don’t use evidence-based practice on a regular basis, or even at all. In fact, the European Heart Journal reported that heart failure in high-risk patients was associated with poor compliance with evidence-based medicine (Galinier et al., 2007). Case in point: according to some indications, warfarin is still used when more advanced next-generation treatments are available and, in some cases, have shown greater patient benefit with lower risk.

In another example, the prostate-specific antigen (PSA) test for men has led to thousands of cases of unnecessary and disfiguring surgery. In fact, at one point, 43 men would have had unnecessary surgeries for every single case that had a positive benefit. This led the US Preventive Services Task Force (USPSTF) to recommend against PSA screening for all men in 2012, citing concerns that widespread screening identified individuals who did not require treatment and for whom treatment could have negative consequences. Following this recommendation, the rate of PSA testing in primary care settings dropped by 57 percent. In urology offices, however, the use of PSA tests decreased by only 4 percent, continuing to put a substantial population of men at risk from unnecessary testing.
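To put that ratio in absolute terms, here is a minimal sketch in Python; the 43:1 figure comes from the article above, while the counts of beneficial cases are purely hypothetical inputs chosen to show the scaling:

```python
# Illustrative only: translating a harm-to-benefit ratio into absolute numbers.
# The 43:1 ratio is the figure cited above; the counts of beneficial cases
# are hypothetical inputs used just to show how quickly harms accumulate.

UNNECESSARY_SURGERIES_PER_BENEFICIAL_CASE = 43

def unnecessary_surgeries(beneficial_cases: int) -> int:
    """Unnecessary surgeries implied by the 43:1 ratio for a given number
    of cases in which screening led to a real benefit."""
    return beneficial_cases * UNNECESSARY_SURGERIES_PER_BENEFICIAL_CASE

for beneficial in (10, 100, 1_000):
    print(f"{beneficial:>5,} beneficial cases -> {unnecessary_surgeries(beneficial):,} unnecessary surgeries")
```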


Why is this the case? 

Unfortunately, the biggest asset to clinical decision making is also the greatest source of error variance: the people making the decisions. The American Psychological Association defines evidence-based practice as “the integration of the best available research with clinical expertise in the context of patient characteristics, culture and preferences.” But when practitioners decide individually what evidence or practices to use, it introduces considerable variability in behaviors, as well as adverse patient outcomes (Titler, 2008). In fact, the balance of the data on evidence-based care indicates that patient complications are greatly reduced when it is followed, and some (admittedly equivocal) evidence suggests healthcare costs can be reduced by about 30 percent.

According to the article, “The Science of Implementation: Changing the Practice of Critical Care,” in Current Opinion in Critical Care: “The absence of a proven effective framework for implementing clinical practice change has resulted in a patchwork of interventions in ambulatory and acute care medicine. There is an increasing appreciation that interventions should be undertaken only after careful, theory-based examination of the source and strength of the evidence, the organizational and professional context in which the change will be made, and the availability of facilitating methods” (Weinert and Mann, 2008).

A large part of building consistency in practice over time needs to be devoted to mitigating what I call “practice drift”: individual interpretations and behaviors that cause actual practice to drift over time. Another aspect of this phenomenon is that as new guidelines are developed, performance then needs to shift to match them.

The actual translation of research findings into meaningful practice recommendations for healthcare providers, though, is not always straightforward. What’s more, the rate of diffusion of a new (and likely superior) treatment intervention suffers in the face of clinical practice inertia. In other words, current localized practices persist because an understanding of a better way hasn’t penetrated all the ranks of the organization.

Hospitals, clinics, and other healthcare organizations have a tremendous influence on the practice patterns of their staff and the outcomes of the patients they serve. This happens largely through organizational cultural influences (localized practice patterns), as well as the systematized processes in place within each organization.

Here’s the good news: Research reported in the Journal of Nursing Administration found that 76.2 percent of a random sample of 1,015 healthcare practitioners wanted more education and skills building in evidence-based practice (Melnyk et al., 2012). The study found that the barriers most frequently standing in the way of evidence-based practice were leaders, politics, and organizational cultures. Finally, those who had been practicing the longest tended to be the most resistant to evidence-based practice.

Again, these data demonstrate the dramatic influence an organization can have on the adoption of, and adherence to, evidence-based medicine.

Evidence-Based Care Succeeds on Adherence to Standards 

Let’s review a recent study that examined how to standardize patient handover in the chain of emergency care (Ebben et al., 2015). After e-learning on this topic was delivered to 78 members of emergency hospital staff, 315 patient handovers during emergency situations were observed. Researchers found no statistically significant improvement in adherence to the protocol (p = .159), and actually witnessed a substantial increase in questions and interruptions by emergency department staff during handover.

Although the study failed to show any improvement in retention of specific actions or procedural information, it did have one notable outcome: the emergency department staff began to pose more questions, about the process and about patient care. This suggests that what would otherwise have been rote execution of the emergency procedure had become a situation in which staff began to wonder whether the activities they were engaged in were the most appropriate response.

No doubt, changing inertia is hard. People and organizations tend to overestimate their performance with regard to evidence-based practice, so their self-assessments tend to be positively biased. But the meta-analytic evidence shows that changing practice patterns to better reflect evidence-based patient care needs to include monitoring and evaluation of specific target outcomes to truly demonstrate evidence of change.


If you have examples of where you have seen or done this well, please let us know. Also, please join me for the upcoming webcast, “Evidence-Based Care Management.” We’ll take a closer look at where decision making in healthcare (and business) goes wrong and what the evidence says about improving it.

Further Reading 

Weinert, C.R., and H.J. Mann. (2008). The science of implementation: Changing the practice of critical care. Current Opinion in Critical Care, 14(4): 460-465. http://www.ncbi.nlm.nih.gov/pubmed/18614913

Ebben, R.H., et al. (2015). A tailored e-learning program to improve handover in the chain of emergency care: A pre-test post-test study. Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine, 23:33. http://www.ncbi.nlm.nih.gov/pubmed/25887239

Donaldson, N.E., D.N. Rutledge, and J. Ashley. (2004). Outcomes of adoption: Measuring evidence uptake by individuals and organizations. Worldviews on Evidence-Based Nursing, Suppl. 1: 41-51.

Titler, M.G. (2008). Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville, MD: Agency for Healthcare Research and Quality (US). http://www.ncbi.nlm.nih.gov/books/NBK2659

Melnyk, B.M., et al. (2012). The state of evidence-based practice in US nurses: Critical implications for nurse leaders and educators. Journal of Nursing Administration, 42(9): 410-417. http://www.ncbi.nlm.nih.gov/pubmed/22922750

Galinier, et al. (2007). Heart failure in AMI-diabetic patient is associated with poor compliance to evidence based medicine despite high risk profile: Insight from the FAST-MI registry. European Heart Journal, 28: 300-301. http://www.sfcardio.fr/sites/default/files/pdf/29%20-%20Abstract%20ESC%2007%20-%20Heart%20failure.pdf

Locwin, B. (2014). Why the ‘central dogma’ isn’t. BioProcess International. http://www.bioprocessintl.com/business/careers/central-dogma-isnt

Medscape. (2015). US recommendations against PSA test taken to heart by PCPs.

About the Author

Dr. Ben Locwin has been a frequent collaborator with ATD, including as a member of the Advisory Board for the Healthcare Community of Practice.

He is a behavioral neuroscientist and author of a wide variety of scientific articles for books and magazines, as well as an acclaimed speaker. He is an expert media contact for the American Association of Pharmaceutical Scientists and a committee member of the American Statistical Association. He also provides expertise to organizations on human learning and performance, and advises on a range of business, healthcare, clinical, and patient concerns.

He is an author and international speaker, and has hired more than 1,000 people into high-criticality roles. Says Dr. Locwin, “I have refined my approaches empirically, using deep humility to challenge long-held beliefs and preconceptions that plague these aspects of the people-centered discipline.”

Follow him on Twitter: @BenLocwin.
