
Original Contribution

Hard to Believe

March 2011

I first heard of “evidence-based medicine” (EBM) during an ACLS class in the mid-’90s. We had started the day honing intubation and CPR, then took our seats for the didactic portion of the program. Our instructor credited EBM with changing the way we manage cardiac arrests; for example, discouraging routine use of sodium bicarbonate, a treatment that had been considered a standard of care for two decades. He also lauded bretylium’s unique “chemical defibrillation” properties and suggested we make space in our drug bags for high-dose epinephrine. The message to a roomful of novice ALS providers was clear: Driven by the scientific method, EMS was discarding antiquated procedures in favor of leading-edge innovations.

Fast forward to 2000: EBM was discrediting bretylium, high-dose epi, and almost every other drug once thought to promote cardiac arrest survival. Five years later, CPR supplanted pulse checks as the first post-defibrillation step. In 2010, continuous chest compressions leapfrogged airway management and ventilation on the American Heart Association’s BLS algorithm, forcing us to relearn “The ABCs” as “The CABs.” Meanwhile, “large differences in direction and effect between results from the laboratory and those derived from clinical trials” indicated we should take another look at bicarb.

Why so many contradictions in care? Are old studies flawed? Has EBM matured? After reading Jonah Lehrer’s “The Truth Wears Off” in the December 13 issue of The New Yorker, I think the problem might be EBM itself.

Lehrer calls “replicability,” the confirmation of findings by multiple investigators, “the foundation of modern research,” then uses a 2005 JAMA review of 49 groundbreaking clinical trials to show how elusive replicability can be. Of the 34 studies that were retested, only 20 produced comparable results. Lehrer suggests several possible reasons:

Selective reporting. Often a consequence of subconscious bias rather than willful deceit, selective reporting is more likely when research involves subjective criteria, e.g., appearance, odor, or texture. However, even quantitative studies can be compromised by scientists’ prior beliefs. It’s human nature to prefer results that prove us right.

Selective publishing. Of the 49 trials JAMA cited, 45 supported their initial hypotheses. Even if we give researchers the benefit of the doubt by assuming their theories are correct 70% of the time, the chances of being right on at least 45 of 49 independent occasions are less than 3 in 10,000 (see the sketch after this list). A more likely explanation is that authors, publishers, and readers of research often ignore negative findings.

Statistical anomalies. The ongoing debate about global warming reminds us that correlated data—even lots of it—doesn’t necessarily forecast long-term trends. Sometimes our conclusions are based on aberrations—chance occurrences that even out over a very long time.
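For readers who want to check that 3-in-10,000 figure, here is a minimal sketch in Python. It assumes the column’s generous 70% prior and treats the 49 outcomes as independent draws, so the number of confirmed hypotheses follows a binomial distribution:

```python
from math import comb

# Binomial tail: probability that at least 45 of 49 independent trials
# confirm their hypotheses when each is correct with probability 0.7.
n, k_min, p = 49, 45, 0.7
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))
print(f"P(at least {k_min} of {n}) = {tail:.5f}")  # about 0.00023, i.e., under 3 in 10,000
```

The exact tail works out to roughly 2.3 in 10,000, consistent with the claim above.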

It’s not just evidence-based medicine we’re questioning; evidence of all kinds is on trial. No wonder my Google search for “not well understood” yielded 23 million hits. How, then, should we react to the premise that it’s hard to prove anything?

As an engineer, I’m frightened by that prospect. I spent four years in college and 18 years in business solving hundreds, maybe thousands of problems based on generally accepted principles of energy and motion, all of which were even more entrenched than, say, prophylactic aspirin for MIs (one of the 49 studies highlighted by JAMA). Now I’m reading about discrepancies in the behavior of planets and stars that contradict the work of Newton and Einstein. I can handle doubts about bicarb, but gravity?

Conversely, as a medic, I’m not surprised by Lehrer’s piece. Each major modification in prehospital care I’ve witnessed during 19 years in EMS has been accompanied by caveats from more experienced colleagues. That doesn’t mean every revised recommendation was ill-conceived, but now I know I was wrong to assume that opponents of our industry’s “enhancements” were merely resistant to change. Maybe they found a workable middle ground between evidence-based and “experience-based” medicine.

In the mid-’90s my colleague Kevin expressed surprise at new restrictions on the use of MAST trousers. A veteran of NYC*EMS, Kevin recalled countless victims of penetrating trauma who had regained pulses after MAST application. I remember being skeptical because research had shown that raising BPs in patients with open wounds simply caused them to exsanguinate faster. That’s still plausible; so is the notion that Kevin’s experience contributed more to the care of select patients than all the MAST experiments ever done.

There’s another reason I’m rethinking EBM-driven protocols: Five minutes into a recent cardiac arrest, with v-fib refractory to two shocks and CPR, I inserted an OPA. My elderly patient gagged, converted to a sinus rhythm, woke up, and wanted to know why I thought he needed an ambulance.

Was it timing? Luck? A subtle shift in Nashville’s magnetic field? Should we resurrect two 20-year-old studies linking vagal stimulation to a rise in v-fib threshold?

I don’t know—I’m still contemplating CPR without gravity.

Mike Rubin, BS, NREMT-P, is a paramedic in Nashville, TN, and a member of EMS World's editorial advisory board. Contact him at mgr22@prodigy.net.
