Fallible Medicine
It's a warm spring evening in Wheat Ridge, CO, and I'm en route to the nearest trauma center with Bob, a middle-aged male. Bob moans but does not wake as I flick the side of his face and yell his name in his ear. He was c-spined by the local fire department after being partially ejected through the side window of his Celica. As I systematically move through my secondary assessment, I recognize that Bob has begun to dangerously hypoventilate, and I admit to myself a fact I'd been avoiding since I arrived on scene: Bob needs to be intubated.
Inserting a 7.5 ET tube in Bob's right nare, I feel a sense of relief as I advance the tube in time with Bob's respirations and see the telltale fog gather inside the plastic. The presence of bilateral lung sounds, the absence of epigastric sounds and a normal-looking capnography waveform all further confirm my assessment that the tube is in the correct place. At the emergency department, the respiratory therapist checks the tube placement and announces that the tube is good. I return to my ambulance, relieved and pleased that I performed my skills well.
I was mistaken. Ten minutes after I left Bob's bedside, a nurse returned to the room with his x-ray and startling news. The ET tube did not appear to be in the trachea. The physician, who was busy cauterizing a wound on the patient's scalp, set down his cauterizing tool and rushed to the light screen to examine the x-ray. Moments later, the diagnosis confirmed, my nasal tube was removed and the patient was orally intubated. During the procedure, the hot cauterizing tool, still lying on the bed, rolled under Bob's shoulder, creating a significant burn.
This is what James Reason, in his book Human Error, refers to as an evolving disaster. His theory is that errors don't just happen--they evolve. Systems of checks and balances often discover a single error before it becomes problematic. But while a single error can be significant by itself, our most critical mistakes often occur when one error leads to another in a compounding manner.
When I heard about the episode with Bob, I returned to the ED in disbelief. My initial reaction was to defend myself. The capnography wave, the breath sounds--how could the tube possibly have been bad? Could it have been dislodged after I left? My ego wanted to defend my care. I wanted to immediately turf the error onto someone else. Sure, mistakes like these happen, but that's little consolation when the mistake is yours.
The physician and I reviewed the x-ray together, and he patiently pointed out the obvious radiopaque line that showed the tip of the ET tube sitting at the tracheal opening, the inflated balloon precariously sealing the opening.
"Didn't the tube seem awkwardly shallow for a nasal tube?" he asked.
"Well…yes," I hesitantly agreed. I was used to seeing the tube much deeper after a nasal intubation.
"Did you orally observe the tube after intubation?" he continued.
Again, I admitted that due to the patient's active gag reflex at the time of intubation, I hadn't gone back and attempted to orally visualize the tube, a technique that probably would have been possible as Bob's level of responsiveness waned.
As the conversation continued, I recognized that I had made an error. The intubation would need to be recorded as a hypopharyngeal tube placement. The tube was indeed ventilating Bob, but its location placed him in danger of losing the airway with the slightest movement of his head. In retrospect, the spinal immobilization was probably the only factor that had allowed the tube to remain in place for as long as it had.
Even worse, my error had begun a cascade of events that led the patient to suffer an additional injury. While Bob's burn was not directly caused by the misplaced tube, it was the precarious airway that set off the unfortunate chain of events. The sting of the failed intubation seemed intensified because of the role it played in this secondary injury.
The doctor would now need to have an embarrassing conversation with the family, and although the burn would be a small footnote in the overall course of care, it stung nonetheless. Sure, I made mistakes as a new paramedic, but wasn't there supposed to be a point where these things didn't happen anymore? What would my medical director say? Where had my judgment failed? What should happen next?
Introduction
Conventional wisdom might suggest that medical errors are a rarity, anomalies of care committed primarily by bad or inexperienced clinicians. It's comforting to believe that errors in judgment and care, and even medical negligence, are not only uncommon but are caused by just a few poor caregivers in unusual circumstances. But research suggests this idea may be both inaccurate and incomplete, and the circumstances behind medical errors are neither uncommon nor unusual.1-6
As quality assurance reviews attempt to identify errors in judgment and interventions, the question of how best to address the errors once they're identified is often ignored. It is assumed that those responsible for errors will be corrected through training, disciplinary action or some combination of the two. The theory is that ultimately, the frequency of errors will decrease, and patient care will be improved. Does it work?
By championing a punitive model of quality assurance, we cling to the idea that medical errors are fundamentally an issue of bad caregivers or poor training. It's easier to address the question of what to do when bad EMTs and paramedics make mistakes than to consider the far more complex reality that good EMTs and paramedics make mistakes as well, and perhaps with similar frequency.
Why We Make Mistakes
In their 1975 essay "Toward a Theory of Medical Fallibility," philosophers Samuel Gorovitz and Alasdair MacIntyre explored the nature of medical errors. They identified three possible reasons why medical practitioners fail. The first is ignorance: Perhaps the clinician did not possess the knowledge to reasonably assess the situation and act with appropriate judgment. The second is ineptitude: All the proper knowledge and training are in place, but the clinician failed to utilize them appropriately. The third--and most compelling--is something the philosophers termed necessary fallibility.4
The theory of necessary fallibility suggests that when we try to move from predicting how a class of things behaves, such as head-injury patients in general, to predicting how one particular thing will behave, like Bob's uncontrolled airway, we enter a grey area where skill and experience can only take us so far.
Take, for instance, the classic man-versus-machine chess matches between Garry Kasparov and IBM's Deep Blue computer, or the 1996 duel between cardiologist Hans Ohlin and Lars Edenbrandt's ECG-interpretation computer.7 In each case the computer ultimately trumped one of the best human minds society had to offer, but why? The answer lies with necessary fallibility.
Our brains are poor at factoring in large numbers of variables. In many cases, when signs are clear, experience is the perfect guide. But when we enter a grey area, we tend to focus on details that serve our perception and discount ones that do not. Recall my willingness to focus on the signs suggesting my ET tube was in Bob's trachea (the humidity in the tube and the lung sounds) and to ignore the information that suggested otherwise (the shallow depth of the tube). Add to that our very human susceptibility to distraction, fatigue and stress, and the heartless, nonintuitive computer trumps experience and clinical judgment again and again.
For quality-control specialists, necessary fallibility may sound like a cop-out. Certainly, there were identifiable errors in the management of my patient's airway, and factors that skill and experience could have recognized and corrected. But necessary fallibility won't be dismissed that easily. It still presents questions: In the unpredictable world of emergency medicine, what degree of error is acceptable? Do we need to accept that human error is inevitable in the course of our care? If quality assurance is to reduce our fallibility, these questions need to be addressed.
Classifying the nature of patient care errors may not only serve to help direct the focus of quality assurance solutions, but also minimize some of the unfortunate byproducts of punitive systems, such as deception, evasion and defensiveness. But first there is a nagging argument that must be addressed: the idea that perhaps errors are just a byproduct of bad clinicians, and something near perfection is indeed an acceptable goal.
"The Paramedic Did What?"
An ambulance crew leaves a patient improperly restrained in her wheelchair, and she later slides out and dies of strangulation. Another crew releases a juvenile from a car accident scene with a cursory assessment and later learns of his admission to the hospital with an encapsulated spleen. A paramedic accepts an unlabeled syringe from her EMT partner and administers an incorrect drug to her patient. Another paramedic incorrectly gives an amp of sodium bicarbonate instead of glucose to a hypoglycemic patient because the prefilled syringes looked similar. How could caregivers like these be allowed to practice? Shouldn't they be fired…or worse?
In truth, errors like these aren't rare. This sample was gathered by simply asking several experienced paramedics with whom I work each day to tell me the single biggest mistake they had made in the past two years. Every one of them had a story to tell. These aren't bad providers. These stories came from experienced caregivers working for a service that is highly respected for its quality of care. Some of these errors were identified, and others went unnoticed by the QA process, but none of the caregivers I spoke with were immune to committing significant clinical errors.
Research seems to support the conclusion that it happens to everyone. Consider a 1995 study that found that mistakes in administering drugs occurred, on average, about once per hospital admission.1 Another small study at the Brigham and Women's Hospital in Boston looked at responses to sudden cardiac arrest and found that 27 of 30 clinicians had made errors in the initial defibrillation process.8 In 1991, a large study known as the Harvard Medical Practice Study found that 1% of all patients admitted to the hospital would be subject to an act of medical negligence during their stay. The study defined negligence as a substandard level of care resulting in injury to the patient.2
When we ponder these statistics, we tend to visualize a small population of incompetent caregivers behind the numbers. As in my ego-centered initial response to my intubation failure, we paint a picture that comfortably distances our own care from that of those prone to error. This image doesn't play out in the real world.
For instance, research like the defibrillation study points out that, in many emergency interventions, error is not the exception but the rule. A well-known study tracking the long-term effects of clinical errors found that malpractice lawsuits didn't cluster around a small group of presumably incompetent physicians, but stretched out in a classic bell curve formation, leaving few untouched. Most surgeons anticipated being sued at least once in their careers. To further complicate matters, the factors that made an individual more likely to sue tended to have little correlation with the actual or perceived act of negligence.9
Identifying Errors
Perhaps the first step in responding effectively to medical errors (beyond recognizing their inevitability) is identifying their sources. To put a more diplomatic spin on the first two factors in Gorovitz and MacIntyre's triad, ignorance and ineptitude, we could classify these as knowledge-based errors and skill/experience-based errors. The two call for different types of interventions.
Assuming a caregiver has the internal motivation to correct knowledge-based errors, the simple solution to these types of errors is education. Knowledge-based errors include failing to incorporate an appropriate intervention into the patient's treatment plan or failing to recognize a specific arrhythmia on the ECG.
Errors stemming from knowledge deficiencies should be tracked within an organization to help identify possible requirements for future continuing education. A quality assurance process that identifies a pattern of difficulty with trauma scene times or the undisputed king of clinical errors, medication mistakes, might lead to incorporating these subjects into future training.
In the arena of skill- and experience-based errors, the research is clear. Nothing can take the place of repetition. Studies conducted on ED residents have demonstrated the link between repetition and proficiency in skills ranging from cricothyrotomy and chest tube placement to identifying heart murmurs.10,11
Cognitive psychologist K. Anders Ericsson discovered from observing masters of their art, from ballet dancers to virtuoso violinists, that the key difference between them and others is not innate talent but their sheer tolerance for practice and repetition.12 They don't practice out of enjoyment. If that were the case, they would continue to practice after retirement (most don't). They simply understand that mastery comes from repetition and practice.
Addressing knowledge- and skill-based errors seems fairly straightforward. It is the third classification of medical errors, necessary fallibility, that proves more challenging to anticipate and respond to.
Reviewing Quality Review
The greatest roadblock to effective quality review is the failure to recognize the truly complex nature of human errors. This is what Charles Perrow, author of an extensive investigation into the Three Mile Island nuclear accident, refers to as hindsight bias.13 Hindsight bias is the human tendency to view elements that were not recognized or understood at the time of an incident as obvious in retrospect.
In my case, one might be apt to look back and say that the shallow depth of the tube should have alerted me to the possibility that it had not passed completely into the trachea. Once the x-ray proved it, judging that the misplacement should have been recognized sooner was simple. While I may well recognize this in the future, the fact that the ED physician and respiratory therapist made the same error suggests that this judgment is hindsight bias.
Another important consideration is recognizing the presence of latent errors. James Reason's pioneering work on human error described the medical field as one fraught with latent errors--errors that occurred prior to the error in question and contributed to it.14 Latent errors may include poor scheduling or caregiver fatigue, environmental distractions, improper equipment checking or maintenance, poorly designed equipment and faulty training. All of these things and many others can play roles in clinical errors.
The field of engineering has dubbed the practice of searching for latent errors "root cause failure analysis" and created a systematic approach to looking beyond the physical and human factors to the latent causes of system failures. This type of analysis is hardly revolutionary when dealing with mechanized processes, but the idea has been slow to take form in medical quality assurance programs.
It is our nature to place blame solely on the care provider without considering other contributing factors. However, there is a proven correlation between correcting latent errors and improved performance. There has never been a proven correlation between disciplinary action and improved clinical performance in EMS.
With our dedication to professionalism, it seems only natural to fall in line with the punitive model. We tell ourselves, "Don't make excuses. Take accountability and move on." However, quality assurance professionals who gather detailed information about the nature of medical mistakes and strive to identify contributing latent causes may find that they are viewed with much greater trust than individuals who focus on punitive corrections.
This by no means absolves caregivers from their roles in medical errors. And certainly, punitive measures are warranted in some circumstances. However, it does recognize that human error is far more complex than it often appears, and it offers usable solutions. Each quality assurance program will need to incorporate a means to recognize and address individuals who seem to commit unusually high numbers of errors or the same error repeatedly. In these cases the key to proceeding with confidence is an accurate map of what type and frequency of error is truly normal.
The Value Of Error
The morning after my intubation error, I dropped by my medical director's office. Dr. Kanowitz is an amiable doctor with an easy smile. I knew he would understand, even though the organization was auditing all intubations to assess the feasibility of adding RSI to our protocols. I could see him stifle a grimace when I announced, "Dr. K, I had a bad tube last night."
We discussed what I thought went wrong and how I could correct it. Dr. Kanowitz started his career as a paramedic, and he hasn't forgotten what it's like to be one. After the discussion I was certain not only that I would not repeat the error in the future, but that I was ready to make sure other paramedics could learn from my experience as well. At this stage a mistake can become a powerful thing. This is when quality assurance programs can shine.
Conclusion
It brings a powerful sense of peace when we accept that the practice of medicine is one of uncertainty. We wish it were not so, but it is. We long for the concrete patient presentation that allows us the comfortable conviction of certainty, but things rarely work out that way.
As long as human beings carry out the basic tasks of identifying medical disorders and performing clinical interventions, there will be times when all of our training, experience and knowledge lead us astray. When we make peace with this fact, our mistakes can truly serve us, and others as well. We begin to talk about them openly, discuss them with other care providers and lead the way for those who will follow.
Organizations that are willing to forge a culture of openness, where individuals are encouraged to disclose their mistakes without fear of judgment or ridicule, may find that very culture to be their greatest weapon in creating quality patient care. When we are comfortable with owning our errors, we can own the good as well as the bad. We own that which we have learned. In the light of day, human mistakes have greater meaning for the caregivers who made them, the patients who, to varying degrees, paid for them, and those who would choose to learn from them.
Medical Error Prevention
In an upcoming issue of EMS Magazine, a related article will discuss the value of adverse-event and near-miss reporting in EMS, illustrated using cases from an EMS event reporting system called MEPARS (Medical Error Prevention and Reporting System, EMSsafePatient.com). This system, which is modeled after NASA's successful Aviation Safety Reporting System, benefited from a recent NIH grant to support further refinement of its infrastructure and expansion within the United States.
Authors Karthik Rajasekaran, EMT, a first-year medical student at Chicago Medical School, and Terry Fairbanks, an assistant professor of emergency medicine at the University of Rochester, emergency physician, EMS medical director and paramedic, will use actual case reports to highlight the disadvantages of the current culture in EMS, which often responds to medical errors with a "name, blame, shame and train" mentality. The systems approach and "just culture" will be proposed as preferred methods of reducing adverse events. These approaches have proven successful in other high-consequence industries such as aviation and nuclear power, and have been gaining popularity in the medical field. The authors will describe their experience with the MEPARS system and will use actual results from it to give readers concrete ideas they can put into practice in their EMS agencies.
References
- Bates DW, Cullen D, Laird N, et al. Incidence of adverse drug events and potential adverse drug events: Implications for prevention. JAMA 274(1): 29-34, 1995.
- Brennan T, et al. Incidence of adverse events and negligence in hospitalized patients: Results of the Harvard Medical Practice Study. N Engl J Med 324(6): 370-376, 1991.
- Donchin Y, Gopher D, Olin M, et al. A look into the nature and causes of human errors in the intensive care unit. Crit Care Med 23(2): 294-300, 1995.
- Gorovitz S, MacIntyre A. Toward a theory of medical fallibility. Hastings Cent Rep 5(6): 13-23, 1975.
- Leape LL. Error in medicine. JAMA 272(23): 1851-1857, 1994.
- Lunn JN, Devlin HB. Lessons from the confidential enquiry into perioperative deaths in three NHS regions. Lancet 2(8572): 1384-1386, 1987.
- Heden B, Ohlin H, Rittner R, Edenbrandt L. Acute myocardial infarction detected in the 12-lead ECG by artificial neural networks. Circulation 96(6): 1798-1802, 1997.
- Gawande A. Interview in reference to 2004 research project at Brigham and Women's Hospital. April 5, 2005.
- Localio AR, et al. Relation between malpractice claims and adverse events due to negligence: Results of the Harvard Medical Practice Study III. N Engl J Med 325(4): 245-251, 1991.
- Barrett MJ, et al. Mastering cardiac murmurs: The power of repetition. Chest 126(2): 470-475, 2004.
- Custalow CB, et al. Emergency department resuscitative procedures: Animal laboratory training improves procedural competency and speed. Acad Emerg Med 9(6): 575-586, 2002.
- Ericsson KA. The Road to Excellence. Mahwah, NJ: Lawrence Erlbaum Associates, 1996.
- Perrow C. Normal Accidents: Living With High Risk Technologies. New York, NY: Basic Books, 1984.
- Reason J. Human Error. Cambridge, UK: Cambridge University Press, 1990.
Bibliography
Bridge Medical, Solana Beach, CA. www.mederrors.com.
Gawande A. Complications: A Surgeon's Notes On An Imperfect Science. New York, NY: Picador, 2002.
Kohn L, Corrigan J, Donaldson M, eds; Committee on Quality of Health Care in America, Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press, 2000.
Steve Whitehead, NREMT-P, has been involved in prehospital medicine and education for over 17 years. He is a firefighter/paramedic for the Parker Fire Protection District in Parker, CO. He can be reached at sfwhitehead@parkerfire.org.