Transition Topics for the Paramedic Part 1: Evidence-Based Decision Making in EMS
Brady is pleased to share with you a preview of its EMS Transition Series. Transition Series: Topics for the Paramedic provides both an overview of new information contained within the Education Standards at the paramedic level and a source of continuing education for practicing paramedics. During 2013, EMS World Magazine will feature exclusive excerpts from this new textbook. Visit www.bradybooks.com for more information.
Throughout healthcare, medicine is moving to a more evidence-based approach: the outcomes of therapies and interventions are carefully measured to ensure they have the intended results, and changes are made only when clear indications and outcomes point to meaningful improvements in patient care. For EMS, this means the interventions we perform and the therapies we deliver must be meaningful and measurable. To assess the value of our care, we must turn to research. Quality studies and experimentation separate important, relevant strategies from frivolous, wasteful endeavors. Consider the following situation:
You work as a paramedic for an ambulance service that has 20 ambulances. Staffing is tight—and the budget is tighter. Tomorrow a new device used to treat cardiac arrest will be released, at a cost of $1,500 each. The company that created the device claims that it is the most important device the company has ever released, and that it will significantly increase cardiac arrest survival rates.
As your company decides whether to buy the device, it must weigh the idea of improving survival rates against the high cost of the device. Of course everyone would like to see patient care improve, but there is a significant cost to implementing this new device. If your company chooses to outfit its entire fleet, it will have to spend $30,000. If the device truly works, it makes a great deal of sense to purchase it, but if it does not work, the $30,000 could pay for the addition of another paramedic.
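To make the tradeoff concrete, here is a minimal sketch of the arithmetic behind this scenario. Only the fleet size and device price come from the example above; the call volume and survival rates (`arrests_per_year`, `baseline_survival`, `claimed_survival`) are made-up assumptions included solely to show how evidence would shape the decision, not claims about any real device.

```python
# Hypothetical decision arithmetic for the scenario above.
ambulances = 20
device_cost = 1_500                      # dollars per device (from the example)
fleet_cost = ambulances * device_cost
print(f"Cost to outfit the fleet: ${fleet_cost:,}")       # $30,000

# Assumed, illustrative figures; not data from any real study or device.
arrests_per_year = 100                   # assumed annual cardiac arrests treated
baseline_survival = 0.08                 # assumed current survival rate
claimed_survival = 0.12                  # assumed rate if the device's claim holds
extra_survivors = arrests_per_year * (claimed_survival - baseline_survival)
print(f"Additional survivors per year if the claim holds: {extra_survivors:.0f}")
print(f"Cost per additional survivor: ${fleet_cost / extra_survivors:,.0f}")
```

Without trustworthy evidence behind the survival numbers, this calculation is meaningless, which is exactly the point of the discussion that follows.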
How could your service make the correct decision? The answer is quality EMS research. If there were unbiased scientific experimentation demonstrating that this device really did double cardiac arrest survival rates, this decision would be easy to make. Similarly, if a quality study demonstrated that the device did not improve cardiac arrest survival, the service could invest its money in more meaningful areas.
This example is hypothetical, of course, but every day EMS agencies are faced with similar dilemmas. Beyond spending money, consider the decisions EMS leaders and providers face when developing protocols, adopting new medications and devices, allotting staffing resources, and addressing the myriad questions that desperately need thoughtful answers.
In an age of slick marketing and ever-burgeoning technology, quality research and an evidence-based approach will help us answer questions and expend our resources wisely.
But what about the local level—how does EMS research affect the practice of a single paramedic? Consider your state, regional or even service-level protocols. Decisions regarding scope of practice, equipment and medications are made every day. Medical directors, state officials and services consult relevant research to aid in choosing the paths and procedures that make the most sense.
Consider also the dynamic nature of healthcare. The history of EMS is chock-full of ideas, medications and other therapies that were considered state of the art until they were thoroughly examined and found not to work as intended. Understanding how to interpret research will help you, as a provider, stay on the cutting edge of patient care.
Even in the moment of patient care, research can be beneficial. Using research to guide immediate decisions used to be the realm of physicians alone, but as paramedics learn to read and effectively interpret research, this door opens to them as well. Consider the recent discussions around the drug vasopressin, which may be an option in cardiac arrest care as a substitute for epinephrine. In the moment, you might be faced with a decision to use either drug. If you have followed the research, you would know that there is little evidence to support vasopressin's use in pediatric patients, even though it might be available to you in that circumstance.
The bottom line is that decision makers are consulting EMS research to make choices about your profession. EMS providers at every level are involved in making these decisions, but without a basic understanding of research, providers are routinely excluded from the conversation. Understanding research, even at a very simple level, allows you to speak the language of healthcare and can earn you a seat at the table.
Basics of EMS Research
Not all research is created equal. There are good studies and there are bad studies. As we evolve in an evidence-based environment, we should strive to embrace the best practices of conducting and evaluating research. The finer points of medical research are by no means a simple topic, and a thorough examination of how to evaluate research is beyond the scope of this text. However, some broad concepts can be helpful to consider.
Remember that the process of research is the same whether you are an EMS researcher or a scientist in a laboratory. Both rely on the scientific method, developed by Galileo almost 400 years ago as a process of experimentation for answering questions and acquiring new knowledge. In this method, general observations are turned into a hypothesis (an unproven theory). Predictions based on the hypothesis are then made and tested to either support or refute it.
For example, you might note that applying a bandage seems to control minor external bleeding. To use the scientific method, you might hypothesize that bandages do indeed control bleeding better than doing nothing at all. You could conduct a randomized controlled study to test your hypothesis by randomly assigning patients to the “bandage group” or to the “do nothing group.” You could then measure the amount of bleeding in each group and compare your results. Although there might be some ethical issues with your study, this experiment would help you prove or disprove the value of bandaging. Furthermore, if your experiment were done properly, your results would hold up if the study were repeated, regardless of who conducted it. That is the value of quality research. However, not all research can live up to these quality markers.
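As a purely illustrative sketch in Python, the hypothetical bandage study might be organized like this: patients are randomly assigned to one of the two groups, and the measured blood loss in each group is then compared. All numbers here are fabricated stand-ins for real measurements.

```python
import random
import statistics

random.seed(1)                                   # reproducible random assignment

def measured_blood_loss(bandaged):
    """Fabricated blood-loss value in mL; stands in for a real measurement."""
    base = random.uniform(5, 25)                 # natural variation between wounds
    return base * (0.6 if bandaged else 1.0)     # assume bandaging reduces loss

# Randomly assign 40 hypothetical patients to the two groups.
patients = [f"patient_{i:02d}" for i in range(1, 41)]
random.shuffle(patients)                         # the randomization step
bandage_group, do_nothing_group = patients[:20], patients[20:]

bandage_results = [measured_blood_loss(True) for _ in bandage_group]
control_results = [measured_blood_loss(False) for _ in do_nothing_group]

print("Mean blood loss, bandage group:    %.1f mL" % statistics.mean(bandage_results))
print("Mean blood loss, do-nothing group: %.1f mL" % statistics.mean(control_results))
```

In a real trial the outcome measure, sample size and analysis plan would all be specified in advance; the point here is simply that chance, not the researcher's preference, determines who ends up in which group.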
Unfortunately, much of what we do still relies on the “best guess strategy,” but as we progress, we rely more and more on research and, in particular, research studies. When making decisions based on evidence (especially patient care decisions), it is clearly best to obtain an opinion based on many studies and not just a single work.
The key is to obtain an objective opinion. When more than one study points to the same conclusion, that conclusion is more likely to be free from opinion and bias. Many individual studies are clouded by bias, which occurs when research is influenced by prior inclinations, beliefs or prejudices. Bias influences a study when results are manipulated to fit an expected outcome instead of being measured objectively against the hypothesis. This can occur when researchers stand to gain financially from a particular outcome, but more commonly it occurs simply as a result of poor research methods.
In the true scientific method, outcomes are not bent to conform to previously held notions, but are examined objectively and evaluated based solely on the facts. Valid research embraces this idea and uses methods designed to limit outside influences.
Some methods of research are considered more valid than others. Validity is typically judged by how well a method avoids potential bias and excludes the possibility of error. Consider the following strategies:
Prospective versus retrospective
Retrospective reviews look at events that have occurred in the past. Healthcare frequently uses retrospective reviews to look back at the outcomes of therapies previously performed. In contrast, prospective studies are designed to look forward: their methods are designed to test therapies and outcomes that will occur in the future. Prospective studies are generally easier to control, as rules and regulations can be put in place to control errors and prevent bias. Retrospective studies cannot necessarily be controlled in this way. They can certainly be valid, but a prospective method is generally considered more valid.
Randomization
High-quality studies use randomization. In medicine, this type of study typically compares one therapy against another. Bias is controlled by randomly assigning a therapy to each patient rather than using predetermined groups, and randomization also improves objectivity when analyzing outcomes. In the highest-quality studies, neither the researcher nor the patients may know which therapy is being given. This process is called blinding. A study can be single-blinded (the patient does not know which therapy is given, but the researcher does) or double-blinded (neither the patient nor the researcher knows). When a study is blinded, it becomes very difficult to influence the outcomes in any way, and the results are far more likely to be objective.
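The following sketch, again hypothetical, shows one common way double-blinding is arranged: treatments are dispensed under neutral codes, patients are randomized to a code, and the key linking codes to therapies is withheld until the outcomes have been recorded. All names and numbers are invented for illustration.

```python
import random

random.seed(7)

# The sealed allocation key: known only to whoever holds the randomization key
# (for example, a pharmacy or statistician), never to patients or clinicians.
codes = ["A", "B"]
therapies = ["study drug", "placebo"]
allocation_key = dict(zip(codes, random.sample(therapies, k=2)))

# Randomize ten hypothetical patients to a code. (A real trial would use a
# scheme that also balances group sizes.)
patients = [f"patient_{i:02d}" for i in range(1, 11)]
assignments = {p: random.choice(codes) for p in patients}

# What the treating provider and the patient see: only the code.
for patient, code in assignments.items():
    print(patient, "receives kit", code)

# Unblinding happens only after all outcomes have been recorded.
print("Sealed key (opened at analysis):", allocation_key)
```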
Control groups
The use of a control group helps ensure outcomes are evaluated fairly. In medicine, a control group commonly receives a known or currently used therapy. In our earlier bandaging study, we compared the bandaging group against a “do nothing” group; in that case, the “do nothing” group would be our control group. Including this group allows us not just to evaluate the outcome of bandaging, but also to compare those outcomes against a different strategy. This comparison adds weight and value to the analysis.
Study group similarity
If a group of patients is being used to test a new treatment, it is important that subjects in that group have a certain degree of similarity. Let us say, for example, that we want to test a new airway device’s impact on survival in trauma patients. We have designed and implemented a study to compare the use of the new device against a group of patients who received care without using the device. In our study, paramedics were allowed to choose the patients on whom they wanted to use the new device.
When we look at our results, we find that the device group had a much higher mortality rate. At face value, we could assume this means the device did not work. However, as we analyze the results, we find that the group assigned to the new device was much sicker than the group that did not receive the device: the paramedics simply thought it would be better to use the new device only on the worst-off patients. Did more people die because of the device, or did more people die simply because they were hurt worse from the beginning? It is difficult to say, and therein lies the difficulty in comparing two vastly different groups. Consider also the challenges in comparing different age groups, different treatment protocols, or even different genders.
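A small simulation can make this problem vivid. In the sketch below the device is assumed to have no effect at all; mortality is driven only by how badly hurt a patient is. Because the hypothetical paramedics reach for the device mainly in the sickest patients, the device group still shows worse survival. Every figure is invented purely to illustrate why the groups must be similar at the start.

```python
import random

random.seed(3)

device_outcomes, no_device_outcomes = [], []
for _ in range(5000):
    severity = random.random()                 # 0 = minor injury, 1 = critical
    mortality_risk = 0.8 * severity            # risk depends on severity ONLY
    died = random.random() < mortality_risk

    # Non-random selection: the device is used mostly on the sickest patients.
    gets_device = severity > 0.6 or random.random() < 0.1

    (device_outcomes if gets_device else no_device_outcomes).append(died)

print("Mortality with the device:    %.0f%%" %
      (100 * sum(device_outcomes) / len(device_outcomes)))
print("Mortality without the device: %.0f%%" %
      (100 * sum(no_device_outcomes) / len(no_device_outcomes)))
# The device group looks far worse even though the device did nothing at all.
```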
No study can be completely free from bias, but as you have seen, certain methods help to minimize the impact of subjectivity. Because of the dynamic and often sensitive nature of medicine, a large variety of research is used to make conclusions on therapies and treatment. Ideally, systematic reviews guide our most important decisions, but more commonly, a combination of research studies and research methods guides the decisions that are made. Consider the following types of medical research:
Systematic review
In a systematic review, a series of studies pertaining to a single question is evaluated. Their results are reviewed, summarized and used to draw evidence-based conclusions. It is important to remember that a systematic review is made up of not one, but many different research experiments.
Randomized controlled trials (RCTs)
In an RCT, researchers randomly assign eligible subjects into groups to receive or not receive the intervention being tested. A control group is used to compare the tested intervention against a known outcome. In 2000, for example, Marianne Gausche-Hill and her colleagues looked at pediatric intubation in Los Angeles County, CA. In their study, children needing airway management were randomized, based on the day of the week, to either an intubation group or a bag-valve-mask (BVM) group, and the outcomes of these patients were studied. Objectivity was improved because subjects were randomized, and the results were more meaningful because outcomes in the intubation group could be compared against those in the BVM control group.
In medicine, drugs are frequently tested in randomized studies using a placebo. To measure the effect of a drug, patients are randomized to receive either the real drug or a placebo (“sugar pill”), which has no effect. Frequently these studies are double-blinded, so that neither the providers carrying out the study nor the patients know which treatment is being given. In this type of study, the results of the new medication can be compared against the placebo control group to accurately assess the effect of the therapy.
Cohort/concurrent control/case-control studies
In these types of studies, two groups of patients or therapies are compared, but subjects are not necessarily randomized. For example, you might compare the outcomes of one service that uses continuous positive airway pressure (CPAP) against the outcomes of another service that does not. A cohort study might follow patients who have a specific disease and compare them with a group of patients who do not. In both cases, you are comparing two groups and have a control group, but the results are not truly randomized. Frequently, case-control studies are retrospective in nature, looking at two groups of events or outcomes that occurred in the past. All these studies can be valid and yield important information, but they are also prone to bias, because it is difficult to control all aspects of similarity and method among the different groups.
Case series/case reports
Case series and case reports review the treatment of a single patient or a series of patients. Frequently they report on unusual circumstances or outcomes. There is no control group, and these reports are always retrospective. They are certainly not as valid as randomized studies, but they often help us formulate larger questions to be investigated.
Meta-analysis
A meta-analysis is not truly a study itself, but a compilation of different studies looking at a single topic. A meta-analysis summarizes the work of others, often statistically pooling their results, and frequently comments on outcomes. In many cases a meta-analysis is similar to a systematic review, but it is frequently of a much smaller scale.
Level of Evidence Designations
It is important to remember that every study should be reviewed independently. The fact that it is a randomized controlled study does not ensure that its results are valid. That said, methodology does play a role in evaluating a study’s importance. The American Heart Association qualifies the validity of research in a linear fashion using a “level of evidence” designation, which assigns varying levels of importance based on the way a study was conducted. This progression is useful in evaluating the importance of data and can be used as a framework for considering the utility of a particular study.
- Level of Evidence 1. The highest level (most valuable) of data on this sliding scale, resulting from RCTs or meta-analyses of RCTs.
- Level of Evidence 2. Studies using concurrent controls without true randomization. Because they are often retrospective and because the methods are more difficult to control without randomization, these types of studies are often less reliable.
- Level of Evidence 3. Studies using retrospective controls. There is little control of these experiments, as the testing is based on events that have already occurred. Because of this, it is difficult to ensure similar circumstances among research subjects. Although the data from retrospective studies may be useful, they can be prone to bias.
- Level of Evidence 4. Studies without a control group (e.g., case series). In these studies, only one group is examined, with no control group against which to compare the results. Important information may be gained, but outcomes are difficult to truly understand without comparison to similar patients who received a different therapy.
- Level of Evidence 5. Studies not directly related to the specific patient/population (e.g., a different patient/population, animal models, or mechanical models). These studies are common in EMS and are frequently used to evaluate prehospital treatments. Unfortunately, their data are prone to a wide range of interpretation, as we must make assumptions that what works in different populations or under different circumstances would work in the world of EMS.
Questions to Ask
Regardless of the type of study you are reading, you should always review research in a way that helps you identify bias or flaws in the methodology. There is certainly a great deal more to learn about evaluating medical research, but there are some important questions to ask of any study you read:
1. Was the study randomized, and was the randomization blinded?
2. If more than one group was reviewed, were the groups similar at the start of the trial?
3. Were all eligible patients analyzed, and if any were excluded, why? Bias often occurs when data that point away from the hypothesis are removed, so examining which patients were excluded from a study can reveal potential problems.
4. Were the outcomes really the result of the therapy? Consider the previously discussed example of the new airway device being used only in sicker patients. Occasionally, outcomes are measured that would have happened anyway. For example, a company could invent a new device that, it claims, will make the sun rise tomorrow at 6 a.m. The company could certainly produce a study demonstrating the desired outcome, but that outcome would have occurred whether the device was used or not. A powerful study is one that can be reproduced with the same results under different circumstances.
5. Is the outcome truly relevant? Many studies show differences between treatments but no real clinical relevance. For example, a study might show that a new medication increases the return of spontaneous circulation (ROSC) in sudden cardiac arrest compared with a placebo control group. Getting a pulse back in more patients is important, but that result is not really relevant if exactly the same number of patients die at the conclusion of care as in the control group (a brief sketch after this list illustrates the point).
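To illustrate question 5, here is a tiny sketch with entirely made-up counts: the hypothetical drug clearly improves ROSC, yet survival to discharge, the outcome that matters most to patients, is unchanged.

```python
# Invented counts for illustration only; not data from any real trial.
groups = {
    "new drug": {"patients": 200, "rosc": 70, "survived_to_discharge": 12},
    "placebo":  {"patients": 200, "rosc": 50, "survived_to_discharge": 12},
}

for name, g in groups.items():
    rosc_rate = 100 * g["rosc"] / g["patients"]
    survival_rate = 100 * g["survived_to_discharge"] / g["patients"]
    print(f"{name:>8}: ROSC {rosc_rate:.0f}%, survival to discharge {survival_rate:.0f}%")
```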
Conclusion
Learning more about this topic will help you, as a provider, understand the decisions and discussions that are ongoing both in EMS and in healthcare in general. Classes, textbooks and many other resources can improve your ability to read and evaluate research. However, there is no better way to learn about research than to become involved in a research study.
Daniel Batsie, EMT-P, is the EMS program coordinator at Eastern Maine Community College in Bangor, ME, and clinical and education coordinator for Atlantic Partners Regional EMS.
Joseph J. Mistovich, MEd, NREMT-P, is chair of the Department of Health Professions and a professor at Youngstown State University in Youngstown, OH. He has more than 25 years of experience as an educator in emergency medical services. He is an author or coauthor of numerous EMS books and journal articles and is a frequent presenter at national and state EMS conferences.
Daniel Limmer, AS, EMT-P, has been involved in EMS for 31 years. He is active as a paramedic with Kennebunk Fire-Rescue in Kennebunk, ME. A passionate educator, Dan teaches basic, advanced and continuing education EMS courses throughout Maine.