
Original Contribution

How GOOD Is That Data?

July 2007

Measuring EMS system performance is crucial to identifying trends in patient care that affect patient outcomes. But while collecting standardized data sets is important in all facets of healthcare, the field of prehospital care poses inherent difficulties. Defining obtainable data elements across multiple institutions (e.g., 9-1-1 centers, first-response agencies, EMS agencies and hospitals) can prove cumbersome. Such difficulties have hindered research that could have a meaningful impact on patient outcomes. Those involved in EMS research must work to establish sound methodologies, consolidate data from multiple sources and generate integrated feedback reports. These will be important steps in unlocking the gate to evidence-based medicine in prehospital care.

     Standardized research templates and metrics improve the rate and quality of data collection because they permit the creation of composite databases of information that's comparable from one healthcare system to another.1 EMS is no different--standardizing the data-collection process enables systems to analyze events both internally and across regions. EMS organizations and emergency physicians have begun to respond to the call for evidence-based medicine, and recent research initiatives have led to standardization of some EMS data collection.2 However, the difficulties inherent in EMS research can complicate such efforts. Examining these difficulties in the context of one of the most frequently studied prehospital events, cardiac arrest, helps illustrate the considerations that go into developing research methodology.

A Scenario
     Prehospital providers are dispatched for a pulseless and apneic 56-year-old male in cardiac arrest. Bystanders have initiated CPR. The first-responding fire agency arrives on scene and confirms the absence of spontaneous respirations and pulses. Firefighters apply an AED and assist ventilations with a bag-valve mask. EMS arrives with advanced life support interventions, and the patient is transported to the closest facility. He has a return of spontaneous circulation while in the care of EMS providers, then is turned over to the hospital staff for further care.

     In this case, neither the EMS record nor the dispatch record will indicate the patient's ultimate outcome. Despite the importance of prehospital care to this patient's outcome, this metric of successful response and intervention goes unmeasured. Because of disconnects between the agencies involved in such responses--i.e., EMS, hospitals and dispatch centers--the ability to consistently measure response intervals and collect meaningful outcome data is lost, or becomes too cumbersome to pursue efficiently. But before it's possible to piece these data elements together, a decision must be made on which data elements to collect.

Keeping It Simple
     Reporting prehospital cardiac arrest data in a standardized format was the goal of the Utstein style of data collection. The first version of Utstein, in 1991, required the measurement of several time intervals,2 including the interval from call reception to first defibrillation. However, capturing this data was not feasible because the clocks in 9-1-1 centers and the EMS monitors/defibrillators that delivered first shocks were not synchronized. Because a "universal stopwatch" was absent, this data element was deemed unobtainable, and the next version of Utstein excluded it.3 It was also determined that some time elements, such as time of EMS arrival to the patient, were inherently unclear because multiple caregivers may arrive at different times with different pieces of equipment.3
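
     To see why unsynchronized clocks undermine this interval, consider the following sketch. It is purely illustrative: the timestamps, the two-minute offset and the variable names are assumptions, not values drawn from any Utstein or NEMSIS specification.

```python
from datetime import datetime, timedelta

# Hypothetical timestamps recorded by two different, unsynchronized clocks.
call_received = datetime(2007, 7, 1, 14, 2, 10)   # 9-1-1 center CAD clock
first_shock = datetime(2007, 7, 1, 14, 8, 45)     # monitor/defibrillator clock

# Naive interval, computed as if both clocks agreed.
apparent = first_shock - call_received
print("Apparent call-to-shock interval:", apparent)   # 0:06:35

# If the defibrillator clock runs two minutes fast, the true interval differs
# substantially from the apparent one -- and nothing in either record reveals the drift.
defib_offset = timedelta(minutes=2)
print("Interval after correcting drift:", (first_shock - defib_offset) - call_received)   # 0:04:35
```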

     In addition to cardiac arrest, there are current measures in trauma research that are difficult to collect. The Golden Hour, for instance, is a commonly used interval for predicting outcomes in trauma patients. It begins at the time of injury and ends at the point of definitive care. It can be quite uncertain, though, when an injury actually occurred, depending on whether it was witnessed and how long bystanders took to activate EMS. Because of such imprecisions, a true Golden Hour may not be easily identified.

     Another trauma-related metric is the Glasgow Coma Scale (GCS) score, considered the gold standard for measuring neurological status.4 It is often not reported, however, because the scale is cumbersome to remember. As an alternative, researchers have tested a three-point GCS, which appears to be as effective and is more likely to be used in the field.5
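
     For context, the full GCS sums three observed components: eye opening (1-4), verbal response (1-5) and motor response (1-6), for a total of 3-15. A minimal sketch of that arithmetic, with a range check, shows one reason the scale is easy to misreport from memory; the function name and error handling are illustrative choices.

```python
def gcs_total(eye, verbal, motor):
    """Sum the three Glasgow Coma Scale components (total ranges 3-15).

    Component ranges: eye 1-4, verbal 1-5, motor 1-6.
    """
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component out of range")
    return eye + verbal + motor

# A completely unresponsive patient scores the minimum of 3, not 0 --
# a detail that is easy to get wrong when documenting from memory.
print(gcs_total(1, 1, 1))  # 3
```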

     As a rule, selecting which elements are obtainable in the field should be guided by the well-known "KISS" principle: keep it simple, stupid. This is achieved by collecting the bare minimum of data elements that will allow meaningful research conclusions. A project that overcomplicates its data template is likely to collect inconsistent or inaccurate data.

     In the scenario above, the long-term goal may be to synchronize the first-responder AED, EMS monitor/defibrillator and CAD systems to ensure the accuracy of recorded time intervals. But until systems achieve that level of synchronization, it may be difficult, if not impossible, to record the estimated collapse time of an unwitnessed arrest victim or the interval from the ambulance's arrival to the providers' arrival at the patient's side. Because these intervals cannot be obtained consistently, they should remain secondary, or supplemental, data elements that can be reviewed independently as case studies. Once obtainable data elements are selected, investigators must determine how to define them so they provide meaningful information.

Defining Elements
     Data elements must be defined in a way that answers pertinent questions.6 Standardizing these elements provides an effective and efficient means of doing so.

     The National EMS Information System (NEMSIS) is an effort that has been instrumental in developing methods for data collection.7 The goal of NEMSIS since its inception in 2001 has been to establish an electronic database for local, state, regional and national reporting of information from prehospital events to ultimately improve patient outcomes. In 2006, NEMSIS distributed the EMS Uniform Prehospital Data Set Version 2.2.1, which is an updated version of an earlier NHTSA data set developed based on a national consensus.8 This data set provides researchers with a template of standard questions and answers that will allow uniform reporting of EMS information in various scenarios, including prehospital cardiac arrests.

     Among these cardiac arrest elements is a description of the patient's first monitored rhythm (see Table 1).9 The NEMSIS data dictionary specifies that this field be completed only if the event is described as a cardiac arrest.10 In addition, the Utstein update explicitly describes this data field as the first rhythm following the onset of cardiac arrest.3 Since these data fields are specific to patients in cardiac arrest, one would expect the available options to include only rhythms pertinent to cardiac arrest. However, the field options also allow selection of non-arrest rhythms such as bradycardia and normal sinus rhythm. Returning to the scenario patient, we could adjust the event to consider a patient who was not found in arrest, but who arrested in the presence of EMS. In this case, EMS personnel may be confused by the inconsistencies between field options and definitions and be inclined to document, as the first monitored rhythm, the rhythm observed while the patient still had a perfusing pulse.

     In another case of misunderstanding, a perfusing rhythm, such as normal sinus rhythm or bradycardia, may be incorrectly selected if the rhythm during the cardiac arrest was actually pulseless electrical activity (PEA).

     Either case results in the loss of the desired, meaningful data element. These additional options, which are essentially not applicable to cardiac arrest events, invite discrepancies in data collection. EMS providers can misunderstand the definitions of data fields in the data dictionary, or the available options can lead them to submit data that is inconsistent with the intended definition.
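
     One way to catch such discrepancies is to validate records as they are submitted. The sketch below flags a perfusing rhythm documented as the first arrest rhythm; the rhythm lists and field names are illustrative assumptions, not the actual NEMSIS value set.

```python
# Illustrative value lists -- not the actual NEMSIS code set.
ARREST_RHYTHMS = {"ventricular fibrillation", "pulseless ventricular tachycardia",
                  "pulseless electrical activity", "asystole"}
PERFUSING_RHYTHMS = {"normal sinus rhythm", "bradycardia", "atrial fibrillation"}

def check_first_rhythm(is_cardiac_arrest, first_rhythm):
    """Return data-quality warnings for a single record."""
    warnings = []
    rhythm = first_rhythm.strip().lower()
    if is_cardiac_arrest and rhythm in PERFUSING_RHYTHMS:
        warnings.append("Perfusing rhythm documented as first arrest rhythm -- "
                        "rhythm captured before the arrest, or a miscoded PEA?")
    elif is_cardiac_arrest and rhythm not in ARREST_RHYTHMS:
        warnings.append("Unrecognized rhythm value: " + first_rhythm)
    return warnings

print(check_first_rhythm(True, "Normal Sinus Rhythm"))
```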

     Returning to trauma-related metrics, the GCS is difficult to define because of variations in how it is applied. In one example, two emergency physicians scored patients differently when their assessments were made within five minutes of one another.5 It may also be difficult to assess patients who have received pain medication.4 In addition, developmental considerations must be made for pediatric trauma patients. Although a pediatric version of the GCS is available, prehospital providers may not be aware of it or have a means to document and communicate its findings to the emergency department. Because individuals apply the GCS differently, and because GCS values may vary with medication or developmental factors, defining this trauma assessment tool has inherent difficulties that present research hurdles.

Consolidation
     The most daunting aspect of achieving meaningful EMS research is the consolidation of data across multiple providers, jurisdictions and agencies. In our cardiac arrest example, one can see why so few communities currently collect statistics about such events. When a cardiac arrest occurs, the system response is initiated by a 9-1-1 communications center. A record of the incoming call is entered into a CAD system organized and managed by a specific software program, which generates an incident number formatted in its own unique way. In many systems, however, when this information is forwarded to the responding EMS agency, a second CAD system accepts the call. Because the EMS CAD software is unlikely to be the same as the 9-1-1 CAD software, a new incident number is assigned to the same event. This number is formatted differently and cannot be linked across CAD software systems. The resulting lack of synchronization and feedback produces response-time intervals that may be inconsistent. The difficulty in linking the EMS event with the 9-1-1 times could potentially be overcome through active involvement among CAD vendors to allow a universal method of communication between software programs.11 This flexibility could attract EMS systems interested in collecting meaningful data on time-sensitive events such as cardiac arrests.
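
     Where incident numbers cannot be reconciled, records from the two CAD systems can sometimes be linked probabilistically instead. The sketch below matches on call time and a normalized address; the field names, record formats and five-minute window are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Hypothetical records for the same event from two CAD systems.
psap_record = {"incident": "2007-0712-00431",
               "received": datetime(2007, 7, 12, 9, 14, 2),
               "address": "123 PEACHTREE ST"}
ems_record = {"incident": "E070712-118",
              "dispatched": datetime(2007, 7, 12, 9, 15, 40),
              "address": "123 Peachtree St."}

def probable_match(psap, ems, window=timedelta(minutes=5)):
    """Link records by proximity in time and a normalized address,
    since the incident numbers themselves cannot be reconciled."""
    same_window = abs(ems["dispatched"] - psap["received"]) <= window
    norm = lambda a: "".join(ch for ch in a.lower() if ch.isalnum())
    return same_window and norm(psap["address"]) == norm(ems["address"])

print(probable_match(psap_record, ems_record))  # True
```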

     Similarly, studying cardiac arrest events requires knowing patient survival outcomes. Because successful cardiac arrest resuscitations, as related in the scenario, usually occur in the prehospital setting, EMS personnel play a crucial role. However, the standard measure of survival from prehospital cardiac arrest is neurological status at the time of discharge from the hospital. Additional obstacles include the inability of EMS personnel to wait at the hospital until the patient is discharged and the time-consuming process of EMS administrators obtaining hospital data. Since an EMS agency is rarely affiliated with all of its cardiac-arrest-receiving hospitals, communication must be established between the agency and these institutions. Beyond the additional time and effort involved in collecting this data, concern over the Health Insurance Portability and Accountability Act (HIPAA) has created another obstacle to data collection in prehospital research. Establishing a new unique identifier for each patient across every EMS and hospital agency would give any research coordinator headaches to last a lifetime. With the advent of advanced technology, patient information should be communicated between relevant agencies using software, e-mail and data import/export procedures that maximize security and are compliant with HIPAA.
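
     One approach, offered here only as an illustrative assumption about how such a linkage might work, is for the EMS agency and the hospital to derive a shared, de-identified key from fields both already hold, so outcome data can be matched without exchanging names or record numbers. The specific fields, the keyed hash and the shared secret below are illustrative choices, not a statement of what HIPAA requires.

```python
import hashlib
import hmac

def linkage_key(last_name, date_of_birth, incident_date, shared_secret):
    """Derive a de-identified linkage key from fields both agencies hold.
    The field choices and keyed-hash construction are illustrative only."""
    message = "|".join([last_name.strip().lower(), date_of_birth, incident_date]).encode()
    return hmac.new(shared_secret, message, hashlib.sha256).hexdigest()

# Both the EMS agency and the hospital compute the same key for the same patient,
# then match records on the key rather than on identifiers.
secret = b"exchanged out of band between agency and hospital"
print(linkage_key("Smith", "1951-03-14", "2007-07-12", secret))
```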

Getting It Right
     When all components of sound EMS research methods are present, the data comes together quite nicely. As shown in Figure 1, each component is critical to producing data, and subsequent feedback reports, that are concise and meaningful. Throughout the data-collection process, be aware of situations where certain values may be excluded based on circumstances (e.g., GCS in sedated patients) or where values are frequently miscoded (e.g., a perfusing rhythm entered as the first arrest rhythm). In such situations, inclusion/exclusion criteria must be reevaluated and/or data elements redefined.

     Last year's Emergency Medical Services at the Crossroads report by the Institute of Medicine mentioned the Cardiac Arrest Registry to Enhance Survival (CARES) program.12 CARES consolidates and reports essential data elements in a systematic and efficient manner, unifying data from various participants (bystander, CAD, first responder, EMS and hospital) to allow the comparison and assessment necessary to improve EMS system performance. As the data is collected, CARES provides feedback reports that describe meaningful metrics.

     Simply collecting data, even meaningful and well-defined data, does not complete the task; the ultimate purpose is to evaluate these metrics through benchmarking, both internally (comparing an agency's metrics over time) and externally (comparing one agency's metrics to those of another that is similar in composition, demographics and resources). Using a trauma example, GCS is used to predict patient outcomes.13 However, if no single method of applying the GCS is accepted as the standard, reports may be meaningful only internally, and external benchmarking and comparison become impossible.4,14
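
     A sketch of the distinction between the two kinds of benchmarking, using made-up survival-to-discharge counts rather than any real registry data: the same rate calculation supports an internal benchmark (one agency over time) and an external one (comparable agencies in the same year), but only if both agencies define the underlying elements the same way.

```python
# Illustrative counts only -- not real registry data.
survival_to_discharge = {
    ("Agency A", 2006): (12, 140),   # (survivors, treated arrests)
    ("Agency A", 2007): (18, 152),
    ("Agency B", 2007): (9, 149),
}

def rate(survivors, treated):
    return 100.0 * survivors / treated

# Internal benchmark: one agency compared with itself over time.
print("Agency A, 2006: %.1f%%" % rate(*survival_to_discharge[("Agency A", 2006)]))
print("Agency A, 2007: %.1f%%" % rate(*survival_to_discharge[("Agency A", 2007)]))

# External benchmark: comparable agencies in the same year -- meaningful only if
# both define "treated arrest" and "survival" identically.
print("Agency B, 2007: %.1f%%" % rate(*survival_to_discharge[("Agency B", 2007)]))
```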

Conclusion
     Research methods that account for the inherent difficulties of prehospital care will guide investigators away from key pitfalls. Defining data elements, collecting data that is obtainable, and compiling that data into a unified format to produce meaningful reports are a few of these important considerations. Only after developing research standards that address previously identified pitfalls can we facilitate the collection and analysis of meaningful data. Although surveillance of EMS systems is beginning to yield promising methods, the EMS research community should draw attention to the need for ongoing evaluation of data-collection plans and methods.

References

  1. Mears G, Ornato JP, Dawson DE. Emergency medical services information systems and a future EMS national database. Prehosp Emerg Care 6:123-30, 2002.
  2. Cummins RO. The Utstein style for uniform reporting of data from out-of-hospital cardiac arrest. Ann Emerg Med 22:37-40, 1993.
  3. Jacobs I, et al. Resuscitation outcome reports: Utstein templates update. Circulation 110:3385-97, 2004.
  4. Fischer J, Mathieson C. The history of the Glasgow Coma Scale: Implications for practice. Crit Care Nurs Q 23(4):52-58, 2001.
  5. Gill M, Steele R, Windemuth R, Green SM. A comparison of five simplified scales to the out-of-hospital Glasgow Coma Scale for the prediction of traumatic brain injury outcomes. Acad Emerg Med 13(9):968-73, 2006.
  6. Stokes A. Gold-plated standards. J Emerg Med Serv 31(8):20, 2006.
  7. National EMS Information System Technical Assistance Center, www.nemsis.org.
  8. National Highway Traffic Safety Administration Emergency Medical Services Program, www.nhtsa.dot.gov.
  9. National EMS Information System, NHTSA Uniform Pre-Hospital Emergency Medical Services Dataset Version 2.2.1: NEMSIS Data Elements and Values, www.nemsis.org/dataElements.
  10. National EMS Information System, NHTSA Uniform Pre-Hospital Emergency Medical Services Dataset Version 2.2.1: NEMSIS Data Dictionary, www.nemsis.org/dataElements.
  11. Stokes A. Reconciling fractured communications data. Emerg Med Serv 36(5):46-55, 2007.
  12. Institute of Medicine. Emergency Medical Services at the Crossroads. Washington, DC: The National Academies Press, 2006.
  13. Gill M, Windemuth R, Steele R, Green SM. A comparison of the Glasgow Coma Scale score to simplified alternative scores for the prediction of traumatic brain injury outcomes. Ann Emerg Med 45:37-42, 2005.
  14. Marion DW, Carlier PM. Problems with initial Glasgow Coma Scale assessment caused by prehospital treatment of patients with head injuries: Results of a national survey. J Trauma 36:89-95, 1994.

Allen Stokes, BSc, NREMT-P, is senior research project coordinator for prehospital and disaster medicine in the Department of Emergency Medicine at Emory University in Atlanta.

Bryan McNally, MD, MPH, is an assistant professor of emergency medicine and assistant medical director for Emory Flight in the Department of Emergency Medicine at Emory University in Atlanta, GA.
