Journal Watch: A Scale for Dyspnea
Reviewed This Month
Predicting Outcome for Ambulance Patients With Dyspnea: A Prospective Cohort Study
Authors: Lindskou TA, Lübcke K, Kløjgaard TA, et al.
Published in: JACEP Open, 2020 Apr 1
We have likely all asked our patients to rate their pain on a scale from 0–10. Because we do not have a test that can objectively determine a patient’s level of pain, the scale is a useful alternative.
The severity of acute dyspnea can also be difficult to assess objectively. Some research has shown that respiratory rate and oxygen saturation do not correlate strongly with the perceived severity of a patient’s dyspnea. A small study in an Australian emergency department reported the successful use of a dyspnea scale that asked patients to rate their level of dyspnea from 0–10. However, data validating this scale or describing the association between dyspnea scores and patient outcomes are limited.
The objective of the study we review this month was to “validate the discrimination and classification accuracy of an acute dyspnea scale for identifying outcomes among ambulance patients with acute dyspnea.” This was a prospective cohort study in northern Denmark. Denmark staffs its ambulances with two EMS providers whose skill level is comparable to that of EMTs in the USA.
Denmark was a great place to complete this study because all citizens have a “personal civil registration number.” All healthcare contacts are registered in Denmark’s National Patient Registry using this number. While the patient registry contains information about diagnoses, treatments, and procedures, it does not include mortality data. However, Denmark’s data collection systems and registries are all linkable by the personal civil registration number, enabling assessment of the entire continuum of care. Any researcher in the USA would be envious of this level of data linkage. Further, Denmark uses a single, uniform out-of-hospital medical record for the entire country.
The study period ran from July 1, 2017 to September 14, 2019. All adult prehospital patients cared for in the country’s northern region were included if dyspnea was the main symptom identified by either the dispatcher or the EMS provider. Patients were asked, “On a scale from 0–10, where 0 is no difficulty breathing and 10 is the worst possible breathlessness imaginable, how are you experiencing your breathing now?” If a patient was unable to report a dyspnea score, the EMS providers recorded that fact and attributed it to either an “acute medical situation” or “other reasons,” such as a language barrier.
Because inclusion could be based on dyspnea identified during the emergency call, the authors recognized that some included patients might not actually have had acute dyspnea. Therefore, patients with an initial score of 0 were also excluded. Dyspnea scores were recorded at first and last patient contact, and all scores were time-stamped.
The main outcome of interest was hospitalization for two days or more; the authors indicated they chose this threshold to exclude patients who required only brief treatment. They also assessed two secondary outcomes: admission to an ICU within 48 hours of EMS transport and mortality within 30 days.
Let’s just pause a minute to take note that in Denmark, they can assess these outcomes for their entire population (more than five million citizens)! If we could do this in even one large EMS system, it would be very impressive.
Providers obtained vital signs, including respiratory rate, blood oxygen saturation, blood pressure, and heart rate, and those recorded closest to the time of the acute dyspnea score were included in the analysis.
To assess whether the acute dyspnea scale score contributed to the prediction of outcomes (in other words, to see if the score mattered with respect to the outcomes of interest), the authors built logistic regression models with all variables and with all variables except the dyspnea score. They compared the models using receiver operating characteristic (ROC) curves and the area under the curve, or AUC. To interpret AUC, you basically need to know that a model whose predictions are 100% wrong has an AUC of 0.0, one that is no better than chance scores 0.5, and one whose predictions are 100% correct has an AUC of 1.0, so the higher the better.
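To make that comparison concrete, here is a minimal sketch of this kind of analysis in Python with scikit-learn. The data file and column names (dyspnea_score, the vital-sign variables, admitted_2_days) are hypothetical stand-ins, not the study’s actual variables or code; the point is simply that you fit one logistic regression with the dyspnea score and one without, then compare the AUCs.

# Illustrative sketch only (not the authors' code); all column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ambulance_runs.csv")  # hypothetical data set, one row per ambulance run
full_vars = ["dyspnea_score", "resp_rate", "spo2", "sys_bp", "heart_rate", "age"]
reduced_vars = [v for v in full_vars if v != "dyspnea_score"]
y = df["admitted_2_days"]  # 1 = hospitalized for two days or more, 0 = not

for label, cols in [("with dyspnea score", full_vars), ("without dyspnea score", reduced_vars)]:
    model = LogisticRegression(max_iter=1000).fit(df[cols], y)
    predicted = model.predict_proba(df[cols])[:, 1]  # predicted probability of admission
    print(f"AUC {label}: {roc_auc_score(y, predicted):.2f}")  # in-sample (apparent) AUC

If the two AUCs are essentially identical, the score is not adding predictive information beyond the other variables, which is exactly the pattern the authors report below.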
Results
During the study period there were 116,409 ambulance runs, of which 4,261 involved patients with dyspnea as the main symptom. There were 897 (21%) patients who were unable to provide a dyspnea score and 220 (5%) with an initial score of 0, leaving a final analytical data set of 3,144 ambulance runs. There were 1,966 (63%) patients admitted to the hospital for at least two days, 164 (5%) were admitted to the ICU within 48 hours, and 224 (9%) died within 30 days.
After analyzing their data, the authors found that on its own the acute dyspnea scale score showed poor accuracy and discrimination for predicting hospitalization, intensive care unit admission, and mortality. The AUC for hospitalization was unchanged when the dyspnea score was removed from the logistic regression model (AUC 0.71; 95% CI, 0.69–0.73), meaning the score did not contribute to predicting a hospital stay of two days or more. Similarly, the AUC for 30-day mortality was unchanged when the dyspnea scale score was removed from the model (AUC 0.71; 95% CI, 0.67–0.75). For ICU admission within 48 hours, the AUC decreased from 0.73 to 0.70 when the dyspnea score was removed; however, the confidence intervals overlapped, meaning there was no statistically significant difference.
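The manuscript describes how those confidence intervals were derived; one common, general-purpose way to put an interval around an AUC is bootstrapping, which resamples the ambulance runs many times and recomputes the AUC each time. Purely as an illustration of the idea (not necessarily the authors’ method), a sketch might look like this:

# Illustrative only: a bootstrap 95% confidence interval for an AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_prob, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample runs with replacement
        if y_true[idx].min() == y_true[idx].max():       # skip resamples missing a class
            continue
        aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
    return np.percentile(aucs, [2.5, 97.5])              # lower and upper 95% bounds

Run on the predicted probabilities from each model, this yields intervals like the 0.69–0.73 reported for hospitalization; when the intervals for the two models overlap, a small gap such as 0.73 versus 0.70 can plausibly be explained by sampling variability alone.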
Conclusion
I hope you have an opportunity to read this manuscript in its entirety. This was a well-done study with some sophisticated statistical analyses, and its tables and figures really help clarify the analysis and results. One of the things you likely noticed is that the dyspnea scale did not add predictive value. It is rare but extremely important for studies like these to be published. Often we only read about the things that worked; it is valuable to know what doesn’t work, too!
Antonio R. Fernandez, PhD, NRP, FAHA, is a research scientist at ESO and an assistant professor in the department of emergency medicine at the University of North Carolina–Chapel Hill. He is on the board of advisors of the Prehospital Care Research Forum at UCLA.