Executives Should Scrutinize Research Data Like Their Financial Data
Successful executives understand their company’s financial statements. If not trained in accounting, executives take courses in finance or engage in mentoring to become confident speaking to any aspect of the balance sheet, income statement, or cash flow statement. A healthcare executive should have a similar comfort level with research studies and statistics. It is part of being data-driven.
Behavioral healthcare has a complex body of research supporting its value. Yet empirical evidence is easily politicized, and leaders must be ready to combat data distortion and misuse. They should be able to discern the quality of research findings as routinely as they do the quality of a company’s earnings.
Many practitioners and executives in our field have a marginal understanding of statistics. Executives often compensate by consulting with clinical experts on their team, but those discussions pale in comparison with their collaboration with the CFO. It may be unrealistic to expect the same standards given that financial data represents the vitality of a company, but research data can be equally critical in setting a company's direction.
Research Literacy
The goal should be basic research literacy: understanding common types of studies and data analyses, findings related to core products and services, and salient misunderstandings related to data. Executives should also grasp that research has value at both the population and the patient level.
Executives value brevity, and so this education might start with a few nuggets from decades of research. Each should aid comprehension of research design and statistics. A few examples from psychotherapy research clarify this:
- Results vary for different problems and therapies, but in general therapy is remarkably effective
- Cognitive behavioral therapy (CBT) is great, but it is not the greatest (i.e., many therapies have great results)
- The individual therapist drives clinical outcomes more than the specific techniques used
- Empirically validated therapies do not work equally well with every individual
- Without randomized study designs, we cannot know whether the people studied are atypical
- Early adopters of new or innovative clinical services may be unusually motivated to get well
Literate executives understand how randomized, controlled studies differ from naturalistic ones. They know what basic statistical tests can and cannot show. They are familiar with advanced techniques like meta-analysis for summarizing findings across multiple studies. Statistical formulas are not important. A key point is never to over-value a single study. Remember, CBT has great results, but no therapy has been found to be the best.
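The meta-analytic summarizing mentioned above has a simple core idea: pool each study's effect estimate, giving more precise studies more weight. A minimal sketch of fixed-effect, inverse-variance pooling follows; the effect sizes and standard errors are hypothetical, invented purely for illustration.

```python
# Fixed-effect meta-analysis via inverse-variance weighting: each study's
# effect estimate is weighted by its precision (1 / SE^2), so larger, more
# precise studies contribute more to the pooled result.
# The (effect size, standard error) pairs below are hypothetical.
studies = [
    (0.45, 0.10),
    (0.30, 0.20),
    (0.60, 0.15),
]
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
print(f"pooled effect = {pooled:.2f}")  # prints "pooled effect = 0.47"
```

Note how the pooled estimate sits closest to the most precise study (SE = 0.10), which is exactly why a single small study should never outweigh a well-conducted synthesis.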
Many successful clinicians and executives misunderstand one or more of these ideas. The consequences may not be immediate, but at some point, erroneous beliefs lead to poor decisions. There are many reasons distortions about empirical data take hold, but one bias needing correction is the belief that much of the subjectivity of our field defies rigorous scientific study.
Results actually highlight the importance of subjectivity. The research validates the centrality of therapeutic (i.e., healing) relationships and the importance of establishing a good fit with an empathic clinician. Empirically validated therapies do not work equally well with all individuals, and so we should be validating results for every client in keeping with the principles of measurement-based care.
Ask Not If It Works, But Rather How Well It Works
What does empirical validation mean? Stating that a treatment “works” simplifies much complexity. One must understand a study, its measures, and the statistics used to know the strength of any findings.
Passing a test of statistical significance does not guarantee that results are meaningful clinically. Some studies find only that a service meets a minimum threshold of effectiveness, while others quantify the degree of effectiveness more precisely. Sadly, many marginally effective remedies gain widespread use.
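The gap between statistical and clinical significance is easy to demonstrate with arithmetic. In the sketch below, a standardized mean difference of 0.05 (far below even the conventional "small" threshold of 0.2) still clears p < 0.05 simply because the sample is large; the numbers are hypothetical and the two-sample z-approximation is used only for illustration.

```python
from math import sqrt
from statistics import NormalDist

# A trivially small effect (Cohen's d = 0.05) compared across two groups
# of 10,000 each. With samples this large, even a negligible difference
# passes a conventional significance test.
d = 0.05                      # standardized mean difference (clinically negligible)
n_per_group = 10_000
z = d / sqrt(2 / n_per_group)  # two-sample z-statistic approximation
p = 2 * (1 - NormalDist().cdf(z))  # two-sided p-value
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 despite a negligible effect
```

The same effect with 50 subjects per group would be nowhere near significant, which is why a reported p-value means little without an accompanying effect size.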
Some of this complexity is reduced by a single statistic, the Number Needed to Treat, or NNT: the number of patients who must receive an intervention for one additional patient to benefit. For example, the NNT for psychotherapy is 3, while it is 12 for the flu vaccine. A lower number reflects greater effectiveness.
We should know more about a given service than its superiority to no care. We need studies that compare available treatments, as well as ways to boost effectiveness. For example, research shows we can monitor the response to psychotherapy, modify it as needed, and ultimately improve outcomes.
We can address similar questions at the program level. Executives should be considering ways to improve clinical results, much as they do financial results. Imagine how public esteem for our work might rise by reframing our product as our outcomes rather than our services.
Financial results are complex. Only some factors contributing to the bottom line are under management’s control. Yet every executive is expected to achieve planned results. Clinical results are quite similar.
Improved outcomes should be every company’s mission, and leaders should report these results annually. Without these numbers, we are only telling part of the story of our work.
Ed Jones, PhD is currently with ERJ Consulting, LLC and previously served as president at ValueOptions and chief clinical officer at PacifiCare Behavioral Health.
The views expressed in Perspectives are solely those of the author and do not necessarily reflect the views of Behavioral Healthcare Executive, the Psychiatry & Behavioral Health Learning Network, or other Network authors. Perspectives entries are not medical advice.