Evidence-Based Practices: Promotion or Performance?

“Evidence-based practices” is the current buzzword in the behavioral health field. Without question, the development and promotion of effective prevention and clinical treatment services are worthy goals. However, we also must be vigilant that those programs and services are at least as effective as the programs they might replace, and that the evidence in “evidence based” supports real-world outcomes.

In the addiction field, both treatment and prevention efforts have seen examples of promotion triumphing over performance. Notable examples of this in the area of prevention are the Drug Abuse Resistance Education (D.A.R.E.) program and the drug testing of high school students. The independent evaluations of D.A.R.E. published to date have indicated that the program has no demonstrable effects on youth substance use. D.A.R.E., as developed by the Los Angeles Police Department, may provide excellent public relations opportunities for law enforcement agencies, but should not be promoted as a prevention program to reduce substance use. The fact that these programs have proliferated in the absence of documented performance, and continue to be actively promoted and defended despite the lack of meaningful evidence, defies logic and sound public policy.

Numerous prevention efforts have been developed with sound empirical support for their effectiveness in reducing substance use among youth. A guide listing approximately 20 evidence-based programs covering grades 1 through 12 can be found at https://www.drugabuse.gov/pdf/prevention/RedBook.pdf. Federal agencies with information on prevention programs include the National Institute on Alcohol Abuse and Alcoholism (https://www.niaaa.nih.gov) and the National Institute on Drug Abuse (https://www.nida.nih.gov).

Drug testing students is another potential boondoggle. Many schools limit drug testing to athletes and other students who participate in extracurricular activities. How is that for seeking out troubled and high-risk youths? One school district spent $5,000 on drug testing to identify four youths who were positive for marijuana, or $1,250 per positive result. This school year it budgeted $16,000 more for drug testing the lowest-risk groups in the school system: athletes and students active in other extracurricular activities.

The fallacious assumptions behind drug testing programs are that testing relatively low-risk students will deter use and will get help to those who need it. Although 18% of high schools drug test students, one independent study found the prevalence of use in schools that test to be almost identical to that in schools that don't. It would seem more logical to test athletes for performance-enhancing drugs, and to test students with unexcused absences for marijuana use. Drug testing also does not address alcohol, the most prevalent substance of misuse. According to the annual Monitoring the Future survey of students, underage drinkers drink more heavily when they drink than adults do.

While school officials belabor the point that drug testing is not meant to be punitive, schools appear to do a poor job of facilitating referrals to treatment. Recent data from the Treatment Episode Data Set (TEDS) reveal that schools account for only 11% of treatment referrals for youth under the age of 18, as compared with the 52% that come from the criminal justice system.

The great harms from prevention programs that do not work are that they waste resources that could be used for more effective programs, and they create the impression that something significant has been achieved when it has not. Turning back to drug testing, I would suggest that hiring a part-time counselor, or contracting with a community agency or addiction professional, to interview students with multiple unexcused absences, certain behavior problems, or dropping grades would be a more cost-effective means of identifying and helping students with substance-related problems than drug testing students in extracurricular activities.

Treatment examples

The treatment field also has seen its share of promotion versus performance cases. Having founded and directed an independent evaluation system for 15 years, I have seen any number of programs that purported to have remarkable outcomes. In some cases the numbers appeared to be made up or someone's best guess. In other cases, a closer look at the “evidence” revealed serious problems in the methodology. In one memorable case, the “success rate” went from 80% to 4% when one went from outcomes calculated only on those completing 18 months of aftercare to outcomes for all individuals who had entered the program.
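To see how the choice of denominator alone can drive a swing of that magnitude, consider a minimal sketch in Python. Only the two rates (80% and 4%) come from the case described above; the cohort and completion counts are hypothetical numbers chosen to reproduce them.

```python
# Hypothetical cohort sizes chosen only to reproduce the 80%-to-4% swing;
# the actual program's counts were not reported.
entered = 1000    # all individuals who entered the program
completed = 50    # those who finished 18 months of aftercare
successes = 40    # successful outcomes, all among the completers

completer_rate = successes / completed   # 40 / 50   = 0.80
all_entrants_rate = successes / entered  # 40 / 1000 = 0.04

print(f"Completer-only 'success rate': {completer_rate:.0%}")    # 80%
print(f"Rate over all who entered:     {all_entrants_rate:.0%}") # 4%
```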

Obviously, we do need evidence-based practices to counter the promotion-performance issue, but not all evidence is equal, nor is all evidence relevant. Several recent articles on arbitrary metrics point this out.1,2 An arbitrary metric is a measure that may be reliable and scientifically valid, but whose scores bear no known relationship to real-world status. In the context of this discussion, an arbitrary metric would be an outcome measure that may or may not be indicative of recovery or wellness. For example, if an individual's Beck Depression Inventory or Hamilton Depression Rating Scale score drops five points, is the individual no longer clinically depressed? Maybe, and maybe not.

One of the most frequent measures used to demonstrate that an addiction treatment program or model is evidence-based is whether the average number of days of use in the past 30 has decreased. The 30-day interval may be measured as soon as three months after treatment or after a year or more. But is days of use an arbitrary metric?

Consider the following hypothetical case. Programs A and B each treat 100 substance-dependent individuals. The average days of use out of 30 before treatment was 25 for each group. At one year, the average days of use for Program A was 8, and the average days of use for Program B was 10. Which is the more effective program?

Let's look more closely at the circumstances of the outcomes to find an answer. All of the individuals treated in Program A are still using, but the average number of days of use has decreased so that most use occurs on weekends. All 100 in Program A are still experiencing consequences of use, so they still qualify for a current diagnosis of dependence. In Program B, 55 individuals have maintained abstinence since entering treatment and report positive recovery. The other 45 are using for an average of about 23 days in the past 30, resulting in an overall average of about 10 days of use. Which program would you consider evidence-based, given this information? To which program would you refer a family member?
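The arithmetic behind the example is worth making explicit. The sketch below encodes the two hypothetical caseloads essentially as described (Program A simplified to exactly 8 days of use per person) and shows how similar averages can conceal radically different distributions:

```python
# Hypothetical caseloads from the example above. Program A is simplified
# to exactly 8 days of use per person; Program B is as stated
# (55 abstinent, 45 using about 23 of the past 30 days).
program_a = [8] * 100
program_b = [0] * 55 + [23] * 45

mean_a = sum(program_a) / len(program_a)  # 8.0 days
mean_b = sum(program_b) / len(program_b)  # 10.35 days, reported as ~10

abstinent_b = program_b.count(0)          # 55 of 100; zero in Program A

print(f"Program A: mean {mean_a:.2f} days, 0 abstinent")
print(f"Program B: mean {mean_b:.2f} days, {abstinent_b} abstinent")
```

On the days-of-use metric alone, Program A looks slightly better; on any measure of recovery, Program B is dramatically better.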

Clearly, the measures used to determine whether a program is effective need to be relevant to life in the real world. Simple measures, or those that are merely easy to collect, may not meet the test of being nonarbitrary. The measures used to substantiate that a given treatment model or program is evidence-based should be able to show that the program makes a tangible difference in the lives of those treated and produces recovery or wellness.

Even with metrics that matter, simply looking at outcomes does not mean that programs are performing at an equal level. To make appropriate comparisons, one must also know the case-mix of the treatment population. Oncology offers a clear example. For some types of cancer, a community hospital may have impressive recovery rates that are much better than those of some tertiary or teaching hospitals. This is not because the oncologists serving the community facility provide superior care. It is because the community hospital refers the more advanced and complicated cases to the tertiary and teaching facilities. Unless one knows the proportion of Stage 1 through Stage 4 cancer cases at each facility, it is impossible to judge the relative effectiveness of the treatments delivered.
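This case-mix effect is easy to demonstrate numerically. In the sketch below, all patient counts and per-stage recovery rates are invented for illustration; the teaching hospital is better at every stage yet posts the worse overall rate because it treats the harder cases.

```python
# Invented stage mixes and per-stage recovery rates, for illustration only.
# Each entry maps a cancer stage to (patients treated, recovery rate).
community = {"Stage 1": (90, 0.90), "Stage 4": (10, 0.20)}
teaching  = {"Stage 1": (20, 0.95), "Stage 4": (80, 0.25)}

def overall_rate(hospital):
    recovered = sum(n * rate for n, rate in hospital.values())
    treated = sum(n for n, _ in hospital.values())
    return recovered / treated

print(f"Community hospital overall: {overall_rate(community):.0%}")  # 83%
print(f"Teaching hospital overall:  {overall_rate(teaching):.0%}")   # 39%
# Yet within every stage, the teaching hospital's rate is higher.
```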

Evaluation needs

The emphasis on evidence-based practices constitutes a positive step toward overcoming the promotion-versus-performance conundrum. However, the potential for misunderstanding and misuse of the evidence-based construct is almost as great as its promise. Randomized clinical trials may not be representative of patients treated in routine practice because of the exclusion criteria in such studies. Also, outcome measures from such studies may not always reflect real-world differences.

Perhaps the best approach is to make routine evaluation of outcomes part of the service delivery continuum. Given that substance dependence is a chronic condition requiring some duration of contact with service systems, monitoring outcomes while individuals remain in contact with those systems seems a logical approach. Such evaluations could inform treatment planning, reflect real-world outcomes, and be done at minimal cost. Tracking individuals for interviews after completion of services, or mounting other more extensive research evaluations, is beyond what is practical for routine evaluation and could be reserved for formal research efforts.

For prevention, routine surveys and other measures might be equally appropriate. They would cost no more than the various drug testing regimens now being implemented in schools. Other readily available measures from school or public records, such as the prevalence of disciplinary actions and underage drinking arrests, also might provide indications of current levels of use and problems among youth populations.

As with any complex problem area, there are no good, simple solutions. Those that are simple don't work, and those that are good and do work are not simple.

Norman G. Hoffmann, PhD, a clinical psychologist, has been evaluating treatment effectiveness and developing a broad range of practical clinical assessment instruments during the past 25 years. He is president of Evince Clinical Assessments. He co-authored an article on proper use of clinical instruments in the September 2005 issue.

References

  1. Blanton H, Jaccard J. Arbitrary metrics in psychology. American Psychologist 2006; 61:27-41.
  2. Kazdin AE. Arbitrary metrics: Implications for identifying evidence-based treatments. American Psychologist 2006; 61:42-49.
