
Outcomes: Developing Better Pathways to Quality Care

Driven by the increasing need to demonstrate program efficacy under the Affordable Care Act (ACA) and parity legislation, many addiction treatment programs have developed a renewed interest in collecting outcomes. There has been no lack of scholarly research emanating from prestigious universities such as Rutgers, Brown, Yale, and others. As early as 1970 the Drug Abuse Treatment Information Project (DATIP) began collecting outcomes on some of the very early treatment programs in the United States (Nash, 1973). In 1972, a national evaluation of treatment effectiveness based on outcomes, the Drug Abuse Reporting Program (DARP), developed extensive outcome data along with posttreatment follow-up studies on recovery evidence (Simpson & Sells, 1982). Norman Hoffmann, PhD, developed the Comprehensive Assessment and Treatment Outcome Research (CATOR) system in 1980, with aggregate databases totaling over seventy-five thousand adults and eleven thousand adolescents. Project MATCH began in 1989 in the United States and was sponsored by the National Institute on Alcohol Abuse and Alcoholism (NIAAA). The project was an eight-year, multisite, $27 million investigation that studied which types of alcoholics respond best to which forms of treatment. MATCH examined whether treatment should be uniform or assigned to patients based on specific needs and characteristics. The programs were administered by psychotherapists and, although Twelve Step methods were incorporated into the therapy, actual Alcoholics Anonymous (AA) meetings were not included (“Matching alcoholism,” 1998).

 

Through several generations, outcomes research in addiction has continued to progress, with perhaps the Addiction Severity Index (ASI) being the most widely recognized instrument (McLellan, Luborsky, Woody, & O’Brien, 1980). With so many volumes of data collected over more than forty years, one is hard-pressed to cite outcomes research that has actually informed treatment. Clearly, treatment for addiction has evolved very slowly compared to the advances in medical science, as many programs cling to theory and practice that continue to be unsupported by science.

 

The increasing practice of medication-assisted therapy (MAT) for substance use disorders (SUDs), supported by extensive data, has set a high research standard for abstinence-based treatment programs. The Substance Abuse and Mental Health Services Administration describes MAT as the use of medications, in combination with counseling and behavioral therapies, to provide a whole-patient approach to the treatment of substance use disorders (SAMHSA, n.d.). MAT, although not without controversy, has taken a firm hold on the practice of addiction treatment. Methadone, one major example of early MAT, is a synthetic opioid agonist used medically as an analgesic and as a maintenance treatment for patients with opioid dependence. It was introduced into the United States in 1947 by Eli Lilly and Company (Chen, Hedegaard, & Warner, 2014). It is beneficial for those in the abstinence-based treatment arena to become highly familiar with the modern MAT therapies now available, as they continue to play a significant role in the treatment of SUDs, with new drugs becoming increasingly available.

 

Historically, outcomes have been developed by some treatment programs for marketing purposes, with greater focus on presenting positive outcomes for promotion than on improving care. Oftentimes outcomes have been generated using questionable scientific approaches and a biased lens. In addition, outcomes among treatment programs are rarely comparable. This presents a real challenge for consumers seeking treatment and for payers, who are increasingly reluctant to support treatment when valid outcomes are not available. The result actually devalues treatment rather than supporting it. In medically related fields, outcomes developed with scientific rigor have served as scholarly research that informs practice, while in addiction treatment outcomes have taken on many different meanings. Treatment for SUDs, often described as a life-threatening disease, has been driven more by intuition than science. It is often routinized as clients are exposed to rigid, program-oriented treatment models. A consumer often knows more about an appliance they are purchasing than about a treatment center they are choosing for a loved one.

 

Addiction treatment providers rarely utilize scientific methods supported by credible data when relating outcomes to practice or when incorporating theoretical approaches into treatment. Universal standards for reporting outcomes would benefit the consumer, the program, and the treatment community. Terms like “outcome-driven treatment” and “outcome-based care” have now become part of the jargon heard among drug and alcohol treatment providers. It is critical to view outcomes as important both to demonstrate the value of treatment and to inform the treatment process. Even when outcomes are collected with rigor, an emphasis on outcomes alone as a way of promoting and defining treatment has limitations. There are several issues that should be considered when outcomes are utilized to drive quality care, and this article will examine some of them.

 

The Value of Outcomes  

 

Outcomes, while an important component in demonstrating program efficacy, do not stand alone in driving quality care. When combined with a strong, research-informed clinical practice, outcomes become the missing ingredient for developing superior treatment programs, which represent tier one treatment. The tier one level of care is where programs are driven by research, outcomes, aggregated client data, client feedback, context, case management, and spirituality (Lynn, 2013). Outcomes are increasingly valuable when aggregated client data becomes part of the mix driving client care. Add strong client feedback and the formula for clinical programs to evolve becomes even more potent. A client’s perception of treatment is often cited as a primary variable in the success of treatment (Zhang, Gerstein, & Friedman, 2008). Programs must be cautious that in their zeal to collect outcomes to drive care they do not fall into the trap of using a narrow lens to describe a multifaceted system.

 

The Importance of Research  

 

Programs must have a strong, science-supported theoretical and practice base before offering treatment. Some treatment programs assert that they practice evidence-supported treatment based on the inclusion of psychological theory and practice that has undergone scientific scrutiny. Often the inclusion of these therapies in the program has little relationship to outcomes or the overall treatment modality. Cognitive behavioral therapy (CBT), for example, is a very popular clinical practice in treatment programs. Since evidence is available that supports the use of this practice for several disorders, some programs then take the quantum leap of asserting that they are evidence-based because they include some theory and practice that is supported by data. More accurately stated, they are programs that have incorporated well-known psychological theory and practice into their treatment systems. The challenge is to determine how these therapies actually relate to treatment outcomes.

 

Case-Mix Considerations  

 

Case mix refers to the characteristics of cases served by a health service provider, where some clients are at greater “risk” of having less successful treatment outcomes than other clients. In drug abuse treatment, level of risk is commonly associated with addiction severity, but other factors may also be important, such as client demographics, socioeconomic status, and medical and social functioning (Koenig, Fields, Dall, Ameen, & Harwood, 2000).

 

 
One must be able to account for the differences in clients entering treatment, particularly as they relate to severity, to ensure the validity of the findings. Treatment providers often employ restrictive admission criteria, which also account for significant differences in clients among treatment centers. Case-mix issues are more often than not ignored by treatment centers, which results in either understated or overstated findings. If you are in need of a major operation, do you choose a hospital based on mortality rates or on the quality of staff and treatment? The best hospitals may have the highest mortality rates because they treat the patients with the highest severity.

 

Determining severity offers additional challenges. The DSM-5 includes severity specifier ratings which are determined by the number of symptoms present, indicating mild, moderate, or severe. Since the ratings are based on the number of symptoms and not the severity of the individual symptoms, these ratings actually reflect the complexity of the illness rather than the severity of symptoms (APA, 2013).

 

 
On the other hand, the ASAM Criteria defines severity in terms of risk. It is suggested that risk ratings across the six ASAM dimensions, along with an understanding of matching risk and severity with intensity of services, will facilitate a focus on individual patient needs (Mee-Lee, Shulman, Fishman, Gastfriend, & Miller, 2013). The DSM-5 and the ASAM Criteria offer a pathway to determine methods for including case-mix variations in treatment outcomes. Since severity is multifactorial (i.e., it includes such issues as trajectory of the illness, current symptoms, and diagnosis), it would be valuable to develop universal criteria that can be applied to outcomes.

 

 
Scientific Standards  

 

Challenges from sample selection to implied causality affect the validity of outcome data presented by many treatment programs. As an example, one program director I spoke with reported that his concierge treatment program had a 100 percent success rate. What he failed to mention was that this applied only to clients who completed the program and remained in AA for more than one year, a very small percentage of those who actually began treatment. Yes, if you torture data long enough you can get it to say what you want. At the same time, it is my impression that these less than scientifically useful results are driven more by scientific naiveté combined with the desire to promote the program than by any intent to be dishonest.

 

Rarity of Comparable or Generalizable Data  

 

A major challenge when determining the most appropriate treatment program for an individual is the lack of comparable data to rely on. Since the field has not embraced universal scientific standards for collecting and reporting data, individual program reports are often not very useful. This may also be true for some of the SAMHSA evidence-based models, which are arguably a significant step above care based on intuition and dogma; at the same time, questions of generalizability and a lack of individualized attention to care plague this approach. In addition, most programs only have access to their own data; repositories managed by independent posttreatment client advocacy agencies are extremely valuable in meeting this challenge. Further complicating the issue of comparability, some programs screen out likely failures through restrictive admission criteria, more often found in the public sector where waiting lists are pervasive, while some private centers tend to admit clients more liberally, not always fitting well within their scope of practice. Similarly, the criteria for program completion, or what some term “graduation,” vary greatly among treatment providers.

 

Chronic Illness  

 

Client engagement, acute care stabilization, and continuing care should all be part of outcomes-focused research. Too often studies, even those with scientific rigor, have only focused on one segment of the care continuum. 

 

Client engagement includes such issues as admission criteria, targeted population, and case mix. In conducting outcomes research, this phase of care has been frequently ignored. The result is data that has a strong bias from the very start—either reflecting inflated outcomes or poor outcomes which are mistakenly attributed to the treatment rather than the initial client population.

 

Since the majority of treatment for SUDs has been acute care stabilization, reported outcomes are often only related to this phase of care. It is similar to evaluating a diabetic’s initial medical intervention while ignoring his or her postacute care life. Acute care stabilization is what the majority of treatment programs offer during the early stages of care prior to what is sometimes mistakenly termed graduation. This phase often lasts thirty, sixty, or ninety days or more in the case of outpatient care. Again, this data alone is less than useful in evaluating long-term recovery. 

 

Another interesting issue is that even when a program reports favorable outcomes resulting from this phase of treatment, it is not able to document how different clinical and program practices actually impacted the treatment. I recall one treatment program that was actually a sheep farm where residents spent their days caring for the flock. In the evening they attended AA meetings and lectures in addition to group and individual counseling. Researchers found that the outcomes from this program were similar to those of other inpatient programs in the same state. What they could not report was what relationship living on a farm had to the outcomes, if any. The expectation that acute care stabilization could arrest a chronic illness is naïve at best, as long-term care is often required for sustained recovery.

 

While few would dispute that this is the most important phase of care, continuing care—which may also be described as “long-term recovery”—receives the least attention both clinically and from a research perspective. Researchers do query this population, but with a view toward the impact of acute care stabilization rather than long-term recovery. Proximal outcomes are often ignored by researchers who employ a myopic lens, looking only at abstinence. Many would agree that recovery, regardless of how one defines it, is multifactorial and includes a quality-of-life focus. Finally, continuing care research needs to include contextual issues that impact recovery. The common practice of sending clients back to the toxic environment from whence they came and expecting long-term recovery flies in the face of a chronic disease orientation.

 

The bottom line is that treatment for substance use disorders often requires a lifelong commitment, and it is incumbent upon researchers to use a broad lens when evaluating treatment. Studies need to be long term and focus on proximal outcomes in addition to variables such as drug-free days.

 

Research and Applied Outcomes  

 

There does not appear to be clarity regarding the application of outcomes to treatment. In other words, there is a need to create user-friendly pathways from outcomes to practice. For example, there are at least three possibilities of data combinations that are worth discussion: outcomes without research, outcomes with research, and research without outcomes.

 

Outcomes without Research  

 

These outcomes lack the rigor of a scientific paradigm and are the most prevalent in the field today. Although oftentimes based on observation and anecdotal data, they are not research. However, they can be very interesting and useful in understanding a treatment system. The real problem occurs when conclusions are drawn that are not supported by the data. This may occur due to a lack of scientific prowess; sometimes, in the zeal to promote the program, statements are made without regard for the reliability or validity of the data. One might suggest that this has been a problem in the abstinence-based addiction community that has led to distrust of data in general. Since most people to whom these outcomes are marketed are not viewing them through a scientific lens, the result is a skeptical public and distrust by insurance companies and other third-party payers. How often do we read statements such as “Our success rate is 82 percent” without a clear explanation of sample selection or criteria for success? The appearance of a scientific methodology when the methods are weak can lead to more harm than good.

 

This is not to suggest that all outcome data needs to meet the standards of rigorous science. However, all outcome data should be presented with transparency, stating clearly the limitations based on the methods in which the outcomes were assembled. 
 

 

Valuable treatment theory and practice have been derived from anecdotal reports, clinical observation, and data repositories. These observations have been the catalyst for understanding treatment and encouraging scientific exploration of the same. Unfortunately, data collected without rigor that becomes the rationale for theory and practice is a formula for disaster and even malpractice. Many syndromes, and even books, have been based on this type of data. Anecdotal data, testimonials, and other such information can be engaging and may help to describe the treatment system, but they should not be the sole basis for developing clinical practice. To offer a level playing field, one should consider the plethora of practice protocols in counseling and psychology supported only by the need to rationalize a clinical approach. Clinical practice that lacks scientific rigor—or, for that matter, the lack of scientific data in the DSM-5—reflects similar challenges in the psychological and psychiatric communities. Finally, very few of us would like our surgeon to perform a lifesaving operation based on these types of outcomes.

 

 
Outcomes with Research  

 

Outcomes with research, or outcomes which are developed within a research paradigm, are best suited to inform care. The gold standard of a double-blind randomized methodology is not required to render the data useful; most important is that whatever standard is used is driven by scientific rigor. It is this type of outcomes data that may inform care, create new or improved practice protocols or reject worn systems as being ineffectual. This data can help a program to evolve and continually update theory and practice. Similarly, this information can be cost effective in better utilizing finite resources and producing improved treatment results. Unfortunately, some treatment programs shy away from this type of investigation for fear of not meeting perceived treatment outcome goals. A shift in perception from rating programs to using data to evolve treatment systems may alleviate some of these concerns and barriers to research.

 

 
One should understand the distinction between evidence-informed care and evidence-based treatment as presented by SAMHSA. Evidence-based models have been defined by SAMHSA and the National Registry of Evidence-based Programs and Practices (NREPP) as interventions that meet a specific review process established by SAMHSA (SAMHSA, 2007). Further, 

 

To have an intervention listed in NREPP, the intervention’s developer submits required information about the intervention for expert review. Experts then rate the intervention on the quality of research supporting specific intervention outcomes, and on the availability of implementation resources to translate the scientific findings into routine practice. All NREPP reviewers are recruited, selected, and approved by SAMHSA based on their experience and areas of expertise (SAMHSA, 2007).

 

These approaches, rather than resulting in best practice guidelines for treatment, run the risk of producing competing models. Today the focus on evidence-based models has not provided a clear pathway for either promoting or informing treatment.

 

Evidence-based practice (EBP) should not be confused with SAMHSA’s evidence-based treatment. The Institute of Medicine (IOM) defined evidence-based medicine as “the integration of best research evidence with clinical expertise and patient values” (APA, 2006). An American Psychological Association (APA) task force, beginning with this foundation and expanding it to mental health, defined evidence-based practice as “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences,” a much more progressive approach to outcomes (APA, 2006).

 

Evidence-informed practice is the level at which a program should provide treatment, or at least strive to move in this direction. Outcomes couched in a research paradigm should be ongoing and can be built into many modern electronic health record (EHR) systems. Even outcomes collected within a scientific paradigm can be compromised when there is little evidence that the reported outcomes are actually driven by the treatment. As Alan E. Kazdin suggests,

 

Demonstrating a causal relation does not necessarily provide the construct to explain why the relation was obtained. The treatment may have caused the change, but we do not know whether the change can be attributed to specific or conceptually hypothesized components of treatment (e.g., cognitive restructuring, habituation, stress reduction, mobilization of hope) and how the change came about (2008).

 

Recognizing that data is always open to manipulation, one must use an educated lens in determining that which is applicable to a given care system. 

 

 
Research without Outcomes  

 

Simply stated, research without outcomes is research based on scientific rigor that adds value to the field and can inform practice. Epidemiological studies and research performed with scientific rigor, whether qualitative or quantitative, prospective or retrospective, are the basis that all treatment should rest upon. The challenge has been in bridging the gap between science and practice. The obstacles are based on several factors. Researchers, often working in academic ivory towers, have little incentive to help bridge the gap between research and practice. Even when these bridges are built by researchers, many treatment programs do not have staff who are able to apply the data to daily practice, and they also have little incentive to do so. Many programs receive their knowledge from conferences and association meetings that choose their presenters based on sponsorship over value to the treatment community (i.e., “pay to play”). The result is the perpetuation of pseudoscience-based promotion rather than actual science.

 

Research is what drives treatment. This does not imply that there is one best way to treat a diagnostic category, as there are often several pathways to producing quality outcomes. As long as the ensuing treatment can be supported by current research, all is fine. In fact it is beneficial to have choices in treatment, particularly when good case management is available for client advocacy. 

 

 
Meta-analysis is another method that supports the generalizability of data by comparing several studies using a wider lens. Meta-analysis refers to statistical methods for contrasting and combining results from different studies in the hope of identifying patterns among study results, sources of disagreement among those results, or other interesting relationships that may come to light in the context of multiple studies. Merriam-Webster defines meta-analysis as “a quantitative statistical analysis of several separate but similar experiments or studies” (2015).

 

 
A word of caution: some programs have taken a quantum leap by applying good research to inappropriate clinical models, which amounts to banging square pegs into round holes. This occurs when a program imports conclusions which are context specific and not necessarily related to the treatment being offered. 

 

The Next Step  

 

The abstinence-based addiction treatment field is at a critical juncture which may determine the survival of some programs. There are several steps an individual program can take to evolve treatment through the application of outcomes. Developing a relationship with a university that has a focus on addiction research would be a good beginning. This would facilitate the translation of theory into practice, often termed “bench to trench.” Independent researchers can help to develop a system of collecting outcomes with the rigor needed to present them with a degree of confidence. Another benefit of this relationship is that universities may develop methods of informing research by becoming involved with ongoing treatment, “trench to bench.” Addiction treatment providers must be educated about the relationship of research and outcome measures to practice, including methods of incorporating theoretical approaches to treatment supported by credible data.
 
 

 

Summary  

 

Outcomes are an important part of a multifaceted treatment planning process. They are one spoke on the wheel; research, clinical judgment, client feedback, clinical supervision, aggregated client data, context, spirituality, and case management are equally important in driving quality care. Treatment programs should adopt a recovery-oriented focus for their clients and the entire organization.

 

The integration of the above can create the basis for establishing a treatment program and evolving a superior care system. The major value of outcomes should be to inform care. Depending on the sophistication and technical base of one’s record system, this can be accomplished either in real time or, in the case of paper-focused record systems, with significant delay. In either case the knowledge that comes from outcomes may go far in both informing individual care and providing information for building a system based on research-supported best practice methods.

 

Big pharma has been highly focused on presenting quality data to support MAT, which may account for the high percentage of clients now being channeled in this direction. In addition, drug companies are held to a higher standard than abstinence-based programs when developing medically supported therapy. The role of MAT in relation to abstinence-based models leaves much room for additional research as both play a significant role in the care continuum, though abstinence-based programs lag far behind in demonstrating treatment efficacy based on credible outcomes.

 

The chronic nature of the disease has spurred the development of some high-quality, posttreatment relapse prevention agencies, such as MAP Accountability Services in Austin, Texas, which provides aftercare monitoring for several programs in addition to serving as a valuable data repository. The same degree of professional data collection and attention to standards should be employed from the first day of treatment.

 

This article is in part a call to the abstinence-based treatment community to come together in developing scientific methods for collecting and presenting outcome data as survival may depend on the same.

 

 

 

 

References  

 

American Psychological Association (APA). (2006). Evidence-based practice in psychology. Retrieved from http://www.apa.org/practice/resources/evidence/evidence-based-statement.pdf
 
American Psychiatric Association (APA). (2013). Desk reference to the diagnostic criteria from DSM-5. Washington, DC: Author. 
 
Chen, L. H., Hedegaard, H., & Warner, M. (2014). Drug-poisoning deaths involving opioid analgesics: United States, 1999–2011. Retrieved from http://www.cdc.gov/nchs/data/databriefs/db166.pdf
 
Kazdin, A. E. (2008). Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist, 63(3), 146–59. 
 
Koenig, L., Fields, E. L., Dall, T. M., Ameen, A. Z., & Harwood, H. J. (2000). Using case-mix adjustment methods to measure the effectiveness of substance abuse treatment: Three examples using client employment outcomes. Retrieved from http://www.lewin.com/~/media/Lewin/Site_Sections/Publications/1561.pdf
 
Lynn, B. (2013). Outcomes revisited. Counselor, 14(6), 66–9. 
 
“Matching alcoholism treatments to client heterogeneity: Project MATCH three-year drinking outcomes.” (1998). Alcoholism, Clinical & Experimental Research, 22(6), 1300–11. 
 
McLellan, A. T., Luborsky, L., Woody, G. E., & O’Brien, C. P. (1980). An improved diagnostic evaluation instrument for substance abuse patients: The Addiction Severity Index. Journal of Nervous & Mental Disease, 168(1), 26–33. 
 
Mee-Lee, D., Shulman, G. D., Fishman, M. J., Gastfriend, D. R., & Miller, M. M. (Eds.). (2013). The ASAM criteria: Treatment criteria for addictive, substance-related, and co-occurring conditions (3rd ed.). Carson City, NV: Change Companies. 
 
Merriam-Webster. (2015). Meta-analysis. Retrieved from http://www.merriam-webster.com/dictionary/meta-analysis
 
Nash, G. (1973). The impact of drug abuse treatment upon criminality: A look at nineteen programs. Retrieved from http://files.eric.ed.gov/fulltext/ED095463.pdf 
 
Simpson, D. D., & Sells, S. B. (1982). Effectiveness of treatment for drug abuse: An overview of the DARP research program. Advances in Alcohol & Substance Abuse, 2(1), 7–29. 
 
Substance Abuse and Mental Health Services Administration (SAMHSA). (n.d.). About medication-assisted treatment. Retrieved from http://www.dpt.samhsa.gov/patients/mat.aspx
 
Substance Abuse and Mental Health Services Administration (SAMHSA). (2007). SAMHSA launches searchable database of evidence-based practices in prevention and treatment of mental health and substance use disorders. Retrieved from http://www.nrepp.samhsa.gov/pdfs/press-release-2007-03-01.pdf
 
Zhang, Z., Gerstein, D. R., & Friedman, P. D. (2008). Patient satisfaction and sustained outcomes of drug abuse treatment. Journal of Health Psychology, 13(3), 388–400.