After a decade of working with clients, I find myself asking some important questions. Why do I use the theories I use? What makes therapy effective? Why do some clients fail to meet treatment goals? And how do I practice science-based therapy? These questions are related, but it’s the last one I’ll focus on in this article.
Recently I prepared a marketing post outlining the biggest mistakes I made when launching my private practice. At the top of the list was “believing that everything my graduate program taught me was based on science and research.” I’ve learned a sobering truth in preparing this article: not all theories are evidence-based. And our culture associates evidence with safety. In fact, “clinicians are ethically obligated to provide safe treatments” (Williams et al., 2020). To be evidence-based means to be science-based. And I most certainly want to practice science-based, safe psychotherapy. I actually thought that’s what I was doing.
Science-based treatments must be empirically supported. To be empirically supported, a treatment must demonstrate superiority to another treatment or a placebo, or show equivalence to an already-established treatment (Wampold & Imel, 2015). So how do I practice science-based therapy? It’s pretty clear to me now that I must use research to make treatment decisions, along with understanding my clients’ treatment goals, cultural environment, available resources, and diagnostic assessment. I recently spoke with Alexander Williams, PhD, John Sakaluk, PhD, and Yevgeny Botanov, PhD, co-authors of articles in Scientific American and Aeon, as well as several research papers on “potentially harmful therapies” (PHTs) published in the Journal of Abnormal Psychology and Clinical Psychology: Science and Practice. From them I learned that the fastest route to keeping myself science-based is to ask, “How do I know what I know?” If the answer is any of the following, it’s probably not based on science:
- It’s what I’ve always done
- It’s what my professor taught me
- A lot of other therapists do it
- There are books about it
- Clients say it helped them
If I do nothing else to ensure that I am practicing science-based therapy, I can do my very best to remain open to new information (like the kind I’m sharing with you) and recognize my own inherent biases. Religiously adhering to a theory or model of therapy is not a science-based practice. It may be convenient. It may be comfortable. But that doesn’t make it scientific, or even ethical. To be truly evidence-based, a theoretical model or tool must have proven outcome measures, which requires at least two randomized controlled trials (RCTs) by at least two different research teams (Hupp & Santa Maria, 2023). That, I have learned, is very rare in mental healthcare. According to the work of Williams and Sakaluk, “50 percent of empirically supported treatments have subpar outcomes measures and more critically they lack statistical credibility” (Williams & Sakaluk, 2020). “In cases where randomized controlled experiments are not practical, researchers instead perform observational studies, in which they merely record data, rather than controlling it. The problem of such studies is that it is difficult to untangle the causal from the merely correlative” (Pearl et al., 2016). All we see in the training literature, however, is “based on research.” That phrase tells us almost nothing about the validity and scientific basis of our certifications.
RCTs are far more complicated in mental healthcare than in other scientific disciplines. For starters, many researchers exclude participants based on symptom severity. If I want to research a treatment for major depression, for example, I might recruit only individuals with mild or moderate depression because, quite simply, their symptoms are less likely to interfere with participation. This in and of itself prevents me from scientifically demonstrating the treatment’s effectiveness in the very population it’s designed to treat. Ideal conditions for RCTs are also very difficult to achieve in psychotherapy. Drug trials include a placebo and are “double-blind”: neither the researcher nor the participant knows which treatment is being given and received. In psychotherapy, the researcher or administrator cannot be “blind” to the treatment, since the therapist always knows which treatment is being delivered and often knows its intent. Additionally, “treatments most often are tested by the clinical scientists who developed them and are invested in establishing their efficacy and disseminating them to be widely used, not too different from efforts by pharmaceutical companies to have drugs approved for use” (Wampold & Imel, 2015). Researcher allegiance affects the very design of studies in mental healthcare. And just because a treatment shows superiority to another treatment does not mean that it is essential for the recovery or management of a specific disorder; demonstrating that would require an entirely different experimental design.
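To see how much that kind of exclusion can distort results, consider a small simulation. This is a hypothetical sketch, not data from any real trial: the severity scale, the population, and the assumption that the treatment helps milder cases more are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: symptom severity on a 0-1 scale (1 = most severe).
severity = rng.uniform(0.0, 1.0, n)

# Assumed (made-up) dose-response: the treatment's benefit shrinks as
# severity rises, so it helps mild cases the most.
true_benefit = 0.8 * (1.0 - severity)

# A trial that recruits only the milder half of the population...
included = severity < 0.5
trial_estimate = true_benefit[included].mean()

# ...versus the average benefit in the full population the treatment targets.
population_benefit = true_benefit.mean()

print(f"Benefit estimated from the trial sample: {trial_estimate:.2f}")     # ~0.60
print(f"Benefit in the full target population:   {population_benefit:.2f}")  # ~0.40
```

Under these invented numbers, the trial overstates the treatment’s average benefit by roughly half, simply because of who was allowed in the door.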
Placebos themselves have different effects in different cultures. The gender of the administrator, the clothing worn (such as a lab coat), and even the act of consuming a pill all carry cultural associations and expectations. These factors affect outcomes in research, especially research on social, emotional, and cognitive processing.
Another factor to consider in psychological research is the skill level of the administrator. In many studies the treatment is provided by someone with minimal training in the methodology, and delivery greatly affects outcomes. If an experiment is conducted by someone without adequate expertise, it can jeopardize both the reliability of the results and the validity of the design itself.
In the 1990s, the American Psychological Association (APA) created a task force dedicated to continually updating a list of empirically supported treatments (ESTs; Sakaluk et al., 2019). The criterion for inclusion was a demonstration of statistically significant improvement over no treatment, placebo treatment, or an alternative treatment. If a treatment met this criterion, the APA labeled it “strong.” The work of Sakaluk and colleagues has shown that statistical significance is a “precarious and easily-misleading standard of evidence.” According to them, “the vast majority of statistical analyses in the EST literature were not reported with sufficient detail to make inferential tests verifiable or re-analyzable” (Sakaluk et al., 2019). Despite the high evidential value suggested by the labeling of these treatments, researchers found that reports of many ESTs, including a number categorized as possessing particularly “strong” evidence by traditional criteria, were “contaminated by reporting errors, imprecise studies, inflated rates of statistically significant effects given a particular study’s precision, and ambiguous efficacy compared with a null effect” (Williams et al., 2020). Using this information as a guide, neither therapist nor client can assume that ESTs are strong predictors of treatment success. It’s also clear that the mere absence of harm is not evidential proof of benefit or reliability.
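A brief simulation makes the problem with that standard concrete. Suppose a treatment has a modest true benefit and is tested in many small, imprecise trials; if we then look only at the trials that reached p < .05 (the ones most likely to be published and cited), the apparent effect is dramatically inflated. All numbers here are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2   # modest true benefit, in standard-deviation units
n_per_arm = 20      # small, imprecise trials
n_trials = 5_000

observed_effects, significant = [], []
for _ in range(n_trials):
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p_value = stats.ttest_ind(treated, control)
    observed_effects.append(treated.mean() - control.mean())
    significant.append(p_value < 0.05)

observed_effects = np.array(observed_effects)
significant = np.array(significant)

print(f"True effect:                          {true_effect:.2f}")
print(f"Mean effect across all trials:        {observed_effects.mean():.2f}")
print(f"Mean effect in 'significant' trials:  {observed_effects[significant].mean():.2f}")
```

With trials this small, only a minority of studies reach significance, and the ones that do overstate the true effect severalfold. Counting “statistically significant” studies without attention to their precision is exactly the precarious standard Sakaluk and colleagues describe.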
At this time, there is no overarching governing body regulating theories of psychological intervention. It’s much more “at your own risk” than that, for both the clinician and the client. We are responsible for determining how much science is behind the models and methods we are taught in graduate school and beyond. This has left a lot of room for all kinds of treatments and practitioners. And the celebrity of those treatments and their creators has made them seem “verifiable” and authoritative due to popularity, not science. But anecdotal evidence is not evidence-based. And many of the theories MFTs are using today in practice with patients are anecdotal. “The early history of psychotherapy did not have the benefit of controlled designs and was primarily distinguished by proponents’ belief that the treatments of various psychodynamic and eclectic therapies were beneficial” (Wampold & Imel, 2015). This would be a great time to ask ourselves: How do we know what we know?
To better understand how we came to have so many effective but non-efficacious treatment methods, it’s important to break down the difference. “If a treatment is found to be superior to a waitlist control group in a treatment package design, then the treatment is said to be efficacious” (Wampold & Imel, 2015). When a treatment is effective, it is applicable and functional: it has demonstrated usefulness to clients. When a treatment is efficacious, it has demonstrated repeatable outcomes under controlled conditions. Ideally, we’d all be practicing treatment methods that are both efficacious and effective. If a treatment is only shown to be effective and not efficacious, we’re better off calling it “evidence-informed” rather than evidence-based. Clinical trials are conducted in artificial contexts, which affects treatment outcomes; practice-based research has the benefit of real context, but it is not randomized or controlled, which limits what it can say about efficacy. As Williams, Sakaluk, and Botanov work to identify and reform treatment regulations to protect both clinicians and clients against pseudoscience, I argue the rest of us have an ethical obligation to assess ourselves for bias, allegiance, and complacency. As Maya Angelou said, “when you know better, do better.”
If you’re still not convinced that it’s time to scrutinize your own preferred methodologies, allow me to identify the risks of continuing to operate in ignorance. On average, it takes a client nearly 11 years to find an appropriate diagnosis and, by association, an appropriate treatment (Wang et al., 2004). That’s 11 years of spending money on treatments that were never going to work anyway. These treatments may not have been life-threatening, but there’s an argument to be made that they were indeed harmful. It violates many of our ethical responsibilities to keep administering a treatment that isn’t getting the client to their treatment goals. “A common link across these pseudoscientific approaches is that in addition to taking time, energy, and money to complete, they delay individuals from receiving efficacious treatments” (Hupp & Santa Maria, 2023). And time, energy, and money are minor consequences next to the alternatives: suicide, divorce, failed educational pursuits, and disease development. Look at the research from Kaiser Permanente on Adverse Childhood Experiences, or Rachel Yehuda’s growing body of evidence in epigenetics, for more on the risks of non-intervention or inappropriate intervention. Underlying traumas, diagnoses, and suffering can last for generations.
COVID-19 put all of us at greater risk of disadvantageous treatments. There was enormous pressure on our field to “do something” to help offset the mental health effects of social distancing, grief, and loss. Many of us were working overtime and teaching ourselves how to use technology in ways we never imagined. I believe this exacerbated an underlying tendency to use what we were comfortable with and what we preferred, rather than what clients really needed from treatment. Currently, our mental health system is “designed to prioritize things like cost, proximity, and availability of services—not expertise in a particular problem or a good fit between patient and provider” (Cummins, 2022). This is not a science-based system. To quote psychologist Richard Gist, we must “discourage yielding to the desperate need to do something by acquiescing to the compulsion to use anything” (Williams et al., 2020). Our ethical responsibility is not to just do something. We must do the right thing. The safe thing. The science-based thing.
If you’re like me and you’re ready to “do better,” I’ve got a few great places for us to start.
- Demand reform. Ask the American Psychological Association, the American Association for Marriage and Family Therapy, and any other professional association you belong to, to create written standards for acceptable research designs and statistical analyses, so the field of psychotherapy can build stronger evidence of efficacy.
- Stop using treatments that are demonstrably ineffective or harmful, such as Critical Incident Stress Debriefing, boot camps, DARE, and expressive re-experiencing therapies. You can find more information on PHTs by searching for Scott O. Lilienfeld’s 2007 work in Perspectives on Psychological Science.
- Advocate for your graduate program to include a class on research methods, scientific inquiry and/or critical thinking skills. The Commission on Accreditation for Marriage and Family Therapy Education (COAMFTE) has required a course on research since 2016.
- When gauging accuracy in treatment research, look for a Bayesian analysis, which better ensures the hypothesis has withstood the test of refutation (Pearl et al., 2016). The data alone are not sufficient. (A minimal sketch of one such analysis follows this list.)
- Practice Feedback Informed Treatment (FIT), which means gathering input from clients in real time about the effectiveness of your work together. Part of FIT is Routine Outcome Monitoring (ROM), sketched after this list. Not only does this practice foster greater professional development, but it also ensures a more ethical and “client-centered” practice. “Measuring and monitoring clinical practices is most likely to increase a therapist’s effectiveness when it identifies opportunities for deliberate practice” (Duncan et al., 2003). For more on FIT, visit centerforclinicalexcellence.com.
- Use psychological assessments to track therapeutic benefit when strong science-based evidence for your theoretical framework is missing. If you’re not hitting treatment goals with your client(s), change theories or refer out for additional help.
- Ask yourself, how do I know what I know?
- Live out some of the pillars of science-based practice: humility, skepticism, and curiosity.
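For the Bayesian suggestion above, here is a minimal sketch of what such an analysis can look like. It uses the BIC approximation to the Bayes factor for a two-group comparison; the symptom scores and their spread are invented, and a real analysis would more likely use purpose-built software such as JASP.

```python
import numpy as np

def bic_bayes_factor(x, y):
    """Approximate Bayes factor (group difference vs. no difference) for a
    two-group mean comparison, via the BIC approximation:
    BF10 ~= exp((BIC_null - BIC_alt) / 2)."""
    data = np.concatenate([x, y])
    n = len(data)
    # Null model: both groups share a single mean.
    sse_null = ((data - data.mean()) ** 2).sum()
    # Alternative model: each group gets its own mean.
    sse_alt = ((x - x.mean()) ** 2).sum() + ((y - y.mean()) ** 2).sum()
    # Gaussian-model BIC; terms common to both models cancel in the difference.
    bic_null = n * np.log(sse_null / n) + 1 * np.log(n)
    bic_alt = n * np.log(sse_alt / n) + 2 * np.log(n)
    return np.exp((bic_null - bic_alt) / 2)

# Invented end-of-treatment symptom scores (lower = fewer symptoms).
rng = np.random.default_rng(2)
treated = rng.normal(48, 10, 40)
control = rng.normal(55, 10, 40)

bf10 = bic_bayes_factor(treated, control)
print(f"BF10 = {bf10:.1f}")  # >1 favors a group difference; <1 favors the null
```

Unlike a lone p-value, the Bayes factor reports how strongly the data favor one hypothesis over the other, and a value near 1 honestly says “this study is uninformative.”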

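And for routine outcome monitoring, one widely used yardstick is the Jacobson-Truax reliable change index, which asks whether a client’s change on an outcome measure exceeds what measurement error alone would produce. The scores and instrument psychometrics below are hypothetical.

```python
import math

def reliable_change_index(pre, post, sd_norm, reliability):
    """Jacobson-Truax RCI: change divided by the standard error of the
    difference; |RCI| > 1.96 suggests reliable (non-noise) change."""
    standard_error = sd_norm * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2) * standard_error
    return (post - pre) / s_diff

# Hypothetical scores on a symptom measure (lower = fewer symptoms), with
# made-up normative SD and test-retest reliability for the instrument.
pre_score, post_score = 24, 15
normative_sd, test_retest_r = 7.5, 0.85

rci = reliable_change_index(pre_score, post_score, normative_sd, test_retest_r)
verdict = "reliable change" if abs(rci) > 1.96 else "within measurement noise"
print(f"RCI = {rci:.2f} ({verdict})")
```

Tracked session by session, a simple check like this tells you, in near real time, whether a client is actually moving toward their treatment goals.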
Julia Harkleroad, MS, LCMFT, is an AAMFT Professional member and a mental health advocate. She has a private practice in Prairie Village, KS. She has presented and designed curriculum for school districts, corporations, and not-for-profits. She recently completed a training manual on Brief Intervention and Consultation.
Cummins, E. (2022, September 26). Why therapy is broken. WIRED. Retrieved from https://www.wired.com/story/therapy-broken-mental-health-challenges/
Duncan, B. L., Miller, S. D., Sparks, J. A., Claud, D. A., Reynolds, L. R., Brown, J., & Johnson, L. D. (2003). The session rating scale: preliminary psychometric properties of a “working” alliance measure. Journal of Brief Therapy, 3, 3-12.
Hupp, S., & Santa Maria, C.L. (2023). Pseudoscience in therapy: A skeptical field guide. Cambridge University Press.
Pearl, J., Glymour, M., & Jewell, N.P. (2016). Causal inference in statistics: A primer. John Wiley & Sons Ltd.
Sakaluk, J. K., Williams, A. J., Kilshaw, R. E., & Rhyner, K. T. (2019). Evaluating the evidential value of empirically supported psychological treatments (ESTs): A meta-scientific review. Journal of Abnormal Psychology, 128(6), 500-509.
Wampold, B. E., & Imel, Z. E. (2015). The great psychotherapy debate: The evidence for what makes psychotherapy work (2nd ed.). Routledge.
Wang, P., Berglund, P. A., Olfson, M., & Kessler, R. C. (2004). Delays in initial treatment contact after first onset of mental disorder. Health Services Research, 39(2), 393-416.
Williams, A. J., & Sakaluk, J. K. (2020, February 24). The evidence for evidence-based therapy is not as clear as we thought. Aeon. Retrieved from https://aeon.co/ideas/the-evidence-for-evidence-based-therapy-is-not-as-clear-as-we-thought
Williams, A. J., Botanov, Y., Kilshaw, R. E., Wong, R. E., & Sakaluk, J. K. (2020). Potentially harmful therapies: A meta-scientific review of evidential value. Clinical Psychology: Science and Practice, 28(1), 5-18. https://doi.org/10.1111/cpsp.12331