In articles reporting psychological research, one frequently comes across the notion of efficacy. Researchers conduct laboratory studies in order to demonstrate whether, and how well, a certain treatment (e.g., a drug or a psychotherapy) works. While testing for efficacy in this way can be beneficial, Martin Seligman (1995), in "The effectiveness of psychotherapy," illustrates how absolute dependence on this kind of research can be flawed. While it is essential to gather research on various kinds of therapeutic treatment, there are drawbacks to 'efficacy studies'. As defined by Seligman (1995), an efficacy study "contrasts some kind of therapy to a comparison group under well-controlled conditions" (p. 965). The rigorous nature of the efficacy study might make it appear to demonstrate that a specific therapy is effective, but one must be careful about equating laboratory results with what can work in society at large.
First of all, psychotherapy out in the field is not of a fixed duration. Clients attend sessions until they feel they have improved (or until they decide that the treatment is not working). The stringent requirements of efficacy studies, by contrast, mean that participants are seen for a specific number of sessions, regardless of their condition at the end of those sessions. This is a flaw of efficacy studies: even though a participant might not have made progress by the end of the allotted sessions, it is possible that they would improve if given further treatment. Efficacy studies therefore run the risk of discounting a treatment option simply because it was not effective within a certain time frame.
Another restriction of efficacy studies is that participants are selected because they experience a single disorder. Outside the laboratory, patients are rarely diagnosed with only one disorder, as comorbidity is much more common (Seligman, 1995). Consequently, a treatment successfully tested in an efficacy study might not be effective for a patient with multiple disorders.
Lastly, the most important requirement of an efficacy study is random assignment, meaning that participants are assigned to either an experimental group or a control group. No such allocation occurs in therapeutic treatment outside the laboratory. Psychologists, psychiatrists, and social workers do not decide on a method of treatment before they meet a patient, and in the course of treatment they might decide that a different approach would be more effective and switch accordingly. The fact that a predetermined approach does not work in a study therefore does not mean that it will not work with a specific patient.
Ten years after this paper, Martin Seligman joined with Tracy Steen, Nansook Park, and Christopher Peterson (2005) to conduct what seems analogous to an efficacy study on positive interventions as a way to increase happiness. Their study was randomized and placebo-controlled (p. 415), although it was carried out via the internet rather than in a laboratory. Because of these similarities to an efficacy study, there were limitations to the results. While Seligman et al. found significant long-term results for two exercises ('three good things' and 'using signature strengths in a new way'), the authors offer possible improvements for the study (increasing time spent on each intervention and coupling it with other interventions). They also delineate shortcomings of their study, such as the need for a "longitudinal, placebo-controlled design" (p. 419).
Seligman et al. (2005) have furthered the field of positive psychology with their study of positive interventions. Unfortunately, it includes some of the same limitations Seligman so eloquently highlighted ten years earlier. There are unmistakable shortcomings in efficacy studies done in laboratories, which is why such studies should be combined with studies done in the field before a treatment's effectiveness is determined.
Seligman, M. E. P. (1995). The effectiveness of psychotherapy: The Consumer Reports study. American Psychologist, 50(12), 965-974.
Seligman, M. E. P., Steen, T. A., Park, N., & Peterson, C. (2005). Positive psychology progress: Empirical validation of interventions. American Psychologist, 60(5), 410-421.