American Psychologist, May–June 2006
A sizable body of scientific evidence drawn from a variety of research designs and
methodologies attests to the effectiveness of psychological practices. The
research literature on the effects of psychological interventions indicates that these interventions are safe and effective for a large number
of children and youths (Kazdin & Weisz, 2003; Weisz, Hawley, & Doss, 2004), adults (Barlow, 2004; Nathan & Gorman, 2002; Roth & Fonagy, 2004; Wampold et al., 1997), and older adults (Duffy, 1999; Zarit & Knight, 1996) across a wide range of psychological, addictive, health, and relational problems.
More recent research has indicated that compared with
alternative approaches, such as medications, psychological treatments are
particularly enduring
(Hollon, Stewart, & Strunk, 2006). Further, research has demonstrated that
psychotherapy can and often does pay for itself in terms of medical-cost offset,
increased productivity,
and life satisfaction (Chiles, Lambert, & Hatch, 2002; Yates, 1994).
Psychologists possess distinctive strengths in designing,
conducting, and interpreting research studies that can guide evidence-based practice.
Moreover, psychology—as a science and as a profession—is distinctive in
combining scientific
commitment with an emphasis on human relationships and individual differences.
As such, psychology can help to develop, broaden, and improve the research
base for evidence-based
practice. There
is broad consensus that psychological practice needs to be based on evidence
and that research needs to balance internal and external validity. Research will
not always
address all practice needs.
Major issues in integrating research in day-to-day practice include (a) the relative weight to place on different research methods; (b) the representativeness of research samples; (c) whether research results should guide practice at the level of principles of change, intervention strategies, or specific protocols; (d) the generalizability and transportability of treatments supported in controlled research to clinical practice settings; (e) the extent to which judgments can be made about treatments of choice when the number and duration of treatments tested have been limited; and (f) the degree to which the results of efficacy and effectiveness research can be generalized from primarily White samples to minority and marginalized populations
(see Westen, Novotny, & Thompson-Brenner, 2004;
see also contrasting position papers in Norcross, Beutler, & Levant, 2005).
Nevertheless, research on practice has made progress in
investigating these issues and is providing evidence that is more responsive to day-to-day
practice. There is sufficient consensus to move forward with the principles of
EBPP. Meta-analytic investigations since the 1970s have shown that most therapeutic
practices in widespread clinical use are generally effective for treating a
range of problems
(Lambert & Ogles, 2004; Wampold, 2001).
In fact, the effect sizes of psychological interventions
for children,
adults, and older adults rival or exceed those of widely accepted medical
treatments (Barlow, 2004; Lipsey & Wilson, 2001; Rosenthal, 1990; Weisz, Jensen,
& McLeod,
2005). It is important not to assume that interventions that have not yet been
studied in controlled trials are ineffective.
Specific interventions that have not been subjected to systematic
empirical testing for specific problems cannot be assumed to be either
effective or ineffective; they are simply untested to date. Nonetheless, good
practice and science
call for the timely testing of psychological practices in a way that adequately
operationalizes them
using appropriate scientific methodology.
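To make the effect size metric invoked above concrete: the meta-analytic comparisons cited typically report standardized mean differences such as Cohen's d. The following is a minimal illustrative sketch in Python; the outcome scores are hypothetical and are not drawn from any study cited in this report.

```python
import math

# Hypothetical outcome scores (higher = better functioning); purely
# illustrative, not data from the literature cited above.
treatment = [24.0, 27.0, 22.0, 25.0, 23.0]  # e.g., scores after an intervention
control = [22.0, 24.0, 21.0, 23.0, 20.0]    # e.g., scores in a comparison group

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Pooled standard deviation across both groups.
n1, n2 = len(treatment), len(control)
s_pooled = math.sqrt(((n1 - 1) * sample_var(treatment) +
                      (n2 - 1) * sample_var(control)) / (n1 + n2 - 2))

# Cohen's d: the group difference in pooled-standard-deviation units.
d = (mean(treatment) - mean(control)) / s_pooled
print(f"Cohen's d = {d:.2f}")  # conventionally, 0.2 small, 0.5 medium, 0.8 large
```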
Widely used psychological practices as well as innovations
developed in the field or laboratory should be rigorously evaluated, and
barriers to conducting this research should be identified and addressed.
Multiple Types of Research Evidence
Best research evidence refers
to scientific results related to intervention strategies, assessment, clinical
problems, and patient
populations in laboratory and field settings as well as to clinically relevant
results of basic research in psychology and related fields. APA endorses
multiple types of research evidence (e.g., efficacy, effectiveness, cost-effectiveness,
cost–benefit, epidemiological, treatment utilization) that contribute to
effective psychological practice.
Multiple research designs contribute to evidence-based
practice, and different research designs are better suited to address different
types of questions (Greenberg & Newman, 1996):
● Clinical observation
(including individual case studies) and basic psychological science are valuable
sources of innovations and hypotheses (the context of scientific
discovery).
● Qualitative research can be used to describe the subjective, lived experiences of people, including
participants in psychotherapy.
● Systematic case studies are particularly useful when aggregated—as in the form of practice research networks—for comparing individual patients with others with similar characteristics.
● Single-case experimental designs are particularly useful for establishing causal relationships in the
context of an individual.
● Public health and ethnographic research are especially useful for tracking the availability, utilization, and acceptance of mental health treatments as well as suggesting ways of altering these treatments to maximize their utility in a given social context.
● Process–outcome studies are especially valuable for identifying mechanisms of change.
● Studies of interventions as these are delivered in naturalistic settings (effectiveness research) are well suited for assessing the ecological validity of treatments.
● RCTs and their logical equivalents (efficacy research) are the standard for drawing causal inferences about the effects of interventions (context of scientific verification).
● Meta-analysis is a systematic means to synthesize results from multiple studies, test hypotheses, and quantitatively estimate the size of effects (a brief computational sketch follows this list).
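As a concrete illustration of the meta-analysis item above, the sketch below shows fixed-effect, inverse-variance pooling, one simple way such syntheses combine effect sizes across studies into a single quantitative estimate. The study values are hypothetical placeholders, not results from any review cited in this report.

```python
# Fixed-effect meta-analysis via inverse-variance weighting (illustrative).
# Each entry holds a hypothetical study's effect size d and its variance.
studies = [
    {"d": 0.62, "var": 0.040},  # hypothetical trial 1
    {"d": 0.48, "var": 0.025},  # hypothetical trial 2
    {"d": 0.75, "var": 0.060},  # hypothetical trial 3
]

# More precise studies (smaller variance) receive proportionally more weight.
weights = [1.0 / s["var"] for s in studies]
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

# An approximate 95% confidence interval around the pooled estimate.
lo, hi = pooled_d - 1.96 * pooled_se, pooled_d + 1.96 * pooled_se
print(f"Pooled d = {pooled_d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```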
With respect to evaluating research on specific
interventions, current APA policy identifies two widely accepted dimensions. As stated in the
Criteria for Evaluating Treatment Guidelines (American Psychological
Association, 2002):
The first dimension is treatment efficacy, the systematic and scientific evaluation of whether a treatment works. The second dimension is clinical utility, the applicability, feasibility, and usefulness of the intervention in the local or specific setting where it is to be offered. This dimension also includes determination of the generalizability of an intervention whose efficacy has been established. (p. 1053)
Types of research evidence with regard to intervention research, listed in ascending order as to their contribution to conclusions about efficacy, include “clinical opinion,
observation, and consensus among recognized experts representing the range of
use in the field” (Criterion 2.1); “systematized clinical observation”
(Criterion 2.2); and “sophisticated empirical methodologies, including
quasi-experiments
and randomized controlled experiments or their logical equivalents”
(Criterion 2.3; American Psychological Association, 2002, p. 1054). Among
sophisticated empirical methodologies, “randomized controlled
experiments represent a more stringent way to evaluate treatment
efficacy because they
are the most effective way to rule out threats to internal
validity in a single experiment” (American Psychological Association, 2002, p.
1054).
Evidence on clinical utility is also crucial. Per established
APA policy (American Psychological Association, 2002), at a minimum this
includes attention to generality of effects across varying and diverse patients,
therapists, settings, and the interaction of these factors; the robustness of
treatments across
various modes of delivery; the feasibility with which treatments can be
delivered to patients in real-world settings; and the costs associated with
treatments.
Evidence-based practice requires that psychologists recognize
the strengths and limitations of evidence obtained from different types of research.
Research has shown that the treatment method (Nathan & Gorman, 2002), the
individual psychologist (Wampold, 2001), the treatment relationship
(Norcross, 2002), and
the patient (Bohart & Tallman, 1999) are all vital contributors to the success of
psychological practice.
Comprehensive evidence-based practice will consider all of these determinants and
their optimal combinations. Psychological practice is a complex relational and
technical enterprise that requires clinical and research attention to multiple,
interacting sources of treatment effectiveness. There remain many disorders,
problem constellations, and clinical situations for which empirical data are sparse.
In such instances, clinicians use their best clinical
judgment and knowledge of the best available research evidence to develop
coherent treatment strategies. Researchers and practitioners should join together to ensure
that the research available on psychological practice is both clinically relevant and
internally valid.
Future Directions
EBPP has important implications for research programs and funding priorities. These programs and priorities should emphasize research on the following:
● the generalizability and transportability of interventions shown to be efficacious in controlled research settings;
● patient × treatment interactions (moderators);
● the efficacy and effectiveness of psychological practice with underrepresented groups, such as those characterized by gender, gender identity, ethnicity, race, social class, disability status, and sexual orientation;
● the efficacy and effectiveness of psychological treatments with children and youths at different
developmental stages;
● the efficacy and effectiveness of psychological treatments with older adults;
● distinguishing common and specific factors as mechanisms of change;
● characteristics and actions of the psychologist and the therapeutic relationship that contribute to positive outcomes;
● the effectiveness of widely practiced treatments—based on various theoretical orientations and integrative blends—that have not yet been subjected to controlled research;
● the development of models of treatment based on identification and observation of the practices of
clinicians in the community who empirically obtain the most positive outcomes;
● criteria for discontinuing treatment;
● accessibility and utilization of psychological services;
● the cost-effectiveness and cost–benefit of psychological interventions;
● development and testing of practice research networks;
● the effects of feedback regarding treatment progress to the psychologist or patient;
● development of profession-wide consensus, rooted in the best available research evidence, on psychological treatments that are considered discredited;
● psychological treatments of established efficacy in combination with—and as an alternative to—pharmacological treatments; and
● research on the prevention of psychological disorders and risk behaviors.