In her book Evaluation (2nd Edition), Carol Weiss writes, “Outcomes define what the program intends to achieve” (p. 117). Outcomes are the results or changes that occur, either in individual participants or in targeted communities. Outcomes occur because a program marshals resources and mobilizes human effort to address a specified social problem. Outcomes, then, are what the program is all about; they are the reason the program exists.
Outcome Evaluations
In order to assess which outcomes are achieved, program evaluators design and conduct outcome evaluations. These evaluations are intended to indicate, or measure, the kinds and levels of change that occur for those affected by the program or treatment. “Outcome evaluations measure how clients and their circumstances change, and whether the treatment experience (or program) has been a factor in causing this change. In other words, outcome evaluations aim to assess treatment effectiveness” (World Health Organization).
Outcome evaluations, like other kinds of evaluations, may employ a logic model, or theory of change, which can help evaluators and their clients identify the short-, medium-, and long-term changes that a program seeks to produce. (See our blog post “Using a Logic Model.”) Once intended changes are identified in the logic model, it is critical for the evaluator to identify valid and effective measures of those changes, so that the changes are correctly documented. It is preferable to identify desired outcomes before the program begins operation, so that these outcomes can be tracked throughout the program’s life span, as sketched below.
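To make the link between a logic model and its measures concrete, here is a minimal sketch in Python for a hypothetical job-training program. The outcomes, time horizons, and measures are illustrative only; a real logic model would be developed with the program’s stakeholders.

```python
# Illustrative only: pairing intended outcomes from a hypothetical logic model
# with a candidate measure an evaluator might track for each time horizon.
logic_model_outcomes = {
    "short_term":  {"outcome": "Participants gain job-search knowledge",
                    "measure": "Pre/post knowledge quiz score"},
    "medium_term": {"outcome": "Participants apply for jobs",
                    "measure": "Number of applications submitted (self-report)"},
    "long_term":   {"outcome": "Participants obtain stable employment",
                    "measure": "Employment status at 12-month follow-up"},
}

for horizon, item in logic_model_outcomes.items():
    print(f"{horizon}: {item['outcome']} -> measured by: {item['measure']}")
```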
Most outcome evaluations employ instruments that contain measures of attitudes, behaviors, values, knowledge, and skills. Such instruments may be standardized, and often validated, or they may be uniquely designed, special-purpose instruments (e.g., a survey designed specifically for the program being evaluated). Additionally, the measures contained in an instrument can be either “objective,” i.e., not reliant on individuals’ self-reports, or “subjective,” i.e., based on informants’ self-estimates of effect. Outcome evaluations should use objective measures whenever possible. In many instances, however, it is desirable to use instruments that rely on participants’ self-reported changes and reports of program benefits.
It is important to note that outcomes (i.e., changes or results) can occur at different points in the life span of a program. Outcome evaluations are often associated with “summative,” or end-of-program-cycle, evaluations; because program outcomes can also occur in the early or middle stages of a program’s operation, however, outcomes may be measured before the program’s final stage. It may even be useful for some evaluations to look at both short- and long-term outcomes, and therefore to be implemented at different points in time (i.e., early and late).
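The sketch below shows one simple way this repeated measurement can be summarized, assuming a hypothetical standardized instrument administered at intake, at program exit, and at a six-month follow-up. The scores, participants, and column names are invented for illustration, and the example assumes the pandas library is available.

```python
import pandas as pd

# Hypothetical outcome data: each row is one participant's score on the same
# standardized instrument at three points in the program's life span.
scores = pd.DataFrame({
    "participant_id": [1, 2, 3, 4],
    "baseline":  [42, 55, 38, 61],   # intake (before the program)
    "exit":      [58, 60, 52, 70],   # end of program (short-term outcome)
    "follow_up": [55, 63, 50, 72],   # six months later (longer-term outcome)
})

# Change scores at each measurement point, relative to baseline.
scores["short_term_change"] = scores["exit"] - scores["baseline"]
scores["long_term_change"] = scores["follow_up"] - scores["baseline"]

print(scores[["participant_id", "short_term_change", "long_term_change"]])
print("Mean short-term change:", scores["short_term_change"].mean())
print("Mean long-term change:", scores["long_term_change"].mean())
```

Tracking both change scores makes it possible to see whether early gains hold, grow, or fade by the later measurement point.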
Another issue relevant to outcome evaluation is dealing with unintended outcomes of a program. Programs can have a range of intended goals, but some outcomes or results that are not part of those goals nonetheless occur. It is critical for evaluations to try to capture these unintended consequences of a program’s operation as well as its intended outcomes.
Ultimately, outcome evaluations are how evaluators and their clients know whether the program is making a difference, which differences it is making, and whether the differences it is making are the result of the program. To learn more about our evaluation and outcome assessment methods, visit our Data collection & Outcome measurement page.
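As a rough illustration of the attribution question, the sketch below compares the change observed among hypothetical program participants with the change observed in a comparison group that did not receive the program. All figures and group labels are invented, and a simple comparison of average changes like this is only suggestive; credible attribution depends on the strength of the underlying evaluation design.

```python
import pandas as pd

# Hypothetical outcome scores for program participants and a comparison group,
# each measured before and after the program period.
data = pd.DataFrame({
    "group":    ["program"] * 4 + ["comparison"] * 4,
    "baseline": [40, 52, 47, 39, 41, 50, 48, 38],
    "post":     [58, 66, 60, 55, 45, 53, 50, 42],
})
data["change"] = data["post"] - data["baseline"]

# Average change in each group; the gap between the two averages is one
# rough indicator of how much of the change is attributable to the program.
mean_change = data.groupby("group")["change"].mean()
estimated_effect = mean_change["program"] - mean_change["comparison"]

print(mean_change)
print("Estimated program effect on the outcome measure:", estimated_effect)
```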
Resources:
World Health Organization, Workbook 7, http://whqlibdoc.who.int/hq/2000/WHO_MSD_MSB_00.2h.pdf
United Way of America, Measuring Program Outcomes: A Practical Approach, 1996.
Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources, http://managementhelp.org/evaluation/outcomes-evaluation-guide.htm#anchor153409
E. Jane Davidson, Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation, Sage, 2005.
Carol Weiss, Evaluation (2nd Edition), Prentice Hall, 1998.