Program evaluations entail research. Research is a systematic “way of finding things out.” Evaluation research depends on the collection and analysis of data (i.e., evidence, facts) that indicate the outcomes (i.e., effects, results) of a program’s operation. Typically, evaluations seek evidence of whether valued outcomes have been achieved. (Other kinds of evaluations, such as formative evaluations, instead use the collection and analysis of data to discover ways that a program may be strengthened.)
Data can be either qualitative (descriptive, consisting of words and observations) or quantitative (numerical). What counts as data depends on the design and character of the evaluation research. Quantitative evaluations rely primarily on countable information, such as measurements and statistical data. Qualitative evaluations depend on language-based and other descriptive data. In practice, program evaluations usually combine the collection of quantitative and qualitative data.
A range of data sources is available for any evaluation. These can include: observations of a program’s operation; interviews with program participants, program staff, and program stakeholders; administrative records, files, and tracking information; questionnaires and surveys; focus groups; and visual information, such as video data and photographs.
The selection of which kinds of data to collect, and how to collect them, will be contingent on the evaluation design, the availability and accessibility of data, the cost of data collection, and the limitations and potentials of each data source. The evaluation questions and the design of the evaluation research will together help determine the optimal kinds of data to collect. (See our articles “Questions Before Methods” and “Using Qualitative Interviews in Program Evaluations.”)
Resources
What’s the Difference? Evaluation vs. Research
Evaluation, Carol H. Weiss, Prentice Hall, 2nd edition (1997)