Before beginning an evaluation, it may be helpful to consider the following questions:

1. Why is the evaluation being conducted? What is/are the purpose(s) of the evaluation?

Common reasons for conducting an evaluation are to:

  • monitor progress of program implementation and provide formative feedback to designers and program managers (i.e., a formative evaluation, which seeks to discover what is happening and why, for the purpose of program improvement and refinement);
  • measure final outcomes or effects produced by the program (i.e., a summative evaluation);
  • provide evidence of a program’s achievements to current or future funders;
  • convince skeptics or opponents of the value of the program;
  • elucidate important lessons and contribute to public knowledge;
  • tell a meaningful and important story;
  • provide information on program efficiency;
  • neutrally and impartially document the changes produced in clients or systems;
  • fulfill contractual obligations;
  • advocate for the expansion or reduction of a program with current and/or additional funders.

Evaluations may simultaneously serve many purposes. For clarity, and to ensure that evaluation findings meet the client’s and stakeholders’ needs, the client and evaluator may want to identify and rank the top two or three reasons for conducting the evaluation. Clarifying the purpose(s) of the evaluation early in the process will maximize its usefulness.

2. What is the “it” that is being evaluated? (A program, initiative, organization, network, set of processes or relationships, services, activities?) There are many things that may be evaluated in any given program or intervention. It may be best to start with a few (2-4) key questions and concerns (see #4, below). Also, for purposes of clarity, it may be useful to discuss what isn’t being evaluated.

3. What outcomes is the program or intervention intended to produce? What is the program meant to achieve? What changes or differences does the program hope to produce, and in whom? What will be different as a result of the program or intervention? Note that changes can occur in individuals, organizations, communities, and other social environments. While evaluations often look for changes in persons, changes need not be restricted to alterations in individuals’ behavior, attitudes, or knowledge; they can extend to larger units of analysis, such as organizations, networks of organizations, and communities. For collective groups or institutions, changes may occur in policies, positions, vision/mission, collective actions, communication, overall effectiveness, public perception, etc. For individuals, changes may occur in behaviors, attitudes, skills, ideas, competencies, etc.

4. Every evaluation should have some basic questions that it seeks to answer. What are the key questions to be answered by the evaluation? What do clients want to be able to say (report) about the program to key stakeholders? By collaboratively defining key questions, the evaluator and the client will sharpen the focus of the evaluation and maximize the clarity and usefulness of its findings.

5. Who is the evaluation for? Who are the major stakeholders or interested parties in the evaluation results? Who wants to know? Who are the various “end users” of the evaluation findings?

6. How will evaluation findings be used? (To improve the program; to make judgments about the economic or social value of the program, including its costs and benefits; to document and publicize efforts; to expand or curtail the program?)

7. What information will stakeholders find useful and how will they use the evaluation findings?

8. What will be the “product” of the evaluation? What form will findings take? How are findings to be disseminated? (Written report, periodic briefings, analytic memos, a briefing paper, public presentation, etc.?)

9. What are the potential sources of information/data? (Interviews, program documents, surveys, quantitative/statistical data, comparison with other/similar programs, field observations, testimony of experts?) What are the most accessible and cost-effective sources of information for the client?

10. What is the optimal design for the evaluation? Which methods will yield the most valid, accurate, and persuasive evaluation conclusions? If the goal is to demonstrate a cause-and-effect relationship, is it possible (and desirable) to expend the resources necessary to conduct an experimental (i.e., “control group”) or quasi-experimental study? Experimental designs can be resource-intensive and therefore more costly. If resources are not substantial, the client and the evaluator will want to discuss other evaluation designs that will provide stakeholders with the most substantial, valid, and persuasive evaluative information.

To learn more about our evaluation methods, visit our Data collection & Outcome measurement page.
