“I would rather have questions that can’t be answered than answers that can’t be questioned.”
― Richard Feynman
The Cambridge Dictionary defines research as “a detailed study of a subject, especially in order to discover (new) information or reach a (new) understanding.” Program evaluation necessarily involves research. As we mentioned in our most recent blog post, “Just the Facts: Data Collection,” program evaluation deploys various research methods (e.g., surveys, interviews, and statistical analyses) to find out what is happening and what has happened with regard to a program, initiative, or policy. At the core of every evaluation are key questions that should guide it. Below we reprise our previous blog post, “Questions Before Methods,” which emphasizes the importance of specifying evaluation questions before designing and implementing each evaluation.
Like other research initiatives, each program evaluation should begin with a question, or a series of questions, that the evaluation seeks to answer. (See my previous blog post, “Approaching an Evaluation: Ten Issues to Consider.”) Evaluation questions guide the evaluation, give it direction, and express its purpose. Identifying guiding questions is essential to the success of any evaluation research effort.
Of course, evaluation stakeholders can have various interests and, therefore, various kinds of questions about a program. For example, funders often want to know whether the program worked, whether it was a good investment, and whether the desired changes/outcomes were achieved (i.e., outcome/summative questions). During the program’s implementation, program managers and implementers may want to know what’s working and what’s not, so they can refine the program to make it more likely to produce the desired outcomes (i.e., formative questions). Program managers and implementers may also want to know which parts of the program have been implemented, and whether the intended recipients are indeed being reached and receiving program services (i.e., process questions). Participants, community stakeholders, funders, and program implementers may all want to know whether the program makes the intended difference in the lives of those it is designed to serve (i.e., outcome/summative questions).
While program evaluations seldom serve only one purpose, or ask only one type of question, it may be useful to examine the kinds of questions that pertain to the different types of evaluation. Establishing clarity about the questions an evaluation will answer maximizes the evaluation’s effectiveness and helps ensure that stakeholders find its results useful and meaningful.
Process evaluation questions:
- Were the services, products, and resources of the program, in fact, produced and delivered to stakeholders and users?
- Did the program’s services, products, and resources reach their intended audiences and users?
- Were services, products, and resources made available to intended audiences and users in a timely manner?
- What kinds of challenges did the program encounter in developing, disseminating, and providing its services, products, and resources?
- What steps were taken by the program to address these challenges?
Formative evaluation questions:
- How do program stakeholders rate the quality, relevance, and utility of the program’s activities, products, and services?
- How can the activities, products, and services of the program be refined and strengthened during program implementation, so that they better meet the needs of participants and stakeholders?
- What suggestions do participants and stakeholders have for improving the quality, relevance, and utility of the program?
- Which elements of the program do participants find most beneficial, and which least beneficial?
Outcome/summative evaluation questions:
- What effect(s) did the program have on its participants and stakeholders (e.g., changes in knowledge, attitudes, behavior, skills, and practices)?
- Did the program’s activities, actions, and services (i.e., its outputs) deliver high-quality services and resources to stakeholders?
- Did the program’s activities, actions, and services raise participants’ awareness and provide them with new and useful knowledge?
- What is the ultimate worth, merit, and value of the program?
- Should the program be continued or curtailed?
Resources
“Just the Facts: Data Collection”