Like other research initiatives, each program evaluation should begin with a question, or series of questions, that the evaluation seeks to answer.  (See my previous blog post, “Approaching an Evaluation: Ten Issues to Consider.”)  Evaluation questions guide the evaluation, give it direction, and express its purpose.  Identifying guiding questions is essential to the success of any evaluation research effort.

Of course, evaluation stakeholders can have various interests and, therefore, various kinds of questions about a program.  For example, funders often want to know whether the program worked, whether it was a good investment, and whether the desired changes and outcomes were achieved (i.e., outcome/summative questions).  During the program’s implementation, program managers and implementers may want to know what is working and what is not, so they can refine the program and make it more likely to produce the desired outcomes (i.e., formative questions).  Program managers and implementers may also want to know which parts of the program have been implemented, and whether intended recipients are indeed being reached and receiving program services (i.e., process questions).  Participants, community stakeholders, funders, and program implementers may all want to know whether the program makes the intended difference in the lives of those it is designed to serve (i.e., outcome/summative questions).

While program evaluations seldom serve only one purpose or ask only one type of question, it is useful to examine the kinds of questions that pertain to each type of evaluation.  Establishing clarity about the questions an evaluation will answer maximizes the evaluation’s effectiveness and ensures that stakeholders find the results useful and meaningful.

Types of Evaluation Questions

Although the list below is not exhaustive, it is illustrative of the kinds of questions that each type of evaluation seeks to answer.

▪ Process Evaluation Questions

  • Were the services, products, and resources of the program, in fact, produced and delivered to stakeholders and users?
  • Did the program’s services, products, and resources reach their intended audiences and users?
  • Were services, products, and resources made available to intended audiences and users in a timely manner?
  • What kinds of challenges did the program encounter in developing, disseminating, and providing its services, products, and resources?
  • What steps were taken by the program to address these challenges?

▪ Formative Evaluation Questions

  • How do program stakeholders rate the quality, relevance, and utility of the program’s activities, products, and services?
  • How can the activities, products, and services of the program be refined and strengthened during implementation so that they better meet the needs of participants and stakeholders?
  • What suggestions do participants and stakeholders have for improving the quality, relevance, and utility of the program?
  • Which elements of the program do participants find most beneficial, and which least beneficial?

▪ Outcome/Summative Evaluation Questions

  • What effect(s) did the program have on its participants and stakeholders (e.g., changes in knowledge, attitudes, behavior, skills and practices)?
  • Did the program’s activities, actions, and services (i.e., its outputs) deliver high-quality services and resources to stakeholders?
  • Did the program’s activities, actions, and services raise participants’ awareness and provide them with new and useful knowledge?
  • What is the ultimate worth, merit, and value of the program?
  • Should the program be continued or curtailed?

The process of identifying which questions program sponsors want the evaluation to answer thus becomes a means of identifying the methods the evaluation will use.  If, ultimately, we want to know whether a program causes a specific outcome, the best method (the “gold standard”) is a randomized controlled trial (RCT).  Often, however, we want to know not just whether a program causes a particular outcome, but why and how it does so.  In that case, it is essential to use a mixed-methods approach that draws not only on quantitative outcome data comparing treatment and control groups, but also on qualitative methods (e.g., interviews, focus groups, direct observation of program functioning, and document review) that can help elucidate why things happen as they do and what program participants experience.
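
To make the quantitative side of this concrete, here is a minimal sketch of how treatment and control outcomes might be compared.  The outcome scores below are hypothetical, invented purely for illustration; a real evaluation would use the program’s own outcome measures and a fuller statistical analysis.

```python
# Minimal sketch: comparing outcomes for treatment and control groups.
# All data here are hypothetical, for illustration only.
from statistics import mean, stdev
import math

# Hypothetical post-program outcome scores (e.g., a knowledge assessment).
treatment = [78, 85, 82, 90, 74, 88, 81, 79]
control = [72, 70, 75, 68, 77, 73, 69, 71]

# Estimated program effect: the difference in group means.
effect = mean(treatment) - mean(control)

# Welch's t-statistic, a standard way to gauge whether the difference
# is larger than chance variation alone would suggest.
se = math.sqrt(stdev(treatment) ** 2 / len(treatment) +
               stdev(control) ** 2 / len(control))
t_stat = effect / se

print(f"Estimated effect: {effect:.1f} points (t = {t_stat:.2f})")
```

A comparison like this answers only the “did it work?” question; the qualitative methods named above are what explain why and how the effect (or its absence) came about.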

Robust and useful program evaluations begin with the questions that stakeholders want answered, and then identify the best methods to gather data to answer those questions.  To learn more about our evaluation methods, visit our Data collection & Outcome measurement page.
