A summative evaluation is typically conducted near, or at, the end of a program or program cycle. Summative evaluations seek to determine whether, over the course of the intervention, the desired outcomes of a program were achieved. An “outcome” is the change, effect, or result that a program or initiative intends to achieve. (See “What Counts as an ‘Outcome’ and Who Decides.”) Summative evaluations, as their name implies, offer a kind of “summary” of the value or worth of a program, based on whether, and to what degree, intended outcomes have been achieved. Whereas formative evaluations are conducted near the beginning of a program and provide information with which to strengthen its implementation, summative evaluations are conducted near or at the end of a program and help determine whether the program should be continued or discontinued. (See our article “Strengthening Programs and Initiatives through Formative Evaluation.”)
Summative evaluations are important because they gather and analyze data that indicate whether a program or initiative has been successful in effecting desired changes. Summative evaluations can be of use in making a case to potential funders and other stakeholders that continued support is a worthwhile investment. A word of caution: while it is important for funders to know that their investments are effective and that desired changes are happening, summative evaluations may also provide evidence that discontinuation of a program is in order. (See “Fail Forward: What We Can Learn from Program ‘Failure.’”)
“Building Our Understanding: Key Concepts of Evaluation. What Is It and How Do You Do It?,” Centers for Disease Control and Prevention
Evaluation, Second Edition, by Carol H. Weiss, Prentice Hall
“Types of Evaluation You Need to Know,” by Vipul Nanda
Formative evaluations are evaluations whose primary purpose is to gather information that can be used to improve or strengthen the implementation of a program or initiative. Formative evaluations typically are conducted in the early-to-mid period of a program’s implementation. Formative evaluations can be contrasted with summative evaluations, which are conducted near, or at the end of, a program or program cycle and are intended to show whether or not the program has achieved its intended outcomes (i.e., intended effects on individuals, organizations, or communities). Summative evaluations are used to indicate the ultimate value, merit, and worth of the program. Their findings can be used to determine whether the program should be continued, replicated, or curtailed.
The goal of formative evaluations is to gather information that can help program designers, managers, and implementers address challenges to the program’s effectiveness. In its paper “Different Types of Evaluation,” the CDC notes that formative evaluations are implemented “During the development of a new program (or) when an existing program is being modified or is being used in a new setting or with a new population.” Formative evaluation allows for modifications to be made to the plan before full implementation begins, and helps to maximize the likelihood that the program will succeed. As one study puts it, “Formative evaluations stress engagement with stakeholders when the intervention is being developed and as it is being implemented, to identify when it is not being delivered as planned or not having the intended effects, and to modify the intervention accordingly.” (See “Formative Evaluation: Fostering Real-Time Adaptations and Refinements to Improve the Effectiveness of Patient-Centered Medical Home Interventions.”)
While there are many potential formative evaluation questions, at their core they gather information to answer:
- Which features of a program or initiative are working and which aren’t working so well?
- Are there identifiable obstacles, or design features, that “get in the way” of the program working well?
- Which components of the program do program participants say could be strengthened?
- Which elements of the program do participants find most beneficial, and which least beneficial?

Typically, formative evaluations are used to provide feedback in a timely way, so that the functioning of the program can be modified or adjusted, and the goals of the program better achieved. Brad Rose Consulting has conducted dozens of formative evaluations, each of which has helped program managers understand ways that their program or initiative can be refined, and program participants better served.
We humans spend a lot of time in groups: families, workplaces, churches, mosques, and synagogues, political organizations, sports teams, clubs, associations, and so on. A “group” is a collection of two or more people who interact, communicate, and influence one another. A crowded elevator or a subway car is not generally considered a group; it’s a crowd. A club or a work team is a group.
Groups are the settings for a range of behaviors, all of which entail human interaction and influence. Individuals become members of groups in order to achieve goals and to satisfy needs. Groups have shared goals, or agendas, which include their “task agenda” (getting work done) and their “social agenda” (meeting the social-emotional and identity needs of members). Groups assign members to roles that prescribe a set of expectations for each member’s behavior. These roles typically have different statuses, or different levels of prestige, associated with them. There are “in-groups” and “out-groups”: the former are groups with which people identify as members, while the latter are groups with which people don’t identify and which are often “assigned” by members of other groups. An organization is a kind of group whose members work together for a shared purpose in a continuing way. Organizations can contain various groups, both formal and informal, within their boundaries.
Groups have different levels of cohesion. Both internal competition among group members and external competition with other groups can affect the degree of cohesion, or solidarity, of the group. While cohesion is important to most groups, excessive cohesion can give rise to undesirable phenomena like “groupthink,” which can lower the quality of the group’s decision-making, lead to closed-mindedness and prejudice, and exert undue pressure to conform.
These features and dynamics (above) are applicable to most groups. They are especially noticeable at work, where group dynamics are often operative. Status of members, specified roles, pressures towards conformity and “groupthink”, leadership and “followership,” group decision-making, etc., are issues with which we must often deal—both consciously and unconsciously. In the for-profit world and the non-profit world, group dynamics are at play. Awareness of these features can help us to productively deal with them, rather than experience them unconsciously, and at times, adversely.
When you’re thinking about doing an evaluation — either conducting one yourself, or working with an external evaluator to conduct the evaluation — there are a number of issues to consider. (See our earlier article “Approaching an Evaluation—Ten Issues to Consider”)
I’d like to briefly focus on four of those issues:
- What is the “it” that is being evaluated?
- What are the questions that you’re seeking to answer?
- What concepts are to be measured?
- What are appropriate tools and instruments to measure or indicate the desired change?
1. What is the “it” that is being evaluated?
Every evaluation needs to look at a particular and distinct program, initiative, policy, or effort. It is critical that the evaluator and the client be clear about what the “it” is that the evaluation will examine. Most programs or initiatives occur in a particular context, have a history, involve particular persons (e.g., staff and clients/service recipients), and are constituted by a set of specific actions and practices (e.g., trainings, educational efforts, activities, etc.). Moreover, each program or initiative has particular changes (i.e., outcomes) that it seeks to produce. Such changes can be manifold or singular. Typically, programs and initiatives seek to produce changes in attitudes, behavior, knowledge, capacities, etc. Changes can occur in individuals and/or collectivities (e.g., communities, schools, regions, populations, etc.).
2. What are the questions that you’re seeking to answer?
Evaluations, like other investigative or research efforts, involve looking into one or more evaluation questions. For example, does a discrete reading intervention improve students’ reading proficiency, or does a job training program help recipients to find and retain employment? Does a middle school arts program increase students’ appreciation of art? Does a high school math program improve students’ proficiency with algebra problems?
Programs, interventions, and policies are implemented to make valued changes in the targeted group of people that these programs are designed to serve. Every evaluation should have some basic questions that it seeks to answer. By collaboratively defining key questions, the evaluator and the client will sharpen the focus of the evaluation and maximize the clarity and usefulness of the evaluation findings. (See “Program Evaluation Methods and Questions: A Discussion.”)
3. What concepts are to be measured?
Before launching the evaluation, it is critical to clarify the kinds of changes that are desired, and then to find the appropriate measures for these changes. Programs that seek to improve maternal health, for example, may involve adherence to recommended health screening measures, e.g., Pap smears. Evaluation questions for a maternal health program, therefore, might include: “Did the patient receive a Pap smear in the past year? Two years? Three years?” Ultimately, the question is, “Does receipt of such testing improve maternal health?” (Note that this is only one element of maternal health. Other measures might include nutrition, smoking abstinence, wellness, etc.)
4. What are appropriate tools and instruments to measure or indicate the desired change?
Once the concepts (e.g., health, reading proficiency, employment, etc.) are clearly identified, it is possible to identify the measures or indicators of each concept, and to identify appropriate tools that can measure the desired concepts. Ultimately, we want tools that are able either to quantify, or qualitatively indicate, changes in the conceptual phenomena that programs are designed to affect. In the examples noted above, evaluations would seek to show changes in program participants’ reading proficiency (education), employment, and health.
Program evaluation is a way to judge the effectiveness of a program. It can also provide valuable information to ensure that the program is maximally capable of achieving its intended results. Some of the common reasons for conducting program evaluation are to:
- monitor the progress of a program’s implementation and provide feedback to stakeholders about various ways to increase the positive effects of the program
- measure the outcomes, or effects, produced by a program in order to determine if the program has achieved success and improved the lives of those it is intended to serve or affect
- provide objective evidence of a program’s achievements to current and/or future funders and policy makers
- elucidate important lessons and contribute to public knowledge.
There are numerous reasons why a program manager or an organizational leader might choose to conduct an evaluation. Too often, however, we don’t do things until we have to. Program evaluation is a way to understand how a program or initiative is doing. Compliance with a funder’s evaluation requirements need not be the only motive for evaluating a program. In fact, learning in a timely way about the achievements of, and challenges to, a program’s implementation, especially in the early-to-mid stages, can be a valuable and strategic endeavor for those who oversee programs. Evaluation is a way to learn about and to strengthen programs.