Program evaluation services that increase program value for grantors, grantees, and communities.

Tag Archive for: formative evaluation

June 2, 2020

How Evaluation Can Help Non-profits to Respond to the COVID-19 Health Crisis

The current health crisis is compelling many non-profits to rethink how they do business. Many must consider how best to serve their stakeholders with new, and perhaps untested, means. Among the questions that many non-profits must now ask themselves: How do we continue to reach program participants and service recipients? How do we change/adjust our programming so that it reaches existing and new service recipients? How do we maximize our value while ensuring the safety of staff and clients? Are there new, unanticipated opportunities to serve program participants?

New conditions require new strategies. While the majority of non-profits’ attention will necessarily be focused on serving the needs of those they seek to assist, non-profit leaders will benefit from paying attention to which strategies work, and which adaptations work better than others.

In order to investigate the effectiveness of new programmatic responses, non-profits will benefit from conducting evaluation research that gathers data about the effects and the effectiveness of new (and continuing) interventions. Formative evaluation is one such means for discovering what works under new conditions.

The goal of formative evaluations is to gather information that can help program designers, managers, and implementers address challenges to the program’s effectiveness. In its paper “Different Types of Evaluation,” the CDC notes that formative evaluations are implemented “during the development of a new program (or) when an existing program is being modified or is being used in a new setting or with a new population.” Formative evaluation allows for modifications to be made to the plan before full implementation begins, and helps to maximize the likelihood that the program will succeed. As the authors of “Formative Evaluation: Fostering Real-Time Adaptations and Refinements to Improve the Effectiveness of Patient-Centered Medical Home Interventions” put it, “Formative evaluations stress engagement with stakeholders when the intervention is being developed and as it is being implemented, to identify when it is not being delivered as planned or not having the intended effects, and to modify the intervention accordingly.”

While there are many potential formative evaluation questions, at their core they gather information that answers:

  • Which features of a program or initiative are working and which aren’t working so well?
  • Are there identifiable obstacles, or design features, that “get in the way” of the program working well?
  • Which components of the program do program participants say could be strengthened?
  • Which elements of the program do participants find most beneficial, and which least beneficial?

Typically, formative evaluations are used to provide feedback in a timely way, so that the functioning of the program can be modified or adjusted, and the goals of the program better achieved. Brad Rose Consulting has conducted dozens of formative evaluations, each of which has helped program managers to understand ways that their program or initiative can be refined, and program participants better served. For the foreseeable future, non-profits are likely to be called upon to offer ever greater levels of services. Program evaluation can help non-profits to maximize their effectiveness in ever more challenging times.


January 21, 2020

Did We Achieve What We Intended? Summative Evaluations

A summative evaluation is typically conducted near, or at, the end of a program or program cycle. Summative evaluations seek to determine if, over the course of the intervention, the desired outcomes of a program were achieved. An “outcome” is the change, effect, or result that a program or initiative intends to achieve (see “What Counts as an ‘Outcome’ and Who Decides”). Summative evaluations, as their name implies, offer a kind of “summary” of the value or worth of a program. Such an estimation is based on whether, and to what degree, intended outcomes have been achieved. Whereas formative evaluations are conducted near the beginning of a program and provide information with which to strengthen its implementation, summative evaluations are conducted near or at the end of a program and help determine whether it should be continued or discontinued. (See our article “Strengthening Programs and Initiatives through Formative Evaluation.”)

Summative evaluations are important because they gather and analyze data that indicate whether a program or initiative has been successful in effecting desired changes. Summative evaluations can be of use in making a case to potential funders and other stakeholders that continued support is a worthwhile investment. A word of caution: while it is important for funders to know that their investments are effective, and that desired changes are happening, summative evaluations may also provide evidence that discontinuation of a program is in order. (See “Fail Forward: What We Can Learn from Program ‘Failure’.”)

Resources:

Understanding Different Types of Program Evaluation

“Building Our Understanding: Key Concepts of Evaluation – What is it and how do you do it?” Centers for Disease Control and Prevention

Evaluation, Second Edition, Carol H. Weiss, Prentice Hall

“Types of Evaluation You Need to Know,” by Vipul Nanda

“Making Sense of Summative Evaluation: Three Tips for Making Those “Strings” Work in Your Favor,” by Heather Stombaugh

Just the Facts: Data Collection

January 7, 2020

Strengthening Programs and Initiatives through Formative Evaluation

Formative evaluations are evaluations whose primary purpose is to gather information that can be used to improve or strengthen the implementation of a program or initiative. Formative evaluations typically are conducted in the early-to-mid period of a program’s implementation. Formative evaluations can be contrasted with summative evaluations, which are conducted near, or at the end of, a program or program cycle and are intended to show whether or not the program has achieved its intended outcomes (i.e., intended effects on individuals, organizations, or communities). Summative evaluations are used to indicate the ultimate value, merit, and worth of the program. Their findings can be used to determine whether the program should be continued, replicated, or curtailed.

The goal of formative evaluations is to gather information that can help program designers, managers, and implementers address challenges to the program’s effectiveness. In its paper “Different Types of Evaluation,” the CDC notes that formative evaluations are implemented “during the development of a new program (or) when an existing program is being modified or is being used in a new setting or with a new population.” Formative evaluation allows for modifications to be made to the plan before full implementation begins, and helps to maximize the likelihood that the program will succeed. As the authors of “Formative Evaluation: Fostering Real-Time Adaptations and Refinements to Improve the Effectiveness of Patient-Centered Medical Home Interventions” put it, “Formative evaluations stress engagement with stakeholders when the intervention is being developed and as it is being implemented, to identify when it is not being delivered as planned or not having the intended effects, and to modify the intervention accordingly.”

While there are many potential formative evaluation questions, at their core they gather information that answers:

  • Which features of a program or initiative are working and which aren’t working so well?
  • Are there identifiable obstacles, or design features, that “get in the way” of the program working well?
  • Which components of the program do program participants say could be strengthened?
  • Which elements of the program do participants find most beneficial, and which least beneficial?

Typically, formative evaluations are used to provide feedback in a timely way, so that the functioning of the program can be modified or adjusted, and the goals of the program better achieved. Brad Rose Consulting has conducted dozens of formative evaluations, each of which has helped program managers to understand ways that their program or initiative can be refined, and program participants better served.

February 5, 2019

Pretending to Love Work

In a previous blog post, “Why You Hate Work,” we discussed an article that appeared in the New York Times investigating the way the contemporary workplace too often produces a sense of depletion and employee “burnout.” In that article, the authors, Tony Schwartz and Christine Porath, argued that only when companies attempt to address the physical, mental, emotional, and spiritual dimensions of their employees by creating “truly human-centered organizations” can these companies create the conditions for more engaged and fulfilled workers, and in so doing, become more productive and profitable organizations.

In that post, we suggested that employee burnout is not unknown in the non-profit world, and that, while program evaluation cannot itself prevent employee burnout, it can add to non-profit organizations’ capacities to create organizations in which staff and program participants have a greater sense of efficacy and purposefulness. (See also our blogpost “Program Evaluation and Organization Development.”)

Of course, the problem of employee burnout and alienation is a perennial one. It occurs in both the for-profit and non-profit sectors. In a more recent article, “Why Are Young People Pretending to Love Work?” (New York Times, January 26, 2019), Erin Griffith says that in recent years a “hustle culture” has emerged, especially among millennials. This culture, Griffith argues, “…is obsessed with striving, (is) relentlessly positive, devoid of humor, and — once you notice it — impossible to escape.” She cites the artifacts of such a culture, which include, at one WeWork location in New York, neon signs that exhort workers to “Hustle harder,” and murals that spread the gospel of T.G.I.M. (Thank God It’s Monday). Somewhat horrified by the Stakhanovite tenor of the WeWork environment, Griffith notes, “Even the cucumbers in WeWork’s water coolers have an agenda. ‘Don’t stop when you’re tired,’… ‘Stop when you are done.’” “In the new work culture,” Griffith observes, “enduring or even merely liking one’s job is not enough. Workers should love what they do, and then promote that love on social media, thus fusing their identities to that of their employers.”

Griffith’s concern is not employee burnout per se. Instead, she is horrified by the degree to which many younger employees have internalized the obsessively productivist, “workaholic” norms of their employers and, more broadly, of contemporary corporations. These norms include the apotheosis of excessive work hours and the belief that devotion to anything other than work is somehow a shameful betrayal of the work ethic. She quotes David Heinemeier Hansson, a co-founder of the software company Basecamp, who observes, “The vast majority of people beating the drums of hustle-mania are not the people doing the actual work. They’re the managers, financiers and owners.”

Griffith writes, “…as tech culture infiltrates every corner of the business world, its hymns to the virtues of relentless work remind me of nothing so much as Soviet-era propaganda, which promoted impossible-seeming feats of worker productivity to motivate the labor force. One obvious difference, of course, is that those Stakhanovite posters had an anti-capitalist bent, criticizing the fat cats profiting from free enterprise. Today’s messages glorify personal profit, even if bosses and investors — not workers — are the ones capturing most of the gains. Wage growth has been essentially stagnant for years.”

Resources:

“Why Are Young People Pretending to Love Work?” Erin Griffith, New York Times, January 26, 2019

“Why You Hate Work”

“The Fleecing of Millennials” David Leonhardt, New York Times, January 27, 2019

December 11, 2018

Program Evaluation Methods and Questions: A Discussion

“I would rather have questions that can’t be answered than answers that can’t be questioned.” 
― Richard Feynman

The Cambridge Dictionary defines research as “a detailed study of a subject, especially in order to discover (new) information or reach a (new) understanding.” Program evaluation necessarily involves research. As we mentioned in our most recent blogpost, “Just the Facts: Data Collection,” program evaluation deploys various research methods (e.g., surveys, interviews, statistical analyses, etc.) to find out what is happening and what has happened with regard to a program, initiative, or policy. At the core of every evaluation are key questions that should guide the evaluation. Below we reprise our previous blogpost, “Questions Before Methods,” which emphasizes the importance of specifying evaluation questions prior to the design and implementation of each evaluation.

Like other research initiatives, each program evaluation should begin with a question, or series of questions, that the evaluation seeks to answer. (See my previous blog post “Approaching an Evaluation: Ten Issues to Consider.”) Evaluation questions are what guide the evaluation, give it direction, and express its purpose. Identifying guiding questions is essential to the success of any evaluation research effort.

Of course, evaluation stakeholders can have various interests, and therefore can have various kinds of questions about a program. For example, funders often want to know if the program worked, if it was a good investment, and if the desired changes/outcomes were achieved (i.e., outcome/summative questions). During the program’s implementation, program managers and implementers may want to know what’s working and what’s not working, so they can refine the program to ensure that it is more likely to produce the desired outcomes (i.e., formative questions). Program managers and implementers may also want to know which parts of the program have been implemented, and if recipients of services are indeed being reached and receiving program services (i.e., process questions). Participants, community stakeholders, funders, and program implementers may all want to know if the program makes the intended difference in the lives of those whom the program is designed to serve (i.e., outcome/summative questions).

While program evaluations seldom serve only one purpose, or ask only one type of question, it may be useful to examine the kinds of questions that pertain to the different types of evaluations. Establishing clarity about the questions an evaluation will answer will maximize the evaluation’s effectiveness, and ensure that stakeholders find evaluation results useful and meaningful.

Types of Evaluation Questions

Although the list below is not exhaustive, it is illustrative of the kinds of questions that each type of evaluation seeks to answer.

Process Evaluation Questions

  • Were the services, products, and resources of the program, in fact, produced and delivered to stakeholders and users?
  • Did the program’s services, products, and resources reach their intended audiences and users?
  • Were services, products, and resources made available to intended audiences and users in a timely manner?
  • What kinds of challenges did the program encounter in developing, disseminating, and providing its services, products, and resources?
  • What steps were taken by the program to address these challenges?

Formative Evaluation Questions

  • How do program stakeholders rate the quality, relevance, and utility of the program’s activities, products, and services?
  • How can the activities, products, and services of the program be refined and strengthened during project implementation, so that they better meet the needs of participants and stakeholders?
  • What suggestions do participants and stakeholders have for improving the quality, relevance, and utility of the program?
  • Which elements of the program do participants find most beneficial, and which least beneficial?

Outcome/Summative Evaluation Questions

  • What effect(s) did the program have on its participants and stakeholders (e.g., changes in knowledge, attitudes, behavior, skills, and practices)?
  • Did the activities, actions, and services (i.e., outputs) of the program provide high-quality services and resources to stakeholders?
  • Did the activities, actions, and services of the program raise participants’ awareness and provide them with new and useful knowledge?
  • What is the ultimate worth, merit, and value of the program?
  • Should the program be continued or curtailed?

The process of identifying which questions program sponsors want the evaluation to answer thus becomes a means for identifying the kinds of methods that an evaluation will use. If, ultimately, we want to know whether a program is causing a specific outcome, then the best method (the “gold standard”) is a randomized controlled trial (RCT). Often, however, we are interested not just in knowing if a program causes a particular outcome, but why and how it does so. In that case, it will be essential to use a mixed-methods approach that draws not just on quantitative outcome data comparing the outcomes of treatment and control groups, but also on qualitative methods (e.g., interviews, focus groups, direct observation of program functioning, document review, etc.) that can help elucidate why what happens, happens, and what program participants experience.
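
To make the contrast concrete, here is a minimal sketch of the kind of quantitative comparison an RCT-style outcome analysis rests on: outcome scores for a randomly assigned treatment group are compared with those of a control group. The data, variable names, and outcome measure below are invented for illustration and are not drawn from any actual evaluation.

```python
# Hypothetical illustration: comparing outcomes for randomly assigned
# treatment and control groups. All scores are invented for the example.
from statistics import mean
from scipy import stats

treatment_scores = [72, 68, 75, 80, 66, 74, 79, 71]  # outcome measure, program group
control_scores = [65, 70, 62, 68, 64, 66, 69, 63]    # same measure, no program

# The estimated program effect is the difference in average outcomes
effect = mean(treatment_scores) - mean(control_scores)

# Welch's t-test asks whether that difference is larger than chance alone would explain
result = stats.ttest_ind(treatment_scores, control_scores, equal_var=False)

print(f"Estimated effect: {effect:.1f} points")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```

A real outcome analysis would, of course, also attend to sample size, attrition, and the practical meaning of the outcome measure; the qualitative methods mentioned above are what help explain why any measured difference occurs.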

Robust and useful program evaluations begin with the questions that stakeholders want answered, and then identify the best methods to gather data to answer these questions.

To learn more about our evaluation methods visit our Data Collection & Outcome Measurement page.

Resources

“Just the Facts: Data Collection”

“Questions Before Methods”

“Approaching an Evaluation: Ten Issues to Consider”

“Data Collection and Outcome Measurement”

December 4, 2018

Strengthening Program AND Organizational Effectiveness

Program evaluation is seldom simply about making a narrow judgment about the outcomes of a program (i.e., whether the desired changes were, in fact, ultimately produced). Evaluation is also about providing program implementers and stakeholders with information that will help them strengthen their organization’s efforts, so that desired programmatic goals are more likely to be achieved.

Brad Rose Consulting is strongly committed to translating evaluation data into meaningful and actionable knowledge, so that programs, and the organizations that host programs, can strengthen their efforts and optimize results. Because we are committed not just to measuring program outcomes, but to strengthening the organizations that host and manage programs, we work at the intersection of program evaluation and organization development (OD).

Often, the challenges facing discrete programs reflect challenges facing the organizations that host them. (For the difference between “organizations” and “programs,” see our previous post “What’s the Difference? 10 Things You Should Know About Organizations vs. Programs.”) Program evaluations thus present opportunities for host organizations to:
  • engage in the clarification of their goals and purposes
  • enhance understanding of the often implied relationships between a program’s causes and effects
  • articulate for internal stakeholders a collective understanding of the objectives of their programming
  • reflect on alternative concrete strategies to achieve desired outcomes
  • strengthen internal and external communications
  • improve relationships between individuals within programs and organizations

Although Brad Rose Consulting evaluation projects begin with a focus on discrete programs and initiatives, the answers to the questions that drive our evaluation of programs often provide vital insights into ways to strengthen the effectiveness of the organizations that host, design, and implement those programs. (See “Logic Modeling: Contributing to Strategic Planning.”)

Typically, Brad Rose Consulting works with clients to gather data that will help to improve, strengthen, and “nourish” both programs and organizations. For example, our formative evaluations, which are conducted during a project’s implementation, aim to improve a program’s design and performance. (See “Understanding Different Types of Program Evaluation.”) Our evaluation activities provide program managers and implementers with regular, data-based briefings, and with periodic written reports, so that programs can make timely adjustments to their operations. Formative feedback, including data-based recommendations for program refinement, can also help to strengthen the broader organization by identifying opportunities for organizational learning, clarifying the goals of the organization as these are embodied in specific programming, specifying how programs and organizations work to produce results (i.e., articulating cause and effect), and strengthening systems and processes.

Resources

“What’s the Difference? 10 Things You Should Know About Organizations vs. Programs,”

“Logic Modeling: Contributing to Strategic Planning”

“Understanding Different Types of Program Evaluation”

February 4, 2015

Understanding Different Types of Program Evaluation

Program evaluations are conducted for a variety of reasons. Purposes can range from mechanical compliance with a funder’s reporting requirements to the genuine desire by program managers and stakeholders to learn “Are we making a difference?” and, if so, “What kind of difference are we making?” The different purposes of, and motivations for, conducting evaluations determine the different types of evaluations. Below, I briefly discuss the variety of evaluation types.

Formative, Summative, Process, Impact and Outcome Evaluations

Formative evaluations are evaluations whose primary purpose is to gather information that can be used to improve or strengthen the implementation of a program.  Formative evaluations typically are conducted in the early- to mid-period of a program’s implementation. Summative evaluations are conducted near, or at, the end of a program or program cycle, and are intended to show whether or not the program has achieved its intended outcomes (i.e., intended effects on individuals, organizations, or communities) and to indicate the ultimate value, merit and worth of the program.  Summative evaluations seek to determine whether the program should be continued, replicated or curtailed, whereas formative evaluations are intended to help program designers, managers, and implementers to address challenges to the program’s effectiveness.

Process evaluations, like formative evaluations, are conducted during the program’s early and mid-cycle phases of implementation. Typically process evaluations seek data with which to understand what’s actually going on in a program (what the program actually is and does), and whether intended service recipients are receiving the services they need. Process evaluations are, as the name implies, about the processes involved in delivering the program.

Impact evaluations, sometimes called “outcome evaluations,” gather and analyze data to show the ultimate, often broader-ranging and longer-lasting, effects of a program. An impact evaluation determines the causal effects of the program; this involves trying to measure whether the program has achieved its intended outcomes. The International Initiative for Impact Evaluation (3ie) defines rigorous impact evaluations as “analyses that measure the net change in outcomes for a particular group of people that can be attributed to a specific program using the best methodology available, feasible and appropriate to the evaluation question that is being investigated and to the specific context.” Impact (and outcome) evaluations are primarily concerned with determining whether the effects of the program are the result of the program, or the result of some other extraneous factor(s). Ultimately, outcome evaluations want to answer the question, “What effect(s) did the program have on its participants (e.g., changes in knowledge, attitudes, behaviors, skills, practices), and were these effects the result of the program?”
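When random assignment is not feasible, evaluators often approximate this “net change” by comparing the change in outcomes among program participants with the change among a similar group that did not receive the program. The sketch below is a generic, hypothetical illustration of that arithmetic (a simple difference-in-differences); the numbers, group names, and outcome measure are invented and are not drawn from 3ie or any actual study.

```python
# Hypothetical illustration of a "net change" (difference-in-differences) estimate.
# All values are invented; a real impact evaluation would use measured outcomes
# and examine whether the comparison group is truly comparable.
from statistics import mean

participants_pre = [50, 55, 48, 60, 52]    # outcome measured before the program
participants_post = [62, 66, 58, 71, 63]   # same individuals, after the program

comparison_pre = [51, 54, 49, 59, 53]      # similar group that did not receive the program
comparison_post = [55, 57, 52, 63, 56]     # captures change that would have happened anyway

change_participants = mean(participants_post) - mean(participants_pre)
change_comparison = mean(comparison_post) - mean(comparison_pre)

# The net change is the participants' change minus the comparison group's change
net_change = change_participants - change_comparison

print(f"Participant change: {change_participants:.1f}")
print(f"Comparison change:  {change_comparison:.1f}")
print(f"Estimated net change attributable to the program: {net_change:.1f}")
```

The point of the sketch is simply that an impact estimate subtracts out the change that would likely have occurred without the program; the credibility of that subtraction rests on how well the comparison group stands in for what participants would have experienced otherwise.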

Although the different types of evaluation described above differ in their intended purposes and times of implementation, it is important to keep in mind that every program evaluation should be guided by good evaluation research questions. (See our earlier post, “Questions Before Methods.”) Program evaluation, like any effective research project, depends upon asking important and insight-producing questions. Ultimately, the different types of evaluations discussed above support the general definition of program evaluation: “a systematic method for collecting, analyzing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency.” To learn more about our evaluation methods visit our Data Collection & Outcome Measurement page.

Resources

Understanding the Causes of Outcomes and Impacts—An AEA video webinar (18 minutes):
http://comm.eval.org/communities/resources/viewdocument/?DocumentKey=a2b20160-c052-499d-bdb5-0ae578477d2a

Introduction to Evaluation:
http://www.socialresearchmethods.net/kb/intreval.php

Impact Evaluation:
https://en.wikipedia.org/wiki/Impact_evaluation#Definitions_of_Impact_Evaluation

What is Program Evaluation:
https://en.wikipedia.org/wiki/Program_evaluation#Assessing_the_impact_.28effectiveness.29

 

June 12, 2014

Questions Before Methods

Like other research initiatives, each program evaluation should begin with a question, or series of questions, that the evaluation seeks to answer. (See my previous blog post “Approaching an Evaluation: Ten Issues to Consider.”) Evaluation questions are what guide the evaluation, give it direction, and express its purpose. Identifying guiding questions is essential to the success of any evaluation research effort.

Of course, evaluation stakeholders can have various interests, and therefore can have various kinds of questions about a program. For example, funders often want to know if the program worked, if it was a good investment, and if the desired changes/outcomes were achieved (i.e., outcome/summative questions). During the program’s implementation, program managers and implementers may want to know what’s working and what’s not working, so they can refine the program to ensure that it is more likely to produce the desired outcomes (i.e., formative questions). Program managers and implementers may also want to know which parts of the program have been implemented, and if recipients of services are indeed being reached and receiving program services (i.e., process questions). Participants, community stakeholders, funders, and program implementers may all want to know if the program makes the intended difference in the lives of those whom the program is designed to serve (i.e., outcome/summative questions).

While program evaluations seldom serve only one purpose, or ask only one type of question, it may be useful to examine the kinds of questions that pertain to the different types of evaluations.  Establishing clarity about the questions an evaluation will answer will maximize the evaluation’s effectiveness, and ensure that stakeholders find evaluation results useful and meaningful.

Types of Evaluation Questions

Although the list below is not exhaustive, it is illustrative of the kinds of questions that each type of evaluation seeks to answer.

▪ Process Evaluation Questions

  • Were the services, products, and resources of the program, in fact, produced and delivered to stakeholders and users?
  • Did the program’s services, products, and resources reach their intended audiences and users?
  • Were services, products, and resources made available to intended audiences and users in a timely manner?
  • What kinds of challenges did the program encounter in developing, disseminating, and providing its services, products, and resources?
  • What steps were taken by the program to address these challenges?

▪ Formative Evaluation Questions

  • How do program stakeholders rate the quality, relevance, and utility of the program’s activities, products, and services?
  • How can the activities, products, and services of the program be refined and strengthened during project implementation, so that they better meet the needs of participants and stakeholders?
  • What suggestions do participants and stakeholders have for improving the quality, relevance, and utility of the program?
  • Which elements of the program do participants find most beneficial, and which least beneficial?

▪ Outcome/Summative Evaluation Questions

  • What effect(s) did the program have on its participants and stakeholders (e.g., changes in knowledge, attitudes, behavior, skills, and practices)?
  • Did the activities, actions, and services (i.e., outputs) of the program provide high-quality services and resources to stakeholders?
  • Did the activities, actions, and services of the program raise participants’ awareness and provide them with new and useful knowledge?
  • What is the ultimate worth, merit, and value of the program?
  • Should the program be continued or curtailed?

The process of identifying which questions program sponsors want the evaluation to answer thus becomes a means for identifying the kinds of methods that an evaluation will use. If, ultimately, we want to know whether a program is causing a specific outcome, then the best method (the “gold standard”) is a randomized controlled trial (RCT). Often, however, we are interested not just in knowing if a program causes a particular outcome, but why and how it does so. In that case, it will be essential to use a mixed-methods approach that draws not just on quantitative outcome data comparing the outcomes of treatment and control groups, but also on qualitative methods (e.g., interviews, focus groups, direct observation of program functioning, document review, etc.) that can help elucidate why what happens, happens, and what program participants experience.

Robust and useful program evaluations begin with the questions that stakeholders want answered, and then identify the best methods to gather data to answer these questions. To learn more about our evaluation methods visit our Data Collection & Outcome Measurement page.

Copyright © 2020 - Brad Rose Consulting