What is Evaluation?
Program evaluation is an applied research process that examines the effects and effectiveness of programs and initiatives. Michael Quinn Patton notes that "Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs in order to make judgments about the program, to improve program effectiveness, and/or to inform decisions about future programming."
Program evaluation can be used to look at:
- the process of program implementation,
- the intended and unintended results/effects produced by programs,
- the long-term impacts of interventions.
Program evaluation employs a variety of social science methodologies, from large-scale surveys and in-depth individual interviews to focus groups and reviews of program records. Although program evaluation is research-based, unlike purely academic research it is designed to produce actionable and immediately useful information for program designers, managers, funders, stakeholders, and policymakers. (See our previous article "What's the Difference? Evaluation vs. Research")
Program evaluation is a way to judge the effectiveness of a program. It can also provide valuable information to ensure that the program is maximally capable of achieving its intended results. Some of the most common reasons for conducting program evaluation are to:
- monitor the progress of a program’s implementation and provide feedback to stakeholders about various ways to increase the positive effects of the program
- improve program design and efficacy
- measure the outcomes, or effects, produced by a program, in order to determine if the program has achieved success and improved the lives of those it is intended to serve or affect
- provide objective evidence of a program’s achievements to current and/or future funders and policy makers
- elucidate important lessons and contribute to public knowledge
There are numerous reasons why a program manager or an organizational leader might choose to conduct an evaluation. Program evaluation is a way to understand how a program or initiative is doing. Learning in a timely way about a program's achievements and challenges is valuable for those responsible for its success. Evaluation is not simply a way to "judge" a program, but a way to learn about and strengthen it. Moreover, evaluation can help to strengthen not just a particular program, but the organization that hosts the program. (See "Strengthening Program AND Organizational Effectiveness")
The COVID-19 pandemic has affected many aspects of life, not least of which is education. In April 2020, the World Economic Forum estimated that school closures had affected 1.2 billion children. While not all children worldwide have begun to participate in on-line learning, many have, and much of traditional classroom-based education has been compelled to move to on-line, computer-mediated instruction.
How effective is on-line learning? The results are mixed. On-line learning depends, of course, on accessibility. For those who have access, research indicates that on-line learning can be as effective as, and in some cases a more efficient mode of instruction than, traditional classroom learning, offering enhanced retention, greater speed of learning, and the advantages of self-paced over other-paced instruction. "For those who do have access to the right technology…Some research shows that on average, students retain 25-60% more material when learning online compared to only 8-10% in a classroom. This is mostly due to the students being able to learn faster online; e-learning requires 40-60% less time to learn than in a traditional classroom setting because students can learn at their own pace, going back and re-reading, skipping, or accelerating through concepts as they choose." (See "5 Reasons Why Online Learning is More Effective")
Effectiveness, however, may vary by the age of students. Younger students often thrive in a more immersive, face-to-face environment, and benefit from learning a range of social and emotional skills that are often more difficult to convey in a "narrow-cast" learning venue. Classroom structure can itself be an important social-educational factor. The effectiveness of on-line learning also may vary depending on whether instruction is exclusively on-line or "blended" (i.e., includes both on-line and face-to-face instruction). Some research has shown that blended instruction results in better student outcomes than solely on-line learning. (See "The Effectiveness of Online Learning: Beyond No Significant Difference and Future Horizons," Tuan Nguyen, Leadership, Policy, and Organization, Peabody College, Vanderbilt University)
While there appear to be benefits to on-line learning for some students, Susanna Loeb, writing in a recent issue of Education Week, reminds us: "Just like in brick-and-mortar classrooms, online courses need a strong curriculum and strong pedagogical practices. Teachers need to understand what students know and what they don't know, as well as how to help them learn new material. What is different in the online setting is that students may have more distractions and less oversight, which can reduce their motivation. The teacher will need to set norms for engagement—such as requiring students to regularly ask questions and respond to their peers—that are different than the norms in the in-person setting." She further observes that some instruction (i.e., on-line) is better than no instruction, and that "especially (for) students with fewer resources at home, (these students) learn less when they are not in school. Right now, virtual courses are allowing students to access lessons and exercises and interact with teachers in ways that would have been impossible if an epidemic had closed schools even a decade or two earlier. So, we may be skeptical of online learning, but it is also time to embrace and improve it."
"The COVID-19 pandemic has changed education forever. This is how," World Economic Forum, April 2020
"The Effectiveness of Online Learning: Beyond No Significant Difference and Future Horizons," Tuan Nguyen, Leadership, Policy, and Organization, Peabody College, Vanderbilt University
“How Effective Is Online Learning? What the Research Does and Doesn’t Tell Us” By Susanna Loeb, Education Week, March 20, 2020
In the latest issue of the American Journal of Evaluation (Vol. 41, No. 2, June 2020), Robert Picciotto argues that the time has arrived for evaluation to make use of Big Data. In "Evaluation and the Big Data Challenge," Picciotto observes that evaluators, although not yet well versed in its use, should nonetheless actively engage Big Data: data sets too large and complex for traditional data processing tools, whose use implies just-in-time information for decision-making, continuous storage and processing, and the extensive use of algorithms. The scale and apparent availability of Big Data, coupled with exponentially growing computer power, Picciotto tells us, makes the use of such data increasingly attractive for evaluators. Big Data makes it possible for evaluators to identify patterns and gain insights from large data sets, insights that can't be secured through limited and costly access to traditional data. Additionally, Big Data, if handled correctly, may improve the quality of evaluation. It may even allow evaluators to wrestle more effectively with the persistently thorny problem of discerning causality in complex social systems.
While Big Data offers a range of new opportunities to evaluators, Big Data (and the big tech firms that privately own and deploy it) is not without its challenges and drawbacks. Governments, corporations, and interest groups increasingly rely on Big Data to manipulate public opinion, shape consumer behavior through predatory advertising, and in some cases intervene manipulatively in the civic and political lives of nations. (See our previous article, "Everybody Lies") Picciotto acknowledges the often pernicious uses of private data and points to the lamentably under-regulated use of Big Data to monitor and influence the behavior of citizens and consumers. He also notes that the algorithms now used to analyze these volumes of data are neither objective nor universally accurate. Despite these substantial challenges, Picciotto, somewhat sanguinely I think, believes that evaluators and greater governmental regulation of big tech may ameliorate some of the more egregious dimensions and uses of Big Data. "Big Data has let loose a host of social threats: oppressive surveillance, loss of privacy, reduced autonomy, digital addiction, spread of disinformation, social polarization, and so on." Whether evaluators and the public are capable of taming an enterprise that is now overarchingly global, under-regulated, ethically questionable, and resistant to national constraints remains to be seen.
"Evaluation and the Big Data Challenge," Robert Picciotto, American Journal of Evaluation, Vol. 41, No. 2, June 2020, pp. 166-181
"Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are," Seth Stephens-Davidowitz, Dey St., 2017
“What Is Big Data?” Lisa Arthur, Forbes, Aug 15, 2013
“What is Big Data?” Bernard Marr
The Metric Society: On the Quantification of the Social (Polity, 2019)
The Oxford English Dictionary defines a system as "A set of things working together as parts of a mechanism or an interconnecting network; a complex whole." There are, of course, a range of specific kinds of systems, including economic systems, computer systems, biological systems, social systems, psychological systems, etc. In each of these domains, the system includes specialization of component parts (a division of labor), boundaries for each of the constituent parts, both a degree of relative autonomy and an interdependence of each part on the functioning of the other parts of that system, long-term functioning (i.e., functioning over time), and the production of outcomes (whether such outcomes are intended or not). Systems produce effects.
While various systems are distinct, there has been an effort to generate a general science of systems under the umbrella of "systems theory." (See, for example, this summary: "Systems Theory") Theorists have attempted to construct a general and abstract science that can describe a variety of systems. These efforts, although subject to some questions and criticisms, have been useful for mapping and describing a variety of systems and structures, and have helped social scientists and organizational/social change advocates to describe approaches to intervening in a variety of contexts, including organizational, educational, social welfare, and economic systems.
Systems thinking, that is, thinking about systems rather than thinking exclusively about individuals or single events, can help those who are attempting to strengthen initiatives and interventions. As Michael Goodman points out in "Systems Thinking: What, Why, When, Where, and How?": "Systems thinking often involves moving from observing events or data, to identifying patterns of behavior over time, (and) to surfacing the underlying structures that drive those events and patterns. By understanding and changing structures that are not serving us well (including our mental models and perceptions), we can expand the choices available to us and create more satisfying, long-term solutions to chronic problems."
Program evaluation benefits from a systems approach because interventions (e.g., programs and initiatives) are themselves systems, and are embedded or nested in larger social and economic systems. Rather than thinking that challenges to program effectiveness are the exclusive result of individuals’ one-off actions, it is more productive to examine the systemic features of the program in order to identify how both internal structures and repeated behaviors, and larger external systemic constraints shape programs’ effectiveness.
The current health crisis is compelling many non-profits to rethink how they do business. Many must consider how best to serve their stakeholders with new, and perhaps untested, means. Among the questions that many non-profits must now ask themselves: How do we continue to reach program participants and service recipients? How do we change/adjust our programming so that it reaches existing and new service recipients? How do we maximize our value while ensuring the safety of staff and clients? Are there new, unanticipated opportunities to serve program participants?
New conditions require new strategies. While the majority of non-profits’ attention will necessarily be focused on serving the needs of those they seek to assist, non-profit leaders will benefit from paying attention to which strategies work, and which adaptations work better than others.
In order to investigate the effectiveness of new programmatic responses, non-profits will benefit from conducting evaluation research that gathers data about the effects and the effectiveness of new (and continuing) interventions. Formative evaluation is one such means for discovering what works under new conditions.
The goal of formative evaluations is to gather information that can help program designers, managers, and implementers address challenges to a program's effectiveness. In its paper "Different Types of Evaluation," the CDC notes that formative evaluations are implemented "During the development of a new program (or) when an existing program is being modified or is being used in a new setting or with a new population." Formative evaluation allows for modifications to be made to the plan before full implementation begins, and helps to maximize the likelihood that the program will succeed. "Formative evaluations stress engagement with stakeholders when the intervention is being developed and as it is being implemented, to identify when it is not being delivered as planned or not having the intended effects, and to modify the intervention accordingly." (See "Formative Evaluation: Fostering Real-Time Adaptations and Refinements to Improve the Effectiveness of Patient-Centered Medical Home Interventions")
While there are many potential formative evaluation questions, at their core formative evaluations gather information that answers:
- Which features of a program or initiative are working and which aren’t working so well?
- Are there identifiable obstacles, or design features, that “get in the way” of the program working well?
- Which components of the program do program participants say could be strengthened?
- Which elements of the program do participants find most beneficial, and which least beneficial?
Typically, formative evaluations are used to provide feedback in a timely way, so that the functioning of the program can be modified or adjusted, and the goals of the program better achieved. Brad Rose Consulting has conducted dozens of formative evaluations, each of which has helped program managers to understand ways that their program or initiative can be refined, and program participants better served. For the foreseeable future, non-profits are likely to be called upon to offer ever greater levels of services. Program evaluation can help non-profits to maximize their effectiveness in ever more challenging times.