Program evaluation services that increase program value for grantors, grantees, and communities.

Tag Archive for: evaluation

September 8, 2020

What is Evaluation and Why Do It?

What is Evaluation?

Program evaluation is an applied research process that examines the effects and effectiveness of programs and initiatives. Michael Quinn Patton notes: “Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs in order to make judgments about the program, to improve program effectiveness, and/or to inform decisions about future programming.”

Program evaluation can be used to look at:

  • the process of program implementation,
  • the intended and unintended results/effects produced by programs,
  • the long-term impacts of interventions.

Program evaluation employs a variety of social science methodologies, from large-scale surveys and in-depth individual interviews to focus groups and reviews of program records. Although program evaluation is research-based, unlike purely academic research it is designed to produce actionable and immediately useful information for program designers, managers, funders, stakeholders, and policymakers. (See our previous article “What’s the Difference? Evaluation vs. Research.”)

Why Evaluate?

Program evaluation is a way to judge the effectiveness of a program. It can also provide valuable information to ensure that the program is maximally capable of achieving its intended results. Some of the most common reasons for conducting program evaluation are to:

  • monitor the progress of a program’s implementation and provide feedback to stakeholders about various ways to increase the positive effects of the program
  • improve program design and efficacy
  • measure the outcomes, or effects, produced by a program, in order to determine if the program has achieved success and improved the lives of those it is intended to serve or affect
  • provide objective evidence of a program’s achievements to current and/or future funders and policy makers
  • elucidate important lessons and contribute to public knowledge

There are numerous reasons why a program manager or an organizational leader might choose to conduct an evaluation. Program evaluation is a way to understand how a program or initiative is doing. Learning about a program’s effectiveness in a timely way, especially learning about a program’s achievements and challenges, can be a valuable endeavor for those who are responsible for programs’ successes. Evaluation is not simply a way to “judge” a program, but a way to learn about and strengthen a program. Moreover, evaluation can help to strengthen not just a particular program, but the organization that hosts the program. (See “Strengthening Program AND Organizational Effectiveness”)


October 22, 2019

What the Heck are We Evaluating, Anyway?

When you’re thinking about doing an evaluation — either conducting one yourself, or working with an external evaluator to conduct the evaluation — there are a number of issues to consider. (See our earlier article “Approaching an Evaluation—Ten Issues to Consider”)

I’d like to briefly focus on four of those issues:

  • What is the “it” that is being evaluated?
  • What are the questions that you’re seeking to answer?
  • What concepts are to be measured?
  • What are appropriate tools and instruments to measure or indicate the desired change?

1. What is the “it” that is being evaluated?

Every evaluation needs to look at a particular and distinct program, initiative, policy, or effort. It is critical that the evaluator and the client be clear about what the “it” is that the evaluation will examine. Most programs or initiatives occur in a particular context, have a history, involve particular persons (e.g., staff and clients/service recipients), and are constituted by a set of specific actions and practices (e.g., trainings, educational efforts, activities, etc.). Moreover, each program or initiative has particular changes (i.e., outcomes) that it seeks to produce. Such changes can be manifold or singular. Typically, programs and initiatives seek to produce changes in attitudes, behavior, knowledge, capacities, etc. Changes can occur in individuals and/or collectivities (e.g., communities, schools, regions, populations, etc.).

2. What are the questions that you’re seeking to answer?

Evaluations, like other investigative or research efforts, involve looking into one or more evaluation questions. For example, does a discrete reading intervention improve students’ reading proficiency? Does a job training program help recipients find and retain employment? Does a middle school arts program increase students’ appreciation of art? Does a high school math program improve students’ proficiency with algebra problems?

Programs, interventions, and policies are implemented to make valued changes in the targeted group of people that these programs are designed to serve. Every evaluation should have some basic questions that it seeks to answer. By collaboratively defining key questions, the evaluator and the client will sharpen the focus of the evaluation and maximize the clarity and usefulness of the evaluation findings. (See “Program Evaluation Methods and Questions: A Discussion”)

3. What concepts are to be measured?

Before launching the evaluation, it is critical to clarify the kinds of changes that are desired, and then to find the appropriate measures for these changes. Programs that seek to improve maternal health, for example, may involve adherence to recommended health screening measures, e.g., Pap smears. Evaluation questions for a maternal health program might therefore include: “Did the patient receive a Pap smear in the past year? Two years? Three years?” Ultimately, the question is “Does receipt of such testing improve maternal health?” (Note that this is only one element of maternal health. Other measures might include nutrition, smoking abstinence, wellness, etc.)

4. What are appropriate tools and instruments to measure or indicate the desired change?

Once the concepts (e.g., health, reading proficiency, employment, etc.) are clearly identified, it becomes possible to identify the measures or indicators of those concepts and to select appropriate tools that can measure them. Ultimately, we want tools that are able either to quantify, or qualitatively indicate, changes in the conceptual phenomenon that programs are designed to affect. In the examples noted above, evaluations would seek to show changes in program participants’ reading proficiency (education), employment, and health.

We have more information on these topics:

“Approaching an Evaluation—Ten Issues to Consider”

“Understanding Different Types of Program Evaluation”

“4 Advantages of an External Evaluator”

October 8, 2019

Why Evaluate Your Program?

Program evaluation is a way to judge the effectiveness of a program. It can also provide valuable information to ensure that the program is maximally capable of achieving its intended results. Some of the common reasons for conducting program evaluation are to:

  • monitor the progress of a program’s implementation and provide feedback to stakeholders about various ways to increase the positive effects of the program
  • measure the outcomes, or effects, produced by a program in order to determine if the program has achieved success and improved the lives of those it is intended to serve or affect
  • provide objective evidence of a program’s achievements to current and/or future funders and policy makers
  • elucidate important lessons and contribute to public knowledge.

There are numerous reasons why a program manager or an organizational leader might choose to conduct an evaluation. Too often, however, we don’t do things until we have to. Program evaluation is a way to understand how a program or initiative is doing. Compliance with a funder’s evaluation requirements need not be the only motive for evaluating a program. In fact, learning in a timely way about the achievements of, and challenges to, a program’s implementation—especially in the early-to-mid stages of a program’s implementation—can be a valuable and strategic endeavor for those who oversee programs. Evaluation is a way to learn about and to strengthen programs.


July 9, 2019

Ranking, Rating, and Measuring Everything

In an important and brief new book, The Metric Society: On the Quantification of the Social (Polity, 2019), German sociologist Steffen Mau argues that the historic growth in the availability of data and a seeming societal obsession with quantitatively measuring and ranking everything are fast making us a “metric society.” “A cult of numbers masquerading as rationalization,” he says, is having unparalleled impact on how we understand both social and personal value. We are becoming increasingly trapped in a social world where, “The possibilities of life and activity logging are growing apace: consumption patterns, financial transactions, mobility profiles, friendship networks, states of health, education activities, work output, etc.—all this is becoming statistically quantifiable.” Such quantification is far from neutral and scientific, Mau says. It leads to ever greater tendencies, both individual and institutional, to classify, differentiate, and construct social hierarchies. He argues further that these tendencies are paving the way for us to become “an evaluation society,” a society where individuals constantly measure and compare their social worth with others (e.g., dating sites and Facebook “likes”) and where both corporations and the state sort people, based on narrow statistics, into categories that ultimately have differential access to valuable resources.

While the book is filled with examples, Chapter 5, “The Evaluation Cult: Points and Stars,” explores how “‘the evaluation cult’ is binding us to the metrics of measurement, evaluation, and comparison.” Mau scans the proliferation of various tools for evaluation: satisfaction surveys, preference measures, self-assessments, health tracking algorithms, and myriad ranking systems, ranging from Yelp to publicly available starred reviews of medical providers and lawyers. He shows us how such ratings and rankings—often justified by claims of providing “transparency,” helpful information, and consumer influence on service providers and products—are upending both markets and the professions, in some cases driving companies to purchase good reviews.

Mau raises questions not just about the validity of measures (after all, what is the difference between a three-star restaurant rating and a four-star rating?), but argues that the growth in the use of such measures is transforming how we view and value ourselves and others. “The universal language of numbers, their lack of ambiguity, and the illusion of commensurability, pave the way for the hegemony of a metrics-based apparatus of comparison.” He says that today, we are witnessing and participating in the emergence of a new “status regime” characterized by quantification and numerical ranking. This “quantitative comparison is frequently translated into a competitive ethos of better versus worse, more versus less.”

Among the other observations Mau offers:

  • growing reliance upon numbers changes our everyday notions of value and social status
  • the availability of quantitative information reinforces the tendency toward social comparison and rivalry
  • quantitative measurement of social phenomena fosters the expansion of competition
  • representations of quantitative data, such as graphs, tables, lists, and scores, change qualitative differences into quantitative inequalities
  • the availability of, and reliance upon, quantitative data leads to further social hierarchization

Ultimately, “…the measurement and quantification of the social realm are not neutral representations of reality. On the contrary, they are representative of specific orders of worth which are invariably based on forgone conclusions as to what can and should be measured and evaluated, and by what means. Metrics may claim to give an objective, accurate, and rational picture of the world as it is, but they also contribute, through the selection, weighting, and linking of information, to the establishment of the normative order.” Essentially, Mau raises a perennial question that is relevant to all evaluative efforts: Do we measure what’s valuable, or is it valuable because we choose to measure it? (Please see our previous posts “The Tyranny of Metrics” and “What Counts as an ‘Outcome’—and Who Determines?”) Mau argues further that we are becoming a society obsessed with managing our reputations, and ultimately a society of ever greater competition and rivalry.

Resources:

Steffen Mau, The Metric Society: On the Quantification of the Social (Polity, 2019)

Heather Douglas, “Facts, Values, and Objectivity”

Max Weber, “Objectivity in the Social Sciences”

Max Weber, Methodology of the Social Sciences (Transaction Press, 2011)

January 24, 2017

A Brief Guide to Evaluation (Part 1)

This is the first in a series of blog posts that outlines the key elements of an evaluation.

There are many things in the world that can be evaluated. You will want to decide and define the specific object (i.e., the “evaluand”) of the evaluation. One useful approach is for program managers and program implementers to speak with the evaluator and, together, consider the following defining questions about what needs to be evaluated.

Whether you are managing a program, funding an initiative, or about to undertake a policy review, you will want to first consider:

What is the “it” that’s being evaluated?

  • program
  • initiative
  • organization
  • set of processes or relationships
  • campaign
  • services
  • activities
  • product

Once you’ve identified what is being evaluated, it will be useful to consider these additional questions:

  • What’s being done? What are the intentional actions that are being undertaken?
  • Who does what to whom, and when do they do it?
  • Why—for what set of reasons—are these things being done? What need does the initiative or program address?
  • Who (which people, and in which positions) is carrying out the work of the program, initiative, or policy?
  • What resources (i.e. inputs) are involved? (not just money and time, but knowledge, cooperation of others, networks of collaborators, etc.)
  • Who (or what) are implementers working to change? What specifically is supposed to change, or be different as a result of the program or initiative doing what it does?
  • Describe who benefits (e.g. program participants, beneficiaries, consumers, etc.) and what the benefits are.

By clearly identifying a determinate entity (i.e., “evaluand”) and the effects (i.e., outcomes) of the operation or implementation of that entity, you will help to ensure a successful evaluation. The clearer you can be in describing what the program, initiative, or policy is, the more effective the evaluation will be. Answering the above questions and sketching out a logic model of the program’s operation (which we’ll discuss in a subsequent blog post), will ensure an effective, accurate, and ultimately highly useful evaluation.

Also, for purposes of clarity, it may be useful for program managers and implementers to consider and discuss with the evaluator what isn’t being evaluated. This conversation can help to define the necessary boundaries around the program and will help to prevent evaluators from expending unnecessary and potentially costly effort designing evaluations of things that lie outside the boundary of the initiative, program, or policy. To learn more about our evaluation practices, visit our Impact & Assessment page.

March 16, 2015

If You’re Thinking about Conducting a Program Evaluation…

Before beginning an evaluation, it may be helpful to consider the following questions:

1. Why is the evaluation being conducted? What is/are the purpose(s) of the evaluation?

Common reasons for conducting an evaluation are to:

  • monitor progress of program implementation and provide formative feedback to designers and program managers (i.e., a formative evaluation seeks to discover what is happening and why, for the purpose of program improvement and refinement.)
  • measure final outcomes or effects produced by the program (i.e., a summative evaluation);
  • provide evidence of a program’s achievements to current or future funders;
  • convince skeptics or opponents of the value of the program;
  • elucidate important lessons and contribute to public knowledge;
  • tell a meaningful and important story;
  • provide information on program efficiency;
  • neutrally and impartially document the changes produced in clients or systems;
  • fulfill contractual obligations;
  • advocate for the expansion or reduction of a program with current and/or additional funders.

Evaluations may simultaneously serve many purposes. For the purpose of clarity and to ensure that evaluation findings meet the client’s and stakeholders’ needs, the client and evaluator may want to identify and rank the top two or three reasons for conducting the evaluation. Clarifying the purpose(s) of the evaluation early in the process will maximize the usefulness of the evaluation.

2. What is the “it” that is being evaluated? (A program, initiative, organization, network, set of processes or relationships, services, activities?) There are many things that may be evaluated in any given program or intervention. It may be best to start with a few (2-4) key questions and concerns (see #4, below). Also, for purposes of clarity, it may be useful to discuss what isn’t being evaluated.

3. What are the outcomes that the program or intervention intends to produce? What is the program meant to achieve? What changes or differences does the program hope to produce, and in whom? What will be different as the result of the program or intervention? Please note that changes can occur in individuals, organizations, communities, and other social environments. While evaluations often look for changes in persons, changes need not be restricted to alterations in individuals’ behavior, attitudes, or knowledge, but can extend to larger units of analysis, like changes in organizations, networks of organizations, and communities. For collective groups or institutions, changes may occur in policies, positions, vision/mission, collective actions, communication, overall effectiveness, public perception, etc. For individuals, changes may occur in behaviors, attitudes, skills, ideas, competencies, etc. If you’d like to learn more about our evaluation practice, visit our Impact & Assessment reporting page.

Or read our blog post “Approaching an Evaluation” to see 7 more important issues to consider.


See also:

Understanding Different Types of Program Evaluation

Transforming Data into Knowledge

Program Evaluation vs. Social Research

Copyright © 2020 - Brad Rose Consulting