Program evaluation services that increase program value for grantors, grantees, and communities.

Tag Archive for: program evaluation

September 8, 2020

What is Evaluation and Why Do It?

What is Evaluation?

Program evaluation is an applied research process that examines the effects and effectiveness of programs and initiatives. Michael Quinn Patton notes that, “Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs in order to make judgments about the program, to improve program effectiveness, and/or to inform decisions about future programming.”

Program evaluation can be used to look at:

  • the process of program implementation,
  • the intended and unintended results/effects produced by programs,
  • the long-term impacts of interventions.

Program evaluation employs a variety of social science methodologies, from large-scale surveys and in-depth individual interviews to focus groups and reviews of program records. Although program evaluation is research-based, unlike purely academic research it is designed to produce actionable and immediately useful information for program designers, managers, funders, stakeholders, and policymakers. (See our previous article “What’s the Difference? Evaluation vs. Research.”)

Why Evaluate?

Program evaluation is a way to judge the effectiveness of a program. It can also provide valuable information to ensure that the program is maximally capable of achieving its intended results. Some of the most common reasons for conducting program evaluation are to:

  • monitor the progress of a program’s implementation and provide feedback to stakeholders about various ways to increase the positive effects of the program
  • improve program design and efficacy
  • measure the outcomes, or effects, produced by a program, in order to determine if the program has achieved success and improved the lives of those it is intended to serve or affect
  • provide objective evidence of a program’s achievements to current and/or future funders and policy makers
  • elucidate important lessons and contribute to public knowledge

There are numerous reasons why a program manager or an organizational leader might choose to conduct an evaluation. Program evaluation is a way to understand how a program or initiative is doing. Learning about a program’s effectiveness in a timely way, especially learning about a program’s achievements and challenges, can be a valuable endeavor for those who are responsible for programs’ successes. Evaluation is not simply a way to “judge” a program, but a way to learn about and strengthen a program. Moreover, evaluation can help to strengthen not just a particular program, but the organization that hosts the program. (See “Strengthening Program AND Organizational Effectiveness”)


July 14, 2020

Big Data and Evaluation

In the latest issue of the American Journal of Evaluation (Vol. 41, No. 2, June 2020), Robert Picciotto argues that the time has arrived for evaluation to make use of Big Data. In “Evaluation and the Big Data Challenge,” Picciotto observes that evaluators, although not yet well versed in its use, should nonetheless actively engage Big Data: data sets too large and complex for traditional data processing tools, which imply just-in-time information for decision-making, continuous storage and processing, and the extensive use of algorithms. The scale and growing availability of Big Data, coupled with exponentially growing computing power, Picciotto tells us, make the use of such data increasingly attractive for evaluators. Big Data makes it possible for evaluators to identify patterns and gain insights from large data sets, insights that can’t be secured through limited and costly access to traditional data. Additionally, Big Data, if handled correctly, may improve the quality of evaluation. It may even allow evaluators to wrestle more effectively with the persistently thorny problem of discerning causality in complex social systems.

While Big Data offers a range of new opportunities to evaluators, Big Data (and the big tech firms that privately own and deploy this data) is not without its challenges and drawbacks. Governments, corporations, and interest groups increasingly rely on Big Data to manipulate public opinion, shape consumer behavior through predatory advertising, and in some cases intervene manipulatively in the civic and political lives of nations. (See our previous article, “Everybody Lies.”) Picciotto acknowledges the often pernicious uses of private data, and points to the lamentably under-regulated use of Big Data to monitor and influence the behavior of citizens and consumers. He also notes that the algorithms now used to analyze these volumes of data are neither objective nor universally accurate. Despite these substantial challenges, Picciotto—somewhat sanguinely, I think—believes that evaluators and greater governmental regulation of big tech may ameliorate some of the more egregious uses of Big Data. “Big Data has let loose a host of social threats: oppressive surveillance, loss of privacy, reduced autonomy, digital addiction, spread of disinformation, social polarization, and so on.” Whether evaluators and the public are capable of taming an enterprise that is now global, under-regulated, ethically questionable, and resistant to national constraints remains to be seen.


“Evaluation and the Big Data Challenge,” Robert Picciotto, American Journal of Evaluation, Vol. 41, No. 2 (June 2020), pp. 166–181

“Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are,” Seth Stephens-Davidowitz (Dey St., 2017)

“What Is Big Data?” Lisa Arthur, Forbes, August 15, 2013

“What Is Big Data?” Bernard Marr

“Ranking, Rating, and Measuring Everything”

The Metric Society: On the Quantification of the Social (Polity, 2019)

October 22, 2019

What the Heck are We Evaluating, Anyway?

When you’re thinking about doing an evaluation — either conducting one yourself, or working with an external evaluator to conduct the evaluation — there are a number of issues to consider. (See our earlier article “Approaching an Evaluation—Ten Issues to Consider”)

I’d like to briefly focus on four of those issues:

  • What is the “it” that is being evaluated?
  • What are the questions that you’re seeking to answer?
  • What concepts are to be measured?
  • What are appropriate tools and instruments to measure or indicate the desired change?

1. What is the “it” that is being evaluated?

Every evaluation needs to look at a particular and distinct program, initiative, policy, or effort. It is critical that the evaluator and the client be clear about what the “it” is that the evaluation will examine. Most programs or initiatives occur in a particular context, have a history, involve particular persons (e.g., staff and clients/service recipients), and are constituted by a set of specific actions and practices (e.g., trainings, educational efforts, activities, etc.). Moreover, each program or initiative has particular changes (i.e., outcomes) that it seeks to produce. Such changes can be manifold or singular. Typically, programs and initiatives seek to produce changes in attitudes, behavior, knowledge, capacities, etc. Changes can occur in individuals and/or collectivities (e.g., communities, schools, regions, populations, etc.).

2. What are the questions that you’re seeking to answer?

Evaluations, like other investigative or research efforts, involve looking into one or more evaluation questions. For example, does a discrete reading intervention improve students’ reading proficiency? Does a job training program help recipients to find and retain employment? Does a middle school arts program increase students’ appreciation of art? Does a high school math program improve students’ proficiency with algebra problems?

Programs, interventions, and policies are implemented to make valued changes in the targeted group of people that these programs are designed to serve. Every evaluation should have some basic questions that it seeks to answer. By collaboratively defining key questions, the evaluator and the client will sharpen the focus of the evaluation and maximize the clarity and usefulness of the evaluation findings. (See “Program Evaluation Methods and Questions: A Discussion”)

3. What concepts are to be measured?

Before launching the evaluation, it is critical to clarify the kinds of changes that are desired, and then to find the appropriate measures for these changes. Programs that seek to improve maternal health, for example, may involve adherence to recommended health screenings, e.g., Pap smears. Evaluation questions for a maternal health program, therefore, might include: “Did the patient receive a Pap smear in the past year? Two years? Three years?” Ultimately, the question is “Does receipt of such testing improve maternal health?” (Note that this is only one element of maternal health. Other measures might include nutrition, smoking abstinence, wellness, etc.)

4. What are appropriate tools and instruments to measure or indicate the desired change?

Once the concepts (e.g., health, reading proficiency, employment, etc.) are clearly identified, it is possible to identify the measures or indicators of each concept, and to select appropriate tools that can measure it. Ultimately, we want tools that can either quantify, or qualitatively indicate, changes in the conceptual phenomenon that programs are designed to affect. In the examples noted above, evaluations would seek to show changes in program participants’ reading proficiency (education), employment, and health.

We have more information on these topics:

“Approaching an Evaluation—Ten Issues to Consider”

“Understanding Different Types of Program Evaluation”

“4 Advantages of an External Evaluator”

October 8, 2019

Why Evaluate Your Program?

Program evaluation is a way to judge the effectiveness of a program. It can also provide valuable information to ensure that the program is maximally capable of achieving its intended results. Some of the common reasons for conducting program evaluation are to:

  • monitor the progress of a program’s implementation and provide feedback to stakeholders about various ways to increase the positive effects of the program
  • measure the outcomes, or effects, produced by a program in order to determine if the program has achieved success and improved the lives of those it is intended to serve or affect
  • provide objective evidence of a program’s achievements to current and/or future funders and policy makers
  • elucidate important lessons and contribute to public knowledge.

There are numerous reasons why a program manager or an organizational leader might choose to conduct an evaluation. Too often, however, we don’t do things until we have to. Program evaluation is a way to understand how a program or initiative is doing. Compliance with a funder’s evaluation requirements need not be the only motive for evaluating a program. In fact, learning in a timely way about the achievements of, and challenges to, a program’s implementation—especially in the early-to-mid stages—can be a valuable and strategic endeavor for those who oversee programs. Evaluation is a way to learn about and to strengthen programs.



September 1, 2016

Using Group Facilitation in Program Evaluations

Group facilitation is a critical element in many evaluations. Although the definition of “facilitation” varies, one view of it is, “A process in which a person whose selection is acceptable to all members of the group, who is substantively neutral, and who has no substantive decision-making authority, diagnoses and intervenes to help a group improve how it identifies and solves problems and makes decisions…” (R. Schwarz, “The Skilled Facilitator Approach,” in Schwarz and Davidson, eds., The Skilled Facilitator Handbook, Jossey-Bass, 2005). Correlatively, a facilitator is “an individual who enables groups and organizations to work more effectively; to collaborate and achieve synergy. He or she is a ‘content neutral’ party who by not taking sides or expressing or advocating a point of view during the meeting, can advocate for fair, open, and inclusive procedures to accomplish the group’s work” (Kaner, S. with Lind, L., Toldi, C., Fisk, S., and Berger, D., Facilitator’s Guide to Participatory Decision-Making, Jossey-Bass, 2007).

In the Spring 2016 issue of New Directions for Evaluation, which is dedicated to the subject of evaluation and facilitation, the editors Rita Sinorita Fierro, Alissa Schwartz, and Dawn Hanson Smart note the multiple uses that facilitation can play in evaluations. “Many evaluations are undertaken with groups, and require evaluators to play a facilitative role—to engage stakeholders in meaningful conversations, to structure these discussions in ways that surface multiple perspectives, and to conduct focus groups on key data and make progress toward next steps or decision-making. Facilitation can help groups map theories of change, undertake data collection through focus groups or other dialogues, participate in analysis of findings, and help craft appropriate recommendations based on these findings.”

In “Facilitating Evaluation to Lead to Meaningful Change,” (which appears in the same issue, pp. 19-29) Tessie Tzavaras Catsambas writes that, “…evaluations typically require the evaluator to work with groups of people for organizing and managing an evaluation, as well as in collecting and analyzing data… evaluators must frequently facilitate group interactions to conclude an evaluation successfully.” Additionally, Tzavaras Catsambas observes, “Evaluators may often find themselves in situations in which they have to ‘facilitate’ their way through political sensitivities, misunderstandings, competing interests, confusion, apathy or refusal to cooperate.”

Evaluation and facilitation share many of the same principles and practices, including respect, asking good questions, engagement of/with others, and promoting effective communication and participation. The most effective facilitations and evaluations, however, require even deeper capacities and skills. Among these: the ability to sense the mood of others, to listen deeply, to capture group-generated information, to reflect, and to move both individuals and groups toward action. Effective facilitation and evaluation also require the ability of the evaluator to build and maintain constructive relationships, to engage stakeholders, to incorporate the perspectives and views of others, and to work with integrity and objectivity. (See also “Interpersonal Skills Enhance Program Evaluation” and “Establishing Essential Competencies for Evaluators” by L. Stevahn, J. King, G. Ghere, and J. Minnema.) While evaluations are, of course, distinct from group facilitation, facilitation can be a critical tool for evaluators. To find out about our evaluation methods visit our Data collection & Outcome measurement page.


International Association of Facilitators—Core Competencies


March 14, 2016

Evaluating Collaboration

Nonprofit organizations and program evaluators are increasingly being called upon to evaluate multi-stakeholder initiatives. These initiatives often depend upon collaboration among various organizations and agencies. As a consequence, the need to evaluate collaborations/coalitions has become an important requirement for a wide-range of contemporary program evaluations.

In a recent article, “Evaluating Collaboration for Effectiveness: Conceptualization and Measurement” (American Journal of Evaluation, 2015, Vol. 36, pp. 67–85), Lydia I. Marek, Donna-Jean Brock, and Jyoti Savla discuss the customary difficulties in assessing collaborations, including “lack of validated tools, undefined or immeasurable community outcomes, the dynamic nature of coalitions, and the length of time typical for interventions to effect community outcomes…the diversity and complexity of collaborations and the increasingly intricate political and organizational structures that pose challenges for evaluation design” (p. 68).

Building on previous research by Mattessich and Monsey (Collaboration: What Makes It Work? Fieldstone Alliance, 1992), which outlines a six-category inventory of characteristics typical of collaborations (i.e., environment, membership characteristics, process/structure, communication, purpose, and resources), Marek et al. argue for seven key factors that typify successful inter-organizational collaborations and coalitions:

1. Context—the shared history among coalition partners, the context in which they function, and the coalition’s role within the community

2. Membership—individual coalition members’ skills, attitudes, and beliefs that together contribute to, or detract from, successful outcomes

3. Process and organization—factors such as the flexibility and adaptability of members, and members’ clear understanding of their roles and responsibilities

4. Communication—formal and informal communication among members, and communication with the community

5. Function—the determination and articulation of coalition goals

6. Resources—the coordination of financial and human resources required for the coalition to achieve its goals

7. Leadership—strong leadership skills, including organizing and relationship-building skills

Marek et al. offer a tool for assessing the effectiveness of collaboration: the 69-item Collaboration Assessment Tool (CAT), a survey instrument that poses a series of statements for each of the factors identified above. For example, the “function” domain of the survey asks organizational respondents to rate the following statements:

  • This coalition has clearly defined the problem that it wished to address
  • The goals and objectives of the coalition are based upon key community needs
  • This coalition has clearly defined short-term goals and objectives
  • This coalition has clearly defined long-term goals and objectives
  • Members agree upon the goals and objectives
  • The goals and objectives set for this coalition can be realistically attained
  • Members view themselves as interdependent in achieving the goals and objectives of this coalition
  • The goals and objectives of this coalition differ, at least in part, from the goals and objectives of each of the coalition members
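
As a rough illustration of how responses to such an instrument might be summarized, the sketch below averages hypothetical Likert ratings into a single domain score. The item names, ratings, and scoring scheme are invented for illustration; they are not the CAT’s actual items or method.

```python
from statistics import mean

# Hypothetical 5-point Likert ratings (1 = strongly disagree ... 5 = strongly agree)
# from three coalition members on three "function" items. Item names and the
# scoring scheme are illustrative only, not the CAT's actual design.
responses = [
    {"clear_problem": 4, "goals_from_needs": 5, "short_term_goals": 3},
    {"clear_problem": 5, "goals_from_needs": 4, "short_term_goals": 4},
    {"clear_problem": 3, "goals_from_needs": 4, "short_term_goals": 2},
]

def domain_score(responses, items):
    """Average each respondent's mean rating across a domain's items."""
    return mean(mean(r[item] for item in items) for r in responses)

function_items = ["clear_problem", "goals_from_needs", "short_term_goals"]
print(round(domain_score(responses, function_items), 2))
```

A relatively low score on a domain such as “function” would flag goal-setting as an area for the coalition to strengthen.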

“Evaluating Collaboration for Effectiveness: Conceptualization and Measurement” offers a valuable tool for assessing the often complex inter-organizational relationships that constitute multi-stakeholder collaborations. The CAT offers a system for gathering data to evaluate the quality of collaborations, and implicitly suggests an outline of the key characteristics of successful collaborations. The CAT will be useful for organizations that are embarking on collaborations, and for program evaluators who are charged with evaluating the success of such inter-organizational collaborations. To learn about our evaluation methods visit our Data collection & Outcome measurement page.


“Evaluating Collaboration for Effectiveness: Conceptualization and Measurement,” Lydia I. Marek, Donna-Jean Brock, and Jyoti Savla (American Journal of Evaluation, 2015, Vol. 36, pp. 67–85)

September 23, 2014

Program Evaluation vs. Social Research

I recently participated in a workshop at Brandeis University for graduate students who were considering non-academic careers in the social sciences.  During the workshop, one of the students asked about the difference between program evaluation and other kinds of social research.  This is a valuable and important question to which I responded that program evaluation is a type of applied social research that is conducted with “a value, or set of values, in its denominator.”  I further explained that I meant that evaluation research is always conducted with an eye to whether the outcomes, or results, of a program were achieved, especially when these outcomes are compared to a desired and valued standard or criterion.  At the heart of program evaluation is the idea that outcomes, or changes, are valuable and desired.  Evaluators conduct evaluation research to find out if these valuable changes (often expressed as program goals or objectives) are, in fact, achieved by the program.

Evaluation research shares many of the same methods and approaches as other social sciences, and indeed, natural sciences. Evaluators draw upon a range of evaluation designs (e.g., experimental, quasi-experimental, and non-experimental designs) and a range of methodologies (e.g., case studies, observational studies, interviews, etc.) to learn what the effects of a given intervention have been. Did, for example, 8th grade students who received an enriched STEM curriculum do better on tests than did their otherwise similar peers who didn’t receive the enriched curriculum? Do homeless women who receive career readiness workshops succeed at obtaining employment at greater rates than do other similar homeless women who don’t participate in such workshops? (For more on these types of outcome evaluations, see our previous blog post, “What You Need to Know About Outcome Evaluations: The Basics.”) While not all program evaluations are outcome evaluations, all evaluations gather systematic data with which judgments about the program can be made.
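
In the simplest case, an outcome comparison like the STEM example reduces to a difference in group means. The sketch below uses made-up scores purely for illustration; a real evaluation would also test statistical significance and account for pre-existing differences between the groups.

```python
from statistics import mean

# Made-up post-test scores, for illustration only.
stem_group = [78, 85, 90, 72, 88]   # students who received the enriched STEM curriculum
comparison = [70, 75, 82, 68, 80]   # otherwise similar peers who did not

# The raw "effect" is the difference between the two group means.
effect = mean(stem_group) - mean(comparison)
print(f"Mean difference: {effect:.1f} points")
```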

Evaluation’s Differences From Other Kinds of Social Research

Evaluation research is distinct from other forms of applied social research in so far as it:

  • seeks to determine the merit, value, and/or worth of a program’s activities and results.
  • entails the systematic collection of empirical data that is used to measure the processes and/or outcomes of a program,  with the goal of furthering the program’s development and improvement.
  • provides actionable information for decision-makers and program stakeholders, so that, based on objective data, a program can be strengthened or curtailed.
  • focuses on particular knowledge (usually about a program and its outcomes), rather than seeking widely generalizable and universal knowledge.

While evaluators share many of the same methods and approaches as other researchers, program evaluators must employ an explicit set of values against which to judge the findings of their empirical research. This means that evaluators must both be competent social scientists and exercise value-based judgments and interpretations about the meaning of data. To learn more about our evaluation methods visit our Data collection & Outcome measurement page.


Research vs. Evaluation

Differences Between Research and Evaluation

Harvard Family Research Project’s “Ask an Expert” series:
“Michael Scriven on the Differences Between Evaluation and Social Science Research”

Office of Educational Assessment

Sandra Mathison’s “What is the Difference Between Evaluation and Research, and Why Do We Care?”

“Distinguishing Evaluation from Research”

September 10, 2014

What You Need to Know About Outcome Evaluations: The Basics

In her book Evaluation (2nd Edition) Carol Weiss writes, “Outcomes define what the program intends to achieve.” (p.117) Outcomes are the results or changes that occur, either in individual participants, or targeted communities. Outcomes occur because a program marshals resources and mobilizes human effort to address a specified social problem.  Outcomes, then, are what the program is all about; they are the reason the program exists.

Outcome Evaluations

In order to assess which outcomes are achieved, program evaluators design and conduct outcome evaluations. These evaluations are intended to indicate, or measure, the kinds and levels of change that occur for those affected by the program or treatment. “Outcome evaluations measure how clients and their circumstances change, and whether the treatment experience (or program) has been a factor in causing this change. In other words, outcome evaluations aim to assess treatment effectiveness.” (World Health Organization)

Outcome evaluations, like other kinds of evaluations, may employ a logic model, or theory of change, which can help evaluators and their clients to identify the short-, medium-, and long-term changes that a program seeks to produce. (See our blog post “Using a Logic Model” )  Once intended changes are identified in the logic model, it is critical for the evaluator to further identify valid and effective measures of said changes, so that these changes are correctly documented.  It is preferable to identify desired outcomes before the program begins operation, so that these outcomes can be tracked throughout the program’s life-span.

Most outcome evaluations employ instruments that contain measures of attitudes, behaviors, values, knowledge, and skills. Such instruments may be standardized and often validated, or they may be uniquely designed, special-purpose instruments (e.g., a survey designed specifically for a particular program). Additionally, the measures contained in an instrument can be either “objective,” i.e., not reliant on individuals’ self-reports, or “subjective,” i.e., based on informants’ self-estimates of effect. Ideally, outcome evaluations use objective measures whenever possible. In many instances, however, it is desirable to use instruments that rely on participants’ self-reported changes and reports of program benefits.

It is important to note that outcomes (i.e., changes or results) can occur at different points in time in the life span of a program. Although outcome evaluations are often associated with “summative,” or end-of-program-cycle, evaluations, program outcomes can occur in the early or middle stages of a program’s operation, so outcomes may be measured before the final stage of the program. It may even be useful for some evaluations to look at both short- and long-term outcomes, and therefore to be implemented at different points in time (i.e., early and late).

Another issue relevant to outcome evaluation is dealing with unintended outcomes of a program.  As you know, programs can have a range of intended goals. Some outcomes or results, however, may not be a part of the intended goals of the program. They nonetheless occur. It is critical for evaluations to try to capture the unintended consequences of programs’ operation as well as the intended outcomes.

Ultimately, outcome evaluations are the way that evaluators and their clients know if the program is making a difference, which differences it’s making, and whether the differences it’s making are the result of the program. To learn more about our evaluation and outcome assessment methods visit our Data collection & Outcome measurement page.


World Health Organization, Workbook 7

Measuring Program Outcomes: A Practical Approach (United Way of America, 1996)

Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources

Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation, E. Jane Davidson, Sage, 2005.

Evaluation (2nd Edition) Carol Weiss, Prentice Hall, 1998.

August 26, 2014

Focus Groups

Pioneered by market researchers and mid-20th-century sociologists, focus groups are a qualitative research method that involves small groups of people in guided discussions about their attitudes, beliefs, experiences, and opinions about a selected topic or issue. Often used by marketers to obtain feedback from consumers about a product or service, focus groups have also become an effective and widely recognized social science research tool that enables researchers to explore participants’ views and to reveal rich data that often remain under-reported by other kinds of data collection strategies (e.g., surveys, questionnaires, etc.).

Organized around a set of guiding questions, focus groups typically are composed of 6-10 people and a moderator who poses open-ended questions for participants to address. Focus groups usually include people who are somewhat similar in characteristics or social roles. Participants are selected for their knowledge, reflectiveness, and willingness to engage topics or questions. Ideally—although not always possible—it is best to involve participants who don’t already know one another.

Focus group conversations enable participants to offer observations, define issues, pose and refine questions, and create informative debate/discussions.  Focus group moderators must: be attentive, pose useful and creative questions, create a welcoming and non-judgmental atmosphere, be sensitive to non-verbal cues and the emotional tenor of participants.   Typically, focus group sessions are recorded or videoed so that researchers can later transcribe and analyze participants’ comments.  Often an assistant moderator will take notes during the focus group conversation.

Focus groups have advantages over other data collection methods. They often employ group dynamics that help to reveal information that would not emerge from an individual interview or survey; they produce relatively quick, low-cost data (an ‘economy of scale’ compared to individual interviews); they allow the moderator to pose appropriate and responsive follow-up questions; they enable the moderator to observe non-verbal data; and they often produce greater and richer data than a questionnaire or survey.

Focus groups also can have some disadvantages, especially if not conducted by an experienced and skilled moderator. Depending upon their composition, focus groups are not necessarily representative of the general population; respondents may feel social pressure to endorse other group members’ opinions or refrain from voicing their own; and group discussions require effective “steering” so that key questions are answered and participants don’t stray from the topic.

Focus groups are often used in program evaluations. I have had extensive experience conducting focus groups with a wide range of constituencies. During my 20 years of experience as a program evaluator, I’ve moderated focus groups composed of: homeless persons; disadvantaged youth; university professors and administrators; K-12 teachers; K-12 and university students; corporate managers; and hospital administrators. In each of these groups I’ve found it beneficial to have a non-judgmental attitude, be genuinely curious, exercise a gentle guidance, and respect the opinions, beliefs, and experiences of each focus group member. A sense of humor can also be extremely helpful. (See our previous posts “Interpersonal Skills Enhance Program Evaluation” and “Listening to Those Who Matter Most, the Beneficiaries.”) Or if you want to learn more about our qualitative approaches visit our Data collection & Outcome measurement page.


About focus groups:

How focus groups work:

Focus group interviewing:

‘Focus groups’ at Wikipedia

August 11, 2014

Needs Assessment

A needs assessment is a systematic research and planning process for determining the discrepancy between an actual condition, or state of affairs, and a future desired condition. Needs assessments are undertaken not only to identify the gap between “what is” and “what should be,” but also to identify the programmatic actions and resources that are required to address that gap. Typically, a needs assessment is part of a planning process intended to yield improvements in individuals, education/training, organizations, and/or communities. Ultimately, a needs assessment is “a systematic process whose aim is to acquire an accurate, thorough picture of a system’s strengths and weaknesses, in order to improve it and to meet existing and future challenges.” Needs assessments have a variety of purposes. They can be used to identify and address challenges in a community, to develop training strategies, or to improve the performance of organizations.

There are a variety of conceptual models of needs assessment. One of the most popular is the SWOT analysis, in which researchers and action teams conduct a study to determine the strengths, weaknesses, opportunities, and threats involved in a project or business venture. In Planning and Conducting Needs Assessments: A Practical Guide (Thousand Oaks, CA: Sage Publications, 1995), Witkin and Altschuld identify a three-stage model of needs assessment, which includes pre-assessment (exploration), assessment (data gathering), and post-assessment (utilization).

Although there are various approaches to needs assessment, most include the following essential components/steps:

  • Identify issue/concern
  • Conduct a gap analysis (where things are now vs. where they should be)
  • Specify methods for collecting information/data
  • Perform literature review
  • Collect and analyze data
  • Develop action plan
  • Produce implementation report
  • Disseminate report/recommendations to stakeholders.
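The gap-analysis step above can be illustrated with a small computation. The sketch below is purely illustrative; the indicator names and numbers are hypothetical, and a real needs assessment would of course draw on much richer qualitative and quantitative data.

```python
# Illustrative sketch of a gap analysis: compare "what is" (current
# conditions) against "what should be" (desired conditions) for a set
# of community indicators. All indicators and values are hypothetical.

def gap_analysis(current, target):
    """Return the gap (target minus current) for each indicator present
    in both dictionaries, largest shortfalls first."""
    gaps = {
        name: target[name] - current[name]
        for name in current.keys() & target.keys()
    }
    return dict(sorted(gaps.items(), key=lambda kv: kv[1], reverse=True))

current_state = {"adult_literacy_rate": 0.72, "job_placement_rate": 0.40}
desired_state = {"adult_literacy_rate": 0.90, "job_placement_rate": 0.65}

for indicator, gap in gap_analysis(current_state, desired_state).items():
    print(f"{indicator}: gap of {gap:.2f}")
```

Ranking the gaps largest-first mirrors the way an action plan typically prioritizes the most under-addressed needs.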
Why Conduct a Needs Assessment?

Needs assessments can be used to identify real-world challenges, to formulate plans to correct inequities, and to involve critical stakeholders in building consensus and mobilizing resources to address identified challenges. For non-profit organizations, needs assessments: 1) use data to identify an unaddressed or under-addressed need; 2) help to more effectively utilize resources to address a given problem; 3) make programs measurable, defensible, and fundable; and 4) inform, mobilize, and re-energize stakeholders. Needs assessments can be used with an organization's internal and external stakeholders and constituents.

Brad Rose Consulting, Inc. has extensive experience designing and implementing needs assessments for non-profit organizations, educational institutions, and health and human service programs. We'd welcome a chance to speak with you and your colleagues about how we can help you conduct a needs assessment. To learn more about our assessment methods, visit our Data collection & Outcome measurement page.


SWOT Analysis:

Pyramid Model of Needs Assessment

Needs Assessment: Strategies for Community Groups and Organizations

Needs Assessment 101

U.S. Department of Education

Needs Assessment: A User’s Guide

August 29, 2013

Evaluation Workflow

Typically, we work with clients from the early stages of program development in order to understand their organization’s needs and the needs of program funders and other stakeholders. Following initial consultations with program managers and program staff, we work collaboratively to identify key evaluation questions, and to design a strategy for collecting and analyzing data that will provide meaningful and useful information to all stakeholders.

Depending upon the specific initiative, we implement a range of evaluation tools (e.g., interview protocols, web-based surveys, focus groups, quantitative measures, etc.) that allow us to collect, analyze, and interpret data about the activities and outcomes of the specified program. Periodic debriefings with program staff and stakeholders allow us to communicate preliminary findings, and to offer program managers timely opportunities to refine programming so that they can better achieve intended goals.

Our collaborative approach to working with clients allows us to actively support program managers, staff, and funders to make data-informed judgments about programs’ effectiveness and value. At the appropriate time(s) in the program’s implementation, we write a report(s) that details findings from program evaluation activities and that makes data-based suggestions for program improvement. To learn more about our approach to evaluation visit our Data collection & Outcome measurement page.

June 26, 2013

Understanding How Programs Work: Using Logic Models to “Map” Cause and Effect

A logic model is a schematic representation of the elements of a program and the program's resulting effects.  A logic model (also known as a "theory of change") is a useful tool for understanding the way a program intends to produce the outcomes (i.e., changes) it hopes to produce.  Logic models typically consist of a flowchart schematic that shows the logical connection between a program's "inputs" (i.e., invested resources), "outputs" (program activities and actions), "short-term outcomes" (changes), "medium-term outcomes" (changes), and "long-range impacts" (changes).


When developing a logic model many evaluators and program staff rightly focus on inputs, outputs, and program outcomes (the core of the program).  However, it is critical to also include in the logic model the implicit assumptions that underlie the program’s operation, the needs that the program aspires to address, and the program’s environment, or context.  Assumptions, needs, and context are crucial factors in understanding how the program does what it intends to do.  Ultimately these are crucial to understanding the causal mechanisms that produce the intended changes of any program.
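The components described above can be sketched as a simple data structure. This is a purely illustrative sketch, not an evaluation tool we use; the tutoring program, its assumptions, and all of its entries are hypothetical.

```python
# Illustrative sketch: a logic model as a plain data structure, including
# the often-omitted assumptions, needs, and context. All entries below
# describe a hypothetical tutoring program.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list        # invested resources
    outputs: list       # program activities and actions
    short_term: list    # short-term outcomes (changes)
    medium_term: list   # medium-term outcomes (changes)
    long_range: list    # long-range impacts (changes)
    assumptions: list = field(default_factory=list)  # implicit causal assumptions
    needs: list = field(default_factory=list)        # needs the program addresses
    context: list = field(default_factory=list)      # program environment

tutoring = LogicModel(
    inputs=["2 staff tutors", "donated classroom space"],
    outputs=["weekly tutoring sessions"],
    short_term=["improved homework completion"],
    medium_term=["higher course grades"],
    long_range=["increased graduation rates"],
    assumptions=["students attend regularly"],
    needs=["low math proficiency in district"],
    context=["under-resourced school district"],
)

# Reading the chain left to right tells the program's causal story.
chain = " -> ".join(
    "; ".join(stage) for stage in
    (tutoring.inputs, tutoring.outputs, tutoring.short_term,
     tutoring.medium_term, tutoring.long_range)
)
print(chain)
```

Making assumptions, needs, and context explicit fields, rather than leaving them implicit, is exactly what forces the causal conversation the paragraph above recommends.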

Without clearly understanding the causal mechanisms at work in a program, program staff may work ineffectively, placing emphasis on the wrong or ineffective activities, and ultimately fail to address the challenges the program intends to address.  Similarly, without a clear understanding of the causal mechanisms that enable the program to achieve its outcomes, the program evaluation may not measure the proper outcomes, or may fail to see the changes the program, in fact, brings about.

Brad Rose Consulting, Inc. works with clients to develop simple, yet robust, logic models that explicitly document the causal mechanisms that are at work in a program.  By discussing, and explicitly identifying the often implicit causal assumptions,  as well as highlighting the needs for the program and the social context of a program, we not only ensure that the evaluation is properly designed and executed, we also help program implementers to ensure that they are activating the causal processes/mechanisms that yield the changes that the program strives to achieve.

Other Resources:

Read Brad's current whitepaper "Logic Modeling"

Monitoring and Evaluation: Some Tools, Methods, and Approaches, The World Bank.

“The Logic Model Development Guide,” W.K. Kellogg Foundation.

Logic Model Resources at the University of Wisconsin

A Bibliography for Program Logic Models/Logframe Analysis


June 4, 2013

Social Entrepreneurs: How Evaluation Can Make A Difference

In recent years, "social entrepreneur" has become a prominent term in the not-for-profit, foundation, and NGO worlds.  But what exactly is a "social entrepreneur"?  While social entrepreneurs share many of the characteristics ascribed to for-profit entrepreneurs, Roger L. Martin and Sally Osberg observe in their Stanford Social Innovation Review article "Social Entrepreneurship: The Case for Definition" that "the social entrepreneur aims for value in the form of large-scale, transformational benefit that accrues either to a significant segment of society or to society at large." They also note that "social entrepreneurship … is as vital to the progress of societies as is entrepreneurship to the progress of economies, and it merits more rigorous, serious attention than it has attracted so far."

PBS notes: "A social entrepreneur identifies and solves social problems on a large scale. Just as business entrepreneurs create and transform whole industries, social entrepreneurs act as the change agents for society, seizing opportunities others miss in order to improve systems, invent and disseminate new approaches and advance sustainable solutions that create social value.  Unlike traditional business entrepreneurs, social entrepreneurs primarily seek to generate "social value" rather than profits. And unlike the majority of non-profit organizations, their work is targeted not only towards immediate, small-scale effects, but sweeping, long-term change."

The Ashoka Foundation similarly stresses the large-scale effects that social entrepreneurs seek to make: "Social entrepreneurs are individuals with innovative solutions to society's most pressing social problems. They are ambitious and persistent, tackling major social issues and offering new ideas for wide-scale change. Rather than leaving societal needs to the government or business sectors, social entrepreneurs find what is not working and solve the problem by changing the system, spreading the solution, and persuading entire societies to take new leaps."

While the above sources highlight the often ambitious, indeed global, goals of social entrepreneurs, most social entrepreneurship, in fact, involves developing and sustaining specific organizations and, in turn, operating discrete programs.  These programmatic efforts can benefit from program evaluations that gather information to show that they are making the differences their founders hope to make.  Evaluations of social entrepreneur-sponsored initiatives not only provide critical evidence of impact, but, equally importantly, provide objective information gathered directly from program recipients and other program stakeholders about the ways such efforts might be strengthened.  (See our previous post, "Listening to Those Who Matter Most, the Beneficiaries.")  Evaluations of social entrepreneur-sponsored initiatives are especially important because program participants, service recipients, and other beneficiaries are seldom in a position to provide feedback directly to the innovators who are responsible for the programming. Transformative initiatives, no less than smaller-scale programs, can substantially benefit from program evaluations. To learn more about our work with non-profits, visit our Non-Profits page.

See also "Advancing Evaluation Practices in Philanthropy" in the Stanford Social Innovation Review.

May 9, 2013

Listening to Those Who Matter Most, the Beneficiaries

"Listening to Those Who Matter Most, the Beneficiaries" (Spring 2013, Stanford Social Innovation Review) highlights the importance of incorporating the perspectives of program beneficiaries (participants, clients, service recipients, etc.) into program evaluations.  The authors note that non-profit organizations, unlike their counterparts in health care, education, and business, are often not as effective in gathering feedback and input from those they serve.  Although extremely important, the collection of opinions and perspectives from program participants poses three fundamental challenges: 1) it can be expensive and time intensive; 2) it is often difficult to collect data, especially with disadvantaged and minimally literate populations; and 3) honest feedback can make us (i.e., program funders and program implementers) uncomfortable, especially if program beneficiaries don't think that programs are working the way they are supposed to.

As the authors point out, feedback from participants is important for two fundamental reasons. First, it provides a voice to those who are served. As Bridgespan Group partner Daniel Stid notes, "Beneficiaries aren't buying your service; rather a third party is paying you to provide it to them.  Hence the focus shifts more toward the requirements of who is paying, versus the unmet needs and aspirations of those meant to benefit."  Second, and equally importantly, gathering and analyzing the perspectives and opinions of beneficiaries can help program implementers to refine programming and make it more effective.

The authors of “Listening to Those Who Matter Most, the Beneficiaries,” make a strong case for systematically collecting and utilizing beneficiary input. “Beneficiary Feedback isn’t just the right thing to do, it is the smart thing to do.”

Our experience in designing and conducting program evaluations has shown the value of soliciting the views, perspectives, and narrative experiences of program beneficiaries.  Beneficiary feedback is a fundamental component of our program evaluations—whether we are evaluating programs that serve homeless mothers, or programs that serve college students.  We’ve had 20 years of experience conducting interviews, focus groups, and surveys, which are designed to efficiently gather and productively use information from program participants.  While the authors of “Listening to Those Who Matter Most, the Beneficiaries,” suggest that such efforts can be resource intensive, and indeed they can be, we’ve developed strategies for maximizing the effectiveness of these techniques while minimizing the cost of their implementation. To learn more about our evaluation methods visit our Data collection & Outcome measurement page.

April 16, 2013

Integrity and Objectivity – Critical Components of Program Evaluation

Not long ago I was meeting with a prospective client.  It was our first meeting, and shortly after our initial conversation had begun, but long before we had a chance to discuss the purposes of the evaluation, the questions it would address, or the methods that would be used, the client began imagining the many marketing uses for the evaluation's findings.  Eager to dissuade my colleague from prematurely celebrating her program's successes, I observed that while the evaluation might reveal important information about the program's achievements and benefits, it might also find that the program had, in fact, not achieved some of the goals it had set out to realize.  I cautioned my colleague, "My experience tells me that we will want to wait to see what the evaluation shows before making plans to use evaluation findings for marketing purposes."  In essence, I was making a case for discovering the truth of the program before launching an advertising campaign.

We live in a period where demands for accountability and systematic documentation of program achievements are pervasive.  Understandably, grantees and program managers are eager to demonstrate the benefits of their programs.  Indeed, many organizations rely upon evaluation findings to demonstrate to current and future funders that they are making a difference and that their programs are worthy of continued funding.  Despite these pressures, it is very important that program evaluations be conducted with the utmost integrity and objectivity so that findings are accurate and useful to all stakeholders.

The integrity of the evaluation is critical, indeed paramount, for all stakeholders.  Reliable, robust, and unbiased evaluation findings are important not just to funders, who want to know whether their financial resources were used wisely, but also to program implementers, who need to know whether they are making the difference(s) they intend to make.  Without objective data about the outcomes a program produces, no one can know with any certainty whether a program is a success or a "failure" (take a look at our blog post "Fail Forward," which examines what we can learn from "failure"), i.e., whether it needs refining and strengthening.

As a member of the American Evaluation Association, Brad Rose Consulting, Inc. is committed to upholding the AEA's "Guiding Principles for Evaluators." The Principles state:
“Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process. Evaluators should be explicit about their own, their clients’, and other stakeholders’ interests and values concerning the conduct and outcomes of an evaluation (and)… should not misrepresent their procedures, data or findings. Within reasonable limits, they should attempt to prevent or correct misuse of their work by others.”

In each engagement, Brad Rose Consulting, Inc. adheres to the "Guiding Principles for Evaluators" because we are committed to ensuring that the findings of our evaluations are clearly and honestly represented to all stakeholders. This commitment is critical not just to program sponsors, the people who pay for programs, but also to program managers and implementers, who also need unbiased and dispassionate information about the results of the programs they operate. To learn about the evaluation methods we offer, visit our Data collection & Outcome measurement page.

March 19, 2013

Transforming “Data” Into Knowledge

In his recent article in the New York Times, "What Data Can't Do" (February 18, 2013), David Brooks discusses some of the limits of "data."

Brooks writes that we now live in a world that is saturated with gargantuan data collection capabilities, and that today’s powerful computers are able to handle huge data sets which “can now make sense of mind-bogglingly complex situations.” Despite these analytical capacities, there are a number of things that data can’t do very well. Brooks remarks that data is unable to fully understand the social world; often fails to integrate and deal with the quality (vs. quantity) of social interactions; and struggles to make sense of the “context,” i.e., the real environments, in which human decisions and human interactions are inevitably embedded. (See our earlier blog post Context Is Critical.)

Brooks insightfully notes that data often “obscures values,” by which he means that data often conceals the implicit assumptions, perspectives, and theories on which they are based. “Data is never ‘raw,’ it’s always structured according to somebody’s predispositions and values.” Data is always a selection of information. What counts as data depends upon what kinds of information the researcher values and thinks is important.

Program evaluations necessarily depend on the collection and analysis of data because data constitutes important measures and indicators of a program's operation and results. While evaluations require data, it is important to note that data alone, while necessary, is insufficient for telling the complete story about a program and its effects. To get at the truth of a program, it is necessary to 1) discuss both the benefits and limitations of what constitutes "the data," in order to understand what counts as evidence; 2) use multiple kinds of data, both quantitative and qualitative; and 3) employ experience-based judgment when interpreting the meaning of data.

Brad Rose Consulting, Inc. addresses the limitations pointed out by David Brooks by working with clients and program stakeholders to identify what counts as "data," and by collecting and analyzing multiple forms of data. We typically use a multi-method evaluation strategy, one which relies on both quantitative and qualitative measures. Most importantly, we bring to each evaluation project our experience-based judgment when interpreting the meaning of data, because we know that to fully understand what a program achieves (or doesn't achieve), evaluators need robust experience so that they can transform mere information into genuine, usable knowledge. To learn about our diverse evaluation methods, visit our Data collection & Outcome measurement page.

February 15, 2013

4 Advantages of an External Evaluator

Although there are a number of perfectly good reasons that an organization may choose to create and maintain an internal program evaluation capacity, there are also a number of very good reasons, indeed advantages, to use an external evaluator.

Breadth of Experience
External evaluators bring a breadth of experience evaluating a range of programs.  Such eclectic and wide-ranging experience can be especially useful when evaluating innovative programs that seek to creatively serve their target populations.  Evaluators who have worked in a variety of program contexts and who have worked with a diversity of program stakeholders can draw on their experience to inform current evaluation initiatives.  External evaluators have often “seen” an abundance of programs, and the resulting knowledge can be a substantial asset to the organization that engages their professional services.

Objectivity
External evaluators are often more disinterested and objective in their view of a program and its outcomes.  External evaluators are less susceptible to the internal politics of organizations and have less of an economic "stake" in the success or failure of a program, and therefore are better positioned to provide an unbiased eye with which to conduct a program evaluation.  Objectivity is critical for discovering whether a program really works, and is essential if program stakeholders are to know whether a program achieves its intended results.

Specialized Expertise
While internal evaluators may be highly skilled, external evaluators can often mobilize a range of expertise and technical skills that internal evaluators have had less opportunity to develop. Because professional evaluators specialize in developing their skills and apply their expertise to a variety of programs, they often have a superior "quiver" of evaluation knowledge and wisdom.  Professional expertise is the province of specialization, and career program evaluators necessarily develop rich and deep evaluation expertise.

Cost Effectiveness
An external evaluator can be very cost effective, especially for smaller and mid-sized organizations (local non-profits, community-based organizations, school districts, family and community foundations, colleges, etc.) that may not have sufficient resources to fund and maintain an internal evaluation capacity.  External evaluators are able to contain and reduce infrastructure costs, and therefore are comparatively inexpensive for clients.

With 20 years of experience in providing program evaluation services to its clients in the non-profit, foundation, education, health, and community service sectors, Brad Rose Consulting, Inc. is able to provide cost effective, customized, and high-value, program evaluations to its clients.  More information about the kinds of program evaluation services we provide is available here.

December 17, 2012

Context is Critical

“Context matters,” Debra Rog, past president of the American Evaluation Association, reported in her address to the association's 2012 meeting, and recently wrote in her insightful article "When Background Becomes Foreground: Toward Context-Sensitive Evaluation Practice" (New Directions for Evaluation, No. 135, Fall 2012).
Indeed, as Rog correctly points out, program evaluators (and program sponsors) often ignore program context at their own peril.  Too often evaluations begin with attention focused on methodology, only to discover later, often when it is too late, that the results, meanings, and uses of evaluation findings have been hampered by insufficient consideration of the context in which the evaluation is conducted.

Five Aspects of Context
Rog says there are five aspects of context that directly and indirectly affect the selection, design, and ultimate success of an evaluation:
1. the nature of the problem that a program or intervention seeks to address
2. the nature of the intervention—how the structure, complexity, and dynamics of the program (including program life cycle) affect the selection and implementation of the evaluation approach
3. the broader environment (or setting) in which the program is situated and operates (for example, the availability of affordable housing may profoundly affect the outcomes and successes of a program intended to assist homeless persons to access housing)
4. the parameters of the evaluation, including the evaluation budget, allotted time for implementation of evaluation activities, and the availability of data
5. the decision-making context for the evaluation: who are the decision-makers that will use evaluation findings, which types of decisions do they need to make, and which standards of rigor and levels of confidence do decision-makers require

Context-Sensitive Evaluations – What to Look For
Rog underscores the importance of conducting "context-sensitive" evaluations: evaluations that first consider the various aspects of the context in which programs operate and in which program evaluation activities will occur.  She makes a plea to evaluators and evaluation sponsors to refrain from a "methods-first" approach, which too often fetishizes methodologies at the cost of conducting appropriate evaluations that can be of maximum value and use to all program stakeholders.

In our experience, the most effective context-sensitive evaluations address:

  • Who program stakeholders are (funders, program managers, program participants, community members, etc.)
  • What stakeholders need to learn about a program’s operation and outcomes
  • The social and economic context of the program’s operation
  • Key questions to guide the evaluation research
  • Research methodologies that provide robust and cost-effective findings
  • A logic model that clearly specifies the program activities and results
  • A wide range of evaluation research methodologies and data collection strategies to ensure that program results are systematically and rigorously measured
  • Clear, accessible reports so that all stakeholders benefit from evaluation findings
  • Detailed recommendations for how sponsors and program managers can strengthen further efforts

When it comes to program evaluation, not only does context matter, it is on the critical path for getting the best results. To find out about how we consider context in our evaluation methods visit our Data collection & Outcome measurement page.

November 7, 2012

Fail Forward: What We Can Learn from Program “Failure”

Vilfredo Pareto, an Italian economist, sociologist, and philosopher, observed: "Give me a fruitful error any time, full of seeds, bursting with its own corrections. You can keep your sterile truth for yourself." Programs frequently achieve some of their original goals while missing others.  In some cases, they achieve unintended results, both desirable and undesirable.  In other cases, programs fail to achieve any of the outcomes (i.e., changes, results) they originally intended.

Discovering New Information in Failure
Rather than identifying a program's unachieved results as merely program failure, we need to rethink what we can learn from failures. What do program failures tell us about what we do that's ineffective or otherwise misses the mark? By understanding how and why programs don't achieve the results they intend, we can design and execute improved programs in the future. It is important to note that psychological research has shown that individuals learn more from failure than they do from success. Our goals should be to learn from our defeats and to surmount them—especially in programs that address critical social, educational, and human service needs. Learning from the challenges that confront these kinds of programs can have a powerful impact on the success of future programming.

Of course programs shouldn’t seek to fail, but they should seek to learn from the challenges that they encounter.  Constructive program evaluation can help organizations to learn what they need to be doing more effectively, and point the way to strengthened programming and enhanced results.  Constructive program evaluation identifies challenges, analyzes why such challenges detract from desired outcomes, and helps program sponsors and implementers to understand how to strengthen and refine programming so that the next iteration achieves its goals.

Brad Rose Consulting, Inc. is committed to conducting program evaluations that help program managers, funders, and stakeholders to ensure successful program design, accurately measure results, and make timely adjustments in order to maximize positive program impacts. To learn more about our commitment to learning from failure visit our Feedback & Continuous improvement page.

Samuel Beckett wrote, "Ever tried. Ever failed. No matter. Try again. Fail again. Fail better."
See: “Embracing Failure,” an article at the Asian Development Bank

*This post is indebted to a lively and productive discussion which appeared on the American Evaluation Association’s listserv October, 2012.
Copyright © 2020 - Brad Rose Consulting