Program evaluation services that increase program value for grantors, grantees, and communities.

Archive for category: Reporting

November 3, 2020

The Pandemic and Nonprofits


There are few aspects of society that the pandemic is not affecting. Accordingly, COVID-19 is having a substantial effect on the operation and sustenance of nonprofits. A June 2020 article in the Stanford Social Innovation Review, which summarizes a national survey of 750 members of primarily US-based nonprofits, reported that nearly 75% of respondents said their organizations had experienced a drop in revenues, and over 80% had moved all or some of their programs and services to an online format. (See “The Continuing Impact of COVID-19 on the Social Sector.”) Eighty percent of surveyed nonprofits are shifting to work from home, many have made or expect to make staff reductions, and many are considering mergers with other nonprofits. (See also the reported effects on 110 nonprofits in “The Impact of COVID-19 on Large and Mid-Sized Nonprofits,” June 15, 2020, Independent Sector, which reports that 71% of surveyed large and mid-sized nonprofits have reduced services.)

In the Nonprofit Quarterly article, “Nonprofits Struggle to Stay Alive amid COVID-19” one nonprofit CEO says “The impact of COVID-19 on the nonprofit community is unprecedented. It has affected the capacity and sustainability of every nonprofit—from education to the environment, affordable housing to mental health services, animal welfare to the arts—no organization will emerge unscathed.” This article goes on to say that in Arizona, “Fundraising and program cancellations as a result of COVID-19 have cost Arizona nonprofits an estimated $53 million in lost revenue (as of June 11). In that same survey, 25 percent of nonprofits indicated that they’ve had to lay off or furlough employees, and 69 percent report a loss of critical program volunteers.”

In a more recent article, “As Second Wave of Virus Looms, Some US Nonprofits Running Out of Road” (October 2020), Ruth McCambridge writes, “Nearly every aspect of operations has been shaken. Organizations have had to find new ways to provide their services while staying as close as they can to their stakeholders. Revenues shrank, but expenses did not go away. As communities are reopening and trying—perhaps too quickly—to return to ‘normal,’ we are also learning that no matter how important an organization’s mission might be, things are harder and more expensive in a world with COVID-19 still undefeated.”


How Important Are Nonprofits?

It’s vital to keep in mind not only the social role but also the economic importance of the nonprofit sector. The National Council of Nonprofits reports that “Nonprofits employ 12.3 million people, with payrolls exceeding those of most other U.S. industries, including construction, transportation, and finance. A substantial portion of the nearly $2 trillion nonprofits spend annually is the more than $826 billion they spend on salaries, benefits, and payroll taxes every year. Also, nonprofit staff members pay taxes on their salaries, as well as sales taxes on their purchases and property taxes on what they own.” (See “Economic Impact.”) The Urban Institute says that “Approximately 1.54 million nonprofits were registered with the Internal Revenue Service (IRS) in 2016, an increase of 4.5 percent from 2006.” (See “The Nonprofit Sector in Brief,” Urban Institute.) “The United States non-profit sector alone would rank as the 17th largest economy in the world.” (See “The Economic Impact of Nonprofit Organizations—Part One” by Andrew Paniello.)



While the 2020 COVID-19 pandemic brings multiple challenges, might there be things that nonprofits can do to adapt and sustain themselves? In “Three Things Nonprofits Should Prioritize in the Wake of COVID-19” by Amy Celep, Megan Coolidge & Lori Bartczak, in the April 30, 2020 Stanford Social Innovation Review, the authors say that nonprofits may choose to revisit their purposes and value in the current environment, in order to rally donors, engage partners, and motivate staff. They also suggest that nonprofits: 1) assess their financial situations, including understanding how they depend on various revenue streams; 2) model best-case, worst-case, and most likely financial scenarios; and 3) have frank conversations with donors and stakeholders about their plans and intentions, so that these nonprofits can get some sense of the reliability of contributed revenue streams. They note that “Many organizations are finding success in asking customers or members to donate pre-paid fees for canceled services and events to help programs and services they care about continue in the future.”

Although there are certainly new opportunities for nonprofits to rethink their operations and to create new approaches to fundraising, it is likely that the nonprofit sector will continue to face steep challenges in the months and years ahead. In fact, many of the challenges confronting nonprofits began before the onset of the pandemic. The ameliorative goals of many nonprofits had already been challenged by the operation of the “normal” economy, which favored the well-positioned and affluent, and created ever greater needs among ever larger swaths of the American population for the many services that nonprofits have delivered.

For additional resources and ideas that will help nonprofits to continue to serve their communities, see “Nonprofits and Coronavirus, COVID-19,” National Council of Nonprofits.



“COVID-19’s Impact on Nonprofits’ Revenues, Digitization, and Mergers,” by David La Piana, Stanford Social Innovation Review, June 4, 2020

“As Second Wave of Virus Looms, Some US Nonprofits Running Out of Road,” by Ruth McCambridge, October 20, 2020, Nonprofit Quarterly

“Nonprofits Struggle to Stay Alive amid COVID-19,” by Martin Levine, June 23, 2020, NonProfit Quarterly

“Three Things Nonprofits Should Prioritize in the Wake of COVID-19,” by Amy Celep, Megan Coolidge & Lori Bartczak, April 30, 2020, Stanford Social Innovation Review

“The Impact of COVID-19 on Large and Mid-Sized Nonprofits,” June 15, 2020, Independent Sector

“Nonprofits and Coronavirus, COVID-19,” National Council of Nonprofits

“Economic Impact,” National Council of Nonprofits

February 5, 2019

Pretending to Love Work

In a previous blog post, “Why You Hate Work,” we discussed a New York Times article that investigated the way the contemporary workplace too often produces a sense of depletion and employee “burnout.” In that article, the authors, Tony Schwartz and Christine Porath, argued that only when companies attempt to address the physical, mental, emotional, and spiritual dimensions of their employees by creating “truly human-centered organizations” can these companies create the conditions for more engaged and fulfilled workers, and in so doing, become more productive and profitable organizations.

In that blog post, we suggested that employee burnout is not an unknown feature of the nonprofit world, and that, while program evaluation cannot itself prevent employee burnout, it can add to nonprofit organizations’ capacities to create organizations in which staff and program participants have a greater sense of efficacy and purposefulness. (See also our blog post “Program Evaluation and Organization Development.”)

Of course, the problem of employee burnout and alienation is a perennial one. It occurs in both the for-profit and nonprofit sectors. In a more recent article, “Why Are Young People Pretending to Love Work?” (New York Times, January 26, 2019), Erin Griffith says that in recent years there has emerged a “hustle culture,” especially among millennials. This culture, Griffith argues, “…is obsessed with striving, (is) relentlessly positive, devoid of humor, and — once you notice it — impossible to escape.” She cites the artifacts of such a culture, which include, at one WeWork location in New York, neon signs that exhort workers to “Hustle harder” and murals that spread the gospel of T.G.I.M. (Thank God It’s Monday). Somewhat horrified by the Stakhanovite tenor of the WeWork environment, Griffith notes, “Even the cucumbers in WeWork’s water coolers have an agenda. ‘Don’t stop when you’re tired,’… ‘Stop when you are done.’” “In the new work culture,” Griffith observes, “enduring or even merely liking one’s job is not enough. Workers should love what they do, and then promote that love on social media, thus fusing their identities to that of their employers.”

Griffith is not concerned with employee burnout. Instead, she is horrified by the degree to which many younger employees have internalized the obsessively productivist, “workaholic” norms of their employers and, more broadly, of contemporary corporations. These norms include the apotheosis of excessive work hours and the belief that devotion to anything other than work is somehow a shameful betrayal of the work ethic. She quotes David Heinemeier Hansson, the founder of the online platform Basecamp, who observes, “The vast majority of people beating the drums of hustle-mania are not the people doing the actual work. They’re the managers, financiers and owners.”

Griffith writes, “…as tech culture infiltrates every corner of the business world, its hymns to the virtues of relentless work remind me of nothing so much as Soviet-era propaganda, which promoted impossible-seeming feats of worker productivity to motivate the labor force. One obvious difference, of course, is that those Stakhanovite posters had an anti-capitalist bent, criticizing the fat cats profiting from free enterprise. Today’s messages glorify personal profit, even if bosses and investors — not workers — are the ones capturing most of the gains. Wage growth has been essentially stagnant for years.”


“Why Are Young People Pretending to Love Work?” Erin Griffith, New York Times, January 26, 2019

“Why You Hate Work”

“The Fleecing of Millennials” David Leonhardt, New York Times, January 27, 2019

September 5, 2018

A Lifetime of Learning

Pablo Picasso once said, “It takes a long time to become young.” The same may be said about education and the process of becoming educated. While we often associate formal education with youth and early adulthood, the fact is that education is an increasingly recognized lifelong endeavor that occurs far beyond the confines of early adulthood and traditional educational institutions.

In a recent article in The Atlantic, “Lifetime Learner,” by John Hagel III, John Seely Brown, Roy Mathew, Maggie Wooll & Wendy Tsu, the authors discuss the emergence of a rich and ever-expanding “ecosystem” of organizations and institutions that have arisen to serve the unmet educational needs and expectations of learners who are not enrolled in formal, traditional educational institutions (e.g., community colleges, colleges, and universities). “This ecosystem of semi-structured, unorthodox learning providers is emerging at ‘the edges’ of the current postsecondary world, with innovations that challenge the structure and even the existence of traditional education institutions.”

Hagel III, et al. argue that economic forces, together with emerging technologies, are enabling learners to do an “end run” around traditional educational providers and to gain access to knowledge and information in new venues. The growing availability of, and access to, MOOCs (Massive Open Online Courses), YouTube, Open Educational Resources, and other online learning platforms enables more and more learners to advance their learning and career goals outside the purview of traditional post-secondary institutions.

While the availability of alternative, lifelong educational resources is helping some non-traditional students to advance their educational goals, it is also having an effect on traditional post-secondary institutions. Hagel and his coauthors argue that “The educational institutions that succeed and remain relevant in the future…will likely be those that foster a learning environment that reflects the networked ecosystem and (that will become) meaningful and relevant to the lifelong learner. This means providing learning opportunities that match the learner’s current development and stage of life.” The authors cite as examples community colleges that are now experimenting with “stackable” credentials that “provide short-term skills and employment value, while enabling students to return over time and assemble a coherent curriculum that helps them progress toward career and personal goals,” and “some universities (that) have started to look at the examples coming from both the edges of education and areas such as gaming and media to imagine and conduct experiments in what a future learning environment could look like.”

The authors say that in the future colleges and universities will benefit from considering such things as:

  1. Providing the facilities and locations for a variety of learning experiences, many of which will depend on external sources for content
  2. Aggregating knowledge resources and connecting these resources with appropriate learners rather than acting as sole “vendors” of knowledge
  3. Acting as lifelong “agents” for learners by helping learners to navigate a world of exponential change and an abundance of information

While these goals are ambitious, they highlight the remarkably changing terrain in continuing education. Educational “consumers” are increasingly likely to seek inexpensive and more accessible pathways to knowledge. As the authors point out, individuals’ lifelong learning needs are likely to continue to increase, so correspondingly, the pressures on traditional post-secondary education are likely to grow. Whether learners’ needs are more effectively addressed by re-orienting traditional post-secondary institutions or by the patchwork “ecosystem” of semi-structured, unorthodox learning-providers who inhabit what the authors of “Lifetime Learner” term “the edges” of the postsecondary world, is difficult to predict.


Lifelong learning, Wikipedia

“Lifetime Learner” by John Hagel III, John Seely Brown, Roy Mathew, Maggie Wooll & Wendy Tsu, The Atlantic

“The Third Education Revolution: Schools are moving toward a model of continuous, lifelong learning in order to meet the needs of today’s economy” by Jeffrey Selingo, The Atlantic, Mar 22, 2018

August 14, 2018

Robots Grade Your Essays and Read Your Resumes

We’ve previously written about the rise of artificial intelligence and its current and anticipated effects on employment. (See links to previous blog posts, below.) Two recent articles treat the effects of AI on the assessment of students and the hiring of employees.

In her recent article for NPR, “More States Opting To ‘Robo-Grade’ Student Essays By Computer,” Tovia Smith discusses how so-called “robo-graders” (i.e., computer algorithms) are increasingly being used to grade students’ essays on state standardized tests. Smith reports that Utah and Ohio currently use computers to read and grade students’ essays, and that Massachusetts will soon follow suit. Peter Foltz, a research professor at the University of Colorado, Boulder, observes, “We have artificial intelligence techniques which can judge anywhere from 50 to 100 features…We’ve done a number of studies to show that the (essay) scoring can be highly accurate.” Smith also notes that Utah, which once had humans review students’ essays after they had been graded by a machine, now relies on the machines almost exclusively. Cyndee Carter, assessment development coordinator for the Utah State Board of Education, reports “…the state began very cautiously, at first making sure every machine-graded essay was also read by a real person. But…the computer scoring has proven ‘spot-on’ and Utah now lets machines be the sole judge of the vast majority of essays.”

Needless to say, despite support for “robo-graders,” there are critics of automated essay assessment. Smith details how one critic, Les Perelman at MIT, has created an essay-generating program, the BABEL generator, that creates nonsense essays designed to trick the algorithmic “robo-graders” for the Graduate Record Exam (GRE). When Perelman submits a nonsense essay to the GRE computer, the algorithm gives the essay a near-perfect score. Shaking his head, Perelman observes, “It makes absolutely no sense. There is no meaning. It’s not real writing. It’s so scary that it works…. Machines are very brilliant for certain things and very stupid on other things. This is a case where the machines are very, very stupid.”

Critics of “robo-graders” are also worried that students might learn how to game the system, that is, give the algorithms exactly what they are looking for, and thereby receive undeservedly high scores. Cyndee Carter, the assessment development coordinator for the Utah State Board of Education, describes instances of students gaming the state test: “…Students have figured out that they could do well writing one really good paragraph and just copying that four times to make a five-paragraph essay that scores well. Others have pulled one over on the computer by padding their essays with long quotes from the text they’re supposed to analyze, or from the question they’re supposed to answer.”

Despite these shortcomings, computer designers are learning and further perfecting computer algorithms. It’s anticipated that more states will soon use refined algorithms to read and grade student essays.

Grading student essays is not the end of computer assessment. Once you’ve left school and started looking for a job, you may find that your resume is read not by an employer eager to hire a new employee, but by an algorithm whose job it is to screen for appropriate job applicants. In the brief article, “How Algorithms May Decide Your Career: Getting a job means getting past the computer,” The Economist reports that most large firms now use computer programs, or algorithms, to screen candidates seeking junior jobs. Applicant Tracking Systems (ATS) can reject up to 75% of candidates, so it becomes increasingly imperative for applicants to send resumes filled with key words that will pique the screening computers’ interest.

Once your resume passes the initial screening, some companies use computer-driven video interviews to further screen and select candidates. “Many companies, including Vodafone and Intel, use a video-interview service called HireVue. Candidates are quizzed while an artificial-intelligence (AI) program analyses their facial expressions (maintaining eye contact with the camera is advisable) and language patterns (sounding confident is the trick). People who wave their arms about or slouch in their seat are likely to fail. Only if they pass that test will the applicants meet some humans.”

Although one might think that computer-driven screening systems might avoid some of the biases of traditional recruitment processes, it seems that AI isn’t bias free, and that algorithms may favor applicants who have the time and monetary resources to continually retool their resumes so that these present the code words that employers are looking for. (This is similar to gaming the system, described above.) “There may also be an ‘arms race’ as candidates learn how to adjust their CVs to pass the initial AI test, and algorithms adapt to screen out more candidates.”


“More States Opting To ‘Robo-Grade’ Student Essays By Computer,” Tovia Smith, NPR, June 30, 2018

“How Algorithms May Decide Your Career: Getting a job means getting past the computer,” The Economist, June 21, 2018

Will You Become a Member of the Useless Class?

Humans Need Not Apply: What Happens When There’s No More Work?

Will President Trump’s Wall Keep Out the Robots?

“Welcoming our New Robotic Overlords,” Sheelah Kolhatkar, The New Yorker, October 23, 2017

“AI, Robotics, and the Future of Jobs,” Pew Research Center

“Artificial intelligence and employment,” Global Business Outlook

July 31, 2018

Are There Any Questions?

Asking questions is a critical aspect of learning. We’ve previously written about the importance of questions in our blog post “Evaluation Research Interviews: Just Like Good Conversations.” In a recent article, “The Surprising Power of Questions,” which appears in the Harvard Business Review, May–June 2018, authors Alison Wood Brooks and Leslie K. John offer suggestions for asking better questions.

As Brooks and John report, we often don’t ask enough questions during our conversations. Too often we talk rather than listen. Brooks and John, however, note that recent research shows that by asking good questions and genuinely listening to the answers, we are more likely to achieve both genuine information exchange and effective self-presentation. “Most people don’t grasp that asking a lot of questions unlocks learning and improves interpersonal bonding.”

Although asking more questions in our conversations is important, the authors show that asking follow-up questions is critical. Follow-up questions “…signal to your conversation partner that you are listening, care, and want to know more. People interacting with a partner who asks lots of follow-up questions tend to feel respected and heard.”

Another critical component of question-asking is to be sure that we ask open-ended questions, not simply categorical (yes/no) questions. “Open-ended questions…can be particularly useful in uncovering information or learning something new. Indeed, they are wellsprings of innovation—which is often the result of finding the hidden, unexpected answer that no one has thought of before.”

Asking effective questions depends, of course, on the purpose and context of conversations. That said, it is vital to ask questions in an appropriate sequence. Counterintuitively, asking tougher questions first and leaving easier questions until later “…can make your conversational partner more willing to open up.” On the other hand, asking tough questions too early in the conversation can seem intrusive and sometimes offensive. If the ultimate goal of the conversation is to build a strong relationship with your interlocutor, especially someone whom you don’t know, or don’t know well, it may be better to open with less sensitive questions and escalate slowly. Tone and attitude are also important: “People are more forthcoming when you ask questions in a casual way, rather than in a buttoned-up, official tone.”

While question-asking is a necessary component of learning, the authors remind us that “The wellspring of all questions is wonder and curiosity and a capacity for delight. We pose and respond to queries in the belief that the magic of a conversation will produce a whole that is greater than the sum of its parts. Sustained personal engagement and motivation—in our lives as well as our work—require that we are always mindful of the transformative joy of asking and answering questions.”


The Surprising Power of Questions,” Alison Wood Brooks and Leslie K. John. Harvard Business Review, May–June 2018 (pp.60–67)

Using Qualitative Interviews in Program Evaluations

December 11, 2017

What’s the Difference? 10 Things You Should Know About Organizations vs. Programs

Organizations vs. Programs

Organizations are social collectivities that have: members/employees, norms (rules for, and standards of, behavior), ranks of authority, communications systems, and relatively stable boundaries. Organizations exist to achieve purposes (objectives, goals, and missions) and usually exist in a surrounding environment (often composed of other organizations, individuals, and institutions.) Organizations are often able to achieve larger-scale and more long-lasting effects than individuals are able to achieve.  Organizations can take a variety of forms including corporations, non-profits, philanthropies, and military, religious, and educational organizations.

Programs are discrete, organized activities and actions (or sets of activities and actions) that utilize resources to produce desired, typically targeted, outcomes (i.e., changes and results). Programs typically exist within organizations. (It may be useful to think of programs as nested within one or, in some cases, more than one organization.) In seeking to achieve their goals, organizations often design and implement programs that use resources to achieve specific ends for program participants and recipients. Non-profit organizations, for example, implement programs that mobilize resources in the form of activities, services, and products that are intended to improve the lives of program participants/recipients. In serving program participants, nonprofits strive to effectively and efficiently deploy program resources, including knowledge, activities, services, and materials, to positively affect the lives of those they serve.

What is Program Evaluation?

Program evaluation is an applied research process that examines the effects and effectiveness of programs and initiatives. Michael Quinn Patton notes that “Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs in order to make judgements about the program, to improve program effectiveness, and/or to inform decisions about future programming. Program evaluation can be used to look at:  the process of program implementation, the intended and unintended results produced by programs, and the long-term impacts of interventions. Program evaluation employs a variety of social science methodologies–from large-scale surveys and in-depth individual interviews, to focus groups and review of program records.” Although program evaluation is research-based, unlike purely academic research, it is designed to produce actionable and immediately useful information for program designers, managers, funders, stakeholders, and policymakers.

Organization Development, Strategic Planning, and Program Evaluation

Organization Development is a set of processes and practices designed to enhance the ability of organizations to meet their goals and achieve their overall mission. It entails “…a process of continuous diagnosis, action planning, implementation and evaluation, with the goal of transferring (or generating) knowledge and skills so that organizations can improve their capacity for solving problems and managing future change.” (See: Organizational Development Theory, below) Organization Development deals with a range of features, including organizational climate, organizational culture (i.e., assumptions, values, norms/expectations, patterns of behavior), and organizational strategy. It seeks to strengthen and enhance the long-term “health” and performance of an organization, often by focusing on aligning organizations with their rapidly changing and complex environments through organizational learning, knowledge management, and the specification of organizational norms and values.

Strategic Planning is a tool that supports organization development. Strategic planning is a systematic process of envisioning a desired future for an entire organization (not just a specific program), and translating this vision into broadly defined set of goals, objectives, and a sequence of action steps to achieve these. Strategic planning is an organization’s process of defining its strategy, or direction, and making decisions about allocating its resources to pursue this strategy.

Strategic plans typically identify where an organization is now and where it wants to be in the future. They include statements about how to “close the gap” between the organization’s current state and its desired future state. Additionally, strategic planning requires making decisions about allocating resources to pursue an organization’s strategy. Strategic planning generally involves not just setting goals and determining actions to achieve them, but also mobilizing resources.

Program evaluation is uniquely able to contribute to organization development–the deliberately planned, organization-wide effort to increase an organization’s effectiveness and/or efficiency. Although evaluations are customarily aimed at gathering and analyzing data about discrete programs, the most useful evaluations collect, synthesize, and report information that can be used to improve the broader operation and health of the organization that hosts the program. Additionally, program evaluation can aid the strategic planning process, by using data about an organization’s programs to indicate whether the organization is successfully realizing its goals and mission through its current programming.

Brad Rose Consulting works at the intersection of evaluation and organization development. While our projects begin with a focus on discrete programs and initiatives, the answers to the questions that drive our evaluation research provide vital insights into the effectiveness of the organizations that host, design, and fund those programs. Findings from our evaluations often have important implications for the development and sustainability of the entire host organization.


Organizations: Structures, Processes, and Outcomes, Richard H. Hall and Pamela S. Tolbert, Pearson Prentice Hall, 9th edition

Utilization-Focused Evaluation, Michael Quinn Patton, Sage, 3rd edition, 1997

Organization Development: What Is It & How Can Evaluation Help?

Organization Development

Organizational Development Theory

Strategic Planning, Bain and Co. 2017

Strategic Planning

What a Strategic Plan Is and Isn’t

Ten Keys to Successful Strategic Planning for Nonprofit and Foundation Leaders

Elements of a Strategic Plan

Types of Strategic Planning

Understanding Strategic Planning

Five Steps to a Strategic Plan

The Big Lie of Strategic Planning, Roger L. Martin, Harvard Business Review, January-February 2014

October 6, 2014

Evaluation Reporting:  True Stories, Well Told

In a recent themed issue devoted to the topic of validity in program evaluation, the journal New Directions for Evaluation (No. 142, Summer 2014) revisited and commented on Ernest House’s influential 1980 book, Evaluating with Validity. House there argued that validity in evaluation must not be limited to classic scientific conceptions of the valid (i.e., empirically describing things as they are), but must also include an expanded dimension of argumentative validity, in which an evaluation “must be true, coherent, and just.” Paying particular attention to social context, House argued that “there is more to validity than getting the facts right.” He wrote that “…the validity of an evaluation depends upon whether the evaluation is true, credible, and normatively correct.” House ultimately argued that evaluations must make compelling and persuasive arguments about what is true (about a program) and thereby bring “truth, beauty, and justice” to the evaluation enterprise.

In the same issue of New Directions for Evaluation, in her essay, “How ‘Beauty’ Can Bring Truth and Justice to Life,” E. Jane Davidson argues that the process of creating a clear, compelling, and coherent evaluative story (i.e., a “beautiful” narrative account) is the key to unlocking “validity (truth) and fairness (justice).” To briefly summarize, Davidson argues that a coherent evaluation story weaves together quantitative evidence, qualitative evidence, and clear evaluative reasoning to produce an account of what happened and of the value of what happened. She says that an effective evaluation—one that is truly accessible, assumption-unearthing, and values-explicit—enables evaluators to “arrive at robust conclusions about not just what has happened, but how good, valuable, and important it is.” (p. 31)

House’s book and Davidson’s essay highlight how effective evaluations—ones that allow us to clearly see and understand what has happened in a program—rely on strong narrative accounts that tell a coherent and revealing story. Evaluations are not just tables and data—although these are necessary parts of any evaluation narrative—they are true, compelling, and fact-based stories about what happened, why things happened the way they did, and what the value (and meaning) is of the things that happened.

Davidson writes:

“When I reflect on what has improved the quality of my own work in recent years, it has been a relentless push toward succinctness and crystal clarity while grappling with some quite complex, and difficult material. For me this means striving to produce simple direct and clear answers to evaluation questions and being utterly transparent in the reasoning I have used to get to those conclusions.” (p. 39)

Davidson further observes that evaluation reports are often plagued by confusing, long-winded, academic jargon that not only makes them difficult to read, but also obscures the often muddled and ill-reasoned thinking behind the evaluation process itself. She argues that evaluation reporting must be clear, accessible, and simple—which does not mean that reports need to be simplistic, but that they must be coherent and comprehensible. I am reminded of a statement by the philosopher John Searle: “If you can’t say it clearly, you don’t understand it yourself.”

Reflecting on Davidson’s article, I realize that the best evaluation reports are the product of well thought-out and effectively conducted evaluation research, presented in a clear and cogent way. The findings from such research may be complex, but they need not be obscure or enigmatic. On the contrary, clear evaluation reports must be true stories, well told. To learn more about our evaluation reporting visit our Impact & Assessment reporting page.

September 10, 2014
10 Sep 2014

What You Need to Know About Outcome Evaluations: The Basics

In her book Evaluation (2nd Edition), Carol Weiss writes, “Outcomes define what the program intends to achieve.” (p. 117) Outcomes are the results or changes that occur, either in individual participants or in targeted communities. Outcomes occur because a program marshals resources and mobilizes human effort to address a specified social problem. Outcomes, then, are what the program is all about; they are the reason the program exists.

Outcome Evaluations

In order to assess which outcomes are achieved, program evaluators design and conduct outcome evaluations. These evaluations are intended to indicate, or measure, the kinds and levels of change that occur for those affected by the program or treatment. “Outcome evaluations measure how clients and their circumstances change, and whether the treatment experience (or program) has been a factor in causing this change. In other words, outcome evaluations aim to assess treatment effectiveness.” (World Health Organization)

Outcome evaluations, like other kinds of evaluations, may employ a logic model, or theory of change, which can help evaluators and their clients to identify the short-, medium-, and long-term changes that a program seeks to produce. (See our blog post “Using a Logic Model.”) Once intended changes are identified in the logic model, it is critical for the evaluator to further identify valid and effective measures of those changes, so that they are correctly documented. It is preferable to identify desired outcomes before the program begins operation, so that these outcomes can be tracked throughout the program’s life span.

Most outcome evaluations employ instruments that contain measures of attitudes, behaviors, values, knowledge, and skills. Such instruments may be standardized (and often validated), or they may be uniquely designed, special-purpose instruments (e.g., a survey designed specifically for a particular program). Additionally, the measures contained in an instrument can be either “objective,” i.e., not reliant on individuals’ self-reports, or “subjective,” i.e., based on informants’ self-estimates of effect. Ideally, outcome evaluations use objective measures whenever possible. In many instances, however, it is desirable to use instruments that rely on participants’ self-reported changes and reports of program benefits.

It is important to note that outcomes (i.e., changes or results) can occur at different points in the life span of a program. Although outcome evaluations are often associated with “summative,” or end-of-program-cycle, evaluations, program outcomes can occur in the early or middle stages of a program’s operation, so outcomes may be measured before the final stage of the program. It may even be useful for some evaluations to look at both short- and long-term outcomes, and therefore to be implemented at different points in time (i.e., early and late).
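For instance, tracking change at multiple measurement points might be sketched as follows. This is purely illustrative: the participant IDs, scores, and measurement schedule are hypothetical, not data from any actual program or instrument.

```python
# Hypothetical sketch: computing change scores from repeated administrations
# of an outcome instrument. All IDs and scores below are invented examples.

def change_scores(pre: dict, post: dict) -> dict:
    """Return post-minus-pre change for participants measured at both points."""
    return {pid: post[pid] - pre[pid] for pid in pre if pid in post}

pre_scores = {"p01": 12, "p02": 18, "p03": 15}   # baseline survey scores
mid_scores = {"p01": 15, "p02": 17, "p03": 19}   # mid-program scores
end_scores = {"p01": 20, "p02": 21}              # p03 not measured at program end

early_change = change_scores(pre_scores, mid_scores)
overall_change = change_scores(pre_scores, end_scores)

print(early_change)    # {'p01': 3, 'p02': -1, 'p03': 4}
print(overall_change)  # {'p01': 8, 'p02': 3}
```

Note that the sketch only reports change for participants measured at both points; handling attrition (like the missing end-of-program score above) is itself a design decision the evaluator must make explicit.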

Another issue relevant to outcome evaluation is dealing with unintended outcomes of a program. Programs can have a range of intended goals. Some outcomes or results, however, may not be part of the intended goals of the program; they nonetheless occur. It is critical for evaluations to try to capture the unintended consequences of a program’s operation as well as the intended outcomes.

Ultimately, outcome evaluations are the way that evaluators and their clients know whether the program is making a difference, which differences it is making, and whether those differences are the result of the program. To learn more about our evaluation and outcome assessment methods visit our Data collection & Outcome measurement page.


World Health Organization, Workbook 7.

Measuring Program Outcomes: A Practical Approach, United Way of America, 1996.

Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources.

Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation, E. Jane Davidson, Sage, 2005.

Evaluation (2nd Edition), Carol Weiss, Prentice Hall, 1998.

August 26, 2014
26 Aug 2014

Focus Groups

Pioneered by market researchers and mid-20th-century sociologists, focus groups are a qualitative research method that involves small groups of people in guided discussions about their attitudes, beliefs, experiences, and opinions on a selected topic or issue. Often used by marketers to obtain feedback from consumers about a product or service, focus groups have also become an effective and widely recognized social science research tool that enables researchers to explore participants’ views and to reveal rich data that often remain under-reported by other data collection strategies (e.g., surveys, questionnaires, etc.).

Organized around a set of guiding questions, focus groups typically are composed of 6-10 people and a moderator who poses open-ended questions that allow participants to respond freely. Focus groups usually include people who are somewhat similar in characteristics or social roles. Participants are selected for their knowledge, reflectiveness, and willingness to engage with topics or questions. Ideally—although this is not always possible—it is best to involve participants who do not already know one another.

Focus group conversations enable participants to offer observations, define issues, pose and refine questions, and create informative debate and discussion. Focus group moderators must be attentive, pose useful and creative questions, create a welcoming and non-judgmental atmosphere, and be sensitive to non-verbal cues and the emotional tenor of participants. Typically, focus group sessions are audio- or video-recorded so that researchers can later transcribe and analyze participants’ comments. Often an assistant moderator will take notes during the focus group conversation.

Focus groups have advantages over other data collection methods. They employ group dynamics that help to reveal information that would not emerge from an individual interview or survey; they produce relatively quick, low-cost data (an ‘economy of scale’ compared to individual interviews); they allow the moderator to pose appropriate and responsive follow-up questions; they enable the moderator to observe non-verbal data; and they often produce richer data than a questionnaire or survey.

Focus groups also can have some disadvantages, especially if not conducted by an experienced and skilled moderator: depending upon their composition, focus groups are not necessarily representative of the general population; respondents may feel social pressure to endorse other group members’ opinions or to refrain from voicing their own; and group discussions require effective “steering” so that key questions are answered and participants don’t stray from the topic.

Focus groups are often used in program evaluations. I have had extensive experience conducting focus groups with a wide range of constituencies. During my 20 years as a program evaluator, I’ve moderated focus groups composed of homeless persons; disadvantaged youth; university professors and administrators; K-12 teachers; K-12 and university students; corporate managers; and hospital administrators. In each of these groups I’ve found it beneficial to have a non-judgmental attitude, be genuinely curious, exercise gentle guidance, and respect the opinions, beliefs, and experiences of each focus group member. A sense of humor can also be extremely helpful. (See our previous posts “Interpersonal Skills Enhance Program Evaluation” and “Listening to Those Who Matter Most, the Beneficiaries.”) Or if you want to learn more about our qualitative approaches visit our Data collection & Outcome measurement page.



August 29, 2013
29 Aug 2013

Evaluation Workflow

Typically, we work with clients from the early stages of program development in order to understand their organization’s needs and the needs of program funders and other stakeholders. Following initial consultations with program managers and program staff, we work collaboratively to identify key evaluation questions, and to design a strategy for collecting and analyzing data that will provide meaningful and useful information to all stakeholders.

Depending upon the specific initiative, we implement a range of evaluation tools (e.g., interview protocols, web-based surveys, focus groups, quantitative measures, etc.) that allow us to collect, analyze, and interpret data about the activities and outcomes of the specified program. Periodic debriefings with program staff and stakeholders allow us to communicate preliminary findings, and to offer program managers timely opportunities to refine programming so that they can better achieve intended goals.

Our collaborative approach to working with clients allows us to actively support program managers, staff, and funders to make data-informed judgments about programs’ effectiveness and value. At the appropriate time(s) in the program’s implementation, we write a report(s) that details findings from program evaluation activities and that makes data-based suggestions for program improvement. To learn more about our approach to evaluation visit our Data collection & Outcome measurement page.

June 26, 2013
26 Jun 2013

Understanding How Programs Work: Using Logic Models to “Map” Cause and Effect

A logic model is a schematic representation of the elements of a program and the program’s resulting effects. A logic model (also known as a “theory of change”) is a useful tool for understanding the way a program intends to produce the outcomes (i.e., changes) it hopes to produce. Logic models typically consist of a flowchart that shows the logical connection between a program’s “inputs” (invested resources), “outputs” (program activities and actions), “short-term outcomes” (changes), “medium-term outcomes” (changes), and “long-range impacts” (changes).


When developing a logic model many evaluators and program staff rightly focus on inputs, outputs, and program outcomes (the core of the program).  However, it is critical to also include in the logic model the implicit assumptions that underlie the program’s operation, the needs that the program aspires to address, and the program’s environment, or context.  Assumptions, needs, and context are crucial factors in understanding how the program does what it intends to do.  Ultimately these are crucial to understanding the causal mechanisms that produce the intended changes of any program.
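To illustrate, a logic model that records assumptions, needs, and context alongside the core elements might be represented as a simple data structure. The field names and the tutoring-program example below are hypothetical assumptions for illustration, not a standard logic-model schema.

```python
# Hypothetical sketch of a logic model as a data structure. The example
# program and all of its entries are invented for illustration.
from dataclasses import dataclass

@dataclass
class LogicModel:
    needs: list              # problems the program aspires to address
    context: list            # the environment in which the program operates
    assumptions: list        # implicit causal assumptions behind the program
    inputs: list             # invested resources
    outputs: list            # program activities and actions
    short_term_outcomes: list
    medium_term_outcomes: list
    long_term_impacts: list

tutoring = LogicModel(
    needs=["Low reading proficiency among participating students"],
    context=["Under-resourced urban school district"],
    assumptions=["One-on-one attention improves reading engagement"],
    inputs=["Volunteer tutors", "Donated books", "Classroom space"],
    outputs=["Weekly one-on-one tutoring sessions"],
    short_term_outcomes=["Improved reading fluency"],
    medium_term_outcomes=["Higher reading-test scores"],
    long_term_impacts=["Improved graduation rates"],
)
```

Writing the model out this way forces the implicit causal assumptions into the open: each assumption is a claim about why a given output should produce the listed outcomes, and each is something the evaluation can examine.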

Without clearly understanding the causal mechanisms at work in a program, program staff may work ineffectively, placing emphasis on the wrong or ineffective activities—and ultimately fail to address the challenges the program intends to address. Similarly, without a clear understanding of the causal mechanisms that enable the program to achieve its outcomes, the program evaluation may not measure the proper outcomes, or may fail to see the changes the program, in fact, brings about.

Brad Rose Consulting, Inc. works with clients to develop simple, yet robust, logic models that explicitly document the causal mechanisms at work in a program. By discussing and explicitly identifying the often implicit causal assumptions, as well as highlighting the needs for the program and its social context, we not only ensure that the evaluation is properly designed and executed, we also help program implementers ensure that they are activating the causal processes that yield the changes the program strives to achieve.

Other Resources:

Read Brad’s current whitepaper, “Logic Modeling.”

Monitoring and Evaluation: Some Tools, Methods, and Approaches, The World Bank.

“The Logic Model Development Guide,” W.K. Kellogg Foundation.

Logic Model Resources at the University of Wisconsin

A Bibliography for Program Logic Models/Logframe Analysis


April 16, 2013
16 Apr 2013

Integrity and Objectivity – Critical Components of Program Evaluation

Not long ago I was meeting with a prospective client. It was our first meeting, and shortly after our initial conversation had begun—but long before we had a chance to discuss the purposes of the evaluation, the questions it would address, or the methods that would be used—the client began imagining the many marketing uses for the evaluation’s findings. Eager to dissuade my colleague from prematurely celebrating her program’s successes, I observed that while the evaluation might reveal important information about the program’s achievements and benefits, it might also find that the program had, in fact, not achieved some of the goals it had set out to realize. I cautioned my colleague, “My experience tells me that we will want to wait to see what the evaluation shows, before making plans to use evaluation findings for marketing purposes.” In essence, I was making a case for discovering the truth of the program before launching an advertising campaign.

We live in a period where demands for accountability and systematic documentation of program achievements are pervasive.  Understandably, grantees and program managers are eager to demonstrate the benefits of their programs.  Indeed, many organizations rely upon evaluation findings to demonstrate to current and future funders that they are making a difference and that their programs are worthy of continued funding.  Despite these pressures, it is very important that program evaluations be conducted with the utmost integrity and objectivity so that findings are accurate and useful to all stakeholders.

The integrity of the evaluation is critical, indeed paramount, for all stakeholders. Reliable, robust, and unbiased evaluation findings are important not just to funders who want to know whether their financial resources were used wisely, but also to program implementers, who need to know whether they are making the difference(s) they intend to make. Without objective data about the outcomes that a program produces, no one can know with any certainty whether a program is a success or a “failure,” i.e., whether it needs refining and strengthening. (Take a look at our blog post “Fail Forward,” which examines what we can learn from “failure.”)

As a member of the American Evaluation Association, Brad Rose Consulting, Inc. is committed to upholding the AEA’s “Guiding Principles for Evaluators.” The Principles state:
“Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process. Evaluators should be explicit about their own, their clients’, and other stakeholders’ interests and values concerning the conduct and outcomes of an evaluation (and)… should not misrepresent their procedures, data or findings. Within reasonable limits, they should attempt to prevent or correct misuse of their work by others.”

In each engagement, Brad Rose Consulting, Inc. adheres to the “Guiding Principles for Evaluators” because we are committed to ensuring that the findings of our evaluations are clearly and honestly represented to all stakeholders. This commitment is critical not just to program sponsors—the people who pay for programs—but also to program managers and implementers, who also need unbiased and dispassionate information about the results of the programs they operate. To learn about the evaluation methods we offer visit our Data collection & Outcome measurement page.

March 19, 2013
19 Mar 2013

Transforming “Data” Into Knowledge

In his article in the New York Times, “What Data Can’t Do” (February 18, 2013), David Brooks discusses some of the limits of “data.”

Brooks writes that we now live in a world that is saturated with gargantuan data collection capabilities, and that today’s powerful computers are able to handle huge data sets which “can now make sense of mind-bogglingly complex situations.” Despite these analytical capacities, there are a number of things that data can’t do very well. Brooks remarks that data is unable to fully understand the social world; often fails to integrate and deal with the quality (vs. quantity) of social interactions; and struggles to make sense of the “context,” i.e., the real environments, in which human decisions and human interactions are inevitably embedded. (See our earlier blog post Context Is Critical.)

Brooks insightfully notes that data often “obscures values,” by which he means that data often conceals the implicit assumptions, perspectives, and theories on which they are based. “Data is never ‘raw,’ it’s always structured according to somebody’s predispositions and values.” Data is always a selection of information. What counts as data depends upon what kinds of information the researcher values and thinks is important.

Program evaluations necessarily depend on the collection and analysis of data because data constitutes important measures and indicators of a program’s operation and results. While evaluations require data, data alone, while necessary, is insufficient for telling the complete story about a program and its effects. To get at the truth of a program, it is necessary to 1) discuss both the benefits and limitations of what constitutes “the data”—to understand what counts as evidence; 2) use multiple kinds of data—both quantitative and qualitative; and 3) employ experience-based judgment when interpreting the meaning of data.

Brad Rose Consulting, Inc. addresses the limitations pointed out by David Brooks by working with clients and program stakeholders to identify what counts as “data,” and by collecting and analyzing multiple forms of data. We typically use a multi-method evaluation strategy, one which relies on both quantitative and qualitative measures. Most importantly, we bring to each evaluation project our experience-based judgment when interpreting the meaning of data, because we know that to fully understand what a program achieves (or doesn’t achieve), evaluators need robust experience so that they can transform mere information into genuine, usable knowledge. To learn about our diverse evaluation methods visit our Data collection & Outcome measurement page.

February 15, 2013
15 Feb 2013

4 Advantages of an External Evaluator

Although there are a number of perfectly good reasons that an organization may choose to create and maintain an internal program evaluation capacity, there are also a number of very good reasons, indeed advantages, associated with the use of an external evaluator.

Breadth of Experience
External evaluators bring a breadth of experience evaluating a range of programs.  Such eclectic and wide-ranging experience can be especially useful when evaluating innovative programs that seek to creatively serve their target populations.  Evaluators who have worked in a variety of program contexts and who have worked with a diversity of program stakeholders can draw on their experience to inform current evaluation initiatives.  External evaluators have often “seen” an abundance of programs, and the resulting knowledge can be a substantial asset to the organization that engages their professional services.

Objectivity
External evaluators are often more disinterested and objective in their view of a program and its outcomes. They are less susceptible to the internal politics of organizations, have less of an economic ‘stake’ in the success or failure of a program, and therefore are better positioned to bring an unbiased eye to a program evaluation. Objectivity is critical for discovering whether a program really works, and is essential if program stakeholders are to know whether a program achieves its intended results.

Specialized Expertise
While internal evaluators may be highly skilled, external evaluators often mobilize a range of expertise and technical skills that internal evaluators have less opportunity to develop. Because professional evaluators specialize in developing their skills and apply their expertise to a variety of programs, they often have a superior ‘quiver’ of evaluation knowledge and wisdom. Professional expertise is the province of specialization, and career program evaluators necessarily develop rich and deep evaluation expertise.

Cost Effectiveness
An external evaluator can be very cost effective, especially for smaller and mid-sized organizations (local non-profits, community-based organizations, school districts, family and community foundations, colleges, etc.) that may not have sufficient resources to fund and maintain an internal evaluation capacity. External evaluators are able to contain and reduce infrastructure costs, and therefore are comparatively inexpensive for clients.

With 20 years of experience in providing program evaluation services to its clients in the non-profit, foundation, education, health, and community service sectors, Brad Rose Consulting, Inc. is able to provide cost effective, customized, and high-value, program evaluations to its clients.  More information about the kinds of program evaluation services we provide is available here.

October 1, 2012
01 Oct 2012

Sample Evaluation Report – MMUF

The Mellon Mays Undergraduate Fellowship (MMUF) is an initiative to reduce the serious under-representation of minorities in the faculties of higher education. MMUF commissioned Brad Rose Consulting to evaluate its success over the past 10 years. Here are Brad Rose’s findings. The report is a fantastic example of what you can expect from a commissioned evaluation.

To learn more about our work in education visit our Higher education & K-12 page.

Copyright © 2020 - Brad Rose Consulting