What are Nonprofit Organizations?
Organizations are social entities that have a collective purpose and that interact with larger environments (the economy, society, other organizations, etc.). Nonprofit organizations are a sub-type of organization that uses “surplus revenues” (i.e., revenues beyond those needed for operations and sustenance) to achieve desirable social ends, rather than to produce profits or dividends. In essence, nonprofits use their financial resources (provided by individual donations, foundation and government grants, etc.) to improve the lives and conditions of community members. In the US, there is a wide range of nonprofit organizations, including hospitals, charities, educational organizations, social welfare organizations, foundations, community organizations, etc. According to the National Center for Charitable Statistics, there are approximately 1.5 million nonprofits in the US.
Organizations vs. Programs
Many nonprofit organizations mobilize resources in the form of organized programs that provide activities and products designed to improve the lives of program participants. Carter McNamara writes, “A program is a collection of resources in an organization and is geared to accomplish a certain goal or set of goals. Programs are one major aspect of the non-profit’s structure. The typical non-profit organizational structure is built around programs, that is, the non-profit provides certain major services, each of which is usually formalized into a program.” (http://literacy.kent.edu/Oasis/grants/overviewprogplan.html) In serving program participants, nonprofits strive to deploy program resources, including knowledge, activities, and materials, effectively and efficiently in order to positively affect the lives of those they serve.
Program Evaluation Meets Organization Development
In order to assess the effectiveness and efficiency of programs, nonprofits often conduct program evaluations. Program evaluations are customarily guided by a set of evaluation research questions, e.g.: What are the effects of the program on participants? What challenges did participants encounter while in the program? Did the program make a difference in the lives of those it was intended to serve? Did the program cause the observed changes in program participants? (For more examples of evaluation questions, see our previous posts “Questions Before Methods” and “Approaching an Evaluation: 10 Issues to Consider.”) Although program evaluations are customarily aimed at gathering and analyzing data about discrete programs, the most useful evaluations often collect, synthesize, and report information that can be useful in improving the broader operation and health of the organization that hosts the program. Program evaluation thus can contribute to organization development, “the deliberately planned, organization-wide effort to increase an organization’s effectiveness and/or efficiency and/or to enable the organization to achieve its strategic goals” (Wikipedia).
In fact, findings from program evaluations often have important implications for the development and sustainability of the entire host organization. This is especially true for small- to medium-sized nonprofit organizations, whose core programs often comprise the bulk of the organization’s structure and raison d’être. Consequently, information from program evaluations—especially formative evaluations, which focus on strengthening program effectiveness—can be used to clarify the organization’s goals and objectives, to identify key organizational challenges and ways to address them, and to strengthen the overall effectiveness of the organization’s efforts. Additionally, program evaluations can offer an ideal opportunity for an organization to reflect on its practices and purposes, to rethink ways to achieve its mission, and to identify new data-based strategies for enhancing its long-term viability and well-being. Ultimately, program evaluation can, and in many cases should, be an integral component of organization development.
About Nonprofit Organizations:
Types of Non Profit Organizations in the US:
Overview of Nonprofit Program Planning by Carter McNamara:
Basic Guide to Nonprofit Program Design and Marketing:
What is Organization Development?
When discussing potential sources of data about a program’s operations and effects, clients often tell me, “But we just have anecdotal evidence.” It’s as if anecdotal data don’t count. Too often, anecdotes are dismissed as unscientific and valueless—as if they are just stories. In point of fact, anecdotes (qualitative accounts, “word-based” data) can be a valuable source of information and can offer powerful insights into how a program works and the effects it produces. When carefully collected and systematically analyzed, especially when combined with other sources of quantitative data, anecdotes can be a powerful “window” on a program.
In a recent blog post (see link below), the evaluator Michael Quinn Patton reflects on the value and utility of anecdotal information. Patton shows that, when collected in sufficient quantity, compared (or “triangulated”) with other kinds of data, and systematically and sensibly analyzed, anecdotes can provide important information about the character and meaning of a given phenomenon. Furthermore, anecdotes are often the starting place for hypotheses and experiments that ultimately produce quantitative evidence of phenomena. William Trochim underscores the importance of word-based, qualitative data (of which anecdotes are a specific type) when he points out: “All quantitative data is based on qualitative judgment. Numbers in and of themselves can’t be interpreted without understanding the assumptions which underlie them…” (David Foster Wallace made a similar point, from an entirely different vantage point, in Consider the Lobster: “You can’t escape language. Language is everything and everywhere. It’s what lets us have anything to do with one another.” p. 70.)
Trochim goes on to say,
“All numerical information involves numerous judgments about what the number means. The bottom line here is that quantitative and qualitative data are, at some level, virtually inseparable. Neither exists in a vacuum or can be considered totally devoid of the other. To ask which is “better” or more “valid” or has greater “verisimilitude” or whatever ignores the intimate connection between them. To do good research we need to use both the qualitative and the quantitative.”
Patton reminds us of the importance of anecdotes when he quotes Nicholas G. Carr, author of The Shallows: What the Internet Is Doing to Our Brains (New York: W. W. Norton, 2010):
“We live anecdotally, proceeding from birth to death through a series of incidents, but scientists can be quick to dismiss the value of anecdotes. ‘Anecdotal’ has become something of a curse word, at least when applied to research and other explorations of the real. . . . The empirical, if it’s to provide anything like a full picture, needs to make room for both the statistical and the anecdotal.
The danger in scorning the anecdotal is that science gets too far removed from the actual experience of life, that it loses sight of the fact that mathematical averages and other such measures are always abstractions.”
I believe that it is important to use multiple kinds of information to understand what programs do and what their outcomes are. Quantitative data is essential for understanding abstract trends and for getting at the “larger picture.” That said, it is nearly impossible to make sense of quantitative data without using language to reveal its assumptions, implications, explanations, and meaning. Anecdotal data, as one kind of qualitative data, is critical to effective program evaluation research.
Michael Quinn Patton, “Anecdote as Epithet – Rumination #1 from Qualitative Research and Evaluation Methods”
William Trochim, “The Qualitative Debate”
“The Qualitative-Quantitative Debate”
Video about Qualitative, Quantitative, and Mixed-Methods Research
A Table Summarizing Qualitative versus Quantitative Research: Key Points in a Classic Debate
Revisiting the Quantitative-Qualitative Debate: Implications for Mixed-Methods Research
Qualitative research interviews are a critical component of program evaluations. In-person and telephone interviews are especially valuable because they allow the evaluator to participate in direct conversations with program participants, program staff, community members, and other stakeholders. These conversations enable the evaluator to learn, in a rich conversational setting, about interviewees’ experiences, perspectives, attitudes, and knowledge. Unlike questionnaires and surveys, which typically require structured, categorical responses to standardized written questions so that data can be quantified, qualitative interviews allow for deeper probing of interviewees and the use of clarifying follow-up questions that can surface information that often remains unrevealed in survey/questionnaire formats.
Although research interviews are guided by a pre-determined, written protocol which contains guiding questions, excellent interviews require a nimble and improvisational interviewer who can thoughtfully and swiftly respond to interviewees’ observations and reflections. Qualitative research interviews also require that the interviewer be a skilled listener and thoughtful interpreter of verbally presented data. Interviewers must listen carefully both to the denotative narrative “text” of the interviewee, and to the connotative subtext (the implied intent, tacit sub-themes and connotations) that the interviewee presents.
The most productive qualitative interviews are those that approximate a good conversation. This requires the interviewer to establish a comfortable atmosphere; ask interesting and germane questions; display respect for the interviewee; and create a sense of equality, candor, and reciprocity between interviewer and interviewee. Good interviews are not only a source of rich and informative data for the interviewer; they can also be a reflective learning opportunity for the interviewee. As in every good conversation, both parties should benefit.
Discussion of interview basics:
Tip Sheet for Qualitative Interviewing:
Advantages and disadvantages of different research methods (personal interviews, telephone surveys, mail surveys, etc.):
Interviews: An Introduction to Qualitative Research Interviewing, by Steinar Kvale, Sage Publications, Thousand Oaks, California, 1996:
About research interviewing:
Interviewing in qualitative research
Interviewing in educational research:
In a recent New York Times article, “Why You Hate Work,” Tony Schwartz and Christine Porath discuss why employees’ experience of work has increasingly become an experience of depletion and “burnout.” The factors are many, including: 1) demands on employee time that far exceed employees’ capacity to meet them; 2) a leaner and less populous workforce, and therefore more work distributed to fewer workers; and 3) technology-driven expectations for immediate response to requests for employees’ attention and commitment (think here of answering e-mails at 1:00 AM). The authors cite both national and international studies indicating that workers at all levels of various kinds of organizations feel less engaged, less satisfied, and less fulfilled by their experience at work. Schwartz and Porath argue, however, that when companies better address the physical, mental, emotional, and spiritual dimensions of their workers, they produce not only more engaged and fulfilled workers, but also more productive and profitable organizations. Organizations can begin to do this by instituting simple changes like mandating meetings that last no longer than 90 minutes; rewarding managers who display empathy, care, and humility; and providing regular and frequent breaks so that employees can “recharge” and work more creatively. Successful companies provide opportunities for employee renewal, focus, emotional support, and a sense of purpose. When companies provide such opportunities, companies, investors, and employees all benefit.
How might program evaluation add to non-profit organizations’ efforts to create what Schwartz and Porath call “truly human-centered organizations” that “put their people first…because they recognize that they are the key to creating long-term value”? While program evaluation alone cannot prevent employee burnout, timely and well-designed formative evaluations can add to non-profit organizations’ capacities to implement effective programming by providing insight into the unintended features of programs that often “get in the way” of staff (and program participants’) sense of efficacy and purposefulness. By conducting formative evaluations—evaluations that focus on program strengthening and effectiveness-maximization—program evaluation can help organizations and funders create programs in which staff and participants don’t have to “spin their wheels,” i.e., programs where both staff and participants can achieve a greater sense of effectiveness, purpose, and satisfaction.
Because Brad Rose Consulting, Inc. often works at the intersection of program evaluation and organization development (OD), we work with clients to collect data, understand the characteristics of programs, and provide evidence-based insights into how programs, and the organizations that support them, can become maximally effective. We make concrete recommendations that help our clients adjust their modes of operation and thereby increase staff engagement and better serve their participants/clients. While the latter are the reason programs exist, the former are often the under-recognized key to programs’ success.
In a recent themed issue devoted to the topic of validity in program evaluation, the journal New Directions for Evaluation (No. 142, Summer 2014) revisited and commented on Ernest House’s influential 1980 book, Evaluating with Validity. House argued that validity in evaluation must not be limited to classic scientific conceptions of the valid (i.e., empirically describing things as they are), but must also include an expanded dimension of argumentative validity, in which an evaluation “must be true, coherent, and just.” Paying particular attention to social context, House argued that “there is more to validity than getting the facts right.” He wrote that “…the validity of an evaluation depends upon whether the evaluation is true, credible, and normatively correct.” House ultimately argued that evaluations must make compelling and persuasive arguments about what is true (about a program) and thereby bring “truth, beauty, and justice” to the evaluation enterprise.
In the same issue of New Directions for Evaluation, in her essay “How ‘Beauty’ Can Bring Truth and Justice to Life,” E. Jane Davidson argues that the process of creating a clear, compelling, and coherent evaluative story (i.e., a “beautiful” narrative account) is the key to unlocking “validity (truth) and fairness (justice).” Briefly summarized, Davidson argues that a coherent evaluation story weaves together quantitative evidence, qualitative evidence, and clear evaluative reasoning to produce an account of what happened and the value of what happened. She says that an effective evaluation—one that is truly accessible, assumption-unearthing, and values-explicit—enables evaluators to “arrive at robust conclusions about not just what has happened, but how good, valuable, and important it is.” (p. 31)
House’s book and Davidson’s essay highlight how effective evaluations—ones that allow us to clearly see and understand what has happened in a program— rely on strong narrative accounts that tell a coherent and revealing story. Evaluations are not just tables and data—although these are necessary parts of any evaluation narrative—they are true, compelling, and fact-based stories about what happened, why things happened the way they did, and what the value (and meaning) is of the things that happened.
Davidson writes: “When I reflect on what has improved the quality of my own work in recent years, it has been a relentless push toward succinctness and crystal clarity while grappling with some quite complex and difficult material. For me this means striving to produce simple, direct, and clear answers to evaluation questions and being utterly transparent in the reasoning I have used to get to those conclusions.” (p. 39)
Davidson further observes that evaluation reports are often plagued by confusing, long-winded, academic jargon that not only makes them difficult to read, but also obscures the often muddled and ill-reasoned thinking behind the evaluation process itself. She argues that evaluation reporting must be clear, accessible, and simple—which does not mean that reports need to be simplistic, but that they must be coherent and comprehensible. I am reminded of a statement by the philosopher John Searle: “If you can’t say it clearly, you don’t understand it yourself.”
Reflecting on Davidson’s article, I realize that the best evaluation reports are the product of well-thought-out and effectively conducted evaluation research, presented in a clear and cogent way. The findings from such research may be complex, but they need not be obscure or enigmatic. On the contrary, clear evaluation reports must be true stories, well told.