When discussing with clients potential sources of data about a program’s operations and effects, I have often been told, “But we just have anecdotal evidence.” It’s as if anecdotal data don’t count. Too often, anecdotes are dismissed as unscientific and valueless, as if they were just stories. In fact, anecdotes (qualitative, “word-based” accounts) can be a valuable source of information and can offer powerful insights into how a program works and the effects it produces. When carefully collected and systematically analyzed, especially in combination with quantitative data, anecdotes can be a powerful “window” on a program.
In a recent blog post (see link below), the evaluator Michael Quinn Patton reflects on the value and utility of anecdotal information. Patton shows that, when collected in sufficient quantity, compared (or “triangulated”) with other kinds of data, and systematically and sensibly analyzed, anecdotes can provide important information about the character and meaning of a given phenomenon. Furthermore, anecdotes are often the starting place for hypotheses and experiments that ultimately produce quantitative evidence of phenomena. William Trochim underscores the importance of word-based, qualitative data (of which anecdotes are a specific type) when he points out: “All quantitative data is based on qualitative judgment. Numbers in and of themselves can’t be interpreted without understanding the assumptions which underlie them…” (David Foster Wallace made a similar point, from an entirely different vantage point, in Consider the Lobster: “You can’t escape language. Language is everything and everywhere. It’s what lets us have anything to do with one another.” p. 70.)
Trochim goes on to say,
“All numerical information involves numerous judgments about what the number means. The bottom line here is that quantitative and qualitative data are, at some level, virtually inseparable. Neither exists in a vacuum or can be considered totally devoid of the other. To ask which is “better” or more “valid” or has greater “verisimilitude” or whatever ignores the intimate connection between them. To do good research we need to use both the qualitative and the quantitative.”
Patton reminds us of the importance of anecdotes when he quotes Nicholas G. Carr, author of The Shallows: What the Internet Is Doing to Our Brains (New York: W. W. Norton, 2010):
“We live anecdotally, proceeding from birth to death through a series of incidents, but scientists can be quick to dismiss the value of anecdotes. ‘Anecdotal’ has become something of a curse word, at least when applied to research and other explorations of the real. . . . The empirical, if it’s to provide anything like a full picture, needs to make room for both the statistical and the anecdotal.
The danger in scorning the anecdotal is that science gets too far removed from the actual experience of life, that it loses sight of the fact that mathematical averages and other such measures are always abstractions.”
I believe that it is important to use multiple kinds of information to understand what programs do and what their outcomes are. Quantitative data is essential for understanding abstract trends and for getting at the “larger picture.” That said, it is nearly impossible to make sense of quantitative data without using language to reveal its assumptions, implications, explanations, and meaning. Anecdotal data, as one kind of qualitative data, is critical to effective program evaluation research.
Michael Quinn Patton, “Anecdote as Epithet – Rumination #1 from Qualitative Research and Evaluation Methods”
William Trochim, “The Qualitative Debate”
“The Qualitative-Quantitative Debate”
Video about Qualitative, Quantitative, and Mixed-Methods Research
A Table Summarizing Qualitative versus Quantitative Research: Key Points in a Classic Debate
Revisiting the Quantitative-Qualitative Debate: Implications for Mixed-Methods Research
Qualitative research interviews are a critical component of program evaluations. In-person and telephone interviews are especially valuable because they allow the evaluator to participate in direct conversations with program participants, program staff, community members, and other stakeholders. These conversations enable the evaluator to learn in a rich conversational venue about interviewees’ experiences, perspectives, attitudes, and knowledge. Unlike questionnaires and surveys, which typically require structured, categorical responses to standardized written questions so that data can be quantified, qualitative interviews allow for deeper probing of interviewees and the use of clarifying follow-up questions which can surface information that often remains unrevealed in survey/questionnaire formats.
Although research interviews are guided by a pre-determined, written protocol which contains guiding questions, excellent interviews require a nimble and improvisational interviewer who can thoughtfully and swiftly respond to interviewees’ observations and reflections. Qualitative research interviews also require that the interviewer be a skilled listener and thoughtful interpreter of verbally presented data. Interviewers must listen carefully both to the denotative narrative “text” of the interviewee, and to the connotative subtext (the implied intent, tacit sub-themes and connotations) that the interviewee presents.
The most productive qualitative interviews are those that approximate a good conversation. This requires the interviewer to establish a comfortable atmosphere; ask interesting and germane questions; display respect for the interviewee; and create a sense of equality, candor, and reciprocity between interviewer and interviewee. Good interviews are not only a source of rich and informative data for the interviewer; they can also be a reflective learning opportunity for the interviewee. As in every good conversation, both parties should benefit.
Discussion of interview basics:
Tip Sheet for Qualitative Interviewing:
Advantages and disadvantages of different research methods (personal interviews, telephone surveys, mail surveys, etc.):
Interviews: An Introduction to Qualitative Research Interviewing, by Steinar Kvale, Sage Publications, Thousand Oaks California, 1996:
About research interviewing:
Interviewing in qualitative research
Interviewing in educational research:
In a recent New York Times article, “Why You Hate Work”
Tony Schwartz and Christine Porath discuss why employees’ experience of work has increasingly become one of depletion and “burnout.” The factors are many, including: 1) demands on employees’ time that far exceed their capacity to meet them; 2) a leaner, less populous workforce, and therefore more work distributed to fewer workers; and 3) technology-driven expectations of immediate response to requests for employees’ attention and commitment (think here of answering e-mails at 1:00 AM). The authors cite both national and international studies indicating that workers at all levels of various kinds of organizations feel less engaged, less satisfied, and less fulfilled by their experience at work. Schwartz and Porath argue, however, that when companies better address the physical, mental, emotional, and spiritual dimensions of their workers, they produce not only more engaged and fulfilled workers but also more productive and profitable organizations. Organizations can begin to do this by instituting simple changes like mandating that meetings last no longer than 90 minutes; rewarding managers who display empathy, care, and humility; and providing regular and frequent breaks so that employees can “recharge” and work more creatively. Successful companies provide opportunities for employee renewal, focus, emotional support, and sense of purpose. When companies provide such opportunities, companies, investors, and employees all benefit.
How might program evaluation add to non-profit organizations’ efforts to create what Schwartz and Porath call “truly human-centered organizations,” which “put their people first….because they recognize that they are the key to creating long-term value”? While program evaluation alone cannot prevent employee burnout, timely and well-designed formative evaluations can add to non-profit organizations’ capacities to implement effective programming by providing insight into the unintended features of programs that often “get in the way” of staff members’ (and program participants’) sense of efficacy and purposefulness. By conducting formative evaluations, that is, evaluations that focus on strengthening programs and maximizing their effectiveness, program evaluation can help organizations and funders to create programs in which staff and participants don’t have to “spin their wheels,” i.e., programs where both staff and participants can achieve a greater sense of effectiveness, purpose, and satisfaction.
Because Brad Rose Consulting, Inc. often works at the intersection of program evaluation and organization development (OD), we work with clients to collect data, understand the characteristics of programs, and provide evidence-based insights into how programs, and the organizations that support them, can become maximally effective. We make concrete recommendations that help our clients adjust their modes of operation and thereby increase staff engagement and better serve their participants/clients. While the latter are the reason programs exist, the former are often the under-recognized key to programs’ success.
In a recent themed issue devoted to the topic of validity in program evaluation, the journal New Directions for Evaluation (No. 142, Summer 2014) revisited and commented on Ernest House’s influential 1980 book, Evaluating with Validity. In that book, House argued that validity in evaluation must not be limited to classic scientific conceptions of the valid (i.e., empirically describing things as they are), but must also include an expanded dimension of argumentative validity, in which an evaluation “must be true, coherent, and just.” Paying particular attention to social context, House argued that “there is more to validity than getting the facts right.” He wrote that “…the validity of an evaluation depends upon whether the evaluation is true, credible, and normatively correct.” House ultimately argued that evaluations must make compelling and persuasive arguments about what is true (about a program) and thereby bring “truth, beauty, and justice” to the evaluation enterprise.
In the same issue of New Directions for Evaluation, in her essay “How ‘Beauty’ Can Bring Truth and Justice to Life,” E. Jane Davidson argues that the process of creating a clear, compelling, and coherent evaluative story (i.e., a “beautiful” narrative account) is the key to unlocking “validity (truth) and fairness (justice).” To summarize briefly, Davidson argues that a coherent evaluation story weaves together quantitative evidence, qualitative evidence, and clear evaluative reasoning to produce an account of what happened and the value of what happened. She says that an effective evaluation—one that is truly accessible, assumption-unearthing, and values-explicit—enables evaluators to “arrive at robust conclusions about not just what has happened, but how good, valuable, and important it is.” (p. 31)
House’s book and Davidson’s essay highlight how effective evaluations—ones that allow us to clearly see and understand what has happened in a program— rely on strong narrative accounts that tell a coherent and revealing story. Evaluations are not just tables and data—although these are necessary parts of any evaluation narrative—they are true, compelling, and fact-based stories about what happened, why things happened the way they did, and what the value (and meaning) is of the things that happened.
Davidson writes: “When I reflect on what has improved the quality of my own work in recent years, it has been a relentless push toward succinctness and crystal clarity while grappling with some quite complex, and difficult material. For me this means striving to produce simple direct and clear answers to evaluation questions and being utterly transparent in the reasoning I have used to get to those conclusions.” (p. 39)
Davidson further observes that evaluation reports are often plagued by confusing, long-winded, academic jargon that makes them not only difficult to read but also obscures the muddled and ill-reasoned thinking that can lie behind the evaluation process itself. She argues that evaluation reporting must be clear, accessible, and simple—which does not mean that reports need to be simplistic, but that they must be coherent and comprehensible. I am reminded of a statement by the philosopher John Searle: “If you can’t say it clearly, you don’t understand it yourself.”
Reflecting on Davidson’s article, I realize that the best evaluation reports are the product of well-thought-out and effectively conducted evaluation research, presented in a clear and cogent way. The findings from such research may be complex, but they need not be obscure or enigmatic. On the contrary, clear evaluation reports are true stories, well told.
I recently participated in a workshop at Brandeis University for graduate students who were considering non-academic careers in the social sciences. During the workshop, one of the students asked about the difference between program evaluation and other kinds of social research. It is a valuable and important question. I responded that program evaluation is a type of applied social research that is conducted with “a value, or set of values, in its denominator.” By this I meant that evaluation research is always conducted with an eye to whether the outcomes, or results, of a program were achieved, especially when those outcomes are compared to a desired and valued standard or criterion. At the heart of program evaluation is the idea that certain outcomes, or changes, are valuable and desired. Evaluators conduct evaluation research to find out whether these valuable changes (often expressed as program goals or objectives) are, in fact, achieved by the program.
Evaluation research shares many of the same methods and approaches as other social sciences and, indeed, the natural sciences. Evaluators draw upon a range of evaluation designs (e.g., experimental, quasi-experimental, and non-experimental designs) and a range of methodologies (e.g., case studies, observational studies, interviews, etc.) to learn what the effects of a given intervention have been. Did, for example, 8th grade students who received an enriched STEM curriculum do better on tests than did their otherwise similar peers who didn’t receive the enriched curriculum? Do homeless women who receive career readiness workshops succeed at obtaining employment more often than do similar homeless women who don’t participate in such workshops? (For more on these types of outcome evaluations, see our previous blog post, “What You Need to Know About Outcome Evaluations: The Basics.”) While not all program evaluations are outcome evaluations, all evaluations gather systematic data with which judgments about the program can be made.
Evaluation’s Differences From Other Kinds of Social Research
Evaluation research is distinct from other forms of applied social research in so far as it:
- seeks to determine the merit, value, and/or worth of a program’s activities and results.
- entails the systematic collection of empirical data that is used to measure the processes and/or outcomes of a program, with the goal of furthering the program’s development and improvement.
- provides actionable information for decision-makers and program stakeholders, so that, based on objective data, a program can be strengthened or curtailed.
- focuses on particular knowledge (usually about a program and its outcomes), rather than seeking widely generalizable and universal knowledge.
While evaluators share many of the same methods and approaches as other researchers, program evaluators must employ an explicit set of values against which to judge the findings of their empirical research. This means that evaluators must both be competent social scientists and exercise value-based judgment in interpreting the meaning of data.
Research vs. Evaluation
Differences Between Research and Evaluation
Harvard Family Research Project’s “Ask an Expert” series.
See “Michael Scriven on the Differences Between Evaluation and Social Science Research,”
Office of Educational Assessment
Sandra Mathison’s “What is the Difference Between Evaluation and Research, and Why Do We Care?”
“Distinguishing Evaluation from Research”