Qualitative research interviews are a critical component of program evaluations. In-person and telephone interviews are especially valuable because they allow the evaluator to engage in direct conversations with program participants, program staff, community members, and other stakeholders. These conversations enable the evaluator to learn, in a rich conversational venue, about interviewees’ experiences, perspectives, attitudes, and knowledge. Unlike questionnaires and surveys, which typically require structured, categorical responses to standardized written questions so that data can be quantified, qualitative interviews allow for deeper probing of interviewees and the use of clarifying follow-up questions, which can surface information that often goes unrevealed in survey and questionnaire formats.
Although research interviews are guided by a predetermined, written protocol of guiding questions, excellent interviews require a nimble and improvisational interviewer who can thoughtfully and swiftly respond to interviewees’ observations and reflections. Qualitative research interviews also require that the interviewer be a skilled listener and a thoughtful interpreter of verbally presented data. Interviewers must listen carefully both to the denotative narrative “text” of the interviewee and to the connotative subtext (the implied intent, tacit sub-themes, and connotations) that the interviewee presents.
The most productive qualitative interviews are those that approximate a good conversation. This requires the interviewer to establish a comfortable atmosphere; ask interesting and germane questions; display respect for the interviewee; and create a sense of equality, candor, and reciprocity between the interviewer and the interviewee. Good interviews are not only a source of rich and informative data for the interviewer; they can also be a reflective learning opportunity for the interviewee. As in every good conversation, both parties should benefit.
In a recent New York Times article, “Why You Hate Work,” Tony Schwartz and Christine Porath discuss why employees’ experience of work has increasingly become an experience of depletion and “burnout.” The factors are many, including: 1) demands on employee time that far exceed employees’ capacity to meet them; 2) a leaner, less populous workforce, and therefore more work distributed to fewer workers; and 3) technology-driven expectations of immediate response to requests for employees’ attention and commitment (think here of answering e-mails at 1:00 AM). The authors cite both national and international studies indicating that workers at all levels of various kinds of organizations feel less engaged, less satisfied, and less fulfilled by their experience at work. Schwartz and Porath argue, however, that when companies better address the physical, mental, emotional, and spiritual dimensions of their workers, they produce not only more engaged and fulfilled workers but more productive and profitable organizations. Organizations can begin to do this by instituting simple changes like mandating meetings that last no longer than 90 minutes; rewarding managers who display empathy, care, and humility; and providing regular and frequent breaks so that employees can “recharge” and work more creatively. Successful companies provide opportunities for employee renewal, focus, emotional support, and sense of purpose. When companies provide such opportunities, companies, investors, and employees all benefit.
How might program evaluation add to non-profit organizations’ efforts to create what Schwartz and Porath call “truly human-centered organizations [which] put their people first… because they recognize that they are the key to creating long-term value”? While program evaluation alone cannot prevent employee burnout, timely and well-designed formative evaluations can add to non-profit organizations’ capacities to implement effective programming by providing insight into the unintended features of programs that often ‘get in the way’ of staff (and program participants’) sense of efficacy and purposefulness. By conducting formative evaluations—evaluations that focus on program strengthening and effectiveness-maximization—program evaluation can help organizations and funders create programs in which staff and participants don’t have to “spin their wheels,” i.e., programs where both staff and participants can achieve a greater sense of effectiveness, purpose, and satisfaction.
Because Brad Rose Consulting, Inc. often works at the intersection of program evaluation and organization development (OD), we work with clients to collect data, understand the characteristics of programs, and provide evidence-based insights into how programs, and the organizations that support them, can become maximally effective. We make concrete recommendations that help our clients adjust their modes of operation and thereby increase staff engagement and better serve their participants/clients. While the latter are the reason programs exist, the former are often the under-recognized key to programs’ success.
In a recent themed issue devoted to the topic of validity in program evaluation, the journal New Directions for Evaluation (No. 142, Summer 2014) revisited and commented on Ernest House’s influential 1980 book, Evaluating with Validity. House argued that validity in evaluation must not be limited to classic scientific conceptions of the valid (i.e., empirically describing things as they are), but must also include an expanded dimension of argumentative validity, in which an evaluation “must be true, coherent, and just.” Paying particular attention to social context, House argued that “there is more to validity than getting the facts right.” He wrote that “…the validity of an evaluation depends upon whether the evaluation is true, credible, and normatively correct.” House ultimately argued that evaluations must make compelling and persuasive arguments about what is true (about a program) and thereby bring “truth, beauty, and justice” to the evaluation enterprise.
In the same issue of New Directions for Evaluation, in her essay “How ‘Beauty’ Can Bring Truth and Justice to Life,” E. Jane Davidson argues that the process of creating a clear, compelling, and coherent evaluative story (i.e., a “beautiful” narrative account) is the key to unlocking “validity (truth) and fairness (justice).” To briefly summarize, Davidson argues that a coherent evaluation story weaves together quantitative evidence, qualitative evidence, and clear evaluative reasoning to produce an account of what happened and the value of what happened. She says that an effective evaluation—one that is truly accessible, assumption-unearthing, and values-explicit—enables evaluators to “arrive at robust conclusions about not just what has happened, but how good, valuable, and important it is.” (p. 31)
House’s book and Davidson’s essay highlight how effective evaluations—ones that allow us to clearly see and understand what has happened in a program—rely on strong narrative accounts that tell a coherent and revealing story. Evaluations are not just tables and data—although these are necessary parts of any evaluation narrative—they are true, compelling, and fact-based stories about what happened, why things happened the way they did, and what the value (and meaning) is of the things that happened.
As Davidson writes: “When I reflect on what has improved the quality of my own work in recent years, it has been a relentless push toward succinctness and crystal clarity while grappling with some quite complex, and difficult material. For me this means striving to produce simple direct and clear answers to evaluation questions and being utterly transparent in the reasoning I have used to get to those conclusions.” (p. 39)
Davidson further observes that evaluation reports are often plagued by confusing, long-winded, academic jargon that not only makes them difficult to read but also obscures the often muddled and ill-reasoned thinking behind the evaluation process itself. She argues that evaluation reporting must be clear, accessible, and simple—which does not mean that reports need to be simplistic, but that they must be coherent and comprehensible. I am reminded of a statement attributed to the philosopher John Searle: “If you can’t say it clearly, you don’t understand it yourself.”
Reflecting on Davidson’s article, I realize that the best evaluation reports are the product of well thought-out and effectively conducted evaluation research, presented in a clear and cogent way. The findings from such research may be complex, but they need not be obscure or enigmatic. On the contrary, clear evaluation reports must be true stories, well told.
I recently participated in a workshop at Brandeis University for graduate students who were considering non-academic careers in the social sciences. During the workshop, one of the students asked about the difference between program evaluation and other kinds of social research. This is a valuable and important question. I responded that program evaluation is a type of applied social research that is conducted with “a value, or set of values, in its denominator.” By this I meant that evaluation research is always conducted with an eye to whether the outcomes, or results, of a program were achieved, especially when those outcomes are compared to a desired and valued standard or criterion. At the heart of program evaluation is the idea that certain outcomes, or changes, are valuable and desired. Evaluators conduct evaluation research to find out whether these valuable changes (often expressed as program goals or objectives) are, in fact, achieved by the program.
Evaluation research shares many of the same methods and approaches as other social sciences and, indeed, natural sciences. Evaluators draw upon a range of evaluation designs (e.g., experimental, quasi-experimental, and non-experimental designs) and a range of methodologies (e.g., case studies, observational studies, interviews) to learn what the effects of a given intervention have been. Did, for example, 8th grade students who received an enriched STEM curriculum do better on tests than their otherwise similar peers who didn’t receive the enriched curriculum? Do homeless women who receive career readiness workshops succeed at obtaining employment more often than similar homeless women who don’t participate in such workshops? (For more on these types of outcome evaluations, see our previous blog post, “What You Need to Know About Outcome Evaluations: The Basics.”) While not all program evaluations are outcome evaluations, all evaluations gather systematic data with which judgments about the program can be made.
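To make that kind of comparison concrete, here is a minimal sketch, in Python, of a simple two-group outcome comparison like the STEM-curriculum question above. The scores, group sizes, and significance threshold are invented for illustration; a real evaluation would use actual program data and a design-appropriate analysis.

```python
# An illustrative two-group outcome comparison (e.g., enriched STEM
# curriculum vs. otherwise similar peers). All scores are invented.
from scipy import stats

# Post-program test scores (hypothetical data)
enriched_group = [78, 85, 91, 74, 88, 82, 95, 79, 86, 90]
comparison_group = [72, 80, 77, 69, 84, 75, 81, 70, 79, 76]

# Independent-samples t-test: is the difference in group means
# larger than we would expect from chance alone?
t_stat, p_value = stats.ttest_ind(enriched_group, comparison_group)

print(f"Enriched mean:   {sum(enriched_group) / len(enriched_group):.1f}")
print(f"Comparison mean: {sum(comparison_group) / len(comparison_group):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value (conventionally below 0.05) would suggest a real
# difference in average scores -- though in a quasi-experimental design
# the evaluator must still rule out rival explanations before
# attributing the difference to the program.
```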
Evaluation’s Differences From Other Kinds of Social Research
Evaluation research is distinct from other forms of applied social research in so far as it:
- seeks to determine the merit, value, and/or worth of a program’s activities and results.
- entails the systematic collection of empirical data that is used to measure the processes and/or outcomes of a program, with the goal of furthering the program’s development and improvement.
- provides actionable information for decision-makers and program stakeholders, so that, based on objective data, a program can be strengthened or curtailed.
- focuses on particular knowledge (usually about a program and its outcomes), rather than seeking widely generalizable and universal knowledge.
While evaluators share many of the same methods and approaches as other researchers, program evaluators must employ an explicit set of values against which to judge the findings of their empirical research. This means that evaluators must both be competent social scientists and exercise value-based judgments and interpretations about the meaning of data.
In her book Evaluation (2nd Edition), Carol Weiss writes, “Outcomes define what the program intends to achieve.” (p. 117) Outcomes are the results or changes that occur, either in individual participants or in targeted communities. Outcomes occur because a program marshals resources and mobilizes human effort to address a specified social problem. Outcomes, then, are what the program is all about; they are the reason the program exists.
In order to assess which outcomes are achieved, program evaluators design and conduct outcome evaluations. These evaluations are intended to indicate, or measure, the kinds and levels of change that occur for those affected by the program or treatment. “Outcome evaluations measure how clients and their circumstances change, and whether the treatment experience (or program) has been a factor in causing this change. In other words, outcome evaluations aim to assess treatment effectiveness.” (World Health Organization)
Outcome evaluations, like other kinds of evaluations, may employ a logic model, or theory of change, which can help evaluators and their clients identify the short-, medium-, and long-term changes that a program seeks to produce. (See our blog post “Using a Logic Model.”) Once intended changes are identified in the logic model, it is critical for the evaluator to further identify valid and effective measures of those changes, so that they can be correctly documented. It is preferable to identify desired outcomes before the program begins operation, so that these outcomes can be tracked throughout the program’s life-span.
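As a concrete illustration, the outcome chain from a logic model can be represented as a simple data structure that pairs each intended change with a candidate measure. The program, outcomes, and measures below are hypothetical examples, not drawn from any actual evaluation.

```python
# A hypothetical logic-model outcome chain for an illustrative
# job-readiness program, pairing each intended outcome with a
# candidate measure identified before the program begins.
logic_model_outcomes = {
    "short-term": {
        "outcome": "Participants gain resume-writing and interview skills",
        "measure": "Pre/post skills checklist at each workshop",
    },
    "medium-term": {
        "outcome": "Participants apply for jobs and complete interviews",
        "measure": "Monthly counts of applications and interviews",
    },
    "long-term": {
        "outcome": "Participants obtain and retain employment",
        "measure": "Employment status verified at 6 and 12 months",
    },
}

# Listing measures alongside outcomes up front lets the evaluator track
# each intended change throughout the program's life-span.
for stage, item in logic_model_outcomes.items():
    print(f"{stage}: {item['outcome']}")
    print(f"  measure: {item['measure']}")
```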
Most outcome evaluations employ instruments that contain measures of attitudes, behaviors, values, knowledge, and skills. Such instruments may be standardized and often validated, or they may be uniquely designed, special-purpose instruments (e.g., a survey designed specifically for the particular program). Additionally, the measures contained in an instrument can be either “objective,” i.e., not reliant on individuals’ self-reports, or, conversely, “subjective,” i.e., based on informants’ self-estimates of effect. Ideally, outcome evaluations use objective measures whenever possible. In many instances, however, evaluators must rely on instruments that capture participants’ self-reported changes and perceptions of program benefits.
It is important to note that outcomes (i.e., changes or results) can occur at different points in the life span of a program. Although outcome evaluations are often associated with “summative,” end-of-program-cycle evaluations, program outcomes can occur in the early or middle stages of a program’s operation, so outcomes may be measured before the final stage of the program. It may even be useful for some evaluations to look at both short- and long-term outcomes, and therefore to be implemented at different points in time (i.e., early and late).
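For example, when the same outcome is measured at two points in time, a paired pre/post comparison can document short- or medium-term change. The sketch below, with invented self-report scores and a paired t-test, is one simple illustration of such an analysis, not a prescription for any particular evaluation.

```python
# A minimal sketch of measuring the same participants' outcomes at two
# points in a program's life span (baseline and mid-program).
# All scores are invented for illustration.
from scipy import stats

pre_scores = [3.1, 2.8, 3.5, 2.9, 3.2, 3.0, 2.7, 3.4]  # baseline, 1-5 attitude scale
mid_scores = [3.6, 3.0, 3.9, 3.3, 3.5, 3.4, 3.1, 3.8]  # mid-program follow-up

# Paired t-test: did the same participants' scores change over time?
t_stat, p_value = stats.ttest_rel(pre_scores, mid_scores)

mean_change = sum(m - p for p, m in zip(pre_scores, mid_scores)) / len(pre_scores)
print(f"Mean change: {mean_change:+.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Measuring again at the end of the program would allow short-term
# change to be compared with longer-term change.
```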
Another issue relevant to outcome evaluation is dealing with unintended outcomes of a program. Programs can have a range of intended goals, but some outcomes or results may not be part of those intended goals; they nonetheless occur. It is critical for evaluations to try to capture the unintended consequences of a program’s operation as well as its intended outcomes.
Ultimately, outcome evaluations are how evaluators and their clients know whether the program is making a difference, which differences it is making, and whether those differences are the result of the program.
World Health Organization, Workbook 7.
Measuring Program Outcomes: A Practical Approach, United Way of America, 1996.
Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources.
Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation, E. Jane Davidson, Sage, 2005.
Evaluation (2nd Edition), Carol Weiss, Prentice Hall, 1998.