Occasionally, I encounter a resource that I think may be useful to clients and colleagues. I recently had the pleasure of enlisting the help of Green Heron Information Services, which helped me conduct a literature review for a project I was working on. I'd like to share some of the central ideas about doing literature reviews, ideas that may be helpful to both evaluators and grant seekers, and encourage you to connect with Matt Von Hendy, president of Green Heron Information Services, at (240) 401-7433 (email firstname.lastname@example.org or visit www.greenheroninfo.com).
If you are like most people, you probably have not thought about literature reviews since college or graduate school, at least not until you need to write one for a contract report, journal article, or grant proposal. A quick refresher: a literature review provides an overview of the published information on a particular topic or subject, usually within a specific period of time, and discusses critical points of the current state of knowledge in the field, including major findings as well as theoretical and methodological contributions. It generally presents a summary of the important works and also provides a synthesis of that information.
Literature reviews matter for a number of reasons: they demonstrate a strong knowledge of the current state of research in the field or topic; they show what issues are being discussed or debated and where research is headed; and they provide excellent background information for placing a program, initiative or grant proposal in context. In short, a well-written literature review can provide a ‘mental road map’ of the past, present and future of research in a particular field.
Literature reviews take many types and forms, but good ones typically share certain characteristics:
- Follows an organizational pattern that combines summary and synthesis
- Tracks the intellectual progression of a thought or a field of study
- Contains a conclusion that offers suggestions for future research
- Is well-researched
- Uses a wide variety of high quality resources including journal articles, conference papers, books and reports
When doing research for literature reviews, many evaluation and grant professionals use some combination of Google and consultation with other professionals as their primary information sources. While these are great places to start, both have limitations that make them poor places to end your research. Search engines such as Google filter results based on a number of factors, and very few experts can keep up with the sheer volume of information being published. Fortunately, many high-quality tools, such as citation databases and subject-specific databases, make going beyond Google relatively easy. Many evaluation professionals and proposal writers are motivated to do their own research, but there are times, such as when working in a new area or under a tight deadline, when hiring an information professional to consult on, research, or write a literature review can be helpful.
You may think this all sounds good in theory but wonder how it would work in practice. Let me offer a quick case study: conducting research for a literature review on evaluating programs that attempt to improve mental health outcomes for teenagers in the United States. I would first start a list of sources by consulting experts and searching on Google. My next step would be to look at the two major citation databases, Scopus and Web of Science, to find out which journal articles and conference papers are most cited. I would then search the subject-specific databases that cover the health and psychology fields, such as PubMed, Medline, and PsycINFO. Finally, I would examine resources such as academic and non-profit think tanks to make sure I was not missing anything important.
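For readers comfortable with a little scripting, subject-specific databases such as PubMed can also be searched programmatically. Below is a minimal Python sketch, assuming the `requests` library, that queries PubMed's public E-utilities search endpoint; the search term is purely illustrative, not a vetted search strategy.

```python
import requests

# NCBI E-utilities endpoint for searching PubMed.
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": "adolescent mental health program evaluation",  # illustrative query
    "retmode": "json",
    "retmax": 20,  # return up to 20 matching PubMed IDs
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print("Total matching records:", result["count"])
print("First PubMed IDs:", result["idlist"])
```

Each returned PubMed ID can then be looked up individually to retrieve full citation details, which is a convenient way to grow the source list that the expert consultations and Google searches began.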
A well-researched and well-written literature review offers a number of benefits for evaluation professionals, grant seekers, and even funders and grantors: it can show an excellent understanding of the research in a subject area; it can demonstrate what current issues or topics are being debated and suggest directions for future research; and it can provide an excellent way to place a program, initiative, or proposal in context within the larger picture of what is happening in an area. If you have questions about getting started on a literature review, we are always glad to offer suggestions.
Green Heron Information Services offers consulting, research and writing services in support of literature review efforts. www.greenheroninfo.com
Like other research initiatives, each program evaluation should begin with a question, or series of questions, that the evaluation seeks to answer. (See my previous blog post “Approaching an Evaluation: Ten Issues to Consider.”) Evaluation questions guide the evaluation, give it direction, and express its purpose. Identifying guiding questions is essential to the success of any evaluation research effort.
Of course, evaluation stakeholders can have various interests and, therefore, various kinds of questions about a program. For example, funders often want to know if the program worked, if it was a good investment, and if the desired changes/outcomes were achieved (i.e., outcome/summative questions). During the program's implementation, program managers and implementers may want to know what's working and what's not, so they can refine the program to make it more likely to produce the desired outcomes (i.e., formative questions). Program managers and implementers may also want to know which parts of the program have been implemented, and whether recipients of services are indeed being reached and receiving program services (i.e., process questions). Participants, community stakeholders, funders, and program implementers may all want to know if the program makes the intended difference in the lives of those whom the program is designed to serve (i.e., outcome/summative questions).
While program evaluations seldom serve only one purpose, or ask only one type of question, it may be useful to examine the kinds of questions that pertain to the different types of evaluations. Establishing clarity about the questions an evaluation will answer will maximize the evaluation’s effectiveness, and ensure that stakeholders find evaluation results useful and meaningful.
Types of Evaluation Questions
Although the list below is not exhaustive, it is illustrative of the kinds of questions that each type of evaluation seeks to answer.
▪ Process Evaluation Questions
- Were the services, products, and resources of the program, in fact, produced and delivered to stakeholders and users?
- Did the program’s services, products, and resources reach their intended audiences and users?
- Were services, products, and resources made available to intended audiences and users in a timely manner?
- What kinds of challenges did the program encounter in developing, disseminating, and providing its services, products, and resources?
- What steps were taken by the program to address these challenges?
▪ Formative Evaluation Questions
- How do program stakeholders rate the quality, relevance, and utility of the program's activities, products, and services?
- How can the activities, products, and services of the program be refined and strengthened during project implementation, so that they better meet the needs of participants and stakeholders?
- What suggestions do participants and stakeholders have for improving the quality, relevance, and utility of the program?
- Which elements of the program do participants find most beneficial, and which least beneficial?
▪ Outcome/Summative Evaluation Questions
- What effect(s) did the program have on its participants and stakeholders (e.g., changes in knowledge, attitudes, behavior, skills and practices)?
- Did the activities, actions, and services (i.e., outputs) of the program provide high-quality services and resources to stakeholders?
- Did the activities, actions, and services of the program raise participants' awareness and provide them with new and useful knowledge?
- What is the ultimate worth, merit, and value of the program?
- Should the program be continued or curtailed?
The process of identifying which questions program sponsors want the evaluation to answer thus becomes a means of identifying the methods the evaluation will use. If, ultimately, we want to know whether a program is causing a specific outcome, then the best method (the “gold standard”) is a randomized controlled trial (RCT). Often, however, we want to know not just whether a program causes a particular outcome, but why and how it does so. In that case, it is essential to use a mixed-methods approach that draws not just on quantitative outcome data comparing the outcomes of treatment and control groups, but also on qualitative methods (e.g., interviews, focus groups, direct observation of program functioning, document review) that can help elucidate why what happens happens, and what program participants experience.
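To make the quantitative side of this concrete, here is a minimal Python sketch, using entirely made-up outcome scores rather than data from any real program, of the kind of treatment-versus-control comparison an RCT design supports.

```python
from scipy import stats

# Hypothetical post-program outcome scores (e.g., a well-being scale)
# for randomly assigned treatment and control groups.
treatment = [72, 68, 75, 80, 71, 77, 74, 69]
control = [65, 70, 63, 68, 66, 64, 71, 62]

# Welch's t-test: is the difference in group means likely due to chance?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A small p-value suggests that the program, rather than chance, produced the difference in outcomes; it says nothing about why or how, which is precisely where the qualitative methods described above come in.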
Robust and useful program evaluations begin with the questions that stakeholders want answered, and then identify the best methods for gathering the data to answer them.
Evaluator competencies, the skills, knowledge, and attitudes required to be an effective program evaluator, have been much discussed. (See, for example, the International Board of Standards for Training, Performance and Instruction's Evaluator Competencies, and the CDC's “Finding the Right People for Your Program Evaluation Team: Evaluator and Planning Team Job Descriptions.”)
A good evaluator must, of course, be able to develop a research design, carry out research in the field, analyze data, and report findings. These technical/methodological skills, although critically important, are not the only skills that evaluators need. Effective evaluations also depend on a range of interpersonal, or relational, skills that make effective and responsive interpersonal interaction possible.
I recently posted a query to the American Evaluation Association's (AEA) listserv asking members for their opinions about the importance and role of interpersonal skills in conducting successful evaluations. A number of evaluators responded. The central theme of their responses was that successful evaluations require evaluators to possess and employ key interpersonal skills, and that without these, evaluation engagements are unlikely to succeed.
Among the most prominent reasons my AEA colleagues gave for the importance of interpersonal skills were: 1) the importance of building strong, candid, and constructive relationships, on which effective data collection depends; and 2) the importance of establishing trusting and collaborative relationships between evaluators and stakeholders, which help ensure that evaluation findings will be used by clients and stakeholders. Some colleagues also noted a self-evident reason for employing strong interpersonal skills in evaluation engagements: these skills enhance the probability that clients and stakeholders will share information and provide insights about the program. Effective evaluation thus necessarily entails trusting, open, and amicable relationships that make access to program knowledge and information possible.
Reflecting on my 25 years of professional experience, which includes observing the work of many evaluators, I think the key interpersonal characteristics include the abilities to:
- Build rapport and trust with clients, evaluands, and stakeholders
- Act with personal integrity
- Display a genuine curiosity and ask good questions
- Make oneself vulnerable in order to learn (see my earlier blog post on the role of vulnerability in learning and creativity at http://bradroseconsulting.com/index.php/secret-innovation-creativity-change/)
- Actively listen
- Be empathic
- Be both socially aware and self-aware, i.e., be aware of, and manage, both one's own and others' emotions (including the features of emotional intelligence: the capacities to accurately perceive emotions, use emotions to facilitate thinking, understand emotional meanings, and manage emotions)
- Treat each person with respect
- Manage conflict and galvanize collaboration
- Problem solve
- Facilitate collective (group) learning
These interpersonal skills are central to successful program evaluations. Attention to these characteristics, both by program evaluators and by those seeking to engage a program evaluator (i.e., evaluation clients), will greatly increase the probability of a successful evaluation project.
Interactive Evaluation Practice: Mastering the Interpersonal Dynamics of Program Evaluation, J.A. King and L. Stevahn (Sage).
Working with Emotional Intelligence, Daniel Goleman (Bantam).
This past March, Brad Rose was selected as a Fellow by the National Center for Innovation and Excellence. Brad joins a group of 13 Fellows, all of whom work with the Center to further its mission across the United States.
About the Center:
“The National Center for Innovation and Excellence is a dynamic community dedicated to developing youth, changing communities, growing economies, and improving the lives of others by working with organizations, foundations, governments, and communities to design, implement, and test outcomes for large scale youth development strategies and community change initiatives.”
To learn more visit http://www.ncfie.org/
Programs are seldom implemented under pristine laboratory conditions. Instead, they occur in the real world, in real time. They unfold in complex environments, with ever-changing circumstances and unforeseeable developments. Consequently, program evaluations need to be adaptive, aware of the reality of programs’ often tumultuous contexts, and capable of suppleness and flexibility. This is especially true for evaluations that seek to assess the impact of innovative initiatives whose goals are often not standardized and pre-determined, but are evolving and emergent.
Over the last 20 years, Developmental Evaluation has emerged as an important approach for meeting the evaluation needs of innovative initiatives. As Michael Quinn Patton, a leading theorist and practitioner of Developmental Evaluation, has noted:
“Developmental evaluation (DE) is especially appropriate for innovative initiatives or organizations in dynamic and complex environments where participants, conditions, interventions, and context are turbulent, pathways for achieving desired outcomes are uncertain, and conflicts about what to do are high. DE supports reality-testing, innovation, and adaptation in complex dynamic systems where relationships among critical elements are nonlinear and emergent. Evaluation use in such environments focuses on continuous and ongoing adaptation, intensive reflective practice, and rapid, real-time feedback.” (http://comm.eval.org/viewdocument/?DocumentKey=95f16941-7e8a-4785-907a-42615d919d7a)
Developmental Evaluation Serves Innovative Programs
While Developmental Evaluation is appropriate for many programs and organizations, it is especially useful for programs that aspire to continuous learning, that value adaptation, and that seek innovative means of addressing emerging (vs. “known”) issues. Such programs are typically found in the social, philanthropic, and non-profit sectors. Evaluators who practice Developmental Evaluation transcend the typical role of the traditional evaluator: they don't just design formative or summative evaluations. Developmental evaluators work closely with decision makers, program designers, and staff to ask key questions about program design and logic, to collect data (sometimes in rapid time frames) to inform real-time program implementation and refinement, and to ensure that programs consistently employ the principles of learning and continuous improvement. As Patton observed in his book Utilization-Focused Evaluation (3rd edition):
“Developmental Evaluation refers to evaluation processes undertaken for the purpose of supporting program, project, staff and/or organizational development, including asking evaluative questions and applying evaluation logic for development purposes. The evaluator is part of a team whose members collaborate to conceptualize, design, and test new approaches in a long-term, on-going process of continuous improvement, adaptation and intentional change. The evaluator's primary function is to elucidate team discussions with evaluative questions, data and logic, and to facilitate data-based decision-making…”
Brad Rose Consulting, Inc. utilizes the principles and insights of Developmental Evaluation. Our 20+ years of experience working with social entrepreneurs and innovative non-profit organizations have taught us that even seemingly “standard” program designs can benefit from a nuanced, responsive, context-sensitive evaluation approach, one that draws on the practices of Developmental Evaluation. Additionally, innovative programs whose outcomes are neither fully predictable nor exclusively pre-determined will find that Developmental Evaluation provides the iterative feedback necessary to strengthen the program and achieve enhanced outcomes. Because Developmental Evaluation is essentially consultative, integrative, and built on a constructive and supportive relationship between the evaluator and the organization's staff, it offers program designers, managers, and implementers superior insight into the complex, often rapidly changing conditions in which genuine innovations occur.
A Developmental Evaluation Primer, at J.W. McConnell Family Foundation
Video: Michael Quinn Patton on Developmental Evaluation
Link to Michael Quinn Patton, Developmental Evaluation
A conversation with Michael Quinn Patton
Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press, 2011.