As mentioned in a previous blog post, program evaluation can play an important role in an organization’s strategic planning initiatives. This is especially true in non-profit organizations, human service agencies, K-12 schools, and higher education institutions, all of which must rely on non-market data for evidence of program effects. Evaluation can help these organizations identify, gather, and analyze data with which to judge the impact of their activities and to strengthen current, or redirect future, efforts. Only with clear and accurate information can a non-profit organization take stock of its effectiveness and make informed choices about needed changes in direction. As Heather Tunis and Maura Harrington note, “An evaluation plan helps refine data collection and assessment practices so that the information is most useful to advancing the organization’s mission and the objectives of the program being evaluated. Evaluation is a key component of being a learning organization.” (“Non-profits: Strategic Planning and Future Program Evaluation”) As Mark Fulop observes, “Indeed nonprofits that embrace evaluation as strategy will be driven by internal excellence rather than an external locus of control. Nonprofits that embrace evaluation as strategy will strengthen not only their organizational core but the centrality of their place in solving social needs.” (“The Roles of Strategic Evaluation in Non-profits”)
Here are a few links to resources about strategic planning and program evaluation:
From CDC, “Using Program Evaluation to Improve Programs: Strategic Planning” http://www.cdc.gov/HealthyYouth/evaluation/pdf/sp_kit/sp_toolkit.pdf
“Strategic Planning Resources for Non-Profits” http://nonprofitanswerguide.org/faq/strategic-planning/
“Why Strategic Planning for Non-profits is Important,” http://www.event360.com/blog/why-nonprofit-strategic-planning-is-important/
“What is strategic planning, and why should all schools have a strategic plan?” http://www.strategicplanning4schools.com/overview.html
From the National Alliance for Media Arts and Culture, “Basic Steps to a Strategic Planning Process,” http://www.namac.org/strategic-planning-steps
From the World Bank, “Strategic Planning: A Ten-Step Guide,” http://siteresources.worldbank.org/INTAFRREGTOPTEIA/Resources/mosaica_10_steps.pdf
Occasionally, I encounter a resource that I think may be useful to clients and colleagues. I recently had the pleasure of enlisting the help of Green Heron Information Services, which helped me conduct a literature review for a project I was working on. I’d like to share with you some of the central ideas about doing literature reviews (ideas that may be helpful to both evaluators and grant seekers) and encourage you to connect with Matt Von Hendy, president of Green Heron Information Services, at (240) 401-7433 (email firstname.lastname@example.org or visit www.greenheroninfo.com).
If you are like most people, you probably have not thought about literature reviews since college or graduate school, at least not until you needed to write one for a contract report, journal article, or grant proposal. Just a quick review: a literature review is a piece of work that provides an overview of published information on a particular topic or subject, usually within a specific period of time, and discusses critical points of the current state of knowledge in the field, including major findings as well as theoretical and methodological contributions. It will generally present a summary of the important works, but it should also provide a synthesis of this information.
Literature reviews matter for a number of reasons: they demonstrate a strong knowledge of the current state of research in the field or topic; they show what issues are being discussed or debated and where research is headed; and they provide excellent background information for placing a program, initiative or grant proposal in context. In short, a well-written literature review can provide a ‘mental road map’ of the past, present and future of research in a particular field.
Literature reviews can take many different types and forms, but good ones typically share certain characteristics. A strong literature review:
- Follows an organizational pattern that combines summary and synthesis
- Tracks the intellectual progression of a thought or a field of study
- Contains a conclusion that offers suggestions for future research
- Is well-researched
- Uses a wide variety of high quality resources including journal articles, conference papers, books and reports
When doing research for literature reviews, many evaluation and grant professionals use some combination of Google and other professionals as their primary information sources. While these resources are a great place to start, both have limitations that make them poor places to end your research. For example, search engines such as Google filter results based on a number of factors, and very few experts can keep up to date with the amount of information being published. Fortunately, many high-quality tools, such as citation databases and subject-specific databases, make going beyond Google relatively easy. Many evaluation professionals and proposal writers are motivated to do their own research, but there are times, such as when working in new areas or under tight deadlines, when hiring an information professional to consult, research, or write a literature review can be helpful.
You may think this all sounds good in theory but wonder how it would work in practice. Let me offer a very quick case study: conducting research for a literature review on evaluating programs that attempt to improve mental health outcomes for teenagers in the United States. I would first start a list of sources by consulting experts and searching on Google. My next step would be to look at the two major citation databases, Scopus and Web of Science, to find out which journal articles and conference papers are most cited. I would then search the subject-specific databases that cover health and psychology, such as PubMed, MEDLINE, and PsycINFO. Finally, I would examine resources such as academic and non-profit think tanks just to make sure I was not missing anything important.
A well-researched and well-written literature review offers a number of benefits for evaluation professionals, grant seekers, and even funders and grantors: it can show an excellent understanding of the research in a subject area; it can demonstrate what current issues or topics are being debated and suggest directions for future research; and it can provide an excellent way to place a program, initiative, or proposal in context within the larger picture of what is happening in a field. If you have questions about getting started on a literature review, we are always glad to offer suggestions.
Green Heron Information Services offers consulting, research and writing services in support of literature review efforts. www.greenheroninfo.com
Like other research initiatives, each program evaluation should begin with a question, or series of questions, that the evaluation seeks to answer. (See my previous blog post “Approaching an Evaluation: Ten Issues to Consider.”) Evaluation questions guide the evaluation, give it direction, and express its purpose. Identifying guiding questions is essential to the success of any evaluation research effort.
Of course, evaluation stakeholders can have various interests and, therefore, various kinds of questions about a program. For example, funders often want to know if the program worked, if it was a good investment, and if the desired changes/outcomes were achieved (i.e., outcome/summative questions). During the program’s implementation, program managers and implementers may want to know what’s working and what’s not, so they can refine the program to ensure that it is more likely to produce the desired outcomes (i.e., formative questions). Program managers and implementers may also want to know which parts of the program have been implemented, and whether recipients of services are indeed being reached and receiving program services (i.e., process questions). Participants, community stakeholders, funders, and program implementers may all want to know whether the program makes the intended difference in the lives of those whom the program is designed to serve (i.e., outcome/summative questions).
While program evaluations seldom serve only one purpose, or ask only one type of question, it may be useful to examine the kinds of questions that pertain to the different types of evaluations. Establishing clarity about the questions an evaluation will answer will maximize the evaluation’s effectiveness, and ensure that stakeholders find evaluation results useful and meaningful.
Types of Evaluation Questions
Although the list below is not exhaustive, it is illustrative of the kinds of questions that each type of evaluation seeks to answer.
▪ Process Evaluation Questions
- Were the services, products, and resources of the program, in fact, produced and delivered to stakeholders and users?
- Did the program’s services, products, and resources reach their intended audiences and users?
- Were services, products, and resources made available to intended audiences and users in a timely manner?
- What kinds of challenges did the program encounter in developing, disseminating, and providing its services, products, and resources?
- What steps were taken by the program to address these challenges?
▪ Formative Evaluation Questions
- How do program stakeholders rate the quality, relevance, and utility of the program’s activities, products, and services?
- How can the activities, products, and services of the program be refined and strengthened during project implementation, so that they better meet the needs of participants and stakeholders?
- What suggestions do participants and stakeholders have for improving the quality, relevance, and utility of the program?
- Which elements of the program do participants find most beneficial, and which least beneficial?
▪ Outcome/Summative Evaluation Questions
- What effect(s) did the program have on its participants and stakeholders (e.g., changes in knowledge, attitudes, behavior, skills and practices)?
- Did the program’s activities, actions, and services (i.e., outputs) provide high-quality services and resources to stakeholders?
- Did the activities, actions, and services of the program raise participants’ awareness and provide them with new and useful knowledge?
- What is the ultimate worth, merit, and value of the program?
- Should the program be continued or curtailed?
The process of identifying which questions program sponsors want the evaluation to answer thus becomes a means of identifying the methods an evaluation will use. If, ultimately, we want to know whether a program is causing a specific outcome, then the best method (the “gold standard”) is a randomized controlled trial (RCT). Often, however, we are interested not just in knowing whether a program causes a particular outcome, but in why and how it does so. In that case, it is essential to use a mixed-methods approach that draws not just on quantitative outcome data comparing the outcomes of treatment and control groups, but also on qualitative methods (e.g., interviews, focus groups, direct observation of program functioning, document review, etc.) that can help elucidate why what happens, happens, and what program participants experience.
Robust and useful program evaluations begin with the questions that stakeholders want answered, and then identify the best methods for gathering data to answer those questions.
Evaluator competencies—the skills, knowledge and attitudes— required to be an effective program evaluator have been much discussed. (See, for example, The International Board of Standards for Training, Performance and Instruction Evaluator Competencies, and the CDC’s “Finding the Right People for Your Program Evaluation Team: Evaluator and Planning Team Job Descriptions.” )
A good evaluator must, of course, be able to develop a research design, carry out research in the field, analyze data, and report findings. These technical/methodological skills, although of critical importance, are not, however, the only skills that evaluators need. Effective evaluations also depend upon a range of interpersonal, or relational, skills that make effective and responsive interpersonal interaction possible.
I recently posted a query to the American Evaluation Association’s listserv asking members about the importance and role of interpersonal skills in conducting successful evaluations. A number of evaluators responded to my inquiry. The central theme of their responses was that successful evaluation engagements require evaluators to possess and employ key interpersonal skills, and that without these, engagements are unlikely to be successful.
Among the most prominent reasons my AEA colleagues gave for the importance of interpersonal skills were: 1) the importance of building strong, candid, and constructive relationships, on which effective data collection depends; and 2) the importance of establishing trusting and collaborative relationships between evaluators and stakeholders, in order to help ensure that evaluation findings will be utilized by clients and stakeholders. Additionally, some colleagues pointed to the self-evident reason for employing strong interpersonal skills in evaluation engagements: these skills enhance the probability that clients and stakeholders will share information and provide insights about the program. Effective evaluation thus necessarily entails trusting, open, and amicable relationships that make access to program knowledge and information possible.
Reflecting on my 25 years of professional experience, which includes observing the work of many evaluators, I think the key interpersonal characteristics include the abilities to:
- Build rapport and trust with clients, evaluands, and stakeholders
- Act with personal integrity
- Display a genuine curiosity and ask good questions
- Make oneself vulnerable in order to learn (see my earlier blog post on the role of vulnerability in learning and creativity at http://bradroseconsulting.com/index.php/secret-innovation-creativity-change/)
- Actively listen
- Be empathic
- Be both socially aware and self-aware, i.e., be aware of, and manage, both one’s own and others’ emotions (including the features of emotional intelligence, i.e., the capacities to accurately perceive emotions, use emotions to facilitate thinking, understand emotional meanings, and manage emotions)
- Treat each person with respect
- Manage conflict and galvanize collaboration
- Problem solve
- Facilitate collective (group) learning
These interpersonal skills are central to successful program evaluations. Attention to these characteristics, both by program evaluators and by those seeking to engage a program evaluator (i.e., evaluation clients), will greatly increase the probability of successful evaluation projects.
For further reading:
Interactive Evaluation Practice: Mastering the Interpersonal Dynamics of Program Evaluation, J.A. King and L. Stevahn (Sage).
Working with Emotional Intelligence, Daniel Goleman (Bantam).
This past March, Brad Rose was selected as a Fellow by the National Center for Innovation and Excellence. Brad joins a group of 13 Fellows, all of whom work with the Center to further its mission across the United States.
About the Center:
“The National Center for Innovation and Excellence is a dynamic community dedicated to developing youth, changing communities, growing economies, and improving the lives of others by working with organizations, foundations, governments, and communities to design, implement, and test outcomes for large scale youth development strategies and community change initiatives.”
To learn more visit http://www.ncfie.org/