Periodically, I share with you and your colleagues information and resources of special interest to the non-profit sector. In May, a colleague of mine, Matt Von Hendy of Green Heron Information Services, will be offering a series of webinars that will help you and your organization to:
- Effectively search for, find, and respond to government Requests for Proposals (RFPs);
- Improve your online web search strategies; and
- Locate high quality, health-related information with an emphasis on evidence-based medical resources.
Registration links and more information for each of these webinars appear below:
Supercharge Your Search 2015 — Tuesday, May 5, 2-3 PM EDT
In the age of Google and Web 3.0, everyone is a researcher, but the amount of information available can be overwhelming. How do you minimize the time spent sifting through useless results and find the high-quality information you actually need? This webinar is designed to give you the strategies, techniques, and tools to help you find and organize twice the amount of high-quality information in half the time. Click here to register.
Searching For and Finding RFPs (Government Contracting Opportunities): A Systematic Research Based Approach — Tuesday, May 12, 2-3 PM EDT
Searching for and finding RFPs (Requests for Proposals) from government agencies can be a frustrating and time-consuming process. This webinar is designed to equip you with the strategies, techniques, and tools to more quickly and efficiently search for and locate federal, state, tribal, and local government contracting opportunities. Click here to register.
Finding High Quality Health (Evidence Based) Resources on a Deadline and a Budget — Tuesday, May 19, 2-3 PM EDT
With increasing frequency, project sponsors are asking grantees to use high-quality health resources in their proposals. Locating high-quality, health-related information can be a challenge, especially when you have an impending deadline and/or a limited budget. This webinar covers strategies, resources, and tools to quickly and effectively locate high-quality, low-cost health information, with an emphasis on evidence-based medical (EBM) resources. Click here to register.
Each session costs $40, which covers all the standard webinar material and also includes a 30-minute personalized post-webinar session with Matt, where you can discuss your particular research needs.
About the presenter:
Matthew Von Hendy, MA/MLS, has more than 20 years of experience as a professional research librarian and has worked at EPA, NASA, and the National Academies of Sciences. In 2012, he established his own research and information consulting firm, Green Heron Information Services, and he has been working closely with evaluation and other research professionals ever since. He frequently presents at local, regional, and national evaluation conferences.
Have questions or need more information? Contact Matt Von Hendy – firstname.lastname@example.org
You will recall that in an earlier post we discussed the importance of learning from program “failure.” (“Fail Forward: What We Can Learn from Program Failure”)
Below are some films and other resources about the value of risk and failure in helping us to learn and improve.
“I have not failed. I’ve just found 10,000 ways that won’t work.” – Thomas Edison
“By understanding how and why programs don’t achieve the results they intend, we can design and execute improved programs in the future. It is important to note that psychological research has shown that individuals learn more from failure than they do from success. Our goals should be to learn from our defeats and to surmount them—especially in programs that address critical social, educational, and human service needs. Learning from the challenges that confront these kinds of programs can have a powerful impact on the success of future programming.”
Before beginning an evaluation, it may be helpful to consider the following questions:
1. Why is the evaluation being conducted? What is/are the purpose(s) of the evaluation?
Common reasons for conducting an evaluation are to:
- monitor the progress of program implementation and provide formative feedback to designers and program managers (i.e., a formative evaluation, which seeks to discover what is happening and why, for the purpose of program improvement and refinement);
- measure final outcomes or effects produced by the program (i.e., a summative evaluation);
- provide evidence of a program’s achievements to current or future funders;
- convince skeptics or opponents of the value of the program;
- elucidate important lessons and contribute to public knowledge;
- tell a meaningful and important story;
- provide information on program efficiency;
- neutrally and impartially document the changes produced in clients or systems;
- fulfill contractual obligations;
- advocate for the expansion or reduction of a program with current and/or additional funders.
Evaluations may simultaneously serve many purposes. For the sake of clarity, and to ensure that evaluation findings meet the client’s and stakeholders’ needs, the client and evaluator may want to identify and rank the top two or three reasons for conducting the evaluation. Clarifying the purpose(s) of the evaluation early in the process will maximize the usefulness of the evaluation’s findings.
2. What is the “it” that is being evaluated? (A program, initiative, organization, network, set of processes or relationships, services, activities?) There are many things that may be evaluated in any given program or intervention. It may be best to start with a few (2-4) key questions and concerns (see #4, below). Also, for purposes of clarity, it may be useful to discuss what isn’t being evaluated.
3. What outcomes does the program or intervention intend to produce? What is the program meant to achieve? What changes or differences does the program hope to produce, and in whom? What will be different as a result of the program or intervention? Please note that changes can occur in individuals, organizations, communities, and other social environments. While evaluations often look for changes in persons, changes need not be restricted to alterations in individuals’ behavior, attitudes, or knowledge; they can extend to larger units of analysis, such as organizations, networks of organizations, and communities. For collective groups or institutions, changes may occur in policies, positions, vision/mission, collective actions, communication, overall effectiveness, public perception, etc. For individuals, changes may occur in behaviors, attitudes, skills, ideas, competencies, etc.
How do programs know what they should be doing: which target populations require services, what types of services to provide, how much service to deliver, which kinds of services will be most effective, and so on? Needs assessments are the best way to determine the needs of individuals, communities, and other populations. A needs assessment is a systematic process for identifying and determining such needs. Like program evaluations, needs assessments draw on a range of social science methods, from surveys and observations to focus groups and individual interviews.
Needs assessments assume a clear definition of “a need.” As James Altschuld and Ryan Watkins point out in New Directions for Evaluation, No. 144 (Winter 2014), “A need, in the simplest sense, is a measurable gap between two conditions: what currently is, and what should be…. This requires ascertaining what the circumstances are at a point in time, what is desired in the future, and a comparison of the two.” For example, if 90 percent of eligible families should be enrolled in a benefits program but only 60 percent currently are, the need is the 30-percentage-point gap between the two conditions. Needs assessments do not focus exclusively on what is and what should be; they also gather and synthesize data about how to narrow the gap between the existing state and the desired state. Needs assessments also prioritize needs so that users of the assessment can address specified needs in a reasonable order and devote appropriate resources to meeting them.
By gathering data from a range of stakeholders, needs assessments can identify the best means of achieving the desired results. To be effective, however, needs assessments must not focus solely on deficits in individuals and communities; they must also explore existing strengths, capacities, and assets. Too narrow a focus on “what’s missing” can blind researchers and program designers to the existing assets on which effective programming can be built. Effective needs assessments, therefore, ask questions about: 1) ongoing needs, 2) current strengths/assets/capacities, and 3) desired states.
Needs assessments may differ in their design, but regardless of design, most needs assessments follow these phases:
- Explore and gather data about the current condition/state of affairs (including existing assets).
- Explore and identify the desired or optimal condition/state of affairs.
- Analyze data to understand the difference or “gap” between the current condition and the desired condition.
- Prioritize identified needs and “gaps.”
- With needs (and assets) in mind, design a program to address (diminish or eliminate) the gap between the existing state and the desired state.
When conducted in a timely and thoughtful way, needs assessments can be of substantial utility in helping programs to effectively deliver services to those who most need them.
Resources:
- “Needs Assessment: Trends and a View Toward the Future,” New Directions for Evaluation, No. 144 (Winter 2014), James W. Altschuld and Ryan Watkins (eds.)
- Definition of Needs Assessment
- Comprehensive Needs Assessment
- Methods for Conducting an Educational Needs Assessment
Program evaluations are conducted for a variety of reasons. Purposes can range from mechanical compliance with a funder’s reporting requirements to a genuine desire by program managers and stakeholders to learn “Are we making a difference?” and, if so, “What kind of difference are we making?” These different purposes of, and motivations for, conducting evaluations determine the different types of evaluations. Below, I briefly discuss the main types of evaluation.
Formative, Summative, Process, Impact and Outcome Evaluations
Formative evaluations are evaluations whose primary purpose is to gather information that can be used to improve or strengthen the implementation of a program. They typically are conducted in the early to mid-period of a program’s implementation. Summative evaluations are conducted near, or at, the end of a program or program cycle, and are intended to show whether the program has achieved its intended outcomes (i.e., its intended effects on individuals, organizations, or communities) and to indicate the ultimate value, merit, and worth of the program. Summative evaluations seek to determine whether the program should be continued, replicated, or curtailed, whereas formative evaluations are intended to help program designers, managers, and implementers address challenges to the program’s effectiveness.
Process evaluations, like formative evaluations, are conducted during a program’s early and mid-cycle phases of implementation. Typically, process evaluations seek data with which to understand what is actually going on in a program (what the program actually is and does) and whether intended service recipients are receiving the services they need. Process evaluations are, as the name implies, about the processes involved in delivering the program.
Impact evaluations, sometimes called “outcome evaluations,” gather and analyze data to show the ultimate, often broader-ranging and longer-lasting, effects of a program. An impact evaluation seeks to determine the causal effects of the program; this involves trying to measure whether the program has achieved its intended outcomes. The International Initiative for Impact Evaluation (3ie), for example, defines rigorous impact evaluations as “analyses that measure the net change in outcomes for a particular group of people that can be attributed to a specific program using the best methodology available, feasible and appropriate to the evaluation question that is being investigated and to the specific context.” Impact (and outcome) evaluations are primarily concerned with determining whether the observed effects are the result of the program or of some other, extraneous factor(s). Ultimately, outcome evaluations seek to answer the question, “What effect(s) did the program have on its participants (e.g., changes in knowledge, attitudes, behaviors, skills, practices), and were these effects the result of the program?”
Although the different types of evaluation described above differ in their intended purposes and in when they are conducted, it is important to keep in mind that every program evaluation should be guided by good evaluation research questions. (See our earlier post, Questions Before Methods.) Program evaluation, like any effective research project, depends upon asking important and insight-producing questions. Ultimately, the different types of evaluations discussed above support the general definition of program evaluation: “a systematic method for collecting, analyzing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency.”
Resources:
- Understanding the Causes of Outcomes and Impacts – an AEA video webinar (18 minutes)
- Introduction to Evaluation
- What Is Program Evaluation?