Programs are seldom implemented under pristine laboratory conditions. Instead, they occur in the real world, in real time. They unfold in complex environments, with ever-changing circumstances and unforeseeable developments. Consequently, program evaluations need to be adaptive, aware of the reality of programs’ often tumultuous contexts, and capable of suppleness and flexibility. This is especially true for evaluations that seek to assess the impact of innovative initiatives whose goals are often not standardized and pre-determined, but are evolving and emergent.
Over the last 20 years, Developmental Evaluation has emerged as an important approach for meeting the evaluation needs of innovative initiatives. As Michael Quinn Patton, a noted theorist and practitioner of Developmental Evaluation, has observed,
“Developmental evaluation (DE) is especially appropriate for innovative initiatives or organizations in dynamic and complex environments where participants, conditions, interventions, and context are turbulent, pathways for achieving desired outcomes are uncertain, and conflicts about what to do are high. DE supports reality-testing, innovation, and adaptation in complex dynamic systems where relationships among critical elements are nonlinear and emergent. Evaluation use in such environments focuses on continuous and ongoing adaptation, intensive reflective practice, and rapid, real-time feedback.” (http://comm.eval.org/viewdocument/?DocumentKey=95f16941-7e8a-4785-907a-42615d919d7a )
Developmental Evaluation Serves Innovative Programs
While Developmental Evaluation is appropriate for many programs and organizations, it is especially useful for programs that aspire to continuous learning, that value adaptation, and that seek innovative means to address emerging (vs. “known”) issues. Such programs are typically found in the social, philanthropic, and non-profit sectors. Evaluators who practice Developmental Evaluation go beyond the role of a traditional evaluator: they don’t just design formative or summative evaluations. Developmental evaluators work closely with decision makers, program designers, and staff to ask key questions about program design and logic; to collect data, sometimes in rapid time frames, to inform real-time program implementation and refinement; and to ensure that programs consistently employ the principles of learning and continuous improvement. Patton observed in his book Utilization-Focused Evaluation (3rd Edition):
“Developmental Evaluation refers to evaluation processes undertaken for the purpose of supporting program, project, staff and/or organizational development, including asking evaluative questions and applying evaluation logic for development purposes. The evaluator is part of a team whose members collaborate to conceptualize, design, and test new approaches in a long-term, on-going process of continuous improvement, adaptation and intentional change. The evaluator’s primary function is to elucidate team discussions with evaluative questions, data and logic, and to facilitate data-based decision-making…”
Brad Rose Consulting, Inc. utilizes the principles and insights of Developmental Evaluation. Our 20+ years of experience working with social entrepreneurs and innovative non-profit organizations has taught us that even seemingly “standard” program designs can benefit from a nuanced, responsive, context-sensitive evaluation approach, one that draws on the practices of Developmental Evaluation. Additionally, innovative programs whose outcomes are neither fully predictable nor exclusively pre-determined will find that Developmental Evaluation provides the iterative feedback necessary to strengthen the program and to achieve enhanced outcomes. Because Developmental Evaluation is essentially consultative, integrative, and built on a constructive and supportive relationship between the evaluator and the organization’s staff, it offers program designers, managers, and implementers superior insights into the complex, often rapidly changing conditions in which genuine innovations occur.
A Developmental Evaluation Primer, at J.W. McConnell Family Foundation
Video Michael Quinn Patton on Developmental Evaluation
Link to Michael Quinn Patton, Developmental Evaluation
A conversation with Michael Quinn Patton
Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press, 2011.
In our last blog post, Evaluation Serving Community Foundations and Donors, we discussed the value of program evaluation to community foundations, especially as a means for demonstrating to donors and other foundation stakeholders that program and project funds are used to achieve the outcomes that donors desire. In this blog post, I’d like to discuss an additional benefit of program evaluations.
Program Evaluation for Strengthening Grantee Effectiveness
Well-designed formative evaluations provide useful information about how programs are working, especially during a program’s early and intermediate stages. Timely formative evaluations can therefore play a significant role in giving grantees, i.e., program managers and implementers, the critical information with which to refine and strengthen programming. Community Foundations can use formative evaluations to help grantees better achieve their programmatic goals and objectives. By conducting formative evaluations, not just summative evaluations, Community Foundations can ensure that grantees are optimally prepared to achieve the desired program results.
Formative evaluations typically ask the following kinds of questions:
a. As the project is developed and launched, what are the project’s strengths and vulnerabilities?
b. In regard to the program’s implementation, what’s working and what’s not?
c. What kinds of implementation problems emerge, and how are they being addressed?
d. What’s happening that wasn’t expected?
e. How do participants, staff, and stakeholders perceive the program’s initial and mid-cycle effectiveness?
f. What new ideas are emerging from project implementation that can be tested to strengthen the program’s effectiveness?
g. How can the program be improved to maximize program outcomes and impacts?
By implementing formative evaluations, Community Foundations can:
a. Effectively monitor the progress grantees make toward program goals.
b. Develop grantees’ awareness of, and facility with, program evaluation.
c. Ensure that programs are optimally positioned to achieve intended outcomes.
d. Develop grantee organizations’ in-house capacities to monitor and evaluate.
e. Assure that organizations supported by Community Foundations make their best efforts to refine programming and thus better serve their stakeholders and service recipients.
Brad Rose Consulting, Inc. has extensive experience designing formative evaluations and program monitoring initiatives. We work with clients in the Community Foundation and Non-Profit sectors to ensure that programs are working effectively toward their goals and making timely, data-informed adjustments to improve program quality. Ultimately, we are concerned both with measuring programs’ effects AND with maximizing programs’ effectiveness. We can help Community Foundations to achieve these dual goals.
The Importance of Demonstrating Programs’ Impacts to Community Foundation Funders
As donors seek to strengthen their communities and improve the lives of community members, they are increasingly interested in understanding the effects of their social investments. Consequently, community foundations are now frequently asked to show that the programs and organizations they support are having the effects that donors would like to see. Program evaluations of community foundation-supported programs are an extremely useful way to document the effectiveness of programs. Evaluations can provide critical information about the effects of programs, show donors that their expectations are being met, and help ensure that pressing community needs are being effectively addressed. A recent article in the New Yorker noted, “Measuring the impact of giving is more important to donors aged 21-40 than it was to previous generations.” (New Yorker, December 2013)
Brad Rose Consulting, Inc. has evaluated a wide range of philanthropic programs, from human services and arts education programs to youth development and domestic violence prevention programs. We work with foundations to ensure both effective and sensitive program evaluations that show the effects of donor-supported programming. Our evaluations are especially helpful to programs supported by designated funds. While our experience includes working with national-level foundations, our tailored approach to evaluation is especially geared to helping community-level foundations, which are increasingly asked to demonstrate the effectiveness of the programs their donor funds support.
Program Evaluation to Support Planning
Although our evaluations gather data to show the effects of existing programs, our evaluation approach can also be very helpful as funders and donors plan for future initiatives. By showing both the achievements and challenges of current programs, our evaluations provide program planners and funders with helpful insight into a variety of program-area “best practices.” We also work with foundation grantees to help them develop their own capacities for evaluating the effects of their efforts. By educating grant recipients about the need for, and fundamental elements of, program evaluation, we prepare grantees to better plan and execute programs that demonstrate results.
If you are a community foundation staff member, we would welcome an opportunity to discuss with you the ways that Brad Rose Consulting, Inc. can help you and your grantees to strengthen the programs you support so that they better serve the goals and expectations of your donors. Similarly, if you are an organization that receives foundation support, we would welcome a chance to discuss how our program evaluations can help you strengthen your programmatic efforts and secure future foundation support. Please feel free to contact me; click here for contact information.
Typically, we work with clients from the early stages of program development in order to understand their organization’s needs and the needs of program funders and other stakeholders. Following initial consultations with program managers and program staff, we work collaboratively to identify key evaluation questions, and to design a strategy for collecting and analyzing data that will provide meaningful and useful information to all stakeholders.
Depending upon the specific initiative, we implement a range of evaluation tools (e.g., interview protocols, web-based surveys, focus groups, quantitative measures, etc.) that allow us to collect, analyze, and interpret data about the activities and outcomes of the specified program. Periodic debriefings with program staff and stakeholders allow us to communicate preliminary findings, and to offer program managers timely opportunities to refine programming so that they can better achieve intended goals.
Our collaborative approach to working with clients allows us to actively support program managers, staff, and funders to make data-informed judgments about programs’ effectiveness and value. At the appropriate time(s) in the program’s implementation, we write a report(s) that details findings from program evaluation activities and that makes data-based suggestions for program improvement.
Most organizations conduct evaluations because they want to determine whether they are making a difference in the lives of the people they serve. Determining program effectiveness, showing the specific effects of programming, and using data to strengthen programs are all important and laudable reasons for carrying out an evaluation.
In recent years, however, “accountability” has become a driving force for many organizations (schools, government agencies, and non-profits) to conduct evaluations. “Accountability” has become a watchword, especially in the educational and non-profit sectors. While accountability has its legitimate purposes (i.e., demonstrating to stakeholders that an organization or program is responsible, ethical, and committed to achieving its goals), too often the desire to demonstrate accountability, especially legal compliance, overshadows the use of evaluations to enhance program effectiveness and strengthen program outcomes.
Brad Rose Consulting, Inc. is committed to evaluations that provide an evidence-based account of a program’s effects. Equally important, however, we are committed to designing and conducting evaluations that help to strengthen a program (and its host organization) so that it can better achieve its desired outcomes. We work with organizations to objectively find out what’s working and what needs to be strengthened.
When, for example, we work with educators (superintendents, principals, teachers, and school staff) and school systems to evaluate their educational programs, we design evaluations that BOTH show program effectiveness AND provide data-based insights that help strengthen future outcomes. We understand that educators need BOTH to know whether students are learning AND to know how to enhance future student achievement. Such evaluations require more than collecting static data; they must richly show how educational initiatives can be made more effective. Often such evaluations go beyond merely gathering student test scores and examine the multiple factors that influence student achievement. We deliberately work to make our evaluations constructive opportunities for strengthening instruction and enhancing student achievement.
More information about accountability: