Program evaluations seldom occur in stable, scientifically controlled environments. Programs are often implemented in complex and rapidly evolving settings that make traditional evaluation research approaches, which depend on the stability of the “treatment” and on a set of predetermined outcomes, difficult to apply.
Michael Quinn Patton, one of the originators of Developmental Evaluation, describes it this way: “Developmental evaluation processes include asking evaluative questions and gathering information to provide feedback and support developmental decision-making and course corrections along the emergent path. The evaluator is part of a team whose members collaborate to conceptualize, design and test new approaches in a long-term, on-going process of continuous improvement, adaptation, and intentional change. The evaluator’s primary function in the team is to elucidate team discussions with evaluative questions, data and logic, and to facilitate data-based assessments and decision-making in the unfolding and developmental processes of innovation.”
In their paper, “A Practitioner’s Guide to Developmental Evaluation,” Dozois and her colleagues note, “Developmental Evaluation differs from traditional forms of evaluation in several key ways:”
- The primary focus is on adaptive learning rather than accountability to an external authority.
- The purpose is to provide real-time feedback and generate learnings to inform development.
- The evaluator is embedded in the initiative as a member of the team.
- The DE role extends well beyond data collection and analysis; the evaluator actively intervenes to shape the course of development, helping to inform decision-making and facilitate learning.
- The evaluation is designed to capture system dynamics and surface innovative strategies and ideas.
- The approach is flexible, with new measures and monitoring mechanisms evolving as understanding of the situation deepens and the initiative’s goals emerge.
Developmental Evaluation is especially useful for social innovators, who often find themselves inventing the program as it is implemented and who often lack a stable, unchanging set of anticipated outcomes. Following Patton, Dozois, Langlois, and Blanchet-Cohen observe that it is particularly well suited to situations that are:
- Highly emergent and volatile (e.g., the environment is always changing)
- Difficult to plan or predict because the variables are interdependent and non-linear
- Socially complex, requiring collaboration among stakeholders from different organizations, systems, and/or sectors
- Innovative, requiring real-time learning and development
Developmental Evaluation is also increasingly appropriate in the non-profit world, especially where the stability of a program’s key components, including its core treatment and its eventual, often evolving, outcomes, is not as certain or firm as program designers might wish.
Brad Rose Consulting is experienced in working with programs whose environments are volatile and whose iterative program designs are necessarily flexible. We are adept at collecting data that can inform the ongoing evolution of a program, and we have more than 20 years of experience providing meaningful data to program designers and implementers, helping them adjust to rapidly changing and highly variable environments.
Resources:
A Practitioner’s Guide to Developmental Evaluation, Elizabeth Dozois, Marc Langlois, and Natasha Blanchet-Cohen