Dawn Bentley of the Michigan Association of Special Educators recently drew my attention to an important article in the Huffington Post, “Proven Programs vs. Local Evidence,” by Robert Slavin of Johns Hopkins University. The article compares and contrasts two kinds of evaluations of educational programs.
On the one hand, Slavin says, there are evaluations of large-scale, typically federally funded programs. These programs represent program structures that, once found to be effective, can be replicated in a variety of settings. Evaluation findings from such programs are usually generalizable; that is, they are applicable to a broader range of contexts than the individual case under study. Slavin terms such evaluations “Proven Programs.” “Proven Program” evaluations are becoming increasingly important because the federal government is interested in funding efforts that are research-based and show strong evidence of effectiveness. Examples of such “Proven Programs” include School Improvement Grants (SIGs) and Title II Seed grants.
On the other hand, Slavin notes, there are locally specific evaluations that are “not intended to produce answers to universal problems,” and whose findings typically are not generalizable. These evaluations are conducted on programs of a more limited, usually local, scope, and tend to be of interest principally to local program stakeholders rather than state or national policy makers. Slavin calls these evaluations “Local Evidence” because they yield evidence that typically isn’t generalizable to larger contexts.
Slavin notes that these two kinds of program evaluations are not necessarily mutually exclusive; for example, a district or state may implement and evaluate a replicable program that responds to its own needs. That said, Slavin says that “Proven Program” evaluations are likely to contribute to national evidence and experience of what works, while “Local Evidence” evaluations are more likely to be of interest to local educators and stakeholders. He also notes that “Local Evidence” evaluations are more likely to result in stakeholders utilizing and acting on evaluation findings.
While Brad Rose Consulting, Inc. has experience working with the U.S. Department of Education to conduct evaluations of initiatives of national scope, we also have extensive experience assisting state- and district-level education agencies, and are strongly committed to helping them design and conduct evaluation research whose findings constructively inform local policy and programming innovations. To find out more about our work in education, visit our Higher Education & K-12 page.