Over the last 15 to 20 years, “social innovations” (SIs) have grown in number, and so has the terminological confusion surrounding them. Social innovations include initiatives and programs as substantively diverse as microcredit organizations, charter schools, environmental emissions credit trading schemes, and online volunteerism. Social innovations are distinguished by a focus on “the process of innovation, how innovation and change take shape… and center on new work and new forms of cooperation, especially on those that work towards the attainment of a sustainable society” (Wikipedia). The Center for Social Innovation reports that “Social innovation refers to the creation, development, adoption, and integration of new and renewed concepts, systems, and practices that put people and planet first.” Social innovations are thought to cut across the traditional boundaries separating nonprofits, government, and for-profit businesses. SIs are also considered distinct from conventional social programs.

The rise of social innovations presents new challenges to those who seek to evaluate (and in some cases, those who seek to nurture) these initiatives. Social innovations often bring together unrelated agencies and organizations, and involve complex and changing social dynamics, roles, and relationships.

The volatile, ever-unfolding nature of many SIs, including their evolving outcomes, collaborations that span social sectors and organizations, and dynamic contexts, may present challenges to evaluators who are more familiar with traditional social programs. In their recent article, “Evaluating Social Innovations: Implications for Evaluation Design” (American Journal of Evaluation, Vol. 39, No. 4, December 2018, pp. 459-477), Kate Svensson, Barbara Szijarto, Peter Milley, and J. Bradley Cousins review the international literature on SI evaluation and summarize critical insights about the unique challenges of selecting evaluation designs for social innovations.

Svensson et al. remind us that the design of every evaluation is driven by “…what questions will be answered by the evaluation, what data will be collected, how the data will be analyzed to answer the questions, and how resulting information will be used?” (See our previous blog posts “Approaching An Evaluation – Ten Issues to Consider” and “Questions Before Methods”).

The authors surveyed 28 peer-reviewed empirical studies of SIs, with an eye to identifying commonly reported issues and conditions that influence the choice of SI evaluation design. Svensson et al. report that the choice of evaluation design was most frequently influenced by:

  1. a “complexity perspective,” i.e., one that acknowledges the often messy, trial-and-error landscape of such initiatives,
  2. a focus on the desire for collective learning by evaluators and evaluands,
  3. the need for collaboration between evaluators and SIs, including the need for timely feedback from evaluators, and
  4. the need for accountability to evaluation funders, including funders’ preferences for evaluation methods and design.

Interestingly, Svensson et al. find little diversity in the evaluation designs selected across the 28 studies they reviewed, along with what they term a “lingering ambiguity” among evaluators about what constitutes a social innovation.

In the future, the authors tell us, evaluators of SIs will want to:

  1. be sensitive to the processual nature of SIs
  2. focus on capturing and facilitating feedback, especially after each iteration of an SI
  3. support productive collaboration, especially among often competing SI stakeholders
  4. incorporate multiple methods of reporting to meet the needs of various, often divergent, stakeholders
  5. help SI practitioners to clarify outcomes, especially as these evolve
  6. capture both intended and unexpected outcomes of SIs

Resources:

Defining Social Innovation—Stanford Business School

Social Innovation

Center for Social Innovation

“Evaluating Social Innovations: Implications for Evaluation Design,” Kate Svensson, Barbara Szijarto, Peter Milley, and J. Bradley Cousins, American Journal of Evaluation, Vol. 39, No. 4, December 2018, pp. 459-477.

For a critique of social innovation, see the “Criticism” section at Wikipedia.
