Typically, we work with clients from the early stages of program development in order to understand their organization’s needs and the needs of program funders and other stakeholders. Following initial consultations with program managers and program staff, we work collaboratively to identify key evaluation questions, and to design a strategy for collecting and analyzing data that will provide meaningful and useful information to all stakeholders.
Depending upon the specific initiative, we implement a range of evaluation tools (e.g., interview protocols, web-based surveys, focus groups, quantitative measures, etc.) that allow us to collect, analyze, and interpret data about the activities and outcomes of the specified program. Periodic debriefings with program staff and stakeholders allow us to communicate preliminary findings, and to offer program managers timely opportunities to refine programming so that they can better achieve intended goals.
Our collaborative approach to working with clients allows us to actively support program managers, staff, and funders in making data-informed judgments about programs’ effectiveness and value. At the appropriate times in the program’s implementation, we write reports that detail findings from program evaluation activities and make data-based suggestions for program improvement.
Most organizations conduct evaluations because they want to determine if they are making a difference in the lives of the people they serve. Determining program effectiveness, showing the specific effects of programming, and using data to strengthen programs are all important and laudable reasons for carrying out an evaluation.
In recent years, however, “accountability” has become a driving force for many organizations (schools, government agencies, and non-profits) to conduct evaluations. “Accountability” has become a watchword—especially in the educational and non-profit sectors. While accountability has its legitimate purposes (e.g., demonstrating to stakeholders that an organization or program is responsible, ethical, and committed to achieving its goals), too often the desire to demonstrate accountability, especially legal compliance, overshadows the use of evaluations to enhance program effectiveness and strengthen program outcomes.
Brad Rose Consulting, Inc. is committed to evaluations that provide an evidence-based account of a program’s effects. Equally importantly, however, we are committed to designing and conducting evaluations that help to strengthen a program (and its host organization) so that it can better achieve its desired outcomes. We work with organizations to objectively find out what’s working and what needs to be strengthened.
When, for example, we work with educators (superintendents, principals, teachers, and school staff) and school systems to evaluate their educational programs, we design evaluations that BOTH show program effectiveness AND provide data-based insights that help strengthen future outcomes. We understand that educators need BOTH to know whether students are learning AND to understand how to enhance future student achievement. Such evaluations require not merely collecting static data, but implementing evaluations that richly show how educational initiatives can be made more effective. Often such evaluations go beyond merely collecting student test scores and examine the multiple factors that influence student achievement. We deliberately work to make our evaluations constructive opportunities for strengthening instruction and enhancing student achievement.
Last week, I was working with a new client, and we were sketching out a logic model for one of the several programs that my client operates. As we talked about the inputs, outputs, and the short-, medium-, and long-term outcomes their program produces, it dawned on me that we were unintentionally sketching many of the elements that might contribute to a strategic plan for the overall organization. This experience prompted me to think about how developing logic models for specific initiatives and programs can also help organizations consider and reflect upon their broader goals, operations, and results. Although a logic model typically charts the logic of a particular program, it shares many features with, and illustrates much about, the organization that runs the program.
What is Strategic Planning?
“Simply put, strategic planning determines where an organization is going over the next year or more, how it’s going to get there and how it’ll know if it got there or not. The focus of a strategic plan is usually on the entire organization…” (See the Free Management Library at http://managementhelp.org/strategicplanning/index.htm#anchor1234) The Balanced Scorecard Institute says, “Strategic planning is an organizational management activity that is used to set priorities, focus energy and resources, strengthen operations, ensure that employees and other stakeholders are working toward common goals, establish agreement around intended outcomes/results, and assess and adjust the organization’s direction in response to a changing environment. It is a disciplined effort that produces fundamental decisions and actions that shape and guide what an organization is, who it serves, what it does, and why it does it, with a focus on the future.”
Features Common to Both Logic Models and Strategic Plans
My experience working with clients has shown me that logic models raise many of the same questions that strategic plans do: What are our assumptions about how a program works? What is the environment (context) in which a program operates? What are we trying to achieve (goals and objectives)? What investments (inputs) do we make? What activities (outputs) do we engage in? What are the results, changes, and impacts (outcomes) that we want to, and in fact do, produce? How do we measure our effects and achievements (measures/metrics)?
Although logic modeling can’t do all of the things a strategic plan can, it can become – especially when it includes an organization’s many stakeholders – an important contributor to the process through which an organization reflects upon where it is and where it wants to go. The collective learning that accompanies the process of building a logic model for a specific program can also inform the organization’s efforts to develop a broader strategic plan.
What is a Strategic Plan?
What is the Balanced Scorecard?
The Basics of Strategic Planning and Strategic Management
What a Strategic Plan Is and Isn’t
Ten Keys to Successful Strategic Planning for Nonprofit and Foundation Leaders
Types of Strategic Planning
Understanding Strategic Planning
Steps to a Strategic Plan
Five Steps to a Strategic Plan
Strategic Planning for Non-Profits
What is the best way to do strategic planning for a nonprofit?
Videos About Strategic Planning
University of Arizona
Introduction to Strategic Planning
A logic model is a schematic representation of the elements of a program and the program’s resulting effects. A logic model (also known as a “theory of change”) is a useful tool for understanding the way a program intends to produce the outcomes (i.e., changes) it hopes to achieve. Logic models typically consist of a flowchart schematic that shows the logical connection between a program’s “inputs” (i.e., invested resources), “outputs” (program activities and actions), “short-term outcomes” (changes), “medium-term outcomes” (changes), and “long-range impacts” (changes).
When developing a logic model many evaluators and program staff rightly focus on inputs, outputs, and program outcomes (the core of the program). However, it is critical to also include in the logic model the implicit assumptions that underlie the program’s operation, the needs that the program aspires to address, and the program’s environment, or context. Assumptions, needs, and context are crucial factors in understanding how the program does what it intends to do. Ultimately these are crucial to understanding the causal mechanisms that produce the intended changes of any program.
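For readers who think in structural terms, the elements described above can be sketched as a simple data structure. This is only an illustration of the standard logic-model elements, not a prescribed format; the tutoring program, its field values, and all names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    """Minimal sketch of a program logic model: the core chain
    (inputs -> outputs -> outcomes -> impacts) plus the assumptions,
    needs, and context discussed above."""
    program: str
    needs: list[str]                  # problems the program aspires to address
    assumptions: list[str]            # implicit causal assumptions
    context: str                      # the program's environment
    inputs: list[str]                 # invested resources
    outputs: list[str]                # program activities and actions
    short_term_outcomes: list[str]    # near-term changes
    medium_term_outcomes: list[str]   # intermediate changes
    long_term_impacts: list[str]      # long-range changes

# A hypothetical after-school tutoring program, for illustration only.
tutoring = LogicModel(
    program="After-school tutoring",
    needs=["Students reading below grade level"],
    assumptions=["One-on-one attention improves engagement"],
    context="Urban district with high teacher turnover",
    inputs=["Volunteer tutors", "Classroom space", "Grant funding"],
    outputs=["Weekly tutoring sessions", "Tutor training workshops"],
    short_term_outcomes=["Improved homework completion"],
    medium_term_outcomes=["Higher reading scores"],
    long_term_impacts=["Increased graduation rates"],
)

# The causal chain reads left to right across the flowchart.
for stage in ("inputs", "outputs", "short_term_outcomes",
              "medium_term_outcomes", "long_term_impacts"):
    print(stage, "->", getattr(tutoring, stage))
```

Writing the model down this explicitly makes it easy to spot gaps, for example an intended impact with no output plausibly connected to it.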
Without clearly understanding the causal mechanisms at work in a program, program staff may work ineffectively, placing emphasis on the wrong activities, and ultimately fail to address the challenges the program intends to meet. Similarly, without a clear understanding of the causal mechanisms that enable the program to achieve its outcomes, the program evaluation may not measure the proper outcomes, or may fail to capture the changes the program, in fact, brings about.
Brad Rose Consulting, Inc. works with clients to develop simple, yet robust, logic models that explicitly document the causal mechanisms at work in a program. By discussing and explicitly identifying the often-implicit causal assumptions, as well as highlighting the needs the program addresses and its social context, we not only ensure that the evaluation is properly designed and executed, we also help program implementers to ensure that they are activating the causal processes and mechanisms that yield the changes the program strives to achieve.
In recent years, “social entrepreneur” has become a prominent term in the not-for-profit, foundation, and NGO worlds. But what exactly is a “social entrepreneur”? While social entrepreneurs share many of the characteristics ascribed to for-profit entrepreneurs, Roger L. Martin and Sally Osberg observe in their Stanford Social Innovation Review article “Social Entrepreneurship: The Case for Definition” that “the social entrepreneur aims for value in the form of large-scale, transformational benefit that accrues either to a significant segment of society or to society at large.” They also note that “social entrepreneurship … is as vital to the progress of societies as is entrepreneurship to the progress of economies, and it merits more rigorous, serious attention than it has attracted so far.”
PBS (http://www.pbs.org/opb/thenewheroes/whatis/ ) notes “A social entrepreneur identifies and solves social problems on a large scale. Just as business entrepreneurs create and transform whole industries, social entrepreneurs act as the change agents for society, seizing opportunities others miss in order to improve systems, invent and disseminate new approaches and advance sustainable solutions that create social value. Unlike traditional business entrepreneurs, social entrepreneurs primarily seek to generate “social value” rather than profits. And unlike the majority of non-profit organizations, their work is targeted not only towards immediate, small-scale effects, but sweeping, long-term change.”
The Ashoka Foundation similarly stresses the large-scale effects that social entrepreneurs seek to make. “Social entrepreneurs are individuals with innovative solutions to society’s most pressing social problems. They are ambitious and persistent, tackling major social issues and offering new ideas for wide-scale change. Rather than leaving societal needs to the government or business sectors, social entrepreneurs find what is not working and solve the problem by changing the system, spreading the solution, and persuading entire societies to take new leaps.” (See https://www.ashoka.org/social_entrepreneur )
While the above sources highlight the often ambitious, indeed global, goals of social entrepreneurs, most social entrepreneurship, in fact, involves developing and sustaining specific organizations and, in turn, operating discrete programs. These programmatic efforts can benefit from program evaluations that gather information to show that they are making the differences that their founders hope to make. Evaluations of social entrepreneur-sponsored initiatives not only provide critical evidence of impact, but, equally importantly, provide objective information gathered directly from program recipients and other program stakeholders about the ways that such efforts might be strengthened. (See our previous post, “Listening to Those Who Matter Most: Beneficiaries.”) Evaluations of social entrepreneur-sponsored initiatives are especially important because program participants, service recipients, and other beneficiaries are seldom in a position to directly provide feedback to the innovators who are responsible for the programming. Transformative initiatives, no less than smaller-scale programs, can substantially benefit from program evaluations.
See also “Advancing Evaluation Practices in Philanthropy” in the Stanford Social Innovation Review: http://www.ssireview.org/supplement/advancing_evaluation_practices_in_philanthropy