What is ‘Organization Development’?
Organization Development is an intentional set of processes and practices designed to enhance the ability of organizations to meet their goals. It entails “…a process of continuous diagnosis, action planning, implementation and evaluation, with the goal of transferring (or generating) knowledge and skills so that organizations can improve their capacity for solving problems and managing future change.” (See: Organizational Development Theory) Organization Development deals with a range of features, including organizational climate, organizational culture (i.e., assumptions, values, norms/expectations, patterns of behavior), and organizational strategy. It seeks to strengthen and enhance the long-term “health” and performance of an organization, often by aligning organizations with their rapidly changing and complex environments through organizational learning, knowledge management, and the transformation of organizational norms and values.
What is an organization?
At the most abstract level, an organization is a collectivity of people, a social entity, which seeks to achieve specific aims and goals, and typically is characterized by a structure of designated roles, established rules, a system or structure of authority, a process for decision-making, and a division of labor among organizational members. Organizations are composed of a discrete “membership,” that is, a limited population of incumbents or personnel. Organizations exist in, affect, and are affected by, the larger social and economic environment. Formal, large-scale organizations may take the form of businesses, schools, the military, churches, prisons, foundations, and non-profits.
The Relationship Between Organizations and Evaluation
In seeking to achieve goals, organizations often design and implement discrete initiatives, policies, and programs. They mobilize resources to achieve specific ends. Non-profit organizations, for example, implement programs that mobilize resources in the form of activities, services, and products that are intended to improve the lives of program participants/recipients. “A program is a collection of resources in an organization and is geared to accomplish a certain goal or set of goals. Programs are one major aspect of the non-profit’s structure. The typical non-profit organizational structure is built around programs, that is, the non-profit provides certain major services, each of which is usually formalized into a program.” (See: Overview of Non-Profit Program Planning) In serving program participants, nonprofits strive to effectively and efficiently deploy program resources, including knowledge, activities, services, and materials, to positively affect the lives of those they serve.
Although evaluations are customarily aimed at gathering and analyzing data about discrete programs, the most useful evaluations often collect, synthesize, and report information that can be used to improve the broader operation and health of the organization that hosts the program. Program evaluation thus can contribute to organization development, the deliberately planned, organization-wide effort to increase an organization’s effectiveness and/or efficiency and to enable the organization to achieve its strategic goals.
Brad Rose Consulting Aids Organization Development
Brad Rose Consulting works at the intersection of evaluation and organization development. While our projects often begin with a focus on discrete initiatives and programs, the questions that drive our evaluation research provide insights into the effectiveness of the organizations that host, design, and fund those programs. Findings from our evaluations often have important implications for the development and sustainability of the entire host organization. This is especially true in the case of small-to-medium sized nonprofit organizations and educational organizations, whose core programs often comprise the bulk of the organization’s structure and raison d’être. Information from our evaluations can be used to clarify the organization’s goals and objectives, to identify key organizational challenges and help develop ways to address them, and to strengthen the overall effectiveness of the organization’s efforts. Additionally, Brad Rose Consulting’s evaluations offer an ideal opportunity for an organization to reflect on its practices and purposes, to rethink ways to achieve its mission, and to identify new data-based strategies for enhancing its long-term viability and well-being.
See our previous post: Helpful Resources: Program Evaluation Supports Strategic Planning
Overview of Nonprofit Program Planning by Carter McNamara
Group facilitation is a critical element in many evaluations. Although the definition of “facilitation” varies, one view of it is, “A process in which a person whose selection is acceptable to all members of the group, who is substantively neutral, and who has no substantive decision-making authority, diagnoses and intervenes to help a group improve how it identifies and solves problems and makes decisions…” (R. Schwarz, “The Skilled Facilitator Approach,” in Schwarz and Davidson, eds., The Skilled Facilitator Handbook, Jossey-Bass, 2005). Correlatively, a facilitator is “an individual who enables groups and organizations to work more effectively; to collaborate and achieve synergy. He or she is a ‘content neutral’ party who by not taking sides or expressing or advocating a point of view during the meeting, can advocate for fair, open, and inclusive procedures to accomplish the group’s work” (Kaner, S. with Lind, L., Toldi, C., Fisk, S. and Berger, D., Facilitator’s Guide to Participatory Decision-Making, Jossey-Bass, 2007).
In the Spring 2016 issue of New Directions for Evaluation, which is dedicated to the subject of evaluation and facilitation, the editors Rita Sinorita Fierro, Alissa Schwartz, and Dawn Hanson Smart note the multiple roles that facilitation can play in evaluations. “Many evaluations are undertaken with groups, and require evaluators to play a facilitative role—to engage stakeholders in meaningful conversations, to structure these discussions in ways that surface multiple perspectives, and to conduct focus groups on key data and make progress toward next steps or decision-making. Facilitation can help groups map theories of change, undertake data collection through focus groups or other dialogues, participate in analysis of findings, and help craft appropriate recommendations based on these findings.”
In “Facilitating Evaluation to Lead to Meaningful Change,” (which appears in the same issue, pp. 19-29) Tessie Tzavaras Catsambas writes that, “…evaluations typically require the evaluator to work with groups of people for organizing and managing an evaluation, as well as in collecting and analyzing data… evaluators must frequently facilitate group interactions to conclude an evaluation successfully.” Additionally, Tzavaras Catsambas observes, “Evaluators may often find themselves in situations in which they have to ‘facilitate’ their way through political sensitivities, misunderstandings, competing interests, confusion, apathy or refusal to cooperate.”
Although evaluation and facilitation share many of the same principles and practices, including respect, asking good questions, engaging with others, and promoting effective communication and participation, the most effective facilitations and evaluations require even deeper capacities and skills. Among these are the ability to sense the mood of others, to listen deeply, to capture group-generated information, to reflect, and to move both individuals and groups toward action. Effective facilitation and evaluation also require the evaluator to build and maintain constructive relationships, to engage stakeholders, to incorporate the perspectives and views of others, and to work with integrity and objectivity. (See also “Interpersonal Skills Enhance Program Evaluation” and “Establishing Essential Competencies for Evaluators” by L. Stevahn, J. King, G. Ghere, and J. Minnema.) While evaluations are, of course, distinct from group facilitation, facilitation can be a critical tool for evaluators.
What Happens When There’s No More Work? (7/6/2016)
In the June 25-July 1, 2016 issue of New Scientist, Michael Bond and Joshua Howgego report that a recent study by Oxford University concludes that within two decades, one-half of all jobs in the US could be done by machines. Artificial intelligence (AI) and advanced automation are having a profound effect on work and employment, especially in the advanced industrial economies. (See “When Machines Take Over: What Will Humans Do When Computers Run the World?” New Scientist, June 25-July 1, 2016, Vol. 230, Issue 3079, p. 29 ff.)
Martin Ford’s 2015 book, Rise of the Robots: Technology and the Threat of a Jobless Future, explores in greater depth the impact of AI and robotics on employment. Ford traces the powerful (and disturbing) effects of robotization and artificial intelligence on a range of sectors in the economy, and argues that in addition to job elimination, the current AI-driven revolution in the world of work promises to displace both blue-collar manual laborers and white-collar, college-educated professionals—the latter including, but not limited to, lawyers, computer programmers, managers, and office and retail workers. The current and anticipated “rise of the robots” thus threatens to create an increasingly jobless future for all; a future, Ford argues, that cannot be addressed with more education and upskilling of the workforce, because the jobs for which displaced blue-collar workers once retrained will increasingly be carried out by robots and smart machines.
Ford’s book, like Bond and Howgego’s article, underscores both the ominous changes in the economy and the profound losses that such changes portend. Bond and Howgego explore the significant role work has played, especially in the advanced economies—not only as a source of income and livelihood, but also as an important source of employees’ sense of purpose, identity, and meaning. For instance, they cite a recent Gallup poll showing that 50% of manual workers and 70% of college-educated employees report that they derive a sense of identity from their jobs. They also discuss the health benefits associated with the performance of meaningful work, and how the risks of diseases such as dementia and Alzheimer’s may be reduced for those who work more years and postpone retirement.
As work continues to change because of employers’ preference for AI and automation, and fewer people are able to find employment, how will society deal with what looks like an imminent, if not current, tidal wave of unemployment and forced ‘leisure’? Ford shows how recent history has been characterized by diminishing job creation, lengthening jobless recoveries, and soaring long-term unemployment—all of which are certain to lead to significant social and economic consequences if not adequately addressed.
Ford, Bond, and Howgego all suggest that society will necessarily need to rethink the distribution of wealth and society’s assets. Ford, for example, argues for a guaranteed basic income of 10,000 dollars annually for all citizens (augmentable, of course, by paid employment), and says that if the guaranteed income were not set too high, it would likely avoid the pitfall of creating disincentives to work. He estimates such a plan would cost about 2 trillion dollars annually—about half of which would be recouped through cost savings on discontinued welfare programs (e.g., food stamps, housing assistance programs, Earned Income Tax Credits, etc.), and the other half of which might be raised by new taxes, such as a carbon tax. Bond and Howgego also explore basic incomes, but discuss alternative income-supporting plans as well, such as a negative income tax program, in which poor people receive a guaranteed annual income, middle earners aren’t taxed, and the wealthy are taxed.
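Ford’s back-of-envelope arithmetic can be checked in a few lines. As a rough sketch: the $10,000 figure and the roughly $2 trillion annual total come from the text; the count of about 200 million adult recipients is an assumption introduced here to reconcile those two figures, not a number the sources give.

```python
# Sketch of Ford's basic-income arithmetic (hedged: the recipient
# count below is an assumption, chosen so the totals match the text).
annual_income = 10_000            # guaranteed income per citizen, USD
recipients = 200_000_000          # assumed number of adult recipients

gross_cost = annual_income * recipients      # ~2 trillion USD per year
welfare_savings = gross_cost // 2            # half recouped from discontinued programs
new_taxes_needed = gross_cost - welfare_savings  # remainder, e.g., a carbon tax

print(f"Gross cost: ${gross_cost / 1e12:.1f} trillion")
print(f"Recouped from welfare savings: ${welfare_savings / 1e12:.1f} trillion")
print(f"To be raised via new taxes: ${new_taxes_needed / 1e12:.1f} trillion")
```

The point of the sketch is simply that the plan’s net new cost, on Ford’s own figures, is about half its headline price.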
Whether society is culturally and politically ready for the introduction of a guaranteed minimum income remains to be seen. Ominously, current and forthcoming changes in work, and the resulting displacement of workers, are likely to necessitate a sweeping examination of the economic and moral implications of the disappearance of paid employment. AI and robotic technology, as these writers convincingly show, will continue to eliminate jobs and make human employment increasingly rare.
“When Machines Take Over: What Will Humans Do When Computers Run the World?” Michael Bond and Joshua Howgego, New Scientist, June 25-July 1, 2016, Vol. 230, Issue 3079, p. 29 ff.
Rise of the Robots: Technology and the Threat of a Jobless Future, Martin Ford, Basic Books, 2015
Inventing the Future: Post-Capitalism and a World Without Work, Nick Srnicek and Alex Williams, Verso, 2016
“Evidence-based” – What is it?
“Evidence-based” has become a common adjectival term for identifying and endorsing the effectiveness of various programs and practices in fields ranging from medicine to education, from psychology to nursing, and from social work to criminal justice. The motivation for marshalling objective evidence to guide practices and policies in these diverse fields has been the growing recognition that professional practices—whether doctoring, teaching, social work, or nursing—need to be based on something more sound than custom and tradition, practitioners’ habit, professional culture, received wisdom, and hearsay.
What does “evidence-based” mean?
While definitions of “evidence-based” vary, the most common characteristics of evidence-based research include objective, empirical research that is valid and replicable, whose findings rest on a strong theoretical foundation, and that employs high-quality data and data collection procedures. The most common definition of Evidence-Based Practice (EBP) is drawn from Dr. David Sackett’s original (1996) definition of “evidence-based” practice in medicine, i.e., “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient. It means integrating individual clinical expertise with the best available external clinical evidence from systematic research” (Sackett, 1996). This definition was subsequently amended to “a systematic approach to clinical problem solving which allows the integration of the best available research evidence with clinical expertise and patient values” (Sackett DL, Strauss SE, Richardson WS, et al., Evidence-based medicine: how to practice and teach EBM, London: Churchill-Livingstone, 2000). (See “Definition of Evidence-Based Medicine”.)
An evidence-based program, whether in youth development or education, comprises a set of coordinated services and activities whose effectiveness has been established by sound research, preferably scientifically based research. (See “Introduction to Evidence-Based Practice”.)
In education, evidence-based practices are those practices that are based on sound research that shows that desired outcomes follow from the employment of such practices. “Evidence-based education is a paradigm by which education stakeholders use empirical evidence to make informed decisions about education interventions (policies, practices, and programs). ‘Evidence-based’ decision making is emphasized over ‘opinion-based’ decision making.” Additionally, “the concept behind evidence-based approaches is that education interventions should be evaluated to prove whether they work, and the results should be fed back to influence practice. Research is connected to day-to-day practice, and individualistic and personal approaches give way to testing and scientific rigor.” (See, “What is Evidence-Based Education?“).
Of course, there are different kinds of evidence that can be used to show that practices, programs, and policies are effective. In a subsequent blog post I will discuss the range of evidence-based studies—from individual case studies and quasi-experimental designs to randomized controlled trials (RCTs). The quality of the evidence, as well as the quality of the study in which such evidence appears, is a critical factor in deciding whether a practice or program is not just “evidence-based” but, in fact, effective.
Nonprofit organizations and program evaluators are increasingly being called upon to evaluate multi-stakeholder initiatives. These initiatives often depend upon collaboration among various organizations and agencies. As a consequence, the need to evaluate collaborations and coalitions has become an important requirement for a wide range of contemporary program evaluations.
In a recent article, “Evaluating Collaboration for Effectiveness: Conceptualization and Measurement” (American Journal of Evaluation, 2015, Vol. 36, pp. 67-85), Lydia I. Marek, Donna-Jean Brock and Jyoti Savla discuss the customary difficulties in assessing collaborations, including, “lack of validated tools, undefined or immeasurable community outcomes, the dynamic nature of coalitions, and the length of time typical for interventions to effect community outcomes…the diversity and complexity of collaborations and the increasingly intricate political and organizational structures that pose challenges for evaluation design” (p. 68).
Building on previous research by Mattessich and Monsey (Collaboration: What Makes It Work?, Fieldstone Alliance, 1992), which outlines a six-category inventory of characteristics typical of collaborations (i.e., environment, membership characteristics, process/structure, communication, purpose, and resources), Marek et al. argue for seven key factors that typify successful inter-organizational collaborations and coalitions:
1. Context—the shared history among coalition partners, the context in which they function, and the coalition’s role within the community
2. Membership—individual coalition members’ skills, attitudes, and beliefs that together contribute to, or detract from, successful outcomes
3. Process and Organization—factors such as flexibility and adaptability of members, and members’ clear understanding of their roles and responsibilities
4. Communication—formal and informal communication among members, and communication with the community
5. Function—the determination and articulation of coalition goals
6. Resources—the coordination of financial and human resources required for the coalition to achieve its goals
7. Leadership—strong leadership skills, including organizing and relationship-building skills
Marek et al. offer a tool for assessing the effectiveness of collaboration: the Collaboration Assessment Tool (CAT), a 69-item survey instrument that poses a series of questions for each of the factors identified above. For example, the “function” domain of the survey asks organizational respondents to rate the following statements:
- This coalition has clearly defined the problem that it wishes to address
- The goals and objectives of the coalition are based upon key community needs
- This coalition has clearly defined short-term goals and objectives
- This coalition has clearly defined long-term goals and objectives
- Members agree upon the goals and objectives
- The goals and objectives set for this coalition can be realistically attained
- Members view themselves as interdependent in achieving the goals and objectives of this coalition
- The goals and objectives of this coalition differ, at least in part, from the goals and objectives of each of the coalition members
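To make the structure of the instrument concrete, here is a minimal sketch of how responses to one CAT domain might be scored. This is purely illustrative: the article excerpted above does not describe the CAT’s actual scoring scheme, so the 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) and the item names are assumptions made for the example.

```python
# Hypothetical scoring sketch for the CAT "function" domain.
# Assumed: 5-point Likert responses; item keys are invented shorthand
# for the statements listed above, not the instrument's actual labels.
from statistics import mean

# One respondent's ratings of the eight "function" domain statements
function_responses = {
    "problem_clearly_defined": 4,
    "goals_based_on_community_needs": 5,
    "short_term_goals_defined": 3,
    "long_term_goals_defined": 4,
    "members_agree_on_goals": 4,
    "goals_realistically_attainable": 5,
    "members_view_selves_as_interdependent": 3,
    "goals_differ_from_member_goals": 2,
}

# A simple domain score: the mean rating across the domain's items
domain_score = mean(function_responses.values())
print(f"Function domain mean: {domain_score:.2f}")
```

Averaging items within a domain, then comparing domain means across the seven factors, is one straightforward way a coalition could see at a glance where it is strong (e.g., function) and where it needs work (e.g., communication).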
“Evaluating Collaboration for Effectiveness: Conceptualization and Measurement” offers a valuable tool for assessing the often complex inter-organizational relationships that constitute multi-stakeholder collaborations. The CAT offers a system for gathering data to evaluate the quality of collaborations, and implicitly suggests an outline of the key characteristics of successful collaborations. The CAT will be useful for organizations that are embarking on collaborations, and for program evaluators who are charged with evaluating the success of such inter-organizational collaborations.
“Evaluating Collaboration for Effectiveness: Conceptualization and Measurement,” Lydia I. Marek, Donna-Jean Brock and Jyoti Savla, American Journal of Evaluation, 2015, Vol. 36, pp. 67-85