Program evaluation services that increase program value for grantors, grantees, and communities.

Author Archive for: Brad Rose

June 29, 2020

Systems Thinking in Evaluation

The Oxford English Dictionary defines a system as “a set of things working together as parts of a mechanism or an interconnecting network; a complex whole.” There are, of course, a range of specific kinds of systems, including economic systems, computer systems, biological systems, social systems, and psychological systems. In each of these domains, the system includes specialization of component parts (a division of labor), boundaries for each of the constituent parts, both a degree of relative autonomy and an interdependence of each part on the functioning of the other parts, long-term functioning (i.e., function over time), and the production of outcomes (whether intended or not). Systems produce effects.

While various systems are distinct, there has been an effort to generate a general science of systems under the umbrella of “systems theory” (see, for example, this summary, “Systems Theory”). Theorists have attempted to construct a general and abstract science that can describe a variety of systems. These efforts, although subject to some questions and criticisms, have been useful for mapping and describing a variety of systems and structures, and have helped social scientists and organizational/social change advocates to describe approaches to intervening in a variety of contexts, including organizational, educational, social welfare, and economic systems.

Systems thinking (thinking about systems rather than exclusively about individuals or single events) can help those who are attempting to strengthen initiatives and interventions. As Michael Goodman points out in “Systems Thinking: What, Why, When, Where, and How?”: “Systems thinking often involves moving from observing events or data, to identifying patterns of behavior over time, (and) to surfacing the underlying structures that drive those events and patterns. By understanding and changing structures that are not serving us well (including our mental models and perceptions), we can expand the choices available to us and create more satisfying, long-term solutions to chronic problems.”

Program evaluation benefits from a systems approach because interventions (e.g., programs and initiatives) are themselves systems, and are embedded or nested in larger social and economic systems. Rather than attributing challenges to program effectiveness exclusively to individuals’ one-off actions, it is more productive to examine the systemic features of the program in order to identify how internal structures and repeated behaviors, as well as larger external systemic constraints, shape the program’s effectiveness.


June 2, 2020

How Evaluation Can Help Non-profits to Respond to the COVID-19 Health Crisis

The current health crisis is compelling many non-profits to rethink how they do business. Many must consider how best to serve their stakeholders with new, and perhaps untested, means. Among the questions that many non-profits must now ask themselves: How do we continue to reach program participants and service recipients? How do we change or adjust our programming so that it reaches existing and new service recipients? How do we maximize our value while ensuring the safety of staff and clients? Are there new, unanticipated opportunities to serve program participants?

New conditions require new strategies. While the majority of non-profits’ attention will necessarily be focused on serving the needs of those they seek to assist, non-profit leaders will benefit from paying attention to which strategies work, and which adaptations work better than others.

In order to investigate the effectiveness of new programmatic responses, non-profits will benefit from conducting evaluation research that gathers data about the effects and the effectiveness of new (and continuing) interventions. Formative evaluation is one such means for discovering what works under new conditions.

The goal of formative evaluations is to gather information that can help program designers, managers, and implementers address challenges to the program’s effectiveness. In its paper “Different Types of Evaluation,” the CDC notes that formative evaluations are implemented “during the development of a new program (or) when an existing program is being modified or is being used in a new setting or with a new population,” and that formative evaluation “allows for modifications to be made to the plan before full implementation begins, and helps to maximize the likelihood that the program will succeed.” As the authors of “Formative Evaluation: Fostering Real-Time Adaptations and Refinements to Improve the Effectiveness of Patient-Centered Medical Home Interventions” put it, “Formative evaluations stress engagement with stakeholders when the intervention is being developed and as it is being implemented, to identify when it is not being delivered as planned or not having the intended effects, and to modify the intervention accordingly.”

While there are many potential formative evaluation questions, at their core they gather information to answer:

  • Which features of a program or initiative are working and which aren’t working so well?
  • Are there identifiable obstacles, or design features, that “get in the way” of the program working well?
  • Which components of the program do program participants say could be strengthened?
  • Which elements of the program do participants find most beneficial, and which least beneficial?

Typically, formative evaluations are used to provide feedback in a timely way, so that the functioning of the program can be modified or adjusted, and the goals of the program better achieved. Brad Rose Consulting has conducted dozens of formative evaluations, each of which has helped program managers to understand ways that their program or initiative can be refined, and program participants better served. For the foreseeable future, non-profits are likely to be called upon to offer ever greater levels of services. Program evaluation can help non-profits to maximize their effectiveness in ever more challenging times.


May 5, 2020

Keeping Busy? – The Cult of Busyness in a Time of Lock-down

We’re living in a period when many people are compelled to stay home, and many are out of work. Ironically, many employees are busier than ever. For the UPS driver, the grocery store cashier, medical personnel, truck drivers, and many assembly line workers, busyness is not a choice but an ongoing condition. While the COVID-19 crisis has increased the pace of work for those deemed “essential workers,” even in less stressful times the pace of work is intense and unrelenting.

Although for many busyness is imposed and involuntary, for others (especially middle and upper managers, and entrepreneurs) busyness is not an imposed condition but a prestigious choice. Indeed, busyness is a status symbol and an indicator of social importance. “In a recent paper published in the Journal of Consumer Research, researchers from Columbia University, Harvard, and Georgetown found through a series of experiments that the busier a person appeared, the more important they were deemed.” (https://globalnews.ca/news/3343760/the-cult-of-busyness-how-being-busy-became-a-status-symbol/) The authors of the paper write, “We argue that a busy and overworked lifestyle, rather than a leisurely lifestyle, has become an aspirational status symbol. A series of studies shows that the positive inferences of status in response to busyness and lack of leisure are driven by the perceptions that a busy person possesses desired human capital characteristics (competence, ambition) and is scarce and in demand on the job market.” As early as 1985, Barbara Ehrenreich noted this effect among women: “I don’t know when the cult of conspicuous busyness began, but it has swept up almost all the upwardly mobile, professional women I know.” (https://www.nytimes.com/1985/02/21/garden/hers.html)

Why are so many people obsessively busy? Tim Kreider, writing in “The ‘Busy’ Trap” (New York Times, June 30, 2012), observes: “Busyness serves as a kind of existential reassurance, a hedge against emptiness; obviously your life cannot possibly be silly or trivial or meaningless if you are so busy, completely booked, in demand every hour of the day.” Similarly, Lissa Rankin writes, “It seems to me that too many of us wear busyness as a badge of honor. I’m busy, therefore I’m important and valuable, therefore I’m worthy. And if I’m not busy, forget it. I don’t matter.”

In a recent article, “7 Hypotheses for Why We Are So Busy Today,” Kyle Kowalski posits the following hypotheses about busyness:

  1. Busyness as a badge of honor and trendy status symbol — or the glorification of busy — to show our importance, value, or self-worth in our fast-paced society
  2. Busyness as job security — an outward sign of productivity and company loyalty
  3. Busyness as FOMO (Fear of Missing Out) — spending is shifting from buying things (“have it all”) to experiences (“do it all”), packing our calendars (and social media feeds with the “highlight reel of life”)
  4. Busyness as a byproduct of the digital age — our 24/7 connected culture is blurring the line between life and work; promoting multitasking and never turning “off”
  5. Busyness as a time filler — in the age of abundance of choice, we have infinite ways to fill time (online and off) instead of leaving idle moments as restorative white space
  6. Busyness as necessity — working multiple jobs to make ends meet while also caring for children at home
  7. Busyness as escapism — from idleness and slowing down to face the tough questions in life (e.g. Maybe past emotional pain or deep questions like, “What is the meaning of life?” or “What is my purpose?”)

Whatever the reasons, busyness has its costs. The most obvious is “burnout”; others include long-term negative impacts on happiness, well-being, and health. Ultimately, busyness may make us feel important, in demand, high-status, and respected, but it may also be destructive.


Resources:

“The cult of busyness: How being busy became a status symbol,” Global News, March 30, 2017

“‘Ugh, I’m So Busy’: A Status Symbol for Our Time,” Joe Pinsker, The Atlantic, March 1, 2017

“Busyness 101: Why are we SO BUSY in Modern Life? (7 Hypotheses),” Kyle Kowalski, Sloww

“Conspicuous Consumption of Time: When Busyness and Lack of Leisure Time Become a Status Symbol,” Silvia Bellezza, Neeru Paharia, and Anat Keinan, Columbia Business School Research Archive

“The cult of busyness in the nonprofit sector,” Susan Fish, Charity Village, May 25, 2016

“Hers,” Barbara Ehrenreich, New York Times, February 21, 1985

“This is why you’re addicted to being busy,” Jory MacKay, Fast Company, August 12, 2019

“Are You Addicted to Being Busy? Why we should consider the hard truths we mask by staying busy,” Lissa Rankin, M.D., Psychology Today, April 7, 2014

“Busy is a Sickness,” Scott Dannemiller, Huffington Post, February 27, 2015

April 21, 2020

Developmental Evaluation in Tumultuous Times, Tumultuous Environments

The current health crisis is already having a powerful effect on non-profit organizations, many of whom had been economically challenged even before the onset of the COVID-19 pandemic. (See “A New Mission for Nonprofits During the Outbreak: Survival” by David Streitfeld, New York Times, March 27, 2020) Despite economic challenges, and as the immediate health crisis develops, non-profits will need information, including accurate and robust evaluation and monitoring information, even more than ever.

Under conditions of uncertainty, tumultuous social and economic environments, and rapid adaptation, non-profits will benefit from information gathered by flexible and adaptable evaluation approaches like Developmental Evaluation. “Developmental evaluation (DE) is especially appropriate for…organizations in dynamic and complex environments where participants, conditions, interventions, and context are turbulent, pathways for achieving desired outcomes are uncertain, and conflicts about what to do are high. DE supports reality-testing, innovation, and adaptation in complex dynamic systems where relationships among critical elements are nonlinear and emergent. Evaluation use in such environments focuses on continuous and ongoing adaptation, intensive reflective practice, and rapid, real-time feedback.”

As Michael Quinn Patton has recently pointed out, “All evaluators must now become developmental evaluators, capable of adapting to complex dynamic systems, preparing for the unknown, for uncertainties, turbulence, lack of control, nonlinearities, and for emergence of the unexpected. This is the current context around the world in general and this is the world in which evaluation will exist for the foreseeable future.”

Developmental Evaluation, the kind of evaluation approach Brad Rose Consulting has employed for many years, is extremely well-suited to serve the evaluation and information needs of non-profits, educational institutions, and foundations. For more information about our approach, please see our previous articles “Developmental Evaluation: Evaluating Programs in the Real World’s Complex and Unpredictable Environment” and “Evaluation in Complex and Evolving Environments.”


March 24, 2020

Working from Home/Telecommuting

With the onset of the coronavirus (COVID-19) in the US, increasing numbers of people are working from home. While many, indeed most, jobs don’t allow for home-based employment, both new technology (e-mail, video conferencing, etc.) and public health concerns are compelling increasing numbers of employers to permit their workers to telecommute. In fact, long before the COVID-19 pandemic, ever greater numbers of U.S. workers had been working from their homes. One source notes that between 2005 and 2017, the number of people telecommuting grew by 159 percent. Prior to the coronavirus, about 4.7 million people in the U.S. telecommuted. That number is expected to increase dramatically in 2020.

Benefits and Liabilities of Telecommuting

Telecommuting offers a number of benefits. In her article, “Benefits of Telecommuting for The Future Of Work,” Andrea Loubier reports that productivity receives a boost from those who telecommute because telecommuters are less distracted and more task focused. “With none of the distractions from a traditional office setting, telecommuting drives up employee efficiency. It allows workers to retain more of their time in the day and adjust to their personal mental and physical well-being needs that optimize productivity. Removing something as simple as a twenty minute commute to work can make a world of difference. If you are ill, telecommuting allows one to recover faster without being forced to be in the office.”

Telecommuting offers workers flexibility that they otherwise wouldn’t have. Many workers are able to organize their work with greater efficiency and deliberately integrate non-work tasks into their daily schedules. For older workers, telecommuting makes it possible to remain in the workforce longer. Some studies indicate that working from home also reduces employee turnover and increases company loyalty. For employers, telecommuting also reduces costs, including costs associated with office space, employee hiring (due to reduced turnover), office supplies, equipment, etc.

Despite the advantages of telecommuting, working from home (or anywhere off-site) also has disadvantages. For some, distraction isn’t decreased by working at home; it’s increased. Working from home can also be isolating and reduce the social rewards of the non-home workplace. Jobs that require building and maintaining strong interpersonal relationships are not well suited for telecommuting. As Mark Leibovich writes in “Working From Home in Washington? Not So Great”: “So much of what we do is just looking someone in the eye. … When you can see a facial expression or body language, you get a much better sense if you’re making your case. It can be much more challenging to convey urgency remotely.”


Resources:

“Benefits of Telecommuting for The Future Of Work,” Andrea Loubier, Forbes, July 20, 2017

“What is Telecommuting?” The Balance Career

Flexjobs

“The Growing Army of Americans Who Work From Home,” Karsten Strauss, Forbes, June 22, 2017

“Working From Home in Washington? Not So Great,” Mark Leibovich, New York Times, March 18, 2020

March 17, 2020

What is the purpose of Education?

In our previous article, “Schooling vs. Education – What is Education For?” we discussed the difference between schooling and education, examined the emergence of public education in the U.S., and briefly reviewed an article arguing that the lingering 19th- and early 20th-century “factory model” of education is out of date and needs to be replaced. Here, we’d like to briefly explore the underlying question: What is education’s purpose?

In classical Greece, Plato believed that a fundamental task of education is to help students to value reason and to become reasonable people (i.e., people guided by reason). He envisioned a segregated education in which different groups of students would receive different sorts of education, depending on their abilities, interests, and social stations. Plato’s student Aristotle thought that the highest aim of education is to foster good judgment or wisdom, and he was more optimistic than Plato about the ability of the typical student to achieve them. Centuries later, writing in the period leading up to the French Revolution, Jean-Jacques Rousseau (1712–78) held that formal education, like society itself, is inevitably corrupting, and argued that a genuine education should enable the “natural” and “free” development of children – a view that eventually led to the modern movement known as “open education.” Rousseau’s views of education, although based in an idea of the romanticized innocence of youth, informed John Dewey’s later progressive movement in education during the early 20th century. Dewey believed that education should be based largely on experience (later formulated as “experiential education”) and that it should lead to students’ “growth” (a somewhat ill-defined and indeterminate concept). Dewey further believed in the central importance of education for the health of democratic social and political institutions.

Over the centuries, philosophers have held a variety of views about the purposes of education. Harvey Siegel catalogues the following list:

  • the cultivation of curiosity and the disposition to inquire;
  • the fostering of creativity;
  • the production of knowledge and of knowledgeable students;
  • the enhancement of understanding;
  • the promotion of moral thinking, feeling, and action;
  • the enlargement of the imagination;
  • the fostering of growth, development, and self-realization;
  • the fulfillment of potential;
  • the cultivation of “liberally educated” persons;
  • the overcoming of provincialism and close-mindedness;
  • the development of sound judgment;
  • the cultivation of docility and obedience to authority;
  • the fostering of autonomy;
  • the maximization of freedom, happiness, or self-esteem;
  • the development of care, concern, and related attitudes and dispositions;
  • the fostering of feelings of community, social solidarity, citizenship, and civic-mindedness;
  • the production of good citizens;
  • the “civilizing” of students;
  • the protection of students from the deleterious effects of civilization;
  • the development of piety, religious faith, and spiritual fulfillment;
  • the fostering of ideological purity;
  • the cultivation of political awareness and action;
  • the integration or balancing of the needs and interests of the individual student and the larger society; and
  • the fostering of skills and dispositions constitutive of rationality or critical thinking.

Needless to say, the extent and diversity of this list suggest that the purposes of education are manifold, and that in different historical periods, and under various historical circumstances, people have looked to education to accomplish a wide variety of ends – from the instillment of reason, to self-development, to vocational and career preparation. The sometimes incompatible goals of education may inform some of the challenges – both philosophical and practical – that U.S. schools have experienced during the last few centuries, and that persist today. (See “Confusion Over Purpose of U.S. Education System,” Lauren Camera, August 29, 2016, U.S. News and World Report.)


Resources:

Harvey Siegel, “Philosophy of education,” Encyclopedia Britannica

“What is Education for?” Video. School of Life

“Education in Society” Video. Crash Course

“What Is the Purpose of Education?” Alan Singer, Huffpost, February 8, 2016

“Purpose of School” Steven Stemler, Wesleyan University

A List of Quotes about the Purposes of Education

“Confusion Over Purpose of U.S. Education System” Lauren Camera, August 29, 2016 U.S. News and World Report

“What Is Education For?” Danielle Allen, Boston Review, May 9, 2016

March 3, 2020

Schooling vs. Education – What is Education For?

When Mark Twain said, “I never let my schooling get in the way of my education” he was distinguishing between the effects of the conventional, institutional practices associated with schools, and the individual human endeavor—a life-long exercise—to become an educated person. But what does education consist of?

Schooling vs. Education

At its core, “education” is the accumulation of knowledge, skills, and, of course, moral/ethical values. (See Education.) In complex societies—those in which person-to-person, informal, intergenerational transmission of skills and knowledge is insufficient to ensure that successive generations acquire such assets—formal, structured schooling has been the form that education has taken. Throughout the history of Western society, much education has been conducted in private settings (e.g., tutors, private schools, and monasteries). In ancient Greece, for example, private schools and academies were tasked with educating the young free-born Athenian. During the Middle Ages, most schools were founded upon religious principles with the primary purpose of training clergy. Following the Reformation in northern Europe, clerical education was largely superseded by forms of elementary schooling for larger portions of the population. The Reformation was associated with the broadening of literacy, primarily aimed at equipping people to read the Bible and experience its teachings directly. It wasn’t until the 19th century, however, that the idea of educating the mass of a nation’s population via universal, non-sectarian public education emerged in Europe and the U.S.

In 1821, Boston started the first public high school in the United States. By 1870 all of the US states had some form of publicly subsidized formal education, and by the close of the 19th century, public secondary schools began to outnumber private ones. Access to schooling has been a perennial challenge, with women slowly gaining access throughout the 19th and early 20th centuries, and African Americans largely excluded from schooling or relegated to sub-standard schools. By 1900, 34 states had compulsory schooling laws, 30 of which required attendance until age 14 (or higher). By 1910, 72 percent of American children attended school, and half the nation’s children attended one-room schools. It was not until 1918, a little over a century ago, that every state required students to complete even elementary school.

Throughout its history, schooling has served many purposes. From the beginning of American public schooling, its purposes and goals have been fiercely contested. Some viewed schools as civic and moral preparatory institutions, while others saw schools as essential to forging and consolidating a distinct US national identity. Many saw schools as critical socializing processes which were designed less for the intellectual development of individuals, and more for equipping an increasingly industrializing workforce with the habits of, and tolerance for, factory work.

Factory Schools

In a recent article, “The Modern Education System Was Made to Foster ‘Punctual, Docile, and Sober’ Factory Workers (Perhaps It’s Time for a Change),” Allison Schrager argues that 19th-century American education was designed to produce disciplined, dependable, and compliant workers for an expanding industrial economy. Industrialists (often in alliance with other social sectors, like the Protestant clergy and, later, social reformers) believed that young people (many of them traditionally involved in agriculture, many the offspring of immigrants to the US) needed to be readied for factory life—which demanded punctuality, regular attendance, narrow task orientation, self-control, and respect for authority. These characteristics, although instrumental for an industrializing economy, were hardly geared toward the development of autonomous, self-directed individuals and active democratic citizens. The “factory model” of schooling was functional but, by many accounts, stunting. This factory model, Schrager argues, remains widely prevalent in contemporary US schools, and is anachronistic and increasingly dysfunctional. (For a critique of the claim that schools follow a factory model, see Valerie Strauss, “American schools are modeled after factories and treat students like widgets. Right? Wrong.” Washington Post, Oct. 10, 2015.)

As mentioned, education is always a “contested terrain.” Various social, economic, political, and religious forces are interested in ensuring that schools teach what representatives of these forces value. Consequently, the content, and in some cases, the form, that schooling takes is the product of the struggle among these forces. (See for example our earlier article, “The Implications of Public School Privatization” Part 1 and Part 2 )

Underlying all forms of schooling, public and private, are the implicit questions, “What is education?” and “What is education’s purpose?” In a forthcoming article, we’ll explore these central questions.

Resources:

Education

History of Education

Classical education

History of Education in the U.S.

Valerie Strauss, “American schools are modeled after factories and treat students like widgets. Right? Wrong.” Washington Post, Oct. 10, 2015

Allison Schrager, “The Modern Education System Was Made to Foster ‘Punctual, Docile, and Sober’ Factory Workers (Perhaps It’s Time for a Change)”

February 18, 2020

Digital Technology vs. Students’ Education

Over the last two decades, American education has sought to introduce and improve student access to digital technology. From the first introduction of personal computers in classrooms to the more recent efflorescence of iPads and on-line educational content, educators have expressed enthusiasm for digital technology. As Natalie Wexler writes in MIT Technology Review (December 19, 2019), “Gallup … found near-universal enthusiasm for technology on the part of educators. Among administrators and principals, 96% fully or somewhat support ‘the increased use of digital learning tools in their school,’ with almost as much support (85%) coming from teachers.” Despite this enthusiasm, there isn’t a lot of evidence for the effectiveness of digitally based educational tools. Wexler cites a study of millions of high school students in the 36 member countries of the Organization for Economic Co-operation and Development (OECD), which found that those who used computers heavily at school “do a lot worse in most learning outcomes, even after accounting for social background and student demographics.”

Although popular, and thought useful by educators, digital tools in classrooms not only appear to make little difference in educational outcomes but in some cases may actually harm student learning. “According to other studies, college students in the US who used laptops or digital devices in their classes did worse on exams. Eighth graders who took Algebra I online did much worse than those who took the course in person. And fourth graders who used tablets in all or almost all their classes had, on average, reading scores 14 points lower than those who never used them—a differential equivalent to an entire grade level. In some states, the gap was significantly larger.”

While it has been widely believed that digital technologies can “level the playing field” for economically disadvantaged students, the OECD study found that “technology is of little help in bridging the skills divide between advantaged and disadvantaged students.”

Why do digital technologies fail students? As Wexler ably details:

  • When students read text from a screen, it’s been shown, they absorb less information than when they read it on paper
  • Digital vs. human instruction eliminates the personal, face-to-face relationships that customarily support students’ motivation to learn
  • Technology can drain a classroom of the communal aspect of learning, over individualize instruction, and thus diminish the important role of social interaction in learning
  • Technology is primarily used as a delivery system, but if the material it’s delivering is flawed or inadequate, or presented in an illogical order, it won’t provide much benefit
  • Learning, especially reading comprehension, isn’t just a matter of skill acquisition, of showing up and absorbing facts, but is largely dependent upon students’ background knowledge and familiarity with context.

In his article “Technology in the Classroom in 2019: 6 Pros & Cons,” Vawn Himmelsbach makes many of the same arguments and adds a few liabilities to Wexler’s list:
  • Technology in the classroom can be a distraction
  • Technology can disconnect students from social interactions
  • Technology can foster cheating in class and on assignments
  • Students don’t have equal access to technological resources
  • The quality of research and sources they find may not be top-notch
  • Lesson planning might become more labor-intensive with technology

Access to and availability of digital technology vary, of course, among schools and school districts. As the authors of Concordia University’s blog, Rm. 241, point out, “Technology spending varies greatly across the nation. Some schools have the means to address the digital divide so that all of their students have access to technology and can improve their technological skills. Meanwhile, other schools still struggle with their computer-to-student ratio and/or lack the means to provide economically disadvantaged students with loaner iPads and other devices so that they can have access to the same tools and resources that their classmates have at school and at home.”

While students certainly need technological skills to navigate the modern world and equality of access to such technology remains a challenge, digital technology alone cannot hope to solve the problems of either education or “the digital divide.” The more we rely on the use of digital tools in the classroom, the less we may be helping some students, especially disadvantaged students, to learn.


February 4, 2020

Bias: Seeing Things as We Are, Not as They Are

“Bias” is a tendency (either known or unknown) to prefer one thing over another, a preference that prevents objectivity and influences understanding or outcomes in some way. (See the Open Education Sociology Dictionary.) Bias is an important phenomenon both in social science and in our everyday lives.

In her article, “9 types of research bias and how to avoid them,” Rebecca Sarniak discusses the core kinds of bias in social research. These include both the biases of the researcher, and the biases of the research subject/respondent.

Prevalent kinds of researcher bias include:

  • confirmation bias
  • culture bias
  • question-order bias
  • leading questions/wording bias
  • the halo effect

Respondent biases include:

  • acquiescence bias
  • social desirability bias
  • habituation
  • sponsor bias

In their Scientific American article “How to Think About Implicit Bias,” Keith Payne, Laura Niemi, and John M. Doris assure us that bias is not merely rooted in prejudice, but in our tendency to notice patterns and make generalizations. “When is the last time a stereotype popped into your mind? If you are like most people, the authors included, it happens all the time. That doesn’t make you a racist, sexist, or whatever-ist. It just means your brain is working properly, noticing patterns, and making generalizations…. This tendency for stereotype-confirming thoughts to pass spontaneously through our minds is what psychologists call implicit bias. It sets people up to overgeneralize, sometimes leading to discrimination even when people feel they are being fair.”

Of course, bias is not just a phenomenon relevant to social science (and evaluation) research. It affects our everyday activities too. In “10 Cognitive Biases That Distort Your Thinking,” Kendra Cherry explores a range of cognitive biases that shape our everyday judgments and decisions.

In evaluation research, especially when employing qualitative methods, such as interviews and focus groups, unconscious bias can negatively affect evaluation findings. The following types of bias are especially problematic in evaluations:

  • confirmation bias, when a researcher forms a hypothesis or belief and uses respondents’ information to confirm that belief
  • acquiescence bias, also known as “yea-saying” or “the friendliness bias,” when a respondent tends to agree with, and be positive about, whatever the interviewer presents
  • social desirability bias, when respondents answer questions in a way they think will lead to their being accepted and liked; some respondents will report inaccurately on sensitive or personal topics in order to present themselves in the best possible light
  • sponsor bias, when respondents know, or suspect, the interests and preferences of the research’s sponsor or funder and modify their answers accordingly
  • leading questions/wording bias, when a researcher, in elaborating on a respondent’s answer, puts words in the respondent’s mouth, whether to confirm a hypothesis, to build rapport, or because the researcher overestimates his or her understanding of the respondent

It’s important to strive to eliminate bias in both our personal judgments and in social research. (For an extensive list of cognitive biases, see here.) Awareness of potential biases can alert us to occasions when bias, rather than impartiality, influences our methods and affects our judgments.
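Some of these biases can also be mitigated mechanically in survey design. Question-order bias, for instance, is commonly reduced by presenting items to each respondent in an independent random order, so that order effects wash out across the sample rather than accumulating on particular questions. A minimal sketch in Python (the questions and the seeding scheme are illustrative, not drawn from any of the articles above):

```python
import random

questions = [
    "How satisfied are you with the program overall?",
    "How useful were the training sessions?",
    "How responsive was the program staff?",
]

def build_survey(questions, seed=None):
    """Return a per-respondent ordering of the survey items.

    Shuffling a copy for each respondent spreads any order effects
    evenly across the sample instead of letting them bias particular
    questions.
    """
    rng = random.Random(seed)
    ordered = questions[:]   # copy, so the master list is untouched
    rng.shuffle(ordered)
    return ordered

# Each respondent sees the same items, but in an independent order.
survey_a = build_survey(questions, seed=1)
survey_b = build_survey(questions, seed=2)
```

In practice, survey platforms offer item randomization as a built-in option; the point of the sketch is simply that the mitigation is procedural, not a matter of interviewer vigilance alone.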

Resources:

“How to Think About Implicit Bias,” Keith Payne, Laura Niemi, and John M. Doris, Scientific American, March 27, 2018
“Bias in Social Research,” M. Hammersley and R. Gomm
January 21, 2020

Did We Achieve What We Intended? Summative Evaluations

A summative evaluation is typically conducted near, or at, the end of a program or program cycle. Summative evaluations seek to determine whether, over the course of the intervention, the desired outcomes of a program were achieved. An “outcome” is the change, effect, or result that a program or initiative intends to achieve (see “What Counts as an ‘Outcome’ and Who Decides”). Summative evaluations, as their name implies, offer a kind of “summary” of the value or worth of a program, based on whether, and to what degree, intended outcomes have been achieved. Whereas formative evaluations are conducted near the beginning of a program and provide information with which to strengthen its implementation, summative evaluations are conducted near or at its end and help determine whether the program should be continued or discontinued. (See our article “Strengthening Programs and Initiatives through Formative Evaluation.”)

Summative evaluations are important because they gather and analyze data that indicate whether a program or initiative has been successful in effecting desired changes. Summative evaluations can be of use in making a case to potential funders and other stakeholders that continued support is a worthwhile investment. A word of caution: while it is important for funders to know that their investments are effective and that desired changes are happening, summative evaluations may also provide evidence that discontinuation of a program is in order. (See “Fail Forward: What We Can Learn from Program ‘Failure.’”)

Resources:

Understanding Different Types of Program Evaluation

“Building Our Understanding: Key Concepts of Evaluation – What Is It and How Do You Do It?” Centers for Disease Control and Prevention

Evaluation, Second Edition, Carol H. Weiss, Prentice Hall

“Types of Evaluation You Need to Know,” by Vipul Nanda

“Making Sense of Summative Evaluation: Three Tips for Making Those “Strings” Work in Your Favor,” by Heather Stombaugh

Just the Facts: Data Collection

January 7, 2020

Strengthening Programs and Initiatives through Formative Evaluation

Formative evaluations are evaluations whose primary purpose is to gather information that can be used to improve or strengthen the implementation of a program or initiative. They are typically conducted in the early-to-mid period of a program’s implementation. Formative evaluations can be contrasted with summative evaluations, which are conducted near, or at, the end of a program or program cycle and are intended to show whether or not the program has achieved its intended outcomes (i.e., its intended effects on individuals, organizations, or communities). Summative evaluations are used to indicate the ultimate value, merit, and worth of the program. Their findings can be used to determine whether the program should be continued, replicated, or curtailed.

The goal of formative evaluations is to gather information that can help program designers, managers, and implementers address challenges to the program’s effectiveness. In its paper “Different Types of Evaluation,” the CDC notes that formative evaluations are implemented “during the development of a new program (or) when an existing program is being modified or is being used in a new setting or with a new population.” Formative evaluation allows for modifications to be made to the plan before full implementation begins, and helps to maximize the likelihood that the program will succeed. As one study puts it, “Formative evaluations stress engagement with stakeholders when the intervention is being developed and as it is being implemented, to identify when it is not being delivered as planned or not having the intended effects, and to modify the intervention accordingly.” (See “Formative Evaluation: Fostering Real-Time Adaptations and Refinements to Improve the Effectiveness of Patient-Centered Medical Home Interventions.”)

While there are many potential formative evaluation questions, the core questions include:

  • Which features of a program or initiative are working, and which aren’t working so well?
  • Are there identifiable obstacles, or design features, that “get in the way” of the program working well?
  • Which components of the program do program participants say could be strengthened?
  • Which elements of the program do participants find most beneficial, and which least beneficial?

Typically, formative evaluations are used to provide feedback in a timely way, so that the functioning of the program can be modified or adjusted and the goals of the program better achieved. Brad Rose Consulting has conducted dozens of formative evaluations, each of which has helped program managers to understand ways that their program or initiative can be refined, and program participants better served.
November 5, 2019

We’re All in This Together—The features and dynamics of groups

We—humans—spend a lot of time in groups. Families, workplaces, churches, mosques, and synagogues, political organizations, sports teams, clubs, associations, etc. A “group” is a collection of two or more people that interact, communicate, and influence one another. A crowded elevator or a subway car is not generally considered a group; it’s a crowd. A club or a work-team is a group.

Groups are the settings for a range of behaviors, all of which entail human interaction and influence. Individuals become members of groups in order to achieve goals and to satisfy needs. Groups have shared goals, or agendas, which include their “task agenda”—getting work done—and their “social agenda”—meeting the social-emotional and identity needs of members. Groups assign members to roles that prescribe a set of expectations for each member’s behavior. These roles typically have different statuses, or different levels of prestige associated with each role. There are “in-groups” and “out-groups”; the former are groups with which people identify as members, and the latter are groups with which people don’t identify and which are often “assigned” by members of other groups. An organization is a kind of group whose members work together for a shared purpose in a continuing way. Organizations can contain various groups, both formal and informal, within their boundaries.

Groups have different levels of cohesion, or lack thereof. Both internal competition among group members and external competition with other groups can affect the degree of cohesion, or solidarity, of the group. While cohesion is important to most groups, if excessive it can produce undesirable dynamics like “groupthink,” which can lower the quality of the group’s decision-making, lead to closed-mindedness and prejudice, and exert undue pressure to conform.

These features and dynamics (above) are applicable to most groups. They are especially noticeable at work, where group dynamics are often operative. Status of members, specified roles, pressures towards conformity and “groupthink”, leadership and “followership,” group decision-making, etc., are issues with which we must often deal—both consciously and unconsciously. In the for-profit world and the non-profit world, group dynamics are at play. Awareness of these features can help us to productively deal with them, rather than experience them unconsciously, and at times, adversely.

October 22, 2019

What the Heck are We Evaluating, Anyway?

When you’re thinking about doing an evaluation — either conducting one yourself, or working with an external evaluator to conduct the evaluation — there are a number of issues to consider. (See our earlier article “Approaching an Evaluation—Ten Issues to Consider”)

I’d like to briefly focus on four of those issues:

  • What is the “it” that is being evaluated?
  • What are the questions that you’re seeking to answer?
  • What concepts are to be measured?
  • What are appropriate tools and instruments to measure or indicate the desired change?

1. What is the “it” that is being evaluated?

Every evaluation needs to look at a particular and distinct program, initiative, policy, or effort. It is critical that the evaluator and the client be clear about what the “it” is that the evaluation will examine. Most programs or initiatives occur in a particular context, have a history, involve particular persons (e.g., staff and clients/service recipients), and are constituted by a set of specific actions and practices (e.g., trainings, educational efforts, activities, etc.). Moreover, each program or initiative has particular changes (i.e., outcomes) that it seeks to produce. Such changes can be manifold or singular. Typically, programs and initiatives seek to produce changes in attitudes, behavior, knowledge, capacities, etc. Changes can occur in individuals and/or collectivities (e.g., communities, schools, regions, populations, etc.).

2. What are the questions that you’re seeking to answer?

Evaluations, like other investigative or research efforts, involve looking into one or more evaluation questions. For example, does a discrete reading intervention improve students’ reading proficiency? Does a job training program help recipients to find and retain employment? Does a middle school arts program increase students’ appreciation of art? Does a high school math program improve students’ proficiency with algebra problems?

Programs, interventions, and policies are implemented to make valued changes in the targeted group of people that these programs are designed to serve. Every evaluation should have some basic questions that it seeks to answer. By collaboratively defining key questions, the evaluator and the client will sharpen the focus of the evaluation and maximize the clarity and usefulness of the evaluation findings. (See “Program Evaluation Methods and Questions: A Discussion”)

3. What concepts are to be measured?

Before launching the evaluation, it is critical to clarify the kinds of changes that are desired, and then to find appropriate measures for these changes. Programs that seek to improve maternal health, for example, may involve adherence to recommended health screening measures, e.g., pap smears. Evaluation questions for a maternal health program, therefore, might include: “Did the patient receive a pap smear in the past year? Two years? Three years?” Ultimately, the question is “Does receipt of such testing improve maternal health?” (Note that this is only one element of maternal health. Other measures might include nutrition, smoking abstinence, wellness, etc.)

4. What are appropriate tools and instruments to measure or indicate the desired change?

Once the concepts (e.g., health, reading proficiency, employment, etc.) are clearly identified, it becomes possible to identify measures or indicators of each concept, and to identify appropriate tools that can measure them. Ultimately, we want tools that are able either to quantify, or qualitatively indicate, changes in the conceptual phenomena that programs are designed to affect. In the examples noted above, evaluations would seek to show changes in program participants’ reading proficiency (education), employment, and health.
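Once a measure has been selected, the simplest quantitative indicator of change is a pre/post comparison on that measure. The sketch below is purely illustrative: the proficiency scores are invented, and a real evaluation would also attend to attrition, measurement error, and comparison groups:

```python
def mean(xs):
    """Arithmetic mean of a non-empty sequence of numbers."""
    return sum(xs) / len(xs)

# Hypothetical pre- and post-program reading proficiency scores
# (0-100) for the same five participants, listed in the same order.
pre  = [52, 61, 48, 70, 55]
post = [60, 66, 55, 72, 63]

# Per-participant change on the measure, and the average change,
# which serves as a simple indicator of the desired outcome.
changes = [b - a for a, b in zip(pre, post)]
avg_change = mean(changes)
```

Even this minimal pre/post design makes the earlier point concrete: the evaluation question (“did reading proficiency improve?”) is answerable only after the concept has been operationalized as a specific, repeatable measure.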

We have more information on these topics:

“Approaching an Evaluation—Ten Issues to Consider”

“Understanding Different Types of Program Evaluation”

“4 Advantages of an External Evaluator”

October 8, 2019

Why Evaluate Your Program?

Program evaluation is a way to judge the effectiveness of a program. It can also provide valuable information to ensure that the program is maximally capable of achieving its intended results. Some of the common reasons for conducting program evaluation are to:

  • monitor the progress of a program’s implementation and provide feedback to stakeholders about various ways to increase the positive effects of the program
  • measure the outcomes, or effects, produced by a program in order to determine if the program has achieved success and improved the lives of those it is intended to serve or affect
  • provide objective evidence of a program’s achievements to current and/or future funders and policy makers
  • elucidate important lessons and contribute to public knowledge.

There are numerous reasons why a program manager or an organizational leader might choose to conduct an evaluation. Too often, however, we don’t do things until we have to. Program evaluation is a way to understand how a program or initiative is doing. Compliance with a funder’s evaluation requirements need not be the only motive for evaluating a program. In fact, learning in a timely way about the achievements of, and challenges to, a program’s implementation—especially in the early-to-mid stages—can be a valuable and strategic endeavor for those who oversee programs. Evaluation is a way to learn about and to strengthen programs.


September 24, 2019

It’s Not Just Your Credit Card Score – The Erosion of Privacy

What is Privacy Good For?

The right to privacy is a much-cherished value in America. As we noted in an earlier article, “Transparent as a Jellyfish? Why Privacy is Important” privacy is crucial to the development of a person’s autonomy and subjectivity. When privacy is reduced by surveillance or restrictive interference—either by governments or corporations—such interference may not just affect our social and political freedoms, but undermine the preconditions for the fundamental development and sustenance of the self.

Daniel Solove, Professor of Law at George Washington University Law School, lists ten important reasons for privacy, including: limiting the power of government and corporations over individuals; establishing important social boundaries; creating trust; and serving as a precondition for freedom of speech and thought. Solove also notes, “Privacy enables people to manage their reputations. How we are judged by others affects our opportunities, friendships, and overall well-being.” (See “Ten Reasons Why Privacy Matters,” Daniel Solove.) Julie Cohen, in “What Privacy Is For,” Harvard Law Review, Vol. 126, 2013, writes: “Privacy shelters dynamic, emergent subjectivity from the efforts of commercial and government actors to render individuals and communities fixed, transparent, and predictable. It protects the situated practices of boundary management through which self-definition and the capacity for self-reflection develop.”

Strains on Privacy

Privacy, of course, is under continual strain. In his recent article, “Uh-oh: Silicon Valley is building a Chinese-style social credit system,” (Fast Company, August 8, 2019) Mike Elgan notes that China is not alone in seeking to create a “social credit” system—a system that monitors and rewards/punishes citizen behavior.

China’s state-run system may seem extreme. (It rewards and punishes such behaviors as failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, and jaywalking. It also publishes lists of citizens’ social credit ratings and uses public shaming to enforce desired behavior.) Yet Elgan notes that Silicon Valley has similar designs on monitoring and motivating what it deems “desirable and undesirable” behavior. The outlines of an ever-evolving corporate-sponsored, technology-based “social credit” system now include:

  • Life insurance companies can base premiums on what they find in your social media posts
  • Airbnb, which now has more than 6 million listings in its system, can ban customers and limit their travel/accommodation choices. Airbnb can disable your account for life for any reason it chooses, and it reserves the right not to tell you the reason.
  • PatronScan, an ID-reading service, helps restaurants and bars spot fake IDs—and troublemakers. The company maintains a list of objectionable customers designed to protect venues from people previously removed for “fighting, sexual assault, drugs, theft, and other bad behavior.” A “public” list is shared among all PatronScan customers.
  • Under a new policy Uber announced in May, if your average rating is “significantly below average,” Uber will ban you from the service.
  • WhatsApp is, in much of the world today, the main form of electronic communication. Users can be blocked if too many other users block them. Not being allowed to use WhatsApp in some countries is as punishing as not being allowed to use the telephone system in America.

The Consequences

While no one wants to endorse “bad behavior,” ceding the power to corporations and technology giants to determine which behavior counts as undesirable and punishable may not be the most just or democratic way to ensure societal norms and expectations. As Elgan observes, “The most disturbing attribute of a social credit system is not that it’s invasive, but that it’s extra-legal. Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights.” Even more ominously, as Julie Cohen writes, “Conditions of diminished privacy shrink the capacity (of self government), because they impair both the capacity and the scope for the practice of citizenship. But a liberal democratic society cannot sustain itself without citizens who possess the capacity for democratic self-government. A society that permits the unchecked ascendancy of surveillance infrastructures cannot hope to remain a liberal democracy.”


Resources:

“America Isn’t Far Off from China’s ‘Social Credit Score’” Anthony Davenport, Observer, February 19, 2018.

“How the West Got China’s Social Credit System Wrong,” Louise Matsakis, Wired, July 29, 2019

“Ten Reasons Why Privacy Matters” Daniel Solove

“What Privacy Is For,” Julie Cohen, Harvard Law Review, Vol. 126, 2013

“The Spy in Your Wallet: Credit Cards Have a Privacy Problem,” Geoffrey A. Fowler, The Washington Post, August 26, 2019.


September 3, 2019

What is “Normal”?

Why do so many of us aspire to be “normal”? Who decides what’s normal and abnormal? What happens to our self- and social worth when we discover that we aren’t “normal”? In a recent article, “How Did We Come Up with What Counts as Normal,” Jonathan Mooney discusses the rise of an idea that has acquired substantial power in modern society. Mooney notes that “normal” entered the English language only in the mid-19th century and has its roots in the Latin norma, which refers to the carpenter’s T-square. It originally meant simply “perpendicular.” Right angles, however, are considered mathematically “good,” and “normal” soon came to be associated not just with the description of the orthogonal angle but also with the normative notion of something desirable or socially expected. Mooney argues that it is this ambiguity, as both a descriptive word and a normative ideal, that makes “normal” so appealing and powerful.

“Normal” was first used in the academic disciplines of comparative anatomy and physiology. For academics in these and other fields, “normal” soon evolved to describe bodies and organs that were “perfect” or “ideal,” and it also was used to name certain states as “natural.” Eventually, thanks largely to the field of statistics, ideas about the normal conflated the average with the ideal or perfect. In the 19th century, for example, Adolphe Quetelet, a deep believer in the power of statistics, advanced the idea of the “average man” and argued that “the normal” (i.e., the average) was perfect and beautiful. Quetelet characterized that which was not “normal” not simply as “abnormal,” or non-average, but as something potentially monstrous. “In 1870, in a series of essays on ‘deformities’ in children, he juxtaposed children with disabilities to the normal proportions of other human bodies, which he calculated using averages.” Thus, averages soon became the aspirational ideal.

Mooney also describes how the statistician Francis Galton, who was Charles Darwin’s cousin, “…was both the first person to develop a properly statistical theory of the normal . . . and also the first person to suggest that it be applied as a practice of social and biological normalization.” By the early twentieth century, the concept of a normal man took hold. Soon, the emerging field of public health embraced the idea of the normal; schools, with rows of desks and a one-size-fits-all approach to learning, were designed for the mythical middle; and the industrial economy sought standardization, brought about by the application of averages, standards, and norms to industrial production. Moreover, eugenics, an offshoot of genetics created by Galton, was committed to ridding the world of human “defectives.”

The ensuing predominance (some might say “domination”) of “the normal” became firmly established by the mid-20th century. Mooney points out, however, that the normal was not so much “discovered” as invented, largely by statistics and statisticians, and promulgated by the social sciences and moralists. Alain Desrosières, a renowned historian of statistics, wrote, “With the power deployed by statistical thought, the diversity inherent in living creatures was reduced to an inessential spread of ‘errors,’” and the average was held up as the normal: a literal, moral, and intellectual ideal.

Resources:

“How Did We Come Up with What Counts as Normal,” Jonathan Mooney, Literary Hub August 16, 2019

Normal Sucks: How to Live, Learn, and Thrive Outside the Lines, Jonathan Mooney, Henry Holt and Co., 2019

“Ranking, Rating, and Measuring Everything”

For information on social norms (formal and informal norms, mores, folkways, etc.) see https://courses.lumenlearning.com/alamo-sociology/chapter/social-norms/ and “What is a Norm?”

August 6, 2019

Politics is Not Forbidden. Should Nonprofits Rethink Their Political Agnosticism?

Nonprofits have long operated under the assumption that they must remain non-political. A recent article by Bill Shore in the Stanford Social Innovation Review (July 17, 2019) argues that nonprofits need not be constrained by their presumed non-political status. In fact, Shore contends, “Nonprofits need to do much more of exactly what most of them don’t think they can or should do: influence public policy and its execution.”

Shore says that nonprofits can and should be more political than many in the nonprofit community believe they are legally permitted to be. Achieving the goals that many nonprofits pursue depends upon nonprofits becoming more, not less, political. While some activities are prohibited, including working on campaigns, donating to candidates, and engaging in lobbying beyond certain generously defined limits, Shore notes that “… a broad range of political work is permitted, appropriate, even essential. (There is also the option of establishing a 501(c)(4) that permits campaign engagement and support, which we haven’t done.)”

Shore asserts that getting political is often about educating, not necessarily lobbying or campaigning. He further argues that, “Nonprofits need to build their internal political capacity. Nonprofit political activity is good for nonprofits, good for politics, and good for the people that both aim to serve.” Ultimately, by expanding their political activities within stipulated legal limits, “nonprofits benefit by seeing their programs and services achieve greater scale and reach more people in need, in ways that only politics and public policy can guarantee.”

Resources:

“Getting Political Is Good for Everyone,” Bill Shore, Stanford Social Innovation Review, July 17, 2019

July 23, 2019

Don’t Blame the Robots

We’ve previously written about the impact of Artificial Intelligence (AI) and new computer technologies on employment in the US. (See, for example, “Humans Need Not Apply: What Happens When There’s No More Work?” and “Will President Trump’s Wall Keep Out the Robots?”) Two recent articles further highlight the effects associated with the advance of AI. The first, “Robots Are Not Coming for Your Job. Management Is,” by Brian Merchant at Gizmodo, argues that while robots-stealing-your-job headlines make for good copy, robots are not to blame for current and anticipated technological displacement. Blaming robots misattributes responsibility for how technology is used. “Robots are not threatening your job. Business-to-business salesmen promising automation solutions to executives are threatening your job. Robots are not killing jobs. The managers who see a cost benefit to replacing a human role with an algorithmic one and choose to make the switch are killing jobs. Robots are not coming for your job. The CEOs who see an opportunity to reap greater profits in machines that will make back their investment in three point seven years and send the savings upstream — they’re the ones coming for your job.” Merchant says that merely technologizing the phenomenon obscures the real sources of (and possible solutions to) the problem. “Letting an ambiguous conception of ‘robots’ instead shoulder the blame lets the managerial class evade scrutiny for how it deploys automation, shuts down meaningful discussion about the actual contours of the phenomenon, and prevents us from challenging the march of this manifest ‘robodestiny’ when it should be challenged.”

In “A Machine May Not Take Your Job, but One Could Become Your Boss” (Kevin Roose, New York Times, June 23, 2019), the author says “…in all of the worry about the potential of artificial intelligence to replace rank-and-file workers, we may have overlooked the possibility it will replace the bosses, too.” Roose observes that one of the goals of AI is to optimize the efficiency of humans in the workplace. Thus, systems that monitor, guide, and report on employee performance are increasingly seen in white collar workplaces, where employees are being “assisted” to be more productive, more customer friendly, and to work more quickly — by “adjunct management,” i.e., artificial intelligence.

Roose scans a variety of workplaces. In the insurance industry, he reports on AI systems that provide on-screen prompts to call center workers, prompting them to be chirpier and more empathetic; he also discusses the use of AI and employee tracking in both Amazon warehouses and retail stores. In the latter case, he notes that 7-Eleven “uses in-store sensors to calculate a ‘true productivity’ score for each worker, and rank workers from most to least productive,” and that “Uber, Lyft and other on-demand platforms have made billions of dollars by outsourcing conventional tasks of human resources — scheduling, payroll, performance reviews — to computers.”

Management by algorithm doesn’t just affect call center workers, Uber drivers, and warehouse workers. It also bodes less-than-well for managers, whose traditional supervisory and oversight duties are increasingly being handled by “robots.” As we discussed in “Humans Need Not Apply,” AI promises to displace both blue collar, manual laborers and white collar, college-educated professionals — the latter including, but not limited to, lawyers, computer programmers, managers, and office and retail workers. “A Machine May Not Take Your Job, but One Could Become Your Boss” hauntingly suggests that management, too, is in the crosshairs of AI.

Resources:

“Robots Are Not Coming for Your Job. Management Is,” Brian Merchant, Gizmodo

“A Machine May Not Take Your Job, but One Could Become Your Boss,” by Kevin Roose, New York Times, June 23, 2019
July 9, 2019

Ranking, Rating, and Measuring Everything

In an important and brief new book, The Metric Society: On the Quantification of the Social (Polity, 2019), German sociologist Steffen Mau argues that the historic growth in the availability of data, and a seeming societal obsession with quantitatively measuring and ranking everything, is fast making us a “metric society.” “A cult of numbers masquerading as rationalization,” he says, is having unparalleled impact on how we understand both social and personal value. We are becoming increasingly trapped in a social world where “The possibilities of life and activity logging are growing apace: consumption patterns, financial transactions, mobility profiles, friendship networks, states of health, education activities, work output, etc.—all this is becoming statistically quantifiable.” Such quantification is far from neutral and scientific, Mau says. It leads to ever greater tendencies, both individual and institutional, to classify, differentiate, and construct social hierarchies. He argues further that these tendencies are paving the way for us to become “an evaluation society”: a society where individuals constantly measure and compare their social worth with that of others (e.g., on dating sites and via Facebook “likes”), and where both corporations and the state sort people, based on narrow statistics, into categories that ultimately have differential access to valuable resources.

While the book is filled with examples, Chapter 5, “The Evaluation Cult: Points and Stars,” explores how “‘the evaluation cult’ is binding us to the metrics of measurement, evaluation, and comparison.” Mau scans the proliferation of various tools for evaluation: satisfaction surveys, preference measures, self-assessments, health-tracking algorithms, and myriad ranking systems, ranging from Yelp to publicly available starred reviews of medical providers and lawyers. He shows how such ratings and rankings, often justified by claims of providing “transparency,” helpful information, and consumer influence over service providers and products, are upending both markets and the professions, in some cases driving companies to purchase good reviews.

Mau raises questions not just about the validity of measures (after all, what is the difference between a three-star restaurant rating and a four-star rating?), but argues that the growth in the use of such measures is transforming how we view and value ourselves and others. “The universal language of numbers, their lack of ambiguity, and the illusion of commensurability, pave the way for the hegemony of a metrics-based apparatus of comparison.” He says that today, we are witnessing and participating in the emergence of a new “status regime” characterized by quantification and numerical ranking. This “quantitative comparison is frequently translated into a competitive ethos of better versus worse, more versus less.”

Among the other observations Mau offers:

  • growing reliance upon numbers changes our everyday notions of value and social status
  • the availability of quantitative information reinforces the tendency toward social comparison and rivalry
  • quantitative measurement of social phenomena fosters the expansion of competition
  • representations of quantitative data, such as graphs, tables, lists, and scores, change qualitative differences into quantitative inequalities
  • the availability of, and reliance upon, quantitative data leads to further social hierarchization

Ultimately, “…the measurement and quantification of the social realm are not neutral representations of reality. On the contrary, they are representative of specific orders of worth which are invariably based on forgone conclusions as to what can and should be measured and evaluated, and by what means. Metrics may claim to give an objective, accurate, and rational picture of the world as it is, but they also contribute, through the selection, weighting, and linking of information, to the establishment of the normative order.” Essentially, Mau raises a perennial question that is relevant to all evaluative efforts: Do we measure what’s valuable, or is it valuable because we choose to measure it? (Please see our previous posts “The Tyranny of Metrics” and “What Counts as an ‘Outcome’—and Who Determines?”) Mau argues further that we are becoming a society obsessed with managing our reputations, and ultimately a society of ever greater competition and rivalry.

Resources:

The Metric Society: On the Quantification of the Social, Steffen Mau, Polity, 2019

Heather Douglas, “Facts, Values, and Objectivity”

Max Weber, Objectivity in the Social Sciences 

Max Weber, Methodology of the Social Sciences, Transaction Press, 2011

May 29, 2019

Being Smart About What You Feel

Emotional Intelligence

In a previous blogpost, “Interpersonal Skills Enhance Program Evaluation,” we discussed the importance of interpersonal and relational skills for program evaluators. These skills make effective and responsive interpersonal interaction possible. “Emotional Intelligence” underlies many of these skills. Emotional Intelligence, popularized by Daniel Goleman in his book Emotional Intelligence: Why It Can Matter More Than IQ (Bantam Books, 1995), is the ability to recognize, manage, and utilize both one’s own emotions and the emotions of others. Emotional Intelligence, as summarized by Eric Ravenscraft in his recent article “Emotional Intelligence: The Social Skills You Weren’t Taught in School,” Lifehacker, February 20, 2019, includes the following elements:

  • Self-awareness: Self-awareness involves knowing your own feelings. This includes having an accurate assessment of what you’re capable of, when you need help, and what your emotional triggers are.
  • Self-management: This involves being able to keep your emotions in check when they become disruptive. Self-management involves being able to control outbursts, calmly discuss disagreements, and avoid activities that undermine you, like extended self-pity or panic.
  • Motivation: Everyone is motivated to action by rewards like money or status. Goleman’s model, however, refers to motivation for the sake of personal joy, curiosity, or the satisfaction of being productive.
  • Empathy: While the three previous categories refer to a person’s internal emotions, this one deals with the emotions of others. Empathy is the skill and practice of reading the emotions of others and responding appropriately.
  • Social skills: This category involves the application of empathy as well as negotiating the needs of others with your own. This can include finding common ground with others, managing others in a work environment, and being persuasive.

 

Critiques of Emotional Intelligence

Although Emotional Intelligence (EI) has become an increasingly accepted concept, there are some who question its distinctiveness and validity. Some say that it is difficult to distinguish from regular IQ, that it is not really a kind of intelligence but a set of behaviors, and that it is nearly impossible to objectively measure.

A recent article argues that the idea of “reading” the emotions of oneself and of others is itself a problematic conception. In “Emotional Intelligence Needs a Rewrite” (Nautilus, August 3, 2017), Lisa Feldman Barrett writes that “The traditional foundation of Emotional Intelligence rests on two common-sense assumptions. The first is that it’s possible to detect the emotions of other people accurately. That is, the human face and body are said to broadcast happiness, sadness, anger, fear, and other emotions, and if you observe closely enough, you can read these emotions like words on a page. The second assumption is that emotions are automatically triggered by events in the world, and you can learn to control them through rationality.”

Feldman Barrett argues that neither of these assumptions stands up to scientific investigation. Research shows that faces and bodies alone do not communicate any specific emotion, in any consistent manner, and that since the brain doesn’t have separate regions—one for emotion, and one for cognition—the belief that we can control or manage emotions using our rational brains is fallacious. She argues that although it may sound appealing and reasonable that we can detect emotions in others by observing their faces and bodies, expressions are neither universal nor mono-emotional. “When it comes to detecting emotion in other people, the face and body do not speak for themselves.”  She says that, “Your brain may automatically make sense of someone’s movements in context, allowing you to guess what a person is feeling, but you are always guessing, never detecting.”

The author offers an alternative, neuroscientific view of how the brain works. She says that our brains “create all thoughts, emotions, and perceptions, automatically and on the fly, as needed. This process is completely unconscious. It may seem like you have reflex-like emotional reactions and effortlessly detect emotions in other people, but under the hood, your brain is doing something else entirely.” Essentially, our brains are survival-oriented prediction engines that produce responses to internal and external stimuli, responses that “become the emotions we experience and the expressions we perceive in other people.” Therefore, “Emotional Intelligence requires a brain that can use prediction to manufacture a large, flexible array of different emotions. If you’re in a tricky situation that has called for emotion in the past, your brain will oblige by constructing the emotion that works best.”

Feldman Barrett argues that we don’t so much observe emotions in ourselves and others as construct and predict them. She further argues that we can give our “constructivist” brains (and their concomitant emotions) a boost by enhancing the granularity of our sensitivity to our feelings and emotional states. One way we can do this is by learning richer vocabularies to describe our own and others’ feeling states, thereby priming our prediction engines to “guess” what others are feeling with even more specificity.

Whether our brains construct emotions and predict the emotions of others seems largely irrelevant to the importance of understanding emotions in ways that help us relate to, and interact with, others. In the final analysis, the human social world is composed of thinking and feeling beings, and those who can understand (“predict,” in Feldman Barrett’s view) and manage emotions will be better prepared to engage in that world.

Resources:

Daniel Goleman, Emotional Intelligence: Why It Can Matter More Than IQ, Bantam Books, 1995

“Emotional Intelligence: The Social Skills You Weren’t Taught in School,” Eric Ravenscraft, Lifehacker, February 20, 2019

“Emotional Intelligence Needs a Rewrite”, Lisa Feldman Barrett, Nautilus, August 3, 2017.

“What is Emotional Intelligence?” Michael Akers & Grover Porter, Psych Central, October 8, 2018

“The Benefits of Emotional Intelligence,” Paula Durlofsky, Psych Central, July 8, 2018
May 21, 2019

Lying at Work

It’s sometimes difficult to tell the truth, especially in arenas like the workplace, where inequalities of power and authority make it difficult to “speak truth to power.” In a recent Harvard Business Review article, “4 Ways Lying Becomes the Norm at a Company” (February 15, 2019), Ron Carucci discusses the results of a substantial, 15-year longitudinal study that examined the systemic (vs. personal) incentives for dishonesty. Carucci says there are a range of incentives, or prompts, for employees to be less than honest at work. Among these:

  • Inconsistency: An inconsistency between an organization’s stated mission, objectives, and values, and the way it is actually experienced by employees and the marketplace. As one interviewee put it, “Our priorities change by the week. Nobody wants to admit we’re in trouble, so we’re grasping at straws. We don’t know who we are anymore, so we’re just making things up.”
  • Unjust accountability systems, especially when an organization’s processes for measuring employee contributions are perceived as unfair or unjust. Research shows that people are nearly 4 times more likely to withhold or distort information when the system is perceived to be unfair or rigged.
  • Poor organizational governance; for example, when there is no effective process to gather decision makers into honest conversations about tough issues. Truth is forced underground, leaving the organization to rely on rumors and gossip.
  • Inter-group rivalry, conflict, and competition (what Carucci terms “weak cross-functional collaboration”) is a predictor of people withholding or distorting truthful information. Additionally, Carucci observes that isolation, fragmentation, and departmental/divisional loyalties often result in dishonesty or a damaging lack of candor.

Because these factors are cumulative, an organization afflicted with all four of them is 15 times more likely to end up in an “integrity catastrophe” than one that has none of these integrity/honesty problems. Carucci argues, however, that these organizational problems are alterable, and that a culture of honesty can be achieved by companies and organizations that confront these issues.

Resources:

“4 Ways Lying Becomes the Norm at a Company,” Ron Carucci, Harvard Business Review, February 15, 2019

May 7, 2019

Meritocracy: Who Deserves to Succeed?

Meritocracy is a system in which skills, ability, talent, and knowledge are thought to be the best basis for promoting people to positions of power and social standing. Advancement in a meritocratic system is based on performance, typically as measured through examination or otherwise demonstrated achievement. Meritocracies can be found as far back as ancient China, where an administrative meritocracy awarded offices on the basis of civil service examinations rather than inheritance. In contemporary England, there is a Meritocratic political party which believes, among other things, that there should be a 100% inheritance tax, so that the super-rich can’t pass on their wealth to a select few (their privileged children), and that every child should get an equal chance to succeed in life. Needless to say, a fully realized meritocracy could go a long way toward ending elite dynasties and hereditary monarchy.

Ironically, the term ‘meritocracy’ was coined as a satirical slur in a dystopic novel, The Rise of the Meritocracy, 1870–2033, published in 1958 by the British sociologist and Labour Party politician Michael Young. The Rise of the Meritocracy imagines a world in which social class and inherited position have been replaced by a system that promotes to the top those who have advanced educationally, as evidenced by rigid testing and objective standards. The book, however, argues that meritocracy doesn’t eliminate ruling elites, but simply ends up recreating a new class system by means of education and testing. As the conservative commentator Toby Young (the son of the author of The Rise of the Meritocracy) recently observed, “(there is) the tendency within meritocracies for the cognitive elite to become a self-perpetuating oligarchy.”

For many years, the US has been thought to be predominantly a meritocracy, one in which the social power of inheritance and privilege has been superseded by a system in which leaders and socially prominent persons are those who possess superior knowledge and talent. A spate of recent articles, some of which were precipitated by the recent college-admissions scandal (see, for example, “A History of College Admissions Schemes, From Encoded Pencils to Paid Stand-Ins,” Adeel Hassan, New York Times, March 15, 2019), calls into question many of the assumptions about the benefits of meritocracy, and suggests that meritocracy has not yet been realized.

There are, of course, a number of criticisms of meritocracy and the concept of “merit” on which it is based. Among these:

  • What counts as meritorious and who decides which qualities, skills, and knowledge are worthy of merit?
  • In educational systems, do standardized tests and other measures of merit accurately and thoroughly indicate merit/worth?
  • Is meritocracy a kind of “social Darwinism,” in which the survival (and promotion) of the physically “fittest,” is replaced by the survival of the cognitively “smartest” (i.e. the best test takers)?
  • Do wealth and inheritance affect individuals’ ability to obtain the educational credentials by which merit is demonstrated? (For example, does the level of education required for a person to become competitive in a meritocracy discriminate against those who are unable to afford the often expensive and time-consuming “markers” that an education affords?)
  • Does meritocracy, despite its original anti-elitist intentions, merely recreate another kind of permanent elite?

The ultimate question is whether a meritocracy is the best we can do. While meritocracy is problematic, is there a fairer system to replace it? (See, for example, Richard Dawkins’s brief discussion, “Democracy or Meritocracy: Which is the Government of Reason?”)

Resources:

“Meritocracy: Real or Myth?”

“A History of College Admissions Schemes, From Encoded Pencils to Paid Stand-Ins,” Adeel Hassan, New York Times, March 15, 2019

“A ‘Meritocracy’ Is Not What People Think It Is,” Ben Zimmer, The Atlantic, March 14, 2019

“The Scandals of Meritocracy,” Ross Douthat, New York Times, March 16, 2019

The Big Test: The Secret History of the American Meritocracy, Nicholas Lemann, Farrar, Straus and Giroux; 1st edition (October 1, 1999)

“Meritocracy” at Wikipedia

“What’s (still) wrong with meritocracy” Toby Young, The Spectator

“College Admission Scandal,” various authors, New York Times

April 23, 2019

Talking Back to Foundations

Philanthropies play an important role in contemporary society. They are, by their very nature, focused on supporting the public good and human welfare. While philanthropies, each year, channel vast sums to the achievement of laudable goals, their social and economic power has raised questions about their unalloyed reputation for ‘doing good’. See, for example, our previous blogpost, “Philanthrocapitalism?”

In an attempt to provide a voice to those who have worked with foundations, a new website, GrantAdvisor, offers grantees of foundations a safe way to anonymously give feedback about the grantmaking/receiving process. GrantAdvisor also offers foundations an opportunity to learn about grantees’ experience working with foundation staff.

GrantAdvisor effectively serves as a kind of “Yelp” for the non-profit community. Here, for example, is a link to grantees’ experiences working with the Wal-Mart Foundation.

You can visit GrantAdvisor here to learn more about how the site works and to read the reviews of a number of important national and local foundations.

Resources:

“A Place Where You Can Speak Your Mind to That Foundation,” Amy Costello. Non Profit Quarterly

“Benchmarking Foundation Evaluation Practices,” Center for Effective Philanthropy

“Helping Community Foundations Strengthen Grantees’ Effectiveness”

April 3, 2019

Evaluation Site Visits – Seeing is Knowing

Gathering evaluative information about a program or initiative often relies upon evaluators physically visiting the program’s location in order to observe program operations, to collect evidence of the program’s implementation and outcomes, and to interview staff and program participants. The empirical and observational nature of site visits offers evaluators a unique lens through which to “see” what the program actually is, and how it attempts to achieve its desired outcomes.

In their influential article, “Evaluative Site Visits: A Methodological Review,” American Journal of Evaluation, Vol. 24, No. 3, 2003, pp. 341–352, Lawrenz, Keiser, and Lavoie note that, “An evaluative site visit occurs when persons with specific expertise and preparation go to a site for a limited period of time and gather information about an evaluation object either through their own experience or through the reported experiences of others in order to prepare testimony addressing the purpose of the site visit.” Unlike case studies, which are of longer duration and often of greater depth, and which seek to describe in detail the instance or phenomena under study, site visits are of limited duration and are focused on gathering data that ultimately will inform judgement about a program’s worth/value. Site visits typically involve the use of a number of qualitative methods (e.g., individual and focus group interviews, observations, document review, etc.). For more information on the kinds of data that site visits permit, see our previous blog post “Just the Facts: Data Collection.”

Michael Quinn Patton summarizes the essential elements of an evaluation site visit:

  1. Competence– Ensure that site‐visit team members have skills and experience in qualitative observation and interviewing. Availability and subject matter expertise do not suffice.
  2. Knowledge– For an evaluative site visit, ensure at least one team member, preferably the team leader, has evaluation knowledge and credentials.
  3. Preparation– Site visitors should know something about the site being visited based on background materials, briefings, and/or prior experience.
  4. Site participation– People at sites should be engaged in planning and preparation for the site visit to minimize disruption to program activities and services.
  5. Do no harm– Site‐visit stakes can be high, with risks for people and programs. Good intentions, naiveté, and general cluelessness are not excuses. Be alert to what can go wrong and commit as a team to do no harm.
  6. Credible fieldwork– People at the site should be involved and informed, but they should not control the information collection in ways that undermine, significantly limit, or corrupt the inquiry. The evaluators should determine the activities observed and people interviewed, and arrange confidential interviews to enhance data quality.
  7. Neutrality– An evaluator conducting fieldwork should not have a preformed position on the intervention or the intervention model.
  8. Debriefing and feedback– Before departing from the field, key people at the site should be debriefed on highlights of findings and a timeline of when (or if) they will receive an oral or written report of findings.
  9. Site review– Those at the site should have an opportunity to respond in a timely way to site visitors’ reports, to correct errors and provide an alternative perspective on findings and judgments. Triangulation and a balance of perspectives should be the rule.
  10. Follow-up– The agency commissioning the site visit should do some minimal follow‐up to assess the quality of the site visit from the perspective of the locals on site.

Lawrenz, Keiser, and Lavoie argue that evaluative site visits are not merely a venue in which a range of predominantly qualitative methodologies are used, but a specific kind of methodology, distinguished by its use of observation. “We believe site visit methodology is based on ontological beliefs about the nature of reality and epistemological beliefs about whether and how valid knowledge can be achieved. Ontologically, in order to conduct site visits the evaluator must assume that there is a reality that can be seen or sensed and described. Epistemologically, site visits are based in the belief that site visitors are legitimate, sensing instruments and that they can obtain valid information through first-hand encounters with the object being evaluated.”

Accordingly, site visits are where evaluators can get “the feel” of what a program is and does. As a result, site visits are a critical means through which evaluators gather and interpret data with which to make judgements about the value and effects of a program.

Resources:

“Evaluative Site Visits: A Methodological Review,” Frances Lawrenz, Nanette Keiser, and Bethann Lavoie, American Journal of Evaluation, Vol. 24, No. 3, 2003, pp. 341–352.

See Michael Quinn Patton quoted in the Editors’ Note, Randi K. Nelson and Denise L. Roseland, New Directions for Evaluation, December 2017

“Using Qualitative Interviews in Program Evaluations”

Conducting and Using Evaluative Site Visits: New Directions for Evaluation, Number 156, February 2018

“Developmental Evaluation: Evaluating Programs in the Real World’s Complex and Unpredictable Environment”

March 19, 2019

Glitches in Philanthropy?

Over the years, Brad Rose Consulting has provided evaluation services to philanthropies and community service organizations. These clients are dedicated to making the world a better place, often through philanthropic work with disadvantaged populations. While the work of philanthropies is generally perceived as laudable, there are a number of potential objections to the rise of philanthropic largesse.

In a BBC article, “The Problems with Charity” the authors survey a number of potential objections to, and liabilities associated with, charitable and philanthropic work. These include:
  • Charities often target symptoms, not causes: charity helps recipients with their problem, but it doesn’t do much to address the causes of that problem.
  • Charity may become a substitute for real justice: charity can be wrong when it’s used to patch up the effects of fundamental injustices that are built into the structure and values of a society.
  • Charity may not provide the best solution to a problem: charitable giving may not be the most effective way of solving world poverty. Indeed, charitable giving may even distract from finding the best solution, which might involve a complex rethink of the way the world organizes its economic relationships, and large-scale government initiatives to change people’s conditions.
  • Charity may benefit the state rather than the needy: if the charity sector increases spending in an area also funded by government, there is a risk that government will choose to spend less in that area.
  • Charities are often accountable to the givers, not the receivers: because the recipients of charity are often unorganized, and the charity doesn’t know their individual identities, charities tend to report their performance to givers rather than to recipients.

In his article “The Downside of Doing Good,” David Campbell examines recent critiques of philanthropy. Following Anand Giridharadas (Winners Take All: The Elite Charade of Changing the World), Campbell argues that “wealthy philanthropists and other prominent social change leaders often co-exist in a parallel universe (called) ‘MarketWorld,’ where the best solutions to society’s problems require the same knowhow used in corporate boardrooms.” That is because MarketWorld ignores the underlying causes of problems like poverty and hunger.

He further observes that efforts at educational reform funded by such philanthropic luminaries as Bill and Melinda Gates, via support for charter schools, often fail to address the underlying inequalities that leave schools resourced in vastly different ways. “As long as school systems are funded locally, based on property values, students in wealthy communities will have advantages over those residing in poorer ones. However, creating a more equal system to pay for schools would take tax dollars and advantages away from the rich. The wealthy would lose, and the disadvantaged would win. So it’s possible to see the nearly $500 million that billionaires and other rich people have pumped into charter schools and other education reform efforts over the past dozen years, as a way to dodge this problem.”

While philanthropy is likely to remain with us for the foreseeable future, the books and articles mentioned here underline some of the problematic assumptions that haunt philanthropic attempts to ameliorate conditions whose causes are often the deeper, underlying dynamics associated with the societies and economies these philanthropies seek to amend.

Resources

Winners Take All: The Elite Charade of Changing the World, by Anand Giridharadas

Just Giving: Why Philanthropy Is Failing Democracy and How It Can Do Better, by Rob Reich

“The Problem with Philanthropy,”  David Sirota, In These Times, June 13, 2014
February 19, 2019

Power in Organizations

Most of us spend a good portion of our lives in organizations or indirectly relating to organizations (businesses, non-profits, civic and legal organizations, religious organizations, military and criminal justice organizations, etc.). One might say that in the modern world, we “live in” an environment composed largely of organizations. (See our previous blogposts “Organization Development: What Is It & How Can Evaluation Help?”  and “What’s the Difference? 10 Things You Should Know About Organizations vs. Programs”)

Organizations contain, utilize, and deploy various kinds of social power. Such power is the capacity of individuals and groups to affect, control, or influence outcomes (i.e., changes). Power doesn’t exist in isolation, but in relationship to other individuals and/or groups. If we want to accomplish goals at work—whether these goals are about producing widgets or making the world a better place—we need to draw on and negotiate various kinds of formal and informal power. Sometimes it may be useful to think of various resources as sources of power. Tangible resources include money, machinery, physical infrastructure, etc. Less tangible, but no less important, resources may include authority, social status/prestige, social networks, individuals’ intelligence, professional experience, even social attractiveness and charisma. Both tangible and intangible resources are used as sources of power with which organizations achieve objectives and goals.

In her article, “Types of Powers in Organizations,” Diana Dahl summarizes 7 types of power in organizations. These include:

  1. Coercive Power— a person or group that is able to punish others for not following orders has coercive power.
  2. Connection Power— connection power is gained by knowing, and being listened to by, influential people. Increasing connections and mastering political networking lead to a greater potential for connection power.
  3. Reward Power— the ability to give rewards to other employees. Rewards are not always monetary; they can include improved work hours and words of praise.
  4. Legitimate Power (also known as legitimate authority)— when employees believe a person can give orders based on his position within the organization, such as when a manager orders staff members to complete a task and they comply because the orders came from their superior.
  5. Referent Power— people who are liked, respected, and are viewed by other employees as worth emulating. Supervisors who lead by example, treat employees with respect, seek their collaboration and gain the trust of their employees possess referent power.
  6. Informational Power— access to valued information. This power can be quickly fleeting because once the needed information is shared, the person’s power is gone.
  7. Expert Power— the greater a person’s knowledge or specialized skill set, the greater her potential for expert power.

People in organizations must get things accomplished. Having a clear idea of what constitutes organizational power, and of the kinds of resources that need to be mobilized to reach goals, may help us to better navigate what are often complicated and contentious organizations.

Resources

Power in Organizations: Structures, Processes and Outcomes, by Richard Hall and Pamela Tolbert Pearson/Prentice Hall, 2005, 9th edition

“Types of Powers in Organizations,” by Diana Dahl, updated September 26, 2017

Links to and summaries of “Theories of Organizational Power” at bankofinfo.com/theories-of-organizational-power

“Power and Politics in Organizational Life,” Abraham Zaleznik, Harvard Business Review

“Power in Organizations: A Way of Thinking About What You’ve Got, and How to Use It,” Roelf Woldring

Summary of tactics to gain organizational power 

February 5, 2019

Pretending to Love Work

In a previous blog post, “Why You Hate Work,” we discussed a New York Times article that investigated the way the contemporary workplace too often produces a sense of depletion and employee “burnout.” In that article, the authors, Tony Schwartz and Christine Porath, argued that only when companies attempt to address the physical, mental, emotional, and spiritual dimensions of their employees by creating “truly human-centered organizations” can they create the conditions for more engaged and fulfilled workers, and in so doing, become more productive and profitable organizations.

In that blogpost, we suggested that employee burnout is not unknown in the non-profit world, and that, while program evaluation cannot itself prevent employee burnout, it can add to non-profit organizations’ capacities to create organizations in which staff and program participants have a greater sense of efficacy and purposefulness. (See also our blogpost “Program Evaluation and Organization Development.”)

Of course, the problem of employee burnout and alienation is a perennial one. It occurs in both the for-profit and non-profit sectors. In a more recent article, “Why Are Young People Pretending to Love Work?” (New York Times, January 26, 2019), Erin Griffith says that in recent years there has emerged a “hustle culture,” especially among millennials. This culture, Griffith argues, “…is obsessed with striving, (is) relentlessly positive, devoid of humor, and — once you notice it — impossible to escape.” She cites the artifacts of such a culture, which include, at one WeWork location in New York, neon signs that exhort workers to “Hustle harder,” and murals that spread the gospel of T.G.I.M. (Thank God It’s Monday). Somewhat horrified by the Stakhanovite tenor of the WeWork environment, Griffith notes, “Even the cucumbers in WeWork’s water coolers have an agenda. ‘Don’t stop when you’re tired,’… ‘Stop when you are done.’” “In the new work culture,” Griffith observes, “enduring or even merely liking one’s job is not enough. Workers should love what they do, and then promote that love on social media, thus fusing their identities to that of their employers.”

Griffith is not concerned with employee burnout. Instead, she is horrified by the degree to which many younger employees have internalized the obsessively productivist, “workaholic” norms of their employers and, more broadly, of contemporary corporations. These norms include the apotheosis of excessive work hours and the belief that devotion to anything other than work is somehow a shameful betrayal of the work ethic. She quotes David Heinemeier Hansson, co-founder of the online platform Basecamp, who observes, “The vast majority of people beating the drums of hustle-mania are not the people doing the actual work. They’re the managers, financiers and owners.”

Griffith writes, “…as tech culture infiltrates every corner of the business world, its hymns to the virtues of relentless work remind me of nothing so much as Soviet-era propaganda, which promoted impossible-seeming feats of worker productivity to motivate the labor force. One obvious difference, of course, is that those Stakhanovite posters had an anti-capitalist bent, criticizing the fat cats profiting from free enterprise. Today’s messages glorify personal profit, even if bosses and investors — not workers — are the ones capturing most of the gains. Wage growth has been essentially stagnant for years.”

Resources:

“Why Are Young People Pretending to Love Work?” Erin Griffith, New York Times, January 26, 2019

“Why You Hate Work”

“The Fleecing of Millennials” David Leonhardt, New York Times, January 27, 2019

January 22, 2019
22 Jan 2019

Social Innovation and Evaluation

During the last 15-20 years, “social innovations” (SIs) have grown in number, as has terminological confusion about them. Social innovations include initiatives and programs as substantively diverse as micro-credit organizations, charter schools, environmental emissions credit trading schemes, and online volunteerism. Social innovations are distinguished by a focus on “the process of innovation, how innovation and change take shape… and center on new work and new forms of cooperation, especially on those that work towards the attainment of a sustainable society” (Wikipedia). The Center for Social Innovation reports that, “Social innovation refers to the creation, development, adoption, and integration of new and renewed concepts, systems, and practices that put people and planet first.” Social innovations are thought to cut across the traditional boundaries separating nonprofits, government, and for-profit businesses. SIs are also considered to be distinct from conventional social programs.

The rise of social innovations presents new challenges to those who seek to evaluate (and in some cases, those who seek to nurture) these initiatives. Social innovations often bring together unrelated agencies and organizations, and involve complex and changing social dynamics, roles, and relationships.

The volatile, ever-unfolding nature of many SIs — including their evolving outcomes, their collaborations among different social sectors and organizations, and their dynamic contexts — may present challenges to evaluators who are more familiar with traditional social programs. In their recent article, “Evaluating Social Innovations: Implications for Evaluation Design” (American Journal of Evaluation, Vol. 39, No. 4, December 2018, pp. 459-477), Kate Svennson, Barbara Szijarto, Peter Milley, and J. Bradley Cousins review the international literature on SI evaluation and summarize critical insights about the unique challenges associated with selecting evaluation designs for social innovations.

Svennson, et al. remind us that the design of every evaluation is driven by “…what questions will be answered by the evaluation, what data will be collected, how the data will be analyzed to answer the questions, and how resulting information will be used?” (See our previous blogposts “Approaching An Evaluation – Ten Issues to Consider” and “Questions Before Methods”).

The authors surveyed 28 peer reviewed empirical studies of SIs, with an eye to identifying commonly reported issues and conditions that influence the choice of SI evaluation design. Svennson, et al. report that the choice of evaluation design was most frequently influenced by:

  1. a “complexity perspective,” i.e., one that acknowledges the often messy, trial-and-error landscape of such initiatives,
  2. a focus on the desire for collective learning by evaluators and evaluands,
  3. the need for collaboration between evaluators and SIs, including the need for timely feedback from evaluators, and
  4. the need for accountability to evaluation funders, including funders’ preferences for evaluation methods and design.

Interestingly, Svennson, et al. find that, in the 28 studies they reviewed, there is little diversity in the types of evaluation designs selected by evaluators, and what they term a “lingering ambiguity” among evaluators about what constitutes a social innovation.

In the future, the authors tell us, evaluators of SIs will want to:

  1. be sensitive to the processual nature of SIs
  2. focus on capturing and facilitating feedback, especially after each iteration of an SI
  3. support productive collaboration, especially among often competing SI stakeholders
  4. incorporate multiple methods of reporting to meet the needs of various, often divergent, stakeholders
  5. help SI practitioners to clarify outcomes, especially as these evolve
  6. capture both intended and unexpected outcomes of SIs

Resources:

Defining Social Innovation—Stanford Business School

Social Innovation

Center for Social Innovation

“Evaluating Social Innovations: Implications for Evaluation Design,” Kate Svennson, Barbara Szijarto, Peter Milley, and J Bradley Cousins, American Journal of Evaluation, Vol. 39, No. 4, December 2018.

For a critique of social innovation, see the “Criticism” section of the Wikipedia entry

January 8, 2019
08 Jan 2019

Philanthrocapitalism?

Philanthropy, most of us presume, is a good thing. Philanthropic foundations seek to make the world a better place. In the US, philanthropic foundations have played an important role in funding, designing, and “testing” a variety of programs and initiatives that seek to solve the most intransigent social problems of American society, from homelessness and education reform to health care and access to the arts. As of 2015, there were over 86,000 foundations in the US alone, with total assets of $890,061,214,247. (See http://data.foundationcenter.org/)

In their recent article, “The Trouble with Charitable Billionaires,” in the May 28, 2018, Guardian, Carl Rhodes and Peter Bloom argue that although philanthropy appears to be a socially valuable activity, the recent emergence of “philanthrocapitalism” such as that conducted by Mark Zuckerberg and his wife, Priscilla Chan, Warren Buffett, Bill and Melinda Gates, and others, in fact shifts decision-making power about social change from the public to a stratum of wealthy donors, whose vision for social change is not always innocent or desirable. “Essentially, what we are witnessing is the transfer of responsibility for public goods and services from democratic institutions to the wealthy, to be administered by an executive class.”

Rhodes and Bloom write that, since the 1990s, there has emerged a cohort of billionaires who appear genuinely committed to addressing persistent and seemingly intractable social problems, and who are now investing literally billions of dollars in efforts to tackle these problems. They note that it would seem that many of the world’s richest people simply want to give their money away to good causes, and have thus created a “golden age of philanthropy.” The authors, however, caution that “The golden age of philanthropy is not just about benefits that accrue to individual givers. More broadly, such philanthropy serves to legitimize capitalism, as well as to extend it further and further into all domains of social, cultural and political activity.” Furthermore, “Philanthrocapitalism” they say, “takes the application of management discourses and practices from business corporations and adapts them to charitable work. The focus is on entrepreneurship, market-based approaches and performance metrics…(and these) result, at a practical level, in a philanthropy that is undertaken by CEOs in a manner similar to how they run businesses.”

Rhodes and Bloom problematize this new era of corporate social responsibility and philanthrocapitalism. “Philanthrocapitalism is commonly presented as the social justice component of an otherwise amoral global free market. At best, corporate charity is a type of voluntary tax paid by the 1% for their role in creating such an economically deprived and unequal world.” Additionally, “Philanthrocapitalism is about much more than the simple act of generosity it pretends to be, instead involving the inculcation of neoliberal values personified by the billionaire CEOs who have led its charge. Philanthropy is recast in the same terms in which a CEO would consider a business venture. Charitable giving is translated into a business model that employs market-based solutions characterized by efficiency and quantified costs and benefits.” Ultimately, the authors contend, “…we find a new form of corporate rule, refashioning another dimension of human endeavor (i.e. philanthropy) in its own interests. Such is a society where CEOs are no longer content to do business; they must control public goods as well. In the end, while the Giving Pledge’s website (a philanthropy campaign initiated by Warren Buffett and Bill Gates in 2010 which targets billionaires around the world, encouraging them to give away the majority of their wealth) may feature more and more smiling faces of smug-looking CEOs, the real story is of a world characterized by gross inequality that is getting worse year by year.”

Essentially, Rhodes and Bloom question the influence of private fortunes on public issues. While foundations may have the best of intentions, they mobilize private money to address privately defined public issues. This is hardly a new development. Rhodes and Bloom, however, question whether the more recent importation of business methods, measures, and beliefs is either the best, or the most democratic, way to address deep systemic issues. Their view is that for the many “wounds” experienced by contemporary society, philanthrocapitalism may be merely a self-serving “band-aid.”

Resources

“The Trouble with Charitable Billionaires,” Carl Rhodes and Peter Bloom, The Guardian, May 28, 2018

“How Liberal Nonprofits Are Failing Those They’re Supposed to Protect,” William C. Anderson, Verso Blog, October 6, 2017

American Foundations: Roles and Contributions, by Helmut K. Anheier (Editor), David C. Hammack (Editor), The Brookings Institution, 2010

The Self-Help Myth: How Philanthropy Fails to Alleviate Poverty, Erica Kohl-Arenas, University of California Press, 2016

December 11, 2018
11 Dec 2018

Program Evaluation Methods and Questions: A Discussion

“I would rather have questions that can’t be answered than answers that can’t be questioned.” 
― Richard Feynman

The Cambridge Dictionary defines research as “A detailed study of a subject, especially in order to discover (new) information or reach a (new) understanding.” Program evaluation necessarily involves research. As we mentioned in our most recent blogpost, “Just the Facts: Data Collection,” program evaluation deploys various research methods (e.g., surveys, interviews, statistical analyses, etc.) to find out what is happening and what has happened with regard to a program, initiative, or policy. At the core of every evaluation are key questions that should guide the evaluation. Below we reprise our previous blogpost, “Questions Before Methods,” which emphasizes the importance of specifying evaluation questions prior to the design and implementation of each evaluation.

Like other research initiatives, each program evaluation should begin with a question, or series of questions, that the evaluation seeks to answer. (See our previous blog post “Approaching An Evaluation – Ten Issues to Consider.”) Evaluation questions are what guide the evaluation, give it direction, and express its purpose. Identifying guiding questions is essential to the success of any evaluation research effort.

Of course, evaluation stakeholders can have various interests, and therefore, can have various kinds of questions about a program. For example, funders often want to know if the program worked, if it was a good investment, and if the desired changes/outcomes were achieved (i.e., outcome/summative questions). During the program’s implementation, program managers and implementers may want to know what’s working and what’s not working, so they can refine the program to ensure that it is more likely to produce the desired outcomes (i.e., formative questions). Program managers and implementers may also want to know which parts of the program have been implemented, and if recipients of services are, indeed, being reached and receiving program services (i.e., process questions). Participants, community stakeholders, funders, and program implementers may all want to know if the program makes the intended difference in the lives of those who the program is designed to serve (i.e., outcome/summative questions).

While program evaluations seldom serve only one purpose, or ask only one type of question, it may be useful to examine the kinds of questions that pertain to the different types of evaluations. Establishing clarity about the questions an evaluation will answer will maximize the evaluation’s effectiveness, and ensure that stakeholders find evaluation results useful and meaningful.

Types of Evaluation Questions

Although the list below is not exhaustive, it is illustrative of the kinds of questions that each type of evaluation seeks to answer.

Process Evaluation Questions
  • Were the services, products, and resources of the program, in fact, produced and delivered to stakeholders and users?
  • Did the program’s services, products, and resources reach their intended audiences and users?
  • Were services, products, and resources made available to intended audiences and users in a timely manner?
  • What kinds of challenges did the program encounter in developing, disseminating, and providing its services, products, and resources?
  • What steps were taken by the program to address these challenges?
Formative Evaluation Questions
  • How do program stakeholders rate the quality, relevance, and utility of the program’s activities, products, and services?
  • How can the activities, products, and services of the program be refined and strengthened during project implementation, so that they better meet the needs of participants and stakeholders?
  • What suggestions do participants and stakeholders have for improving the quality, relevance, and utility of the program?
  • Which elements of the program do participants find most beneficial, and which least beneficial?
Outcome/Summative Evaluation Questions
  • What effect(s) did the program have on its participants and stakeholders (e.g., changes in knowledge, attitudes, behavior, skills, and practices)?
  • Did the activities, actions, and services (i.e., outputs) of the program provide high-quality services and resources to stakeholders?
  • Did the activities, actions, and services of the program raise awareness and provide new and useful knowledge to participants?
  • What is the ultimate worth, merit, and value of the program?
  • Should the program be continued or curtailed?
The process of identifying which questions program sponsors want the evaluation to answer thus becomes a means for identifying the kinds of methods that an evaluation will use. If, ultimately, we want to know whether a program is causing a specific outcome, then the best method (the “gold standard”) is a randomized controlled trial (RCT). Often, however, we are interested not just in knowing if a program causes a particular outcome, but why and how it does so. In that case, it will be essential to use a mixed-methods approach that draws not only on quantitative outcome data that compare the outcomes of treatment and control groups, but also on qualitative methods (e.g., interviews, focus groups, direct observation of program functioning, document review, etc.) that can help elucidate why what happens, happens, and what program participants experience.

Robust and useful program evaluations begin with the questions that stakeholders want answered, and then identify the best methods to gather data to answer these questions.

To learn more about our evaluation methods visit our Data Collection & Outcome Measurement page.

Resources

“Just the Facts: Data Collection”

“Questions Before Methods”

“Approaching An Evaluation – Ten Issues to Consider”

“Data Collection and Outcome Measurement”

December 4, 2018
04 Dec 2018

Strengthening Program AND Organizational Effectiveness

Program evaluation is seldom simply about making a narrow judgment about the outcomes of a program (i.e., whether the desired changes were, in fact, ultimately produced). Evaluation is also about helping to provide program implementers and stakeholders with information that will help them strengthen their organization’s efforts, so that desired programmatic goals are more likely to be achieved.

Brad Rose Consulting is strongly committed to translating evaluation data into meaningful and actionable knowledge, so that programs, and the organizations that host programs, can strengthen their efforts and optimize results. Because we are committed not just to measuring program outcomes, but to strengthening the organizations that host and manage programs, we work at the intersection of program evaluation and organization development (OD).

Often challenges facing discrete programs reflect challenges facing the organizations that host programs. (For the difference between “organizations” and “programs” see our previous post “What’s the Difference? 10 Things You Should Know About Organizations vs. Programs,” ) Program evaluations thus present opportunities for host organizations to:
  • engage in the clarification of their goals and purposes
  • enhance understanding of the often implied relationships between a program’s causes and effects
  • articulate for internal stakeholders a collective understanding of the objectives of their programming
  • reflect on alternative concrete strategies to achieve desired outcomes
  • strengthen internal and external communications
  • improve relationships between individuals within programs and organizations
Although Brad Rose Consulting evaluation projects begin with a focus on discrete programs and initiatives, the answers to the questions that drive our evaluation of programs often provide vital insights into ways to strengthen the effectiveness of the organizations that host, design, and implement those programs. (See “Logic Modeling: Contributing to Strategic Planning” )

Typically, Brad Rose Consulting works with clients to gather data that will help to improve, strengthen, and “nourish” both programs and organizations. For example, our formative evaluations, which are conducted during a project’s implementation, aim to improve a program’s design and performance. (See “Understanding Different Types of Program Evaluation” ) Our evaluation activities provide program managers and implementers with regular, data-based briefings, and with periodic written reports so that programs can make timely adjustments to their operations. Formative feedback, including data-based recommendations for program refinement, can also help to strengthen the broader organization, by identifying opportunities for organizational learning, clarifying the goals of the organization as these are embodied in specific programming, specifying how programs and organizations work to produce results (i.e., articulating cause and effect), and by strengthening systems and processes.

Resources

“What’s the Difference? 10 Things You Should Know About Organizations vs. Programs,”

“Logic Modeling: Contributing to Strategic Planning”

“Understanding Different Types of Program Evaluation”

November 6, 2018
06 Nov 2018

Just the Facts: Data Collection

Program evaluations entail research. Research is a systematic “way of finding things out.” Evaluation research depends on the collection and analysis of data (i.e., evidence, facts) that indicate the outcomes (i.e., effects, results, etc.) of the operation of programs. Typically, evaluations want to discover evidence of whether valued outcomes have been achieved. (Other kinds of evaluations, like formative evaluations, seek to discover, through the collection and analysis of data, ways that a program may be strengthened.)

Data can be either qualitative (descriptive, consisting of words and observations) or quantitative (numerical). What counts as data depends upon the design and character of the evaluation research. Quantitative evaluations rely primarily on the collection of countable information like measurements and statistical data. Qualitative evaluations depend upon language-based and other descriptive data. Usually, program evaluations combine the collection of quantitative and qualitative data.

There is a range of data sources for any evaluation. These can include: observations of programs’ operation; interviews with program participants, program staff, and program stakeholders; administrative records, files, and tracking information; questionnaires and surveys; focus groups; and visual information, such as video data and photographs.

The selection of the kinds of data to collect and the ways of collecting such data will be contingent on the evaluation design, the availability and accessibility of data, economic considerations about the cost of data collection, and both the limitations and potentials of each data source. The kinds of evaluation questions and the design of the evaluation research will, together, help to determine the optimal kinds of data that will need to be collected. (See our articles “Questions Before Methods” and “Using Qualitative Interviews in Program Evaluations.”)

Resources

What is ‘Data’?

What’s the Difference? Evaluation vs. Research

Evaluation, Carol H. Weiss, Prentice Hall; 2nd edition (1997)

October 24, 2018
24 Oct 2018

Do Work Teams Work?

“Collaboration” and “teamwork” are the catchphrases of the contemporary workplace. Since the 1980s in the U.S., work teams have been hailed as the solution to assembly line workers’ alienation and disaffection, and white-collar workers’ isolation and disconnection. Work teams have been associated with increased productivity, innovation, employee satisfaction, and reduced turnover. Additionally, teams at work are said to have beneficial effects on employee learning, problem-solving, communication, company loyalty, and organizational cohesiveness. Teams are now found throughout the for-profit, non-profit, and governmental sectors, and much of the work of the field of organization development (OD) is devoted to fostering and sustaining teams at work.

In his recent article “Stop Wasting Money on Team Building” (Harvard Business Review, September 11, 2018), Carlos Valdes-Dapena argues that teams are less effective than many believe them to be. Based on research conducted at Mars, Inc., “a 35 billion dollar global corporation with a commitment to collaboration,” Valdes-Dapena argues that while employees like the idea of teams and teamwork, employees don’t, in fact, much collaborate in teams. After conducting 125 interviews and administering questionnaires to team members, he writes, “If there was one dominant theme from the interviews, it is summarized in this remarkable sentiment: ‘I really like and value my teammates. And I know we should collaborate more. We just don’t.’”

Valdes-Dapena reports that employees “…felt the most clarity about their individual objectives, and felt a strong sense of ownership for the work they were accountable for.” He also shows that “Mars was full of people who loved to get busy on tasks and responsibilities that had their names next to them. It was work they could do exceedingly well, producing results without collaborating. On top of that, they were being affirmed for those results by their bosses and the performance rating system.” Essentially, Valdes-Dapena argues, teams may sound good in theory, but it is probably better to tap individual self-interest if you really want to get the job done.

In “3 Types of Dysfunctional Teams and How To Fix Them,” Patty McManus says that there are different types of dysfunctional work teams. She characterizes these different team types as: “The War Zone,” “The Love Fest,” and “The Unteam.” In “War Zone” teams, competition and factionalism among members obscure or derail the potential benefits of teamwork. In the “Love Fest” team, there is a focus on muting disagreements, highlighting areas of agreement, and avoiding tough issues in the interest of maintaining good feelings. “The Unteam” is characterized by meetings that are used for top-down communication and status updates, and that fail to build a shared perspective about the organization. In the “Unteam,” members may get along as individuals, but they have little connection to one another or to a larger purpose they all share.

McManus claims that the problems of teams may be overcome by what she terms “ecosystems teams,” i.e., teams that surface and manage differences, build healthy inter-dependence among members, and engage the organization—beyond the mere confines of the team.

Matthew Swyers also sees problems in teams at work. In “7 Reasons Good Teams Become Dysfunctional” (Inc., September 27, 2012), Swyers writes that there are seven types of problems that teams may experience:

  • absence of a strong and competent leader
  • team members more interested in individual glory than in achieving team objectives
  • failure to define team goals and desired outcomes
  • disproportionate placement of the team’s work on a few members’ shoulders
  • lack of focus, and endless debate without movement toward an ultimate goal
  • lack of accountability and postponed timetables
  • failure of decisiveness

Each of these writers highlights the vulnerabilities of teams at work. Although their work doesn’t foreclose the positive possibilities of team organization at work, they raise important questions about both the enthusiasm for, and the effectiveness of, teams. Additionally, each author suggests that with enlightened modifications, organizations can overcome the liabilities of teams and begin to reap the benefits of team-based employee collaboration. That said, none of these writers, and few among the other U.S.-based writers who have engaged this topic, treats the underlying assumption of workplace reform—that work can be made more habitable and humane without the independent organizations that have traditionally represented workers’/employees’ interests in the workplace. For discussion of models of workplace reform that genuinely represent workers’ interest in more humane, collaborative, and ultimately productive working environments, we will need to look elsewhere.

Resources:

Workgroups vs. Teams

“Importance of Teamwork at Work,” Tim Zimmer

“Importance of Teamwork in Organizations,” Aaron Marquis

“What Makes Teams Work?” Regina Fazio Maruca, Fast Company

“Stop Wasting Money on Team Building,” Carlos Valdes-Dapena, Harvard Business Review, September 11, 2018

“3 Types Of Dysfunctional Teams And How To Fix Them,” Patty McManus, Fast Company

“When Is Teamwork Really Necessary?” Michael D. Watkins, Harvard Business Review, August 16, 2018

“7 Reasons Good Teams Become Dysfunctional,” Matthew Swyers , Inc. Sept 27 2012

“Why Teams Don’t Work,” Diane Coutu, Harvard Business Review, May 2009

October 16, 2018
16 Oct 2018

MetroWest Workshop: Introduction to Evaluation

Brad recently presented a program evaluation workshop to grantees of the Foundation for MetroWest.

The two-hour workshop served to introduce grantees and other attendees to the basics of evaluation, including the:

  • basic kinds and purposes of program evaluation
  • types of questions that evaluations ask and answer
  • ways that values and criteria inform an evaluation
  • types of evaluation designs
  • use of logic models

You can access the PowerPoint for the presentation below.

September 28, 2018
28 Sep 2018

Stakeholders vs. Customers

Rather than “customers,” nonprofits, educational institutions, and philanthropies typically have “stakeholders.” Stakeholders are individuals and organizations that have an interest in, and may be affected by, the activities, actions, and policies of non-profits, schools, and philanthropies. Stakeholders don’t just purchase products and services (i.e. commodities), they have an interest, or “stake” in the outcomes of an organization’s or program’s operation.

A number of persons or entities may be stakeholders in a nonprofit organization. Nonprofit stakeholders may include funders/sponsors, program participants, staff, communities, and government agencies. It’s important to note that stakeholders can be either internal or external to the organization, and that stakeholders are able to exert influence— either positive or negative — over the outcomes of the organization or program.

While many nonprofits are sensitive to, and aware of, the interests of their multiple stakeholders, quite often both nonprofit leaders and nonprofit staff hold implicit, unexamined ideas about who their various stakeholders are. Often, stakeholders are not delineated, and consequently there isn’t a shared understanding of who is and isn’t a stakeholder. Conducting a stakeholder analysis can be a useful process because it raises staff and managers’ awareness of who is interested in, and who potentially influences, the success of an organization’s desired outcomes. A stakeholder analysis is a simple way to help nonprofits clarify who has a “stake” in the success of the organization and its discrete programs. It can sharpen strategic planning, clarify goals, and build consensus about an organization’s purpose.

Resources:

“Identifying Evaluation Stakeholders”

“The Importance of Understanding Stakeholders”

Business Dictionary

“Organization Development: What Is It & How Can Evaluation Help?”

September 5, 2018
05 Sep 2018

A Lifetime of Learning

Pablo Picasso once said, “It takes a long time to become young.” The same may be said about education and the process of becoming educated. While we often associate formal education with youth and early adulthood, the fact is that education is an increasingly recognized lifelong endeavor that occurs far beyond the confines of early adulthood and traditional educational institutions.

In a recent article in The Atlantic, “Lifetime Learner,” John Hagel III, John Seely Brown, Roy Mathew, Maggie Wooll, and Wendy Tsu discuss the emergence of a rich and ever-expanding “ecosystem” of organizations and institutions that have arisen to serve the unmet educational needs and expectations of learners who are not enrolled in formal, traditional educational institutions (e.g., community colleges, colleges, and universities). “This ecosystem of semi-structured, unorthodox learning providers is emerging at ‘the edges’ of the current postsecondary world, with innovations that challenge the structure and even the existence of traditional education institutions.”

Hagel III, et al. argue that economic forces, together with emerging technologies, are enabling learners to do an “end run” around traditional educational providers and to gain access to knowledge and information in new venues. The growing availability of, and access to, MOOCs (Massive Open Online Courses), YouTube, Open Educational Resources, and other online learning platforms enables more and more learners to advance their learning and career goals outside the purview of traditional post-secondary institutions.

While the availability of alternative, lifelong educational resources is helping some non-traditional students to advance their educational goals, it is also having an effect on traditional post-secondary institutions. The authors argue that, “The educational institutions that succeed and remain relevant in the future…will likely be those that foster a learning environment that reflects the networked ecosystem and (that will become) meaningful and relevant to the lifelong learner. This means providing learning opportunities that match the learner’s current development and stage of life.” The authors cite as examples “community colleges that are now experimenting with ‘stackable’ credentials that provide short-term skills and employment value, while enabling students to return over time and assemble a coherent curriculum that helps them progress toward career and personal goals” and “some universities (that) have started to look at the examples coming from both the edges of education and areas such as gaming and media to imagine and conduct experiments in what a future learning environment could look like.”

The authors say that in the future colleges and universities will benefit from considering such things as:

  1. Providing the facilities and locations for a variety of learning experiences, many of which will depend on external sources for content
  2. Aggregating knowledge resources and connecting these resources with appropriate learners, rather than acting as sole “vendors” of knowledge
  3. Acting as lifelong “agents” for learners by helping them navigate a world of exponential change and an abundance of information

While these goals are ambitious, they highlight the remarkably changing terrain in continuing education. Educational “consumers” are increasingly likely to seek inexpensive and more accessible pathways to knowledge. As the authors point out, individuals’ lifelong learning needs are likely to continue to increase, so correspondingly, the pressures on traditional post-secondary education are likely to grow. Whether learners’ needs are more effectively addressed by re-orienting traditional post-secondary institutions or by the patchwork “ecosystem” of semi-structured, unorthodox learning-providers who inhabit what the authors of “Lifetime Learner” term “the edges” of the postsecondary world, is difficult to predict.

Resources:

Lifelong learning, Wikipedia

“Lifetime Learner” by John Hagel III, John Seely Brown, Roy Mathew, Maggie Wooll & Wendy Tsu, The Atlantic

“The Third Education Revolution: Schools are moving toward a model of continuous, lifelong learning in order to meet the needs of today’s economy” by Jeffrey Selingo, The Atlantic, Mar 22, 2018

August 14, 2018

Robots Grade Your Essays and Read Your Resumes


We’ve previously written about the rise of artificial intelligence and the current and anticipated effects of AI upon employment.  (See links to previous blog posts, below) Two recent articles treat the effects of AI on the assessment of students and the hiring of employees.

In her recent article for NPR, “More States Opting To ‘Robo-Grade’ Student Essays By Computer,” Tovia Smith discusses how so-called “robo-graders” (i.e., computer algorithms) are increasingly being used to grade students’ essays on state standardized tests. Smith reports that Utah and Ohio currently use computers to read and grade students’ essays and that Massachusetts will soon follow suit. Peter Foltz, a research professor at the University of Colorado, Boulder, observes, “We have artificial intelligence techniques which can judge anywhere from 50 to 100 features…We’ve done a number of studies to show that the (essay) scoring can be highly accurate.” Smith also notes that Utah, which once had humans review students’ essays after they had been graded by a machine, now relies on the machines almost exclusively. Cyndee Carter, assessment development coordinator for the Utah State Board of Education, reports that “…the state began very cautiously, at first making sure every machine-graded essay was also read by a real person. But…the computer scoring has proven ‘spot-on’ and Utah now lets machines be the sole judge of the vast majority of essays.”

Needless to say, despite support for “robo-graders,” there are critics of automated essay assessment. Smith details how one critic, Les Perelman at MIT, has created an essay-generating program, the BABEL generator, that produces nonsense essays designed to trick the algorithmic “robo-graders” used for the Graduate Record Exam (GRE). When Perelman submits a nonsense essay to the GRE computer, the algorithm gives the essay a near-perfect score. “It makes absolutely no sense,” Perelman observes, shaking his head. “There is no meaning. It’s not real writing. It’s so scary that it works…. Machines are very brilliant for certain things and very stupid on other things. This is a case where the machines are very, very stupid.”

Critics of “robo-graders” are also worried that students might learn how to game the system, that is, give the algorithms exactly what they are looking for and thereby receive undeservedly high scores. Carter describes instances of students gaming the state test: “…Students have figured out that they could do well writing one really good paragraph and just copying that four times to make a five-paragraph essay that scores well. Others have pulled one over on the computer by padding their essays with long quotes from the text they’re supposed to analyze, or from the question they’re supposed to answer.”

Despite these shortcomings, developers are learning from such exploits and further refining their algorithms. It’s anticipated that more states will soon use these refined algorithms to read and grade student essays.
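A toy sketch can make concrete why surface-feature scoring is so easily gamed. The three features, weights, and caps below are invented purely for illustration (real systems judge many more features, as Foltz notes); the point is only that a scorer built on surface features rewards the padding tricks Carter describes:

```python
# Toy illustration of surface-feature essay scoring.
# The features, weights, and caps are hypothetical, chosen only to
# show why repetition-based "gaming" can inflate a score.

def score_essay(essay):
    """Score an essay from 0 to 10 using a few crude surface features."""
    words = essay.split()
    sentences = [s for s in essay.split(".") if s.strip()]
    length_score = min(len(words) / 250, 1.0)                    # rewards sheer length
    word_len_score = min(sum(map(len, words)) / len(words) / 6, 1.0)  # rewards long words
    sentence_score = min(len(sentences) / 10, 1.0)               # rewards many sentences
    return round(10 * (0.5 * length_score
                       + 0.25 * word_len_score
                       + 0.25 * sentence_score), 2)

good_paragraph = ("The author develops a nuanced argument, supporting each "
                  "claim with carefully chosen textual evidence. ")

# A student "games" the scorer by pasting the same strong paragraph repeatedly:
print(score_essay(good_paragraph))        # one paragraph
print(score_essay(good_paragraph * 5))    # same paragraph, copied five times
```

Because every feature here rises with raw length, the padded essay scores higher than the single paragraph even though it says nothing new, which is exactly the five-copies-of-one-paragraph trick Utah students discovered.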

Grading student essays is not the end of computer assessment. Once you’ve left school and start looking for a job, you may find that your resume is read not by an employer eager to hire a new employee, but by an algorithm whose job it is to screen for appropriate job applicants. In the brief article “How Algorithms May Decide Your Career: Getting a job means getting past the computer,” The Economist reports that most large firms now use computer programs, or algorithms, to screen candidates seeking junior jobs. Applicant Tracking Systems (ATS) can reject up to 75% of candidates, so it becomes increasingly imperative for applicants to send resumes filled with the keywords that will pique screening computers’ interest.
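The keyword screening The Economist describes can be sketched in a few lines. The keyword list, resume text, and pass threshold below are invented for illustration; commercial Applicant Tracking Systems use proprietary and far more elaborate matching:

```python
# Toy sketch of keyword-based resume screening (hypothetical keywords
# and threshold; real ATS products are far more sophisticated).

def screen_resume(resume_text, keywords, threshold=3):
    """Return (passed, matched keywords): pass if enough keywords appear."""
    text = resume_text.lower()
    matches = [kw for kw in keywords if kw.lower() in text]
    return len(matches) >= threshold, matches

keywords = ["python", "data analysis", "project management", "sql", "agile"]
resume = "Experienced analyst: Python, SQL, and data analysis for agile teams."

passed, found = screen_resume(resume, keywords)
print(passed, found)
```

Even this crude sketch shows why applicants tailor their CVs to each posting: a resume that never uses the screener’s exact vocabulary is rejected before any human reads it.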

Once your resume passes the initial screening, some companies use computer-driven video interviews to further screen and select candidates. “Many companies, including Vodafone and Intel, use a video-interview service called HireVue. Candidates are quizzed while an artificial-intelligence (AI) program analyses their facial expressions (maintaining eye contact with the camera is advisable) and language patterns (sounding confident is the trick). People who wave their arms about or slouch in their seat are likely to fail. Only if they pass that test will the applicants meet some humans.”

Although one might think that computer-driven screening systems would avoid some of the biases of traditional recruitment processes, it seems that AI isn’t bias-free, and that algorithms may favor applicants who have the time and monetary resources to continually retool their resumes so that they contain the code words employers are looking for. (This is similar to gaming the system, described above.) “There may also be an ‘arms race’ as candidates learn how to adjust their CVs to pass the initial AI test, and algorithms adapt to screen out more candidates.”

Resources:

“More States Opting To ‘Robo-Grade’ Student Essays By Computer,” Tovia Smith, NPR, June 30, 2018

“How Algorithms May Decide Your Career: Getting a job means getting past the computer,” The Economist, June 21, 2018

Will You Become a Member of the Useless Class?

Humans Need Not Apply: What Happens When There’s No More Work?

Will President Trump’s Wall Keep Out the Robots?

“Welcoming our New Robotic Overlords,” Sheelah Kolhatkar, The New Yorker, October 23, 2017

“AI, Robotics, and the Future of Jobs,” Pew Research Center

“Artificial intelligence and employment,” Global Business Outlook

July 31, 2018

Are There Any Questions?

Asking questions is a critical aspect of learning. We’ve previously written about the importance of questions in our blog post “Evaluation Research Interviews: Just Like Good Conversations.” In a recent article, “The Surprising Power of Questions,” in the May–June 2018 issue of the Harvard Business Review, authors Alison Wood Brooks and Leslie K. John offer suggestions for asking better questions.

As Brooks and John report, we often don’t ask enough questions during our conversations. Too often we talk rather than listen. Brooks and John, however, note that recent research shows that by asking good questions and genuinely listening to the answers, we are more likely to achieve both genuine information exchange and effective self-presentation. “Most people don’t grasp that asking a lot of questions unlocks learning and improves interpersonal bonding.”

Although asking more questions in our conversations is important, the authors show that asking follow-up questions is critical. Follow-up questions “…signal to your conversation partner that you are listening, care, and want to know more. People interacting with a partner who asks lots of follow-up questions tend to feel respected and heard.”

Another critical component of question-asking is to be sure that we ask open-ended questions, not simply closed-ended (yes/no) questions. “Open-ended questions…can be particularly useful in uncovering information or learning something new. Indeed, they are wellsprings of innovation—which is often the result of finding the hidden, unexpected answer that no one has thought of before.”

Asking effective questions depends, of course, on the purpose and context of conversations. That said, it is vital to ask questions in an appropriate sequence. Counterintuitively, asking tougher questions first and leaving easier questions until later “…can make your conversational partner more willing to open up.” On the other hand, asking tough questions too early in the conversation can seem intrusive and sometimes offensive. If the ultimate goal of the conversation is to build a strong relationship with your interlocutor, especially someone you don’t know, or don’t know well, it may be better to open with less sensitive questions and escalate slowly. Tone and attitude are also important: “People are more forthcoming when you ask questions in a casual way, rather than in a buttoned-up, official tone.”

While question-asking is a necessary component of learning, the authors remind us that “The wellspring of all questions is wonder and curiosity and a capacity for delight. We pose and respond to queries in the belief that the magic of a conversation will produce a whole that is greater than the sum of its parts. Sustained personal engagement and motivation—in our lives as well as our work—require that we are always mindful of the transformative joy of asking and answering questions.”

Resources:

“The Surprising Power of Questions,” Alison Wood Brooks and Leslie K. John, Harvard Business Review, May–June 2018 (pp. 60–67)

Using Qualitative Interviews in Program Evaluations

July 24, 2018

Learning to Learn

In a recent article in the May 2, 2018 Harvard Business Review, “Learning Is a Learned Behavior. Here’s How to Get Better at It,” Ulrich Boser rejects the idea that our capacities for learning are innate and immutable. He argues, instead, that a growing body of research shows that learners are not born, but made. Boser says that we can all get better at learning how to learn, and that improving our knowledge-acquisition skills is a matter of practicing some basic strategies.

Learning how to learn is a matter of:

  1. setting clear and achievable targets about what we want to learn
  2. developing our metacognition skills (“metacognition” is a fancy way to say thinking about thinking) so that as we learn, we ask ourselves questions like, Could I explain this to a friend? Do I need to get more background knowledge? etc.
  3. reflecting on what we are learning by taking time to “step away” from our deliberate learning activities so that during periods of calm, and even mind-wandering, new insights emerge

Boser says that research shows we’re more committed if we develop a learning plan with clear objectives, and that periodic reflection on the skills and concepts we’re trying to master (i.e., utilizing metacognition) makes each of us a better learner.

You can read more about strategies for learning in Boser’s article and his book.

June 14, 2018

The Tyranny of Metrics

In a recent article, “Against Metrics: How Measuring Performance by Numbers Backfires,” Jerry Z. Muller argues that companies, educational institutions, government agencies, and philanthropies are now in the grip of what he calls “metric fixation”: “…the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardized data (metrics).”

In this brief and important article, Muller critiques the growing phenomenon of paying employees for performance. He points out that such schemes often narrow the measure of what is desirable for the organization, lead members of an organization to “game the system,” undermine organizations’ ability to think more broadly about their purposes, and, most importantly, impede innovation.

Looking at the unintended outcomes of metric fixation, he writes:

“When reward is tied to measured performance, metric fixation invites just this sort of gaming. But metric fixation also leads to a variety of more subtle unintended negative consequences. These include goal displacement, which comes in many varieties: when performance is judged by a few measures, and the stakes are high (keeping one’s job, getting a pay rise or raising the stock price at the time that stock options are vested), people focus on satisfying those measures – often at the expense of other, more important organizational goals that are not measured. The best-known example is ‘teaching to the test’, a widespread phenomenon that has distorted primary and secondary education in the United States since the adoption of the No Child Left Behind Act of 2001.”

Pay-for-performance schemes, however, are not alone in eliciting a narrowing of goals and a tendency to game the system. Metric fixation (or what I term the “tyranny of measurement”) can be a risk for a range of non-profit organizations and educational institutions that often feel that demands for accountability can be addressed by merely counting the number of participants who receive services, or the number of students who score well on reading tests. While it is important to have clear goals, and to be able to indicate whether these goals are met, organizations, in their rush to address demands from funders and other stakeholders for accountability, must be careful not to reduce their goals—indeed their organizations’ vision—to only a few countable variables. “What can and does get measured is not always worth measuring, may not be what we really want to know, and may draw effort away from the things we care about” (Muller). As Albert Einstein observed, “Not everything that counts can be counted, and not everything that can be counted, counts.”

Resources:

“Against Metrics: How Measuring Performance by Numbers Backfires,” Jerry Z. Muller, Aeon, April 24, 2018

The Tyranny of Metrics, Jerry Z. Muller, Princeton University Press, 2018

May 30, 2018

Brad Addresses Nonprofit Net

Brad recently presented an introductory seminar on program evaluation at Nonprofit Net, a Lexington, Massachusetts-based organization. Nonprofit Net is a forum for Massachusetts nonprofit leaders and nonprofit consultants which offers seminars on topics of importance to the nonprofit community. Brad’s presentation provided an introduction to program evaluation, outlined the benefits of evaluating outcomes in order to demonstrate programs’ achievements and challenges, introduced the use of logic models, and reviewed the key questions that nonprofit leaders should consider as they approach a program evaluation.

The seminar was based on the materials in the following white papers, which you can download for free:

Program Evaluation Essentials for Non-evaluators

Preparing for a Program Evaluation

Logic Modeling

Resources:

Nonprofit Net

May 8, 2018

Transparent as a Jellyfish? Why Privacy is Important

Recent revelations about Facebook and Cambridge Analytica’s use of personal data have raised serious concerns about internet privacy. It would appear that we inhabit a world in which privacy is increasingly under assault—not just from prying governments, but also from panoptic corporations.

Although the right to privacy in the US is not explicitly protected by the Constitution, constitutional amendments and case law have provided some protections to what has become a foundational assumption of American citizens. The “right to privacy” (what Supreme Court Justice Louis Brandeis once called “the right to be let alone”) is a widely held value, both in the U.S. and throughout the world. But why is privacy important?

In “Ten Reasons Why Privacy Matters,” Daniel Solove, Professor of Law at George Washington University Law School, lists ten important reasons, including: limiting the power of government and corporations over individuals; the need to establish important social boundaries; creating trust; and serving as a precondition for freedom of speech and thought. Solove also notes, “Privacy enables people to manage their reputations. How we are judged by others affects our opportunities, friendships, and overall well-being.”

Julie E. Cohen, of Georgetown, argues that privacy is not just a protection, but an irreducible environment in which individuals are free to develop who they are and who they will be. “Privacy is shorthand for breathing room to engage in the process of … self-development. What Cohen means is that since life and contexts are always changing, privacy cannot be reductively conceived as one specific type of thing. It is better understood as an important buffer that gives us space to develop an identity that is somewhat separate from the surveillance, judgment, and values of our society and culture.” (See “Why Does Privacy Matter? One Scholar’s Answer,” Jathan Sadowski, The Atlantic, Feb 26, 2013) In the Harvard Law Review (“What Privacy Is For,” Julie E. Cohen, Harvard Law Review, Vol. 126, 2013), Cohen writes, “Privacy shelters dynamic, emergent subjectivity from the efforts of commercial and government actors to render individuals and communities fixed, transparent, and predictable. It protects the situated practices of boundary management through which self-definition and the capacity for self-reflection develop.”

Cohen’s argument that privacy is a pre-condition for the development of an autonomous and thriving self is a critical and often overlooked point. If individuals are to develop, individuate, and thrive, they need room to do so, without interference or unwanted surveillance. Such conditions are also necessary for the maintenance of individual freedom. As Orlando Patterson argued in his book Freedom, Vol. 1: Freedom in the Making of Western Culture (Basic Books, 1991), freedom historically developed in the West out of a long struggle against chattel slavery. Slavery, of course, entails the subjugation of the person, and depends upon the thwarting of autonomy. While slavery may not entirely eradicate the full and healthy development of the “self,” it may deform and distort that development. Autonomous selves are both the product of, and the condition for, social freedom.

Privacy, which is crucial to the development of a person’s autonomy and subjectivity, when reduced by surveillance or restrictive interference—either by governments or corporations who gather and sell our private information—may interfere not just with social and political freedom, but with the development and sustenance of the self.  “Transparency” (especially when applied to personal information) may seem like an important feature to those who gather “Big Data,” but it may also represent an intrusion and an attempt to whittle away the environment of privacy that the self depends upon for its full and healthy development. As Cohen observes, “Efforts to repackage pervasive surveillance as innovation — under the moniker “Big Data” — are better understood as efforts to enshrine the methods and values of the modulated society at the heart of our system of knowledge production. In short, privacy incursions harm individuals, but not only individuals. Privacy incursions in the name of progress, innovation, and ordered liberty jeopardize the continuing vitality of the political and intellectual culture that we say we value.” (See “What Privacy Is For” Julie E. Cohen, Harvard Law Review, Vol. 126, 2013)

Privacy is not just important to the protection of individuals from governments and commercial interests, it is also essential for the development of full, autonomous, and healthy selves.

Resources:

“Ten Reasons Why Privacy Matters” Daniel Solove

“Why Does Privacy Matter? One Scholar’s Answer” Jathan Sadowski, The Atlantic, Feb 26, 2013

“What Privacy Is For” Julie E. Cohen, Harvard Law Review, Vol. 126, 2013

Freedom, Vol. 1: Freedom in the Making of Western Culture, Orlando Patterson, Basic Books, 1991

“Facebook and Cambridge Analytica, What You Need to Know as Fallout Widens” Kevin Granville, New York Times, Mar 19, 2018

“I Downloaded the Information that Facebook Has on Me. Yikes” Brian Chen, New York Times, Apr 11, 2018

“Right to Privacy: Constitutional Rights & Privacy Laws” Tim Sharp, Livescience, June 12, 2013

Surveillance Capitalism, Shoshana Zuboff
Listen to “Facebook and the Reign of Surveillance Capitalism” Radio Open Source
Read a review of Surveillance Capitalism

“How to Save Your Privacy from the Internet’s Clutches,” Natasha Lomas and Romain Dillet, TechCrunch, Apr 14, 2018

April 25, 2018

Everybody Lies

Although the author of Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are (Seth Stephens-Davidowitz, Dey St., 2017) hesitates to specify precisely what “big data” is, he is confident that we are living through an era in which there is an explosion in the amount and quality of data—especially internet-embedded data—that can tell us things about humans that previous data sources and data analysis methods have been unable to reveal. In fact, Stephens-Davidowitz argues in this quick and easy-to-read book: “I am now convinced that Google searches are the most important data set ever collected on the human psyche…and I am convinced that new data increasingly available in our digital age will expand our understanding of humankind.”

Stephens-Davidowitz argues that, unlike previous, predominantly survey-based data, the emergence of big data—primarily data generated by Google and other online searches—makes insights into humans’ deepest interests, desires, behaviors, and values much more transparent and accessible. Whereas traditional survey research has a number of vulnerabilities (e.g., people are not candid, and in fact lie; they provide socially desirable answers; they exaggerate or underestimate behaviors and characteristics; etc.), analysis of internet data, together with the use of new analytical tools (e.g., Google Trends), now makes available immensely more accurate information about what people actually think, believe, and fear.

Stephens-Davidowitz illustrates the insights that collection and analysis of internet-based data now make possible. He shows, for example, how analysis of Google searches about race revealed voters’ real (vs. survey-reported) attitudes toward race, even in otherwise seemingly liberal precincts. These attitudes—largely hidden from analysts who used traditional kinds of survey methods—made possible the surprising election of a figure like Donald Trump, who mobilized anti-immigrant sentiment and racist allusions to win the 2016 presidential election. “Surveys and conventional wisdom placed modern racism predominantly in the South and mostly among Republicans. But the places with the highest racist search rates included upstate New York, western Pennsylvania, eastern Ohio, and rural Illinois…” (p. 7) “The Google searches revealed a darkness and hatred among a meaningful number of Americans that pundits for many years missed. Search data revealed that we live in a very different society from the one academics and journalists, relying on polls, thought that we live in. It revealed a nasty, scary, and widespread rage that was waiting for a candidate to give voice to.” (p. 12)

Everybody Lies…examines the ways that new methods of analyzing internet data can yield accurate insights about what people are really concerned with and thinking about. In a chapter titled “Digital Truth Serum,” the author surveys a number of topics including gender bias and sexism, America’s Nazi sympathizers, and the underreported rise of child abuse during economic recessions. He repeatedly demonstrates how internet data reveal accurate and often counter-intuitive findings. In one instance, the author shows that, in fact, internet data reveal that we are more likely to interact with someone with opposing political ideas on the internet than in real life, and that in many instances—and counter to what is widely believed—liberals and conservatives often visit and draw upon the same news websites. (It appears that fascist sympathizers and liberals both rely on nytimes.com)

Everybody Lies…makes an argument for a ‘revolution’ in social science research. Stephens-Davidowitz believes that the collection and careful analysis of internet-based data promise a much more rigorous and penetrating approach to answering questions about people’s genuine attitudes, behaviors, and political dispositions. Along the way to demonstrating the superiority of such research, Stephens-Davidowitz touches on some important, taboo, and previously difficult-to-answer questions: What percentage of American males are gay? Is Freud’s theory of sexual symbols in dreams really accurate? At what age are political attitudes first established? How racist are most Americans?

Stephens-Davidowitz writes, “Frankly, the overwhelming majority of academics have ignored the data explosion caused by the digital age. The world’s most famous sex researchers stick with the tried and true. They ask a few hundred subjects about their desires; they don’t ask sites like PornHub for their data. The world’s most famous linguists analyze individual texts; they largely ignore the patterns revealed in billions of books. The methodologies taught to graduate students in psychology, political science, and sociology have been, for the most part, untouched by the digital revolution. The broad, mostly unexplored terrain opened by the data explosion has been left to a small number of forward-thinking professors, rebellious grad students, and hobbyists. That will change.” (p. 274)

Resources:

Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, Seth Stephens-Davidowitz, Dey St., 2017

March 27, 2018

Two Cheers for Bureaucracy in Education

Rationalization and Bureaucracy

In the early years of the 20th century, the German sociologist Max Weber argued that modern society increasingly relies upon the “rationalization” of social organizations and institutions. He maintained that Western society is increasingly reliant upon reason, efficiency, predictability, and means/ends calculation. He further believed that modern society is highly dependent upon both public and private bureaucracies (e.g., the nation state and the modern corporation) as a way to achieve important societal goals (education, social welfare, medical care, business administration, governance, etc.) Bureaucracies are, “Highly organized networks of hierarchy and command structures (which are) necessary to run any ordered society – especially ones large in scope.” (See “Max Weber’s Theory of Rationalization: What it Can Tell Us of Modernity,” ) As one form of social organization, bureaucracy is distinguished by its: (1) clear hierarchy of authority, (2) rigid division of labor, (3) written and inflexible rules, regulations, and procedures, and (4) impersonal relationships. (For this and additional definitions, see the BusinessDictionary)

Weber and subsequent social theorists saw the process of rationalization and bureaucratization as replacing traditional modes of life, traditional values, and religious orientations with a society characterized by growing calculability, pursuit of individuals’ self-interest, efficiency, and ordered control. (Weber termed the loss of tradition that accompanied the increasing rationalization of Western society the “disenchantment” of society.) As modernity transforms traditional social forms and social values, rationality and bureaucracy come to dominate the various spheres of contemporary society. Moreover, as society becomes ever more rationalized, it increasingly depends upon bureaucratic regimes of governance and management by impersonal rules and the exercise of technical knowledge by experts. Today, various social arenas—ranging from government to corporate organizations, from healthcare to public education—have become suffused with the ethos of bureaucracy and rationality. “…rationalization means a historical drive towards a world in which ‘one can, in principle, master all things by calculation.’” (See, Max Weber, in Stanford Encyclopedia of Philosophy)

Advantages and Disadvantages of Bureaucracy

Although “bureaucracy” is often thought of as a pejorative term, bureaucracy has some advantages over other forms of social organization. Bureaucracy creates and utilizes rules and laws (vs. fiat decisions by a powerful notable, such as a king), mobilizes the knowledge of educated experts, promotes meritocracy, delineates and sets boundaries for the exercise of social power, establishes a formal chain of command and specifies organizational authority, and provides a technically efficient form of organization for dealing with routine matters that concern large numbers of persons. These advantages, however, are accompanied by what many consider substantial disadvantages, including compelling officials to conform to fixed rules and detailed procedures, encouraging bureaucrats to focus on narrow objectives, and permitting bureaucrats to become defensive, rigid, and unresponsive to the urgent individual needs and concerns of private citizens. “…individual officials working under bureaucratic incentive systems frequently find it to be in their own best interests to adhere rigidly to internal rules and formalities in a ritualistic fashion, behaving as if ‘proper procedure’ were more important than the larger goals for serving their clients or the general public that they are supposedly designed to accomplish (i.e., the ‘red tape’ phenomenon).” (See Bureaucracy)

Education and Bureaucracy 

If we look at public education in contemporary society, we see many features associated with bureaucracy. State education agencies, districts, and schools: 1) are run by trained experts (e.g., credentialed teachers and administrators), 2) feature rigid hierarchies of authority, 3) have a strict division of labor, 4) depend upon and are run by formal and impersonal rules of administration and control, and 5) credential students by relying on impersonal and standardized methods for assessing student achievement. Additionally, students are routinely segregated into age-specified categories (classes) and are subjected not to individually tailored curricula, but to routine and standardized curricula that attempt to teach students en masse.

Does Standardized Testing Support Educational Bureaucracy?

Standardized testing of student achievement is one of the bureaucratic characteristics of modern public education. Although assessment is thought to be a necessary means for measuring student learning, it is also a means by which educational organizations categorize students, assign them social statuses, and allocate them to various social trajectories (“life chances,” to use Weber’s terminology). Standardized testing regimes also assist the educational bureaucracy by creating different categories of clientele (i.e., students) who can then be served en masse by large-scale routinized educational programs and mass-produced textbooks. Some would even argue that students are made to fit schools as much as schools are made to fit students. (For a summary of the problems associated with standardized testing, see “What’s Wrong with Standardized Tests.”)

While students are subject to the rule of bureaucracy, so too are faculty and administrators. Like the students they teach and oversee, faculty and administrators are subject to formal structures of authority, adhere to a strict division of labor, follow formal rules and regulations, and must be credentialed and certified. Like their students, teachers are also subject to assessment and review. (See our previous blog post “Too Much Assessment in Higher Education,” for an example of the effects of assessment on higher education.) Schools are also reviewed and rated by State Departments of Education.

While some feel that the stultifying aspects of bureaucracy may be ameliorated, the original theorist of rationalization and bureaucracy, Max Weber, was pessimistic about the reform of bureaucracy. As he surveyed the early 20th century and considered the likely developmental direction of Western society, he predicted that citizens would find themselves increasingly entrapped in what he termed the “iron cage of bondage,” a cage cemented ever more firmly by the growth of rationalization and bureaucracy. Whether this dark prognosis holds generally true for Western society is still very much debatable. That said, it is difficult to imagine large-scale public education without many of the features of bureaucracy that Weber first described, including standardized student testing. (For examples of reform efforts as they apply to standardized tests, see The National Center for Fair & Open Testing.)

March 6, 2018

Too Much Assessment in Higher Education?

In an important article, “The Misguided Drive to Measure ‘Learning Outcomes’” (New York Times, Feb. 23, 2018), Molly Worthen argues that the growth of, and seeming obsession with, the assessment of learning outcomes in higher education has profoundly shaped curricula and instruction, undercut the unique capacities of colleges and universities to foster genuine critical thinking, and proven to be both bureaucracy-bloating and extremely expensive. Worthen shows that, driven largely by the interests of accrediting agencies and for-profit tech and consulting companies, higher education’s rush to demonstrate student learning and skill acquisition disproportionately affects non-elite schools, and often compels these under-resourced institutions to devote scarce dollars to obtaining evidence of instructional impact: “…more and more university administrators want campus-wide, quantifiable data that reveal what skills students are learning. Their desire has fed a bureaucratic behemoth known as learning outcomes assessment.”

Worthen argues that higher education institutions’ focus on showing that students graduate with job-ready skills and attitudes obscures the “real crisis” in higher education: “the system’s deepening divide into a narrow tier of elite institutions primarily serving the rich and a vast landscape of glorified trade schools for everyone else.” She notes that the cruel irony of the mania for assessment is that there is little evidence, beyond occasional anecdotes, that regimes of assessment actually improve student learning. Moreover, more selective (i.e., elite) institutions don’t themselves utilize assessment of learning outcomes at the same rate as less prestigious institutions. “Research indicates that the more selective a university, the less likely it is to embrace assessment.”

Perhaps the greatest irony, Worthen writes, is that assessment regimes subvert the unique purposes and capacities of higher education. “The value of universities to a capitalist society depends on their ability to resist capitalism, to carve out space for intellectual endeavors that don’t have obvious metrics or market value.” She further observes, “Producing thoughtful, talented graduates is not a matter of focusing on market-ready skills. It’s about giving students an opportunity that most of them will never have again in their lives: the chance for serious exploration of complicated intellectual problems, the gift of time in an institution where curiosity and discovery are the source of meaning.”

Resources:

“The Misguided Drive to Measure ‘Learning Outcomes’,” Molly Worthen, New York Times, 23 February 2018

February 19, 2018

Will You Become a Member of the Useless Class?

The development and deployment of robotics and artificial intelligence continues to affect the world of work. As we’ve discussed in previous blog posts
“Humans Need Not Apply: What Happens When There’s No More Work?”, “Will President Trump’s Wall Keep Out the Robots?”  and “Dark Factories” , AI and robotics are transforming both blue-collar jobs and professional occupations. New technologies promise to change not just how we work and are employed, but also to alter the traditional meanings of work and employment that have been central to peoples’ self-conceptions and identities.

In “How Automation Will Change Work, Purpose, and Meaning,” by Robert C. Wolcott, Harvard Business Review, January 11, 2018, Wolcott says that new technologies not only raise the question, “How are the spoils of technology to be distributed?” but equally baffling, “When technology can do nearly anything, what should I do, and why?” He cites Hannah Arendt’s writings in The Human Condition about the importance of moving from a self-conception that identifies work as purpose, to one that encompasses the idea of the Vita Activa, the active life, in which humans, when freed from much of the drudgery of labor, will need to aspire to integrate non-labor activity in the world with contemplation about the world. Wolcott asks, “When our machines release us from ever more tasks, to what will we turn our attentions? This will be the defining question of our coming century.”

In “The Meaning of Life in a World Without Work” (The Guardian, May 8, 2017), Yuval Noah Harari writes that as new technologies increasingly displace humans from work, the real problem will be keeping occupied the masses of people (i.e., members of “the useless class,” as Harari calls them) who are no longer involved in work. Harari says that one possible scenario might be the deployment of virtual reality computer games. “Economically redundant people might spend increasing amounts of time within 3D virtual reality worlds, which would provide them with far more excitement and emotional engagement than the “real world” outside.” He likens such virtual reality to the world’s religions, which, Harari says, are filled with practices and beliefs that give meaning to adherents’ lives but are not themselves necessary or ‘real’ in any objective way. Harari asserts that it doesn’t much matter whether one finds stimulation in the ‘real’ world or in computer-simulated reality, because ultimately both rely on what’s happening inside our brains. Further, he observes, “Hence virtual realities are likely to be key to providing meaning to the useless class (created by) the post-work world. Maybe these virtual realities will be generated inside computers. Maybe they will be generated outside computers, in the shape of new religions and ideologies. Maybe it will be a combination of the two. The possibilities are endless, and nobody knows for sure what kind of deep play will engage us in 2050.”

Although Harari’s sketch of possible futures seems shockingly Huxleyan, it does attempt to imagine a future in which large swaths of the population will be unnecessary to the functioning of the productive economy. Anticipating criticism of the brave new world that he’s sketched, Harari, referring to the world’s religions, writes, “But what about truth? What about reality? Do we really want to live in a world in which billions of people are immersed in fantasies, pursuing make-believe goals and obeying imaginary laws? Well, like it or not, that’s the world we have been living in for thousands of years already.”

The challenges, and some might say the catastrophes, associated with the new technologies are not merely technological. They are political, and will be shaped by the kinds of political institutions and social policies that nations use to deal with them. In the December 27, 2017, New York Times article, “The Robots are Coming and Sweden is Fine,” Peter S. Goodman notes that Swedish workers appear less threatened by the introduction of robotics and AI because Sweden’s history of social democracy and the relatively strong influence of unions temper the effects of new technologies on Swedish workers. Goodman argues that, unlike much of the rest of the world, the fear that robots will steal jobs “… has little currency in Sweden or its Scandinavian neighbors, where unions are powerful, government support is abundant, and trust between employers and employees runs deep. Here, robots are just another way to make companies more efficient. As employers prosper, workers have consistently gained a proportionate slice of the spoils — a stark contrast to the United States and Britain, where wages have stagnated even while corporate profits have soared.”

How AI and robotics will affect the U.S. is still uncertain, although as we’ve discussed in “Humans Need Not Apply…,” some researchers believe that within two decades, half of U.S. jobs could be handled by machines. (For example, check out the video “Why Amazon Go Is Being Called the Next Big Job Killer.”) The character of work, and the consequent effects on the population, will be determined in part by the strength of institutions that have mediated the relationship between employers and employees. In the U.S., sadly, those institutions and social agreements have largely been weakened or eliminated in the last 35-40 years. The introduction of robotics and AI in America is likely to follow a far different path than in Sweden.

January 31, 2018

What’s the Difference? Evaluation vs. Research

Evaluation is a research enterprise whose primary goal is to identify whether desired changes have been achieved. Evaluation is a type of applied social research that is conducted with a value, or set of values, in its “denominator.” Evaluation research is always conducted with an eye to whether the desired outcomes, or results, of a program, initiative, or policy were achieved, especially as these outcomes are compared to a standard or criterion. At the heart of program evaluation is the idea that outcomes, or changes, are valuable and desired. Some outcomes are more valuable than others. Evaluators conduct evaluation research to find out if these valued changes are, in fact, achieved by the program or initiative.

Evaluation research shares many of the same methods and approaches as the other social sciences, and indeed, the natural sciences. Evaluators draw upon a range of evaluation designs (e.g., experimental, quasi-experimental, non-experimental) and a range of methodologies (e.g., case studies, observational studies, interviews, etc.) to learn what the effects of a given intervention have been. Did, for example, 8th grade students who received an enriched STEM curriculum do better on tests than their otherwise similar peers who didn’t receive the enriched curriculum? Do homeless women who receive career readiness workshops succeed at obtaining employment at greater rates than similar homeless women who don’t participate in such workshops? (For more on these types of outcome evaluations, see our previous blog post, “What You Need to Know About Outcome Evaluations: The Basics.”) While not all evaluations are outcome evaluations, all evaluations gather systematic data with which judgments about the program or initiative can be made.
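To make the comparison concrete, here is a minimal sketch (in Python, with entirely made-up scores, not data from any study mentioned here) of the basic logic behind a two-group outcome comparison: compare the treatment group’s average outcome to the comparison group’s, scaled by the sampling variability of the two groups.

```python
from statistics import mean, stdev

# Hypothetical test scores: a group that received an enriched curriculum
# and an otherwise similar comparison group (illustrative numbers only).
treatment = [78, 85, 92, 88, 76, 81, 90, 84]
comparison = [72, 80, 79, 74, 77, 70, 82, 75]

# The raw effect: difference between the two group means.
diff = mean(treatment) - mean(comparison)

# Welch's t statistic: the mean difference scaled by the combined
# standard error of the two (possibly unequal-variance) samples.
se = (stdev(treatment) ** 2 / len(treatment)
      + stdev(comparison) ** 2 / len(comparison)) ** 0.5
t_stat = diff / se

print(f"Mean difference: {diff:.2f} points")
print(f"Welch t statistic: {t_stat:.2f}")
```

A real evaluation would, of course, add a significance test or confidence interval and attend to how the comparison group was selected; the sketch only illustrates the core idea that an outcome evaluation judges a valued change against a comparison standard.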

Another way to differentiate social research from evaluation research is to understand that social research seeks to find out “What is the case?” “What is out there?” “How does the world really work?” etc. For example, in political science, researchers may want to find out how citizens of California vote in national elections, or what their attitudes are toward certain candidates or policies. Sociologists may investigate the causes of racial segregation or the relationship(s) between race and class. These instances of social research are primarily interested in discovering what is the case, regardless of the value we might attribute to the findings of the research. Researchers in political science are neutral about the percentages of California voters who vote Republican, Democrat, Independent, Green, etc. They are most interested in knowing how people vote, not in whether they vote for one particular party.

Although evaluation research is interested in a truthful, accurate description of what is the case, it is ALSO interested in discovering whether findings indicate that what is there (i.e., is present) is valuable, important, desired, etc. When evaluators look for outcomes they don’t just want to know if anything at all happened, or changed; they want to discover whether something specific and valued happened. Evaluators don’t just set their sights on describing the world, but on determining whether certain valued and worthwhile things happened. While evaluators use many of the same methods and approaches as other researchers, evaluators must employ an explicit set of values against which to judge the findings of their empirical research. This means that evaluators must both be competent social scientists AND exercise value-based judgments and interpretations about the meaning of data.

January 3, 2018

Evaluation in Complex and Evolving Environments

Program evaluations seldom occur in stable, scientifically controlled environments. Often programs are implemented in complex and rapidly evolving settings that make traditional evaluation research approaches—which depend upon the stability of the “treatment” and the designation of predetermined outcomes—difficult to utilize.

Michael Quinn Patton, one of the originators of Developmental Evaluation, says that “Developmental evaluation processes include asking evaluative questions and gathering information to provide feedback and support developmental decision-making and course corrections along the emergent path. The evaluator is part of a team whose members collaborate to conceptualize, design and test new approaches in a long-term, on-going process of continuous improvement, adaptation, and intentional change. The evaluator’s primary function in the team is to elucidate team discussions with evaluative questions, data and logic, and to facilitate data-based assessments and decision-making in the unfolding and developmental processes of innovation.”

In their paper, “A Practitioner’s Guide to Developmental Evaluation,” Dozois and her colleagues note, “Developmental Evaluation differs from traditional forms of evaluation in several key ways:”

  • The primary focus is on adaptive learning rather than accountability to an external authority.
  • The purpose is to provide real-time feedback and generate learnings to inform development.
  • The evaluator is embedded in the initiative as a member of the team.
  • The DE role extends well beyond data collection and analysis; the evaluator actively intervenes to shape the course of development, helping to inform decision-making and facilitate learning.
  • The evaluation is designed to capture system dynamics and surface innovative strategies and ideas.
  • The approach is flexible, with new measures and monitoring mechanisms evolving as understanding of the situation deepens and the initiative’s goals emerge.

Developmental evaluation is especially useful for social innovators, who often find themselves inventing the program as it is implemented, and who often don’t have a stable and unchanging set of anticipated outcomes. Following Patton, Dozois, Langlois, and Blanchet-Cohen observe that Developmental Evaluation is especially well suited to situations that are:

  • Highly emergent and volatile (e.g., the environment is always changing)
  • Difficult to plan or predict because the variables are interdependent and non-linear
  • Socially complex— requiring collaboration among stakeholders from different organizations, systems, and/or sectors
  • Innovative, requiring real-time learning and development

Developmental Evaluation is also increasingly appropriate for use in the non-profit world, especially where the stability of a program’s key components, including its core treatment and its eventual, often evolving, outcomes, is not as certain or firm as program designers might wish.

Brad Rose Consulting is experienced in working with programs whose environments are volatile and whose iterative program designs are necessarily flexible. We are adept at collecting data that can inform the ongoing evolution of a program, and have 20+ years of experience providing meaningful data that help program designers and implementers adjust to rapidly changing and highly variable environments.

Resources:

A Practitioner’s Guide to Developmental Evaluation, Elizabeth Dozois, Marc Langlois, and Natasha Blanchet-Cohen

Michael Quinn Patton on Developmental Evaluation

Developmental Evaluation

The Case for Developmental Evaluation

December 11, 2017

What’s the Difference? 10 Things You Should Know About Organizations vs. Programs

Organizations vs. Programs

Organizations are social collectivities that have: members/employees, norms (rules for, and standards of, behavior), ranks of authority, communications systems, and relatively stable boundaries. Organizations exist to achieve purposes (objectives, goals, and missions) and usually exist in a surrounding environment (often composed of other organizations, individuals, and institutions.) Organizations are often able to achieve larger-scale and more long-lasting effects than individuals are able to achieve.  Organizations can take a variety of forms including corporations, non-profits, philanthropies, and military, religious, and educational organizations.

Programs are discrete, organized activities and actions (or sets of activities and actions) that utilize resources to produce desired, typically targeted, outcomes (i.e., changes and results). Programs typically exist within organizations. (It may be useful to think of programs as nested within one or, in some cases, more than one organization.) In seeking to achieve their goals, organizations often design and implement programs that use resources to achieve specific ends for program participants and recipients. Non-profit organizations, for example, implement programs that mobilize resources in the form of activities, services, and products that are intended to improve the lives of program participants/recipients. In serving program participants, nonprofits strive to effectively and efficiently deploy program resources, including knowledge, activities, services, and materials, to positively affect the lives of those they serve.

What is Program Evaluation?

Program evaluation is an applied research process that examines the effects and effectiveness of programs and initiatives. Michael Quinn Patton notes that “Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs in order to make judgements about the program, to improve program effectiveness, and/or to inform decisions about future programming. Program evaluation can be used to look at:  the process of program implementation, the intended and unintended results produced by programs, and the long-term impacts of interventions. Program evaluation employs a variety of social science methodologies–from large-scale surveys and in-depth individual interviews, to focus groups and review of program records.” Although program evaluation is research-based, unlike purely academic research, it is designed to produce actionable and immediately useful information for program designers, managers, funders, stakeholders, and policymakers.

Organization Development, Strategic Planning, and Program Evaluation

Organization Development is a set of processes and practices designed to enhance the ability of organizations to meet their goals and achieve their overall mission. It entails “…a process of continuous diagnosis, action planning, implementation and evaluation, with the goal of transferring (or generating) knowledge and skills so that organizations can improve their capacity for solving problems and managing future change.” (See: Organizational Development Theory, below) Organization Development deals with a range of features, including organizational climate, organizational culture (i.e., assumptions, values, norms/expectations, patterns of behavior), and organizational strategy. It seeks to strengthen and enhance the long-term “health” and performance of an organization, often by focusing on aligning organizations with their rapidly changing and complex environments through organizational learning, knowledge management, and the specification of organizational norms and values.

Strategic Planning is a tool that supports organization development. Strategic planning is a systematic process of envisioning a desired future for an entire organization (not just a specific program), and translating this vision into a broadly defined set of goals, objectives, and a sequence of action steps to achieve them. Strategic planning is an organization’s process of defining its strategy, or direction, and making decisions about allocating its resources to pursue this strategy.

Strategic plans typically identify where an organization is and where it wants to be in the future, and they include statements about how to “close the gap” between the current state and the desired future state. Additionally, strategic planning requires making decisions about allocating resources to pursue an organization’s strategy. It generally involves not just setting goals and determining actions to achieve them, but also mobilizing resources.

Program evaluation is uniquely able to contribute to organization development–the deliberately planned, organization-wide effort to increase an organization’s effectiveness and/or efficiency. Although evaluations are customarily aimed at gathering and analyzing data about discrete programs, the most useful evaluations collect, synthesize, and report information that can be used to improve the broader operation and health of the organization that hosts the program. Additionally, program evaluation can aid the strategic planning process, by using data about an organization’s programs to indicate whether the organization is successfully realizing its goals and mission through its current programming.

Brad Rose Consulting works at the intersection of evaluation and organization development. While our projects begin with a focus on discrete programs and initiatives, the answers to the questions that drive our evaluation research provide vital insights into the effectiveness of the organizations that host, design, and fund those programs. Findings from our evaluations often have important implications for the development and sustainability of the entire host organization.

Resources:

Organizations: Structures, Processes, and Outcomes, Richard H. Hall and Pamela S Tolbert, Pearson Prentice Hall, 9th edition.

Utilization-Focused Evaluation, Michael Quinn Patton, Sage, 3rd edition, 1997

Organization Development: What Is It & How Can Evaluation Help?

Organization Development

Organizational Development Theory

Strategic Planning, Bain and Co. 2017

Strategic Planning

What a Strategic Plan Is and Isn’t

Ten Keys to Successful Strategic Planning for Nonprofit and Foundation Leaders

Elements of a Strategic Plan

Types of Strategic Planning

Understanding Strategic Planning

Five Steps to a Strategic Plan

The Big Lie of Strategic Planning, Roger L. Martin, Harvard Business Review, January-February 2014

November 8, 2017

Dark Factories

We’ve previously written about the rise of Artificial Intelligence (AI) and robotics, and the impact of these new technologies on employment and the future of work.
(See our previous blog posts: “Humans Need Not Apply: What Happens When There’s No More Work?” and “Will President Trump’s Wall Keep Out the Robots?”) Today we’d like to refer readers to an important article, “Dark Factory,” that recently appeared in The New Yorker. “Dark Factory” explores the growing impact of robotics and AI on the manufacturing and service sectors of the U.S. economy.

Dark Factory

In “Dark Factory,” Sheelah Kolhatkar discusses her visit to Steelcase, the manufacturer of office furniture. Steelcase, like much of American manufacturing, has had its economic ups and downs over the years. Kolhatkar describes how, in recent years, the company has increasingly employed robotics as a means to improve manufacturing efficiency and, as a result, now relies on fewer workers than it has in the past. Representative of an ever-growing number of manufacturing companies in the U.S., Steelcase employs fewer and fewer high school graduates and now seeks college-educated employees with technological skills, so that these higher-skilled workers can supervise an expanding army of manufacturing robots.

As Kolhatkar shows, while efficiency gains are good for Steelcase and other manufacturing companies that employ robots and AI— and even, in some cases, make work more tolerable and less grueling for the remaining employees on the shop floor— the net effect of these technologies is to displace large swaths of the work force and to shift wealth to the owners of companies. Kolhatkar cites research that shows that the use of industrial robots, like those at Steelcase, is directly related to decline in both the number of manufacturing jobs and declining pay for remaining workers.

Kolhatkar also discusses “dark factories”— factories and warehouses whose use of robots and AI is so extensive that they need not turn on the lights, because there are so few human workers. While such factories and warehouses are not yet widespread, major U.S. corporations are looking to use robotics and AI to run nearly employee-less operations. Although some companies may not be eager to begin utilizing robotic warehouses, competitive pressure is sure to compel U.S. companies to implement fully robotized facilities or lose competitive battles with firms in other nations that do adopt these technologies.

AI and Robotics Not Limited to Manufacturing

The result of the growing use of robotics and AI in the U.S. is, of course, declining demand for workers in what were once fairly labor-intensive, human-dominated work environments. (Manufacturing now employs only about 10% of the U.S. workforce, and these jobs are under constant threat from new technologies.) Although displaced manufacturing workers often seek jobs in the service sector, this sector is hardly immune to automation. McDonald’s, for example, is introducing “digital ordering kiosks” where customers electronically enter their orders and pay for their meals; the company is expected to bring them to 5,500 restaurants by the end of 2018. Uber and Google continue to invest in the development of autonomous driving technologies, and the U.S. trucking industry is eager to adopt autonomous vehicles so that it can reduce the substantial labor costs associated with trucking. Amazon has purchased Kiva, a robotics company, and is developing robots that can zoom around Amazon warehouses and fulfill orders. (A Deutsche Bank report estimates that Amazon could save $22 million a year in each of its warehouses simply by introducing warehouse robots to replace human workers.)

As Kolhatkar’s “Dark Factory” shows, while the future looks increasingly promising for the shareholders of companies who introduce labor-displacing robotics and AI, it doesn’t appear quite so sunny for those humans who must work for a living—especially in the manufacturing and service sectors of the U.S. economy. Like the “dark factories” that promise to displace them, for many workers, the future too, will be dark.

Resources:

“Welcoming our New Robotic Overlords”, Sheelah Kolhatkar, The New Yorker, October 23 2017

“AI, Robotics, and the Future of Jobs”, Pew Research Center

“Artificial intelligence and employment”, Global Business Outlook

“Advances in Artificial Intelligence Could Lead to Mass Unemployment, Experts Warn”, James Vincent, The Independent, Wednesday 29 January 2014

Copyright © 2020 - Brad Rose Consulting