
June 29, 2020

Systems Thinking in Evaluation

The Oxford English Dictionary defines a system as “a set of things working together as parts of a mechanism or an interconnecting network; a complex whole.” There are, of course, many specific kinds of systems: economic, computer, biological, social, psychological, and so on. In each of these domains, a system includes specialization of component parts (a division of labor), boundaries for each constituent part, both a degree of relative autonomy and an interdependence of each part on the functioning of the others, functioning over time, and the production of outcomes (whether those outcomes are intended or not). Systems produce effects.

While various systems are distinct, there has been an effort to generate a general science of systems under the umbrella of “systems theory.” (See, for example, this summary, “Systems Theory.”) Theorists have attempted to construct a general and abstract science able to describe a variety of systems. These efforts, although subject to some questions and criticisms, have been useful for mapping and describing a variety of systems and structures, and have helped social scientists and organizational/social change advocates to describe approaches to intervening in a variety of contexts, including organizational, educational, social welfare, and economic systems.

Systems thinking—thinking about systems rather than exclusively about individuals or single events—can help those who are attempting to strengthen initiatives and interventions. As Michael Goodman points out in “Systems Thinking: What, Why, When, Where, and How?”: “Systems thinking often involves moving from observing events or data, to identifying patterns of behavior over time, (and) to surfacing the underlying structures that drive those events and patterns. By understanding and changing structures that are not serving us well (including our mental models and perceptions), we can expand the choices available to us and create more satisfying, long-term solutions to chronic problems.”

Program evaluation benefits from a systems approach because interventions (e.g., programs and initiatives) are themselves systems, and are embedded or nested in larger social and economic systems. Rather than treating challenges to program effectiveness as the exclusive result of individuals’ one-off actions, it is more productive to examine the systemic features of the program in order to identify how internal structures and repeated behaviors, as well as larger external systemic constraints, shape a program’s effectiveness.

 

 


June 2, 2020

How Evaluation Can Help Non-profits to Respond to the COVID-19 Health Crisis

The current health crisis is compelling many non-profits to rethink how they do business. Many must consider how best to serve their stakeholders with new, and perhaps untested, means. Among the questions that many non-profits must now ask themselves: How do we continue to reach program participants and service recipients? How do we change or adjust our programming so that it reaches existing and new service recipients? How do we maximize our value while ensuring the safety of staff and clients? Are there new, unanticipated opportunities to serve program participants?

New conditions require new strategies. While the majority of non-profits’ attention will necessarily be focused on serving the needs of those they seek to assist, non-profit leaders will benefit from paying attention to which strategies work, and which adaptations work better than others.

In order to investigate the effectiveness of new programmatic responses, non-profits will benefit from conducting evaluation research that gathers data about the effects and the effectiveness of new (and continuing) interventions. Formative evaluation is one such means for discovering what works under new conditions.

The goal of formative evaluation is to gather information that can help program designers, managers, and implementers address challenges to the program’s effectiveness. In its paper “Different Types of Evaluation,” the CDC notes that formative evaluations are implemented “During the development of a new program (or) when an existing program is being modified or is being used in a new setting or with a new population.” Formative evaluation allows modifications to be made to the plan before full implementation begins, and helps to maximize the likelihood that the program will succeed. “Formative evaluations stress engagement with stakeholders when the intervention is being developed and as it is being implemented, to identify when it is not being delivered as planned or not having the intended effects, and to modify the intervention accordingly.” (See “Formative Evaluation: Fostering Real-Time Adaptations and Refinements to Improve the Effectiveness of Patient-Centered Medical Home Interventions.”)

While there are many potential formative evaluation questions, at their core they gather information that answers:

  • Which features of a program or initiative are working and which aren’t working so well?
  • Are there identifiable obstacles, or design features, that “get in the way” of the program working well?
  • Which components of the program do program participants say could be strengthened?
  • Which elements of the program do participants find most beneficial, and which least beneficial?

Typically, formative evaluations are used to provide feedback in a timely way, so that the functioning of the program can be modified or adjusted, and the goals of the program better achieved. Brad Rose Consulting has conducted dozens of formative evaluations, each of which has helped program managers to understand ways that their program or initiative can be refined, and program participants better served. For the foreseeable future, non-profits are likely to be called upon to offer ever greater levels of services. Program evaluation can help non-profits to maximize their effectiveness in ever more challenging times.
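
To make this concrete, consider a deliberately simple sketch of how responses to the questions above might be tallied. The component names, ratings, and threshold below are invented for illustration, not drawn from any actual evaluation; a real formative evaluation would triangulate such tallies with interviews and observation.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical participant ratings (1-5) of program components,
# gathered through a formative evaluation survey.
responses = [
    {"component": "intake process", "rating": 4},
    {"component": "intake process", "rating": 5},
    {"component": "remote workshops", "rating": 2},
    {"component": "remote workshops", "rating": 3},
    {"component": "follow-up coaching", "rating": 5},
]

# Group ratings by program component.
by_component = defaultdict(list)
for r in responses:
    by_component[r["component"]].append(r["rating"])

# Flag components whose average rating falls below a chosen threshold,
# so program staff can probe them further in interviews or focus groups.
THRESHOLD = 3.5
for component, ratings in sorted(by_component.items()):
    avg = mean(ratings)
    note = "  <- candidate for strengthening" if avg < THRESHOLD else ""
    print(f"{component}: {avg:.1f}{note}")
```

Even this crude summary illustrates how formative data can point staff toward the parts of a program that deserve a closer look.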


May 5, 2020

Keeping Busy? – The Cult of Busyness in a Time of Lock-down

We’re living in a period when many people are compelled to stay home, and many are out of work. Ironically, many employees are busier than ever. For the UPS driver, the grocery store cashier, medical personnel, truck drivers, and many assembly line workers, busyness is not a choice but an ongoing condition. And while the COVID-19 crisis has increased the pace of work for those deemed “essential workers,” even in less stressful times the pace of work is intense and unrelenting.

Although for many busyness is imposed and involuntary, for others—especially middle and upper managers, and entrepreneurs—busyness is not an imposed condition but a prestigious choice. Indeed, busyness is a status symbol and an indicator of social importance. “In a recent paper published in the Journal of Consumer Research, researchers from Columbia University, Harvard, and Georgetown found through a series of experiments that the busier a person appeared, the more important they were deemed.” (https://globalnews.ca/news/3343760/the-cult-of-busyness-how-being-busy-became-a-status-symbol/) The authors of the paper write, “We argue that a busy and overworked lifestyle, rather than a leisurely lifestyle, has become an aspirational status symbol. A series of studies shows that the positive inferences of status in response to busyness and lack of leisure are driven by the perceptions that a busy person possesses desired human capital characteristics (competence, ambition) and is scarce and in demand on the job market.” As early as 1985, Barbara Ehrenreich noted this effect among women: “I don’t know when the cult of conspicuous busyness began, but it has swept up almost all the upwardly mobile, professional women I know.” (https://www.nytimes.com/1985/02/21/garden/hers.html)

Why are so many people obsessively busy? Tim Kreider writes in “The ‘Busy’ Trap” (New York Times, June 30, 2012), “Busyness serves as a kind of existential reassurance, a hedge against emptiness; obviously your life cannot possibly be silly or trivial or meaningless if you are so busy, completely booked, in demand every hour of the day.” Similarly, Lissa Rankin writes, “It seems to me that too many of us wear busyness as a badge of honor. I’m busy, therefore I’m important and valuable, therefore I’m worthy. And if I’m not busy, forget it. I don’t matter.”

In a recent article, “7 Hypotheses for Why We Are So Busy Today,” Kyle Kowalski posits the following hypotheses about busyness:

  1. Busyness as a badge of honor and trendy status symbol — or the glorification of busy — to show our importance, value, or self-worth in our fast-paced society
  2. Busyness as job security — an outward sign of productivity and company loyalty
  3. Busyness as FOMO (Fear of Missing Out) — spending is shifting from buying things (“have it all”) to experiences (“do it all”), packing our calendars (and social media feeds with the “highlight reel of life”)
  4. Busyness as a byproduct of the digital age — our 24/7 connected culture is blurring the line between life and work; promoting multitasking and never turning “off”
  5. Busyness as a time filler — in the age of abundance of choice, we have infinite ways to fill time (online and off) instead of leaving idle moments as restorative white space
  6. Busyness as necessity — working multiple jobs to make ends meet while also caring for children at home
  7. Busyness as escapism — from idleness and slowing down to face the tough questions in life (e.g. Maybe past emotional pain or deep questions like, “What is the meaning of life?” or “What is my purpose?”)

Whatever the reasons, busyness has its costs. The most obvious is “burnout.” Others include long-term negative impacts on happiness, well-being, and health. Ultimately, busyness may make us feel important, in demand, and worthy of others’ respect, but it may also be destructive.

 

Resources:

“The cult of busyness: How being busy became a status symbol,” Global News, March 30, 2017

“‘Ugh, I’m So Busy’: A Status Symbol for Our Time,” Joe Pinsker, The Atlantic, March 1, 2017

“Busyness 101: Why Are We SO BUSY in Modern Life? (7 Hypotheses),” Kyle Kowalski, Sloww

“Conspicuous Consumption of Time: When Busyness and Lack of Leisure Time Become a Status Symbol,” Silvia Bellezza, Neeru Paharia, Anat Keinan, Columbia Business School Research Archive

“The cult of busyness in the nonprofit sector,” Susan Fish, Charity Village, May 25, 2016

“Hers,” Barbara Ehrenreich, New York Times, February 21, 1985

“This is why you’re addicted to being busy,” Jory MacKay, Fast Company, August 12, 2019

“Are You Addicted to Being Busy? Why we should consider the hard truths we mask by staying busy,” Lissa Rankin, M.D., Psychology Today, April 7, 2014

“Busy is a Sickness,” Scott Dannemiller, Huffington Post, February 27, 2015

April 21, 2020

Developmental Evaluation in Tumultuous Times, Tumultuous Environments

The current health crisis is already having a powerful effect on non-profit organizations, many of which had been economically challenged even before the onset of the COVID-19 pandemic. (See “A New Mission for Nonprofits During the Outbreak: Survival,” David Streitfeld, New York Times, March 27, 2020.) Despite economic challenges, and as the immediate health crisis develops, non-profits will need information, including accurate and robust evaluation and monitoring information, more than ever.

Under conditions of uncertainty, turbulence, and rapid adaptation, non-profits will benefit from information gathered by flexible and adaptable evaluation approaches like Developmental Evaluation. As Michael Quinn Patton, the approach’s leading proponent, describes it, “Developmental evaluation (DE) is especially appropriate for…organizations in dynamic and complex environments where participants, conditions, interventions, and context are turbulent, pathways for achieving desired outcomes are uncertain, and conflicts about what to do are high. DE supports reality-testing, innovation, and adaptation in complex dynamic systems where relationships among critical elements are nonlinear and emergent. Evaluation use in such environments focuses on continuous and ongoing adaptation, intensive reflective practice, and rapid, real-time feedback.”

Patton has recently pointed out that “All evaluators must now become developmental evaluators, capable of adapting to complex dynamic systems, preparing for the unknown, for uncertainties, turbulence, lack of control, nonlinearities, and for emergence of the unexpected. This is the current context around the world in general and this is the world in which evaluation will exist for the foreseeable future.”

Developmental Evaluation, the kind of evaluation approach Brad Rose Consulting has employed for many years, is extremely well suited to serve the evaluation and information needs of non-profits, educational institutions, and foundations. For more information about our approach, please see our previous articles “Developmental Evaluation: Evaluating Programs in the Real World’s Complex and Unpredictable Environment” and “Evaluation in Complex and Evolving Environments.”


March 24, 2020

Working from Home/Telecommuting

With the onset of the coronavirus (COVID-19) in the US, increasing numbers of people are working from home. While many, indeed most, jobs don’t allow for home-based employment, both new technology (e-mail, video conferencing, etc.) and public health concerns are compelling increasing numbers of employers to permit their workers to telecommute. In fact, long before the COVID-19 pandemic, ever greater numbers of U.S. workers were working from their homes. One source notes that between 2005 and 2017, the number of people telecommuting grew by 159 percent. Prior to the coronavirus, about 4.7 million people in the U.S. telecommuted. That number is expected to increase dramatically in 2020.

Benefits and Liabilities of Telecommuting

Telecommuting offers a number of benefits. In her article “Benefits of Telecommuting for The Future Of Work,” Andrea Loubier reports that productivity receives a boost because telecommuters are less distracted and more task-focused: “With none of the distractions from a traditional office setting, telecommuting drives up employee efficiency. It allows workers to retain more of their time in the day and adjust to their personal mental and physical well-being needs that optimize productivity. Removing something as simple as a twenty minute commute to work can make a world of difference. If you are ill, telecommuting allows one to recover faster without being forced to be in the office.”

Telecommuting offers workers flexibility that they otherwise wouldn’t have. Many workers are able to organize their work with greater efficiency and to deliberately integrate non-work tasks into their daily schedules. For older workers, telecommuting allows many to remain in the workforce longer. Some studies indicate that working from home also reduces employee turnover and increases company loyalty. For employers, telecommuting also reduces costs, including costs associated with office space, employee hiring (due to reduced turnover), office supplies, and equipment.

Despite the advantages of telecommuting, working from home (or anywhere off-site) also has disadvantages. For some, distraction isn’t decreased by working at home; it’s increased. Working from home can also be isolating, reducing the social rewards of the workplace. And jobs that require building and maintaining strong interpersonal relationships are not well suited to telecommuting. As Mark Leibovich reports in “Working From Home in Washington? Not So Great”: “So much of what we do is just looking someone in the eye. … When you can see a facial expression or body language, you get a much better sense if you’re making your case. It can be much more challenging to convey urgency remotely.”


Resources:

“Benefits of Telecommuting for The Future Of Work,” Andrea Loubier, Forbes, July 20, 2017

“What is Telecommuting?” The Balance Careers

Flexjobs

“The Growing Army of Americans Who Work From Home,” Karsten Strauss, Forbes, June 22, 2017

“Working From Home in Washington? Not So Great,” Mark Leibovich, New York Times, March 18, 2020

March 17, 2020

What Is the Purpose of Education?

In our previous article, “Schooling vs. Education – What is Education For?” we discussed the difference between schooling and education, examined the emergence of public education in the U.S., and briefly reviewed an article arguing that the lingering 19th- and early 20th-century “factory model” of education is out of date and needs to be replaced. Here, we’d like to briefly explore the underlying question: What is education’s purpose?

In classical Greece, Plato believed that a fundamental task of education is to help students to value reason and to become reasonable people (i.e., people guided by reason). He envisioned a segregated education in which different groups of students would receive different sorts of education, depending on their abilities, interests, and social stations. Plato’s student Aristotle thought that the highest aim of education is to foster good judgment or wisdom, and he was more optimistic than Plato about the typical student’s ability to achieve them. Centuries later, writing in the period leading up to the French Revolution, Jean-Jacques Rousseau (1712–78) held that formal education, like society itself, is inevitably corrupting, and argued that a genuine education should enable the “natural” and “free” development of children – a view that eventually led to the modern movement known as “open education.” Rousseau’s views of education, although based in an idea of the romanticized innocence of youth, informed John Dewey’s later progressive movement in education during the early 20th century. Dewey believed that education should be based largely on experience (later formulated as “experiential education”) and that it should lead to students’ “growth” (a somewhat ill-defined and indeterminate concept). Dewey further believed in the central importance of education for the health of democratic social and political institutions. Over the centuries, philosophers have held a variety of views about the purposes of education. Harvey Siegel catalogues the following list:

  • the cultivation of curiosity and the disposition to inquire;
  • the fostering of creativity;
  • the production of knowledge and of knowledgeable students;
  • the enhancement of understanding;
  • the promotion of moral thinking, feeling, and action;
  • the enlargement of the imagination;
  • the fostering of growth, development, and self-realization;
  • the fulfillment of potential;
  • the cultivation of “liberally educated” persons;
  • the overcoming of provincialism and close-mindedness;
  • the development of sound judgment;
  • the cultivation of docility and obedience to authority;
  • the fostering of autonomy;
  • the maximization of freedom, happiness, or self-esteem;
  • the development of care, concern, and related attitudes and dispositions;
  • the fostering of feelings of community, social solidarity, citizenship, and civic-mindedness;
  • the production of good citizens;
  • the “civilizing” of students;
  • the protection of students from the deleterious effects of civilization;
  • the development of piety, religious faith, and spiritual fulfillment;
  • the fostering of ideological purity;
  • the cultivation of political awareness and action;
  • the integration or balancing of the needs and interests of the individual student and the larger society; and
  • the fostering of skills and dispositions constitutive of rationality or critical thinking.

Needless to say, the extent and diversity of this list suggest that the purposes of education are manifold, and that in different historical periods, and under various historical circumstances, people have looked to education to accomplish a wide variety of ends – from the cultivation of reason, to self-development, to vocational and career preparation. The sometimes incompatible goals of education may inform some of the challenges – both philosophical and practical – that U.S. schools have experienced during the last few centuries, and that persist today. (See “Confusion Over Purpose of U.S. Education System,” Lauren Camera, U.S. News and World Report, August 29, 2016.)

 

Resources:

Harvey Siegel, “Philosophy of education,” Encyclopedia Britannica

“What is Education for?” Video. School of Life

“Education in Society” Video. Crash Course

“What Is the Purpose of Education?” Alan Singer, Huffpost, February 8, 2016

“Purpose of School” Steven Stemler, Wesleyan University

A List of Quotes about the Purposes of Education

“Confusion Over Purpose of U.S. Education System,” Lauren Camera, U.S. News and World Report, August 29, 2016

“What Is Education For?” Danielle Allen, Boston Review, May 9, 2016

March 3, 2020

Schooling vs. Education – What is Education For?

When Mark Twain said, “I never let my schooling get in the way of my education,” he was distinguishing between the effects of the conventional, institutional practices associated with schools and the individual human endeavor—a life-long exercise—to become an educated person. But what does education consist of?

Schooling vs. Education

At its core, “education” is the accumulation of knowledge, skills, and, of course, moral and ethical values. (See “Education.”) In complex societies—those in which person-to-person, informal, intergenerational transmission of skills and knowledge is insufficient to ensure that successive generations acquire such assets—formal, structured schooling has been the form that education has taken. Throughout the history of Western society, much education has been conducted in private settings (e.g., by tutors, in private schools, and in monasteries). In ancient Greece, for example, private schools and academies were tasked with educating the young free-born Athenian. During the Middle Ages, most schools were founded upon religious principles with the primary purpose of training clergy. Following the Reformation in northern Europe, clerical education was largely superseded by forms of elementary schooling for larger portions of the population. The Reformation was associated with the broadening of literacy, primarily aimed at equipping people to read the Bible and experience its teachings directly. It wasn’t until the 19th century, however, that the idea of educating the mass of a nation’s population via universal, non-sectarian public education emerged in Europe and the U.S.

In 1821, Boston started the first public high school in the United States. By 1870, all of the US states had some form of publicly subsidized formal education, and by the close of the 19th century, public secondary schools began to outnumber private ones. Access to schooling has been a perennial challenge, with women slowly gaining access throughout the 19th and early 20th centuries, and African Americans largely excluded from schooling or relegated to sub-standard schools. By 1900, 34 states had compulsory schooling laws, and 30 of them required attendance until age 14 or higher. By 1910, 72 percent of American children attended school, and half the nation’s children attended one-room schools. It was not until 1918, a little over a century ago, that every state required students to complete even elementary school.

Throughout its history, schooling has served many purposes, and from the beginning of American public schooling, its purposes and goals have been fiercely contested. Some viewed schools as civic and moral preparatory institutions, while others saw schools as essential to forging and consolidating a distinct US national identity. Many saw schools as socializing institutions, designed less for the intellectual development of individuals and more for equipping an increasingly industrial workforce with the habits of, and tolerance for, factory work.

Factory Schools

In a recent article, “The Modern Education System Was Made to Foster ‘Punctual, Docile, and Sober’ Factory Workers: Perhaps it’s time for a change,” Allison Schrager argues that 19th-century American education was designed to produce disciplined, dependable, and compliant workers for an expanding industrial economy. Industrialists (often in alliance with other social sectors, like the Protestant clergy and, later, social reformers) believed that young people (many of them traditionally involved in agriculture, many the offspring of immigrants to the US) needed to be readied for factory life—which demanded punctuality, regular attendance, narrow task orientation, self-control, and respect for authority. These characteristics, although instrumental for an industrializing economy, were hardly geared toward the development of autonomous, self-directed individuals and active democratic citizens. The “factory model” of schooling was functional but, by many accounts, stunting. This factory model, Schrager argues, remains widely prevalent in contemporary US schools, and it is anachronistic and increasingly dysfunctional. (For a critique of the factory-model account of schooling, see Valerie Strauss, “American schools are modeled after factories and treat students like widgets. Right? Wrong.” Washington Post, Oct. 10, 2015.)

As mentioned, education is always a “contested terrain.” Various social, economic, political, and religious forces are interested in ensuring that schools teach what representatives of these forces value. Consequently, the content, and in some cases the form, that schooling takes is the product of the struggle among these forces. (See, for example, our earlier article “The Implications of Public School Privatization,” Part 1 and Part 2.)

Underlying all forms of schooling, public and private, are the implicit questions, “What is education?” and “What is education’s purpose?” In a forthcoming article, we’ll explore these central questions.

Resources:

Education

History of Education

Classical education

History of Education in the U.S.

Valerie Strauss, “American schools are modeled after factories and treat students like widgets. Right? Wrong.” Washington Post, Oct. 10, 2015

Allison Schrager, “The Modern Education System Was Made to Foster ‘Punctual, Docile, and Sober’ Factory Workers: Perhaps it’s time for a change”

February 18, 2020

Digital Technology vs. Students’ Education

Over the last two decades, American education has sought to introduce and improve student access to digital technology. From the first introduction of personal computers in classrooms to the more recent efflorescence of iPads and on-line educational content, educators have expressed enthusiasm for digital technology. As Natalie Wexler writes in MIT Technology Review (December 19, 2019), “Gallup…found near-universal enthusiasm for technology on the part of educators. Among administrators and principals, 96% fully or somewhat support ‘the increased use of digital learning tools in their school,’ with almost as much support (85%) coming from teachers.” Despite this enthusiasm, there isn’t much evidence for the effectiveness of digitally based educational tools. Wexler cites a study of millions of high school students in the 36 member countries of the Organization for Economic Co-operation and Development (OECD), which found that those who used computers heavily at school “do a lot worse in most learning outcomes, even after accounting for social background and student demographics.”

Although popular, and thought useful by educators, digital tools in classrooms not only appear to make little difference in educational outcomes, but in some cases may actually harm student learning. “According to other studies, college students in the US who used laptops or digital devices in their classes did worse on exams. Eighth graders who took Algebra I online did much worse than those who took the course in person. And fourth graders who used tablets in all or almost all their classes had, on average, reading scores 14 points lower than those who never used them—a differential equivalent to an entire grade level. In some states, the gap was significantly larger.”

While it has been widely believed that digital technologies can “level the playing field” for economically disadvantaged students, the OECD study found that “technology is of little help in bridging the skills divide between advantaged and disadvantaged students.”

Why do digital technologies fail students? As Wexler ably details:

  • When students read text from a screen, it’s been shown, they absorb less information than when they read it on paper
  • Digital vs. human instruction eliminates the personal, face-to-face relationships that customarily support students’ motivation to learn
  • Technology can drain a classroom of the communal aspect of learning, over individualize instruction, and thus diminish the important role of social interaction in learning
  • Technology is primarily used as a delivery system, but if the material it’s delivering is flawed or inadequate, or presented in an illogical order, it won’t provide much benefit
  • Learning, especially reading comprehension, isn’t just a matter of skill acquisition, of showing up and absorbing facts, but is largely dependent upon students’ background knowledge and familiarity with context.

In his article “Technology in the Classroom in 2019: 6 Pros & Cons,” Vawn Himmelsbach makes many of the same arguments and adds a few liabilities to Wexler’s list:
  • Technology in the classroom can be a distraction
  • Technology can disconnect students from social interactions
  • Technology can foster cheating in class and on assignments
  • Students don’t have equal access to technological resources
  • The quality of research and sources they find may not be top-notch
  • Lesson planning might become more labor-intensive with technology

Access to and availability of digital technology vary, of course, among schools and school districts. As the authors of Concordia University’s blog, Rm. 241, point out, “Technology spending varies greatly across the nation. Some schools have the means to address the digital divide so that all of their students have access to technology and can improve their technological skills. Meanwhile, other schools still struggle with their computer-to-student ratio and/or lack the means to provide economically disadvantaged students with loaner iPads and other devices so that they can have access to the same tools and resources that their classmates have at school and at home.”

While students certainly need technological skills to navigate the modern world and equality of access to such technology remains a challenge, digital technology alone cannot hope to solve the problems of either education or “the digital divide.” The more we rely on the use of digital tools in the classroom, the less we may be helping some students, especially disadvantaged students, to learn.


September 24, 2019

It’s Not Just Your Credit Card Score – The Erosion of Privacy

What is Privacy Good For?

The right to privacy is a much-cherished value in America. As we noted in an earlier article, “Transparent as a Jellyfish? Why Privacy is Important,” privacy is crucial to the development of a person’s autonomy and subjectivity. When privacy is reduced by surveillance or restrictive interference—whether by governments or corporations—such interference may not just affect our social and political freedoms, but undermine the preconditions for the fundamental development and sustenance of the self.

Daniel Solove, Professor of Law at George Washington University Law School, lists ten important reasons for privacy, including: limiting the power of government and corporations over individuals; the need to establish important social boundaries; creating trust; and serving as a precondition for freedom of speech and thought. Solove also notes, “Privacy enables people to manage their reputations. How we are judged by others affects our opportunities, friendships, and overall well-being.” (See “Ten Reasons Why Privacy Matters,” Daniel Solove.) Julie Cohen, in “What Privacy Is For” (Harvard Law Review, Vol. 126, 2013), writes: “Privacy shelters dynamic, emergent subjectivity from the efforts of commercial and government actors to render individuals and communities fixed, transparent, and predictable. It protects the situated practices of boundary management through which self-definition and the capacity for self-reflection develop.”

Strains on Privacy

Privacy, of course, is under continual strain. In his recent article, “Uh-oh: Silicon Valley is building a Chinese-style social credit system,” (Fast Company, August 8, 2019) Mike Elgan notes that China is not alone in seeking to create a “social credit” system—a system that monitors and rewards/punishes citizen behavior.

China’s state-run system would seem to be extreme: it rewards and punishes citizens for such things as failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, and jaywalking. It also publishes lists of citizens’ social credit ratings and uses public shaming to enforce desired behavior. Elgan notes that Silicon Valley has similar designs on monitoring and motivating what it deems “desirable and undesirable” behavior. The outlines of an ever-evolving corporate-sponsored, technology-based “social credit” system now include:

  • Life insurance companies can base premiums on what they find in your social media posts.
  • Airbnb, which now has more than 6 million listings in its system, can ban customers and limit their travel and accommodation choices. Airbnb can disable your account for life for any reason it chooses, and it reserves the right not to tell you the reason.
  • PatronScan, an ID-reading service, helps restaurants and bars spot fake IDs—and troublemakers. The company maintains a list of objectionable customers designed to protect venues from people previously removed for “fighting, sexual assault, drugs, theft, and other bad behavior.” A “public” list is shared among all PatronScan customers.
  • Under a new policy Uber announced in May, if your average rating is “significantly below average,” Uber will ban you from the service.
  • WhatsApp is, in much of the world today, the main form of electronic communication. You can be blocked if too many other users block you. Not being allowed to use WhatsApp in some countries is as punishing as not being allowed to use the telephone system in America.

The Consequences

While no one wants to endorse “bad behavior,” ceding to corporations and technology giants the power to determine which behavior counts as undesirable and punishable may not be the most just or democratic way to enforce societal norms and expectations. As Elgan observes, “The most disturbing attribute of a social credit system is not that it’s invasive, but that it’s extra-legal. Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights.” Even more ominously, as Julie Cohen writes, “Conditions of diminished privacy shrink the capacity [for self-government], because they impair both the capacity and the scope for the practice of citizenship. But a liberal democratic society cannot sustain itself without citizens who possess the capacity for democratic self-government. A society that permits the unchecked ascendancy of surveillance infrastructures cannot hope to remain a liberal democracy.”

 

Resources:

“America Isn’t Far Off from China’s ‘Social Credit Score,’” Anthony Davenport, Observer, February 19, 2018

“How the West Got China’s Social Credit System Wrong,” Louise Matsakis, Wired, July 29, 2019

“Ten Reasons Why Privacy Matters,” Daniel Solove

“What Privacy Is For,” Julie Cohen, Harvard Law Review, Vol. 126, 2013

“The Spy in Your Wallet: Credit Cards Have a Privacy Problem,” Geoffrey A. Fowler, The Washington Post, August 26, 2019

 

September 3, 2019

What is “Normal”?

Why do so many of us aspire to be “normal”? Who decides what’s normal and abnormal? What happens to our self- and social worth when we discover that we aren’t “normal”? In a recent article, “How Did We Come Up with What Counts as Normal?” Jonathan Mooney discusses the rise of an idea that has acquired substantial power in modern society. Mooney notes that “normal” entered the English language only in the mid-19th century and has its roots in the Latin norma, which refers to the carpenter’s T-square; it originally meant simply “perpendicular.” Right angles, however, were considered mathematically “good,” and “normal” soon came to be associated not just with the description of the orthogonal angle but also with the normative notion of something desirable or socially expected. Mooney argues that it is this ambiguity, as both a descriptive word and a normative ideal, that makes “normal” so appealing and powerful.

“Normal” was first used in the academic disciplines of comparative anatomy and physiology. For academics in these and other fields, “normal” soon evolved to describe bodies and organs that were “perfect” or “ideal,” and it was also used to name certain states as “natural.” Eventually, thanks largely to the field of statistics, ideas about the normal conflated the average with the ideal or perfect. In the 19th century, for example, Adolphe Quetelet, a deep believer in the power of statistics, advanced the idea of the “average man” and argued that “the normal” (i.e., the average) was perfect and beautiful. Quetelet characterized that which was not “normal” not simply as “abnormal,” or non-average, but as something potentially monstrous. “In 1870, in a series of essays on ‘deformities’ in children, he juxtaposed children with disabilities to the normal proportions of other human bodies, which he calculated using averages.” Thus, averages soon became the aspirational ideal.

Mooney also describes how the statistician Francis Galton, Charles Darwin’s cousin, “…was both the first person to develop a properly statistical theory of the normal . . . and also the first person to suggest that it be applied as a practice of social and biological normalization.” “By the early twentieth century,” Mooney writes, “the concept of a normal man took hold. Soon, the emerging field of public health embraced the idea of the normal; schools, with rows of desks and a one-size-fits-all approach to learning, were designed for the mythical middle; and the industrial economy sought standardization, which was brought about by the application of averages, standards, and norms to industrial production. Moreover, eugenics, an offshoot of genetics created by Galton, was committed to ridding the world of human ‘defectives.’”

The ensuing predominance (some might say “domination”) of “the normal” became firmly established by the mid-20th century. Mooney points out, however, that the normal was not so much “discovered” as it was invented, largely by statistics and statisticians, and promulgated by the social sciences and moralists. Alain Desrosières, a renowned historian of statistics, wrote, “With the power deployed by statistical thought, the diversity inherent in living creatures was reduced to an inessential spread of ‘errors’ and the average was held up as the normal—as a literal, moral, and intellectual ideal.”

Resources:

“How Did We Come Up with What Counts as Normal,” Jonathan Mooney, Literary Hub August 16, 2019

Normal Sucks: How to Live, Learn, and Thrive Outside the Lines, Jonathan Mooney, Henry Holt and Co., 2019

“Ranking, Rating, and Measuring Everything”

For information on social norms (formal and informal norms, mores, folkways, etc.), see https://courses.lumenlearning.com/alamo-sociology/chapter/social-norms/ and “What is a Norm?”

February 5, 2019

Pretending to Love Work

In a previous blog post, “Why You Hate Work,” we discussed a New York Times article that investigated the way the contemporary workplace too often produces a sense of depletion and employee “burnout.” In that article, the authors, Tony Schwartz and Christine Porath, argued that only when companies attempt to address the physical, mental, emotional, and spiritual dimensions of their employees by creating “truly human-centered organizations” can they create the conditions for more engaged and fulfilled workers, and in so doing become more productive and profitable organizations.

In our post, we suggested that employee burnout is not unknown in the non-profit world, and that, while program evaluation cannot itself prevent burnout, it can add to non-profit organizations’ capacity to create workplaces in which staff and program participants have a greater sense of efficacy and purposefulness. (See also our blog post “Program Evaluation and Organization Development.”)

Of course, the problem of employee burnout and alienation is a perennial one, occurring in both the for-profit and non-profit sectors. In a more recent article, “Why Are Young People Pretending to Love Work?” (New York Times, January 26, 2019), Erin Griffith says that in recent years there has emerged a “hustle culture,” especially among millennials. This culture, Griffith argues, “…is obsessed with striving, (is) relentlessly positive, devoid of humor, and — once you notice it — impossible to escape.” She cites the artifacts of such a culture, which include, at one WeWork location in New York, neon signs that exhort workers to “Hustle harder” and murals that spread the gospel of T.G.I.M. (Thank God It’s Monday). Somewhat horrified by the Stakhanovite tenor of the WeWork environment, Griffith notes, “Even the cucumbers in WeWork’s water coolers have an agenda. ‘Don’t stop when you’re tired,’… ‘Stop when you are done.’” “In the new work culture,” Griffith observes, “enduring or even merely liking one’s job is not enough. Workers should love what they do, and then promote that love on social media, thus fusing their identities to that of their employers.”

Griffith is not primarily concerned with employee burnout. Instead, she is horrified by the degree to which many younger employees have internalized the obsessively productivist, “workaholic” norms of their employers and, more broadly, of contemporary corporations. These norms include the apotheosis of excessive work hours and the belief that devotion to anything other than work is somehow a shameful betrayal of the work ethic. She quotes David Heinemeier Hansson, the founder of the online platform Basecamp, who observes, “The vast majority of people beating the drums of hustle-mania are not the people doing the actual work. They’re the managers, financiers and owners.”

Griffith writes, “…as tech culture infiltrates every corner of the business world, its hymns to the virtues of relentless work remind me of nothing so much as Soviet-era propaganda, which promoted impossible-seeming feats of worker productivity to motivate the labor force. One obvious difference, of course, is that those Stakhanovite posters had an anti-capitalist bent, criticizing the fat cats profiting from free enterprise. Today’s messages glorify personal profit, even if bosses and investors — not workers — are the ones capturing most of the gains. Wage growth has been essentially stagnant for years.”

Resources:

“Why Are Young People Pretending to Love Work?” Erin Griffith, New York Times, January 26, 2019

“Why You Hate Work”

“The Fleecing of Millennials” David Leonhardt, New York Times, January 27, 2019

December 4, 2018

Strengthening Program AND Organizational Effectiveness

Program evaluation is seldom simply about making a narrow judgment about the outcomes of a program (i.e., whether the desired changes were, in fact, ultimately produced). Evaluation is also about providing program implementers and stakeholders with information that will help them strengthen their organization’s efforts, so that desired programmatic goals are more likely to be achieved.

Brad Rose Consulting is strongly committed to translating evaluation data into meaningful and actionable knowledge, so that programs, and the organizations that host programs, can strengthen their efforts and optimize results. Because we are committed not just to measuring program outcomes, but to strengthening the organizations that host and manage programs, we work at the intersection of program evaluation and organization development (OD).

Often, challenges facing discrete programs reflect challenges facing the organizations that host them. (For the difference between “organizations” and “programs,” see our previous post “What’s the Difference? 10 Things You Should Know About Organizations vs. Programs.”) Program evaluations thus present opportunities for host organizations to:

  • engage in the clarification of their goals and purposes
  • enhance understanding of the often implied relationships between a program’s causes and effects
  • articulate for internal stakeholders a collective understanding of the objectives of their programming
  • reflect on alternative concrete strategies to achieve desired outcomes
  • strengthen internal and external communications
  • improve relationships between individuals within programs and organizations

Although Brad Rose Consulting evaluation projects begin with a focus on discrete programs and initiatives, the answers to the questions that drive our evaluation of programs often provide vital insights into ways to strengthen the effectiveness of the organizations that host, design, and implement those programs. (See “Logic Modeling: Contributing to Strategic Planning.”)

Typically, Brad Rose Consulting works with clients to gather data that will help to improve, strengthen, and “nourish” both programs and organizations. For example, our formative evaluations, which are conducted during a project’s implementation, aim to improve a program’s design and performance. (See “Understanding Different Types of Program Evaluation.”) Our evaluation activities provide program managers and implementers with regular, data-based briefings and periodic written reports, so that programs can make timely adjustments to their operations. Formative feedback, including data-based recommendations for program refinement, can also help to strengthen the broader organization by identifying opportunities for organizational learning, clarifying the goals of the organization as these are embodied in specific programming, specifying how programs and organizations work to produce results (i.e., articulating cause and effect), and strengthening systems and processes.

Resources:

“What’s the Difference? 10 Things You Should Know About Organizations vs. Programs,”

“Logic Modeling: Contributing to Strategic Planning”

“Understanding Different Types of Program Evaluation”

November 6, 2018

Just the Facts: Data Collection

Program evaluations entail research. Research is a systematic “way of finding things out.” Evaluation research depends on the collection and analysis of data (i.e., evidence, facts) that indicate the outcomes (i.e., effects, results) of a program’s operation. Typically, evaluations seek evidence of whether valued outcomes have been achieved. (Other kinds of evaluations, like formative evaluations, seek to discover, through the collection and analysis of data, ways that a program may be strengthened.)

Data can be either qualitative (descriptive, consisting of words and observations) or quantitative (numerical). What counts as data depends upon the design and character of the evaluation research. Quantitative evaluations rely primarily on the collection of countable information like measurements and statistical data. Qualitative evaluations depend upon language-based and other descriptive data. Usually, program evaluations combine the collection of quantitative and qualitative data.
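
As a purely illustrative sketch (the scores, codes, and counts below are invented), here is how the two kinds of data might sit side by side in a mixed-methods evaluation: numeric ratings lend themselves to statistical summary, while coded interview excerpts are tallied by theme.

```python
from statistics import mean
from collections import Counter

# Quantitative data: numeric and countable, suited to statistical summary.
confidence_scores = [3, 4, 4, 5, 2, 4]  # e.g., 1-5 survey ratings
print(f"Mean confidence score: {mean(confidence_scores):.2f}")

# Qualitative data: language-based; here, interview excerpts that an
# analyst has tagged with thematic codes.
interview_codes = [
    "peer support", "scheduling barriers", "peer support",
    "staff responsiveness", "scheduling barriers", "peer support",
]
print("Theme frequencies:", Counter(interview_codes).most_common())
```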

There are a range of data sources for any evaluation. These can include: observations of programs’ operation; interviews with program participants, program staff, and program stakeholders; administrative records, files, and tracking information; questionnaires and surveys; focus groups; and visual information, such as video data and photographs.

The selection of the kinds of data to collect, and the ways of collecting them, will be contingent on the evaluation design, the availability and accessibility of data, economic considerations about the cost of data collection, and both the limitations and potentials of each data source. The kinds of evaluation questions and the design of the evaluation research will, together, help to determine the optimal kinds of data to collect. (See our articles “Questions Before Methods” and “Using Qualitative Interviews in Program Evaluations.”)

Resources:

What is ‘Data’?

What’s the Difference? Evaluation vs. Research

Evaluation, Carol Weiss, Prentice Hall, 2nd edition (1997)

September 28, 2018

Stakeholders vs. Customers

Rather than “customers,” nonprofits, educational institutions, and philanthropies typically have “stakeholders.” Stakeholders are individuals and organizations that have an interest in, and may be affected by, the activities, actions, and policies of non-profits, schools, and philanthropies. Stakeholders don’t just purchase products and services (i.e., commodities); they have an interest, or “stake,” in the outcomes of an organization’s or program’s operation.

A number of persons or entities may be stakeholders in a nonprofit organization, including funders/sponsors, program participants, staff, communities, and government agencies. It’s important to note that stakeholders can be either internal or external to the organization, and that stakeholders are able to exert influence—either positive or negative—over the outcomes of the organization or program.

While many nonprofits are sensitive to, and aware of, the interests of their multiple stakeholders, quite often both nonprofit leaders and nonprofit staff hold implicit, unexamined ideas about who their various stakeholders are. Often, stakeholders are not delineated, and consequently there isn’t a shared understanding of who is and isn’t a stakeholder. Conducting a stakeholder analysis can be a useful process because it raises staff and managers’ awareness of who is interested in, and who potentially influences, the success of an organization’s desired outcomes. A stakeholder analysis is a simple way to help nonprofits clarify who has a “stake” in the success of the organization and its discrete programs. It can sharpen strategic planning, clarify goals, and build consensus about an organization’s purpose.
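
One common technique for conducting such an analysis is a simple interest/influence grid. The sketch below is a hypothetical illustration (the stakeholder names and ratings are invented, and the two-by-two grid is a generic planning tool rather than a prescribed method).

```python
# Hypothetical stakeholder analysis: place each stakeholder on a
# two-by-two interest/influence grid to suggest an engagement strategy.
stakeholders = {
    "funders":              {"interest": "high", "influence": "high"},
    "program participants": {"interest": "high", "influence": "low"},
    "local government":     {"interest": "low",  "influence": "high"},
    "general public":       {"interest": "low",  "influence": "low"},
}

strategy = {
    ("high", "high"): "engage closely and manage actively",
    ("high", "low"):  "keep informed and consulted",
    ("low", "high"):  "keep satisfied",
    ("low", "low"):   "monitor",
}

for name, s in stakeholders.items():
    print(f"{name}: {strategy[(s['interest'], s['influence'])]}")
```

Even a rough grid like this can surface disagreements among staff about who the organization’s stakeholders actually are, which is itself a useful outcome of the exercise.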

Resources:

“Identifying Evaluation Stakeholders”

“The Importance of Understanding Stakeholders”

Business Dictionary

“Organization Development: What Is It & How Can Evaluation Help?”

September 5, 2018

A Lifetime of Learning

Pablo Picasso once said, “It takes a long time to become young.” The same may be said about education and the process of becoming educated. While we often associate formal education with youth and early adulthood, the fact is that education is an increasingly recognized lifelong endeavor that occurs far beyond the confines of early adulthood and traditional educational institutions.

In a recent article in The Atlantic, “Lifetime Learner,” John Hagel III, John Seely Brown, Roy Mathew, Maggie Wooll, and Wendy Tsu discuss the emergence of a rich and ever-expanding “ecosystem” of organizations and institutions that has arisen to serve the unmet educational needs and expectations of learners who are not enrolled in formal, traditional educational institutions (e.g., community colleges, colleges, and universities). “This ecosystem of semi-structured, unorthodox learning providers is emerging at ‘the edges’ of the current postsecondary world, with innovations that challenge the structure and even the existence of traditional education institutions.”

Hagel and his co-authors argue that economic forces, together with emerging technologies, are enabling learners to do an “end run” around traditional educational providers and to gain access to knowledge and information in new venues. The growing availability of, and access to, MOOCs (Massive Open Online Courses), YouTube, Open Educational Resources, and other online learning platforms enables more and more learners to advance their learning and career goals outside the purview of traditional post-secondary institutions.

While the availability of alternative, lifelong educational resources is helping some non-traditional students to advance their educational goals, it is also having an effect on traditional post-secondary institutions. The authors argue that “The educational institutions that succeed and remain relevant in the future…will likely be those that foster a learning environment that reflects the networked ecosystem and (that will become) meaningful and relevant to the lifelong learner. This means providing learning opportunities that match the learner’s current development and stage of life.” They cite as examples community colleges that are now experimenting with “stackable” credentials that “provide short-term skills and employment value, while enabling students to return over time and assemble a coherent curriculum that helps them progress toward career and personal goals,” and “some universities (that) have started to look at the examples coming from both the edges of education and areas such as gaming and media to imagine and conduct experiments in what a future learning environment could look like.”

The authors say that in the future colleges and universities will benefit from considering such things as:

  1. Providing the facilities and locations for a variety of learning experiences, many of which will depend on external sources for content
  2. Aggregating knowledge resources and connecting these resources with appropriate learners, rather than acting as sole “vendors” of knowledge
  3. Acting as lifelong “agents” for learners by helping them navigate a world of exponential change and an abundance of information

While these goals are ambitious, they highlight the remarkably changing terrain of continuing education. Educational “consumers” are increasingly likely to seek inexpensive and more accessible pathways to knowledge. As the authors point out, individuals’ lifelong learning needs are likely to continue to increase, and correspondingly, the pressures on traditional post-secondary education are likely to grow. Whether learners’ needs are more effectively addressed by re-orienting traditional post-secondary institutions or by the patchwork “ecosystem” of semi-structured, unorthodox learning providers who inhabit what the authors of “Lifetime Learner” term “the edges” of the postsecondary world is difficult to predict.

Resources:

Lifelong learning, Wikipedia

“Lifetime Learner” by John Hagel III, John Seely Brown, Roy Mathew, Maggie Wooll & Wendy Tsu, The Atlantic

“The Third Education Revolution: Schools are moving toward a model of continuous, lifelong learning in order to meet the needs of today’s economy” by Jeffrey Selingo, The Atlantic, Mar 22, 2018

August 14, 2018

Robots Grade Your Essays and Read Your Resumes


We’ve previously written about the rise of artificial intelligence and its current and anticipated effects on employment. (See links to previous blog posts, below.) Two recent articles treat the effects of AI on the assessment of students and the hiring of employees.

In her recent article for NPR, “More States Opting To ‘Robo-Grade’ Student Essays By Computer,” Tovia Smith discusses how so-called “robo-graders” (i.e., computer algorithms) are increasingly being used to grade students’ essays on state standardized tests. Smith reports that Utah and Ohio currently use computers to read and grade students’ essays and that Massachusetts will soon follow suit. Peter Foltz, a research professor at the University of Colorado, Boulder, observes, “We have artificial intelligence techniques which can judge anywhere from 50 to 100 features…We’ve done a number of studies to show that the (essay) scoring can be highly accurate.” Smith also notes that Utah, which once had humans review students’ essays after they had been graded by a machine, now relies on the machines almost exclusively. Cyndee Carter, assessment development coordinator for the Utah State Board of Education, reports, “…the state began very cautiously, at first making sure every machine-graded essay was also read by a real person. But…the computer scoring has proven ‘spot-on’ and Utah now lets machines be the sole judge of the vast majority of essays.”

Needless to say, despite support for “robo-graders,” there are critics of automated essay assessment. Smith details how one critic, Les Perelman at MIT, has created an essay-generating program, the BABEL generator, which produces nonsense essays designed to trick the algorithmic “robo-graders” used for the Graduate Record Exam (GRE). When Perelman submits a nonsense essay to the GRE computer, the algorithm gives the essay a near-perfect score. Perelman, shaking his head, observes, “It makes absolutely no sense. There is no meaning. It’s not real writing. It’s so scary that it works….Machines are very brilliant for certain things and very stupid on other things. This is a case where the machines are very, very stupid.”

Critics of “robo-graders” are also worried that students might learn how to game the system, that is, give the algorithms exactly what they are looking for, and thereby receive undeservedly high scores. Cyndee Carter, the assessment development coordinator for the Utah State Board of Education, describes instances of students gaming the state test: “…Students have figured out that they could do well writing one really good paragraph and just copying that four times to make a five-paragraph essay that scores well. Others have pulled one over on the computer by padding their essays with long quotes from the text they’re supposed to analyze, or from the question they’re supposed to answer.”
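To see how surface features can be gamed, consider a minimal sketch of a feature-based scorer; the two features and their weights below are invented for illustration and are not drawn from any state’s actual system:

```python
# A toy essay "scorer" built on surface features, a deliberately
# simplified stand-in for the 50-100 features real systems examine.
# The features and weights here are invented for illustration.

def score_essay(text: str) -> float:
    words = text.split()
    if not words:
        return 0.0
    word_count = len(words)
    avg_word_len = sum(len(w) for w in words) / word_count
    # Rewards length and word "sophistication" but never checks
    # whether the essay actually means anything.
    return min(100.0, 0.05 * word_count + 10.0 * avg_word_len)

paragraph = (
    "Standardized assessment algorithms reward lexical sophistication "
    "and elaboration irrespective of coherence or argument. "
)

print(score_essay(paragraph))      # one good paragraph
print(score_essay(paragraph * 5))  # the same paragraph copied five times
```

Because the toy scorer credits length and word “sophistication” but never checks meaning, a strong paragraph pasted five times scores at least as well as five distinct paragraphs.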

Despite these shortcomings, designers continue to refine their algorithms, and it’s anticipated that more states will soon use the improved systems to read and grade student essays.

Grading student essays is not the end of computer assessment. Once you’ve left school and started looking for a job, you may find that your resume is read not by an employer eager to hire a new employee, but by an algorithm whose job is to screen job applicants. In the brief article, “How Algorithms May Decide Your Career: Getting a job means getting past the computer,” The Economist reports that most large firms now use computer programs, or algorithms, to screen candidates seeking junior jobs. Applicant Tracking Systems (ATS) can reject up to 75% of candidates, so it becomes increasingly imperative for applicants to send resumes filled with the keywords that will pique screening computers’ interest.
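The Economist doesn’t describe any vendor’s algorithm in detail, but a minimal keyword screen might look like the following sketch; the keyword list, pass threshold, and resume text are all hypothetical:

```python
# A minimal keyword screen of the sort an Applicant Tracking System
# might run before any human reads an application. The keyword list,
# threshold, and resume text are hypothetical.
import re

REQUIRED_KEYWORDS = {"python", "sql", "stakeholder", "evaluation"}
PASS_THRESHOLD = 0.5  # fraction of keywords that must appear

def passes_screen(resume_text: str) -> bool:
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    matched = REQUIRED_KEYWORDS & words
    return len(matched) / len(REQUIRED_KEYWORDS) >= PASS_THRESHOLD

resume = "Led program evaluation projects; built SQL dashboards for stakeholders."
print(passes_screen(resume))  # True: "evaluation" and "sql" match exactly
```

Note that naive exact matching misses the plural “stakeholders,” a small example of why applicants feel pressed to tune their wording to the machine.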

Once your resume passes the initial screening, some companies use computer-driven video interviews to further screen and select candidates. “Many companies, including Vodafone and Intel, use a video-interview service called HireVue. Candidates are quizzed while an artificial-intelligence (AI) program analyses their facial expressions (maintaining eye contact with the camera is advisable) and language patterns (sounding confident is the trick). People who wave their arms about or slouch in their seat are likely to fail. Only if they pass that test will the applicants meet some humans.”

Although one might think that computer-driven screening systems would avoid some of the biases of traditional recruitment processes, it seems that AI isn’t bias-free, and that algorithms may favor applicants who have the time and monetary resources to continually retool their resumes so that these present the keywords that employers are looking for. (This is similar to gaming the system, described above.) “There may also be an ‘arms race’ as candidates learn how to adjust their CVs to pass the initial AI test, and algorithms adapt to screen out more candidates.”

Resources:

“More States Opting To ‘Robo-Grade’ Student Essays By Computer,” Tovia Smith, NPR, June 30, 2018

“How Algorithms May Decide Your Career: Getting a job means getting past the computer,” The Economist, June 21, 2018

Will You Become a Member of the Useless Class?

Humans Need Not Apply: What Happens When There’s No More Work?

Will President Trump’s Wall Keep Out the Robots?

“Welcoming our New Robotic Overlords,” Sheelah Kolhatkar, The New Yorker, October 23, 2017

“AI, Robotics, and the Future of Jobs,” Pew Research Center

“Artificial intelligence and employment,” Global Business Outlook

July 24, 2018
24 Jul 2018

Learning to Learn

In a recent article in the May 2, 2018 Harvard Business Review, “Learning Is a Learned Behavior. Here’s How to Get Better at It,” Ulrich Boser rejects the idea that our capacities for learning are innate and immutable. He argues, instead, that a growing body of research shows that learners are not born, but made. Boser says that we can all get better at learning how to learn, and that improving our knowledge-acquisition skills is a matter of practicing some basic strategies.

Learning how to learn is a matter of:

  1. setting clear and achievable targets about what we want to learn
  2. developing our metacognition skills (“metacognition” is a fancy way to say thinking about thinking) so that as we learn, we ask ourselves questions like, Could I explain this to a friend? Do I need to get more background knowledge? etc.
  3. reflecting on what we are learning by taking time to “step away” from our deliberate learning activities so that during periods of calm and even mind-wandering, new insights emerge

Boser says that research shows we’re more committed if we develop a learning plan with clear objectives, and that periodic reflection on the skills and concepts we’re trying to master (i.e., utilizing metacognition) makes each of us a better learner.

You can read more about strategies for learning in Boser’s article and his book.

February 19, 2018
19 Feb 2018

Will You Become a Member of the Useless Class?

The development and deployment of robotics and artificial intelligence continues to affect the world of work. As we’ve discussed in the previous blog posts “Humans Need Not Apply: What Happens When There’s No More Work?”, “Will President Trump’s Wall Keep Out the Robots?”, and “Dark Factories”, AI and robotics are transforming both blue-collar jobs and professional occupations. New technologies promise to change not just how we work and are employed, but also to alter the traditional meanings of work and employment that have been central to people’s self-conceptions and identities.

In “How Automation Will Change Work, Purpose, and Meaning” (Harvard Business Review, January 11, 2018), Robert C. Wolcott says that new technologies not only raise the question, “How are the spoils of technology to be distributed?” but also the equally baffling, “When technology can do nearly anything, what should I do, and why?” He cites Hannah Arendt’s writings in The Human Condition about the importance of moving from a self-conception that identifies work as purpose, to one that encompasses the idea of the Vita Activa, the active life, in which humans, when freed from much of the drudgery of labor, will need to aspire to integrate non-labor activity in the world with contemplation about the world. Wolcott asks, “When our machines release us from ever more tasks, to what will we turn our attentions? This will be the defining question of our coming century.”

In “The Meaning of Life in a World Without Work” (The Guardian, May 8, 2017), Yuval Noah Harari writes that as new technologies increasingly displace humans from work, the real problem will be keeping occupied the masses of people (i.e., members of “the useless class,” as Harari defines them) who are no longer involved in work. Harari says that one possible scenario might be the deployment of virtual reality computer games: “Economically redundant people might spend increasing amounts of time within 3D virtual reality worlds, which would provide them with far more excitement and emotional engagement than the ‘real world’ outside.” He likens such virtual reality to the world’s religions, which, Harari says, are filled with practices and beliefs that give meaning to adherents’ lives, but are not themselves necessary or ‘real’ in any objective way. Harari asserts that it doesn’t much matter whether one finds stimulation from the ‘real’ world or from computer-simulated reality, because ultimately, both rely on what’s happening inside our brains. Further, he observes, “Hence virtual realities are likely to be key to providing meaning to the useless class (created by) the post-work world. Maybe these virtual realities will be generated inside computers. Maybe they will be generated outside computers, in the shape of new religions and ideologies. Maybe it will be a combination of the two. The possibilities are endless, and nobody knows for sure what kind of deep play will engage us in 2050.”

Although Harari’s sketch of possible futures seems shockingly Huxleyan, it does attempt to imagine a future in which large swaths of the population will be unnecessary to the functioning of the productive economy. Anticipating criticism of the brave new world that he’s sketched, Harari, referring to the world’s religions, writes, “But what about truth? What about reality? Do we really want to live in a world in which billions of people are immersed in fantasies, pursuing make-believe goals and obeying imaginary laws? Well, like it or not, that’s the world we have been living in for thousands of years already.”

The challenges, and some might say the catastrophes, associated with the new technologies are not merely technological. They are political, and will be shaped by the kinds of political institutions and social policies that nations use to deal with them. In the December 27, 2017 New York Times article, “The Robots are Coming and Sweden is Fine,” Peter S. Goodman notes that Swedish workers appear less threatened by the introduction of robotics and AI because Sweden’s history of social democracy and the relatively strong influence of unions temper the effects of new technologies on Swedish workers. Goodman argues that, unlike much of the rest of the world, the fear that robots will steal jobs “… has little currency in Sweden or its Scandinavian neighbors, where unions are powerful, government support is abundant, and trust between employers and employees runs deep. Here, robots are just another way to make companies more efficient. As employers prosper, workers have consistently gained a proportionate slice of the spoils — a stark contrast to the United States and Britain, where wages have stagnated even while corporate profits have soared.”

How AI and robotics will affect the U.S. is still uncertain, although as we’ve discussed in “Humans Need Not Apply…”, some researchers believe that within two decades, half of U.S. jobs could be handled by machines (for example, check out the video “Why Amazon Go Is Being Called the Next Big Job Killer,” below). The character of work, and the consequent effects on the population, will be determined, in part, by the strength of the institutions that have mediated the relationship between employers and employees. In the U.S., sadly, those institutions and social agreements have largely been weakened or eliminated over the last 35-40 years. The introduction of robotics and AI in America is likely to follow a far different path than in Sweden.

January 31, 2018
31 Jan 2018

What’s the Difference? Evaluation vs. Research

What's the Difference? Evaluation vs. ResearchEvaluation is a research enterprise whose primary goal is to identify whether desired changes have been achieved. Evaluation is a type of applied social research that is conducted with a value, or set of values, in its “denominator.” Evaluation research is always conducted with an eye to whether the desired outcomes, or results, of a program, initiative, or policy were achieved, especially as these outcomes are compared to a standard or criterion. At the heart of program evaluation is the idea that outcomes, or changes, are valuable and desired. Some outcomes are more valuable than others. Evaluators conduct evaluation research to find out if these valued changes are, in fact, achieved by the program or initiative.

Evaluation research shares many of the same methods and approaches as the other social sciences, and indeed, the natural sciences. Evaluators draw upon a range of evaluation designs (e.g., experimental design, quasi-experimental design, non-experimental design) and a range of methodologies (e.g., case studies, observational studies, interviews, etc.) to learn what the effects of a given intervention have been. Did, for example, 8th-grade students who received an enriched STEM curriculum do better on tests than their otherwise similar peers who didn’t receive the enriched curriculum? Do homeless women who receive career-readiness workshops succeed at obtaining employment at greater rates than other similar homeless women who don’t participate in such workshops? (For more on these types of outcome evaluations, see our previous blog post, “What You Need to Know About Outcome Evaluations: The Basics.”) While not all evaluations are outcome evaluations, all evaluations gather systematic data with which judgments about the program or initiative can be made.

Another way to differentiate social research from evaluation research is to understand that social research seeks to find out “what is the case?” “What is out there?” “How does the world really work?” etc. For example, in political science, researchers may want to find out how citizens of California vote in national elections, or what their attitudes are towards certain candidates or policies. Sociologists may investigate the causes of racial segregation or the relationship(s) between race and class. These instances of social research are primarily interested in discovering what is the case, regardless of the value we might attribute to the findings of the research. Researchers in political science are neutral about the percentages of California voters who vote Republican, Democrat, Independent, Green, etc. They are most interested in knowing how people vote, not whether they vote for one particular party.

Although evaluation research is interested in a truthful, accurate description of what is the case, it is ALSO interested in discovering whether findings indicate that what is there (i.e., is present) is valuable, important, desired, etc. When evaluators look for outcomes they don’t just want to know if anything at all happened, or changed; they want to discover if something specific and valued happened. Evaluators don’t just set their sights on describing the world, but on determining whether certain valued and worthwhile things happened. While evaluators use many of the same methods and approaches as other researchers, evaluators must employ an explicit set of values against which to judge the findings of their empirical research. This means that evaluators must both be competent social scientists AND exercise value-based judgments and interpretations about the meaning of data.

January 3, 2018
03 Jan 2018

Evaluation in Complex and Evolving Environments

Program evaluations seldom occur in stable, scientifically controlled environments. Often programs are implemented in complex and rapidly evolving settings that make traditional evaluation research approaches—which depend upon the stability of the “treatment” and the designation of predetermined outcomes—difficult to utilize.

Michael Quinn Patton, one of the originators of Developmental Evaluation, says that “Developmental evaluation processes include asking evaluative questions and gathering information to provide feedback and support developmental decision-making and course corrections along the emergent path. The evaluator is part of a team whose members collaborate to conceptualize, design and test new approaches in a long-term, on-going process of continuous improvement, adaptation, and intentional change. The evaluator’s primary function in the team is to elucidate team discussions with evaluative questions, data and logic, and to facilitate data-based assessments and decision-making in the unfolding and developmental processes of innovation.”

In their paper, “A Practitioner’s Guide to Developmental Evaluation,” Dozois and her colleagues note that “Developmental Evaluation differs from traditional forms of evaluation in several key ways:”

  • The primary focus is on adaptive learning rather than accountability to an external authority.
  • The purpose is to provide real-time feedback and generate learnings to inform development.
  • The evaluator is embedded in the initiative as a member of the team.
  • The DE role extends well beyond data collection and analysis; the evaluator actively intervenes to shape the course of development, helping to inform decision-making and facilitate learning.
  • The evaluation is designed to capture system dynamics and surface innovative strategies and ideas.
  • The approach is flexible, with new measures and monitoring mechanisms evolving as understanding of the situation deepens and the initiative’s goals emerge.

Developmental Evaluation is especially useful for social innovators, who often find themselves inventing the program as it is implemented, and who often don’t have a stable and unchanging set of anticipated outcomes. Following Patton, Dozois, Langlois, and Blanchet-Cohen observe that Developmental Evaluation is especially well suited to situations that are:

  • Highly emergent and volatile (e.g., the environment is always changing)
  • Difficult to plan or predict because the variables are interdependent and non-linear
  • Socially complex— requiring collaboration among stakeholders from different organizations, systems, and/or sectors
  • Innovative, requiring real-time learning and development

Developmental Evaluation is also increasingly appropriate for use in the non-profit world, especially where the stability of a program’s key components, including its core treatment and its eventual, often evolving, outcomes, is not as certain or firm as program designers might wish.

Brad Rose Consulting is experienced in working with programs whose environments are volatile and whose iterative program designs are necessarily flexible. We are adept at collecting data that can inform the on-going evolution of a program, and have 20+ years of experience providing meaningful data that help program designers and implementers adjust to rapidly changing and highly variable environments.

Resources:

A Practitioner’s Guide to Developmental Evaluation, Elizabeth Dozois, Marc Langlois, and Natasha Blanchet-Cohen

Michael Quinn Patton on Developmental Evaluation

Developmental Evaluation

The Case for Developmental Evaluation

December 11, 2017
11 Dec 2017

What’s the Difference? 10 Things You Should Know About Organizations vs. Programs

Organizations vs. Programs

Organizations are social collectivities that have: members/employees, norms (rules for, and standards of, behavior), ranks of authority, communications systems, and relatively stable boundaries. Organizations exist to achieve purposes (objectives, goals, and missions) and usually exist in a surrounding environment (often composed of other organizations, individuals, and institutions.) Organizations are often able to achieve larger-scale and more long-lasting effects than individuals are able to achieve.  Organizations can take a variety of forms including corporations, non-profits, philanthropies, and military, religious, and educational organizations.

Programs are discrete, organized activities and actions (or sets of activities and actions) that utilize resources to produce desired, typically targeted, outcomes (i.e., changes and results). Programs typically exist within organizations. (It may be useful to think of programs as nested within one or, in some cases, more than one organization.) In seeking to achieve their goals, organizations often design and implement programs that use resources to achieve specific ends for program participants and recipients. Non-profit organizations, for example, implement programs that mobilize resources in the form of activities, services, and products that are intended to improve the lives of program participants/recipients. In serving program participants, nonprofits strive to effectively and efficiently deploy program resources, including knowledge, activities, services, and materials, to positively affect the lives of those they serve.

What is Program Evaluation?

Program evaluation is an applied research process that examines the effects and effectiveness of programs and initiatives. Michael Quinn Patton notes that “Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs in order to make judgments about the program, to improve program effectiveness, and/or to inform decisions about future programming. Program evaluation can be used to look at: the process of program implementation, the intended and unintended results produced by programs, and the long-term impacts of interventions. Program evaluation employs a variety of social science methodologies–from large-scale surveys and in-depth individual interviews, to focus groups and review of program records.” Although program evaluation is research-based, unlike purely academic research, it is designed to produce actionable and immediately useful information for program designers, managers, funders, stakeholders, and policymakers.

Organization Development, Strategic Planning, and Program Evaluation

Organization Development is a set of processes and practices designed to enhance the ability of organizations to meet their goals and achieve their overall mission. It entails “…a process of continuous diagnosis, action planning, implementation and evaluation, with the goal of transferring (or generating) knowledge and skills so that organizations can improve their capacity for solving problems and managing future change.” (See: Organizational Development Theory, below) Organization Development deals with a range of features, including organizational climate, organizational culture (i.e., assumptions, values, norms/expectations, patterns of behavior), and organizational strategy. It seeks to strengthen and enhance the long-term “health” and performance of an organization, often by focusing on aligning organizations with their rapidly changing and complex environments through organizational learning, knowledge management, and the specification of organizational norms and values.

Strategic Planning is a tool that supports organization development. Strategic planning is a systematic process of envisioning a desired future for an entire organization (not just a specific program), and translating this vision into a broadly defined set of goals and objectives, and a sequence of action steps to achieve them. Strategic planning is an organization’s process of defining its strategy, or direction, and making decisions about allocating its resources to pursue this strategy.

Strategic plans typically identify where an organization is and where it wants to be in the future. They include statements about how to “close the gap” between the organization’s current state and its desired future state. Additionally, strategic planning requires making decisions about allocating resources to pursue the organization’s strategy. Strategic planning generally involves not just setting goals and determining actions to achieve the goals, but also mobilizing resources.

Program evaluation is uniquely able to contribute to organization development–the deliberately planned, organization-wide effort to increase an organization’s effectiveness and/or efficiency. Although evaluations are customarily aimed at gathering and analyzing data about discrete programs, the most useful evaluations collect, synthesize, and report information that can be used to improve the broader operation and health of the organization that hosts the program. Additionally, program evaluation can aid the strategic planning process, by using data about an organization’s programs to indicate whether the organization is successfully realizing its goals and mission through its current programming.

Brad Rose Consulting works at the intersection of evaluation and organization development. While our projects begin with a focus on discrete programs and initiatives, the answers to the questions that drive our evaluation research provide vital insights into the effectiveness of the organizations that host, design, and fund those programs. Findings from our evaluations often have important implications for the development and sustainability of the entire host organization.

Resources:

Organizations: Structures, Processes, and Outcomes, Richard H. Hall and Pamela S Tolbert, Pearson Prentice Hall, 9th edition.

Utilization-Focused Evaluation, Michael Quinn Patton, Sage, 3rd edition, 1997

Organization Development: What Is It & How Can Evaluation Help?

Organization Development

Organizational Development Theory

Strategic Planning, Bain and Co. 2017

Strategic Planning

What a Strategic Plan Is and Isn’t

Ten Keys to Successful Strategic Planning for Nonprofit and Foundation Leaders

Elements of a Strategic Plan

Types of Strategic Planning

Understanding Strategic Planning

Five Steps to a Strategic Plan

The Big Lie of Strategic Planning, Roger L. Martin, Harvard Business Review, January-February 2014

November 8, 2017
08 Nov 2017

Dark Factories

We’ve previously written about the rise of Artificial Intelligence (AI) and robotics, and the impact of these new technologies on employment and the future of work.
(See our previous blog posts: “Humans Need Not Apply: What Happens When There’s No More Work?” and “Will President Trump’s Wall Keep Out the Robots?”) Today we’d like to refer readers to an important article, “Dark Factory,” which recently appeared in The New Yorker. “Dark Factory” explores the growing impact of robotics and AI on the manufacturing and service sectors of the U.S. economy.

Dark Factory

In “Dark Factory,” Sheelah Kolhatkar discusses her visit to Steelcase, the manufacturer of office furniture. Steelcase, like much of American manufacturing, has had its economic ups and downs over the years. Kolhatkar describes how, in recent years, the company has increasingly employed robotics as a means to improve manufacturing efficiency, and as a result, now relies on fewer workers than it has in the past. Representative of an ever-growing number of manufacturing companies in the U.S., Steelcase employs fewer and fewer high school graduates, and now seeks college-educated employees with technological skills who can supervise an expanding army of manufacturing robots.

As Kolhatkar shows, while efficiency gains are good for Steelcase and other manufacturing companies that employ robots and AI— and even, in some cases, make work more tolerable and less grueling for the remaining employees on the shop floor— the net effect of these technologies is to displace large swaths of the work force and to shift wealth to the owners of companies. Kolhatkar cites research showing that the use of industrial robots, like those at Steelcase, is directly related to declines in both the number of manufacturing jobs and the pay of remaining workers.

Kolhatkar also discusses “dark factories”— factories and warehouses whose use of robots and AI is so extensive that they need not turn on the lights, because there are so few human workers. While such factories and warehouses are not yet widespread, major U.S. corporations are looking to use robotics and AI to run nearly employee-less operations. Although some companies may not be eager to begin utilizing robotic warehouses, competitive pressure is sure to compel U.S. companies to implement fully robotized facilities, or lose competitive battles with companies in other nations that do adopt these technologies.

AI and Robotics Not Limited to Manufacturing

The result of the growing use of robotics and AI in the U.S. is, of course, declining demand for workers in what were once fairly labor-intensive, human-dominated work environments. (Manufacturing now employs only about 10% of the U.S. workforce, and these jobs are under constant threat from new technologies.) Although displaced manufacturing workers often seek jobs in the service sector, this sector is hardly immune to automation. McDonald’s, for example, is introducing “digital ordering kiosks,” where customers electronically enter their orders and pay for their meals; the kiosks are expected to be in 5,500 restaurants by the end of 2018. Uber and Google continue to invest in the development of autonomous driving technologies, and the U.S. trucking industry is eager to adopt autonomous vehicles so that it can reduce the substantial labor costs associated with trucking. Amazon has purchased Kiva, a robotics company, and is developing robots that can zoom around Amazon warehouses and fulfill orders. (A Deutsche Bank report estimates that Amazon could save $22 million a year in each of its warehouses simply by introducing warehouse robots to replace human workers.)

As Kolhatkar’s “Dark Factory” shows, while the future looks increasingly promising for the shareholders of companies who introduce labor-displacing robotics and AI, it doesn’t appear quite so sunny for those humans who must work for a living—especially in the manufacturing and service sectors of the U.S. economy. Like the “dark factories” that promise to displace them, for many workers, the future too, will be dark.

Resources:

“Welcoming our New Robotic Overlords,” Sheelah Kolhatkar, The New Yorker, October 23, 2017

“AI, Robotics, and the Future of Jobs”, Pew Research Center

“Artificial intelligence and employment”, Global Business Outlook

“Advances in Artificial Intelligence Could Lead to Mass Unemployment, Experts Warn,” James Vincent, The Independent, January 29, 2014

October 16, 2017
16 Oct 2017

What Counts as an ‘Outcome’—and Who Determines?

What is an “Outcome”?

“Outcomes,” i.e., specific changes or results, are what programs seek to produce, and what funders seek to fund. Programs and initiatives, especially in the nonprofit sector, exist to produce valuable and desired changes. While the measurement of outcomes is essential to evaluation, a key question for both programs and evaluators is, “What counts as a desired outcome?”

Typically, education and nonprofit programs strive to see improvements in things like students’ reading scores, homeless persons’ employability, young people’s job-readiness, the availability of affordable housing for home-seekers, etc. While the desirability of such outcomes appears “natural” or taken for granted, outcomes are, in fact, “agreed upon” events, states, or entities.

Who determines the outcomes that are to be evaluated?

For those who administer and run programs, it is often the case that such programs are highly dependent upon what funders value and desire to see changed. Although desired changes are usually viewed as self-evident, outcomes are socially and politically defined. They depend upon a negotiated understanding of what constitutes a valuable change, and how such change should be measured or indicated. Additionally, as many program leaders and educators will attest, the changes deemed desirable are often shaped by the ability of proponents and researchers to measure such changes. (I’m reminded here of the famous quote attributed to Einstein: “Not everything that counts can be counted, and not everything that can be counted counts.”) As Heather Douglas points out, “We must value something to find it significant enough to measure, to pluck it from the complexity of human social life, and to see it as a set of phenomena worthy of study.” (See Heather Douglas, “Facts, Values, and Objectivity”.)

Of course, funders alone don’t determine the outcomes that programs produce. Program stakeholders can have a significant influence on what constitutes a desired outcome (see our previous blogpost, “The Importance of Understanding Stakeholders”). In an educational program, for example, a wide range of stakeholders may influence the desired outcomes of programming. Parents, teachers, community members, state and federal policy makers, business interests, and politicians may all influence what counts as a desirable outcome of education. Therefore, what stakeholders value (i.e., view as significant) is often what is viewed as the desired outcome of a program.

It’s important for program leaders and staff, and for evaluators, to discuss and identify which changes programming seeks to effect. While evaluators deploy a range of methods to indicate or measure such changes, what counts as a desirable change, as a desirable “outcome,” is a question as critical to the success of an evaluation as it is to the success of a program.

Resources:

The Measure of Reality: Quantification and Western Society 1250-1600, Alfred W. Crosby. Cambridge University Press, 1997.

What is Quantification Research?

For a radical critique of the power of nonprofit funders to exercise influence on nonprofits’ goals and outcomes, see “How Liberal Nonprofits Are Failing Those They’re Supposed To Protect,” by William Anderson

Heather Douglas, “Facts, Values, and Objectivity”

Program Outcomes

Student outcomes

MIT Teaching and Learning Lab, on Student Outcomes

For a summary of the possibility of “value neutrality” in the social sciences

September 11, 2017
11 Sep 2017

The Importance of Understanding Stakeholders

Every program evaluation is conducted in a context in which there are parties (persons, organizations, etc.) who have an interest, or a “stake,” in the operation and success of the program. In the corporate world, a “stakeholder” is any member of the “groups without whose support the organization would cease to exist” (see Corporate Stakeholder). More recently, the idea of a “stakeholder” has been broadened to include “any group or individual who is affected by, or who can affect the achievement of, an organization’s objectives” (The Stakeholder Theory Summary). Indeed, in the not-for-profit world, stakeholders may include an array of persons and organizations, including funders, community members, program participants, family members, volunteers, staff, government agencies, and the broader public.

Stakeholders in non-profits usually fall into one of three categories of legal status:

  1. Constitutional stakeholders such as board members or trustees of the non-profit organization
  2. Contractual stakeholders, including paid staff, or any business, group or individual that has a formal relationship with the organization.
  3. Third-party stakeholders including all the people and groups that may be affected by what the organization does. That includes businesses, the local government, and the citizens who live in the community. (See What is a Stakeholder of a Non-profit.)

Nonprofit stakeholders may range from those who support an organization to those who oppose it. Stakeholders can include advocates, supporters, critics, competitors, and opponents. In its analysis of stakeholders in policy change efforts, the World Bank uses the categories of “promoters,” “defenders,” “latents,” and “apathetics.” (See What is a Stakeholder Analysis.)
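These World Bank categories are commonly read as a two-by-two grid of stakeholder interest and influence. A minimal sketch of that reading, with hypothetical stakeholders and ratings:

```python
# The World Bank-style stakeholder grid sketched as code: classify
# each stakeholder by interest in the effort and influence over it.
# The stakeholders and ratings below are hypothetical.

CATEGORIES = {
    ("high", "high"): "promoter",
    ("high", "low"): "defender",
    ("low", "high"): "latent",
    ("low", "low"): "apathetic",
}

stakeholders = [
    ("funder", "high", "high"),
    ("program participant", "high", "low"),
    ("city council", "low", "high"),
    ("general public", "low", "low"),
]

for name, interest, influence in stakeholders:
    print(f"{name}: {CATEGORIES[(interest, influence)]}")
```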

Conducting a stakeholder analysis is very useful for both evaluation and strategic planning efforts. Identifying various stakeholders’ interests in an organization’s mission and programming can help non-profit leaders and staff to be sure that their efforts and initiatives are achieving desired goals. They can also be useful in ensuring that the needs of those served are being directly met. For both evaluation and strategic planning purposes, a stakeholder analysis is an important process for achieving a shared understanding of each stakeholder’s specific interest in, and relevance to, the work of the non-profit or educational organization.

Brad Rose Consulting has developed a unique approach to stakeholder analysis, one that can be extremely useful as organizations examine the purposes, goals, specific activities, and desired outcomes of their work. We often work with organizations to implement stakeholder analyses. These analyses are helpful both in identifying where an organization is, at any given point in time, and for identifying where it wants to go in the future. You can see our basic stakeholder analysis form here.

Resources

Corporate Stakeholder

Stakeholder Theory

Summary of Stakeholder Theory

Stakeholders of a Typical Non-Profit Organization

What is a Stakeholder of a Non-profit

July 27, 2017
27 Jul 2017

The Use of Surveys in Evaluation

Surveys can be an efficient way to collect information from a substantial number of people (i.e., respondents) in order to answer evaluation research questions. Typically, surveys strive to collect information from a sample (portion) of a broader population. When the sample is selected via random selection of respondents from a specified sampling frame, findings can be generalized, within a calculable margin of error, to the entire population.
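For instance, a back-of-the-envelope sketch of that margin of error, using the standard 95% confidence interval for a proportion (all numbers below are hypothetical):

```python
# Back-of-the-envelope sketch: a 95% confidence interval for a
# proportion estimated from a simple random sample (normal
# approximation). All numbers are hypothetical.
import math

n = 400   # completed surveys drawn at random from the sampling frame
p = 0.62  # observed share of respondents reporting, say, satisfaction

margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{p:.0%} +/- {margin:.1%}")  # roughly 62% +/- 4.8%
```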

Surveys may be conducted by phone, in-person, on the web, or by mail. They may ask standardized questions so that each respondent replies to precisely the same inquiry. Like other forms of research, highly effective surveys depend upon the quality of the questions asked of respondents. The more specific and clear the questions, the more useful survey findings are likely to be. Good surveys present questions in a logical order, are simple and direct, ask about one idea at a time, and are brief.

Surveys can ask either closed-ended or open-ended questions. Closed-ended questions may include multiple-choice, dichotomous, Likert-scale, rank-order-scale, and other types of questions for which there are only a few answer categories available to the respondent. Closed-ended questions provide easily quantifiable data, for example, the frequency and percentage of respondents who answer a question in a particular way.
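A minimal sketch of this most basic closed-ended analysis, tallying frequencies and percentages for a single, hypothetical Likert item:

```python
# Tallying a single closed-ended (Likert) item: frequencies and
# percentages per response category. The responses are hypothetical.
from collections import Counter

responses = ["agree", "agree", "neutral", "strongly agree",
             "disagree", "agree", "neutral", "agree"]

counts = Counter(responses)
total = len(responses)
for answer, n in counts.most_common():
    print(f"{answer:<15} {n}  ({n / total:.0%})")
```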

Alternatively, open-ended survey questions elicit narrative responses that constitute a form of qualitative data. They require respondents to reflect on their experiences or attitudes. Open-ended questions often begin with “why,” “how,” “what,” “describe,” “tell me about…,” or “what do you think about…” (See Open Ended Questions.) Open-ended survey questions depend heavily upon the interest, enthusiasm, and literacy level of respondents, and require extensive analysis precisely because they are not comprised of a small number of response categories.

Administering Surveys and Analyzing Results
Surveys can be administered in a variety of ways: in-person, on the phone, via mail, via the web, etc. Regardless of the specific mode, it’s important to consider, from the respondent’s point of view, the factors that will maximize participation, including the accessibility of the survey, the convenience of its format, the logic of its organization, and the clarity of both its purpose and its questions.

Once survey data are collected and compiled, analyses of the data may take a variety of forms. Analysis of survey data essentially entails looking at quantitative data to find relationships, patterns, and trends. “Analyzing information involves examining it in ways that reveal the relationships, patterns, trends, etc…That may mean subjecting it to statistical operations that can tell you not only what kinds of relationships seem to exist among variables, but also to what level you can trust the answers you’re getting. It may mean comparing your information to that from other groups (a control or comparison group, statewide figures, etc.), to help draw some conclusions from the data.” (See Community Tool Box, “Collecting and Analyzing Data”.) While data analysis usually entails some kind of statistical/quantitative manipulation of numerical information, it may also entail the analysis of qualitative data, i.e., data that is usually composed of words and not immediately quantifiable (e.g., data from in-depth interviews, observations, written documents, video, etc.)

The analysis of both quantitative and qualitative survey data (the latter typically collected in surveys from open-ended questions) is performed primarily to answer key evaluation research questions like, “Did the program make a difference for participants?” Effectively reporting findings from survey research not only entails accurate representation of quantitative findings, but also interpretation of what both quantitative and qualitative data mean. This requires telling a coherent and evocative story based on the survey data.

Brad Rose Consulting has over two decades of experience designing and conducting surveys whose findings compose an essential component of program evaluation activities. The resources below provide additional sources of information on the basics of survey research.

Resources

The American Statistical Association, “What is a Survey?”

“Survey Questions and Answer Types”

“Writing Good Survey Questions”

“How to Ask Open-ended Questions”

“Guide to Survey Research,” University of Colorado

Community Toolbox: “Collecting and Analyzing Data”

Example of Survey Analysis Guidelines:

May 25, 2017
25 May 2017

Using Experimental Design in Evaluation

A recent issue of New Directions for Evaluation (No. 152, Winter 2016), “Social Experiments in Practice: The What, Why, When, Where, and How of Experimental Design and Analyses,” is devoted to the use of randomized experiments in program evaluation. The eight articles in this thematic volume discuss different aspects of experimental design—the practical and theoretical benefits and challenges of applying randomized controlled trials (RCTs) to the evaluation of programs. Although it’s beyond the scope of this blogpost to discuss each of the articles in detail, I’d like to mention a few insights offered by the authors and review the advantages and challenges of experimental design.

Random assignment helps rule out alternative explanations for outcomes
Experimental designs in the social sciences are studies that randomly assign subjects (i.e., program participants) to treatment and control groups, then measure changes (i.e., average changes) in both groups to determine if a program, or “treatment,” has had a desired effect on those who receive it. As the issue’s editor, Laura Peck, observes, “…when it comes to the question of cause and effect—the question of a program’s or policy’s impact, we assert that a randomized experiment should be the evaluation design of choice.” (p.11) Indeed, experimental design studies—whose origins are in the natural sciences, and whose benefits are perhaps most frequently demonstrated in FDA testing of pharmaceuticals—are thought to be the “gold standard” for scientifically establishing causation. Random assignment of individuals to two groups—one that receives the treatment and one that does not—is the best way to establish whether desired changes are the result of what happens in the treatment (i.e., the program). As the editor observes, “This ‘coin toss’ (i.e., random assignment) to allocate access to treatment carries substantial power. It allows us to rule out alternative explanations for differences in outcomes between people who have access to a service and people who do not.” (p.11)
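The underlying logic is simple enough to sketch in a few lines. In this toy simulation, which assumes a true program effect of +5 points on a simulated outcome, the “coin toss” assigns participants to groups and the difference in group means estimates the program’s impact:

```python
# Toy simulation of a randomized experiment. Outcomes are simulated
# with an assumed true program effect of +5 points; nothing here is
# real data.
import random

random.seed(1)

participants = list(range(200))
random.shuffle(participants)                 # the "coin toss"
treatment, control = participants[:100], participants[100:]

def outcome(treated: bool) -> float:
    # Baseline of 50, noise with sd 10, plus 5 points if treated.
    return random.gauss(50 + (5 if treated else 0), 10)

t_mean = sum(outcome(True) for _ in treatment) / len(treatment)
c_mean = sum(outcome(False) for _ in control) / len(control)
print(f"estimated impact: {t_mean - c_mean:.1f} points")  # near the true 5
```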

There are still concerns surrounding the use of experimental design
Although experimental design is viewed by many as the premier indicator of causation, its use in evaluations can pose practical challenges. There are potential legal and ethical concerns about withholding treatment from control groups (especially in the fields of medicine and education). Additionally, some argue that experimental design, especially in complex social interventions, is unable to identify which specific component of a treatment is responsible for the observed differences in the treatment group (the “black box” phenomenon). Michael Scriven observes that it is nearly impossible to create a truly “double blind” experiment in the social world (i.e., experiments where neither the experimental subjects nor the evaluator knows who is in the treatment group and who is in the control group). Moreover, some argue that experimental design can be more labor- and time-intensive than other study designs, and therefore more costly.

Quasi-experimental design is useful for showing before-and-after changes
While experimental design is the most prestigious method for determining the causal effects of a program, initiative, or policy, it is far from a universally appropriate design for evaluations. Quasi-experimental design, for example, is often used to show pre- and post-program changes in those who participate in a program or treatment, although quasi-experimental design cannot unequivocally confirm whether such changes are attributable to the program. One form of quasi-experimental design is the “non-equivalent (pre-test, post-test) control group design.” In this design, participants are assigned to two groups (although not randomly). Both groups take a pre-test and a post-test, but only one group, the experimental group, receives the treatment/program. (The key textbook resource on both experimental and non-experimental designs is Experimental and Quasi-Experimental Designs, by Shadish, Cook, and Campbell, Houghton Mifflin.)
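One common way to analyze this design is to compare the two groups’ gains, a difference-in-differences. A minimal sketch with hypothetical test scores:

```python
# The non-equivalent (pre-test, post-test) control group design,
# analyzed as a difference-in-differences. All scores are hypothetical.

program_group = {"pre": 61.0, "post": 72.0}  # received the program
comparison    = {"pre": 63.0, "post": 67.0}  # did not

program_gain    = program_group["post"] - program_group["pre"]  # 11.0
comparison_gain = comparison["post"] - comparison["pre"]        #  4.0

# Without random assignment, this estimate remains vulnerable to
# pre-existing differences between the two groups.
print(f"gain net of comparison group: {program_gain - comparison_gain:.1f}")
```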

There are, of course, a range of non-experimental designs that are used productively in evaluation. These range from case studies to observational studies, and rely on a variety of methods, largely qualitative, including phone and in-person interviews, focus groups, surveys, and document reviews. (See this page for a brief table comparing the characteristics of qualitative and quantitative methods of research. See also the National Science Foundation’s very helpful “Overview of Qualitative Methods and Analytic Techniques.”) Qualitative evaluation studies can be very effective, and are often used in a mixed-methods approach to evaluation work.

Resources:

“Identifying and Implementing Educational Practices Supported by Rigorous Evidence: A User Friendly Guide”

“Designing Quasi-Experiments: Meeting What Works Clearinghouse Standards Without Random Assignment”

Good web-based resources on the subject of determining cause, and examples of research designs

“Overview of Qualitative Methods and Analytic Techniques”

“A Summative Evaluation of RCT Methodology: & An Alternative Approach to Causal Research,” Michael Scriven

“Using Small-Scale Randomized Controlled Trials to Evaluate the Efficacy of New Curricular Materials”

“Example Evaluation Plan for a Quasi-Experimental Design”

Experimental and Quasi-Experimental Designs, by Shadish, Cook, and Campbell

“What is Evaluation?” – Gene Shackman 

April 26, 2017
26 Apr 2017

Will President Trump’s Wall Keep Out the Robots?

In July, we posted a blog post titled “Humans Need Not Apply: What Happens When There’s No More Work?” As we mentioned in that post, the rise of artificial intelligence, machine learning, and robotics has increasingly ominous implications for the future of work and employment. In a recent New York Times article, “The Long-Term Jobs Killer Is Not China. It’s Automation,” Claire Cain Miller traces the effects of automation on those who have been employed in America’s once preeminent industries—steel, coal, newspapers, etc. She observes that it is neither immigration nor globalization that threatens American workers; it’s automation. Referring to the 2016 political campaigns, Cain Miller notes, “No candidate talked much about automation on the campaign trail. Technology is not as convenient a villain as China or Mexico, there is no clear way to stop it, and many of the technology companies are in the United States and benefit the country in many ways.” She cites one study showing that roughly 13 percent of manufacturing job losses are due to trade, and the rest are due to enhanced productivity attributable to automation.

In another article, “Evidence That Robots Are Winning the Race for American Jobs,” Cain Miller writes, “The industry most affected by automation is manufacturing. For every robot per thousand workers, up to six workers lost their jobs and wages fell by as much as three-fourths of a percent, according to a new paper by the economists Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University.” In “How to Make America’s Robots Great Again” (New York Times, January 25, 2017), Farhad Manjoo states, “Thanks to automation, we now make 85 percent more goods than we did in 1987, but with only two-thirds the number of workers.”

Manufacturing, however, is not the only area where AI and robots threaten to displace human employees. In “A Robot May Be Training to Do Your Job. Don’t Panic,” Alexandra Levit argues that automation in the form of “social robotics,” affective computing, and emotional-awareness software is now making inroads into the helping/caring professions, like nursing. Levit writes, “Eventually, the moment will come when machines possess empathy, the ability to innovate and other traits we perceive as uniquely human. What then? How will we sustain our own career relevance?”

In “Actors, teachers, therapists – think your job is safe from artificial intelligence? Think again,” Dan Tynan writes, “A January 2017 report from the McKinsey Global Institute estimated that roughly half of today’s work activities could be automated by 2055 (give or take 20 years)…Thanks to advances in artificial intelligence, natural language processing, and inexpensive computing power, jobs that once weren’t considered good candidates for automation suddenly are.”

According to these and other writers, automation and Artificial Intelligence are poised to sweep away or profoundly transform a number of occupations, and thereby alter both industry and society. While some writers foresee productive partnerships between AI and human colleagues, others warn that automation is likely to reduce the need for human labor, and relegate sectors of the population to hardscrabble redundancy. As Martin Ford points out, this industrial revolution is different from previous ones, because new technology is taking aim at both blue-collar and white-collar work. (See Rise of the Robots: Technology and the Threat of a Jobless Future, by Martin Ford.)

Lest we become disconsolate at the prospect that robots will take our jobs, Claire Cain Miller suggests that there are a number of things the U.S. can do to prepare for, and adapt to, these employment-threatening developments. She suggests: 1) providing more and different kinds of education to employees, including teaching technical skills, like coding and statistics, and skills that still give humans an edge over machines, like creativity and collaboration; 2) creating better jobs for human workers, including government-subsidized employment (creating public sector jobs) and building infrastructure; 3) creating more care-giving jobs, strengthening labor unions, and training some workers to work in advanced manufacturing; 4) expanding the earned-income tax credit, providing a universal basic income (in which the government gives everyone a guaranteed amount of money), and establishing “portable benefits,” such as health insurance that isn’t tied to a job. Cain Miller also suggests raising the minimum wage and even taxing robots (the latter a proposal supported by Bill Gates).

Whether these proposals will prove to be politically feasible or economically viable is difficult to judge. Some of these seem wildly utopian and difficult to envision—especially given a new Administration that built substantial electoral support on promises to revive employment in ‘smokestack’ industries, like steel and coal. That said, until relatively recently, it was difficult to envision the meteoric “rise of the robots” and the consequent effects on employment and society that such a development would have. Not even robots can reliably predict the future.

Resources:

“The Long-Term Jobs Killer Is Not China. It’s Automation,” Claire Cain Miller, New York Times, December 12, 2016

“Where machines could replace humans—and where they can’t (yet),” Michael Chui, James Manyika, and Mehdi Miremadi, July 2016, McKinsey Quarterly,

“Actors, teachers, therapists – think your job is safe from artificial intelligence? Think again.” Dan Tynan, The Guardian, February 9, 2017.

“How to Make America’s Robots Great Again” Farhad Manjoo New York Times, January 25, 2017

“Evidence That Robots Are Winning the Race for American Jobs,” Claire Cain Miller New York Times, March 28, 2017

“How to Beat the Robots,” Claire Cain Miller, New York Times, March 7, 2017

“A Robot May Be Training to Do Your Job. Don’t Panic,” Alexandra Levit, New York Times, September 10, 2016

“EU supports Personhood status to robots.” Alex Hern, The Guardian, January 12, 2017

Rise of the Robots: Technology and the Threat of a Jobless Future, Martin Ford, Basic Books, 2015

March 31, 2017
31 Mar 2017

The Implications of Public School Privatization (Part 2): Betsy DeVos’ “Holy War”

In our last blog post, “The Implications of Public School Privatization,” I referred to an article by Diane Ravitch that recently appeared in The New York Review of Books. That article claimed that the school privatization movement is largely composed of social conservatives, corporations, and business-friendly foundations. In a recent Rolling Stone article, “Betsy DeVos’ Holy War,” Janet Reitman argues that the movement to privatize public schools is also sponsored, at least in part, by those who, like Betsy DeVos, would prefer to de-secularize schools and create institutions that reflect market-friendly Christian values.

Betsy DeVos embodies a nexus of wealth and hyper-conservative Christianity. Her goals include support of “school choice” (i.e., voucher systems that direct tax money for public schools toward private and parochial schools) and, according to Reitman, the promotion of the religious colonization of public education and, more broadly, American society. (See also “Betsy DeVos Wants to Use America’s Schools to Build ‘God’s Kingdom,’” Kristina Rizga, Mar/Apr 2017 issue, Mother Jones.)

DeVos, as is well documented, is not deeply acquainted with public education—neither she nor her children attended public schools; she has never served on a school board, nor been an educator. DeVos, who hails from a wealthy, Calvinist, Western Michigan dynasty whose resources include her husband’s multi-billion-dollar Amway fortune and her father’s auto parts fortune (among other profitable ventures, her father, Edgar Prince, invented the lighted automobile sun visor), now finds herself at the helm of the federal Department of Education. She appears to be even less a friend of public education than she is familiar with it. DeVos has devoted a substantial part of her political and philanthropic career to advocating for the privatization of public schools, and her home state of Michigan has the highest number of for-profit charter schools in the nation. To learn more about DeVos’ plans for public schools in America, you can read Reitman’s intriguing article, “Betsy DeVos’ Holy War.” The resources below offer additional insights into Secretary DeVos and her plans for public schools. And to learn more about our work with schools, visit our Higher Education & K-12 page.

Resources

Betsy DeVos’ website

Six astonishing things Betsy DeVos said — and refused to say — at her confirmation hearing” Valerie Strauss, Washington Post, January 18, 2016

Education for Sale?” Linda Darling Hammond The Nation, March 27, 2017

“Betsy DeVos: Fighter for kids or destroyer of public schools?” Lori Higgins and Kathleen Gray, Detroit Free Press, November 23, 2016

“Betsy DeVos blocked from entering Washington public school by protesters,” CBS News

“The Betsy DeVos Hearing Was an Insult to Democracy,” Charles Pierce, Esquire

“Betsy DeVos Wants to Use America’s Schools to Build ‘God’s Kingdom,’” Kristina Rizga, Mar/Apr 2017 issue, Mother Jones

“Why are Republicans so cruel to the poor? Paul Ryan’s profound hypocrisy stands for a deeper problem,” Chauncey DeVega, March 23, 2017

March 1, 2017
01 Mar 2017

The Implications of Public School Privatization

In a recent review of two books, Education and the Commercial Mindset and School Choice: The End of Public Education, which appears in the December 8, 2016 New York Review of Books, Diane Ravitch, the former Assistant Secretary of Education during the George H. W. Bush presidency, discusses the implications of corporate designs on public education. Ravitch begins her review by reminding us that, “Privatization means that a public service is taken over by for-profit business whose highest goal is profit.” In the name of market-driven efficiency, she argues, the “education industry” is likely to become increasingly similar to privatized hospitals and prisons. In these industries, as in many others, corporate owners, in their loyalty to investors’ desire for profits, tend to eliminate unions, reduce employee benefits, continually cut costs of operation, and orient to serving those who are least expensive to serve.

Ravitch sketches some of the challenges posed by charter schools, noting that “…they can admit the students they want, exclude those they do not want, and push out the ones who do not meet their academic or behavioral standards.” She says that charters not only “drain away resources from public schools” but also “leave the neediest, most expensive students to the public schools to educate.” Moreover, as Josh Moon recently noted in his article “‘School choice’ is an awful choice,” “If the ‘failing school’ is indeed so terrible that we’re willing to reroute tax money from it to a private institution that’s not even accredited, then what makes it OK for some students to attend that failing school?”

While some argue that charter schools can “save children from failing public schools,” research on student outcomes for charter schools has shown mixed results. For example, the Education Law Center, in “Charter School Achievement: Hype vs Evidence,” reports:

Research on charter schools paints a mixed picture. A number of recent national studies have reached the same conclusion: charter schools do not, on average, show greater levels of student achievement, typically measured by standardized test scores, than public schools, and may even perform worse.

The Center for Research on Education Outcomes (CREDO) at Stanford University found in a 2009 report that 17% of charter schools outperformed their public school equivalents, while 37% of charter schools performed worse than regular local schools, and the rest were about the same. A 2010 study by Mathematica Policy Research found that, on average, charter middle schools that held lotteries were neither more nor less successful than regular middle schools in improving student achievement, behavior, or school progress. Among the charter schools considered in the study, more had statistically significant negative effects on student achievement than statistically significant positive effects. These findings are echoed in a number of other studies.

In Michigan, Secretary of Education Betsy DeVos’s home state, 80 percent of charter schools operate as for-profit organizations. Ravitch says, “They perform no better than public schools, and according to the Detroit Free Press, they make up a publicly subsidized $1 billion per year industry with no accountability.”

Ravitch tells us that the privatization movement is largely composed of social conservatives, corporations, and business-friendly foundations. “These days, those who call themselves ‘education reformers’ are likely to be hedge fund managers, entrepreneurs, and billionaires, not educators. The ‘reform’ movement loudly proclaims the failure of American public education and seeks to turn public dollars over to entrepreneurs, corporate chains, mom-and-pop operations, religious organizations, and almost anyone else who wants to open a school.”

The Trump administration is likely to further advance a public-school privatization and school voucher agenda. The extent and results of such “reforms” are hard to predict. That said, as Ravitch argues, “…there is no evidence for the superiority of privatization in education. Privatization divides communities and diminishes commitment to that which we call the common good. When there is a public school system, citizens are obligated to pay taxes to support the education of all children in the community, even if they have no children in the schools themselves. We invest in public education because it is an investment in the future of society.” How continued privatization of public K-12 education will affect an increasingly economically privatized and socially and politically divided society is not yet known. To find out more about the work we do with schools, click here.

Resources

Diane Ravitch, “When Public Goes Private, as Trump Wants: What Happens?” New York Review of Books, December 8, 2016.

Diane Ravitch, “Trump’s Nominee for Secretary of Education Could Gut Public Ed,” In These Times.

Margaret E. Raymond, “A Critical Look at the Charter School Debate”

National Charter School Study (Stanford University) 2013

“Charter School Achievement: Hype vs Evidence”

Kristina Rizga, “Betsy DeVos Wants to Use America’s Schools to Build ‘God’s Kingdom,’” January 17, 2017

Kevin Carey, “Dismal Results From Vouchers Surprise Researchers as DeVos Era Begins,” New York Times, February 23, 2017

Josh Moon, “‘School choice’ is an awful choice,” Alabama Political Reporter

Literature Review: Research Comparing Charter Schools and Traditional Public Schools

July 6, 2016
06 Jul 2016

Humans Need Not Apply:
What Happens When There’s No More Work?

In the June 25–July 1, 2016 issue of New Scientist, Michael Bond and Joshua Howgego report that a recent study by Oxford University concludes that within two decades, one-half of all jobs in the US could be done by machines. Artificial intelligence (AI) and advanced automation are having a profound effect on work and employment, especially in the advanced industrial economies. (See “When Machines Take Over: What Will Humans Do When Computers Run the World?” New Scientist, June 25–July 1, 2016, Vol. 230, Issue 3079, p. 29 ff.)

Martin Ford’s 2015 book, Rise of the Robots: Technology and the Threat of a Jobless Future, explores in greater depth the impact of AI and robotics on employment. Ford traces the powerful (and disturbing) effects of robotization and artificial intelligence on a range of sectors in the economy, and argues that in addition to eliminating jobs, the current AI-driven revolution in the world of work promises to displace both blue-collar, manual laborers and white-collar, college-educated professionals, the latter including, but not limited to, lawyers, computer programmers, managers, and office and retail workers. The current and anticipated “rise of the robots” thus threatens to create an increasingly jobless future for all; a future, Ford argues, that cannot be addressed with more education and upskilling of the workforce, because the jobs for which displaced blue-collar workers once retrained will increasingly be carried out by robots and smart machines.

Ford’s book, like Bond and Howgego’s article, underscores both the ominous changes in the economy and the profound losses that such changes portend. Bond and Howgego explore the significant role work has played, especially in the advanced economies, not only as a source of income and livelihood, but also as an important source of employees’ sense of purpose, identity, and meaning. For instance, they cite a recent Gallup poll showing that 50% of manual workers and 70% of college-educated employees report that they get a sense of identity from their jobs. They also discuss the health benefits associated with the performance of meaningful work, and how the risks of diseases such as dementia and Alzheimer’s may be reduced for those who work more years and postpone retirement.

As work continues to change because of employers’ preference for AI and automation, and fewer people are able to find employment, how will society deal with what looks like an imminent, if not current, tidal wave of unemployment and forced ‘leisure’? Ford shows how recent history has been characterized by diminishing job creation, lengthening jobless recoveries, and soaring long-term unemployment—all of which are certain to lead to significant social and economic consequences if not adequately addressed.

Ford, Bond, and Howgego all suggest that society will need to rethink the distribution of wealth and society’s assets. Ford, for example, argues for a guaranteed basic income of 10,000 dollars annually for all citizens (augmentable, of course, by paid employment), and says that if the guaranteed income were not set too high, it would likely avoid the pitfall of creating disincentives to work. He estimates such a plan would cost about 2 trillion dollars annually, about one half of which would be recouped through cost savings on discontinued welfare programs (e.g., food stamps, housing assistance programs, the Earned Income Tax Credit, etc.), and the other half of which might be raised by new taxes, like a carbon tax. Bond and Howgego also explore basic incomes, but discuss alternative income-supporting plans as well, such as a negative income tax program, in which poor people receive a guaranteed annual income, middle earners aren’t taxed, and the wealthy are taxed.
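
Ford’s back-of-the-envelope arithmetic is easy to check. Below is a minimal sketch of the calculation; the count of roughly 200 million adult recipients is our illustrative assumption (chosen so that the totals match the 2 trillion dollar estimate), not a figure taken from Ford’s book:

```python
# Back-of-the-envelope check of Ford's basic income arithmetic.
# The recipient count is an illustrative assumption, not a figure
# taken from Ford's book.
recipients = 200_000_000   # assumed number of adult recipients
annual_grant = 10_000      # dollars per person per year (Ford's figure)

gross_cost = recipients * annual_grant  # = 2,000,000,000,000 dollars
print(f"Gross annual cost: ${gross_cost / 1e12:.1f} trillion")

# Ford suggests roughly half could be recouped from discontinued
# welfare programs, with the remainder raised through new taxes.
program_savings = gross_cost / 2
new_taxes_needed = gross_cost - program_savings
print(f"Offset by welfare program savings: ${program_savings / 1e12:.1f} trillion")
print(f"To be raised by new taxes: ${new_taxes_needed / 1e12:.1f} trillion")
```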

Whether society is culturally and politically ready for the introduction of a guaranteed minimum income remains to be seen. Ominously, current and forthcoming changes in work, and the resulting displacement of workers, are likely to necessitate a sweeping examination of the economic and moral implications of the disappearance of paid employment. AI and robotic technology, as these writers convincingly show, will continue to eliminate jobs and make human employment increasingly rare.

Resources:

“When Machines Take Over: What Will Humans Do When Computers Run the World?” Michael Bond and Joshua Howgego, New Scientist, Vol. 230, Issue 3079, p. 29 ff.

Rise of the Robots: Technology and the Threat of a Jobless Future, Martin Ford, Basic Books, 2015

“Would a Work-Free World Be So Bad?” Ilana E. Strauss, The Atlantic, Jun 28, 2016

“Why Switzerland’s basic income idea is not crazy” Scott Santens, Politico, 6/6/16

“Review: ‘Rise of the Robots’ and ‘Shadow Work,’” Barbara Ehrenreich, New York Times, May 11, 2015

“‘Rise of the Robots’ and the threat of a jobless future,” Andrew Leonard, LA Times

Inventing the Future: Post-Capitalism and a World Without Work, Nick Srnicek and Alex Williams, Verso, 2016

April 28, 2016
28 Apr 2016

Evidence-Based Studies

“Evidence-based” – What is it?
“Evidence-based” has become a common adjectival term for identifying and endorsing the effectiveness of various programs and practices in fields ranging from medicine to education, from psychology to nursing, and from criminal justice to social work. The motivation for marshalling objective evidence to guide practices and policies in these diverse fields is the growing recognition that professional practices—whether doctoring, teaching, social work, or nursing—need to be based on something more sound than custom and tradition, practitioners’ habit, professional culture, received wisdom, or hearsay.

What does “evidence-based” mean?
While definitions of “evidence-based” vary, the most common characteristics of evidence-based research include objective, empirical research that is valid and replicable, rests on a strong theoretical foundation, and includes high quality data and data collection procedures. The most common definition of Evidence-Based Practice (EBP) is drawn from Dr. David Sackett’s original (1996) definition of “evidence-based” practices in medicine, i.e., “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient. It means integrating individual clinical expertise with the best available external clinical evidence from systematic research” (Sackett D, 1996). This definition was subsequently amended to, “a systematic approach to clinical problem solving which allows the integration of the best available research evidence with clinical expertise and patient values” (Sackett DL, Strauss SE, Richardson WS, et al. Evidence-based medicine: how to practice and teach EBM. London: Churchill-Livingstone, 2000). (See “Definition of Evidence-based Medicine.”)

An evidence-based program, whether in youth development or education, is composed of a set of coordinated services/activities whose effectiveness has been established by sound research, preferably scientifically based research. (See “Introduction to Evidence-Based Practice.”)

In education, evidence-based practices are those practices that are based on sound research that shows that desired outcomes follow from the employment of such practices.  “Evidence-based education is a paradigm by which education stakeholders use empirical evidence to make informed decisions about education interventions (policies, practices, and programs). ‘Evidence-based’ decision making is emphasized over ‘opinion-based’ decision making.” Additionally, “the concept behind evidence-based approaches is that education interventions should be evaluated to prove whether they work, and the results should be fed back to influence practice. Research is connected to day-to-day practice, and individualistic and personal approaches give way to testing and scientific rigor.” (See, “What is Evidence-Based Education?“).

Of course, there are different kinds of evidence that can be used to show that practices, programs, and policies are effective.  In a subsequent blog post I will discuss the range of evidence-based studies—from individual case studies and quasi-experimental designs, to randomized controlled trials (RCTs).  The quality of the evidence, as well as the quality of the study in which such evidence appears, is a critical factor in deciding whether a practice or program is not just “evidence-based” but, in fact, effective. To learn more about our data collection and measurement, click here.

Resources

Evidence-based practice

What are Evidence-Based Interventions (EBI)?

Scientific Research and Evidence-Based Practice

Evidence-based medicine, Florida State College of Medicine

Evidence-based studies in education

U.S. Department of Education, “What Works” Clearinghouse

Defining Evidence-Based Programs

Child and Family Services

Linking Research with Practice in Youth Development – What Works and How Do We Know

Scientifically Based Research vs. Evidence-Based Practices and Instruction

How to Evaluate Evidence-Based or Research-Based Interventions

Issues in Defining and Applying Evidence-Based Practices Criteria for Treatment of Criminal-Justice Involved Clients

Can Randomized Trials Answer the Question of What Works?

February 26, 2016
26 Feb 2016

Evaluation Evidence from Local & Federally Supported Programs

Dawn Bentley of the Michigan Association of Special Educators recently drew my attention to an important article appearing in the Huffington Post, “Proven Programs vs. Local Evidence,” by Robert Slavin, of Johns Hopkins University. “Proven Programs vs. Local Evidence” compares and contrasts two kinds of evaluations of educational programs.

On the one hand, Slavin says, there are evaluations conducted of large-scale, typically federally funded programs. These programs represent program structures that, once found to be effective, can be replicated in a variety of settings. Evaluation findings from such programs are usually generalizable; that is, they are applicable to a broader range of contexts than the individual case under study. Slavin terms such evaluations “Proven Programs.” “Proven Program” evaluations are becoming increasingly important because the federal government is interested in funding efforts that are research-based and show strong evidence of effectiveness. Examples of such “Proven Programs” include School Improvement Grants (SIGs) and Title II SEED grants.

On the other hand, Slavin notes, there are locally specific evaluations that are “not intended to produce answers to universal problems,” and whose findings typically are not generalizable. These evaluations are conducted on programs of a more limited, usually local, scope, and tend to be of interest principally to local program stakeholders, rather than state or national policy makers. Slavin calls these evaluations “Local Evidence” because they yield evidence that typically isn’t generalizable to larger contexts.

Slavin notes that these two kinds of program evaluations are not necessarily mutually exclusive, for example, when a district or state implements and evaluates a replicable program that responds to its own needs. That said, Slavin says that “proven programs” are likely to contribute to national evidence and experience of what works, while “Local Evidence” evaluations are more likely to be of interest to local educators and local stakeholders. He notes that “Local Evidence” evaluations are more likely to result in stakeholders utilizing and acting on evaluation findings.

While Brad Rose Consulting, Inc. has experience working with the U.S. Dept. of Education in conducting evaluations of national-scope initiatives, we also have extensive experience in, and are strongly committed to, assisting state-level and district-level education agencies to design and conduct evaluation research that produces findings that will constructively inform both local policy and programming innovations. To find out more about our work in education visit our Higher education & K-12 page.

“Proven Programs vs. Local Evidence,” by Robert Slavin

March 31, 2015
31 Mar 2015

“There is no failure. Only feedback.” – Robert Allen

You will recall that in an earlier post we discussed the importance of learning from program “failure.” (“Fail Forward: What We Can Learn from Program Failure”)

Below are some films and other resources about the value of risk and failure in helping us to learn and improve.

“I have not failed. I’ve just found 10,000 ways that won’t work.” – Thomas Edison

“By understanding how and why programs don’t achieve the results they intend, we can design and execute improved programs in the future. It is important to note that psychological research has shown that individuals learn more from failure than they do from success. Our goals should be to learn from our defeats and to surmount them—especially in programs that address critical social, educational, and human service needs. Learning from the challenges that confront these kinds of programs can have a powerful impact on the success of future programming.” To learn about how we take advantage of feedback, visit our Feedback & Continuous improvement page.

 

Resources
http://www.edutopia.org/blog/film-festival-learning-from-failure-resilience

October 28, 2014
28 Oct 2014

Why You Hate Work


In a recent New York Times article, “Why You Hate Work,” Tony Schwartz and Christine Porath discuss why employees’ experience of work has increasingly become an experience of depletion and “burnout.” The factors are many, including: 1) demands on employee time that far exceed employees’ capacity to meet them; 2) a leaner and less populous workforce, and therefore more work distributed to fewer workers; and 3) technology-driven expectations for immediate response to requests for employees’ attention and commitment (think here of answering e-mails at 1:00 AM). The authors cite both national and international studies indicating that workers at all levels of various kinds of organizations feel less engaged, less satisfied, and less fulfilled by their experience at work. Schwartz and Porath argue, however, that when companies better address the physical, mental, emotional, and spiritual dimensions of their workers, they produce not only more engaged and fulfilled workers, but more productive and profitable organizations. Organizations can begin to do this by instituting simple changes like mandating meetings that last no longer than 90 minutes; rewarding managers who display empathy, care, and humility; and providing regular and frequent breaks so that employees can ‘recharge’ and work more creatively. Successful companies provide opportunities for employee renewal, focus, emotional support, and sense of purpose. When companies provide such opportunities, companies, investors, and employees benefit.

How might program evaluation add to non-profit organizations’ efforts to create what Schwartz and Porath call “truly human-centered organizations,” which “put their people first…because they recognize that they are the key to creating long-term value”? While program evaluation, alone, cannot prevent employee burnout, timely and well-designed formative evaluations can add to non-profit organizations’ capacities to implement effective programming by providing insight into the unintended features of programs that often ‘get in the way’ of staff (and program participants’) sense of efficacy and purposefulness. By conducting formative evaluations—evaluations that focus on program strengthening and effectiveness-maximization—program evaluation can help organizations and funders to create programs in which staff and participants don’t have to “spin their wheels,” i.e., programs where both staff and participants can achieve a greater sense of effectiveness, purpose, and satisfaction.

Because Brad Rose Consulting, Inc. often works at the intersection of program evaluation and organization development (OD), we work with clients to collect data, understand the characteristics of programs, and provide evidence-based insights into how programs, and the organizations that support them, can become maximally effective. We make concrete recommendations that help our clients adjust their modes of operation and thereby increase staff engagement and better serve their participants/clients. While the latter are the reason programs exist, the former are often the under-recognized key to programs’ success. To learn more about our adaptive approach to program evaluation visit our Impact & Assessment reporting page.

September 23, 2014
23 Sep 2014

Program Evaluation vs. Social Research

I recently participated in a workshop at Brandeis University for graduate students who were considering non-academic careers in the social sciences.  During the workshop, one of the students asked about the difference between program evaluation and other kinds of social research.  This is a valuable and important question to which I responded that program evaluation is a type of applied social research that is conducted with “a value, or set of values, in its denominator.”  I further explained that I meant that evaluation research is always conducted with an eye to whether the outcomes, or results, of a program were achieved, especially when these outcomes are compared to a desired and valued standard or criterion.  At the heart of program evaluation is the idea that outcomes, or changes, are valuable and desired.  Evaluators conduct evaluation research to find out if these valuable changes (often expressed as program goals or objectives) are, in fact, achieved by the program.

Evaluation research shares many of the same methods and approaches as other social sciences, and indeed, natural sciences.  Evaluators draw upon a range of evaluation designs (e.g., experimental design, quasi-experimental design, non-experimental design) and a range of methodologies (e.g., case studies, observational studies, interviews, etc.) to learn what the effects of a given intervention have been.  Did, for example, 8th grade students who received an enriched STEM curriculum do better on tests than did their otherwise similar peers who didn’t receive the enriched curriculum?  Do homeless women who receive career readiness workshops succeed at obtaining employment at greater rates than do other similar homeless women who don’t participate in such workshops? (For more on these types of outcome evaluations, see our previous blog post, “What You Need to Know About Outcome Evaluations: The Basics.”) While not all program evaluations are outcome evaluations, all evaluations gather systematic data with which judgments about the program can be made.
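
To make the logic of such two-group outcome comparisons concrete, here is a minimal sketch using simulated test scores. The group sizes, score distributions, and effect are invented for illustration; a real evaluation would use actual program data:

```python
# Minimal sketch of an outcome comparison between a program group and a
# comparison group, using simulated test scores (illustrative data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated post-test scores for 120 students in each group.
program_group = rng.normal(loc=78, scale=10, size=120)     # enriched STEM curriculum
comparison_group = rng.normal(loc=74, scale=10, size=120)  # business-as-usual curriculum

# Two-sample t-test: is the difference in mean scores larger than
# what chance alone would plausibly produce?
t_stat, p_value = stats.ttest_ind(program_group, comparison_group)

print(f"Mean difference: {program_group.mean() - comparison_group.mean():.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```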

Evaluation’s Differences From Other Kinds of Social Research

Evaluation research is distinct from other forms of applied social research insofar as it:

  • seeks to determine the merit, value, and/or worth of a program’s activities and results.
  • entails the systematic collection of empirical data that is used to measure the processes and/or outcomes of a program, with the goal of furthering the program’s development and improvement.
  • provides actionable information for decision-makers and program stakeholders, so that, based on objective data, a program can be strengthened or curtailed.
  • focuses on particular knowledge (usually about a program and its outcomes), rather than seeking widely generalizable and universal knowledge.

While evaluators share many of the same methods and approaches as other researchers, program evaluators must employ an explicit set of values against which to judge the findings of their empirical research.  This means that evaluators must both be competent social scientists and exercise value-based judgments and interpretations about the meaning of data. To learn more about our evaluation methods visit our Data collection & Outcome measurement page.

Resources

Research vs. Evaluation
http://www.ncjp.org/research-evaluation/overview/research-vs-evaluation

Differences Between Research and Evaluation
http://www.differencebetween.com/difference-between-research-and-vs-evaluation/

Harvard Family Research Project’s “Ask an Expert” series.
See “Michael Scriven on the Differences Between Evaluation and Social Science Research,”
http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research

Office of Educational Assessment
http://www.washington.edu/oea/services/research/program_eval/faq.html

Sandra Mathison’s “What is the Difference Between Evaluation and Research, and Why Do We Care?”
http://www.ncdsv.org/images/Mathison_WhatIsDiffBetweenEvalAndResearch.pdf


August 26, 2014
26 Aug 2014

Focus Groups

Pioneered by market researchers and mid-20th century sociologists, focus groups are a qualitative research method that involves small groups of people in guided discussions about their attitudes, beliefs, experiences, and opinions about a selected topic or issue.  Often used by marketers to obtain feedback from consumers about a product or service, focus groups have also become an effective and widely recognized social science research tool that enables researchers to explore participants’ views, and to reveal rich data that often remain under-reported by other kinds of data collection strategies (e.g., surveys, questionnaires, etc.).

Organized around a set of guiding questions, focus groups typically are composed of 6-10 people and a moderator who poses open-ended questions for participants to address.  Focus groups usually include people who are somewhat similar in characteristics or social roles.  Participants are selected for their knowledge, reflectiveness, and willingness to engage topics or questions.  Ideally—although it is not always possible—it is best to involve participants who don’t already know one another.

Focus group conversations enable participants to offer observations, define issues, pose and refine questions, and create informative debate/discussions.  Focus group moderators must be attentive, pose useful and creative questions, create a welcoming and non-judgmental atmosphere, and be sensitive to non-verbal cues and the emotional tenor of participants.  Typically, focus group sessions are recorded or videoed so that researchers can later transcribe and analyze participants’ comments.  Often an assistant moderator will take notes during the focus group conversation.

Focus groups have advantages over other data collection methods.  They often employ group dynamics that help to reveal information that would not emerge from an individual interview or survey; they produce relatively quick, low-cost data (an ‘economy of scale’ as compared to individual interviews); they allow the moderator to pose appropriate and responsive follow-up questions; they enable the moderator to observe non-verbal data; and they often produce greater and richer data than a questionnaire or survey.

Focus groups also can have some disadvantages, especially if not conducted by an experienced and skilled moderator.  Depending upon their composition, focus groups are not necessarily representative of the general population; respondents may feel social pressure to endorse other group members’ opinions or refrain from voicing their own; and group discussions require effective “steering” so that key questions are answered and participants don’t stray from the questions/topic.

Focus groups are often used in program evaluations.  I have had extensive experience conducting focus groups with a wide range of constituencies.  During my 20 years of experience as a program evaluator, I’ve moderated focus groups composed of homeless persons; disadvantaged youth; university professors and administrators; K-12 teachers; K-12 and university students; corporate managers; and hospital administrators.  In each of these groups I’ve found it beneficial to have a non-judgmental attitude, be genuinely curious, exercise a gentle guidance, and respect the opinions, beliefs, and experiences of each focus group member.  A sense of humor can also be extremely helpful. (See our previous posts “Interpersonal Skills Enhance Program Evaluation” and “Listening to Those Who Matter Most, the Beneficiaries.”) Or if you want to learn more about our qualitative approaches, visit our Data collection & Outcome measurement page.

Resources:

About focus groups:

http://sociology.about.com/od/Research-Methods/a/Focus-Groups.htm

About focus groups:

http://www.cse.lehigh.edu/~glennb/mm/FocusGroups.htm

How focus groups work:

http://money.howstuffworks.com/business-communications/how-focus-groups-work1.htm

Focus group interviewing:

http://www.tc.umn.edu/~rkrueger/focus.html

‘Focus groups’ at Wikipedia

https://en.wikipedia.org/wiki/Focus_group

August 11, 2014
11 Aug 2014

Needs Assessment

A needs assessment is a systematic research and planning process for determining the discrepancy between an actual condition or state of affairs, and a future desired condition or state of affairs.  Needs assessments are undertaken not only to identify the gap between “what is” and “what should be,” but also to identify the programmatic actions and resources that are required to address that gap.  Typically, a needs assessment is part of a planning process that is intended to yield improvements in individuals, education/training, organizations, and/or communities. (https://en.wikipedia.org/wiki/Needs_assessment) Ultimately, a needs assessment is “a systematic process whose aim is to acquire an accurate, thorough picture of a system’s strengths and weaknesses, in order to improve it and to meet existing and future challenges.” (http://dictionary.reference.com/browse/needs+assessment) Needs assessments have a variety of purposes. They can be used to identify and address challenges in a community, to develop training strategies, or to improve the performance of organizations.

There are a variety of conceptual models of needs assessment. (For a review of various models, see http://ryanrwatkins.com/na/namodels.html.) One of the most popular is the SWOT analysis, in which researchers and action teams conduct a study to determine the strengths, weaknesses, opportunities, and threats involved in a project or business venture. In Planning and Conducting Needs Assessments: A Practical Guide (Thousand Oaks, CA: Sage Publications, 1995), Witkin and Altschuld identify a three-stage model of needs assessment, which includes pre-assessment (exploration), assessment (data gathering), and post-assessment (utilization).

Although there are various approaches to needs assessment, most include the following essential components/steps (a minimal sketch of the gap-analysis step follows the list):

  • Identify issue/concern
  • Conduct a gap analysis (where things are now vs. where they should be)
  • Specify methods for collecting information/data
  • Perform literature review
  • Collect and analyze data
  • Develop action plan
  • Produce implementation report
  • Disseminate report/recommendations to stakeholders.
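
As a concrete illustration of the gap-analysis step above, the sketch below compares current and desired levels for a few hypothetical community indicators; the indicator names and values are invented for illustration:

```python
# Minimal gap-analysis sketch: "what is" vs. "what should be."
# The indicators and values are hypothetical, for illustration only.
indicators = {
    # indicator: (current level, desired level)
    "high school graduation rate (%)": (78, 90),
    "adults completing job training (%)": (42, 60),
    "clients served per month": (150, 250),
}

for name, (current, target) in indicators.items():
    gap = target - current
    print(f"{name}: current = {current}, target = {target}, gap = {gap}")
```
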
Why Conduct a Needs Assessment

Needs assessments can be used to identify real-world challenges, to formulate plans to correct inequities, and to involve critical stakeholders in building consensus and mobilizing resources to address identified challenges.  For non-profit organizations, needs assessments: 1) use data to identify an unaddressed or under-addressed need; 2) help to more effectively utilize resources to address a given problem; 3) make programs measurable, defensible, and fundable; and 4) inform, mobilize, and re-energize stakeholders.  Needs assessments can be used with an organization’s internal and external stakeholders and constituents.

Brad Rose Consulting Inc. has extensive experience in designing and implementing needs assessments for non-profit organizations, educational institutions, and health and human service programs.  We’d welcome a chance to speak with you and your colleagues about how we can help you to conduct a needs assessment. To learn more about our assessment methods visit our Data collection & Outcome measurement page.

Resources

SWOT Analysis:
https://en.wikipedia.org/wiki/SWOT_analysis

Pyramid Model of Needs Assessment
http://needsassessment.missouri.edu/

Needs Assessment: Strategies for Community Groups and Organizations
http://www.extension.iastate.edu/communities/assess

Needs Assessment 101
http://www.cdc.gov/nccdphp/dnpao/hwi/programdesign/needsassessment101.htm

U.S. Department of Education
http://www2.ed.gov/admins/lead/account/compneedsassessment.pdf

Needs Assessment: A User’s Guide
http://books.google.com/books/about/Needs_assessment.html?id=Ek_l0MOsZHoC

July 23, 2014
23 Jul 2014

Helpful Resources: Program Evaluation Supports Strategic Planning

As mentioned in a previous blog post, program evaluation can play an important role in an organization’s strategic planning initiatives. This is especially true in non-profit organizations, human service agencies, K-12, and higher education institutions, all of which must rely on non-market data for evidence of program effects. Evaluation can help these organizations to identify, gather, and analyze data with which to judge the impact of their activities and to strengthen current, or redirect future, efforts. Only with clear and accurate information can a non-profit organization take stock of its effectiveness and make informed choices about needed changes in direction. As Heather Tunis and Maura Harrington note, “An evaluation plan helps refine data collection and assessment practices so that the information is most useful to advancing the organization’s mission and the objectives of the program being evaluated. Evaluation is a key component of being a learning organization.” (Non-profits: Strategic Planning and Future Program Evaluation) As Mark Fulop observes, “Indeed nonprofits that embrace evaluation as strategy will be driven by internal excellence rather than an external locus of control. Nonprofits that embrace evaluation as strategy will strengthen not only their organizational core but the centrality of their place in solving social needs.” (“The Roles of Strategic Evaluation in Non-profits”) To learn more about our evaluation methods visit our Data collection & Outcome measurement page.

Also, here are a few links to resources about strategic planning and program evaluation:

From CDC, “Using Program Evaluation to Improve Programs: Strategic Planning” http://www.cdc.gov/HealthyYouth/evaluation/pdf/sp_kit/sp_toolkit.pdf

“Strategic Planning Resources for Non-Profits” http://nonprofitanswerguide.org/faq/strategic-planning/

“Why Strategic Planning for Non-profits is Important,” http://www.event360.com/blog/why-nonprofit-strategic-planning-is-important/

“What is strategic planning, and why should all schools have a strategic plan?” http://www.strategicplanning4schools.com/overview.html

From the National Alliance for Media Arts and Culture, “Basic Steps to a Strategic Planning Process,” http://www.namac.org/strategic-planning-steps

From the World Bank, “Strategic Planning- A Ten-Step Guide” http://siteresources.worldbank.org/INTAFRREGTOPTEIA/Resources/mosaica_10_steps.pdf

July 7, 2014
07 Jul 2014

Helpful Resources: Literature Reviews

Occasionally, I encounter a resource that I think may be useful to clients and colleagues.  I recently had the pleasure of enlisting the help of Green Heron Information Services, which helped me conduct a literature review for a project I was working on.  I’d like to share with you some of the central ideas about doing literature reviews—ideas that may be helpful to both evaluators and grant seekers—and encourage you to connect with Matt Von Hendy, the president of Green Heron Information Services: (240) 401-7433, info@greenheroninfo.com, or www.greenheroninfo.com.

If you are like most people, you probably have not thought about literature reviews since college or graduate school, until you need to write one for a contract report, journal article, or grant proposal.  Just a quick review: a literature review is a piece of work that provides an overview of published information on a particular topic or subject, usually within a specific period of time, and discusses critical points of the current state of knowledge in the field, including major findings as well as theoretical and methodological contributions.  It will generally seek to present a summary of the important works, but also to provide a synthesis of this information.

Literature reviews matter for a number of reasons:  they demonstrate a strong knowledge of the current state of research in the field or topic; they show what issues are being discussed or debated and where research is headed; and they provide excellent background information for placing a program, initiative or grant proposal in context.   In short, a well-written literature review can provide a ‘mental road map’ of the past, present and future of research in a particular field.

Literature reviews can take many different forms, but good ones typically share certain characteristics. A good literature review:

  • Follows an organizational pattern that combines summary and synthesis
  • Tracks the intellectual progression of a thought or a field of study
  • Contains a conclusion that offers suggestions for future research
  • Is well-researched
  • Uses a wide variety of high quality resources, including journal articles, conference papers, books, and reports

Many evaluation and grant professionals, when doing research for literature reviews, use some combination of Google and other professionals as their primary information sources.  While these resources are a great place to start, they both have limitations that make them poor places to end your research.  For example, search engines such as Google filter results based on a number of factors, and very few experts can keep up to date with the amount of information that is being published.  Fortunately, many high quality tools, such as citation databases and subject-specific databases, exist that make going beyond Google relatively easy. Many evaluation professionals and proposal writers are motivated to do their own research, but there are times, such as when working in new areas or under tight deadlines, when hiring an information professional to consult, research, or write a literature review can be helpful.

You may think this all sounds good in theory, but wonder how it would work in practice.  Let me offer a very quick case study of conducting research for a literature review on evaluating programs that attempt to improve mental health outcomes for teenagers in the United States.  I would first start a list of sources by consulting experts and searching on Google.  My next step would be to look at the two major citation databases, Scopus and Web of Science, and find out which journal articles and conference papers are most cited.  I would then search the subject-specific databases that cover the health and psychology areas, such as PubMed, Medline, and PsycINFO.  Finally, I would examine resources such as academic and non-profit think tanks just to make sure I was not missing anything important.

A well-researched and well-written literature review offers a number of benefits for evaluation professionals, grant-seekers, and even funders and grantors: it can show an excellent understanding of the research in a subject area; it can demonstrate what current issues or topics are being debated and suggest directions for future research; and it can provide an excellent way to place a program, initiative, or proposal into context within the larger picture of what is happening in an area.  If you have questions about getting started on a literature review, we are always glad to offer suggestions.

Green Heron Information Services offers consulting, research and writing services in support of literature review efforts. www.greenheroninfo.com

June 12, 2014
12 Jun 2014

Questions Before Methods

Like other research initiatives, each program evaluation should begin with a question, or series of questions, that the evaluation seeks to answer.  (See my previous blog post “Approaching an Evaluation: Ten Issues to Consider.”)  Evaluation questions are what guide the evaluation, give it direction, and express its purpose.  Identifying guiding questions is essential to the success of any evaluation research effort.

Of course, evaluation stakeholders can have various interests, and therefore can have various kinds of questions about a program.  For example, funders often want to know if the program worked, if it was a good investment, and if the desired changes/outcomes were achieved (i.e., outcome/summative questions).  During the program’s implementation, program managers and implementers may want to know what’s working and what’s not working, so they can refine the program to ensure that it is more likely to produce the desired outcomes (i.e., formative questions).  Program managers and implementers may also want to know which parts of the program have been implemented, and if recipients of services are, indeed, being reached and receiving program services (i.e., process questions).  Participants, community stakeholders, funders, and program implementers may all want to know if the program makes the intended difference in the lives of those whom the program is designed to serve (i.e., outcome/summative questions).

While program evaluations seldom serve only one purpose, or ask only one type of question, it may be useful to examine the kinds of questions that pertain to the different types of evaluations.  Establishing clarity about the questions an evaluation will answer will maximize the evaluation’s effectiveness, and ensure that stakeholders find evaluation results useful and meaningful.

Types of Evaluation Questions

Although the list below is not exhaustive, it is illustrative of the kinds of questions that each type of evaluation seeks to answer.

▪ Process Evaluation Questions

  • Were the services, products, and resources of the program, in fact, produced and delivered to stakeholders and users?
  • Did the program’s services, products, and resources reach their intended audiences and users?
  • Were services, products, and resources made available to intended audiences and users in a timely manner?
  • What kinds of challenges did the program encounter in developing, disseminating, and providing its services, products, and resources?
  • What steps were taken by the program to address these challenges?

▪ Formative Evaluation Questions

  • How do program stakeholders rate the quality, relevance, and utility of the program’s activities, products, and services?
  • How can the activities, products, and services of the program be refined and strengthened during project implementation, so that they better meet the needs of participants and stakeholders?
  • What suggestions do participants and stakeholders have for improving the quality, relevance, and utility of the program?
  • Which elements of the program do participants find most beneficial, and which least beneficial?

▪ Outcome/Summative Evaluation Questions

  • What effect(s) did the program have on its participants and stakeholders (e.g., changes in knowledge, attitudes, behavior, skills and practices)?
  • Did the activities, actions, and services (i.e., outputs) of the program provide high-quality services and resources to stakeholders?
  • Did the activities, actions, and services of the program raise awareness and provide new and useful knowledge to participants?
  • What is the ultimate worth, merit, and value of the program?
  • Should the program be continued or curtailed?

The process of identifying which questions program sponsors want the evaluation to answer thus becomes a means for identifying the kinds of methods that an evaluation will use.  If, ultimately, we want to know if a program is causing a specific outcome, then the best method (the “gold standard”) is to design a randomized controlled trial (RCT).  Often, however, we are interested not just in knowing if a program causes a particular outcome, but why and how it does so.  In that case, it is essential to use a mixed-methods approach that draws not just on quantitative outcome data comparing the outcomes of treatment and control groups, but also on qualitative methods (e.g., interviews, focus groups, direct observation of program functioning, document review, etc.) that can help elucidate why what happens happens, and what program participants experience.
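
As a minimal illustration of the random assignment at the heart of an RCT design, the sketch below divides a hypothetical participant list into treatment and control groups; the participant names and group sizes are invented:

```python
# Minimal sketch of random assignment for an RCT-style evaluation design.
# The participant list and group sizes are hypothetical.
import random

random.seed(7)  # fixed seed so the assignment can be reproduced

participants = [f"participant_{i:03d}" for i in range(1, 41)]
random.shuffle(participants)

half = len(participants) // 2
treatment_group = participants[:half]  # will receive the program
control_group = participants[half:]    # will not receive the program

print(f"{len(treatment_group)} assigned to treatment, "
      f"{len(control_group)} assigned to control")
# The two groups' outcomes would later be compared to estimate the
# program's causal effect.
```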

Robust and useful program evaluations begin with the questions that stakeholders want answered, and then identify the best methods to gather data to answer those questions. To learn more about our evaluation methods visit our Data collection & Outcome measurement page.

April 10, 2014
10 Apr 2014

Interpersonal Skills Enhance Program Evaluation

Evaluator competencies—the skills, knowledge, and attitudes required to be an effective program evaluator—have been much discussed. (See, for example, The International Board of Standards for Training, Performance and Instruction Evaluator Competencies, and the CDC’s “Finding the Right People for Your Program Evaluation Team: Evaluator and Planning Team Job Descriptions.”)

A good evaluator must, of course, be able to develop a research design, carry out research in the field, analyze data, and report findings. These technical/methodological skills, although of critical importance, are not, however, the only skills that evaluators need. Effective evaluations also depend upon a range of interpersonal or relational skills that make effective and responsive interpersonal interaction possible.

I recently posted to the American Evaluation Association’s listserv a query about the importance and role of interpersonal skills in evaluation.  I asked for AEA members’ opinions about the importance of interpersonal skills in conducting successful evaluations. A number of evaluators responded to my inquiry. The central theme of those responses was that successful evaluators and successful evaluation engagements require that evaluators possess and employ key interpersonal skills, and that without these, evaluation engagements are unlikely to be successful.

Among the most prominent reasons that my AEA colleagues reported for the importance of interpersonal skills were: 1) the importance of building strong, candid, and constructive relationships, on which effective data collection depends; and 2) the importance of establishing trusting and collaborative relationships between evaluators and stakeholders in order to help ensure that evaluation findings will be utilized by clients and stakeholders. Additionally, some colleagues noted the self-evident reason for utilizing strong interpersonal skills in evaluation engagements: these skills enhance the probability that clients and stakeholders will share information and provide insights about the program. Thus effective evaluation necessarily entails trusting, open, and amicable relationships that make access to program knowledge and information possible.

Reflecting on my 25 years of professional experience, which includes observing the work of many evaluators, I think that key interpersonal characteristics include the abilities to:

  • Build rapport and trust with clients, evaluands, and stakeholders
  • Act with personal integrity
  • Display a genuine curiosity and ask good questions
  • Make oneself vulnerable in order to learn (see my earlier blog post on the role of vulnerability in learning and creativity at https://bradroseconsulting.com/secret-innovation-creativity-change/)
  • Actively listen
  • Be empathic
  • Be both socially aware and self-aware—i.e., be aware of, and manage, both one’s own and others’ emotions (including the features of emotional intelligence, i.e., capacities to accurately perceive emotions, use emotions to facilitate thinking, understand emotional meanings, and manage emotions)
  • Treat each person with respect
  • Manage conflict and galvanize collaboration
  • Problem solve
  • Facilitate collective (group) learning

These interpersonal skills are central to successful program evaluations.  Attention to these characteristics—both by program evaluators and by those seeking to engage a program evaluator (i.e., evaluation clients)—will greatly maximize the probability of successful evaluation projects.

To learn more about our evaluation methods visit our Data collection & Outcome measurement page.

Resources

Interactive Evaluation Practice: Mastering the Interpersonal Dynamics of Program Evaluation, J.A. King and L. Stevahn (Sage)

Working with Emotional Intelligence, Daniel Goleman (Bantam)

Daniel Goleman Explains Emotional Intelligence

August 29, 2013
29 Aug 2013

Evaluation Workflow

Typically, we work with clients from the early stages of program development in order to understand their organization’s needs and the needs of program funders and other stakeholders. Following initial consultations with program managers and program staff, we work collaboratively to identify key evaluation questions, and to design a strategy for collecting and analyzing data that will provide meaningful and useful information to all stakeholders.

Depending upon the specific initiative, we implement a range of evaluation tools (e.g., interview protocols, web-based surveys, focus groups, quantitative measures, etc.) that allow us to collect, analyze, and interpret data about the activities and outcomes of the specified program. Periodic debriefings with program staff and stakeholders allow us to communicate preliminary findings, and to offer program managers timely opportunities to refine programming so that they can better achieve intended goals.

Our collaborative approach to working with clients allows us to actively support program managers, staff, and funders to make data-informed judgments about programs’ effectiveness and value. At the appropriate time(s) in the program’s implementation, we write a report(s) that details findings from program evaluation activities and that makes data-based suggestions for program improvement. To learn more about our approach to evaluation visit our Data collection & Outcome measurement page.

June 26, 2013
26 Jun 2013

Understanding How Programs Work: Using Logic Models to “Map” Cause and Effect

A logic model is a schematic representation of the elements of a program and the program’s resulting effects.  A logic model (also known as a “theory of change”) is a useful tool for understanding the way a program intends to produce the outcomes (i.e., changes) it hopes to produce.  Logic models typically consist of a flowchart schematic that shows the logical connection between a program’s “inputs” (i.e., invested resources), “outputs” (program activities and actions), “short-term outcomes” (changes), “medium-term outcomes” (changes), and “long-range impacts” (changes).
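
One way to make this chain concrete is to record a logic model as a simple data structure. The sketch below is illustrative only; the job-training program and all of its entries are hypothetical:

```python
# Minimal sketch of a logic model captured as a data structure.
# The program and all of its entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)                # invested resources
    outputs: list = field(default_factory=list)               # activities and actions
    short_term_outcomes: list = field(default_factory=list)   # near-term changes
    medium_term_outcomes: list = field(default_factory=list)  # intermediate changes
    long_range_impacts: list = field(default_factory=list)    # ultimate changes

job_training_program = LogicModel(
    inputs=["funding", "trainers", "classroom space"],
    outputs=["12-week skills workshops", "one-on-one career coaching"],
    short_term_outcomes=["improved job-search skills"],
    medium_term_outcomes=["participants obtain employment"],
    long_range_impacts=["greater household economic stability"],
)

for stage, items in vars(job_training_program).items():
    print(f"{stage}: {', '.join(items)}")
```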


When developing a logic model, many evaluators and program staff rightly focus on inputs, outputs, and program outcomes (the core of the program).  However, it is critical to also include in the logic model the implicit assumptions that underlie the program’s operation, the needs that the program aspires to address, and the program’s environment, or context.  Assumptions, needs, and context are crucial factors in understanding how the program does what it intends to do, and ultimately in understanding the causal mechanisms that produce the intended changes of any program.

Without clearly understanding the causal mechanisms at work in a program, program staff may work ineffectively, placing emphasis on the wrong or ineffective activities—and ultimately fail to correctly address the challenges the program intends to address.  Similarly, without a clear understanding of the causal mechanisms that enable the program to achieve its outcomes, the program evaluation may not measure the proper outcomes, or may fail to see the changes the program, in fact, brings about.

Brad Rose Consulting, Inc. works with clients to develop simple, yet robust, logic models that explicitly document the causal mechanisms that are at work in a program.  By discussing and explicitly identifying the often implicit causal assumptions, as well as highlighting the needs for the program and the social context of a program, we not only ensure that the evaluation is properly designed and executed, we also help program implementers to ensure that they are activating the causal processes/mechanisms that yield the changes the program strives to achieve.

Other Resources:

Read Brad’s current whitepaper “Logic Modeling”

Monitoring and Evaluation: Some Tools, Methods and Approaches, The World Bank.

“The Logic Model Development Guide,” W.K. Kellogg Foundation.

Logic Model Resources at the University of Wisconsin

A Bibliography for Program Logic Models/Logframe Analysis

 

May 20, 2013
20 May 2013

The Secret to Innovation, Creativity, and Change?

The other day, I conducted a focus group with disadvantaged youth. On behalf of a local workforce investment board, I interviewed a group of 16-24 year-olds about their use of cell phones and other hand-held technologies, in order to learn whether it would be possible to reach youth with career development programming via cell phone and electronic modalities. (In my 20+ years as a professional evaluator, I’ve conducted between 50 and 60 focus groups, with participants who range across the socioeconomic spectrum—from homeless women to college presidents.) As this focus group proceeded, I became aware of two things. Many of the youth saw me, understandably, as an authority figure to whom they had to give guarded responses—at least initially—and whose trust I needed to earn. Additionally, I, too, felt vulnerable before the group of young people, who I feared might think I was uninformed about cyber culture and the prevailing circumstances of their age group. Each of us, in our own ways, felt “vulnerable.”

It occurred to me that focus group members’ sense of vulnerability would yield only if I myself became more open and vulnerable. Consequently, I abandoned my interview protocol and began improvising candid and spontaneous questions. I confessed my lack of knowledge about the technologies that young people often use so comfortably, as if they were extensions of themselves. I also redoubled my efforts to enlist the opinions of each member—especially those who seemed, at first, reluctant to share their experience. As the focus group continued, I noticed that some of the initial reticence and reserve of my interlocutors began to dissolve, and even those who had not initially offered their opinions and experience began to participate fully in the group. I also noticed that as I further expressed my genuine interest in learning about their experience, the sense of who possessed authority shifted from me—the question-asker—to the youth in the group, who became the experts on the subject I was interviewing them about.

The Necessity of Vulnerability in Education

Although all of us necessarily spend much of our lives shielding ourselves from various forms of vulnerability (economic, social, emotional, etc.), research is beginning to show that psychological vulnerability, and the willingness to risk social shame and embarrassment, are essential for genuine learning, creativity, and path-breaking innovation. In a recent TED presentation (click here to listen), Brené Brown, a research professor at the University of Houston Graduate School of Social Work who has spent the last decade studying vulnerability, courage, authenticity, and shame, discusses the importance of making mistakes and enduring potential embarrassment in order to learn new things and make new connections. Brown highlights the significance of making ourselves vulnerable (i.e., taking risks, enduring uncertainty, handling emotional exposure) so that we can genuinely connect with others, and learn from them. Fear of failure and fear of vulnerability (especially fear of social shame), she says, too often get in the way of our learning from others. Moreover, we are often deathly afraid of making mistakes. (See our recent post “Fail Forward: What We Can Learn from Program ‘Failure.’” You can also listen to the entire NPR TED Hour on the importance of mistakes to the process of learning, here.) Ultimately, we must embrace, rather than deny, vulnerability if we are to connect, and thereby learn.

I’ve conducted research for most of my professional life. Reflecting on that experience, I realize that I’ve learned the most from people and situations when I’ve been willing to make myself vulnerable, to be fully present, and to authentically engage others. As in the above-mentioned focus group, and in many that preceded it, the moments when I’ve allowed myself to be open and available—unconcerned with knowing all the right answers in advance—are precisely the moments when I’ve learned the most important lessons and gained the most insight into a given phenomenon.

Successful program evaluations require effective, constant, and adaptive learning—often in fluid, uncertain, and continually evolving contexts. Genuine learning occurs when we make ourselves vulnerable enough to sincerely engage others and connect with them, and when we acknowledge what we don’t know, which is the first step toward genuine knowledge. To learn more about our adaptive approach to evaluation, visit our Feedback & Continuous improvement page.

May 9, 2013
09 May 2013

Listening to Those Who Matter Most, the Beneficiaries

“Listening to Those Who Matter Most, the Beneficiaries” (Spring 2013, Stanford Social Innovation Review) highlights the importance of incorporating the perspectives of program beneficiaries (participants, clients, service recipients, etc.) into program evaluations.  The authors note that non-profit organizations, unlike their counterparts in health care, education, and business, are often not as effective at gathering feedback and input from those they serve.  Although extremely important, the collection of opinions and perspectives from program participants presents three fundamental challenges: 1) it can be expensive and time intensive; 2) it is often difficult to collect data, especially from disadvantaged and minimally literate populations; and 3) honest feedback can make us (i.e., program funders and program implementers) uncomfortable, especially if program beneficiaries don’t think that programs are working the way they are supposed to.

As the authors point out, feedback from participants is important for two fundamental reasons. First, it provides a voice to those who are served. As Bridgespan Group partner Daniel Stid notes, “Beneficiaries aren’t buying your service; rather a third party is paying you to provide it to them.  Hence the focus shifts more toward the requirements of who is paying, versus the unmet needs and aspirations of those meant to benefit.”  Second, gathering and analyzing the perspectives and opinions of beneficiaries can help program implementers to refine programming and make it more effective.

The authors of “Listening to Those Who Matter Most, the Beneficiaries,” make a strong case for systematically collecting and utilizing beneficiary input. “Beneficiary Feedback isn’t just the right thing to do, it is the smart thing to do.”

Our experience in designing and conducting program evaluations has shown the value of soliciting the views, perspectives, and narrative experiences of program beneficiaries.  Beneficiary feedback is a fundamental component of our program evaluations—whether we are evaluating programs that serve homeless mothers or programs that serve college students.  We’ve had 20 years of experience conducting interviews, focus groups, and surveys designed to efficiently gather and productively use information from program participants.  While the authors of “Listening to Those Who Matter Most, the Beneficiaries” rightly suggest that such efforts can be resource intensive, we’ve developed strategies for maximizing the effectiveness of these techniques while minimizing the cost of their implementation. To learn more about our evaluation methods, visit our Data collection & Outcome measurement page.

April 5, 2013
05 Apr 2013

Helpful Link Resources

Periodically, I discover and like to share links to resources related to program evaluation.  I think these links can be useful for colleagues in the non-profit, foundation, education, and government sectors.  Here are some links that may be of interest.

Links to typical program outcomes and indicators are available from the Urban Institute.   These links include outcomes for a variety of programs, including arts programs, youth mentoring programs, and advocacy programs.  All in all, this site has outcomes and indicators for 14 specific program areas.
Link here.

The perspective of program participants is a very important source of data about the effects of a program, yet this perspective is often overlooked.  An article from the Stanford Social Innovation Review entitled “Listening to Those Who Matter Most, the Beneficiaries” provides good insight into why program participants’ perspective is so valuable.
Link here.

The Foundation Center has a database of over 150 tools for assessing the social impact of programs.  While there are dozens and dozens of useful tools here for you to browse, take a look at “A Guide to Actionable Measurement,” from the Gates Foundation and “Framework for Evaluation” from the CDC.
Link here.

The Annie E. Casey Foundation’s “A Handbook of Data Collection Tools: Companion to ‘A Guide to Measuring Advocacy and Policy’” may be helpful for organizations seeking to effect changes in public perceptions and public policy.
Link here.

In upcoming posts, I will share additional links to tools and resources.

March 19, 2013
19 Mar 2013

Transforming “Data” Into Knowledge

In his recent article in the New York Times, “What Data Can’t Do” (February 18, 2013; visit here), David Brooks discusses some of the limits of “data.”

Brooks writes that we now live in a world saturated with gargantuan data collection capabilities, and that today’s powerful computers can handle huge data sets which “can now make sense of mind-bogglingly complex situations.” Despite these analytical capacities, there are a number of things that data can’t do very well. Brooks remarks that data is unable to fully capture the social world; often fails to integrate and deal with the quality (vs. the quantity) of social interactions; and struggles to make sense of “context,” i.e., the real environments in which human decisions and human interactions are inevitably embedded. (See our earlier blog post “Context is Critical.”)

Brooks insightfully notes that data often “obscures values,” by which he means that data often conceals the implicit assumptions, perspectives, and theories on which they are based. “Data is never ‘raw,’ it’s always structured according to somebody’s predispositions and values.” Data is always a selection of information. What counts as data depends upon what kinds of information the researcher values and thinks is important.

Program evaluations necessarily depend on the collection and analysis of data because data constitutes important measures and indicators of a program’s operation and results. While evaluations require data, data alone, though necessary, is insufficient for telling the complete story about a program and its effects. To get at the truth of a program, it is necessary 1) to discuss both the benefits and limitations of what constitutes “the data”—to understand what counts as evidence; 2) to use multiple kinds of data—both quantitative and qualitative; and 3) to employ experience-based judgment when interpreting the meaning of data.

Brad Rose Consulting, Inc. addresses the limitations pointed out by David Brooks by working with clients and program stakeholders to identify what counts as “data,” and by collecting and analyzing multiple forms of data. We typically use a multi-method evaluation strategy, one which relies on both quantitative and qualitative measures. Most importantly, we bring to each evaluation project our experience-based judgment when interpreting the meaning of data, because we know that to fully understand what a program achieves (or doesn’t achieve), evaluators need robust experience to transform mere information into genuine, usable knowledge. To learn about our diverse evaluation methods, visit our Data collection & Outcome measurement page.

January 18, 2013
18 Jan 2013

Learning by Changing: Lessons from Action Research

What is Action Research?
Action Research is a method of applied, often organization-based, research whose fundamental tenet is that we learn through action, through doing and reflecting. Action research is used in “real world” situations, rather than in ideal, experimental conditions, and it focuses on solving real-world problems.

Action Research (AR) has a number of strands, including participatory action research, emancipatory research, co-operative inquiry, and appreciative inquiry; all share a commitment to positively changing a concrete organizational or social situation through a deliberate process of taking action and reflecting on cycles of emergent learning. Kurt Lewin, one of the original theorists of AR, said: “If you want truly to understand something, try to change it.” Ultimately, Action Research is about learning through doing, indeed, learning through changing.

Collaboration and Co-Learning
Although Action Research uses many of the same methodologies as positivist empirical science (observation, collection of data, etc.), AR typically involves collaborating with, and gathering input from, the people who are likely to be affected by the research. As Gilmore, Krantz, and Ramirez point out in their article “Action Based Modes of Inquiry and the Host-Researcher Relationship,” Consultation 5.3 (Fall 1986): 161, “… there is a dual commitment in action research to study a system and concurrently to collaborate with members of the system in changing it in what is together regarded as a desirable direction. Accomplishing this twin goal requires the active collaboration of researcher and client, and thus it stresses the importance of co-learning as a primary aspect of the research process.”
(Retrieved from http://www.web.ca/robrien/papers/arfinal.html#_edn1)

Collaboration, Stakeholder Involvement, and Constructive Judgment for Program Strengthening
Brad Rose Consulting draws on the key ideas of Action Research—collaboration and stakeholder involvement—to ensure that its evaluations are grounded in, and reflect, the experience of program stakeholders. Because we work at the intersection of program evaluation (i.e., does a program produce its intended results?) and organization development (i.e., how can an organization’s performance be enhanced, and how can we ensure that it better achieves its goals and purposes?), we know that the success of program evaluations depends in large part upon the involvement of all program stakeholders.

This means that we work with program stakeholders (e.g., program managers, program staff, program participants, community members, etc.) to understand the intentions, processes, and experiences of each group. Brad Rose Consulting, Inc. begins each evaluation engagement by listening to clients and participants—to their aspirations, their needs, their understanding of program objectives, and their experience (both positive and negative) of participating in programs. Furthermore, we engage clients not as passive recipients of “expert knowledge,” but rather as co-learners who seek both to understand whether a program is working and how it can be strengthened to better achieve its goals.

Ultimately, we make evaluative judgments about the effectiveness of a program, but our approach to making such judgments is guided by our commitment to constructive judgments that help clients to achieve both intended programmatic outcomes (program results) and desired organizational goals. To learn more about our adaptive approach to evaluation, visit our Feedback & Continuous improvement page.

December 17, 2012
17 Dec 2012

Context is Critical

“Context matters,” Debra Rog, past President of the American Evaluation Association, reported in her address to the 2012 meeting of the AEA, and recently wrote in her insightful article, “When Background Becomes Foreground: Toward Context-Sensitive Evaluation Practice” (New Directions for Evaluation, No. 135, Fall 2012).
Indeed, as Rog correctly points out, program evaluators (and program sponsors) often ignore program context at their own peril.  Too often evaluations begin with attention focused on methodology, only to discover later—often when it’s too late—that the results, meanings, and uses of evaluation findings have been hampered by insufficient consideration of the context in which the evaluation is conducted.

Five Aspects of Context
Rog identifies five aspects of context that directly and indirectly affect the selection, design, and ultimate success of an evaluation:
1. the nature of the problem that a program or intervention seeks to address
2. the nature of the intervention—how the structure, complexity, and dynamics of the program (including program life cycle) affect the selection and implementation of the evaluation approach
3. the broader environment (or setting) in which the program is situated and operates (for example, the availability of affordable housing may profoundly affect the outcomes and successes of a program intended to help homeless persons access housing)
4. the parameters of the evaluation, including the evaluation budget, allotted time for implementation of evaluation activities, and the availability of data
5. the decision-making context for the evaluation—who the decision-makers are that will use evaluation findings, which types of decisions they need to make, and which standards of rigor and levels of confidence those decision-makers require

Context-Sensitive Evaluations – What to Look For
Rog underscores the importance of conducting “context-sensitive” evaluations—evaluations that first consider the various aspects of the context in which programs operate and in which program evaluation activities will occur.  She makes a plea to evaluators and evaluation sponsors to refrain from a “methods-first approach,” which too often fetishizes methodologies at the cost of conducting appropriate evaluations that can be of maximum value and use to all program stakeholders.

In our experience, the most effective context-sensitive evaluations address:

  • Who program stakeholders are (funders, program managers, program participants, community members, etc.)
  • What stakeholders need to learn about a program’s operation and outcomes
  • The social and economic context of the program’s operation
  • Key questions to guide the evaluation research
  • Research methodologies that provide robust and cost-effective findings
  • A logic model that clearly specifies the program activities and results
  • A wide range of evaluation research methodologies and data collection strategies to ensure that program results are systematically and rigorously measured
  • Clear, accessible reports so that all stakeholders benefit from evaluation findings
  • Detailed recommendations for how sponsors and program managers can strengthen further efforts

When it comes to program evaluation, not only does context matter, it is on the critical path to getting the best results. To find out how we consider context in our evaluation methods, visit our Data collection & Outcome measurement page.

November 7, 2012
07 Nov 2012

Fail Forward: What We Can Learn from Program “Failure”

Vilfredo Pareto, an Italian economist, sociologist, and philosopher, observed: “Give me a fruitful error any time, full of seeds, bursting with its own corrections. You can keep your sterile truth for yourself.”  Programs frequently achieve some of their original goals while missing others.  In some cases, they achieve unintended results, both desirable and undesirable.  In other cases, programs fail to achieve any of the outcomes (i.e., changes, results) they originally intended.

Discovering New Information in Failure
Rather than dismissing a program’s unachieved results as mere program failure, we need to rethink what we can learn from failures. What do program failures tell us about what we do that’s ineffective or otherwise misses the mark? By understanding how and why programs don’t achieve the results they intend, we can design and execute improved programs in the future. Notably, some psychological research suggests that individuals learn more from failure than from success. Our goals should be to learn from our defeats and to surmount them—especially in programs that address critical social, educational, and human service needs. Learning from the challenges that confront these kinds of programs can have a powerful impact on the success of future programming.

Of course, programs shouldn’t seek to fail, but they should seek to learn from the challenges they encounter.  Constructive program evaluation can help organizations learn what they need to do more effectively, and point the way to strengthened programming and enhanced results.  Constructive program evaluation identifies challenges, analyzes why such challenges detract from desired outcomes, and helps program sponsors and implementers understand how to strengthen and refine programming so that the next iteration achieves its goals.

Brad Rose Consulting, Inc. is committed to conducting program evaluations that help program managers, funders, and stakeholders to ensure successful program design, accurately measure results, and make timely adjustments in order to maximize positive program impacts. To learn more about our commitment to learning from failure, visit our Feedback & Continuous improvement page.

Samuel Beckett wrote, “Ever tried.  Ever failed.  No matter.  Try again.  Fail again.  Fail better.”
See: “Embracing Failure,” an article at the Asian Development Bank
http://www.adb.org/sites/default/files/pub/2010/embracing-failure.pdf

See also the website www.admittingfailure.com
*This post is indebted to a lively and productive discussion on the American Evaluation Association’s listserv in October 2012.