Program evaluation services that increase program value for grantors, grantees, and communities.

Archive for category: Data Collection

April 21, 2020

Developmental Evaluation in Tumultuous Times, Tumultuous Environments

The current health crisis is already having a powerful effect on non-profit organizations, many of which were economically challenged even before the onset of the COVID-19 pandemic. (See “A New Mission for Nonprofits During the Outbreak: Survival,” David Streitfeld, New York Times, March 27, 2020.) Despite these economic challenges, and as the immediate health crisis unfolds, non-profits will need information, including accurate and robust evaluation and monitoring information, more than ever.

Under conditions of uncertainty, turbulence, and rapid adaptation, non-profits will benefit from information gathered by flexible and adaptable evaluation approaches like Developmental Evaluation. “Developmental evaluation (DE) is especially appropriate for…organizations in dynamic and complex environments where participants, conditions, interventions, and context are turbulent, pathways for achieving desired outcomes are uncertain, and conflicts about what to do are high. DE supports reality-testing, innovation, and adaptation in complex dynamic systems where relationships among critical elements are nonlinear and emergent. Evaluation use in such environments focuses on continuous and ongoing adaptation, intensive reflective practice, and rapid, real-time feedback.”
As Michael Quinn Patton has recently pointed out, “All evaluators must now become developmental evaluators, capable of adapting to complex dynamic systems, preparing for the unknown, for uncertainties, turbulence, lack of control, nonlinearities, and for emergence of the unexpected. This is the current context around the world in general and this is the world in which evaluation will exist for the foreseeable future.”

Developmental Evaluation, the kind of evaluation approach Brad Rose Consulting has employed for many years, is extremely well-suited to serve the evaluation and information needs of non-profits, educational institutions, and foundations. For more information about our approach, please see our previous articles “Developmental Evaluation: Evaluating Programs in the Real World’s Complex and Unpredictable Environment” and “Evaluation in Complex and Evolving Environments.”


February 4, 2020

Bias: Seeing Things as We Are, Not as They Are

“Bias” is a tendency (either known or unknown) to prefer one thing over another; it prevents objectivity and influences understanding or outcomes in some way. (See the Open Education Sociology Dictionary.) Bias is an important phenomenon both in social science and in our everyday lives.

In her article “9 types of research bias and how to avoid them,” Rebecca Sarniak discusses the core kinds of bias in social research, including both the biases of the researcher and the biases of the research subject/respondent.

Prevalent kinds of researcher bias include:

  • confirmation bias
  • culture bias
  • question-order bias
  • leading questions/wording bias
  • the halo effect

Respondent biases include:

  • acquiescence bias
  • social desirability bias
  • habituation
  • sponsor bias

In their Scientific American article “How to Think About Implicit Bias,” Keith Payne, Laura Niemi, and John M. Doris assure us that bias is rooted not merely in prejudice but in our tendency to notice patterns and make generalizations. “When is the last time a stereotype popped into your mind? If you are like most people, the authors included, it happens all the time. That doesn’t make you a racist, sexist, or whatever-ist. It just means your brain is working properly, noticing patterns, and making generalizations…. This tendency for stereotype-confirming thoughts to pass spontaneously through our minds is what psychologists call implicit bias. It sets people up to overgeneralize, sometimes leading to discrimination even when people feel they are being fair.”

Of course, bias is not just a phenomenon relevant to social science (and evaluation) research. It affects our everyday activities too. In “10 Cognitive Biases That Distort Your Thinking,” Kendra Cherry explores a range of cognitive biases that shape everyday judgments and decisions.

In evaluation research, especially when employing qualitative methods, such as interviews and focus groups, unconscious bias can negatively affect evaluation findings. The following types of bias are especially problematic in evaluations:

  • confirmation bias, when a researcher forms a hypothesis or belief and uses respondents’ information to confirm that belief.
  • acquiescence bias, also known as “yea-saying” or “the friendliness bias,” when a respondent tends to agree with, and be positive about, whatever the interviewer presents.
  • social desirability bias, when respondents answer questions in a way that they think will lead to being accepted and liked. Some respondents will report inaccurately on sensitive or personal topics in order to present themselves in the best possible light.
  • sponsor bias, when respondents know – or suspect – the interests and preferences of the research’s sponsor or funder, and modify their answers accordingly.
  • leading questions/wording bias, when the researcher, by elaborating on a respondent’s answer, puts words in the respondent’s mouth in an effort to confirm a hypothesis or build rapport, or because the researcher overestimates their understanding of the respondent.

It’s important to strive to eliminate bias both in our personal judgments and in social research. (For an extensive list of cognitive biases, see here.) Awareness of potential biases can alert us to when bias, rather than impartiality, influences our methods and affects our judgments.

Resources:

“How to Think About Implicit Bias,” Keith Payne, Laura Niemi, and John M. Doris, Scientific American, March 27, 2018
“Bias in Social Research,” M. Hammersley and R. Gomm
October 22, 2019

What the Heck are We Evaluating, Anyway?

When you’re thinking about doing an evaluation — either conducting one yourself, or working with an external evaluator to conduct the evaluation — there are a number of issues to consider. (See our earlier article “Approaching an Evaluation—Ten Issues to Consider”)

I’d like to briefly focus on four of those issues:

  • What is the “it” that is being evaluated?
  • What are the questions that you’re seeking to answer?
  • What concepts are to be measured?
  • What are appropriate tools and instruments to measure or indicate the desired change?

1. What is the “it” that is being evaluated?

Every evaluation needs to look at a particular and distinct program, initiative, policy, or effort. It is critical that the evaluator and the client be clear about what the “it” is that the evaluation will examine. Most programs or initiatives occur in a particular context, have a history, involve particular persons (e.g., staff and clients/service recipients), and are constituted by a set of specific actions and practices (e.g., trainings, educational efforts, activities, etc.). Moreover, each program or initiative seeks to produce particular changes (i.e., outcomes). Such changes can be manifold or singular. Typically, programs and initiatives seek to produce changes in attitudes, behavior, knowledge, capacities, etc. Changes can occur in individuals and/or collectivities (e.g., communities, schools, regions, populations, etc.)

2. What are the questions that you’re seeking to answer?

Evaluations, like other investigative or research efforts, involve looking into one or more evaluation questions. For example, does a discrete reading intervention improve students’ reading proficiency? Does a job training program help recipients find and retain employment? Does a middle school arts program increase students’ appreciation of art? Does a high school math program improve students’ proficiency with algebra problems?

Programs, interventions, and policies are implemented to make valued changes in the targeted groups of people that these programs are designed to serve. Every evaluation should have some basic questions that it seeks to answer. By collaboratively defining key questions, the evaluator and the client will sharpen the focus of the evaluation and maximize the clarity and usefulness of the evaluation findings. (See “Program Evaluation Methods and Questions: A Discussion”)

3. What concepts are to be measured?

Before launching the evaluation, it is critical to clarify the kinds of changes that are desired, and then to find appropriate measures for these changes. Programs that seek to improve maternal health, for example, may track adherence to recommended health screenings, e.g., Pap smears. Evaluation questions for a maternal health program, therefore, might include: “Did the patient receive a Pap smear in the past year? Two years? Three years?” Ultimately, the question is “Does receipt of such testing improve maternal health?” (Note that this is only one element of maternal health. Other measures might include nutrition, smoking abstinence, wellness, etc.)

4. What are appropriate tools and instruments to measure or indicate the desired change?

Once the concepts (e.g., health, reading proficiency, employment, etc.) are clearly identified, it becomes possible to identify measures or indicators of those concepts and to select appropriate tools to measure them. Ultimately, we want tools that can either quantify, or qualitatively indicate, changes in the phenomena that programs are designed to affect. In the examples noted above, evaluations would seek to show changes in program participants’ reading proficiency (education), employment, and health.

We have more information on these topics:

“Approaching an Evaluation—Ten Issues to Consider”

“Understanding Different Types of Program Evaluation”

“4 Advantages of an External Evaluator”

October 8, 2019

Why Evaluate Your Program?

Program evaluation is a way to judge the effectiveness of a program. It can also provide valuable information to ensure that the program is maximally capable of achieving its intended results. Some of the common reasons for conducting program evaluation are to:

  • monitor the progress of a program’s implementation and provide feedback to stakeholders about various ways to increase the positive effects of the program
  • measure the outcomes, or effects, produced by a program in order to determine if the program has achieved success and improved the lives of those it is intended to serve or affect
  • provide objective evidence of a program’s achievements to current and/or future funders and policy makers
  • elucidate important lessons and contribute to public knowledge.

There are numerous reasons why a program manager or an organizational leader might choose to conduct an evaluation. Too often, however, we don’t do things until we have to. Program evaluation is a way to understand how a program or initiative is doing. Compliance with a funder’s evaluation requirements need not be the only motive for evaluating a program. In fact, learning in a timely way about the achievements of, and challenges to, a program’s implementation—especially in its early-to-mid stages—can be a valuable and strategic endeavor for those who oversee programs. Evaluation is a way to learn about and to strengthen programs.

 


September 24, 2019

It’s Not Just Your Credit Card Score – The Erosion of Privacy

What is Privacy Good For?

The right to privacy is a much-cherished value in America. As we noted in an earlier article, “Transparent as a Jellyfish? Why Privacy is Important,” privacy is crucial to the development of a person’s autonomy and subjectivity. When privacy is reduced by surveillance or restrictive interference—either by governments or corporations—such interference may not just affect our social and political freedoms, but may undermine the preconditions for the fundamental development and sustenance of the self.

Daniel Solove, Professor of Law at George Washington University Law School, lists ten important reasons for privacy, including: limiting the power of government and corporations over individuals; establishing important social boundaries; creating trust; and serving as a precondition for freedom of speech and thought. Solove also notes, “Privacy enables people to manage their reputations. How we are judged by others affects our opportunities, friendships, and overall well-being.” (See “Ten Reasons Why Privacy Matters,” Daniel Solove.) Julie Cohen, in “What Privacy Is For,” Harvard Law Review, Vol. 126, 2013, writes: “Privacy shelters dynamic, emergent subjectivity from the efforts of commercial and government actors to render individuals and communities fixed, transparent, and predictable. It protects the situated practices of boundary management through which self-definition and the capacity for self-reflection develop.”

Strains on Privacy

Privacy, of course, is under continual strain. In his recent article, “Uh-oh: Silicon Valley is building a Chinese-style social credit system,” (Fast Company, August 8, 2019) Mike Elgan notes that China is not alone in seeking to create a “social credit” system—a system that monitors and rewards/punishes citizen behavior.

China’s state-run system would seem to be extreme. It rewards and punishes citizens for such things as failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, and jaywalking. It also publishes lists of citizens’ social credit ratings and uses public shaming as a means to enforce desired behavior. Elgan notes that Silicon Valley has similar designs on monitoring and motivating what it deems “desirable and undesirable” behavior. The outlines of an ever-evolving corporate-sponsored, technology-based “social credit” system now include:

  • Life insurance companies can base premiums on what they find in your social media posts.
  • Airbnb, which now has more than 6 million listings in its system, can ban customers and limit their travel/accommodation choices. Airbnb can disable your account for life for any reason it chooses, and it reserves the right to not tell you the reason.
  • PatronScan, an ID-reading service, helps restaurants and bars to spot fake IDs—and troublemakers. The company maintains a list of objectionable customers designed to protect venues from people previously removed for “fighting, sexual assault, drugs, theft, and other bad behavior.” A “public” list is shared among all PatronScan customers.
  • Under a new policy Uber announced in May, if your average rating is “significantly below average,” Uber will ban you from the service.
  • WhatsApp is, in much of the world today, the main form of electronic communication. Users can be blocked if too many other users block them. Not being allowed to use WhatsApp in some countries is as punishing as not being allowed to use the telephone system in America.

The Consequences

While no one wants to endorse “bad behavior,” ceding to corporations and technology giants the power to determine which behavior counts as undesirable and punishable may not be the most just or democratic way to enforce societal norms and expectations. As Elgan observes, “The most disturbing attribute of a social credit system is not that it’s invasive, but that it’s extra-legal. Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights.” Even more ominously, as Julie Cohen writes, “Conditions of diminished privacy shrink the capacity (of self government), because they impair both the capacity and the scope for the practice of citizenship. But a liberal democratic society cannot sustain itself without citizens who possess the capacity for democratic self-government. A society that permits the unchecked ascendancy of surveillance infrastructures cannot hope to remain a liberal democracy.”

 

Resources:

“America Isn’t Far Off from China’s ‘Social Credit Score’” Anthony Davenport, Observer, February 19, 2018.

“How the West Got China’s Social Credit System Wrong,” Louise Matsakis, Wired, July 29, 2019

“Ten Reasons Why Privacy Matters” Daniel Solove

“What Privacy Is For,” Julie E. Cohen, Harvard Law Review, Vol. 126, 2013

“The Spy in Your Wallet: Credit Cards Have a Privacy Problem,” Geoffrey A. Fowler, The Washington Post, August 26, 2019.

 

April 3, 2019

Evaluation Site Visits – Seeing is Knowing

Gathering evaluative information about a program or initiative often relies on evaluators physically visiting the program’s location in order to observe program operations, to collect evidence of the program’s implementation and outcomes, and to interview staff and program participants. The empirical and observational nature of site visits offers evaluators a unique lens through which to “see” what the program actually is and how it attempts to achieve its desired outcomes.

In their influential article, “Evaluative Site Visits: A Methodological Review,” American Journal of Evaluation, Vol. 24, No. 3, 2003, pp. 341–352, Lawrenz, Keiser, and Lavoie note that, “An evaluative site visit occurs when persons with specific expertise and preparation go to a site for a limited period of time and gather information about an evaluation object either through their own experience or through the reported experiences of others in order to prepare testimony addressing the purpose of the site visit.” Unlike case studies, which are of longer duration and often of greater depth, and which seek to describe in detail the instance or phenomena under study, site visits are of limited duration and are focused on gathering data that ultimately will inform judgments about a program’s worth/value. Site visits typically involve a number of qualitative methods (e.g., individual and focus group interviews, observations, document review, etc.). For more information on the kinds of data that site visits permit, see our previous blog post “Just the Facts: Data Collection.”

Michael Quinn Patton summarizes the essential elements of an evaluation site visit:

  1. Competence– Ensure that site-visit team members have skills and experience in qualitative observation and interviewing. Availability and subject matter expertise do not suffice.
  2. Knowledge– For an evaluative site visit, ensure at least one team member, preferably the team leader, has evaluation knowledge and credentials.
  3. Preparation– Site visitors should know something about the site being visited based on background materials, briefings, and/or prior experience.
  4. Site participation– People at sites should be engaged in planning and preparation for the site visit to minimize disruption to program activities and services.
  5. Do no harm– Site‐visit stakes can be high, with risks for people and programs. Good intentions, naiveté, and general cluelessness are not excuses. Be alert to what can go wrong and commit as a team to do no harm.
  6. Credible fieldwork– People at the site should be involved and informed, but they should not control the information collection in ways that undermine, significantly limit, or corrupt the inquiry. The evaluators should determine the activities observed and people interviewed, and arrange confidential interviews to enhance data quality.
  7. Neutrality– An evaluator conducting fieldwork should not have a preformed position on the intervention or the intervention model.
  8. Debriefing and feedback– Before departing from the field, key people at the site should be debriefed on highlights of findings and a timeline of when (or if) they will receive an oral or written report of findings.
  9. Site review– Those at the site should have an opportunity to respond in a timely way to site visitors’ reports, to correct errors and provide an alternative perspective on findings and judgments. Triangulation and a balance of perspectives should be the rule.
  10. Follow-up– The agency commissioning the site visit should do some minimal follow‐up to assess the quality of the site visit from the perspective of the locals on site.

Lawrenz, Keiser, and Lavoie argue that evaluative site visits are not merely a venue in which a range of predominantly qualitative methodologies are used, but a specific kind of methodology, distinguished by its use of observation. “We believe site visit methodology is based on ontological beliefs about the nature of reality and epistemological beliefs about whether and how valid knowledge can be achieved. Ontologically, in order to conduct site visits the evaluator must assume that there is a reality that can be seen or sensed and described. Epistemologically, site visits are based in the belief that site visitors are legitimate, sensing instruments and that they can obtain valid information through first-hand encounters with the object being evaluated.”

Accordingly, site visits are where evaluators can get “the feel” of what a program is and does. As a result, site visits are a critical means through which evaluators gather and interpret data with which to make judgments about the value and effects of a program.

Resources:

“Evaluative Site Visits: A Methodological Review,” Frances Lawrenz, Nanette Keiser, and Bethann Lavoie, American Journal of Evaluation, Vol. 24, No. 3, 2003, pp. 341–352.

See Michael Quinn Patton quoted in Editors’ Note, Randi K. Nelson and Denise L. Roseland, New Directions for Evaluation, December 2017

“Using Qualitative Interviews in Program Evaluations”

Conducting and Using Evaluative Site Visits: New Directions for Evaluation, Number 156, February 2018

“Developmental Evaluation: Evaluating Programs in the Real World’s Complex and Unpredictable Environment”

November 6, 2018

Just the Facts: Data Collection

Program evaluations entail research. Research is a systematic “way of finding things out.” Evaluation research depends on the collection and analysis of data (i.e., evidence, facts) that indicate the outcomes (i.e., effects, results, etc.) of the operation of programs. Typically, evaluations want to discover evidence of whether valued outcomes have been achieved. (Other kinds of evaluations, like formative evaluations, seek to discover, through the collection and analysis of data, ways that a program may be strengthened.)

Data can be either qualitative (descriptive, consisting of words and observations) or quantitative (numerical). What counts as data depends upon the design and character of the evaluation research. Quantitative evaluations rely primarily on the collection of countable information like measurements and statistical data. Qualitative evaluations depend upon language-based data and other descriptive data. Usually, program evaluations combine the collection of quantitative and qualitative data.

There are a range of data sources for any evaluation. These can include: observations of programs’ operation; interviews with program participants, program staff, and program stakeholders; administrative records, files, and tracking information; questionnaires and surveys; focus groups; and visual information, such as video data and photographs.

The selection of the kinds of data to collect and the ways of collecting such data will be contingent on the evaluation design, the availability and accessibility of data, economic considerations about the cost of data collection, and both the limitations and potentials of each data source. The kinds of evaluation questions and the design of the evaluation research will, together, help to determine the optimal kinds of data to collect. (See our articles “Questions Before Methods” and “Using Qualitative Interviews in Program Evaluations.”)

Resources

What is ‘Data’?

What’s the Difference? Evaluation vs. Research

Evaluation, Carol H. Weiss, Prentice Hall, 2nd edition (1997)

October 24, 2018

Do Work Teams Work?

“Collaboration” and “teamwork” are the catchphrases of the contemporary workplace. Since the 1980s in the U.S., work teams have been hailed as the solution to assembly line workers’ alienation and disaffection, and white-collar workers’ isolation and disconnection. Work teams have been associated with increased productivity, innovation, employee satisfaction, and reduced turnover. Additionally, teams at work are said to have beneficial effects on employee learning, problem-solving, communication, company loyalty, and organizational cohesiveness. Teams are now found throughout the for-profit, non-profit, and governmental sectors, and much of the work of the field of organization development (OD) is devoted to fostering and sustaining teams at work.

In his recent article “Stop Wasting Money on Team Building,” Harvard Business Review, September 11, 2018, Carlos Valdes-Dapena argues that teams are less effective than many believe them to be. Based on research conducted at Mars, Inc., “a 35 billion dollar global corporation with a commitment to collaboration,” Valdes-Dapena argues that while employees like the idea of teams and teamwork, they don’t, in fact, collaborate much in teams. After conducting 125 interviews and administering questionnaires to team members, he writes, “If there was one dominant theme from the interviews, it is summarized in this remarkable sentiment: ‘I really like and value my teammates. And I know we should collaborate more. We just don’t.’”

Valdes-Dapena reports that employees “…felt the most clarity about their individual objectives, and felt a strong sense of ownership for the work they were accountable for.” He also shows that “Mars was full of people who loved to get busy on tasks and responsibilities that had their names next to them. It was work they could do exceedingly well, producing results without collaborating. On top of that, they were being affirmed for those results by their bosses and the performance rating system.” Essentially, Valdes-Dapena argues, teams may sound good in theory, but it is probably better to tap individual self-interest if you really want to get the job done.

In “3 Types of Dysfunctional Teams and How To Fix Them,” Patty McManus says that there are different types of dysfunctional work teams. She characterizes these team types as “The War Zone,” “The Love Fest,” and “The Unteam.” In “War Zone” teams, competition and factionalism among members obscure or derail the potential benefits of teamwork. In the “Love Fest” team, there is a focus on muting disagreements, highlighting areas of agreement, and avoiding tough issues in the interest of maintaining good feelings. “The Unteam” is characterized by meetings that are used for top-down communication and status updates, and that fail to build a shared perspective about the organization. In the “Unteam,” members may get along as individuals, but they have little connection to one another or to a larger purpose they all share.

McManus claims that the problems of teams may be overcome by what she terms “ecosystems teams,” i.e., teams that surface and manage differences, build healthy inter-dependence among members, and engage the organization—beyond the mere confines of the team.

Matthew Swyers also sees problems in teams at work. In “7 Reasons Good Teams Become Dysfunctional” (Inc., September 27, 2012), Swyers writes that there are seven types of problems that teams may experience:

  • absence of a strong and competent leader
  • team members more interested in individual glory than achieving team objectives
  • failure to define team goals and desired outcomes
  • a disproportionate share of the team’s work placed on a few members’ shoulders
  • lack of focus and endless debate, without movement toward an ultimate goal
  • lack of accountability and postponed timetables
  • lack of decisiveness.

Each of these writers highlights the vulnerabilities of teams at work. Although their work doesn’t foreclose the positive possibilities of team organization at work, they raise important questions about both the enthusiasm for, and the effectiveness of, teams. Additionally, each author suggests that with enlightened modifications, organizations can overcome the liabilities of teams and begin to reap the benefits of team-based employee collaboration. That said, none of these writers, and few among the other U.S.-based writers who have engaged this topic, treat the underlying assumption of workplace reform—that work can be made more habitable and humane without the independent organizations that have traditionally represented workers’/employees’ interests in the workplace. For a discussion of models of workplace reform that genuinely represent workers’ interest in more humane, collaborative, and, ultimately, productive working environments, we will need to look elsewhere.

Resources:

Workgroups vs. Teams

“Importance of Teamwork at Work,” Tim Zimmer

“Importance of Teamwork in Organizations,” Aaron Marquis

“What Makes Teams Work?” Regina Fazio Maruca, Fast Company

“Stop Wasting Money on Team Building,” Carlos Valdes-Dapena, Harvard Business Review, September 11, 2018

“3 Types Of Dysfunctional Teams And How To Fix Them,” Patty McManus, Fast Company

“When Is Teamwork Really Necessary?” Michael D. Watkins, Harvard Business Review, August 16, 2018

“7 Reasons Good Teams Become Dysfunctional,” Matthew Swyers, Inc., September 27, 2012

“Why Teams Don’t Work,” Diane Coutu, Harvard Business Review, May 2009

September 28, 2018

Stakeholders vs. Customers

Rather than “customers,” nonprofits, educational institutions, and philanthropies typically have “stakeholders.” Stakeholders are individuals and organizations that have an interest in, and may be affected by, the activities, actions, and policies of non-profits, schools, and philanthropies. Stakeholders don’t just purchase products and services (i.e., commodities); they have an interest, or “stake,” in the outcomes of an organization’s or program’s operation.

There are a number of persons and entities who may be stakeholders in a nonprofit organization. Nonprofit stakeholders may include funders/sponsors, program participants, staff, communities, and government agencies. It’s important to note that stakeholders can be either internal or external to the organization, and that stakeholders are able to exert influence—either positive or negative—over the outcomes of the organization or program.

While many nonprofits are sensitive to, and aware of, the interests of their multiple stakeholders, quite often both nonprofit leaders and staff hold implicit, unexamined ideas about who their various stakeholders are. Often, stakeholders are not delineated, and consequently, there isn’t a shared understanding of who is and isn’t a stakeholder. Conducting a stakeholder analysis can be a useful process because it raises staff and managers’ awareness of who is interested in, and who potentially influences, the success of an organization’s desired outcomes. A stakeholder analysis is a simple way to help nonprofits clarify who has a “stake” in the success of the organization and its discrete programs. It can sharpen strategic planning, clarify goals, and build consensus about an organization’s purpose.

Resources:

“Identifying Evaluation Stakeholders”

“The Importance of Understanding Stakeholders”

Business Dictionary

“Organization Development: What Is It & How Can Evaluation Help?”

September 5, 2018

A Lifetime of Learning

Pablo Picasso once said, “It takes a long time to become young.” The same may be said about education and the process of becoming educated. While we often associate formal education with youth and early adulthood, the fact is that education is an increasingly recognized lifelong endeavor that occurs far beyond the confines of early adulthood and traditional educational institutions.

In a recent article in The Atlantic, “Lifetime Learner,” John Hagel III, John Seely Brown, Roy Mathew, Maggie Wooll, and Wendy Tsu discuss the emergence of a rich and ever-expanding “ecosystem” of organizations and institutions that has arisen to serve the unmet educational needs and expectations of learners who are not enrolled in formal, traditional educational institutions (e.g., community colleges, colleges, and universities). “This ecosystem of semi-structured, unorthodox learning providers is emerging at “the edges” of the current postsecondary world, with innovations that challenge the structure and even the existence of traditional education institutions.”

Hagel et al. argue that economic forces, together with emerging technologies, are enabling learners to do an “end run” around traditional educational providers and to gain access to knowledge and information in new venues. The growing availability of, and access to, MOOCs (Massive Open Online Courses), YouTube, Open Educational Resources, and other online learning platforms enables more and more learners to advance their learning and career goals outside the purview of traditional post-secondary institutions.

While the availability of alternative, lifelong educational resources is helping some non-traditional students to advance their educational goals, it is also having an effect on traditional post-secondary institutions. Hagel, Seely Brown, Wooll, and Tsu argue that “The educational institutions that succeed and remain relevant in the future…will likely be those that foster a learning environment that reflects the networked ecosystem and (that will become) meaningful and relevant to the lifelong learner. This means providing learning opportunities that match the learner’s current development and stage of life.” The authors cite as examples community colleges that are experimenting with “stackable” credentials, which provide short-term skills and employment value while enabling students to return over time and assemble a coherent curriculum that helps them progress toward career and personal goals, and “some universities (that) have started to look at the examples coming from both the edges of education and areas such as gaming and media to imagine and conduct experiments in what a future learning environment could look like.”

The authors say that in the future colleges and universities will benefit from considering such things as:

  1. Providing the facilities and locations for a variety of learning experiences, many of which will depend on external sources for content
  2. Aggregating knowledge resources and connecting these resources with appropriate learners, rather than acting as sole “vendors” of knowledge
  3. Acting as lifelong “agents” for learners by helping them navigate a world of exponential change and an abundance of information

While these goals are ambitious, they highlight the rapidly changing terrain of continuing education. Educational “consumers” are increasingly likely to seek inexpensive and more accessible pathways to knowledge. As the authors point out, individuals’ lifelong learning needs are likely to continue to increase, so, correspondingly, the pressures on traditional post-secondary education are likely to grow. Whether learners’ needs will be more effectively addressed by re-orienting traditional post-secondary institutions or by the patchwork “ecosystem” of semi-structured, unorthodox learning providers who inhabit what the authors of “Lifetime Learner” term “the edges” of the postsecondary world is difficult to predict.

Resources:

Lifelong learning, Wikipedia

“Lifetime Learner” by John Hagel III, John Seely Brown, Roy Mathew, Maggie Wooll & Wendy Tsu, The Atlantic

“The Third Education Revolution: Schools are moving toward a model of continuous, lifelong learning in order to meet the needs of today’s economy” by Jeffrey Selingo, The Atlantic, Mar 22, 2018

August 14, 2018

Robots Grade Your Essays and Read Your Resumes


We’ve previously written about the rise of artificial intelligence and its current and anticipated effects on employment. (See links to previous blog posts below.) Two recent articles treat the effects of AI on the assessment of students and the hiring of employees.

In her recent article for NPR, “More States Opting To ‘Robo-Grade’ Student Essays By Computer,” Tovia Smith discusses how so-called “robo-graders” (i.e., computer algorithms) are increasingly being used to grade students’ essays on state standardized tests. Smith reports that Utah and Ohio currently use computers to read and grade students’ essays and that Massachusetts will soon follow suit. Peter Foltz, a research professor at the University of Colorado Boulder, observes, “We have artificial intelligence techniques which can judge anywhere from 50 to 100 features…We’ve done a number of studies to show that the (essay) scoring can be highly accurate.” Smith also notes that Utah, which once had humans review students’ essays after they had been graded by a machine, now relies on the machines almost exclusively. Cyndee Carter, assessment development coordinator for the Utah State Board of Education, reports, “…the state began very cautiously, at first making sure every machine-graded essay was also read by a real person. But…the computer scoring has proven ‘spot-on’ and Utah now lets machines be the sole judge of the vast majority of essays.”

Needless to say, despite support for “robo-graders,” there are critics of automated essay assessment. Smith details how one critic, Les Perelman at MIT, has created an essay-generating program, the BABEL generator, that creates nonsense essays designed to trick the algorithmic “robo-graders” used for the Graduate Record Exam (GRE). When Perelman submits a nonsense essay to the GRE computer, the algorithm gives the essay a near-perfect score. “It makes absolutely no sense,” Perelman observes, shaking his head. “There is no meaning. It’s not real writing. It’s so scary that it works…. Machines are very brilliant for certain things and very stupid on other things. This is a case where the machines are very, very stupid.”

Critics of “robo-graders” are also worried that students might learn how to game the system, that is, give the algorithms exactly what they are looking for, and thereby receive undeservedly high scores. Cyndee Carter, the assessment development coordinator for the Utah State Board of Education, describes instances of students gaming the state test: “…Students have figured out that they could do well writing one really good paragraph and just copying that four times to make a five-paragraph essay that scores well. Others have pulled one over on the computer by padding their essays with long quotes from the text they’re supposed to analyze, or from the question they’re supposed to answer.”

Despite these shortcomings, computer designers are learning and further perfecting computer algorithms. It’s anticipated that more states will soon use refined algorithms to read and grade student essays.

Grading student essays is not the end of computer assessment. Once you’ve left school and started looking for a job, you may find that your resume is read not by an employer eager to hire a new employee, but by an algorithm whose job is to screen job applicants. In the brief article “How Algorithms May Decide Your Career: Getting a job means getting past the computer,” The Economist reports that most large firms now use computer programs, or algorithms, to screen candidates seeking junior jobs. Applicant Tracking Systems (ATS) can reject up to 75% of candidates, so it becomes increasingly imperative for applicants to submit resumes filled with keywords that will pique the screening computers’ interest.
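To make the keyword-matching idea concrete, here is a deliberately simplified sketch; it is not a description of how any actual ATS product works, and the keywords, resume text, and scoring rule are invented purely for illustration:

```python
# Toy keyword screen: score a resume by how many of a posting's keywords it contains.
# Keywords, resume text, and threshold are hypothetical, for illustration only.
JOB_KEYWORDS = {"python", "sql", "stakeholder", "evaluation", "reporting"}

def keyword_score(resume_text: str) -> float:
    # Normalize words (strip punctuation, lowercase) and count keyword matches.
    words = {w.strip(".,;:()").lower() for w in resume_text.split()}
    matched = JOB_KEYWORDS & words
    return len(matched) / len(JOB_KEYWORDS)

resume = "Program analyst with experience in evaluation, SQL reporting, and stakeholder interviews."
score = keyword_score(resume)
print(f"Keyword match: {score:.0%}")  # a real system might reject anything below some cutoff
```

A real system would be far more elaborate (parsing, weighting, machine-learned ranking), but even this toy version shows the incentive such screening creates for applicants to mirror the posting’s vocabulary.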

Once your resume passes the initial screening, some companies use computer-driven video interviews to further screen and select candidates. “Many companies, including Vodafone and Intel, use a video-interview service called HireVue. Candidates are quizzed while an artificial-intelligence (AI) program analyses their facial expressions (maintaining eye contact with the camera is advisable) and language patterns (sounding confident is the trick). People who wave their arms about or slouch in their seat are likely to fail. Only if they pass that test will the applicants meet some humans.”

Although one might think that computer-driven screening systems would avoid some of the biases of traditional recruitment processes, it seems that AI isn’t bias-free, and that algorithms may favor applicants who have the time and monetary resources to continually retool their resumes so that they present the code words employers are looking for. (This is similar to gaming the system, described above.) “There may also be an ‘arms race’ as candidates learn how to adjust their CVs to pass the initial AI test, and algorithms adapt to screen out more candidates.”

Resources:

“More States Opting To ‘Robo-Grade’ Student Essays By Computer,” Tovia Smith, NPR, June 30, 2018

“How Algorithms May Decide Your Career: Getting a job means getting past the computer,” The Economist, June 21, 2018

“Will You Become a Member of the Useless Class?”

“Humans Need Not Apply: What Happens When There’s No More Work?”

“Will President Trump’s Wall Keep Out the Robots?”

“Welcoming our New Robotic Overlords,” Sheelah Kolhatkar, The New Yorker, October 23, 2017

“AI, Robotics, and the Future of Jobs,” Pew Research Center

“Artificial intelligence and employment,” Global Business Outlook

July 31, 2018

Are There Any Questions?

Asking questions is a critical aspect of learning. We’ve previously written about the importance of questions in our blog post “Evaluation Research Interviews: Just Like Good Conversations.” In a recent article, “The Surprising Power of Questions,” which appears in the Harvard Business Review, May-June, 2018, authors Alison Wood Brooks and Leslie K. John offer suggestions for asking better questions.

As Brooks and John report, we often don’t ask enough questions during our conversations. Too often we talk rather than listen. Brooks and John, however, note that recent research shows that by asking good questions and genuinely listening to the answers, we are more likely to achieve both genuine information exchange and effective self-presentation. “Most people don’t grasp that asking a lot of questions unlocks learning and improves interpersonal bonding.”

Although asking more questions in our conversations is important, the authors show that asking follow-up questions is critical. Follow-up questions “…signal to your conversation partner that you are listening, care, and want to know more. People interacting with a partner who asks lots of follow-up questions tend to feel respected and heard.”

Another critical component of question-asking is to be sure that we ask open-ended questions, not simply categorical (yes/no) questions. “Open-ended questions…can be particularly useful in uncovering information or learning something new. Indeed, they are wellsprings of innovation—which is often the result of finding the hidden, unexpected answer that no one has thought of before.”

Asking effective questions depends, of course, on the purpose and context of the conversation. That said, it is vital to ask questions in an appropriate sequence. Counterintuitively, asking tougher questions first and leaving easier questions until later “…can make your conversational partner more willing to open up.” On the other hand, asking tough questions too early in the conversation can seem intrusive and sometimes offensive. If the ultimate goal of the conversation is to build a strong relationship with your interlocutor, especially someone you don’t know, or don’t know well, it may be better to open with less sensitive questions and escalate slowly. Tone and attitude are also important: “People are more forthcoming when you ask questions in a casual way, rather than in a buttoned-up, official tone.”

While question-asking is a necessary component of learning, the authors remind us that “The wellspring of all questions is wonder and curiosity and a capacity for delight. We pose and respond to queries in the belief that the magic of a conversation will produce a whole that is greater than the sum of its parts. Sustained personal engagement and motivation—in our lives as well as our work—require that we are always mindful of the transformative joy of asking and answering questions.”

Resources:

“The Surprising Power of Questions,” Alison Wood Brooks and Leslie K. John, Harvard Business Review, May–June 2018, pp. 60–67

Using Qualitative Interviews in Program Evaluations

July 24, 2018

Learning to Learn

In a recent article in the May 2, 2018 Harvard Business Review, “Learning Is a Learned Behavior. Here’s How to Get Better at It,” Ulrich Boser rejects the idea that our capacities for learning are innate and immutable. He argues, instead, that a growing body of research shows that learners are not born, but made. Boser says that we can all get better at learning how to learn, and that improving our knowledge-acquisition skills is a matter of practicing some basic strategies.

Learning how to learn is a matter of:

  1. setting clear and achievable targets about what we want to learn
  2. developing our metacognition skills (“metacognition” is a fancy way to say thinking about thinking) so that as we learn, we ask ourselves questions like, Could I explain this to a friend? Do I need to get more background knowledge? etc.
  3. reflecting on what we are learning by taking time to “step away” from our deliberate learning activities, so that during periods of calm and even mind-wandering, new insights emerge

Boser says that research shows we’re more committed if we develop a learning plan with clear objectives, and that periodic reflection on the skills and concepts we’re trying to master, i.e., utilizing metacognition, makes each of us a better learner.

You can read more about strategies for learning in Boser’s article and his book.

May 8, 2018

Transparent as a Jellyfish? Why Privacy is Important

Recent revelations about Facebook and Cambridge Analytica’s use of personal data have raised serious concerns about internet privacy. It would appear that we inhabit a world in which privacy is increasingly under assault—not just from leering governments, but also from panoptic corporations.

Although the right to privacy in the US is not explicitly protected by the Constitution, constitutional amendments and case law have provided some protections to what has become a foundational assumption of American citizens. The “right to privacy” (what Supreme Court Justice Louis Brandeis once called “the right to be left alone”) is a widely held value, both in the U.S. and throughout the world. But why is privacy important?

In “Ten Reasons Why Privacy Matters,” Daniel Solove, Professor of Law at George Washington University Law School, lists ten important reasons, including: limiting the power of government and corporations over individuals; the need to establish important social boundaries; creating trust; and serving as a precondition for freedom of speech and thought. Solove also notes, “Privacy enables people to manage their reputations. How we are judged by others affects our opportunities, friendships, and overall well-being.”

Julie E. Cohen, of Georgetown, argues that privacy is not just a protection, but an irreducible environment in which individuals are free to develop who they are and who they will be. “Privacy is shorthand for breathing room to engage in the process of…self-development. What Cohen means is that since life and contexts are always changing, privacy cannot be reductively conceived as one specific type of thing. It is better understood as an important buffer that gives us space to develop an identity that is somewhat separate from the surveillance, judgment, and values of our society and culture.” (See “Why Does Privacy Matter? One Scholar’s Answer,” Jathan Sadowski, The Atlantic, Feb 26, 2013.) In the Harvard Law Review (“What Privacy Is For,” Julie E. Cohen, Vol. 126, 2013), Cohen writes, “Privacy shelters dynamic, emergent subjectivity from the efforts of commercial and government actors to render individuals and communities fixed, transparent, and predictable. It protects the situated practices of boundary management through which self-definition and the capacity for self-reflection develop.”

Cohen’s argument that privacy is a pre-condition for the development of an autonomous and thriving self is a critical and often overlooked point. If individuals are to develop, individuate, and thrive, they need room to do so, without interference or unwanted surveillance. Such conditions are also necessary for the maintenance of individual freedom, as opposed to slavery. As Orlando Patterson argued in his book Freedom, Vol. 1: Freedom in the Making of Western Culture (Basic Books, 1991), freedom historically developed in the West through a long struggle against chattel slavery. Slavery, of course, entails the subjugation of the individual/person and depends upon the thwarting of autonomy. While slavery may not entirely eradicate the full and healthy development of the “self,” it may deform and distort that development. Autonomous selves are both the product of and the condition of social freedom.

Privacy, which is crucial to the development of a person’s autonomy and subjectivity, when reduced by surveillance or restrictive interference—either by governments or corporations who gather and sell our private information—may interfere not just with social and political freedom, but with the development and sustenance of the self.  “Transparency” (especially when applied to personal information) may seem like an important feature to those who gather “Big Data,” but it may also represent an intrusion and an attempt to whittle away the environment of privacy that the self depends upon for its full and healthy development. As Cohen observes, “Efforts to repackage pervasive surveillance as innovation — under the moniker “Big Data” — are better understood as efforts to enshrine the methods and values of the modulated society at the heart of our system of knowledge production. In short, privacy incursions harm individuals, but not only individuals. Privacy incursions in the name of progress, innovation, and ordered liberty jeopardize the continuing vitality of the political and intellectual culture that we say we value.” (See “What Privacy Is For” Julie E. Cohen, Harvard Law Review, Vol. 126, 2013)

Privacy is not just important to the protection of individuals from governments and commercial interests, it is also essential for the development of full, autonomous, and healthy selves.

Resources:

“Ten Reasons Why Privacy Matters” Daniel Solove

“Why Does Privacy Matter? One Scholar’s Answer” Jathan Sadowski, The Atlantic, Feb 26, 2013

“What Privacy Is For” Julie E. Cohen, Harvard Law Review, Vol. 126, 2013

Freedom, Vol. 1: Freedom in the Making of Western Culture, Orlando Patterson, Basic Books, 1991

“Facebook and Cambridge Analytica, What You Need to Know as Fallout Widens” Kevin Granville, New York Times, Mar 19, 2018

“I Downloaded the Information that Facebook Has on Me. Yikes” Brian Chen, New York Times, Apr 11, 2018

“Right to Privacy: Constitutional Rights & Privacy Laws” Tim Sharp, Livescience, June 12, 2013

Surveillance Capitalism, Shoshana Zuboff
Listen to “Facebook and the Reign of Surveillance Capitalism” Radio Open Source
Read a review of Surveillance Capitalism

“How to Save Your Privacy from the Internet’s Clutches,” Natasha Lomas and Romain Dillet, TechCrunch, April 14, 2018

April 25, 2018

Everybody Lies

Although the author of Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are (Seth Stephens-Davidowitz, Dey St., 2017) hesitates to specify precisely what ‘big data’ is, he is confident that we are living through an era in which there is an explosion in the amount and quality of data—especially internet-embedded data—that can tell us things about humans that previous data sources and data analysis methods have been unable to reveal. In fact, Stephens-Davidowitz argues in this quick and easy-to-read book, “I am now convinced that Google searches are the most important data set ever collected on the human psyche…and I am convinced that new data increasingly available in our digital age will expand our understanding of humankind.”

Stephens-Davidowitz argues that, unlike previous, predominantly survey-based data, the emergence of big data, primarily data provided by Google and other online searches, makes insights into humans’ deepest interests, desires, behaviors, and values much more transparent and accessible. Whereas traditional survey research has a number of vulnerabilities (e.g., people are not candid, and in fact lie; they provide socially desirable answers; they exaggerate or underestimate behaviors and characteristics), analysis of internet data, together with the use of new analytical tools (e.g., Google Trends), now makes available immensely more accurate information about what people actually think, believe, and fear.

Stephens-Davidowitz illustrates the insights that the collection and analysis of internet-based data now make possible. He shows, for example, how analysis of Google searches about race revealed voters’ real (vs. survey-reported) attitudes toward race, even in otherwise seemingly liberal precincts. These attitudes, largely hidden from analysts who used traditional survey methods, made possible the surprising election of a figure like Donald Trump, who mobilized anti-immigrant sentiment and racist allusions to win the 2016 presidential election. “Surveys and conventional wisdom placed modern racism predominantly in the South and mostly among Republicans. But the places with the highest racist search rates included upstate New York, western Pennsylvania, eastern Ohio, and rural Illinois…” (p. 7) “The Google searches revealed a darkness and hatred among a meaningful number of Americans that pundits for many years missed. Search data revealed that we live in a very different society from the one academics and journalists, relying on polls, thought that we live in. It revealed a nasty, scary, and wider-spread rage that was waiting for a candidate to give voice to.” (p. 12)

Everybody Lies…examines the ways that new methods of analyzing internet data can yield accurate insights about what people are really concerned with and thinking about. In a chapter titled “Digital Truth Serum,” the author surveys a number of topics including gender bias and sexism, America’s Nazi sympathizers, and the underreported rise of child abuse during economic recessions. He repeatedly demonstrates how internet data reveal accurate and often counter-intuitive findings. In one instance, the author shows that, in fact, internet data reveal that we are more likely to interact with someone with opposing political ideas on the internet than in real life, and that in many instances—and counter to what is widely believed—liberals and conservatives often visit and draw upon the same news websites. (It appears that fascist sympathizers and liberals both rely on nytimes.com)

Everybody Lies…makes an argument for a ‘revolution’ in social science research. Stephens-Davidowitz believes that the collection and careful analysis of internet-based data promises a much more rigorous and penetrating approach to answering questions about peoples’ genuine attitudes, behaviors, and political dispositions. Along the way to demonstrating the superiority of such research, Stephens-Davidowitz touches on some important, but taboo and previously difficult-to-answer questions: What percentage of American males are gay? Is Freud’s theory of sexual symbols in dreams really accurate? At what age are political attitudes first established?  How racist are most Americans?

Stephens-Davidowitz writes, “Frankly, the overwhelming majority of academics have ignored the data explosion caused by the digital age. The world’s most famous sex researchers stick with the tried and true. They ask a few hundred subjects about their desires; they don’t ask sites like PornHub for their data. The world’s most famous linguists analyze individual texts; they largely ignore the patterns revealed in billions of books. The methodologies taught to graduate students in psychology, political science, and sociology have been, for the most part, untouched by the digital revolution. The broad, mostly unexplored terrain opened by the data explosion has been left to a small number of forward-thinking professors, rebellious grad students and hobbyists. That will change.” (p. 274)

Resources:

Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, Seth Stephens-Davidowitz, Dey St., 2017

July 27, 2017
27 Jul 2017

The Use of Surveys in Evaluation

Surveys can be an efficient way to collect information from a substantial number of people (i.e., respondents) in order to answer evaluation research questions. Typically, surveys collect information from a sample (a portion) of a broader population. When the sample is drawn by random selection of respondents from a specified sampling frame, findings can be generalized, within a calculable margin of error, to the entire population.
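As a rough illustration of what "generalized within a margin of error" implies, the short Python sketch below applies the standard sample-size formula (n = z²p(1−p)/e², with a finite-population correction). The population size and margin of error shown are hypothetical, and real survey planning also has to account for response rates and subgroup analyses, so treat a calculation like this as a starting point rather than a rule.

```python
import math

def required_sample_size(population, margin_of_error=0.05, confidence_z=1.96, p=0.5):
    """Estimate how many completed surveys are needed so that results
    generalize to the population within the given margin of error.
    Uses n0 = z^2 * p(1-p) / e^2 plus a finite-population correction.
    p = 0.5 is the most conservative (largest) assumption."""
    n0 = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.ceil(n)

# e.g., a hypothetical program with 1,200 participants, +/-5% at 95% confidence
print(required_sample_size(1200))  # roughly 292 completed responses
```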

Surveys may be conducted by phone, in person, on the web, or by mail. They may ask standardized questions so that each respondent replies to precisely the same inquiry. Like other forms of research, highly effective surveys depend upon the quality of the questions asked of respondents. The more specific and clear the questions, the more useful survey findings are likely to be. Good surveys present questions in a logical order, are simple and direct, ask about one idea at a time, and are brief.

Surveys can ask either closed-ended or open-ended questions. Closed-ended questions include multiple-choice, dichotomous, Likert-scale, rank-order, and other types of questions for which only a few answer categories are available to the respondent. Closed-ended questions provide easily quantifiable data, for example, the frequency and percentage of respondents who answer a question in a particular way.
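A minimal sketch of that kind of tabulation, using made-up responses to a single Likert item, might look like this:

```python
from collections import Counter

# Hypothetical responses to one Likert-scale item
responses = ["Agree", "Strongly agree", "Agree", "Neutral", "Disagree",
             "Agree", "Strongly agree", "Neutral", "Agree", "Strongly disagree"]

counts = Counter(responses)
total = len(responses)
for category in ["Strongly agree", "Agree", "Neutral", "Disagree", "Strongly disagree"]:
    n = counts.get(category, 0)
    # prints frequency and percentage, e.g. "Agree  4  (40%)"
    print(f"{category:<18} {n:>3}  ({n / total:.0%})")
```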

Alternatively, open-ended survey questions elicit narrative responses that constitute a form of qualitative data. They require respondents to reflect on their experiences or attitudes. Open-ended questions often begin with: "why," "how," "what," "describe," "tell me about…," or "what do you think about…" (See Open Ended Questions.) Open-ended survey questions depend heavily upon the interest, enthusiasm, and literacy level of respondents, and require extensive analysis precisely because responses do not fall into a small number of predetermined categories.
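For illustration only, the sketch below shows one crude first pass at open-ended responses: counting how often hypothetical answers touch on themes in a starter codebook. Real qualitative coding is far more interpretive and iterative than keyword matching, so treat this as a convenience for a first scan, not a substitute for careful analysis.

```python
from collections import Counter

# Hypothetical open-ended answers to "What did you gain from the program?"
answers = [
    "I feel more confident speaking with employers.",
    "The mentoring helped me stay in school.",
    "Better job skills and more confidence.",
    "My mentor listened to me.",
]

# A starter codebook mapping themes to keywords (assumed for illustration only)
codebook = {
    "confidence": ["confident", "confidence"],
    "mentoring":  ["mentor", "mentoring"],
    "employment": ["job", "employer", "employers"],
}

theme_counts = Counter()
for text in answers:
    lowered = text.lower()
    for theme, keywords in codebook.items():
        if any(word in lowered for word in keywords):
            theme_counts[theme] += 1  # count each theme at most once per answer

print(theme_counts)  # Counter({'confidence': 2, 'mentoring': 2, 'employment': 2})
```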

Administering Surveys and Analyzing Results
Surveys can be administered in a variety of ways: in person, on the phone, via mail, via the web, etc. Regardless of the mode of administration, it is important to consider, from the respondent's point of view, the factors that will maximize participation, including the accessibility of the survey, the convenience of its format, the logic of its organization, and the clarity of both the survey's purpose and its questions.

Once survey data are collected and compiled, analyses of the data may take a variety of forms. Analysis of survey data essentially entails looking at quantitative data to find relationships, patterns, and trends. "Analyzing information involves examining it in ways that reveal the relationships, patterns, trends, etc…That may mean subjecting it to statistical operations that can tell you not only what kinds of relationships seem to exist among variables, but also to what level you can trust the answers you're getting. It may mean comparing your information to that from other groups (a control or comparison group, statewide figures, etc.), to help draw some conclusions from the data." (See Community Tool Box, "Collecting and Analyzing Data.") While data analysis usually entails some kind of statistical/quantitative manipulation of numerical information, it may also entail the analysis of qualitative data, i.e., data that are usually composed of words and not immediately quantifiable (e.g., data from in-depth interviews, observations, written documents, video, etc.).
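As one hedged example of "subjecting the data to statistical operations," the sketch below runs a chi-square test of independence on a hypothetical cross-tabulation of two respondent groups. The counts are invented for illustration, and the example assumes the scipy package is available.

```python
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation: rows = respondent group, columns = reported benefit
#                 benefited   did not benefit
table = [
    [45, 15],   # program participants
    [30, 30],   # comparison group
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value suggests the pattern of answers differs between the two groups,
# i.e., group membership and reported benefit are not independent.
```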

The analysis of both quantitative and qualitative survey data (the latter typically collected through open-ended questions) is performed primarily to answer key evaluation research questions like, "Did the program make a difference for participants?" Effectively reporting findings from survey research entails not only accurate representation of quantitative findings, but also interpretation of what both quantitative and qualitative data mean. This requires telling a coherent and evocative story based on the survey data.

Brad Rose Consulting has over two decades of experience designing and conducting surveys whose findings form an essential component of program evaluation activities. The resources below provide additional sources of information on the basics of survey research.

Resources

The American Statistical Association, “What is a Survey?”

“Survey Questions and Answer Types”

“Writing Good Survey Questions”

“How to Ask Open-ended Questions”

“Guide to Survey Research,” University of Colorado

Community Toolbox: “Collecting and Analyzing Data”

Example of Survey Analysis Guidelines

March 31, 2017
31 Mar 2017

The Implications of Public School Privatization (Part 2): Betsy DeVos’ “Holy War”

In our last blog post, "The Implications of Public School Privatization," I referred to an article by Diane Ravitch that recently appeared in The New York Review of Books. That article claimed that the school privatization movement is largely composed of social conservatives, corporations, and business-friendly foundations. In a recent Rolling Stone article, "Betsy DeVos' Holy War," Janet Reitman argues that the movement to privatize public schools is also sponsored, at least in part, by those who, like Betsy DeVos, would prefer to de-secularize schools and create institutions that reflect market-friendly Christian values.

Betsy DeVos embodies a nexus of wealth and hyper-conservative Christianity. Her goals include support of "school choice" (i.e., voucher systems that direct tax money for public schools toward private and parochial schools) and, according to Reitman, the promotion of the religious colonization of public education and, more broadly, American society. (See also "Betsy DeVos Wants to Use America's Schools to Build 'God's Kingdom,'" Kristina Rizga, Mar/Apr 2017 Issue, Mother Jones.)

DeVos, as is well documented, is not deeply acquainted with public education: neither she nor her children attended public schools; she has never served on a school board, nor been an educator. DeVos, who hails from a wealthy, Calvinist, Western Michigan dynasty whose resources include her husband's multi-billion-dollar Amway fortune and her father's auto parts fortune (among other profitable ventures, her father, Edgar Prince, invented the lighted automobile sun visor), now finds herself at the helm of the federal Department of Education. She appears to be even less a friend of public education than she is familiar with it. DeVos has devoted a substantial part of her political and philanthropic career to advocating for the privatization of public schools, and her home state of Michigan has the highest number of for-profit charter schools in the nation. To learn more about DeVos' plans for public schools in America, read the intriguing article "Betsy DeVos' Holy War." The resources below offer additional insights into Secretary DeVos and her plans for public schools. And to learn more about our work with schools, visit our Higher education & K-12 page.

Resources

Betsy DeVos’ website

"Six astonishing things Betsy DeVos said — and refused to say — at her confirmation hearing," Valerie Strauss, Washington Post, January 18, 2017

"Education for Sale?" Linda Darling-Hammond, The Nation, March 27, 2017

"Betsy DeVos: Fighter for kids or destroyer of public schools?" Lori Higgins, Kathleen Gray, Detroit Free Press, November 23, 2016

"Betsy DeVos blocked from entering Washington public school by protesters," CBS News

"The Betsy DeVos Hearing Was an Insult to Democracy," Charles Pierce, Esquire

"Betsy DeVos Wants to Use America's Schools to Build 'God's Kingdom,'" Kristina Rizga, Mar/Apr 2017 Issue, Mother Jones

"Why are Republicans so cruel to the poor? Paul Ryan's profound hypocrisy stands for a deeper problem," Chauncey DeVega, March 23, 2017

March 1, 2017
01 Mar 2017

The Implications of Public School Privatization

In a recent review of two books, Education and the Commercial Mindset and School Choice: The End of Public Education, which appears in the December 8, 2016 New York Review of Books, Diane Ravitch, the former Assistant Secretary of Education during the George H.W. Bush presidency, discusses the implications of corporate designs on public education. Ravitch begins her review by reminding us that, "Privatization means that a public service is taken over by for-profit business whose highest goal is profit." In the name of market-driven efficiency, she argues, the "education industry" is likely to become increasingly similar to privatized hospitals and prisons. In these industries, as in many others, corporate owners, in their loyalty to investors' desire for profits, tend to eliminate unions, reduce employee benefits, continually cut costs of operation, and orient to serving those who are least expensive to serve.

Ravitch sketches some of the challenges posed by charter schools, noting that "…they can admit the students they want, exclude those they do not want, and push out the ones who do not meet their academic or behavioral standards." She says that charters not only "drain away resources from public schools" but also "leave the neediest, most expensive students to the public schools to educate." Moreover, as Josh Moon recently noted in his article "'School choice' is an awful choice": "If the 'failing school' is indeed so terrible that we're willing to reroute tax money from it to a private institution that's not even accredited, then what makes it OK for some students to attend that failing school?"

While some argue that charter schools can "save children from failing public schools," research on student outcomes for charter schools has shown mixed results. For example, the Education Law Center, in "Charter School Achievement: Hype vs Evidence," reports:

Research on charter schools paints a mixed picture. A number of recent national studies have reached the same conclusion: charter schools do not, on average, show greater levels of student achievement, typically measured by standardized test scores, than public schools, and may even perform worse.

The Center for Research on Education Outcomes (CREDO) at Stanford University found in a 2009 report that 17% of charter schools outperformed their public school equivalents, while 37% of charter schools performed worse than regular local schools, and the rest were about the same. A 2010 study by Mathematica Policy Research found that, on average, charter middle schools that held lotteries were neither more nor less successful than regular middle schools in improving student achievement, behavior, or school progress. Among the charter schools considered in the study, more had statistically significant negative effects on student achievement than statistically significant positive effects. These findings are echoed in a number of other studies.

In Michigan, Secretary of Education Betsy DeVos's home state, 80 percent of charter schools operate as for-profit organizations. Ravitch says, "They perform no better than public schools, and according to the Detroit Free Press, they make up a publicly subsidized $1 billion per year industry with no accountability."

Ravitch tells us that the privatization movement is largely composed of social conservatives, corporations, and business-friendly foundations. "These days, those who call themselves 'education reformers' are likely to be hedge fund managers, entrepreneurs, and billionaires, not educators. The 'reform' movement loudly proclaims the failure of American public education and seeks to turn public dollars over to entrepreneurs, corporate chains, mom-and-pop operations, religious organizations, and almost anyone else who wants to open a school."

The Trump administration is likely to further advance a public-school privatization and school voucher agenda. The extent and results of such "reforms" are hard to predict. That said, as Ravitch argues, "…there is no evidence for the superiority of privatization in education. Privatization divides communities and diminishes commitment to that which we call the common good. When there is a public school system, citizens are obligated to pay taxes to support the education of all children in the community, even if they have no children in the schools themselves. We invest in public education because it is an investment in the future of society." How continued privatization of public K-12 education will affect an increasingly economically privatized and socially and politically divided society is not yet known. To find out more about the work we do with schools, click here.

Resources

Diane Ravitch, "When Public Goes Private, as Trump Wants: What Happens?" New York Review of Books, December 8, 2016

Diane Ravitch, "Trump's Nominee for Secretary of Education Could Gut Public Ed," In These Times

Margaret E. Raymond, “A Critical Look at the Charter School Debate”

National Charter School Study (Stanford University) 2013

“Charter School Achievement: Hype vs Evidence”

Kristina Rizga, "Betsy DeVos Wants to Use America's Schools to Build 'God's Kingdom,'" Mother Jones, Jan. 17, 2017

Kevin Carey, "Dismal Results From Vouchers Surprise Researchers as DeVos Era Begins," New York Times, Feb. 23, 2017

Josh Moon, "'School choice' is an awful choice," Alabama Political Reporter

Literature Review: Research Comparing Charter Schools and Traditional Public Schools

April 28, 2016
28 Apr 2016

Evidence-Based Studies

“Evidence-based” – What is it?
"Evidence-based" has become a common adjectival term for identifying and endorsing the effectiveness of various programs and practices in fields ranging from medicine to education, from psychology to nursing, and from social work to criminal justice. The motivation for marshalling objective evidence to guide practices and policies in these diverse fields stems from the growing recognition that professional practices, whether doctoring or teaching, social work or nursing, need to be based on something more sound than custom and tradition, practitioners' habit, professional culture, received wisdom, and hearsay.

What does "evidence-based" mean?
While definitions of "evidence-based" vary, the most common characteristics of evidence-based research include: objective, empirical research that is valid and replicable, findings grounded in a strong theoretical foundation, and high-quality data and data collection procedures. The most common definition of Evidence-Based Practice (EBP) is drawn from Dr. David Sackett's original (1996) definition of "evidence-based" practice in medicine, i.e., "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient. It means integrating individual clinical expertise with the best available external clinical evidence from systematic research" (Sackett D, 1996). This definition was subsequently amended to "a systematic approach to clinical problem solving which allows the integration of the best available research evidence with clinical expertise and patient values" (Sackett DL, Strauss SE, Richardson WS, et al. Evidence-based medicine: how to practice and teach EBM. London: Churchill-Livingstone, 2000). (See "Definition of Evidenced-based Medicine.")

An evidence-based program, whether in youth development or education, consists of a set of coordinated services and activities whose effectiveness has been established by sound research, preferably scientifically based research. (See "Introduction to Evidence-Based Practice.")

In education, evidence-based practices are those practices that are based on sound research that shows that desired outcomes follow from the employment of such practices.  “Evidence-based education is a paradigm by which education stakeholders use empirical evidence to make informed decisions about education interventions (policies, practices, and programs). ‘Evidence-based’ decision making is emphasized over ‘opinion-based’ decision making.” Additionally, “the concept behind evidence-based approaches is that education interventions should be evaluated to prove whether they work, and the results should be fed back to influence practice. Research is connected to day-to-day practice, and individualistic and personal approaches give way to testing and scientific rigor.” (See, “What is Evidence-Based Education?“).

Of course, there are different kinds of evidence that can be used to show that practices, programs, and policies are effective. In a subsequent blog post I will discuss the range of evidence-based studies, from individual case studies and quasi-experimental designs to randomized controlled trials (RCTs). The quality of the evidence, as well as the quality of the study in which such evidence appears, is a critical factor in deciding whether a practice or program is not just "evidence-based" but, in fact, effective. To learn more about our data collection and measurement, click here.

Resources

Evidenced-based practice

What are Evidence-Based Interventions (EBI)?

Scientific Research and Evidence-Based Practice

Evidence-based medicine, Florida State College of Medicine

Evidenced-based studies in education

U.S. Department of Education, "What Works" Clearinghouse

Defining Evidence-Based Programs

Child and Family Services

Linking Research with Practice in Youth Development – What Works and How Do We Know

Scientifically Based Research vs. Evidence-Based Practices and Instruction

How to Evaluate Evidence-Based or Research-Based Interventions

Issues in Defining and Applying Evidence-Based Practices Criteria for Treatment of Criminal-Justice Involved Clients

Can Randomized Trials Answer the Question of What Works?

February 1, 2016
01 Feb 2016

Using Qualitative Interviews in Program Evaluations

Qualitative interviews can be an important source of program evaluation data. Both in-depth individual interviews and focus group interviews are important methods that provide insights and phenomenologically rich descriptive information that other, numerically oriented data collection methods (e.g., questionnaires and surveys) are often unable to capture. Typically, interviews are either structured, semi-structured, or unstructured. (Structured interviews are, essentially, verbally administered questionnaires, in which a list of predetermined questions is asked. Semi-structured interviews consist of several key questions that help to define the areas to be explored, but allow the interviewer to diverge from the questions in order to pursue or follow up on an idea or response. Unstructured interviews start with an opening question, but don't employ a detailed interview protocol, favoring instead the interviewer's spontaneous generation of follow-up questions. See P. Gill, K. Stewart, E. Treasure & B. Chadwick, cited below.)
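Purely as an illustration of the difference between a fully scripted questionnaire and a semi-structured guide, here is one way an evaluator might jot down a semi-structured protocol as a small data structure: a handful of key questions, each with optional probes the interviewer may or may not use. The questions and probes are hypothetical.

```python
# A minimal sketch of a semi-structured interview guide (hypothetical content).
# Unlike a structured questionnaire, the probes are optional and the interviewer
# is free to diverge from them to follow up on what the interviewee says.
protocol = [
    {
        "question": "Tell me about your experience in the program.",
        "probes": ["What stands out most?", "What surprised you?"],
    },
    {
        "question": "How, if at all, has the program changed what you do day to day?",
        "probes": ["Can you give a recent example?"],
    },
]

for item in protocol:
    print(item["question"])
    for probe in item["probes"]:
        print("   probe:", probe)
```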

Interviews, especially one-on-one, in-person interviews, allow an evaluator to participate in direct conversations with program participants, program staff, community members, and other program stakeholders. These conversations enable the evaluator to learn about interviewees' experiences, perspectives, attitudes, motivations, beliefs, personal history, and knowledge. Interviews can be the source of pertinent information that is often unavailable to other methodological approaches. Qualitative interviews enable evaluators to elicit direct responses from interviewees, probe and ask follow-up questions, gather rich and detailed descriptive data, explain and clarify interview questions in real time, observe the affective responses of interviewees, and, ultimately, conduct thorough-going explorations of topics and issues critical to the evaluation initiative.

Although less intimate, focus group interviews are another key source of evaluation data. Organized around a set of guiding questions, focus groups typically are composed of 6-10 people and a moderator who poses open-ended questions to focus group participants. Focus groups usually include people who are somewhat similar in characteristics or social roles. Participants are selected for their knowledge, reflectiveness, and willingness to engage topics or questions. Ideally – although not always possible – it is best to involve participants who don’t previously know one another. Focus groups can be especially useful in clarifying, qualifying, and/or challenging data collected through other methods. Consequently they can be useful as a tool for corroborating findings from other research methodologies.

Regardless of the specific form of qualitative interview—individual or focus group, in-person or telephone—qualitative interviews can be useful in providing data about participants’ experience in, and ultimately the effectiveness of, programs and initiatives. To find out more about our use of qualitative data visit our Data collection & Outcome measurement page.

Resources:

Learning from Strangers: The Art and Method of Qualitative Interview Studies, Robert S. Weiss, Free Press.

Interviewing as Qualitative Research: A Guide for Researchers in Education and the Social Sciences, Irving Seidman. Fourth Edition.

Methods of data collection in qualitative research: interviews and focus groups. P. Gill, K. Stewart, E. Treasure & B. Chadwick.

Advantages and Disadvantages of Four Interview Techniques in Qualitative Research,  Raymond Opdenakker.

Qualitative Research Methods Overview

Interviewing—The Robert Wood Johnson Foundation

Pros & Cons of Interviewing

Advantages and Disadvantages of Face-to-Face Data Collection

December 4, 2014
04 Dec 2014

Anecdote as Evidence

When discussing with clients potential sources of data about a program’s operations and effects, it has often been said to me, “But we just have anecdotal evidence.”  It’s as if anecdotal data don’t count.   Too often anecdotes are dismissed as unscientific and valueless—as if they are just stories.  In point of fact, anecdotes (qualitative accounts, “word-based” data) can be a valuable source of information and offer powerful insights about how a program works and the effects it produces.  When carefully collected and systematically analyzed, especially when combined with other sources of quantitative data, anecdotes can be a powerful “window” on a program.

In a recent blog post (see link below), the evaluator Michael Quinn Patton reflects on the value and utility of anecdotal information. Patton shows that, when collected in sufficient quantity, compared (or "triangulated") with other kinds of data, and systematically and sensibly analyzed, anecdotes can provide important information about the character and meaning of a given phenomenon. Furthermore, anecdotes are often the starting place for hypotheses and experiments that ultimately produce quantitative evidence of phenomena. William Trochim underscores the importance of word-based, qualitative data (of which anecdotes are a specific type) when he points out, "All quantitative data is based on qualitative judgment. Numbers in and of themselves can't be interpreted without understanding the assumptions which underlie them…" (David Foster Wallace made a similar point, from an entirely different vantage point, in Consider the Lobster: "You can't escape language. Language is everything and everywhere. It's what lets us have anything to do with one another." p. 70.)

Trochim goes on to say,

“All numerical information involves numerous judgments about what the number means. The bottom line here is that quantitative and qualitative data are, at some level, virtually inseparable.  Neither exists in a vacuum or can be considered totally devoid of the other. To ask which is “better” or more “valid” or has greater “verisimilitude” or whatever ignores the intimate connection between them. To do good research we need to use both the qualitative and the quantitative.”

Patton reminds us of the importance of anecdotes when he quotes N.G. Carr, author of The Shallows: What the Internet Is Doing to Our Brains (New York: W. W. Norton, 2010):

"We live anecdotally, proceeding from birth to death through a series of incidents, but scientists can be quick to dismiss the value of anecdotes. 'Anecdotal' has become something of a curse word, at least when applied to research and other explorations of the real. . . . The empirical, if it's to provide anything like a full picture, needs to make room for both the statistical and the anecdotal.

The danger in scorning the anecdotal is that science gets too far removed from the actual experience of life, that it loses sight of the fact that mathematical averages and other such measures are always abstractions."

I believe that it is important to use multiple kinds of information to understand what programs do and what their outcomes are. Quantitative data is essential for understanding abstract trends and for getting at the "larger picture." That said, it is nearly impossible to make sense of quantitative data without using language to reveal the assumptions, implications, explanations, and meanings behind the numbers. Anecdotal data, as one kind of qualitative data, is critical to effective program evaluation research. To learn more about how we utilize both quantitative and qualitative data, visit our Data collection & Outcome measurement page.

 

Resources:

Michael Quinn Patton, “Anecdote as Epithet – Rumination #1 from Qualitative Research and Evaluation Methods”
http://betterevaluation.org/blog/anecdote_as_epithet

William Trochim, “The Qualitative Debate”
http://www.socialresearchmethods.net/kb/qualdeb.php

“The Qualitative- Quantitative Debate”
http://writing.colostate.edu/guides/page.cfm?pageid=1383

Video about Qualitative, Quantitative, and Mixed-Methods Research
http://sites.macewan.ca/inspire/2014/10/09/qualitative-versus-quantitative-the-validity-debate/

A Table Summarizing Qualitative versus Quantitative Research: Key Points in a Classic Debate
http://wilderdom.com/research/QualitativeVersusQuantitativeResearch.html

Revisiting the Quantitative-Qualitative Debate: Implications for Mixed-Methods Research
http://www.webpages.uidaho.edu/css506/506%20Readings/sale%20mixed-methods.pdf

November 14, 2014
14 Nov 2014

Evaluation Research Interviews:  Just Like Good Conversations

Qualitative research interviews are a critical component of program evaluations. In-person and telephone interviews are especially valuable because they allow the evaluator to participate in direct conversations with program participants, program staff, community members, and other stakeholders. These conversations enable the evaluator to learn, in a rich conversational venue, about interviewees' experiences, perspectives, attitudes, and knowledge. Unlike questionnaires and surveys, which typically require structured, categorical responses to standardized written questions so that data can be quantified, qualitative interviews allow for deeper probing of interviewees and the use of clarifying follow-up questions, which can surface information that often remains unrevealed in survey/questionnaire formats.

Although research interviews are guided by a predetermined, written protocol that contains guiding questions, excellent interviews require a nimble and improvisational interviewer who can thoughtfully and swiftly respond to interviewees' observations and reflections. Qualitative research interviews also require that the interviewer be a skilled listener and thoughtful interpreter of verbally presented data. Interviewers must listen carefully both to the denotative narrative "text" of the interviewee and to the connotative subtext (the implied intent, tacit sub-themes, and connotations) that the interviewee presents.

The most productive qualitative interviews are those that approximate a good conversation. This requires the interviewer to establish a comfortable atmosphere; ask interesting and germane questions; display respect for the interviewee; and create a sense of equality, candor, and reciprocity between the interviewer and the interviewee. Good interviews are not only a source of rich and informative data for the interviewer; they can also be a reflective learning opportunity for the interviewee. As in every good conversation, both parties should benefit. To learn more about our qualitative evaluation methods, visit our Data collection & Outcome measurement page.

Resources:

Discussion of interview basics:
http://www.socialresearchmethods.net/kb/intrview.php

Tip Sheet for Qualitative Interviewing:
http://dism.ssri.duke.edu/pdfs/Tipsheet%20-%20Qualitative%20Interviews.pdf

Advantages and disadvantages of different research methods (personal interviews, telephone surveys, mail surveys, etc.):
http://www.jeffandersonconsulting.com/marketing-research.php/survey-research/research-methods

Interviews: An Introduction to Qualitative Research Interviewing, by Steinar Kvale, Sage Publications, Thousand Oaks California, 1996:
http://www.inside-installations.org/OCMT/mydocs/Microsoft%20Word%20-%20Booksummary_Interviews_SMAK_2.pdf

About research interviewing:
https://en.wikipedia.org/wiki/Interview

Interviewing in qualitative research
http://peoplelearn.homestead.com/MEdHOME/QUALITATIVE/Chap15.Interview.pdf

Interviewing in educational research:
http://www.history.ucsb.edu/faculty/marcuse/projects/oralhistory/2006MEBrennerInterviewInEducResearchOCR.pdf

August 26, 2014
26 Aug 2014

Focus Groups

Pioneered by market researchers and mid-20th century sociologists, focus groups are a qualitative research method that involves small groups of people in guided discussions about their attitudes, beliefs, experiences, and opinions about a selected topic or issue.  Often used by marketers who obtain feedback from consumers about a product or service, focus groups have also become an effective and widely recognized social science research tool that enables researchers to explore participants’ views, and to reveal rich data that often remain under-reported by other kinds of data collection strategies (e.g., surveys, questionnaires, etc. ).

Organized around a set of guiding questions, focus groups typically are composed of 6-10 people and a moderator who poses open-ended questions to participants. Focus groups usually include people who are somewhat similar in characteristics or social roles. Participants are selected for their knowledge, reflectiveness, and willingness to engage topics or questions. Ideally, although not always possible, it is best to involve participants who don't already know one another.

Focus group conversations enable participants to offer observations, define issues, pose and refine questions, and create informative debate and discussion. Focus group moderators must be attentive, pose useful and creative questions, create a welcoming and non-judgmental atmosphere, and be sensitive to non-verbal cues and the emotional tenor of participants. Typically, focus group sessions are recorded or videotaped so that researchers can later transcribe and analyze participants' comments. Often an assistant moderator will take notes during the focus group conversation.

Focus groups have advantages over other data collection methods. They often employ group dynamics that help to reveal information that would not emerge from an individual interview or survey; they produce relatively quick, low-cost data (an 'economy of scale' compared to individual interviews); they allow the moderator to pose appropriate and responsive follow-up questions; they enable the moderator to observe non-verbal data; and they often produce greater and richer data than a questionnaire or survey.

Focus groups also can have some disadvantages, especially if not conducted by an experienced and skilled moderator. Depending upon their composition, focus groups are not necessarily representative of the general population; respondents may feel social pressure to endorse other group members' opinions or to refrain from voicing their own; and group discussions require effective "steering" so that key questions are answered and participants don't stray from the topic.

Focus groups are often used in program evaluations. I have had extensive experience conducting focus groups with a wide range of constituencies. During my 20 years of experience as a program evaluator, I've moderated focus groups composed of homeless persons; disadvantaged youth; university professors and administrators; K-12 teachers; K-12 and university students; corporate managers; and hospital administrators. In each of these groups I've found that it's been beneficial to have a non-judgmental attitude, be genuinely curious, exercise gentle guidance, and respect the opinions, beliefs, and experiences of each focus group member. A sense of humor can also be extremely helpful. (See our previous posts "Interpersonal Skills Enhance Program Evaluation" and "Listening to Those Who Matter Most, the Beneficiaries.") Or, to learn more about our qualitative approaches, visit our Data collection & Outcome measurement page.

Resources:

About focus groups:

http://sociology.about.com/od/Research-Methods/a/Focus-Groups.htm

About focus groups:

http://www.cse.lehigh.edu/~glennb/mm/FocusGroups.htm

How focus groups work:

http://money.howstuffworks.com/business-communications/how-focus-groups-work1.htm

Focus group interviewing:

http://www.tc.umn.edu/~rkrueger/focus.html

‘Focus groups’ at Wikipedia

https://en.wikipedia.org/wiki/Focus_group

June 12, 2014
12 Jun 2014

Questions Before Methods

Like other research initiatives, each program evaluation should begin with a question, or series of questions, that the evaluation seeks to answer. (See my previous blog post "Approaching an Evaluation – Ten Issues to Consider.") Evaluation questions are what guide the evaluation, give it direction, and express its purpose. Identifying guiding questions is essential to the success of any evaluation research effort.

Of course, evaluation stakeholders can have various interests and, therefore, various kinds of questions about a program. For example, funders often want to know if the program worked, if it was a good investment, and if the desired changes/outcomes were achieved (i.e., outcome/summative questions). During the program's implementation, program managers and implementers may want to know what's working and what's not, so they can refine the program to make it more likely to produce the desired outcomes (i.e., formative questions). Program managers and implementers may also want to know which parts of the program have been implemented, and whether recipients of services are indeed being reached and receiving program services (i.e., process questions). Participants, community stakeholders, funders, and program implementers may all want to know if the program makes the intended difference in the lives of those whom the program is designed to serve (i.e., outcome/summative questions).

While program evaluations seldom serve only one purpose, or ask only one type of question, it may be useful to examine the kinds of questions that pertain to the different types of evaluations.  Establishing clarity about the questions an evaluation will answer will maximize the evaluation’s effectiveness, and ensure that stakeholders find evaluation results useful and meaningful.

Types of Evaluation Questions

Although the list below is not exhaustive, it is illustrative of the kinds of questions that each type of evaluation seeks to answer.

▪ Process Evaluation Questions

  • Were the services, products, and resources of the program, in fact, produced and delivered to stakeholders and users?
  • Did the program’s services, products, and resources reach their intended audiences and users?
  • Were services, products, and resources made available to intended audiences and users in a timely manner?
  • What kinds of challenges did the program encounter in developing, disseminating, and providing its services, products, and resources?
  • What steps were taken by the program to address these challenges?

▪ Formative Evaluation Questions

  • How do program stakeholders rate the quality, relevance, and utility of the program's activities, products, and services?
  • How can the activities, products, and services of the program be refined and strengthened during project implementation, so that they better meet the needs of participants and stakeholders?
  • What suggestions do participants and stakeholders have for improving the quality, relevance, and utility of the program?
  • Which elements of the program do participants find most beneficial, and which least beneficial?

▪ Outcome/Summative Evaluation Questions

  • What effect(s) did the program have on its participants and stakeholders (e.g., changes in knowledge, attitudes, behavior, skills and practices)?
  • Did the activities, actions, and services (i.e., outputs) of the program provide high-quality services and resources to stakeholders?
  • Did the activities, actions, and services of the program raise participants' awareness and provide them with new and useful knowledge?
  • What is the ultimate worth, merit, and value of the program?
  • Should the program be continued or curtailed?

The process of identifying which questions program sponsors want the evaluation to answer thus becomes a means for identifying the kinds of methods that an evaluation will use. If, ultimately, we want to know whether a program is causing a specific outcome, then the best method (the "gold standard") is a randomized controlled trial (RCT). Often, however, we are interested not just in knowing if a program causes a particular outcome, but why and how it does so. In that case, it is essential to use a mixed-methods approach that draws not just on quantitative outcome data comparing treatment and control groups, but also on qualitative methods (e.g., interviews, focus groups, direct observation of program functioning, document review, etc.) that can help elucidate why what happens happens, and what program participants experience.
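To make the quantitative half of that mixed-methods picture concrete, the sketch below compares hypothetical post-program scores for randomly assigned treatment and control groups with a two-sample t-test (it assumes the scipy package is installed); the qualitative half, by design, cannot be reduced to a few lines of code.

```python
from scipy.stats import ttest_ind

# Hypothetical post-program scores for randomly assigned groups
treatment = [78, 85, 82, 90, 74, 88, 81, 79, 86, 83]
control   = [72, 75, 80, 70, 77, 74, 69, 78, 73, 76]

t_stat, p_value = ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value is consistent with the program producing higher average scores,
# but it says nothing about why or how; that is where qualitative methods come in.
```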

Robust and useful program evaluations begin with the questions that stakeholders want answered, and then identify the best methods for gathering data to answer those questions. To learn more about our evaluation methods, visit our Data collection & Outcome measurement page.

August 29, 2013
29 Aug 2013

Evaluation Workflow

Typically, we work with clients from the early stages of program development in order to understand their organization’s needs and the needs of program funders and other stakeholders. Following initial consultations with program managers and program staff, we work collaboratively to identify key evaluation questions, and to design a strategy for collecting and analyzing data that will provide meaningful and useful information to all stakeholders.

Depending upon the specific initiative, we implement a range of evaluation tools (e.g., interview protocols, web-based surveys, focus groups, quantitative measures, etc.) that allow us to collect, analyze, and interpret data about the activities and outcomes of the specified program. Periodic debriefings with program staff and stakeholders allow us to communicate preliminary findings, and to offer program managers timely opportunities to refine programming so that they can better achieve intended goals.

Our collaborative approach to working with clients allows us to actively support program managers, staff, and funders in making data-informed judgments about programs' effectiveness and value. At the appropriate time(s) in the program's implementation, we write one or more reports that detail findings from program evaluation activities and make data-based suggestions for program improvement. To learn more about our approach to evaluation, visit our Data collection & Outcome measurement page.

May 20, 2013
20 May 2013

The Secret to Innovation, Creativity, and Change?

The other day, I conducted a focus group with disadvantaged youth. On behalf of a local workforce investment board, I interviewed a group of 16-24 year-olds about their use of cell phones and other hand-held technologies, in order to learn whether it would be possible to reach youth with career development programming via cell phone and other electronic modalities. (In my 20+ years as a professional evaluator, I've conducted between 50 and 60 focus groups, with participants who range across the socioeconomic spectrum, from homeless women to college presidents.) As this focus group proceeded, I became aware of two things. Many of the youth saw me, understandably, as an authority figure to whom they had to give guarded responses, at least initially, and whose trust I needed to earn. Additionally, I, too, felt vulnerable before the group of young people, who I feared might think I was uninformed about cyber culture and the prevailing circumstances of their age group. Each of us, in our own way, felt "vulnerable."

It occurred to me that focus group members' sense of vulnerability would yield only if I myself became more open and vulnerable. Consequently, I abandoned my interview protocol and began improvising candid and spontaneous questions. I also confessed my lack of knowledge about the technologies that young people often use so comfortably, as if they were an extension of themselves. I also redoubled my efforts to enlist the opinions of each member, especially those who seemed, at first, reluctant to share their experience. As the focus group continued, I noticed that some of the initial reticence and reserve of my interlocutors began to dissolve, and even those who had not initially offered their opinions and experience began to fully participate in the group. I also noticed that as I further expressed my genuine interest in learning about their experience, the sense of who possessed the authority shifted from me, the question-asker, to the youth in the group, who became experts on the subject I was interviewing them about.

The Necessity of Vulnerability in Education

Although all of us necessarily spend a lot of our lives shielding ourselves from various forms of vulnerability (economic, social, emotional, etc.), research is beginning to show that psychological vulnerability and the willingness to risk social shame and embarrassment are essential for genuine learning, creativity, and path-breaking innovation. In a recent TED presentation (click here to listen), Brené Brown, a research professor at the University of Houston Graduate School of Social Work who has spent the last decade studying vulnerability, courage, authenticity, and shame, discusses the importance of making mistakes and enduring potential embarrassment in order to learn new things and make new connections. Brown highlights the significance of making ourselves vulnerable (i.e., taking risks, enduring uncertainty, handling emotional exposure) so that we can genuinely connect with others, and learn from them. Fear of failure and fear of vulnerability (especially fear of social shame), she says, too often get in the way of our learning from others. Moreover, we are often deathly afraid of making mistakes. (See our recent post "Fail Forward: What We Can Learn from Program 'Failure.'" You can also listen to the entire NPR TED Hour on the importance of mistakes to the process of learning here.) Ultimately, we must embrace, rather than deny, vulnerability if we are to connect, and thereby learn.

I've conducted research for most of my professional life. Reflecting on that experience, I realize that I've learned the most from people and situations when I've been willing to make myself vulnerable, to be fully present, and to authentically engage others. As in the above-mentioned focus group, and in many that preceded it, I recognize that it is precisely when I've allowed myself to be open and available, unconcerned with knowing all the right answers in advance, indeed, when I've made myself vulnerable and present, that I've learned the most important lessons and gained the most insight into a given phenomenon.

Successful program evaluations require effective, constant, and adaptive learning, often in fluid, uncertain, and continually evolving contexts. Genuine learning occurs when we make ourselves vulnerable enough to sincerely engage others, to connect with them, and to acknowledge what we don't know, which is the first step toward genuine knowledge. To learn more about our adaptive approach to evaluation, visit our Feedback & Continuous improvement page.

May 9, 2013
09 May 2013

Listening to Those Who Matter Most, the Beneficiaries

"Listening to Those Who Matter Most, the Beneficiaries" (Stanford Social Innovation Review, Spring 2013) highlights the importance of incorporating the perspectives of program beneficiaries (participants, clients, service recipients, etc.) into program evaluations. The authors note that non-profit organizations, unlike their counterparts in health care, education, and business, are often not as effective in gathering feedback and input from those whom they serve. Although extremely important, the collection of opinions and perspectives from program participants presents three fundamental challenges: 1) it can be expensive and time-intensive; 2) it is often difficult to collect data, especially with disadvantaged and minimally literate populations; and 3) honest feedback can make us (i.e., program funders and program implementers) uncomfortable, especially if program beneficiaries don't think that programs are working the way they are supposed to.

As the authors point out, feedback from participants is important for two fundamental reasons. First, it provides a voice to those who are served. As Bridgespan Group partner Daniel Stid notes, "Beneficiaries aren't buying your service; rather a third party is paying you to provide it to them. Hence the focus shifts more toward the requirements of who is paying, versus the unmet needs and aspirations of those meant to benefit." Equally important, gathering and analyzing the perspectives and opinions of beneficiaries can help program implementers to refine programming and make it more effective.

The authors of “Listening to Those Who Matter Most, the Beneficiaries,” make a strong case for systematically collecting and utilizing beneficiary input. “Beneficiary Feedback isn’t just the right thing to do, it is the smart thing to do.”

Our experience in designing and conducting program evaluations has shown the value of soliciting the views, perspectives, and narrative experiences of program beneficiaries. Beneficiary feedback is a fundamental component of our program evaluations, whether we are evaluating programs that serve homeless mothers or programs that serve college students. We've had 20 years of experience conducting interviews, focus groups, and surveys designed to efficiently gather and productively use information from program participants. While the authors of "Listening to Those Who Matter Most, the Beneficiaries" suggest that such efforts can be resource-intensive, and indeed they can be, we've developed strategies for maximizing the effectiveness of these techniques while minimizing the cost of their implementation. To learn more about our evaluation methods, visit our Data collection & Outcome measurement page.

April 5, 2013
05 Apr 2013

Helpful Link Resources

Periodically, I discover and like to share links to resources related to program evaluation. I think these links can be useful for colleagues in the non-profit, foundation, education, and government sectors. Here are some links that may be of interest.

Links to typical program outcomes and indicators are available from the Urban Institute.   These links include outcomes for a variety of programs, including arts programs, youth mentoring programs, and advocacy programs.  All in all, this site has outcomes and indicators for 14 specific program areas.
Link here.

The perspective of program participants is a very important source of data about the effects of program. Yet this perspective is often overlooked.  An article from the Stanford Social Innovation Review entitled “Listening to Those Who Matter Most, the Beneficiaries” provides good insight into why program participant perspective is so valuable.
Link here.

The Foundation Center has a database of over 150 tools for assessing the social impact of programs.  While there are dozens and dozens of useful tools here for you to browse, take a look at “A Guide to Actionable Measurement,” from the Gates Foundation and “Framework for Evaluation” from the CDC.
Link here.

The Annie E. Casey Foundation's A Handbook of Data Collection Tools: Companion to "A Guide to Measuring Advocacy and Policy" may be helpful for organizations seeking to effect changes in public perceptions and public policy.
Link here.

In upcoming posts, I will share additional links to tools and resources.

March 19, 2013
19 Mar 2013

Transforming “Data” Into Knowledge

In his recent article in the New York Times, "What Data Can't Do" (February 18, 2013; visit here), David Brooks discusses some of the limits of "data."

Brooks writes that we now live in a world saturated with gargantuan data collection capabilities, and that today's powerful computers can handle huge data sets and "can now make sense of mind-bogglingly complex situations." Despite these analytical capacities, there are a number of things that data can't do very well. Brooks remarks that data analysis is unable to fully capture the social world; often fails to register the quality (vs. the quantity) of social interactions; and struggles to make sense of "context," i.e., the real environments in which human decisions and human interactions are inevitably embedded. (See our earlier blog post Context Is Critical.)

Brooks insightfully notes that data often “obscures values,” by which he means that data often conceals the implicit assumptions, perspectives, and theories on which they are based. “Data is never ‘raw,’ it’s always structured according to somebody’s predispositions and values.” Data is always a selection of information. What counts as data depends upon what kinds of information the researcher values and thinks is important.

Program evaluations necessarily depend on the collection and analysis of data because data constitute important measures and indicators of a program's operation and results. While evaluations require data, it is important to note that data alone, while necessary, are insufficient for telling the complete story about a program and its effects. To get at the truth of a program, it is necessary to 1) discuss both the benefits and limitations of what constitutes "the data," in order to understand what counts as evidence; 2) use multiple kinds of data, both quantitative and qualitative; and 3) employ experience-based judgment when interpreting the meaning of the data.

Brad Rose Consulting, Inc., addresses the limitations pointed out by David Brooks by working with clients and program stakeholders to identify what counts as "data," and by collecting and analyzing multiple forms of data. We typically use a multi-method evaluation strategy, one that relies on both quantitative and qualitative measures. Most importantly, we bring to each evaluation project our experience-based judgment when interpreting the meaning of data, because we know that to fully understand what a program achieves (or doesn't achieve), evaluators need robust experience so that they can transform mere information into genuine, usable knowledge. To learn about our diverse evaluation methods, visit our Data collection & Outcome measurement page.

January 18, 2013
18 Jan 2013

Learning by Changing: Lessons from Action Research

What is Action Research?
Action Research is a method of applied, often organization-based, research whose fundamental tenet is that we learn through action, through doing and reflecting. Action research is used in "real world" situations, rather than in ideal, experimental conditions. It focuses on solving real-world problems.

Although there are a number of strands of Action Research (AR), including participatory action research, emancipatory research, co-operative inquiry, and appreciative inquiry, all share a commitment to positively changing a concrete organizational or social challenge through a deliberate process of taking action and reflecting on cycles of emergent learning. Kurt Lewin, one of the original theorists of AR, said: "If you want truly to understand something, try to change it." Ultimately, Action Research is about learning through doing, indeed, learning through changing.

Collaboration and Co-Learning
Although Action Research uses many of the same methodologies as positivist empirical science (observation, collection of data, etc.), AR typically involves collaborating with, and gathering input from, the people who are likely to be affected by the research. As Gilmore, Krantz, and Ramirez point out in their article “Action Based Modes of Inquiry and the Host-Researcher Relationship,” Consultation 5.3 (Fall 1986): 161, “… there is a dual commitment in action research to study a system and concurrently to collaborate with members of the system in changing it in what is together regarded as a desirable direction. Accomplishing this twin goal requires the active collaboration of researcher and client, and thus it stresses the importance of co-learning as a primary aspect of the research process.”
(Retrieved from http://www.web.ca/robrien/papers/arfinal.html#_edn1)

Collaboration, Stakeholder Involvement, and Constructive Judgment for Program Strengthening
Brad Rose Consulting draws on the key ideas of Action Research, collaboration and stakeholder involvement, to ensure that its evaluations are grounded in, and reflect, the experience of program stakeholders. Because we work at the intersection of program evaluation (i.e., does a program produce its intended results?) and organization development (i.e., how can an organization's performance be enhanced, and how can we ensure that it better achieves its goals and purposes?), we know that the success of program evaluations depends in large part upon the involvement of all program stakeholders.

This means that we work with program stakeholders (e.g., program managers, program staff, program participants, community members, etc.) to understand the intentions, processes, and experiences of each group. Brad Rose Consulting, Inc., begins each evaluation engagement by listening to clients and participants, including listening to their aspirations, their needs, their understanding of program objectives, and their experience (both positive and negative) participating in programs. Furthermore, we engage clients not as passive recipients of "expert knowledge," but rather as co-learners who seek both to understand whether a program is working and how it can be strengthened to better achieve its goals.

Ultimately we make evaluative judgments about the effectiveness of a program, but our approach to making such judgments is guided by our commitment to constructive judgments that help clients to achieve both intended programmatic outcomes (program results) and desired organizational goals. To learn more about our adaptive approach to evaluation visit our Feedback & Continuous improvement page.

October 1, 2012
01 Oct 2012

Sample Evaluation Report – MMUF

The Mellon Mays Undergraduate Fellowship (MMUF) is an initiative to reduce the serious under-representation of minorities in the faculties of higher education. MMUF commissioned Brad Rose Consulting to evaluate its success over the past 10 years. Here are Brad Rose's findings. The report is a fantastic example of what you can expect from a commissioned evaluation.

To learn more about our work in education visit our Higher education & K-12 page.

September 13, 2012
13 Sep 2012

Using a Logic Model

A logic model/theory of change can be a very useful learning tool, which, when generated collaboratively with clients, helps evaluators and clients to clearly understand the workings and goals of a program.  It can also be extremely helpful in developing the most accurate measures of a program’s intended results.  Click on the thumbnail to learn more about the easy-to-understand logic model/theory of change tool that Brad Rose Consulting uses to illustrate the way a program works and how it achieves the results/changes that it seeks to realize.
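For readers who think in outlines rather than diagrams, the sketch below lays out the familiar logic-model chain (inputs, activities, outputs, outcomes) as a simple structure. The program content shown is hypothetical; the point is that each intended result on the right should be traceable to something on the left.

```python
# A minimal sketch of a logic model as a plain data structure (hypothetical program).
logic_model = {
    "inputs":     ["funding", "staff", "community partners"],
    "activities": ["weekly mentoring sessions", "job-readiness workshops"],
    "outputs":    ["120 youth served", "40 workshops delivered"],
    "outcomes":   ["improved job-readiness skills", "increased school retention"],
}

# Print the chain from resources to intended results
for stage, items in logic_model.items():
    print(f"{stage:>10}: " + "; ".join(items))
```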

July 18, 2012
18 Jul 2012

Approaching An Evaluation – Ten Issues to Consider

Before beginning an evaluation, it may be helpful to consider the following ten questions (a brief planning sketch follows the list):

1. Why is the evaluation being conducted? What is/are the purpose(s) of the evaluation?

Common reasons for conducting an evaluation are to:

  • monitor progress of program implementation and provide formative feedback to designers and program managers (i.e., a formative evaluation, which seeks to discover what is happening and why, for the purpose of program improvement and refinement);
  • measure final outcomes or effects produced by the program (i.e., a summative evaluation);
  • provide evidence of a program’s achievements to current or future funders;
  • convince skeptics or opponents of the value of the program;
  • elucidate important lessons and contribute to public knowledge;
  • tell a meaningful and important story;
  • provide information on program efficiency;
  • neutrally and impartially document the changes produced in clients or systems;
  • fulfill contractual obligations;
  • advocate for the expansion or reduction of a program with current and/or additional funders.

Evaluations may simultaneously serve many purposes. For the purpose of clarity, and to ensure that evaluation findings meet the client’s and stakeholders’ needs, the client and evaluator may want to identify and rank the top two or three reasons for conducting the evaluation. Clarifying the purpose(s) of the evaluation early in the process will maximize the usefulness of the evaluation.

2. What is the “it” that is being evaluated? (A program, initiative, organization, network, set of processes or relationships, services, activities?) There are many things that may be evaluated in any given program or intervention. It may be best to start with a few (2-4) key questions and concerns (see #4, below). Also, for purposes of clarity, it may be useful to discuss what isn’t being evaluated.

3. What outcomes does the program or intervention intend to produce? What is the program meant to achieve? What changes or differences does the program hope to produce, and in whom? What will be different as a result of the program or intervention? Note that changes can occur in individuals, organizations, communities, and other social environments. While evaluations often look for changes in persons, changes need not be restricted to alterations in individuals’ behavior, attitudes, or knowledge; they can extend to larger units of analysis, such as organizations, networks of organizations, and communities. For collective groups or institutions, changes may occur in policies, positions, vision/mission, collective actions, communication, overall effectiveness, public perception, etc. For individuals, changes may occur in behaviors, attitudes, skills, ideas, competencies, etc.

4. Every evaluation should have some basic questions that it seeks to answer. What are the key questions to be answered by the evaluation? What do clients want to be able to say (report) about the program to key stakeholders? By collaboratively defining key questions, the evaluator and the client will sharpen the focus of the evaluation and maximize the clarity and usefulness of the evaluation findings.

5. Who is the evaluation for? Who are the major stakeholders or interested parties for evaluation results? Who wants to know? Who are the various “end users” of the evaluation findings?

6. How will evaluation findings be used? (To improve the program; to make judgments about the economic or social value of the program, i.e., its costs and benefits; to document and publicize efforts; to expand or curtail the program?)

7. What information will stakeholders find useful and how will they use the evaluation findings?

8. What will be the “product” of the evaluation? What form will findings take? How are findings to be disseminated? (Written report, periodic briefings, analytic memos, a briefing paper, public presentation, etc.?)

9. What are the potential sources of information/data? (Interviews, program documents, surveys, quantitative/statistical data, comparison with other/similar programs, field observations, testimony of experts?) What are the most accessible and cost-effective sources of information for the client?

10. What is the optimal design for the evaluation? Which methods will yield the most valid, accurate, and persuasive evaluation conclusions? If the aim is to demonstrate a cause-and-effect relationship, is it possible (and desirable) to expend the resources necessary to conduct an experimental (i.e., “control group”) or quasi-experimental study? Experimental designs can be resource-intensive and therefore more costly. If resources are not substantial, the client and the evaluator will want to discuss other kinds of evaluation designs that will provide stakeholders with the most substantial, valid, and persuasive evaluative information.
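As a complement to the questions above, the sketch below shows one hypothetical way an evaluator and client might record their answers as a simple planning checklist, so that purposes, key questions, audiences, and data sources stay linked. The structure, field names, and sample values are assumptions for illustration, not a prescribed template.

```python
# A hypothetical planning checklist that captures answers to the ten questions
# above. Field names and sample values are illustrative only, not a template
# endorsed by any particular evaluator.
from dataclasses import dataclass
from typing import List

@dataclass
class EvaluationPlan:
    purposes: List[str]            # Q1: why the evaluation is being conducted (ranked)
    evaluand: str                  # Q2: the "it" being evaluated
    intended_outcomes: List[str]   # Q3: changes the program intends to produce
    key_questions: List[str]       # Q4: the 2-4 questions the evaluation must answer
    audiences: List[str]           # Q5: stakeholders / end users of the findings
    intended_uses: List[str]       # Q6-7: how findings will be used
    products: List[str]            # Q8: form and dissemination of findings
    data_sources: List[str]        # Q9: accessible, cost-effective sources of data
    design: str                    # Q10: e.g., pre/post, quasi-experimental, case study

plan = EvaluationPlan(
    purposes=["formative feedback", "evidence for funders"],
    evaluand="after-school tutoring program",
    intended_outcomes=["improved reading scores"],
    key_questions=[
        "Is the program being implemented as designed?",
        "Do participants' reading scores improve?",
    ],
    audiences=["program managers", "funder"],
    intended_uses=["program improvement", "annual report to funder"],
    products=["written report", "periodic briefings"],
    data_sources=["interviews", "program records", "pre/post assessments"],
    design="pre/post comparison with a matched comparison group",
)

print(plan.key_questions)
```

Keeping the answers in one place, whether in a document, a spreadsheet, or a structure like the one sketched here, makes it easier to check that the chosen design and data sources actually answer the key questions the client cares about.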

To learn more about our evaluation methods visit our Data collection & Outcome measurement page.
