“I would rather have questions that can’t be answered than answers that can’t be questioned.”
― Richard Feynman
The Cambridge Dictionary defines research as “a detailed study of a subject, especially in order to discover (new) information or reach a (new) understanding.” Program evaluation necessarily involves research. As we mentioned in our most recent blogpost, “Just the Facts: Data Collection,” program evaluation deploys various research methods (e.g., surveys, interviews, and statistical analyses) to find out what is happening and what has happened with regard to a program, initiative, or policy. At the core of every evaluation are key questions that should guide it. Below we reprise our previous blogpost, “Questions Before Methods,” which emphasizes the importance of specifying evaluation questions prior to the design and implementation of each evaluation.
Like other research initiatives, each program evaluation should begin with a question, or series of questions, that the evaluation seeks to answer. (See my previous blog post, “Approaching an Evaluation: Ten Issues to Consider.”) Evaluation questions guide the evaluation, give it direction, and express its purpose. Identifying guiding questions is essential to the success of any evaluation research effort.
Of course, evaluation stakeholders can have various interests and, therefore, various kinds of questions about a program. For example, funders often want to know if the program worked, if it was a good investment, and if the desired changes/outcomes were achieved (i.e., outcome/summative questions). During the program’s implementation, program managers and implementers may want to know what’s working and what’s not, so they can refine the program to make it more likely to produce the desired outcomes (i.e., formative questions). Program managers and implementers may also want to know which parts of the program have been implemented, and whether recipients of services are, indeed, being reached and receiving program services (i.e., process questions). Participants, community stakeholders, funders, and program implementers may all want to know if the program makes the intended difference in the lives of those it is designed to serve (i.e., outcome/summative questions).
While program evaluations seldom serve only one purpose, or ask only one type of question, it may be useful to examine the kinds of questions that pertain to the different types of evaluations. Establishing clarity about the questions an evaluation will answer will maximize the evaluation’s effectiveness and help ensure that stakeholders find its results useful and meaningful.
Process questions include:
- Were the services, products, and resources of the program, in fact, produced and delivered to stakeholders and users?
- Did the program’s services, products, and resources reach their intended audiences and users?
- Were services, products, and resources made available to intended audiences and users in a timely manner?
- What kinds of challenges did the program encounter in developing, disseminating, and providing its services, products, and resources?
- What steps were taken by the program to address these challenges?
Formative questions include:
- How do program stakeholders rate the quality, relevance, and utility of the program’s activities, products, and services?
- How can the activities, products, and services of the program be refined and strengthened during project implementation, so that they better meet the needs of participants and stakeholders?
- What suggestions do participants and stakeholders have for improving the quality, relevance, and utility of the program?
- Which elements of the program do participants find most beneficial, and which least beneficial?
Outcome/summative questions include:
- What effect(s) did the program have on its participants and stakeholders (e.g., changes in knowledge, attitudes, behavior, skills, and practices)?
- Did the program’s activities, actions, and services (i.e., outputs) deliver high-quality services and resources to stakeholders?
- Did the program’s activities, actions, and services raise participants’ awareness and provide them with new and useful knowledge?
- What is the ultimate worth, merit, and value of the program?
- Should the program be continued or curtailed?
Program evaluation is seldom simply about making a narrow judgment about the outcomes of a program (i.e., whether the desired changes were, in fact, ultimately produced). Evaluation is also about providing program implementers and stakeholders with information that will help them strengthen their organization’s efforts, so that desired programmatic goals are more likely to be achieved.
Brad Rose Consulting is strongly committed to translating evaluation data into meaningful and actionable knowledge, so that programs, and the organizations that host programs, can strengthen their efforts and optimize results. Because we are committed not just to measuring program outcomes, but to strengthening the organizations that host and manage programs, we work at the intersection of program evaluation and organization development (OD).
Working at this intersection, we help programs and the organizations that host them to:
- engage in the clarification of their goals and purposes
- enhance understanding of the often-implicit relationships between a program’s causes and effects
- articulate for internal stakeholders a collective understanding of the objectives of their programming
- reflect on alternative concrete strategies to achieve desired outcomes
- strengthen internal and external communications
- improve relationships between individuals within programs and organizations
Program evaluations entail research. Research is a systematic “way of finding things out.” Evaluation research depends on the collection and analysis of data (i.e., evidence, facts) that indicate the outcomes (i.e., effects, results) of the operation of programs. Typically, evaluations seek to discover evidence of whether valued outcomes have been achieved. (Other kinds of evaluations, like formative evaluations, seek to discover, through the collection and analysis of data, ways that a program may be strengthened.)
Data can be either qualitative (descriptive, consisting of words and observations) or quantitative (numerical). What counts as data depends upon the design and character of the evaluation research. Quantitative evaluations rely primarily on the collection of countable information, such as measurements and statistical data. Qualitative evaluations depend upon language-based and other descriptive data. Usually, program evaluations combine the collection of quantitative and qualitative data.
There are a range of data sources for any evaluation. These can include: observations of programs’ operation; interviews with program participants, program staff, and program stakeholders; administrative records, files, and tracking information; questionnaires and surveys; focus groups; and visual information, such as video data and photographs.
The selection of the kinds of data to collect, and the ways of collecting such data, will be contingent on the evaluation design, the availability and accessibility of data, economic considerations about the cost of data collection, and both the limitations and potentials of each data source. The kinds of evaluation questions and the design of the evaluation research will, together, help to determine the optimal kinds of data to collect. (See our articles “Questions Before Methods” and “Using Qualitative Interviews in Program Evaluations.”)
Evaluation, Carol H. Weiss, Prentice Hall; 2nd edition (1997)
“Collaboration” and “teamwork” are the catchphrases of the contemporary workplace. Since the 1980s in the U.S., work teams have been hailed as the solution to assembly line workers’ alienation and disaffection, and white-collar workers’ isolation and disconnection. Work teams have been associated with increased productivity, innovation, employee satisfaction, and reduced turnover. Additionally, teams at work are said to have beneficial effects on employee learning, problem-solving, communication, company loyalty, and organizational cohesiveness. Teams are now found throughout the for-profit, non-profit, and governmental sectors, and much of the work of the field of organization development (OD) is devoted to fostering and sustaining teams at work.
In his recent article “Stop Wasting Money on Team Building” (Harvard Business Review, September 11, 2018), Carlos Valdes-Dapena argues that teams are less effective than many believe them to be. Based on research conducted at Mars, Inc., “a 35 billion dollar global corporation with a commitment to collaboration,” Valdes-Dapena argues that while employees like the idea of teams and teamwork, they don’t, in fact, collaborate much in teams. After conducting 125 interviews and administering questionnaires to team members, he writes: “If there was one dominant theme from the interviews, it is summarized in this remarkable sentiment: ‘I really like and value my teammates. And I know we should collaborate more. We just don’t.’”
Valdes-Dapena reports that employees “…felt the most clarity about their individual objectives, and felt a strong sense of ownership for the work they were accountable for.” He also shows that “Mars was full of people who loved to get busy on tasks and responsibilities that had their names next to them. It was work they could do exceedingly well, producing results without collaborating. On top of that, they were being affirmed for those results by their bosses and the performance rating system.” Essentially, Valdes-Dapena argues, teams may sound good in theory, but it is probably better to tap individual self-interest if you really want to get the job done.
In “3 Types of Dysfunctional Teams and How To Fix Them,” Patty McManus identifies three types of dysfunctional work teams, which she characterizes as “The War Zone,” “The Love Fest,” and “The Unteam.” In “War Zone” teams, competition and factionalism among members obscure or derail the potential benefits of teamwork. In “Love Fest” teams, disagreements are muted, areas of agreement are highlighted, and tough issues are avoided in the interest of maintaining good feelings. In the “Unteam,” meetings are used for top-down communication and status updates, and fail to build a shared perspective about the organization; members may get along as individuals, but they have little connection to one another or to a larger purpose they all share.
McManus claims that the problems of teams may be overcome by what she terms “ecosystems teams,” i.e., teams that surface and manage differences, build healthy inter-dependence among members, and engage the organization—beyond the mere confines of the team.
Matthew Swyers also sees problems in teams at work. In “7 Reasons Good Teams Become Dysfunctional” (Inc., September 27, 2012), Swyers writes that teams may experience seven types of problems:
- absence of a strong and competent leader
- team members more interested in individual glory than in achieving team objectives
- failure to define team goals and desired outcomes
- disproportionate placement of the team’s work on a few members’ shoulders
- lack of focus, and endless debate without movement toward an ultimate goal
- lack of accountability and postponed timetables
- indecisiveness
Each of these writers highlights the vulnerabilities of teams at work. Although their work doesn’t foreclose the positive possibilities of team organization at work, they raise important questions about both the enthusiasm for, and the effectiveness of, teams. Additionally, each author suggests that, with enlightened modifications, organizations can overcome the liabilities of teams and begin to reap the benefits of team-based employee collaboration. That said, none of these writers, and few among the other U.S.-based writers who have engaged this topic, examine the underlying assumption of workplace reform: that work can be made more habitable and humane without the independent organizations that have traditionally represented workers’/employees’ interests in the workplace. For discussion of models of workplace reform that genuinely represent workers’ interest in more humane, collaborative, and, ultimately, productive working environments, we will need to look elsewhere.
“Importance of Teamwork at Work,” Tim Zimmer
“Importance of Teamwork in Organizations,” Aaron Marquis
“What Makes Teams Work?” Regina Fazio Maruca, Fast Company
“Stop Wasting Money on Team Building,” Carlos Valdes-Dapena, Harvard Business Review, September 11, 2018
“3 Types of Dysfunctional Teams and How To Fix Them,” Patty McManus, Fast Company
“When Is Teamwork Really Necessary?” Michael D. Watkins, Harvard Business Review, August 16, 2018
“7 Reasons Good Teams Become Dysfunctional,” Matthew Swyers, Inc., September 27, 2012
“Why Teams Don’t Work,” Diane Coutu, Harvard Business Review, May 2009
Brad recently presented a program evaluation workshop to grantees of the Foundation of MetroWest.
The two-hour workshop served to introduce grantees and other attendees to the basics of evaluation, including the:
- basic kinds and purposes of program evaluation
- types of questions that evaluations ask and answer
- ways that values and criteria inform an evaluation
- types of evaluation designs
- use of logic models
You can access the PowerPoint for the presentation below.