Over the last two decades, American education has sought to introduce and improve student access to digital technology. From the first introduction of personal computers in classrooms to the more recent efflorescence of iPads and online educational content, educators have expressed enthusiasm for digital technology. As Natalie Wexler writes in MIT Technology Review (December 19, 2019), “Gallup …found near-universal enthusiasm for technology on the part of educators. Among administrators and principals, 96% fully or somewhat support ‘the increased use of digital learning tools in their school,’ with almost as much support (85%) coming from teachers.” Despite this enthusiasm, there is little evidence for the effectiveness of digitally based educational tools. Wexler cites a study of millions of high school students in the 36 member countries of the Organisation for Economic Co-operation and Development (OECD), which found that those who used computers heavily at school “do a lot worse in most learning outcomes, even after accounting for social background and student demographics.”
Although popular, and thought useful by educators, digital tools in classrooms not only appear to make little difference in educational outcomes but in some cases may actually harm student learning. As Wexler reports, “According to other studies, college students in the US who used laptops or digital devices in their classes did worse on exams. Eighth graders who took Algebra I online did much worse than those who took the course in person. And fourth graders who used tablets in all or almost all their classes had, on average, reading scores 14 points lower than those who never used them—a differential equivalent to an entire grade level. In some states, the gap was significantly larger.”
While it has been widely believed that digital technologies can “level the playing field” for economically disadvantaged students, the OECD study found that “technology is of little help in bridging the skills divide between advantaged and disadvantaged students.”
Why do digital technologies fail students? As Wexler ably details:
- When students read text from a screen, it’s been shown, they absorb less information than when they read it on paper
- Digital vs. human instruction eliminates the personal, face-to-face relationships that customarily support students’ motivation to learn
- Technology can drain a classroom of the communal aspect of learning, over individualize instruction, and thus diminish the important role of social interaction in learning
- Technology is primarily used as a delivery system, but if the material it’s delivering is flawed or inadequate, or presented in an illogical order, it won’t provide much benefit
- Learning, especially reading comprehension, isn’t just a matter of skill acquisition, of showing up and absorbing facts, but is largely dependent upon students’ background knowledge and familiarity with context.

In his article “Technology in the Classroom in 2019: 6 Pros & Cons,” Vawn Himmelsbach makes many of the same arguments and adds a few liabilities to Wexler’s list:
- Technology in the classroom can be a distraction
- Technology can disconnect students from social interactions
- Technology can foster cheating in class and on assignments
- Students don’t have equal access to technological resources
- The quality of research and sources they find may not be top-notch
- Lesson planning might become more labor-intensive with technology
Access to digital technology varies, of course, among schools and school districts. As the authors of Concordia University’s blog, Room 241, point out, “Technology spending varies greatly across the nation. Some schools have the means to address the digital divide so that all of their students have access to technology and can improve their technological skills. Meanwhile, other schools still struggle with their computer-to-student ratio and/or lack the means to provide economically disadvantaged students with loaner iPads and other devices so that they can have access to the same tools and resources that their classmates have at school and at home.”
While students certainly need technological skills to navigate the modern world and equality of access to such technology remains a challenge, digital technology alone cannot hope to solve the problems of either education or “the digital divide.” The more we rely on the use of digital tools in the classroom, the less we may be helping some students, especially disadvantaged students, to learn.
Natalie Wexler, “How Classroom Technology Is Holding Students Back,” MIT Technology Review, December 19, 2019
Vawn Himmelsbach, “Technology in the Classroom in 2019: 6 Pros & Cons,” Top Hat Blog, July 15, 2019
“The Evolution of Technology in the Classroom,” Purdue University Online
The Room 241 Team, “Debating the Use of Digital Devices in the Classroom,” Concordia University Room 241 Blog, November 7, 2012
“Bias” is a tendency, known or unknown, to prefer one thing over another; it prevents objectivity and influences understanding or outcomes. (See the Open Education Sociology Dictionary.) Bias is an important phenomenon both in social science and in our everyday lives.
In her article, “9 types of research bias and how to avoid them,” Rebecca Sarniak discusses the core kinds of bias in social research. These include both the biases of the researcher, and the biases of the research subject/respondent.
Prevalent kinds of researcher bias include:
- confirmation bias
- culture bias
- question-order bias
- leading questions/wording bias
- the halo effect
Respondent biases include:
- acquiescence bias
- social desirability bias
- sponsor bias
In their Scientific American article “How to Think about ‘Implicit Bias,’” Keith Payne, Laura Niemi, and John M. Doris assure us that bias is rooted not merely in prejudice but in our tendency to notice patterns and make generalizations. “When is the last time a stereotype popped into your mind? If you are like most people, the authors included, it happens all the time. That doesn’t make you a racist, sexist, or whatever-ist. It just means your brain is working properly, noticing patterns, and making generalizations…. This tendency for stereotype-confirming thoughts to pass spontaneously through our minds is what psychologists call implicit bias. It sets people up to overgeneralize, sometimes leading to discrimination even when people feel they are being fair.”
Of course, bias is not just a phenomenon relevant to social science (and evaluation) research. It affects our everyday activities too. In “10 Cognitive Biases That Distort Your Thinking,” Kendra Cherry explores the following kinds of biases:
- confirmation bias
- hindsight bias
- anchoring bias
- misinformation effect
- the actor observer bias
- false consensus effect
- halo effect
- self-serving bias
- availability heuristic
- the optimism bias
In evaluation research, especially when employing qualitative methods, such as interviews and focus groups, unconscious bias can negatively affect evaluation findings. The following types of bias are especially problematic in evaluations:
- confirmation bias, when a researcher forms a hypothesis or belief and uses respondents’ information to confirm that belief.
- acquiescence bias, also known as “yea-saying” or “the friendliness bias,” when a respondent demonstrates a tendency to agree with, and be positive about, whatever the interviewer presents.
- social desirability bias, involves respondents answering questions in a way that they think will lead to being accepted and liked. Some respondents will report inaccurately on sensitive or personal topics to present themselves in the best possible light.
- sponsor bias, when respondents know – or suspect – the interests and preferences of the sponsor or funder of the research, and modify their answers accordingly.
- leading questions/wording bias, when a researcher, by elaborating on a respondent’s answer, puts words in the respondent’s mouth, whether in an effort to confirm a hypothesis, to build rapport, or because the researcher overestimates their understanding of the respondent.
It’s important to strive to eliminate bias both in our personal judgments and in social research. Awareness of potential biases can alert us to occasions when bias, rather than impartiality, influences our methods and affects our judgments.
A summative evaluation is typically conducted near, or at, the end of a program or program cycle. Summative evaluations seek to determine whether, over the course of the intervention, the desired outcomes of a program were achieved. An “outcome” is the change, effect, or result that a program or initiative intends to achieve. (See “What Counts as an ‘Outcome’ and Who Decides?”) Summative evaluations, as their name implies, offer a kind of “summary” of the value or worth of a program, based on whether, and to what degree, intended outcomes have been achieved. Whereas formative evaluations are conducted near the beginning of a program and provide information with which to strengthen its implementation, summative evaluations are conducted near or at its end and help determine whether the program should be continued or discontinued. (See our article “Strengthening Programs and Initiatives through Formative Evaluation.”)
Summative evaluations are important because they gather and analyze data that indicate whether a program or initiative has been successful in effecting desired changes. Summative evaluations can be of use in making a case to potential funders and other stakeholders that continued support is a worthwhile investment. A word of caution: while it is important for funders to know that their investments are effective, and that desired changes are happening, summative evaluations may also provide evidence that discontinuation of a program is in order. (See “Fail Forward: What We Can Learn from Program ‘Failure.’”)
“Building Our Understanding: Key Concepts of Evaluation – What Is It and How Do You Do It,” Centers for Disease Control and Prevention
Carol H. Weiss, Evaluation, Second Edition, Prentice Hall
Vipul Nanda, “Types of Evaluation You Need to Know”
Formative evaluations are evaluations whose primary purpose is to gather information that can be used to improve or strengthen the implementation of a program or initiative. Formative evaluations typically are conducted in the early-to-mid period of a program’s implementation. They can be contrasted with summative evaluations, which are conducted near, or at the end of, a program or program cycle and are intended to show whether or not the program has achieved its intended outcomes (i.e., intended effects on individuals, organizations, or communities). Summative evaluations are used to indicate the ultimate value, merit, and worth of the program. Their findings can be used to determine whether the program should be continued, replicated, or curtailed.
The goal of formative evaluations is to gather information that can help program designers, managers, and implementers address challenges to the program’s effectiveness. In its paper “Different Types of Evaluation,” the CDC notes that formative evaluations are implemented “During the development of a new program (or) when an existing program is being modified or is being used in a new setting or with a new population,” and that formative evaluation “allows for modifications to be made to the plan before full implementation begins, and helps to maximize the likelihood that the program will succeed.” As another source puts it, “Formative evaluations stress engagement with stakeholders when the intervention is being developed and as it is being implemented, to identify when it is not being delivered as planned or not having the intended effects, and to modify the intervention accordingly.” (See “Formative Evaluation: Fostering Real-Time Adaptations and Refinements to Improve the Effectiveness of Patient-Centered Medical Home Interventions.”)
While there are many potential formative evaluation questions, the core of these consists of gathering information that answers:
- Which features of a program or initiative are working and which aren’t working so well?
- Are there identifiable obstacles, or design features, that “get in the way” of the program working well?
- Which components of the program do program participants say could be strengthened?
- Which elements of the program do participants find most beneficial, and which least beneficial?

Typically, formative evaluations are used to provide feedback in a timely way, so that the functioning of the program can be modified or adjusted, and the goals of the program better achieved.

Brad Rose Consulting has conducted dozens of formative evaluations, each of which has helped program managers to understand ways that their program or initiative can be refined, and program participants better served.
We—humans—spend a lot of time in groups: families, workplaces, churches, mosques, and synagogues, political organizations, sports teams, clubs, associations, etc. A “group” is a collection of two or more people who interact, communicate, and influence one another. A crowded elevator or a subway car is not generally considered a group; it’s a crowd. A club or a work-team is a group.
Groups are the settings for a range of behaviors, all of which entail human interaction and influence. Individuals become members of groups in order to achieve goals and to satisfy needs. Groups have shared goals, or agendas, which include their “task agenda”—getting work done—and their “social agenda”—meeting the social-emotional and identity needs of members. Groups assign members to roles that prescribe a set of expectations for each member’s behavior. These roles typically have different statuses, or different levels of prestige, associated with them. There are “in-groups” and “out-groups”; the former are groups with which people identify as members, and the latter are groups with which people don’t identify and which are often “assigned” by members of other groups. An organization is a kind of group whose members work together for a shared purpose in a continuing way. Organizations can contain various groups, both formal and informal, within their boundaries.
Groups have different levels of cohesion. Both internal competition among group members and external competition with other groups can affect the degree of cohesion, or solidarity, of the group. While cohesion is important to most groups, if excessive it can cause undesirable phenomena like “groupthink,” which can lower the quality of the group’s decision-making, lead to closed-mindedness and prejudice, and exert undue pressure to conform.
The features and dynamics described above are applicable to most groups. They are especially noticeable at work, where group dynamics are often operative. The status of members, specified roles, pressures toward conformity and “groupthink,” leadership and “followership,” group decision-making, etc., are issues with which we must often deal, both consciously and unconsciously. In both the for-profit and non-profit worlds, group dynamics are at play. Awareness of these features can help us deal with them productively, rather than experience them unconsciously and, at times, adversely.