What Happens When There’s No More Work? (7/6/2016)
In the June 25–July 1, 2016 issue of New Scientist, Michael Bond and Joshua Howgego report that a recent Oxford University study concludes that within two decades, half of all jobs in the US could be done by machines. Artificial intelligence (AI) and advanced automation are having a profound effect on work and employment, especially in the advanced industrial economies. (See “When Machines Take Over: What Will Humans Do When Computers Run the World?” New Scientist, June 25–July 1, 2016, Vol. 230, Issue 3079, p. 29 ff.)
Martin Ford’s 2015 book, Rise of the Robots: Technology and the Threat of a Jobless Future, explores in greater depth the impact of AI and robotics on employment. Ford traces the powerful (and disturbing) effects of robotization and artificial intelligence on a range of sectors in the economy, and argues that the current AI-driven revolution in the world of work promises to displace both blue-collar, manual laborers and white-collar, college-educated professionals—the latter including, but not limited to, lawyers, computer programmers, managers, and office and retail workers. The current and anticipated “rise of the robots” thus threatens to create an increasingly jobless future for all: a future, Ford argues, that cannot be addressed with more education and upskilling of the workforce, because the jobs for which displaced blue-collar workers once retrained will increasingly be carried out by robots and smart machines.
Ford’s book, like Bond and Howgego’s article, underscores both the ominous changes in the economy and the profound losses that such changes portend. Bond and Howgego explore the significant role work has played, especially in the advanced economies—not only as a source of income and livelihood, but also as an important source of employees’ sense of purpose, identity, and meaning. For instance, they cite a recent Gallup poll showing that 50% of manual workers and 70% of college-educated employees report that they get a sense of identity from their jobs. They also discuss the health benefits associated with the performance of meaningful work, and how the risks of diseases such as dementia and Alzheimer’s may be reduced for those who work more years and postpone retirement.
As work continues to change because of employers’ preference for AI and automation, and fewer people are able to find employment, how will society deal with what looks like an imminent, if not already arriving, tidal wave of unemployment and forced ‘leisure’? Ford shows how recent history has been characterized by diminishing job creation, lengthening jobless recoveries, and soaring long-term unemployment—all of which are certain to have significant social and economic consequences if not adequately addressed.
Ford, Bond, and Howgego all suggest that society will need to rethink the distribution of wealth and society’s assets. Ford, for example, argues for a guaranteed basic income of $10,000 annually for all citizens (augmentable, of course, by paid employment), and contends that if the guaranteed income were not set too high, it would likely avoid creating disincentives to work. He estimates such a plan would cost about $2 trillion annually—about one half of which would be recouped through savings on discontinued welfare programs (e.g., food stamps, housing assistance, the Earned Income Tax Credit), and the other half of which might be raised by new taxes, such as a carbon tax. Bond and Howgego also explore basic incomes, but discuss alternative income-support plans as well, such as a negative income tax, in which poor people receive a guaranteed annual income, middle earners aren’t taxed, and the wealthy are taxed.
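The arithmetic behind Ford’s $2 trillion estimate can be sketched in a few lines. Note that the adult-population figure below is an illustrative assumption, not a number taken from Ford’s book; only the $10,000 benefit, the $2 trillion total, and the half-and-half financing split come from his argument as summarized above.

```python
# Back-of-the-envelope sketch of Ford's basic-income arithmetic.
# ADULT_POPULATION is an assumed figure chosen for illustration.

BASIC_INCOME = 10_000            # dollars per citizen per year (Ford's figure)
ADULT_POPULATION = 200_000_000   # assumed number of eligible adults

gross_cost = BASIC_INCOME * ADULT_POPULATION     # total annual outlay
welfare_savings = gross_cost // 2                # half recouped from discontinued programs
new_taxes_needed = gross_cost - welfare_savings  # remainder from new taxes (e.g., a carbon tax)

print(f"Gross annual cost: ${gross_cost / 1e12:.1f} trillion")
print(f"Welfare savings:   ${welfare_savings / 1e12:.1f} trillion")
print(f"New taxes needed:  ${new_taxes_needed / 1e12:.1f} trillion")
```

Under that assumed population, the gross cost comes to $2 trillion a year, with $1 trillion offset by welfare savings and $1 trillion to be raised through new taxes, matching the proportions Ford describes.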
Whether society is culturally and politically ready for the introduction of a guaranteed minimum income remains to be seen. Ominously, current and forthcoming changes in work, and the resulting displacement of workers, are likely to necessitate a sweeping examination of the economic and moral implications of the disappearance of paid employment. AI and robotic technology, as these writers convincingly show, will continue to eliminate jobs and make human employment increasingly rare.
“When Machines Take Over: What Will Humans Do When Computers Run the World?” Michael Bond and Joshua Howgego, New Scientist, June 25–July 1, 2016, Vol. 230, Issue 3079, p. 29 ff.
Rise of the Robots: Technology and the Threat of a Jobless Future, Martin Ford, Basic Books, 2015
Inventing the Future: Post-Capitalism and a World Without Work, Nick Srnicek and Alex Williams, Verso, 2016
“Evidence-based” – What is it?
“Evidence-based” has become a common adjectival term for identifying and endorsing the effectiveness of various programs and practices in fields ranging from medicine to education, from psychology to nursing, and from criminal justice to social work. The motivation for marshalling objective evidence to guide practices and policies in these diverse fields stems from the growing recognition that professional practices—whether doctoring or teaching, social work or nursing—need to be based on something more sound than custom and tradition, practitioners’ habit, professional culture, received wisdom, and hearsay.
What does “evidence-based” mean?
While definitions of “evidence-based” vary, the most common characteristics of evidence-based research include objective, empirical research that is valid and replicable, whose findings rest on a strong theoretical foundation, and which uses high-quality data and data-collection procedures. The most common definition of Evidence-Based Practice (EBP) is drawn from Dr. David Sackett’s original (1996) definition of evidence-based practice in medicine: “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient. It means integrating individual clinical expertise with the best available external clinical evidence from systematic research” (Sackett, 1996). This definition was subsequently amended to “a systematic approach to clinical problem solving which allows the integration of the best available research evidence with clinical expertise and patient values” (Sackett DL, Straus SE, Richardson WS, et al., Evidence-Based Medicine: How to Practice and Teach EBM, London: Churchill Livingstone, 2000). (See “Definition of Evidence-based Medicine.”)
An evidence-based program, whether in youth development or education, comprises a set of coordinated services and activities whose effectiveness has been established by sound research—preferably, scientifically based research. (See “Introduction to Evidence-Based Practice.”)
In education, evidence-based practices are those grounded in sound research showing that desired outcomes follow from their use. “Evidence-based education is a paradigm by which education stakeholders use empirical evidence to make informed decisions about education interventions (policies, practices, and programs). ‘Evidence-based’ decision making is emphasized over ‘opinion-based’ decision making.” Additionally, “the concept behind evidence-based approaches is that education interventions should be evaluated to prove whether they work, and the results should be fed back to influence practice. Research is connected to day-to-day practice, and individualistic and personal approaches give way to testing and scientific rigor.” (See “What is Evidence-Based Education?”)
Of course, there are different kinds of evidence that can be used to show that practices, programs, and policies are effective. In a subsequent blog post I will discuss the range of evidence-based studies—from individual case studies and quasi-experimental designs to randomized controlled trials (RCTs). The quality of the evidence, as well as the quality of the study in which such evidence appears, is a critical factor in deciding whether the practice or program is not just “evidence-based” but, in fact, effective.
Nonprofit organizations and program evaluators are increasingly being called upon to evaluate multi-stakeholder initiatives. These initiatives often depend upon collaboration among various organizations and agencies. As a consequence, the need to evaluate collaborations and coalitions has become an important requirement for a wide range of contemporary program evaluations.
In a recent article, “Evaluating Collaboration for Effectiveness: Conceptualization and Measurement” (American Journal of Evaluation, 2015, Vol. 36, pp. 67-85), Lydia I. Marek, Donna-Jean Brock, and Jyoti Savla discuss the customary difficulties in assessing collaborations, including the “lack of validated tools, undefined or immeasurable community outcomes, the dynamic nature of coalitions, and the length of time typical for interventions to effect community outcomes…the diversity and complexity of collaborations and the increasingly intricate political and organizational structures that pose challenges for evaluation design” (p. 68).
Building on previous research by Mattessich and Monsey (Collaboration: What Makes It Work? Fieldstone Alliance, 1992), which outlines a six-category inventory of characteristics typical of collaborations (i.e., environment, membership characteristics, process/structure, communication, purpose, and resources), Marek et al. argue for seven key factors that typify successful inter-organizational collaborations and coalitions:
1. Context—the shared history among coalition partners, the context in which they function, and the coalition’s role within the community
2. Membership—individual coalition members’ skills, attitudes, and beliefs that together contribute to, or detract from, successful outcomes
3. Process and organization—factors such as members’ flexibility and adaptability, and members’ clear understanding of their roles and responsibilities
4. Communication—formal and informal communication among members, and communication with the community
5. Function—the determination and articulation of coalition goals
6. Resources—the coordination of financial and human resources required for the coalition to achieve its goals
7. Leadership—strong leadership skills, including organizing and relationship-building skills
Marek et al. offer a tool for assessing the effectiveness of collaboration: a 69-item Collaboration Assessment Tool (CAT) survey instrument, which poses a series of items for each of the factors identified above. For example, the “function” domain of the survey asks organizational respondents to rate the following statements:
- This coalition has clearly defined the problem that it wishes to address
- The goals and objectives of the coalition are based upon key community needs
- This coalition has clearly defined short-term goals and objectives
- This coalition has clearly defined long-term goals and objectives
- Members agree upon the goals and objectives
- The goals and objectives set for this coalition can be realistically attained
- Members view themselves as interdependent in achieving the goals and objectives of this coalition
- The goals and objectives of this coalition differ, at least in part, from the goals and objectives of each of the coalition members
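To make concrete how a survey instrument like the CAT can yield comparable domain-level data, here is a minimal scoring sketch. The 1–5 Likert scale, the sample responses, and the simple item-averaging scheme are all illustrative assumptions; the published CAT defines its own response format and scoring, which this sketch does not claim to reproduce.

```python
# Hypothetical sketch of scoring one domain of a CAT-style survey.
# The 1-5 Likert scale and the averaging scheme are assumptions for
# illustration, not the published CAT scoring procedure.

from statistics import mean

def domain_score(responses):
    """Average item score for a domain, rounded to two decimals."""
    return round(mean(responses), 2)

# One respondent's hypothetical ratings for the eight "function"
# items listed above (1 = strongly disagree ... 5 = strongly agree).
function_responses = [4, 5, 4, 3, 4, 4, 3, 2]

score = domain_score(function_responses)
print(f"Function domain score: {score} / 5")
```

Averaging item responses within each of the seven domains gives an evaluator a simple profile of a coalition’s relative strengths (e.g., strong communication but weak resource coordination), which can then be compared across partner organizations or over time.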
“Evaluating Collaboration for Effectiveness: Conceptualization and Measurement” offers a valuable tool for assessing the often complex inter-organizational relationships that constitute multi-stakeholder collaborations. The CAT offers a system for gathering data to evaluate the quality of collaborations, and implicitly suggests an outline of the key characteristics of successful collaborations. The CAT will be useful both for organizations that are embarking on collaborations and for program evaluators who are charged with evaluating the success of such inter-organizational collaborations.
“Evaluating Collaboration for Effectiveness: Conceptualization and Measurement,” Lydia I. Marek, Donna-Jean Brock, and Jyoti Savla, American Journal of Evaluation, 2015, Vol. 36, pp. 67-85
Dawn Bentley of the Michigan Association of Special Educators recently drew my attention to an important article appearing in the Huffington Post, “Proven Programs vs. Local Evidence,” by Robert Slavin, of Johns Hopkins University. “Proven Programs vs. Local Evidence” compares and contrasts two kinds of evaluations of educational programs.
On the one hand, Slavin says, there are evaluations of large-scale, typically federally funded programs. These programs represent program structures that, once found to be effective, can be replicated in a variety of settings. Evaluation findings from such programs are usually generalizable; that is, they apply to a broader range of contexts than the individual case under study. Slavin terms such evaluations “Proven Programs.” “Proven Program” evaluations are becoming increasingly important because the federal government is interested in funding efforts that are research-based and show strong evidence of effectiveness. Examples of such programs include School Improvement Grants (SIGs) and Title II SEED grants.
On the other hand, Slavin notes, there are locally specific evaluations that are “not intended to produce answers to universal problems,” and whose findings typically are not generalizable. These evaluations are conducted on programs of a more limited, usually local, scope, and tend to be of interest principally to local program stakeholders rather than state or national policy makers. Slavin calls these evaluations “Local Evidence” because they yield evidence that typically isn’t generalizable to larger contexts.
Slavin notes that these two kinds of program evaluation are not necessarily mutually exclusive—for example, a district or state may implement and evaluate a replicable program that responds to its own needs. That said, Slavin says that “Proven Program” evaluations are likely to contribute to national evidence and experience of what works, while “Local Evidence” evaluations are more likely to be of interest to local educators and local stakeholders. He notes that “Local Evidence” evaluations are also more likely to result in stakeholders utilizing and acting on evaluation findings.
While Brad Rose Consulting, Inc. has experience working with the U.S. Dept. of Education in conducting evaluations of national-scope initiatives, we also have extensive experience in, and are strongly committed to, assisting state-level and district-level education agencies to design and conduct evaluation research that produces findings to constructively inform both local policy and programming innovations.
Qualitative interviews can be an important source of program evaluation data. Both in-depth individual interviews and focus group interviews are important methods that provide insights and phenomenologically rich descriptive information that other, numerically oriented data collection methods (e.g., questionnaires and surveys) are often unable to capture. Typically, interviews are structured, semi-structured, or unstructured. (Structured interviews are, essentially, verbally administered questionnaires, in which a list of predetermined questions is asked. Semi-structured interviews consist of several key questions that help to define the areas to be explored, but allow the interviewer to diverge from the questions in order to follow up on an idea or response. Unstructured interviews start with an opening question, but don’t employ a detailed interview protocol, favoring instead the interviewer’s spontaneous generation of subsequent follow-up questions. See P. Gill, K. Stewart, E. Treasure & B. Chadwick.)
Interviews, especially one-on-one, in-person interviews, allow an evaluator to participate in direct conversations with program participants, program staff, community members, and other program stakeholders. These conversations enable the evaluator to learn about interviewees’ experiences, perspectives, attitudes, motivations, beliefs, personal histories, and knowledge. Interviews can be the source of pertinent information that is often unavailable through other methodological approaches. Qualitative interviews enable evaluators to elicit direct responses from interviewees, probe and ask follow-up questions, gather rich and detailed descriptive data, explain and clarify interview questions in real time, observe the affective responses of interviewees, and, ultimately, conduct thoroughgoing explorations of topics and issues critical to the evaluation initiative.
Although less intimate, focus group interviews are another key source of evaluation data. Organized around a set of guiding questions, focus groups typically are composed of 6-10 people and a moderator who poses open-ended questions to focus group participants. Focus groups usually include people who are somewhat similar in characteristics or social roles. Participants are selected for their knowledge, reflectiveness, and willingness to engage topics or questions. Ideally, although not always possible, it is best to involve participants who don’t previously know one another. Focus groups can be especially useful in clarifying, qualifying, and/or challenging data collected through other methods. Consequently, they can be useful as a tool for corroborating findings from other research methodologies.
Regardless of the specific form of qualitative interview—individual or focus group, in-person or telephone—qualitative interviews can be useful in providing data about participants’ experience in, and ultimately the effectiveness of, programs and initiatives.
Learning from Strangers: The Art and Method of Qualitative Interview Studies, Robert S. Weiss, Free Press.
Interviewing as Qualitative Research: A Guide for Researchers in Education and the Social Sciences, Irving Seidman. Fourth Edition.