Program evaluation is a way to judge the effectiveness of a program. It can also provide valuable information to ensure that the program is maximally capable of achieving its intended results. Some of the common reasons for conducting program evaluation are to:
- monitor the progress of a program’s implementation and provide feedback to stakeholders about various ways to increase the positive effects of the program
- measure the outcomes, or effects, produced by a program in order to determine if the program has achieved success and improved the lives of those it is intended to serve or affect
- provide objective evidence of a program’s achievements to current and/or future funders and policy makers
- elucidate important lessons and contribute to public knowledge.
There are numerous reasons why a program manager or an organizational leader might choose to conduct an evaluation. Too often, however, we don’t do things until we have to. Compliance with a funder’s evaluation requirements need not be the only motive for evaluating a program. In fact, learning in a timely way about the achievements of, and challenges to, a program’s implementation—especially in the early-to-mid stages of a program’s implementation—can be a valuable and strategic endeavor for those who oversee programs. Evaluation is a way to learn about and to strengthen programs.
What is Privacy Good For?
The right to privacy is a much-cherished value in America. As we noted in an earlier article, “Transparent as a Jellyfish? Why Privacy is Important” privacy is crucial to the development of a person’s autonomy and subjectivity. When privacy is reduced by surveillance or restrictive interference—either by governments or corporations—such interference may not just affect our social and political freedoms, but undermine the preconditions for the fundamental development and sustenance of the self.
Daniel Solove, Professor of Law at George Washington University Law School, lists ten important reasons for privacy, including: limiting the power of government and corporations over individuals; establishing important social boundaries; creating trust; and serving as a precondition for freedom of speech and thought. Solove also notes, “Privacy enables people to manage their reputations. How we are judged by others affects our opportunities, friendships, and overall well-being.” (See “Ten Reasons Why Privacy Matters,” Daniel Solove.) Julie Cohen, in “What Privacy Is For,” Harvard Law Review, Vol. 126, 2013, writes: “Privacy shelters dynamic, emergent subjectivity from the efforts of commercial and government actors to render individuals and communities fixed, transparent, and predictable. It protects the situated practices of boundary management through which self-definition and the capacity for self-reflection develop.”
Strains on Privacy
Privacy, of course, is under continual strain. In his recent article, “Uh-oh: Silicon Valley is building a Chinese-style social credit system,” (Fast Company, August 8, 2019) Mike Elgan notes that China is not alone in seeking to create a “social credit” system—a system that monitors and rewards/punishes citizen behavior.
China’s state-run system would seem to be extreme. It rewards and punishes citizens for such things as failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, and jaywalking. It also publishes lists of citizens’ social credit ratings and uses public shaming as a means to enforce desired behavior. Elgan notes that Silicon Valley has similar designs on monitoring and motivating what it deems “desirable and undesirable” behavior. The outlines of an ever-evolving corporate-sponsored, technology-based “social credit” system now include:
- Life insurance companies can base premiums on what they find in your social media posts
- Airbnb, which now has more than 6 million listings in its system, can ban customers and limit their travel/accommodation choices. Airbnb can disable your account for life for any reason it chooses, and it reserves the right not to tell you the reason.
- PatronScan, an ID-reading service, helps restaurants and bars spot fake IDs—and troublemakers. The company maintains a list of objectionable customers designed to protect venues from people previously removed for “fighting, sexual assault, drugs, theft, and other bad behavior.” A “public” list is shared among all PatronScan customers.
- Under a new policy Uber announced in May, if your average rating is “significantly below average,” Uber will ban you from the service.
- WhatsApp is, in much of the world today, the main form of electronic communication. You can be blocked if too many other users block you. Not being allowed to use WhatsApp in some countries is as punishing as not being allowed to use the telephone system in America.
While no one wants to endorse “bad behavior,” ceding to corporations and technology giants the power to determine which behavior counts as undesirable and punishable may not be the most just or democratic way to enforce societal norms and expectations. As Elgan observes, “The most disturbing attribute of a social credit system is not that it’s invasive, but that it’s extra-legal. Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights.” Even more ominously, as Julie Cohen writes, “Conditions of diminished privacy shrink the capacity [for self-government], because they impair both the capacity and the scope for the practice of citizenship. But a liberal democratic society cannot sustain itself without citizens who possess the capacity for democratic self-government. A society that permits the unchecked ascendancy of surveillance infrastructures cannot hope to remain a liberal democracy.”
Mike Elgan, “Uh-oh: Silicon Valley is building a Chinese-style social credit system,” Fast Company, August 8, 2019.
Alexandra Ma, “China has started ranking citizens with a creepy ‘social credit’ system — here’s what you can do wrong, and the embarrassing, demeaning ways they can punish you,” Business Insider, October 29, 2018.
Anthony Davenport, “America Isn’t Far Off from China’s ‘Social Credit Score,’” Observer, February 19, 2018.
Louise Matsakis, “How the West Got China’s Social Credit System Wrong,” Wired, July 29, 2019.
Daniel Solove, “Ten Reasons Why Privacy Matters.”
Julie Cohen, “What Privacy Is For,” Harvard Law Review, Vol. 126, 2013.
Geoffrey A. Fowler, “The Spy in Your Wallet: Credit Cards Have a Privacy Problem,” The Washington Post, August 26, 2019.
Why do so many of us aspire to be “normal”? Who decides what’s normal and abnormal? What happens to our self-worth and social worth when we discover that we aren’t “normal”? In a recent article, “How Did We Come Up with What Counts as Normal,” Jonathan Mooney discusses the rise of an idea that has acquired substantial power in modern society. Mooney notes that “normal” entered the English language only in the mid-19th century and has its roots in the Latin “norma,” which refers to the carpenter’s T-square. It originally meant simply “perpendicular.” Right angles, however, are considered mathematically “good,” and “normal” soon came to be associated not just with the description of the orthogonal angle but also with the normative notion of something that is desirable or socially expected. Mooney argues that it is this ambiguity, as both a descriptive word and a normative ideal, that makes “normal” so appealing and powerful.
“Normal” was first used in the academic disciplines of comparative anatomy and physiology. For academics in these and other fields, “normal” soon evolved to describe bodies and organs that were “perfect” or “ideal,” and it was also used to name certain states as “natural.” Eventually, thanks largely to the field of statistics, ideas about the normal conflated the average with the ideal or perfect. In the 19th century, for example, Adolphe Quetelet, a deep believer in the power of statistics, advanced the idea of the “average man” and argued that “the normal” (i.e., the average) was perfect and beautiful. Quetelet characterized that which was not “normal” not simply as “abnormal,” or non-average, but as something potentially monstrous. “In 1870, in a series of essays on ‘deformities’ in children, he juxtaposed children with disabilities to the normal proportions of other human bodies, which he calculated using averages.” Thus, averages soon became the aspirational ideal.
Mooney also describes how the statistician Francis Galton, who was Charles Darwin’s cousin, “…was both the first person to develop a properly statistical theory of the normal . . . and also the first person to suggest that it be applied as a practice of social and biological normalization.” “By the early twentieth century, the concept of a normal man took hold. Soon, the emerging field of public health embraced the idea of the normal; schools, with rows of desks and a one-size-fits-all approach to learning, were designed for the mythical middle; and the industrial economy sought standardization, which was brought about by the application of averages, standards, and norms to industrial production. Moreover, eugenics, an offshoot of genetics created by Galton, was committed to ridding the world of human ‘defectives.’”
The ensuing predominance (some might say “domination”) of “the normal” became firmly established by the mid-20th century. Mooney points out, however, that the normal was not so much “discovered” as it was invented, largely by statistics and statisticians, and promulgated by the social sciences and moralists. As Alain Desrosières, a renowned historian of statistics, wrote: “With the power deployed by statistical thought, the diversity inherent in living creatures was reduced to an inessential spread of ‘errors,’ and the average was held up as the normal—as a literal, moral, and intellectual ideal.”
Jonathan Mooney, “How Did We Come Up with What Counts as Normal,” Literary Hub, August 16, 2019.
Jonathan Mooney, Normal Sucks: How to Live, Learn, and Thrive Outside the Lines, Henry Holt and Co., 2019.
For information on social norms (formal and informal norms, mores, folkways, etc.), see https://courses.lumenlearning.com/alamo-sociology/chapter/social-norms/ and “What is a Norm?”
In “Getting Political Is Good for Everyone,” Bill Shore says that nonprofits can and should be more political than many in the nonprofit community believe they are legally permitted to be. Achieving the goals that many nonprofits pursue depends upon nonprofits becoming more, not less, political. While some activities are prohibited, including working on campaigns, donating to candidates, and engaging in lobbying beyond certain generously defined limits, Shore notes that “… a broad range of political work is permitted, appropriate, even essential. (There is also the option of establishing a 501(c)(4) that permits campaign engagement and support, which we haven’t done.)”
Shore asserts that getting political is often about educating, not necessarily lobbying or campaigning. He further argues that, “Nonprofits need to build their internal political capacity. Nonprofit political activity is good for nonprofits, good for politics, and good for the people that both aim to serve.” Ultimately, by expanding their political activities within stipulated legal limits, “nonprofits benefit by seeing their programs and services achieve greater scale and reach more people in need, in ways that only politics and public policy can guarantee.”
“Getting Political Is Good for Everyone,” Bill Shore, Stanford Social Innovation Review, July 17, 2019
In “A Machine May Not Take Your Job, but One Could Become Your Boss,” by Kevin Roose, New York Times, June 23, 2019, the author says “…in all of the worry about the potential of artificial intelligence to replace rank-and-file workers, we may have overlooked the possibility it will replace the bosses, too.” Roose observes that one of the goals of AI is to optimize the efficiency of humans in the workplace. Thus, systems that monitor, guide, and report on employee performance are increasingly seen in white-collar workplaces, where employees are being “assisted” to be more productive, more customer-friendly, and quicker by “adjunct management,” i.e., artificial intelligence.
Roose scans a variety of workplaces. In the insurance industry, he reports on AI systems that provide on-screen prompts to call center workers, nudging them to be chirpier and more empathetic. He discusses the use of AI and employee tracking in both Amazon warehouses and retail stores; in the latter, he notes that 7-Eleven “uses in-store sensors to calculate a ‘true productivity’ score for each worker, and rank workers from most to least productive.” He also notes that “Uber, Lyft and other on-demand platforms have made billions of dollars by outsourcing conventional tasks of human resources — scheduling, payroll, performance reviews — to computers.”
Management by algorithm doesn’t just affect call center workers, Uber drivers, and warehouse workers. It also bodes less than well for managers, whose traditional supervisory and oversight duties are increasingly being handled by “robots.” As we discussed in “Humans Need Not Apply,” AI promises to displace both blue-collar manual laborers and white-collar, college-educated professionals — the latter including, but not limited to, lawyers, computer programmers, managers, and office and retail workers. “A Machine May Not Take Your Job, but One Could Become Your Boss” hauntingly suggests that management, too, is in the cross-hairs of AI.
Brian Merchant, “Robots Are Not Coming for Your Job. Management Is,” Gizmodo.