In his recent New York Times article, “What Data Can’t Do” (February 18, 2013), David Brooks discusses some of the limits of “data.”

Brooks writes that we now live in a world saturated with gargantuan data-collection capabilities, and that today’s powerful computers can handle huge data sets which “can now make sense of mind-bogglingly complex situations.” Despite these analytical capacities, there are a number of things that data can’t do very well. Brooks remarks that data is unable to fully understand the social world; it often fails to capture the quality (vs. the quantity) of social interactions; and it struggles to make sense of “context,” i.e., the real environments in which human decisions and human interactions are inevitably embedded. (See our earlier blog post Context Is Critical.)

Brooks insightfully notes that data often “obscures values,” by which he means that data often conceals the implicit assumptions, perspectives, and theories on which it is based. “Data is never ‘raw,’ it’s always structured according to somebody’s predispositions and values.” Data is always a selection of information. What counts as data depends upon what kinds of information the researcher values and thinks is important.

Program evaluations necessarily depend on the collection and analysis of data because data constitutes important measures and indicators of a program’s operation and results. While evaluations require data, it is important to note that data alone, while necessary, is insufficient for telling the complete story about a program and its effects. To get at the truth of a program, it is necessary to 1) discuss both the benefits and limitations of what constitutes “the data,” i.e., to understand what counts as evidence; 2) use multiple kinds of data, both quantitative and qualitative; and 3) employ experience-based judgment when interpreting the meaning of data.

Brad Rose Consulting, Inc., addresses the limitations pointed out by David Brooks by working with clients and program stakeholders to identify what counts as “data,” and by collecting and analyzing multiple forms of data. We typically use a multi-method evaluation strategy, one that relies on both quantitative and qualitative measures. Most importantly, we bring to each evaluation project our experience-based judgment when interpreting the meaning of data, because we know that to fully understand what a program achieves (or doesn’t achieve), evaluators need robust experience so that they can transform mere information into genuine, usable knowledge. To learn about our diverse evaluation methods, visit our Data Collection & Outcome Measurement page.
