Not long ago I was meeting with a prospective client.  It was our first meeting, and shortly after our initial conversation had begun—but long before we had a chance to discuss the purposes of the evaluation, the questions the evaluation would address, or the methods that would be used—the client began imagining the many marketing uses for the evaluation’s findings.  Eager to dissuade my colleague from prematurely celebrating her program’s successes, I observed that while the evaluation might reveal important information about the program’s achievements and benefits, it might also find that the program had, in fact, not achieved some of the goals it had set out to realize.  I cautioned my colleague, “My experience tells me that we will want to wait to see what the evaluation shows before making plans to use evaluation findings for marketing purposes.”  In essence, I was making a case for discovering the truth of the program before launching an advertising campaign.

We live in a period where demands for accountability and systematic documentation of program achievements are pervasive.  Understandably, grantees and program managers are eager to demonstrate the benefits of their programs.  Indeed, many organizations rely upon evaluation findings to demonstrate to current and future funders that they are making a difference and that their programs are worthy of continued funding.  Despite these pressures, it is very important that program evaluations be conducted with the utmost integrity and objectivity so that findings are accurate and useful to all stakeholders.

The integrity of the evaluation is critical—indeed, paramount—for all stakeholders.  Reliable, robust, and unbiased evaluation findings are important not just to funders who want to know whether their financial resources were used wisely, but also to the program implementers, who need to know whether they are making the difference(s) they intend to make.  Without objective data about the outcomes that a program produces, no one can know with any certainty whether a program is a success or a “failure” (take a look at our blog post Fail Forward, which examines what we can learn from “failure”), i.e., whether it needs refining and strengthening.

As a member of the American Evaluation Association, Brad Rose Consulting, Inc. is committed to upholding the AEA’s “Guiding Principles for Evaluators.”  The Principles state:
“Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process. Evaluators should be explicit about their own, their clients’, and other stakeholders’ interests and values concerning the conduct and outcomes of an evaluation (and)… should not misrepresent their procedures, data or findings. Within reasonable limits, they should attempt to prevent or correct misuse of their work by others.”

In each engagement, Brad Rose Consulting, Inc. adheres to the “Guiding Principles for Evaluators” because we are committed to ensuring that the findings of our evaluations are clearly and honestly represented to all stakeholders. This commitment is critical not just to program sponsors—the people who pay for programs—but also to program managers and implementers, who also need unbiased and dispassionate information about the results of the programs they operate. To learn about the evaluation methods we offer, visit our Data collection & Outcome measurement page.
