Pioneered by market researchers and mid-20th-century sociologists, focus groups are a qualitative research method that engages small groups of people in guided discussions about their attitudes, beliefs, experiences, and opinions on a selected topic or issue. Often used by marketers to obtain consumer feedback about a product or service, focus groups have also become an effective and widely recognized social science research tool that enables researchers to explore participants’ views and to reveal rich data that often remain under-reported by other data collection strategies (e.g., surveys and questionnaires).
Organized around a set of guiding questions, focus groups typically consist of 6-10 people and a moderator who poses open-ended questions that invite participants to respond in their own words. Focus groups usually include people who are somewhat similar in characteristics or social roles. Participants are selected for their knowledge, reflectiveness, and willingness to engage with topics or questions. Ideally, although not always possible, it is best to involve participants who do not already know one another.
Focus group conversations enable participants to offer observations, define issues, pose and refine questions, and create informative discussion and debate. Focus group moderators must be attentive, pose useful and creative questions, create a welcoming and non-judgmental atmosphere, and be sensitive to non-verbal cues and the emotional tenor of participants. Typically, focus group sessions are audio- or video-recorded so that researchers can later transcribe and analyze participants’ comments. Often an assistant moderator takes notes during the focus group conversation.
Focus groups have advantages over other data collection methods. They draw on group dynamics to reveal information that would not emerge from an individual interview or survey; they produce relatively quick, low-cost data (an ‘economy of scale’ compared to individual interviews); they allow the moderator to pose appropriate and responsive follow-up questions; they enable the moderator to observe non-verbal data; and they often produce greater and richer data than a questionnaire or survey.
Focus groups can also have some disadvantages, especially if they are not conducted by an experienced and skilled moderator. Depending on their composition, focus groups are not necessarily representative of the general population; respondents may feel social pressure to endorse other group members’ opinions or to refrain from voicing their own; and group discussions require effective “steering” so that key questions are answered and participants don’t stray from the questions/topic.
Focus groups are often used in program evaluations. I have had extensive experience conducting focus groups with a wide range of constituencies. During my 20 years of experience as a program evaluator, I’ve moderated focus groups composed of homeless persons; disadvantaged youth; university professors and administrators; K-12 teachers; K-12 and university students; corporate managers; and hospital administrators. In each of these groups, I’ve found it beneficial to have a non-judgmental attitude, be genuinely curious, exercise gentle guidance, and respect the opinions, beliefs, and experiences of each focus group member. A sense of humor can also be extremely helpful. (See our previous posts: “Interpersonal Skills Enhance Program Evaluation,” http://bradroseconsulting.com/index.php/interpersonal-skills-enhance-program-evaluation/ and “Listening to Those Who Matter Most, the Beneficiaries,” http://bradroseconsulting.com/index.php/listening-matter-most-beneficiaries/)
About focus groups:
How focus groups work:
Focus group interviewing:
‘Focus groups’ at Wikipedia
A needs assessment is a systematic research and planning process for determining the discrepancy between an actual condition or state of affairs and a future, desired condition or state of affairs. Needs assessments are undertaken not only to identify the gap between “what is” and “what should be,” but also to identify the programmatic actions and resources required to address that gap. Typically, a needs assessment is part of a planning process intended to yield improvements in individuals, education/training, organizations, and/or communities. (https://en.wikipedia.org/wiki/Needs_assessment) Ultimately, a needs assessment is “a systematic process whose aim is to acquire an accurate, thorough picture of a system’s strengths and weaknesses, in order to improve it and to meet existing and future challenges.”
(http://dictionary.reference.com/browse/needs+assessment) Needs assessments have a variety of purposes. They can be used to identify and address challenges in a community, to develop training strategies, or to improve the performance of organizations.
There are a variety of conceptual models of needs assessment. (For a review of various models, see http://ryanrwatkins.com/na/namodels.html.) One of the most popular is the SWOT analysis, in which researchers and action teams conduct a study to determine the strengths, weaknesses, opportunities, and threats involved in a project or business venture. In Planning and Conducting Needs Assessments: A Practical Guide (Thousand Oaks, CA: Sage Publications, 1995), Witkin and Altschuld identify a three-stage model of needs assessment, which includes pre-assessment (exploration), assessment (data gathering), and post-assessment (utilization).
Although there are various approaches to needs assessment, most include the following essential components/steps:
- Identify issue/concern
- Conduct a gap analysis (where things are now vs. where they should be)
- Specify methods for collecting information/data
- Perform literature review
- Collect and analyze data
- Develop action plan
- Produce implementation report
- Disseminate report/recommendations to stakeholders.
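The gap-analysis step above can be sketched as a small computation: given measured values for "where things are now" and target values for "where they should be," compute the gap for each indicator and rank needs by gap size. The indicator names and numbers below are purely hypothetical, for illustration only.

```python
# Hypothetical gap analysis for a needs assessment.
# All indicator names and values are illustrative, not from any real assessment.
current = {"graduation_rate": 0.72, "attendance_rate": 0.88}   # "what is"
desired = {"graduation_rate": 0.90, "attendance_rate": 0.95}   # "what should be"

# Gap = desired minus current, per indicator
gaps = {k: round(desired[k] - current[k], 2) for k in desired}

# Rank needs with the largest gaps first, to help prioritize the action plan
priorities = sorted(gaps, key=gaps.get, reverse=True)

print(gaps)        # {'graduation_rate': 0.18, 'attendance_rate': 0.07}
print(priorities)  # ['graduation_rate', 'attendance_rate']
```

In practice the "indicators" would come from the data-collection step, and the ranked gaps would feed the action plan and the report to stakeholders.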
Why Conduct a Needs Assessment
Needs assessments can be used to identify real-world challenges, to formulate plans to correct inequities, and to involve critical stakeholders in building consensus and mobilizing resources to address identified challenges. For non-profit organizations, needs assessments: 1) use data to identify an unaddressed or under-addressed need; 2) help to more effectively utilize resources to address a given problem; 3) make programs measurable, defensible, and fundable; and 4) inform, mobilize, and re-energize stakeholders. Needs assessments can be used with an organization’s internal and external stakeholders and constituents.
Brad Rose Consulting Inc. has extensive experience in designing and implementing needs assessments for non-profit organizations, educational institutions, and health and human service programs. We’d welcome a chance to speak with you and your colleagues about how we can help you to conduct a needs assessment.
Pyramid Model of Needs Assessment
Needs Assessment: Strategies for Community Groups and Organizations
Needs Assessment 101
U.S. Department of Education
Needs Assessment: A User’s Guide
As mentioned in a previous blog post, program evaluation can play an important role in an organization’s strategic planning initiatives. This is especially true for non-profit organizations, human service agencies, K-12 schools, and higher education institutions, all of which must rely on non-market data for evidence of program effects. Evaluation can help these organizations to identify, gather, and analyze data with which to judge the impact of their activities and to strengthen current, or redirect future, efforts. Only with clear and accurate information can a non-profit organization take stock of its effectiveness and make informed choices about needed changes in direction. As Heather Tunis and Maura Harrington note, “An evaluation plan helps refine data collection and assessment practices so that the information is most useful to advancing the organization’s mission and the objectives of the program being evaluated. Evaluation is a key component of being a learning organization.” (Non-profits: Strategic Planning and Future Program Evaluation) As Mark Fulop observes, “Indeed nonprofits that embrace evaluation as strategy will be driven by internal excellence rather than an external locus of control. Nonprofits that embrace evaluation as strategy will strengthen not only their organizational core but the centrality of their place in solving social needs.” (“The Roles of Strategic Evaluation in Non-profits”)
Here are a few links to resources about strategic planning and program evaluation:
From CDC, “Using Program Evaluation to Improve Programs: Strategic Planning” http://www.cdc.gov/HealthyYouth/evaluation/pdf/sp_kit/sp_toolkit.pdf
“Strategic Planning Resources for Non-Profits” http://nonprofitanswerguide.org/faq/strategic-planning/
“Why Strategic Planning for Non-profits is Important,” http://www.event360.com/blog/why-nonprofit-strategic-planning-is-important/
“What is strategic planning, and why should all schools have a strategic plan?” http://www.strategicplanning4schools.com/overview.html
From the National Alliance for Media Arts and Culture, “Basic Steps to a Strategic Planning Process,” http://www.namac.org/strategic-planning-steps
From the World Bank, “Strategic Planning- A Ten-Step Guide” http://siteresources.worldbank.org/INTAFRREGTOPTEIA/Resources/mosaica_10_steps.pdf
Occasionally, I encounter a resource that I think may be useful to clients and colleagues. I recently had the pleasure of enlisting the help of Green Heron Information Services, who helped me conduct a literature review for a project I was working on. I’d like to share with you some of the central ideas about doing literature reviews—ideas that may be helpful to both evaluators and grant seekers—and encourage you to connect with Matt Von Hendy, the president of Green Heron Information Services: (240) 401-7433, email@example.com, or www.greenheroninfo.com.
If you are like most people, you probably have not thought about literature reviews since college or graduate school, at least until you need to write one for a contract report, journal article, or grant proposal. Just a quick review: a literature review provides an overview of published information on a particular topic or subject, usually within a specific period of time, and discusses critical points of the current state of knowledge in the field, including major findings as well as theoretical and methodological contributions. It generally presents a summary of the important works, but also provides a synthesis of this information.
Literature reviews matter for a number of reasons: they demonstrate a strong knowledge of the current state of research in the field or topic; they show what issues are being discussed or debated and where research is headed; and they provide excellent background information for placing a program, initiative or grant proposal in context. In short, a well-written literature review can provide a ‘mental road map’ of the past, present and future of research in a particular field.
Literature reviews can take many different forms, but good ones typically share certain characteristics:
- Follows an organizational pattern that combines summary and synthesis
- Tracks the intellectual progression of a thought or a field of study
- Contains a conclusion that offers suggestions for future research
- Is well-researched
- Uses a wide variety of high quality resources including journal articles, conference papers, books and reports
When doing research for literature reviews, many evaluation and grant professionals use some combination of Google and other professionals as their primary information sources. While these resources are a great place to start, both have limitations that make them poor places to end your research. For example, search engines such as Google filter results based on a number of factors, and very few experts can keep up to date with the amount of information being published. Fortunately, many high-quality tools, such as citation databases and subject-specific databases, make going beyond Google relatively easy. Many evaluation professionals and proposal writers are motivated to do their own research, but there are times, such as when working in new areas or under tight deadlines, when hiring an information professional to consult, research, or write a literature review can be helpful.
This may all sound good in theory, but how would it work in practice? Let me offer a very quick case study: conducting research for a literature review on evaluating programs that attempt to improve mental health outcomes for teenagers in the United States. I would first start a list of sources by consulting experts and searching on Google. My next step would be to look at the two major citation databases, Scopus and Web of Science, to find out which journal articles and conference papers are most cited. I would then search the subject-specific databases that cover health and psychology, such as PubMed, Medline, and PsycINFO. Finally, I would examine resources such as academic and non-profit think tanks to make sure I was not missing anything important.
A well-researched and well-written literature review offers a number of benefits for evaluation professionals, grant-seekers, and even funders and grantors: it can show an excellent understanding of the research in a subject area; it can demonstrate what current issues or topics are being debated and suggest directions for future research; and it can place a program, initiative, or proposal in context within the larger picture of what is happening in an area. If you have questions about getting started on a literature review, we are always glad to offer suggestions.
Green Heron Information Services offers consulting, research and writing services in support of literature review efforts. www.greenheroninfo.com
Like other research initiatives, each program evaluation should begin with a question, or series of questions, that the evaluation seeks to answer. (See my previous blog post, “Approaching an Evaluation: Ten Issues to Consider.”) Evaluation questions guide the evaluation, give it direction, and express its purpose. Identifying guiding questions is essential to the success of any evaluation research effort.
Of course, evaluation stakeholders can have various interests and, therefore, various kinds of questions about a program. For example, funders often want to know if the program worked, if it was a good investment, and if the desired changes/outcomes were achieved (i.e., outcome/summative questions). During the program’s implementation, program managers and implementers may want to know what’s working and what’s not working, so they can refine the program to ensure that it is more likely to produce the desired outcomes (i.e., formative questions). Program managers and implementers may also want to know which parts of the program have been implemented, and if recipients of services are indeed being reached and receiving program services (i.e., process questions). Participants, community stakeholders, funders, and program implementers may all want to know if the program makes the intended difference in the lives of those whom the program is designed to serve (i.e., outcome/summative questions).
While program evaluations seldom serve only one purpose, or ask only one type of question, it may be useful to examine the kinds of questions that pertain to the different types of evaluations. Establishing clarity about the questions an evaluation will answer will maximize the evaluation’s effectiveness, and ensure that stakeholders find evaluation results useful and meaningful.
Types of Evaluation Questions
Although the list below is not exhaustive, it is illustrative of the kinds of questions that each type of evaluation seeks to answer.
▪ Process Evaluation Questions
- Were the services, products, and resources of the program, in fact, produced and delivered to stakeholders and users?
- Did the program’s services, products, and resources reach their intended audiences and users?
- Were services, products, and resources made available to intended audiences and users in a timely manner?
- What kinds of challenges did the program encounter in developing, disseminating, and providing its services, products, and resources?
- What steps were taken by the program to address these challenges?
▪ Formative Evaluation Questions
- How do program stakeholders rate the quality, relevance, and utility of the program’s activities, products, and services?
- How can the activities, products, and services of the program be refined and strengthened during project implementation, so that they better meet the needs of participants and stakeholders?
- What suggestions do participants and stakeholders have for improving the quality, relevance, and utility of the program?
- Which elements of the program do participants find most beneficial, and which least beneficial?
▪ Outcome/Summative Evaluation Questions
- What effect(s) did the program have on its participants and stakeholders (e.g., changes in knowledge, attitudes, behavior, skills and practices)?
- Did the activities, actions, and services (i.e., outputs) of the program provide high-quality services and resources to stakeholders?
- Did the activities, actions, and services of the program raise awareness and provide new and useful knowledge to participants?
- What is the ultimate worth, merit, and value of the program?
- Should the program be continued or curtailed?
The process of identifying which questions program sponsors want the evaluation to answer thus becomes a means for identifying the kinds of methods that an evaluation will use. If, ultimately, we want to know whether a program is causing a specific outcome, then the best method (the “gold standard”) is a randomized controlled trial (RCT). Often, however, we are interested not just in knowing whether a program causes a particular outcome, but why and how it does so. In that case, it is essential to use a mixed-methods approach that draws not only on quantitative outcome data comparing the outcomes of treatment and control groups, but also on qualitative methods (e.g., interviews, focus groups, direct observation of program functioning, document review) that can help elucidate why what happens, happens, and what program participants experience.
Robust and useful program evaluations begin with the questions that stakeholders want answered, and then identify the best methods to gather data to answer those questions.