In March, the Nonprofit Centers Network hosted a webinar during which I shared three steps for developing an evaluation strategy, which I will also share with you today. But first, some evaluation basics:
- What is Evaluation? Evaluation is the systematic assessment of program operations and outcomes compared to explicit or implicit performance standards. Evaluation is planned and organized. It looks at both the execution of a program and the outcomes to develop a clear picture of what is happening in a program and what elements create the outcomes we observe. And we compare these observations to performance standards to provide context around the program’s performance—to help us understand if what we observed meets our expectations.
- Why Evaluate? Most nonprofits and shared spaces want to achieve four intersecting goals: First, we want to monitor our mission, to understand what we are doing and how it is moving us toward achieving our goals. Second, we want to deliver better services. Third, we want to increase our impact, by tweaking our work to deliver the most bang for our buck. And fourth, we want to motivate funders, members, and donors around our work. Evaluation can help us do all of these things. Evaluation is a great tool for understanding what we are doing, what we could be doing better, and how we can communicate these successes and lessons to key stakeholders.
Now that we’ve covered the “what” and “why” of evaluating, let’s talk about the “how” with three key steps for developing an evaluation strategy:
- Map Your Program: Before you even get started with evaluation design, it is critical to be clear about what you are evaluating. One tool to make your program and services explicit is a logic model or theory of change, which visually depicts what you do and what you expect those actions to result in. Think about the inputs and resources your space relies on, your activities, and the outputs and outcomes that you hope to see. The most important part of mapping your program is capturing the secret sauce that makes your shared space special. What are the key elements that make your shared space work? What are you able to accomplish that you could not if everyone was located separately? To learn more, check out: https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide
- Define Key Evaluation Questions: What do you want to know about your program and services? What pieces of information would help you make better programmatic decisions? Step two is dedicated to defining what you want to learn from your evaluation. Spending adequate time at the beginning of an evaluation to define questions will help focus the evaluation and structure the way you communicate your findings. To learn more, check out: https://www.wmich.edu/sites/default/files/attachments/u372/2016/eval_questions_checklist-2016-03.pdf
- Match Questions with Data Sources: Finally, identify what data sources can help answer each of your evaluation questions. Aim for a mixture of different types of data from different sources so that you can assess whether the answer to your question is consistent across multiple perspectives. Remember that quantitative data (think: counts, fixed-response survey questions, demographic data, financial data) is strongest when you need precise, specific data, when you have a cause-and-effect relationship you want to test, and when you already have a theory about what is happening that you want to replicate. In contrast, qualitative data (think: interviews, focus groups, observations, open-ended survey questions, photos, videos) is strongest when you want to learn about a topic you don't know much about, to explore a sensitive topic, to capture the lived experience of your participants, or when you want rich description of the "how" and "why" of your quantitative data. To learn more, check out: http://betterevaluation.org/plan/describe/collect_retrieve_data