Evaluation for Impact
Why this matters
- Assessment (Lesson 9) asks whether learners met your intended outcomes during the training. Evaluation asks a different question: did the training contribute to meaningful change over time?
- For practitioners in resource-constrained settings, evaluation often feels like an afterthought or a donor requirement. But done well, it is the most useful tool you have for improving your training and understanding your impact.
- This topic helps you plan evaluation that is realistic, honest, and useful — not performative.
What evaluation is
- Understanding change over time: Evaluation looks beyond the training event itself. It asks what happened afterwards — in learners' practice, in their organisations, in the communities they work with.
- Not the same as assessment: Assessment checks learning during or immediately after training. Evaluation tracks whether that learning translated into action, and whether that action contributed to the outcomes you intended.
- Not the same as satisfaction surveys: "Did you enjoy the training?" tells you almost nothing about impact. Evaluation focuses on evidence of change, not feelings about the experience.
Collecting useful and feasible evidence
- Match your methods to your resources: Follow-up interviews, workplace observations, and longitudinal tracking are powerful but resource-intensive. Reflective journals, peer check-ins, and output reviews are lighter-weight alternatives that can still provide meaningful evidence.
- Multiple sources, not just self-report: Learner self-assessment is useful but limited — people tend to overestimate their own change. Where possible, combine self-report with observation, facilitator notes, peer review, or evidence from learners' own work outputs.
- Timing matters: Evidence collected immediately after training captures recall and enthusiasm, not lasting change. Build in follow-up touchpoints — two weeks, three months, six months — to see what sticks (one way to schedule these is sketched below).
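If it helps to make the timing concrete, here is a minimal sketch of a follow-up schedule. Everything in it is an illustrative assumption: the `Touchpoint` structure, the intervals, the methods, and the sources are placeholders to adapt, not a prescribed design.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Touchpoint:
    """One planned evidence-collection moment after the training (hypothetical structure)."""
    label: str        # e.g. "two-week check-in"
    offset_days: int  # days after the training ends
    method: str       # how the evidence is collected
    source: str       # who provides the evidence

# Illustrative schedule only: intervals, methods, and sources are assumptions.
SCHEDULE = [
    Touchpoint("end of training", 0, "exit self-assessment", "learners"),
    Touchpoint("two weeks", 14, "reflective journal prompt", "learners"),
    Touchpoint("three months", 90, "peer check-in notes", "peer pairs"),
    Touchpoint("six months", 180, "review of work outputs", "facilitator"),
]

def due_dates(training_end: date) -> list[tuple[str, date, str, str]]:
    """Turn the relative schedule into concrete follow-up dates."""
    return [
        (t.label, training_end + timedelta(days=t.offset_days), t.method, t.source)
        for t in SCHEDULE
    ]

if __name__ == "__main__":
    for label, when, method, source in due_dates(date(2024, 6, 1)):
        print(f"{when}  {label}: {method} ({source})")
```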
Signal vs noise
- Distinguish meaningful change from weak indicators: A learner saying "I learned a lot" is noise. A learner redesigning their workshop based on principles from your training is signal. Look for evidence of behaviour change, not just positive feedback.
- Be honest about attribution: Your training operates within a system (you mapped this in Lesson 1). Change in that system has many causes. You can rarely prove your training caused a specific outcome — but you can collect evidence that it contributed, and you can identify where it fell short.
- Watch for the easy wins: It is tempting to focus on outcomes that are easy to measure (attendance, completion rates, satisfaction scores) rather than outcomes that matter (changed practice, new capabilities, system-level shifts). The easy metrics are not useless, but they should not be the whole story.
Using evaluation to improve
- Refining activities: If learners consistently struggle with a particular skill after training, that points to a gap in your activity design — not a failure of the learners. Use evaluation data to identify which activities produce lasting learning and which do not.
- Identifying gaps: Evaluation often reveals needs your training did not address. These gaps are valuable — they tell you what the next iteration should include.
- Improving delivery: Patterns in evaluation data can highlight facilitation issues (pacing, group dynamics, unclear instructions) that are hard to see from inside the room. Combine your evaluation findings with the facilitation reflection from Activity 11.
Connection to Theory of Change
- Closing the loop: In Lesson 2, you articulated a Theory of Change — a chain from your training activities through to the impact you hoped for. Evaluation is where you test that chain. Which links held? Which assumptions turned out to be wrong? (One way to record this is sketched after this list.)
- Revising your theory: A Theory of Change is a hypothesis, not a promise. Evaluation evidence will almost certainly show that some of your assumptions were off. That is not failure — it is learning. Update your Theory of Change based on what you find, and let that inform how you redesign the training.
- Connecting back to the system: Your system map (Lesson 1) identified actors, resources, and constraints. Evaluation helps you see which system-level factors supported or undermined your training's impact. Some of these are within your control; others are not. Knowing the difference shapes realistic expectations for the next iteration.
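A lightweight way to test the chain is to record each link alongside the evidence for and against it. The sketch below is purely illustrative: the `Link` structure, the assumptions, the evidence, and the verdict labels are invented placeholders, not findings from any real evaluation.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    """One link in the Theory of Change chain, with the evidence against it (hypothetical structure)."""
    assumption: str   # what you believed would happen
    evidence: list[str] = field(default_factory=list)  # what you actually observed
    verdict: str = "untested"  # "held", "partly held", "failed", or "untested"

# Hypothetical chain: assumptions and evidence are invented placeholders.
chain = [
    Link("Learners can apply the skill after the workshop",
         ["3 of 8 learners redesigned a session using the technique"],
         "partly held"),
    Link("Learners' organisations give them room to change practice",
         ["two learners reported no time allocated for redesign"],
         "failed"),
    Link("Changed practice reaches the wider community"),
]

# Summarise which links held, so the revision of the Theory of Change
# starts from evidence rather than impressions.
for link in chain:
    print(f"[{link.verdict:>11}] {link.assumption}")
    for item in link.evidence:
        print(f"              - {item}")
```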
In practice
👉 Activity 12: Evaluation Plan — Define what evidence you will collect, when, from whom, and how you will use it to improve your training. This plan connects directly to your Theory of Change (Activity 2) and your Assessment Plan (Activity 10).
Before you move on
You should now have:
- A clear distinction between assessment (during training) and evaluation (after training)
- An evaluation plan with specific, feasible methods for collecting evidence of change
- Criteria for distinguishing meaningful evidence from weak indicators
- A plan for using evaluation findings to revise your training design and Theory of Change