
9. Assessment for Learning

🎯 Learning Outcomes

  • Distinguish between assessment (focused on learning) and evaluation (focused on broader impact), understanding that evaluation is covered separately in Extra Topics
  • Design formative and summative assessments aligned with learning outcomes
  • Define clear, observable indicators of learning

📝 Guiding Questions

  • How will you know whether learners have actually learned what you intended?
  • What would it look like if a learner succeeded – what would they produce, do, or decide?
  • Are your assessments aligned with your intended outcomes and activities?
  • How can assessment support learning rather than just measure it?

Here is something that might feel familiar: you have just run a training session. Participants seemed engaged. They nodded along, asked a few questions, and said in the feedback form that it was useful. But did they actually learn anything? Could they do something on Monday that they could not do on Friday? You are not sure, because you never built a way to find out.

This lesson is about building that way in. Assessment is how you make learning visible – to your learners and to yourself. And when done well, it is not a test bolted on at the end. It is a tool that supports learning as it happens.

Why this matters

Many people's experience of assessment is formal, high-stakes, and anxiety-provoking – exams, grades, pass/fail judgements. If that is the association you carry, it is worth setting it aside for now. The kind of assessment this lesson focuses on is different. It is low-stakes, practical, and designed to help people learn, not to sort them into categories.

Assessment matters for two reasons that are easy to overlook.

First, it helps learners see where they are. Without some form of assessment, learners have no reliable way to gauge their own progress. They might feel confident about a skill they have not actually developed, or feel uncertain about something they are doing well. A short, low-pressure check – applying a method to a new scenario, explaining a concept to a peer, producing a brief output – gives learners honest information about their own understanding.

Second, and this is the part trainers often miss: assessment gives you feedback about your teaching. When learners consistently struggle with a particular concept, that is not just information about their learning – it is information about your design. Maybe your explanation was unclear. Maybe the activity did not give enough scaffolding. Maybe the outcome was more ambitious than you realised. Trainers who treat assessment as feedback on their own practice, not just as a learner scorecard, improve faster and design better training over time.

The reframe

Assessment is not a judgement you pass on learners. It is a two-way mirror: learners see their progress, and you see where your training is working – and where it is not.

Assessment vs evaluation – a quick distinction

This lesson focuses on assessment: making learning visible during and at the end of your training. There is a related concept – evaluation – which asks a broader question: did the training contribute to real-world change over time? Evaluation looks beyond the training event itself to longer-term impact. It is important work, and it is covered in the Extra Topics section. For now, the question is simpler: did learners learn what you intended?

What does success look like?

Assessment starts with a concrete question: if a learner has achieved a particular outcome, what would you actually see them do?

This is where the outcomes you defined in Lesson 5 become practical. A well-written outcome already implies its own assessment. If the outcome is "analyse a local dataset and produce recommendations," then success looks like a learner producing a set of recommendations that follow logically from data they have analysed. If the outcome is "facilitate a small-group discussion," then success looks like a learner running a discussion where participants are engaged and the conversation stays focused.

The key is to define observable indicators – things you can actually see, hear, or read. "Understands climate data" is not observable. "Interprets a temperature trend graph and explains what it means for local farming decisions" is.

Start from your outcomes

Go back to the outcomes you wrote in Lesson 5. For each one, ask: what would a learner who has achieved this actually do? That is your indicator. If you cannot answer the question, the outcome may need to be rewritten.

Defining indicators that are honest, not inflated

A common trap is to choose indicators that are easy to observe but do not actually reflect the learning you care about. A learner completing a worksheet does not necessarily mean they understand the concept – they might have copied from a neighbour or followed instructions mechanically. A learner giving a confident presentation does not mean their analysis is sound.

Good indicators are tied to the substance of the outcome, not just its surface appearance. Ask yourself: could someone hit this indicator without actually having learned the skill? If the answer is yes, sharpen the indicator. "Completes the worksheet" fails this test; "explains why each answer on the worksheet follows from the data" is much harder to hit without genuine understanding.

Formative and summative assessment

Assessment serves different purposes at different points in the training.

Formative assessment happens during learning. Its purpose is to support improvement while there is still time to act on it. A quick check-in question partway through an activity, a peer review of a draft, a show of hands about confidence levels – these are all formative. They give you and the learners real-time information about what is landing and what needs more work.

Formative assessment is where the "two-way mirror" is most powerful. When you pause an activity to check understanding and discover that half the room is confused, you have a choice: push ahead with your planned content, or stop and address the gap. The second option is almost always better, and it is only available to you if you built the check-in into your design.

Summative assessment happens at the end of a learning sequence. Its purpose is to determine whether learners have achieved the intended outcomes. A final project, a demonstration of a skill, a completed output that meets defined criteria – these are summative. They answer the question: can learners do what we set out to teach?

Both types matter, but for practical training, lean heavily toward formative assessment. Short training programmes rarely need formal end-of-course exams. What they need are frequent, low-stakes feedback loops that keep learning on track.

Avoid importing academic defaults

Formal exams, detailed rubrics, and grading scales exist for good reasons in university settings – but they are not the default for practical training. If your learners are community health workers or agricultural extension officers, a written test is probably not the most valid way to assess whether they can apply a new method in the field. Choose assessment approaches that fit your context and serve your learners, not ones borrowed from academia because they feel "proper."

Activities can be assessments

Here is a liberating idea, especially if you are working in resource-constrained settings: you do not need a separate exam. If a learner completes a well-designed activity that demonstrates a skill, that is evidence of learning.

This connects directly to the activities you designed in Lesson 7. A good activity already generates observable evidence of what learners can do. If the activity asks learners to analyse a dataset and produce recommendations, and a learner produces thoughtful, data-grounded recommendations, you have just assessed their learning – without a test, without a rubric, without a separate assessment event.

The key is to be intentional about it. When you design an activity, ask: what will I be able to observe about learners' understanding from their work on this task? If the answer is "quite a lot," you may already have your assessment. If the answer is "not much – they could complete the task without really understanding the material," then either the activity or the assessment needs rethinking.

Assessment through activity: climate data training

The climate data training team needed to assess whether community organisers could interpret local datasets. Rather than creating a separate test, they looked at the activity they had already designed: organisers analyse a local dataset, produce a brief report, and present recommendations to their peers. The facilitators observed the analysis process, reviewed the reports, and listened to the presentations. Organisers who could explain why their data pointed to particular recommendations – not just what the recommendations were – demonstrated the outcome. The activity was the assessment.

Designing assessment into your training

Rather than treating assessment as a separate design task, weave it into what you have already built. Here is a practical method:

Start with your alignment table. Go back to the alignment work from Lesson 5. For each outcome, you already have an activity. Now ask: does this activity generate evidence that the learner has achieved the outcome? If yes, note what you will look for (your indicator). If no, consider whether you need to adjust the activity or add a brief assessment moment.

Build formative checks into activities. At natural pause points in longer activities, add a quick check: ask learners to summarise what they have found so far, compare their approach with a peer, or answer a specific question. These do not need to be formal – a two-minute pair discussion or a brief written reflection is enough.

Choose your summative evidence. For each key outcome, decide what final evidence you will accept. This might be a completed output, a demonstration, a short presentation, or a peer review. It does not need to be the same format for every outcome – in fact, varying your methods gives you richer information.

Keep it proportionate. Assessment should serve learning, not consume the training. If you are spending more time designing assessments than designing learning experiences, you have lost the plot. For a short training programme, a handful of well-chosen indicators and a few formative check-ins are usually enough.
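To make these steps concrete, here is how one row of an assessment plan might look, sketched from the climate data training above (the wording is illustrative, not prescriptive):

  • Outcome: analyse a local dataset and produce recommendations
  • Activity: analyse a local dataset, write a brief report, and present recommendations to peers
  • Indicator: the recommendations follow logically from the data, and the learner can explain why the data points to them
  • Formative check: a mid-activity pause where learners summarise what their data shows so far
  • Summative evidence: the completed report and the peer presentation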

A useful gut check

Ask yourself: if I removed all formal assessment from this training, would I still have a reasonable sense of whether learners achieved the outcomes? If the answer is yes (because your activities already generate good evidence), your assessment design is probably on the right track. If the answer is no, you need to build in more visibility.

In practice

👉 Activity 10: Assessment Plan – define observable indicators for your key outcomes and decide how you will assess learning (formative and summative)

👉 Come back to Activity 6: Alignment Table

What to do: Review your alignment table. For each outcome, check that your activity generates observable evidence of learning. Add or refine your assessment approach where gaps exist. Fill in the assessment column if it is still blank.

Key takeaway

Assessment is not a judgement bolted on at the end – it is a built-in way to make learning visible. When your activities are well-designed, they already generate the evidence you need. And when learners struggle, that is not just their problem – it is information about where your training can improve.

Before you move on

You should now have:

  • observable indicators of success for each key learning outcome
  • formative assessment moments built into your activities
  • a summative assessment approach for your key outcomes
  • a reviewed alignment table with the assessment column completed

Further reading (optional)

  • Black, P., & Wiliam, D. (1998) – Assessment and Classroom Learning → Supports: formative assessment aligned with learning outcomes → Why it matters: foundational research showing how assessment for learning improves understanding and performance → Source: https://doi.org/10.1080/0969595980050102

  • Wiggins, G. (1998) – Educative Assessment: Designing Assessments to Inform and Improve Student Performance → Supports: designing assessment that serves learning rather than just measurement → Why it matters: argues that assessment should be a learning experience in itself, not just a check at the end → Source: Jossey-Bass

  • Boud, D. (2000) – Sustainable Assessment: Rethinking Assessment for the Learning Society → Supports: assessment that develops learners' own capacity to judge their work → Why it matters: challenges over-reliance on external judgement and proposes assessment that builds learner autonomy → Source: https://doi.org/10.1080/03075070050025488