
Guiding student improvement without individual feedback

Feedback seems extremely powerful. It is ‘among the most common features of successful teaching and learning’ with an average effect size of 0.79, ‘twice the average effect of all other schooling effects’ (Hattie, 2012: pp.115-116). Such meta-analyses are problematic (see, for example, Wiliam, 2016) and more recent reviews have offered lower effect sizes, but the overall picture is clear: ‘Good feedback can significantly improve learning processes and outcomes’ (Shute, 2008). Anders Ericsson emphasises the importance of feedback and guided improvement in his work on expert performance: ‘Deliberate practice involves feedback and modification of efforts in response to that feedback’ (Ericsson and Pool, 2016: p.99).

Providing effective feedback is problematic, however. ‘While feedback is among the most powerful moderators of learning, its effects are among the most variable’ (Hattie, 2012: p.115). Providing feedback successfully is a real challenge: ‘Get it wrong, and students give up, reject the feedback, or choose an easier goal’ (Wiliam, 2011: p.119). This is illustrated most vividly in Kluger and DeNisi’s meta-analysis (1996), which found that studies of feedback showed an average effect size of 0.41, but that more than 38 per cent had negative effects.

Providing effective feedback to individual students is problematic for practical reasons too. Marking is the most common way to provide individual feedback, but there is limited evidence of its effectiveness (Elliott et al., 2016) and it takes an inordinate amount of teachers’ time (Gibson et al., 2015). We may limit our marking in favour of verbal feedback, but reaching every student and giving clear verbal feedback may prove challenging during busy lessons. Once we’ve assessed students’ work, there are different ways in which we can guide improvement without giving individual feedback.

These approaches (see also Figure 1) could be combined with one another, or with individual feedback – but each might also prove effective on its own:

1. Re-teaching
Re-teaching allows us to challenge common misconceptions or knowledge gaps collectively and efficiently. We might reiterate definitions or offer mnemonics to support students with declarative knowledge; or we might offer examples, counterexamples and big pictures to support conceptual knowledge (Shute, 2008). While we could repeat our initial teaching, fresh images, examples and metaphors are likely to prove more useful: students who struggled to add using a number line may do better with counters; those confused by their reading about the American Constitution may benefit from studying the court cases around President Trump’s 2017 travel ban. Students who ‘got it’ last lesson need not get bored: they’ll have forgotten aspects of the lesson and can also offer some of the explanations. Re-teaching seems the simplest and most efficient way to approach knowledge gaps and misconceptions without giving individual feedback.

Figure 1: Providing effective feedback. Four circles are linked in sequence: ‘Re-teach (fresh examples)’; ‘Revisit goals: models and checklists’; ‘Revise process: model improvement’; and ‘Redraft. Practice. Check’.

2. Revisiting goals
Closing the gap between students’ performances and goals may require more (or clearer) knowledge; it may also require clearer goals. Just as we may revisit what we taught students, we may also revisit the models we offered, or provide fresh ones; students can now compare their efforts with the model and better understand where the gap lies. Revisiting checklists may help students identify missing features of their work: punctuation, point sentences or balanced equations. Asking students to revisit the goals through examining one another’s work might work, but could prove unpredictable – the teacher’s choice of a model in advance is likely to prove more productive. Revisiting goals allows students both to improve the work at hand and to understand better what good work looks like in the subject.

3. Revising the process
We may also help students revise how they can change their work to meet these goals. We can do this by modelling the process of improvement – providing demonstrations and worked examples to show what students can do to their work (Shute, 2008). Taking a student’s answer, or a weak example of our own, we could model rewriting a paragraph or solution on the board. By asking students to ‘suggest another way we could put this, even more clearly’ or ‘remove unnecessary words’, we can model both the kind of sentences or components, and the kind of changes which create an excellent product. Demonstrating how we improve work both shares a process students can follow and further clarifies our goals by showing the choices we make and the difference between a good and a beautiful sentence.

4. More practice
Knowing exactly where students are is important. It doesn’t mean we have to intervene immediately: students may benefit from further practice, perhaps even without error correction. I have written about times when lower student performance can lead to greater learning (Fletcher-Wood, 2017); Josh Goodrich noted in response that teachers skilled in formative assessment can use this to keep tight control of student learning, mistakes and misconceptions. The result can be that students never get the chance to struggle, as teachers address misconceptions immediately without allowing students to do the thinking which may lead to longer-term learning. This is supported by Kluger and DeNisi’s (1996) observation that feedback ‘may reduce the cognitive effort involved in task performance’ and so be ‘detrimental in the long run’. As Goodrich observes, if we don’t allow students to struggle, although it can appear that students are doing well, this may harm their longer-term retention. This is not an easy message to convey – particularly to observers – but it is an important one: rapid feedback, particularly after students have acquired the knowledge they require, may diminish learning; sometimes, more practice is the best thing for students.


Each of these approaches might usefully be combined with individual feedback: teachers often revisit what students are aiming for before asking them to act on feedback, for example. What I’m wondering is whether we can achieve similar results (better work, better learning) without individual feedback. I’ve not seen any study comparing delivery of the same feedback to a group and an individual. One possible disadvantage would be students treating group activities as irrelevant to them (through over- or under-confidence). The responses we ask of students after using the techniques above – such as redrafting, further practice or another check for understanding – are therefore particularly important. Conversely, these approaches allow us to provide far clearer and more detailed guidance than we could possibly provide to each individual: we can plan one good five-minute explanation, rather than attempting to convey these ideas in 30 individual comments. Of the many ways to guide improvement, perhaps these approaches can save us the most time while also benefiting students.

This is an extract from Responsive Teaching: The Classroom Teacher’s Guide to Formative Assessment, to be published in 2018.

References


Elliott V, Baird J, Hopfenbeck T, Ingram J, Thompson I, Usher N, Zantout M, Richardson J and Coleman R (2016) A Marked Improvement? A Review of the Evidence on Written Marking. Oxford: Education Endowment Foundation.

Ericsson A and Pool R (2016) Peak: Secrets from the New Science of Expertise. London: Bodley Head.

Fletcher-Wood H (2017) Is formative assessment fatally flawed? In: Improving Teaching. Available at: (accessed 21 August 2017).

Gibson S, Oliver L and Dennison M (2015) Workload Challenge: Analysis of Teacher Consultation Responses. London: Department for Education.

Hattie J (2012) Visible Learning for Teachers: Maximizing Impact on Learning. Abingdon: Routledge.

Kluger A and DeNisi A (1996) The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin 119(2): 254-284.

Sadler D (1989) Formative assessment and the design of instructional systems. Instructional Science 18(2): 119-144.

Shute V (2008) Focus on formative feedback. Review of Educational Research 78(1): 153-189.

Wiliam D (2011) Embedded Formative Assessment. Bloomington, IN: Solution Tree.

Wiliam D (2016) Leadership for Teacher Learning: Creating a Culture Where All Teachers Improve So That All Students Succeed. West Palm Beach, FL: Learning Sciences International.

