
Evidence-informed practice: The importance of professional judgement

Written by: James Mannion

The late, great Ted Wragg once calculated that a teacher typically makes upwards of a thousand ‘on-the-spot, evaluative decisions’ on any given day (MacBeath, 2012, p.17). When I first came across this, I thought: ‘That sounds like a lot… you’d be exhausted!’ However, when you consider how busy a school is – how busy a classroom is – and how many instances might trigger a response from a teacher in the course of a day, it soon starts to look like a reasonable figure.

If you accept this to be true, a number of questions arise. First – what are all these decisions? What factors influence the decisions that teachers make when planning and teaching lessons? How many of these decisions are made consciously and how many are predetermined by past experience, habits or beliefs? To what extent are these decisions informed by research evidence? Perhaps most importantly, can we get better at making these decisions and, if so, how?

Here we arrive at a methodological question – how can we know when we are making better decisions? – and this is where practitioner research enters the fray. To be clear, by ‘practitioner research’ I mean a systematic process of reflection on our practice, trying out new ideas and evaluating the impact of what we do. In seeking to get even better at what we do – professional development in a nutshell – we first need to get a handle on how effective different aspects of our practice are. The question is: in the absence of some form of systematic research inquiry, how can we know which practices, habits and routines are the most useful – and which might most usefully be jettisoned?

‘What works’ versus the Bananarama effect

Readers may be aware that the Chartered College of Teaching recently secured access to paywalled journal articles for its members. This is a welcome development. However, if the teaching profession is to become more evidence-informed, looking to the literature to determine ‘what works’ is only part of the solution; it may be helpful, but it is by no means sufficient. Here’s why:

In recent years, a number of publications have sought to tell us ‘what works’ in education (e.g. Marzano, 2003; Petty, 2006; Hattie, 2008; Lemov, 2010; Higgins et al., 2013). The Education Endowment Foundation (EEF), for example, tells us that ‘feedback’ is the most effective thing schools can do, providing ‘high impact for low cost’.

If this sounds too good to be true, that’s because it is. Guides to ‘what works’ can only point us towards what works on average; for any given area of practice, there is always huge variation in terms of efficacy, ranging from the highly effective to the highly counterproductive. For example, in one meta-analysis of 607 feedback interventions (FIs), the FI actually decreased student performance in 38% of cases (Kluger and DeNisi, 1996; see Figure 1).

Figure 1: Feedback interventions: distribution of effect sizes. A bar graph showing the number of effect sizes (vertical axis, 0 to 80) at each effect size (Cohen’s d, horizontal axis, from -4 to +12).

It is worth restating this point, because it is really quite mind-blowing: in more than 230 of the 607 cases studied, the FI – a practice that supposedly gives ‘high impact for low cost’ – actually made things worse than if the schools had just done business as usual. A recent EEF study of effective feedback also reported ‘wide variation’ in practice because ‘teachers struggled to understand and use evidence on effective feedback consistently across all schools’ (Gorard et al., 2014, pp. 5-6).

Imagine if a school leader said to their colleagues, ‘We’re all going to do a new thing but there’s a one in three chance that we’ll be making things worse’; they would be unlikely to garner much support for their new initiative. However, in the absence of a systematic impact evaluation of any shiny new initiative (or existing area of practice), this is precisely what school leaders are saying – even if they don’t realise it. What’s worse, they can’t possibly know where on the bell curve their school sits in relation to any given area of practice.

Steve Higgins (2013) refers to this phenomenon of wide variation as ‘the Bananarama effect’: it ain’t what you do, it’s the way that you do it – and that’s what gets results. The question is: in the absence of some form of systematic research inquiry, how can schools know whether ‘what they do’ is helping improve student outcomes, having zero impact or making things worse?

A stark choice

When researchers talk about ‘what works’, what they really mean is ‘what worked’ in the research context. In seeking to get better at making the decisions that shape our professional lives, looking to the research literature – illuminating though it may be – is not going to be sufficient. We need to work out what works for us – in the contexts in which we work. In the drive to become a more evidence-informed profession, we need to stop ticking boxes and jumping through hoops, and start becoming active problem-solvers. We need to engage both with and in research.

We could, of course, wait for education researchers to parachute into our classrooms to carry out this work for us. However, I suspect we’ll be waiting a long time. In the words of another late, great figure, Lawrence Stenhouse: ‘It is teachers who, in the end, will change the world of the school by understanding it’ (Stenhouse, 1981). The choice facing the teaching profession is stark: either we continue to blindly fumble around in the dark, or we strap on a head torch.

Case study

What does practitioner research look like in practice?

People often think of practitioner research as a huge undertaking – something resulting in a 20,000-word Master’s dissertation that only a handful of people will ever read. If we are to become a more evidence-informed profession, we need to embrace the idea of small-scale practitioner research inquiry.

In recent years, I have spent a lot of time exploring the question ‘what is the minimum viable model for a research cycle?’ The surprising answer is that you can do a meaningful piece of inquiry within an hour, easily. In fact, teachers already do this kind of ‘what’ inquiry pretty well. What is the relative attainment of boys and girls in Year 10 physics? Search the existing data – done. What do Year 11 pupil premium boys say about revision? Write a short survey and ask a sample of them to fill it out – done.

There are longer ‘how’ pieces that might take a half-term, say. How can we increase attendance at parents’ evening? Collect some baseline data, perhaps conduct a telephone survey, devise a strategy, implement it, take a post-intervention measure – done. And then there are longer pieces still. How can we maintain the momentum of learning and development across the transition from Year 6 to Year 7? Arrange reciprocal school visits for teachers of Years 6 and 7, observe some lessons, take samples of pupils’ work, devise some training for teachers to enhance consistency of provision, monitor and evaluate the impact and compare it with previous cohorts – done.

At the start of each inquiry cycle, we review the literature briefly, and at the end of each cycle we ask: how impactful is this area of practice? Should we tweak it, scale it up or discard it altogether? Through this simple methodology – read stuff, try stuff, measure stuff – we bootstrap our way to a better-informed, more surefooted future.
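For readers who want to see the ‘measure stuff’ step made concrete, here is a minimal sketch – written in Python, using invented scores rather than real pupil data – of how a comparison between an intervention cohort and a comparison cohort might be turned into an effect size such as Cohen’s d, the metric shown in Figure 1. It is an illustration of one possible approach, not a prescribed method.

```python
# Illustrative sketch only: computing an effect size (Cohen's d) for a
# small-scale impact evaluation, e.g. comparing this year's cohort with a
# previous one. The scores below are invented for the purpose of the example.

from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardised mean difference between two groups, using a pooled SD."""
    n1, n2 = len(group_a), len(group_b)
    pooled_sd = (((n1 - 1) * stdev(group_a) ** 2 +
                  (n2 - 1) * stdev(group_b) ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical end-of-term scores for two cohorts of pupils
this_cohort = [54, 61, 58, 65, 70, 62, 59, 67]
previous_cohort = [50, 57, 55, 60, 63, 52, 58, 61]

print(f"Effect size (Cohen's d): {cohens_d(this_cohort, previous_cohort):.2f}")
```

An effect size close to zero would suggest the practice is making little difference; a clearly negative one would suggest it belongs on the ‘jettison’ list.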


References

Gorard S, See BH and Siddiqui N (2014) Anglican Schools Partnership: Effective Feedback. Education Endowment Foundation. Available at: https://educationendowmentfoundation.org.uk/public/files/Projects/EEF_Project_Report_AnglicanSchoolsPartnershipEffectiveFeedback.pdf (accessed 10 April 2017).

Hattie J (2008) Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. London: Routledge.

Higgins S, Katsipataki M, Kokotsaki D, Coleman R, Major LE and Coe R (2013) The Sutton Trust – Education Endowment Foundation Teaching and Learning Toolkit. Manual. London: Education Endowment Foundation.

Kluger AN and DeNisi A (1996) The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin 119(2): 254-284.

Lemov D (2010) Teach Like a Champion: 49 Techniques That Put Students on the Path to College. San Francisco: Jossey-Bass.

MacBeath J (2012) Learning and teaching: Are they by any chance related? In: McLaughlin C (ed) Teachers Learning: Professional Development and Education. Cambridge, UK: Cambridge University Press, pp.1-20.

Marzano R (2003) What Works in Schools: Translating Research into Action. Alexandria, VA: Association for Supervision and Curriculum Development.

Petty G (2006) Evidence-Based Teaching: A Practical Approach. Cheltenham: Nelson Thornes.

Stenhouse L (1981) What counts as research? British Journal of Educational Studies 29(2): 103-114.

About the Author

James Mannion qualified as a teacher in 2006. He holds an MA in Person-Centred Education from the University of Sussex, and a PhD from the University of Cambridge. James is also Bespoke Programmes Leader in the London Centre for Leadership in Learning at the UCL Institute of Education, working with schools throughout London and the South East to develop evidence-informed practices.
