Translating the science of learning into practice with teacher-led randomised controlled trials: Giving teachers voice and agency in evidence-informed pedagogy

Richard Churches, Education Development Trust, UK
Eleanor Dommett, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, UK
Ian Devonshire, Nottingham University Medical School, UK
Robin Hall, Education Section, British Science Association, UK
Steve Higgins, School of Education, Durham University, UK
Astrid Korin, Education Development Trust, UK

Previously, we wrote for Impact outlining a Wellcome Trust-funded programme that we had just launched (Dommett et al., 2018). Teachers received training and support to enable them to design and implement randomised controlled trials (RCTs) exploring the effect of teaching approaches based on evidence from neuroscience and psychology. The results from this project are now published in the US journal Mind, Brain and Education (Churches et al., 2020).

The idea that education practice could ground itself in the best available evidence is an alluring one, and the use of knowledge from neuroscience and psychology is particularly attractive. However, as with any form of translation from one field of study to another, there are challenges. We believe that our experience working with teachers provides useful lessons for the education system going forward, in terms of both what needs to be put in place to ensure that the impact of science-of-learning-informed pedagogy is fully understood and the wider implications for evidence-informed practice.

The teacher-led research programme

The programme explored one of the key challenges facing the science of learning and education: how to translate evidence from laboratories into actual practice. From the mid-nineteenth century, similar challenges faced the medical profession as it aspired to become a ‘natural science’ grounded in biology. Today, clinicians use biology in a similar fashion to the way in which architects use physics. One day, something similar might be possible in education, with the biology of learning forming the basis for an ‘educational clinical reasoning’.

That said, there are many difficulties in translating evidence from neuroscience and psychology (Dommett and Devonshire, 2010). Taking a moment to reflect on the various levels of research that the science of learning explores illustrates the difficulty (Figure 1). Alongside this sits the challenge of where the evidence comes from, with most evidence emerging from laboratory contexts completely different from classrooms (including experiments using animals, and learning tasks assessed with individuals alone rather than with interacting social groups (Churches et al., 2017)). Because of this, if what we know about the biology of learning is to make sense in schools, education interventions need to be developed from these ideas by serving teachers and tested carefully in real classrooms. We believe that teachers (as serving ‘clinical practitioners’) are best placed to do this, just as their counterparts do in medicine and healthcare, where treatments are taken from ‘bench to bedside’ with clinical trials (Horton, 1999). In turn, having serving teacher practitioners design and conduct such research could help to bridge the ‘democratic deficit’ in education (Biesta, 2007, p. 1), enabling teachers to be both consumers and producers of education research, as well as giving them a clear voice with regard to what is researched and how the findings should be applied to actual practice.

Primary and secondary schools across England were involved. They received RCT design training and supporting materials (Churches and Dommett, 2016; Churches et al., 2017), covering the advantages and disadvantages of different designs, randomisation approaches and trial implementation. We then trained teachers to analyse their results with Excel spreadsheets that we built to calculate research statistics easily, and to write up their findings as conference posters to aid dissemination. As well as being a practical way of sharing findings, the conference poster format helped to ensure that the teachers’ voices were centre stage when it came to the interpretation of their findings and the implications for practice in their schools.
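By way of illustration, the sketch below shows, in Python rather than Excel, the kind of calculation such a spreadsheet can automate for a simple two-condition trial: a non-parametric comparison of post-test scores, with an effect size r derived from the z approximation (Rosenthal, 1991). The scores, group sizes and choice of test here are hypothetical assumptions for illustration, not a reproduction of the programme’s materials.

```python
# A minimal sketch (not the authors' actual spreadsheet formulas) of the kind of
# analysis a short teacher-led trial produces: post-test scores for a control
# and an intervention condition, compared with a Mann-Whitney U test, with an
# effect size r derived from the z approximation (Rosenthal, 1991).
from math import sqrt
from scipy import stats

control = [12, 15, 11, 14, 13, 16, 10, 12, 14, 13]       # hypothetical scores
intervention = [14, 17, 13, 16, 15, 18, 13, 15, 17, 16]  # hypothetical scores

u, p = stats.mannwhitneyu(intervention, control, alternative="two-sided")
n1, n2 = len(intervention), len(control)

# Normal approximation to U, then r = z / sqrt(N)
mean_u = n1 * n2 / 2
sd_u = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - mean_u) / sd_u
r = z / sqrt(n1 + n2)

print(f"U = {u:.1f}, p = {p:.3f}, effect size r = {r:.2f}")
```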

Three examples illustrate the range of areas that teachers were interested in exploring. James Siddle and colleagues recruited an initial sample of over 400 pupils to explore the effect of two approaches (interleaving, and interleaving combined with retrieval practice) compared to their normal practice in times tables learning. Francis Bryant-Khachy was interested in whether spaced learning was more useful when teaching geography or history, and conducted parallel replications with six different year groups. Leanne Day designed a trial that explored the effect of ‘think-alouds’ as a metacognitive strategy in mathematics lessons.

Randomised controlled trials (RCTs)

Teachers designed a range of RCTs. RCTs are the ‘gold standard’ research approach in many sciences. They have a control group (or condition) against which an intervention is compared. In education, the control will usually be existing best practice – just as in surgery, where you might compare a new operation to a current one, because there would be no point in not treating patients or not teaching children at all.

Well-designed, short-treatment-window, teacher-led RCTs have advantages over some larger-scale trials, as the control condition can be a specific alternative intervention rather than any practice that might be taking place at the time. Because of this, and the reduced variation in data that this enables, teacher-led RCTs often produce clearer effect size differences.

Randomisation allocates participants to the conditions, helping to remove researcher bias and to balance out unmeasured differences between groups. Finally, you need some form of relevant measurement against which the conditions are tested (or ‘trialled’), although there are many variations that can be used, depending on your hypothesis and the context.
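A minimal sketch of what simple random allocation can look like in practice is given below. The class list, group sizes and condition labels are hypothetical; fixing the random seed simply makes the allocation reproducible and auditable.

```python
# A minimal sketch of simple random allocation to two conditions.
# Pupil names and condition labels are hypothetical.
import random

pupils = ["Pupil_" + str(i) for i in range(1, 31)]  # e.g. a class of 30

random.seed(42)          # fixed seed so the allocation can be reproduced
random.shuffle(pupils)

half = len(pupils) // 2
allocation = {
    "control (existing practice)": pupils[:half],
    "intervention (e.g. retrieval practice)": pupils[half:],
}

for condition, group in allocation.items():
    print(condition, "->", group)
```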

As well as designs that compared an intervention with a planned control lesson/sequence of lessons, teachers compared two interventions simultaneously to the control (an approach that can yield more subtle information – particularly where interventions are similar but slightly different).

Finally, we combined teachers’ results into a meta-analysis and ‘forest plot’, where effects across the studies could be compared for over 2,000 children. Meta-analysis is a statistical method that allows for the synthesis of quantitative evidence from related research in a way that can summarise that body of evidence (Higgins, 2018).
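As an indication of what such a synthesis involves, the sketch below combines a handful of hypothetical r effect sizes using a fixed-effect model and Fisher’s z transformation. It is an illustration of the general method under assumed values, not a reproduction of the programme’s analysis.

```python
# A minimal sketch of combining r effect sizes from several trials
# (fixed-effect model, Fisher's z transformation). The values below are
# hypothetical illustrations, not the programme's actual results.
import math

trials = [  # (label, effect size r, total sample size N)
    ("Trial A", 0.25, 60),
    ("Trial B", 0.10, 120),
    ("Trial C", -0.05, 45),
    ("Trial D", 0.30, 80),
]

num, den = 0.0, 0.0
for _, r, n in trials:
    z = math.atanh(r)        # Fisher's z transform of r
    w = n - 3                # inverse-variance weight (variance of z is 1/(N-3))
    num += w * z
    den += w

z_combined = num / den
se = math.sqrt(1 / den)
r_combined = math.tanh(z_combined)
ci_low = math.tanh(z_combined - 1.96 * se)
ci_high = math.tanh(z_combined + 1.96 * se)

print(f"Combined r = {r_combined:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```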

How to read a forest plot (Figure 2)

Each dot [  ] represents the ‘effect size’. Error bars [ Ͱ ](either side) illustrate 95 per cent ‘confidence intervals’ – the range of results that we might expect in 95 out of 100 replications (repetitions of the study). The relative dot size shows the contribution of the individual finding to the combined analysis.

Positive effects, to the right of the central vertical line (> 0.00), show that the treatment improved pupil outcomes compared to the control. Negative effects, to the left of the central vertical line (< 0.00), show that the control group performed better. The effect size used in the analysis is called r (used because the data is often not normally distributed (Rosenthal, 1991)). Some readers may be more familiar with Cohen’s d (used by John Hattie (2009)), which is included on the right of the plot. Finally, an asterisk (*) indicates the probability that the effect might be misleading (not really a change in scores at all): for example, p < 0.05 means a smaller than five in 100 probability, and p < 0.001 means a smaller than one in 1,000 probability. p-values reflect a combination of the size of the effect and the size of the sample.
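For readers who want to move between the two metrics, the standard conversions are straightforward. The sketch below uses an illustrative r and sample size (not values from the trials reported here) to convert r to Cohen’s d and to obtain a 95 per cent confidence interval for r via Fisher’s z transformation.

```python
# Converting between the effect sizes r and Cohen's d, and computing a
# 95% confidence interval for r via Fisher's z. The r and N used here
# are illustrative, not values from the trials reported in the article.
import math

r, n = 0.15, 200

d = 2 * r / math.sqrt(1 - r**2)           # r -> Cohen's d
r_back = d / math.sqrt(d**2 + 4)          # Cohen's d -> r (round-trip check)

z = math.atanh(r)                         # Fisher's z transform
se = 1 / math.sqrt(n - 3)
ci = (math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se))

print(f"r = {r:.2f} is equivalent to d = {d:.2f} (back to r = {r_back:.2f})")
print(f"95% CI for r: [{ci[0]:.2f}, {ci[1]:.2f}]")
```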


Findings from the teacher-led RCTs

Overall, teacher translations of neuroscience and cognitive psychology evidence had positive effects on pupil outcomes (equivalent to a Cohen’s d of 0.30). The largest positive effect was for the use of novelty to enhance salience and attention. The largest negative effect was for multiple-choice testing alone as a means of learning new spellings, compared to ‘look, cover, write, check’ (LCWC) (a simple strategy using multiple repetition in working memory). However, combining LCWC with such testing completely reversed this effect, producing the largest overall outcome.

Taking laboratory evidence into the real world

It is early days for the use of this type of approach, and results should therefore be interpreted with caution. However, it is worth drawing some preliminary conclusions. Notably, ‘testing as a learning experience’ (Bjork and Bjork, 2011) had differential effects. The forest plot suggests that these differences in effect were influenced by factors such as pupil age, subject area and the way in which testing was used. This contrasts with evidence from a wide range of laboratory-based cognitive psychology research (Adesope et al., 2017) that implies a universal positive effect for retrieval practice.

Here, perhaps, lies the nub of the problem. Laboratories are not classrooms and, by extension, a laboratory protocol is not necessarily effective pedagogy. Retrieval practice (particularly in a multiple choice form) is often operationalised in psychology research in a way that would be of little direct benefit in the classroom over a whole one-hour lesson period. A whole lesson, to be effective or enhanced, might well include a test but would always need to embrace elements that appear in ‘clinical practice’ terms to be important for effective classroom practice (e.g. engagement, feedback, guided learning, high levels of instruction, questioning, modelling, praise, review, scaffolding, good subject knowledge, giving time for practice, etc.) (Coe et al., 2014; Muijs and Reynolds, 2011).

In the real world (as opposed to a laboratory condition), it is highly likely that the application of a test, as useful as this might be in theory, will have to be contextually defined and applied through the lens of best existing education practice, the context, age of the children, the subject being taught and the point that has been reached in the learning process. It cannot be good enough to imply that testing will always work, for every teacher, in every situation, with all children – nor can it be acceptable to jump to similar conclusions about other evidence from the science of learning.

Implications for research into evidence-informed pedagogy

In our last article for Impact, we suggested that teacher-led randomised controlled trials might have the potential to bridge the gap between laboratory evidence and ‘classroom clinical practice’. The teacher findings from this project not only support this idea but also point to the potential of multiple planned teacher-led RCTs (and replications) as an important means of rigorously testing education initiatives across the board. Multiple planned teacher-led RCTs exploring a single intervention (with the synthesis of findings in a meta-analysis as the outcome) could have much potential in adaptive programming situations and government policy roll-out, where testing, learning and iteration are required to find solutions (Ramalingam et al., 2019). Interventions being explored in an adaptive way could have each adaptation bounded and segmented by pre- and post-testing so that, alongside context differences, different adaptations could be compared within different sequential treatment windows.

References

Adesope OO, Trevisan DA and Sundararajan N (2017) Rethinking the use of testing: A meta-analysis of practice testing. Review of Educational Research 87(3): 659–701.

Biesta G (2007) Why ‘what works’ won’t work: Evidence-based practice and the democratic deficit in educational research. Educational Theory 57(1): 1–22.

Bjork EL and Bjork RA (2011) Making things hard on yourself, but in a good way. In: Gernsbacher MA, Pew RW and Hough LM (eds) Psychology and the Real World. New York: Worth Publishers, pp. 56–64.

Churches R and Dommett E (2016) Teacher-Led Research: Designing and Implementing Randomised Controlled Trials and Other Forms of Experimental Research. Carmarthen: Crown House Publishing.

Churches R, Dommett E and Devonshire I (2017) Neuroscience for Teachers: Applying Research Evidence from Brain Science. Carmarthen: Crown House Publishing.

Churches R, Dommett E, Devonshire I et al. (2020) Translating laboratory evidence into classroom practice with teacher-led randomised controlled trials – a perspective and meta-analysis. Mind, Brain and Education 14(3): 292–302.

Coe R, Aloisi C, Higgins S et al. (2014) What makes great teaching? London: Sutton Trust. Available at: www.suttontrust.com/wp-content/uploads/2014/10/What-Makes-Great-Teaching-REPORT.pdf (accessed 2 July 2020).

Dommett EJ and Devonshire IM (2010) Neuroscience: Viable applications in education. The Neuroscientist 16(4): 349–356.

Dommett E, Devonshire IM and Churches R (2018) Bridging the gap between evidence and classroom ‘clinical practice’. Impact 2: 64–67.

Hattie J (2009) Visible Learning: A Synthesis of Over 800 Meta-Analyses Related to Achievement. London: Routledge.

Higgins S (2018) Improving Learning: Meta-Analysis of Intervention Research in Education. Cambridge: Cambridge University Press.

Horton B (1999) From bench to bedside… research makes the translational transition. Nature 40: 213–215.

Muijs RD and Reynolds D (2011) Effective Teaching: Evidence and Practice. London: Sage.

Ramalingam B, Wild L and Buffardi AL (2019) Making adaptive rigour work: Principles and practices for strengthening MEL for adaptive management. Briefing note. Available at: www.odi.org/sites/odi.org.uk/files/resource-documents/12653.pdf (accessed 2 July 2020).

Rosenthal R (1991) Meta-Analytic Procedures for Social Research. California: Sage.
