Bridging the gap between evidence and classroom ‘clinical practice’ – the potential of teacher-led randomised controlled trials to advance the science of learning

A key challenge facing neuroscience and education is how to translate evidence from the laboratory into the classroom (Dommett et al., 2013). From the mid-19th century, similar challenges have faced the medical profession as it aspires to become a ‘natural science’ grounded in biology. Firstly, laboratories are not classrooms, just as the biological experiment is not clinical practice. Secondly, wide replication to control for individual pupil differences as well as school context will be necessary. Finally, and most importantly, writers have pointed to the ‘democratic deficit’ that exists in education research and its potential impact on attempts to establish ‘what works’ (Biesta, 2007). In medicine and healthcare, it is serving clinicians who most frequently publish studies about clinical practice. In education, few practitioner studies reach journals or get disseminated. Further, those researchers who do study or design pedagogy often no longer practise as teachers.

In a Wellcome Trust-funded project, teachers who previously designed and implemented randomised controlled trials (RCTs) (Churches and Dommett, 2016; Churches and McAleavy, 2016), together with teachers with a psychology or neuroscience degree, have come together to design and deliver a series of replicated trial protocols. In this paper, we discuss the issues outlined above, the neuroscience and cognitive psychology evidence chosen by the teachers for translation into classroom practice, and the wider potential of teacher-led RCTs in supporting the translation of evidence from the science of learning.

Evidence-based and clinical practice in medicine and healthcare

Before we discuss this further, it would be helpful to clarify what evidence-based practice (EBP) and clinical practice mean in medicine and healthcare. EBP is more than the simplistic application of treatments based on research findings; it is about ‘integrating individual clinical expertise with the best external evidence’ (Sackett et al., 1996). Practically, EBP is seen as involving a number of steps:

  1. Assess the patient and make a diagnosis
  2. Formulate a clinical question to help identify an appropriate treatment
  3. Review the research evidence, critique it and select a treatment
  4. Administer the treatment, involving the patient in the process
  5. Evaluate the effects of the treatment (self-evaluation as a clinician).

In this way, each patient engagement becomes a form of research project for the clinician. Alongside this, clinical practice consists of guidelines that form ‘systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances’ (Field and Lohr, 1990). We suggest that developments in the communication of the science of learning, together with the effects of training teachers to conduct controlled research, expose one of the key challenges in moving the education profession forward: namely, the lack of systematic approaches to the development of education evidence by the serving profession itself, grounded in (but not dominated by) the biology of learning.

The translation of laboratory evidence into classroom practice – problems and possibilities

Three key challenges with regard to collaboration between neuroscientists and educators are particularly relevant here. Firstly, on a theoretical level, education and neuroscience can be considered as fundamentally different in their overall objectives and the manner in which the objectives are pursued (the ‘goal problem’). Neuroscience is a natural science that investigates the workings of the brain, the functional architecture of the mind and the way that the brain and mind map together. In contrast, education aims to develop particular pedagogies and therefore, arguably, has more in common with the way in which architecture uses physics.

Secondly, neuroscience research can take place at a number of levels (Figure 1), not all of which have the same applicability in the classroom. The lowest microscopic level of neuroscience analysis looks at individual genes and molecules. At the highest level of analysis, a neuroscientist may examine the workings of the whole brain in healthy – most commonly but not exclusively – adults and often in laboratory settings. In contrast, education research often starts at the level of the individual and progresses up to examine social processes, culture and meaning (at levels above those illustrated – in other words, the two professions only meet in passing at the behavioural level).

Thirdly, there is a translation problem. Specifically, the outputs of neuroscience research often translate poorly into something that is useable in education. For instance, knowing that a particular brain region is important for a certain skill does not actually tell you what to do about that in an educational setting.

Teacher-led RCTs as a catalyst for change

Although there has been a growth in teacher-led research (Riggall and Singer, 2016), in general, the types of methods adopted by teachers (predominantly qualitative and uncontrolled forms of action research) will never be enough on their own to give teachers a compelling voice and the levels of agency enjoyed by their cousins in the medical profession. RCTs (generally considered the ‘gold standard’ in medical research) differ from such types of research in that a control group is introduced in order to remove biases. In the vast majority of education trials, the control group is likely to be given the current best practice that a new or novel treatment is compared to (Figure 2).

Alongside this, for a study to be classified as an RCT, some type of random allocation needs to take place so that the researcher does not directly decide which participants experience what. Lastly, there needs to be some form of measurement, at least at the end of the process – the ‘trial’, as it were.
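The three ingredients above – a control group receiving current best practice, random allocation, and an end-of-trial measurement – can be sketched in a few lines of code. This is a minimal illustration with made-up pupil names and scores, not part of the project's actual protocol or analysis:

```python
import random
import statistics

def randomise(pupils, seed=None):
    """Randomly allocate pupils to a control group (current best
    practice) and a treatment group (the new approach), so that the
    researcher does not decide who experiences what."""
    rng = random.Random(seed)
    shuffled = pupils[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Illustrative class list (hypothetical identifiers).
pupils = [f"pupil_{i}" for i in range(30)]
control, treatment = randomise(pupils, seed=42)

# At the end of the trial, some form of measurement is compared
# between the two groups -- here, mean post-test scores (made-up data).
control_scores = [12, 14, 11, 15, 13]
treatment_scores = [16, 15, 17, 14, 18]
difference = statistics.mean(treatment_scores) - statistics.mean(control_scores)
print(f"Mean difference (treatment - control): {difference:.1f}")
```

In a real trial, of course, the comparison would use an appropriate inferential test rather than a raw difference in means; the point here is only the logic of allocation and measurement.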

Churches, Higgins and Hall (Churches et al., 2017), in a retrospective case-controlled study using the ‘Evidence-Based Practice Questionnaire’ (Upton et al., 2014), demonstrated how engagement in the design and delivery of an RCT improves teacher evidence-based behaviours. Beyond this, however, if organised into batches of planned replications (specifically, the repetition of protocols in different contexts and with variations in the ‘treatment’ plan), teacher-led approaches could go much further and form a foundation of educational research.
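One way such batches of replications could be brought together is by expressing each trial's result as a standardised effect size and then looking across replications. The sketch below, using entirely invented scores from three hypothetical replications of the same protocol, shows the general idea rather than the project's actual analysis plan:

```python
import math
import statistics

def cohens_d(treatment, control):
    """Standardised mean difference (Cohen's d) for one trial,
    using the pooled standard deviation of the two groups."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Made-up post-test scores from three replications of one protocol,
# run in different school contexts.
replications = [
    ([18, 20, 19, 22, 21], [16, 17, 15, 18, 16]),
    ([14, 13, 16, 15, 14], [13, 12, 14, 13, 12]),
    ([25, 27, 26, 24, 28], [24, 23, 25, 22, 24]),
]
effects = [cohens_d(t, c) for t, c in replications]
print("Per-replication effect sizes:", [round(d, 2) for d in effects])
print("Simple mean across replications:", round(statistics.mean(effects), 2))
```

A formal synthesis would weight each replication (for example, by sample size, as in a fixed-effect meta-analysis) rather than taking a simple mean, but even this crude comparison shows how repeated protocols let variations in effect across contexts become visible.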

Several crucial factors seem to be enabling these sorts of findings. In terms of research design, teacher-led RCTs have greater potential to control extraneous variables (variations in implementation) compared to larger-scale trials. They also have the potential for higher levels of mundane realism (‘everyday-ness’ – reducing the way that participants may react to being in the trial). In addition, teacher-led RCTs offer up the possibility of breaking complex, multifaceted, pedagogical interventions down into their component parts; and, by extension, it becomes more feasible to study the effects of interventions on different children in different contexts and with subtle variations in protocol. Finally, serving teachers are perhaps better placed to come up with interventions and variants on current practice than educationalists who no longer practise the art of teaching on a daily basis, or than commercial organisations driven by the goal of finding a standardised, cost-effective product, rather than the sort of context-specific outcomes that are demanded in EBP.

The neuroscience-informed, teacher-led, randomised controlled trial project

A total of 31 individual schools and Teaching School Alliances involved in this project received an RCT design day in October 2017, along with reading material about RCT design (Churches and Dommett, 2016) and the neuroscience and cognitive psychology of learning (Churches et al., 2017). Over the following months, the teachers will implement their RCTs, coming together again in February for an analysis, interpretation and write-up day. They will then produce conference posters to support the dissemination of their findings.

As we write this article, we are still reviewing the teacher research protocols ahead of them implementing their trials. Yet, even at this stage, some clear themes are beginning to emerge. Of most interest is what the teachers have chosen to study from the science of learning literature and what this hints at with regard to the different levels of translation that may need to be considered, as we move the science of learning forward. In the first place, and predictably, many of the teachers have eagerly latched onto findings from the desirable difficulties school of research (Bjork et al., 2011), particularly retrieval practice – research that sits towards the upper level of Figure 1.

However, they have not done this blindly, simply seeking to repeat the studies of others. Rather, a large group of the schools are now replicating ‘testing as a learning event’ approaches to see how variations in protocol interact with different school contexts and age groups. On a second level, teachers are trialling existing pedagogical approaches that appear to make sense when a biology of learning perspective is applied as the theoretical underpinning of that approach, comparing these pedagogies to other ‘common practice’ that does not seem as strong from a science of learning perspective. Finally, some teachers have sought to find ways to enhance rehearsal in working memory, to enable more effective transfer of knowledge into long-term memory.

What the final outcomes will be is hard to say. What we do know is that putting research and evidence front and centre of what it means to be a medical practitioner gives doctors and healthcare workers a deep and profound sense of voice and agency when it comes to their professional identity – whilst, at the same time, enabling a better understanding of how treatments interact with patients who present with different symptoms. Perhaps, given time and the right expertise, teachers could achieve the same.

The project will begin reporting its full findings from March 2018.

 

References

Biesta G (2007) Why ‘what works’ won’t work: Evidence-based practice and the democratic deficit in educational research. Educational Theory 57(1): 1–22.
Bjork EL, Bjork RA, Pew RW, et al. (2011) Making things hard on yourself, but in a good way. In: Gernsbacher MA (ed.) Psychology and the Real World. New York: Worth Publishers, pp. 55–64.
Churches R, Dommett E and Devonshire I (2017) Neuroscience for Teachers. Carmarthen: Crown House.
Churches R and Dommett E (2016) Teacher-Led Research. Carmarthen: Crown House.
Churches R and McAleavy T (2016) Evidence That Counts. Reading: Education Development Trust.
Churches R, Higgins S and Hall R (2017) In: Childs A and Menter I (eds) Mobilising Teacher Researchers. Abingdon: Routledge.
Dommett EJ, Devonshire IM, Sewter E, et al. (2013) The impact of participation in a neuroscience course on motivational measures and academic performance. Trends in Neuroscience and Education 2(3): 95–138.
Field MJ and Lohr KN (eds) (1990) Clinical Practice Guidelines: Directions for a New Program. Washington: National Academy Press.
Riggall A and Singer R (2016) Research Leads: Current Practice, Future Prospects. Reading: Education Development Trust.
Sackett DL, Rosenberg WMC, Muir Gray JA, et al. (1996) Evidence-based medicine: What it is and what it isn’t. British Medical Journal 312: 71–72.
Upton D, Upton P and Scurlock-Evans L (2014) The reach, transferability and impact of the Evidence-Based Practice Questionnaire: A methodological and narrative literature review. Worldviews on Evidence Based Nursing 11(1): 46–54.