
Translating research into classroom practice: Cognitive science and beyond

4 min read
Pedro De Bruyckere, Arteveldehogeschool University of Applied Sciences, Belgium; Utrecht University, Netherlands
Paul A Kirschner, Emeritus Professor, Open University of the Netherlands, Netherlands; Thomas More University of Applied Sciences, Belgium; kirschner-ED

Welcome to this issue of Impact, with the theme of ‘Translating research into classroom practice: Cognitive science and beyond’. The print version contains a selection of 23 articles: some are original research; some are perspective articles that offer a useful, interesting and balanced perspective on the theme; some are case studies; and some are teacher reflections on how a particular research paper or area of research has informed classroom practice, and the potential impact of this on learning outcomes. 

When researchers want to study something, they often, though not always, try to do this in an optimally controlled environment. The reason is simple. If you want to know whether and how an intervention works, then that intervention must be the only difference between the research situation and the ‘normal’ or control condition. So, for example, if you want to know the effect of physical activity (independent variable) on cognitive achievement (dependent variable), factors such as sleep, diet, teacher and so forth (control variables) must be kept equal for all those taking part in the experiment, with only physical activity varying. In this respect, we speak of a ‘gold standard’ of research when scientists are able to carry out randomised controlled trials to eliminate all other possible influences on the results (e.g. age, prior knowledge, socioeconomic status, parental education and so on) (Zhang et al., 2021). As Shavelson and Towne (2002) wrote, in such a situation we have a ‘control group that has the same experiences as the experimental group except for the “treatment” under study’ (p. 69). In this way we can determine the exact effect of a possible approach. The problem is that teachers are seldom, if ever, in a situation where only one element is in play or can be controlled for. Yes, retrieval practice, for example, can help learning – even a lot. But the harsh reality of education is that students often don’t want to do it, or that, as a teacher, you feel that you don’t have the time to teach students how to use it and then to practise using it, or some other kind of reality kicks in. Most of the time, the truth is that it’s all of the above and more!

You can recognise this in the shift that our field has made from evidence-based to evidence-informed. The first comes from the medical sciences, where the working of a pill isn’t affected by the weather, your mood, the person administering it or any other such factors. It works if it is taken or administered as prescribed, following clinical evidence from systematic research. Evidence-based approaches in medicine integrate clinical expertise and patient choice with the best available research evidence, drawing on an evidence base that is better defined and more easily controllable than in the educational sciences (Neelen and Kirschner, 2020). In education, something that works for one class might not work at all for the next class, even if it’s the same subject and the same teacher. Evidence-informed means that we need to take a step back from ‘this will work if you do it this way’ to ‘this could work most of the time if you take this and this into consideration’. Lately we’ve seen the introduction of what could be called ‘research-informed’ teaching or practice. One can find many different definitions of this concept, such as the idea of teachers acting as ‘researchers, adopting a problem-solving orientation to practice’ (Burn and Mutton, 2015, p. 217), or trying to make positive changes to their teaching and aiming to stop or avoid changes that might be harmful or ineffective – all guided by research evidence of what works (Kvernbekk, 2015) – or simply using research to improve what they’re doing in school (Cain, 2018).

We could summarise it as follows:

  • evidence-based: using scientific evidence to do what works
  • evidence-informed: using scientific evidence to make choices in what could work, taking your own context into consideration
  • research-informed: using scientific research to make deliberate choices in your own context, and to evaluate these choices.


Hopefully you’ve noticed that we replaced the word ‘evidence’ with ‘research’. The reason is that a common mistake is to look only at significant results and effect sizes, while the Education Endowment Foundation, among others, has taught us that we should also consider the quality of the research and the effort required to implement an approach. But there is another reason for considering research rather than evidence. When conducting research, scientists and practitioners have different goals. As Daniel and De Bruyckere (2021) explain, if a researcher compares two different pedagogical or instructional approaches and doesn’t find any significant difference in their effect, this is often bad news for them as an academic. It means that one approach isn’t better or worse than the other, and it also probably means that a planned paper will not get published. But for practitioners, non-results could be great news. It means that they can use either approach, as the two are equal in result! The choices are then: Which of the two is less expensive? Which takes less preparation time? Which takes less time to carry out in the class for teacher or student? Which causes the lowest cognitive load? Which approach do the students like most? These are all very practical choices. Those readers who are older will recognise the same discussion that Richard Clark raised in his ground-breaking 1983 article ‘Reconsidering research on learning from media’, which is discussed by Kirschner and Hendrick (2020, Chapter 28).

For all the reasons mentioned above, it’s a good thing that in this issue many practitioners present their research and experiences of working to include insights from (cognitive) science in the classroom. These are often examples of how the complexity of everyday life kicks in when trying to implement an insight that could help your students to learn more and better.

    • Burn K and Mutton T (2015) A review of ‘research-informed clinical practice’ in initial teacher education. Oxford Review of Education 41(2): 217–233.
    • Cain T (2018) Becoming a Research-Informed School: Why? What? How? Abingdon: Routledge.
    • Clark RE (1983) Reconsidering research on learning from media. Review of Educational Research 53: 445–459.
    • Daniel DB and De Bruyckere P (2021) Toward an ecological science of teaching. Canadian Psychology/Psychologie Canadienne 62(4): 361.
    • Kirschner PA and Hendrick C (2020) How Learning Happens: Seminal Works in Educational Psychology and What They Mean in Practice. Abingdon: Routledge.
    • Kvernbekk T (2015) Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. Abingdon: Routledge.
    • Neelen M and Kirschner PA (2020) Evidence-Informed Learning Design: Creating Training to Improve Performance. London: Kogan Page.
    • Shavelson RJ and Towne L (2002) Scientific Research in Education. Washington, DC: National Academy Press.
    • Zhang L, Kirschner PA, Cobern WW et al. (2021) There is an evidence crisis in science educational policy. Educational Psychology Review 34: 1157–1176.
