
What we don’t yet know about cognitive science in the classroom

Written by: Thomas Perry
6 min read
Dr Thomas Perry, Assistant Professor, University of Warwick

To get the benefits from cognitive science we need to talk about what we don’t know.

Our current knowledge

I am really excited about the possibilities of cognitive science. In the EEF review of Cognitive Science in the Classroom (Perry et al., 2021), after locating and reviewing hundreds of classroom studies applying cognitive science, we were convinced that cognitive science can offer significant insights into learning, and that applications of cognitive science have real potential to improve classroom practice. We also concluded, however, that cognitive science was being recommended – and even mandated – before we have a real understanding of how the basic science relating to cognition and memory translates into everyday classroom teaching and learning, across phases and subject areas. 

I see far too little discussion of what we don’t know or have wrong about cognitive science and, crucially, its application in the classroom. This is important for realising its benefits. So, in this short article, I hope to highlight the significant gap in our current knowledge and suggest that a little more caution is due, even if taking this position makes me feel a bit like the person who turns up at the party asking for the music to be turned down.

Blind spots

Let us think for a moment about the gaps in the evidence base. Knowing these gaps helps us judge who and what to trust, and it helps locate the ‘blind spots’ that may need attention when putting research into practice. Many teachers are already aware that quite a lot of the evidence from cognitive science comes from the psychology laboratory or from research on undergraduates. There is, however, also a lot of evidence from the classroom to be found. What many people haven’t realised yet – perhaps because it isn’t much discussed – is that only a tiny fraction of the latter tests cognitive science in realistic classroom conditions.

As well as many classroom studies being small (fewer than 100 pupils), few pay attention to important educational variables. For example, most strategies have only been tested in a limited number of subjects, with the evidence concentrated in mathematics and science (and in KS2 to KS4), and many studies are short (a few weeks or less), making it harder to judge long-term retention. The most serious problem is that the role of teachers and teacher professional development has not been widely studied. With only a small number of exceptions, cognitive science studies have sought to scientifically control or minimise the influence of teachers, other students, and curriculum content. This is done through use of standardised booklets and computer programmes; scripted lessons; restricting classroom activity to independent work; using curriculum content that students haven’t encountered previously; stark differences between experimental conditions (rather than exploring blends or thresholds of strategies); and by having lessons delivered by researchers, or by teachers who are experts and/or (enthusiastic) volunteers.

Many are untroubled by such gaps in the evidence. Cognition is cognition – the basic architecture of memory and the brain is the same whether you are five or 55 years old, learning about polynomials or poetry. This is a good argument when it comes to the applicability of fundamental principles, but a poor one when it comes to recommending specific classroom strategies. One reason for caution is that when cognitive science has been tested at scale, in realistic classroom conditions, it has been found to have very little effect. Four of the strongest studies in the EEF review tested programmes which incorporated multiple cognitive science strategies into teacher professional development and/or curriculum redesign (Cromley et al., 2016; Davenport et al., 2020; Schunn et al., 2018; Yang et al., 2020). All were realistic, delivered at scale with thousands of pupils (n = 2,595 to 9,611), and were high-quality studies employing experimental designs (even if all were in maths and/or science for KS3 pupils). The results, however, were disappointing. The most positive study was Cromley et al. (2016), which found moderate, positive results in two of the six curriculum units tested, but statistically insignificant results (one of them negative) in the other four. Davenport et al. (2020) and Yang et al. (2020) found small, positive, but not statistically significant results. Schunn et al. (2018) found a range of results from small positive to small negative impacts, mostly not statistically significant.

What explains the difference between the impressive effect sizes seen in the basic science and these tiny and mixed effects from large-scale studies in realistic conditions? In our view, these results do not call the basic cognitive science into question. Instead, they raise educational questions relating to the curriculum, professional development, how the strategies link to other aspects of teaching (e.g., feedback, assessment, planning), and differences in pupil motivation, needs and prior learning. In other words, we are left wondering about all the variables that teachers would think about and an applied educational science would study, but which are ‘controlled’ out of a science focused on the fundamentals of cognition and memory.

Educational questions

Let’s look a little closer at the kinds of educational questions that we don’t currently have clear answers to, taking retrieval practice as an example area. For retrieval practice, the evidence we reviewed and the advice from teachers suggest several areas to think about, including:

  • whether and how to build feedback and error correction into retrieval activities
  • the level of difficulty and retrieval success to aim for
  • the format of the retrieval tests (see Yang et al., 2021 for helpful evidence about this)
  • whether retrieval practice works for learning with high complexity, subtlety or ‘element interactivity’ (i.e. beyond factual recall)
  • whether it is desirable to mimic the wording, conditions and format of the original learning, or to deliberately change them to seek transfer
  • how to build retrieval practice into planning and timetabling
  • how to select retrieval items from the (crowded) curriculum
  • how to tailor for pupil ability and prior learning
  • timing and spacing of practice to ensure consolidation
  • how to integrate retrieval practice into classroom dialogue and other activities
  • and how to motivate and encourage independent retrieval practice (e.g., for homework, revision or directed improvement and reflection time).


None of these represent insurmountable problems for retrieval practice. What these questions do highlight is the amount of variation possible in how retrieval practice is implemented, and how much teacher expertise is required to make it work outside of the optimised and highly controlled conditions of (the current) research. A similar exercise – with a longer, subject- and context-specific list – is needed for every cognitive science-informed classroom strategy. Until more applied research is available, it is up to teachers to both pose and answer such questions.

Working with uncertainty

Teachers are right to be enthusiastic about cognitive science. I would be astonished if, in a few decades’ time, our understanding of the fundamentals of learning and memory has greatly changed. However, the profession has to connect this to everything else it knows about great teaching and figure out how to get cognitive science into practice. At present, we have more knowledge ‘that’, and less knowledge ‘how’, and the psychological (and neuroscientific) knowledge is still early on in its journey to becoming educational knowledge.

Realising the great potential of cognitive science requires a recognition of the blind spots of cognitive science, and steering clear of centralised and prescriptive approaches to ensure that teachers (not researchers or policymakers) are in the driving seat. It’s not just about filling gaps; it is also a matter of being judicious with our current understanding. So, let’s continue to make sure we are talking about what we don’t know about cognitive science in the classroom, as well as what we do know.

References
  • Cromley JG, Weisberg SM, Dai T et al. (2016) Improving middle school science learning using diagrammatic reasoning. Science Education 100(6): 1184–1213.
  • Davenport JL, Kao YS, Matlen BJ et al. (2020) Cognition research in practice: Engineering and evaluating a middle school math curriculum. The Journal of Experimental Education 88(4): 516–535.
  • Perry T, Lea R, Jørgensen CR et al. (2021) Cognitive Science in the Classroom. London: Education Endowment Foundation (EEF).
  • Schunn CD, Newcombe NS, Alfieri L et al. (2018) Using principles of cognitive science to improve science learning in middle school: What works when and for whom? Applied Cognitive Psychology 32(2): 225–240.
  • Yang C, Luo L, Vadillo MA et al. (2021) Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. Psychological Bulletin 147(4): 399–435.
  • Yang R, Porter AC, Massey CM et al. (2020) Curriculum‐based teacher professional development in middle school science: A comparison of training focused on cognitive science principles versus content knowledge. Journal of Research in Science Teaching 57(4): 536–566.
