Ben Goldacre, of ‘Bad Science’ fame, was tasked by the government with examining the role of evidence in educational practice (2013). Citing the success of blinded, controlled trials in medical practice, he builds a good case for engaging in evidence-based teaching.
The scientific method promises rigorous analysis of any potential method: by comparing test groups with controls, an effect can be observed and conclusions drawn. Causation can be inferred and demonstrated through statistical comparison and rigorous data collection.
Evidence-based teaching: a divisive idea
Following this logic, through exposure to evidence from the latest educational research-based training, teachers should be able to cherry-pick the best methods without needing a background in interpreting statistics. Evidence-based practices should therefore become the norm as professionals are presented with a dazzling array of books, TED talks and articles citing research that promises interventions, techniques and approaches, backed by reliable data, that can improve our students’ outcomes.
And yet, it is apparent at INSET at the start of every school term that this idea can be rather divisive; in one corner of the staff room a clique of enthusiastic teachers work out exactly how the next big idea can be embedded into everything they do. On the opposite side, grumbles can be heard from another group of staff members, lamenting how the same idea hadn’t worked 20 years ago when packaged slightly differently, questioning why one should change a tried-and-tested formula in favour of the latest fad from Stanford.
Whilst the call for evidence-based practice is compelling, the cynicism surrounding the apparent magic bullets lauded by psychologists seems equally well founded. Can you really conduct a randomised trial, or ensure a genuine control group, when you are dealing with children who will behave differently if they have had breakfast or undergone a change in social situation?
Much of the data cited in journals and meta-analyses of research comes from very specific contexts; research on socially deprived demographics within the US may not allow us to transfer the findings to a home-counties grammar school, a public day school or a country comprehensive, so why should everyone jump on the bandwagon?
Testing research in your own context
Research in social settings will always be inherently flawed; no two people are the same, and efforts to control variables and make research more ‘scientific’ take the context further from the classroom and into the laboratory. Transferability of the results of any study relies on equivalence to the context in which it was carried out, but the social milieu of a school or even a classroom will be far removed from the next school in the town, or the next class along the corridor.
To ascertain whether theory can be of benefit, it must be down to the school and teacher to test it within their own context and draw their own conclusions, generating practice-based evidence to inform their planning rather than blindly applying evidence-based practice.
Teachers taking control of research is, according to Goldacre and many of his peers, essential. The ideas and evidence resulting from university-based investigation can provide an excellent starting point, and the neuroscientific techniques used to determine how learning takes place simply couldn’t be deployed in a school context. The limitation is that, however well evidenced in the laboratory, a technique or intervention is irrelevant if it doesn’t work in the classroom, regardless of its theoretical merits. What’s more, if it works in one school or one classroom, there is no guarantee of transferability to the next school, or the classroom next door.
The more teachers who engage in their own research, the more likely it is that we can share good practice that is specific to the context in which it was observed. Idealistic double-blind evidence collection is unfeasible and not necessary if we are collecting evidence in our own context only. Why should an observation, perceived improvement or feeling that a class is engaged not count as evidence of effective practice if the aim is to provide inspiration or share a positive experience with peers?
Generating insight and informing good practice
The burden of evidence can strangle practitioner research; reliable data, validity of the method and rigour are all, of course, essential if you are publishing a book or making an assertion that will inform what happens on a national level.
At a school level, if we accept that a study of social interactions does not always require quantitative data to back it up, more people can engage in research, collect evidence and share their findings. Questionnaire responses, observation reports and focus groups can provide great insight into how students and teachers perceive effective teaching and learning and provide rich data, even if a comparable group is not available, or indeed plausible.
Sharing good practice amongst teachers need not necessitate control over the multitude of variables affecting student outcomes. Within a school community, a learning research group is perfectly equipped to assess the impact of an innovation within the context that those teachers are interested in, whilst providing a forum to evaluate the implications and application of theory. Testing an intervention based on the recommendations of psychological studies — studies conducted with a rigorous control impossible in a school — is, in my opinion, the best way of assessing impact.
My experience
By trialling laboratory findings in the specific school context in which we hope they will work, and carefully assessing their impact, we can meaningfully evaluate how effective these strategies are.
In my experience at Reading Blue Coat School, this approach has been employed with a great deal of success: we have observed practitioners engaging with evidence-based practice as a result of exposure to accessible research conducted by their peers. We can then see whether theory translates to the real world and whether it has potential.
Disseminating results can be as simple as INSET presentations or, on a larger scale, sharing findings through networking between similar schools. Whilst the argument for research-led practice is compelling, until evidence is more consistently practice-based, and evaluated within the context to which it is to be applied, cynicism around research will be hard to dismiss.