
Making teaching more research-informed: Some challenges

Written by: Kieran Briggs and Andrew Davis
Andrew Davis, Honorary Research Fellow, School of Education, Durham University, UK

Here are a few basic questions to ask when seeking to make teaching more research-informed. I hope to show that there are no easy answers and that this is precisely why the questions are important.

Sometimes it is claimed that strategies such as organising pupils in groups, learning by discovery, Rosenshine’s teaching principles (2012) or direct instruction have a significant ‘effect size’. Research is said to demonstrate that pupils targeted with a given approach achieve more than those without it. The approach apparently has an effect in some places. Now we wonder whether it has one in ours.
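
To fix ideas about what such claims assert: an ‘effect size’ in this literature is normally a standardised difference between the mean outcomes of an intervention group and a comparison group, most often Cohen’s d or a close variant. Here is a minimal sketch, with the numbers invented purely for illustration:

$$d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}$$

So if pupils taught with a given strategy average 54 on some test, the comparison group averages 50, and the pooled standard deviation is 10, then d = (54 − 50)/10 = 0.4. Notice that nothing in that single number records how the strategy was actually enacted in the classrooms concerned; that is the problem pursued below.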

It is vital to ask exactly what any given strategy amounts to. Consider students sitting in rows. This may well seem perfectly clear. Either they are sitting in rows or they are not. Yet there are a number of variables that surely will make a difference here, and we need no lessons from research to convince us. Here is a selection:

- Students face the front without speaking, or they turn and speak to others. The latter happens despite the teacher’s instructions or, instead, because of them.
- The general rule might be that pupils speak to each other once or twice in each lesson, or pupil–pupil interaction takes place throughout. There are many different kinds of pupil talk, and many different educational reasons for supporting it.
- Students can be any age between four and 18.
- The subject being taught may make a difference.
- The lesson may be part of a medium- to long-term learning programme, or it may address knowledge and skills that can easily be taught in one lesson.
- Students usually sit in rows, or sit in rows on this occasion only.
- Students often sit in rows with this teacher and never with others, or they often sit in rows both with this teacher and with others.

If these options matter, a favourable ‘sitting in rows’ research result leaves teachers with most of the key classroom decisions. And, in any case, some kinds of lessons cannot be taught like this. Drama, music and physical education are obvious examples.

Here is a second example where we need to be very clear about what strategy is deemed to have an interesting effect size. It is sometimes said that direct instruction is effective and has been underused in recent years. So what precisely is being researched here?

Consider one version from Cook et al. (2014, p. 202):

“Lesson objectives that are clear and communicated in language students are able to understand… An instructional sequence that begins with a description of the skill to be learned, followed by modeling of examples and non-examples of the skill… shared practice… and independent demonstration of the skill… Instructional activities that are differentiated and matched to students’ skill levels to the maximum extent practicable. Choral response methods in which all students respond in unison to strategic teacher-delivered questions.”

This can be interpreted in various ways. Consider ‘language students are able to understand’. Does this mean all the students? Do teachers make assumptions about typical students of this age and stage, or are they drawing on what they know of the group they are teaching? Even the idea of a ‘lesson objective’ is open to interpretation, and this, in turn, may vary from one subject to another.

Suppose that teachers read the research and learn that direct instruction is effective. It is difficult to see what useful lessons they can take from this. Perhaps they should no longer feel guilty about sometimes standing at the front and talking to students. However, if they avoid interpreting any principles in a rigid script-like fashion, they still have to make all the detailed professional decisions that they have always made.

This point does not only apply to direct instruction. The well-known educational researcher Hattie (2008) reports on the so-called ‘effect sizes’ of various educational strategies, including ability grouping, cooperative versus individualistic learning and student control over learning. At least some of these strategies, such as ‘student control over learning’ or Rosenshine’s teaching principles (2012), face a dilemma. Either they are characterised as a precisely specified recipe of some kind or they are left as abstract, flexible and hence open to multiple interpretations.

The recipe choice makes the method readily researchable. Its implementation is easily recognisable in any given lesson or lessons. Yet in cases like ‘student control over learning’ or ‘cooperative learning’, the very notion of a recipe hardly seems to make sense.

So, in the light of this, suppose that the multiple interpretation option is chosen instead. Any positive research result leaves teachers with all the fundamental decisions about implementation. Moreover, this may vary from one day to another, from one subject to another and from one group of students to another. There are likely to be sound professional reasons for this variety. In short, the multiple interpretation version of the research result cannot provide teachers with any kind of detailed pedagogical guidance.

I wonder whether many recipes or scripts have actually been researched. Few schools in their right minds would ever agree to follow a script precisely. If, despite my doubts, classroom procedures along the lines of scripts really have been tested out, we may question whether what has been investigated was true teaching. Adults delivering a fixed script can refrain from genuinely interacting with their students, and so knowledge of their pupils cannot inform choices about language, explanations, pace, tasks, questioning styles or other aspects of pedagogy. Students may still learn something from this form of delivery. But are these extremes actually teaching worthy of the name?

This dilemma of either script-following or a flexible interpretation of the research poses major challenges to a significant body of educational research. Among other things, varieties of direct instruction and Rosenshine’s teaching principles (2012) are confronted with some hard choices here.

For the next few points I draw on an Ofsted research summary (Ofsted, 2019). The questions involved have broad applications for how we might scrutinise educational research, although I only have space to explore a few examples.

The first issue is the danger of concealed tautologies. Whether these are present may depend on just how key ideas in the research are understood. For instance, Ofsted claims that research shows that clarity of teacher presentation is ‘consistently related to pupils’ attainment’ and that ‘effective teachers are able to communicate clearly and effectively with pupils’ (p. 13).

However, we need to ponder the meaning of ‘clarity of presentation’. One obvious way in which research might gauge whether a presentation is clear is to look at pupils’ reactions. How might that be achieved? Perhaps by testing changes in pupil knowledge and understanding at the end of the presentation. However, if that is done, the research in effect says that clarity of presentation is related to clarity of presentation. So a tautology has emerged, rather than something that teachers can use.

The research could take explicit steps to ensure that its notion of ‘clarity’ is independent of pupil attainment. If it succeeds in doing that, tautologies will be avoided. Now this ‘attainment-free’ version of clarity is likely to embody some values. Researchers will have their own views about what counts as good, clear presentation. Bringing values in is perfectly reasonable, since education is a value-rich enterprise. However, we need to ask where the researchers found these values and whether, and how, they can be justified. For instance, some may wish to tie clarity very closely to accuracy. Others may insist that simplicity should be the name of the game, at least with younger pupils, and that accuracy can come later.

An example may help. We explain the meaning of ‘parallelogram’. We make it ‘clear’ to pupils that not only are ‘squashed’ quadrilaterals with two pairs of parallel sides parallelograms, but that rectangles, including squares, are also parallelograms. A critic objects that this is too complicated to be ‘clear’: we should start with something simpler, concentrating on the ‘squashed quadrilateral’ idea and only introducing the rectangles-and-squares complication later. You may feel that each side in this debate has a point. If teachers are to apply research to their practice, the research needs to be probed to see whether such issues are made explicit – for they might not be properly rehearsed at all.

I want to make a few more comments about values. Educational research often reports that a given teaching method or approach ‘works’. This judgment may well incorporate educational values held by the relevant researchers. These might concentrate on pupils’ futures as employees but could focus more on life-long learning or personal autonomy, among other possibilities. Is the research clear about any values incorporated in its notion of what ‘works’? Do we agree with them and why? Whether our teaching should be informed by such research surely depends in part on the answers here.

In many examples of research into teaching strategies, ‘what works’ is explained in terms of test performance. However, in the UK context, and more particularly in England, assessments such as National Curriculum tests take place in a ‘high-stakes’ atmosphere. Results are used to hold schools to account (even if Ofsted is backing away from this, or so they claim). The well-worn point here is that such forms of accountability often have a damaging influence on test and exam preparation and may narrow the curriculum, raising serious questions about whether improving scores should count as ‘working’ at all. Questions will arise about the extent to which improving test performance in a high-stakes regime is associated with knowledge and understanding that can be used and applied in a range of contexts, rather than with ‘thin’ achievements that can only be manifested in contexts similar to the relevant test conditions.

Researchers may, instead, explore associations with later developments such as participation in higher education or occupational ‘success’. Again, the associations that they choose may well incorporate their educational values. Sometimes these are explicit in the reporting. They certainly should be.

My final point concerns accuracy in the research itself. From time to time, research contains basic factual or definitional errors. When uncertain of the meaning of a technical term such as ‘phoneme’, we must check it out for ourselves, rather than taking the researcher’s word for it. For instance, Ofsted says (p. 20):

“These studies show that explicit and systematic teaching of the manipulation of phonemes (the smallest unit of sound in a language) and phonemic awareness (the ability to identify phonemes in written words) is crucial and should be continued until children can automatically process this information.”

But a phoneme isn’t the smallest unit of sound in a language. It isn’t a sound at all, but an abstraction. The Ofsted research summary confuses phonemes with phones. This confusion is widespread, both in government policy documents and in commercial schemes.

And here is another example, from the same Ofsted source (p. 28):

“Self-belief, an overarching term for a set of often overlapping and highly correlated concepts such as self-confidence, self-concept and self-efficacy, has been found to be slightly but significantly related to subsequent attainment.”

But self-concept and self-confidence do not overlap with self-efficacy. The latter concerns someone’s specific beliefs about what they know and can do. Self-confidence and self-concept are generic. Unlike self-efficacy, they are not linked to anything specific that an individual thinks about themselves.

To sum up, informed uncertainty about the applicability of at least some classroom research would seem to be the ideal state for the research-informed teacher. The research may be quite unable to offer detailed classroom guidance, despite any initially promising rhetoric. Moreover, informed uncertainty should be accompanied by a cautious and probing awareness both of researchers’ values and of the robustness of their definitions of key terms.

References

Cook C, Holland E and Slemrod T (2014) Evidence-based reading decoding instruction. In: Little S and Akin-Little A (eds) Academic Assessment and Intervention. London: Routledge, pp. 199–218.

Hattie J (2008) Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Abingdon: Routledge.

Ofsted (2019) Education inspection framework: Overview of research. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/813228/Research_for_EIF_framework_100619__16_.pdf (accessed 20 July 2020).

Rosenshine B (2012) Principles of instruction: Research-based strategies that all teachers should know. American Educator 36(1): 12–19, 39.
