
Researchers should measure the side effects of new teaching approaches – not just academic achievement

Written By: Kieran Briggs
Researchers focus on measuring academic achievement, but this might miss important consequences

Education policy bingo enthusiasts are rarely disappointed: a reference to how good East Asian systems are at maths is a fixture on their cards.

The impressive PISA and TIMSS performances of students in Singapore, South Korea, Hong Kong and Shanghai are never far from the news. In England, East Asian approaches have informed a number of Government schemes, including its decision to roll out a maths mastery programme to half of the primary schools in the country.

So I was surprised to discover that, when asked to rate their confidence in maths, students from East Asian systems did not also come out on top. As part of questionnaires in the 2015 TIMSS international assessments, students were asked to rate their confidence. Despite coming top for maths performance, only 19% of students in Singapore identified as ‘very confident in mathematics’. England came 11th overall, but 37% of its students surveyed described themselves as being very confident.

This is an example of an educational ‘side-effect’, according to Yong Zhao, an education professor based at the University of Kansas. The East Asian approach to teaching mathematics has improved test scores, but it may also have the side effect of decreasing confidence.

Zhao argues that, just as we expect that many drugs and medical treatments will have side effects, we should also anticipate side effects in education. His central contention is that as a rule side effects should be expected and acknowledged as part of decision-making in education. Zhao does not argue that side effects should always lead us to abandon policies or approaches (though he suggests that in some cases this should happen), but he believes they should be actively looked for in evaluations of new approaches.

Zhao’s claims about the likelihood of side effects are convincing. Most simply, given that resources and time are finite, prioritising one outcome or approach will mean others receive less attention. The allocation of curriculum time between subjects is an obvious example.

The consequences of ignoring side effects are also interesting. Zhao argues that many long-running debates in education are perpetuated because parties on both sides talk past one another, each citing evidence related to different outcomes while missing opportunities to accommodate legitimate concerns. Zhao suggests, for example, that advocates of direct instruction have amassed a considerable amount of evidence related to standardised test outcomes, but these findings alone are unlikely to convince those who believe direct instruction reduces creativity. Instead, he recommends that proponents of direct instruction acknowledge that alongside the intended effect of improving test scores, a possible side effect is reduced creativity. This could move the debate forward by shifting the question from whether or not creativity is affected, to how any side effects can be monitored and reduced. Returning to medicine – another bingo favourite – Zhao gives the example of cold caps: chemotherapy can help cure or slow the development of cancer, but patients may also lose their hair, a side effect that cold caps can reduce.

The medical analogy hints at one challenge to Zhao’s argument, however. In medicine, it is generally possible to agree what constitutes a side effect (hair loss), whether it is desirable (it isn’t), and how to measure it. In education, I am not sure this is likely to be the case. Direct instruction supporters may dispute the suggestion that creativity can be measured and, if it can, whether it matters.

The challenges of definition and measurement may be a more convincing explanation of why educational researchers have not studied side effects more systematically – rather than that they just don’t care. It is understandable that academic achievement is the starting point for decision-makers and researchers: it is something the vast majority of educators value, and (relative to most other outcomes) there is agreement about the types of tests that can be used to measure it.

Zhao is right that attainment is not everything, however, and that a narrow focus on any single measure makes it likely that side effects will be missed. Although such conversations may not be easy, the explicit consideration of side effects would be a mark of a more mature, thoughtful system, and is worth pursuing.

© 2022 The Chartered College of Teaching