According to the Oxford dictionary, myths can be defined as ‘misrepresentations of the truth’. This article aims to give an overview of some aspects that come into play in the creation of myths, with particular emphasis on neuromyths (common misconceptions about the brain in education). I will describe some of the mechanisms behind the formation of myths. One particular example, the role of iron in spinach, will serve to demonstrate how challenging it can be to address myths. I will briefly look at the role of social media and finish by giving some pointers that might help prevent myths from taking hold in education.
The nature of myths
In the last five years, numerous studies have looked at the prevalence of myths in education. For example, Howard-Jones (2014) looked at the level of agreement with several ‘neuromythical’ statements in different countries, and concluded that, even across very different cultures, there are similarly high levels of belief in neuromyths, such as that we only use 10% of our brain, and that differences in left/right brain dominance (the theory that each side of the brain controls different types of thinking – itself an example of a neuromyth) can help explain individual differences amongst learners. The article also usefully reflects on possible ‘seeds of confusion’ that might spark myths. The most likely scenario seems to be that myths originate from ‘uninformed interpretations of genuine scientific facts’. They are promoted by victims of their own wishful thinking, who hold a ‘sincere but deluded fixation on some eccentric theory that the holder is absolutely sure will revolutionize science and society’ (Howard-Jones, 2014).
Howard-Jones (2014) goes on to attempt to explain the perpetuation of neuromyths. Firstly, he flags up cultural conditions – for example, differences in terminology and language creating a gap between neuroscience and education. A second reason is that counter-evidence might be difficult to access: relevant evidence might appear in specialist journals and, together with the complexity of the topic, this might mask any critical signals. A third element might be that claims are simply untestable – for example, because they assume knowledge about cognitive processes, or even the brain, that is not yet available to us. Finally, an important factor that we can’t rule out is bias: when we evaluate and scrutinise evidence, a range of emotional, developmental and cultural biases interact with emerging myths.
The good news, though, is that there are signs that training can decrease belief in neuromyths. In a recent study, Macdonald et al. (2017) compared the prevalence of neuromyths in the USA between three groups of participants: educators, participants with high exposure to neuroscientific knowledge, and the general public. The general public endorsed the greatest number of myths, with educators endorsing fewer and the high neuroscience exposure group fewer still – although, unfortunately, even that group still endorsed around 50% of the myths. The article also suggested, however, that care must be taken in how myths are dispelled, so as not to invoke new ones. The learning styles neuromyth (the idea that individuals learn best in different ways and that teaching should be tailored to their learning styles – widely debunked by research) is described as a particular challenge to the field, as it ‘seems to be supporting effective instructional practice, but for the wrong reasons’ (Macdonald et al., 2017). It is suggested that dispelling that particular myth might inadvertently discourage diversity in instructional approaches.
In some cases, simply saying that something is a myth is fine; in other cases, it is best to combine this with more information, to prevent new myths taking hold. A meta-analysis (a quantitative study design used to systematically assess the results of multiple studies in order to draw conclusions about that body of research) by Chan et al. (2017) investigated the factors underlying effective messages to counter attitudes and beliefs based on misinformation. It concluded that it seems helpful not to spend too much time talking about the misconception itself, but instead to focus on presenting counterarguments – or even to ask the audience to generate counterarguments. Perhaps a simple question such as ‘what is the best argument for not believing the following statement or study?’ could be rather revealing.
The case of iron in spinach
An interesting example of the creation of myths is described by Rekdal (2014; please refer to that article for detailed references to the sources noted here). He explores the formation of, and reaction to, the urban legend of spinach and iron. I remember the claims from my own youth, partly fuelled by cartoons like Popeye: spinach is good for you because of its iron content. However, spinach does not really contain significantly more iron than other foods, and it should probably not be the first food to reach for if one is iron-deficient, as it also contains substances that inhibit the intestinal absorption of iron. So where did this myth come from?
Rekdal describes how he came across an article by Larsson that cited work by Hamblin from 1981, who first ‘debunked’ the claim. According to Larsson, Hamblin reported that the myth came about because of a misplaced decimal point in the 1930s. Rekdal subsequently embarks on a quest to track down the origins of this statement. He calls this a ‘treasure hunt’, in which he traces the original manuscript by Hamblin and notices that Hamblin had worded it differently: the decimal point error was from the 1890s, but was only disclosed in the 1930s. Hamblin ascribes the discovery to other scientists, and does not provide further references.
Rekdal notes that the situation is quite ironic, as both Larsson and Hamblin place themselves at the frontline of the fight against bad science and academic carelessness – but the irony goes still further. Rekdal reports that in 2010, Sutton argued convincingly that entirely different factors, such as contamination, or confusion between fresh and dried spinach, may have caused the initial urban myth about iron content. Hamblin’s ‘decimal point’ explanation had meanwhile become an urban myth in itself, finding its way into wider society through books like Facts and Fallacies and Follies and Fallacies in Medicine, which Larsson might have used.
According to Rekdal, the moral of this tale lies not in the myths that have been created, but in the way in which we approach our facts. He describes the ‘invisible heroes’ who go out of their way to trace scientific facts back to their sources: ‘individuals with such attitudes are among the most important propellers of scientific development and accumulative knowledge’. Rekdal finishes with the observation that the ‘digital revolution has made it easier to expose and debunk myths, but it has also created opportunities for new and remarkably efficient academic shortcuts’. I think the article provides a cautionary tale about how we ‘build up’ our tower of evidence.
What about social media?
The two faces of the digital revolution are demonstrated in recent work by Robinson-Garcia et al. (2017), in which the authors sought, in the field of dentistry, to assess the extent to which tweeting about scientific papers signified engagement with, attention to, or consumption of, the scientific literature.
They argue that ‘simplistic and naïve use of social media data risks damaging the scientific enterprise, misleading both authors and consumers of scientific literature’. I want to flag up some questions that years of using social media have sparked in me.
Let’s, for example, scrutinise the advent of economics papers with advanced statistical methods being cited in the education blogosphere. These papers often appear as pre-prints and deal with a range of important issues. However, as with any piece of research, there are many features that – if not studied more deeply – can lead to myths. One issue that comes to mind is whether the paper has already appeared in a peer-reviewed journal (a journal in which research papers are evaluated by experts in the field). If not, this means that no ‘peers’ have yet studied the article in detail; in general, peer-reviewed articles tend to be more rigorous and robust (although peer review is no guarantee!).
Another thing to look at might be whether it is clear how the authors operationalised complex variables in their statistical models. Sometimes these issues boil down to the way in which the variables are measured. When we talk about measurement, many people envisage some sort of ‘thermometer’ that can easily gauge the concept. This often is not the case for highly complex constructs; both Growth Mindset (the theory, popularised by Carol Dweck, that students’ beliefs about their intelligence can affect motivation and achievement; those with a growth mindset believe that their intelligence can be developed) and Cognitive Load Theory (CLT – the idea that working memory is limited, that overloading it can have a negative impact on learning, and that instruction should be designed to take this into account) primarily use self-report as a form of measurement. Of course, this need not be a problem; both concepts can still be very useful, but I would argue that a critically engaged teacher should be aware of these things.
A challenge can also lie in the summaries of underlying data. One can almost have a day job in unpicking research articles, the prior literature involved, the methodology, the data analysis and the subsequent conclusions. We often have to rely on summaries and accounts from others, and these can sometimes be subject to ‘Chinese Whispers’. When you dive in deeper, you see all sorts of surprising things, ranging from atypical definitions of concepts to the selective use of data. Analyses of large-scale datasets like PISA (the Programme for International Student Assessment, a worldwide study by the Organisation for Economic Co-operation and Development (OECD), intended to evaluate educational systems by measuring 15-year-old school students’ knowledge and skills) and TIMSS (Trends in International Mathematics and Science Study – a series of international assessments of the mathematics and science knowledge of students around the world) certainly need to go further than the key tables reported in the media.
Keep in mind, too, that science is constantly revised and updated, which means that one should ideally look at a whole body of literature. One article that contradicts previous literature does not nullify it, nor should it be disregarded. This, in my view, also means that we should not be too quick to dismiss older research, supposedly because ‘cognitive science’ has shown that it was ‘wrong’. I would assert that, for the many ideas proposed over the decades, ‘cognitive science’ has provided empirical backing for some and none for others. Blanket dismissal would be inappropriate; approach the ideas as they are and evaluate them as such, not through broad, sweeping generalisations.
Underneath all of this, it is useful to be aware of a very human tendency to appreciate novel and original findings in the research literature, sometimes leading to ‘publication bias’. Remember that what ends up in publication often is the remarkable, not the unremarkable.
Conclusion
In this article, I have tried to provide an overview of the complexities involved in studying myths and misconceptions. Of course, there is much more to say, but I would like to finish with what I feel are some take-away recommendations:
1: Try to follow up sources as much as possible. Of course, this is very time-consuming. Sometimes other people summarise research for you, but even then – as the iron in spinach example shows – it is wise to remain critical. Perhaps refraining from too firm a position, until you feel you have reviewed a fair amount of material, from different actors, might be a good strategy. Sorry – this is just hard work, and I understand that practitioners do not always have this time.
2: Be mindful of over-simplifications. I completely understand that providing a multitude of pages to describe the complexities of an educational phenomenon is not helpful for practitioners. However, the fact that some over-complicate things does not mean that ‘simple is best’ either. Follow the facts and, if you simplify, be aware of the limitations and of what the simplification leaves out.
3: When talking about myths, it is best not just to say that something is wrong but to provide compelling evidence or facts to replace it. There is a tension here with potential new myths, as a simple, concrete message that grabs attention can more easily replace a previous misconception.
4: Be cautious about developing policy based on new claims. Some people have suggested that we should wait at least 15 years before an initial scientific idea ends up as policy, allowing us to fully study the pros and cons. Although I think this time period is too lengthy, at a minimum research findings should be accompanied by a clear scope and a disclaimer with regard to claims.
Perhaps the key message for all is that we accept that no research finding will provide a ‘silver bullet’. Now go forth and fact-check my article!