This case study focuses on the use of Google Forms and Zipgrade to assess and improve students’ retention and recall of key subject-specific knowledge in Year 7 religious studies at Chesterton Community College, Cambridge. The case study involved 205 students and four members of staff, of whom just one was a religious studies specialist. Data was collected over the autumn term of 2019.
Chesterton’s Key Stage 3 religious studies course is designed to ensure that students are introduced to, understand and remember key religious beliefs, teachings and practices. This includes factual knowledge (e.g. subject-specific vocabulary and sacred writings) and understanding of how beliefs influence believers (e.g. opinions, interpretations and practices). Key information is presented in knowledge organisers. The direct instruction phase (a method of instruction in which concepts or skills are taught using explicit teaching techniques, such as demonstrations or lectures, and are practised until fully understood by each student) is followed by a range of scaffolded tasks, leading to independent practice. There is a focus on elaborative interrogation, with students being encouraged to ask why different religious believers might hold diverse views and how these viewpoints might affect their practices. Lessons are carefully constructed to ensure that all of the content in each knowledge organiser is explicitly addressed in class. This ensures that all students, regardless of their prior knowledge, are exposed to the same body of knowledge. This follows the approach advocated by Porter (2015), who notes that teachers cannot just assume that students arrive in lessons with this knowledge. Lessons include a significant element of direct instruction, during which students develop a comprehensive and cohesive schema as links between ideas are highlighted.
Students’ recall of key knowledge is tested in fortnightly homework multiple choice quizzes using Google Forms and in Zipgrade quizzes conducted in lessons. This regular retrieval practice helps to strengthen memory of key facts and, as it can change the representation of that information in the student’s memory, can lead to deeper understanding.
Much has been written about the benefits and drawbacks of using multiple choice quizzes as an assessment tool. Writing reliable and valid questions takes considerable time and effort. Brame (2013) suggests that teachers writing multiple choice questions should consider the following issues:
- The stem should be meaningful by itself and present a definite problem. This allows for a focus on the learning outcome.
- Where possible, the stem should be a question rather than a partial sentence, as this allows the student to focus on answering the question rather than holding the partial sentence in their working memory and sequentially completing it with each alternative.
In addition, Christodoulou (2013) notes that questions need to make students think deeply about the subject matter, and that distractors must be plausible but not so plausible that they are unfair. See Figure 1 for an example.
The Sikh Holy Book is given its own room every night in the place of worship. Why?

(a) A Sikh will sleep with the book overnight
(b) It makes it easier for Sikhs to worship the book if it is in a separate room
(c) It shows that Sikhs have great respect for the book
(d) To keep it safe from visitors who might touch it when nobody else is around overnight

N.B. This question was developed in response to the observation that many students believed that Sikhs worshipped the Guru Granth Sahib. One of the distractors addresses this key misconception.
Figure 1: Example of a multiple choice quiz question
Google Forms is a free program that allows teachers to set self-marking homework quizzes. Each quiz is a set of multiple choice questions based on the content of the knowledge organiser. The fortnightly quizzes include a series of questions to test factual recall (e.g. definitions of key terms, filling in the missing word in a piece of scripture), alongside questions promoting elaborative interrogation of this knowledge (e.g. why does a named group of believers carry out a religious practice in a certain way?). The quiz software automatically provides individualised feedback to students. Willingham (2014) argues that ‘The largest benefit to memory occurs when the student gets immediate corrective feedback’; many students have commented on how much they appreciate the immediacy of the feedback, as it allows them to ‘fix the facts’ before errors become entrenched. The feedback can be simple verification or answer feedback, or it can be more complex and explain the correct answer (elaborative feedback), guiding students towards additional resources to help them to clarify and consolidate their knowledge.
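By way of illustration only (the school’s quizzes are authored directly in the Google Forms interface; the field names and feedback strings below are hypothetical), a question with the kinds of feedback described above might be modelled as follows:

```python
# A minimal sketch of a quiz question with immediate corrective feedback.
# The structure is illustrative and does not reflect Google Forms' internals.

QUESTION = {
    "stem": "The Sikh Holy Book is given its own room every night in the "
            "place of worship. Why?",
    "options": {
        "a": "A Sikh will sleep with the book overnight",
        "b": "It is easier to worship the book in a separate room",
        "c": "It shows that Sikhs have great respect for the book",
        "d": "To keep it safe from visitors overnight",
    },
    "correct": "c",
    "feedback": {
        # Simple verification/answer feedback for a correct response.
        "correct": "Correct: the practice shows respect for the book.",
        # Elaborative feedback targets the known misconception (that Sikhs
        # worship the Guru Granth Sahib) and points to further resources.
        "incorrect": "Sikhs show great respect for the Guru Granth Sahib "
                     "but do not worship it. Revisit the knowledge "
                     "organiser section on the gurdwara.",
    },
}

def mark(answer: str) -> str:
    """Return immediate feedback for a student's chosen option."""
    key = "correct" if answer == QUESTION["correct"] else "incorrect"
    return QUESTION["feedback"][key]

print(mark("b"))  # prints the elaborative feedback for a wrong answer
```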
Marsh and Cantor (2014) have reviewed experiments showing that prior multiple choice testing can increase the likelihood that students will incorrectly answer later questions with the multiple choice distractors – a phenomenon known as the negative testing effect. Research has shown that immediate feedback can reduce this negative testing effect and enhance the positive effects of multiple choice testing (Butler and Roediger, 2008).
Students can answer the quizzes as often as they wish. It is noticeable that many students with low prior attainment choose to repeat the quizzes. Informal interviews conducted with 10 students from the two lowest prior-attaining sets suggest that this is building their self-esteem, as they typically achieve high success rates on their second or third attempts. Their class teachers have also commented on the way in which this success has improved the students’ attitudes towards themselves as learners.
Google Forms automatically generates a teacher markbook. This can be downloaded and analysed offline, allowing individual teachers to examine the data for their classes or the head of department to analyse data for the whole cohort. The markbook can be sorted to show responses by question or by student. Teachers analyse their class data on a fortnightly basis, identifying at least one common misconception for their class, and then address this point in their next lesson. The head of department analyses the cohort data, identifies the most common misconceptions and then rewrites the associated lessons and scheme of learning for the following year to address these points. For example, analysis revealed a lack of understanding of the nature of God in Hinduism; lessons have now been modified to make it clearer to students that Hindus worship one creator God, who takes a variety of different forms.
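A minimal sketch of this kind of offline analysis, assuming the markbook has been exported as a CSV with one row per student and one column per question (the file name, column names and answer key below are hypothetical):

```python
import pandas as pd

# Load the exported markbook: one row per student, one column per question,
# each cell holding the option the student chose. Layout is an assumption.
marks = pd.read_csv("class_markbook.csv", index_col="student")
ANSWER_KEY = {"q1": "c", "q2": "a", "q3": "d"}  # correct option per question

# Success rate per question: the weakest questions flag the most common
# misconceptions to address in the next lesson.
success = pd.Series(
    {q: (marks[q] == right).mean() for q, right in ANSWER_KEY.items()}
).sort_values()
print("Lowest success rates:\n", success.head(3))

# For the weakest question, the modal wrong answer identifies which
# distractor (i.e. which misconception) students are choosing.
weakest = success.index[0]
wrong = marks.loc[marks[weakest] != ANSWER_KEY[weakest], weakest]
print(f"Most common wrong answer to {weakest}: {wrong.mode()[0]}")
```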
The Zipgrade software provides teachers with another quick and easy way to introduce low-stakes multiple choice testing to their classroom. It was chosen because it provides the teacher with the ability to print student and class summaries very quickly and easily, thus allowing them to meet the demands of the school’s assessment and feedback policy. Students answer multiple choice questions by filling in a personalised optical mark recognition (OMR) sheet, which is then scanned by the teacher using a phone. It takes just a couple of minutes to mark an entire class set of answers. The software generates rich data about students’ understanding and misconceptions and allows teachers to address these immediately at both class and individual level. As with Google Forms, the data can be analysed by student, by class or by question, meaning that support can be targeted exactly where it is most needed.
Year 7 students completed a baseline Zipgrade multiple choice quiz in their first lesson in September, sat the quiz again in their last lesson before the October half-term break and then again at the start of December. Some questions were identically phrased, while others were amended to assess the same knowledge using a different question; questions and answers were presented in a different order to the initial test. The average score rose from 34 per cent in September to 84 per cent in December, and there was no significant difference in performance on the identical and amended questions. A class-level analysis shows that where teachers identified a common misconception and addressed it in a subsequent lesson, performance on the associated quiz question rose to between 95 and 100 per cent.
We do not hold directly comparable test scores from the previous (2017–18) cohort, as they did not sit equivalent quizzes in October or December. However, scores from their end-of-year test (sat in July) showed an increase of 24 percentage points over the baseline, just under half of the uplift that we have observed this year. See Table 1 for an overview of scores.
| Quiz | Average score | Range of scores | Modal score group |
| --- | --- | --- | --- |
| Year 7 September baseline (205 students) | 34% | 0–67% | 31–40% |
| Year 7 October retest (205 students) | 83% | 13–100% | 91–100% |
| Year 7 December retest (205 students) | 84% | 50–100% | 91–100% |
| 2017–18 cohort baseline test (199 students) | 34% | 0–65% | 31–40% |
| 2017–18 cohort July retest (194 students) | 58% | 22–86% | 41–50% |
Table 1: Comparison of scores between September and December and with previous cohort
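For reference, the summary statistics reported in Table 1 (average, range and modal ten-point score band) can be computed from a list of raw percentage scores along these lines; the scores below are invented placeholders, not the study data:

```python
import statistics

scores = [34, 67, 0, 45, 88, 31, 40, 52, 39, 33]  # placeholder percentages

def band(score: int) -> str:
    """Ten-point band label (1-10, ..., 91-100); 0 falls in the lowest band."""
    upper = min(max((score + 9) // 10 * 10, 10), 100)
    return f"{upper - 9}–{upper}%"

average = statistics.mean(scores)
low, high = min(scores), max(scores)
modal_band = statistics.mode(band(s) for s in scores)
print(f"average {average:.0f}%, range {low}–{high}%, modal band {modal_band}")
```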
Furthermore, teachers are reporting that students are able to apply their knowledge to unfamiliar contexts. They are remembering more and therefore have more knowledge at their disposal to apply to open-ended questions, where they are asked to explain the reasons for believers’ behaviour. There is a notable difference between Year 7 students, who have been exposed to the new knowledge organiser and quizzing approach, and Year 8 students: teachers report that Year 8 students are much less secure in the key subject knowledge and are therefore less able to show their understanding in extended answers. This observation has been reported by teachers working across the full ability spectrum.
References
Brame C (2013) Writing good multiple-choice questions. Available at: cft.vanderbilt.edu/guides-sub-pages/writing-good-multiple-choice-test-questions (accessed 10 December 2019).
Butler AC and Roediger HL (2008) Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition 36: 604–616. Available at: researchgate.net/publication/5359941_Feedback_enhances_the_positive_effects_and_reduces_the_negative_effect_of_multiple-choice_testing (accessed 21 October 2019).
Christodoulou D (2013) Research on multiple choice questions. Available at: daisychristodoulou.com/2013/10/research-on-multiple-choice-questions (accessed 10 December 2019).
Marsh EJ and Cantor AD (2014) Learning from the test: Dos and don’ts for using multiple-choice tests. In: McDaniel M, Frey RF, Fitzpatrick SF et al. (eds) Integrating Cognitive Science with Innovative Teaching in STEM Disciplines. Washington University in St Louis, pp. 37–52. Available at: openscholarship.wustl.edu/cgi/viewcontent.cgi?article=1009&context=books (accessed 10 December 2019).
Porter JHC (2015) Which knowledge should you teach from the Bible? In: To learn is to follow. Available at: tolearnistofollow.wordpress.com/2015/03/30/which-knowledge-should-you-teach-from-the-bible (accessed 21 October 2019).
Willingham DT (2014) Strategies that make learning last. Educational Leadership 72(2): 10–15. Available at: ascd.org/publications/educational-leadership/oct14/vol72/num02/Strategies-That-Make-Learning-Last.aspx (accessed 21 October 2019).