Teaching and learning approaches that make use of retrieval, interleaving, spacing and visual cues have been found to enhance students’ performance, but are not frequently used as independent learning strategies. We have developed learning software that applies these four techniques, and tested its efficacy in a randomised controlled trial with school pupils. Results showed that students at both selective and non-selective schools significantly improved their scores in a standardised test after using the software. This research was carried out by Seneca Learning’s Science team, who helped to design and develop the software system. Seneca Learning is a social enterprise that offers a free learning platform and free GCSE revision guides; however, the same software may be used for paid courses sold to corporate organisations in the future.

Designing learning software based on cognitive science

Research from the field of cognitive science has identified a number of learning strategies that appear to improve learning, all of which have been introduced and discussed in earlier articles in this journal. Spacing, interleaving, retrieval practice, and using visuals alongside text have been shown to improve retention of learned content (Roediger III and Pyc, 2012; Karpicke and Blunt, 2011; Mayer and Anderson, 1992). However, these strategies are not consistently used by pupils for independent work in or outside of school; ‘passive’ learning techniques, such as rereading or highlighting, appear to be more commonly used.

Some software packages are available to aid learners in the application of effective learning strategies; however, solutions tend to be piecemeal and, as such, their use is limited. These packages commonly apply only a subset of learning strategies, involve a large burden of work to input relevant content, and are restrictive in the type of content to which they are amenable. With these issues in mind, we developed new learning software that simultaneously applies spacing, interleaving, retrieval and visual cues. To test the effectiveness of this resource, we conducted a randomised controlled trial (RCT) with 1,120 Year 9 students (13 to 14 years old) from independent, grammar and comprehensive schools, including single-sex and co-educational schools. We wanted to know whether students’ performance in an assessment would improve more after using our software than after using other learning strategies. The hypotheses and planned analyses were pre-registered with the Open Science Framework.

The software tested is an independent learning platform, provided free and designed specifically to be amenable to a wide range of content whilst not demanding extensive staff training. The platform presents short learning modules consecutively. The type of module varies based on the format of the information being presented, and modules can either introduce or test students on information; content can include images, processes, text and short- to medium-length questions. Interleaving and spacing are achieved with an algorithm that monitors each student’s performance on each piece of content and presents new question modules prioritising the pieces most in need of further practice. Each piece of information is tested in one or more question formats to promote retrieval practice. Visual cues are implemented through the presentation of the same images during both the introduction and testing modes. In our research, spacing was also implemented through a two-week gap between the RCT phases (explained in more detail in the next section).
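The scheduling logic described above can be sketched in miniature. The following is a hypothetical, simplified illustration (not Seneca’s actual algorithm): each piece of content carries a mastery score, incorrectly answered pieces resurface sooner, and ties are broken at random so that questions interleave across topics rather than appearing in blocks.

```python
import random

class Scheduler:
    """Simplified sketch of a spacing/interleaving scheduler.

    Hypothetical illustration only. Each content piece carries a mastery
    score; items answered incorrectly reset and so resurface sooner, and
    the next question is drawn from the weakest items across all topics.
    """

    def __init__(self, items):
        # mastery starts at 0 for every piece of content
        self.mastery = {item: 0 for item in items}

    def next_item(self):
        # prioritise the pieces most in need of further practice;
        # ties are broken at random so topics interleave
        lowest = min(self.mastery.values())
        candidates = [i for i, m in self.mastery.items() if m == lowest]
        return random.choice(candidates)

    def record(self, item, correct):
        # correct answers raise mastery; errors send the item to the back
        self.mastery[item] = self.mastery[item] + 1 if correct else 0

sched = Scheduler(["pathogens", "vaccination", "antibiotics"])
first = sched.next_item()
sched.record(first, correct=True)
# a piece just answered correctly will not be asked again while
# other pieces still have lower mastery
```

In this sketch, correctly answered content is effectively spaced out (it returns only once weaker content has caught up), while the random tie-breaking interleaves topics.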

The RCT procedure

For this RCT, the content covered was part of the Infection and Response unit common to all GCSE biology exam boards, and was commissioned from a biology specialist at a school achieving highly in terms of Progress 8. The 1,120 participating students were all in Year 9, aged 13 or 14; 48% of the sample were female. Schools were approached to participate in the trial via email, social media and personal communication; those that took part were those able to commit to the trial schedule over the summer term. Prior to the beginning of the trial, all participating schools received an information sheet explaining the protocol. Schools consented to the completion and publication of the research, and managed their internal processes themselves. All student data was anonymised, and the only data used were each student’s pre-test score, post-test score, treatment group and, where applicable, the reason for exclusion from the research data.

The students were randomly assigned to one of three groups: software, spacing and massed practice. Randomisation was important because it minimised systematic prior differences between the groups, so that any differences in results could be attributed to the intervention. More specifically, randomisation happened in one of two ways, depending on whether the school’s classes were set by student ability. If classes were not set by ability, it was assumed that students had been allocated to their subject classes at random, and so each class was randomly assigned as a whole to one of the three groups; this approach minimised unnecessary class disruption in the participating schools. If classes were set by ability, students were individually randomised to one of three classrooms by blindly selecting a number from a bag.
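As a toy illustration of this two-level randomisation (an illustrative sketch only; the trial’s actual procedure used a blind draw from a bag), the units to be randomised (whole classes in non-setted schools, individual students in setted schools) can be shuffled and dealt evenly into the three groups:

```python
import random

def assign_to_groups(units, seed=None):
    """Randomly allocate units to the three trial groups.

    Illustrative sketch: `units` may be whole classes (non-setted
    schools) or individual students (setted schools). Units are
    shuffled and then dealt round-robin, mimicking a blind draw.
    """
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    groups = {"software": [], "spacing": [], "massed": []}
    names = list(groups)
    for i, unit in enumerate(shuffled):
        groups[names[i % 3]].append(unit)
    return groups

# nine classes dealt into three groups of three
allocation = assign_to_groups([f"class_{n}" for n in range(9)], seed=1)
```

The round-robin deal guarantees group sizes differ by at most one, while the shuffle ensures the allocation itself is random.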

Before the beginning of their learning, all students completed a six-minute pre-test. As with the randomisation, this was done to ensure that any final differences in performance were due to the intervention and not to prior discrepancies: the pre-test provided an objective measure of students’ prior knowledge of the RCT content. The trial consisted of three phases (see Table 1), each separated by two weeks, since this interval matches optimal spacing intervals as closely as possible (Cepeda et al., 2008) whilst fitting into schools’ weekly timetables. During the phases, students worked with the material independently, with no interaction with teachers or peers. The trial material was not available in between the phases (see Table 1). Table 1, showing a summary of trial phases and groups, is available to download as a Microsoft Word document below.

Download an accessible Microsoft Word document of Table 1

During Phases 1 and 2, students in the software group completed 20-minute independent learning sessions using our software on a tablet. Students in the spacing group did the same, but using a PDF document on a tablet; this PDF contained exactly the same content and information as the software. The spacing group thus served as a control for both the spacing interval and the use of technology to deliver content.

Students in the massed practice group had no contact with the content material during Phase 1; instead, they had a regular lesson on a topic chosen by their teacher that differed from the RCT content. In Phase 2, they carried out a single 40-minute independent learning session with a physical copy of the material. The massed practice group served as a baseline, most closely resembling the independent learning activities commonly used by students in schools.

In Phase 3, all students completed a pen-and-paper post-test consisting of multiple-choice, free recall and short answer questions. Marking was conducted by a teacher with a PhD and six years’ experience with GCSE biology courses. This teacher was blind to students’ treatment groups.

The pre-test and Phase 3 exams, the teachers’ protocol and the list of participating schools are available on the online version of this article. To evaluate students’ learning, we measured their scores on the pre-test and also on the post-test, and compared these values between the three experimental groups.

Statistical analyses and interpretation

Participants who did not adhere to the trial conditions, for example by missing a phase of the trial through absence, were excluded from the data. The final sample of students is shown in Table 2.

Download an accessible Microsoft Word document of Table 2

A hierarchical linear modelling analysis was conducted to investigate how the software group’s performance differed from that of the other two groups, whilst taking into account the fact that students were clustered within schools (Sheard and Chambers, 2014). The analysis also included whether each school was selective or non-selective.

Students’ scores

The RCT data showed very similar baselines for the software, the spacing and the massed practice groups in terms of pre-test scores (Figure 1).

Figure 1 showing student pre-test scores per group considering all schools, selective schools and non-selective schools.

These results suggest that the groups did not differ in their prior knowledge of the trial content, and that any later differences can be attributed to the intervention.

In the post-test (Figure 2), we found a striking difference between the software, the spacing and the massed practice groups. When considering all schools, the software group more than doubled pupils’ scores compared with the massed practice group, which used printed material. Similarly, the software group scored 63% higher than the spacing group. The same pattern was seen when considering only selective and only non-selective schools. That is, learning with the software that incorporates retrieval, spacing, interleaving and visual cues was significantly more effective than learning by reading a revision guide only. The size and consistency of the difference in pupils’ marks across both school settings indicates that it was not due to chance, but rather due to the use of the software.

Figure 2 showing student post-test scores per group considering all schools, selective schools and non-selective schools.

Classroom application

The results of our trial suggest that using a software platform that incorporates a blended approach of spacing, interleaving, retrieval and visual cues to learn material is more effective than a spaced learning approach using a PDF of the same material, and more effective than a massed practice approach using a printed version of the material.

Whilst pupils in selective schools performed better in the assessment than those in non-selective schools regardless of experimental group, the difference in post-test scores between the software group and the other groups was significant in both school settings. Thus, use of the software appears effective independently of students’ background.

We still need to investigate how students’ performance is affected after prolonged use of the software and when different content is covered. It also remains to be investigated how much each of the four techniques applied in the software contributed to students’ enhanced learning on the platform. Nevertheless, the findings indicate that the software can accelerate independent learning and has the potential to improve students’ performance on standardised tests compared with the other approaches trialled.

Additionally, the learning software has the advantage of automatically marking students’ work. Thus, no additional time cost is demanded from the school when using this software as a supplement to homework and revision practices. In fact, teachers’ workload could potentially be reduced. Crucially, the platform is provided free of charge, requires only access to an internet-enabled device (which 95% of our participating students had) and does not require extensive staff or student training.



References

Cepeda N, Vul E, Rohrer D, et al. (2008) Spacing effects in learning: A temporal ridgeline of optimal retention. Psychological Science 19(11): 1095–1102.
Karpicke J and Blunt J (2011) Retrieval practice produces more learning than elaborative studying with concept mapping. Science 331(6018): 772–775.
Mayer R and Anderson R (1992) The instructive animation: Helping students build connections between words and pictures in multimedia learning. Journal of Educational Psychology 84(4): 444–452.
Roediger III H and Pyc M (2012) Inexpensive techniques to improve education: Applying cognitive psychology to enhance educational practice. Journal of Applied Research in Memory and Cognition 1(4): 242–248.
Sheard M and Chambers B (2014) A case of technology enhanced formative assessment and achievement in primary grammar: How is quality assurance of formative assessment assured? Studies in Educational Evaluation 43: 14–23.