A quarter-century journey through England’s primary school assessment system

Written By: Jon Barr
Coming to terms with teaching the first national curriculum: reflections and conclusions

When I first entered my Year 1 class as a probationary teacher in 1991, I found a profession coming to terms with teaching England’s first national curriculum.

1991

In the school I joined, the quality of teaching that children were experiencing in all subjects was idiosyncratic and fractured – in mathematics, for example, every child was on different pages of a workbook or, in many cases, different workbooks.

In my second year, as the new subject leader for mathematics, I made it my mission to bring some structure to our teaching. I wrote a scheme of work that included a progression of knowledge and skills, and teachers began to plan what mathematics children learned.

1995

I then took on the role of assessment leader at the school and, from 1995, I worked with teachers to explore the National Curriculum level descriptors. For many teachers, the notion of assessing where children were in order to inform their next teaching and pupils’ learning was a revelation, while others found it very challenging to decide what level a child was at.

With support and moderation, however, we developed a strong approach. We used ‘weak’, ‘secure’ and ‘strong’ to describe a child’s performance so that teachers could see when a child was approaching a threshold or had just crossed one. We moderated our judgements between year groups and ensured we had a standardised view by meeting with other schools. Staff became confident in making judgements on levels and used that information – alongside book scrutiny and marking – to ensure that their teaching enabled children to make progress.

1997

I joined a new school in 1997, and again worked hard to establish robust assessment systems. Now, the language of sub-levels replaced the terms ‘weak’, ‘secure’ and ‘strong’. Later, as the headteacher, I took this further and developed systems that allowed us to judge our whole school’s effectiveness. We could talk about the children’s attainment in all years and predicted results at the end of key stages. The internal systems we had in place allowed us to accurately predict our Year 2 and Year 6 results.

2007

This really came into its own in 2007 with my second headship. I had taken on a failing school so it was more important than ever for my teachers and leadership team to use our collective professional knowledge on assessment to accelerate children’s progress. This included insights into levels, sub-levels, prediction tools (such as Fisher Family Trust) – and how they could accelerate progress and improve our children’s outcomes. Our approach worked: after a few years of fluctuation, results from 2011-2015 showed a clear improvement trend in reading, writing and mathematics.

This quarter-century journey of assessment with levels will be familiar to many of my generation of teachers.

Reflecting on 1990s-2000s

Between 1995 and 2015 this assessment approach accompanied a rise in national attainment in Year 6 mathematics SATs: 44% of pupils reached level 4+ in 1995, 75% in 2005 and 87% in 2015 (the final year in which levels were used in statutory testing).

There have been claims that the improvements were illusory (De Waal 2009; Peal 2014). Researchers typically cite the performance of children in international secondary-level tests, such as PISA and TIMSS, to support their case. For example, England’s performance in the PISA mathematics tests has been stubbornly average and stable – 495 in 2006, 493 in 2009, 495 in 2012 and 493 in 2015 (Jerrim and Shure 2016). There is some evidence that the DfE found it difficult to ensure that Year 6 SATs reading tests were standardised over the two decades of testing with levels, but studies – whether conducted by the DfE, QCA or independent bodies – did not find the same in mathematics (DfE 2011).

Other research shows that many primary school headteachers regarded the use of levels and assessment as a key factor in raising standards (Webb and Vulliamy 2006). It has been argued (Howson 2012) that England’s PISA mathematics results did not increase between 2006 and 2015 primarily because of the difficulties secondary schools had in attracting mathematics teachers. This, it is argued, had a disproportionate effect on the attainment of the lowest-achieving children (Jerrim and Shure 2016).

2014

With the arrival of the 2014 national curriculum and the government’s decision to end the use of levels, the dominant narrative centred on the limitations of levels. Figures such as Tim Oates (2010) and Daisy Christodoulou (2016) argued that levels placed limits on the potential outcomes of children who had achieved below the national average in their Year 2 assessments; schools set lower aspirations for these pupils by Year 6, limiting their future achievement.

But life after levels was not universally welcomed. Many in the profession in England’s primary schools expressed deep concern that a complex body of professional knowledge was at risk. Observers of primary assessment, such as James Pembroke (2015), noted that many commercial companies, local authorities and schools appeared to be trying to reinvent a new level system to inform formative and summative assessment from Year 1 to 6, and that their search for measurements of progress might prove illusory. Many English secondary schools also appear to have retained the language of levels across Years 7 to 9 as they grapple with the challenges of the EBacc and the new GCSE frameworks.

My dilemma is that I have no desire to return to the situation I found in September 1991.

2018 and beyond

I want my staff to formatively assess their pupils on the curriculum they have taught and have evidence of their children’s success to inform future teaching. To do that, our teachers will work as a team and with other schools to understand our curriculum and how to secure those outcomes. Without that, I know that the most vulnerable and disadvantaged children at our school would struggle to achieve high standards.

My senior leaders, governors and I also need to collect information on our children’s progress to monitor whether all of our children are succeeding. That means I need to collect summative data and use a framework to analyse it and compare it with other schools. This requires a common assessment language: it is not acceptable to wait until the end of the key stage for a scaled score based on a standardised test to tell us whether we are being successful or not.

Furthermore, the MAT or local authority to which a school belongs needs to report information on children’s outcomes and progress to its board of trustees so that it can hold school leaders to account. We need a common language and process to do that.

Some have argued that we can move beyond assessment processes centred on the periodic collection of data and pupil tracking (Peacock 2016) towards school-determined processes (NFER 2017). However, international research (Mourshed et al 2010) has consistently advocated system-wide student assessment processes as a foundation for strong system-wide education provision. I remain convinced that the absence of one in England’s schools is unlikely to support us all in achieving the best for our children.

References