
Moving away from criteria: Using modelling when assessing pupils

LUKE HINCHLIFFE, GOFFS CHURCHGATE ACADEMY, UK

Goffs Churchgate Academy is expanding rapidly, with pupil numbers increasing year on year. With an above-average number of students with identified special educational needs, there has been a heavy focus on the curriculum and on ensuring that it is broad, balanced and challenging. As part of the review of the geography curriculum, we focused on how assessment could be improved to deliver higher outcomes and to cohere more closely with the curriculum.

Evidence

Whilst reviewing the research around assessment, we found that the accessible evidence focuses largely on higher education, with little on Key Stage 3 assessment. Greater research into Key Stage 3 assessment would be useful, as this is currently a gap in the literature.

The work by Bloxham et al. (2015) found that the level of detail used in assessment criteria, whilst useful for students, also had drawbacks. In particular, criteria often oversimplified the complex process involved in assessing work, and explicit criteria often confused students, as work was marked more holistically than the explicit criteria would suggest (Sadler, 2005, 2009). Norton (2004) identified that explicit criteria can lead students to focus on superficial aspects of an assessment, a finding supported by Handley and Williams (2011). We had similar concerns: when work was marked, students often could not understand why they had not achieved a certain grade, as they felt that they had met most of the criteria, whilst others addressed only superficial points.

Sadler (2009) in fact argues that breaking criteria down into extensive explicit statements takes the focus away from the importance of teaching and curriculum thinking. The research also makes a clear link between students' understanding of assessment criteria and student outcomes (Orsmond et al., 2002).

Therefore, we decided to simplify our criteria and to teach the criteria explicitly in order to support students' understanding. This explicit teaching uses modelling and is supported by research from Rosenshine (2012), who identified modelling as part of effective instruction. Research also suggests that the use of model answers and exemplars can improve students' performance in assessments (To and Carless, 2016).

In Year 7, we decided to rewrite the assessment criteria, replacing lengthy explicit paragraphs with simpler holistic sentences, drawing on the work of Norton (2004) and Handley and Williams (2011). It was felt that this would encourage students to focus on producing high-quality work rather than on matching the criteria. As part of this process, we reviewed our lesson sequences and activities prior to the assessment, to ensure that the knowledge assessed was covered in sufficient depth and that the skills were modelled so that students could succeed.

Actions

The new simplified criteria were intended to address the concerns raised in the literature, so we decided that they should centre on 'good geography'. It was discussed that good geography should emphasise higher-level geographical skills such as synopticity. However, each assessment is different, and the criteria would therefore vary to reflect this.

The move to simple criteria that promoted good geography was an attempt to reduce formulaic answers that superficially answered the question, whilst also placing the focus on relevant knowledge and the skills that had been taught. It was hoped that this would allow students to demonstrate a deeper level of knowledge by drawing their own conclusions from the information and evidence.

When putting this into practice, it was agreed that we would use modelling – a mixture of pre-written models and live modelling. This would enable the teacher to show what the new 'good geography' criteria looked like in practice, improving students' understanding of what the criteria meant.

Results

Initially, a quantitative review of the assessment data was carried out to identify trends and patterns. This presented a challenge, as previously there had been six points on the assessment criteria and now there were only four. To make the data comparable, teachers were asked to subdivide the two middle grades of the new approach into either secured (comfortably in that band) or securing (not fully in the band) (Table 1).

After this, a qualitative analysis of the students’ work was carried out to identify reasons for the trends and patterns. The analysis was a comparison of 24 students, two from each banding in both years. To make it comparable, prior attainment and other factors such as pupil premium and SEND were considered. The analysis focused on identifying areas of strength and weakness with each answer and then making a comparison.

Table 1: Quantitative analysis of the assessment data

Band                   Old approach   New approach
Low                    24             11
Middle 1 (securing)    18             28
Middle 1 (secured)     24             36
Middle 2 (securing)    31             19
Middle 2 (secured)     14             11
High                    9             15

Students with identified SEND appeared to perform more strongly, typically scoring higher against all areas of the criteria than previously; this was most noticeable in the reduction in the number of answers in the lower bands. The analysis showed that they had developed their answers further, moving beyond the simple points seen previously in answers from similar students.

Higher-ability students performed better than previously; the strongest showed a much higher level of synopticity and understanding in their answers, with good development and explanation, and more original thinking was evident under the new approach. Whilst some higher-ability students began to demonstrate these skills, their answers were not as fully developed as those of the highest-performing students. Their answers were more focused on the criteria, with fewer examples of students going off on a tangent.

Middle-ability students performed worse than previously, seen in the increase in Middle 1 (secured). The analysis showed that they followed the model answers too closely and paraphrased the same point multiple times in their answers. In addition, they failed to develop points in sufficient detail.

Discussion

The results from this approach suggest that it may improve performance for the strongest and weakest students. The evidence suggests that students benefit from a sharper focus on the simplified criteria. Giving them extra clarity on what they need to do enables them to focus on demonstrating this in their answers. In addition, the modelling enables students to understand the criteria in practice. Modelling may obscure the effect of the criteria, but it is ultimately necessary for students to understand what the criteria mean.

The performance of middle-ability students was weaker, however, with a greater number in the Middle 1 band than previously. This may be due to a number of reasons, which would be worth investigating further. Internal factors, such as variation between teachers in both the modelling and the grading, need to be investigated as a possible cause. It may also be related to how modelling had been used with these students previously in primary school. It would be important to explore how the students understood the modelling and the new criteria to gain more insight into their perspective. It is also important to remember that this research compares a cohort of students who experienced remote education because of COVID-19 with a cohort that did not.

Whilst the early indication is that this approach has improved performance for some students, the project is only in its initial stages. There is enough evidence at present to suggest that the project is worthwhile and contributes to improved performance, but the approach may require refinement and development as time progresses. For example, the practice could be taken further by using models at different levels to help students understand the assessment criteria more fully, as shown in the research by Orsmond et al. (2002).

References

Bloxham S, Den-Outer B, Hudson J et al. (2015) Let’s stop the pretence of consistent marking: Exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education 41(3): 466–481.

Handley K and Williams L (2011) From copying to learning: Using exemplars to engage students with assessment criteria and feedback. Assessment & Evaluation in Higher Education 36(1): 95–108.

Norton L (2004) Using assessment criteria as learning criteria: A case study in psychology. Assessment & Evaluation in Higher Education 29(6): 687–702.

Orsmond P, Merry S and Reiling K (2002) The use of exemplars and formative feedback when using student derived marking criteria in peer and self-assessment. Assessment & Evaluation in Higher Education 27(4): 309–323.

Rosenshine B (2012) Principles of instruction: Research-based strategies that all teachers should know. American Educator 36(1): 12–19, 39.

Sadler DR (2005) Interpretations of criteria-based assessment and grading in higher education. Assessment & Evaluation in Higher Education 30(2): 175–194.

Sadler DR (2009) Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education 34(2): 159–179.

To J and Carless D (2016) Making productive use of exemplars: Peer discussion and teacher guidance for positive transfer of strategies. Journal of Further and Higher Education 40(6): 746–764.
