
Test obsessed: Improving student perceptions of assessment when generating performance data

Written by: Richard Lancaster, Kellett School, Hong Kong

In the aftermath of the U-turn decision regarding the use of teacher assessment for GCSE and A-level grading in August (BBC News, 2020), it has become increasingly apparent that some students are left feeling anxious about how their final grade will be decided. Some of this unease manifests through the perceived heightened stakes of every in-house assessment and its implications for teacher-assessed grading. Sadly, the culture of learning through mistakes and of growth mindset risks being undone. A further concern is the validity of the data used in generating centre-assessed grades for students.

This article aims to address the following questions:

How do we reduce student anxiety surrounding in-house assessment?

How do we develop valid summative profiles of student attainment?

How do we reduce student anxiety surrounding in-house assessment?

Using end-of-unit tests is a valuable way of gaining data on student performance, but a fixation on grades undermines their other function: providing an opportunity to improve academic attainment through post-exam reflection. Outlined below are four techniques that I have found to be effective ways of lowering student anxiety, as judged through personal observation over time and informal student feedback.

1. No grades on the front of assessment papers

Withholding or omitting grades and final scores on in-house assessments encourages a less grade-focused initial response from students and instead promotes continued learning. It is well documented that further learning diminishes once grades have been communicated (Wiliam, 2011) and that grade fixation detracts from the valuable formative nature of assessments, in which reflection and structured amendments provide opportunities to guide students towards their learning goals.

2. Highlight areas for improvement in assessments and leave formative comments

In the absence of a test score, sections of assessments where marks were not awarded can be highlighted as areas for improvement, directing focus towards improving understanding. This can involve codes or different-coloured highlighters with pre-assigned designations for types of mistake, such as one colour for grammar and another for incorrect content. When comments are used, it is beneficial to include feedback on both strengths and weaknesses (Black and Wiliam, 1998), as this makes them more constructive and less likely to be perceived negatively. A common pitfall is to give feedback that is accurate but not helpful (Wiliam, 2000). An example would be a comment stating that a student needs to ‘evaluate’ their answer – an accurate statement, but ineffective if the teacher has not first taught what evaluation looks like.

3. Use post-assessment reflections

A clear understanding of strengths and targets allows students to form specific learning goals. Tasking students with first identifying the teacher-highlighted areas for improvement allows them to ascertain which parts require further study. The teacher can then use a simple document that links specific sections of the assessment to particular areas of the curriculum or skills, guiding students in knowing what they don’t know (one way such a linking document might be set up is sketched after these four techniques). This helps them synthesise a set of goals, e.g. ‘top three topics to revise’, that will structure their next steps. Giving students this ownership is intended to hand them ‘cognitive responsibility’ (Yang et al., 2020), which increases their attainment as they work towards learning goals.

4. Scaffolded resources for amending practice

When students are ready to begin addressing the learning goals set in their reflections, provide resources that directly address the highlighted areas for development. For example, each question in an assessment can have an assigned set of resources that develop understanding of the concepts used in that question. Once these are made available, students can select the resources that will help them meet their own learning goals. This approach is further enhanced by offering resources that are differentiated in their degree of guidance, such as videos, specific textbook page numbers or worked examples.
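As one possible illustration – not part of the original classroom approach – the linking document from technique 3 and the resource bank from technique 4 could be combined in a short script or spreadsheet export. The Python sketch below uses entirely hypothetical question numbers, topics, marks and resources; it tallies the marks lost per topic and returns a student’s ‘top three topics to revise’ together with matching scaffolded resources.

```python
# A minimal sketch, assuming the question-to-topic links and scaffolded
# resources are kept in a simple script or spreadsheet export. All question
# numbers, topics, marks and resources below are hypothetical examples.

# Which curriculum topic (or skill) each assessment question targets.
QUESTION_TOPICS = {
    1: "Cell structure",
    2: "Cell structure",
    3: "Enzymes",
    4: "Transport across membranes",
    5: "Enzymes",
}

# Maximum marks available for each question.
MAX_MARKS = {1: 2, 2: 4, 3: 6, 4: 5, 5: 3}

# Differentiated resources attached to each topic (videos, textbook pages,
# worked examples) for the scaffolded follow-up described in technique 4.
TOPIC_RESOURCES = {
    "Cell structure": ["Textbook pp. 12-15", "Microscopy video"],
    "Enzymes": ["Worked example: rates of reaction", "Textbook pp. 30-34"],
    "Transport across membranes": ["Osmosis demonstration video"],
}


def revision_plan(student_marks: dict[int, int], n: int = 3) -> dict[str, list[str]]:
    """Rank topics by marks lost and return the top-n topics with their resources."""
    lost_by_topic: dict[str, int] = {}
    for question, topic in QUESTION_TOPICS.items():
        lost = MAX_MARKS[question] - student_marks.get(question, 0)
        lost_by_topic[topic] = lost_by_topic.get(topic, 0) + lost
    priorities = sorted(lost_by_topic, key=lost_by_topic.get, reverse=True)[:n]
    return {topic: TOPIC_RESOURCES.get(topic, []) for topic in priorities}


# One student's marks per question (hypothetical).
for topic, resources in revision_plan({1: 2, 2: 1, 3: 3, 4: 2, 5: 1}).items():
    print(topic, "->", ", ".join(resources))
```

In practice the same mapping can live in a marksheet, with the ranking done by hand; the point is simply that each question carries a pre-agreed topic label and set of follow-up resources.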

The four-stage process outlined above aims to reduce student anxiety surrounding in-house assessments by moving away from a grade-focused approach. This can prevent grades from being perceived as summative markers of potential and instead facilitate continued learning.

How do we develop valid summative profiles of student attainment?

Teacher-assessed grades should show, as Wiliam (2014, p. 22) put it, ‘what the students can do in other situations, at other times and in other contexts’. The combination of assessments conducted at the end of each topic and a synoptic mock exam allows teachers to generate profiles of student attainment. One criticism of the validity of assessment data generated in this way is that some students may rely upon massed practice (cramming). This is detrimental to establishing long-term comprehension (Christodoulou, 2016) and hinders reliable summative inferences from being made. To counter cramming, synoptic questions can be embedded within end-of-topic assessments, in combination with mock exams that draw questions from across the course, so that students need to maintain long-term comprehension of their learning. It is also possible to include assessment material in exams that retests previously highlighted areas of misunderstanding, allowing improvement in comprehension to be measured.

A benefit of using multiple topic tests when acquiring data on student attainment is that they examine more content than can be covered in one or two mock exams or external assessments, which can only sample snapshots of the curriculum within a time-frame appropriate for sitting such tests (Koretz, 2008). Additionally, sitting regular, shorter unit tests can improve the retention of knowledge compared with simply studying material and sitting an assessment later in the course (Roediger and Karpicke, 2006).

Conclusion

I have found that it is possible to ease student anxiety by shifting the student mindset away from perceiving end-of-unit assessments as a definitive measure of academic ability, and instead highlighting their value as steps in the greater learning journey. Including synoptic questions in these end-of-unit assessments, alongside the use of mock exams, allows academic profiles to be built for each student that can then be used to produce valid teacher-assessed grades.

Assessments can be a source of stress for students, but they can also be motivational; with the right level of support, a healthy balance can be found between the two.

References

BBC News (2020) A-levels and GCSEs: U-turn as teacher estimates to be used for exam results. Available at: www.bbc.com/news/uk-53810655 (accessed 2 January 2021).

Black P and Wiliam D (1998) Inside the Black Box: Raising Standards Through Classroom Assessment. London: King’s College.

Christodoulou D (2016) Making Good Progress? Oxford: Oxford University Press.

Koretz D (2008) Measuring Up: What Educational Testing Really Tells Us. Cambridge, Massachusetts: Harvard University Press.

Roediger H and Karpicke J (2006) The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science 1(3): 181–210.

Wiliam D (2000) Integrating summative and formative functions of assessment. In: International Congress on Mathematics Education, Makuhari, Tokyo, August 2000.

Wiliam D (2011) Embedded Formative Assessment. Bloomington: Solution Tree Press.

Wiliam D (2014) Principled assessment design. SSAT. Available at: https://webcontent.ssatuk.co.uk/wp-content/uploads/2013/09/RS8-Principled-assessment-design-chapter-one.pdf (accessed 26 March 2021).

Yang Y, van Aalst J and Chan CKK (2020) Dynamics of reflective assessment and knowledge building for academically low-achieving students. American Educational Research Journal 57(3): 1241–1289.
