
The evaluation of information by sixth-formers: A study in decision-making processes

Written By: Andrew Shenton

Introduction

The evaluation of information is a key skill associated with modern independent learning. Indeed, it is fundamental to Callison’s (2014, p. 23) principle, distilled from the literature, that ‘analysis skills must dominate student use of the internet’, in particular. Even in the age when most information materials took paper form, source appraisal was invariably one of many processes addressed in frameworks for teaching information skills (e.g. Marland, 1981; Paterson, 1981). The emergence of the Web, however, has led to new models that concentrate exclusively on information evaluation. These offer educators a structured breakdown of the areas they should cover, whilst showing students the factors they must consider. Today, especially with the popularity of social media, the number and range of authors have increased enormously, and material no longer has to satisfy the requirements of editors, reviewers, referees and publishers in order to become widely available. Separating high-quality content from material that should be ignored is now a major task in student research.

Faced with a range of models for appraising material, devised by parties as diverse as academics, teachers, librarians and other information professionals, modern students may well feel overwhelmed by their sheer number. And in situations where the individual is allowed completely free choice, deciding which model to adopt when carrying out their work may be intimidating. Early in 2023, I conducted a small-scale research project that explored the factors students bear in mind when determining which model to use. This article outlines the context for the new research, the methods employed, the main results and the wider implications of what arose from the study.

Background

I am employed as an EPQ supervisor in a very successful comprehensive school in the north-east of England. The participating students were sixth-formers aged sixteen or seventeen who had achieved above national levels at GCSE the previous year. As part of their pursuit of the Extended Project Qualification, all were tackling independent learning tasks that culminated in the writing of a 5,000-word document on a subject of their own choice and in the planning and delivery of an oral presentation. They also kept a diary of their research work.

The evaluation of information is a significant element within the EPQ. Ten of the fifty marks available are allotted to the use of resources. These must not only be relevant to the topic; they should also be diverse and of high calibre. At my school, candidates are required to compile a source evaluation table that includes an entry for each item consulted. Students are expected to appraise all the materials they consider for use against the criteria put forward in a recognised framework. Various models are suggested to the candidates as examples, but they decide for themselves which to adopt. The students may even ignore all those highlighted and opt for one that they have discovered entirely independently, without any prompting from staff.

Methods

In order to shed light on the evaluative framework chosen by each individual and their reasons for selecting it, the 11 students allocated to me were asked specifically about these matters during face-to-face, one-to-one tutorials. The meetings took place four months into the EPQ course. By this point, the candidates were used to working with me and had become accustomed to personal tutorials – each had experienced four previous one-to-one sessions of a similar nature. Both the students and I took notes during these exchanges; my notes, together with the text that the candidates had set down in their research diaries, formed the data upon which I drew in the study.

In a classic research methods textbook, Webb et al. (2000) recommend the use of ‘unobtrusive measures’. They are critical of how interviews ‘intrude as a foreign element into the social setting they would describe’ (Webb et al., 2000, p. 1). Whilst it is true that the personal tutorials here depended on the cooperation of the students involved, these conversations were an integral part of the EPQ and would have happened irrespective of whether the research reported in this article had taken place. Consequently, this data collection method, like that of the research diary, may indeed be regarded as unobtrusive.

Findings

Analysis of the data gathered suggested that eight factors determined the likelihood that a particular model for evaluating information would be accepted by an EPQ candidate.

1) Ready availability

Although the names and/or the authors of eight different models were given to the students, they were not provided with Web addresses or any other details of sites where the frameworks could be found. Each of the models chosen by the students was, however, freely available via the internet and featured in a range of documents. For virtually all young people today, the Web, of course, forms the most obvious environment in which to look for information.

2) Profile/degree of exposure

All the models that were applied by at least one student had previously been highlighted in class, either mentioned verbally by the EPQ supervisor or shown on a PowerPoint slide. It may also be significant that one of the two most frequently occurring tools was the one the supervisor had referred to first. Where this model was favoured, the candidates had examined the models in the order in which they had been presented and, once the first was deemed satisfactory, looked no further.

3) Fittingness with existing experiences

Various students indicated that they had adopted a model because they had already applied at least some of its criteria in a previous assignment. This gave the framework a degree of familiarity and, where the candidates had been successful in the earlier endeavour, led to a feeling that it was likely to be effective. Past experience of a more immediate kind was important to a student who had already devised a table in which they would set down details of each source; they then selected the evaluation framework that seemed most congruent with the headings they had determined.

4) Ease of use

This justification covered a range of areas, which students expressed in different ways. Several spoke generally of a model being ‘intuitive’. Some appreciated how, when working with their preferred checklist, it was straightforward to test a source against the factors cited, whilst one candidate noted that a smooth progression from criterion to criterion made it easier for their thinking to move from point to point when evaluating an item.

5) Memorability

By no means all the models suggested to the candidates were identified by particular names. A few were known simply as the work of those who devised them. Most of the frameworks chosen for use by the candidates were, however, represented by acronyms – CRAAP (California State University, 2010), RADAR (Mandalios, 2013) and IF I APPLY (Phillips, 2019). The first generated much amusement and students who adopted any of the three explained how the name rendered memorable both the model itself and the elements within it. CRAAP refers to currency, relevance, authority, accuracy and purpose. In the words of one candidate, ‘As long as I can remember what the letters stand for, I don’t need to look anything up.’

6) Anticipation of own ideas

Students within this category spoke of how the factors stipulated in their preferred model ‘made sense’ or ‘were obvious’. In each instance, the individual claimed that the considerations set down were those they would have applied themselves had they been left to their own devices. Following the chosen model meant that little additional thought on their part was necessary.

7) Use of familiar language

Multiple students were put off by the inclusion within the frameworks of unusual words or specialist academic language, which imposed an extra cognitive burden. Some were discouraged from using one framework by its references to ‘affiliations’ and ‘citations’ (Shenton and Pickard, 2012). In contrast, models whose criteria were presented ‘in plain English’ were praised.

8) Resonance

This was the most intangible of the eight factors and the one that the candidates found most difficult to articulate. Here the character of a particular model chimed with a student’s inclinations. Resonance is perhaps best exemplified by a candidate who felt a particular affinity with one framework because she believed it gave special emphasis to the evaluative factor she deemed most meaningful, namely relevance.

Only one student chose not to adopt an existing framework in its entirety. This individual opted instead to develop their own hybrid, which incorporated what they regarded as the most pertinent factors from different models, as well as giving special attention to the usefulness of the source for the task at hand.

Other Relevant Contextual Factors

Although in previous work lower down the school the students had been encouraged to take a critical attitude to information, none recalled ever being trained in the application of a specific model, beyond those recommended in very precise curricular contexts. Had a more generic tool been advocated then, it seems likely that the candidates would have used it unhesitatingly in their EPQ work, with little thought given to alternatives.

About a month after I had urged the students to select and then rigorously apply a source evaluation framework of their choice, the candidates made a study visit to the library of a local university. Here a model featuring six generic questions (Newcastle University, 2018) was demonstrated in detail. It had not been made known to the group previously. One wonders whether, had the trip taken place earlier, the students would have accepted this structure immediately, simply because it had been introduced at a special event (a day away from school), by authoritative Higher Education staff and in a memorable video.

Implications for the classroom

The results from this study would suggest that a model for evaluating information is most likely to be adopted if:

  • it is freely accessible and readily available
  • it is highlighted by a teacher in a lesson
  • its contents are found to be consistent with previous work undertaken by the student
  • it is thought to be user-friendly
  • it is represented by a memorable acronym
  • it incorporates factors that would be borne in mind by the student without recourse to any prompts
  • it features language and terms known to the user
  • it resonates in some way with the individual.

This should not be viewed as a definitive list of relevant factors, however. Indeed, we might hypothesise that other considerations, such as recommendations from peers, could also come into play. The small-scale nature of this research should also be noted: it would be unwise to assume that the patterns found here will emerge among EPQ students more widely, or among groups of EPQ students in schools different from my own. As indicated above, the 11 EPQ candidates whose thinking was explored were for the most part of high ability, and comparable findings may not necessarily emerge from studies involving a more heterogeneous body of candidates.

Still, teachers who invite students to choose from a range of models for use in their own subjects may reflect on how far the principles listed above apply more generally. Many of the decisions made by the participants in the project would seem to make sense on a logical level, but at this stage it is too great a conceptual leap to assume that similar factors will arise in other situations. More research would be needed to ascertain how far the findings presented here are applicable across a range of contexts and settings.

References