TIM HALLAS, HILLS ROAD SIXTH FORM COLLEGE, UK; ANGLIA RUSKIN UNIVERSITY, UK
Introduction
AI has become a significant topic of conversation in post-16 colleges since ChatGPT reached public consciousness at the end of 2022. It was met with a variety of reactions from individuals and organisations, including intrigue, fear and disinterest, depending on their perspective on the impact of technology on post-16 education.
The initial discourse around AI in general was that it was either ‘good’ or ‘bad’ (Bell, 2023). In her article on the early uses of AI, Bell identifies a range of organisations either extolling the virtues of AI in generating content without the need for people or, conversely, condemning AI’s disregard for the truth. Yet some might argue that truth is irrelevant, because large language models (LLMs) such as ChatGPT manipulate language and do not claim to be a repository of knowledge or information. In this article, I will explore the potential risks and benefits of the use of AI as a classroom tool, discuss potential learning outcomes with regard to both the positives and negatives of AI use, and conclude with some thoughts on the current educational discourse around AI in education.
The risks that AI poses to post-16 education
An assumption that AI ‘knows things’ is potentially dangerous if people remain unaware of this systemic constraint – that LLMs manipulate language rather than hold knowledge. This danger could extend to students and teachers if they do not have appropriate levels of understanding around AI. This perceived threat is exemplified by the knee-jerk reaction from the Joint Council for Qualifications (JCQ), whose initial document on the use of AI in examined and non-examined work included negative words such as ‘malpractice’ and ‘severe sanctions’, with only minimal reference to AI’s potential benefits and uses (JCQ, 2024). The general initial discourse around AI in education was focused on the perceived threat to learning and academic integrity (Watson and Romic, 2024).
One of the biggest risks with the use of AI is the tendency to anthropomorphise it (Ryan, 2020). Ryan states that interactions with AI can lead users to think that they are interacting with a person, which invites the ‘inherent trust’ that we give to people with whom we interact. For example, we may assume that a person is willing to engage with us; however, AI is not a person and has no inherent willingness, trust or even ethics (Ryan, 2020). People can interact with LLMs and communicate in language that simulates personal interaction, but those interactions are entirely generated by a neural network that simulates language, without the core thinking that makes human communication important – the human filter. The human filter is what helps people to understand what is appropriate or accurate (Casas-Roma et al., 2023). Casas-Roma et al. and Ryan both state that users are likely to consider AI to have presence of mind and to trust the information that it provides in the same way as they might trust a fellow human. In reality, however, an LLM will simply use the word that is most likely to follow the previous one. It is the unreliable magic genie: it appears to be generating information, but there is no discernment behind the words, sentences or ideas. Instead, it uses an algorithm and a probability of ‘likeliness’, and while algorithms are unambiguous, knowledge is not.
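For readers who want a concrete sense of what a probability of ‘likeliness’ means here, the short sketch below is a toy illustration only – a hand-written probability table in Python, not a real language model, and the words and numbers in it are invented for the example. It shows the principle in miniature: the word chosen is simply the most probable continuation, with no check on whether it is true.

    # Toy illustration only: a hand-made probability table, not a trained model.
    # Real LLMs learn probabilities over many thousands of tokens, but the
    # principle is the same: the next word is chosen by likelihood, not by
    # any check against the truth.

    # Invented probabilities for the word following "The capital of France is ..."
    next_word_probabilities = {
        "Paris": 0.62,     # probable and, as it happens, correct
        "Lyon": 0.09,      # plausible-sounding but wrong
        "a": 0.05,
        "located": 0.03,
    }

    # Pick the most probable continuation
    most_likely = max(next_word_probabilities, key=next_word_probabilities.get)
    print(most_likely)  # prints "Paris" because it is probable, not because it is known

The point of the sketch is not the code itself but what it exposes: nothing in the selection step distinguishes a true statement from a merely probable one.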
The benefits of AI in post-16 education
The World Economic Forum identified several ways in which AI could enhance education. The potential benefits of using AI in classrooms include personalised learning, refined assessment and evaluation, optimisation of teacher time and the teaching of AI use as a skill in itself (World Economic Forum, 2024). It is perhaps this last point that should guide how we approach AI use in post-16 education.
Watson and Romic (2024) view LLMs as mediating tools that affect people differently depending on their relationship with them. AI can be a tool that enables those who understand and use it effectively to achieve more in thought and action. Conversely, AI can become a barrier for those who cannot use it, by excluding them from the affordances that it offers – for example, those who may not be able to buy a subscription for premium features. This has the potential to widen the digital divide already present in society: AI could potentially become a tool in education most easily accessed by those who can afford it (Bentley et al., 2024; Serafina, 2019).
AI contains inherent flaws and risks but also presents opportunities to support students in their future lives. Watson and Romic (2024) identify technology as a bimodal mediator between thought and action and between thought and society: it can affect an individual’s thoughts, actions, communication and interactions with society. In other words, technology is a tool that can both help users to turn ideas into a tangible outcome (e.g. using music software to turn an idea for a song into a finished track) and influence how we interact with wider society (e.g. social media influences our fashion, culture and interpretation of the news). In this light, a key question is: How can we enhance students’ academic, personal and social development through engaging with AI as a tool to prepare them for the world?
Using AI to develop students’ critical thinking
It was with these tensions in mind – between the unreliable nature of AI-generated content and the potential of AI as a tool to aid academic, personal and social development – that I approached my own use of AI with A-level students.
AI in general, and LLMs in particular, can be a great tool to help students to develop their own sense of criticality and analysis. LLMs are very quick at generating content; by contrast, the manual generation of exemplar content for students to use is a time-consuming and arduous process. LLMs can therefore create content of variable quality rapidly, and students can review this as part of the work of developing their analytical skills. Students can contrast this material with their own work and identify where the AI has created successful or ‘good’ material and where it has generated less helpful information.
One example of this process is the cross-referencing of information that AI presents. LLMs can now access a wide range of documentation, and some have incorporated web-searching facilities, but AI can still make mistakes. Asking a politics student to use AI to identify Hansard references is an interesting task: depending on the specific LLM, the reference could be genuine and drawn from a web search, or fictitious and assembled from the common word patterns in the model’s neural network. Ensuring that students are aware of the limitations of AI-generated material is therefore essential to the appropriate use of the technology.
The use of AI to create material is one of the concerns raised in the JCQ report on AI and plagiarism (JCQ, 2024). However, the quality of AI-generated material is such that students need to hone and develop the responses to suit what they are trying to achieve (at least, this is the case at the moment). This cycle of creation, editing, further creation and synthesis is akin to the process that one might go through in any academic piece of work (writing, editing, rewriting), and the skills required are similar to those used in other research activities, where different sources are combined into one appropriate narrative.
This is now a skill that I am actively promoting with my students. I encourage students to use AI when they think that it is appropriate. This is helping them to develop a sense of criticality towards the responses that they are given and ensuring that they do not use any unverified facts within their work. This level of criticality is a skill that will prepare them well for their professional lives, and it links effectively into Watson and Romic’s (2024) view of technology as a mediating device for academic, personal and social development.
Next steps
The discourse around AI in post-16 education has largely focused on ‘how’ to use AI in teaching. Indeed, conference programmes and current articles and books tend to reflect this focus (e.g. Bowen and Watson, 2024). However, perhaps we need to reframe the conversation to focus more on ‘when’ and ‘why’ to use AI: when does it serve educational purposes, and why use it in that way? We need to include discussion and training for post-16 students so that they develop the skills to approach AI-generated content critically and to judge what is reliable and what is not. This will support their academic, personal and social development and prepare them for the post-education world.
As teachers, we need to acknowledge both the flaws of AI and the potential opportunities, as perhaps it is the tensions between these that offer meaningful ‘whys’. While AI is not a magic genie, it is another tool that can be leveraged appropriately to support our students.
The examples of AI use and specific tools in this article are for context only. They do not imply endorsement or recommendation of any particular tool or approach by the Department for Education or the Chartered College of Teaching, and any views stated are those of the individual. Any use of AI also needs to be carefully planned, and what is appropriate in one setting may not be elsewhere. You should always follow the DfE’s Generative AI in Education policy position and product safety expectations, in addition to aligning any AI use with the DfE’s latest Keeping Children Safe in Education guidance. You can also find teacher and leader toolkits on gov.uk.