Unmasking the machine: A critical reflection on evaluating AI-generated content in a pedagogical setting

By James Rawbone, Head of Additional Educational Needs, St Lawrence College, UK

Pontius Pilate once asked, ‘What is truth?’ An answer to that question is beyond the remit of this article; as Benjamin Freud (2025) reminds us, however, teachers need to be confident that the material we present to our students is trustworthy and reliable. How much can we trust content generated by artificial intelligence (AI)?

A couple of years ago I asked ChatGPT to create a scheme of work for British history from 1066–1500. To my surprise, it included ‘The Glorious Revolution’, which did not take place until 1688. No system is perfect, and hallucinations, or mistakes, are a feature of AI-generated content. There is, however, a deeper concern: large language models (LLMs) are trained by scraping the internet, and so they replicate much of the confused thinking found there, with its inherent biases, logical fallacies and outright prejudices. As an e

