A learning curve? A landscape review of AI and education in the UK

Written by: Renate Samson, Project Lead (Data and Digital Society), Ada Lovelace Institute, UK

In January 2025, the Nuffield Foundation and Ada Lovelace Institute published ‘A learning curve? A landscape review of AI and education in the UK’ (Samson and Pothong, 2025). The paper seeks to inform conversations around the use of EdTech and AI in UK schools. Our aims were to build a common understanding of AI and to provide an overview of the existing tools being used in many UK schools. Drawing on existing and emerging evidence, we also pose questions for the further research that is needed to ensure that AI works for everyone in education. This article introduces some of the key areas explored in the paper.

Defining AI

AI is often used as shorthand to describe a spectrum of concepts and components that enable or support very different functions. While there are many different types of AI systems and techniques, they have a common set of components.

Narrow AI systems are designed for a particular task and, despite being trained on vast quantities of task-specific data, they are limited in scope and can only undertake the task that they have been trained to do.

Recent advances in AI research have created more powerful kinds of AI systems – referred to as general-purpose AI – capable of achieving many tasks for which they may not have been explicitly or exclusively trained. Beyond their power and speed, general-purpose AI systems can also be used to develop applications for many different tasks, purposes and domains.

The generative AI (GenAI) systems that are the topic of conversation in relation to education are just one kind of system that can be built on top of general-purpose AI. When we use ChatGPT, Microsoft’s Copilot, DALL-E or any of the other GenAI systems with which teachers and pupils are engaging, we are interacting with a chat interface to a general-purpose AI model.

It is likely that we will see AI in education extend beyond the generative capabilities of GenAI to the broader, more diverse capabilities of general-purpose AI. Acknowledging this, and ensuring that general-purpose AI is a focus of the AI conversation, will be critical if policymakers and educationalists are to keep abreast of the breadth of AI.

Similarly, it will be vital that the issues that AI can bring are acknowledged, understood and addressed before widespread adoption of AI in education takes place – for example, issues relating to data privacy and security, or the challenge of understanding the data on which an AI system has been trained.

The lack of transparency that currently exists around the size, characteristics, encoded bias and lack of diversity of training data (Bender et al., 2021) can leave users susceptible to risks and harms. Furthermore, if data about a student is fed into or used to train a system, there is a risk that some or all of this personal data could appear in other outputs.

Moreover, there are issues that are unique to general-purpose AI. Inaccuracy is a primary challenge, and one that may never be fully resolved. Other challenges relate to the impact on individual autonomy, agency and creativity, and on the development of relationships. Ensuring that AI supports teacher–student relationships, rather than replacing or undermining them, will be critical.

Oversight and evaluation 

While schools have duties as data controllers, the regulation and legislation relating to AI have not kept pace with the innovation. This means that there is currently a lack of clarity about school leaders’ duties when procuring and using new general-purpose AI or AI EdTech products. Current support for schools in procuring EdTech covers only administration technology, leaving a significant gap in support for decision-making around the use of EdTech products in teaching and learning.

Through the Everything ICT website (www.everythingict.org/about), the Department for Education (DfE) offers assistance to schools looking to procure technology to support their statutory duties. However, it is unclear which organisations or bodies are responsible for evaluating these technologies for efficacy, accuracy or effectiveness, or how rigorous any evaluation is. The lack of expert oversight and independent guidance also leaves schools overly reliant on marketing materials and hype, rather than on support for procuring and using EdTech that is fit for purpose and proven to be effective.

The oversight needed to ensure that general-purpose AI products, EdTech and AI EdTech are appropriate, necessary and fit for purpose is multifaceted, and should cover both the technology and the impact of its use. In the case of EdTech, it also needs to include evaluating the pedagogy on which the technology is based, and any improvements in pedagogical practice that a product claims to bring about.

We propose a number of areas in which oversight and evaluation should cover both the technology and the pedagogy. These align with the principles of safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Furthermore, we suggest that a holistic look at the intersection between the technology, the pedagogy and the societal outcomes is needed. This will require collaboration across a range of stakeholders, including school leaders, technologists, regulators, government, teachers, data protection experts and academics.

What’s next?

In summary, if general-purpose AI is to become a feature of school education, then the barriers to understanding the impact of AI, and the data and models used in some EdTech, need to be tackled.

The evidence base demonstrating the technical and pedagogical efficacy and the social impact of using AI in EdTech – whether for general learning and teaching, for special educational needs and disabilities, or for administration – remains limited and requires development.

Furthermore, a greater focus on evaluation and oversight is needed. To support transparency, improved access to EdTech companies’ products and training data would enable evaluation of the technologies, while opportunities for further research, including longitudinal studies and randomised controlled trials, would also be beneficial.

The recent announcements made by the DfE in relation to the evaluation of EdTech and the development of teachers’ AI literacy, of which the Chartered College of Teaching is a part, are a welcome start in acknowledging and addressing the gaps in knowledge, evidence, oversight and understanding.

If we are to fully realise the transformational opportunities of AI and AI EdTech, which policymakers and innovators believe will benefit teachers and pupils alike, then further research and holistic working practices will be needed. We hope that policy, business, academia, and education practice and leadership will come together over the coming months and years to interrogate the technology and its appropriateness to the sector and to pupils’ learning outcomes.

The examples of AI use and specific tools in this article are for context only. They do not imply endorsement or recommendation of any particular tool or approach by the Department for Education or the Chartered College of Teaching, and any views stated are those of the individual. Any use of AI also needs to be carefully planned: what is appropriate in one setting may not be elsewhere. You should always follow the DfE’s generative AI in education policy position and product safety expectations, in addition to aligning any AI use with the DfE’s latest Keeping Children Safe in Education guidance. You can also find teacher and leader toolkits on gov.uk.
