DR FIONA AUBREY-SMITH, FOUNDING DIRECTOR, ONE LIFE LEARNING, UK
Educational organisations are increasingly deciding who will take responsibility for leadership around AI (artificial intelligence) in their context – with new roles and remits evolving to lead this changing landscape.
Yet each of those sharing their expertise or leadership is doing so through a particular lens, which brings with it specific values and beliefs – a pivotal influence that shapes what is considered and prioritised (Aubrey-Smith and Twining, 2024). For example, organisations that are taking high-level strategic approaches, considering policy, data architecture and infrastructure, privacy, security and training, tend to be larger organisations with significant, often centralised, staffing capacity and specialist expertise to lead on these matters (MAT AI, 2024). It is notable that colleagues leading on AI in these organisations (whether they are multi-academy trusts, suppliers/partners or universities) tend to be from operational backgrounds or educational leaders with science, technology or mathematical specialist origins – generally associated with traditional behaviourist and individual constructivist pedagogical belief systems (Tondeur et al., 2017; Becker and Riel, 1999).
Contrasting with these organisations are those that take what might be described as an informal action research approach – usually setting in place policies for guidance, providing training and establishing responsive working groups that surface issues, seek out solutions and engage in very agile multi-stakeholder communications (MAT AI, 2024). These tend to be mid-sized and smaller organisations, with AI leadership delivered by individuals with people-centred perspectives. Colleagues leading on AI in these organisations tend to be from classroom practitioner specialist backgrounds and are often aligned more with social constructivist pedagogical beliefs (MAT AI, 2024).
Belief systems – particularly in research concerned with digital technologies – are often discussed only at surface level, with attention diverted onto strategies, methods and approaches rather than the values and beliefs underpinning them (González-Sanmamed et al., 2017; Hurlburt and Heavey, 2006). This can create a narrative wherein colleagues engage in discussion about what is perceived to be a shared understanding of a strategy's purpose, yet hold very different deeper beliefs about what it is actually intended to achieve and why that may be important.
To illustrate how significant this variance can be, workshops between September 2023 and September 2024 invited teachers and leaders (n = 2,931) across a range of educational organisations to take part in a sequence of activities focused on surfacing espoused pedagogical beliefs (based on activities taken from Aubrey-Smith and Twining, 2024). Within this group, a number of questions were asked in order to ascertain with which of the four main pedagogical belief systems the person most closely aligned, alongside some supplementary questions to establish whether the person considered themselves to be an AI enthusiast or expert.
As Figure 1 shows, 42 per cent of AI enthusiasts align with traditional behaviourist views, whereas behaviourist beliefs are shared by less than 10 per cent of the other respondents. Conversely, around two-thirds of the wider education sector align with socially oriented belief systems, compared with only a third of AI enthusiasts. This significant disparity between the underpinning values and beliefs of those leading on AI and those of the wider sector remains striking when the same data is segmented in other ways, such as by phase, role, experience and socioeconomic context.
Figure 1: The alignment between AI enthusiasts and the wider education sector with the four main pedagogical belief systems
It is important to note that human value and belief systems are rarely surfaced in a meaningful way by self-reporting alone, or through AI- (or computer-)aided analysis tools. This is because of a key influence that Hodges (2015) refers to as dialogic undertone, whereby the meaning of a word or phrase is dependent on the social context of its use and a perception of shared meaning. Simple words and phrases can be used to signal belief systems that are seen as socially acceptable but are not necessarily implicitly believed by the person communicating them.
In the context of discussions around AI, we must be mindful of this. Conversations around ethics, governance, equity and data will mean different things to different people. A behaviourist pedagogical lens on these issues is likely to be concerned with policy adherence and hierarchical operational management, whereas a socially oriented pedagogical lens on these same issues is likely to focus more on discussions where each stakeholder demonstrates shared ownership over decision-making and consequent actions.
In a study carried out in the autumn of 2024, one group of Key Stage 2 children spoke about their school asking parents for permission to use their photographs on the school website each year, and their parents making that decision on their behalf. The conversation then turned to how AI trawls through online images, using this data to feed into tools and apps that can ‘age’ someone, in order to predict what they might look like in future years. Such tools, already in widespread use, depend on photographs of real people to train the underpinning data models.
The issue being raised by these young people was that their photographs were being made public without their individual consent. Due to their age, their parents were giving the school generic consent for the use of these images, and the school was then deciding when and where the images were posted online. Yet it was the children’s identities that could be affected by these representations well into their adult lives. The students saw this as disconnected and unjust.
It is perhaps too early to know how much of an issue these potential concerns will be over the course of these children’s lives. Much depends on wider ethical and governance decisions made by technology companies and legislators. However, as leaders, what we can do is to provide sufficient awareness and training so that children, parents and staff all feel equipped to make meaningful and informed decisions.
In relation to the pedagogical beliefs outlined previously, a leader holding a behaviourist lens on this issue will likely have a very different view to a leader holding a sociocultural lens.
Conclusion
The specialism and background of the person leading on AI conversations in an educational organisation directly correlates with the lens through which AI is viewed by that person and, ultimately, their organisation. Drawing on Bourdieu’s ideas of capital and habitus, the professional and personal background of the person taking the lead role in how the organisation thinks about and engages with AI creates a set of parameters around what that person is likely to tune into and engage with (Bourdieu, 1977). This creates priorities and blind spots but, more importantly, it also creates an invisible framework within each person’s everyday implicit communications and decision-making. A business leader (or educational leader with a secondary STEM – science, technology, engineering and maths – background) is perhaps more likely to frame AI conversations around policy and operational implementation, whereas an educational leader with an inclusion background is more likely to frame conversations around equity, wellbeing and critical literacy skills. Both are important, and both may be under discussion, but personal bias and prioritisation are likely to affect which is given greater consideration.
The examples of AI use and specific tools in this article are for context only. They do not imply endorsement or recommendation of any particular tool or approach by the Department for Education or the Chartered College of Teaching, and any views stated are those of the individual. Any use of AI also needs to be carefully planned, and what is appropriate in one setting may not be elsewhere. You should always follow the DfE’s Generative AI in Education policy position and product safety expectations, in addition to aligning any AI use with the DfE’s latest Keeping Children Safe in Education guidance. You can also find teacher and leader toolkits on gov.uk.