Teachers’ and school leaders’ AI readiness

Anna Lindroos Cermakova, Senior Research Associate, Lancaster University, UK

Generative Artificial Intelligence (GenAI) has been labelled a paradigm shift, the fourth industrial revolution, cognitive automation, the algorithmic age and the great disruptor, depending on the perspective taken. Whatever the label, it forces us to consider the trajectory this technology charts for the future of learning: how will the students and teachers of tomorrow construct their learning and teaching environments and practices with and around AI?

AI in education

While educators are increasingly embracing AI tools, many have never used AI in their professional practice or use it only rarely. It is, therefore, important not to assume that all teachers and schools are prepared and ready. The reasons for this vary, ranging from teachers’ technological readiness and misconceptions about AI, to ethical concerns, immature AI curricula, and a lack of evaluation methods (see Kim et al., 2022; Ng et al., 2023). To support educators effectively in digital AI transformation, it is essential to understand the challenges they face. While AI is expected to alleviate teacher workload, research suggests that the transition period will require substantial support, resources, and flexible timelines (see Luan et al., 2020).

To ensure well-qualified teachers in AI-enhanced classrooms, educators need not only AI-related technological skills (Healy and Blade, 2020) but also a positive leadership attitude and an ethical mindset (Ng et al., 2023), whilst addressing common concerns, such as AI threatening to replace the teaching profession (Selwyn, 2019). It is imperative that human experience remains at the centre of the educational context. The development of AI competencies should therefore enable teachers to critically evaluate AI technologies (Long and Magerko, 2020), and understand potential negative impacts (Seo et al., 2021).

Our current understanding of AIED

Emerging practices in AI in education (AIED) suggest the potential to enable new ways of teaching, learning, and education management. At the same time, AI poses significant risks to students, educators, education systems, and society at large, including threats to human agency, large-scale data privacy violations, and deepening systemic inequalities.

The foundations of GenAI rely on data mining to train systems, and the large-scale data collection required for the development of these tools may threaten personal data privacy. While human control over data transfers was relatively straightforward in previous generations of AI tools, data mining practices ‘behind the design of AI platforms involve actively preying on and exploiting personal data, often without consent’ (Miao and Cukurova, 2024). This underscores the urgency of empowering teachers to understand the ethical issues related to their interactions with AI tools to ensure safe and responsible use.

At its core, what distinguishes AI from other digital technologies is its capacity to mimic human behaviour. This mimicking ability has expanded considerably with GenAI. By facilitating convenient and rapid decision-making, GenAI may replace autonomous decision-making capacity and foster human over-reliance on AI. It is, therefore, essential to emphasize teacher agency and a human-centred mindset (Miao and Cukurova, 2024).

Key considerations for AIED

UNESCO has been highly active in the field of AIED. Its adopted policy guidance (UNESCO, 2023) promotes four core principles: strengthening human capacities and sustainable development; ensuring equitable and inclusive access to and deployment of AI; requiring AI models to be explainable, safe, and harm-free; and ensuring that control and accountability for AI use remain human-centred.

Implementing a human-centred approach requires multi-stakeholder participation. Regulators, AI providers, and institutions must share governance responsibilities before expecting teachers to apply AI-related principles in their profession.

Certain aspects of education and society require targeted protection. For example, the impact of AI on children’s cognitive development remains largely unknown and is under intense scrutiny (see Williamson et al., 2018). Overreliance on AI is frequently cited as a concern, potentially leading to decreased motivation to learn, diminished capacity to retain information, and a decline in memory abilities (Bai et al., 2023).

Two approaches help ensure that distinctively human qualities, such as flexible thinking, social and emotional intelligence, values and ethics, are embedded in AI systems. Human-Centred Artificial Intelligence (HCAI) places human needs, wellbeing, and experiences at the core of AI development to create systems that enhance human capabilities, while Human-AI (HAI) integrates human expertise into AI solutions to promote co-learning and co-evolution. Both approaches also ensure that human perspectives and judgment remain central, preventing malicious misuse.

Key considerations for teachers

Educators’ AIED-specific competencies should encompass knowledge and understanding of AI technologies, as well as how these technologies can be effectively integrated into pedagogical practice (Ng et al., 2023). Furthermore, educators must be able to critically evaluate the tools they use within their specific contexts. In this regard, existing guidance is insufficient, leaving both procurers and educators in an extremely challenging position (Hillman et al., 2024; Lindroos Cermakova et al., 2024).

While several international frameworks focus on teacher digital competencies, the first AI-specific competency framework was developed by UNESCO (Miao and Cukurova, 2024). This framework defines competencies across five dimensions and three progression levels, from ‘acquire’ to ‘deepen’ and ‘create’. The five dimensions include:

  • human-centred mindset
  • ethics of AI
  • AI foundations and applications
  • AI pedagogy
  • AI for professional learning.


Professional frameworks can serve as valuable reference points by outlining competency levels, from basic understanding to advanced critical evaluation and ethical leadership in AI. These frameworks are inclusive across diverse educational contexts, acknowledging the varying levels of digital expertise that educators may possess.

The examples of AI use and specific tools in this article are for context only. They do not imply endorsement or recommendation of any particular tool or approach by the Department for Education or the Chartered College of Teaching, and any views stated are those of the individual. Any use of AI also needs to be carefully planned, and what is appropriate in one setting may not be elsewhere. You should always follow the DfE’s Generative AI in Education policy position and product safety expectations, in addition to aligning any AI use with the DfE’s latest Keeping Children Safe in Education guidance. You can also find teacher and leader toolkits on gov.uk.

