Generative AI and ‘kind’ feedback: Early insights from neurodivergent learners

DR LUCY CATON, SENIOR LECTURER AND LEAD FOR THE CENTRE FOR AI IN EDUCATION, THE UNIVERSITY OF GREATER MANCHESTER, UK
HELEN BRADFORD-KEEGAN, FOUNDATION HEAD OF EDUCATIONAL RESEARCH AND INNOVATION, BOLTON SCHOOL FOUNDATION, UK

This article presents preliminary findings from an ongoing study exploring the use of generative artificial intelligence (GenAI) to help reimagine formative feedback for children with Autism Spectrum Disorder (ASD). The research involves ASD-registered pupils in mainstream primary and secondary schools in the North West of England. Full findings will be published in Autumn 2025. Here we present data drawn from one of the secondary school partners, where two Key Stage 3 students participated in a pilot intervention using ChatGPT, a commercially available GenAI product. While the school had no immediate plans to adopt ChatGPT, it expressed interest in the broader potential of GenAI. Accordingly, the study does not evaluate a specific platform but instead explores the wider pedagogical applications of GenAI. Prior to the intervention, privacy measures were explained to students and, where available, the new ‘temporary chat’ function was enabled.

The research team included mainstream teachers from participating schools, a specialist teacher and a specialist teaching assistant from the Woodbridge SEND Service, and the lead researcher, who is a senior lecturer and Head of the Centre for AI in Education at the University of Greater Manchester. The team felt it timely to share some initial reflections with the wider teaching community, to contribute to early dialogue around GenAI and its capacity to democratise learning opportunities for neurodivergent learners, particularly those in mainstream settings.

The child’s voice was central to the process, with a shared commitment to recognising both learners and educators as key stakeholders in shaping practice with GenAI technologies (Luckin, 2024). Data from these stakeholders – including preliminary student interviews, teacher observations, and feedback from both students and staff – were thematically coded (Braun and Clarke, 2014), alongside transcripts from collaborative discussions between teachers, the lead researcher and Woodbridge SEND Service specialists. Emerging cross-disciplinary findings suggest that GenAI can enhance personalised feedback when informed by professional and experiential pedagogical insight.

This particular phase took place in the spring term of 2025 in two separate creative writing sessions – descriptive writing and poetry editing. While situated in English, the study is not subject-specific and aims to inform cross-curricular teaching practice.

Initial one-to-one interviews were conducted with each child; in this instance, the lead teacher was also on the research team. Together they explored learning preferences, awareness of GenAI, and strategies for effective feedback. The AI tool was flexibly integrated into the school day, guided by teacher judgement and parental input to support engagement and minimise disruption. A multimodal approach to data collection (Lomax, 2020) was offered to the children to capture their experiences, including observed interactions, voice recordings and written reflections. This choice enabled children to share their experiences in ways that felt comfortable and authentic to them.

Student A and Student B each expressed a clear preference for feedback that was immediate, accessible and integrated within the context of their work – such as handwritten comments in exercise books – rather than delivered externally via planners or digital platforms. Both noted that feedback provided outside the core learning space, whether digital or physical, was often forgotten or overlooked because it was out of sight and therefore out of mind. This flagged the importance of feedback visibility and immediacy in supporting learner engagement with new GenAI tools. Student B, for example, emphasised the need for GenAI feedback to be clear, specific, and directly linked to the task at hand, stating:

I like it when all the feedback is in one place, accessible, organised and clear. It has to be connected to what I just did.

The student further stressed the importance of constructive, respectful critique:

I don’t mind when it says what I need to fix, as long as it’s helpful, not just saying it’s wrong. It needs to tell me how to improve it.

These sentiments underscore the value students place on feedback that is both actionable and emotionally considerate, highlighting the need for careful design of GenAI interventions that balance clarity, personalisation, and trust.

One of the significant themes to emerge was the children’s desire for emotionally considerate feedback produced by GenAI. This first came to light through teacher observations and was subsequently discussed by the wider team at the research away day. The potential of GenAI to mediate emotionally sensitive language in support of pedagogical practices, with possible impacts on enhancing student motivation, self-efficacy and engagement, became a topic for further investigation.

For example, Student B had voiced a concern that ChatGPT should offer ‘kind feedback’, so the teacher engineered a starter prompt: ‘Please provide feedback that is kind’. The teacher saw this as a significant insight into the emotional sensitivity the student expected in how feedback was delivered. The request signals an emerging behaviour that anthropomorphises the GenAI: the child actively positions it not simply as a tool for improvement, but as a relational partner expected to demonstrate human-like empathy and care.

This observation raises important pedagogical and ethical considerations. It challenges the assumption that learners view GenAI as a neutral tool, revealing instead a tendency to quickly form relational expectations typically reserved for human teachers. This humanisation risks blurring the line between tool and teacher, potentially leading students to overestimate the AI’s capacity for empathy and understanding – qualities it does not possess. Selwyn (2022) warns of the potential de-skilling of educators, noting that over-reliance on technology may fragment teaching roles and reduce professional agency. The research team recommends that policy and guidance incorporate clear safeguards to help students understand both the capabilities and limitations of GenAI. Its responsible integration should be firmly grounded in teacher presence and informed by prior knowledge of the child.

Despite these cautions, the children’s desire to prompt GenAI for ‘kind feedback’ has instigated further exploration with the research project’s primary sector partners. This preliminary data flags important design implications for educational technology developers and for institutional policy frameworks (Policy Connect, 2023). For example, the phrasing, tone and clarity of multimodal feedback produced by GenAI can shape students’ emotional experience of learning. While AI can deliver fast, detailed and judgement-free comments, the absence of human sensitivity – acknowledged by Student A before the intervention, who noted that ‘it tells you what to fix, but it doesn’t care if it upsets you. A teacher would say it in a nicer way’ – points to a gap in the relational and affective dimensions of learning. This highlights that while AI can support cognitive engagement, it cannot fully replace the motivational and emotional scaffolding that human educators provide.

Building on students’ desire for emotionally considerate, human-like feedback, the use of invitational language by GenAI (e.g. ‘you might like to try…’) positioned the tool as supportive and non-directive. This open-ended phrasing mirrors effective pedagogical strategies, promoting learner agency and reducing the emotional pressure often linked to critique and judgement. Student B valued the open-ended nature of the AI’s suggestions, noting that they provided low-risk opportunities for exploration without fear of judgement. Temporal markers within prompts, such as ‘now you might consider…’ or ‘next time you might like to…’, supported engagement by signalling progression and providing a structured, time-bound framework – particularly beneficial for learners with ASD, who often prefer clearly sequenced learning experiences.
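
For readers who wish to experiment with this approach, the sketch below illustrates one way a teacher-authored ‘kind feedback’ prompt, combining invitational phrasing and temporal markers, might be set up programmatically. It is a minimal illustration only, assuming the OpenAI Python client; the model name, prompt wording and sample pupil text are our own illustrative choices rather than materials from the study, in which prompts were typed directly into the ChatGPT interface.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    # Illustrative system prompt encoding the 'kind feedback' request:
    # invitational phrasing and temporal markers, as described above.
    system_prompt = (
        "You are giving formative feedback on a pupil's creative writing. "
        "Be kind, specific and encouraging. Phrase suggestions invitationally "
        "('you might like to try...') and use temporal markers "
        "('next time you could...'). Always explain how to improve, "
        "never simply what is wrong."
    )

    pupil_writing = "The fog crept over the hills like a tired grey cat."  # sample text

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": pupil_writing},
        ],
    )
    print(response.choices[0].message.content)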

As the study progresses, further exploration into how learners seek and shape emotionally considerate feedback may offer new insights for GenAI design, institutional policy and, most importantly, teaching practice. Equally, we understand that machines do not have the capacity to be considerate in an ethical or sentient manner. But this desire raises compelling questions about trust, human agency and the ability to simulate emotional consideration through language that reflects empathy, encouragement and recognition.

The examples of AI use and specific tools in this article are for context only. They do not imply endorsement or recommendation of any particular tool or approach by the Department for Education or the Chartered College of Teaching, and any views stated are those of the individual. Any use of AI also needs to be carefully planned, and what is appropriate in one setting may not be elsewhere. You should always follow the DfE’s Generative AI in Education policy position and product safety expectations, in addition to aligning any AI use with the DfE’s latest Keeping Children Safe in Education guidance. You can also find teacher and leader toolkits on gov.uk.
