Draft–redraft–reframe: Using ChatGPT to build student ownership of writing

JUSTIN JEFFREY, ENGLISH TEACHER, CENTRE FOR BRITISH TEACHERS (CFBT), BRUNEI

Introduction

Asking students to respond to and incorporate feedback is central to best practice in developing writing skills. Yet one of the oldest frustrations in writing instruction is the limited impact feedback can have. Despite the time teachers invest in providing it, students often receive it too late or struggle to use it critically. As Muncie (2000) points out, teachers’ summative feedback may be of little use, as it forms part of a repetitive cycle of ‘compositions assigned by the teacher, written by the learners, handed in for marking by the teacher, handed back to learners, and promptly forgotten by them as they start the next assignment’.

Moreover, the perception of the teacher as an authority figure also limits the usefulness of formative feedback: comments offered by way of collaboration are seldom received as such, and students therefore do not engage with them critically or decide for themselves what to do with them (Muncie, 2000). Furthermore, as Hattie (2009) notes, for feedback to be effective it must be timely, specific and actionable. Such goals are hard to meet when marking large volumes of work.

This study explores a way to shift the feedback burden by introducing an AI (artificial intelligence) tool as an intermediary. ChatGPT can be prompted to provide structured feedback that helps students revise their writing more thoughtfully before receiving summative feedback from the teacher. Summative feedback can then focus more on acknowledging a student’s editing decisions and, perhaps, signposting a way forward beyond the piece of writing concerned.

Context and rationale

Edtech has a mixed reputation. Concerns about screen overuse and student distraction have led countries like Sweden to scale back digital tools (Swedish Ministry of Education, 2024). Researchers like Haidt (2024) link anxiety and reduced attention spans to smartphone use.

However, such tools are not inherently problematic in themselves. In this intervention, ChatGPT provided fast, focused feedback in a way that deepened student engagement rather than distracting from it.

The approach outlined in this paper draws on several overlapping strands of research:

  • Feedback literacy: For feedback to drive learning, students must understand what quality looks like, evaluate their own work, and act on input (Sadler, 1989; Carless & Boud, 2018).
  • Cognitive load theory: Feedback that tries to address too much at once can overwhelm rather than support learners. Focused, structured feedback, such as the ‘three-plus-two’ model used here (explained in detail below), helps students act without overload (Sweller, 1988).
  • The noticing hypothesis: Schmidt (1990) argued that conscious attention to language forms is key to acquisition. Highlighting and comparing errors with corrections helps students internalise changes (Bitchener & Ferris, 2012; Ellis, 2012).
  • Automated writing evaluation (AWE): While early AWE tools such as Grammarly and Criterion focused mostly on grammar, newer tools like ChatGPT offer more sophisticated input if carefully prompted. Used well, AI can offer supportive, specific feedback without diminishing student voice (Stevenson & Phakiti, 2019; Wilson & Roscoe, 2020).

 

These principles shaped both the design of the revision cycle and the prompt that powered it. They also reflect a broader movement in AWE and AI scholarship in education that advocates for human-centred, pedagogically sound integration of technology. One study (Link et al., 2022) suggests that while AWE can support long-term accuracy in L2 writing, its integration does not necessarily reduce teachers’ workload regarding higher-level feedback; moreover, students may engage more effectively with teacher-provided lower-level feedback than with automated feedback. These findings highlight the importance of strategically integrating AWE into L2 writing instruction to complement, rather than replace, teacher feedback.

As regards AI in education, its use should augment human teaching and support formative processes, rather than automate judgement or constrain learner agency (Holmes et al., 2023). Similarly, Luckin et al. (2016) propose AI as a ‘learning partner’ capable of delivering adaptive support that encourages metacognitive reflection, personalisation and student autonomy. These perspectives reinforce the approach taken here, in which ChatGPT complements the teacher’s role and is used to mediate rather than dominate feedback, with students positioned as active agents in the revision process.

Methodology

This project took place at Maktab Sains Paduka Seri Begawan Sultan (MSPSBS), a selective secondary school in Brunei, a small country in Southeast Asia. English plays a key role in Brunei’s education system as the main medium of instruction for most subjects from upper primary onwards. While Malay is the national language and used in early education and religious subjects, English is crucial for academic progression, especially in higher education.

The study was conducted across four AS-level General Paper classes, involving a total of 86 students. The process was repeated for two different pieces of work using a five-stage AI-supported revision cycle, which guided students through drafting, targeted feedback via ChatGPT, reflection, contrastive noticing and redrafting. To supplement classroom observation with student perspectives, an 11-question survey was administered following the intervention to collect both quantitative and qualitative data, using a combination of Likert-scale, multiple-choice and open-ended items. Questions focused on students’ perceptions of the AI-generated feedback process and its impact on their sense of writing voice and revision confidence. The survey was completed anonymously, and participation was voluntary. Of the 86 students, 67 responded to this post-intervention questionnaire. Results are discussed below.

Prompt design

Prompts were key to success and were designed to respond to the shortcomings of feedback outlined in the literature discussed above. Prompts can anticipate student weaknesses based on knowledge of previous performance or draw on common errors highlighted in examiner reports. In effect, they use the summative red ink of an authority figure predictively, transforming it into the helpful comments of a collaborator. Furthermore, limiting feedback to three mechanical and two developmental points ensured manageability, avoiding cognitive overload.
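
To illustrate, a prompt along the following lines would fit the three-plus-two structure described above (this is indicative wording only, not the exact prompt used in the study):

    ‘You are a supportive writing assistant for an AS-level student. Read the paragraph below. First, identify, explain and correct the three most serious mechanical errors (grammar, punctuation or spelling). Then suggest two developmental improvements, for example to the clarity or nuance of the argument. Finally, provide a rewrite that corrects these issues while staying as faithful as possible to the student’s original wording and voice. [Student paragraph pasted here.]’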

A five-stage revision cycle

The revision cycle comprised the following five stages.

1. Drafting in class

The intervention focused on developing writing at paragraph level, as this ensured that a draft could be produced and revised within a lesson. Students often struggle to write engaging introductions and nuanced arguments, so these were the areas of focus. After preparatory activities designed to ensure students were ready to write, they wrote a paragraph under timed conditions. Writing under exam-like constraints ensured the draft reflected their authentic ability.

2. AI feedback via teacher prompt

Students pasted their paragraph into ChatGPT using a prompt requesting:

  • Identification, explanation and correction of the three most serious mechanical issues
  • Two developmental improvements
  • A faithful rewrite

This stage reduces cognitive load by limiting feedback to manageable amounts, aligning with Sweller’s (1988) cognitive load theory.

3. Student reflection

Students reviewed the AI’s feedback and revised version alongside their original draft.

4. Colour-coded noticing

Students performed contrastive noticing, highlighting mechanical errors in red (original draft) and the corresponding corrections in green (revised version). Before producing a final draft, they were encouraged to question developmental suggestions rather than merely adopt them, deciding whether and how to incorporate them. The contrastive noticing task operationalises Schmidt’s (1990) noticing hypothesis, drawing students’ attention to specific language forms and encouraging conscious reflection.

5. Final revision and summative feedback

Revised drafts, now substantially improved, were submitted to teachers. This enabled summative feedback to become more positive, often endorsing a student’s decisions and revisions, or providing an opportunity to respond to other issues, such as student questions about metalanguage used by the AI. It also gave the teacher a means to encourage students to anticipate the highlighted issues in subsequent writing.

Findings

The model was trialled in four classes with a total of 86 students, 67 of whom responded to the survey. Survey analysis revealed strong overall support for the AI-supported formative feedback process.

Key findings include:

  • 85 per cent found the AI feedback either ‘very’ or ‘enormously’ helpful
  • 82 per cent reported improved error recognition
  • 80 per cent noticed details they typically missed
  • only 10 per cent felt that the feedback reduced ownership.

 

A majority (58.2 per cent) preferred a mix of AI and teacher feedback. Most found the three-plus-two feedback model manageable (86.4 per cent) and easy to apply (86.6 per cent). Encouragingly, 82.1 per cent indicated they would independently use their teacher’s AI prompt in future writing.

Open-ended responses were largely positive. Students valued the clarity and focus of the feedback, and some cited improved vocabulary and structural awareness. Several highlighted the immediacy of AI feedback, calling it ‘instantaneous’ and ‘easy to understand’.

A minority requested additional support. Only 28.4 per cent felt the AI helped them identify strengths. One student wanted more feedback on ‘flow and style’; another found the process ‘great but complex.’ These responses informed the practical recommendations below.

Discussion

This model combines the strengths of peer and self-assessment with the consistency of AI. Unlike peer feedback, which can be variable, ChatGPT’s output is non-judgemental and offers immediate responses to the kinds of errors students typically make. Positioning the system as an assistant rather than an authority figure makes students more open to revision and more confident in questioning suggestions. Most importantly, students became more active participants in the feedback cycle, interpreting, editing and reflecting rather than passively receiving teacher corrections. This aligns with research (Sadler, 1989; Muncie, 2000) showing that, for feedback to be effective, students must be able to evaluate it for themselves, and that this is best done mid-draft.

Limitations

Several limitations emerged:

  • Occasional loss of personal voice: ChatGPT sometimes produced a revision that differed significantly from the original input. This was especially the case for weaker, more error-dense writing.
  • Poor prompts led to weak feedback. Differentiated prompts may be needed for struggling or advanced writers.
  • Feedback metalanguage such as ‘parallel structure’ confused some students. Teachers should pre-teach key terminology or address it post hoc.
  • The noticing task through colour-coding requires explicit modelling and student diligence.
  • While effective at paragraph level, more research is needed on applying this model to full essays or other genres.

 

Despite these limitations, classroom experience and survey data suggest that the model can foster greater engagement, reflection and ownership: skills central to writing development.

Practical recommendations

1. Tailor prompts for clarity
  • Use plain language and limit jargon.
  • Scaffold prompts by ability level.
  • Model how to interpret and act on feedback.
2. Include positive feedback
  • Prompt the AI to identify one effective sentence or phrase and explain why it works (see the example wording after this list).
3. Expand prompt scope thoughtfully
  • For advanced writers, prompts could include style, flow or cohesion.
  • Use marking schemes or feedback from examiner reports to inform prompts.
4. Prepare students to interpret feedback
  • Teach key terminology.
  • Encourage peer discussion of AI feedback to build feedback literacy.
5. Promote reflective use
  • Ask follow-up questions like ‘Do you agree with this?’ or ‘Why might this revision be effective?’
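
As an illustration, the three-plus-two prompt shown earlier might be extended with wording such as the following (again, indicative phrasing rather than the exact wording trialled in this study):

    ‘Also identify one sentence or phrase that works particularly well and briefly explain why it is effective. Where relevant, your two developmental suggestions may address style, flow or cohesion. Keep all explanations in plain language, and avoid technical grammar terms unless you define them.’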

 

By crafting prompts and following this five-stage cycle with these considerations in mind, teachers can maximise the benefits of AI-supported formative feedback and ensure that students remain at the centre of the revision process.

Conclusion

This study suggests that when guided by intentional prompts, AI can deliver formative feedback that is immediate, actionable and empowering. Final drafts showed improvement, demonstrating that students read and applied AI feedback. Stronger students made more selective use of developmental suggestions, indicating critical engagement. The revision process became more student-driven, allowing teachers to refine and, in some cases, reduce their feedback so that it could inform future writing rather than be filed away and forgotten.

While this intervention took place in a selective setting with strong English proficiency, its core strategies of structured prompts, reflective noticing and scaffolded revision are adaptable to a wide range of educational contexts, including multilingual or mixed-ability classrooms.

Future research could explore how this model functions across different year levels and writing abilities. A longer-term study might assess the quality of AI feedback and whether students continue to apply AI-supported feedback strategies independently. It could also assess the intervention’s effect on the type and volume of teacher feedback. This would deepen our understanding of AI’s long-term pedagogical value and its role in supporting student ownership of writing.

The examples of AI use and specific tools in this article are for context only. They do not imply endorsement or recommendation of any particular tool or approach by the Department for Education or the Chartered College of Teaching, and any views stated are those of the individual. Any use of AI also needs to be carefully planned, and what is appropriate in one setting may not be elsewhere. You should always follow the DfE’s Generative AI in Education policy position and product safety expectations in addition to aligning any AI use with the DfE’s latest Keeping Children Safe in Education guidance. You can also find teacher and leader toolkits on gov.uk.
