Examining Korean EFL College Students’ Experiences and Perceptions of Using ChatGPT as a Writing Revision Tool

Article information

J Eng Teach Movie Media. 2023;24(4):15-27
Publication date (electronic): 2023 November 30
doi: https://doi.org/10.16875/stem.2023.24.4.15
1Associate Professor, Baird Liberal Arts College, Soongsil University, 369 Sangdo-ro, Dongjak-gu, Seoul, 06978, Korea
2Associate Professor, Liberal Arts College, Dankook University, 152 Jukjeon-ro, Suji-gu, Yongin-si, Gyeonggi-do, 16890, Korea
Corresponding author, Associate Professor, Liberal Arts College, Dankook University, 152 Jukjeon-ro, Suji-gu, Yongin-si, Gyeonggi-do, 16890, Korea (E-mail: youngsangcho@dankook.ac.kr)
Received 2023 October 15; Revised 2023 November 18; Accepted 2023 November 25.

Abstract

This study investigates the experiences and perceptions of Korean college students regarding the use of ChatGPT as a tool for revising paragraphs in a general English course focused on academic writing skills. A total of 71 student participants completed a survey questionnaire about their perceived benefits and challenges in employing ChatGPT for writing revision and anticipated teacher roles in the process. For data analysis, descriptive statistics and thematic analysis methods were used. The findings reveal a positive reception from the students. They expressed satisfaction with the use of ChatGPT for their paragraph revisions, indicating that its feedback was helpful and trustworthy. The benefits of ChatGPT included the convenience of its instant responses, unrestricted use across time and space, and accurate error correction in areas such as vocabulary, grammar, and paragraph flow. Some challenges were also raised, such as the lack of error descriptions, incomprehensible feedback, misalignment of responses, concerns about diminished authorship, and uncertainty about learning effectiveness. These findings suggest the potential of ChatGPT as a supplementary tool for writing revision, emphasizing the importance of a balanced approach between AI-driven and human feedback. The study also underscores the crucial role of teachers in effectively integrating ChatGPT into writing instruction.

Keywords: secondary; tertiary

I. INTRODUCTION

In the ever-evolving landscape of technology, innovative and creative tools and platforms have emerged to assist language learners in various aspects of their language development. One such tool is ChatGPT, a language model powered by artificial intelligence (AI) that was created by OpenAI and made available in November 2022. Utilizing cutting-edge AI techniques, including Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML), ChatGPT has been trained on a vast amount of data. This extensive training enables it to generate responses that are human-like and engage in interactive conversations with people (Kohnke et al., 2023). These critical characteristics have garnered significant interest in the field of language education, raising hopes for ChatGPT’s ability to facilitate language learning (Xiao & Zhi, 2023).

ChatGPT offers numerous potential advantages for language learners. One of them is its ability to provide students with a dynamic and interactive environment in which to practice their target language (Guo et al., 2022). With ChatGPT, learners can simulate real-world conversations and receive instant feedback, aiding them in recognizing and correcting errors in real time. This individualized learning approach is particularly beneficial because it adjusts to each learner’s unique needs and proficiency level (Chen et al., 2021).

While ChatGPT holds significant potential for aiding language learning, it also has potential disadvantages. The model lacks the ability to independently verify the accuracy of the data it generates, relying instead on patterns learned from its training set. Consequently, there is a risk of “hallucination,” where the model produces information that is compelling and plausible but may be inaccurate (Halaweh, 2023). Additionally, concerns arise about plagiarism when students incorporate ChatGPT-generated content directly into their assignments, potentially fostering an overreliance that hampers the development of critical thinking (Lo, 2023). Despite these challenges, it is widely acknowledged that ChatGPT can be effectively utilized in language instruction provided that clear guidelines for instructional use and adequate training for teachers and learners are in place (Choe, 2023a, 2023b; Lo, 2023; Shin, 2023).

As interest in utilizing ChatGPT in language education continues to grow, it is necessary to confirm its viability and efficacy as a language learning tool. However, there is a dearth of empirical research on its implementation in real-world language classrooms, as well as on how students use it and interpret their experiences. Further research is needed to fill this gap and to determine how ChatGPT can be used more effectively for language learning.

The purpose of this study is to explore how Korean EFL (English as a Foreign Language) college students perceive the use of ChatGPT for editing their writing assignments in general English classes. Among other English language skills, writing is a particular challenge for English language learners (Bok & Cho, 2022; Evans & Green, 2007). Furthermore, L2 writing theories highlight the importance of multiple revisions based on feedback from both teachers and peers, conceptualizing writing as a dynamic and recursive activity (Liu & Hansen Edwards, 2002). According to Kwon and Kim (2023), while there are numerous advantages to this approach for developing writing skills, a major challenge lies in finding sufficient time for feedback. Utilizing an electronic tool like ChatGPT is potentially beneficial in addressing this time constraint.

This study aims to illuminate the potential benefits and challenges of integrating ChatGPT into the EFL writing revision process, as well as teachers’ roles in this AI-driven process, by examining students’ experiences, attitudes, and perceptions of the platform. In light of this objective, two main research questions guide the present study:

1. How do Korean university students respond to their experience of using ChatGPT in the process of writing revision?

2. Based on their experiences, what are Korean university students’ perceptions of utilizing ChatGPT for revising their writing, and what roles do they anticipate their teachers will play in this AI-driven writing revision process?

II. LITERATURE REVIEW

1. ChatGPT: Its Affordances for Language Learning and Teaching

Since ChatGPT was first released in November 2022, it has received considerable attention from language educators and researchers. The primary focus of research papers to date has been on the platform’s potential as a language learning and teaching tool. For example, based on their preliminary analysis, Kohnke et al. (2023) discussed the educational value of the tool. According to them, ChatGPT can effectively facilitate language learning by providing learners with opportunities to engage in authentic interactions through key features such as word recognition, error correction, and explanation. Furthermore, it is capable of generating texts in a variety of genres and at various difficulty levels, which can be particularly useful for teachers preparing teaching materials aligned with their students’ varying proficiency levels and needs.

Park (2023) also investigated the positive effects that ChatGPT can have on English education. Specifically, ChatGPT utilizes data-driven algorithms and offers tailored learning experiences by considering individual learners’ interests, proficiency level, and learning goals and providing customized learning materials accordingly. In addition, it can offer instant feedback, assisting in correcting errors in grammar, vocabulary, and structure. Furthermore, the utilization of GPT-4, which has functions for transforming texts into images and sounds, has the potential to enhance students’ engagement in and motivation for language learning.

Shin et al. (2023) focused their analysis on the potential of chatbots as a tool for content-based English learning and teaching. They invited 27 educational specialists to participate in a survey to evaluate three different chatbots: Cleverbot, Kuki, and ChatGPT. They found that ChatGPT had superior language proficiency and provided various information thanks to a DL approach based on massive language models, indicating its potential for use in content-based English learning and teaching. Shin et al. anticipated that the platform can be utilized not only as a self-directed learning tool for language learners but also as a helpful teaching resource for language instructors.

Despite its potential advantages, ChatGPT has limitations. For example, Ahn (2023) examined ChatGPT’s efficacy as a language learning and evaluation tool by analyzing its performance on the English reading sections of the Korean College Scholastic Ability Test. Overall, ChatGPT demonstrated high accuracy in answering the questions, as well as the ability to reason and explain its answers without any specialized input, but it also made some mistakes. Ahn asserted that human intervention and careful interpretation are required for the outputs that ChatGPT generates.

As shown above, the existing body of literature is constrained in that it mostly concentrates on offering recommendations for the use of ChatGPT in language learning and teaching or summarizing the anticipated advantages of ChatGPT based on a feature analysis. More investigation is needed into the experiences and perceptions of English language learners when it is actually implemented in English classrooms. This line of inquiry will contribute to optimizing the use of the tool and thus enhancing the learning experiences of English language learners.

2. Utilization of ChatGPT in the EFL Writing Process

Writing is one of the areas of English language proficiency where ChatGPT can be extremely helpful to learners. This is mostly attributable to its ability to provide prompt, individualized feedback (Hong, 2023) and opportunities for honing writing skills (Kim et al., 2023). Some empirical studies (e.g., Cao & Zhong, 2023; Choe, 2023a, 2023b; Shin, 2023) have examined the use of ChatGPT in the English writing process and assessed its effectiveness from the perspective of the students.

Shin (2023) investigated the possibility of using ChatGPT as a tool to provide guided writing instruction to 21 Korean pre-service English teachers. The main focus of this study was on the students’ perspectives regarding the use of the tool during the planning and revision phases of writing. Most participants found ChatGPT very helpful in their writing process; it helped them generate ideas and organize their thoughts, and it provided insightful feedback. Although they found it helpful, some participants advised against relying too much on ChatGPT because they believed it might hinder learning. Shin used these findings to highlight how important it is for teachers to give students thorough instructions on how to use ChatGPT.

Other studies on ChatGPT’s application in English writing have focused on its role in the revision phase. Choe (2023a, 2023b) examined the perceptions of Korean pre-service English teachers regarding the incorporation of ChatGPT into their writing process. Participants were asked to reflect upon how they used ChatGPT to edit their first drafts of the assigned articles. A thematic analysis showed that participants reported greater excitement, more efficient use of time, and increased confidence as affectively positive aspects, and they perceived the feedback on content, organization, and language use as cognitively positive. Negative aspects included worries about possibly hallucinated content and ethical dilemmas regarding plagiarism and over-reliance on the tool. Based on these results, Choe concluded that ChatGPT could be useful in EFL instruction, given that the curriculum includes instruction on its appropriate and ethical use and copyright issues.

Cao and Zhong (2023) investigated ChatGPT’s efficacy in the rewriting process by contrasting it with traditional feedback methods such as self- and teacher-feedback. Advanced Chinese ESL/EFL students enrolled in a master’s program in translation and interpretation participated in their study. The participants were required to translate texts from Chinese into English and then edit their translations based on feedback from teachers, ChatGPT, and themselves. An analysis of the translated texts’ lexicon, syntax, and cohesion showed that ChatGPT outperformed the other two forms of feedback in lexical proficiency and coherence. However, self- and teacher-feedback were better at addressing syntax, including the appropriate use of the passive voice. Cao and Zhong suggested incorporating ChatGPT with conventional feedback methods, highlighting the necessity of a blended feedback strategy.

These studies offer valuable insights into the application of ChatGPT in English writing instruction, outlining both its benefits and drawbacks. Nevertheless, research in this area, particularly on the usefulness of ChatGPT-based feedback, is still in its infancy. The goal of this study is to close this research gap and contribute to current knowledge by exploring how EFL students from various academic disciplines use ChatGPT in the context of Korean higher education. It is also important to note that the majority of the studies reviewed above were conducted with English education and translation majors. Compared to students majoring in English, EFL students from a variety of majors enrolled in a general education course can be expected to respond differently during the writing revision process. The present study strives to offer insights into the effective utilization of ChatGPT for teaching English writing skills by examining the viewpoints of learners with varying learning experiences and needs.

III. METHOD

1. Research Context

This study was conducted at a metropolitan four-year university in South Korea. Students at this university were required to take a mandatory general English course in the spring semester, with a focus on academic writing. One of the authors of this study was an instructor who taught three basic-level academic writing classes. One of the primary activities in these classes required students to spend approximately 30 minutes writing a paragraph independently in class after completing each textbook unit. Students subsequently submitted their writing to the school’s Learning Management System (LMS) for evaluation. Oral feedback from the instructor was available online or in person upon request; participation was voluntary.

Starting from the second half of the semester, a new practice was introduced as homework. Students were required to use generative AI chatbots (e.g., ChatGPT, Bard, or Bing) to proofread and revise the original paragraphs they had written during class time. Specific prompts were not provided; instead, examples such as “proofread this paragraph” and “revise this paragraph” were shared in class. Students were also instructed to identify the three main difficulties they encountered when writing a paragraph independently, which helped determine whether AI assistance could address those difficulties. Five main areas of difficulty were identified from these assignments: grammar, vocabulary/expressions, clarity, coherence, and organization. These areas were addressed in the survey questionnaire to assess the extent to which ChatGPT assisted with writing in them. Throughout the second half of the semester, each student submitted three original paragraphs and their revisions to the instructor as homework.
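
Students interacted with ChatGPT through its web chat interface rather than through code. Purely for illustration, the sketch below shows how the same prompt pattern (a “proofread/revise this paragraph” request with the student’s reported difficulty areas attached) could be issued programmatically. It assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment variable; the helper function and sample paragraph are hypothetical and not part of the study.

# Illustrative sketch only: the participants used the ChatGPT web interface,
# not the API. This merely reproduces the prompt pattern described above.
# Assumes the openai package (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def request_revision(paragraph: str, focus_areas: list[str]) -> str:
    """Ask GPT-3.5 to proofread and revise a paragraph, flagging the
    difficulty areas the student reported (hypothetical helper)."""
    prompt = (
        "Proofread and revise this paragraph. "
        f"Pay particular attention to: {', '.join(focus_areas)}.\n\n"
        f"{paragraph}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the version the participants reported using
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Hypothetical example using the five difficulty areas identified in class
revised = request_revision(
    "My hobby is play soccer with friends in weekend.",
    ["grammar", "vocabulary/expressions", "clarity", "coherence", "organization"],
)
print(revised)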

A total of 80 students initially participated in the study, but data from students who used other AI chatbots, such as Bard and Bing Chat, were filtered out, leaving 71 student responses for this research. Most of the 71 students were in their first year, with the exception of one third-year student. In terms of majors, 28 were computer science majors specializing in computer engineering, software, or information statistics; 26 were law majors; and 17 were art, music, or sports majors. On the Common European Framework of Reference for Languages (CEFR) scale, most students self-assessed as beginners, with the exception of one advanced student: 26 at the A1 level, 35 at the A2 level, nine at the B1 level, and one at the C1 level. In addition, 39 of the students reported receiving oral feedback from the instructor at least once during the semester, while the remaining 32 did not. All students who reported using ChatGPT used the GPT-3.5 version.

2. Data Collection and Analysis

The primary data for the research were collected from a survey questionnaire designed to investigate students’ experiences and perceptions of using ChatGPT to revise their paragraphs, along with their opinions on the teacher’s roles in the process. The questionnaire was developed using Google Forms and comprised two sections. The first section gathered background information about the students, such as year in school, major, self-assessed English proficiency level, whether they had received oral feedback from the instructor, and the type of AI chatbot used for revision. The second section comprised 15 items investigating students’ perceptions of the use of ChatGPT, including 11 questions answered on a 5-point Likert-type scale, one multiple-choice question, and three open-ended questions (see Appendix). The survey link was shared with students both online and offline at the end of the semester and was open from June 12 to 18, 2023. Participation was anonymous and voluntary; no identifying information was requested, and students completed the survey without the instructor’s presence.

Students’ responses to the survey were automatically saved and exported to an Excel file. Descriptive statistics were used to analyze the Likert-scale and multiple-choice questions, and thematic analysis was applied to the open-ended questions. For the qualitative analysis, irrelevant responses to the open-ended questions, such as stray punctuation marks and “I don’t know,” were excluded. Subsequently, each researcher individually read and examined the students’ responses, assigning codes to organize the information by meaning. The researchers then compared their coding results and collaboratively identified patterns in the codes, noting recurring ideas. In the final phase, they grouped the codes into broader categories or themes and offered their interpretations.
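
For illustration only, the sketch below reproduces the quantitative side of this analysis in Python/pandas rather than Excel; the file name, column names, and the semicolon-separated code format are hypothetical assumptions, not details taken from the study.

# Illustrative sketch of the descriptive analysis described above, assuming
# the Google Forms export were loaded into pandas (hypothetical file/columns).
import pandas as pd

df = pd.read_excel("survey_responses.xlsx")  # one row per student

# Descriptive statistics (M, SD) for the 5-point Likert items (Q1-Q10, Q13)
likert_cols = [f"Q{i}" for i in list(range(1, 11)) + [13]]
summary = df[likert_cols].agg(["mean", "std"]).T.round(2)
summary.columns = ["M", "SD"]
print(summary)

# Percentage distribution for the multiple-choice item (Q14)
print((df["Q14"].value_counts(normalize=True) * 100).round(1))

# Tallying researcher-assigned codes for an open-ended item (Q11), assuming
# each student's codes are stored as a semicolon-separated string
codes = (
    df["Q11_codes"]
    .dropna()
    .str.split(";")
    .explode()
    .str.strip()
)
print(codes.value_counts())  # counts per theme, as reported in Table 2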

IV. RESULTS

1. Perceived Advantages of Using ChatGPT for Feedback

The first three survey questions were developed to understand students’ overall experience with using ChatGPT for paragraph revision. As indicated in Table 1, the responses reflected a generally positive experience with the tool. The first question aimed to assess students’ satisfaction with the activity of receiving feedback through ChatGPT and revising their writing, and their responses suggest high satisfaction (M = 4.13, SD = 0.79). In the subsequent survey questions (2 and 3), students indicated that they found ChatGPT helpful in editing their paragraphs (M = 4.04, SD = 0.92) and expressed an intention to continue using it for paragraph revision in the future (M = 4.21, SD = 0.91). When asked about the quality of its feedback in Questions 4 and 5, students expressed positive perceptions that ChatGPT’s feedback was accurate (M = 4.03, SD = 0.79) and reliable (M = 3.90, SD = 0.86).


In order to gain a deeper understanding of these positive experiences and attitudes, an open-ended question (Question 11) was posed about the advantages of using ChatGPT. Table 2 presents the advantages students mentioned most: convenience, error correction, the availability of various revised versions, and novelty. Many students highlighted the convenience of using ChatGPT as one of its most beneficial aspects. They appreciated the freedom from restrictions of time and place, as well as the ability to receive prompt feedback upon request. Because of these conveniences, students considered the revision process with ChatGPT to be “easy, fast and hassle-free.”


ChatGPT’s next merit was its ability to accurately detect and correct students’ writing errors. Many students described the corrections made by ChatGPT as “perfect,” “nothing wrong,” or “model language,” which was in line with their responses to survey questions 4 and 5 about the quality of its feedback. Regarding the areas in which students received help, vocabulary, paragraph flow, grammar, and content enrichment were mentioned most frequently. In the area of vocabulary, for instance, students felt that ChatGPT helped them learn “new” vocabulary and find “more appropriate words” for their texts. They also found ChatGPT helpful in improving the flow of their paragraphs. By checking the revised versions of their paragraphs, they were able to see how their ideas could be sequenced more logically and how the inclusion of “linking” or transition words could maintain coherence, ultimately making their texts more readable and improving their overall quality. Grammar was another area where ChatGPT was reported to be helpful because of its ability to catch and fix grammatical errors easily. Lastly, ChatGPT also aided students in finding more supporting details for their paragraphs so that they could back up the main idea within the text.

Likert-scale questions (Questions 6 through 10) were also used to examine students’ perceptions of the quality of ChatGPT’s feedback in five key areas: grammar, clarity, coherence, vocabulary (including expressions), and organization. These areas were selected based on their frequency in students’ reported difficulties during the paragraph writing activity, in which they were instructed to identify three areas of difficulty. Table 3 illustrates that students overall agreed that ChatGPT played a critical role in rectifying English errors. The highest agreement was found in the category of English vocabulary and expressions (M = 4.23, SD = 0.87), followed by grammar (M = 4.17, SD = 0.99), organization (M = 4.15, SD = 0.87), clarity (M = 4.10, SD = 0.96), and coherence (M = 4.03, SD = 0.94).


2. Challenges Faced While Using ChatGPT for Feedback

While students generally had positive experiences with ChatGPT, the process was not without challenges. Students raised various issues in response to survey question 12, which are summarized in Table 4. One of the main issues concerned how feedback was provided. Firstly, students expressed disappointment that ChatGPT did not automatically offer an explanation of its error corrections unless they explicitly asked for one. As one student commented, “the reason for the correction was unknown,” making it difficult to discern where and why their original sentences or paragraphs were considered incorrect. Additionally, students stated that the vocabulary proposed by ChatGPT was sometimes too advanced or difficult for them to employ. Given that most students considered themselves beginners in English, they frequently noticed a gap in difficulty between the words suggested by ChatGPT and the ones they had learned or were familiar with. Because of this insensitivity to students’ English proficiency, one student mentioned that he had to continually search for other words that he found more familiar and comfortable to use, instead of blindly adopting the vocabulary suggested by ChatGPT. Lastly, students wished that they could have their original text evaluated before ChatGPT created its revision so that they would know how well or poorly it was written. Corrections made by ChatGPT could involve substantial changes in areas such as grammar, word choice, and style that distinguish one writer’s text from another’s, and the modified text often lacked a specific explanation of why the initial text had to be changed. As a result, students often found it hard to determine the correctness of their initial attempt, which could mean a missed opportunity to learn English from their own mistakes.


Another significant challenge that students faced was a communication issue with ChatGPT, which often resulted in misleading feedback. Some students mentioned occasional difficulties communicating with ChatGPT: there were instances where the questions or requests they posed were not accurately understood, leading to answers or responses that did not align with their intended meaning. To minimize such miscommunications, some students chose to use English instead of Korean when making requests, considering ChatGPT’s origin. For instance, one student noted, “It was more accurate to use ChatGPT in its original English form rather than in Korean, so it was a hassle to translate before asking.” This additional step of translating before asking for a more accurate answer appeared to add stress for this student.

The next issue was students’ doubt about the effectiveness of ChatGPT for their English learning. One instance causing this doubt occurred when students were unsure of their specific areas of weakness and what they needed help with when reviewing the revised text provided by ChatGPT. Lacking the knowledge to critically assess the feedback, and therefore feeling compelled to “simply accept” it, they questioned whether ChatGPT’s feedback could be truly effective. Doubts about its effectiveness also arose when students compared it to face-to-face feedback from their instructor. Some students indicated that direct interaction with their instructor led to better comprehension of feedback than reading it as written text. They also found themselves less attentive and engaged when asking questions through text chat compared to face-to-face interactions. This raised concerns about whether engagement with ChatGPT could contribute to their actual learning.

The last challenge students encountered was the perception of losing authorship over their writing after undergoing a revision process with ChatGPT. Some students noticed that their work had been significantly altered or rewritten, and they expressed mixed feelings about it. While they acknowledged that the modified sentences displayed a more polished and professional quality, they were also disappointed to discover that their initial sentences had not been preserved in any form, whether in structure, wording, or manner of expression. As one student put it, the texts modified by ChatGPT were sometimes no longer recognizable as their own sentences, which led them to feel detached from, or “foreign” to, the revised version.

3. Perceived Teachers’ Role During ChatGPT-Driven Feedback Activity

The remaining part of the survey questionnaire focused on students’ thoughts about the instructor’s role in an environment where generative AI chatbots like ChatGPT are used in the process of revising paragraphs. While students had positive experiences with ChatGPT and perceived its feedback as accurate, reliable, and helpful, they responded negatively to survey question 13, which asked whether generative AIs could fully replace face-to-face feedback activities by actual human instructors (M = 2.35, SD = 1.10). In answer to survey question 14, which asked participants to choose between ChatGPT and human instructors for feedback on paragraph writing, a substantial portion of students (49.3%) selected a combination of both, 35.2% opted for instructors only, and 15.5% chose ChatGPT exclusively. These results suggest that students still find value in human feedback, even though they acknowledge the potential usefulness of feedback activities with generative AIs.

When asked in survey question 15 about the roles that teachers can play when AIs like ChatGPT become widely used in revision, students identified three main roles, as depicted in Table 5. The first anticipated role of teachers was to provide feedback on AI-generated responses, clarifying, verifying, and even enhancing their quality. Students expected teachers to address certain aspects of writing that might be challenging for ChatGPT to provide feedback on. For instance, while this may not necessarily be true, some students believed that human teachers are better at explaining subtle, culturally influenced differences in the meaning and usage of English words. They also believed that human English teachers can proofread a text with a deeper understanding of the context in which it is written than ChatGPT can. Moreover, they felt human teachers are better equipped to closely observe an individual student’s unique writing style and needs and to reflect this understanding when editing the writing. All in all, they suggested that teachers can play an essential role in providing nuanced, culturally sensitive, and individualized feedback, which they perceived as challenging for ChatGPT.


Furthermore, students expected teachers to explain AI-generated feedback in an easily accessible, clear, and comprehensible manner. As discussed in the previous section, students frequently struggled to fully understand ChatGPT’s feedback due to its inadequate error descriptions, the use of overly advanced or unfamiliar vocabulary, and responses that did not align with their intentions. These issues raised concerns about its effectiveness in promoting students’ actual learning. Consequently, they anticipated that teachers would provide additional explanations for the feedback generated by AIs.

Besides receiving additional explanations of the AI’s feedback, some students expressed a desire to verify its accuracy and reliability through their teachers. While they acknowledged ChatGPT’s proficiency in many areas, they had concerns about its potential to offer undesired or incorrect feedback. Rather than blindly relying on it, students wanted teachers to actively monitor ChatGPT-generated feedback, identify its errors, and address any questions or concerns that they had.

Secondly, students highlighted the importance of teachers’ guidance on how to use AIs for text revision so that they could extract specific, desired feedback. They recognized the significance of posing the right questions to elicit high-quality feedback and suggested that teachers provide instruction on formulating appropriate prompts, questions, and requests. Students also pointed out the need for teachers to offer guidance on the responsible and ethical use of AIs to ensure their proper utilization. Given the risk of unintentional plagiarism or copyright infringement when incorporating AI feedback into one’s writing, owing to insufficient knowledge or skill in writing with and using AIs, one student proposed that teachers consistently “guide and supervise” students to prevent any misuse or abuse of AIs.

The last role that students proposed for teachers was to connect with them on an emotional level, offering encouragement and motivation for continued English learning. Students perceived that ChatGPT provided a “perfect” version of their text without detailed explanations for the revisions or adequate support for their original writing. This could make them anxious about their own English skills and lead to the false belief that their current English is not adequate to show to others. One student pointed out that this could lead to feelings of “frustration” with their own English skills. Therefore, teachers were expected to provide ongoing support for students and to be attuned to their emotional needs.

V. CONCLUSION

This study delves into the experiences and perceptions of Korean college students in utilizing ChatGPT as a tool for revising paragraphs within the context of a general English course focused on academic writing skills. Notably, the findings reveal a positive reception among the students. They exhibited a high level of satisfaction with the activity of receiving feedback through ChatGPT and using it to revise their writing. Furthermore, the students perceived ChatGPT as highly helpful for editing their paragraphs in various areas of writing, such as grammar, vocabulary, clarity, coherence, and organization. Students also appreciated not only its accurate error detection and correction but also the convenience it afforded. They valued the freedom from time and location constraints, which allowed them to access immediate feedback whenever needed.

The positive reception and perceived benefits, particularly in terms of convenience, accurate error correction, and vocabulary acquisition, highlight the promise of integrating AI-driven tools into English writing classes. As the students expressed a strong intention to continue using ChatGPT for revising their writing, it seems inevitable that students will utilize AI chatbots like ChatGPT for their writing. Once their use of AIs is openly allowed, students will be able to receive quality feedback whenever and wherever they need it and revise their writing more easily than when English teachers or native English speakers were the only reliable sources of writing feedback. Furthermore, it is highly probable that educators will be relieved of the extensive task of meticulously editing each student’s written work. Therefore, English teachers must deliberate on the prospective changes that the incorporation of AI technologies will bring to their writing courses. This reflection is crucial for fostering an educational environment conducive to high-quality teaching and learning in the years ahead.

The results of this study also identify some challenges associated with using ChatGPT. One notable concern revolves around the specificity of error feedback. Some students expressed a desire for more explanation and clarification of the corrections made by ChatGPT, as well as a direct evaluation of their original writing. They also noted that the words and expressions substituted by ChatGPT were at times too advanced to digest, and they wished for feedback more tailored to their current level of English proficiency. Furthermore, there were instances where the revised paragraph did not align with the students’ intended meaning, particularly when they faced difficulties communicating with ChatGPT. While they acknowledged that ChatGPT produced a more polished version of their writing, they felt that the resulting work no longer reflected their own writing style. Because of these challenges and concerns, some students questioned its effectiveness for their English learning.

These challenges bring forth a wealth of educational implications. As most participants in this study reported a need for human teachers even when using ChatGPT, English teachers need to serve as mentors, aiding students in comprehending and applying the feedback provided by the tool to refine their writing skills. This implies a shift in the qualifications expected of competent English teachers in the future and necessitates continuous improvement in teachers’ own English proficiency. Teachers should be aware that they may be expected to clarify, verify, or even enhance feedback generated by AIs. Consequently, they must consider how to explain AI-generated feedback in a manner their students can understand, as well as how to offer personalized feedback informed by a deeper contextual understanding of the students’ writing and aligned with their intentions. At the same time, it is imperative to cultivate abilities perceived to go beyond AIs, such as the ability to discern subtle differences in the meaning and usage of English words depending on the context in which they are used.

Moreover, teachers must emphasize the importance of crafting clear and specific prompts to optimize the effectiveness of AIs. While newer generations may be more familiar with using AIs in various ways, this does not necessarily mean that they know how to communicate with them effectively to maximize AI outcomes. Thus, educators need to explicitly train students in formulating questions that elicit the desired responses. For students like the participants in this research, whose English proficiency is low, prompts focused solely on extracting accurate feedback may not be helpful if the revised work contains grammar or vocabulary that far surpasses their level of English knowledge. In addition to accuracy, teachers should instruct students on how to develop prompts that elicit feedback appropriate to their level. The English teachers’ role as guides should be emphasized even more when implementing AI tools like ChatGPT, which are designed for general purposes rather than specifically for educational or language teaching purposes, in language classrooms.

Furthermore, teachers should consider coaching students both intellectually and emotionally as they incorporate AIs into their English learning. While students recognize that AIs may make mistakes, they often accept their suggestions without critical evaluation due to their limited knowledge of English. Thus, teachers should not neglect training students in how to critically evaluate AI-generated feedback, which encompasses essential skills like discerning between correct and incorrect suggestions. In addition, students may be overwhelmed by the apparent perfection of AI language models, which may lead to anxiety about their own English skills in comparison. When their original text disappears in the revised version, students may feel more disappointed and discouraged in their learning and use of English. Therefore, teachers can continue to support students by responding to their disappointment in a compassionate way and providing positive and encouraging feedback.

While this study offers valuable insights into the immediate experiences of Korean college students using ChatGPT for writing revisions, it is not without limitations. Firstly, the study’s scope is confined to a single course and a specific cohort of mostly first-year students. This limits the generalizability of the findings to a broader population of EFL learners. Additionally, the study predominantly relies on self-reported data from survey questionnaires. To address these limitations, future researchers should consider adopting a longitudinal approach. A longitudinal study tracking students’ writing development over an extended period, utilizing real writing samples and ChatGPT prompts and responses, would provide a comprehensive understanding of the tool’s impact on language acquisition and writing proficiency. By examining how students’ writing skills evolve with consistent usage of ChatGPT, researchers and educators can gain deeper insights into the long-term benefits and potential drawbacks of integrating AI-driven tools in EFL writing education. Such research endeavors hold significant promise for enhancing pedagogical practices and ultimately contributing to more effective and innovative approaches in EFL writing instruction.

References

Ahn, Y. Y. (2023). Performance of ChatGPT 3.5 on CSAT: Its potential as a language learning and assessment tool. Journal of the Korea English Education Society, 22(2), 119–145. http://doi.org/10.18649/jkees.2023.22.2.119
Bok, E., & Cho, Y. (2022). A case study: Korean university students’ experience with and perceptions of an academic English writing class. Korean Journal of General Education, 16(5), 171–188. https://doi.org/10.46392/kjge.2022.16.5.171
Cao, S., & Zhong, L. (2023). Exploring the effectiveness of ChatGPT-based feedback compared with teacher feedback and self-feedback: Evidence from Chinese to English translation. arXiv. https://doi.org/10.48550/arXiv.2309.01645
Chen, X., Zou, D., Xie, H., & Cheng, G. (2021). Twenty years of personalized language learning: Topic modeling and knowledge mapping. Educational Technology & Society, 24, 205–222.
Choe, Y. (2023a). Exploring ChatGPT’s impact on the English summary writing of pre-service English teachers. Multimedia-Assisted Language Learning, 26(2), 104–132. https://doi.org/10.15702/mall.2023.26.2.104
Choe, Y. (2023b). Exploring perceptions of Korean pre-service English teachers on using ChatGPT in the English writing process. Journal of the Korea English Education Society, 22(2), 243–262. https://doi.org/10.18649/jkees.2023.22.2.243
Evans, S., & Green, C. (2007). Why EAP is necessary: A survey of Hong Kong tertiary students. Journal of English for Academic Purposes, 6(1), 3–17. https://doi.org/10.1016/j.jeap.2006.11.005
Guo, K., Wang, J., & Chu, S. K. W. (2022). Using chatbots to scaffold EFL students’ argumentative writing. Assessing Writing, 54, 100666. https://doi.org/10.1016/j.asw.2022.100666
Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), ep421. https://doi.org/10.30935/cedtech/13036
Hong, W. C. H. (2023). The impact of ChatGPT on foreign language teaching and learning: Opportunities in education and research. Journal of Educational Technology and Innovation, 5(1), 37–45. https://jeti.thewsu.org/index.php/cieti/article/view/103/64
Kim, S., Shim, J., & Shim, J. (2023). A study on the utilization of OpenAI ChatGPT as a second language learning tool. Journal of Multimedia Information System, 10(1), 79–88. https://doi.org/10.33851/JMIS.2023.10.1.79
Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. RELC Journal, 54(2), 537–550. https://doi.org/10.1177/003368822311628
Kwon, E., & Kim, S. (2023). A study on advanced Korean EFL college students’ peer reviews and revisions in their writing with digital tools. Journal of Learner-Centered Curriculum and Instruction, 23(10), 277–298. https://doi.org/10.22251/jlcci.2023.23.10.277
Liu, J., & Hansen Edwards, J. G. (2002). Peer response in second language writing classrooms. University of Michigan Press. https://doi.org/10.3998/mpub.9361097
Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410. https://doi.org/10.3390/educsci13040410
Park, H.-Y. (2023). Application of ChatGPT for an English learning platform. Journal of English Teaching through Movies and Media, 24(3), 30–48. https://doi.org/10.16875/stem.2023.24.3.30
Shin, D. (2023). Utilizing ChatGPT in guided writing activities. Journal of the Korea English Education Society, 22(2), 197–217. https://doi.org/10.18649/jkees.2023.22.2.197
Shin, D., Jung, H., & Lee, Y. (2023). Exploring the potential of using ChatGPT as a content-based English learning and teaching tool. Journal of the Korea English Education Society, 22(1), 171–192. https://doi.org/10.18649/jkees.2023.22.1.171
Xiao, Y., & Zhi, Y. (2023). An exploratory study of EFL learners’ use of ChatGPT for language learning tasks: Experience and perceptions. Languages, 8(3), 212. https://doi.org/10.3390/languages8030212

Appendices

APPENDIX Survey Questionnaire

The second section of the survey questionnaire used in this study:

To what degree do you agree with the following statements? (Questions 1-10 and 13)

1. I am satisfied with the activity of receiving feedback on my paragraphs and revising them with ChatGPT.

2. ChatGPT was helpful in editing my English writing paragraphs.

3. I intend to use ChatGPT in the future for editing English paragraphs.

4. The feedback I received from ChatGPT was accurate.

5. I trust the feedback I received from ChatGPT.

6. ChatGPT has helped me find and correct grammatical errors.

7. ChatGPT has helped me write clearly.

8. ChatGPT has helped me write coherently.

9. ChatGPT has helped me find and use appropriate English vocabulary and expressions.

10. ChatGPT has helped me organize a paragraph appropriately.

11. What were the benefits of receiving paragraph writing feedback through ChatGPT?

12. What issues or challenges did you encounter during the process of receiving feedback through ChatGPT?

13. I believe generative AIs such as ChatGPT will replace face-to-face feedback activities by actual instructors.

14. If you could choose between face-to-face feedback activities by instructors and feedback activities through generative AIs such as ChatGPT, which method would you choose?

15. If generative AIs such as ChatGPT are widely used as class activities in the future, what role should instructors play in college English writing classes?


TABLE 1

Students’ Overall Evaluation of the Use of ChatGPT During the Revision Process

Questions M SD
1. I am satisfied with the activity of receiving feedback on my paragraphs and revising them with ChatGPT. 4.13 0.79
2. ChatGPT was helpful in editing my English writing paragraphs. 4.04 0.92
3. I intend to use ChatGPT in the future for editing English paragraphs. 4.21 0.91
4. The feedback I received from ChatGPT was accurate. 4.03 0.79
5. I trust the feedback I received from ChatGPT. 3.90 0.86

TABLE 2

Advantageous Aspects of Using ChatGPT During the Revision Process

Advantages Comments
Convenience (30) Instant reply (22) I liked being able to edit my own drafts quickly and easily.
I was able to know what I didn’t know in an instant.
It was easy, fast and hassle-free.
No restrictions of time and space (5) It was nice to be able to use it right away when needed without any restrictions of time and space.
It was convenient because I could receive feedback right at home at the time I wanted.
Unlimited interaction (3) I could feel free to ask anything I was curious about.
It allowed me to freely ask many questions.
It was helpful to see various types of revisions.
Error correction (43) Vocabulary (20) I liked that it edited the vocabulary to be more appropriate for the text.
It helped me use vocabulary and expressions I wanted to use.
There were many new words I learned.
I learned to use expressions other than those I used repeatedly.
Flow (8) It corrected the flow of the paragraph so that it progressed smoothly.
It was nice to be able to use a variety of linking words.
I felt like I had written a high-quality text when I saw that its flow changed organically to make it easier to read.
Accuracy (6) I liked it because it made the sentences that I wrote more perfect.
There was nothing wrong with the feedback.
I felt like I was seeing a model answer.
Grammar (5) I received help in improving my vocabulary and grammar.
I liked that it corrected my writing neatly and caught spelling and grammatical errors well.
Enhanced content (4) It made the content of my writing richer.
It added additional details to my writing.

Note. The number in the parentheses indicates the count of comments made by the students for each theme.

TABLE 3

Students’ Perceived Effects of ChatGPT on the Quality of Writing

Questions M SD
6. ChatGPT has helped me find and correct grammatical errors. 4.17 0.99
7. ChatGPT has helped me write clearly. 4.10 0.96
8. ChatGPT has helped me write coherently. 4.03 0.94
9. ChatGPT has helped me find and use appropriate English vocabulary and expressions. 4.23 0.87
10. ChatGPT has helped me organize a paragraph appropriately. 4.15 0.87

TABLE 4

Issues Students Faced Using ChatGPT During the Revision Process

Issues Comments
Feedback issues (18) Lack of error description (9) It was disappointing that it corrected the entire paragraph without telling me exactly where the error was.
I didn’t know where and why my sentences were wrong.
The errors in the paragraph were corrected, but the reason for the correction was unknown. It only answered what I asked.
Incomprehensible words/expressions (6) ChatGPT provided too difficult words to understand.
The suggested vocabulary sometimes exceeded my current level of English.
Words that ChatGPT used seemed different from what I learned, so I had to look for the words that I was familiar with.
Lack of evaluation of the original sentence (3) I know that what ChatGPT wrote was good, but I also wanted to get an evaluation of what I wrote.
It was difficult to know whether the sentence I originally wrote was correct because it was corrected with different grammar and words than my original sentence.
Misalignment of responses (18) Occasionally, my question was not properly understood, so it gave an answer completely different from what I intended.
It was more accurate to use ChatGPT in its original English rather than in Korean, so it was a hassle to have to translate before asking.
Occasionally it created unrelated sentences.
Sometimes, the meaning of the edited sentence differed from my intended meaning.
Uncertainty in learning effectiveness (11) ChatGPT gives pretty accurate and smooth feedback, but if I don’t know exactly what I don’t know, I doubt it can be of much help.
It was disappointing because it felt like I was simply accepting ChatGPT’s answers without any critical thinking, because I did not even know what I did not know.
I understand better when I receive feedback in person from a professor. Reading lines written by ChatGPT is less effective than talking with a professor.
I tend to pay less attention when talking through text chat. However, I become more focused when I ask and answer questions face-to-face with a person.
Loss of authorship (10) It completely restructured my sentence and turned it into something new. Although the revised sentence was undoubtedly more professional, I felt disappointed that my original sentence was not protected at all.
I wrote the text, but it felt like the revised text was not my writing. I felt very foreign to it.

Note. The number in the parentheses indicates the count of comments made by the students for each theme.

TABLE 5

Anticipated Teachers’ Roles During ChatGPT-Driven Feedback Activity

Teachers’ Role Comments
Providing feedback on AI-generated feedback (38) Enhancing (15) Please explain the different nuances of English words and cultural differences in the use of certain words.
Teachers should do something that ChatGPT cannot do well like contextual editing.
Through careful observation, teachers can help each student’s individuality to come out in their writing and show fine improvement points that they have not yet recognized.
Teachers can provide feedback on the parts that AI is difficult to explain.
Clarifying (14) Teachers can play the role of explaining the feedback at eye level so that students can understand it.
It seems that teachers need to teach students about the errors that ChatGPT fixed because it does not tell what the errors are.
Teachers can inform students what improvements were made by ChatGPT compared to their own writing.
Verifying (9) Because AI can also give incorrect information, it seems like teachers should go over
ChatGPT’s feedback and answer students’ questions on its feedback.
I also want to check if the answers I got are reliable.
Instructing strategies to maximize AI’s feedback (11) Please teach us how to use generative AIs effectively to obtain the desired feedback.
I think teachers should teach how to create the right prompts to receive high quality feedback.
I think it is important for teachers to guide and supervise us not to misuse or abuse AI all the time.
Connecting emotionally (5) I think ChatGPT’s English looks perfect, which could make students feel frustrated with English. Teachers can fix this.
I think teachers can interact and communicate with students emotionally so that we can stay encouraged to learn English.

Note. The number in the parentheses indicates the count of comments made by the students for each theme.