References
[1]. Han, J., Yoo, H., Myung, J., Kim, M., Lim, H., Kim, Y., Lee, T. Y., Hong, H., Kim, J., Ahn, S.-Y., & Oh, A. (2023). LLM-as-a-tutor in EFL writing education: Focusing on evaluation of student-LLM interaction. arXiv preprint arXiv:2310.05191.
[2]. Yan, L., Sha, L., Zhao, L., Li, Y., Martinez-Maldonado, R., Chen, G., Li, X., Jin, Y., & Gašević, D. (2023). Practical and ethical challenges of large language models in education: A systematic scoping review. arXiv preprint arXiv:2303.13379.
[3]. Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. RELC Journal, 54(2), 537–550.
[4]. Shahzad, T., Khan, Z., Li, M., & Zhang, Y. (2025). A comprehensive review of large language models: Issues and solutions in learning environments. Discover Sustainability, 6(1), 27.
[5]. Lai, H., Toral, A., & Nissim, M. (2023). Multidimensional evaluation for text style transfer using ChatGPT. arXiv preprint arXiv:2304.13462.
[6]. Liu, D., & Demberg, V. (2023). ChatGPT vs human-authored text: Insights into controllable text summarization and sentence style transfer. arXiv preprint arXiv:2306.07799.
[7]. Luo, G., Han, Y. T., Mou, L., & Firdaus, M. (2023). Prompt-based editing for text style transfer. arXiv preprint arXiv:2301.11997.
[8]. Khan, F., Horvitz, E., & Mireshghallah, F. (2024). Efficient few-shot text style transfer with authorship embeddings. Findings of EMNLP 2024, 781–796.
[9]. Hu, Z., & Chen, D. (2021). Improving the performance of graph-based dependency parsing with graph attention networks. Neurocomputing, 457, 214–224.
[10]. Muhammad, H., & Zhang, S. (2023). Adversarial intervention techniques in text style transfer: A survey. Proceedings of the ACL Workshop on Adversarial NLP, 112–123.