Research Article
Open Access
CC BY

Controlling Large Language Models in Writing Education: A Computational Framework for Style Transfer, Dependency Detection, and Adversarial Intervention

Yurong Zhao 1*
1 The Education University of Hong Kong, Hong Kong, China
*Corresponding author: rara481846778@gmail.com
Published on 4 July 2025

Abstract

This paper proposes a unified computational framework that safeguards the output quality of large language models in writing education through three modules: style transfer, dependency detection, and adversarial intervention. The style transfer module uses a dual-encoder Transformer to rewrite students' texts into academic or news styles while preserving the original meaning. The dependency detection module reconstructs sentence-level grammatical relations and text-level argumentation structures with a two-layer graph attention network (GAT). The adversarial intervention module simulates typical student errors through controlled perturbations, such as synonym replacement and clause recombination, to evaluate model robustness. Experiments show that the style transfer module reaches 91.8% accuracy for the academic style and 89.5% for the news style, with an average BLEU score of 28.6; under adversarial perturbation, style accuracy drops by only 3.2 percentage points. The GAT parser achieves a labeled attachment score (LAS) of 87.5% and a text-level F1 of 78.3% on clean data, with losses under adversarial interference held to 4.8% (LAS) and 5.3% (F1), respectively. These findings confirm that adversarial training substantially improves the model's resistance to typical writing errors. The framework offers educators practical tools for standardizing writing style, checking structural consistency, and delivering feedback that remains robust to student errors, laying the foundation for a reliable AI-assisted writing instruction system.
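
To make the dual-encoder design concrete, the following is a minimal PyTorch sketch: a content encoder reads the student text, a style encoder embeds the target style label, and a decoder attends over their concatenated memory. All dimensions, layer counts, and the single-token style pathway are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a dual-encoder style-transfer model (illustrative only):
# one encoder path carries content, a second carries the target style.
import torch
import torch.nn as nn

class DualEncoderStyleTransfer(nn.Module):
    def __init__(self, vocab_size: int = 10000, d_model: int = 256, n_styles: int = 2):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.style_embed = nn.Embedding(n_styles, d_model)  # e.g. 0=academic, 1=news
        self.content_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.style_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src, style, tgt):
        content = self.content_enc(self.tok_embed(src))      # (B, S, d)
        style_mem = self.style_enc(self.style_embed(style))  # (B, 1, d)
        memory = torch.cat([content, style_mem], dim=1)      # joint memory for the decoder
        hidden = self.decoder(self.tok_embed(tgt), memory)   # causal mask omitted for brevity
        return self.out(hidden)                              # (B, T, vocab)

# Toy forward pass: batch of 2 sentences, target style "academic" (index 0).
model = DualEncoderStyleTransfer()
src = torch.randint(0, 10000, (2, 12))
tgt = torch.randint(0, 10000, (2, 12))
style = torch.zeros(2, 1, dtype=torch.long)
print(model(src, style, tgt).shape)  # torch.Size([2, 12, 10000])
```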
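
The dependency detection module can be pictured along the same lines. Below is a minimal sketch of a two-layer GAT encoder that scores candidate head-dependent arcs, assuming the torch_geometric library; the dot-product arc scorer and all hyperparameters are placeholder choices, not the paper's published architecture.

```python
# Minimal sketch of a two-layer GAT arc scorer (illustrative assumptions only).
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class GATArcScorer(nn.Module):
    def __init__(self, token_dim: int = 256, hidden_dim: int = 128, heads: int = 4):
        super().__init__()
        # Layer 1: multi-head attention over the candidate-arc graph.
        self.gat1 = GATConv(token_dim, hidden_dim, heads=heads, concat=True)
        # Layer 2: collapse heads back to a single hidden_dim representation.
        self.gat2 = GATConv(hidden_dim * heads, hidden_dim, heads=1, concat=False)
        self.head_proj = nn.Linear(hidden_dim, hidden_dim)
        self.dep_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, tokens, edge_index):
        # tokens: (n_tokens, token_dim) contextual embeddings for one sentence.
        # edge_index: (2, n_edges) candidate head -> dependent arcs.
        h = torch.relu(self.gat1(tokens, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        # Score every (head, dependent) pair with a dot product.
        return self.head_proj(h) @ self.dep_proj(h).T  # (n, n) arc scores

# Toy usage: a 5-token sentence with a fully connected candidate graph.
n = 5
idx = torch.arange(n)
edge_index = torch.stack(torch.meshgrid(idx, idx, indexing="ij")).reshape(2, -1)
scores = GATArcScorer()(torch.randn(n, 256), edge_index)
print(scores.shape)  # torch.Size([5, 5])
```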
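
Finally, the controlled-perturbation idea behind the adversarial intervention module can be illustrated with synonym replacement alone (clause recombination is omitted here). The tiny synonym table, the replacement rate, and the fixed seed are hypothetical stand-ins for whatever lexicon and schedule the authors used.

```python
# Minimal sketch of a synonym-replacement perturbation (hypothetical lexicon).
import random

# A real system would draw on WordNet or embedding neighbors instead.
SYNONYMS = {
    "big": ["large", "huge"],
    "show": ["demonstrate", "indicate"],
    "use": ["employ", "utilize"],
}

def perturb(sentence: str, rate: float = 0.3, seed: int = 0) -> str:
    """Replace known words with a random synonym at the given rate."""
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        key = word.lower()
        if key in SYNONYMS and rng.random() < rate:
            out.append(rng.choice(SYNONYMS[key]))
        else:
            out.append(word)
    return " ".join(out)

print(perturb("The results show that we use a big model"))
```

In an adversarial training loop, such perturbed sentences would be mixed into the training data so the style and parsing models learn to tolerate typical student errors.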

Keywords:

Large Language Models, Writing Education, Style Transfer, Dependency Detection, Adversarial Intervention

Cite this article

Zhao, Y. (2025). Controlling Large Language Models in Writing Education: A Computational Framework for Style Transfer, Dependency Detection, and Adversarial Intervention. Applied and Computational Engineering, 173, 8-14.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

About volume

Volume title: Proceedings of the 7th International Conference on Computing and Data Science

ISBN: 978-1-80590-231-7 (Print) / 978-1-80590-232-4 (Online)
Editor: Marwan Omar
Conference website: https://2025.confcds.org/
Conference date: 25 September 2025
Series: Applied and Computational Engineering
Volume number: Vol. 173
ISSN: 2755-2721 (Print) / 2755-273X (Online)