Research Article
Open Access
CC BY

Research Analysis on Adaptive Thought Chains Based on Knowledge Distillation

Yushan Xia 1*
1 Nantong Institute of Technology
*Corresponding author: 2310410081@ntit.edu.cn
Published on 3 December 2025

Abstract

Large language models (LLMs) are enormous in scale, yet their contribution to productivity has been held back by high costs and ever-growing demands on computing resources. Knowledge distillation plays a key role in bridging the gap between model performance and operational efficiency, strengthening both: it compresses the capabilities of models such as GPT-3.5 into compact models that can run locally at a controllable cost, allowing small and medium-sized enterprises and research institutions to use high-performance language models while keeping their data secure. The classical knowledge distillation framework transfers knowledge from the teacher model to the student model through softened labels. Guided by the logical reasoning steps of a teacher model (such as GPT-4), this paper focuses on enabling the student model to learn from the teacher adaptively. The approach quickly yields a compact model that absorbs the teacher's fine-grained knowledge while reducing the consumption of computing resources and data. Existing chain-of-thought (CoT) distillation methods ignore the variability of samples and the learning dynamics of the student model. This paper therefore proposes an adaptive chain-of-thought distillation method that allows small models to avoid sensitivity to reasoning and to concentrate on learning difficult samples. Since this focus alone could weaken the ability to analyze complex problems, we introduce an adaptive reasoning mechanism that includes a soft-prompt fine-tuning module and verify it experimentally.
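For context, the label-softening transfer described above is conventionally written as a weighted combination of a soft loss on temperature-scaled teacher outputs and a hard loss on ground-truth labels. The following is a minimal, illustrative PyTorch sketch of that classical objective, assuming teacher and student logits are already available; the function name and hyperparameter values are hypothetical, and it does not implement the adaptive chain-of-thought method proposed in this paper.

import torch.nn.functional as F

def classic_distillation_loss(student_logits, teacher_logits, labels,
                              temperature=2.0, alpha=0.5):
    # Soften both output distributions with the same temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Soft loss: KL divergence between the softened teacher and student outputs;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(log_soft_student, soft_teacher,
                         reduction="batchmean") * temperature ** 2
    # Hard loss: ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    # Blend the two terms; alpha controls how strongly the student imitates the teacher.
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

In typical use, teacher_logits would be computed with the frozen teacher under torch.no_grad(), so gradients flow only through the student. The temperature and alpha values shown are common defaults, not values reported in this paper.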

Keywords:

Knowledge distillation, Large language models, Teacher-student model, Chain of reasoning

Cite this article

Xia, Y. (2025). Research Analysis on Adaptive Thought Chains Based on Knowledge Distillation. Applied and Computational Engineering, 211, 21-26.

Data availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

About volume

Volume title: Proceedings of CONF-SPML 2026 Symposium: The 2nd Neural Computing and Applications Workshop 2025

ISBN: 978-1-80590-579-0(Print) / 978-1-80590-580-6(Online)
Editors: Marwan Omar, Guozheng Rao
Conference date: 21 December 2025
Series: Applied and Computational Engineering
Volume number: Vol.211
ISSN: 2755-2721(Print) / 2755-273X(Online)