References
[1]. Jana S, Biswas R, Pal K, et al. The evolution and impact of large language model systems: A comprehensive analysis [J]. Alochana Journal, 2024.
[2]. Wang Z, Chu Z, Doan T V, et al. History, development, and principles of large language models: an introductory survey [J]. AI and Ethics, 2024: 1-17.
[3]. Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need [J]. Advances in neural information processing systems, 2017, 30.
[4]. Beltagy I, Peters M E, Cohan A. Longformer: The long-document transformer [J]. arXiv preprint arXiv:2004.05150, 2020.
[5]. Naik D, Naik I, Naik N. Large data begets large data: studying large language models (LLMs) and its history, types, working, benefits and limitations [C]//The International Conference on Computing, Communication, Cybersecurity & AI. Cham: Springer Nature Switzerland, 2024: 293-314.
[6]. Kaplan J, McCandlish S, Henighan T, et al. Scaling laws for neural language models [J]. arXiv preprint arXiv:2001.08361, 2020.
[7]. Liu Y, Han T, Ma S, et al. Summary of ChatGPT-related research and perspective towards the future of large language models [J]. Meta-Radiology, 2023, 1(2): 100017.
[8]. Wei J, Bosma M, Zhao V Y, et al. Finetuned language models are zero-shot learners [J]. arXiv preprint arXiv:2109.01652, 2021.
[9]. Jang J, Ye S, Yang S, et al. Towards continual knowledge learning of language models [J]. arXiv preprint arXiv:2110.03215, 2021.
[10]. Jacovi A, Goldberg Y. Aligning faithful interpretations with their social attribution [J]. Transactions of the Association for Computational Linguistics, 2021, 9: 294-310.
[11]. Zheng Z, Ning K, Wang Y, et al. A survey of large language models for code: Evolution, benchmarking, and future trends [J]. arXiv preprint arXiv:2311.10372, 2023.
[12]. Taylor R, Kardas M, Cucurull G, et al. Galactica: A large language model for science [J]. arXiv preprint arXiv:2211.09085, 2022.
[13]. Aoki G. Large language models in politics and democracy: A comprehensive survey [J]. arXiv preprint arXiv:2412.04498, 2024.