References
[1]. Brown, T., et al. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
[2]. Park, J. S., et al. (2023). Generative agents: Interactive simulacra of human behavior. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology.
[3]. Akoury, N., Yang, Q., and Iyyer, M. (2023). A framework for exploring player perceptions of LLM-generated dialogue in commercial video games. Findings of the Association for Computational Linguistics: EMNLP 2023.
[4]. Schick, T., et al. (2023). Toolformer: Language models can teach themselves to use tools. Advances in Neural Information Processing Systems, 36: 68539-68551.
[5]. Weidinger, L., et al. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.
[6]. Wang, Z., Masri, Y., Malarvizhi, S. A., et al. (2025). Optimizing context-based location extraction by tuning open-source LLMs with RAG. International Journal of Digital Earth, 18(1).
[7]. Aman, S. S., Kone, T., N'guessan, G. B., et al. (2025). Learning to represent causality in recommender systems driven by large language models (LLMs). Discover Applied Sciences, 7(9): 960.
[8]. Dennstädt, F., Windisch, P., Filchenko, I., et al. (2025). Consensus finding among LLMs to retrieve information about oncological trials. Studies in Health Technology and Informatics, 329: 239-243.
[9]. Golnari, P., Prantzalos, K., Upadhyaya, D., et al. (2025). Human in the loop: Embedding medical expert input in large language models for clinical applications. Studies in Health Technology and Informatics, 329: 658-662.
[10]. Wu, G., Zheng, L., Xie, H., et al. (2025). Large language model empowered privacy-protected framework for PHI annotation in clinical notes. Studies in Health Technology and Informatics, 329: 876-880.