References
[1]. Lv, Q., et al. (2024) Coarse-to-Fine Highlighting: Reducing Knowledge Hallucination in Large Language Models. Proceedings of the International Conference on Machine Learning (ICML).
[2]. Maynez, J., et al. (2020) On Faithfulness and Factuality in Abstractive Summarization. Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
[3]. Shuster, K., et al. (2021) Retrieval Augmentation Reduces Hallucination in Conversation. Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
[4]. Ji, Z., et al. (2023) Survey of Hallucination in Natural Language Generation. ACM Computing Surveys.
[5]. Zhang, M., et al. (2024) How Language Model Hallucinations Can Snowball. Proceedings of the International Conference on Machine Learning (ICML).
[6]. Wu, M., et al. (2024) Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models. Proceedings of the International Conference on Machine Learning (ICML).
[7]. Holtzman, A., et al. (2020) The Curious Case of Neural Text Degeneration. Proceedings of the International Conference on Learning Representations (ICLR).
[8]. Pickering, M., & Garrod, S. (2004) Toward a Mechanistic Psychology of Dialogue. Behavioral and Brain Sciences.
[9]. Lee, N., et al. (2022) Prompt Sensitivity in Large Language Models. arXiv:2212.10559.
[10]. Barbieri, F., et al. (2018) Modelling the Semantics of Emoji. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
[11]. Vaswani, A., et al. (2017) Attention is All You Need. Proceedings of the Conference on Neural Information Processing Systems (NeurIPS).
[12]. Lin, S., et al. (2022) TruthfulQA: Measuring How Models Mimic Human Falsehoods. Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
[13]. Pagnoni, A., et al. (2021) Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics. Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
[14]. Lin, S., et al. (2022) Teaching Models to Refuse Unknowns. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
[15]. Zhao, Z., et al. (2023) Revisiting Chain-of-Thought Reasoning. Proceedings of the Conference on Neural Information Processing Systems (NeurIPS).
[16]. Kim, B., et al. (2023) Reducing Hallucination via Data Attribution. Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).