References
[1]. J. Deng and F. Ren, "A survey of textual emotion recognition and its challenges," IEEE Trans. Affect. Comput., vol. 14, no. 1, pp. 49-67, Jan.-Mar. 2023, doi: 10.1109/TAFFC.2021.3053275.
[2]. W. Hamilton, Z. Ying and J. Leskovec, "Inductive representation learning on large graphs," in Proc. Adv. Neural Inf. Process. Syst., vol. 30, 2017.
[3]. B.-H. Su and C.-C. Lee, "Unsupervised cross-corpus speech emotion recognition using a multi-source cycle-GAN," IEEE Trans. Affect. Comput., vol. 14, no. 3, pp. 1991-2004, Jul./Sep. 2023.
[4]. Y. Luo and B.-L. Lu, "EEG data augmentation for emotion recognition using a conditional Wasserstein GAN," in Proc. 40th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), 2018, pp. 2535-2538.
[5]. B. Li, Y. Liu and X. Wang, "Gradient harmonized single-stage detector," in Proc. AAAI Conf. Artif. Intell., vol. 33, no. 1, pp. 8577-8584, 2019.
[6]. S. Poria, D. Hazarika, N. Majumder, G. Naik, E. Cambria and R. Mihalcea, "MELD: A multimodal multi-party dataset for emotion recognition in conversations," in Proc. 57th Annu. Meeting Assoc. Comput. Linguistics, pp. 527-536, 2019.
[7]. Y. Zhang, Y. Li, X. Liu et al., "Leave no stone unturned: Mine extra knowledge for imbalanced facial expression recognition," in Proc. Adv. Neural Inf. Process. Syst., vol. 36, pp. 14414-14426, 2023.
[8]. Q. Li, P. Huang, Y. Xu et al., "Generating and encouraging: An effective framework for solving class imbalance in multimodal emotion recognition conversation," Eng. Appl. Artif. Intell., vol. 133, p. 108523, 2024.
[9]. K. Singh, M. K. Ahirwal and M. Pandey, "Subject wise data augmentation based on balancing factor for quaternary emotion recognition through hybrid deep learning model," Biomed. Signal Process. Control, vol. 86, p. 105075, 2023.
[10]. T. Shi and S.-L. Huang, "MultiEMO: An attention-based correlation-aware multimodal fusion framework for emotion recognition in conversations," in Proc. 61st Annu. Meeting Assoc. Comput. Linguistics (Vol. 1: Long Papers), 2023, pp. 14752-14766.
[11]. C. Busso et al., "IEMOCAP: Interactive emotional dyadic motion capture database," Lang. Resour. Eval., vol. 42, no. 4, pp. 335-359, 2008.
[12]. W. Ai, Y. Shou, T. Meng et al., "DER-GCN: Dialog and event relation-aware graph convolutional neural network for multimodal dialog emotion recognition," IEEE Trans. Neural Netw. Learn. Syst., 2024.
[13]. T. Meng, Y. Shou, W. Ai et al., "Deep imbalanced learning for multimodal emotion recognition in conversations," IEEE Trans. Artif. Intell., 2024.
[14]. Z. Zhang, S. Zhong and Y. Liu, "Beyond mimicking under-represented emotions: Deep data augmentation with emotional subspace constraints for EEG-based emotion recognition," in Proc. AAAI Conf. Artif. Intell., vol. 38, no. 9, pp. 10252-10260, 2024.
[15]. S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt and I. Patras, "DEAP: A database for emotion analysis using physiological signals," IEEE Trans. Affect. Comput., vol. 3, no. 1, pp. 18-31, 2011.
[16]. A. Li, M. Wu, R. Ouyang et al., "A multimodal-driven fusion data augmentation framework for emotion recognition," IEEE Trans. Artif. Intell., 2025.
[17]. P. Schmidt, A. Reiss, R. Duerichen, C. Marberger and K. Van Laerhoven, "Introducing WESAD, a multimodal dataset for wearable stress and affect detection," in Proc. 20th ACM Int. Conf. Multimodal Interaction, 2018, pp. 400-408.