Stars
This repo contains implementations of different architectures for emotion recognition in conversations.
Type 1 and Interval Type 2 Fuzzy Logic Systems in Python
EmoVerse: Enhancing Multimodal Large Language Models for Affective Computing via Multitask Learning
Multimodal Fusion, Multimodal Sentiment Analysis
Official Code for Spatio-Temporal Fuzzy-oriented Multi-modal Meta-learning for Fine-grained Emotion Recognition
Quantum Fuzzy Neural Network for Multimodal Sentiment and Sarcasm Detection
Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning
PyTorch implementation for the paper: Multivariate, Multi-frequency and Multimodal: Rethinking Graph Neural Networks for Emotion Recognition in Conversation, CVPR 2023.
The official implementation for the paper: Multimodal Emotion Recognition Calibration in Conversations, MM '24.
Scientific Reports (open access), published 14 February 2025
Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum
[AAAI-2025] The official implementation of MSE-Adapter
MMER
[AAAI 2025] Official PyTorch implementation of the paper "Bridging the Gap for Test-Time Multimodal Sentiment Analysis"
[EAAI] - "Enhancing multimodal emotion recognition with dynamic fuzzy membership and attention fusion" by Nhut Minh Nguyen, Trung Minh Nguyen, Trung Thanh Nguyen, Phuong-Nam Tran, Truong Pham, Linh…
[ACM MM'24] Ada2I: Enhancing Modality Balance for Multimodal Conversational Emotion Recognition