The Science and Information (SAI) Organization
DOI: 10.14569/IJACSA.2025.0160694

MITG-CU: Multimodal Interaction Temporal Graphs Approach for Conversational Emotion Recognition

Author 1: Qian Xing
Author 2: Yaqin Qiu
Author 3: Minglu Chi
Author 4: Xuewei Li
Author 5: Changyi Gao

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 16, Issue 6, 2025.


Abstract: In conversational emotion recognition, the complementary relationship between context information and multimodal data is often not fully exploited, which limits the comprehensiveness and accuracy of recognition. To address these challenges, this paper proposes a Multimodal Interactive Temporal Graph Conversation Understanding model (MITG-CU) built on textual, audio, and visual modalities. First, pre-extracted textual, audio, and visual features are fed into a Transformer, whose attention mechanism captures cross-modal contextual correlations. Structural relationships and temporal dependencies between utterances are then captured by a local-level relational temporal graph module, while a global-level pairwise cross-modal interaction mechanism dynamically adjusts inter-modal interaction weights. Integrating these two complementary hierarchical structures achieves hierarchical multimodal information fusion and improves the model's adaptability to complex conversation scenarios. Finally, features are fused through a gating mechanism and emotion classification is performed. Experimental results demonstrate that the proposed model outperforms six common baseline methods on accuracy, precision, recall, and F1-score; in particular, Weighted-F1 and accuracy improve by 0.28% and 0.39%, respectively, confirming the effectiveness of the model.
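The two fusion steps named in the abstract — Transformer-style attention between modalities, followed by gate-weighted feature fusion — can be sketched in a minimal form. This is an illustrative sketch only, not the authors' implementation: the function names (`cross_modal_attention`, `gated_fusion`) are assumed, and random weights stand in for the trained parameters of the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query, context):
    """Scaled dot-product attention: each utterance feature in `query`
    (shape (n, d)) attends over the utterances of another modality
    `context` (shape (n, d)), yielding context-enriched features."""
    d = query.shape[-1]
    scores = query @ context.T / np.sqrt(d)   # (n, n) attention scores
    return softmax(scores, axis=-1) @ context  # (n, d) attended features

def gated_fusion(feats):
    """Gate-weighted sum of modality feature matrices.

    `feats` is a list of m arrays, each (n, d). One scalar gate per
    modality (random here; trained in the real model) is normalized
    with a softmax and used to weight the sum."""
    stacked = np.stack(feats)                  # (m, n, d)
    gate_logits = rng.standard_normal(stacked.shape[0])
    gates = softmax(gate_logits)               # (m,) gates summing to 1
    return np.tensordot(gates, stacked, axes=1)  # (n, d) fused features

# Toy usage: 5 utterances, 8-dim features per modality.
n, d = 5, 8
text = rng.standard_normal((n, d))
audio = rng.standard_normal((n, d))
visual = rng.standard_normal((n, d))

text_att = cross_modal_attention(text, audio)  # text attends to audio
fused = gated_fusion([text_att, audio, visual])
```

The fused (n, d) matrix would then feed a classifier head; the paper's local relational temporal graph module and pairwise interaction weighting sit between these two steps and are omitted here.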

Keywords: Emotion recognition; multimodal interaction; relational temporal graph; cross-modal interaction; feature fusion

Qian Xing, Yaqin Qiu, Minglu Chi, Xuewei Li and Changyi Gao, “MITG-CU: Multimodal Interaction Temporal Graphs Approach for Conversational Emotion Recognition,” International Journal of Advanced Computer Science and Applications (IJACSA), 16(6), 2025. http://dx.doi.org/10.14569/IJACSA.2025.0160694

@article{Xing2025,
title = {MITG-CU: Multimodal Interaction Temporal Graphs Approach for Conversational Emotion Recognition},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2025.0160694},
url = {http://dx.doi.org/10.14569/IJACSA.2025.0160694},
year = {2025},
publisher = {The Science and Information Organization},
volume = {16},
number = {6},
author = {Qian Xing and Yaqin Qiu and Minglu Chi and Xuewei Li and Changyi Gao}
}



Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
