The Science and Information (SAI) Organization
DOI: 10.14569/IJACSA.2024.0150440
PDF

Multimodal Feature Fusion Video Description Model Integrating Attention Mechanisms and Contrastive Learning

Author 1: Wang Zhihao
Author 2: Che Zhanbin

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 15, Issue 4, 2024.

  • Abstract and Keywords
  • How to Cite this Article
  • BibTeX Source

Abstract: To address the significant redundancy in the spatiotemporal features extracted by multimodal video description methods and the substantial semantic gap between different modalities within video data, this paper proposes a two-stage video description approach built upon the TimeSformer model (Multimodal Feature Fusion video description model integrating attention mechanisms and Contrastive Learning, MFFCL). The TimeSformer encoder extracts spatiotemporal attention features from the input video and performs feature selection. Contrastive learning then establishes semantic associations between the spatiotemporal attention features and the textual descriptions. Finally, GPT-2 generates the descriptive text. Experimental validation on the MAVD, MSR-VTT, and VATEX datasets against several typical benchmark methods, including Swin-BERT and GIT, shows that the proposed method achieves strong performance on the Bleu-4, METEOR, ROUGE-L, and CIDEr metrics, that the spatiotemporal attention features extracted by the model fully express the video content, and that the language model generates complete video description text.
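The abstract does not spell out the training objective, but the semantic-alignment step it describes is commonly implemented as a symmetric contrastive (InfoNCE-style) loss over paired video and text embeddings. The sketch below illustrates that general technique only, not the authors' code; the function name `contrastive_loss` and the temperature value are assumptions for illustration.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired (video, text) embeddings.

    Row i of video_emb and row i of text_emb are assumed to describe the
    same clip; all other rows in the batch serve as negatives.
    """
    v = l2_normalize(np.asarray(video_emb, dtype=float))
    t = l2_normalize(np.asarray(text_emb, dtype=float))
    logits = v @ t.T / temperature           # (B, B) similarity matrix
    labels = np.arange(len(logits))          # matching pairs lie on the diagonal

    def cross_entropy(lg, lb):
        lg = lg - lg.max(axis=1, keepdims=True)            # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lb)), lb].mean()

    # Average the video-to-text and text-to-video retrieval directions.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

Minimizing this loss pulls each clip's embedding toward its own caption and away from the other captions in the batch, which is how the cross-modal semantic association described in the abstract is typically learned.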

Keywords: Multimodal feature fusion; video description; spatiotemporal attention; contrastive learning

Wang Zhihao and Che Zhanbin, “Multimodal Feature Fusion Video Description Model Integrating Attention Mechanisms and Contrastive Learning,” International Journal of Advanced Computer Science and Applications (IJACSA), 15(4), 2024. http://dx.doi.org/10.14569/IJACSA.2024.0150440

@article{Zhihao2024,
title = {Multimodal Feature Fusion Video Description Model Integrating Attention Mechanisms and Contrastive Learning},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2024.0150440},
url = {http://dx.doi.org/10.14569/IJACSA.2024.0150440},
year = {2024},
publisher = {The Science and Information Organization},
volume = {15},
number = {4},
author = {Wang Zhihao and Che Zhanbin}
}



Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.
