DOI: 10.14569/IJACSA.2025.0160623

A Novel Multi-Modal Deep Learning Approach for Real-Time Live Event Detection Using Video and Audio Signals

Author 1: Pavadareni R
Author 2: A. Prasina
Author 3: Samuthira Pandi V
Author 4: Ibrahim Mohammad Khrais
Author 5: Alok Jain
Author 6: Karthikeyan

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 16 Issue 6, 2025.


Abstract: Recent work on live event detection has focused largely on single-modal systems, most of which rely on audio signals and classify events from Mel-spectrogram representations. Although effective in some settings, single-modal systems struggle to capture the complexity of real-world events, which reduces their reliability in dynamically changing environments. This study presents a novel multi-modal deep learning approach that combines audio and visual signals to improve the accuracy and robustness of live event detection. The core of the approach is a two-stream LSTM pipeline that models both modalities with temporal consistency while maintaining real-time throughput through feature-level fusion. Unlike many recent transformer-based models, the architecture relies on well-established components (MFCC, a 2D CNN ResNet, and LSTM) in a latency-aware, deployment-friendly design suited to embedded and edge-level event detection. Experiments use the AVE (Audio-Visual Event) dataset, which covers 28 event categories. In the visual stream, video frames are encoded by a 2D CNN ResNet and their temporal dynamics are modeled by an LSTM; in the audio stream, MFCC (Mel Frequency Cepstral Coefficient) features are extracted and passed to a second LSTM to capture temporal dependencies. The features from both streams are concatenated for fusion, exploiting the complementary nature of the audio and visual inputs. The fused model achieves 85.19% accuracy on audio-visual event detection, outperforming single-modal baselines (audio-only and video-only models) by combining spatial and temporal cues from both modalities.
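
As a concrete illustration of the pipeline described in the abstract, the following minimal PyTorch sketch shows one way such a two-stream architecture could be assembled: a ResNet-18 backbone encodes sampled video frames, each stream feeds an LSTM, and the final hidden states are concatenated for feature-level fusion before a 28-way classifier. The layer sizes, frame counts, and the choice of ResNet-18 are illustrative assumptions, not the authors' reported configuration.

# Minimal sketch of a two-stream audio-visual fusion model, assuming a
# ResNet-18 frame encoder and precomputed MFCC features (e.g. via librosa).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoStreamEventDetector(nn.Module):
    def __init__(self, num_classes=28, n_mfcc=40, hidden=256):
        super().__init__()
        # Visual stream: a 2D CNN (ResNet) extracts per-frame features,
        # an LSTM models their temporal order.
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()           # keep the 512-d pooled feature
        self.frame_encoder = backbone
        self.video_lstm = nn.LSTM(512, hidden, batch_first=True)
        # Audio stream: MFCC frames are fed to a second LSTM to capture
        # temporal dependencies in the audio signal.
        self.audio_lstm = nn.LSTM(n_mfcc, hidden, batch_first=True)
        # Feature-level (early) fusion: concatenate the two stream summaries.
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, frames, mfcc):
        # frames: (batch, T_v, 3, 224, 224) sampled video frames
        # mfcc:   (batch, T_a, n_mfcc) MFCC sequence for the same clip
        b, tv = frames.shape[:2]
        f = self.frame_encoder(frames.flatten(0, 1)).view(b, tv, -1)
        _, (hv, _) = self.video_lstm(f)       # last hidden state, video stream
        _, (ha, _) = self.audio_lstm(mfcc)    # last hidden state, audio stream
        fused = torch.cat([hv[-1], ha[-1]], dim=1)
        return self.classifier(fused)

# Example forward pass with dummy tensors (8 video frames, 100 MFCC frames).
model = TwoStreamEventDetector()
logits = model(torch.randn(2, 8, 3, 224, 224), torch.randn(2, 100, 40))
print(logits.shape)  # torch.Size([2, 28])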

Keywords: Multi-modality; feature fusion; early fusion; concatenation of audio-video signals; convolutional neural network (CNN); Long Short-Term Memory (LSTM); Mel Frequency Cepstral Coefficients (MFCC); ResNet (Residual Network)

Pavadareni R, A. Prasina, Samuthira Pandi V, Ibrahim Mohammad Khrais, Alok Jain and Karthikeyan, “A Novel Multi-Modal Deep Learning Approach for Real-Time Live Event Detection Using Video and Audio Signals,” International Journal of Advanced Computer Science and Applications (IJACSA), 16(6), 2025. http://dx.doi.org/10.14569/IJACSA.2025.0160623

@article{R2025,
title = {A Novel Multi-Modal Deep Learning Approach for Real-Time Live Event Detection Using Video and Audio Signals},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2025.0160623},
url = {http://dx.doi.org/10.14569/IJACSA.2025.0160623},
year = {2025},
publisher = {The Science and Information Organization},
volume = {16},
number = {6},
author = {Pavadareni R and A. Prasina and Samuthira Pandi V and Ibrahim Mohammad Khrais and Alok Jain and Karthikeyan}
}



Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
