The Science and Information (SAI) Organization
DOI: 10.14569/IJACSA.2025.0160691

Enhanced Feature Extraction for Accurate Human Action Recognition

Author 1: Tarek Elgaml
Author 2: Ali Saudi
Author 3: Mohamed Taha

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 16, Issue 6, 2025.


Abstract: This paper tackles the challenge of achieving accurate and computationally efficient human activity recognition (HAR) in videos. Existing methods often fail to effectively balance spatial details (e.g., body poses) with long-term temporal dynamics (e.g., motion patterns), particularly in real-world scenarios characterized by cluttered backgrounds and viewpoint variations. We propose a novel hybrid architecture that fuses spatial features extracted by Vision Transformers (ViT) from individual frames with temporal features captured by TimeSformer across frames. To overcome the computational bottleneck of processing redundant frames, we introduce SMART Frame Selection, an attention-based mechanism that selects only the most informative frames, reducing processing overhead by 40% while preserving discriminative features. Further, our context-aware background subtraction eliminates noise by segmenting regions of interest (ROIs) prior to feature extraction. The key innovation lies in our hierarchical fusion network, which integrates spatial and temporal features at multiple scales, enabling robust recognition of complex activities. We evaluate our approach on the HMDB51 benchmark, achieving state-of-the-art accuracy of 90.08%, outperforming competing methods such as CNN-LSTM (85.2%), GeoDeformer (88.3%), and k-ViViT (89.1%) in precision, recall, and F1-score. Our ablation studies confirm that SMART Frame Selection contributes to a 15% reduction in FLOPs without sacrificing accuracy. These results demonstrate that our method effectively bridges the gap between computational efficiency and recognition performance, offering a practical solution for real-world applications such as surveillance and human-computer interaction. Future work will extend this framework to multi-modal inputs (e.g., depth sensors) for enhanced robustness.
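The abstract describes SMART Frame Selection as an attention-based mechanism that keeps only the most informative frames. The paper's page gives no implementation details, so the following is a minimal NumPy sketch of one plausible reading: score each frame's embedding against a learned query via scaled dot-product attention and keep the top-k frames in temporal order. All names, shapes, and the query vector here are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def smart_frame_selection(frame_feats: np.ndarray, query: np.ndarray, k: int) -> np.ndarray:
    """Select the k most informative frames by attention score.

    frame_feats: (T, D) per-frame embeddings (e.g., pooled ViT features)
    query:       (D,) query vector summarising "informativeness"
    k:           number of frames to keep
    Returns indices of the selected frames, in temporal order.
    """
    d = frame_feats.shape[1]
    # Scaled dot-product attention score, one scalar per frame
    scores = frame_feats @ query / np.sqrt(d)
    # Softmax normalisation (not strictly needed for top-k, but mirrors
    # how an attention head would weight the frames)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Keep the k highest-weighted frames, preserving temporal order
    return np.sort(np.argsort(weights)[-k:])

# Toy example: 10 frames with 4-dim features; frames 3 and 7 carry
# a strong signal, the rest are low-amplitude noise.
rng = np.random.default_rng(0)
feats = rng.normal(scale=0.1, size=(10, 4))
feats[3] += 1.0
feats[7] += 1.0
query = np.ones(4)
selected = smart_frame_selection(feats, query, k=2)
print(selected)  # the two signal-bearing frames: [3 7]
```

Downstream, only the selected frames would be passed to the temporal branch, which is where the reported reduction in processing overhead would come from.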

Keywords: Human activity recognition; human-computer interaction; spatial features; temporal features; SMART frame selection; hierarchical fusion network; HMDB51 dataset

Tarek Elgaml, Ali Saudi and Mohamed Taha, “Enhanced Feature Extraction for Accurate Human Action Recognition,” International Journal of Advanced Computer Science and Applications (IJACSA), 16(6), 2025. http://dx.doi.org/10.14569/IJACSA.2025.0160691

@article{Elgaml2025,
  title     = {Enhanced Feature Extraction for Accurate Human Action Recognition},
  author    = {Tarek Elgaml and Ali Saudi and Mohamed Taha},
  journal   = {International Journal of Advanced Computer Science and Applications},
  publisher = {The Science and Information Organization},
  year      = {2025},
  volume    = {16},
  number    = {6},
  doi       = {10.14569/IJACSA.2025.0160691},
  url       = {http://dx.doi.org/10.14569/IJACSA.2025.0160691}
}



Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.
