The Science and Information (SAI) Organization
DOI: 10.14569/IJACSA.2026.0170150

A Robust Real-Time Multimodal Polynomial Fusion Framework for Sensor-Based Sign Language Recognition Using Flex–IMU Smart Gloves

Author 1: Dadang Iskandar Mulyana
Author 2: Edi Noersasongko
Author 3: Guruh Fajar Shidik
Author 4: Pujiono

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 17 Issue 1, 2026.


Abstract: Sign language recognition is a critical component of assistive technologies for individuals with hearing and speech impairments. While vision-based approaches have shown promising performance, their reliability is often affected by illumination variations, occlusions, and background complexity. Wearable sensor-based solutions, particularly smart gloves integrating flex sensors and inertial measurement units (IMUs), provide a more stable alternative by directly capturing hand articulation and motion patterns. However, existing sensor-based methods frequently suffer from temporal instability, noise sensitivity, and limited discrimination among structurally similar gestures, which is especially challenging in Hijaiyah sign language, where many letters differ only by subtle finger configurations. This study proposes a robust real-time Multimodal Polynomial Fusion (MPF) framework for sensor-based sign language recognition using a flex–IMU smart glove, with a specific focus on Hijaiyah gestures as the application domain. The proposed framework applies nonlinear polynomial temporal smoothing within a sliding window to stabilize raw flex–IMU trajectories, followed by multimodal fusion to enhance gesture separability and temporal consistency. A large-scale multimodal dataset comprising 231,000 samples collected from 33 users performing 28 Hijaiyah gesture classes was constructed to enable rigorous subject-independent evaluation. Experimental results obtained from offline testing, session-aware analysis, and real-time streaming scenarios demonstrate that the proposed MPF framework consistently outperforms a baseline approach based on raw normalized signals. The proposed method improves recognition accuracy from 92.42% to 96.32%, while also achieving higher macro-level precision, recall, and F1-score. Furthermore, MPF significantly reduces misclassification rates and improves temporal stability, particularly for fine-grained Hijaiyah gestures with similar structural patterns. These results confirm that the proposed framework provides a robust and reliable solution for real-time wearable sign language recognition and offers practical benefits for Hijaiyah-based assistive communication systems.

Keywords: Sign language recognition; Hijaiyah sign language; wearable sensors; smart glove; multimodal fusion; polynomial temporal smoothing; real-time recognition
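The abstract describes two processing steps: polynomial temporal smoothing over a sliding window of raw flex–IMU signals, followed by multimodal fusion. The paper's exact formulation is not given here, so the following is only a minimal sketch of that general idea, assuming a per-channel least-squares polynomial fit evaluated at the window's newest sample and simple feature concatenation as the fusion step; the function names, window length, and polynomial degree are illustrative choices, not the authors' implementation.

```python
import numpy as np

def polynomial_smooth(window, degree=2):
    """Fit a low-order polynomial to each sensor channel over a sliding
    window and return the fitted value at the newest sample.
    `window` has shape (T, C): T time steps, C sensor channels."""
    t = np.arange(window.shape[0])
    smoothed = np.empty(window.shape[1])
    for c in range(window.shape[1]):
        coeffs = np.polyfit(t, window[:, c], degree)   # least-squares fit
        smoothed[c] = np.polyval(coeffs, t[-1])        # evaluate at last step
    return smoothed

def fuse(flex_window, imu_window, degree=2):
    """Hypothetical fusion step: smooth each modality separately,
    then concatenate into a single feature vector for the classifier."""
    return np.concatenate([
        polynomial_smooth(flex_window, degree),
        polynomial_smooth(imu_window, degree),
    ])

# Example: 5 flex channels and 6 IMU channels over a 15-sample window.
rng = np.random.default_rng(0)
flex = rng.normal(size=(15, 5))
imu = rng.normal(size=(15, 6))
features = fuse(flex, imu)
print(features.shape)  # (11,)
```

Because the fit is a least-squares polynomial, a channel that already follows a low-order trend is reproduced exactly while high-frequency noise is attenuated, which matches the temporal-stability motivation stated in the abstract.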

Dadang Iskandar Mulyana, Edi Noersasongko, Guruh Fajar Shidik and Pujiono. “A Robust Real-Time Multimodal Polynomial Fusion Framework for Sensor-Based Sign Language Recognition Using Flex–IMU Smart Gloves”. International Journal of Advanced Computer Science and Applications (IJACSA) 17.1 (2026). http://dx.doi.org/10.14569/IJACSA.2026.0170150

@article{Mulyana2026,
title = {A Robust Real-Time Multimodal Polynomial Fusion Framework for Sensor-Based Sign Language Recognition Using Flex–IMU Smart Gloves},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2026.0170150},
url = {http://dx.doi.org/10.14569/IJACSA.2026.0170150},
year = {2026},
publisher = {The Science and Information Organization},
volume = {17},
number = {1},
author = {Dadang Iskandar Mulyana and Edi Noersasongko and Guruh Fajar Shidik and Pujiono}
}



Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
