International Journal of Advanced Computer Science and Applications (IJACSA), Volume 17 Issue 1, 2026.
Abstract: Sign language recognition is a critical component of assistive technologies for individuals with hearing and speech impairments. While vision-based approaches have shown promising performance, their reliability is often affected by illumination variations, occlusions, and background complexity. Wearable sensor–based solutions, particularly smart gloves integrating flex sensors and inertial measurement units (IMUs), provide a more stable alternative by directly capturing hand articulation and motion patterns. However, existing sensor-based methods frequently suffer from temporal instability, noise sensitivity, and limited discrimination among structurally similar gestures, which is especially challenging in Hijaiyah sign language, where many letters differ only by subtle finger configurations. This study proposes a robust real-time Multimodal Polynomial Fusion (MPF) framework for sensor-based sign language recognition using a flex–IMU smart glove, with a specific focus on Hijaiyah gestures as the application domain. The proposed framework applies nonlinear polynomial temporal smoothing within a sliding window to stabilize raw flex–IMU trajectories, followed by multimodal fusion to enhance gesture separability and temporal consistency. A large-scale multimodal dataset comprising 231,000 samples collected from 33 users performing 28 Hijaiyah gesture classes was constructed to enable rigorous subject-independent evaluation. Experimental results obtained from offline testing, session-aware analysis, and real-time streaming scenarios demonstrate that the proposed MPF framework consistently outperforms a baseline approach based on raw normalized signals. The proposed method improves recognition accuracy from 92.42% to 96.32%, while also achieving higher macro-level precision, recall, and F1-score. 
Furthermore, MPF significantly reduces misclassification rates and improves temporal stability, particularly for fine-grained Hijaiyah gestures with similar structural patterns. These results confirm that the proposed framework provides a robust and reliable solution for real-time wearable sign language recognition and offers practical benefits for Hijaiyah-based assistive communication systems.
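The abstract's core preprocessing step, nonlinear polynomial temporal smoothing within a sliding window over the flex–IMU signal, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the window size and polynomial degree are assumed values chosen for demonstration, and the function name is hypothetical.

```python
import numpy as np

def polynomial_smooth(signal, window=9, degree=3):
    """Sliding-window polynomial smoothing (illustrative sketch).

    For each sample, fit a degree-`degree` polynomial to the
    surrounding `window` samples of one sensor channel and take
    the fitted value at the window centre. Window size and degree
    here are assumptions, not the paper's reported settings.
    """
    half = window // 2
    # Replicate edge samples so every position has a full window.
    padded = np.pad(np.asarray(signal, dtype=float), half, mode="edge")
    t = np.arange(window)
    out = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        coeffs = np.polyfit(t, padded[i:i + window], degree)
        out[i] = np.polyval(coeffs, half)  # fitted centre value
    return out
```

In a multimodal setting, each flex-sensor and IMU channel would be smoothed independently in this way before the per-channel features are fused (for example, by concatenation) for classification; the fusion details themselves are specific to the paper and not reproduced here.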
Dadang Iskandar Mulyana, Edi Noersasongko, Guruh Fajar Shidik and Pujiono. “A Robust Real-Time Multimodal Polynomial Fusion Framework for Sensor-Based Sign Language Recognition Using Flex–IMU Smart Gloves”. International Journal of Advanced Computer Science and Applications (IJACSA) 17.1 (2026). http://dx.doi.org/10.14569/IJACSA.2026.0170150
@article{Mulyana2026,
title = {A Robust Real-Time Multimodal Polynomial Fusion Framework for Sensor-Based Sign Language Recognition Using Flex–IMU Smart Gloves},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2026.0170150},
url = {http://dx.doi.org/10.14569/IJACSA.2026.0170150},
year = {2026},
publisher = {The Science and Information Organization},
volume = {17},
number = {1},
author = {Dadang Iskandar Mulyana and Edi Noersasongko and Guruh Fajar Shidik and Pujiono}
}
Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.