The Science and Information (SAI) Organization
IJACSA Volume 16 Issue 7

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

Paper 1: Enhancing Trust in Human-AI Collaboration: A Conceptual Review of Operator-AI Teamwork

Abstract: Trust is vital to collaborative work between operators and AI. Yet important elements of its nature remain to be investigated, including the dynamic process of trust formation, growth, decline, and even death between an operator and an AI. This review analyzes how the dynamic development of trust is shaped by team performance and its complex interaction with AI system characteristics, operator competencies, and contextual factors. It synthesizes current concepts, theories, and models to propose a framework for enhancing trust, and examines the current understanding of trust in human-AI collaboration, highlighting key gaps and limitations such as limited robustness, poor explainability, and ineffective collaboration design. The findings emphasize the importance of key components in this collaborative environment, including operator capabilities and AI technology characteristics, and underscore their impact on trust. This study advances understanding of the nature of operator-AI collaboration and the dynamics of trust calibration. Through a multidisciplinary approach, it also emphasizes the impact of explainability, transparency, and trust repair mechanisms, and highlights how operator-AI systems can be improved through design principles and the development of human competencies.

Author 1: Abduljaleel Hosawi
Author 2: Richard Stone

Keywords: Operator-AI collaboration; trust calibration; trust dynamics; explainability; transparency; trust repair mechanisms; cross-cultural trust; Clinical Decision Support Systems (CDSS); AI autonomy and influence; ethical considerations in AI; team performance; AI system characteristics; operator competencies; contextual factors; framework; limitations; robustness; Human-AI teaming; design principles; human competencies; predictability; reliability; understandability; over-reliance; automation bias; under-trust; trust measurement; trust erosion

PDF

Paper 2: Designing an Empathetic Conversational Agent for Student Mental Health: A Pilot Study

Abstract: This study presents the design and evaluation of a conversational agent aimed at supporting university students' mental health. We implemented two variants of a chatbot, referred to as A1 and A2, using large language models (LLMs). A1 employed a baseline prompt reflecting a structured yet neutral counseling style, while A2 was an enhanced version incorporating feedback from psychiatrists and findings from a preliminary study. Emotionally rich expressions, conversational variation, and mild self-disclosure were introduced in A2. A mixed-method user study with 18 participants was conducted to compare A1, A2, and human interactions. Results indicated that A2 significantly improved users' perception of empathy and engagement compared to A1, though human-level rapport was not fully achieved. These findings highlight the role of prompt design in creating emotionally responsive AI companions for mental health support.

Author 1: Kaichi Minami
Author 2: Choi Dongeun
Author 3: Panote Siriaraya
Author 4: Noriaki Kuwahara

Keywords: Conversational agent; mental health; large language models; prompt design; empathy; chatbot evaluation

PDF

Paper 3: Cross-Context Evaluation of an Indoor–Outdoor AR Navigation System in a University Campus Environment

Abstract: This paper presents a comparative evaluation of a mobile augmented reality (AR) navigation application designed for both indoor and outdoor university environments. Building on a previously validated system for indoor guidance, the current study extends the deployment to outdoor campus spaces without relying on GPS or additional infrastructure. Using visual positioning and spatial anchors, the same application provides real-time AR cues and audio instructions to support wayfinding across different spatial contexts. The principal aim of this study is to determine whether a unified AR navigation system can deliver a consistent, infrastructure-free user experience in both indoor and outdoor university environments. A within-subjects study was conducted with 256 university students who completed both indoor and outdoor navigation tasks. Usability and user acceptance were assessed using the System Usability Scale (SUS) and constructs from the Technology Acceptance Model (TAM). Results revealed consistent user experience across both contexts, with no statistically significant differences in perceived intuitiveness, usefulness, engagement, behavioral intention to reuse, or localization accuracy. A significant difference was found only in perceived AR content loading speed, which was rated slightly higher indoors. These findings demonstrate the feasibility of a unified AR navigation system for academic campuses and provide practical insights into its scalability and user-centered design.

Author 1: Toma Marian-Vladut
Author 2: Pascu Paul
Author 3: Turcu Corneliu Octavian

Keywords: Augmented reality; hybrid navigation; spatial computing; usability testing; indoor-outdoor contexts; ARCore; higher education

PDF

Paper 4: Brightness-Aware Generative Adversarial Network for Low-Light Image Enhancement

Abstract: Images captured in low-light conditions frequently exhibit poor visibility, excessive noise, and color distortion, which substantially impair both computer vision systems and human visual perception. Although numerous enhancement techniques have been developed, producing visually appealing results with well-maintained structural details and natural color reproduction continues to pose significant challenges. To address these limitations, this paper presents a Brightness-Aware Generative Adversarial Network (BA-GAN) for robust low-light image enhancement (LLIE). Our framework employs a U-Net-based generator that effectively captures multi-scale contextual features while preserving fine image details through skip connections. The key innovation lies in our novel Brightness Attention Mechanism Module, integrated within the decoder, which dynamically directs the network's focus to regions requiring substantial illumination correction. To ensure local photorealism, this paper adopts a PatchGAN discriminator architecture. The complete model is trained on the LOL dataset using a composite loss function combining: (1) adversarial loss for realistic image generation, (2) brightness attention loss to preserve brightness accuracy, and (3) perceptual loss to maintain structural and semantic fidelity. Extensive experiments validate that our BA-GAN outperforms current state-of-the-art methods, achieving superior performance on both quantitative metrics (PSNR: 20.7127, SSIM: 0.7963, LPIPS: 0.2271) and qualitative visual assessments. The enhanced images demonstrate significantly improved visibility while effectively suppressing noise and preserving natural color characteristics.

Author 1: Huafei Zhao
Author 2: Mideth Abisado

Keywords: Low-light image enhancement; generative adversarial networks; U-Net; PatchGAN; attention mechanism; deep learning

PDF
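
As a rough illustration of the composite training objective the BA-GAN abstract describes (adversarial + brightness attention + perceptual loss), the sketch below combines three scalar loss terms with weights. The weight values and function name are invented for illustration, not the authors' actual coefficients.

```python
# Hypothetical weighted combination of the three loss terms named in the
# abstract; the weights w_adv, w_ba, w_perc are illustrative assumptions.

def composite_loss(adv_loss, brightness_loss, perceptual_loss,
                   w_adv=1.0, w_ba=10.0, w_perc=5.0):
    """Weighted sum of per-batch scalar losses forming one training objective."""
    return w_adv * adv_loss + w_ba * brightness_loss + w_perc * perceptual_loss

total = composite_loss(0.5, 0.2, 0.1)  # 1.0*0.5 + 10.0*0.2 + 5.0*0.1 = 3.0
```

In practice such weights are tuned so that no single term dominates gradient updates.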

Paper 5: Power-Aware Video Transmission in 5G Telemedicine: Challenges, Solutions, and Future Directions

Abstract: The transformation of healthcare delivery through telemedicine has been significantly accelerated by the deployment of 5G networks and the integration of Internet of Things (IoT) technologies. These advancements enable real-time video-based medical services, including remote operations, urgent care, mobile procedures, and virtual consultations. However, the energy limitations of IoT devices raise significant concerns about long-term video quality and system efficiency. This survey reviews state-of-the-art solutions for enhancing video transmission in telemedicine environments that are constrained by energy consumption but enabled by 5G connectivity. The study discusses recent advances, including efficient video compression techniques, computation offloading via edge computing, adaptive streaming procedures, and dynamic 5G architecture-aware resource scheduling, such as network slicing. It examines the trade-off between power efficiency and video performance across various telehealth scenarios. Using a scenario-based analysis with a unifying integration framework, this work advances research into energy-efficient e-health systems.

Author 1: Qiuhong SHI
Author 2: Mingjing CAO
Author 3: Yun BAI

Keywords: Telemedicine; Internet of Things; energy efficiency; video transmission

PDF

Paper 6: Evaluating Intangible Software Quality Metrics for Effective Project Management Information Systems

Abstract: In modern organizational environments, project management information systems (PMIS) play an important role in ensuring project success by meeting user requirements, keeping overall costs within the planned budget, and delivering projects at the agreed time. Selecting a high-quality PMIS is vital for the success of project management. A software quality model tailored to PMIS, summarizing the intangible software quality metrics (ISQM) that are effective in evaluating a PMIS, can support better PMIS decision-making for project managers. However, there is limited research on PMIS-tailored quality models. To fill this gap, this study evaluates effective ISQM for PMIS quality assessment. There are two types of PMIS: web-based PMIS and PMIS software applications. To narrow the context, we focus on web-based PMIS, since they are widely used across the industry (for example, Microsoft Project and Jira). Given the features of PMIS, we explored only the tailored quality models that have been shown to be more appropriate for web-based PMIS than basic models such as ISO/IEC 9126, ISO/IEC 25010, and Bertoa's model. This research uses a qualitative approach to conduct commonality screening among these models and identify the key evaluation metrics, such as usability and functionality, and the corresponding qualitative attributes suitable for web-based PMIS quality assessment. The selected metrics and attributes form a web-based PMIS-tailored quality model. A scoring mechanism is introduced on top of this model, giving project managers a clear comparison among different web-based PMISs and enabling effective web-based PMIS selection for project management.

Author 1: Gu Xin
Author 2: Rozi Nor Haizan Nor
Author 3: Nur Ilyana Ismarau Tajuddin
Author 4: Khairi Azhar Aziz

Keywords: Web-based PMIS; software quality model; intangible software quality metrics; software quality

PDF
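
The scoring mechanism the abstract mentions could, in its simplest form, be a weighted average over per-metric scores. The metric names, weights, and scores below are hypothetical examples, not the paper's actual model.

```python
# Illustrative weighted scoring for comparing web-based PMIS candidates
# against quality metrics such as usability and functionality.

def pmis_score(scores, weights):
    """Weighted average of per-metric scores (all on the same scale)."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

weights = {"usability": 0.4, "functionality": 0.35, "reliability": 0.25}
candidate = {"usability": 8, "functionality": 9, "reliability": 7}
score = pmis_score(candidate, weights)  # weighted average, ≈ 8.1
```

Ranking several candidates by this score gives the "clear comparison" the abstract describes.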

Paper 7: CNN-LinATFormer: Enhancing PM2.5 Prediction Through Feature Assessment and Linear Attention Mechanism

Abstract: Atmospheric fine particulate matter (PM2.5) poses a serious threat to public health, and its accurate prediction is crucial for environmental management and pollution control. However, existing prediction methods have difficulty effectively capturing the complex nonlinear characteristics and multi-scale spatiotemporal dependencies of PM2.5 concentration changes. To address this challenge, this study proposes CNN-LinATFormer, a hybrid deep learning architecture that combines the local feature extraction capabilities of CNNs with the global dependency modeling advantages of a linear attention mechanism. The model innovatively introduces a feature evaluator that dynamically classifies environmental features into three categories and processes them through three specially designed branches: CNN feature extraction, channel attention, and linear attention fusion. In experiments on urban monitoring data covering 9 environmental feature dimensions from 2020 to 2023, CNN-LinATFormer outperforms existing methods on all evaluation indicators, with an RMSE of 8.42 μg/m³, 21.1% lower than the closest-performing CNN-RF model. Ablation experiments confirm the effectiveness of each component, especially the channel attention mechanism. Case analysis reveals that the model performs well in the low concentration range (RMSE of 3.12 μg/m³), while performance in the high pollution range (>150 μg/m³) still needs improvement. This study provides a new technical path for air quality prediction, of great value to environmental monitoring and public health protection.

Author 1: Yuchen Zhang
Author 2: Rajermani Thinakaran

Keywords: PM2.5 prediction; air quality forecasting; deep learning; convolutional neural network; linear attention mechanism; channel attention; feature assessment; hybrid model architecture; environmental monitoring; spatiotemporal modeling

PDF
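
The feature-evaluator idea in the abstract — score each environmental feature, then route it to one of three processing branches — can be sketched as a simple dispatch. The scoring thresholds, branch names, and feature scores here are invented for illustration.

```python
# Hypothetical routing of environmental features to three branches based on
# an importance score; thresholds and names are illustrative assumptions.

def route_features(importance, low=0.33, high=0.66):
    branches = {"cnn": [], "channel_attention": [], "linear_attention": []}
    for name, score in importance.items():
        if score < low:
            branches["cnn"].append(name)
        elif score < high:
            branches["channel_attention"].append(name)
        else:
            branches["linear_attention"].append(name)
    return branches

features = {"humidity": 0.2, "wind_speed": 0.5, "pm10": 0.9}
print(route_features(features))
# {'cnn': ['humidity'], 'channel_attention': ['wind_speed'], 'linear_attention': ['pm10']}
```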

Paper 8: Transformer Model Optimization Method for Multi-Modal Data Fusion

Abstract: This study proposes an optimized Transformer model for multimodal data fusion tasks, designed to address the challenges of fusing data from different modalities such as text, image, and audio. By improving data preprocessing methods and optimizing the model architecture and fusion strategies, the study significantly improves the model's performance on multimodal tasks. The experimental results show that the optimized model is superior to the benchmark model and other comparison models on key indicators such as accuracy, recall, F1 score, and AUC, demonstrating stronger performance and higher stability. In particular, the research addresses data heterogeneity and computing resource consumption by introducing a weighted fusion strategy, a multi-head self-attention mechanism, and a lightweight design. The handling of missing modal data is also optimized to enhance the robustness of the model. Despite these results, challenges such as data heterogeneity, computational efficiency, and missing modal data remain. Future research can further optimize modal alignment methods and data preprocessing techniques to improve performance in practical applications. This research provides new ideas and directions for the application and development of multimodal data fusion technology.

Author 1: Shanshan Yang
Author 2: Jie peng

Keywords: Transformer model; multimodal data fusion; model optimization; attention mechanism; adaptive fusion

PDF
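
A minimal sketch of the weighted fusion strategy the abstract names, including a simple fallback for missing modalities (renormalizing the weights over the modalities that are present). The vectors, modality names, and weights are illustrative assumptions.

```python
# Weighted fusion of same-length modality embeddings; modalities whose
# embedding is None are skipped and the remaining weights renormalized.

def weighted_fusion(embeddings, weights):
    present = [m for m in weights if embeddings.get(m) is not None]
    norm = sum(weights[m] for m in present)
    dim = len(embeddings[present[0]])
    fused = [0.0] * dim
    for m in present:
        w = weights[m] / norm  # renormalize over available modalities
        for i, v in enumerate(embeddings[m]):
            fused[i] += w * v
    return fused

emb = {"text": [1.0, 0.0], "image": [0.0, 1.0], "audio": None}  # audio missing
fused = weighted_fusion(emb, {"text": 0.5, "image": 0.3, "audio": 0.2})
```

With audio absent, the text and image weights renormalize to 0.625 and 0.375.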

Paper 9: Emotional Analysis and Interpretation of Music Conducting Works Based on Artificial Intelligence

Abstract: Emotional expression in music conducting works is the core of music performance. Based on deep learning, this study proposes an emotion analysis method for music conducting works and constructs a complete framework of audio feature extraction, emotion classification, model optimization, and evaluation. Works of different styles are selected; audio features are extracted using the short-time Fourier transform and mel-frequency cepstral coefficients, and emotion categories are classified with a convolutional neural network combined with a bidirectional long short-term memory structure. The experimental results show that the model performs well in recognizing joy, sadness, and tranquility, with an average classification accuracy of 88.5% and an F1-score exceeding 0.87 across the core emotional categories. Works of different styles differ in emotion classification: classical works tend toward tranquility and joy, while romantic works account for a higher proportion of the sadness category. Conducting style also affects the classification results, as different conductors' treatment of rhythm, dynamics, and timbre leads to differences in emotion recognition of the same works. These findings provide new methodological support for affective computing in music, with practical applications in music education, intelligent recommendation, and related fields.
Future research will optimize the model structure and combine multimodal data to improve the accuracy of music emotion recognition, providing broader scope for combining music analysis, interpretation technology, and artificial intelligence.

Author 1: He Huang
Author 2: Chengcheng Zhang
Author 3: Yun Liu
Author 4: Liyuan Liu

Keywords: Artificial intelligence; music conductor works; emotional analysis; interpretation technology

PDF

Paper 10: Object Recognition in Pond Environments Using Deep Learning

Abstract: Complicated underwater environments, with visibility limitations and challenging illumination conditions, pose significant challenges for underwater imaging and object recognition performance. These issues are especially critical for applications involving autonomous underwater vehicles (AUVs) or robotic systems performing object recognition during search-and-retrieval operations. Moreover, high-turbidity underwater image datasets, especially for pond environments, remain scarce. Therefore, this study focuses on establishing an underwater pond image dataset and evaluating a deep learning-based object recognition architecture, You Only Look Once version 5 (YOLOv5), in recognizing multiple objects in these images. The dataset contains 1116 self-captured underwater pond images, annotated with LabelImg for object recognition and dataset generation. Under varying depths, camera distances, and object angles, YOLOv5 reaches a mean average precision (mAP@50-95) of 87.96%, demonstrating its effectiveness for recognizing multiple objects in underwater pond environments.

Author 1: Suhaila Sari
Author 2: Ng Wei Jie
Author 3: Nik Shahidah Afifi Md Taujuddin
Author 4: Hazli Roslan
Author 5: Nabilah Ibrahim
Author 6: Mohd Helmy Abd Wahab

Keywords: Dataset; deep learning; object recognition; pond; underwater image; YOLO

PDF
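
The mAP@50-95 metric reported above is built on intersection-over-union (IoU), the standard overlap measure between a predicted and a ground-truth box. A plain implementation for axis-aligned boxes in (x1, y1, x2, y2) form:

```python
# Standard IoU: intersection area divided by union area of two boxes.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero if the boxes do not overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```

mAP@50-95 averages precision over IoU match thresholds from 0.50 to 0.95 in steps of 0.05.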

Paper 11: FoodSharePro: An Integrated Mobile Platform for Sustainable Food Donation and Decentralized Composting

Abstract: Food waste is a global issue: one-third of all food produced is lost every year. This study introduces FoodSharePro, an integrated mobile-based waste food management system that connects food donation and composting. The system enables efficient donation of surplus edible food through a mobile app and management of inedible waste through traditional composting methods. Built using Android Studio and Google Firebase, the app offers secure authentication, location-based rider matching via the Google Maps API, and real-time data synchronization. Donors can log donations, track status, and view delivery confirmations through a user-friendly dashboard, while riders are assigned tasks based on location and transport suitability. To minimize organic waste, composting hardware with temperature sensors and dehydration units supports the aerobic composting process. An evaluation among 20 users showed that FoodSharePro achieved the highest satisfaction rate (75%) compared to six other platforms, with a mean user satisfaction of 24.29% and a standard deviation of 24.25%. The results show that mobile technology can be integrated with grassroots waste management to reduce food loss and support sustainability.

Author 1: Jamil Abedalrahim Jamil Alsayaydeh
Author 2: Rex Bacarra
Author 3: Shamsul Fakhar Bin Abd Gani
Author 4: Serhij Mamchenko
Author 5: Safarudin Gazali Herawan

Keywords: Food waste management; food donation platforms; mobile applications; traditional composting; smart waste systems; user engagement; foodsharepro

PDF
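
The location-based rider matching the abstract mentions could, in its simplest form, pick the rider nearest to the pickup point by great-circle distance. The haversine formula and the coordinates and rider names below are illustrative; the actual app uses the Google Maps API rather than this calculation.

```python
import math

# Hypothetical nearest-rider matching by haversine (great-circle) distance.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_rider(pickup, riders):
    """Return the rider id whose location is closest to the pickup point."""
    return min(riders, key=lambda rider: haversine_km(*pickup, *riders[rider]))

riders = {"rider_a": (3.15, 101.70), "rider_b": (3.00, 101.40)}
print(nearest_rider((3.14, 101.69), riders))  # rider_a
```

A production matcher would also weigh transport suitability, as the abstract notes, not distance alone.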

Paper 12: Benefits and Challenges of Cloud Computing System in Malaysian Public Healthcare Organizations

Abstract: Cloud computing has become an emerging technology in information systems (IS) and has drawn worldwide attention in healthcare management, including in Malaysia. The present study therefore aims to identify the benefits and challenges of cloud systems in the healthcare sector. The TOE framework was adopted to explain the benefits and challenges of cloud system implementation in the healthcare sector. The findings show that cost, scalability, data accessibility, and interoperability are factors in the technological context that may enable the successful implementation of cloud systems in healthcare organizations. In the organizational context, the size of healthcare organizations and their training were also important factors, while government regulations and policies, as well as cyber threats, are crucial factors in the environmental context for cloud implementation in the healthcare sector. A conceptual framework of the cloud system was proposed to provide a comprehensive understanding to optimise future implementation or adoption of the cloud system in the Malaysian healthcare sector.

Author 1: Nurul Izzatty Ismail
Author 2: Juhari Noor Faezah
Author 3: Muhammad Syukri Abdullah
Author 4: Masrina Nadia Mohd Salleh

Keywords: Cloud computing; technology; Malaysia; healthcare sector; public healthcare; TOE framework

PDF

Paper 13: niCNN: A Novel Neuromorphic Approach to Energy-Efficient and Lightweight Human Activity Recognition on Edge Devices

Abstract: Recent years have seen a surge in the use of deep learning for human activity recognition (HAR) in various applications. However, running complex deep learning models on edge devices with limited resources, such as processing power, memory, and energy, is challenging. The objective of this study is to design a novel, lightweight, and energy-efficient neuromorphic-inspired CNN (niCNN) architecture for real-time HAR on edge devices. The niCNN architecture consists of four stages: design of a shallow CNN; conversion into an equivalent spiking network using the Clamping and Quantization (CnQ) algorithm to minimize information loss; threshold balancing to calculate spiking neuron firing rates using the Threshold Firing (TF) algorithm; and edge deployment. The experimental evaluation shows that the niCNN architecture achieves 97.25% and 98.92% accuracy on two publicly accessible HAR datasets, WISDM and mHealth. Furthermore, the niCNN technique retains a low inference latency of 2.25 ms and 2.36 ms, as well as a low memory utilization of 22.11 KB and 31.84 KB, respectively, while energy usage is reduced to 5.2 W and 5.8 W. In comparison to various state-of-the-art and baseline CNN models, the niCNN architecture outperforms them in terms of classification metrics, memory usage, energy consumption, and inference delay. The CnQ algorithm reduces memory usage and inference latency, while the TF algorithm improves classification accuracy. The findings show that neuromorphic computing has considerable potential for resource-constrained edge devices.

Author 1: Preeti Agarwal

Keywords: Neuromorphic computing; human activity recognition (HAR); edge computing; convolutional neural network (CNN); spiking neural network (SNN); sensors

PDF
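
In the spirit of the CnQ stage the abstract describes, clamping and quantization can be sketched as: clip each value into a range, then snap it onto uniform integer levels. The range, bit-width, and function name below are assumptions for illustration, not the paper's actual parameters.

```python
# Illustrative clamp-and-quantize: clip values to [lo, hi], map to the
# nearest of 2**bits - 1 uniform levels, then map back to real units.

def clamp_quantize(values, lo=0.0, hi=1.0, bits=4):
    levels = (1 << bits) - 1  # e.g. 15 levels for 4 bits
    out = []
    for v in values:
        v = min(max(v, lo), hi)                   # clamp into [lo, hi]
        q = round((v - lo) / (hi - lo) * levels)  # nearest integer level
        out.append(lo + q * (hi - lo) / levels)   # back to real units
    return out

print(clamp_quantize([-0.2, 0.4, 1.3]))  # [0.0, 0.4, 1.0]
```

Fewer levels mean a smaller memory footprint, which matches the abstract's claim that CnQ reduces memory usage.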

Paper 14: Improved YOLOv8 Model for Enhanced Small-Sized Breast Mass Detection on Magnetic Resonance Imaging

Abstract: The early detection of breast cancer is critically important for prompt treatment and saving lives. However, the accuracy of early detection of small-sized breast masses remains unsatisfactory across various algorithms, as small masses often exhibit subtle features, have blurry boundaries, and may overlap with other structures in crowded magnetic resonance imaging (MRI) images. This research proposes an improved object detection model based on You Only Look Once (YOLO) v8 to enhance small-sized breast mass detection on MRI. A feature fusion method, the Bidirectional Feature Pyramid Network (BiFPN), and an attention mechanism, the Convolutional Block Attention Module (CBAM), are integrated into the YOLOv8 architecture. The improved YOLOv8 model, equipped with CBAM and BiFPN and hyperparameter tuning, achieved the best performance with a precision of 95.7%, a mAP50 of 91.2%, a recall of 84.3%, and the shortest inference time of 3.4 ms per image. The proposed model outperformed the baseline with improvements in precision, mAP50, and recall of 6%, 3.9%, and 2.1%, respectively, while reducing inference time per image by 1.4 ms. It is hoped that the proposed model can be applied in the clinical field to increase the early detection rate of breast cancer and the life expectancy of women worldwide.

Author 1: Feiyan Wu
Author 2: Chia Yean Lim
Author 3: Sau Loong Ang
Author 4: Jiaxin Zheng

Keywords: Bidirectional feature pyramid network; breast cancer detection; convolutional block attention module; MRI; object detection; small-sized masses; YOLOv8

PDF

Paper 15: SE-Pruned ResNet-18: Balancing Accuracy and Efficiency for Object Classification on Resource-Constrained Devices

Abstract: Deep learning-based image object classification methods often achieve high accuracy, but with the growing demand for real-time performance on resource-constrained edge devices, existing approaches struggle to balance accuracy, computational complexity, and model size. To address this challenge, we propose a novel ResNet-18 architecture that integrates the Squeeze-and-Excitation (SE) module and model pruning. The SE module adaptively emphasizes informative feature channels to enhance classification accuracy, while pruning reduces computational cost by removing unimportant connections or parameters without significant accuracy loss. Extensive experiments on benchmark datasets demonstrate that the optimized model outperforms the original ResNet-18 in both accuracy and inference speed: classification accuracy increases from 93.2% to 94.1%, the number of parameters is reduced by 30%, floating-point operations (FLOPs) decrease from 1.81 G to 1.32 G, and inference time drops from 15.2 ms to 12.8 ms per batch. Moreover, the proposed model outperforms MobileNetV2, ShuffleNetV2, and EfficientNet-B0 in accuracy while maintaining competitive inference speed and parameter count. These results highlight the model's potential for deployment on resource-constrained devices, expanding the practical application scenarios of object classification in edge computing and real-time detection tasks.

Author 1: Zeyad Farisi

Keywords: ResNet-18; squeeze-and-excitation model; model pruning; object classification; resource-constrained devices

PDF
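
The pruning idea above — removing "unimportant connections or parameters" — is commonly realized as magnitude pruning: zero out the weights with the smallest absolute values. The sketch below shows that idea on a flat weight list; the threshold policy is an illustrative choice, not necessarily the paper's exact criterion.

```python
# Magnitude-based pruning sketch: zero out roughly `sparsity` of the weights,
# starting with those of smallest absolute value.

def prune_by_magnitude(weights, sparsity=0.5):
    """Return a copy of `weights` with the smallest |w| entries set to 0.0."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

print(prune_by_magnitude([0.9, -0.05, 0.4, -0.01], sparsity=0.5))
# [0.9, 0.0, 0.4, 0.0]
```

In real frameworks the zeroed weights are then removed or stored sparsely, which is what shrinks parameter count and FLOPs.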

Paper 16: Advanced AI for Liver Cancer Detection: Vision Transformers, XAI and Contrastive Learning

Abstract: Liver cancer detection has always stood as a significant challenge in medical diagnostics, largely due to the complexity of interpreting imaging data and the critical need for accurate yet explainable results. This study explored how recent advances in artificial intelligence, specifically Vision Transformers (ViTs), contrastive learning, and Explainable AI (XAI), can be combined to address this challenge more effectively. Unlike conventional models, Vision Transformers are particularly good at capturing intricate patterns in medical images, which makes them well suited to tasks like cancer classification. To improve the model's ability to generalize across different imaging conditions, we incorporated contrastive learning techniques, essentially teaching the system to recognize subtle distinctions between similar and dissimilar image features; this approach significantly sharpened its performance. Recognizing the importance of transparency in medical AI, we also integrated explainable AI tools into the model, generating visual and textual cues that explain the system's predictions, which is crucial for gaining the trust of clinicians who rely on these tools in high-stakes environments. The model was trained on a comprehensive dataset of liver cancer images, including both CT scans and MRIs, sourced from a well-established medical repository. The results were promising: the system reached a classification accuracy of 92%, outperforming standard convolutional neural networks (CNNs) by 8%. Most notably, it showed strong performance in identifying early-stage liver cancer, with 90% sensitivity and 94% specificity, suggesting real potential for clinical application.

Author 1: B C Anil
Author 2: Jayasimha S R
Author 3: Samitha Khaiyum
Author 4: T L Divya
Author 5: Rakshitha Kiran P
Author 6: Vishal C

Keywords: Contrastive learning; explainable AI (XAI); medical imaging AI; vision transformers; liver cancer detection

PDF
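
The contrastive objective mentioned above pulls an anchor toward a "positive" (similar) embedding and pushes it away from "negatives". A toy InfoNCE-style score over cosine similarities illustrates the mechanics; the vectors and temperature are invented, and this is a sketch of the general technique, not the paper's loss.

```python
import math

# Toy contrastive score: probability mass the softmax over similarities
# assigns to the positive pair (higher is better for the model).

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_prob(anchor, positive, negatives, temperature=0.1):
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return exps[0] / sum(exps)  # close to 1.0 when the positive stands out

p = contrastive_prob([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
```

Training minimizes -log of this probability, sharpening the embedding space's sense of "similar vs. dissimilar".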

Paper 17: A Qualitative Constructivist Framework for Assessing Knowledge Transfer in Enterprise System Projects: Insights from Expert Interviews

Abstract: Effective Knowledge Transfer (KT) is widely recognized as a cornerstone of success in Enterprise System Projects (ESPs). Despite its critical role, however, many ESPs continue to suffer from poor KT practices, resulting in delays, cost overruns, and suboptimal system adoption. This study develops a qualitative constructivist framework for assessing knowledge transfer in ESPs: the Knowledge Transfer Success Self-Assessment Framework (KTSSAF), a theoretically grounded and empirically validated framework designed to systematically evaluate KT effectiveness across ESP phases. Drawing upon Information Systems (IS) Success Theory, KTSSAF is built around the Project Management Process Groups (PMPG), enabling organizations to assess KT at granular levels within the pre-, during-, and post-implementation stages of ESPs. Its development was guided by a qualitative constructivist methodology combining insights from semi-structured interviews with domain experts and a comprehensive literature review, and it includes an assessment kit and a scoring mechanism tailored to enterprise-specific knowledge clusters and project phases. The framework supports ESP stakeholders in identifying KT gaps, forecasting KT success, and implementing targeted improvements. Empirical validation through expert reviews and pilot studies demonstrates the framework's practical utility and theoretical contributions. KTSSAF empowers organizations to make informed decisions about knowledge management strategies, facilitating improved knowledge retention, enhanced system use, and increased stakeholder engagement. By addressing longstanding gaps in KT evaluation within ESPs, this study contributes a structured, repeatable approach for practitioners and researchers to enhance KT outcomes and overall ESP success.

Author 1: Jamal M. Hussien
Author 2: Riza bin Sulaiman
Author 3: Ali H Hassan
Author 4: Mansoor Abdulhak
Author 5: Hasan Kahtan
Author 6: Basit Shahzad

Keywords: Knowledge transfer; enterprise system projects; KTSSAF; project management; constructivist methodology; knowledge management; IS success theory

PDF

Paper 18: Investigating Space-Time Dynamics in Live Memory Forensics Using Hybrid Transformer Approaches

Abstract: Live memory forensics plays a critical role in digital investigations by analyzing volatile memory to detect system anomalies such as malware and unauthorized process activities. Traditional approaches often fall short in modelling the evolving nature of live memory. This study presents a novel Hybrid Space-Time Transformer Architecture combining a Swin Transformer for localized spatial feature extraction and a Longformer for capturing long-term temporal dependencies. By integrating windowed and sliding attention mechanisms, the proposed method enables precise detection of anomalies such as malware injection and process hijacking. Evaluated on benchmark datasets, the model achieved an accuracy of 95% and an F1-score of 0.94, outperforming conventional deep learning and transformer-based approaches. Our work contributes a scalable, interpretable, and highly accurate model for enhancing live memory forensic workflows.

Author 1: Sarishma Dangi
Author 2: Kamal Ghanshala
Author 3: Sachin Sharma

Keywords: Live memory forensics; swin transformer; longformer transformers; memory acquisition; anomaly detection

PDF

Paper 19: Detecting Fake News Images Using a Hybrid CNN-LSTM Architecture

Abstract: In today's digital world, images have become a double-edged tool in the dissemination of news; as much as they contribute to enriching honest content and communicating information effectively, they are increasingly being used to mislead the public and spread fake news. The ease of manipulating images and taking them out of their original context, or even creating them entirely with advanced techniques, gives them tremendous power in lending credibility to false narratives, exploiting the human eye's tendency to believe what it sees and the image's superior ability to directly evoke emotions. These misleading images, which are often difficult to debunk with the naked eye, spread at lightning speed across digital platforms, allowing fake news to reach and influence large audiences before it can be verified, fueling inaccurate reporting. This study proposes a model architecture to detect fake news images using machine learning and deep learning algorithms. The deep learning models are based on convolutional neural networks (CNN), long short-term memory (LSTM), and a hybrid model that combines the CNN and LSTM frameworks on Google Cloud. The hybrid model was able to categorize news with better accuracy than either model used individually. The model was trained and tested on a dataset for classifying fake news images, and different evaluation metrics (precision, recall, F1-score, etc.) were used to measure its efficiency.

Author 1: Dina R. Salem
Author 2: Abdullah A. Abdullah
Author 3: AbdAllah A. AlHabshy
Author 4: Kamal A. ElDahshan

Keywords: Fake news images; machine learning; deep learning; cloud computing; CNN; LSTM

PDF
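The hybrid CNN-LSTM idea above (convolutional features per image region fed as a sequence into a recurrent model) can be illustrated with a deliberately tiny, dependency-free sketch. The scalar one-unit LSTM cell, single convolution filter, and random weights are illustrative assumptions, not the authors' Google Cloud implementation:

```python
import math, random

def conv1d(row, kernel):
    """Valid 1-D convolution over one image row with ReLU, single filter."""
    k = len(kernel)
    return [max(0.0, sum(row[i + j] * kernel[j] for j in range(k)))
            for i in range(len(row) - k + 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM step; x, h, c are scalars (a 1-unit cell, for brevity)."""
    i = sigmoid(W['wi'] * x + W['ui'] * h + W['bi'])   # input gate
    f = sigmoid(W['wf'] * x + W['uf'] * h + W['bf'])   # forget gate
    o = sigmoid(W['wo'] * x + W['uo'] * h + W['bo'])   # output gate
    g = math.tanh(W['wg'] * x + W['ug'] * h + W['bg']) # candidate cell
    c = f * c + i * g
    h = o * math.tanh(c)
    return h, c

def cnn_lstm_score(image, kernel, W):
    """CNN over each row, average-pool to one feature per row, then an
    LSTM over the row sequence; final sigmoid gives a fake-news score."""
    h = c = 0.0
    for row in image:
        feats = conv1d(row, kernel)
        x = sum(feats) / len(feats)   # global average pooling
        h, c = lstm_step(x, h, c, W)
    return sigmoid(h)

random.seed(0)
W = {k: random.uniform(-0.5, 0.5)
     for k in ('wi','ui','bi','wf','uf','bf','wo','uo','bo','wg','ug','bg')}
image = [[random.random() for _ in range(8)] for _ in range(6)]
score = cnn_lstm_score(image, kernel=[0.25, 0.5, 0.25], W=W)
print(0.0 < score < 1.0)  # True: a probability-like score
```

In practice each step would be a full Keras/PyTorch layer with learned weights; the point here is only the wiring of convolutional features into a recurrent sequence model.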

Paper 20: Heuristics-Based Clustering of Internet of Vehicle Based on Effective Approximate QoS Guideline for Message Dissemination

Abstract: Internet of Things and connected-world concepts are emerging in all walks of life, and the Internet of Vehicles (IoV) is a natural evolution of these concepts for a connected vehicular environment. Clustering is needed in the IoV network for effective management of network resources and to avoid congestion in the network. Once effective clusters are established, an efficient routing protocol for message dissemination can be realized on top of them. In most existing work, clustering based on multiple mobility metrics is the first step, followed by routing over the established clusters. This work proposes a clustering solution that avoids frequent re-clustering and reduces cluster maintenance effort. The proposed solution first establishes a virtual QoS guideline path for message dissemination based on graph theory, and then clusters the network using a heuristic algorithm to further improve the QoS along the guideline path. The proposed solution demonstrates notable improvements over existing approaches. Specifically, it achieves an average cluster duration that is at least 6% higher and reduces cluster maintenance overhead by 7%. In terms of Quality of Service (QoS), the solution attains a 3.6% higher packet delivery ratio, along with a 21% reduction in end-to-end delay and a 28% decrease in routing overhead.

Author 1: Tanuja Kayarga
Author 2: S Ananda Kumar
Author 3: Lakshmi B S
Author 4: Anil Kumar B H

Keywords: QoS; clustering; meta heuristics; IoV; PSO; steiner minimal tree

PDF

Paper 21: SOM-Based Leader Selection Strategies for Cooperative Spectrum Sensing in Multi-Band Multi-User 6G CR IoT

Abstract: In 6G Cognitive Radio Internet of Things (CR-IoT) networks, cooperative multi-band spectrum sensing provides access to extensive spectrum resources. The suggested learning-based multi-band multi-user cooperative spectrum sensing (M2CSS) scheme addresses intelligent spectrum access challenges. A cooperative strategy is introduced into a dueling deep Q network to facilitate multi-user reinforcement learning. This study selects the most suitable IoT secondary users (SUs) to sense channels using the proposed learning-based M2CSS scheme. Under the constraints that each IoT SU can lead at most one network and that each frequency has exactly one leader, the proposed work formulates leader selection as an optimization problem, solved through k-means and Self-Organizing Maps (SOM), to choose leaders that can interact efficiently with other SUs. Matching cooperative SUs are then selected for each frequency by formulating further optimization problems. Following this phase, a subset of cooperative SUs senses the frequencies and uses the acquired knowledge to determine channel availability in a distributed manner. The simulation findings demonstrate significant improvements in detection performance, preventing the misuse of specific devices, providing reliable sensing data over extensive IoT connections, and achieving energy efficiency, all essential for IoT implementations. These advantages make the proposed M2CSS scheme suitable for the massive machine-type communications anticipated in 6G IoT scenarios.

Author 1: Mayank Kothari
Author 2: Suresh Kurumbanshi

Keywords: Cooperative spectrum sensing; reinforcement learning; k-means leader selection; self-organizing map

PDF

Paper 22: High-Speed Fiber-Optic Communication Performance Utilizing Fiber Bragg Grating-Based Dispersion Compensation Schemes

Abstract: Chromatic dispersion is a significant limitation in optical fiber communication, as it causes pulse broadening, which negatively impacts transmission distance and data rates, both of which are critical for meeting the high-speed demands of 5G optical networks. This study focuses on addressing chromatic dispersion in Standard Single-Mode Fiber (SSMF) systems, which are widely deployed in 5G fronthaul and access networks. A comprehensive investigation is conducted using Gaussian-apodized linear chirped Fiber Bragg Gratings (FBGs) for dispersion compensation, implemented across three strategic configurations: pre-compensation, post-compensation, and symmetrical compensation. Each scheme is systematically evaluated to determine the most effective approach for enhancing signal integrity and overall network performance. Simulations are performed using OptiSystem 7.0 on a 10 Gbps SSMF-based optical system, with transmission distances ranging from 10 km to 80 km under controlled simulation parameters. Key performance metrics, including Quality factor (Q-factor), Bit Error Rate (BER), and eye height, are analyzed by varying SSMF length, input power, and bit rate. The results demonstrate that symmetrical compensation using Gaussian-apodized linear chirped FBGs provides the best performance, achieving a Q-factor of 12.3938, an ultra-low BER of 1.12336×10⁻³⁵, and a significantly improved eye height at 80 km. These findings establish the symmetrical compensation scheme employing Apodized Chirped Fiber Bragg Gratings (ACFBGs) as the most effective and scalable solution for high-speed, long-distance optical transmission in 5G networks. This approach enables key 5G applications, including ultra-reliable low-latency communication (URLLC), enhanced mobile broadband (eMBB), and smart infrastructure in smart cities. The proposed technique offers multiple advantages, such as low BER, high Q-factor, reduced signal distortion through sidelobe suppression, energy efficiency via passive operation, and design flexibility for long-haul network integration.

Author 1: Kripa Kalkala Balakrishna
Author 2: Karthik Palani

Keywords: Dispersion management; chirped fiber bragg grating; gaussian apodization; quality factor; bit error rate; optical transmission system

PDF
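The Q-factor and BER figures above are linked by the standard Gaussian-noise eye-diagram relation, BER = 0.5·erfc(Q/√2). A quick check shows that the reported Q of 12.3938 indeed corresponds to a BER in the 10⁻³⁵ regime; the simulator's exact value differs slightly, since OptiSystem models more than additive Gaussian noise:

```python
import math

def q_to_ber(q):
    """Gaussian-noise approximation linking eye-diagram Q-factor to BER:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

ber = q_to_ber(12.3938)
print(f"{ber:.2e}")  # on the order of 1e-35, matching the reported regime
```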

Paper 23: Bio-Inspired Metaheuristic Framework for DNA Motif Discovery Using Hybrid Cluster Based Walrus Optimization

Abstract: Motifs are short, recurring sequence elements with biological significance within a set of nucleotide sequences. Motif discovery is the problem of finding these motifs. It has become an important problem in bioinformatics, since it finds applications in drug discovery, environmental health research, and early detection of diseases through anomalies in gene sequences. Motif discovery is a challenging task because it is NP-hard and cannot be solved exactly in reasonable time. In this study, we propose the Hybrid Cluster-based Walrus Optimization algorithm (HCWaOA) to solve the motif discovery problem. The accuracy and efficiency of the proposed algorithm are improved using a hybrid approach. The population is initialized using the Random Projection technique to generate a meaningful solution space. Then, k-means clustering is used to group similar solutions. Lastly, a population-based metaheuristic, the Walrus optimization technique, is applied to each cluster to find the best motif. The proposed HCWaOA is tested on both simulated and real biological datasets. Its performance is compared with benchmark algorithms such as MEME and AlignACE, and with other metaheuristic algorithms. The results of the proposed algorithm are found to be stable, with a precision of 92%, a recall of 93%, and an F-score of 93%. HCWaOA is further tested on the cancer-related BARC and CTCF datasets to identify cancer-causing motifs. Results show that incorporating clustering into the initial solution space yields optimal solutions within fewer iterations, and that HCWaOA remains stable in comparison with other popular motif discovery algorithms.

Author 1: M. Shilpa
Author 2: C. Nandini

Keywords: Motifs; walrus optimization algorithm; meta-heuristic algorithms; k-means clustering; DNA; bioinformatics

PDF
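The cluster-then-optimize pipeline can be sketched in plain Python. Below, a k-means-style grouping of candidate l-mers under Hamming distance stands in for the paper's k-means step, and a simple consensus-scoring pass stands in for the walrus optimizer; the sequences, motif length, and k are made-up toy values:

```python
import random
from collections import Counter

def candidates(seqs, l):
    """All l-mers from every sequence: the initial solution space."""
    return [s[i:i + l] for s in seqs for i in range(len(s) - l + 1)]

def dist(a, b):
    """Hamming distance between two equal-length l-mers."""
    return sum(x != y for x, y in zip(a, b))

def kmeans_kmers(kmers, k, iters=10, seed=1):
    """k-means-style grouping under Hamming distance, with a per-position
    majority vote as each cluster's 'centroid' string."""
    rng = random.Random(seed)
    centers = rng.sample(kmers, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for km in kmers:
            clusters[min(range(k), key=lambda j: dist(km, centers[j]))].append(km)
        centers = [''.join(Counter(col).most_common(1)[0][0] for col in zip(*c))
                   if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

def best_motif(seqs, l=4, k=3):
    """Pick the cluster centroid whose total distance to its nearest match
    in each sequence is smallest (a greedy refinement standing in for the
    walrus optimization step)."""
    centers = kmeans_kmers(candidates(seqs, l), k)
    def score(m):
        return sum(min(dist(m, s[i:i + l]) for i in range(len(s) - l + 1))
                   for s in seqs)
    return min(centers, key=score)

seqs = ["TTACGTAA", "GGACGTCC", "CAACGTGG"]   # toy sequences with a planted 4-mer
motif = best_motif(seqs)
print(motif)   # a low-total-distance consensus over the clustered l-mers
```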

Paper 24: Detection of Autism Spectrum Disorder (ASD) Using Lightweight Ensemble CNN Based on Facial Images for Improved Diagnostic Accuracy

Abstract: Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that affects how people communicate and behave. Because ASD is becoming more common and can be difficult to diagnose, early detection is important for improving treatment outcomes. This study's goal is to use lightweight ensemble Convolutional Neural Networks (CNN) to make it easier to classify ASD from facial photos. The study examines different CNN architectures, such as MobileNetV2 and EfficientNet variants, to find the best model for diagnosing ASD quickly and accurately. The method involves training and testing five lightweight CNN models on a set of facial photos, with pre-processing methods such as scaling and data augmentation used to help the models learn better. The study tests how well ensemble CNN models work by combining predictions from different architectures using averaging and voting methods, evaluated with key performance metrics: accuracy, precision, recall, and F1-score. The results show that the best balance between accuracy and computational efficiency is achieved by combining MobileNetV2 and EfficientNetB0, which reaches an accuracy of 0.8299, a precision of 0.8514, a recall of 0.8182, and an F1-score of 0.8344. Other models, such as ResNet50 combined with EfficientNetB0, have higher precision but lower recall, making them less useful for finding all ASD cases. The proposed approach was also compared with prior studies and found to achieve greater accuracy. The results show that ensemble CNN models can significantly improve the accuracy of classifying ASD compared to single CNNs, and that lightweight ensembles are effective at detecting ASD from facial images. The method is fast and can run on devices with limited processing power, making it a good way to detect ASD early in both clinical and real-world settings.

Author 1: Andi Kurniawan Nugroho
Author 2: Jajang Edi Priyanto
Author 3: D. S. P. Vinski

Keywords: Autism spectrum disorder (ASD); early detection; ensemble convolutional neural network (CNN); facial images; classification; accuracy

PDF
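The soft (averaging) and hard (majority voting) combination rules described above reduce to a few lines. The probabilities below are made-up stand-ins for MobileNetV2/EfficientNetB0 outputs, purely to show the mechanics:

```python
def ensemble_average(probs_a, probs_b, threshold=0.5):
    """Soft voting: average two models' P(ASD) scores, then threshold."""
    return [int((a + b) / 2 >= threshold) for a, b in zip(probs_a, probs_b)]

def ensemble_vote(*label_lists):
    """Hard voting: majority over per-model binary label predictions."""
    return [int(sum(labels) > len(labels) / 2) for labels in zip(*label_lists)]

mobilenet = [0.9, 0.4, 0.6]     # hypothetical per-image P(ASD) scores
efficientnet = [0.7, 0.3, 0.2]
print(ensemble_average(mobilenet, efficientnet))        # [1, 0, 0]
print(ensemble_vote([1, 0, 1], [1, 1, 0], [0, 0, 1]))   # [1, 0, 1]
```

Averaging tends to be preferable when the models output calibrated probabilities; majority voting needs only hard labels, which is why both appear in ensemble studies.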

Paper 25: An Adaptive SVR-Based Framework for Multimodal Corpus Classification

Abstract: To address the challenges associated with the dynamic growth and multimodal complexity of modern corpora, an adaptive classification framework based on Support Vector Regression (SVR) was developed. A structured corpus was first constructed, followed by the extraction of salient textual features using Term Frequency–Inverse Document Frequency (TF-IDF) metrics. To accommodate the continuous expansion of the corpus, an incremental learning strategy was employed, enabling the model to update efficiently without complete retraining. A kernel-based SVR model was trained to perform classification tasks, and an adaptive feedback-driven mechanism was introduced to dynamically adjust both model parameters and feature representations based on classification performance metrics. Evaluation was conducted on multiple multilingual and multimodal corpora, with particular emphasis on Chinese language processing, which often presents unique challenges due to character complexity and sparse feature representations. The proposed method achieved a significant improvement in classification accuracy when compared to conventional classification approaches. Furthermore, the model demonstrated superior adaptability and computational efficiency across various corpus types. The findings confirm the viability of SVR as a core component for adaptive classification tasks in dynamic linguistic environments. This study contributes to the field by establishing a generalizable, efficient, and interpretable framework suitable for real-time corpus management systems, intelligent content filtering, and multilingual information retrieval.

Author 1: Yuhui Wang

Keywords: SVR; adaptive corpus classification; incremental learning; multimodal corpus; feature extraction

PDF
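The TF-IDF feature-extraction step described above can be sketched without libraries (in practice one would use something like scikit-learn's TfidfVectorizer and feed the vectors to an SVR; the two-document corpus here is purely illustrative):

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists -> per-document {term: tf-idf} maps.
    tf = term frequency within the document; idf = log(N / document frequency)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * math.log(n / df[t]) for t, c in tf.items()})
    return out

docs = [["apple", "banana"], ["banana", "cherry"]]
weights = tfidf(docs)
print(round(weights[0]["apple"], 3))   # 0.347
print(weights[0]["banana"])            # 0.0 (appears in every document)
```

Terms that occur in every document get zero weight, which is exactly the discriminative filtering the abstract relies on before regression.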

Paper 26: Speech Emotion Recognition from Audio Data Using LSTM Model

Abstract: The capacity to comprehend and interact with others through language is the most valuable human ability. Since emotions are crucial to communication, humans are well-trained to recognize and interpret the many emotions we encounter; for computers, however, the subjective nature of human mood makes emotion recognition difficult. Prior work has addressed emotion recognition using images, text, and audio; here, we work with audio data to help computers recognize human emotion accurately. In this work, we have utilized a Long Short-Term Memory (LSTM) model to implement Speech Emotion Recognition (SER) from audio data on two different datasets: the Toronto Emotional Speech Set (TESS) and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). The accuracy rates of our LSTM-based model were impressive, with 91.25% for the RAVDESS dataset and 98.05% for the TESS dataset; the combined accuracy for both datasets was 87.66%. These results highlight the effectiveness of the LSTM model in identifying and categorizing emotional states from audio files. The study adds significant knowledge to the field of speech emotion recognition by emphasizing the model's ability to handle a variety of datasets and its potential.

Author 1: Md. Mahbub-Or-Rashid
Author 2: Akash Kumar Nondi
Author 3: Abdullah Al Sadnun
Author 4: Md. Anwar Hussen Wadud
Author 5: T M Amir Ul Haque Bhuiyan
Author 6: Md. Saddam Hossain

Keywords: Emotion; audio data; Ryerson audio-visual database; Toronto emotional speech set; classification; layers; combine

PDF
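The front end of such a pipeline turns a waveform into a per-frame feature sequence that an LSTM can consume. The sketch below uses toy features (RMS energy and zero-crossing rate) in place of the MFCC-style features typically extracted for SER; the frame sizes and the synthetic 440 Hz tone are illustrative assumptions:

```python
import math

def frame_features(signal, frame_len=256, hop=128):
    """Split a waveform into overlapping frames; per frame compute RMS
    energy and zero-crossing rate. This feature sequence is what a
    sequence model like an LSTM would be trained on."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (frame_len - 1)
        feats.append((rms, zcr))
    return feats

# A 440 Hz tone at 16 kHz: steady energy, regular zero crossings.
sr = 16000
signal = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 4)]
seq = frame_features(signal)
print(len(seq))  # 30 frames from a 0.25 s clip
```

For real SER, each tuple would be replaced by a vector of MFCCs (e.g. from librosa) and padded sequences would be batched into the LSTM.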

Paper 27: AI-Driven Textual Feedback Analysis in E-Training Using Enhanced RoBERTa

Abstract: In corporate e-training environments, traditional metrics like course completion and quiz scores often fail to reflect actual job performance. Rich insights are embedded in unstructured textual feedback, yet they remain underutilized due to limitations in existing analytical models. This study proposes E-RoBERTa, an enhanced transformer-based model designed to predict employee job performance by analyzing open-ended feedback from digital training platforms. The model aims to improve accuracy, domain adaptability, and interpretability. E-RoBERTa integrates Domain-Adaptive Pretraining (DAPT) to fine-tune RoBERTa on corporate-specific language and introduces Dynamic Attention Scaling (DAS) to highlight semantically critical tokens. A real-world, GDPR-compliant dataset containing 16,000 feedback entries from 3,500 employees across multiple departments was used. Preprocessing included tokenization, sentiment tagging, and feature extraction. The model achieved superior performance with a macro F1-score of 0.875, outperforming standard RoBERTa, LSTM, and SVM baselines. Attention visualizations revealed alignment between influential tokens and human-interpretable performance indicators. E-RoBERTa provides a transparent and accurate framework for evaluating job performance through textual feedback. Its use of domain adaptation and dynamic attention mechanisms supports scalable, ethical, and explainable AI in corporate learning analytics, offering actionable insights for personalized interventions and strategic HR decision-making.

Author 1: Rakan Saad Alotaibi
Author 2: Fahad Mazyed Alotaibi
Author 3: Sameer Abdullah Nooh
Author 4: Abdulaziz A. Alsulami

Keywords: Job performance prediction; transformer models; enhanced RoBERTa; domain-adaptive pretraining (DAPT); dynamic attention scaling (DAS); natural language processing (NLP); explainable AI; textual feedback analysis; workforce analytics

PDF

Paper 28: A Deep Learning-Based Dual-Model Framework for Real-Time Malware and Network Anomaly Detection with MITRE ATT and CK Integration

Abstract: The contemporary world of high connectivity in the digital realm has presented cybersecurity with more advanced threats, such as advanced malware and network attacks, which in most cases will not be detected by traditional tools. Static, traditional cybersecurity tools, such as signature-based antivirus systems and rule-based intrusion detection, often fail to deal with dynamic and hitherto unseen attacks. To address this issue, we propose a two-part, AI-powered cybersecurity solution that enables real-time threat detection at both the endpoint and network levels. The first component uses a Feed-forward Neural Network (FNN) to categorize Windows Portable Executable (PE) files as benign or malicious using structured static features. The second component improves network anomaly detection with a deep learning model augmented by Generative Adversarial Networks (GAN), effectively addressing data imbalance and sensitivity to rare cyber-attacks. To enhance performance further, the system is integrated with the MITRE ATT&CK framework, which correlates real-time detection results with adversarial tactics and techniques, thus offering actionable context to incident response teams. Tests on open-source datasets yielded accuracies of 98.0% for malware detection and 96.2% for network anomaly detection. Data augmentation using GAN was very effective in improving the detection of less common attacks, including SQL injections and internal reconnaissance. Moreover, the system is horizontally scalable and responsive in real time thanks to Docker-based deployment. The proposed framework is an effective, explainable, and scalable cybersecurity defense system, well suited to Managed Security Service Providers (MSSPs) and Security Operations Centers (SOCs), greatly increasing the precision and contextual insight of threat detection.

Author 1: Migara H. M. S
Author 2: Sandakelum M. D. B
Author 3: Maduranga D. B. W. N
Author 4: Kumara D. D. K. C
Author 5: Harinda Fernando
Author 6: Kavinga Abeywardena

Keywords: Cybersecurity; malware detection; generative adversarial networks; deep learning; MITRE ATT&CK; feedforward neural network

PDF

Paper 29: Method for Maternal Health Risk Assessment with Smartwatch-Based Vital Sign Measurements

Abstract: The risk of maternal health issues remains a particular challenge in regions with scant access to continuous antenatal care. This study proposes a smartwatch-based system for evaluating maternal health risks through vital-sign monitoring and machine learning algorithms. Using an open-access dataset from Kaggle, the smartwatch assesses maternal risk levels by monitoring systolic and diastolic blood pressure, heart rate, blood glucose, and body temperature. The combination of Artificial Neural Network (ANN) and Random Forest (RF) classifiers yielded the system's best results: 95% accuracy, 97% precision, 97% recall, and an F1 score of 0.97 on the testing dataset. Correlation analysis demonstrated significant relationships between maternal risk and several primary measures, particularly systolic blood pressure (r = 0.931), diastolic pressure (r = 0.916), and blood glucose (r = 0.887). Two regression models, MHRL1 and MHRL2, were created to estimate risk levels based on these parameters. From the experimental data, three clinical action levels were defined for the management of pregnancy care: 1) hypertension with blood pressure (BP) ≥140/90 mmHg, 2) elevated fasting glucose ≥95 mg/dL or postprandial glucose ≥140 mg/dL, and 3) tachycardia with sustained heart rate >100 bpm. These results demonstrate the feasibility of integrating IoT-based wearables into maternal monitoring workflows to enable early warning systems and tailored health management, particularly in constrained settings.

Author 1: Kohei Arai
Author 2: Diva Kurnianingtyas

Keywords: Artificial intelligence; Kaggle; maternal health risk assessment; IoT technology; classification performance; pregnancy risk level; ANN; RF; MHRL; BP

PDF
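The three clinical action levels listed in the abstract map directly onto a simple triage helper. This is an illustrative rule check only, not the study's ANN/RF risk model; the function and parameter names are mine:

```python
def action_flags(sys_bp, dia_bp, fasting_glu, postprandial_glu, heart_rate):
    """Flag the study's three clinical action levels from raw vitals
    (BP in mmHg, glucose in mg/dL, heart rate in bpm)."""
    return {
        "hypertension": sys_bp >= 140 or dia_bp >= 90,
        "elevated_glucose": fasting_glu >= 95 or postprandial_glu >= 140,
        "tachycardia": heart_rate > 100,
    }

print(action_flags(150, 85, 90, 130, 95))
# {'hypertension': True, 'elevated_glucose': False, 'tachycardia': False}
```

In a deployed system these flags would sit downstream of the learned risk classifier, turning smartwatch readings into early-warning alerts.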

Paper 30: Multi-Step Cross-Domain Aspect-Based Sentiment Generation with Error Correction Mechanism

Abstract: With the rapid growth of social media and user-generated content, cross-domain aspect-level sentiment analysis has become an important research direction in sentiment computing. In this study, a cross-domain sentiment analysis method based on the T5 model is proposed. This method integrates a multi-step generative training mechanism with a correction mechanism to improve the model's generalization ability and sentiment classification accuracy when processing texts from different domains. First, domain-invariant sentiment features are extracted through training on texts and their associated aspect vocabularies from both the source and target domains. This process effectively reduces inter-domain discrepancies. Unlike other methods, the generative task is formulated in the source domain to produce both aspect and sentiment element pairs, which improves the model's reasoning ability through multi-step generation. Finally, a correction mechanism is used to detect the aspect labels in the generated labels of the target domain and regenerate the sentiment predictions when errors are detected, which improves the model’s robustness. Experimental results show that the proposed method performs well in several cross-domain sentiment analysis tasks and significantly outperforms traditional methods in sentiment classification accuracy. The study provides an innovative solution for cross-domain sentiment analysis with broad application potential.

Author 1: Ningning Mao
Author 2: Xuanliang Zhu
Author 3: Yadi Xu

Keywords: Cross-domain aspect-based sentiment analysis; multi-step generation; correction mechanism; domain-invariant feature learning

PDF

Paper 31: Perceptual Hash Techniques for Audio Copyright Protection in Decentralized Systems

Abstract: The combination of perceptual hashing technologies with blockchain offers a compelling way to strengthen copyright protection of audio in decentralized networks. As the music industry continues to suffer from unauthorized duplication and transformation of online content, traditional security measures fall short. Perceptual hashing bridges the gap by creating unique digital fingerprints that are resistant to small-scale modifications, allowing detection of copyright piracy even in edited audio content. When combined with the immutable nature of blockchain and smart contract functionality, the proposed framework not only guarantees ownership verification but also automates licensing procedures, thereby doing away with the need for intermediaries. The proposed method addresses the shortcomings of state-of-the-art methods and performs well under various conditions.

Author 1: N. Kavitha
Author 2: Rashmika S J
Author 3: Reshika A S

Keywords: Perceptual hashing; blockchain; audio; copyright protection

PDF
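The core property claimed above, fingerprints that stay stable under small modifications, can be demonstrated with a minimal sketch. Real systems hash spectrogram features; this toy version hashes the up/down pattern of band averages, and the band count and synthetic signals are illustrative assumptions:

```python
import math

def perceptual_hash(samples, n_bits=32):
    """Coarse fingerprint: average the signal into n_bits+1 bands and set
    each bit to whether the band mean rises or falls. Unlike a
    cryptographic hash, small amplitude changes leave the bits intact."""
    band = len(samples) // (n_bits + 1)
    means = [sum(samples[i * band:(i + 1) * band]) / band
             for i in range(n_bits + 1)]
    return [1 if means[i + 1] > means[i] else 0 for i in range(n_bits)]

def hamming(h1, h2):
    """Number of differing bits: small distance means 'same content'."""
    return sum(a != b for a, b in zip(h1, h2))

sr = 8000
tone = [math.sin(2 * math.pi * 5 * t / sr) for t in range(sr)]  # 5 Hz tone, 1 s
quiet = [0.8 * s for s in tone]   # the same audio re-encoded at lower volume
h1, h2 = perceptual_hash(tone), perceptual_hash(quiet)
print(hamming(h1, h2))  # 0: positive scaling preserves the up/down pattern
```

A cryptographic hash of the two signals would differ completely; the perceptual hash matching despite the volume change is exactly what enables piracy detection on edited audio.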

Paper 32: Advancing Precision Livestock Farming: Integrating Hybrid AI, IoT, Cloud and Edge Computing for Enhanced Welfare and Efficiency

Abstract: Poultry farming is pivotal to global food security, yet maintaining optimal environmental and operational conditions remains a challenge. Suboptimal conditions, such as high temperature and humidity, promote bacterial growth and the production of toxic gases like ammonia (NH3), carbon monoxide (CO), carbon dioxide (CO2), methane (CH4), and hydrogen sulfide (H2S), which increase poultry disease and mortality rates. This study introduces an innovative, modular, and scalable system integrating Artificial Intelligence (AI), Internet of Things (IoT), Edge Computing, and Cloud Computing for real-time monitoring, prediction, and automation in poultry barns. The system employs a hybrid AI framework combining Gradient Boosting techniques (XGBoost, LightGBM, CatBoost) and Long Short-Term Memory (LSTM) networks to analyze data from a heterogeneous wireless sensor network. It monitors critical parameters—temperature, humidity, and toxic gas concentrations—while predicting environmental conditions and detecting potential stress to optimize poultry welfare. Leveraging IoT for data collection, Edge Computing for low-latency processing, and cloud analytics for advanced insights, the system enhances decision-making, reduces feed wastage, lowers energy costs, and decreases mortality rates. A case study demonstrates significant improvements in prediction accuracy, operational efficiency, and animal welfare, underscoring the framework’s adaptability across diverse agricultural settings. This work establishes a robust precedent for hybrid AI-driven smart farming solutions, advancing precision livestock farming.

Author 1: Hakim Jebari
Author 2: Siham Rekiek
Author 3: Kamal Reklaoui

Keywords: Hybrid artificial intelligence; edge computing; cloud computing; Internet of Things; artificial intelligence; predictive analytics; smart farming; smart poultry farming

PDF

Paper 33: AI-Powered Skin Disease Detection Using Adaptive Particle Swarm Intelligent Optimization and Hyper-Convolutional Neural Networks

Abstract: In recent medical research, skin cancer has emerged as one of the most prevalent and fatal cancers globally. Previous studies have faced challenges in detecting skin cancer early due to the complexity of identifying specific skin diseases, segmenting affected areas, and selecting relevant features. To address these limitations, this study proposes a novel AI-powered skin disease detection system that applies Adaptive Particle Swarm Intelligent Optimization (APSIO) in conjunction with a Hyper-Convoluted Intra-Capsuled Neural Network (HCI-CNN). A Gaussian Wavelet Spectral Filter is first used to preprocess the input dataset of skin-cancer images, standardizing pixel intensities across skin layers. After preprocessing, the method applies Slice Fragment Window Segmentation (SFWS) to divide each image into several clusters, focusing on the area affected by the disease. Next, APSIO, a metaheuristic optimization algorithm, is applied to select the most relevant features from the segmented image. After discarding ineffective features, YOLO-extracted features are passed through the HCI-CNN classifier, which uses hyper-convolutional operations and capsule representations to efficiently characterize high-level spatial hierarchies and feature relations. This paper analyzed clinical images of individuals along with the dataset images. The proposed system improved accuracy to 97%, precision to 96.52%, recall to 96.55%, and F1-score to 96.93%, while simultaneously minimizing false positives and total time complexity.

Author 1: N Annalakshmi
Author 2: S Umarani

Keywords: Skin cancer; image preprocessing; hyper-convoluted intra-capsuled neural network (HCI-CNN); adaptive Particle Swarm Intelligent Optimization (APSIO); image classification

PDF

Paper 34: Deep Learning Optimization Conception: Less Data, Less Time, More Performance

Abstract: Although Deep Learning has not produced a breakthrough in core artificial intelligence technology, it achieves state-of-the-art performance across areas such as computer vision and natural language processing. However, it depends on large-scale datasets and enormous computational resources. This paper tackles a major question: can we train more efficient deep learning models with less data in less time? We examine numerous strategies designed to reduce the burden of training without letting quality deteriorate, from transfer learning and few-shot learning to lightweight architectures, artificially produced synthetic datasets, and distributed training, contemplating how to make advanced AI subsystems fit to run under scarce resources. The aim is to lay down a future for deep learning that is more sustainable and inclusive. We also examine data augmentation, architecture optimization, and parallelization, explaining each approach with its benefits as well as its setbacks. Our research shows that training a model more efficiently improves the overall training process, making it cheaper and greener. Such a change would help more people use sophisticated AI systems even when limited by constrained resources, broadening the real-world application of AI technology and further stimulating innovation in the area.

Author 1: Mohamed Amine MEDDAOUI
Author 2: Moulay AMZIL
Author 3: Imane KARKABA
Author 4: Mohammed ERRITALI

Keywords: Deep Learning; AI; IoT; optimization; transfer learning; model compression; few-shot learning

PDF

Paper 35: A Novel Hybrid HO-CAL Framework for Enhanced Stock Index Prediction

Abstract: The accurate prediction of stock indexes plays a critical role in supporting investment decisions and managing financial risks. This study proposed a novel hybrid deep learning model that integrated the strengths of Convolutional Neural Networks (CNN), the Attention mechanism, and Long Short-Term Memory (LSTM) networks to enhance the modelling of temporal patterns in financial time series. To further improve prediction performance, the Hippopotamus Optimization (HO) algorithm was incorporated to fine-tune the network's parameters. This is the first application of the CNN-Attention-LSTM (CAL) architecture to stock index prediction. Ablation experiments revealed that the proposed CAL significantly outperformed traditional CNN, LSTM, and CNN-LSTM models, highlighting the effectiveness of the Attention-based architecture. Comparative analyses also demonstrated that the HO-optimized CAL (HO-CAL) model achieved superior predictive accuracy across multiple markets, confirming the robustness of both the hybrid model and the optimization algorithm. These findings underscore the potential of combining deep learning architectures with metaheuristic optimization to improve prediction accuracy in financial markets, offering valuable insights for real-world investment strategies.
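
The attention component that distinguishes CAL from a plain CNN-LSTM can be illustrated with a minimal scaled dot-product attention step over time steps. This is the generic formulation; the paper's exact layer wiring is not specified here.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted mix of
    the value rows, with weights from query/key similarity."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights
```

In an architecture like CAL, the rows of Q, K, and V would come from the CNN/LSTM hidden states, letting the model re-weight informative time steps before the final prediction head.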

Author 1: Zeren Shi
Author 2: Othman Ibrahim
Author 3: Hanini Ilyana Che Hashim

Keywords: Attention mechanism; CNN; LSTM; stock index; hippopotamus optimization algorithm

PDF

Paper 36: The Text Mining Model for Lecturer Performance Evaluation: A Comparative Study

Abstract: To support the evaluation of the teaching and learning process in higher education institutions, it is necessary to develop a text mining (TM) model. The aim of this research is to compare the performance of Long Short-Term Memory with Word Embedding Text to Sequence (WETS-LSTM), WETS-BiLSTM, WETS-CNN1D, and WETS-RNN across four dataset categories: pedagogic, professional, personality, and social competency. The research comprises five main steps, including literature study, dataset collection, TM model development, and evaluation. The dataset was collected from Universitas Sjakhyakirti, Institut Teknologi dan Bisnis Palcomtech, Universitas Muhammadiyah Palembang, Universitas Bina Darma, AMIK Bina Sriwijaya, and Politeknik Darusalam. The questionnaire distribution process initially yielded 6,170 responses, 6,164 of which were valid across the four competency categories, giving a total of 24,656 text items for analysis. The WETS-LSTM model obtained the best overall performance, achieving a training accuracy of 96.65% and the highest test accuracy of 82.92%. The CNN1D with Word Embedding Text to Sequence (WETS-CNN1D) demonstrated good training accuracy at 96.73% but lower test performance at 80.67%. The WETS with Recurrent Neural Network (WETS-RNN) obtained the weakest results, with a training accuracy of 95.88% and a test accuracy of 77.99%.

Author 1: Anita Ratnasari
Author 2: Vina Ayumi
Author 3: Mariana Purba
Author 4: Wachyu Hari Haji
Author 5: Handrie Noprisson
Author 6: Marissa Utami

Keywords: Text mining; CNN; LSTM; RNN; text-to-sequence

PDF

Paper 37: Design and Evaluation of a Biometric IoT-Based Smart Lock System with Real-Time Monitoring and Alert Mechanisms

Abstract: A smart door lock system based on IoT is presented that uses fingerprint biometric authentication, an ESP32 microcontroller, and the Blynk IoT platform to provide a secure, user-friendly, and remotely controllable access control solution. The proposed architecture replaces traditional locks with a real-time biometric system that gives instant feedback through an onboard OLED display and buzzer, with remote monitoring and control through a mobile app. A new fail-safe mechanism is implemented: after three failed fingerprint attempts, the system locks out for 15 seconds and sends an instant alert to the authorized user's smartphone. Performance tests of the prototype show a fingerprint recognition time of around 1.0 second and a door unlock time of 5 seconds, making the system convenient to use. The system has a very low False Acceptance Rate (FAR) of 1.32%, indicating strong resistance to unauthorized access. The False Rejection Rate (FRR) is higher, around 26.32%, largely due to user error such as improper finger placement, a usability issue to be addressed. The device can store up to 3 fingerprint profiles and gives visual and audible alerts for all access events. This integration of IoT with biometric security enhances both physical security and user convenience, offering a modern smart-lock solution for smart home automation.
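
The fail-safe mechanism lends itself to a compact state-machine sketch. The class, method, and callback names below are illustrative, not the system's firmware code; the failure threshold and lockout duration are the values stated in the abstract.

```python
import time

class FingerprintLock:
    """Sketch of the described fail-safe: after max_failures consecutive
    failed attempts, reject all attempts for lockout_s seconds and fire
    an alert callback."""

    def __init__(self, max_failures=3, lockout_s=15, alert=print,
                 clock=time.monotonic):
        self.max_failures, self.lockout_s = max_failures, lockout_s
        self.alert, self.clock = alert, clock
        self.failures, self.locked_until = 0, 0.0

    def attempt(self, fingerprint_ok):
        now = self.clock()
        if now < self.locked_until:
            return "locked_out"          # still within the lockout window
        if fingerprint_ok:
            self.failures = 0            # success resets the counter
            return "unlocked"
        self.failures += 1
        if self.failures >= self.max_failures:
            self.locked_until = now + self.lockout_s
            self.failures = 0
            self.alert("failed attempts exceeded: locking for %ss"
                       % self.lockout_s)
            return "locked_out"
        return "denied"
```

Injecting the clock as a callable keeps the logic testable without real delays; on the ESP32 the same structure would use the device timer and push the alert through Blynk.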

Author 1: Jamil Abedalrahim Jamil Alsayaydeh
Author 2: Mohd Faizal Yusof
Author 3: Serhij Mamchenko
Author 4: Rostam Affendi Hamzah
Author 5: Safarudin Gazali Herawan

Keywords: Smart door lock; Internet of Things (IoT); biometric authentication; fingerprint sensor; ESP32 microcontroller; Blynk IoT platform; access control; cybersecurity; real-time monitoring; home automation; OLED display; False Rejection Rate (FRR); False Acceptance Rate (FAR); remote monitoring; smart home automation

PDF

Paper 38: Modelling Cloud Computing Adoption in the Malaysian Healthcare

Abstract: Cloud computing is increasingly reshaping the global IT landscape, offering scalable and efficient solutions across industries, including the healthcare sector. This study investigates the determinants of cloud computing adoption in the Malaysian healthcare industry by integrating the Resource-Based View (RBV) and Technology-Organisation-Environment (TOE) frameworks. Emphasising internal organisational capabilities, the study excludes traditional Information Systems (IS) models to maintain theoretical coherence with RBV’s strategic orientation toward firm-level resource advantages. Data were collected from 265 respondents across 127 healthcare organisations and analysed using Partial Least Squares Structural Equation Modelling (PLS-SEM). The study also proposes an extended taxonomy of cloud services contextualised for healthcare, strengthening the theoretical underpinnings and practical applicability of cloud adoption strategies in this domain. The findings reveal that among IT capabilities, managerial IT capability exerts the most substantial influence on adoption, followed by relational and technical capabilities. Within the TOE dimensions, regulatory support emerged as the most critical enabler, while business resources, change management, organisational culture, and vendor support also demonstrated significant positive effects. The results offer empirical validation for a comprehensive conceptual model grounded in RBV and TOE, providing both theoretical insights and practical guidance for healthcare organisations aiming to strengthen IT capabilities, optimise organisational readiness, and align with external institutional drivers for successful cloud migration.

Author 1: Normilah Mohd Noh
Author 2: Nurhizam Safie Mohd Satar
Author 3: Hasimi Sallehudin
Author 4: Ibrahim Hassan Mallam
Author 5: Surya Sumarni Hussein
Author 6: Nur Azaliah Abu Bakar

Keywords: Adoption; cloud computing; Malaysian healthcare; partial least squares-structural equation modelling; resource-based view (RBV)

PDF

Paper 39: Potential Variables in Pharmaceutical Drug Prediction Research with Machine Learning Approach: A Literature Review

Abstract: As a downstream component of the drug supply chain, pharmaceutical installations often face uncertainty in drug demand. Predicting pharmaceutical drug demand with a machine learning approach enables the development of new variables that can enhance prediction performance. Amid limited data and a range of prediction algorithms, accurate variable selection is significant for drug prediction performance. This study remaps the scope of variables from previous studies on drug demand prediction and machine learning performance in order to develop further significant variables, investigating research literature on significant variables in drug demand prediction with machine learning models published in 2020-2024. The systematic literature methodology follows the Kitchenham method. Mapping problems, discussion areas, and data availability yields ten categories of issue areas, each with its respective data needs and algorithm choices. A qualitative exploration of these issue areas identifies potential variables for pharmaceutical drug prediction, including drug consumption, epidemiology, drug management, supply chain-patient domicile, and pharmacotherapy. Mapping potential variables facilitates the availability and integration of data relevant to local or regional characteristics, enabling further research on data characteristics and algorithm choices.

Author 1: Gunadi Emmanuel
Author 2: Yulyani Arifin
Author 3: Ilvico Sonata
Author 4: Muhammad Zarlis

Keywords: Drug demand; machine learning; pharmaceutical installations; prediction; potential variables

PDF

Paper 40: Scalable Graph Learning with Graph Convolutional Networks and Graph Attention Networks: Addressing Class Imbalance Through Augmentation and Optimized Hyperparameter Tuning

Abstract: In this study, we propose a graph-based node classification approach to address challenges such as data scarcity, class imbalance, limited access to original textual content in benchmark datasets, semantic preservation, and model generalization in node classification tasks. Beyond simple data replication, we enhanced the Cora dataset by extracting content from its original PostScript files using a three-dimensional framework that combines, in one pipeline, NLP-based techniques such as PEGASUS paraphrasing, synthetic model generation, and controlled subject-aware synonym replacement. We substantially expanded the dataset to 17,780 nodes, approximately 6.57x scaling, while maintaining semantic fidelity (WMD scores: 0.27-0.34). Bayesian hyperparameter tuning was conducted using Optuna, along with k-fold cross-validation, for a rigorously optimized model validation protocol. Our Graph Convolutional Network (GCN) model achieves 95.42% accuracy while the Graph Attention Network (GAT) reaches 93.46%, even when scaled to a significantly larger dataset than the base. Our empirical analysis demonstrates that semantic-preserving augmentation achieves better performance while maintaining model stability across scaled datasets, offering a cost-effective alternative to architectural complexity and making graph learning accessible to resource-constrained environments.
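
The k-fold protocol that each hyperparameter trial is scored against can be sketched as a plain index generator. This is a generic sketch (the fold count and seed are assumptions); in practice an Optuna objective would average a model's validation score over these splits.

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=42):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)      # shuffle once, then partition
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```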

Author 1: Chaima Ahle Touate
Author 2: Rachid El Ayachi
Author 3: Mohamed Biniz

Keywords: Graph Convolutional Networks (GCN); Graph Attention Networks (GAT); hyperparameter tuning; data augmentation; PEGASUS; synonym replacement; optuna bayesian optimization; node classification; class imbalance

PDF

Paper 41: Enhancing Banking Data Classification Through Hybrid L2 Regularisation and Early Stopping in Artificial Neural Networks

Abstract: The demand for robust data-driven classification (DDC) techniques remains critical in banking applications, where accurate and efficient decision-making is paramount. Artificial Neural Networks (ANNs), particularly Multi-Layer Perceptrons (MLPs), are widely used due to their strong learning capabilities. However, their performance often depends on effective hyperparameter tuning and regularisation strategies to avoid overfitting. This study aims to enhance the efficiency of the MLP training process by introducing a hybrid approach that integrates L2 regularisation with Early Stopping (ES) into the hyperparameter tuning procedure. The key contribution lies in embedding both techniques within a grid search framework, thereby streamlining the search for optimal hyperparameters. The proposed method was evaluated using three real-world banking datasets: two related to loan subscription (16 and 20 features) and one concerning credit card default payment (23 features). Experimental results demonstrate that the hybrid approach reduces hyperparameter tuning time by over 90% while achieving high classification performance. Notably, the Receiver Operating Characteristic - Area Under the Curve (ROC-AUC) scores of 93.89% and 91.21% were achieved on the loan datasets, and 73.28% on the credit card dataset, surpassing previous benchmarks. These findings highlight the potential of the L2ES hybrid method to improve both the accuracy and computational efficiency of DDC in financial applications.
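
A toy version of the hybrid idea, an L2 penalty inside the gradient step plus patience-based early stopping on validation loss, can be sketched on logistic regression. All hyperparameters here are illustrative; the paper applies the combination to MLPs inside a grid search.

```python
import numpy as np

def train_logreg_l2_es(X, y, X_val, y_val, lam=0.01, lr=0.1,
                       max_epochs=500, patience=10):
    """Gradient-descent logistic regression with an L2 penalty in the
    update and patience-based early stopping on validation loss."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])

    def val_loss(weights):
        p = np.clip(1 / (1 + np.exp(-X_val @ weights)), 1e-9, 1 - 1e-9)
        return -np.mean(y_val * np.log(p) + (1 - y_val) * np.log(1 - p))

    best, best_w, wait = np.inf, w.copy(), 0
    for epoch in range(max_epochs):
        p = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + lam * w   # L2 regularisation term
        w -= lr * grad
        vl = val_loss(w)
        if vl < best - 1e-6:                      # meaningful improvement
            best, best_w, wait = vl, w.copy(), 0
        else:
            wait += 1
            if wait >= patience:                  # early stopping
                break
    return best_w, epoch + 1
```

The time saving reported in the abstract comes from the `break`: trials that stop improving are abandoned early, so each grid-search cell costs far fewer epochs.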

Author 1: Khairul Nizam Abd Halim
Author 2: Abdul Syukor Mohamad Jaya
Author 3: Fauziah Kasmin
Author 4: Azlan Abdul Aziz

Keywords: Artificial neural networks; L2 regularisation; early stopping; banking; classification

PDF

Paper 42: Empirical Validation and Enhancement of ADiBA: A Framework for Big Data Analytics Implementation

Abstract: The implementation of Big Data Analytics (BDA) in organisations requires a structured approach to ensure alignment with strategic goals and infrastructure readiness. This study presents an enhanced version of the previously published ADiBA (Accelerating Digital Transformation Through Big Data Adoption) framework, which aims to guide organisations through the critical components necessary for successful BDA implementation. The initial framework was developed from a systematic literature review. To validate and refine the framework, a mixed-methods survey was conducted among domain experts using a five-point Likert scale and open-ended questions to assess the relevance of each framework component. Quantitative responses were analysed using the Content Validity Index (CVI), with a threshold of 0.78 adopted as the minimum acceptable I-CVI score for each item. Complementing the quantitative analysis, qualitative feedback from the open-ended survey responses, Focus Group Discussions (FGDs), and in-depth interviews was examined through thematic analysis, revealing key themes related to the framework's clarity and operational aspects. Insights from both analyses informed the refinement of several components. The resulting framework is a validated, empirically informed guide designed to support effective BDA implementation in organisational contexts.

Author 1: Norhayati Daut
Author 2: Naomie Salim
Author 3: Sharin Hazlin Huspi
Author 4: Anazida Zainal
Author 5: Chan Weng Howe
Author 6: Muhammad Aliif Ahmad
Author 7: Siti Zaiton Mohd Hashim
Author 8: Masitah Ghazali
Author 9: Mohd Adham Isa
Author 10: Rashidah Kadir
Author 11: Nuremira Ibrahim
Author 12: Norazlina Khamis

Keywords: Adoption process; big data; big data analytics; framework; framework validation; expert survey; content validity index; thematic analysis; organizational implementation; digital transformation

PDF

Paper 43: An In-Depth Analysis of Security Flaws in Advanced Authentication Protocols for the Internet of Medical Things

Abstract: This study evaluates a four-factor authentication protocol designed for IoT healthcare systems, identifying several key vulnerabilities that could compromise its security. The analysis highlights risks associated with node cloning, insider threats, biometric data security, session management, and scalability. To address these vulnerabilities, the study proposes a series of enhancements, including the implementation of Physical Unclonable Functions (PUFs) to prevent node cloning and the use of advanced encryption techniques, such as homomorphic encryption, to protect biometric data. Additionally, the adoption of role-based access control (RBAC) and attribute-based access control (ABAC) systems can mitigate insider threats by limiting user permissions. Optimizing session management through strict expiration and key rotation policies can maintain session integrity, while lightweight cryptographic algorithms and adaptive power management techniques enhance scalability and resource utilization. Future research directions include exploring quantum-resistant cryptographic algorithms and developing adaptive security policies leveraging artificial intelligence. These efforts are essential for maintaining the protocol's resilience against evolving threats and ensuring the secure operation of IoT-based healthcare systems.

Author 1: Haewon Byeon

Keywords: Four-factor authentication; IoT Healthcare security; physical unclonable functions; quantum-resistant cryptography; biometric data protection

PDF

Paper 44: Enhancing Patient Health Through Smart IoT Technologies in Healthcare

Abstract: Healthcare has been revolutionized by the rapid evolution of the Internet of Things (IoT), which enables smart connected devices that provide better patient monitoring, diagnosis, and treatment. IoT technologies facilitate real-time health data collection, remote patient monitoring, and prediction, thus improving overall healthcare outcomes. Chronic disease management and emergency detection are being transformed by wearable sensors, smart hospital infrastructures, and AI-powered analytics. IoT-driven healthcare systems are also more efficient, reduce hospital readmissions, and enable telemedicine services. However, IoT in healthcare brings many challenges, including security risks, patient privacy issues, and interoperability problems. This paper provides a comprehensive review of smart IoT technologies in healthcare, their applications, and the benefits they offer for patient care. It also explores the role of data analytics in IoT-based decision making, together with the ethical implications of data handling and the security threats facing IoT healthcare systems. It further discusses future directions, including the integration of AI, 5G-enabled telemedicine, and blockchain for secure patient data management. This positions IoT as a strong candidate for healthcare transformation, one that addresses existing challenges and capitalizes on emerging innovations to deliver more efficient, accessible, and patient-centric healthcare.

Author 1: Monica Bhutani
Author 2: Osman Elwasila
Author 3: Rajermani Thinakaran
Author 4: Yonis Gulzar

Keywords: IoT in Healthcare; smart technologies; remote patient monitoring; data security; AI in healthcare; telemedicine; healthcare analytics

PDF

Paper 45: Modeling an Adaptive and Collaborative E-Learning System with Artificial Intelligence Tools

Abstract: In an educational environment undergoing digital transformation, the need to create smarter, learner-centred learning environments is becoming increasingly urgent. This article presents a conceptual model of an e-learning system that integrates adaptive and collaborative dimensions, relying on artificial intelligence (AI) tools, which occupy a central place as a dynamic adaptation engine, a facilitator of collaboration, and an automator of certain pedagogical activities. This methodical, structured approach makes it possible to develop a hybrid environment capable of adjusting to individual needs while promoting the co-construction of knowledge between peers. Based on instructional design principles and the 2TUP (Two Tracks Unified Process) process, the approach develops a systematic architecture, illustrated by UML (Unified Modeling Language) class, use-case, activity, and sequence diagrams, that integrates AI through adaptive learning, conversational agents, and intelligent tutoring systems to personalize learning, provide targeted feedback, optimize learner performance, and guide learners more accurately. This combination of standardized modeling and AI improves the synergy between stakeholders and increases the efficiency of online learning environments. Finally, this model paves the way for a new era of more flexible, inclusive, and responsive techno-pedagogical systems capable of facing the contemporary challenges of online training.

Author 1: Kawtar Zargane
Author 2: Hassane Kemouss
Author 3: Mohamed Khaldi

Keywords: Conceptual modeling of an online learning system; Adaptive system; Online collaborative learning; Artificial intelligence (AI); Educational software architecture; UML modeling

PDF

Paper 46: Determinant Factors of Success for the Village Information System in Providing Sustainable Services and Governance

Abstract: The Indonesian government is working to implement a digital transformation aimed at providing effective services and governance at both central and local levels, similar to other developing countries. A key strategy for achieving this digital government transformation at the local level involves the implementation of village information systems, which are managed by village officials. The Ministry of Villages and Development of Disadvantaged Regions is developing a village information system to support these efforts, village laws, and the Sustainable Development Goals. However, the village information system encounters ongoing challenges, such as slow access speeds and ineffective response services. This study adopts a quantitative approach using cross-sectional questionnaires that collected 426 valid responses. It identified seven main factors that influence the success or failure of implementing a village information system: system quality, information quality, service quality, perceived usefulness, user satisfaction, trust, and net benefits. The study contributes to the literature by situating these factors within the DeLone and McLean Information System Success Model and TAM frameworks, which are still rarely applied in e-government adoption studies, especially regarding village governments in developing countries. Data analysis revealed significant relationships: system quality, information quality, and service quality significantly impact perceived usefulness; information quality, service quality, trust, and perceived usefulness significantly impact user satisfaction; and perceived usefulness and satisfaction significantly affect net benefits. This research has practical implications for the successful adoption of the village information system as part of the ministry's efforts to improve services and overall governance.

Author 1: Sutia Dwi Santika
Author 2: Tuga Mauritsius

Keywords: E-Government; System Quality; Service Quality; Delone and Mclean model; TAM

PDF

Paper 47: Machine Learning Methods for Detecting Fake News: A Systematic Literature Review of Machine Learning Applications in Key Domains

Abstract: Rapid digitisation in communication and online platform growth have transformed information dissemination and facilitated rapid access while simultaneously amplifying the spread of fake news. This widespread issue undermines public trust, destabilises political systems, and threatens economic stability. Machine learning techniques have been widely applied to fake news detection, but comparative analyses across specific domains such as health, politics, and economics remain limited. Existing reviews tend to focus on supervised learning methods, frequently excluding unsupervised and hybrid approaches, along with the unique challenges and dataset requirements of each domain. This study conducted a systematic literature review of machine learning applications for detecting fake news across the three domains. The methodologies and metrics used were evaluated, while key challenges and opportunities were explored. The results revealed a strong reliance on supervised learning techniques, particularly in health-related contexts, where misinformation presented significant risks to public health outcomes. Deep learning methods were promising for processing complex data. Nonetheless, hybrid and unsupervised approaches were underexplored, which presented opportunities to address data scarcity and adaptability. Most datasets originated from social media platforms and news outlets. The common evaluation metrics included accuracy, but advanced measures were rarely applied, indicating room to enhance such methods. Persistent challenges, including poor data quality, bias, and ethical concerns, highlight the necessity for bias-mitigating algorithms and improved model interpretability. Specifically, economic misinformation has received less attention despite its potential to cause large-scale financial disruptions.
This study highlighted that more effective, ethical, and context-specific machine learning solutions are needed to address fake news and enhance digital information credibility.

Author 1: Nur Ida Aniza Rusli
Author 2: Nur Atiqah Sia Abdullah
Author 3: Fatin Nabila Abd Razak
Author 4: Nor Haniza Ramli

Keywords: Machine learning; fake news; systematic review; health; politics; economy

PDF

Paper 48: SpatialSolar-Net: A Multi-Site Collaborative Framework for Solar Power Forecasting with Adaptive Spatial Correlation Assessment

Abstract: The increasing penetration of solar power generation poses significant challenges for grid integration due to its inherent variability and intermittency. Existing forecasting approaches treat individual solar installations independently, failing to leverage spatial correlations between geographically proximate sites and lacking adaptive mechanisms for varying environmental conditions. This paper presents SpatialSolar-Net, a novel multi-site collaborative solar power generation forecasting framework that addresses these limitations through adaptive spatial correlation evaluation and dynamic knowledge integration mechanisms. The proposed architecture combines a dual-branch design integrating convolutional neural network-based spatial feature extraction with attention mechanism-based temporal modeling, enhanced by graph neural networks for spatial dependency modeling and an adaptive fusion mechanism that intelligently balances local and spatial information based on real-time correlation strength. This framework significantly enhances renewable energy integration by enabling accurate solar power predictions that support grid stability and optimal resource allocation. Extensive experimental validation demonstrates that SpatialSolar-Net achieves superior performance with Mean Absolute Error of 9.98 kW and Root Mean Square Error of 14.79 kW, representing 12.6% and 10.8% improvements over state-of-the-art methods. Most notably, the framework exhibits exceptional robustness during extreme weather events, achieving a remarkable 64% error reduction during dust storm conditions compared to baseline approaches. The adaptive nature enables efficient deployment across diverse geographical regions while maintaining computational efficiency suitable for practical renewable energy integration.
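
The adaptive fusion idea, weighting the spatial branch by real-time correlation strength, can be caricatured with a simple gating rule. This is a hypothetical hand-written rule for illustration; in SpatialSolar-Net the balancing mechanism is learned.

```python
import numpy as np

def adaptive_fusion(local_pred, spatial_pred, corr, corr_floor=0.2):
    """Blend a site's own forecast with the neighbour-informed forecast.

    The spatial branch's weight alpha grows linearly with correlation
    strength `corr`; below `corr_floor` the site relies on local data only.
    """
    alpha = np.clip((corr - corr_floor) / (1 - corr_floor), 0.0, 1.0)
    return (1 - alpha) * local_pred + alpha * spatial_pred
```

Under this rule a site surrounded by weakly correlated neighbours (e.g. during a localised dust storm) automatically falls back to its local branch, which is the behaviour the robustness results describe.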

Author 1: Yiming Liu
Author 2: Mugambigai Darajah
Author 3: Christopher Gan

Keywords: Solar power forecasting; spatial correlation; graph neural networks; adaptive fusion; multi-site collaboration; renewable energy integration; extreme weather robustness; grid stability

PDF

Paper 49: AFL-BERT: Enhancing Minority Class Detection in Multi-Label Text Classification with Adaptive Focal Loss and BERT

Abstract: Fine-tuning transformer models like Bidirectional Encoder Representations from Transformers has enhanced text classification performance. However, class imbalance remains a challenge, causing biased predictions. This study introduces an improved training strategy using a novel Adaptive Focal Loss with dynamically adjusted γ based on class frequencies. Unlike static γ values, this method emphasizes minority classes automatically. Experiments on the CMU Movie Summary dataset show Adaptive Focal Loss surpasses standard binary cross-entropy and Focal Loss, achieving an F1-score of 0.5, ROC accuracy of 0.79, and Micro Recall of 0.53. These results demonstrate the effectiveness of adaptive focusing methods in improving the detection of minority classes in imbalanced scenarios.
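
The dynamically adjusted γ can be sketched as follows. The frequency-to-γ mapping used here (frequent classes get a larger γ, so their easy examples are damped harder and minority classes dominate the loss) is one plausible scheme, not necessarily the authors' exact formula.

```python
import numpy as np

def adaptive_focal_loss(probs, targets, class_freq, base_gamma=2.0, eps=1e-9):
    """Multi-label focal loss with a per-class focusing parameter gamma
    scaled by class frequency (illustrative scheme)."""
    gamma = base_gamma * class_freq / class_freq.mean()
    # p_t is the predicted probability assigned to the true label state.
    p_t = np.where(targets == 1, probs, 1 - probs)
    return -np.mean((1 - p_t) ** gamma * np.log(p_t + eps))
```

With uniform frequencies this reduces to ordinary focal loss, which already down-weights confident predictions relative to binary cross-entropy; the per-class γ shifts that down-weighting toward the majority classes.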

Author 1: Zakia Labd
Author 2: Said Bahassine
Author 3: Khalid Housni

Keywords: Adaptive focal loss; BERT; imbalanced text classification; multilabel text classification

PDF

Paper 50: Attention Aware Dual-Path Autoencoder with Asymmetric Loss for Recognition in Complex Scenes

Abstract: Object recognition in complex scenes is challenging due to cluttered backgrounds, overlapping objects, and degraded image quality. Another difficulty arises from sparse label presence, as most images contain only one to three active labels despite the dataset being balanced across 20 object classes. This intra-sample sparsity complicates binary classification by exposing models to a high proportion of inactive classes. This work aims to improve recognition accuracy, robustness under sparse multi-label conditions, and interpretability in visually complex environments. The objective is to help models focus on relevant visual features, suppress background noise, and better distinguish objects that are rare or overlapping. To address these challenges, we introduce an attention aware dual-path autoencoder that enhances image features while learning to classify multiple objects. The model uses asymmetric loss to reduce the influence of easy negatives and emphasize rare or difficult labels. It also integrates an attention mechanism in the reconstruction path to improve object clarity. The proposed model achieves 96.72 percent accuracy, 0.0328 Hamming Loss, 0.9809 macro ROC-AUC, and 0.8925 macro mAP, along with 0.9372 SSIM and 7.1012 dB PSNR in reconstruction. These results confirm its effectiveness for robust classification and enhanced visual understanding in complex scenes.
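
The asymmetric loss follows the general pattern of treating positives and negatives differently: negatives get a larger focusing power plus a probability margin, so the many easy "inactive class" terms stop dominating the rare active labels. A generic sketch follows; the default γ values and margin are assumptions, not the paper's settings.

```python
import numpy as np

def asymmetric_loss(probs, targets, gamma_pos=0.0, gamma_neg=4.0,
                    clip=0.05, eps=1e-9):
    """Asymmetric multi-label loss: hard positives keep near-full weight,
    while easy negatives are suppressed by a margin shift and a large
    focusing exponent."""
    p_neg = np.clip(probs - clip, 0.0, 1.0)   # margin shift for negatives
    loss_pos = targets * (1 - probs) ** gamma_pos * np.log(probs + eps)
    loss_neg = (1 - targets) * p_neg ** gamma_neg * np.log(1 - p_neg + eps)
    return -np.mean(loss_pos + loss_neg)
```

Any negative predicted below the margin contributes exactly zero, which directly addresses the intra-sample sparsity the abstract describes.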

Author 1: Hashim Rosli
Author 2: Rozniza Ali
Author 3: Muhamad Suzuri Hitam
Author 4: Ashanira Mat Deris
Author 5: Noor Hafhizah Abd Rahim

Keywords: Component; autoencoder; attention aware; feature fusion; image enhancement; multi-label classification

PDF

Paper 51: Secure Data Sharing Using Blockchain Technology: A Systematic Literature Review

Abstract: Data sharing security is currently one of the crucial parts of e-government systems. Although blockchain, a type of Distributed Ledger Technology (DLT), has been increasingly applied to enhance secure data exchange, there is a significant lack of studies focusing on the specific security factors that underpin its implementation in e-government contexts. Defining and understanding these factors is crucial for the successful integration of blockchain into public data infrastructures. This study addresses this research gap through a Systematic Literature Review (SLR) guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 framework. A total of 511 articles were retrieved from five major databases, and 103 were selected and systematically reviewed. While the majority of studies emphasised privacy, integrity, and transparency, other critical security factors such as scalability, availability, governance, and decentralisation remain comparatively underexplored. A theory of blockchain-based data-sharing security factors was developed as a reference. The article concludes by highlighting nine security factors in blockchain-based data sharing for e-government for future investigation.

Author 1: Azman Azmi
Author 2: Farashazillah Yahya
Author 3: Nur Afrina Azman
Author 4: Hazlina Jalil

Keywords: Blockchain; Distributed Ledger Technology (DLT); data sharing security; e-government; Systematic Literature Review (SLR); PRISMA 2020

PDF

Paper 52: Enhancing Firefighter PPE Compliance Through Deep Learning and Computer Vision

Abstract: Ensuring firefighter safety in high-risk environments requires strict adherence to Personal Protective Equipment (PPE) protocols. This study presents an automated real-time PPE detection system using deep learning and computer vision techniques, aiming to improve PPE compliance and overall safety monitoring. The research employs advanced object detection models, specifically YOLOv10 and YOLOv11 (You Only Look Once), to identify critical PPE components such as helmets, gloves, boots, and self-contained breathing apparatus (SCBA) units. A custom-annotated dataset of firefighter images was developed to train and evaluate both models using standard performance metrics such as precision, recall, mAP, F1-score, and Intersection over Union (IoU). The results show that YOLOv11 outperformed YOLOv10, achieving a higher mAP@0.5 score of 0.646 compared to 0.586, with improved detection of small and partially occluded objects and an 11% reduction in training time, while maintaining real-time efficiency. The system generates instant alerts when PPE is missing, minimizing reliance on manual monitoring and improving real-time situational awareness. This research reinforces the role of AI-powered automation in enhancing critical public safety operations. By integrating deep learning and computer vision into PPE monitoring systems, the study contributes to developing intelligent, responsive solutions aligned with modern safety standards.
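
Of the evaluation metrics listed, IoU is the most self-contained to illustrate. This is the standard corner-format formulation, independent of the paper's code.

```python
def iou(box_a, box_b):
    """Intersection over Union for two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

The mAP@0.5 figures quoted above count a detection as correct when its IoU with a ground-truth box exceeds 0.5.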

Author 1: Asmaa Alayed
Author 2: Razan Talal Alqurashi
Author 3: Samah Hamoud Alhelali
Author 4: Asrar Yousef Khadawurdi
Author 5: Bashayer Fayez Khan

Keywords: Firefighter safety; Personal Protective Equipment (PPE); object detection; YOLOv10; YOLOv11; deep learning; computer vision; real-time detection; PPE compliance; AI in public safety

PDF
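The paper above evaluates detections with Intersection over Union (IoU) among its metrics. As a hedged illustration (the box coordinates below are hypothetical, not from the paper's dataset), IoU for axis-aligned boxes reduces to a few lines of Python:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width/height of the overlap rectangle (zero if the boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A predicted helmet box against a ground-truth box (illustrative values).
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.3333...
```

A detection typically counts as a true positive when this score exceeds the threshold used by the mAP metric (0.5 for mAP@0.5).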

Paper 53: Multi-Agent Deep Reinforcement Learning Algorithms for Distributed Charging Station Management

Abstract: With the continued growth of the electric vehicle (EV) fleet, the issue of cross-regional coordinated scheduling for charging infrastructure has become increasingly prominent, facing challenges such as uneven resource allocation and delayed responses. Considering the complex coupling between charging stations and the power system in a smart grid environment, this paper proposes a distributed scheduling strategy based on multi-agent deep reinforcement learning (MADRL) to achieve efficient, coordinated management of charging infrastructure and power resources. The proposed approach constructs a hierarchical decision-making architecture to jointly optimize intra-regional resource allocation and cross-regional power support, modeling the scheduling process as a Markov Decision Process (MDP) and treating regional charging stations, power nodes, and material units as independent agents. Through the multi-agent deep reinforcement learning mechanism, each agent autonomously learns optimal scheduling policies in the presence of uncertain demand and supply fluctuations, thus enabling rapid response and enhancing system robustness. Simulation results demonstrate that the proposed method effectively reduces scheduling costs and improves resource utilization and service quality. This study provides both theoretical support and practical pathways for building intelligent, efficient, and sustainable charging infrastructure.

Author 1: Li Junda
Author 2: Wang Tianan
Author 3: Zhang Dingyi
Author 4: Wu Quancai
Author 5: Liu Jian

Keywords: Charging station scheduling; cross-regional coordination; multi-agent systems; deep reinforcement learning; Markov decision process; resource optimization; uncertainty response

PDF
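The paper above models scheduling as a Markov Decision Process with independently learning agents. As an illustrative sketch only (a toy single-region MDP with invented states, rewards, and tabular Q-learning, not the paper's deep MADRL architecture), the learning loop looks like:

```python
import random

def step(state, action):
    """Toy one-region charging MDP: state = pending demand level (0..2);
    action 1 = dispatch cross-regional power support, action 0 = hold."""
    if action == 1:
        # Dispatching serves one demand unit; dispatching with no demand is penalised.
        return max(0, state - 1), (1.0 if state > 0 else -1.0)
    # Holding lets demand accumulate, at a waiting cost proportional to the backlog.
    return min(2, state + 1), -0.5 * state

random.seed(0)
Q = {(s, a): 0.0 for s in range(3) for a in (0, 1)}
alpha, gamma, eps = 0.3, 0.9, 0.1
state = 0
for _ in range(20000):
    a = random.choice((0, 1)) if random.random() < eps else max((0, 1), key=lambda x: Q[(state, x)])
    nxt, r = step(state, a)
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)]) - Q[(state, a)])
    state = nxt

# The learned greedy policy dispatches whenever demand is pending.
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in range(3)}
print(policy)
```

Deep MADRL replaces the tabular `Q` with neural networks and runs one such learner per agent (charging station, power node, material unit), but the value-update structure is the same.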

Paper 54: A Review of Federated Learning Attacks: Threat Models and Defence Strategies

Abstract: Federated Learning (FL) has emerged as a critical paradigm in privacy-preserving machine learning, enabling collaborative model training across decentralised devices without sharing raw data. While FL enhances privacy by maintaining data locality, it remains susceptible to sophisticated adversarial attacks. This review systematically analyses the FL threat landscape and introduces a novel taxonomy that classifies attack models based on their objectives, capabilities, and exploited vulnerabilities. Major categories include data poisoning, inference attacks, and Byzantine behaviours, each examined in terms of mechanisms, assumptions, and system impact. In addition, the paper evaluates prominent defence strategies—such as differential privacy, secure aggregation, and anomaly detection—by assessing their strengths, limitations, and real-world applicability. Key gaps include the lack of standardised evaluation metrics and limited exploration of adaptive defence mechanisms. Emerging trends such as homomorphic encryption, secure multi-party computation, and blockchain-based verifiability are also discussed. This review is a comprehensive resource for researchers and practitioners aiming to design resilient, privacy-aware FL systems that withstand evolving threats.

Author 1: Fizlin Zakaria
Author 2: Shamsul Kamal Ahmad Khalid

Keywords: Federated learning; threat models; defence strategies; privacy-preserving AI; adversarial attacks

PDF
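The review above contrasts plain aggregation with Byzantine-robust defences. A minimal sketch, assuming a coordinate-wise median aggregator and invented client updates (the paper surveys many defences; this is just one simple instance):

```python
from statistics import median

def fedavg(updates):
    """Plain FedAvg: coordinate-wise mean of client model updates."""
    return [sum(c) / len(c) for c in zip(*updates)]

def robust_aggregate(updates):
    """Coordinate-wise median: a simple Byzantine-robust aggregator."""
    return [median(c) for c in zip(*updates)]

honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
poisoned = honest + [[100.0, -100.0]]  # one malicious (poisoning) client

print(fedavg(poisoned))            # dragged far from the honest mean
print(robust_aggregate(poisoned))  # stays near the honest updates
```

The example shows why a single poisoning client can dominate an averaging server while order statistics bound its influence, at the cost of discarding information from honest outliers.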

Paper 55: Intelligent Logistics Vehicle Scheduling Based on MPHIGA

Abstract: The current intelligent logistics vehicle scheduling faces challenges, including the difficulty of obtaining real-time location data and the need for manual intervention in emergencies. To address these issues, a modified multi-population hybrid genetic algorithm is proposed, along with an intelligent scheduling model constructed through the reconstruction of domain generation strategies. Experimental results show that the model stabilizes the total cost at 7864 yuan within 49 iterations, whereas the dual-population hybrid genetic algorithm requires 51 iterations, making convergence more time-consuming. Moreover, when the scheduling frequency is two, the research model successfully allocates three company vehicles, whereas the comparison algorithm can only allocate two. Overall, the research model offers significant advantages in reducing operating costs and enhancing dynamic response capabilities, providing effective technical support for the digital transformation of logistics companies.

Author 1: Xinxin Gao
Author 2: Qing Wang

Keywords: Multi-population hybrid improved genetic algorithm; domain generation algorithm; logistics vehicles; co-evolution; scheduling management

PDF

Paper 56: Fuzzy Delphi Method: A Step-by-Step Guide to Obtaining Expert Consensus on Mobile Tourism Acceptance Culture

Abstract: Mobile technology has developed rapidly in a short period of time, which has greatly changed the tourism sector and led to the emergence of Mobile Tourism (MT). To ensure that MT grows well and is widely used, it is important to know how people from different cultures accept it. This study provides a complete description of how to use the Fuzzy Delphi Method (FDM) to obtain expert agreement on the most important factors that influence how acceptable mobile tourism is from a cultural perspective. This study uses the Technology Acceptance Model (TAM) and Hofstede's Cultural Dimensions to carefully find and confirm the variables and indicators and how they are interrelated. This approach describes a rigorous process with nine stages in reaching expert agreement. The results revealed that experts largely agreed on the variables related to perceived usefulness, perceived trust, perceived ease of use, and facilitating conditions in the TAM framework, as well as some variables of collectivism, uncertainty avoidance, and long-term orientation in Hofstede's cultural aspects. This study also verified and validated the overall relationship between variables in building the Mobile Tourism Cultural Acceptance (MTCA) framework, the general and specific interactions between variables, and the function of cultural dimensions as mediators. This study shows how important it is to get expert opinion when making a comprehensive plan on how to use technology in a culturally acceptable environment for mobile tourism. The information obtained has a major impact on mobile tourism developers, policy makers, and marketers who want to make MT more popular.

Author 1: Syaifullah
Author 2: Shamsul Arrieya Ariffin
Author 3: Norhisham Mohamad Nordin

Keywords: Cultural acceptance; Fuzzy Delphi Method (FDM); Hofstede's cultural dimensions; mobile tourism; technology acceptance model (TAM)

PDF
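For readers unfamiliar with FDM mechanics, the following sketch shows one common scheme: Likert ratings mapped to triangular fuzzy numbers, expert-to-average distances checked against a consensus threshold, and mean defuzzification. The scale, thresholds, and ratings are illustrative assumptions, not necessarily the thresholds used in the paper's nine-stage procedure:

```python
from math import sqrt

# One common 5-point Likert -> triangular fuzzy number mapping.
SCALE = {1: (0.0, 0.0, 0.2), 2: (0.0, 0.2, 0.4), 3: (0.2, 0.4, 0.6),
         4: (0.4, 0.6, 0.8), 5: (0.6, 0.8, 1.0)}

def fdm_consensus(ratings, d_max=0.2, agree_min=0.75, alpha=0.5):
    fuzzy = [SCALE[r] for r in ratings]
    n = len(fuzzy)
    avg = tuple(sum(t[i] for t in fuzzy) / n for i in range(3))
    # Distance of each expert's fuzzy number from the group average.
    dists = [sqrt(sum((t[i] - avg[i]) ** 2 for i in range(3)) / 3) for t in fuzzy]
    agreement = sum(d <= d_max for d in dists) / n
    score = sum(avg) / 3  # simple mean defuzzification
    accepted = (sum(dists) / n <= d_max) and (agreement >= agree_min) and (score >= alpha)
    return accepted, round(score, 3)

# Ten hypothetical expert ratings for one "perceived usefulness" item.
print(fdm_consensus([5, 4, 5, 4, 5, 5, 4, 4, 5, 5]))
```

An item is retained only when the threshold value, the percentage of agreeing experts, and the defuzzified score all clear their cut-offs.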

Paper 57: Optimized Automatic Temperature and Humidity Control for Tobacco Storage Using TwinCAT and Deep Reinforcement Learning

Abstract: With the rapid development of the tobacco industry, precise temperature and humidity control in storage environments has become essential for maintaining tobacco leaf quality. Traditional manual control methods suffer from low efficiency and limited accuracy, failing to meet modern storage demands. This study proposes an optimized automatic control system integrating TwinCAT and deep reinforcement learning (DRL) to enhance climate regulation in tobacco warehouses. Leveraging TwinCAT’s real-time control capabilities and DRL’s adaptive decision-making, the system achieves precise environmental regulation. Experimental results demonstrate that temperature and humidity control errors are reduced to ±0.5 °C and ±3%, respectively. Compared to conventional methods, the proposed system lowers energy consumption by 20% and reduces the mildew rate of stored tobacco by 15%, significantly improving storage quality. This work offers a novel technical framework for intelligent environmental control in tobacco storage and provides valuable insights for broader applications in similar domains.

Author 1: Zhen Liu
Author 2: Jili Wang
Author 3: Shihao Song
Author 4: Qiang Hua

Keywords: TwinCAT; deep reinforcement learning; tobacco storage; temperature and humidity control; system optimization

PDF

Paper 58: Adaptive AI-Driven Enterprise Resource Planning for Scalable and Real-Time Strategic Decision Making

Abstract: Enterprise Resource Planning (ERP) systems play a critical role in managing organizational assets and operations. However, traditional ERP systems rely on static, rule-based decision-making frameworks that lack the agility and intelligence required for real-time strategic support. To address these limitations, this study proposes an Adaptive AI-Driven Enterprise Resource Planning (A2ERP). This AI-augmented ERP framework integrates adaptive predictive models to enhance decision-making capabilities at scale. The A2ERP architecture features a dynamic data ingestion layer, an adaptive predictive engine utilizing online learning and ensemble methods, and a decision support interface empowered with explainable AI (XAI). It is designed for scalability through a containerized microservices architecture. Experimental results demonstrate that A2ERP achieves a 98% accuracy rate in both training and testing phases, effectively identifying errors such as omission, addition, and overstatement. Comparative evaluations show that A2ERP outperforms traditional ERP methods across key performance metrics, including precision, recall, and F1-score. The framework’s ability to process large-scale, complex data in real-time underscores its effectiveness in delivering timely strategic insights. A2ERP represents a significant advancement toward scalable, adaptive, and intelligent ERP systems, bridging the gap between operational execution and strategic decision-making.

Author 1: Ghayth AlMahadin

Keywords: Enterprise resource planning; adaptive predictive modeling; real-time decision support; AI-Augmented ERP; ensemble learning

PDF
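The comparative evaluation above cites precision, recall, and F1-score. For reference, over hypothetical error-flagging labels (1 = a transaction flagged as an omission, addition, or overstatement error; the labels are invented, not the paper's data) these metrics compute as:

```python
def prf1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for a binary labelling task."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(prf1(y_true, y_pred))  # (0.75, 0.75, 0.75)
```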

Paper 59: Enhancing Intent Recognition for Mixed Script Queries Using Roman Transliteration

Abstract: Intent identification has become a difficult problem given the rising use of multilingual and mixed-script queries, especially in regions where Roman transliteration is widely employed. Traditional intent detection systems suffer from the discrepancies and inconsistencies of transliterated text, which lower their accuracy. The objective of this paper is to examine the difficulties of intent recognition in mixed-script queries, to develop a transliteration-based method that enhances intent recognition, and to assess the efficacy of the proposed model against current intent detection methods. The suggested approach applies Roman transliteration pre-processing to mixed-script queries, followed by feature extraction and classification using machine learning and deep learning models. The proposed hybrid deep learning architecture, which combines a CNN, BiLSTM, and an attention mechanism, achieves an accuracy of 92.4% and an F1-score of 91.0%, beating baseline models such as SVM, Random Forest, LSTM, and Transformer. Moreover, transliteration preprocessing improved accuracy by 7–9% across various models, demonstrating the success of the approach.

Author 1: Anu Chaudhary
Author 2: Rahul Pradhan
Author 3: Shashi Shekhar

Keywords: Intent identification; intent recognition; mixed-script queries; machine learning; deep learning model

PDF

Paper 60: A Hybrid Deep Learning and Optimization Approach for Accurate Channel Estimation in 5G MIMO-OFDM Systems

Abstract: Channel estimation plays a pivotal role in enhancing the reliability and efficiency of 5G wireless communication systems, particularly in MIMO-OFDM (Multiple Input Multiple Output - Orthogonal Frequency Division Multiplexing) architectures under multipath and Doppler-affected conditions. Conventional methods such as Least Squares (LS) are widely used due to their low computational complexity and lack of requirement for prior channel statistics. However, these approaches often result in poor estimation accuracy, especially in dynamic environments. To overcome these limitations, this study introduces a hybrid deep learning-based channel estimation framework that integrates Harris Hawks Optimization (HHO), Sparrow Search Algorithm (SSA), and Long Short-Term Memory (LSTM) networks—referred to as HHO-SSA-LSTM. The proposed method is designed to optimize the LSTM parameters using HHO and SSA, enhancing learning efficiency and estimation accuracy. Additionally, the model employs hybrid pre-coding aligned with codebook modeling strategies to preserve angle characteristics without disrupting azimuthal distributions. The system is evaluated in a 5G MIMO-OFDM setting under realistic conditions simulated using Doppler frequency and multipath propagation. Performance is assessed using key metrics including Bit Error Rate (BER), Mean Square Error (MSE), Symbol Error Rate (SER), efficiency, and execution time across different Pilot Lengths (PL = 128, 136, and 160). Simulation results demonstrate that the HHO-SSA-LSTM framework outperforms LS, LMMSE (Linear Minimum Mean Square Error), CNN (Convolutional Neural Network), FDNN (Forest Deep Neural Network), and standalone LSTM models. Notably, at PL = 160, BER is reduced by up to 91% and MSE by 86%, with an efficiency improvement exceeding 12% compared to traditional methods. Although the model exhibits a slightly higher execution time due to its hybrid design, the substantial accuracy gains justify the trade-off. The findings validate the effectiveness of the proposed hybrid model for robust and efficient channel estimation in 5G networks.

Author 1: Mohammed Fakhreldin

Keywords: Multiple-input multiple-output; channel estimation; orthogonal frequency division multiplexing; long short-term memory; pilot length

PDF
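The paper's baseline, Least Squares estimation, divides each received pilot symbol by the known transmitted pilot. A minimal single-antenna sketch with an invented flat channel and noise level (the paper's MIMO-OFDM setting generalises this per transmit/receive antenna pair):

```python
import cmath
import random

random.seed(1)
n_pilots = 8
# Known transmitted pilot symbols (unit-modulus 8-PSK points, illustrative).
X = [cmath.exp(1j * cmath.pi / 4 * random.randrange(8)) for _ in range(n_pilots)]
# True channel coefficient per pilot subcarrier (flat for this demo) plus noise.
H = [complex(0.8, 0.3) for _ in range(n_pilots)]
noise = [complex(random.gauss(0, 0.05), random.gauss(0, 0.05)) for _ in range(n_pilots)]
Y = [h * x + n for h, x, n in zip(H, X, noise)]  # received pilot symbols

# LS estimate per subcarrier: H_ls = Y / X, requiring no channel statistics.
H_ls = [y / x for y, x in zip(Y, X)]
mse = sum(abs(e - h) ** 2 for e, h in zip(H_ls, H)) / n_pilots
print(round(mse, 4))
```

The residual MSE here is set entirely by the noise, which is exactly the weakness the abstract notes: LS cannot suppress noise or track dynamics, motivating the learned HHO-SSA-LSTM estimator.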

Paper 61: Explainable Approach Using Semantic-Guided Alignment for Radiology Imaging Diagnosis

Abstract: The increased success of deep learning in the radiology imaging domain has significantly advanced automated diagnosis and report generation, aiming to enhance diagnostic precision and clinical decision-making. However, existing methods often struggle to achieve detailed morphological description, resulting in reports that provide only general information without precise clinical specifics and thus fail to meet the stringent interpretability requirements of medical diagnosis. In addition, the critical need for transparency in clinical automated systems has catalyzed the emergence of explainable artificial intelligence (XAI) as an essential research frontier. To address these limitations, we propose an explainable system for report generation that leverages semantic-guided alignment and interpretable multimodal deep learning. Our model combines hierarchical semantic feature extraction from medical reports with fine-grained features that guide the model to focus on lesion-relevant visual features, and uses Concept Activation Vectors (CAVs) to explain how radiological concepts affect report generation. A contrastive multimodal fusion module aligns textual and visual modalities through hierarchical attention and contrastive learning. Finally, an integrated concept activation system provides transparent explanations by quantifying how radiological concepts influence generated reports. Validation of our approach in comparisons with existing methods indicates a corresponding boost in report quality in terms of clinical accuracy of the description, localization of the lesion, and contextual consistency, positioning our framework as a robust tool for generating more accurate and reliable medical reports.

Author 1: Fatima Cheddi
Author 2: Ahmed Habbani
Author 3: Hammadi Nait-Charif

Keywords: Automated report generation; explainable AI; cross-modal fusion; contrastive learning; semantic-guided alignment

PDF

Paper 62: 3D Reconstruction from JPG Images

Abstract: Three-dimensional (3D) reconstruction from two-dimensional (2D) images is a fundamental challenge in computer vision and photogrammetry, with applications in medical imaging, robotics, and augmented reality. This research introduces an image-based modeling pipeline designed to overcome the inherent limitations of Joint Photographic Experts Group (JPEG) images, such as lossy compression and reduced structural fidelity. The proposed hybrid framework integrates photogrammetric methods, specifically Structure-from-Motion (SFM) and Dense Stereo Matching, with advanced point cloud generation and surface reconstruction techniques. Initially, Marching Cubes was utilized to generate dense point clouds from sequential JPEG slices, followed by Poisson Surface Reconstruction to produce watertight 3D models. Structural details are further enhanced using Structural Similarity Index (SSIM)-guided texture refinement. Evaluated on the Kaggle Chest CT Segmentation dataset, the method achieves an SSIM score of 0.725, outperforming the JPEG-based reconstruction baseline of 0.675 by 7.4%. In addition to improved accuracy, the study explores the balance between computational cost and reconstruction quality, offering insights relevant to real-time and resource-constrained applications. By bridging photogrammetry with computer vision, this work advances practical 3D reconstruction from compressed medical images, enabling efficient digitization in low-bandwidth environments.

Author 1: Youssif Mohamed Mostafa
Author 2: Maryam N. Al-Berry
Author 3: Howida A. Shedeed

Keywords: 3D reconstruction; photogrammetry; computer vision; image-based modeling; point cloud generation; JPEG images

PDF

Paper 63: A Focused Survey of ECG Datasets for Artificial Intelligence-Based Atrial Fibrillation Detection

Abstract: Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia and increases the risk of stroke, heart failure, and mortality. Electrocardiography (ECG) is the most important technology for AF detection because it is inexpensive and non-invasive and provides clinically useful information. However, the variability of ECG patterns, particularly during paroxysmal AF, creates challenges in detecting AF. Artificial Intelligence (AI) offers a promising opportunity to improve AF recognition. However, AI performance is contingent on obtaining high-quality and diverse ECG datasets. This paper presents a focused survey of 15 publicly available and clinical ECG datasets used in AI-driven AF detection research between 2023 and 2025. We analyze the datasets based on acquisition methods, ECG type, format, lead configurations, annotation richness, and their application in AI models. Our comparative analysis reveals major trends, challenges such as data imbalance and motion artifacts, and gaps in current datasets including limited demographic diversity and underrepresentation of wearable ECG data. This study aims to guide future research toward more robust, interpretable, and inclusive AF detection models.

Author 1: ASSALHI Imane
Author 2: Bybi Abdelmajid
Author 3: Oulad Hamdaoui Hanaa
Author 4: Ebobisse Djene Yves Frederic
Author 5: Drissi Lahssini Hilal

Keywords: Atrial fibrillation; ECG datasets; Artificial Intelligence; AI-ECG; dataset survey; AF detection

PDF

Paper 64: DLCA-CapsNet: Dual-Lane CDH Atrous CapsNet for the Detection of Plant Diseases

Abstract: Humanity's survival, development, and existence are deeply intertwined with agriculture, the source of most of our food. Plant disease detection helps in securing food, but manual plant disease detection is error-prone and labor-intensive. Convolutional Neural Networks (CNNs) are highly effective for automated plant disease classification, but their difficulty in recognizing differently oriented images means they need large datasets with many variations to work best. Capsule Networks (CapsNets) were developed to overcome the shortcomings of CNNs and can function effectively with smaller datasets. However, CapsNets process every part of an input image, so their performance can suffer when dealing with complex visuals. To tackle this challenge, DLCA-CapsNet was introduced. DLCA-CapsNet integrates a Color Difference Histogram (CDH) layer for key feature extraction, atrous convolution layers to enlarge receptive fields while maintaining spatial details, along with max-pooling, standard convolutional layers, and a dropout layer. The proposed DLCA-CapsNet method was evaluated on datasets including apple, banana, grape, maize, mango, pepper, potato, rice, tomato, as well as CIFAR-10 and Fashion-MNIST. The model demonstrated strong performance with high test accuracies in plant disease detection and on CIFAR-10 and Fashion-MNIST. It improved test accuracies by 6.78%, 14.82%, 6.14%, 5.07%, 21.12%, 40.32%, 4.64%, 0.76%, 10.23%, 13.73%, and 2.03%, while also reducing the number of parameters in millions by 6.16M, 6.16M, 6.16M, 6.16M, 7.14M, 5.68M, 5.92M, 7.62M, 7.62M, and 6.54M respectively when compared with the original CapsNet. In terms of sensitivity, F1-score, precision, specificity, Receiver Operating Characteristic and Precision-Recall values, accuracy, disk size, and number of parameters generated, DLCA-CapsNet achieved better performance compared to the original CapsNet and other advanced CapsNets reported in the literature. The findings suggest that this efficient and computationally less demanding method can significantly enhance plant disease classification and contribute incrementally to efforts aligned with the SDG 2 goal by offering a lightweight, scalable solution that can be adapted for field use in resource-constrained settings.

Author 1: Steve Okyere-Gyamfi
Author 2: Michael Asante
Author 3: Yaw Marfo Missah
Author 4: Kwame Ofosuhene Peasah
Author 5: Vivian Akoto-Adjepong

Keywords: Color Difference Histogram (CDH); Convolutional Neural Network (CNN); atrous Convolution; Capsule Neural Network; plant disease detection; dynamic routing; AI in agriculture

PDF

Paper 65: A Comparative Study of Machine Learning Techniques for AE-Based Corrosion Detection with Emphasis on Transformer Models

Abstract: Corrosion-induced damage poses a critical threat to the structural integrity of fluid transport pipelines, necessitating advanced detection strategies for early intervention. This study investigates the use of acoustic emission (AE) monitoring in conjunction with machine learning techniques to identify anomalies indicative of corrosion. A comprehensive analysis of supervised, unsupervised, semi-supervised, and self-supervised learning methods is presented, with emphasis on their suitability for AE-based anomaly detection. Building upon this foundation, we implement and evaluate multiple machine learning models—including K-Nearest Neighbours (KNN), Support Vector Machines (SVM), Artificial Neural Networks (ANN), and Convolutional Neural Networks (CNN)—and compare them to a Transformer-based model integrated into a hybrid CNN-Transformer architecture. Experimental results demonstrate that the hybrid model outperforms all baselines, achieving R-squared values of 0.7037 for Acoustic Signal Level (ASL) and 0.6836 for Root Mean Square (RMS), thus confirming its superior ability to capture both local and long-range dependencies in acoustic emission data. A systematic review of recent Transformer-based corrosion detection models further contextualizes the results. This research highlights the promise of Transformer-based models in robust, real-time corrosion monitoring and offers a pathway toward more intelligent, machine learning-driven infrastructure maintenance systems.

Author 1: Osama Shahid Ali
Author 2: Lukman B A Rahim

Keywords: Acoustic emissions; transformer-based models; machine learning

PDF
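The paper above reports model quality as R-squared values for the ASL and RMS targets. For reference, the coefficient of determination over hypothetical signal-level readings (invented numbers, not the paper's data) computes as:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical acoustic signal level readings vs. model predictions.
y_true = [10.0, 12.0, 14.0, 16.0]
y_pred = [10.5, 11.5, 14.5, 15.5]
print(r_squared(y_true, y_pred))  # 0.95
```

Values like the paper's 0.7037 thus mean the model explains roughly 70% of the variance in the acoustic emission target.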

Paper 66: Smartphone-Integrated Sensor-Based DFU Risk Assessment Using CatBoost and Deep Neuro-Fuzzy Intelligence

Abstract: Diabetic Foot Ulcer (DFU) is a serious and common complication of diabetes mellitus, which can lead to lower limb amputation if not identified and treated in its early stages. This study introduces an integrated and intelligent system designed for the early detection and severity classification of DFUs by combining sensor-driven data collection with machine learning techniques in a mobile application. The research is based on a dataset comprising both clinical features (D-1 to D-16) and key sensor-based readings gathered from 316 participants. After preprocessing and normalization, the clinical data undergoes feature selection using CatBoost, which filters out the five least impactful features while preserving all sensor data due to its diagnostic relevance. The refined dataset is then processed using a Deep Neuro-Fuzzy Network (DN-FN) to deliver real-time DFU severity predictions, categorized into Low, Mid, and High-risk levels. The solution is deployed through an intuitive smartphone interface, enabling users to input clinical data once and conduct periodic sensor-based tests—including vibration, pressure, and temperature readings. The mobile application interfaces with embedded hardware via Bluetooth and performs offline inference using a compact version of the trained model. The system is designed to offer both patients and healthcare professionals a practical and interpretable tool for continuous monitoring of foot health, with the ultimate goal of reducing the risk and impact of DFU complications.

Author 1: Jayashree J
Author 2: Vijayashree J
Author 3: Perepi Rajarajeswari
Author 4: Saravanan S

Keywords: Bayesian Optimization; CatBoost; Deep Neuro-Fuzzy Networks (DN-FN); Diabetic Foot Ulcer (DFU) prediction; sensor-based risk stratification

PDF

Paper 67: Decoding Sales Order Anomalies: Advanced Predictive Modeling and Discrepancy Resolution Utilizing Machine Learning Algorithms

Abstract: This study examines the accuracy of order prediction and determines the grounds for order block predictions. It quantifies order deviation by calculating forecast variation using R2 scores and mean absolute deviation. The blocks checked mainly include the business partner block, credit block, common block, and delivery block. Demand forecasts compare six months' worth of sales data against mean absolute deviation and coefficient of variation. This study puts forth a proposal for resolving discrepancies between sales order forecasts and confirms credit management's system credit limits on sales orders. Parameters for evaluating orders are set based on historical data. Machine learning (ML) techniques, namely Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) algorithms, are utilized to improve accuracy, achieving 96% and 93% respectively.

Author 1: Amit Kumar Soni
Author 2: Pooja Jain

Keywords: Block predictions; credit; machine learning; sales data

PDF
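The forecast checks in the paper above rely on mean absolute deviation and coefficient of variation. Over six months of invented sales figures (illustrative values only), they compute as:

```python
from statistics import mean, pstdev

def forecast_dispersion(actuals, forecasts):
    """Mean absolute deviation of forecast error, and coefficient of
    variation (population std / mean) of the actual demand."""
    mad = mean(abs(a - f) for a, f in zip(actuals, forecasts))
    cv = pstdev(actuals) / mean(actuals)
    return mad, cv

# Six hypothetical months of sales vs. forecast.
sales = [100, 120, 110, 130, 125, 115]
forecast = [105, 115, 112, 128, 120, 118]
mad, cv = forecast_dispersion(sales, forecast)
print(round(mad, 2), round(cv, 3))
```

A low MAD indicates the forecast tracks actual demand closely, while the CV expresses how volatile demand itself is relative to its average.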

Paper 68: Home Network Attached Storage (HOMENAS) Using Raspberry Pi with Telegram Bot Notification

Abstract: This paper presents the development of Home Network Attached Storage (HOMENAS) using a Raspberry Pi with Telegram Bot notification. Network Attached Storage (NAS) is an independent storage system connected directly to the network that can be accessed easily. NAS devices are readily available on the market nowadays; however, they are expensive, consume considerable electricity, and lack a notification mechanism. This paper proposes the development of HOMENAS at a lower cost and with lower power consumption than current NAS devices on the market. The proposed HOMENAS is also integrated with a Telegram Bot, which notifies users of the progress of file downloads. Implementing a Raspberry Pi as the home network attached storage reduces the energy cost by 95%. A network performance test was conducted to evaluate the streaming rate for single and multiple users over wired and wireless connections. The results show that the Raspberry Pi not only matches the performance of a laptop but, in some aspects, outperforms it in torrent-based file downloading tasks.

Author 1: Nurul Najwa Abdul Rahid @ Abdul Rashid
Author 2: Syafnidar Abdul Halim
Author 3: Siti Maisarah Md Zain
Author 4: Nik Aiman Shafiq Nik Shukri

Keywords: Home network; NAS; network attached storage; raspberry pi; telegram bot

PDF
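The notification path described above typically uses the Telegram Bot API's sendMessage method, which is a plain HTTPS request. A sketch that only builds the request URL (the token and chat id are placeholders, and actually sending it requires a real bot token; this is not necessarily the paper's exact implementation):

```python
from urllib.parse import urlencode

def build_send_message(token, chat_id, text):
    """Build a Telegram Bot API sendMessage URL for a download notification."""
    base = f"https://api.telegram.org/bot{token}/sendMessage"
    return f"{base}?{urlencode({'chat_id': chat_id, 'text': text})}"

url = build_send_message("<BOT_TOKEN>", "123456", "Download complete: movie.mkv")
print(url)
```

On a Raspberry Pi, a small script can issue this request (e.g. via `urllib.request` or `curl`) whenever the torrent client finishes a download.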

Paper 69: Enhancing Data Management for Decision Support Systems in Indonesian Government Internal Audit: A DMBOK Approach

Abstract: Indonesian public institutions, including the Financial and Development Supervisory Agency (BPKP), face challenges such as fragmented standards and poor data quality, which hinder effective Decision Support Systems (DSS). This research aims to evaluate BPKP's current analytics maturity level using the TDWI Analytics Maturity Model and to formulate a Data Management Body of Knowledge (DMBOK)-based strategy to enhance its data management and analytical capabilities in support of decision-making. This qualitative descriptive case study employed document analysis. The research stages involved assessing maturity using the TDWI model, conducting a gap analysis, formulating a strategy with DMBOK principles, and proposing an implementation roadmap based on Aiken's Data Management Value Pyramid. The research findings indicate BPKP's analytics maturity is at the "Early Adoption" stage (overall score 3.41), with the Analytics dimension scoring the lowest (2.60) and exhibiting the largest gap (1.40). Key challenges identified are underdeveloped institutional metadata and limited application of advanced analytics. A comprehensive DMBOK-based strategy and a four-phased implementation roadmap using Aiken's Pyramid were proposed to address these issues.

Author 1: Febrian Imanda Effendy
Author 2: Nilo Legowo

Keywords: Data management; data management body of knowledge; Indonesian government internal audit agency; decision support system

PDF

Paper 70: Game Theory Meets Explainable AI: An Enhanced Approach to Understanding Black Box Models Through Shapley Values

Abstract: The increasing complexity of machine learning models necessitates robust methods for interpretability, particularly in clustering applications, where understanding group characteristics is critical. To this end, this paper introduces a novel framework that integrates cooperative game theory and explainable artificial intelligence (XAI) to enhance the interpretability of black-box clustering models. Our framework integrates approximated Shapley values with multi-level clustering to reveal hierarchical feature interactions, enabling both local and global interpretability. The validity of the framework is established through extensive empirical evaluations on two datasets, the Portuguese wine quality benchmark and the Beijing Multi-Site Air Quality dataset. The framework demonstrates improved clustering quality and interpretability, with features such as density and total sulfur dioxide emerging as dominant predictors in the wine analysis, while pollutants like PM2.5 and NO2 significantly influence air quality clustering. Key contributions include a multi-level clustering approach that reveals hierarchical feature attribution, the use of interactive visualizations produced with Altair, and a unified interpretability framework validated against state-of-the-art baselines. As a result, the framework forms a strong basis for interpretable clustering in essential fields such as healthcare, finance, and environmental surveillance, which reinforces its generalization across domains. The results underline the need for interpretability in machine learning, providing actionable insights for stakeholders in a variety of fields.

Author 1: Mouad Louhichi
Author 2: Redwane Nesmaoui
Author 3: Mohamed Lazaar

Keywords: Cooperative game theory; Explainable Artificial Intelligence (XAI); Shapley values; cluster analysis; interpretability; feature attribution; black-box models

PDF

Paper 71: Anomaly Detection and Fault Diagnosis of Power Distribution Line Point Cloud Data Based on Deep Learning

Abstract: Early and accurate fault diagnosis in power distribution systems is essential to ensure stable electricity delivery and prevent outages. This study presents a deep learning-based anomaly detection framework that analyzes 3D LiDAR point cloud data to identify structural defects in power distribution lines. Leveraging advancements in deep learning and 3D sensing, a hybrid architecture combining PointNet++ and 3D Convolutional Neural Networks (3D CNN) is proposed. The system processes point clouds from the TS40K dataset, comprising high-resolution, annotated scans of power infrastructure, and uses a feature fusion strategy to integrate fine-grained local geometry from PointNet++ with global volumetric features from 3D CNN. Implemented in Python, the method achieves a 94.7% accuracy in fault diagnosis, outperforming standalone models. It robustly detects anomalies such as sagging wires, leaning poles, and broken insulators, maintaining precision, recall, and F1-scores above 90%, even under noisy and sparse conditions. Visualization of detected faults on 3D models confirms its precise localization capability, supporting real-time monitoring and maintenance planning in smart grids. By integrating complementary deep learning techniques, this approach offers a scalable, accurate, and automated solution for anomaly detection and fault diagnosis in power distribution systems. Future work will focus on multi-sensor fusion and semi-supervised learning to reduce dependence on labeled data and broaden applicability to other infrastructure use cases.

Author 1: Jiangshun Yu
Author 2: Poyu You
Author 3: Jian Zhao
Author 4: Xianzhe Long
Author 5: Yuran Chen

Keywords: Power distribution; anomaly detection; point cloud; deep learning; fault diagnosis

PDF

Paper 72: HGWWO: A Hybrid Grey Wolf–Whale Optimizer for Load Balancing in Cloud Computing Environments

Abstract: This paper aims to develop an efficient and adaptive load balancing algorithm for cloud computing environments using a novel hybrid meta-heuristic approach. Effective load balancing is necessary for optimum performance and resource utilization in cloud computing systems. Most conventional meta-heuristic algorithms suffer from premature convergence and poor exploration–exploitation tradeoffs. An innovative hybrid meta-heuristic algorithm, Hybrid Grey Wolf–Whale Optimizer (HGWWO), is proposed for efficiently and dynamically balancing cloud load. HGWWO integrates the leadership hierarchy and adaptive hunting strategy of the Grey Wolf Optimizer (GWO) with the spiral-shaped exploitation mechanism of the Whale Optimization Algorithm (WOA), resulting in high convergence rates. The algorithm is implemented in a multi-objective cloud load balancing model to reduce response time, energy usage, and makespan while optimizing resource utilization among virtual machines. The experimental outcomes prove that HGWWO outperforms existing algorithms regarding throughput, waiting time, and execution efficiency. The suggested model has potential for real-time cloud scheduling of resources and is an efficient solution for scalable and heterogeneous cloud environments.
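
For readers unfamiliar with the two metaheuristics being hybridized, the sketch below shows one plausible way a GWO encircling step and a WOA spiral step can be mixed on a one-dimensional problem. It is an illustrative toy under stated assumptions, not the paper's HGWWO update rule; the 50/50 switching probability and all constants are assumptions.

```python
import math
import random

def hybrid_step(wolves, fitness, t, t_max, b=1.0):
    """One illustrative iteration of a GWO/WOA hybrid on a 1-D problem:
    GWO encircling toward the three best wolves, with a 50% chance of
    the WOA spiral move around the current best solution."""
    a = 2.0 * (1 - t / t_max)                 # GWO control parameter, 2 -> 0
    ranked = sorted(wolves, key=fitness)
    alpha, beta, delta = ranked[0], ranked[1], ranked[2]
    new = []
    for x in wolves:
        if random.random() < 0.5:             # WOA spiral exploitation
            l = random.uniform(-1, 1)
            x_new = abs(alpha - x) * math.exp(b * l) * math.cos(2 * math.pi * l) + alpha
        else:                                 # GWO leadership-guided move
            parts = []
            for leader in (alpha, beta, delta):
                A = 2 * a * random.random() - a
                C = 2 * random.random()
                parts.append(leader - A * abs(C * leader - x))
            x_new = sum(parts) / 3
        new.append(x_new)
    return new

# Minimise f(x) = x^2 starting from 20 randomly placed wolves.
random.seed(0)
wolves = [random.uniform(-10, 10) for _ in range(20)]
f = lambda x: x * x
for t in range(200):
    wolves = hybrid_step(wolves, f, t, 200)
best = min(wolves, key=f)
```

As `a` decays the GWO move shifts from exploration to exploitation, while the spiral term contracts around the best wolf, which is the intuition behind the improved exploration-exploitation tradeoff claimed for HGWWO.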

Author 1: Yameng BAI
Author 2: Junxia MENG
Author 3: Shuai ZHAO
Author 4: Ruoyu REN

Keywords: Cloud computing; load balancing; hybrid meta-heuristic; grey wolf optimizer; whale optimization algorithm

PDF

Paper 73: Automated Bubble Detection in Contact Lenses Using a Hybrid Deep Learning Framework

Abstract: This study presents a hybrid deep learning approach for automated detection of bubbles in contact lenses, aiming to enhance quality assurance in the manufacturing process. A hybrid AlexNet+SVM model was developed using transfer learning, where AlexNet’s convolutional features were leveraged for binary classification (bubble vs. normal) via a Support Vector Machine (SVM) classifier. The dataset consisted of 320 images (160 bubbles, 160 normal) pre-processed using median filtering, local histogram equalization, and circular masking to improve image clarity and consistency. Through systematic hyperparameter tuning, the model achieved 100% testing accuracy and 97.92% validation accuracy, with perfect precision (100%) and high recall (96%). Comparative evaluation against ResNet and VGGNet demonstrated that the AlexNet+SVM model offered superior generalization and robustness, particularly for small-scale datasets. While VGGNet also achieved 100% testing accuracy with 95.83% validation accuracy, ResNet underperformed in recall (89%), likely due to its deeper architecture and data limitations. The findings underscore the suitability of hybrid models for binary classification tasks in limited-data scenarios. Identified challenges, including dataset size and risk of overfitting, point to future research directions involving expanded datasets and more advanced pre-processing techniques. This research contributes to the advancement of automated defect detection systems for contact lens manufacturing, offering a reliable and efficient quality control solution.

Author 1: Chee Chin Lim
Author 2: Yen Fook Chong
Author 3: Vikneswaran Vijean
Author 4: Gei Ki Tang

Keywords: Bubble detection; contact lens quality assurance; deep learning; transfer learning; Support Vector Machine (SVM); AlexNet; image pre-processing; binary classification; defect detection

PDF

Paper 74: Integration of Grey Wolf Optimizer Algorithm with Combinatorial Testing for Test Suite Generation

Abstract: Combinatorial Testing (CT) is a software testing technique designed to detect defects in complex systems by efficiently covering diverse combinations of input parameters within given time and resource constraints. A common strategy in CT is t-way testing, which ensures that all possible interactions among any t parameters are tested at least once. The Grey Wolf Optimization Algorithm (GWOA) is a nature-inspired metaheuristic that has been successfully applied to various optimization problems. In this study, we introduce the Combinatorial Grey Wolf Optimization Algorithm (CGWOA), which integrates GWOA with CT to enhance test suite generation. The effectiveness of CGWOA is evaluated through experiments on a real-world software system, where the number of test cases is reduced by 98%, from 3000 to 40, while still ensuring complete 2-way interaction coverage. Experimental results demonstrate that CGWOA consistently produces smaller test suites than pure computation methods such as Jenny, IPOG, IPOG-D, and TConfig, outperforming them at both lower and higher interaction strengths. In scenarios with binary parameters, CGWOA delivered the smallest test suites, while in more complex configurations, even in MCA settings, it showed impressive scalability, outperforming the other algorithms. Statistical analysis using the Wilcoxon signed-rank test revealed that the proposed approach significantly outperforms existing methods, with all p-values less than 0.02 after applying the Holm correction. The experimental results demonstrate that the proposed CGWOA approach advances software testing by efficiently minimizing the number of test cases required to achieve complete test coverage.
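
The 2-way (pairwise) coverage guarantee discussed above can be made concrete with a small checker. The helper `pairwise_coverage` below is a hypothetical illustration, not code from the paper: it measures what fraction of all parameter-value pairs a candidate suite covers, which is the objective a CT optimizer must drive to 1.0 with as few tests as possible.

```python
from itertools import combinations, product

def pairwise_coverage(suite, domains):
    """Fraction of all 2-way parameter-value pairs covered by a test suite.
    `domains[i]` lists the possible values of parameter i; each test is a
    tuple giving one value per parameter."""
    params = range(len(domains))
    required = set()
    for i, j in combinations(params, 2):
        for vi, vj in product(domains[i], domains[j]):
            required.add((i, vi, j, vj))
    covered = set()
    for test in suite:
        for i, j in combinations(params, 2):
            covered.add((i, test[i], j, test[j]))
    return len(covered & required) / len(required)

# Three binary parameters: a 4-test suite achieves full 2-way coverage,
# versus 8 tests for exhaustive enumeration.
domains = [[0, 1], [0, 1], [0, 1]]
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
cov = pairwise_coverage(suite, domains)   # 1.0: every value pair appears
```

This is the same reduction principle behind the paper's 3000-to-40 result, scaled up to real parameter spaces and higher interaction strengths.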

Author 1: Muhamad Asyraf Anuar
Author 2: Rosziati Ibrahim
Author 3: Mazidah Mat Rejab
Author 4: Nurezayana Zainal

Keywords: Grey wolf optimizer algorithm; combinatorial testing; metaheuristics; t-way testing

PDF

Paper 75: A Multi-Level Stacking Ensemble Model Optimized by Soft Set Theory for Customer Churn Prediction

Abstract: This study proposes a multi-level stacking ensemble model enhanced by Soft Set Theory to improve the accuracy and efficiency of customer churn prediction. The proposed model leverages Soft Set Theory to eliminate redundant classifiers via the analysis of the indiscernibility matrix, increasing classifier diversity and ensemble generalization. Ten base classifiers are considered at Level-1, from which five are selected: Gradient Boosting, Logistic Regression, XGBoost, Support Vector Machine, and CatBoost. Logistic Regression serves as the Level-2 meta-classifier. Experiments using the UCI Telco Churn dataset achieve an accuracy of 94.87% and an F1-score of 95.14%, while reducing computational time by over 50%. Comparative analyses with existing churn prediction models validate the model's superior performance. This framework demonstrates strong potential for implementation in telecommunications, healthcare, and finance sectors where customer retention is critical.

Author 1: Nurul Nadzirah Adnan
Author 2: Mohd Khalid Awang

Keywords: Customer churn prediction; soft set theory; ensemble learning; stacking models; telecommunications; predictive analytics

PDF

Paper 76: A Dual-Path Gated Attention-Based Deep Learning Model for Automated Essay Scoring Using Linguistic Features

Abstract: Automated Essay Scoring (AES) has become a critical tool for scaling writing assessment in modern education. However, existing AES models often struggle to effectively evaluate both the syntactic structure and semantic meaning of essays while maintaining interpretability and fairness. This study presents a novel deep learning-based model that integrates syntactic and semantic analysis using an improved LSTM architecture. The model employs a dual-path structure: one path processes semantic representations using BERT-tokenized input, while the other captures syntactic patterns via part-of-speech sequences. These paths are fused using a gated mechanism and enhanced through multi-head attention to emphasize important linguistic cues. Additional student metadata, such as grade level and gender, is also incorporated to improve personalization and fairness. The model jointly predicts both holistic and grammar scores, trained and evaluated on the ASAP 2.0 dataset. Performance is measured using multiple statistical metrics, including MAE, MSE, RMSE, R², Pearson’s r, and Spearman’s ρ. The proposed model achieves a high prediction accuracy of 92%, significantly outperforming traditional and single-path models. These results demonstrate the model’s ability to capture both surface-level and deep linguistic features, offering a robust, interpretable, and scalable solution for automated writing evaluation.

Author 1: Qin Jie
Author 2: Congling Huang

Keywords: Attention mechanism; deep learning; essay scoring; gated fusion; linguistic features; semantic encoding; syntactic representation

PDF

Paper 77: Comparative Analysis of Cybersecurity Frameworks in Educational Institutions: Towards a Tailored Security Model

Abstract: Educational institutions face unique cybersecurity challenges due to their open culture, decentralised structures, and limited resources. While standard frameworks such as NIST, ISO/IEC 27001, and COBIT offer comprehensive guidance, their full implementation in academic settings is often impractical. This study addresses the gap by conducting a document-based comparative analysis of these frameworks, focusing on their applicability in educational institutions. A total of 42 documents—including case studies, cybersecurity guidelines, and academic articles—were analysed using thematic coding. The findings reveal significant misalignments between current frameworks and academic environments, particularly in terms of complexity, adaptability, and resource demand. Based on these insights, a tailored cybersecurity model is proposed. The model emphasises modularity, cultural integration, resource optimisation, and decentralised implementation to suit the educational context. A multi-step validation plan is also outlined to assess the model's practicality. This research offers both theoretical and practical contributions to cybersecurity governance in the education sector.

Author 1: Syarif Hidayatulloh
Author 2: Aedah Binti Abd. Rahman

Keywords: Cybersecurity; educational institutions; cybersecurity frameworks; tailored security model

PDF

Paper 78: The Representation Learning Ability of Self-Supervised Learning in Unlabeled Image Data

Abstract: Many existing systems struggle to strike a balance between global feature discrimination and local semantic understanding, despite the growing popularity of Self-Supervised Learning (SSL) for representation learning with unlabeled image data. This study introduces a novel SSL framework—Contrastive and Contextual Self-Supervised Representation Learning (C2SRL)—which integrates contrastive learning mechanisms with auxiliary context-based pretext tasks, specifically rotation prediction and jigsaw puzzle solving. The proposed C2SRL enhances two leading contrastive models, SimCLR and MoCo, by incorporating contextual modules and a unified multi-task loss function, thereby improving the robustness and generalizability of the learned representations. A lightweight ResNet backbone is employed for encoding, followed by a dual-view augmentation strategy and a projection head that maps features into a contrastive embedding space. The proposed C2SRL outperforms existing SSL approaches in terms of classification accuracy and clustering coherence on two benchmark datasets, STL-10 and CIFAR-10. It demonstrates strong scalability, as evidenced by its 89.6% mAP and 0.81 NMI, achieved using only 10% labeled data for fine-tuning. These results highlight the potential of combining contextual and contrastive learning objectives to generate rich, transferable visual representations for low-label or label-free applications.
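
The contrastive objective underlying SimCLR-style training can be illustrated with a minimal NT-Xent loss for a single anchor, written with pure-Python cosine similarities. This is a simplified sketch: C2SRL's unified multi-task loss adds rotation and jigsaw terms not shown here, and the temperature value is an assumption.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent(anchor, positive, negatives, tau=0.5):
    """NT-Xent loss for one anchor: pull the positive (another augmented
    view of the same image) close, push negatives away. Computed as the
    negative log-softmax of the positive's temperature-scaled similarity."""
    logits = [cosine(anchor, positive) / tau] + \
             [cosine(anchor, n) / tau for n in negatives]
    m = max(logits)                                   # log-sum-exp trick
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)

a  = [1.0, 0.0]
p  = [0.9, 0.1]            # a similar view of the anchor
n1 = [-1.0, 0.0]           # a dissimilar image
loss_good = nt_xent(a, p, [n1])     # well-matched pair: small loss
loss_bad  = nt_xent(a, n1, [p])     # mismatched pair: large loss
```

In a real pipeline the inputs would be projection-head outputs over a whole batch, but the loss shape is the same.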

Author 1: Jinzhu Lin
Author 2: Tianwei Ni

Keywords: Self-supervised learning (SSL); unlabeled image data; representation learning; contrastive learning; convolutional neural network (CNN); image classification; feature embedding; label-efficient learning

PDF

Paper 79: AI-Driven Firewall Log Analysis: Enhancing Threat Detection with Deep Learning Techniques

Abstract: As cyber-attacks grow increasingly sophisticated, cybersecurity threats have surged, with 430 million new malware instances identified in the United States in 2023, a 36% rise over 2020 figures. Traditional firewall defense mechanisms are increasingly limited. Even though firewalls are the frontline defense mechanism, their reliance on preconfigured rules and signature-based detection leaves them unable to identify carefully crafted, dynamic attacks. Furthermore, they generate enormous volumes of logs with high false positive rates, making manual threat analysis a tedious and time-consuming process. To counter these issues, we propose an AI-fortified SIEM system using deep learning algorithms for intelligent firewall log analysis. It reduces false positives through event pattern extraction and correlation, allowing for more efficient threat detection. By employing deep neural networks—fully connected, convolutional, and recurrent—our system enhances classification accuracy and optimizes threat detection. We use actual firewall logs and benchmark datasets (UNSW-NB15-training and UNSW-NB15-testing), one for training and the other for testing, to assess our system. Our primary objective is to differentiate between true positive and false positive alarms so that security analysts can respond to cyber threats more effectively. The experimental results demonstrate the effectiveness of our approach in improving threat monitoring and IT security, and confirm that our deep learning-based models outperform classical machine learning methods, making them a realistic and efficient solution for real-world firewall security.

Author 1: Yasmine ABOUDRAR
Author 2: Khalid BOURAGBA
Author 3: Mohamed OUZZIF

Keywords: AI-driven SIEM; deep learning; firewall log analysis; threat detection; false positives; cybersecurity

PDF

Paper 80: Automated Dried Fish Classification Using MobileNetV2 and Transfer Learning

Abstract: India, the second largest fish producer globally, contributes significantly to food security, nutrition, and economic development. Dried fish is a vital component of the fisheries value chain, especially in South Asia, yet current classification methods are manual, inconsistent, and labor-intensive. This study aims to automate dried fish classification using MobileNetV2 through transfer learning, enabling real-time, lightweight deployment on edge devices. We trained and evaluated the model across four diverse publicly available datasets using single, bulk, head, and tail image modalities. Our experiments demonstrated high accuracy (up to 100%) and strong generalization across datasets. The proposed model offers a practical, scalable, and efficient solution to modernize dried fish processing and enhance productivity and traceability in fisheries.

Author 1: Rajmohan Pardeshi
Author 2: Rajermani Thinakaran
Author 3: Sanjay Kharat

Keywords: Dried fish classification; MobileNetV2; transfer learning; edge deployment; fisheries automation

PDF

Paper 81: Enhancing Portfolio Optimization with Weighted Scoring for Return Prediction Through Machine Learning and Neural Networks

Abstract: Accurately predicting stock returns can enhance the effectiveness of portfolio optimization models. Many previous studies typically divide machine learning algorithms and portfolio optimization into two separate stages: the first step leverages the powerful modeling capabilities of machine learning algorithms to select stocks, and the second step optimizes weights using traditional portfolio models. This separation means that the modeling strengths of machine learning are only utilized in the stock selection phase and not fully exploited during weight optimization. Therefore, this study proposes a portfolio construction method based on Return Prediction Weighted Scoring (RPWS). RPWS generates a stock ranking by assigning weighted scores to each stock, maps this ranking to weight biases, and then optimizes actual weights using a traditional covariance matrix. This process integrates the modeling capabilities of machine learning into the weight optimization phase, ensuring their full utilization throughout the portfolio construction process. Backtesting experiments are conducted using the U.S. stock market, A-share market, and major cryptocurrencies as datasets, with Support Vector Regression (SVR), Transformer, and other machine learning algorithms as prediction models. Empirical results from these three markets show that the SVR-RPWS and Transformer-RPWS models significantly outperform mainstream funds and traditional portfolio models in terms of annualized returns, Sharpe ratio, and drawdown control.
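
The abstract does not specify the exact RPWS mapping from ranking to weight biases, so the sketch below is a hypothetical linear rank-tilt of equal weights, purely to illustrate the idea of biasing portfolio weights by predicted-return rank before a covariance-based optimizer refines them. The `tilt` parameter and the linear form are assumptions.

```python
def rank_tilt_weights(scores, tilt=0.5):
    """Illustrative rank-to-weight bias (not the paper's exact RPWS rule):
    start from equal weights 1/n, then tilt capital linearly toward
    stocks with higher predicted-return ranks. `tilt` in [0, 1) controls
    how aggressively the ranking bends the weights. Assumes n >= 2."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    rank = {stock: r for r, stock in enumerate(order)}   # 0 = best score
    base = 1.0 / n
    raw = [base * (1 + tilt * (n - 1 - 2 * rank[i]) / (n - 1))
           for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]                      # renormalize

scores = [0.08, 0.02, 0.05, -0.01]     # predicted returns per stock
w = rank_tilt_weights(scores)
# Highest predicted return gets the largest weight bias; in the paper's
# pipeline these biases would then feed a covariance-matrix optimizer.
```

The key point the paper makes is that such biases let the predictor's ordering influence the weight optimization stage rather than only the stock selection stage.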

Author 1: Ruili Sun
Author 2: Qiongchao Xia
Author 3: Shiguo Huang

Keywords: Machine learning; stock return prediction; portfolio optimization; support vector regression; transformer; NASDAQ stock market; a-share stock market; cryptocurrency market

PDF

Paper 82: CeC-SMOTE: A Clustering and Centroid-Based Adaptive Oversampling Method for Imbalanced Data

Abstract: Class imbalance is a common challenge in real-world datasets, leading standard classifiers to perform poorly on underrepresented classes. Traditional oversampling techniques, such as SMOTE and its variants, often generate synthetic samples without fully considering the local data structure, resulting in increased noise and class overlap. This study introduces CeC-SMOTE, an adaptive oversampling method that integrates clustering and centroid-based strategies to enhance the quality of synthetic minority samples. By first partitioning minority instances using K-means clustering, CeC-SMOTE identifies safe and boundary regions, selectively generating new samples where they are most needed while filtering out noise. This targeted approach preserves the underlying distribution of the minority class and minimizes the risk of overfitting. Extensive experiments on artificial and benchmark UCI datasets demonstrate that CeC-SMOTE consistently delivers competitive or superior results compared to established oversampling techniques, particularly in cases with complex or ambiguous class boundaries. Sensitivity analysis confirms that the method is robust to parameter settings, enabling strong performance with minimal tuning.
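
CeC-SMOTE builds on the classic SMOTE interpolation mechanism. A minimal sketch of that baseline (brute-force nearest-neighbour search; the paper's clustering and centroid-based extensions are not shown) illustrates the core idea: each synthetic point lies on the line segment between a minority sample and one of its k nearest minority neighbours.

```python
import random

def smote_samples(minority, k=2, n_new=4, seed=0):
    """Classic SMOTE-style interpolation: pick a minority sample, pick one
    of its k nearest minority neighbours, and place a synthetic point at
    a random position on the segment between them."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()                       # position along segment
        synthetic.append(tuple(b + gap * (n - b)
                               for b, n in zip(base, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3), (5.0, 5.0)]
new_pts = smote_samples(minority)
```

CeC-SMOTE's contribution is precisely in *where* this interpolation is allowed to happen: K-means first separates safe, boundary, and noisy regions, so the segments above are drawn only inside regions where synthetic points will not blur the class boundary.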

Author 1: Xiaoling Gao
Author 2: Marshima Mohd Rosli
Author 3: Muhammad Izzad Ramli
Author 4: Nursuriati Jamil

Keywords: Imbalanced data classification; synthetic oversampling; k-means clustering; centroid-based neighbor

PDF

Paper 83: Towards Robust IoT Security: The Impact of Data Quality and Imbalanced Data on AI-Based IDS

Abstract: The increased number of connected devices and the rise of Big Data have revolutionized industries and triggered a surge in cyberattacks, making security a top priority. Machine learning and Deep Learning algorithms are crucial in intrusion detection and classification, enabling systems to identify and respond to threats with precision. However, the success of these algorithms is directly related to the quality of the data they process, underscoring the critical importance of robust and well-prepared datasets. Furthermore, despite their potential in detecting and classifying attacks, some algorithms are susceptible to imbalanced datasets, struggling to accurately classify minority classes, while others demonstrate resilience to such challenges. Hence, this study presents a comprehensive analysis of the impact of data quality and imbalanced data on different classification problems, particularly binary, 8-class, and 34-class classification in an intrusion detection context. Our work extensively evaluates six ML and DL algorithms using a novel IoT dataset. Unlike existing research, we use a diverse set of metrics, including accuracy, precision, recall, F1-score, AUC-ROC, and other visual tools, to provide a robust and reliable algorithm performance assessment. This analysis underscores the critical importance of addressing data quality and shows how the effect of different balancing techniques depends on the algorithm type and the classification task.

Author 1: Hiba El Balbali
Author 2: Anas Abou El Kalam

Keywords: Machine learning; intrusion detection; internet of things; data quality; big data

PDF

Paper 84: Automated Anatomical Analysis of Wood Cross Sections Using Macroscopic Images

Abstract: Wood anatomical features are crucial in forestry science, traditionally relying on manual inspection of wood cross-sections. This conventional method is time-consuming, subjective, and dependent on expert experience. Recent advancements in deep learning offer high accuracy but often operate as black-box models, lacking interpretability and struggling with out-of-distribution challenges under real-world variations. To address these limitations, we propose a two-stage framework combining deep-learning-based image classification and explicit anatomical feature analysis, directly extracting expert-recognized morphological attributes such as pore size, frequency, and spatial arrangement from macroscopic images. By quantifying these anatomical descriptors, our framework yields transparent, OOD-robust features that can be directly fed into downstream species-identification models, thereby enhancing future classification accuracy while preserving interpretability. An end-to-end implementation integrates data acquisition, automated feature extraction, and interactive visualization, making the methodology practically applicable in both laboratory and field settings.

Author 1: Khanh Nguyen-Trong
Author 2: Thanh Nhan Nguyen-Thi

Keywords: Wood species identification; wood anatomical analysis; segmentation; Mask R-CNN; DenseNet

PDF

Paper 85: Air Quality Prediction Based on VMD-CNN-BiLSTM-Attention

Abstract: With the advancement of industrialization, air pollution has emerged as a critical global health and environmental concern. This study presents an air quality prediction model based on variational mode decomposition, a convolutional neural network, bidirectional long short-term memory, and an attention mechanism. The variational mode decomposition method is employed to decompose the Air Quality Index sequence, capturing different local characteristics of the original data. A hybrid model is constructed by integrating the convolutional neural network for feature extraction, the bidirectional long short-term memory for temporal pattern recognition, and the attention mechanism for focusing on significant data features. The model is optimized using the Grey Wolf Optimizer for hyperparameter tuning, thereby enhancing prediction accuracy. The proposed model is evaluated using air quality data from Changsha, China, covering the years 2015 to 2023. The results demonstrate that our model outperforms several other models in terms of mean absolute error, mean squared error, root mean squared error, and R-squared. This study provides a robust approach to air quality prediction, offering valuable insights for residents and policymakers.

Author 1: Huang Xinxin
Author 2: Mohd Suffian Sulaiman
Author 3: Marshima Mohd Rosli

Keywords: Air quality prediction; variational mode decomposition; convolutional neural network; bidirectional long short-term memory; hyperparameter optimization; air quality index

PDF

Paper 86: Cardio-Edge: Hardware-Software Co-design Implementation of LSTM Based ECG Classification for Continuous Cardiac Monitoring on Wearable Devices

Abstract: Cardiac arrhythmias should be detected at an early stage so that clinical intervention can take place and continuous patient monitoring can be established in a timely manner. In this study, we present Cardio-Edge, a hardware-software co-design implementation of an LSTM-based ECG classification system optimized for real-time use on wearable devices. The proposed architecture comprises discrete wavelet transform (DWT) and principal component analysis (PCA) for efficient feature extraction, followed by multiple parallel LSTM networks and a multi-layer perceptron (MLP) for classification. Implemented on a Xilinx ZYNQ-7000 SoC, our system leverages FPGA-based hardware acceleration alongside an ARM Cortex-A9 for preprocessing tasks. Compared to a software-only implementation on the same ARM processor, our co-design achieves a 10× improvement in execution speed with 99% classification accuracy, trained and verified on the MIT-BIH arrhythmia dataset. The hardware-efficient implementation employs resource-optimized architectures for the LSTM, activation functions, and fully connected layers, making it appropriate for low-power, patient-specific wearable healthcare devices. This real-time, on-chip solution eliminates dependence on cloud connectivity and ensures data privacy, making it suitable for continuous cardiac monitoring applications.

Author 1: Nousheen Akhtar
Author 2: Abdul Rehman Buzdar
Author 3: Jiancun Fan
Author 4: Muhammad Umair Khan

Keywords: ECG classification; wearable devices; discrete wavelet transform (DWT); long short-term memory (LSTM); field-programmable gate array (FPGA)

PDF

Paper 87: Robust Particle Filter for Accurate WiFi-Based Indoor Positioning in the Presence of Outlier-Corrupted Sensor Data

Abstract: This study presents a comprehensive evaluation of an outlier-robust particle filter (RPF) designed to improve indoor positioning accuracy in complex environments with substantial measurement noise and outliers. The RPF’s performance is benchmarked against a standard Particle Filter (PF) using both simulated and real-world datasets. Simulation results indicate that the RPF consistently outperforms the PF in indoor positioning, particularly when sensor measurements contain outliers, achieving significant reductions in root mean square error (RMSE) for position, velocity, and acceleration estimation, with improvements of approximately 40.02%, 38.48%, and 65.80%, respectively. Real-world experiments, applying a calibrated log-normal path loss model to Wi-Fi received signal strength (RSS) data, further corroborate the RPF’s effectiveness, demonstrating a 93.61% improvement in positioning accuracy compared to the PF. These findings highlight the RPF’s robustness in delivering high accuracy, especially in environments with measurement outliers, establishing it as a reliable solution for indoor tracking in noisy sensor environments.
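
Why a robust likelihood helps can be shown with a minimal 1-D bootstrap particle filter. The sketch below is an illustrative stand-in, not the paper's RPF: it assumes a constant true state, and swaps the Gaussian measurement likelihood for a heavy-tailed Student-t one, which down-weights outlier measurements instead of letting them dominate the particle weights. All noise parameters are assumptions.

```python
import math
import random

def particle_filter(measurements, n=500, q=0.1, r=0.5, robust=True, seed=1):
    """Minimal 1-D bootstrap particle filter tracking a roughly constant
    state. robust=True uses a Student-t likelihood (heavy tails, so a
    single outlier cannot crush the weights); robust=False uses the
    standard Gaussian likelihood."""
    rng = random.Random(seed)
    parts = [rng.gauss(0, 1) for _ in range(n)]
    nu = 3.0                                    # t-distribution dof
    estimates = []
    for z in measurements:
        parts = [p + rng.gauss(0, q) for p in parts]        # predict
        if robust:
            w = [(1 + ((z - p) / r) ** 2 / nu) ** (-(nu + 1) / 2)
                 for p in parts]
        else:
            w = [math.exp(-0.5 * ((z - p) / r) ** 2) for p in parts]
        s = sum(w)
        w = [x / s for x in w]
        estimates.append(sum(p * wi for p, wi in zip(parts, w)))
        parts = rng.choices(parts, weights=w, k=n)          # resample
    return estimates

# True state is 0; one gross outlier arrives at step 5.
zs = [0.1, -0.2, 0.0, 0.15, -0.1, 4.0, 0.05, -0.05, 0.1, 0.0]
est_robust = particle_filter(zs, robust=True)
est_plain  = particle_filter(zs, robust=False)
```

At the outlier step the Gaussian-weighted estimate is dragged toward the spurious measurement, while the heavy-tailed weighting keeps the estimate near the true state, which is the mechanism behind the RMSE reductions reported above.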

Author 1: Mohamed Aizad Bin Mohamed Ghazali
Author 2: Aroland Kiring
Author 3: Lyudmila Mihaylova
Author 4: Hoe Tung Yew
Author 5: Seng Kheau Chung
Author 6: Farrah Wong

Keywords: Complex environments; indoor positioning; measurement noise and outliers; RMSE reduction; robust particle filter

PDF

Paper 88: Design and Analysis of Smart Lighting System for Room Environments Using Simulation Supporting Diverse Light Bulbs

Abstract: Recently, the demand for intelligent, energy-efficient lighting systems has increased due to rising environmental concerns and increasing electricity consumption in smart buildings. Conventional lighting systems often operate inefficiently, using outdated bulb technologies and lacking automation, which results in substantial energy waste, especially in rooms with variable occupancy. Lighting significantly contributes to energy consumption in indoor spaces, which presents vast opportunities for smart lighting model development through automation and adaptive control. This study proposes a smart lighting system model for room environments that dynamically adapts to user presence and supports diverse light bulb types. The study analyzes energy usage while maintaining automatic light control and operational effectiveness through simulation. The system is developed using AnyLogic by integrating agent-based and discrete event simulation to model occupant behavior and manage event-driven lighting logic. It incorporates sensors, smart door mechanisms, and energy-measuring processes, all powered by solar energy and managed through battery storage. The system dynamically adjusts lighting based on occupancy, minimizing idle energy usage for the room. LED bulbs offer the most promising energy efficiency, while incandescent bulbs show the highest consumption. The outcome provides a visualized simulation model for designing adaptive lighting systems and reinforces the potential to enhance energy efficiency to support sustainability in smart room applications.

Author 1: Husnul Ajra
Author 2: Mazlina Abdul Majid
Author 3: Md. Shohidul Islam

Keywords: Bulb; energy; light; model; simulation

PDF

Paper 89: Deep Learning-Driven DNA Image Encryption with Optimal Chaotic Map Selection

Abstract: This research introduces an advanced image encryption framework addressing critical security limitations in existing approaches. The study focuses on developing a robust encryption methodology that overcomes arbitrary chaotic map selection and static key generation vulnerabilities. Our approach integrates three synergistic components: a systematic chaotic map evaluation protocol identifying optimal dynamic systems, a deep learning-based key generation mechanism employing fine-tuned convolutional neural networks for image-sensitive cryptographic keys, and a hybrid encryption pipeline combining DNA encoding with chaotic diffusion. Experimental validation demonstrates that the proposed scheme achieves near-ideal entropy values (cipher images with an average entropy of 7.90 and above), and ensures extremely low correlation coefficients between adjacent pixels (close to zero in horizontal, vertical, and diagonal directions). Differential analysis confirms strong robustness, with NPCR values exceeding 99.6% and UACI about 33.5% across multiple color images. Visual results show that encrypted images display no perceivable patterns or similarities with the original images. Comparative performance assessment also highlights the method’s efficiency, with encryption execution times competitive with or better than recent state-of-the-art methods. Brute-force resistance is guaranteed by an extensive key space determined by the combination of deep learning-generated keys, Lorenz chaotic parameters, and DNA encoding rule permutations. The comprehensive multi-layered security strategy further ensures resilience against brute-force, statistical, differential, and chosen-plaintext attacks, as well as against modern deep learning-based cryptanalysis.
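
The DNA-encoding step mentioned above conventionally maps each 2-bit pair of a pixel value to one of the four bases. The sketch below shows one of the eight standard complementary encoding rules; the paper's chaotic rule selection, key generation, and diffusion stages are omitted.

```python
# DNA rule: 00 -> A, 01 -> C, 10 -> G, 11 -> T
# (one of the 8 rules satisfying Watson-Crick complementarity).
RULE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DECODE = {base: bits for bits, base in RULE.items()}

def dna_encode(pixel):
    """Encode one 8-bit pixel value as 4 DNA bases."""
    bits = format(pixel, "08b")
    return "".join(RULE[bits[i:i + 2]] for i in range(0, 8, 2))

def dna_decode(seq):
    """Invert dna_encode: 4 bases back to one 8-bit pixel value."""
    return int("".join(DECODE[b] for b in seq), 2)

encoded = dna_encode(0b11000110)   # 11 00 01 10 -> "TACG"
```

In a full scheme like the one described, the rule index itself is driven by the chaotic system, and DNA-domain operations (addition, XOR) perform the diffusion before decoding back to pixel values.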

Author 1: Sara Bentouila
Author 2: Kamel Mohamed Faraoun

Keywords: Image encryption; DNA encoding; chaotic map selection; Lorenz system; deep learning; convolutional neural network (CNN); security analysis; VGG16; cryptographic robustness

PDF

Paper 90: Older Adults and Technology Design from the HCI Perspective

Abstract: Older adults are an important segment of all societies worldwide, and this category of users cannot be ignored in the face of technological progress, especially the proliferation of smartphone applications. The expected growth of this age group in the coming years, particularly in some developing countries, will present interaction challenges and opportunities in several areas for both older adult users and smartphone application designers. The main purpose of this review is to build a better understanding of this user group from different angles and from the perspective of the Human-Computer Interaction (HCI) field, and to explore current and future challenges, drawing on findings from HCI and the human sciences to build a solid literature review. The review concludes with current and future trends to help align technology design with older adults’ characteristics and needs.

Author 1: Hasan Ali Sagga
Author 2: Richard Stone

Keywords: Older adults; HCI; smartphone applications; human science; technology design; older adults challenges

PDF

Paper 91: Forecast COVID-19 Epidemics by Strengthening Deep Learning Models with Time Series Analysis

Abstract: The COVID-19 pandemic has profoundly impacted economic and social structures, directly affecting individuals’ lives. Deep learning models offer the potential to forecast long-term trends and capture the temporal dependencies present in time series data. In this study, we propose leveraging the autocorrelation function (ACF) and the partial autocorrelation function (PACF) series as additional components to enhance the forecasting accuracy of our models. Our proposed method is applied to forecast COVID-19 time series data in twelve countries using the deep learning techniques of Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). When comparing the average rankings of mean absolute error and R-squared, the proposed models demonstrated superior performance in time series forecasting compared to the standard LSTM and GRU models. Specifically, the ACF-PACF-GRU model achieved the best median values for mean absolute percentage error (1.67% for confirmed cases and 2.17% for death cases) and root mean square error (1.92 for confirmed cases and 2.17 for death cases). Therefore, the proposed ACF-PACF-GRU model showed the highest performance in forecasting both confirmed and death cases. This research introduces a novel method for constructing effective time series models aimed at forecasting disease burdens, thereby aiding in epidemic control and the implementation of preventive measures.
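The feature-augmentation idea can be illustrated with a short sketch: each input window is extended with its sample ACF values before being fed to an LSTM/GRU. The ACF below is a standard estimator; the window size and lag count are illustrative, and the PACF series would be appended analogously (e.g. via a Durbin-Levinson recursion or a statistics library).

```python
import numpy as np

def acf(series, nlags):
    """Sample autocorrelation for lags 1..nlags."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    denom = float(np.dot(x, x))
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

def windows_with_acf(series, window, nlags):
    """Sliding windows, each augmented with its own ACF values."""
    series = np.asarray(series, dtype=float)
    feats = []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        feats.append(np.concatenate([w, acf(w, nlags)]))
    return np.stack(feats)   # shape: (n_windows, window + nlags)
```

The augmented matrix would then be reshaped into the (samples, timesteps, features) layout expected by recurrent layers.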

Author 1: Warapree Tangseefa
Author 2: Tepanata Pumpaibool
Author 3: Paisit Khanarsa
Author 4: Krung Sinapiromsaran

Keywords: Forecasting models; COVID-19; long short-term memory; gated recurrent unit; autocorrelation function; partial autocorrelation function

PDF

Paper 92: Cyber Deception Across Domains: A Comprehensive Survey of Techniques, Challenges, and Perspectives

Abstract: Cloud environments (CE), wireless networks (WN), cyber-physical systems (CPS), industrial control systems (ICS), smart grids (SG), internet of things (IoT), internet of vehicles (IOV), and unmanned aerial vehicles (UAV) are currently popular targets for cyberattacks due to their inherent limitations and vulnerabilities. Each domain has its own attack surfaces, weaknesses, and areas for implementing defense strategies appropriate to its specific conditions. Among the various defense mechanisms discussed in recent years, cyber deception has emerged as a promising approach. It allows defenders to misdirect attackers, gather threat intelligence, and at the same time increase security by proactively engaging adversaries in deception environments. Cyber deception has been a topic of investigation in several studies, where specific frameworks and techniques were proposed to identify, delay, or disrupt adversarial behavior. Nevertheless, the contributions of earlier works are frequently limited in scope or lack a unified framework, making a thorough comparative study necessary. This survey investigates the cyber deception techniques used in various domains. The first part covers the fundamentals and background of deception. Next, it presents a summary of the available deception techniques, their modeling by frameworks such as MITRE ATT&CK, D3FEND, and Engage, and intelligent orchestration using reinforcement learning (RL) and game theory (GT). Then, it provides a thorough systematic review of each selected paper, covering the system design, deception techniques used, evaluation metrics, and limitations of each scheme. The results are compiled into a unified summary table to enable a quick and effective comparison across the domains. It concludes by discussing the main challenges, open issues, and unexplored areas of research, making it a valuable source for future research on cyber deception.

Author 1: Amal Sayari
Author 2: Slim Rekhis

Keywords: Cyber defense; cyber deception; cloud environments; wireless networks; cyber-physical systems; industrial control systems; smart grids; internet of things; internet of vehicles; unmanned aerial vehicles

PDF

Paper 93: Gender and Age Estimation from Facial Images Based on Multi-Task and Curriculum Learning

Abstract: This study presents a multi-task deep learning approach for predicting age and gender attributes from facial images, with the aim of obtaining a robust dual classifier. The proposed system uses the pre-trained EfficientNet-B4 model as the feature extractor of the main model and incorporates a two-branch architecture, where the output of the gender classification branch informs the age prediction branch. This constitutes conditional feature learning with an explicit injection mechanism, in which gender information is injected into the age branch of the dual-task model; this is one of the novelties of our proposal. A curriculum learning strategy is applied during training to progressively improve the model’s performance using various datasets, such as UTKFace, MORPH-II, and Adience. The proposed multi-phase curriculum learning strategy, which combines multi-task learning and multi-dataset training, is another novelty of our proposal. Experimental results show that the model achieves high accuracy in both age and gender classification tasks while maintaining low inference latency. Furthermore, the experiments highlight that the classification accuracy of the proposed method, for both gender and age and across all datasets used, is close to the best state-of-the-art results, which validates the robustness of the proposed classifier.
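The gender-to-age injection can be illustrated with a minimal NumPy forward pass; the embedding size, number of age bins, and random weights below are placeholders standing in for the EfficientNet-B4 backbone and trained heads, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared backbone features (stand-in for EfficientNet-B4 embeddings).
features = rng.normal(size=(4, 128))            # batch of 4 face embeddings

# Gender branch: a linear head over the shared features.
W_g = rng.normal(size=(128, 2)) * 0.01
gender_probs = softmax(features @ W_g)          # (4, 2)

# Age branch: shared features concatenated with the gender output --
# this concatenation is the explicit injection mechanism.
age_input = np.concatenate([features, gender_probs], axis=1)   # (4, 130)
W_a = rng.normal(size=(130, 8)) * 0.01          # e.g. 8 age bins
age_probs = softmax(age_input @ W_a)            # (4, 8)
```

In training, both heads would be optimized jointly with a weighted sum of the two losses, so the age branch learns to condition on the gender signal.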

Author 1: Toma Brezovan
Author 2: Claudiu Ionuț Popîrlan

Keywords: Age estimation; gender classification; multi-task learning; curriculum learning

PDF

Paper 94: Improving Cross-Patient Epilepsy Detection via EEG Decomposition into Canonical Brain Rhythms with Deep Learning

Abstract: Epilepsy affects more than 50 million people worldwide, and almost 80% of them live in low-income countries with limited access to medical and public services. Beyond these challenges, epileptic patients also face other problems, such as stigma and social exclusion due to the misunderstanding of epilepsy. Thus, epilepsy has become a major public health problem with a high social impact. Electroencephalography (EEG) remains the primary tool for diagnosing epilepsy; however, the traditional procedure of reviewing long EEG recordings is time-consuming, error-prone, and highly dependent on the neurologist’s experience. Recent advances in deep learning (DL) have driven the development of new methods for automatic epilepsy detection. Despite these advances, most methods are not generalizable to all patients, limiting their clinical applicability in real-life cases. In this work, we present a cross-patient method capable of improving epilepsy detection by spectral decomposition of EEG signals into canonical brain rhythms. These spectral bands improve the signal significance and the model performance. The proposal was evaluated in a cross-patient validation scheme on the CHB-MIT dataset and demonstrated superior performance using EEG signals from the interictal and ictal epilepsy stages. The model achieved 100% sensitivity and specificity using the theta band, outperforming the state-of-the-art methods and offering a promising step towards real-world clinical implementation.
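Decomposition into canonical rhythms can be sketched with FFT band masking; the band edges below are the conventional definitions, the toy signal is illustrative, and a real pipeline might use FIR/IIR band-pass filters instead.

```python
import numpy as np

# Canonical EEG bands (Hz) -- conventional ranges, not taken from the paper.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 70)}

def band_decompose(signal, fs):
    """Split a 1-D EEG signal into canonical rhythms via FFT masking."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = np.fft.irfft(spectrum * mask, n=len(signal))
    return out

fs = 256                                   # a common EEG sampling rate
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 20 * t)  # theta + beta mix
bands = band_decompose(sig, fs)
# The 6 Hz component lands in the theta band, the 20 Hz component in beta.
```

Each band signal (or its spectrogram) can then be fed to the downstream CNN/transformer as a separate channel.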

Author 1: Jose Yauri
Author 2: Elinar Carrillo-Riveros
Author 3: Edith Guevara-Morote
Author 4: Juan Carlos Carreño-Gamarra
Author 5: Karel Peralta-Sotomayor
Author 6: Pelayo Quispe-Bautista

Keywords: EEG signals; EEG signal decomposition; canonical brain rhythms; deep learning; convolutional neural network; transformer neural network

PDF

Paper 95: Evaluating and Interpreting Pooling Techniques in Spectrogram-Based Audio Analysis Using Diverse Metrics

Abstract: Audio analysis is a rapidly advancing field that spans various domains, including speech, music, and environmental sound data. Using spectrograms with Convolutional Neural Networks (CNNs) enables the visualization and extraction of critical audio features by combining time-frequency representations with deep learning. Pooling plays a crucial role in this process, as it reduces dimensionality while retaining essential information. However, existing evaluations of pooling methods primarily emphasize downstream task performance, such as classification accuracy, often overlooking their effectiveness in preserving critical signal features. To address this gap, we use 17 distinct metrics, categorized into four domains, to comprehensively assess various pooling operations. Furthermore, we explore the underexamined relationship between specific pooling techniques and their impact on feature retention across diverse audio applications. Our analysis encompasses spectrograms from three audio domains (speech, music, and environmental sound), identifying their key characteristics and grouping them accordingly. Using this setup, we evaluate the performance of 12 pooling methods across these applications. By investigating the features critical to each task and evaluating how well different pooling techniques preserve them, we give insights into their suitability for specific applications. This work aims to guide researchers in selecting the most appropriate pooling strategies for their applications, enabling more granular evaluations, improving explainability, and thereby advancing the precision and efficiency of audio analysis pipelines.
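The basic trade-off between pooling operators can be seen in a small NumPy example: max pooling preserves spectral peaks while average pooling smooths energy. The function below is a generic non-overlapping 2-D pool, not the paper's evaluation code.

```python
import numpy as np

def pool2d(spec, size, mode="max"):
    """Non-overlapping 2-D pooling over a (freq, time) spectrogram."""
    f, t = spec.shape
    spec = spec[: f - f % size, : t - t % size]   # trim ragged edges
    blocks = spec.reshape(f // size, size, t // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))            # keeps spectral peaks
    if mode == "avg":
        return blocks.mean(axis=(1, 3))           # smooths local energy
    raise ValueError(f"unknown mode: {mode}")

spec = np.arange(16, dtype=float).reshape(4, 4)   # toy spectrogram
peaks = pool2d(spec, 2, "max")
energy = pool2d(spec, 2, "avg")
```

Which behavior is preferable depends on the task: transient-heavy environmental sounds tend to favor peak preservation, while tonal content tolerates averaging.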

Author 1: Supun Bandara
Author 2: Uthayasanker Thayasivam

Keywords: Audio data analysis; pooling; deep learning; dimensionality reduction; spectrograms

PDF

Paper 96: Recommendation Engine for Amazon Magazine Subscriptions

Abstract: Recommender systems play a crucial role in enhancing user experience and engagement on e-commerce platforms by suggesting relevant products based on user behavior. In the context of Amazon’s extensive catalog of over 8,000 magazines spanning more than twenty-five categories, providing personalized magazine subscription recommendations poses a significant challenge. This study addresses the problem of identifying potential future associations between magazine reviewers and products using a graph-based approach. Specifically, we aim to predict unseen but likely links between users and magazines to improve recommendation quality. To achieve this, we construct an undirected bipartite network connecting reviewers and magazine products based on review data. We perform network analysis using measures such as centrality, modularity, and clustering, and apply sentiment analysis and topic modeling to extract behavioral and thematic insights from user reviews. These insights inform a series of link prediction techniques, including Common Neighbors, Adamic-Adar, Jaccard Coefficient, and Preferential Attachment, evaluated using cross-validation and ROC curves. Our results show that the Preferential Attachment model outperforms other approaches, attributed to the skewed degree distribution inherent in the dataset’s structure.
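The four link-prediction scores named above can be written directly over an adjacency-set representation; the toy user-magazine graph below is an illustrative stand-in for the review network, not the paper's data.

```python
import math

# Adjacency dict: node -> set of neighbors (undirected bipartite graph).
def common_neighbors(g, u, v):
    return len(g[u] & g[v])

def jaccard(g, u, v):
    union = g[u] | g[v]
    return len(g[u] & g[v]) / len(union) if union else 0.0

def adamic_adar(g, u, v):
    # Weight rare common neighbors more; skip degree-1 nodes (log 1 = 0).
    return sum(1 / math.log(len(g[w])) for w in g[u] & g[v] if len(g[w]) > 1)

def preferential_attachment(g, u, v):
    return len(g[u]) * len(g[v])

# Toy bipartite graph: users A, B and magazines m1..m3.
g = {"A": {"m1", "m2"}, "B": {"m2", "m3"},
     "m1": {"A"}, "m2": {"A", "B"}, "m3": {"B"}}
print(preferential_attachment(g, "A", "m3"))  # 2 * 1 = 2
```

Preferential Attachment depends only on node degrees, which is why it benefits from the skewed degree distribution the abstract mentions.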

Author 1: Sushil Khairnar
Author 2: Deep Bodra

Keywords: Sentiment analysis; topic modeling; recommender system; link prediction

PDF

Paper 97: Empowering Accessibility: IoT-Driven Smart Buildings for Elderly and Disabled Individuals

Abstract: This study aims to examine the attitudes of elderly and disabled individuals in Saudi Arabia toward Internet of Things (IoT)-enabled smart home technologies, with specific attention to the influence of demographic variables. The research employed a descriptive survey design, utilizing an online questionnaire distributed to a stratified random sample of 249 participants. Stratification ensured balanced representation across gender, age, educational attainment, employment status, and economic background. Statistical analyses, including Scheffé’s post hoc test, revealed generally positive attitudes toward IoT adoption, primarily driven by perceived benefits related to enhanced quality of life, personal safety, and autonomy. Significant differences were identified across several demographic variables: married individuals, employed participants, those with higher education, higher-income groups, and individuals aged 30 to 45 all reported more favorable attitudes. Similarly, individuals with disabilities expressed stronger acceptance compared to their elderly counterparts. In contrast, gender differences were not statistically significant. These findings highlight the need for targeted, inclusive strategies that promote the adoption of IoT technologies across diverse social groups. The study contributes to a deeper understanding of how demographic characteristics shape technology acceptance and underscores the urgency of designing accessible, user-centered smart home systems. Recommendations emphasize public awareness initiatives, affordability measures, and inclusive design practices contributing to digital equity and aligning with the broader objectives of Saudi Vision 2030 for sustainable urban development.

Author 1: Zeyad Alshboul
Author 2: Burhan Mahmoud Hamadneh
Author 3: Turki Mahdi Alqarni
Author 4: Bajes Zeyad Aljunaeidia
Author 5: Methaq Khadum

Keywords: IoT; smart home; sustainability; urban development; quality of life; assistive technologies

PDF

Paper 98: Enhancing Dendritic Cell Algorithm by Integration with Multi-Layer Perceptron for Anomaly Detection

Abstract: Anomaly detection is crucial in a variety of areas, and the Dendritic Cell Algorithm (DCA) is one of the most widely used artificial immune system (AIS) algorithms, introduced for binary classification of data. Both traditional and current approaches to classification in DCA have primarily been threshold-based. Such approaches are limited in important ways, including inflexibility, the need for manual tuning, and a lack of context awareness. Recent improvements in the literature have provided adaptive dynamic threshold mechanisms that allow the system to adjust the threshold’s sensitivity using statistics of real-time observations. Although this is progress, the proposed systems are still rule-based, and rules traditionally struggle with the complex, high-dimensional, and nonlinear data common in anomaly detection tasks today. In this study, we propose an improved DCA-MLP framework that uses a Multi-layer Perceptron (MLP) classifier in place of the thresholding phase. The MLP allows the DCA to learn from the data context adaptively, through a context-sensitive learning mechanism that can track the data distribution as it evolves, eliminating the need for manual calibration of static or heuristic thresholds. The framework was tested thoroughly on fourteen benchmark datasets, and performance was evaluated against standard DCA in terms of accuracy, sensitivity, and specificity. The results revealed considerable enhancements in DCA-MLP’s performance: 12%–50% improvements in accuracy (raising accuracy to 93%–99%), 46% improvements in sensitivity (reaching 98% sensitivity), and 39% improvements in specificity. This shows that DCA-MLP offers better adaptability, learning capacity, and robustness, marking a paradigm shift away from threshold-based systems toward an intelligent, self-adjusting anomaly detection classification scheme.
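The shift from a fixed threshold to a learned classifier can be sketched as follows. The DCA signal weighting and the hand-set MLP weights are illustrative assumptions: they merely demonstrate that a one-hidden-layer MLP can reproduce the threshold rule, while trained weights could go beyond it to nonlinear decision boundaries.

```python
import numpy as np

# DCA-style aggregation: PAMP, danger, and safe signals per item are combined
# into a co-stimulation value (csm) and a context value (k). The weights here
# are illustrative, not the canonical DCA weight matrix.
def dca_signals(pamp, danger, safe):
    csm = 2 * pamp + danger + safe
    k = 2 * pamp + danger - 2 * safe      # k > 0 leans anomalous
    return np.stack([csm, k], axis=1)

# The rule being replaced: anomalous iff the context value exceeds a threshold.
def threshold_classify(feats, thresh=0.0):
    return (feats[:, 1] > thresh).astype(int)

# The replacement: a one-hidden-layer MLP over the same features.
def mlp_classify(feats, W1, b1, W2, b2):
    h = np.maximum(0.0, feats @ W1 + b1)   # ReLU hidden layer
    logits = h @ W2 + b2
    return (logits.ravel() > 0).astype(int)

# Hand-set weights that replicate the threshold rule exactly; in the actual
# framework these would be learned from labeled data.
W1, b1 = np.array([[0.0], [1.0]]), np.zeros(1)
W2, b2 = np.array([[1.0]]), np.zeros(1)

feats = dca_signals(np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

With these weights the MLP and the threshold agree on every input, which makes the replacement a strict generalization of the rule it supersedes.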

Author 1: Yousra Abudaqqa
Author 2: Zulaiha Ali Othman
Author 3: Azuraliza Abu Bakar

Keywords: Dendritic Cell Algorithm (DCA); anomaly threshold; Multi-Layer Perceptron (MLP); anomaly detection

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org