The Science and Information (SAI) Organization

IJACSA Volume 17 Issue 3

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: Empirical Validation of the ASER Framework for Long-Term Knowledge Retention in Augmented Reality

Abstract: Long-term knowledge retention remains a critical challenge in augmented reality (AR) learning environments, which often prioritize novelty and short-term engagement over durable learning outcomes. This study empirically validates the Augmented Sensory Experience and Retention (ASER) Framework, an instructional model integrating emotional memory cues, interactive storytelling, and gamification within AR to promote sustained learning. A between-subjects experimental design was conducted with 30 adult participants randomly assigned to either an ASER-based AR condition or a traditional non-AR instructional condition. Baseline equivalence was established using equivalence testing. Learning outcomes were assessed using immediate post-test and three-week delayed recall measures. Individual gain scores were analyzed using Mann–Whitney U tests, and a one-way MANOVA examined multivariate effects across emotional engagement, motivation, learning engagement, and cognitive load. Results revealed significantly greater long-term retention gains in the ASER condition, with a large effect size, alongside stronger short-term improvement. Multivariate analysis demonstrated a significant overall effect of instructional condition, with the ASER group reporting higher engagement, motivation, and emotional involvement, as well as more favorable cognitive load. These findings provide empirical support for the ASER Framework and demonstrate that emotionally enriched, narrative-driven, and gamified AR instruction can foster deeper cognitive processing and more durable knowledge retention than conventional instructional approaches. The study offers evidence-based design guidance for developing pedagogically grounded AR learning systems aimed at sustained educational impact.

Author 1: Samer Alhebaishi
Author 2: Richard Stone
Author 3: Ulrike Genschel
Author 4: Kris De Brabanter
Author 5: Mani Mina
Author 6: Anthony M. Townsend
Author 7: Mohammed Ameen

Keywords: Augmented reality (AR); ASER Framework; long-term knowledge retention; emotional memory; interactive storytelling; gamification

PDF
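The gain-score analysis described above (Mann–Whitney U tests with a rank-based effect size) can be sketched as follows; all numbers are invented for illustration and are not the study's data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical gain scores (delayed recall minus pre-test) for the two
# instructional conditions; invented numbers, not the study's data.
aser_gains = [8, 7, 9, 6, 8, 7, 10, 6, 9, 8, 7, 9, 8, 6, 10]
control_gains = [3, 4, 2, 5, 3, 4, 2, 3, 5, 4, 3, 2, 4, 3, 5]

# Two-sided Mann-Whitney U test on the individual gain scores
u_stat, p_value = mannwhitneyu(aser_gains, control_gains, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4g}")

# Rank-biserial correlation as a simple rank-based effect size:
# r = 2U/(n1*n2) - 1, i.e. +1 when every ASER gain exceeds every control gain
n1, n2 = len(aser_gains), len(control_gains)
r_rank_biserial = 2 * u_stat / (n1 * n2) - 1
print(f"rank-biserial r = {r_rank_biserial:.3f}")
```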

Paper 2: Enhancing GANomaly-Based Anomaly Detection for X-Ray Cargo Inspection

Abstract: Anomaly detection in X-ray cargo imagery is challenging due to complex scene structures, object overlap, and limited labeled abnormal data. Reconstruction-based methods address this problem by learning normal cargo patterns and identifying deviations during testing. This study investigates how feature-level reconstruction objective functions influence detection performance within the GANomaly framework. Five objective configurations are evaluated on the CargoX dataset: a pixel-based baseline and four perceptual loss variants using Visual Geometry Group 16-layer network (VGG16) feature supervision, three at single depths (Rectified Linear Unit layers ReLU2_2, ReLU3_3, and ReLU4_3) and one multi-scale; an encoder replacement using a ResNet50, with and without perceptual supervision, is also evaluated. Performance is assessed using Receiver Operating Characteristic Area Under Curve (ROC-AUC), precision, recall, and F1-score, supported by qualitative analysis of reconstructions and residual maps. Results show that mid-level perceptual supervision (ReLU3_3) achieves the best performance. It improves ROC-AUC from 0.7182 to 0.7548 and demonstrates enhanced sensitivity to structural anomalies. Replacing the original GANomaly encoder with ResNet50 increases ROC-AUC to 0.7312 and improves precision. Combining ResNet50 with perceptual supervision achieves a ROC-AUC of 0.7517. However, it does not surpass the ReLU3_3 configuration in recall or F1-score. Shallow features (ReLU2_2) and multi-scale aggregation do not improve detection. Failure analysis highlights challenges with low-contrast anomalies and structurally complex normal cargo scenes. These findings show that anomaly detection performance depends on both reconstruction supervision and encoder design. Therefore, loss selection and feature extraction should be analyzed together in reconstruction-based models.

Author 1: Kholoud Alotaibi
Author 2: Nasser Nasrabadi

Keywords: Anomaly detection; cargo X-ray imaging; GANomaly; perceptual loss; feature-level reconstruction; semi-supervised learning; generative adversarial networks; structural anomaly detection; security screening; reconstruction-based detection; deep learning for X-ray inspection; ResNet50

PDF
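Feature-level (perceptual) reconstruction loss, the central ingredient of this paper, compares images in a feature space rather than pixel space. The sketch below is conceptual only: `phi` is a fixed random projection standing in for a pretrained VGG16 layer such as ReLU3_3, which the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))  # toy "feature extractor" weights

def phi(x):
    """Stand-in feature map: ReLU of a fixed linear projection
    (a placeholder for a pretrained VGG16 layer, not the real network)."""
    return np.maximum(W @ x, 0.0)

def perceptual_l1(x, x_hat):
    """Feature-level reconstruction loss: mean |phi(x) - phi(x_hat)|."""
    return np.abs(phi(x) - phi(x_hat)).mean()

x = rng.standard_normal(256)                  # flattened "image"
x_noisy = x + 0.1 * rng.standard_normal(256)  # imperfect reconstruction

print(perceptual_l1(x, x))        # 0.0 for a perfect reconstruction
print(perceptual_l1(x, x_noisy))  # > 0 when features deviate
```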

Paper 3: Benchmarking Lightweight Machine Learning Models for Epileptic Seizure Recognition: Accuracy, Calibration, and Robustness Analysis

Abstract: Epileptic seizure recognition is a critical task in clinical decision support systems, where both accuracy and reliability of predictions directly affect patient outcomes. While deep learning architectures such as CNNs and LSTMs are widely applied to EEG-based seizure detection, many publicly available seizure datasets consist of precomputed EEG-derived features, making the problem fundamentally tabular rather than raw-signal based. In such settings, the necessity and added value of complex deep learning pipelines remain unclear, and prior studies have largely emphasized classification accuracy while giving more limited attention to calibration, robustness, and deployment efficiency. In this work, we present a systematic benchmark of lightweight machine learning models—Logistic Regression, Random Forest, XGBoost, LightGBM, and CatBoost—on the Epileptic Seizure Recognition dataset. We evaluate performance across multiple dimensions: discriminative ability (accuracy, macro-F1, ROC-AUC, PR-AUC), confidence calibration (Brier score, calibration and reliability diagrams), and robustness under Gaussian feature perturbations. Our results show that LightGBM achieves 98.04% accuracy, a ROC-AUC of 0.9971, and a Brier score of 0.0166, while maintaining stable performance under the tested noise levels. Notably, all gradient boosting methods substantially outperform Logistic Regression, indicating that nonlinear feature interactions are critical for this task. Compared with prior deep learning approaches on the same dataset, these lightweight models achieve competitive performance at a fraction of the computational cost. These findings show that tabular machine learning methods deserve serious consideration for EEG-derived feature classification tasks, particularly in resource-constrained clinical settings where efficiency, calibration, and robustness are as important as raw accuracy.

Author 1: Sairam Tabibu
Author 2: Mugdha Abhyankar

Keywords: Epileptic seizure recognition; EEG classification; lightweight machine learning; LightGBM; calibration; robustness; biomedical AI

PDF
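The Brier score used above for calibration is simply the mean squared difference between predicted probabilities and the 0/1 outcomes; lower is better-calibrated. A toy computation with invented predictions (not the paper's data):

```python
import numpy as np

# Invented labels and predicted probabilities for illustration only
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p_pred = np.array([0.9, 0.1, 0.8, 0.7, 0.2, 0.3, 0.95, 0.05])

brier = np.mean((p_pred - y_true) ** 2)          # mean squared error of probabilities
accuracy = np.mean((p_pred >= 0.5) == y_true)    # thresholded accuracy
print(f"Brier score = {brier:.4f}, accuracy = {accuracy:.2%}")
```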

Paper 4: Hybridizing Collaborative Filtering and Knowledge: How do they Work Together? A Scoping Review

Abstract: The rapid expansion of digital platforms and the increasing complexity of user preferences have driven the need for more sophisticated recommendation systems. While Collaborative Filtering and Knowledge-Based Filtering have been widely adopted as core techniques for personalized recommendations, their individual limitations have led to the rise of hybrid approaches. Despite significant advancements, a comprehensive understanding of hybridization methodologies, their technical implementations, and emerging challenges is still lacking. To address this, this research systematically examines and synthesizes the domain of Hybrid Recommender Systems. This study presents a scoping review, following the PRISMA-ScR guidelines, of the hybridization of Collaborative Filtering and Knowledge-Based Filtering. A total of 62 hybrid recommenders across various application domains were analyzed and categorized into three primary hybridization strategies: Model Fusion, Transfer Learning, and Hierarchical Models. The review explores technical characteristics, hybridization techniques, data sources, evaluation methodologies, and domain-specific applications. Key findings indicate that most hybrid approaches leverage graph-based models, deep learning architectures, and causal inference techniques to enhance recommendation outcomes. However, despite these advancements, critical gaps remain. The review identifies key challenges, including computational complexity, lack of explainability, bias in recommendations, and reliance on offline evaluation metrics. Additionally, scalability issues in knowledge graph maintenance and the need for user-centered evaluation frameworks highlight important directions for future research. Addressing these gaps will be crucial in making hybrid recommendation systems more efficient, interpretable, and adaptable across diverse domains.
This study contributes to the field by providing a structured synthesis of existing hybridization techniques, pinpointing success factors, and proposing future research avenues to advance hybrid recommendation systems.

Author 1: Alex Martínez-Martínez
Author 2: Raul Montoliu
Author 3: Inmaculada Remolar

Keywords: Hybrid Recommender Systems; collaborative filtering; knowledge-based recommenders; personalized recommendations; scoping review

PDF

Paper 5: Federated Gaussian Process Regression with Orthogonal Feature Encryption and Key-Based Access Control

Abstract: Federated learning (FL) makes it possible to train models across distributed data sources without collecting raw data in one place. However, even in federated settings, trained models may still leak sensitive information at inference time. This problem is particularly evident for Gaussian Process regression (GPR), where predictive uncertainty is explicitly returned and can differ between training and non-training samples. Such differences can be exploited for membership inference. In this work, we examine inference-time privacy and robustness in federated GPR by focusing on the behavior of predictive variance. To enable scalable training, we employ a Random Fourier Feature approximation together with an Alternating Direction Method of Multipliers (ADMM) based distributed optimization scheme. On top of this learning framework, we apply key-dependent orthogonal feature transformations that enable multi-key inference time access control. When inference is performed using the correct key, prediction accuracy and uncertainty behavior remain close to those of plaintext federated GPR. When incorrect or mismatched keys are used, prediction errors increase sharply and predictive variance becomes uniformly large. Experimental results show that this variance inflation removes the usual gap between training and unseen samples, reducing the effectiveness of variance-based membership inference. Importantly, this effect arises without adding noise or relying on cryptographic operations. These findings suggest that predictive uncertainty can play a practical role in enforcing inference-time access control and improving privacy robustness in federated Gaussian Process models.

Author 1: Md. Rashedul Islam
Author 2: Jannatul Ferdous Akhi
Author 3: Takayuki Nakachi

Keywords: Gaussian process; differential privacy; Random Unitary Transformation; membership inference attack; machine learning; federated learning

PDF
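Two building blocks named in the abstract, Random Fourier Features and key-dependent orthogonal transforms, can be sketched as follows; dimensions, seeds, and the key-generation scheme are illustrative assumptions, not the paper's settings. Because orthogonal maps preserve inner products, matching keys leave the kernel approximation intact, while mismatched keys destroy it.

```python
import numpy as np

rng = np.random.default_rng(42)
d, D, sigma = 5, 500, 1.0

# Random Fourier Features: z(x)·z(y) approximates exp(-||x-y||^2 / (2*sigma^2))
W = rng.standard_normal((D, d)) / sigma
b_phase = rng.uniform(0, 2 * np.pi, D)

def rff(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b_phase)

def key_matrix(seed):
    """Key-dependent orthogonal matrix: QR of a seeded Gaussian matrix
    (an illustrative key scheme, not the paper's construction)."""
    q, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((D, D)))
    return q

x = rng.standard_normal(d)
y = x + 0.3 * rng.standard_normal(d)          # a nearby point

exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
approx = rff(x) @ rff(y)                      # close to the exact kernel value

Q = key_matrix(seed=7)                        # the "correct" key
same_key = (Q @ rff(x)) @ (Q @ rff(y))        # orthogonality preserves the value
wrong_key = (key_matrix(seed=3) @ rff(x)) @ (Q @ rff(y))  # near zero: useless
print(exact, approx, same_key, wrong_key)
```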

Paper 6: Fast E-Learning Recommendation: Enhancing Model Efficiency with Q-Matrix Complexity Reduction

Abstract: Intelligent tutoring systems generate a large volume of data, which becomes particularly valuable when effectively leveraged for learner performance prediction in adaptive learning environments. In this context, the speed and predictive accuracy of machine learning models are crucial, as they determine the system’s ability to deliver timely and relevant insights and support responsive, personalized instruction. Enhancing model speed not only increases tutoring efficiency but also improves the adaptability of educational systems to learners’ needs. This study introduces an approach aimed at improving the execution time of three logistic regression-based models widely used for learner performance prediction: DAS3H (Item Difficulty, Student Ability, Skill, and Student Skill Practice History), AFM (Additive Factor Model), and PFA (Performance Factor Analysis). The proposed optimization reduces the complexity of the Q-matrix that links each item to its required knowledge components by simplifying its structure while preserving pedagogical relevance. An empirical evaluation was conducted on four real-world datasets collected from online tutoring platforms. The results demonstrate that the proposed approach, called Fast E-learning Recommendation (FER), significantly improves the execution speed of the three models while maintaining comparable predictive performance across datasets.

Author 1: Ismail Menyani
Author 2: Ahmed Oussous
Author 3: Ayoub Ait Lahcen

Keywords: Learner performance prediction; adaptive learning; complexity; knowledge components; Q-matrix; machine learning; DAS3H; PFA; AFM; IRT

PDF
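The paper's exact FER simplification procedure is not reproduced here; as a hedged illustration, one plausible Q-matrix reduction is to merge knowledge components (columns) that tag exactly the same set of items, which shrinks the matrix without changing the item–skill structure.

```python
import numpy as np

# Toy Q-matrix: rows = items, columns = knowledge components (invented data)
Q = np.array([
    [1, 0, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [0, 1, 0, 1],
])

# Keep one representative of each distinct column: duplicate components
# (here columns 0 and 2) are indistinguishable to the item-skill mapping.
_, first_cols = np.unique(Q, axis=1, return_index=True)
Q_reduced = Q[:, sorted(first_cols)]
print(Q.shape, "->", Q_reduced.shape)   # (4, 4) -> (4, 3)
```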

Paper 7: Photoplethysmogram-Based Diabetes Screening via Supervised Machine Learning: A Demographic Study on a Southeast Asian Cohort

Abstract: Diabetes mellitus is a major chronic metabolic disorder that often leads to serious long-term vascular complications. Traditional monitoring methods focus mainly on metabolic indicators and often miss early vascular changes. This study developed and validated a non-invasive framework for classifying diabetic status based on photoplethysmogram (PPG) pulse morphology. The approach offers a scalable and affordable alternative to invasive blood tests. A dataset from 78 Malaysian participants was analyzed in five phases: signal pre-processing, feature extraction, statistical ranking, model training, and evaluation. Raw signals were filtered with a 4th-order Chebyshev Type II band-pass filter for accurate waveform analysis. From a wide set of temporal and amplitude features, key biomarkers linked to arterial stiffness and vascular compliance were identified and ranked. Six supervised machine learning models were evaluated: Logistic Regression, Decision Tree (DT), KNN, Support Vector Machine (SVM), Artificial Neural Network (ANN), and Naïve Bayes (NB). ANN and SVM models achieved the highest classification accuracy and AUC, demonstrating effective distinction between diabetic and non-diabetic status using interpretable waveform features. Validation with a Southeast Asian cohort addresses a demographic gap in the literature. The framework shows that ranked PPG biomarkers can be used for accessible, community-level diabetes screening, especially in healthcare settings with limited resources.

Author 1: Nazrul Anuar Nayan
Author 2: Mohd Taufik Rezza Mohd Foudzi
Author 3: Mohd Zubir Suboh
Author 4: Syaza Norfilsha Ishak
Author 5: Zazilah May

Keywords: Photoplethysmography (PPG); diabetes prediction; supervised machine learning; signal morphology features; non-invasive screening; feature selection

PDF
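The pre-processing step can be sketched with SciPy; the cut-off band (0.5–8 Hz), stop-band attenuation, and sampling rate below are illustrative assumptions, not values reported in the paper.

```python
import numpy as np
from scipy.signal import cheby2, filtfilt

fs = 125.0  # assumed PPG sampling rate (Hz)

# 4th-order Chebyshev Type II band-pass filter; 40 dB stop-band attenuation
# and the 0.5-8 Hz band are typical PPG choices, assumed here for illustration.
b, a = cheby2(N=4, rs=40, Wn=[0.5, 8.0], btype="bandpass", fs=fs)

t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)            # 1.2 Hz pulse-like component (in band)
drift = 0.5 * np.sin(2 * np.pi * 0.05 * t)   # baseline wander (out of band)

filtered = filtfilt(b, a, ppg + drift)       # zero-phase filtering removes the drift
```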

Paper 8: Quantum-Resilient Machine Learning and Q-Learning–Driven Priority Time-Slot AODV for Secure MANET Routing

Abstract: Mobile Ad Hoc Networks (MANETs) are decentralized and lack centralized control, and are therefore highly susceptible to routing attacks such as black hole and gray hole attacks, both of which disrupt data delivery by causing severe packet loss. To address these issues, this study proposes the Quantum-resilient Machine Learning and Q-Learning-driven Priority Time-Slot AODV (QR-MLQ-PTS-AODV) routing model. The framework combines a multi-metric trust evaluation, an entropy-based behavioral stability measure, a temporal trust adjustment, and a supervised machine learning method to achieve accurate malicious node prediction. Reinforcement learning, through Q-learning, is employed to dynamically assign MAC-layer priority time slots, enabling cross-layer optimization and adaptive routing decisions. In contrast to existing solutions, the proposed framework avoids quantum-vulnerable cryptographic primitives in favor of hash-based trust authentication and learning-based mitigation measures, ensuring resilience against emerging quantum-assisted routing attacks. The trust model's parameters are derived through mathematical analysis, and extensive NS-3 simulations show that the model significantly improves packet delivery ratio, end-to-end delay, routing overhead, and attack detection accuracy compared with traditional AODV and state-of-the-art trust-, ML-, and RL-based protocols. These results support the effectiveness of embedding quantum-resilient security mechanisms and intelligent cross-layer routing in MANETs.

Author 1: Singireddy Sateesh Reddy
Author 2: E. Aravind

Keywords: Mobile Ad Hoc Networks; secure routing; AODV; trust management; machine learning; reinforcement learning; MAC layer scheduling; black hole attack; post-quantum security; quantum-resilient routing

PDF
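One component named in the abstract, the entropy-based behavioral stability measure, can be illustrated as Shannon entropy over a node's observed forward/drop events (a hedged sketch, not the paper's exact metric): erratic, gray-hole-like behavior scores near one bit, while consistent behavior scores near zero.

```python
import math
from collections import Counter

def behavior_entropy(events):
    """Shannon entropy (bits) of a node's observed forwarding behavior."""
    counts = Counter(events)
    n = len(events)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

stable_node = ["fwd"] * 19 + ["drop"]   # forwards almost everything
gray_hole = ["fwd", "drop"] * 10        # drops packets unpredictably

print(behavior_entropy(stable_node))    # low entropy -> stable behavior
print(behavior_entropy(gray_hole))      # ~1 bit -> unstable, suspicious
```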

Paper 9: Comparing Random Forest and Gradient Boosting for Monkeypox Diagnosis

Abstract: Early and accurate diagnosis of Monkeypox is essential to limit transmission and support effective treatment. This study aims to compare the performance of Random Forest and Gradient Boosting models for classifying Monkeypox cases using clinical symptom data. A synthetic dataset from Kaggle containing 25,000 records with 11 symptom-based features was used to evaluate both models under imbalanced and SMOTE-balanced conditions using stratified 5-fold cross-validation. Model performance was assessed using accuracy, precision, recall, F1-score, receiver operating characteristic (ROC) curves, and area under the curve (AUC). The experimental results indicate that both models achieve high recall values on imbalanced data, with Gradient Boosting slightly outperforming Random Forest in discriminative performance (AUC 0.6869 vs. 0.6839). While the application of SMOTE improves precision, it reduces recall and provides only marginal improvements in AUC, indicating a trade-off between sensitivity and precision in symptom-based classification. These findings demonstrate the potential of ensemble learning models for symptom-based Monkeypox classification in synthetic tabular datasets. However, further validation using real-world clinical data is necessary before practical diagnostic deployment.

Author 1: Fahlul Rizki
Author 2: Widowati
Author 3: Catur Edi Widodo

Keywords: Comparative analysis; Random Forest; Gradient Boosting; clinical symptoms; machine learning

PDF
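SMOTE, used above to balance the classes, synthesizes minority samples by interpolating between a minority point and one of its minority-class nearest neighbours. A minimal sketch of the idea (not the imbalanced-learn implementation typically used in practice):

```python
import numpy as np

def smote_like(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by neighbour interpolation."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from X_min[i] to all minority points, excluding itself
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]
        j = rng.choice(neighbours)
        lam = rng.random()                           # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Invented minority-class points (corners of the unit square)
X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote_like(X_minority, n_new=6)
print(synthetic.shape)   # synthetic points stay inside the minority region
```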

Paper 10: User Behaviour Analysis for Insider Threat Detection Using Machine Learning: A Case Study in Enterprise Web Application Security

Abstract: This study presents a user behaviour analysis approach for detecting insider threats in an enterprise web application environment. The approach applies machine learning techniques to analyze patterns of user activity. Using a primary dataset collected from a leading ICT distributor company in Indonesia with nationwide channel operations over January–June 2025, we identify patterns of normal and anomalous user activities indicative of insider threats. Three machine learning models were implemented: Random Forest, Support Vector Machine (SVM) with RBF kernel, and 1D CNN, which are widely used in insider-threat and anomaly-detection research. Severe class imbalance was mitigated via undersampling followed by SMOTE. Random Forest delivered the best performance on the test set (Accuracy 97.38%, F1-Score 97.77%, ROC-AUC 99.82%), with CNN and SVM also showing strong anomaly sensitivity. The findings demonstrate a practical, high-accuracy insider-threat detector trained on real enterprise logs, not simulated datasets, suitable for deployment in Indonesian enterprise settings.

Author 1: Yosep
Author 2: Aditya Kurniawan

Keywords: Insider threat; user behaviour analytics; machine learning; anomaly detection; cybersecurity

PDF

Paper 11: CdbNorm: An Efficient Library for Automatic Database Normalization

Abstract: This study introduces CdbNorm, a library that provides efficient implementations of the first three normal forms of relational database normalization. CdbNorm makes it quick and straightforward for a data analyst to divide a large dataset into smaller tables free from database anomalies (insert, update, and delete) and duplicate data. This study describes each of the steps of our normalization algorithm, which includes the discovery of functional dependencies and the population of output normalized datasets. We evaluate the accuracy and efficiency of our algorithm with databases introduced in prior papers and with large datasets available online.

Author 1: Ivan Piza-Davila
Author 2: Fernando Gutierrez-Preciado
Author 3: Victor Ortega-Guzman
Author 4: Mildreth Alcaraz-Mejia

Keywords: Database normalization; functional dependency; normal form; 1NF; 2NF; 3NF

PDF
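Discovering functional dependencies is the first step of the normalization pipeline described above. A functional dependency A → B holds when every value of A determines a single value of B; a minimal check over an invented table (not CdbNorm's actual API):

```python
def fd_holds(rows, lhs, rhs):
    """Return True iff the functional dependency lhs -> rhs holds in rows."""
    seen = {}
    for row in rows:
        key = tuple(row[c] for c in lhs)
        val = tuple(row[c] for c in rhs)
        if seen.setdefault(key, val) != val:
            return False          # same lhs value maps to two rhs values
    return True

# Invented example table
orders = [
    {"order_id": 1, "customer": "Ana", "city": "Lyon"},
    {"order_id": 2, "customer": "Ana", "city": "Lyon"},
    {"order_id": 3, "customer": "Bo",  "city": "Oslo"},
]

print(fd_holds(orders, ["customer"], ["city"]))   # customer -> city holds
print(fd_holds(orders, ["city"], ["order_id"]))   # city -> order_id does not
```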

Paper 12: Assessment of Different Energy Management Strategies for the Operation of Hybrid Hot-Water Installations in Hotels

Abstract: With the opening of the energy market in Bulgaria, large fluctuations in the price of electrical energy have been occurring, which is a challenge for businesses across the different sectors of the economy. This study evaluates the energy and financial performance of three energy management strategies for operating hybrid hot-water installations in hotels: the first assumes the water is heated only by an evacuated-tube solar system; the second assumes electrical energy is used whenever the water's temperature falls below a certain threshold; and the third pre-heats the water during off-peak hours when electrical energy is cheaper. A simulation model has been developed based on well-known physical and empirical dependencies, allowing for the necessary evaluations. The operation of a hot-water installation was investigated for a hotel with a capacity of 80 guests on a sunny summer day. The results showed that the first strategy cannot maintain the temperature of the water in the tank above the required threshold. The second strategy met the water-temperature requirements with minimal use of electrical energy, leading to daily expenses between 3.4 EUR and 62 EUR. The third strategy increased grid energy usage, but the daily expenses were limited to 18.5 EUR. The obtained results indicate that hotel owners could significantly reduce their hot-water expenses with the help of a hybrid hot-water installation and an appropriate energy management strategy.

Author 1: Boris I. Evstatiev
Author 2: Nadezhda L. Evstatieva

Keywords: Energy management; energy market; evacuated tube collectors; hot water consumption; strategies

PDF
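The second strategy (grid heating only when the tank temperature falls below a threshold) can be sketched as a toy hourly simulation; all physical constants below are invented round numbers, not parameters of the paper's validated model.

```python
def simulate(hours, solar_gain, draw_loss, threshold=55.0, t0=60.0,
             heater_step=2.0):
    """Toy hourly tank model: solar input, draw losses, threshold top-up."""
    temp, grid_hours = t0, 0
    for h in range(hours):
        temp += solar_gain(h) - draw_loss(h)
        if temp < threshold:          # strategy 2: top up from the grid
            temp += heater_step
            grid_hours += 1
    return temp, grid_hours

# Crude invented profiles: daytime solar input, morning/evening showers
solar = lambda h: 1.5 if 8 <= h < 18 else 0.0          # degrees C per hour
draw = lambda h: 2.0 if h in (7, 8, 19, 20) else 0.5   # degrees C per hour

final_temp, hours_on_grid = simulate(24, solar, draw)
print(final_temp, hours_on_grid)
```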

Paper 13: From Rules to Transformers: A Deep Learning Approach for Arabic Natural Language Interfaces to Databases

Abstract: Natural language interfaces to databases (NLIDBs) enable users to communicate with databases using natural everyday language rather than difficult query languages. This study presents a new approach using deep learning techniques to improve the robustness and accessibility of Arabic NLIDB systems through a new end-to-end framework. A Transformer-based architecture is proposed, in which AraT5 is utilized to translate Arabic Natural Language Queries (ANLQs) into structured JSON Logical Query (JLQ) representations, subsequently converting these into executable SQL statements. Traditional rule-based systems are surpassed by this approach, as semantic understanding is leveraged instead of grammatical pattern matching. Consequently, the morphological complexity and dialectical variations of Arabic are more effectively handled. This neural semantic parsing approach demonstrates a deep understanding of query intent, moving beyond surface-level pattern matching. Experimental evaluation on a large-scale, multi-domain curated dataset of 50,000 query pairs demonstrates superior performance, with 85.2% exact match accuracy for JLQ generation and 89.8% SQL execution accuracy. The findings indicate that Transformer-based approaches offer substantial improvements in translation accuracy compared to conventional rule-induction methods.

Author 1: Dahr Laila
Author 2: Sahib Mohamed Rida
Author 3: Er-Raha Brahim

Keywords: Sequence-to-sequence (SeqToSeq); natural language to SQL (NL2SQL); semantic parsing; Arabic NLP; Text-to-SQL

PDF
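The paper's JSON Logical Query (JLQ) schema is not reproduced here; the field names below are invented to illustrate the final deterministic step of the pipeline, turning a structured logical form into an executable SQL string.

```python
def jlq_to_sql(jlq):
    """Render a toy JSON logical query (hypothetical schema) as SQL."""
    cols = ", ".join(jlq["select"])
    sql = f"SELECT {cols} FROM {jlq['from']}"
    if jlq.get("where"):
        conds = " AND ".join(f"{c['col']} {c['op']} {c['val']!r}"
                             for c in jlq["where"])
        sql += f" WHERE {conds}"
    return sql

# Invented example: the model would emit this structure from an Arabic query
jlq = {"select": ["name", "salary"],
       "from": "employees",
       "where": [{"col": "department", "op": "=", "val": "sales"}]}

print(jlq_to_sql(jlq))
# SELECT name, salary FROM employees WHERE department = 'sales'
```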

Paper 14: A Review on Machine Learning Approaches for Solid Waste Management

Abstract: The rapid increase in population and the ongoing expansion of urban regions have resulted in a substantial growth in municipal solid waste generation, creating serious challenges for environmental protection and urban management. In response to these problems, recent research has increasingly focused on technological solutions, among which machine learning has gained considerable attention. Machine learning can capture complex nonlinear patterns and is therefore widely applied across various stages of municipal solid waste management to enhance sustainable and efficient waste handling. This review examines over one hundred research studies published between 2000 and 2022, with the objective of analyzing how machine learning techniques have been employed throughout the waste management process, including waste generation prediction, collection scheduling, transportation optimization, and disposal planning. The study systematically explores prevailing research trends, identifies methodological limitations, and highlights promising future research directions, offering conceptual understanding and practical guidance for subsequent investigations. In contrast to previous review studies, this research specifically focuses on the waste generation and disposal stages, highlighting how individuals, households, and municipal authorities employ advanced computational techniques to minimize waste volume and improve management efficiency. The findings indicate that most existing studies focus on waste classification, regional estimation of waste quantities, and prediction of bin fill levels. Nevertheless, several important challenges remain, such as the lack of real-time time-series datasets, limited model robustness and generalization capability, the absence of unified benchmarking standards, and the difficulty of achieving reliable long-term forecasting of waste generation.

Author 1: S. Vidya

Keywords: Municipal solid waste management; machine learning; modeling; optimization; solid waste generation; disposal

PDF

Paper 15: Sentiment and Emotion Analysis in Textual Data: A Recent Systematic Literature Review Method, Model and Application

Abstract: The analysis of sentiment and emotion has become an important research topic in Natural Language Processing (NLP) due to the rapid growth of textual data generated on digital platforms. Still, despite significant progress, the existing literature remains fragmented across methods, modalities, and application domains, making it difficult to obtain a comprehensive understanding of current research trends. This study presents a structured literature review that synthesizes recent advances in sentiment and emotion analysis of textual data. The review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol and systematically examines studies retrieved from the Web of Science (WoS) and Scopus databases. After screening, eligibility evaluation, and Quality Assessment (QA), 50 primary studies published between 2023 and 2025 were selected for analysis. As such, the findings reveal a clear methodological transition from traditional Machine Learning (ML) techniques toward transformer-based architectures and Large Language Models (LLMs). In addition, recent studies increasingly explore multimodal approaches and context-aware emotion modeling to improve sentiment and emotion detection. Despite these advancements, several challenges remain, including the detection of implicit emotions, dataset imbalance, and domain adaptability. Overall, this review provides a structured synthesis of recent developments in textual sentiment and emotion analysis, identifies key research challenges, and outlines potential directions for future studies.

Author 1: Wan Azzura Wan Ramli
Author 2: Rabiah Abdul Kadir
Author 3: Amalia Amalia
Author 4: Ang Mei Choo

Keywords: Sentiment; emotion analysis; textual data; transformer; large language models

PDF

Paper 16: Capacitated Location-Allocation Model for Emergency Supply Chain: The Case of Morocco

Abstract: Recently, Morocco has experienced a series of disasters, including the El Haouz earthquake in 2023, which have brought renewed attention to the country’s emergency preparedness and the efficiency of its national emergency supply chain. In addition, this study considers a prospective scenario based on potential flood events in northern Morocco to evaluate future resilience requirements. In this context, improving the strategic planning of Emergency Supply Facilities (ESFs) is essential for strengthening disaster response capabilities. This study develops a capacitated location–allocation optimization model for emergency supply chain planning that incorporates demand uncertainty, flexible allocation of ESFs, and donor contributions. The proposed framework is evaluated through computational experiments using problem instances consisting of multiple candidate ESF locations, demand points, and disruption scenarios, allowing the analysis of different emergency response configurations. The results indicate that the proposed optimization framework can significantly improve the efficiency and responsiveness of Morocco’s emergency supply chain. The model provides a practical decision-support tool for policymakers and planners to enhance disaster preparedness and resource allocation in national emergency logistics systems.

Author 1: Imane Sassaoui
Author 2: Aziz Ait Bassou
Author 3: Mustapha Hlyal
Author 4: Jamila El Alami

Keywords: Emergency logistics; disaster response; location–allocation; stochastic demand; supply chain resilience

PDF
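The paper solves a capacitated location–allocation optimization model; as a much simpler illustration of the underlying idea, the hedged sketch below greedily assigns each demand point to the nearest facility with remaining capacity (coordinates, capacities, and names are invented).

```python
def allocate(demands, facilities):
    """Greedily assign each demand point to the nearest facility that
    still has enough capacity; None marks unmet demand."""
    assignment = {}
    remaining = {f: cap for f, (_, cap) in facilities.items()}
    for d, (dx, dy, qty) in demands.items():
        feasible = [f for f in facilities if remaining[f] >= qty]
        if not feasible:
            assignment[d] = None                    # no facility can serve it
            continue
        best = min(feasible,
                   key=lambda f: (facilities[f][0][0] - dx) ** 2 +
                                 (facilities[f][0][1] - dy) ** 2)
        remaining[best] -= qty
        assignment[d] = best
    return assignment

# Invented instance: two emergency supply facilities, three demand points
facilities = {"ESF_A": ((0, 0), 100), "ESF_B": ((10, 0), 60)}
demands = {"D1": (1, 1, 80), "D2": (9, 1, 50), "D3": (5, 0, 40)}

print(allocate(demands, facilities))
```

A greedy pass like this ignores the global optimum the paper's model computes, but it makes the capacity and unmet-demand mechanics concrete.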

Paper 17: Leveraging Kolmogorov-Arnold Networks (KANs) for Mixed-Domain Satellite Imagery Segmentation

Abstract: Semantic segmentation of satellite imagery requires models that capture global context while preserving sharp object boundaries. Convolutional Neural Networks (CNNs) excel at local feature extraction, but often struggle with long-range dependencies. Transformers provide global context but may blur edges and rely on opaque classifier heads. This study aims to develop an interpretable hybrid segmentation model that improves boundary accuracy and generalization across mixed-domain satellite imagery. This study presents SwinKANet, a hybrid segmentation model that combines a transformer encoder with boundary-aware decoding and an interpretable prediction head. SwinKANet employs a Swin Transformer (SwinV2-Tiny) encoder to extract multi-scale features, while a Convolutional Block Attention Module (CBAM) at the bottleneck refines channel and spatial responses. Skip connections equipped with SharpBlock units enhance edge detail, and an FPN-like lateral fusion module aligns and merges decoder features. The conventional multilayer perceptron head is replaced with a Kolmogorov–Arnold Network (KAN) head, enabling flexible function approximation and class-wise interpretability. We evaluate SwinKANet on a mixed-domain LoveDA dataset (urban + rural) for diverse spatial learning and on the urban-only ISPRS Vaihingen dataset for city-scale benchmarking. SwinKANet achieves 0.5269 mIoU on LoveDA and 0.7645 mIoU on Vaihingen, delivering sharper boundaries and more consistent class regions than CNN, Mamba, and transformer baselines. The KAN head further enhances explainability by revealing feature contributions for each class, supporting interpretable remote sensing applications.

Author 1: Abdul Hadi Mazbah
Author 2: Safiza Suhana Binti Kamal Baharin
Author 3: Md. Shadman Zoha

Keywords: Satellite imagery; Kolmogorov-Arnold Network; semantic segmentation; attention; mixed-domain

PDF

Paper 18: AFLBCRS: Blockchain-Enabled Federated Learning with Ring Signatures

Abstract: With the explosive development of machine learning and increased concern about data privacy, federated learning (FL) has emerged as a major area of study. Despite its benefits, FL faces several obstacles, including the risk of indirect data leakage through reverse engineering, the compromise of model architectural privacy, and connection and communication costs. To address these, the proposed AFLBCRS (Adaptive Federated Learning with Blockchain and Ring Signatures) framework combines federated learning, blockchain technology, and ring signatures to enable collaborative and secure model training across decentralized networks while preserving data privacy. In AFLBCRS, participants train local models using their private data and contribute updates to a shared model without disclosing raw data. Blockchain technology ensures the integrity and transparency of the process by securely recording and validating model updates. Ring signatures authenticate contributions while preserving participant anonymity. Key benefits of AFLBCRS include privacy preservation, security, collaborative learning, and transparency. The framework is promising for applications in healthcare, finance, and other sensitive domains where data privacy and security are paramount. AFLBCRS demonstrates competitive model accuracy compared to centralized approaches while effectively preserving data privacy and ensuring security through blockchain integration and ring signatures. The case study for AFLBCRS is a healthcare IoT setting using an ICU dataset, where multiple sites collaboratively trained a model to predict patient risk within 24 hours without sharing raw patient data. The results suggest that AFLBCRS is well-suited for compliance-focused environments because it keeps data local, protects participant identity, maintains an auditable (tamper-resistant) record of contributions, and ensures that only verified updates are accepted. When evaluated with a scoring method that prioritizes regulatory requirements alongside model usefulness and operational cost, AFLBCRS clearly outperformed a traditional centralized setup (0.898 vs. 0.343). The evaluation matrix for AFLBCRS indicates promising results across key metrics such as model accuracy, privacy preservation, security, scalability, and usability.
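The collaborative-training step the abstract describes — local models contributing updates without disclosing raw data — is commonly realized by federated averaging (FedAvg). A minimal sketch of size-weighted aggregation (the site data are invented; the on-chain recording and ring-signature verification the framework adds are omitted here):

```python
def fed_avg(client_weights, client_sizes):
    """Size-weighted average of per-client model weight vectors (FedAvg):
    each site's update counts in proportion to its local data volume."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w

# Three hypothetical ICU sites with different amounts of local data;
# only these weight vectors (never raw patient records) leave each site.
sites = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.9]]
sizes = [100, 300, 100]
print(fed_avg(sites, sizes))  # consensus model, dominated by the largest site
```

In the full framework each of these updates would additionally be ring-signed (anonymous authentication) and appended to the blockchain before aggregation, so that only verified updates enter the average.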

Author 1: Menna Mamdouh Orabi
Author 2: Osama Emam
Author 3: Hanan Fahmy

Keywords: Machine learning; federated learning; blockchain; security; ring signature

PDF

Paper 19: Exploring Employability Factors: A Machine Learning Approach Using Association Rules in Business and Economics Graduates at Qassim University

Abstract: The growing number of business and economics graduates raises concerns about employability in a competitive job market. Furthermore, scrutiny from the Saudi Education and Training Evaluation Commission on educational outcomes highlights the relevance of this research for university administrations. Current literature often overlooks the factors affecting employment outcomes for recent graduates. Understanding these factors is essential for addressing concerns. This study aims to fill these gaps by focusing on graduates from the College of Business and Economics at Qassim University, using association rule mining to uncover patterns and relationships among academic performance, skills, and employment status. This analysis uses a dataset of 407 graduates to examine factors such as gender, major, cumulative GPA, and employment status. As the job market evolves, the findings offer valuable observations for universities on aligning educational programs with employer needs. The association rules model was utilized to predict graduates' likelihood of securing employment based on these attributes, showing that factors such as GPA and skills significantly impact employment outcomes. The proposed model demonstrated high accuracy in predicting employability and generated 147 association rules, indicating its effectiveness in identifying the factors that influence employment outcomes. It also reveals actionable knowledge for curriculum development. The effectiveness of the association rules in identifying the most impactful attributes related to employment outcomes reinforces the importance of addressing the skills and competencies sought by employers. The proposed model demonstrates its reliability for practical use. By aligning educational offerings with market demands, universities can enhance the employability of graduates, ensuring they are prepared for a dynamic environment. This research highlights the critical role of data mining in informing educational strategies and connecting academia with industry.
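Association rule mining of the kind described scores candidate rules by support (how often the items co-occur) and confidence (how often the consequent follows the antecedent). A minimal sketch with invented graduate records (the attribute names are illustrative, not from the Qassim dataset):

```python
def support(records, items):
    """Fraction of records containing all the given items."""
    return sum(items <= r for r in records) / len(records)

def confidence(records, lhs, rhs):
    """Of the records matching the antecedent, the fraction also matching
    the consequent — the strength of the rule lhs -> rhs."""
    return support(records, lhs | rhs) / support(records, lhs)

# Hypothetical graduate records as sets of attributes
records = [
    {"high_gpa", "english", "employed"},
    {"high_gpa", "employed"},
    {"low_gpa"},
    {"high_gpa", "english", "employed"},
]
rule = ({"high_gpa"}, {"employed"})
print(support(records, rule[0] | rule[1]))  # 0.75
print(confidence(records, *rule))           # 1.0
```

An algorithm such as Apriori enumerates frequent itemsets above a support threshold and keeps the rules above a confidence threshold, yielding a rule set like the 147 rules the study reports.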

Author 1: Hussain Mohammad Abu-Dalbouh
Author 2: Osman Abdalla Mohamed Elhadi
Author 3: Ajlan Suliman Al-Ajlan
Author 4: Leenah Sulaiman Almuhanna Abalkhail
Author 5: Abdullah Suliman Almutlaq
Author 6: Wejdan Aamer Alasqah
Author 7: Mayadah Shikh Othman
Author 8: Sulaiman Abdullah Alateyah

Keywords: Machine learning; prediction; hidden patterns; employment rates; academic performance; data analysis; modeling

PDF

Paper 20: Improving Decision-Making Processes in Retail Through Artificial Intelligence for Advanced Management Information Systems: A Study on Consumer Behavior in Qassim

Abstract: In today’s rapidly evolving retail environment, the sheer volume of consumer data presents both opportunities and challenges for businesses striving to maintain a competitive edge. This study explores the pivotal role of artificial intelligence and sophisticated data mining techniques within management information systems. The study aims to transform decision-making processes and deepen the understanding of consumer behavior in the Qassim region of Saudi Arabia, while also exploring implications for broader regional markets. By employing a dataset of 712 customers that encompasses demographic variables, lifestyle choices, and purchasing patterns, we implement leading machine learning algorithms, including Decision Trees, Random Forests, and Support Vector Machines. This allows us to uncover actionable findings that drive strategic initiatives. Additionally, we analyze the impact of artificial intelligence on retailers by comparing outcomes before and after implementing AI-enhanced analytics. The investigation reveals that retailers applying AI-enhanced analytics experience a remarkable 32% improvement in their responsiveness to market changes, a 28% increase in customer retention rates, and a 34.7% improvement in repeat customers. These results highlight the substantial impact of these technologies on operational efficacy and demonstrate how AI can enhance customer loyalty, satisfaction, and overall business performance. The Random Forest model achieved the highest accuracy at 96.91%. Furthermore, this research emphasizes the effectiveness of predictive analytics in identifying distinct consumer segments and tailoring marketing strategies to meet their specific needs. By enabling retailers to respond proactively to consumer trends, AI emerges as a crucial tool for enhancing customer engagement and satisfaction. The findings illustrate how data analysis empowers businesses to detect emerging trends, optimize inventory management practices, and boost profitability. This research underscores the transformative potential of integrating advanced algorithms into retail operations, fostering data-informed decision-making that cultivates sustainable growth and elevates customer satisfaction in an increasingly competitive marketplace. The observations gained from this study serve as a valuable resource for retailers eager to utilize the power of AI and data mining to navigate the complexities of modern consumer behavior.

Author 1: Hussain Mohammad Abu-Dalbouh
Author 2: Mushira Mustafa Freihat
Author 3: Rayah Ismaeel Jawarneh
Author 4: Osman Abdalla Mohamed Elhadi
Author 5: Mortada Ibrahim Elimam
Author 6: Leenah Sulaiman Almuhanna Abalkhail
Author 7: Ghadi Mohammed Al Nafesah
Author 8: Soliman Aljarboa
Author 9: Sulaiman Abdullah Alateyah

Keywords: Analytics; machine learning; data-driven observations; predictive modeling; strategic marketing

PDF

Paper 21: Integrating Heterogeneous Data for Stock Market Prediction: A Systematic Literature Review

Abstract: This systematic literature review examines recent developments in stock market prediction using heterogeneous data sources that combine technical indicators, fundamental attributes, and sentiment-driven signals. Despite the growing adoption of machine learning in financial forecasting, existing research remains fragmented across data modalities, fusion strategies, and evaluation protocols, limiting comparability and practical applicability. Studies published between 2018 and 2024 were retrieved from five major scholarly databases and screened based on predefined eligibility criteria, resulting in 44 peer-reviewed articles included in the final analysis. The review synthesizes the quantitative and qualitative data modalities employed, the machine learning and deep learning methodologies adopted, the evaluation metrics used to assess predictive performance, and the principal challenges associated with multi-source stock market prediction. Findings reveal a clear shift toward deep learning architectures, hybrid fusion techniques, and the integration of external information such as news, corporate disclosures, and social media sentiment. Despite this progress, the literature exhibits inconsistent evaluation practices, limited attention to temporal data leakage, and insufficient coverage of non-English and emerging markets. This review consolidates current knowledge, presents a structured taxonomy of heterogeneous data sources and fusion strategies, and identifies open research challenges to guide future work in multimodal stock market prediction.

Author 1: Abdullah Almusned
Author 2: Mohammad Mehedi Hassan
Author 3: Bader Alkhamees
Author 4: Muhammad Al-Qurishi

Keywords: Stock prediction; heterogeneous data; machine learning; quantitative and qualitative data; systematic review

PDF

Paper 22: An Ontological Design Model for Integrating Notification, Appointment, and Queue in Healthcare Queue Systems

Abstract: Healthcare queue systems frequently suffer from prolonged waiting times, overcrowding, and inefficient patient flow management. Although various Queue Management Systems (QMS) have been developed, most existing solutions treat notification, appointment scheduling, and queue management as independent components. This fragmented design limits semantic clarity, adaptability, and reusability. This study proposes an ontology-based design model, termed OntoNAQ, which integrates Notification, Appointment, and Queue (NAQ) into a unified conceptual framework for healthcare queue systems. The study adopts the Design Science Research Methodology (DSRM) to identify conceptual gaps, design the ontological model, and demonstrate its applicability through prototype mapping and qualitative evaluation. The findings indicate that OntoNAQ provides explicit semantic relationships among NAQ components and serves as a reusable and theoretically grounded conceptual foundation for healthcare queue system design.

Author 1: Nik Mohd Habibullah Nik Mohd Nizam
Author 2: Shafrida Sahrani
Author 3: Mohd Nazri Kama
Author 4: Abdul Ghafar Jaafar
Author 5: Mohd Yazid Bajuri
Author 6: Mohammad Nazir Ahmad

Keywords: Ontology-based design; queue management system; appointment scheduling; notification system; healthcare information systems; design science research

PDF

Paper 23: Multi-Objective Intelligent Control of Bi-Directional V2X Charging Using NSGA-II in an Integrated Energy Management System

Abstract: This study presents the development and evaluation of an intelligent control system for a real-time bi-directional Electric Vehicle (EV) charging infrastructure integrated with solar Photovoltaic (PV), Energy Storage Systems (ESS), and the power grid. The proposed system aims to optimize energy flow decisions such as cost minimization, energy efficiency maximization, and prioritization of renewable sources. Two evolutionary optimization techniques are implemented and compared: a traditional single-objective Genetic Algorithm (GA) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The GA approach focuses solely on minimizing operational cost, while NSGA-II considers multiple objectives simultaneously, offering a set of optimal trade-off solutions. Real-time switching decisions are formulated based on binary control variables corresponding to relay states in the V2X energy system. Simulation results demonstrate that NSGA-II provides superior flexibility in handling multi-objective trade-offs, achieving improved solar utilization and reduced grid dependency without compromising cost efficiency. The hybrid integration of NSGA-II with rule-based override logic further enhances the system's adaptability to dynamic operating conditions, making it suitable for deployment in smart energy management applications.
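The "set of optimal trade-off solutions" NSGA-II returns is a Pareto front, constructed from a dominance test over the objective vectors. A minimal sketch of that test for two minimization objectives (the candidate relay schedules and objective values below are invented, not simulation outputs):

```python
def dominates(a, b):
    """a dominates b (minimization) when a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (operational cost, grid dependency) pairs for four
# candidate relay-switching schedules in the V2X system.
candidates = [(10, 0.8), (12, 0.3), (13, 0.5), (15, 0.2)]
print(pareto_front(candidates))  # (13, 0.5) is dominated by (12, 0.3) and drops out
```

NSGA-II repeatedly applies this non-dominated sorting (plus crowding-distance ranking) across generations; the surviving front is the trade-off set from which an operator, or the rule-based override logic, picks a working point.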

Author 1: Muhammad Aqmal Bin Abu Hassan
Author 2: Ezmin Abdullah
Author 3: Muhammad Umair
Author 4: Nik Hakimi Nik Ali
Author 5: Roslina Mohamad
Author 6: Nabil M. Hidayat

Keywords: Genetic algorithm; NSGA-II; optimization; Non-dominated Sorting Genetic Algorithm; EV charging; bi-directional EV charger; V2X; energy management system

PDF

Paper 24: Rule-Based Myanmar Herbal Recommendation System Using Ontology

Abstract: Myanmar herbal medicine is recognized as a vital component of traditional healthcare; however, its documentation remains disorganized and primarily available in the local language. Identifying appropriate herbs for individual users from existing records is inefficient and may result in medication errors. This study presents a formalized, digitized representation of Myanmar herbal knowledge using an ontology-based framework that enables precise and efficient herb identification and recommendation. The ontology and rule-based recommendation system were developed through literature review, expert consultation, and analysis of volumes 1 and 2 of Medicinal Plants of Myanmar. The system’s performance was evaluated by three experts from the University of Traditional Medicine in Mandalay. The constructed ontology models 119 herbs, 17 plant parts, 12 distribution regions, 256 disease symptoms, and 23 adverse effects. Seven inference rules were defined to generate recommendations based on seven benchmark questions. The system achieved an average accuracy of 95% and a recall of 96% in recommending herbs based on symptoms, plant parts used, location, plant family, adverse effects, combinations of users’ symptoms and location, and combinations of symptoms and adverse effects through rule-based evaluations. The proposed system provides a formalized structure for preserving Myanmar herbal knowledge and offers reliable recommendations within the scope of a limited dataset and a rigid ontology structure.
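A rule-based recommender of this kind reduces to matching user facts against ontology-encoded herb properties. A schematic Python sketch (the herb names, indications, and regions below are invented for illustration, not drawn from the paper's ontology):

```python
# Hypothetical miniature knowledge base; OntoNAQ-style systems encode the
# same facts as ontology classes and properties rather than dicts.
HERBS = {
    "herb_a": {"symptoms": {"fever", "skin_rash"}, "region": "mandalay"},
    "herb_b": {"symptoms": {"nausea", "cough"}, "region": "shan"},
    "herb_c": {"symptoms": {"fever", "joint_pain"}, "region": "shan"},
}

def recommend(symptoms, region=None):
    """Rule: recommend herbs whose indications cover at least one user
    symptom, optionally restricted to the user's region."""
    return sorted(
        name for name, h in HERBS.items()
        if h["symptoms"] & symptoms and (region is None or h["region"] == region)
    )

print(recommend({"fever"}))                 # matches by symptom alone
print(recommend({"fever"}, region="shan"))  # symptom + location rule
```

The paper's seven inference rules combine such conditions (symptoms, plant part, location, family, adverse effects) in the same conjunctive style, expressed over the ontology rather than Python data structures.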

Author 1: Nang Saing Horm
Author 2: Nikom Suvonvorn

Keywords: Myanmar herbal medicine; ontology; recommendation system

PDF

Paper 25: Real-Time LiDAR SLAM-Driven Navigation and Collision Avoidance for Mobile Robots in Unstructured Environments

Abstract: Autonomous navigation in unknown environments requires accurate simultaneous localization and mapping, reliable obstacle detection, and efficient path planning within a unified framework. This study proposes a real-time LiDAR-based SLAM-driven navigation system for mobile robots operating in structured indoor environments. The developed architecture integrates three-dimensional LiDAR sensing, ego-motion estimation, scan registration, loop closure optimization, and collision-aware trajectory planning to achieve robust environmental reconstruction and safe autonomous mobility. A probabilistic measurement model is employed to relate sensor observations to robot pose and map states, while back-end optimization mitigates cumulative drift and enhances global consistency. The navigation module incorporates obstacle segmentation and goal-directed path generation, ensuring smooth and collision-free trajectories under kinematic constraints. Experimental validation is conducted in both incremental and full-environment exploration scenarios using a physical robotic platform equipped with LiDAR and auxiliary sensors. Results demonstrate consistent mapping accuracy, stable trajectory estimation, and effective obstacle avoidance in cluttered indoor settings. The system maintains real-time computational performance while preserving the structural coherence of reconstructed environments. The findings confirm the reliability and scalability of the proposed framework, providing a practical foundation for autonomous robotic navigation in semi-structured and unstructured operational domains.

Author 1: Amandyk Tuleshov
Author 2: Anar Adilkhan
Author 3: Moldir Kuatova
Author 4: Gaukhar Seidaliyeva

Keywords: LiDAR SLAM; autonomous navigation; obstacle avoidance; path planning; mobile robots; real-time mapping; 3D point cloud processing; loop closure optimization; sensor fusion; robotic perception

PDF

Paper 26: Simulation Study on the Proposed Multi-Agent Backdoor Detection System

Abstract: The proposed multi-layered backdoor detection system was evaluated across 10 diverse scenarios, including benign tasks, keyword-triggered attacks, semantic backdoors, and distributed multi-agent attacks. The simulation comprised 10 scenarios in total (5 attack, 5 benign), five detection mechanisms, and a 3-agent pipeline with a dedicated auditor. All experiments executed successfully with comprehensive logging and tracing enabled. The system achieved perfect detection with zero false positives. The simulation experiments validate the effectiveness of the multi-layered defense architecture for detecting distributed backdoors in multi-agent LLM systems. These results demonstrate that architectural security approaches, treating multi-agent systems as distributed computing environments with Byzantine fault tolerance, can provide robust protection against sophisticated backdoor attacks without requiring model-level guarantees or training data access.

Author 1: Kohei Arai

Keywords: Multi-layered backdoor detection system; keyword-triggered attack; semantic backdoor; distributed multi-agent attack; multi-agent LLM; Byzantine fault tolerance

PDF

Paper 27: Traffic Sign Classification Under Varying Lighting Conditions in the Philippines Using Transfer Learning with ResNet50 and Zero-DCE

Abstract: This study presents a multi-stage transfer learning approach for improving traffic sign recognition performance under both normal and low-light conditions, addressing the gap between existing datasets and the real-world road environments of the Philippines, where poor lighting, faded signs, and unstructured roads are common. A curated local dataset of 7 commonly encountered traffic sign classes comprising approximately 5,000 manually localized images was constructed and split into training, validation, and test sets (70–10–20 ratio). Five model configurations were developed and compared: a VGG-inspired baseline trained from scratch, a standard ResNet50 transfer learning model, a multiphase ResNet50 model pretrained on the GTSRB dataset, and two corresponding variants enhanced using Zero-DCE low-light preprocessing. The baseline achieved 92.17% accuracy, while the standard ResNet50 models performed similarly with and without Zero-DCE (92.10–92.45%). The multiphase ResNet50 significantly improved accuracy to 96.43% by leveraging domain-aligned pretraining, and the highest performance was achieved by its Zero-DCE-enhanced counterpart at 98.21%, showing more balanced metrics and improved recognition stability. These results indicate that low-light enhancement alone does not guarantee better performance, but becomes highly effective when paired with a feature extractor already specialized in traffic sign features. Overall, the proposed multiphase, Zero-DCE–assisted pipeline provides a strong and scalable solution for traffic sign recognition in low-visibility Philippine conditions, with potential applications in ADAS and autonomous driving systems.
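Zero-DCE brightens images by iteratively applying a learned quadratic curve, LE(I) = I + α·I·(1 − I), to each pixel; in the full method α is a per-pixel map predicted by a small network. A single-pixel sketch with a fixed α, for illustration only:

```python
def enhance(pixel, alpha=0.6, iterations=4):
    """Apply the Zero-DCE light-enhancement curve LE(I) = I + a*I*(1 - I)
    iteratively to one normalized pixel value in [0, 1]. Zero-DCE proper
    predicts a per-pixel alpha map with a CNN; a scalar is used here."""
    x = pixel
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return x

print(round(enhance(0.1), 3))  # a dark pixel is lifted substantially
print(round(enhance(0.9), 3))  # bright pixels barely change (curve is flat near 1)
```

Because the curve fixes 0 and 1 and is monotonic, the enhanced image stays in range and preserves pixel ordering, which is why such preprocessing can feed a downstream classifier like the ResNet50 pipeline without retraining for a new value range.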

Author 1: John Paul Q. Tomas
Author 2: Carlo Miguel P. Legaspi
Author 3: Karl Anthony S. Dalangin
Author 4: Gabriel Paul Q. Lim

Keywords: Traffic Sign Recognition (TSR); Traffic Sign Classification (TSC); Advanced Driver-Assistance System (ADAS)

PDF

Paper 28: Evaluating ChatGPT for Grading Programming Assignments: Effectiveness, Fairness, and Student Perceptions

Abstract: This study investigates ChatGPT as an automated grading tool for programming assignments in higher education. Three datasets comprising Python, C++, and Java assignments were each graded three times by ChatGPT and compared with faculty evaluations. Results show that ChatGPT achieves high grading accuracy, closely aligning with faculty scores and demonstrating statistically significant correlations. Statistical analyses using the Kolmogorov–Smirnov test, paired t-test, and Wilcoxon signed-rank test confirm overall agreement, although ChatGPT tends to apply stricter grading criteria. High intraclass correlation coefficients further indicate strong reliability and consistency across repeated grading attempts. The study highlights the critical role of well-defined rubrics in improving grading alignment and proposes an Instructor–AI Collaborative Rubric Development framework to support effective AI integration in assessment. A survey of 158 students indicates increased satisfaction and trust following disclosure of AI-assisted grading, although some still prefer human evaluation. Overall, the findings provide strong evidence that ChatGPT is a reliable and consistent grading tool, demonstrating close alignment with faculty evaluations and high reproducibility across attempts. However, its effectiveness depends critically on well-defined rubrics and requires human oversight to mitigate strictness, ensure fairness, and account for contextual nuances. These results strongly support a hybrid AI–human grading approach, grounded in transparent rubric design and reinforced by appropriate ethical safeguards.

Author 1: Abedallah Zaid Abualkishik
Author 2: Sherzod Turaev
Author 3: Ali A. Alwan
Author 4: Mohamed Elhoseny
Author 5: Mohsin Murtaza

Keywords: AI-assisted grading; ChatGPT; automated grading; programming assignments; higher education; grading reliability; rubric-based evaluation

PDF

Paper 29: Reinforcement Learning-Based Adaptive Penetration Testing Framework for Wireless Communication

Abstract: Wireless Fidelity (Wi-Fi) technology is widely used in Internet of Things (IoT) environments, and the importance of security assessment has increased accordingly. Current Wi-Fi security assessment relies on manually operated tools, and many methods suffer from a lack of automation. In this study, we propose a method for adaptive Wi-Fi penetration testing in which a reinforcement learning (RL) agent interacts with the environment by choosing actions based on the current state to maximize the total reward received. We model the assessment as a tabular Q-learning agent interacting with the Wi-Fi environment. The action space consists of denial-of-service attacks, while the environment state vector includes network parameters and indicators of attack success, which together contribute to the reward function. The experiments show that the RL agent successfully finds vulnerabilities in the Wi-Fi Protected Access 2 (WPA2) and Wi-Fi Protected Access 3 (WPA3) protocols.
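Tabular Q-learning maintains a state–action value table updated from each observed reward via Q(s,a) ← Q(s,a) + α[r + γ·max Q(s′,·) − Q(s,a)]. A minimal sketch with an ε-greedy policy (the Wi-Fi state and action labels are illustrative, not the paper's actual state or action space):

```python
import random

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate
Q = {}  # (state, action) -> estimated value

def choose(state, actions):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:  # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))  # exploit

def update(state, action, reward, next_state, actions):
    """One Q-learning update from an observed transition."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Hypothetical labels: the agent attacks a visible WPA2 network and is
# rewarded when the attack indicator (e.g. client disconnection) fires.
actions = ["deauth_flood", "auth_flood", "idle"]
update("wpa2_visible", "deauth_flood", reward=1.0,
       next_state="client_disconnected", actions=actions)
print(Q[("wpa2_visible", "deauth_flood")])  # 0.1 after one rewarded step
```

Repeated over many episodes, rewarded actions accumulate higher Q-values, so the agent's policy converges toward the attack sequences that most reliably expose a vulnerability.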

Author 1: Saken Tleuberdin
Author 2: Konstantin Malakhov
Author 3: Nurlan Tashatov
Author 4: Dina Satybaldina
Author 5: Didar Yedilkhan

Keywords: Internet of things; security; penetration testing; reinforcement learning; wireless communication

PDF

Paper 30: Machine Learning-Based Web System for Predicting and Classifying Financial Incentives in the Automotive Sector

Abstract: This research presents the development of a web-based system using machine learning to predict and classify financial incentives in the automotive sector, contributing to Sustainable Development Goal 9 (Industry, Innovation and Infrastructure) and SDG 12 (Responsible Consumption and Production). The main objective was to design and implement an intelligent system that enhances decision-making regarding incentives such as exemptions (EXEM), natural gas subsidies (GNT), and tax benefits (TAX). The study employed a quantitative, applied approach with a pre-experimental design, assessing model performance through accuracy, error rate, and response time metrics. Results showed an accuracy of 93.44%, a 45.12% reduction in error rate, and an average response time of 0.13 seconds. It is concluded that the proposed system significantly improves efficiency in predicting financial incentives, positioning itself as a viable technological tool for the automotive sector and economic sustainability.

Author 1: Antony Jesus Ramirez Rivas
Author 2: Rosalynn Ornella Flores-Castañeda

Keywords: Artificial intelligence; fiscal policy; forecasting; sustainable development; automobile

PDF

Paper 31: Segmentation of Convective Initiation Based on Spatio-Temporal Feature Joint Modeling

Abstract: As a key indicator of the occurrence of severe convection, convective initiation (CI) exhibits characteristics such as fragmentation, scale heterogeneity, and susceptibility to confusion with other cloud systems in single-temporal remote sensing imagery, posing significant challenges for accurate CI detection. Traditional threshold-based methods inadequately capture spatial representations and have limited generalization capabilities, while existing deep learning approaches fail to fully utilize the temporal correlation features of the same target cloud cluster, resulting in a high false alarm rate. To address these challenges, based on the physical laws of convective development, we propose a spatiotemporal feature fusion-based CI detection model, namely Ti-UHRNet. The model integrates three core designs: integrating digital elevation model geographic information at the input layer to quantify the topographic modulation of convective development and enhance the physical consistency of features; adopting U-HRNet embedded with attention-gated feature fusion as the backbone to extract multi-scale features efficiently, filter critical information dynamically, and retain high-resolution spatial details of convective clouds; and designing a multi-head self-attention-based TransTrack module with multi-temporal inputs to capture the dynamic evolution of convective clouds within a 15-minute window, thereby distinguishing them from other cloud systems. Experimental results show that compared with several advanced 2D and 3D convolutional segmentation methods, Ti-UHRNet achieves the best performance in extracting the spatiotemporal features of rapidly developing convective cloud clusters. On the test set, it attains a probability of detection of 0.954, a false alarm rate of 0.082, and a critical success index of 0.879. Verified against ground-based radar echoes, the model enables effective early warning of severe convective weather 15–30 minutes in advance.
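The three reported scores are standard categorical verification metrics computed from hit, miss, and false-alarm counts. A short sketch (the pixel counts below are invented, chosen only so the resulting scores match the values reported in the abstract; FAR is computed here as the false-alarm ratio):

```python
def verification_scores(hits, misses, false_alarms):
    """Categorical verification metrics commonly used for CI detection."""
    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    csi = hits / (hits + misses + false_alarms) # critical success index
    return pod, far, csi

# Illustrative counts (not the paper's actual confusion counts)
pod, far, csi = verification_scores(hits=954, misses=46, false_alarms=85)
print(round(pod, 3), round(far, 3), round(csi, 3))  # 0.954 0.082 0.879
```

CSI is the strictest of the three because it penalizes both misses and false alarms in a single ratio, which is why it is the headline score in severe-weather nowcasting.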

Author 1: Runzhe Tao
Author 2: Rui Chen
Author 3: Peibei Zheng
Author 4: Zibo Hong

Keywords: Semantic segmentation; remote sensing imagery; convective initiation; spatiotemporal feature fusion

PDF

Paper 32: Immersive Educational Application Based on Unity for Learning the Quechua Language

Abstract: Learning indigenous languages such as Quechua helps preserve cultural identity by narrowing educational gaps, in line with Sustainable Development Goal 4, which promotes inclusive and quality education. The objective was to develop an immersive educational application based on Unity to improve the learning of the Quechua language. This was an applied research study with a quantitative approach and a pre-experimental design, in which a pre- and post-test was administered to a group of young people. The results demonstrated that the Unity-based immersive educational app significantly improved recognition of Quechua vocabulary (Z = -4.149, p < 0.001), increased performance on interactive activities (mean Level 2 = 15.58 vs. Level 1 = 13.17; error rate reduced from 34.17% to 22.08%), and decreased the overall error rate in language use (Z = -4.149, p < 0.001), demonstrating its effectiveness in language learning and accuracy. In conclusion, virtual reality proved to be an effective and motivating tool for learning Quechua, promoting quality education and an appreciation for Peruvian cultural heritage.

Author 1: Giancarlo Eliseo Arrieta Villarreal
Author 2: Rosalynn Ornella Flores-Castañeda

Keywords: Quechua; computer application; information technology (software)

PDF

Paper 33: Weight Trajectory Prediction in Precision Livestock Farming Using Machine Learning: A Comparative Approach

Abstract: Accurate livestock body weight prediction is a key component of precision livestock farming, as it supports herd monitoring, production management, and planning in response to the increasing global demand for meat. Existing approaches for weight prediction include age-based regression models, growth trajectory modelling, average daily gain estimation, and methods relying on morphometric measurements or image-derived features. However, many of these approaches require frequent measurements or specialized data acquisition systems, which are often costly and difficult to deploy under practical farming conditions. This study presents a comparative evaluation of data-driven models for livestock body weight trajectory prediction under low-measurement conditions. A matrix factorization approach and four ensemble-based machine learning methods, namely XGBoost, LightGBM, CatBoost, and ExtraTrees, were evaluated using a dataset of Holstein cows. Model performance was assessed using standard regression metrics, including root mean squared error, mean absolute error, and mean absolute percentage error, with five-fold cross-validation employed to ensure robustness. The results show that ensemble learning methods consistently outperform matrix factorization techniques when only a limited number of weight measurements per animal are available. More specifically, XGBoost achieves the best predictive performance when only one historical measurement per animal is available, whereas ExtraTrees provides the most accurate predictions when two or three historical measurements are available. These findings demonstrate that accurate and cost-effective livestock weight prediction can be achieved from sparse routine body weight records, without relying on dense longitudinal sampling, image-based systems, or extensive morphometric measurements, thereby supporting the practical deployment of predictive tools in precision livestock farming systems.
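The three evaluation metrics reduce to simple formulas over true and predicted weights. A short sketch with invented cow weights (not from the Holstein dataset):

```python
def regression_metrics(y_true, y_pred):
    """RMSE, MAE and MAPE, the metrics used to compare the models."""
    n = len(y_true)
    errs = [p - t for t, p in zip(y_true, y_pred)]
    rmse = (sum(e * e for e in errs) / n) ** 0.5           # root mean squared error
    mae = sum(abs(e) for e in errs) / n                    # mean absolute error
    mape = 100 * sum(abs(e) / t for t, e in zip(y_true, errs)) / n  # percent error
    return rmse, mae, mape

# Hypothetical body weights (kg) for three cows
true_kg = [620.0, 580.0, 640.0]
pred_kg = [610.0, 590.0, 636.0]
rmse, mae, mape = regression_metrics(true_kg, pred_kg)
print(round(rmse, 2), round(mae, 2), round(mape, 2))  # 8.49 8.0 1.32
```

RMSE weights large errors more heavily than MAE, while MAPE normalizes by the animal's actual weight, which matters when herds mix heifers and mature cows of very different sizes.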

Author 1: Moad Hakem
Author 2: Zakaria Boulouard
Author 3: Mohamed Kissi

Keywords: Machine learning; data science; precision livestock farming; weight trajectory prediction; ensemble learning

PDF

Paper 34: Prickly Pear Disease Classification Using Deep Convolutional Neural Networks: A Case Study

Abstract: Prickly pear (Opuntia ficus-indica) is a member of the Cactaceae family. Because of its anti-inflammatory, antioxidant, antibacterial, hypoglycemic, and neuroprotective properties, the prickly pear is a highly valued fruit. Both the fruit and its stem are utilized in value-added products. Deep learning (DL) applications are needed for prickly pear disease detection and classification. To the best of our knowledge, no previous study has investigated prickly pear disease classification using convolutional neural networks. In this study, we propose the use of the deep convolutional neural networks MobileNetV2 and DenseNet121 to classify prickly pear disease. A locally collected dataset from Tunisia was divided into two classes: healthy and cochineal. Data augmentation techniques were applied to increase the number of images. These augmented data were then fed as input into the MobileNetV2 and DenseNet121 networks. The experimental results show that MobileNetV2 achieved a precision, recall, and F1-score of 96.55% for healthy plants. For diseased plants, precision, recall, and F1-score reached 97.14%. Overall, the model obtained a classification accuracy of 96.88%. DenseNet121 achieved precision, recall, and F1-score values of 90.62%, 100%, and 95.08%, respectively, for healthy plants. For diseased plants, the precision, recall, and F1-score were 100%, 91.43%, and 95.52%, respectively, resulting in an overall classification accuracy of 95.31%. Both models demonstrate strong performance on the prickly pear dataset.

Author 1: Raghiya Elghawth
Author 2: Wafae Abbaoui
Author 3: Soumia Ziti

Keywords: Plant disease classification; data augmentation; deep learning; prickly pear disease; MobileNetV2; DenseNet121

PDF
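The augmentation step the abstract above relies on can be sketched without any deep learning framework; the geometric flips/rotations and brightness jitter below are common generic choices and purely illustrative, not the authors' exact pipeline.

```python
import numpy as np

def augment(img, rng):
    """Generate simple variants of one image array of shape (H, W, C)."""
    out = [img,
           np.flip(img, axis=1),             # horizontal flip
           np.flip(img, axis=0),             # vertical flip
           np.rot90(img, k=1, axes=(0, 1)),  # 90-degree rotation
           np.rot90(img, k=3, axes=(0, 1))]  # 270-degree rotation
    # Brightness jitter as a mild photometric augmentation.
    jitter = np.clip(img.astype(float) * rng.uniform(0.8, 1.2), 0, 255)
    out.append(jitter.astype(img.dtype))
    return out

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in photo
variants = augment(image, rng)
print(len(variants))  # 6 variants per source image
```

Each source image yields six training samples here; in practice the augmented set is what gets fed to the CNN backbones.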

Paper 35: CEEMDAN–SSA–RWKV–SMA: A Robust Hybrid Model for Long-Term Wind Speed Forecasting in India

Abstract: Reliable long-term wind speed forecasting is a critical requirement for the strategic deployment and operational stability of wind energy systems, particularly in meteorologically diverse regions like India. This study proposes a novel hybrid framework, CEEMDAN–SSA–RWKV–SMA, which integrates advanced signal decomposition, deep sequence modeling, and metaheuristic optimization. Initially, the raw wind speed time series is decomposed using Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) to extract multi-scale Intrinsic Mode Functions (IMFs). To enhance signal clarity and reduce dimensionality, each IMF is further processed using Singular Spectrum Analysis (SSA). The resulting denoised and trend-extracted components are modeled using the Receptance Weighted Key Value (RWKV) neural network, a recent Transformer-RNN hybrid designed to capture long-range temporal dependencies efficiently. To optimize RWKV hyperparameters and SSA windowing parameters, the Slime Mould Algorithm (SMA) is employed as a global metaheuristic optimizer. Empirical evaluations on multi-regional Indian wind datasets demonstrate that the proposed framework consistently outperforms conventional models such as LSTM, Transformer, and CEEMDAN-LSTM in terms of MAE, RMSE, and MAPE. The proposed CEEMDAN-SSA-RWKV-SMA framework is a reliable forecasting strategy for improving wind energy integration in non-stationary and resource-critical environments.

Author 1: S. Vidya

Keywords: Wind speed forecasting; Complete Ensemble Empirical Mode Decomposition with Adaptive Noise; Singular Spectrum Analysis; RWKV neural network; Slime Mould Algorithm; Renewable energy integration

PDF
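Of the framework's stages above, Singular Spectrum Analysis is easy to sketch from first principles: embed the series in a Hankel trajectory matrix, take an SVD, and diagonal-average the leading rank-one components back into series. The window length, component count, and test signal below are arbitrary illustrations, not the paper's settings.

```python
import numpy as np

def ssa_components(series, window, n_components):
    """Basic SSA: embed, SVD, then diagonal-average each component back."""
    N = len(series)
    K = N - window + 1
    # Trajectory (Hankel) matrix: each column is a lagged window of the series.
    X = np.column_stack([series[i:i + window] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(n_components):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # Diagonal averaging (Hankelization) recovers a series from Xk:
        # average all entries with row + col == i.
        comp = np.array([np.mean(Xk[::-1].diagonal(i - window + 1))
                         for i in range(N)])
        comps.append(comp)
    return np.array(comps)

t = np.linspace(0, 10, 500)
signal = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.default_rng(0).normal(size=500)
components = ssa_components(signal, window=60, n_components=2)
denoised = components.sum(axis=0)   # a sinusoid lives in two SSA components
```

In the paper's framework this denoising/trend-extraction is applied to each CEEMDAN mode before the RWKV network models it.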

Paper 36: Scaled Agile Process Improvement Recommendations with CMMI 2-Based Agile Scaling Model: A Case Study of the Indonesia National Single Window Agency

Abstract: The Indonesia National Single Window System (SINSW) is a platform developed by the Indonesia National Single Window Agency (LNSW) using Agile methodology, specifically the Scrum framework. Several challenges were identified during its development, including deviations from the Scrum Guide and the absence of formal, regular events necessary for effective team coordination and alignment. These issues revealed gaps in Scrum implementation and broader difficulties associated with scaling Agile practices beyond the team level. Therefore, this study applied a scaling Agile model based on Capability Maturity Model Integration (CMMI) 2 to evaluate the agency's existing Agile process and recommend targeted improvements to Agile practices. The evaluation involved qualitative interviews with key stakeholders, including the Project Management Officer, System Analyst, and developers. The interviews were subsequently quantified using the Key Process Area (KPA) rating framework. The findings led to actionable recommendations to optimize the Agile process, improve team collaboration, and support SINSW’s success.

Author 1: I Made Aditya Pradnyadipa Mustika
Author 2: Betty Purwandari
Author 3: Alex Ferdinansyah

Keywords: Scaling Agile; CMMI; Scrum; KPA rating; software engineering

PDF

Paper 37: Explainable Deep Learning for Automated Skin Cancer Detection Using Advanced CNN Architectures on Dermoscopic Images

Abstract: Skin cancer is a considerable health issue worldwide, occurring when pigment cells turn malignant. Diagnosing skin lesions is difficult for dermatologists, however, because many lesions share similar characteristics, and early detection is essential because it significantly increases treatment success and survival rates. In the past few decades, the rapid development of artificial intelligence has made it possible to build automated diagnostic systems based on large histopathology-validated image datasets. In this study, we introduce a deep learning solution for multi-class skin cancer classification based on state-of-the-art convolutional neural networks (CNNs) applied to the HAM10000+ISC image dataset. We used pre-trained CNN backbones, InceptionV3, DenseNet121, ResNet50, and VGG16, initialized with ImageNet weights, for feature extraction, fine-tuning, and evaluation. Among the models, InceptionV3 achieved the highest accuracy of 76% and an ROC score of 0.967. To enhance interpretability, we used explainable AI (XAI) methods, Grad-CAM, Grad-CAM++, and class-wise attention maps, to examine both correctly and incorrectly classified images. The experiments demonstrate that the proposed system offers not only high classification accuracy but also the ability to explain and visualize its decisions, a significant advantage for dermatologists when diagnosing skin cancer early and correctly.

Author 1: Adel Rajab

Keywords: Deep learning models; skin cancer detection; image processing; Grad-CAM

PDF
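Grad-CAM itself, referenced in the abstract above, reduces to a few lines once a model's conv-layer activations and class-score gradients are in hand: the channel weights are the global-average-pooled gradients, and the heatmap is the ReLU of the weighted activation sum. The arrays below are random stand-ins for real network tensors, purely to show the arithmetic.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the gradients
    of the class score w.r.t. those activations. Both have shape (H, W, K)."""
    weights = gradients.mean(axis=(0, 1))                      # (K,) channel weights
    cam = np.tensordot(activations, weights, axes=([2], [0]))  # (H, W) weighted sum
    cam = np.maximum(cam, 0)                                   # ReLU keeps positive evidence
    return cam / cam.max() if cam.max() > 0 else cam           # normalise to [0, 1]

rng = np.random.default_rng(0)
acts = rng.random((7, 7, 64))        # stand-in for a conv feature map
grads = rng.normal(size=(7, 7, 64))  # stand-in for backpropagated gradients
heatmap = grad_cam(acts, grads)
```

The resulting low-resolution map is upsampled to image size and overlaid on the lesion photo to show which regions drove the prediction.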

Paper 38: Energy-Efficient Cluster Head Rotation in WSNs Using Bee Colony Optimization

Abstract: In this study, we present a method designed to improve energy efficiency and balance the workload across Wireless Sensor Networks (WSNs). Our approach dynamically selects and rotates cluster heads (CHs) based on factors such as remaining energy, node mobility, distance to the base station, and data processing needs. By focusing on nodes with more energy and lower mobility, we aim to extend the network's operational life and prevent any single node from being overburdened. At the heart of our method is the Artificial Bee Colony (ABC) optimization algorithm, which mimics the foraging behavior of bees. This algorithm helps to identify the best nodes to act as CHs, balancing the energy load across the network and maintaining strong connectivity within clusters. Our simulations show that this method outperforms existing protocols like FEEC and PSAP-WSN, particularly when it comes to distributing energy more evenly and extending the network's lifespan. By continuously rotating the CHs, we ensure that energy consumption is spread out, leading to improved network performance and sustainability. The results indicate that this dynamic and adaptive approach is highly effective in maintaining a balanced energy distribution, making it a robust solution for energy management in WSNs.

Author 1: Azamuddin Bin Ab Rahman
Author 2: Sakib Iqram Hamim

Keywords: Energy efficiency; load balancing; cluster head rotation; bee colony optimization; metaheuristic clustering; Wireless Sensor Network

PDF
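The node-ranking idea behind the cluster head selection described above can be sketched as a weighted fitness over residual energy, mobility, and base-station distance; the full Artificial Bee Colony search loop is omitted here, and the weights and units below are hypothetical, not the paper's parameters.

```python
import numpy as np

def select_cluster_heads(energy, mobility, dist_to_bs, k, w=(0.5, 0.25, 0.25)):
    """Rank nodes by a weighted fitness favouring high residual energy,
    low mobility, and short distance to the base station; pick the top k."""
    def norm(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    fitness = (w[0] * norm(energy)
               - w[1] * norm(mobility)
               - w[2] * norm(dist_to_bs))
    return np.argsort(fitness)[::-1][:k], fitness

rng = np.random.default_rng(1)
energy = rng.uniform(0.1, 1.0, 50)    # residual energy (J), hypothetical
mobility = rng.uniform(0.0, 5.0, 50)  # node speed (m/s), hypothetical
dist = rng.uniform(10, 200, 50)       # distance to base station (m), hypothetical
heads, fit = select_cluster_heads(energy, mobility, dist, k=5)
```

In the full method, ABC explores candidate head assignments and re-evaluates this kind of fitness each rotation round so that no node stays head long enough to drain.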

Paper 39: An Improved Hybrid CURE–SNE Model for High-Dimensional Data Clustering

Abstract: Stunting remains a critical public health issue in rural communities, largely driven by inadequate nutrition, poor sanitation, and unfavorable socioeconomic conditions. This study proposes a hybrid clustering approach by integrating Clustering Using Representatives (CURE) with t-distributed Stochastic Neighbor Embedding (t-SNE) to analyze stunting prevalence and support the optimization of child nutrition strategies. Secondary data were collected from publicly accessible national health and nutrition repositories, comprising 500 child records with multiple parameters, including anthropometric indicators, nutritional intake, maternal characteristics, environmental sanitation, and socioeconomic factors. The t-SNE algorithm was employed to reduce the high-dimensional data into a two-dimensional space while preserving neighborhood structures, followed by the application of the CURE algorithm to construct clusters that are robust to noise and outliers. Experimental results indicate that the proposed CURE–SNE approach successfully formed four distinct clusters, namely C1 Very High Stunting Risk with 128 data points (25.6%), C2 High Stunting Risk with 142 data points (28.4%), C3 Moderate/Transitional Stunting Risk with 117 data points (23.4%), and C4 Low Stunting Risk with 113 data points (22.6%). Cluster quality evaluation demonstrates that the hybrid CURE–SNE method achieves a higher Silhouette Score and a lower Davies-Bouldin Index compared to the CURE-only approach, indicating improved cluster separation and compactness. These findings confirm that combining dimensionality reduction with representative-based clustering enhances the interpretability of stunting patterns and provides a reliable analytical foundation for designing targeted and data-driven child nutrition interventions in rural settings.

Author 1: Dewi Sartika Br Ginting
Author 2: T. H. F. Harumy
Author 3: Ade Sarah Huzaifah
Author 4: Ivanny Putri Marianto

Keywords: Hybrid clustering; CURE-SNE; stunting; Davies-Bouldin index; silhouette score

PDF
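A minimal version of the reduce-then-cluster pipeline described above, with scikit-learn's AgglomerativeClustering standing in for CURE (which scikit-learn does not provide) and synthetic records in place of the health data; cluster counts and dimensions are illustrative only.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score, davies_bouldin_score

rng = np.random.default_rng(0)
# Synthetic stand-in for 200 child records with 10 mixed indicators,
# drawn from four loose groups.
centers = rng.normal(0, 5, (4, 10))
X = np.vstack([c + rng.normal(0, 1, (50, 10)) for c in centers])

# Reduce to 2-D while preserving neighbourhood structure.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Representative-based clustering stand-in applied in the embedded space.
labels = AgglomerativeClustering(n_clusters=4).fit_predict(emb)

print("silhouette:", silhouette_score(emb, labels))
print("davies-bouldin:", davies_bouldin_score(emb, labels))
```

The two printed indices are exactly the cluster-quality measures the paper uses to compare CURE–SNE against CURE alone (higher silhouette, lower Davies-Bouldin is better).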

Paper 40: Pedagogical Mediation Through Prompt Engineering: An Expert Evaluation of AI-Generated Feedback on Islamic-Integrated EFL Argumentative Writing

Abstract: This research tested whether prompt engineering could act as a form of pedagogical mediation to enhance the quality of AI-generated feedback on EFL students' argumentative essays in an Islamic education system. Eight expert raters (four primary raters and four inter-raters) scored the AI-generated feedback produced by Claude Sonnet 4 in response to 12 systematically developed prompts. Raters scored the feedback across four areas of evaluation: pedagogy, linguistics, Islamic content, and AI reliability. The highest-rated configuration was Prompt 4 (feedback-only sequencing, English Lecturer persona), with a mean rating of 31.00 (out of 35) across all categories. A Friedman test showed statistically significant differences among the four evaluative categories, χ²(3) = 30.077, p < .001. Additionally, inter-rater reliabilities were high for all rater pairs (r = .89–.96). Overall, this research suggests that prompt engineering is a potentially viable method of pedagogical mediation, allowing educators to develop more culturally responsive and pedagogically relevant AI-generated feedback systems for Islamic EFL higher education settings.

Author 1: Sari Dewi Noviyanti
Author 2: Rudi Hartono
Author 3: Hendi Pratama
Author 4: Seful Bahri

Keywords: AI-generated feedback; pedagogical mediation; prompt engineering

PDF
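The Friedman test reported above (a χ² statistic with 3 degrees of freedom over four related categories) can be reproduced mechanically with SciPy; the rater scores below are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical scores: rows = 8 raters, columns = the four evaluative
# categories (pedagogy, linguistics, Islamic content, AI reliability).
scores = np.array([
    [8, 7, 6, 9],
    [9, 7, 5, 8],
    [8, 6, 6, 9],
    [7, 6, 5, 8],
    [9, 8, 6, 9],
    [8, 7, 5, 8],
    [9, 7, 6, 9],
    [8, 6, 5, 8],
])

# friedmanchisquare takes one sample per condition (here, per category).
stat, p = friedmanchisquare(*scores.T)
print(f"chi2(3) = {stat:.3f}, p = {p:.4f}")
```

Because every rater ranks the categories in nearly the same order in this toy data, the test comes out significant, mirroring the pattern the paper reports.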

Paper 41: Graph Neural Networks and Ensemble Learning for Mineral Prospectivity Mapping Using Geochemical Data

Abstract: Mineral exploration is inherently challenging because geological formations are complex and geochemical relationships are often nonlinear and spatially variable. Although artificial intelligence has recently shown strong potential in improving mineral potential mapping, many existing approaches struggle to fully capture spatial relationships within geochemical data. In this study, an integrated framework that combines Graph Neural Networks (GNNs), ensemble learning classifiers, and unsupervised K-means clustering was developed to analyze geochemical data from Saudi Arabia. The geochemical samples were modeled as a spatial graph, where each node represents a sampling location, and the connections between nodes reflect their geographic proximity. This structure allows the GNN to better capture spatial relationships within the data, while ensemble models serve as baseline methods for performance comparison. K-means clustering was further used to examine spatial patterns and highlight potential mineralization zones. The proposed approach achieved strong predictive results, with classification accuracies reaching 85.08% for lithium and 90.62% for tungsten, alongside comparable performance for other elements. Overall, these results demonstrate the value of incorporating spatially-aware artificial intelligence techniques to support more accurate mineral exploration and more informed resource management.

Author 1: Kholod M. Alzubidi
Author 2: Alaa O. Khadidos
Author 3: Adil O. Khadidos
Author 4: Haitham M. Baggazi
Author 5: Fahad M. Alharbi
Author 6: Razan Alamoudi

Keywords: Rare mineral mapping; geochemical data; geospatial analysis; GNN; ensemble learning

PDF
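The spatial-graph construction step described above, connecting each sampling location to its geographic neighbours, is straightforward with scikit-learn; the coordinates and neighbour count below are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
# Hypothetical sampling locations (e.g. longitude, latitude) for 100 samples.
coords = rng.uniform(0, 1, (100, 2))

# Connect each sample to its 5 nearest neighbours; the sparse adjacency
# matrix is the edge structure a GNN would message-pass over.
adj = kneighbors_graph(coords, n_neighbors=5, mode="connectivity")

# Edge list in (source, target) form, as most GNN libraries expect.
src, dst = adj.nonzero()
edge_index = np.vstack([src, dst])
print(edge_index.shape)  # (2, 500): 100 nodes x 5 neighbours each
```

Node features (the geochemical assay values) would then be attached to each of the 100 nodes before training the GNN classifier.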

Paper 42: Emotion Prediction in Performance-Critical Tasks: A Systematic Review of Physiological Signals and Deep Learning Models

Abstract: Emotions strongly influence how people think, decide, and perform, making reliable emotion forecasting essential in performance-critical environments. Traditional methods such as facial expressions, speech, and self-reports often lack reliability and continuity. Physiological signals offer a more objective alternative, providing continuous indicators of emotional states, while deep learning models are well-suited to capturing their non-linear temporal characteristics. Unlike prior reviews that primarily focus on general emotion recognition or isolated model performance, this study specifically examines emotion prediction in performance-critical contexts through the combined analysis of physiological signals, deep learning architectures, and task-driven requirements. This systematic review synthesizes recent studies on emotion prediction using physiological data and deep learning models. Following the PRISMA framework, relevant studies published between 2021 and 2025 were identified from the Dimensions AI and Web of Science databases, resulting in 25 eligible articles. The review examines trends in physiological modalities, deep learning architectures, emotion representations, and evaluation practices. Beyond summarizing these trends, the review provides a structured comparative synthesis that organizes existing studies according to physiological signal modality, model architecture, performance-critical task context, emotion representation, and evaluation practices, thereby offering methodological guidance for future emotion prediction system design. Findings show that EEG is the most widely used modality, frequently combined with peripheral signals such as heart rate variability, electrodermal activity, and electrocardiography in multimodal systems. Hybrid architectures, particularly CNN–LSTM models, dominate current approaches, although attention-based and lightweight models are gaining traction. Key challenges remain, including inter-subject variability, limited real-world validity, inconsistent emotion modeling, and non-standardized evaluation. This review highlights current gaps and offers guidance for developing more robust emotion prediction systems in high-performance contexts.

Author 1: Norhawani Ahmad Teridi
Author 2: Tengku Mohd Tengku Sembok
Author 3: Muhammad Fairuz Abd Rauf
Author 4: Nurhafizah Moziyana Mohd Yusop
Author 5: Zuraini Zainol
Author 6: Shahrulfadly Rustam
Author 7: Azlinda Abdul Aziz
Author 8: Hazri Haidar
Author 9: Mohd Fahmi Mohamad Amran

Keywords: Emotion prediction; physiological signals; deep learning; multimodal fusion; performance-critical tasks

PDF

Paper 43: A Novel Latent-Representation-Based Algorithm with Dynamic Obstacle Avoidance (LADy) for Autonomous Mobile Robots (AMRs)

Abstract: Conventional path-planning algorithms are often tailored to industrial and warehouse settings, making it necessary to integrate two or more planners, which increases memory and computation requirements. To overcome this limitation, this study develops a novel algorithm, semantic cost encoding-based A* with dynamic obstacle avoidance, designed specifically for Autonomous Mobile Robots (AMRs) in warehouses. The proposed algorithm is benchmarked in Matplotlib against A* and RRT*, showing 60.33% higher memory efficiency and 60.36% greater efficiency in the number of computed nodes than RRT* while matching A* in a static environment. In a dynamic environment it is benchmarked against D* Lite, as well as against a hybrid algorithm representing a simplified interpretation of commercial AMR path-planning approaches, and proves 45.30% more memory-efficient than the hybrid algorithm, making it preferable to D* Lite for real-time implementation on AMRs.

Author 1: Harishma Prakash
Author 2: Prasina A
Author 3: Samuthira Pandi V
Author 4: Naregalkar Akshaykumar Rangnath
Author 5: Rajalingam A
Author 6: Sundar R

Keywords: Path planning; semantic encoding; Autonomous Mobile Robots (AMRs); warehouse navigation; dynamic environment

PDF
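For reference, the classical A* baseline that the proposed planner above is benchmarked against can be written compactly for a 4-connected occupancy grid; the grid, heuristic choice, and unit step costs below are a generic textbook setup, not the paper's implementation.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = astar(grid, (0, 0), (3, 3))
```

The paper's contribution is to fold semantic costs into this node expansion and add dynamic obstacle handling, rather than pairing plain A* with a second planner.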

Paper 44: Design and Experimental Evaluation of Adaptive Load Balancing Strategies in Software-Defined Networks Using Mininet

Abstract: Software-Defined Networking (SDN) introduces centralized control and programmability, enabling more flexible and efficient network management than traditional architectures. Load balancing is a key SDN application that improves resource utilization, reduces latency, and enhances service reliability. However, implementing SDN-based load balancers involves several challenges, such as controller overhead, scalability issues, dynamic traffic handling, and protocol integration. This study investigates these challenges and presents practical approaches for implementing SDN load balancers using the Mininet emulation environment. Different load-balancing algorithms are implemented and evaluated, highlighting the trade-offs between static and dynamic techniques, and the traffic generation tools supported by Mininet are examined. Furthermore, the performance of various SDN controllers, including Ryu, POX, OpenDaylight (ODL), ONOS, and Floodlight, is assessed using metrics such as throughput and round-trip time, and key performance evaluation metrics and their computation methods are discussed. The goal of this research is to examine the challenges of implementing load balancing in SDN and to explore effective methods for designing and evaluating SDN-based load-balancing solutions in a Mininet test environment.

Author 1: M Shona
Author 2: Rinki Sharma

Keywords: Load balancing; control plane; data plane; OpenFlow; Mininet

PDF
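The static-versus-dynamic trade-off the study above evaluates can be seen in miniature by contrasting a round-robin policy with least-connections selection; the host names are placeholders for Mininet hosts, and the connection counts are made up.

```python
from itertools import count

class RoundRobin:
    """Static policy: cycle through servers regardless of their load."""
    def __init__(self, servers):
        self.servers = servers
        self._i = count()

    def pick(self, active=None):
        return self.servers[next(self._i) % len(self.servers)]

class LeastConnections:
    """Dynamic policy: pick the server with the fewest active connections."""
    def __init__(self, servers):
        self.servers = servers

    def pick(self, active):
        return min(self.servers, key=lambda s: active[s])

servers = ["h1", "h2", "h3"]
rr = RoundRobin(servers)
lc = LeastConnections(servers)

order = [rr.pick() for _ in range(4)]           # cycles h1, h2, h3, h1
choice = lc.pick({"h1": 7, "h2": 2, "h3": 5})   # picks the least-loaded host
```

Round-robin needs no state from the data plane, while least-connections needs the controller to track flows per server, which is precisely the controller-overhead trade-off the paper discusses.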

Paper 45: Machine Learning-Based Air Quality Monitoring in Indian Metropolitan Cities: A Comparative Study

Abstract: Clean air is essential to a healthy ecosystem, yet air pollution is becoming a critical global concern for both the environment and human health. The presence of harmful pollutants such as PM2.5, PM10, CO2, NO2, SO2, and O3 continuously degrades air quality and influences climatic conditions. This study presents a comprehensive comparison of traditional and advanced ensemble-based machine learning models for air quality monitoring. Data were collected from major metropolitan cities of India from 2015 to 2023, covering three phases: pre-COVID, during COVID-19, and post-COVID. After pre-processing, a baseline supervised machine learning method, Support Vector Machine (SVM), was applied for its ease of implementation. Ensemble-based techniques, Gradient Boosting Machine (GBM) and Extreme Gradient Boosting (XGBoost), which combine weak learners, were then evaluated to obtain better predictive performance. The systematic analysis is assessed using several performance metrics: R², Mean Squared Error, Root Mean Squared Error, and Mean Absolute Error. The results indicate that XGBoost achieves superior predictive accuracy and robustness across most cities and time periods, and better captures spatial and temporal variability in performance. The key findings highlight the importance of location-specific modelling strategies and demonstrate the potential of ensemble learning models for reliable urban air quality monitoring.

Author 1: Khushbu Chauhan
Author 2: Kruti Sutaria

Keywords: AQI; COVID; SVM; Gradient Boosting Machine (GBM); Extreme Gradient Boosting (XGBoost)

PDF
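The four evaluation metrics listed in the abstract above compute as follows (scikit-learn versions shown; the concentration values are made up purely for illustration):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Hypothetical observed vs. predicted PM2.5 concentrations (ug/m3).
y_true = np.array([45.0, 80.0, 120.0, 60.0, 150.0, 95.0])
y_pred = np.array([50.0, 75.0, 110.0, 65.0, 140.0, 100.0])

mse = mean_squared_error(y_true, y_pred)   # mean of squared errors
rmse = np.sqrt(mse)                        # same units as the pollutant
mae = mean_absolute_error(y_true, y_pred)  # mean of absolute errors
r2 = r2_score(y_true, y_pred)              # variance explained, max 1.0
print(f"MSE={mse:.1f}  RMSE={rmse:.2f}  MAE={mae:.2f}  R2={r2:.3f}")
```

RMSE and MAE stay in the pollutant's own units, which makes them the most interpretable of the four when comparing models across cities.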

Paper 46: Cloud-Based Replication Models Using AI Techniques for Enhanced Data Management

Abstract: Elastic cloud infrastructure relies on dynamic replication mechanisms to maintain service availability and performance under fluctuating, non-stationary workloads. However, conventional threshold-based and static replication strategies frequently fail to maintain latency stability and Service Level Agreement (SLA) compliance in highly dynamic environments characterised by bursty, peak-stress traffic. This study introduces a Q-learning–based adaptive replication framework that formulates replication control as a sequential decision-making problem. The system is modelled as a Markov Decision Process (MDP), where replication adjustments are selected to maximise cumulative discounted reward, integrating latency minimisation, SLA violation penalties, and replica cost regularisation within a unified optimisation objective. A controlled cloud simulation environment was developed to emulate phased stochastic workload patterns, including normal, burst, sustained peak, and recovery intervals. The reinforcement learning controller was trained over 5000 episodes and subsequently evaluated under fixed-policy conditions against a reaction-delayed rule-based baseline controller. Experimental results demonstrate substantial improvements in performance stability. The proposed learning-based controller achieves a significant reduction in average latency, strong suppression of 95th percentile tail latency, and complete elimination of SLA violations under dynamic workload conditions. Unlike reactive threshold-based mechanisms, the learned policy anticipates workload transitions and proactively adjusts replication levels through long-term reward optimisation. These findings confirm that learning-driven replication control provides a structurally superior paradigm for latency-sensitive elastic cloud systems. By embedding SLA awareness directly into the reward formulation, replication management is transformed from a static configuration task into an adaptive, intelligent control process.

Author 1: Moneef M. Jazzar
Author 2: Aws I. Abueid

Keywords: Q-learning; elastic cloud replication; SLA-aware control; latency optimization; adaptive replication; reinforcement learning

PDF
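A toy version of the tabular Q-learning loop described above, with a hypothetical latency/SLA/cost reward shaped like the paper's objective; the state and action spaces are deliberately tiny and the workload and latency models are invented for illustration, not the paper's simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

# States: discretised load level (0 = low .. 4 = peak).
# Actions: remove a replica, hold, or add a replica.
n_states, actions = 5, (-1, 0, 1)
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.1
replicas = 2

def step(load, replicas):
    """Hypothetical environment: reward penalises latency, SLA breaches,
    and replica cost, mirroring the unified objective in the abstract."""
    latency = load + 1 - 0.5 * replicas              # toy latency model
    reward = -latency - (5.0 if latency > 3 else 0.0) - 0.2 * replicas
    next_load = rng.integers(0, n_states)            # stochastic workload phase
    return reward, next_load

load = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    a = rng.integers(3) if rng.random() < eps else int(Q[load].argmax())
    replicas = int(np.clip(replicas + actions[a], 1, 10))
    r, next_load = step(load, replicas)
    # Q-learning update: bootstrap on the greedy value of the next state.
    Q[load, a] += alpha * (r + gamma * Q[next_load].max() - Q[load, a])
    load = next_load
```

The learned table maps each load level to a preferred replication adjustment, which is the "adaptive, intelligent control" replacing fixed thresholds.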

Paper 47: Gradient-Guided Data Augmentation with mBERT and MuRIL for Malayalam Offensive Language Detection

Abstract: The widespread adoption of social media platforms has facilitated the spread of offensive content, particularly in native languages, where users express themselves more freely. Automated offensive language detection in low-resource languages such as Malayalam faces significant challenges due to severe class imbalance, where non-offensive samples substantially outnumber offensive instances, resulting in biased model performance and diminished detection accuracy for underrepresented classes. This study addresses the critical challenge of class imbalance in Malayalam offensive language identification through a comprehensive data augmentation framework. We propose a novel gradient-guided augmentation technique specifically designed to mitigate minority class imbalance by selectively enhancing underrepresented class samples through the identification and synthesis of challenging instances that improve model robustness. The effectiveness of various augmentation strategies is systematically evaluated, including back-translation, paraphrasing, and NLPAUG techniques, integrated with mBERT and MuRIL models. Our gradient-guided augmentation approach demonstrates substantial performance improvements, achieving a notable 0.09 increase in recall over the baseline model's 0.74, while preserving overall model performance on imbalanced Malayalam offensive language datasets. The proposed methodology offers a promising solution for addressing class imbalance challenges in offensive content detection for low-resource languages. The results highlight that integrating augmentation with explainability not only improves classification performance but also helps overcome limitations of previous methods.

Author 1: Munawwar K V
Author 2: Nandhini K

Keywords: Offensive comment detection; gradient guided augmentation; NLPAUG; back translation; paraphrasing with MultiIndicParaphraseGeneration

PDF

Paper 48: Predictive Modeling of Lung Cancer Risk in Workers Using Dropouts Meet Multiple Additive Regression Trees

Abstract: Lung cancer remains a leading cause of cancer mortality, and preventable occupational and environmental exposures may compound risk in working-age populations. This study developed and compared predictive models for lung cancer risk using a publicly available tabular dataset (Kaggle; n = 1,000) containing demographic, lifestyle, symptom, and exposure-related variables. After standard preprocessing and an 80/20 train-test split, a Classification and Regression Tree (CART), a dropout-regularized gradient-boosted tree model (DART), k-nearest neighbors (KNN), and Gaussian Naïve Bayes were trained and evaluated using accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC). CART achieved the highest accuracy (84.5%), while KNN achieved the highest precision (78.7%). DART produced the best F1-score (77.3%) and the highest AUC (0.801), suggesting a favorable balance between sensitivity and specificity when accounting for class imbalance. Feature-importance patterns in the final DART model highlighted occupational hazards, smoking habits, genetic predisposition, and air pollution exposure as leading contributors to model-based risk stratification in occupational settings. These findings suggest that regularized ensemble tree methods can support stable risk stratification and may complement screening by prioritizing individuals who warrant closer evaluation. The analysis is limited by the modest sample size and reliance on a single public dataset; external validation in occupational cohorts with measured exposure histories is required before practical implementation.

Author 1: Haewon Byeon

Keywords: Lung cancer; predictive modeling; occupational exposure; gradient boosting; risk stratification

PDF

Paper 49: A Hybrid Analysis Using Adaptive and Self-Adjusting Boosting and Logistic Regression

Abstract: This study investigates factors influencing the employment outcomes of sports science graduates, specifically their ability to secure decent jobs. Utilizing data from the Graduates Occupational Mobility Survey (GOMS) from 2015 to 2019, the study analyzed a sample of 1,019 sports science graduates aged 19 to 34. Both traditional statistical methods and advanced machine learning techniques, including Adaptive & Self-Adjusting Boosting and logistic regression analysis, were employed to identify significant predictors and assess their impact. Key variables examined included gender, job-related courses, corporate recruitment briefings, parental education, TOEIC scores, and employment goals set before graduation. Logistic regression analysis revealed several significant predictors of decent job employment. Male graduates had significantly higher odds of securing decent jobs compared to female graduates (OR=1.45, 95% CI: 1.10-1.90, p=0.02). The number of job-related courses taken (OR=1.30, 95% CI: 1.05-1.60, p=0.04) and participation in corporate recruitment briefings (OR=1.25, 95% CI: 1.02-1.53, p=0.03) were positively associated with decent job employment. Parental education (OR=1.15, 95% CI: 1.01-1.30, p=0.05) and TOEIC scores (OR=1.10, 95% CI: 1.00-1.22, p=0.06) also showed modest but significant effects. Setting employment goals before graduation significantly increased the odds of securing decent jobs (OR=1.20, 95% CI: 1.05-1.37, p=0.03). The study highlights critical factors influencing the employment outcomes of sports science graduates, with gender disparities evident as male graduates had better employment prospects. Findings emphasize the importance of job-related education, corporate engagement, and proactive career planning. Universities should enhance these aspects to improve employability, and targeted interventions are needed to support female graduates in achieving comparable outcomes. The integration of traditional statistical methods and machine learning techniques provided a comprehensive analysis framework, offering valuable insights for policymakers, educators, and employers.

Author 1: Haewon Byeon

Keywords: Employment outcomes; machine learning; adaptive & self-adjusting boosting; gender disparities

PDF
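The odds ratios and confidence intervals reported above follow directly from exponentiating logistic regression coefficients. The sketch below reverse-engineers an illustrative standard error from the published interval for the gender effect (OR=1.45, CI 1.10-1.90); the coefficient and SE are reconstructions, not values from the paper.

```python
import numpy as np

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic regression coefficient."""
    return np.exp(beta), np.exp(beta - z * se), np.exp(beta + z * se)

# Hypothetical coefficient: beta = ln(1.45), with the standard error implied
# by the reported interval (1.10, 1.90) on the log-odds scale.
beta = np.log(1.45)
se = (np.log(1.90) - np.log(1.10)) / (2 * 1.96)
or_, lo, hi = odds_ratio_ci(beta, se)
print(f"OR={or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

This is why a CI is symmetric on the log scale but asymmetric around the OR itself, as in the intervals quoted in the abstract.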

Paper 50: A Real-Time Multi-Scale Feature Pyramid YOLO Architecture for Accurate and Deployment-Efficient Road Damage Detection

Abstract: Automated road damage detection has become a critical component of intelligent transportation systems, enabling timely infrastructure maintenance and enhanced traffic safety. However, detecting pavement defects such as cracks, potholes, and surface degradation remains challenging due to significant scale variation, irregular geometries, illumination changes, and class imbalance. This study proposes a real-time Multi-Scale Feature Pyramid YOLO architecture designed to achieve accurate and deployment-efficient multi-class road damage detection. The framework integrates hierarchical feature extraction with bidirectional multi-scale fusion to enhance sensitivity to both small and large defects. A decoupled detection head is employed to improve classification–localization balance, while focal loss and small-object emphasis mechanisms address class imbalance and fine-grained crack detection challenges. Comprehensive experiments conducted on a multi-class road damage dataset demonstrate that the proposed model achieves a mAP@0.5 of 0.68 and a recall of 0.81, outperforming several representative real-time detection approaches. Precision–recall analysis, confusion matrix evaluation, and ablation studies confirm the effectiveness of multi-scale feature aggregation and targeted optimization strategies. Qualitative results further illustrate robust detection performance under diverse environmental conditions. The proposed framework provides a practical trade-off between accuracy and computational efficiency, making it suitable for real-world deployment in intelligent road condition monitoring systems.

Author 1: Olzhas Olzhayev
Author 2: Bakhytzhan Kulambayev
Author 3: Nurly Sakenkyzy
Author 4: Madina Belisbek

Keywords: Road damage; Multi-Scale Feature Pyramid; YOLO architecture; intelligent transportation systems; small-object detection; real-time deployment; pavement defect analysis

PDF
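The focal loss used above to counter class imbalance has a compact closed form (the RetinaNet formulation, with its usual alpha/gamma defaults); the predicted probabilities below are illustrative only.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights well-classified examples so training
    focuses on hard, rare detections such as fine cracks."""
    p_t = np.where(y == 1, p, 1 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

p = np.array([0.9, 0.6, 0.1])   # predicted P(damage) for three boxes
y = np.array([1, 1, 1])         # all three boxes truly contain damage

losses = focal_loss(p, y)
# The confidently correct box (p=0.9) contributes far less loss than the
# badly missed one (p=0.1), which is the imbalance-correcting effect.
```

With gamma=2, the modulating factor (1-p_t)^2 shrinks the easy example's loss by orders of magnitude relative to plain cross-entropy.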

Paper 51: Interior Windshield Moisture Management Using a Vapor-Assisted Wiping Model: System Architecture and Design

Abstract: Fogging and moisture accumulation on the interior side of vehicle windshields continue to affect driving visibility despite the development of various defogging and climate-control approaches. While previous studies have mainly addressed the problem through HVAC optimization, airflow management, or intelligent monitoring systems, direct moisture handling at the interior glass surface remains less explored. In response to this gap, the present study proposes an interior windshield moisture management model based on vapor-assisted wiping. The model integrates a water reservoir and vapor generation unit, a guided vapor delivery pathway, and a regulated wiping interface that includes a porous moisture-distribution structure. These components work together to control vapor transfer before it reaches the windshield, allowing the wiping action to operate with a moderated moisture layer rather than uncontrolled vapor flow. The proposed system architecture explains how moisture generation, routing, and surface interaction are coordinated to support stable visibility inside the vehicle cabin. This model offers an alternative approach that complements existing defogging strategies and may contribute in future work to the development and evaluation of more effective interior windshield visibility enhancement systems.

Author 1: Abdelrahim Fathy Ismail

Keywords: Interior windshield; moisture management; vapor-assisted wiping; windshield defogging; vehicle visibility enhancement

PDF

Paper 52: A Detailed Classification of the Scheduling Algorithms in Fog Computing Environment, Challenges, and Future Directions

Abstract: Fog computing is a distributed computing and storage paradigm placed closer to users than cloud computing in order to reduce latency for time-sensitive Internet of Things applications. Several scheduling algorithms have been proposed for the fog environment to achieve better performance in terms of execution time, cost, latency, and quality of service. This research presents a comprehensive study of the different scheduling approaches in fog computing and highlights an operational classification of up-to-date algorithms with a detailed comparison of their performance metrics.

Author 1: Hend Gamal El Din Hassan Ali
Author 2: Imane Aly Saroit
Author 3: Amira Mohamed Kotb

Keywords: Cloud computing; fog computing; edge computing; Internet of Things; scheduling algorithms; directed acyclic graph

PDF

Paper 53: Construction and Application Analysis of a Response Model for Improper Customer Behaviors in Service Enterprises Based on Cognitive Evaluation Theory

Abstract: This study investigates customer misbehavior in tourism services through the lens of Cognitive Evaluation Theory and Game Theory, contributing to both management research and social psychology. A dual-path model, tested via a multi-experiment design optimized with a machine learning algorithm, examines mediation versus defense strategies across violation types (interpersonal/transactional). The results show that for interpersonal norm violations, mediation boosts repurchase intention by 23.5%, mediated by organizational and moral justice perceptions, whereas for transactional norm violations the defense strategy achieves higher recovery efficiency (32.1%), primarily mediated by organizational justice. This highlights how corporate responses signal organizational values, shaping onsite customer reactions. The analysis, framed by game theory, posits that service scenarios constitute a dynamic strategic system involving customers, firms, and bystanders. Choosing mediation in interpersonal conflicts fosters cooperative atmospheres, while defending transactional rules maintains authority in non-cooperative games. Ultimately, this algorithm-informed approach seeks a refined Bayesian equilibrium, offering data-driven intervention solutions for service order management.

Author 1: Enhou Zu
Author 2: Chun-Wei Lu
Author 3: Jui-Chan Huang
Author 4: Tien-Shou Huang
Author 5: Cheng-Ju Liu

Keywords: Cognitive evaluation; customer misbehavior; management research; social psychology; tourism services; game theory

PDF

Paper 54: Convolutional Neural Network for Chili Plant Disease Classification: A Deep Learning Approach

Abstract: Chili peppers are a high-value horticultural crop that is highly susceptible to foliar diseases, which can significantly reduce yield and market quality. This study proposes and evaluates a Convolutional Neural Network (CNN) model based on the MobileNet-V2 architecture for chili leaf disease classification. A combined dataset consisting of 2,690 images collected from two public repositories and one field-acquired source was used in this research. The dataset was divided into training, validation, and testing subsets using an 80:10:10 ratio and underwent preprocessing steps including image resizing, data augmentation, and normalization. The proposed model was implemented using TensorFlow 2.15 and trained on the Google Colab platform. Experimental results demonstrate strong classification performance, achieving 95.6% validation accuracy and 96.8% test accuracy with a low loss value of 0.1011. All evaluated classes (anthracnose, yellow virus, leaf spot, leaf curl, and healthy leaves) achieved precision, recall, and F1-scores exceeding 0.90, accompanied by near-perfect AUC values. These findings indicate that the MobileNet-V2-based CNN exhibits effective discriminative capability and generalization across heterogeneous visual conditions, highlighting its potential applicability for AI-assisted agricultural disease monitoring systems based on image processing techniques.
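The 80:10:10 train/validation/test split described above can be sketched as follows (the seed and helper are illustrative, not the paper's implementation):

```python
import random

def split_dataset(items, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle and split items into train/validation/test subsets
    using an 80:10:10 ratio."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 2,690 images as in the abstract -> 2152 / 269 / 269
train, val, test = split_dataset(range(2690))
```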

Author 1: Erna Dwi Astuti
Author 2: Widowati
Author 3: Aris Sugiharto

Keywords: Convolutional neural network; MobileNet-V2; chili plant disease; image processing

PDF

Paper 55: Blockchain-Based Secure Data Sharing Framework: Dual Validation Through Content Validity and Thematic Analysis

Abstract: Blockchain technology applied to digital government service platforms has introduced new possibilities for secure data sharing among public sector agencies. However, the verification and validation of critical security factors remain underexplored, leading to inconsistent implementations and theoretical gaps. This study addresses this issue by conducting a dual validation analysis of security factors relevant to blockchain-based data sharing in e-government applications, using thematic analysis and the content validity index. Drawing on an extensive literature review, 54 security items across nine factors were evaluated by a panel of six domain experts using methodological triangulation. The results indicate that the key factors of confidentiality, integrity, availability, decentralisation, interoperability, transparency, auditability, and governance exhibit strong content validity and coherent themes under thematic analysis. Immutability factors fall outside the Universal Agreement (UA) scale and require further refinement. The validated framework contributes to both academic and practical domains by offering concrete fundamentals for secure system design and policy formulation. Future research directions include operational testing of the validated factors and exploration of user-centric verification.

Author 1: Azman Azmi
Author 2: Farashazillah Yahya
Author 3: Nur Afrina Azman

Keywords: Blockchain; content validity; data sharing; e-government; thematic analysis

PDF

Paper 56: One Decade of Artificial Intelligence (AI) Research in Public Health Stunting Prediction and Intervention

Abstract: Stunting attributable to malnutrition remains a global public health problem affecting the long-term physical and cognitive growth of children. In recent years, artificial intelligence (AI) has been applied in public health research to help diagnose and predict stunting. This study reviews trends in AI research on stunting prediction and intervention and identifies existing challenges and opportunities. Articles were screened using the Systematic Literature Review (SLR) method with the PRISMA protocol across databases including PubMed, ScienceDirect, Scopus, and Google Scholar. Data analysis was performed using VOSviewer and Microsoft Excel. The results showed that the models most used for predicting stunting were Random Forest (RF), Support Vector Machine (SVM), Gradient Boosting (XGBoost, LGBM), and Artificial Neural Network (ANN). Model evaluation is usually performed through metrics such as AUC-ROC, accuracy, sensitivity, and specificity. Although AI has shown promise in identifying and predicting stunting, several challenges remain, including data access and quality, model interpretability, and integration within healthcare networks. Promising future directions include home-based health data prediction using the Internet of Things (IoT), Explainable AI (XAI), multimodal AI, and natural language processing (NLP) models.

Author 1: Nurjoko
Author 2: Admi Syarif
Author 3: Favoriten R. Lumbanraja
Author 4: Khairun Nisa Berawi

Keywords: Artificial intelligence; stunting; public health; machine learning; systematic review

PDF

Paper 57: Students’ Perspectives of AI and Academic Integrity in Higher Education Institutions in Oman

Abstract: The rapid adoption of generative artificial intelligence (AI) in higher education offers significant learning benefits while raising serious concerns about academic integrity. This study was conducted to examine undergraduate students’ perspectives and their awareness, attitudes, usage behaviors, perceived educational impact, policy awareness, and intentions to misuse AI tools in academic contexts in Oman. Using a cross-sectional survey design, data were collected from 200 undergraduate students across multiple academic levels. The survey measured six constructs using Likert-scale items, and data were analyzed using descriptive statistics, correlation analysis, group comparisons, and hierarchical multiple regression. Results indicated that students demonstrated moderate to high awareness of AI tools and generally positive attitudes toward their use for learning-related tasks such as grammar checking, summarization, and brainstorming. Correlation analysis showed that AI awareness, perceived educational impact, and policy awareness were significantly and negatively associated with intentions to misuse AI. Hierarchical multiple regression revealed that ethics-related variables, specifically perceived impact and policy awareness, explained substantial additional variance in misuse intentions beyond baseline predictors of awareness, attitudes, and usage frequency. Gender differences were observed, with male students reporting higher intentions to misuse AI, while senior students demonstrated higher awareness and policy understanding than early-year students. The findings highlight the critical role of AI literacy, ethical awareness, and clear institutional policies in mitigating unethical AI use. Integrating AI ethics education early in undergraduate curricula and strengthening communication of academic integrity policies may promote responsible AI engagement. These results contribute empirical evidence from the Middle Eastern context and offer practical implications for higher education institutions.
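The hierarchical regression step, where ethics-related predictors are entered after baseline predictors and the added variance is read off as ΔR², can be illustrated on synthetic data (the variable names and effect sizes below are invented for the sketch):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 200  # sample size matching the survey
awareness = rng.normal(size=n)
policy = rng.normal(size=n)
# Toy outcome: misuse intention driven mostly by policy awareness.
misuse = -0.2 * awareness - 0.8 * policy + rng.normal(scale=0.5, size=n)

# Step 1: baseline predictor only; Step 2: add the ethics-related one.
r2_step1 = r_squared(np.column_stack([awareness]), misuse)
r2_step2 = r_squared(np.column_stack([awareness, policy]), misuse)
delta_r2 = r2_step2 - r2_step1  # variance explained beyond step 1
```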

Author 1: Alaa Edein Qoussini
Author 2: Shaima Al Tabib
Author 3: Akel Freij

Keywords: Artificial intelligence; academic integrity; student perspectives; ethical AI use; policy awareness

PDF

Paper 58: Web System Based on Convolutional Neural Networks to Support Early Identification of Ocular Pterygium

Abstract: This research aligns with SDG No. 9, “Industry, Innovation, and Infrastructure,” as it promotes health and well-being through innovative technologies. The objective was to determine whether the development of a web-based system built on convolutional neural networks improves the early identification of pterygium. The study was applied research with a quantitative approach and an experimental design, specifically pre-experimental. The study variable was the early identification of ocular pterygium, with a sample of 100 images: 50 from individuals with ocular pterygium and 50 from healthy individuals, selected through non-probabilistic convenience sampling. Sensitivity, specificity, and accuracy were used to measure the results, reaching 96%, 98%, and 97%, respectively, corresponding to increases of 4.35% in sensitivity, 2.80% in specificity, and 3.56% in accuracy. It is concluded that the proposal improves support for the early identification of pterygium, given the high results obtained for the indicators evaluated, making it executable and scalable for future research.
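Sensitivity, specificity, and accuracy follow directly from confusion-matrix counts; a minimal sketch (the counts below are hypothetical, chosen only to reproduce the reported 96%/98%/97%):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Illustrative counts for a 100-image sample (50 pterygium, 50 healthy):
sens, spec, acc = diagnostic_metrics(tp=48, fn=2, tn=49, fp=1)
```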

Author 1: Justo Oscar Salcedo-Enriquez
Author 2: Keyla Guadalupe Yataco-Argomedo
Author 3: Rosalynn Ornella Flores-Castañeda

Keywords: Ocular pterygium; web system; convolutional neural networks; early identification; technology

PDF

Paper 59: Mobile Application Based on Convolutional Neural Networks for the Initial Evaluation of Cutaneous Melanoma

Abstract: Cutaneous melanoma is a dermatological disease that affects a large portion of the world's population and is characterized by its high capacity for dissemination and aggressiveness, especially when not detected early. To address this need, the objective was to develop a mobile application based on convolutional neural networks for the initial assessment of this condition, evaluated by the percentage increase in sensitivity, specificity, and accuracy. The research employed a quantitative approach and a pre-experimental design. The study variable was the initial assessment of cutaneous melanoma. The sample consisted of 120 images: 60 melanoma-positive and 60 melanoma-negative. The results of the implementation showed increases in sensitivity of 0.729%, specificity of 3.626%, and accuracy of 2.631%. In conclusion, the adoption of the mobile application based on convolutional neural networks strengthens the initial assessment of cutaneous melanoma by optimizing these indicators.

Author 1: Julio Guillermo Farro-Llanos
Author 2: Manases Sabteca Juan De Dios-Arango
Author 3: Rosalynn Ornella Flores-Castañeda

Keywords: Melanoma; skin disease; convolutional neural networks; mobile app

PDF

Paper 60: A Survey of AI-Based Methods for Cloud Resource Allocation and Optimization

Abstract: Cloud computing has become essential for modern digital services, yet efficiently allocating compute, storage, and network resources in large-scale and highly dynamic environments remains a significant challenge. Traditional rule-based approaches often struggle to cope with workload variability, multi-tenancy, and the need for real-time multi-objective optimization. In response, recent research has increasingly explored artificial intelligence techniques to improve prediction, scheduling, and automated resource control in cloud infrastructures. This study presents a comprehensive survey of AI-based methods for cloud resource allocation, including machine learning, deep learning, reinforcement learning, and hybrid approaches. It systematically analyzes selected studies published between 2020 and 2026, examining their learning paradigms, optimization objectives (e.g., performance, cost, energy efficiency), experimental validation strategies, and reported limitations. While classical optimization techniques are briefly discussed to contextualize the evolution of the field, the core analysis is strictly centered on AI-driven approaches. The study concludes by identifying the key challenges that persist in intelligent cloud resource management and outlines promising directions for future research toward more adaptive, reliable, and scalable optimization frameworks.

Author 1: Rim Doukha
Author 2: Abderrahmane Ez-Zahout

Keywords: AI techniques; heuristics; metaheuristic; cloud resource management; sustainability; survey

PDF

Paper 61: Hate Speech Detection on Multiple Social Networks Using Deep Learning and Optimization Techniques: A Hybrid Approach

Abstract: The use of social media networks as a vehicle for hate speech complicates efforts to maintain an environment that promotes healthy communication. Automating the detection of hate speech across social media networks has proven very difficult, yet identifying and monitoring hate speech is critical for reducing its negative effects on individuals and groups. Existing approaches to classifying hate speech still struggle to distinguish hateful from normal messages and suffer from low accuracy. Deep learning has greatly benefited many domains, especially speech and NLP tasks, and the hyperparameters of Deep Neural Networks (DNNs) play a crucial role in their success. However, because these hyperparameters are numerous and interdependent, they are difficult to set for machine learning models such as deep neural networks. The work proposed in this study employs the sparrow search algorithm (SSA) to fine-tune the hyperparameters of deep learning models for hate speech detection: during training of the SSA-DNN model, the SSA searches for and selects the best hyperparameters. The experimental outcomes show that the proposed SSA-DNN model outperforms various machine learning and deep learning techniques for hate speech detection.
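A highly simplified population-based hyperparameter search in the spirit of SSA can be sketched as follows (the objective is a stand-in for validation performance, and the producer/scrounger split is a loose approximation of the full algorithm, not the paper's implementation):

```python
import random

def objective(lr, hidden):
    """Stand-in validation score; a real system would train the DNN
    and return validation F1 on the hate-speech dataset."""
    return -(lr - 0.01) ** 2 - (hidden - 128) ** 2 / 1e4

def simple_population_search(pop_size=20, iters=50, seed=1):
    """Simplified population search: 'producers' explore at random,
    'scroungers' step toward the best solution found so far."""
    rng = random.Random(seed)
    pop = [(rng.uniform(1e-4, 0.1), rng.uniform(16, 512))
           for _ in range(pop_size)]
    best = max(pop, key=lambda h: objective(*h))
    for _ in range(iters):
        new_pop = []
        for lr, hidden in pop:
            if rng.random() < 0.3:   # producer: random exploration
                cand = (rng.uniform(1e-4, 0.1), rng.uniform(16, 512))
            else:                    # scrounger: move halfway to best
                cand = (lr + 0.5 * (best[0] - lr) + rng.gauss(0, 1e-3),
                        hidden + 0.5 * (best[1] - hidden) + rng.gauss(0, 4))
            new_pop.append(cand)
        pop = new_pop
        best = max(pop + [best], key=lambda h: objective(*h))
    return best

best_lr, best_hidden = simple_population_search()
```

With the toy objective peaked at (0.01, 128), the search converges close to that optimum.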

Author 1: Vishu Tyagi
Author 2: Sourabh Jain

Keywords: Natural language processing; sparrow search algorithm; hate speech; deep neural network; social media

PDF

Paper 62: Enhancing the Successful Microservices Implementation

Abstract: Despite the widespread adoption of microservices across major global companies, a knowledge gap persists between best practices and the real-world implementation challenges faced by practitioners. While existing literature provides extensive coverage of microservices patterns, limited evidence exists regarding how these practices are actually implemented in various organizational contexts. This study addresses this gap through a systematic synthesis of empirically validated microservices practices from recent peer-reviewed literature. We conducted a comprehensive systematic literature review and ultimately included thirty-four high-quality articles published between 2021 and 2025. We extracted 114 microservice practices and classified them into eight domains: Architecture and Design, Communication and Integration, Development and Deployment, Monitoring and Observability, Testing and Quality Assurance, Migration and Legacy Modernization, Security and Access Control, and Team Organization and Development Process. Architecture and Design together with Team Organization and Development Process account for nearly half of all identified practices, while Security and Access Control emerged as a significant research gap, addressed by only 5.9% of studies. To the best of our knowledge, this is the first systematic literature review that comprehensively synthesizes microservice implementation practices across multiple domains with explicit empirical validation in real-world contexts as an inclusion criterion. This study provides a comprehensive catalogue of empirically validated practices, offering structured guidance for practitioners and a foundation for the future development of microservice implementation guidelines, contributing to more successful microservice projects and mitigated implementation risks.

Author 1: Dinda Ayu Hapsari
Author 2: Teguh Raharjo
Author 3: Anita Nur Fitriani
Author 4: Bob Hardian

Keywords: Microservice; practice; systematic literature review; Kitchenham; PRISMA 2020

PDF

Paper 63: Real-Time Person Re-Identification Using Image Generation-Based Data Augmentation

Abstract: Person Re-identification (Re-ID) in single-gallery scenarios—where each individual has only one registration image—suffers from severe viewpoint sensitivity due to insufficient pose diversity. This study introduces ViewSynthReID, a pioneering generative augmentation framework that leverages Wan2.2, the latest diffusion-based video generation model, to synthesize complete 360° viewpoint coverage from a single input. The pipeline innovatively employs MediaPipe for automatic frontal pose selection, Hybrid Attention Transformer (HAT) for texture-preserving super-resolution, and diffusion synthesis to create photorealistic multi-pose variants, all seamlessly integrated into the lightweight OSNet backbone for efficient multi-scale feature extraction. On Market-1501, while overall Rank metrics experienced minor degradation from synthetic artifacts (Rank-1: 92.3% → 91.8%), the method delivered targeted gains in challenging viewpoint transitions: 75/3,368 queries (2.2%) showed Rank-1 improvements averaging +12.4%, with 28 cases exceeding +25%. These gains were most pronounced in >90° viewpoint gaps, proving generative synthesis effectively bridges critical pose gaps unattainable through traditional augmentation. For real-world deployment, a production-grade inference pipeline is engineered, combining YOLO26 pedestrian detection with TensorRT-optimized OSNet, achieving 7.20 FPS and 135ms latency on 4K video streams. This system enables practical smart city applications, including real-time crowd monitoring, lost person recovery, and traffic behavior analysis, demonstrating that strategic generative augmentation can transform single-shot Re-ID from research curiosity to deployable surveillance technology.

Author 1: Yuya Ifuku
Author 2: Kohei Arai
Author 3: Oda Mariko

Keywords: Person re-identification; generative AI; data augmentation; OSNet; real-time systems

PDF

Paper 64: Transformer-Enhanced Soft Actor-Critic with EV-Aware Reward Shaping for Maize Optimization

Abstract: Optimizing fertilization and irrigation strategies is essential for improving productivity and resource efficiency in precision agriculture. Artificial intelligence (AI), particularly reinforcement learning (RL), has been increasingly explored for adaptive crop management under uncertain environmental conditions. However, many existing approaches rely on single-action formulations that struggle with joint input control, leading to economically unstable outcomes and limited policy interpretability. This study proposes a Transformer-enhanced Soft Actor-Critic (SAC) framework with expected value (EV)-aware reward shaping for maize optimization in a Decision Support System for Agrotechnology Transfer (DSSAT) Gym environment, enabling simultaneous control of fertilization and irrigation under dynamic crop-environment interactions. Unlike standard SAC implementations, the proposed framework incorporates a transformer-based state encoder for richer agronomic state representation and an EV-aware reward shaping mechanism to guide economically stable long-horizon decision-making. The proposed AI-driven approach improves economic profitability and profit stability compared with the prior state-of-the-art (SOTA) large language model (LLM)-enhanced Deep Q-Network (DQN) baseline. Behavioral analysis shows that the learned policy exhibits temporally structured decision patterns characterized by smaller-magnitude, higher-frequency actions and an associated input-efficiency trade-off. Furthermore, Shapley Additive Explanations (SHAP)-based explainable AI (XAI) analysis identifies growth-stage and crop-development variables as dominant drivers of long-horizon control decisions. Overall, the results demonstrate that the Transformer-enhanced SAC with EV-aware reward shaping provides a more profitable, financially stable, and interpretable AI-based decision-making framework for maize optimization in the DSSAT Gym environment.
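The EV-aware reward shaping idea, penalizing actions whose profit falls below an expected baseline so that the policy favors economically stable decisions, might be sketched like this (the function and parameter names are assumptions, not the paper's):

```python
def ev_shaped_reward(yield_revenue, input_cost, baseline_profit,
                     risk_weight=0.5):
    """Illustrative expected-value-aware shaping: plain step profit,
    minus a penalty proportional to any shortfall below a running
    baseline, discouraging economically unstable actions."""
    profit = yield_revenue - input_cost
    shortfall = max(0.0, baseline_profit - profit)
    return profit - risk_weight * shortfall

# Profit above the baseline is passed through unpenalized;
# profit below it is penalized in proportion to the shortfall.
stable = ev_shaped_reward(100, 40, baseline_profit=50)   # profit 60
risky = ev_shaped_reward(100, 80, baseline_profit=50)    # profit 20
```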

Author 1: Xuan Lim
Author 2: Hock Guan Goh
Author 3: Shen Khang Teoh
Author 4: Peh Chiong Teh
Author 5: Ivan Andonovic

Keywords: Precision agriculture; maize optimization; fertilization and irrigation management; reinforcement learning; Soft Actor-Critic; transformer; reward shaping; explainable artificial intelligence

PDF

Paper 65: Web Application to Improve Advertising Order Management Based on Cloud Computing: Case Study of a Television Company

Abstract: Television companies face significant problems in managing advertising orders due to manual processes that cause transcription errors, delays, and a lack of traceability. This study proposes a web application based on cloud computing to automate this process. The solution implements a hybrid architecture that allows advertising agencies to enter orders directly, eliminating manual transcription and providing complete traceability. The results of the pilot test demonstrated substantial improvements: processing time was reduced by 88.33%, decreasing from 17 minutes 16 seconds to 2 minutes 1 second per order. Transcription errors fell from 60% to 0%, and operating costs were reduced by 54.5%. Automation in reporting eliminated manual management, allowing agencies direct access to campaign information. The implementation successfully transformed manual management into an automated, efficient, and scalable system, improving operational efficiency and customer satisfaction.

Author 1: Ariadna Gisselle Ledesma-Sánchez
Author 2: Ricardo Alejandro Gamarra-Valle
Author 3: Ernesto Adolfo Carrera-Salas

Keywords: Advertising order management; cloud computing; process automation; television companies; web applications

PDF

Paper 66: Dynamic Multilevel User Allocation in MEC Using CESO for Resource Efficiency and QoE

Abstract: Mobile Edge Computing (MEC) has become one of the key paradigms enabling next-generation networks to support latency-sensitive and computation-intensive applications. Nevertheless, efficiently placing heterogeneous, dynamically arriving user tasks on distributed edge servers remains difficult because of network fluctuation, non-uniform resource availability, and varying Quality of Experience (QoE) demands. To overcome these constraints, this study proposes the Dynamic Multilevel User Allocation Algorithm (DMUAA), which incorporates a new Cognitive Evolutionary Synergy Optimization (CESO) framework to achieve stable, adaptive, and resource-efficient allocation in real time. DMUAA implements a hierarchical optimization pipeline consisting of heuristic initialization, stochastic refinement, and strategic game-theoretic equilibrium, assisted by a coordination and feedback mechanism that ensures continuous adaptation to variations in user mobility and load. The system model jointly optimizes latency, energy, resource use, and QoE under multi-constraint edge-server conditions. Extensive simulations over a wide range of resource capacities, user arrival rates, and mobility patterns indicate that DMUAA significantly outperforms five state-of-the-art baselines: MGGO, GTA, EUA, HAILP, and LGP. Findings show that DMUAA decreases average end-to-end latency by 18–34%, increases Resource Utilization Efficiency (RUE) by 12–27%, and increases Service Continuity Rate (SCR) by 15–30% over current practice. The proposed approach also yields 20–35% higher QoE, better load balancing (with up to 25% lower LBI), and up to 22% higher energy–QoE efficiency (EQR). Moreover, CESO allows faster and more stable convergence, with DMUAA reaching optimal allocation states 40–55% quicker than competing algorithms.
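The heuristic-initialization stage of a pipeline like DMUAA could look like the following greedy latency-first assignment (a sketch only; the actual algorithm layers stochastic refinement and game-theoretic equilibrium on top, and the data shapes are assumptions):

```python
def greedy_allocate(users, servers):
    """Assign each user to the feasible server with the lowest latency.

    users:   list of (user_id, demand, {server_id: latency})
    servers: dict server_id -> remaining capacity
    Users with the best latency options are placed first.
    """
    allocation = {}
    cap = dict(servers)
    for uid, demand, latency in sorted(users,
                                       key=lambda u: min(u[2].values())):
        feasible = [s for s in latency if cap[s] >= demand]
        if feasible:
            s = min(feasible, key=lambda s: latency[s])
            allocation[uid] = s
            cap[s] -= demand
    return allocation

# Two users, two servers: u1 takes its low-latency server b, which
# forces u2 onto server a once b's capacity is exhausted.
allocation = greedy_allocate(
    [("u1", 2, {"a": 5, "b": 1}), ("u2", 2, {"a": 2, "b": 3})],
    {"a": 2, "b": 2})
```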

Author 1: V Arun
Author 2: M Azhagiri

Keywords: Mobile Edge Computing; distributed edge server; Quality of Experience; Cognitive Evolutionary Synergy Optimization; game-theoretic equilibrium

PDF

Paper 67: A Deployment-Oriented Framework for Machine Learning-Based Learning Style Identification: A Systematic Computational Analysis

Abstract: This study presents a systematic and deployment-oriented analysis of machine learning (ML) techniques for learning style identification in adaptive digital environments. A total of 57 peer-reviewed studies published between 2020 and 2025 were analysed using a PRISMA-guided methodology. Beyond descriptive synthesis, the review systematically examines algorithmic paradigms, multimodal data integration strategies, evaluation protocols, and deployment readiness characteristics. The findings reveal that classical supervised models remain prevalent in small-scale applications, while deep learning and ensemble methods demonstrate improved performance in high-dimensional behavioural datasets. However, significant heterogeneity exists in validation strategies, fusion architectures, and system scalability. To address these limitations, this study proposes a deployment-oriented architectural framework that integrates: 1) context-aware model selection, 2) structured multimodal fusion design, 3) layered explainability mechanisms, and 4) a four-level deployment maturity evaluation model. The framework provides a unified system-level perspective that shifts emphasis from isolated performance optimization toward scalable, interpretable, and integration-ready ML system design. This work contributes a structured computational blueprint for developing robust and deployment-aware learning style identification systems in intelligent educational platforms.

Author 1: Sarafa Olasunkanmi Adeyemo
Author 2: Mohd Shahizan Othman
Author 3: Chan Weng Howe
Author 4: Muteb Sinhat Almarshadi
Author 5: Siti Zaiton Mohd Hashim
Author 6: Taofik Olasunkanmi Tafa
Author 7: Abdulaziz Saidu Yalwa

Keywords: Machine learning; learning style identification; multimodal data fusion; deep learning; ensemble learning; explainable artificial intelligence; deployment maturity; adaptive learning systems

PDF

Paper 68: An Automated Shrimp Feeding System Using Passive Acoustic Monitoring and Faster R-CNN

Abstract: Shrimp aquaculture plays a vital role in global seafood production, contributing substantially to food security, economic growth, and export revenue. Feed typically accounts for 40–60% of total production costs, making efficient feed management crucial for improving farm profitability and the sustainability of culture operations. Acoustic-based feeding strategies offer a promising solution by enabling demand-driven feed control through the detection of shrimp feeding sounds. However, reliable recognition in commercial ponds remains difficult due to strong background noise from aerators, pumps, diffusers, and rainfall, which overlaps with the frequency band of the feeding signals. In addition, the dependence on specialized software and high-performance computing resources hinders large-scale adoption. This study proposes a novel shrimp feeding sound recognition approach that converts acoustic signals into spectrogram images and employs a Faster R-CNN–based framework to regulate feed delivery in real time according to shrimp demand. A wavelet-based filtering method is introduced to effectively suppress ambient noise under practical farming conditions. Moreover, the developed open-source Python-based software enhances the feasibility of deploying intelligent acoustic-based feeding systems in commercial shrimp aquaculture. Experimental results demonstrate that the proposed system improves feed utilization efficiency and growth performance compared with traditional feeding practices.

Author 1: Huynh Viet Hung
Author 2: Huynh Vi Khang
Author 3: Luong Vinh Quoc Danh

Keywords: Automated feeding system; faster R-CNN; passive acoustic monitoring; shrimp culture; whiteleg shrimp

PDF

Paper 69: The DAGC-ATS Database for Arabic Grammar Correction for Arabic Summaries

Abstract: Arabic grammar correction is a comprehensive open-domain task. Modern methods for correcting Arabic language errors rely on databases specific to a particular field, containing specific words and phrases, which leads to the problem of out-of-context words. Given the growth of recent work on Arabic text summarization and Arabic grammar correction, the out-of-context words problem, and the complex nature of Arabic grammar, an open-domain Arabic database is a necessary resource for Arabic language processing techniques. In this study, a new open-domain Database for Arabic Grammar Correction (DAGC-ATS) is presented to address the out-of-context words problem and the limited domain coverage of existing training databases. The proposed database describes Arabic grammar using part-of-speech tags and relations between words produced by a dependency parser, and supports grammar error detection and correction at the simple-sentence level. The database consists of two files, one for correcting Arabic simple sentences and the other for correcting grammatically incorrect Arabic basic sentences, and it is designed for use only in the training stage. Every entry describes one grammatical problem, such as gender, number, singular, dual, or plural faults; in total the database contains 9,309,888 entries. Using the QALB dataset, the system's precision, recall, and F-measure scores were 96.9%, 94.80%, and 95.83%. Additionally, the same system was tested on the EASC database with 785 summaries, and the results for precision, recall, and F-measure were 99.73%, 95.90%, and 97.77%.

Author 1: Nada Essa
Author 2: Mostafa M. El-Gayar
Author 3: Eman M. El-Daydamony

Keywords: Arabic grammar correction; Arabic natural language processing; open domain database

PDF

Paper 70: An Enhanced Framework Using XLM-R with Optimized TF-IDF and Positional Encoding for Intra-Sentential Code Mixing Malay-English Sentiment Analysis

Abstract: The increasing use of online platforms, especially social media, has led to rapid growth of user-generated content that frequently exhibits intra-sentential code mixing between the Malay and English languages. Sentiment analysis remains challenging due to linguistic heterogeneity, frequent language switching, non-standard syntax, and the limited availability of adequate representations for code-mixed text. Although multilingual contextual embedding models such as the Cross-lingual Language Model (XLM-R) provide good semantic representations, challenges remain in capturing fine-grained sentiment cues in intra-sentential code-mixed text when such models are used directly. This study proposes an enhanced feature extraction framework for intra-sentential Malay-English code mixing. The framework first constructs Term Frequency–Inverse Document Frequency (TF-IDF) weighting over trigrams, followed by lexicon-guided filtering to select trigrams that contain sentiment-relevant words. Contextual embeddings are then extracted using XLM-R and further refined through TF-IDF weighting and positional encoding to preserve structural information. The dataset is derived from the MESocSentiment corpus, with a total of 4,292 instances. The experimental results show that the proposed framework achieves an accuracy of 0.896 and an F1-score of 0.932, outperforming traditional sparse feature representations and multilingual contextual embedding baselines. Notably, the framework demonstrates a high recall of 0.954, indicating strong sensitivity in identifying sentiment-bearing instances across diverse code-mixed social media expressions. Further analysis reveals that the integration of informative trigram filtering, XLM-R-based contextual embedding, TF-IDF weighting, positional encoding, and sentiment polarity scoring enhances the representation of sentiment cues in short, informal social media text. Overall, the results suggest that the proposed feature extraction framework enhances representation quality for sentiment analysis of code-mixed Malay–English social media text.
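The trigram TF-IDF stage can be sketched in a few lines (a smoothed idf variant; the paper's exact weighting scheme and lexicon filter are not specified in the abstract):

```python
import math
from collections import Counter

def trigrams(tokens):
    """Word trigrams of a token list."""
    return [tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)]

def tfidf(docs):
    """docs: list of token lists. Returns per-document {trigram: tf-idf weight}."""
    tri_docs = [Counter(trigrams(d)) for d in docs]
    n = len(docs)
    df = Counter()
    for tc in tri_docs:
        df.update(tc.keys())          # document frequency per trigram
    weights = []
    for tc in tri_docs:
        total = sum(tc.values()) or 1
        weights.append({g: (c / total) * math.log((1 + n) / (1 + df[g]))
                        for g, c in tc.items()})
    return weights
```

A trigram present in every document gets zero weight under this smoothing, which is how uninformative boilerplate is downweighted before lexicon-guided filtering.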

Author 1: Surendran Selvaraju
Author 2: Nilam Nur Amir Sjarif
Author 3: Nurulhuda Firdaus Mohd Azmi
Author 4: Wan Noor Hamiza Wan Ali
Author 5: Norshaliza Kamaruddin

Keywords: Sentiment analysis; code mixing; feature extraction; contextual embeddings; XLM-R; TF-IDF; positional encoding

PDF

Paper 71: Empirical Quantitative Investigation of the Effect of GoF Design Patterns on the Quality of Software Systems

Abstract: Design patterns are universal, reusable fixes for common issues in software design. They are supposed to encourage better design choices by repurposing already-established successful solutions, saving cost and time. The purpose of the current study is to present quantitative evidence about the expected implications for software quality of employing GoF design patterns; 10 commonly applied patterns have been empirically assessed for their impact on software quality. The evaluated patterns are: Factory Method, Prototype, Singleton, Adapter, Composite, Decorator, Observer, State, Template Method, and Proxy. The study considers software quality attributes that match the intents of the subject design patterns; these attributes are: software maintainability, testability, reusability, simple design, sensitivity to change, and error-proneness. The empirical evaluation is performed by computing 10 software quality metrics for pattern classes detected in 10 real open-source projects implemented in Java. The findings reveal that the evaluated patterns promote software quality, except for the classes of Prototype, Composite, and Singleton, which were often found to lack cohesion.
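The cohesion finding can be made concrete with one standard metric; below is a minimal LCOM1-style computation (a generic sketch, not necessarily one of the paper's 10 metrics):

```python
def lcom(methods):
    """LCOM1 (Chidamber-Kemerer style): count method pairs sharing no instance
    attribute (p) minus pairs sharing at least one (q), floored at zero.
    Higher values indicate lower class cohesion.
    `methods` maps method name -> set of instance attributes it uses."""
    names = list(methods)
    p = q = 0
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if methods[names[i]] & methods[names[j]]:
                q += 1
            else:
                p += 1
    return max(p - q, 0)
```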

Author 1: Somia Abufakher

Keywords: Gang-of-four patterns; empirical quantitative investigation; software systems quality

PDF

Paper 72: D-LexeCan: A Dynamic Lexicon-Based Framework for Sentiment Analysis in Tarifit, a Low-Resource Multiscript Language

Abstract: Sentiment analysis for low-resource languages remains challenging due to limited annotated data, orthographic instability, informal writing practices, and the lack of dedicated linguistic resources, challenges that are particularly acute for Tarifit (Tamazight of the Rif), an under-resourced Amazigh language characterized by strong dialectal variation, pervasive multi-script usage, and highly noisy user-generated content on social media. This study introduces D-LexeCan, a dynamic lexicon-based sentiment analysis framework that infers polarity directly from annotated corpus evidence without relying on predefined sentiment dictionaries or computationally intensive pretrained deep learning and transformer-based models. The framework combines deterministic multi-script normalization, which unifies Arabic script, Tifinagh, and Arabizi into a single Tarifit Latin representation, with automatic induction of sentiment-bearing unigrams and bigrams, while explicitly modeling negation and amplification phenomena through linguistically motivated operators and preserving emojis as meaningful discourse-level sentiment cues. The approach is evaluated on a manually annotated social media corpus collected from multiple online platforms, where it achieves an accuracy of 0.8800 and a Macro-F1 score of 0.8798. The results outperform a static lexicon baseline with an accuracy of 0.5275, a classical machine-learning model based on TF-IDF and SVM with an accuracy of 0.8525, and neural architectures including BiLSTM with an accuracy of 0.7950. Experiments with frozen multilingual transformer encoders show accuracy ranging from 0.6725 to 0.7650. Fine-tuned multilingual transformers such as mBERT achieve competitive performance, reaching an accuracy of 0.8175.
Overall, the results demonstrate that adaptive and linguistically grounded dynamic lexicon induction constitutes an effective, interpretable, and computationally efficient alternative for sentiment analysis in low-resource, noisy, and multi-script African language contexts.
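Lexicon scoring with negation and amplification operators can be sketched as follows (the lexicon entries and operator lists below are invented placeholders for illustration, not actual Tarifit resources or the paper's induced lexicon):

```python
# Hypothetical mini-lexicon and operator lists, for illustration only.
LEXICON = {"azil": 1.0, "icna": 1.0, "axmij": -1.0}   # invented example entries
NEGATORS = {"ur", "war"}                               # assumed negation markers
AMPLIFIERS = {"atas": 1.5}                             # assumed amplifier -> boost

def score(tokens):
    """Lexicon lookup with negation flipping and amplification boosting."""
    total, negate, boost = 0.0, False, 1.0
    for tok in tokens:
        if tok in NEGATORS:
            negate = True
            continue
        if tok in AMPLIFIERS:
            boost = AMPLIFIERS[tok]
            continue
        if tok in LEXICON:
            polarity = LEXICON[tok] * boost
            total += -polarity if negate else polarity
        negate, boost = False, 1.0   # operators scope over the next word only
    return total
```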

Author 1: Amar Amakssoum
Author 2: Fadwa Bouhafer
Author 3: Anass El Haddadi
Author 4: Abdelkhalak Bahri

Keywords: Sentiment analysis; low-resource languages; dynamic lexicon-based; static lexicon baseline; deep learning; machine-learning; transformer-based models

PDF

Paper 73: Framework for Implementing ERP Integrating Client-Consultant Agency Management within Moroccan SMEs

Abstract: Nowadays, Enterprise Resource Planning (ERP) is a preferred solution for Small and Medium-sized Enterprises (SMEs) wishing to modernize and integrate their Information Systems (IS). However, implementing an ERP remains a complex process due to the organizational constraints specific to SMEs and the difficulties associated with the implementation process. In the context of Moroccan SMEs, the lack of a structured implementation framework and conflicts between the client and the consultant are major factors affecting the success of ERP projects. Furthermore, most of the theoretical frameworks presented in the literature are structured around similar phases, do not take into account client-consultant agency management, and were developed in contexts different from that of Morocco. This research aims to develop an ERP implementation framework that integrates client-consultant agency management, specifically adapted to Moroccan SMEs. To achieve this objective, a mixed-methods approach was adopted. We first opted for a quantitative research method using the PLS-SEM statistical technique, with the aid of SmartPLS software, to examine how client–consultant agency management affects the success of ERP implementation within Moroccan SMEs. Next, we used the action research method to develop a framework for ERP implementation that integrates client–consultant agency management within Moroccan SMEs. The proposed framework is based on five phases, each defining the objectives, inputs, processes, outputs, critical success factors (CSF), and associated risks. The integration of client-consultant agency management makes it possible to anticipate and manage organizational, technical, and human conflicts, particularly through the contract and conflict resolution strategies. This study contributes to both academic research and professional practice by offering consultants and Moroccan SMEs a structured framework aimed at improving the success of ERP projects.

Author 1: Yassine Zouhair
Author 2: Younous El Mrini
Author 3: Mustapha Belaissaoui

Keywords: ERP; ERP implementation; client-consultant; Moroccan SMEs; IS success; system benefits

PDF

Paper 74: A Hybrid Machine Learning Algorithm for Pipeline Leak Detection and Localisation in Water Distribution Networks

Abstract: Water Distribution Networks (WDNs) frequently experience significant water losses due to pipeline leakages. These losses not only create economic challenges for water utilities but also intensify global concerns regarding water scarcity. This study aims to enhance the accuracy and reliability of leak detection and localisation within WDN infrastructures. Traditional leak detection techniques often exhibit limitations such as high operational costs, inefficient detection processes, and susceptibility to false alarms, particularly when sensors are deployed randomly across the network. Furthermore, detecting concealed or low-intensity leaks remains a difficult task. To address these challenges, this study introduces a hybrid supervised machine learning framework that combines Support Vector Machines (SVM), Artificial Neural Networks (ANN), and Graph Theory (GT). The integration of these techniques enables the proposed model to analyse multiple parameters influencing leak behaviour and improve the reliability of detection outcomes. The hybrid model, referred to as the SVM-ANN-GT algorithm, is evaluated using the EPANET hydraulic simulation environment and compared with conventional machine learning approaches. Experimental results indicate that the proposed hybrid model significantly improves leak detection performance. The model achieves an average detection accuracy of approximately 96%, outperforming standalone SVM and ANN models, which achieved accuracies of 85% and 80%, respectively. The improved performance is primarily attributed to the integration of graph-theoretic optimisation for sensor placement, which enhances monitoring coverage and reduces redundancy within the network.
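The graph-theoretic sensor placement component can be illustrated with a greedy maximum-coverage heuristic over the pipe network graph (one common choice; the paper's exact optimisation procedure is not detailed in the abstract):

```python
from collections import deque

def within_hops(adj, src, r):
    """Nodes reachable from src in at most r hops (BFS)."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == r:
            continue
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

def place_sensors(adj, k, r=1):
    """Greedy max-coverage: pick k nodes that cover the most uncovered nodes,
    reducing redundant monitoring in the network."""
    covered, sensors = set(), []
    for _ in range(k):
        best = max(adj, key=lambda n: len(within_hops(adj, n, r) - covered))
        sensors.append(best)
        covered |= within_hops(adj, best, r)
    return sensors, covered
```

On a 5-node path network, two sensors placed this way already cover every junction within one hop.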

Author 1: Giresse M. Komba
Author 2: Topside E. Mathonsi
Author 3: Pius A. Owolawi

Keywords: SVM-ANN-GT; leak detection and localisation; EPANET; WDNs; ML

PDF

Paper 75: Parameter-Driven Evaluation of Zero Trust Security in Blockchain Networks Under Dynamic Threats

Abstract: In this study, we investigate a parameterized method to evaluate the Zero Trust (ZT) security model integrated with blockchain networks under dynamic cyberattack conditions. Existing static trust-based security models fail to adapt to evolving adversarial behavior and dynamic attack conditions. To address these deficiencies, a discrete-time simulation model is developed to capture dynamic trust evolution, probabilistic attack patterns, and threshold-based access control within a financial network. The proposed model considers significant parameters such as the probability of attack, the trust decay rate, and the access, isolation, and quarantine thresholds to evaluate their impact on network security performance. The results show a significant correlation between trust, mitigation, adversarial intensity, and policy parameter tuning. For instance, a strict threshold policy improves attack mitigation but compromises network participation, while a lenient policy improves network participation but compromises network security. The proposed framework is more adaptive, scalable, and robust than conventional static approaches in addressing dynamic threats, and the results indicate that optimal parameter tuning is fundamental to balancing security enforcement in blockchain-based zero-trust networks.
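The threshold trade-off described above can be sketched as a toy discrete-time simulation (the penalty, recovery, and default values here are invented for illustration and are not taken from the paper):

```python
import random

def simulate(steps=200, p_attack=0.1, decay=0.02,
             access_thr=0.5, quarantine_thr=0.2, seed=42):
    """Discrete-time sketch: trust decays each step, detected attacks cut it
    sharply, benign behavior rebuilds it slowly; thresholds gate access."""
    random.seed(seed)
    trust, granted, quarantined = 1.0, 0, 0
    for _ in range(steps):
        trust = max(0.0, trust - decay)            # continuous trust decay
        if random.random() < p_attack:
            trust = max(0.0, trust - 0.3)          # detected attack: penalty
        else:
            trust = min(1.0, trust + 0.05)         # benign step: slow recovery
        if trust < quarantine_thr:
            quarantined += 1                       # node isolated this step
        elif trust >= access_thr:
            granted += 1                           # access permitted this step
    return granted, quarantined
```

Raising `access_thr` can only shrink the set of steps on which access is granted, which mirrors the strict-versus-lenient policy trade-off noted in the abstract.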

Author 1: Samuthira Pandi V
Author 2: T. Vijayanandh
Author 3: A. Jeyamurugan
Author 4: M. D. Boomija
Author 5: V. Parimala
Author 6: S. Preena Jacinth Shalom
Author 7: Lavanya. M
Author 8: Veena. K

Keywords: Zero trust; blockchain security; trust management; cyber-attack simulation; access control; financial networks

PDF

Paper 76: Thread-Sensitive Shared Instruction Cache Analysis for Precise WCET Estimation of Multithreaded Programs

Abstract: Real-time systems require strict adherence to task deadlines, making Worst-Case Execution Time (WCET) analysis essential. WCET estimation typically involves static analysis of a program's instructions, using models of hardware and architectural units, such as shared instruction caches, that are as precise as possible. Shared instruction caches pose a challenge because cache behaviour depends on access history and inter-core interference in multicore systems. Existing approaches do not fully exploit thread lifecycle and synchronization semantics when modeling shared instruction cache behavior. In contrast, the proposed Threaded Program Worst-Case Interference Placement (TP-WCIP) model explicitly incorporates these semantics to eliminate infeasible interference scenarios; by confining interference placement to feasible concurrent execution regions, it achieves more precise WCET estimation. Worst-case latency due to shared instruction cache accesses in the presence of inter-core interference is estimated for each thread in a multithreaded program, with a focus on start, join, and synchronization (wait and notify) primitives. TP-WCIP exploits concurrency and the happens-before relationships induced by these primitives to accurately characterize inter-core interferences. The model is validated using benchmark programs and evaluated against approaches reported in the literature: Cache Block Conflict Number (CCN), Worst-Case Interference Placement (WCIP), and Interference Partitioning (IP). It is established both theoretically and experimentally that TP-WCIP leads to more precise worst-case latency measurements. Result analysis of the Papabench and extended Mälardalen benchmark programs shows that the TP-WCIP model reduces interferences by up to 27% over IP, 53% over WCIP, and 75% over CCN, while preserving up to 16% more shared instruction cache hits than IP, 48% more than WCIP, and 84% more than CCN, thereby delivering more precise static WCET estimates for multithreaded programs on multicore architectures.
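The feasibility filtering idea, that two code regions can contend for the shared cache only if neither happens-before the other, can be sketched over a happens-before graph built from start/join edges (node names and graph shape here are illustrative):

```python
def reaches(hb, src, dst):
    """DFS over happens-before edges (induced by start/join/wait-notify)."""
    stack, seen = [src], set()
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n in seen:
            continue
        seen.add(n)
        stack.extend(hb.get(n, ()))
    return False

def may_interfere(hb, a, b):
    """Regions a and b may execute concurrently (and thus interfere in the
    shared instruction cache) iff neither happens-before the other."""
    return not (reaches(hb, a, b) or reaches(hb, b, a))
```

For example, two thread bodies forked by the same parent may interfere with each other, but neither can interfere with code that runs before the `start` or after the `join`.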

Author 1: Naveeta Rani
Author 2: P Padma Priya Dharishini
Author 3: PVR Murthy

Keywords: Synchronization; concurrency; multicore systems; multithreaded program; shared instruction cache analysis; worst-case execution time

PDF

Paper 77: A Deterministic ANN–CA Computational Framework for Spatial Simulation Using Socioeconomic Data

Abstract: Hybrid approaches combining Cellular Automata (CA) and Artificial Neural Networks (ANN) have been widely applied to spatial simulation; however, most implementations rely on stochastic components that limit reproducibility and interpretability. This study proposes a deterministic ANN–CA computational framework in which the stochastic perturbation term of a constrained CA model is replaced by ANN-derived classification values based on socioeconomic variables. The framework integrates data preprocessing, ANN training, transition coefficient generation, and CA-based simulation into a unified workflow. A multilayer perceptron is trained using spatialized socioeconomic indicators (age, education, sex, and income) to generate deterministic transition potentials at the pixel level. Experimental evaluation using multitemporal land-use data shows that the proposed ANN–CA model achieves a moderate improvement in global spatial association (Cramer’s V: 0.5622 → 0.6016), while pixel-level agreement (Kappa: 0.6589 → 0.6595) remains nearly unchanged. These results indicate that the proposed approach primarily enhances structural coherence and spatial organization—reducing fragmented growth and improving corridor-oriented expansion—rather than significantly increasing pixel-wise predictive accuracy. By replacing stochastic behavior with data-driven deterministic rules, the proposed framework improves reproducibility and provides a more interpretable linkage between urban growth patterns and socioeconomic drivers. This work contributes a transparent hybrid modeling approach suitable for spatial simulation and planning-oriented applications.
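A deterministic transition rule of the kind described, with an ANN-derived potential replacing the stochastic perturbation term, can be sketched as follows (the exact constrained-CA rule and neighborhood weighting used in the paper are assumptions here):

```python
def ca_step(grid, potential, threshold=0.5):
    """One deterministic CA transition: an undeveloped cell develops iff its
    ANN-derived potential, scaled by the fraction of developed Moore
    neighbors, reaches the threshold. No random term is used."""
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            if grid[i][j]:
                continue                       # already developed
            neigh = [grid[x][y]
                     for x in range(max(0, i - 1), min(rows, i + 2))
                     for y in range(max(0, j - 1), min(cols, j + 2))
                     if (x, y) != (i, j)]
            frac = sum(neigh) / len(neigh) if neigh else 0.0
            if potential[i][j] * frac >= threshold:
                nxt[i][j] = 1                  # deterministic rule
    return nxt
```

Because the rule is deterministic, repeated runs on the same inputs yield identical maps, which is the reproducibility property the abstract emphasizes.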

Author 1: Álvaro Peraza Garzón
Author 2: René Rodríguez Zamora
Author 3: Mónica Avelina Gutiérrez Haros
Author 4: Iliana Amabely Silva Hernández
Author 5: Juan Francisco Peraza Garzón

Keywords: Artificial neural networks; cellular automata; spatial simulation; deterministic modeling; hybrid computational framework

PDF

Paper 78: An FMA-Based Action Research Framework for Blockchain-Driven Scholarship Management: A Diagnostic Perspective

Abstract: Scholarship schemes play a vital role in ensuring equity of access to higher education. This is particularly the case in developing countries, where economic problems may become an impediment to academic advancement. In the Indian state of Maharashtra, the Shikshan Shulk Scholarship scheme was designed with the same goal: reducing the financial obstacles faced by eligible students. However, the operation of this scholarship often suffers from delays, manual verification issues, and a lack of transparency. Present scholarship portals are centralized, creating central points of failure; they lack immutable audit trails and depend mainly on manual involvement. This causes inefficiencies, reduces stakeholder trust, and compromises the outcome of financial assistance. To resolve these challenges, this study proposes an FMA (Framework of Ideas, Problem-Solving Methodology, Areas of Application) based Action Research framework for automating Shikshan Shulk Scholarship management using blockchain technology. Through an action research approach, the study establishes how blockchain can improve accountability, reduce delays, and increase operational efficiency in scholarship disbursement. This strategy contributes to streamlining scholarship management as well as to the broader theme of blockchain-based e-governance.

Author 1: Chetna Achar
Author 2: Bharati Wukkadada
Author 3: Harshali Patil

Keywords: Blockchain; scholarship; shikshan shulk scholarship scheme; e-governance; smart contracts; action research; FMA (Framework, Methodology, Applications)

PDF

Paper 79: Clustering Analysis for Extracting Moroccan Health Provinces Typology According to Breast and Cervical Cancer Early Screening

Abstract: Cancer remains a major global concern, and its screening is a complex public health intervention. In Morocco, breast and cervical cancers are the most frequent malignancies among women, accounting for about half of all diagnosed cases. However, screening participation and coverage still vary across provinces. This study proposes a provincial typology of early screening performance using collected indicators for breast and cervical cancer. Before clustering, we applied several dimensionality reduction (DR) methods to improve cluster separability. We adopt a comparative framework that evaluates combinations of DR techniques (PCA, ICA, kernel PCA, t-SNE, and LLE) and clustering algorithms (ACH, K-Means, and GMM) to identify the optimal model with the help of internal validation measures. Kernel PCA with K-Means yields the best model, producing the most coherent province clustering of all tested DR-clustering combinations, with the best overall separation and compactness according to the evaluation metrics. Three clusters were obtained, describing a gradient of early screening system performance: the first group of provinces shows higher screening coverage and stronger diagnostic and referral capacity; the second demonstrates intermediate performance and differentiated service delivery; and the third, with low coverage and restricted access, reflects geographic remoteness and service constraints. These results emphasize marked spatial disparity in preventive service performance and demonstrate how unsupervised learning can support territorial health analysis. The resultant typology can inform targeted action: maintaining quality in high-performing provinces, strengthening operations in intermediate-performing provinces, and prioritizing catch-up interventions in low-performing areas.

Author 1: Meryem Chakkouch
Author 2: Merouane Ertel
Author 3: Aziz Mengad
Author 4: Said Amali
Author 5: Majda Frindy

Keywords: Clustering; PCA; ICA; KPCA; t-SNE; LLE; K-Means; ACH; GMM; breast and cervical cancers early screening

PDF

Paper 80: A Lightweight Smart Contract Blockchain Platform for Secure and Efficient SME Transaction Systems

Abstract: Small and medium enterprises (SMEs) require secure, efficient, and low-cost digital transaction systems. However, many blockchain-based platforms are designed for large-scale applications and impose significant computational overhead, making them unsuitable for resource-constrained SMEs. This study proposes a lightweight smart contract blockchain platform tailored for SME-scale service environments. The system implements a modular smart contract architecture integrated with a lightweight blockchain and automates key transactional processes, including balance top-ups, service ordering, order confirmation, and payment execution, while ensuring data integrity through a simplified Proof-of-Work mechanism. System performance is evaluated using a Design of Experiment (DOE) framework with a full factorial design and analyzed through Analysis of Variance (ANOVA). The results show that execution time remains below 5 seconds under workloads of up to 20 concurrent transactions, with CPU utilization below 55%. ANOVA results indicate that transaction concurrency and smart contract complexity significantly affect performance, while block size has a limited impact. Security evaluation confirms resistance to unauthorized access, double-spending, and reentrancy attacks.
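The simplified Proof-of-Work mechanism can be sketched with the standard nonce search over a hash-difficulty target (an illustrative sketch; the platform's actual block format and difficulty policy are not given in the abstract):

```python
import hashlib
import json

def mine(block, difficulty=3):
    """Find a nonce so the block's SHA-256 hash starts with `difficulty`
    zero hex digits; lower difficulty keeps the search cheap for SMEs."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        payload = json.dumps({**block, "nonce": nonce}, sort_keys=True).encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Hypothetical block carrying SME transactions (top-up, service order).
block = {"index": 1, "prev_hash": "0" * 64, "tx": ["topup:100", "order:42"]}
nonce, digest = mine(block, difficulty=3)
```

Anyone can verify integrity by re-hashing the block with the stored nonce, which is the property the security evaluation relies on.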

Author 1: Sabam Parjuangan
Author 2: Suhardi
Author 3: I Gusti Bagus Baskara Nugraha

Keywords: Smart contract; blockchain; SME digitalization; performance evaluation; transaction systems

PDF

Paper 81: An Efficient Computational Framework for Scalable Learning in Complex Data Environments Using Deep Neural Networks

Abstract: This study introduces an efficient computational framework designed to support scalable learning in complex data environments using deep neural networks. In many real-world settings, data are not only large in volume but also diverse in structure, noisy in quality, and constantly evolving. These conditions often make conventional deep learning pipelines difficult to scale and expensive to maintain, especially when computational resources are limited or when rapid model updates are required. To address these challenges, we propose a framework that integrates adaptive data preprocessing, modular neural network architectures, and resource-aware training strategies into a unified learning pipeline. The framework is built to balance learning performance with computational efficiency, allowing models to be trained and updated without excessive overhead. Experiments were conducted on multiple heterogeneous datasets representing different levels of data complexity and scale. The results show that the proposed approach consistently improves training stability and convergence speed while maintaining competitive predictive performance compared to standard deep learning setups. In addition, the framework demonstrates better adaptability when handling data distribution shifts, which are common in dynamic environments. These findings suggest that scalable learning does not necessarily require increasingly complex model designs, but rather thoughtful integration of computational strategies that align model behavior with data characteristics and system constraints. The proposed framework offers a practical pathway for deploying deep learning solutions in large-scale, real-world applications where efficiency, robustness, and scalability are equally important.

Author 1: Priyanto
Author 2: Heri Nurdiyanto

Keywords: Scalable learning; deep neural networks; computational framework; complex data environments; efficient training

PDF

Paper 82: Enhanced Grey Wolf Optimization Dimension Learning for Energy-Efficient Task Scheduling in Edge Computing Environment

Abstract: The development of edge computing has facilitated numerous applications with diverse characteristics and stringent quality of service (QoS) requirements; these applications demand significant computational power and have strict time-sensitive constraints. While cloud computing offers seemingly unlimited computational resources, it often fails to meet the real-time demands of certain applications because of the latency introduced by the distance between edge devices and cloud data centers. Edge computing provides computational services closer to edge devices, better fulfilling these time-sensitive demands. Task scheduling, which seeks to distribute tasks among diverse virtual machines optimally with respect to overall system performance metrics such as execution time or energy consumption, is one of the key challenges of this heterogeneous computing environment. Task scheduling is an NP-complete problem; therefore, metaheuristic algorithms are usually applied to obtain near-optimal solutions. This study presents an enhanced grey wolf optimization hybridized with a dimension learning-based strategy, EGWODLB, for optimizing QoS objectives, focusing on execution time and energy consumption. The experimental results show that EGWODLB outperforms the benchmark algorithms, achieving significant improvements in execution time, energy consumption, and VM utilization.
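The grey wolf position update that the EGWODLB variant builds on can be sketched as follows (canonical GWO only; the dimension learning-based neighborhood step the paper adds is omitted here):

```python
import random

def gwo_step(wolf, alpha, beta, delta, a):
    """Canonical GWO position update toward the three leader wolves.
    `a` decreases linearly from 2 to 0 over iterations, shifting the
    search from exploration to exploitation."""
    new = []
    for d in range(len(wolf)):
        cand = []
        for leader in (alpha, beta, delta):
            A = 2 * a * random.random() - a        # A in [-a, a]
            C = 2 * random.random()                # C in [0, 2]
            D = abs(C * leader[d] - wolf[d])
            cand.append(leader[d] - A * D)
        new.append(sum(cand) / 3.0)                # average of three pulls
    return new
```

With `a = 0` the randomness vanishes and the wolf lands exactly on the leaders' centroid, which illustrates the exploitation endpoint of the schedule.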

Author 1: Jafar Aminu
Author 2: Rohaya Latip
Author 3: Zurina Mohd Hanapi
Author 4: Shafinah Kamarudin
Author 5: Mustapha Abubakar Giro

Keywords: Edge computing; energy consumption; execution time; task scheduling; grey wolf optimization dimension learning

PDF

Paper 83: A Leakage-Aware and Reproducible Evaluation Framework for Predictive Maintenance Classification

Abstract: Predictive maintenance classification is widely used to support industrial maintenance planning; however, reported model performance is often influenced by evaluation practices that allow unintended information leakage between training and testing data, resulting in optimistic and difficult-to-reproduce estimates. This study examines predictive maintenance classification from the perspective of evaluation design, with a specific focus on quantifying the impact of leakage on performance assessment. A leakage-aware and fully reproducible evaluation protocol is implemented on the AI4I 2020 dataset, which exhibits severe class imbalance representative of practical industrial conditions. A comparative analysis between leakage-prone and leakage-aware evaluation settings shows that leakage-prone configurations can inflate AUC estimates by up to 8–9 percentage points, demonstrating the substantial influence of evaluation design on reported performance. Logistic Regression, Random Forest, and Gradient Boosting models are evaluated using stratified five-fold cross-validation with strictly fold-wise isolated preprocessing. While tree-based models achieved strong discriminative performance (mean AUC = 0.966 and 0.971), recall remained substantially lower than specificity, highlighting the persistent challenge of minority-class detection. The findings demonstrate that evaluation configuration, rather than model architecture alone, can significantly influence performance interpretation and lead to misleading conclusions when leakage is not controlled. This work provides a transparent and reproducible framework for reliable empirical evaluation in predictive maintenance research.
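The leakage-aware principle, fitting preprocessing statistics on the training fold only, can be sketched as follows (stratification and the classifiers themselves are omitted for brevity):

```python
import statistics

def standardize(train, test):
    """Fit scaling parameters on the training fold only, then apply to both,
    so no test-fold statistics leak into preprocessing."""
    mu = statistics.mean(train)
    sd = statistics.pstdev(train) or 1.0
    scale = lambda xs: [(x - mu) / sd for x in xs]
    return scale(train), scale(test)

def kfold_leakage_aware(xs, k=5):
    """Yield (train_scaled, test_scaled) pairs with strictly fold-wise
    isolated preprocessing, the setting the paper's protocol enforces."""
    folds = [xs[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        yield standardize(train, test)
```

The leakage-prone variant would call `standardize` on the full dataset before splitting; the gap between the two configurations is what the 8-9 point AUC inflation quantifies.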

Author 1: Abdulrahman M. Qahtani

Keywords: Predictive maintenance; evaluation methodology; information leakage; reproducible machine learning; cross-validation; class imbalance; performance metrics

PDF

Paper 84: Deep Learning-Based Model to Predict Personality Traits of Social Media Users

Abstract: The rapid expansion of social media platforms has created enormous amounts of user-created content and behavioral information, providing the computational means to study human personality and psychology. This study develops a temporal deep learning model based on Gated Recurrent Units (GRUs) to predict personality traits from behavioral and content-based features obtained from Facebook. The research adopts the Big Five Personality Traits paradigm and aims to model temporal relationships in user activity patterns, including posting frequency, linguistic behavior, and social interaction relationships, to identify latent psychological aspects. A GRU-based framework was created to model sequential dependencies and contextual relationships among user activity timelines. To evaluate model performance and reliability, two comparison baselines, Long Short-Term Memory (LSTM) and Artificial Neural Network (ANN), were run under the same experimental conditions. Model evaluation used regression (Mean Absolute Error, MAE; Coefficient of Determination, R²) and classification (Accuracy, Precision, Recall, F1-score, and AUC-ROC) metrics, validated through 10-fold cross-validation to ensure stability and generalizability. The experimental findings indicated that the proposed GRU model outperformed the baseline models on all evaluation metrics. It had the lowest MAE (0.00825) and the highest R² (0.9917), showing outstanding predictive reliability. In classification performance, GRU achieved high accuracy (96.8%), F1-score (0.96), and AUC-ROC (0.98), exceeding LSTM (F1 = 0.95) and ANN (F1 = 0.84). Trait-level analysis showed high predictive accuracy across all personality dimensions, with Agreeableness (R² = 0.9942, F1 = 0.97) the most accurately predicted and Extraversion (R² = 0.9862) showing high predictive consistency. The cross-validation findings further confirmed the strength and external validity of the GRU framework.
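The GRU recurrence underlying the model can be shown for the scalar case (biases omitted for brevity; this is the standard cell, not the paper's exact architecture):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, p):
    """One GRU step for scalar input x and hidden state h.
    z: update gate, r: reset gate, hh: candidate state.
    p holds the six weights (biases omitted for brevity)."""
    z = sigmoid(p["wz"] * x + p["uz"] * h)
    r = sigmoid(p["wr"] * x + p["ur"] * h)
    hh = math.tanh(p["wh"] * x + p["uh"] * (r * h))
    return (1.0 - z) * h + z * hh   # convex mix of old state and candidate
```

Running this cell across a user's activity timeline yields the final hidden state from which trait scores are regressed.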

Author 1: Faiza Abid
Author 2: Mazni Binti Omar
Author 3: Mohamad Sabri Bin Sinal

Keywords: Personality trait prediction; deep learning; gated recurrent units; human psychology; social media analytics; digital behavior analysis

PDF

Paper 85: A Multi-Vector Framework for Injection Attack Detection Using NLP Lexical–Semantic Fusion with Reinforcement Learning DQN–Based Calibration

Abstract: Injection attacks persist as dominant threats in modern web systems due to obfuscation, polymorphism, and multi-vector exploitation across SQLi, XSS, LDAP Injection, and Command Injection. Existing defenses often rely on static signatures or single-vector models, which limit generalization under adversarial payload mutation. This study addressed that limitation by designing and evaluating a unified multi-vector detection framework that integrated Natural Language Processing (NLP) and Deep Q-Network (DQN) Reinforcement Learning (RL) within a structured Design–Development–Research methodology. The study consolidated heterogeneous open-source datasets comprising 346,954 benign and 653,046 malicious XSS samples, 107,328 benign and 136,746 malicious SQLi samples, 1,591 benign and 515 malicious Command Injection samples, and 1,100 benign and 900 malicious LDAP Injection samples. The pipeline operationalized canonicalized payloads as inputs, hybrid lexical–semantic feature extraction and supervised classification as processes, and probabilistic attack decisions with calibrated thresholds as outputs. The NLP pipeline fused TF-IDF character n-grams with transformer embeddings to preserve structural and contextual signatures. Logistic Regression and One-vs-Rest Linear SVM achieved strong discrimination under group-aware splits, while the DQN agent optimized decision thresholds using reward-based calibration without modifying classifier parameters. Results demonstrated stable ROC and Precision–Recall performance, coherent embedding separation, and convergence of reinforcement learning rewards and loss. The deployed system was evaluated using ISO/IEC 25010 functional suitability criteria, including functional completeness, correctness, and appropriateness, to verify that the detection pipeline executed all required operations and produced reliable decision outputs and explainable, confidence-supported decisions. The framework strengthened secure digital infrastructure, contributing to resilient innovation ecosystems aligned with Sustainable Development Goals 9 and 16.
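The reward-based threshold calibration described in the abstract can be illustrated with a much simpler stand-in: instead of a DQN agent, a minimal sketch that exhaustively searches candidate thresholds for the one maximizing a detection reward. The reward weights below are hypothetical, not the paper's.

```python
def calibrate_threshold(probs, labels, thresholds=None,
                        r_tp=1.0, r_tn=1.0, c_fp=2.0, c_fn=4.0):
    """Pick the decision threshold that maximizes a detection reward.

    probs  : predicted attack probabilities from a fixed classifier
    labels : ground truth (1 = malicious, 0 = benign)
    The reward credits correct decisions and penalizes false alarms
    (c_fp) and missed attacks (c_fn); the weights are illustrative.
    Like the paper's DQN calibration, this adjusts only the decision
    threshold, never the classifier's parameters.
    """
    if thresholds is None:
        thresholds = [i / 100 for i in range(1, 100)]
    best_t, best_reward = 0.5, float("-inf")
    for t in thresholds:
        reward = 0.0
        for p, y in zip(probs, labels):
            pred = 1 if p >= t else 0
            if pred == 1 and y == 1:
                reward += r_tp          # true positive
            elif pred == 0 and y == 0:
                reward += r_tn          # true negative
            elif pred == 1 and y == 0:
                reward -= c_fp          # false alarm
            else:
                reward -= c_fn          # missed attack
        if reward > best_reward:
            best_t, best_reward = t, reward
    return best_t, best_reward
```

Because missed attacks cost more than false alarms here, the selected threshold tends to sit below the default 0.5 when attack probabilities are borderline.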

Author 1: Carlo Jude P. Abuda
Author 2: Cristina E. Dumdumaya

Keywords: Deep Q-network reinforcement learning; injection attack detection; machine learning for cybersecurity; multi-vector attack detection; Natural Language Processing; payload analysis; web application security

PDF

Paper 86: Machine Learning Application in Healthcare: A Case Study Using Ensemble Methods for Hospital Length of Stay Prediction

Abstract: Artificial intelligence is driving digital transformation across multiple sectors, including healthcare, pharmaceuticals, industrial production, and the automotive industry. In healthcare specifically, AI-powered predictive analytics offer significant potential for optimizing operational efficiency and resource allocation. To demonstrate this potential, we present a case study focused on hospital length of stay (LOS) prediction using 2,125,280 admission records from the New York SPARCS database. We implemented and compared four machine learning algorithms: Linear Regression, Random Forest, Gradient Boosting, and XGBoost. Following hyperparameter optimization, the XGBoost model achieved superior performance with R²=0.8686, RMSE=3.24 days, and MAE=1.42 days, substantially outperforming Linear Regression (R²=0.5339, RMSE=6.10 days, MAE=2.86 days). Prediction accuracy reached 63.34% within ±1 day and 89.44% within ±3 days of actual LOS. SHAP analysis identified Total Costs, Total Charges, Hospital Service Area, APR Medical Surgical Description, and APR DRG Code as the most impactful predictors. Performance varied across LOS categories, with MAE ranging from 0.66 days for short stays (1-3 days) to 11.81 days for extended hospitalizations (>30 days). These results demonstrate that ensemble machine learning methods, particularly XGBoost, provide clinically meaningful accuracy for healthcare operational planning, though challenges remain for extended stays and complex cases requiring specialized modeling approaches.
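The interval-accuracy metrics reported above (e.g., 63.34% within ±1 day) are straightforward to compute from predictions; a minimal sketch with illustrative data, not the paper's:

```python
def within_k_days(y_true, y_pred, k):
    """Fraction of LOS predictions within +/- k days of the actual stay."""
    hits = sum(1 for t, p in zip(y_true, y_pred) if abs(t - p) <= k)
    return hits / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error in days."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative values only (not SPARCS data):
actual    = [3, 5, 10, 2]
predicted = [3.5, 4, 12, 2.2]
```

Computing the same metric per LOS category (short, medium, extended) reproduces the kind of stratified error analysis the abstract reports.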

Author 1: Hakima Reddad
Author 2: Maria Zemzami
Author 3: Norelislam El Hami
Author 4: Nabil Hmina
Author 5: Farouk Yalaoui

Keywords: Machine learning; XGBoost; healthcare operations; hospital resource management; ensemble methods; predictive analytics; SHAP analysis

PDF

Paper 87: A Socio-Technical Analysis of Enterprise Architecture Misalignments

Abstract: Modern organizations must adapt to competitive environments through the analysis of their current state and plan for a future state through the use of Enterprise Architecture (EA). EA is a management discipline to align business and IT strategies. However, many organizations face challenges in effectively implementing and using EA due to misalignments between EA parts, such as organizational support, documentation, and governance, leading to inefficiencies. Thus, this study employs the Punctuated Socio-Technical Information System Change model to examine EA misalignments through four interrelated components: structure, task, actor, and technology. It offers a comprehensive analytical lens for examining EA misalignments. This model is used to examine how EA misalignments emerge from disruptions or inconsistencies among various EA components that affect EA coherence and efficiency. Considering the purpose and nature of this research, a case study is a suitable research method. The research issue to be examined, EA misalignments, is contemporary and must be explored in its context. The findings reveal some EA misalignments that are categorized into four groups: organizational, governance, capabilities, and management, highlighting how disruptions among these components affect EA coherence and efficiency. The implications of this research are twofold: first, EA components must be aligned for optimal efficiency; second, any misalignment between these components results in EA operational inadequacies and practice failures.

Author 1: Ayed Alwadain

Keywords: Enterprise Architecture; EA; EA issues; EA misalignments; socio-technical systems

PDF

Paper 88: Bibliometric Mapping and Systematic Review of Deep Learning Approaches in Film and Multimedia Recommendation Systems within New Media

Abstract: The rapid growth of film, video, and multimedia content on new media platforms has intensified information overload, increasing the importance of effective recommender systems. Traditional recommendation approaches face limitations in modeling complex content semantics and dynamic user preferences. Deep learning techniques have been widely adopted to enhance film and multimedia recommendation performance. This study presents a bibliometric mapping and systematic literature review of deep learning film and multimedia recommendation systems in new media. Scopus was used as the primary data source, yielding 679 peer-reviewed studies following a structured screening and inclusion process. The research methodology, search strategy, and selection criteria are explicitly documented. Bibliometric techniques, including citation analysis, keyword co-occurrence, and thematic clustering, are applied to identify influential publications, dominant research streams, and emerging trends. The reviewed literature is synthesized into major thematic areas, including multimodal representation learning, graph-based recommendation, multimedia feature extraction, personalization and cold-start mitigation, fairness and bias, emotion-aware recommendation, and explainability. The findings reveal a strong dominance of multimodal and graph-based deep learning models, particularly those integrating visual, audio, textual, and interaction data. However, many existing approaches rely on shallow feature fusion and demonstrate limited capability in capturing fine-grained semantic relationships, user attraction mechanisms, and contextual meaning. Challenges related to cold-start, sparse feedback, fairness, transparency, and user experience remain insufficiently addressed. 
This study identifies critical research gaps and outlines future research directions, emphasizing the need for semantically rich, explainable, fair, and human-centered multimedia recommender systems capable of supporting the evolving complexity of new media ecosystems.

Author 1: Linlin Hou

Keywords: Deep learning; multimedia recommendation systems; film recommendation; new media platforms; multimodal learning; graph-based recommender systems

PDF

Paper 89: LoRA-Based Fine-Tuning of Local LLMs for Hallucination Detection in Indonesian RAG Systems

Abstract: Retrieval Augmented Generation (RAG) improves the factual grounding of Large Language Models (LLMs) by incorporating external knowledge. However, RAG systems may still generate hallucinated responses, and this issue remains underexplored in Indonesian language settings, particularly where local deployment is preferred. This study proposes a hallucination detection approach for Indonesian RAG systems using Low Rank Adaptation (LoRA) fine-tuning. To support this objective, the study constructs a dataset in the Human-Computer Interaction domain consisting of 908 context–question–answer pairs. The dataset is classified into four categories: FACT-H, FAITH-H, LOG-H, and FAITHFUL. Three local LLMs, namely Gemma-7B-it, LLaMA-2-7B-chat, and Phi-3-medium-4k-instruct, were evaluated using 5-fold cross-validation. The results show that Gemma-7B-it achieved the best performance in the four-class setting, with a Macro F1 score of 0.846. In the binary classification setting, Gemma achieved an accuracy of 98.1%. Further analysis shows that Gemma was particularly effective in recognizing FAITHFUL, FAITH-H, and FACT-H, while LOG-H remained the most difficult class to distinguish consistently.
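For readers unfamiliar with LoRA, the technique replaces full fine-tuning with a trainable rank-r update to each adapted weight matrix. A minimal pure-Python sketch of the adapted forward pass, following the standard LoRA formulation rather than code from the paper:

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha=16.0, r=2):
    """Forward pass of a LoRA-adapted linear layer.

    Output = W @ x + (alpha / r) * B @ (A @ x).
    A (r x d_in) is the trainable down-projection and B (d_out x r)
    the up-projection; B is initialized to zero, so training starts
    exactly at the frozen base model W.
    """
    base = matvec(W, x)
    low_rank = matvec(B, matvec(A, x))
    scale = alpha / r
    return [b + scale * lr for b, lr in zip(base, low_rank)]
```

Only A and B (a few million parameters for a 7B model) are updated during fine-tuning, which is what makes local adaptation of models like Gemma-7B-it tractable.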

Author 1: I Ketut Resika Arthana
Author 2: Nyoman Gunantara
Author 3: Made Sudarma
Author 4: Made Sukarsa

Keywords: Hallucination detection; Retrieval-Augmented Generation; LoRA fine-tuning

PDF

Paper 90: Noncommunicable Eye Diseases Trend Related to Artificial Intelligence: A Bibliometric and Visualization Analysis

Abstract: In recent years, artificial intelligence (AI) has transformed numerous sectors, including healthcare, and ophthalmology is no exception. The field has seen remarkable progress in using AI to detect, diagnose, and manage noncommunicable eye diseases (NCEDs), such as cataract, keratoconus, glaucoma, diabetic retinopathy, and age-related macular degeneration. This study presents a comprehensive bibliometric analysis of 4,280 articles between 2004 and 2026, revealing significant trends in AI-based NCED research. The literature search focused on a highly reputable database: Scopus. The selection of this database ensured a thorough exploration of the field, given its broad coverage of both technical and medical literature. The search strategy employed a carefully curated set of keywords to capture relevant articles and reviews. The field has experienced robust growth, with an average annual increase of 19.41% in publications, peaking in 2023 with 516 articles. Deep learning, particularly Convolutional Neural Networks (CNNs), has emerged as the leading approach, surpassing traditional image processing techniques. Research in medical image analysis has primarily focused on age-related macular degeneration, glaucoma, and diabetic retinopathy, with an increasing emphasis on automated screening systems for early detection. Future trends may include a focus on explainable AI and attention mechanisms, integration with telemedicine, and development of more robust, generalizable models, highlighting its potential to revolutionize early diagnosis and management of eye diseases.

Author 1: Marizuana Mat Daud
Author 2: W Mimi Diyana W Zaki
Author 3: Laily Azyan Ramlan
Author 4: Fazlina Mohd Ali
Author 5: Jun Kit Chaw

Keywords: Artificial intelligence; noncommunicable eye disease; cataract; keratoconus; glaucoma; diabetic retinopathy; age-related macular degeneration

PDF

Paper 91: An Algorithmic Model Based on Optimization of the Production Rules for Phishing Attacks

Abstract: Phishing cyber-threats, a real danger to audit, monitoring, control, and data-acquisition systems in digitalized environments, aim to mislead participants in complex systems and manipulate personal digital data through unauthorized access. In this research, functioning tables, production rules, and algorithmic and mathematical modeling are used as the foundation for formulating, analyzing, and synthesizing the discrete adaptive behavior of large systems. Phishing-identification technologies based on production rules are applied by integrating access to digital resources into control and management operations and processes. A set of production rules is created to distinguish malicious from legitimate resources by their URLs: URL features are extracted algorithmically from a dataset of trusted platforms, logical rules are generated from these features, and the authenticity of URLs is then verified against this rule set. The results are compared with existing models and algorithms, and two different approaches to generating production rules are developed. The study also develops a logical model for building a knowledge base from URL features and demonstrates the representation of malicious attacks through logical implications, conjunctions, and disjunctions. Finally, it tests optimized expressions based on monotone Boolean functions and their canonical (perfect) disjunctive normal form (CDNF) on an independent test dataset in order to select the most efficient rule system.
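The rule-based URL classification described above can be sketched as Boolean feature extraction followed by evaluation of a rule set in disjunctive normal form (a disjunction of conjunctions). The feature set and rules below are illustrative, not the paper's:

```python
import re

def url_features(url):
    """Boolean URL features commonly used in rule-based phishing
    detection (an illustrative set, not the paper's exact features)."""
    host = url.split("//")[-1].split("/")[0]
    return {
        "has_ip":       bool(re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", url)),
        "has_at":       "@" in url,
        "long_url":     len(url) > 75,
        "many_subdoms": host.count(".") > 3,
        "no_https":     not url.startswith("https://"),
    }

def evaluate_dnf(features, rules):
    """A DNF rule set flags a URL if ANY rule (conjunction of feature
    literals) has ALL of its literals true."""
    return any(all(features[lit] for lit in rule) for rule in rules)

# Illustrative rule set: flag raw-IP hosts, or '@' tricks over plain HTTP.
RULES = [("has_ip",), ("has_at", "no_https")]
```

Because all features are monotone (adding a suspicious trait never makes a URL look safer), such rule sets correspond to monotone Boolean functions, the class whose CDNF optimization the paper studies.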

Author 1: Anvar Kabulov
Author 2: Erkin Urinbaev
Author 3: Inomjon Yarashov
Author 4: Alisher Otakhonov

Keywords: Petri nets; phishing; production rules; URL; functioning table

PDF

Paper 92: Deep Learning and Optimization-Driven Intrusion Detection Systems for Internet of Things Security: A Systematic Literature Review

Abstract: The rapid expansion of Internet of Things (IoT) deployments has increased the exposure of interconnected devices to cyber threats, particularly in heterogeneous and resource-constrained environments. Although recent research increasingly emphasizes learning-based detection, classical intrusion detection system (IDS) paradigms remain widely deployed in practical IoT settings due to their interpretability, deterministic behavior, and low computational overhead. This study presents a systematic literature review focused exclusively on classical IDS for IoT environments, including signature-based, anomaly-based, specification-based, and hybrid classical approaches. Following PRISMA-aligned procedures, peer-reviewed studies published between 2021 and 2026 were identified, screened, and synthesized using qualitative comparative analysis. The review examines detection principles, deployment contexts, datasets, evaluation practices, and reported limitations across the classical paradigms. The findings indicate that classical IDS continues to function as a baseline defensive mechanism, particularly at gateway and edge levels. However, persistent challenges remain, including limited capability against zero-day attacks, high false-positive behavior in dynamic environments, scalability constraints, rule maintenance overhead, and restricted adaptability to evolving IoT behavior. This study contributes a consolidated taxonomy and evidence-based analysis of classical IDS deployment characteristics in IoT environments, providing a validated baseline for future intrusion detection research and evaluation.

Author 1: Rosilawati Mohamad
Author 2: Muhammad Arif Mohamad
Author 3: Mohd Faizal Ab Razak
Author 4: Imam Riadi
Author 5: Sri Winiarti
Author 6: Herman Yuliansyah

Keywords: Internet of Things (IoT); intrusion detection system (IDS); deep learning (DL); metaheuristic optimization; systematic literature review (SLR); IoT security

PDF

Paper 93: ProGem: A Hybrid AI Framework for Task Effort Estimation

Abstract: Accurate effort estimation at the task level is essential for effective project planning, resource allocation, and meeting delivery timelines in software development. Traditional approaches have focused primarily on project-level estimation, leaving a critical gap in predicting the duration of individual tasks. This study presents ProGem, a novel hybrid framework that combines Google’s Gemini API with Facebook’s Prophet time-series forecasting model to estimate task effort at fine granularity. ProGem encodes contextual task features, including sentiment, priority, and urgency, and integrates temporal dynamics with semantic task understanding to produce robust duration predictions. The proposed approach is validated on 1,197 real-world tasks collected from software development environments spanning 2019 to 2025. Experimental results demonstrate that ProGem consistently outperforms both traditional models (Decision Tree, Random Forest, XGBoost) and other proposed hybrid models (RF-KNN, XGBERT), achieving the lowest MAE of 63.75, MSE of 9,987.54, RMSE of 100.45, and the highest coefficient of determination (R² = 0.4750). On individual real-world tasks, ProGem produced estimates of 9.16, 3.00, 6.08, 4.10, and 2.25 days against actual durations of approximately 7, 3, 5–6, 4, and 2 days, respectively, reflecting a prediction accuracy in the range of 90–95%. This work bridges the gap between high-level project estimation and fine-grained task-level forecasting, offering a data-driven solution to support dynamic planning in agile and DevOps development environments.

Author 1: Shahid Islam
Author 2: Shazia Arshad
Author 3: Natasha Nigar
Author 4: Jose Lukose

Keywords: Task effort estimation; software project management; time-series forecasting; real-time task insights

PDF

Paper 94: Trend-Based Encoding of Exogenous Time-Series for Interpretable Financial Prediction

Abstract: Integrating heterogeneous exogenous data into financial prediction models is challenging due to scale mismatches and semantic ambiguity. We propose a trend-encoding framework that transforms raw exogenous time-series into directional binary representations, improving predictive robustness while preserving interpretability. Using Saudi stock market data with COVID-19 indicators, we evaluate predictive models under baseline and trend-enhanced configurations. Results show that trend encoding consistently enhances predictive stability over raw inputs. Interpretable models benefit disproportionately, achieving performance comparable to black-box methods. Sectoral analysis reveals heterogeneous sensitivities: Banking responds strongly to case and mortality trends, Energy to recovery indicators, while Food & Beverages shows weaker alignment. These findings show that trend-based encoding of exogenous signals can improve cross-domain financial prediction, particularly for interpretable models.
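The core transformation, encoding a raw exogenous series as directional binary trends, can be sketched in a few lines. This is a minimal interpretation of the abstract; the paper's exact encoding may differ:

```python
def trend_encode(series):
    """Encode a raw exogenous time-series as directional binary trends:
    1 if the value rose from the previous step, else 0. The encoding
    discards absolute scale, which is what sidesteps the scale
    mismatches between exogenous indicators and market data."""
    return [1 if b > a else 0 for a, b in zip(series, series[1:])]
```

Note that the output is invariant to the units of the input series (daily case counts in thousands encode identically to the same series in millions), which is why heterogeneous indicators can be fed to one model.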

Author 1: Khudran M. Alzhrani

Keywords: Trend-based encoding; exogenous time-series; interpretable machine learning; financial prediction forecasting

PDF

Paper 95: Overcoming Temporal Shuffling in Non-Profiled SCA: A Translation-Invariant Deep Learning Approach

Abstract: Side-Channel Analysis (SCA) utilizing deep learning has demonstrated significant potential in recovering secret keys from cryptographic implementations. However, the efficiency of these attacks is often severely compromised by hardware countermeasures such as temporal shuffling, which desynchronizes leakage traces. Existing non-profiled collision attacks successfully mitigate shuffling, but often rely on a “Grey-Box” threat model, requiring prior knowledge of the shuffle permutation to align traces before analysis. This study presents a Global Average Pooling Convolutional Neural Network (GAP-CNN) designed to exploit side-channel collisions in a strict Black-Box setting. By integrating a translation-invariant GAP layer, the proposed architecture forces the network to learn the presence of leakage signatures regardless of their temporal location, effectively neutralizing the shuffling countermeasure end-to-end without pre-processing. The methodology is evaluated on the DPA Contest v4.2 dataset, a highly protected AES-128 implementation. The empirical results demonstrate that the proposed Black-Box approach successfully recovers a majority of the target bytes, outperforming previous Grey-Box baselines. Furthermore, the study demonstrates strong cross-byte portability and cross-dataset robustness against masking countermeasures (ASCAD), confirming the existence of exploitable leakage clusters that persist despite advanced randomization.
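The translation invariance that motivates the GAP layer is easy to demonstrate: averaging a feature map yields the same output wherever the leakage signature occurs along the trace. A minimal sketch (standalone pooling only, not the paper's network):

```python
def global_average_pool(feature_maps):
    """Collapse each 1-D feature map to its mean activation.

    Because the mean ignores position, a leakage signature detected by
    a convolutional filter produces the same pooled value no matter
    where temporal shuffling has moved it within the trace.
    """
    return [sum(fm) / len(fm) for fm in feature_maps]

# The same activation spike at two different temporal positions:
original = [0.0, 0.0, 5.0, 0.0]
shuffled = [0.0, 5.0, 0.0, 0.0]
```

This is why the network can be trained end-to-end on shuffled traces without any permutation knowledge or realignment pre-processing.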

Author 1: Ahmed Ismail
Author 2: Eid Emary
Author 3: Hala Abbas

Keywords: Side-Channel Analysis; Deep Learning; collision attack; shuffling countermeasure; Global Average Pooling; AES

PDF

Paper 96: A Retrieval-Augmented Generation System for Automated Functional Safety Analysis of AUTOSAR Basic Software Module Dependencies

Abstract: This study presents an advanced Retrieval-Augmented Generation (RAG) system designed to assist functional safety engineers in performing safety analysis of AUTOSAR Classic Platform Basic Software (BSW) module dependencies. The system extracts structured dependency information from 128 AUTOSAR Software Specification (SWS) documents in ARXML format and generates draft Failure Mode and Effects Analysis (FMEA), Fault Tree Analysis (FTA), and Dependent Failure Analysis (DFA) tables compliant with AIAG VDA, IEC 60812, IEC 61025, and ISO 26262 standards for human expert review and approval. Key innovations include: 1) LLM-driven table definition extraction that designs optimal analysis output formats based on merged AUTOSAR safety context, ISO 26262 lifecycle considerations, and standard methodologies; 2) content-based inter-module dependency validation that prevents hallucination of non-existent module interactions; 3) ASIL-aware analysis that prioritizes lower-integrity components corrupting higher-integrity components per ISO 26262 freedom from interference; 4) a modular architecture with dual interfaces (CLI tool and LangGraph-based conversational chatbot) where the chatbot reuses core RAG functions, enabling single-source maintenance. The architecture combines semantic chunking with metadata-based filtering for precise module retrieval, episodic and working memory for multi-turn sessions, and automated Excel report generation with source traceability. A comparative evaluation against an LLM-only baseline and a standard semantic-search RAG baseline demonstrates that metadata filtering with content validation eliminates hallucinated dependencies. On a curated stress-test dataset of 15 safety-critical modules representing the most complex BSW interdependencies (watchdog supervision, diagnostics, memory management, communication stacks), the system achieves perfect micro-averaged precision/recall across 95 documented dependencies. Preliminary expert validation by three functional safety engineers confirmed the practical utility of the generated analyses as draft starting points for formal safety assessments.

Author 1: Mohand Hammad
Author 2: Ahmed Moro
Author 3: Mohamed Taher

Keywords: AUTOSAR; RAG; Retrieval-Augmented Generation; functional safety; FMEA; FTA; DFA; ISO 26262; LangChain; vector database; automotive software; safety analysis

PDF

Paper 97: Domain-Agnostic Knowledge Graph Construction for Systematic Hallucination Reduction and Knowledge Reusability in Large Language Models

Abstract: Large Language Models (LLMs) have rapidly advanced the capabilities of automated reasoning and text generation, yet they continue to hallucinate when responding to domain-specific or rapidly evolving queries due to limitations in their static, parametric knowledge. This challenge is especially significant in high-stakes domains where factual accuracy is critical. To address this gap, the present study introduces a domain-agnostic framework called the Web-Constructed Knowledge Graph (WCKG), designed to ground LLM outputs in verifiable, web-retrieved information. Unlike conventional Retrieval-Augmented Generation (RAG) pipelines, WCKG transforms ad-hoc retrieval into structured, reusable knowledge through automated, query-triggered web searches that extract entities and relations and synthesize them into lightweight, provenance-aware knowledge graphs maintained locally within user sessions. A global registry stores only abstracted metadata, ensuring decentralized knowledge management and privacy while enabling efficient indexing and discovery. Web-grounded reasoning is achieved by serializing relevant graph fragments directly into LLM prompts. Experimental evaluation demonstrates that this framework generates coherent knowledge graphs, supports iterative refinement through user interactions, and improves the reliability of model responses across diverse domains, achieving an average hallucination reduction of 3.3% over a RAG baseline. The findings imply that WCKG can convert transient LLM interactions into evolving knowledge resources, offering a practical foundation for long-term reasoning, model adaptation, and decentralized knowledge sharing in future AI systems.
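The grounding step, serializing relevant graph fragments into the LLM prompt, can be sketched as follows; the line format and provenance annotation are assumptions, since the paper does not specify its exact serialization:

```python
def serialize_triples(triples, source=None):
    """Serialize knowledge-graph triples into plain text for an LLM prompt.

    Each (subject, relation, object) triple becomes one line; optional
    provenance is appended so the model's answer can be traced back to
    the web source that supplied the fact. (Hypothetical format.)
    """
    lines = []
    for s, r, o in triples:
        line = f"{s} -[{r}]-> {o}"
        if source:
            line += f"  (source: {source})"
        lines.append(line)
    return "\n".join(lines)
```

The serialized fragment is prepended to the user query, so the model reasons over verifiable, structured facts rather than its parametric memory alone.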

Author 1: Durvesh Narkhede
Author 2: Rama Gaikwad
Author 3: Saniya Jadhav
Author 4: Pratiksha Ovhal
Author 5: Nigam Roy
Author 6: Prasad Dhanade

Keywords: Large Language Models; knowledge graph construction; hallucination reduction; Retrieval-Augmented Generation; web-grounded reasoning; decentralized knowledge systems

PDF

Paper 98: ANN-Based Employee Performance Prediction: A Comparative Analysis of Optimization Techniques

Abstract: With the increasing use of artificial intelligence in decision-making systems, predicting employee performance has attracted growing attention in human resource analytics. This study aims to systematically evaluate the impact of data preprocessing and model optimization techniques on artificial neural network (ANN)-based prediction of employee performance in HR analytics. Three publicly available HR datasets were used, and multiple configurations involving feature selection, feature extraction, principal component analysis (PCA), reduced architectures, and regularization were evaluated. The experimental results show that appropriate feature selection and regularization consistently improve predictive performance across datasets, whereas PCA-based dimensionality reduction resulted in lower accuracy in the evaluated datasets, possibly due to the loss of discriminative information. Additionally, simplified ANN architectures yielded modest, but consistent improvements in generalization performance across datasets, highlighting the importance of controlling model complexity. The top-performing configurations across the assessed datasets achieved accuracies ranging from 81% to 96%. These findings offer practical guidance on selecting efficient preprocessing and architectural techniques when applying ANN-based models in human resource analytics.

Author 1: Rahaf Mohammed Bajhzer
Author 2: Yousef Alsenani
Author 3: Sahar Jambi
Author 4: Tawfiq Hasanin

Keywords: Employee performance prediction; artificial neural networks; data preprocessing; model optimization; HR analytics

PDF

Paper 99: RollupFL: An Auditable Federated Learning Framework for Byzantine Client Accountability

Abstract: Federated learning (FL) trains a shared model without sending raw data, but some clients can be Byzantine and send harmful updates. Robust aggregation methods like Median and Krum can reduce poisoning damage, but they do not clearly show which client attacked. In this study, we propose RollupFL, an audit layer for FL that improves accountability under Byzantine attacks. RollupFL keeps aggregation and auditing separate, so it can work with FedAvg, Median, or Krum without changing how aggregation is computed. We study two audit designs: simple logging, which is fast, but assumes a trusted server, and blockchain-based audit, which gives stronger integrity and attribution, but adds more latency. We evaluate MNIST training for 20 rounds with 10%–30% Byzantine clients under sign-flip and model-replacement attacks. Results show that auditing does not meaningfully change accuracy, but it improves accountability. At 30% Byzantine, blockchain audit achieves higher attribution (0.95) and tamper detection (0.92) than logging (0.65 and 0.58). Logging adds small per-round latency, while blockchain adds larger latency mainly due to ledger writing.
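The blockchain audit design described above can be approximated by a hash-chained log, where each entry commits to its predecessor so that tampering with any past client-update record is detectable. This is a simplified stand-in, not the paper's implementation:

```python
import hashlib
import json

def append_entry(chain, record):
    """Append a client-update record to a hash-chained audit log.

    Each entry stores the hash of the previous entry, so rewriting any
    past record (e.g., to hide a Byzantine update) breaks verification.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify_chain(chain):
    """Recompute every hash and link; return False on any tampering."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"record": entry["record"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True
```

A plain log (the paper's faster design) would skip the hash links, which is exactly why it needs a trusted server: nothing stops that server from silently rewriting past records.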

Author 1: Md Tahmid Ashraf Chowdhury
Author 2: Fasee Ullah
Author 3: Shanjida Islam Labonno
Author 4: Shahid Kamal
Author 5: Mohammad Ahsanul Islam

Keywords: Federated learning; Byzantine attacks; audit layer; accountability; attacker attribution; tamper detection; robust aggregation; FedAvg; blockchain audit; sign-flip attack; model-replacement attack

PDF

Paper 100: Predicting Concession Curves of Negotiating Agents Using Machine Learning

Abstract: Accurate opponent modeling is critical for effective automated negotiation, enabling agents to adapt their strategies based on the type of opponent. This study investigates machine learning approaches for classifying negotiation agent strategies from offer sequences across three scenarios: time-dependent agents following predetermined concession functions, strategic agents adapting to opponent behavior with deadline-only termination, and strategic agents with realistic termination through mutual agreement or deadline expiration. We systematically evaluate four algorithms (Naive Bayes, Random Forest, Support Vector Machines, and Neural Networks) on a set of simulated negotiations, comparing classification performance with and without temporal feature augmentation. A key contribution of this work is the introduction of temporal feature augmentation, where quarterly concession patterns and variance metrics are used to capture adaptive negotiation behavior that raw offer sequences alone cannot reveal. The augmented features encode temporal adaptation characteristics that distinguish Boulware, Linear, Conceder, and strategic negotiation behaviors. Feature augmentation produced statistically significant improvements in 7 of 12 model–scenario combinations, with the most notable gains observed in strategic agent identification.
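The time-dependent concession functions being classified (Boulware, Linear, Conceder) follow the standard single-exponent family; a minimal sketch with illustrative utility bounds:

```python
def time_dependent_offer(t, u_min=0.3, u_max=1.0, e=1.0):
    """Utility an agent demands at normalized time t in [0, 1].

    e < 1 -> Boulware (holds firm, concedes only near the deadline),
    e = 1 -> Linear,
    e > 1 -> Conceder (gives ground early).
    u_min and u_max are illustrative reservation/aspiration utilities.
    """
    return u_min + (u_max - u_min) * (1 - t ** (1 / e))
```

A classifier sees only the resulting offer sequence; distinguishing these curves from a strategic (opponent-adaptive) agent is what the temporal feature augmentation in the paper targets.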

Author 1: Khalid Mansour

Keywords: Automated negotiation; strategy classification; machine learning; feature engineering; strategic agents

PDF

Paper 101: Comparative Study of Supervised Machine Learning Models for Fake News Detection with Interpretability and Statistical Validation

Abstract: The rapid proliferation of fake news across digital platforms has intensified the need for reliable and computationally efficient automated detection systems. While deep learning models have demonstrated strong performance, their high computational cost and limited interpretability restrict practical deployment in real-time systems. This study proposes a structured comparative framework that evaluates seven supervised machine learning algorithms—Decision Tree, Passive Aggressive, Support Vector Machine (SVM), Random Forest, Logistic Regression, Perceptron, and Naïve Bayes—under identical preprocessing and feature engineering conditions using a balanced dataset of 44,989 news articles. Unlike prior works that emphasize accuracy alone, this research integrates statistical validation, computational efficiency analysis, and interpretability assessment using SHAP explanations. Experimental results show that the Decision Tree model achieved the highest accuracy of 99.58%, closely followed by Passive Aggressive (99.57%) and SVM (99.45%). Additionally, tree-based and linear classifiers demonstrated superior stability and lower computational overhead compared to more complex architectures. The findings indicate that interpretable and computationally efficient supervised models remain highly competitive for large-scale fake news detection, offering practical advantages for real-time deployment in digital media monitoring systems.

Author 1: Bayan M. Alsharbi

Keywords: Fake news detection; supervised learning; Decision Tree

PDF

Paper 102: A Systematic Review on Crowd Density Estimation Using Deep Learning Techniques: State-of-the-Art Methods and Future Challenges

Abstract: Estimating crowd density is a cornerstone of modern urban management and public safety, particularly in the aftermath of catastrophic incidents, such as the 2015 Mina stampede. With the rapid advancement of artificial intelligence (AI) technologies, deep learning (DL) has emerged as a powerful tool for addressing these challenges. This systematic review provides a comprehensive evaluation of current crowd density estimation methodologies, analyzing model architectures, datasets, and research trends. The review was conducted in accordance with PRISMA 2020 guidelines, and the search encompassed five major electronic databases (IEEE Xplore, Scopus, Google Scholar, Web of Science, and ScienceDirect) for the period 2020 to 2025. The selection process relied on rigorous eligibility criteria, including English-language publications that offer methodological contributions or empirical assessments in the field of computer vision and machine learning (ML). Twenty final studies were included, 70% of which were published in scientific journals. The analysis revealed that 55% of the studies relied entirely on DL models, while 30% leaned towards hybrid modelling. The ShanghaiTech dataset remained the most frequently used benchmark, accounting for 50% of the studies, followed by UCF CC 50 and WorldExpo’10 datasets. Although some models achieved a high accuracy of 99.88%, they still faced challenges in highly congested scenes and visual obstructions. This review reveals a growing shift towards edge intelligence and lightweight models to reduce latency, with a pressing need for more diverse datasets to minimize bias. This study concludes that bridging the gap between simulation and reality requires integrating contextual information and behavioral analysis to enable more reliable, proactive, and real-time crowd management.

Author 1: Norah Aloufi
Author 2: Liyakathunisa Syed

Keywords: Crowd density estimation; computer vision; deep learning; PRISMA 2020; systematic literature review

PDF

Paper 103: Lightweight Human Parsing with Multi-Scale Context for Edge Devices

Abstract: For human parsing in wild and cluttered environments, deep architectures are widely used because they deliver strong segmentation performance, but at the price of large model size and high computational complexity. These properties severely limit their deployment on resource-constrained platforms, particularly for real-time edge intelligence. In this study, we propose a lightweight human parsing framework, named Fast DSPP+PGN+Attn, that targets the efficiency-accuracy trade-off. The proposed model consists of a MobileNetV2 backbone (i.e., the AirLab-Net), a Dilated Spatial Pyramid Pooling (DSPP) block to capture multi-scale contextual information, a pixel grouping decoder employing the PGN for improved part-boundary consistency, and spatial and squeeze-and-excitation attention modules for feature refinement. Despite its compact size (2.14M parameters and 5.70 GFLOPs), the model achieves 40.67% mean IoU (mIoU) and 87.3% pixel accuracy on the CIHP benchmark while running at 51.9 frames per second on a single GPU. These findings indicate that combining contextual aggregation with structured pixel grouping exploits complementary, orthogonal cues and can improve segmentation quality without sacrificing real-time performance. The proposed method is therefore broadly applicable to embedded vision systems, surveillance, and mobile perception.
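The core idea of the DSPP block is to run the same small kernel at several dilation rates in parallel, enlarging the receptive field without adding parameters, and to stack the responses. A single-channel numpy sketch of that idea (the fixed averaging kernel and the dilation set are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """Single-channel 'same' convolution with a dilated 3x3 kernel and
    zero padding; dilation widens the receptive field at no extra
    parameter cost."""
    k = kernel.shape[0]
    pad = dilation * (k // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            patch = xp[i : i + dilation * k : dilation,
                       j : j + dilation * k : dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def dspp(x, dilations=(1, 2, 4)):
    """Toy Dilated Spatial Pyramid Pooling: apply the same kernel at
    several dilation rates and stack the multi-scale response maps."""
    kernel = np.full((3, 3), 1.0 / 9.0)   # fixed averaging kernel, for illustration
    return np.stack([dilated_conv2d(x, kernel, d) for d in dilations])

feat = np.random.rand(16, 16)
pyramid = dspp(feat)
print(pyramid.shape)  # (3, 16, 16): one response map per dilation rate
```

In the actual network these per-rate branches would be learned convolutions whose concatenated output feeds the PGN decoder.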

Author 1: Abderrahim Ouza
Author 2: Mohamed El Ghmary
Author 3: Ali Choukri

Keywords: Human parsing; lightweight networks; multi-scale representation; edge computing; real-time segmentation

PDF

Paper 104: Time-Aware Hierarchical Attention Recurrent Neural Networks for Multi-Criteria Recommender System

Abstract: Recommendation systems are an important component of many online platforms, especially in the e-commerce domain, suggesting items to users based on past interactions such as reviews, ratings, and purchase history. Traditional recommendation systems allow users to give only a single rating per item. Recently, deep learning approaches have improved recommendation accuracy in single-rating systems, but a single rating does not convey enough information about a user's preferences for an item. Domains such as gaming, movies, and tourism let users rate an item on multiple criteria, which makes user preferences easier to understand than in single-rating systems. In this study, we propose a Time-Aware Hierarchical Attention Recurrent Neural Network (TAH-RNN), a deep learning-based approach designed to utilize ratings across multiple criteria. Our approach captures, for each user, the association between multi-criteria ratings and the overall rating. The model integrates temporal dynamics with multi-criteria ratings through a Time-Aware Importance-Based Sequence Formation mechanism, which assigns importance weights to each criterion based on interaction time and enables hierarchical attention to learn their relationships over sequential user behavior. Experiments on real-world datasets (TripAdvisor, BeerAdvocate, and Skytrax Airlines) show that the proposed approach performs well against both single-rating systems and multi-criteria approaches across various metrics.
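A simple way to picture the time-aware weighting step is exponential decay: older interactions get smaller normalized importance weights before the weighted ratings are combined. The decay rate and the aggregation rule below are invented stand-ins for the paper's learned mechanism, not its actual formulation:

```python
import math

def time_aware_weights(timestamps, now, decay=0.01):
    """Assign each past rating an importance weight that decays
    exponentially with its age, then normalize the weights to sum to 1
    (a hand-rolled stand-in for the paper's Time-Aware
    Importance-Based Sequence Formation step)."""
    raw = [math.exp(-decay * (now - t)) for t in timestamps]
    total = sum(raw)
    return [r / total for r in raw]

def weighted_overall(ratings, weights):
    """Combine ratings using the time-derived importance weights."""
    return sum(r * w for r, w in zip(ratings, weights))

# Three ratings given 100, 10, and 1 time units ago:
w = time_aware_weights([0, 90, 99], now=100)
print([round(x, 3) for x in w])   # the most recent rating dominates
print(round(weighted_overall([3.0, 4.0, 5.0], w), 2))
```

In TAH-RNN these weights would modulate the criterion sequence fed to the hierarchical attention layers rather than a plain weighted sum.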

Author 1: Manogna Vankayalapati
Author 2: V Ramanjaneyulu Yannam
Author 3: Sarada Korrapati
Author 4: Murali Krishna Enduri

Keywords: Recommendation system; multi-criteria ratings; time-aware; hierarchical attention; recurrent neural networks; user preferences

PDF

Paper 105: PicLingo: A GenAI-Based System for Language-Disabled Children

Abstract: Children with language disorders often face challenges in understanding sentences and communicating effectively with others. While previous studies have utilized static digital games and automated feedback to support vocabulary and spelling, there remains a significant gap in leveraging generative models to provide dynamic, personalized visual reinforcement for verbal tasks. This study presents a new approach to support language development through PicLingo, a GenAI-powered system developed to assist both children and their mentors. A comparative experimental methodology is used to evaluate multiple generative models using MS COCO samples and standard metrics, including Inception Score (IS), Fréchet Inception Distance (FID), and human evaluation. PicLingo’s primary feature is a text-to-image generation (TTI) task that generates illustrative images from textual descriptions. Additionally, the system includes an interactive game that uses speech recognition technology to encourage active verbal participation. This approach aims to enhance language development and overall communication skills. The experimental results demonstrate that the proposed Stable Diffusion-based architecture significantly outperforms baseline models in generating high-quality, semantically accurate images, suggesting PicLingo as a promising, interactive tool for enhancing verbal communication and tracking linguistic progress in children with language disorders.
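Of the metrics named above, FID measures how far generated image features drift from real ones by comparing two Gaussians. A simplified numpy sketch under a diagonal-covariance assumption (the full FID uses a matrix square root of the covariance product, and the features would come from an Inception network; the toy feature sets here are synthetic):

```python
import numpy as np

def fid_diagonal(feats_real, feats_gen):
    """Fréchet distance between two Gaussians fitted to feature sets,
    simplified to diagonal covariances:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2))."""
    mu1, mu2 = feats_real.mean(0), feats_gen.mean(0)
    var1, var2 = feats_real.var(0), feats_gen.var(0)
    return (np.sum((mu1 - mu2) ** 2)
            + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 8))    # stand-in "real" features
close = rng.normal(0.05, 1.0, size=(1000, 8))  # good generator
far = rng.normal(2.0, 1.0, size=(1000, 8))     # poor generator
print(fid_diagonal(real, close))  # small: distributions nearly match
print(fid_diagonal(real, far))    # large: generated features drifted
```

Lower FID is better, which is how a Stable Diffusion-based pipeline would be shown to outperform the baseline generators.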

Author 1: Razan Alatawi
Author 2: Shahad Alamri
Author 3: Renad Almaghthawi
Author 4: Shada Alofi
Author 5: Ghada Alharbi
Author 6: Rehab Albeladi

Keywords: PicLingo; Generative AI; text-to-image generation; speech recognition

PDF

Paper 106: An Evidence-Aware and Risk-Sensitive Retrieval-Augmented Generation Framework for Internal Auditing

Abstract: Large Language Models (LLMs) enhanced with Retrieval-Augmented Generation (RAG) can aid internal auditing, particularly in document search and analysis. However, most RAG-based audit tools emphasize quick document access and ease of use rather than deeper audit reasoning. They offer little support for essential audit procedures such as maintaining a clear evidence trail, assessing risk, and making defensible judgments. As a result, they have yet to find a place in continuous internal auditing, which demands rigorous evidence and adherence to recognized auditing standards. This study introduces an Evidence-Aware and Risk-Sensitive Retrieval-Augmented Generation (ER2-RAG) framework for internal auditing. The framework goes beyond document retrieval: it also manages audit evidence and accounts for risk. It links audit conclusions to supporting documents with confidence levels, adapts information retrieval to audit risk and materiality, and constrains the generation process to standard audit reasoning practices. These design choices make AI assistance more transparent, reliable, and defensible in audit judgments. ER2-RAG was developed and evaluated on typical audit scenarios involving exception analysis, evaluation of control effectiveness, and monitoring of procedural compliance, following a design science methodology. Compared with earlier RAG approaches, ER2-RAG is efficient, presents a broader scope of evidence, references sources more accurately, and produces clearer arguments. The results indicate that risk sensitivity and evidence management must be taken into account when adopting AI systems for continuous internal audits. This research transforms RAG from an information-retrieval aid into a reasoning foundation for professional assurance, striving to enhance audit reliability and guide the future development of evidence-aware AI systems.
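One way to picture "evidence-aware and risk-sensitive" retrieval is a ranking rule that blends retriever relevance with the assessed risk and materiality of the audit area, and drops low-confidence evidence before generation. The scoring formula, field names, and thresholds below are hypothetical illustrations; the abstract does not specify ER2-RAG's actual scheme:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    doc_id: str
    relevance: float    # retriever similarity score in [0, 1]
    risk: float         # assessed audit risk of the area in [0, 1]
    materiality: float  # materiality weight in [0, 1]

def evidence_score(e, risk_weight=0.5):
    """Blend retrieval relevance with risk and materiality so that
    high-risk, material areas surface first (hypothetical rule)."""
    return (1 - risk_weight) * e.relevance + risk_weight * e.risk * e.materiality

def rank_evidence(pool, min_confidence=0.3, risk_weight=0.5):
    """Drop weak evidence, then rank the rest for the generator."""
    kept = [e for e in pool if e.relevance >= min_confidence]
    return sorted(kept, key=lambda e: evidence_score(e, risk_weight), reverse=True)

pool = [
    Evidence("policy-12", relevance=0.9, risk=0.2, materiality=0.3),
    Evidence("ledger-07", relevance=0.6, risk=0.9, materiality=0.9),
    Evidence("memo-03",  relevance=0.2, risk=0.8, materiality=0.8),
]
ranked = rank_evidence(pool)
print([e.doc_id for e in ranked])  # ['ledger-07', 'policy-12']
```

Note how the risky, material ledger outranks the more textually relevant policy document, while the weakly supported memo is filtered out entirely.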

Author 1: Tareq Fahad Aljabri
Author 2: Mariam Abdulaziz Alnajim

Keywords: LLM; RAG; audit; digitalization; automation

PDF

Paper 107: A Hybrid Modeling and Control Framework for Intelligent Wheelchairs Using Timed Petri Nets and Machine Learning

Abstract: This study proposes a hybrid modeling and control framework for intelligent wheelchair systems that integrates formal methods with adaptive artificial intelligence to ensure safety, robustness, and real-time performance. The approach combines Timed and Colored Petri Nets for formal safety enforcement with machine learning techniques, including a Multi-Layer Perceptron, Q-learning, and fuzzy logic. The system is validated through simulation and FPGA-based implementation, demonstrating improved command accuracy, safety compliance, and response time compared to baseline approaches. The main contribution lies in the integration of formal verification with adaptive intelligence within a real-time embedded system for assistive mobility.
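The formal layer above rests on timed Petri net semantics: a transition fires only when all its input places hold tokens, consumes them, lets its delay elapse, and marks its output places. A minimal sketch of that firing rule (the places, transition, and delay model a hypothetical wheelchair safety interlock, not the paper's actual net, which also uses colored tokens):

```python
class TimedPetriNet:
    """Minimal timed Petri net: transitions consume tokens from input
    places and, after a fixed delay, deposit tokens in output places."""
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.clock = 0.0
        self.transitions = {}          # name -> (inputs, outputs, delay)

    def add_transition(self, name, inputs, outputs, delay):
        self.transitions[name] = (inputs, outputs, delay)

    def enabled(self, name):
        inputs, _, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs, delay = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        self.clock += delay            # time elapses while firing
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Safety interlock: a move command fires only when the path is clear.
net = TimedPetriNet({"cmd_received": 1, "path_clear": 1})
net.add_transition("execute_move", ["cmd_received", "path_clear"],
                   ["moving"], delay=0.05)
net.fire("execute_move")
print(net.marking, net.clock)
```

This is exactly the kind of property (a move can never fire without a `path_clear` token) that formal verification checks before the ML layer chooses among enabled actions.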

Author 1: Ayoub Elbazzazi
Author 2: Ikram Dahamou
Author 3: Cherki Daoui

Keywords: Timed Petri Nets; assistive robotics; adaptive control; neural networks; fuzzy logic; FPGA; human-machine interaction

PDF

Paper 108: Bi-Transformers-Aided Contextual Contrastive Learning for Sequential Recommendation

Abstract: Contrastive learning (CL) built on Transformer sequence encoders offers a robust framework for sequential recommendation by effectively addressing data noise and sparsity. By exploiting the advantages of CL, these models learn rich representations from sequences of user historical interactions, leading to improved recommendations and user satisfaction. However, recent CL methods suffer from two limitations. First, CL approaches are mainly designed to process input sequences in a single direction, i.e., left to right, which is sub-optimal for sequential prediction because user historical interactions do not necessarily follow a fixed single-direction sequence. Second, these models design CL objectives based solely on the input sequence, overlooking the valuable self-supervision signal available in auxiliary descriptive text. To overcome these limitations, we introduce a new framework named Bi-Transformers-aided Contextual Contrastive Learning for Sequential Recommendation (CCLRec). Specifically, bidirectional Transformers are extended to incorporate auxiliary information through sentence embeddings formulated from each item's textual description. We then introduce the rolling glass step technique for handling lengthy user sequences and the descriptive features of the corresponding items, enabling a more refined partitioning of user sequences. Finally, cloze-task, random-occlusion, and dropout masking strategies are jointly applied to generate high-quality positive samples, improving the performance of the contrastive learning objective. Comprehensive experiments on three benchmark datasets demonstrate that CCLRec consistently outperforms state-of-the-art baselines, achieving NDCG@10 improvements of 5.69% to 6.34% across the MovieLens-1M, Amazon Beauty, and Amazon Toys datasets.
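Two of the ingredients above are easy to sketch: partitioning a long interaction sequence into overlapping windows, and masking items in a window to create positive views for the CL objective. The window size, step, and mask ratio are invented here, and the overlapping-window reading of the "rolling glass step" is our interpretation of the abstract, not a confirmed detail:

```python
import random

def window_partition(sequence, size, step):
    """Partition a long user-interaction sequence into overlapping
    fixed-size windows (a sliding-window reading of the paper's
    rolling glass step technique)."""
    return [sequence[i : i + size]
            for i in range(0, max(len(sequence) - size, 0) + 1, step)]

def random_occlusion(window, mask_token="[MASK]", ratio=0.3, seed=None):
    """Create a positive view for contrastive learning by masking a
    fraction of the items, similar to a cloze-style augmentation."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(window) * ratio))
    idx = set(rng.sample(range(len(window)), n_mask))
    return [mask_token if i in idx else item for i, item in enumerate(window)]

seq = [f"item{i}" for i in range(10)]
windows = window_partition(seq, size=4, step=2)
print(windows[0], windows[-1])
view_a = random_occlusion(windows[0], seed=1)
view_b = random_occlusion(windows[0], seed=2)
print(view_a, view_b)   # two augmented views of the same window
```

Two differently masked views of the same window form a positive pair; the bidirectional Transformer is then trained to pull their representations together while pushing apart views from other windows.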

Author 1: Adel Alkhalil
Author 2: Ikhlaq Ahmed
Author 3: Zafran Khan
Author 4: Mazhar Abbas
Author 5: Aakash Ahmad
Author 6: Abdulrahman Albarrak

Keywords: Contextual sequential recommendation; bidirectional transformers; contrastive learning; auxiliary information

PDF

The Science and Information (SAI) Organization Limited is a company registered in England and Wales under Company Number 8933205.