The Science and Information (SAI) Organization
IJACSA Volume 17 Issue 4

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: EU Public Procurement Contract Value Prediction: A Machine Learning Approach

Abstract: Using modern analytical tools, such as machine learning, organisations can collect and analyse large amounts of data related to suppliers, pricing, demand structure, and market trends. This study uses machine learning models to predict the value of EU public procurement contracts based on the Tenders Electronic Daily (TED) database. The analysis covers 13,345,120 initial contract records for the period 2006-2025 across 33 states and 63 procurement sectors. After careful data quality control procedures, the analytical set comprised 10,038,018 valid contracts (a 75% retention rate). Three complementary methodologies were used: Random Forest regression to identify nonlinear patterns, ordinary least squares (OLS) regression to interpret coefficients, and K-means clustering to classify procurement behaviour at the country level. The Random Forest achieved cross-validation R²=0.2795 and test R²=0.2613, with the country of origin dominating predictive importance (Germany: 24.39%, United Kingdom: 2.33%, Italy: 1.96%). Temporal features accounted for 18.36% of feature importance, while competition indicators (number of proposals: 8.99%) and structural characteristics (batch number: 12.29%) also had a significant impact. The OLS regression showed statistically significant country effects: Germany showed 98.4% lower contract values despite being Europe's largest economy, reflecting federal administrative fragmentation, while Italy showed 295.7% higher values owing to centralised infrastructure projects. K-means clustering revealed three clear procurement profiles: Greece as a transparency-focused outlier (109 average bids per contract), 19 mature, high-value economies, and 13 lower-value, fragmented systems, including Germany. The results show that institutional frameworks dominate economic factors in determining contract value, which has policy implications for the design of procurement systems across the EU.

Author 1: Muhammad Azizur Rahman
Author 2: Anna Hlinska

Keywords: Public procurement; EU procurement; machine learning; Random Forest; Contract Value Prediction; TED database; institutional economics; OLS regression; K-means clustering; predictive modelling; data quality; EU AI Act

PDF
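The country-level K-means profiling described above can be illustrated with a minimal sketch. This is plain two-feature K-means on invented values (the feature numbers below are illustrative, not the paper's data; only the Greece-as-high-competition-outlier pattern is taken from the abstract):

```python
# Minimal K-means sketch for grouping countries by procurement behaviour.
# Feature values are hypothetical; the real study clusters 33 states.

def kmeans(points, centroids, iters=20):
    """Plain K-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        centroids = [
            tuple(sum(col) / len(col) for col in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    labels = []
    for p in points:
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        labels.append(dists.index(min(dists)))
    return labels, centroids

# Hypothetical (avg bids per contract, avg log contract value) per country.
features = {
    "Greece": (109.0, 10.0),   # high-competition outlier, as in the abstract
    "France": (5.0, 14.0),
    "Italy": (6.0, 14.5),
    "Germany": (4.0, 9.0),
    "Poland": (3.5, 9.5),
}
pts = list(features.values())
labels, _ = kmeans(pts, centroids=[pts[0], pts[1], pts[3]])
profile = dict(zip(features, labels))
```

With these toy features the three clusters separate the outlier, the high-value group, and the fragmented low-value group, mirroring the three-profile structure the abstract reports.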

Paper 2: Unified Modular Architectural and Software Model for Educational XR Systems

Abstract: This article presents the development and validation of a unified architectural and software model for educational Extended Reality (XR) systems, which integrates the technological, pedagogical, and interface-related aspects of virtual and augmented reality into a coherent, modular, and interoperable framework. The proposed model addresses key limitations of existing XR solutions in education—such as platform dependency, limited scalability, and insufficient pedagogical grounding—through the use of WebXR technologies and open standards. The architecture is designed as a multi-layered structure that ensures cross-platform access, real-time interactivity, adaptive levels of immersion, and seamless integration with learning management systems. A conceptual model is formulated that explicitly links architectural design decisions to instructional objectives and establishes the principle of pedagogically determined design of XR learning environments. The applicability of the proposed model is evaluated through the design, implementation, and functional testing of a prototype WebXR-based educational system for engineering education. The results demonstrate that the unified architecture provides a sustainable, scalable, and pedagogically grounded foundation for the effective integration of XR technologies in educational contexts.

Author 1: Miroslav GALABOV

Keywords: Extended reality; virtual reality; augmented reality; educational XR systems; software architecture; WebXR; immersive learning; modular architecture; interactive simulations

PDF

Paper 3: Interruptible Multi-Agent Debate: Sentence-Level Disclosure and Urgency-Based Turn-Taking for Early Error Correction

Abstract: Multi-agent debate (MAD) has emerged as a promising approach for improving the reasoning ability of large language models (LLMs). However, existing turn-taking schemes typically disclose a speaker’s entire utterance before other agents can respond, allowing erroneous premises to spread through the shared context and making early correction difficult. This study proposes an interruptible MAD framework that enables early error correction through sentence-level disclosure and urgency-based turn-taking under a shared public-token budget. Each non-speaking agent continuously generates an action plan including its current assessment, action choice, urgency, and supported answer. The next speaker is then selected dynamically from agents requesting to speak or interrupt, while silent turns are allowed when no intervention is necessary. By revealing only one sentence at a time and discarding undisclosed sentences after interruption, the proposed framework is designed to prevent misleading claims from expanding into long incorrect explanations. Under a controlled evaluation on 1,000 MMLU questions using three agents with conditioned initial states containing both correct and incorrect answers, the proposed framework achieves the highest final accuracy in both the two-incorrect-one-correct setting (49.5% vs. 37.2% and 43.7%) and the one-incorrect-two-correct setting (79.2% vs. 68.7% and 73.8%). Analysis of intermediate answers further shows that interruptions improve listeners’ answers more often than they worsen them. These results suggest that fine-grained, interruptible turn-taking can suppress misinformation propagation and stabilize consensus formation under the evaluated setting.

Author 1: Akikazu Kimura
Author 2: Ken Fukuda
Author 3: Yasuyuki Tahara
Author 4: Yuichi Sei

Keywords: Large Language Models (LLMs); multi-agent debate; turn-taking; early error correction; misinformation propagation

PDF
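The urgency-based speaker selection described above can be sketched without any LLM machinery. This is a toy selection rule under assumptions of my own (the agent names, the 0-1 urgency scale, and the interrupt-outranks-speak tie-break are illustrative, not the paper's exact protocol):

```python
# Sketch of urgency-based turn-taking: each non-speaking agent publishes
# an action plan (action + urgency); the next speaker is chosen from agents
# requesting to speak or interrupt, with silent turns when nobody intervenes.

def next_speaker(plans):
    """plans: {agent: {"action": "speak"|"interrupt"|"listen", "urgency": float}}
    Interrupt requests outrank speak requests; ties break on urgency.
    Returns None for a silent turn (nobody wants the floor)."""
    candidates = [
        (plan["action"] == "interrupt", plan["urgency"], name)
        for name, plan in plans.items()
        if plan["action"] in ("speak", "interrupt")
    ]
    if not candidates:
        return None  # silent turn
    return max(candidates)[2]

plans = {
    "agent_b": {"action": "speak", "urgency": 0.9},
    "agent_c": {"action": "interrupt", "urgency": 0.4},
}
```

Here `next_speaker(plans)` picks `agent_c`: an interrupt request wins the floor even against a higher-urgency speak request, which is the mechanism that lets an erroneous sentence be challenged before the utterance completes.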

Paper 4: Secure User Authentication Model Using Identity-Based Encryption (IBE) Scheme: Challenges, Techniques, and Trends

Abstract: User authentication is a crucial component in ensuring that digital identities are securely verified when performing sensitive digital transactions. Building resilience against emerging cyber threats such as identity theft and Man-in-the-Middle (MITM) attacks remains an ongoing challenge. Identity-Based Encryption (IBE) is a public-key cryptographic method that allows unique identifiers, such as email addresses, to be used as public keys for encryption, making key management easier than in traditional public key cryptography. This study presents a comprehensive overview of secure authentication techniques using IBE schemes proposed by researchers. The review examines current IBE schemes, including blockchain-based techniques, and analyzes their security features, application domains, implementation requirements, and challenges. By incorporating findings from previous works, this study identifies common challenges that impede real-world implementation and suggests a potential approach for combining alternative security methods to improve authentication robustness. The purpose of this work is to provide researchers and practitioners with a comprehensive review of secure authentication models, as well as practical insights to assist in developing an implementable authentication solution for real-world secure digital transactions.

Author 1: Raja Farah Sharima Raja Muhamad Danial
Author 2: Nazhatul Hafizah Kamarudin
Author 3: Abdul Ghafar Jaafar

Keywords: User authentication; Identity-Based Encryption (IBE); digital identity; network security

PDF

Paper 5: Improving Heart Sound Diagnosis with a Combined CNN-LSTM and Dual-Attention Deep Learning Model

Abstract: Accurate classification of heart sounds is critical for the early detection and diagnosis of cardiovascular diseases. This research presents an automated technique for classifying heart sounds into normal, murmur, and extrasystolic categories. The approach begins with a bandpass filtering preprocessing phase aimed at improving the quality of heart sound recordings and minimizing noise by preserving pertinent frequencies between 20 Hz and 150 Hz. Following preprocessing, heart sound signals are transformed into spectrogram representations, encapsulating both temporal and frequency information. The proposed model utilizes a hybrid deep learning architecture that integrates the spatial feature extraction capabilities of Convolutional Neural Networks (CNN) with the temporal sequence modeling strengths of Long Short-Term Memory (LSTM) networks. To enhance performance, we introduce a Dual-Attention Mechanism that incorporates Channel Attention to augment frequency-specific features and Temporal Attention to emphasize critical time steps within the cardiac cycle. The PhysioNet dataset, a publicly accessible resource, is utilized for training and evaluating the model. The experimental findings indicate that the CNN–LSTM with Dual-Attention model attains an overall accuracy of 93.29%. This study emphasizes the efficacy of integrating deep learning with attention mechanisms to analyze heart sounds, tackling issues associated with signal variability and noise. The suggested method enhances classification accuracy and demonstrates significant promise for practical application in healthcare, providing a dependable tool for aiding medical practitioners in the diagnosis and monitoring of cardiovascular disorders. The model's capacity to distinguish between normal, murmur, and extrasystolic sounds renders it a strong contender for real-time cardiac sound analysis.

Author 1: Arshad Jamal
Author 2: R. Kanesaraj Ramasamy
Author 3: Junaidi Abdullah

Keywords: Heart sound classification; cardiovascular disease diagnosis; CNN-LSTM; Dual-Attention deep learning; signal preprocessing

PDF
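The 20-150 Hz bandpass preprocessing step described above is easy to illustrate. This sketch uses a plain DFT mask rather than the authors' (unspecified) filter design; the 1 kHz sample rate and the test frequencies are illustrative assumptions:

```python
import cmath, math

# Sketch of 20-150 Hz bandpass filtering: zero out DFT bins outside the
# band and reconstruct the signal. A real implementation would use an
# IIR/FIR filter; this brute-force DFT keeps the example self-contained.

FS = 1000  # Hz, assumed sampling rate

def bandpass(signal, fs, lo=20.0, hi=150.0):
    n = len(signal)
    spectrum = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    for k in range(n):
        freq = min(k, n - k) * fs / n  # two-sided bin frequency in Hz
        if not (lo <= freq <= hi):
            spectrum[k] = 0
    return [(sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                 for k in range(n)) / n).real for t in range(n)]

n = 200
t = [i / FS for i in range(n)]
noisy = [math.sin(2 * math.pi * 50 * x)           # in-band heart-sound band
         + 0.8 * math.sin(2 * math.pi * 5 * x)    # low-frequency drift
         + 0.8 * math.sin(2 * math.pi * 300 * x)  # high-frequency noise
         for x in t]
clean = bandpass(noisy, FS)
```

After filtering, only the 50 Hz component survives; the drift and high-frequency noise outside 20-150 Hz are removed, which is exactly the role this stage plays before spectrogram conversion.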

Paper 6: Application of Large-Scale Array Acoustic System Performance Detection Technology Based on Embedded ARM7 + uClinux Platform in Petroleum Exploration

Abstract: With the continuous development of petroleum exploration, the requirements for acoustic wave detection technology are becoming increasingly demanding. Acoustic wave detection plays a vital role in petroleum exploration, yet traditional performance detection methods suffer from slow detection speed, low accuracy, and poor real-time performance, which seriously limit exploration effectiveness. To address these limitations, this study designs a large-scale array acoustic system performance detection method based on an embedded ARM7 + uClinux platform. The system adopts a high-performance ARM7 processor running the uClinux operating system to realize fast and accurate detection of the performance of acoustic detection equipment. In experiments using actual geological data from an oil field, and under identical conditions, the proposed technology improved detection speed by 50% compared with traditional detection methods and achieved detection accuracy above 95%. This work aims to improve the detection accuracy and efficiency of acoustic detection equipment in petroleum exploration, reduce exploration costs, and provide reliable data support for exploration work under complex geological conditions.

Author 1: Zhiyuan Sun
Author 2: Cheng Yang
Author 3: Guixu Xu

Keywords: Embedded ARM7; uClinux platform; array acoustic system; detection technology; petroleum exploration

PDF

Paper 7: Factors Influencing Generative AI-Enabled e-Government Services (GAIGS) Information Quality: A Systematic Literature Review

Abstract: The integration of Generative Artificial Intelligence (GAI) into electronic government (e-Government) services has transformed the delivery of public information, raising critical questions about the quality of AI-generated content. This study presents a systematic literature review (SLR) to identify and categorise the key factors influencing information quality in GAI-enabled e-Government Services (GAIGS). Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and using the Population, Interest, and Context (PICo) framework, the review screened 664 articles from major databases, including Web of Science (WoS), Scopus, IEEE Xplore, and Wiley Online Library. A total of 33 high-quality studies published between 2021 and 2025 were selected for thematic analysis. The findings reveal 22 distinct information quality factors, which were synthesised into five overarching themes: trustworthiness and verifiability, security and ethics, content quality and structure, user perception and value, and adaptability and system behaviour. Together, the themes suggest a holistic model that captures the multidimensional nature of measuring information quality in AI-mediated delivery of public services. The research contributes to scholarly knowledge of information quality in evolving digital governance environments and offers actionable lessons to policymakers and developers seeking to design credible, citizen-centred GAI applications. This review provides a systematic overview of the existing body of knowledge, which can guide future research and model development in the context of GAIGS.

Author 1: Azwan Abd Aziz
Author 2: Rozi Nor Haizan Nor
Author 3: Yusmadi Yah Jusoh
Author 4: Wan Nurhayati Wan Ab. Rahman
Author 5: Khairi Azhar Aziz
Author 6: Nur Ilyana Ismarau Tajuddin
Author 7: Raditya Muhammad

Keywords: Generative AI; e-government services; information quality; PRISMA; PICo

PDF

Paper 8: Balancing Accuracy Robustness and Explainability in E-Commerce Recommender Systems

Abstract: Recommender systems are essential to digital marketplaces, shaping how users discover products and engage with platforms. While AI has significantly improved accuracy, critical concerns about robustness and explainability remain. This study introduces and empirically validates the “Recommender’s Trilemma”—an inherent trade-off between accuracy, robustness, and explainability. Through comparative analysis of NeuMF, SVD, and TF-IDF on the Amazon Electronics dataset, we uncover a dual failure cascade: adversarial attacks not only degrade recommendation quality but also destabilize the explanations meant to foster user trust. While NeuMF achieves high accuracy, it is susceptible to data poisoning that undermines its decision logic; in contrast, the transparent TF-IDF model offers interpretability but suffers from low predictive power and brittle explanations. These findings expose a structural vulnerability in recommender system design and provide a diagnostic framework for auditing deployed systems. We call for a new development paradigm where robustness and explainability are treated as co-primary objectives alongside accuracy—enabling trustworthy, resilient, and ethically aligned AI in digital commerce.

Author 1: Mansor Alohali

Keywords: Recommender systems; E-commerce; explainable AI; adversarial robustness; personalization; digital platforms

PDF
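The transparent TF-IDF baseline discussed above can be sketched in a few lines. The product descriptions and token lists are illustrative inventions, not the Amazon Electronics data; the point is only to show why such a model is interpretable (every similarity traces back to shared weighted terms):

```python
import math
from collections import Counter

# Minimal content-based similarity via TF-IDF + cosine, in the spirit of
# the transparent (but low-power) baseline in the trilemma comparison.

def tfidf_vectors(docs):
    """docs: {item: list of tokens}. Returns {item: {term: tf-idf weight}}."""
    n = len(docs)
    df = Counter(term for toks in docs.values() for term in set(toks))
    vecs = {}
    for item, toks in docs.items():
        tf = Counter(toks)
        vecs[item] = {t: (c / len(toks)) * math.log(n / df[t])
                      for t, c in tf.items()}
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = {
    "usb_cable":  ["usb", "cable", "charging"],
    "hdmi_cable": ["hdmi", "cable", "video"],
    "headphones": ["audio", "wireless", "headphones"],
}
vecs = tfidf_vectors(docs)
sim_cables = cosine(vecs["usb_cable"], vecs["hdmi_cable"])
sim_mixed = cosine(vecs["usb_cable"], vecs["headphones"])
```

The two cables share the weighted term "cable" and score above the unrelated pair, and one can read off exactly which term produced the score, which is the explainability side of the trade-off.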

Paper 9: An Algorithm for Image Dataset Compression and Privacy Enhancement via Fusing Bilateral Filtering and Easy-to-Complex Trajectory Matching Distillation

Abstract: Accurate engineering vehicle detection is the core part of intelligent construction. Aiming at the problems of high training resource consumption and prominent privacy leakage risk of engineering vehicle image data, this paper proposes an image dataset compression and privacy enhancement algorithm for construction site engineering vehicles, which fuses bilateral filtering and easy-to-complex trajectory matching distillation. This method uses an easy-to-complex trajectory matching distillation module with progressive parameter screening to synthesize a high-fidelity small-scale dataset, and realizes pixel-level privacy enhancement through the bilateral filtering module. Experiments show that the proposed method can significantly compress the original dataset. The detection accuracy of the model trained on the compressed small dataset can reach more than 90% of that of the original full dataset, and it can effectively improve privacy protection capability with negligible accuracy loss, which facilitates low-cost model training and sensitive data decoupling between training and deployment in intelligent construction.

Author 1: Qiao Zhou
Author 2: Lei Zhang
Author 3: Longjie Li
Author 4: Tong Wang
Author 5: Jun Cheng

Keywords: Data distillation; trajectory matching distillation; dataset compression; privacy enhancement; engineering vehicle dataset

PDF
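The bilateral-filtering privacy module described above lends itself to a small illustration. This is a one-dimensional sketch of the idea (the paper applies it at the pixel level to images); the window radius and sigma values are illustrative assumptions:

```python
import math

# 1-D bilateral-filter sketch: each value becomes a weighted average of its
# neighbours, where weights fall off with both spatial distance and
# intensity difference, so fine detail is smoothed while strong edges
# (here, a sharp step) are preserved.

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=30.0):
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

edge = [10.0] * 5 + [200.0] * 5   # a sharp intensity edge
smoothed = bilateral_1d(edge)
```

The step stays sharp after filtering: values on each side barely move, because the range term assigns near-zero weight to neighbours across the edge. That edge-preserving smoothing is what makes the filter attractive for blurring identifying detail with little accuracy loss.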

Paper 10: Cryptographic Vulnerability Assessment and Secure Redesign of an ECC-Based Authentication Framework for Energy Internet Vehicle-to-Grid Systems

Abstract: This study re-examines the protocol introduced in "A robust ECC-based authentication framework for energy internet (EI)-based vehicle to grid communication system" by Itoo et al. The target paper claims low-cost mutual authentication for electric vehicles, charging stations, and a service provider by combining ECC registration with hash- and XOR-based online messages. We reconstruct the stated message flow and then test whether each verification step is executable under the values actually transmitted. The analysis identifies four structural weaknesses: omitted verification inputs in the online messages, ecosystem-wide exposure after service-provider compromise, timestamp-only freshness that leaves room for replay under realistic clock drift, and a session-key derivation that lacks true forward secrecy. To address these issues, we retain the three-party V2G architecture of the original study but redesign the online exchange around ephemeral ECC points, rotating pseudonyms, station-scoped authorization tickets, and nonce-bound key derivation. Our evaluation compares the improved design with the original framework and a prior V2G baseline under message-level load points of 100, 400, and 800 active vehicles. The redesigned protocol closes the identified executability gap, achieves full replay detection within the bounded message-level test conditions used in this study, and improves compromise containment with only a modest latency increase.

Author 1: Haewon Byeon

Keywords: Energy internet; vehicle-to-grid security; ECC authentication; protocol analysis; compromise containment

PDF
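Two of the countermeasures named above, nonce-bound key derivation and freshness checking beyond timestamps, can be sketched with standard primitives. The field layout, label string, and 2-second drift window are illustrative assumptions, not the redesigned protocol's exact values:

```python
import hashlib, hmac

# Sketch: bind the session key to both parties' fresh nonces, and reject
# messages whose timestamp is stale or whose nonce was already accepted.

MAX_DRIFT = 2.0       # seconds of tolerated clock drift (assumed)
seen_nonces = set()   # per-verifier cache of accepted nonces

def session_key(shared_secret: bytes, nonce_ev: bytes, nonce_cs: bytes) -> bytes:
    """Key derivation bound to both nonces: replaying an old transcript
    cannot reproduce the key, since the verifier's nonce is fresh."""
    return hmac.new(shared_secret, b"v2g-session" + nonce_ev + nonce_cs,
                    hashlib.sha256).digest()

def accept_message(nonce: bytes, sent_at: float, now: float) -> bool:
    """Timestamp window plus nonce cache: drift-tolerant yet replay-proof."""
    if abs(now - sent_at) > MAX_DRIFT or nonce in seen_nonces:
        return False
    seen_nonces.add(nonce)
    return True
```

The nonce cache is what closes the replay window that a timestamp-only check leaves open: a message replayed inside the drift window still fails, because its nonce has already been consumed.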

Paper 11: An AI-Powered Approach for Medical Specialty Triage Using Natural Language Processing and Transformer Models

Abstract: Upon arrival at a hospital, patients require an initial assessment to determine the urgency of their condition and the appropriate medical specialty for their needs. This manual triage process, however, is often time-consuming and resource-intensive, leading to potential delays in care, patient dissatisfaction, and inefficient allocation of specialized medical staff. This study presents an AI-based solution to address this critical challenge. A model is introduced that automatically suggests a suitable medical specialty based on a textual description of a patient’s symptoms, with the aim of improving the efficiency of the hospital’s initial patient triage process. The proposed methodology involves pre-processing a large dataset of over 100,000 patient inquiries from online health forums and conducting a comparative analysis of multiple BERT-based models. Experimental results demonstrate that a domain-specific model, BiomedNLP-PubMedBERT, is particularly effective. To further enhance performance and address the inherent class imbalance in the dataset, a data augmentation strategy using synonym replacement and a weighted loss function was implemented. This combined approach achieved a final weighted F1-score of 92.91%, significantly outperforming the non-augmented baseline models. This work provides a practical path toward building effective automated triage tools that can streamline initial patient assessment and improve operational efficiency in hospital environments. The final model is publicly available for verification and further application.

Author 1: Anas Chahid
Author 2: Ismail Chahid
Author 3: Wafae Mrabti
Author 4: Mohamed Emharraf
Author 5: Mohammed Ghaouth Belkasmi

Keywords: Medical triage; natural language processing; BERT; deep learning; healthcare AI; text classification

PDF
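The weighted-loss remedy for class imbalance mentioned above usually starts from inverse-frequency class weights. A minimal sketch, with illustrative specialty names and counts rather than the paper's dataset:

```python
from collections import Counter

# Inverse-frequency class weights: weight each class by total/(k * count),
# so rare specialties contribute proportionally more to the loss.

def class_weights(labels):
    counts = Counter(labels)
    total, k = len(labels), len(counts)
    return {c: total / (k * n) for c, n in counts.items()}

# Hypothetical imbalanced triage labels.
labels = ["cardiology"] * 60 + ["dermatology"] * 30 + ["neurology"] * 10
weights = class_weights(labels)
```

The resulting dictionary can be passed to a weighted cross-entropy loss; the rarest specialty (here neurology) receives the largest weight, counteracting the majority class's dominance during training.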

Paper 12: Multi-View Behavioral Probing for Political Bias in Arabic and Multilingual Transformers Before and After Domain Adaptation

Abstract: Political bias in transformer-based language models poses a critical challenge for applications involving politically sensitive Arabic news, yet systematic evaluation remains limited. This paper presents a multi-view behavioral framework to detect political bias in four pre-trained transformer models: AraBERTv2, CAMeLBERT, mBERT, and XLM-R. The framework integrates four complementary probes: sentiment drift, emotion drift, counterfactual actor-swapping for identity sensitivity, and masked language model probing to detect lexical preference shifts. Each model is evaluated before and after domain-adaptive fine-tuning on the FigNews Arabic political news dataset to analyze how politically sensitive training data influences representational bias. To synthesize signals from these probes, a Decision and Bias Reporting Agent (DBRA) aggregates the evidence using a structured hierarchy that prioritizes implicit bias indicators. Results show that bias is already present in base checkpoints and can significantly shift after adaptation. For example, mBERT’s masked preference for SideA drops from 40.7% to 0.0%, indicating complete directional collapse, while XLM-R shows a large increase in masked preference toward SideA (ΔPR = +32.8%).

Author 1: Ahmad Abdelhameed
Author 2: Ensaf Hussein Mohamed
Author 3: Walaa Medhat

Keywords: NLP; political bias; Arabic transformers; domain-adaptive pretraining; masked probing; actor-swapping; bias detection; behavioral evaluation

PDF

Paper 13: Trust and Hallucinations: A Study of 39 Experts on AI-Assisted Requirements Reverse Engineering

Abstract: Software systems continuously evolve, but the process is hindered by the poor state of documentation, which drives maintenance costs to around 90% of development lifecycle spending. In addition, although the extraction of embedded business logic through the reverse engineering of requirements is essential, a semantic gap remains between the source code and high-level objectives, and many artificial intelligence tools now target this task. This research evaluates the performance of specialized Retrieval-Augmented Generation (RAG), general-purpose large language models, and hybrid static AI systems by focusing on the expert observations of practitioners within industrial environments. To achieve this, the study gathers data to measure hallucination rates and the accuracy of business rule recovery based on the actual professional experience of those managing legacy code. In particular, these experts used EPAM ART, GitHub Copilot, and IBM ADDI to provide percentage-based error estimates and rate rule identification on a standard scale. This empirical approach ensures that the research questions are addressed through the practical insights and lived experiences of professionals. A survey of 39 senior professionals found that, while general models are successful at abstracting meaning with a score of 4.05 out of 5, a shortfall in traceability remains. Hybrid tools such as IBM ADDI allow superior formal mapping with a score of 4.23 out of 5, although verification remains difficult: hallucination rates exceeding 20% were reported by 66.7% of the participants.
In light of these findings, this research proposes a strategy of coordinating multiple tools in order to make the long-term evolution of software systems feasible.

Author 1: Abdullah A H Alzahrani

Keywords: Software engineering (SE); requirements engineering (RE); requirements reverse engineering (RRE); large language models (LLMs); natural language processing (NLP)

PDF

Paper 14: Generative Monocular Perception Pipeline-Based Framework for Accurate Stem Detection in Automated Strawberry Harvest

Abstract: Horticulture faces a growing labor crisis, driving demand for autonomous harvesting robots, but reliable strawberry peduncle detection remains a critical unsolved challenge due to the peduncle's fine, millimeter-scale structure and severe intertwining with leaves and stems. Existing single-view RGB imaging struggles with occlusions and ambiguities, while depth sensors falter in reflective greenhouse environments plagued by noise and data gaps. This study introduces a generative monocular perception pipeline, the first to reconstruct multi-view cues purely from a single RGB image, and achieves perceptual consistency through four synergistic innovations: (i) pseudo multi-view synthesis to emulate diverse viewpoints, (ii) monocular depth estimation for precise geometric guidance and background isolation, (iii) line-curve geometric modeling to capture subtle peduncle features, and (iv) occlusion-order reasoning via cross-view consistency analysis. In comparative trials on farm images against a YOLO-based detector, the pipeline improves region accuracy from 57.14% to 85.71% and reduces mean angular error from 18.31° (SD 11.27°) to 13.96° (SD 10.15°), a statistically significant improvement (p<0.05, paired t-test, n=14), providing robust, clutter-resilient cutting cues for next-generation robotic harvesters.

Author 1: Kohei Arai
Author 2: Jin Sawada
Author 3: Mariko Oda

Keywords: Monocular depth estimation; marigold; easy wan22; strawberry harvesting robot; stem detection

PDF
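The mean-angular-error metric reported above is straightforward to compute. This sketch uses invented angle pairs rather than the paper's measurements, and wraps differences to [0°, 90°] on the assumption that a stem axis has no preferred direction:

```python
import math

# Per-image absolute angle differences between predicted and ground-truth
# peduncle orientations, summarised as mean and sample standard deviation.

def angular_error(pred_deg, true_deg):
    """Smallest absolute difference between two orientations, treating
    angles 180 degrees apart as the same axis."""
    d = abs(pred_deg - true_deg) % 180.0
    return min(d, 180.0 - d)

def summarize(preds, truths):
    errs = [angular_error(p, t) for p, t in zip(preds, truths)]
    mean = sum(errs) / len(errs)
    sd = math.sqrt(sum((e - mean) ** 2 for e in errs) / (len(errs) - 1))
    return mean, sd

# Hypothetical predicted/true orientations for four images.
preds = [92.0, 45.0, 10.0, 178.0]
truths = [80.0, 60.0, 20.0, 2.0]
mean, sd = summarize(preds, truths)
```

Note the wrap-around case: a prediction of 178° against a truth of 2° counts as a 4° error, not 176°, which matters for near-vertical stems.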

Paper 15: Method for Improving Object Detection and Classification Accuracy Using a Small Training Dataset by Reducing the Number of Classes

Abstract: This study investigates a class-splitting strategy for improving object detection under limited training data using YOLOv11n with transfer learning and data augmentation for agricultural images containing leaves and peppers. The proposed approach evaluates leaf-only, pepper-only, and combined-class configurations using mAP@0.5, mAP@0.5:0.95, precision, recall, and F1-score to examine how class splitting affects detection performance. On the small validation set used in this study, single-class training improved performance relative to the combined-class baseline, but the results should be interpreted as preliminary because the validation set contains only two samples.

Author 1: Kohei Arai

Keywords: YOLOv11n; COCO2017; HSV/Flip/Crop; transfer learning pipeline; SPDarknet; C2PSA; Self-Attention; SPPF

PDF

Paper 16: Enhancing Communication Accessibility: Real-Time Recognition and Synthesis of Arabic Sign Language Gestures Using Long Short-Term Memory

Abstract: This research aims to develop a real-time system that accurately recognizes Arabic Sign Language (ArSL) gestures and translates them into both text and speech. It leverages the KArSL-502 dataset, which contains 502 unique Arabic signs, to train a deep learning model using Bidirectional Long Short-Term Memory (LSTM) networks. LSTMs are particularly suited for capturing the temporal patterns of sign language gestures, which often involve sequential hand movements. The system integrates advanced image processing techniques such as Mediapipe and Handtrack for detecting and extracting hand landmarks, followed by key point adjustments to ensure consistency across gestures. The model's performance was evaluated using categorical accuracy, achieving a training accuracy of 98% and a testing accuracy of 96%, demonstrating the model's ability to generalize well to unseen data. Additionally, the proposed system includes text-to-speech functionality via Google Text-to-Speech (gTTS), enabling real-time vocalization of recognized gestures, thus facilitating communication between sign language users and non-signers. The system's high accuracy and fast processing time (measured in milliseconds per gesture) make it suitable for real-time applications.

Author 1: Mina Nagy Gaber Sorial
Author 2: Rodaina Abdelsalam
Author 3: Mayar Ali
Author 4: Hesham Hassan

Keywords: Arabic Sign Language; bidirectional LSTM; machine learning; text-to-speech; real-time processing; accessibility; sign language translation

PDF
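The key-point adjustment step mentioned above typically normalizes landmarks so that gestures look the same regardless of where the hand sits in the frame or how large it appears. A minimal sketch under my own assumptions (a 3-landmark toy hand with the wrist at index 0; Mediapipe actually emits 21 landmarks per hand):

```python
# Translate landmarks so the wrist sits at the origin, then scale by the
# hand span, so the LSTM sees position- and size-invariant sequences.

def normalize_landmarks(landmarks):
    """landmarks: list of (x, y); landmarks[0] is assumed to be the wrist."""
    wx, wy = landmarks[0]
    shifted = [(x - wx, y - wy) for x, y in landmarks]
    span = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / span, y / span) for x, y in shifted]

raw = [(0.4, 0.6), (0.5, 0.8), (0.6, 0.7)]
norm = normalize_landmarks(raw)
```

A hand twice as large (all coordinates doubled) normalizes to the same values, which is the consistency property the preprocessing aims for.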

Paper 17: A Multi-Layer Computational Framework for Predicting Student Performance Ranges in Higher Education Using Machine Learning

Abstract: Predicting student academic performance constitutes a strategic priority for higher education institutions seeking to reduce attainment gaps and provide timely, targeted support. Existing approaches predominantly generate single-point performance estimates, overlooking the inherent variability in individual academic trajectories. This paper introduces a novel seven-layer computational framework that predicts student performance as a bounded range, capturing both minimum and maximum expected outcomes rather than a single value. The framework integrates a bespoke imbalanced-data mitigation algorithm; three heuristic feature-selection methods (Genetic Algorithm, Particle Swarm Optimization, and Recursive Feature Elimination); and two complementary model architectures: a Parallel Architecture built upon fourteen supervised learning classifiers, and a Popularity Architecture centered on K-Modes/K-Prototypes unsupervised clustering. The framework was validated on a rich, anonymized dataset provided by IBN ZOHR University in Morocco, comprising records from over 200,055 undergraduate students. The proposed framework achieves accuracy of 84%/86% (worst-/common-case scenario), representing a 3%/5% improvement over an 81% baseline derived from the ten most relevant prior studies. The unsupervised Popularity Architecture attained a peak accuracy of 96.91%, outperforming all supervised configurations. Results further demonstrate that omitting feature selection frequently yields competitive performance, and that increasing the number of hidden layers in neural networks does not significantly alter predictive accuracy in this educational context. The framework is designed for seamless integration into existing student performance dashboard systems, offering institutions an actionable decision-support tool.

Author 1: Abdellatif HARIF
Author 2: Moulay Abdellah KASSIMI

Keywords: Student performance prediction; machine learning; unsupervised learning; performance range; higher education; educational data mining; feature selection

PDF

Paper 18: Digital Infrastructure Transformation in the Public Sector: Explaining IPv6 Adoption Through the UTAUT Framework

Abstract: The Internet, as a product of advanced technological development, has evolved through a dynamic and synergistic process. However, its original architecture was not designed to accommodate such unprecedented growth, resulting in fundamental limitations, particularly in the addressing architecture. The exhaustion of the IPv4 address space has emerged as a major sustainability problem for the Internet. To overcome this limitation, Internet Protocol version 6 (IPv6), which provides a significantly larger address space and additional technical capabilities, was standardized by the Internet Engineering Task Force (IETF) in 1998. Despite its technical superiority, IPv4—standardized in 1981—continues to dominate operational networks, indicating that IPv6 adoption has not yet reached expected levels. This study examines the factors influencing IPv6 adoption in public institutions using the Unified Theory of Acceptance and Use of Technology (UTAUT) framework. Survey data were collected from 456 managerial and technical personnel employed in public institutions in Türkiye. Structural Equation Modeling (SEM) was conducted using SPSS and AMOS software. The findings reveal that facilitating conditions significantly affect both effort expectancy and performance expectancy. Furthermore, effort expectancy and performance expectancy positively influence behavioral intention, which in turn has a direct effect on actual IPv6 usage. These results emphasize the critical role of organizational and structural factors in accelerating IPv6 transition within the public sector. The findings further demonstrate that next-generation network deployment represents not only an engineering challenge but also a socio-technical transformation process shaped by human and organizational factors.

Author 1: Aydin Koçak
Author 2: Ugur Dagtekin
Author 3: Ahmet Kamil Kabakus

Keywords: IPv6; UTAUT; Structural Equation Modeling (SEM); Technology Acceptance Models; internet and network technologies

PDF

Paper 19: FIFO Age-Cohort Stochastic MILP for Perishable Inventory Optimization

Abstract: This paper addresses the perishable inventory optimization problem for fish processing SMEs under compound supply-demand uncertainty. We develop a two-stage stochastic Mixed-Integer Linear Programming (MILP) framework comparing four model variants: two employing the conventional fixed deterioration rate (FDR) approach and two incorporating explicit First-In-First-Out (FIFO) age-cohort tracking, each with and without cold storage investment options. The formulation integrates production scheduling, workforce planning, machine investment, and cold storage decisions over a 12-week horizon under five stochastic scenarios calibrated from empirical data. We prove that the FIFO age-cohort formulation preserves linearity (Proposition 1), establish theoretical dominance of FIFO over FDR under surplus conditions (Proposition 2), and demonstrate feasibility preservation through an adaptive service level constraint (Proposition 3). Computational results on empirical instances show that FIFO models achieve 25.0% expected cost reduction with perfectly stable service levels (70.0% across all scenarios, zero variance) compared to FDR models exhibiting 58.7 percentage point service level volatility. Extended sensitivity analysis across 15 parameter configurations reveals that cold storage value is conditional: marginal (0.046%) under supply-constrained regimes but significant (up to 19.2%) under supply surplus with high expiration costs. Pareto frontier analysis confirms FIFO dominance across the entire cost-service level trade-off space. The Value of Stochastic Solution (VSS) reaches 12.4%, validating the stochastic approach. All configurations solve within 15.1 seconds despite 18,540 variables, with FIFO solving 5.7× faster than FDR due to a tighter constraint structure. Managerial implications include a conditional decision framework linking supply-demand regime identification to optimal investment strategy.
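
The FIFO age-cohort tracking at the heart of the model can be illustrated with a minimal depletion routine (an illustration only; the paper's actual formulation encodes this oldest-first logic as linear MILP constraints over cohort-indexed variables):

```python
def fifo_consume(cohorts, demand):
    """Deplete inventory cohorts oldest-first (FIFO).

    cohorts: list of (age_in_weeks, quantity), oldest first.
    Returns (remaining_cohorts, unmet_demand).
    """
    remaining = []
    for age, qty in cohorts:
        take = min(qty, demand)   # draw from the oldest stock first
        demand -= take
        if qty - take > 0:
            remaining.append((age, qty - take))
    return remaining, demand

# Example: three cohorts, demand of 70 units
cohorts = [(3, 40), (2, 30), (1, 50)]
left, unmet = fifo_consume(cohorts, 70)
# the two oldest cohorts are fully consumed; 50 units of the newest remain
```

Tracking quantities per age cohort is what lets the model charge expiration costs only to stock that actually outlives its shelf life, rather than applying a fixed deterioration rate to the whole inventory.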

Author 1: Hirman Rachman
Author 2: Saib Suwilo
Author 3: Sutarman
Author 4: Elvina Herawati

Keywords: Mixed-integer linear programming; FIFO age-cohort tracking; fixed deterioration rate; perishable inventory; two-stage stochastic programming; cold storage optimization; fish processing SME

PDF

Paper 20: Detection of Video Anomalies via CNN-LSTM Model for Intelligent Surveillance

Abstract: Automated Video Anomaly Detection (VAD) plays a vital role in developing surveillance systems for public spaces. Our study develops real-time anomaly detection via a hybrid Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) model trained on the UCSD Pedestrian (Ped2) dataset. It introduces a methodology designed to improve detection accuracy by combining CNN-based spatial feature extraction with LSTM-based temporal sequence learning. Preprocessing manages the class imbalance issue through several phases, including frame extraction, resizing, normalization, augmentation, and SMOTE balancing. In the evaluation phase, several metrics such as accuracy, precision, recall, F1-score, and AUC are applied, indicating the superior performance of the CNN-LSTM model, which outperforms both the standalone CNN and LSTM models with 93.5% accuracy, 91.8% precision, 90.2% recall, 91.0% F1-score, and an AUC of 0.947.
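
The SMOTE balancing step interpolates between minority-class samples and their nearest neighbours. A minimal NumPy sketch of that idea (not the authors' exact preprocessing code; `k` and the toy points are illustrative):

```python
import numpy as np

def smote_like_oversample(X_minority, n_new, k=3, rng=None):
    """Generate synthetic minority samples by interpolating between
    each sample and one of its k nearest neighbours (SMOTE-style)."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_minority))
        x = X_minority[i]
        # distances to all minority samples; skip the sample itself
        d = np.linalg.norm(X_minority - x, axis=1)
        neighbours = np.argsort(d)[1:k + 1]
        j = rng.choice(neighbours)
        lam = rng.random()   # interpolation factor in [0, 1)
        synthetic.append(x + lam * (X_minority[j] - x))
    return np.vstack(synthetic)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_like_oversample(X_min, n_new=6, rng=0)
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled set stays inside the minority class's region rather than simply duplicating frames.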

Author 1: Mohamed H. Mousa
Author 2: Yasser M. Ayid
Author 3: Ayman E. Khedr
Author 4: Ahmed M. Elshewey

Keywords: Video Anomaly Detection; smart surveillance; computer vision; CNN-LSTM; hybrid deep learning

PDF

Paper 21: Evidence-Driven AI Governance for Healthcare: A PEARL-PATHWAY Analysis of Madinah

Abstract: The rapid integration of artificial intelligence (AI) into healthcare systems has intensified the need for governance frameworks that ensure safe, accountable, ethical, and sustainable deployment. However, existing AI governance approaches are primarily articulated through high-level ethical and regulatory principles, with limited operational guidance tailored to specific healthcare contexts. This challenge is particularly evident in dynamic settings such as Al Madinah, Saudi Arabia, where demographic diversity, evolving healthcare needs, and large-scale public health pressures, including the presence of millions of visitors annually during Hajj and Umrah, require adaptive and context-aware governance. This study presents an evidence-driven approach to AI governance analysis that directly links empirical healthcare needs with regulatory frameworks. It integrates the PEARL framework to systematically analyse an initial corpus of 4,277 healthcare publications related to Madinah, refined to 243 articles through inclusion and exclusion criteria, extracting structured representations of healthcare priorities, with the PATHWAY framework to evaluate alignment between these needs and both Saudi Arabian and international AI governance frameworks. This enables a systematic assessment of governance applicability, identification of gaps, and analysis of associated risks. The results reveal that while existing frameworks provide strong foundations in terms of privacy, ethics, and risk-based regulation, they lack operational pathways tailored to domain-specific healthcare requirements and local contexts. Key gaps are identified in areas including epidemiological surveillance, behavioural health, maternal and paediatric care, environmental health integration, and generative AI in public health communication. By bridging empirical evidence with governance analysis, this study advances a structured approach to domain-informed and context-sensitive AI governance. It contributes to the emerging field of computational policy analysis and provides evidence-driven insights for developing adaptive, scalable, and trustworthy AI governance strategies in healthcare systems.

Author 1: Roba Alsaigh
Author 2: Rashid Mehmood
Author 3: Iyad Katib
Author 4: Abdulaziz A. Almuzaini
Author 5: Sami Saad Albouq
Author 6: Sami Alshmrany

Keywords: AI governance; healthcare AI; evidence-driven governance; PEARL framework; PATHWAY framework; governance gap analysis; domain-specific governance; context-aware governance; policy intelligence; sustainability

PDF

Paper 22: KnowRAG: A Zero-Shot Diagnostic Analysis of Knowledge Base Coverage in Scientific Retrieval-Augmented Generation

Abstract: The "hallucination" problem in Large Language Models (LLMs) remains an unresolved hurdle for scientific researchers who require precise, grounded evidence. While Retrieval-Augmented Generation (RAG) aims to mitigate these errors, standard systems are often unoptimized for the structural complexities of scientific papers. We introduce KnowRAG, a zero-shot RAG pipeline specifically designed for scientific applications. Using a novel "LLM-as-a-Judge" diagnostic framework, we evaluated KnowRAG against a standalone GPT-3.5-Turbo baseline across four specialized Q&A Test Sets. Our results demonstrate that KnowRAG significantly improves factual accuracy over the baseline. More importantly, diagnostic analysis reveals that the largest share of errors (over 46%) stems from Knowledge Base Coverage (knowledge gaps), while generation failures remain negligible at 4%. These findings suggest that retrieval and generation capabilities are no longer the primary bottlenecks in the scientific domain. Instead, this diagnostic analysis advocates for a paradigm shift from model-centric research toward expert data engineering as the definitive path to trustworthy AI. By repurposing the LLM-as-a-Judge framework as a diagnostic instrument rather than a mere performance metric, we move RAG evaluation beyond aggregate scoring toward actionable, evidence-based systemic diagnosis.

Author 1: Assmaa MOUTAOUKKIL
Author 2: Ali EL MEZOUARY
Author 3: Kaoutar BOUMALEK

Keywords: Large Language Models; Retrieval-Augmented Generation; evaluation; scientific writing; information retrieval; knowledge base; data engineering; GenAI

PDF

Paper 23: Determinants of Telemedicine Security Readiness Among ICT Professionals: Extended UTAUT Model

Abstract: Ensuring strong security readiness in telemedicine is essential to protect patients, safeguard their data, and build confidence and satisfaction in today’s digital healthcare platforms. Hence, this research explores and evaluates the maturity of telemedicine services from the perspectives of ICT professionals focused on security readiness. A case study with a stratified survey was conducted among ICT personnel in public hospitals in Malaysia, revealing critical gaps by examining professionals responsible for system security, data fortification, infrastructure resilience, and operational endurance of the telemedicine platform. The UTAUT model employed in this study incorporates two additional constructs, Trust in Technology and Adaptability. Trust in Technology captures confidence in system safety, data privacy, and dependability, while Adaptability covers system-related technical areas and user-related factors in secure telemedicine adoption. SPSS Statistics Version 21 was used to analyze the data, covering descriptive statistics, reliability testing, correlation analysis, and multiple regression. Results indicated a generally high level of security readiness among ICT professionals, with strong Trust in Technology and Adaptability to telemedicine systems. Facilitating Conditions, particularly the availability of secure infrastructure and technical safeguards, emerged as the most influential factor and significantly predicted Performance Expectancy. Trust in Technology and Adaptability exhibited strong positive correlations with key UTAUT constructs, and their effects were statistically significant in the regression analysis, supporting all proposed hypotheses, including their influence on Satisfaction, which suggests a direct role in shaping security readiness.

Author 1: Fazlina Mohd Ali
Author 2: Afraa Mohammed Mohammed Gaashan
Author 3: Marizuana Mat Daud
Author 4: Rimaniza Zainal Abidin

Keywords: Telemedicine; security readiness; ICT professionals; UTAUT; healthcare systems

PDF

Paper 24: Evaluating Perceptual Reliability of Latent Attribute Control in Diffusion-Based Fashion Generation

Abstract: Although diffusion-based image generation models enable high-quality synthesis of fashion images, the reliable control of perceptual attributes in these models remains poorly understood. Current evaluation approaches primarily rely on semantic similarity metrics, such as CLIP scores, which may not accurately reflect human perceptual judgments. This study proposes a three-layer evaluation framework linking latent space geometry, semantic embedding space, and human perception. First, latent attribute directions are validated using geometric quality-control metrics measuring linearity and centrality. Second, semantic consistency is examined through directional projection in CLIP embedding space. Third, a two-alternative forced-choice experiment is conducted with 37 participants, and perceptual strength is estimated using a Bradley-Terry preference model. Experiments cover gender and garment conditions for four fashion attributes: fit, lightness, glossiness, and pattern scale. Results reveal that fit exhibits strong cross-layer alignment, while pattern scale shows semantic and perceptual ambiguity. The findings highlight that perceptual reliability in controllable generation is attribute-dependent and that semantic metrics alone cannot fully replace human evaluation.
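
The Bradley-Terry estimation step can be sketched with the classic minorization-maximization (MM) update, which fits item strengths to pairwise two-alternative forced-choice win counts (a toy illustration; the study's fitting procedure and data are not reproduced here):

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Estimate Bradley-Terry strengths from a pairwise win matrix.

    wins[i, j] = number of times item i was preferred over item j.
    Uses the standard MM update: p_i = w_i / sum_j n_ij / (p_i + p_j).
    """
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        total = wins + wins.T                 # comparisons per pair
        w = wins.sum(axis=1)                  # total wins per item
        denom = (total / (p[:, None] + p[None, :])).sum(axis=1)
        p = w / denom
        p /= p.sum()                          # fix the arbitrary scale
    return p

# Toy 2AFC data: item 0 is preferred over item 1 in 8 of 10 trials
wins = np.array([[0.0, 8.0], [2.0, 0.0]])
strengths = bradley_terry(wins)
```

With 8 wins for item 0 against 2 for item 1, the normalized strengths converge to 0.8 and 0.2, matching the 4:1 preference ratio.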

Author 1: Noriaki Kuwahara
Author 2: Shintaro Kawanami
Author 3: Takashi Sato
Author 4: Dongeun Choi

Keywords: Diffusion models; controllable generation; latent space analysis; human preference modeling; perceptual reliability; fashion image generation

PDF

Paper 25: Performance Comparison of Regularization Methods on Transfer Learning Algorithm for Fish Species Classification

Abstract: Automatic fish classification plays an essential role in the fisheries sector, particularly in underwater environments where visual quality is often degraded. This study addresses challenges related to low-contrast underwater images and limited dataset conditions by integrating Contrast Limited Adaptive Histogram Equalization (CLAHE) with a VGG16-based transfer learning model and regularization approaches including L1, L2, and Dropout. The dataset consists of multiple fish species, including Bream, Sea Bass, Horse Mackerel, Red Mullet, and Black Sea Sprat. To enhance dataset diversity, data augmentation was performed using geometric transformations such as rotation, flipping, cropping/resizing, translation, shearing, and zooming. The dataset was divided into training (70%, 18,900 images), validation (20%, 5,400 images), and testing (10%, 2,700 images). Experimental results show that the VGG16-CLAHE-Dropout model achieved the best overall performance, with training, validation, and testing accuracies of 99.15%, 98.37%, and 97.07%, respectively. CLAHE was implemented using a clip limit of 2.0 and a tile grid size of 8×8 to enhance image contrast, while the model was optimized using the Adam optimizer with a learning rate of 0.0001 and a batch size of 32. These findings demonstrate that combining contrast enhancement with appropriate regularization techniques significantly improves deep learning performance for underwater fish species classification.

Author 1: Handrie Noprisson
Author 2: Anita Ratnasari
Author 3: Sri Dianing Asri
Author 4: Vina Ayumi
Author 5: Hadiguna Setiawan

Keywords: Fish classification; underwater image; VGG16; CLAHE; regularization; transfer learning

PDF

Paper 26: FEM-KP: A Functional Evaluation Metric for Keyphrase Prediction Models

Abstract: Keyphrase prediction is among the natural language processing (NLP) tasks whose performance has improved with transformers and large language models (LLMs). Beyond extracting keyphrases present in the text, these models also generate absent keyphrases. This improvement has created significant challenges for the evaluation of these models, which relies on metrics that compare the predicted keyphrases with the reference keyphrases. To measure the performance of these models, several evaluation metrics such as F1-score, ROUGE-L, and BERTScore have been used. However, they often prioritize lexical similarity over semantic usefulness. Consequently, the functional usefulness of keyphrases in document representation is not assessed during evaluation, which leads to inconsistencies in the results. Therefore, in this paper we propose a Functional Evaluation Metric for Keyphrase Prediction models (FEM-KP), a new evaluation metric that uses a two-track approach: track (A) evaluates the model's ability to generate keyphrases capable of constructing a document summary, while track (B) measures the ability of these phrases to retrieve the document. We evaluated the performance of four keyphrase prediction models using current evaluation metrics and FEM-KP across the Inspec, KP20k, and Krapivin datasets. The experimental results showed that FEM-KP is the only evaluation system that maintained a consistent performance ranking regardless of document length or dataset complexity, whereas other metrics showed ranking inversions. These results confirm that FEM-KP is a robust, reliable, and domain-independent metric for evaluating keyphrase prediction systems.

Author 1: Lahbib Ajallouda
Author 2: Ahmed Zellou

Keywords: Document retrieval; document summarization; evaluation metrics; functional evaluation metric; keyphrase prediction models; natural language processing

PDF

Paper 27: A Neutrosophic Machine Learning-Based Intelligent Ensemble Model for Sustainable Tea Yield Prediction Under Climatic Variability

Abstract: For efficient agricultural planning, resource management, and enhancing farmer livelihoods in significant tea-producing regions, tea production prediction is essential. However, climate variability—including temperature, rainfall, humidity, and sunlight duration—has a significant impact on tea output, making precise forecasting difficult. Using meteorological data from 2015 to 2025, this study proposes a hybrid machine learning approach for predicting tea production. Initially, four models are created as separate predictors: Random Forest, XGBoost, LightGBM, and CatBoost. Three ensemble models are then built to increase prediction accuracy: a Neutrosophic Ensemble Model, a Fuzzy Logic Weighted Ensemble, and an optimized weighted ensemble using Sequential Least Squares Programming (SLSQP). According to the experimental results, the optimized ensemble outperforms the individual and alternative ensemble models, achieving the best performance with an R² value of 0.86, an RMSE of 130.89, and an MAE of 103.96. The suggested methodology improves the accuracy of tea yield forecasts while managing climate variability.
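
The SLSQP-optimized weighted ensemble can be sketched with SciPy: minimize the ensemble's error over the probability simplex of model weights (a toy illustration with hypothetical base-model predictions, not the study's tea-yield data):

```python
import numpy as np
from scipy.optimize import minimize

def optimal_ensemble_weights(preds, y):
    """Find convex-combination weights of base-model predictions that
    minimize MSE, via SLSQP with simplex constraints (w >= 0, sum w = 1).

    preds: (n_models, n_samples) array of base-model predictions.
    """
    n = preds.shape[0]

    def mse(w):
        return np.mean((w @ preds - y) ** 2)

    res = minimize(
        mse,
        x0=np.full(n, 1.0 / n),            # start from equal weights
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x

# Toy example: model 1 is exact, models 0 and 2 carry constant biases
y = np.array([1.0, 2.0, 3.0, 4.0])
preds = np.vstack([y + 1.0, y, y - 2.0])
w = optimal_ensemble_weights(preds, y)
```

In the study this objective would be the tea-yield validation error, with one weight per base learner (Random Forest, XGBoost, LightGBM, CatBoost).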

Author 1: Maitraya Dey
Author 2: Pushpita Roy
Author 3: Shubhendu Banerjee
Author 4: Amrut Ranjan Jena
Author 5: Rakesh Naskar
Author 6: Suparna Dasgupta
Author 7: Soumyabrata Saha
Author 8: Sudarshan Nath
Author 9: Bikash Mondal

Keywords: Random Forest; XGBoost; LightGBM; CatBoost; SLSQP; Fuzzy; Neutrosophic

PDF

Paper 28: Incorporating Generative AI in Foreign Language Teaching Preparation: An Empirical Investigation Into the Effects of AI-Assisted Lesson Planning

Abstract: Generative artificial intelligence, particularly large language models such as ChatGPT, has emerged as a promising educational technology tool with extensive application potential in teaching preparation. The research objectives of this study are: 1) to examine how generative AI affects foreign language teachers’ teaching preparation experiences in terms of curriculum design, instructional resource development, teaching activity planning, work motivation, and efficiency; and 2) to identify the strengths, challenges, and teacher-recommended improvements associated with generative AI use in foreign language teaching preparation. This qualitative case study explored how generative AI assists foreign language teachers in their teaching preparation work, addressing a research gap regarding the benefits and challenges of this innovative approach. The study involved 13 foreign language teachers at a comprehensive university in central China who participated in a four-week training program. Qualitative data from semi-structured interviews were analyzed using thematic analysis. The findings indicate that generative AI positively affects teachers’ preparation experiences in curriculum design, instructional resource development, and teaching activity planning, while enhancing motivation and efficiency. In degree-of-significance terms, the positive impact was large for curriculum design (reported by 10/13 teachers, approximately 77%), large for instructional resource development (10/13, approximately 77%), moderate-to-large for teaching activity planning (8/13, approximately 62%), large for work motivation (9/13, approximately 69%), and large for work efficiency (10/13, approximately 77%); by contrast, the impact on assessment design and teaching research was small (2/13 and 1/13, respectively). These insights contribute to understanding the utility and constraints of employing generative AI in foreign language teaching preparation and inform the development of effective teaching support strategies and training programs.

Author 1: Xinxin Guo
Author 2: Hao Chen

Keywords: Artificial intelligence; ChatGPT; generative AI; educational technology; teacher professional development

PDF

Paper 29: Climate Change on Social Media: AI and Deep Learning-Based Analysis of Tweets

Abstract: The study analyses Turkish and English tweets about climate change on the social media platform Twitter and comparatively examines individuals' perceptions, concerns, and emotional reactions to this issue. A total of 2,046 Turkish and 18,000 English tweets were collected; 1,104 Turkish and 6,449 English tweets were analyzed after the cleaning process. Artificial intelligence-based methods such as text mining, sentiment analysis, and topic modelling were used. Topic modelling with Latent Dirichlet Allocation (LDA) identified prominent themes in tweets in both languages. Sentiment analysis was performed using deep learning techniques to categorize tweets as positive, negative, or neutral. The findings show that English tweets contain stronger emotional reactions, while Turkish tweets contain a higher proportion of neutral expressions. Additionally, it was observed that the perception of climate change can differ in local and global contexts. Based on a multidimensional analysis of social media data, the study provides valuable insights into the development of environmental communication strategies. The comparison of Turkish and English tweets contributes to understanding the effects of cultural contexts on climate change perception. The findings have important implications for policymakers and environmental awareness campaigns, as they highlight the need for tailored communication strategies that consider cultural differences in climate change perception.

Author 1: Bahar URHAN
Author 2: Mehmet KAYAKUS
Author 3: Dilsad ERDOGAN
Author 4: Gülten ADALI
Author 5: Emrah BOZKURT
Author 6: Zeynep Nihan BAKIR

Keywords: Climate change; awareness; social media; communication; machine learning; sentiment analysis; deep learning

PDF

Paper 30: Preference-Controllable Multi-Objective Deep Reinforcement Learning for Human-Robot Task Allocation in Service Environments

Abstract: Human–Robot Collaboration (HRC) has gained increasing attention as it expands from industrial environments to service-oriented settings, where dynamic conditions and diverse operational objectives pose significant challenges for task allocation. Unlike controlled industrial environments, service contexts are characterized by frequent changes, uncertainty, and time-varying priorities, rendering static task allocation strategies ineffective. This paper proposes a method to address the problem of determining the optimal balance between human and robotic task allocation in dynamic service-oriented HRC systems. A preference-controllable multi-objective deep reinforcement learning framework is introduced to formulate task allocation as a dynamic, preference-dependent decision-making process. The proposed approach explicitly captures trade-offs among multiple, potentially conflicting objectives and enables adaptive task allocation under changing operational conditions and service priorities. The framework is evaluated through simulation-based experiments and comparative analysis with baseline strategies using multiple evaluation metrics, complemented by additional validation using external datasets. Experimental results demonstrate the effectiveness and adaptability of the proposed approach across varying preference configurations and workload conditions, supporting its applicability in real-world smart service environments.
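
One common way to make a multi-objective DRL agent preference-controllable is linear scalarization of the reward vector; the sketch below assumes that mechanism (the abstract does not state the paper's exact formulation), with hypothetical objective names:

```python
import numpy as np

def scalarize(objectives, preference):
    """Collapse a multi-objective reward vector into a scalar using
    preference weights (linear scalarization), so a standard DRL agent
    can be trained or steered for any given preference."""
    preference = np.asarray(preference, dtype=float)
    preference = preference / preference.sum()   # normalize onto the simplex
    return float(preference @ np.asarray(objectives, dtype=float))

# Step rewards for (task throughput, human workload relief, energy saving)
step_rewards = [0.8, 0.4, 0.6]
r_speed_first = scalarize(step_rewards, preference=[0.6, 0.2, 0.2])
r_balanced = scalarize(step_rewards, preference=[1, 1, 1])
```

Changing the preference vector at run time re-weights the same underlying objectives, which is what allows task allocation to adapt to time-varying service priorities without retraining from scratch.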

Author 1: Asmaa Rashed Alahmari
Author 2: Wadee Alhalabi

Keywords: Deep reinforcement learning; human–robot collaboration; preference-controllable reinforcement learning; smart service environments; task allocation

PDF

Paper 31: Fuzzy-Integrated Modular Neural Networks for Accurate Prediction of On-Time-In-Full Supply Chain Performance

Abstract: Accurate prediction of supply chain performance is essential for improving operational efficiency and enabling proactive, data-driven decision-making under dynamic and uncertain conditions. Conventional forecasting methods often struggle to capture the nonlinear relationships between operational factors and performance outcomes. This paper proposes an improved neural modeling framework for predicting supply chain performance based on artificial neural networks (ANNs). The proposed approach compares a mono-network model (global ANN) with a modular multi-network architecture composed of several local neural models integrated through a fuzzy fusion mechanism. Unlike existing studies that focus on isolated performance metrics, this work targets the prediction of the key composite indicator On-Time-In-Full (OTIF). Simulation experiments conducted on a nonlinear dynamic supply chain system demonstrate that the modular ANN approach achieves a significant reduction in learning error, dropping from 0.0223 in the global model to as low as 0.0004 in local modules. Furthermore, the total training time was reduced from 1631.58 seconds to an average of approximately 311 seconds per module. These results confirm that fuzzy-integrated modular architectures offer superior generalization and computational efficiency for advanced predictive analytics in complex supply chain management (SCM) environments.
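
The fuzzy fusion of local neural models can be illustrated by weighting each local model's output with a normalized membership degree for the current operating point (a minimal sketch with Gaussian memberships and hypothetical OTIF predictors; the paper's membership design may differ):

```python
import numpy as np

def gaussian_membership(x, center, width):
    """Degree to which operating point x belongs to a local model's region."""
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

def fuzzy_fusion(x, local_models, centers, width=1.0):
    """Blend local model outputs with normalized fuzzy membership weights."""
    mu = np.array([gaussian_membership(x, c, width) for c in centers])
    outputs = np.array([m(x) for m in local_models])
    return float(mu @ outputs / mu.sum())

# Two hypothetical local OTIF predictors, each accurate near its own regime
low_demand = lambda x: 0.95    # predicted OTIF when demand is low
high_demand = lambda x: 0.70   # predicted OTIF when demand is high
otif = fuzzy_fusion(x=2.0, local_models=[low_demand, high_demand],
                    centers=[0.0, 4.0])
```

Between the two regime centers the fused prediction transitions smoothly from one local model to the other, which is what lets small, fast-to-train local networks cover a nonlinear operating range that a single global network fits poorly.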

Author 1: Mariem Mrad
Author 2: Mohamed Amine Frikha
Author 3: Younes Boujelbene
Author 4: Soufiene Ben Othman

Keywords: Artificial neural networks; supply chain management; performance prediction; predictive analytics; modular-network modeling; demand forecasting

PDF

Paper 32: Federated and Secure Artificial Intelligence as a Driver of Operational Excellence and Competitive Advantage in Smart Manufacturing Enterprises

Abstract: The rapid advancement of Industry 4.0 and adoption of cyber-physical production systems (CPPS) demand real-time, adaptive, and privacy-preserving optimization that conventional centralized AI architectures cannot adequately provide. Such approaches remain susceptible to data privacy vulnerabilities, regulatory non-compliance (GDPR, CCPA), communication latency, and insufficient responsiveness to dynamic production variability within edge device constraints. Although federated learning (FL) offers a promising paradigm for distributed privacy-sensitive intelligence, existing implementations fail to address practical security requirements and hardware limitations of shop-floor edge environments, rendering real-world deployment infeasible. This study introduces FedSecure-OPE, a secure autonomous AI framework designed to concurrently optimize production scheduling, quality management, and predictive maintenance across distributed manufacturing cells. The framework integrates homomorphic encryption-based federated aggregation, secure multi-party computation (SMPC) for model updates, and dynamic neural architecture search subject to edge hardware constraints. FedSecure-OPE is evaluated against centralized deep learning (Model A) and unsecured federated learning (Model B) using the Manufacturing Cyber-physical Middleware Testbed (MCMT) and Synthetic Manufacturing Trace (SMT) datasets. All experimental results were obtained through digital twin simulation under hardware emulation and have not been validated on physical edge environments. Within this context, FedSecure-OPE (Model C) achieves 31.2% and 16.8% operational performance improvements over Models A and B respectively, reduces edge energy consumption by 43.7%, attains 99.2% cryptographically protected model-update coverage under defined simulation security assumptions, and maintains average inference latency of 38 ms per control cycle. These findings establish a simulation-based foundation for security-conscious federated AI in smart manufacturing, while underscoring the necessity of future validation in physical environments.

Author 1: Adel Saad Assiri

Keywords: Federated learning; secure AI; smart manufacturing; industry 4.0; cyber-physical production systems; edge computing; digital twin; homomorphic encryption; neural architecture search; operational excellence

PDF

Paper 33: Blockchain Governance Framework and Assessment Tools from a Readiness Perspective

Abstract: Blockchain has been used in various sectors and use cases, but inadequate blockchain governance can cause blockchain adoption to fail. Previous research on blockchain governance has not taken multiple perspectives and has not yet reached the point of developing governance assessment tools. This research aims to identify the factors that comprise blockchain governance, as well as the functional requirements of blockchain governance assessment tools. The methodology combines a literature review using PRISMA, thematic analysis, and expert validation with the fuzzy Delphi method. This research identified 25 factors and enhanced the STOPE framework by adding a collaboration dimension, yielding a readiness-based initial blockchain governance framework and 5 functional requirements for blockchain governance assessment tools. The 25 factors consist of 3 factors in strategy, 3 in technology, 4 in organization, 1 in people, 4 in the environment, and 10 in collaboration. This research also provides guidance for researchers, regulators, and practitioners in blockchain governance implementation and assessment.

Author 1: Nur Indrawati
Author 2: Dana Indra Sensuse
Author 3: Deden Sumirat Hidayat
Author 4: Erisvaha Kiki Purwaningsih

Keywords: Blockchain governance; blockchain governance framework; blockchain readiness; fuzzy Delphi method; governance assessment tools; STOPE framework

PDF

Paper 34: The Untapped Potential of Extended Reality for Indigenous Medicinal Knowledge: A Review of Cross-Disciplinary XR Applications

Abstract: Many sectors are quickly moving toward XR for education and professional training. These technologies, which include Virtual, Augmented, and Mixed Reality, are not being applied equally across different types of knowledge preservation. This study aims to review how effective these technologies are and whether they have overlooked the preservation of Indigenous Medicinal Knowledge. Following the PRISMA 2020 guidelines, relevant literature from 2019 to 2026 was gathered from major academic databases, including IEEE Xplore, Scopus, and ScienceDirect. After removing duplicates and non-English papers, 23 out of 39 publications met the selection criteria. The findings show that while XR is a proven technology in many sectors, it remains underused for documenting the medicinal knowledge of communities such as the Kayan and Kenyah. This lack of progress is not caused by technical limitations, but by internal organizational barriers and weak project planning. The evidence, therefore, points to a procedural problem rather than a technological one.

Author 1: Giampearo Anak Peter
Author 2: Suriati Khartini Jali
Author 3: Jane Labadin
Author 4: Abby Lian Hendrick
Author 5: Ally Dian Hendrick
Author 6: Nurul Farizah Ridzuan

Keywords: Extended Reality; Indigenous Medicinal Knowledge; Virtual Reality; Augmented Reality; Mixed Reality; gamification; cultural heritage preservation; Task-Technology Fit; PRISMA

PDF

Paper 35: IFDA-EMA-YOLOv9: Wheat Disease Detection Integrating Flow Optimization and Auxiliary Supervision

Abstract: To address the challenges of feature extraction in complex field environments, the limited sensitivity of YOLOv9 to subtle disease features, and the lack of adaptive hyperparameter optimization, this paper proposes an improved high-precision detection model, named IFDA-EMA-YOLOv9. First, to enhance feature extraction capabilities, an Efficient Multi-scale Attention (EMA) mechanism and residual connections are incorporated into the network architecture. This integration effectively suppresses background noise interference and significantly improves the model's ability to aggregate and represent multi-level features of wheat disease lesions. Second, to tackle localization deviations caused by the irregular geometric shapes of disease lesions, an auxiliary box mechanism is integrated into the Complete Intersection over Union (CIoU) loss function, optimizing the regression process to improve the fit of detection boxes. Furthermore, an Improved Flow Direction Algorithm (IFDA) is employed to perform global optimization of the critical model hyperparameters, thereby avoiding the blindness of manual tuning and the local optimum trap. Experimental results on the LWDCD2020 dataset demonstrate that the proposed IFDA-EMA-YOLOv9 significantly outperforms current state-of-the-art (SOTA) methods, achieving substantial improvements in Precision, Recall, and mAP@0.5 by 6.4%, 5.94%, and 6.66%, respectively. These results demonstrate the effectiveness and robustness of the proposed method for wheat disease and pest detection.
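
As background for the loss modification described above, the following is a minimal stdlib-Python sketch of the standard Complete IoU (CIoU) measure that the paper's auxiliary-box mechanism extends; the auxiliary-box term itself is not reproduced here, and box coordinates are assumed to be (x1, y1, x2, y2) corners:

```python
import math

def ciou(box_a, box_b):
    """Complete IoU between two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection area
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # squared distance between box centres
    rho2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 \
         + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    # squared diagonal of the smallest enclosing box
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                              - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / (1 - iou + v) if v > 0 else 0.0
    return iou - rho2 / c2 - alpha * v
```

Identical boxes score 1.0; disjoint boxes score below zero because the centre-distance penalty applies even when the IoU is zero, which is what makes CIoU-style losses trainable for non-overlapping predictions.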

Author 1: Jiaxiang Fan
Author 2: Leixiao Li

Keywords: Wheat disease detection; object detection; YOLOv9; flow direction algorithm

PDF

Paper 36: Energy-Efficient Multi-Hop LoRa Communication in Forested Environments via Proximal Policy Optimization

Abstract: Multi-hop LoRa networks extend coverage for large-scale Internet of Things deployments but are severely limited by interference-induced collisions, retransmissions, and rapid battery depletion of relay nodes. Conventional routing strategies that minimize hop count or rely on static heuristics fail to account for dynamic medium contention and its impact on energy consumption and reliability. This paper proposes a Proximal Policy Optimization (PPO)–based routing framework for multi-hop LoRa networks that learns interference-aware and energy-efficient routing policies through reinforcement learning. A discrete-event simulation framework is developed to model LoRa physical-layer behaviour, co-spreading-factor interference, adaptive data rate control, and battery-limited relay nodes under multi-source traffic. The routing problem is formulated as a Markov Decision Process (MDP) in which the PPO agent selects next-hop relays based on local topology, relay load, and channel occupancy, while physical-layer parameters are adapted independently using a standards-inspired Adaptive Data Rate (ADR) mechanism. Simulation results show that the proposed approach achieves a packet delivery ratio of up to 73.7%, reduces collision rates by approximately 46% compared with Random routing, and lowers the average energy consumption per delivered packet to about 206 mJ, outperforming Shortest Path and Ad hoc On-Demand Distance Vector (AODV)-like routing. These gains are achieved by learning spatially diverse routing paths that mitigate relay congestion and reduce collision-induced retransmissions.

Author 1: Muhd Kahfi Bin Jumali
Author 2: Lim Kit Guan
Author 3: Ervin Gubin Moung
Author 4: Lorita Angeline
Author 5: Tianlei Wang
Author 6: Kenneth Teo Tze Kin

Keywords: LoRa; multi-hop; PPO; ADR; Markov Decision Process (MDP); reinforcement learning; PDR

PDF

Paper 37: Comparative Analysis of Fixed vs Machine Learning Dynamic Pricing Models: A Computational Performance Study

Abstract: The rise of e-commerce and digital offerings has generated a need for ultra-adaptable pricing policies seeking to maximize revenue while optimizing competitive advantage. Traditional fixed pricing schemes are inherently flawed due to a lack of responsiveness to instantaneous fluctuations in the marketplace, inventory levels, and demand inelasticity. This study conducts a detailed computational performance study comparing fixed pricing, standard heuristic dynamic pricing (HDP), and advanced Machine Learning (ML)-oriented dynamic pricing schemes, with a special focus on a Bi-LSTM network and a hybrid scheme based on Wavelet Decomposition (WD). Through simulated high-frequency transactions and marketplace data, model evaluation relies on three critical performance metrics: Total Revenue Generated, Pricing Accuracy (measured through Mean Absolute Percentage Error, MAPE), and Computational Latency (vital for real-time utilization). The results indicate that while HDP shows marginal improvements over fixed pricing, ML-based schemes, particularly the hybrid WD-Bi-LSTM model, deliver substantial revenue gains (up to 18.5% improvement) and forecasting accuracy (MAPE as low as 2.1%), with a slight increase in computational latency that remains acceptable for near real-time deployment. This study provides a quantitative foundation for organizations embracing AI-supported pricing initiatives, with emphasis on trade-offs among model sophistication, predictive potency, and operational performance.

Author 1: Emmanuel Ofotsu Kwesi Bannor
Author 2: S. Sarah Maidin
Author 3: Vinayakumar Ravi
Author 4: Nguyen Thi Thu Thuy
Author 5: Nghiem Thi-Lich

Keywords: Computational performance; deep learning; dynamic pricing; Machine Learning (ML); process innovation

PDF

Paper 38: Machine Learning-Driven Resource Provisioning in Modern Cloud Environments: A Taxonomic Survey

Abstract: Dynamic resource provisioning is a critical challenge in cloud computing, offering the necessary elasticity to guarantee reliable services within a usage-based payment framework. With the evolution of distributed systems, traditional threshold-based provisioning methods are increasingly inadequate for managing highly dynamic workloads. This inadequacy necessitates adaptive, machine learning (ML)-driven approaches capable of forecasting demand and autonomously optimizing scheduling. This survey presents a comprehensive review of recent ML-based resource provisioning strategies in cloud computing. Through a rigorous taxonomic analysis of 35 key studies, with a focus on developments from 2023 to 2025, the research categorizes existing work along two primary dimensions: ML methodology, including classical, deep learning, and advanced reinforcement learning, and optimization objectives, such as cost, Quality of Service (QoS), sustainability, and security-aware paradigms. The findings reveal a paradigm shift from reactive heuristics to proactive, hybrid forecasting-optimization models, Multi-Agent Reinforcement Learning (MARL), and serverless computing orchestration. Quantitative synthesis demonstrates that intelligence-driven interventions offer measurable improvements over traditional methods. For example, Deep Reinforcement Learning (DRL) models have reduced resource consumption by 10% and improved performance by 30%, while hybrid architectures have achieved user cost reductions of up to 44%. The survey concludes by discussing fundamental tradeoffs and identifying critical open challenges and future research directions in the edge-cloud continuum, including predictive container pre-warming and carbon-aware green AI orchestration.

Author 1: Stefanus Albert Kosim
Author 2: Bagus Jati Santoso
Author 3: Deka Julian Arrizki
Author 4: Riki Mi'roj Achmad
Author 5: I Nyoman Gede Artadana Mahaputra Wardhiana
Author 6: Royyana Muslim Ijtihadie

Keywords: Deep learning; cloud computing; machine learning; resource provisioning; taxonomy

PDF

Paper 39: A Machine Learning-Driven Framework for Accurate Brain Image Registration in Multimodal and Noisy Environments

Abstract: Brain image registration is fundamental in medical imaging, aligning images across modalities, time points, and subjects to establish spatial correspondence. This is crucial for tasks such as cohort studies, intervention planning, and treatment monitoring, where exact alignment ensures consistent analysis. Despite their importance, current brain image registration techniques have notable shortcomings, including limited robustness to noise, misalignment in multi-modality images, and high computational cost. These limits can impede practical deployment in real-time clinical environments and yield suboptimal registration accuracy. This study addresses these issues with an Improved Brain Image Registration Technique Using Machine Learning Algorithms (BIRT-MLA). The proposed architecture uses convolutional neural networks (CNNs) to extract salient image features and, through supervised learning, predicts transformation parameters to achieve precise alignment even in noisy and demanding imaging conditions. Modern optimization techniques reduce the registration error, lowering processing time while maintaining high accuracy. Using CNNs, the proposed method also classifies brain images effectively, improving diagnostic support and the usefulness of registered images for downstream tasks. Combining registration and classification into a single pipeline improves clinical decision-making and simplifies workflows. Experimental results demonstrate the advantages of the proposed technique through improved alignment precision, robustness to image artifacts, and reduced computing time compared with current approaches. These gains can benefit both clinical and research settings, supporting accurate and efficient brain image analysis.

Author 1: M. S. Minu
Author 2: Mutharasu M
Author 3: S. Hemamalini
Author 4: Sunitha T
Author 5: Mohanaprakash T A
Author 6: Justindhas Y

Keywords: Machine learning algorithms; brain image registration; convolutional neural networks; deep learning; brain disease

PDF

Paper 40: On the Security of Authentication Protocols for Remote Healthcare Systems Through Cryptographic Vulnerability Analysis and Secure Protocol Redesign

Abstract: This paper revisits a previously proposed authentication scheme for remote healthcare systems in Cloud-IoT. Although that protocol was introduced as a repair of an earlier healthcare design and was claimed to satisfy the usual confidentiality and mutual-authentication goals, a closer reconstruction of its registration, login, authentication, and password-update logic reveals several structural weaknesses. The analysis shows that a stolen smart card combined with one captured transcript enables offline password verification, that the session key is deterministic for a fixed user-sensor pair, that static pseudonyms expose long-term linkability, and that compromise of the server-wide secret expands immediately to all registered sensors. To address these problems, the core key-management path is redesigned while keeping the original cloud-assisted remote healthcare architecture. The revised scheme uses a device-bound seed only for local recovery, a fresh elliptic-curve Diffie-Hellman exchange for every run, dynamic pseudonyms, and KDF-based session-key derivation with explicit context binding. A comparative evaluation against the Sharma-Kalra baseline and the 2021 Azrour design indicates that the revised protocol raises resistance to guessing, replay, cross-session correlation, and compromise propagation with only modest latency growth.

Author 1: Haewon Byeon

Keywords: Remote healthcare security; authentication protocol; Internet of Medical Things; cryptographic vulnerability analysis; secure session key establishment; privacy-preserving authentication

PDF

Paper 41: An Ensemble Boosting Approach with Boruta Feature Selection for Predicting E-Payment Adoption

Abstract: This study examined the factors influencing the adoption of electronic payment systems among Micro, Small, and Medium Enterprises (MSMEs) and developed a predictive model to evaluate the suitability of e-payment implementation. The research applied an ensemble machine learning approach consisting of AdaBoost, Binomial Boosting, L2 Boosting, GLM Boosting, and Random Forest to predict the likelihood of e-payment adoption. The novelty of this study lay in optimizing ensemble learning performance through Boruta-based feature selection, which improved the identification of the most relevant predictors. Data were collected from 1,500 MSME owners in DKI Jakarta, Indonesia, using a structured questionnaire. The Boruta feature selection process was implemented using predictor variables as input features and the adoption decision as the target variable, with maxRuns = 50, pValue = 0.05, mcAdj = TRUE, and getImpRfZ as the feature importance function. The GLM Boosting model was implemented using a binomial family for binary classification with a learning rate of 0.1 and a stopping iteration of 50. The results indicated that Perceived Risk, Perceived Usefulness, Subjective Norms, and Loyalty to E-payment Brands were the most influential factors affecting adoption. Among all models, GLM Boosting achieved the best performance with the highest test accuracy of 82.30%, demonstrating strong predictive capability and generalization performance. These findings provided practical insights for MSME owners and policymakers in designing strategies to improve e-payment adoption and supported the development of more effective digital financial inclusion policies.
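
The Boruta procedure referenced above compares each real feature against randomly shuffled "shadow" copies of the feature set. The sketch below illustrates that principle only: it substitutes a simple |Pearson correlation| importance for the random-forest Z-scores the actual Boruta algorithm uses, and the all-runs rejection rule is a simplification of Boruta's binomial hit-counting; the `max_runs` parameter loosely mirrors the study's maxRuns = 50 setting:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def boruta_like(features, target, max_runs=50, seed=0):
    """Keep features that beat the best shuffled 'shadow' copy in every run.

    features: dict name -> list of values; target: list of numeric labels.
    """
    rng = random.Random(seed)
    confirmed = set(features)
    for _ in range(max_runs):
        # best importance achieved by any shadow (shuffled) feature this run
        shadow_best = 0.0
        for values in features.values():
            shadow = values[:]
            rng.shuffle(shadow)
            shadow_best = max(shadow_best, abs(pearson(shadow, target)))
        # drop any real feature that fails to beat the shadows
        for name, values in features.items():
            if abs(pearson(values, target)) <= shadow_best:
                confirmed.discard(name)
    return sorted(confirmed)
```

On a toy input with one informative and one irrelevant feature, only the informative one survives the shadow comparison, which is the behaviour the abstract relies on for selecting predictors such as Perceived Risk and Perceived Usefulness.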

Author 1: Mariana Purba
Author 2: Junaidi Junaidi
Author 3: Lemi Iryani
Author 4: Nia Umilizah
Author 5: Handrie Noprisson
Author 6: Nur Ani

Keywords: E-payment; AdaBoost; binomial boosting; L2 boosting; GLM boosting; Boruta procedure

PDF

Paper 42: Beyond Paper and Pencil: Evaluating the Effectiveness and User Perception of a Digital Reading Proficiency Assessment Platform

Abstract: This study investigated the effectiveness of a digital reading proficiency assessment platform compared with traditional paper-and-pencil methods and examined users’ perceptions of its performance. It employed a quasi-experimental pretest–posttest control group design involving two comparable groups of secondary school students: a control group assessed using conventional reading assessment procedures and an experimental group assessed using the digital platform for reading proficiency assessment. Pre-test and post-test scores in linear and non-linear reading tasks were analyzed using paired and independent sample t-tests to determine differences in reading proficiency gains between the two groups. To evaluate user perception, survey questionnaires grounded in the ISO/IEC 25010 software quality standards were administered to students and teachers, focusing on functional suitability, usability, security, and maintainability. Results revealed that the experimental group achieved significantly higher reading proficiency gains than the control group in both assessment types, with statistical analyses confirming that the improvements were significant. In contrast, the control group showed minimal or no meaningful improvement. User perception findings indicated a high level of satisfaction across all evaluated quality dimensions, suggesting that the digital assessment platform was perceived as reliable, user-friendly, secure, and adaptable to instructional needs. The results provide empirical evidence that digital reading proficiency assessment is both more effective and more positively perceived than traditional methods. The study concludes that integrating digital assessment platforms in reading evaluation can enhance assessment accuracy, efficiency, and user acceptance, offering a viable and evidence-based alternative for improving reading assessment practices in secondary education.

Author 1: Ivy M. Tarun

Keywords: Digital reading assessment; reading proficiency; quasi-experimental design; user perception; software quality evaluation

PDF

Paper 43: Hybrid Learning-to-Rank Approach for Complex Information Retrieval Systems

Abstract: Biomedical question answering presents significant challenges due to the complexity of biomedical language and the need for precise information retrieval. This study aims to improve the performance of a biomedical information retrieval system through a hybrid learning-to-rank framework. Specifically, we combine lexical (BM25) and semantic (BioBERT) representations to form hybrid inputs for RankFormer, a transformer-based ranking model. This hybrid representation captures both surface-level term matching and deep contextual understanding. Experiments conducted on the BioASQ dataset show that our approach achieves better ranking performance compared to the standalone lexical or neural baselines, reaching a MAP@10 of 0.9614 and an nDCG@10 of 0.9320. These results highlight the effectiveness of hybrid input representations in enhancing biomedical answer ranking.
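
Score-level fusion of lexical and semantic signals is a common baseline behind hybrid rankers of the kind described above. The sketch below is an illustration of that baseline only, not the paper's RankFormer model: it min-max normalizes BM25 and semantic similarity scores and ranks documents by a convex combination with a hypothetical mixing weight `alpha`:

```python
def min_max(scores):
    """Rescale a score list to [0, 1]; constant lists map to all zeros."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def hybrid_rank(doc_ids, bm25_scores, semantic_scores, alpha=0.5):
    """Rank documents by alpha * lexical + (1 - alpha) * semantic score."""
    lex = min_max(bm25_scores)
    sem = min_max(semantic_scores)
    fused = [alpha * l + (1 - alpha) * s for l, s in zip(lex, sem)]
    # sort by fused score, highest first
    return [d for _, d in sorted(zip(fused, doc_ids), reverse=True)]
```

With `alpha = 1.0` the ranking degenerates to pure BM25, and with `alpha = 0.0` to pure semantic similarity, which makes the fusion weight a convenient knob for ablation against the standalone baselines the abstract mentions.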

Author 1: Fatma Zohra Bessai-Mechmache
Author 2: Yasmine Hanifi
Author 3: Damia Lyna Ait Idir

Keywords: Learning-to-rank; information retrieval; hybrid learning-to-rank; transformer-based ranking model; biomedical question answering

PDF

Paper 44: Optimizing Rainfall Prediction in Settat, Morocco, Through Machine Learning

Abstract: Rainfall prediction is still a difficult challenge because rainfall is nonlinear, intermittent, and highly variable, especially in semi-arid climates. Accurate rainfall prediction is crucial for water resource management, agricultural planning, and climate-driven decision-making. This study proposes a comparative framework based on machine learning and ensemble learning techniques to predict daily rainfall in Settat, Morocco, as a representative semi-arid region. Five predictive models were trained and evaluated on meteorological station observations: Random Forest, XGBoost, LightGBM, CatBoost, and a Multilayer Perceptron (MLP). The models' performance was evaluated using mean absolute error (MAE), mean squared error (MSE), root mean square error (RMSE), and the coefficient of determination (R-squared). The results demonstrate that gradient boosting algorithms outperform all other evaluated models in both performance and stability. Specifically, LightGBM produced the lowest error values and explained rainfall variability best. These results underscore the success of boosting-based ensemble techniques in modeling inconsistent precipitation patterns and provide a comparative framework for machine-learning-based rainfall forecasting in semi-arid environments.

Author 1: Oussama Zemnazi
Author 2: Sanaa El Filali
Author 3: Sara Ouahabi
Author 4: Abderrahim Mouhtadi

Keywords: Rainfall forecasting; machine learning; gradient boosting; LightGBM; semi-arid climate; ensemble learning

PDF

Paper 45: A Systematic Literature Review on Artificial Intelligence Applications for Breast Cancer Classification

Abstract: Breast cancer remains one of the most prevalent and life-threatening diseases worldwide, requiring early diagnosis and accurate classification for effective treatment. Advancements in artificial intelligence (AI), deep learning, and machine learning techniques have shown great potential in automating breast cancer diagnosis and molecular subtyping using medical imaging. This systematic literature review explores the application of AI in breast cancer classification, focusing on mammographic imaging and its application in distinguishing molecular subtypes. The study follows the PRISMA guideline, investigating studies from multiple digital libraries published between 2020 and November 2024. Findings show that while deep learning models have significantly improved breast cancer detection, challenges remain in optimizing classification models for molecular subtypes, balancing accuracy and interpretability, and integrating AI-based tools into clinical practice workflows. In addition, heterogeneity in preprocessing pipelines and dataset limitations highlight the need for additional research to develop robust and generalizable classification models. This review underscores the importance of AI-driven solutions in advancing breast cancer diagnosis and treatment planning while providing insights into future research directions.

Author 1: Nursakinah Abdullah
Author 2: Qi Wei Oung
Author 3: Chee Chin Lim
Author 4: Chiew Chea Lau
Author 5: Vrshni Menaka R Siva Nathan
Author 6: Hui Wen Tiu

Keywords: Artificial intelligence; breast cancer; classification; Convolutional Neural Network; deep learning; machine learning; mammography; medical imaging; molecular subtypes; Vision Transformer

PDF

Paper 46: Integrating Big Data and Machine Learning for Effective Cyberattack Prediction in e-Health Information Systems

Abstract: This study proposes an intrusion-prediction framework for e-Health information systems that combines structured web-log analysis, supervised machine learning, and Apache Spark-based distributed processing. A corpus of 1,000,000 labeled HTTP log instances collected from a university hospital web environment was preprocessed into security-relevant features, including request method, request/response type, packet size, status code, URL length, and parameter count. Using a stratified 80/20 train-test split and five-fold cross-validation on the training data, we compared K-Nearest Neighbors (KNN), Logistic Regression, and Decision Trees. KNN achieved the best held-out performance, with 95.66% accuracy, 91.79% precision, 93.93% recall, 92.85% F1-score, and a 3.60% false positive rate. Logistic Regression and Decision Trees reached accuracies of 85.30% and 83.20%, respectively. Spark also reduced runtime substantially at the 1,000,000-instance scale, lowering KNN processing time from 12.0 s to 6.5 s. The results show that combining big data infrastructure with carefully tuned machine learning can improve both detection quality and operational feasibility in hospital cybersecurity monitoring.
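
Turning a raw HTTP log entry into the kind of security-relevant numeric features the abstract lists (request method, packet size, status code, URL length, parameter count) might look as follows; the field names and the example URL are illustrative, not the paper's exact schema:

```python
from urllib.parse import urlsplit, parse_qsl

def log_features(method, url, status_code, packet_size):
    """Map one HTTP log entry to numeric features for a classifier."""
    parts = urlsplit(url)
    # keep_blank_values catches probing requests like ?id=&debug=
    params = parse_qsl(parts.query, keep_blank_values=True)
    return {
        "method_is_post": int(method.upper() == "POST"),
        "status_code": status_code,
        "packet_size": packet_size,
        "url_length": len(url),
        "param_count": len(params),
        "path_depth": len([p for p in parts.path.split("/") if p]),
    }
```

Rows of such feature dictionaries can then be fed to KNN, Logistic Regression, or Decision Tree classifiers, and the per-row transformation parallelizes naturally over Spark partitions.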

Author 1: Mohamed Abdelbaki
Author 2: Latif Adnane
Author 3: Charaf Eddine Ait Zaouiat

Keywords: Artificial intelligence; big data; cybersecurity; hospital information systems; log files

PDF

Paper 47: Solar Irradiance Forecasting Approaches Based on Machine Learning: A Systematic Literature Review

Abstract: The prediction of solar irradiance plays a crucial role in the design, performance, and stability of renewable energy sources, especially photovoltaic (PV) power generation. Accurate forecasting helps in managing energy, maintaining grid stability, and integrating solar energy into contemporary power systems. The study is a Systematic Literature Review (SLR) of 37 recent (2019-2025) peer-reviewed papers on solar irradiance forecasting that apply Machine Learning (ML), Deep Learning (DL), and hybrid or ensemble modelling methods. The review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) guidelines to ensure transparency and reproducibility. A thorough search of seven large databases, namely Google Scholar, IEEE Xplore, Web of Science, Springer Nature Link, ScienceDirect, MDPI, and the ACM Digital Library, was conducted to find relevant studies. Based on a structured synthesis of the chosen literature, the findings suggest a clear methodological shift from traditional ML methods to DL and hybrid modelling structures. Although classical ML algorithms have low computational complexity and can be used effectively for short-term predictions, DL architectures consistently outperform them in capturing nonlinear temporal and spatial patterns in solar irradiance data. Moreover, hybrid models combining DL architectures with signal decomposition and feature fusion methods further improve predictive accuracy. Nevertheless, the review notes a number of ongoing shortcomings, such as limited geographic generalizability because of single-site dominance, inconsistent reporting of computational efficiency, inconsistency of evaluation metrics, a lack of robustness testing in dynamic weather conditions, and a strong bias towards short-term forecasting horizons.
In order to fill these gaps, future studies need to focus on multi-site and cross-climatic validation, domain adaptation using transfer learning, designing lightweight models to deploy in real-time, standardised benchmarking guidelines, and broaden their scope to medium and long-term forecasting with enriched meteorological inputs. Overall, the results offer an evidence-based, systematic review of existing trends in methodology and emphasise the need to balance predictive accuracy with generalizability, efficiency, and practical application in solar energy forecasting systems.

Author 1: Sempe Thom Leholo
Author 2: Chunling Tu
Author 3: Topside Ehleketani Mathonsi

Keywords: Solar irradiance forecasting; Machine Learning; Deep Learning; hybrid models; Systematic Literature Review

PDF

Paper 48: Efficient Computation of Parametric Exponentiation in Parametric Algebra

Abstract: This study analyzes the computational efficiency of parametric exponentiation operations in parametric algebra and proposes an algorithmic approach for their fast computation. By leveraging the properties of parametric algebra, the possibility of reducing parametric exponentiation to fast exponentiation methods is established, and a corresponding computational algorithm is developed. The proposed approach enables more efficient computation of parametric exponentiation and reduces the computational impact of the system parameter. The obtained results provide a basis for reconsidering approaches to the efficiency of parametric algebra-based computations, demonstrating that the slowness of exponentiation often stems from the computational methods employed rather than the algebraic structure itself. This research establishes a significant foundation for the efficient organization of cryptographic computations based on parametric algebra and for addressing algorithmic optimization challenges.
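
The fast exponentiation methods that the reduction above targets are classical square-and-multiply schemes. The following stdlib sketch shows the binary (right-to-left) method for ordinary modular exponentiation; the parametric-algebra operation itself is not reproduced here:

```python
def fast_pow(base, exponent, modulus):
    """Binary square-and-multiply: O(log exponent) multiplications
    instead of the O(exponent) of repeated multiplication."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                    # current bit set: multiply in
            result = (result * base) % modulus
        base = (base * base) % modulus      # square for the next bit
        exponent >>= 1
    return result
```

Reducing a slower operation to this scheme is exactly the kind of speedup the abstract attributes to the computational method rather than to the algebraic structure: the number of multiplications drops from linear to logarithmic in the exponent.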

Author 1: Khudoykulov Zarifjon
Author 2: Khudoynazarov Umidjon
Author 3: Muminova Mastura

Keywords: Parametric algebra; parametric multiplication; parametric exponentiation; computational efficiency; algebraic structures; computational complexity; fast exponentiation

PDF

Paper 49: An Interoperable Multi-Agent Architecture for Personalized Smart Learning Using Generative AI and Learning Analytics

Abstract: Learning Management Systems (LMSs) remain central to digital education, but they still provide limited support for adaptive and personalized learning across heterogeneous platforms. This study proposes an interoperable smart learning architecture that integrates a Multi-Agent System (MAS), generative artificial intelligence, and learning analytics to support context-aware interventions while preserving LMS independence. Methodologically, the work follows a design-oriented research approach based on architectural modeling and scenario-based validation. The proposed framework combines Learning Tools Interoperability (LTI) 1.3, the Experience API (xAPI), Sharable Content Object Reference Model (SCORM), a Learning Record Store (LRS), and an asynchronous Extensible Messaging and Presence Protocol (XMPP)/JavaScript Object Notation (JSON) communication bus to connect intelligent services with existing LMS environments. The architecture includes tutor, assessment, recommendation, monitoring, collaboration, and profile agents coordinated through a microservices-based design. Its functional coherence is illustrated through four representative scenarios covering dropout-risk detection, targeted remediation, teacher dashboards with grade return, and collaborative feedback. The main contribution is a modular and standards-based architecture that connects analytics, agent-based orchestration, and generative AI within a closed-loop adaptation process for scalable smart learning environments.

Author 1: Al Mahdi Khaddar
Author 2: Youssef Said
Author 3: Amine Dehbi
Author 4: Tarik Chafiq

Keywords: Smart learning; multi-agent systems; generative AI; learning analytics; LMS interoperability; adaptive learning

PDF

Paper 50: Mapping Tourist Sentiments Through Lexicon-Based Analysis of Social Media Reviews: The Case of Salak Sibetan Agritourism

Abstract: Salak Sibetan, Bali's emblematic snake fruit cultivated in Sibetan Village, Karangasem, has gained increasing digital visibility through user-generated content across social media platforms. This study applies a bilingual lexicon-based sentiment analysis framework integrating Indonesian and English sentiment lexicons, explicit negation handling, domain-specific agritourism vocabulary, and a mean-based sentiment scoring function, chosen to improve neutrality discrimination, to classify 500 online reviews collected from Facebook, Instagram, Shopee, TikTok, and Twitter/X. Results indicate an overall sentiment distribution of 40% positive, 40% neutral, and 20% negative. Positive reviews emphasize taste quality (manis legit, renyah, fresh) and cultural authenticity, while negative feedback highlights packaging issues, inconsistent quality, and pricing concerns. Image-centric platforms (Instagram and TikTok) exhibit higher proportions of positive sentiment emphasizing taste quality and authenticity, whereas transaction-oriented platforms (Shopee and Twitter/X) show more neutral and negative expressions related to logistics, packaging, and pricing. Beyond sentiment measurement, the study demonstrates how lexicon-based methods can capture platform-specific evaluative behavior within heritage agritourism contexts, offering methodological insights for multilingual sentiment analysis in low-resource domains and strategic implications for sustainable destination communication aligned with Sibetan's FAO GIAHS recognition.
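
A minimal illustration of lexicon scoring with negation flipping and mean-based thresholds follows; the lexicon entries, negator list, and thresholds below are toy assumptions, not the study's bilingual resources:

```python
NEGATORS = {"tidak", "bukan", "not", "no"}       # illustrative negation cues

# toy bilingual lexicon; real entries and weights are assumptions
LEXICON = {"manis": 1.0, "renyah": 1.0, "fresh": 1.0, "enak": 1.0,
           "mahal": -1.0, "rusak": -1.0, "bad": -1.0, "slow": -1.0}

def sentiment(review, pos_threshold=0.1, neg_threshold=-0.1):
    """Mean-based lexicon scoring with one-token negation flipping."""
    scores, negate = [], False
    for tok in review.lower().split():
        if tok in NEGATORS:
            negate = True          # flip the polarity of the next hit
            continue
        if tok in LEXICON:
            scores.append(-LEXICON[tok] if negate else LEXICON[tok])
        negate = False             # negation only reaches one token
    if not scores:
        return "neutral"           # no lexicon hits at all
    mean = sum(scores) / len(scores)
    if mean > pos_threshold:
        return "positive"
    if mean < neg_threshold:
        return "negative"
    return "neutral"
```

The mean over matched tokens, rather than a raw sum, keeps long reviews from drifting toward the extremes, which is the neutrality-discrimination property the abstract highlights.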

Author 1: Paula Dewanti
Author 2: Putu Adi Guna Permana
Author 3: I Gusti Ayu Widari Upadani
Author 4: Ellyn Ly Maramento
Author 5: Alfred John G. Borreros

Keywords: Sentiment analysis; lexicon-based methods; Salak Sibetan; agritourism; GIAHS; social media analytics

PDF

Paper 51: Optimization of Access Point Location Using an Integer Programming Model

Abstract: To address the increasing demand for reliable connectivity in educational environments, strategic access point (AP) placement is necessary to support coverage and bandwidth requirements while controlling installation cost. This study applies a binary integer programming model to the third floor of the College of Technologies at Bukidnon State University in order to determine a feasible minimum-cost AP configuration under site-specific conditions. The model incorporates coverage, bandwidth, range, and AP-count constraints, together with demand estimates based on expected simultaneous network use. Results show that the optimization framework can identify a feasible AP-to-area assignment that satisfies the imposed constraints within the study site. Because the nominal AP ranges are large relative to the dimensions of the floor, the resulting configuration is influenced more strongly by installation cost and bandwidth feasibility than by range limitation alone. The findings show the practical value of binary integer programming as a structured planning tool for access point placement in bounded indoor environments. While the resulting configuration is specific to the physical, technical, and cost conditions of the study area, the modeling approach may be applied to other institutional settings through appropriate parameter recalibration. A post-deployment wireless site survey is recommended to verify actual field performance and identify any remaining coverage gaps.
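A binary integer program of the kind described can be illustrated on a toy instance: minimise installation cost subject to full-area coverage and an aggregate bandwidth constraint. The costs, bandwidths, coverage sets, and demand below are invented for illustration; a real model would use site-specific parameters and a MIP solver rather than enumeration.

```python
from itertools import product

# Brute-force solution of a tiny binary integer program for AP placement:
# choose x[j] in {0,1} per candidate site to minimise cost, subject to
# every area being covered and total bandwidth meeting demand.

COST      = [120, 100, 150]          # hypothetical install cost per site j
BANDWIDTH = [300, 250, 400]          # hypothetical Mbps each AP supplies
DEMAND    = 500                      # hypothetical total demand (Mbps)
COVERS    = [{0, 1}, {1, 2}, {0, 2}] # areas reachable from site j
AREAS     = {0, 1, 2}

def solve():
    best, best_x = None, None
    for x in product((0, 1), repeat=len(COST)):   # enumerate binary vars
        chosen = [j for j in range(len(x)) if x[j]]
        covered = set().union(*(COVERS[j] for j in chosen)) if chosen else set()
        bandwidth = sum(BANDWIDTH[j] for j in chosen)
        if covered >= AREAS and bandwidth >= DEMAND:  # feasibility check
            cost = sum(COST[j] for j in chosen)
            if best is None or cost < best:
                best, best_x = cost, x
    return best, best_x
```

Exhaustive enumeration is exact but only viable for a handful of candidate sites; the paper's setting would typically be handed to a branch-and-bound solver.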

Author 1: Joan Marie M. Panes
Author 2: Marilou O. Espina
Author 3: Jovelin M. Lapates

Keywords: Access points (APs); optimization; wireless network; coverage; integer programming

PDF

Paper 52: AI and Blockchain for Secure Healthcare Data Management: A Bibliometric Analysis of Research Trends and Thematic Clusters (2020–2025)

Abstract: The convergence of artificial intelligence (AI) and blockchain has become an active axis of interdisciplinary research in healthcare data security. This paper reports a bibliometric analysis of 434 Scopus-indexed articles published between 2020 and 2025, with data collection and processing performed on 20 April 2026. The objective is to map the intellectual structure, collaborative dynamics, and thematic composition of this expanding field. The corpus was analyzed using the bibliometrix R package (version 4.3.0) and VOSviewer (version 1.6.20). The increasing research output is evidenced by an annual growth rate of 34.8%, with publication volume growing from 10 articles in 2020 to a maximum of 152 articles in 2025 across a total of 198 unique venues. Keyword co-occurrence analysis, processed through the Louvain community detection algorithm on 250 high-frequency terms with association-strength normalization, produced four thematic clusters: Blockchain and Privacy-Preserving Techniques, Healthcare Systems and Cybersecurity Infrastructure, Artificial Intelligence and Clinical Diagnostics, and Electronic Health Records and Interoperability. India, China, Saudi Arabia, and the United States lead scholarly output. The international co-authorship rate of 49.31% reflects the globally distributed nature of the research community. IEEE Access and the IEEE Journal of Biomedical and Health Informatics are the dominant publication venues. Federated learning occupies a structurally central position, with a betweenness centrality of 101.5, acting as the principal methodological bridge between the two technologies. An average of 27.96 citations per document confirms the above-average scholarly impact of the corpus. The results provide researchers, practitioners, and policymakers with an evidence-based map of the field's trajectory, its most productive research directions, and its remaining structural gaps. The underlying dataset, search logs, and analysis scripts are released openly to support full reproducibility.
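The keyword co-occurrence counting that precedes community detection in such analyses can be sketched simply: for each record, count every unordered pair of keywords that appear together. The sample records below are hypothetical, not drawn from the paper's corpus.

```python
from collections import Counter
from itertools import combinations

# Sketch of the co-occurrence step that feeds clustering algorithms such as
# Louvain: count how often keyword pairs appear together in a record.

def cooccurrence(records):
    pairs = Counter()
    for keywords in records:
        # sort so each unordered pair has one canonical key
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical keyword sets for three papers.
papers = [
    {"blockchain", "privacy", "healthcare"},
    {"blockchain", "federated learning", "privacy"},
    {"federated learning", "healthcare"},
]
```

The resulting pair counts form the weighted edges of the keyword graph on which community detection and centrality measures are computed.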

Author 1: Maroufi Mohammed
Author 2: Lamzabi Siham
Author 3: Ziti Soumia

Keywords: Artificial intelligence; blockchain; healthcare data security; bibliometric analysis; federated learning; internet of medical things; data privacy; cybersecurity; science mapping; VOSviewer

PDF

Paper 53: A Bidirectional LSTM–Sentiment Fusion Framework for Dynamic Financial Market Prediction

Abstract: Financial market prediction is a significant challenge because of intrinsic volatility, non-stationarity, and the multi-faceted influence of economic indicators, world events, and investor sentiment. Conventional models can easily miss the temporal dependencies and emotional aspects inherent in market data, resulting in poor forecasting precision. This paper presents a sequence-based modelling approach that incorporates sentiment analysis of textual information, such as news articles and social media posts, within a bidirectional LSTM-sentiment fusion framework. The results indicate that sentiment integration refines predictive performance by aligning temporal characteristics with real-time emotive drivers.

Author 1: Minal Dhankar
Author 2: Neha Gupta

Keywords: Bidirectional LSTM; deep learning; financial market; modelling; sentiment analysis

PDF

Paper 54: A Systematic Review of Graph Neural Networks and Social Network Analysis Techniques for Public Sentiment Uncovering

Abstract: The rapid growth of social media has produced large-scale, highly interconnected user-generated data, creating the need for analytical approaches that can capture both textual meaning and relational structure. This systematic literature review examines the integration of Graph Neural Networks (GNNs) and Social Network Analysis (SNA) for public sentiment uncovering in social media. Following a PRISMA-based review process, 75 studies were selected from ScienceDirect and IEEE Xplore. The synthesis shows that recent research has expanded beyond direct sentiment classification to include closely related tasks that improve sentiment reliability, including misinformation detection, rumor analysis, bot detection, anomaly detection, and recommendation personalization. Within the reviewed sentiment-oriented studies, the thematic distribution indicates that 32% focus on direct sentiment or emotion analysis, 29% on misinformation or rumor detection, 24% on malicious-user, bot, or anomaly detection, and 15% on community detection or link prediction. Hybrid models consistently reported strong empirical gains, including 95.25% accuracy for GNN–LSTM sentiment classification, improvements of more than 5% over baseline in heterogeneous neural network and language-model integration, and up to 98.4% accuracy/F1 in bot detection settings. The review also identifies key limitations related to scalability, noisy and incomplete data, interpretability, class imbalance, and cross-platform generalization. In response, it proposes future research directions centered on real-time graph learning, multilingual adaptation, emotion-aware graph representations, fairness-aware evaluation, and human-in-the-loop explainability. These findings provide a clearer methodological foundation for researchers and practitioners seeking to build more robust, explainable, and socially aware sentiment analysis systems.

Author 1: Adi Wibowo
Author 2: Wijayanto
Author 3: Henri Tantyoko
Author 4: Ari Wibisono
Author 5: Usman Ependi

Keywords: Graph Neural Networks (GNNs); Social Network Analysis (SNA); public sentiment analysis; hybrid models; Systematic Literature Review (SLR)

PDF

Paper 55: AI-Based Process Mining Framework for Process Business Integration in an Enterprise System

Abstract: The accelerated progression of information technology has necessitated the adoption of more intelligent and adaptive Enterprise Systems (ES) to sustain and optimize organizational processes. ES serve as critical infrastructure for resource management, operational efficiency, and sustaining competitive advantage; however, their deployment frequently encounters persistent challenges. These include discrepancies between modelled and actual business processes, insufficient visibility into process execution, and limited automation in the detection and optimization of workflows. To mitigate these limitations, this study advances an Artificial Intelligence (AI)-enabled Process Mining paradigm. This approach facilitates the systematic extraction, analysis, and visualization of business processes, thereby supporting the identification of deviations, the detection of anomalies, and the provision of data-driven recommendations for continuous improvement. The overarching aim of the research is to conceptualize and evaluate an enterprise system framework that integrates AI-driven Process Mining to reinforce transparency, efficiency, and effectiveness in business process management. The proposed framework aims to provide automated analytical capabilities, predictive insights, and a robust data-centric foundation to enhance the precision of strategic decision-making, thereby contributing to the advancement of adaptive and intelligent enterprise systems.
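A foundational process-mining step underlying frameworks like this is discovering a directly-follows graph from an event log. The event log below is a hypothetical example, not data from the study.

```python
from collections import Counter

# Sketch of basic process discovery: count directly-follows relations
# (activity a immediately followed by activity b) per case in an event log.

def directly_follows(event_log):
    """event_log: {case_id: [activity, ...]} -> Counter of (a, b) edges."""
    edges = Counter()
    for trace in event_log.values():
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return edges

# Hypothetical enterprise event log with three order cases.
log = {
    "order-1": ["create", "approve", "ship"],
    "order-2": ["create", "approve", "ship"],
    "order-3": ["create", "reject"],
}
```

Comparing the discovered graph against the modelled process is one way to surface the deviations and anomalies the abstract refers to.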

Author 1: Mardhani Riasetiawan
Author 2: Ahmad Ashari

Keywords: Enterprise System; process mining; artificial intelligence; process business

PDF

Paper 56: Intelligent ECU Load Management in Electric Vehicles Using a Gated Multi-Stage Machine Learning Framework

Abstract: The growing adoption of software-defined and electrified vehicle architectures has significantly increased the computational burden on electronic control units (ECUs), leading to dynamic and non-stationary load conditions that can compromise real-time performance and system reliability. Conventional ECU load-management strategies are largely static or address isolated aspects of the problem, such as overload prediction or energy optimization, without providing an end-to-end decision mechanism for runtime load redistribution. This study proposes a leakage-safe, three-stage intelligent ECU load-management model for electric vehicles that jointly performs overload detection, target ECU recommendation, and load-shift magnitude estimation within a gated architecture. The proposed model uses ensemble and boosting-based machine learning models with task-specific feature design to prevent data leakage and reduce computational overhead through conditional execution. Performance is measured on a multi-feature ECU dataset characterized by non-stationary operational conditions and significant class imbalance between normal and overload states; the imbalance is addressed using stratified sampling and SMOTE-based augmentation. The proposed model achieved an overload-detection F1-score of 0.916 and a ROC–AUC of 0.996, a target ECU recommendation accuracy of 0.935, and load-shift estimation with an R² of 0.988 and low prediction error. Statistical testing and ablation analysis confirmed that the performance gains were consistent and attributable to key design choices such as imbalance-aware learning, leakage control, and gated inference. The final results show that the proposed model is an effective and deployable solution for intelligent ECU load management in next-generation electric vehicles.
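The gated conditional-execution idea can be sketched as follows: the later stages run only when the first-stage detector fires, which is what saves computation under normal load. The stage functions, thresholds, and feature names below are stand-ins, not the paper's trained models.

```python
# Illustrative gated three-stage decision flow: stage 2 (target ECU
# recommendation) and stage 3 (load-shift estimation) execute only when
# stage 1 flags an overload. All thresholds and fields are hypothetical.

def detect_overload(features):          # stage 1: binary gate
    return features["cpu_load"] > 0.85

def recommend_target_ecu(features):     # stage 2: pick least-loaded peer ECU
    return min(features["peer_loads"], key=features["peer_loads"].get)

def estimate_shift(features):           # stage 3: load magnitude to move
    return round(features["cpu_load"] - 0.70, 2)

def gated_inference(features):
    if not detect_overload(features):   # gate: skip stages 2-3 when normal
        return {"overload": False}
    return {
        "overload": True,
        "target_ecu": recommend_target_ecu(features),
        "shift": estimate_shift(features),
    }
```

In the paper's setting each stage would be a trained ensemble or boosting model; the gating structure itself is what enables conditional execution.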

Author 1: Vaishali Mishra
Author 2: Sonali Kadam

Keywords: Electric vehicles; electronic control units; intelligent load management; machine learning; overload detection; resource allocation

PDF

Paper 57: Hybrid Feature Learning with TF-IDF and SBERT for Ambiguous Requirement Classification

Abstract: Ambiguity in Software Requirement Specifications (SRS) remains a major source of project delay, rework, and misinterpretation in software engineering. Traditional ambiguity detection approaches rely on lexical or rule-based techniques that capture surface-level patterns but fail to model contextual meaning. Recent transformer-based models improve semantic representation; however, when applied independently, they often overlook lexical ambiguity and remain sensitive to class imbalance. This study proposes a hybrid feature learning framework that integrates TF-IDF lexical representations with Sentence-BERT (SBERT) contextual embeddings for ambiguous requirement classification. The approach is evaluated on the Functional–Non-Functional Requirements (FR–NFR) dataset using Logistic Regression, Random Forest, and Support Vector Machine classifiers. Experimental results demonstrate that single-feature models produce unstable precision–recall trade-offs, particularly under severe class imbalance. In contrast, the proposed TF-IDF + SBERT hybrid representation consistently improves recall and F1-score. The best performance is achieved using Support Vector Machine, attaining an F1-score of 0.7122 and a recall of 0.6429, significantly outperforming standalone lexical and semantic baselines. The findings confirm that ambiguity detection is a multi-dimensional problem requiring both lexical frequency patterns and contextual semantic modelling. The proposed framework offers a reproducible and practically deployable solution for automated ambiguity detection in software requirements engineering.
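The hybrid representation amounts to concatenating a sparse lexical vector with a dense contextual embedding. The sketch below hand-rolls a TF-IDF variant and uses a placeholder in lieu of a real SBERT encoder; the stub and its dimensionality are assumptions for illustration.

```python
import math
from collections import Counter

# Sketch of TF-IDF + embedding fusion: each document's TF-IDF vector is
# concatenated with a contextual embedding to form the hybrid feature.

def tfidf_vectors(docs):
    vocab = sorted({t for d in docs for t in d.split()})
    df = Counter(t for d in docs for t in set(d.split()))
    n = len(docs)
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        # smoothed IDF weighting; one column per vocabulary term
        vecs.append([tf[t] * math.log((1 + n) / (1 + df[t])) for t in vocab])
    return vocab, vecs

def sbert_stub(doc, dim=4):
    # placeholder: a real pipeline would call an SBERT model here
    return [0.0] * dim

def hybrid_features(docs):
    _, tfidf = tfidf_vectors(docs)
    return [vec + sbert_stub(doc) for vec, doc in zip(tfidf, docs)]
```

The concatenated vector then feeds a conventional classifier such as the SVM the paper reports as best-performing.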

Author 1: Fariha Khalid
Author 2: Muhammad Yaseen
Author 3: Gohar Rahman
Author 4: Nauman Mazhar
Author 5: Muhammad Asif Nauman
Author 6: Aida Mustapha

Keywords: Ambiguity detection; software requirements engineering; hybrid feature learning; TF-IDF; Sentence-BERT; Support Vector Machine

PDF

Paper 58: QR Code-Based Access Control Systems: Architectural Taxonomy, Security Landscape, and Future Research Directions

Abstract: Access control systems secure physical and digital settings, especially in colleges, businesses, and restricted areas. Traditional techniques based on physical keys, magnetic cards, and biometrics suffer from loss, duplication, high deployment costs, and maintenance complexity. With the widespread adoption of smartphones, QR code-based access control systems (QR-ACS) have emerged as a flexible and cost-effective alternative. Scannable QR codes allow fast authentication without dedicated hardware, improving user convenience. The success of QR-based access control depends on how QR codes are generated, handled, and verified within the system. This systematic review examines the integration and evolution of QR-ACS, with particular attention to both recent innovations and the challenges that continue to accompany their adoption. A broad set of studies published between 2015 and 2023 was reviewed to explore how these systems have been designed, implemented, and evaluated across different application contexts. The analysis draws on literature indexed in major academic databases, including IEEE Xplore, ACM Digital Library, and JSTOR, with an emphasis on system architecture, implementation strategies, and reported performance outcomes. Overall, the reviewed studies indicate that QR-ACS can enhance operational efficiency and offer practical security benefits, especially when combined with complementary technologies. At the same time, recurring concerns related to security, robustness, and deployment limitations remain evident. The review analyzes QR-ACS architectures, security mechanisms, and threat models, with particular emphasis on IoT integration, authentication strategies, and risk-aware system design.
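One common QR-ACS design pattern is to encode a short-lived, server-signed token in the QR payload so a door controller can verify it without contacting the server. The sketch below uses HMAC-SHA256 from the standard library; the secret, field names, and TTL are illustrative assumptions, not a scheme from any reviewed paper.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"door-controller-shared-secret"  # hypothetical provisioned key

def issue_token(user_id, ttl=60, now=None):
    """Pack user id + expiry into a signed payload suitable for a QR code."""
    body = json.dumps({"u": user_id, "exp": (now or time.time()) + ttl})
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{body}|{sig}".encode()).decode()

def verify_token(token, now=None):
    """Return the user id if signature and expiry check out, else None."""
    body, sig = base64.urlsafe_b64decode(token).decode().rsplit("|", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                       # tampered payload
    claims = json.loads(body)
    if (now or time.time()) > claims["exp"]:
        return None                       # expired token
    return claims["u"]
```

The short expiry limits replay of a photographed QR code, one of the recurring robustness concerns the review highlights.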

Author 1: Worood Alsawi
Author 2: Dina M. Ibrahim

Keywords: QR codes; access control systems; security mechanisms; authentication; Internet of Things

PDF

Paper 59: Real-Traffic-Trained Intelligent IDS for Advanced Cyberattack Detection in Enterprise Networks

Abstract: Early detection of cyberattacks remains a major challenge in enterprise networks due to encrypted traffic, protocol diversity, and highly dynamic service behavior. This study evaluates a machine learning-based intrusion detection system trained on real enterprise traffic captured over 20 working days under operational conditions. A total of 1,163,014 packets were collected and complemented with controlled attack traffic, including DDoS, brute force, botnet, SQL injection, port scanning, privilege escalation, and service exploitation scenarios. After flow-based feature extraction and preprocessing, six supervised learning models were evaluated under the same data partition and validation settings. Among them, Random Forest achieved the best overall performance, with precision, recall, and F1-score above 0.999 and an AUC of 0.9994 on the collected dataset. These findings suggest that training with real traffic can improve IDS performance under realistic enterprise conditions. However, further validation across additional organizations and time periods is required to confirm generalizability.

Author 1: Dalila Naira Chinchay
Author 2: Rodrigo Calderón Ari
Author 3: Liset S. Rodriguez-Baca

Keywords: IDS; machine learning; cyberattacks; cybersecurity; intrusion detection

PDF

Paper 60: Evaluating Open-Source LLMs for Thai Clinical Information Extraction

Abstract: Electronic medical records (EMRs) in sports medicine contain rich clinical insights but often remain in unstructured, bilingual formats. While locally-deployed large language models (LLMs) offer a privacy-preserving solution for data extraction, their performance in handling Thai-English clinical shorthand remains under-explored. This study evaluated five open-source LLMs for extracting structured clinical data from Thai sports medicine records and assessed the reliability of human-AI collaborative annotation. Mistral-7B, Qwen2.5-7B, Gemma2-9B, LLaMA3.1-8B, and Typhoon2-3.1 were deployed locally. We evaluated the extraction of four clinical fields against a ground truth of 444 records. A standardized JSON schema was utilized to ensure data interoperability. Inter-annotator agreement (IAA) was measured using Cohen’s kappa on a 100-record sample. Mistral-7B achieved the highest F1-score (92.2%), followed by Qwen2.5-7B (91.9%). Typhoon2-3.1 underperformed (32.9%) due to bilingual format mismatches and difficulties in shorthand normalization. IAA for treatment was moderate (kappa=0.43), whereas diagnosis showed near-zero agreement (kappa=-0.04) due to non-standardized institutional shorthand. Locally-deployed LLMs can effectively transform unstructured bilingual EMRs into structured JSON formats, ensuring data privacy and readiness for clinical analytics. However, the lack of standardized clinical coding in Thai EMRs remains a significant barrier. Future digital health initiatives should integrate LLMs with standardized terminologies like ICD-11 to enhance data reliability.
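The agreement metric reported above, Cohen's kappa, corrects observed agreement between two annotators for agreement expected by chance. A minimal implementation of the standard formula:

```python
from collections import Counter

# Cohen's kappa for two annotators over categorical labels:
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
# p_e is the chance agreement implied by each annotator's label marginals.

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    p_o = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[k] * counts_b[k]
              for k in set(labels_a) | set(labels_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Values near 0 (as for the diagnosis field above) mean agreement no better than chance; values near 1 mean near-perfect agreement.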

Author 1: Somkiat Kosolsombat
Author 2: Phatnattachat Chatsiraphon
Author 3: Taratep Si-Aksorn
Author 4: Chiabwoot Ratanavilisagul

Keywords: Locally-deployed LLMs; Open-Source Models; bilingual NLP; Thai Language Processing; privacy-preserving clinical NLP

PDF

Paper 61: Two-Phase Transfer Learning Framework for Automated Depression Classification in the Elderly via Facial Expression Recognition

Abstract: Automatic detection of depression in the elderly through Facial Expression Recognition faces a fundamental domain-shift challenge caused by aging-related skin deformation and facial structural changes, such as ptosis and deep wrinkles. This study proposes a Two-Phase Transfer Learning framework that integrates high-density facial landmark extraction (468 points using MediaPipe) with a hybrid spatiotemporal CNN-BiLSTM-VGG19 architecture to address these challenges. Phase I training was conducted on a standard facial dataset to obtain fundamental feature representations, followed by fine-tuning in Phase II on a geriatric facial dataset. Experimental results show that the CNN-BiLSTM-VGG19 architecture is highly robust, exploiting deep facial wrinkles as informative texture features; the model achieved 91.42% accuracy on older adults aged 70. Furthermore, hyperparameter evaluation confirmed that the Stochastic Gradient Descent (SGD) optimizer combined with a low learning rate of 0.0005 was the optimal configuration: this balance effectively prevented catastrophic forgetting during domain adaptation while achieving a clinical sensitivity (recall) above 96%. Overall, this study demonstrates that the texture-biased CNN-BiLSTM-VGG19 model offers a robust, non-invasive, and highly efficient depression screening instrument for implementation in elderly care facilities.

Author 1: Muhammad Daffa Zahrandika Wibisono
Author 2: Marizuana Mat Daud
Author 3: Wan Mimi Diyana Wan Zaki

Keywords: Elderly depression; Facial Expression Recognition; transfer learning; VGG19; texture bias; spatiotemporal network

PDF

Paper 62: Comparison of Time-Domain and Frequency-Domain EMG Features for Gait Phases Classification Using Machine Learning

Abstract: Accurate gait phase detection is essential for biomechanical analysis and the control of wearable assistive devices such as powered prostheses and exoskeletons. Electromyography (EMG) provides a direct representation of neuromuscular activation and offers potential advantages for low-latency, anticipatory gait phase recognition. However, the effectiveness of different EMG feature representations for stance-swing classification has not yet been clearly established. Therefore, this study presents a systematic comparison of time-domain (TD) and frequency-domain (FD) EMG features for gait phase classification. EMG signals were recorded from the tibialis anterior and medial gastrocnemius muscles of ten healthy participants during level walking. After preprocessing and segmentation, TD and FD features were extracted and used as inputs to a support vector machine classifier with a radial basis function kernel. Model performance was evaluated using a leave-one-subject-out cross-validation framework to assess generalization. The results demonstrate that TD features consistently outperform FD features across all evaluation metrics, achieving an accuracy of 0.813 ± 0.112, macro-averaged F1-score (Macro-F1) of 0.812 ± 0.114, and Matthews correlation coefficient (MCC) of 0.672 ± 0.178, compared to FD features with an accuracy of 0.712 ± 0.077, Macro-F1 of 0.708 ± 0.079, and MCC of 0.448 ± 0.159. These findings indicate that TD features more effectively capture the transient amplitude-based neuromuscular patterns associated with gait phase transitions. In addition, TD features offer lower computational complexity, making them well-suited for real-time implementation. Overall, this study highlights the superiority of time-domain EMG representations for reliable and efficient gait phase detection and provides practical guidance for the development of wearable gait monitoring and assistive control systems.
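The time-domain features compared above are cheap to compute per window, which is part of why the abstract argues they suit real-time use. A sketch using the standard EMG definitions (mean absolute value, root mean square, zero crossings, waveform length):

```python
import math

# Standard time-domain EMG features computed over one analysis window.

def td_features(window):
    n = len(window)
    mav = sum(abs(x) for x in window) / n                       # mean abs value
    rms = math.sqrt(sum(x * x for x in window) / n)             # root mean square
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)  # zero crossings
    wl = sum(abs(b - a) for a, b in zip(window, window[1:]))    # waveform length
    return {"MAV": mav, "RMS": rms, "ZC": zc, "WL": wl}
```

The resulting four-value vector per channel and window is the kind of input the paper feeds to its RBF-kernel SVM.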

Author 1: Muhamad Amirul Sunni Rohim
Author 2: Nurhazimah Nazmi
Author 3: Shin-Ichirou Yamamoto
Author 4: Muhammad Kashfi Shabdin
Author 5: Mohd Asyadi Azam

Keywords: Electromyography (EMG); gait phase detection; stance–swing classification; time-domain features; frequency-domain features; support vector machine (SVM); wearable assistive devices

PDF

Paper 63: A Framework for Digital Technology and AI Adoption in Slovak Firms: Evidence from Qualitative Analysis

Abstract: This study examines the adoption of digital technologies and artificial intelligence (AI) in Slovak firms, with particular attention to technological integration, employee adaptation, and organizational change. The study is based on qualitative semi-structured interviews conducted with managers and employees from 20 companies across multiple sectors and firm-size categories. The data were analyzed using thematic analysis. The findings identify three levels of digital adoption: basic digitization, process automation, and AI-supported adoption. Basic digital tools were reported across all firms, while advanced forms of adoption were concentrated mainly in IT and manufacturing companies. AI use remained limited and was typically confined to exploratory applications such as chatbots, automated support, or data-processing assistance. Across most cases, employees initially responded to digital change with hesitation or resistance, followed by gradual adaptation through practice-based and informal workplace learning. The results further indicate that digitalization was associated primarily with task reallocation, workflow optimization, and role redesign, whereas direct workforce reduction was reported only in isolated cases. Based on these findings, the study develops a conceptual framework linking technological adoption, employee adaptation, organizational restructuring, and sectoral context. Given the qualitative and exploratory nature of the research, the findings should be interpreted as analytically transferable rather than statistically generalizable. The study contributes to the literature by providing firm-level evidence from a Central and Eastern European context and by proposing a structured interpretation of digital and AI adoption as a multi-level organizational process.

Author 1: Martina Chrancokova
Author 2: Ludmila Mitkova
Author 3: Lukas Vartiak

Keywords: Digital technology adoption; artificial intelligence; digital transformation; process automation; thematic analysis; Slovak firms

PDF

Paper 64: Feature-Level Analysis and Robust Baselines for EEG-Based Imagined Speech Recognition on the ASU Dataset

Abstract: Imagined speech decoding from non-invasive electroencephalography remains a challenging problem, especially when moving beyond small vocabularies and optimistic evaluation protocols. This work revisits the Arizona State University (ASU) imagined speech dataset and treats it as a rigorous ten-class benchmark, with a focus on offline, corpus-level analysis rather than real-time deployment. After unifying all recordings into 5 s epochs at 256 Hz, 6,520 trials with 60 EEG channels were preprocessed using bandpass filtering, baseline correction, z-score normalization, and trial-wise ICA for artifact attenuation. On top of this pipeline, a comprehensive feature representation was constructed that combines common spatial patterns, discrete wavelet statistics, time-domain moments, autocorrelation coefficients, power spectral density band powers, and Hjorth parameters into a single 5,120-dimensional vector. A block-wise ablation indicates that autocorrelation, CSP, PSD, and Hjorth features carry most of the discriminative information in this setting, while wavelet and simple statistical descriptors contribute little and can be removed without harming performance. Using only the informative blocks (3,440 features), a multinomial logistic regression classifier reaches about 0.41 accuracy and 0.42 macro F1 on the ten-class task, roughly four times chance level. A multi-layer perceptron and a CNN–LSTM model, trained under the same splits and with class weighting, do not outperform this linear baseline and exhibit stronger overfitting. Within the evaluated protocol, these findings suggest that carefully engineered features capture most of the discriminative structure accessible on this corpus, and that deeper models add complexity without clear benefit. The study provides a transparent baseline and a feature-level analysis that can serve as a reference point for future work on imagined speech recognition and transfer learning across EEG corpora.
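Among the feature blocks the ablation singles out, the Hjorth parameters are compact enough to show in full. The standard definitions compute activity, mobility, and complexity from the signal's variance and the variances of its first and second differences:

```python
import math

# Hjorth parameters of a 1-D signal, per the standard definitions:
#   activity   = var(x)
#   mobility   = sqrt(var(x') / var(x))
#   complexity = mobility(x') / mobility(x)

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def hjorth(signal):
    d1 = [b - a for a, b in zip(signal, signal[1:])]   # first difference
    d2 = [b - a for a, b in zip(d1, d1[1:])]           # second difference
    activity = variance(signal)
    mobility = math.sqrt(variance(d1) / activity)
    complexity = math.sqrt(variance(d2) / variance(d1)) / mobility
    return activity, mobility, complexity
```

In the paper's pipeline these three values per channel join the CSP, PSD, and autocorrelation blocks that the ablation found most informative.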

Author 1: Hatem T M Duhair
Author 2: Masrullizam Mat Ibrahim
Author 3: Jamil Abedalrahim Jamil Alsayaydeh
Author 4: Mazen Farid
Author 5: Safarudin Gazali Herawan

Keywords: EEG-based imagined speech; ASU imagined speech dataset; brain–computer interface; feature extraction; logistic regression

PDF

Paper 65: A Multivocal Literature Review of Metadata Governance: Conceptual Foundations and Research Gaps

Abstract: As digital ecosystems grow and cross-organizational data sharing intensifies, metadata governance has become increasingly important. It plays a critical role in supporting interoperability, accountability, and sustainable data ecosystems. However, metadata governance remains conceptually fragmented and insufficiently structured as a distinct research domain. This study conducts a multivocal literature review (MLR) that integrates academic publications with authoritative non-academic sources, including international standards, governance frameworks, and regulatory instruments. A systematic review process was applied to identify governance constructs, recurring patterns, and conceptual gaps across heterogeneous evidence sources. The findings show that no peer-reviewed studies explicitly integrate metadata governance with formal conceptual structuring. This finding should not be interpreted as a limitation of the search process, but rather as an empirical indication of a structural gap in the literature. It provides a methodological justification for extending the analysis to multivocal evidence. The results further indicate that existing studies predominantly emphasize metadata management and technical interoperability. In contrast, governance-level constructs—such as decision rights, accountability mechanisms, oversight structures, and lifecycle coordination—remain underdefined and inconsistently formalized. This study synthesizes fragmented knowledge into a coherent conceptual understanding of metadata governance, clarifies its distinction from metadata management, and identifies critical research gaps. These findings provide a structured foundation for advancing metadata governance as a cumulative research domain and support future conceptual and methodological research.

Author 1: Dana Indra Sensuse
Author 2: Alivia Yulfitri
Author 3: Erisva Hakiki Purwaningsih
Author 4: Anton Satria Prabuwono

Keywords: Metadata governance; multivocal literature review; data governance; metadata management; conceptual synthesis

PDF

Paper 66: Advanced Forensic Analysis Techniques for Insider Threat Detection in Database Management Systems

Abstract: This paper presents a real-time database forensic framework designed to detect insider threats within database management systems (DBMSs). Existing database forensic approaches, as identified through a systematic literature review employing the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, are predominantly reactive and post-mortem in nature, lacking real-time SQL-layer visibility, forensic correlation, and evidence integrity assurance. To address these gaps, the proposed framework integrates native Microsoft SQL Server auditing mechanisms with a centralized ELK (Elasticsearch-Logstash-Kibana) pipeline to enable continuous evidence collection, automated correlation, and real-time visualization. The framework is evaluated against six insider-threat scenarios within the SPL ForensicDB—a simulated enterprise logistics database environment modeled after Saudi Post Logistics (SPL)—encompassing unauthorized access, data exfiltration, privilege escalation, data manipulation, backdoor creation, and audit suppression. Experimental results demonstrate high detection accuracy (precision: 0.92, recall: 0.88), a low false-positive rate of 3%, and alert latency consistently below five seconds, with minimal system overhead of 3.2% CPU utilization. The framework further ensures forensic integrity through SHA-256-verified, tamper-resistant audit logs and a structured chain-of-custody preservation mechanism compliant with ISO/IEC 27037:2012, making it suitable for both proactive security monitoring and legally defensible digital forensic investigations.
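Tamper-evident audit logging of the kind the framework relies on can be sketched as a SHA-256 hash chain: each entry's hash covers the previous entry's hash, so any retroactive edit breaks verification. The record fields below are illustrative, not the framework's actual schema.

```python
import hashlib
import json

# Hash-chained audit log: each entry commits to the previous entry's hash,
# making after-the-fact modification of any record detectable.

GENESIS = "0" * 64

def append_entry(chain, record):
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    chain.append({"record": record, "prev": prev,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(chain):
    prev = GENESIS
    for entry in chain:
        payload = json.dumps({"prev": prev, "record": entry["record"]},
                             sort_keys=True)
        if entry["prev"] != prev or \
                hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False  # broken link: a record or hash was altered
        prev = entry["hash"]
    return True
```

A chain like this supports the chain-of-custody requirement: an investigator can re-verify every SQL audit event against the stored hashes.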

Author 1: Kholod Saeed Talea AlQahtani
Author 2: Mounir Frikha
Author 3: M. M. Hafizur Rahman

Keywords: Database management systems; insider threat detection; SQL server; forensic auditing; ELK stack; digital forensics

PDF

Paper 67: A Hybrid Explainable Ensemble Learning Framework for Health Risk Prediction

Abstract: Early prediction of patient health risk is a crucial component of safe and effective clinical triage and timely intervention. Real-world health risk data are often small and limited, with class imbalance, making adequate accuracy and transparency difficult for Machine Learning (ML) models to achieve. This paper proposes a hybrid explainable ensemble learning framework for multi-class health risk prediction, built on a stacking architecture with three strong base learners: Random Forest (RF), eXtreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LightGBM). XGBoost is chosen as the meta-learner for its ability to learn complex non-linear probability mappings from the base models, which provide complementary signals for the prediction. A complete preprocessing pipeline is implemented, covering missing data handling, systematic encoding of categorical variables, and strict separation between training and test sets to ensure unbiased assessment. Experimental results show that the proposed framework achieved an accuracy of 97.5%, exceeding the individual models: 96.0%, 95.5%, and 96.0% for RF, XGBoost, and LightGBM, respectively. Additionally, the framework combines predictive performance with interpretable clinical decision support and transparency using the SHapley Additive exPlanations (SHAP) method. SHAP values provide global and local explanations that reveal the most influential features driving each prediction.

Author 1: Hussain AlSalman

Keywords: Health risk prediction; hybrid explainable; ensemble learning; meta-learner; LightGBM; SHAP

PDF

Paper 68: A Digital Twin-Enabled Approach to Optimize Freight Fleet Operations in a Peruvian Transportation Company

Abstract: This study evaluates the impact of a Digital Twin-enabled system on the optimization of cargo fleet operations at a transport company in Lima, Peru, during 2025. The proposed solution integrates OBD-II sensors, a Python processing engine, and a MongoDB-based data layer to build a synchronized virtual representation of 25 operating vehicles. A pre-experimental design with before-and-after measurements was applied to analyze changes in accident frequency, load capacity utilization, and the monthly number of trips. The results show significant improvements: accident frequency decreased from an average of 0.08 to zero; load capacity utilization increased from 31 to 35 units; and the number of trips required to transport equivalent volumes decreased. These findings suggest that Digital Twin–based systems can support safer, more efficient, and data-driven operations in emerging logistics environments.

Author 1: Deyber Flores Cabezas
Author 2: Liset S. Rodriguez-Baca

Keywords: Digital Twin; fleet management; IoT; transportation; operational optimization

PDF

Paper 69: Real-Time Thought-to-Vision Generation Using Low-Channel EEG and Feature-Fusion Learning

Abstract: Severe motor disabilities and paralysis make it hard for individuals to communicate their thoughts and express their imagination using standard interfaces. Recent methods that convert EEG signals into images using diffusion models have shown superior results. However, these methods usually depend on high-density EEG systems with 32 to 128 channels, deep neural EEG encoders, and large datasets, leading to high computational cost and poor real-time performance and limiting their use in assistive settings. To address these problems, this paper proposes a lightweight, real-time Thought-to-Vision system. In this work, Thought refers specifically to the imagination of simple geometric shapes (circle, square, and triangle) under controlled experimental conditions. The system decodes the imagined geometric shapes from a low-channel EEG system requiring only 2 channels and then produces visual images using a diffusion model. The EEG signal was recorded at 250 Hz with 150 trials per session: 50 trials each for the circle, square, and triangle shapes. The signal was cleaned using artifact rejection, 50 Hz notch filtering, and bandpass filtering between 1 and 40 Hz. A Tri-Domain EEG feature fusion (TDEF) approach that combines spectral features (FFT band power), time-frequency features (Daubechies-4 wavelet coefficients), and statistical features was developed and tested against several benchmarks, including feedforward networks, CNNs, LSTM/GRU-based time-series encoders, CNN-Transformer models, and EEG-CLIP alignment. Evaluation uses classification accuracy, precision, recall, and F1 score, along with embedding consistency for semantic alignment. The experimental results indicate that TDEF with an XGBoost classifier reaches around 94% in classification accuracy, precision, recall, and F1 score. This performance surpasses deep time-series encoders, which achieved up to 39.09% accuracy, and contrastive EEG-CLIP models, which reached 82.97% accuracy. The classified EEG embeddings were then used to guide a latent diffusion model, enabling coherent and semantically consistent image generation. These findings confirm that feature-fusion learning with XGBoost can outperform deep EEG encoders in low-channel settings, offering a solid, efficient, and practical solution for real-time assistive brain-computer interfaces.
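A minimal two-domain sketch of the feature-fusion idea (FFT band power plus per-channel statistics) using NumPy only; the paper's third domain, Daubechies-4 wavelet coefficients, would typically come from a package such as PyWavelets and is omitted here. Band edges and the statistic set are illustrative:

```python
import numpy as np

FS = 250  # sampling rate (Hz), as in the paper
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def spectral_features(trial):
    """Mean FFT band power per channel and band."""
    freqs = np.fft.rfftfreq(trial.shape[-1], d=1 / FS)
    power = np.abs(np.fft.rfft(trial, axis=-1)) ** 2
    return np.concatenate([
        power[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
        for lo, hi in BANDS.values()
    ])

def statistical_features(trial):
    """Simple per-channel time-domain statistics."""
    return np.concatenate([trial.mean(axis=-1), trial.std(axis=-1),
                           np.ptp(trial, axis=-1)])

def fuse(trial):
    """Concatenate feature domains into one vector for the classifier."""
    return np.concatenate([spectral_features(trial),
                           statistical_features(trial)])

trial = np.random.randn(2, 2 * FS)   # 2 channels, 2 s of EEG
vec = fuse(trial)
print(vec.shape)   # (2 channels x 4 bands) + (2 channels x 3 stats) = (14,)
```

The fused vector would then feed a classifier such as XGBoost, whose predicted shape label conditions the image generator.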

Author 1: Abha Marathe
Author 2: Medha Wyawahare
Author 3: Milind Rane
Author 4: Vrinda Parkhi

Keywords: Brain computer interface; feature fusion learning; diffusion models; EEG to image generation; wavelet; EEG signal processing

PDF

Paper 70: Adoption of Blockchain Technology in Electronic Records Management in the Malaysian Public Sector

Abstract: As the Malaysian public sector undergoes digital transformation, its electronic records management faces challenges, including security issues, maintaining record integrity and authenticity, audit trails, and trust in existing systems. Blockchain technology has the potential to solve these challenges through features such as distributed records, transparency, restricted immutability, and cryptographic security. However, the adoption of this technology in the Malaysian public sector is still underexplored. The main objective of the study is to identify factors that influence the adoption of blockchain technology in Electronic Records Management in the Malaysian public sector and to develop an adoption model. A quantitative method was used to collect data from 253 public-sector officials directly involved in electronic records management. The conceptual framework was developed by integrating the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT). Data analysis was conducted using the Partial Least Squares–Structural Equation Modeling (PLS-SEM) approach to test the relationships between variables and confirm the study hypotheses. Results show that users' behavioral intentions have a significant effect on actual usage of blockchain technology (H10 is accepted); an R² of 0.633 indicates that the independent constructs explain 63.3% of the variance in behavioral intentions. Individual, organizational, and environmental factors, including performance expectations, effort expectations, social influences, and facilitating conditions, have a significant effect on users' behavioral intentions, emphasizing that technology adoption requires a holistic approach. The study also finds that behavioral intentions act as a critical mediator before actual usage of the technology can occur.

Author 1: Aida Ruzana Ahmad Yani
Author 2: Umi Asma’ Mokhtar

Keywords: Electronic records management; blockchain adoption; public sector; TAM; UTAUT

PDF

Paper 71: Reproducible Prediction Framework of Customer Churn Using Machine Learning, Advanced Data Science and Business Intelligence Techniques

Abstract: The telecommunications sector has evolved in recent years, resulting in intense competition and high customer acquisition costs. As a result, retaining customers has become a key concern for telecom operators. In this work, we propose the design and implementation of a complete customer churn prediction system that combines data science, machine learning, and business intelligence approaches. The methodology is structured into five main steps: exploratory data analysis, development of an ETL pipeline, feature engineering, predictive modeling using a Random Forest algorithm, and the creation of decision-support dashboards in Power BI. Random Forest demonstrated the highest performance, with an AUC-ROC of 0.85, and the results showed that the main predictors of churn are monthly charges, contract type, and customer tenure. Our approach, validated with a confusion matrix, offers decision-makers an operational tool to anticipate departures and implement targeted loyalty actions. This study proposes a reproducible methodological framework for companies facing churn and contributes to the use of machine learning in relationship marketing.
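The modeling and validation steps can be sketched with scikit-learn; the toy table and column names (tenure, monthly_charges, contract) are hypothetical stand-ins for the ETL output, not the paper's dataset:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical toy churn table; a real pipeline would load the ETL output.
df = pd.DataFrame({
    "tenure": [1, 40, 3, 60, 5, 24, 2, 55, 8, 30] * 30,
    "monthly_charges": [85, 25, 90, 20, 80, 50, 95, 30, 70, 45] * 30,
    "contract": (["month"] * 5
                 + ["year", "month", "two_year", "month", "year"]) * 30,
    "churn": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0] * 30,
})

X = pd.get_dummies(df.drop(columns="churn"))   # one-hot encode contract type
y = df["churn"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]        # churn probability per customer
print("AUC-ROC:", round(roc_auc_score(y_te, proba), 3))
print(confusion_matrix(y_te, model.predict(X_te)))
```

The predicted probabilities and feature importances are exactly the artifacts a Power BI dashboard would surface for targeted retention actions.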

Author 1: Younes KOULOU
Author 2: Norelislam EL HAMI

Keywords: Churn prediction; telecommunications; machine learning; random forest; business intelligence; ETL; Power BI; feature engineering; decision support

PDF

Paper 72: Attribute-Conditioned Attention Scaling for Text-to-Image Diffusion Models

Abstract: Many text-to-image diffusion models have attained significant success in generating images from textual prompts; however, they still face challenges such as limited fine-grained control over individual semantic attributes. To overcome this issue, this study proposes Attribute-Conditioned Attention Scaling (ACAS), which modulates the cross-attention layers of the UNet model using attribute-related scaling factors. These scaling factors are applied to the attention maps, allowing selective enhancement of attribute features. The approach operates at inference time, granting precise control over generated images without retraining the base model and while preserving overall image content. For experiments, 30 diverse prompts along with eight descriptive attributes are used to inspect the multi-attribute controllability of the proposed model. Evaluation metrics such as CLIP, LPIPS, and Inception Score (IS) are used for quantitative assessment. Experimental results show that ACAS obtains competitive results with an LPIPS of 0.75, a CLIP score of 0.316, an IS of 3.98, and a minimal 8.47-second generation time. Furthermore, a comparative analysis of ACAS against similar baseline methods shows that it improves attribute controllability without adding extra computational cost. Overall, this model bridges the gap between fine-grained attribute control and prompt-based guidance in recent diffusion models.
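The core mechanism, rescaling a cross-attention map per prompt token and renormalizing, can be sketched in NumPy (a single-head toy, not the authors' UNet integration; dimensions and scale values are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_cross_attention(Q, K, V, token_scales):
    """Cross-attention whose map is rescaled per prompt token, then
    renormalized, so attribute tokens can be boosted or damped
    without any retraining."""
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d))         # (pixels, tokens)
    attn = attn * token_scales[None, :]          # attribute scaling factors
    attn = attn / attn.sum(axis=-1, keepdims=True)
    return attn @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(16, 8))                     # 16 latent "pixels"
K = rng.normal(size=(5, 8))                      # 5 prompt tokens
V = rng.normal(size=(5, 8))
scales = np.array([1.0, 1.0, 1.8, 1.0, 0.6])     # boost token 2, damp token 4
out = scaled_cross_attention(Q, K, V, scales)
print(out.shape)   # (16, 8)
```

Because the scaling is applied inside an existing attention layer, the base model's weights stay untouched, which is what keeps the method training-free.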

Author 1: Rabia Tahir
Author 2: Bilal Ahmed Memon

Keywords: Diffusion models; attribute-control; CLIP; LPIPS; text-to-image generation; cross-attention modulation

PDF

Paper 73: Adaptive Multi-Layer Encryption for Enhancing Security in Smart Home IoT Ecosystems

Abstract: The proliferation of Internet of Things (IoT) devices in smart home environments has introduced significant security challenges, largely due to device heterogeneity, constrained computational resources, and limited native support for robust encryption mechanisms. This study presents an Adaptive Multi-Layer Encryption (AMLE) model designed to enhance security across the device, network, and cloud layers of smart home IoT ecosystems. The proposed model integrates lightweight encryption mechanisms at the device layer, machine learning-based anomaly detection at the network layer, and strong cryptographic protection combined with Attribute-Based Access Control (ABAC) at the cloud layer. Evaluation was conducted in a controlled, simulated environment to assess functional correctness, security behavior under representative threat scenarios, and system performance. Results demonstrate that the AMLE framework is capable of detecting and responding to simulated unauthorized access attempts, anomalous traffic patterns associated with botnet-like behavior, and data exfiltration scenarios, while maintaining operational performance suitable for typical smart home use cases. The Isolation Forest algorithm, configured with a contamination threshold of 0.05, successfully identified deviations from baseline traffic behavior and triggered policy-driven security responses. The findings indicate that AMLE provides a practical reference framework for implementing adaptive, layered security controls in smart home IoT environments, balancing security requirements with operational constraints.
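The network-layer detector can be sketched with scikit-learn's IsolationForest at the contamination threshold the abstract reports (0.05); the traffic features and distributions are synthetic stand-ins, not the study's testbed data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Baseline smart-home traffic: (packets/s, mean packet size) around a norm.
baseline = rng.normal(loc=[50, 500], scale=[5, 40], size=(950, 2))
# Botnet-like bursts: far higher rate, much smaller packets.
anomalies = rng.normal(loc=[400, 90], scale=[30, 10], size=(50, 2))
traffic = np.vstack([baseline, anomalies])

# Fit on baseline behavior; contamination=0.05 matches the abstract.
detector = IsolationForest(contamination=0.05, random_state=0).fit(baseline)
labels = detector.predict(traffic)           # +1 normal, -1 anomalous
flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} flows flagged for policy-driven response")
```

In the AMLE design, a -1 label would trigger the policy layer (e.g. revoking device credentials or tightening ABAC rules) rather than just logging.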

Author 1: Dancan Obuya Machuki
Author 2: Kennedy Ronoh

Keywords: Smart home security; IoT encryption; adaptive security; multi-layer protection; resource-constrained devices

PDF

Paper 74: Workload-Aware Storage Reduction for Multi-Tenant SIEM on ClickHouse

Abstract: Security Information and Event Management (SIEM) platforms ingest terabytes of heterogeneous telemetry daily—Windows event logs, DNS queries, HTTP transactions, EDR alerts, and network metadata from Zeek—yet the majority of stored records are never queried for threat-hunting or incident-response workflows. This study presents a workload-aware storage reduction framework that tailors data retention to observed analytical demand within a multi-tenant ClickHouse deployment. The main contribution is a Workload Analyzer algorithm that extracts importance scores for columns from ClickHouse query logs using a frequency–recency–coverage weighting scheme, and a Storage-Coverage Cost Model that computes the optimal pruning threshold that minimizes a weighted sum of storage cost and coverage loss. Guided by these metrics, the framework applies six composable reduction operators: column pruning with materialized views, adaptive sampling, deduplication, per-column codec selection, skip-indexing, and time-to-live (TTL)-based retention tiering across hot/warm/cold storage. Multi-tenant isolation is enforced through role-based access control overlays aligned with the Thai Personal Data Protection Act (PDPA). Experimental evaluation on 1,000,000 Zipf-distributed Windows Security Events demonstrates 79% uncompressed and 70% compressed storage reduction with sub-second query latency, while the Workload Analyzer automatically identifies the optimal column subset that preserves 100% detection rule coverage at minimum storage cost.
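A frequency–recency–coverage weighting of the kind described might look like the following sketch; the weights, half-life, and query-log format are illustrative assumptions, not the paper's exact scheme:

```python
import math
import time

def column_importance(query_log, now=None, w_freq=0.5, w_rec=0.3, w_cov=0.2,
                      half_life_days=30.0):
    """Score each column by query frequency, exponential recency decay,
    and coverage (fraction of distinct queries touching it).
    Weights and half-life are illustrative, not the paper's values."""
    now = now or time.time()
    freq, last_seen, cov = {}, {}, {}
    for qid, cols, ts in query_log:          # (query id, columns, unix time)
        for col in cols:
            freq[col] = freq.get(col, 0) + 1
            last_seen[col] = max(last_seen.get(col, 0), ts)
            cov.setdefault(col, set()).add(qid)
    n_queries = len({qid for qid, _, _ in query_log})
    max_freq = max(freq.values())
    scores = {}
    for col in freq:
        recency = math.exp(-(now - last_seen[col]) /
                           (half_life_days * 86400))
        scores[col] = (w_freq * freq[col] / max_freq +
                       w_rec * recency +
                       w_cov * len(cov[col]) / n_queries)
    return scores

day = 86400
log = [(1, ["EventID", "Account"], time.time() - day),
       (2, ["EventID"], time.time() - 2 * day),
       (3, ["RawXml"], time.time() - 200 * day)]
scores = column_importance(log)
assert scores["EventID"] > scores["RawXml"]   # rarely, anciently queried
```

Columns whose score falls below the cost model's pruning threshold would become candidates for pruning, aggressive codecs, or cold-tier TTLs.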

Author 1: Nutthakorn Chalaemwongwan

Keywords: ClickHouse; SIEM; storage reduction; workload analysis; column importance scoring; multi-tenant; PDPA compliance

PDF

Paper 75: Web Server Seizure and Live Acquisition: A Legally Compliant Forensic Framework for Indonesia

Abstract: The proliferation of cybercrime activities, particularly those leveraging web servers for illicit purposes such as distributing hoaxes, hosting illegal online gambling, and spreading malware, underscores a pressing demand for a standardized digital forensic framework. Existing methodologies, like simple IP blocking, have proven insufficient in guaranteeing the integrity and admissibility of digital evidence in legal proceedings. This research introduces a comprehensive seizure and acquisition framework specifically engineered to manage digital evidence from both on-premise and cloud-based web servers. A core emphasis of this framework is live acquisition to preserve volatile data and ensure minimal service disruption. The framework systematically addresses critical challenges by focusing on legal authorization, precise server type identification and technical preparation, judicious forensic tool selection, rigorous evidence integrity validation through hashing, diligent Chain of Custody (CoC) documentation, and secure data storage. Tested through simulations of on-premise and cloud server seizures, the framework demonstrated its capacity to uphold evidence integrity and legal compliance. While robust, Subject Matter Expert (SME) validation indicated areas for optimization, particularly in cloud-native contexts and the automation of Chain of Custody documentation. This study marks a pivotal advancement towards standardizing web server seizure procedures, thereby ensuring that digital evidence remains valid, intact, and legally admissible in court.

Author 1: Irwan Hariyanto
Author 2: Yudi Prayudi
Author 3: Rimba Whidiana Ciptasari

Keywords: Digital forensics; web server seizure; live acquisition; evidence integrity; chain of custody

PDF

Paper 76: Adaptive Phishing Website Detection Using Incremental Machine Learning: A Dynamic Approach to Cybersecurity Threats

Abstract: The rapid expansion of internet services and cloud-based platforms has increased cybersecurity threats, particularly phishing attacks that deceive users into disclosing sensitive information. Traditional phishing detection methods, including blacklists and batch-learning models, often struggle to adapt to the continuously evolving nature of these attacks. To address this challenge, this study proposes an adaptive phishing detection framework based on incremental machine learning techniques that enable real-time learning and dynamic adjustment to new attack patterns. A comprehensive evaluation of multiple incremental algorithms was performed using the River-ML framework and a publicly available phishing website dataset. The models were assessed based on accuracy, precision, recall, F1 score, Cohen’s kappa, and memory efficiency. Evaluation results demonstrate that models such as Aggregated Mondrian Forest, Extremely Fast Decision Trees, and Logistic Regression achieved strong classification performance, with the best accuracy reaching 90.15%, precision up to 91.05%, recall up to 89.42%, F1 score up to 88.75%, and Cohen’s kappa up to 79.99%, while lightweight models like ALMA maintained extreme memory efficiency, requiring as little as 1.81 KB. In general, the proposed incremental learning framework significantly improves the effectiveness of phishing detection and computational efficiency, providing a scalable and adaptive defense mechanism against evolving cyber threats.
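The prequential (test-then-train) loop at the heart of incremental learning can be sketched with scikit-learn's partial_fit as a stand-in for the River-ML models the paper evaluates; the synthetic URL features and mid-stream drift point are illustrative:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)        # online linear classifier
classes = np.array([0, 1])                   # 0 = benign, 1 = phishing

correct = total = 0
for step in range(2000):
    # Synthetic URL features; the labeling rule drifts halfway through,
    # imitating an evolving phishing campaign.
    x = rng.normal(size=(1, 4))
    w = np.array([1, -1, 0.5, 0]) if step < 1000 else np.array([-1, 1, 0, 0.5])
    y = np.array([int(x @ w > 0)])
    if step > 0:                              # prequential: test, then train
        correct += int(model.predict(x)[0] == y[0])
        total += 1
    model.partial_fit(x, y, classes=classes)  # learn from one example

print(f"prequential accuracy: {correct / total:.3f}")
```

Because the model updates one example at a time, it recovers after the drift without any batch retraining, which is the property that makes such detectors suitable for evolving threats.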

Author 1: Ajla Kulaglic
Author 2: Mutaz A. B. Al-Tarawneh

Keywords: Phishing detection; incremental learning; online machine learning; cybersecurity threats; real-time classification

PDF

Paper 77: Automated Interactance Near-Infrared Spectral Acquisition System for Mandarin Quality Assessment

Abstract: Intelligent fruit grading systems require automated sensing solutions capable of rapid and reliable non-destructive internal quality assessment. While near-infrared spectroscopy has been widely used for soluble solids content (SSC) prediction, most existing studies rely on manually acquired spectra, limiting scalability in smart agricultural environments. Although online Vis–NIR systems based on transmission configurations have been reported, automated interactance-based systems designed for deployment-oriented grading remain limited. This study presents the design and validation of an automated interactance near-infrared spectral acquisition system for mandarin SSC evaluation. The system integrates controlled clamping, rotational positioning, and automated probe actuation to ensure stable optical geometry and repeatable probe–fruit contact during measurement. Spectral consistency was assessed by comparing consecutive scans obtained using manual and automated acquisition modes. The automated system reduced spectral dispersion among consecutive acquisitions within a measurement session by approximately 65% relative to manual measurement, indicating improved acquisition stability. Chemometric models based on partial least squares regression, support vector regression, and extremely randomized trees were developed under multiple preprocessing strategies. Prediction performance under automated acquisition remained within the same range as manual measurement, with several preprocessing–model combinations (particularly PLS and SVR with smoothing-based preprocessing) showing slightly higher Rp values and lower RMSEp values under automated acquisition. The findings demonstrate the feasibility of the proposed automated system for stable interactance spectral acquisition suitable for SSC prediction, supporting its potential future integration into automated fruit quality assessment systems.

Author 1: Van-Linh Lam
Author 2: Dinh-Tri Nguyen
Author 3: Thanh-Trung Le
Author 4: Hoang-Tien Nguyen
Author 5: Phuoc-Loc Nguyen
Author 6: Quoc-Khanh Huynh
Author 7: Nhut-Thanh Tran
Author 8: Chanh-Nghiem Nguyen

Keywords: Automated spectral acquisition; interactance spectroscopy; near-infrared spectroscopy; fruit quality assessment; soluble solids content; chemometric modeling; smart agriculture; sensing systems

PDF

Paper 78: The Impact of Modern AI on Software Development: A Systematic Literature Review

Abstract: As large language models and agentic AI systems are increasingly integrated into software engineering, a growing body of empirical evidence surrounding these technologies has emerged. This systematic literature review examines the impact of modern AI techniques and tools across software development lifecycle phases and related activities. It covers studies published between 2023 and 2025, yielding a corpus of 62 primary studies that investigate the role of large language models, AI agents, and agentic AI workflows across lifecycle phases. The synthesis is guided by three research questions addressing the entire software development cycle, the reported impacts on development practices and outcomes, and the associated constraints. This review fills a gap in the synthesis of modern AI applications and their impacts. It concludes that modern AI in software engineering is progressively evolving from a generation tool into a reasoning and coordination infrastructure layer, with ongoing efforts targeting the mitigation of identified limitations and the advancement of trustworthy agentic AI capabilities for dependable software engineering practices.

Author 1: Meryem Bensaid
Author 2: Marouane Achbari
Author 3: Younesse Ouahbi
Author 4: Soumia Ziti

Keywords: Large language models; generative AI; agentic AI; software development lifecycle; systematic literature review

PDF

Paper 79: A Survey on Extended Reality Technologies for Industrial Maintenance

Abstract: Industrial maintenance increasingly relies on extended reality (XR) technologies, yet comprehensive analysis comparing AR, VR, and MR implementations with human-centric evaluation remains limited. This systematic literature review employs the PRISMA methodology to analyze 95 primary studies (2014–2025) that investigate implementation patterns, benefits, challenges, and evaluation criteria across XR modalities. AR dominates operational guidance (71.6%), VR prevails in training (38.9%), and MR enables collaborative maintenance (6.3%). Temporal analysis reveals three evolutionary phases: basic visualization (2014–2017), cognitive enhancement (2018–2021), and AI-integrated adaptive systems (2022–2025), mirroring the transition from Maintenance 4.0 to 5.0. Demonstrated benefits include 38% task completion time reductions and 92.4% error decreases. Human-centric factors appear in 44.2% of studies overall, with temporal analysis showing a progressive increase from approximately 25% in 2014–2017 to 58% in 2022–2025, substantiating a paradigm shift toward prioritizing cognitive support and user experience. Two novel frameworks advance theoretical understanding: an Evaluation Framework that expresses effectiveness as a function of technology, task complexity, and human factors; and a Technology-Task-Context Alignment Model that prescribes optimal XR-maintenance pairings. Critical gaps include the need for longitudinal field studies, standardized evaluation protocols, and economic analyses.

Author 1: Mouad Danane
Author 2: Abdelmajid El Ouadi
Author 3: Youssef Rochdi

Keywords: Industrial maintenance; maintenance 5.0; extended reality; augmented reality; virtual reality; mixed reality; human-centric design; artificial intelligence; systematic literature review

PDF

Paper 80: Design and Implementation of a Trust-Aware Fog Computing Simulation Framework Using WorkflowSim

Abstract: Fog computing extends cloud capabilities to the network edge, providing low-latency services for mission-critical applications such as healthcare and industrial automation. However, validating security-aware scheduling policies in this distributed paradigm remains a challenge due to the lack of native support for trust modeling in standard simulation tools like CloudSim and iFogSim. Existing simulators focus primarily on resource provisioning, cost, and energy metrics, often neglecting “Trust” and “Data Confidentiality” as first-class simulation parameters. This study presents the design and implementation of a trust-aware extension for WorkflowSim. We detail the software architecture modifications required to support attribute-based trust verification, secure task fragmentation, and encryption overhead modeling. Unlike standard simulators, our extended framework treats Trust as a dynamic entity in the Virtual Machine (VM) and Task class hierarchy. We validate the framework’s correctness through unit tests and scenario-based trace analysis, demonstrating its ability to accurately model security-performance trade-offs in heterogeneous fog environments. Experimental results indicate that enabling trust-aware scheduling increases makespan by approximately 19% and cost by 16%, while achieving a security score of 0.88 compared to 0.65 for the baseline. The simulation engine overhead remains below 7.1% even for 1000-task workflows. The source code and design patterns presented provide researchers with a robust, extensible tool for evaluating secure scheduling algorithms without the cost of physical testbeds.
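The security-performance trade-off of trust-aware scheduling can be illustrated with a toy greedy scheduler that admits only sufficiently trusted VMs (an illustrative policy, not the WorkflowSim extension itself; the VM names, capacities, and trust values are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    mips: float        # processing capacity
    trust: float       # dynamic trust score in [0, 1]
    ready_at: float = 0.0

def schedule(tasks, vms):
    """Greedy earliest-finish-time scheduling that only admits VMs whose
    trust meets each task's requirement."""
    plan = []
    for length, min_trust in tasks:            # (instructions, required trust)
        eligible = [vm for vm in vms if vm.trust >= min_trust]
        if not eligible:
            raise RuntimeError("no sufficiently trusted VM")
        vm = min(eligible, key=lambda v: v.ready_at + length / v.mips)
        vm.ready_at += length / vm.mips
        plan.append((length, vm.name))
    return plan, max(vm.ready_at for vm in vms)   # makespan

vms = [VM("edge-fast", mips=2000, trust=0.6),
       VM("edge-secure", mips=1000, trust=0.9)]
tasks = [(4000, 0.5), (4000, 0.8), (2000, 0.8)]
plan, makespan = schedule(tasks, vms)
print(plan, f"makespan={makespan:.1f}s")
```

With these numbers, the high-trust tasks are forced onto the slower but more trusted VM, lengthening the makespan, which is the same trade-off the evaluation quantifies (about 19% makespan increase for a higher security score).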

Author 1: Chia Chuan Wu
Author 2: Selvakumar Manickam
Author 3: Shams Ul Arfeen Laghari
Author 4: Shankar Karuppayah

Keywords: Fog computing; simulation tools; WorkflowSim; trust modeling; task scheduling; software architecture

PDF

Paper 81: User-Centered Ergonomic Design of a Portable EEG System for Evaluating Dolphin-Assisted Therapy in Children with Neurodevelopmental Disorders

Abstract: Dolphin-Assisted Therapy (DAT) is a therapeutic alternative used in comprehensive neurorehabilitation, aimed at improving the quality of life and social integration of patients with neurodevelopmental disorders through interaction with dolphins. However, the lack of defined standards makes it difficult to objectively evaluate its effectiveness. In this context, it was identified that the electroencephalographic device used to record EEG signals before, during, and after the sessions (based on a TGAM1 sensor) presented both functional and ergonomic limitations, as its readjustment interrupted the therapy sequence and caused discomfort in patients. To address this issue, the present work proposes the redesign of the device, incorporating ergonomic criteria to improve comfort, stability, and ease of use, without compromising the quality of the signal acquired by the TGAM1 sensor. The main objective was to optimize the user experience while also enabling the collection of more consistent and reliable data for analysis. The results show a significant improvement, achieving a 51% increase in signal retention. This made it possible to recover approximately 36 additional seconds of high-quality neural data per hour of therapy, thanks to more continuous and accurate EEG acquisition during interactions with dolphins. Furthermore, patient evaluations indicated greater acceptance of the device, highlighting improvements in comfort and weight perception compared to the previous version, thereby validating both its functionality and ergonomic design.

Author 1: Brenda Lorena Flores Hidalgo
Author 2: Urim Rarael Pérez Bernal
Author 3: Jesús Jaime Moreno Escobar
Author 4: Hugo Quintana Espinosa
Author 5: Ana Lilia Coria Páez

Keywords: User-centered design; ergonomics; portable EEG system; Dolphin-Assisted Therapy; neurodevelopmental disorders; 3D printing; TGAM1

PDF

Paper 82: Automated Medical Image De-Identification via U-Net++ Segmentation and Conditional GAN Inpainting

Abstract: The acceleration of multi-centric medical AI studies hinges on the ability to share imaging data without exposing burnt-in Protected Health Information (PHI). Manual redaction remains the dominant practice, but it erases diagnostically relevant context, violates harmonization guidelines issued by large consortia, and cannot keep up with the petabyte-scale repositories envisioned by regulatory agencies. This study delivers a comprehensive treatment of a fully automated Detect-and-Restore pipeline that fuses fine-grained U-Net++ segmentation with a context-aware conditional GAN (cGAN) inpainter. Building on two engineering notebooks (U-Net++ training and GAN generator orchestration), we develop a synthetic PHI rendering engine, a dynamic oracle that freezes the detector during adversarial optimization, and a hybrid loss that couples adversarial, pixelwise, and perceptual cues. Extensive experiments on 48,000 synthetically annotated radiographs demonstrate a Dice score of 0.8147 for PHI localization and a PSNR/SSIM/LPIPS triplet of 41.87 dB/0.985/0.027 for restoration while keeping inference below 92 ms per image on a single RTX 4090. Beyond reporting raw metrics, we dissect error modes, quantify the effect of imperfect masks on the inpainter, and position the proposal relative to recent international initiatives on medical image de-identification. Testing on an external clinical cohort of 200 real-world DICOM radiographs confirms generalizability, maintaining a PSNR of 40.12 dB and demonstrating robust blending at masking boundaries without compromising downstream diagnostic utility across heterogeneous hospital data.

Author 1: Ismail Chahid
Author 2: Anas Chahid
Author 3: Yassine Chahid
Author 4: Aissa Kerkour Elmiad
Author 5: Mohammed Badaoui

Keywords: Medical image de-identification; U-Net++; conditional GAN; generative inpainting; patient privacy; PHI; DICOM; deep learning; synthetic data; perceptual loss

PDF

Paper 83: Learning Structural Regularities over Mobile UI Flows from Interaction Traces

Abstract: Mobile applications exhibit rich user interface (UI) flows composed of sequences of screens connected through user interactions. While prior work has made significant progress in understanding individual UI screens, reusable flow-level regularities across applications remain underexplored. In this study, we learn a role-based probabilistic prior over mobile UI flows from large-scale interaction traces by mapping each screen to an abstract screen role via unsupervised clustering of multimodal screen embeddings derived from screenshots and view-hierarchy text. Interaction traces are then converted into sequences of screen roles, enabling probabilistic modeling of flow structure beyond app-specific identifiers and layouts. We study two complementary tasks. First, for next-step prediction on unseen applications and held-out categories, simple n-gram baselines already capture meaningful cross-app regularities, and a causal Transformer further improves performance, achieving R@1 = 0.2598 and R@10 = 0.7268 on unseen test applications at K = 40 while outperforming the trigram baseline across all reported cutoffs. Second, we study atypical-transition scoring under a fixed sampled-candidate protocol, where the same learned flow prior is used to rank observed transitions against sampled alternatives. Under this controlled setting, the Transformer achieves the strongest ranking performance among the compared models. These results indicate that role-based abstractions provide a promising basis for modeling UI flow regularities across applications and for ranking-oriented sequence prediction under cross-application generalization. The anomaly results are encouraging under the sampled evaluation protocol, but broader validation will require more realistic evaluation settings and structured human assessment.
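The n-gram baseline over screen-role sequences can be sketched in a few lines; the role names and traces below are hypothetical, and R@k here mirrors the recall-at-cutoff metric the abstract reports:

```python
from collections import Counter, defaultdict

def train_bigram(role_sequences):
    """Count role-to-role transitions across interaction traces."""
    counts = defaultdict(Counter)
    for seq in role_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def recall_at_k(counts, test_sequences, k):
    """Fraction of test transitions whose true next role is in the
    model's top-k predictions for the current role."""
    hits = total = 0
    for seq in test_sequences:
        for a, b in zip(seq, seq[1:]):
            top = [r for r, _ in counts[a].most_common(k)]
            hits += b in top
            total += 1
    return hits / total

train = [["login", "home", "search", "results", "detail"],
         ["login", "home", "settings"],
         ["home", "search", "results", "detail", "cart"]]
test = [["login", "home", "search", "results"]]
model = train_bigram(train)
print(f"R@1 = {recall_at_k(model, test, 1):.2f}")
```

A causal Transformer replaces the fixed-order counts with a learned conditional distribution over the whole role prefix, which is where the reported gains over the trigram baseline come from.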

Author 1: Iskandar Salama
Author 2: Masayasu Atsumi

Keywords: UI Flow modeling; causal transformer; n-grams; clustering; multimodal embeddings; next-step prediction; cross-app generalization

PDF

Paper 84: Risk Assessment and Risk Management Challenges in Intelligent IoT-Based Smart City Infrastructures

Abstract: Smart Internet of Things (IoT) technologies, including artificial intelligence (AI), are increasingly being deployed in smart city infrastructures to enhance urban efficiency, sustainability, and service delivery. Nonetheless, these intelligent, interconnected systems introduce complex security, privacy, and safety risks that conventional risk assessment and risk management methods cannot easily handle. Current frameworks tend to be static and centralized, whereas smart city infrastructures are dynamic, decentralized, and built on autonomous decision-making elements. This mismatch poses serious problems for the reliability and robustness of intelligent urban systems. This study presents a systematic literature review of risk assessment and risk management issues in intelligent IoT-based smart city infrastructure. The review covers security, privacy, safety, and AI-specific risks such as adversarial machine learning, data poisoning, model drift, and failures of autonomous systems. To enhance methodological transparency, the review lists the databases searched, the search terms, the screening process, the inclusion and exclusion criteria, and the resulting set of selected studies. Key risk assessment methods, including standard-based, qualitative, probabilistic, and AI-based approaches, are compared, and their strengths and weaknesses in the smart city context are identified. The results indicate that existing frameworks remain fragmented and are not always able to deal with the joint effects of IoT heterogeneity, AI-based decisions, cyber-physical interdependence, scalability, and governance. Based on this synthesis, the study offers a more defined taxonomy of risk factors and research directions towards adaptive, AI-aware, and operationally feasible risk management in smart cities.

Author 1: Abdullah Alessa
Author 2: Yaseen Alduwayl
Author 3: M M Hafizur Rahman

Keywords: Artificial intelligence; cybersecurity; Internet of Things; risk assessment; risk management; smart cities

PDF

Paper 85: A Cryptographic Framework Using AES-ECC with Threshold Key Management for Cloud Storage Systems

Abstract: Cloud storage systems have become an essential platform for storing and managing large volumes of data, but their security depends not only on confidentiality but also on integrity, controlled key management, and resistance to active attacks. Many existing protection approaches emphasize data encryption while giving less attention to context-aware verification and controlled object recovery in untrusted cloud settings. This study proposes a novel hybrid cryptographic model for cloud storage systems, Object-Centric Threshold-Sealed Encryption with Two Keys (OCTET). The model integrates AES chunk-based encryption to protect data confidentiality, ECC for secure key exchange, threshold-based secret sharing for key reconstruction, HKDF-derived per-chunk keys, and a Merkle root to enforce a verification-before-decryption policy. The proposed model is implemented in an emulated cloud storage system, examined on a dataset of large objects, and compared against baseline schemes, including symmetric and hybrid encryption models, under the same experimental environment. The main outcomes demonstrate that the proposed model achieves practical performance, minor overhead, and superior resistance to the considered attack models. Overall, this study demonstrates that the proposed model offers a favourable trade-off between security and efficiency and a robust integrity technique for large objects in cloud storage systems.
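The verification-before-decryption policy built on a Merkle root can be sketched with the standard library alone. This is an illustrative outline, not OCTET itself: the AES, ECC, HKDF, and threshold-sharing components the abstract names are deliberately elided, and the chunk data is hypothetical:

```python
import hashlib

def merkle_root(chunk_hashes):
    """Fold a list of SHA-256 chunk digests into a single Merkle root."""
    level = list(chunk_hashes)
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def verify_before_decrypt(chunks, expected_root):
    """Refuse to decrypt unless the recomputed root matches the sealed one."""
    hashes = [hashlib.sha256(c).digest() for c in chunks]
    if merkle_root(hashes) != expected_root:
        raise ValueError("integrity check failed; decryption refused")
    return True  # per-chunk AES decryption would proceed only past this point

chunks = [b"chunk-0", b"chunk-1", b"chunk-2"]
root = merkle_root([hashlib.sha256(c).digest() for c in chunks])
assert verify_before_decrypt(chunks, root)
```

Because the root is computed over chunk digests, a single tampered chunk changes the root and the decryption step is never reached.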

Author 1: Abdulsalam Ibrahim Almirdasi
Author 2: Mohamed Tahar Ben Othman

Keywords: Cloud storage system; hybrid encryption; AES; ECC; Object-Centric; threshold key management; integrity verification; Merkle root verification

PDF

Paper 86: AFT-Attentive BiLSTM: Improving Early Warning of Firm Financial Distress with Temporal Attention in an Accelerated Failure Time Framework

Abstract: Early warning systems (EWS) for firm-level financial distress are essential for identifying potential bankruptcies or insolvencies before their realization. While traditional statistical models such as Z-score and logistic regression offer interpretability, they lack the ability to capture nonlinear and temporal dependencies in financial data. Recent deep learning approaches improve predictive accuracy but often sacrifice interpretability. The purpose of this study is to develop and evaluate a novel deep learning-based early warning model for firm-level financial distress that integrates temporal attention with parametric survival analysis to improve both predictive accuracy and interpretability. Therefore, this study proposes an AFT-Attentive BiLSTM model that integrates a Bidirectional Long Short-Term Memory (BiLSTM), a temporal attention mechanism, and a log-normal Accelerated Failure Time (AFT) survival framework. The model predicts time-to-distress distributions rather than binary outcomes, enabling probabilistic early warnings with calibrated survival probabilities. Empirical results demonstrate that the proposed model outperforms Cox Proportional Hazards, DeepSurv, and prior AFT-BiLSTM models without attention. The inclusion of temporal attention improves concordance index (C-index), Integrated Brier Score (IBS), and time-dependent AUC, and provides interpretable insights by identifying critical financial periods preceding distress. Kaplan–Meier analysis confirms strong separation between high- and low-risk groups. The findings suggest that combining temporal attention with parametric survival modeling enhances both predictive accuracy and interpretability in financial distress early warning systems.
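The log-normal AFT component described above has a closed-form survival function, S(t) = 1 - Phi((ln t - mu) / sigma), which is what turns the network's output into calibrated survival probabilities. A minimal sketch with hypothetical parameters (the paper's fitted values are not given here):

```python
import math

def lognormal_survival(t, mu, sigma):
    """S(t) = P(T > t) for a log-normal AFT model: 1 - Phi((ln t - mu)/sigma)."""
    z = (math.log(t) - mu) / sigma
    # Standard normal CDF via the error function (stdlib only)
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# In an AFT model, covariates shift mu, so risk factors accelerate
# (or decelerate) time-to-distress multiplicatively.
mu, sigma = math.log(24.0), 0.8  # hypothetical: median time-to-distress 24 months
print(lognormal_survival(12.0, mu, sigma))  # survival beyond 12 months
print(lognormal_survival(24.0, mu, sigma))  # exactly 0.5 at the median
```

In the attentive BiLSTM variant, mu (and optionally sigma) would be predicted from the attention-weighted sequence representation rather than fixed as above.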

Author 1: Muhammad Ali Chohan
Author 2: Suresh Ramakrishnan
Author 3: Mohammad Abrar
Author 4: Shamaila Butt
Author 5: Shahid Kamal

Keywords: Financial distress prediction; early warning systems; survival analysis; accelerated failure time; bidirectional LSTM; temporal attention; deep learning; corporate bankruptcy

PDF

Paper 87: TrustPatch-X: Multi-Stage Explainable Framework for Reliable LLM Patch Validation

Abstract: Large Language Models (LLMs) have shown strong potential in automated vulnerability repair; however, generated security patches often lack reliability, semantic guarantees, and interpretability. Purely generative approaches may remove superficial patterns while failing to eliminate root-cause vulnerabilities or preserve program behavior. To address this limitation, this study proposes an Explainable Multi-Stage Validation Framework that integrates static vulnerability filtering, graph-based semantic consistency analysis, and test-driven verification within a unified pipeline. The framework further incorporates a structured explanation module to provide interpretable reasoning for patch correctness. Experimental evaluation on Juliet, Devign, and Defects4J security benchmarks demonstrates that the proposed approach achieves 96.3% vulnerability removal accuracy and reduces false-fix rates to 9.3%, outperforming LLM-only and hybrid baselines. Additionally, the framework maintains high semantic similarity (0.97) and explanation fidelity above 90% while preserving computational efficiency. The results indicate that combining neural generation with structured validation significantly enhances the trustworthiness of AI-driven security patch validation systems.

Author 1: Sheetal Madhukar Parate
Author 2: Jasmine Selvakumari Jeya I

Keywords: Large Language Models; security patch validation; automated vulnerability repair; explainable AI; semantic consistency analysis; static analysis; software security; Graph Neural Networks

PDF

Paper 88: An Identity-Aware Privacy-Preserving Deep Learning Framework for Culturally Sensitive Image Sharing

Abstract: The rapid growth of digital image sharing raises privacy concerns, especially in cultural contexts where image exposure can be ethically and socially sensitive. In Islamic societies, sharing images of women without hijab can be deeply sensitive. This study presents SITR, an identity-aware privacy-preserving deep learning system aimed at reducing unintended sharing of sensitive images of female family members without hijab. SITR integrates three components in a unified deployment-ready pipeline: 1) face detection with Multi-task Cascaded Convolutional Networks (MTCNN), 2) family-member authentication with FaceNet embeddings stored in a vector database, and 3) hijab detection with an optimized Densely Connected Convolutional Network (DenseNet). The hijab detection model was trained and evaluated on a cleaned dataset of 2,191 images covering hijab and non-hijab cases under diverse visual conditions. DenseNet121 was benchmarked against ResNet50, MobileNetV2, and EfficientNet-B0, achieving the best overall performance. To further enhance its effectiveness, DenseNet121 was modified by integrating an Efficient Channel Attention (ECA) mechanism and applying hyperparameter tuning. The optimized model achieved 92.16% test accuracy, strong discrimination with a precision of 91.63%, and an 86.39% F1-score on a held-out test set. The model was deployed as a quantized RESTful API, reduced from 82 MB to 27 MB while maintaining predictive reliability. Results demonstrate the practicability of identity-conditioned, culturally aware AI systems for privacy protection. This work highlights the role of context-sensitive computer vision beyond generic content moderation toward culturally aware and ethically accountable applications.

Author 1: Mahmoud Obaid
Author 2: Hadeel Bkhaitan
Author 3: Duha Maali
Author 4: Saja Hammad
Author 5: Thaer Thaher

Keywords: Cultural sensitivity; deep learning; DenseNet121; Efficient Channel Attention; FaceNet; hijab detection; MTCNN

PDF

Paper 89: Stability-Weighted Feature Selection with Adaptive PSO-SGD for Neural Network-Based Predictive Hiring

Abstract: Predictive hiring leverages machine learning to forecast candidate success, yet existing approaches suffer from two limitations: reliance on single-method feature selection that lacks robustness, and sensitivity to neural network initialization that impairs convergence. This study introduces two contributions integrated into a unified framework. First, Stability-Weighted Multi-Criteria Feature Selection (SW-MCFS) is proposed, which aggregates four heterogeneous scoring methods (Mutual Information, Wald statistical significance, Fisher discriminant loading, and Permutation Importance) through a cross-validation stability-weighted consensus function. Unlike single-method approaches, SW-MCFS weights each method proportionally to its ranking consistency across folds, producing robust and data-driven feature subsets. Second, Adaptive Particle Swarm Optimization (APSO) is introduced, a PSO variant featuring fitness-landscape-aware inertia adaptation and Lévy flight perturbation for stagnation escape. The framework is evaluated on 10,247 recruitment records from a North African telecommunications company and benchmarked against Random Forest, XGBoost, SVM, standard ANNs, and classical LR/DA-based approaches through 10-fold cross-validation. The integrated SW-MCFS-APSO-SGD framework achieves 76.8% accuracy, significantly outperforming XGBoost (73.8%, p = 0.012), standard PSO-SGD (75.2%, p = 0.041), and LR-based feature selection (74.6%, p = 0.028). Ablation studies confirm that SW-MCFS contributes a 1.6% accuracy gain over single-method selection, while APSO improves performance by 0.8% with 31% faster convergence compared to standard PSO. SHAP analysis reveals communication skills, experience, and seniority as dominant predictors with minimal demographic influence. It is noted that the accuracy ceiling may partly reflect inherent label noise in subjective performance assessments. The proposed framework demonstrates effectiveness on organizational recruitment data, warranting further cross-domain validation to establish broader generalizability.
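The stability-weighted consensus idea (weighting each scoring method by its ranking consistency across folds) can be sketched as follows. The feature names, fold rankings, and the use of Spearman rank correlation as the agreement measure are illustrative assumptions, not the paper's exact formulation:

```python
from itertools import combinations

def spearman(rank_a, rank_b):
    """Spearman rho between two rankings given as {feature: rank} maps."""
    n = len(rank_a)
    d2 = sum((rank_a[f] - rank_b[f]) ** 2 for f in rank_a)
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def stability_weights(fold_rankings):
    """fold_rankings: {method: [per-fold {feature: rank}, ...]}.
    Weight each method by its mean pairwise rank agreement across folds,
    then normalize so the weights sum to one."""
    raw = {}
    for method, folds in fold_rankings.items():
        pairs = list(combinations(folds, 2))
        raw[method] = sum(spearman(a, b) for a, b in pairs) / len(pairs)
    total = sum(raw.values())
    return {m: s / total for m, s in raw.items()}

# Hypothetical rankings of three features over three folds
folds_mi = [{"exp": 1, "skill": 2, "age": 3}] * 3          # perfectly stable
folds_pi = [{"exp": 1, "skill": 2, "age": 3},
            {"exp": 2, "skill": 1, "age": 3},
            {"exp": 1, "skill": 3, "age": 2}]              # less stable
w = stability_weights({"mutual_info": folds_mi, "perm_imp": folds_pi})
print(w)  # the stable method receives the larger weight
```

The consensus feature score would then be the weight-blended average of each method's scores, so an unstable scorer contributes less to the final subset.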

Author 1: Yassine Temsamani Khallouk
Author 2: Said Achchab

Keywords: Predictive hiring; Multi-Criteria Feature Selection; Adaptive Particle Swarm Optimization; stability-weighted consensus; neural networks; HR analytics; ensemble methods

PDF

Paper 90: GTMedoids: A New Grey Sheep Users Detection Approach

Abstract: Recommender systems have been developed to serve users and provide them with the best suggestions. Despite their success, delivering recommendations that fully match users' preferences remains a difficult task, as the complexity of human taste raises several challenges. The grey sheep user phenomenon remains one of the most common: such a user is defined by unique interactions with the system and rarely agrees with other users, which makes it difficult to associate the user with similar ones. In this study, we present a new approach for identifying grey sheep users, based on the taste context and the nature of user interaction with the system. We group similar users using an enhanced Kmedoids clustering method with a new dissimilarity metric and introduce a novel process to distinguish between users. The differentiation is achieved by assigning weights to each cluster based on how strongly it reflects grey sheep user characteristics. We evaluated the efficiency of Grey Threshold Medoids (GTMedoids) on the FilmTrust and MovieLens 100k datasets. The results show the superior performance of our approach in detecting grey sheep users.

Author 1: Bouchra Boualaoui
Author 2: Ahmed Zellou
Author 3: Lahbib Ajallouda

Keywords: Recommender system; grey sheep users detection; clustering; Kmeans; Kmedoids; dissimilarity metric

PDF

Paper 91: A Proxy-Based DevSecOps Framework for Multi-Tier Web Applications

Abstract: Multi-tier web applications face significant implementation challenges in DevSecOps, including tool integration complexity, automation gaps, and cultural resistance. This study presents a proxy-based DevSecOps framework grounded in a formal architectural decomposition that transforms classical O(n × m) pipeline integration complexity into additive O(n1 × m1) + O(n2 × m2) components through the separation of static and dynamic execution contexts. The framework is instantiated for PHP-based multi-tier web applications deployed on AWS infrastructure using Terraform-managed Infrastructure as Code principles; all ecosystem coherence metrics, toolset analysis, and complexity computations are derived within this specific technology context, and generalization to other language ecosystems or cloud platforms constitutes a boundary condition discussed in Section V. Theoretical contributions include: 1) a formal proxy pipeline architecture with mathematical complexity analysis demonstrating that complexity reduction is guaranteed when tool ecosystem coherence exceeds 70%; 2) systematic tool integration using PHP-specific tooling, yielding a theoretical ecosystem coherence of 76.9%; and 3) theoretical validation addressing 18 out of 20 identified DevSecOps implementation challenges. Mathematical analysis theoretically predicts a 48.13% reduction in tool integration conflicts and a 61.9% toolset reduction relative to traditional monolithic pipelines through context separation. All quantitative figures presented above are theoretically derived predictions, not empirically measured outcomes. They are formalized as falsifiable hypotheses H1 through H5 in Section V, with empirical validation identified as the primary direction for future work.
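The multiplicative-to-additive complexity argument can be made concrete with a toy calculation. The tool and stage counts below are invented for illustration; they are not the figures from the paper's PHP/AWS analysis:

```python
def monolithic_complexity(n, m):
    """Every tool must interoperate with every pipeline stage: O(n x m)."""
    return n * m

def proxy_complexity(static_tools, static_stages, dyn_tools, dyn_stages):
    """Separated static/dynamic contexts: O(n1 x m1) + O(n2 x m2)."""
    return static_tools * static_stages + dyn_tools * dyn_stages

# Hypothetical toolset: 10 tools x 8 stages, split into a 6x4 static
# context and a 4x4 dynamic context by the proxy layer
before = monolithic_complexity(10, 8)   # 80 integration points
after = proxy_complexity(6, 4, 4, 4)    # 24 + 16 = 40 integration points
print(f"integration points: {before} -> {after} ({1 - after / before:.0%} fewer)")
```

The reduction depends on how cleanly the toolset partitions between the two contexts, which is what the paper's ecosystem coherence threshold formalizes.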

Author 1: Abderrahim Rida
Author 2: Abdelaziz Bakhil
Author 3: Ayoub Ait Lahcen

Keywords: DevSecOps; proxy architecture; multi-tier applications; PHP; AWS; security testing; CI/CD pipeline optimization; ecosystem coherence

PDF

Paper 92: Hybrid Shape Descriptor Fusion with LightGBM for Robust 3D Mesh Classification and Retrieval

Abstract: Recent advances in 3D mesh acquisition and the development of interactive modeling tools have significantly increased both the quantity and diversity of available 3D model databases. Therefore, the task of searching, querying, and retrieving models in large-scale 3D databases has become a focus of research in this area. Indexing 3D models for content-based retrieval is a challenging task that involves numerous algorithms and tools to capture the most significant representation of the object. In this study, a novel framework for 3D mesh retrieval is proposed that combines distribution-based, spectral, and geometric features into a single representation and employs a machine learning classifier based on LightGBM (Light Gradient Boosting Machine) for classifying 3D objects. To capture the complex geometry of 3D meshes, our approach analyzes surface smoothness, radial vertex distributions, spectral signatures, global shape distributions, topological connectivity, and local curvatures. Evaluated on the Princeton Shape Benchmark (PSB), the proposed approach achieves a 1st Tier accuracy of 0.97 and an F-Measure of 0.96, substantially outperforming both individual descriptors and state-of-the-art methods. The mean pairwise cross-correlation between descriptors is low (ρ̄ = 0.128), confirming their complementary rather than redundant nature. The proposed approach presents a consistent solution with potential applications in various areas, such as computer vision, robotics, e-commerce, medical imaging, and other related fields.

Author 1: Khadija Arhid
Author 2: Youness Ghazi
Author 3: Ilham Kachbal
Author 4: Fatima Rafii Zakani
Author 5: Mohcine Bouksim
Author 6: Said El Abdellaoui
Author 7: Taoufiq Gadi
Author 8: Mohamed Aboulfatah

Keywords: 3D object; 3D mesh retrieval; classification; machine learning; shape matching; feature fusion; LightGBM

PDF

Paper 93: A Survey of FPGA Floorplanning for Dynamic Partial Reconfiguration: From Heuristic Approaches to Autonomous AI-Driven Methods

Abstract: Dynamic Partial Reconfiguration (DPR) has emerged as a key enabler of runtime adaptability and hardware-software co-design in modern FPGA-based heterogeneous systems. However, with the transition toward 5nm technologies and multi-die 3D-IC architectures, spatial resource management faces a “complexity wall,” where traditional manual floorplanning techniques struggle to satisfy timing, utilization, and scalability constraints. This study presents a systematic literature review and proposes a comprehensive taxonomy of FPGA floorplanning and placement methodologies developed over the past two decades. The proposed classification organizes existing approaches into three generations: 1) the Heuristic Era, focused on rule-based automation and physical feasibility; 2) the Optimization Era, characterized by formal mathematical models and Mixed-Integer Linear Programming (MILP) for heterogeneous resource allocation; and 3) the Autonomous Era, which leverages AI-driven techniques, including Reinforcement Learning and intelligent scheduling, to enable predictive and shape-adaptive placement strategies. This evolution reflects a fundamental shift from static grid-based management toward elastic, self-optimizing FPGA fabrics. We further examine emerging architectural constraints, including Super Logic Region (SLR) boundaries and hierarchical nested Partial Reconfigurable Regions (PRRs). Beyond this taxonomy, the survey identifies a critical scalability–optimality trade-off, highlighting the need for hybrid frameworks that combine the formal guarantees of optimization-based methods with the real-time adaptability of AI-driven approaches. It further establishes a unifying perspective in which DPR is evolving from a logic-reconfiguration mechanism into a thermal–spatial management paradigm for mitigating heat in high-density 3D-IC systems. Finally, the analysis reveals a significant functional–physical gap in current autonomous design tools, emphasizing the need for context-aware agents capable of jointly reasoning about temporal task dependencies and spatial floorplanning constraints. This review provides a structured roadmap for the development of next-generation intelligent control frameworks for edge and cloud-scale reconfigurable computing systems.

Author 1: Ibrahim LIMEM
Author 2: Sadok BAZINE
Author 3: Abdessalem BEN ABDELALI

Keywords: Field Programmable Gate Arrays (FPGA); Floorplanning; Dynamic Partial Reconfiguration (DPR); bitstream relocation; hardware autonomy; reinforcement learning; heterogeneous computing; 2D shape-adaptive placement

PDF

Paper 94: SentimentPulse: A Concurrent Multi-Platform System for Social Media Sentiment Monitoring with LLM-Based Interpretation

Abstract: Monitoring public perception on social media is increasingly important for detecting reputational risks and communication opportunities in rapidly evolving digital environments. However, operational sentiment monitoring remains challenging due to platform fragmentation, heterogeneous data formats, and the need to generate interpretable reports quickly when specialized analysts are not available. This study presents SentimentPulse, a web-based system for multi-platform sentiment monitoring driven by a user-defined query. The system integrates concurrent data extraction from X, Facebook, LinkedIn, and Instagram with large language model (LLM) based sentiment classification and automated executive storytelling generation. The architecture combines parallel scraping processes for data acquisition with multithreaded LLM inference to improve throughput, while structured persistence enables job tracking and cross-platform analysis. The system operates under practical constraints, including platform-specific access limitations, dynamic content availability, and dependency on external LLM services, which may introduce variability in response times and outputs. The evaluation is conducted under controlled experimental conditions using fixed query limits and asynchronous execution settings, and results should be interpreted within these operational boundaries. Experimental results from two anonymized case studies demonstrate the effectiveness and operational performance of the approach. In the first case study, the system processed 1032 social media interactions and produced a sentiment distribution of 49.0% positive, 36.6% negative, and 14.2% neutral, with a manual validation accuracy of 0.88. In the second case study, the pipeline processed 1121 records, with parallel scraping accounting for the majority of the runtime and LLM inference achieving a throughput of 12.08 items per second. These results show that combining concurrent multi-platform extraction with LLM-based interpretation enables practical and interpretable social listening workflows, while highlighting the importance of considering system-level constraints when deploying such solutions in real-world environments.
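The multithreaded-inference pattern the abstract describes can be sketched with the standard library. The `classify_sentiment` stub, keyword rules, and sample posts are hypothetical placeholders for the external LLM call, which in SentimentPulse would be a network request whose latency the thread pool overlaps:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def classify_sentiment(post):
    """Stand-in for an LLM call; a real deployment would issue an external
    API request here, so the work is I/O-bound and threads overlap well."""
    time.sleep(0.01)  # simulated inference latency
    text = post["text"].lower()
    if "great" in text or "love" in text:
        return "positive"
    if "bad" in text or "hate" in text:
        return "negative"
    return "neutral"

def run_pipeline(posts, workers=8):
    """Fan classification out over a thread pool and measure throughput."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        labels = list(pool.map(classify_sentiment, posts))
    elapsed = time.perf_counter() - start
    return labels, len(posts) / elapsed

posts = [{"text": t} for t in ["great product", "bad service", "ok I guess"] * 20]
labels, throughput = run_pipeline(posts)
print(f"{len(labels)} items at {throughput:.1f} items/s")
```

Because the simulated latency dominates, throughput scales roughly with the worker count, mirroring why the paper's runtime is dominated by scraping rather than inference.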

Author 1: Gabriel A. León-Paredes
Author 2: Erika C. Villa-Quishpi
Author 3: Jorge E. Márquez-Chávez
Author 4: Erick F. Zhigue-Granda

Keywords: Sentiment analysis; social media monitoring; large language models; social listening systems; multi-platform analytics

PDF

Paper 95: Improving Performance and Accuracy in Decision Trees: A Literature Survey on Impurity Functions

Abstract: Decision tree algorithms remain among the most popular tools for supervised learning tasks, owing to their comprehensibility, flexibility, and robustness across varying types of data. Nevertheless, their accuracy and performance depend on the impurity measures used to guide tree splits. The standard impurity measures, such as Gini impurity, Entropy, and Classification Error, tend to struggle as data complexity, imbalance, and noise levels rise, which often gives rise to overfitting. This survey examines these shortcomings and present research efforts on impurity-based optimization of decision trees, as well as new paradigms such as Rényi Entropy, Tsallis Entropy, combinations of impurity functions, complexity-aware tree splits, and tailored impurity function augmentation. The review makes clear that efforts are underway to improve the accuracy, robustness, and comprehensibility of decision tree algorithms, as well as to reduce their processing complexity.
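The three standard impurity measures named in this abstract have simple closed forms, sketched below over a node's class-probability vector (the example distributions are illustrative):

```python
import math

def gini(p):
    """Gini impurity: 1 - sum(p_i^2)."""
    return 1.0 - sum(q * q for q in p)

def entropy(p):
    """Shannon entropy in bits: -sum(p_i * log2(p_i)), skipping zero classes."""
    return sum(-q * math.log2(q) for q in p if q > 0)

def classification_error(p):
    """Misclassification error: 1 - max(p_i)."""
    return 1.0 - max(p)

# Class distribution at a candidate split node
p = [0.5, 0.5]     # maximally impure two-class node
print(gini(p), entropy(p), classification_error(p))           # 0.5 1.0 0.5
pure = [1.0, 0.0]  # pure node: every measure reaches its minimum
print(gini(pure), entropy(pure), classification_error(pure))  # 0.0 0.0 0.0
```

A split is chosen to maximize the drop in impurity from parent to children; the surveyed Rényi and Tsallis variants generalize `entropy` with a tunable order parameter, changing how sharply that drop rewards skewed class distributions.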

Author 1: Abed Alsulami
Author 2: Reda Khalifa
Author 3: Wajdi Alghamdi

Keywords: Decision trees; impurity functions; split criteria; decision tree optimization; literature review

PDF

The Science and Information (SAI) Organization

The Science and Information (SAI) Organization Limited is a company registered in England and Wales under Company Number 8933205.