The Science and Information (SAI) Organization
IJACSA Volume 16 Issue 12

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.


Paper 1: ML to Predict Effectiveness of the MCP Authorization Model for LLM-Powered Agent

Abstract: In today’s AI-driven world, enabling AI models to communicate with external data sources is vital for enhancing the efficiency and security of AI-driven applications. The Model Context Protocol (MCP) serves as a standard for such connections. This study leverages a machine learning approach to predict the effectiveness of the MCP Authorization Model for an LLM-powered agent. It builds a sample dataset from logs produced by Azure services used to monitor MCP server activity, such as Azure Monitor, Azure Sentinel, and Azure Active Directory. The dataset includes features such as source_ip, destination_ip, event_type, alert_severity, and target_variable, which are used to train an ML model that assesses the effectiveness of the MCP Authorization Model for LLM-powered agents, helping organizations better understand the importance of securing connections between AI models and external resources. This approach contributes to unlocking AI’s full potential while improving application security and operational efficiency.

Author 1: Upakar Bhatta

Keywords: Model Context Protocol; artificial intelligence; machine learning; large language model

PDF

Paper 2: Efficient Multi-Class Analysis of Consumer Complaints Using Frozen MiniLM Embeddings and Neural Networks

Abstract: Text classification is a critical task in domains generating large volumes of unstructured text, such as finance, healthcare, and consumer services. However, accurately classifying such data remains challenging due to its noisy, imbalanced, and context-dependent nature. While pre-trained language models have improved general text classification, their direct application often overlooks domain-specific cues and sentiment patterns that are important for nuanced understanding. In this study, we propose a novel framework that extends the MiniLM language model by integrating domain-relevant cues and sentiment features with textual embeddings. This integration allows the model to capture both semantic richness and domain-specific patterns, enhancing reliability and interpretability. Comparative experiments against baselines including TF-IDF + Logistic Regression, Word2Vec + Logistic Regression, TF-IDF + Naïve Bayes, and Word2Vec + Naïve Bayes show that the proposed approach consistently outperforms traditional methods, achieving an accuracy of 0.8653, precision of 0.8697, recall of 0.8653, F1-score of 0.8668, Cohen’s Kappa of 0.7862, and MCC of 0.7870. Ablation studies further demonstrate the critical role of cues and sentiment features in improving performance. These findings indicate that combining pre-trained embeddings with carefully selected domain features offers a more robust and context-aware solution for text classification, establishing a foundation for future work integrating transformer-based models with explainable AI techniques in domain-specific applications.
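The frozen-embedding design can be sketched in miniature: fixed sentence vectors feed a small trainable softmax head, and only the head's weights are updated. Random vectors stand in for the MiniLM embeddings here (the real 384-dimensional encoder outputs are not reproduced), so the data and numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen MiniLM sentence embeddings; in practice these
# would come from a pre-trained encoder and never be updated.
n, dim, n_classes = 300, 32, 3
centers = rng.normal(size=(n_classes, dim))
labels = rng.integers(0, n_classes, size=n)
emb = centers[labels] + 0.3 * rng.normal(size=(n, dim))  # frozen features

# Trainable softmax classification head on top of the frozen embeddings.
W = np.zeros((dim, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

onehot = np.eye(n_classes)[labels]
for _ in range(200):                          # plain gradient descent
    p = softmax(emb @ W + b)
    W -= 0.5 * emb.T @ (p - onehot) / n       # gradient w.r.t. weights
    b -= 0.5 * (p - onehot).mean(axis=0)      # gradient w.r.t. bias

acc = (softmax(emb @ W + b).argmax(axis=1) == labels).mean()
```

Because the embeddings are fixed, only a (dim × classes) weight matrix is learned, which is what keeps this approach cheap relative to fine-tuning the encoder.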

Author 1: Sri Vishnu Gopinathan
Author 2: Muhammad Faraz Manzoor

Keywords: Consumer complaints; text classification; sentence embeddings; MiniLM; class imbalance; sentiment analysis; domain adaptation; contextual embeddings

PDF

Paper 3: TransAneu-Net: A Hybrid Radiomics and Contrastive Deep Learning Framework for Automated Brain Aneurysm Diagnosis

Abstract: Accurate and early detection of intracranial aneurysms is critical for preventing life-threatening subarachnoid hemorrhage and improving clinical outcomes. This study proposes a hybrid diagnostic framework that integrates radiomics-based feature engineering with a transformer-driven deep learning architecture enhanced by teacher–student contrastive representation learning. The workflow incorporates region-of-interest segmentation, handcrafted radiomic feature extraction, multimodal representation fusion, and probabilistic aneurysm localization using high-resolution MR and MRA imaging. Comprehensive experiments conducted on benchmark neuroimaging datasets demonstrate that the proposed model achieves high classification accuracy, stable convergence, and robust generalization across diverse anatomical and imaging conditions. Qualitative evaluations further reveal that heatmap-based confidence overlays reliably identify aneurysmal regions and closely align with ground-truth annotations. The contrastive learning module strengthens spatial and frequency-domain feature alignment, enabling effective training under limited supervision and reducing performance degradation associated with data heterogeneity. While limitations remain regarding dataset breadth and segmentation dependencies, the results indicate that this hybrid radiomics–AI framework offers a promising pathway toward automated aneurysm screening and clinical decision support. The proposed system has the potential to enhance diagnostic precision, mitigate inter-observer variability, and contribute to earlier intervention in neurovascular care.

Author 1: Zhadra Kozhamkulova
Author 2: Shirin Amanzholova
Author 3: Bella Tussupova
Author 4: Yelena Satimova
Author 5: Mukhamedali Uzakbayev
Author 6: Kenzhekhan Kaden
Author 7: Dastan Kambarov

Keywords: Aneurysm; deep learning; radiomics; transformer networks; contrastive learning; MR imaging; MRA; medical image analysis; aneurysm detection; neurovascular diagnostics

PDF

Paper 4: Improving Emergency Preparedness with a Mobile Application for Respiratory Therapy Resource Coordination

Abstract: Mass gatherings such as Hajj and Umrah, along with pandemic outbreaks, place a significant strain on healthcare systems, particularly respiratory therapy (RT) services, where shortages of respiratory therapists, ventilators, and specialized equipment can compromise emergency response due to increased patient volume, environmental exposure, and heightened risk of respiratory diseases. This study presents a conceptual model of a mobile-based resource coordination system designed to enhance emergency preparedness and response for respiratory therapy services during pandemics and mass-gathering events, such as Hajj and Umrah. The proposed system integrates a real-time database with mobile clinical decision support to provide RT managers and supervisors with centralized visibility of ventilators, staffing levels, and critical respiratory equipment across healthcare facilities. By enabling real-time monitoring and predictive, alert-driven decision support, the system aims to support proactive resource allocation, staff deployment, and equipment distribution. Although the work is conceptual and does not include empirical evaluation, it provides a well-defined architectural framework suitable for future implementation that aligns respiratory care workflows with real-time monitoring and forecast-driven decision support in settings such as Hajj, Umrah, and large-scale public health emergencies.

Author 1: Rahaf Katib

Keywords: Respiratory therapy; mass gatherings; Hajj and Umrah; emergency preparedness; health information systems

PDF

Paper 5: TrustGraph: A Heterogeneous GNN for Dynamic Zero-Trust Policy Enforcement in Microservices

Abstract: Securing cloud microservices requires a unified understanding of how services behave, authenticate, and interact in real time. Unlike existing methods that analyze telemetry signals in isolation, this work presents a heterogeneous graph-based Zero-Trust framework that represents microservices using multi-modal telemetry—logs, metrics, traces, and authentication flows—embedded directly into graph nodes and edges. A Graph Neural Network architecture with attention captures risk propagation across service dependencies, while a joint anomaly detection and trust computation mechanism generates dynamic trust scores with temporal decay to support continuous verification. These trust signals drive real-time dynamic policy enforcement capable of denying or restricting suspicious interactions with minimal operational overhead. Experiments on the TrainTicket, Sock Shop, and DeathStarBench benchmarks show strong performance, achieving 97.2% accuracy, 98.1% recall, and 0.987 AUC on TrainTicket, with consistent results across the other datasets and latency overhead below 3.2 ms. Robustness tests demonstrate accuracy above 95.8% under noisy logs, delayed traces, and authentication failures. Ablation and SHAP analyses confirm that leveraging multiple telemetry modalities—especially authentication data—is critical for accurate detection and trust scoring. These findings show that multi-modal heterogeneous graph modeling, coupled with integrated anomaly-to-policy decision pipelines, provides an effective foundation for Zero-Trust security in cloud-native microservices.
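The trust mechanics described, decay between observations plus anomaly-driven updates feeding a policy decision, can be sketched as follows; the exponential decay law, rates, and threshold are assumptions for illustration, not the paper's actual parameters.

```python
import math

# Hypothetical parameters: the abstract does not state the decay law,
# so exponential decay is assumed purely for illustration.
DECAY_RATE = 0.1      # trust erosion per time unit
DENY_THRESHOLD = 0.4

def decayed_trust(trust, elapsed):
    """Continuous verification: trust erodes between observations."""
    return trust * math.exp(-DECAY_RATE * elapsed)

def update_trust(trust, anomaly_score, alpha=0.5):
    """Blend prior trust with the complement of the anomaly score."""
    return (1 - alpha) * trust + alpha * (1.0 - anomaly_score)

def policy(trust):
    """Dynamic enforcement: restrict interactions below the threshold."""
    return "allow" if trust >= DENY_THRESHOLD else "deny"

t = 0.9                                   # initial trust for a service edge
t = decayed_trust(t, 5.0)                 # 5 time units with no fresh evidence
t = update_trust(t, anomaly_score=0.8)    # then a suspicious interaction
decision = policy(t)
```

The temporal decay term is what forces re-verification: a previously trusted edge cannot coast on stale evidence, matching the Zero-Trust premise of continuous verification.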

Author 1: Nurmyrat Amanmadov
Author 2: Jemshit Iskanderov
Author 3: Tarlan Abdullayev

Keywords: Graph neural networks; zero-trust security; microservices; anomaly detection; heterogeneous graphs; multi-modal telemetry; dynamic policy enforcement

PDF

Paper 6: Hybrid Optimization and CNN-Transformer Framework for Hot Topic Detection in Social Media

Abstract: The rapid growth of Twitter as a real-time communication platform has created an urgent need for effective hot topic detection. Traditional statistical and machine learning models often fail to capture contextual semantics and long-range dependencies, while deep learning approaches such as CNNs and LSTMs improve representation but face challenges in scalability, optimization, and convergence. This study proposes a novel deep learning framework that integrates Multi-Scale Conv1D for diverse n-gram feature extraction, an attention-enhanced BiLSTM for contextual learning, and a hybrid Modified Bald Eagle Optimization–Particle Swarm Optimization (MBES-PSO) strategy for robust parameter tuning. Unlike conventional models limited by fixed kernel sizes or shallow architectures, the proposed design dynamically captures both local and global semantic patterns in tweets. The hybrid optimizer balances global exploration with local exploitation, achieving faster convergence and improved stability. The framework is evaluated on a large-scale Twitter dataset from Kaggle. Experimental results show that the proposed model achieved the highest accuracy of 90.12%, significantly outperforming 13 state-of-the-art baselines across precision, recall, and F1-score. This study contributes: 1) a Multi-Scale Conv1D architecture for enriched feature extraction; 2) an attention-based BiLSTM module for improved interpretability; 3) a hybrid MBES-PSO optimizer that enhances convergence and avoids local minima; and 4) extensive comparative evaluation validating robustness on real-world Twitter data. The proposed framework offers a scalable, interpretable, and high-performing solution for real-time hot topic detection in social media analytics.
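The PSO half of the hybrid optimizer can be sketched as a plain particle swarm minimizing a stand-in objective; the MBES modifications and the real hyperparameter search space are omitted, and the sphere function below replaces what would be the model's validation loss.

```python
import random

random.seed(0)

def fitness(x):
    # Stand-in objective (sphere function); in the paper this would be
    # the validation loss of the network under hyperparameters x.
    return sum(v * v for v in x)

dim, n_particles, iters = 3, 10, 50
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]                    # personal bests
gbest = min(pbest, key=fitness)[:]             # global best
start = fitness(gbest)

w, c1, c2 = 0.7, 1.5, 1.5     # inertia, cognitive, and social weights
for _ in range(iters):
    for i in range(n_particles):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
            if fitness(pbest[i]) < fitness(gbest):
                gbest = pbest[i][:]
end = fitness(gbest)
```

The cognitive term pulls each particle toward its own best (exploitation) while the social term pulls it toward the swarm's best (exploration sharing), which is the balance the abstract's hybrid scheme aims to tune.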

Author 1: Hemasundara Reddy Lanka
Author 2: Vinodkumar Reddy Surasani
Author 3: Nagaraju Devarakonda
Author 4: Sarvani Anandarao

Keywords: Hot topic detection; Twitter trend analysis; CNN-Transformer; Modified Bald Eagle Search (MBES); Particle Swarm Optimization (PSO)

PDF

Paper 7: Intelligent Diagnostic Model for Early Malaria Symptoms

Abstract: Malaria has been one of the most significant health concerns in low- and middle-income nations over the past few decades, and Kenya is no exception. Seventy per cent of Kenyans reside in areas where malaria is widespread, and most face obstacles to accessing medical care because of social culture, distance, and lack of money. Malaria transmission remains high, particularly in Kenya’s remote areas, despite a plethora of scientific efforts to combat the disease. This study aims to design and develop an intelligent malaria diagnosis model for early symptom detection using an Adaptive Neuro-Fuzzy Inference System (ANFIS) trained on a dataset of 2,000 records built from six types of patient data inputs to optimize model performance. The model achieved 98.3% accuracy, which is contrasted with pertinent state-of-the-art findings to illustrate the benefits of the suggested approach. The main contribution of this study is the combination of six types of patient data inputs, namely demographics, symptoms, blood pressure, heartbeats, height, and weight, with fuzzy-system techniques to detect early malaria symptoms accurately. The results demonstrate the value of the combined patient data inputs used for evaluation and show that the technique can identify different forms of malaria, achieving the best outcome when compared with relevant findings from existing studies.
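A minimal fuzzy-inference sketch of the symptom-scoring idea (Mamdani-style with weighted-average defuzzification rather than a trained ANFIS; the membership functions, rules, and risk levels below are invented for illustration):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def malaria_risk(temp_c, symptom_count):
    # Fuzzify the two inputs (invented membership functions).
    fever_high = tri(temp_c, 37.0, 39.5, 42.0)
    fever_low = tri(temp_c, 35.0, 36.5, 38.0)
    sym_many = min(symptom_count / 6.0, 1.0)   # six input types in the study
    sym_few = 1.0 - sym_many
    # Rule firing strengths (AND = min), each tied to a crisp risk level.
    rules = [
        (min(fever_high, sym_many), 0.9),   # high fever + many symptoms
        (min(fever_high, sym_few), 0.6),
        (min(fever_low, sym_many), 0.4),
        (min(fever_low, sym_few), 0.1),     # low fever + few symptoms
    ]
    num = sum(w * r for w, r in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den   # weighted-average defuzzification

high = malaria_risk(39.5, 6)   # feverish patient, all symptom types present
low = malaria_risk(36.5, 0)    # afebrile patient, no symptoms
```

An ANFIS would learn the membership parameters and rule weights from the 2,000-record dataset instead of hand-coding them as done here.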

Author 1: Phoebe A Barraclough
Author 2: Charles M Were
Author 3: Hilda Mwangakala
Author 4: Philip Anderson
Author 5: Dornald O Ohanya
Author 6: Harison Agola
Author 7: Philip Nandi

Keywords: Malaria diagnosis system; malaria symptoms; classifier; ANFIS; fuzzy rules

PDF

Paper 8: Latent-Topology Graph State-Space Model (LT-GSSM) for Robust Traffic Forecasting

Abstract: Accurate traffic forecasting remains challenging when sensor data are noisy, incomplete, or non-stationary. Recent advances in spatio-temporal learning have combined Graph Neural Networks (GNNs) with recurrent, convolutional, or attention mechanisms to capture spatio-temporal dependencies. However, most existing approaches remain largely deterministic and rely on fixed or pre-learned adjacency matrices, limiting their adaptability when network structures evolve or sensor reliability varies. Some methods further stack multiple adjacency matrices to represent complex spatial relations, yet still lack explicit mechanisms to model uncertainty, resulting in reduced robustness under degraded data conditions. This work introduces the Latent Topology Graph State-Space Model (LT-GSSM), a probabilistic framework designed to enhance robustness and adaptability in traffic forecasting. LT-GSSM represents the road network as a latent dynamic graph whose structure evolves over time through dynamic adjacency learning based on past hidden states and observations, enabling the model to capture evolving spatial correlations such as congestion propagation. Temporal dependencies are modelled by a nonlinear state-space function implemented with a Temporal Convolutional Network (TCN), which captures long-range temporal patterns without recurrence. The probabilistic state-space formulation explicitly represents sensor noise and handles missing data through probabilistic estimation inspired by Kalman filtering. By jointly integrating dynamic graph learning, explicit noise modelling, and nonlinear temporal transitions, LT-GSSM achieves greater stability and resilience to data uncertainty. Experiments on SUMO simulations and real-world PeMS datasets show that LT-GSSM consistently outperforms static and adaptive-graph models, providing a strong foundation for robust spatio-temporal forecasting under uncertain conditions.
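The Kalman-filtering-inspired handling of noise and missing readings can be illustrated with a one-dimensional sketch; the random-walk state model and the noise variances are assumed values for illustration, not the paper's formulation.

```python
def kalman_step(x, p, z, q=0.5, r=2.0):
    """One predict/update cycle; z is None when the reading is missing.

    x, p: state estimate and its variance
    q, r: assumed process and observation noise variances
    """
    # Predict: a random-walk model keeps the last estimate, variance grows.
    x_pred, p_pred = x, p + q
    if z is None:
        # Missing observation: skip the update and carry the (now more
        # uncertain) prior forward, instead of imputing a fake reading.
        return x_pred, p_pred
    k = p_pred / (p_pred + r)                     # Kalman gain
    return x_pred + k * (z - x_pred), (1 - k) * p_pred

x, p = 50.0, 1.0                     # e.g. vehicles/min on one road segment
readings = [52.0, None, 49.0, 51.0, None, 50.5]
for z in readings:
    x, p = kalman_step(x, p, z)
```

The key property is that gaps inflate the variance p rather than corrupt the estimate x, so subsequent observations are weighted more heavily, which is the robustness mechanism the abstract attributes to the probabilistic formulation.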

Author 1: Selma Kerdous

Keywords: Traffic forecasting; graph neural networks; state-space models; latent topology; dynamic adjacency learning; spatio-temporal modeling; noise and missing data robustness; probabilistic modeling

PDF

Paper 9: Dynamic Trust Modulation and Human Oversight in AI-Driven AML Systems: A Conceptual Framework for Compliance

Abstract: This literature review investigates how human trust, decision fatigue, explainability (XAI), and human oversight interrelate to influence analyst decision-making in AI-driven anti-money laundering (AML) systems. While prior research has predominantly emphasized algorithmic performance, detection accuracy, or regulatory compliance in isolation, a critical gap remains in understanding the human-centered dynamics that shape real-world operational outcomes. Addressing this gap, the review examines how financial institutions navigate compliance demands and operational constraints, drawing on the Australian regulatory environment as an illustrative governance reference, including expectations articulated by AUSTRAC. Building on this synthesis, the study identifies structural gaps in Trust Calibration and oversight practices. It introduces a Dynamic Trust Modulation (DTM) framework to conceptualize how trust evolves across AML workflows. The framework models trust as a fluid, context-dependent construct shaped by system behavior, analyst workload, explainability mechanisms, and regulatory pressure. By framing trust, explainability, and decision fatigue as interdependent components of human–AI collaboration, this review advances a more holistic perspective on socio-technical system design in financial crime detection. The proposed framework contributes theoretically by extending human–AI trust research into the AML domain and practically by offering actionable design principles to enhance system accountability, decision defensibility, and adaptive compliance in operational AML environments.

Author 1: Julian Diaz
Author 2: Abeer Alsadoon
Author 3: Oday D. Jerew
Author 4: Ahmed Hamza Osman
Author 5: Hani Moetque Aljahdali
Author 6: Albaraa Abuobieda
Author 7: Abubakar Elsafi

Keywords: Artificial intelligence; anti-money laundering (AML); Trust Calibration; Explainability; decision fatigue; human oversight; AUSTRAC Compliance; transaction monitoring; false positives; Analyst–System Interaction; Regulatory Technology (RegTech)

PDF

Paper 10: A Systematic Review of Functional Requirements, Modelling Practices, and Validation Strategies in IoT Application Development

Abstract: The rapid development of the Internet of Things (IoT) requires systematic development methods that address complex functional, architectural, and validation concerns. This review synthesized research published between 2016 and 2023 to characterize common functional requirements (FRs), current modelling techniques, and validation practices. From an initial corpus of 1,598 articles, 425 publications were selected for in-depth analysis. The results demonstrated a consistent emphasis on data handling, inter-device communication, analytics, and system management. Nevertheless, critical gaps persisted in end-to-end security and context-aware computing. Sequence, class, and use case diagrams dominated modelling practices, which indicated attention to system behavior and interactions. Conversely, state machine and deployment diagrams were underutilized despite their potential to better capture runtime states and architectural configurations. The validation approaches in IoT development were primarily empirical, with experiments and case studies predominating. Expert reviews are valuable for early-stage assessment, but were rarely applied, which indicated missed opportunities for early improvement of design quality in the lifecycle. Overall, the results reflected the maturity and limitations of current practices in IoT engineering. These limitations can be addressed by diversifying modelling techniques, enhancing security integration, and using hybrid validation frameworks. This review presented a foundational reference to guide the systematic development of scalable, secure, and context-aware IoT systems, and contributed to the evolving body of IoT software engineering knowledge.

Author 1: Nor Haniza Ramli
Author 2: Nur Atiqah Sia Abdullah
Author 3: Nur Ida Aniza Rusli

Keywords: Functional requirements; Unified Modeling Language; validation; Internet of Things; systematic review

PDF

Paper 11: CAT-TODNet: A Contextual Transformer-Based Optimized Deformable Convolution Framework for Efficient ECG-Based Heart Failure Detection

Abstract: Heart Failure detection using Electrocardiogram (ECG) signals is a critical clinical task, as continuous analysis of cardiac waveforms supports early diagnosis and effective intervention. Despite advancements in machine learning and deep learning techniques, existing approaches often suffer from limited contextual representation, sensitivity to noise, and inadequate handling of non-stationary temporal deformations in ECG signals, which restrict diagnostic reliability. To address these challenges, this study introduces a novel deep learning framework termed Contextual Auxiliary Transformer with Triple Stacked Optimized Deformable Convolution Network (CAT-TODNet) for accurate heart failure detection from ECG signals. ECG recordings acquired from the MIT-BIH Arrhythmia Database are initially subjected to three-stage preprocessing, including denoising, signal smoothing, and Power Line Interference (PLI) removal, to enhance signal quality. The Contextual Auxiliary Transformer (CAT) module explicitly captures both static and dynamic contextual dependencies, enabling robust contextual feature extraction. These context-aware features are subsequently processed through triple stacked deformable convolution layers with adaptive receptive fields. To ensure stable offset estimation under non-stationary ECG conditions, the Al-Biruni Earth Radius (ABER) optimization algorithm is employed to optimize deformable convolution offsets, overcoming the limitations of gradient-based learning. Experimental results demonstrate that CAT-TODNet achieves an accuracy of 98.88%.

Author 1: Vinitha V
Author 2: V. Parthasarathy
Author 3: R. Santhosh

Keywords: Heart Failure (HF); Electrocardiogram (ECG); Artificial Intelligence (AI); Contextual Auxiliary Transformer (CAT); deformable convolution; optimization algorithm

PDF

Paper 12: Application of Improved YOLO-LSTM with Combined MQTT-LoRaWAN for AI Surveillance in Tea Plantations to Prevent Elephant Intrusion

Abstract: Elephant-human conflict is a growing problem in the tea-garden areas of the Dooars in North Bengal, inflicting heavy costs on crops and infrastructure and sometimes claiming human lives. Each year these mild-mannered giants trample crops, break fences, and threaten locals, raising repair costs and endangering lives. Conventional deterrents, such as fences, firecrackers, and patrols, are mostly ineffective, unsustainable, or cruel to the animals. To address this predicament, this work presents a non-invasive, intelligent surveillance system named HIS (Hexagonal Intelligent Surveillance). HIS combines an improved YOLO-LSTM with MQTT-LoRaWAN communication, pairing distributed agents with predictive analytics and a hex-grid mapping scheme. HIS offers an effective solution for detecting and deterring elephant intrusions while preserving ecological balance: when an intrusion is detected, the system sends targeted warnings before the elephants can cause havoc. The hex-grid mapping gives operators accurate spatial knowledge, and the predictive analytics forecasts when and where the elephants are likely to roam. A virtual simulation of the proposed work shows 98% accuracy on a custom-designed elephant dataset. The paper describes the architecture, theoretical framework, algorithmic models, and expected benefits of the proposed framework.
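The hex-grid mapping idea can be sketched with axial coordinates, where each cell's six neighbours come from fixed offsets; the coordinates and cell assignments below are illustrative, not the system's actual grid.

```python
# Axial hex coordinates: each surveillance cell is addressed as (q, r),
# and the six neighbouring cells follow from fixed offsets. This is what
# lets operators localize an intrusion and ring-alert adjacent cells.
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbours(q, r):
    """Cells one step away from (q, r) on the hex grid."""
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]

def hex_distance(a, b):
    """Number of hex steps between two cells in axial coordinates."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

alert_cell = (2, -1)                 # cell where an elephant was detected
ring1 = neighbours(*alert_cell)      # cells to warn first
d = hex_distance((0, 0), alert_cell) # distance from a base station cell
```

Hex grids are a common choice for this kind of coverage map because every cell has exactly six equidistant neighbours, avoiding the diagonal-distance ambiguity of square grids.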

Author 1: Rabin Kumar Mullick
Author 2: Rakesh Kumar Mandal

Keywords: Tea garden; machine learning; artificial intelligence

PDF

Paper 13: Creative Guidance of Intelligent Emotion Recognition in Video Art

Abstract: This study optimizes an existing emotion recognition model to improve the application of emotion recognition technology in guiding video art creation. It compares the performance of Multimodal Sentiment Analysis (MMSA), the Multimodal Sequence Encoder (MuSE), and the optimized model through a series of simulation experiments. In fine-grained emotion recognition, the optimized model performs well in micro-expression recognition accuracy and multimodal fusion effect, scoring 4.17 and 4.96, respectively. In the robustness test under a complex background, the optimized model scores 4.30 for background interference resistance and 4.38 for modal mismatch processing. In the experiment on the effectiveness of emotion recognition for creative guidance, the optimized model scores 4.40 for artistic expression enhancement and 3.66 for creative feedback accuracy. The experimental results show that the optimized model is superior to the existing models in several key dimensions of emotion recognition, especially in enhancing the emotional expression of artistic works and providing creative guidance. This study therefore provides a reference for the application and development of emotion recognition technology in video art creation.

Author 1: Weixing Chen
Author 2: Yubo Zhou

Keywords: Emotional recognition; image art; artistic expressiveness; creative feedback

PDF

Paper 14: NeuroFusionNet Adaptive Deep Learning for Intelligent Real-Time Industrial IoT Decisions

Abstract: The rapid development of the Industrial IoT (IIoT) has facilitated real-time observation and decision-making in smart factories, yet current methods face constraints such as processing noisy, high-dimensional sensor data and modeling both spatial and temporal relationships well. Classical models such as CNN, LSTM, and GRU often fail to handle sequential patterns and context-aware anomaly detection, which restricts predictive maintenance and operational efficiency. To address these limitations, this research introduces NeuroFusionNet, a CNN–BiGRU–Attention hybrid framework developed using Python and TensorFlow, which extracts localized spatial features with CNN, captures bidirectional temporal relationships with BiGRU, and highlights key time steps with Attention for improved anomaly detection and predictive maintenance. The framework is tested on the Environmental Sensor Telemetry dataset, which contains multivariate industrial signals such as gas levels, temperature, and equipment vibrations. Experimental results demonstrate that NeuroFusionNet achieves 95.2% accuracy, 94.8% precision, 94.1% recall, and 94.4% F1-score, an improvement of approximately 2 to 7% over baseline models (CNN, RNN, LSTM) across multiple performance metrics. The method converges faster and supports robust real-time inference, enabling scalable deployment in smart manufacturing environments. These results highlight that NeuroFusionNet not only outperforms conventional hybrid models such as CNN–LSTM and CNN–GRU but also offers actionable insights for predictive maintenance, safety, and efficiency, establishing a foundation for adaptive AI-driven monitoring in Industry 4.0 applications.
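The attention stage alone can be sketched as softmax pooling over per-time-step hidden states; the dimensions and weights below are invented stand-ins for the BiGRU outputs and the learned scoring vector.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H = 12, 8                        # time steps, hidden size (illustrative)
hidden = rng.normal(size=(T, H))    # stand-in for BiGRU outputs
w = rng.normal(size=H)              # scoring vector (learned in practice)

scores = hidden @ w                 # one relevance score per time step
weights = np.exp(scores - scores.max())
weights /= weights.sum()            # softmax over the time axis
context = weights @ hidden          # attention-pooled representation
```

The pooled context vector replaces simple last-step or mean pooling, letting the classifier weight the time steps most indicative of an anomaly.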

Author 1: Ghayth AlMahadin

Keywords: Deep learning; hybrid CNN-BiGRU; OptiSenseNet; sensor data synthesis; smart manufacturing

PDF

Paper 15: Speckle Denoising in Breast Ultrasound Images Using Multi-Filter Pseudo-Clean Targets and Deep Learning

Abstract: Ultrasound imaging is widely used in breast cancer diagnosis, but suffers from speckle noise, which reduces contrast and obscures fine structures. Supervised deep learning methods for speckle reduction/denoising typically require clean ground truth, which is unattainable in vivo. To address this, this study proposes a multi-filter pseudo-ground-truth strategy combined with a UNet++ denoiser. Each image in the BUSI dataset is processed using three classical despeckling filters (Gaussian, median, and total variation) to generate diverse pseudo-clean targets. The network is trained with deep supervision to minimize a robust loss with respect to these targets, enabling it to learn a consensus representation beyond any single filter. On the BUSI test set, the proposed method achieves PSNR = 34.11 dB and SSIM = 0.8901, outperforming recent CNN baselines under the same evaluation protocol. Qualitative results show improved edge preservation and lesion visibility. This approach eliminates the need for unattainable clean ultrasound images and provides a practical path toward clinically useful ultrasound despeckling. Code, data splits, pretrained weights, and the full evaluation protocol will be released for reproducibility.
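The multi-filter consensus idea can be shown on a toy 1-D signal: run several classical smoothers and take their pixel-wise median as the pseudo-clean target. The simple moving filters below stand in for the Gaussian, median, and total-variation filters the paper applies to 2-D images.

```python
import statistics

def moving_average(x, k=3):
    """Windowed mean, a crude stand-in for Gaussian smoothing."""
    half = k // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def moving_median(x, k=3):
    """Windowed median, the 1-D analogue of a median despeckle filter."""
    half = k // 2
    return [statistics.median(x[max(0, i - half):i + half + 1])
            for i in range(len(x))]

noisy = [1.0, 1.2, 5.0, 1.1, 0.9, 1.3, 1.0]   # the spike mimics speckle
outputs = [moving_average(noisy, 3),
           moving_median(noisy, 3),
           moving_average(noisy, 5)]
# Pixel-wise median across filter outputs = consensus pseudo-clean target.
pseudo_clean = [statistics.median(col) for col in zip(*outputs)]
```

Training against the consensus rather than any single filter's output is what lets the denoiser learn a representation "beyond any single filter", as the abstract puts it.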

Author 1: Omar Ayad Alani
Author 2: Muhammad Moinuddin

Keywords: Speckle noise; breast ultrasound; denoising; U-Net++; multi-filter pseudo-clean targets; deep supervision

PDF

Paper 16: From Bits to Qubits: Comparative Insights into Classical and Quantum Computing Systems

Abstract: The rapid development of computing hardware has been driven by an ever-growing need for high throughput, scalable performance, and the computational capability to address increasingly complex problems. The paradigm of classical computing, centered on deterministic binary logic and the von Neumann architecture, has long underpinned modern information processing and still supports a wide range of applications. However, physical limits on transistor scaling, power dissipation, and the slowing of Moore's Law have stimulated the consideration of alternative computing paradigms. Quantum computing has emerged as an alternative that exploits the basic principles of quantum mechanics, such as superposition, entanglement, and quantum interference, to enable new forms of computation. This review compares classical and quantum computing systems in terms of operational paradigms, architectural structures, performance characteristics, and application domains. The work is supported by a systematic review of established theories, currently realized hardware implementations, and representative algorithms. The analysis underlines that classical systems remain very reliable, scalable, and efficient for general-purpose and deterministic workloads, while quantum systems offer essential advantages in specific problem classes, such as cryptography, quantum chemistry, combinatorial optimization, and selected machine learning tasks. The study concludes that classical and quantum computing are best viewed as complementary technologies. Future high-performance computing platforms will most likely be based on hybrid classical–quantum architectures in which quantum processors serve as specialized accelerators that help classical systems solve new computational challenges.

Author 1: Tariq Jamil

Keywords: Classical computing; high-performance computing; quantum computing; qubit

PDF

Paper 17: Histogram Gradient Boosting Classifier-Based UWSN Cyber Attack Detection Incorporating Environmental Factors (HGBoostUCAD)

Abstract: Underwater Wireless Sensor Networks (UWSNs) are commonly employed for exploring and exploiting aquatic areas, and their role is especially valuable in hostile and constrained marine environments. However, their security is more critical than that of terrestrial wireless sensor networks (TWSNs) because of the space in which they are deployed, the wireless communication medium, and the cost of damage repair; protecting them is a problematic issue that must be continuously addressed. Consequently, it is highly recommended, indeed required, to take measures to protect UWSNs against attacks and intrusions and to maintain service quality. In general, existing machine learning-based intrusion detection system (IDS) and cyber-attack detection approaches for UWSNs use dedicated datasets designed for terrestrial WSNs without adapting them to the aquatic environment. Furthermore, these studies analyze UWSN performance through network metrics separately from machine learning model metrics, and vice versa. This paper therefore proposes a novel cyber-attack detection approach based on a Histogram Gradient Boosting (HGB) classifier, called HGBoostUCAD. It classifies four types of DoS attacks (Blackhole, Grayhole, Flooding, and Scheduling) using an adjusted version of the WSN-DS intrusion detection dataset that incorporates simulated, realistic environmental factors, namely salinity, temperature, and depth through Mackenzie's equation, as well as node movement, into the training data. Simulation results show that our method reaches 97% accuracy and 96% precision, outperforming both a Deep Neural Network (DNN) and the recent Hyper_RNN_SVM study referenced in this research in terms of machine learning model metrics. In addition to machine learning model metrics, our approach provides network measurements by DoS attack type.
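Mackenzie's equation, which the dataset adjustment relies on, estimates the speed of sound in seawater from temperature, salinity, and depth; the nine-term 1981 form is sketched below with coefficients as commonly stated, and the example inputs are illustrative rather than taken from the paper.

```python
def mackenzie_sound_speed(t, s, d):
    """Sound speed in seawater (m/s) via Mackenzie's (1981) nine-term
    equation: t in deg C, s in parts per thousand, d in metres."""
    return (1448.96 + 4.591 * t - 5.304e-2 * t**2 + 2.374e-4 * t**3
            + 1.340 * (s - 35) + 1.630e-2 * d + 1.675e-7 * d**2
            - 1.025e-2 * t * (s - 35) - 7.139e-13 * t * d**3)

# Example: temperate water at 1 km depth, open-ocean salinity.
c = mackenzie_sound_speed(t=10.0, s=35.0, d=1000.0)
```

Varying t, s, and d across nodes perturbs the effective acoustic channel, which is how environmental realism can be folded into a terrestrial dataset such as WSN-DS.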

Author 1: Hamid OUIDIR
Author 2: Amine BERQIA
Author 3: Siham AOUAD

Keywords: UWSN; security; intrusion detection system; cyber-attack detection; cybersecurity; machine learning; histogram gradient boosting

PDF

Paper 18: Human–Technology Interaction in Generative AI: A Theoretical Review of Technology Acceptance and Cognitive Response

Abstract: The rapid rise of Generative Artificial Intelligence (GenAI) has transformed the way humans interact with technology and has revealed cognitive mechanisms that extend beyond the explanatory scope of traditional technology acceptance models, such as the Technology Acceptance Model (TAM), Technology Acceptance Model 2 (TAM2), and the Unified Theory of Acceptance and Use of Technology (UTAUT). This theoretical review examines the combined role of the Technology Acceptance Model (TAM) and Cognitive Response Theory (CRT) in explaining GenAI-related user behaviors. The increasing involvement of GenAI in knowledge production triggers complex cognitive reactions, including cognitive trust, curiosity, ambivalence, epistemic suspicion, and resistance, which fundamentally shape technology acceptance processes. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines, a systematic literature search was conducted in the Web of Science and Scopus databases. From 3,842 records published between 2014 and 2025, duplicates were removed, and the remaining studies underwent title–abstract and full-text screening. In the final stage, 69 publications were included in the review corpus. The findings indicate that, while perceived usefulness and perceived ease of use remain core determinants of GenAI adoption within the TAM framework, integrating CRT highlights the importance of deeper internal mechanisms, such as cognitive reappraisal, epistemic trust, algorithmic scepticism, cognitive load, and curiosity. Post-ChatGPT literature further emphasizes the influence of anthropomorphic cues and cognitive tension on user attitudes, trust calibration, and engagement. Overall, the combined application of TAM and CRT provides a more comprehensive theoretical lens for understanding GenAI interactions by concurrently capturing cognitive, emotional, and behavioural processes. 

Author 1: Ugur Dagtekin
Author 2: Ahmet Kamil Kabakus

Keywords: Generative Artificial Intelligence; Technology Acceptance Model; Cognitive Response Theory; Human-AI Interaction; cognitive trust

PDF

Paper 19: Optimized Dimensionality Reduction Using Metaheuristic and Class Separability

Abstract: The high dimensionality of modern datasets presents significant challenges for machine learning, including increased computational cost, model complexity, and risk of overfitting. This study introduces a metaheuristic framework for optimized dimensionality reduction that identifies highly discriminative feature subsets. The proposed method (KDR-PSO) combines a Particle Swarm Optimization (PSO) algorithm with the K-Nearest Neighbors Distance Ratio (KDR) as a filter-based objective function. This metric quantitatively assesses class separability within a feature subspace by computing the ratio of the average distance from a sample to neighbors in other classes versus those in its own class. By maximizing this ratio, with a penalty on subset size, KDR-PSO automates the discovery of parsimonious feature sets that maximize inter-class discrimination. The method is computationally efficient, naturally extends to multi-class classification, and avoids the prohibitive cost of classifier-in-the-loop wrappers. Experimental results on benchmark gene expression and image datasets show that KDR-PSO achieves better dimensionality reduction than the baselines and competing algorithms, yielding models that perform at least as well with fewer features. The approach is a strong and pragmatic technique for improving model interpretability and generalizability in high-dimensional settings.
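
The KDR objective described above can be sketched in a few lines of NumPy. The blob data and the choice of k here are illustrative, and the exact averaging and size-penalty scheme may differ from the paper's.

```python
import numpy as np

def kdr_score(X, y, k=5):
    """K-Nearest Neighbors Distance Ratio: mean distance from each sample to
    its k nearest neighbors in OTHER classes, divided by the mean distance to
    its k nearest neighbors in its OWN class. Larger => better separability."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # exclude self-distances
    inter, intra = [], []
    for i in range(len(X)):
        intra.append(np.sort(D[i, y == y[i]])[:k].mean())
        inter.append(np.sort(D[i, y != y[i]])[:k].mean())
    return np.mean(inter) / np.mean(intra)

# Two well-separated Gaussian blobs should score higher than overlapping ones.
rng = np.random.default_rng(1)
A = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(8, 1, (50, 3))])
B = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(0.5, 1, (50, 3))])
labels = np.array([0] * 50 + [1] * 50)
print(kdr_score(A, labels) > kdr_score(B, labels))  # → True
```

In a PSO wrapper, each particle would encode a candidate feature subset and be evaluated by `kdr_score` on the corresponding columns of X, minus a small penalty proportional to subset size.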

Author 1: Eman Abdulazeem Ahmed
Author 2: Malek Alzaqebah
Author 3: Sana Jawarneh

Keywords: Dimensionality reduction; Particle Swarm Optimization; metaheuristics; K-Nearest Neighbors; class separability; high-dimensional data

PDF

Paper 20: Dynamic Assessment and Goal Optimization of Corporate ESG Performance Based on DEA-CCR-GML and Inverse DEA Integration Framework

Abstract: Since the Ministry of Ecology and Environment issued the "Reform Plan for the System of Environmental Information Disclosure in Accordance with the Law" in 2021, the nation has set forth new requirements for sustainable development. Against this backdrop, how enterprises enhance their value across all dimensions through ESG in compliance with national mandates holds significant implications for refining ESG management systems. This study employs DEA-CCR-GML and inverse DEA models to empirically examine how investments in specific dimensions of ESG's three pillars influence corporate value realization, using data from Shanghai and Shenzhen A-share listed companies from 2019 to 2024. Findings reveal that higher levels of scale efficiency and technical efficiency correlate with greater improvements in ESG scores, while mandatory enforcement of national policies exerts a highly effective driving force on enterprises. Mechanism analysis indicates that firms can enhance ESG score improvement efficiency by elevating scale and technical efficiency, thereby more effectively realizing their intrinsic value.

Author 1: Hui Liu
Author 2: Tsung-Xian Lin
Author 3: Yaqing Hu
Author 4: Yingxi Xiao
Author 5: Chengze Ou
Author 6: Yayi Lao
Author 7: Wenchao Pan

Keywords: Corporate ESG performance; DEA; financing constraints; green innovation

PDF

Paper 21: Development and Evaluation of a Mobile-Based Local Food Information System for Elderly Nutrition Support

Abstract: This research aimed to: 1) study information needs regarding local foods and information systems for the elderly, 2) develop a local food information system for the elderly, and 3) evaluate system effectiveness. The quantitative study included 235 senior caregivers selected via purposive sampling. The research tools were an interview form (IOC = 0.98) and a questionnaire (Cronbach's alpha = 0.953). The results corresponding to the three aims were as follows. 1) Key information needs comprised comprehensive and reliable content covering dietary data, food types, and disease categorization; daily meal search features; disease-specific recommendations; and presentation with large-font images and succinct language. The system was developed on the LINE Official Account platform as a web application. 2) User evaluation from 235 participants showed strong agreement with overall system efficiency: usefulness received the highest rating, followed by LINE OA usability and efficiency, and a gender comparison showed statistically significant differences (p < .01) between females and males in terms of content and efficiency. 3) Linear regression analysis identified efficiency as the primary factor influencing usefulness, followed by system usability and functionality, and a one-way ANOVA showed that users accessing LINE once per week had significantly higher usefulness scores than those using it every 2 to 3 days (p < .05).

Author 1: Renuka Khunchamnan
Author 2: Kewalin Angkananon

Keywords: Information system; LINE OA; elderly care system; local food; elderly nutrition

PDF

Paper 22: Human-Centered Behavioral Analysis of Window Operation Using AI-Based Skeletal Recognition

Abstract: This study presents a quantitative approach to analyzing window opening and closing behaviors using skeletal recognition technology. Video data of five participants performing these actions were captured and processed using the Openpose model, which detects 25 human joints. Focusing on the shoulder, elbow, and wrist, the study analyzed time-series joint coordinates to identify motion patterns and behavioral characteristics. The results revealed consistent relationships among joint movements and enabled accurate distinction between left- and right-hand operations. In addition, behavioral distribution characteristics were examined by visualizing horizontal and vertical skeletal displacements. The results showed that stationary postures are concentrated near a reference origin, whereas window operation actions produce distinct spatial shifts in the coordinate space, indicating that occupant behavior can be interpreted as a sequence of state transitions composed of distinct behavioral phases. The findings confirm that skeletal data can effectively represent occupant behavior without intrusive sensors, providing a non-contact and privacy-preserving monitoring method. This approach contributes to the development of human-centered intelligent building systems that can adapt indoor environments in real time based on occupant actions, thereby improving both thermal comfort and energy efficiency. Future research will expand behavioral categories and explore real-time implementation in smart building applications.

Author 1: Jewon Oh
Author 2: Daisuke Sumiyoshi
Author 3: Takahiro Yamamoto
Author 4: Takahiro Ueno
Author 5: Tatsuto Kihara

Keywords: Image processing; skeletal recognition; behavioral analysis; Openpose

PDF

Paper 23: Agentic AI as the Orchestrator of Mobile Ecosystems: A Review of the Trade-off Between Performance and Drawbacks

Abstract: This systematic review explores the transformational role of agentic artificial intelligence (AI) as an orchestrator in mobile ecosystems. Agentic AI systems proactively plan, execute, and adapt across applications, devices, and services, unlike traditional and generative AI. These systems offer autonomous, context-aware coordination by integrating reasoning engines, tool orchestration, memory, retrieval-augmented generation (RAG), and safety layers. The review examines architectural requirements for mobile deployment, including on-device processing, resource-aware execution, and cross-platform synchronization. It stresses implementation targets and achievements through 2025, automation levels across key capabilities, and the impact of agentic orchestration on mobile ecosystem challenges. The findings highlight agentic AI’s potential to optimize performance, privacy, and user experience simultaneously. Future directions include edge-native architectures, human-in-the-loop frameworks, and multi-agent interoperability standards. This study provides a comprehensive roadmap for advancing agentic AI as a foundational layer in next-generation mobile computing.

Author 1: Ayat Aljarrah
Author 2: Mustafa Ababneh

Keywords: Agentic AI; orchestrator; mobile ecosystems; on-device

PDF

Paper 24: A Novel Fuzzy Logic System for Real-Time Text Difficulty Assessment in Mobile Reading Apps for Dyslexia

Abstract: Automated text difficulty assessment in mobile reading applications remains an underexplored challenge for dyslexia support systems. This study presents the development and validation of an intelligent fuzzy logic system engineered for real-time text complexity analysis in mobile web environments. Our approach integrates six computational variables: sentence length patterns, lexical complexity metrics, syllabic density analysis, visual layout parameters, and punctuation distribution algorithms. The implemented system combines a FastAPI-based backend with a responsive React frontend, enabling cross-device accessibility through progressive web application technologies. Technical validation demonstrates 94.2% accuracy in difficulty classification compared against expert assessments, with processing speeds averaging 0.3 seconds per text analysis. Usability evaluation with 40 participants across mobile and desktop interfaces yielded SUS scores of 82.6, while 85% expressed frequent usage intention. The mobile web architecture achieves 98% device coverage with WCAG 2.1 AA compliance standards. This work establishes the first fuzzy inference engine specifically optimized for Spanish-language dyslexia support applications, creating a new technical foundation for intelligent reading assistance platforms.
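
As an illustration of the kind of fuzzy inference such a system performs, the toy sketch below scores difficulty from two of the listed variables (sentence length and syllabic density) using triangular memberships. The thresholds, rules, and defuzzification step are hypothetical stand-ins, not the validated system's calibrated values.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def text_difficulty(avg_sentence_len, avg_syllables_per_word):
    """Toy Mamdani-style inference with two inputs and three output levels.
    All breakpoints are illustrative assumptions."""
    long_sent = tri(avg_sentence_len, 10, 25, 40)
    dense = tri(avg_syllables_per_word, 1.5, 2.5, 3.5)
    short_sent = tri(avg_sentence_len, 0, 5, 15)
    sparse = tri(avg_syllables_per_word, 0.5, 1.0, 1.8)
    # Rules: hard if sentences long AND words dense; easy if short AND sparse.
    hard = min(long_sent, dense)
    easy = min(short_sent, sparse)
    medium = max(0.0, 1.0 - hard - easy)
    # Defuzzify via weighted centroids (easy=0.0, medium=0.5, hard=1.0).
    total = easy + medium + hard
    return (0.5 * medium + 1.0 * hard) / total

print(text_difficulty(30, 3.0) > text_difficulty(6, 1.0))  # → True
```

A production system would add the remaining input variables (visual layout, punctuation distribution) as further antecedents and tune the membership breakpoints against expert-labeled texts.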

Author 1: Enrique Lee Huamaní
Author 2: Brian Andreé Meneses-Claudio
Author 3: Carlos Fidel Ponce Sánchez
Author 4: Jehovanni F. Velarde-Molina

Keywords: Fuzzy logic systems; mobile web applications; dyslexia support technology; automated text analysis; accessibility engineering

PDF

Paper 25: Ubiquitous Computing Framework for Reducing Ambiguity in the Lanna Thai Dialect Using Transformer Models and Fuzzy Logic

Abstract: This research focuses on developing a speech-recognition model that can better handle the unique sounds and vocabulary of the Lanna Thai dialect while supporting translation between Lanna and Standard Thai. A dataset of spoken Lanna Thai was collected from native and fluent speakers between 2023 and 2025, refined from 200 selected words to 100 terms that were consistently difficult to interpret. This dataset was used to train several supervised models, including HuBERT, Wav2Vec2 (baseTH), Wav2Vec2, and WavLM, with HuBERT showing the strongest overall performance and Wav2Vec2 (baseTH) offering a balanced vocabulary response. A companion web application was also created to convert Lanna speech to text and provide two-way translation, improving access to local language resources despite some limitations with rare terms and diverse accents. Early user feedback indicates that the system is practical and helpful, supporting the broader goal of preserving Lanna Thai in a modern digital environment.

Author 1: Wongpanya S. Nuankaew
Author 2: Pathapol Jomsawan
Author 3: Pratya Nuankaew

Keywords: Ambiguities in Lanna Thai vocabulary; Lanna Thai vocabulary; pervasive computing; Speech Transformer; Thai speech recognition; ubiquitous computing

PDF

Paper 26: The Retrieval-Augmented Pedagogical Assistant (RAPA): A Methodology for Enhancing Critical Thinking and Equity in AI-Augmented Education

Abstract: This study presents the Retrieval-Augmented Pedagogical Assistant (RAPA) methodology, an integrated framework designed to overcome the core limitations of general Large Language Models (LLMs)—specifically factual instability (hallucination) and static knowledge bases—by deploying a specialized, institutional Retrieval-Augmented Generation (RAG) architecture. The methodology addresses three critical challenges to the responsible integration of AI in higher education. Firstly, the framework ensures data sovereignty and sustainable deployment by mandating a comprehensive Total Cost of Ownership (TCO) analysis. This analysis validates the strategic necessity of local RAG hosting and of leveraging computational efficiencies, such as Parameter-Efficient Fine-Tuning (PEFT) and PROXIMITY caching, to ensure a cost-effective solution that strictly complies with FERPA and GDPR data protection mandates and mitigates security risks associated with data leakage. Secondly, the framework ensures the equitable integration of AI literacy across disciplines with varying technological resources, particularly in the Humanities and Vocational Education and Training (VET). This is achieved by minimizing technical prerequisites and institutionalizing continuous Professional Development (PD) through the Dialogic Video Cycle (DVC), which trains faculty in Prompt Engineering to embed individualized pedagogical rules and ethical constraints into the RAPA’s architecture. Finally, specific measures are implemented to evaluate the development of Critical Thinking (CT). RAPA outputs are architecturally constrained to include transparent Chain-of-Thought (CoT) reasoning and verifiable source citations. Student Critical AI Analysis Assignments require students to critique the AI's synthesis, identifying inaccuracies, biases, or limitations. 
The effectiveness of this assessment is quantified using a quasi-experimental design and technical RAGAS metrics, such as Faithfulness and Context Precision, ensuring a verifiable shift from passive knowledge consumption to active, informed critique. Key findings from the preliminary architectural validation indicate that integrating Proximity-LSH caching reduced database retrieval calls by 77.2% and retrieval latency by approximately 72.5%, while maintaining high retrieval recall, addressing the scalability bottleneck inherent in high-volume educational deployments. Furthermore, the application of Robust Fine-Tuning (RbFT) demonstrated a marked improvement in the system's resilience to noisy educational data, preventing performance degradation where standard RAG models typically fail when exposed to irrelevant or counterfactual document chunks. These technical optimizations directly support the pedagogical objective by ensuring that the AI assistant remains responsive and factually grounded.

Author 1: Shohel Pramanik
Author 2: Mohd Heikal Bin Husin

Keywords: RAG; AI literacy; critical thinking; equitable education; professional development

PDF

Paper 27: Automated Quality Evaluation of Panoramic Dental Radiographs Using a Domain-Adapted Transfer Learning

Abstract: Assessing the quality of panoramic dental radiographs is essential to ensure diagnostic accuracy and patient safety. However, existing CNN-based approaches for radiograph quality assessment often emphasize architectural comparisons, while providing limited discussion on training stability and generalization, particularly when applied to relatively small and heterogeneous datasets. To address this gap, this study proposes a transfer learning-based framework that integrates Global Average Pooling (GAP) and Batch Normalization (BN) to enhance feature robustness and reduce overfitting in panoramic dental radiograph quality classification. Three pretrained CNN architectures: ResNet50, VGG16, and VGG19 were evaluated using panoramic radiographs collected from two tertiary hospitals in Indonesia. Experimental results using k-fold cross-validation indicate that the proposed GAP+BN refinement improves classification consistency across models, with VGG16 demonstrating the most stable and reliable performance. These findings suggest that domain-adapted transfer learning with appropriate feature aggregation and normalization can support the development of automated and clinically reliable quality assurance systems for panoramic dental imaging.

Author 1: Nur Nafiiyah
Author 2: Rifky Aisyatul Faroh
Author 3: Eha Renwi Astuti
Author 4: Rini Widyaningrum
Author 5: Agus Harjoko
Author 6: Kang-Hyun Jo
Author 7: Alhidayati Asymal
Author 8: Youan Nhareswary Dwike Prasetya

Keywords: Batch Normalization; image quality; panoramic radiograph; transfer learning

PDF

Paper 28: A Multi-View Classification Method for Distribution Network Towers Based on Improved EfficientNet

Abstract: View recognition of distribution network towers is a key technology in UAV intelligent inspection. To address the problem of low accuracy of existing deep learning methods in complex background interference, this paper proposes a tower view classification method based on EfficientNet that integrates foreground perception, multi-scale feature fusion, and dual-dimensional attention. First, a Mask-Guided Fusion Module (MGFM) is designed to extract tower foreground masks using the BiRefNet network, enhancing foreground representation and suppressing background interference through a two-stage fusion strategy. Second, a Multi-Scale Attention Aggregation Module (MSAA) is constructed to achieve efficient cross-layer feature fusion through parallel multi-scale convolution, fully integrating shallow details and deep semantic information. Finally, the Convolutional Block Attention Module (CBAM) is introduced to adaptively strengthen view-discriminative features through channel and spatial dual-attention mechanisms, significantly improving the recognition capability for small-sample categories such as top views. Ablation experiments on a self-built multi-view tower dataset show that the proposed method can effectively distinguish different views such as top view, front view, and side view, with significantly improved accuracy compared to other deep learning models, providing technical support for intelligent inspection of transmission lines.
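
The CBAM stage mentioned above applies channel attention followed by spatial attention. A dependency-free NumPy sketch, with random untrained weights and a simplified spatial step standing in for the usual 7×7 convolution, is:

```python
import numpy as np

def cbam(feature_map, reduction=4):
    """Minimal CBAM-style attention on a (C, H, W) feature map: channel
    attention from pooled descriptors through a shared bottleneck MLP, then
    spatial attention from channel-pooled maps. Weights here are random,
    purely for illustration of the data flow."""
    C, H, W = feature_map.shape
    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 0.1, (C // reduction, C))  # bottleneck MLP weights
    W2 = rng.normal(0, 0.1, (C, C // reduction))
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # Channel attention: shared MLP over average- and max-pooled descriptors.
    avg_pool = feature_map.mean(axis=(1, 2))
    max_pool = feature_map.max(axis=(1, 2))
    ch = sigmoid(W2 @ np.maximum(W1 @ avg_pool, 0)
                 + W2 @ np.maximum(W1 @ max_pool, 0))
    x = feature_map * ch[:, None, None]

    # Spatial attention: sigmoid over combined channel-wise avg and max maps
    # (the original uses a 7x7 conv; a plain sum keeps this dependency-free).
    sp = sigmoid(x.mean(axis=0) + x.max(axis=0))
    return x * sp[None, :, :]

out = cbam(np.random.default_rng(4).normal(size=(16, 8, 8)))
print(out.shape)  # → (16, 8, 8)
```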

Author 1: Gao Liu
Author 2: Changyu Li
Author 3: Junsheng Lin
Author 4: Xinzhe Weng
Author 5: Qianming Wang
Author 6: Zhenbing Zhao

Keywords: Multi-view classification of power towers; mask-guided feature fusion; BiRefNet; multi-scale feature fusion; convolutional block attention

PDF

Paper 29: Accessible Application Prototype for Improving Digital Reading in People with Dyslexia: User-Centered Design and Usability Validation

Abstract: Digital reading remains a challenge for individuals with dyslexia due to the limited availability of accessible tools tailored to their cognitive and perceptual needs. Although many digital reading applications offer basic personalization options, they often lack integrated mechanisms to support reading comprehension and user autonomy. This study presents the design and usability validation of an accessible mobile application prototype aimed at improving the digital reading experience for people with dyslexia using a user-centered design approach. The research followed a design thinking methodology that included a systematic literature review, analysis of documented user needs from peer-reviewed studies and dyslexia support communities, personas development based on published user profiles, and a competitive analysis of existing accessible reading applications. The design process focused on identifying key pain points and developing interactive prototypes that integrate advanced text personalization, visual and auditory supports, text difficulty analysis, and intelligent accessibility suggestions. The proposed prototype enables users to customize font type and size, color, and spacing; activate text-to-speech functionality; highlight words or lines; upload and edit texts in multiple formats; and receive automated recommendations to enhance accessibility. Usability and accessibility were evaluated through expert heuristic assessment using established usability principles and WCAG 2.1 guidelines. Results indicate strong adherence to usability standards, with experts highlighting effective feature integration and identifying the need for additional instructional support for advanced functions. Overall, the proposed application addresses key gaps in digital reading accessibility and provides a foundation for future empirical validation with end users.

Author 1: Enrique Lee Huamaní
Author 2: Brian Andreé Meneses-Claudio
Author 3: Carlos Fidel Ponce Sánchez
Author 4: Jehovanni F. Velarde-Molina

Keywords: Dyslexia; digital accessibility; digital reading; usability; user-centered design

PDF

Paper 30: Vegetation Identification in Hyperspectral Images of Cartagena City Using the Haar Wavelet Transform

Abstract: Hyperspectral imaging is one of the most widespread remote sensing techniques in earth observation, corresponding to images with high spectral and spatial resolution that enable material detection through the identification of their spectral signature. A key challenge in hyperspectral imaging is the definition of novel and efficient computational methods that contribute to reducing computational cost while maintaining the efficacy and precision in material detection provided by methods such as correlation or machine learning. This study aims to propose a new efficient method for vegetation detection in hyperspectral images based on the similarity between the approximate and detailed components of the Haar wavelet transform of the vegetation spectral signature, with respect to the components of the pixel to be classified in the image. For the development of the present investigation, five methodological phases were defined: P1. Selection of sample pixels for vegetation and other materials; P2. Determination of the characteristic vegetation pixel; P3. Implementation and evaluation of the method with vegetation and non-vegetation pixels; P4. Deployment of the method on the reference hyperspectral image; P5. Comparative evaluation of the proposed method against the correlation method. As a result of this research, a novel computational method for vegetation identification in hyperspectral images was proposed, leveraging the similarity of wavelet transform components. This method demonstrated comparable detection efficacy to the correlation method and proved to be approximately 5% more efficient in the detection process. The proposed method can be suitably integrated into hyperspectral image-based environmental monitoring systems, particularly where images are of considerable size and more efficient methods are required.
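
The core idea, comparing the approximation and detail components of one-level Haar transforms of a reference signature and a candidate pixel spectrum, can be sketched as follows. The toy signatures, noise levels, and the correlation-based combination rule are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def haar_level1(signal):
    """One-level Haar DWT: approximation (pairwise sums) and detail
    (pairwise differences), each scaled by 1/sqrt(2)."""
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def wavelet_similarity(reference, pixel):
    """Similarity as the mean Pearson correlation between the approximation
    and detail components of two spectral signatures (an illustrative
    combination rule, not necessarily the paper's exact one)."""
    ra, rd = haar_level1(reference)
    pa, pd_ = haar_level1(pixel)
    return (np.corrcoef(ra, pa)[0, 1] + np.corrcoef(rd, pd_)[0, 1]) / 2

rng = np.random.default_rng(2)
bands = np.linspace(400, 2400, 128)            # hypothetical band centers (nm)
veg = np.exp(-((bands - 800) / 300) ** 2)      # toy vegetation-like signature
noisy_veg = veg + rng.normal(0, 0.005, 128)    # same material, sensor noise
soil = np.linspace(0.1, 0.6, 128) + rng.normal(0, 0.01, 128)  # toy non-vegetation
print(wavelet_similarity(veg, noisy_veg) > wavelet_similarity(veg, soil))
```

Because the transform halves the number of samples per component, the per-pixel comparison operates on shorter vectors than direct spectral correlation, which is consistent with the efficiency gain the abstract reports.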

Author 1: Gabriel Elías Chanchí Golondrino
Author 2: Manuel Alejandro Ospina Alarcón
Author 3: Manuel Saba

Keywords: Vegetation detection; hyperspectral images; remote sensing; wavelet transform; earth observation

PDF

Paper 31: Impact of Climate Change on Animal Diseases Based on Machine Learning

Abstract: The rapid pace of climate change has altered the distribution of animal diseases, increased their frequency, and dispersed them over a larger geographic area. Rising temperatures, fluctuating humidity, and erratic rainfall patterns have increased the risk of illness in cows, and these changes have facilitated the spread of diseases and their vectors. As a result, timely and accurate identification of these illnesses has become crucial for both food security and sustainable animal health management. To detect and classify animal diseases from visual data, this study proposes a diagnostic framework based on machine learning, focusing on convolutional neural networks (CNNs) in conjunction with classification models including ResNet, YOLOv5, and AltCLIP. By learning to distinguish between healthy and sick animals, the framework enables prompt identification and treatment. We then merge the disease detection outputs with climate parameters, compare the models to select the best-performing one, and use it to build advanced disease detection tools that flag potential risks. According to the results, machine learning-based diagnosis can improve the accuracy and efficiency of disease detection while providing valuable input for climate adaptation strategies in cattle management. The best model is deployed behind a graphical user interface (GUI) that displays environmental risk scores, diagnostic data, and recommended actions, such as monitoring the situation, seeking immediate veterinary care, or verifying the animal's health.

Author 1: Gehad K. Hussien
Author 2: Mohamed H. Khafagy
Author 3: Hussam M. Elbehiery

Keywords: Climate change; environmental health; convolutional neural network (CNN); animal diseases; graphical user interface (GUI)

PDF

Paper 32: Linking Leadership Styles to Corporate ESG Performance: A Novel Sierpinski Triangle Fuzzy Decision-Making Modelling

Abstract: Existing research generally addresses the factors affecting ESG performance at a general level, but fails to examine the relative impact of leadership approaches on this performance with a holistic decision-making model. This deficiency makes it difficult for businesses to align their sustainability strategies with the right leadership styles and creates uncertainty in achieving their ESG goals. In this context, the aim of this study is to determine the most appropriate leadership style for improving ESG performance and to prioritize the criteria influencing this selection with an integrated approach. To address this gap in the literature, the study proposes a new multi-criteria decision-making model. The model utilizes a combination of the Z-score-based normalized ideal distance method (z-NIDM), CIMAS, RAM, and the innovative Sierpinski triangle fuzzy sets. According to the analysis results, the most important criterion for improving ESG performance is promoting green innovation, with a weight of 0.108, followed by resource efficiency, with a weight of 0.105. The most appropriate leadership style was determined to be ethical leadership, with a weight of 1.4841. These findings suggest that, to achieve their sustainability goals, businesses must adopt ethical management approaches, increase investments in green innovation, and make resource efficiency a strategic priority. This study offers a unique contribution by introducing a new fuzzy set approach to the literature, analyzing the relationship between ESG and leadership with an integrated decision-making model, and proposing a methodologically robust framework.

Author 1: Serkan Eti
Author 2: Çagla Özgen Safak
Author 3: Serhat Yüksel
Author 4: Hasan Dinçer

Keywords: Leadership approaches; ESG performance; z-NIDM; CIMAS; RAM

PDF

Paper 33: Temporal-Cross-Modal Intelligence for Detecting Fraudulent Crowdfunding Campaigns

Abstract: Fraud on reward-based crowdfunding platforms has become a multimodal and temporally dynamic threat, and conventional text-only or snapshot-based detection methods are ineffective against more sophisticated deceptive campaigns. This study proposes a Temporal Dynamics Aware Multi-Model Fraud Detection Framework (TDMM-FDF) that simultaneously models linguistic indicators, visual discrepancies, and temporal behavioral changes. The framework introduces three key innovations: 1) HM4, a Hidden Method-of-Moments Markov model for capturing long-range latent transitions across campaign updates; 2) Polynomial Expansion Canonical Correlation Analysis (PECCA) for quantifying nonlinear semantic discrepancies between textual narratives and associated images; and 3) a Frequency-Gated GRU (FG-GRU), which separates recurrent activations into low-frequency (trend) and high-frequency (anomaly) components for higher sensitivity to abrupt fraudulent behaviors. Extensive experiments on a real Kickstarter dataset show that the proposed architecture significantly outperforms classical machine learning models, sequence encoders, and transformer baselines, achieving 96.4% accuracy, good calibration (ECE = 0.06), and high ROC-AUC. Ablation studies confirm the complementary role of each module, and qualitative analyses surface the precise semantic-visual discrepancies and temporal anomalies that characterize fraudulent campaigns.

Author 1: Lakshmi B S
Author 2: Rekha K S

Keywords: Crowdfunding fraud detection; multimodal learning; temporal behavior modeling; cross-modal consistency analysis; blockchain-based verification

PDF

Paper 34: Innovative Approaches to Green Strategy Formulation with a Novel Hybrid AI-Spherical Fuzzy Framework

Abstract: This study aims to establish prioritized strategies for businesses to adopt green strategies. In this framework, literature-based criteria are analyzed through a three-stage model. In the first stage of the analysis, an artificial intelligence (AI)-based decision matrix is created. In the second stage, factors affecting green business strategies are weighted by the Spherical Fuzzy (SF) Entropy method. In the last stage, the strategies are ranked using the SF ARAS method. The novelty of this study is the integration of AI with SF numbers: the AI-based decision matrix enables expert opinions to be weighted differently according to factors such as the experts' knowledge level and experience. The findings show that the most important criterion is cost efficiency (weight: 0.2219). According to the analysis results, investments in clean energy projects have a positive impact on this process (Ki: 0.9799).

Author 1: Yasar Gökalp
Author 2: Serkan Eti
Author 3: Halil Yorulmaz
Author 4: Serhat Yüksel
Author 5: Hasan Dinçer

Keywords: Artificial intelligence; fuzzy decision-making; spherical fuzzy sets; ARAS; Entropy; green strategy

PDF

Paper 35: RFM–K-OPT Based Machine Learning Framework for Customer Segmentation and Behavioral Profiling in Direct Marketing

Abstract: Customer segmentation is an essential element of modern marketing analytics, helping companies recognize, understand, and target customers based on their behavioral and transactional attributes. Conventional methods based on Recency, Frequency, and Monetary (RFM) analysis or on simple unsupervised clustering algorithms such as K-Means are widely used, but they are typically limited by sensitivity to initial centroid placement, low cluster separability, and poor interpretability. These problems produce unstable segmentation results and undermine the reliability of data-driven marketing decisions. To address these concerns, this study proposes a hybrid model, the RFM K-Means Optimization Technique (RFM–K-OPT), which combines RFM analytics, K-Means clustering, and an iterative centroid optimization unit. The proposed framework improves cluster compactness, stability, and interpretability through statistical computation and refinement of centroid positioning. The model is implemented in Python and tested on publicly available customer transaction data. Experimental results show improved clustering quality, with a Silhouette Coefficient of 0.83, a Davies-Bouldin Index of 0.31, a Calinski-Harabasz Index of 563, a clustering purity of 94.2%, and an execution time of 5.4 seconds. The results suggest that the RFM–K-OPT model offers credible and explainable customer segments that can support effective behavioral profiling and sound decision-making in direct marketing.
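The RFM-plus-clustering pipeline the abstract describes can be sketched in plain numpy: Lloyd's K-Means with iterative centroid refinement on synthetic RFM features. This is an illustrative sketch with invented data, not the authors' RFM–K-OPT implementation.

```python
# RFM features + basic K-Means (Lloyd's algorithm) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 300
rfm = np.column_stack([
    rng.integers(1, 365, n),     # Recency: days since last purchase
    rng.integers(1, 50, n),      # Frequency: number of orders
    rng.gamma(2.0, 150.0, n),    # Monetary: total spend
]).astype(float)
X = (rfm - rfm.mean(axis=0)) / rfm.std(axis=0)   # z-score scaling

def kmeans(X, k=4, iters=50, seed=0):
    r = np.random.default_rng(seed)
    C = X[r.choice(len(X), k, replace=False)]    # initial centroids
    for _ in range(iters):
        d = ((X[:, None, :] - C[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Iterative centroid refinement: move each centroid to the mean
        # of its currently assigned points.
        C = np.array([X[labels == j].mean(0) if (labels == j).any() else C[j]
                      for j in range(k)])
    return labels, C

labels, C = kmeans(X)
print(np.bincount(labels))   # segment sizes
```

In practice the segments would then be profiled back in original RFM units (e.g. "high-frequency, high-spend" customers).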

Author 1: Khadija Mehrez

Keywords: Customer segmentation; behavioral profiling; clustering optimization; predictive marketing; data-driven decision making

PDF

Paper 36: Enhancing Organizational Information Systems Through Explainable Artificial Intelligence

Abstract: This study examines Workplace Perceptions among Finnish employees through the application of Artificial Intelligence within the domain of Human Resource Analytics. An integrated analytical framework combining Clustering Analysis, supervised classification, and Explainable Artificial Intelligence is proposed to uncover and interpret latent employee perception profiles. Using 23 perception-related indicators from the Finnish Working Life Barometer 2022, K-means clustering identified two distinct employee groups: one characterized by consistently positive evaluations of fairness, leadership, well-being, and motivation, and another reflecting systematically negative workplace perceptions. A LightGBM model was subsequently employed to predict cluster membership based on demographic and occupational variables, and SHapley Additive exPlanations (SHAP) were used to provide transparent global and local interpretations of the predictive outcomes. The results show that employment duration, age, industry affiliation, gender, and socioeconomic status are the most influential determinants of cluster membership. By embedding Explainable Artificial Intelligence into Human Resource Analytics, the study demonstrates how employee perception data can be transformed into interpretable knowledge that supports organizational Decision-Support Systems. The proposed framework advances data-driven and transparent HR decision-making and contributes to the United Nations Sustainable Development Goal 8, Decent Work and Economic Growth, by identifying structural disparities in employee experience and enabling more equitable and inclusive workplace interventions.

Author 1: Kian Jazayeri

Keywords: Artificial intelligence; Human Resource Analytics; Explainable Artificial Intelligence; Decision-Support Systems; workplace perceptions; Clustering Analysis; Decent Work and Economic Growth

PDF

Paper 37: Business Process Outsourcing and Digitalization in Albania: Challenges, Opportunities, and Strategic Directions

Abstract: The rapid expansion of Business Process Outsourcing (BPO) has transformed the global services economy, and Albania is emerging as a competitive nearshoring destination in the Western Balkans. This study examines the intersection of BPO and digitalisation in Albania, exploring how technological innovation, artificial intelligence (AI), and cloud-based automation are reshaping service delivery, labour productivity, and competitiveness. The study is explicitly framed as a desk-based policy and analytical study: it relies exclusively on secondary data from OECD, World Bank, IBM, and European Commission reports (2023–2025), without primary data collection. Findings indicate that Albania’s BPO sector benefits from low labour costs, multilingual human capital, and favourable fiscal policies, yet faces challenges related to technological capability, digital infrastructure, and talent retention. Through a PEST analytical framework, this study identifies the macro-environmental factors influencing BPO development and proposes strategic directions for enhancing digital readiness and regional integration. The study further expands its comparative analysis to other Western Balkan economies, Kosovo, North Macedonia, and Montenegro, providing a broader perspective on Albania’s position within the regional outsourcing ecosystem. This research contributes to the academic and policy discourse on digital transformation by presenting an integrated model aligning BPO growth with sustainable innovation and regional competitiveness.

Author 1: Nertila Çika

Keywords: Business Process Outsourcing (BPO); digitalisation; artificial intelligence (AI); automation; PEST analysis; Western Balkans; Albania; strategic development

PDF

Paper 38: Framework for Ethical Acquisition of User-Data to Improve Recommendation Models’ Accuracy in Digital Systems

Abstract: The modern digital ecosystem has evolved into a pervasive, opaque system in which platforms collect and infer personal data through nearly every online action (search queries, email content, browsing history, and app usage) without transparency. Justified as a means to deliver “relevant” content and ads, this approach undermines user privacy, introduces bias, and normalizes surveillance. Through a comprehensive literature review, this study critically analyzes the current landscape of user tracking, profiling, and privacy violations on online platforms and evaluates the impact of existing legal, technical, and platform-driven mechanisms such as GDPR, CCPA, ATT, and Privacy Sandbox in protecting user autonomy. It was found that current frameworks fall short, being mostly policy-based and offering hard-to-access user controls. A major flaw in existing systems is the assumption that all digital behavior reflects actual user preference, overlooking shared devices, accidental clicks, and non-user actions. To validate these insights, a survey of 572 privacy-aware participants was conducted, with nearly 71% preferring a proactive solution over passive regulatory frameworks and hard-to-navigate privacy menus and dashboards. Building on these findings, this study proposes a framework: a digital platform where individuals actively create and manage modular preference profiles, categorized by app type or content domain, which can be selectively and consensually shared with platforms in a standardized format. This concept facilitates high-quality, context-rich datasets for algorithms, enhancing personalization and the accuracy and performance of recommendation models. By shifting from forced surveillance to invited participation, this approach advances ethical data-sourcing, enhances algorithmic accuracy, and aligns with SDG 9 and SDG 16 by prioritizing responsible digital solutions, process innovation, and safeguarding user autonomy.

Author 1: Shaheer Hussain Qazi
Author 2: M. Batumalay
Author 3: Asheer Hussain
Author 4: Ali Abbas

Keywords: Data handling; data privacy; online tracking; data collection; user profiling; model accuracy; ethical AdTech; open standards; user autonomy; SDG 9; SDG 16; process innovation

PDF

Paper 39: Towards Quantum-Accelerated Urban Systems: Integrating Quantum Computing into Saudi Smart City Megaprojects

Abstract: Quantum Computing (QC), rooted in the principles of superposition and entanglement, enables transformative computational capabilities that surpass classical systems, particularly in solving NP-hard combinatorial optimization, simulation, and machine learning problems. These capabilities are increasingly vital for smart cities, which depend on real-time data from the Internet of Things (IoT) devices, Artificial Intelligence (AI), and Urban Digital Twins (UDTs) to orchestrate complex urban systems such as traffic, energy, logistics, and public safety. As global urbanization accelerates, the demand for hyper-efficient, secure, and adaptive infrastructure exceeds the limits of classical computation. This study employs a multi-pronged methodology that combines literature synthesis, algorithmic mapping, and strategic roadmap design. This study investigates the strategic alignment between QC and the computational demands of next-generation urban environments, with a specific focus on Saudi Arabia’s greenfield megaprojects, including NEOM, The Line, and the Red Sea Project, within the Saudi Vision 2030 framework. The analysis systematically maps urban computational challenges to applicable quantum algorithm families—Quantum Approximate Optimization Algorithm (QAOA), Variational Quantum Eigensolver (VQE), and Quantum Machine Learning (QML)—and synthesizes the technical, organizational, financial, ethical, and regulatory prerequisites for national deployment. The core contribution is the development of a conceptual Hybrid Quantum-Classical Architecture (HQCA) and a methodologically grounded three-phase deployment roadmap, tailored to the Saudi context, mapping quantum technical readiness to policy and infrastructure milestones in Saudi Arabia. This framework positions Saudi Arabia to pioneer quantum-accelerated urban systems, enabling resilient infrastructure, sovereign digital capabilities, and global leadership in the emerging Quantum City paradigm.

Author 1: Eissa Alreshidi

Keywords: Quantum Computing (QC); Hybrid Quantum-Classical Architecture (HQCA); quantum security; smart cities; NEOM; Saudi Vision 2030; combinatorial optimization; Urban Digital Twin (UDT); Quantum Machine Learning (QML); roadmap

PDF

Paper 40: MetaEdge: A Meta-Learning-Based Auto-Selective Tool for Hardware-Aware Anomaly Detection on Edge Devices

Abstract: The deployment of anomaly detection systems across heterogeneous edge computing environments faces significant challenges due to varying computational constraints and resource limitations. Existing approaches typically employ static model selection strategies that fail to adapt to diverse hardware capabilities, resulting in suboptimal detection performance and inefficient resource utilization. To address this, we propose MetaEdge, a novel hardware-aware framework that intelligently selects and deploys anomaly detection models based on specific device characteristics and hardware constraints. The MetaEdge framework introduces a systematic methodology that leverages meta-learning in the first stage to train a machine learning model to predict the top-k anomaly detectors by considering dataset characteristics. These candidates are then put through hardware-aware optimization that incorporates the hardware constraints of edge devices to ensure deployment feasibility. The framework evaluates 11 candidate anomaly detection algorithms spanning traditional machine learning and deep learning methods across four representative computing architectures ranging from ultra-constrained edge devices to GPU-accelerated cloud instances. Model conversion through ONNX standardization enables cross-platform deployment while maintaining detection capabilities. Experimental evaluation demonstrates the framework's effectiveness in achieving superior anomaly detection performance across diverse hardware configurations. The hardware-aware stage successfully identifies optimal model-hardware pairings, with the deployed models achieving up to 96.6% accuracy and 90.4% precision on edge devices. The framework demonstrates high accuracy in model selection decisions, with confidence scores providing meaningful hardware compatibility assessments that guide deployment. 
MetaEdge introduces a novel paradigm for hardware-aware anomaly detection in edge computing, demonstrating that meta-learning–driven model selection can deliver superior detection performance while adhering to stringent hardware constraints. By integrating automatic model selection with hardware-aware optimization, the proposed approach enables anomaly detection systems to intelligently adapt to diverse computing environments and maximize performance under resource constraints.
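The two-stage idea (a meta-learned top-k ranking followed by a hardware-aware filter) can be sketched as follows. The detector names, predicted scores, and memory budget below are invented for illustration and are not MetaEdge's actual components or numbers.

```python
# Two-stage selection sketch: rank candidate anomaly detectors by a
# (mocked) meta-learned quality score, then keep only those that fit the
# target device's resource budget.
candidates = {           # detector -> (predicted_f1, model_size_mb)
    "iforest": (0.88, 5), "lstm_ae": (0.93, 120),
    "ocsvm":   (0.85, 8), "cnn_ae":  (0.91, 60),
}

def select(candidates, k, mem_budget_mb):
    # Stage 1: a meta-learning surrogate would predict per-dataset scores;
    # here the scores are simply given.
    topk = sorted(candidates, key=lambda m: -candidates[m][0])[:k]
    # Stage 2: hardware-aware filter keeps only deployable models.
    return [m for m in topk if candidates[m][1] <= mem_budget_mb]

print(select(candidates, k=3, mem_budget_mb=64))  # ['cnn_ae', 'iforest']
```

On a larger device the budget relaxes and the top-ranked model survives the filter; on an ultra-constrained device only the lightweight detectors remain.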

Author 1: Nadia Rashid
Author 2: Rashid Mehmood
Author 3: Fahad Alqurashi
Author 4: Turki Alghamdi

Keywords: Anomaly detection; edge computing; hardware-aware optimization; machine learning; meta-learning; model selection; ONNX

PDF

Paper 41: A Bio-Inspired Behavior-Based Hybrid Framework for Ransomware Detection

Abstract: Ransomware remains a critical and evolving cybersecurity threat, increasingly rendering traditional signature-based detection techniques ineffective. While modern machine learning models achieve high detection accuracy, they often operate as opaque “black boxes”, introducing a significant explainability gap that undermines analyst trust. In addition, behavior-based anomaly detection systems frequently suffer from high false-positive rates, limiting their operational viability. To address these challenges, this study adopts a Design Science Research Methodology to develop a novel, interpretable, multi-stage ransomware detection framework. The proposed architecture integrates three complementary components: a bio-inspired Negative Selection Algorithm from Artificial Immune Systems to filter benign behavioral patterns, a first-order Markov chain model to capture probabilistic deviations in execution sequences, and a Random Forest ensemble classifier to synthesize these signals for final decision-making. The framework is evaluated using a dual-pipeline experimental design on real-world ransomware and benign software samples, enabling controlled comparison between probabilistic and pattern-based behavioral modeling. Experimental results demonstrate that the proposed approach achieves high detection performance while maintaining a low false-positive rate and providing interpretable behavioral evidence. Overall, the framework offers a principled balance between detection effectiveness and interpretability, addressing key limitations of existing ransomware detection systems.
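The Markov-chain component of such a pipeline can be illustrated with a toy first-order model over event sequences, in which unlikely transitions raise an anomaly score. The event names, smoothing scheme, and floor probability are assumptions for illustration, not the paper's trained detector.

```python
# First-order Markov model over benign event sequences; sequences whose
# transitions were rare or unseen in training score as more anomalous.
from collections import Counter, defaultdict
import math

def train_markov(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    states = {s for seq in sequences for s in seq}
    # Add-one smoothing over observed states avoids zero probabilities.
    probs = {a: {b: (c[b] + 1) / (sum(c.values()) + len(states))
                 for b in states} for a, c in counts.items()}
    return probs, states

def neg_log_likelihood(seq, probs, states):
    floor = 1e-6   # assumed probability for never-seen transitions
    return sum(-math.log(probs.get(a, {}).get(b, floor))
               for a, b in zip(seq, seq[1:]))

benign = [["open", "read", "close"]] * 20
probs, states = train_markov(benign)
normal = neg_log_likelihood(["open", "read", "close"], probs, states)
weird = neg_log_likelihood(["open", "close", "read"], probs, states)
print(normal < weird)   # unseen transition order scores as more anomalous
```

In the framework described above, such a probabilistic deviation signal would be one input, alongside the Negative Selection filter, to the final Random Forest decision.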

Author 1: Mohammed A. F. Salah
Author 2: Mohd Fadzli Marhusin
Author 3: Rossilawati Sulaiman

Keywords: Ransomware; Artificial Immune Systems (AIS); anomaly detection; Negative Selection Algorithm; Markov chain; Random Forest; hybrid framework

PDF

Paper 42: A Resilient Framework for Industry 5.0 WSNs: Enhancing Network Lifetime via a Lightweight Reputation Ledger and Hybrid AI

Abstract: Wireless Sensor Networks (WSNs) play an increasingly important role in Industry 5.0 cyber–physical systems, where resilience, trust, and energy efficiency are essential under dynamic operating conditions. However, their limited resources, scattered deployment, and continuous operation make these networks highly susceptible to unusual behavior and cyberattacks. Such issues can compromise data quality, disrupt network reliability, and shorten the overall lifespan of the system. To address these challenges, this study examines WSN resilience as a combined problem of anomaly detection accuracy, fault isolation latency, and network lifetime under realistic fault and energy constraints. At the core of the framework is a Model Context Protocol (MCP), which combines a supervised LightGBM classifier with an unsupervised LSTM autoencoder to capture both event-driven and temporal anomalies in sensor data. Complementing this is a compact “Micro-Ledger” system that updates trust values for each node by monitoring behavior and using streamlined consensus rules. Together, they create a continuous feedback mechanism that isolates suspicious nodes while keeping energy consumption in check. The framework is evaluated using a set of resilience-oriented metrics, including fault detection latency, Mean Time To Failure (MTTF), reputation convergence behavior, and overall network lifetime. Experiments conducted in a Digital Twin simulation environment report an F1-score of 0.997, an 18.7% improvement in network lifetime, and a Micro-Ledger storage overhead of approximately 98 KB. While the current validation is simulation-based, the proposed design can be extended to physical deployments through adaptive trust weighting, cluster-head redundancy, and probation-based node reintegration.

Author 1: Padma Sree N
Author 2: Malini M Patil

Keywords: Wireless Sensor Networks (WSNs); Industry 5.0; anomaly detection; lightweight blockchain; trust management; network lifetime; Digital Twin

PDF

Paper 43: MTML 1.0: A Novel Interlingua Knowledge Representation Model for Machine Translation

Abstract: Machine translation is one of the major areas of both computational linguistics and artificial intelligence that employs computer algorithms to automatically translate text between different natural languages. At present, the advent of Large Language Models (LLMs) has revolutionized this field, marking a significant turning point in its evolution. Despite their impressive capabilities, LLMs still fall short of achieving human-like translation due to key limitations, namely lack of transparency, explainability, and interpretability, the production of non-deterministic outputs, and insufficient support for low-resource languages. To address these challenges, incorporating human-aided translation mechanisms that reflect how the human brain performs translation is effective. Therefore, from a computer science perspective, this motivates the development of a novel hybrid machine translation approach that integrates a rule-based approach with LLM-based methods. This study presents a novel rule-based interlingual knowledge representation model named MTML 1.0 that has been designed and implemented to accurately analyze source language input and systematically structure the resulting linguistic information to facilitate applications, including target language generation and question-answering systems. The MTML 1.0 system consists of four key modules, namely the preprocessing module, morphological analyzer module, syntax analyzer module, and semantic analyzer module. Furthermore, the system has been fully implemented as a web-based application using the Python programming language, with spaCy serving as the foundation for natural language processing tasks. Finally, the functionality of the system has been demonstrated through the development of a prototype question-answering system.

Author 1: M. A. S. T Goonatilleke
Author 2: B Hettige
Author 3: A. M. R. R Bandara

Keywords: Machine translation; knowledge representation; LLMs; rule-based approach; hybrid approach

PDF

Paper 44: Safety Helmet Wear Detection Algorithm Based on ASG-YOLOv8s

Abstract: In the field of industrial safety, the standardised wearing of safety helmets by workers constitutes a core protective measure against head injuries. However, in industrial settings, multi-scale background interference arising from variations in monitoring distance renders traditional detection models ineffective at capturing the contour features of small-sized helmets. This study, therefore, proposes the ASG-YOLOv8s safety helmet detection network, based on YOLOv8s, to address the challenge of complex scene background interference. First, the AKC-SCAM unit is introduced within the YOLOv8 backbone network to replace certain standard convolutions. This module dynamically adjusts the sampling shape of convolutional kernels, enhancing the extraction of multi-scale defect features. Secondly, a cross-scale interaction architecture (Slim-neck) is constructed in the Neck section, employing GSConv instead of conventional convolutions. This combines with a cross-level feature pyramid to achieve cross-scale interaction between deep semantic features and shallow details. Finally, GAM attention is embedded before the multi-scale output for head detection, establishing a dual-stream attention mechanism that synergistically optimises feature response intensity for low-quality candidate boxes, while suppressing background noise interference. Experimental results demonstrate that the enhanced ASG-YOLOv8s achieves improvements of 2.54%, 2.94%, and 3.16% over the original model in Precision (P), Recall (R), and mean average precision (mAP), respectively, on the SHWD dataset.

Author 1: Li-Zhen He
Author 2: Zhi-Sheng Wang
Author 3: Yi-Wei Duan
Author 4: Jin-Hai Sa

Keywords: YOLOv8; safety helmet wearing detection; slim-neck; attention mechanism

PDF

Paper 45: A Comparative Review of AI, IoT, and Big Data in Healthcare: Towards a Data-Centric Approach for Enhanced Data Quality and Contextual Adaptability

Abstract: The convergence of Artificial Intelligence (AI), the Internet of Things (IoT), and Big Data is revolutionizing healthcare by enabling predictive diagnostics, real-time monitoring, and personalized treatment through data-driven analytics and intelligent decision-making. Despite these advancements, the effectiveness of such systems is significantly hindered by poor data quality, including issues such as missing values, noise, bias, and inconsistencies. This study presents a systematic and comparative review of recent research at the intersection of AI, IoT, and Big Data in healthcare, highlighting critical gaps in data quality that undermine model performance and real-world reliability. In response, we introduce the Data-Centric AI (DCAI) paradigm as a promising approach focused on systematic data improvement rather than model complexity. We examine the application of the METRIC framework for assessing data quality dimensions such as completeness, consistency, fairness, and timeliness. Furthermore, we propose future research directions to improve scalability and trustworthiness in AI-driven healthcare, integrating advanced AI techniques such as generative AI and multimodal frameworks with DCAI principles for more ethical AI applications. This work serves as both a comparative synthesis of existing literature and a conceptual foundation for future experimental validation through a case study integrating context-aware data modeling and real-time decision support.

Author 1: Imane RAFIQ
Author 2: Zahi JARIR
Author 3: Hiba ASRI

Keywords: Data-Centric AI; IoT; Big Data Analytics; healthcare informatics; data quality; bias mitigation; privacy; predictive analytics; machine learning; disease prediction

PDF

Paper 46: A Bibliometric Analysis of Blockchain Applications in E-Commerce: Trends and Research Directions

Abstract: Blockchain technology has emerged as a transformative force within the e-commerce industry, offering significant potential to address longstanding issues such as data security, transaction transparency, and customer trust. Despite its growing relevance, the academic exploration of blockchain applications in e-commerce remains fragmented and lacks a cohesive research agenda. This study conducts a comprehensive bibliometric analysis to map the intellectual landscape of blockchain applications in e-commerce, identifying influential publications, key authors, prominent journals, and major thematic trends. Using data extracted from the Scopus database between 2014 and 2024, the study employs bibliometric tools such as VOSviewer and Biblioshiny for performance analysis and science mapping. The analysis reveals a steady increase in research interest, with dominant themes including trust, smart contracts, supply chain management, and secure payments. Furthermore, the findings indicate that most research is concentrated in technologically advanced countries, and collaborations among scholars remain limited. By interpreting these patterns, the study uncovers critical gaps in the literature and proposes future research directions focusing on consumer behavior, regulatory frameworks, cross-border challenges, and integration with emerging technologies like AI and IoT. The results contribute to a clearer understanding of the evolution of blockchain research in e-commerce and provide a foundation for academics and practitioners to develop more secure, efficient, and user-centric digital commerce systems.

Author 1: Nguyen Thi Phuong Giang
Author 2: Le Ngoc Son
Author 3: Thai Dong Tan

Keywords: Blockchain; e-commerce; bibliometric analysis; smart contracts; digital transformation

PDF

Paper 47: Related Multi-Task Allocation Scheme Based on Greedy Algorithm in Mobile Crowdsensing

Abstract: With the spread of mobile intelligent devices, mobile crowdsensing (MCS) networks, built on wireless sensor networks and crowdsourcing technology, have emerged. Research on MCS continues to grow, and it has been applied in many scenarios. As the data volume on MCS platforms increases, the number of tasks grows exponentially. Among these are tasks that belong to the same category, that is, correlated tasks. If correlated tasks can be allocated to the same person for execution, the overhead is greatly reduced and the success probability of task allocation improves. In this study, the spatio-temporal distribution of tasks and users is first predicted using fuzzy logic to divide spatio-temporal scenarios, and a suitable multi-task allocation algorithm is selected for each. Then, taking task correlation into account, a greedy algorithm is used to allocate multiple tasks according to the scenario. Experimental results show that, compared with the benchmark scheme, the proposed correlation-aware multi-task allocation scheme based on the greedy algorithm improves the task allocation completion rate by 25.2% and significantly improves the task allocation success rate in MCS.
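The core allocation idea, keeping correlated (same-category) tasks with a single worker via a greedy choice, can be sketched as follows. The cost model, capacity notion, and names are illustrative assumptions, not the paper's exact formulation.

```python
# Greedy correlation-aware allocation: bundle tasks by category, then
# greedily place each bundle with the worker having the most spare capacity.
from collections import defaultdict

def greedy_allocate(tasks, workers):
    """tasks: list of (task_id, category); workers: {worker_id: capacity}.
    Returns {worker_id: [task_id, ...]}."""
    groups = defaultdict(list)
    for tid, cat in tasks:
        groups[cat].append(tid)
    load = {w: 0 for w in workers}
    plan = defaultdict(list)
    # Largest bundles first, so correlated groups are placed while
    # capacity is still available.
    for cat, tids in sorted(groups.items(), key=lambda kv: -len(kv[1])):
        w = max(workers, key=lambda u: workers[u] - load[u])  # greedy pick
        if workers[w] - load[w] >= len(tids):
            plan[w].extend(tids)
            load[w] += len(tids)
    return dict(plan)

tasks = [(1, "air"), (2, "air"), (3, "noise"), (4, "air"), (5, "noise")]
print(greedy_allocate(tasks, {"u1": 4, "u2": 2}))
# -> {'u1': [1, 2, 4], 'u2': [3, 5]}
```

All three correlated "air" tasks land with one worker, which is the overhead saving the abstract describes.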

Author 1: Xia Zhuoyue
Author 2: Raja Kumar Murugesan

Keywords: Mobile crowdsensing; task allocation; fuzzy logic; greedy algorithm

PDF

Paper 48: A Lightweight Rule-Based Detection Approach for ARP Flooding Malware in Office Networks

Abstract: Address Resolution Protocol (ARP) is a standard protocol used to map an IP address to its MAC address so that packets can be delivered to their destination. Office networks, which typically have limited network resources, are vulnerable to ARP flooding attacks launched by malware, which can disrupt and jam the network. This study presents a rule-based detection method, Time Density ARP Thresholding with Binding Consistency Monitoring (TDCM), that identifies ARP flooding with a simple mechanism, making it suitable for networks with limited hardware. To detect flooding anomalies, the TDCM algorithm monitors the flow of ARP packets and the consistency of MAC–IP bindings in ARP packets. A series of experiments was conducted and repeated multiple times; on average, the results show that the system performs well under high-volume ARP attack conditions. The proposed method offers an alternative to machine learning techniques and is well suited to deployment in resource-constrained office networks. Future work will focus on improving detection in low-volume attack scenarios, validating performance in real-world environments, and implementing the method on devices with limited computing resources.
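The two rules the abstract names, time-density thresholding on ARP packet rate and MAC–IP binding consistency, can be sketched together in a few lines. The window size, rate limit, and packet representation are assumptions for illustration, not the paper's exact TDCM parameters.

```python
# Lightweight rule-based ARP checks: a sliding-window packet-rate rule
# plus a first-seen MAC-IP binding consistency rule.
from collections import deque

class ArpRuleDetector:
    def __init__(self, window_s=1.0, rate_limit=10):
        self.window_s, self.rate_limit = window_s, rate_limit
        self.times = deque()
        self.bindings = {}          # ip -> first-seen MAC

    def observe(self, t, src_ip, src_mac):
        """Return a list of alerts raised by this ARP packet."""
        alerts = []
        # Rule 1: time-density thresholding on the ARP packet rate.
        self.times.append(t)
        while self.times and t - self.times[0] > self.window_s:
            self.times.popleft()
        if len(self.times) > self.rate_limit:
            alerts.append("flood")
        # Rule 2: MAC-IP binding consistency against the first-seen MAC.
        seen = self.bindings.setdefault(src_ip, src_mac)
        if seen != src_mac:
            alerts.append("binding-mismatch")
        return alerts

det = ArpRuleDetector(window_s=1.0, rate_limit=5)
for i in range(7):
    out = det.observe(0.1 * i, "10.0.0.2", "aa:bb")
print(out)   # the per-second rate has exceeded the limit: ['flood']
```

Both rules need only a bounded deque and a small dictionary, which is what makes this style of detection viable on constrained hardware.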

Author 1: Rizal Fathoni Aji
Author 2: Heri Kurniawan
Author 3: Nilamsari Putri Utami

Keywords: ARP flooding; cybersecurity detection; rule-based detection; lightweight intrusion detection

PDF

Paper 49: EfficientNet-Based Melanoma Classification with CBAM Attention and Monte Carlo Dropout for Robust Uncertainty Estimation

Abstract: Recent developments in deep learning have demonstrated tremendous potential for enhancing medical image classification tasks, particularly the detection of skin malignancies such as melanoma. However, guaranteeing high accuracy, reliability, and interpretability in real clinical settings remains a major challenge. This study addresses these issues by proposing a novel approach to melanoma detection that combines the Convolutional Block Attention Module (CBAM), binary focal loss, and Monte Carlo Dropout (MC Dropout) for uncertainty estimation. The CBAM attention module helps the network focus on important image features, and focal loss addresses class imbalance and encourages learning from hard samples. MC Dropout provides uncertainty estimates at test time, yielding more reliable and interpretable predictions. The approach uses a pre-trained deep CNN, EfficientNetB4, as the backbone and is trained on a large melanoma dataset split into training, validation, and test sets. Model evaluation using accuracy, precision, recall, F1-score, and AUC yields an accuracy of 0.95 and an AUC of 0.98. Furthermore, the uncertainty estimates support clearer decision-making, and the added interpretability is crucial for clinical use. These results highlight the value of combining attention mechanisms, task-specific loss terms, and uncertainty quantification to build accurate and interpretable AI for medical domains. The prototype has the potential to improve early-stage melanoma detection and offers useful guidance for future AI-based healthcare services.
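The MC Dropout idea, keeping dropout active at inference and treating the spread of repeated stochastic predictions as uncertainty, can be shown with a framework-free toy model. This numpy sketch uses a random linear "model", not the paper's EfficientNetB4 backbone; the dropout rate and sample count are assumptions.

```python
# Toy MC Dropout: T stochastic forward passes with dropout left ON at
# test time; the sample mean is the prediction, the std is the uncertainty.
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(16, 1))          # toy "trained" weights
x = rng.normal(size=(1, 16))          # one test sample

def mc_dropout_predict(x, W, p=0.5, T=200):
    preds = []
    for _ in range(T):
        mask = rng.random(x.shape) > p        # dropout ACTIVE at test time
        h = (x * mask) / (1 - p)              # inverted-dropout scaling
        preds.append(1 / (1 + np.exp(-(h @ W))))   # sigmoid score
    preds = np.array(preds)
    return preds.mean(), preds.std()

mean, std = mc_dropout_predict(x, W)
print(f"score ~ {mean:.2f} +/- {std:.2f}")
```

In a clinical setting, a high std would flag the case for human review rather than automated triage.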

Author 1: Soujenya Voggu
Author 2: Shadab Siddiqui
Author 3: Shahin Fatima

Keywords: Deep learning; CNN; accuracy; CBAM; EfficientNetB4

PDF

Paper 50: Multi-Objective Design Optimization of Ventilation Duct Systems: A Graph-Informed Hybrid Evolutionary Approach

Abstract: Optimizing silencer placement in Heating, Ventilation, and Air Conditioning (HVAC) systems is a complex multi-objective problem due to conflicting objectives (noise, energy, cost) and intricate topological constraints. Conventional Multi-Objective Evolutionary Algorithms (MOEAs) often exhibit inefficient convergence on such problems due to their reliance on random search strategies. Addressing this challenging HVAC design problem requires a more informed approach. This paper proposes the G-HNSGA-III (Graph-Informed Hybrid NSGA-III), a novel framework that enhances the NSGA-III algorithm by embedding domain-specific knowledge from the system's Directed Acyclic Graph (DAG) topology. This is achieved through two core components that leverage heuristic search: a Graph-Informed Initialization (GINI) strategy to provide a high-quality starting population and a Graph-Informed Local Search (GILS) module for post-processing refinement. The performance of G-HNSGA-III was comprehensively benchmarked against the baseline NSGA-III and six other established MOEAs on a complex data center test instance. The results demonstrate a marked superiority, with G-HNSGA-III achieving a 38.4% higher mean Hypervolume (HV) than the baseline NSGA-III and a 99.3% Set Coverage (SC) dominance over MOEA/D. The framework consistently converged to the best-known Pareto front, achieving a final mean Inverted Generational Distance (IGD) of 0.0030. These findings validate that the proposed graph-informed strategies effectively accelerate convergence and enable the discovery of a higher-quality Pareto front, providing superior and practically applicable solutions for complex engineering design problems.

Author 1: Xiangming Liu
Author 2: Bin Liu
Author 3: Kunze Du
Author 4: Da Gao
Author 5: Nan Li

Keywords: Multi-objective optimization; NSGA-III; graph-informed optimization; HVAC design; heuristic search; domain knowledge

PDF
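The Hypervolume (HV) metric used above to benchmark G-HNSGA-III has a simple closed form in two objectives. The sketch below computes the 2D hypervolume of a minimization front against a reference point by sweeping the sorted front and summing rectangular slices; it illustrates the metric itself, not the paper's evaluation code.

```python
def hypervolume_2d(front, ref):
    """Hypervolume dominated by a bi-objective minimization front
    relative to a reference point (both objectives minimized)."""
    # Keep only points that strictly dominate the reference point.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:            # sweep in order of increasing f1
        if y < prev_y:          # non-dominated point: add its new slice
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv
```

For example, the front [(1, 3), (2, 2), (3, 1)] with reference point (4, 4) dominates an area of 6.0.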

Paper 51: AI-Powered Architecture Refactoring: From Legacy Systems to Modern Patterns

Abstract: This study explores the integration of artificial intelligence (AI), especially large language models (LLMs), into software engineering, particularly the architecture refactoring process, focusing on automated command-query classification for legacy systems transitioning to the Command Query Responsibility Segregation (CQRS) pattern. We present Airchitect, a modular .NET-based system that orchestrates legacy code analysis, LLM-driven classification, CQRS artifact generation, and automated test creation. Based on the CodeLlama model, Airchitect achieved a 16x–40x reduction in classification time compared to expert manual methods while maintaining over 85% classification accuracy. A test case involving N-tier legacy classes demonstrated the model’s ability to decompose and modularize the methods into CQRS-aligned components. Despite these gains, the study highlights key limitations: the need for human validation in complex or ambiguous cases, dependence on high-quality labeled datasets, and the variability of legacy patterns that challenges rule-based automation. The results suggest that LLMs, when embedded in structured tools like Airchitect, can significantly accelerate modernization workflows—provided they are used in tandem with expert oversight.

Author 1: Mohamed El BOUKHARI
Author 2: Nassim KHARMOUM
Author 3: Soumia ZITI

Keywords: Artificial intelligence; LLM; AI-driven refactoring; code-level refactoring; legacy systems; command and query responsibility segregation; CQRS; software architecture refactoring; software engineering; CodeSearchNet

PDF

Paper 52: Optimizing Dermatological Image Classification Using Efficient Convolutional Neural Network Architecture

Abstract: Skin diseases represent a global healthcare challenge because of their frequent occurrence and complex diagnosis. However, despite clinical advances, accurately identifying dermatological lesions remains difficult due to significant intra-class variability, overlapping visual patterns, and reliance on clinician expertise. This study presents a comprehensive overview of state-of-the-art CNN architectures as applied to multiclass classification of skin diseases. It first surveys common skin diseases and discusses the fundamentals of deep learning for medical image analysis. The study then introduces the dataset used in this work and briefly describes the two diagnostic groups identified for evaluation. A range of CNN models comprising GoogLeNet, Inception-V3, Inception-V4, ResNet-50, Xception, MobileNet, ResNeXt-50, AlexNet, VGG-16, and VGG-19 were trained and tested in terms of accuracy, loss, FLOPs, and epoch runtime. The experimental findings suggest that Xception consistently performs at the highest level, with an accuracy above 98% and low validation loss, whereas lightweight models such as MobileNet-V3 deliver competitive performance at minimal computational cost. These findings demonstrate the potential of modern CNN architectures to enable efficient and accurate dermatological diagnosis and offer guidance for selecting appropriate architectures for clinical and real-time deployment.

Author 1: Khalil Ladrham
Author 2: Hicham Gueddah

Keywords: Convolutional neural networks; skin diseases; medical image; classification; Xception; clinical

PDF

Paper 53: Elaboration Context Graph: A System to Support Understanding the Contexts in Elaboration Processes of Research Documents

Abstract: Elaborating research documents is carried out by repeatedly creating and editing documents while simultaneously performing tasks such as surveys, presentation of results, and discussion of research directions. Although indispensable for advancing research, such work is often challenging because it requires handling diverse documents. Effective execution therefore demands an accurate understanding of the elaboration contexts of research documents, including related artifacts, referenced documents, and the circumstances and history of past tasks, so that these can be applied in subsequent work. However, these contexts grow increasingly large and complex as research progresses, making them difficult to grasp and reducing task efficiency. This paper describes a method for generating an elaboration context graph by organizing documents involved in the elaboration process using work history data recorded on a PC. The graph visually represents the documents, screenshots capturing work scenes, and the relationships among them, thereby supporting the understanding of elaboration contexts. We further describe a system developed on the basis of this method. Finally, we report an experiment conducted with the prototype and discuss the system’s effectiveness.

Author 1: Sho Onami
Author 2: Ryo Onuma
Author 3: Hiroki Nakayama
Author 4: Hiroaki Kaminaga
Author 5: Youzou Miyadera
Author 6: Shoichi Nakamura

Keywords: Elaboration contexts graph; elaboration work of research documents; elaboration contexts; understanding work circumstances and histories; screenshots

PDF

Paper 54: Machine Learning-Based Dissolved Oxygen Classification Using Low-Cost IoT Sensors for Smart Aquaponic

Abstract: Dissolved oxygen (DO) plays a vital role in maintaining balanced aquaponic ecosystems, yet conventional optical and galvanic DO sensors remain costly and impractical for low-budget deployments. However, most existing dissolved oxygen monitoring studies rely on costly sensing infrastructures, regression-oriented prediction approaches, or centralized processing schemes, which limit their applicability in small-scale and resource-constrained aquaculture settings. Furthermore, many previous works focus primarily on numerical prediction accuracy without explicitly addressing data imbalance issues or providing actionable classification outputs that can directly support real-time operational decisions at the pond level. This study proposes a machine learning–based approach for estimating DO levels using low-cost pH, temperature, and nitrogen sensors integrated with an IoT data acquisition system. A dataset comprising approximately 1,048,536 records was processed using feature engineering and class balancing techniques, followed by training an XGBoost classifier optimized through grid search. The model classified DO into three categories—Low (<5 mg/L), Medium (5–7 mg/L), and Good (>7 mg/L)—achieving 96.6% accuracy, outperforming baseline regression models including Linear Regression, Random Forest, and XGBoost Regressor. Feature importance analysis revealed temperature and the pH–temperature interaction as dominant predictors. The model was successfully deployed on a Raspberry Pi for real-time monitoring, offering a scalable and cost-effective alternative to high-end probes. The proposed framework demonstrates practical potential for smart aquaponic systems, enabling affordable, automated, and data-driven oxygen management.

Author 1: Supria
Author 2: Afis Julianto
Author 3: Wahyat
Author 4: Marzuarman
Author 5: M Nur Faizi
Author 6: Hardiyanto

Keywords: Aquaponic; dissolved oxygen; IoT; machine learning; XGBoost; low-cost sensors

PDF
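The three DO categories reported in the aquaponics abstract map directly onto threshold rules. A small sketch of that labeling step follows; the handling of readings at exactly 5 and 7 mg/L is our assumption, since the abstract gives only open intervals.

```python
def classify_do(do_mg_per_l: float) -> str:
    """Map a dissolved-oxygen reading (mg/L) to the paper's three classes:
    Low (<5), Medium (5-7), Good (>7). Boundary values are assigned to the
    lower class here, which is an assumption, not the paper's stated rule."""
    if do_mg_per_l < 5.0:
        return "Low"
    if do_mg_per_l <= 7.0:
        return "Medium"
    return "Good"
```

Such discrete labels are what the XGBoost classifier in the study predicts from the pH, temperature, and nitrogen features.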

Paper 55: A Game-Based Learning Model for Basic Life Support Using First-Person Interactive Simulation

Abstract: Previous BLS and first-aid learning studies largely rely on traditional face-to-face training or low-fidelity digital approaches, which are often costly, time-consuming, and inaccessible to many learners, especially laypersons. Many serious games focus primarily on awareness and conceptual knowledge, rather than procedural mastery and real-time decision-making. In addition, most existing games lack high-fidelity first-person immersion, provide limited real-time feedback, and are not aligned with localized national medical protocols, reducing their realism and contextual relevance. To address these gaps, this study proposes the development of a 3D game-based learning model using first-person interactive simulation, designed to educate users on Basic Life Support (BLS) procedures in cardiac arrest scenarios. Unreal Engine 5.4 was utilized to create an immersive and realistic environment where players perform critical emergency steps, receive real-time visual prompts and audio feedback, and make decisions under pressure, rather than passively consuming content. Importantly, it strictly follows the Ministry of Health Malaysia’s BLS guidelines, ensuring procedural accuracy and local relevance. This approach bridges the gap between theoretical knowledge and practical application, while providing a scalable, accessible, and engaging alternative to conventional BLS training. Through this educational serious game, players are empowered to gain confidence and practical understanding of life-saving procedures, ultimately contributing to greater public preparedness in real-world emergencies.

Author 1: Nur Raidah Rahim
Author 2: Siti Aisyah Mohd Nasron
Author 3: Sazilah Salam
Author 4: Che Ku Nuraini Che Ku Mohd
Author 5: Wan Mohd Ya’akob Wan Bejuri
Author 6: Richki Hardi
Author 7: Nur Sri Syazana Rahim

Keywords: Basic life support; emergency; simulation; game-based learning; serious games

PDF

Paper 56: Enhancing Arabic Biomedical Named Entity Recognition Using Transformer-Based Representations and CRF Sequence Labeling

Abstract: Electronic health records have witnessed tremendous growth in recent years. To make these documents useful for decision-making, high-performance natural language processing (NLP) systems are essential. Named entity recognition (NER) is a critical task for many biomedical NLP applications that contribute to improving patient care, drug discovery, and disease surveillance. However, despite its status as an official language in more than 22 countries, Arabic is largely neglected in this field. Only limited work has been done, and few well-annotated public datasets exist. This work tackles these issues by proposing an NER model capable of recognizing entities such as diseases, symptoms, and organs from biomedical Arabic text. To achieve this, an annotated dataset was first developed, followed by fine-tuning the CAMeLBERT model, a BERT-based model, in conjunction with a conditional random field (CRF) layer. The evaluation results indicate that the CAMeLBERT+CRF model achieves the best overall F1-score of 90%, surpassing other base models such as CAMeLBERT and AraBERT. This study demonstrates the effectiveness of the hybrid approach and underscores the importance of transfer learning techniques for low-resourced and morphologically rich languages like Arabic.

Author 1: Nassima Gannoune
Author 2: Abdellah Madani
Author 3: Mohamed Kissi

Keywords: Named entity recognition; electronic health records; natural language processing; CAMeLBERT; CRF

PDF
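A CRF layer on top of a transformer such as CAMeLBERT selects the globally best tag sequence from per-token emission scores and tag-transition scores, typically via Viterbi decoding. A minimal NumPy sketch of that decoding step is below; the scores are illustrative inputs, not outputs of the paper's model.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Return the highest-scoring tag sequence for one sentence.
    emissions:   (seq_len, n_tags) per-token tag scores (e.g. from BERT)
    transitions: (n_tags, n_tags)  score of moving from tag i to tag j
    """
    seq_len, n_tags = emissions.shape
    score = emissions[0].copy()                   # best score ending in each tag
    backptr = np.zeros((seq_len, n_tags), dtype=int)
    for t in range(1, seq_len):
        # total[i, j] = best path ending in tag i, then i -> j, then emit token t
        total = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Follow back-pointers from the best final tag.
    best = [int(score.argmax())]
    for t in range(seq_len - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]
```

Training a CRF additionally requires the forward algorithm for the partition function; only inference is sketched here.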

Paper 57: Adaptive Denoising of Partial Discharge Using Absolute Difference Optimization Versus Artificial Neural Networks

Abstract: Accurate partial discharge (PD) localization in medium-voltage (MV) power cables is essential for condition-based maintenance, yet it remains unreliable when PD pulses are masked by broadband noise and narrowband interference. The novelty of this work is a controlled denoiser-to-localization benchmarking framework that isolates the denoising front end, while keeping the downstream PD detection and localization backend fixed, allowing localization differences to be attributed solely to denoising decisions. Within this fixed-backend paradigm, an optimization-driven Adaptive Denoising Optimization (ADO) method is introduced as an adaptive discrete wavelet transform (DWT) front end that systematically selects the mother wavelet, decomposition level, and threshold parameters to preserve time-of-arrival (ToA) critical wavefront features rather than only maximizing noise suppression. ADO is evaluated against two learning-based denoisers, a multilayer artificial neural network (ANN) and a lightweight feedforward neural network (FNN), using MATLAB simulations of synthetic PD pulses corrupted by white Gaussian noise (WGN) and discrete spectral interference (DSI) over SNRs from 9.78 dB to -10.34 dB. Performance is quantified using execution time, percentage localization error (PE), median absolute localization error (MedAE), and F1 score. Results show that ADO delivers the most robust localization fidelity, maintaining near-zero PE above -6 dB, keeping PE below 0.3% at -10.34 dB, achieving sub-metre MedAE, and sustaining F1 close to 1.0 across noise levels. In contrast, FNN is the fastest option, reducing runtime by approximately 15% versus ANN and 27% versus ADO, highlighting a practical robustness-efficiency trade-off for real-time MV cable monitoring.

Author 1: Kui-Fern Chin
Author 2: Chang-Yii Chai
Author 3: Ismail Saad
Author 4: Yee-Ann Lee

Keywords: Partial discharge localization; adaptive denoising optimization; discrete wavelet transform; artificial neural network

PDF
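The DWT front end that ADO tunes follows the classic decompose / threshold / reconstruct pattern. A one-level Haar version with soft thresholding gives the flavor; this is a simplified stand-in, whereas ADO additionally searches over the mother wavelet, decomposition level, and threshold parameters.

```python
import numpy as np

def haar_denoise(signal, thresh):
    """One-level Haar wavelet denoising with soft thresholding.
    Decompose into approximation/detail coefficients, shrink the details
    (which carry most of the broadband noise), then reconstruct.
    Expects an even-length signal."""
    x = np.asarray(signal, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2.0)       # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out
```

Because thresholding can distort pulse wavefronts, a method like ADO must trade noise suppression against preserving the time-of-arrival features used for localization.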

Paper 58: Improving YOLO11 Architecture for Reckless Driving Detection on the Road

Abstract: Reckless driving behavior on the road can increase the risk of traffic accidents for drivers and other road users. Currently, supervision remains weak, particularly in direct supervision, due to the limited number of officers. This study developed an automated system to detect reckless drivers based on their road trajectories. This system comprised three subsystems: car detection, car tracking, and driving trajectory detection. In the driving trajectory detection subsystem, we proposed an improved YOLO11n-cls method developed from YOLO11n-cls by adding convolution and C3k2 blocks. The test results showed that the proposed model achieved an accuracy increase of 4.4% over YOLO11n-cls. The proposed model achieved an accuracy of 0.935 and an inference time of 0.5 ms for car trajectory classification. In addition, the proposed model achieved higher accuracy than all YOLO11 models (YOLO11n-cls, YOLO11s-cls, YOLO11m-cls, YOLO11l-cls, and YOLO11x-cls) and all YOLO12 models (YOLO12n-cls, YOLO12s-cls, YOLO12m-cls, YOLO12l-cls, and YOLO12x-cls). Therefore, the proposed model is better suited to support traffic law enforcement, especially the real-time detection of reckless drivers on highways.

Author 1: Sutikno
Author 2: Aris Sugiharto
Author 3: Retno Kusumaningrum

Keywords: Reckless driving detection; improved YOLO11n-cls; added convolution blocks; added C3k2 blocks

PDF

Paper 59: Formal Verification Unified Modeling Language Statechart Using Enhancement Common Modeling Language

Abstract: Modern systems are rapidly evolving and increasing in complexity to satisfy growing requirements. Such systems often incorporate multiple hierarchical statecharts within their behavior modeling diagram, which significantly complicates the verification process. To address this challenge, the Common Modeling Language (CML) was introduced as an intermediate modeling language for formal verification, serving as a bridge between Unified Modeling Language (UML) Statechart and the model checkers. However, CML supports modeling only a single hierarchical statechart, which limits its applicability to complex systems. This study introduces the Enhancement Common Modeling Language (E-CML), an extension of the CML, to support the verification of systems that incorporate multiple hierarchical statecharts. We introduce the group component in E-CML, comprising an initial state, a set of states, transitions, triggers, and a region, to formally differentiate the group components from superstates. We also propose new translation rules to map E-CML into Symbolic Model Verifier (SMV) syntax. E-CML operates through two main processes: transformation and translation. The transformation process transforms an XML Metadata Interchange (XMI) file into E-CML, while the translation process translates E-CML to an Input Symbolic Model Verifier (I-SMV) file. The system is verified using the SMV model checker, with formal properties specified in Computational Tree Logic (CTL) and represented in the I-SMV file. The results demonstrate that the behavior modeling diagram satisfies all formal properties, indicating that E-CML provides an effective framework for the verification of complex systems comprising multiple hierarchical statecharts.

Author 1: Muhammad Amsyar Azwarrudin
Author 2: Pathiah Abdul Samat
Author 3: Norhayati Mohd Ali
Author 4: Novia Indriaty Admodisastro

Keywords: CML; E-CML; formal verification; model checkers; UML Statechart

PDF

Paper 60: Detecting Low-Quality Deepfake Videos Using 3D Residual Vision Transformer

Abstract: The rapid evolution of deep generative models has facilitated the creation of "Deepfakes", enabling the synthesis of hyper-realistic facial manipulations that threaten the trustworthiness of digital media. While forensic countermeasures have been developed to identify these forgeries, deepfake detection in real-world scenarios is severely hampered by video compression artifacts, which often obscure the subtle pixel-level traces exploited by conventional Convolutional Neural Networks (CNNs). This study introduces a robust detection framework designed specifically to withstand the aggressive compression inherent to social media dissemination. We present a hybrid 3D architecture that integrates the local spatiotemporal feature extraction capabilities of a 3D-ResNet-50 backbone with the global context modeling of a temporal Video Vision Transformer. Unlike frame-based or joint spatiotemporal attention approaches, the proposed model performs fully video-level reasoning and utilizes a factorized self-attention mechanism to decouple spatial and temporal modeling, thereby preserving stable temporal cues under compression while minimizing computational costs. Experimental results on the compressed protocols of the FaceForensics++ dataset as well as Celeb-DF-v2 and DFDC datasets, including cross-dataset generalization evaluation, validate the efficacy of this design, demonstrating that our method achieves superior detection accuracy and generalization compared to existing baselines, particularly on low-quality inputs.

Author 1: Amna Saga
Author 2: Lili N. A
Author 3: Fatimah Khalid
Author 4: Nor Fazlida Mohd Sani
Author 5: Hussna E. M. Abdalla
Author 6: Zulfahmi Syahputra
Author 7: Rian Farta Wijaya

Keywords: Deepfake detection; compressed deepfake videos; low-quality deepfakes; 3D convolutional neural networks; Video Vision Transformer

PDF

Paper 61: Contact-Free Cardiovascular Monitoring Using AI-Driven Radar and Sensor Fusion on a Hybrid Edge-Cloud Platform

Abstract: Access to essential cardiovascular parameters such as heart rate (HR), heart rate variability (HRV), and blood pressure (BP) remains limited in low-income and remote populations, particularly among older adults in developing regions. Continuous, simultaneous, and contact-free monitoring of these parameters beyond close proximity can enhance early detection, screening, and management of cardiovascular and related conditions. This study presents a real-time, contact-free health monitoring system based on millimeter-wave (mmWave) FMCW radar, phase demodulation, and digital signal processing (DSP), integrated with multimodal sensor fusion and artificial intelligence (AI)-driven inference. Sub-millimeter chest wall displacements are captured using radar in-phase and quadrature (I/Q) signals to extract beat-to-beat physiological features, including ECG-correlated waveform components, HR, and HRV, while non-invasive blood pressure is indirectly estimated using a physics-informed adaptive learning framework. A custom Long Short-Term Memory (LSTM) neural network is employed for temporal smoothing and stabilization of HRV signals, improving robustness under real-world conditions. The system is implemented within a hybrid edge–cloud architecture, enabling on-device inference for real-time monitoring and cloud-based analytics for long-term analysis and integration. Clinical-like validation conducted on over 100 adult participants demonstrates measurement accuracy comparable to clinically accepted reference devices, and statistical analysis confirms the robustness and reliability of the proposed system.

Author 1: K Ravindra Shetty
Author 2: Shanthala K V
Author 3: Nishanth A R
Author 4: Himani Jain

Keywords: Wireless sensing; radar signal processing; sensor fusion; contact-free monitoring; heart rate; heart rate variability; blood pressure; deep learning

PDF

Paper 62: Choosing the Arena: A Systematic Review of Simulators for Deep Reinforcement Learning in Mobile Robot Navigation

Abstract: This study presents a formal Systematic Literature Review (SLR) to address a critical methodological question in robotics research: "Which simulator is most suitable for a given Deep Reinforcement Learning (DRL) algorithm and mobile robot navigation task?" The choice of a simulation environment profoundly impacts policy robustness, data efficiency, and sim-to-real transfer, yet the community has lacked an evidence-based guide for this decision. Following PRISMA guidelines, we methodically searched and analyzed 87 peer-reviewed studies published between January 2020 and June 2025 to map the contemporary research landscape. Our synthesis introduces a novel, theory-informed taxonomy that classifies simulators into three archetypes based on their empirical use. Archetype I, ROS-centric standards (e.g., Gazebo), are chosen for algorithmic novelty with low-dimensional sensor inputs. Archetype II, versatile platforms (e.g., CoppeliaSim), are favored for rapid prototyping. Archetype III, GPU-native engines (e.g., NVIDIA Isaac Sim), have emerged for large-scale, perception-heavy challenges, leveraging photorealism and parallelization to mitigate the perception gap and enable zero-shot transfer. This review reveals a paradigm shift towards data-driven methodologies and culminates in a prescriptive decision-making framework, transforming simulator selection from an incidental detail into a strategic choice.

Author 1: Zakaria Haja
Author 2: Leila Kelmoua
Author 3: Ihababdelbasset Annaki
Author 4: Jamal Berrich
Author 5: Toumi Bouchentouf

Keywords: Simulator; mobile robot; Deep Reinforcement Learning; navigation

PDF

Paper 63: Deep Learning Approach for Solar Radiation Forecasting in a Tropical Region Using LSTM Networks

Abstract: Solar radiation forecasting is a key task for energy planning, grid management, and photovoltaic deployment, especially in tropical regions where weather variability reduces operational reliability. This work applies deep learning techniques to forecast hourly solar radiation in Mompox, Colombia, using Long Short-Term Memory (LSTM) neural networks. Three temporal windows were studied (5, 24, and 720 hours) to examine how sequence length affects prediction accuracy and model behavior. Hourly radiation data from 2021 to 2022 were used for training, and independent datasets from 2023 to 2024 were used for external validation to ensure long-term assessment and reproducibility. Most existing studies use short input windows designed for mid-latitude environments (5–24 hours), which do not capture multi-day tropical cloud persistence or sub-seasonal radiation variability. This gap limits forecasting accuracy and restricts practical use in tropical energy planning. To address this issue, this study introduces a long temporal input design that allows the model to learn month-scale variability more effectively. The three network configurations were trained under the same settings, allowing a direct comparison between short, daily, and long input memories. The LSTM-720 model performed best, achieving the lowest RMSE and the most stable predictions across all validation years, showing its ability to reconstruct both diurnal cycles and broader seasonal dynamics. Unlike most solar forecasting work, which treats window size as a tuning parameter, this study introduces a long-context LSTM design based on a 720-hour sequence. This allowed the model to learn intra-month atmospheric persistence—an essential tropical feature that short windows cannot represent—positioning the approach as a methodological contribution that expands the temporal learning paradigm rather than a configuration adjustment. Time-series comparisons revealed close agreement between measured and predicted radiation, particularly during stable climate periods. The proposed framework can support practical applications in solar plant design, renewable energy scheduling, and operational grid strategies in tropical regions. Future work will integrate satellite information and hybrid deep learning architectures to enhance spatial transferability and long-term forecasting accuracy.

Author 1: Manuel Ospina
Author 2: Gabriel Chanchí
Author 3: Álvaro Realpe

Keywords: Deep learning; LSTM networks; renewable energy; solar radiation forecasting; time series prediction

PDF
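The windowed inputs compared in the solar-forecasting study (5, 24, and 720 hours) are built by sliding a fixed-length window over the hourly series. A small sketch of that dataset construction follows; the toy series is invented, and the paper's actual preprocessing is not specified in the abstract.

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Build (input, target) pairs for sequence models such as LSTMs:
    each sample is `window` past hours, and the target is the value
    `horizon` steps after the window ends."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i : i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

hourly = np.arange(10.0)          # stand-in for an hourly radiation series
X, y = make_windows(hourly, window=5)
```

Swapping `window=5` for `window=720` reproduces the long-context configuration at the cost of far fewer training samples per year of data.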

Paper 64: A Hybrid Spherical Fuzzy–Machine Learning Model for Multi-Criteria Decision-Making in Sustainable Water Resource Management

Abstract: The aim of this study is to develop an innovative, multi-dimensional decision-making model that can handle uncertainty and identify the most appropriate alternative irrigation method for the efficient use of water resources in agriculture. In this context, the proposed model is based on the integrated use of spherical fuzzy sets, machine learning, MEREC, and WASPAS methods. The evaluations obtained from ten experts were converted into spherical fuzzy numbers, and the experts' importance weights were objectively calculated using machine learning. Criteria weights were determined using the MEREC method, and alternatives were ranked using the WASPAS method. This hybrid approach both reduces expert subjectivity and objectively reflects the relationships between criteria. According to the findings, feasibility/technological suitability (0.152) emerged as the most important criterion, followed by environmental impacts (0.144). Among the alternatives, drip irrigation (2.226) was identified as the most suitable option for efficient use of water resources. This result demonstrates that modern, technology-based irrigation systems should be a priority in sustainable agricultural policies. This study's contribution to the literature is its ability to bring objectivity, transparency, and the ability to manage high uncertainty to decision-making processes in agricultural water management. The model offers both methodological innovation and a practical decision-support tool at the application level.

Author 1: Edanur Ergün
Author 2: Serkan Eti
Author 3: Serhat Yüksel
Author 4: Hasan Dinçer

Keywords: Irrigation activities; water use; decision-making model; machine learning; MEREC; WASPAS

PDF
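The WASPAS ranking step used above combines a weighted-sum and a weighted-product score on a normalized decision matrix. A minimal sketch under the assumption of benefit-type criteria and linear max normalization (the abstract does not state the normalization used, and the spherical-fuzzy aggregation is omitted):

```python
import numpy as np

def waspas_scores(matrix, weights, lam=0.5):
    """WASPAS joint score: lam * WSM + (1 - lam) * WPM.
    matrix:  (alternatives, criteria) performance values, all assumed
             to be benefit criteria (higher is better) -- an assumption.
    weights: criteria weights summing to 1 (e.g. from MEREC)."""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    norm = m / m.max(axis=0)                 # linear max normalization
    wsm = (norm * w).sum(axis=1)             # weighted sum model
    wpm = np.prod(norm ** w, axis=1)         # weighted product model
    return lam * wsm + (1.0 - lam) * wpm
```

The alternative with the highest joint score (drip irrigation in the study) is ranked first.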

Paper 65: Balancing Privacy and Acceptance: The Role of Anthropomorphism and Information Sensitivity in Autonomous Taxis

Abstract: This study investigates how anthropomorphic interface design and information sensitivity influence users’ acceptance of autonomous vehicles (AVs), and examines the underlying role of privacy concern and its boundary conditions in a commercial autonomous taxi context. Addressing prior research that has predominantly examined anthropomorphism or privacy concerns in isolation, this study employs a 2 × 2 experimental design to test the main and interaction effects of anthropomorphism and information sensitivity on technology acceptance. The results demonstrate that both anthropomorphism and information sensitivity significantly affect users’ acceptance of AV technology, with a significant interaction effect between the two. Specifically, when information sensitivity is high, lower levels of anthropomorphism lead to higher acceptance, whereas under low information sensitivity, anthropomorphic design enhances acceptance. Further analysis reveals that privacy concern mediates the relationship between anthropomorphism, information sensitivity, and technology acceptance. Moreover, cultural value orientation and technical familiarity moderate the effect of privacy concern on technology acceptance, such that the negative impact of privacy concern is attenuated among users with stronger collectivist orientations and higher levels of technical familiarity. By clarifying the sequential roles of design cues, privacy concern, and individual differences, this study reveals a dynamic balance mechanism between emotional engagement and perceived privacy risk in data-intensive mobility services. These findings advance understanding of privacy–acceptance dynamics and provide practical implications for the design and deployment of autonomous taxi interfaces.

Author 1: Jia Fu
Author 2: Kyoung-jae Kim

Keywords: Anthropomorphism; information sensitivity; privacy concern; technology acceptance; individual cultural value; technical familiarity; autonomous taxis

PDF

Paper 66: Intelligent Systems, Machine Learning, and Deep Learning Algorithms for Detecting Banking Fraud: A Review

Abstract: The increase in unauthorized remote banking fraud has intensified with the expansion of digital channels, creating new risks and highlighting the inadequacy of traditional methods based on fixed rules and manual audits. This review aims to synthesize recent scientific evidence on the use of machine learning and deep learning techniques for the early detection of fraudulent banking transactions, considering supervised and unsupervised models and deep architectures that allow the analysis of complex patterns present in financial transactions. A total of 357 original articles were identified in the Scopus and Web of Science databases, in addition to manual research, published up to 2025. Of these, 35 studies met the inclusion criteria established using the PICOT approach and the PRISMA protocol. The most widely implemented models in the selected studies were Random Forest, XGBoost, SVM, LSTM networks, and graph-based approaches. The combination of different algorithms improves fraud detection by integrating temporal, relational, and behavioral patterns. Advanced models show better metrics in accuracy, recall, and F1-score compared to traditional methods, expanding the possibilities for continuous monitoring and reducing false positives. There are consistent associations between the application of advanced models, the availability of quality data, and the ability to adapt to different transactional scenarios, which favor timely fraud detection if challenges such as class imbalance, the need for real-time decisions, and the heterogeneity of financial contexts are addressed. The integration of multiple approaches and the optimization of preprocessing and evaluation processes allow us to move toward more robust, scalable anti-fraud systems that are better suited to the current demands of the digital environment.

Author 1: Jessica Vazallo-Bautista
Author 2: Allison Villalobos-Peña
Author 3: Juan Soria-Quijaite

Keywords: Deep learning; algorithms; machine learning; fraud detection; real-time methods

PDF

Paper 67: User Experience Evaluation in Government Applications: A Systematic Review

Abstract: Evaluating the User Experience (UX) of government applications is becoming increasingly crucial as governments deploy public services online. Nevertheless, research in this area remains fragmented. Correspondingly, this study presents a systematic review of UX evaluation in government applications to address the following Research Questions (RQs): What UX evaluation approaches and UX dimensions have been employed in the UX evaluation of government applications, and how do domains, contextual, and cultural considerations influence the UX evaluation of government applications? Kitchenham and Charters’ guidelines, as well as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), are employed to guide this review. Moreover, recent studies from Scopus and Web of Science (WoS) databases between the years 2023 and 2025 were retrieved using a predefined review protocol. After applying the inclusion and exclusion criteria and subjecting the studies to quality assessment, the final number of retained studies for this review is 19. The analysis reveals four key themes: diversity in UX evaluation approaches, the range of UX dimensions evaluated, the range of domains evaluated, and the contextual and cultural considerations in UX evaluations. The findings reveal that UX evaluations of government applications are predominantly usability-focused, while hedonic, emotional, and cultural dimensions receive limited and inconsistent attention. In addition, the review highlights that UX evaluations for government applications should encompass both technical and pragmatic aspects, as well as domain-specific, cultural, and contextual dimensions. Accordingly, strengthening these evaluations can lead to more inclusive and meaningful assessments, resulting in government applications that offer better UX. Overall, the findings of this review may serve as a reference for future work and advance the field of UX evaluation, especially in the context of government applications.

Author 1: Emmy Hossain
Author 2: Noris Mohd Norowi
Author 3: Azrina Kamaruddin
Author 4: Hazura Zulzalil

Keywords: User Experience Evaluation; UX evaluation; government applications; e-government; systematic review

PDF

Paper 68: Feature Engineering for Machine Learning-Based Trading Systems Using Decision Tree, Random Forest, and Gradient Boosting

Abstract: Machine learning-based trading systems require the selection and creation of features that crucially determine the performance level of the trading system. This study introduces an asset-specific, correlation-based feature selection approach for machine learning–based stock trading models. The research conducts a systematic evaluation of the influence of lookup period, the number of features from technical analysis, and feature selection on the performance of trading systems using tree-based algorithms: Decision Tree, Random Forest, and Gradient Boosting. The performance of the trading system was measured using the backtesting method, with metrics such as total return, win rate ratio, and profit factor. The research steps included selecting stocks with the largest market capitalization in the financial sector, which are included in the banking index. Historical data on the prices of these stocks was obtained from Yahoo! Finance for the years 2014-2025. The historical data was then divided into two parts, namely the in-sample dataset (2014-2024 time period) and the out-of-sample dataset (2025 time period). Each part of the data was supplemented with features from technical analysis and several other additional features. Trading signals are determined based on a profit target of +4% and a loss limit of –2% in a lookup period of 2 to 10 days. The results show that the ML strategy consistently outperforms the buy-and-hold strategy, with Gradient Boosting generating the highest return (37.443%). Spearman correlation-based feature selection per stock improves the performance of the strategy compared to uniform features.
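As a rough, hypothetical sketch of per-asset correlation-based feature selection (not the authors' implementation; `select_top_k` and the feature names are illustrative), Spearman's rank correlation can be computed as the Pearson correlation of rank-transformed series and used to keep the k features most correlated with the trading target:

```python
from statistics import mean

def _ranks(xs):
    # Assign 1-based ranks, averaging ranks within tie groups.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman rho = Pearson correlation of the two rank vectors.
    rx, ry = _ranks(x), _ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def select_top_k(features, target, k):
    # features: dict name -> series; keep the k names with highest |rho| vs. target.
    scored = sorted(features,
                    key=lambda n: abs(spearman(features[n], target)),
                    reverse=True)
    return scored[:k]
```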

Author 1: Nugroho Agus Haryono
Author 2: Yuan Lukito
Author 3: Aditya Wikan Mahastama

Keywords: Feature engineering; machine learning; trading system; decision tree; Random Forest; Gradient Boosting

PDF

Paper 69: Computational Intelligence for Sustainable Banking: A Novel Fermatean Fuzzy LOPCOW–EDAS Framework

Abstract: The primary objective of this study is to identify the priority strategies required for banks to achieve their sustainable growth targets and to develop a new fuzzy multi-criteria decision-making model sensitive to uncertainty conditions. The model proposed in this study is designed based on the integration of Fermatean Fuzzy LOPCOW–EDAS. In the first stage, the criteria and strategic alternatives affecting sustainable growth were identified through a literature review. The LOPCOW method was then used to objectively calculate the importance weights of the criteria, and the prioritization of strategic alternatives was subsequently performed using the EDAS method. To more accurately model the uncertainties in expert judgments, the opinions of ten experts were converted into Fermatean fuzzy numbers and analyzed. The use of Fermatean fuzzy sets offers greater expressive power and increases decision reliability compared to traditional fuzzy and Pythagorean approaches. The LOPCOW method objectively evaluates the information density of the criteria by using logarithmic percentage change, while the EDAS method reduces the impact of outliers by considering the distance of the alternatives from the mean solution, producing a more stable ranking. The findings indicate that the "digital green banking practices" criterion is the most critical element for sustainable growth. Furthermore, the "Digitalization and innovation capability" strategy was determined to be the most important alternative. This result demonstrates that sustainable growth in the banking sector can be achieved through the integration of digital technologies and environmentally friendly practices.
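For readers unfamiliar with EDAS, a minimal crisp sketch (ignoring the Fermatean fuzzy arithmetic and the LOPCOW weighting step; weights are assumed given, and all criteria are treated as benefit criteria) illustrates how alternatives are scored by their distances from the average solution:

```python
def edas(matrix, weights):
    # Crisp EDAS: score alternatives by positive/negative distance from
    # the average solution, weighted per criterion (benefit criteria only).
    m, n = len(matrix), len(matrix[0])
    avg = [sum(row[j] for row in matrix) / m for j in range(n)]
    sp, sn = [], []
    for row in matrix:
        pda = [max(0.0, row[j] - avg[j]) / avg[j] for j in range(n)]
        nda = [max(0.0, avg[j] - row[j]) / avg[j] for j in range(n)]
        sp.append(sum(weights[j] * pda[j] for j in range(n)))
        sn.append(sum(weights[j] * nda[j] for j in range(n)))
    nsp = [s / max(sp) for s in sp] if max(sp) else [0.0] * m
    nsn = [1 - s / max(sn) for s in sn] if max(sn) else [1.0] * m
    # Appraisal score: mean of the two normalized distance measures.
    return [(a + b) / 2 for a, b in zip(nsp, nsn)]
```

Higher appraisal scores rank an alternative (here, a strategy) closer to the top.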

Author 1: Majidah Majidah
Author 2: Dadan Rahadian
Author 3: Anisah Firli
Author 4: Suhal Kusairi
Author 5: Serkan Eti
Author 6: Serhat Yüksel
Author 7: Hasan Dinçer

Keywords: Fermatean fuzzy sets; multi-criteria decision-making; sustainable banking; digital transformation; decision support systems

PDF

Paper 70: Predicting the Duration of Judicial Cases Using Hybrid Systems Based on Language Models

Abstract: Recent technological developments in the field of Natural Language Processing (NLP), notably due to Transformer architectures and language models, have made it possible to tackle aspects that were previously inaccessible with traditional tools. The present study addresses the issue of predicting legal case durations using Arabic judicial data. For this task, hybrid systems based on language models were implemented. The Arabic_LegalBERT model, derived from AraBERT and specialized through additional pre-training on an Arabic legal corpus, was proposed to generate representations that were integrated into the downstream steps of the approach. Two methods were adopted for predicting the processing time of a new case: The first followed a framework combining automatic classification with statistical correspondence, while the second relied on cosine similarity combined with empirical statistics. The results obtained with the classification approach are particularly promising, with a small improvement for the system based on the specialized model. For the similarity-based approach, the results are also promising, with a clear distinction observed when evaluating each type individually, indicating that types with a higher number of cases generally perform better than those with fewer cases.

Author 1: Amina BOUHOUCHE
Author 2: Saliha YASSINE
Author 3: Mustapha ESGHIR
Author 4: Mohammed ERRACHID

Keywords: Language model; judicial case durations; legal domain; Arabic legal corpus

PDF

Paper 71: Reinforcement Learning-Driven Adaptive Aggregation for Blockchain-Enabled Federated Learning in Secure EHR Management

Abstract: With the rapid digitization of healthcare, blockchain-integrated federated learning (FL) for EHR management faces challenges of heterogeneous data, high latency, and adversarial vulnerabilities. This study proposes a novel Reinforcement Learning-Driven Adaptive Aggregation (RL-DAA) in an enhanced blockchain-FL framework, using Q-learning to dynamically optimize model weights based on trust, data quality, and node reliability. RL-DAA reduces computational overhead by 40% via state-action-reward optimization (mitigating non-IID bias) and boosts robustness against Byzantine faults by 35% with fault-tolerant rewards. Validated on adapted CIFAR-10 and real-world healthcare simulations, compared to EPP-BCFL and baseline models, RL-DAA achieves 96.5% accuracy, 45% lower latency, and 38% reduced energy consumption. By dynamically balancing efficiency, privacy, and robustness via RL-driven optimization, this work advances secure, scalable EHR management, with broader potential in privacy-sensitive domains.
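The Q-learning update at the core of such an adaptive aggregator can be sketched generically as follows (a hypothetical illustration; the paper's actual state, action, and reward design around trust, data quality, and node reliability is not reproduced here):

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q[(s, a)]

# Hypothetical usage: the agent adjusts a node's aggregation weight each round.
Q = defaultdict(float)
q_update(Q, "round0", "upweight", 1.0, "round1", ["upweight", "downweight"])
```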

Author 1: Cai Yanmin
Author 2: Wang Lei
Author 3: Zainura Idrus
Author 4: Jasni Mohamad Zain
Author 5: Marina Yusoff

Keywords: Federated learning; blockchain; reinforcement learning; electronic health records; privacy preservation

PDF

Paper 72: Engineering Prompt-Orchestrated LLM Workflows for Automated Test Case Generation in Agile Environments

Abstract: Manual test case generation for agile software development is a critical bottleneck that is costly, inconsistent, and error-prone. This study introduces a prompt-engineering and multi-level orchestration framework to automate this process. The proposed approach explicitly targets the automated generation of high-level acceptance test cases, addressing a gap in existing research that predominantly focuses on unit-level or reactive testing. The proposed tool, AI-Based Desktop Test Generator (AIDTG), employs a dual-LLM engine (Gemini 1.5 and GPT-4) to transform high-level functional descriptions from the Product Backlog into structured validation scenarios. Unlike prior LLM-based testing approaches, the framework integrates schema-aware prompt engineering and dual-model orchestration to ground the generation process in both functional intent and technical data constraints. The methodology is distinguished by its context-aware prompt engineering, which injects a frozen database schema to ground the models, and its ability to format outputs for the TestRigor BDD 2.0 platform. This schema-grounded and orchestrated workflow enables the systematic translation of informal User Stories into executable Behavior-Driven Development (BDD) acceptance tests, reducing ambiguity and improving semantic correctness. Experimental results on a real-world dataset of fifty User Stories show the framework reduces manual test design effort by eighty per cent, achieves a four point seven five (out of five) average quality rating from human experts, and produces BDD scripts with a ninety-one point nine per cent functional correctness pass rate. These results demonstrate that orchestrated, schema-aware Generative AI can operate as a reliable co-assistant for QA teams, improving efficiency while maintaining high standards of quality and executability.

Author 1: Almeyda Alania Fredy Antonio
Author 2: Barrientos Padilla Alfredo
Author 3: Siancas Garay Ronald Gustavo

Keywords: Software testing; test case generation; Large Language Models; Generative AI; prompt engineering; LLM orchestration; Behavior-Driven Development (BDD); agile methodology; acceptance testing; schema-aware prompting; Human-in-the-Loop; quality assurance automation

PDF

Paper 73: Task Scheduling in Cloud Computing Environment Based on Dwarf Mongoose Optimization

Abstract: The rapid advancement of the Internet and Internet of Things (IoT) technologies has significantly increased the demand for scalable and efficient cloud computing solutions. Task scheduling, a critical aspect of cloud computing, directly impacts system performance by influencing resource utilization, execution time, and operational costs. However, scheduling tasks in large-scale, dynamic cloud environments remains an NP-hard problem, with existing metaheuristic methods often struggling with scalability, convergence, and adaptability. This study proposes a novel task scheduling approach based on the dwarf mongoose optimization (DMO) algorithm. To assess its effectiveness, we conduct two experimental scenarios. The results demonstrate that, compared with existing algorithms, the proposed DMO algorithm offers faster convergence and higher accuracy in identifying optimal task scheduling solutions, particularly under large-scale task loads. We evaluated the method using the Google Cloud Jobs (GoCJ) dataset, and the findings confirm that DMO outperforms prior state-of-the-art techniques in terms of reducing makespan.
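As context, the makespan objective that such schedulers minimize is simply the completion time of the busiest virtual machine; a minimal sketch (variable names are illustrative, not the study's code):

```python
def makespan(assignment, task_lengths_mi, vm_mips):
    # assignment[i] = VM index for task i; execution time = length (MI) / speed (MIPS).
    busy = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        busy[vm] += task_lengths_mi[task] / vm_mips[vm]
    # Makespan = finish time of the last VM to go idle.
    return max(busy)
```

A metaheuristic such as DMO would use this value as the fitness of a candidate assignment.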

Author 1: Olanrewaju Lawrence Abraham
Author 2: Md Asri Ngadi
Author 3: Johan Bin Mohamad Sharif
Author 4: Mohd Kufaisal Mohd Sidik
Author 5: Ogunyinka Taiwo Kolawole

Keywords: Task scheduling; cloud computing; virtual machines; dwarf mongoose optimization algorithm; Cloudsim; makespan

PDF

Paper 74: H∞ Control Design for Nonlinear Systems via Multimodel Approach

Abstract: Nonlinear systems are integral to contemporary engineering applications, yet their regulation remains a significant challenge due to complex and highly dynamic behaviors. Robust control frameworks, particularly H∞ methods, provide systematic tools to ensure stability and performance in the presence of disturbances and modeling uncertainties. This study proposes an integrated design methodology that combines H∞ loop-shaping techniques with multimodel approaches to achieve resilient control of nonlinear systems. The control law is structured around the H∞ loop-shaping scheme, which shapes the open-loop dynamics to meet desired robustness and performance specifications. The multimodel strategy further enhances adaptability by accommodating diverse operating conditions and capturing variations in system behavior. Several control architectures are presented that unify H∞ loop-shaping with multimodel representations, offering a flexible framework for nonlinear system control. The design methodology also ensures desirable transient responses, thereby improving practical applicability for complex systems. A study is conducted to validate the proposed approaches. Simulation results confirm the effectiveness of multimodel H∞ control systems, underscoring their potential as a robust solution for complex nonlinear applications.

Author 1: Rihab ABDELKRIM

Keywords: Nonlinear systems; H∞ loop shaping control; multimodel

PDF

Paper 75: Modelling Dimensions and Indicators of Readiness for Lean 4.0 Implementation in Indonesian Industries

Abstract: The integration of Lean Manufacturing and Industry 4.0, known as Lean 4.0, has emerged as a strategic approach to enhancing operational efficiency, digital transformation, and competitiveness in modern industries. The rapid development of Industry 4.0 has driven a massive transformation in the global manufacturing sector, including Indonesia, which continues to face competitiveness challenges due to limitations in technological capabilities, human resources, and organizational culture. However, the successful implementation of Lean 4.0 requires a structured and measurable level of organizational readiness. This study aims to model the key dimensions and indicators that define the readiness of Indonesian companies for Lean 4.0 implementation. Using a mixed-methods approach of qualitative and quantitative analysis, this study begins with a meta-synthesis of existing literature and expert interviews to identify initial dimensions and indicators, followed by a structured survey to validate the model. The analytical framework integrates the theoretical foundations of Lean and Industry 4.0 principles, existing readiness assessment models, as well as models from key references. The key finding is a proposed model for Lean 4.0 Readiness for Indonesian industries, consisting of 6 key dimensions and 41 indicators. The 6 key dimensions are: Leadership and Strategy, People and Culture, Technology and Digital Infrastructure, Operation and Process, Product and Service, and External Collaboration and Integration. Expert validation and pilot testing confirmed the consistency and contextual relevance of this model for industries in Indonesia. The findings contribute to theory and practice by providing a comprehensive and diagnostic framework for evaluating Lean 4.0 implementation readiness, as well as supporting the development of a readiness model that can be adapted to other industrial parks in Indonesia.

Author 1: Sarjono Sarjono
Author 2: Pudji Hastuti
Author 3: Satrio Utomo
Author 4: Gani Soehadi
Author 5: Budi Setiadi Sadikin
Author 6: Manifas Zubair
Author 7: Jaizuluddin Mahmud
Author 8: Rizki Arizal Purnama
Author 9: Hardono Hardono
Author 10: Helen Fifianny

Keywords: Lean 4.0; Industry 4.0; readiness assessment; organizational readiness; Indonesian industries; digital transformation

PDF

Paper 76: Autonomous Blockchain-Enabled Security Framework for Smart Grids Using Adaptive AI

Abstract: The increasing interconnectivity of smart grids exposes critical energy infrastructure to more sophisticated cyber threats, necessitating adaptable and auditable security measures. This study presents a blockchain-enabled, self-improving intrusion detection system (IDS) that integrates a permissioned blockchain, autonomous governance loops, and a hybrid CNN–LSTM detector. The platform retrains models across federated nodes using blockchain-anchored data, facilitates automatic containment through smart contracts, and permanently stores validated alarms. Following multiple self-improvement cycles, the system enhances its performance from an initial 94.5% accuracy and 4.2% false positive rate (FPR) to 98.1% accuracy, a 97.6% detection rate (recall), and a 2.1% FPR in simulated tests. In comparison to baselines, a blockchain-only IDS recorded 94.1% accuracy with a 4.8% FPR, while a conventional machine learning-based IDS achieved 92.7% accuracy with a 5.4% FPR. Operationally, blockchain anchoring provided a throughput of approximately 1,200 transactions per second with an average transaction latency of about 1.5 seconds. The combined detect-to-contain latency for high-severity events was approximately 3.2 seconds. These findings demonstrate that a scalable, low-FPR, and rapid-response security paradigm for modern smart grids can be achieved by integrating adaptive artificial intelligence with decentralized, robust governance.
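The reported accuracy, false positive rate, and detection rate (recall) follow the standard confusion-matrix definitions, which can be sketched as:

```python
def ids_metrics(tp, fp, tn, fn):
    # Standard IDS evaluation metrics from raw confusion-matrix counts.
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)        # false alarms among benign traffic
    recall = tp / (tp + fn)     # detection rate among real attacks
    return accuracy, fpr, recall
```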

Author 1: Brinal Colaco
Author 2: Nazneen Ansari

Keywords: Smart Grid Security; intrusion detection system (IDS); adaptive AI; deep learning; false data injection (FDI) attacks; cyber-physical systems (CPS)

PDF

Paper 77: Automated Question Answering System for FAQ COVID-19 Using Word Embeddings

Abstract: This study covers the development and evaluation of a Question Answering System (QAS) for COVID-19. Unlike previous work in biomedical QAS, which primarily targets technical users, this work develops a COVID-19 QAS customized for the general public, especially those with limited knowledge of the clinical field. The methodology centered on building the system and conducting experiments to measure its accuracy. The QAS processes the user query using three different feature extraction approaches and returns the related FAQ, with its associated answer, from a set of 561 FAQs sourced from the Ministry of Health, the Virginia Department of Health, and the World Health Organization. The accuracy of the resulting responses was tested with Qaviar. The experimental results indicated that BERT consistently achieved the highest accuracy across all datasets, at 96.25%–98%; Word2Vec scored 86.25%–95.2%, while BoW scored between 86.24% and 88%. While most models performed stably, Word2Vec's performance was comparatively unstable across datasets. All models achieved their lowest accuracy on the smallest dataset, yet increasing dataset size did not necessarily yield higher accuracy. Overall, BERT outperformed the other embedding approaches.
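As a hedged illustration of the Bag-of-Words approach (not the authors' code; the toy FAQ entries below are invented), query-to-FAQ matching can be done with cosine similarity over word-count vectors:

```python
import math
from collections import Counter

def bow(text):
    # Bag-of-Words vector: word -> count, case-folded.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_faq(query, faqs):
    # faqs: list of (question, answer); return the answer of the closest question.
    qv = bow(query)
    return max(faqs, key=lambda qa: cosine(qv, bow(qa[0])))[1]
```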

Author 1: Nazar Elfadil
Author 2: Sarah Saad Alanazi

Keywords: Word embedding; Bag of Words; BERT; Word2Vec; Qaviar; Question Answering System (QAS); COVID-19; natural language processing; public health informatics

PDF

Paper 78: Adversarial Robustness of Deep Learning in Medical Imaging: A Comprehensive Survey and Benchmark of State-of-the-Art Architectures

Abstract: The integration of artificial intelligence into medical diagnostics promises to revolutionize healthcare. However, the reliability of these systems is critically undermined by adversarial examples, which are imperceptible perturbations that can lead to misdiagnosis. Ensuring the robustness of AI-driven clinical decisions is paramount for ensuring patient safety and institutional trust. This study addresses this challenge in two ways. First, we provide a structured survey of state-of-the-art adversarial threats, including adversarial attacks and detection strategies. Second, we present a rigorous empirical benchmark of five prominent CNN architectures for dermatoscopic skin cancer classification using the gold-standard AutoAttack suite. The results revealed significant disparities in robustness based on the architectural design. Although all standard-trained models are highly vulnerable, their defensibility through adversarial training varies significantly. We found that modern transformer-inspired architectures, such as ConvNeXt, achieved state-of-the-art robust accuracy while maintaining high performance with minimal trade-offs. Conversely, architectures optimized for mobile efficiency, such as MobileNetV2 and EfficientNet-B2, are exceptionally difficult to defend. To the best of our knowledge, this is the first study to establish an architectural hierarchy of robustness for dermatoscopic tasks, demonstrating that hybrid designs outperform mobile-optimized models by over 25% under adversarial conditions. These findings advocate a shift in clinical AI validation from accuracy-centric to robustness-centric metrics.
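For orientation, the simplest gradient-sign attack (FGSM) can be sketched on a toy logistic model (a hypothetical illustration, not the AutoAttack suite used in the benchmark):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    # FGSM on logistic regression: the input gradient of the cross-entropy
    # loss is (sigma(w.x + b) - y) * w; step eps in its sign direction.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

The perturbation is imperceptibly small per pixel (bounded by eps) yet moves the input uphill on the loss, lowering the model's confidence in the true class.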

Author 1: Neethunath M R
Author 2: Gladston Raj S
Author 3: Pradeepan P

Keywords: Adversarial attacks; dermatoscopy; deep learning; robustness benchmark; security in medical AI

PDF

Paper 79: Enhanced Detection of Acute Lymphocytic Leukemia Using Deep Learning and Hybrid Classifiers on Microscopic Blood Images

Abstract: There is no doubt that a significant number of individuals worldwide suffer from blood cancer. Many people are unaware of the dangers associated with this disease, which can be fatal. When diagnosed, patients may feel intense fear and a sense of powerlessness. In addition, due to the rarity of these diseases, patients often struggle to find the necessary help and information. A specific type of blood cancer called acute lymphocytic leukemia (ALL) mainly affects white blood cells and is particularly prevalent in children. Early detection of this disease improves the chances of recovery. Therefore, it is crucial to have an accurate and dependable method for identifying blood cancers. Deep learning (DL) architectures have garnered significant interest within the computer vision realm. Recently, there has been a strong focus on the accomplishments of pretrained architectures in accurately describing or classifying data from various real-world image datasets. The classification performance of the proposed models is investigated by applying Softmax, Support Vector Machine (SVM), and K-Nearest Neighbors (K-NN) classifiers separately to deep neural networks (Alexnet and VGG19) to differentiate between the three types of ALL using a microscopic image dataset. The experimental results demonstrate that the combination of Alexnet with SVM achieves outstanding classification performance on the leukemia dataset, particularly on the original (unsegmented) data, achieving 97.03% on the benign class, 96.14% on the early class, 99.49% on the pre class, and 99.9% on the pro class. This approach achieves higher accuracy levels than practicing physicians.
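As a sketch of the hybrid-classifier idea (a deep network as feature extractor, a simple classifier on top), a minimal K-NN vote over extracted feature vectors might look like this (illustrative only; not the authors' implementation):

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    # Majority vote among the k nearest training feature vectors
    # (squared Euclidean distance; train_X would hold CNN features).
    nearest = sorted(range(len(train_X)),
                     key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    votes = Counter(train_y[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]
```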

Author 1: H. A. El Shenbary
Author 2: Amr T. A. Elsayed
Author 3: Khaled A. A. Khalaf Allah
Author 4: Belal Z. Hassan

Keywords: Deep learning; transfer learning; leukemia; Alexnet; VGG19; SVM; K-NN; classification

PDF

Paper 80: An AI-Driven Framework for Network Intrusion Detection Using ANOVA-Based Feature Selection

Abstract: In the last few years, cyberattacks have become more complex, and it is becoming increasingly necessary to establish secure networks. This study examines enhancements to intrusion detection systems (IDSs) with the implementation of machine learning for the categorization of network traffic attacks. For the current study, we utilized four publicly available datasets: CIC-IDS2017, CIC-DoS2017, CSE-CIC-IDS2018, and CIC-DDoS2019. We examined three machine learning techniques: LightGBM, Random Forest, and XGBoost. Experimental results showed that Random Forest and XGBoost achieved the highest accuracy of 0.99 in both binary and multi-class intrusion detection tasks, maintaining balanced performance with macro F1-scores around 0.86. LightGBM exhibited slightly lower overall performance, but benefited from ANOVA-based feature selection, which improved its recall and model stability. Feature selection also enhanced computational efficiency by reducing feature redundancy while preserving accuracy across models. These results highlight how AI tools could help network security deal with emerging threats and improve the performance of IDS. The study underscores the critical role of feature selection in enhancing model efficiency, hence promoting advancements in automated network security systems that can adapt to evolving cyber threats.
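The ANOVA-based selection step can be illustrated with the one-way F-statistic, computed per feature and used to keep the top-scoring features (a minimal sketch, not the study's pipeline; function names are illustrative):

```python
from statistics import mean

def anova_f(values, labels):
    # One-way ANOVA F-statistic for a single feature across class labels:
    # between-group variance / within-group variance.
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(y, []).append(v)
    grand = mean(values)
    k, n = len(groups), len(values)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
    ss_within = sum((v - mean(g)) ** 2 for g in groups.values() for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def select_k_best(features, labels, k):
    # features: dict name -> values; keep the k features with the highest F.
    return sorted(features,
                  key=lambda n: anova_f(features[n], labels),
                  reverse=True)[:k]
```

Features whose class-conditional means differ strongly relative to their within-class spread score high and survive the cut.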

Author 1: Salam Allawi Hussein
Author 2: Sándor Répás

Keywords: Network security; intrusion detection; machine learning; feature selection

PDF

Paper 81: DrugCellGNN: Graph Convolutional Networks for Integrating Omics and Drug Similarities in Cancer Therapy Prediction

Abstract: Predicting drug response in cancer cell lines is a critical step toward precision oncology, enabling more efficient therapeutic discovery and personalized treatment strategies. However, the complexity of drug–cell interactions, driven by diverse omics profiles and structural variability among drugs, poses significant challenges for conventional machine learning approaches. In this study, we propose an end-to-end pipeline that integrates multi-omics data (gene expression, copy number variation, and mutations) with chemical structure representations of drugs to predict binary drug response. Our method employs principal component analysis (PCA) for dimensionality reduction of high-dimensional omics data, followed by the computation of drug–drug and cell–cell similarity matrices. These are used to construct a heterogeneous graph combining intra-class similarities with drug–cell interactions. A customized graph neural network model, DrugCellGNN, is then applied to learn context-aware embeddings of drugs and cells. The fused representations are passed to a downstream multi-layer perceptron for classification. To address class imbalance, we introduce a dynamic focal loss function that adaptively emphasizes hard-to-classify examples. Evaluation on the GDSC dataset with an 80/20 train–test split demonstrates strong performance: Accuracy = 0.8935, F1 = 0.9201, AUC = 0.9510. This work highlights the utility of graph-based integration of multi-omics and drug features for drug sensitivity prediction. By leveraging both molecular and relational information, the proposed framework offers a robust and extensible foundation for advancing computational approaches in precision oncology.
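As background, the standard binary focal loss (Lin et al.) that the paper's dynamic variant builds on down-weights easy examples via a modulating factor; the paper's adaptive adjustment itself is not reproduced here:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    # Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).
    # p is the predicted probability of class 1; y is the true label (0/1).
    p_t = p if y == 1 else 1 - p
    a_t = alpha if y == 1 else 1 - alpha
    return -a_t * (1 - p_t) ** gamma * math.log(p_t)
```

With gamma > 0, well-classified examples (p_t near 1) contribute almost nothing, so training focuses on the hard, often minority-class, cases.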

Author 1: Gehad Awad Aly
Author 2: Rania Ahmed Abdel Azeem Abul Seoud
Author 3: Dina Ahmed Salem

Keywords: Precision oncology; drug sensitivity prediction; graph neural networks (GNNs); multi-omics integration; focal loss; PCA

PDF

Paper 82: Personalized Point of Interest in Location-Based Augmented Reality Tourism Application

Abstract: In recent years, the rapid growth of the tourism industry and increasing demand for efficient and meaningful travel experiences have highlighted the need for smarter travel assistance tools. Many tourists, particularly first-time visitors, often face challenges navigating unfamiliar destinations and identifying relevant points of interest, leading to delays, inconvenience, and reduced satisfaction. During the peak tourist season, most local hotels and restaurants are overcrowded, and tourists have to find accommodation and food in unfamiliar places, which reduces their travel efficiency and experience. To address these challenges, this study proposes PutrajayAR, a personalized Location-Based Augmented Reality (LBAR) tourism application designed to enhance tourists’ efficiency and overall travel experience. The application provides AR Discovery and AR Recommendation features that dynamically present personalized points of interest based on user preferences, spatial proximity, and contextual constraints. The system was developed using the waterfall model and evaluated through black box testing, usability testing, and persuasive design assessment. The results demonstrate that PutrajayAR significantly improves user experience and satisfaction compared to non-personalized approaches, thereby validating the effectiveness of personalized LBAR systems in helping users navigate effectively and discover attractions that match their interests.
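A hedged sketch of proximity-plus-preference POI filtering (illustrative only; `recommend`, the tag field, and the 2 km radius are assumptions, not the app's logic):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two WGS-84 coordinates, in kilometres.
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def recommend(pois, user_lat, user_lon, interests, max_km=2.0):
    # Keep POIs matching a user interest within max_km, nearest first.
    hits = [p for p in pois
            if p["tag"] in interests
            and haversine_km(user_lat, user_lon, p["lat"], p["lon"]) <= max_km]
    return sorted(hits, key=lambda p: haversine_km(user_lat, user_lon,
                                                   p["lat"], p["lon"]))
```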

Author 1: Rimaniza Zainal Abidin
Author 2: Ma Boqi
Author 3: Rosilah Hassan
Author 4: Nor Shahriza Abdul Karim
Author 5: Mohamad Hidir Mhd Salim

Keywords: Augmented Reality; LBAR; PutrajayAR; AR Discovery; AR Recommendation

PDF

Paper 83: Multi-Class Object Detection Using Quantized YOLOv11 for Real-Time Inference

Abstract: Real-time multi-class object detection on embedded devices poses significant challenges due to limited computational power, memory capacity, and energy efficiency requirements. Conventional high-precision object detectors, such as YOLOv11, deliver outstanding accuracy but are computationally intensive, making them unsuitable for deployment on resource-constrained hardware. This study presents a quantized implementation of the YOLOv11 model designed to enable efficient real-time inference on embedded platforms. The proposed approach applies post-training integer quantization and mixed-precision optimization to minimize computation and memory usage while maintaining detection accuracy across multiple object categories. Experimental evaluations were conducted on the COCO and Pascal VOC datasets. The results indicate that the quantized YOLOv11 achieves a 3.2× increase in inference speed, a 2.7× reduction in memory footprint, and a 35% improvement in energy efficiency, with less than 2% loss in mean Average Precision (mAP) compared to the full-precision baseline. The optimized model sustains real-time performance exceeding 45 frames per second (FPS), demonstrating that quantization is a viable and effective approach for deploying high-performance object detection models on embedded systems.
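For reference, post-training affine integer quantization maps floats to unsigned 8-bit codes via a scale and zero point; a minimal sketch (not the deployed pipeline, which also applies mixed-precision optimization):

```python
def quant_params(w_min, w_max, bits=8):
    # Affine mapping: real = scale * (q - zero_point), q in [0, 2^bits - 1].
    qmin, qmax = 0, 2 ** bits - 1
    scale = (w_max - w_min) / (qmax - qmin)
    zero_point = round(qmin - w_min / scale)
    return scale, zero_point

def quantize(xs, scale, zero_point, bits=8):
    qmax = 2 ** bits - 1
    return [min(qmax, max(0, round(x / scale + zero_point))) for x in xs]

def dequantize(qs, scale, zero_point):
    return [scale * (q - zero_point) for q in qs]
```

Round-trip error is bounded by the scale, which is the source of the small (< 2% here) mAP loss such pipelines trade for integer arithmetic.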

Author 1: Yehia A. Soliman
Author 2: Amr Ghoneim
Author 3: Mahmoud Elkhouly

Keywords: Quantized neural networks; YOLOv11; object detection; embedded systems; real-time inference; model optimization

PDF

Paper 84: SWAP Optimization for Qubit Mapping Based on the Centric-Shortest Quantum Gate Set in NISQ Devices

Abstract: In the quantum computing era of Noisy Intermediate-Scale Quantum (NISQ) devices, conventional qubit mapping strategies typically rely on specific heuristic rules to solve the mapping problem, overlooking the impact of other factors on the mapping, which leads to increased overhead from extra SWAP gates. To address this issue, we propose a SWAP optimization strategy based on the Centric-Shortest Quantum Gate Set (C-SQGS) and apply it to qubit mapping. In this approach, the centric qubit is determined by analyzing the maximum-flexibility qubit set and the physical distances between the associated CNOT gates, leading to the identification of the Centric-Shortest Quantum Gate Set. To overcome the limitations of traditional cost functions that consider only single factors, a multi-factor cost function is introduced to evaluate the overall overhead of candidate SWAP operations and determine the pending SWAP gate set. Based on qubit flexibility analysis, an executable SWAP gate is identified and inserted into the circuit. Experimental results demonstrate that the C-SQGS strategy effectively reduces both SWAP gate and two-qubit gate overhead. Specifically, it achieves an average SWAP gate reduction of 36.9% and 47.7%, and a two-qubit gate reduction of 13.8% and 13.5% on the t|ket⟩ and Qiskit compilers, respectively. These results highlight the potential of the C-SQGS strategy in enhancing the efficiency of qubit mapping for NISQ devices.
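As background on the distance term in such cost functions, the hop distance between two physical qubits on the device coupling graph can be found by BFS; a CNOT on qubits at distance d needs d - 1 SWAPs before it becomes executable (illustrative sketch, not the C-SQGS algorithm):

```python
from collections import deque

def qubit_distance(coupling, a, b):
    # BFS hop count on the coupling graph: coupling maps qubit -> neighbours.
    if a == b:
        return 0
    seen = {a}
    queue = deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        for nb in coupling[node]:
            if nb == b:
                return d + 1
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, d + 1))
    raise ValueError("qubits are not connected")
```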

Author 1: Shujuan Liu
Author 2: Hui Li
Author 3: Yingsong Ji
Author 4: Jiepeng Wang

Keywords: Quantum computing; qubit mapping; Centric-Shortest Quantum Gate Set (C-SQGS); executable SWAP gate; multi-factor cost function

PDF
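To make the cost-function idea above concrete, here is a minimal sketch of scoring candidate SWAPs on a coupling graph. It uses a single factor (total shortest-path distance of pending CNOTs after the SWAP) where the paper's multi-factor cost combines several; the graph, mapping, and candidate set are invented for illustration.

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path distances from src on the device coupling graph."""
    dist = {src: 0}
    dq = deque([src])
    while dq:
        u = dq.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                dq.append(v)
    return dist

def swap_cost(adj, mapping, cnots, swap):
    """Total physical distance of pending CNOTs after applying `swap`
    (a pair of logical qubits) to the logical->physical mapping."""
    m = dict(mapping)
    a, b = swap
    m[a], m[b] = m[b], m[a]
    total = 0
    for (p, q) in cnots:
        total += bfs_dist(adj, m[p])[m[q]]
    return total

# Linear coupling graph 0-1-2-3; logical qubit 1 sits far from logical 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
mapping = {0: 0, 1: 3, 2: 1, 3: 2}            # logical -> physical
cnots = [(0, 1)]                               # pending CNOT
best = min((swap_cost(adj, mapping, cnots, s), s)
           for s in [(0, 2), (1, 3), (2, 3)])
```

A real mapper would weight this distance term against gate error rates, qubit flexibility, and look-ahead depth before choosing the SWAP to insert.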

Paper 85: Predictive Modelling of Flood Dynamics in Malaysia’s East Coast Using an NARX Model

Abstract: Flood forecasting is critical for improving early warning systems in Malaysia’s East Coast region, particularly in flood-prone Pekan. This study develops a Nonlinear Autoregressive with Exogenous Inputs (NARX) model to predict river water levels using data from four stations: Sungai Pahang, Sungai Pahang Tua, Sungai Paloh Hinai, and Sungai Mentiga (2020–2024). The dataset was preprocessed through short-gap interpolation, removal of long missing segments, and segmentation into continuous sequences to ensure high-quality inputs for modeling. A total of 75 NARX configurations were evaluated using different lag values, hidden neuron counts, and training epochs. Model performance was assessed using Mean Squared Error (MSE) and residual diagnostics. The best model—lag = 6 and 300 hidden units—achieved a validation loss of 0.102, demonstrating stable convergence and strong generalization. Prediction results showed close alignment with actual river levels. The findings confirm that the NARX approach effectively captures nonlinear hydrological dynamics and provides reliable short-term water level forecasts for Pekan, addressing an existing gap in localized flood prediction studies.

Author 1: Nur Nabilah Zakaria
Author 2: Azlee Zabidi
Author 3: Mahmood Alsaadi
Author 4: Mohd Izham Mohd Jaya

Keywords: Flood prediction; NARX model; hydrological modelling; Pekan

PDF
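The core of a NARX configuration like the one above is how training samples are built: the target y[t] is predicted from the past `lag` values of the output and of the exogenous input. A minimal sketch of that sample construction, with invented toy series (the paper's actual lag of 6 and neural regressor are omitted):

```python
def make_narx_samples(y, x, lag):
    """Build (features, target) pairs: the past `lag` output values plus
    the past `lag` exogenous input values predict y[t]."""
    samples = []
    for t in range(lag, len(y)):
        feats = y[t - lag:t] + x[t - lag:t]
        samples.append((feats, y[t]))
    return samples

# Toy water-level series y and exogenous input x (e.g. upstream rainfall).
y = [1.0, 1.2, 1.5, 1.9, 2.4, 3.0]
x = [0.1, 0.2, 0.2, 0.3, 0.3, 0.4]
pairs = make_narx_samples(y, x, lag=2)
# Each feature vector has 2 * lag = 4 entries; 4 training pairs result.
```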

Paper 86: Polarimetric Imaging and Computational Techniques for Identification of Malignant Lesions

Abstract: According to the International Agency for Research on Cancer, cervical cancer is a major cause of death among Moroccan women, with high incidence and mortality rates. Early detection remains essential to increasing patients’ chances of recovery. Our study combines polarized light imaging, digital image correlation (DIC), Gray-Level Co-occurrence Matrix (GLCM) texture analysis, and fractal-based local standard deviation mapping to identify microstructural alterations in cervical tissue. Smear and biopsy samples were collected and anonymized in hospitals in Agadir, Morocco. Our goal is to develop an optical system based on the interaction between polarized light and tissue, as well as a complementary computational framework to distinguish between different types of healthy, precancerous, and cancerous tissue. DIC revealed heterogeneous deformation patterns in cancerous regions, fractal analysis highlighted increased structural complexity, and GLCM features showed higher contrast and entropy in malignant samples. This pilot study introduces a novel approach combining polarimetric imaging and computational analysis, applied to cervical tissue samples from Moroccan women. Despite the small size of the ex vivo dataset, the results encourage larger-scale prospective and in vivo studies.

Author 1: Mohammed Hachem MEZOUAR
Author 2: Abdessamad ACHNAOUI
Author 3: Mohammed TBOUDA
Author 4: Said CHOUHAM
Author 5: Said BELKACIM
Author 6: Mohamed NEJMEDDINE
Author 7: Driss MGHARAZ

Keywords: Polarized light; digital image correlation; Gray-Level Co-occurrence Matrix; fractal; cervix; cancer

PDF
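The GLCM contrast and entropy features mentioned above can be sketched briefly. This minimal version counts co-occurrences for a single horizontal, distance-1 offset on a toy image; the offsets, gray-level count, and normalization used in the study are assumptions here.

```python
import math

def glcm_features(img):
    """Contrast and entropy from a horizontal (0-degree, distance-1)
    gray-level co-occurrence matrix, stored sparsely as a dict."""
    counts, total = {}, 0
    for row in img:
        for a, b in zip(row, row[1:]):        # neighboring pixel pairs
            counts[(a, b)] = counts.get((a, b), 0) + 1
            total += 1
    contrast = sum(c * (i - j) ** 2 for (i, j), c in counts.items()) / total
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return contrast, entropy

# Toy 4x4 image with four gray levels.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
contrast, entropy = glcm_features(img)
```

Malignant tissue with more heterogeneous texture yields more distinct co-occurring pairs, which is what pushes contrast and entropy upward in the study's findings.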

Paper 87: Implementation of Hybrid Channel-Aware Prioritization (HCAP) Scheduler for a Multi-User MIMO System in 5G Communication

Abstract: The evolution of 5G networks demands highly efficient resource allocation strategies to accommodate burgeoning mobile data traffic, latency-sensitive applications, and diverse user requirements. Multi-User Multiple-Input Multiple-Output (MU-MIMO) technology is a cornerstone of 5G, enabling simultaneous service to multiple users and significantly improving spectral efficiency. However, its performance is critically dependent on dynamic scheduling algorithms that must balance high system throughput with equitable user access amidst rapidly changing channel conditions and interference. Traditional schedulers like Round Robin, Proportional Fair, and Max-CQI often exhibit a pronounced trade-off between these objectives, struggling to adapt effectively in heterogeneous and dynamic network environments. To address this gap, this study proposes a Hybrid Channel-Aware Prioritization (HCAP) scheduler. The HCAP framework intelligently integrates real-time Channel Quality Indicator (CQI) and interference measurements into a unified user priority score, utilizing tunable α–β weights to flexibly emphasize throughput or fairness. Furthermore, it employs k-means clustering based on long-term channel statistics to group users, thereby reducing scheduling bias and promoting fairness within clusters. Evaluated through comprehensive MATLAB simulations within a realistic MU-MIMO system model employing Regularized Zero-Forcing precoding, HCAP demonstrates a superior performance balance. The results indicate that HCAP achieves up to 2.6 times higher aggregate throughput compared to conventional Proportional Fair and Max-CQI schedulers, while consistently maintaining Jain's Fairness Index above 0.90 across varied network scenarios. These findings validate HCAP as a robust, scalable, and QoS-aware scheduling solution, offering significant potential for enhancing resource allocation in next-generation wireless communication systems.

Author 1: Krishna Deshpande
Author 2: Virupaxi B. Dalal
Author 3: Yedukondalu Udara

Keywords: Multiple input and multiple output; HCAP; CQI; throughput; 5G; QoS; k-means clustering; resource scheduling

PDF
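The unified priority score at the heart of HCAP can be sketched as a weighted combination of channel quality and interference. The linear form, the weight values, and the user numbers below are illustrative assumptions; the paper's scheduler additionally groups users with k-means on long-term channel statistics, which is omitted here.

```python
def hcap_priority(cqi, interference, alpha, beta):
    """Illustrative HCAP-style score: alpha rewards good channel quality
    (throughput), beta penalizes measured interference (fairness toward
    users in cleaner conditions)."""
    return alpha * cqi - beta * interference

# Hypothetical users: (CQI, interference measurement).
users = {"u1": (12, 0.9), "u2": (9, 0.2), "u3": (14, 1.5)}
scores = {u: hcap_priority(c, i, alpha=1.0, beta=5.0)
          for u, (c, i) in users.items()}
scheduled = max(scores, key=scores.get)
# With these weights, u2's low interference outweighs u3's higher CQI.
```

Tuning alpha upward recovers Max-CQI-like behavior, while raising beta shifts the balance toward fairness, which is the trade-off the alpha-beta weights expose.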

Paper 88: Adaptive Intelligence in Retail Space Optimization: Modeling the Coffee Shop Dilemma with Q-Learning Agents

Abstract: This study models the "coffee shop dilemma", where customer attendance is discouraged by both overcrowding and emptiness. Using an agent-based model with Q-learning reinforcement learning, this study simulates the daily decisions of 100 agents over a one-year period. The results reveal a self-organizing attendance cycle around a 60% capacity threshold. This study demonstrates that customer satisfaction is not driven by visit frequency, but by adaptive decision-making strategies shaped by learned congestion values. Clustering analysis identifies distinct behavioral patron groups (e.g., Ultra-Frequent, Optimized) that emerge from these subtle value differences. The study provides a data-driven framework for optimizing shop space and customer flow, offering conceptual insights into balancing the needs of quick-service and long-stay customers by dynamically managing perceived occupancy.

Author 1: Siranee Nuchitprasitchai
Author 2: Kanchana Viriyapant
Author 3: Kanjanee Satitrangseewong
Author 4: May Myo Naing

Keywords: El Farol Bar problem; agent-based modeling; Q-learning; reinforcement learning; customer behavior; congestion paradox; decision-making; coffee shop operations

PDF
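The learned congestion values above come from the standard tabular Q-learning update. A minimal sketch, with invented states (crowding buckets) and rewards standing in for the paper's simulation parameters:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning: move Q(s, a) toward the observed
    reward plus the discounted value of the best next action."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# States: yesterday's crowding; actions: visit ("go") or skip ("stay").
Q = {s: {"go": 0.0, "stay": 0.0} for s in ("low", "ok", "crowded")}
# Visiting at comfortable (~60%) occupancy is rewarding; overcrowding is not.
q_update(Q, "ok", "go", reward=1.0, next_state="crowded")
q_update(Q, "crowded", "go", reward=-1.0, next_state="ok")
```

Repeated over a year of simulated days, these small value differences are what separate the Ultra-Frequent and Optimized patron groups the clustering analysis identifies.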

Paper 89: AI Readiness as a Pathway to Sustainable Competitiveness in Tourism Transport: Evidence from an Integrative SEM Model

Abstract: Artificial intelligence (AI) is transforming demand forecasting in the tourism transportation sector, delivering unprecedented accuracy in volatile, seasonal, and customer-sensitive environments. Yet, many firms struggle to translate AI's potential into performance due to gaps in technological and organizational readiness. Drawing on the resource-based view (RBV) and the Technology-Organization-Environment (TOE) framework, this study develops and tests an integrative model linking digital infrastructure, employee skills, and managerial support to AI adoption and, consequently, business performance. We use Structural Equation Modeling (SEM) with bootstrapping to test direct and indirect effects. The results confirm that all three dimensions of readiness significantly boost AI adoption, which improves operational efficiency, occupancy rates, customer satisfaction, and profitability. Crucially, AI adoption fully mediates the effects of employee skills and managerial support, while partially mediating those of digital infrastructure, reflecting the latter's dual role in supporting analytics both through AI and by other means. The study contributes theoretically by clarifying the mechanism by which readiness translates into value and offers practitioners a roadmap for successful AI assimilation in high-uncertainty service contexts.

Author 1: Mohamed Amine Frikha

Keywords: Artificial intelligence (AI); demand forecasting; tourism transport; Intelligent Transportation Systems (ITS); digital infrastructure; Resource-Based View (RBV); Technology- Organization-Environment (TOE) Framework; Structural Equation Modeling (SEM); mediation analysis; sustainable mobility; data-driven decision making

PDF

Paper 90: RoadSCNet: Road Surface Condition Detection Network

Abstract: Road surface quality is an important factor contributing to accidents, resulting in the loss of time, resources, and lives. Surveying road conditions manually is slow and costly; automatic detection of road conditions makes surveys far more efficient than human inspection. This research identifies three object types: cracks, potholes, and manhole covers. Among YOLOv5, YOLOv6, YOLOv7, and YOLOv8, YOLOv6 showed the highest efficiency. This paper therefore proposes RoadSCNet, a network built on the YOLOv6 architecture for road surface research. A key part is the customized Horizon block, which enhances horizontal contextual feature extraction and mitigates the limitations of the traditional YOLO architecture in identifying road surface defects characterized by elongated shapes and low-light variation, such as cracks and potholes.

Author 1: Sujittra Sa-ngiem
Author 2: Kwankamon Dittakan
Author 3: Saroch Boonsiripant

Keywords: RoadSCNet; road surface; detect road; road condition; deep learning; crack; pothole; manhole cover; image analysis; convolutional neural network; CNN

PDF

Paper 91: Optimizing Fetal Health Prediction Using Machine Learning on Biocompatible Sensor Data

Abstract: Automatic Fetal Health Prediction plays a vital role in supporting early prenatal intervention through continuous and non-invasive monitoring. Recent advances in biocompatible sensors enable the safe long-term acquisition of physiological signals, which can be effectively analyzed using machine learning techniques. This study proposes a comprehensive machine learning pipeline for fetal health classification using the fetal_health.csv dataset from Kaggle, consisting of 2,126 samples and 22 cardiotocography-derived features related to fetal heart rate and uterine contractions. To address class imbalance and the presence of outliers, RobustScaler normalization was applied during the preprocessing stage. Feature selection was performed using Random Forest feature importance to identify the most relevant predictors. Two classification models, namely Random Forest (RF) and Support Vector Machine (SVM), were trained and evaluated using an 80:20 stratified train–test split. Experimental results indicate that the Random Forest model outperformed SVM, achieving an accuracy of 92.7% and a macro F1-score of 85.9%, compared with 88.97% accuracy and a macro F1-score of 79.85% for SVM. Moreover, Random Forest demonstrated superior performance in detecting minority classes (Suspect and Pathological), which are of high clinical significance. These findings suggest that the proposed pipeline is robust, interpretable, and suitable for integration with biocompatible sensor-based systems for real-time fetal health monitoring and clinical decision support.

Author 1: Yuli Wahyuni
Author 2: Hadiyanto
Author 3: Ridwan Sanjaya
Author 4: Nendar Herdianto

Keywords: Fetal health prediction; biocompatible sensors; machine learning; Random Forest; SVM

PDF

Paper 92: 6G Wireless Networks in the Generative AI Age: Overview, Techniques, and Future Trends

Abstract: As the world moves beyond the 5G era, the emergence of 6G promises deep integration with innovative communication paradigms and burgeoning technology trends, actualizing previously utopian concepts alongside increased technical complexity. Analytical models offer basic frameworks, but ML and AI now outperform them in solving complex problems, either by augmenting or supplanting model-based methodologies. The predominant focus of data-driven wireless research is on discriminative AI (DAI), which necessitates extensive real-world datasets. In contrast to DAI, Generative AI (GenAI) refers to generative models (GMs) that can identify the fundamental data distribution, patterns, and characteristics of the incoming data. Given these attractive characteristics, GenAI can either substitute or augment DAI methodologies in multiple contexts. This comprehensive tutorial-survey article begins with an overview of 6G and wireless intelligence by delineating potential 6G applications and services. The aspects presented in this paper support the integration of the Internet of Things with 6G networks through AI-based intelligent systems. This review concentrates on fundamental wireless research domains, encompassing network optimization, organization, and management. It examines the foundational learning principles of DAI and its methodologies, the application of DAI in wireless networks, and the utilization of GMs in 6G networks. Due to its comprehensive nature, this paper will act as a crucial reference for researchers and professionals exploring this dynamic and promising field.

Author 1: Sallar S. Murad
Author 2: Rozin Badeel
Author 3: Harth Ghassan Hamid
Author 4: Reham A. Ahmed

Keywords: GenAI; 6G; generative models; intelligent systems; wireless communication

PDF

Paper 93: Context-Aware Requirements Prioritization Using Integrated Regression Learning with Ordinal Neural Modeling and Roberta

Abstract: Effective prioritization of software requirements is essential for reducing project risks, optimizing resource allocation, and ensuring timely delivery. Conventional approaches such as Analytic Hierarchy Process (AHP) and MoSCoW often suffer from subjectivity, inefficiency, and poor scalability, making them unsuitable for large-scale projects. Although machine learning (ML)-based methods improve scalability, they frequently overlook critical contextual factors such as risk, urgency, implementation effort, and inter-requirement dependencies. To address this gap, this study proposes a new machine-learning-based, context-aware software requirements prioritization system. In the proposed system, a pre-trained RoBERTa model and an ordinal neural regression model are employed to infer contextual features including technical risk, complexity, urgency, business value, implementation effort, requirement stability, stakeholder criticality, security sensitivity, and inter-requirement dependencies directly from requirement statements. These inferred features are then used as inputs to a supervised multiple regression model (XGBoost), which generates continuous priority scores for each requirement, with higher scores reflecting higher implementation priority. To ensure transparency, SHAP-based feature attribution is applied for feature importance analysis, and a feedback integration mechanism allows stakeholders to iteratively refine prioritization outcomes, in turn retraining the core prioritization model. Empirical validation against three domain experts across five projects from different application domains demonstrates strong alignment, with Spearman rank correlations between 0.6 and 0.75, Mean Absolute Error (MAE) around 0.10, and Top 5 Match Rates up to 0.80. The results confirm that the proposed system provides a scalable, explainable, and context-aware requirements prioritization mechanism suitable for real-world software engineering projects.

Author 1: Prasis Poudel
Author 2: Noraini Che Pa
Author 3: Abdikadir Yusuf Mohamed

Keywords: Requirements prioritization; context-aware prioritization; machine learning; natural language processing; ordinal regression; dependency analysis; Explainable AI

PDF

Paper 94: Functions Inverse Using Neural Networks via Branch-Wise Decomposition and Newton Refinement

Abstract: In this work, a unified framework (using Neural Networks) is proposed to find the inverse of mathematical functions, spanning both simple one-to-one mapping and complex multivalued relations. The approach uses standard multilayer Neural Networks (NN) to approximate the functions’ inverse and introduces a deterministic branch-wise decomposition to handle multi-valued inverses. For single-valued (one-to-one) functions, a NN is directly trained on input-output pairs to learn the inverse mapping. For multi-valued functions, the function domain is decomposed into one-to-one branches, and a dedicated NN is trained for each branch. A refinement step using Newton’s method is applied to the NN output to further improve inversion accuracy. Across a broad set of benchmark functions, the proposed approach achieved low mean absolute error (MAE) and mean squared error (MSE) in recovering the true inverse, with high round-trip consistency. Newton refinement further reduces inversion error by rapidly converging to higher precision solutions. Notably, even for multi-valued inverse functions, each branch-specific NN can accurately recover the true inverse. Accordingly, standard NN, when combined with branch-wise decomposition and Newton refinement, can serve as an effective universal approximator for the inverse of functions across a spectrum of complexities.

Author 1: Abdullah Balamash

Keywords: Neural networks; function inverse; Newton method; branch-wise decomposition

PDF
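The Newton refinement step described above can be sketched directly: given a function f, its derivative, and an approximate inverse value (standing in for the neural network's output), a few Newton iterations on g(x) = f(x) - y drive the residual toward zero. The target function and starting estimate below are illustrative.

```python
def newton_refine(f, df, y_target, x0, steps=5):
    """Refine an approximate inverse value x0 (e.g. a NN's estimate of
    f^-1(y_target)) via Newton's method on g(x) = f(x) - y_target."""
    x = x0
    for _ in range(steps):
        x -= (f(x) - y_target) / df(x)
    return x

# Invert f(x) = x**2 on its positive branch; the starting value 1.3
# stands in for a coarse network prediction of sqrt(2).
f = lambda x: x * x
df = lambda x: 2 * x
x = newton_refine(f, df, y_target=2.0, x0=1.3)
```

Because the network already lands near the correct branch, the branch-wise decomposition keeps Newton's method inside the basin where its quadratic convergence applies.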

Paper 95: Hybrid Diagnostic Approaches Integrating Fuzzy Logic and Neural Networks for Parkinson’s Disease

Abstract: Parkinson’s Disease (PD) is a neurological condition with both motor and non-motor symptoms that requires early diagnosis and treatment. Hybrid diagnostics combining fuzzy logic and neural networks are more accurate and reliable. Existing diagnostic approaches for PD are insensitive to early-stage disease, subjective in assessing symptoms, and lack standardization. Such problems restrict treatment choices, thereby preventing favorable patient outcomes. In the PD Hybrid Diagnostic Approach (PD-HDA), fuzzy logic is utilized to address uncertainties in clinical data, and neural networks are employed to identify complex patterns in multimodal data. The PD-HDA design features structured selection and data fusion, which enhance diagnostic accuracy and constrain method variability. Images of hand tremors, gait analysis, and speech patterns are categorized using a CNN to reveal their complex properties. Together, fuzzy logic and CNNs enhance the classification of PD stages and patient responses to symptoms. The PD-HDA model increases accuracy, sensitivity, and specificity during testing. Such hybrid methods can be useful for early identification of PD and can support individualized care, leading to improved patient outcomes.

Author 1: Marwah Muwafaq Almozani
Author 2: Hüseyin Demirel

Keywords: Convolutional neural network; disease hybrid diagnostic; Parkinson's disease; fuzzy logic

PDF

Paper 96: Evaluating CTGAN-Generated Synthetic Data for Heart Disease Prediction: Fidelity, Predictive Utility, and Feature Preservation

Abstract: The increasing scarcity and sensitivity of clinical data necessitate the development of high-quality synthetic datasets. This study evaluated the ability of Conditional Tabular GAN (CTGAN) to generate synthetic heart disease data that preserves the statistical properties and predictive patterns of the Cleveland Heart Disease dataset. It assessed the fidelity of numerical and categorical features, preservation of pairwise correlations, and predictive utility using Logistic Regression and Random Forest classifiers. Dimensionality reduction analysis using PCA and t-SNE further measured the global similarity between the real and synthetic datasets. The results obtained show that CTGAN successfully reproduces the general distribution and correlations, especially for key features such as age, thalach, and oldpeak. However, some discrepancies remain in categorical attributes. Predictive modeling shows moderate transferability, indicating that synthetic data captures important patterns without completely replicating the original labels. These findings highlight the potential of CTGAN-generated synthetic data as a privacy-preserving alternative for benchmarking and early algorithm development, while emphasizing the importance of feature-level and prediction validation in synthetic data research.

Author 1: Wan Aezwani Wan Abu Bakar
Author 2: Nur Laila Najwa Josdi
Author 3: Mustafa Man
Author 4: Evizal Abdul Kadir

Keywords: Conditional Tabular GAN (CTGAN); correlation analysis; dimensionality reduction; feature importance; heart disease prediction; predictive utility; synthetic data; tabular data fidelity

PDF

Paper 97: Epidemic Modeling with a Hybrid RF-LSTM Method for Healthcare Demand Prediction

Abstract: Accurate resource demand forecasts are necessary for sustainable healthcare systems to preserve flexibility and efficiency as well as to provide services in a professional manner. In this work, we propose an integrated Random Forest/Long Short-Term Memory (RF-LSTM) model for predicting Saudi Arabia's national healthcare resource demand. It combines non-linear feature extraction and temporal sequence learning. The integrated model employs governmental epidemiological and operational data from 2020 to 2024 to capture both short-term and long-term volatility and sustainability trends. The results demonstrate significant improvements in predictive accuracy compared with single-model baselines, such as Autoregressive Integrated Moving Average (ARIMA), Random Forest (RF), and Long Short-Term Memory (LSTM), with reductions in Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) of up to 22% and 18% compared with ARIMA, and of 12% and 9% relative to the best single model (LSTM), respectively. A statistical analysis using one-way ANOVA confirmed the robustness of the hybrid method. Furthermore, residual plots were examined to verify model assumptions and visually assess the uniformity of prediction errors, thereby validating the results. These findings suggest that integrated AI-based prediction models can effectively facilitate capacity planning, enhance resource allocation, and contribute to achieving the objectives of Saudi Vision 2030 for a resilient, data-driven healthcare system.

Author 1: Budor Alshammari
Author 2: Bassam Zafar

Keywords: Predictive analytics; hybrid modeling; digital health; Saudi Arabia; COVID-19; decision support systems

PDF

Paper 98: Intelligent Platform for Employee Retention Prediction

Abstract: Employee retention is a critical challenge for organizations, since attrition raises recruitment costs, erodes domain knowledge, and destabilizes the workforce. Presented here is a platform-based intelligent employee retention prediction system that serves as a real-time HR decision support tool. As part of the research, a Feedforward Neural Network was initially trained and tested on structured employee data to confirm feature relevance and predictive viability, recording an accuracy of 88.7%. The final implementation integrates an AI-based Chat Widget with a modular pipeline system that utilizes an LLM to perform analytical reasoning on employee attributes and provide human-understandable explanations that aid HR decisions. The architecture separates the user interaction layer (Agent) from the prediction and reasoning logic (Pipeline), making the system scalable, interpretable, and easily integrable with organizational workflows. The proposed platform shows how validated predictive models and LLM-provided reasoning can be integrated to deliver actionable and explainable employee retention insights.

Author 1: Medha Wyawahare
Author 2: Milind Rane
Author 3: Ashish Rodi
Author 4: Samarth Arole
Author 5: Aryan Mundra

Keywords: Employee retention; feedforward neural network; Large Language Model; HR analytics; intelligent platform

PDF

Paper 99: Spatial Classification of Fertilizer Requirements Using Fuzzy C-Means on Shallot Agricultural Land

Abstract: Spatial variability in soil fertility constrains productivity in intensive shallot farming, yet fertilizer is frequently applied uniformly across fields. This practice results in nutrient inefficiencies, increased costs, and heightened environmental risks. This study introduces a fertilizer requirement mapping framework utilizing Fuzzy C-Means (FCM) clustering, a machine learning technique for data grouping, applied to in-situ measurements of soil Nitrogen (N), Phosphorus (P), and Potassium (K). The framework was evaluated in a 500 × 500 m shallot field in Srikayangan, Kulon Progo, Indonesia, subdivided into 10 × 10 m management blocks suitable for smallholder operations. Soil NPK levels were measured using IoT sensor nodes and georeferenced with GNSS, while high-resolution RGB imagery from a UAV provided spatial context. Normalized NPK data were clustered with FCM to delineate fertility zones exhibiting nutrient differences. To operationalize clustering results, a nutrient-priority decision logic identified the most limiting nutrient (N, P, or K) for each block. Fertilizer recommendation points were visualized on a UAV-derived orthomosaic map to facilitate interpretation and field application. The results indicate that this approach effectively captures gradual fertility transitions and produces actionable fertilizer zones for site-specific nutrient management (SSNM) in smallholder systems. The study demonstrates the practical integration of fuzzy clustering, IoT-based soil sensing, and UAV mapping to inform precision agriculture decisions.

Author 1: Roghib Muhammad Hujja
Author 2: Ahmad Ashari
Author 3: Danang Lelono
Author 4: Agus Prasekti

Keywords: Fuzzy C-Means (FCM); soil fertility zoning; NPK (Nitrogen, Phosphorus, Potassium); fertilizer recommendation; precision agriculture; Site-Specific Nutrient Management (SSNM); IoT (Internet of Things); UAV (Unmanned Aerial Vehicle)

PDF
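The gradual fertility transitions above come from Fuzzy C-Means assigning each measurement a degree of membership in every cluster rather than a hard label. A minimal sketch of the membership update for one-dimensional readings (the toy values and two-cluster setup are illustrative; the study clusters normalized three-dimensional NPK vectors):

```python
def fcm_memberships(points, centers, m=2.0):
    """FCM membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)),
    so each point belongs partially to every cluster center."""
    U = []
    for p in points:
        d = [max(1e-12, abs(p - c)) for c in centers]   # avoid divide-by-zero
        row = [1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
               for j in range(len(centers))]
        U.append(row)
    return U

# Toy normalized nitrogen readings against low/high fertility centers.
U = fcm_memberships([0.1, 0.45, 0.9], centers=[0.2, 0.8])
# The middle reading splits its membership, modeling a transition zone.
```

A full FCM run alternates this step with recomputing the centers from the membership-weighted data until convergence; blocks with mixed memberships are exactly the gradual transitions the zoning map captures.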

Paper 100: CleanCity IoT: A Vehicle-Mounted Platform for Real-Time Urban Air-Quality Monitoring and Forecasting in Resource-Constrained African Cities

Abstract: Urban air pollution is a growing public-health challenge in African cities, yet traditional monitoring stations are sparse and expensive. The paper presents CleanCity IoT, a deployed, low-cost, vehicle-mounted air-quality platform that combines IoT sensors, GSM connectivity, cloud aggregation, and machine learning to produce near-real-time exposure maps and 2-hour forecasts for multiple pollutants. Each device integrates low-cost sensors for PM2.5, PM10, NO₂, O₃, SO₂, and CO₂, alongside temperature and humidity. Measurements are geotagged and transmitted over mobile networks from vehicles to a cloud backend, where data are validated, stored, and visualized through a user-friendly dashboard that also issues automated alerts and periodic reports. Using a dataset collected in Kigali and secondary cities via routine vehicular routes, the paper trains a multivariate time-series model to forecast short-horizon pollutant levels, supporting proactive health guidance and regulatory action. The paper reports system performance in terms of latency, uptime, coverage, and data quality, and evaluates forecast accuracy using MAE/RMSE/MAPE and event-oriented metrics for spike prediction. Results indicate that CleanCity IoT provides reliable, scalable, and cost-effective urban air-quality intelligence, closing key gaps in spatiotemporal coverage while enabling citizen access, policy support, and social impact. The platform demonstrates a practical blueprint for African cities to operationalize air-quality intelligence using existing mobile infrastructure and locally developed technology.

Author 1: Eric Nizeyimana
Author 2: Damien Hanyurwimfura
Author 3: Gabriel Uwanyirigira
Author 4: Bonaventure Karikumutima
Author 5: Jimmy Nsenga
Author 6: Irene Niyonambaza Mihigo

Keywords: CleanCity IoT; air quality; mobile sensing; multivariate forecasting; spike detection

PDF
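The point-forecast metrics named above (MAE, RMSE, MAPE) have standard definitions; a compact sketch with invented pollutant readings:

```python
import math

def forecast_errors(actual, predicted):
    """Mean Absolute Error, Root Mean Square Error, and Mean Absolute
    Percentage Error for a point forecast."""
    n = len(actual)
    errs = [p - a for a, p in zip(actual, predicted)]
    mae = sum(abs(e) for e in errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mape = 100.0 * sum(abs(e / a) for a, e in zip(actual, errs)) / n
    return mae, rmse, mape

# Toy PM2.5 readings (actual vs. 2-hour-ahead forecast).
mae, rmse, mape = forecast_errors([10.0, 20.0, 40.0], [12.0, 18.0, 40.0])
```

RMSE penalizes large misses more heavily than MAE, which is why spike-oriented evaluation also needs the event metrics the paper adds alongside these averages.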

Paper 101: From Consensus to Chaos: A Vulnerability Assessment of the RAFT Algorithm

Abstract: In recent decades, the RAFT distributed consensus algorithm has become a main pillar of the distributed systems ecosystem, ensuring data consistency and fault tolerance across multiple nodes. Although RAFT is well known for its simplicity, reliability, and efficiency, its security properties are not fully recognized, leaving implementations vulnerable to different kinds of attacks and threats, which can transform the RAFT harmony of consensus into a chaos of data inconsistency. This paper presents a systematic security analysis of the RAFT protocol, with a specific focus on its susceptibility to security threats such as message replay attacks and message forgery attacks. We examine how a malicious actor can exploit the protocol's message-passing mechanism to reintroduce old messages, disrupting the consensus process and leading to data inconsistency. The practical feasibility of these attacks is examined through simulated scenarios, and the key weaknesses in RAFT's design that enable them are identified. To address these vulnerabilities, a novel approach based on cryptography, authenticated message verification, and freshness checks is proposed. This proposed solution provides a framework for enhancing the security of RAFT implementations and guiding the development of more resilient distributed systems.

Author 1: Tamer Afifi
Author 2: Abdelfatah Hegazy
Author 3: Ehab Abousaif

Keywords: RAFT; consensus protocol; security; distributed systems; message forgery; replay attacks; cryptography

PDF
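The countermeasures named above (authenticated message verification plus a freshness check) can be sketched with an HMAC tag and a nonce cache. The shared key, message layout, and nonce scheme below are illustrative assumptions, not the paper's concrete protocol.

```python
import hashlib
import hmac

SECRET = b"cluster-shared-key"   # placeholder; real systems use managed keys

def sign(term, index, payload, nonce):
    """Build a RAFT-style message and its HMAC-SHA256 authentication tag."""
    msg = f"{term}|{index}|{payload}|{nonce}".encode()
    return msg, hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify(msg, tag, seen_nonces):
    """Accept only authentic AND fresh messages: a valid tag on an
    already-seen nonce is rejected, defeating straight replay."""
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False                      # forgery: tag does not match
    nonce = msg.rsplit(b"|", 1)[-1]
    if nonce in seen_nonces:
        return False                      # replay: nonce already consumed
    seen_nonces.add(nonce)
    return True

seen = set()
msg, tag = sign(term=3, index=42, payload="AppendEntries", nonce="n-001")
first = verify(msg, tag, seen)     # fresh, authentic: accepted
replay = verify(msg, tag, seen)    # identical message replayed: rejected
```

Binding the term and log index into the authenticated bytes also stops an attacker from splicing a valid tag onto a different log position.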

Paper 102: EvoNorm-GAN for Adaptive and Interpretable Detection of Ransomware in Windows PE Files

Abstract: Ransomware remains a key cybersecurity issue because of its growing use of obfuscation, polymorphism, and constantly changing attack patterns that repeatedly circumvent conventional defenses. Traditional systems and standard deep learning may fail, lowering accuracy and increasing false positives. To address these shortcomings, this work proposes EvoNorm-GAN, a dynamic adversarial detection architecture that incorporates Feature-Wise Dynamic Normalization (FDN) and a Generative Adversarial Network to analyze ransomware in Windows Portable Executable (PE) files in a highly flexible manner. The generator creates ransomware variants, while the discriminator classifies files using a Wasserstein loss. EvoNorm-GAN is implemented in TensorFlow with the Keras backend and tested on a large-scale Windows PE File Analysis Dataset of 62,200 samples, with 31,100 benign and 31,100 malicious examples. The experimental findings indicate that EvoNorm-GAN achieves state-of-the-art results of 98.2% accuracy, 98.4% precision, 98.1% recall, 97.4% F1-score, and 0.99 AUC, which are about 1 to 3 percent higher than traditional CNN, RNN, and ensemble-based models. To enhance transparency and trust, SHAP-based explainable AI is integrated into EvoNorm-GAN, highlighting key PE file features such as Section Entropy and SizeOfCode that drive classification decisions. By combining adaptive learning, adversarial sample generation, and analyst-friendly interpretability into a unified framework, EvoNorm-GAN delivers an efficient, robust, and transparent ransomware detection system. Its scalable and resilient design makes it well-suited for real-world deployment in endpoint protection and cybersecurity environments, providing reliable detection of evolving ransomware threats.

Author 1: G Badrinath
Author 2: Arpita Gupta

Keywords: Ransomware detection; EvoNorm-GAN; feature-wise dynamic normalization; portable executable files; adversarial learning; Explainable AI

PDF

Paper 103: An Interpretable Analytical Intelligence Architecture Delivering Reliable Detection of Software Defect Instances

Abstract: Software defect prediction plays a crucial role in improving software quality, yet existing approaches still suffer from severe class imbalance, redundant feature spaces, weak generalization, and limited interpretability, making their adoption in real development pipelines difficult. Many current models rely on black-box deep learning architectures or conventional classifiers that fail to identify minority defects or explain the reasoning behind their decisions. To overcome these limitations, this study introduces a novel framework named Contrastive Siamese Defect Learning–Integrated Explainable Neural Optimization System (CSDL-SEN-XAI), which integrates contrastive metric learning, enzyme-inspired optimization, and transparent explainability. The method combines SMOTE-based balancing, the Enzyme Action Optimizer for joint feature–hyperparameter optimization, and a Siamese Neural Network trained using contrastive loss to learn discriminative similarity embeddings. The entire workflow is implemented using Python, enabling efficient scalability and reproducibility. Experimental analysis reveals that the proposed model achieves an accuracy of 95.5%, a recall of 96.2%, and an F1-score of 95.5%, outperforming traditional models such as Random Forest, SVM, and CNN by margins ranging from 7% to 15% under identical evaluation settings. SHAP and Integrated Gradients further demonstrate that the model provides clear global and instance-level explanations, highlighting influential software metrics and strengthening the interpretability of predictions. Overall, the results confirm that CSDL-SEN-XAI delivers superior predictive performance, stable optimization, balanced learning, and transparent defect interpretation, offering a reliable and interpretable solution suitable for practical software engineering environments. Future work will explore cross-project defect prediction and the integration of lightweight optimization strategies to further enhance scalability.

Author 1: Srinivasa Rao Katragadda
Author 2: Sirisha Potluri

Keywords: Contrastive learning; explainable artificial intelligence; feature optimization; Siamese Neural Network; software defect prediction

PDF

Paper 104: A Hybrid CNN-BiGRU-GAN Framework for Enhanced Automated Analysis of Cervical Cancer in Medical Imaging

Abstract: Cervical cancer screening requires reliable automated systems capable of overcoming variability in staining, morphology, and limited annotated data, which often undermine the performance of traditional machine learning and deep learning approaches. Existing techniques commonly rely on single-modality feature extraction or static fusion, resulting in weak generalization, class imbalance sensitivity, and limited interpretability in clinical environments. Addressing these gaps, the research introduces DiagnoFusionNet, a hybrid CNN-BiGRU-GAN framework that integrates spatial features from Convolutional Neural Network (CNN), contextual dependencies from Bidirectional Gated Recurrent Unit (BiGRU), and Generative Adversarial Network (GAN)-generated samples to enhance data diversity and correct minority-class deficiencies. The methodology incorporates an Adaptive Triple-Stage Feature Fusion mechanism that dynamically recalibrates modality contributions using discriminator-informed attention, ensuring discriminative and clinically aligned feature representations. Experiments on the SIPaKMeD dataset demonstrate strong performance with 97.89% accuracy, 97.69% precision, 96.95% recall, 96.89% F1-score, and a 0.99 AUC, supported by GAN evaluation metrics, including an FID of 18.3, IS of 3.91, and SSIM of 0.92. Ablation analysis confirms the dominant contribution of the adaptive fusion module, while t-SNE clustering and confusion-matrix inspection highlight effective separability and reduced misclassification. Model development and experimentation were executed using Python, TensorFlow, Keras, OpenCV, and Scikit-learn on GPU-enabled environments. The framework provides a clinically interpretable, data-efficient, and scalable solution for automated cervical cytology analysis in real-world and resource-limited settings.

Author 1: Donepudi Rohini
Author 2: M Kavitha

Keywords: Cervical cancer detection; DiagnoFusionNet; medical image analysis; Adaptive Triple-Stage Feature Fusion; generative adversarial networks

PDF

Paper 105: Reinforcement Learning Framework for Missing Data Imputation in IoT Environments

Abstract: Continuous, accurate meteorological sensing underpins many Internet of Things (IoT) applications, from smart irrigation and urban heat-island monitoring to early weather warnings, but data from distributed stations are often disrupted by sensor faults, power loss, or communication noise, causing missing values that degrade analytics and decisions. Existing data imputation methods lose accuracy on small or irregular datasets and adapt poorly to dynamic IoT settings. This study proposes a reinforcement learning (RL)-based framework for missing-data imputation that treats each gap as a sequential decision problem. The authors develop and compare three RL architectures, two Q-table methods and one Deep Q-learning model, to learn temporal dependencies and optimize imputation via experience. A second objective is to assess the feasibility and performance of RL for imputation in domains related to robotics and autonomous systems, where RL remains less explored. A third objective is to validate the methods on real-world datasets and simulations, supported by a user-friendly graphical interface for visualization and performance monitoring. The proposed RL imputers outperform state-of-the-art methods in accuracy and robustness: the best RL configuration cuts MSE/MAE by 8.6%/5.9% vs. K-Nearest Neighbors’ algorithm (KNN), 74.4%/75.6% vs. autoencoder, 79.6%/79.9% vs. clustering, 89.0%/83.7% vs. mean, 89.5%/83.3% vs. median, and 94.2%/89.3% vs. most-frequent, while raising the coefficient of determination (R²) by +0.023, +0.532, +0.123, +0.407, +0.436, and +0.932, respectively. These findings highlight RL as an effective paradigm for intelligent data restoration in IoT-based sensing systems.
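The core idea (treating each gap as a sequential decision and scoring candidate fills by reconstruction error) can be sketched with a tiny tabular Q-learner. The design below, with one Q-row per gap, three generic fill strategies, and an error-based reward, is an illustrative assumption of this sketch, not the paper's actual architecture:

```python
import random

# Hypothetical tabular Q-learning sketch for gap filling in a sensor series:
# each missing index gets a Q-row over candidate fill strategies, and the
# reward is the negative absolute error against held-out ground truth.
ACTIONS = ["previous", "mean", "interpolate"]

def fill(series, i, action):
    """Apply one candidate imputation strategy at index i."""
    if action == "previous":
        return series[i - 1]
    if action == "mean":
        known = [v for v in series if v is not None]
        return sum(known) / len(known)
    # linear interpolation between the adjacent known neighbours
    return (series[i - 1] + series[i + 1]) / 2

def train(series, truth, episodes=500, alpha=0.1, eps=0.2):
    gaps = [i for i, v in enumerate(series) if v is None]
    q = {i: {a: 0.0 for a in ACTIONS} for i in gaps}   # one Q-row per gap
    for _ in range(episodes):
        for i in gaps:
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(q[i], key=q[i].get))
            reward = -abs(fill(series, i, a) - truth[i])  # error-based reward
            q[i][a] += alpha * (reward - q[i][a])         # bandit-style update
    return q

series = [1.0, 2.0, None, 4.0, 10.0]
truth  = [1.0, 2.0, 3.0, 4.0, 10.0]
q = train(series, truth)
best = max(q[2], key=q[2].get)
```

On this toy series the interpolation action earns the highest Q-value because it exactly recovers the held-out value; the paper's Deep Q-learning variant would replace the table with a network over temporal features.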

Author 1: Ahmed M. Salama Salem
Author 2: Sayed AbdelGaber A
Author 3: Ahmed E. Yakoub

Keywords: Data imputation; reinforcement learning; machine learning; deep learning; Internet of Things (IoT)

PDF

Paper 106: Hierarchical Swin Transformer Encoder-Decoder Architecture for Robust Cerebrovascular Abnormality Segmentation in Multimodal MRI

Abstract: This study presents a hierarchical Swin Transformer–based framework for automated segmentation of cerebrovascular structures using multimodal magnetic resonance imaging. The proposed architecture integrates patch partitioning, linear embedding, hierarchical windowed self-attention, and a multilevel encoder–decoder design to address the inherent challenges of vascular segmentation, including irregular morphology, small-caliber vessel visibility, and intensity variability across MRI modalities. A multimodal fusion module enhances the ability to capture complementary anatomical and vascular information, while skip-connected decoding ensures the preservation of fine-grained spatial features essential for accurate vessel reconstruction. The model was evaluated using a combination of open-access datasets and demonstrated superior performance across multiple quantitative metrics, achieving higher Dice similarity, precision, sensitivity, and specificity compared to existing state-of-the-art methods. Qualitative analysis further revealed accurate recovery of major arterial pathways, distal branches, and complex vascular topologies, confirming the model’s robustness in both global and localized segmentation tasks. The results highlight the discriminative strength of hierarchical attention mechanisms and emphasize their role in improving cerebrovascular characterization. Overall, the proposed framework offers a reliable and anatomically coherent approach for vascular segmentation, with strong potential for integration into clinical neuroimaging workflows and advanced cerebrovascular research applications.

Author 1: Nazbek Katayev
Author 2: Zhanel Bakirova
Author 3: Assel Kaziyeva
Author 4: Aigerim Altayeva
Author 5: Karakat Zhanabaykyzy
Author 6: Daniyar Sultan

Keywords: Cerebrovascular segmentation; Swin Transformer; multimodal MRI; deep learning; vascular imaging; hierarchical attention; encoder–decoder architecture; medical image analysis

PDF

Paper 107: A Multi-Scale ROI-Aligned Deep Learning Framework for Automated Road Damage Detection and Severity Assessment

Abstract: This study presents a multi-scale ROI-aligned deep learning framework designed to advance automated road damage detection and severity assessment using high-resolution roadway imagery. The proposed architecture integrates hierarchical feature extraction, a road-damage proposal network, and refined ROI-aligned encoding to capture both fine-grained local anomalies and broader contextual patterns across diverse pavement conditions. Leveraging the RDD2020 dataset, the model effectively identifies multiple defect categories, including longitudinal cracks, transverse cracks, alligator cracking, and potholes, achieving strong convergence behavior and stable generalization across training and validation phases. Quantitative evaluations reveal high detection accuracy and smooth loss reduction over 500 learning epochs, while qualitative visualizations demonstrate precise localization and robust classification of damages under varying environmental and structural complexities. The framework consistently maintains performance in challenging scenes featuring shadows, cluttered backgrounds, low contrast, or irregular defect geometries, underscoring the benefits of multi-scale fusion and ROI alignment mechanisms. Although slight fluctuations in validation metrics indicate the presence of inherently difficult samples, the overall results affirm the model’s capability to support large-scale, real-time road monitoring systems. The findings highlight the potential of the proposed approach to significantly enhance intelligent transportation infrastructure, offering an efficient and reliable solution for proactive pavement maintenance and improved roadway safety.

Author 1: Bakhytzhan Orazaliyevich Kulambayev
Author 2: Olzhas Muratuly Olzhayev
Author 3: Aigerim Bakatkaliyevna Altayeva
Author 4: Zhanna Zhunisbekova

Keywords: Road damage detection; deep learning; ROI alignment; multi-scale features; severity assessment; RDD2020 dataset; intelligent transportation systems

PDF

Paper 108: Enhanced Mobile GCViT Architecture for Efficient Image Classification with Application to Plant Disease Detection

Abstract: Efficient and accurate automated diagnosis of plant diseases remains a challenge for deployment on resource-constrained edge devices. While hybrid vision transformers like GCViT balance accuracy and efficiency, they often lose critical high-frequency details such as fine lesion textures and leaf margins that are essential for fine-grained disease classification. To address this gap, we propose the Enhanced High-Frequencies Global Context Visual Transformer (EHF-GCViT), a novel hybrid architecture designed to explicitly enhance high-frequency feature retention within a lightweight framework. The core innovations of EHF-GCViT include: first, a customized, lightweight convolutional refinement block based on depthwise separable operations that acts as a learnable pre-processor to preserve discriminative spatial details before tokenization; second, a gated convolutional block that replaces the final transformer stage, reducing the model memory footprint from 46.36 MB to 34.48 MB; and third, an adaptive normalization strategy to stabilize the training of the integrated heterogeneous layers. Extensive experiments on the PlantVillage tomato disease dataset demonstrate that EHF-GCViT achieves superior performance, surpassing the baseline GCViT, standard Vision Transformers (ViT), and CNN benchmarks (e.g., ResNet) in accuracy, precision, recall, and F1-score. These results validate that explicitly modeling high-frequency features within a hybrid transformer design provides a more memory-efficient and accurate backbone for practical plant disease detection systems targeting edge deployment.

Author 1: Mohamed Jawher Bahrouni
Author 2: Faouzi Benzarti
Author 3: Mohamed Touati
Author 4: Sadok Ben Yahia

Keywords: Hybrid transformer architecture; convolutional refinement block; gated convolution; edge devices; high-frequency features; tomato leaf disease classification

PDF

Paper 109: Modeling Mixed Gas Reactions in Air Pollution: Stoichiometry, Kinetics, and Hazard Assessment

Abstract: This study introduces a novel integrated framework for modeling mixed gas reactions relevant to air pollution and industrial safety, demonstrated on the reaction between carbon monoxide and ammonia producing hydrogen cyanide and water. The approach couples closed-form stoichiometric mass balances with a transport-corrected kinetic ordinary differential equation system and a Bayesian logistic hazard classifier that incorporates expert-informed priors. The combined pipeline predicts chemical yields, identifies reaction- and transport-limited regimes, and produces calibrated probabilistic hazard estimates with quantified uncertainty. Validation on synthetic and near-experimental datasets shows reproducible parameter recovery and strong classifier performance, with an area under the curve of approximately 0.93 on held-out data. The framework supports decision making for sensor prioritization, sampling design, and regulatory monitoring, and it can be extended to multi-stage reactions and spatial dispersion models. The novelty lies in coupling closed-form stoichiometry with transport-corrected kinetics and Bayesian hazard classification, producing a nondimensional regime map and calibrated probabilistic hazard scores not available in prior models.
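For the single reaction CO + NH3 -> HCN + H2O, the kinetic side of such a pipeline reduces to integrating an elementary second-order rate law whose 1:1 stoichiometry conserves mass in closed form. The sketch below uses explicit Euler with an illustrative rate constant and initial concentrations, not the paper's fitted parameters:

```python
# Hedged sketch: second-order kinetics for CO + NH3 -> HCN + H2O.
# k, dt, and the initial concentrations are illustrative assumptions.
def simulate(co0, nh3_0, k=0.5, dt=1e-3, t_end=10.0):
    co, nh3, hcn = co0, nh3_0, 0.0
    t = 0.0
    while t < t_end:
        r = k * co * nh3          # elementary second-order rate law
        co  -= r * dt             # reactants consumed 1:1
        nh3 -= r * dt
        hcn += r * dt             # HCN (and H2O) formed 1:1
        t += dt
    return co, nh3, hcn

co, nh3, hcn = simulate(1.0, 0.8)
# Stoichiometric mass balance holds at every step: co + hcn == co0,
# nh3 + hcn == nh3_0, and the limiting reagent (NH3) caps the yield.
```

The closed-form stoichiometric balance gives an immediate consistency check on the integrator: the sum of each reactant and the product must stay constant throughout the run.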

Author 1: T Somasekhar
Author 2: Rekha B. Venkatapur

Keywords: Stoichiometric reaction modeling; mixed-gas kinetics; plug-flow transport correction; Bayesian hazard classification; air pollution risk assessment; environmental process safety; probabilistic uncertainty quantification

PDF

Paper 110: VidAvDetect: A Deepfake-Inspired Vision Transformer Approach for Detecting Real Humans vs. AI-Avatars in Video Streams

Abstract: The pace of advancement in Generative AI has made it possible to create highly realistic synthetic identities in the form of avatars of non-existent persons, paving the way for a paradigm beyond state-of-the-art deepfake attacks, which manipulate the identities of real people. This rapidly emerging trend poses a critical challenge to digital media forensics: deciding whether a facial identity observed in a video clip represents a real human or a fully synthetic identity created using advanced Generative AI tools. To address this gap, we introduce VidAvDetect, a deepfake-inspired Vision Transformer approach specifically designed to discriminate real human faces from AI-generated avatars in video streams, addressing a novel identity-existence verification task. The proposed system integrates efficient frame sampling, robust facial preprocessing, patch-based embeddings, and global structural modeling through a transformer encoder, enabling the detection of subtle geometric and textural regularities characteristic of synthetic identities. Experimental results demonstrate strong performance, with training accuracy reaching 97–98%, video-level accuracy of 95.1%, a macro F1-score of 0.944, and a ROC-AUC of 0.991, confirming the model’s robustness across heterogeneous real, manipulated, and fully synthetic datasets. By moving beyond manipulation detection to focus on identity-existence verification, VidAvDetect establishes a new methodological direction for transparency, regulation, and trust in modern digital media environments where AI-generated avatars increasingly resemble real humans.
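The patch-based embedding step mentioned in the abstract can be illustrated in a few lines of NumPy: a face crop is split into non-overlapping patches, each flattened and linearly projected into a token. The 224x224 input, 16-pixel patches, and 64-dimensional projection are generic ViT conventions assumed for this sketch, not VidAvDetect's actual configuration:

```python
import numpy as np

def patchify(frame, patch=16):
    """Split an HxWxC frame into non-overlapping flattened patches."""
    h, w, c = frame.shape
    assert h % patch == 0 and w % patch == 0
    gh, gw = h // patch, w // patch
    # (gh, ph, gw, pw, c) -> (gh, gw, ph, pw, c) -> (tokens, patch*patch*c)
    return (frame.reshape(gh, patch, gw, patch, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(gh * gw, patch * patch * c))

rng = np.random.default_rng(0)
frame = rng.random((224, 224, 3)).astype(np.float32)   # stand-in face crop
tokens = patchify(frame)                               # (196, 768)
proj = rng.normal(size=(768, 64)).astype(np.float32)   # learnable in practice
embedded = tokens @ proj                               # (196, 64) token embeddings
```

In a real model the projection matrix is learned and a class token plus positional embeddings are appended before the transformer encoder.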

Author 1: Btissam Acim
Author 2: Hamid Ouhnni
Author 3: Nassim Kharmoum
Author 4: Soumia Ziti

Keywords: Vision transformer; deepfake; Artificial Intelligence (AI); Generative AI; AI Avatar; video streams

PDF

Paper 111: Dynamic Sentiment Analysis on the Emergence of Pre-Trained Generative Model-Based Applications in Indonesia

Abstract: The emergence of pre-trained generative model–based applications has intensified sentiment dynamics within Indonesia’s multi-platform digital ecosystem, where sentiment intensity and temporal fluctuations occur simultaneously. To overcome these challenges, this study extends IndoBERT by incorporating a time-aware tokenization mechanism within a fine-grained dynamic sentiment analysis framework. This mechanism is designed to explicitly capture the evolution of sentiment over time. Instead of relying on external embeddings or implicit timestamps, temporal information is injected directly into the IndoBERT tokenizer through explicit temporal tokens, enabling end-to-end temporal adaptation during fine-tuning. We utilized a large-scale dataset harvested from various platforms—including TikTok, Twitter (X), YouTube, and forums—alongside AI-generated content from Gemini, ChatGPT, and Copilot. The dataset was annotated into five fine-grained sentiment classes: very positive, positive, neutral, negative, and very negative. The experimental evaluation demonstrates that the proposed time-aware IndoBERT model attains an average accuracy of 96.38%, exceeding the performance of the baseline BERT and RoBERTa models. Furthermore, ablation studies validate that the inclusion of time-aware tokenization yields quantifiable performance gains, proving that explicit temporal encoding refines sentiment sensitivity and offers sharper insights into the shifting public opinion in Indonesia.
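Injecting explicit temporal tokens can be as simple as prepending a coarse time bucket to each sample before tokenization, so the tokenizer treats time as ordinary vocabulary. The quarterly `[TyyyyQq]` format below is a hypothetical stand-in for the paper's actual token scheme:

```python
from datetime import date

def add_time_token(text, when):
    """Prepend an explicit temporal token (coarse quarterly bucket)."""
    bucket = f"[T{when.year}Q{(when.month - 1) // 3 + 1}]"
    return f"{bucket} {text}"

# An Indonesian sample tagged with its posting date.
tagged = add_time_token("aplikasi ini sangat membantu", date(2024, 5, 17))
```

Registering such bucket strings as special tokens in the tokenizer vocabulary then lets the model learn time-conditioned sentiment shifts end-to-end during fine-tuning.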

Author 1: Frans Mikael Sinaga
Author 2: Jefri Junifer Pangaribuan
Author 3: Kelvin
Author 4: Ferawaty
Author 5: Andree Emmanuel Widjaja

Keywords: Dynamic sentiment; fine-grained; IndoBERT; multi-platform big data; sentiment analysis

PDF

Paper 112: Data-Driven Insights for Moroccan Airports: PCA and Clustering to Enhance Operational Performance

Abstract: Following the trend of increasing complexity among systems, and in an attempt to meet air passengers’ demands for higher-quality service, this paper contributes to this stream of research by studying the operational efficiency of Moroccan airports through a novel multivariate approach. This research examines five performance metrics: baggage handling time, police screening time, customs processing time, passenger traffic, and flight delays. Using Principal Component Analysis (PCA) with K-Means clustering, this paper aims to identify the causes of operational variability, assess their significance for performance management, and group flights with similar operational profiles. Applying these techniques to Moroccan airport data, this study reveals hidden patterns within interrelated airport activities that are, in most cases, neglected by traditional measurement systems. The findings advance multivariate analysis methodologies for transport systems as well as practical airport operations management, and ultimately inform coordinated resource-allocation strategies for systemic benefit and passenger utility. Through the use of PCA and K-Means on unreleased data from airports in Morocco, this paper is the first to offer a full multivariate study of airports in the North African region. In contrast to standard monitoring systems, which treat metrics as isolated entities, the study concurrently analyzes the dependencies among five key measures, discloses latent operational patterns, and promotes the formulation of context-based management policies suitable for an immature aviation market.
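A minimal version of the PCA-plus-K-Means pipeline can be run on synthetic stand-ins for the five metrics, since the Moroccan airport data itself is unreleased. The group means, scales, and two-cluster setup below are illustrative assumptions only:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Columns: baggage handling, police screening, customs processing (minutes),
# passenger traffic (count), flight delay (minutes) -- fabricated values.
smooth   = rng.normal([10, 5, 4, 200, 5],   [1, 1, 1, 20, 1], size=(60, 5))
strained = rng.normal([25, 12, 9, 500, 20], [2, 2, 2, 40, 3], size=(60, 5))
X = StandardScaler().fit_transform(np.vstack([smooth, strained]))

pca = PCA(n_components=2).fit(X)       # compress the correlated metrics
scores = pca.transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
```

Standardization before PCA matters here because the metrics mix units (minutes vs. passenger counts); clustering in the PCA score space then groups flights by overall operational profile rather than by any single metric.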

Author 1: H. Fatih
Author 2: A. Bentaleb
Author 3: M. Lazaar
Author 4: B. Bentalha

Keywords: Principal Component Analysis (PCA); airport performance; transportation systems; K-means clustering; operational optimization; airport efficiency; airport operations management; air traffic; passenger experience

PDF

Paper 113: Explainable AI Models for Assessing Short-Circuit Propagation in Fire-Exposed Cable Bundles

Abstract: Fire-induced short-circuit propagation in cable bundles poses significant safety risks in electrical installations, nuclear facilities, and transportation systems. Traditional fault detection methods often lack interpretability, hindering root cause analysis and preventive maintenance strategies. This paper presents novel explainable artificial intelligence (XAI) models for predicting and analyzing short-circuit propagation in fire-exposed cable bundles. We develop a hybrid framework combining gradient boosting machines with SHAP (SHapley Additive exPlanations) values to provide interpretable predictions of time-to-short-circuit and failure modes. Our approach integrates thermal imaging data, cable physical properties, and environmental conditions from controlled fire tests conducted on IEEE 383-qualified cables. The proposed XAI models achieve 94.7% accuracy in predicting short-circuit occurrence within 5-second windows while providing human-interpretable feature importance rankings. Experimental validation using the NUREG/CR-6931 dataset demonstrates that insulation temperature gradient, cable bundle density, and oxygen concentration are the three most critical factors influencing short-circuit propagation. The explainable framework enables fire safety engineers to understand model decisions, identify vulnerable cable configurations, and optimize protection strategies. Our results show a 23% improvement in early fault detection compared to conventional black-box deep learning approaches, with significantly enhanced model transparency for safety-critical applications.
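The gradient-boosting half of such a framework can be sketched on synthetic data with the three features the abstract identifies as most critical. Permutation importance stands in for SHAP here to keep the example dependency-light (the paper itself uses SHAP values), and the data and coefficients below are fabricated purely for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 400
temp_grad = rng.normal(size=n)    # insulation temperature gradient
density   = rng.normal(size=n)    # cable bundle density
oxygen    = rng.normal(size=n)    # oxygen concentration
noise     = rng.normal(size=n)    # irrelevant control feature
X = np.column_stack([temp_grad, density, oxygen, noise])
# Synthetic short-circuit label driven mostly by the temperature gradient.
y = (2 * temp_grad + density + 0.5 * oxygen
     + 0.3 * rng.normal(size=n) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]   # most important first
```

Swapping `permutation_importance` for a SHAP explainer would yield per-sample attributions rather than global rankings, which is what enables the instance-level failure-mode analysis the paper describes.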

Author 1: Vijay H. Kalmani
Author 2: Kishor S. Wagh
Author 3: Kavita Tukaram Patil
Author 4: Pallavi Jha
Author 5: Tanuja Satish Dhope
Author 6: Deepak Gupta
Author 7: Chanakya Kumar Jha

Keywords: Explainable AI; short-circuit propagation; fire safety; cable testing; SHAP values; gradient boosting; feature importance; nuclear safety

PDF

Paper 114: A Framework Design and Solutions Taxonomy for Performance Optimization in Internet of Things Network

Abstract: The Internet of Things (IoT) is an exciting, rapidly expanding technology that’s still in its early stages and faces several complex issues. These challenges primarily arise from the limitations of IoT devices (e.g., restricted energy, memory, and processing power), the diversity of communication protocols, and the heterogeneity of interconnected devices. Collectively, these issues often hinder overall IoT system performance, prompting extensive research into techniques to improve Quality of Service (QoS), particularly in terms of latency, throughput, and energy use. This paper introduces a conceptual framework for multi-dimensional IoT performance optimization. The framework provides a structured approach for evaluating and enhancing performance across all layers of the IoT architecture: device, network, support, and application. It assesses key performance dimensions—reliability, security, scalability, energy efficiency, quality assurance, and enabling technologies—and defines them in terms of overall system performance. To ensure a systematic assessment, these dimensions are supported by concrete performance metrics and precise measurement criteria. Finally, the paper provides a taxonomy of IoT Performance Optimization Components, identifies the essential prerequisites and core attributes that influence the overall efficiency of IoT systems, and thus provides a structured foundation for evaluating and advancing performance across the entire IoT ecosystem.

Author 1: Mariam A. Alotaibi
Author 2: Sami S. Alwakeel
Author 3: Aasem N. Alyahya

Keywords: IoT performance; reliability; security; scalability; quality; energy efficiency; technology

PDF

Paper 115: Q-Learning Guided Local Search for the Traveling Salesman Problem

Abstract: The Traveling Salesman Problem (TSP) remains a fundamental challenge in combinatorial optimization with applications in logistics, routing, and network design. Classical local search methods face a trade-off between solution quality and computational efficiency: while 3-opt delivers better solutions than 2-opt, its O(n³) complexity renders it impractical for large instances. This paper presents a reinforcement learning (RL) approach that addresses this challenge through intelligent guidance of local search operators. Our method employs a simple one-dimensional Q-table that learns to identify poorly positioned cities and directs 2-opt and 3-opt operations toward the most promising tour segments. We evaluate the approach on 55 TSPLIB benchmark instances ranging from 51 to 18,512 cities. For instances up to 1,000 cities, RL-guided 3-opt (RL-3opt) achieves optimality gaps of 0.9–2.2% compared to 3.8–4.3% for classical 3-opt, with execution times reduced from hours to under one second and speedups reaching 32,323×. For instances between 1,000–5,000 cities, RL-3opt maintains computational efficiency (100–30,000× speedups) while achieving competitive 6.3% gaps. Both RL-2opt and RL-3opt execute in sub-second to a few seconds even on problems with over 18,000 cities. All experiments run on standard CPU hardware without GPU acceleration, demonstrating that effective TSP optimization remains accessible without specialized resources.
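The one-dimensional Q-table idea (score each city as a candidate pivot and steer 2-opt moves toward the highest-scoring one) can be sketched as follows. The reward shaping, optimistic initialization, and toy line instance are assumptions of this sketch, not the authors' exact design:

```python
import random

def tour_length(tour, d):
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_at(tour, i, j):
    """Reverse the segment between positions i and j (classic 2-opt move)."""
    if i > j:
        i, j = j, i
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def rl_two_opt(d, iters=2000, alpha=0.2, eps=0.3, seed=0):
    rng = random.Random(seed)
    n = len(d)
    tour = list(range(n))
    q = [1.0] * n                        # optimism: every city starts "suspect"
    best = tour_length(tour, d)
    for _ in range(iters):
        city = (rng.randrange(n) if rng.random() < eps
                else max(range(n), key=lambda c: q[c]))   # pick worst-placed city
        i = tour.index(city)
        cand = two_opt_at(tour, i, rng.randrange(n))
        gain = best - tour_length(cand, d)
        q[city] += alpha * (max(gain, 0.0) - q[city])     # reward = improvement
        if gain > 0:
            tour, best = cand, best - gain
    return tour, best

# Toy instance: cities on a line with shuffled labels; identity tour costs 18,
# the optimal cycle costs 2 * (max - min) = 10.
pos = [0, 3, 1, 4, 2, 5]
d = [[abs(a - b) for b in pos] for a in pos]
tour, best = rl_two_opt(d)
```

The Q-table keeps the per-step cost linear in the number of cities, which is the source of the large speedups the abstract reports relative to exhaustive 2-opt/3-opt neighborhood scans.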

Author 1: Sanaa El Jaghaoui
Author 2: Aissa Kerkour Elmiad

Keywords: Traveling salesman problem; reinforcement learning; Q-Learning; local search; 2-opt; 3-opt

PDF

Paper 116: A Two-Step Real-Time Complex Environmental Vehicle Detection Model

Abstract: In recent years, as a critical pillar supporting the national economy and daily life, the safe and efficient operation of road traffic has come to rely heavily on precise environmental perception capabilities. To address this, this study proposes a two-stage “denoising–detection” framework: the first stage restores clear images using an improved Uformer algorithm that integrates a probabilistic sparse self-attention mechanism, while the second stage leverages YOLOv11 for real-time object detection. This framework is introduced in the field for the first time and enhances the accuracy and robustness of vehicle detection in traffic images under complex weather scenarios, providing technical support for intelligent driving systems and traffic monitoring applications. Experimental validation on our own multi-weather vehicle detection dataset demonstrated the superior performance of the proposed model: CM-YOLO achieved 0.95 precision and 0.91 mAP50, an improvement of 0.2 over YOLOv11.

Author 1: Zhihui Huo
Author 2: Yiqian Liang
Author 3: Xingju Wang

Keywords: Object detection; vehicle detection; image denoising

PDF

Paper 117: HCC: A Hierarchical Chart Captioning Model for Enhanced Accessibility of Chart Data for Visually Impaired Users

Abstract: In educational settings, charts and graphs are commonly used to convey complex information in a simple and understandable manner. However, these visual representations often present accessibility challenges for visually impaired users, as they cannot be directly interpreted by screen readers without proper alternative text. This paper proposes a novel hierarchical captioning model (HCC: Hierarchical Chart Captioning) designed to facilitate effective chart interpretation. The model utilizes spatial token features to generate captions at multiple levels, each offering varying degrees of detail and abstraction, mimicking human cognitive processing. Three hierarchical levels are developed: Level 1 offers basic and factual descriptions, Level 2 presents more detailed information, and Level 3 provides intuitive interpretations and inferences. By integrating fine-tuned Transformer models, this approach ensures efficient caption generation and supports user-selectable caption lengths. The model’s effectiveness is evaluated through user surveys involving 20 instructors, confirming that Level 2 captions provide the most comprehensible descriptions. Experimental results demonstrate that the proposed method outperforms existing captioning approaches, improving both the efficiency and accessibility of educational materials for visually impaired students. These findings highlight the potential of hierarchical learning models to create more inclusive and accessible educational experiences.

Author 1: Yoojeong Song
Author 2: Kanghyeon Seo
Author 3: Svetlana Kim
Author 4: Joo Hyun Park

Keywords: Hierarchical captioning; accessibility for visually impaired; chart interpretation; transformer models

PDF

Paper 118: Enhancing Privacy in Databases by Data-Layer

Abstract: This study addresses the growing challenge of enhancing privacy in enterprise database systems, where excessive privileges and shared service accounts often lead to unauthorized data access and insider threats. The study proposes a data-layer security framework that enforces fine-grained access control based on authenticated user identities, integrating role-based access control (RBAC) and the principle of least privilege (PLP) to protect sensitive information. The model restricts developer and administrative access strictly to authorized data objects, reducing exposure while maintaining operational efficiency. Drawing on established database security mechanisms, including authentication, authorization, and centralized identity management through Active Directory, the proposed framework ensures that all database interactions are executed under verified user credentials. The approach is implemented using Microsoft SQL Server within an enterprise environment and evaluated through controlled experiments conducted before and after deployment. Results demonstrate a significant reduction in unauthorized data retrieval without introducing noticeable performance overhead. The findings confirm that enforcing privacy at the data-layer provides an effective and scalable solution for securing sensitive data in modern database systems, strengthening accountability and mitigating risks associated with privilege misuse.

Author 1: Sami Alharbi
Author 2: Samer Atawneh
Author 3: Hussein Al Bazar
Author 4: Roxane Elias Mallouhy

Keywords: Database privacy; security model; access control; data protection; privacy enhancing technologies; database systems

PDF

Paper 119: Multi-Spectral Image Analysis Using Different CNN Models to Detect the Plant Diseases in its Early Stages

Abstract: Researchers and academicians are continuously working on minimizing production losses due to various plant diseases. Recent technologies such as artificial intelligence (AI) and machine learning (ML) are therefore playing a crucial role in detecting plant diseases in their early stages, helping to classify plant leaves as ‘healthy’ or ‘rusty’/‘diseased’. It is difficult for human beings to detect plant diseases and take remedial action within the stipulated time period. Hence, this research work compares different convolutional neural network (CNN) models such as AlexNet, ResNet18, ResNet50, Xception, VGG16, VGG19, InceptionV3, and InceptionResNetV2, and identifies the top CNN models together with the filters best suited for capturing plant leaf images. The proposed work uses datasets captured with different filters: K590, K665, K720, K850, BlueIR, and Hotmirror. Plant disease detection requires identifying rust or disease on the leaves immediately and efficiently, and CNN models help classify plant leaves with high accuracy and precision. The proposed work reports accuracies for different filters with different models: 72.72% for the K850 filter using the balanced EfficientNetB0 CNN model, 81.81% for the K720 filter using balanced EfficientNetB0, 84.09% for the K665 filter using balanced EfficientNetB0, 90.90% for the K590 filter using balanced MobileNetV2, 93.18% for the Hotmirror filter using balanced Xception, and 81.81% for the BlueIR filter using balanced Xception.

Author 1: Dhiraj Bhise
Author 2: Sunil Kumar
Author 3: Hitesh Mohapatra

Keywords: Convolutional Neural Network (CNN); multi-spectral images; AlexNet; DenseNet121; ResNet18; ResNet50; VGG16; VGG19; EfficientNetB0; MobileNetV2; Xception; InceptionV3; InceptionResNetV2

PDF

Paper 120: Model-Driven Transformation of Business Processes into Blockchain Smart Contracts

Abstract: This paper presents a comprehensive Model-Driven Engineering (MDE) methodology for automatically transforming Business Process Model and Notation (BPMN) diagrams into executable blockchain-based smart contracts. The proposed approach defines a set of Atlas Transformation Language (ATL) rules that systematically map BPMN elements to Solidity constructs, ensuring semantic consistency and traceability throughout the transformation process. The framework integrates several stages, including process modeling, model validation, code generation, and deployment, supported by tools such as Camunda, Eclipse ATL, Remix IDE, and MetaMask. Experimental validation on the Ethereum Sepolia test network demonstrates the approach’s ability to enhance automation, reduce manual coding errors, and improve synchronization between business workflows and their on-chain implementations. Compared to existing BPMN-to-blockchain frameworks, the proposed solution offers a unified and reusable transformation pipeline that bridges the gap between business process modeling and blockchain execution. The study concludes that MDE provides a scalable, traceable, and standardized foundation for developing decentralized business process applications.

Author 1: Imane Bouzaidi Tiali
Author 2: Zineb Aarab
Author 3: Achraf Lyazidi
Author 4: Moulay Driss Rahmani

Keywords: Model-driven engineering; BPMN; smart contracts; blockchain; ATL; automation; solidity; process transformation

PDF

Paper 121: Bridging the Gap Between Text-Based and Visual Programming: A Comparative Study of Efficiency and Student Engagement in Game Development

Abstract: The integration of Low-Code and No-Code (LCNC) tools in higher education challenges traditional text-based programming pedagogies. While visual environments are often relegated to K-12 education, their adoption in professional engines like Unity suggests a need to re-evaluate their role in engineering curricula. This study analyzes the effectiveness, development efficiency, and perceived utility of Unity Visual Scripting compared to traditional C# programming (MonoGame) within a “Physics for Videogames” undergraduate course. Employing a quasi-experimental design with a within-subjects approach (N = 22), students first developed a game using C#/MonoGame and subsequently a complex variant using Unity Visual Scripting. Metrics included development time for core mechanics, project grades, and pre/post surveys on self-efficacy. Results demonstrate a statistically significant reduction in development time (30–50% faster for core mechanics) using Visual Scripting. Furthermore, academic performance improved slightly, and students reported higher confidence levels. Crucially, participants identified Visual Scripting not as a replacement, but as a cognitive bridge that facilitates the understanding of algorithmic logic before tackling syntactic complexities. Consequently, Visual Scripting serves as an efficient accelerator for prototyping and conceptual learning in higher education, fostering a “logic-first, syntax-second” approach.

Author 1: Álvaro Villagómez-Palacios
Author 2: Claudia De la Fuente-Burdiles
Author 3: Cristian Vidal-Silva

Keywords: Visual scripting; higher education; development efficiency; engineering curricula

PDF

Paper 122: Confidence-Based Trust Calibration in Human-AI Teams

Abstract: Effective human-AI collaboration is contingent upon calibrated trust, wherein users depend on AI systems when accuracy is probable and rely on human judgment when errors are likely. In this study, a confidence-based mechanism for trust calibration within human-AI teams is examined. A decision-making strategy is proposed in which task delegation is governed by the AI’s confidence: when the confidence surpasses a specified threshold, the AI’s recommendation is adopted; otherwise, the decision is deferred to the human. Through simulation experiments on a binary classification task, performance outcomes are compared. The AI system achieves an accuracy of 77.7%, whereas the human decision-maker, modeled with a confidence-sensitive accuracy function ph(c) = 0.95 − 0.3c, attains an overall accuracy of 71.9%. Team performance is evaluated across a range of AI confidence thresholds (0.50 to 0.99), revealing that an intermediate threshold yields optimal team accuracy of 84.14%, substantially exceeding the performance of either agent individually. The findings provide a detailed analysis of confidence-based delegation, align with existing research on trust calibration, and underscore critical design implications for the development of human-centric AI systems.
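The delegation rule the abstract describes can be written down directly. The sketch below is illustrative only: it assumes the AI's probability of being correct is approximated by its reported confidence and that confidences fall on a uniform grid; only the human model p_h(c) = 0.95 − 0.3c is taken from the abstract.

```python
# Hedged sketch of confidence-based task delegation in a human-AI team.
# Assumption (not from the paper): AI correctness probability ~ its confidence.

def human_accuracy(c: float) -> float:
    """Confidence-sensitive human accuracy, p_h(c) = 0.95 - 0.3c (from abstract)."""
    return 0.95 - 0.3 * c

def team_accuracy(threshold: float, confidences) -> float:
    """Expected team accuracy: the AI decides iff its confidence meets the
    threshold; otherwise the decision is deferred to the human."""
    total = 0.0
    for c in confidences:
        total += c if c >= threshold else human_accuracy(c)
    return total / len(confidences)

if __name__ == "__main__":
    grid = [i / 100 for i in range(50, 100)]  # AI confidences 0.50..0.99
    best = max((t / 100 for t in range(50, 100)),
               key=lambda t: team_accuracy(t, grid))
    print(f"best threshold ~ {best:.2f}, "
          f"team accuracy ~ {team_accuracy(best, grid):.3f}")
```

Under these toy assumptions an intermediate threshold beats both extremes, mirroring the abstract's finding that the team outperforms either agent alone.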

Author 1: Michael Ibrahim

Keywords: Human-AI collaboration; trust calibration; confidence-based delegation; decision-making strategies

PDF

Paper 123: AI-Based Framework for Automated Cell Cleavage Detection and Timing in Embryo Time-Lapse Videos

Abstract: In vitro fertilization (IVF) has become a primary therapeutic intervention for couples worldwide addressing infertility challenges. IVF success depends critically on embryo quality assessment, where cell cleavage timing serves as a key developmental parameter. Traditional morphological evaluation methods suffer from inter-observer variability and labor-intensive manual analysis. This study presents an automated AI-based framework for cleavage stage detection and cleavage onset timing estimation from Time-Lapse Microscopy (TLM) videos to assist embryologists in embryo selection. The proposed YOLO-based approach addresses significant class imbalance through selective data augmentation and random undersampling strategies. To ensure precise temporal data, an OCR (Optical Character Recognition) library was integrated to automatically read and record the Hours Post-Insemination (HPI) timestamps from the video frames. The proposed framework accurately identifies cell division stages up to the seven-cell stage with a 1-2 hour mean timing delay post-insemination. The framework achieves an overall accuracy of 86.61%, F1-score of 86.24%, and precision of 86.24% in cleavage stage classification, demonstrating significant improvements over existing methods, particularly in the intermediate and later stages (4-cell to 8-cell transitions) where previous research has demonstrated challenges in accurate detection. Automated extraction of morphokinetic parameters enables objective embryo assessment, reducing subjectivity in clinical decision-making. The proposed framework demonstrated significant improvements over previous research, which frequently has trouble accurately classifying beyond early cleavage stages. This has implications for improving the selection of good-quality embryos and thus the success rate of IVF. This work contributes to advancing assisted reproductive technology by providing reliable, automated embryo quality assessment tools.

Author 1: Yasmin Alharbi
Author 2: Sultanah Alshammari
Author 3: Aisha Elaimi

Keywords: In vitro fertilization; Time-Lapse Microscopy (TLM) videos; AI-based framework; cleavage stage; cleavage onset timing; optical character recognition; Hours Post-Insemination (HPI)

PDF

Paper 124: Fine-Tuning Language Models for Pedagogy-Aligned Lesson Plans in Cybersecurity Education

Abstract: Lesson planning in cybersecurity is time-consuming and cognitively demanding, especially for less experienced instructors, and manual approaches often lack flexibility across courses and contexts. We present a framework for generating pedagogy-aligned lesson plans using a large language model, integrating measurable objectives (Revised Bloom’s Taxonomy), explicit learning theories, and evidence-based teaching strategies. We constructed a domain-specific knowledge base for cybersecurity topics and organized it with sentence-level embeddings and KMeans clustering. A pretrained large language model (GPT-3.5) was then fine-tuned to produce lesson plans that follow this structure. On a held-out test set, the model achieved BLEU 73.5, ROUGE-1 82.2, ROUGE-L 78.2, and BERTScore F1 97.4, reflecting strong lexical and semantic fidelity to reference plans. Although the study is limited to a single academic program and relies primarily on automated metrics, the framework offers practical support for instructors by reducing preparation time, enhancing consistency, and ensuring alignment with pedagogical standards. Future work will expand the curricular scope and involve expert review and classroom validation to assess educational impact.
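The knowledge-base organization step (sentence-level embeddings grouped with KMeans) can be illustrated with a minimal Lloyd's-iteration sketch. The 2-D vectors below are toy stand-ins for real sentence embeddings; the paper's embedding model, cluster count, and initialization scheme are not reproduced here.

```python
# Toy k-means over stand-in "embeddings", sketching the clustering step
# used to organize the cybersecurity knowledge base.

def kmeans(points, k, iters=20):
    """Minimal Lloyd's k-means; returns a cluster label per point.
    Naive initialization: the first k points serve as initial centers."""
    centers = points[:k]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda j: sum((a - b) ** 2
                                              for a, b in zip(p, centers[j])))
        # update step: move each center to the mean of its members
        for j in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == j]
            if members:
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels

if __name__ == "__main__":
    # two visually obvious groups of "embeddings"
    emb = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
           [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]]
    print(kmeans(emb, k=2))
```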

Author 1: Samar Althagafi
Author 2: Miada Almasre
Author 3: Wafaa Alsaggaf
Author 4: Lana Alshawwa

Keywords: Fine-Tuning; large language models; lesson planning; cybersecurity

PDF

Paper 125: Advanced Multi-Scale Enhanced U-Net for Efficient Land Cover Classification of Remote Sensing Images

Abstract: Accurate land cover classification from remote sensing images is essential for environmental monitoring, urban development, crop assessment, and climate studies. Deep learning has substantially improved semantic segmentation, especially with encoder-decoder designs such as U-Net. However, standard U-Net models struggle to capture multi-scale contextual relationships, distinguish narrow borders, and effectively emphasize region-specific features. This work presents an Advanced Multi-Scale Enhanced U-Net (AMSE-U-Net) to address these difficulties. The AMSE-U-Net combines (i) multi-scale feature extraction, (ii) squeeze-and-excitation channel attention, and (iii) attention-gated skip connections. The model improves learning of both local and global features while suppressing uninformative background noise. Experiments on common remote sensing datasets show substantial improvements in Intersection over Union (IoU), pixel precision, and boundary delineation compared to the standard U-Net and similar models. The proposed AMSE-U-Net generalizes better with only a small amount of extra processing power, making it well suited for land cover and environmental monitoring.

Author 1: Syed Zaheeruddin
Author 2: K. Suganthi

Keywords: Land cover classification; remote sensing; UNet; satellite images; AMSE-U-Net; multi-scale features; semantic segmentation

PDF

Paper 126: A Fuzzy Petri Net Approach with Automated ANFIS Rule Learning for Modelling Real-Time Systems

Abstract: In this paper, we propose a modelling approach for real-time intelligent systems using Fuzzy Petri Nets (FPNs), a formalism that generates dynamic fuzzy rules, supports uncertainty, and enables concurrent reasoning. FPNs offer a well-defined tool for dynamically evaluating Fuzzy Production Rules (FPRs), Certainty Factors (CFs), and truth degrees, and for making real-time decisions. To reduce the complexity of manually constructed or probabilistically modelled fuzzy rules, we extend the modelling toolkit with the Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS learns membership functions and Sugeno-type rules directly from numeric datasets, resulting in a richer and more accurate set of rules. As the main novelty, we propose a rule-integrating scheme that maps Sugeno rules learned by ANFIS into FPN transitions, yielding more clearly explained reasoning and traceable rule execution within a neuro-fuzzy Petri net. The FPN then executes these learned rules within a two-layer real-time structure (prediction and decision layers) while maintaining concurrent inference and real-time execution. The hybrid methodology is verified by building a real-time expert system for solar collector cleaning. Results from the experiments demonstrate that, in terms of predictive performance, ANFIS-induced rules drastically boost accuracy (from 85% to 93%) and reduce Root Mean Square Error (RMSE) from 4.82 to 2.57 relative to those generated by a single probabilistic FPN model. These results indicate that using neural learning combined with an FPN-based expert system makes real-time decision-making much more accurate and reliable.

Author 1: Abdelilah Serji
Author 2: El Bekkaye Mermri
Author 3: Mohammed Blej

Keywords: Fuzzy petri net; adaptive neuro-fuzzy inference system; expert systems; fuzzy logic; real-time system; artificial intelligence

PDF

Paper 127: Trajectory Planning of Shipbuilding Welding Manipulator Based on Improved Whale Optimization Algorithm

Abstract: Time-optimal trajectory planning for shipboard welding robotic arms is a challenging problem due to strong kinematic constraints and the nonlinear coupling between trajectory parameters and execution time. Although various intelligent optimization algorithms have been combined with robotic arm trajectory planning in existing studies, most approaches primarily focus on algorithmic performance improvement and lack a clear formulation of time optimization within polynomial trajectory planning. To address this gap, this study proposes an Improved Whale Optimization Algorithm (IWOA) based on the traditional quintic polynomial trajectory planning method. In the proposed method, the trajectory execution time is explicitly formulated as the optimization objective under kinematic constraints, and the IWOA is designed to stably and efficiently search the time parameter space of the quintic polynomial trajectory. Specifically, chaotic sequence initialization is employed to enhance population distribution, an adaptive weight mechanism is introduced to balance global exploration and local exploitation, and a hybrid co-optimization strategy combining differential evolution and genetic operators is integrated to improve robustness and convergence stability. Simulation experiments are conducted to evaluate the effectiveness of the proposed algorithm. The results demonstrate that, while satisfying the robotic arm’s kinematic constraints, the proposed method achieves an 18.3% reduction in operating time compared with the unoptimized trajectory. These results indicate that the proposed approach provides a systematic and effective solution for time-efficient trajectory planning of shipboard welding robotic arms.
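The quintic polynomial segment whose duration the IWOA searches over can be sketched in closed form. This is a generic rest-to-rest quintic (zero velocity and acceleration at both endpoints, an assumption of this sketch); the IWOA search itself and the paper's kinematic constraints are not reproduced.

```python
# Rest-to-rest quintic polynomial segment: the building block whose duration T
# the abstract's time optimization would shrink subject to kinematic limits.

def quintic(q0: float, qT: float, T: float):
    """Return position, velocity, and acceleration functions for a quintic
    trajectory from q0 to qT over duration T, with zero boundary vel/acc."""
    d = qT - q0
    def pos(t):
        s = t / T
        return q0 + d * (10 * s**3 - 15 * s**4 + 6 * s**5)
    def vel(t):
        s = t / T
        return d / T * (30 * s**2 - 60 * s**3 + 30 * s**4)
    def acc(t):
        s = t / T
        return d / T**2 * (60 * s - 180 * s**2 + 120 * s**3)
    return pos, vel, acc

if __name__ == "__main__":
    # e.g. a welding joint moving 1.2 rad in 2 s (illustrative numbers)
    pos, vel, acc = quintic(0.0, 1.2, T=2.0)
    print(pos(0.0), pos(2.0), vel(2.0), acc(2.0))
```

Shrinking T raises peak velocity and acceleration, which is exactly the trade-off a time optimizer must balance against the kinematic constraints.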

Author 1: Caiping Liang
Author 2: Hao Yuan
Author 3: Chen Wang
Author 4: Wenxu Niu
Author 5: Yansong Zhang

Keywords: Shipboard welding robotic arm; quintic polynomial; Improved Whale Optimization Algorithm; time-optimal trajectory planning

PDF

Paper 128: An RBAC-Based Access Control and Security Architecture for UAV Networks in Precision Agriculture Using Software-Defined Drone Networking

Abstract: Unmanned Aerial Vehicles (UAVs), commonly referred to as drones, are widely employed in applications such as surveillance, delivery, mapping, and precision agriculture. Their flexibility, mobility, and cost effectiveness have accelerated their adoption in both civilian and industrial domains. However, the rapid evolution of UAV technologies introduces significant challenges related to limited resources, data processing constraints, and, most critically, security and privacy. Cyberattacks targeting UAV systems may result in data breaches, mission failures, operational disruptions, and risks to human safety. In our previous work, we proposed a lightweight identity authentication scheme based on Elliptic Curve Cryptography (ECC) and integrated it into a Software-Defined Drone Network (SDDN) architecture to ensure strong security with low computational overhead. Building on this foundation, the present study focuses on the agricultural domain, where UAVs are increasingly used for crop monitoring, precision farming, and environmental data collection. Due to the sensitivity of agricultural data and the involvement of multiple stakeholders, fine-grained access control is essential. The main contribution of this work is the design and evaluation of an SDDN-based security framework that integrates role-based access control (RBAC) with trust management to enable secure, scalable, and controlled UAV operations in agricultural environments. The framework restricts user actions according to predefined roles, improving system security and manageability. Simulation results demonstrate that the proposed approach effectively enforces access policies, enhances trust-aware decision making, and maintains low computational overhead suitable for resource-constrained UAV networks. Validation is conducted using Python and YAML-based configurations on Google Colab, confirming the practicality of the proposed solution.
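The core RBAC idea (user actions restricted according to predefined roles) reduces to a role-to-permission lookup. The roles and permissions below are illustrative stand-ins for an agricultural deployment, not the paper's actual policy, and trust management is omitted.

```python
# Minimal RBAC sketch: grant an action only if the user's role holds that
# permission. Role names and permissions here are hypothetical examples.

ROLE_PERMISSIONS = {
    "farmer":         {"view_crop_data"},
    "agronomist":     {"view_crop_data", "export_reports"},
    "drone_operator": {"view_crop_data", "start_mission", "abort_mission"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True iff the role's permission set includes the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

if __name__ == "__main__":
    # a farmer may view crop data but cannot launch missions
    print(is_allowed("farmer", "view_crop_data"),
          is_allowed("farmer", "start_mission"))
```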

Author 1: Nadia Kammoun
Author 2: Aida Ben Chehida Douss
Author 3: Ryma Abassi

Keywords: Unmanned Aerial Vehicles; Software-Defined Drone Network; role-based access control; security; attacks; trust management; authentication; access control

PDF

Paper 129: A Soft and Hard Mixture-of-Experts Approach for Improved ADR Extraction from Patient-Generated Narratives

Abstract: Traditional single-architecture neural models, including monolithic transformer-based and sequence-to-sequence architectures, often struggle to extract Adverse Drug Reactions (ADRs) from patient-generated health narratives due to informal language, high linguistic variability, and complex relationships among drugs, diseases, and adverse events. Although Mixture-of-Experts (MoE) architectures have demonstrated strong performance across various Natural Language Processing (NLP) tasks, their effectiveness for ADR extraction from unstructured patient narratives remains largely unexplored. This study investigates the application of MoE architectures, specifically Soft MoE and Hard MoE, for ADR extraction from patient-generated content. The task is formulated as a sequence-to-sequence generation problem and evaluated on the PsyTAR dataset using both strict and relaxed evaluation metrics. Experimental results demonstrate that Soft MoE consistently outperforms Hard MoE, achieving a relaxed F1-score of 80.40% compared to 79.40%. These findings highlight the critical role of expert-routing strategies in capturing linguistic variability in patient narratives and establish MoE architectures as a competitive and reliable approach for automated ADR extraction in biomedical text mining and pharmacovigilance applications.
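The soft-vs-hard routing distinction the study compares can be sketched numerically. The experts below are toy scalar functions standing in for neural sub-networks, and the gating logits are made-up values for a single input; only the routing logic itself reflects the two strategies.

```python
# Sketch of Soft vs. Hard Mixture-of-Experts routing over toy experts.
import math

def softmax(xs):
    """Numerically stable softmax over a list of gating logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def soft_moe(x, experts, gate_scores):
    """Soft routing: output is the softmax-weighted mix of ALL experts."""
    w = softmax(gate_scores)
    return sum(wi * e(x) for wi, e in zip(w, experts))

def hard_moe(x, experts, gate_scores):
    """Hard routing: only the top-scoring expert fires."""
    top = gate_scores.index(max(gate_scores))
    return experts[top](x)

if __name__ == "__main__":
    experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: -x]
    scores = [0.2, 1.5, -0.3]  # hypothetical gating logits for one input
    print(soft_moe(3.0, experts, scores), hard_moe(3.0, experts, scores))
```

Soft routing keeps every expert's gradient in play, one intuition for why it can cope better with high linguistic variability than winner-take-all hard routing.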

Author 1: Oumayma Elbiach
Author 2: Hanane Grissette
Author 3: El Habib Nfaoui

Keywords: Adverse Drug Reaction; Mixture-of-Experts; Soft and Hard MoE; sequence-to-sequence; patient narratives; biomedical text mining

PDF

Paper 130: Achieving Long-Term Autonomy: A Self-Correcting Deep Reinforcement Learning Agent for Edge IoT Using Digital Twin-Based Drift Compensation

Abstract: Ensuring long-term autonomy in Edge AI systems remains one of the most persistent challenges in environmental monitoring and biorisk management. Over time, the degradation of low-cost sensors—particularly sensor drift—leads to cumulative measurement errors, distorted state perception, and catastrophic decision failures in Deep Reinforcement Learning (DRL) agents. This paper proposes a novel Self-Correcting Deep Reinforcement Learning (SCDRL) framework that enables robust, long-term autonomy through in-loop drift compensation. The proposed Self-Correcting Agent (SCA) integrates a dual-input architecture combining (i) the local, drifted sensor reading and (ii) a stable reference prediction from a macro-scale Digital Twin (DT). By learning to correlate both signals, the agent implicitly estimates and neutralizes sensor bias in real time, achieving self-calibration without human intervention. To validate this approach, a nine-year simulation of autonomous water management was conducted using real-world hourly climate data from Arequipa, Peru. Results show that a conventional “blind” DRL agent suffers complete performance collapse as drift accumulates, whereas the proposed SCA maintains stable operation indefinitely. Quantitatively, the SCA achieved a 722% higher cumulative reward (415,662 vs. 57,556) and a 53% reduction in plant stress (RMSE 0.2238 vs. 0.4762). These findings establish a validated blueprint for fault-tolerant Edge AI, demonstrating that the fusion of local sensing with digital twin predictions enables self-calibrating agents capable of sustained, reliable autonomy in real-world, resource-constrained environments.
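The drift-compensation idea, correlating the drifted local reading with the digital twin's stable reference to neutralize bias, can be made explicit with a running bias estimate. The paper's agent learns this correlation implicitly inside a DRL policy; the exponential-moving-average estimator below is a simplified stand-in for that mechanism.

```python
# Explicit sketch of dual-input drift compensation: estimate sensor bias as a
# smoothed average of (sensor - digital-twin reference), then subtract it.

class DriftCompensator:
    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha   # smoothing factor for the bias estimate
        self.bias = 0.0

    def correct(self, sensor: float, twin_reference: float) -> float:
        # exponential moving average of the sensor/twin discrepancy
        self.bias += self.alpha * ((sensor - twin_reference) - self.bias)
        return sensor - self.bias

if __name__ == "__main__":
    comp = DriftCompensator()
    true_value, drift = 20.0, 3.0   # constant additive sensor drift (toy values)
    for _ in range(300):
        corrected = comp.correct(true_value + drift, true_value)
    print(round(corrected, 3))      # converges toward the true value
```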

Author 1: Jhon Monroy
Author 2: Miguel Paco
Author 3: Miguel Portella
Author 4: Geral Basurco
Author 5: Jeymi Valdivia
Author 6: Fiorela Jara
Author 7: Guido Anco

Keywords: Deep Reinforcement Learning (DRL); Edge AI; Internet of Things (IoT); digital twin; sensor drift; fault tolerance; autonomous systems; self-correcting systems

PDF

Paper 131: Sustainable and Ethical AI-Driven Recognition in Robotics: Integrating ESG Analytics and Human–Robot Interaction

Abstract: Environmental, Social, and Governance (ESG) information has become an essential component in evaluating corporate responsibility and long-term resilience. However, its incremental value in predicting firm profitability remains insufficiently understood. This study investigates whether integrating ESG analytics with traditional financial ratios enhances the machine-learning classification of firms into high- and low-profitability categories. Using a multi-industry dataset that combines firm-level ESG pillar scores with accounting-based financial indicators, three supervised learning models—Decision Trees, Random Forests, and Support Vector Machines (SVM)—are developed and evaluated. Model validation is conducted through cross-validation, and predictive performance is assessed using Accuracy, F1-score, and the Area Under the ROC Curve (AUROC). To isolate the specific contribution of ESG factors, ablation experiments and feature-importance analyses are performed. The findings reveal that the Random Forest model provides the most consistent and robust predictive performance (Accuracy = 0.89, F1-score = 0.88, AUROC = 0.93), with Environmental and Governance dimensions emerging as the most influential ESG predictors. The novelty of this research lies in establishing a clear mechanism linking ESG analytics to financial performance and in proposing an ESG-aware evaluation framework, rather than introducing a new predictive model or dataset.

Author 1: Fatma Mallouli
Author 2: Lobna Amouri
Author 3: Mejda Dakhlaoui
Author 4: Nada Chaabane
Author 5: Imen Gmach
Author 6: Inès Hammami
Author 7: Hanen Chakroun
Author 8: Ahmed Mellouli
Author 9: Sonda Elloumi
Author 10: Abdelwaheb Trabelsi
Author 11: Heba Elbeh
Author 12: Mohamed Elkawkagy

Keywords: Artificial intelligence; robotic recognition; human–robot interaction; explainable AI; ESG analytics; sustainable robotics

PDF

Paper 132: Hybrid Deep Learning for Signals Automatic Modulation Classification

Abstract: Classifying signals or modulation classification is a crucial step in developing communication receivers. A common practice is to extract features before categorizing the signal, which requires implementing lengthy preprocessing techniques. Due to breakthroughs in neural network topologies, machine learning (ML) algorithms, and optimization techniques, collectively referred to as "deep learning" (DL), the field has changed dramatically over the previous five years. Advanced deep learning algorithms can be applied to the same automatic modulation classification problem and generate excellent outcomes without requiring time-consuming, manual, and complex feature extraction methods. In recent years, various DL techniques have been explored for automatic modulation classification (AMC). However, it has been observed that these techniques are effective only for higher Signal-to-Noise Ratio (SNR) values. To overcome this challenge, we propose a hybrid DL-based AMC technique that combines a customized EfficientNet with a customized Transformer block. The Transformer block is used to enhance DL performance for lower SNR values. The performance of the proposed hybrid model is tested on a benchmark dataset, RadioML2018.01A, and compared with state-of-the-art existing DL methods, demonstrating the superiority of the proposed hybrid model.

Author 1: Muhammad Moinuddin
Author 2: Hitham K. Alshoubaki
Author 3: Omar Ayad Alani
Author 4: Ubaid M. Al-Saggaf
Author 5: Karim Abed-Meraim

Keywords: Automatic modulation classification; deep learning; machine learning; EfficientNet; Transformer Network

PDF

Paper 133: A Comprehensive Analysis of Security Challenges and Solutions in the Internet of Drones: Recent Trends and Development

Abstract: The Internet of Drones (IoD) is a decentralized structure that links drones to regulate airspace and offer inter-location navigation services. With the increasing use of drones in both civilian and military applications, the importance of the IoD has grown significantly. It reshapes the current internet landscape, making it more extensive and all-encompassing. IoD establishes a connection between drones and the network, which exposes the IoD network to numerous privacy and security issues often associated with IoT ecosystems. To ensure optimal performance from IoD applications, it is crucial to maintain a secure environment devoid of privacy and security risks. Privacy and security concerns have obstructed the overall effectiveness of the IoD framework. This study conducts an extensive examination of security concerns and solutions related to IoD security. It delves into IoD-specific security requirements and sheds light on the latest developments in IoD security research. Hence, we first provide an overview of the overall context and structure of the IoD. We then identify the security issues linked to it. Afterward, we present the most recent security measures developed specifically for the IoD. Finally, we go through the challenges and potential areas for future research in the realm of IoD security.

Author 1: Amine Hedfi
Author 2: Aida Ben Chehida Douss
Author 3: Ryma Abassi
Author 4: Mohamed Aymen Chalouf
Author 5: Om Saad Hamdi

Keywords: Unmanned Aerial Vehicle; Internet of Drones; cybersecurity; attacks; threats

PDF

Paper 134: Relationship Management System: A Data-Driven Framework for Modeling, Monitoring, and Restoring Human–AI Relationships

Abstract: We present the Relationship Management System (RMS), a modular framework for modeling, monitoring, and repairing human–AI relationships. Grounded in Knapp’s Relational Development Model and Social Penetration Theory, RMS operationalizes ten stages of relationship growth and decline, linking depth of disclosure with stage-appropriate behavior. An Airtable-backed schema (Relationship Stages, Conversational Arcs, Session Directives) separates master content from user-specific state. A Trust Evaluator quantifies trust, engagement, and disclosure after each session and drives stage transitions. A weighted Regression Risk Score anticipates degradation by tracking shifts in trust, drops in engagement and frequency, patterns of topic avoidance, and conflict cues. When risk climbs, RMS activates empathy-centered Recovery Arcs that acknowledge strain and guide repair. This two-way, data-informed loop delivers early warning, adjusts pacing to context, and offers gentle off-ramps when needed, improving long-term engagement while preserving interpretability and keeping operational costs low.
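The weighted Regression Risk Score lends itself to a direct sketch: a weighted sum of normalized degradation signals gating the Recovery Arcs. The signal names, weights, and threshold below are illustrative assumptions; the abstract does not specify the actual weighting scheme.

```python
# Sketch of a weighted Regression Risk Score over normalized (0..1) signals.
# All weights and the 0.5 trigger threshold are hypothetical.

RISK_WEIGHTS = {
    "trust_drop": 0.35,
    "engagement_drop": 0.25,
    "frequency_drop": 0.15,
    "topic_avoidance": 0.15,
    "conflict_cues": 0.10,
}

def regression_risk(signals: dict) -> float:
    """Weighted sum of degradation signals; missing signals count as 0."""
    return sum(RISK_WEIGHTS[k] * signals.get(k, 0.0) for k in RISK_WEIGHTS)

def needs_recovery_arc(signals: dict, threshold: float = 0.5) -> bool:
    """Trigger an empathy-centered Recovery Arc when risk climbs past threshold."""
    return regression_risk(signals) >= threshold

if __name__ == "__main__":
    session = {"trust_drop": 0.8, "engagement_drop": 0.7, "conflict_cues": 0.5}
    print(regression_risk(session), needs_recovery_arc(session))
```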

Author 1: Ilia Sedoshkin

Keywords: Human-AI interaction; relationship modeling; trust dynamics; conversational systems; affective computing; regression detection; recovery protocols

PDF

Paper 135: Medical Diagnosis Using Hybrid of Machine Learning and Deep Learning Techniques

Abstract: The rapid development of medical practices and imaging technology tools creates substantial growth in the amount of medical image data each year in our present era. This research aims to develop a hybrid approach that integrates Machine Learning (ML) and Deep Learning (DL) techniques to enhance the accuracy and reliability of medical image classification for diagnostic purposes. Medical imaging data complexity and growing volume serve as the research motivation, which leads to an investigation of standalone ML or DL limitations and their combination into a single framework. The medical image processing starts with normalization, then noise reduction, and continues to grayscale conversion before performing histogram equalization. This research uses VGG16 and ResNet50 alongside MobileNet and InceptionV3 for feature extraction, then applies ten different ML algorithms, including SVM, MLP, and Random Forest, for classification. Five public medical image datasets from Kaggle are used: COVID-19 chest X-rays, melanoma skin lesions, pneumonia chest X-rays, acute stroke facial images, and various eye diseases. Hybrid models display superior performance compared to stand-alone ML or DL models based on accuracy, precision, recall, and F1-score evaluation measures. Multiple datasets demonstrate that the MobileNet+MLP combination delivers the most accurate results, which demonstrates its reliable and efficient performance. The developed AI diagnostic tool presents a scalable system alongside accuracy and interpretability to enhance clinical decision outcomes.

Author 1: Raed Alazaidah
Author 2: Moath Alomari
Author 3: Hamza Mashagba
Author 4: Musab Iqtait
Author 5: Azlan B. Abd Aziz
Author 6: Hayel Khafajeh
Author 7: Omar Khair Alla Alidmat
Author 8: Ghassan Samara
Author 9: Haneen Alzoubi
Author 10: Samir Salem Al-Bawri

Keywords: Classification; deep learning; feature selection; hybrid models; machine learning; medical diagnosis; medical image classification

PDF

The Science and Information (SAI) Organization
The Science and Information (SAI) Organization Limited is a company registered in England and Wales under Company Number 8933205.