
IJACSA Volume 16 Issue 8

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Development of Web Apps for Users with Special Needs

Abstract: This study presents software solutions and prototypes of converters designed to assist individuals with visual impairments or deafness. The prototypes were developed in a laboratory environment using modern programming technologies, with their applicability focused on contexts such as education, employment, and online shopping. The converter prototypes are designed to transform textual information into Braille or sign language, depending on users’ needs. Other solutions facilitate auditory interpretation when working with assessment materials, thereby helping blind users to access information more easily. The research methodology combines synthesis and analysis of existing information from related research works. The choice of programming technologies was carefully considered to ensure the implementation of more accessible functionalities in the developed applications. In the teaching process, the authors of the study motivate their students to develop programming and analytical skills. The results achieved are based on student project work in the design and implementation of software prototypes to assist users with hearing or visual impairments. The prototypes created are aligned with scenarios for use in education and everyday activities, which expands the practical relevance of the study. Importantly, the presented applications also benefit people without disabilities by promoting more effective communication with, and better understanding of, people who are visually or hearing impaired.

Author 1: Silviya Varbanova
Author 2: Milena Stefanova
Author 3: Tihomir Stefanov

Keywords: Braille; sign language; deafness; visually impaired; software prototypes

PDF

Paper 2: How Teachers’ Gestural Culture Influences Japanese Students’ Emotions: A Machine Learning Approach

Abstract: This study analyzes differences in teachers’ gestural styles based on their culture and investigates how these differences are perceived to influence Japanese students’ emotional responses by active observers. Classroom videos of Japanese- and English-native instructors were analyzed using MediaPipe for gesture tracking and DeepFace for facial emotion recognition. Ground-truth emotion labels were collected from four Japanese observers. Results show that Japanese and non-Japanese teachers’ gesture dynamics differ in terms of range, rhythm, and symmetry. Japanese student observers perceived each group’s gestures differently, with cultural familiarity playing a role in their shifts in emotion. Machine learning models trained on gesture features, facial emotion scores, and teacher background successfully predicted students’ affective reactions. These findings highlight the importance of culturally sensitive nonverbal communication in education and demonstrate the potential of AI-based approaches for modeling student emotion in cross-cultural contexts. This study contributes a novel multimodal framework that integrates gesture dynamics, facial emotion recognition, and teacher cultural background to predict student affect, thereby highlighting the necessity of culturally adaptive affective computing in education.
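
A minimal sketch of how such a per-frame pipeline could be wired together, assuming a hypothetical video file and the standard MediaPipe Pose and DeepFace APIs; the paper's actual feature set and frame-sampling strategy are not reproduced here (recent DeepFace versions return a list of result dicts):

```python
import cv2
import mediapipe as mp
from deepface import DeepFace

# Hypothetical classroom clip; the paper's data pipeline is not public.
cap = cv2.VideoCapture("lecture_clip.mp4")
pose = mp.solutions.pose.Pose(static_image_mode=False)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV decodes frames as BGR
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        # Indices 15 and 16 are the left/right wrists in MediaPipe Pose
        wrists = [(lm.x, lm.y)
                  for i, lm in enumerate(result.pose_landmarks.landmark)
                  if i in (15, 16)]
    # Per-frame facial emotion scores (one result per detected face)
    faces = DeepFace.analyze(frame, actions=["emotion"],
                             enforce_detection=False)
cap.release()
```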

Author 1: Yuka Nishi
Author 2: Olivia Kennedy
Author 3: Choi Dongeun
Author 4: Noriaki Kuwahara

Keywords: Nonverbal behavior; affective computing; AI-based gesture recognition; cultural differences; multimodal analysis; cross-cultural education; emotion prediction

PDF

Paper 3: Enhancing Approximate Conformance Checking Accuracy with Hierarchical Clustering Model Behaviour Sampling

Abstract: Conformance checking techniques evaluate how well a process model aligns with an actual event log. Existing methods, which are based on optimal trace alignment, are computationally intensive. To improve efficiency, a model sampling method has been proposed to construct a subset of model behaviour that represents the entire model. However, current model sampling techniques often lack sufficient model representativeness, limiting their potential to achieve optimal approximation accuracy. This study proposes new model behaviour sampling approaches using hierarchical clustering to compute an approximation closer to the exact result. This study also refines the existing upper bound algorithm for better approximation. Our experiments on six real-world event logs demonstrate that our method improves approximation accuracy compared to state-of-the-art model sampling methods.
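
As an illustration of the general idea (not the paper's exact algorithm), one could cluster vectorized model traces hierarchically and keep one medoid per cluster as the behaviour sample; the trace encoding, Ward linkage, and cluster count below are all assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist

def sample_model_behaviour(trace_vecs, n_clusters):
    """Pick one representative trace index per cluster (Ward linkage assumed).

    trace_vecs: (n_traces, n_features) array encoding model traces.
    """
    Z = linkage(trace_vecs, method="ward")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    reps = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        centroid = trace_vecs[members].mean(axis=0, keepdims=True)
        # Medoid: the member closest to the cluster centroid
        reps.append(members[cdist(centroid, trace_vecs[members]).argmin()])
    return reps
```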

Author 1: Yilin Lyu

Keywords: Approximate conformance checking; model behaviour sampling; hierarchical clustering; process mining

PDF

Paper 4: Enhancing Cyber Security Through Predictive Analytics: Real-Time Threat Detection and Response

Abstract: This study evaluates the application of predictive analytics for real-time cyber-attack detection and response, focusing on how statistical and machine learning methods can improve decision-making in Security Operations Centers (SOCs). Using a curated network-traffic dataset of 2,000 records, we analyzed key features such as attack type, packet length, anomaly scores, protocol usage, and geo-location patterns to assess their predictive value. Findings indicate that attack type has a measurable influence on response actions, while basic header metrics alone lack the precision needed for accurate classification. These results highlight the importance of incorporating richer contextual features—such as user behavior, asset criticality, and temporal patterns—into predictive models. By integrating such features into operational pipelines, organizations can improve early threat detection, reduce false positives, and optimize resource allocation. This research contributes actionable insights for advancing proactive, data-driven cyber defense strategies and outlines directions for future implementation in live SOC environments.

Author 1: Muhammad Danish

Keywords: Predictive analytics; real-time cyber-attack detection; statistical methods; machine learning; threat detection

PDF

Paper 5: A Hybrid Approach to Automatic Timetabling Using Self-Organizing Maps, Secure Convex Dominating Sets, and Metaheuristics

Abstract: Creating conflict-free academic timetables that respect teacher availability, subject eligibility, and limited resources remains a persistent challenge in educational institutions. This study introduces a novel hybrid algorithm that combines Self-Organizing Maps (SOM), Secure Convex Dominating Sets (SCDS), and Genetic Algorithms (GA) to address this problem effectively. SOM is employed to cluster subjects based on teaching duration and eligibility, providing structured guidance in initial scheduling. SCDS identifies the most conflict-prone subjects—typically those with limited eligible teachers—and ensures they are prioritized, thereby reducing downstream bottlenecks. GA then iteratively refines the schedule by evaluating room assignments, teacher loads, and constraint satisfaction. Extensive simulation experiments were conducted under varying conditions, including worst-case scenarios with dense scheduling conflicts. The system achieved high success rates, particularly in moderate to complex settings, and demonstrated robustness even in constrained environments. Notably, SOM improved spatial and temporal coherence, while SCDS enhanced conflict resolution and GA enabled adaptive optimization. Runtime and convergence results remained within practical limits, with a time complexity of O(n² + gpn). The proposed hybrid framework balances structural prioritization and evolutionary refinement, offering a scalable and intelligent solution to the timetabling problem. It stands out by gracefully handling worst-case scenarios where traditional heuristics often fail.

Author 1: Elmo Ranolo
Author 2: Ken Gorro
Author 3: Pierre Anthony Gwen Abella
Author 4: Lawrence Roble
Author 5: Rue Nicole Santillan
Author 6: Anthony Ilano
Author 7: Benjie Ociones
Author 8: Roel Vasquez
Author 9: Deofel Balijon
Author 10: Daniel Ariaso Sr.
Author 11: Rose Ann Campita
Author 12: Robert Jay Angco

Keywords: Timetable optimization; Self-Organizing Maps (SOM); Secure Convex Dominating Set (SCDS); Genetic Algorithm (GA); academic scheduling

PDF

Paper 6: Integrating Fine-Tuned GPT with Agent-Based Economic Modeling for Transparent Wage Policy Decisions

Abstract: This study presents a decision-support system powered by GPT-enhanced insights to help policymakers explore the economic effects of minimum wage policies in the Philippines. The system integrates agent-based simulation, fuzzy logic, reinforcement learning, and Fuzzy Analytic Hierarchy Process (Fuzzy AHP) to model the complex relationships between wages, inflation, firm behavior, and employment. At its core is a fine-tuned GPT model trained on synthetic simulation outputs, capable of generating human-readable interpretations that explain dynamic trends, trade-offs, and fuzzy economic behaviors that are often difficult to decipher from numbers alone. Two policy scenarios were simulated over 100 months: increasing the minimum wage from ₱500 to ₱600, and from ₱500 to ₱700. While the ₱700 scenario led to short-term boosts in productivity and real wages, it also triggered early inflation, unstable profits, and reduced employment. In contrast, the ₱600 scenario produced more stable results, balancing moderate wage growth with firm sustainability and lower inflationary pressure. Fuzzy AHP was used to evaluate each scenario across four key criteria—real wages, firm profitability, employment, and inflation—favoring ₱600 as the more sustainable policy path. What sets this study apart is the integration of GPT-generated policy narratives that accompany each simulation run. These insights help translate fuzzy, nonlinear model behaviors into clear, accessible language—supporting more inclusive, transparent, and evidence-based wage policy decisions. By combining simulation and generative AI, the framework offers not just predictions, but practical understanding of how economic systems respond to complex changes.

Author 1: Daniel A. Ariaso Sr.
Author 2: Ken D. Gorro
Author 3: Deofel Balijon
Author 4: Meshel Balijon

Keywords: Agent-Based simulation; reinforcement learning; fuzzy AHP; GPT

PDF

Paper 7: A Privacy-Preserving Gaussian Process Regression Framework Against Membership Inference Attacks Using Random Unitary Transformation

Abstract: As artificial intelligence (AI) systems become increasingly embedded in sensitive domains such as healthcare and finance, they face heightened vulnerabilities to privacy threats. A prominent type of attack against AI is the membership inference attack (MIA), which aims to determine whether specific data instances were used in a model’s training set, thereby posing a serious risk of sensitive information disclosure. This study focuses on Gaussian Process (GP) models, which are widely adopted for their probabilistic interpretability and ability to quantify predictive uncertainty, and examines their susceptibility to MIAs. To mitigate this threat, a novel defense mechanism based on Random Unitary Transformation (RUT) is introduced, which encrypts training and testing inputs using orthonormal matrices. Unlike Differential Privacy-based Gaussian Processes (DP-GPR), which rely on noise injection and often degrade model performance, the proposed method preserves both the structural integrity and predictive fidelity of the GP model without injecting noise into the learning process. Two configurations are evaluated: i) encryption applied to both training and test data, and ii) encryption applied only to training data. Experimental results on a medical dataset demonstrate that the framework significantly reduces the effectiveness of MIAs while maintaining high predictive accuracy. Comparative analysis with DP-GPR models further confirms that the proposed method achieves competitive or stronger privacy protection with less impact on model utility. These findings underscore the potential of structure-preserving transformations as a practical and effective alternative to noise-based privacy mechanisms in GP models, particularly in privacy-critical machine learning applications.
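
The structural point — that an orthonormal transformation preserves pairwise distances, and therefore leaves distance-based kernels unchanged — can be checked with a small sketch. The data, RBF kernel, and scikit-learn GP below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
X_test = rng.normal(size=(10, 5))

# Random orthonormal matrix from the QR decomposition of a Gaussian matrix
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))

gp_plain = GaussianProcessRegressor(kernel=RBF()).fit(X, y)
gp_enc = GaussianProcessRegressor(kernel=RBF()).fit(X @ Q.T, y)

# The RBF kernel depends only on pairwise distances, which Q preserves,
# so predictions on identically transformed test points are unchanged.
print(np.allclose(gp_plain.predict(X_test), gp_enc.predict(X_test @ Q.T)))
```

This is the sense in which the transformation is "structure-preserving": no noise is injected, so predictive fidelity is retained while the raw inputs are hidden.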

Author 1: Md. Rashedul Islam
Author 2: Jannatul Ferdous Akhi
Author 3: Takayuki Nakachi

Keywords: Gaussian process; differential privacy; random unitary transformation; membership inference attack; machine learning

PDF

Paper 8: Autonomous Driving in Adverse Weather: A Multi-Modal Fusion Framework with Uncertainty-Aware Learning for Robust Obstacle Detection

Abstract: Robust obstacle detection in autonomous driving under adverse weather remains a critical challenge due to sensor degradation, visibility reduction, and increased uncertainty. This study proposes an Uncertainty-Aware Multi-Modal Fusion (UAMF) framework that integrates LiDAR, RGB images, and weather priors through a dynamic cross-modal attention mechanism and Bayesian uncertainty modeling. The model adaptively adjusts the fusion weights between sensor modalities according to real-time weather conditions and jointly optimizes detection loss with a KL divergence regularization to quantify predictive uncertainty. Experimental results on the nuScenes, KITTI-Adverse, and CARLA datasets demonstrate that UAMF achieves superior performance across rain, snow, and fog scenarios, with mAP@0.5 reaching 0.78, 0.72, and 0.65, respectively—representing 12–31% gains over existing baselines. Notably, UAMF reduces false positive rates by up to 40% in low-visibility conditions and exhibits a strong correlation (ρ = 0.85) between estimated uncertainty and localization error. Ablation studies confirm the importance of the weather-aware fusion and uncertainty modules, while visibility-level analysis shows improved robustness under <30 m scenarios. The proposed framework offers reliable uncertainty signals for downstream decision-making and is deployable in real-time on embedded platforms. Future work will explore unsupervised weather parameter estimation, uncertainty-aware trajectory forecasting, and cross-domain generalization.

Author 1: Zhengqing Li
Author 2: Baljit Singh Bhathal Singh

Keywords: Autonomous driving; adverse weather; multimodal sensor fusion; Bayesian neural networks; uncertainty estimation

PDF

Paper 9: EcoRouting: Carbon-Aware Path Optimization in Green Internet Architectures

Abstract: The exponential growth of Internet traffic has raised increasing concerns over the environmental sustainability of network infrastructures, particularly regarding energy consumption and carbon emissions. While traditional routing algorithms prioritize performance metrics such as speed, reliability, and QoS (Quality of Service), they often overlook the environmental cost associated with data transmission. This study presents EcoRouting, a carbon-aware routing algorithm designed for the Green Internet that integrates emission intensity into the graph-based path optimization process. Implemented in a simulated network environment using Python and NetworkX, EcoRouting leverages real-world carbon intensity data from ElectricityMap to evaluate route selection based on both carbon emissions and latency. Across four experimental scenarios, including static and time-varying emissions, QoS comparison, and multi-city topologies, EcoRouting consistently demonstrated carbon savings of up to 47.1%, with acceptable latency tradeoffs ranging from 2.61% to 95.2% depending on network conditions. The results confirm that EcoRouting provides a viable, scalable, and environmentally conscious approach for reducing the carbon footprint of Internet routing while maintaining QoS.
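
A minimal sketch of carbon-aware path selection in NetworkX, in the spirit described above; the toy topology, edge attribute names, and linear trade-off weights are assumptions, not the paper's actual cost model:

```python
import networkx as nx

# Hypothetical topology: edges carry latency (ms) and carbon intensity (gCO2/GB)
G = nx.Graph()
G.add_edge("A", "B", latency=10, carbon=120)
G.add_edge("B", "C", latency=15, carbon=30)
G.add_edge("A", "C", latency=40, carbon=300)

ALPHA, BETA = 0.7, 0.3  # assumed emission/latency trade-off weights

def eco_weight(u, v, d):
    # Composite edge cost mixing emissions and latency (normalization omitted)
    return ALPHA * d["carbon"] + BETA * d["latency"]

path = nx.shortest_path(G, "A", "C", weight=eco_weight)
```

Because NetworkX accepts a callable edge weight, the same graph can be re-priced as carbon intensity data (e.g., from ElectricityMap) changes over time.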

Author 1: Handrizal
Author 2: Herriyance
Author 3: Amer Sharif

Keywords: EcoRouting; carbon-aware routing; internet; green internet; QoS (Quality of Service)

PDF

Paper 10: Exploring the Future Research Agenda for Health Applications Adoption: A Systematic Literature Review

Abstract: The healthcare sector is experiencing rapid digital transformation, marked by the growing popularity of mobile health (mHealth) and eHealth applications for various health-related purposes. However, despite their potential, the adoption of health applications remains inconsistent due to varying influencing factors. Previous reviews often focused on specific populations or limited frameworks, leaving a gap for a comprehensive synthesis. This study aims to systematically review and consolidate the current understanding of the factors affecting user adoption behavior in health applications. Following the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) guidelines, a comprehensive literature search was conducted to identify relevant studies between 2016 and 2025. A total of 79 primary studies were analyzed to explore the theoretical model, variables, and emerging trends in health applications. The Technology Acceptance Model (TAM) and Unified Theory of Acceptance and Use of Technology (UTAUT) models are the most widely used models by researchers. Beyond these core frameworks, researchers have proposed extended constructs such as psychological factors, health literacy, regulator readiness, security concerns, and infrastructure limitations. This review highlights the need for more inclusive, cross-cultural, and mixed-method research, particularly focusing on underrepresented populations such as rural users, the elderly, and low-literacy groups. These findings offer valuable insight to inform the design of future models and support the development of more effective, context-aware, and user-centered health technologies.

Author 1: Rahmat Fauzi
Author 2: Adhistya Erna Permanasari
Author 3: Silmi Fauziati

Keywords: Systematic literature review; health application adoption; health technology; user behavior; technology acceptance; TAM; UTAUT

PDF

Paper 11: A New Method for Real-Time Fall Detection Based on MediaPipe Pose Estimation and LSTM

Abstract: Falls are a significant health problem among older adults, leading to serious injuries, reduced quality of life, and an increased public health burden. Although various fall detection systems have been developed using technologies such as wearable sensors and image processing (computer vision), limitations remain in convenience, accuracy, and real-time responsiveness. To overcome these limitations, this research presents a real-time fall detection system that integrates MediaPipe pose estimation technology with a Long Short-Term Memory (LSTM) neural network. The proposed method functions through two main components: MediaPipe pose estimation detects and tracks keypoints on the human body from real-time video input, while a trained LSTM model analyzes the sequence of movements of the detected keypoints to classify and differentiate between fall behaviors and normal activities. The system was trained and evaluated using the standard UR Fall Detection Dataset. Experimental results show that the proposed system achieved high efficiency in fall detection, with an accuracy of 95.2% on the test dataset. It detected all actual fall events (a recall of 100%) while maintaining a low false positive rate, and provided higher accuracy than comparable methods. These results indicate that the proposed system has the potential for practical application as an effective tool for real-time fall alerts, enabling timely assistance for those injured in falls.
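
A minimal sketch of the second component, a sequence classifier over pose keypoints, assuming Keras and a 30-frame window; the layer sizes and window length are illustrative choices, though the 33-landmark output is a fixed property of MediaPipe Pose:

```python
import tensorflow as tf

SEQ_LEN, N_KEYPOINTS = 30, 33  # MediaPipe Pose yields 33 landmarks per frame
N_FEATURES = N_KEYPOINTS * 2   # (x, y) coordinates per landmark

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),                          # temporal movement model
    tf.keras.layers.Dense(1, activation="sigmoid"),    # fall vs. normal activity
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(keypoint_sequences, labels, ...)  # sequences built from pose tracking
```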

Author 1: Puwadol Sirikongtham
Author 2: Apichaya Nimkoompai

Keywords: Fall detection; older adults; real-time; MediaPipe; LSTM; computer vision; pose estimation

PDF

Paper 12: Design of Marketing Digital Control System Based on the Integration of Big Data Analysis and Machine Learning

Abstract: The advent of the digital age has made it difficult for traditional marketing methods to meet the rapidly changing needs of the market. To improve the efficiency and effectiveness of enterprise marketing activities, it is particularly important to develop intelligent and precise marketing management systems. Therefore, the study proposes a marketing digital control system based on the integration of big data analysis and machine learning, which utilizes the Apache Flink distributed big data processing framework to design a marketing control system covering system content and functional requirements. At the same time, machine learning design ideas are introduced into the marketing recommendation algorithm, using reinforcement learning to enrich the business logic of the Rete network, dynamically generate and update rules, and calculate user interest while ensuring the fit between marketing recommendation content and user interest information. The study constructed a self-designed dataset (consisting of over 30,000 records) through data simulation and crawling, and compared the research method with other machine learning algorithms on the same dataset. The results show that the maximum matching accuracy of the improved recommendation algorithm reaches 90.12%, and the prediction accuracy of user consumption behavior exceeds 88%, outperforming the comparative algorithms. The mean absolute error on product recommendations is less than 0.10, and the F1 value is greater than 0.65, indicating significant recommendation effectiveness. The marketing digital control system designed in this research effectively integrates big data analysis and machine learning technology, providing support for the digital transformation of enterprises and the intelligent upgrading of related fields.

Author 1: Qiming Li
Author 2: Songling Du
Author 3: Shiyuan Zhang

Keywords: Big data; Flink; reinforcement learning; Rete network; interest level; term frequency-inverse document frequency; rule

PDF

Paper 13: Artificial Intelligence in Optometry: Potential Benefits and Key Challenges: A Narrative Review

Abstract: The integration of Artificial Intelligence (AI) into healthcare is transforming many medical fields, including optometry. This study provides a narrative review of the current applications and future potential of AI in optometric practice, emphasizing its role in automated screening and diagnosis, personalized treatment planning, and enhanced accessibility through tele-optometry. Alongside these opportunities, this study examines the technical, socioeconomic, ethical, legal, and professional challenges that limit the effective integration of AI in optometry practice. Focus is placed on concerns surrounding data privacy, patient autonomy, regulatory disparities, and practitioner resistance to adoption. Furthermore, this review highlights key research gaps, including the need for diverse training datasets, large-scale validation trials, and collaborative training between clinicians and AI developers. By resolving these challenges, AI has the potential to improve diagnostic accuracy, expand access to care, and enhance the quality of eye care services. By integrating the available evidence, this narrative review provides clinicians, policymakers, and researchers with a comprehensive overview of the benefits, challenges, and future directions of AI in optometry.

Author 1: Noura A. Aldossary

Keywords: Artificial intelligence; automated screening; tele-optometry; eye care services; patient autonomy

PDF

Paper 14: nodeWSNsec: A Hybrid Metaheuristic Approach for Reliable Security and Node Deployment in Wireless Sensor Networks

Abstract: Efficient and reliable node deployment in Wireless Sensor Networks is crucial for optimizing area coverage, connectivity among nodes, and energy efficiency. Random node deployment can lead to coverage gaps, connectivity issues, and reduced network lifetime. This study proposes a hybrid metaheuristic approach combining a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) to address the challenges of energy-efficient and reliable node deployment. The GA-PSO hybrid leverages GA’s strong exploration capabilities and PSO’s rapid convergence, achieving an optimal balance between coverage and energy consumption. The performance of the proposed approach is evaluated against standalone GA and PSO and against the innovative metaheuristic-based Competitive Multi-Objective Marine Predators Algorithm (CMOMPA) across varying sensing ranges. Simulation results demonstrate that GA-PSO requires 15 to 25% fewer sensor nodes than standalone GA or PSO while maintaining 95% or more area coverage and preserving connectivity. The proposed algorithm also dominates CMOMPA at long sensing and communication ranges, offering higher coverage, improved connectivity, and reduced deployment time while requiring fewer sensor nodes. This study also explores key trade-offs in WSN deployment and highlights future research directions, including heterogeneous node deployment, mobile WSNs, and enhanced multi-objective optimization techniques. The findings underscore the effectiveness of hybrid metaheuristics in improving WSN performance, offering a promising approach for real-world applications such as environmental monitoring, smart cities, smart agriculture, disaster response, and IIoT.
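
To make the optimization target concrete, here is a sketch of a coverage-based fitness function that GA individuals or PSO particles could evaluate; the field size, sensing radius, Monte Carlo coverage estimate, and node-count penalty are all assumptions, not the paper's objective:

```python
import numpy as np

AREA, R = 100.0, 10.0  # assumed square field side and sensing radius

def coverage(nodes, n_samples=5000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of covered area fraction for node positions (k, 2)."""
    pts = rng.uniform(0, AREA, size=(n_samples, 2))
    d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=-1)
    return (d.min(axis=1) <= R).mean()

def fitness(nodes):
    # Reward coverage, penalize node count as an energy/cost proxy (weight assumed)
    return coverage(nodes) - 0.01 * len(nodes)
```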

Author 1: Rahul Mishra
Author 2: Sudhanshu Kumar Jha
Author 3: Naresh Kshetri
Author 4: Bishnu Bhusal
Author 5: Mir Mehedi Rahman
Author 6: Md Masud Rana
Author 7: Aimina Ali Eli
Author 8: Khaled Aminul Islam
Author 9: Bishwo Prakash Pokharel

Keywords: Node deployment; wireless sensor networks; genetic algorithm; particle swarm optimization; competitive multi-objective marine predators algorithm

PDF

Paper 15: Exploring Trust Management in Fog Computing: A Comprehensive Review and Future Challenges in Task Offloading

Abstract: With the proliferation of data-driven services and latency-sensitive applications, fog computing has emerged as a pivotal extension of cloud infrastructure, enabling data processing and resource allocation at the network edge. However, the trustworthiness of task offloading in such decentralized and heterogeneous environments remains insufficiently explored, posing significant concerns related to system reliability, security, and performance. This review aims to address this gap by providing a comprehensive and systematic analysis of current research on trust-based task offloading in fog computing. The study investigates various trust evaluation mechanisms, categorizing them into three major paradigms: Direct Trust-based, Recommended Trust-based, and Comprehensive Trust. Through this classification, the study identifies and examines key trust-related metrics that influence offloading decisions, including task execution accuracy, trust evaluation accuracy, and evaluation latency. A critical assessment of the strengths and limitations of existing approaches reveals ongoing challenges such as dynamic trust management, scalability in large-scale networks, interoperability among diverse nodes, and resilience against malicious behaviours. Based on these insights, the study highlights pressing research opportunities and recommends the development of lightweight, adaptive, and context-aware trust frameworks capable of supporting real-time decision-making in dynamic fog environments. By synthesizing fragmented research and offering a forward-looking perspective, this review contributes a foundational reference for scholars and practitioners seeking to enhance the reliability and security of task offloading in fog computing, thereby supporting the evolution of more robust and efficient edge-based computing infrastructures.

Author 1: Liu Feng
Author 2: Suhaidi Hassan
Author 3: Mohammed Alsamman

Keywords: Trust management; fog computing; cloud computing; task offloading; heterogeneous networks; trust evaluation; task completion time

PDF

Paper 16: AraSpam: A Multitask Deep Neural Network for Spam Detection in Arabic Twitter

Abstract: Twitter has become widely used for disseminating information across the Arab world. It provides diverse communicative and informational needs while serving as a rich data source for a wide range of research. However, the integrity of such data is frequently undermined by the pervasive issue of spam. Existing research proposed the use of spam detection models at multiple levels—the account, tweet, and campaign levels. Many of these models target Uniform Resource Locator (URL)-based spam messages, whereas a significant portion of spam content operates without embedded URLs. Furthermore, spam detection methodologies tailored to the account level often lack the precision required for tweet-level analysis or, conversely, fail to capture broader account-level behavioral patterns. Moreover, studies focusing on Arabic spam have largely been restricted to specific geographical regions or linguistic varieties, such as Arabic dialect (AD) or Modern Standard Arabic (MSA), thereby neglecting the full spectrum of Arabic’s linguistic diversity in spam messages. This study aims to address these limitations by proposing AraSpam, a multitask deep neural network that detects both spam messages and profiles using a single model. It was trained using a dataset of tweets written in AD and MSA covering different spamming targets. The text features were extracted using transformer-based models: AraBERT for tweet text and mBERT for profile screen name. The experiment demonstrated 96% accuracy in detecting both spam accounts and tweets with seven different spamming targets. Additionally, the experiments revealed that reducing the number of spam classes resulted in an increase in tweet detection performance and a decrease at the account level.
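
A minimal sketch of the multitask idea — one shared representation feeding separate tweet-level and account-level heads — assuming precomputed AraBERT/mBERT pooled embeddings (768-dimensional, the usual BERT-base hidden size) and Keras; the layer sizes and binary heads are illustrative simplifications of the paper's multi-class setup:

```python
import tensorflow as tf

# Pooled transformer embeddings are assumed precomputed upstream:
# AraBERT for the tweet text, mBERT for the profile screen name.
tweet_emb = tf.keras.Input(shape=(768,), name="tweet_embedding")
name_emb = tf.keras.Input(shape=(768,), name="screen_name_embedding")

shared = tf.keras.layers.Dense(256, activation="relu")(
    tf.keras.layers.Concatenate()([tweet_emb, name_emb]))

# Two task heads trained jointly from the shared representation
tweet_out = tf.keras.layers.Dense(1, activation="sigmoid",
                                  name="spam_tweet")(shared)
acct_out = tf.keras.layers.Dense(1, activation="sigmoid",
                                 name="spam_account")(shared)

model = tf.keras.Model([tweet_emb, name_emb], [tweet_out, acct_out])
model.compile(optimizer="adam",
              loss={"spam_tweet": "binary_crossentropy",
                    "spam_account": "binary_crossentropy"})
```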

Author 1: Lulua Alhamdan
Author 2: Ahmed Alsanad
Author 3: Nora Al-Twairesh

Keywords: Spam detection; Twitter; multitask deep neural network; transformer-based model

PDF

Paper 17: Hierarchical Transformer Residual Model for Pneumonia Detection and Lesion Mapping

Abstract: Pneumonia, a potentially fatal infection that is especially common among children and the elderly, remains a prevalent threat even after years of research into tackling it. Rapid and proper identification is crucial for timely treatment and improved results. While thoracic radiographs are widely employed in pneumonia diagnosis, real-world clinical assessment is frequently complicated by factors such as subtle radiographic patterns, overlapping symptoms, subjective manual judgement and dependency on expert radiologists. The study proposes a hybrid deep learning model integrating ResNet50 and the Swin Transformer, coupled with an auxiliary segmentation decoder to facilitate both classification and lesion localization in chest X-ray images. ResNet50 acts as the backbone for hierarchical spatial feature extraction, capturing fine-grained local textures indicative of pulmonary abnormalities, and the Swin Transformer serves as the global attention-driven feature aggregator. The shifted window mechanism of the Swin Transformer maintains spatial hierarchy while facilitating effective contextual modelling. Global Average Pooling (GAP) and Multilayer Perceptron (MLP) form the classification head, yielding accurate predictions in classifying the images, while the segmentation decoder utilizes multiscale features to generate pixel-wise masks for pneumonia lesion regions. The model outperformed conventional methods with 98.4% classification accuracy, 98.2% precision, 99.2% recall and an F1-score of 98.7%, with a 0.88 Dice Coefficient in segmentation. These results reflect the hybrid architecture’s superior performance and its dual capacity for diagnostic prediction and lesion interpretability. The proposed model demonstrates promising results for deployment in real-world clinical workflows, especially in resource-constrained or high-patient-load environments.

Author 1: Anupama Prasanth

Keywords: Pneumonia detection; lesion segmentation; chest X-ray; ResNet50; swin transformer; global average pooling

PDF

Paper 18: Developing ReAdaBalancer for Load Balancing Optimization in Networked Cloud Computing

Abstract: Traditional load balancing systems frequently struggle to adjust to abrupt, unexpected changes in traffic, which can cause server overload, longer response times, and higher request denial rates. This problem is especially acute in areas like healthcare, finance, cloud computing, and e-commerce, where performance, stability, and fast data delivery are critical. To solve this problem, this study presents ReAdaBalancer, an adaptive load balancing architecture that aims to improve system performance, scalability, stability, and efficiency in contexts with changing traffic. Flask serves as the backend framework for ReAdaBalancer, while Nginx serves as the load balancer. Real-time monitoring and analytics are used to improve traffic distribution based on the resources that are currently available. Leveraging queuing theory (an M/M/s/K network model), the system’s performance is tested under diverse load situations, providing insights into its scalability and efficiency. ReAdaBalancer also learns and adapts continuously through machine learning and heuristic optimization, ensuring consistent behavior even as demand changes. Experimental results demonstrate that, under equivalent settings, ReAdaBalancer decreases response times by over 67% and reduces request denial rates by over 50% in comparison to traditional methods. Future work could extend ReAdaBalancer to distributed multi-data-center environments, add reinforcement learning for more autonomous decision-making, explore energy-efficient load balancing strategies, and adapt the system to edge computing and IoT ecosystems.
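
For the queuing-theory side, the M/M/s/K model gives the request denial (blocking) probability in closed form; a sketch, with arrival rate lam, service rate mu, s servers, and system capacity K (K ≥ s) all as assumed inputs rather than values from the paper:

```python
from math import factorial

def mmsk_block_prob(lam, mu, s, K):
    """Blocking probability of an M/M/s/K queue, with a = lam/mu:

    p_n = (a^n / n!) p0             for n <= s
    p_n = (a^n / (s! s^(n-s))) p0   for s < n <= K
    """
    a = lam / mu
    terms = [a**n / factorial(n) for n in range(s + 1)]
    terms += [a**n / (factorial(s) * s**(n - s)) for n in range(s + 1, K + 1)]
    p0 = 1.0 / sum(terms)
    return terms[K] * p0  # p_K: probability an arriving request is denied

# Example: 80 req/s arriving, 10 req/s per worker, 8 workers, room for 20
print(mmsk_block_prob(lam=80, mu=10, s=8, K=20))
```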

Author 1: M Diarmansyah Batubara
Author 2: Poltak Sihombing
Author 3: Syahril Efendi
Author 4: Suherman

Keywords: Cloud computing; heuristic optimization; adaptive load balancing; scalability; ReAdaBalancer

PDF

Paper 19: An Augmentation-Based System for Diagnosing COVID-19 Using Deep Learning

Abstract: Recently, due to the dangerous spread of COVID-19, there has been strong competition among computer science researchers within the scientific research community to employ deep learning for the development of intelligent medical systems that diagnose this illness. Enhancing accuracy is considered the most important objective, and augmentation techniques are used in this context. This study addresses two main issues related to applying augmentation on X-ray and CT-scan images: losing the positional information of augmented medical images and the integration of extracted features while scanning them. The use of the Vision Transformer Structure, supported by a Position-Aware Embedding (PAE) method, is proposed to deal with these issues. Moreover, in this study, a student–teacher-based approach was adopted to enable considerable resistance against training on a small batch of training images. Due to the sensitivity of medical data, preserving the privacy of patients was taken into account by using a pseudonym-based anonymity approach. After evaluations based on accuracy, precision, recall, and specificity metrics, the results showed that the proposed system has a high-level capability to predict class images (X-ray or CT-scan) as well as considerable resistance against training on small medical images.

Author 1: Mohamad Shady Alrahhal
Author 2: Mohammad A. Mezher
Author 3: Osamah A.M. Ghaleb
Author 4: Mohammad Al-Hjouj
Author 5: Raghad Sehly
Author 6: Samir Bataineh

Keywords: COVID-19; medical images; augmentation; vision transformer; training data ratio

PDF

Paper 20: Navigating the Landscape of Automated Information Extraction for Financial Fund Prospectuses: Survey and Challenges

Abstract: In the financial sector, a fund prospectus is a critical document mandated by the Securities and Exchange Commission (SEC) that provides vital information about investments to the public. These documents encompass a range of financial concepts that define the fund's operations, including its name and disclaimers associated with periodic reports. Traditionally, the identification of these concepts has been a manual, labour-intensive, and costly task for financial regulators, aimed at ensuring the completeness of information. Automating this process is fraught with challenges, including the lengthy nature of prospectuses, the nuances of financial language, and the scarcity of labelled data for effective model training. This study explores state-of-the-art methods for information extraction, specifically within the context of financial documents. It begins with an overview of information extraction, detailing its definition and various types, such as Named Entity Recognition (NER) and event extraction. The discussion highlights the increasing significance of information extraction in the financial domain and reviews typical application areas. Ultimately, this research seeks to highlight the challenges within existing methods through a comprehensive literature review, emphasizing the need for more effective techniques tailored to the extraction of financial concepts in fund prospectuses. By enhancing and streamlining the extraction process, it aspires to improve efficiency and reduce costs for financial regulators, thereby ensuring more accurate and comprehensive information dissemination.

Author 1: Yuyao Xu
Author 2: Mohamad Farhan Mohamad Mohsin

Keywords: Machine learning; information automation; financial documentation

PDF

Paper 21: Proactive Cancer Prediction Using IoT and Deep Learning Before Symptoms

Abstract: The ability to predict cancer before the onset of clinical symptoms represents a paradigm shift in oncology and preventive medicine. Existing diagnostic approaches remain reactive, relying on imaging or symptomatic manifestations that frequently detect the disease only at advanced stages, particularly in pancreatic, lung, and ovarian cancers. To address this gap, we propose a novel methodology that integrates the Internet of Things (IoT), Artificial Intelligence (AI), and Deep Learning for proactive cancer prediction. Continuous high-resolution physiological, behavioral, and environmental data are collected through IoT-enabled wearable and implantable devices and analyzed using a hybrid architecture that combines Autoencoders, Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs), with a specific focus on Long Short-Term Memory (LSTM) models. Unlike previous work, which primarily targeted general IoT-based monitoring or symptom-driven detection, this study explicitly demonstrates how the fusion of multidimensional IoT data and advanced deep learning enables the identification of micro-level deviations from an individual’s baseline as early biomarkers of cancer risk. Experiments conducted on synthetic datasets simulating pancreatic, lung, and ovarian cancer progression show that the proposed framework achieves an accuracy of 89%, a sensitivity of 85%, a specificity of 91%, and an AUC of 0.93, with an average early detection lead time of 7.5 months. These findings highlight the rigor and originality of the proposed approach, which advances the field by offering a validated, proactive methodology for cancer prediction and establishing clear differences from prior studies by the authors that focused on narrower IoT applications. This work paves the way for predictive and preventive oncology, where intervention can occur long before clinical manifestation of the disease.

Author 1: Mohamed Amine Meddaoui
Author 2: Imane Karkaba
Author 3: Moulay Amzil
Author 4: Mohammed Erritali

Keywords: Deep learning; internet of things; artificial intelligence; convolutional neural network; recurrent neural network; long short-term memory; autoencoders; cancer prediction

PDF

Paper 22: Comparative Analysis of Machine Learning and Deep Learning Models for Handwritten Digit Recognition

Abstract: Handwritten digit recognition (HDR) forms a key component of computer vision systems, especially in optical character recognition (OCR). This study presents a comparative analysis of Machine Learning (ML) algorithms and Deep Learning (DL) models for HDR tasks. A contour-based segmentation technique was applied in preprocessing to enhance feature extraction by detecting digit boundaries and reducing noise. ML models, including K-Nearest Neighbors (KNN) and Support Vector Machine (SVM), and DL architectures, such as Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs), were evaluated on the Modified National Institute of Standards and Technology (MNIST) and the National Institute of Standards and Technology (NIST) datasets. The results demonstrate that DL models significantly outperform ML algorithms in terms of accuracy and robustness, while the KNN model achieved acceptable results. The results underline the importance of contour-based preprocessing in boosting deep learning techniques for HDR.
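
A minimal sketch of contour-based digit segmentation with OpenCV, in the spirit described above; the input file, noise-area threshold, and 28×28 resize (the MNIST input size) are assumptions:

```python
import cv2

img = cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE)  # hypothetical digit sheet
# Otsu thresholding; inverted so digit strokes become foreground
_, thresh = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

digits = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 50:  # assumed minimum area to drop speckle noise
        roi = cv2.resize(thresh[y:y + h, x:x + w], (28, 28))
        digits.append(roi)  # each ROI is then fed to the ML/DL classifier
```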

Author 1: Soukaina Chekki
Author 2: Boutaina Hdioud
Author 3: Rachid Oulad Haj Thami
Author 4: Sanaa El Fkihi

Keywords: Handwritten digit recognition (HDR); Optical Character Recognition (OCR); Machine Learning (ML); Deep Learning (DL); segmentation

PDF

Paper 23: Residual DDPG Control with Error-Aware Reward Rescaling for Active Suspension Under Unseen Road Conditions

Abstract: This study investigates a hybrid residual control framework combining Deep Deterministic Policy Gradient (DDPG) and a Proportional–Integral–Derivative (PID) based correction module for active suspension (AS) systems, aiming to improve ride performance and generalization under complex road excitations. The DDPG controller is trained on sinewave inputs, while the PID module compensates for residual errors to enhance robustness. To further guide policy optimization, an error-aware reward rescaling strategy is introduced during training, adaptively shaping the reward signal based on acceleration deviation. The controller is tested under five typical road conditions, including sinewave inputs, step inputs, and ISO 8608 Level B random profiles. Simulation results show that the residual DDPG (RDDPG) controller outperforms both the standalone DDPG and PID controllers, reducing vertical acceleration RMS by 50.35% under a 0.05 m sinewave input. This demonstrates that combining reinforcement learning (RL) with fast residual correction and reward rescaling is a practical and stable approach to AS control across diverse driving conditions.
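
A sketch of the residual-control structure, in which a classical PID term corrects the learned policy's action; the gains, time step, and the ddpg_actor name are hypothetical placeholders for illustration:

```python
class PID:
    """Minimal PID used as a residual corrector; gains and dt are assumed."""

    def __init__(self, kp=2.0, ki=0.1, kd=0.05, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Per control step, the total actuator command is the learned action plus
# a fast PID correction on the acceleration error (names are hypothetical):
# u = ddpg_actor(state) + pid.step(target_accel - measured_accel)
```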

Author 1: Zien Zhang
Author 2: Abdul Hadi Abd Rahman
Author 3: Noraishikin Zulkarnain

Keywords: Deep deterministic policy gradient; active suspension; reward function; generalization

PDF

Paper 24: Edge-Guided Multi-Scale YOLOv11n: An Advanced Framework for Accurate Ship Detection in Remote Sensing Imagery

Abstract: Ship detection in optical remote sensing imagery plays a vital role in maritime surveillance and environmental monitoring. However, existing deep learning models often struggle to generalize effectively in complex marine environments due to challenges such as noise interference, small object sizes, and diverse weather conditions. To address these issues, this study proposes an Edge-Guided Multi-Scale YOLO algorithm (YOLOv11n-EGM). The approach introduces multi-scale deep convolutional branches with varying kernel sizes to perform parallel feature extraction, enhancing the model’s ability to detect objects of different scales. Additionally, the classic Sobel operator is incorporated for edge-aware feature extraction, improving the model’s sensitivity to object boundaries. Finally, 1×1 convolutions are employed for feature fusion, reducing computational complexity. Experimental results on the ShipRSImageNet V1.0 dataset demonstrate that the improved model achieves notable gains in precision, recall, mAP@0.5, and mAP@0.5:0.95 compared to the baseline, highlighting its superior performance in challenging maritime scenarios. Qualitative analysis further shows that YOLOv11n-EGM can accurately detect both large and extremely small ships in cluttered scenes, with precise boundary localization. However, occasional misclassification in fine-grained categories (e.g., motorboat vs. hovercraft) highlights the challenge of small-instance recognition. Overall, the proposed method exhibits strong robustness and practical applicability in real-world maritime scenarios, offering a promising solution for edge-aware, multi-scale ship detection in remote sensing imagery.
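
A minimal sketch of Sobel-based edge feature extraction as OpenCV would express it; the input image and kernel size are assumptions, and the paper's fusion of these edge maps with the multi-scale convolutional branches is not reproduced here:

```python
import cv2

gray = cv2.imread("ship_tile.png", cv2.IMREAD_GRAYSCALE)  # hypothetical tile
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
edges = cv2.magnitude(gx, gy)  # gradient magnitude as an edge-aware prior
```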

Author 1: Yan Shibo
Author 2: Liu Pan
Author 3: Abudhahir Buhari

Keywords: Optical remote sensing imagery; ship detection; multi-scale deep convolution; edge-aware feature extraction

PDF

Paper 25: The ECTLC-Horcrux Protocol for Decentralized Biometric-Based Self-Sovereign Identity with Time-Lapse Encryption

Abstract: In an era of rapidly developing digital communication, there is a growing need for technologies that guarantee secure user identification, document authentication and protection of personal data, including biometrics. Previously used centralized identity management systems are becoming increasingly vulnerable to hacking, falsification and misuse. This problem is especially relevant when information must remain closed until a specific moment or event occurs, for example, in the fields of forensics, healthcare or law (medical certificates, legal acts, inheritance agreements, etc.). The main goal is to create a secure, verifiable, and distributed access control system with the ability to defer disclosure of information. The study proposes a cryptographic protocol that combines Self-Sovereign Identity (SSI), Time-Lapse Cryptography (TLC), and decentralized biometric data management. The protocol builds on the principles of TLC and the Horcrux protocol, which enable time-controlled disclosure of encrypted information associated with a user’s identity. The architecture includes the use of QR codes as a transport for Verifiable Credentials (VC), blockchain for authenticity verification and key management, and biometrics as a second factor of identity binding. The proposed solution is intended for use in scenarios where cryptographic protection against premature access to sensitive data is required, such as in medicine, forensics, notarial acts, or intellectual property. The study presents the protocol structure and application options.

Author 1: N. M. Kaziyeva
Author 2: R. M. Ospanov
Author 3: N. Issayev
Author 4: K. Maulenov
Author 5: Shakhmaran Seilov

Keywords: Self-sovereign identity; horcrux protocol; elliptic curves time-lapse cryptography; biometrics; QR codes; blockchain

PDF

Paper 26: Integrating Chatbots into E-Learning Platforms: A Systematic Review

Abstract: The application of chatbots in e-learning has experienced rapid growth in recent years, but questions remain about their pedagogical contribution in practice. For this reason, the aim of this systematic literature review was to analyze the implementation of chatbots in e-learning platforms, evaluating their benefits, academic impact and challenges. The methodology used was PRISMA 2020 (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), based on a structured search in databases such as Scopus, Web of Science, Springer and ScienceDirect. The selection included 55 studies published between 2020 and 2024, after applying rigorous inclusion and exclusion criteria. The research results show that personalization of learning, self-regulation, increased student engagement and educational efficiency benefit most when chatbots are integrated with active methodologies. Geographically, scientific output was dominated by the UK, Malaysia and Spain, with 38.18% of publications in 2024. Most methodological approaches were quantitative, followed by mixed-methods and, less frequently, qualitative studies. Among the barriers that emerged in the pedagogical dimension were teacher resistance and limited training in artificial intelligence tools. Educational issues, privacy concerns, and biases in generated responses also emerged. Keyword co-occurrence analysis using VOSviewer revealed the prominence of terms such as chatbot, intelligent tutoring and technology-enhanced learning in recent scientific output. Thus, chatbots are concluded to be a determinant of autonomy, motivation and effectiveness in online learning, pointing toward future educational environments in which students adopt emerging technologies. Among the limitations of this review were the scarcity of longitudinal studies and restricted access to certain articles.

Author 1: Victor Sevillano-Vega
Author 2: Juan Chavez-Perez
Author 3: Carmen Torres-Ceclén
Author 4: Orlando Iparraguirre-Villanueva

Keywords: Chatbots; educational platforms; e-learning; education; challenges

PDF

Paper 27: A Hybrid Approach Combining Deep CNN Features with Classical Machine Learning for Diabetic Retinopathy Diagnosis

Abstract: One of the main causes of vision impairment is diabetic retinopathy (DR), a common and dangerous consequence of diabetes that damages the retinal blood vessels. Preventing irreversible vision loss requires early detection of DR. Recent developments demonstrate how artificial intelligence (AI), and in particular deep learning (DL), can automate the classification of retinal images for the diagnosis of DR. In this study, a hybrid model is proposed that combines deep learning-based feature extraction with classical machine learning classifiers for robust medical image analysis. After using preprocessing methods to lower background noise, this study investigates the use of Convolutional Neural Networks (CNNs) for extracting discriminative features from DR images. To improve image contrast and highlight vascular features, the preprocessing pipeline uses morphological top-hat filtering and green channel extraction. Furthermore, transfer learning was applied to enhance feature representation. The tuned Radial Basis Function Support Vector Machine (RBF-SVM) achieved the highest classification accuracy (85%) among the machine learning (ML) classifiers assessed, which included Random Forest (RF), Gradient Boosting (GB), and RBF-SVM. These findings demonstrate the potential of hybrid AI-driven approaches and domain-specific medical image analysis in providing reliable and efficient automated DR detection.
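
A minimal sketch of the described preprocessing steps — green channel extraction followed by morphological top-hat filtering — using OpenCV; the file name, elliptical kernel shape, and 15×15 size are assumptions rather than the paper's tuned choices:

```python
import cv2

img = cv2.imread("fundus.png")  # hypothetical retinal fundus image
green = img[:, :, 1]            # OpenCV loads BGR; index 1 is the green channel

# Top-hat filtering emphasizes bright, thin structures such as vessels
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
tophat = cv2.morphologyEx(green, cv2.MORPH_TOPHAT, kernel)
enhanced = cv2.normalize(tophat, None, 0, 255, cv2.NORM_MINMAX)
```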

Author 1: Amandeep Kaur
Author 2: Simranjit Singh
Author 3: Hardeep Singh
Author 4: Sarveshwar Bharti
Author 5: Jai Sharma
Author 6: Himanshi Sharma

Keywords: Deep learning; convolutional neural networks; hybrid model; diabetic retinopathy; machine learning; medical image analysis; feature extraction

PDF

Paper 28: Forecasting Currency Exchange Direction with an Advanced Immune-Inspired Model

Abstract: Accurately forecasting currency exchange rates is a persistent and significant challenge in computational finance. This study addresses the challenge by introducing an advanced model based on the Artificial Immune Recognition System (AIRS), an algorithm inspired by the adaptive learning of biological immune systems, to predict the directional movement of the EUR/USD pair. While conventional machine learning models are widely used, immune-inspired approaches have been largely unexplored in this domain. Using historical data from May 2002 to July 2024, the proposed model was rigorously optimized through time-series cross-validation and an Evolutionary Algorithm search. On the out-of-sample test set, the optimized model demonstrates strong predictive power, achieving an F1-Score of 0.66 and an ROC AUC of 0.74, results that are competitive with standard machine learning benchmarks. These findings validate AIRS as a robust and scientifically defensible tool for financial forecasting, offering a viable alternative to conventional methods in a highly volatile market.

Author 1: EL BADAOUI Mohamed
Author 2: RAOUYANE Brahim
Author 3: EL MOUMEN Samira
Author 4: BELLAFKIH Mostafa

Keywords: Artificial immune recognition system; financial market prediction; machine learning; predictive analytics; time series forecasting

PDF

Paper 29: Adaptive Ensemble Models for Robust Intrusion Detection in Cloud Environment on Imbalanced Dataset

Abstract: The rapid development of information storage and sharing technologies brings new challenges in defending against network security attacks. In this study, ensemble learning models are evaluated to enhance the performance of a network intrusion detection system (NIDS) in three phases through machine learning approaches. In the first phase, the unbalanced dataset is processed with four re-sampling techniques, namely SMOTE, RUS, RUS+ROS, and RUS+SMOTE, for balancing treatment. In the second phase, Random Forest feature selection is applied to these four balanced datasets. Finally, three ensemble models, named EM1, EM2 and EM3, are designed using six basic classifiers and then evaluated. In earlier studies, the first and second phases were evaluated through an SVM binary classifier for four feature subsets. The four feature subsets are obtained through Random Forest feature selection with four different thresholds of Cumulative Feature Importance Scores (CFIS): 85%, 90%, 95% and 99%. From the evaluated results, three challenges were identified: i) the highest accuracy obtained through the re-sampling method required maximum computational time; ii) different thresholds of CFIS exhibit instability in performance metrics as well as computational times, even though the number of features is smaller; iii) the adopted multi-class SVM classifier’s efficiency in detecting attacks within minimal computational time, without compromising accuracy compared to earlier works, had yet to be ascertained. In this study, an attempt has been made to address these challenges with ensemble learning. Three ensemble models are evaluated on the adopted CICIDS-2017 dataset. Finally, the comparative results are presented, and discussions are carried out to guide security professionals in implementing prevention and mitigation algorithms.
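
A sketch of the two framing steps — SMOTE re-sampling and Random Forest feature selection by a Cumulative Feature Importance Score (CFIS) threshold — assuming scikit-learn and imbalanced-learn; dataset loading and the ensemble construction itself are omitted:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

# X, y: assumed preprocessed intrusion-detection features and labels

def select_by_cfis(X, y, threshold=0.95):
    """Keep the smallest feature set whose cumulative importance >= threshold."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]   # most important first
    cum = np.cumsum(rf.feature_importances_[order])
    k = int(np.searchsorted(cum, threshold)) + 1
    return order[:k]  # indices of the selected features

# X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
# keep = select_by_cfis(X_bal, y_bal, threshold=0.95)   # 95% CFIS threshold
```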

Author 1: Swarnalatha K
Author 2: Nirmalajyothi Narisetty
Author 3: Gangadhara Rao Kancherla
Author 4: Neelima Guntupalli
Author 5: Simhadri Mallikarjuna Rao
Author 6: Archana Kalidindi

Keywords: Resampling methods; cloud computing; feature selection; ensemble model; intrusion detection system; machine learning

PDF

Paper 30: Detection of Leaf Fall Disease in Sembawa Rubber Plantation Through Feature Extraction Model and Clustering Methods

Abstract: Natural rubber is one of Indonesia's most important export commodities, making the country the second-largest exporter globally with a 28.65% share of the world market. However, recent production has declined, partly due to leaf fall disease caused by the Pestalotiopsis sp. fungus. This disease leads to premature leaf drop, which forces rubber trees to redirect energy from latex production to leaf regeneration, potentially reducing yields by up to 30%. Traditional detection methods that rely on manual visual inspection of leaf morphology are impractical over large plantation areas. To address this, the present study proposes a remote sensing-based detection approach using aerial drone imagery and unsupervised machine learning. Two feature extraction methods, Convolutional Autoencoder (CAE) and Gray Level Co-occurrence Matrix (GLCM), were used prior to clustering with k-means. Despite a small dataset, the GLCM-based approach significantly outperforms the CAE-based method. These results demonstrate that GLCM combined with clustering can reliably distinguish between healthy and diseased plantation areas. The proposed method offers a cost-effective, scalable, and non-invasive alternative to ground surveys, and has strong potential for real-world deployment in disease monitoring and early warning systems across large agricultural regions.
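
A minimal sketch of the GLCM-plus-clustering pipeline, assuming scikit-image and scikit-learn; the tile source, distances and angles, and the two-cluster (healthy vs. diseased) assumption are illustrative:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def glcm_features(tile):
    """Texture descriptors for one grayscale uint8 tile."""
    glcm = graycomatrix(tile, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# tiles: assumed list of grayscale patches cut from the drone orthomosaic
# X = np.vstack([glcm_features(t) for t in tiles])
# labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
```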

Author 1: Alhadi Bustamam
Author 2: Devvi Sarwinda
Author 3: Retno Lestari
Author 4: Ahmad Ihsan Farhani
Author 5: Harum Ananda Setyawan
Author 6: Masita Dwi Mandini Manessa
Author 7: Tri Rappani Febbiyanti
Author 8: Minami Matsui

Keywords: Convolutional autoencoder; gray level co-occurrence matrix; k-means clustering; rubber plantation; Pestalotiopsis sp

PDF
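
As a concrete illustration of the GLCM-plus-k-means route described above, the sketch below extracts Haralick-style texture features from grayscale image tiles and clusters them into two groups; the tile handling, GLCM parameters, and k = 2 are our assumptions, not the paper's configuration.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def glcm_features(tile):
    # tile: 2-D uint8 grayscale array (one patch of the drone image).
    glcm = graycomatrix(tile, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def cluster_tiles(tiles, k=2):
    # Cluster tiles into k groups (e.g. healthy vs. diseased canopy).
    feats = np.vstack([glcm_features(t) for t in tiles])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)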

Paper 31: Tamil Handwritten Character Recognition: A Comprehensive Review of Recent Innovations and Progress

Abstract: Recognizing handwritten characters is a complex task, particularly when dealing with Tamil, a writing system known for its intricate and stylized nature. Several challenges arise in recognizing Tamil handwritten characters, including the complexity of the writing style, similar character shapes, irregular handwriting, slanting characters, varying curves, inconsistent font sizes, and limited datasets. Additionally, the diversity of writing styles and the absence of a standard solution for accurately recognizing all Tamil characters further complicate the process. To address these issues, researchers have explored various techniques, including neural networks, support vector machines, clustering, and groupwise classification. However, Tamil handwritten character recognition remains an evolving field with ample opportunities for exploration and advancement. This review study aims to provide a thorough analysis of the current state of the field, identify key challenges, and highlight areas for improvement. Furthermore, it presents a detailed examination of the proposed techniques and suggests potential directions for future research in this domain.

Author 1: Manoj K
Author 2: Iyapparaja M

Keywords: Convolutional Neural Network (CNN); handwritten recognition; Tamil characters; offline recognition; feature extraction techniques; neural network architecture; Support Vector Machine (SVM); groupwise classification

PDF

Paper 32: Boosting Deepfake Detection Accuracy with Unsharp Masking and EfficientNet Models

Abstract: The rapid progress of deepfake technology, fueled by generative adversarial networks (GANs), has increased the challenge of verifying the authenticity of digital media. This study proposes a more powerful deepfake detection framework based on the EfficientNet convolutional neural network family, coupled with an unsharp masking preprocessing method to highlight manipulation artifacts. The model was trained and tested on a large, diverse dataset of over 5,000 video samples across several EfficientNet variants (B0–B4). The results indicate that the integration of unsharp masking significantly improves the model's ability to detect minor irregularities in facial regions, with the best validation accuracy of 97.77% achieved by EfficientNetB4. The method strikes a balance between computational cost and detection accuracy, rendering it applicable to real-world use cases such as forensic examination and digital content authentication. The stability of the framework across different datasets and manipulation methods highlights its value as a scalable solution for curbing disinformation and protecting media integrity.

Author 1: Radwa Khaled
Author 2: Hossam M. Moftah
Author 3: Fahad Kamal Alsheref
Author 4: Adel Saad Assiri
Author 5: Kamel Hussein Rahouma
Author 6: Mohammed Kayed

Keywords: Deepfake detection; efficientnet; unsharp masking; convolutional neural networks (CNNs); facial manipulation detection; computer vision; artificial intelligence

PDF
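
A hedged sketch of the preprocessing idea only: unsharp masking sharpens a frame before it is passed to an EfficientNet classifier. The blur sigma, sharpening amount, and binary real-vs-fake head are assumptions; the paper's training procedure and face cropping are omitted.

import cv2
import numpy as np
import tensorflow as tf

def unsharp_mask(img, sigma=2.0, amount=1.5):
    # Sharpened = original + amount * (original - blurred).
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    return cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)

def build_detector(input_shape=(380, 380, 3)):
    # EfficientNetB4 backbone (380x380 is its default resolution)
    # with a single sigmoid output scoring real vs. fake.
    base = tf.keras.applications.EfficientNetB4(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    return tf.keras.Model(base.input, out)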

Paper 33: Analysis of the Possibilities of Using LLM Chatbots for Solving Course and Exam Tasks

Abstract: With the widespread introduction of new technologies, and in particular AI, in various areas of life, students are increasingly using large language models (LLMs) such as ChatGPT and similar tools to help with their academic tasks. By using them, students can increase their productivity, deepen their understanding of complex topics, and support their academic work. LLMs are used for research, information gathering, and preparation for exams and tests, as well as for generating ideas, writing code, and more. This study explores the possibility of using ChatGPT, Claude, and DeepSeek for solving course and exam tasks. The results of the analysis could serve as a warning signal and as motivation for a future transformation of student testing and assessment methods. The ability of AI systems to search, analyze, and summarize large volumes of information should shift the focus of assessment from classical fact recall and the performance of elementary practical tasks toward creativity, the ability to combine knowledge, and skills for adapting and applying knowledge already gained.

Author 1: Svetlana Stefanova
Author 2: Yordan Kalmukov

Keywords: Large language models (LLM); artificial intelligence (AI); ChatGPT; Claude AI; DeepSeek; AI in education; AI for solving exams

PDF

Paper 34: Segment-Based Vehicular Congestion Detection Methods Using Vehicle ID and Loss of Expected Time of Arrival

Abstract: The increasing number of vehicles and rapid urbanization are significant causes of road traffic congestion, one of the main issues facing world cities today. Congestion control and mitigation are necessary to reduce negative impacts such as delays and increased fuel consumption. Many congestion detection methods have been published in the literature; some, such as the speed threshold, use a single congestion detection metric. Using a single parameter for traffic congestion detection might produce false and inaccurate results. Furthermore, many congestion detection techniques fall short in describing traffic congestion from the road user's perspective. To address this, this study develops a segment-based congestion detection method that uses vehicle ID and loss of expected time of arrival. The ID-based method considers both vehicle speed and density, whereas the loss of expected time of arrival focuses on time loss. These methods are segment-based, where roads are divided into segments using vehicle trajectories. Using a speed threshold of 8.33 m/s, the road is divided into segments of 8.33 m, 16.66 m, and 24.99 m in length. Vehicle speed and density are monitored using vehicle identification numbers (VINs). Experimental results reveal that the speed threshold and the Microscopic Congestion Detection Protocol recorded false congestion detections, whereas the proposed ID-based method identifies false congestion and accurately detects real congestion. Moreover, the loss of expected time of arrival shows promising results in identifying congestion as experienced by motorists.

Author 1: Mustapha Abubakar Ahmed
Author 2: Azizul Rahman Mohd Shariff

Keywords: Vehicle ID; traffic congestion; congestion detection; vehicle trajectories; vehicle speed; vehicle density; loss of expected time of arrival

PDF
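
The ID-based rule lends itself to a toy illustration (ours, not the authors' protocol): a segment is flagged as congested only when both the mean speed of the vehicle IDs observed in it falls below the 8.33 m/s threshold quoted above and the vehicle density is high, so a single slow vehicle cannot trigger false congestion. The density threshold is an assumed value.

SPEED_THRESHOLD = 8.33     # m/s, from the abstract
DENSITY_THRESHOLD = 0.08   # vehicles per metre, illustrative assumption

def segment_congested(vehicles, segment_length):
    """vehicles: dict mapping vehicle ID -> speed (m/s) in the segment."""
    if not vehicles:
        return False
    mean_speed = sum(vehicles.values()) / len(vehicles)
    density = len(vehicles) / segment_length
    # Congestion requires BOTH low mean speed and high density.
    return mean_speed < SPEED_THRESHOLD and density > DENSITY_THRESHOLD

# Example: three IDs crawling in a 16.66 m segment -> congested (True).
print(segment_congested({"VIN1": 2.0, "VIN2": 3.1, "VIN3": 1.4}, 16.66))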

Paper 35: Stock Market Prediction of the Saudi Telecommunication Sector Using Univariate Deep Learning Models

Abstract: Stock market volatility, randomness, and complexity make accurate stock price prediction elusive, even though it is required for rational investment and risk management. This study compares four Deep Learning (DL) models, Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), and a CNN-LSTM hybrid, for predicting the Saudi telecommunication sector, focusing on the closing-price time series. The daily historical closing prices of the STC, Mobily, and Zain companies are gathered and preprocessed, involving duplicate removal, feature selection, and Min-Max scaling. Models were trained with MSE loss and validated with RMSE and MAE. The study points toward the ability of deep learning to capture complex nonlinear patterns in volatile financial markets. A comparative analysis reveals that the LSTM model yielded the lowest test RMSE in all cases (Mobily: 1.169705, STC: 0.708495, Zain: 0.27147), thereby presenting the best overall predictive accuracy. In contrast, the RNN almost always had the highest test RMSE values (Mobily: 1.688603, STC: 1.143664, Zain: 0.666184), highlighting its limitations. The CNN and CNN-LSTM models showed intermediate performance, with implications for enhanced financial forecasting and decision-making within this specific market segment.

Author 1: Hadi S. AlQahtani
Author 2: Mohammed J. Alhaddad
Author 3: Mutasem Jarrah

Keywords: Deep learning; stock market; prediction; models; regression; time series

PDF
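
For readers who want the shape of the winning configuration, here is a minimal univariate LSTM forecaster in the spirit of the study: Min-Max scaling, sliding windows over closing prices, and MSE training loss. The window length, layer width, and epoch count are illustrative assumptions, not the paper's settings.

import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler

def make_windows(series, lookback=30):
    # Turn a 1-D series into (samples, lookback, 1) windows and targets.
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y

def train_lstm(close_prices, lookback=30):
    scaler = MinMaxScaler()
    scaled = scaler.fit_transform(close_prices.reshape(-1, 1)).ravel()
    X, y = make_windows(scaled, lookback)
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(50, input_shape=(lookback, 1)),
        tf.keras.layers.Dense(1),  # next-day scaled closing price
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=20, batch_size=32, verbose=0)
    return model, scaler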

Paper 36: A Systematic Review of Multilingual Plagiarism Detection: Approaches and Research Challenges

Abstract: The existence of voluminous multilingual sources on the web in different fields creates numerous issues, including violations of intellectual property rights. Consequently, multilingual or cross-language plagiarism detection (CLPD) has become a great challenge; cross-language plagiarism refers to copying content from a source text in one language into a target text in another without proper attribution. This study presents a systematic literature review (SLR) of methodologies used in CLPD, covering works published between 2014 and 2025, and summarizes and diagrams the different approaches used. We propose a classification of the different representations of multilingual texts into four types: traditional approaches, multilingual semantic networks, fingerprinting methods, and deep learning models. In addition, we carried out an in-depth analysis of ten language pairs, focusing on the approaches employed, including translation strategies, feature extraction approaches, classification techniques, similarity methods, dataset types, data granularity, and evaluation metrics. Among the findings, English appears in 98% of language pairs, and the English-Arabic pair stands out as the most studied. Over 60% of the studies involve a translation phase, with Google Translate as the most frequently used tool. The mBART model achieves over 95% accuracy for English-Spanish, English-French, and English-German, while BERT reached 96% for English-Russian. As for the assisted-translation study based on the Expert translation tool, strong results are obtained for English-Persian, with an accuracy of 98.82%. On the whole, transformers offer better results for several language pairs without the need for translation.

Author 1: Chaimaa Bouaine
Author 2: Faouzia Benabbou
Author 3: Zineb Ellaky
Author 4: Amine Bouaine
Author 5: Chaimae Zaoui

Keywords: Multilingual plagiarism; systematic literature review; multilingual text representation; translation approaches; natural language processing; machine learning; deep learning

PDF

Paper 37: Towards Explainable and Balanced Federated Learning: A Neural Network Approach for Multi-Client Fraud Detection

Abstract: The growing demand for secure and privacy-preserving machine learning frameworks has resulted in the adoption of federated learning (FL), especially in critical areas like credit card fraud detection. This study presents a comprehensive federated learning architecture that incorporates neural networks as local models, in conjunction with KMeans-SMOTEENN to address class imbalance in distributed datasets. The system utilises the Flower framework, employing the FedAvg algorithm across ten decentralised clients to collectively train the global model while preserving raw data confidentiality. To improve model transparency and cultivate stakeholder trust, Local Interpretable Model-Agnostic Explanations (LIME) are utilised, offering localised, comprehensible insights into model decisions. The experimental results indicate that the suggested method effectively achieves high predictive accuracy and explainability, rendering it appropriate for real-world fraud detection contexts that necessitate data confidentiality and model accountability.

Author 1: Nurafni Damanik
Author 2: Chuan-Ming Liu

Keywords: Federated learning; K-Means SMOTEENN; credit card fraud detection; LIME

PDF
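
The FedAvg aggregation at the heart of this architecture reduces to a weighted average of client model weights. The sketch below shows that core step in plain NumPy rather than through the Flower framework the paper uses; the per-layer weight lists and client sizes are assumed inputs, and local training is omitted.

import numpy as np

def fedavg(client_weights, client_sizes):
    """client_weights: list of per-client lists of layer arrays;
    client_sizes: list of local dataset sizes (for weighting)."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        # Weighted sum of this layer across all clients.
        acc = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        averaged.append(acc)
    return averaged  # new global model weights for the next round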

Paper 38: A Novel Multilevel Framework for DoS Detection in SDN

Abstract: DoS attacks have been the most common type of attack on SDNs, and the threat landscape has widened due to advanced persistent threats. Recent studies have focused on a single level of defence and on conventional detection methods that are no longer sufficient. This study proposes and implements a novel multilevel DoS attack detection framework with a three-pronged approach to counter modern DoS attacks. The first level applies a Zero Trust mechanism using SHA-256 hashing to validate clients. The second level uses hybrid deep learning models to detect DoS attacks, trained and tested across three recent datasets, namely NSL-KDD, CIC-DoS 2019, and IoT2023, consistently achieving 95% accuracy. The third level is a lightweight adaptive DoS detector that can identify fast and low-rate DoS attacks within milliseconds, keeping the SDN secure while ruling out congestion. The results clearly indicate how a three-level approach can thwart most advanced persistent threats.

Author 1: Rejo Rajan Mathew
Author 2: Amarsinh Vidhate

Keywords: Software-defined network; distributed denial of service; OpenFlow

PDF
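
The first level can be pictured with a minimal SHA-256 validation sketch (ours; the paper's handshake details are not given): a controller compares the digest of a client's presented credential against a stored digest using a constant-time comparison. The credential format and registry are illustrative assumptions.

import hashlib
import hmac

REGISTERED = {  # client ID -> hex SHA-256 digest of its shared secret
    "host-10.0.0.5": hashlib.sha256(b"example-secret").hexdigest(),
}

def validate_client(client_id, presented_secret):
    expected = REGISTERED.get(client_id)
    if expected is None:
        return False  # unknown client: zero trust denies by default
    digest = hashlib.sha256(presented_secret).hexdigest()
    # hmac.compare_digest avoids timing side channels.
    return hmac.compare_digest(digest, expected)

print(validate_client("host-10.0.0.5", b"example-secret"))  # True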

Paper 39: Mid-Upper Arm Circumference Measurement Using Digital Images: A Top-Down Approach with Panoptic Segmentation Using Mask R-CNN

Abstract: Assessing nutritional status, particularly among children and pregnant women, necessitates accurate measurement of Mid-Upper Arm Circumference (MUAC). This research introduces a novel system for MUAC estimation from digital images using the Mask R-CNN algorithm, employing a top-down panoptic segmentation strategy. The proposed model was designed to identify the upper arm region within human body images and compute MUAC values autonomously. Mask R-CNN was selected due to its capacity to perform precise segmentation of objects within visually complex scenes, especially in the mid-upper arm area. Model training was conducted using a dataset of annotated images, with subsequent evaluation confirming its ability to reliably detect and measure MUAC. The system was validated using 72 image samples, yielding a mean absolute error (MAE) of 2.31 cm when compared to manual measurements. Among these samples, 29.2% (21 individuals) exhibited a measurement discrepancy of 0 to 1 cm, 27.8% (20 individuals) showed a 1 to 2 cm difference, and 43.1% (31 individuals) demonstrated deviations exceeding 2 cm. Despite some variations in measurement accuracy, the system presents a promising tool for enhancing the automation and efficiency of nutritional assessments.

Author 1: Maya Silvi Lydia
Author 2: Pauzi Ibrahim Nainggolan
Author 3: Desilia Selvida
Author 4: Doli Aulia Hamdalah
Author 5: Dhani Syahputra Bukit
Author 6: Amalia
Author 7: Rahmita Wirza Binti O. K. Rahmat

Keywords: Mask R-CNN; mid-upper arm circumference; digital images; segmentation; mean absolute error

PDF

Paper 40: Ensemble Learning for Multi-Class Android Malware Detection: A Robust Framework for Family Level Classification

Abstract: The widespread popularity of Android devices has made them a prime target for sophisticated and evolving malware threats. Traditional malware detection techniques rely on binary classification (malicious vs. benign), which fails to capture the nuanced behavioral differences between malware families that are critical for threat intelligence and incident response. To address this limitation, we propose a robust multi-class classification approach for Android malware family detection, leveraging ensemble learning and advanced feature selection methods. Our system uses a hybrid feature extraction strategy that combines Chi-Squared and Mutual Information techniques to eliminate low-utility features and retain the most discriminative attributes. These include flow-based metrics, inter-arrival time (IAT), and session duration, which are key indicators of malicious behavior. We evaluated five baseline classifiers (Random Forest, Gradient Boosting, XGBoost, Extra Trees, and Decision Trees) across three ensemble strategies (bagging, voting, and stacking). Among these, the stacking ensemble achieved the highest overall performance, reaching 83% across all evaluation metrics (accuracy, precision, recall, and F1-score) and a True Negative Rate (TNR) of 93.34%. The framework also improves the detection of minority malware families in imbalanced datasets. These findings highlight the advantages of ensemble learning for building scalable and reliable Android malware detection systems suitable for real-world deployment.

Author 1: Mana Saleh Al Reshan

Keywords: Malware detection; cyber threat; ML models; feature selection; ensemble methods

PDF
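
One plausible reading of the hybrid selection plus stacking design, sketched under assumed parameters (k values, base learners, meta-learner): features are kept only if both Chi-Squared and Mutual Information rank them highly, and a stacking ensemble combines tree-based base classifiers.

import numpy as np
from sklearn.feature_selection import chi2, mutual_info_classif
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression

def hybrid_select(X, y, k=20):
    chi_scores, _ = chi2(X, y)            # chi2 requires non-negative X
    mi_scores = mutual_info_classif(X, y)
    top_chi = set(np.argsort(chi_scores)[-k:])
    top_mi = set(np.argsort(mi_scores)[-k:])
    keep = sorted(top_chi & top_mi)       # intersection of both rankings
    return X[:, keep], keep

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier()),
                ("et", ExtraTreesClassifier()),
                ("gb", GradientBoostingClassifier())],
    final_estimator=LogisticRegression(max_iter=1000))
# stack.fit(X_selected, y) trains base learners plus the meta-learner.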

Paper 41: Analyzing Cyber Attack Detection in IoT Healthcare Environments Using Artificial Intelligence

Abstract: The rapid growth of the Internet of Things (IoT) has significantly increased its integration into daily life, and in recent years the integration of IoT technologies in healthcare has significantly enhanced patient care and operational efficiency. The use of interconnected IoT-enabled medical devices in healthcare is known as the Internet of Medical Things (IoMT), which supports various healthcare services, e.g., remote patient monitoring. However, there are serious cybersecurity concerns, as various attacks have targeted IoMT devices in recent years. This research presents an analytical approach to understanding how Artificial Intelligence (AI) can improve the detection of cyber-attacks within IoT healthcare environments. The main goal is to provide an AI-based model to detect cyber-attacks on IoMT in the healthcare environment. Many researchers have developed frameworks in this field to address critical cybersecurity threats; however, these efforts often fall short of covering other important aspects such as data privacy and interoperability. In this study, a model and framework are proposed to monitor IoT networks and detect potential security breaches in real time, helping to mitigate risks while maintaining healthcare services. The key findings contribute to strengthening cybersecurity protocols in healthcare IoT environments in order to protect sensitive information against emerging vulnerabilities.

Author 1: Rawan Marzooq Alharbi
Author 2: Muhammad Asif Khan

Keywords: IoT healthcare security; cyber-attack detection; healthcare security; AI in healthcare; smart medical systems

PDF

Paper 42: Performance Analysis of Proposed Scalable Reversible Randomization Algorithm (SRRA) in Privacy Preserving Big Data Analytics

Abstract: Today's economy is a data-driven knowledge economy: electronic devices mediate most day-to-day activities, and through them organizations collect data actively or passively. Advances in digital devices and communication technology have increased both the volume of data and the dimensionality of datasets. Feature selection has therefore become a crucial preprocessing step in big data analytics, acting as a dimensionality reduction technique that eliminates redundant and noisy features. Studying fluctuations in feature selection results is an active area of research, as such fluctuations are closely tied to data utility and can undermine analysts' confidence in their research outcomes. Privacy preservation is a major concern in big data analytics for protecting sensitive individual data. Applying privacy-preservation techniques modifies the dataset and can affect the stability of feature selection, which has recently been shown to depend largely on the dataset's physical characteristics. This study analyses the performance of the proposed Scalable Reversible Randomization Algorithm (SRRA) in terms of privacy preservation, change in dataset characteristics, information loss, stability of feature selection, and data utility in big data scenarios.

Author 1: Mohana Chelvan P
Author 2: Rajavarman V N
Author 3: Dahlia Sam

Keywords: Big data; data analytics; high dimensionality; feature selection; selection stability; privacy preservation; information loss

PDF

Paper 43: Shared API Call Insights for Optimized Malware Detection in Portable Executable Files

Abstract: Malware analysis is essential for understanding malicious software and developing effective detection strategies. Traditional detection methods, such as signature-based and heuristic-based approaches, often fail against evolving threats. To address this challenge, this study proposes a static analysis–based malware detection system that employs thirteen classifiers, including Logistic Regression, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Naive Bayes, Decision Tree, Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Random Forest, Extra Trees, Gradient Boosting, AdaBoost, and LightGBM. The framework is built on a balanced dataset of 1,318 Windows Portable Executable (PE) files (674 malware, 644 benign), where the features are derived from shared API calls between benign and malicious files to ensure relevance and reduce redundancy. Experimental results show that the Extra Trees classifier achieved the highest accuracy of 98.14%, highlighting its effectiveness in detecting malware. Overall, this study provides a robust, data-driven approach that enhances static malware detection and contributes to strengthening cybersecurity against emerging threats.

Author 1: Mehdi Kmiti
Author 2: Jallal Eddine Moussaoui
Author 3: Khalid El Gholami
Author 4: Yassine Maleh

Keywords: Malware detection; static analysis; portable executable (PE) files; API calls; extra trees classifier

PDF
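
A minimal sketch of the classification stage as described: each PE file is encoded as a binary vector over the shared API-call vocabulary and fed to an Extra Trees classifier, the model the abstract reports as best. The vocabulary, data loading, and split parameters are assumptions.

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def encode(api_calls, vocabulary):
    # 1 if the PE file uses the shared API, else 0.
    return np.array([1 if api in api_calls else 0 for api in vocabulary])

# X: (n_files, n_shared_apis) binary matrix; y: 1 = malware, 0 = benign.
def train_and_eval(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)
    clf = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))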

Paper 44: Intrusion Detection Using Machine Learning and Deep Learning

Abstract: As cyberattacks grow in prevalence, Intrusion Detection Systems (IDS) have become critical for securing network infrastructures. This study proposes an efficient IDS framework utilizing both machine learning (ML) and deep learning (DL) algorithms. The framework is evaluated on the “NF-UNSW-NB15-v2” dataset, which comprises a blend of normal and malicious traffic. A diverse set of advanced models—including Deep Neural Networks (DNN), Long Short-Term Memory (LSTM) networks, eXtreme Gradient Boosting (XGBoost), Random Forest (RF), and K-Nearest Neighbors (KNN)—is deployed for intrusion detection. The approach encompasses both binary classification (normal vs. malicious) and multi-class classification (specific attack categories). Preprocessing steps include feature standardization using StandardScaler, class imbalance correction via SMOTE, and dimensionality reduction through Principal Component Analysis (PCA). Results show that Random Forest and XGBoost models achieve high accuracy in binary classification with F1-scores approaching 0.97, while XGBoost attains the best macro F1-score (0.71) in multi-class tasks. Additionally, RF and XGBoost demonstrate the fastest inference times, underscoring their suitability for real-time deployment. This work contributes a scalable and optimized IDS pipeline for enhancing cybersecurity resilience.

Author 1: Fatima Jobran ALzaher
Author 2: Asma AlJarullah

Keywords: Cybersecurity; cyber-attack; intrusion detection system; machine learning; deep learning

PDF
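
One hedged way to wire the stated preprocessing into a single pipeline, with component counts and hyperparameters as illustrative assumptions: standardization, SMOTE, PCA, then an XGBoost classifier. imblearn's Pipeline is used because scikit-learn's own Pipeline cannot hold a sampler step.

from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from xgboost import XGBClassifier

ids_pipeline = Pipeline(steps=[
    ("scale", StandardScaler()),          # feature standardization
    ("smote", SMOTE(random_state=42)),    # applied to training folds only
    ("pca", PCA(n_components=20)),        # dimensionality reduction
    ("clf", XGBClassifier(n_estimators=300, eval_metric="logloss")),
])
# ids_pipeline.fit(X_train, y_train); ids_pipeline.predict(X_test)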

Paper 45: Comprehensive Analysis of Machine and Deep Learning Models for Stock Market Prediction

Abstract: Stock market prediction is a core task in financial engineering that requires sophisticated methods to extract subtle market and volatility trends. The increasing complexity of the stock market has led to the integration of advanced machine learning (ML) and deep learning (DL) techniques to improve accuracy beyond traditional statistical methods. This research provides a taxonomy of stock market prediction methods and reviews key regression-based models, including linear regression and advanced neural networks such as recurrent neural networks (RNNs), long short-term memory (LSTM), and hybrid CNN-LSTM models. The study deploys and evaluates three specific models: Linear Regression, RNN, and LSTM. The models were trained and tested using modern data preprocessing procedures, including Z-score normalization and temporal sequencing. The findings show that the Linear Regression (LR) model performed best, with a Root Mean Square Error (RMSE) of 0.334 during training and 0.304 during testing, and a Mean Absolute Error (MAE) of 0.203 and 0.207, respectively. This contrasts with the deep learning models, which had higher error rates: the LSTM achieved a training RMSE of 0.355, while the RNN had a training RMSE of 0.383. These results provide empirical evidence that increased model complexity does not necessarily translate into better forecasting accuracy in financial applications, and that model selection is both context-sensitive and data-driven. The findings also highlight the challenge of nonstationarity in stock market data and the need to periodically retrain models on recent data.

Author 1: Hadi S. AlQahtani
Author 2: Mohammed J. Alhaddad
Author 3: Mutasem Jarrah

Keywords: Deep learning; machine learning; prediction methods; stock market; regression; taxonomy

PDF
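
The "Z-score normalization and temporal sequencing" step is easy to make concrete; the sketch below (ours, with an assumed window length) standardizes a price series and converts it into overlapping window/target pairs suitable for any of the regressors compared above.

import numpy as np

def zscore(series):
    return (series - series.mean()) / series.std()

def temporal_sequences(series, window=20):
    """Return X of shape (n, window) and next-step targets y."""
    X = np.array([series[i:i + window]
                  for i in range(len(series) - window)])
    y = series[window:]
    return X, y

prices = np.cumsum(np.random.randn(200)) + 100.0  # synthetic series
X, y = temporal_sequences(zscore(prices))
print(X.shape, y.shape)  # (180, 20) (180,)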

Paper 46: Consumer Adoption of Autonomous Vehicles in China: A Bibliometric Review of Intention Drivers and Perceptions

Abstract: Autonomous vehicles (AVs) are playing an increasing role in digitally enabled transportation systems with the dramatic emergence of related technologies. Consumer adoption is arguably a key factor in the deployment of AVs in China. In this study, bibliometric analysis was used to explore intention drivers regarding consumer adoption of AVs and the role of consumer perception in the decision-making process from the perspective of Chinese consumers. The results revealed that consumer perception is a highly critical factor influencing the adoption of AVs. Moreover, most Chinese consumers were more sensitive to perceived losses than to gains. In addition, the main public focus was on highly intelligent shared AVs rather than family-use vehicles. These findings could help governments and enterprises gain a deeper understanding of consumer behavior in the Chinese market, which could be used as a reference for implementing measures to better accelerate the diffusion of AVs.

Author 1: Yunluo Zou
Author 2: Syuhaily Osman
Author 3: Sharifah Azizah Haron

Keywords: Autonomous vehicles; diffusion of AVs; consumer adoption; consumer perception; Chinese market; bibliometric analysis

PDF

Paper 47: RA-ACS_net Network: A Quantum Optical Reconstruction Method for Ultra-high Resolution Bioimaging

Abstract: Ultra-high resolution bioimaging based on quantum optics offers high sensitivity at relatively low cost, yet conventional reconstruction algorithms face challenges of excessive sampling time, long computation, and artifacts that limit imaging quality. To overcome these issues, this study proposes a novel quantum optical bioimaging reconstruction method termed RA-ACS_net, which integrates a ripple algorithm with a hybrid attention mechanism network. The ripple algorithm provides global optimization for network parameter adjustment, while the attention mechanism enhances feature extraction and information fusion. Furthermore, a differentiated loss function (ALoss) is designed to preserve fine structural details and improve visual fidelity compared with conventional MSE loss. A large-scale dataset of quantum optics-based bioimages is employed for training and validation. Experimental results demonstrate that RA-ACS_net achieves superior reconstruction performance, with significantly higher PSNR and SSIM across both low and high sampling ratios, when compared to iterative algorithms (TVAL3) and existing deep learning models (DR2-Net, DPA-Net). The proposed approach exhibits robustness under sparse data conditions, reduces blocking artifacts, and accelerates convergence, thereby addressing critical limitations of current methods. This study highlights the potential of combining quantum optics with advanced deep learning optimization strategies to establish a practical and efficient framework for ultra-high resolution bioimaging.

Author 1: Lin SHANG

Keywords: Ultra-high resolution bioimaging; quantum optics; computer vision; ripple algorithm; attention mechanism

PDF

Paper 48: Privacy-Preserving Content-Based Medical Image Retrieval Using Integrated CNN Fusion and Quantization Optimization

Abstract: Content‑Based Image Retrieval (CBIR) systems have become increasingly crucial in healthcare as the volume of medical imaging data continues to grow exponentially. However, existing systems struggle to balance privacy preservation, computational efficiency and retrieval accuracy, particularly in resource‑constrained healthcare environments. This research proposes a novel multi‑level privacy‑preserving CBIR architecture that integrates multiple convolutional neural network (CNN) architectures with fusion strategies and quantization optimization specifically designed for encrypted medical images. The proposed framework addresses three key challenges: privacy preservation through advanced encryption techniques, feature extraction using optimized CNN fusion strategies and computational efficiency through model quantization. By implementing multiple pre‑trained CNN models—including VGG‑16, ResNet50, DenseNet121 and EfficientNet‑B0—along with various fusion strategies, the system achieves improved feature extraction from encrypted medical images. The framework incorporates quantization techniques to optimize computational efficiency without compromising retrieval accuracy. Experimental results across multiple medical imaging modalities, including X‑ray, magnetic resonance imaging (MRI) and computed tomography (CT) scans, demonstrate the effectiveness of the proposed approach in terms of retrieval accuracy, computational efficiency and security robustness. This research contributes to advancing privacy‑preserving medical image analysis by providing a comprehensive solution that effectively balances security requirements with practical implementation constraints in healthcare settings.

Author 1: Mohamed Jafar Sadik
Author 2: Muhammed E Abd Alkhalec Tharwat
Author 3: Noor Azah Samsudin
Author 4: Ezak Fadzrin Bin Ahmad

Keywords: Content-Based Image Retrieval (CBIR); medical image analysis; privacy preservation; deep learning; convolutional neural networks (CNNs); feature fusion; model quantization; healthcare security; encrypted image processing; resource-constrained computing; computed tomography (CT); magnetic resonance imaging (MRI)

PDF

Paper 49: Innovative Model of Tourism on Educational Engineering: Transformation Learning from Experiential to Interactive

Abstract: This study explores the interactive relationship between tourism education and industry development by applying a coupled coordination model and proposing an innovative framework that shifts learning from traditional experiential approaches to interactive teaching. The research establishes comprehensive evaluation indicators for both tourism industry performance and educational engineering, and quantitatively analyzes their coupling degree. Results reveal that although tourism education and industrial development are closely linked, mismatches in resource allocation and talent demand reduce coordination effectiveness. The innovative model, based on educational engineering, demonstrates significant advantages by integrating digital technologies such as VR, AR, and big data analytics into teaching. These tools enhance student engagement, improve knowledge construction, and provide real-time feedback, thereby optimizing both educational outcomes and industrial benefits. The findings indicate that interactive teaching strengthens students’ practical competencies, increases efficiency in resource distribution, and contributes to the sustainable growth of the tourism sector. Furthermore, the degree of coupling coordination has gradually shifted from an initial to a moderate level, suggesting that interactive teaching promotes a more resilient and adaptive education–industry system. However, the transformation requires stronger institutional support, improved teacher training in technological applications, and regional balance in resource allocation. The study concludes that fostering an interactive mechanism between education and industry is essential for achieving synergy, cultivating high-quality professionals, and advancing the long-term competitiveness of tourism. Future research should refine indicator systems, integrate diverse modeling methods, and address regional disparities to strengthen the innovation pathway for tourism education.

Author 1: Yurao Yan
Author 2: Tara Ahmed Mohammed
Author 3: Hailan Liang
Author 4: Mingxi Guan

Keywords: Tourism education; educational engineering; coupling coordination model; industry–education integration

PDF
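
The abstract applies a coupling coordination model without stating its formula; the standard two-subsystem form used throughout this literature, which we assume is close to the paper's, is:

% Standard two-subsystem coupling coordination model (our
% reconstruction of the usual form; the paper's weights are not given).
\begin{align*}
  C &= 2\sqrt{\frac{U_1 U_2}{(U_1 + U_2)^2}}
      && \text{(coupling degree)} \\
  T &= \alpha U_1 + \beta U_2, \quad \alpha + \beta = 1
      && \text{(comprehensive development index)} \\
  D &= \sqrt{C \cdot T}
      && \text{(coupling coordination degree, } D \in [0,1])
\end{align*}
% U_1, U_2: normalized evaluation scores of the tourism industry and
% tourism education subsystems.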

Paper 50: DeepIndel: A ResNet-Based Method for Accurate Insertion and Deletion Detection from Long-Read Sequencing

Abstract: Structural variations (SVs) play a pivotal role in human genetics, influencing gene expression, disease mechanisms, and phenotypic diversity. Despite the advancements in short-read sequencing technologies, long-read sequencing offers superior resolution for detecting SVs, particularly in complex genomic regions. In this study, DeepIndel, a novel computational framework, is presented that leverages long-read sequencing data combined with a deep learning model to identify SV breakpoints accurately. The approach captures complex breakpoint patterns by aligning long reads to a reference genome and extracting 23 key features at each genomic location, including read support, candidate length, and strand-specific information. DeepIndel has been evaluated on the HG002 dataset, achieving high precision and reliability in detecting insertions and deletions, with F1 scores of 94.27% for insertions and 91.09% for deletions, thereby demonstrating significant improvements over existing state-of-the-art tools and offering a more precise and robust approach to SV detection. This work advances structural variant analysis, with promising implications for genomic research, disease understanding, and personalized medicine.

Author 1: Md. Shadmim Hasan Sifat
Author 2: Khandokar Md. Rahat Hossain

Keywords: Structural variations (SVs); indels; long-read sequencing; breakpoints; genomic features; diseases; deep learning; ResNet; HG002 dataset; precision medicine; gene expression; phenotypic diversity

PDF

Paper 51: A Modeling Approach for Strategic Fleet Sizing Under Maritime Sovereignty: Application to the Moroccan National Fleet

Abstract: The disruptions experienced by global supply chains in recent years have reignited the importance of maritime sovereignty, particularly through the creation or reinforcement of national shipping fleets. In this context, the present study explores strategic approaches to national fleet sizing, drawing from recent policy directions and maritime planning models. The study is motivated by the need to design resilient and sovereign fleets that reduce dependency on foreign operators and strengthen autonomy in trade logistics. To complement this analysis, a mathematical model is developed in the form of a Mixed-Integer Nonlinear Programming (MINLP) formulation, where sovereignty is captured through the share of vessel operations under national control. In addition to sovereignty, the model integrates criteria of economic viability, environmental impact, and resilience, positioning Maritime Fleet Sizing within the broader scope of Strategic Transport Planning and Green Maritime Transport. Numerical experiments are carried out on a representative dataset of vessels and strategic routes, illustrating how sovereignty thresholds affect fleet composition and deployment. The results highlight a fundamental trade-off between sovereignty and profitability, emphasizing the need for strategic decision-making that carefully balances autonomy objectives with resilience and environmental considerations. Findings also show that moderate sovereignty thresholds support cost-efficient and diversified fleets, while maximalist sovereignty requirements lead to reduced coverage, higher unmet demand, and lower profitability. These insights underline the importance of calibrated strategies, where Sovereignty, Resilience in Maritime Logistics, and sustainability are treated as interconnected pillars of long-term fleet development.

Author 1: Mohamed Anas KHALFI
Author 2: Aziz AIT BASSOU
Author 3: Mustapha HLYAL
Author 4: Jamila EL ALAMI

Keywords: Maritime fleet sizing; sovereignty and national ownership; strategic transport planning; resilience in maritime logistics; green maritime transport

PDF
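
The abstract does not give the formulation, but a schematic of how a sovereignty threshold typically enters a fleet-sizing MINLP (our illustrative reading, not the paper's exact model) is:

% Schematic only: x_{vr} is the number of vessels of type v deployed
% on route r; N is the set of nationally controlled vessel types;
% s is the sovereignty threshold from the numerical experiments.
\begin{align*}
  \max \quad & \sum_{v}\sum_{r} (p_{vr} - c_{vr})\, x_{vr}
      && \text{(net profit)} \\
  \text{s.t.} \quad & \sum_{v} q_{v}\, x_{vr} \ge d_{r} \quad \forall r
      && \text{(demand coverage)} \\
  & \frac{\sum_{v \in N}\sum_{r} x_{vr}}{\sum_{v}\sum_{r} x_{vr}} \ge s
      && \text{(sovereignty share; nonlinear)} \\
  & x_{vr} \in \mathbb{Z}_{\ge 0} \quad \forall v, r.
\end{align*}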

Paper 52: Prediction of Mining-Induced Subsidence in Saudi Arabia Phosphate Mines Using ANN Method

Abstract: This study develops and validates an artificial neural network (ANN) model to predict mining-induced land subsidence in Saudi Arabia’s Al-Jalamid and Umm Wu’al phosphate mines. A multilayer perceptron is used with optimized hyperparameters based on four inputs (ground point position, distance from extraction center, accumulated exploitation volume, and time). The optimal configuration (5 hidden layers, 64 nodes, 240 epochs) achieves RMSE = 22 mm and MAE = 13 mm, outperforming traditional numerical/statistical baselines. Case-study validation at both mines confirms robustness (e.g., RMSE ≈ 20 mm, MAE ≈ 12 mm), enabling practical mitigation such as ground reinforcement and extraction-rate control. The results demonstrate that a tuned ANN provides accurate, operationally useful subsidence forecasts, supporting safer and more sustainable mine planning.

Author 1: Atef GHARBI
Author 2: Mohamed AYARI
Author 3: Yamen El Touati
Author 4: Zeineb Klai
Author 5: Mahmoud Salaheldin Elsayed
Author 6: Elsaid Md. Abdelrahim

Keywords: Subsidence prediction; phosphate mine; artificial neural network; multilayer perceptron; hyperparameter optimization

PDF
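
The reported optimal configuration translates almost directly into code. The sketch below builds the stated network, a multilayer perceptron with four inputs, five hidden layers of 64 nodes each, trained for 240 epochs; the activation, optimizer, and loss are our assumptions, since the abstract does not state them.

import tensorflow as tf

def build_subsidence_mlp():
    # Inputs: ground point position, distance from extraction centre,
    # accumulated exploitation volume, and time (4 features).
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(4,))])
    for _ in range(5):                      # 5 hidden layers, 64 nodes each
        model.add(tf.keras.layers.Dense(64, activation="relu"))
    model.add(tf.keras.layers.Dense(1))     # predicted subsidence (mm)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# model = build_subsidence_mlp()
# model.fit(X_train, y_train, epochs=240, validation_split=0.2)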

Paper 53: A Review on Image-Based Methods for Plant Disease Identification in Diverse Data Conditions

Abstract: Image-based plant disease identification methods have demonstrated potential in enhancing crop protection through early detection. However, the development of this field faces several challenges, such as the scarcity of high-quality annotated data, significant intra-class variation and high inter-class similarity among plant diseases, and the limited generalization ability of current models under diverse domain conditions. We extensively investigated more than 110 recent papers on plant disease identification, aiming to present a timely and comprehensive overview of the most recent advances in the field, along with impartial comparisons of the strengths and weaknesses of existing works. Specifically, we begin by reviewing traditional machine learning and deep learning methods, which form the foundation for many current models. We then introduce a taxonomy of transfer learning methods, including instance-based, mapping-based, and network-based methods, and analyze their effectiveness in enhancing classification performance by leveraging prior knowledge under data-constrained scenarios. Subsequently, we examine recent advances in few-shot learning methods for plant disease identification, categorizing them into model-based, metric-based, and optimization-based methods, and evaluate their capabilities in addressing data scarcity and improving identification accuracy. Finally, we summarize the current limitations and outline promising future research directions, with the aim of guiding continued development in this area.

Author 1: Feilong Tang
Author 2: Rosalyn R Porle
Author 3: Hoe Tung Yew
Author 4: Farrah Wong

Keywords: Few-shot learning; transfer learning; deep learning; crop protection; early detection; data scarcity

PDF

Paper 54: Predictive Modeling for Metro Performance Using MetroPT3 Dataset

Abstract: This study aims to create a predictive maintenance system for metro systems in order to reduce unanticipated breakdowns. The MetroPT3 dataset is used to monitor the operation of a train's Air Production Unit (APU) and includes several types of time-series data, such as air pressure, motor current, and oil temperature. Basic data quality enhancement procedures, such as cleaning, interpolation of missing entries, and normalization, were performed. The analysis develops a Long Short-Term Memory (LSTM) Autoencoder based on an encoder-decoder architecture to perform sequence modeling and identify anomalies. The model learns normal operational patterns and detects deviations using the reconstruction error as an anomaly threshold, enabling timely intervention. The results are encouraging, as the model reconstructs clean operating values accurately with the Autoencoder structure.

Author 1: Akshitha Mary A C
Author 2: R Rakshinee
Author 3: Stefani Jeyaseelan
Author 4: Sakthivel V
Author 5: Prakash P

Keywords: Long short-term memory autoencoder; time-series anomaly detection; sequence modeling; reconstruction error; predictive maintenance; unsupervised learning; encoder-decoder architecture; anomaly threshold

PDF
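
A compact sketch of the encoder-decoder LSTM Autoencoder described above: it reconstructs fixed-length sensor windows, and windows whose reconstruction error exceeds a threshold (for example, a high percentile of training errors) are flagged as anomalies. Layer sizes and the thresholding rule are illustrative assumptions.

import numpy as np
import tensorflow as tf

def build_lstm_autoencoder(timesteps, n_features, latent=32):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(timesteps, n_features)),
        tf.keras.layers.LSTM(latent),                          # encoder
        tf.keras.layers.RepeatVector(timesteps),               # bridge
        tf.keras.layers.LSTM(latent, return_sequences=True),   # decoder
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_features)),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def anomaly_flags(model, windows, threshold):
    recon = model.predict(windows, verbose=0)
    errors = np.mean((windows - recon) ** 2, axis=(1, 2))
    return errors > threshold  # True marks an anomalous window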

Paper 55: Artificial Intelligence in Diagnostic and Therapeutic Interventions: A Systematic Review of Randomized Controlled Trials

Abstract: Artificial intelligence (AI) is increasingly being integrated into diagnostic and therapeutic interventions, offering potential advantages in accuracy, efficiency, and clinical decision-making compared to conventional methods. This systematic review aimed to identify and characterise AI applications assessed in randomised controlled trials (RCTs), and to synthesise the reported clinical outcomes in comparison with standard approaches. A comprehensive search was conducted in PubMed, Scopus, and Web of Science for articles published between 2015 and June 2024. Eligible studies included randomised controlled trials involving patients with various medical conditions who received diagnostic or therapeutic interventions supported by AI technologies. Comparators included conventional diagnostic or treatment methods, placebo, or standard care. Two reviewers independently screened the studies, extracted data, and assessed risk of bias using the Cochrane RoB 1 tool. A total of 13 trials involving 10,566 participants met the inclusion criteria, spanning a range of medical specialties including gastroenterology, dermatology, radiology, oncology, neurology, and ophthalmology. While several trials reported improvements in diagnostic accuracy, treatment planning, or procedural efficiency, other studies showed inconsistent or limited benefits, highlighting the variability in outcomes depending on the clinical context and type of AI application. This review offers an updated synthesis of AI-based clinical interventions evaluated through randomised controlled trials and emphasises the need for further research to validate these tools, standardise their implementation, and assess their broader impact as health technology in modern healthcare systems.

Author 1: Oscar Jimenez-Flores
Author 2: Sandra Pajares-Centeno
Author 3: Oscar Mejia-Sanchez
Author 4: Rodrigo Flores-Palacios

Keywords: AI applications; diagnostic interventions; therapeutic interventions; randomised controlled trials; health technology

PDF

Paper 56: Transformer-Enabled Smartphone System for Intelligent Physical Activity Monitoring

Abstract: This study addresses the prevalent decline in physical activity among university students in the contemporary information society, proposing an innovative deep learning-based framework for intelligent physical activity recognition. Central to this framework is the use of high-precision Inertial Measurement Units (IMUs) integrated within smartphones, encompassing triaxial accelerometers, gyroscopes, and magnetometers, which enable multi-dimensional, real-time capture of students' daily activity postures. For algorithmic design, this research adopts the Transformer architecture as its core classifier. Through its self-attention mechanism, the proposed method efficiently and precisely extracts critical spatiotemporal features from large volumes of sensor data, thereby achieving accurate identification and classification of physical activities such as walking, running, and climbing stairs. Evaluation results demonstrate significant advantages in key performance metrics, including recognition accuracy, when compared to conventional recurrent neural networks (e.g., Long Short-Term Memory networks, Recurrent Neural Networks) and classic machine learning algorithms (e.g., Random Forest), with a validation accuracy reaching 93.97%. This outcome not only provides a reliable and efficient technological means for monitoring the physical activity status of university students but also establishes a solid data foundation for the future development and implementation of targeted health interventions.

Author 1: Leping Zhang
Author 2: Fengjiao Jiang
Author 3: Guopeng Jia
Author 4: Yue Wang

Keywords: Activity recognition; smartphone; transformer architecture; inertial measurement units

PDF
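
The classifier family the abstract describes can be sketched as a single Transformer encoder block with self-attention over windows of 9-channel IMU data (triaxial accelerometer, gyroscope, and magnetometer). Depth, head count, window length, and the number of activity classes are our assumptions.

import tensorflow as tf

def build_imu_transformer(timesteps=128, channels=9, n_classes=6):
    inp = tf.keras.layers.Input(shape=(timesteps, channels))
    x = tf.keras.layers.Dense(64)(inp)  # project channels to model dim
    attn = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
    x = tf.keras.layers.LayerNormalization()(x + attn)   # residual + norm
    ff = tf.keras.layers.Dense(128, activation="relu")(x)
    ff = tf.keras.layers.Dense(64)(ff)
    x = tf.keras.layers.LayerNormalization()(x + ff)     # feed-forward block
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model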

Paper 57: Game Theory-Optimized Attention-Based Temporal Graph Convolutional Network for Spatiotemporal Forecasting of Sea Level Rise

Abstract: Predicting sea level rise accurately is crucial for formulating effective adaptation plans that counteract the effects of climate change on vulnerable coastal areas, infrastructure, and people. Conventional forecasting models tend to fail to capture the intricate spatiotemporal relationships affecting sea level variations. To overcome these challenges, this research introduces a hybrid predictive model combining an attention-based Temporal Graph Convolutional Network (T-GCN) with a game theory-based optimization strategy. The T-GCN structure is tailored to capture spatial dependencies as well as temporal dynamics in sea level change, providing a deeper understanding of the evolving dynamics of sea levels. The attention mechanism strengthens the model by dynamically weighing important variables, whereas the game-theoretic optimization balances multiple objectives, such as prediction accuracy and robustness. Experimental results, measured with common performance indicators, show the effectiveness of the proposed model, with a correlation coefficient of 0.996512 and an overall error of 0.032154. Through the inclusion of both climatic and socio-economic variables, this methodology provides accurate, data-based insights to inform climate policy and adaptive planning. The results highlight the capability of state-of-the-art machine learning methods to address real-world sea level rise challenges.

Author 1: T M Swathy
Author 2: K. Ruth Isabels
Author 3: A. Sindhiya Rebecca
Author 4: Venubabu Rachapudi
Author 5: Yousef A. Baker El-Ebiary
Author 6: Shobana Gorintla
Author 7: Elangovan Muniyandy

Keywords: Temporal graph convolutional networks; attention mechanisms; game theory optimization; sea level rise prediction; climate change adaptation

PDF

Paper 58: Securing Image Messages Using Secure Hash Algorithm 3, Chaos Scheme, and DNA Encoding

Abstract: Security is an essential aspect of data transmission, especially for images, which may be stolen by third parties in transit. One way to secure images is through encryption-decryption processes using cryptographic algorithms. One algorithm developed for image security combines a chaos scheme, DNA encoding, and hashing. A chaos scheme is a system sensitive to initial conditions, resulting in behavior that is difficult to predict or appears random. DNA encoding is the process of converting bits into a DNA sequence. Hashing is a mathematical function that takes variable-length inputs and converts them into a fixed-length binary sequence. In this research, security is enhanced by replacing the hashing algorithm with Secure Hash Algorithm 3 (SHA-3, Keccak). The cryptographic algorithms were implemented in a website that simulates image encryption-decryption in about 15 seconds per process. The fidelity of the algorithm was also evaluated through Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) measurements: MSE values of 0 and PSNR values of infinity indicate that the original and decrypted images are identical.

Author 1: Amer Sharif
Author 2: Dian Rachmawati
Author 3: Wilbert

Keywords: Image encryption; image decryption; chaos scheme; DNA encoding; secure hash algorithm 3 Keccak

PDF
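
The fidelity check stated at the end of the abstract is generic and easy to reproduce: for 8-bit images, PSNR = 10 * log10(255^2 / MSE), so an MSE of 0 between the original and decrypted image yields an infinite PSNR. The snippet below is this check only, not the paper's encryption code.

import numpy as np

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    err = mse(a, b)
    return float("inf") if err == 0 else 10 * np.log10(max_val ** 2 / err)

original = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
decrypted = original.copy()                  # perfect decryption
print(mse(original, decrypted), psnr(original, decrypted))  # 0.0 inf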

Paper 59: Architecting a Privacy-Focused Bitcoin Framework Through a Hybrid Wallet System Integrating Multiple Privacy Techniques

Abstract: Although Bitcoin enables pseudonymous peer-to-peer digital transactions, its transparent public ledger architecture allows for blockchain analysis that can compromise user anonymity. Despite the presence of wallets with privacy-enhancing features, no single solution currently offers comprehensive anonymity independently. Existing privacy-preserving techniques such as CoinJoin, PayJoin, and Stealth Addresses offer differing degrees of anonymity, yet each exhibits intrinsic limitations. This study proposes a hybrid privacy architecture that integrates multiple privacy-enhancing techniques into a unified and coherent transaction workflow. By integrating decentralized CoinJoin mixing, PayJoin for input ownership obfuscation, and Stealth Addresses for unlinkable payments, the proposed model establishes a robust, privacy-oriented framework for Bitcoin transactions. The framework is implemented and evaluated through pre-funded Sparrow and JoinMarket wallets, interconnected via a fully synchronized Bitcoin Core node deployed on the testnet environment. All communications are routed via the Tor network to maintain anonymity at the network layer. Using testnet-based simulations, we evaluate the effectiveness of the architecture. The results show that combining these techniques substantially strengthens resistance to common deanonymization heuristics, enhances transaction unlinkability, and achieves higher overall anonymity than relying on individual methods alone. This demonstrates the synergistic effect of the hybrid model in providing more resilient protection against transaction tracing and blockchain surveillance.

Author 1: Lamiaa Said
Author 2: Hatem Mohamed
Author 3: Diaa Salama
Author 4: Nesma Mahmoud

Keywords: Bitcoin; privacy; anonymity; wallet; blockchain; Coinjoin; Payjoin; stealth address

PDF

Paper 60: VGG-19 and Vision Transformer Enabled Shelf-Life Prediction Model for Intelligent Monitoring and Minimization of Food Waste in Culinary Inventories

Abstract: Food waste, particularly in the prepared food industry, presents a serious worldwide concern with significant ethical, environmental, and socioeconomic implications. In restaurant and catering contexts, traditional inventory and waste management systems frequently lack the versatility and granularity needed to mitigate spoilage in real time. This study proposes a deep learning framework that predicts the remaining shelf-life of prepared food items using visual input, enabling timely interventions to reduce food waste. The proposed hybrid architecture integrates VGG-19 (Visual Geometry Group 19-layer network) for fine-grained feature extraction with a Vision Transformer (ViT) that models contextual degradation patterns and temporal cues. The model analyzes food images at regular intervals and predicts the remaining time before spoilage, enabling proactive decision-making for consumption prioritization. Food images are categorized into four freshness states: Fresh, Fit for Consumption, About to Expire, and Expired, enabling the model to monitor real-time conditions. An elaborate dataset with 34 distinct food categories was used, achieving 98% accuracy, 97.5% precision, 97.9% recall, and an F1-score of 97.75%, and yielding an estimated 84% reduction in food waste. The model stands out for its non-invasive, image-based decision-making and its potential scalability across food service settings. By offering predictive insights into food degradation using only visual data, the study advances the integration of artificial intelligence into sustainable food management.

Author 1: Bindhya Thomas
Author 2: Priyanka Surendran

Keywords: Food waste reduction; shelf-life prediction; VGG-19; vision transformer; image-based freshness classification; sustainable food management

PDF

Paper 61: Analysis of an RGB-D Simultaneous Localization and Mapping Algorithm for Unmanned Aerial Vehicle

Abstract: This study investigates the implementation of an RGB-D Simultaneous Localization and Mapping (SLAM) algorithm on an unmanned aerial vehicle (UAV) equipped with an Intel RealSense D435i camera. The study focuses on Real-Time Appearance-Based Mapping (RTAB-Map), a well-established RGB-D SLAM method capable of building 3D maps while simultaneously localizing a robot within its environment. Despite its advanced capabilities, deploying RTAB-Map on UAVs introduces specific challenges due to the dynamics of aerial navigation. This research evaluates the performance of RTAB-Map in terms of robustness, precision, and accuracy to optimize its application in UAV-based RGB-D SLAM. The findings reveal that the sequential frame-matching approach, combined with a minimum inliers threshold of 10, provides the most robust performance. In contrast, the global matching approach with a minimum inliers threshold of 20 offers better precision and accuracy. The results show that this implementation, using an off-the-shelf hardware and software setup, has significant potential for advanced applications such as monitoring and surveillance in environments where dense 3D mapping is critical.

Author 1: Muhammad Zamir Fathi Mohammad Effendi
Author 2: Norhidayah Mohamad Yatim
Author 3: Zarina Mohd Noh
Author 4: Nur Aqilah Othman

Keywords: Unmanned aerial vehicle; UAV; simultaneous localization and mapping; SLAM; RGB-D; real-time appearance based map; RTAB-map

PDF

Paper 62: In-Depth Comparison of Supervised Classification Models - Performance and Adaptability to Practical Requirements

Abstract: In this paper, we carried out an in-depth comparative analysis of five major supervised classification algorithms: Naïve Bayes, Decision Tree, Random Forest, KNN and SVM. These models were evaluated through a rigorous literature review, based on 20 criteria grouped into five key dimensions: algorithm performance, computational efficiency, practicality and ease of use, data compatibility and practical applicability. The results show that each algorithm has specific strengths and limitations: SVM and Random Forest stand out for their robustness and accuracy in complex environments, while Naïve Bayes and Decision Tree are appreciated for their speed, simplicity and interpretability. KNN, despite its intuitive approach, suffers from high complexity in the prediction phase, limiting its effectiveness on large datasets. This study aims to provide a structured framework for researchers and practitioners in various fields, such as healthcare, finance, industry and education, where supervised classification algorithms play a central role in decision-making. In addition, the results highlight the importance of selecting algorithms according to specific needs, and open up promising prospects, including the development of hybrid models and improved real-time data processing.

Author 1: Mouataz IDRISSI KHALDI
Author 2: Allae ERRAISSI
Author 3: Mustapha HAIN
Author 4: Mouad BANANE

Keywords: Supervised classification; Naïve Bayes; decision tree; Random Forest; k-nearest neighbor; Support Vector Machine; algorithm performance; interpretability

PDF

Paper 63: Mobile Applications that Incorporate AI for Information Search and Recommendation: A Systematic Literature Review

Abstract: The inclusion of artificial intelligence (AI) has become essential in mobile application development, allowing improved personalization and optimization of the user experience. Over the past decade, smart mobile devices have been observed to enhance the user experience across a variety of needs. The main objective of this study is to evaluate AI-powered mobile applications that utilize intelligent search mechanisms and more accurate recommendations, and to analyze their impact on addressing these user needs. The databases used were Scopus, ScienceDirect, Web of Science, and EBSCO. A PRISMA filtering process and a document quality assessment were performed to select the most relevant articles, and the study posed four research questions on the topic. The results showed that mobile apps for search and recommendation are mainly based on hybrid approaches (collaborative and content-based filtering) and deep learning (autoencoders, LSTMs/transformers, and BERT-type semantic retrieval embeddings), complemented by classic techniques (matrix factorization, SVM, K-NN, and trees/boosting) and contextual personalization (location, time, activity). It was concluded that these AI additions benefited users and met their search and recommendation needs. Furthermore, these mechanisms are advancing rapidly, now enabling more precise search and recommendation with voice, images, and even video.

Author 1: Mijael R. Aliaga
Author 2: Jhosep S. Llacctahuaman
Author 3: Carla N. Esquivel
Author 4: Nemias Saboya

Keywords: AI algorithms; recommendation algorithms; mobile applications; search algorithms; AI for information

PDF

Paper 64: Smart Mobile Apps for Responsible Child Management: A Systematic Literature Review

Abstract: Children are increasingly using mobile devices, which raises challenges such as restricting access to inappropriate content, reducing excessive screen exposure, and ensuring safe digital habits. Although various parental control applications exist, most studies focus on isolated aspects such as content filtering or screen time management, with limited integration of artificial intelligence (AI) or consideration of children’s cognitive and emotional development. This highlights a research gap that requires a systematic review to consolidate existing evidence and identify best practices. Using the PRISMA methodology, a systematic search was conducted in four databases (Web of Science, ScienceDirect, Scopus, and Semantic Scholar). After applying inclusion and exclusion criteria, 29 studies were selected for detailed analysis. Results show that AI-based applications can enhance personalization, improve detection of harmful content, and support parents in establishing healthier digital routines. However, limitations persist, including scarce training datasets, lack of algorithm transparency, and limited assessment of practical effectiveness. This review contributes by mapping current solutions, highlighting strengths and weaknesses, and providing evidence-based insights for researchers, parents, educators, and developers to design safer and more effective child-centered mobile applications.

Author 1: Daniel-Celestino
Author 2: Harol-Medina
Author 3: Cristian-Lara
Author 4: Nemias Saboya

Keywords: Smart mobile apps; parental control; artificial intelligence; screen time regulation; responsible child management

PDF

Paper 65: DMME-Driven Product Quality Prediction for Semiconductor Manufacturing

Abstract: Defective products in manufacturing can be reduced by accurately predicting quality outcomes based on process parameters. This study proposes a quality prediction framework for semiconductor manufacturing using the Data Mining Methodology for Engineering Applications (DMME). This study extends DMME with domain-specific preprocessing and demonstrates its superiority over other classifiers on the SECOM dataset. Experimental results show that the Random Forest algorithm achieved the highest performance, with 92.99% accuracy and an F-measure of 0.9637, confirming the effectiveness of the proposed approach. These findings highlight the potential of structured, engineering-oriented data mining to improve product quality and support informed decision-making in complex manufacturing environments.

Author 1: Alif Ulfa Afifah
Author 2: Angga Prastiyan
Author 3: Fahmi Arif
Author 4: Fadillah Ramadhan

Keywords: Data mining; quality prediction; DMME; semiconductor manufacturing; random forest

PDF
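
A minimal sketch of a comparable classification stage, under assumptions: SECOM-like sensor data with missing values and class imbalance is imputed, then a Random Forest is scored with accuracy and F-measure, the metrics the paper reports. The synthetic data and hyperparameters are placeholders, not the paper's DMME pipeline.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from sklearn.datasets import make_classification

# Stand-in for the SECOM sensor matrix: imbalanced classes plus
# randomly injected missing values.
X, y = make_classification(n_samples=1000, n_features=50, weights=[0.93], random_state=1)
rng = np.random.default_rng(1)
X[rng.random(X.shape) < 0.05] = np.nan

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
imputer = SimpleImputer(strategy="mean").fit(X_train)   # domain preprocessing stand-in
clf = RandomForestClassifier(n_estimators=200, random_state=1)
clf.fit(imputer.transform(X_train), y_train)
pred = clf.predict(imputer.transform(X_test))
print("accuracy:", accuracy_score(y_test, pred))
print("F-measure:", f1_score(y_test, pred, average="weighted"))
```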

Paper 66: AIoT-Based Waste Classification for Solid Waste Management to Accomplish the SDGs

Abstract: The Fourth Industrial Revolution (IR4.0) and its technologies have enhanced global economic capabilities and productivity, but industrialisation and urbanisation bring negative environmental and health impacts, such as greenhouse gas emissions and global warming. One effective method of reducing environmental impact is to conduct a waste classification program that incorporates the 3R principles (reduce, reuse, recycle). The proposed work includes educating individuals and businesses on the importance of waste reduction, promoting reusable products and packaging, and implementing effective recycling systems. Additionally, governments could incentivise sustainable practices through tax breaks or invest in renewable energy sources to reduce greenhouse gas emissions associated with industrial processes. The proposed study aims to develop automated waste classification technology that can help reach SDGs 11, 12, and 13 by making waste management more efficient, increasing recycling and resource recovery rates, and cutting down on greenhouse gas emissions. The proposed system is developed using a deep learning algorithm, with a microprocessor and microcontroller managing sensors and actuators to perform waste sorting based on the classification result. This distinguishes the proposed system from existing manual and RFID-based approaches by integrating AIoT with a user incentive mechanism, improving both accuracy and public adoption. This technology enhances overall sustainability and promotes a more circular economy by enabling the reuse and recycling of materials, supporting well-being through process innovation.

Author 1: T. M. Shien
Author 2: M. Batumalay
Author 3: Balasubramaniam Muniandy
Author 4: Pavan Kumar Pagadala
Author 5: Vinoth Kumar. P

Keywords: Solid waste management; waste classification; artificial intelligence of things (AIoT); well-being; process innovation; sustainable development goals (SDG); recycling; circular economy

PDF

Paper 67: A Scalable Machine Learning Framework for Predictive Analytics and Employee Performance Enhancement in Large Enterprises

Abstract: Employee performance prediction and workforce optimization are critical for sustainable growth in large enterprises, yet traditional performance forecasting techniques often rely on regression analysis and conventional machine learning models that fail to capture the dynamic, nonlinear nature of human resource data. These approaches lack the flexibility, explainability, and actionable optimization guidance required of effective intelligent decision support systems. To overcome these limitations, this study presents a novel Hybrid Deep Dense Attention Network (HD-DAN) model combined with reinforcement learning (RL) to predict employee performance and optimally manage the workforce. The HD-DAN combines self-attention with dense layers to dynamically emphasize performance-critical aspects, such as engagement, skills, and behavioral attributes. The RL agent learns to map the predictions into optimized interventions, such that continuous performance improvement is achieved. The HD-DAN achieves a Mean Absolute Error (MAE) of 0.076, a Root Mean Square Error (RMSE) of 0.129, and an R² of 0.421, corresponding to an 11.5% RMSE reduction and a 15.6% R² increase over the best available baselines. In addition to higher predictive accuracy, the framework delivers interpretability through attention weight visualization and decision reliability through RL-driven optimization, providing a scalable, adaptive, and explainable platform for intelligent decision support in employee performance forecasting and workforce management.

Author 1: Jyoti Singh Kanwar
Author 2: Ranju S Kartha
Author 3: Chamandeep Kaur
Author 4: Behara Venkata Nandakishore
Author 5: Elangovan Muniyandy
Author 6: Vuda Sreenivasa Rao
Author 7: Yousef A.Baker El-Ebiary

Keywords: Employee performance prediction; workforce optimization; performance forecasting; hybrid deep dense attention network

PDF
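
Since HD-DAN's exact architecture is not given here, the following PyTorch sketch only illustrates the general idea of self-attention applied to dense feature representations; layer sizes, names, and the toy input are assumptions.

```python
import torch
import torch.nn as nn

class DenseAttentionRegressor(nn.Module):
    """Toy illustration: attention weights gate a dense representation."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.attn = nn.Linear(hidden, hidden)   # scores per hidden unit
        self.head = nn.Linear(hidden, 1)        # performance score output

    def forward(self, x):
        h = self.encoder(x)
        weights = torch.softmax(self.attn(h), dim=-1)  # emphasize salient units
        return self.head(h * weights), weights

model = DenseAttentionRegressor(n_features=12)
x = torch.randn(8, 12)                  # batch of 8 hypothetical employee records
y_hat, attn_weights = model(x)
print(y_hat.shape, attn_weights.shape)  # torch.Size([8, 1]) torch.Size([8, 64])
```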

Paper 68: Deep Learning-Driven Scalable and High-Precision Malaria Detection from Microscopic Blood Smear Images

Abstract: Malaria continues to be a life-threatening disease, especially in tropical and low-resource regions, where timely and accurate diagnosis remains a major challenge. Traditional diagnostic approaches like manual microscopy are not only time-consuming and expertise-dependent but also prone to subjective errors. Existing deep learning methods, such as Convolutional Neural Networks (CNNs), ResNet, and Vision Transformers (ViT), struggle to generalize across variations in staining, resolution, and morphology, leading to misclassification and reduced diagnostic reliability. To overcome these limitations, this study proposes a novel hybrid architecture, Swin-Siamese, which integrates the hierarchical self-attention mechanism of the Swin Transformer with the contrastive similarity learning capability of the Siamese Neural Network. This unique combination enables the model to capture both global and local spatial patterns while accurately distinguishing infected from uninfected blood smear images. The model is implemented using TensorFlow and PyTorch, and trained on a publicly available malaria dataset comprising 13,152 training, 626 validation, and 1,253 test images. Experimental results demonstrate a 3.1% improvement in accuracy over traditional CNNs, achieving 95.3% accuracy, 95.1% precision, 95.4% recall, 95.2% F1-score, and an AUC-ROC of 0.97. This significant performance gain highlights the model's scalability, interpretability, and real-time applicability in clinical and field-deployable diagnostic systems, offering a powerful solution for malaria screening in underserved regions.

Author 1: N. Kannaiya Raja
Author 2: Divya Rohatgi
Author 3: Venkata Lalitha Narla
Author 4: Ganesh Kumar Anbazhagan
Author 5: R. Aroul Canessane
Author 6: Drakshayani Sriramsetti
Author 7: Yousef A.Baker El-Ebiary

Keywords: Automated diagnosis; blood smear images; contrastive learning; deep learning; malaria detection

PDF
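
A minimal PyTorch sketch of the contrastive-similarity idea behind a Siamese network follows; a toy linear encoder stands in for the paper's Swin backbone, and the margin and labels are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def contrastive_loss(z1, z2, same_class, margin: float = 1.0):
    """Classic pairwise contrastive loss: pull similar pairs together,
    push dissimilar pairs at least `margin` apart."""
    d = F.pairwise_distance(z1, z2)
    pos = same_class * d.pow(2)                         # similar pairs
    neg = (1 - same_class) * F.relu(margin - d).pow(2)  # dissimilar pairs
    return (pos + neg).mean()

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 128))  # toy stand-in
a = torch.randn(4, 3, 64, 64)   # hypothetical smear image batch A
b = torch.randn(4, 3, 64, 64)   # hypothetical smear image batch B
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])  # 1 = pair from the same class
loss = contrastive_loss(encoder(a), encoder(b), labels)
print(loss.item())
```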

Paper 69: Explainable Multimodal Sentiment Analysis Using Hierarchical Attention-Based Adaptive Transformer Models

Abstract: Multimodal Sentiment Analysis (MSA) has emerged as a critical task in Natural Language Processing (NLP), driven by the growth of user-generated content containing textual, visual, and auditory cues. While transformer-based approaches achieve strong predictive performance, their lack of interpretability and limited adaptability restrict their use in sensitive applications such as healthcare, education, and human–computer interaction. To address these challenges, this study proposes an explainable and adaptive MSA framework based on a hierarchical attention-based transformer architecture. The model leverages RoBERTa for text, Wav2Vec2.0 for speech, and Vision Transformer (ViT) for visual cues, with features fused using a three-tier attention mechanism encompassing token/frame-level, modality-level, and semantic-level attention. This design enables fine-grained representation learning, dynamic cross-modal alignment, and intrinsic explainability through attention heatmaps. Additionally, contrastive alignment loss is incorporated to align heterogeneous modality embeddings, while label smoothing mitigates overconfidence, improving generalizability. Experimental evaluation on the CMU-MOSEI benchmark demonstrates state-of-the-art performance, achieving 93.2% accuracy, 93.5% precision, 92.8% recall, and 94.1% F1-score, surpassing prior multimodal transformer-based methods. Unlike earlier models that rely on shallow fusion or post-hoc interpretability, the proposed approach integrates explainability into its architecture, balancing accuracy and transparency. These results confirm the efficacy of the adaptive hierarchical attention-based framework in delivering a robust, interpretable, and scalable solution for English-language multimodal sentiment analysis.

Author 1: Anna Shalini
Author 2: B. Manikyala Rao
Author 3: Ranjitha. P. K
Author 4: Guru Basava Aradhya S
Author 5: S. Farhad
Author 6: Elangovan Muniyandy
Author 7: Yousef A. Baker El-Ebiary

Keywords: Multimodal sentiment analysis; RoBERTa; Wav2Vec 2.0; vision transformer; CMU-MOSEI

PDF

Paper 70: A Data-Driven Approach to Achieve Low-Carbon Building Energy Optimization by Using BIM Technology

Abstract: Low-carbon building energy optimization addresses the environmental impact of the construction sector. The integration of Building Information Modelling (BIM) with Artificial Intelligence (AI) techniques enables the design and operation of buildings with a reduced carbon footprint. However, existing systems usually lack the flexibility and precision to dynamically optimize energy usage. This work proposes a novel data-driven framework that merges AI and BIM to optimize the building energy system for low-emission design using Carbon Majors emission datasets. It aids material and energy-source selection by identifying highly emitting commodities to reduce operational carbon footprints. Initially, data acquisition and emission analysis are performed on the Carbon Majors database to identify high-emission materials. Subsequently, emission factors are linked with the BIM elements using plug-ins such as One Click LCA, which allow the annotation of embodied carbon values. Further, operational energy is optimized by a Multi-Agent Assisted NSGA-II, which optimizes parameters and material selection. Additionally, AI-assisted energy prediction supported by the SqueezeNet model and energy simulation techniques was used to minimize building energy consumption. The results reveal high energy-prediction accuracy, with an MAE of 0.0212, an MSE of 0.0376, and an R² score of 0.9814. The framework further helps to reduce carbon emissions by 1155 tons and improve cost efficiency by 570.25 million, promoting low-carbon building solutions from the earliest stages of design.

Author 1: Xin Yu
Author 2: Guoliang Ren
Author 3: Jie Niu

Keywords: Low-carbon buildings; building information modelling; carbon emissions; operational energy optimization; SqueezeNet; energy simulation

PDF

Paper 71: JellyNovaNet-JSO: A Hybrid TabNet–BiLSTM Model for IoT-Based Crop Yield Prediction

Abstract: Precise prediction of crop yield is essential for sustainable agriculture, resource maximization, and food security. As the use of IoT and Wireless Sensor Networks (WSNs) gains momentum, huge amounts of heterogeneous, time-series environmental data have become readily available from intelligent greenhouses. Despite this, it is still difficult to obtain meaningful insights from these data due to their high dimensionality, noise, and nonlinear temporal behavior. Traditional machine learning and statistical approaches usually fail to effectively capture both static and sequential relationships; moreover, most current models are difficult to tune, struggle with data heterogeneity, and do not generalize across dynamic environments. To overcome these shortcomings, this paper introduces JellyNovaNet-JSO, a new hybrid deep learning architecture that integrates TabNet and BiLSTM, optimized using the Jellyfish Search Optimization (JSO) algorithm. The model exploits TabNet's sparse attention for static feature modeling and the temporal memory of BiLSTM for time-series sensor data. The innovation lies in combining attention-guided tabular learning and bidirectional temporal modeling with a metaheuristic optimization layer that performs automatic hyperparameter tuning. Experimental outcomes based on real-world IoT greenhouse data demonstrate that JellyNovaNet-JSO attains an MAE of 0.012, an RMSE of 0.017, an R² of 0.991, and a MAPE of 1.89%, substantially outperforming state-of-the-art CNN-LSTM, Random Forest, and SVM models. In comparison with prior approaches, JellyNovaNet-JSO enhances prediction accuracy by as much as 25% while ensuring scalability and robustness. This innovation provides a viable, interpretable, and deployable solution for precision agriculture, enabling smarter irrigation, climate control, and yield management.

Author 1: Huang Zhicheng
Author 2: Zhang Yinjun

Keywords: IoT agriculture; crop yield prediction; BiLSTM; TabNet; jellyfish search optimization

PDF
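
As a sketch of the temporal branch only, the following PyTorch snippet runs a bidirectional LSTM over windows of greenhouse sensor readings; the TabNet branch and Jellyfish Search tuning are omitted, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMYield(nn.Module):
    """Toy bidirectional LSTM regressor over sensor time windows."""
    def __init__(self, n_sensors: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # forward + backward states

    def forward(self, x):                      # x: (batch, time, sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # predict from the final time step

model = BiLSTMYield(n_sensors=6)
window = torch.randn(16, 48, 6)   # 16 samples, 48 hourly steps, 6 sensors
print(model(window).shape)        # torch.Size([16, 1])
```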

Paper 72: Functional vs Ethical Drivers in Generative AI Adoption: A PLS-SEM Study in Business Education

Abstract: This study examined the factors influencing the use of ChatGPT by university students enrolled in business and management programs, considering the simultaneous effect of their functional perceptions and ethical or academic concerns. Using a structural equation modeling approach (PLS-SEM) applied to a sample of 118 students in Chile, the study found that functional perceptions, such as efficiency, clarity, and cognitive support, exert a positive and significant effect on the use of the tool. By contrast, concerns related to technological dependency, reliability of responses, and academic authorship showed no significant effect on either perception or usage. These findings reveal a functionalist adoption logic in which ethical judgment and pedagogical risks do not act as meaningful barriers. This study contributes to the literature by simultaneously integrating enabling and inhibiting factors into a single explanatory model and providing empirical evidence from a Latin American context. It concludes that there is a pressing need to develop pedagogical and institutional frameworks that foster critical literacy in the use of generative artificial intelligence, particularly in disciplines in which strategic judgment and ethical responsibility are core competencies. These findings should be interpreted within the context of a single Chilean institution and are not intended for statistical generalization.

Author 1: Jorge Serrano-Malebrán
Author 2: Cristian Vidal-Silva
Author 3: Paola von-Bichoffshausen
Author 4: Romina Gómez-López
Author 5: Franco Campos-Núñez

Keywords: Generative artificial intelligence; student perceptions; ChatGPT; PLS-SEM

PDF

Paper 73: An AI-Driven Approach for Real-Time Noise Level Monitoring and Analysis

Abstract: Nowadays, noise pollution poses a public health risk, especially in residences and indoor environments like workplaces and schools. The proposed work presents a comprehensive analysis of hourly equivalent noise levels measured at 100 indoor locations. The proposed work is an intelligent automated noise pollution monitoring system for the real-time tracking and adaptive management of noise in indoor environments such as offices, homes, and educational institutions. Unlike other systems that merely record noise levels, the proposed solution provides real-time alerts with web-based visualization and AI-enabled noise pattern recognition for enhanced noise classification. The integration of an ESP8266 Wi-Fi module with a cloud-based architecture enables instant email notifications as well as historical trend analysis and predictive insights. In addition, the framework is scoped for future integration with smart home automation systems and mobile-based alerting, allowing for better accessibility. The IoT-powered innovations within this framework will revolutionize noise management by proactively monitoring, analyzing, and optimizing indoor sound environments. Through real-time adjustments and intelligent automation, these solutions will create a more serene, comfortable, and productivity-enhancing atmosphere. Whether in offices, homes, or public spaces, this advanced noise control system will contribute to overall well-being, concentration, and efficiency.

Author 1: Yellamma Pachipala
Author 2: L K SureshKumar
Author 3: Veeranki Venkata Rama Maheswara Rao
Author 4: Vijaya Chandra Jadala
Author 5: T.Srinivasarao
Author 6: D. Srinivasa Rao

Keywords: Noise pollution; IoT; ESP8266 Wi-Fi; smart automation; AI-enabled noise pattern

PDF
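
For reference, the hourly equivalent noise level (Leq) analyzed above can be energy-averaged from dB samples as Leq = 10 log10(mean(10^(Li/10))); a minimal sketch with hypothetical readings:

```python
import math

def hourly_leq(samples_db):
    """Energy-average a list of dB readings into one equivalent level."""
    mean_energy = sum(10 ** (li / 10) for li in samples_db) / len(samples_db)
    return 10 * math.log10(mean_energy)

# Hypothetical one-minute readings over an hour in an office:
readings = [52, 55, 60, 58, 54, 53, 70, 65, 56, 52] * 6
print(f"Leq = {hourly_leq(readings):.1f} dB")
```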

Paper 74: Deep Learning Meets Bibliometrics: A Survey of Transfer Learning Techniques for Breast Cancer Detection

Abstract: This study aims to provide a comprehensive bibliometric analysis of research on transfer learning in breast cancer detection from 2016 to 2024. It highlights publication trends, influential contributors, collaborations, and keyword patterns. Bibliometric methods are employed to analyze data extracted from the Scopus database. It includes co-occurrence and citation analyses to identify prevalent keywords, highly cited documents, journals, authors, organizations, and countries contributing to this field. The analysis reveals a significant upward trend in publications over the last decade. Key insights include the identification of dominant keywords, influential contributors, and notable collaborations. The results highlight the growing impact of transfer learning techniques in breast cancer detection research, particularly within the domains of medical imaging analysis and predictive analysis. This study offers a systematic overview of the current state of transfer learning in breast cancer detection research, providing valuable insights and guiding future research efforts in this rapidly evolving domain.

Author 1: Amna Wajid
Author 2: Natasha Nigar
Author 3: Hafiz Muhammad Faisal
Author 4: Olukayode Oki
Author 5: Jose Lukose

Keywords: Transfer learning; breast cancer; medical imaging analysis; predictive analysis

PDF

Paper 75: An Adaptive Levy Flight Chicken Swarm Optimization with Differential Evolution for Function Optimization Problem

Abstract: This study proposes an improved swarm algorithm, Adaptive Levy Flight Chicken Swarm Optimization with Differential Evolution (ALCSODE), to overcome the low convergence accuracy and imbalance between exploration and exploitation in the original CSO algorithm. The method incorporates adaptive perturbation based on individual differences and a differential evolution mechanism into the rooster update process. An elitism preservation strategy is also applied to enhance population stability and information sharing. The algorithm is evaluated on 24 benchmark functions, including unimodal, high-dimensional multimodal, and CEC2022 functions. Performance metrics such as search trajectories and convergence curves are used to assess its effectiveness. Experimental results show that ALCSODE achieves a better exploration–exploitation trade-off and shows statistically superior performance over seven classical algorithms, confirming its potential as an effective tool for solving complex optimization problems.

Author 1: Wen-Jun Liu
Author 2: Azlan Mohd Zain
Author 3: Mohamad Shukor Bin Talib
Author 4: Sheng-Jun Ma

Keywords: Chicken swarm optimization; levy flight; differential evolution algorithm; adaptive adjustment strategy; function optimization

PDF
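
The Levy-flight perturbation named in the title is commonly generated with Mantegna's algorithm; a minimal sketch follows, with beta and the step scale as illustrative assumptions rather than the paper's settings.

```python
import math
import numpy as np

def levy_step(dim: int, beta: float = 1.5, rng=np.random.default_rng(0)):
    """Heavy-tailed step vector via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma_u, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

position = np.zeros(5)
position += 0.01 * levy_step(5)   # perturb one candidate solution
print(position)
```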

Paper 76: Public Opinion Stage Segmentation in Disaster Events: A Study Based on Multimodal Sentiment Prediction Model

Abstract: Existing approaches for predicting public sentiment and analyzing opinion evolution during disaster events using multimodal data (text, video, audio) suffer from several limitations, including inadequate dynamic fusion of heterogeneous multi-source data and imprecise division of public opinion stages. To address these issues, this paper proposes an Enhanced Disentangled Cross Fusion (EDCF) model-based framework for analyzing the evolution of public opinion in disaster events. This framework integrates the E-Divisive with Medians (EDM) change point detection method with spatiotemporal sequence modeling techniques to achieve fine-grained stage segmentation. The EDCF model employs Transformers and positional encoding to process time-series signals (audio/video), effectively capturing long-range dependencies. It enhances modality-specific representation capabilities by introducing dedicated encoders for each modality, a shared encoder, and a reconstruction decoder for disentangled representation learning. Furthermore, the model utilizes a cross-modal language-guided attention mechanism for efficient and effective feature fusion. Experimental validation on the publicly available multimodal sentiment dataset CMU-MOSI demonstrates that the proposed EDCF framework significantly outperforms baseline methods on key sentiment prediction metrics.

Author 1: Xiaogang Yuan
Author 2: Jiaxi Chen
Author 3: Dezhi An
Author 4: Xiang Gong

Keywords: Multimodal sentiment prediction; e-divisive with medians; transformer; encoder; decoder; cross-attention

PDF

Paper 77: Machine Learning-Based Climate Prediction in Indonesia: A Baseline Experiment

Abstract: This study presents the results of a series of machine learning experiments conducted on Indonesian climate data collected between 2010 and 2020. The findings offer a comparative foundation for future research. Weather prediction remains a significant challenge due to the complex interplay of various climatic factors. Weather stations typically record data at hourly or daily intervals, resulting in large volumes of historical weather information. When appropriately processed, this extensive dataset offers valuable opportunities for predictive modeling. The study explores two primary approaches to leveraging big data for weather forecasting. The first employs a machine learning classification technique to predict categorical weather conditions based on existing feature values. The second utilizes time series forecasting to predict continuous weather parameters using historical data. Multiple classification and forecasting algorithms were evaluated and compared. Notably, the year-on-year forecasting approach outperformed several modern techniques, including deep learning, in terms of predictive accuracy. Despite the application of deep learning, classification models achieved a maximum accuracy of only 0.811. Forecasting methods generally produced a mean absolute percentage error (MAPE) of 3–4%. However, year-on-year forecasting—identified through exploratory data visualization—reduced the prediction error to below 1.6%. Another key contribution of this research is the emphasis on the critical role of data visualization prior to algorithmic modeling. The findings highlight the importance of human intervention in the early stages of data analysis, particularly for visual exploration and feature assessment. Classification models were found to underperform due to overly generalized feature representations. In contrast, forecasting techniques, supported by informed human-guided preprocessing, yielded more reliable and accurate results.

Author 1: Faisal Rahutomo
Author 2: Bambang Harjito

Keywords: Indonesia; climate data; experiment baseline; machine learning; prediction

PDF
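
A minimal sketch of the year-on-year baseline the paper highlights: predict each day's value as the same calendar day one year earlier and score with MAPE. The synthetic series below is a placeholder for the Indonesian climate data.

```python
import numpy as np

def mape(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs((actual - predicted) / actual)) * 100

days = np.arange(365)
year1 = 27 + 3 * np.sin(2 * np.pi * days / 365)               # e.g. daily temperature
year2 = year1 + np.random.default_rng(0).normal(0, 0.3, 365)  # next year, with noise

forecast = year1                 # year-on-year: reuse last year's values
print(f"MAPE = {mape(year2, forecast):.2f}%")
```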

Paper 78: Graph-Based Clustering of Short Texts Using Word Embedding Similarity

Abstract: The exponential growth of short textual content on the Internet, such as social media posts and search snippets, necessitates effective text mining techniques. Short text clustering, a critical tool for organizing this data, contends with two primary challenges: data sparsity, which undermines the quality of traditional clustering methods, and the poor interpretability of machine-generated cluster labels. This study introduces the Semantic Word Graph (SWG) algorithm, a novel graph-based approach designed to address both of these issues simultaneously. Our methodology begins by constructing a global word graph where nodes represent unique terms from the corpus, and edges are weighted by the semantic similarity of word pairs, calculated using a pre-trained Word2Vec model. Cohesive communities of words are then identified using the Louvain method, and documents are assigned to clusters based on these communities. Meaningful cluster labels are generated by ranking representative nouns within each community. To validate our approach, the SWG algorithm was evaluated on three benchmark datasets (AG News, Tweet, and SearchSnippets) and compared against established methods, including Lingo, Suffix Tree Clustering (STC), and K-means. Quantitative results, measured by the F-score, show that SWG achieved up to 0.89 F-score on AG News, 0.85 on Tweets, and 0.82 on SearchSnippets, consistently outperforming baseline algorithms in clustering quality. Furthermore, a qualitative analysis confirms that SWG produces more coherent and topically comprehensive cluster labels, improving interpretability. This study concludes that the SWG algorithm is a robust and effective framework for enhancing both the accuracy and interpretability of short text clustering. Future research could explore integrating contextual embeddings such as BERT to capture deeper semantic relationships, optimizing the similarity threshold dynamically for different datasets, and scaling the algorithm to handle larger, real-time streaming text data. These directions would further improve the applicability of SWG in diverse domains such as social media analytics, news aggregation, and real-time topic detection.

Author 1: Supakpong Jinarat
Author 2: Ratchakoon Pruengkarn

Keywords: Clustering; graph-based clustering; semantic similarity; short text; word embedding

PDF
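
A condensed sketch of the SWG pipeline on a toy corpus, under assumptions: the similarity threshold, toy documents, and a freshly trained Word2Vec stand in for the paper's setup, and Louvain support requires networkx 2.8 or newer.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities
from gensim.models import Word2Vec

docs = [["stock", "market", "trade"], ["market", "price", "trade"],
        ["team", "match", "goal"], ["goal", "player", "match"]]

# Train a tiny Word2Vec model as a stand-in for pre-trained embeddings.
w2v = Word2Vec(docs, vector_size=32, min_count=1, seed=0)
vocab = list(w2v.wv.index_to_key)

# Global word graph: edges weighted by embedding similarity.
G = nx.Graph()
for i, w1 in enumerate(vocab):
    for w2 in vocab[i + 1:]:
        sim = float(w2v.wv.similarity(w1, w2))
        if sim > 0.0:                       # similarity threshold (assumed)
            G.add_edge(w1, w2, weight=sim)

# Louvain communities, then assign each document to its best-overlap community.
communities = louvain_communities(G, weight="weight", seed=0)
for doc in docs:
    best = max(communities, key=lambda c: len(c & set(doc)))
    print(doc, "->", sorted(best))
```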

Paper 79: A Prompt-Driven Framework for Reflective Evaluation of Course Alignment Using Large Language Models

Abstract: Large language models (LLMs) such as ChatGPT are gaining attention in educational settings, yet their potential role in supporting course design and academic quality assurance remains underexplored. This study introduces a structured, prompt-driven framework that uses ChatGPT as a reflective tool to help faculty and curriculum designers evaluate the alignment between course learning outcomes (CLOs), assessment methods, and teaching strategies. Grounded in the standards of the National Center for Academic Accreditation and Evaluation (NCAAA) and Bloom’s Revised Taxonomy, the system generates targeted, context-aware feedback using structured prompts modeled after official NCAAA forms. To enhance reliability and reduce hallucinations, the framework employs template-based prompt engineering and rule-based cognitive classification. A large-scale analysis was conducted across 56 CLOs to assess internal consistency, followed by expert validation from two academic reviewers who evaluated a sample of AI-generated feedback for accuracy, usefulness, and cognitive alignment. The findings highlight the tool’s ability to surface alignment issues and offer constructive recommendations, while also demonstrating its potential as a scalable support system for curriculum review and accreditation readiness, complementing rather than replacing human expertise.

Author 1: Mashael M. Alsulami

Keywords: Prompt engineering; Course Learning Outcomes (CLOs); Large Language Models (LLMs); curriculum alignment; educational quality assurance

PDF
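
A minimal sketch of template-based prompt construction in the spirit of the framework above; the field names and rubric wording are illustrative assumptions, not the NCAAA form text.

```python
from string import Template

# Structured prompt template: fixed rubric, variable course fields.
PROMPT = Template(
    "You are reviewing a course specification.\n"
    "Course learning outcome: $clo\n"
    "Assessment method: $assessment\n"
    "Teaching strategy: $strategy\n"
    "Task: classify the CLO's cognitive level under Bloom's Revised "
    "Taxonomy, state whether the assessment and strategy align with it, "
    "and suggest one concrete improvement. Answer in three labeled bullets."
)

prompt = PROMPT.substitute(
    clo="Analyze sorting algorithms and compare their time complexity.",
    assessment="Multiple-choice quiz on algorithm definitions.",
    strategy="Lecture with worked examples.",
)
print(prompt)  # then send to an LLM API of choice
```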

Paper 80: AI and 5G Integration for Smart City Energy Systems: A Systematic Literature Review

Abstract: Smart cities increasingly rely on Artificial Intelligence (AI), 5G, and Internet of Things (IoT) technologies to enhance energy management and support real-time decision-making in smart grids. This study presents a systematic literature review of recent research on the integration of AI and 5G in urban energy systems, with a focus on sustainability goals. It examines how these technologies are used for renewable energy integration, demand-side control, and predictive maintenance across smart environments. Using data from OpenAlex, Scopus, and Web of Science covering the period 2018 to 2025, the review was filtered by language, domain, and scientific relevance. Key findings reveal the use of machine learning models for forecasting, anomaly detection, and system optimization. The review also identifies technical, ethical, and infrastructural challenges, including data heterogeneity, limited interoperability, and regional inequalities in deployment. While AI and 5G offer promising capabilities for real-time monitoring and system automation, the literature shows persistent gaps in algorithm robustness and standardized integration frameworks. The paper emphasizes the need for validated, scalable solutions to achieve long-term energy sustainability. This review provides a clear overview of current trends and future directions in smart energy systems, contributing to a better understanding of how digital technologies shape the future of sustainable urban infrastructures.

Author 1: TALBI Chaymae
Author 2: Rahmouni M’hamed
Author 3: OUAHBI Younesse
Author 4: ZITI Soumia

Keywords: Smart cities; artificial intelligence; 5G; energy management; smart grids; renewable energy integration; IoT; machine learning; sustainability

PDF

Paper 81: State-of-the-Art in Software Security Visualization: A Systematic Review

Abstract: Software security visualization is an interdisciplinary field that combines the technical complexity of cybersecurity, including threat intelligence and compliance monitoring, with visual analytics, transforming complex security data into easily digestible visual formats. As software systems get more complex and the threat landscape evolves, traditional text-based and numerical methods for analyzing and interpreting security concerns become increasingly ineffective. The purpose of this paper is to systematically review existing research and create a comprehensive taxonomy of software security visualization techniques through the literature, categorizing these techniques into four types: graph-based, notation-based, matrix-based, and metaphor-based visualization. This systematic review explores over 60 recent key research papers in software security visualization, highlighting its key issues, recent advancements, and prospective future research directions. From the comprehensive analysis, two main areas were distinctly highlighted: operational security visualization and cybersecurity visualization, alongside extensive software-development visualization focusing on advanced methods for depicting software architecture. The findings highlight the necessity for innovative visualization techniques that adapt to the evolving security landscape, with practical implications for enhancing threat detection, improving security response strategies, and guiding future research.

Author 1: Ishara Devendra
Author 2: Chaman Wijesiriwardana
Author 3: Prasad Wimalaratne

Keywords: Security visualization; vulnerability analysis; threat intelligence; compliance monitoring

PDF

Paper 82: Human Versus AI: A Comparative Study of Zero-Shot LLMs and Transformer Models Against Human Annotations for Arabic Sentiment Analysis

Abstract: Accurate sentiment analysis in Arabic natural language processing (NLP) remains a complex task due to the language’s rich morphology, syntactic variability, and diverse dialects. Traditional annotation approaches require human experts and face significant challenges related to inter-annotator agreement and dialectal understanding. Recent advances in transformer-based models and large language models (LLMs) offer new techniques to generate annotations. This paper presents a comparative evaluation of three sentiment annotation strategies applied to Saudi dialect tweets: human expert labeling, fine-tuned transformer models (specifically CAMeLBERT-DA), and zero-shot inference using GPT-4o. The selected CAMeLBERT-DA, which is already trained specifically for Arabic sentiment tasks and dialects, demonstrates robust performance with fast, scalable predictions. On the other hand, the selected GPT-4o shows competitive zero-shot accuracy without fine-tuning, making it a practical solution for real-time applications. We investigate how each approach performs on two datasets, each comprising more than 4,000 Saudi tweets covering a wide spectrum of dialects and sentiment expressions. Our methodology involves analyzing consistency across annotations using inter-rater agreement metrics such as Cohen’s Kappa, Pearson correlation, and class-specific agreement rates. The results reveal that while human annotations capture cultural and contextual subtleties, they suffer from inconsistency, particularly in ambiguous or dialect-specific cases. This study contributes to the growing body of work on annotation methodologies by highlighting the strengths and limitations of both human and AI-based annotators in Arabic NLP. Our findings suggest that zero-shot use of a domain-specific transformer like CAMeLBERT-DA alongside a general-purpose LLM such as GPT-4o shows a moderate correlation with actual human annotators. The paper concludes with recommendations for building reliable ground-truth datasets and integrating AI-assisted labeling into Arabic NLP tasks.

Author 1: Dimah Alahmadi

Keywords: Large Language Models (LLMs); transformers; NLP; annotation; inter-rater agreement; sentiment analysis; Saudi dialects

PDF
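
The agreement metrics named above are directly available in scikit-learn and SciPy; a minimal sketch with toy sentiment labels (-1 negative, 0 neutral, 1 positive):

```python
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

# Hypothetical labels from a human annotator and a model.
human = [1, 0, -1, 1, 0, 0, -1, 1, 1, -1]
model = [1, 0, -1, 0, 0, 1, -1, 1, 0, -1]

print("Cohen's kappa:", round(cohen_kappa_score(human, model), 3))
print("Pearson r:", round(pearsonr(human, model)[0], 3))
```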

Paper 83: Simulysis: A Method to Change Impact Analysis in Simulink Projects Based on WAVE-CIA

Abstract: MATLAB and Simulink, which have more than 5 million users and are installed at more than 100,000 businesses, universities, and government organizations, are widely used in numerous large-scale projects across various industries. These projects continually evolve in response to changes in business logic. However, managing the impact of these changes on Simulink projects presents several challenges to guaranteeing the quality of these projects. To address this, we propose a WAVE-CIA-based method named Simulysis for change impact analysis (CIA) in Simulink projects. The core idea behind Simulysis is to directly analyze Simulink project files and construct the project’s corresponding call graph. By comparing the call graphs from the old and new project versions, Simulysis computes the change set. Subsequently, Simulysis applies the WAVE-CIA method to this change set and the call graph to identify the impact set. Additionally, Simulysis provides a signal tracing method that helps system engineers follow, check, and debug signals through the system. We have implemented Simulysis as a tool with the same name and conducted experiments using several open-source Simulink projects. The experiments demonstrate that Simulysis effectively performs the CIA process and retrieves the impact set, producing promising results and demonstrating the practical applicability of Simulysis for real-world projects. Further discussions about Simulysis are provided in the paper.

Author 1: Hoang-Viet Tran
Author 2: Ta Van Thang
Author 3: Cao Xuan Son
Author 4: Do Trong Thu
Author 5: Pham Ngoc Hung

Keywords: Change impact analysis; WAVE-CIA; MATLAB/Simulink projects

PDF
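
The core CIA idea, collecting everything reachable through reverse call-graph edges from a change set, can be sketched in a few lines; the block names are hypothetical, and Simulysis's actual WAVE-CIA computation is more involved.

```python
from collections import deque

call_graph = {            # caller -> callees (hypothetical Simulink blocks)
    "Controller": ["PID", "Limiter"],
    "PID": ["Gain"],
    "Planner": ["PID"],
    "Limiter": [],
    "Gain": [],
}

def impact_set(changed, graph):
    callers = {n: set() for n in graph}            # build reverse edges
    for caller, callees in graph.items():
        for callee in callees:
            callers[callee].add(caller)
    impacted, queue = set(changed), deque(changed)
    while queue:                                   # BFS over callers
        for parent in callers[queue.popleft()]:
            if parent not in impacted:
                impacted.add(parent)
                queue.append(parent)
    return impacted

print(impact_set({"Gain"}, call_graph))  # {'Gain', 'PID', 'Controller', 'Planner'}
```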

Paper 84: Feature Selection and Classification of Microarray Datasets Based on an Improved Binary Harris Hawks Optimization Algorithm

Abstract: High-dimensional microarray datasets are prone to the “curse of dimensionality” due to feature redundancy, which impairs the performance of machine learning models, and feature selection is the key to addressing this issue. This study proposes an Improved Binary Harris Hawks Optimization algorithm (IBHHO) for feature selection in high-dimensional microarray data. Core innovations comprise: i) a hybrid filter-wrapper framework integrating a filter method (ReliefF), a wrapper method (HHO) and a classifier (SVM) to simultaneously optimize ReliefF parameters, SVM hyperparameters, and feature subsets; ii) a differentiated exploration–exploitation strategy leveraging HHO’s two-stage behavior (global parameter optimization during exploration; feature refinement and local parameter tuning during exploitation); and iii) an elite feature guidance strategy that reduces redundant exploration and accelerates convergence via fixed key-feature anchor points. Experiments conducted on eight public microarray datasets demonstrate that IBHHO reduces feature counts while improving classification accuracy, achieving comprehensive performance superior to benchmark algorithms. Consequently, IBHHO offers an efficient feature selection framework for high-dimensional biomedical data analysis.

Author 1: Guoxia LI
Author 2: Wen SHI
Author 3: Jingyu ZHANG
Author 4: Zhixia GU
Author 5: Jixiang XU
Author 6: Yueyue LI
Author 7: Yaxing SUN

Keywords: Microarray dataset; feature selection; parameter optimization; ReliefF

PDF
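
A minimal sketch of the kind of wrapper objective such hybrid filter-wrapper methods optimize: classifier error plus a penalty on subset size. The 0.99 weighting, SVM settings, and synthetic data are assumptions, not IBHHO's exact formulation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=40, n_informative=8, random_state=0)

def fitness(mask, alpha=0.99):
    """Score a binary feature mask: low error, few features = better."""
    if not mask.any():
        return 1.0                                   # empty subset is worst
    acc = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
    return alpha * (1 - acc) + (1 - alpha) * mask.sum() / mask.size

rng = np.random.default_rng(0)
mask = rng.random(40) < 0.3                          # one candidate "hawk"
print(f"fitness = {fitness(mask):.4f} using {mask.sum()} features")
```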

Paper 85: Robust Ulcerative Colitis Detection via Integrated Convolutional Feature Encoding, Bidirectional Temporal Context, and Data Augmentation for Class Imbalance

Abstract: Ulcerative Colitis (UC), a chronic inflammatory bowel disease, presents significant diagnostic challenges due to its overlapping symptoms with other gastrointestinal disorders and the complex visual patterns in endoscopic imagery. Accurate and early detection is essential to guide effective treatment and improve patient outcomes. This research introduces a robust hybrid framework that combines convolutional feature extraction with bidirectional temporal modelling for the precise identification of UC from medical imagery. The proposed approach integrates CNNs (including MobileNetV3Large, Inception v3, InceptionResNetV2, and Xception) with Bi-GRU and Bi-LSTM networks. The CNNs are responsible for capturing high-level spatial features, while the Bi-GRU and Bi-LSTM modules enhance temporal context understanding, enabling the model to effectively interpret subtle patterns and transitions characteristic of UC. Each hybrid model was designed and thoroughly tested on a curated set of experimental data. Among the combinations, the highest accuracy, 93.10%, was obtained with the Xception + Bi-GRU + Bi-LSTM model. Inception v3 + Bi-GRU + Bi-LSTM followed closely, attaining an accuracy of 92.62%. Different data augmentation techniques were deployed to handle the class imbalance in the LIMUC dataset. Notably, the bidirectional temporal modelling component significantly improved the recognition of sequential dependencies in medical image frames, enhancing the model’s diagnostic robustness. The findings demonstrate that integrating CNNs with bidirectional temporal encoders offers a promising solution for UC detection, providing a valuable tool for clinicians in automated diagnostic systems. This study not only contributes to the advancement of intelligent medical imaging but also paves the way for deploying real-time UC detection models in clinical practice.

Author 1: Dharmendra Gupta
Author 2: Jayesh Gangrade
Author 3: Yadvendra Pratap Singh
Author 4: Shweta Gangrade

Keywords: Ulcerative Colitis Detection (UCD); CNNs; Bi-GRU; Bi-LSTM; medical image

PDF

Paper 86: DALG: A Dual Attention-Based LSTM-GRU Model for Exchange Rate Volatility Forecasting in China’s Forex Sector

Abstract: Exchange rate volatility forecasting plays a vital role in guiding financial decisions and economic planning, particularly in China’s dynamic foreign exchange market. This study proposes a novel deep learning framework, termed DALG (Dual Attention-based LSTM-GRU), designed to capture complex temporal patterns and feature dependencies in high-frequency USD/RMB exchange rate data. By integrating LSTM and GRU architectures with a dual-stage attention mechanism, comprising input and temporal attention, the proposed DALG model enhances the interpretability and accuracy of exchange rate volatility forecasts. The model is empirically evaluated against benchmark models such as LSTM, GRU, and a hybrid LSTM-DA using standard performance metrics, including Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). Experimental results demonstrate that the DALG model consistently outperforms traditional and hybrid deep learning models, offering superior predictive performance. The findings suggest that attention-enhanced deep learning architectures hold significant promise for robust financial time series modeling and forecasting in volatile forex markets.

Author 1: Shamaila Butt
Author 2: Mohammad Abrar
Author 3: Muhammad Ali Chohan
Author 4: Muhammad Farrukh Shahzad

Keywords: Exchange rate forecasting; deep learning; LSTM-GRU hybrid; attention mechanism; financial time series; USD/RMB volatility

PDF

Paper 87: Parameter-Free Negative Extreme Anomalous Undersampling Techniques on Class Imbalance Problems

Abstract: This research addressed the critical challenge of class imbalance in classification, which is a prevalent issue in real-world applications. Standard classifiers often struggled with imbalanced datasets and frequently misclassified the minority class (positive instances) due to the overwhelming presence of the majority class (negative instances). The proposed Negative Extreme Anomalous Undersampling Technique (NEXUT) was introduced as a parameter-free approach. It leveraged the negative extreme anomalous score to strategically eliminate negative instances located in overlapping regions. This targeted removal was designed to improve the classifier’s ability to effectively distinguish between the two classes. To evaluate the effectiveness of the proposed method, we conducted a comprehensive comparison with established undersampling techniques. The evaluation utilized both synthetic datasets and twelve datasets from the UCI repository. Six different classifiers were employed to ensure a diverse and unbiased performance assessment. Results from the Wilcoxon signed-rank test confirmed that the proposed method achieved significantly higher performance compared to existing techniques. These findings demonstrated the potential of NEXUT as a robust and valuable tool for addressing class imbalance problems.

Author 1: Benjawan Jantamat
Author 2: Krung Sinapiromsaran

Keywords: Classification; class imbalance; imbalanced datasets; undersampling; parameter-free method; negative extreme anomalous score

PDF

Paper 88: Blockchain Enabled Healthcare Supply Chain: Review, Case Study and Future Opportunities

Abstract: Blockchain is a major component of future smart healthcare that can improve the security, reliability, trust and automation of healthcare supply chain processes. Blockchain has several applications in areas such as medicine procurement, supply chain tracking, and drug traceability. In this study, we present a review of recent works in the area of Blockchain-enabled healthcare supply chains, with a particular focus on pharmaceutical supply chains. We categorize the literature into three major areas, namely procurement, asset management, and system efficiency improvement. We also present a case study of efficient smart contracts for the pharmaceutical supply chain, in which we identify the different stakeholders of the pharmaceutical supply chain and define the functions and tasks performed by each stakeholder and their interactions with one another. We implement the proposed smart contract in the Remix Integrated Development Environment (IDE) using the Solidity language and evaluate the transaction cost of each function used in the smart contract. Lastly, we present future opportunities for Blockchain-enabled healthcare supply chains.

Author 1: Muhammad Saad
Author 2: Kamran Ali
Author 3: Muhammad Awais Javed
Author 4: Ahmad Naseem Alvi
Author 5: Ahmed Alfakeeh

Keywords: Blockchain; smart contracts; healthcare; supply chain

PDF

Paper 89: An Approach Based on Named Entity Recognition and Semantic Analysis for Recruitment Efficiency and Optimization

Abstract: Modern recruitment requires smarter, faster, and more inclusive methods to manage the growing volume and diversity of job applications and candidate resumes. Manual screening is often ineffective and unreliable, especially in low-resource or multilingual contexts. To address this challenge, we propose an approach that automates and optimizes key stages of the recruitment process. This three-stage approach includes: 1) extracting structured data from resumes using a robust Named Entity Recognition (NER) system, which comprises a NER annotator, a feature extractor, and a transition-based parser; 2) employing a fine-tuned transformer model to perform semantic matching between candidates and job descriptions; and 3) leveraging a large language model to generate interview questions tailored to specific job requirements, thereby improving the relevance and personalization of candidate assessments. The recruitment system was tested on a large-scale resume and job posting dataset across multiple domains. Our NER model reported an F1-score of 85.11% in entity extraction, and the matching component reported accuracy levels as high as 92% when using hierarchical job classes. The results demonstrate the efficacy of combining deep learning techniques with semantic reasoning in enhancing automation, accuracy, and fairness in hiring.

Author 1: Ismail Ifakir
Author 2: Noureddine Mohtaram
Author 3: El Habib Nfaoui
Author 4: Abderrahim Zannou
Author 5: Mohammed El Hassouni

Keywords: Named entity recognition; large language models; feature extraction; question generation; matching

PDF
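
A sketch of the semantic-matching stage alone, using an off-the-shelf sentence-transformer as a stand-in for the paper's fine-tuned model; the model name and texts are assumptions for illustration.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # generic stand-in model
job = "Backend developer with Python, REST APIs, and PostgreSQL experience."
resumes = [
    "5 years building Python microservices and REST APIs; PostgreSQL, Docker.",
    "Graphic designer skilled in branding, Illustrator, and typography.",
]
job_emb = model.encode(job, convert_to_tensor=True)
res_emb = model.encode(resumes, convert_to_tensor=True)
scores = util.cos_sim(job_emb, res_emb)[0]       # cosine similarity per resume
for resume, score in sorted(zip(resumes, scores), key=lambda p: -float(p[1])):
    print(f"{float(score):.2f}  {resume[:50]}...")
```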

Paper 90: Towards a Robust DNA Storage System with a Multilayer Approach

Abstract: The growing demand for data storage requires innovative, resilient solutions to the challenges of cost, space, and energy consumption posed by current methods. DNA stands out as a promising next-generation data storage medium, offering a remarkable storage density of 10^19 bits per cubic centimeter, some eight orders of magnitude denser than conventional media. This study explores the potential of DNA storage by proposing an intelligent multi-layer solution to overcome current technological challenges. The system combines the storage capabilities of DNA with sophisticated solutions such as data compression, error correction, and cryptography, transforming the concept of DNA storage into a tangible reality. This study also focused on the first layer, dedicated to data compression. The results obtained represent a significant advance in evaluating the potential of different compression algorithms, through a comparative study of techniques such as Huffman coding, run-length coding, LZW and LZ77. This analysis enabled us to define the essential components of the first layer of the proposed approach. Finally, a digital-domain interface to visually present the overall results of the project was introduced, providing insight into the system’s efficiency, data integrity and ease of use.

Author 1: Ayoub Sghir
Author 2: Manar Sais
Author 3: Douha Bourached
Author 4: Jaafar Abouchabaka
Author 5: Najat Rafalia

Keywords: DNA storage; data compression; error correction; Huffman encoding; run-length encoding; LZW; LZ77

PDF
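
As a taste of the compression layer, a minimal run-length encoder/decoder, one of the four schemes compared above, applied to a short DNA-like string:

```python
def rle_encode(s: str):
    """Collapse runs of repeated symbols into (symbol, count) pairs."""
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

seq = "AAAACCCGGT"
encoded = rle_encode(seq)
print(encoded)                 # [('A', 4), ('C', 3), ('G', 2), ('T', 1)]
assert rle_decode(encoded) == seq
```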

Paper 91: Hybrid Recommender System for Precision Chemical Application in Banana Cultivation Using Matrix Factorization and Content-Based Filtering

Abstract: Proper management of pesticides and fertilizers is critical for effective control of banana diseases, but integrating diverse agricultural data has been a challenge. The novelty of this study is a hybrid recommendation system that combines Content-Based Filtering (CBF) with Matrix Factorization (MF) to recommend chemical treatments in banana cultivation. The system exploits heterogeneous data, such as soil nutrient profiles (NPK, pH), climatic variables, and disease signatures, to create customized chemical recommendations for disease management. A real-world agricultural dataset was used to evaluate the hybrid approach, and the precision, recall, F1-score, and accuracy of the system were measured. The findings indicate that the proposed model performed better than traditional single-method or user-based recommendation systems, predicting disease outbreaks with an F1-score of up to 98% for Black Sigatoka; these results were highly consistent across other disease classes and different chemical interventions. Notably, the hybrid system not only optimizes chemical costs and crop yields but also promotes environmental sustainability by reducing superfluous chemical use. The methodology, dataset characteristics, and evaluation measures are described, explaining how the integration of CBF and MF addresses the complexity and variability of agricultural data. The solution provided in this work is a high-performance, scalable tool for precision agriculture that supports informed decision-making by farmers and agricultural planners.

Author 1: Ravi Kumar Tirandasu
Author 2: Prasanth Yalla
Author 3: Pachipala Yellamma

Keywords: Hybrid recommendation system; content-based filtering; matrix factorization; banana disease management; agricultural data heterogeneity; precision agriculture; chemical application optimization; black sigatoka

PDF
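
A minimal sketch of the hybrid scoring idea: blend a content-based similarity with a matrix-factorization prediction from latent factors. The 0.5 blend weight and all numbers are toy assumptions, not the paper's learned values.

```python
import numpy as np

def cbf_similarity(field_profile, chemical_profile):
    """Cosine similarity between field conditions and a chemical's target profile."""
    a, b = np.asarray(field_profile, float), np.asarray(chemical_profile, float)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def mf_prediction(farm_factors, chemical_factors):
    """Dot product of learned latent factors, as in matrix factorization."""
    return float(np.dot(farm_factors, chemical_factors))

field = [0.7, 0.2, 0.9, 0.5]             # e.g. normalized N, P, K, humidity
chem = [0.8, 0.1, 0.8, 0.6]              # chemical's effective-use profile
farm_latent, chem_latent = [0.4, 1.1], [0.9, 0.7]

alpha = 0.5                               # blend weight (assumed)
score = (alpha * cbf_similarity(field, chem)
         + (1 - alpha) * mf_prediction(farm_latent, chem_latent))
print(f"hybrid recommendation score = {score:.3f}")
```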

Paper 92: Autonomous Self-Adaptation in the Cloud: ML-Heal’s Framework for Proactive Fault Detection and Recovery

Abstract: Cloud computing environments increasingly host applications constructed from orchestrated service compositions, which deliver enhanced functionality through distributed workflows. This paradigm, however, introduces vulnerabilities where component failures can cascade, disrupting entire applications. Conventional fault tolerance often falls short in these dynamic settings. This paper introduces ML-Heal, an autonomous self-healing framework architected to bolster the resilience of such service compositions. ML-Heal leverages machine learning for proactive failure detection, precise diagnosis, and intelligent recovery strategy selection. The framework integrates real-time monitoring data, applies ML-based anomaly detection and classification to identify faults, and plans corrective actions via a learned policy or predictive models. Implemented using Python with scikit-learn models and a custom orchestration layer, its efficacy is demonstrated through simulated fault injection scenarios. Illustrative system architecture and evaluation results show that this ML-driven methodology significantly curtails recovery time and augments availability when confronted with faults, showcasing AI’s potential in creating more robust, self-adaptive cloud service compositions with minimal human oversight.

Author 1: Qais Al-Na’amneh
Author 2: Mahmoud Aljawarneh
Author 3: Rahaf Hazaymih
Author 4: Ayoub Alsarhan
Author 5: Khalid Hamad Alnafisah
Author 6: Nayef H. Alshammari
Author 7: Sami Aziz Alshammari

Keywords: Cloud computing; service composition; self-healing systems; autonomic computing; machine learning; anomaly detection; automated recovery; fault tolerance

PDF
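
A minimal sketch of an anomaly-detection step of this kind, using scikit-learn's IsolationForest as a stand-in detector on synthetic service metrics; ML-Heal's diagnosis and recovery-policy components are not shown.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: latency (ms), error rate, CPU load. Train on healthy samples...
healthy = rng.normal([120, 0.01, 0.4], [15, 0.005, 0.1], size=(500, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# ...then score a live window containing one injected fault.
window = np.vstack([healthy[:5], [[900, 0.35, 0.97]]])
flags = detector.predict(window)          # -1 = anomaly, 1 = normal
for metrics, flag in zip(window, flags):
    if flag == -1:
        print("anomaly detected, trigger recovery:", np.round(metrics, 2))
```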

Paper 93: Exploring the Factors Influencing School Dropout: A Logit Model Analysis

Abstract: School dropout negatively impacts a country’s development index, with numerous factors contributing to this complex phenomenon. To investigate the factors associated with school dropout in a specific region of Morocco, a cross-sectional study was conducted, encompassing a weighted sample of 274 junior secondary education students. Data collection was facilitated through a questionnaire administered to school directors. The data processing involved two main stages: preparation and modeling. The modeling phase employed a binary logistic regression model, focusing on the student’s dropout status as the dependent variable. The study’s findings highlighted several significant factors associated with school dropout: academic performance (as indicated by exam grades), the student’s age and gender, and the availability of school transportation services that encourage students to continue their studies. Additionally, while class size also played a significant role, its impact was deemed less critical compared to the other factors identified. These results underscore that school dropout is influenced by a multitude of factors, suggesting the need for targeted interventions to prevent dropout and foster academic success, particularly among female students.

Author 1: Noaman LAKCHOUCH
Author 2: Lamarti Sefian MOHAMMED
Author 3: Mustapha KHALFOUNI

Keywords: School dropout; development index; Morocco; cross-sectional study; logistic regression; academic performance; exam grades; age; gender; school transportation; academic success

PDF
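
A minimal sketch of the modeling stage with statsmodels, fitting a binary logit on synthetic stand-ins for the predictors named above (exam grade, age, transport access); the simulated coefficients are illustrative, not the study's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 274                                     # same sample size as the study
grade = rng.normal(11, 3, n)                # exam grade out of 20 (assumed scale)
age = rng.integers(12, 18, n)
transport = rng.integers(0, 2, n)           # 1 = school transport available

# Simulate dropout with assumed effect directions: better grades and
# transport reduce dropout, older age increases it.
logit = -0.4 * grade + 0.5 * age - 1.0 * transport - 2.0
dropout = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([grade, age, transport]))
result = sm.Logit(dropout, X).fit(disp=0)
print(result.summary(xname=["const", "grade", "age", "transport"]))
```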

Paper 94: Securing the Healthcare Supply Chain Using Blockchain-Enabled Smart Contracts

Abstract: Blockchain technology is an innovative tool that has shown its effectiveness in many sectors, and healthcare is likely to experience a revolutionary transformation through it. In healthcare, the blockchain functions as a distributed ledger that is constantly updated with records that cannot be removed or altered without consensus. In other words, blockchain-based smart contracts enable clients to transact without relying on intermediaries, making the process more reliable. Smart contracts regulate many activities and dealings of stakeholders, automating processes, augmenting visibility, maximizing productivity, and saving time. Healthcare supply chains can leverage blockchain technology to address pain points such as connectivity, traceability, and the fight against counterfeit medicines. The purpose of our analysis was to advance traceability between healthcare supply chain elements, such as the transportation of pharmaceuticals or medical equipment. For a more practical implementation, we worked on minimizing the cost of the smart contracts deployed on the blockchain for the healthcare logistics system.

Author 1: Muhammad Saad
Author 2: Hamail Ashraf
Author 3: Muhammad Saffi Ullah Khan
Author 4: Muhammad Awais Javed
Author 5: Ahmad Naseem Alvi
Author 6: Ahmed Alfakeeh

Keywords: Blockchain; smart contracts; supply chain

PDF

Paper 95: A Scalable and Privacy-Preserving Hybrid Blockchain Architecture for Secure Healthcare Data Management

Abstract: The protection of sensitive medical information has become a critical concern in modern digital healthcare. This study introduces a Hybrid Architecture that ensures secure and reliable healthcare data management through the integration of blockchain technology with off-chain and on-chain mechanisms. Patient records are encrypted using AES-256-GCM, stored in the InterPlanetary File System (IPFS), and verified using Merkle Tree structures, with only the root values anchored on Ethereum smart contracts. This design guarantees data security and integrity while achieving significant gas optimization by reducing on-chain storage costs. Experimental evaluation demonstrates that the proposed system achieves high scalability, efficient transaction processing, and strong resistance to tampering, ensuring confidentiality and auditability. By combining blockchain, cryptographic techniques, and distributed storage, the framework addresses pressing challenges of security, privacy, and trust in healthcare ecosystems. The results highlight the potential of Hybrid Architecture models to deliver a cost-effective, privacy-preserving, and scalable solution for next-generation Healthcare Data Security.
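
The off-chain pipeline described above (AES-256-GCM encryption, Merkle aggregation, anchoring only the root on-chain) can be sketched as follows. The IPFS upload and the Ethereum call are stubbed out, and the `cryptography`/`hashlib` choices are assumptions rather than the authors' exact stack.

```python
# Sketch: encrypt records, build a Merkle root over the ciphertext hashes,
# and anchor only the 32-byte root on-chain to cut gas costs.
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, record: bytes) -> bytes:
    nonce = os.urandom(12)                      # 96-bit nonce per GCM usage
    return nonce + AESGCM(key).encrypt(nonce, record, None)

def merkle_root(leaves: list) -> bytes:
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node if odd
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

key = AESGCM.generate_key(bit_length=256)
ciphertexts = [encrypt_record(key, f"patient-record-{i}".encode())
               for i in range(4)]
root = merkle_root(ciphertexts)
print("anchor this 32-byte root on-chain:", root.hex())
# Real deployment: push each ciphertext to IPFS, keep the CIDs off-chain,
# and submit only `root` to the smart contract.
```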

Author 1: Sanjida Sharmin
Author 2: Mohammad Shamsul Arefin
Author 3: Pranab Kumar Dhar
Author 4: Zinnia Sultana
Author 5: Sultana Akter

Keywords: Blockchain; healthcare; AES-256-GCM; merkle tree; IPFS; data security; scalability; gas optimization; ethereum; hybrid architecture

PDF

Paper 96: Advanced Strategies for Big Data Resource and Storage Optimization: An AI Perspective

Abstract: The growing use of AI-driven technologies in daily life has become essential for simplifying tasks, and it generates enormous volumes of data in the process. This data comes from many sources: media, social networks, connected objects, online transactions, and smart devices, among others, and is generally organized into three categories: structured, unstructured, and semi-structured. Such data is called Big Data, characterized by its enormous size and fast flow as well as the diversity of its sources. Its importance lies in its ability to provide future perspectives and improve the decision-making process. To get the most out of this data, it must be stored and processed, but current technologies face many challenges and are often insufficient to cope with the huge amounts of data generated. Advanced, highly efficient technologies are therefore needed, capable of storing the data in its entirety and processing it faster. Artificial intelligence can also help optimize the use of storage and processing resources by compressing data or deleting redundant data, thus saving storage space. This study discusses various approaches for optimizing Big Data processing, such as AI compression techniques, the PSNR-SSIM evaluation method, and many others. The compression ratio for these algorithms is around 90%. With these technologies, it is possible to optimize the use of storage space, ensuring efficient and well-managed storage.
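
Since the abstract evaluates compression with the PSNR-SSIM method and reports ratios around 90%, a minimal sketch of those bookkeeping formulas may help. SSIM is omitted for brevity (libraries such as scikit-image provide a full implementation), and the sample image here is random noise.

```python
# PSNR and compression-ratio bookkeeping; illustrative values only.
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak=255.0) -> float:
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def compression_ratio(raw_bytes: int, compressed_bytes: int) -> float:
    return 1 - compressed_bytes / raw_bytes    # 0.9 == 90% space saved

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(img + np.random.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(img, noisy):.1f} dB")
print(f"ratio: {compression_ratio(10_000_000, 1_000_000):.0%} saved")
```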

Author 1: Ayoub Sghir
Author 2: Ayoub Allali
Author 3: Najat Rafalia
Author 4: Jaafar Abouchabaka

Keywords: Artificial intelligence; big data; optimization; resource; storage

PDF

Paper 97: YOLOv8s-Swin: Enhanced Tomato Ripeness Detection for Smart Agriculture

Abstract: Accurate object detection and classification are paramount in precision agriculture for assessing ripeness stages and optimizing yield, particularly for high-value crops like tomatoes. Traditional manual inspection methods are laborious, time-consuming, and error-prone. Furthermore, existing deep learning models often struggle with real-world agricultural challenges such as varying lighting, occlusions from foliage or other fruits, and dense clustering of small objects. To address these limitations and enhance tomato production efficiency and quality in diverse agricultural conditions, this study introduces YOLOv8s-Swin, an advanced object detection model. YOLOv8s-Swin integrates the powerful YOLOv8s architecture with a Swin Transformer module (C3STR) to capture global and local contextual information, crucial for robust small object detection. It also incorporates Focus, Depthwise Convolution (DWconv), Spatial Pyramid Pooling with Contextual Spatial Pyramid Convolution (SPPCSPC), and C2 modules for preserving fine details, reducing computational overhead, enhancing multi-scale feature fusion, and improving high-level semantic feature extraction, respectively. The Wise Intersection over Union (WIoU) loss function is adopted to enhance localization and address convergence issues. Evaluated on a comprehensive tomato image dataset, YOLOv8s-Swin demonstrated superior performance with a mean Average Precision (mAP@0.5) of 88.3%, precision of 84.4%, recall of 79.9%, and an F1-Score of 0.821. This significantly surpasses the base YOLOv8s (84.7% mAP@0.5, 0.795 F1-Score) and other models like Faster R-CNN, SSD, YOLOv4, YOLOv5s, and YOLOv7, all under identical conditions. Maintaining a competitive inference speed of 166.67 FPS, YOLOv8s-Swin offers a robust and efficient solution for AI-driven crop management and sustainable food production.
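
A hedged sketch of the Wise-IoU (v1) idea the study adopts: scale the plain IoU loss by a distance-based focusing factor computed from the smallest box enclosing prediction and target. This is a simplified reading of the published WIoU formulation, not the study's training code.

```python
# Simplified WIoU-v1-style loss; boxes are (x1, y1, x2, y2), shape (N, 4).
import torch

def wiou_v1_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    lt = torch.max(pred[:, :2], target[:, :2])      # intersection top-left
    rb = torch.min(pred[:, 2:], target[:, 2:])      # intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + 1e-7)

    # Smallest enclosing box and squared center distance
    enc_wh = torch.max(pred[:, 2:], target[:, 2:]) - \
             torch.min(pred[:, :2], target[:, :2])
    cp = (pred[:, :2] + pred[:, 2:]) / 2
    ct = (target[:, :2] + target[:, 2:]) / 2
    dist2 = ((cp - ct) ** 2).sum(dim=1)

    # Focusing factor; enclosing-box term detached as in WIoU-v1
    r = torch.exp(dist2 / (enc_wh.pow(2).sum(dim=1).detach() + 1e-7))
    return (r * (1 - iou)).mean()

pred = torch.tensor([[10., 10., 50., 50.]])
target = torch.tensor([[12., 12., 48., 52.]])
print(wiou_v1_loss(pred, target))
```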

Author 1: Jalal Uddin Md Akbar
Author 2: Syafiq Fauzi Kamarulzaman

Keywords: Agricultural automation; attention mechanism; computer vision; smart agriculture; object detection; YOLO; swin transformer

PDF

Paper 98: Next-Generation Network Security: An Analysis of Threats, Challenges and Emerging Intelligent Defenses Within SDN and NFV Architectures

Abstract: The integration of Software Defined Networking (SDN) and Network Function Virtualization (NFV) offers considerable advantages in terms of scalability, interoperability, and cost-efficiency. They redefine network architecture, replacing rigid hardware-based control with a more flexible, software-driven approach. However, this convergence also introduces significant security threats and challenges due to architectural vulnerabilities and an expanded attack surface. This study presents a comprehensive overview of the key security risks associated with SDN/NFV networks. It analyzes existing countermeasures, highlighting their effectiveness in addressing specific threats while identifying limitations in achieving comprehensive security due to inherent architectural vulnerabilities. The study concludes with a discussion on open challenges and future research directions toward more secure and resilient network infrastructures. This study highlights the importance of an integrated security approach and identifies areas where further research is required to enhance SDN/NFV security.

Author 1: Amina SAHBI
Author 2: Faouzi JAIDI
Author 3: Adel BOUHOULA

Keywords: Next generation network security; software defined networking; network function virtualization; network security; artificial intelligence

PDF

Paper 99: Bridging Tradition and Technology: Leveraging ERP Systems for Streamlined Supply Chains and Modernized Keropok Lekor Production Management

Abstract: Online marketplaces and social media offer substantial opportunities for business growth, and they have contributed greatly to the increased demand for keropok lekor (fish crackers) from Terengganu throughout Malaysia as market reach has expanded. The positive effect of these online platforms as significant digital marketing tools encourages keropok lekor producers in Terengganu to innovate and diversify their products, with the goal of marketing keropok lekor and increasing its sales at a larger scale. Innovations include selling keropok lekor in pre-packaged form and introducing new versions with additional flavors, textures, and shapes to meet a broader range of customer preferences. This positive development promotes the commercialization of keropok lekor, which in turn requires producers such as ROMA Food Industry Sdn. Bhd. (RFI) to handle higher market demand without significant disruption. An automated approach is crucial for streamlining keropok lekor business operations so that producers can absorb not only the increased market demand but also greater work volumes and expansion without compromising quality or efficiency. ROMAns is an Enterprise Resource Planning (ERP) system built to optimize keropok lekor business processes by facilitating the flow of information across different functions, improving efficiency, and delivering a competitive edge through integrated data and streamlined operations.

Author 1: Faizah Aplop
Author 2: Wan Muhammad Ikhwan Wan Mohammad
Author 3: Muhammad Nasyrul Adly Mohd Afendy
Author 4: Mustafa Man
Author 5: Fakhrul Adli Mohd Zaki
Author 6: Rosaida Rosly
Author 7: Ismail Abu Bakar

Keywords: Overall equipment effectiveness; real-time information; centralization; data-driven workflows; business digitalization

PDF

Paper 100: Enhanced Phishing Website Detection Using Optimized Ensemble Stacking Models

Abstract: Phishing attacks remain a persistent and evolving cybersecurity threat, necessitating the development of highly accurate and efficient detection mechanisms. This research introduces an optimized ensemble stacking framework for phishing website detection, leveraging advanced machine learning techniques, hybrid feature preprocessing, and meta-learning strategies. The proposed approach systematically evaluates nine diverse base classifiers: XGBoost, CatBoost, LightGBM, Random Forest, Gradient Boosting, Extra Trees, Support Vector Classifier, AdaBoost, and Bagging. We compare baseline classifiers, a standard ensemble stacking model, and four optimized stacking configurations across four balanced and imbalanced datasets. Our optimized ensemble stacking achieves perfect accuracy (one hundred percent) on the first two datasets, and over ninety-nine percent accuracy on the two more challenging imbalanced datasets. A direct comparison with related studies demonstrates that our optimized stacking approach delivers superior detection accuracy.
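
A minimal sketch of the stacking configuration the paper evaluates, restricted to scikit-learn estimators; XGBoost, CatBoost, and LightGBM plug in the same way through their scikit-learn-compatible wrappers, and the synthetic dataset stands in for the four phishing datasets.

```python
# Ensemble stacking sketch with a logistic-regression meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
    ("et", ExtraTreesClassifier(n_estimators=200, random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),
    ("ada", AdaBoostClassifier(random_state=0)),
    ("bag", BaggingClassifier(random_state=0)),
]
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5, n_jobs=-1)
stack.fit(X_tr, y_tr)
print(f"stacked accuracy: {stack.score(X_te, y_te):.3f}")
```

Out-of-fold predictions (`cv=5`) keep the meta-learner from overfitting the base models' training outputs, which is the usual rationale for stacking over simple voting.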

Author 1: Zainab Alamri
Author 2: Abeer Alhuzali
Author 3: Bassma Alsulami
Author 4: Daniyal Alghazzawi

Keywords: Phishing detection; machine learning; ensemble stacking; cybersecurity

PDF

Paper 101: Grey Clustering Algorithm for Urban Air Quality Classification: A Case Study in Lima, Peru

Abstract: This study introduces a grey clustering algorithm based on the Central Triangular Whitenization Weight Function (CTWF), designed to classify urban air quality under conditions of limited or uncertain data. Based on Grey Systems Theory (GST), the proposed algorithm facilitates structured multi-criteria assessments using sparse or irregular datasets, a condition frequently encountered in urban environmental monitoring. The algorithm stands out for its low computational complexity, interpretability, and ability to integrate multiple pollutants into a single qualitative classification, making it particularly suitable for smart city applications and real-time decision support systems. To evaluate its performance, the grey clustering algorithm (CTWF) was applied to a case study in Northern Lima, Peru, covering eight semesters between 2011 and 2019 and including four key pollutants: PM10, SO2, NO2, and CO. Although all periods were classified as "Good" under national standards, the disaggregated analysis revealed PM10 as the most persistent concern, while CO levels remained consistently low, and SO2 and NO2 showed moderate fluctuations. These findings validate the algorithm's capacity to extract pollutant-specific insights and spatiotemporal trends even in data-scarce environments. Future enhancements may include meteorological integration, broader pollutant sets (e.g., PM2.5, ozone), and satellite data to extend forecasting capabilities and spatial resolution.
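
For reference, the standard central triangular whitenization weight function and the resulting clustering coefficient take the form below; the notation follows common Grey Systems usage and may differ from the paper's exact symbols.

```latex
% CTWF for grey class k with center \lambda_k; x_{ij} is criterion j of
% object i and \eta_j its weight. Object i is assigned to the class k
% that maximizes \sigma_i^k.
f_j^{k}(x) =
\begin{cases}
\dfrac{x - \lambda_{k-1}}{\lambda_{k} - \lambda_{k-1}}, &
    x \in (\lambda_{k-1}, \lambda_{k}], \\[6pt]
\dfrac{\lambda_{k+1} - x}{\lambda_{k+1} - \lambda_{k}}, &
    x \in (\lambda_{k}, \lambda_{k+1}), \\[4pt]
0, & x \notin (\lambda_{k-1}, \lambda_{k+1}),
\end{cases}
\qquad
\sigma_i^{k} = \sum_{j=1}^{m} f_j^{k}(x_{ij})\,\eta_j .
```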

Author 1: Alexi Delgado
Author 2: Katherine Paredes Guerrero
Author 3: Anderson Carrillo

Keywords: Grey clustering algorithm; air quality classification; grey systems theory; urban air pollution

PDF

Paper 102: Simulation-Driven Improvement of King Khalid University Non-Monthly Entitlement Workflows in AnyLogic

Abstract: Aiming to offer practical recommendations for enhancing process accuracy and operational performance, this study investigates the factors affecting the efficiency and timeliness of disbursing non-monthly financial entitlements to university staff. A process workflow model was developed using AnyLogic simulation tools and structured interviews, and two main experiments were then conducted to enhance effectiveness. The first introduced complete automation through an electronic platform that centralized all departmental tasks involved in financial disbursements; the second concentrated on dynamic workload distribution to balance duties and maximize performance. By lowering service times, minimizing manual errors, and simplifying task allocation, the results show that combining automated workflows with real-time workload distribution significantly increased operational efficiency. The faster, more accurate, and fairer processing of financial entitlements produced by this change highlights the role of technology-driven solutions in achieving lasting organizational excellence.
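
As a toy analogue of the second experiment, the snippet below compares fixed rotation against least-loaded routing of requests; the service times and request counts are invented, since the study's actual model was built in AnyLogic from interview data.

```python
# Toy comparison of static versus dynamic workload allocation.
import random

random.seed(1)
N_CLERKS, N_REQUESTS = 4, 400
service_times = [random.expovariate(1 / 20) for _ in range(N_REQUESTS)]  # min

def makespan(assign):
    busy = [0.0] * N_CLERKS
    for i, s in enumerate(service_times):
        busy[assign(i, busy)] += s
    return max(busy)

static  = makespan(lambda i, load: i % N_CLERKS)           # fixed rotation
dynamic = makespan(lambda i, load: load.index(min(load)))  # least-loaded
print(f"static rotation: {static:.0f} min, dynamic: {dynamic:.0f} min")
```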

Author 1: Elaf Ali Alsisi
Author 2: Osman A. Nasr
Author 3: Badriah Mousa Alqahtani
Author 4: Rodoon Shawan Alnajei

Keywords: Non-monthly financial entitlements; operational efficiency; AnyLogic simulation; process optimization; university staff

PDF

Paper 103: From Review to Refinement: An Expert-Informed Environmental Diagnostic Model for Stingless Bee Colony Monitoring

Abstract: The resilience of stingless bee colonies has become increasingly challenged by erratic climate conditions and intensified environmental stressors. While previous studies have introduced diagnostic models for monitoring colony health, most remain constrained by a narrow reliance on either environmental or behavioral parameters alone. This study proposes a refined diagnostic model that builds on existing frameworks and is further shaped by expert insights from the field. The model integrates environmental inputs, specifically temperature and humidity, with behavioral activity detected via video analysis to deliver a multi-dimensional assessment of colony status. Through a structured review of the literature and interviews with apiculture experts, we identify critical gaps in conventional systems and translate those findings into a more responsive and field-deployable architecture. The result is an improved model capable of categorizing colony health with greater sensitivity and clarity, designed to support early intervention and long-term monitoring. The model is visualized through comparative schematic diagrams, showing the evolution from a basic environmental-only logic to a more holistic decision-making system.
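
A hypothetical rule-based reading of the refined diagnostic logic, fusing environmental readings with a video-derived activity index into a colony status label; every threshold below is an invented placeholder, as the paper derives its categories from the literature and expert interviews.

```python
# Placeholder diagnostic rules combining environment and behavior.
def colony_status(temp_c: float, humidity_pct: float, activity_idx: float) -> str:
    env_ok = 26.0 <= temp_c <= 32.0 and 60.0 <= humidity_pct <= 85.0
    if env_ok and activity_idx >= 0.6:
        return "healthy"
    if env_ok or activity_idx >= 0.4:
        return "watch"          # one dimension degraded: flag for early check
    return "intervene"          # both environment and behavior degraded

print(colony_status(29.5, 72.0, 0.8))   # healthy
print(colony_status(35.0, 50.0, 0.2))   # intervene
```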

Author 1: Yang Lejing
Author 2: Rozi Nor Haizan Nor
Author 3: Yusmadi Yah Jusoh
Author 4: Nur Ilyana Ismarau Tajuddin
Author 5: Khairi Azhar Aziz

Keywords: Stingless bees; environmental monitoring; behavioral analysis; diagnostic model; expert-informed refinement

PDF

Paper 104: A Secure Authentication Protocol for IoT Devices

Abstract: The rapid evolution of the Internet of Things (IoT) offers vast opportunities in automation and connectivity, yet simultaneously introduces critical security challenges. One of the most pressing concerns lies in the heterogeneity and limited computational capabilities of IoT devices, which complicate the deployment of robust security mechanisms. In this work, we present a lightweight and secure authentication protocol designed to establish mutual authentication between a server and smart objects. Our protocol enhances the scheme proposed by Fatma et al., addressing its identified vulnerabilities. Formal security analysis using AVISPA and ProVerif confirms the protocol’s resilience against a wide range of threats. Furthermore, a practical simulation was conducted using a Raspberry Pi as the IoT device and a Core i5-based server to evaluate real-world performance. Results show that the protocol executes efficiently in real-time with a reduced authentication delay, demonstrating its feasibility for resource-constrained environments. This research contributes to the development of effective, scalable, and secure authentication solutions tailored for the IoT landscape.
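
For orientation, a textbook challenge-response pattern for mutual authentication with a pre-shared key and HMAC-SHA256 is sketched below; it is illustrative only and is not the paper's protocol, which enhances Fatma et al.'s scheme and was formally verified with AVISPA and ProVerif.

```python
# Generic mutual authentication via nonce exchange and HMAC-SHA256.
import hmac, hashlib, os

PSK = os.urandom(32)  # pre-shared key provisioned at enrollment

def tag(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# 1. Server -> device: fresh challenge
n_s = os.urandom(16)
# 2. Device -> server: its own nonce plus proof of key possession
n_d = os.urandom(16)
device_proof = tag(PSK, b"device", n_s, n_d)
# 3. Server verifies the device, then proves itself back
assert hmac.compare_digest(device_proof, tag(PSK, b"device", n_s, n_d))
server_proof = tag(PSK, b"server", n_d, n_s)
# 4. Device verifies the server -> mutual authentication complete
assert hmac.compare_digest(server_proof, tag(PSK, b"server", n_d, n_s))
print("mutual authentication succeeded")
```

HMAC with fresh nonces keeps the per-message cost to a few hashes, which is why patterns like this suit resource-constrained devices.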

Author 1: Mohamed Ech-Chebaby
Author 2: Hicham Zougagh
Author 3: Hamid Garmani
Author 4: Zouhair Elhadari

Keywords: IoT; Internet of Things; security; authentication

PDF

Paper 105: QoS-Aware Deployment and Synchronization of Digital Twins Over Federated Cloud Platforms for Smart Infrastructure Monitoring

Abstract: Increasingly, Digital Twin (DT) systems are being leveraged in smart infrastructure settings (e.g., structural health monitoring, intelligent traffic control, and distributed utility networks). Yet available solutions face hurdles that prevent real-time synchronization of DT instances across federated cloud platforms, primarily latency variation, quality of service (QoS) assurance, and stale data, all consequences of heterogeneous computing environments. Most solutions depend on static, cloud-only deployment models with no option for dynamic resource negotiation, yielding long update times (typically greater than 200 ms), low accuracy, and poor real-time responsiveness. Additionally, traditional DT models were not designed with multi-regional deployment or QoS-constrained workloads in mind. In this work, a QoS-Aware Federated Digital Twin Orchestration Framework (Q-FDTO) is designed to enable latency-critical infrastructure monitoring across federated cloud regions through the integration of a hybrid edge-cloud control plane, adaptive jitter-aware synchronization intervals, and dynamic resource allocation via reinforcement learning against defined QoS Service Level Objectives (SLOs). The system was evaluated on a smart city testbed of 1200 sensor nodes monitoring structural strain, vibration, and traffic density across twelve locations. The digital twin pipeline comprises (i) ingestion via Wi-Fi MQTT, (ii) stream fusion of all sensor readings via Kalman filtering, and (iii) predictive twin modeling through a temporal graph convolutional network (T-GCN). Synchronization policies were assessed on average update latency (ms), sync drift (ms), and data consistency rate (%). The results show that Q-FDTO reduced average update latency from 194.6 ms to 87.3 ms and achieved a 96.2% consistency rate across federated nodes with less than 2.5% sync drift over 10-minute intervals, demonstrating the architecture's ability to operate across network boundaries and its compatibility with AWS Outposts and Azure Arc hybrid cloud environments. It establishes a scalable and practical approach to latency-sensitive DT deployments in smart infrastructure systems.
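
The stream-fusion stage names Kalman filtering, which a scalar sketch can illustrate; the noise parameters and single-sensor setup are simplifying assumptions, as the testbed fuses multi-sensor streams feeding a T-GCN predictor.

```python
# Scalar Kalman filter smoothing noisy strain readings (random-walk model).
import random

random.seed(7)

def kalman_1d(measurements, q=1e-3, r=0.25):
    x, p = measurements[0], 1.0          # initial state and covariance
    for z in measurements[1:]:
        p += q                           # predict step
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # update with measurement z
        p *= (1 - k)
        yield x

true_strain = 5.0
readings = [true_strain + random.gauss(0, 0.5) for _ in range(50)]
fused = list(kalman_1d(readings))
print(f"last raw: {readings[-1]:.2f}, last fused: {fused[-1]:.2f}")
```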

Author 1: M V Narayana
Author 2: Naveen Reddy N
Author 3: Madhu S
Author 4: Madhu T
Author 5: Niladri Sekhar Dey
Author 6: Sanjeev Shrivastava

Keywords: QoS-aware digital twins; federated cloud synchronization; smart infrastructure monitoring; latency-constrained orchestration; edge-assisted deployment

PDF
