The Science and Information (SAI) Organization
IJACSA Volume 16 Issue 4

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Augmented Sensory Experience and Retention: ASER Framework

Abstract: As education shifts from traditional teacher-centred systems to more student-centred, engagement-driven ones, Augmented Reality (AR) is coming into its own as a way of improving how information is delivered and received. However, while AR is commonly credited with increasing engagement, its potential to support deep, long-term learning has not been fully explored. The ASER Framework (Augmented Sensory Experience and Retention) offers a new approach to this gap by integrating emotional memory, interactive storytelling, and gamification within AR environments. After analyzing the current state of AR education research, this study found a lack of frameworks that combine these elements systematically, thus offering a chance to improve cognitive retention and meaningful learning. ASER is proposed as a multi-sensory model for emotional connection, participation, and knowledge consolidation. The theoretical foundation is strong; however, further empirical validation is required to determine its real-world effectiveness across diverse educational settings. These recommendations provide a starting point for future research and implementation strategies that seek to reshape instructional design for engaging and enduring learning experiences.

Author 1: Samer Alhebaishi
Author 2: Richard Stone

Keywords: Augmented Reality (AR); emotional memory; interactive storytelling; gamification; Augmented Sensory Experience and Retention (ASER) Framework

PDF

Paper 2: Comparing Vision-Instruct LLMs, Vision-Based Deep Learning, and Numeric Models for Stock Movement Prediction

Abstract: This research conducts a comparative study of several stock movement prediction approaches, evaluating large language models (LLMs) and vision-based deep learning models with stock chart images as input, as well as models that utilize numerical data. Specifically, the study investigates a prompt-based LLM framework that processes candlestick charts, comparing its performance with image-based models such as MobileNetV2, Vision Transformer, and Convolutional Neural Network (CNN), as well as models with numerical inputs including Support Vector Machine (SVM), Random Forest, LSTM, and CNN-LSTM. Although LLMs have demonstrated promising results in stock prediction, directly applying them to stock images poses challenges compared to numerical approaches. To address this, the study further improves LLM performance with post-hoc calibration, reducing prediction biases. Experimental results demonstrate that post-hoc calibrated LLMs with visual input achieve competitive performance compared to other models, highlighting their potential as a viable alternative to traditional stock prediction methods while simplifying the prediction process.
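The abstract does not specify the calibration procedure; one common post-hoc scheme divides the model's output probabilities by a label prior estimated from neutral or held-out prompts and renormalizes. A minimal sketch, with illustrative (assumed) numbers:

```python
def calibrate(probs, prior):
    """Divide out the estimated bias prior, then renormalize."""
    adjusted = [p / q for p, q in zip(probs, prior)]
    total = sum(adjusted)
    return [a / total for a in adjusted]

raw = [0.70, 0.30]    # raw LLM up/down probabilities (illustrative)
prior = [0.65, 0.35]  # bias estimated from neutral prompts (assumed)
calibrated = calibrate(raw, prior)  # still sums to 1, bias reduced
```

After calibration, a model that was systematically skewed toward "up" predictions assigns the two classes more balanced probabilities.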

Author 1: Qizhao Chen

Keywords: Convolutional Neural Network (CNN); Large Language Model (LLM); MobileNetV2; stock price prediction; time series forecasting; vision transformer

PDF

Paper 3: Applications of Qhali-Bot in Psychological Assistance and Promotion of Well-being: A Systematic Review

Abstract: Social robots, such as Qhali-bot, have emerged as effective tools for psychological assistance and the promotion of well-being, particularly in areas such as mental health, education, and work environments. The aim of this study is to provide a comprehensive overview of their application in these contexts, through a systematic review based on the PRISMA methodology and a bibliometric analysis. To this end, 41 articles obtained from databases such as Scopus, IEEE Xplore, Web of Science, and JSTOR were evaluated. The findings reveal that social robots offer significant benefits, such as improved adherence to therapeutic treatments, real-time emotional support, and reduced stress levels in various groups of people. These benefits have shown a positive impact on users, especially those facing mental health conditions or high-stress situations, improving their overall well-being. However, significant challenges were encountered, including user acceptance of these technologies, personalization of interactions to meet individual needs, and integration of these systems into pre-existing environments. Furthermore, most of the studies have been carried out in controlled environments, which limits the transferability of the findings to real-world situations. As future lines of research, it is suggested to explore new methodologies for the implementation of these systems in uncontrolled environments, the development of innovative tools that facilitate human-robot interaction, and the evaluation of the long-term impact of these systems in diverse populations. These investigations are crucial to better understand the effectiveness and applicability of social robots in broader and less controlled contexts, which could lead to a more effective integration into daily life.

Author 1: Sebastián Ramos-Cosi
Author 2: Daniel Yupanqui-Lorenzo
Author 3: Enrique Huamani-Uriarte
Author 4: Meyluz Paico-Campos
Author 5: Victor Romero-Alva
Author 6: Claudia Marrujo-Ingunza
Author 7: Alicia Alva-Mantari
Author 8: Linett Velasquez-Jimenez

Keywords: Qhalibot; robot; psychological assistance; well-being; review

PDF

Paper 4: Level of Anxiety and Knowledge About Breastfeeding in First-Time Mothers with Children Under Six Months

Abstract: The World Health Organization notes that one in five women of reproductive age faces episodes of anxiety. In Latin America, more than 50% of women experience postnatal anxiety, and in Huánuco, Peru, 40% of first-time mothers have moderate anxiety. The aim of this study is to analyze the relationship between the level of anxiety and knowledge about breastfeeding in first-time mothers with children under six months of age. The study has a correlational quantitative approach, in which the STAI questionnaire and the Breastfeeding Knowledge Instrument were applied to a total of 166 mothers, using SPSS and a multinomial logistic regression model. The results indicate that 57.23% of the mothers are young, 53.01% have completed secondary school, 22.89% study, and 63.25% had a normal delivery, with 41.57% experiencing complications. In addition, 56.16% of the children were between 4 and 5 months old. Also, 24.10% of the mothers showed moderate state anxiety together with a medium level of knowledge about breastfeeding, and 22.29% showed moderate trait anxiety. It was found that complications during childbirth (p=0.026, OR=1.025753) and the mother's occupation (p=0.013, OR=1.149548) are significantly related to anxiety. It is concluded that, although anxiety does not directly affect knowledge about breastfeeding, it is crucial to offer specific psychological and educational support for new mothers, particularly addressing sociodemographic factors.

Author 1: Frank Valverde-De La Cruz
Author 2: Maria Valverde-Ccerhuayo
Author 3: Ana Huamani-Huaracca
Author 4: Gina León-Untiveros
Author 5: Sebastián Ramos-Cosi
Author 6: Alicia Alva-Mantari

Keywords: Anxiety; knowledge; breastfeeding; first-time mothers; children

PDF

Paper 5: Economic Growth and Fiscal Policy in Peru: Prediction Using Machine Learning Models

Abstract: The empirical literature presents several indicators related to fiscal policy and economic growth. This paper aims to predict Peru's economic growth using fiscal policy variables. For this purpose, open data from the Central Reserve Bank of Peru were used; after data preprocessing, eight machine learning models were evaluated using Python in Google Colab. Metrics such as Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Square Error (MSE), and Coefficient of Determination (R²) were used to measure their performance. In addition, SHapley Additive exPlanations (SHAP) was applied to interpret the importance of macroeconomic variables. The results show that the K-Nearest Neighbors (KNN) model obtained the best performance, with an R² of 0.972 and low prediction errors. Likewise, important fiscal policy variables such as Net Debt, Liabilities, and Interest on External Debt were identified. In conclusion, the study shows that KNN and Ensemble Bagging are highly effective models for predicting Peru's economic growth.
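As a sketch of the best-performing approach, a minimal K-Nearest Neighbors regressor and the R² metric can be written from scratch (the data here are toy values, not the BCRP series):

```python
import math

def knn_predict(X_train, y_train, x, k=3):
    # average the targets of the k nearest training points
    order = sorted(range(len(X_train)),
                   key=lambda i: math.dist(X_train[i], x))
    return sum(y_train[i] for i in order[:k]) / k

def r2(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

X = [[0.0], [1.0], [2.0], [3.0]]   # toy fiscal indicator values
y = [0.0, 1.0, 2.0, 3.0]           # toy growth values
pred = knn_predict(X, y, [1.5], k=2)
```

In practice the paper evaluates library implementations; the from-scratch version only illustrates the mechanics of distance-weighted neighborhood averaging.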

Author 1: Fidel Huanco Ramos
Author 2: Yesenia Valentin Ccori
Author 3: Henry Shuta Lloclla
Author 4: Martha Yucra Sotomayor
Author 5: Ilda Mamani Uchasara

Keywords: Machine learning; predictive models; fiscal policy; economic growth

PDF

Paper 6: Evaluating User Acceptance and Usability of AR-Based Indoor Navigation in a University Setting: An Empirical Study

Abstract: This paper presents the development and usability evaluation of a mobile augmented reality (AR) application designed to support indoor navigation within a higher education setting. The system offers real-time visual and audio guidance without requiring additional infrastructure, leveraging spatial anchors, QR code initialization, and compatibility with both ARCore and ARKit platforms. Users can select destinations such as classrooms, offices, and restrooms, and follow augmented reality overlays to reach them efficiently. A review of existing AR navigation systems highlights current technological approaches and gaps in user-centered research, particularly within academic institutions. Building on these findings, the proposed application was tested in a large-scale empirical study involving 256 students, situated in the context of spatial computing within a university environment. Data collection was based on the System Usability Scale and the Technology Acceptance Model, with four research hypotheses examining ease of use, usefulness, system responsiveness, and continued usage intention. Results revealed significant correlations between intuitive design and usability scores, as well as between perceived usefulness and behavioral intention to reuse the application. These findings reinforce the value of user-centered design in developing infrastructure-free mobile AR systems and demonstrate their potential to improve spatial orientation in complex educational buildings.
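The System Usability Scale mentioned above has a standard scoring rule (odd-numbered items are positively worded, even-numbered items negatively; item contributions are summed and scaled by 2.5), which can be computed as:

```python
def sus_score(responses):
    # ten 1-5 Likert answers in the standard SUS item order:
    # odd-numbered items (index 0, 2, ...) are positive, even negative
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5   # final score on a 0-100 scale

best = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])   # 100.0
neutral = sus_score([3] * 10)                       # 50.0
```

A score around 68 is conventionally treated as average usability, which is how per-participant SUS results are usually interpreted.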

Author 1: Toma Marian-Vladut
Author 2: Turcu Corneliu Octavian
Author 3: Pascu Paul

Keywords: Augmented reality; indoor navigation; mobile application; usability evaluation; ARCore; higher education; spatial computing

PDF

Paper 7: A Hybrid Length-Based Pattern Matching Algorithm for Text Searching

Abstract: This paper presents a hybrid algorithm for pattern matching in text, which combines word length preprocessing with the Knuth-Morris-Pratt (KMP) algorithm. Its performance was evaluated against KMP and Boyer-Moore (BM) in two scenarios: synthetic texts and real-world texts. In the former, classical algorithms proved more efficient due to the uniform structure of the data. However, in real-world texts, the hybrid algorithm significantly reduced search times, thanks to its ability to filter matches by length patterns before performing character-by-character comparisons. The algorithm also demonstrated flexibility in recognizing patterns with different delimiters. Among its limitations is the difficulty in detecting substrings within longer words. As future work, the incorporation of partial matching techniques and the adaptation of the approach to multilingual environments and machine learning systems are proposed. The dataset used is provided to encourage reproducibility.
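The core idea — filtering candidate positions by word-length signatures before doing character comparisons — can be sketched as follows. The paper uses KMP for the verification step; plain word comparison stands in for it here:

```python
import re

def length_filtered_search(text, pattern):
    # split both text and pattern into words on delimiters
    words = re.findall(r"\w+", text)
    pat = re.findall(r"\w+", pattern)
    sig = [len(w) for w in words]
    psig = [len(w) for w in pat]
    hits = []
    for i in range(len(words) - len(pat) + 1):
        if sig[i:i + len(pat)] == psig:        # cheap length filter
            if words[i:i + len(pat)] == pat:   # full verification
                hits.append(i)                 # match at word index i
    return hits
```

In "the cat sat on the mat", searching for "the mat" compares characters only at the two positions whose length signature is (3, 3) followed by... i.e. matches [3, 3], skipping everything else — which is where the reported speedup on real-world text comes from.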

Author 1: Victor Cornejo-Aparicio
Author 2: Cesar Cuarite-Silva
Author 3: Antoni Benavente-Mayta
Author 4: Karim Guevara

Keywords: Knuth-Morris-Pratt; Boyer-Moore; text search; hybrid algorithm; preprocessing; word-length patterns; test text for experiments

PDF

Paper 8: Pothole Detection: A Study of Ensemble Learning and Decision Framework

Abstract: This study investigates the potential use of ensemble learning (YOLOv9 and Mask R-CNN) and Multi-Criteria Decision Making (MCDM) for a pothole detection system. A series of experiments were conducted, including variations in confidence thresholds, IoU thresholds, dynamic weight configurations, camera angles, and MCDM criteria, to assess their effects on detection performance. The YOLOv9 model achieved a mean Average Precision (mAP) of 0.908 at 0.5 IoU and an F1 score of 0.58 at a confidence threshold of 0.282, indicating a strong balance between precision and recall. However, adjusting IoU thresholds showed that lower thresholds improved recall but produced false positives, while higher thresholds improved precision but reduced recall. Dynamic weight configurations were explored, with balanced weights (wY = 0.5, wM = 0.5) yielding the best overall performance, while uneven weights allowed trade-offs between precision and recall based on specific application needs. The MCDM framework refined detection outputs by evaluating pothole features such as size, position, depth, and shape. The proposed algorithm has the potential to be widely used in practical applications. Overfitting is its main drawback, although the severity depends on the use case in which the pothole detection will be deployed.
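The abstract does not give the exact fusion rule; a simple weighted combination of the two detectors' confidences, using the balanced weights and the reported confidence threshold, might look like:

```python
def fuse(score_yolo, score_mask, w_yolo=0.5, w_mask=0.5, thresh=0.282):
    # dynamic weights let the operator trade precision against recall;
    # the fused score is thresholded to decide whether to report
    s = w_yolo * score_yolo + w_mask * score_mask
    return s >= thresh   # True -> report a pothole detection
```

Shifting weight toward the detector with higher precision (or recall) reproduces the trade-off behaviour the experiments describe, while the MCDM stage would further filter reported detections by size, position, depth, and shape.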

Author 1: Ken D. Gorro
Author 2: Elmo B. Ranolo
Author 3: Anthony S. Ilano
Author 4: Deofel P. Balijon

Keywords: YOLO; Mask R-CNN; ensemble learning; MCDM

PDF

Paper 9: Approach Detection and Warning Using BLE and Image Recognition at Construction Sites

Abstract: Ensuring the safety of workers in dangerous areas is an important issue at construction sites. In particular, fatal accidents at construction sites often involve falls or traffic accidents, and tend to occur around hazardous areas. In this paper, to prevent such accidents, a proximity detection and warning system based on image recognition and Bluetooth Low Energy (BLE) technology is proposed. A master-slave operation model is adopted: image recognition serves as the main method for detecting workers approaching dangerous areas, while BLE beacons act as an auxiliary to achieve continuous detection even under occlusion conditions. When a worker approaches a dangerous area, a real-time warning is issued via a wireless earphone connected to a smartphone, allowing immediate recognition and response. The system has thus reached the stage of detecting intrusion into dangerous areas. However, some challenges remain. The first is individual re-identification: in order to issue a warning to the relevant worker when an intrusion into a dangerous area is detected, the worker needs to be recognized individually. The second is adapting to changes in the structure of the construction site: since the environment of a construction site changes over time, the appropriate placement of cameras must be considered. Experiments show that the proposed method works well in locating workers approaching and entering dangerous areas. The proposed system also detects intrusion into dangerous areas from a distance of 115 meters and issues a warning to the corresponding workers through bone-conduction wireless earphones.
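The master-slave model described above reduces to a small decision rule: prefer the camera-derived position, fall back to the BLE fix under occlusion, then test against the danger zone. The rectangular zone shape is an assumption for illustration:

```python
def should_warn(camera_fix, ble_fix, danger_zone):
    # master-slave fusion: image recognition is primary,
    # the BLE beacon position is the fallback under occlusion
    pos = camera_fix if camera_fix is not None else ble_fix
    if pos is None:
        return False
    (x0, y0), (x1, y1) = danger_zone        # axis-aligned rectangle
    x, y = pos
    return x0 <= x <= x1 and y0 <= y <= y1  # True -> issue audio warning
```

A real deployment would replace the rectangle with the site's surveyed hazard polygons and route the warning to the re-identified worker's earphone.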

Author 1: Yuya Ifuku
Author 2: Kohei Arai
Author 3: Mariko Oda

Keywords: Construction site; safety management; intrusion detection; object recognition; trajectory tracking; YOLOv8; ByteTrack; BLE Beacon

PDF

Paper 10: Flexible Software Architecture for Genetic Data Processing in Alpaca Breeding Programs

Abstract: Improving alpaca fiber quality is an important objective in the textile industry, and different kinds of techniques aim to enhance breeding outcomes. This study proposes and validates a flexible software architecture for managing genetic information in alpaca breeding, integrating genomic selection methods. The proposed architecture consists of three components: 1) Input—capturing data from individual records, pedigree, phenotypic traits, fiber characteristics, genomic, and non-genomic information; 2) Processing—implementing statistical methods such as BLUP, GBLUP, and SSGBLUP, alongside inbreeding coefficient calculation and machine learning techniques; and 3) Output—generating reports for mating list proposals, estimated breeding values, and genetic evaluations. Designing a software architecture for genetic improvement in alpaca breeding programs could help software developers with maintainability, extensibility, and adaptability, accommodating different kinds of data sources for future advancements in alpaca breeding. This work shows the implementation and validation of software for an alpaca breeding program based on the proposed architecture.
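The three-component architecture (Input → Processing → Output) can be sketched as a minimal pipeline. The record fields and the fiber-diameter ranking below are illustrative assumptions, not the paper's schema:

```python
from dataclasses import dataclass, field

@dataclass
class AlpacaRecord:
    # hypothetical Input component: individual + phenotypic data
    animal_id: str
    fiber_diameter_um: float
    pedigree: list = field(default_factory=list)

def process(records, evaluate):
    # Processing component: `evaluate` is pluggable, standing in
    # for BLUP / GBLUP / SSGBLUP or a machine-learning scorer
    return {r.animal_id: evaluate(r) for r in records}

def mating_list(scores, top=2):
    # Output component: propose the top-ranked animals
    return sorted(scores, key=scores.get)[:top]

herd = [AlpacaRecord("A1", 18.0), AlpacaRecord("A2", 22.0),
        AlpacaRecord("A3", 16.0)]
scores = process(herd, lambda r: r.fiber_diameter_um)  # finer = better
proposal = mating_list(scores)
```

Keeping the evaluation method as an injected function is what gives the architecture the extensibility the study emphasizes: swapping BLUP for GBLUP changes one argument, not the pipeline.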

Author 1: Alfredo Gama-Zapata
Author 2: Fernando Barra-Quipse
Author 3: Elizabeth Vidal

Keywords: Architecture; genomic selection; adaptability

PDF

Paper 11: Method for Providing Exercise Instruction That Allows Immediate Feedback to Trainees

Abstract: A method for providing exercise instruction that gives immediate feedback to trainees is proposed. The purpose of this research is to combine artificial intelligence technology and motion analysis methods to build an effective vocational training support program aimed at supporting the employment of children with disabilities. Specifically, we develop a system that uses Dynamic Time Warping (DTW) to calculate the similarity between the trainee's motion and a model motion, and scores the trainee's performance based on this similarity. This system enables instruction tailored to each child with a disability, and is expected to improve motor skills and promote learning motivation. Furthermore, by providing scored feedback, we aim to improve on traditional evaluation that relies on the subjectivity of the instructor and to provide an intuitive, easy-to-understand means for trainees to confirm their results. In this research, we use skeletal detection technology to record the trainee's three-dimensional coordinate data and perform quantitative evaluation. In addition, we design a program that allows trainees to visually check their own progress through a motion evaluation function and maximize the learning effect. Through experiments, it is found that the proposed method works for motion training in support of the employment of children with disabilities. It is also found that immediate feedback is better than conventional delayed feedback.
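The DTW similarity computation described above is a standard dynamic program; a minimal 1-D version, with an assumed (illustrative) mapping from distance to a 0-100 score, looks like:

```python
def dtw(a, b):
    # classic dynamic-programming DTW over two 1-D sequences
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def score(trainee, model, scale=10.0):
    # map DTW distance to an easy-to-read score (assumed mapping;
    # the paper's exact scoring function is not given)
    return max(0.0, 100.0 - scale * dtw(trainee, model))
```

Because DTW warps the time axis, a trainee who performs the motion correctly but slower than the model still aligns with zero cost, which is exactly the property that makes it suitable for motion comparison.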

Author 1: Kohei Arai
Author 2: Kosuke Eto
Author 3: Mariko Oda

Keywords: Motion training; immediate feedback; DTW (Dynamic Time Warping); children with disabilities; skeletal detection

PDF

Paper 12: Fear of Missing Out (FoMO) and Recommendation Algorithms: Analyzing Their Impact on Repurchase Intentions in Online Marketplaces

Abstract: The rapid growth of e-commerce has intensified consumers' Fear of Missing Out (FoMO), influencing their repurchase intentions. This study aims to examine the impact of online FoMO on repurchase intentions in marketplaces, emphasizing the role of personalized recommendations and promotional strategies. A quantitative approach was employed, collecting data from 300 respondents who actively shop on online marketplaces. The study utilized Structural Equation Modelling (SEM) to analyze the relationships between FoMO, trust, perceived value, and repurchase intentions. The findings reveal that FoMO significantly influences repurchase intentions, both directly and indirectly, through trust and perceived value. Additionally, personalized recommendations and time-limited promotions amplify FoMO, further strengthening consumers' intention to repurchase. These results highlight the necessity for e-commerce platforms to strategically implement AI-driven personalization and gamification elements to optimize customer retention. The study contributes theoretical insights by integrating psychological and technological perspectives in understanding consumer behavior in digital marketplaces. The originality of this research lies in its empirical validation of the FoMO-repurchase intention relationship using SEM, offering novel insights into how marketplace features shape consumer decision-making. Practically, the findings provide actionable strategies for businesses to enhance customer engagement and retention through behavioral-driven marketing approaches.

Author 1: Ati Mustikasari
Author 2: Ratih Hurriyati
Author 3: Puspo Dewi Dirgantari
Author 4: Mokh Adieb Sultan
Author 5: Neng Susi Susilawati Sugiana

Keywords: FoMO; repurchase intentions; online marketplace; SEM; consumer behavior

PDF

Paper 13: A Hybrid SEM-ANN Method for Developing an Information Technology Acceptance and Utilization Model in River Tourism Services

Abstract: Tourism is a vital sector that contributes significantly to Indonesia's economic growth. However, despite its great potential, the sector faces challenges in the application of information technology, as seen in the Go-Klotok application in Banjarmasin City, which has not been well received by tourists. It is therefore important to understand the factors that influence the acceptance of information technology in river tourism, in order to improve the tourist experience and support the growth of the sector. This study aims to develop a model of technology acceptance and utilization in river tourism in South Kalimantan. To that end, it modifies four main models: the Tourism Web Acceptance Model (T-WAM), the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), the E-Tourism Technology Acceptance Model (ETAM), and the DeLone and McLean Model. The research identifies and analyzes various factors that influence technology acceptance in the context of river tourism. The method uses a hybrid SEM-ANN approach, where Partial Least Squares Structural Equation Modeling (PLS-SEM) is used to analyze the relationships between variables, while an Artificial Neural Network (ANN) captures more complex data patterns. Data analysis was carried out with the SmartPLS and IBM SPSS Statistics 27 applications. The study proposed 14 hypotheses, of which 9 were accepted. The results of the analysis of 471 respondents show that Social Influence, Perceived Benefits, and Information Quality significantly influence user intention to use information technology services, with Social Influence as the most dominant factor.

Author 1: Mutia Maulida
Author 2: Iphan Fitrian Radam
Author 3: Nurul Fathanah Mustamin
Author 4: Yuslena Sari
Author 5: Andreyan Rizky Baskara
Author 6: Eka Setya Wijaya
Author 7: Muhammad Alkaff
Author 8: M. Renald Abdi

Keywords: River tourism; technology acceptance; TWAM; E-TAM; hybrid SEM-ANN

PDF

Paper 14: Mitigating Catastrophic Forgetting in Continual Learning Using the Gradient-Based Approach: A Literature Review

Abstract: Continual learning, also referred to as lifelong learning, has emerged as a significant advancement for model adaptation and generalization in deep learning, with the capability to train models sequentially on a continuous stream of data across multiple tasks while retaining previously acquired knowledge. It is used to build powerful deep learning models that efficiently adapt to dynamic environments and fast-shifting preferences while making economical use of computational and memory resources, and it ensures scalability by acquiring new skills over time. Continual learning enables models to train incrementally on an ongoing stream of data, learning new data as it arrives while preserving old experience; this eliminates the need to combine new data with old data and retrain from scratch, saving time, resources, and effort. Despite these advantages, continual learning still faces a significant challenge known as catastrophic forgetting: a phenomenon in which a model forgets previously learned knowledge when trained on new tasks, making it difficult to preserve performance on earlier tasks while learning new ones. Catastrophic forgetting is a central obstacle to advancing the field, as it undermines continual learning's main goal of maintaining long-term performance across all encountered tasks. Several studies have therefore been proposed recently to mitigate catastrophic forgetting and unlock the full potential of continual learning. This research provides a detailed and comprehensive review of one of the state-of-the-art families of approaches to mitigating catastrophic forgetting: the gradient-based approach. Furthermore, a performance evaluation is conducted for recent gradient-based models, including their limitations and promising directions for future research.
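One concrete instance of the gradient-based family is A-GEM-style gradient projection: when the new task's gradient conflicts with a reference gradient computed on stored old-task examples (negative dot product), the conflicting component is removed before the update. A minimal sketch over plain Python lists:

```python
def project_gradient(g_new, g_old):
    # if the new task's gradient points against the old task's
    # gradient (negative dot product), project out the conflict
    dot = sum(a * b for a, b in zip(g_new, g_old))
    if dot >= 0:
        return list(g_new)          # no conflict: use as-is
    norm2 = sum(b * b for b in g_old)
    return [a - (dot / norm2) * b   # g_new - (g_new.g_old/|g_old|^2) g_old
            for a, b in zip(g_new, g_old)]
```

After projection, the update direction is orthogonal to (or aligned with) the old task's gradient, so a gradient step no longer increases the old task's loss to first order — which is the mechanism this class of methods relies on.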

Author 1: Haitham Ghallab
Author 2: Mona Nasr
Author 3: Hanan Fahmy

Keywords: Deep learning; continual learning; model adaptation and generalization; catastrophic forgetting; gradient-based approach

PDF

Paper 15: IoT-Enabled Waste Management in Smart Cities: A Systematic Literature Review

Abstract: The growing population of cities has increased the pressure on waste management systems; therefore, new and better approaches are needed. This paper aims to present the theoretical underpinning of the application of Internet of Things (IoT) technologies in the improvement of waste collection in smart cities. To this end, it reviews the latest trends, methodologies, and technologies from a vast collection of peer-reviewed papers published between 2018 and 2024. The areas of focus include real-time monitoring systems, predictive analytics, and optimization algorithms that have created new norms in traditional waste management. The review discusses the novel concepts of IoT-based smart bins, dynamic waste collection routing, and data-based decision-making frameworks, which yield significant environmental and economic benefits. According to established studies, reported outcomes include reduced overflow and manual labor costs, improved routing efficiency, enhanced recycling processes, optimized bin placement, and increased energy savings. Across a variety of cities, reports comparing pre-IoT operations with IoT-enhanced ones have found remarkable decreases in operating costs, better resource allocation, and overall sustainability performance improvements. However, challenges in data security, interoperability, and scalability still exist, highlighting the need for standardized frameworks and policies. This review contributes to the existing body of knowledge by identifying research gaps and proposing directions for future work. It emphasizes the importance of hybrid approaches combining IoT with emerging technologies such as artificial intelligence and blockchain to address the limitations of current systems. The findings offer valuable insights for policymakers, urban planners, and researchers aiming to foster sustainable and smart urban ecosystems.
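The dynamic, fill-level-driven collection routing the review describes can be sketched with a nearest-neighbor heuristic: only bins whose sensors report a fill level above a threshold are visited, in greedy distance order. The threshold and coordinates are illustrative assumptions:

```python
import math

def greedy_route(depot, bins, fill, threshold=0.7):
    # visit only bins reported (by their IoT sensors) as nearly full,
    # choosing the nearest unvisited bin at each step
    todo = [b for b, f in zip(bins, fill) if f >= threshold]
    route, pos = [], depot
    while todo:
        nxt = min(todo, key=lambda b: math.dist(pos, b))
        route.append(nxt)
        todo.remove(nxt)
        pos = nxt
    return route
```

Skipping half-empty bins entirely is the main source of the reduced routing cost reported in the surveyed studies; production systems replace the greedy step with proper vehicle-routing solvers.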

Author 1: Moulay Lakbir Tahiri Alaoui
Author 2: Meryam Belhiah
Author 3: Soumia Ziti

Keywords: Waste management; smart cities; Internet of Things (IoT); smart bins; urban planning

PDF

Paper 16: Wireless Internet of Things System Optimization Based on Clustering Algorithm in Big Data Mining

Abstract: The rapid development of the Internet of Things (IoT) has highlighted the importance of Wi-Fi sensor networks in efficiently collecting data anytime and anywhere. This paper proposes an optimized routing protocol that significantly reduces power consumption in IoT systems based on clustering algorithms. The paper begins by introducing the architecture of Wi-Fi sensor networks, sensor nodes, and the key technologies needed for implementation, and distinguishes between cluster-based and planar protocols, noting the advantages of each. The proposed protocol, DKBDCERP (Dual-layer K-means and Density-based Clustering Energy-efficient Routing Protocol), utilizes a two-layer clustering approach: in the first layer, nodes are clustered based on density, while in the second layer, first-level cluster heads are further grouped using the K-Means algorithm. This dual-layer structure balances the responsibilities of cluster heads, ensuring a more efficient distribution of data reception, fusion, and forwarding tasks across different levels. Simulation results demonstrate that the DKBDCERP protocol achieves optimal performance, with the smallest curve value and the most stable amplitude. It significantly reduces energy consumption, with the total cluster-head power consumption recorded at 0.1 J and a variance of 0.1×10⁻⁴. The introduction of two election modes during the clustering stage and the adoption of a centralized control mechanism further reduce broadcast energy loss. By leveraging clustering algorithms and a routing protocol optimized through big data mining techniques, DKBDCERP significantly reduces energy consumption while maintaining communication stability in large-scale wireless IoT systems, offering a novel solution and valuable insights for future IoT applications.
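The second-layer grouping of first-level cluster heads uses K-Means; a plain Lloyd's iteration over 2-D head positions looks like this (the density-based first layer is assumed to have already produced the head positions):

```python
import math, random

def kmeans(points, k, iters=20, seed=0):
    # plain Lloyd's k-means over 2-D points (e.g. cluster-head positions)
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)          # assign to nearest center
        centers = [                      # recompute centroids
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups
```

Each resulting group elects a second-level head near its centroid, which is how the protocol spreads reception, fusion, and forwarding duties across levels.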

Author 1: Jing Guo

Keywords: Wireless sensor; network routing protocol; clustering algorithm; two-layer clustering; Internet of Things

PDF

Paper 17: Hybrid-Optimized Model for Deepfake Detection

Abstract: The advancement of deep learning models has led to the creation of novel techniques for image and video synthesis. One such technique is the deepfake, which swaps faces between persons to produce hyper-realistic videos of individuals saying or doing things that they never said or did. These deepfake videos pose a serious risk to individuals and countries if they are exploited for extortion, scamming, political disinformation, or identity theft. This work presents a new methodology based on a hybrid-optimized model for detecting deepfake videos. A Mask Region-based Convolutional Neural Network (Mask R-CNN) is employed to detect human faces in video frames. Then, the optimal bounding box representing the face region per frame is selected, which helps to uncover many artifacts. An improved Xception network is proposed to extract informative, deep hierarchical representations of the produced face frames. The Bayesian optimization (BO) algorithm is employed to search for the optimal hyperparameter values of the extreme gradient boosting (XGBoost) classifier so that it properly discriminates deepfake videos from genuine ones. The proposed method is trained and validated on two different datasets, CelebDF-FaceForensics++ (c23) and FakeAVCeleb, and tested on various datasets: CelebDF, DeepfakeTIMIT, and FakeAVCeleb. The experimental study proves the superiority of the proposed method over state-of-the-art methods. The proposed method yielded 97.88% accuracy and 97.65% AUROC when trained on CelebDF-FaceForensics++ (c23) and tested on CelebDF. Additionally, it achieved 98.44% accuracy and 98.44% AUROC when trained on CelebDF-FaceForensics++ (c23) and tested on DeepfakeTIMIT. Moreover, it yielded 99.50% accuracy and 99.21% AUROC on the FakeAVCeleb visual dataset.

Author 1: H. Mancy
Author 2: Marwa Elpeltagy
Author 3: Kamal Eldahshan
Author 4: Aya Ismail

Keywords: Bayesian optimization; deepfake detection; deepfake videos; Mask R-CNN; Xception network; XGBoost

PDF

Paper 18: Enhancing Usability and Cognitive Engagement in Elderly Products Through Brain-Computer Interface Technologies

Abstract: This study addresses the limitations of traditional elderly care products in terms of intelligence and user experience by integrating human-computer interaction (HCI) principles into a product design framework for the elderly. It explores the importance of feature extraction in human-computer interaction systems, emphasizes its key role in enhancing user adaptability and interaction efficiency, and deeply analyzes its impact on brain-computer interface (BCI) technology. At the same time, the study conducts simulation experiments to evaluate the effectiveness of various algorithms in processing two types of motor imagery tasks. Finally, the obtained results provide a comparative evaluation of the algorithms and highlight their respective strengths and limitations.

Author 1: Daijiao Shi
Author 2: Chao Jiang
Author 3: Chenhan Huang

Keywords: Big data; human-computer interaction; the elderly; product design

PDF

Paper 19: Analyzing RGB and HSV Color Spaces for Non-Invasive Blood Glucose Level Estimation Using Fingertip Imaging

Abstract: Traditional blood glucose measurement methods, including finger-prick tests and intravenous sampling, are invasive and can cause discomfort, leading to reduced adherence and stress. Non-invasive blood glucose level (BGL) estimation addresses these issues effectively. The proposed study focuses on estimating BGL using the “Red-Green-Blue (RGB)” and “Hue-Saturation-Value (HSV)” color spaces by analyzing fingertip videos captured with a smartphone camera. The goal is to enhance BGL prediction accuracy through accessible, portable devices, using a novel fingertip video database from 234 subjects. Videos recorded in the “RGB color space” using a smartphone camera were converted into the “HSV color space”. The “R channel” from “RGB” and the “Hue channel” from “HSV” were used to generate photoplethysmography (PPG) waves, and additional features like age, gender, and BMI were included to improve predictive accuracy. To enhance the precision of blood glucose estimation, the Genetic Algorithm (GA) was used to identify the most significant and optimal features from the large feature set. The “XGBoost”, “CatBoost”, “Random Forest Regression (RFR)”, and “Gradient Boosting Regression (GBR)” algorithms were applied for BGL prediction. Among them, “XGBoost” yielded the best results, with an R² value of 0.89 in the “RGB color space” and 0.84 in the “HSV color space”, showcasing its superior predictive ability. The experimental outcomes were assessed using “Clarke error grid analysis” and a “Bland-Altman plot”. The Bland-Altman analysis showed that only 7.04% of the BGL values fell outside the limits of agreement (±1.96 SD), demonstrating strong agreement with reference values.

Author 1: Asawari Kedar Chinchanikar
Author 2: Manisha P. Dale

Keywords: Blood glucose; photoplethysmography; non-invasive; genetic algorithm; XGBoost; RGB; HSV

PDF
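
The GA-based feature selection step described in the abstract above can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the feature count, the set of "informative" features, and the toy fitness function (a stand-in for model accuracy minus a subset-size penalty) are all hypothetical.

```python
import random

random.seed(42)

N_FEATURES = 10          # toy feature count; the study derives many PPG-based features
RELEVANT = {0, 2, 5}     # hypothetical "truly informative" features for the toy fitness

def fitness(mask):
    """Stand-in for predictive accuracy: reward selecting relevant features,
    penalize subset size (a simple parsimony term)."""
    selected = {i for i, bit in enumerate(mask) if bit}
    return len(selected & RELEVANT) - 0.05 * len(selected)

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)        # single-point crossover
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in mask]  # flip bits at `rate`

def ga_select(pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]             # keep the fitter half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = ga_select()
print("selected features:", [i for i, bit in enumerate(best) if bit])
```

In the actual study, the fitness evaluation would instead train a regressor (e.g. XGBoost) on the candidate feature subset and score its predictive accuracy.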

Paper 20: Machine Learning Advances in Technology Applications: Cultural Heritage Tourism Trends in Experience Design

Abstract: This study investigates the evolving trends in cultural heritage tourism experience design and examines how machine learning technologies are being applied to enhance visitor engagement and heritage preservation. Using bibliometric data from the Web of Science (WoS) and visualization tools such as VOSviewer, the research identifies key themes, author collaborations, and keyword clusters from 2016 to 2025. The analysis reveals a shift in focus from traditional conservation and display methods to user-centered experiences supported by advanced technologies. Machine learning techniques—such as deep learning, natural language processing, and multimodal data fusion—are increasingly used to personalize tours, analyze tourist behavior, restore damaged artifacts, and improve decision-making in resource management. Tools like CNNs and BERT models enable smart guiding systems and interactive Q&A features, while sentiment analysis enhances feedback mechanisms. The study also highlights several ongoing challenges, including data privacy issues, algorithmic bias, and unequal access to technological infrastructure, especially in developing regions. Ethical considerations and the need for human-centered design principles are emphasized to ensure that technological innovation aligns with cultural values and sustainability goals. In conclusion, this research provides a comprehensive overview of academic progress in cultural heritage tourism and illustrates the growing importance of AI and machine learning in creating immersive, efficient, and culturally respectful tourism experiences. The findings offer practical insights for scholars, heritage site managers, and policymakers seeking to leverage digital tools for both preservation and enhanced visitor satisfaction.

Author 1: Meihua Deng

Keywords: Heritage tourism; tourism experience; machine learning; VOSviewer; bibliometric data

PDF

Paper 21: Netizens as Readers, Producers, and Publishers: Communication Ethics and Challenges in Social Media

Abstract: Social media has fundamentally transformed how people communicate and interact, creating a dynamic landscape where today's internet users assume multifaceted roles as readers, producers of text (messages), and publishers of their own content. This evolution empowers individuals to consume information and generate it, offer commentary, and share it widely across platforms. However, this shift brings forth significant ethical considerations that warrant critical examination. This research analyzes the complex issues and challenges surrounding the ethics of social media communication. It emphasizes the urgent need for individuals and society to address these challenges ethically and responsibly in an era where misinformation can spread rapidly, influencing public opinion and societal norms. The research employs a descriptive qualitative method that includes observation of netizen comments on YouTube cases related to corruption and immorality alongside an online questionnaire distributed among social media users. The study draws from two primary data sources: first, netizen comments on various YouTube videos addressing corruption; second, responses from 1,061 participants who completed the online questionnaire. Findings reveal that active participation by netizens enables them to engage in diverse forms of communication—expressing critical views, sharing recommendations for positive change, or even disseminating hate speech in reaction to contentious issues like corruption or moral failings. While some netizens utilize respectful language and promote constructive dialogue through engaging content creation, others contribute to a more toxic environment characterized by negativity. This diversity highlights the potential for positive discourse and the risks associated with unchecked expression on social media platforms. 
Ultimately, this research underscores that netizens possess substantial opportunities—and responsibilities—to shape public discourse through their actions as readers, producers, and publishers within this evolving digital ecosystem.

Author 1: Burhanuddin Arafah
Author 2: Muhammad Hasyim
Author 3: Herawati Abbas

Keywords: Netizen; communication ethic; challenge; social media

PDF

Paper 22: Meter-YOLOv8n: A Lightweight and Efficient Algorithm for Word-Wheel Water Meter Reading Recognition

Abstract: To address the issues of low efficiency and large parameters in the current word-wheel water meter reading recognition algorithms, this paper proposes a Meter-YOLOv8n algorithm based on YOLOv8n. Firstly, the C2f component of YOLOv8n is improved by introducing an enhanced inverted residual mobile block (iRMB). It enables the model to efficiently capture global features and fully extract the key information of the water meter characters. Secondly, the Slim-Neck feature fusion structure is employed in the neck network. By replacing the original convolutional kernels with GSConv, the model's ability to express the features of small object characters is enhanced, and the number of parameters in the model is reduced. Finally, Inner-EIoU is used to optimize the bounding box loss function. This simplifies the calculation process of the loss function and improves the model's ability to locate dense bounding boxes. The experimental results show that, compared with the original model, the precision, recall, mAP@0.5, and mAP@0.5:0.95 of the improved model have increased by 1.7%, 1.2%, 3.4%, and 3.3% respectively. Meanwhile, the parameters, FLOPs, and model size have decreased by 0.56M, 2.6G, and 0.7MB respectively. The improved model can better balance the relationship between detection performance and computational complexity. It is suitable for the task of recognizing word-wheel water meter readings and has practical application value.

Author 1: Shichao Qiao
Author 2: Yuying Yuan
Author 3: Ruijie Qi

Keywords: Word-wheel water meter; YOLOv8n; global features; slim-neck; loss function

PDF

Paper 23: Optimization Design of Robot Grasping Based on Lightweight YOLOv6 and Multidimensional Attention

Abstract: To address the computational redundancy and robustness limitations of industrial grasping models in complex environments, this study proposes a lightweight capture detection framework integrating Mobile Vision Transformer (MobileViT) and You Only Look Once version 6 (YOLOv6). Three innovations are developed: 1) A cascaded architecture fusing convolution and Transformer to compress parameters; 2) A multidimensional attention mechanism combining channel-pixel dual enhancement; 3) A Pixel Shuffle-Receptive Field Block (PixShuffle-RFB) decoder enabling sub-pixel localization. Experiments demonstrate that the model achieves 0.88 detection accuracy with 66 Frames Per Second (FPS) in simulations and 90.04% grasping success rate in physical tests. The lightweight design reduces computational costs by 37% versus conventional models while maintaining 93.54% segmentation efficiency (2.85 milliseconds inference). This multidimensional attention-driven approach effectively improves industrial robot adaptability, advancing capture detection applications in high-noise manufacturing scenarios.

Author 1: Junyan Niu
Author 2: Guanfang Liu

Keywords: Capture detection; YOLOv6; multidimensional attention; MobileViT; industrial robot; lightweight

PDF

Paper 24: Intellectual Property Protection in the Age of AI: From Perspective of Deep Learning Models

Abstract: The rapid development of Artificial Intelligence (AI), especially Deep Learning (DL) technologies, has brought unprecedented challenges and opportunities for Intellectual Property (IP) protection and management. In this paper, we employ Bibliometrix and Biblioshiny to conduct a bibliometric analysis of global research at the intersection of AI-driven innovation and IP frameworks over the past decade. The findings reveal a significant annual growth rate of 15.34 per cent in publications, with an average of 5.82 citations per study, reflecting increasing academic interest. China, the United States, and India dominate the research output, but the cross-country collaboration rate is only 10.74 per cent, indicating that there is still room for improvement in global collaborative research. The current major research groups in the field, as well as different research themes, are identified through collaborative network and thematic analyses, respectively. Although the field has achieved remarkable results in technological innovation, the deep integration of legal, economic and ethical dimensions is still at an early stage. The study highlights the urgent need for interdisciplinary collaboration and enhanced international cooperation to address pressing issues such as AI-generated content (AIGC) attribution, legal applicability, and the societal impact of DL technologies in IP protection. These findings aim to support academia and industry in clarifying ownership and promoting synergistic innovation in the AI era.

Author 1: Jing Li
Author 2: Quanwei Huang

Keywords: Intellectual property; Artificial Intelligence; Deep Learning; Natural Language Processing; neural network; legal applicability

PDF

Paper 25: Photovoltaic Fault Detection in Remote Areas Using Fuzzy-Based Multiple Linear Regression (FMLR)

Abstract: This research focused on developing and implementing a fault detection model for photovoltaic (PV) systems in remote areas, utilizing a Fuzzy-Based Multiple Linear Regression (FMLR) approach. The study aimed to address the challenges of monitoring PV systems in locations with limited access to conventional power grids and technical resources. The fault detection system integrated environmental parameters such as solar radiation, temperature, wind speed, and rainfall, alongside PV system parameters like panel voltage, current, battery voltage, and inverter performance. Data collection and preprocessing were conducted over a specified period to identify operational patterns under both normal and faulty conditions, ensuring data accuracy through cleaning, normalization, and categorization. The research was conducted in Pandan Arang Village, Kandis District, Ogan Ilir Regency, South Sumatera, Indonesia, contributing to the improvement of reliability and sustainability of renewable energy sources in isolated communities. The dataset comprised 276 rows with 6 attributes each, for a total of 1,656 data points. The MLR model was developed to predict the output power of the PV system, while fuzzy logic was employed to handle uncertainties in the data, offering a more flexible and adaptive decision-making process. The system applied fuzzy rules to determine the charging status (P3), categorizing it into Optimal Charging, Adjusted Charging, Charging Delay, or Fault Alert. The model was tested with real-time data, and its performance was validated through comparison with manual inspections. The results showed that the FMLR-based fault detection system effectively identified faults and optimized the performance of the PV system, making it suitable for remote areas in South Sumatera.

Author 1: Feby Ardianto
Author 2: Ermatita Ermatita
Author 3: Armin Sofijan

Keywords: Photovoltaic; multiple linear regression; fuzzy; fault detection; remote areas

PDF
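
The fuzzy-rule step that assigns a charging status (P3) can be sketched as below. The membership-function shapes and breakpoints are invented for illustration only; the study's actual rules operate on the FMLR model's outputs and measured PV parameters.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_charging(deviation):
    """Map a hypothetical predicted-vs-measured power deviation (%) to a status
    by taking the category with the highest membership grade."""
    grades = {
        "Optimal Charging":  tri(deviation, -10, 0, 10),
        "Adjusted Charging": tri(deviation, 5, 15, 25),
        "Charging Delay":    tri(deviation, 20, 30, 40),
        "Fault Alert":       tri(deviation, 35, 60, 200),
    }
    return max(grades, key=grades.get)

print(classify_charging(3))    # small deviation -> Optimal Charging
print(classify_charging(80))   # large deviation -> Fault Alert
```

A real FMLR pipeline would fuzzify several inputs (irradiance, battery voltage, MLR residual) and combine them with a rule base, rather than a single deviation value.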

Paper 26: Path Planning Technology for Unmanned Aerial Vehicle Swarm Based on Improved Jump Point Algorithm

Abstract: Multi-unmanned aerial vehicle path planning encounters challenges with effective obstacle avoidance and collaborative operation. The study proposes a swarm planning technique for unmanned aerial vehicles, based on an improved jump point algorithm. It introduces a geometric collision detection strategy to optimize path search and employs the dynamic window method to constrain the flight range. Additionally, the study presents conflict avoidance strategies for multi-unmanned aerial vehicle path planning and establishes collision fields for unmanned aerial vehicles to achieve collaborative path planning. In single unmanned aerial vehicle path planning, the research model exhibits the lowest control errors in the X, Y, and Z axes, with the Y-axis error being 0.05m. In static planning, the model achieves the shortest planning time and path length, at 1002ms and 17.85m in multi-obstacle planning, respectively. In multi-unmanned aerial vehicle path planning, the research model effectively avoids local optimum problems in local conflict scenarios and re-plans the route. During testing on a 29m×29m grid map, the research technology successfully avoids obstacles and re-plans routes. However, obstacles of similar geometry can cause interference and trap the algorithm in local convergence, preventing re-planning. The research technology demonstrates good application effects in the path planning of unmanned aerial vehicle swarms and will provide technical support for multi-machine collaborative path planning.

Author 1: Haizhou Zhang
Author 2: Shengnan Xu

Keywords: Unmanned aerial vehicle swarm; path planning; jump point search algorithm; geometric collision detection; dynamic window method

PDF

Paper 27: AHP and Fuzzy Evaluation Methods for Improving Cangzhou Honey Date Supplier Performance Management

Abstract: This study focuses on improving supplier performance management within the Cangzhou honey date industry by integrating the Analytic Hierarchy Process (AHP) and fuzzy evaluation methods. Recognizing the limitations of traditional evaluation systems—such as subjectivity and insufficient quantitative analysis—the research aims to build a comprehensive, data-driven evaluation framework. The methodology involves constructing a supplier performance index system based on five key dimensions: quality, cost, delivery, service, and social responsibility. Using the AHP method, expert opinions are quantified to determine the weight of each indicator. Subsequently, fuzzy evaluation is employed to transform qualitative judgments into numerical scores, enabling more objective assessment. Five major suppliers are evaluated empirically, and statistical methods such as ANOVA and cluster analysis are used to identify performance differences and classify suppliers into performance tiers. The results indicate that Supplier A excels in quality and service, Supplier B leads in delivery performance, while Suppliers C and E require significant improvements. Correlation analysis reveals strong links between supplier performance and key operational metrics such as product defect rates, procurement costs, and customer satisfaction. Based on these findings, the study proposes targeted improvement strategies including the adoption of Six Sigma practices, implementation of vendor-managed inventory (VMI) and just-in-time (JIT) models, and enhanced performance-based incentive mechanisms. The research confirms the effectiveness of combining AHP and fuzzy methods in supplier evaluation and provides actionable insights for improving supply chain efficiency, resilience, and competitiveness. It also suggests that future studies should incorporate larger datasets and intelligent algorithms to refine evaluation accuracy and operational decision-making.

Author 1: Zhixin Wei

Keywords: AHP; fuzzy evaluation method; supplier performance; Cangzhou honey date; supply chain management

PDF
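
The AHP weighting step can be sketched with the common geometric-mean approximation of the priority vector. The 3×3 pairwise comparison matrix below is a made-up example; the study's index system has five dimensions and expert-elicited judgments.

```python
from math import prod

# Hypothetical pairwise comparisons for three criteria (e.g. quality, cost, delivery):
# M[i][j] = how much more important criterion i is than criterion j (Saaty scale).
M = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def ahp_weights(matrix):
    """Approximate the AHP priority vector via the geometric-mean method:
    take the geometric mean of each row, then normalize to sum to 1."""
    n = len(matrix)
    gm = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

w = ahp_weights(M)
print([round(x, 3) for x in w])
```

Here the first criterion dominates the comparisons, so it receives the largest weight. A full AHP application would also compute the consistency ratio to check that the expert judgments are coherent.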

Paper 28: Air Quality Assessment Based on CNN-Transformer Hybrid Architecture

Abstract: Air quality assessment plays a crucial role in environmental governance and public health decision-making. Traditional assessment methods have limitations in handling multi-source heterogeneous data and complex nonlinear relationships. This paper proposes an air quality assessment model based on a CNN-Transformer hybrid architecture, which achieves end-to-end prediction by integrating CNN's local feature extraction capability with Transformer's advantage in modeling global dependencies. The model employs a three-layer CNN for local feature learning, combined with Transformer's multi-head self-attention mechanism to capture long-range dependencies, and uses multilayer perceptrons for final prediction. Experiments on public datasets demonstrate that compared to traditional machine learning methods and single deep learning models, the proposed hybrid architecture achieves a 10.2% improvement in Root Mean Square Error (RMSE) and a 0.57 percentage-point improvement in the coefficient of determination (R²). Through systematic ablation experiments, we verify the necessity of each model component, particularly the importance of the CNN-Transformer hybrid architecture, positional encoding mechanism, and multi-layer network structure in enhancing prediction performance. The research results provide an effective deep learning solution for air quality assessment.

Author 1: Yuchen Zhang
Author 2: Rajermani Thinakaran

Keywords: Air quality assessment; deep learning; CNN-Transformer hybrid architecture; feature extraction

PDF

Paper 29: A Novel Multitasking Framework for Feature Selection in Road Accident Severity Analysis

Abstract: In machine learning studies, feature selection is a crucial step, especially when handling complex and imbalanced datasets such as those used in road traffic injury analysis. This study proposes a novel multitasking feature selection methodology that integrates the Grey Wolf Optimizer, knowledge transfer, and the CatBoost ensemble algorithm to enhance the performance and interpretability of road accident severity prediction. The main objective of this study is to identify critical features impacting the prediction of severe injury cases in road accidents. The proposed framework integrates several steps to handle the complexities related to feature selection. The fitness function of the Grey Wolf Optimizer model is designed to prioritize the classification accuracy of the severe injury class. To mitigate early convergence of the model, a knowledge transfer mechanism that generates new wolf instances from a historical record of previously used wolves is integrated within a multitasking process. To evaluate the prediction performance of the generated feature subsets, the CatBoost algorithm is employed in the evaluation step to assess the effectiveness of the proposed approach. By integrating this three-step methodology, which combines a metaheuristic feature selection technique with knowledge transfer through a multitasking process, the proposed framework enhances generalization, reduces prediction model complexity, and handles imbalanced distributions. It yields a feature selection model that overcomes key limitations of traditional methods. Applied to real-world road crash data, the methodology significantly improves the identification of factors impacting the severity of injuries. Experimental results demonstrate enhanced model performance, reduced complexity, and deeper insights into the factors contributing to traffic injuries.
These findings highlight the potential of advanced machine learning techniques in improving road safety analysis and supporting data-driven decision-making.

Author 1: Soumaya AMRI
Author 2: Mohammed AL ACHHAB
Author 3: Mohamed LAZAAR

Keywords: Feature selection; road accident; injury severity; Grey Wolf Optimizer; multitasking; knowledge transfer

PDF
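
For illustration, a minimal continuous Grey Wolf Optimizer is sketched below on a toy sphere function. The study uses a binary GWO variant with a classification-oriented fitness and knowledge transfer, none of which is reproduced here; keeping the three leader wolves unchanged each iteration is a small elitist simplification of the standard update.

```python
import random

random.seed(0)

def sphere(x):
    """Toy objective: minimum 0 at the origin."""
    return sum(v * v for v in x)

def gwo(obj, dim=5, n_wolves=15, iters=100, lo=-5.0, hi=5.0):
    wolves = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=obj)                       # best wolves first
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1 - t / iters)                  # control parameter decays 2 -> 0
        for i in range(3, n_wolves):               # leaders kept as-is (elitism)
            new = []
            for d in range(dim):
                pos = 0.0
                for leader in (alpha[d], beta[d], delta[d]):
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a             # encircling coefficient
                    C = 2 * r2
                    pos += leader - A * abs(C * leader - wolves[i][d])
                new.append(min(hi, max(lo, pos / 3)))  # average of the three pulls
            wolves[i] = new
    return min(wolves, key=obj)

best = gwo(sphere)
print("best objective:", sphere(best))
```

For feature selection, each dimension would be thresholded into a 0/1 inclusion bit and the objective replaced by a classifier's (e.g. CatBoost's) severe-class accuracy.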

Paper 30: Assessment of Remote Sensing Image Quality and its Application Due to Off-Nadir Imaging Acquisition

Abstract: One advantage of using microsatellites for remote sensing is their maneuverability, so that the target area can be captured from any viewing angle based on specific needs. However, an image captured under off-nadir acquisition will have reduced quality in both geometric and radiometric aspects. This research aims to determine the effect of off-nadir acquisition on remote sensing image quality in general, and on the accuracy of land use land cover (LULC) applications in particular, based on LAPAN-A3 microsatellite image data. Nadir and off-nadir images of the same target, captured several days or weeks apart, are compared to the nearest Landsat-8 image data. Based on the several target images used in this research, the imaging viewing angle indeed affects the quality of the remote sensing images, both in general image quality and in land use land cover application accuracy. However, the degradation of LULC accuracy can be considered acceptable; in general, it can be modeled as -0.5 percent per degree, i.e., an image taken at 20 degrees off-nadir will have roughly 10 percent reduced accuracy. This result shows that the off-nadir microsatellite imaging technique can be used for specific remote sensing needs without excessively compromising quality.

Author 1: Agus Herawan
Author 2: Patria Rachman Hakim
Author 3: Ega Asti Anggari
Author 4: Agung Wahyudiono
Author 5: Mohammad Mukhayadi
Author 6: M. Arif Saifudin
Author 7: Chusnul Tri Judianto
Author 8: Elvira Rachim
Author 9: Ahmad Maryanto
Author 10: Satriya Utama
Author 11: Rommy Hartono
Author 12: Atriyon Julzarika
Author 13: Rizatus Shofiyati

Keywords: Land cover; land use; LAPAN-A3; microsatellite; off-nadir; revisit time

PDF

Paper 31: High-Precision Urban Air Quality Prediction Using a LSTM-Transformer Hybrid Architecture

Abstract: With the acceleration of urbanization, accurate air quality prediction is crucial for environmental governance and public health risk management. Existing prediction methods still face challenges in handling complex time-series dependencies and multi-scale features. In this paper, a hybrid deep learning architecture (LT-Hybrid) based on LSTM and Transformer is proposed for high-precision air quality prediction. The model captures the long-term dependencies of time-series data through a two-layer LSTM structure, models the complex interactions among different environmental factors using a multi-head self-attention mechanism, and improves the training stability through a combination of residual connections and layer normalization. Experiments on an urban air quality dataset, containing nine dimensions of environmental characteristics such as temperature, humidity, PM2.5, etc., show that the LT-Hybrid model achieves an RMSE of 0.1021 and an R² of 0.9382, reducing prediction errors by 13.0% and 5.1% compared to benchmark models of traditional LSTM and XGBoost, respectively. Accurate prediction of air quality indicators provides timely risk assessment for respiratory diseases and cardiovascular conditions, enabling proactive public health interventions. Through systematic ablation experiments and hyperparameter analysis, the validity of each core component of the model is verified, providing a high-precision prediction scheme for environmental monitoring and health risk assessment.

Author 1: Yiming Liu
Author 2: Mcxin Tee
Author 3: Liangyan Lu
Author 4: Fei Zhou
Author 5: Binggui Lu

Keywords: Air quality; deep learning; LSTM; transformer; multi-head attention mechanism; temporal prediction; health risk

PDF

Paper 32: The Role of Artificial Intelligence in Brand Experience: Shaping Consumer Behavior and Driving Repurchase Decisions

Abstract: The rapid advancement of Artificial Intelligence (AI) has transformed brand experiences, influencing consumer behavior and repurchase decisions in digital marketplaces. This study aims to examine the role of AI in enhancing brand experience and its impact on consumer purchasing behavior, particularly in driving repurchase intentions. A quantitative research approach was employed, involving a sample of 340 online shoppers who have previously engaged with AI-driven brand interactions. Data were collected through a structured questionnaire and analyzed using Structural Equation Modeling (SEM) with AMOS. The findings reveal that AI-powered brand experience significantly affects consumer trust, satisfaction, and emotional engagement, which in turn positively influences repurchase decisions. The study also highlights that personalized AI-driven interactions, such as chatbots, recommendation systems, and predictive analytics, enhance consumer perception of brand value, fostering long-term loyalty. The implications of this research suggest that businesses should leverage AI technologies to create immersive and personalized brand experiences that strengthen customer retention and maximize sales performance. This study contributes to the literature by integrating AI and brand experience within the consumer decision-making framework, offering a novel perspective on AI’s role in shaping repurchase behavior. Future research could explore industry-specific AI applications and their impact on different demographic segments.

Author 1: Ati Mustikasari
Author 2: Ratih Hurriyati
Author 3: Puspo Dewi Dirgantari
Author 4: Mokh Adieb Sultan
Author 5: Neng Susi Susilawati Sugiana

Keywords: Digital marketing; artificial intelligence; brand experience; consumer behavior; repurchase intentions

PDF

Paper 33: Predicting Human Essential Genes Using Deep Learning: MLP with Adaptive Data Balancing

Abstract: Artificial intelligence (AI) has transformed many scientific disciplines, including bioinformatics. Essential gene prediction is one important use of AI in bioinformatics, since it is necessary for understanding the biological pathways needed for cellular survival and for disease diagnosis. Essential genes are fundamental for maintaining cellular life as well as for the survival and reproduction of organisms. Understanding the importance of these genes can help identify the basic needs of organisms, point out genes connected to diseases, and enable the development of new drugs. Traditional methods for identifying these genes are time-consuming and costly, so computational approaches are used as alternatives. Furthermore, using deep learning techniques to overcome the restrictions of traditional machine learning techniques and raise prediction accuracy attracts considerable interest. In this study, a Multi-Layer Perceptron (MLP) model combined with ADASYN (adaptive synthetic sampling) was proposed to handle data imbalance. The model utilizes features from protein-protein interaction networks and from DNA and protein sequences. The model achieved high performance, with a sensitivity of 0.98, overall accuracy of 0.94, and specificity of 0.96, demonstrating its effectiveness in data classification.

Author 1: Ahmed AbdElsalam
Author 2: Mohamed Abdallah
Author 3: Hossam Refaat

Keywords: Artificial intelligence; bioinformatics; deep learning; Multi-Layer Perceptron (MLP); imbalanced-handling techniques; essential gene prediction; sequence characteristics

PDF

Paper 34: Personalized Recommendation for Online News Based on UBCF and IBCF Algorithms

Abstract: With the popularization of the Internet and the widespread use of mobile devices, online news has become one of the main ways for people to obtain information and understand the world. However, the increasing number and variety of news items often leave users struggling to find content of interest. To solve this problem, a personalized recommendation model for online news is designed by combining item-based collaborative filtering (IBCF) and user-based collaborative filtering (UBCF). The experimental results showed that the volunteers' average scores for the model's performance, coverage, and satisfaction indicators were 85, 93, and 86, respectively. The system has high accuracy, low resource consumption, and high user satisfaction, providing a new algorithmic approach for the field of recommendation models. The contribution of this research lies not only in improving the accuracy of recommendations but also in increasing their diversity, effectively addressing the problems of data sparsity and real-time news. By introducing a tag propagation network for clustering analysis of users and items, the recommendation results are further optimized and user satisfaction is improved. In addition, the research realizes efficient data processing and storage through real-time user data collection and distributed data processing technology, which significantly improves the performance and response speed of the system.

Author 1: Wei Shi
Author 2: Yitian Zhang

Keywords: IBCF algorithm; UBCF; collaborative filtering; news recommendations; tag propagation network

PDF
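
The user-based collaborative filtering (UBCF) component can be sketched with a toy rating matrix and cosine similarity. The data and the similarity choice below are illustrative only; the paper additionally combines IBCF and a tag propagation network.

```python
from math import sqrt

# Toy user -> {news item: rating} matrix; missing keys mean "unrated".
ratings = {
    "u1": {"n1": 5, "n2": 3, "n3": 4},
    "u2": {"n1": 4, "n2": 3, "n3": 5, "n4": 4},
    "u3": {"n1": 1, "n2": 5, "n4": 2},
}

def cosine(a, b):
    """Cosine similarity between two sparse rating vectors."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den

def predict(user, item):
    """UBCF prediction: similarity-weighted average of neighbours' ratings."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else 0.0

print("predicted rating of n4 for u1:", round(predict("u1", "n4"), 2))
```

Here u1's prediction for n4 leans toward u2's rating, because u1 and u2 rate the shared items similarly. An IBCF variant would instead compute similarities between item columns and aggregate over the user's own rated items.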

Paper 35: Comparative Analysis of SVM, Naïve Bayes, and Logistic Regression in Detecting IoT Botnet Attacks

Abstract: The rapid proliferation of Internet of Things (IoT) devices has significantly increased the risk of cyberattacks, particularly botnet intrusions, which pose serious security threats to IoT networks. Machine learning-based Intrusion Detection Systems (IDS) have emerged as effective solutions for detecting such attacks. This study presents a comparative analysis of three widely used machine learning classifiers—Support Vector Machine (SVM), Naïve Bayes (NB), and Logistic Regression (LR)—to assess their performance in detecting IoT botnet attacks. The experiment uses the BoTNeTIoT-L01 dataset, applying preprocessing techniques such as data cleaning, normalization, and feature selection to enhance model accuracy. The models are trained and evaluated based on standard performance metrics, including accuracy, precision, recall, F1-score, and AUC-ROC. The results indicate that SVM outperforms the other classifiers in terms of detection accuracy and robustness, particularly in detecting malware based on PE files. These findings offer valuable insights into selecting suitable machine learning models for securing IoT environments. Future work will further explore integrating advanced feature selection techniques and deep learning models to improve detection performance.

Author 1: Apri Siswanto
Author 2: Luhur Bayu Aji
Author 3: Akmar Efendi
Author 4: Dhafin Alfaruqi
Author 5: M. Rafli Azriansyah
Author 6: Yefrianda Raihan

Keywords: IoT security; botnet detection; machine learning; intrusion detection system; comparative analysis; SVM; naïve bayes; logistic regression

PDF

Paper 36: Bibliometric and Content Analysis of Large Language Models Research in Software Engineering: The Potential and Limitation in Software Engineering

Abstract: Large Language Models (LLMs) are a type of artificial neural network that excels at language-related tasks. The advantages and disadvantages of using LLMs in software engineering are still being debated, but they are tools that can be utilized in the field. This study aimed to analyze LLM studies in software engineering using bibliometric and content analysis. The study data were retrieved from Web of Science and Scopus and analyzed using two complementary approaches: bibliometric analysis and content analysis. VOSviewer and Bibliometrix software were used to conduct the bibliometric analysis, which was performed using science mapping and performance analysis approaches. Various bibliometric data, including the most frequently referenced publications, journals, and countries, were evaluated and presented. The knowledge synthesis method was then used for content analysis. This study examined 235 papers with 836 contributing authors, published in 123 different journals. The average number of citations per publication is 1.44. Most publications appeared in the Proceedings of the International Conference on Software Engineering and the ACM International Conference Proceeding Series, with China and the United States emerging as the leading countries. International collaboration on the topic was found to be inadequate. The most frequently used keywords in the publications were "software design," "code (symbols)," and "code generation." From the content analysis, three themes emerged: 1) integration of LLMs into software engineering education, 2) application of LLMs in software engineering, and 3) the potential and limitations of LLMs in software engineering. The results of this study are expected to give researchers and academics insight into the current state of LLM research in software engineering and inform future work.

Author 1: Annisa Dwi Damayanti
Author 2: Hamdan Gani
Author 3: Feng Zhipeng
Author 4: Helmy Gani
Author 5: Sitti Zuhriyah
Author 6: Nurani
Author 7: Nurhayati Djabir
Author 8: Nur Ilmiyanti Wardani

Keywords: Large Language Models; LLM; software engineering; bibliometric; content analysis

PDF

Paper 37: HSI Fusion Method Based on TV-CNMF and SCT-NMF Under the Background of Artificial Intelligence

Abstract: The fusion of hyperspectral images (HSI) has important application value in fields such as remote sensing, environmental monitoring, and agricultural analysis. To improve the quality of reconstructed images, an HSI fusion method based on total-variation coupled non-negative matrix factorization and sparse-constrained tensor factorization techniques is proposed. Spectral sparsity is enhanced through sparse regularization, image spatial characteristics are captured using differential operators, and convergence is improved by combining proximal optimization with augmented Lagrangian methods. The experimental results on the AVIRIS and HYDICE datasets indicate that the proposed method achieves peak signal-to-noise ratios of 38.12 dB and 37.56 dB and reduces spectral angle errors to 3.98° and 4.12°, respectively, significantly outperforming the two comparative methods. The contribution of each module is further verified through ablation experiments: the complete algorithm performs best on all indicators, confirming the synergistic effect of the sparse regularization, total variation regularization, and coupled factorization strategies. Under various complex lighting and noise conditions, the proposed algorithm performs particularly well in HSI fusion tasks, fully demonstrating its robustness and applicability in complex scenes. The proposed method effectively improves HSI fusion quality, providing an efficient and robust solution for the analysis and application of hyperspectral imagery.
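A minimal sketch of one building block, sparsity-regularized NMF via multiplicative updates, may help. The data matrix, rank `k`, and penalty weight `lam` are arbitrary stand-ins; the paper's coupled/tensor formulation, total-variation term, and augmented-Lagrangian solver are not reproduced here.

```python
import numpy as np

# NMF with an L1 (sparsity) penalty on the coefficient matrix H,
# solved by standard multiplicative updates: V ≈ W @ H, W, H >= 0.
rng = np.random.default_rng(0)
V = rng.random((20, 15))          # stand-in for hyperspectral data
k, lam = 4, 0.1                   # rank and sparsity weight (hypothetical)
W = rng.random((20, k)) + 1e-3
H = rng.random((k, 15)) + 1e-3

for _ in range(200):
    # lam in the denominator shrinks H toward sparsity each update.
    H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(float(err), 3))
```

Multiplicative updates keep the factors non-negative by construction, which is why they are a common starting point before adding coupling or spatial regularizers.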

Author 1: Dapeng Zhao
Author 2: Yapeng Zhao
Author 3: Xuexia Dou

Keywords: HSI; NMF; sparse regularization; SCT; augmented Lagrangian method

PDF

Paper 38: Energy Management Controller for Bi-Directional EV Charging System Using Prioritized Energy Distribution

Abstract: The growing adoption of electric vehicles (EVs) has intensified the need for efficient, intelligent, and grid-independent bi-directional charging systems. Conventional EV charging solutions rely heavily on grid electricity, leading to high energy costs, grid instability, and low renewable energy utilization. Existing bi-directional charging systems often lack real-time prioritization of energy sources, fail to optimize solar and energy storage system (ESS) usage, and do not incorporate adaptive control mechanisms for varying grid conditions. To address these gaps, this study proposes an Energy Management Controller (EMC) for bi-directional EV charging that integrates a prioritized solar-to-ESS-to-grid energy distribution strategy to maximize renewable energy usage while ensuring system stability and cost efficiency. The proposed EMC is implemented on an ESP32 microcontroller and manages energy flow via a 6-channel relay module. A temperature-based safety mechanism is embedded to prevent overheating, shutting down the relays if the system temperature exceeds 50°C. The control logic dynamically adjusts power flow based on grid stress levels, solar irradiance, ESS state of charge (SOC), and EV battery SOC. The system is monitored using ThingsBoard for real-time visualization and InfluxDB for historical data analysis. Experimental validation across 12 predefined operational scenarios demonstrated that the EMC effectively reduces grid dependency to 15%, achieves renewable energy utilization of up to 90%, and maintains a fast relay switching response time of 50 ms. The safety mechanism successfully prevents overheating, ensuring reliable operation under all test conditions.

Author 1: Ezmin Abdullah
Author 2: Muhammad Wafiy Firdaus Jalil
Author 3: Nabil M. Hidayat

Keywords: Energy management controller; Bi-directional EV charging system; safety features; control algorithms; energy flow optimization; EV battery protection; testing and validation; thingsboard platform; InfluxDB database

PDF

Paper 39: Machine Learning-Based Prediction of Cannabis Addiction Using Cognitive Performance and Sleep Quality Evaluations

Abstract: Cannabis addiction remains a growing public health concern, particularly due to its impact on cognition and sleep quality. Conventional screening tools, such as structured interviews and self-assessments, often lack objectivity and sensitivity. This study aims to develop and compare machine learning (ML) models for the prediction of cannabis addiction using cognitive performance (Montreal Cognitive Assessment – MoCA) and sleep quality (Pittsburgh Sleep Quality Index – PSQI) features. A total of 200 participants aged 13 to 24 were assessed, including 103 diagnosed addicts and 97 controls. Principal Component Analysis (PCA) was used to reduce data dimensionality and enhance model robustness. The study evaluated six supervised machine learning algorithms, namely Logistic Regression (LR), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Multilayer Perceptron (MLP). Results showed that LR and MLP models achieved high sensitivity (85.71%) and specificity (100%) on the test set, outperforming the DSM-5-based CUD reference test (sensitivity = 71.43%). Although the RF and XGBoost models achieved perfect classification on the training set, their reduced performance on the test set indicates a potential overfitting issue. Integrating machine learning with validated psychometric assessments enables a more accurate and objective identification of cannabis addiction at early stages, thus supporting timely interventions and more effective prevention strategies.
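The PCA step described in the abstract can be sketched as follows, using synthetic feature vectors in place of the MoCA/PSQI scores. The number of components and the data are assumptions; the paper's actual features and classifier settings are not shown.

```python
import numpy as np

# PCA via SVD: center the design matrix, take the top principal
# components, and project before handing the data to a classifier.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))            # 200 participants, 10 scores (synthetic)
Xc = X - X.mean(axis=0)                   # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (S**2) / (S**2).sum()         # variance ratio per component

n_comp = 3                                # hypothetical component count
Z = Xc @ Vt[:n_comp].T                    # reduced matrix fed to LR/SVM/etc.
print(Z.shape, round(float(explained[:n_comp].sum()), 2))
```

Reducing dimensionality this way is one common guard against overfitting on small clinical samples, which is the concern the abstract raises for the RF and XGBoost models.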

Author 1: Abdelilah Elhachimi
Author 2: Mohamed Eddabbah
Author 3: Abdelhafid Benksim
Author 4: Hamid Ibanni
Author 5: Mohamed Cherkaoui

Keywords: Cannabis addiction; machine learning; cognitive assessment; sleep quality; predictive modeling

PDF

Paper 40: An Obesity Risk Level (ORL) Based on Combination of K-Means and XGboost Algorithms to Predict Childhood Obesity

Abstract: Childhood obesity is a common and serious public health problem that requires early prevention measures. Identifying children at risk of obesity is crucial for timely interventions that aim to mitigate adverse health outcomes. Machine learning (ML) offers powerful tools to predict obesity and related complications using large and diverse data sources. This article uses ML techniques to analyze children's data, focusing on a newly developed variable, the Obesity Risk Level (ORL), which categorizes participants into high, medium, and low risk levels. Two primary models were utilized: the K-Means algorithm for clustering participants based on shared characteristics, and XGBoost for predicting the risk level and obesity likelihood. The results showed an overall prediction precision of 88.04%, with high precision, recall, and F1 scores, demonstrating the robustness of the model in identifying obesity risks. This approach provides a data-driven framework to improve health interventions and prevent childhood obesity, yielding information that could shape future preventive strategies.
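The ORL construction can be illustrated with a toy one-dimensional k-means: cluster children by a risk score with k=3, then label the clusters low/medium/high by centroid. The scores below are invented, and the paper clusters on multiple features before feeding the resulting ORL into XGBoost, which is not shown.

```python
# Lloyd's algorithm in 1D (illustrative only).
def kmeans_1d(xs, k=3, iters=50):
    cent = sorted(xs)[:: max(1, len(xs) // k)][:k]   # spread initial centroids
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - cent[j]))].append(x)
        cent = [sum(g) / len(g) if g else cent[j] for j, g in enumerate(groups)]
    return cent, groups

scores = [0.1, 0.15, 0.2, 0.45, 0.5, 0.55, 0.8, 0.85, 0.9]
cent, groups = kmeans_1d(scores)
order = sorted(range(3), key=lambda j: cent[j])
labels = {order[0]: "low", order[1]: "medium", order[2]: "high"}
print([(labels[j], sorted(g)) for j, g in enumerate(groups) if g])
```

The derived cluster label then becomes a supervised target, which is the unsupervised-to-supervised handoff the abstract describes.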

Author 1: Ghaidaa Hamed Alharbi
Author 2: Mohammed Abdulaziz Ikram

Keywords: Prediction system; childhood obesity; K-Means; XGBoost; machine learning

PDF

Paper 41: Industry 4.0 for SMEs: Exploring Operationalization Barriers and Smart Manufacturing with UKSSL and APO Optimization

Abstract: The research aimed to find out why SMEs have a hard time adopting smart manufacturing, what makes smart manufacturing operational, and whether only large companies can afford to take advantage of technological opportunities. It used a knowledge-based semi-supervised framework named Unsupervised Knowledge-based Multi-Layer Perceptron (UKMLP), which has two parts: a contrastive learning algorithm that extracts feature representations from the unlabeled dataset, and a UKMLP that uses those representations to classify the input data using the limited labeled dataset. An artificial protozoa optimizer (APO) then makes the necessary adjustments. This research is based on the hypothesis that large companies may be able to exploit Small and Medium-sized Enterprises (SMEs) to their detriment in cyber-physical production systems, thus cutting them out of the market. Secondary data analysis, which involved evaluating and analyzing data that had already been collected, was crucial in accomplishing the research purpose. Since big companies are usually the center of attention in these discussions, the necessity to delve into this subject stems from the fact that SMEs have a greater research need. The results confirmed the importance of Industry 4.0 in industrial production, particularly with regard to the smart process planning offered by virtual simulation and deep learning algorithms. The report also covered the various connection choices available to SMEs for improving business productivity through the use of autonomous robotic technology and machine intelligence. This research suggests that a substantial value-added opportunity may lie in the way Industry 4.0 interacts with the economic organization of companies.

Author 1: Meeravali Shaik
Author 2: Piyush Kumar Pareek

Keywords: European small and medium-sized enterprises; artificial protozoa optimizer; knowledge-based semi-supervised framework; contrastive learning algorithm; smart manufacturing

PDF

Paper 42: An Improved Sparrow Search Algorithm for Flexible Job-Shop Scheduling Problem with Setup and Transportation Time

Abstract: This study addresses the low production efficiency in manufacturing enterprises caused by the diversification of order products, small batches, and frequent production changeovers. Focusing on minimizing the makespan, this study establishes a Flexible Job-Shop Scheduling Problem (FJSP) model incorporating machine setup and workpiece transportation times, and proposes an improved sparrow search algorithm to effectively solve the problem. Based on the sparrow search algorithm, this study proposes a novel location update strategy that expands the search direction in each dimension and strengthens each individual’s local search capability. In addition, a critical-path-based neighborhood search strategy is introduced to enhance individual search efficiency, and an earliest completion time priority rule is employed during population initialization to further improve solution quality. Several experiments are conducted to validate the effectiveness of the improved strategy, and the results are compared with those obtained using the particle swarm optimization and gray wolf optimization algorithms to demonstrate the efficiency of the proposed model and algorithm. The improved sparrow search algorithm can effectively generate feasible solutions for large-scale problems, provide practical manufacturing scheduling schemes, and enhance the production efficiency of manufacturing enterprises.

Author 1: Yi Li
Author 2: Song Han
Author 3: Zhaohui Li
Author 4: Fan Yang
Author 5: Zhengyi Sun

Keywords: Flexible job shop scheduling; machine setup; transportation; sparrow search algorithm; earliest completion time priority

PDF

Paper 43: A Hybrid Levy Arithmetic and Machine Learning-Based Intrusion Detection System for Software-Defined Internet of Things Environments

Abstract: The convergence of Software-Defined Networking (SDN) and the Internet of Things (IoT) has enabled a more adaptable framework for managing SDN-enabled IoT (SD-IoT) applications, but it also introduces significant cyber security risks. This study proposes a lightweight and explainable intrusion detection system (IDS) based on a hybrid Levy Arithmetic Algorithm (LAA) for SD-IoT environments. By integrating Levy randomization with the Arithmetic Optimization Algorithm (AOA), the LAA enhances feature selection efficiency while minimizing computational overhead. The model was evaluated using the NSL-KDD and UNSW-NB15 datasets. Experimental results demonstrate that the LAA outperformed baseline models, achieving up to 89.2% F1-score and 95.4% precision, while maintaining 100% detection of normal behaviors. These outcomes highlight the proposed system's potential for accurate and efficient detection of cyber-attacks in resource-constrained SD-IoT environments.

Author 1: Wenpan SHI
Author 2: Ning ZHANG

Keywords: Intrusion detection; internet of things; software-defined; feature selection; levy arithmetic

PDF

Paper 44: Reinforcement Learning-Driven Cluster Head Selection for Reliable Data Transmission in Dense Wireless Sensor Networks

Abstract: Wireless Sensor Networks (WSNs) have made significant advances towards practical applications. Data gathering in WSNs has been carried out using various techniques, such as multi-path routing, tree topologies, and clustering. Conventional systems lack a reliable and effective mechanism for dealing with end-to-end connection, traffic, and mobility problems, and these deficiencies often lead to poor network performance. We propose an Internet of Things (IoT)-integrated, densely distributed WSN system. The system utilizes a tree-based clustering approach that depends on the density of the installed sensors. The cluster head nodes are structured in a tree-based cluster to optimize the data-gathering process. Each cluster's most efficient aggregation node is selected using a fuzzy inference-based reinforcement learning technique. The decision is based on three crucial factors: algebraic connectivity, the bipartivity index, and neighborhood overlap. The proposed method significantly enhances energy efficiency and outperforms existing methods in bit error rate, throughput, packet delivery ratio, and delay.
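Of the three graph measures the abstract names, neighborhood overlap is the simplest to illustrate: the Jaccard overlap of two nodes' neighbor sets on a toy topology. Algebraic connectivity and the bipartivity index need spectral computations and are omitted, as is the paper's fuzzy/RL combination; the adjacency below is invented.

```python
# Toy adjacency for a 4-node sensor topology (hypothetical).
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

def neighborhood_overlap(u, v):
    # Fraction of shared neighbors among all neighbors, excluding u and v.
    shared = (adj[u] & adj[v]) - {u, v}
    union = (adj[u] | adj[v]) - {u, v}
    return len(shared) / len(union) if union else 0.0

print(neighborhood_overlap("a", "c"), neighborhood_overlap("a", "b"))
```

A candidate cluster head whose neighborhood strongly overlaps its peers' is well embedded in the cluster, which is why overlap-style measures appear in aggregation-node selection.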

Author 1: Longyang Du
Author 2: Qingxuan Wang
Author 3: Zhigang ZHANG

Keywords: Energy efficiency; wireless sensor networks; clustering; reinforcement learning; fuzzy inference system

PDF

Paper 45: LIFT: Lightweight Incremental and Federated Techniques for Live Memory Forensics and Proactive Malware Detection

Abstract: Live memory forensics deals with acquiring and analyzing volatile memory artefacts to uncover traces of in-memory or fileless malware. Traditional forensic methods operate in a centralized manner, leading to a multitude of challenges and severely limiting the possibilities of accurate and timely analysis. In this work, we propose a decentralized approach for conducting live memory forensics across different devices. The proposed federated learning-based live memory forensics model uses the FedAvg algorithm to provide a lightweight, incremental approach to live memory forensics. The study demonstrates the performance of federated learning algorithms in anomaly detection, achieving a maximum accuracy of 92.5% with Clustered Federated Learning (CFL) while maintaining a convergence time of approximately 35 communication rounds. Key features such as CPU usage and network activity contributed over 85% to the detection accuracy, emphasizing their importance in the predictive process.
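The FedAvg aggregation step at the heart of such a system is small enough to sketch: the server averages client model weights, weighted by local sample counts. The weight vectors and sizes below are toy numbers; the clients' local training and the paper's CFL clustering are not shown.

```python
# One FedAvg server round over flat weight vectors (illustrative).
def fedavg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

clients = [[0.2, 1.0], [0.4, 0.0], [0.6, 2.0]]   # per-device local weights
sizes = [100, 300, 100]                          # local memory-trace sample counts
print(fedavg(clients, sizes))
```

Because only weights (not raw memory captures) travel to the server, the aggregation preserves the decentralized property the abstract argues for.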

Author 1: Sarishma Dangi
Author 2: Kamal Ghanshala
Author 3: Sachin Sharma

Keywords: Live memory forensics; malware detection; federated learning; fileless malware; anomaly detection

PDF

Paper 46: Design of Control System of Water Source Heat Pump Based on Fuzzy PID Algorithm

Abstract: This study aims to enhance the control and energy efficiency of the central air conditioning system by integrating frequency-conversion fuzzy control and advanced control strategies. The focus is on optimizing the motor operation of the central air conditioning system with the help of a frequency converter and improving the system's performance through adaptive control mechanisms, which is an important part of intelligent control. The research adopts frequency-conversion fuzzy control for high-power motors in the central air conditioning system, using a pure proportional controller. The system's response is analyzed, including the rise time (tr = 339.3 s) and peak interval (Ts = 633.19 s) based on unit step response data. The study also addresses the integration of cooling water heat exchange systems, such as heat pumps and plate heat exchangers, to facilitate energy recycling and achieve energy savings. System identification is performed using MATLAB's toolbox on deep-well water pump frequency conversion data, forming a basis for further simulation and optimization. The study incorporates a hybrid PID, fuzzy, and neural network-based control strategy to handle the system's time-varying, nonlinear characteristics. The results indicate that the hybrid control strategy significantly improves the system's dynamic response. With a rise time of tr = 611 s, peak time of tp = 830 s, adjustment time (±5%) of ts = 1140 s, and an overshoot (Mp) of 16.08%, the system performs better than conventional PID controllers, particularly in handling large lag and nonlinear behaviors. This work presents an innovative approach that combines frequency-conversion fuzzy control with adaptive PID and neural networks for a more efficient air conditioning control system. The integration of cooling water heat recycling and advanced control mechanisms provides a novel solution for enhancing energy efficiency and operational performance in central air conditioning systems, which is highly relevant to energy saving and intelligent control.
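A bare discrete PID loop on a toy first-order "temperature" plant shows the controller the fuzzy/neural layers would tune. The gains, plant constants, and setpoint are all illustrative assumptions; the paper's online fuzzy gain adjustment and neural network adaptation are not reproduced.

```python
# Discrete PID tracking a temperature setpoint (illustrative sketch).
def simulate(kp, ki, kd, setpoint=25.0, steps=400, dt=1.0):
    y, integ = 15.0, 0.0              # initial temperature and integral state
    prev_err = setpoint - y
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (-0.1 * y + 0.1 * u)  # toy first-order plant response
    return y

print(round(simulate(kp=2.0, ki=0.05, kd=0.5), 2))
```

In a fuzzy PID scheme, `kp`, `ki`, and `kd` stop being constants: a fuzzy rule base reshapes them each step from the current error and error rate, which is what handles the large-lag, nonlinear behavior the abstract mentions.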

Author 1: Min Dong
Author 2: Xue Li
Author 3: Yixuan Yang
Author 4: Zheng Li
Author 5: Hui He

Keywords: Central air conditioning system; frequency converter; fuzzy PID control; intelligent control; energy saving

PDF

Paper 47: Stochastic Nonlinear Analysis of Internet of Things Network Performance and Security

Abstract: To address the poor performance of traditional methods for analyzing Internet of Things (IoT) network performance and security, this research uses a support vector machine for IoT network security situation assessment. A grey wolf optimization algorithm improved by a genetic algorithm is introduced to optimize the model, and a stochastic nonlinear integrated algorithm for IoT network performance is designed. In performance tests, the mean absolute error, root mean square error, and mean absolute percentage error of the integrated algorithm were 0.0064, 0.041, and 0.0013, respectively, significantly lower than those of the other four algorithms, demonstrating its higher prediction accuracy. The recall of the integrated algorithm was 93.7% and its F1 value was 0.94, significantly higher than the comparative algorithms, proving its better overall performance. In the analysis of practical application effects, when access control was performed by the integrated algorithm, the predicted curve largely overlapped with the actual curve, demonstrating a better fit. The communication overhead of the integrated algorithm was 81.3 KB, significantly lower than that of the other two algorithms, and its average communication time was 3.59 s, also lower than the other two algorithms, proving that it can effectively reduce communication cost and delay. The integrated algorithm can effectively improve IoT network security situation assessment, providing reliable technical support for the security protection of IoT networks and having important practical application value.

Author 1: Junzhou Li
Author 2: Feixian Sun

Keywords: Internet of Things; security; stochastic nonlinearity; support vector machines; grey wolf optimization algorithm

PDF

Paper 48: Experiential Landscape Design Using the Integration of Three-Dimensional Animation Elements and Overlay Methods

Abstract: This work aims to optimize users' immersive experiences, enhance design effectiveness, and construct a scientific evaluation system for landscape design. The work begins with the collection and analysis of spatial data from the landscape design area, using 3D animation technology to generate visual models and virtually reconstruct key landscape elements. Next, the overlay method is applied to visually stratify elements within the space, progressively building a multi-layered, logical spatial structure to enhance realism and information communication efficiency in landscape design. To evaluate design effectiveness, a user experience questionnaire and behavior tracking experiments are designed. The questionnaire covers three dimensions: immersion, satisfaction, and interactivity, while the behavioral tracking experiment collects data on user dwell time and gaze movement in virtual scenes. Results indicate that the design scheme based on 3D animation and layering significantly outperforms traditional designs in terms of immersive experience, clarity of structure, and user engagement. In the questionnaire, the average satisfaction rating for the design scheme is 4.7 (out of 5), with an immersion rating average of 4.8. The behavioral tracking experiment shows a 40% increase in dwell time compared to traditional designs, and users' willingness to revisit improves by 26% compared to the control group. This work innovatively applies 3D animation and overlay methods to experiential landscape design, confirming the practical value of this method in optimizing user experience and design effectiveness.

Author 1: Mingjing Sun
Author 2: Ming Wei

Keywords: 3D animation integration; overlay method; experiential landscape design; user immersive experience; evaluation system design

PDF

Paper 49: Database-Based Cooperative Scheduling Optimization of Multiple Robots for Smart Warehousing

Abstract: This study investigates the current state and future directions of cooperative scheduling optimization for multiple robots in smart warehousing environments. With the rapid growth of logistics automation, optimizing the collaboration between intelligent robots has become essential for improving warehouse efficiency and adaptability. The research employs a bibliometric analysis based on the Web of Science (WoS) database, using VOSviewer for keyword co-occurrence, clustering, and density visualization to identify key research hotspots, knowledge structures, and technological trends. The analysis categorizes the field into four major research clusters: robot path planning and navigation, warehouse system optimization and order picking, algorithm design and performance evaluation, and the application of emerging technologies such as edge computing and cloud robotics. Results show a growing emphasis on dynamic scheduling, real-time data integration, and multi-objective optimization, with increasing use of technologies like deep reinforcement learning and digital twins. The study also incorporates real-world case comparisons from leading domestic and international enterprises, revealing implementation challenges and performance benchmarks. Although promising advancements are evident, issues such as fragmented data systems, limited real-time responsiveness, and insufficient cross-disciplinary integration persist. The study concludes that future research should focus on improving environmental adaptability through edge computing, standardizing robot collaboration protocols, and enhancing system robustness via real-time database architectures. By bridging theoretical insights with practical needs, this research offers a comprehensive foundation for developing next-generation intelligent warehousing systems based on coordinated multi-robot scheduling.

Author 1: Zhenglu Zhi

Keywords: Database; intelligent warehousing; robotics; cooperative scheduling

PDF

Paper 50: A Cross-Chain Mechanism Based on Hierarchically Managed Notary Group

Abstract: Blockchain technology, characterized by decentralization, immutability, traceability, and transparency, provides innovative solutions for data management. However, the limited cross-chain interoperability between blockchains hampers their broader application and development. To address this challenge, this paper proposes a Cross-Chain Mechanism Based on a Hierarchically Managed Notary Group, abbreviated as HMNG-CCM, which enables secure and efficient cross-chain transactions between blockchains. To mitigate the centralization issue inherent in traditional notary-based cross-chain mechanisms, an innovative notary group management approach is introduced. This approach implements hierarchical management by categorizing notaries into three levels (junior, intermediate, and senior), thereby effectively mitigating the centralization problem. Additionally, a functional division mechanism for notaries is designed, wherein the roles of transaction processing and verification within the cross-chain transaction process are separated to enhance system reliability. Furthermore, to tackle the complexity of notary reputation evaluation, a reputation assessment scheme based on an improved PageRank algorithm is proposed, with differentiated reputation evaluation strategies developed for junior and intermediate notaries to ensure fairness and rationality in the assessment process. The effectiveness of this scheme is validated through experiments conducted on the Hyperledger Fabric platform. The experimental results demonstrate that the proposed mechanism exhibits strong robustness against malicious notaries while significantly improving transaction speed and success rate. This study offers new theoretical and practical foundations for the optimization and advancement of blockchain cross-chain technology.
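Plain PageRank by power iteration, run on a toy notary "endorsement" graph, shows the base algorithm such a reputation scheme starts from. The graph and damping factor are assumptions; the paper's level-specific modifications are not shown.

```python
# PageRank power iteration over an out-link dictionary (illustrative).
def pagerank(links, d=0.85, iters=100):
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        for v, outs in links.items():
            share = rank[v] / len(outs) if outs else 0.0
            for w in outs:
                new[w] += d * share       # endorsed notaries inherit rank
        rank = new
    return rank

# n1 endorses n2 and n3; n2 endorses n3; n3 endorses n1 (toy data).
links = {"n1": ["n2", "n3"], "n2": ["n3"], "n3": ["n1"]}
ranks = pagerank(links)
print({k: round(v, 3) for k, v in ranks.items()})
```

A notary endorsed by already-reputable notaries ends up with a higher score than one endorsed the same number of times by low-reputation peers, which is the property that makes PageRank-style scoring attractive for reputation.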

Author 1: Hongliang Tian
Author 2: Zhiyang Ruan
Author 3: Zhong Fan

Keywords: Blockchain; cross-chain; notary group; hierarchical management; reputation evaluation

PDF

Paper 51: Comprehensive Vulnerability Analysis of Three-Factor Authentication Protocols in Internet of Things-Enabled Healthcare Systems

Abstract: This study evaluates a three-factor authentication protocol designed for IoT healthcare systems, identifying several key vulnerabilities that could compromise its security. The analysis reveals weaknesses in single-factor authentication, time synchronization, side-channel attacks, and replay attacks. To address these vulnerabilities, the study proposes a series of enhancements, including the implementation of multi-factor authentication (MFA) to strengthen user verification processes and the inclusion of timestamps or nonces in messages to prevent replay attacks. Additionally, the adoption of advanced cryptographic techniques, such as masking and shuffling, can mitigate side-channel attacks by minimizing information leakage during encryption. The use of message authentication codes (MACs) ensures communication integrity by verifying message authenticity. These improvements aim to fortify the protocol's security framework, ensuring the protection of sensitive medical data. Future research directions include exploring adaptive security policies leveraging artificial intelligence and optimizing cryptographic operations to enhance efficiency. These efforts are essential for maintaining the protocol's resilience against evolving threats and ensuring the secure operation of IoT-based healthcare systems.

Author 1: Haewon Byeon

Keywords: Three-factor authentication; IoT healthcare security; multi-factor authentication; side-channel attack mitigation; replay attack prevention

PDF

Paper 52: Real-Time Lightweight Sign Language Recognition on Hybrid Deep CNN-BiLSTM Neural Network with Attention Mechanism

Abstract: Sign language recognition (SLR) plays a crucial role in bridging communication gaps for individuals with hearing and speech impairments. This study proposes a hybrid deep CNN-BiLSTM neural network with an attention mechanism for real-time and lightweight sign language recognition. The CNN module extracts spatial features from individual gesture frames, while the BiLSTM module captures temporal dependencies, enhancing classification accuracy. The attention mechanism further refines feature selection by focusing on the most relevant time steps in a sign sequence. The proposed model was evaluated on the Sign Language MNIST dataset, achieving state-of-the-art performance with high accuracy, precision, recall, and F1-score. Experimental results indicate that the model converges rapidly, maintains low misclassification rates, and effectively distinguishes between visually similar signs. Confusion matrix analysis and feature map visualizations provide deeper insights into the hierarchical feature extraction process. The results demonstrate that integrating spatial, temporal, and attention-based learning significantly improves recognition performance while maintaining computational efficiency. Despite its effectiveness, challenges such as misclassification in ambiguous gestures and real-time computational constraints remain, suggesting future improvements in multi-modal fusion, transformer-based architectures, and lightweight model optimizations. The proposed approach offers a scalable and efficient solution for real-time sign language recognition, contributing to the development of assistive technologies for individuals with communication disabilities.
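The attention mechanism's core operation can be sketched in isolation: score each time step, softmax the scores, and pool the per-frame features by those weights. The frame features and scores below are toy values; the real model learns the scoring function jointly with the CNN-BiLSTM, which is not reproduced here.

```python
import math

# Softmax attention pooling over a sequence of frame features (illustrative).
def attention_pool(features, scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]        # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    context = [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]
    return weights, context

frames = [[0.2, 0.4], [0.9, 0.1], [0.5, 0.5]]       # per-frame feature vectors (toy)
scores = [0.1, 2.0, 0.3]                            # relevance of each time step (toy)
weights, context = attention_pool(frames, scores)
print([round(w, 2) for w in weights], [round(c, 2) for c in context])
```

The pooled context vector is dominated by the highest-scoring frame, which is how attention lets the classifier focus on the most informative moments of a sign sequence.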

Author 1: Gulnur Kazbekova
Author 2: Zhuldyz Ismagulova
Author 3: Gulmira Ibrayeva
Author 4: Almagul Sundetova
Author 5: Yntymak Abdrazakh
Author 6: Boranbek Baimurzayev

Keywords: Sign language recognition; CNN-BiLSTM; attention mechanism; deep learning; gesture classification; real-time processing; assistive technology

PDF

Paper 53: Investigating the Impact of Hyper Parameters on Intrusion Detection System Using Deep Learning Based Data Augmentation

Abstract: The effects of changing the learning rate, data augmentation percentage, and number of epochs on the performance of Wasserstein Generative Adversarial Networks with Gradient Penalty (WGAN-GP) are evaluated in this study. The purpose of this research is to find out how these factors affect data augmentation and training stability. The degree of system performance is measured using the Classification Model Utility approach. This study therefore aims to determine the interaction between learning rate, augmentation percentage, and epoch value when using WGAN-GP to generate synthetic data, as reflected in system performance. The results provide indications of how these hyperparameters can be adjusted up or down, with positive or negative consequences for the generation process, to inform further research and use of WGAN-GP. The study also provides insights into how the generative model is trained, and how that affects the stability and quality of the results in settings such as image synthesis and other generative tasks.
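The gradient penalty at the heart of WGAN-GP can be illustrated on a toy linear critic, where the gradient of D(x) = w·x is known in closed form (it is simply w), so no autograd library is needed. This is a conceptual sketch of the penalty term only, not the authors' system:

```python
import random

def gradient_penalty(w, x_real, x_fake, lam=10.0):
    """WGAN-GP penalty lam * (||grad D(x_hat)|| - 1)^2 for a linear
    critic D(x) = w . x, whose gradient w.r.t. x is simply w.
    """
    # x_hat is a random interpolate between a real and a fake sample,
    # as in the WGAN-GP formulation.
    eps = random.random()
    x_hat = [eps * r + (1 - eps) * f for r, f in zip(x_real, x_fake)]
    # For a linear critic the gradient is w everywhere, so it does not
    # depend on x_hat; a deep critic would need autograd here.
    grad_norm = sum(wi * wi for wi in w) ** 0.5
    return lam * (grad_norm - 1.0) ** 2

# A critic with unit-norm weights incurs zero penalty...
assert gradient_penalty([0.6, 0.8], [1.0, 2.0], [0.0, 0.0]) < 1e-9
# ...while one with gradient norm 2 is penalized by lam * (2 - 1)^2 = 10.
assert abs(gradient_penalty([2.0, 0.0], [1.0, 2.0], [0.0, 0.0]) - 10.0) < 1e-9
```

The penalty pushes the critic toward unit gradient norm, which is what stabilizes WGAN-GP training relative to weight clipping.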

Author 1: Umar Iftikhar
Author 2: Syed Abbas Ali

Keywords: Artificial intelligence; learning rate; cyber threat; network intrusion detection; deep learning; data augmentation; generative adversarial networks; epochs

PDF

Paper 54: Adaptive Crow Search Algorithm for Hierarchical Clustering in Internet of Things-Enabled Wireless Sensor Networks

Abstract: The Internet of Things (IoT) relies on efficient Wireless Sensor Networks (WSNs) for data collection and transmission in various applications, including smart cities, industrial automation, and environmental monitoring. Clustering is a fundamental technique for structuring WSNs hierarchically, enabling load balancing, reducing energy consumption, and extending network lifespan. However, clustering optimization in WSNs is an NP-hard problem, necessitating heuristic and metaheuristic approaches. This study introduces an Adaptive Crow Search Algorithm (A-CSA) for clustering in IoT-enabled WSNs, addressing the inherent limitations of the standard CSA, such as premature convergence and local optima entrapment. The proposed A-CSA incorporates three key enhancements: (1) a dynamic awareness probability to improve global search efficiency during initial population selection, (2) a systematic leader selection mechanism to enhance exploitation and avoid random selection bias, and (3) an adaptive local search strategy to refine cluster formation. Performance evaluations conducted under varying network configurations, including node density, network size, and base station positioning, demonstrate that A-CSA outperforms existing clustering approaches in terms of energy efficiency, network longevity, and data transmission reliability. The results highlight the potential of A-CSA as a robust optimization technique for clustering in IoT-driven WSN environments.
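The standard CSA position update that the paper's adaptive variant builds on can be sketched on a one-dimensional toy objective. This is a hedged illustration of the baseline algorithm only (fixed awareness probability, not the paper's dynamic one); all parameter values are invented:

```python
import random

def csa_step(positions, memories, fl=2.0, ap=0.1, bounds=(-5.0, 5.0)):
    """One iteration of the standard Crow Search Algorithm in 1-D.

    fl: flight length; ap: awareness probability (fixed here; the
    paper's A-CSA adapts it over time).
    """
    lo, hi = bounds
    new_positions = []
    for x in positions:
        j = random.randrange(len(positions))  # crow to follow
        if random.random() >= ap:
            # Crow j is unaware: move toward its memorized food cache.
            x_new = x + random.random() * fl * (memories[j] - x)
        else:
            # Crow j is aware: fly to a random position (exploration).
            x_new = random.uniform(lo, hi)
        new_positions.append(min(hi, max(lo, x_new)))
    return new_positions

def sphere(x):  # toy objective: minimize x^2
    return x * x

random.seed(0)
pos = [random.uniform(-5, 5) for _ in range(20)]
mem = list(pos)
for _ in range(200):
    pos = csa_step(pos, mem)
    # Each crow keeps its memory only if the new position is better.
    mem = [p if sphere(p) < sphere(m) else m for p, m in zip(pos, mem)]
best = min(mem, key=sphere)
assert sphere(best) < 1.0  # converges near the optimum at 0
```

In the clustering setting each "position" would encode a candidate set of cluster heads, and the objective would mix residual energy and intra-cluster distance rather than this toy sphere function.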

Author 1: Lingwei WANG
Author 2: Hua WANG

Keywords: Internet of things; wireless sensor networks; clustering; energy efficiency; optimization

PDF

Paper 55: Understanding Brain Network Stimulation for Emotion Analyzing Connectivity Feature Map from Electroencephalography

Abstract: In understanding brain functioning through Electroencephalography (EEG), it is essential not only to identify the more active brain areas but also to understand connectivity among different areas. The functional and effective connectivity networks of the brain are examined in this study by constructing a connectivity feature map (CFM) with four widely used connectivity methods from the Database for Emotion Analysis Using Physiological Signals (DEAP) emotional EEG data, to investigate how these connectivity patterns are influenced by emotion. According to the results, emotions are mainly related to the parietal, central, and frontal regions, with the parietal region most responsible for emotion alteration among the three. Positive emotions are associated with more direct correlations and dependencies than negative ones. When experiencing negative emotions, the brain regions function more synchronously and there is less flow of information. Whether direct or inverse, there is less correlation between brain regions in the higher frequency band than in the lower frequency band, while higher frequencies are associated with increased dependence and directed information transfer between regions. Generally, electrodes in the same lobe show stronger connectivity than those in different lobes. Overall, the present study is a comprehensive analysis of brain network stimulation for emotion from EEG, and it differs significantly from existing emotion recognition studies, which typically focus on recognition proficiency.
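One simple instance of a connectivity feature map — a correlation-based one, standing in for one of the four methods the study uses — can be sketched as a channel-by-channel correlation matrix. The DEAP signals are replaced by made-up toy channels here:

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def connectivity_map(channels):
    """Symmetric channel-by-channel correlation matrix (a simple CFM)."""
    n = len(channels)
    return [[pearson(channels[i], channels[j]) for j in range(n)]
            for i in range(n)]

# Three toy "electrode" signals: ch0 and ch1 move together, ch2 inversely.
ch0 = [1.0, 2.0, 3.0, 4.0, 5.0]
ch1 = [2.1, 4.2, 5.9, 8.1, 10.0]
ch2 = [5.0, 4.0, 3.0, 2.0, 1.0]
cfm = connectivity_map([ch0, ch1, ch2])
assert abs(cfm[0][0] - 1.0) < 1e-9  # self-connectivity is 1
assert cfm[0][1] > 0.99             # strong direct correlation
assert cfm[0][2] < -0.99            # strong inverse correlation
```

Directed measures (for effective connectivity) would replace the symmetric correlation with an asymmetric quantity such as Granger causality or transfer entropy.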

Author 1: Mahfuza Akter Maria
Author 2: M. A. H. Akhand
Author 3: Md Abdus Samad Kamal

Keywords: Brain connectivity; connectivity feature map; electroencephalography; emotion

PDF

Paper 56: AI-Driven Predictive Analytics for CRM to Enhance Retention Personalization and Decision-Making

Abstract: The advent of Artificial Intelligence (AI) has dramatically altered Customer Relationship Management (CRM) by allowing organizations to anticipate customer behavior, customize interactions, and automate service delivery. This research introduces an extensive AI-based predictive analytics framework aimed at improving customer engagement, retention, and satisfaction using advanced Machine Learning (ML) and Natural Language Processing (NLP) methodologies. Using XGBoost for churn prediction and BERT-based models for sentiment analysis, the system efficiently handles both structured and unstructured customer data. The methodology involves sophisticated feature engineering, customer segmentation via K-Means clustering, and Customer Lifetime Value (CLV) prediction to aid data-driven business strategies. An NLP-driven chatbot offers real-time, personalized support, reducing response time and improving user experience. Evaluation metrics such as accuracy, precision, recall, and F1-score demonstrate the superior performance of the proposed system compared to conventional CRM approaches. This work also addresses important issues such as data privacy compliance, algorithmic bias, and explainability of AI decision-making. Ethical deployment and transparency of AI are emphasized for building confidence in automated CRM systems. Future evolution will tackle the use of reinforcement learning to facilitate learning-based interaction schemes and federated learning for trusted, decentralized management of data. This architecture not only provides better CRM functionality but also builds a platform for intelligent, responsible, and scalable customer-relations solutions across industries.
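The K-Means segmentation step mentioned in the abstract can be sketched on one-dimensional customer features. A minimal illustration assuming made-up monthly-spend values, not the paper's engineered feature set:

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain K-Means on 1-D customer features (e.g. monthly spend)."""
    # Spread the initial centroids across the sorted value range.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each customer to the nearest centroid.
            idx = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[idx].append(v)
        # Recompute centroids; keep the old one if a cluster is empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

spend = [20, 25, 22, 30, 210, 190, 205, 220]  # two obvious segments
centroids, clusters = kmeans_1d(spend, k=2)
low, high = sorted(centroids)
assert low < 50 and high > 150  # low-value vs high-value customers
```

In practice the framework would cluster multi-dimensional feature vectors (recency, frequency, monetary value, engagement signals) rather than a single spend column.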

Author 1: Yashika Gaidhani
Author 2: Janjhyam Venkata Naga Ramesh
Author 3: Sanjit Singh
Author 4: Reetika Dagar
Author 5: T Subha Mastan Rao
Author 6: Sanjiv Rao Godla
Author 7: Yousef A.Baker El-Ebiary

Keywords: Artificial Intelligence; predictive analytics; customer relationship management; natural language processing; churn prediction

PDF

Paper 57: Cognitive Load Optimization in Digital (ESL) Learning: A Hybrid BERT and FNN Approach for Adaptive Content Personalization

Abstract: Traditional English as a Secondary Language (ESL) learning platforms rely on static content delivery, often failing to adapt to individual learners’ cognitive capacities, leading to inefficient comprehension and increased cognitive load. A novel hybrid Feedforward Neural Network and Bidirectional Encoder Representations from Transformers (FNN-BERT) framework stands as our solution, performing dynamic content personalization through predictions of real-time cognitive load. The proposed approach incorporates Feedforward Neural Networks (FNN) alongside Bidirectional Encoder Representations from Transformers (BERT) to process behavioral analytics for optimized content complexity adjustment and adaptive, scalable learning delivery. Limited real-time adaptability, poor scalability, and the high computational needs of current models reduce their effectiveness in personalized learning environments. Using Test of English for International Communication (TOEIC), International English Language Testing System (IELTS), and Test of English as a Foreign Language (TOEFL) datasets, our methodology uses the FNN to forecast cognitive load based on student engagement behaviors and application errors, while BERT processes content difficulty adjustments automatically. The proposed model delivers a 95.3% accuracy rate, 96.22% precision, 96.1% recall, and a 97.2% F1-score, surpassing conventional Artificial Intelligence-based ESL learning systems. The system is implemented in Python and improves understanding as well as student focus and mental processing speed. Personalized content presentation lowers cognitive strain while simultaneously advancing student achievement.
The research adds value to smart educational frameworks through its introduction of a scalable framework that enables adaptable learning systems for English as a second language (ESL). Future research steps include simplifying system complexity, adding multimodal learning signals such as eye monitoring and speech recognition, and further developing the model across various educational subject areas. The research serves as a promising foundation for AI-driven real-time adaptive education systems for students from diverse backgrounds.

Author 1: Komminni Ramesh
Author 2: Christine Ann Thomas
Author 3: Joel Osei-Asiamah
Author 4: Bhuvaneswari Pagidipati
Author 5: Elangovan Muniyandy
Author 6: B. V. Suresh Reddy
Author 7: Yousef A.Baker El-Ebiary

Keywords: Cognitive load management; artificial intelligence-based English as a secondary language learning; adaptive content personalization

PDF

Paper 58: Enhancing Cybersecurity Through Artificial Intelligence: A Novel Approach to Intrusion Detection

Abstract: Modern cyber threats have evolved to sophisticated levels, necessitating advanced intrusion detection systems (IDS) to protect critical network infrastructure. Traditional signature-based and rule-based IDS face challenges in identifying new and evolving attacks, leading organizations to adopt AI-driven detection solutions. This study introduces an AI-powered intrusion detection system that integrates machine learning (ML) and deep learning (DL) techniques—specifically Support Vector Machines (SVM), Random Forests, Autoencoders, and Convolutional Neural Networks (CNNs)—to enhance detection accuracy while reducing false positive alerts. Feature selection techniques such as SHAP-based analysis are employed to identify the most critical attributes in network traffic, improving model interpretability and efficiency. The system also incorporates reinforcement learning (RL) to enable adaptive intrusion response mechanisms, further enhancing its resilience against evolving threats. The proposed hybrid framework is evaluated using the SDN_Intrusion dataset, achieving an accuracy of 92.8%, a false positive rate of 5.4%, and an F1-score of 91.8%, outperforming conventional IDS solutions. Comparative analysis with prior studies demonstrates its superior capability in detecting both known and unknown threats, particularly zero-day attacks and anomalies. While the system significantly enhances security coverage, challenges in real-time implementation and computational overhead remain. This paper explores potential solutions, including federated learning and explainable AI techniques, to optimize IDS functionality and adaptive capabilities.

Author 1: Mohammed K. Alzaylaee

Keywords: Intrusion detection; machine learning; deep learning; zero-day attacks; anomaly detection; feature selection; reinforcement learning; cybersecurity

PDF

Paper 59: Smoke Detection Model with Adaptive Feature Alignment and Two-Channel Feature Refinement

Abstract: To address missed detections and low accuracy in existing smoke detection algorithms when dealing with variable smoke patterns in small-scale objects and complex environments, FAR-YOLO is proposed as an enhanced smoke detection model based on YOLOv8. The model adopts a Fast-C2f structure to optimize the network and reduce the number of parameters. An Adaptive Feature Alignment Module (AFAM) is introduced to enhance semantic information retrieval for small targets by merging and aligning features across different layers during point sampling. In addition, FAR-YOLO designs an Attention-Guided Head (AG-Head) in which a feature-guiding branch integrates critical information from both the localization and classification tasks. FAR-YOLO refines key features using a Dual-Feature Refinement Attention Module (DFRAM) to provide complementary guidance for both tasks. Experimental results demonstrate that FAR-YOLO improves detection accuracy compared to existing models, with a 3.5% increase in Precision and a 4.0% increase in AP50 over YOLOv8. Meanwhile, the model reduces the number of parameters by 0.46M and achieves an FPS of 135, making it suitable for real-time smoke detection in challenging conditions and ensuring reliable performance in various scenarios.

Author 1: Yuanpan Zheng
Author 2: Binbin Chen
Author 3: Zeyuan Huang
Author 4: Yu Zhang
Author 5: Chao Wang
Author 6: Xuhang Liu

Keywords: Smoke detection model; adaptive feature alignment; two-channel feature refinement; attention mechanism

PDF

Paper 60: Design and Modeling of a Dynamic Adaptive Hypermedia System Based on Learners' Needs and Profile

Abstract: This study presents the design and modeling of an adaptive hypermedia system capable of dynamically adjusting to the needs and characteristics of each learner according to their profile. In the digital age, where digital content must respond to varied profiles and adapt to learners' preferences and skills, this system offers a personalized approach that enriches the learning and interaction experience with learning environments. This work consists of analyzing the different types of learner profiles in order to identify the key criteria for effective personalization. On this basis, the authors developed a model of an adaptive and dynamic hypermedia system capable of adapting in real time. To ensure a clear and coherent structure, UML (Unified Modeling Language) modeling is employed. Preliminary results show that this system offers a relevant and targeted experience, increasing learner engagement and satisfaction and making learning both more relevant and more enjoyable. This work paves the way for future research on the optimization of hypermedia systems by further integrating the individual behaviors of learners in a truly adaptive learning environment that values the potential of each learner.

Author 1: Mohamed Benfarha
Author 2: Mohammed Sefian Lamarti
Author 3: Mohamed Khaldi

Keywords: Design; adaptive hypermedia; learning styles; user modeling; UML models

PDF

Paper 61: From Code Analysis to Fault Localization: A Survey of Graph Neural Network Applications in Software Engineering

Abstract: Graph Neural Networks (GNNs) are a class of deep learning models for analyzing and processing graph-structured data. Many software development activities, such as fault localization, code analysis, and software quality measurement, are inherently graph-like. This survey assesses GNN applications in different subfields of software engineering, with special attention to defect identification and other quality assurance processes. A summary of the current state-of-the-art is presented, highlighting important advances in GNN methodologies and their application in software engineering. The factors that limit current solutions for a wider range of tasks are also considered, including scalability, interpretability, and compatibility with other tools. Suggestions for future work are presented, including the development of new GNN architectures, improvements to GNN interpretability, and the construction of large-scale graph datasets for software engineering. The survey therefore provides detailed insight into how the application of GNNs can enhance software development processes and the quality of the final product.
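The message-passing core that makes GNNs a natural fit for graph-shaped software artifacts can be sketched as one neighborhood-averaging layer over a toy call graph. This is a generic illustration (a simple graph-convolution step), not tied to any specific architecture surveyed:

```python
def gcn_layer(adj, feats):
    """One graph-convolution step: average each node's neighborhood
    features (including its own via a self-loop) — the core of GNN
    message passing.
    """
    n = len(adj)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]] + [i]  # self-loop
        dim = len(feats[0])
        out.append([sum(feats[j][d] for j in neigh) / len(neigh)
                    for d in range(dim)])
    return out

# Tiny call graph: function 0 calls 1 and 2; 1 and 2 are leaves.
adj = [[0, 1, 1],
       [1, 0, 0],
       [1, 0, 0]]
feats = [[1.0], [0.0], [0.0]]  # e.g. a "suspiciousness" seed on node 0
h1 = gcn_layer(adj, feats)
# After one round, the signal has propagated to node 0's neighbors.
assert h1[1][0] > 0 and h1[2][0] > 0
assert abs(h1[0][0] - 1.0 / 3.0) < 1e-9  # (1 + 0 + 0) / 3
```

A trained GNN would interleave such aggregation steps with learned weight matrices and nonlinearities, so that fault-localization signals spread along structural edges of the program.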

Author 1: Maojie PAN
Author 2: Shengxu LIN
Author 3: Zhenghong XIAO

Keywords: Graph neural networks; fault localization; code analysis; software quality

PDF

Paper 62: Designing Quantum-Resilient Blockchain Frameworks: Enhancing Transactional Security with Quantum Algorithms in Decentralized Ledgers

Abstract: Quantum computing is progressing rapidly, posing a real threat that classical cryptographic methods will be compromised, impacting the security of blockchain networks. The traditional cryptographic techniques used to secure blockchains, such as Rivest–Shamir–Adleman (RSA), Elliptic Curve Cryptography (ECC), and the Secure Hash Algorithm 256-bit (SHA-256), are vulnerable to quantum algorithms: Shor’s algorithm can efficiently break asymmetric encryption, while Grover’s algorithm speeds up brute-force attacks. This vulnerability creates a need for an advanced quantum-resilient blockchain framework to protect decentralized ledgers from future quantum threats. This research proposes the architectural integration of Post-Quantum Cryptography (PQC), Quantum Key Distribution (QKD), and Quantum Random Number Generation (QRNG) to fortify blockchain security: PQC replaces classical encryption, QKD secures key exchange by detecting eavesdropping, and QRNG improves cryptographic randomness to remove predictable-key vulnerabilities. With only a small loss of transaction efficiency, the approach increases transaction encryption accuracy, key exchange security, and resistance to quantum attacks. This quantum-enhanced blockchain design preserves decentralization, transparency, and security while addressing the future quantum threat. Through rigorous analysis and comparative evaluation, we demonstrate that the approach protects blockchain networks from emerging quantum risks, safeguarding decentralized finance, smart contracts, and cross-chain transactions.

Author 1: Meenal R Kale
Author 2: Yousef A.Baker El-Ebiary
Author 3: L. Sathiya
Author 4: Vijay Kumar Burugari
Author 5: Erkiniy Yulduz
Author 6: Elangovan Muniyandy
Author 7: Rakan Alanazi

Keywords: Quantum resilience; blockchain security; Quantum Key Distribution (QKD); Post-Quantum Cryptography (PQC); Quantum Random Number Generation (QRNG); decentralized ledger

PDF

Paper 63: Pose Estimation of Spacecraft Using Dual Transformers and Efficient Bayesian Hyperparameter Optimization

Abstract: Spacecraft pose estimation is an essential contribution to central space mission activities such as autonomous navigation, rendezvous, docking, and on-orbit servicing. Nonetheless, methods like Convolutional Neural Networks (CNNs), Simultaneous Localization and Mapping (SLAM), and Particle Filtering suffer significant drawbacks when deployed in space. Such techniques tend to have high computational complexity, poor generalization to varied or unknown conditions (the domain generalization problem), and accuracy loss under noise from space-environment factors such as fluctuating lighting, sensor limitations, and background interference. To overcome these challenges, this study proposes a new solution combining a Dual-Channel Transformer Network with Bayesian Optimization. At its core is the use of EfficientNet, augmented with squeeze-and-excitation attention modules, to extract feature-rich representations without sacrificing computational efficiency. The dual-channel architecture splits satellite pose estimation into two dedicated streams: translational prediction and orientation estimation via quaternion-based activation functions for rotational precision. Activation maps are transformed into transformer-compatible sequences via 1×1 convolutions, enabling effective learning in the transformer's encoder-decoder system. To maximize model performance, Bayesian Optimization with Gaussian Process Regression and the Upper Confidence Bound (UCB) acquisition function selects optimal hyperparameters with fewer queries, conserving time and resources. The framework, implemented in Python and verified on the SLAB Satellite Pose Estimation Challenge dataset, achieved an outstanding Mean IoU of 0.9610, reflecting higher accuracy compared to standard models.
Overall, this research sets a new standard for spacecraft pose estimation by marrying the versatility of deep learning with probabilistic optimization to underpin the next generation of intelligent, autonomous space systems.
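The UCB acquisition step described above can be sketched in isolation, assuming the posterior means and standard deviations from the Gaussian-process surrogate are already available. The candidate learning rates and their statistics here are invented:

```python
def ucb_select(candidates, mu, sigma, kappa=2.0):
    """Upper Confidence Bound acquisition: pick the candidate whose
    posterior mean plus kappa * posterior std-dev is largest.

    mu and sigma would come from a Gaussian-process surrogate fitted
    to past evaluations; here they are given directly to keep the
    sketch self-contained.
    """
    scores = [m + kappa * s for m, s in zip(mu, sigma)]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores[best]

# Three candidate learning rates: the middle one has a modest mean but
# high uncertainty, so UCB favors exploring it.
lrs = [1e-4, 1e-3, 1e-2]
mu = [0.90, 0.88, 0.85]     # predicted validation IoU (invented)
sigma = [0.01, 0.06, 0.02]
choice, score = ucb_select(lrs, mu, sigma)
assert choice == 1e-3       # 0.88 + 2*0.06 = 1.00 beats 0.92 and 0.89
```

The kappa parameter trades off exploitation (trust the mean) against exploration (chase uncertainty); the chosen candidate is evaluated, the surrogate is refit, and the loop repeats with far fewer trials than grid search.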

Author 1: N. Kannaiya Raja
Author 2: Janjhyam Venkata Naga Ramesh
Author 3: Yousef A.Baker El-Ebiary
Author 4: Elangovan Muniyandy
Author 5: N. Konda Reddy
Author 6: Vanipenta Ravi Kumar
Author 7: Prasad Devarasetty

Keywords: Dual-channel transformer model; Bayesian optimization; EfficientNet; pose estimation; SLAB dataset

PDF

Paper 64: Energy-Efficient Cloud Computing Through Reinforcement Learning-Based Workload Scheduling

Abstract: Cloud computing is the basis of current digital infrastructure, allowing scalable, on-demand access to computational resources. Data center power consumption, however, has skyrocketed as demand increases, raising operating costs and the environmental footprint. Traditional workload scheduling algorithms often prioritize performance and cost over energy efficiency. This paper proposes a workload scheduling method utilizing deep reinforcement learning (DRL) that adjusts dynamically to present cloud conditions to ensure optimal energy efficiency without compromising performance. The proposed method utilizes Deep Q-Networks (DQN) with feature engineering to identify key workload parameters such as execution time and CPU and memory consumption, and subsequently schedules tasks intelligently based on these results. In evaluation, the model brings latency down to 15 ms and throughput up to 500 tasks/sec, with 92% load-balancing efficiency, 95% resource usage, and 97% QoS. The proposed approach outperforms conventional approaches such as Round Robin, FCFS, and heuristic methods on key parameters. These findings show how reinforcement learning can significantly enhance the scalability, reliability, and sustainability of cloud environments. Future work will focus on enhancing fault tolerance, incorporating federated learning for decentralized optimization, and testing the model on real-world multi-cloud infrastructures.
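The value-update rule underlying the DQN scheduler can be sketched in tabular form on a toy two-machine example; the paper's network replaces this table with a neural approximator, and the states, actions, and rewards below are illustrative only:

```python
import random

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning update; a DQN replaces this lookup table
    with a neural network trained on the same temporal-difference
    target."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Toy scheduler: in state "hot" (overloaded server), routing a task to
# the idle machine saves energy (reward 1); routing to the busy one
# wastes it (reward -1). Rewards here are purely illustrative.
random.seed(1)
q = {"hot": {"idle": 0.0, "busy": 0.0}, "cool": {"idle": 0.0, "busy": 0.0}}
for _ in range(500):
    action = random.choice(["idle", "busy"])
    reward = 1.0 if action == "idle" else -1.0
    q_update(q, "hot", action, reward, "cool")
assert q["hot"]["idle"] > q["hot"]["busy"]  # policy learns to prefer idle
```

In the actual system the state would encode queue lengths, CPU/memory utilization, and energy readings, and the reward would balance energy savings against latency and QoS targets.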

Author 1: Ashwini R Malipatil
Author 2: M E Paramasivam
Author 3: Dilfuza Gulyamova
Author 4: Aanandha Saravanan
Author 5: Janjhyam Venkata Naga Ramesh
Author 6: Elangovan Muniyandy
Author 7: Refka Ghodhbani

Keywords: Cloud computing; energy efficiency; reinforcement learning; virtual machine; workload scheduling

PDF

Paper 65: WOAAEO: A Hybrid Whale Optimization and Artificial Ecosystem Optimization Algorithm for Energy-Efficient Clustering in Internet of Things-Enabled Wireless Sensor Networks

Abstract: In the Internet of Things (IoT) era, energy efficiency in Wireless Sensor Networks (WSNs) is of utmost importance given the finite power resources of sensor nodes. An efficient Cluster Head (CH) selection greatly influences network performance and lifetime. This paper suggests a novel energy-efficient clustering protocol that hybridizes the Whale Optimization Algorithm (WOA) and Artificial Ecosystem Optimization (AEO), called WOAAEO. It utilizes the exploration capabilities of AEO and the exploitation strengths of WOA in optimizing CH selection and balancing energy consumption and network efficiency. The proposed method is structured into two phases: CH selection using the WOAAEO algorithm and cluster formation based on Euclidean distance. The new method was modeled in MATLAB and compared with current algorithms. Results show that WOAAEO increases the network lifetime by a maximum of 24%, enhances the packet delivery rate by a maximum of 21%, and reduces energy consumption by a maximum of 35% compared to related algorithms. The results show that WOAAEO can be a suitable solution to energy-saving issues in WSNs and can thus be readily applied in IoT environments.
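The second phase of the protocol — forming clusters by assigning each node to the nearest cluster head by Euclidean distance — can be sketched directly. The node and cluster-head coordinates below are invented:

```python
import math

def form_clusters(nodes, cluster_heads):
    """Assign each sensor node to its nearest cluster head by
    Euclidean distance (the cluster-formation phase, after CH
    selection by the optimizer)."""
    assignment = {}
    for name, (x, y) in nodes.items():
        nearest = min(cluster_heads,
                      key=lambda ch: math.hypot(x - cluster_heads[ch][0],
                                                y - cluster_heads[ch][1]))
        assignment[name] = nearest
    return assignment

# Hypothetical 100m x 100m field with two selected cluster heads.
chs = {"CH1": (20.0, 20.0), "CH2": (80.0, 80.0)}
nodes = {"n1": (10.0, 25.0), "n2": (90.0, 70.0), "n3": (55.0, 60.0)}
clusters = form_clusters(nodes, chs)
assert clusters["n1"] == "CH1"
assert clusters["n2"] == "CH2"
assert clusters["n3"] == "CH2"  # (55,60) lies closer to (80,80)
```

Shorter node-to-CH distances mean lower transmission power per round, which is how this simple assignment rule feeds into the energy savings the paper reports.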

Author 1: Shengnan BAI
Author 2: Ningning LIU
Author 3: Yongbing JI
Author 4: Kecheng WANG

Keywords: Clustering; Internet of Things; energy efficiency; wireless sensor network; network lifespan

PDF

Paper 66: Improvement of Rainfall Estimation Accuracy Using a Convolutional Neural Network with Convolutional Block Attention Model on Surveillance Camera

Abstract: Accurate rainfall estimation is essential for various applications, including transportation management, agriculture, and climate modeling. Traditional measurement methods, such as rain gauges and radar systems, often face challenges due to limited spatial resolution and susceptibility to environmental interference. These constraints limit the ability to deliver high-resolution, real-time rainfall data, making it challenging to capture localized variations effectively. This study therefore introduces a hybrid deep learning architecture that combines a Convolutional Neural Network (CNN) with a Convolutional Block Attention Module (CBAM) to improve rainfall intensity estimation using images captured by surveillance cameras. The proposed model was evaluated using standard datasets and previously unseen images collected at different times of day, including morning, noon, afternoon, and night, to assess its robustness against temporal variations. The experimental results showed that the VGG-CBAM architecture performed better than the ResNet (Residual Network)-CBAM across all evaluation metrics, achieving a coefficient of determination (R²) of 0.93 compared to 0.89. Furthermore, when tested on unseen images captured at different periods, the model showed strong generalization capability, with correlation values (R) ranging from 0.77 to 0.98. These results signify the effectiveness of the proposed method in improving the accuracy and adaptability of image-based rainfall estimation, offering a scalable and high-resolution alternative to conventional measurement methods.
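The channel branch of CBAM — squeeze each feature channel with average- and max-pooling, then produce a per-channel gate — can be sketched in a stripped-down form. The module's shared MLP is reduced to an identity mapping here for brevity, and the feature values are invented:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """CBAM-style channel attention (sketch): squeeze each channel
    with average- and max-pooling, combine the two descriptors, and
    emit a per-channel gate in (0, 1). The real module passes both
    descriptors through a shared MLP; an identity mapping stands in
    for it here.
    """
    gates = []
    for fmap in feature_maps:            # fmap: flattened channel values
        avg = sum(fmap) / len(fmap)
        mx = max(fmap)
        gates.append(sigmoid(avg + mx))  # identity "MLP" + sigmoid
    # Rescale each channel by its gate.
    refined = [[v * g for v in fmap]
               for fmap, g in zip(feature_maps, gates)]
    return gates, refined

# Channel 0 carries strong activations (e.g. rain-streak responses);
# channel 1 is nearly silent, so its gate is smaller.
fmaps = [[2.0, 3.0, 4.0], [0.1, 0.0, 0.1]]
gates, refined = channel_attention(fmaps)
assert gates[0] > gates[1]
assert all(0.0 < g < 1.0 for g in gates)
```

The spatial branch of CBAM works analogously, pooling across channels instead of within them, so the two branches together emphasize both informative channels and informative image regions.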

Author 1: Iqbal
Author 2: Adhi Harmoko Saputro
Author 3: Alhadi Bustamam
Author 4: Ardasena Sopaheluwakan

Keywords: Rainfall; surveillance camera; hybrid deep learning; CBAM

PDF

Paper 67: Adaptive AI-Based Personalized Learning for Accelerated Vocabulary and Syntax Mastery in Young English Learners

Abstract: Language acquisition is an integral part of early schooling, but young English language learners struggle to learn vocabulary and syntax when they are not provided with specialized instruction. Conventional teaching rarely accommodates different learning speeds, leading to unbalanced levels of proficiency among students and possible disengagement among slower learners. Present computer-assisted learning aids provide interactive practice but lack real-time adaptation and personalized feedback, limiting their capacity to address learners' unique problems. To overcome these constraints, this study proposes an Artificial Intelligence-based personalized learning system that supports vocabulary and syntax learning via adaptive learning models, NLP-based chatbots, and gamified interactive lessons. The system dynamically adapts content according to students' most recent performance in real time to enable a personalized and efficient learning experience. The research uses an experimental study design with two groups: an AI-supported learning group and a traditional learning group. A pre-test and post-test design measures the effects of the system on vocabulary recall and syntax correctness. Learner engagement measures such as survey results and qualitative feedback inform learner experience and learning efficacy. Initial results indicate that learners working with the AI-powered learning system gained 25 per cent in vocabulary recall and 30 per cent in syntax accuracy over the control group. Further, learner engagement rates are elevated because of real-time feedback and gamification components. These results emphasize the promise of AI-based personalized learning to boost language acquisition and lay the basis for further effective innovations in adaptive education technologies.

Author 1: Angalakuduru Aravind
Author 2: M. Durairaj
Author 3: Preeti Chitkara
Author 4: Yousef A.Baker El-Ebiary
Author 5: Elangovan Muniyandy
Author 6: Linginedi Ushasree
Author 7: Mohamed Ben Ammar

Keywords: AI-based learning; gamification; language acquisition; personalized feedback; vocabulary

PDF

Paper 68: DenseRSE-ASPPNet: An Enhanced DenseNet169 with Residual Dense Blocks and CE-HSOA-Based Optimization for IoT Botnet Detection

Abstract: The growing prevalence of Internet of Things (IoT) devices has heightened vulnerabilities to botnet-based cyberattacks, necessitating robust detection mechanisms. This paper proposes DenseRSE-ASPPNet, an advanced deep learning framework for botnet detection, incorporating comprehensive preprocessing, feature extraction, and optimization. The preprocessing pipeline includes data cleaning and Min-Max normalization to ensure high-quality input data. The DenseNet169 backbone is enhanced with Residual Squeeze-and-Excitation (RSE) blocks for channel-wise attention recalibration and Atrous Spatial Pyramid Pooling (ASPP) for capturing multi-scale spatial patterns, enabling effective feature extraction. Hyperparameter optimization is performed using the Cyclone-Enhanced Humboldt Squid Optimization Algorithm (CE-HSOA), which balances global exploration and local exploitation, ensuring faster convergence and enhanced robustness. Experimental results demonstrate the superior performance of the proposed framework, achieving 99.00 per cent accuracy, 96.40 per cent sensitivity, and 99.95 per cent specificity, significantly minimizing false positives and false negatives. The proposed DenseRSE-ASPPNet provides an efficient, scalable, and effective solution for mitigating botnet threats in IoT environments.

Author 1: Mohd Abdul Rahim Khan

Keywords: Internet of Things; botnet detection; DenseRSE-ASPPNet; residual squeeze-and-excitation blocks; Cyclone-Enhanced Humboldt Squid Optimization Algorithm

PDF

Paper 69: Clustering Analysis of Physicians' Performance Evaluation: A Comparison of Feature Selection Strategies to Support Medical Decision-Making

Abstract: Evaluating physicians' performance is one of the fundamental pillars of improving the quality of healthcare in medical institutions, as it contributes to measuring their ability to provide appropriate treatment, interact effectively with patients, and work within healthcare teams. This study aims to explore the impact of attribute selection on the accuracy of physician clustering using the K-Means algorithm, to improve physician performance assessment. Three datasets containing professional, medical, and administrative attributes were analyzed, such as age, nationality, job title, years of experience, number of operations, and evaluations from various entities. The optimal number of clusters was determined using the Elbow and Silhouette Score methods. The results showed that the original feature set and Lasso features performed best at k = 3, with a clear distinction between clusters. The "three-star" cluster performed well at k = 2 but lost some fine details. It was also shown that attribute selection directly affects the number and accuracy of clusters resulting from clustering, allowing for a clearer classification of physician categories. The study recommends using either original features or Lasso features to achieve more effective clustering, which supports improved recruitment, training, and management decision-making processes in healthcare organizations.
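The Elbow method mentioned above compares the within-cluster sum of squares (WCSS) as k grows, looking for the last large drop. A minimal sketch over hand-built partitions of invented physician scores (illustrative only, not the study's data):

```python
def inertia(clusters):
    """Within-cluster sum of squares (WCSS) for 1-D clusters — the
    quantity plotted against k in the Elbow method."""
    total = 0.0
    for c in clusters:
        mu = sum(c) / len(c)
        total += sum((v - mu) ** 2 for v in c)
    return total

# Hypothetical physician scores with three natural groups.
scores = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7, 9.1, 8.9, 9.0]
# WCSS under hand-built partitions for k = 1, 2, 3:
k1 = inertia([scores])
k2 = inertia([scores[:3], scores[3:]])
k3 = inertia([scores[:3], scores[3:6], scores[6:]])
# The curve drops steeply up to k = 3, and the drop from 2 to 3 is the
# last large one: the "elbow" sits at k = 3 for this data.
assert k1 > k2 > k3
```

In the study these partitions would come from running K-Means at each k, and the Silhouette Score would be computed alongside WCSS to confirm the chosen number of clusters.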

Author 1: Amani Mustafa Ghazzawi
Author 2: Alaa Omran Almagrabi
Author 3: Hanaa Mohammed Namankani

Keywords: Physicians; performance; evaluation; clustering; k-means; features; decision making

PDF

Paper 70: Exploring Digital Insurance Solutions: A Systematic Literature Review and Future Research Agenda

Abstract: The purpose of this study is to explore the antecedents of the adoption of digital insurance solutions and to present current research trends and future research agendas based on a systematic literature review. The findings revealed key motivators for the adoption of digital insurance solutions, such as trust, perceived usefulness, ease of use, performance and effort expectancy, social influence, subjective norms, self-efficacy, system quality, and attitudes. Meanwhile, the key inhibitors include perceived risk, privacy concerns, complexity, and technology anxiety. The study shows that current research themes focus primarily on the online insurance sector, while paying little attention to emerging technologies. Although the Technology Acceptance Model (TAM) is the most widely applied theory in digital insurance adoption studies, its explanatory power needs to be enhanced by introducing new theories. Moreover, most research samples consist of insurance consumers, with less attention paid to user groups excluded from financial services. Questionnaires and Structural Equation Modeling (SEM) are commonly used methods, but they still have limitations when dealing with large samples and complex behavioral changes. This study provides guidance for governments in promoting the implementation of digital insurance solutions, alongside strategic support for insurers to optimise user experience and enhance industry competitiveness.

Author 1: Anni Wei
Author 2: Yurita Yakimin Abdul Talib
Author 3: Zakiyah Sharif

Keywords: Digital insurance; Technology Acceptance Model; antecedents of adoption; systematic literature review; future research agenda

PDF

Paper 71: Towards an Optimization Model for Household Waste Bins Location Management

Abstract: Smart cities require effective, adaptive household waste management systems due to rapid urbanization. Traditional bin placement strategies, which place bins equidistant among residents, fail to account for actual human behavior, leading to overflowing or underused bins. This paper addresses optimizing bin location and capacity through Internet of Things (IoT) technologies and data-driven decision-making. LoRaWAN sensors were deployed in Tangier City as a case study, and real-time usage information was collected and analyzed. Through statistical analysis and outlier detection techniques, the proposed approach identifies non-optimized bin placements. It also evaluates data quality and classifies bins by their usage level. The results show that several bins were consistently overused or underused, indicating that dynamic placement and capacity adjustment would improve waste collection efficiency, reduce operational costs, and enhance citizen satisfaction within a Smart City framework.
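One simple way to flag consistently over- or under-used bins is the interquartile-range (IQR) rule; the abstract does not specify which outlier-detection technique the paper uses, and the fill rates below are hypothetical:

```python
import statistics

def iqr_outliers(fill_rates, k=1.5):
    """Return indices of bins whose average fill rate is an IQR outlier."""
    q = statistics.quantiles(fill_rates, n=4)  # q[0] = Q1, q[2] = Q3
    q1, q3 = q[0], q[2]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i for i, r in enumerate(fill_rates) if r < lo or r > hi]

# Hypothetical daily average fill rates (%) reported by LoRaWAN sensors
rates = [42, 45, 48, 50, 51, 53, 55, 5, 98]
print(iqr_outliers(rates))  # flags the under-used (5%) and over-used (98%) bins
```

Bins flagged this way are candidates for relocation or capacity adjustment, matching the dynamic-placement recommendation in the abstract.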

Author 1: Moulay Lakbir Tahiri Alaoui
Author 2: Meryam Belhiah
Author 3: Soumia Ziti

Keywords: Smart City; IoT; household waste; LoRaWAN; bin location; outlier detection

PDF

Paper 72: Enhancing Electric Vehicle Security with Face Recognition: Implementation Using Raspberry Pi

Abstract: Facial identification has emerged as a key research area due to its potential to enhance biometric security. This research proposes an advanced security system for electric vehicles (EVs) based on facial identification, implemented using Raspberry Pi. The system comprises two main modules: Face Detection and Face Recognition. For face detection, the researchers propose using the Viola-Jones algorithm, which leverages Haar-like features to detect and extract unique facial features, such as the eyes, nose, and mouth. MATLAB will be used as the development tool for this module. For face recognition, the proposed approach integrates Principal Component Analysis (PCA) with Support Vector Machine (SVM). PCA is used to extract the most relevant facial information and construct a computational model, while SVM enhances classification accuracy. The system's performance is evaluated using accuracy and the Receiver Operating Characteristic (ROC) curve, with results demonstrating a face recognition accuracy of 95% and an average execution time of 2.32 seconds, meeting real-time operational requirements. These findings confirm the proposed method’s reliability in offering advanced and efficient biometric protection for modern electric vehicles.
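The PCA-plus-SVM recognition stage can be sketched with scikit-learn; here `load_digits` stands in for the enrolled face images, and the component count and SVM parameters are illustrative rather than those of the paper (which implements the pipeline in MATLAB):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Stand-in image data (8x8 digit images flattened to 64-d vectors)
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# PCA keeps the most informative components ("eigenface"-style compression),
# then an RBF SVM classifies the projected vectors.
model = make_pipeline(PCA(n_components=40, whiten=True),
                      SVC(kernel="rbf", C=10, gamma="scale"))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
print(f"accuracy: {acc:.3f}")
```

The same two-stage structure (dimensionality reduction, then a margin-based classifier) is what gives the paper's system its reported 95% recognition accuracy on embedded hardware.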

Author 1: Jamil Abedalrahim Jamil Alsayaydeh
Author 2: Chin Wei Yi
Author 3: Rex Bacarra
Author 4: Fatimah Abdulridha Rashid
Author 5: Safarudin Gazali Herawan

Keywords: Face recognition; face detection; Principal Component Analysis (PCA); Support Vector Machine (SVM); Raspberry Pi

PDF

Paper 73: Modelling the Moderating Role of Government Policy in Cryptocurrency Investment Acceptance

Abstract: Without the requirement for third-party approval, cryptocurrency enables anonymous, secure, quick, and inexpensive financial transactions. Although cryptocurrency is gaining global popularity, its applications are still limited. This research aims to investigate the factors influencing the acceptance of cryptocurrency as an investment tool, focusing on the moderating role of government policy. Using the Unified Theory of Acceptance and Use of Technology (UTAUT) extended with awareness, security, and trust, a survey was conducted with 220 respondents. Structural Equation Modelling (SEM) was employed to analyse the data. The findings revealed that the usage of cryptocurrencies is significantly affected by performance expectancy, facilitating conditions, social influence, awareness, and security in investment. However, trust does not affect the acceptance of cryptocurrency as an investment. The outcomes generate vital insights and strategies for cryptocurrency users, offering a crucial examination for stakeholders and professionals keen on understanding the underlying dynamics of cryptocurrency acceptance in investment.

Author 1: Maslinda Mohd Nadzir
Author 2: Rabea Abdulrahman Raweh
Author 3: Hapini Awang
Author 4: Huda Ibrahim

Keywords: Cryptocurrency; acceptance; investment; UTAUT; government policy

PDF

Paper 74: Healthy and Unhealthy Oil Palm Tree Detection Using Deep Learning Method

Abstract: Oil palm trees are the world's most efficient and economically productive oil-bearing crop. Their fruit can be processed into components needed in various products, such as beauty products and biofuel. In Malaysia, the oil palm industry contributes around 2.2% annually to the nation's GDP. The continuous surge in worldwide demand for palm oil has created an awareness among local plantation owners to apply more monitoring standards to the trees to increase their yield. However, Malaysia's cultivation and monitoring process still depends mainly on manual labor, which makes it inefficient and expensive. This scenario motivates owners to innovate the tree monitoring process through the use of computer vision techniques. This paper aims to develop an object detection model to differentiate healthy and unhealthy oil palm trees from aerial images collected by a drone over an oil palm plantation. Different pre-trained models, such as Faster R-CNN (Region-Based Convolutional Neural Network) and SSD (Single-Shot MultiBox Detector), with different backbone modules, such as ResNet, Inception, and Hourglass, are used on the images of palm leaves. A comparison is then made to select the best model based on the AP and AR at various scales and the total loss for differentiating healthy and unhealthy oil palms. The Faster R-CNN ResNet101 FPN model performed the best among the models, with AP (area = all) of 0.355, AR (area = all) of 0.44, and a total loss of 0.2296.

Author 1: Kang Hean Heng
Author 2: Azman Ab Malik
Author 3: Mohd Azam Bin Osman
Author 4: Yusri Yusop
Author 5: Irni Hamiza Hamzah

Keywords: Oil palm detection; deep learning models; object detection; Faster R-CNN; drone imagery analysis

PDF

Paper 75: Intelligent Guitar Chord Recognition Using Spectrogram-Based Feature Extraction and AlexNet Architecture for Categorization

Abstract: Chord prediction plays a key role in the advancement of musical technological innovations, such as automatic music transcription, real-time music tutoring, and intelligent composition tools. Accurate chord prediction can assist musicians, educators, and developers in constructing tools that help in learning, playing, and composing music. Background noise and audio distortions may affect chord prediction accuracy, particularly in real-world situations. Chords can have distinct voicings or finger positions on the guitar, resulting in slight variations in audio representation. This study focuses on the classification of guitar chords using deep learning techniques. The dataset contains eight major and minor guitar chords, which are converted into spectrograms, chromagrams, and Mel Frequency Cepstral Coefficients (MFCC) for feature extraction. Various deep learning architectures, including CNN, ResNet50, AlexNet, and VGG, were employed to classify the chords. Experimental results demonstrated that the spectrogram-based AlexNet model outperforms the others, achieving good accuracy and robustness in chord classification. The proposed study demonstrates the efficiency of spectrograms and advanced deep learning models for audio signal processing in music applications. By automating chord detection, this study provides beneficial resources for music learners as well as educators, enabling more efficient learning and real-time feedback during practice sessions.
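A minimal magnitude-spectrogram front end of the kind fed to the CNN classifiers can be sketched with NumPy; the frame size, hop, and sample rate are assumed, not taken from the paper:

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=256):
    """Magnitude spectrogram: one row per frame, n_fft//2 + 1 frequency bins."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

sr = 8000                                  # sample rate (Hz)
t = np.arange(sr) / sr                     # one second of audio
# G major triad (G3, B3, D4) as a synthetic stand-in for a strummed chord
chord = sum(np.sin(2 * np.pi * f * t) for f in (196.0, 246.9, 293.7))
S = spectrogram(chord)
peak_hz = S.mean(axis=0).argmax() * sr / 512
print(S.shape, peak_hz)  # the spectral peak falls within the triad's pitch range
```

The resulting time-frequency image is what distinguishes one chord's harmonic stack from another's, which is why the spectrogram input outperformed the other representations in the study.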

Author 1: Nilesh B. Korade
Author 2: Mahendra B. Salunke
Author 3: Amol A. Bhosle
Author 4: Sunil M. Sangve
Author 5: Dhanashri M. Joshi
Author 6: Gayatri G. Asalkar
Author 7: Sujata R. Kadu
Author 8: Jayesh M. Sarwade

Keywords: Chords; prediction; spectrogram; chromagram; Mel Frequency Cepstral Coefficients; AlexNet

PDF

Paper 76: Portable and Lightweight Signal Processing Approach for sEMG-Based Human–Machine Interaction in Robotic Hands

Abstract: Surface electromyography (sEMG) presents a viable biosignal for the control of robotic prosthetic hands, as it directly correlates with underlying muscle activity. This study introduces an efficient, computationally lightweight signal processing methodology designed for real-time embedded systems. The proposed methodology comprises a preprocessing pipeline, incorporating bandpass and notch filtering, followed by segmentation via overlapping sliding windows. Time-domain features, specifically Mean Absolute Value (MAV), Zero Crossing (ZC), Waveform Length (WL), Slope Sign Change (SSC), and Variance (VAR), are extracted to characterize relevant muscular activation patterns. By prioritizing computational efficiency and embedded system feasibility, this method establishes a practical framework for user intent recognition and real-time control of wearable robotic hands, particularly within assistive and rehabilitative applications. The experimental findings clearly indicate that the extracted features effectively differentiate between various hand gestures, allowing for accurate, real-time control of the wearable robotic hand. The system's high responsiveness, low latency, and resilience to noise underscore its suitability for assistive and rehabilitative applications. With its focus on computational simplicity and feasibility for embedded implementation, the proposed method provides a practical basis for recognizing user intent in human-machine interaction systems.
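The five time-domain features named above can be sketched in NumPy; the window length, sampling rate, and noise threshold below are illustrative, not the paper's values:

```python
import numpy as np

def td_features(x, thresh=0.01):
    """MAV, ZC, WL, SSC, VAR over one sEMG analysis window."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    mav = np.mean(np.abs(x))          # Mean Absolute Value
    wl = np.sum(np.abs(dx))           # Waveform Length
    var = np.var(x, ddof=1)           # Variance
    # Zero Crossings: sign change whose step exceeds the noise threshold
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) >= thresh))
    # Slope Sign Changes: derivative changes sign with sufficient magnitude
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &
                 ((np.abs(dx[:-1]) >= thresh) | (np.abs(dx[1:]) >= thresh)))
    return {"MAV": mav, "ZC": int(zc), "WL": wl, "SSC": int(ssc), "VAR": var}

# 200 ms window of a synthetic 47 Hz "muscle burst" sampled at 1 kHz
t = np.arange(200) / 1000.0
x = 0.5 * np.sin(2 * np.pi * 47 * t)
feats = td_features(x)
print(feats)
```

Each overlapping window yields one such 5-dimensional feature vector, which keeps the per-window cost low enough for the embedded real-time control loop the paper targets.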

Author 1: Ngoc-Khoat Nguyen

Keywords: sEMG; myo-prosthesis; myosignals; human–prosthesis interface; signal processing

PDF

Paper 77: Enhancing Match Detection Process Using Chi-Square Equation for Improving Type-3 and Type-4 Clones in Java Applications

Abstract: Generic Code Clone Detection (GCCD) is a code clone detection model that uses a distance measure equation, enabling detection of all types of code clones, namely Type-1, Type-2, Type-3, and Type-4, in Java applications. However, the detection process of GCCD did not focus on detecting clones of Type-3 and Type-4. Hence, this paper presents two experiments that incorporate enhancements to GCCD in order to improve the detection rate of Type-3 and Type-4 clones. Implementing the Chi-square distance in the match detection process produced a significant increase in results, specifically for Type-3 and Type-4 clones, compared with the Euclidean distance used in GCCD; the dissimilarity between the two distance measures accounts for the improved detection rate. Based on the results, the suggested enhancement using Chi-square distance in the match detection process outperforms GCCD in improving code clone detection for Type-3 and Type-4 clones, as the objectives of each experiment are met, contributing to research on improving code clone detection results.
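The difference between the two distance measures can be illustrated directly; the token-count vectors below are hypothetical, not from the paper, and the Chi-square form shown is one common variant:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def chi_square(a, b):
    # sum_i (a_i - b_i)^2 / (a_i + b_i), skipping empty bins:
    # each squared difference is weighted by the counts' magnitude
    return sum((x - y) ** 2 / (x + y) for x, y in zip(a, b) if x + y > 0)

# Hypothetical characteristic-count vectors for two code fragments
frag1 = [10, 4, 0, 6]
frag2 = [8, 4, 2, 3]

print(euclidean(frag1, frag2))   # ≈ 4.123
print(chi_square(frag1, frag2))  # ≈ 3.222
```

Because the Chi-square distance normalizes each term by the counts involved, small absolute differences in rare characteristics weigh more than they do under Euclidean distance, which is the behavioral change the paper exploits for Type-3 and Type-4 clones.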

Author 1: Noormaizzattul Akmaliza Abdullah
Author 2: Al-Fahim Mubarak-Ali
Author 3: Mohd Azwan Mohamad Hamza
Author 4: Siti Salwani Yaacob

Keywords: Code clone detection; distance measure; Java language; Chi-square; computational intelligence

PDF

Paper 78: Transforming Internal Auditing: Harnessing Retrieval-Augmented Generation Technology

Abstract: The advent of cloud-based Generative AI models, such as ChatGPT, Google Gemini, and Claude, has created new opportunities for improving education through real-time, adaptive learning experiences. Despite their widespread use globally, their application in South African higher education remains limited and underexplored, resulting in an application gap. This paper, as Phase 1 of a larger project, addresses this gap by focusing on the development of a Retrieval-Augmented Generation (RAG) web application designed to enhance Internal Auditing education at the Durban University of Technology. This is achieved by integrating three powerful Generative AI models—OpenAI GPT-4o-mini, Google Gemini-1.5-flash, and Anthropic Claude-3-haiku—into a single educational platform that will enable lecturers to manage and augment lecture materials while allowing students to access personalized, AI-generated content. This paper presents the design considerations, architecture, and integration techniques employed in the development of the RAG web application, offering insights into the potential of adaptive learning, personalized learning, and AI-driven tutoring in South Africa’s educational landscape. This paper demonstrates how a RAG web application can provide the building blocks for future generative AI applications that could enhance teaching and learning with minimal effort from lecturers and learners in the South African context.

Author 1: Olive Stumke
Author 2: Fanie Ndlovu

Keywords: Adaptive learning; Anthropic Haiku; benefits; challenges; Generative AI; Google Gemini API Pro; higher education; internal auditing; OpenAI GPT-Turbo; personalized learning; RAG (Retrieval-Augmented Generation); South Africa

PDF

Paper 79: Development of an Interactive Oral English Translation System Leveraging Deep Learning Techniques

Abstract: An advanced interactive English oral automatic translation system has been developed using deep learning techniques to address key challenges in current systems: low success rates, lengthy processing times, and limited accuracy. The core of this innovation lies in a deep learning translation model that leverages neural network architectures, combining logarithmic and linear models to efficiently map and decompose the activation functions of target neurons. The system dynamically calculates neuron weight ratios and compares vector levels, enabling precise and responsive interactive translations. A robust system framework is established around a central text conversion module, integrating hardware components such as the I/O bus, I/O bridge, recorder, interactive information collector, and an initial language correction unit. Key hardware includes the WT588F02 recording and playback chip (with external flash) for audio recording and NAND flash memory for efficient data storage. Noise reduction is achieved using the POROSVOC-PNC201 audio processor, while the aml100 chip enhances audio detection capabilities. Extensive neural network testing on a dataset of 1.8 million translation samples demonstrates the system's superior performance: a success rate exceeding 80% (a 10% improvement over existing methods), a translation time of under 50 ms (a 30% reduction), and a translation accuracy of over 95% (a 5% improvement). By combining deep learning advancements with high-performance computing and optimized hardware integration, the system sets a new benchmark in interactive English oral translation.

Author 1: Dan Zhao
Author 2: HeXu Yang

Keywords: Deep learning; interactive English; spoken English; automatic translation; translation system

PDF

Paper 80: Impact of Cryptocurrencies and Their Technological Infrastructure on Global Financial Regulation: Challenges for Regulators and New Regulations

Abstract: The rise of cryptocurrencies is transforming the landscape of global finance, but their very decentralized nature is triggering unprecedented challenges for regulatory systems. This systematic literature review (SLR) aimed to gather and synthesize information to understand the functioning of cryptocurrencies in relation to their regulatory challenges. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology supports the rigor of the research, where 50 studies published between 2022 and 2025 were selected from databases such as Scopus, Web of Science, IEEE Xplore, and ScienceDirect. Among the results, it was observed that the continents with the greatest contributions were Europe and Asia, representing 60% and 25% of the studies analyzed, respectively. Likewise, the period with the highest scientific production was the year 2024, with 50% of the manuscripts published. Regarding the analysis of keyword co-occurrence using VOSviewer, it was found that "blockchain" and "cryptocurrency" were the most predominant terms, with 18 and 16 mentions, highlighting their centrality in the academic discussion. Ultimately, the research highlights that cryptocurrencies bring with them major regulatory challenges, such as money laundering and lack of legal clarity, while blockchain emerges as an essential tool to improve the transparency and operability of financial regulation.

Author 1: Juan Chavez-Perez
Author 2: Raquel Melgarejo-Espinoza
Author 3: Victor Sevillano-Vega
Author 4: Orlando Iparraguirre-Villanueva

Keywords: Cryptocurrencies; financial regulation; blockchain; regulatory challenges; cryptocurrency laws

PDF

Paper 81: Developing a Comprehensive NLP Framework for Indigenous Dialect Documentation and Revitalization

Abstract: The disappearance of Indigenous languages results in a decrease in cultural diversity, making the preservation of these languages extremely important. Conventional methods of documentation are lengthy, and current AI solutions fall short due to data scarcity, dialectal variation, and poor adaptability to low-resource languages. A novel NLP framework is proposed to address these problems by combining Meta-Learning and Contrastive Learning: adaptation to low-resource languages becomes rapid via meta-learning (MAML), while dialect differentiation is enhanced through contrastive learning. Model training is carried out on the Tatoeba (text) and Mozilla Common Voice (speech) datasets to ensure robust performance in both text and phonetic tasks. The results indicate a 15% reduction in Word Error Rate (WER), an 18% improvement in translation BLEU score, and a 12% improvement in dialect-classification F1-score. Testing was also done with native speakers to assess practical viability. The result is a real-time translation, transcription, and language documentation system deployed via a cloud-based platform, reaching Indigenous communities globally. This dual-learning framework represents a scalable, adaptive, and cost-efficient solution for language revitalization. The proposed models set new standards for low-resource NLP and make tangible contributions towards the digital sustainability of endangered dialects.
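Word Error Rate, the metric behind the reported 15% reduction, is the Levenshtein edit distance between word sequences divided by the reference length; a minimal sketch with illustrative strings:

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance / reference word count."""
    r, h = reference.split(), hypothesis.split()
    # DP table of edit distances between prefixes (Levenshtein)
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words
```

A 15% relative reduction means, for example, a WER of 0.333 falling to roughly 0.283 on the same test set.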

Author 1: Mohammed Fakhreldin

Keywords: Indigenous language preservation; natural language processing; meta-learning; contrastive learning; low-resource languages

PDF

Paper 82: Optimizing Document Classification Using Modified Relative Discrimination Criterion and RSS-ELM Techniques

Abstract: Internet content is increasing daily, and more data are being digitized due to technological advancements. Ever-increasing textual data in words, phrases, terms, sentences, and paragraphs pose significant challenges to effective classification and require sophisticated techniques to arrange them automatically. The vast amount of textual data presents an opportunity to organise and extract valuable insights by identifying crucial pieces of information using feature selection techniques. Our article proposes "a Modified Relative Discrimination Criterion (MRDC) Technique and Ringed Seal Search-Extreme Learning Machine (RSS-ELM) to improve document classification", which prioritizes key data and fits corresponding documents into appropriate classes. The proposed MRDC and RSS-ELM techniques are compared with several existing techniques, such as the Relative Discrimination Criterion (RDC), the Improved Relative Discrimination Criterion (IRDC), GA-ELM, and CS-ELM. The MRDC technique produced superior classification results with 91.60% accuracy compared to the existing RDC and IRDC for feature selection. Moreover, the RSS-ELM optimization technique improved predictions significantly, with 98.9% accuracy compared to CS-ELM and GA-ELM on the Reuters-21578 dataset.

Author 1: Muhammad Anwaar
Author 2: Ghulam Gilanie
Author 3: Abdallah Namoun
Author 4: Wareesa Sharif

Keywords: Feature selection; relative discrimination criterion; ringed seal search; extreme learning machine; metaheuristic algorithms; document classification; optimization

PDF

Paper 83: Extracting Facial Features to Detect Deepfake Videos Using Machine Learning

Abstract: Generative adversarial networks (GANs) have gained popularity for their ability to synthesize images from random inputs in deep learning models. One of the notable applications of this technology is the creation of realistic videos known as deepfakes, which have been misused on social media platforms. The difficulty lies in distinguishing these fake videos from real ones with the naked eye, leading to significant concerns. This study proposes a supervised machine learning approach to effectively differentiate between real and counterfeit videos by detecting visual artifacts. To achieve this, two facial features are extracted: eye blinking and nose position, utilizing landmark detection techniques. Both features were trained on supervised machine learning classifiers and evaluated using the publicly available UADFV and Celeb-DF deepfake datasets. The experiments successfully demonstrate that the proposed method achieves a promising and superior performance, with an area under the curve (AUC) of 97% for deepfake detection in contrast to state-of-the-art methods investigating the same datasets.

Author 1: Ayesha Aslam
Author 2: Jamaluddin Mir
Author 3: Gohar Zaman
Author 4: Atta Rahman
Author 5: Asiya Abdus Salam
Author 6: Farhan Ali
Author 7: Jamal Alhiyafi
Author 8: Aghiad Bakry
Author 9: Mustafa Jamal Gul
Author 10: Mohammed Gollapalli
Author 11: Maqsood Mahmud

Keywords: Deepfake; fake videos; facial features; GAN

PDF

Paper 84: Hybrid Approach for Early Road Defect Detection: Integrating Edge Detection with Attention-Enhanced MobileNetV3 for Superior Classification

Abstract: The early detection of road defects is critical for maintaining infrastructure quality and ensuring public safety. This research presents a hybrid approach that combines edge detection techniques with an enhanced deep learning model for efficient and accurate road defect classification. The process begins with edge detection to highlight structural irregularities, such as cracks and potholes, by emphasizing critical features in road surface images. These pre-processed images are then fed into a classification model based on MobileNetV3, augmented with an attention mechanism to improve feature weighting and model focus on defect-prone regions. The proposed system was evaluated on the Crack500 dataset of road surface images, achieving a classification accuracy of 96.2%. This demonstrates significant improvement compared to baseline models without edge detection or attention enhancements. The edge detection stage efficiently reduces noise, while the attention-augmented MobileNetV3 ensures robust feature discrimination, making the approach suitable for real-time and resource-constrained deployment scenarios. This study highlights the effectiveness of combining classical image processing with advanced neural network techniques. The proposed system has the potential to optimize road maintenance workflows, reduce operational costs, and improve road safety by enabling early and precise defect identification.
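A Sobel gradient-magnitude edge map is one classical way to realize the edge-detection stage that precedes the classifier; the abstract does not name the specific operator used, and the "road surface" below is synthetic:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude of a 2-D grayscale image (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # transpose gives the vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

# Synthetic road surface with a vertical crack (dark line) down the middle
img = np.ones((16, 16))
img[:, 8] = 0.0
edges = sobel_edges(img)
print(edges.max(), edges[:, :5].max())  # strong response at the crack, none elsewhere
```

The edge map concentrates the network's attention on intensity discontinuities (cracks, pothole rims) while flat, uniform asphalt produces near-zero response, which is the noise-reduction effect the abstract describes.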

Author 1: Ayoub Oulahyane
Author 2: Mohcine Kodad
Author 3: El Houcine Addou
Author 4: Sofia Ourarhi
Author 5: Hajar Chafik

Keywords: Road defect detection; edge detection; attention mechanism; MobileNetV3

PDF

Paper 85: Speech Decoding from EEG Signals

Abstract: The field of speech decoding is rapidly evolving, presenting new challenges and new opportunities for people with disabilities such as amyotrophic lateral sclerosis (ALS), stroke, or paralysis, and for those who support them. However, speech decoding is complex: it requires analysing brain waves, across spatial and temporal dimensions, before translating them into speech. Recent work attempts to recreate speech that is never physically spoken by analysing the brain Artificial-intelligence methods offer a breakthrough because they can analyse complex data, including EEG signals. This paper aims to decode imagined speech through training CNN, RNN, and XGBoost models on a suitable dataset consisting of recorded EEG signals. EEG from 23 individuals is acquired from a public online dataset. These data are preprocessed, and the features are extracted using five different methods. After data acquisition, preprocessing is performed to ensure its readability to the proposed models. After that, five different feature extraction methods have been used and evaluated. Training and testing the proposed models are done after pre-processing and feature extraction to produce classification results. The proposed model involves CNN, LSTM, and XGBoost as classifiers to achieve an effective and robust speech decoding process. The ultimate result reflects on the accuracy with which the algorithms can regenerate speech from EEG signal analysis. The findings will advance speech-decoding research by showing the potential of hybrid deep-learning architectures for precise decoding of imagined speech from EEG signals. These advances have promising potential for creating non-invasive communication systems to assist people with severe speech and motor disorders, thereby improving their quality of life and increasing the application scope of brain-computer interfaces.

Author 1: Salma Fahad Altharmani
Author 2: Maha M. Althobaiti

Keywords: Speech decoding; EEG; deep learning; CNN; RNN; hybrid models; Brain-Computer Interfaces (BCI)

PDF

Paper 86: Enhanced Emotion Recognition Using a Hybrid Autoencoder-LSTM Model Optimized with a Hybrid ACO-WOA Algorithm for Hyperparameter Tuning

Abstract: Emotion recognition is vital in human-computer interaction because it improves the quality of the interaction. This paper therefore proposes an improved emotion recognition method based on a hybrid Autoencoder-Long Short-Term Memory (LSTM) model and a newly developed hybrid of Ant Colony Optimization (ACO) and the Whale Optimization Algorithm (WOA) for hyperparameter tuning. The autoencoder reduces the dimensionality of the input data and extracts the features relevant to the model's task, while the LSTM captures the temporal structure of sequential inputs such as speech and video. The contribution of this research lies in the novel ACO-WOA combination for tuning the hyperparameters of the Autoencoder-LSTM model: the global search behavior of ACO and WOA improves search efficiency, the accuracy of the proposed emotion recognition system, and its generalization capacity. Experiments on benchmark emotion recognition datasets establish the efficiency of the proposed model against conventional methods. Recall rates for recognizing various emotions across different modalities were also higher with the hybrid Autoencoder-LSTM model, and the ACO-WOA optimization reduced the computational cost of hyperparameter tuning. The implementation, carried out in Python, achieves accuracies of 94.12% and 95.94% on audio and image datasets respectively, compared with the ConvLSTM and VGG16 deep learning models. The research therefore shows that the presented hybrid approach is a useful solution for emotion recognition in building empathetic AI systems and improving user interactions across fields including healthcare, entertainment, and customer support.

Author 1: Vinod Waiker
Author 2: Janjhyam Venkata Naga Ramesh
Author 3: Kiran Bala
Author 4: V. V. Jaya Rama Krishnaiah
Author 5: T. Jackulin
Author 6: Elangovan Muniyandy
Author 7: Osama R. Shahin

Keywords: Emotion recognition; autoencoder; long short-term memory; Ant Colony Optimization (ACO); Whale Optimization Algorithm (WOA)

PDF

Paper 87: Automated Defect Detection in Manufacturing Using Enhanced VGG16 Convolutional Neural Networks

Abstract: Automated defect detection in manufacturing is a critical component of modern quality control, ensuring high production efficiency and minimizing defective outputs. This study presents an enhanced VGG16-based convolutional neural network (CNN) model for defect classification and localization, improving upon traditional vision-based inspection methods. The proposed model integrates advanced deep learning techniques, including batch normalization and dropout regularization, to enhance generalization and prevent overfitting. Extensive experiments were conducted on benchmark manufacturing defect datasets, evaluating performance based on accuracy, loss evolution, precision, recall, and mean average precision (mAP). The results demonstrate that the enhanced VGG16 model outperforms conventional CNN architectures and the standard VGG16, achieving higher defect classification accuracy and superior feature extraction capabilities. The model successfully detects multiple defect types, including surface irregularities, scratches, and deformations, with improved robustness in complex industrial environments. Additionally, the receiver operating characteristic (ROC) analysis confirms the model's high sensitivity and specificity in distinguishing between defective and non-defective components. Despite its strong performance, challenges such as dataset scarcity, computational costs, and model interpretability remain areas for further research. Future directions include the integration of lightweight architectures for real-time deployment, generative adversarial networks (GANs) for data augmentation, and explainable AI techniques for improved transparency. The findings of this study highlight the transformative potential of deep learning in manufacturing defect detection, paving the way for intelligent, automated quality control systems that enhance production efficiency and reliability. The proposed approach contributes to the advancement of Industry 4.0 by enabling scalable, data-driven decision-making in manufacturing processes.

Author 1: Altynzer Baiganova
Author 2: Zhanar Ubayeva
Author 3: Zhanar Taskalyeva
Author 4: Lezzat Kaparova
Author 5: Roza Nurzhaubaeva
Author 6: Banu Umirzakova

Keywords: Automated defect detection; deep learning; convolutional neural networks; VGG16; quality control; manufacturing inspection; machine vision; Industry 4.0

PDF

Paper 88: Ontology-Based Business Processes Gap Analysis

Abstract: Business processes are subject to change for quality reasons (i.e., efficiency). The gap analysis process is a preliminary and essential step in discovering the gap between the to-be and as-is business processes, yet it usually resorts to a nonstandard, manual analysis process, making it unpredictable and complex. This paper proposes a standard method based on ontology principles and the business process design methodology (DEMO). The ontology unifies the shared vocabulary of the source and target business processes to enable this interoperability. Building an essential model is a core concept behind DEMO that provides an ontological view independent of realization and implementation issues and enables understanding of an enterprise's behavior. Moreover, this paper provides heuristics for detecting gaps, based on the premise that producing similar institutional facts reflects similar behavior between the to-be and as-is business processes. Since the domains of the source and target are the same, it is also possible to compare the inputs of corresponding actions. The paper proposes a UML activity model for modeling business processes, enriched with DEMO concepts, to provide a foundational and informative ontology for reasoning about gaps. The expected outcome is a contribution to the broader community of business process management, ERP, and strategic planning, enabling more informed decision-making.

Author 1: Abdelgaffar Hamed Ahmed Ali

Keywords: Business process; gap analysis; ontology for business processes

PDF

Paper 89: Investigation of Convolutional Neural Network Model for Vehicle Classification in Smart City

Abstract: Smart cities optimize efficiency by integrating advanced digital technologies, real-time data analytics, and intelligent automation. With the evolution of big data, smart cities enhance infrastructure and provide intelligent transportation solutions by integrating highly adaptable computer technologies, including artificial intelligence (AI). This optimization can be achieved through predictive analytics, which in turn requires reliable and accurate data as input. Therefore, in this paper, five Convolutional Neural Network (CNN) deep learning models are investigated to determine the most accurate model for classification, namely Single Shot Detector (SSD) ResNet50, SSD ResNet152, SSD MobileNet, You Only Look Once (YOLO) YOLOv5, and YOLOv8. A total of 1324 vehicle images are collected to test these CNN models. The images consist of five categories of vehicles: ambulance, car, motorcycle, bus, and truck. The performances of all the models are compared. From the evaluation, the YOLOv8 model attained a precision of 0.956, a recall of 0.968, and an F1 score of 0.968, outperforming the others. In terms of computational time, YOLOv5 is the fastest; however, only a minimal difference of 20 minutes separates YOLOv5 and YOLOv8.
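As a quick illustration of how the reported detection metrics relate, precision, recall, and F1 (their harmonic mean) can be computed from raw counts. The counts below are toy values, not the paper's data:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from raw detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Toy counts for one vehicle class (illustrative only)
p, r, f1 = precision_recall_f1(tp=95, fp=5, fn=3)
print(round(p, 3), round(r, 3), round(f1, 3))
```

Because F1 is a harmonic mean, it always lies between the precision and recall values it is computed from.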

Author 1: Ahsiah Ismail
Author 2: Amelia Ritahani Ismail
Author 3: Nur Azri Shaharuddin
Author 4: Asmarani Ahmad Puzi
Author 5: Suryanti Awang

Keywords: Vehicle classification; Convolutional Neural Network; SSD; YOLO; MobileNets

PDF

Paper 90: Using EPP Theory and BMO-Inspired Approach to Design a Virtual Reality Dashboard Design Ontology

Abstract: This paper introduces the Virtual Reality Dashboard Design Ontology (VRDDO), an ontological framework developed to address the absence of standardized methodologies in designing Virtual Reality (VR) dashboards for complex data visualization, particularly in smart farm monitoring. The VRDDO is built upon the Design Science Research (DSR) approach and anchored in Kernel Theory, specifically the Ecological Psychological Perspective (EPP) theory and Business Model Ontology (BMO). During the design and development phase of DSR, the Unified Ontological Approach (UoA) is applied as the ontology development methodology, to design and construct VRDDO as a design artifact. By offering a structured framework for VR dashboard design, VRDDO aims to enhance data interpretation and decision-making in immersive environments. Additionally, this ontology forms the basis for a Virtual Reality Dashboard Design Method, establishing a systematic and user-centric approach to developing efficient VR dashboards. This research is significant for its potential to improve VR dashboard development across diverse domains, facilitate knowledge sharing, and eliminate fragmented, ad-hoc practices in immersive data visualization.

Author 1: Liew Kok Leong
Author 2: Fazita Irma Tajul Urus
Author 3: Muhammad Arif Riza
Author 4: Mohammad Nazir Ahmad
Author 5: Ummul Hanan Mohamad

Keywords: Design Science Research (DSR); Ontology Development Methodology (ODM); Ecological Psychological Perspective (EPP); Unified Foundational Ontology (UFO); Virtual Reality Dashboard Design Method (VRDDM)

PDF

Paper 91: Quantitative Assessment and Forecasting of Control Risks in the Ore-Stream Quality Management System

Abstract: The paper is aimed at the organizational and technological optimization of a system for remote control of ore-stream quality according to technical and economic criteria. In the context of the digital transformation of the mining industry, the ore-stream is seen as a system in which control is one of the main management functions. The control function becomes key to ore-stream quality management during ore quality assessment at the stage of technological preparation of the ore material, where homogeneity of the ore mass in terms of useful-component content is formed from heterogeneous deposits; in this paper, that component is iron. The technological novelty presented in the paper consists in the realization of constant remote control of ore material quality in the form of monitoring. Remote control is technically realized using unmanned vehicles, with subsequent digital processing of the information by on-board microprocessor hardware and dedicated mathematical and software support. The iron content of the ore is estimated from the vertical vector of the magnetic field of the ore material. The implementation of this concept required solving the following tasks: developing a structural and functional model of ore-stream quality control; developing mathematical support for the digital system that processes ore material magnetic field measurement data; and optimizing the metrological indicators of the measuring complex of the control system. It is proposed to use control risks as criteria for the quantitative assessment of the functional quality of the ore-stream quality management system. An empirical function relating the cost of magnetometric remote control of iron content to the probable control risks is found. A 3D model of the dependence of the cost of magnetometric control of iron content, as a function of accuracy and of the standard values of iron content in ore, was built.

Author 1: Almas Mukhtarkhanuly Soltan
Author 2: Bakytzhan Turmyshevich Kobzhassarov

Keywords: Ore-stream; system; model; technology; control; risks; probability; unmanned vehicles

PDF

Paper 92: Detection and Classification of Intestinal Parasites With Bayesian-Optimized Model

Abstract: Automated detection of intestinal parasites in medical imaging enhances diagnostic efficiency and reduces human error. This study evaluates object detection techniques, including Faster R-CNN with different backbone architectures (ResNet, RetinaNet, and ResNeXt) and the YOLOv8 series, for detecting Ascaris lumbricoides and Trichuris trichiura in microscopic images. A dataset of 2000 images was split into training (1500), validation (300), and testing (200) sets. Results show that Faster R-CNN with RetinaNet achieves the highest Average Precision (AP) across varying Intersection over Union (IoU) thresholds, making it robust in feature extraction. However, YOLOv8 excels in real-time detection, with YOLOv8n (nano) providing the best trade-off between accuracy and computational efficiency. Bayesian Optimization further improves YOLOv8n, achieving an AP of 99.6% and an Average Recall (AR) of 99.7%, surpassing the two-stage architectures. This study highlights the potential of deep learning for automated parasite detection, reducing reliance on manual microscopy. Future research should explore transformer-based models, self-supervised learning, and mobile deployment for real-world clinical applications.
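The Intersection over Union (IoU) thresholds mentioned above determine when a predicted bounding box counts as a correct detection. A minimal sketch of the standard IoU computation, with made-up box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted parasite location vs. a ground-truth box; a detection
# typically counts as a true positive when IoU exceeds a threshold (e.g. 0.5)
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

Average Precision is then computed by sweeping the detector's confidence threshold at a fixed IoU cutoff.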

Author 1: Haifa Hamza
Author 2: Kamarul Hawari Ghazali
Author 3: Abubakar Ahmad

Keywords: Intestinal parasites; faster region convolutional neural network; You Only Look Once (YOLOv8); Bayesian Optimization; medical imaging; object detection

PDF

Paper 93: A Comparative Study of Deep Learning and Modern Machine Learning Methods for Predicting Australia’s Precipitation

Abstract: Floods are chaotic weather events that cause irreversible and devastating harm to people's lives, crops, and the socioeconomic system, including extensive property damage, animal mortality, and even human fatalities. To mitigate the risk of flooding, it is imperative to create an early warning system that can accurately forecast the amount of rain that will fall the next day. Rainfall forecasting is essential to people's lives and is important everywhere in the world: a rainfall prediction model reduces risk and helps to prevent further human deaths. Classical statistics cannot reliably forecast rainfall since the atmosphere is dynamic. For these reasons, this study uses machine learning and deep learning techniques to estimate precipitation. The purpose of this study is to develop and evaluate a prediction model for forecasting rainfall in five cities of Australia (Darwin, Sydney, Perth Airport, Melbourne, Brisbane). The dataset was gathered from Australia's national meteorological organization, the Australian Government Bureau of Meteorology (BOM), which monitors and forecasts meteorological conditions, climatic trends, and natural calamities such as cyclones, storms, and floods. The dataset comprises 145,460 records with 23 features, detailing city-specific monthly averages for Australia from 2008 to 2017 (10 years). Effective rainfall forecasting was produced by integrating a number of machine learning and deep learning techniques, including Random Forest (RF), Decision Tree (DT), Gradient Boosting Classifier (GBC), Artificial Neural Network (ANN), and Recurrent Neural Network (RNN). The models were trained to forecast rainfall, reducing the potential impact of floods. Results indicate that combining neural networks and Random Forests provides the most accurate predictions.

Author 1: Hira Farman
Author 2: Qurat-ul-ain Mastoi
Author 3: Qaiser Abbas
Author 4: Saad Ahmad
Author 5: Abdulaziz Alshahrani
Author 6: Salman Jan
Author 7: Toqeer Ali Syed

Keywords: Machine learning; rainfall prediction; neural network; Random Forest; deep learning

PDF

Paper 94: Hardware-Accelerated Detection of Unauthorized Mining Activities Using YOLOv11 and FPGA

Abstract: Illegal mining activities present significant environmental, economic, and safety challenges, particularly in remote and under-monitored regions. Traditional surveillance methods are often inefficient, labor-intensive, and unable to provide real-time insights. To address this issue, this study proposes a computer vision-based solution leveraging the state-of-the-art YOLOv11 Nano and Small models, fine-tuned for the detection of illegal mining activities. A specific dataset comprising aerial and ground-level images of mining sites was curated and annotated to train the models for identifying unauthorized excavation, equipment usage, and human presence in restricted zones. The proposed system integrates the hardware-software design of YOLOv11 on the PynqZ1 FPGA, offering a high-performance, low-latency, and energy-efficient solution suitable for real-time monitoring in resource-constrained environments. This hardware-accelerated approach combines FPGA’s parallel processing capabilities with the lightweight deep learning models, enabling efficient deployment for automated illegal mining detection. By providing a scalable, real-time monitoring tool, this work contributes to the development of automated enforcement tools for the mining industry, ensuring better control and surveillance of mining activities. To validate the efficiency of deep learning deployment on edge devices, YOLOv11n was implemented on an FPGA, utilizing 70% of available LUTs, 50% of FFs, and 80% of DSPs, with 8.3 Mbits of on-chip memory. The design achieved 100.33 GOP/s throughput, 18 FPS at 55 ms latency, consuming 4.8 W, and delivering an energy efficiency of 20.90 GOP/s/W.
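The reported hardware figures are internally consistent: energy efficiency is throughput divided by power, and the frame rate follows from the per-frame latency. A quick arithmetic check using the values from the abstract:

```python
throughput_gops = 100.33   # reported throughput, GOP/s
power_w = 4.8              # reported power consumption, W
latency_s = 0.055          # reported per-frame latency, 55 ms

efficiency = throughput_gops / power_w   # GOP/s per watt
fps_from_latency = 1 / latency_s         # frames per second implied by latency

print(round(efficiency, 2), round(fps_from_latency, 1))
```

This recovers roughly 20.90 GOP/s/W and about 18 FPS, matching the reported numbers.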

Author 1: Refka Ghodhbani
Author 2: Taoufik Saidani
Author 3: Amani Kachoukh
Author 4: Mahmoud Salaheldin Elsayed
Author 5: Yahia Said
Author 6: Rabie Ahmed

Keywords: YOLOv11; object detection; mining industry

PDF

Paper 95: Healthcare 4.0: A Large Language Model-Based Blockchain Framework for Medical Device Fault Detection and Diagnostics

Abstract: This paper introduces a novel framework integrating Large Language Models (LLMs) with blockchain technology for medical device fault detection and diagnostics in Healthcare 4.0 environments. The proposed framework addresses key challenges, including real-time fault detection, data security, and automated diagnostics through a multi-layered architecture incorporating Internet of Things (IoT) integration, blockchain-based security, and LLM-driven diagnostics. Experimental evaluations demonstrate substantial improvements in diagnostic accuracy and response time while maintaining stringent security standards and regulatory compliance. The system provides enhanced fault detection with real-time monitoring capabilities and secure maintenance record management for smart healthcare. Comparative analysis of different LLMs and traditional Machine Learning (ML) methods shows that Deepseek-R1:7b achieved 97.6% classification accuracy, while O3-mini reached 90.4% and 91.2% in diagnosis accuracy and problem identification, respectively. Claude demonstrated the highest technical accuracy (98.4%), while Traditional ML excelled in processing time (11.7) and processing rate (10.68). Deepseek-R1:7b’s offline capabilities ensure stringent security, privacy, and confidentiality with restricted connectivity, making it particularly suitable for sensitive healthcare applications where data protection is paramount.

Author 1: Khalid Alsaif
Author 2: Aiiad Albeshri
Author 3: Maher Khemakhem
Author 4: Fathy Eassa

Keywords: Healthcare 4.0; Large Language Models; blockchain technology; medical device diagnostics; fault detection; smart healthcare; IoT healthcare security; machine learning

PDF

Paper 96: Knowledge Discovery of the Internet of Things (IoT) Using Large Language Model

Abstract: Internet of Things (IoT) technology quickly transformed traditional management and engagement techniques in several sectors. This work explores the trends and applications of the Internet of Things in industries, including agriculture, education, transportation, water management, air quality monitoring, underground mining, smart retail, smart home systems, and weather forecasting. The methodology involves a comprehensive review of the literature, followed by data extraction and analysis using BERT to identify key insights and patterns in IoT applications. The findings show that IoT significantly impacts the improvement of real-time monitoring, increasing efficiency, and encouraging innovative solutions in various sectors. Despite its transformative potential, cybersecurity threats, data privacy concerns, and the need for strong policy frameworks persist. The study emphasizes the necessity of multidisciplinary approaches to address these difficulties and optimize IoT implementation. Future research should focus on establishing secure IoT systems, maintaining data integrity, and encouraging collaboration between disciplines to realise the benefits of IoT technology.

Author 1: Bassma Saleh Alsulami

Keywords: Internet of Things; large language model; BERT; knowledge discovery; data mining; deep learning

PDF

Paper 97: Rib Bone Extraction Towards Liver Isolating in CT Scans Using Active Contour Segmentation Methods

Abstract: Image segmentation is an important aspect of image processing and analysis. Medical imaging segmentation is critical for providing noninvasive information about human body structure that helps physicians analyze body anatomies efficiently. Various medical imaging segmentation approaches have been presented to date; however, these approaches are deficient in segmenting abdominal organs due to the significant similarity in their intensity levels. The purpose of this research is to propose a method that facilitates the segmentation of abdominal organs and improves segmentation performance. The core of this research is the extraction of the rib bones from muscle tissues prior to the application of segmentation. This way, efficient segmentation of abdominal organs can be achieved by isolating the rib bones from the muscle tissues located between them. The proposed rib bone extraction mechanism is applied to four slices of the MICCAI2007 liver dataset to isolate muscle tissues, which have significant intensity similarity to liver tissues, from the liver. The results indicate that the proposed rib bone extraction efficiently isolated muscle tissues from linked liver tissues and improved segmentation performance.

Author 1: Mahmoud S. Jawarneh
Author 2: Shahid Munir Shah
Author 3: Mahmoud M. Aljawarneh
Author 4: Ra’ed M. Al-Khatib
Author 5: Mahmood G. Al-Bashayreh

Keywords: Active contour; computed tomography; segmentation; medical diagnostics; medical imaging segmentation

PDF

Paper 98: Revolutionizing Road Safety and Optimization with AI: Insights from Enterprise Implementation

Abstract: This study explores the key factors influencing the adoption of artificial intelligence (AI) in the logistics sector, with a particular emphasis on road logistics management. It examines the technological, organizational, and environmental contexts that shape AI integration, as well as the challenges faced by logistics managers, including the need for digital transformation, carbon emissions reduction, and advanced parcel tracking management. The objective is to identify technological and human-related barriers to AI adoption and to assess the level of interest and readiness among logistics companies, especially in the Moroccan context. A quantitative research approach was adopted, based on an online survey targeting logistics professionals and decision-makers, mainly from European and Moroccan small and medium-sized enterprises (SMEs). The collected data were analyzed using statistical methods, including linear regression and ANOVA, to evaluate the relationships between company characteristics, perceived complexity of AI tools, and the availability of qualified human resources. The findings indicate that perceived complexity and limited access to specialized skills significantly hinder AI adoption. Moreover, the perception of tangible performance benefits—such as increased operational efficiency and reduced CO2 emissions—emerges as a major driver for acceptance. These insights offer practical implications for logistics companies seeking to leverage AI technologies to optimize operations, reduce environmental impact, and enhance parcel tracking systems. A strategic roadmap is proposed to overcome the identified barriers and promote effective AI integration.

Author 1: OUAHBI Younesse
Author 2: ZITI Soumia

Keywords: AI adoption; road logistics; logistics management; digital transformation; CO2 emissions; parcel tracking management

PDF

Paper 99: Big Data-Driven Charging Network Optimization: Forecasting Electric Vehicle Distribution in Malaysia to Enhance Infrastructure Planning

Abstract: The rapid growth of electric vehicles (EVs) globally and in Malaysia has raised significant concerns regarding the adequacy and spatial imbalance of charging infrastructure. Despite government incentives and policy support, Malaysia’s charging network remains insufficient and unevenly distributed, with major urban centers having better access than rural and highway regions. This paper proposes a data-driven approach to optimize EV infrastructure planning by employing a hybrid CEEMDAN-XGBoost model for accurate EV ownership forecasting and GIS-based spatial optimization for strategic charger deployment. The model achieved superior performance compared to baseline models, with the lowest prediction errors (RMSE: 120; MAE: 38; MAPE: 5.6%). Spatial analysis revealed significant infrastructure gaps in underserved regions, guiding equitable and demand-aligned station placement. The results provide valuable insights into future EV distribution and inform policy recommendations for scalable, data-driven planning across Malaysia.
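The hybrid model above follows a decompose-then-forecast pattern: the series is split into components, each component is forecast separately, and the parts are recombined. The sketch below shows only that pattern, with a simple moving-average split standing in for CEEMDAN and naive forecasters standing in for XGBoost; the EV counts are hypothetical:

```python
def moving_average(series, window):
    """Trailing moving average; early points use the available prefix."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        out.append(sum(series[lo:i + 1]) / (i + 1 - lo))
    return out

def decompose_forecast(series, window=3):
    """Split the series into a smooth trend and a residual, forecast each
    part separately, then recombine - the same shape as CEEMDAN-XGBoost."""
    trend = moving_average(series, window)
    residual = [x - t for x, t in zip(series, trend)]
    # Stand-in forecasters: persistence for the trend, mean for the residual
    trend_next = trend[-1]
    residual_next = sum(residual) / len(residual)
    return trend_next + residual_next

ev_counts = [120, 135, 150, 170, 195, 225]  # hypothetical monthly EV registrations
print(round(decompose_forecast(ev_counts), 1))
```

In the actual model, CEEMDAN produces many intrinsic mode functions rather than a two-way split, and XGBoost learns a regressor per component.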

Author 1: Ouyang Mutian
Author 2: Guo Maobo
Author 3: Yu Tianzhou
Author 4: Liu Haotian
Author 5: Yang Hanlin

Keywords: Electric vehicles; charging infrastructure; CEEMDAN; XGBoost; spatial optimization; data-driven planning; Malaysia

PDF

Paper 100: Dual Neural Paradigm: GRU-LSTM Hybrid for Precision Exchange Rate Predictions

Abstract: The USD/RMB exchange rate is significant when examining the structure of the Chinese financial system. Accurately predicting the USD/RMB exchange rate enables individuals to analyze the condition of the economy and prevent losses. We propose a novel hybrid GRU-LSTM approach to improve forecasts of the future USD/RMB exchange rate. Deep learning techniques have become the cornerstone of numerous computer vision and natural language processing fields, and this paper aims to show that they can also help predict the exchange rate. We investigate how the newly developed hybrid GRU-LSTM model performs in terms of success rate and profitability compared with standalone LSTM and GRU models. The model is evaluated on the USD/RMB currency pair, with forecasts made from September 13, 2023, to December 11, 2023. To assess the accuracy of the model, metrics such as mean absolute error (MAE), mean square error (MSE), root mean square error (RMSE), and mean absolute percentage error (MAPE) were used. The study found that the novel hybrid GRU-LSTM model performed well relative to the LSTM and GRU models deployed in the study for exchange rate prediction. This improvement can significantly benefit analysts and traders in making the right risk-management decisions. The study further opens new possibilities for the hybrid GRU-LSTM model by demonstrating its enhanced potential, which can be effective in the financial environment. Subsequent studies might improve the forecast by enlarging the set of hybrid models and including more economic variables.
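The four error metrics used to score such forecasts have standard definitions; a minimal sketch with hypothetical rates and forecasts (not the paper's data):

```python
import math

def error_metrics(actual, predicted):
    """MAE, MSE, RMSE, and MAPE for a list of actuals and forecasts."""
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mape = sum(abs(e / a) for e, a in zip(errors, actual)) / n * 100
    return mae, mse, rmse, mape

# Hypothetical USD/RMB rates vs. one model's forecasts
actual = [7.29, 7.31, 7.28, 7.30]
predicted = [7.30, 7.29, 7.30, 7.27]
mae, mse, rmse, mape = error_metrics(actual, predicted)
print(round(mae, 4), round(rmse, 4), round(mape, 3))
```

Note that MAPE is scale-free (a percentage), which makes it convenient when comparing models across currency pairs with different price levels.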

Author 1: Shamaila Butt

Keywords: Prediction; LSTM; GRU; USD/RMB exchange rate; deep learning

PDF

Paper 101: AI-Driven Resource Allocation in Edge-Fog Computing: Leveraging Digital Twins for Efficient Healthcare Systems

Abstract: The evolution of healthcare, driven by remote monitoring and connected devices, is transforming medical service delivery. Digital twins, virtual replicas of patients, enable continuous monitoring and predictive analysis. However, the rapid growth of real-time health data presents major challenges in resource allocation and processing, especially in cardiac event prediction scenarios. This paper proposes an artificial intelligence-based approach to optimize resource allocation in a fog-edge computing environment, with a focus on Mauritania. The system integrates a deep learning model (CNN-BiLSTM), which achieves 98% accuracy in predicting cardiovascular risks from physiological signals, combined with a Deep Q-Network (DQN) to dynamically decide whether tasks should run at the edge or in the fog. Using IoT sensors, real-time health data is collected and processed intelligently, ensuring low latency and rapid response. Digital twins provide a synchronized virtual representation of the physical system for real-time supervision. This architecture improves resource utilization, reduces processing delays, and enhances responsiveness to critical medical conditions, supporting more accurate cardiac event prediction and timely intervention, especially in resource-constrained environments.
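The edge-versus-fog decision described above is a reinforcement learning problem: the agent learns which placement minimizes latency for each kind of task. The sketch below uses a tabular Q-learner with made-up latencies in place of the paper's DQN (a DQN replaces the table with a neural network, but the update rule is the same in spirit):

```python
import random

random.seed(0)

# Actions: run the task at the edge or in the fog
ACTIONS = ("edge", "fog")
q = {(s, a): 0.0 for s in ("small", "large") for a in ACTIONS}

def reward(task_size, action):
    # Hypothetical latencies (ms): edge is fast for small tasks, fog for large
    latency = {("small", "edge"): 5, ("small", "fog"): 20,
               ("large", "edge"): 60, ("large", "fog"): 25}[(task_size, action)]
    return -latency  # lower latency -> higher reward

alpha, epsilon = 0.5, 0.1
for _ in range(500):
    task = random.choice(("small", "large"))
    if random.random() < epsilon:              # explore
        action = random.choice(ACTIONS)
    else:                                      # exploit current estimates
        action = max(ACTIONS, key=lambda a: q[(task, a)])
    # One-step Q-update (stateless sketch, so no successor-state term)
    q[(task, action)] += alpha * (reward(task, action) - q[(task, action)])

print(max(ACTIONS, key=lambda a: q[("small", a)]),
      max(ACTIONS, key=lambda a: q[("large", a)]))
```

After training, the learned policy sends small tasks to the edge and large tasks to the fog, mirroring the latency structure of the toy rewards.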

Author 1: Brahim Ould Cheikh Mohamed Nouh
Author 2: Rafika Brahmi
Author 3: Sidi Cheikh
Author 4: Ridha Ejbali
Author 5: Mohamedade Farouk Nanne

Keywords: Edge computing; fog computing; digital twin; deep learning; CNN-BiLSTM; Deep Q-Network (DQN); resource allocation; cardiac event prediction; healthcare; Artificial Intelligence (AI); Internet of Things (IoT); real-time

PDF

Paper 102: Predicting Multiclass Java Code Readability: A Comparative Study of Machine Learning Algorithms

Abstract: The classification of program code readability has traditionally focused on two target classes: readable and unreadable. Recently, it has evolved into a multiclass classification task with three categories: readable, neutral, and unreadable. Most existing approaches rely on deep learning. This study investigated the multiclass classification of Java code readability using four feature-metric datasets and 14 supervised machine learning algorithms. The dataset comprises 200 labeled Java function declarations. Readability features were extracted using Scalabrino’s tool, yielding three datasets (Scalabrino, Buse-Weimer, and a combined set, Dall), with a fourth (Dcorr) derived via feature selection based on inter-feature correlation. Each model underwent hyperparameter tuning via a randomized search and was evaluated through 30 iterations of five-fold cross-validation. Scaling techniques (MinMax, Standard, Robust, and none) were also compared. The best performance, an average accuracy of 61.1% with minimal overfitting, was achieved by Random Forest with MinMax scaling on Dcorr. Feature importance analysis using permutation methods identified 22 key metrics related to comments, code complexity, syntax, naming, token usage, and density. Despite the moderate accuracy, the findings offer valuable insights and highlight essential features for advancing code readability research.
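Five-fold cross-validation, as used above, partitions the sample indices into five disjoint folds and rotates which fold is held out. A minimal stdlib sketch for a 200-example dataset (the seed and split mechanics are illustrative, not the study's):

```python
import random

def five_fold_indices(n, seed=42):
    """Yield (train_idx, test_idx) pairs for a shuffled five-fold split."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::5] for i in range(5)]   # five disjoint folds
    for k in range(5):
        test = folds[k]
        train = [i for f in folds if f is not folds[k] for i in f]
        yield train, test

# 200 labeled Java functions, as in the dataset described above
splits = list(five_fold_indices(200))
print(len(splits), len(splits[0][0]), len(splits[0][1]))
```

Each of the five rounds trains on 160 examples and tests on the remaining 40; the study repeats this whole procedure 30 times with different shuffles to average out split variance.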

Author 1: Budi Susanto
Author 2: Ridi Ferdiana
Author 3: Teguh Bharata Adji

Keywords: Code readability; machine learning; multiclass classification; hyperparameter tuning; feature selection

PDF

Paper 103: Deep Learning-Based UI Design Analysis: Object Detection and Image Retrieval Using YOLOv8

Abstract: Data-driven design models support various types of mobile application design, such as design search, promoting a better understanding of best practices and trends. A well-designed User Interface (UI) makes an application practical and easy to use and contributes significantly to its success. Therefore, searching for UI design examples helps designers gain inspiration and compare design alternatives. However, searching for relevant design examples in large-scale UI datasets is challenging. Current search approaches rely on various input types, and most have limitations that affect their accuracy and performance. This research proposes a model that provides a fine-grained search for relevant UI design examples based on a UI screen as input. The proposed model contains two phases. In the first, object detection is implemented using the deep learning model YOLOv8, achieving 95% precision and 97% average precision. In the second, image retrieval leverages the cosine similarity technique to retrieve the top three images most similar to the input. These results highlight the system’s effectiveness in accurately detecting and retrieving relevant UI elements, providing a valuable tool for UI designers.
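The retrieval phase ranks candidate screens by cosine similarity between feature vectors and keeps the top three. A minimal sketch with hypothetical four-dimensional screen features (real systems would use high-dimensional detector or embedding features):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def top3(query_vec, gallery):
    """Rank gallery screens by similarity to the query and keep three."""
    ranked = sorted(gallery, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:3]]

# Hypothetical feature vectors for UI screens
gallery = [("login_a", [1, 0, 1, 0]), ("login_b", [1, 0, 0.9, 0.1]),
           ("feed", [0, 1, 0, 1]), ("settings", [0.2, 0.8, 0.1, 0.9])]
print(top3([1, 0, 1, 0], gallery))
```

Cosine similarity ignores vector magnitude, so screens with similar element proportions rank close together even if one contains more elements overall.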

Author 1: Roba Alghamdi
Author 2: Adel Ahmad
Author 3: Fawaz alsaadi

Keywords: Data-driven design; YOLOv8; design search; deep learning; user interface design

PDF

Paper 104: Adversarial Attack on Autonomous Ships Navigation Using K-Means Clustering and CAM

Abstract: As Maritime Autonomous Surface Ships (MASSs) increasingly become part of global maritime operations, the reliability and security of their object detection systems have become a major concern. These systems, which play a crucial role in identifying small yet critical maritime objects such as buoys, vessels, and kayaks, are particularly susceptible to adversarial attacks, especially clean-label poisoning attacks. These attacks introduce subtle manipulations into training data without altering their true labels, thereby inducing misclassification during model inference and threatening navigational safety. The objective of this study is to evaluate the vulnerability of maritime object detection models to such attacks and to propose an integrated adversarial framework to expose and analyze these weaknesses. A novel attack method is developed using K-means clustering to segment similar object regions and Class Activation Mapping (CAM) to identify high-importance zones in image data. Adversarial perturbations are then applied within these zones to craft poisoned inputs that target the YOLOv5 object detection model. Experimental validation is performed using the Singapore Marine Dataset (SMD and SMD-Plus), and performance is measured under different perturbation intensities. The results reveal a considerable decline in detection accuracy—especially for small and mid-sized vessels—demonstrating the effectiveness of the attack and its capacity to remain imperceptible to human observers. This research highlights a critical gap in the security posture of AI-based navigation systems and emphasizes the urgent need to develop maritime-specific adversarial defense strategies for ensuring robust and resilient MASS deployment.
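The attack pipeline first groups visually similar pixel regions with K-means before CAM selects where to perturb. A minimal one-dimensional Lloyd's-algorithm sketch of that clustering step, on made-up grayscale intensities (the real method clusters image regions, not scalars):

```python
def kmeans_1d(values, k=2, iters=10):
    """Plain Lloyd's k-means on scalar values (e.g. pixel intensities)."""
    # Initialize centers by sampling the sorted values at even strides
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        # Move each center to the mean of its assigned values
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical grayscale intensities: dark sea pixels vs. bright vessel pixels
pixels = [12, 15, 14, 11, 200, 210, 205, 198]
print(kmeans_1d(pixels))
```

The two recovered centers separate sea from vessel pixels; in the attack, such clusters define the candidate regions into which CAM-guided perturbations are injected.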

Author 1: Ganesh Ingle
Author 2: Kailas Patil
Author 3: Sanjesh Pawale

Keywords: Maritime autonomous surface ships; object detection; clean-label poisoning attacks; adversarial attacks

PDF

Paper 105: NW Logistics: System Architecture and Design for Sustainable Road Logistics

Abstract: The logistics industry is under increasing pressure to reduce carbon emissions and enhance efficiency in response to environmental and regulatory demands. However, optimizing road logistics to achieve these goals requires innovative solutions that balance operational efficiency with sustainability. This study addresses this need by introducing NW Logistics, an AI-powered platform that optimizes road logistics to lower CO2 emissions and improve fleet performance. In order to achieve these objectives, real-time CO2 tracking, route optimization, and driver behavior monitoring were integrated into NW Logistics. The system enables precise, real-time tracking of deliveries and vehicle locations, allowing logistics managers to monitor fleet performance with enhanced accuracy. Additionally, onboard cameras and sensors generate individualized driver reports, tracking infractions and fostering safer driving behaviors. Initial simulations of NW Logistics indicate a significant reduction in carbon emissions, along with improvements in route efficiency, delivery tracking accuracy, and driver safety. These results demonstrate the transformative potential of AI to advance sustainable and efficient logistics management.

Author 1: OUAHBI Younesse
Author 2: ZITI Soumia

Keywords: Artificial Intelligence; logistics; supply chain; supply chain management; applications; Internet of Things; road safety; environment

PDF

Paper 106: A Robust Defense Mechanism Against Adversarial Attacks in Maritime Autonomous Ship Using GMVAE+RL

Abstract: In this paper, we propose a robust defense framework combining Gaussian Mixture Variational Autoencoders (GMVAE) with Reinforcement Learning (RL) to counter adversarial attacks in Maritime Autonomous Systems, specifically targeting the Singapore Maritime Database. By modeling complex maritime data distributions through GMVAE and dynamically adapting decision boundaries via RL, our approach establishes a resilient latent representation space that effectively identifies and mitigates adversarial perturbations. Experimental evaluations using adversarial methods such as FGSM, IFGSM, DeepFool, and Carlini-Wagner attacks demonstrate that the proposed GMVAE+RL model outperforms traditional defenses in both accuracy and robustness. Specifically, it achieves a peak accuracy of 87% and robustness of 20.5%, compared to 85.8% and 19.2% for FGSM, and significantly lower values for other methods. These results underscore the superiority of our method in ensuring data integrity and operational reliability within complex maritime environments facing evolving cyber threats.
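Of the attack families evaluated here, FGSM is the simplest to state: perturb each input in the direction of the sign of the loss gradient, scaled by a budget ε. A dependency-free sketch follows, with the gradient supplied directly as a toy stand-in for backpropagation through a real model.

```python
def sign(g):
    # Sign function: +1, -1, or 0 (bool arithmetic yields ints).
    return (g > 0) - (g < 0)

def fgsm(x, grad, eps=0.1):
    # One FGSM step: x' = x + eps * sign(dL/dx). In practice `grad` comes
    # from backpropagating the loss through the model under attack.
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

IFGSM iterates this step with a smaller per-step ε, and DeepFool and Carlini-Wagner replace the sign step with optimized minimal perturbations; the defense's job, as framed above, is to make the latent representation insensitive to all of them.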

Author 1: Ganesh Ingle
Author 2: Kailas Patil
Author 3: Sanjesh Pawale

Keywords: Maritime autonomous systems; reinforcement learning; defense mechanisms; Gaussian Mixture Variational Autoencoder; Singapore maritime database

PDF

Paper 107: Evaluating the Performance of Tree-Based Model in Predicting Haze Events in Malaysia

Abstract: Predicting haze is crucial in controlling air pollution to reduce its impact, especially on human health. Accurate prediction of extreme values is vital to raising public awareness of this issue and to better air quality management. Extreme values in air pollution refer to unusually high measurements of pollutants that diverge significantly from the normal range of observed values; they are typically caused by haze arising from various factors, and neglecting them can lead to unreasonable predictions. Therefore, this study aims to evaluate the performance of tree-based algorithms in predicting haze events. Predictive analytics were based on hourly air pollution data from 2013 to 2022 in Shah Alam, Malaysia. The ten chosen parameters are Relative Humidity (RH), Temperature (T), Wind Direction (WD), Wind Speed (WS), PM10, NOx, NO2, SO2, O3 and CO. Decision Tree (DT), Gradient Boosting Regression (GBR) and Extreme Gradient Boosting (XGBoost) are compared to determine the best approach for modeling PM10 concentrations for the next 24 hours (PM10,t+24h), both for the overall air quality data and for three air quality blocks: Good air quality (Block 1), Moderate air quality (Block 2) and Extreme air quality (Block 3). The RMSE, MAE and MAPE results indicate that XGBoost outperforms GBR and DT, with RMSE of 21.5921, MAE of 14.2396 and MAPE of 0.4816. When evaluated across the three air quality blocks, XGBoost remains the top-performing model. However, XGBoost faces challenges in accurately predicting extreme values.
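The three error metrics used to rank the models have exact definitions worth pinning down; a minimal sketch follows. Note that MAPE is reported as a fraction here (0.4816 ≈ 48%), and it is undefined whenever an observed value is zero.

```python
import math

def rmse(y, yhat):
    # Root mean squared error: penalizes large misses (extreme values) most.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    # Mean absolute error: average miss in the units of PM10.
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mape(y, yhat):
    # Mean absolute percentage error as a fraction; requires all y != 0.
    return sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)
```

Because RMSE squares the residuals, it is the metric most sensitive to the extreme-value block, which is consistent with the finding that XGBoost's weakness shows up precisely there.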

Author 1: Mahiran Muhammad
Author 2: Ahmad Zia Ul-Saufie
Author 3: Fadhilah Ahmad Radi

Keywords: Extreme Gradient Boosting (XGBoost); Gradient Boosting Regression (GBR); Decision Tree (DT); extreme values; Particulate Matter (PM)

PDF

Paper 108: Towards Hybrid Meta-Heuristic Analysis for the Optimization of Fundamental Performance in Robotic Systems

Abstract: This paper examines a hybrid optimization approach that combines analytical and meta-heuristic methods to improve the performance of practical engineering systems. Designed in support of an artificial intelligence strategy, the proposed approach ensures high stability and efficiency under actuator saturation constraints, a well-known and sensitive problem in robotics and control. Specifically, this paper deals with the problem of computing the stability region for controlled systems. In addressing this issue, the research takes into account the fact that actuator saturation may occur. It is imperative to maintain this property and ensure the reliability of designed control systems, particularly those developed to control robot actuators. Models of the studied systems are based on differential algebraic representations and polytopic regions in state space. The developed technique combines Linear Matrix Inequalities (LMIs) with an improved meta-heuristic optimization approach that rapidly searches for and enlarges domains of attraction for robot actuators. Direct Lyapunov theory is used to analyze and validate key stability performance. A numerical example study has been conducted to validate the proposed approach's efficacy and efficiency, and a comparative benchmarking study highlights the main concepts and results.
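The central object here, the domain of attraction (DA), can be illustrated on a toy scalar system ẋ = −x + x³, whose DA around the origin is exactly (−1, 1). The sketch below checks membership by brute-force simulation, which is precisely the expensive baseline that LMI-based inner estimates avoid; none of it is the paper's DAR/LMI machinery.

```python
def simulate(x0, dt=1e-3, steps=20000):
    # Euler integration of the toy system x' = -x + x**3.
    # Returns the final state, or None if the trajectory diverges.
    x = x0
    for _ in range(steps):
        x += dt * (-x + x ** 3)
        if abs(x) > 10:
            return None
    return x

def in_domain_of_attraction(x0, tol=1e-3):
    # x0 is in the DA of the origin if the trajectory settles near zero.
    xf = simulate(x0)
    return xf is not None and abs(xf) < tol
```

For this system the quadratic Lyapunov function V(x) = x² certifies the DA analytically, since V̇ = −2x²(1 − x²) < 0 for 0 < |x| < 1; the hybrid LMI/meta-heuristic approach described above generalizes that certification to multivariable saturated systems where no closed form exists.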

Author 1: Boudour Dabbaghi
Author 2: Faical Hamidi
Author 3: Mohamed Aoun
Author 4: Houssem Jerbi

Keywords: Domain of Attraction (DA); Differential Algebraic Representation (DAR); meta-heuristic approach; actuators saturation

PDF

Paper 109: Optimizing Data Transmission and Energy Efficiency in Wireless Networks: A Comparative Study of GA, PSO, and Hybrid Approaches

Abstract: As wireless communication technology evolves, efficient resource allocation in Orthogonal Frequency Division Multiple Access (OFDMA) networks is becoming more important. This study looks at three resource allocation algorithms: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), and a hybrid approach that combines both. The hybrid algorithm takes advantage of the strengths of both methods to improve data transmission and energy efficiency. Using simulations in MATLAB, the study assesses algorithms based on key metrics such as data rate, energy consumption, and computational complexity. The findings show that the hybrid approach generally performs better than both GA and PSO, especially in maximizing data rates. This research offers useful information for network operators looking to implement effective resource management strategies in practical wireless communication settings.
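As a concrete reference point, here is a bare-bones PSO loop; the GA and hybrid variants studied above differ mainly in how new candidates are generated. The bounds and the quadratic test objective stand in for a rate/energy allocation objective and are purely illustrative.

```python
import random

def pso(cost, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=1.0):
    # Minimal particle swarm: inertia w, cognitive pull c1, social pull c2.
    swarm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in swarm]          # each particle's best position
    pcost = [cost(p) for p in swarm]
    g = min(pbest, key=cost)[:]            # global best position
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (g[d] - p[d]))
                p[d] = min(hi, max(lo, p[d] + vel[i][d]))  # clamp to bounds
            c = cost(p)
            if c < pcost[i]:
                pbest[i], pcost[i] = p[:], c
                if c < cost(g):
                    g = p[:]
    return g
```

A GA/PSO hybrid of the kind compared here typically alternates these velocity updates with crossover and mutation on the worst particles, trading PSO's fast exploitation against the GA's broader exploration.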

Author 1: Suhare Solaiman

Keywords: Resource allocation; optimization; genetic algorithms; particle swarm optimization; hybrid algorithm

PDF

Paper 110: Enhancing Precision Agriculture with YOLOv8: A Deep Learning Approach to Potato Disease Identification

Abstract: Timely and precise identification of potato leaf diseases plays a critical role in improving crop productivity and reducing the impact of plant pathogens. Conventional detection techniques are often labor-intensive, dependent on expert analysis, and may not be practical for widespread agricultural use. This paper introduces an automated detection system based on YOLOv8, a cutting-edge deep learning framework specialized in object detection, to accurately recognize multiple potato leaf diseases. The proposed model is trained on a carefully prepared dataset that includes both healthy and infected leaves, utilizing robust feature learning to distinguish between different disease types. Our experimental evaluation reveals that the YOLOv8-based method achieves superior performance in terms of accuracy and processing speed when compared to traditional approaches. This work contributes to the ongoing transformation of agriculture through smart technologies by offering an AI-powered tool that facilitates real-time crop monitoring. Future research may focus on deploying this solution on edge devices, such as smartphones or drones, to enable scalable, on-field disease diagnostics. Ultimately, this study supports the vision of sustainable agriculture by integrating intelligent systems into everyday farming operations.
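One piece of the single-stage detection machinery the abstract relies on is easy to show self-contained: intersection-over-union (IoU) plus greedy non-maximum suppression, which YOLO-family detectors apply to raw box predictions to keep one box per leaf lesion. This is a generic sketch, not the paper's pipeline.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    # Greedy NMS: visit boxes by descending score, keep a box only if it
    # does not overlap an already-kept box beyond the IoU threshold.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

On edge devices such as the smartphones and drones mentioned above, this post-processing step is often the cheapest part of inference, so the deployment cost is dominated by the backbone network itself.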

Author 1: Mohammed Aleinzi

Keywords: Potato disease detection; YOLOv8; Agriculture 4.0; deep learning

PDF

Paper 111: Optimizing Medical Image Analysis: A Performance Evaluation of YOLO-Based Segmentation Models

Abstract: Instance segmentation is a critical component of medical image analysis, enabling tasks such as tissue and organ delineation, and disease detection. This paper provides a detailed comparative analysis of two fine-tuned one-stage object detection models, YOLOv11-seg and YOLOv9-seg, tailored for instance segmentation in medical imaging. Leveraging transfer learning, both models were initialized with pretrained weights and subsequently fine-tuned on the NuInsSeg dataset, which comprises over 30,000 manually segmented nuclei across 665 image patches from various human and mouse organs. This approach facilitated faster convergence and improved generalization, particularly given the limited size and high complexity of the medical dataset. The models were evaluated against key performance metrics. The experimental results reveal that YOLOv11n-seg outperforms YOLOv9c-seg with a precision of 0.87, recall of 0.84, and mAP50 of 0.89, indicating superior segmentation quality and more accurate delineation of nuclei contours. This study highlights the robust performance and efficiency of YOLOv11n-seg, demonstrating its superiority in medical image segmentation tasks, with notable advantages in both accuracy and real-time processing capabilities.
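The reported precision, recall, and mAP50 reduce to simple counts once each detection is matched (or not) to a ground-truth nucleus at an IoU threshold; a minimal sketch of those definitions follows, with the matching step assumed already done.

```python
def precision_recall(tp, fp, fn):
    # precision = TP / (TP + FP); recall = TP / (TP + FN)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

def average_precision(matches, n_gt):
    # matches: detections sorted by descending confidence; True means the
    # detection matched a ground-truth instance at the IoU threshold
    # (0.5 for mAP50). AP averages precision at each true-positive rank
    # over the total number of ground-truth instances.
    tp, precisions = 0, []
    for k, m in enumerate(matches, 1):
        if m:
            tp += 1
            precisions.append(tp / k)
    return sum(precisions) / n_gt if n_gt else 0.0
```

mAP50 is then this AP averaged over classes; for a single-class nuclei task it coincides with the per-class AP at IoU 0.5.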

Author 1: Haifa Alanazi

Keywords: Medical image; instance segmentation; one-stage object detection models; transfer learning; nuclei detection

PDF

Paper 112: Multitask Model with an Attention Mechanism for Sequentially Dependent Online User Behaviors to Enhance Audience Targeting

Abstract: This paper proposes a multitask learning approach with an attention mechanism to predict audience behavior as sequential actions. The goal is to improve click-through and conversion rates by effectively targeting audience behavior. The proposed model introduces specific task sets designed to address the challenges of each prediction task. In particular, the first task, click prediction, suffers from data sparsity and a lack of prior knowledge, limiting its predictive power. To address this, a one-dimensional convolutional network (1D CNN) tower is used in the first task to learn local dependencies and temporal patterns of user activity. This design choice allows the model to better detect potential clicks, even without rich historical data. The task of conversion prediction is tackled by a fully connected convolution tower that selectively combines the corresponding features extracted from the first task using an attention mechanism, as well as the original shared embedding input data, enabling richer context for more accurate prediction. Experimental results show that the proposed multitask architecture significantly outperforms existing state-of-the-art models that do not consider tower architecture design in predicting sequential online audience behavior.
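The attention mechanism that fuses task-1 features into the conversion tower is, at its core, scaled dot-product attention: score each feature vector against a query, softmax the scores, and return the weighted sum. A dependency-free sketch (vectors and dimensions are illustrative, not the paper's architecture):

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def attend(query, keys, values):
    # Scaled dot-product attention: weights = softmax(q . k / sqrt(d)),
    # output = weighted sum of the value vectors.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    w = softmax(scores)
    return [sum(wi * v[j] for wi, v in zip(w, values)) for j in range(len(values[0]))]
```

In the setting described above, the keys and values would be the click-tower's 1D-CNN feature maps and the query would come from the conversion tower, so the conversion task can selectively reuse whatever the click task learned about the user's recent activity.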

Author 1: Marwa Hamdi El-Sherief
Author 2: Mohamed Helmy Khafagy
Author 3: Asmaa Hashem Sweidan

Keywords: Multitask learning; 1D convolution neural networks; attention mechanism; click through rate; conversion rate; audience behavioral targeting; audience behavior

PDF

Paper 113: Secure Optimization of RPL Routing in IoT Networks: Analysis of Metaheuristic Algorithms in the Face of Attacks

Abstract: The security and efficiency of Internet of Things (IoT) networks depend on optimizing the Routing Protocol for Low-Power and Lossy Networks (RPL) to manage various challenges, including Expected Transmission Count (ETX), latency and energy consumption. This study proposes an advanced meta-heuristic optimization framework integrating several algorithms, including Particle Swarm Optimization (PSO), Mixed Integer Linear Programming (MILP), Adaptive Random Search with two-step Adjustment (ARS2A) and Simulated Annealing (SA), to improve the performance of RPL-based IoT networks under attack scenarios. Our methodology focuses on secure routing by integrating dynamic anomaly detection and adaptive optimization mechanisms to mitigate network threats such as Blackhole, Sinkhole, and Wormhole attacks. Simulations were carried out on large-scale IoT networks with 100 and 150 nodes to evaluate the performance of the proposed algorithms. Experimental results indicate that ARS2A and MILP offer the best compromise between security and performance, achieving minimal ETX (1.28), reduced latency (0.12 ms) and optimized energy consumption (0.85 J) in dense networks. Furthermore, Simulated Annealing demonstrates high adaptability in mitigating routing attacks while guaranteeing stable energy efficiency. The comparative analysis highlights the strengths and weaknesses of each algorithm, underscoring the need for hybrid optimization strategies that balance computational cost and real-time adaptability. This work establishes a secure and scalable optimization framework for IoT networks, contributing to the development of intelligent, resilient and energy-efficient routing solutions.

Author 1: Mansour Lmkaiti
Author 2: Maryem Lachgar
Author 3: Ibtissam Larhlimi
Author 4: Houda Moudni
Author 5: Hicham Mouncif

Keywords: IoT Security; PSO; MILP; ARS2A; simulated annealing; RPL protocol; metaheuristic techniques; routing efficiency; ETX; latency; energy consumption; attack mitigation; blackhole; wormhole; grayhole; cyberattack

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org