The Science and Information (SAI) Organization
IJACSA Volume 15 Issue 12

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.

View Full Issue

Paper 1: algoTRIC: Symmetric and Asymmetric Encryption Algorithms for Cryptography – A Comparative Analysis in AI Era

Abstract: The increasing integration of artificial intelligence (AI) within cybersecurity has necessitated stronger encryption methods to ensure data security. This paper presents a comparative analysis of symmetric (SE) and asymmetric encryption (AE) algorithms, focusing on their role in securing sensitive information in AI-driven environments. Through an in-depth study of various encryption algorithms such as AES, RSA, and others, this research evaluates the efficiency, complexity, and security of these algorithms within modern cybersecurity frameworks. Utilizing both qualitative and quantitative analysis, this research explores the historical evolution of encryption algorithms and their growing relevance in AI applications. The comparison of SE and AE algorithms focuses on key factors such as processing speed, scalability, and security resilience in the face of evolving threats. Special attention is given to how these algorithms are integrated into AI systems and how they manage the challenges posed by large-scale data processing in multi-agent environments. Our results highlight that while SE algorithms demonstrate high-speed performance and lower computational demands, AE algorithms provide superior security, particularly in scenarios requiring enhanced encryption for AI-based networks. The paper concludes by addressing the security concerns that encryption algorithms must tackle in the age of AI and outlines future research directions aimed at enhancing encryption techniques for cybersecurity.

Author 1: Naresh Kshetri
Author 2: Mir Mehedi Rahman
Author 3: Md Masud Rana
Author 4: Omar Faruq Osama
Author 5: James Hutson

Keywords: Algorithms; analysis; artificial intelligence; asymmetric encryption; cryptography; cybersecurity; symmetric encryption

PDF
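One classic trade-off behind the symmetric/asymmetric comparison in this abstract is key management: pairwise symmetric keys grow quadratically with the number of communicating parties, while asymmetric key pairs grow only linearly. A minimal illustrative sketch of that scaling (not taken from the paper):

```python
def symmetric_keys(n: int) -> int:
    # Every pair of parties shares a distinct secret key: C(n, 2).
    return n * (n - 1) // 2

def asymmetric_key_pairs(n: int) -> int:
    # Each party needs only one public/private key pair.
    return n

for n in (10, 100, 1000):
    print(n, symmetric_keys(n), asymmetric_key_pairs(n))
```

At 1000 parties the symmetric scheme already needs 499,500 shared secrets, which is one reason hybrid designs use AE to distribute SE session keys.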

Paper 2: A Framework for Privacy-Preserving Detection of Sickle Blood Cells Using Deep Learning and Cryptographic Techniques

Abstract: Sickle cell anemia is a hereditary disorder where abnormal hemoglobin causes red blood cells to become rigid and crescent-shaped, obstructing blood flow and leading to severe health complications. Early detection of these abnormal cells is essential for timely treatment and reducing disease progression. Traditional screening methods, though effective, are time-intensive and require skilled technicians, making them less suitable for large-scale implementation. This paper presents a conceptual framework that integrates transfer learning, cryptographic algorithms, and service-oriented architecture to provide a secure and efficient solution for sickle cell detection. The framework uses MobileNet, a lightweight deep learning model, enhanced with transfer learning to identify sickle cells from medical images while operating in hardware-constrained environments. The Advanced Encryption Standard (AES) ensures sensitive patient data remains secure during transmission and storage, while a service-oriented architecture facilitates seamless interaction between system components. Although not yet implemented, the framework serves as a foundation for future empirical testing, addressing the need for accurate detection, data privacy, and system efficiency in healthcare applications.

Author 1: Kholoud Alotaibi
Author 2: Naser El-Bathy

Keywords: Sickle cells; deep learning; transfer learning; encryption; AES; SOA

PDF

Paper 3: Trustworthiness in Conversational Agents: Patterns in User Personality-Based Behavior Towards Chatbots

Abstract: As artificial intelligence conversational agent (CA) usage increases, research has explored how to improve the chatbot user experience by focusing on user personality. This work aims to help designers and industry professionals understand how user trust in CAs relates to personality, for better human-centered AI design. To achieve this goal, the study investigates the interactions between users with diverse personalities and AI chatbots. We measured participant personalities with Hogan and Champagne's (1980) typology assessment, categorizing personality dimensions into the extraversion vs. intuition (EN), extraversion vs. sensing (ES), introversion vs. intuition (IN), and introversion vs. sensing (IS) groups. Twenty-nine participants were assigned two tasks engaging with three different AI chatbots: Cleverbot, Kuki, and Replika. Their conversations with the chatbots were analyzed using the open-coding method, and coding schemes were developed to create frequency tables. Results showed that EN participants perceived the chatbot as highly trustworthy, especially when it was helpful. ES participants, on the other hand, often engaged in brief conversations regardless of whether the chatbot was helpful, leading to low trust in the chatbot. IN users experienced mixed outcomes: some perceived conversations as trustworthy despite unhelpful chatbot responses, while others found conversations helpful yet perceived low trustworthiness. IS participants typically had the longest conversations, often giving the chatbots high trust scores. This study indicates that users with diverse personalities perceive trust toward AI conversational agents differently. The research offers designers interpretations of different personality users' interaction patterns and trends with chatbots, as design guidelines to inform AI UX design.

Author 1: Jieyu Wang
Author 2: Merary Rangel
Author 3: Mark Schmidt
Author 4: Pavel Safonov

Keywords: Trust; personality; human-centered AI design; user experience

PDF

Paper 4: An Enhanced Real-Time Intrusion Detection Framework Using Federated Transfer Learning in Large-Scale IoT Networks

Abstract: The exponential growth of Internet of Things (IoT) devices has introduced critical security challenges, particularly in scalability, privacy, and resource constraints. Traditional centralized intrusion detection systems (IDS) struggle to address these issues effectively. To overcome these limitations, this study proposes a novel Federated Transfer Learning (FTL)-based intrusion detection framework tailored for large-scale IoT networks. By integrating Federated Learning (FL) with Transfer Learning (TL), the framework enhances detection capabilities while ensuring data privacy and reducing communication overhead. The hybrid model incorporates convolutional neural networks (CNNs), bidirectional gated recurrent units (BiGRUs), attention mechanisms, and ensemble learning. To address the class imbalance, Synthetic Minority Over-sampling Technique (SMOTE) was employed, while optimization techniques such as hyperparameter tuning, regularization, and batch normalization further improved model performance. Experimental evaluations on five diverse IoT datasets, i.e. Bot-IoT, N-BaIoT, TON_IoT, CICIDS 2017, and NSL-KDD, demonstrate that the framework achieves high accuracy (92%-94%) while maintaining scalability, computational efficiency, and data privacy. This approach provides a robust solution to real-time intrusion detection in resource-constrained IoT environments.

Author 1: Khawlah Harahsheh
Author 2: Malek Alzaqebah
Author 3: Chung-Hao Chen

Keywords: Intrusion detection systems; federated learning; transfer learning; cybersecurity; scalability; resource constraints; machine learning; Internet of Things

PDF
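The SMOTE step mentioned in the abstract balances classes by generating synthetic minority samples interpolated between a minority point and one of its nearest neighbours. A self-contained sketch of that core idea (illustrative only; the paper's exact SMOTE configuration is not specified here):

```python
import random

def smote_sample(minority, k=2, n_new=4, seed=0):
    """Generate synthetic minority samples by interpolating between a
    point and one of its k nearest neighbours (the core SMOTE idea)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
print(smote_sample(minority))
```

Because every synthetic point lies on a segment between two real minority points, the oversampled class stays inside its original feature-space region rather than duplicating exact rows.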

Paper 5: Forecasting Unemployment Rate for Multiple Countries Using a New Method for Data Structuring

Abstract: Forecasting the Unemployment Rate (UR) plays a key role in shaping economic policies and development strategies. While most research focuses on predicting UR for individual countries, there has been limited progress in creating a unified forecasting model that works across multiple countries. Traditional time series methods are usually designed for single-country data, making it difficult to develop a model that handles data from various regions. This study presents a new data structuring technique that divides time series into smaller segments, enabling the development of a single model applicable to 44 countries using various economic indicators. Four forecasting models were tested: an artificial neural network (ANN), a hybrid ANN with machine learning (ML), a genetic algorithm-optimized ANN (ANN-GA), and a linear regression model. The linear regression model, which used lagged UR values, delivered the best results with an R² of 0.964 and 89.8% accuracy. The ANN-GA model also performed strongly, achieving an R² of 0.945 and 85.1% accuracy. These results highlight the effectiveness of the proposed data structuring method, demonstrating that a single model can accurately forecast multiple time series across different regions.

Author 1: Amjad M. Monir Aljinbaz
Author 2: Mohamad Mahmoud Al Rahhal

Keywords: Unemployment rate; artificial neural network; time series; hybrid model; genetic algorithm

PDF
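One way to read the proposed data structuring, splitting each country's series into short lagged segments that can then be pooled into a single training set, can be sketched as follows (a hypothetical illustration, not the authors' code; the window size `n_lags=3` is an assumption):

```python
def make_segments(series, n_lags=3):
    """Split one time series into (lagged inputs, target) pairs so that
    segments from many countries can be pooled into one training set."""
    pairs = []
    for t in range(n_lags, len(series)):
        pairs.append((series[t - n_lags:t], series[t]))
    return pairs

country_a = [5.1, 5.0, 4.8, 4.9, 5.2]   # toy UR values for one country
country_b = [9.7, 9.5, 9.4, 9.0]        # toy UR values for another
pooled = make_segments(country_a) + make_segments(country_b)
print(len(pooled))  # 2 + 1 = 3 pooled training examples
```

Pooling segments this way is what lets one regression model with lagged UR inputs serve many countries at once.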

Paper 6: Exploring Wealth Dynamics: A Comprehensive Big Data Analysis of Wealth Accumulation Patterns

Abstract: The study offers a thorough examination of the accumulation and distribution of wealth among billionaires through the application of big data analytics methodologies. This research centres on an extensive dataset known as "Billionaires.csv" [19], which encompasses a range of information about billionaires from diverse nations, including their demographic characteristics, company particulars, sources of wealth, and more. The study aims to gain a deeper understanding of the determinants of billionaires' net worth and to detect trends in the worldwide financial system that can guide entrepreneurial ventures and investment possibilities. The dataset is analysed and visualised using Python tools and libraries, including but not limited to Pandas, NumPy, Matplotlib, and Seaborn. The results offer valuable insights into the distribution of wealth among billionaires, the factors that contribute to industry success, gender disparities, age demographics, and other factors that influence the accumulation of billionaire wealth.

Author 1: Karim Mohammed Rezaul
Author 2: Mifta Uddin Khan
Author 3: Nnamdi Williams David
Author 4: Kazy Noor e Alam Siddiquee
Author 5: Tajnuva Jannat
Author 6: Md Shabiul Islam

Keywords: Big data; python; billionaires; net worth; wealth accumulation; wealth inheritance; geographic location; statistical analysis

PDF

Paper 7: AI-Enabled Vision Transformer for Automated Weed Detection: Advancing Innovation in Agriculture

Abstract: Precision agriculture is focusing on automated weed detection in order to improve input use and minimize herbicide application. This paper presents a Vision Transformer (ViT) model for weed detection in crop fields that tackles the difficulties stemming from the resemblance of crops and weeds, especially in complex, diversified settings. The model was trained on pixel-level annotations of high-resolution UAV imagery shot over an organic carrot field, labelled into crop, weed, and background classes. Because the self-attention mechanism in ViTs captures long-range spatial dependencies, the approach distinguishes crop rows from inter-row weed clusters very well. To address class imbalance and improve generalization across patches, data preprocessing techniques such as patch extraction and augmentation were used. The effectiveness of the proposed approach is confirmed by a classification accuracy of 89.4%, exceeding baseline models such as U-Net and FCN under practical application conditions. The proposed ViT-based approach marks an improvement in crop management and offers the prospect of selective weed control in support of more sustainable agriculture. The model can also be integrated into AI-based tractors for real-time weed management in the field.

Author 1: Shafqaat Ahmad
Author 2: Zhaojie Chen
Author 3: Aqsa
Author 4: Sunaia Ikram
Author 5: Amna Ikram

Keywords: Precision agriculture; weed detection; vision transformer; UAV imagery; crop-weed classification; AI-Tractors

PDF
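The patch extraction step the abstract mentions is the standard ViT tokenisation: the image is split into fixed-size, non-overlapping blocks before self-attention is applied across them. A minimal sketch on a toy 4×4 "image" (patch size 2 is an assumption for illustration):

```python
def extract_patches(img, patch=2):
    """Split a 2-D image (list of rows) into non-overlapping patch x patch
    blocks -- the tokenisation a ViT applies before self-attention."""
    h, w = len(img), len(img[0])
    return [
        [row[x:x + patch] for row in img[y:y + patch]]
        for y in range(0, h - patch + 1, patch)
        for x in range(0, w - patch + 1, patch)
    ]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(len(extract_patches(img)))  # 4 patches of 2x2
```

Each patch becomes one token, so attention between a crop-row patch and a distant inter-row patch is a single step, which is the long-range dependency advantage the abstract cites.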

Paper 8: The Heart of Artificial Intelligence: A Review of Machine Learning for Heart Disease Prediction

Abstract: Heart disease is one of the leading causes of death worldwide, affecting the engine of the human body: the heart. It has a greater incidence in underdeveloped countries such as Angola, Bangladesh, Ethiopia, and Haiti, and obtaining accurate results from risk factors manually is a complex task. Therefore, this systematic review analyzed and studied 32 articles applying the PRISMA methodology, which allowed us to evaluate the suitability of the methods and, consequently, the reliability of their results. The study showed that the algorithm with the greatest accuracy in predicting heart disease is Random Forest. The most commonly used metrics to evaluate machine learning algorithms are sensitivity, F1 score, precision, and accuracy, with sensitivity highlighted as the primary metric. The most predominant independent variables for predicting heart disease in machine learning models are age, sex, cholesterol, diabetes, and chest pain. Finally, the most common data split is 70% for training and 30% for testing, which achieves high accuracy in the algorithm prediction process. This study offers a promising path for the prevention and timely treatment of this disease through the use of machine learning algorithms. In the future, these advances could be applied in a system accessible to all people, thus improving access to healthcare and saving lives.

Author 1: Brayan R. Neciosup-Bolaños
Author 2: Segundo E. Cieza-Mostacero

Keywords: Machine learning; heart disease; prediction; systematic review; artificial intelligence; algorithms; literature; heart

PDF
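The 70/30 train/test split the review identifies as most common is a simple shuffled partition; a minimal sketch (the fixed `seed` is only for reproducibility of the illustration):

```python
import random

def train_test_split(data, train_frac=0.7, seed=0):
    """Shuffle indices, then split the data into train and test partitions
    at the given fraction (70/30 being the split the review reports)."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(len(data) * train_frac)
    return [data[i] for i in idx[:cut]], [data[i] for i in idx[cut:]]

train, test = train_test_split(list(range(10)))
print(len(train), len(test))  # 7 3
```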

Paper 9: Software Design Aimed at Proper Order Management in SMEs

Abstract: The design and evaluation of order management software oriented to SMEs in Lima are presented. Using Design Thinking, a prototype was developed focusing on usability, design, and user satisfaction. Through a Likert-scale survey of 308 SME employees, perceptions of operational efficiency and user experience were measured. The results show high acceptance and highlight the intuitiveness of the system. However, areas such as loading speed and e-commerce functionality require future improvement. This study establishes a framework for similar technological tools in commercial sectors.

Author 1: Linett Velasquez Jimenez
Author 2: Herbert Grados Espinoza
Author 3: Santiago Rubiños Jimenez
Author 4: Juan Grados Gamarra
Author 5: Claudia Marrujo-Ingunza

Keywords: Design thinking; SMEs; order management; software design; usability; user perception

PDF

Paper 10: Design of a Mobile Learning App for Financial Literacy in Young People Using Gamification

Abstract: This research paper addresses the issue of insufficient financial literacy among young people, a challenge that affects their ability to make informed financial decisions. A survey was conducted to assess the current state of financial literacy among young people; its results show a significant gap in the understanding of key concepts needed to manage their finances, which limits their economic and social development. Based on these findings, an interactive and gamified design aimed at strengthening financial literacy among young people is proposed. The proposal includes wireframes that structure a mobile application, integrating playful elements and educational challenges to promote user participation in the learning process. The design methodology employed focuses on the user experience, ensuring that the tool is accessible and engaging. It is expected that this proposal, based on the survey results, will not only increase understanding of financial concepts but also motivate young people to apply this knowledge in their daily lives, thus contributing to greater financial independence and a better quality of life.

Author 1: Angie Nayeli Ruiz-Carhuamaca
Author 2: Juliana Alexandra Yauricasa-Seguil
Author 3: Juan Carlos Morales-Arevalo

Keywords: Financial literacy; gamification; financial education; challenge education

PDF

Paper 11: Comprehensive Evaluation of Machine Learning Techniques for Obstructive Sleep Apnea Detection

Abstract: Obstructive Sleep Apnea (OSA) is a prevalent health issue affecting 10-25% of adults in the United States (US) and is associated with significant economic consequences. Machine learning methods have shown promise in improving the efficiency and accessibility of OSA diagnosis, reducing the need for expensive and challenging tests. A comparative analysis of Logistic Regression (LR), Support Vector Machine (SVM), Gradient Boosting (GB), Gaussian Naive Bayes (GNB), Random Forest (RF), and K-Nearest Neighbors (KNN) algorithms was conducted to predict OSA. To improve the predictive accuracy of these models, random oversampling was applied to address the imbalance in the dataset, ensuring a more equitable representation of the minority class. Patient demographics, including age, sex, height, weight, BMI, neck circumference, and gender, were employed as predictive features in the models. The Random Forest Classifier (RFC) provided the best training and testing accuracies of 87% and 65%, respectively, and a Receiver Operating Characteristic (ROC) score of 87%. The Gradient Boosting and SVM classifiers also performed well on the test dataset. The results of this study show that machine learning techniques may be effectively used to diagnose OSA, with the Random Forest Classifier demonstrating the best results.

Author 1: Alaa Sheta
Author 2: Walaa H. Elashmawi
Author 3: Adel Djellal
Author 4: Malik Braik
Author 5: Salim Surani
Author 6: Sultan Aljahdali
Author 7: Shyam Subramanian
Author 8: Parth S. Patel

Keywords: Machine learning; obstructive sleep apnea; random forest classifier; oversampling; classification

PDF
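The random oversampling step described in the abstract duplicates minority-class samples at random until every class matches the majority count. A self-contained sketch of that idea (illustrative; not the authors' pipeline):

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until all classes
    reach the majority-class count."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            i = rng.choice(idx)          # pick an existing sample to copy
            X_out.append(X[i])
            y_out.append(label)
    return X_out, y_out

X = [[50], [55], [60], [30]]
y = [1, 1, 1, 0]
Xb, yb = random_oversample(X, y)
print(Counter(yb))  # both classes now have 3 samples
```

Unlike SMOTE, this adds exact duplicates rather than interpolated points, which is simpler but can encourage overfitting to the repeated minority rows.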

Paper 12: Design of On-Premises Version of RAG with AI Agent for Framework Selection Together with Dify and DSL as Well as Ollama for LLM

Abstract: Currently, most RAG systems are cloud-based, including those built on Bedrock. However, there is a trend of returning from the cloud to on-premises due to security concerns. In addition, it is common for APIs to call Lambda or EC2 for data access, but it is not easy to select the optimal framework for given data attributes. For this reason, the author devised a system that selects the optimal framework using an AI agent. Furthermore, the author adopted Dify, which is based on a DSL, as the user interface for the on-premises version of RAG, and Ollama, which can likewise be installed on-premises, to serve the large language model. The author also considered the hardware specifications required to build this RAG and confirmed the feasibility of implementation.

Author 1: Kohei Arai

Keywords: RAG (Retrieval-Augmented Generation); API (Application Programming Interface); Lambda; EC2 (Amazon Elastic Compute Cloud); AI agent; Dify; DSL (domain specific language); ollama; YAML (YAML Ain't Markup Language)

PDF
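The framework-selection step could in principle be driven by simple rules over data attributes before or alongside an AI agent. The sketch below is purely hypothetical: the attribute names `size_gb`, `latency`, and `always_on`, and the thresholds, are invented for illustration and do not come from the paper:

```python
def select_framework(data: dict) -> str:
    """Hypothetical rule-based selector choosing a data-access framework
    from attributes of the data, mimicking the agent's selection step."""
    if data["size_gb"] < 1 and data["latency"] == "bursty":
        return "Lambda"          # small, intermittent workloads
    if data["size_gb"] >= 1 and data["always_on"]:
        return "EC2"             # large, continuously served data
    return "on-premises"         # security-sensitive or residual cases

print(select_framework({"size_gb": 0.2, "latency": "bursty", "always_on": False}))
```

An AI agent generalises such hard-coded rules by inferring the mapping from data attributes to framework choice instead of enumerating it.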

Paper 13: Deep Ensemble Method for Healthcare Asset Mapping Using Geographical Information System and Hyperspectral Images of Tirupati Region

Abstract: The ever-increasing capabilities of deep learning for image analysis and recognition have encouraged researchers to investigate the potential benefits of merging Hyperspectral Images (HSI) and Geographic Information Systems (GIS) with deep learning in the healthcare industry. Healthcare is an ever-changing sector that constantly adopts new technologies to improve decision-making and patient service. This research examines the role that GIS and Remote Sensing (RS) play in modern healthcare and their significance. By delivering data from remote locations and enabling spatial analysis, the combination of RS and GIS has transformed healthcare. Because GIS and RS produce very large quantities of data, big data analytics is helpful for storing and retrieving them. This analysis can open up new possibilities for better healthcare planning, disease management, and environmental health assessment based on the study area's population. This paper addresses healthcare asset mapping for the Tirupati district based on population data and a hyperspectral image of the study area, applying a deep ensemble method.

Author 1: P. Bhargavi
Author 2: T. Sarath
Author 3: Gopichand G
Author 4: G V Ramesh Babu
Author 5: T Haritha
Author 6: A. Vijaya Krishna

Keywords: Geographical information system; hyperspectral image; remote sensing images; big data analytics; deep ensemble methods; healthcare asset

PDF

Paper 14: Path Planning for Laser Cutting Based on Thermal Field Ant Colony Algorithm

Abstract: In laser cutting technology, path planning is key to optimizing cutting quality. Traditional ant colony optimization path planning does not prevent excessive heat effects after processing. This paper addresses the problem of heat accumulation during drilling by introducing a heat factor and a heat threshold into the traditional ant colony algorithm. The heat factor and threshold dynamically control heating and cooling during path planning, and the heat factor is used to update the local pheromone. An improved 2-opt algorithm incorporating the heat factor is then combined to optimize the path in parallel, yielding the proposed thermal field ant colony algorithm. Simulation experiments and actual cutting results show that the proposed algorithm is more efficient and effective than the traditional and improved ant colony algorithms: it reduces heat accumulation while ensuring fewer empty paths, and it improves laser cutting efficiency and quality.

Author 1: Junjie GE
Author 2: Guangfa ZHANG
Author 3: Tian CHEN

Keywords: Laser cutting; path planning; ant colony algorithm; thermal field control method

PDF
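One plausible reading of the heat factor/threshold idea is a router that tracks locally accumulated heat at each hole and defers holes that are still above the threshold, letting them cool before cutting nearby again. The sketch below is a hypothetical illustration of that mechanism with a greedy nearest-neighbour base (the cooling rate, radius, and threshold values are invented, and the paper's actual method is ant-colony-based, not greedy):

```python
import math

def heat_aware_route(points, heat_threshold=2.0, cool_rate=0.5):
    """Greedy nearest-neighbour routing that skips holes whose accumulated
    heat exceeds a threshold, letting them cool first (hypothetical
    illustration of the heat factor/threshold idea)."""
    heat = {p: 0.0 for p in points}
    route = [points[0]]
    remaining = set(points[1:])
    while remaining:
        cur = route[-1]
        for p in heat:                      # everything cools a little each step
            heat[p] = max(0.0, heat[p] - cool_rate)
        # prefer nearby holes, but only those below the heat threshold
        ok = [p for p in remaining if heat[p] < heat_threshold] or list(remaining)
        nxt = min(ok, key=lambda p: math.dist(cur, p))
        for p in remaining:                 # cutting heats the neighbourhood
            if math.dist(nxt, p) < 1.5:
                heat[p] += 1.0
        route.append(nxt)
        remaining.discard(nxt)
    return route

pts = [(0, 0), (1, 0), (2, 0), (0, 1)]
print(heat_aware_route(pts))
```

In the paper's algorithm this trade-off is encoded probabilistically through the pheromone update rather than by a hard skip, but the accumulate-and-cool bookkeeping is the same intuition.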

Paper 15: Laser Distance Measuring and Image Calibration for Robot Walking Using Mean Shift Algorithm

Abstract: In this research, we measured the physical distance between a robot and its surroundings using a laser distance-measuring device that we developed, designed controllers for, and tested operationally. The distance is recorded with a USB camera, and the LDMSB board is integrated into the laser distance-measuring design; both components are fastened to the robot's underside. The experiment is then developed in LabVIEW. The mean shift method enables the robot's position to be updated by relocating the laser-based distance-measuring device and capturing a photo at that location. To record that area, a perspective camera calibration is performed, which allows the camera system's values to be set or adjusted and provides visual assistance to ensure the viewing angle is precisely aligned with the intended view. The laser measurement results ranged from one to fifteen meters, with the laser-based device achieving 99.25% accuracy; each of the ten calibration locations has a precision rating of 94.03%.

Author 1: Rujipan Kosarat
Author 2: Anan Wongjan

Keywords: Laser distance; image calibration; mean shift algorithm; LabVIEW

PDF
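The mean shift method the abstract relies on iteratively moves an estimate toward the mean of nearby samples, converging on the densest region. A minimal one-dimensional sketch of that mode-seeking loop (illustrative; the paper applies it to image positions, and the flat-kernel `bandwidth` here is an assumption):

```python
def mean_shift_1d(points, x, bandwidth=1.0, iters=50):
    """Repeatedly move x to the mean of the samples within `bandwidth`
    of it, converging on the densest cluster (the mean shift idea)."""
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-9:           # converged
            break
        x = new_x
    return x

samples = [1.0, 1.1, 0.9, 5.0, 5.2]
print(round(mean_shift_1d(samples, 0.5), 2))  # converges near 1.0
```

Starting near the other cluster instead (e.g. at 5.4) converges to that cluster's mean, which is how the tracker follows whichever dense target region is nearest.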

Paper 16: Predicting Chronic Obstructive Pulmonary Disease Using ML and DL Approaches and Feature Fusion of X-Ray Image and Patient History

Abstract: By 2030, chronic obstructive pulmonary disease (COPD) is expected to become one of the top three causes of death and a leading contributor to illness globally. COPD is a debilitating respiratory disease caused by smoking-related airway inflammation, leading to breathing difficulties. Our COPD healthcare monitoring system for early detection addresses this critical need by leveraging advanced Machine Learning (ML) and Deep Learning (DL) technologies. Unlike previous studies that rely predominantly on image datasets alone, our monitoring system utilizes both image and text datasets, offering a more comprehensive approach. Importantly, we manually curated our dataset, ensuring its uniqueness and reliability, a feature lacking in the existing literature. Despite the use of popular models such as nnUnet, Cx-Net, and V-net in other papers, our model outperformed them, achieving superior accuracy: XGBoost led with a score of 0.92, while deep learning models such as VGG16, VGG19, and ResNet50 delivered scores ranging from 0.85 to 0.89, showcasing their efficacy in COPD detection. By combining these techniques, our system offers real-time patient data analysis for early detection and management. This approach, coupled with our meticulously curated dataset, promises improved patient outcomes and quality of life. Overall, our study represents a significant advancement in COPD research, paving the way for more accurate diagnosis and personalized treatment strategies.

Author 1: Fatema Kabir
Author 2: Nahida Akter
Author 3: Md. Kamrul Hasan
Author 4: Md. Tofael Ahmed
Author 5: Mariam Akter

Keywords: Chronic obstructive pulmonary disease; COPD; COPD healthcare; advanced monitoring system; COPD early detection; respiratory disease; machine learning; deep learning

PDF

Paper 17: Cloud Computing: Enhancing or Compromising Accounting Data Reliability and Credibility

Abstract: Business development is intrinsically tied to the evolution of accounting systems, and in today’s digital economy, automation has become indispensable despite increasing setup and maintenance costs. Cloud computing emerges as a promising solution, offering cost reduction and greater flexibility in accounting processes. This paper investigates the influence of cloud technology on accounting practices, emphasizing how IT advancements automate document preparation, streamline data entry, and create new opportunities through cloud services and online platforms. However, cloud adoption is not without its challenges, particularly in the areas of information security and implementation. This study delves into the benefits of cloud-based accounting, with a focus on ensuring data reliability and integrity, while providing practical guidance for secure adoption. By transitioning to cloud systems, organizations can standardize and optimize IT resources. Lastly, the paper outlines strategies to ensure the secure and efficient operation of cloud-based accounting systems within organizations.

Author 1: Mohammed Shaban Thaher

Keywords: Cloud computing; information security; infrastructure as a service; platform as a service; software as a service

PDF

Paper 18: Security Gap in Microservices: A Systematic Literature Review

Abstract: The growing importance of microservices architecture has raised concerns about its security despite a rise in publications addressing various aspects of microservices. Security issues are particularly critical in microservices due to their complex and distributed nature, which makes them vulnerable to various types of cyber-attacks. This study aims to fill the gap in systematic investigations into microservice security by reviewing current state-of-the-art solutions and models. A total of 487 papers were analyzed, with the final selection refined to 87 relevant articles using a snowball method. This approach ensures that the focus remains on security issues, particularly those identified post-2020. However, there is still a significant lack of dedicated security standards or comprehensive models specifically designed for microservices. Key findings highlight the vulnerabilities of container-based applications, the evolving nature of cyber-attacks, and the critical need for effective access control. Moreover, a substantial knowledge gap exists between academia and industry practitioners, which compounds the challenges of securing microservices. This study emphasizes the need for more focused research on security models and guidelines to address the unique vulnerabilities of microservices and facilitate their secure integration into critical applications across various domains.

Author 1: Nurman Rasyid Panusunan Hutasuhut
Author 2: Mochamad Gani Amri
Author 3: Rizal Fathoni Aji

Keywords: Microservice security; cyber-attacks; container; security standards; access control

PDF

Paper 19: New Knowledge Management Model: Enhancing Knowledge Creation with Zack Gap, Brand Equity, and Data Mining in the Sports Business

Abstract: This research improves the Socialization, Externalization, Combination, and Internalization (SECI) knowledge management model by combining it with Zack's knowledge gap model, the brand equity concept, and data mining. Zack's model is incorporated into the SECI model to identify the gap between the knowledge the organization has and the knowledge it should possess, and data mining techniques are added to determine that gap. The uniqueness of this study lies in the externalization and combination stages of the SECI model. In externalization, "what the firm must know" is added; for this, we compiled a questionnaire adopting brand equity and distributed it to the athletes. In combination, "what the firm knows" is added, using a database already owned by sports business management. The modifications resulting from both models with data mining were carried out to develop a new knowledge management model for the sports business sector. This new model will provide valuable knowledge for sports business management to build strategies and increase competitiveness in the sports market. In addition, service businesses other than sports can also apply this model to improve their knowledge management and, in turn, their marketing strategies.

Author 1: Fransiska Prihatini Sihotang
Author 2: Ermatita
Author 3: Dian Palupi Rini
Author 4: Samsuryadi

Keywords: SECI model; Zack model; data mining; brand equity; sports business

PDF

Paper 20: Systematic Review of Prediction of Cancer Driver Genes with the Application of Graph Neural Networks

Abstract: Graph Neural Networks (GNNs) have emerged as a promising tool in cancer genomics research due to their ability to capture the structural information and interactions between genes in a network, enabling the prediction of cancer driver genes. This systematic literature review assesses the capabilities and challenges of GNNs in predicting cancer driver genes by accumulating findings from relevant papers and research. It focuses on the effectiveness of GNN-based algorithms on cancer-related tasks such as cancer gene identification, cancer progression analysis, prediction, and driver mutation identification. Moreover, this paper highlights the need to improve omics data integration, formulate personalized medicine models, and strengthen the interpretability of GNNs for clinical purposes. In general, the utilization of GNNs in clinical practice has significant potential to lead to improved diagnostics and treatment procedures.

Author 1: Noor Uddin Qureshi
Author 2: Usman Amjad
Author 3: Saima Hassan
Author 4: Kashif Saleem

Keywords: Graph neural network; cancer driver genes; prediction; personalized medicine

PDF

Paper 21: Albument-NAS: An Enhanced Bone Fracture Detection Model

Abstract: Diagnosing fracture locations accurately is challenging, as it heavily depends on the radiologist's expertise; image quality, especially with minor fractures, can limit precision, highlighting the need for automated methods. Although a large volume of data is available for observation, many datasets lack annotated labels, and manually labeling this data would be highly time-consuming. This research introduces Albument-NAS, a technique that combines the One Shot Detector (OSD) model with the Albumentation image augmentation approach to enhance both speed and accuracy in detecting fracture locations. Albument-NAS achieved a mAP@50 of 83.5%, precision of 87%, and recall of 65.7%, significantly outperforming the previous state-of-the-art model, which had a mAP@50 of 63.8%, when tested on the GRAZPEDWRI dataset, a collection of pediatric wrist injury X-rays. These results establish a new benchmark in fracture detection, illustrating the advantages of combining augmentation techniques with advanced detection models to overcome challenges in medical image analysis.

Author 1: Evandiaz Fedora
Author 2: Alexander Agung Santoso Gunawan

Keywords: Albumentation; augmentation; bone fracture; deep learning; object detection; YOLO-NAS

PDF

Paper 22: FKMU: K-Means Under-Sampling for Data Imbalance in Predicting TF-Target Genes Interactions

Abstract: Identifying interactions between transcription factors (TFs) and target genes is critical for understanding molecular mechanisms in biology and disease. Traditional experimental approaches are often costly and not scalable. We introduce FKMU, a K-means-based under-sampling method designed to address data imbalance in predicting TF-target interactions. By selecting low-frequency TF samples within each cluster and optimizing the balance ratio to 1:1 between known and unknown samples, FKMU significantly improves prediction accuracy for unobserved interactions. Integrated with a deep learning model that uses random walk sampling and skip-gram embeddings, FKMU achieves an average AUC of 0.9388 ± 0.0045 through five-fold cross-validation, outperforming state-of-the-art methods. This approach facilitates accurate and large-scale predictions of TF-target interactions, providing a robust tool for molecular biology research.

Author 1: Thanh Tuoi Le
Author 2: Xuan Tho Dang

Keywords: K-means clustering; imbalanced data; TF-target gene interactions; heterogeneous network; meta-path

PDF
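The FKMU abstract describes clustering the majority class with K-means and then sampling until the two classes reach a 1:1 ratio. The authors' implementation is not reproduced here; the sketch below is only a minimal pure-Python illustration of cluster-then-sample under-sampling, with all function names hypothetical and a simple round-robin draw standing in for FKMU's low-frequency selection rule:

```python
import random
from math import dist  # Euclidean distance, Python 3.8+

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's K-means over a list of coordinate tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

def undersample(majority, minority, k=3, seed=0):
    """Cluster the majority class, then draw round-robin from the
    clusters until the two classes reach a 1:1 ratio."""
    rng = random.Random(seed)
    clusters = [cl[:] for cl in kmeans(majority, k, seed=seed) if cl]
    for cl in clusters:
        rng.shuffle(cl)
    picked, i = [], 0
    while len(picked) < len(minority) and any(clusters):
        cl = clusters[i % len(clusters)]
        if cl:
            picked.append(cl.pop())  # sample without replacement
        i += 1
    return picked
```

FKMU itself selects low-frequency TF samples within each cluster rather than drawing at random, and pairs the balanced set with a random-walk/skip-gram deep model; neither refinement is shown here.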

Paper 23: A Deep Learning-Based LSTM for Stock Price Prediction Using Twitter Sentiment Analysis

Abstract: Numerous economic, political, and social factors make stock price predictions challenging and unpredictable. This paper focuses on developing an artificial intelligence (AI) model for stock price prediction. The model utilizes LSTM and XGBoost techniques for three companies: Apple, Google, and Tesla. It aims to detect the impact of combining sentiment analysis with historical data to see how much people's opinions can move the stock market. The proposed model computes sentiment scores using natural language processing (NLP) techniques and combines them with historical data based on date. The RMSE, R², and MAE metrics are used to evaluate the performance of the proposed model. The integration of sentiment data demonstrated a significant improvement, achieving higher accuracy than historical data alone. This enhances the accuracy of the model and provides investors and the financial sector with valuable information and insights. Both XGBoost and LSTM proved effective for stock price prediction, with XGBoost outperforming LSTM.

Author 1: Shimaa Ouf
Author 2: Mona El Hawary
Author 3: Amal Aboutabl
Author 4: Sherif Adel

Keywords: Sentiment analysis; stocks price prediction; correlation; natural language processing (NLP); machine learning model; LSTM; XGBoost

PDF
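The abstract's pipeline joins sentiment scores with historical price data by date and evaluates predictions with RMSE and MAE. As an illustration only (not the paper's code, and with hypothetical function names), the join and the two error metrics can be sketched in plain Python:

```python
from math import sqrt

def join_by_date(prices, sentiment):
    """Align daily closing prices with daily sentiment scores on the
    dates they share; both arguments map 'YYYY-MM-DD' -> float."""
    dates = sorted(set(prices) & set(sentiment))
    return [(d, prices[d], sentiment[d]) for d in dates]

def rmse(actual, predicted):
    """Root mean squared error."""
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```

The joined rows would then feed an LSTM or XGBoost regressor, with R² computed alongside these two metrics.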

Paper 24: A Multimodal Data Scraping Tool for Collecting Authentic Islamic Text Datasets

Abstract: Decisions grounded in accurate knowledge open ample opportunities in different walks of life. Machine learning and natural language processing (NLP) systems, such as Large Language Models, may use unrecognized sources of Islamic content to fuel their predictive models, which can lead to incorrect judgments and rulings. This article presents an automated method with four distinct algorithms for text extraction from static websites, dynamic websites, YouTube videos with transcripts, and speech-to-text conversion from videos without transcripts, particularly targeting Islamic knowledge text. The tool is tested by collecting a reliable Islamic knowledge dataset from authentic sources in Saudi Arabia. We scraped Islamic content in Arabic from the text websites of prominent scholars and from YouTube channels administered by five authorized agencies in Saudi Arabia, including the General Authority for the Affairs of the Grand Mosque and the Prophet's Mosque and charitable foundations in Saudi Arabia. For websites, text data were scraped using Python tools for static and dynamic web scraping such as Beautiful Soup and Selenium. For YouTube channels, data were scraped from existing transcripts or transcribed using automatic speech recognition tools. The final Islamic content dataset comprises 31,225 records from regulated sources. Our Islamic knowledge dataset can be used to develop accurate Islamic question answering, AI chatbots, and other NLP systems.

Author 1: Abdallah Namoun
Author 2: Mohammad Ali Humayun
Author 3: Waqas Nawaz

Keywords: Web scraping; Islamic knowledge; machine learning; natural language processing; question and answering; AI chatbots

PDF
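The abstract mentions static-site scraping with Beautiful Soup and dynamic-site scraping with Selenium. Without reproducing the authors' tool, the core static-site step of extracting visible text while skipping scripts and styles can be sketched with only the standard library (class and function names here are illustrative):

```python
from html.parser import HTMLParser

class TextScraper(HTMLParser):
    """Collect the visible text of a static HTML page, skipping the
    contents of <script> and <style> elements."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def scrape_text(html):
    """Return the page's visible text as one space-joined string."""
    parser = TextScraper()
    parser.feed(html)
    return " ".join(parser.chunks)
```

In practice the HTML would come from an HTTP fetch for static pages, or from a Selenium-rendered DOM for dynamic ones, before being passed to a parser like this.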

Paper 25: Hybrid Transfer Learning for Diagnosing Teeth Using Panoramic X-rays

Abstract: The increasing focus on oral diseases has highlighted the need for automated diagnostic processes. Dental panoramic X-rays, commonly used in diagnosis, benefit from advancements in deep learning for efficient disease detection. The DENTEX Challenge 2023 aimed to enhance the automatic detection of abnormal teeth and their enumeration from these X-rays. We propose a unified technique that combines direct classification with a hybrid approach, integrating deep learning and traditional classifiers. Our method integrates segmentation and detection models to identify abnormal teeth accurately. Among various models, the Vision Transformer (ViT) achieved the highest accuracy of 97% using both approaches. The hybrid framework, combining modified U-Net with a Support Vector Machine, reached 99% accuracy with fewer parameters, demonstrating its suitability for clinical applications where efficiency is crucial. These results underscore the potential of AI in improving dental diagnostics.

Author 1: M. M. EL-GAYAR

Keywords: Machine learning; deep learning; dental diagnosis; transfer learning

PDF

Paper 26: Development of Smart Financial Management Research in Shared Perspective: A CiteSpace-Based Analysis Review

Abstract: At a time when information technology is advancing by leaps and bounds, smart financial management has become a shared concern of both academic and practical circles. This paper systematically reviews the development of research on smart financial management from a shared-services perspective using the CiteSpace bibliometric analysis method. We take the relevant literature in the Web of Science database over the 10 years from 2014 to 2023 as the research object, set reasonable threshold values and time slices, and conduct in-depth analyses of keyword co-occurrence, author cooperation networks, keyword clustering, burst keywords, and time intervals to identify the research hotspots and evolutionary paths of smart financial management under the shared vision. The study finds that research in this field shows clear stage characteristics, shaped by technological progress, industry demand, and social change, and can be divided into five stages along its development curve: construction of the basic framework, development of the model system, change of behavioral patterns, personalized recommendation and risk, and the deepening role of the Internet. As scholars dig deeper into the theoretical logic of this emerging field, interdisciplinary research deepens, and emerging technologies are applied, smart financial management gains new impetus and new directions toward healthier and more sustainable financial management.

Author 1: Rongxiu Zhao
Author 2: Duochang Tang

Keywords: Smart finance; financial management; financial sharing; bibliometrics; CiteSpace

PDF

Paper 27: Explainable AI-Driven Chatbot System for Heart Disease Prediction Using Machine Learning

Abstract: Heart disease (HD) continues to rank as the leading cause of morbidity and mortality worldwide, making accurate prediction enormously important for effective intervention and prevention strategies. The proposed research develops a novel explainable AI (XAI)-driven chatbot system for HD prediction, combining cutting-edge machine learning (ML) algorithms with advanced XAI techniques. This work evaluates several approaches, including Random Forest (RF), Decision Tree (DT), and Bagging-Quantum Support Vector Classifier (QSVC). The RF approach achieves the best performance, with 92.00% accuracy, 91.97% sensitivity, 56.81% specificity, an 8.00% miss rate, and 99.93% precision compared to the other approaches. SHAP and LIME provide the XAI methods through which the chatbot's predictions are explained, building trust and understanding with the user. This approach demonstrates the potential of seamlessly integrating explanations into a wide range of web or mobile healthcare applications. Future work will extend the model to predict other diseases and improve the explanation of those predictions using more advanced explainable AI approaches.

Author 1: Salman Muneer
Author 2: Taher M. Ghazal
Author 3: Tahir Alyas
Author 4: Muhammad Ahsan Raza
Author 5: Sagheer Abbas
Author 6: Omar AlZoubi
Author 7: Oualid Ali

Keywords: Heart disease prediction; machine learning; chatbot system; XAI

PDF

Paper 28: Integrating Local Channel Attention and Focused Feature Modulation for Wind Turbine Blade Defect Detection

Abstract: In the wind power industry, the health of wind turbine blades directly affects power generation efficiency and the safe operation of equipment. To address the low efficiency and insufficient accuracy of traditional detection methods, this paper proposes a wind turbine blade defect detection algorithm that integrates local channel attention and focal feature modulation. The algorithm first introduces the Mixed Local Channel Attention (MLCA) mechanism into the C2f module of the YOLOv8 backbone network to enhance the extraction of key features. The Focal Feature Modulation (FFM) module then replaces the original SPPF module in YOLOv8 to further aggregate global contextual features at different levels of granularity. Finally, in the neck, the progressive feature pyramid AFPN structure enhances the model's multi-scale feature fusion capability, which in turn improves the accuracy of small-object detection. Experimental results show that the proposed algorithm achieves an accuracy of 82.5%, a mAP50 of 78.6%, and 8.5 GFLOPS. In wind turbine blade defect detection, it offers higher detection performance and better real-time performance than traditional methods, effectively identifies common defects such as cracks, corrosion, and abrasion, and exhibits strong robustness and application value.

Author 1: Zheng Cao
Author 2: Rundong He
Author 3: Shaofei Zhang
Author 4: Zhaoyang Qi
Author 5: Sa Li
Author 6: Tong Liu
Author 7: Yue Li

Keywords: Fan blades; YOLO; attention mechanism; defect detection; inner-IoU

PDF

Paper 29: Construction and Optimization of Multi-Scenario Autonomous Call Rule Models in Emergency Command Scenarios

Abstract: In response to the slow processing speed, weak anti-interference, and low accuracy of autonomous call models in current emergency command scenarios, this research focuses on the fire scenario, aiming to improve emergency response efficiency through technological innovation. The research innovatively integrates a digital signal processing algorithm with a dual-tone multi-frequency (DTMF) signal detection algorithm to develop a hybrid algorithm, on which a novel autonomous call model is constructed. Comparative experimental results indicated that the accuracy of the hybrid algorithm was 0.9 and its error rate was 0.05, better than the comparison models. The average accuracy and comprehensive performance score of the model were 0.95 and 97 points, respectively, both better than the comparison models. The results confirm that the proposed autonomous call model can accurately and quickly judge emergency scenarios and handle calls, providing new ideas and a theoretical basis for emergency command and rescue in fires and other disasters, with broad application prospects.

Author 1: Weiyan Zheng
Author 2: Chaoyue Zhu
Author 3: Di Huang
Author 4: Bin Zhou
Author 5: Xingping Yan
Author 6: Panxia Chen

Keywords: Digital signal processing algorithm; dual tone multi-frequency signal detection algorithm; fire; autonomous call model

PDF
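The hybrid algorithm above builds on dual-tone multi-frequency (DTMF) signal detection. The paper's implementation is not shown here; as background, the standard building block of DTMF detection, the Goertzel algorithm, can be sketched in a few lines of Python. The 205-sample block at 8 kHz is the conventional DTMF choice, not a detail taken from this paper:

```python
from math import cos, pi, sin

def goertzel_power(samples, freq, rate):
    """Goertzel algorithm: power of one target frequency in a sample
    block, cheaper than a full FFT when only a few bins are needed."""
    k = 2 * cos(2 * pi * freq / rate)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + k * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - k * s1 * s2

rate = 8000   # Hz, telephony sample rate
n = 205       # conventional DTMF block size at 8 kHz
# Synthesize the DTMF key '8' = 770 Hz (row) + 1336 Hz (column).
tone = [sin(2 * pi * 770 * t / rate) + sin(2 * pi * 1336 * t / rate)
        for t in range(n)]
rows = {f: goertzel_power(tone, f, rate) for f in (697, 770, 852, 941)}
cols = {f: goertzel_power(tone, f, rate) for f in (1209, 1336, 1477, 1633)}
# The strongest row/column pair identifies the pressed key.
```

A real detector would add thresholds and twist checks on the row/column powers before declaring a digit.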

Paper 30: Enhancing User Comfort in Virtual Environments for Effective Stress Therapy: Design Considerations

Abstract: Mental stress has emerged as a widespread concern in modern society, impacting individuals from diverse demographic backgrounds. Therefore, exploring effective methods for therapy, such as virtual environments tailored for stress management, is vital for advancing mental health and improving coping strategies. Prioritising user comfort in the design of virtual environments is essential for enhancing their efficacy in alleviating stress. By considering four design aspects of virtual environments that influence user comfort: (i) visual clarity, (ii) safety features, (iii) cognitive preparedness, and (iv) social support, this study intends to (i) evaluate the effectiveness of these four user-centered design elements in facilitating stress reduction and (ii) explore the underlying rationale behind their stress-reducing properties. This study utilised a mixed-methods approach comprising (i) experiments, (ii) questionnaires, and (iii) interviews. Following evaluation with the Depression Anxiety Stress Scale (DASS), 40 participants (10 men and 30 women) were chosen from the 55 healthy adults aged 20 to 60 who volunteered for the study. The findings validated the efficacy of all four design aspects in enhancing users' comfort during therapeutic sessions in virtual environments. This study offers important insights into the importance of user-centered design in creating virtual environments for stress management, where comfort markedly improves therapy outcomes, and contributes valuable knowledge to the fields of mental health and human-computer interaction, paving the way for further exploration of innovative therapeutic solutions for mental stress.

Author 1: Farhah Amaliya Zaharuddin
Author 2: Nazrita Ibrahim
Author 3: Azmi Mohd Yusof

Keywords: Virtual environment design; virtual reality; stress therapy; user comfort

PDF

Paper 31: A Machine Learning Model for Crowd Density Classification in Hajj Video Frames

Abstract: Managing the massive annual gatherings of Hajj and Umrah presents significant challenges, particularly as the Saudi government aims to increase the number of pilgrims. Currently, around two million pilgrims attend Hajj and 26 million attend Umrah, making crowd control, especially in critical areas like the Grand Mosque during Tawaf, a major concern. Additional risks arise in managing dense crowds at key sites such as Arafat, where the potential for stampedes, fires, and pandemics poses serious threats to public safety. This research proposes a machine learning model to classify crowd density in video frames recorded during Hajj into three levels: moderate crowd, overcrowded, and very dense crowd, with a flashing red light to alert organizers in real time when a very dense crowd is detected. While current research on processing Hajj surveillance videos focuses solely on using CNNs to detect abnormal behaviors, this research focuses on high-risk crowds that can lead to disasters. Hazardous crowd conditions require a robust method, as incorrect classification could trigger unnecessary alerts and government intervention, while failure to classify could result in disaster. The proposed model integrates Local Binary Pattern (LBP) texture analysis, which enhances feature extraction for differentiating crowd density levels, along with edge density and area-based features. The model was tested on the KAU-Smart-Crowd 'HAJJv2' dataset, which contains 18 videos from various key locations during Hajj, including 'Massaa', 'Jamarat', 'Arafat' and 'Tawaf'. The model achieved an accuracy rate of 87% with a 2.14% error percentage (misclassification rate), demonstrating its ability to detect and classify various crowd conditions effectively. This contributes to enhanced crowd management and safety during large-scale events like Hajj.

Author 1: Afnan A. Shah

Keywords: Hajj; moderate crowd; overcrowded; very dense crowd; machine learning

PDF
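The model above relies on Local Binary Pattern (LBP) texture features. Not the paper's code, but for illustration, the basic 8-neighbour LBP operator and the histogram feature it feeds can be sketched in pure Python (function names are hypothetical; the image is a list of pixel-value rows):

```python
def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern codes for the interior
    pixels of a 2-D grayscale image given as a list of rows."""
    # Neighbour offsets, clockwise from top-left; each contributes one bit.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            row.append(code)
        codes.append(row)
    return codes

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes: a texture feature vector."""
    hist = [0] * bins
    flat = [c for row in lbp_codes(img) for c in row]
    for c in flat:
        hist[c] += 1
    n = len(flat) or 1
    return [v / n for v in hist]
```

A classifier would consume this histogram together with the edge-density and area-based features the abstract mentions.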

Paper 32: Towards an Ontology to Represent Domain Knowledge of Attention Deficit Hyperactivity Disorder (ADHD): A Conceptual Model

Abstract: Attention deficit/hyperactivity disorder (ADHD) represents a highly heterogeneous and complex medical domain with numerous multidisciplinary research areas. Despite the rising number of studies on the pathophysiology of ADHD, the available information in the ADHD domain is still scattered and disconnected. This research study mainly aims to develop a conceptual model of ADHD by applying knowledge engineering processes to structure the domain knowledge, elucidating key concepts and their interrelationships. The methodology for developing the conceptual model is derived from established practices in ontology construction. It adopts a hybrid approach, integrating principles from prominent methodologies such as Ontology Development 101, the Uschold and King methodology, and METHONTOLOGY. The proposed ADHD conceptual model links various aspects of ADHD including subtypes, symptoms, behaviors, diagnostic criteria, treatment, risk factors, comorbidities, and patient profile. Comprising eight top-level classes and highlighting 13 key relationships, it establishes connections between symptoms and recommended treatments, as well as symptoms and their diverse manifestations, risk factors, ADHD subtypes, and potential comorbidities. While the model captures a broad range of ADHD-related concepts, it has certain limitations. It does not extensively address genetic or neurobiological mechanisms, nor does it capture cultural and contextual variations in ADHD manifestations. These limitations highlight opportunities for future expansion, such as incorporating real-world data and diverse demographic contexts. Nevertheless, the model developed in this study is well-suited to serve as a cornerstone for constructing a comprehensive ADHD domain knowledge ontology.
Ontologies play a crucial role as a layer for transferring knowledge and serve as a foundation for developing advanced systems, such as decision-support tools and expert systems, to enhance ADHD research and clinical practice.

Author 1: Shahad Mansour Alsaedi
Author 2: Aishah Alsobhi
Author 3: Hind Bitar

Keywords: Conceptual model; ontology; ADHD; knowledge engineering

PDF

Paper 33: Leiden Coloring Algorithm for Influencer Detection

Abstract: In today's digital age, the role of influencers, especially on social media platforms, has grown significantly. A commonly used feature by business professionals today is follower grouping. However, this feature is limited to identifying influencers based solely on mutual followership, highlighting the need for a more sophisticated approach to influencer detection. This study proposes a novel method for influencer detection that integrates the Leiden coloring algorithm and degree centrality. This approach leverages network analysis to identify patterns and relationships within large-scale datasets. Initially, the Leiden coloring algorithm is employed to partition the network into communities, considered potential influencer hubs. Subsequently, degree centrality is utilized to identify nodes with high connectivity, indicating influential individuals. The proposed method was validated using data crawled from Twitter (X) social media, employing the keyword "GarudaIndonesia." The data was collected using Tweet-Harvest between January 1, 2020, and October 16, 2024, resulting in a dataset of 22,623 rows. The dataset was subjected to two experimental scenarios: 1,000 and 5,000 rows. Compared to the Louvain coloring method, the proposed approach demonstrated an increase in the modularity value of the Leiden coloring algorithm by 0.0306, a reduction in processing time of 14.4848 seconds, and a decrease in the number of communities by 1,290.

Author 1: Handrizal
Author 2: Poltak Sihombing
Author 3: Erna Budhiarti Nababan
Author 4: Mohammad Andri Budiman

Keywords: Influencer; Louvain coloring; Leiden; Leiden coloring

PDF
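The two-stage idea in this abstract, partition the network into communities, then rank nodes by degree centrality, can be illustrated without the Leiden step itself. The sketch below assumes the community partition is already given (e.g. produced by the Leiden algorithm via a package such as `leidenalg`); everything else is plain Python, and the function names are illustrative, not the authors':

```python
def degree_centrality(edges, nodes):
    """Fraction of the other nodes each node is directly connected to."""
    deg = {n: 0 for n in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(nodes)
    return {u: d / (n - 1) for u, d in deg.items()}

def influencers_per_community(edges, partition):
    """Pick the highest-degree node in each community as the candidate
    influencer; `partition` maps node -> community id."""
    nodes = list(partition)
    cent = degree_centrality(edges, nodes)
    best = {}
    for node, com in partition.items():
        if com not in best or cent[node] > cent[best[com]]:
            best[com] = node
    return best
```

On crawled follower or mention data, the edge list would come from the collected tweets and the partition from the community-detection stage.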

Paper 34: Construction and Optimal Control Method of Enterprise Information Flaw Risk Contagion Model Based on the Improved LDA Model

Abstract: In this study, we construct a risk contagion model for corporate information disclosure using complex network methods and incorporate the manipulative perspective of management tone into it. We employ an enhanced LDA model to analyze and refine the relevant data and models presented in this paper. The results of quantitative analysis show that the improved LDA algorithm optimizes the classification decision boundary, making similar samples closer and different samples more dispersed, thus improving classification accuracy. Additionally, we combine multi-objective evolutionary optimization techniques with an improved particle swarm optimization algorithm to solve the proposed model, incorporating enhancements through a weighted SMOTE algorithm. The quantization results show that using the weighted SMOTE algorithm to deal with the imbalance in the dataset significantly improves classification performance. Furthermore, we compare our proposed method with classical algorithms on four real enterprise information disclosure datasets and observe that our approach exhibits higher efficiency and accuracy than traditional optimal control methods. Accounting information disclosure reveals moral hazard and adverse selection, alleviating information asymmetry. Transparent information improves the availability of financing, preventing liquidity risk. High-quality information disclosure reduces financing costs, alleviates confidence crises, ensures capital adequacy, and avoids capital outflows. This research constructs a corporate information disclosure risk contagion model and analyzes it with an improved LDA model and multi-objective evolutionary optimization methods, showing high efficiency and good accuracy and effectively controlling environmental and related effects.

Author 1: Jun Wang
Author 2: Zhanhong Zhou

Keywords: Management tone manipulation; enterprise information disclosure; risk contagion; optimal control

PDF

Paper 35: A Machine Learning-Based Intelligent Employment Management System by Extracting Relevant Features

Abstract: In recent years, the number of students seeking to broaden the work opportunities available to college graduates has risen significantly. This study presents an intelligent employment management system for educational institutions that helps students better understand potential occupations and analyze the sectors in which they may work. The article discusses the fundamental concepts of information recommendation and presents a customized recommendation system for employment and entrepreneurship. Students' basic information and personal interest points are represented as feature vectors, providing theoretical support for college students' career planning and for employment and entrepreneurship recommendations. Finally, a performance analysis of the proposed model shows that it offers college students convenient and fast information recommendation, indirectly improving the graduate employment rate and providing solutions to the problem of difficult employment.

Author 1: Yiming Wang
Author 2: Chi Che

Keywords: Employment management system; recommendation system; feature index; accuracy and employment intention index

PDF

Paper 36: Optimizing the Fault Localization Path of Distribution Network UAVs Based on a Cloud-Pipe-Side-End Architecture

Abstract: The currently proposed optimization algorithm for cooperative fault inspection of distribution network UAVs struggles to accurately detect fault points quickly, leading to low inspection efficiency. To address these issues, we investigate a new fault localization path optimization algorithm for distribution network UAVs based on a cloud-pipe-edge-end architecture. This architecture employs multiple drones for coordinated control, allowing for the simultaneous detection of suspected fault areas. Communication links facilitate interaction at both the drone and system levels, enabling the transmission of fault diagnosis information. Fault defects are identified, and the information is analyzed within an edge computing framework to achieve precise fault localization. Experimental results demonstrate that the proposed algorithm significantly enhances detection speed and accuracy, providing robust technical support for UAV operations.

Author 1: Lan Liu
Author 2: Ping Qin
Author 3: Xinqiao Wu
Author 4: Chenrui Zhang

Keywords: Cloud-pipe-edge-end architecture; distribution network UAV; cloud-edge collaboration; edge computing

PDF

Paper 37: Predicting the Number of Video Game Players on the Steam Platform Using Machine Learning and Time Lagged Features

Abstract: Predicting player count can provide game developers with valuable insights into player behavior and trends in the game population, helping with strategic decision-making. Therefore, it is important for the prediction to be as accurate as possible. Using a game's metadata can help with prediction accuracy, but metadata stays largely constant over time and carries little temporal context. This study explores the use of machine learning with lagged features on top of metadata, aiming to improve accuracy in predicting daily player count using data from the top 100 games on Steam, one of the biggest game distribution platforms. Several combinations of feature selection methods and machine learning models were tested to find the best performer. Experiments on a dataset from multiple games show that the Random Forest model combined with Pearson's correlation feature selection gives the best result, with an R² score of 0.9943 and an average R² score above 0.9 across all combinations.

Author 1: Gregorius Henry Wirawan
Author 2: Gede Putra Kusuma

Keywords: Video games; regression method; feature selection; time series forecasting; machine learning

PDF
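The core feature-engineering step described above, turning a daily player-count series into lagged features plus a target, is straightforward to sketch. This is an illustration only, with hypothetical names, not the paper's code:

```python
def lag_features(series, lags=(1, 7)):
    """Build supervised-learning rows from a daily player-count series:
    the features for day t are the counts `lag` days earlier, for each
    lag, and the target is the count on day t itself."""
    start = max(lags)  # earliest day for which every lag exists
    X, y = [], []
    for t in range(start, len(series)):
        X.append([series[t - lag] for lag in lags])
        y.append(series[t])
    return X, y
```

The resulting (X, y) rows, concatenated with per-game metadata columns, would feed any regressor such as a random forest.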

Paper 38: Cross-Entropy-Driven Optimization of Triangular Fuzzy Neutrosophic MADM for Urban Park Environmental Design Quality Evaluation

Abstract: The evaluation of urban park environmental design quality focuses on functionality, aesthetics, ecology, and user experience. Functionality ensures practical facilities, clear zoning, and accessibility. Aesthetics emphasizes visual harmony, cultural integration, and artistic appeal. Ecological quality assesses vegetation, biodiversity, and sustainability, promoting environmental protection. User experience evaluates comfort, safety, inclusivity, and the ability to meet diverse needs. A well-designed park balances these elements, fostering harmony between humans and nature while enhancing public well-being, environmental awareness, and the overall urban living experience. Quality evaluation of urban park environmental design is a multi-attribute decision-making (MADM) problem. In this study, a triangular fuzzy neutrosophic number cross-entropy (TFNN-CE) approach is developed under triangular fuzzy neutrosophic sets (TFNSs). Entropy is employed to derive the attribute weights, and the TFNN-CE approach is then applied to MADM under TFNSs. Finally, a numerical example on the quality evaluation of urban park environmental design demonstrates the advantages of the TFNN-CE approach through several comparisons. The major contributions of this study are: (1) entropy is employed to derive the weights under TFNSs; (2) the TFNN-CE approach is constructed under TFNSs; (3) the TFNN-CE approach is put forward for MADM under TFNSs; and (4) a numerical example on the quality evaluation of urban park environmental design demonstrates the advantages of the TFNN-CE approach through several comparisons.

Author 1: Xing She
Author 2: Xi Xie
Author 3: Peng Xie

Keywords: Multiple-Attribute Decision-Making (MADM) problems; Triangular Fuzzy Neutrosophic Sets (TFNSs); cross-entropy approach; TFNN-CE approach; urban park environmental design

PDF

Paper 39: Improved YOLOv11pose for Posture Estimation of Xinjiang Bactrian Camels

Abstract: Automatic pose estimation of camels is crucial for long-term health monitoring in animal husbandry. Research on camels is currently scarce, so this study has practical application value for working camel farms. The high visual similarity between camels makes pose estimation particularly challenging. This study proposes YOLOv11pose-Camel, a pose estimation algorithm tailored for Bactrian camels. The algorithm enhances feature extraction with a lightweight channel attention mechanism (ECA) and improves detection accuracy through an efficient multi-scale pooling structure (SimSPPF). Additionally, C3k2 modules in the neck are replaced with dynamic convolution blocks (DECA-blocks) to strengthen global feature extraction. We collected a diverse dataset of Bactrian camel images with the assistance of farm staff and applied data augmentation. The optimized YOLOv11pose model achieved 94.5% accuracy and 94.1% mAP@0.5 on the Xinjiang Bactrian camel dataset, outperforming the baseline by 2.1% and 2.2%, respectively. The model also maintains a good balance between detection speed and efficiency, demonstrating its potential for practical applications in animal husbandry.

Author 1: Lei Liu
Author 2: Alifu Kurban
Author 3: Yi Liu

Keywords: YOLOv11pose; efficient channel attention; multi-scale pooling structure; DECA-block; Bactrian camel posture estimation; SimSPPF; ECA

PDF

Paper 40: A Hybrid Machine Learning Approach for Continuous Risk Management in Business Process Reengineering Projects

Abstract: This study proposes a hybrid machine learning approach for continuous risk management in Business Process Reengineering (BPR) projects. This approach combines supervised and unsupervised learning techniques, integrating feature selection and preprocessing through Principal Component Analysis (PCA), clustering with K-means, and visualization with t-SNE. The labeled data are then used as input for predictive modeling with XGBoost, optimized using Particle Swarm Optimization (PSO), Grey Wolf Optimizer (GWO), and Grid Search algorithms. PCA reduces data dimensionality, simplifying analysis and improving model performance. K-means and t-SNE are employed for data clustering and visualization, enabling the identification of risk segments and uncovering hidden patterns. XGBoost, a powerful boosting algorithm, is utilized for predictive modeling due to its efficiency, accuracy, and ability to handle missing values. Optimization techniques further enhance XGBoost's performance by fine-tuning its hyperparameters. The approach was applied to a risk database from the automotive sector, demonstrating its practical applicability. Results show that PSO achieves the lowest mean squared error (MSE) and root mean squared error (RMSE), followed by GWO and Grid Search. Mahalanobis distance yields more accurate clustering results compared to Euclidean, Manhattan, and Cosine distances. This hybrid machine learning approach significantly enhances risk detection, evaluation, and mitigation in BPR projects, offering a robust framework for proactive decision-making.
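A minimal sketch of the PCA → K-means → boosting pipeline the abstract describes, on synthetic data. scikit-learn's GradientBoostingRegressor stands in for XGBoost here, and the feature counts, cluster count, and synthetic risk scores are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                        # 200 risk records, 10 raw features
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=200)   # synthetic risk score

# 1) PCA reduces dimensionality before clustering and modeling.
X_pca = PCA(n_components=5, random_state=0).fit_transform(X)

# 2) K-means labels each record with a risk segment.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_pca)

# 3) Segment labels join the reduced features as model input.
X_model = np.column_stack([X_pca, segments])

# 4) A boosting regressor predicts the risk score; MSE/RMSE evaluate it.
model = GradientBoostingRegressor(random_state=0).fit(X_model, y)
mse = mean_squared_error(y, model.predict(X_model))
rmse = np.sqrt(mse)
print(f"MSE={mse:.4f}  RMSE={rmse:.4f}")
```

In the full approach, the hyperparameters of the boosting step would then be tuned by PSO, GWO, or Grid Search rather than left at defaults.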

Author 1: RAFFAK Hicham
Author 2: LAKHOUILI Abdallah
Author 3: MANSOURI Moahmed

Keywords: BPR; Risk management; PCA; K-means; XGBoost; PSO; GWO

PDF

Paper 41: Enhancing CURE Algorithm with Stochastic Neighbor Embedding (CURE-SNE) for Improved Clustering and Outlier Detection

Abstract: This study focuses on analyzing stunting data using the CURE and CURE-SNE algorithms for clustering and outlier detection. The primary challenge is identifying patterns in stunting data, which includes variables such as age, gender, height, weight, and nutritional status. Both algorithms were employed to group the data and detect outliers that may affect the results of the analysis. The evaluation methods included determining the optimal number of clusters using the silhouette score and assessing cluster quality using the Davies-Bouldin Index (DBI). Silhouette analysis showed that both algorithms formed four optimal clusters, with CURE-SNE detecting 6,050 outliers, while CURE detected 5,047 outliers. When validated using DBI, CURE achieved a score of 0.523, while CURE-SNE produced a lower score of 0.388, indicating that CURE-SNE outperformed CURE in terms of cluster quality. This suggests that CURE-SNE not only detects more outliers but also produces clusters with better separation and compactness. The findings highlight that both algorithms are effective for clustering stunting data, but CURE-SNE excels in outlier detection and overall cluster quality. Thus, CURE-SNE is more suitable for handling complex datasets with potential outliers, providing more accurate insights into the structure of the data. In conclusion, CURE-SNE demonstrates superior performance compared to CURE, offering a more reliable and detailed clustering solution for stunting data analysis.
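A small sketch of the two cluster-quality checks the abstract relies on: choosing the cluster count by silhouette score and comparing solutions by the Davies-Bouldin Index (lower is better). The four synthetic blobs below stand in for the stunting records; they are not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

rng = np.random.default_rng(1)
# Four well-separated blobs stand in for the stunting records.
centers = np.array([[0, 0], [8, 0], [0, 8], [8, 8]])
X = np.vstack([c + rng.normal(scale=0.5, size=(50, 2)) for c in centers])

# Silhouette score across candidate cluster counts picks the optimum.
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)   # recovers the four planted clusters
labels = KMeans(n_clusters=best_k, n_init=10, random_state=1).fit_predict(X)
dbi = davies_bouldin_score(X, labels)  # lower DBI = tighter, better-separated clusters
print(best_k, round(dbi, 3))
```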

Author 1: Dewi Sartika Br Ginting
Author 2: Syahril Efendi
Author 3: Amalia
Author 4: Poltak Sihombing

Keywords: Stunting; clustering algorithm; CURE; CURE-SNE; outliers

PDF

Paper 42: Distributed Networks for Brain Tumor Classification Through Temporal Learning and Hybrid Attention Segmentation

Abstract: Brain Tumor (BT), the growth of abnormal cells in the brain, is categorized into different types based on the symptoms and the affected regions of the brain. Classification of BT using Magnetic Resonance Imaging (MRI) is an important and challenging task for BT diagnosis. Various approaches have been designed to address this problem, yet many inconsistencies remain in detecting tumors at an early stage. Owing to the variability and complexity of lesion size, shape, location, and texture, automatic detection of BT remains a challenging task in the medical research community. Hence, a Hybrid Attention Temporal Difference Learning with Distributed Convolutional Neural Network-Bidirectional Long Short-Term Memory (HATDL-DCNN-BiLSTM) model is developed in this research to detect and classify BT at an early stage, improving patient survival rates. The proposed model uses a Gaussian filter for input image enhancement and Hybrid Attention-VNet segmentation to generate regions of interest, and it reduces computational cost by minimizing feature dimensions through the attention modules. The model consumes less memory and increases training speed through a distributed learning mechanism. Features extracted using the Hybrid Attention based Efficient Statistical Triangular ResNet (HA-ESTER) help the classification model train more accurately and efficiently. The proposed HATDL-DCNN-BiLSTM attains accuracy, recall, F1-score, and precision of 98.93%, 99.21%, 97.67%, and 96.17% with training data, and 96.34%, 96.51%, 96.33%, and 96.15% with k-fold validation on the BraTS 2019 dataset.

Author 1: Sayeedakhanum Pathan
Author 2: Savadam Balaji

Keywords: Brain tumor; magnetic resonance imaging; Gaussian filter; hybrid attention-VNet; distributed convolution neural network

PDF

Paper 43: A Distributed Framework for Indoor Product Design Using VR and Intelligent Algorithms

Abstract: This paper presents an innovative approach to the digital design of indoor home products by integrating virtual reality (VR) technology with intelligent algorithms to enhance design accuracy and efficiency. A model combining the Red Deer Optimization Algorithm (RDA) with a Simple Recurrent Unit (SRU) network is proposed to evaluate and optimize the design process. The study develops a digital design framework that incorporates key evaluation factors, optimizing the SRU network through the Red Deer Optimization Algorithm to achieve higher precision in design applications. The model’s performance is validated through extensive experiments using metrics such as Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE). Results show that the RDA-SRU model outperforms other methods, with the smallest MAE of 0.133, RMSE of 0.02, and MAPE of 0.015. Additionally, the model achieved an R² value of 0.968 and the shortest evaluation time of 0.028 seconds, demonstrating its superior performance in predicting and evaluating digital design applications for home products. These findings indicate that the integration of VR with intelligent algorithms significantly improves user experience, customizability, and the overall accuracy of digital design processes. This approach offers a robust solution for designers to create more efficient and user-centric home product designs, meeting growing customer demands for immersive and interactive design experiences.

Author 1: Yaoben Gong
Author 2: Zhenyu Gao

Keywords: Interior home products; virtual reality technology; digital design algorithms; improved simple recurrent units; intelligent algorithms for design application evaluation

PDF

Paper 44: Convolutional Layer-Based Feature Extraction in an Ensemble Machine Learning Model for Breast Cancer Classification

Abstract: Mammography and ultrasound are the main medical imaging modalities for identifying breast lesions. Computer-assisted diagnosis (CAD) is an important tool for radiologists, helping them differentiate benign and malignant lesions more quickly and objectively. The use of appropriate features in mammography and ultrasound is one of the key factors determining the success of computer-assisted diagnosis (CAD) results for breast cancer systems. The diversity of feature forms and extraction techniques is a challenge. Additionally, the use of a single classification algorithm often causes noise, bias, and is not robust. We propose a convolutional layer-based feature extraction technique in the ensemble learning model for the classification of breast cancer. This study uses 439 mammography images (203 benign, 236 malignant) and 421 ultrasound images (244 benign, 177 malignant). This research consists of several stages, including data pre-processing, feature extraction, classification, and performance evaluation. We used four convolution layer-based feature extraction techniques: simple convolution (SC), feature fusion convolution (FFC), feature fusion depthwise convolution (FFDC), and feature fusion depthwise separable convolution (FFDSC). The model uses five machine learning algorithms (support vector machine, random forest, k nearest neighbours, decision tree, and logistic regression) that are part of ensemble learning. The experimental results show that the use of the FFC convolution layer in ensemble learning has the best performance for both datasets. In the ultrasound data set, the FFC achieved a value of 0.90 in each of the accuracy, precision, recall, specificity, and F1 score metrics. In the mammography data set, the FFC achieved a value of 0.98 on each of the same metrics. These results show the effectiveness of feature fusion in improving classification performance in the soft voting classifier for ensemble learning.
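The soft-voting ensemble the abstract describes can be sketched with scikit-learn's VotingClassifier over the five named algorithms. The synthetic features below stand in for the convolutional-layer features; the real pipeline would feed in the SC/FFC/FFDC/FFDSC extractions from mammography or ultrasound images.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary task (benign vs malignant stand-in).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=0)),  # probabilities needed for soft voting
        ("rf", RandomForestClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average predicted probabilities across the five models
)
ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 2))
```

Soft voting averages the class probabilities of the five learners, which is why the SVM must be configured with `probability=True`.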

Author 1: Shofwatul ‘Uyun
Author 2: Lina Choridah
Author 3: Slamet Riyadi
Author 4: Ade Umar Ramadhan

Keywords: Ensemble learning; feature extraction; convolutional layer; breast cancer

PDF

Paper 45: Design and Application of a TOPSIS-Based Fuzzy Algorithm

Abstract: The study aims to evaluate the tourism attractiveness of different tourist attractions in the same region through the TOPSIS model from the perspective of culture and tourism integration, so as to provide theoretical and practical support for the region's tourism development. Based on the concept of culture and tourism integration and its importance in tourism development, an evaluation index system of tourism attractiveness is constructed, covering indicators such as tourism resources and tourism infrastructure. The entropy weighting method and the TOPSIS model are then used for the comprehensive evaluation of these indicators, yielding the weight of each indicator and a comprehensive tourism attractiveness score for a given location. The results show that the TOPSIS analysis clearly reveals a region's strengths and weaknesses in tourism resources and cultural characteristics, allowing targeted recommendations such as strengthening tourism infrastructure construction and excavating and protecting cultural characteristics. These suggestions can help further improve a location's tourism attractiveness, attracting more tourists and promoting local economic development. The methodology and framework of this study also serve as a reference for other regions carrying out similar tourism attractiveness evaluations. In the context of cultural and tourism integration, this study expands the perspective of tourism evaluation and provides new ideas and methods for local tourism development.
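A compact sketch of the entropy-weight + TOPSIS scoring the study applies. The decision matrix below (4 attractions × 3 benefit criteria) is invented for illustration; the real index system would supply the criteria and values.

```python
import numpy as np

# Rows: attractions; columns: benefit criteria (e.g. resources, infrastructure, culture).
X = np.array([[7., 8., 6.],
              [9., 6., 7.],
              [6., 9., 8.],
              [8., 7., 9.]])

# Entropy weighting: criteria whose values vary more get larger weights.
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
w = (1 - E) / (1 - E).sum()

# TOPSIS: rank alternatives by relative closeness to the ideal solution.
V = w * X / np.linalg.norm(X, axis=0)       # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # all criteria treated as benefits here
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)         # 1 = ideal, 0 = worst
print(np.round(closeness, 3), closeness.argmax())
```

Cost criteria, if present, would instead take the column minimum as the ideal value.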

Author 1: Fei Liu

Keywords: Cultural and tourism integration; attractiveness; TOPSIS; entropy weight method

PDF

Paper 46: Enhanced Butterfly Optimization Algorithm for Task Scheduling in Cloud Computing Environments

Abstract: Cloud computing is transforming computing by provisioning elastic and adaptable capabilities on demand. A scalable infrastructure and a wide range of offerings make cloud computing essential to today's computing ecosystem. Cloud resources enable users and various companies to utilize data maintained in a distant location. Generally, cloud vendors provide services within the limitations of Service Level Agreement (SLA) terms. SLAs consist of various Quality of Service (QoS) requirements the supplier promises. Task scheduling is critical to maintaining higher QoS and meeting SLAs. In simple terms, task scheduling aims to assign tasks so as to limit wasted time and optimize performance. Considering the NP-hard character of cloud task scheduling, metaheuristic algorithms are widely applied to handle this optimization problem. This study presents a novel approach using the Butterfly Optimization Algorithm (BOA) for scheduling cloud-based tasks across diverse resources. BOA performs well on non-constrained and non-biased mathematical functions. However, its search capacity is limited on shifted, rotated, and/or constrained optimization problems. This deficiency is addressed by incorporating a virtual butterfly and improved fuzzy decision processes into the conventional BOA. The suggested methodology improves throughput and resource utilization while reducing the makespan. Regardless of the number of tasks, the approach consistently produces better results, indicating greater scalability.
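The makespan objective that scheduling metaheuristics like BOA minimize can be stated in a few lines: tasks are mapped to VMs, and the makespan is the finish time of the busiest VM. The greedy earliest-finish heuristic below is only a baseline for illustration (with invented task lengths and VM speeds), not the paper's BOA.

```python
task_lengths = [40, 10, 30, 20, 25, 15]   # task sizes in instructions (arbitrary units)
vm_speeds = [2.0, 1.0]                    # VM speeds in instructions per unit time

loads = [0.0] * len(vm_speeds)            # accumulated busy time per VM
for length in sorted(task_lengths, reverse=True):
    # Assign each task to the VM that would finish it earliest.
    vm = min(range(len(vm_speeds)),
             key=lambda i: loads[i] + length / vm_speeds[i])
    loads[vm] += length / vm_speeds[vm]

makespan = max(loads)                     # the objective a scheduler minimizes
print(loads, makespan)
```

A metaheuristic such as BOA searches the space of task-to-VM mappings for one whose makespan beats such greedy baselines, while also weighing throughput and resource utilization.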

Author 1: Yue ZHAO

Keywords: Cloud computing; resource utilization; task scheduling; Butterfly Optimization Algorithm; fuzzy decision strategy

PDF

Paper 47: Leveraging Large Language Models for Automated Bug Fixing

Abstract: Bug fixing, also known as Automatic Program Repair (APR), is a significant area of research in the software engineering field. It aims to develop techniques and algorithms to automatically fix bugs and generate fixing patches in source code. Researchers focus on developing APR algorithms to enhance software reliability and increase developer productivity. In this paper, a novel model for automated bug fixing has been developed leveraging large language models. The proposed model accepts the bug type and the buggy method as inputs and outputs the repaired version of the method. The model can localize the buggy lines, debug the source code, generate the correct patches, and insert them in the correct locations. To evaluate the proposed model, a new dataset containing 53 Java source code files from four bug classes (Program Anomaly, GUI, Test-Code, and Performance) is presented. The proposed model successfully fixed 49 of the 53 programs using gpt-3.5-turbo and all 53 using gpt-4-0125-preview, achieving accuracies of 92.45% and 100%, respectively. Additionally, the proposed model outperforms several state-of-the-art APR models, fixing all 40 buggy programs in the QuixBugs benchmark dataset.

Author 1: Shatha Abed Alsaedi
Author 2: Amin Yousef Noaman
Author 3: Ahmed A. A. Gad-Elrab
Author 4: Fathy Elbouraey Eassa
Author 5: Seif Haridi

Keywords: Bug fixing; automated program repair; large language models; software debugging; software maintenance; machine learning

PDF

Paper 48: Towards Secure Internet of Things Communication Through Trustworthy RPL Routing Protocols

Abstract: The Internet of Things (IoT) refers to a network of connected objects for autonomous data exchange and processing. With the continuing growth of IoT, ensuring data transmission integrity and security is essential, as data is subject to many attacks. RPL, the routing protocol for low-power and lossy networks, is widely used in IoT deployments. It provides a framework that accommodates low-power operation and resilience to specific routing attacks. Trust-based RPL routing protocols improve RPL security by introducing a Minimum Acceptable Trust threshold, permitting only nodes with a sufficient level of obtained trust to participate in routing. This mechanism is designed to reduce malicious activities and to establish secure communications. This paper provides an overall review of trustworthy RPL routing methods in IoT and discusses the trust metrics of these approaches and their limitations. To the best of our knowledge, this is the first survey focusing on trust-based RPL protocols in IoT, offering valuable insights into protocol performance and possible improvements.
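The Minimum Acceptable Trust gate described above reduces to a simple filter over a node's neighbor table. The node names, trust values, and the 0.6 threshold below are illustrative, not from any specific trust-based RPL protocol.

```python
MAT = 0.6  # Minimum Acceptable Trust threshold (illustrative)

# Neighbor table: node id -> obtained trust score in [0, 1].
neighbors = {"node_a": 0.92, "node_b": 0.55, "node_c": 0.78, "node_d": 0.30}

# Only neighbors meeting the threshold are eligible to participate in routing.
trusted = {n: t for n, t in neighbors.items() if t >= MAT}

# Prefer the most trusted eligible neighbor as the routing parent.
parent = max(trusted, key=trusted.get) if trusted else None
print(sorted(trusted), parent)
```

Real protocols differ mainly in how the trust score itself is computed (direct observation, recommendations, energy, mobility), which is exactly the trust-metric design space the survey reviews.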

Author 1: Rui LI

Keywords: Internet of Things; routing; trust; data transmission

PDF

Paper 49: Cybersecurity Awareness in Schools: A Systematic Review of Practices, Challenges, and Target Audiences

Abstract: This systematic literature review examines cybersecurity awareness in schools, focusing on effective practices, challenges, and future directions. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, peer-reviewed publications in English were sourced from ACM Digital Library, IEEE Xplore, ScienceDirect, SpringerLink, and Emerald, covering the period from 2019 to 2024. Studies were included if they focused on cybersecurity awareness in primary and secondary educational settings, excluding those unrelated to educational contexts or published before 2019. A total of 816 records were identified, of which 220 were duplicates and removed. After screening and eligibility assessments, 14 studies met the inclusion criteria. Risk of bias was minimized by adhering to strict inclusion criteria, such as limiting the review to high-quality, peer-reviewed studies, and ensuring consistency in the data extraction process. The review highlights effective practices such as using serious games, mobile apps, and tailored programs to enhance cybersecurity awareness. Challenges include inconsistent curricula, insufficient parental involvement, and resource limitations. These results emphasize integrating cybersecurity education across school curricula and regularly updating content to reflect evolving threats. Limitations include the exclusion of non-English and non-peer-reviewed studies. Future research should consider broader contexts and additional sources.

Author 1: Abdulrahman Abdullah Arishi
Author 2: Nazhatul Hafizah Kamarudin
Author 3: Khairul Azmi Abu Bakar
Author 4: Zarina Binti Shukur
Author 5: Mohammad Kamrul Hasan

Keywords: Cybersecurity awareness; threats; awareness programs; education; school security

PDF

Paper 50: Integrating Multi-Agent System and Case-Based Reasoning for Flood Early Warning and Response System

Abstract: This research addresses the limitations of current Multi-Agent Systems (MAS) in Flood Early Warning and Response Systems (FEWRS), focusing on gaps in risk knowledge, monitoring, forecasting, warning dissemination, and response capabilities. These shortcomings reduce the system’s reliability and public trust, highlighting the need for better flood preparedness and learning mechanisms. To tackle these issues, this study proposes a new conceptual framework combining Case-Based Reasoning (CBR) with MAS, aimed at enhancing flood prediction, learning, and decision-making. CBR enables the system to learn from past flood events by retrieving and adapting cases to improve future predictions and responses, while MAS allows for decentralized and collaborative decision-making among various agents within the system. This integration fosters a dynamic, real-time system that adapts to changing conditions and improves over time through continuous feedback. The framework’s effectiveness is evaluated using the quadruple helix model, addressing social, economic, environmental, and governance aspects. Socially, the system increases community resilience through improved early warnings. Economically, it reduces flood impacts by enabling faster and more accurate responses. Environmentally, it enhances monitoring and preservation of ecosystems. In governance, the framework improves coordination between agencies and the public. The CBR-MAS framework significantly improves intelligent detection, decision-making speed, and community resilience, offering substantial improvements over traditional FEWRS. This adaptive approach promises to build a more reliable, trustworthy system capable of handling the complexities of flood risks in the future.

Author 1: Nor Aimuni Md Rashid
Author 2: Zaheera Zainal Abidin
Author 3: Zuraida Abal Abas

Keywords: Flood; multi-agent system; flood early warning system; case-based reasoning; quadruple helix; flood risk

PDF

Paper 51: Multi-Source Consistency Deep Learning for Semi-Supervised Operating Condition Recognition in Sucker-Rod Pumping Wells

Abstract: Making full use of the multiple measured information sources obtained from sucker-rod pumping wells through deep learning is crucial for precisely recognizing operating conditions. However, existing deep learning-based operating condition recognition technology suffers from low accuracy and weak practicality owing to the limitations of methods for handling single-source or multi-source data, a high demand for labeled data, and an inability to exploit massive unknown operating condition data resources. To solve these problems, we design a semi-supervised operating condition recognition method based on multi-source consistency deep learning. Specifically, on the basis of the WideResNet28-2 convolutional neural network (CNN) framework, a multi-head self-attention mechanism and a feedforward neural network are first used to extract deeper features from the measured dynamometer cards and the measured electrical power cards, respectively. Then, a consistency constraint loss based on cosine similarity is introduced to maximize the similarity of the final features expressed by the different information sources. Next, the optimal global feature representation of the multi-source fusion is obtained by learning the weights of the feature representations of the different information sources through an adaptive attention mechanism. Finally, the fused multi-source features, combined with multi-source semi-supervised class-aware contrastive learning, are exploited to yield the operating condition recognition model. We test the proposed model on a dataset produced from a Chinese oilfield with a high-pressure, low-permeability thin oil reservoir block. Experiments show that the proposed method better learns the critical features of the multiple measured information sources of oil wells and further improves operating condition identification performance by making full use of unknown operating condition data together with a small amount of labeled data.

Author 1: Jianguo Yang
Author 2: Bin Zhou
Author 3: Muhammad Tahir
Author 4: Min Zhang
Author 5: Xiao Zheng
Author 6: Xinqian Liu

Keywords: Operating condition recognition of sucker-rod pumping wells; multi-source consistency learning; semi-supervised learning; CNN; attention mechanism

PDF

Paper 52: Development of a Smart Water Dispenser Based on Object Recognition with Raspberry Pi 4

Abstract: In this project, we develop and apply a Smart Water Dispenser system that combines object recognition with fluid level control using ultrasonic sensors, a Raspberry Pi, and DC motors. The system is built around a Raspberry Pi 4 Model B whose integrated hardware and software, programmed with OpenCV and YOLO V8, detect a cup and fill it with water precisely and automatically. The Raspberry Pi controls the DC motor and the ultrasonic sensor (HC-SR04), detecting cups automatically and measuring the volume of water with precision. The dispenser pumps water according to the volume of water in the glass and stops pumping once the glass is sufficiently full, avoiding spills, with 90% filling precision. In operation, the system scans for a cup up to three times; if a cup is found, the sensor assembly and the water-release valve stop directly at the cup's position and the water fills the cup automatically. Otherwise, the system moves backward and shuts down. Initial testing has been successful and demonstrates the system's effectiveness in finding cups and managing water levels. This innovation promises improved user comfort, especially for people with disabilities, by applying advanced object recognition technology, and it also saves water. In testing, the system achieved 95% to 97% accuracy in object detection across different types of cups.
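A sketch of the fill-level logic implied above: an HC-SR04 echo time gives the distance from the sensor at the cup rim to the water surface, from which a fill fraction is derived and compared against a stop threshold. The cup height, speed of sound, and 90% target are assumptions for illustration; GPIO wiring is omitted so the calculation can run standalone.

```python
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C
CUP_HEIGHT_M = 0.10      # assumed 10 cm cup, sensor mounted at the rim
TARGET_FILL = 0.90       # stop pumping at 90% full to avoid spills

def fill_fraction(echo_time_s):
    """Convert an HC-SR04 echo round-trip time into a fill fraction in [0, 1]."""
    distance = SPEED_OF_SOUND * echo_time_s / 2.0  # one-way distance to the water
    return max(0.0, min(1.0, 1.0 - distance / CUP_HEIGHT_M))

def keep_pumping(echo_time_s):
    """Motor stays on while the cup is below the target fill level."""
    return fill_fraction(echo_time_s) < TARGET_FILL

# An echo corresponding to 5 cm of air above the water: the cup is half full.
t_half = 0.05 * 2 / SPEED_OF_SOUND
print(round(fill_fraction(t_half), 2), keep_pumping(t_half))
```

On the real device, `echo_time_s` would come from timing the HC-SR04 echo pin, and `keep_pumping` would gate the DC motor driver.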

Author 1: Dani Ramdani
Author 2: Puput Dani Prasetyo Adi
Author 3: Andriana
Author 4: Tjahjo Adiprabowo
Author 5: Yuyu Wahyu
Author 6: Arief Suryadi Satyawan
Author 7: Sally Octaviana Sari
Author 8: Zulkarnain
Author 9: Noor Rohman

Keywords: Smart water dispenser; object recognition; Raspberry Pi 4; YOLO V8; ultrasonic sensor

PDF

Paper 53: Machine Learning as a Tool to Combat Ransomware in Resource-Constrained Business Environment

Abstract: Ransomware has emerged as one of the leading cybersecurity threats to microenterprises, which often lack the technological and financial resources to implement advanced protection systems. This study proposes a cybersecurity model based on machine learning, designed not only for the detection and mitigation of ransomware attacks but also as a scalable and adaptable solution that can be integrated into business infrastructures across various sectors. By leveraging advanced techniques to identify malicious behavior patterns, the system alerts businesses before significant damage occurs. Moreover, this approach provides complementary measures such as automated updates and backups, enhancing resilience against cyber threats in resource-constrained environments. This research aims not only to protect critical data but also to contribute to the development of accessible cybersecurity models, improving operational continuity and promoting sustainability in the digital landscape.

Author 1: Luis Jesús Romero Castro
Author 2: Piero Alexander Cruz Aquino
Author 3: Fidel Eugenio Garcia Rojas

Keywords: Ransomware; cybersecurity; machine learning; microenterprise; threat detection

PDF

Paper 54: Traffic Speed Prediction Based on Spatial-Temporal Dynamic and Static Graph Convolutional Recurrent Network

Abstract: Traffic speed prediction based on spatial-temporal data plays an important role in intelligent transportation. The time-varying dynamic spatial relationship and complex spatial-temporal dependence are still important problems to be considered in traffic prediction. In response to existing problems, a Dynamic and Static Graph Convolutional Recurrent Network (DASGCRN) model for traffic speed prediction is proposed to capture the spatial-temporal correlation in the road network. DASGCRN consists of Spatial Correlation Extraction Module (SCEM), Dynamic Graph Construction Module (DGCM), Dynamic Graph Convolution Recurrent Module (DGCRM) and residual decomposition. Firstly, the improved traditional static adjacency matrix captures the relationship between each time step node. Secondly, the graph convolution captures the overall spatial information between the road networks, and the dynamic graph isomorphic network captures the hidden dynamic dependencies between adjacent time series. Thirdly, spatial-temporal correlation of traffic data is captured based on dynamic graph convolution and gated recurrent unit. Finally, the residual mechanism and the phased learning strategy are introduced to enhance the performance of DASGCRN. We conducted extensive experiments on two real-world traffic speed datasets, and the experimental results show that the performance of DASGCRN is significantly better than all baselines.

Author 1: YANG Wenxi
Author 2: WANG Ziling
Author 3: CUI Tao
Author 4: LU Yudong
Author 5: QU Zhijian

Keywords: Intelligent transportation; traffic speed prediction; spatial-temporal correlation; dynamic graph; graph convolution recurrent network

PDF

Paper 55: Enhanced Aquila Optimizer Algorithm for Efficient Stance Classification in Online Social Networks

Abstract: Stance classification in Online Social Networks (OSNs) is essential to comprehend users' standpoints on various issues relating to social, political, and commercial aspects. However, traditional methods applied to large datasets and complex text structures usually face several challenges. This study introduces the Enhanced Aquila Optimizer (EAO), a metaheuristic algorithm designed to improve convergence and precision in stance classification tasks. EAO incorporates three new strategies: Opposition-Based Learning (OBL) to improve the exploration, Chaotic Local Search (CLS) to escape from the local optima, and a Restart Strategy (RS) to rejuvenate the search process. Experimental assessments on benchmark OSN datasets prove the superiority of EAO in terms of accuracy, precision, and computational efficiency compared to state-of-the-art methods. These findings position EAO as a potential revolution for stance classification and other large-scale text analysis tasks by offering a robust solution that can be used in real-time for complex OSN scenarios.
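Two of the EAO strategies named above can be sketched on a toy one-dimensional objective: Opposition-Based Learning also evaluates the "opposite" candidate x_opp = lb + ub − x, and Chaotic Local Search perturbs the current best with a logistic map. The bounds, step size, and sphere objective are illustrative, not the paper's setup.

```python
lb, ub = -2.0, 6.0
f = lambda x: x * x                        # toy objective to minimize

# Opposition-Based Learning: evaluate the opposite point, keep the better one.
def obl_step(x):
    x_opp = lb + ub - x
    return x if f(x) <= f(x_opp) else x_opp

# Chaotic Local Search: logistic-map perturbations around the current best.
def cls_step(x, z=0.7, iters=5):
    best = x
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)                # logistic map stays in (0, 1)
        cand = best + (2.0 * z - 1.0) * 0.1    # small chaotic step
        if f(cand) < f(best):                  # accept only improvements
            best = cand
    return best

x0 = 5.0
x1 = obl_step(x0)      # opposite of 5.0 is -1.0, which scores better
x2 = cls_step(x1)      # chaotic search can only improve on x1
print(x1, round(f(x2), 4))
```

In the full algorithm these steps run inside the Aquila Optimizer's population loop, with the Restart Strategy re-seeding stagnant individuals.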

Author 1: Na LI

Keywords: Stance classification; online social networks; opposition-based learning; chaotic local search; Aquila Optimizer

PDF

Paper 56: Math Role-Play Game Using Lehmer’s RNG Algorithm

Abstract: Due to the COVID-19 pandemic, schools in Malaysia have been physically closed for more than 40 weeks and the students have to learn online. As Malaysia transitions to endemicity, many younger students struggle to keep up with their education due to significant learning loss caused by school closures and the challenges of virtual classes, including distractions and reduced engagement. This study aims to address these issues by developing an educational application that integrates gaming elements, focusing on arithmetic for Year 6 primary school students. The application engages students through interactive gameplay, requiring them to solve math problems to progress, thereby promoting a fun and effective way to enhance their arithmetic skills and mitigate learning loss.
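Lehmer's generator, which the game uses for random problem selection, is the recurrence X_{k+1} = (a · X_k) mod m. The MINSTD parameters below (a = 16807, m = 2³¹ − 1) are the classic choice; the application's exact constants may differ, and the operand mapping is an illustrative assumption.

```python
def lehmer(seed, a=16807, m=2**31 - 1):
    """Lehmer / MINSTD generator: X_{k+1} = (a * X_k) mod m."""
    x = seed
    while True:
        x = (a * x) % m
        yield x

gen = lehmer(seed=1)
values = [next(gen) for _ in range(3)]
print(values)  # the first value from seed 1 is 16807

# Map a raw value onto a question, e.g. an operand for an arithmetic problem.
operand = values[0] % 100 + 1   # a number in 1..100
```

The modulus 2³¹ − 1 is prime and 16807 is a primitive root modulo it, so the sequence cycles through all of 1..m−1 before repeating.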

Author 1: Chong Bin Yong
Author 2: Rajermani Thinakaran
Author 3: Nurul Halimatul Asmak Ismail
Author 4: Samer A. B. Awwad

Keywords: Lehmer’s RNG algorithm; online education; gamification

PDF

Paper 57: The Impact of Malware Attacks on the Performance of Various Operating Systems

Abstract: Recent research in the field of cyber security concludes that permanent monitoring of the network and its protection, based on various tools or solutions, are key to defending it against vulnerabilities. It is therefore imperative that solutions such as firewalls, antivirus, Intrusion Detection Systems, Intrusion Prevention Systems, and Security Information and Event Management be implemented for all networks in use. However, once an attack has reached the network, it must be identified and analyzed in order to assess the damage, prevent similar events from happening, and build an incident response adapted to the network. This work analyzes the impact of malicious and benign files that have reached a network. Various analysis methods (both static and dynamic) are applied to real malicious software on two different operating systems (Windows 10 and Ubuntu 22.04). Thereby, both malware and benign files and their impact on the operating systems are analyzed.

Author 1: Maria-Madalina Andronache
Author 2: Alexandru Vulpe
Author 3: Corneliu Burileanu

Keywords: Cybersecurity; network security; network monitoring; incident analysis; incident response

PDF

Paper 58: A Malware Analysis Approach for Identifying Threat Actor Correlation Using Similarity Comparison Techniques

Abstract: Cybersecurity is essential for organisations to protect critical assets from cyber threats in the increasingly digital and interconnected world. However, cybersecurity incidents are rising each year, leading to increased workloads. Current malware analysis approaches are often case-by-case, based on specific scenarios, and are typically limited to identifying malware. When cybersecurity incidents are not handled effectively due to these analytical limitations, operations are disrupted, and an organisation’s brand and client trust are negatively impacted, often resulting in financial loss. The aim of this research is to enhance the analysis of Advanced Persistent Threat (APT) malware by correlating malware with its associated threat actors, such as APT groups, who are the perpetrators or authors of the malware. APT malware represents a highly dangerous threat, and gaining insight into the adversaries behind such attacks is crucial for preventing cyber incidents. This research proposes an advanced malware analysis approach that correlates APT malware with threat actors using a similarity comparison technique. By extracting features from APT malware and analysing the correlation with the threat actor, cybersecurity professionals can implement effective countermeasures to ensure that organisations are better prepared against these sophisticated cyber threats. The solution aims to assist cybersecurity practitioners and researchers in making informed decisions by providing actionable insights and a broader perspective on cyber-attacks, based on detailed information about malware tied to specific threat actors.

Author 1: Ahmad Naim Irfan
Author 2: Suriayati Chuprat
Author 3: Mohd Naz'ri Mahrin
Author 4: Aswami Ariffin

Keywords: Malware analysis; APT group; threat actor correlation; CTI

PDF

Paper 59: Usability Heuristic Evaluation of Mobile Learning Applications Based on the Usability Design Model for Adult Learners

Abstract: Adult ownership of mobile devices has exploded over the past few years, and smartphones and tablets have become vital for communication, productivity, entertainment, and learning. However, adults commonly find new technology-based apps difficult to use because many devices are small, and tasks on such apps take longer to complete. A usability design model for adult learners has therefore been proposed. The objective of this study is to evaluate the usability design model for adult learners and whether applications containing the model components affect the satisfaction of adult learners. The evaluation was based on Nielsen's heuristic guidelines, modified and mapped to the seven components of the model. Two existing mobile learning (m-learning) applications from the Play Store, Duolingo and Lingualia, were chosen for this evaluation. The results indicate that Duolingo has an overall satisfaction mean score of 4.38, compared to only 2.43 for Lingualia. Duolingo meets most of the model's criteria and thus achieves the higher satisfaction mean score. This indicates that the seven components play important roles in contributing to satisfaction among adult learners.

Author 1: Amy Ling Mei Yin
Author 2: Ahmad Sobri B Hashim
Author 3: Mazeyanti Bt M Ariffin

Keywords: Usability design model; mobile learning; adult learners; heuristic evaluation

PDF

Paper 60: Radar Spectrum Analysis and Machine Learning-Based Classification for Identity-Based Unmanned Aerial Vehicles Detection and Authentication

Abstract: The significant use of Unmanned Aerial Vehicles (UAVs) in commercial and civilian applications presents various cybersecurity challenges, particularly in detection and authentication. Unauthorized UAVs can be very harmful to people on the ground, infrastructure, the right to privacy, and other UAVs. Moreover, using the internet for UAV communication may expose authorized ones to attacks, causing a loss of confidentiality, integrity, and availability of information. This paper introduces radar-based UAV detection and authentication using Micro-Doppler (MD) signal analysis. The study provides a unique dataset comprising radar signals from three distinct UAV models captured under varying operational conditions. The dataset enables the analysis of specific features and classification through machine learning models, including k-Nearest Neighbor (k-NN), Random Forest, and Support Vector Machine (SVM). The approach leverages radar signal processing to extract MD signatures for accurate UAV identification, enhancing detection and authentication processes. The results indicate that Random Forest achieved the highest accuracy of 100%, with zero false alarms, demonstrating its suitability for real-time monitoring. This highlights the potential of radar-based MD analysis for UAV detection and establishes a foundational approach for developing robust UAV monitoring systems, with potential applications in aviation, military surveillance, public safety, and regulatory compliance. Future work will focus on expanding the dataset and integrating a Remote Identification (RID) policy, which mandates that UAVs disclose their identity when approaching any territory; this will help enhance the security and scalability of the system.

Author 1: Aminu Abdulkadir Mahmoud
Author 2: Sofia Najwa Ramli
Author 3: Mohd Aifaa Mohd Ariff
Author 4: Muktar Danlami

Keywords: Authentication; detection; cybersecurity; Micro-Doppler; radar; Unmanned Aerial Vehicle (UAV)

PDF
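Of the three classifiers compared above, k-NN is the simplest to state: a sample is assigned the majority label among its k nearest training points. A minimal pure-Python sketch (toy 2-D feature points for two hypothetical UAV models, not the paper's radar dataset):

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    # Sort all training points by Euclidean distance to x
    dists = sorted(
        (math.dist(p, x), label) for p, label in zip(train_X, train_y)
    )
    # Majority vote over the k closest labels
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D "micro-Doppler feature" points for two hypothetical UAV models
train_X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
train_y = ["uav_a", "uav_a", "uav_b", "uav_b"]
print(knn_predict(train_X, train_y, (0.05, 0.1)))
```

In practice the feature vectors would be MD signature statistics rather than raw coordinates, and distances are usually computed on normalized features.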

Paper 61: Application of Residual Graph Attention Networks Algorithm in Credit Evaluation for Financial Enterprises

Abstract: In the context of the digital transformation of enterprises, credit evaluation of financial enterprises faces new challenges and opportunities. Digital transformation introduces a large amount of data and advanced analytical tools, providing richer information and methods for credit evaluation. In this paper, we propose a credit evaluation model based on an improved quantum genetic algorithm and a residual graph attention network (DRQGA-ResGAT), which aims to utilize the complex correlation data and multi-dimensional information among enterprises for enterprise credit evaluation. The DRQGA-ResGAT credit evaluation model performs well in dealing with large-scale and high-dimensional data and can significantly improve the accuracy of credit evaluation. The experimental results show that the ResGAT model combined with the improved quantum genetic algorithm performs even better, and the proposed model achieves a high precision rate in the credit evaluation of financial enterprises, giving it considerable application value. Compared with the traditional ResGAT model, it improves the precision rate by about 17.06%.

Author 1: Wenxing Zeng

Keywords: Quantum genetic algorithm; residual networks; attention mechanisms; graph neural networks; credit evaluation

PDF

Paper 62: A Conceptual Framework for Agricultural Water Management Through Smart Irrigation

Abstract: The demand for freshwater resources has risen significantly due to population growth and increasing drought conditions in agricultural regions worldwide. Irrigated agriculture consumes a substantial amount of water, often leading to wastage due to inefficient irrigation practices. Recent breakthroughs in emerging technologies, including machine learning, the Internet of Things, wireless communication, and advanced monitoring systems, have facilitated the development of smart irrigation solutions that optimize water usage, enhance efficiency, and reduce operational costs. This paper explores the critical parameters and monitoring strategies for smart irrigation systems, emphasizing soil and water management. It also presents a conceptual framework for implementing sustainable irrigation practices aimed at optimizing water use, improving crop productivity, and ensuring cost-effective management across different agricultural settings.

Author 1: Abdelouahed Tricha
Author 2: Laila Moussaid
Author 3: Najat Abdeljebbar

Keywords: Agriculture; irrigation system; water management; Internet of Things; sustainability

PDF

Paper 63: An Efficient Diabetic Retinopathy Detection and Classification System Using LRKSA-CNN and KM-ANFIS

Abstract: If Diabetic Retinopathy (DR) is not diagnosed in its early stages, it leads to impaired vision and often causes blindness, so diagnosis of DR is essential. Various approaches have been developed for detecting DR and its diverse stages. However, they are limited in considering microstructural changes of the visual pathways associated with the visual impairment of DR. Thus, this work proposes an effective Linearly Regressed Kernel and Scaled Activation-based Convolution Neural Network (LRKSA-CNN) to diagnose DR utilizing multimodal images. First, the input Optical Coherence Tomography (OCT) image is preprocessed for contrast enhancement utilizing Contrast-Limited Adaptive Histogram Equalization (CLAHE) and for resolution enhancement utilizing the Gaussian Mixture Model (GMM). Likewise, the Magnetic Resonance Imaging (MRI) image's contrast is improved and edge sharpening is performed utilizing an Unsharp Mask Filter (USF). Then, the preprocessed images are segmented utilizing the Intervening Contour Similarity Weights-based Watershed Segmentation (ICSW-WS) algorithm, and significant features are extracted from the segmented regions. Next, important features are chosen utilizing the Min-max normalization-based Green Anaconda Optimization (MM-GAO) algorithm. Utilizing the LRKSA-CNN technique, the selected features are classified into DR and Non-Diabetic Retinopathy (NDR). Then, utilizing the Krusinka Membership-based Adaptive Neuro Fuzzy Inference System (KM-ANFIS), the various stages of DR are classified based on the presence of intermediate features. Lastly, the proposed system achieves superior outcomes compared to the baseline systems.

Author 1: Rachna Kumari
Author 2: Sanjeev Kumar
Author 3: Sunila Godara

Keywords: Intervening contour similarity weights based watershed segmentation (ICSW-WS); min-max normalization based green anaconda optimization (MM-GAO); krusinka membership based adaptive neuro fuzzy inference system (KM-ANFIS); linearly regressed kernel and scaled activation based convolution neural network (LRKSA-CNN); deep learning

PDF

Paper 64: Mining High Utility Itemset with Hybrid Ant Colony Optimization Algorithm

Abstract: High-utility itemset mining (HUIM) is a significant area of study within data mining. Traditional HUIM algorithms face an exponentially large search space when the database size or the number of distinct items is huge. Evolutionary computation (EC)-based algorithms have been proposed as an alternative, efficient way to address HUIM problems, since they can quickly produce a set of approximately optimal solutions. However, EC-based methods still need a lot of time to find the complete set of high-utility itemsets (HUIs) in transactional databases. To deal with this issue, we propose a hybrid Ant Colony Optimization-based HUIM algorithm, in which a genetic crossover operator is applied to the solutions generated by the ants. A single-point crossover is employed, with the crossover point selected at random, and a mutation operator is applied by flipping one or more random bits in a string. This technique requires less time to mine the same number of HUIs than state-of-the-art EC-based HUIM algorithms.

Author 1: Keerthi Mohan
Author 2: Anitha J

Keywords: Utility mining; high utility itemset; ant colony optimization; genetic algorithm; evolutionary computation

PDF
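The two genetic operators described in the abstract above (single-point crossover at a random cut, and bit-flip mutation) can be sketched as follows, assuming candidate itemsets are encoded as bit lists where each bit marks whether an item belongs to the itemset:

```python
import random

def single_point_crossover(parent_a, parent_b):
    """Swap tails of two equal-length bit lists at a random cut point."""
    assert len(parent_a) == len(parent_b) >= 2
    cut = random.randrange(1, len(parent_a))      # cut strictly inside the string
    child_1 = parent_a[:cut] + parent_b[cut:]
    child_2 = parent_b[:cut] + parent_a[cut:]
    return child_1, child_2

def bit_flip_mutation(bits, n_flips=1):
    """Flip n_flips randomly chosen bits (each bit = item in/out of itemset)."""
    bits = list(bits)
    for i in random.sample(range(len(bits)), n_flips):
        bits[i] = 1 - bits[i]
    return bits

# Two toy ant-generated solutions over an 8-item database
c1, c2 = single_point_crossover([1, 1, 0, 0, 1, 0, 1, 0],
                                [0, 1, 1, 0, 0, 1, 0, 1])
mutated = bit_flip_mutation(c1, n_flips=2)
```

This is an illustrative sketch of the operators only; the ant colony loop, pheromone update, and utility evaluation of the mined itemsets are omitted.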

Paper 65: Enhancing IoT Security Through User Categorization and Aberrant Behavior Detection Using RBAC and Machine Learning

Abstract: The proliferation of Internet of Things (IoT) technology in recent years has revolutionized several industries, providing customers with reliable and efficient IoT services. However, as the IoT ecosystem grows, attention has shifted from straightforward user access to the crucial topic of security. Among other needs, users must be categorized according to the actions they perform and according to aberrant user behavior. By utilizing Role-Based Access Control (RBAC) and merging the categorization of access rights with the identification of aberrant behavior, access points to the Internet of Things can be strengthened in terms of security and dependability. A system is proposed to identify security flaws and prompt rapid remediation, incorporating a classification of aberrant user behaviors that, in turn, offers a thorough defense against outside threats. Three classification methods, Support Vector Machine (SVM), Local Outlier Factor (LOF), and Isolation Forest (IF), were utilized in the study and their accuracies compared. The results demonstrate the effectiveness of machine learning approaches on a dataset of IoT users, achieving high accuracy in identifying anomalous user behavior and enabling prompt implementation of necessary actions.

Author 1: Alshawwa Izzeddin A O
Author 2: Nor Adnan Bin Yahaya
Author 3: Ahmed Y. Mahmoud

Keywords: Machine learning; classification; SVM; LOF; IF classification methods; aberrant user behavior; Role-Based Access Control (RBAC); IoT user dataset and user categorization

PDF

Paper 66: A Real-Time Nature-Inspired Intrusion Detection in Virtual Environments: An Artificial Bees Colony Approach Based on Cloud Model

Abstract: Real-time intrusion detection in virtual environments is crucial for maintaining the security and integrity of modern computing infrastructures. This paper proposes a nature-inspired mathematical model designed to detect both known and unknown attacks on virtual machines, focusing on enhancing detection accuracy and minimizing false alarm rates. The proposed model, named Developed Artificial Bee Colony Optimization Based on Cloud Model (DABCO_CM), is inspired by the foraging behavior of bee swarms and integrates principles from the Artificial Bee Colony algorithm and the cloud model rooted in fuzzy logic theory. The model was simulated using the UNSW-NB15 dataset in Google Colab and benchmarked against an existing model. It achieved a detection accuracy of 97.98%, compared to the existing model's 95.35%. Sensitivity results showed 99.92% for the proposed model, compared to 96.90% for the existing model, while specificity increased to 93.86%, in contrast to the existing model's 90.71%. These findings demonstrate a 3.02% increase in sensitivity, a 2.63% increase in accuracy, and a 3.15% increase in specificity, highlighting the model's superior capability in detecting attacks and its potential to learn from unlabeled data, addressing key challenges in virtual machine security.

Author 1: Ayanseun S. Ayanboye
Author 2: John E. Efiong
Author 3: Temitope O. Ajayi
Author 4: Rotimi A. Gbadebo
Author 5: Bodunde O. Akinyemi
Author 6: Emmanuel A. Olajubu
Author 7: Ganiyu A. Aderounmu

Keywords: Real-time intrusion detection; virtual environments; artificial bee colony algorithm; cloud model algorithms; intrusion detection system; feature selection; classification; swarm intelligence; fuzzy logic; DNN; ABC_DNN DABCO_CM

PDF
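The accuracy, sensitivity, and specificity figures reported above are standard confusion-matrix ratios. A minimal sketch (with illustrative counts, not the paper's results):

```python
def detection_metrics(tp, fp, tn, fn):
    """Compute accuracy, sensitivity (recall), and specificity
    from confusion-matrix counts of an intrusion detector."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # true-positive rate: attacks caught
    specificity = tn / (tn + fp)          # true-negative rate: benign passed
    return accuracy, sensitivity, specificity

# Toy counts: 90 attacks detected, 10 missed, 95 benign passed, 5 false alarms
acc, sens, spec = detection_metrics(tp=90, fp=5, tn=95, fn=10)
```

Sensitivity matters most when missed attacks are costly, while specificity controls the false alarm rate the paper emphasizes.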

Paper 67: YOLO-Driven Lightweight Mobile Real-Time Pest Detection and Web-Based Monitoring for Sustainable Agriculture

Abstract: Nowadays, pest infestations cause significant reductions in agricultural productivity all over the world. To control pests, farmers often apply excessive volumes of pesticides due to the difficulty of manually detecting pests at an early stage. This overuse of pesticides has led to environmental pollution and health risks. To address these challenges, many novel systems have been developed to identify pests early, allowing farmers to be alerted about the exact location where pests are detected. However, these systems are constrained by their lack of real-time detection capabilities, limited mobile integration, ability to detect only a small number of pest classes, and the absence of a web-based monitoring system. This paper introduces a pest detection system that leverages the lightweight YOLO deep learning framework and is integrated with a web-based monitoring platform. The YOLO object detection architectures YOLOv8n, YOLOv9t, and YOLOv10-N were studied and optimized for pest detection on smartphones. The models were trained and validated on a merged collection of publicly available datasets containing 29 pest classes. Among them, YOLOv9t achieved the top performance, with a mAP@0.5 of 89.8%, precision of 87.4%, recall of 84.4%, and an inference time of 250.6 ms. The web-based monitoring system enables dynamic real-time monitoring by providing farmers with instant updates and actionable insights for effective and sustainable pest management. From there, farmers can take necessary actions immediately to mitigate pest damage, reduce pesticide overuse, and promote sustainable agricultural practices.

Author 1: Wong Min On
Author 2: Nirase Fathima Abubacker

Keywords: Pest detection; YOLO; deep learning; real-time monitoring; smartphone application; web-based platform; object detection; pest management; pesticide reduction; sustainable agriculture

PDF
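The mAP@0.5 metric quoted above counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. IoU itself can be sketched as:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# A predicted pest box shifted one unit right of its ground truth
overlap = iou((0, 0, 2, 2), (1, 0, 3, 2))
```

At an IoU threshold of 0.5, the shifted box in this toy example (IoU = 1/3) would be scored as a miss.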

Paper 68: Improved Decision Tree, Random Forest, and XGBoost Algorithms for Predicting Client Churn in the Telecommunications Industry

Abstract: Traditional machine learning models, especially decision trees, face great challenges when applied to high-dimensional and imbalanced telecommunication datasets. The research presented in this paper aims to enhance the performance of traditional Decision Tree (DT), Decision Tree with grid search (DT+), Random Forest (RF), and XGBoost (XGB) models. This is accomplished by augmenting them with robust preprocessing techniques and optimizing them through grid search. We then evaluated how accurately the enhanced models predict customer churn and compared their performance metrics in detail. We utilized a dataset derived from the benchmark Cell2Cell dataset, applying combined preprocessing methods including KNN imputation, normalization, and resampling with SMOTE-Tomek to address class imbalance. The findings reveal that XGBoost and RF delivered the strongest results, each achieving an accuracy of 0.82: XGBoost demonstrated strong precision, recall, and F1 scores, while RF benefited from its ensemble nature to improve generalization and reduce overfitting.

Author 1: Mohamed Ezzeldin Saleh
Author 2: Nadia Abd-Alsabour

Keywords: Churn prediction; decision trees; grid search; random forest; XGBoost

PDF
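The grid search used above exhaustively scores every combination in a parameter grid and keeps the best. The control flow can be sketched as follows; the parameter names and scoring function here are hypothetical stand-ins for a real cross-validated model evaluation:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Return the highest-scoring parameter combination from a grid."""
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for combo in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        score = score_fn(params)           # e.g. cross-validated accuracy
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

grid = {"max_depth": [2, 4, 6, 8], "n_estimators": [50, 100]}

def toy_score(p):
    # Hypothetical scorer: pretend depth 6 is optimal, more trees help slightly
    return -abs(p["max_depth"] - 6) + p["n_estimators"] / 1000

best, _ = grid_search(grid, toy_score)
```

Libraries such as scikit-learn provide this loop with cross-validation built in; the sketch only shows why grid search cost grows multiplicatively with each added parameter.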

Paper 69: Cyber Security Risk Assessment Framework for Cloud Customer and Service Provider

Abstract: The rapid development of cloud computing demands an effective cybersecurity framework for protecting the sensitive information of the infrastructure. Currently, many organizations depend on cloud services for their operation, increasing cybersecurity risk. Hence, an intelligent risk assessment mechanism is significant for detecting and mitigating the cybersecurity threats associated with cloud environments. Although various risk assessment methods have been developed in the past, they lack the efficiency to handle the dynamic and evolving nature of threats. In this study, we propose an innovative framework for cybersecurity risk assessment for cloud customers and service providers. Initially, a historical cloud customer and service provider database was collected and fed into the system. The collected dataset contains historical security risks, network traffic, system behavior, etc., and was pre-processed to improve its quality. The data pre-processing steps not only ensure quality but also transform the dataset into an appropriate format for subsequent analysis. Further, a risk assessment module was created using a combination of a deep recurrent neural network with the krill herd optimization (DRNN-KHO) algorithm. In this module, the DRNN was trained on the pre-processed database to learn the patterns and interconnections between normal and abnormal network traffic. Subsequently, the KHO refines the DRNN parameters during training, increasing the efficiency of risk assessment. This integrated module ensures adaptability, leading to accurate prediction of evolving security threats. Then, a secure data exchange protocol was created for secure transmission between cloud customer and service provider. This protocol is designed by integrating artificial bee colony optimization with elliptic curve cryptography (ABC-ECC).
Thus, this collaborative framework ensures security in the cloud customer and service providers.

Author 1: N. Sujata Kumari
Author 2: Naresh Vurukonda

Keywords: Deep recurrent neural network; krill herd optimization; artificial bee colony optimization; elliptic curve cryptography

PDF

Paper 70: Optimizing Cervical Cancer Diagnosis with Correlation-Based Feature Selection: A Comparative Study of Machine Learning Models

Abstract: Cervical cancer remains a significant global health issue, particularly in developing countries where it is a leading cause of mortality among women. The development of machine learning-based approaches has become essential for early detection and diagnosis of cervical cancer. This research explores the optimization of classification algorithms through Correlation-Based Feature Selection (CFS) for early cervical cancer detection. A dataset consisting of 198 samples and 22 attributes from medical records was processed to reduce dimensionality. CFS was used to select the most relevant features, which were then applied to three classification algorithms: Naïve Bayes, Decision Tree, and k-Nearest Neighbor (k-NN). The results showed that CFS significantly improved classification accuracy, with Decision Tree achieving the highest accuracy of 85.89%, followed by Naïve Bayes with 83.34%, and k-NN with 82.32%. These findings demonstrate the importance of feature selection in enhancing classification performance and its potential application in the development of cervical cancer detection tools.

Author 1: Wiwit Supriyanti
Author 2: Sujalwo
Author 3: Dimas Aryo Anggoro
Author 4: Maryam
Author 5: Nova Tri Romadloni

Keywords: Cervical cancer; feature selection; machine learning

PDF
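Correlation-based feature selection, as used above, ranks features by how strongly they relate to the class label. A minimal Pearson-correlation ranking sketch (toy data, not the clinical dataset; full CFS also penalizes inter-feature redundancy, which is omitted here):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_features(features, labels):
    """Sort feature names by |correlation with the label|, strongest first."""
    return sorted(features, key=lambda f: -abs(pearson(features[f], labels)))

labels = [0, 0, 1, 1, 1, 0]
features = {
    "informative": [0.1, 0.0, 0.9, 1.0, 0.8, 0.2],   # tracks the label
    "noise":       [0.5, 0.4, 0.5, 0.6, 0.4, 0.5],   # barely related
}
ranking = rank_features(features, labels)
```

Keeping only the top-ranked features reduces the 22-attribute space before the classifiers are trained.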

Paper 71: Intelligent System for Stability Assessment of Chest X-Ray Segmentation Using Generative Adversarial Network Model with Wavelet Transforms

Abstract: Accurate segmentation of chest X-rays is essential for effective medical image analysis, but challenges arise due to inherent stability issues caused by factors such as poor image quality, anatomical variations, and disease-related abnormalities. While Generative Adversarial Networks (GANs) offer automated segmentation, their stability remains a significant limitation. In this paper, we introduce a novel approach to address segmentation stability by integrating GANs with wavelet transforms. Our proposed model features a two-network architecture (generator and discriminator). The discriminator differentiates between the original mask and the mask generated after the generator is trained to produce a mask from a given image. The model was implemented and evaluated on two X-ray datasets, utilizing both original images and perturbed images, the latter generated by adding noise via the Gaussian noise method. A comparative analysis with traditional GANs reveals that our proposed model, which combines GANs with wavelet transforms, outperforms in terms of stability, accuracy, and efficiency. The results highlight the efficacy of our model in overcoming stability limitations in chest X-ray segmentation, potentially advancing subsequent tasks in medical image analysis. This approach provides a valuable tool for clinicians and researchers in the field of medical image analysis.

Author 1: Omar El Mansouri
Author 2: Mohamed Ouriha
Author 3: Wadiai Younes
Author 4: Yousef El Mourabit
Author 5: Youssef El Habouz
Author 6: Boujemaa Nassiri

Keywords: Deep learning; X-rays; segmentation; medical imaging; Generative Adversarial Networks; wavelet transforms

PDF
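The perturbed test images mentioned above are produced by adding Gaussian noise. A minimal sketch on a toy grayscale array, using only the standard library (real pipelines would operate on NumPy arrays):

```python
import random

def add_gaussian_noise(image, sigma=0.1, seed=None):
    """Return a copy of a 2-D grayscale image (values in [0, 1]) with
    zero-mean Gaussian noise of standard deviation sigma added,
    clipped back to the valid intensity range."""
    rng = random.Random(seed)
    return [
        [min(1.0, max(0.0, px + rng.gauss(0.0, sigma))) for px in row]
        for row in image
    ]

clean = [[0.2, 0.8], [0.5, 0.5]]
noisy = add_gaussian_noise(clean, sigma=0.05, seed=7)
```

Fixing the seed makes the perturbed evaluation set reproducible, so stability comparisons between models use identical noisy inputs.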

Paper 72: Real-Time Monitoring and Analysis Through Video Surveillance and Alert Generation for Prompt and Immediate Response

Abstract: The efficacy of Closed-Circuit Television (CCTV) systems in residential areas is often limited by the lack of real-time alerts and rapid response mechanisms. Enabling immediate notifications upon the identification of irregularities or aggressive conduct can greatly enhance the possibility of averting serious incidents, or at the very least, significantly mitigate their impact. The integration of an automated system for anomaly detection and monitoring, augmented by a real-time alert mechanism, is now a critical necessity. The proposed work presents an advanced methodology for real-time detection of accidents and violent activities, incorporating a sophisticated alarm system that not only triggers instant alerts but also captures and stores video frames for detailed post-event analysis. MobileNetV2 is utilized for spatial analysis due to its computational efficiency compared to other Convolutional Neural Network (CNN) architectures, while Visual Geometry Group 16 (VGG16) enhances model accuracy, especially on large-scale datasets. The integration of Bi-directional Long Short-Term Memory (BiLSTM) strengthens temporal continuity, significantly reducing false alarms. The proposed system aims to improve both safety and security by enabling authorities to respond to incidents in a timely manner. Combining rapid computation with high detection accuracy, the proposed model is ideally suited for real-time deployment across both urban and residential settings.

Author 1: Akshat Kumar
Author 2: Renuka Agrawal
Author 3: Akshra Singh
Author 4: Aaftab Noorani
Author 5: Yashika Jaiswal
Author 6: Preeti Hemnani
Author 7: Safa Hamdare

Keywords: Rapid response; anomaly detection; MobileNetV2; VGG16; BiLSTM

PDF

Paper 73: Sentiment Analysis of Web Images by Integrating Machine Learning and Associative Reasoning Ideas

Abstract: To achieve automatic recognition and understanding of image sentiment, the study proposes an image sentiment prediction network based on multi-excitation fusion. This network simultaneously handles multiple excitations, such as color, object, and face, and is designed to predict the sentiment associated with an image. A visual emotion inference network based on scene-object association is also proposed, using an association reasoning method to describe the emotional associations between different objects. The multi-excitation fusion image sentiment prediction network achieved its highest accuracy of 75.6% when varying the loss weight (at a weight of 1.0) and its highest accuracy of 76.5% when varying the number of object frames (at 10 frames). The average accuracy of the visual sentiment inference network based on scene-object association was 91.8%, an improvement of about 3.7% over the image sentiment association analysis model. The outcomes revealed that the multi-stimulus fusion method performed better in the image emotion prediction task. The visual emotion inference network based on scene-object association can recognize objects and scenes in images more accurately, and both the scene-based attention mechanism and the masking operation improve the network performance. This research provides a more effective approach to the field of image sentiment analysis and helps to improve the computer's ability to recognize and understand emotional expressions.

Author 1: Yuan Fang
Author 2: Yi Wang

Keywords: Sentiment analysis; multi-excitation fusion; image emotion prediction; associative reasoning; attention mechanism

PDF

Paper 74: Deep Learning for Coronary Artery Stenosis Localization: Comparative Insights from Electrocardiograms (ECG), Photoplethysmograph (PPG) and Their Fusion

Abstract: Coronary artery stenosis (CAS) is a critical cardiovascular condition that demands accurate localization for effective treatment and improved patient outcomes. This study addresses the challenge of enhancing CAS localization through a comparative analysis of deep learning techniques applied to electrocardiogram (ECG), photoplethysmograph (PPG), and their combined signals. The primary research question centers on whether the fusion of ECG and PPG signals, analyzed through advanced deep learning architectures, can surpass the accuracy of individual modalities in localizing stenosis in the left anterior descending (LAD), left circumflex (LCX), and right coronary arteries (RCA). Using a dataset of 7,165 recordings from CAS patients, three models—CNN, CNN-LSTM, and CNN-LSTM-ATTN—were evaluated. The CNN-LSTM-ATTN model achieved the highest localization accuracy (98.12%) and perfect AUC scores (1.00) across all arteries, demonstrating the efficacy of multimodal signal integration and attention mechanisms. This research highlights the potential of combining ECG and PPG signals for non-invasive CAS diagnostics, offering a significant advancement in real-time clinical applications. However, limitations include the relatively small dataset size and the focus on single-lead ECG and PPG signals, which may affect the generalizability to broader populations. Future studies should explore larger datasets and multi-lead signal integration to further validate the findings.

Author 1: Mohd Syazwan Md Yid
Author 2: Rosmina Jaafar
Author 3: Noor Hasmiza Harun
Author 4: Mohd Zubir Suboh
Author 5: Mohd Shawal Faizal Mohamad

Keywords: Coronary artery stenosis; deep learning; ECG; PPG; ECG-PPG fusion; CNN; LSTM; attention mechanism

PDF
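The attention layer in the CNN-LSTM-ATTN model above weights time steps by relevance before pooling the sequence. The core softmax-weighted sum can be sketched in pure Python (toy vectors and a hypothetical fixed query, not the actual architecture, where the scoring is learned):

```python
import math

def attention_pool(states, query):
    """Softmax-weighted sum of hidden states, scored against a query vector."""
    # Dot-product relevance score of each hidden state
    scores = [sum(s * q for s, q in zip(state, query)) for state in states]
    m = max(scores)                                  # for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]              # softmax over time steps
    # Weighted sum of the states, one component at a time
    pooled = [
        sum(w * state[i] for w, state in zip(weights, states))
        for i in range(len(states[0]))
    ]
    return pooled, weights

states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # hypothetical LSTM outputs
pooled, weights = attention_pool(states, query=[1.0, 0.0])
```

Time steps whose hidden states align with the query (here, the ECG/PPG segments most indicative of stenosis) receive larger weights and dominate the pooled representation.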

Paper 75: Unlocking the Potential of Cloud Computing in Healthcare: A Comprehensive SWOT Analysis of Stakeholder Readiness and Implementation Challenges

Abstract: The adoption of cloud computing in healthcare holds the potential to revolutionize healthcare delivery, particularly in developing regions. Despite its promise of scalability, cost-effectiveness, and improved data management, challenges such as digital literacy gaps, infrastructure deficiencies, and security concerns hinder its implementation. This study evaluates the readiness for adopting cloud computing in Sudan's healthcare sector through a comprehensive SWOT analysis. Findings reveal that 93.75% of patients are willing to learn electronic health systems (EHS), yet 53.12% prefer paper records, indicating trust issues. Among medical staff, 34.38% report poor digital literacy, and 46.88% cite limited access to technology as a barrier. Ministry of Health employees highlight poor infrastructure (33.33%) and limited resources (30%) as significant obstacles. By identifying strengths, weaknesses, opportunities, and threats, this research provides actionable recommendations for overcoming these barriers. The findings contribute to the ongoing discourse on digital health transformation, offering insights into fostering trust in cloud technologies for enhanced healthcare outcomes.

Author 1: Alaa Abas Mohamed

Keywords: Cloud computing; SWOT; strength; weakness; opportunities; threat

PDF

Paper 76: A Novel Approach Based on Information Relevance Perspective and ANN for Predicting the Helpfulness of Online Reviews

Abstract: This study presents a novel approach to predicting the helpfulness of online reviews using Artificial Neural Networks (ANNs) focused on information relevance. As online reviews significantly influence consumer decision-making, it is critical to understand and identify reviews that provide the most value. This research identifies four key textual features namely content novelty, content specificity, content readability, and content reliability, that contribute to perceived helpfulness and incorporates them as primary inputs for the ANN model. Datasets of Amazon reviews are analyzed, and various preprocessing steps are employed to ensure data quality. Reviews are classified as helpful or unhelpful based on helpful vote thresholds, with experiments conducted across multiple helpful vote thresholds to determine the optimal threshold value. Performance was evaluated using accuracy, precision, recall, and F1 scores, with the best-performing classifier achieving 74.34% accuracy at a helpful vote threshold of 12 votes. These results highlight the potential of information relevance-based criteria to enhance the accuracy of online review helpfulness prediction models.

Author 1: Nur Syadhila Bt Che Lah
Author 2: Khursiah Zainal-Mokhtar

Keywords: Review helpfulness; online reviews; information relevance; review novelty; review readability; review specificity; Artificial Neural Networks

PDF

Paper 77: An Advanced Semantic Feature-Based Cross-Domain PII Detection, De-Identification, and Re-Identification Model Using Ensemble Learning

Abstract: Digital data, being core to any system, require communication across peers and human-machine interfaces; however, ensuring data security and privacy remains a challenge for industry, especially under the threat of man-in-the-middle attacks, intruders, and even ill-intended unauthorized access at warehouses. Almost all digital communication embodies personally identifiable information (PII) such as an individual's address, contact details, and identification credentials. Unauthorized or ill-intended access to these PII attributes can cause major losses to the individual, and it is therefore essential to identify and de-identify such PII elements across digital platforms to preserve privacy. Unfortunately, the diversity of PII attributes across disciplines makes it challenging for state-of-the-art methods to perform PII detection using a predefined dictionary: a model developed for one PII type cannot be universally viable for other disciplines, while applying multiple dictionaries for different disciplines makes a solution more exhaustive. To alleviate these challenges, this paper proposes a robust ensemble-of-ensembles, semantic-feature-driven cross-discipline PII detection and de-identification model (EESD-PII). To achieve this, a large set of text queries encompassing diverse PII attributes, including personal credentials, healthcare data, and finance attributes, was considered for training-based PII detection and classification. The input texts were preprocessed through stop-word removal, punctuation removal, website-link removal, lower-case conversion, lemmatization, and tokenization. The tokenized text was processed with Word2Vec-driven continuous bag-of-words (CBOW) embedding, which not only provided a latent feature space for analytics but also enabled de-identification to preserve security.
To address class-imbalance problems, synthetic minority over-sampling techniques (SMOTE, SMOTE-BL, and SMOTE-ENN) were applied. Subsequently, the resampled features underwent feature selection using the Wilcoxon Rank Sum Test (WRST), which, at a 95% confidence interval, retained the most significant features. The selected features were Min-Max normalized to alleviate over-fitting and convergence problems, and the normalized feature vector was classified by an ensemble-of-ensembles learning model encompassing Bagging, Boosting, AdaBoost, Random Forest, and Extra Tree classifiers as base learners. The proposed model performs consensus-based majority voting to annotate each text query as PII or non-PII. Positively annotated queries can later be processed for dictionary-based PII attribute masking to achieve de-identification, while the semantic embedding serves NLP-based PII detection, de-identification, and re-identification tasks. The simulation results reveal that the proposed EESD-PII model achieves a PII annotation accuracy of 99.77%, precision of 99.81%, recall of 99.63%, and F-measure of 99.71%.

Author 1: Poornima Kulkarni
Author 2: Cauvery N K
Author 3: Hemavathy R

Keywords: PII Detection; machine learning; natural language processing; artificial intelligence; de-identification

PDF

Paper 78: Risk Assessment for Geological Exploration Projects Based on the Fuzzy-DEMATEL Method

Abstract: This paper briefly introduces the analytic hierarchy process (AHP) method and uses the fuzzy decision-making trial and evaluation laboratory (DEMATEL) method to adjust the index weights within it. The geological exploration component of the Qingdao undersea tunnel project in Shandong Province was selected as the subject of the case study. Firstly, the fuzzy-DEMATEL method was used to analyze the degree of influence between different risk factors in the project and the types of risk factors. Then, the AHP method divided the risk factors and calculated their weights. Finally, the influence parameters calculated by the fuzzy-DEMATEL method were employed to adjust the weights of the indicators in the AHP method. The fuzzy-DEMATEL analysis identified the driving, conclusion, and transitional risk factors. The analytic results of the AHP method showed that the construction supervision unit's qualification risk, management mechanism, and awareness risk had the greatest impact on the risk of the project, and the overall risk level of the project was 2.1 points.

Author 1: Zhenhua Yang
Author 2: Hua Shi
Author 3: Ning Tian
Author 4: Juan Bai
Author 5: Xiaoyu Han

Keywords: Geological exploration project; analytic hierarchy process; DEMATEL; fuzzy theory; risk assessment

PDF

Paper 79: Blockchain-Based Financial Control System

Abstract: To solve the problems of data security and low efficiency of information transmission in traditional financial control systems, this paper discusses in depth the application of blockchain technology in financial control systems. To optimize the performance of the traditional financial control system, this paper introduces blockchain technology into it and analyzes the structure and function of the financial control system. By constructing a blockchain-based financial data collection, information exchange, and security consensus mechanism, a more efficient financial control system is designed, which can significantly improve cost efficiency, shorten the audit cycle, and enhance data security. In the model, resource allocation within the financial control system is optimized, information exchange is more efficient, and a consensus mechanism is established. The experimental results prove that the model simplifies data entry and storage, reduces the workload of financial staff, and improves transparency. The study bridges the gap between blockchain and traditional financial frameworks and advances the development of modern financial control systems.

Author 1: Tedan Lu

Keywords: Blockchain technology; financial control system; resource allocation; information exchange; consensus mechanism

PDF

Paper 80: User Interface Design of SEVIMA EdLink Platform for Facilitating Tri Kaya Parisudha-Based Asynchronous Learning

Abstract: This research aims to show the user interface design of the SEVIMA EdLink platform to facilitate Tri Kaya Parisudha-based asynchronous learning in the nuances of independent learning. This research used the Research and Development method with the Borg & Gall development model, which focused on several stages, including research and field data collection, planning, design development, initial trial, and revision of the initial trial results. The number of respondents involved in the initial trial of the user interface design was two education experts, two informatics experts, 40 teachers of Tourism Vocational Schools in Bali, and 60 students of Tourism Vocational Schools in Bali. The data collection tool for the initial trial of the user interface design was a questionnaire consisting of ten questions. The analysis was conducted by comparing the effectiveness percentage of the user interface design with the effectiveness categorization standard referring to the five scales. The results showed that the user interface design of the SEVIMA EdLink platform was effective in facilitating Tri Kaya Parisudha-based asynchronous learning. The impact of this research on stakeholders in the field of education is the existence of new information related to the existence of an online learning platform called SEVIMA EdLink, which is integrated with an asynchronous learning strategy, independent learning policy, and Balinese local wisdom.

Author 1: Agus Adiarta
Author 2: I Made Sugiarta
Author 3: Komang Krisna Heryanda
Author 4: I Komang Gede Sukawijana
Author 5: Dewa Gede Hendra Divayana

Keywords: Design user interface; SEVIMA EdLink; asynchronous; Tri Kaya Parisudha; independent learning

PDF

Paper 81: Deep Learning-Optimized CLAHE for Contrast and Color Enhancement in Suzhou Garden Images

Abstract: Suzhou gardens are renowned for their unique color palettes and rich cultural significance. This study introduces a deep learning-optimized Contrast Limited Adaptive Histogram Equalization (CLAHE) method to enhance image contrast and improve color extraction accuracy in Suzhou garden images. An initial collection of 18,502 images was refined to 11,526 high-quality images from a single dataset. A pre-trained VGG16 convolutional neural network was used to extract image features, which were then employed to dynamically optimize the CLAHE parameters, thereby preserving the original color tones while enhancing contrast. The optimized CLAHE achieved significant improvements in the Structural Similarity Index (SSIM) by 24.69 percent and in the Peak Signal-to-Noise Ratio (PSNR) by 24.36 percent, and a reduction in Loss of Edge (LOE) by 36.62 percent, compared to the standard CLAHE. Additionally, enhanced structural detail and color complexity were observed. High-Resolution Network (HRNet) was utilized for semantic segmentation, enabling precise color feature extraction. K-means clustering was used to identify key color characteristics and complementary relationships among the primary and secondary colors in Suzhou gardens. A mathematical model capturing these relationships was developed to form the basis of a color palette generator, which can be applied to digital archiving, cultural preservation, aesthetic education, and virtual reality.

Author 1: Chuanyuan Li
Author 2: Ziyun Jiao

Keywords: Deep Learning-Optimized CLAHE; image contrast enhancement; color extraction; Suzhou gardens; VGG16; semantic segmentation

PDF

Paper 82: Surface Roughness Prediction Based on CNN-BiTCN-Attention in End Milling

Abstract: Surface roughness is a pivotal indicator of surface quality for machined components. It directly influences the performance and lifespan of manufactured products. Precise prediction of surface roughness is instrumental in refining production processes and curtailing costs. However, even with identical processing parameters, the final surface roughness can differ. This challenges the effectiveness of traditional prediction models based solely on processing parameters. Current prevalent approaches for surface roughness prediction rely on handcrafted features, which require expert knowledge and considerable time investment. To address these challenges, we comprehensively consider the advantages of various deep learning methods and propose a novel end-to-end architecture. It synergistically integrates convolutional neural networks (CNN), bidirectional temporal convolutional networks (BiTCN), and an attention mechanism, termed the CNN-BiTCN-Attention (CBTA) architecture. This architecture leverages CNN for automatic spatial feature extraction from signals, BiTCN to capture temporal dependencies, and the attention mechanism to focus on important features related to surface roughness. Experiments are conducted with popular deep learning methods on the public ACF dataset, which includes vibration, current, and force signals from the end milling process. The results demonstrate that the CBTA model outperforms other compared models. It achieves exceptional prediction performance with a mean absolute percentage error as low as 0.79% and an R2 as high as 99.81%. This validates the effectiveness and superiority of CBTA in end milling surface roughness prediction.

Author 1: Guanhua Xiao
Author 2: Hanqian Tu
Author 3: Yunzhe Xu
Author 4: Jiahao Shao
Author 5: Dongming Xiang

Keywords: Surface roughness prediction; end milling; CNN-BiTCN-Attention; deep learning

PDF

Paper 83: Enriching Sequential Recommendations with Contextual Auxiliary Information

Abstract: Recommender Systems (RS) play a key role in offering suggestions and predicting items for users on e-commerce and social media platforms. Sequential recommendation systems (SRS) leverage the user’s previous interaction history to forecast the next user-item interaction. Although deep learning methods like CNNs and RNNs have enhanced recommendation quality, current models still face challenges in accurately predicting future items based on a user’s past behavior. Transformer-based SRS have shown a significant performance boost in generating accurate recommendations by using only item identifiers, which alone are not sufficient to generate meaningful and relevant results. These models can be improved by incorporating descriptive features of the items, such as textual descriptions. This paper proposes a transformer-based SRS, ConSRec, Contextual Sequential Recommendations, that incorporates auxiliary information of the items, such as textual features, along with item identifiers to model user behavior sequences for producing more accurate recommendations. ConSRec builds upon the BERT4Rec model by integrating auxiliary information through sentence representations derived from the textual features of items. Extensive experiments conducted on several benchmark datasets demonstrate substantial improvements compared to other advanced models.

Author 1: Adel Alkhalil

Keywords: Recommender system; sequential recommendation; auxiliary information; sentence transformer; sentence embedding

PDF

Paper 84: On the Context-Aware Anomaly Detection in Vehicular Networks

Abstract: Transportation systems are moving towards autonomous and intelligent vehicles due to advancements in embedded systems, control algorithms, and wireless communications. By enabling connectivity among vehicles, a vehicular network can be developed which offers safe and efficient driving applications. Security is a major challenge for vehicular networks as application reliability depends on it. In this paper, we highlight the security challenges faced by a vehicular network especially related to jamming and data integrity attacks. Such attacks cause major disruptions in the wireless connectivity of users with the centralized servers. We propose a context-aware anomaly detection technique for vehicular networks that considers factors such as signal strength, mobility, and data pattern to find abnormal behaviors and malicious users. We further discuss how an intelligent learning system can be developed using efficient anomaly detection. We implement a vehicular network scenario with malicious users and provide simulation results to highlight the performance gain of the proposed technique. We also highlight several appropriate future opportunities related to the security of vehicular network applications.

Author 1: Mohammed Abdullatif H. Aljaafari

Keywords: Fog computing; load balancing; task offloading

PDF

Paper 85: TLDViT: A Vision Transformer Model for Tomato Leaf Disease Classification

Abstract: Accurate and efficient diagnostic methods are essential for crop health monitoring due to the substantial impact of tomato leaf diseases on crop yield and quality. Traditional machine learning models, such as convolutional neural networks (CNNs), have shown promise in plant disease classification; however, they often require extensive data preprocessing and struggle with complex variations in leaf appearance. This study introduces TLDViT (Tomato Leaf Disease Vision Transformer), a Vision Transformer model specifically designed for the classification of tomato leaf diseases. TLDViT reduces the need for preprocessing by learning disease-specific features directly from raw images, leveraging Vision Transformers’ ability to capture long-range dependencies within images. We evaluated TLDViT on the Plant Village Dataset, which includes healthy and diseased samples across multiple classes. For comparative analysis, two Vision Transformer models, ViT-r50-l32 and ViT-l16-fe, were tested. Among these, ViT-r50-l32 achieved the highest performance, surpassing ViT-l16-fe with an accuracy of 98%. These findings highlight TLDViT’s potential as an effective tool for crop health monitoring and automated plant disease diagnosis.

Author 1: Sami Aziz Alshammari

Keywords: Tomato Leaf Disease; Vision Transformer (ViT); crop health monitoring; plant disease classification

PDF

Paper 86: Hybrid Approach of Classification of Monkeypox Disease: Integrating Transfer Learning with ViT and Explainable AI

Abstract: Human monkeypox is a persistent global health challenge, ranking among the most common illnesses worldwide. Early and accurate diagnosis is critical to developing effective treatments. This study proposes a comprehensive approach to monkeypox diagnosis using deep learning algorithms, including Vision Transformer, MobileNetV2, EfficientNetV2, ResNet-50, and a hybrid model. The hybrid model combines ResNet-50, MobileNetV2, and EfficientNetV2 to reduce error rates and improve classification accuracy. The models were trained, validated, and tested on a specially curated monkeypox dataset. EfficientNetV2 demonstrated the highest training accuracy (99.94%), validation accuracy (97.80%), and testing accuracy (97.67%). ResNet-50 achieved 99.87% training accuracy, 99.85% validation accuracy, and 97.18% testing accuracy. MobileNetV2 reached 95.47% training accuracy, with validation and testing accuracies of 79.51% and 78.18%, respectively. Designed to mitigate overfitting, the Vision Transformer achieved 100% training accuracy, 87.51% validation accuracy, and 99.41% testing accuracy. Our hybrid model yielded 99.33% training accuracy and 99.09% testing accuracy. The Vision Transformer emerged as the most promising model due to its robust performance and high accuracy, followed closely by the hybrid model. Explainable AI (XAI) techniques, such as Grad-CAM, were applied to enhance the interpretability of predictions, providing visual insights into the classification process. The results underscore the potential of Vision Transformer and hybrid deep learning models for accurate and interpretable monkeypox diagnosis.

Author 1: MD Abu Bakar Siddick
Author 2: Zhang Yan
Author 3: Mohammad Tarek Aziz
Author 4: Md Mokshedur Rahman
Author 5: Tanjim Mahmud
Author 6: Sha Md Farid
Author 7: Valisher Sapayev Odilbek Uglu
Author 8: Matchanova Barno Irkinovna
Author 9: Atayev Shokir Kuranbaevich
Author 10: Ulugbek Hajiev

Keywords: Monkeypox; vision transformer; hybrid model; transfer learning; explainable artificial intelligence

PDF

Paper 87: Explainable Deep Transfer Learning Framework for Rice Leaf Disease Diagnosis and Classification

Abstract: Rice plays a vital role in the global food supply, but rice leaves are susceptible to disease, and leaf disease reduces the amount of food produced. Detecting rice leaf disease is therefore necessary to improve rice productivity. Currently, many researchers use deep learning methods to solve this problem; unfortunately, their results have been insufficiently accurate. In this paper, we construct transfer learning models to diagnose and categorize illnesses affecting rice leaves. To further improve model performance, we construct three ensemble learning models that combine various architectures. To bring transparency to the disease diagnostic process, we explore the explainable AI (XAI) problem of the visual object detector and integrate Gradient-weighted Class Activation Mapping (Grad-CAM) into the three ensemble models to generate explanations for individual object detections when assessing performance. The ensemble learning results indicate that merging different architectures can be effective in disease diagnosis, as evidenced by a best accuracy of 99.78%, which surpasses other state-of-the-art works. This research demonstrates that the integration of deep learning and transfer learning models yields improved prediction interpretability and classification accuracy for rice leaf disease. We thus establish a dependable deep, transfer, and ensemble learning method for the diagnosis of diseases affecting rice leaves.

Author 1: Md Mokshedur Rahman
Author 2: Zhang Yan
Author 3: Mohammad Tarek Aziz
Author 4: MD Abu Bakar Siddick
Author 5: Tien Truong
Author 6: Md. Maskat Sharif
Author 7: Nippon Datta
Author 8: Tanjim Mahmud
Author 9: Renzon Daniel Cosme Pecho
Author 10: Sha Md Farid

Keywords: Rice leaf; ensemble-learning; explainable AI; disease diagnosis; transfer learning

PDF

Paper 88: Multi-Label Decision-Making for Aerobics Platform Selection with Enhanced BERT-Residual Network

Abstract: In response to the increased demand for individualized workout routines, online aerobics programs are struggling to fulfil the needs of their diverse user bases with specialized suggestions. Current systems seldom combine multiple data sources to analyze user preferences, reducing customization accuracy and engagement. The Enhanced BERT-Residual Network (EBRN) evaluates multimodal input using residual processing blocks and BERT-based contextual embeddings to bridge textual and structural user characteristics. EBRN’s deep insights may help in understanding user engagement, fitness goals, and enjoyment. An innovative data balancing and feature selection method, Dynamic Equilibrium Sampling and Feature Transformation (DES-FT), improves data preparation and model accuracy. Two novel metrics, Contextual Scheduling Consistency (CSC) and Complexity-Weighted Accuracy (CWA), quantify EBRN’s stability in multi-attribute classification, particularly for complex data. EBRN outperforms standard AI models on a Toronto fitness platform dataset with 98.7% recall, 98.9% precision, and 99.3% accuracy. The research is limited by its geographically restricted dataset and lack of real-time validation. The results show that individualized aerobics recommendations incorporating instructor quality, platform accessibility, and material variety may boost involvement. Additional datasets and real-time adaptability are needed to make the approach more practical. EBRN’s tailored recommendations have the potential to transform user engagement and enjoyment on digital fitness platforms.

Author 1: Yan Hu

Keywords: Personalized fitness; aerobics recommendations; artificial intelligence; Enhanced BERT-Residual Network (EBRN); hybrid models; user engagement

PDF

Paper 89: Recursive Center Embedding: An Extension of MLCE for Semantic Evaluation of Complex Sentences

Abstract: A novel method for representing hierarchical sentences, named Multi-Leveled Center Embedding (MLCE), has recently been introduced. The approach utilizes the concept of center-embedded structures to demonstrate the structural complexity of complex sentences through iterative calculations of differences between the original and modified embeddings of its hierarchy. Through an implementation of Recursive Center-Embedding (RCE), we enhance the concept of MLCE by incorporating additional leveled features from the center-word hierarchy. The features are essential for training the Word2Vec model, enabling it to generate sophisticated vectors that perform well in sentence similarity analysis. RCE produces vectors via a hierarchical arrangement of center components, illustrating sentence structure that exceeds that of traditional word vectors and the BERT-base contextual model. The aim is to assess the similarity performance of the proposed RCE strategy. Furthermore, it examines its contextual ability obtained through leveled feature vectors that successfully correlated pairs of complex sentences across multiple benchmark datasets.

Author 1: ShivKishan Dubey
Author 2: Narendra Kohli

Keywords: Recursive Center Embedding (RCE); Multi-Level Center Embedding (MLCE); complex sentences; structural similarity

PDF

Paper 90: Fault-Tolerant Control of Nonlinear Delayed Systems Using Lyapunov Approach: Application to a Hydraulic Process

Abstract: Designing stabilizing controllers for delayed nonlinear systems with control constraints presents a significant challenge. This paper addresses this issue by proposing a fault-tolerant control approach for a specific class of delayed nonlinear systems with actuator faults based on the Lyapunov redesign principle. Initially, an assumption is introduced to facilitate the control design for the nominal system. Then, a new control law is developed to resolve the difficulty caused by actuator failures. The proposed nonlinear controller demonstrates the ability to compensate for actuator faults. To validate its effectiveness, the method is applied to a hydraulic system.

Author 1: Tayssir Abdelkrim
Author 2: Adel Tellili
Author 3: Nouceyba Abdelkrim

Keywords: Delayed nonlinear system; actuator faults; delayed hydraulic process; additive fault tolerant control; Lyapunov redesign approach

PDF

Paper 91: Advanced Deep Learning Approaches for Fault Detection and Diagnosis in Inverter-Driven PMSM Systems

Abstract: This paper presents a comprehensive approach to fault detection and diagnosis (FDD) in inverter-driven Permanent Magnet Synchronous Motor (PMSM) systems through the innovative integration of transformer-based architectures with physics-informed neural networks (PINNs). The methodology addresses critical challenges in power electronics reliability by incorporating domain-specific physical constraints into the learning process, enabling both high accuracy and physically consistent predictions. The proposed system combines advanced sensor fusion techniques with real-time monitoring capabilities, processing multiple input streams including phase currents, temperatures, and voltage measurements. The architecture’s dual-objective optimization approach balances traditional classification metrics with physics-based constraints, ensuring predictions align with fundamental electromagnetic and thermal principles. Experimental validation using a comprehensive dataset of 10,892 samples across nine distinct fault scenarios demonstrates the system’s exceptional performance, achieving 98.57% classification accuracy while maintaining physical consistency scores above 0.98. The model exhibits robust performance across varying operational conditions, including speed variations (97.45-98.57% accuracy range) and load fluctuations (97.91-98.12% accuracy range). Notable achievements include perfect detection rates for certain critical faults, such as high-side short circuits and thermal anomalies, with area under ROC curve (AUC) scores of 1.0. This research establishes new benchmarks in condition monitoring and fault diagnosis for power electronic systems, offering practical implications for predictive maintenance and system reliability enhancement.

Author 1: Abdelkabir BACHA
Author 2: Ramzi El IDRISSI
Author 3: Fatima LMAI
Author 4: Hicham EL HASSANI
Author 5: Khalid Janati Idrissi
Author 6: Jamal BENHRA

Keywords: Fault detection and diagnosis; PMSM; deep learning; transformers; physics-informed neural networks; power electronics

PDF

Paper 92: A Framework for Age Estimation of Fish from Otoliths: Synergy Between RANSAC and Deep Neural Networks

Abstract: This study represents a significant advancement in fish ecology by applying deep learning techniques to automate and improve the counting of growth rings in otoliths, which are essential for determining the age and growth patterns of fish. Traditionally, manual methods have been used to analyze these rings, but these approaches are time-consuming, require significant expertise, and are prone to bias. To address these limitations, we propose a novel methodology that combines convolutional neural networks (CNNs) with the RANSAC algorithm, enhancing the accuracy and reliability of ring detection, even in the presence of noise or natural image variations. Unlike manual techniques, which depend on observer expertise and subjective interpretation, our approach improves performance, often surpassing human experts while reducing analysis time. The results demonstrate the potential of deep learning and RANSAC in otolith research, offering powerful tools for sustainable fish population management and transforming research practices in marine ecology by providing faster, more reliable, and accessible analytical methods, setting new standards for more rigorous research.

Author 1: Souleymane KONE
Author 2: Abdoulaye SERE
Author 3: Dekpeltakié Augustin METOUALE SOMDA
Author 4: José Arthur OUEDRAOGO

Keywords: Otoliths; deep learning; pattern recognition; RANSAC; automated counting

PDF

Paper 93: Enhancing Steganography Security with Generative AI: A Robust Approach Using Content-Adaptive Techniques and FC DenseNet

Abstract: Content-adaptive image steganography based on minimizing an additive distortion function and Generative Adversarial Networks (GAN) is a promising trend. This approach can quickly generate an embedding probability map and has higher security performance than hand-crafted methods. However, existing works have ignored the semantic information between neighbouring pixels and NaN-loss scenarios, which leads to improper convergence. Such cases degrade the quality of the generated stego images, decreasing the security of the secret payload. Herein, FT GAN performance is investigated by proposing an FC DenseNet-based generator that incorporates feature reuse into the generator architecture. This investigation exploits the superior semantic segmentation capabilities of FC DenseNet, including feature reuse, implicit deep supervision, and DenseNet’s alleviation of the vanishing gradient problem, toward enhancing visual results, increasing security performance, and accelerating training. The ability to maintain high-quality visual characteristics and robust security even in resource-constrained environments, such as Internet of Things (IoT) contexts, demonstrates the practical benefits of this approach. The qualitative analysis of the visual results regarding the localization and intensity of texture regions exhibited augmented visual quality. Moreover, an improvement of 0.66% in the security attribute has been demonstrated in terms of the average detection error of the SRM EC steganalyzer across all target payloads.

Author 1: Ayyah Abdulhafidh Mahmoud Fadhl
Author 2: Bander Ali Saleh Al-rimy
Author 3: Sultan Ahmed Almalki
Author 4: Tami Alghamdi
Author 5: Azan Hamad Alkhorem
Author 6: Frederick T. Sheldon

Keywords: Content adaptive; distortion function; GAN; FC DenseNet; steganography; steganalysis

PDF

Paper 94: Novel Collaborative Intrusion Detection for Enhancing Cloud Security

Abstract: Intrusion Detection Models (IDM) often suffer from poor accuracy, especially when facing coordinated attacks such as Distributed Denial of Service (DDoS). One significant limitation of existing IDM solutions is the lack of an effective technique to determine the optimal period for sharing attack information among nodes in a distributed IDM environment. This article proposes a novel collaborative IDM model that addresses this issue by leveraging the Pruned Exact Linear Time (PELT) change point detection algorithm. The PELT algorithm dynamically determines the appropriate intervals for disseminating attack information to nodes within the collaborative IDM framework. Additionally, to enhance detection accuracy, the proposed model integrates a Gradient Boosting Machine with a Support Vector Machine (GBM-SVM) for collaborative detection of malicious activities. The proposed model was implemented in Apache Spark using the NSL-KDD benchmark intrusion detection dataset. Experimental results demonstrate that this collaborative approach significantly improves detection accuracy and responsiveness to coordinated attacks, providing a robust solution for enhancing cloud security.

Author 1: Widad Elbakri
Author 2: Maheyzah Md. Siraj
Author 3: Bander Ali Saleh Al-rimy
Author 4: Sultan Ahmed Almalki
Author 5: Tami Alghamdi
Author 6: Azan Hamad Alkhorem
Author 7: Frederick T. Sheldon

Keywords: Cloud security; intrusion detection; collaborative model; feature selection; anomaly detection; Pruned Exact Linear Time (PELT); gradient boosting machine; support vector machine; NSL-KDD; DDoS

PDF

Paper 95: Near-Optimal Traveling Salesman Solution with Deep Attention

Abstract: The Traveling Salesman Problem (TSP) is a well-known problem in computer science that requires finding the shortest possible route that visits every city exactly once. TSP has broad applications in logistics, routing, and supply chain management, where finding optimal or near-optimal solutions efficiently can lead to substantial cost and time reductions. However, traditional solvers rely on iterative processes that can be computationally expensive and time-consuming for large-scale instances. This research proposes a novel deep learning architecture designed to predict optimal or near-optimal TSP tours directly from the problem's distance matrix, eliminating the need for extensive iterations and thereby reducing total solving time. The proposed model leverages the attention mechanism to focus on the most relevant parts of the network, ensuring accurate and efficient tour predictions. Tested on the TSPLIB benchmark dataset, it showed significant improvements in both solution quality and computational speed compared to traditional solvers such as Gurobi and a Genetic Algorithm. This method presents a scalable and efficient solution for large-scale TSP instances, making it a promising approach for real-world traveling salesman applications.
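
To make the decoding idea concrete, here is a minimal hypothetical sketch (not the proposed network): it decodes a tour from the distance matrix one city at a time, weighting unvisited cities with a softmax over negative distances as the simplest stand-in for learned attention scores:

```python
import numpy as np

def greedy_decode_tour(D):
    """Decode a tour from distance matrix D by repeatedly
    'attending' to unvisited cities (softmax over -distance) and
    taking the highest-weight city. This degenerates to nearest
    neighbour, but mirrors the shape of attention decoding."""
    n = len(D)
    tour = [0]
    while len(tour) < n:
        scores = -D[tour[-1]].astype(float)
        scores[tour] = -np.inf              # mask visited cities
        w = np.exp(scores - scores.max())
        w /= w.sum()                        # attention weights
        tour.append(int(np.argmax(w)))
    return tour

def tour_length(D, tour):
    # closed tour: return to the starting city at the end
    return sum(D[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
```

A trained model would replace the fixed `-distance` scores with learned query/key projections; the masking and argmax/sampling loop stay the same.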

Author 1: Natdanai Kafakthong
Author 2: Krung Sinapiromsaran

Keywords: Traveling salesman problem; deep learning; genetic algorithm

PDF

Paper 96: Leveraging Deep Learning for Enhanced Information Security: A Comprehensive Approach to Threat Detection and Mitigation

Abstract: Rapid developments in cyberspace mean that protecting information resources requires enhanced, more dynamic protection models. Traditional approaches do not adequately address the numerous, sophisticated, varied, and frequently intersecting emergent security challenges, such as malware, phishing, and DDoS attacks. This paper introduces a novel hybrid deep learning framework leveraging convolutional neural networks (CNN) and recurrent neural networks (RNN) for enhanced threat detection and mitigation within a Zero Trust Architecture (ZTA). The model identifies anomalies indicative of potential security threats by analysing large network traffic datasets. To decrease false positives, autoencoders are integrated, significantly improving the system's ability to differentiate between normal and anomalous behaviour. Extensive experiments were conducted using a benchmark cybersecurity dataset, achieving an accuracy rate of 98.75% and a false positive rate of only 1.43%. Compared to traditional approaches, this dynamic deep learning framework is highly adaptable, requiring little oversight to respond effectively to new and evolving threats. The study results indicate that deep learning provides a robust and scalable solution for addressing emerging cyber threats and creating a more secure and reliable information security environment. Future work will focus on extending the framework to improve its accuracy and robustness, further advancing cybersecurity capabilities. This research contributes significantly to information security, establishing a promising direction for applying machine learning to enhance cybersecurity.
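
The autoencoder-based false-positive reduction described above boils down to thresholding reconstruction error. The sketch below is an illustrative stand-in using a linear autoencoder (equivalent to PCA) rather than the paper's deep model; all names and the percentile threshold are assumptions:

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """A linear autoencoder is equivalent to PCA: keep the top-k
    principal directions of the (assumed normal) training traffic."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(X, mu, W):
    Z = (X - mu) @ W.T           # encode
    Xh = Z @ W + mu              # decode
    return ((X - Xh) ** 2).sum(axis=1)

def threshold(train_errors, q=99.0):
    # flag a sample as anomalous when its error exceeds a
    # high percentile of the training (normal) errors
    return np.percentile(train_errors, q)
```

Traffic that the model cannot reconstruct well did not resemble the normal data it was trained on, which is exactly the signal used to suppress false positives.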

Author 1: KaiJing Wang

Keywords: Artificial intelligence; deep learning; information security; threat detection; cybersecurity; convolutional neural network; recurrent neural network; mitigation

PDF

Paper 97: SGCN: Structure and Similarity-Driven Graph Convolutional Network for Semi-Supervised Classification

Abstract: Traditional Graph Convolutional Networks (GCNs) primarily utilize graph structural information for information aggregation, often neglecting node attribute information. This approach can distort node similarity, resulting in ineffective node feature representations and reduced performance in semi-supervised node classification tasks. To address these issues, this study introduces a similarity measure based on the Minkowski distance to better capture the proximity of node features. Building on this, SGCN, a novel graph convolutional network, is proposed, which integrates this similarity information with conventional graph structural information. To validate the effectiveness of SGCN in learning node feature representations, two classification models based on SGCN are introduced: SGCN-GCN and SGCN-SGCN. The performance of these models is evaluated on semi-supervised node classification tasks using three benchmark datasets: Cora, Citeseer, and Pubmed. Experimental results demonstrate that the proposed models significantly outperform the standard GCN model in terms of classification accuracy, highlighting the superiority of SGCN in node feature representation learning. Additionally, the impact of different distance metrics and fusion factors on the models’ classification capabilities is investigated, offering deeper insights into their performance characteristics. The code and datasets are available at https://github.com/YONGLONGHU/SGCN.git.
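
A minimal sketch of the two ingredients named above — Minkowski-distance similarity and its fusion with the adjacency structure. The distance-to-similarity mapping and the fusion rule here are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def minkowski_similarity(X, p=2):
    """Pairwise Minkowski distance between node feature rows,
    mapped to a similarity in (0, 1]."""
    D = (np.abs(X[:, None, :] - X[None, :, :]) ** p).sum(-1) ** (1.0 / p)
    return 1.0 / (1.0 + D)

def fused_propagation(A, S, alpha=0.5):
    """Blend the self-looped adjacency with feature similarity
    using fusion factor alpha, then row-normalize so the result
    can replace the usual GCN propagation matrix."""
    M = (1.0 - alpha) * (A + np.eye(len(A))) + alpha * S
    return M / M.sum(axis=1, keepdims=True)
```

With `p=2` this reduces to Euclidean distance; varying `p` and `alpha` corresponds to the distance-metric and fusion-factor study mentioned in the abstract.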

Author 1: WenQiang Guo
Author 2: YongLong Hu
Author 3: YongYan Hou
Author 4: BoFeng Xue

Keywords: Graph convolutional networks; semi-supervised node classification; Minkowski distance; similarity information

PDF

Paper 98: Empowering Home Care: Utilizing IoT and Deep Learning for Intelligent Monitoring and Management of Chronic Diseases

Abstract: Integrating the Internet of Things (IoT) with Artificial Intelligence (AI) is one of the catalysts for improving traditional healthcare services. This integration has created many opportunities and has shifted healthcare towards home care, a concept that harnesses the advanced potential of technologies such as the IoT and deep learning to intelligently monitor and manage chronic diseases. As the population grows, the strain on traditional healthcare services increases. Chronic diseases in particular require innovative solutions that go beyond the boundaries of traditional healthcare settings because of their impact on individuals' health; for example, traditional healthcare systems have little capacity to provide high-quality, real-time services. Empowering home care services with deep learning and IoT technology is promising: interconnected devices enable continuous monitoring, while deep learning provides intelligent insights from massive data sets. This review explores the key components of enabling home care, including continuous patient health monitoring, predictive analytics, medication management, remote patient support by healthcare providers, and friendly interfaces for end-users. The conjunction of the IoT and deep learning in home care signals a shift toward precision medicine, enhancing patient outcomes and creating a sustainable model for chronic disease management in the era of decentralized healthcare.
This review article aims to discuss the following aspects: presenting the latest technologies in home care systems; showing the merit of combining the Internet of Medical Things (IoMT) and deep learning, and its role in monitoring patient conditions and managing chronic diseases to improve patient health status accurately, in real time, and cost-effectively; and, lastly, debating future studies and providing recommendations for the ongoing development of home care remote monitoring applications.

Author 1: Nouf Alabdulqader
Author 2: Khaled Riad
Author 3: Badar Almarri

Keywords: IoT; IoMT; intelligent monitoring; chronic diseases; deep learning; home care; physiological data; mHealth

PDF

Paper 99: Performance Comparison of Object Detection Models for Road Sign Detection Under Different Conditions

Abstract: While driving, drivers often overlook the traffic signs along the roads, compromising road safety and increasing the risk of accidents. To address this, artificial intelligence (AI) and deep learning techniques are employed, leveraging advances in Artificial Neural Networks (ANNs) and image processing for robust road sign detection. In this work, we compare the performance of existing state-of-the-art object detection models for road sign detection, including YOLOv8, YOLOv9, RTMDet, Faster R-CNN and RetinaNet, using a large dataset of road sign images. These models are fine-tuned and their hyperparameters optimized with varied settings such as auto-orientation and augmentation during the preprocessing and training phases. The models are then tested, and key performance indicators such as mean average precision (mAP), inference throughput in frames per second (fps), and total loss are evaluated. Our study reaffirms earlier findings that YOLOv9 and YOLOv8 outperform other detectors in real-time detection tasks: their inference is faster than that of most detectors, as highlighted by the high fps rates, though at some cost in accuracy. In contrast, RTMDet is both fast and reliable, making it a highly effective option for detecting various road signs. The insights presented in this research are useful in identifying the suitability and drawbacks of each model, thereby supporting the selection of the best-suited model for real-world applications such as autonomous vehicles or self-driving cars.
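
Since mAP is built on intersection-over-union (IoU) matching between predicted and ground-truth boxes, a minimal helper (a generic sketch, not tied to any of the frameworks above) looks like:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5; mAP then averages precision over recall levels and classes.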

Author 1: Zainab Fatima
Author 2: M. Hassan Tanveer
Author 3: Hira Mariam
Author 4: Razvan Cristian Voicu
Author 5: Tanazzah Rehman
Author 6: Rizwan Riaz

Keywords: Artificial intelligence; artificial neural networks; image processing; deep learning; road signs detection

PDF

Paper 100: Accuracy Optimization and Wide Limit Constraints of DC Energy Measurement Based on Improved EEMD

Abstract: In modern power systems, with the increasing application of renewable energy, direct current (DC) transmission technology has imposed new requirements on energy metering. To solve the accuracy problem of traditional electric energy metering under DC conditions, this research builds on ensemble empirical mode decomposition (EEMD) and introduces the artificial chemical reaction optimization algorithm (ACROA) to enhance the global search capability and decomposition accuracy of the original algorithm, while wide-limit constraints safeguard the accuracy of metering equipment under extreme conditions; on this basis, a new optimization model for DC electric energy metering accuracy is proposed. The highest measurement accuracy of this model reached 90%, and it performed better in power signal decomposition and accuracy optimization. In particular, under high-frequency interference and complex signal conditions, the measurement error could be reduced to 6.87%, the highest decomposition stability was 94.02%, and the shortest measurement time was 1.12 seconds. The model constructed in this study therefore exhibits excellent decomposition accuracy and robustness in complex energy environments, overcoming the shortcomings of traditional energy metering methods and providing new ideas for future optimization of DC energy metering.
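
The noise-assisted averaging that distinguishes EEMD from plain EMD can be sketched as follows; for brevity a moving-average split stands in for true EMD sifting, and all names and parameters here are illustrative assumptions:

```python
import numpy as np

def crude_decompose(x, win=11):
    # stand-in for EMD sifting: split the signal into a smooth
    # trend and a fast detail component via a moving average
    kernel = np.ones(win) / win
    trend = np.convolve(x, kernel, mode="same")
    return x - trend, trend  # (fast component, slow component)

def eemd_like(x, n_trials=50, noise_std=0.1, seed=0):
    """EEMD idea: decompose many noise-perturbed copies of the
    signal and average, so the added white noise cancels out
    while the real oscillatory structure survives."""
    rng = np.random.default_rng(seed)
    fast_sum = np.zeros_like(x)
    slow_sum = np.zeros_like(x)
    for _ in range(n_trials):
        noisy = x + noise_std * rng.standard_normal(len(x))
        fast, slow = crude_decompose(noisy)
        fast_sum += fast
        slow_sum += slow
    return fast_sum / n_trials, slow_sum / n_trials
```

The ACROA contribution described in the abstract would sit on top of such a decomposition, tuning its parameters for decomposition accuracy rather than using fixed values as here.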

Author 1: Xiaoyu Wang
Author 2: Xin Yin
Author 3: Xinggang Li
Author 4: Jiangxue Man
Author 5: Yanhe Liang
Author 6: Fan Xu

Keywords: EEMD; direct current energy; measurement; width limit; ACROA

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org