The Science and Information (SAI) Organization
IJACSA Volume 16 Issue 2

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.


Paper 1: 6G-Enabled Autonomous Vehicle Networks: Theoretical Analysis of Traffic Optimization and Signal Elimination

Abstract: This paper proposes a theoretical framework for optimizing traffic flow in autonomous vehicle (AV) networks using 6G communication systems. We propose a novel technique to eliminate conventional traffic signals through vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. The article demonstrates improvements in traffic flow, density, and safety through real-time management and decision-making. The theoretical foundation combines multi-agent deep reinforcement learning with analytical models for managing partitioned intersections, forming the basis of the proposed smart-city advancements. The theoretical analysis shows relative improvements of 40-50% in intersection waiting time, 50-70% in accident probability, and 35% in carbon footprint. These improvements are obtained by applying ultra-low-latency 6G communication with sub-millisecond response times while accommodating up to 10,000 vehicles per square kilometre. In addition, an economic evaluation indicates that such a system would achieve a return on investment within 6.7 years, making it both technically and financially viable for enhancing intelligent cities.

Author 1: Daniel Benniah John

Keywords: 6G Communication systems; autonomous vehicle networks; traffic flow optimization; signal-free traffic management; Vehicle-to-Vehicle Communication (V2V); Vehicle-to-Infrastructure Communication (V2I); multi-agent deep reinforcement learning; real-time traffic management

PDF
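The signal-free coordination idea above can be illustrated with a toy slot-reservation manager: vehicles request a time slot for a conflict zone of the intersection, and the manager grants the earliest free one. This is our own illustrative sketch (all class and method names are hypothetical), not the paper's actual 6G/multi-agent-RL protocol.

```python
# Toy reservation-based intersection manager (hypothetical sketch, not the
# paper's protocol): each conflict zone keeps a set of reserved time slots,
# and a vehicle is granted its desired slot or the next free one after it.

class IntersectionManager:
    def __init__(self):
        self.reservations = {}  # zone -> set of reserved time slots

    def request_slot(self, zone, desired_slot):
        """Grant desired_slot if free, otherwise the next free slot for zone."""
        taken = self.reservations.setdefault(zone, set())
        slot = desired_slot
        while slot in taken:
            slot += 1  # delay by one slot until a free one is found
        taken.add(slot)
        return slot

mgr = IntersectionManager()
print(mgr.request_slot("north-south", 3))  # 3 (slot is free)
print(mgr.request_slot("north-south", 3))  # 4 (3 already taken)
print(mgr.request_slot("east-west", 3))    # 3 (different conflict zone)
```

A real system would additionally handle slot release, priorities, and latency bounds; this only shows the conflict-avoidance core.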

Paper 2: Light-Weight Federated Transfer Learning Approach to Malware Detection on Computational Edges

Abstract: With the rapid increase in edge computing devices, lightweight methods to identify and stop cyber-attacks have become a topic of interest for the research community. The fast proliferation of smart devices and customers’ concerns regarding data security and privacy have necessitated new methods to counter cyber attacks. This work presents a unique lightweight transfer learning method for malware detection in a federated mode. Existing systems seem insufficient for providing cyber security in resource-constrained environments, and fast IoT device deployment raises a serious threat from malware attacks, which calls for more efficient, real-time detection systems. Using a transfer learning model over a federated architecture, the research aims to counter these cyber risks and achieve efficient malware detection in particular. The study assessed the performance of the model on a real-world, publicly accessible IoT network dataset, Aposemat IoT-23. Extensive testing shows that, with training accuracy approaching 98% and validation accuracy reaching 97.6% after 10 epochs, the proposed model achieves a detection accuracy of over 98%. These findings show how well the model detects malware threats while keeping processing times reasonable, which is critical for IoT devices with limited resources.

Author 1: Sakshi Mittal
Author 2: Prateek Rajvanshi
Author 3: Riaz Ul Amin

Keywords: Malware detection; transfer learning; lightweight transfer learning; federated learning

PDF
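The federated side of the approach can be sketched as a FedAvg-style aggregation: each edge client trains locally, and only model weights (never raw data) are averaged into a global model. This is a minimal sketch under our own naming, not the paper's implementation.

```python
# Minimal federated-averaging sketch: the server's only job is an
# element-wise mean of per-client weight vectors. Raw client data never
# leaves the device, which is the privacy argument behind federated learning.

def federated_average(client_weights):
    """Element-wise mean across clients; each entry is one client's weights."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Three edge clients report locally updated weight vectors:
clients = [[0.2, 0.4], [0.4, 0.6], [0.6, 0.8]]
global_weights = federated_average(clients)
print([round(w, 6) for w in global_weights])  # [0.4, 0.6]
```

In practice the average is weighted by each client's sample count and applied per layer; the transfer-learning part would freeze pretrained layers and only federate the small trainable head.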

Paper 3: An Automated Mapping Approach of Emergency Events and Locations Based on Object Detection and Social Networks

Abstract: The high prevalence of cellphones and social networking platforms such as Snapchat is dissolving traditional barriers between information providers and end-users. This is particularly relevant in emergency events, as individuals on site produce and exchange real-time information about the event. However, notwithstanding their demonstrated significance, obtaining event-related information from real-time streams of vast numbers of snaps is a significant challenge. To address this gap, this paper proposes an automated approach for mapping emergency events and locations based on object detection and social networks. Employing object detection on social networks to detect emergency events yields a reliable, flexible, and fast approach, using the Snapchat hotspot map as a trusted source to discover the exact location of emergency events. The proposed approach aims for high accuracy by employing state-of-the-art object detectors. This paper evaluates the performance of four object detection baseline models and the proposed ensemble approach for detecting emergency events. Results show that the proposed approach achieved a very high accuracy of 96% on the flood dataset and 94% on the fire dataset.

Author 1: Khalid Alfalqi
Author 2: Martine Bellaiche

Keywords: Machine learning; deep learning; big data; social networks; object detection; emergency event detection; snapchat; hotspot map

PDF

Paper 4: Resampling Imbalanced Healthcare Data for Predictive Modelling

Abstract: Imbalanced datasets pose significant challenges in healthcare for developing accurate predictive models in medical diagnostics. In this work, we explore the effectiveness of combining resampling methods with machine learning algorithms to enhance prediction accuracy for imbalanced heart and lung disease datasets. Specifically, we integrate undersampling techniques such as Edited Nearest Neighbours (ENN) and Instance Hardness Threshold (IHT) with oversampling methods like Random Oversampling (RO), Synthetic Minority Oversampling Technique (SMOTE), and Adaptive Synthetic Sampling (ADASYN). These resampling strategies are paired with classifiers including Decision Trees (DT), Random Forests (RF), K-Nearest Neighbours (KNN), and Support Vector Machines (SVM). Model performance is evaluated using accuracy, precision, recall, F1 score, and the Area Under the Curve (AUC). Our results show that tailored resampling significantly boosts machine learning model performance in healthcare settings. Notably, SVM with ENN undersampling markedly improves accuracy for lung cancer predictions, while SVM and RF with IHT achieve higher validation accuracies for both diseases. Random oversampling shows variable effectiveness across datasets, whereas SMOTE and ADASYN consistently enhance accuracy. This study underscores the value of integrating strategic resampling with machine learning to improve predictive reliability for imbalanced healthcare data.

Author 1: Manoj Yadav Mamilla
Author 2: Ronak Al-Haddad
Author 3: Stiphen Chowdhury

Keywords: Imbalanced data; resampling; machine learning; healthcare

PDF
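The simplest of the resampling strategies named above, Random Oversampling (RO), can be sketched in a few lines: duplicate minority-class samples at random until every class reaches the majority count. Libraries such as imbalanced-learn provide production versions of RO, SMOTE, and ADASYN; this stdlib-only sketch just shows the balancing step.

```python
import random

# Random oversampling sketch: duplicate minority samples (with replacement)
# until all classes match the size of the largest class.

def random_oversample(X, y, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    Xb, yb = [], []
    for label, rows in by_class.items():
        extra = [rng.choice(rows) for _ in range(target - len(rows))]
        for row in rows + extra:
            Xb.append(row)
            yb.append(label)
    return Xb, yb

X = [[0], [1], [2], [3], [4]]
y = [0, 0, 0, 0, 1]                 # 4:1 class imbalance
Xb, yb = random_oversample(X, y)
print(yb.count(0), yb.count(1))     # 4 4
```

SMOTE differs in that it interpolates new synthetic minority points between neighbours rather than duplicating existing ones, which is why the abstract reports it behaving more consistently.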

Paper 5: Exploiting Ray Tracing Technology Through OptiX to Compute Particle Interactions with Cutoff in a 3D Environment on GPU

Abstract: Particle interaction simulation is a fundamental method of scientific computing that require high-performance solutions. In this context, computing on graphics processing units (GPUs) has become standard due to the significant performance gains over conventional CPUs. However, since GPUs were originally designed for 3D rendering, they still retain several features that are not fully exploited in scientific computing. One such feature is ray tracing, a powerful technique for rendering 3D scenes. In this paper, we propose exploiting ray tracing technology via OptiX and CUDA to compute particle interactions with a cutoff distance in a 3D environment on GPUs. To this end, we describe algorithmic techniques and geometric patterns for efficiently determining the interaction lists for each particle. Our approach enables the computation of interactions with quasi-linear complexity in the number of particles, eliminating the need to construct a grid of cells or an explicit kd-tree. We compare the performance of our method to a classical grid-based approach and demonstrate that our approach is faster in most cases with non-uniform particle distributions.

Author 1: David Algis
Author 2: Berenger Bramas

Keywords: CUDA; graphics processing unit; high-performance computing; OptiX; particle interactions; ray tracing; scientific computing

PDF
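The classical grid-based baseline that the paper compares against can be sketched simply: bin particles into cells whose edge equals the cutoff, then test only pairs in the same or neighbouring cells. A stdlib-only sketch (the actual baseline and the OptiX method run on GPU):

```python
from collections import defaultdict
from itertools import product

# Cell-grid cutoff search: each particle only needs to be tested against
# particles in its own cell and the 26 surrounding cells, because any pair
# within the cutoff distance must lie in adjacent cells of size == cutoff.

def neighbor_pairs(points, cutoff):
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        grid[(int(x // cutoff), int(y // cutoff), int(z // cutoff))].append(i)
    pairs = set()
    for (cx, cy, cz), members in grid.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                for i in members:
                    if i < j:
                        a, b = points[i], points[j]
                        d2 = sum((p - q) ** 2 for p, q in zip(a, b))
                        if d2 <= cutoff ** 2:
                            pairs.add((i, j))
    return pairs

pts = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (5.0, 5.0, 5.0)]
print(neighbor_pairs(pts, 1.0))  # {(0, 1)} -- only the close pair interacts
```

The paper's contribution is replacing this explicit grid with hardware-accelerated ray-tracing queries (OptiX), which avoids building the grid at all and copes better with non-uniform distributions.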

Paper 6: RSCHED: An Effective Heterogeneous Resource Management for Simultaneous Execution of Task-Based Applications

Abstract: Modern parallel architectures have heterogeneous processors and complex memory hierarchies, offering up to billion-way parallelism at multiple hierarchical levels. Their exploitation by HPC applications greatly boosts scientific discoveries and advances, but they are still not fully utilized, leading to proportionally high energy consumption. The task-based programming model has demonstrated promising potential in developing scientific applications on modern high-performance platforms. This work introduces a new framework for managing the concurrent execution of task-based applications, RSCHED. The framework aims to minimize the overall time spent executing a set of applications and maximize resource utilization. RSCHED is a two-level resource management framework: resource distribution and task scheduling, with resources sharable and reusable on the fly. A new Gradient Descent model has been proposed, among other strategies for resource distribution, due to its well-known fast convergence even in fast-growing systems. We implemented our proposal on StarPU and evaluated it on real applications. RSCHED demonstrated the potential to reduce the overall makespan of executed applications compared to consecutive execution by an average factor of 10x, and the potential to increase resource utilization.

Author 1: Etienne Ndamlabin
Author 2: Berenger Bramas

Keywords: Heterogeneous resource management; scheduling; task-based applications; gradient descent; StarPU

PDF
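The gradient-descent resource-distribution idea can be illustrated on a toy cost model: split R workers between two applications with workloads w1 and w2, minimizing the total cost w1/x + w2/(R-x). This cost model and all names are our own illustration, not RSCHED's actual objective over StarPU resources.

```python
# Gradient-descent resource split (illustrative cost model, not RSCHED's):
# cost(x) = w1/x + w2/(R-x); its gradient is -w1/x^2 + w2/(R-x)^2, and the
# optimum puts workers in proportion to sqrt of each workload.

def split_resources(w1, w2, R, lr=0.05, steps=5000):
    x = R / 2                                # start with an even split
    for _ in range(steps):
        grad = -w1 / x**2 + w2 / (R - x) ** 2
        x -= lr * grad
        x = min(max(x, 1e-6), R - 1e-6)      # keep the split feasible
    return x

x = split_resources(w1=1.0, w2=4.0, R=10.0)
print(round(x, 2))  # ~3.33: the lighter app gets a third of the workers
```

The appeal noted in the abstract is that this kind of update keeps converging quickly as the system (number of applications and resources) grows, unlike exhaustive search over allocations.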

Paper 7: Enhanced Network Bandwidth Prediction with Multi-Output Gaussian Process Regression

Abstract: Modern network environments, especially in domains like 5G and IoT, exhibit highly dynamic and nonlinear traffic behaviors, posing significant challenges for accurate time series analysis and predictive modeling. Traditional approaches, including stochastic ARIMA and deep learning-based LSTM, frequently encounter difficulties in capturing rapid signal variations and inter-channel dependencies, often due to data sparsity or excessive computational cost. To address these issues, this paper proposes a Multi-Output Gaussian Process (MOGP) framework augmented with a novel signal processing strategy, where additional signals are generated by summing adjacent elements over multiple window sizes. Such multi-scale enrichment effectively leverages cross-channel correlations, enabling the MOGP model to discover complex temporal patterns in multi-channel data. Experimental results on real-world network traces highlight that the proposed method achieves consistently lower RMSE compared to conventional single-output or deep learning methods, thereby underscoring its value for robust bandwidth estimation. Our findings suggest that integrating MOGP with multi-scale augmentation holds promise for a wide range of predictive analytics applications, including resource allocation in 5G networks and traffic monitoring in IoT systems.

Author 1: Shude Chen
Author 2: Takayuki Nakachi

Keywords: Network traffic prediction; Multi-Output Gaussian Process (MOGP); signal processing; time series analysis; predictive modeling; multi-channel data; IoT traffic monitoring; 5G networks

PDF
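The multi-scale augmentation described in the abstract (summing adjacent elements over multiple window sizes to create extra channels) is easy to sketch; the function and channel naming here are our own, not the paper's:

```python
# Multi-scale channel augmentation: from one bandwidth series, derive extra
# channels by summing adjacent elements over several window sizes, so a
# multi-output model can exploit correlations between the scales.

def window_sum_channels(series, windows=(2, 3)):
    channels = {1: list(series)}  # window 1 = the original series
    for w in windows:
        channels[w] = [sum(series[i:i + w]) for i in range(len(series) - w + 1)]
    return channels

traffic = [1, 2, 3, 4, 5]
chans = window_sum_channels(traffic)
print(chans[2])  # [3, 5, 7, 9]
print(chans[3])  # [6, 9, 12]
```

Each derived channel is smoother than the raw one, and in the paper's setup the MOGP treats all of them as jointly modelled outputs rather than independent series.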

Paper 8: Automated Subjective Perception of a Driver’s Pain Level Based on Their Facial Expression

Abstract: One factor that correlates positively with the risk of traffic accidents is the pain experienced by drivers. This pain is sometimes expressed facially by the driver and can be subjectively perceived by others. By observing drivers' facial expressions, the pain experienced at that point in time can be estimated, enabling interventions to prevent some accidents. This study proposes a method to automatically estimate the pain level expressed by a driver from their facial expression. A convolutional neural network model is trained on a public dataset of facial expressions at various pain levels. This model is then used to automatically classify the perceived pain level using only drivers' facial expressions. The result of the automated classification is then compared to subjective ratings of the driver's pain evaluated by a medical doctor. The experimental results showed that the model's classification of the pain level expressed facially by the drivers matched the medical doctor's classification at a rate of 80%.

Author 1: F. Hadi
Author 2: O. Fukuda
Author 3: W. LYeoh
Author 4: H. Okumura
Author 5: Y. Rodiah
Author 6: Herlina
Author 7: A. Prasetyo

Keywords: Pain; driver; convolutional neural network; facial expression

PDF

Paper 9: Mobile Application Based on Geolocation for the Recruitment of General Services in Trujillo, La Libertad

Abstract: Currently, there is no technological solution that efficiently facilitates the offering of general services by independent workers in the city of Trujillo. This limitation reduces job opportunities, as workers secure fewer contracts due to reliance on client recommendations, a method that is often inefficient due to long response times and low accessibility. Leveraging the versatility of mobile applications, this study contributes to computer science by demonstrating how cloud-based data management, real-time communication, and location-based service matching using Google APIs optimize service efficiency and user experience. The study follows an applied research approach with a quantitative methodology, employing a pre-experimental explanatory design and a sample of 22 workers selected through non-probabilistic convenience sampling. The development was carried out using the Flutter framework and the Dart programming language, with an SQL database hosted on Microsoft Azure cloud services. The Mobile-D agile methodology guided the development process. After implementing the application, the results showed an 86.79% reduction in the average hiring process time, a 50% increase in the number of contracts completed, and a 51.27% improvement in workers' average satisfaction. These findings highlight the effectiveness of mobile and cloud computing technologies, along with ranking algorithms and geolocation services, in streamlining labor market interactions and improving user experience.

Author 1: Melissa Giannina Alvarado Baudat
Author 2: Camila Vertiz Asmat
Author 3: Fernando Sierra-Liñan

Keywords: Mobile application; recruitment; geolocation; general services

PDF

Paper 10: Development of a Software Tool for Learning the Fundamentals of CubeSat Angular Motion

Abstract: The development of tools for understanding and simulating CubeSat angular motion is essential for both educational and research purposes in space technology. In this context, this paper presents the development of a MATLAB-based software tool designed to facilitate the comprehension of CubeSat angular motion. This tool allows users to simulate CubeSat dynamics by adjusting parameters, such as initial conditions and physical properties, enabling the observation of different types of motion, including rotatory and oscillatory motion with both stable and unstable behaviors. The mathematical models selected for simulating the CubeSat dynamics are presented. The interface of the tool, designed for intuitive parameter input and visualization of phase portraits of the system under consideration, is described. The software is demonstrated using a CubeSat 3U configuration, and simulation results, including angle of attack, angular velocity, and altitude decay, are presented. This tool aims to enhance the understanding of CubeSat angular motion, contributing to the design and operation of CubeSat missions in low Earth orbit.

Author 1: Victor Romero-Alva
Author 2: Angelo Espinoza-Valles

Keywords: CubeSat; angular motion; simulation; learning tool; MATLAB

PDF

Paper 11: Performance Evaluation and Selection of Appropriate Congestion Control Algorithms for MPT Networks

Abstract: Recent academic research highlights a growing interest in multipath technologies, which offer promising solutions to networking challenges in complex environments. This interest is reflected in the emergence of protocols such as Multipath TCP (MPTCP) and Multipath UDP-in-GRE (MPT-GRE). The development of network protocols, particularly various iterations of the Transmission Control Protocol (TCP), has been distinguished by congestion detection and control algorithms, such as HighSpeed, CUBIC, Reno, LP, BBR, and Illinois. This paper evaluates the performance and suitability of these algorithms for multipath MPT-GRE networks under varying conditions, including delay, jitter, and data loss at different transmission speeds (both symmetric and asymmetric). Using StarBED resources, we applied delay, jitter, or packet loss to one of two physical paths to simulate congestion. The results demonstrate that some algorithms, HighSpeed and BBR among them, significantly enhance Quality of Service (QoS) metrics and network throughput in multipath MPT-GRE networks. These findings provide valuable insights into their performance and practical applications.

Author 1: Naseer Al-Imareen
Author 2: Gábor Lencse

Keywords: Packet loss; congestion control; MPT-GRE; delay; throughput; jitter

PDF
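The Reno-family algorithms evaluated above share a common core, additive increase/multiplicative decrease (AIMD), which can be sketched in a few lines. This is the textbook rule only; CUBIC, BBR, HighSpeed, etc. replace it with their own window-update logic.

```python
# Textbook AIMD sketch: grow the congestion window by `a` per acknowledged
# round, cut it by factor `b` on a loss event. This is the behaviour the
# paper's MPT-GRE experiments stress with injected delay, jitter and loss.

def aimd(events, cwnd=1.0, a=1.0, b=0.5):
    trace = []
    for ev in events:  # 'ack' grows the window, 'loss' shrinks it
        cwnd = cwnd + a if ev == "ack" else cwnd * b
        trace.append(cwnd)
    return trace

print(aimd(["ack", "ack", "ack", "loss", "ack"]))  # [2.0, 3.0, 4.0, 2.0, 3.0]
```

The sawtooth this produces is exactly what loss-based algorithms exhibit on a lossy path, and why model-based algorithms such as BBR, which estimate bandwidth instead of reacting to loss, fare better in the paper's loss scenarios.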

Paper 12: A Chatbot for the Legal Sector of Mauritius Using the Retrieval-Augmented Generation AI Framework

Abstract: Mauritius is known to have a hybrid legal system as the logical consequence of being both a former French and English colony. From its independence in 1968 to date, the legal environment has changed to reflect the constant need to provide a framework to address the country’s diverse needs. With over 1200 pieces of legislation available for consultation, including those which are no longer in force, it is very difficult to know all of them. Yet, there is a legal maxim that says, “nemo censetur ignorare legem”. In other words, ignorance of the law is no excuse. This study aims to provide a solution for professionals and non-professionals to have better access to the law through the development of a chatbot. A Retrieval Augmented Generation (RAG) chatbot system has been developed to achieve this objective. A RAG system is one that leverages Large Language Models (LLMs) to process a query and generate a response, while ensuring accuracy by performing similarity searches against documents stored in a vector database. A sample of 46 legal documents (acts and regulations) was retrieved from the website of the Supreme Court of Mauritius. They were broken down into chunks and stored as vectors in Chroma, a vector database. The chatbot combines and processes the queries with a text prompt, searches the relevant legal texts, and generates an appropriate response using OpenAI GPT-4o-mini or MistralAI Open-Mixtral-8x22B. Since most legal texts are in English, a translation layer is included for queries in French. Sources for the answers are also displayed for easy cross-validation. This chatbot will undoubtedly be a useful tool for the Mauritian people.

Author 1: Taariq Noor Mohamed
Author 2: Sameerchand Pudaruth
Author 3: Ivan Coste-Manière

Keywords: Law; chatbot; retrieval augmented generation; large language model; OpenAI; Mistral AI

PDF
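The retrieval step of a RAG pipeline like the one above can be sketched with bag-of-words vectors and cosine similarity: embed each chunk, embed the query, return the closest chunk. Real systems (such as the paper's Chroma plus OpenAI/Mistral setup) use learned embeddings instead; the chunk texts below are our own toy examples.

```python
from collections import Counter
from math import sqrt

# Toy RAG retrieval: represent query and chunks as word-count vectors and
# return the chunk with the highest cosine similarity. A learned embedding
# model would replace Counter() in a production pipeline.

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks):
    qv = Counter(query.lower().split())
    return max(chunks, key=lambda c: cosine(qv, Counter(c.lower().split())))

chunks = [
    "The Companies Act regulates incorporation of companies.",
    "The Road Traffic Act governs speed limits and licensing.",
]
print(retrieve("what are the speed limits", chunks))
# -> "The Road Traffic Act governs speed limits and licensing."
```

The retrieved chunk is then inserted into the LLM prompt so the generated answer is grounded in the actual legal text, which is what lets the system display sources for cross-validation.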

Paper 13: Model for Training and Predicting the Occurrence of Potato Late Blight Based on an Analysis of Future Weather Conditions

Abstract: Plant diseases pose a significant challenge to agriculture, leading to serious economic losses and a risk to food security. Predicting and managing diseases such as potato blight requires an analysis of key environmental factors, including temperature, dew point, and humidity, that influence the development of pathogens. The current study uses machine learning to integrate this data for the purpose of early detection of diseases. The use of local weather data from sensors, combined with forecast data from public weather API servers, is a prerequisite for accurate short-term forecasting of adverse events. The results highlight the potential of predictive models to optimize prevention strategies, reduce losses and support sustainable crop management. Machine learning provides powerful tools for analyzing and predicting data related to plant diseases. Combining different approaches allows the creation of more precise and adaptive models for disease management.

Author 1: Daniel Damyanov
Author 2: Ivaylo Donchev

Keywords: Machine learning; potato late blight; data analysis; forecast; prediction models

PDF

Paper 14: Lung Parenchyma Segmentation Using Mask R-CNN in COVID-19 Chest CT Scans

Abstract: During the COVID-19 pandemic, the precise evaluation of lung impairments using computed tomography (CT) scans became critical for understanding and managing the disease; however, specialists faced a high workload and the urgent need to deliver fast and accurate results. To address this, deep learning models offered a promising solution by automating lung identification and lesion localization associated with COVID-19. This study employs the semantic segmentation technique Mask R-CNN, integrated with a ResNet-50 backbone, to analyze CT scans of COVID-19 patients. The model was trained using an annotated dataset, enhancing its ability to accurately segment and delineate the lung parenchyma in CT images. The results showed that Mask R-CNN achieved a Dice Similarity Coefficient (DSC) of 93.4%, demonstrating high concordance between the segmented areas and clinically relevant regions. These findings highlight the effectiveness of the proposed approach for precise lung tissue segmentation in CT scans, enabling quantitative assessments of lung impairments and providing valuable insights for diagnosis and patient monitoring.

Author 1: Wilmer Alberto Pacheco Llacho
Author 2: Eveling Castro-Gutierrez
Author 3: Luis David Huallpa Tapia

Keywords: Mask R-CNN; ResNet-50; computed tomography; lung parenchyma; COVID-19

PDF
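The Dice Similarity Coefficient (DSC) reported above (93.4%) is a standard overlap metric for segmentation masks, 2|A∩B| / (|A| + |B|). A minimal sketch over flattened binary masks:

```python
# Dice Similarity Coefficient over binary masks: 2 * intersection / (sum of
# mask sizes). 1.0 means perfect overlap, 0.0 means no overlap at all.

def dice(mask_a, mask_b):
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0  # two empty masks agree

pred  = [1, 1, 0, 1, 0, 0]  # model's segmented pixels (flattened)
truth = [1, 1, 1, 0, 0, 0]  # annotated ground-truth pixels
print(dice(pred, truth))    # 0.6666666666666666
```

In practice the masks are 2D (or 3D) arrays flattened per CT slice, and the study's 93.4% is the DSC between Mask R-CNN's lung parenchyma masks and the clinical annotations.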

Paper 15: Impact of the TikTok Algorithm on the Effectiveness of Marketing Strategies: A Study of Consumer Behavior and Content Preferences

Abstract: TikTok has become one of the most widely used platforms; its innovative video format has allowed companies and users to increase their visibility, transforming the way brands communicate their strategies. This systematic literature review (SLR) explored how the TikTok algorithm influences marketing strategies during the period 2021 to 2024. For this purpose, research was conducted based on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method. Also, reliable and relevant research databases were consulted, specifically Springer, Science Direct and EBSCO, from which 64 studies aligned with the inclusion and exclusion criteria were extracted, all corresponding to academic articles. After compilation, it was determined that 2024 was the year with the highest number of publications, representing 50% of the total number of articles. Likewise, the country that stood out was China with 28.13% of the related documents. Regarding the research approach, quantitative research predominated, followed by qualitative and mixed research. Finally, the study helped to understand the positive impact of TikTok on marketing, showing how it improves the visibility of brands, as well as identifying trends in consumer preferences, which allows the creation of more accurate strategies that are closer to the public.

Author 1: Raquel Melgarejo-Espinoza
Author 2: Mauricio Gonzales-Cruz
Author 3: Juan Chavez-Perez
Author 4: Orlando Iparraguirre-Villanueva

Keywords: TikTok; algorithm; consumer behavior; marketing

PDF

Paper 16: Data Mart Design to Increase Transactional Flow of Debit and Credit Card in Peruvian Bodegas

Abstract: The objective of this research is to design a Data Mart to identify tactical actions and increase the use of POS (points of sale) in the bodega business sector of Lima, Peru. A quantitative approach, using transaction history data, is applied using the Kimball methodology. This involves the ETL (Extract, Transform, Load) process to create a dimensional model and to develop a dashboard to visualize key indicators using Power BI. This solution is expected to improve the detection and analysis of transactional errors, categorized by geographic location and business sector while enhancing decision-making processes. This research improves the transactional flow and digital payment adoption in small businesses, fostering greater financial inclusion in the Peruvian market. Therefore, the methodology and tools to be applied in this research offer a framework as a model for similar contexts, especially in emerging markets, which will allow closing gaps in digital payment adoption and financial inclusion.

Author 1: Juan Carlos Morales-Arevalo
Author 2: Erick Manuel Aquise-Gonzales
Author 3: William Yohani Carpio-Ore
Author 4: Emmanuel Victor Mendoza Sáenz
Author 5: Carlos Javier Mazzarri-Rodriguez
Author 6: Erick Enrique Remotti-Becerra
Author 7: Edison Humberto Medina-La Plata
Author 8: Luis F. Luque-Vega

Keywords: Business intelligence; Extract, Transform, Load (ETL); dashboard; data mart; Point of Sale (POS)

PDF
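The ETL step of the Kimball-style pipeline described above can be sketched as extract, transform, load over POS transactions: drop failed records, then aggregate into a small fact table keyed by district. The field names and sample values here are our own illustration, not the paper's schema.

```python
from collections import defaultdict

# Minimal ETL sketch (hypothetical schema): extract raw POS transactions,
# transform by filtering out failed ones, and load an aggregate fact table
# keyed by district -- the kind of table a Power BI dashboard would read.

raw = [  # extract: raw transaction log
    {"district": "Lima Centro", "amount": 25.0, "status": "ok"},
    {"district": "Lima Centro", "amount": 10.0, "status": "error"},
    {"district": "Callao", "amount": 40.0, "status": "ok"},
    {"district": "Lima Centro", "amount": 15.0, "status": "ok"},
]

fact = defaultdict(lambda: {"count": 0, "total": 0.0})
for tx in raw:
    if tx["status"] != "ok":   # transform: drop failed transactions
        continue
    row = fact[tx["district"]] # load: accumulate into the fact table
    row["count"] += 1
    row["total"] += tx["amount"]

print(dict(fact))
# {'Lima Centro': {'count': 2, 'total': 40.0}, 'Callao': {'count': 1, 'total': 40.0}}
```

In the dimensional model, "district" would be a key into a location dimension table and the error records would feed a separate error-analysis fact, per the Kimball methodology.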

Paper 17: Evaluation of Convolutional Neural Network Architectures for Detecting Drowsiness in Drivers

Abstract: Drowsiness in drivers is a condition that can manifest itself at any time, representing a constant challenge for road safety, especially in a context where artificial intelligence technologies are increasingly present in driver assistance systems. This paper presents a comparative evaluation of convolutional neural network (CNN) architectures for drowsiness detection, focusing on the identification of signals such as eye state and yawning. The research was applied and descriptive in scope, comparing the performance of LeNet, DenseNet121, InceptionV3 and MobileNet under challenging conditions, such as lighting and motion variations. A non-experimental design was used, with two datasets: a public dataset from Kaggle that included images classified into two categories (yawn and no yawn) and another created specifically for this study, which included images classified into three main categories (eyes open, eyes closed and undetected). The results indicated that, although all architectures performed well in controlled conditions, MobileNet stood out as the most accurate and consistent in challenging scenarios. DenseNet121 also showed good performance, while LeNet was effective in eye-state detection. This study provided a comprehensive assessment of the capabilities and limitations of CNNs for applications in drowsiness monitoring systems, and suggested future directions for improving accuracy in more challenging environments.

Author 1: Mario Aquino Cruz
Author 2: Bryan Hurtado Delgado
Author 3: Marycielo Xiomara Oscco Guillen

Keywords: Architectures; detection; drowsiness; neural networks

PDF

Paper 18: Integrating Deep Learning in Art and Design: Computational Techniques for Enhancing Creative Expression

Abstract: The integration of deep learning and art design is an innovative process that has the potential to reframe how human imagination is expressed. This paper explores a broad field, showcasing how AI, and deep learning in particular, enhances artistic practice. The study comprises an exhaustive analysis of cutting-edge models, including generative adversarial networks (GANs), neural style transfer, and multimodal AI, that assist in the creation, modification, and optimization of artistic work. The research points to implementations in the visual arts, graphic design, and interactive media, providing contemporary examples where deep learning has extended traditional media and created new forms of art. The paper also addresses the challenges and ethical considerations concerning algorithmic art, including issues of authorship, bias, and intellectual property. By integrating computational methods into the realm of artistic expression, the paper provides insights into the changes that deep learning can effect for artists, designers, and technologists.

Author 1: Yanjie Deng
Author 2: Qibing Zhai

Keywords: Deep learning; art; design; creative expression; computational techniques

PDF

Paper 19: Scallop Segmentation Using Aquatic Images with Deep Learning Applied to Aquaculture

Abstract: This study evaluates the performance of deep learning-based segmentation models applied to underwater images for scallop aquaculture in Sechura Bay, Peru. Four models were analyzed: SUIM-Net, YOLOv8, DETECTRON2, and CenterMask2. These models were trained and tested using two custom datasets: SEG SDS GOPRO and SEG SDS SF, which represent diverse underwater scenarios, including clear and turbid waters, varying current intensities, and sandy substrates. The primary aim was to automate scallop identification and segmentation to improve the efficiency and safety of aquaculture monitoring. The evaluation showed that SUIM-Net achieved the highest accuracy of 93% and 94% on the SEG SDS GOPRO and SEG SDS SF datasets, respectively. CenterMask2 performed best on the SEG SDS SF dataset, with an accuracy of 96.5%. Additionally, a combined dataset was used, where YOLOv8 achieved an accuracy of 88%, demonstrating its robustness across varied conditions. Beyond scallop segmentation, the models were extended to detect six additional marine classes, achieving a maximum accuracy of 39.90% with YOLOv8. This research underscores the potential of deep learning techniques to revolutionize aquaculture by reducing operational risks, minimizing costs, and enhancing monitoring accuracy. The findings contribute valuable insights into the challenges and opportunities of applying artificial intelligence in underwater environments.

Author 1: Wilder Nina
Author 2: Nadia L. Quispe
Author 3: Liz S. Bernedo-Flores
Author 4: Marx S. Garcia
Author 5: Cesar Valdivia
Author 6: Eber Huanca

Keywords: Image segmentation; object detection; deep learning; computer vision; aquaculture; scallop segmentation; aquatic images

PDF

Paper 20: Performance Optimization with Span<T> and Memory<T> in C# When Handling HTTP Requests: Real-World Examples and Approaches

Abstract: Optimizing application performance is a critical aspect of software development, especially for high-throughput operations such as handling HTTP requests. In modern C#, the Span&lt;T&gt; and Memory&lt;T&gt; structures provide powerful tools for working with memory more efficiently, reducing heap allocations, and improving overall performance. This paper explores the practical applications of Span&lt;T&gt; and Memory&lt;T&gt; in the context of optimizing HTTP request processing. Real-world examples and approaches are presented that demonstrate how these types can minimize memory fragmentation, avoid unnecessary data copying, and enable high-performance parsing and transformation of HTTP request data. By leveraging these advanced memory structures, developers can significantly enhance the throughput and responsiveness of their applications, particularly in resource-constrained environments or systems handling many concurrent requests. This paper aims to provide developers with actionable insights and strategies for integrating these techniques into their .NET applications for improved performance.

Author 1: Daniel Damyanov
Author 2: Ivaylo Donchev

Keywords: NET; C#; optimization; memory optimization; span; memory; development; HTTP requests; data structures

PDF

Paper 21: A Systematic Review of the Benefits and Challenges of Data Analytics in Organizational Decision Making

Abstract: Organizations have come to rely heavily on data analytics in decision-making, which enables accurate, timely, and data-driven processes across a wide range of industries. These factors grow more influential as the volume and complexity of data rise, along with problems such as authentication, integration, and organizational resistance. The current study systematically reviews the benefits and challenges of data analytics for decision-making in different sectors using the PRISMA guidelines. A total of 32 articles published from 2020 to 2024 were identified from reputable databases, including Scopus, Web of Science, IEEE Xplore, ProQuest, and Emerald Insight. The resulting insights underscore the power of data analytics in driving change, enabling decision-making that is more accurate, faster, and better aligned with organizational objectives. Challenges remain, however, including fragmented data systems, the lack of standardized norms across sectors, and resistance where data literacy is low or cultures resist data-driven practices. To mitigate these challenges, the review offers organizations practical recommendations for management. Companies that successfully incorporate analytics into their overall business strategies and create organization-wide value from data and insights will be able to leverage analytics more effectively to enhance efficiency, encourage innovative growth, and navigate future disruptions. Tackling these challenges is more than just optimizing performance; it is about future-proofing organizations in a world increasingly defined by data.

Author 1: Juan Carlos Morales-Arevalo
Author 2: Ciro Rodríguez

Keywords: Data analytics; decision-making; data-driven processes; big data analytics; systematic review

PDF

Paper 22: DyGAN: Generative Adversarial Network for Reproducing Handwriting Affected by Dyspraxia

Abstract: Dyspraxia primarily affects coordination and is categorized into two forms: 1) motor, and 2) verbal or oral. This study focuses on motor dyspraxia, which affects how individuals learn movement-related tasks. The DyGAN initiative employs deep convolutional generative adversarial networks, using deep learning to create characters resembling human handwriting. The methodology is structured into two main stages: 1) the creation of a first-order cybernetic model, and 2) the execution phase. Using four independent variables and three dependent variables, eight outcomes were analyzed through analysis of variance. DyGAN consists of twin deep convolutional neural networks and is highly sensitive to the learning rate. It scored 67% on the proposal, suggesting that generated characters can pass as written by a human. The project will feature writers from different backgrounds and will help augment data for writing resources for dyspraxia, potentially benefiting those struggling with writing difficulties and improving our understanding of education. The model is designed to be widely applicable; future work could customize it to mimic the way a specific child writes, for example with neural networks.

Author 1: Jesús Jaime Moreno Escobar
Author 2: Hugo Quintana Espinosa
Author 3: Erika Yolanda Aguilar del Villar

Keywords: Children with neurodevelopmental disorders; dyspraxia; generative adversarial network; deep learning; deep convolutional neural network; human handwriting

PDF

Paper 23: Enhanced Cyber Threat Detection System Leveraging Machine Learning Using Data Augmentation

Abstract: In the modern era of cyber security, cyber-attacks are continuously evolving in complexity and frequency. In this context, organizations need to enhance Network Intrusion Detection Systems (NIDS) for anomaly detection. Although existing machine learning models are in place, new challenges emerge rapidly that affect their performance and efficiency, specifically the unavailability of large datasets and unorganized data. This results in degraded efficiency in identifying complex attacks. In this paper, data augmentation is applied to NSL-KDD, a standard dataset for Intrusion Detection Systems (IDS), specifically for IoT-based devices. The performance and efficiency of the NIDS are improved by training a K-Nearest Neighbor (KNN) ML model on the augmented dataset.

Author 1: Umar Iftikhar
Author 2: Syed Abbas Ali

Keywords: Anomaly detection; cyber threat intelligence; generative adversarial networks; data augmentation; Wasserstein GAN with gradient penalty

PDF
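The classification step named in the abstract above — a K-Nearest Neighbor model voting over labeled network records — can be illustrated with a minimal, self-contained sketch. The feature vectors and labels below are toy stand-ins for preprocessed NSL-KDD records, not the paper's actual data or augmentation pipeline:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points (Euclidean)."""
    nearest = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy stand-ins for scaled NSL-KDD feature vectors (values are illustrative only).
train_X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
train_y = ["normal", "normal", "attack", "attack"]

print(knn_predict(train_X, train_y, [0.85, 0.85]))  # two of the three nearest neighbours vote "attack"
```

Data augmentation in this setting simply enlarges `train_X`/`train_y` with synthetic records before training, which is where the paper's GAN-based generation would plug in.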

Paper 24: Data Analytics for Product Segmentation and Demand Forecasting of a Local Retail Store Using Python

Abstract: In today's competitive business environment, understanding customers' expectations and choices is a necessity for the successful operation of a retail store. Forecasting demand also plays an important role in maintaining inventory at an optimum level. This work utilises data analytics for product segmentation and demand forecasting in a local retail store, with Python as the programming language. Historical sales data of a local store have been used to categorise products into different segments. Statistical techniques and a k-means clustering algorithm have been used to understand the different product segments. Machine learning algorithms and time series models have been used to forecast future sales trends. The resulting business insights allow the retail store to meet customers' expectations, manage inventory at an optimum level and enhance supply chain efficiency. The present work seeks to illustrate how data-driven tactics can enhance operational decision-making in retail.

Author 1: Arun Kumar Mishra
Author 2: Megha Sinha

Keywords: Data analytics; product segmentation; demand forecasting; multicriteria ABC classification; seasonality

PDF
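The k-means segmentation step mentioned above can be sketched in plain Python. The product features below (scaled sales volume and unit price) are hypothetical, and the deterministic centroid seeding is a simplification for illustration; production code would typically use a library implementation with random restarts:

```python
def kmeans(points, k, iters=10):
    """Plain k-means: returns final centroids and each point's cluster index."""
    # For a deterministic sketch, seed centroids with evenly spaced points.
    step = len(points) // k
    centroids = [list(points[i * step]) for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [
            min(range(k), key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centroids[c])))
            for pt in points
        ]
        # Update step: each centroid becomes the mean of its assigned points.
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

# Hypothetical scaled product features: (sales volume, unit price).
products = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15), (0.1, 0.9), (0.2, 0.8), (0.15, 0.85)]
centroids, labels = kmeans(products, k=2)
print(labels)  # fast movers and slow movers land in separate clusters
```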

Paper 25: YOLOv7-b: An Enhanced Object Detection Model for Multi-Scale and Dense Target Recognition in Remote Sensing Images

Abstract: To address the challenges of dense object distribution, scale variability, and complex shapes in remote sensing images, this paper proposes an improved YOLOv7-b model to enhance multi-scale target detection accuracy and robustness. First, deformable convolution (DCNv2) is introduced into the YOLOv7 backbone to replace the standard convolutions in the last two ELAN modules, thereby providing more flexible sampling capabilities and improving adaptability to irregularly shaped targets. Next, a Bi-level Routing Attention (BRA) module is integrated after the SPPCSPC module, employing both coarse- and fine-grained routing strategies to focus on densely distributed targets while suppressing irrelevant background. Finally, training and evaluation are conducted on the large-scale DIOR remote sensing dataset under unified hyperparameter settings and evaluation metrics, allowing a systematic assessment of the overall model performance. Experimental results show that, compared with the original YOLOv7, the improved YOLOv7-b achieves significant enhancements in Precision, Recall, mAP@0.5, and mAP@0.5:0.95, with mAP@0.5 and mAP@0.5:0.95 reaching 85.72% and 66.55%, respectively. Visualization further demonstrates that YOLOv7-b provides stronger recognition and localization for densely arranged, small-scale, and morphologically complex targets, effectively reducing missed and false detections. Overall, YOLOv7-b delivers higher detection accuracy and robustness in multi-scale remote sensing target detection. By combining deformable convolution with a dynamic sparse attention mechanism, the model excels in detecting highly deformable objects and dense scenes, offering a more adaptive and accurate solution for small-target detection, dense target recognition, and multi-scale detection in remote sensing imagery.

Author 1: Yulong Song
Author 2: Hao Yang
Author 3: Lijun Huang
Author 4: Song Huang

Keywords: YOLOv7-b; remote sensing images; object detection; deformable convolution; bi-level routing attention; multi-scale

PDF

Paper 26: Long Short-Term Memory-Based Bandwidth Prediction for Adaptive High Efficiency Video Coding Transmission Enhancing Quality of Service Through Intelligent Optimization

Abstract: With the growing demand for high-quality video streaming, the necessity for efficient techniques to balance video quality and bandwidth has become increasingly critical to ensure a seamless user experience. Existing traditional adaptive streaming methods only react to network fluctuations, which often leads to delays, quality degradation, and buffering. This paper introduces an AI-powered approach for adaptive High Efficiency Video Coding (HEVC) transmission, using a predictive model based on Long Short-Term Memory (LSTM) networks to predict bandwidth variations and proactively adjust encoding parameters. The proposed approach uses historical and real-time network data to anticipate network changes, offering smoother transitions and reducing buffering. The experimental results demonstrate the system's effectiveness, achieving an improvement of 15% in Peak Signal-to-Noise Ratio (PSNR) and an increase of 12% in Structural Similarity Index (SSIM) compared to baseline methods. Additionally, the system reduces buffering events by 25% while improving bitrate stability by 20%, guaranteeing consistent video quality with minimal interruptions. This proactive approach significantly enhances Quality of Service (QoS) by providing stable video quality and uninterrupted streaming, representing a significant advancement in adaptive streaming technologies.

Author 1: Hajar Hardi
Author 2: Imade Fahd Eddine Fatani

Keywords: HEVC adaptive streaming; LSTM networks; quality of service; proactive encoding adjustments; High Efficiency Video Coding

PDF

Paper 27: Detection of Stopwords in Classical Chinese Poetry

Abstract: In this research, we address the problem of stopword detection in Classical Chinese Poetry, an area that has not been explored previously. Stopword detection is crucial in text mining tasks, as identifying and removing stopwords is essential for improving the performance of various natural language processing models. Inspired by the TF-IDF method, we propose a novel approach that utilizes external knowledge to reconstruct the Term Weight matrix. Our key finding is that incorporating external knowledge significantly refines the granularity of the term weight, thereby improving the effectiveness of stopword detection. Based on these findings, we conclude that external knowledge can enhance the ability of text representation, especially for the short texts in Classical Chinese Poetry.

Author 1: Lei Peng
Author 2: Xiaodong Ma
Author 3: Zheng Teng

Keywords: TF-IDF; stopwords; Chinese; poetry; frequency

PDF
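The TF-IDF intuition behind the stopword detection described above can be sketched briefly: a term that appears in nearly every document has a low inverse document frequency and is therefore stopword-like. The toy corpus below stands in for tokenized poem lines, and the scoring is only the baseline intuition; the paper's contribution of reweighting the term matrix with external knowledge is not reproduced here:

```python
import math
from collections import Counter

def stopword_scores(docs):
    """Score each term by how evenly it spreads across documents:
    low IDF (high document frequency) -> high stopword score."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return {term: 1.0 / (1.0 + math.log(n / count)) for term, count in df.items()}

# Toy corpus standing in for tokenized Classical Chinese poem lines.
poems = [["之", "山", "水"], ["之", "月", "風"], ["之", "花", "山"]]
scores = stopword_scores(poems)
print(max(scores, key=scores.get))  # "之" appears in every poem, so it ranks most stopword-like
```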

Paper 28: IoT CCTV Video Security Optimization Using Selective Encryption and Compression

Abstract: Data security and privacy are critical concerns when integrating Closed-Circuit Television (CCTV) cameras with the Internet of Things (IoT). To enhance security, IoT data must be encrypted before transmission and storage. However, to minimize overheads related to storage space, computational time, and transmission energy, data can be compressed prior to encryption. H.264/AVC (Advanced Video Coding) offers a balanced solution for video compression by addressing processing demands, video quality, and compression efficiency. Encryption is vital for safeguarding data security, yet the integrity of IoT data may sometimes be compromised. Ineffective data selection can lead to inefficiencies and potential security risks, highlighting the importance of addressing CCTV video data security carefully. This study proposes an algorithm that integrates compression with selective encryption techniques to reduce computational overhead while ensuring access to critical information for real-time analysis. By employing frame intervals, the algorithm enhances efficiency without compromising security. The execution details and merits of the proposed approach are analyzed, demonstrating its effectiveness in safeguarding the privacy and integrity of IoT CCTV video data. Results reveal superior performance in terms of compression efficiency and encryption/decryption times, with an average encryption time of 0.00171 seconds for a 128-bit key, enabling fast processing suitable for real-time applications. The decryption time matches the encryption time, confirming the method’s viability for practical IoT CCTV implementations. Metrics such as correlation coefficient, bitrate overhead, and histogram analysis further validate the approach’s robustness against statistical attacks.

Author 1: Kawalpreet Kaur
Author 2: Amanpreet Kaur
Author 3: Yonis Gulzar
Author 4: Vidhyotma Gandhi
Author 5: Mohammad Shuaib Mir
Author 6: Arjumand Bano Soomro

Keywords: Closed-Circuit Television (CCTV); decryption; encryption; internet of things (IoT); security

PDF
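The frame-interval idea in the abstract above — encrypting only selected frames to cut computational cost — can be sketched as follows. The keystream here is SHA-256 run in counter mode as a stand-in stream cipher, purely for illustration; the paper's actual 128-bit-key cipher is not specified here, and the frame bytes are dummies:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """SHA-256 in counter mode as a stand-in stream cipher (illustration only)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def selective_encrypt(frames, key, interval=4):
    """XOR-encrypt every `interval`-th frame; pass the rest through unchanged."""
    result = []
    for i, frame in enumerate(frames):
        if i % interval == 0:  # e.g. treat these as the key frames worth protecting
            ks = keystream(key, i.to_bytes(8, "big"), len(frame))
            result.append(bytes(a ^ b for a, b in zip(frame, ks)))
        else:
            result.append(frame)
    return result

frames = [b"frame-%d" % i for i in range(8)]
enc = selective_encrypt(frames, key=b"sixteen-byte-key", interval=4)
assert enc[1] == frames[1] and enc[0] != frames[0]  # only frames 0 and 4 are transformed
# XOR is symmetric, so applying the same function again decrypts.
assert selective_encrypt(enc, key=b"sixteen-byte-key", interval=4) == frames
```

Because most frames skip the cipher entirely, the per-frame cost drops roughly in proportion to the interval, which is the trade-off the paper tunes against security.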

Paper 29: Integrating Artificial Intelligence to Automate Pattern Making for Personalized Garment Design

Abstract: This paper introduces an innovative AI-assisted pattern construction tool that leverages machine learning models to revolutionize pattern generation in garment design. The proposed system automatically generates patterns from 3D body scans, which are converted into 3D shell meshes and subsequently flattened into 2D patterns using advanced data augmentation techniques and CAD flattening algorithms. This approach eliminates the need for expertise in traditional pattern-making, enabling seamless transformation of 3D models into realistic garment patterns. The tool accommodates various garment styles, including fitted, standard fit, and relaxed fit, while also enabling high levels of personalization by adapting patterns to individual body dimensions. Through its AI-driven automation and user-friendly interface, this plug-in enhances accessibility, allowing individuals without conventional design skills to create customized apparel efficiently.

Author 1: Muyan Han

Keywords: Machine learning models; pattern generation; AI-assisted pattern construction; data augmentation techniques; CAD flattening

PDF

Paper 30: Enhancing Recurrent Neural Network Efficacy in Online Sales Predictions with Exploratory Data Analysis

Abstract: Online sales forecasting has become an essential aspect of effective business planning in the digital era. The widespread adoption of digital transformation has enabled companies to collect substantial datasets related to consumer behavior, market trends, and sales drivers. This study attempts to uncover patterns and predict sales growth by utilizing product images and their associated filenames as input. To achieve this, we combine Exploratory Data Analysis (EDA) with Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, which excel at processing sequential data. However, the performance of these networks is significantly affected by data quality and the preprocessing methods applied. This study highlights the importance of EDA and ensemble methods in enhancing the efficacy of RNNs for online sales forecasting. EDA plays a crucial role in identifying significant patterns such as trends, seasonality, and autocorrelation while addressing data irregularities such as missing values and outliers. The findings show that integrating EDA substantially improves the performance metrics of the RNN, as indicated by the reduction in loss and mean absolute error (MAE) values across training epochs (e.g. loss: 0.0720, MAE: 0.1918 at epoch 15). These results indicate that EDA improves the accuracy, stability, and efficiency of the model, allowing the RNN to provide more reliable sales predictions while minimizing the risk of overfitting.

Author 1: Erni Widiastuti
Author 2: Jani Kusanti
Author 3: Herwin Sulistyowati

Keywords: Exploratory data analysis; recurrent neural networks; online sales prediction; sequential data; trend patterns

PDF

Paper 31: A Rapid Drift Modeling Method Based on Portable LiDAR Scanner

Abstract: Traditional measurement methods in underground mining tunnels have faced inefficiencies, limited accuracy, and operational challenges, consuming significant time and labor in complex environments. These limitations severely restrict the efficiency and quality of mine management and engineering design. To enhance the efficiency and accuracy of 3D modeling in underground tunnels, this study combines portable 3D LiDAR scanning technology with simultaneous localization and mapping. This integration enables autonomous positioning and efficient modeling without external positioning signals. The proposed approach effectively acquires high-resolution 3D data in complex environments, ensuring data accuracy and model reliability. High-resolution scanning of multiple critical areas was conducted on-site, with inertial navigation systems correcting the device's pose information. Automated data processing software was used for filtering, denoising, and modeling the collected data, leading to precise 3D tunnel models. Validation results indicate that portable laser scanning technology offers significant advantages in efficiency, accuracy, and safety, meeting the geological surveying and engineering needs of mining operations. The application of portable 3D laser scanning technology demonstrates considerable benefits in the rapid modeling of underground tunnels, providing effective technical support to improve mine management efficiency and safety. It also reveals broad application prospects.

Author 1: Zhao Huijun
Author 2: Liu Chao
Author 3: Qi Yunpu
Author 4: Song Zhanglun
Author 5: Xia Xu

Keywords: Underground mining; 3D modeling; portable 3D laser scanning; simultaneous localization and mapping (SLAM); mine surveying; inertial measurement unit (IMU)

PDF

Paper 32: Dialogue-Based Disease Diagnosis Using Hierarchical Reinforcement Learning with Multi-Expert Feedback

Abstract: To reduce the stochasticity of agents used for disease diagnosis in dialogue systems, to enable them to interact with users based on the inherent connections between symptoms and diseases, and to address the issue of limited medical data, we propose the Hierarchical Reinforcement Learning with Multi-expert Feedback framework. The framework constructs a reward model in the lower-level networks of the hierarchical structure. Here, a discriminator leveraging the concept of adversarial networks generates rewards by evaluating the authenticity of the symptom query sequences generated by the agent, and a large language model standing in for human experts synthesizes various factors to assess the reasonableness of the agent's current symptom queries, thereby guiding the learning of the policy network. The algorithm addresses deficiencies in data characteristics and improves the policy's capability to leverage feature information, making the process of disease diagnosis more closely aligned with clinical practice. Experimental results demonstrate that the proposed framework achieves diagnostic success rates of 61.5% on synthetic datasets and 84.4% on real-world datasets, while requiring fewer dialogue turns on average. Both metrics surpass those of conventional approaches, further indicating the framework's strong generalization ability.

Author 1: Shi Li
Author 2: Xueyao Sun

Keywords: Disease diagnosis; dialogue system; large language model; reinforcement learning; reward model; adversarial network; dialogue agent

PDF

Paper 33: BlockMed: AI Driven HL7-FHIR Translation with Blockchain-Based Security

Abstract: Blockchain is a peer-to-peer (P2P) network that distributes information and protects data integrity, security, and privacy. Information exchange between health systems requires constant simplification. This comprehensive assessment examines the integration of Electronic Health Records (EHRs) with blockchain technology. EHRs are represented with different standards, mainly HL7 and FHIR, and must be interpretable by both parties after exchange, which raises interoperability challenges. To overcome EHR interoperability difficulties, 18 blockchain-based alternatives were examined. Despite their promise, these systems have a variety of drawbacks, including reliability, privacy, data integrity, and collaborative sharing. The systematic review comprises six phases: research, investigation, article curation, keyword abstraction, data distillation, and project trajectory monitoring. In total, 18 seminal articles on EHR interoperability and blockchain integration were identified. These contributions propose many unique interoperability methods for blockchain-integrated EHR systems. Several blockchain applications, standards, and issues associated with EHR interoperability are described and analyzed. Numerous blockchain-based EHR frameworks have been implemented or proposed; their security aspects have been covered, but standards compliance and interoperability requirements are lacking, and research in this area is needed. This study analyzes the different national and international EHR standards and describes the current state of EHRs, including blockchain-based implementations, along with the interoperability issues between existing blockchain-based EHR frameworks. The research proposes the novel BlockMed framework, which provides interoperability between the HL7 and FHIR EHR standards. The BlockMed framework is evaluated on data accuracy, mapping quality, response time, latency, interoperability coverage, AI model efficiency, consent and security management, cross-chain support, and patient and provider satisfaction.

Author 1: Yonis Gulzar
Author 2: Faheem Ahmad Reegu
Author 3: Abdoh Jabbari
Author 4: Rahul Ganpatrao Sonkamble
Author 5: Mohammad Shuaib Mir
Author 6: Arjumand Bano Soomro

Keywords: Blockchain; health care; electronic health records (EHRs); interoperability; healthcare system

PDF

Paper 34: Improving Air Quality Prediction Models for Banting: A Performance Evaluation of Lasso, mRMR, and ReliefF

Abstract: This study explores the effectiveness of various feature selection methods in forecasting next-day PM2.5 levels in Banting, Malaysia. The accurate prediction of PM2.5 concentrations is crucial for public health, enabling authorities to take timely actions to mitigate exposure to harmful pollutants. This study compares three feature selection methods: Lasso, mRMR, and ReliefF, using a dataset of 43,824 data points collected from the Banting air quality monitoring station (CA22B). The dataset comprises ten variables: pollutant concentrations such as O3, CO, NO2, SO2, PM10, and PM2.5, along with meteorological parameters such as temperature, humidity, wind direction and wind speed. The results revealed that Lasso outperformed both mRMR and ReliefF across various performance metrics, including accuracy, sensitivity, precision, F1 score, and AUROC. Lasso demonstrated a superior ability to handle multicollinearity, significantly improving the interpretability of the model by retaining only the most important variables. This suggests that the effectiveness of feature selection methods is highly dependent on the characteristics of the dataset, such as correlations among features. The top eight features for predicting PM2.5 levels in Banting selected by the Lasso method are relative humidity, PM2.5, wind direction, ambient temperature, PM10, NO2, wind speed, and O3. The findings from this study contribute to the growing body of knowledge on air quality prediction models, highlighting the importance of selecting the appropriate feature selection method to achieve the best model performance. Future research should explore the application of the Lasso method in other geographical regions, including urban, suburban and rural areas, to assess the generalizability of the results.

Author 1: Siti Khadijah Arafin
Author 2: Suvodeep Mazumdar
Author 3: Nurain Ibrahim

Keywords: PM2.5 concentration; feature selection; Lasso; mRMR; RBFNN; ReliefF

PDF

Paper 35: Lightweight CA-YOLOv7-Based Badminton Stroke Recognition: A Real-Time and Accurate Behavior Analysis Method

Abstract: With the rapid development of sports technology, accurate and real-time recognition of badminton stroke postures has become essential for athlete training and match analysis. This study presents an improved YOLOv7-based method for badminton stroke posture recognition, addressing limitations in accuracy, real-time performance, and automation. To optimize the model, pruning techniques were applied to the backbone structure, significantly enhancing processing speed for real-time demands. A parameter-free attention module was integrated to improve feature extraction without increasing model complexity. Furthermore, key stroke action nodes were defined, and a joint point matching module was introduced to enhance recognition accuracy. Experimental results show that the improved model achieved a mAP@0.5 of 0.955 and a processing speed of 44 frames per second, demonstrating its capability to deliver precise and efficient badminton stroke recognition. This research provides valuable technical support for coaches and athletes, enabling better analysis and optimization of stroke techniques.

Author 1: Yuchuan Lin

Keywords: Badminton shot; pose recognition; YOLO V7; size adaptive input; model pruning; attention mechanism

PDF

Paper 36: Fuzzy Evaluation of Teaching Quality in "Smart Classroom" with Application of Entropy Weight Coupled TOPSIS

Abstract: This research investigates a scientific methodology for assessing the teaching quality of smart classrooms and develops a multi-dimensional evaluation system combining the entropy weight method and the TOPSIS approach. To comprehensively assess the pedagogical proficiency of educators, the paper selects the dimensions of teaching preparation, teaching process, teaching effect and teaching reflection, and combines questionnaire surveys and statistical data for data collection and analysis. The methodology first standardizes the raw data to mitigate discrepancies among the various scales; the entropy weight method is then employed to ascertain the weight of each evaluation index, indicating the significance of the indices through information entropy; finally, the TOPSIS method is used to evaluate teachers' performance across each dimension and rank them by their proximity to the positive and negative ideal solutions, culminating in a comprehensive assessment of teaching quality. The results show that the entropy weight method can effectively determine the weight of each index, and the TOPSIS method provides a clear ranking of teaching quality by calculating the distance from the ideal solution, helping to identify strengths and weaknesses in teaching. The paper concludes that an evaluation method combining the entropy weight and TOPSIS methods can provide an objective and comprehensive assessment of teaching quality for the smart classroom, though limitations remain, such as the small sample size and inadequate coverage of some teaching dimensions. Future research can improve the evaluation system by expanding the sample size and adding evaluation dimensions to enhance its applicability and accuracy, providing stronger support for the continuous optimization of the smart classroom.

Author 1: Yajuan SONG

Keywords: Smart classroom; entropy weight method; TOPSIS method; teaching quality; optimization and improvement

PDF
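The entropy-weight-plus-TOPSIS pipeline described above can be sketched end to end in a few lines. The three-teacher, four-dimension score matrix below is hypothetical, and all criteria are treated as benefit criteria for simplicity:

```python
import math

def entropy_weights(X):
    """Entropy weight method: criteria whose values vary more across
    alternatives carry more information and receive larger weights."""
    m, n = len(X), len(X[0])
    d = []
    for j in range(n):
        col = [row[j] for row in X]
        total = sum(col)
        p = [x / total for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        d.append(1 - e)  # degree of divergence
    s = sum(d)
    return [dj / s for dj in d]

def topsis(X, w):
    """Closeness of each alternative to the ideal solution (benefit criteria assumed)."""
    n = len(X[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in X)) for j in range(n)]
    V = [[w[j] * row[j] / norms[j] for j in range(n)] for row in X]
    best = [max(col) for col in zip(*V)]
    worst = [min(col) for col in zip(*V)]
    scores = []
    for row in V:
        d_best, d_worst = math.dist(row, best), math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical scores for three teachers on four dimensions
# (preparation, process, effect, reflection).
X = [[85, 90, 78, 88], [70, 65, 72, 60], [92, 88, 95, 90]]
w = entropy_weights(X)
scores = topsis(X, w)
print(max(range(3), key=lambda i: scores[i]))  # the strongest teacher ranks closest to the ideal
```

The weights come entirely from the dispersion of the data, which is the method's appeal: no evaluator has to assign subjective importance to the dimensions.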

Paper 37: Long-Term Recommendation Model for Online Education Systems: A Deep Reinforcement Learning Approach

Abstract: Intelligent tutoring systems serve as tools capable of providing personalized learning experiences, with their efficacy significantly contingent upon the performance of recommendation models. For long-term instructional plans, these systems necessitate the provision of highly accurate, enduring recommendations. However, numerous existing recommendation models adopt a static perspective, disregarding the sequential decision-making nature of recommendations, rendering them often incapable of adapting to novel contexts. While some recent studies have delved into sequential recommendations, their emphasis predominantly centers on short-term predictions, neglecting the objectives of long-term recommendations. To surmount these challenges, this paper introduces a novel recommendation approach based on deep reinforcement learning. We conceptualize the recommendation process as a Markov Decision Process, employing recurrent neural networks to simulate the interaction between the recommender system and the students. Test results demonstrate that our model not only significantly surpasses traditional Top-N methods in hit rate and NDCG concerning the enhancement of long-term recommendations but also adeptly addresses scenarios involving cold starts. Thus, this model presents a new avenue for enhancing the performance of intelligent tutoring systems.

Author 1: Wei Wang

Keywords: Deep reinforcement learning; long-term recommendation; intelligent tutoring system; Markov Decision Process; recurrent neural network

PDF

Paper 38: Advanced Football Match Winning Probability Prediction: A CNN-BiLSTM_Att Model with Player Compatibility and Dynamic Lineup Analysis

Abstract: In recent years, with the continuous expansion of the football market, the prediction of football match-winning probabilities has become increasingly important, attracting numerous professionals and institutions to engage in the field of football big data analysis. Pre-match data analysis is crucial for predicting match outcomes and formulating tactical strategies, and all top-level football events rely on professional data analysis teams to help teams gain an advantage. To improve the accuracy of football match winning probability predictions, this study has taken a series of measures: using the Word2Vec model to construct feature vectors to parse the compatibility between players; developing a winning probability prediction model based on LSTM to capture the dynamic changes in team lineups; designing an improved BiLSTM_Att winning probability prediction model, which distinguishes the different impacts of players on match outcomes through an attention mechanism; and proposing a CNN-BiLSTM_Att winning probability prediction model that combines the local feature extraction capability of CNN with the time series analysis of BiLSTM. These research efforts provide more refined data support for football coaching teams and analysts. For the general audience, these in-depth analyses can help them understand the tactical layouts and match developments on the field more deeply, thereby enhancing their viewing experience and understanding of the matches.

Author 1: Tao Quan
Author 2: Yingling Luo

Keywords: Football big data; match prediction; feature vector; tactical understanding; match analysis

PDF

Paper 39: Effectiveness of Immersive Contextual English Teaching Based on Fuzzy Evaluation

Abstract: This study investigates the real-world impact of immersive contextual instruction on English language education, verifies its contribution to the enhancement of linguistic skills and the improvement of learning attitudes, and evaluates the practicality and worth of fuzzy evaluation in gauging teaching efficacy. A fuzzy comprehensive assessment model was built using a language competency test and a learning attitude questionnaire, and the teaching effect was quantitatively examined from the experimental data using methods such as membership functions and weight calculation. The findings revealed that students in the experimental group performed much better than those in the control group in terms of language competence and learning attitudes, with an overall fuzzy score of 88.5 compared with 74.8 in the control group. The statistical test indicated a significant difference between the groups (p < 0.001). The study also confirmed the scientific and practical validity of fuzzy evaluation in the assessment of multidimensional educational efficacy. Immersive contextual English teaching provides considerable benefits for improving students' language skills and learning attitudes. The fuzzy assessment method introduces a new instrument for quantitative research on teaching efficacy and has a wide range of potential applications.

Author 1: Mei Niu

Keywords: Fuzzy evaluation; immersion; contextual English teaching; teaching effectiveness; teaching assessment

PDF
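The fuzzy comprehensive evaluation pipeline described in the abstract (membership degrees, indicator weights, composite score) reduces to a small matrix computation. The following is an illustrative sketch only; the indicator names, weights, and membership values are invented for the example and are not the study's data:

```python
# Centroid score associated with each fuzzy grade
# ("excellent", "good", "fair", "poor").
GRADES = [95, 85, 75, 65]

# Membership matrix R: one row per indicator, one column per grade,
# each row summing to 1 (illustrative values).
R = [
    [0.5, 0.3, 0.2, 0.0],          # language competence
    [0.4, 0.4, 0.1, 0.1],          # learning attitude
]
W = [0.6, 0.4]                     # indicator weights (sum to 1)

def fuzzy_score(weights, matrix, grades):
    # B = W . R : weighted membership of each grade across indicators.
    b = [sum(w * row[j] for w, row in zip(weights, matrix))
         for j in range(len(grades))]
    # Defuzzify by the weighted average of grade centroids.
    return sum(bj * g for bj, g in zip(b, grades))
```

With these toy numbers the composite score lands between the "good" and "excellent" centroids, which is how a group-level score such as the study's 88.5 would arise from the grade memberships.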

Paper 40: Multi-Classification Convolution Neural Network Models for Chest Disease Classification

Abstract: Chest diseases significantly affect public health, causing more than one million hospital admissions and approximately 50,000 deaths annually in the United States. Chest X-ray imaging, a critically important diagnostic technique, helps in examining, diagnosing, and managing chest conditions by providing essential insights into the presence and severity of disease. This study introduces a novel chest X-ray classification framework leveraging a fine-tuned VGG19 model (16 layers) enhanced with CLAHE for improved contrast, binary mask attention to highlight abnormalities, and advanced data augmentation for better generalization. Key innovations include the use of a Probabilistic U-Net for lung segmentation to isolate critical features and weighted masks to focus on pathological regions, while class imbalance is addressed with computed class weights for fair learning. Achieving 95% accuracy and superior class-specific metrics, the proposed method outperforms existing deep learning approaches, providing a robust and interpretable solution for real-world healthcare applications; a test accuracy of 94.8% is achieved using different customized VGG19-based models without a mask. The experimental results indicate that the proposed method surpasses current deep learning techniques in overall classification accuracy for chest disease detection.

Author 1: Noha Ayman
Author 2: Mahmoud E. A. Gadallah
Author 3: Mary Monir Saeid

Keywords: Convolution neural network; classification; chest X-ray; image preprocessing; U-Net; deep learning

PDF

Paper 41: Deep Learning-Based Attention Mechanism Algorithm for Blockchain Credit Default Prediction

Abstract: With the rise of internet finance and the increasing demand for personal credit risk management, accurate credit default prediction has become essential for financial institutions. Traditional models face limitations in handling complex and large-scale data, especially in the blockchain domain, which has emerged as a crucial technology for securing and processing financial transactions. This paper aims to improve the accuracy and generalization of blockchain-based credit default prediction models by optimizing deep learning algorithms with the Special Forces Algorithm (SFA) and attention mechanism (AM) networks. The study introduces a hybrid approach combining SFA with AM to optimize the hyperparameters of the credit default prediction model. The model preprocesses blockchain credit data, extracts critical features such as user and loan information, and applies the SFA-AM algorithm to improve classification accuracy. Comparative analysis is conducted against other machine learning algorithms such as XGBoost, LightGBM, and LSTM. Results show that the SFA-AM model outperforms traditional models in key metrics, achieving higher precision (0.8289), recall (0.8075), F1 score (0.8180), and AUC value (0.9407). The model demonstrated better performance in identifying both default and non-default cases compared to other algorithms, with significant improvements in reducing misclassifications. The proposed SFA-AM model significantly enhances blockchain credit default prediction accuracy and generalization. While effective, the study acknowledges limitations in dataset diversity and model interpretability, suggesting future research could expand on these areas for more robust applications across different financial sectors.

Author 1: Wangke Lin
Author 2: Yue Liu

Keywords: Deep learning; attention mechanism; blockchain credit default prediction; special forces algorithm

PDF

Paper 42: Modeling Cloud Computing Adoption and its Impact on the Performance of IT Personnel in the Public Sector

Abstract: This study investigates the factors influencing cloud computing adoption in the public sector, emphasizing the performance of IT personnel. Through qualitative interviews with five IT management professionals in the public sector, we identify key challenges in integrating cloud computing systems. The primary issues include technical complexity, skill and knowledge deficits in data governance, and budget constraints. These insights inform the development of the Cloud Computing Capacity and Integration Model for the Public Sector, which proposes a comprehensive strategy to address these barriers. Our findings identified five key challenges to cloud computing adoption in the public sector. First, compatibility issues and system integration challenges resulting from conflicts between cloud platforms and older infrastructure contributed to operational inefficiency. Second, data migration issues due to incompatible formats and structures resulted in data loss and delays. Third, network constraints, such as limited bandwidth and high latency, hampered cloud service performance. Fourth, a lack of staff training hampered successful cloud integration, emphasizing the importance of focused capacity-building initiatives. Finally, budget constraints limited adoption, underscoring the need for additional financial support. Thus, the "Cloud Computing Acceptance and Performance Model" (CCAPM) presented in this research paper aims to deliver a comprehensive model that tackles a wide array of technical, operational, and human resource challenges to create an effective cloud computing ecosystem, enhance the adoption of cloud computing within the public sector, elevate the capabilities of public sector IT personnel, and develop a secure, resilient, and sustainable cloud computing environment in the public sector.

Author 1: Noorbaiti Mahusin
Author 2: Hasimi Sallehudin
Author 3: Nurhizam Safie Mohd Satar
Author 4: Azana Hafizah Mohd Aman
Author 5: Farashazillah Yahya

Keywords: Cloud computing; cloud integration model introduction; performance of IT personnel; public sector; system integration

PDF

Paper 43: TPGR-YOLO: Improving the Traffic Police Gesture Recognition Method of YOLOv11

Abstract: In open traffic scenarios, traffic police gesture recognition faces significant challenges due to the small image scale of traffic police figures and complex backgrounds. To address this, this paper proposes a gesture recognition network based on an improved YOLOv11. The method enhances feature extraction and multi-scale information retention by integrating RFCAConv and C2DA modules into the backbone network. In the neck of the network, an edge-enhanced multi-branch fusion strategy is introduced, incorporating target edge information and multi-scale information during the feature fusion phase. Additionally, the combination of WIoU and SlideLoss loss functions optimizes bounding box positioning and sample weight allocation. Experimental validation was conducted on multiple datasets, and the proposed method achieved varying degrees of improvement on all metrics. Experimental results demonstrate that this method can accurately recognize traffic police gestures and exhibits good generalization to small targets and complex backgrounds.

Author 1: Xuxing Qi
Author 2: Cheng Xu
Author 3: Yuxuan Liu
Author 4: Nan Ma
Author 5: Hongzhe Liu

Keywords: Traffic police gesture recognition; loss function; YOLO algorithm; multi-scale feature fusion

PDF

Paper 44: A Hybrid SETO-GBDT Model for Efficient Information Literacy System Evaluation

Abstract: Information literacy (IL) is essential for vocational education talents to thrive in the modern information age. Traditional assessment methods often lack quantitative precision and systematic evaluation models, making it difficult to accurately measure IL levels. This paper aims to develop a robust, data-driven model to assess information literacy in vocational education talents. The goal is to improve the accuracy and efficiency of IL evaluations by combining machine learning techniques with optimization algorithms. The proposed method integrates the Stock Exchange Trading Optimization (SETO) algorithm with the Gradient Boosting Decision Tree (GBDT) to construct the SETO-GBDT model. This model optimizes parameters such as the number of decision trees and tree depth. A comprehensive evaluation index system for IL is built, focusing on learning attitude, process, effect, and practice. The SETO-GBDT model was trained and tested using real-world data on IL indicators. The SETO-GBDT model outperformed traditional models such as Decision Tree, Random Forest, and GBDT optimized by other algorithms like SCA and SELO. Specifically, it achieved an RMSE of 0.13, an R² of 0.98, and reduced evaluation time to 0.092 s, demonstrating superior accuracy and efficiency. The research concludes that the SETO-GBDT model offers a significant improvement in evaluating IL for vocational education talents. The model’s high accuracy and reduced evaluation time make it an effective tool for assessing and enhancing information literacy, aligning with the educational goals of developing well-rounded, information-savvy professionals.

Author 1: Jiali Dai
Author 2: Hanifah Jambari
Author 3: Mohd Hizwan Mohd Hisham

Keywords: Vocational education; talent; information literacy; system building; educational evaluation; gradient boosting; decision tree

PDF
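Gradient boosting, the learner at the core of the SETO-GBDT model, fits each new tree to the residuals left by the ensemble so far. A minimal sketch with one-dimensional depth-1 trees (decision stumps) illustrates the mechanism; the SETO tuning step and the study's information literacy data are not reproduced here, and the toy data below is invented:

```python
def fit_stump(x, residuals):
    """Depth-1 regression tree: pick the threshold split that minimizes
    squared error, with each leaf predicting the mean residual."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - ml) ** 2 for r in left)
               + sum((r - mr) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda xi: ml if xi <= t else mr

def fit_gbdt(x, y, n_trees=20, lr=0.3):
    """Boosting loop: each stump is fit to the current residuals, and its
    (learning-rate-scaled) prediction is added to the ensemble."""
    pred, trees = [0.0] * len(x), []
    for _ in range(n_trees):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, resid)
        trees.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * t(xi) for t in trees)
```

Hyperparameters such as `n_trees`, learning rate, and tree depth are exactly the quantities the paper hands to the SETO optimizer instead of fixing by hand.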

Paper 45: Bridging the Gap Between Industry 4.0 Readiness and Maturity Assessment Models: An Ontology-Based Approach

Abstract: The rapid evolution of Industry 4.0 technologies has created a complex and interconnected landscape of readiness and maturity assessment models. However, these models often fail to address the full spectrum of organizational readiness across strategic, technological, operational, and cultural dimensions, while also not accounting for emerging paradigms such as Industry 5.0. This paper proposes a conceptual model for an ontology that integrates all relevant domain knowledge into a unified framework, capturing strategic, technological, operational, and cultural readiness and maturity within a single comprehensive model. The ontology provides a systematic approach to understanding the interconnectedness of I4.0 and Industry 5.0 assessment models, facilitating a holistic view of an organization’s preparedness for digital transformation. By bridging the gap between these two stages of industrial evolution, the model enables interoperability across diverse frameworks, promoting more informed decision-making and strategic planning. This research highlights the potential of the proposed ontology to support the ongoing shift from Industry 4.0 to Industry 5.0, offering a valuable tool for researchers, practitioners, and decision-makers navigating the complexities of next-generation industrial ecosystems. The paper further discusses the theoretical underpinnings and practical applications of the model in fostering a smooth transition toward a more human-centric, sustainable, and technologically advanced industrial future.

Author 1: ABADI Asmae
Author 2: ABADI Chaimae
Author 3: ABADI Mohammed

Keywords: Industry 4.0; readiness assessment; maturity assessment; digital transformation; ontology development; conceptual model; knowledge engineering

PDF

Paper 46: Eco-Efficiency Measurement and Regional Optimization Strategy of Green Buildings in China Based on Three-Stage Super-Efficiency SBM-DEA Model

Abstract: With the increasing attention of society to sustainable development, green building, as an important sustainable building form, has attracted much attention. However, the comprehensive assessment of the eco-efficiency of green buildings faces many challenges, including insufficient analysis of all stages of the building life cycle and oversimplification of multidimensional input-output relationships. In addition, existing methods involve subjectivity and uncertainty in data processing and weight allocation, which reduces the reliability of evaluation. To overcome these difficulties, a measurement method based on the three-stage super-efficiency slacks-based measure data envelopment analysis (SBM-DEA) model is introduced in this study. By constructing a three-stage super-efficiency SBM-DEA model, an eco-efficiency measurement model for green buildings is established, taking building resources and energy as inputs and economic and environmental value as outputs. The results show that after removing the interference of external environmental variables and random errors, the measurement results of stage 3 are more reasonable. From 2011 to 2018, the eco-efficiency of green buildings in China showed obvious regional differences, decreasing from "the highest in the east (0.884), followed by the central region (0.704), to the lowest in the west (0.578)". The innovation of this study lies in its full consideration of timing and dynamics, which provides new theoretical and practical ideas for promoting sustainable development in the field of green building and is expected to improve assessment accuracy and reliability in the field.

Author 1: Xianhong Qin
Author 2: Yaou Lv
Author 3: Yunfang Wang
Author 4: Jian Pi
Author 5: Ze Xu

Keywords: Three stages; data envelopment analysis; super efficiency model; green buildings; ecological efficiency

PDF

Paper 47: Watermelon Rootstock Seedling Detection Based on Improved YOLOv8 Image Segmentation

Abstract: Automated grafting is an important means for modern agriculture to improve production efficiency and grafted seedling quality, and the use of vision systems to quickly segment target rootstock seedlings is the key technology for achieving it. This study aims to solve the problems of inaccurate image segmentation and slow detection speed in traditional rootstock seedling segmentation algorithms. To address these challenges, this study proposes a lightweight segmentation method based on an improved YOLOv8s-seg. The improved YOLOv8-seg introduces FasterNet as the backbone network and designs an RCAAM module to enhance feature extraction ability while keeping the model lightweight. The D-C2f module is improved to enhance feature fusion ability, achieving efficient and accurate segmentation of watermelon rootstock seedlings and improving grafting efficiency. This article designs a series of comparative experiments, comparing the improved YOLOv8-seg with classic models such as U-Net, SOLOv2, Mask R-CNN, and DeepLabv3+ on a test set containing watermelon rootstock seedlings, and evaluating the recognition performance and detection effect of each model. The experimental results show that the improved YOLOv8-seg outperforms the other models on the mAP index and can segment seedlings more accurately. This study provides a reliable deep-learning-based solution for the development of automatic grafting robots, which can effectively reduce labor costs and improve grafting efficiency, meeting the requirements of automated equipment for inference efficiency and hardware resources.

Author 1: Qingcang Yu
Author 2: Zihao Xu
Author 3: Yi Zhu

Keywords: Image segmentation; YOLOv8s-seg; lightweight; deep learning

PDF

Paper 48: Object Recognition IoT-Based for People with Disabilities: A Review

Abstract: This research focuses on a literature study on developing a Mini Smart Camera (MSC) system that utilizes Internet of Things (IoT) technology to help people with disabilities interact with their environment. The MSC serves as an assistive device, which integrates object recognition and speech recognition technologies along with an internet-based two-way communication system. Utilizing state-of-the-art hardware and software, the system captures images, processes audio, and transmits data via Real Time Streaming Protocol (RTSP) and Message Queuing Telemetry Transport (MQTT). These protocols serve different purposes: RTSP manages media-stream transmission, while MQTT enables lightweight machine-to-machine messaging. The MSC is equipped with a 5 MP camera, 2.5 GHz Quad-Core processor, and 4G connectivity, and is connected to a high-performance Ubuntu 22.04 Linux cloud server. The use of OpenCV libraries and machine learning algorithms ensures fast and precise image analysis. By integrating machine learning and natural language processing (NLP), the MSC efficiently handles both visual and audio inputs. Key features, including text-to-speech (TTS) and speech-to-text (STT), provide an interactive and adaptive communication interface. The system is designed to improve accessibility and encourage greater independence for people with disabilities in daily activities. The future development of multispectral cameras for assistive use is expected to provide more detailed analysis for the detection of surrounding objects.

Author 1: Andriana
Author 2: Elli Ruslina
Author 3: Zulkarnain
Author 4: Fajar Arrazaq
Author 5: Sutisna Abdul Rahman
Author 6: Tjahjo Adiprabowo
Author 7: Puput Dani Prasetyo Adi
Author 8: Yudi Yuliyus Maulana

Keywords: Internet of Things; mini smart camera; object recognition; speech recognition; assistive technology

PDF

Paper 49: Transfer Learning for Named Entity Recognition in Setswana Language Using CNN-BiLSTM Model

Abstract: This research proposes a hybrid approach to Named-Entity Recognition (NER) for Setswana, a low-resource language, that combines a bidirectional long short-term memory (BiLSTM) network with a transfer learning model and a convolutional neural network (CNN). Among the 11 official languages of South Africa, Setswana is a morphologically rich language that is underrepresented in the field of deep learning for natural language processing (NLP), in part because it is a language with limited resources. To close this gap, this research uses the proposed hybrid NER transfer learning approach and an open-source Setswana NER dataset from the South African Centre for Digital Language Resources (SADiLaR), which contains an estimated 230,000 tokens overall. Five NER models are created for the study and compared with one another to determine which performs best, and the performance of the top model is then contrasted with that of the baseline models. The first two models are trained at word level and the latter three at sentence level: word-level models represent each word as a character sequence or word embedding, while sentence-level models interpret the entire sentence as a series of word embeddings. The first model is a CNN and the second is word-level CNN-BiLSTM transfer learning; the last three are sentence-level CNN, CNN-BiLSTM transfer learning, and CNN-BiLSTM models. With 99% accuracy, the sentence-level CNN-BiLSTM transfer learning model outperforms all other models. Furthermore, it outperforms the state-of-the-art models for Setswana in the literature that were created using the same dataset.

Author 1: Shumile Chabalala
Author 2: Sunday O. Ojo
Author 3: Pius A. Owolawi

Keywords: Natural language processing; named entity recognition; convolutional neural network; bidirectional long short-term memory; Setswana

PDF

Paper 50: Planning and Design of Elderly Care Space Combining PER and Dueling DQN

Abstract: With the continuous aging of society, people's attention to the planning of elderly care spaces is increasing. Currently, many scholars have used various spatial planning models to plan and design elderly care spaces. However, the resource utilization rate and comfort of the elderly care spaces designed by these models are low, and the models still need to be optimized. This study integrates the prioritized experience replay mechanism with the dueling deep Q-network algorithm and constructs a spatial planning model based on the fused algorithm to plan elderly care spaces reasonably. The study first conducts comparative experiments on the fusion algorithm, and the outcomes indicate that it has the best prediction performance, with a minimum prediction error rate of only 0.9% and a prediction speed of up to 8.7 bps. In addition, its denoising effect is the best, and its performance is much higher than that of the comparison algorithms. Further analysis of the spatial planning model based on this algorithm shows that the average time required for elderly care space planning is only 1.3 seconds, the comfort level of the planned space reaches 98.7%, the resource utilization rate reaches 89.7%, and the planned space can raise the living standard of the elderly by 67.7%. Accordingly, the spatial planning model proposed in this study can effectively enhance the resource utilization and comfort of elderly care spaces and raise the living standard of the elderly.

Author 1: Di Wang
Author 2: Hui Ma
Author 3: Yu Chen

Keywords: Elderly care space; planning and design; prioritized experience replay; dueling deep q-network algorithm; spatial planning

PDF
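Two ingredients named in the abstract can be sketched compactly: a proportional prioritized replay buffer and the dueling aggregation Q(s, a) = V(s) + A(s, a) - mean(A). This is an illustrative sketch under those standard definitions, not the study's planning model (which couples them to an elderly-care space environment); a production buffer would use a sum-tree for O(log n) sampling rather than the plain lists used here:

```python
import random

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

class PrioritizedReplay:
    """Proportional prioritized experience replay (list-based for clarity)."""
    def __init__(self, capacity, alpha=0.6, seed=0):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []
        self.rng = random.Random(seed)

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:          # drop the oldest entry
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        # Priority grows with TD error; alpha controls how sharply.
        self.prios.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, k):
        total = sum(self.prios)
        probs = [p / total for p in self.prios]
        idx = self.rng.choices(range(len(self.data)), weights=probs, k=k)
        # Importance-sampling weights correct the non-uniform sampling bias.
        w = [(len(self.data) * probs[i]) ** -1.0 for i in idx]
        w_max = max(w)
        return [self.data[i] for i in idx], [wi / w_max for wi in w]
```

Transitions with large TD error are replayed more often, which is what lets the fused algorithm focus learning on the planning decisions it currently predicts worst.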

Paper 51: All Element Selection Method in Classroom Social Networks and Analysis of Structural Characteristics

Abstract: To deeply investigate the complex relationship between learners' structural characteristics in classroom social networks and the dynamics of learning emotions in smart teaching environments, an innovatively improved all-element selection method based on a genetic algorithm, RP-GA, is proposed. The method calculates the importance of factors based on a random forest model and guides population initialization together with random numbers to achieve differentiated and efficient factor selection; it also utilizes a partial least squares regression model in conjunction with a cross-validation optimization model to enhance the accuracy of fitness evaluation, efficiently tackling the issues of premature convergence and low prediction accuracy inherent in traditional genetic algorithms for factor selection. Based on this method, the elements affecting learning emotions are precisely screened, and the intrinsic links between elemental changes and structural properties are deeply analyzed. Experiments show that RP-GA selects a small and efficient set of key elements on public datasets and significantly improves the prediction performance of classifiers such as SVM, NB, MLP, and RF. The proposed all-element selection method for learning emotion provides effective conditions for classroom network structure characterization and future learning emotion computation.

Author 1: Zhaoyu Shou
Author 2: Zhe Zhang
Author 3: Jingquan Chen
Author 4: Hua Yuan
Author 5: Jianwen Mo

Keywords: Genetic algorithms; element selection; random forest; partial least squares; classroom network

PDF

Paper 52: An NLP-Enabled Approach to Semantic Grouping for Improved Requirements Modularity and Traceability

Abstract: The escalating complexity of modern software systems has rendered the management of requirements increasingly arduous, often plagued by redundancy, inconsistency, and inefficiency. Traditional manual methods prove inadequate for addressing the intricacies of dynamic, large-scale datasets. In response, this research introduces SQUIRE (Semantic Quick Requirements Engineering), a cutting-edge automated framework leveraging advanced Natural Language Processing (NLP) techniques, specifically Sentence-BERT (SBERT) embeddings and hierarchical clustering, to semantically organize requirements into coherent functional clusters. SQUIRE is meticulously designed to enhance modularity, mitigate redundancy, and strengthen traceability within requirements engineering processes. Its efficacy is rigorously validated using real-world datasets from diverse domains, including attendance management, e-commerce systems, and school operations. Empirical evaluations reveal that SQUIRE outperforms conventional clustering methods, demonstrating superior intra-cluster cohesion and inter-cluster separation, while significantly reducing manual intervention. This research establishes SQUIRE as a scalable and domain-agnostic solution, effectively addressing the evolving complexities of contemporary software development. By streamlining requirements management and enabling software teams to focus on strategic initiatives, SQUIRE advances the state of NLP-driven methodologies in Requirements Engineering, offering a robust foundation for future innovations.

Author 1: Rahat Izhar
Author 2: Shahid Nazir Bhatti
Author 3: Sultan A. Alharthi

Keywords: Requirements Engineering (RE); semantic clustering; sentence-BERT; natural language processing (NLP)

PDF
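The pipeline the abstract describes (embed each requirement, then group by semantic similarity) can be illustrated with a greedy single-link agglomerative pass over cosine similarity. The hand-made four-dimensional vectors in the test below merely stand in for SBERT embeddings, and the similarity threshold is arbitrary; this is a sketch of the clustering step, not the SQUIRE framework itself:

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def cluster_requirements(vecs, threshold=0.5):
    """Greedy single-link agglomerative clustering: repeatedly merge the
    most similar pair of clusters until no link exceeds the threshold."""
    clusters = [[i] for i in range(len(vecs))]

    def link(c1, c2):                        # single link = best member pair
        return max(cosine(vecs[i], vecs[j]) for i in c1 for j in c2)

    while len(clusters) > 1:
        a, b = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: link(clusters[ij[0]], clusters[ij[1]]))
        if link(clusters[a], clusters[b]) < threshold:
            break                            # remaining clusters too dissimilar
        clusters[a].extend(clusters.pop(b))  # b > a, so index a stays valid
    return clusters
```

With real SBERT embeddings the same loop groups requirements into the functional clusters the abstract describes; cutting the merge process at a threshold is equivalent to cutting a hierarchical-clustering dendrogram at a fixed height.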

Paper 53: Data-Driven Technology Augmented Reality Digitisation in Cultural Communication Design

Abstract: The digitalisation of intangible cultural heritage and big data technology provide great potential for the development of intangible cultural heritage in low-carbon tourism, which not only increases the accuracy of AR digital design but also contributes to the tourism-based management and protection of intangible cultural heritage. Addressing the lack of a testing and evaluation process in current AR digital design for tourism intangible cultural heritage, this paper proposes a data-driven testing algorithm for intangible cultural heritage AR digital design, using Qinhuai lanterns as a case study. Firstly, an AR digitisation scheme for the Qinhuai lantern intangible cultural heritage is designed; then, around this scheme, the key technical contents of AR digitisation design for intangible cultural heritage are analysed; secondly, combining the dragonfly algorithm with the restricted Boltzmann machine model, a test method for the AR digitisation design of low-carbon tourism intangible cultural heritage is put forward, based on the dragonfly algorithm's optimisation of the restricted Boltzmann machine's structural parameters; lastly, relying on the collected data, the AR digital design model for Qinhuai lantern tourism is constructed and the effectiveness of the proposed intelligent testing algorithm is analysed. The results show that the proposed digital design method is effective, while the optimised test method achieves improved convergence speed and increased accuracy, with a test score prediction accuracy of 93.5%.

Author 1: Na YIN

Keywords: Intangible cultural heritage AR digitization; low-carbon tourism; design test analysis; dragonfly algorithm; restricted Boltzmann machine model

PDF

Paper 54: Spatial Attention-Based Adaptive CNN Model for Differentiating Dementia with Lewy Bodies and Alzheimer's Disease

Abstract: Differentiating Alzheimer's Disease (AD) from Dementia with Lewy Bodies (DLB) using brain perfusion Single Photon Emission Computed Tomography (SPECT) is crucial, as it can be difficult to distinguish between the two illnesses. The most recently identified characteristic of DLB for a possible diagnosis is the Cingulate Island Sign (CIS). This work aims to differentiate DLB and AD by utilizing a deep learning model named AD-DLB-DNet. Initially, the required images are collected from the benchmark dataset. A Spatial Attention-Based Adaptive Convolution Neural Network (SA-ACNN) is then used to extract the CIS features from the images, with its attributes tuned using Improved Random Function-based Birds Foraging Search (IRF-BFS). The CIS features obtained from the SA-ACNN are used to differentiate DLB and AD accurately. Finally, a Dilated Residual-Long Short-Term Memory (DR-LSTM) layer is proposed to perform the AD and DLB differentiation and identify the clinical characteristics of DLB. The suggested model supports the differentiation between AD and DLB so that effective therapeutic measures can be taken, and validation is performed to confirm the effectiveness of the introduced system.

Author 1: K Sravani
Author 2: V RaviSankar

Keywords: Alzheimer's disease and dementia with lewy bodies differentiation; spatial attention-based adaptive convolution neural network; cingulate island sign; improved random function-based birds foraging search; dilated residual-long short-term memory

PDF

Paper 55: Energy-Balance-Based Out-of-Distribution Detection of Skin Lesions

Abstract: Skin lesion detection plays a crucial role in the diagnosis and treatment of skin diseases. Due to the wide variety of skin lesion types, especially when dealing with unknown or rare lesions, models tend to exhibit overconfidence. Out-of-distribution (OOD) detection techniques are capable of identifying lesion types that were not present in the training data, thereby enhancing the model's robustness and diagnostic reliability. However, the issue of class imbalance makes it difficult for models to effectively learn the features of minority class lesions. To address this challenge, a Balanced Energy Regularization Loss is proposed in this paper, aimed at mitigating the class imbalance problem in OOD detection. This method applies stronger regularization to majority class samples, promoting the model's learning of minority class samples, which significantly improves model performance. Experimental results demonstrate that the Balanced Energy Regularization Loss effectively enhances the model's robustness and accuracy in OOD detection tasks, providing a viable solution to the class imbalance issue in skin lesion detection.

Author 1: Jiahui Sun
Author 2: Guan Yang
Author 3: Yishuo Chen
Author 4: Hongyan Wu
Author 5: Xiaoming Liu

Keywords: Balanced energy regularization loss; skin lesions; out-of-distribution detection; convolutional neural networks

PDF
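Energy-based OOD scoring and the class-balancing idea in the abstract can be written down directly. The regularizer below is a hedged illustration of the general shape (a prior-weighted hinge on an energy margin), with an invented margin and weighting scheme; it is not the paper's exact loss:

```python
import math

def energy(logits, T=1.0):
    """Free-energy OOD score: E(x) = -T * logsumexp(logits / T).
    Lower energy indicates a more in-distribution sample."""
    m = max(l / T for l in logits)                       # stable logsumexp
    return -T * (m + math.log(sum(math.exp(l / T - m) for l in logits)))

def balanced_energy_reg(logits, label, class_freq, m_in=-25.0, lam=0.1):
    """Illustrative class-balanced energy regularizer: in-distribution
    samples are pushed below an energy margin m_in, with majority-class
    samples weighted more strongly so that minority-class lesions are
    penalized less (the balancing idea described in the abstract)."""
    w = class_freq[label] / sum(class_freq)   # larger for majority classes
    return lam * w * max(0.0, energy(logits) - m_in) ** 2
```

At test time, samples whose energy exceeds a chosen threshold are flagged as out-of-distribution lesion types rather than forced into a known class.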

Paper 56: Using Fuzzy Matter-Element Extension Method to Cultural Tourism Resources Data Mining and Evaluation

Abstract: This study explores the mining and evaluation of cultural and tourism resources based on fuzzy matter-element extension in the context of cultural and tourism integration. Through fieldwork and analysis of cultural and tourism resources, it is found that the fuzzy matter-element extension theory can be effectively applied to the mining and evaluation of cultural and tourism resources in the context of cultural and tourism integration. The application of integration of cultural and tourism resources has a significant driving effect in tourism development, which can effectively enhance the tourist experience and improve the visibility and attractiveness. Meanwhile, through field research and data analysis, this study also puts forward relevant improvement suggestions for the characteristics and actual situation of the research object, aiming at further optimising the development mode, realising the organic integration of culture and tourism resources, and promoting the prosperity and development of the local cultural industry. Overall, this study has certain theoretical and practical significance for promoting the integrated development of culture and tourism and the sustainable development of tourism.

Author 1: Fei Liu

Keywords: Cultural and tourism integration; fuzzy object meta-theory; development; organic integration

PDF

Paper 57: Arabic Sentiment Analysis Using Optuna Hyperparameter Optimization and Metaheuristics Feature Selection to Improve Performance of LightGBM

Abstract: Sentiment Analysis (SA) effectively examines big data such as customer reviews, market research, social media posts, online discussions, and customer feedback. Arabic is a complex and rich language, and the existence of numerous dialects alongside Modern Standard Arabic (MSA) is the main reason its language resources need enhancement. This study investigates the impact of stemming and lemmatization methods on Arabic sentiment analysis (ASA) using machine learning techniques, specifically the LightGBM classifier. It employs metaheuristic feature selection algorithms (particle swarm optimization, dragonfly optimization, grey wolf optimization, the Harris Hawks optimizer, and a genetic algorithm) to identify the most relevant features, and the Optuna hyperparameter optimization framework to determine the set of hyperparameter values that maximizes LightGBM performance. The study underscores the importance of preprocessing strategies in ASA and applies the different stemming and lemmatization methods, the metaheuristic feature selection algorithms, and Optuna hyperparameter optimization to eleven datasets covering different Arabic dialects. The findings indicate that metaheuristic feature selection with the LightGBM classifier, using suitable stemming and lemmatization or a combination of them, improves LightGBM's accuracy by up to 8%, while Optuna hyperparameter optimization, depending on data characteristics, improves accuracy by 2% to 11% and achieves superior results to metaheuristic feature selection in more than 90% of cases. This study provides valuable insights and directions for future research in ASA.
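The tuning loop that a framework like Optuna automates can be illustrated with a minimal random-search sketch in plain Python. The `validate` surrogate and the parameter ranges below are invented stand-ins for a real LightGBM cross-validation run:

```python
import random

def validate(params):
    # Hypothetical stand-in for a real validation run: returns a score for a
    # LightGBM-style hyperparameter setting (here a toy analytic surrogate
    # peaking near learning_rate=0.1, num_leaves=31).
    return 1.0 - (params["learning_rate"] - 0.1) ** 2 - 1e-4 * abs(params["num_leaves"] - 31)

def tune(n_trials=200, seed=0):
    # Minimal random-search loop mirroring the study/trial pattern:
    # sample a candidate, evaluate the objective, keep the best.
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "learning_rate": rng.uniform(0.01, 0.3),  # analogue of suggesting a float
            "num_leaves": rng.randint(15, 255),       # analogue of suggesting an int
        }
        score = validate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Optuna replaces the uniform sampling above with smarter samplers (e.g. TPE) and adds pruning of unpromising trials, but the objective-driven loop is the same shape.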

Author 1: Mostafa Medhat Nazier
Author 2: Mamdouh M. Gomaa
Author 3: Mohamed M. Abdallah
Author 4: Awny Sayed

Keywords: Arabic Sentiment Analysis (ASA); big data; Light Gradient Boosting Machine (LightGBM); Optuna hyperparameter optimization; metaheuristics feature selection; machine learning

PDF

Paper 58: Flexible Framework for Lung and Colon Cancer Automated Analysis Across Multiple Diagnosis Scenarios

Abstract: Among humans, lung and colon cancers are primary contributors to mortality and morbidity. They may grow simultaneously in both organs, with a harmful influence on people's lives, and if a tumor is not diagnosed early, it is likely to spread to both organs. This research presents a flexible framework that employs a lightweight Convolutional Neural Network architecture for automating lung and colon cancer diagnosis in histological images across multiple diagnosis scenarios. The LC25000 dataset, commonly used for this task, includes 25000 histopathological images in 5 distinct classes: lung adenocarcinoma, lung squamous cell carcinoma, benign lung tissue, colon adenocarcinoma, and benign colonic tissue. This work includes three diagnosis scenarios: (S1) evaluates lung or colon samples, (S2) distinguishes benign from malignant images, and (S3) classifies images into the five LC25000 categories. Across all scenarios, accuracy, recall, precision, and F1-score exceeded 0.9947, and the AUC exceeded 0.9995. The lightweight Convolutional Neural Network, containing only 1.612 million parameters, is extremely efficient for automated lung and colon cancer diagnosis, outperforming several current methods. This method might help doctors provide more accurate diagnoses and improve patient outcomes.

Author 1: Marwen SAKLI
Author 2: Chaker ESSID
Author 3: Bassem BEN SALAH
Author 4: Hedi SAKLI

Keywords: Lung and colon cancers; histopathological images; LC25000 dataset; lightweight convolutional neural networks; multiple diagnosis scenarios

PDF

Paper 59: Machine Learning-Based Denoising Techniques for Monte Carlo Rendering: A Literature Review

Abstract: Monte Carlo (MC) rendering is a powerful technique for achieving photorealistic images by simulating complex light interactions. However, the inherent noise introduced by MC rendering necessitates effective denoising techniques to enhance image quality. This paper presents a comprehensive review and comparative analysis of various machine learning (ML) methods for denoising MC renderings, focusing on four main categories: radiance prediction using convolutional neural networks (CNNs), kernel prediction networks, temporal rendering with recurrent architectures, and adaptive sampling approaches. Through systematic analysis of 7 peer-reviewed studies from 2019 to 2024, the authors' findings reveal that deep learning models, particularly generative adversarial networks (GANs), achieve superior denoising performance. The study identifies key challenges, including computational demands, with some methods requiring significant GPU resources, and generalization across diverse scenes. Additionally, we observe a trade-off between denoising quality and processing speed, particularly crucial for real-time applications. The study concludes with recommendations for future research, emphasizing the need for hybrid approaches combining physics-based models with ML techniques to improve robustness and efficiency in production environments.

Author 1: Liew Wen Yen
Author 2: Rajermani Thinakaran
Author 3: J. Somasekar

Keywords: Convolutional neural network; Monte Carlo rendering; generative adversarial network; deep learning; machine learning; denoising techniques

PDF

Paper 60: Optimization Technology of Civil Aircraft Stand Assignment Based on MSCOEA Model

Abstract: The Chinese aviation transportation industry is constantly developing toward multiple objectives and constraints, and the conventional optimization methods for civil aircraft stand assignment are too inefficient to meet practical needs. To address the convergence and uniformity problems in multi-objective optimization, this paper proposes a Multi-Strategy Competitive-cooperative Co-Evolutionary Algorithm (MSCOEA); to address the high time complexity of the traditional chromosome coding mode, characteristics of the quantum evolutionary algorithm are incorporated into MSCOEA. From the results, the prediction accuracy of the research method was above 90% on both the training and validation sets, and with increasing iterations the final accuracies were 96.8% and 97.53%, respectively. The algorithm matched other comparative algorithms on most objectives, and the optimal flight allocation rate reached 98.4%. The mean, optimal value, and variance of the number of flights allocated to remote stands were 5.75E+00, 4.00E+00, and 1.04E+00, respectively, all superior to the comparative algorithms. The designed stand assignment optimization method achieves efficient stand assignment and improves allocation efficiency for large, multi-objective stand sets.

Author 1: Qiao Xue
Author 2: Yaqiong Wang
Author 3: Hui Hui

Keywords: Collaborative evolution; quantum algorithm; stand assignment; multi-objective optimization; population

PDF

Paper 61: Enhancing Urban Mapping in Indonesia with YOLOv11

Abstract: Object recognition in urban and residential settings has become increasingly vital for urban planning, real estate evaluation, and geographic mapping applications. This study presents an innovative methodology for house detection with YOLOv11, an advanced deep-learning object detection model. YOLO is based on a Convolutional Neural Network (CNN), a type of deep learning model well suited to image analysis; YOLO itself is designed specifically for real-time object detection in images and videos. The suggested method utilizes sophisticated computer vision algorithms to recognize residential buildings precisely according to their roofing attributes. This study illustrates the potential of color-based roof categorization to improve spatial analysis and automated mapping technologies through meticulous dataset preparation, model training, and rigorous validation. This research enhances the field by introducing a rigorous methodology for accurate house detection relevant to urban development, geographic information systems, and automated remote sensing applications. By leveraging the power of deep learning and computer vision, this approach not only improves the efficiency of urban planning processes but also contributes to the development of more resilient and adaptive urban environments.

Author 1: Muhammad Emir Kusputra
Author 2: Alesandra Zhegita Helga Prabowo
Author 3: Kamel
Author 4: Hady Pranoto

Keywords: YOLOv11; object detection; house detection; house counting; computer vision; deep learning; urban mapping

PDF

Paper 62: A Supervised Learning-Based Classification Technique for Precise Identification of Monkeypox Using Skin Imaging

Abstract: The monkeypox epidemic has spread to nearly every nation, and governments have implemented several strict policies to stop the virus that causes it. For effective handling and treatment, early identification and diagnosis of monkeypox from digital skin lesion images is critical, and this work employed deep learning architectures to achieve this goal. This article presents a supervised learning-based classification method designed for the precise identification of monkeypox cases. The analysis was conducted using an open-source dataset from Kaggle, consisting of digital images of monkeypox, which were processed using advanced image processing and deep learning techniques. The data was categorized into findings related and unrelated to monkeypox. A deep neural network with 50 layers and up to 35 folds was utilized to identify regions of interest indicative of characteristics relevant to computer-assisted medical diagnosis, enabling image processing and natural language processing tasks to be solved with high accuracy. In terms of performance, the proposed method achieved an accuracy of 96% during cross-validation classification testing. This outcome demonstrates the potential of computer-assisted diagnosis as a supplementary tool for medical professionals. Amid the monkeypox outbreak, this method offers a technical and objective assessment of patients' skin conditions, thereby simplifying the diagnostic process for specialists.

Author 1: Vandana
Author 2: Chetna Sharma
Author 3: Yonis Gulzar
Author 4: Mohammad Shuaib Mir

Keywords: Deep learning; monkeypox; medical image processing; image classification; cross validation

PDF

Paper 63: Classifying Weed Development Stages Using Deep Learning Methods

Abstract: The control of harmful weeds holds a significant place in the cultivation of agricultural products. A crucial criterion in this control process is identifying the development stage of the weeds, since the control technique to be used is determined by the weed's growth stage. This study addresses the application of deep learning methods to classifying growth stages, using images of various weed species to predict their development periods. Four weed species, grown from seeds collected in the Sinanpaşa Plain of Afyonkarahisar, Turkey, were used in the study. The images were captured with a Nikon D7000 camera equipped with three different lenses, and ROI extraction was performed using Lifex software. Using these ROI images, deep learning models including DenseNet, EfficientNet, GoogleNet, Xception, and SqueezeNet were evaluated with performance metrics including accuracy, F1 score, precision, and recall. In the 4-class dataset with ROI annotations, DenseNet and Xception achieved an accuracy of 86.57%, while EfficientNet demonstrated the highest performance with an accuracy of 89.55%. Following the initial tests, it was concluded that the extreme similarity between classes 3 and 4 caused most of the prediction errors; merging these classes significantly increased the accuracy and F1 scores across all models. In image classification tests, SqueezeNet and GoogleNet demonstrated the shortest processing times, while EfficientNet lagged slightly behind them in speed but exhibited superior accuracy. In conclusion, although the use of ROI improved classification performance, class merging strategies resulted in a more significant performance enhancement.

Author 1: Yasin ÇIÇEK
Author 2: Eyyüp GÜLBANDILAR
Author 3: Kadir ÇIRAY
Author 4: Ahmet ULUDAG

Keywords: Deep learning; weed development stages; classification; DenseNET; Xception; SqueezeNET; GoogleNET; EfficientNET; ROI

PDF

Paper 64: Target Detection of Leakage Bubbles in Stainless Steel Welded Pipe Gas Airtightness Experiments Based on YOLOv8-BGA

Abstract: The gas-tightness experiment is an effective means of detecting leakage in stainless steel welded pipes, and vision-based bubble recognition algorithms can effectively improve the efficiency of gas-tightness detection. Using the YOLOv8 model as a baseline, this study proposes a new detection network, YOLOv8-BGA, which achieves effective identification of leakage bubbles; bubble images were collected under different lighting conditions in a practical industrial inspection environment to create a bubble dataset. Firstly, a C2f_BoT module was designed to replace the C2f module in the backbone network, improving the feature extraction capability of the model; secondly, the convolutional layers of the neck network were replaced with GSConv modules, making the model lightweight; thirdly, a C2f-BM attention mechanism was added before the detection layer, effectively improving model performance; finally, WIoU was used to improve the loss function, mitigating the detrimental effect on the gradient of small, low-quality bubble samples in the dataset and significantly improving the convergence speed of the network. The experimental results showed that the average leakage bubble detection accuracy of the YOLOv8-BGA model reached 97.7%, an improvement of 5.3% over the baseline, meeting the needs of practical industrial inspection.
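WIoU and the other IoU-family losses used in detectors like YOLOv8 all build on the plain Intersection-over-Union overlap measure, which can be sketched as:

```python
def iou(box_a, box_b):
    # Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2).
    # This is the raw overlap score that IoU-family losses such as WIoU reweight.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # zero if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

WIoU's contribution, as the abstract notes, is a dynamic weighting of this quantity so that low-quality small-bubble boxes contribute less harmful gradient; the weighting scheme itself is detailed in the WIoU literature rather than reproduced here.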

Author 1: Huaishu Hou
Author 2: Zikang Chen
Author 3: Chaofei Jiao

Keywords: Image processing; stainless steel welded pipe; non-destructive testing; YOLOv8; attention mechanism; loss function

PDF

Paper 65: Broccoli Grading Based on Improved Convolutional Neural Network Using Ensemble Deep Learning

Abstract: The demand for broccoli in Indonesia has been increasing significantly, with annual growth of approximately 15% to 20%. However, supply remains insufficient and quality is often inconsistent, so a grading process is needed to classify broccoli into grades A, B, and C based on color, size, and shape. Currently, grading is carried out solely by market intermediaries, while farmers and the general public have a limited understanding of the process. This research developed an automated grading method using a Convolutional Neural Network (CNN) based on two broccoli images, a top view and a side view. Three main parameters, namely color, size, and shape, were identified from these images and used as grading determinants. An ensemble deep learning technique was applied by training each parameter separately using several CNN models, namely ResNet50, EfficientNetB2, VGG16, and an Improved CNN, which were then combined in the testing phase using a voting technique. The test was conducted 64 times with various model combinations to achieve the best accuracy. A significant contribution of the Improved CNN lies in the shape feature, which achieved a maximum performance of 95%. This study also compared evaluation metrics such as precision, recall, F-Score, and accuracy across the different model combinations.

Author 1: Zaki Imaduddin
Author 2: Yohanes Aris Purwanto
Author 3: Sony Hartono Wijaya
Author 4: Shelvie Nidya Neyman

Keywords: Grading; convolution neural network; ensemble deep learning; voting

PDF

Paper 66: A Custom Deep Learning Approach for Traffic Flow Prediction in Port Environments: Integrating RCNN for Spatial and Temporal Analysis

Abstract: Port congestion poses a significant challenge to maritime logistics, especially for industries dealing with perishable goods like seafood. This study presents a custom deep learning model using Transformer architecture to predict real-time traffic flow at the Port of Virginia, with a focus on optimizing the movement of fish trucks. The model integrates multimodal data from 36 sensors, capturing traffic flow, occupancy, and speed at five-minute intervals, and processes high-dimensional, time-series data for accurate predictions. The model utilizes attention mechanisms to capture spatial and temporal dependencies, significantly improving predictive performance. Evaluation results indicate that the Transformer-based model outperforms existing models like RandomForest, GradientBoosting, and Support Vector Regression, with an R-squared value of 0.89, Pearson correlation of 0.91, and a Root Mean Squared Error (RMSE) of 0.0208. These results suggest that the model can effectively manage dynamic port traffic and optimize resource allocation, ensuring the timely delivery of perishable goods.

Author 1: Abdul Basit Ali Shah
Author 2: Xinglu Xu
Author 3: Zheng Yongren
Author 4: Zijian Guo

Keywords: Traffic flow prediction; transformer model; port congestion; deep learning

PDF

Paper 67: Enhanced Virtual Machine Resource Optimization in Cloud Computing Using Real-Time Monitoring and Predictive Modeling

Abstract: Effective resource estimation is essential in cloud computing to minimize operational costs, optimize performance, and enhance user satisfaction. This study proposes a comprehensive framework for virtual machine optimization in cloud environments, focusing on predictive resource management to improve resource efficiency and system performance. The framework integrates real-time monitoring, advanced resource management techniques, and machine learning-based predictions. A simulated environment is deployed using PROXMOX, with Prometheus for monitoring and Grafana for visualization and alerting. By leveraging machine learning models, including Random Forest Regression and LSTM, the framework predicts resource usage based on historical data, enabling precise and proactive resource allocation. Results indicate that the Random Forest model achieves superior accuracy with a MAPE of 2.65%, significantly outperforming LSTM's 17.43%. These findings underscore the reliability of Random Forest for resource estimation. This research demonstrates the potential of predictive analytics in advancing cloud resource management, contributing to more efficient and scalable cloud computing practices.
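The evaluation described here rests on two simple building blocks: turning a monitored usage history into supervised lag features for a regressor, and scoring predictions with MAPE. Both can be sketched in plain Python (the helper names and the three-step window are illustrative choices, not the paper's configuration):

```python
def make_lagged(series, n_lags=3):
    # Turn a resource-usage history (e.g. CPU utilisation samples) into
    # (features, target) pairs: each row holds the previous n_lags
    # observations, and the target is the next observation.
    X, y = [], []
    for i in range(n_lags, len(series)):
        X.append(series[i - n_lags:i])
        y.append(series[i])
    return X, y

def mape(actual, predicted):
    # Mean Absolute Percentage Error, the metric used to compare the models.
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)
```

A Random Forest or LSTM regressor would then be fitted on `X`/`y`, with MAPE computed on held-out predictions.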

Author 1: Rim Doukha
Author 2: Abderrahmane Ez-zahout

Keywords: Cloud computing; virtual machine optimization; resource allocation; machine learning

PDF

Paper 68: Traffic Safety in Mixed Environments by Predicting Lane Merging and Adaptive Control

Abstract: Autonomous driving technology is primarily developed to enhance traffic safety through advancements in motion prediction and adaptive control mechanisms. Highway lane merging remains a high-risk scenario, accounting for approximately 7% of highway collisions globally due to misjudged vehicle interactions, according to international statistics. This paper proposes a two-stage deep learning framework for autonomous lane merging in mixed traffic. Using the Argoverse dataset, which contains over 300,000 vehicle trajectories mapped to high-definition road networks, we first predict vehicle trajectories using a Seq2Seq model with LSTM layers, achieving a 21% improvement in prediction accuracy over a baseline Multi-layer Perceptron model. In the second stage, reinforcement learning is employed for maneuver generation, where a Dueling Deep Q-Network outperforms a standard DQN by 8% in collision avoidance. Experimental results indicate that the combined trajectory prediction and RL-based framework significantly reduces merging delays, enhances data-driven decision-making in mixed traffic environments, and provides a scalable solution for safer autonomous highway merging.
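The aggregation step that distinguishes a Dueling DQN from a standard DQN fits in a few lines; this is a sketch of the standard mean-subtracted formulation, not the paper's full network:

```python
def dueling_q(value, advantages):
    # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    # Subtracting the mean advantage keeps the value and advantage
    # streams identifiable, since only their sum is supervised.
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```

In a Dueling DQN, `value` and `advantages` come from two separate heads sharing a common feature extractor; the aggregated Q-values are then trained with the usual DQN temporal-difference loss.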

Author 1: Aigerim Amantay
Author 2: Shyryn Akan
Author 3: Nurlybek Kenes
Author 4: Amandyk Kartbayev

Keywords: Autonomous driving; lane merging; traffic safety; trajectory prediction; deep learning; LiDAR; LSTM

PDF

Paper 69: Modular Analysis of Complex Products Based on Hybrid Genetic Ant Colony Optimization in the Context of Industry 4.0

Abstract: With the development of science and technology, industry has entered the Industry 4.0 era of intelligent construction, and various algorithms have been widely applied in the modularization of production products. This study focuses on the modular optimization problem of complex products and establishes a hybrid genetic algorithm based on the ant colony algorithm framework. The new algorithm incorporates visibility analysis of the genetic algorithm, using the obtained solution as the pheromone source for the new algorithm to quickly obtain the optimal solution. The results showed that the algorithm could quickly achieve modularization of complex industrial products, adapt to products with a large number of parts and complex compositions, and obtain the optimal solution. The new algorithm reduced the running time of modularizing complex products by 35.06% compared to the particle swarm optimization algorithm. It also optimized the product design process for core components, reducing production costs by 23.46% and increasing production efficiency by 39.20%. Consequently, the novel algorithm modularizes complex products, thereby enhancing production efficiency and providing a novel intelligent method for the design process of complex products.

Author 1: Yichun Shi
Author 2: Qinhe Shi

Keywords: Industry 4.0; genetic algorithm; ant colony; complex products; modularization; production efficiency

PDF

Paper 70: Detection and Prediction of Polycystic Ovary Syndrome Using Attention-Based CNN-RNN Classification Model

Abstract: Polycystic Ovary Syndrome (PCOS) presents many challenges in diagnosis and treatment due to the diversity of its presentation and its potential long-term consequences for health. For this reason, a number of sophisticated data pre-processing and classification techniques are employed to enhance the accuracy and reliability of PCOS diagnosis. To identify ovarian cysts, real-time ultrasound images are first pre-processed with the Contrast-Limited Adaptive Histogram Equalization (CLAHE) model to improve image contrast and sharpness. The ultrasound images are then segmented with the K-means clustering algorithm, Particle Swarm Optimization (PSO), and a fuzzy filter, enabling precise analysis of regions of interest. An attention-based Convolutional Neural Network-Recurrent Neural Network (CNN-RNN) model is employed for classification, effectively capturing the temporal and spatial characteristics of the segmented data. The proposed model achieves a very good accuracy rate of 96% and performs well on a variety of evaluation metrics, including accuracy, precision, sensitivity, F1-score, and specificity. The results are evidence of the model's robustness in minimizing false positives and enhancing PCOS diagnostic accuracy. Nevertheless, larger datasets are required to maximize the precision and generalizability of the model. Subsequent research aims to use Explainable AI (XAI) methods to enhance clinical decision-making and establish trust by making the model's predictions clearer and more understandable for patients and clinicians. Along with enhancing PCOS detection, this comprehensive approach sets a precedent for integrating explainability into AI-based medical diagnostic devices.
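CLAHE extends plain histogram equalization by working on local tiles and clipping the histogram to limit noise amplification. The underlying global equalization step can be sketched in plain Python (an illustrative stand-in for the building block, not the paper's pipeline):

```python
def equalize(gray, levels=256):
    # Global histogram equalization over a flat list of pixel intensities.
    # CLAHE applies this per tile, with the histogram clipped at a limit
    # before the cumulative distribution is built.
    hist = [0] * levels
    for p in gray:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:  # cumulative distribution of intensities
        total += h
        cdf.append(total)
    n = len(gray)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to stretch
        return list(gray)
    # Map each pixel through the normalized CDF to spread the intensity range.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1)) for p in gray]
```

The effect is to stretch clustered intensities toward the full range, which is why the abstract credits CLAHE with improved contrast and sharpness in the ultrasound images.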

Author 1: Siji Jose Pulluparambil
Author 2: Subrahmanya Bhat B

Keywords: Polycystic ovary syndrome; contrast limited adaptive histogram equalization; particle swarm optimization; k- means clustering algorithm; convolutional neural network; recurrent neural network

PDF

Paper 71: A Review of AI and IoT Implementation in a Museum’s Ecosystem: Benefits, Challenges, and a Novel Conceptual Model

Abstract: Museums need to transform into modern museums by developing a digital ecosystem that integrates all elements of the museum to optimize organizational outcomes and improve people's welfare in the era of Society 5.0. This paper reviews the museum digital ecosystem based on the implementation of artificial intelligence (AI) and the Internet of Things (IoT). The PRISMA methodology for literature review was adopted to answer the research questions, identify digital technology trends, and determine the challenges and benefits of developing a digital museum ecosystem, and a novel conceptual model of an AI- and IoT-based museum ecosystem is proposed. The dataset contained metadata from the Scopus, Google Scholar, and IEEE Xplore databases. The stages of the literature review show that AI and IoT technologies have been inseparable in the development of digital museums since 2020, but there has yet to be research on a digital museum ecosystem model that integrates IoT and AI. Implementing a museum digital ecosystem will improve museum resources and increase museum competitiveness; however, challenges remain around cybersecurity, the integration of data in multiple media formats, and interface designs that overcome user-acceptance barriers to the technologies in the digital museum ecosystem. The proposed AI- and IoT-based model also requires evaluation to validate its implementation in museums in future work.

Author 1: Shinta Puspasari
Author 2: Indah Agustien Siradjuddin
Author 3: Rachmansyah

Keywords: AI; IoT; digital museum; digital ecosystem

PDF

Paper 72: Optimizing the GRU-LSTM Hybrid Model for Air Temperature Prediction in Degraded Wetlands and Climate Change Implications

Abstract: Accurate air temperature prediction is critical, particularly for micro air temperatures, which change quickly. Micro and macro air temperatures differ, particularly in degraded wetlands. By predicting air temperature, climate change in a degraded wetland environment can be anticipated earlier. Furthermore, micro and macro air temperatures are drought index parameters, and knowing the drought index can help prevent disasters such as fires and floods. However, the right indicators for predicting micro or macro temperatures have yet to be found. LSTM excels at tasks requiring complex long-term memory, whereas GRU excels at tasks requiring rapid processing; we therefore propose a deep learning strategy based on a hybrid GRU-LSTM model, as both architectures are excellent for time series prediction. The performance of this hybrid model is affected by changes in model indicators: the preprocessing stage, the number of input parameters, and the presence or absence of a dropout layer in the model architecture are among the most influential. The best macro temperature prediction performance was obtained using 12 monthly averages to predict the next month's temperature, yielding an RMSE of 0.056807, MAE of 0.046592, and R2 of 0.989371. The model also performed well in predicting daily micro temperature, with an RMSE of 0.227086, MAE of 0.190801, and R2 of 0.981802.
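The RMSE, MAE, and R2 figures quoted for the temperature models can be computed as follows (a generic sketch; the function name is illustrative):

```python
def regression_metrics(y_true, y_pred):
    # RMSE, MAE, and R^2, the three metrics reported for the
    # macro and micro temperature predictions.
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    rmse = (sum(e * e for e in errs) / n) ** 0.5   # root mean squared error
    mae = sum(abs(e) for e in errs) / n            # mean absolute error
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errs)              # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
    return rmse, mae, r2
```

An R2 near 0.99, as reported above, means the model explains nearly all of the variance in the observed temperatures.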

Author 1: Yuslena Sari
Author 2: Yudi Firmanul Arifin
Author 3: Novitasari Novitasari
Author 4: Samingun Handoyo
Author 5: Andreyan Rizky Baskara
Author 6: Nurul Fathanah Musatamin
Author 7: Muhammad Tommy Maulidyanto
Author 8: Siti Viona Indah Swari
Author 9: Erika Maulidiya

Keywords: Predictions; temperature; Gated Recurrent Unit (GRU); Long Short-Term Memory (LSTM); performance; indicators

PDF

Paper 73: Lightweight Parabola Chaotic Keyed Hash Using SRAM-PUF for IoT Authentication

Abstract: This paper introduces a lightweight and efficient keyed hash function tailored for resource-constrained Internet of Things (IoT) environments, leveraging the chaotic properties of the Parabola Chaotic Map. By combining the inherent unpredictability of chaotic systems with a streamlined cryptographic design, the proposed hash function ensures robust security and low computational overhead. The function is further strengthened by integrating it with a Physical Unclonable Function (PUF) based on SRAM initial values, which serves as a secure and tamper-resistant source of device-specific keys. Experimental validation on an ESP32 microcontroller demonstrates the function's high sensitivity to input variations, exceptional statistical randomness, and resistance to cryptographic attacks, including collisions and differential analysis. With a mean bit-change probability nearing the ideal 50% and 100% reliability in key generation under varying conditions, the system addresses critical IoT security challenges such as cloning, replay attacks, and tampering. This work contributes a novel solution that combines chaos theory and hardware-based security to advance secure, efficient, and scalable authentication mechanisms for IoT applications.
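A toy illustration of a chaotic-map keyed hash in the spirit described: the map form, the key seeding, and the digest quantization below are assumptions for illustration, not the paper's vetted construction.

```python
def parabola_hash(message: bytes, key: bytes, rounds: int = 8) -> int:
    # Toy keyed hash driven by a parabola-form chaotic map x <- 1 - a*x^2
    # (chaotic on [-1, 1] for a = 2). The key (in the paper, device-specific
    # bits read from SRAM power-up state) seeds the trajectory; each message
    # byte perturbs the state, and the final state is quantized to a 32-bit
    # digest. Illustrative only, not a cryptographically vetted design.
    a = 2.0
    x = (sum(key) % 251) / 251.0            # key-derived seed in [0, 1)
    for b in message:
        x = (x + b / 255.0) % 1.0 * 2.0 - 1.0  # inject the byte into [-1, 1)
        for _ in range(rounds):                 # let the chaos diffuse the change
            x = 1.0 - a * x * x
    return int((x + 1.0) / 2.0 * 0xFFFFFFFF) & 0xFFFFFFFF
```

The properties the abstract measures map directly onto this structure: determinism for a fixed key gives reliable authentication, while sensitivity of the chaotic trajectory to single-bit input changes drives the near-50% bit-change (avalanche) behaviour.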

Author 1: Nattagit Jiteurtragool
Author 2: Jirayu Samkunta
Author 3: Patinya Ketthong

Keywords: SRAM PUF; PUF key generation; chaotic keyed hash; device authentication; discrete-time chaotic

PDF

Paper 74: A Systematic Review of Metaheuristic Algorithms in Human Activity Recognition: Applications, Trends, and Challenges

Abstract: Metaheuristic algorithms have emerged as promising techniques for optimizing human activity recognition (HAR) systems. This systematic review examines the application of these algorithms in HAR by analyzing relevant literature published between 2019 and 2024. A comprehensive search across multiple databases yielded 27 studies that met the inclusion criteria. The analysis revealed that Genetic Algorithms (GA) exhibit classification accuracy rates ranging from 88.25% to 96.00% in activity recognition and up to 90.63% in localization tasks. Notably, Oppositional and Chaos Particle Swarm Optimization (OCPSO) combined with MI-1DCNN significantly improves detection accuracy, demonstrating a 2.82% improvement over standard PSO approaches with a Support Vector Machine (SVM) classifier. Our analysis highlights a growing trend toward hybrid metaheuristic approaches that enhance feature selection and classifier optimization. However, challenges related to computational cost and scalability persist, underscoring key areas for future research. These findings emphasize the potential of metaheuristic algorithms to significantly advance HAR. Future studies should explore the development of more computationally efficient hybrid models and the integration of metaheuristic optimization with deep learning architectures to enhance system robustness and adaptability.

Author 1: John Deutero Kisoi
Author 2: Norfadzlan Yusup
Author 3: Syahrul Nizam Junaini

Keywords: Metaheuristic algorithm; human activity recognition; systematic review; application; trend; challenge; literature

PDF

Paper 75: Bridging Data and Clinical Insight: Explainable AI for ICU Mortality Risk Prediction

Abstract: Despite advancements in machine learning within healthcare, the majority of predictive models for ICU mortality lack interpretability, a crucial factor for clinical application. The complexity inherent in high-dimensional healthcare data and models poses a significant barrier to achieving accurate and transparent results, which are vital in fostering trust and enabling practical applications in clinical settings. This study focuses on developing an interpretable machine learning model for intensive care unit (ICU) mortality prediction using explainable AI (XAI) methods. The research aimed to develop a predictive model that could assess mortality risk utilizing the WiDS Datathon 2020 dataset, which includes clinical and physiological data from over 91,000 ICU admissions. The model's development involved extensive data preprocessing, including data cleaning and handling missing values, followed by training six different machine learning algorithms. The Random Forest model ranked as the most effective, with the highest accuracy and robustness to overfitting, making it well suited for clinical decision-making. The importance of this work lies in its potential to enhance patient care by providing healthcare professionals with an interpretable tool that can predict mortality risk, thus aiding in critical decision-making processes in high-acuity environments. The results of this study also emphasize the importance of applying explainable AI methods to ensure AI models are transparent and understandable to end-users, which is crucial in healthcare settings.

Author 1: Ali H. Hassan
Author 2: Riza bin Sulaiman
Author 3: Mansoor Abdulhak
Author 4: Hasan Kahtan

Keywords: Explainable AI; healthcare; machine learning; predictive model

PDF

Paper 76: Comparative Analysis of Undersampling, Oversampling, and SMOTE Techniques for Addressing Class Imbalance in Phishing Website Detection

Abstract: Phishing website detection is one of the most challenging tasks in cyber security, and the performance of machine learning models on it is often degraded by class imbalance. This paper evaluates several resampling-based strategies, namely ROS, RUS, and SMOTE-based methods, in conjunction with XGBoost classifier techniques to handle such imbalanced datasets. Key performance measures include precision, recall, F1 score, ROC-AUC, and geometric mean score. Among the methods, SMOTE-NC-XGB performed best, with a precision of 98.0% and a recall of 98.5%, ensuring an effective trade-off between sensitivity and specificity. Although the stand-alone XGB model performs well on its own, adding resampling techniques raises its efficiency considerably, especially in cases of pronounced imbalance between classes. These results reveal that resampling techniques substantially enhance detection performance, with SMOTE-NC-XGB emerging as the best among those evaluated. This work contributes to the future development of phishing detection systems and to the investigation of new hybrid resampling methods.
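
The geometric mean score listed among the evaluation metrics is worth spelling out, since it is the one most robust to class imbalance. A minimal sketch in plain Python (illustrative only; real experiments would use a library implementation such as imbalanced-learn's):

```python
import math

def geometric_mean_score(y_true, y_pred):
    """G-mean for binary labels: sqrt(sensitivity * specificity).
    Both classes contribute equally, so a majority-class predictor
    cannot score well no matter how imbalanced the data is."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return math.sqrt(sens * spec)

# A predict-all-negative model has 95% accuracy here but a G-mean of 0:
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100
gmean = geometric_mean_score(y_true, y_pred)
```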

Author 1: Kamal Omari
Author 2: Chaimae Taoussi
Author 3: Ayoub Oukhatar

Keywords: Phishing website detection; class imbalance; XGBoost; SMOTE-NC

PDF

Paper 77: Deep Learning-Driven Detection of Terrorism Threats from Tweets Using DistilBERT and DNN

Abstract: As globalization accelerates, the threat of terrorist attacks poses serious challenges to national security and public safety. Traditional detection methods rely heavily on manual monitoring and rule-based surveillance, which lack scalability, adaptability, and efficiency in handling large volumes of real-time social media data. These approaches often struggle with identifying evolving threats, processing unstructured text, and distinguishing between genuine threats and misleading information, leading to delays in response and potential security lapses. To address these challenges, this study presents an advanced terrorism threat detection model that leverages DistilBERT with a Deep Neural Network (DNN) to classify Twitter data. The proposed approach efficiently extracts contextual and semantic information from textual content, enhancing the identification of potential terrorist threats. DistilBERT, a lightweight variant of BERT, is employed for its ability to process large volumes of text while maintaining high accuracy. The extracted embeddings are further analyzed using a Dense Neural Network, which excels at recognizing complex patterns. The model was trained and evaluated on a labeled dataset of tweets, achieving an impressive 93% accuracy. Experimental results demonstrate the model’s reliability in distinguishing between threatening and non-threatening tweets, making it an effective tool for early detection and real-time surveillance of terrorism-related content on social media. The findings highlight the potential of deep learning and natural language processing (NLP) in automated threat identification, surpassing traditional machine learning approaches. By integrating advanced NLP techniques, this model contributes to enhancing public safety, national security, and counter-terrorism efforts.

Author 1: Divya S
Author 2: B Ben Sujitha

Keywords: Terrorism; global safety; terrorist attacks; data mining; artificial intelligence; natural language processing; DistilBERT; deep neural network

PDF

Paper 78: Utilizing NLP to Optimize Municipal Services Delivery Using a Novel Municipal Arabic Dataset

Abstract: The natural language processing paradigm has emerged as a vital tool for addressing complex business challenges, mainly due to advancements in machine learning (ML), deep learning (DL), and Generative AI. Advanced NLP models have significantly enhanced the efficiency and effectiveness of NLP applications, enabling the seamless integration of various business processes to improve decision-making. In the municipal sector, the Kingdom of Saudi Arabia is trying to harness the power of NLP to promote urban development, city planning, and infrastructure enhancements, ultimately elevating the quality of life for its residents. In the municipal sector, approximately 300 services are available through multiple channels, including the Baladi application, unified communication services, WhatsApp, a dedicated beneficiary center (serving citizens and residents), and social media accounts. These channels are supported by a dedicated team that operates 24/7. This paper examines the implementation of ML and DL methods to categorize requests and suggestions submitted by residents for various municipal services in the Kingdom of Saudi Arabia. The primary aim of this work is to enhance service quality and reduce response times to community inquiries. However, a significant challenge arises from the lack of Arabic datasets specifically tailored to the municipal sector for training purposes, which limits meaningful progress. To address this issue, we have created a novel dataset consisting of 3,714 manually classified requests and suggestions collected from the X platform. This dataset is organized into eight classes: tree maintenance, lighting, construction waste, old and neglected assets, road conditions, visual pollution, billboards, and cleanliness. Our findings indicate that ML models, particularly when optimized with hyperparameters and appropriate pre-processing, outperformed DL models, achieving an F1 score of 90% compared to 88%. 
By releasing this novel Arabic dataset, which will be open sourced for the scientific community, we believe this work provides a foundational reference for further research and significantly contributes to improving the municipal sector's service delivery.

Author 1: Homod Hamed Alaloye
Author 2: Ahmad B. Alkhodre
Author 3: Emad Nabil

Keywords: Arabic text classification; machine learning (ML); deep learning (DL); hyperparameter optimization; municipal services

PDF

Paper 79: A Novel Hybrid Model Based on CEEMDAN and Bayesian Optimized LSTM for Financial Trend Prediction

Abstract: Financial time series prediction is inherently complex due to its nonlinear, nonstationary, and highly volatile nature. This study introduces a novel CEEMDAN-BO-LSTM model within a decomposition-optimization-prediction-integration framework to address these challenges. The Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) algorithm decomposes the original series into high-frequency, medium-frequency, low-frequency, and trend components, enabling precise time window selection. A Bayesian Optimization (BO) algorithm optimizes the parameters of a dual-layer Long Short-Term Memory (LSTM) network, enhancing prediction accuracy. By integrating predictions from each component, the model generates a comprehensive and reliable forecast. Experiments on 10 representative global stock indices reveal that the proposed model outperforms benchmark approaches across RMSE, MAE, MAPE, and R² metrics. The CEEMDAN-BO-LSTM model demonstrates robustness and stability, effectively capturing market fluctuations and long-term trends, even under high volatility.
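
The decomposition-prediction-integration structure can be illustrated with a toy sketch. This deliberately simplifies the paper's pipeline: a moving average stands in for CEEMDAN, naive extrapolation stands in for the BO-tuned LSTM, and only the component-wise forecast-then-sum structure is retained:

```python
def moving_average(series, window):
    """Trailing moving average used here as a crude trend extractor."""
    return [sum(series[max(0, i - window + 1): i + 1]) /
            len(series[max(0, i - window + 1): i + 1])
            for i in range(len(series))]

def decompose_forecast_integrate(series, window=3):
    """Toy decomposition-prediction-integration:
    trend = moving average, residual = series - trend;
    forecast trend by last-difference drift, residual by its last value;
    integrate by summing the component forecasts."""
    trend = moving_average(series, window)
    residual = [x - t for x, t in zip(series, trend)]
    trend_fc = trend[-1] + (trend[-1] - trend[-2])   # drift forecast
    resid_fc = residual[-1]                           # naive forecast
    return trend_fc + resid_fc

series = [1, 2, 3, 4, 5, 6]
forecast = decompose_forecast_integrate(series)
```

On this linear toy series the integrated forecast continues the trend to 7; in the paper each CEEMDAN component would instead be forecast by its own tuned LSTM before summation.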

Author 1: Yu Sun
Author 2: Sofianita Mutalib
Author 3: Liwei Tian

Keywords: LSTM; Bayesian optimization; CEEMDAN; financial time series; time window selection

PDF

Paper 80: Improving Performance with Big Data: Smart Supply Chain and Market Orientation in SMEs

Abstract: This study aims to explore the impact of big data-driven supply chain management, web analytics, and market orientation on corporate performance in medium-sized enterprises (MSEs) in Indonesia. By integrating these contemporary elements, the research seeks to provide insights into how digital technologies and strategic market practices can enhance organizational effectiveness. The study adopts a quantitative approach, utilizing survey data collected from 350 MSEs across various sectors in Indonesia. Purposive sampling was employed to ensure that the selected firms actively implement big data analytics and market-oriented strategies. Structural Equation Modeling (SEM) was conducted using SmartPLS to analyze the relationships among the variables. The findings reveal that big data-driven supply chain management and web analytics significantly contribute to improved corporate performance, with market orientation serving as a critical mediating factor. These results emphasize the importance of aligning digital tools with strategic business objectives to achieve competitive advantages. Furthermore, the study highlights the practical implications for MSEs, suggesting that integrating big data and web analytics into supply chain operations can optimize resource allocation, enhance decision-making, and foster market responsiveness. This research contributes to the literature on digital transformation and strategic management in emerging economies, offering a novel perspective on how MSEs can leverage technological advancements to remain competitive. Future studies may explore longitudinal impacts and sector-specific adaptations.

Author 1: Miftakul Huda
Author 2: Agus Rahayu
Author 3: Chairul Furqon
Author 4: Mokh Adib Sultan
Author 5: Nani Hartati
Author 6: Neng Susi Susilawati Sugiana

Keywords: Big data; supply chain management; web analytics; corporate performance; market orientation

PDF

Paper 81: Color Multi-Focus Image Fusion Method Based on Contourlet Transform

Abstract: Color Multi-Focus Image Fusion (MFIF) technology finds extensive use in areas such as microscopy, astronomy, and multi-scene photography where high-quality and detailed images are vital. This paper presents the Contourlet Transform alongside its enhanced version, the Non-Subsampled Contourlet Transform (NSCT), aimed at improving the outcomes of image fusion, with the support of Laplacian Pyramid (LP) decomposition. The NSCT framework overcomes challenges like spectral aliasing and directional sensitivity, leading to images with sharper edges, enriched texture details, and preserved delicate information. Experimental findings highlight the NSCT-based fusion algorithm's superiority. Subjective assessments indicate that using the NSCT method results in images with sharp and well-defined object boundaries, outstanding contrast, and abundant textures without the creation of artifacts, markedly excelling beyond traditional techniques such as the Contourlet Transform, Non-Subsampled Shearlet Transform (NSST) and Rolling Guidance Filtering (RGF). Objective measures verify its effectiveness: In the first dataset, it attains an average gradient (AG) of 8.36 and an edge intensity (EI) of 3.29E-04, while in the second dataset, it reports an AG of 21.39 and an EI of 4.06E-04, significantly outperforming other methods. Moreover, the NSCT method offers competitive computational speed, balancing runtime with high-quality fusion performance. These results establish the proposed method as a powerful and efficient solution for color MFIF, offering notable performance benefits and practical utility in various imaging fields.
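
The Laplacian Pyramid decomposition that supports the fusion scheme can be sketched in one dimension. The blur kernel, nearest-neighbour upsampling, and two-level depth are illustrative choices, not the paper's settings; the property shown is that storing per-level detail bands allows exact reconstruction:

```python
def upsample(sig, n):
    """Nearest-neighbour upsample back to length n."""
    return [sig[min(i // 2, len(sig) - 1)] for i in range(n)]

def lp_decompose(signal, levels=2):
    """1-D Laplacian pyramid: repeatedly blur and downsample, storing at
    each level the detail band (signal minus upsampled blur), plus the
    final coarse signal."""
    pyramid = []
    cur = list(signal)
    for _ in range(levels):
        low = [(cur[max(i - 1, 0)] + 2 * cur[i] + cur[min(i + 1, len(cur) - 1)]) / 4
               for i in range(len(cur))]            # small binomial blur
        down = low[::2]
        pyramid.append([c - u for c, u in zip(cur, upsample(down, len(cur)))])
        cur = down
    pyramid.append(cur)                              # coarse residual
    return pyramid

def lp_reconstruct(pyramid):
    """Invert the decomposition: upsample the coarse band, add details."""
    cur = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        cur = [u + d for u, d in zip(upsample(cur, len(detail)), detail)]
    return cur

signal = [1.0, 5.0, 2.0, 8.0, 3.0, 7.0, 4.0, 6.0]
restored = lp_reconstruct(lp_decompose(signal, levels=2))
```

Fusion methods like the one above operate by merging the detail bands of two decomposed images before reconstructing; the exact-reconstruction property is what makes that merge lossless apart from the fusion rule itself.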

Author 1: Zhifang Cai

Keywords: Contourlet transform; image fusion; NSCT; Laplacian Pyramid; color multi-focus

PDF

Paper 82: Enhanced Colon Cancer Prediction Using Capsule Networks and Autoencoder-Based Feature Selection in Histopathological Images

Abstract: The malignant development of cells in the colon or rectum is known as colon cancer, and because of its high incidence and possibility for death, it is a serious health problem. Because the disease frequently advances without symptoms in its early stages, early identification is essential. Improved survival rates and more successful therapy depend on an early and accurate diagnosis. The reliability of early detection can be impacted by problems with traditional diagnostic procedures, such as high false-positive rates, insufficient sensitivity, and inconsistent outcomes. To overcome these problems, this study proposes a novel approach to colon cancer diagnosis that uses autoencoder-based feature selection, capsule networks (CapsNets), and histopathology images. CapsNets capture spatial hierarchies in visual input, improving pattern identification and classification accuracy. When employed for feature extraction, autoencoders reduce dimensionality, highlight important features, and eliminate noise, all of which enhance model performance. The suggested approach produced remarkable outcomes, with a 99.2% accuracy rate. The model's strong capacity to detect cancerous lesions with few mistakes is demonstrated by its high accuracy in differentiating between malignant and non-malignant tissues. This study represents a substantial development in cancer detection technology by merging autoencoders with Capsule Networks, overcoming the shortcomings of existing approaches and offering a more dependable tool for early diagnosis. This method may improve patient outcomes, provide more individualized treatment regimens, and boost diagnostic accuracy.

Author 1: Janjhyam Venkata Naga Ramesh
Author 2: F. Sheeja Mary
Author 3: S. Balaji
Author 4: Divya Nimma
Author 5: Elangovan Muniyandy
Author 6: A. Smitha Kranthi
Author 7: Yousef A. Baker El-Ebiary

Keywords: Colon cancer prediction; capsule network; autoencoder; histopathological images; early cancer detection

PDF

Paper 83: Revolutionizing AI Governance: Addressing Bias and Ensuring Accountability Through the Holistic AI Governance Framework

Abstract: Artificial intelligence (AI) possesses the capacity to transform numerous facets of our existence; however, it concomitantly engenders considerable risks associated with bias and discrimination. This article explores emerging technologies like Explainable AI (XAI), Fairness Metrics (FMs), and Adversarial Learning (AL) for bias mitigation while emphasizing the critical role of transparency, accountability, and continuous monitoring and evaluation in AI governance. The Holistic AI Governance Framework (HAGF) is introduced, featuring a comprehensive, five-layered structure that integrates top-down and bottom-up strategies. HAGF prioritizes foundational principles and resource allocation, outlining five lifecycle-specific phases. Unlike the OECD AI Principles, which offer a general ethical framework lacking holistic perspective and resource allocation guidance, and the Berkman Klein Center's Model, which provides a broad framework but omits resource allocation and detailed implementation, HAGF offers actionable mechanisms. Tailored Key Performance Indicators (KPIs) are proposed for each HAGF layer, enabling ongoing refinement and adaptation to the evolving AI landscape. While acknowledging the need for enhancements in data governance and enforcement, the embedded KPIs ensure accountability and transparency, positioning HAGF as a pivotal framework for navigating the complexities of ethical AI.

Author 1: Ibrahim Atoum

Keywords: Artificial intelligence; framework; bias; discrimination; governance; key performance indicators

PDF

Paper 84: Enhanced Early Detection of Diabetic Nephropathy Using a Hybrid Autoencoder-LSTM Model for Clinical Prediction

Abstract: Early detection and precise prediction are essential in medical diagnosis, particularly for diseases such as diabetic nephropathy (DN), which tends to go undiagnosed at its early stages. Conventional diagnostic techniques may lack sensitivity and timeliness, making early intervention difficult. This research delves into the application of a hybrid Autoencoder-LSTM model to improve DN detection. The Autoencoder (AE) unit compresses clinical data, preserving important features while reducing dimensionality. The Long Short-Term Memory (LSTM) network subsequently processes temporal patterns and sequential dependencies, enhancing feature learning for timely diagnosis. The dataset includes clinical and demographic information from diabetic patients, covering variables such as age, sex, type of diabetes, duration of disease, smoking, and alcohol use. The model is implemented in Python and outperforms conventional methods. The proposed Hybrid AE-LSTM model attains an accuracy of 99.2%, a 6.68% improvement over Random Forest (RF), Support Vector Machine (SVM), and Logistic Regression. The findings demonstrate the power of deep learning in detecting DN early and accurately and present a novel tool for proactive disease control among diabetic patients.

Author 1: U. Sudha Rani
Author 2: C. Subhas

Keywords: Autoencoder-LSTM; diabetic nephropathy; early disease detection; machine learning; clinical data analysis; hybrid models

PDF

Paper 85: A Review of Cybersecurity Challenges and Solutions for Autonomous Vehicles

Abstract: With the continuously increasing demand for new technologies, many concepts have emerged in recent decades, and the Internet of Things (IoT) is one of the most popular. IoT is revolutionizing several aspects of human life with a large range of applications, including the transportation sector. Based on IoT technologies and Artificial Intelligence, new-generation vehicles are being developed with autonomous or self-driving capabilities to handle transportation in future smart cities. As a remedy for human errors that lead to accidents, traffic congestion, and disruptions, autonomous vehicles are presented as an alternative solution to increase traffic safety, efficiency, and mobility. However, by moving from a human-based to a computer-based driving style, the transportation sector inherits existing cybersecurity challenges. Due to their connectivity and data-driven decision-making, the security of autonomous vehicles is a major concern, since it involves human safety in addition to economic losses. In this paper, a comprehensive review is conducted to discuss the security threats and existing solutions for autonomous vehicles. In addition, open security challenges are discussed to guide further investigation toward trusted and widespread deployment of autonomous vehicles.

Author 1: Lasseni Coulibaly
Author 2: Damien Hanyurwimfura
Author 3: Evariste Twahirwa
Author 4: Abubakar Diwani

Keywords: Internet of Things; smart transportation; autonomous vehicles; cybersecurity

PDF

Paper 86: Handling Imbalanced Data in Medical Records Using Entropy with Minkowski Distance

Abstract: Medical records are essential for disease detection and help establish a diagnosis. Machine-learning-based early disease detection and diagnosis frequently suffer from imbalanced classification, with accuracy reduced because positive (diseased) patients are far fewer than normal individuals. To improve accuracy, a classification architecture is proposed that modifies the oversampling method SMOTE to use the Minkowski distance and adds entropy as a weight estimate to determine the number of synthetic samples to generate. Feature selection adopts a hybrid Particle Swarm Optimisation and Grey Wolf Optimisation approach (PSO-GWO). Dataset selection covered high, medium, and low data dimensions based on the number of features and the total number of dataset samples. Six classification algorithms were compared using datasets involving diabetes, heart disease, and breast cancer. The final classification results indicated an average accuracy of 74% for diabetes, 83% for heart disease, and 96% for breast cancer. The proposed approach successfully handles imbalance in medical record data, outperforming Naïve Bayes, Logistic Regression, Support Vector Machine (SVM), and Random Forest classification approaches.
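
The Minkowski-distance SMOTE modification can be sketched as follows. This is an illustrative reading, not the authors' implementation: the entropy-based weighting of sample counts is omitted, and the neighbourhood size k, the Minkowski order p, and the toy data are assumptions:

```python
import random

def minkowski(a, b, p):
    """Minkowski (order-p) distance; p=2 recovers Euclidean, p=1 Manhattan."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def smote_minkowski(minority, n_new, k=3, p=3, seed=0):
    """SMOTE-style oversampling with Minkowski neighbourhoods: for each
    synthetic point, pick a minority sample, find its k nearest minority
    neighbours under the Minkowski distance, and interpolate toward one."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not base),
                            key=lambda m: minkowski(base, m, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
synthetic = smote_minkowski(minority, n_new=5)
```

Because each synthetic point is a convex combination of two minority samples, all generated points stay within the minority region, which is the core SMOTE guarantee the distance swap preserves.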

Author 1: Lastri Widya Astuti
Author 2: Ermatita
Author 3: Dian Palupi Rini

Keywords: Medical record; imbalanced data; classification; distance; entropy

PDF

Paper 87: IoMT-Enabled Noninvasive Lungs Disease Detection and Classification Using Deep Learning-Based Analysis of Lungs Sounds

Abstract: Noninvasive and accurate methods for diagnosing respiratory diseases are essential to improving healthcare outcomes. The Internet of Medical Things (IoMT) is critical in driving developments in this field. This work presents an IoMT-enabled approach for lung disease detection and classification, using deep learning techniques to analyze lung sounds. The proposed approach uses three datasets: the Respiratory Sound, the Coronahack Respiratory Sound, and the Coswara Sound. Traditional machine learning models, including the Extra Tree Classifier and AdaBoost Classifier, are used to benchmark performance. The Extra Tree Classifier achieved accuracies of 94.12%, 95.23%, and 94.21% across the datasets, while the AdaBoost Classifier showed improvements with 95.42%, 96.33%, and 94.76%. The proposed deep neural network (DNN) achieves accuracies of 98.92%, 99.33%, and 99.36% for the same datasets. This study explores the transformative potential of the IoMT in augmenting diagnostic precision and advancing the field of respiratory healthcare.

Author 1: Muhammad Sajid
Author 2: Wareesa Sharif
Author 3: Ghulam Gilanie
Author 4: Maryam Mazher
Author 5: Khurshid Iqbal
Author 6: Muhammad Afzaal Akhtar
Author 7: Muhammad Muddassar
Author 8: Abdul Rehman

Keywords: Deep learning; respiratory sound; Coronahack respiratory sound; Coswara sound; IoMT

PDF

Paper 88: Readmission Risk Prediction After Total Hip Arthroplasty Using Machine Learning and Hyperparameter Optimized with Bayesian Optimization

Abstract: Machine learning techniques are increasingly used in orthopaedic surgery to assess risks such as length of stay, complications, infections, and mortality, offering an alternative to traditional methods. However, model performance varies depending on private institutional data, and optimizing hyperparameters for better predictions remains a challenge. This study incorporates automatic hyperparameter tuning to improve readmission prediction in orthopaedics using a public medical dataset. Bayesian Optimization was applied to optimize hyperparameters for seven machine learning algorithms—Extreme Gradient Boosting, Stochastic Gradient Boosting, Random Forest, Support Vector Machine, Decision Tree, Neural Network, and Elastic-net Penalized Logistic Regression—predicting readmission risk after Total Hip Arthroplasty (THA). Data from the MIMIC-IV database, including 1,153 THA patients, was used. Model performance was evaluated using Precision, Recall, and AUC-ROC, comparing optimized algorithms to those without hyperparameter tuning from previous studies. The optimized Extreme Gradient Boosting algorithm achieved the highest AUC-ROC of 0.996, while other models also showed improved accuracy, precision, and recall. This research successfully developed and validated optimized machine learning models using Bayesian Optimization, enhancing readmission prediction following THA based on patient demographics and preoperative diagnosis. The results demonstrate superior performance compared to prior studies that either lacked hyperparameter optimization or relied on exhaustive search methods.
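
The sequential loop behind Bayesian Optimization can be sketched with a deliberately simplified surrogate. A real implementation would use a Gaussian process surrogate and an acquisition function such as expected improvement; here a nearest-neighbour surrogate with a distance-based exploration bonus merely illustrates the evaluate-model-propose cycle on a toy one-dimensional objective:

```python
import random

def smbo_minimize(objective, bounds, n_init=5, n_iter=20, seed=0):
    """Sketch of sequential model-based optimization: evaluate a few random
    points, then repeatedly propose the candidate whose surrogate score
    (value of the nearest evaluated point minus an exploration bonus that
    grows with distance from all evaluated points) is lowest."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_init)]
    ys = [objective(x) for x in xs]
    for _ in range(n_iter):
        candidates = [rng.uniform(lo, hi) for _ in range(50)]

        def acquisition(c):
            d, y = min((abs(c - x), y) for x, y in zip(xs, ys))
            return y - 0.5 * d    # favour good regions, reward exploration
        x_next = min(candidates, key=acquisition)
        xs.append(x_next)
        ys.append(objective(x_next))
    y_best, x_best = min(zip(ys, xs))
    return x_best, y_best

# Toy "hyperparameter" objective with its minimum at x = 2:
x_best, y_best = smbo_minimize(lambda x: (x - 2) ** 2, bounds=(0.0, 5.0))
```

In the study's setting, the objective would be a cross-validated loss of, say, an XGBoost model as a function of its hyperparameters, and each evaluation is expensive, which is why a surrogate-guided search beats exhaustive grid search.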

Author 1: Intan Yuniar Purbasari
Author 2: Athanasius Priharyoto Bayuseno
Author 3: R. Rizal Isnanto
Author 4: Tri Indah Winarni

Keywords: Total hip arthroplasty; orthopaedic surgery; Bayesian Optimization; machine learning algorithm; hyperparameter optimization

PDF

Paper 89: Forecasting Models for Predicting Global Supply Chain Disruptions in Trade Economics

Abstract: Global supply chain disruptions have evolved into a critical challenge for trade economics, reaching across industries and economies around the globe. The ability to foresee these disruptions is crucial for policymakers, businesses, and supply chain managers who want to develop actionable strategies for stability. This paper analyzes the application of forecasting models to predict global supply chain disruptions, along with their efficacy and limitations. A comparison of statistical, machine learning, and hybrid models is performed, and the best methods are identified for predicting disruptions arising from geopolitical events, pandemics, natural disasters, and other external factors. The study considers real-world datasets and various scenario analyses to provide actionable insights. The key findings were obtained by integrating various sources of information, including trade volume fluctuations, transportation bottlenecks, and economic indicators, into predictive frameworks. The novel contribution of this research is an advanced forecasting model that can strengthen the resilience and elasticity of global supply chains, ultimately supporting the sustainability of trade economics.

Author 1: Limei Fu

Keywords: Supply chain disruptions; forecasting models; trade economics; predictive analytics; global resilience

PDF

Paper 90: Developing an IoT Testing Framework for Autonomous Ground Vehicles

Abstract: Autonomous ground vehicles play a crucial role in the Internet of Things, offering transformative potential for applications such as urban transportation and delivery services. These vehicles can operate autonomously in uncertain environments, making reliable testing essential. This study develops and analyzes a testing framework for autonomous ground vehicles, focusing on their motion control systems and electronic modules. The research reviews testing methods for printed circuit boards (PCBs), highlighting the need for JTAG testing implementation for vehicle modules. Functional testing was conducted on key components such as cameras, LiDARs, and wireless interfaces under various conditions. Results show that JTAG testing successfully detects faults with precise localization, while functional tests confirm stable component performance. Environmental tests revealed that most components perform reliably within optimal conditions, with failures occurring at temperatures beyond ±70°C and humidity levels exceeding 90% RH. The developed testing system enhances the reliability of autonomous delivery vehicles.

Author 1: Murat Tashkyn
Author 2: Amanzhol Temirbolat
Author 3: Nurlybek Kenes
Author 4: Amandyk Kartbayev

Keywords: Autonomous vehicle; testing system; IoT; functional testing; electronic modules; delivery automation

PDF

Paper 91: AI-Powered Intelligent Speech Processing: Evolution, Applications and Future Directions

Abstract: This paper provides an overview of the historical evolution of speech recognition, synthesis, and processing technologies, highlighting the transition from statistical models to deep learning-based models. Firstly, the paper reviews the early development of speech processing, tracing it from the rule-based and statistical models of the 1960s to the deep learning models, such as deep neural networks (DNN), convolutional neural networks (CNN), and recurrent neural networks (RNN), which have dramatically reduced error rates in speech recognition and synthesis. It emphasizes how these advancements have led to more natural and accurate speech outputs. Then, the paper examines three key learning paradigms used in speech recognition: supervised, self-supervised, and semi-supervised learning. Supervised learning relies on large amounts of labeled data, while self-supervised and semi-supervised learning leverage unlabeled data to improve generalization and reduce reliance on manually labeled datasets. These paradigms have significantly advanced the field of speech recognition. Furthermore, the paper explores the wide-ranging applications of AI-driven speech processing, including smart homes, intelligent transportation, healthcare, and finance. By integrating AI with technologies like the Internet of Things (IoT) and big data, speech technology is being applied in voice assistants, autonomous vehicles, and speech-controlled devices. The paper also addresses the current challenges facing intelligent speech processing, such as performance issues in noisy environments, the scarcity of data for low-resource languages, and concerns related to data privacy, algorithmic bias, and legal responsibility. Overcoming these challenges will be crucial for the continued progress of the field. Finally, the paper looks to the future, predicting further improvements in speech processing technology through advancements in hardware and algorithms. 
It anticipates increased focus on personalized services, real-time speech processing, and multilingual support, along with growing integration with other technologies such as augmented reality. Despite the technical and ethical challenges, AI-driven speech processing is expected to continue its transformative impact on society and industry.

Author 1: Ziqing Zhang

Keywords: Intelligent speech recognition; AI speech synthesis; speech processing; AI technology

PDF

Paper 92: An Enhanced Whale Optimization Algorithm Based on Fibonacci Search Principle for Service Composition in the Internet of Things

Abstract: Service composition in the Internet of Things (IoT) poses significant challenges owing to the dynamics of IoT ecosystems and the exponential increase in service candidates. This paper proposes an Enhanced Whale Optimization Algorithm (EWOA) that introduces the Fibonacci search principle into service composition optimization to overcome certain shortcomings of conventional approaches, including slow convergence, entrapment in local optima, and imbalanced exploration-exploitation trade-offs. The proposed EWOA combines nonlinear crossover weights with a Fibonacci search to improve both the global exploration and the local exploitation of the basic algorithm, thereby producing better solutions. Several simulations were performed on IoT service composition tasks. Across experiments involving different QoS-based service compositions, the results show that EWOA achieves better solutions and faster convergence than recent methods.
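
The Fibonacci search principle that EWOA borrows can be illustrated in its classical one-dimensional form, where interval-reduction ratios follow consecutive Fibonacci numbers and one interior function evaluation is reused per step. A minimal sketch (illustrative only; the paper embeds the principle inside the whale optimizer rather than using it standalone):

```python
def fibonacci_search(f, lo, hi, n=20):
    """Fibonacci search for the minimum of a unimodal f on [lo, hi].
    The bracket shrinks by ratios of consecutive Fibonacci numbers, and
    each step needs only one new evaluation because one probe is reused."""
    fib = [1, 1]
    while len(fib) < n + 1:
        fib.append(fib[-1] + fib[-2])
    x1 = lo + fib[n - 2] / fib[n] * (hi - lo)
    x2 = lo + fib[n - 1] / fib[n] * (hi - lo)
    f1, f2 = f(x1), f(x2)
    for k in range(n - 1, 1, -1):
        if f1 > f2:                      # minimum lies in [x1, hi]
            lo, x1, f1 = x1, x2, f2      # reuse the surviving probe
            x2 = lo + fib[k - 1] / fib[k] * (hi - lo)
            f2 = f(x2)
        else:                            # minimum lies in [lo, x2]
            hi, x2, f2 = x2, x1, f1
            x1 = lo + fib[k - 2] / fib[k] * (hi - lo)
            f1 = f(x1)
    return (lo + hi) / 2

x_min = fibonacci_search(lambda x: (x - 3) ** 2 + 1, 0.0, 5.0)
```

In the EWOA setting this evaluation-thrifty local refinement complements the whale algorithm's global search, which matches the abstract's exploration-exploitation framing.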

Author 1: Yun CUI

Keywords: Service composition; Internet of Things; quality of service; whale optimization; Fibonacci search

PDF

Paper 93: SQRCD: Building Sustainable and Customer Centric DFIS for the Industry 5.0 Era

Abstract: Artificial Intelligence (AI) is considered a major turning point for the financial industry. The introduction of Artificial General Intelligence (AGI) enhances capability in every area where AI already shows its power. The development of AGI is driven by the need for more advanced automation through quick responsiveness, customization/personalization, and refined decision-making capabilities across industries. The current study discusses respondents' views on the adoption of an AGI-enabled Sustainability, Quick responsiveness, Risk management, Customer-centricity, and Data privacy (SQRCD) system in a Digital Financial Inclusion System (DFIS). A total of 630 responses were collected from respondents belonging to 90 different finance institutes. The results show that the SQRCD factors had a significant positive relationship with the attitude toward adopting an AGI-enabled SQRCD system. Three cultural dimensions of Hofstede's theory, the power distance index, collectivism-individualism, and uncertainty acceptance, are taken as moderators, and their effects are observed across the different relationships. The study develops direct hypotheses to analyze the adoption of a new financial system that includes the factors above. The results are beneficial for developing a renewed financial system in which these parameters are essential for Industry 5.0.

Author 1: Ruchira Rawat
Author 2: Himanshu Rai Goyal
Author 3: Sachin Sharma
Author 4: Bina Kotiyal

Keywords: Artificial General Intelligence (AGI); Digital Financial Inclusion System (DFIS); industry 5.0; customer-centric; sustainability

PDF

Paper 94: Efficient Personalized Federated Learning Method with Adaptive Differential Privacy and Similarity Model Aggregation

Abstract: In recent years, personalized federated learning (PFL) has garnered significant attention due to its potential for safeguarding data privacy while addressing data heterogeneity across clients. However, existing PFL approaches remain vulnerable to privacy breaches, particularly under adversarial inference and client-side data reconstruction attacks. To address these concerns, we propose DP-FedSim, a novel PFL framework incorporating adaptive differential privacy mechanisms. First, to mitigate the limitations posed by fixed-layer personalization strategies, we evaluate parameter significance using the Fisher information matrix. By selectively retaining parameters with higher Fisher values, DP-FedSim reduces the noise impact, enabling more efficient dynamic personalization. Second, we introduce a layered adaptive gradient clipping method. By leveraging the mean and standard deviation of the gradients within each layer, this method allows DP-FedSim to automatically adjust clipping thresholds in response to real-time privacy demands and model states, enhancing adaptability to various model structures and ensuring a more accurate balance between privacy preservation and model performance. Furthermore, we present a model similarity-based aggregation method utilizing cosine similarity. This technique dynamically adjusts each client's contribution to the global model update, prioritizing clients whose models are more similar to the global model. This improves the global model's performance and generalization by allowing DP-FedSim to better handle a variety of data distributions and client model attributes. Experimental results on the SVHN and CIFAR-10 datasets show that DP-FedSim outperforms state-of-the-art PFL algorithms by an average of 5% when data heterogeneity is at its strongest. The efficiency of the proposed modules is validated by ablation tests, and the visualization results shed light on the reasoning behind important hyperparameter settings.
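The cosine-similarity aggregation idea can be sketched with numpy over flattened client parameter vectors; the clip-at-zero choice and the uniform-averaging fallback below are assumptions, not DP-FedSim's exact rule:

```python
import numpy as np

def similarity_aggregate(global_model, client_models):
    """Average client models, weighting each by its cosine similarity
    to the current global model (negative similarities clipped to 0)."""
    g = np.asarray(global_model, dtype=float)
    clients = [np.asarray(c, dtype=float) for c in client_models]
    sims = np.array([max(float(g @ c / (np.linalg.norm(g) * np.linalg.norm(c))), 0.0)
                     for c in clients])
    if sims.sum() == 0.0:                      # fall back to plain averaging
        weights = np.full(len(clients), 1.0 / len(clients))
    else:
        weights = sims / sims.sum()            # normalize contributions
    return sum(w * c for w, c in zip(weights, clients))

print(similarity_aggregate([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))
```

Here an orthogonal (dissimilar) client contributes nothing, while an aligned client dominates the update.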

Author 1: Shiqi Mao
Author 2: Fangfang Shan
Author 3: Shuaifeng Li
Author 4: Yanlong Lu
Author 5: Xiaojia Wu

Keywords: Federated learning; differential privacy; gradient clipping; model aggregation

PDF

Paper 95: Smart Night-Vision Glasses with AI and Sensor Technology for Night Blindness and Retinitis Pigmentosa

Abstract: This paper presents the conceptualization of Smart Night-Vision Glasses, an innovative assistive device aimed at individuals with night blindness and Retinitis Pigmentosa (RP). These conditions, characterized by significant difficulty in seeing in low-light or dark environments, currently have no effective medical solution. The proposed glasses utilize advanced sensor technologies such as LiDAR, infrared, and ultrasonic sensors, combined with artificial intelligence (AI), to create a real-time, visual representation of the surroundings. Unlike conventional camera-based systems, which require light to function, this device relies on non-visible, non-harmful rays to detect environmental data, making it suitable for use in pitch-dark conditions. The AI processes the sensor data to generate a simplified, user-friendly view of the environment, outlined with clear, cartoon-like visuals for easy identification of objects, obstacles, and surfaces. The glasses are designed to look like regular prescription eyewear, ensuring comfort and discretion, while a button or trigger can switch them to "night mode" for enhanced vision in low-light settings. This concept aims to improve the independence, safety, and quality of life for individuals with night blindness and RP, offering a transformative solution where no medical alternatives currently exist. However, challenges such as sensor miniaturization, power consumption, and AI integration must be addressed for successful implementation. Beyond its direct benefits for users, the device could have broader societal and economic impacts by enhancing accessibility, reducing nighttime accidents, and fostering technological innovation in assistive wearables. The paper also discusses future directions for research and refinement of the technology while supporting the Process Innovation.

Author 1: Shaheer Hussain Qazi
Author 2: M. Batumalay

Keywords: Night-vision; glasses; night blindness; Retinitis Pigmentosa (RP); IoT; assistive technology; sensor technology; AI; data processing; low-light navigation; wearable devices; process innovation

PDF

Paper 96: Comparative Analysis of Cardiac Disease Classification Using a Deep Learning Model Embedded with a Bio-Inspired Algorithm

Abstract: Cardiac disease classification is a crucial task in healthcare aimed at early diagnosis and prevention of cardiovascular complications. Traditional methods such as machine learning models often face challenges in handling high-dimensional and noisy datasets, as well as in optimizing model performance. In this study, we propose and compare a novel approach for heart disease prediction using deep learning models combined with bio-inspired algorithms. The integration of deep learning techniques allows for automatic feature learning and complex pattern recognition from raw data, while bio-inspired algorithms provide optimization capabilities for enhancing model accuracy and generalization. Specifically, the cuckoo search algorithm and the elephant herding optimization algorithm are employed to optimize the architecture and hyperparameters of the deep learning models, facilitating the exploration of diverse model configurations and parameter settings. This hybrid approach enables the development of highly effective predictive models by efficiently leveraging the complementary strengths of deep learning and bio-inspired optimization. Experimental results on benchmark heart disease datasets demonstrate the superior performance of the proposed method compared to conventional approaches, achieving higher accuracy and robustness in predicting heart disease risk. The proposed framework holds significant promise for advancing the state-of-the-art in heart disease prediction and facilitating personalized healthcare interventions for at-risk individuals.

Author 1: Nandakumar Pandiyan
Author 2: Subhashini Narayan

Keywords: Cardiac disease; heart disease; bio-inspired; machine learning; deep learning; prediction; classification

PDF

Paper 97: Quantum Swarm Intelligence and Fuzzy Logic: A Framework for Evaluating English Translation

Abstract: This study introduces the Quantum Swarm-Driven Fuzzy Evaluation Framework (QSI-Fuzzy) for assessing English translation software across multiple domains and criteria. The principal aim is to develop a scalable, adaptive, and interpretable evaluation framework that optimizes dynamic weight assignments while managing linguistic uncertainties. A major challenge in translation software evaluation lies in ensuring accurate and unbiased assessments of semantic accuracy, fluency, efficiency, and user satisfaction, particularly across diverse domains such as Legal, Medical, and Conversational contexts. To address this, QSI-Fuzzy integrates Quantum Swarm Intelligence (QSI) for dynamic weight optimization with fuzzy logic for handling linguistic uncertainties, ensuring robust and adaptive decision-making. Experimental results demonstrate that QSI-Fuzzy outperforms benchmark algorithms including Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Simulated Annealing (SA), achieving faster convergence (55 iterations on average vs. 120 for SA) and exhibiting greater robustness under noisy conditions (maintaining a performance score of 0.80 at 20% noise, compared to 0.70, 0.68, and 0.65 for GA, PSO, and SA, respectively). These findings confirm that QSI-Fuzzy provides an efficient, scalable, and high-performance solution for translation software evaluation, with broader implications for real-time systems, complex decision-making, and multi-domain optimization challenges.
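The abstract does not specify QSI-Fuzzy's membership functions; a standard triangular membership function, shown here purely as a hedged illustration of the fuzzy-logic layer, maps a crisp criterion score (e.g., fluency) to a membership degree:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at the peak b,
    falling back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# e.g. "medium fluency" centered at 5 on a 0-10 scale (illustrative values)
print(tri_membership(6.0, 0.0, 5.0, 10.0))
```

A full framework would combine such degrees across criteria under the weights the quantum swarm optimizer produces.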

Author 1: Pei Yang

Keywords: English translation software; quantum swarm intelligence; fuzzy logic; multi-domain evaluation; optimization; linguistic performance analysis

PDF

Paper 98: Optimizing Athlete Workload Monitoring with Supervised Machine Learning for Running Surface Classification Using Inertial Sensors

Abstract: Monitoring athlete movement is important to improve performance, reduce fatigue, and decrease the likelihood of injury. Advanced technologies, including computer vision and inertial sensors, have been widely explored in classifying sport-specific movements. Combining automated sports action labeling with athlete-monitoring data provides an effective approach to enhance workload analysis. Recent studies on categorizing sport-specific movements show a trend toward training and evaluation methods based on individual athletes, allowing models to capture unique features peculiar to each athlete. This is particularly beneficial for movements that exhibit large variations in technique between athletes. The current study uses supervised machine learning models, including Neural Networks and Support Vector Machines (SVM), to distinguish between running surfaces, namely, athletics track, hard sand, and soft sand, using features extracted from an upper-back inertial measurement unit (IMU) sensor. Principal Component Analysis (PCA) is applied for feature selection and dimensionality reduction, enhancing model efficiency and interpretability. Our results show that athlete-dependent training approaches considerably enhance the classification performance compared to athlete-independent approaches, achieving higher weighted average precision, recall, F1-score, and accuracy (p < 0.05).
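The PCA-plus-SVM pipeline described above can be sketched with scikit-learn; the IMU features below are synthetic stand-ins for the study's real upper-back sensor data, so the numbers are illustrative only:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in: 3 surface classes (track, hard sand, soft sand),
# 60 windows each, 20 extracted IMU features per window
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(60, 20)) for c in range(3)])
y = np.repeat([0, 1, 2], 60)

# Standardize, reduce dimensionality with PCA, then classify with an SVM
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```

An athlete-dependent setup, as the study recommends, would fit one such pipeline per athlete rather than pooling all athletes' windows.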

Author 1: WenBin Zhu
Author 2: QianWei Zhang
Author 3: SongYan Ni

Keywords: Athlete monitoring; machine learning models; running surface classification; Inertial Measurement Units (IMU); neural networks; Support Vector Machines (SVM); Principal Component Analysis (PCA)

PDF

Paper 99: LDA-Based Topic Mining for Unveiling the Outstanding Universal Value of Solo Keroncong Music as an Intangible Cultural Heritage of UNESCO

Abstract: Outstanding Universal Value (OUV) is an essential value of culture and nature, so extraordinary that it transcends national boundaries and becomes crucial for all of humanity's current and future generations. A culture with this value needs permanent protection because it is considered a critical heritage for the world community. Solo keroncong music, one form of local wisdom owned by the Indonesian nation, has yet to be recognized as UNESCO Intangible Cultural Heritage (ICH), even though it has become an instrument of Indonesia's soft power diplomacy in several countries, such as Malaysia, England, and the United States. To be included on the World Heritage List, it must be of OUV and meet at least one of ten selection criteria. This study explored the OUV of Solo keroncong music using Latent Dirichlet Allocation. The primary data were obtained through an FGD with the Indonesian Keroncong Music Artist Community (KAMKI) Surakarta and in-depth interviews with several keroncong figures in Solo. The results showed four topics with a coherence score of 0.51. An expert then mapped those four topics into three OUVs of Solo keroncong music as preliminary findings: keroncong music is a masterpiece of human creativity, a witness to civilization, and a bearer of traditional values. These findings show that Solo keroncong music is worthy of being proposed as a UNESCO ICH.

Author 1: Denik Iswardani Witarti
Author 2: Danis Sugiyanto
Author 3: Atik Ariesta
Author 4: Pipin Farida Ariyani
Author 5: Rusdah

Keywords: LDA; OUV; Solo keroncong; text mining; topic modeling

PDF

Paper 100: Enhancing Chronic Kidney Disease Prediction with Deep Separable Convolutional Neural Networks

Abstract: Chronic Kidney Disease (CKD) progressively impairs kidney function, compromising filtration, electrolyte balance, and blood pressure control. Early and precise prediction is necessary for successful disease management. This research demonstrates a new method involving Deep Separable Convolutional Neural Networks (DS-CNNs) for improving CKD prediction. Based on the Chronic Kidney Disease Dataset available on Kaggle, the model employs DS-CNNs combined with optimization techniques for better predictive accuracy. DS-CNNs utilize depthwise and pointwise convolutions to facilitate effective feature extraction and classification with efficient computation. To enhance model performance, the Learning Rate Warm-Up with Cosine Annealing technique is used to guarantee stable convergence and a controlled reduction of the learning rate. This solution remedies the inadequacies of traditional CKD detection methods, which are insensitive to early stages and entail expensive, invasive procedures. At 94.50% accuracy, the new DS-CNN model outperforms conventional methods, delivering better predictive performance. The results demonstrate the utility of deep learning and optimization in early detection of CKD and introduce a promising tool for enhanced clinical decision-making.
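The Learning Rate Warm-Up with Cosine Annealing schedule named above can be written down directly; the warm-up length and base rate below are illustrative assumptions, not the paper's settings:

```python
import math

def warmup_cosine_lr(step, total_steps, warmup_steps, base_lr=1e-3):
    """Linear warm-up from ~0 to base_lr, then cosine annealing to 0."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Rate at the start, end of warm-up, mid-training, and final step
print([round(warmup_cosine_lr(s, 100, 10), 6) for s in (0, 9, 55, 100)])
```

The warm-up avoids large early updates while batch statistics are still noisy; the cosine tail shrinks steps smoothly toward convergence.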

Author 1: Janjhyam Venkata Naga Ramesh
Author 2: P N S Lakshmi
Author 3: Thalakola Syamsundararao
Author 4: Elangovan Muniyandy
Author 5: Linginedi Ushasree
Author 6: Yousef A. Baker El-Ebiary
Author 7: David Neels Ponkumar Devadhas

Keywords: Chronic kidney disease; deep separable convolutional neural networks; learning rate warm-up with cosine annealing; predictive accuracy; optimization techniques

PDF

Paper 101: Hybrid Artificial Bee Colony and Bat Algorithm for Efficient Resource Allocation in Edge-Cloud Systems

Abstract: Integrating edge and cloud computing systems creates a powerful framework for real-time data processing and large-scale computation. However, efficient resource allocation and task scheduling remain outstanding challenges in these dynamic, heterogeneous environments. This paper proposes an innovative hybrid algorithm that combines the Bat Algorithm (BA) and the Artificial Bee Colony (ABC) algorithm to meet these challenges. The ABC algorithm's strong global search capabilities and the BA's efficient local exploitation are merged for effective task scheduling and resource allocation. The proposed hybrid algorithm adapts dynamically to changing conditions by balancing exploration and exploitation through periodic solution exchanges. Experimental evaluations highlight that the proposed algorithm minimizes execution time and resource-utilization costs while guaranteeing proper management of task dependencies using a Directed Acyclic Graph (DAG) model. Compared to available methods, the proposed hybrid technique delivers better performance metrics, with reduced makespan, improved resource utilization, and lower computational delays for resource optimization in an edge-cloud context.
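As a minimal sketch of the DAG dependency model mentioned above (standard library only, assuming unlimited workers so each task starts as soon as its dependencies finish — a simplification of the scheduling problem the hybrid algorithm actually optimizes):

```python
from graphlib import TopologicalSorter

def makespan(durations, deps):
    """Earliest-finish makespan of a task DAG: each task starts once all
    of its dependencies (deps[task] = set of predecessors) have finished."""
    finish = {}
    for task in TopologicalSorter(deps).static_order():
        start = max((finish[d] for d in deps.get(task, ())), default=0)
        finish[task] = start + durations[task]
    return max(finish.values())

# a -> b, a -> c: b and c run in parallel after a finishes
print(makespan({"a": 2, "b": 3, "c": 1}, {"b": {"a"}, "c": {"a"}}))
```

A scheduler like the proposed BA/ABC hybrid would search over task-to-node assignments, with a makespan function of this shape inside its fitness evaluation.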

Author 1: Jiao GE
Author 2: Bolin ZHOU
Author 3: Na LIU

Keywords: Cloud computing; edge computing; resource allocation; optimization; task scheduling

PDF

Paper 102: Pneumonia Detection Using Transfer Learning: A Systematic Literature Review

Abstract: Deep learning models have significantly improved pneumonia detection using X-ray image analysis in the field of AI-driven healthcare, showing a major advancement in the effectiveness of medical decision systems. In this paper, we have conducted a systematic literature review of pneumonia detection techniques that applied transfer learning combined with other methods. The review protocol was developed thoroughly and identifies recent research related to pneumonia detection from the past five years. We used well-known research repositories such as IEEE, Elsevier, Springer, and the ACM Digital Library. After a thorough search process, 35 papers were selected. The review summarizes the papers that have implemented different methods of pneumonia detection, and results are compared based on the best-performing models. These models are categorized into three approaches to pneumonia detection: deep learning methods, transfer learning techniques, and hybrid methods. A performance comparison of the best-performing models is then presented. This study concludes that while transfer learning holds substantial potential for improving pneumonia detection, further research is necessary to optimize these models for clinical application. This review helps researchers identify research gaps in pneumonia detection techniques and how these gaps can be addressed in the near future.

Author 1: Mohammed A M Abueed
Author 2: Danial Md Nor
Author 3: Nabilah Ibrahim
Author 4: Jean-Marc Ogier

Keywords: Pneumonia; machine learning; COVID-19; deep learning

PDF

Paper 103: Adaptive and Scalable Cloud Data Sharing Framework with Quantum-Resistant Security, Decentralized Auditing, and Machine Learning-Based Threat Detection

Abstract: The increasing prevalence of cloud environments makes it important to ensure secure and efficient data sharing among dynamic teams, especially regarding user access and revocation. The proposed framework builds on proxy re-encryption and hybrid authentication management schemes to increase scalability, flexibility, and adaptability, and explores a multi-proxy server architecture that distributes re-encryption tasks to improve fault tolerance and load balancing in large deployments. To eliminate the need for trusted third-party auditors, it integrates blockchain-based audit mechanisms for immutable, decentralized monitoring of data access and revocation events. To future-proof the system, it provides quantum-resistant cryptographic mechanisms for long-term security and employs machine learning to predict and address potential threats in real time. The proposed system also introduces fine-grained, multi-level access controls for data security and privacy, accommodating different user roles and data sensitivity levels. Evaluations indicate improvements in computing performance, security, and scalability, making the enhanced system effective for secure data sharing in dynamic, large-scale cloud environments.

Author 1: P Raja Sekhar Reddy
Author 2: Pulipati Srilatha
Author 3: Kanhaiya Sharma
Author 4: Sudipta Banerjee
Author 5: Shailaja Salagrama
Author 6: Manjusha Tomar
Author 7: Ashwin Tomar

Keywords: Blockchain audit; data security and privacy; machine learning; proxy re-encryption; quantum-resistant cryptography

PDF

Paper 104: ALE Model: Air Cushion Impact Characteristics of Seaplane Landing Application

Abstract: Seaplane landing is a strongly nonlinear gas-liquid-solid multiphase coupling problem; the coupled impact characteristics of the air cushion are very complicated, and it is difficult to maintain airframe stability. In this paper, the ALE method is used to study seaplane landing at different initial attitude angles and velocities. First, a comparative study of the structure water-entry model and the air-cushion-effect model of a flat plate impacting the water surface is conducted to verify the reliability of the numerical model, and the influences of velocity, water surface shape, and the air cushion are analyzed. Then, seaplane landing is systematically studied, and the vertical acceleration, attitude angle, aircraft impact force, and flow field distribution are analyzed. The results show that the air cushion has a great influence on seaplane landing. The smaller the initial horizontal velocity, the more obvious the cushioning effect of the air cushion. Cavitation causes a secondary impact on the tail and produces a pressure value exceeding the initial value, which may damage the aircraft structure. The air cushion has a buffering effect on the seaplane: the pitch angle increases at a slower rate and the pressure at the monitoring point decreases. The larger the initial attitude angle, the more significant the air cushion effect. By analyzing the landing behavior of the seaplane, the range of speed and attitude angle suitable for takeoff and landing is given. These results can provide theoretical guidance for the stability design of the seaplane takeoff and landing process.

Author 1: Yunsong Zhang
Author 2: Ruiyou Li Shi
Author 3: Bo Gao
Author 4: Changxun Song
Author 5: Zhengzhou Zhang

Keywords: Seaplane; ALE method; multiphase coupling; air cushion

PDF

Paper 105: Self-Organizing Neural Networks Integrated with Artificial Fish Swarm Algorithm for Energy-Efficient Cloud Resource Management

Abstract: Cloud computing's exponential expansion requires better resource management methods to resolve the tension between system performance, energy efficiency, and scalability. Traditional resource management practices frequently produce suboptimal results in large-scale cloud environments. This research presents a new computational framework that unites Self-Organizing Neural Networks (SONN) with the Artificial Fish Swarm Algorithm (AFSA) to enhance energy efficiency alongside optimized resource allocation and scheduling. The SONN clusters workload information and automatically adapts its structure to fluctuating demand, while the AFSA optimizes resource management through swarm-based intelligence for high performance at scale. The SONN-AFSA model achieves substantial gains when analyzing real-world CPU and memory usage together with scheduling data from Google Cluster Data. The experimental findings show 20.83% lower energy utilization, 98.8% prediction accuracy, 95% SLA compliance, and a 98% task execution rate. The proposed model outperforms traditional approaches such as PSO, DRL, and PSO-based neural networks, which reach accuracy rates between 88% and 92%. The adaptive platform delivers better power management for cloud computations while preserving operational agility by adapting workload distributions. The learning ability of the SONN combined with AFSA optimization yields superior resource direction and better service quality. Future research will study real-time feedback structures and multi-objective enhancement with large-scale dataset validation to improve cloud computing sustainability across platforms.

Author 1: A. Z. Khan
Author 2: B. Manikyala Rao
Author 3: Janjhyam Venkata Naga Ramesh
Author 4: Elangovan Muniyandy
Author 5: Eda Bhagyalakshmi
Author 6: Yousef A. Baker El-Ebiary
Author 7: David Neels Ponkumar Devadhas

Keywords: Energy-efficient cloud resource management; Self-Organizing Neural Networks (SONN); Artificial Fish Swarm Algorithm (AFSA); cloud optimization; swarm intelligence; resource utilization; task scheduling

PDF

Paper 106: Depression Detection in Social Media Using NLP and Hybrid Deep Learning Models

Abstract: One type of feeling that has a detrimental effect on people's day-to-day lives is depression. Globally, the number of people experiencing long-term negative sentiments rises annually. Many psychiatrists find it difficult to recognize mental illness or distressing emotions in patients before it is too late to improve treatment. Detecting depression in individuals as early as possible is one of the most difficult problems. To create tools for diagnosing depression, researchers employ NLP to examine written content shared on social media sites. Traditional techniques frequently suffer from poor scalability and low precision. To overcome these drawbacks, this work introduces an improved depression detection system based on RoBERTa (Robustly optimized BERT approach) and BiLSTM (Bidirectional Long Short-Term Memory). The aim is to take advantage of the contextualized word embeddings from RoBERTa and the sequential learning properties of BiLSTM to detect depression in social media text. The technique is innovative because it combines BiLSTM, which accurately describes the temporal patterns of text sequences, with RoBERTa, which captures subtle linguistic aspects. Stopwords and punctuation are removed from the input data to provide clean text for the model. The system outperforms existing models, achieving 99.4% accuracy, 98.5% precision, 97.1% recall, and a 97.3% F1 score. These results highlight the effectiveness of combining the proposed technique with the traditional method to identify depression with greater accuracy and less variance. The proposed method is implemented in Python.

Author 1: S M Padmaja
Author 2: Sanjiv Rao Godla
Author 3: Janjhyam Venkata Naga Ramesh
Author 4: Elangovan Muniyandy
Author 5: Pothumarthi Sridevi
Author 6: Yousef A.Baker El-Ebiary
Author 7: David Neels Ponkumar Devadhas

Keywords: Depression detection; RoBERTa; BiLSTM; social media analysis; deep learning

PDF

Paper 107: Detecting Chinese Sexism Text in Social Media Using Hybrid Deep Learning Model with Sarcasm Masking

Abstract: Sexist content is prevalent in social media, which seriously degrades the online environment and occasionally leads to offline disputes. For this reason, many scholars have researched how to automatically detect sexist content in social media. However, the presence of sarcasm complicates this task, so recognizing sarcasm to improve the accuracy of sexism detection has become a crucial research focus. In this study, we adopt a deep learning approach that combines a sexism lexicon and a sarcasm lexicon to detect Chinese sexist content in social media. We propose a novel sarcasm-based masking mechanism, which achieves an accuracy of 82.65% and a macro F1 score of 80.49% on the Sina Weibo Sexism Review (SWSR) dataset, outperforming the baseline model by 2.05% and 2.89%, respectively. This study combines the sarcasm masking mechanism with sexism detection, and the experimental results demonstrate the effectiveness of the deep learning method based on this mechanism for Chinese sexism detection.
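The paper's lexicons are Chinese and not reproduced in the abstract; the toy English stand-ins below only illustrate the masking mechanism itself, replacing lexicon hits with a single mask token so the downstream classifier receives an explicit sarcasm signal instead of surface words:

```python
# Hypothetical mini-lexicon; the actual work builds a Chinese sarcasm lexicon
SARCASM_LEXICON = ("yeah right", "sure you did", "what a genius")
MASK_TOKEN = "[SARC]"

def mask_sarcasm(text, lexicon=SARCASM_LEXICON, mask=MASK_TOKEN):
    """Lowercase the text and replace every lexicon phrase with the mask."""
    out = text.lower()
    for phrase in lexicon:
        out = out.replace(phrase, mask)
    return out

print(mask_sarcasm("Yeah right, what a genius move"))
```

The masked text would then be fed to the sexism classifier, which learns the mask token as a sarcasm marker.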

Author 1: Lei Wang
Author 2: Nur Atiqah Sia Abdullah
Author 3: Syaripah Ruzaini Syed Aris

Keywords: Sexism; Chinese; deep learning; sarcasm; masking

PDF

Paper 108: Machine Learning-Enabled Personalization of Programming Learning Feedback

Abstract: Acquiring programming skills is daunting for most learners and is even more challenging in heavily attended courses. This complexity also makes it difficult to offer personalized feedback within the time constraints of instructors. This study offers an approach to predict programming weaknesses in each learner to provide appropriate learning resources based on machine learning. The machine learning models selected for training and testing and then compared are Random Forest, Logistic Regression, Support Vector Machine, and Decision Trees. During the comparison based on the features of prior knowledge, time spent, and GPA, Logistic Regression was found to be the most accurate. Using this model, the programming weaknesses of each learner are identified so that personalized feedback can be given. The paper further describes a controlled experiment to evaluate the effectiveness of the personalized programming feedback generated based on the model. The findings indicate that learners receiving personalized programming feedback achieve superior learning outcomes than those receiving traditional feedback. The implications of these findings are explored further, and a direction for future research is suggested.
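A hedged sketch of the winning model (scikit-learn logistic regression over the three features the study names: prior knowledge, time spent, and GPA), with synthetic data and a made-up labeling rule standing in for the real learner records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Columns: prior knowledge (0-10), hours spent, GPA (2.0-4.0)
X = rng.uniform([0.0, 0.0, 2.0], [10.0, 40.0, 4.0], size=(200, 3))
# Toy rule: a low combined score means a programming weakness (label 1)
y = (0.5 * X[:, 0] + 0.1 * X[:, 1] + X[:, 2] < 7.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.score(X, y))
```

Predicted weaknesses would then be mapped to targeted learning resources, as the study describes.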

Author 1: Mohammad T. Alshammari

Keywords: Machine learning; programming; learning outcome; feedback; personalization

PDF

Paper 109: Improving English Writing Skills Through NLP-Driven Error Detection and Correction Systems

Abstract: Error detection and correction is an important activity that ensures the quality of written communication, especially in education, business, and legal documentation. State-of-the-art NLP approaches have several issues, including overcorrection, poor handling of multilingual texts, and poor adaptability to domain-specific errors. Traditional methods, based on rule-based approaches or single-task models, fail to capture the complexity of real-world applications, especially in code-switched (multilingual) contexts and resource-scarce languages. To overcome these limitations, this research proposes an advanced error detection and correction framework based on transformer-based models such as Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer (GPT). The hybrid approach integrates a Seq2Seq architecture with attention mechanisms and error-specific layers for handling grammatical and spelling errors. Synthetic data augmentation techniques, including back-translation, improve the system's robustness across diverse languages and domains. The architecture attains a maximum accuracy of 99%, surpassing state-of-the-art models such as GPT-3 fine-tuned for grammatical error correction at 98%. It demonstrates superior performance in various multilingual and domain-specific settings, including complex spelling challenges such as homophones and visually similar words. The system was implemented in Python with TensorFlow and PyTorch and uses C4-200M for training and evaluation. The precision and recall rates, together with real-time text processing, make the model highly useful for practical applications in education, content development, and communication platforms. This research fills a gap in present systems and thus contributes to the automated improvement of English writing skills with a sound and scalable solution.

Author 1: Purnachandra Rao Alapati
Author 2: A. Swathi
Author 3: Jillellamoodi Naga Madhuri
Author 4: Vijay Kumar Burugari
Author 5: Bhuvaneswari Pagidipati
Author 6: Yousef A. Baker El-Ebiary
Author 7: Prema S

Keywords: Natural Language Processing (NLP); error detection; writing skills improvement; language models; AI-Driven writing tools

PDF

Paper 110: Hybrid Attention-Based Transformers-CNN Model for Seizure Prediction Through Electronic Health Records

Abstract: Seizures are a serious neurological disorder, and accurate prognosis from electroencephalography (EEG) dramatically enhances patient outcomes. Current seizure prediction methods fail to deal with big data and usually need intensive preprocessing. Recent breakthroughs in deep learning make it possible to automatically extract features and detect seizures. This work proposes a CNN-Transformer model for epileptic seizure prediction from EEG data, with the goal of increasing precision and prediction rates by investigating spatial and temporal relationships within the data. The innovation lies in employing a CNN for spatial feature extraction and a Transformer-based architecture for long-term temporal dependencies. In contrast to conventional methods that depend on hand-crafted features, this method uses an optimization approach to enhance predictive performance on large-scale EEG datasets. The dataset, obtained from Kaggle, consists of EEG signals from 500 subjects with 4097 data points per subject over 23.6 seconds. CNN layers extract spatial characteristics, while the Transformer processes the temporal sequences through self-attention to capture the EEG's temporal dynamics. The proposed CNN-Transformer model performs well, with 98.3% accuracy, 97.9% precision, 98.73% F1-score, 98.21% specificity, and 98.5% sensitivity. These outcomes show that the model identifies seizures while keeping false positives low. The results indicate that the hybrid CNN-Transformer model is effective at utilizing spatiotemporal EEG features for seizure prediction. Its high sensitivity and accuracy indicate important clinical promise for early intervention, enhancing treatment for epilepsy patients. This method improves seizure prediction, allowing for better management and early therapeutic response in the clinic.

Author 1: Janjhyam Venkata Naga Ramesh
Author 2: M. Misba
Author 3: S. Balaji
Author 4: K. Kiran Kumar
Author 5: Elangovan Muniyandy
Author 6: Yousef A. Baker El-Ebiary
Author 7: B Kiran Bala
Author 8: Radwan Abdulhadi M. Elbasir

Keywords: Epileptic seizure prediction; EEG signal analysis; CNN-Transformer model; deep learning in healthcare; spatiotemporal feature extraction; neural network optimization

PDF
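As a rough, self-contained illustration of the self-attention mechanism the abstract relies on for temporal modeling (not the authors' implementation; the identity projection matrices and toy feature vectors below are placeholders), scaled dot-product attention can be sketched in plain Python:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of feature
    vectors X (e.g. CNN-extracted EEG features): each output position
    is a softmax-weighted mix of all positions, which is how long-range
    temporal dependencies are captured."""
    matmul = lambda A, B: [[sum(a * b for a, b in zip(row, col))
                            for col in zip(*B)] for row in A]
    Q, K, V = matmul(X, Wq), matmul(X, Wk), matmul(X, Wv)
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# Three time steps of 2-d features; identity projections for illustration.
I2 = [[1.0, 0.0], [0.0, 1.0]]
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = self_attention(X, I2, I2, I2)
```

Each output row is a convex combination of the value rows, so the sequence length and feature width are preserved.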

Paper 111: AI-Driven Transformer Frameworks for Real-Time Anomaly Detection in Network Systems

Abstract: The detection of evolving cyber threats proves challenging for traditional anomaly detection because signature-based models do not identify new or zero-day attacks. This research develops an AI Transformer-based system combining Bidirectional Encoder Representations from Transformers (BERT) with Zero-Shot Learning (ZSL) for real-time network anomaly detection. The goal is an effective alerting system that detects both known and unknown cyber threats, supports incident response and proactive defense, and requires minimal human input. The methodology uses BERT to transform textual attack descriptions from CVEs and MITRE ATT&CK TTPs into multidimensional embedding features. These embeddings are compared against current network traffic data, including packet flow statistics and connection logs, using cosine similarity to reveal potentially suspicious patterns. The Zero-Shot Learning extension enables the system to recognize new incidents for which no labeled training data exists by analyzing semantic links between familiar and unfamiliar attack types. The implementation relies on three tools: Python for programming, BERT for embedding generation, and cosine similarity for measuring embedding similarity. Numerical experiments validate the proposed framework, achieving 99.7% accuracy, 99.4% precision, and 98.8% recall while maintaining a low 1.1% false positive rate. The system operates with a detection latency of just 45 ms, making it suitable for dynamic cybersecurity environments. The results indicate that the AI-driven Transformer framework outperforms conventional methods, providing a robust, real-time solution for anomaly detection that can adapt to evolving cyber threats without extensive manual intervention.

Author 1: Santosh Reddy P
Author 2: Tarunika Chaudhari
Author 3: Sanjiv Rao Godla
Author 4: Janjhyam Venkata Naga Ramesh
Author 5: Elangovan Muniyandy
Author 6: A. Smitha Kranthi
Author 7: Yousef A. Baker El-Ebiary

Keywords: Anomaly detection; network security; transformer framework; bidirectional encoder representations from transformers; zero-shot learning

PDF
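The matching step described in the abstract above (embeddings of attack descriptions compared against traffic features via cosine similarity) can be sketched as follows; the 3-d vectors standing in for BERT embeddings and the 0.8 threshold are illustrative assumptions, not values from the paper:

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def flag_anomalies(traffic_embeddings, attack_embeddings, threshold=0.8):
    """Flag each traffic vector whose best match against any known
    attack-description embedding meets the similarity threshold."""
    flags = []
    for t in traffic_embeddings:
        best = max(cosine_similarity(t, a) for a in attack_embeddings)
        flags.append(best >= threshold)
    return flags

# Toy 3-d "embeddings" standing in for BERT vectors.
attacks = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
traffic = [[0.9, 0.1, 0.0], [0.0, 0.1, 1.0]]
flags = flag_anomalies(traffic, attacks)  # first resembles a known attack
```

In practice the threshold would be tuned on validation traffic to trade recall against the false positive rate the abstract reports.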

Paper 112: Optimizing Social Media Marketing Strategies Through Sentiment Analysis and Firefly Algorithm Techniques

Abstract: The dramatic expansion of social media platforms has reshaped business-to-customer interactions, so organizations need to refine their marketing strategies to maximize both user engagement and marketing return on investment (ROI). Present-day social media marketing methods struggle to fully account for user emotions while responding to market variations, demonstrating the need for innovative social media marketing tools. This study seeks to boost social media marketing performance by integrating the Firefly Algorithm (FA) with sentiment analysis for content strategy optimization and better user engagement. The study adopts a novel approach by combining sentiment analysis with the Firefly Algorithm to optimize marketing strategies in real time, an approach underutilized in present research. Together, the two techniques enable sentiment-driven, data-oriented decision-making in social media marketing applications. The proposed system combines sentiment analysis, which measures emotion levels on social media, with the Firefly Algorithm, which optimizes marketing tactics based on current feedback. The framework operates through dynamic adjustments of content strategies to maximize user engagement. The proposed method demonstrated 98.4% precision in forecasting user engagement metrics and adapting content strategies. Results show that these approaches improve user interaction and campaign effectiveness relative to traditional marketing strategies. The research introduces a new optimization method for social media marketing that integrates sentiment analysis with the Firefly Algorithm. The findings suggest this combined methodology brings substantial precision improvements to marketing strategies, offering companies an effective method to optimize digital marketplace outcomes.

Author 1: Sudhir Anakal
Author 2: P N S Lakshmi
Author 3: Nishant Fofaria
Author 4: Janjhyam Venkata Naga Ramesh
Author 5: Elangovan Muniyandy
Author 6: Shaik Sanjeera
Author 7: Yousef A. Baker El-Ebiary
Author 8: Ritesh Patel

Keywords: Sentiment analysis; firefly algorithm; social media marketing; optimization; user engagement; marketing strategies

PDF
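A minimal sketch of the Firefly Algorithm named in the abstract, assuming a toy quadratic "engagement loss" in place of the paper's sentiment-driven objective (all parameter values and the objective are illustrative, not the authors'):

```python
import math
import random

def firefly_minimize(f, dim=2, n=20, iters=150, alpha=0.2, beta0=1.0,
                     gamma=1.0, seed=0):
    """Minimal firefly algorithm: each firefly moves toward brighter
    (lower-cost) fireflies, with attractiveness decaying with squared
    distance, plus a small random walk."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        cost = [f(x) for x in pop]
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:  # firefly j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    cost[i] = f(pop[i])
    best = min(pop, key=f)
    return best, f(best)

# Toy "engagement loss" minimized at (1, 0.5), a stand-in for a real
# engagement model fitted from sentiment-scored feedback.
loss = lambda x: (x[0] - 1) ** 2 + (x[1] - 0.5) ** 2
best, val = firefly_minimize(loss)
```

In the paper's setting the decision vector would encode content-strategy parameters and the objective would come from the sentiment-analysis stage.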

Paper 113: Accurate AI Assistance in Contract Law Using Retrieval-Augmented Generation to Advance Legal Technology

Abstract: Understanding legal documentation is a complex task due to its inherent subtleties and constant changes. This article explores the use of artificial intelligence-driven chatbots, enhanced by retrieval-augmented generation (RAG) techniques, to address these challenges. RAG integrates external knowledge into generative models, enabling the delivery of accurate and contextually relevant legal responses. Our study focuses on the development of a semantic legal chatbot designed to interact with contract law data through an intuitive interface. This AI Lawyer functions like a professional lawyer, providing expert answers in property law. Users can pose questions in multiple languages, such as English and French, and the chatbot delivers relevant responses based on integrated official documents. The system distinguishes itself by effectively avoiding LLM hallucinations, relying solely on reliable and up-to-date legal data. Additionally, we emphasize the potential of chatbots based on LLMs and RAG to enhance legal understanding, reduce the risk of misinformation, and assist in drafting legally compliant contracts. The system is also adaptable to various countries through the modification of its legal databases, allowing for international application.

Author 1: Youssra Amazou
Author 2: Faouzi Tayalati
Author 3: Houssam Mensouri
Author 4: Abdellah Azmani
Author 5: Monir Azmani

Keywords: AI Lawyer; contract law; legal technology; Retrieval-Augmented Generation (RAG); Large Language Models (LLMs); GPT; chatbots

PDF

Paper 114: Fourth Party Logistics Routing Optimization Problem Based on Conditional Value-at-Risk Under Uncertain Environment

Abstract: In order to improve the level of logistics service, and considering the impact of uncertainties such as bad weather and highway collapse on the fourth party logistics routing optimization problem, this paper adopts Conditional Value-at-Risk (CVaR) to measure the tardiness risk caused by these uncertainties and proposes a nonlinear programming model that minimizes CVaR. The proposed model is compared with a VaR model, and an improved Q-learning algorithm is designed to solve both models at different node sizes. The experimental results indicate that the proposed model can reflect the mean value of tardiness risk caused by time uncertainty in transportation tasks and better compensate for the shortcomings of the VaR model in measuring tardiness risk. Comparative analysis also demonstrates the effectiveness of the proposed improved Q-learning algorithm.

Author 1: Guihua Bo
Author 2: Qiang Liu
Author 3: Huiyuan Shi
Author 4: Xin Liu
Author 5: Chen Yang
Author 6: Liyan Wang

Keywords: Logistics service; routing optimization; tardiness risk; conditional value-at-risk; improved Q-learning algorithm

PDF
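The CVaR risk measure adopted in the abstract can be illustrated on a sample of simulated losses: VaR is the alpha-quantile of the loss distribution, and CVaR is the mean loss in the tail at or beyond VaR, which is why it better captures the severity of late deliveries. The tardiness values below are invented for illustration:

```python
def var_cvar(losses, alpha=0.95):
    """Empirical Value-at-Risk (the alpha-quantile of the loss sample)
    and Conditional Value-at-Risk (mean loss at or beyond VaR)."""
    s = sorted(losses)
    idx = min(int(alpha * len(s)), len(s) - 1)  # index of the alpha-quantile
    var = s[idx]
    tail = s[idx:]
    cvar = sum(tail) / len(tail)
    return var, cvar

# Simulated tardiness (hours) of a route under uncertain travel times.
tardiness = [0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 10, 12, 15, 18, 22, 30]
var, cvar = var_cvar(tardiness, alpha=0.9)
```

CVaR is always at least VaR; minimizing CVaR therefore also controls the worst-case tail that VaR alone ignores.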

Paper 115: Optimized Dynamic Graph-Based Framework for Skin Lesion Classification in Dermoscopic Images

Abstract: Early and accurate classification of skin lesions is critical for effective skin cancer diagnosis and treatment. However, the visual similarity of lesions in their early stages often leads to misdiagnoses and delayed interventions, and the lack of transparency of existing automated methods makes it challenging for dermatologists to interpret and validate their decisions, reducing trust in such systems. To overcome these complications, Skin Lesion Classification in Dermoscopic Images using an Optimized Dynamic Graph Convolutional Recurrent Imputation Network (SLCDI-DGCRIN-RBBMOA) is proposed. The input image is pre-processed using Confidence Partitioning Sampling Filtering (CPSF) to remove noise, resize, and enhance image quality. The Hybrid Dual Attention-guided Efficient Transformer and UNet 3+ (HDAETUNet3+) then segments the region of interest (ROI) of the preprocessed dermoscopic images. Finally, the segmented images are fed to a Dynamic Graph Convolutional Recurrent Imputation Network (DGCRIN) to classify the skin lesion as actinic keratosis, dermatofibroma, basal cell carcinoma, squamous cell carcinoma, benign keratosis, vascular lesion, melanocytic nevus, or melanoma. DGCRIN alone does not adopt any optimization strategy for determining the optimal parameters for exact skin lesion classification; hence, the Red-Billed Blue Magpie Optimization Algorithm (RBBMOA) is used to enhance DGCRIN so that it can exactly classify the type of skin lesion.
The proposed SLCDI-DGCRIN-RBBMOA technique attains 26.36%, 20.69% and 30.29% higher accuracy, 19.12%, 28.32%, and 27.84% higher precision, 12.04%, 13.45% and 22.80% higher recall and 20.47%, 16.34%, and 20.50% higher specificity compared with existing methods such as a deep learning method dependent on explainable artificial intelligence for skin lesion classification (DNN-EAI-SLC), multiclass skin lesion classification utilizing deep learning networks optimal information fusion (MSLC-CNN-OIF), and classification of skin cancer from dermoscopic images utilizing deep neural network architectures (CSC-DI-DCNN) respectively.

Author 1: J. Deepa
Author 2: P. Madhavan

Keywords: Confidence partitioning sampling filtering; dynamic graph convolutional recurrent imputation network; ISIC-2019 skin disease dataset; red billed blue magpie optimization algorithm; hybrid dual attention-guided efficient transformer and UNet 3+

PDF

Paper 116: Optimized Wavelet Scattering Network and CNN for ECG Heartbeat Classification from MIT–BIH Arrhythmia Database

Abstract: Early detection of cardiovascular diseases is vital, especially considering the alarming number of deaths worldwide caused by heart attacks, as highlighted by the World Health Organization. This emphasizes the urgent need for automated systems that can ensure timely and accurate identification of cardiovascular conditions, potentially saving countless lives. This paper presents a novel approach to heartbeat classification, aiming to enhance both accuracy and prediction speed. The model is based on two distinct types of features. First, morphological features are obtained by applying a wavelet scattering network to each ECG heartbeat, with the maximum relevance minimum redundancy algorithm applied to reduce the computational cost. Second, dynamic features capture the durations of the two preceding R–R intervals and the one following R–R interval of the analyzed heartbeat. A feature fusion technique combines the morphological and dynamic features, and a convolutional neural network classifies 15 different ECG heartbeat classes. Our proposed method demonstrates an overall accuracy of 98.50% when tested on the Massachusetts Institute of Technology–Beth Israel Hospital (MIT–BIH) arrhythmia database. The results highlight its superior performance compared to existing automated heartbeat classification models.

Author 1: Mohamed Elmehdi AIT BOURKHA
Author 2: Anas HATIM
Author 3: Dounia NASIR
Author 4: Said EL BEID

Keywords: Electrocardiogram (ECG); Convolutional Neural Network (CNN); Arrhythmia Rhythm (ARR); Maximum Relevance Minimum Redundancy (MRMR); Wavelet Scattering Network (WSN)

PDF
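The dynamic features described in the abstract (two preceding R–R intervals and one following interval) reduce to simple differences of R-peak times; a sketch with hypothetical peak positions (the sampling rate and peak indices are illustrative):

```python
def rr_features(r_peaks, i):
    """Dynamic features for the heartbeat at r_peaks[i]: the two
    preceding R-R intervals and the one following interval.
    r_peaks holds sample times of detected R waves;
    requires 2 <= i < len(r_peaks) - 1."""
    pre2 = r_peaks[i - 1] - r_peaks[i - 2]  # second-previous interval
    pre1 = r_peaks[i] - r_peaks[i - 1]      # previous interval
    post = r_peaks[i + 1] - r_peaks[i]      # following interval
    return pre1, pre2, post

# Hypothetical R-peak sample indices at the MIT-BIH 360 Hz sampling rate.
peaks = [100, 460, 820, 1190, 1550]
feats = rr_features(peaks, 2)
```

These interval features would then be concatenated with the wavelet-scattering (morphological) features before the CNN classifier.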

Paper 117: Personalized Motion Scheme Generation System Design for Motion Software Based on Cloud Computing

Abstract: Growing national attention to fitness has promoted the growth of the sports health industry. However, ordinary people who lack professional knowledge cannot turn raw data into correct sports planning. Therefore, aiming at the problem that it is difficult for ordinary people to make a correct exercise plan from intuitive data alone, a personalized exercise plan generation system based on cloud computing is proposed. By analyzing the user's movement and physical data, the system uses cloud computing resources and machine learning algorithms to provide customized exercise recommendations. The key innovation of the research is the combination of an improved random forest algorithm with reinforcement learning, which also improves the algorithm's performance on unbalanced sample sets. The results indicated that the accuracy of the improved random forest reached 0.985, higher than that of the precision-weighted random forest: the research algorithm was on average 9.04% more accurate than the original random forest algorithm and 2.71% more accurate than the accuracy-weighted random forest algorithm. In terms of the accuracy of personalized motion scheme generation, the improved algorithm reached up to 95.05%, with a recall rate of up to 83.46%. Compared with existing sports software solutions, the system can generate personalized sports programs more accurately, promote the development of the sports health industry, and improve national physical health. The system provides users with personalized sports suggestions and utilizes the computing power of cloud computing to process and analyze large-scale user data in real time, giving users timely feedback and suggestions.

Author 1: Jinkai Duan

Keywords: Cloud computing; sports; random forest algorithm; personalization; system

PDF

Paper 118: Enhancing Emotion Prediction in Multimedia Content Through Multi-Task Learning

Abstract: This study presents a robust multimodal emotion analysis model aimed at improving emotion prediction in film and television communication. Addressing challenges in modal fusion and data association, the model integrates a Transformer-based framework with multi-task learning to capture emotional associations and temporal features across various modalities. It overcomes the limitations of single-modal labels by incorporating multi-task learning, and is tested on the CMU-MOSI dataset using both classification and regression tasks. The model achieves strong performance, with a mean absolute error of 0.70, a Pearson correlation coefficient of 0.82, and an accuracy of 47.1% on a seven-class task. On a two-class task, it achieves an accuracy and F1 score of 88.4%. Predictions for specific video segments are highly consistent with actual labels, with predicted scores of 2.15 and 1.4. This research offers a new approach to multimodal emotion analysis, providing valuable insights for film and television content creation and setting the foundation for further advancements in this area.

Author 1: Wan Fan

Keywords: Multi-task learning; multimodal emotion analysis; timing; transformer; attention

PDF

Paper 119: Validation of an Adaptive Decision Support System Framework for Outcome-Based Blended Learning

Abstract: The Adaptive Decision Support System Learning Framework (A-DSS-LF) was developed to address diverse learner needs in blended learning environments by integrating learning styles, cognitive levels, practical skills, and value practices. This study validates the framework using the Fuzzy Delphi Method (FDM), a consensus-building tool that synthesizes expert opinions and addresses uncertainties in subjective judgments. A panel of 15 experts evaluated the framework’s constructs: Learning Process, Learning Assessment, Decision Support System, and Adaptive Learning Profile. All constructs met the FDM’s consensus criterion, achieving threshold values between 0.087 and 0.118 (≤0.2), indicating high consistency and low variability. The defuzzification process confirmed values exceeding 0.5, with scores ranging from 0.873 to 0.922 and expert agreement surpassing 75 percent for all elements. These findings confirm the robustness and applicability of the A-DSS-LF, validating its role in enhancing personalized learning outcomes and supporting teachers in tailoring adaptive learning resources. The framework is scalable and can be implemented in secondary school computer science education and online learning platforms to create personalized learning paths, improve engagement, and bridge the gap between online and offline learning. This study reinforces the significance of expert validation in adaptive learning frameworks, ensuring their scalability and adaptability for future applications in diverse educational settings.

Author 1: Rahimah Abd Halim
Author 2: Rosmayati Mohemad
Author 3: Noraida Hj Ali
Author 4: Anuar Abu Bakar
Author 5: Hamimah Ujir

Keywords: Learner needs; adaptive learning; blended learning; fuzzy delphi method; decision support system

PDF
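A minimal sketch of the Fuzzy Delphi computations the abstract reports (the threshold value d and simple-average defuzzification over triangular fuzzy ratings); the expert ratings below are hypothetical, and the exact distance formula may differ from the authors' implementation:

```python
import math

def fdm_evaluate(ratings):
    """Fuzzy Delphi consensus check over triangular fuzzy ratings
    (l, m, u). Returns (threshold value d, defuzzified score).
    Consensus is conventionally taken as d <= 0.2 together with a
    defuzzified score >= 0.5."""
    n = len(ratings)
    avg = tuple(sum(r[k] for r in ratings) / n for k in range(3))
    # d: mean distance between each expert's fuzzy number and the average.
    d = sum(math.sqrt(sum((r[k] - avg[k]) ** 2 for k in range(3)) / 3)
            for r in ratings) / n
    score = sum(avg) / 3  # simple-average defuzzification
    return d, score

# Five hypothetical expert ratings on a normalized fuzzy scale.
ratings = [(0.6, 0.8, 1.0)] * 3 + [(0.4, 0.6, 0.8)] * 2
d, score = fdm_evaluate(ratings)
```

With these toy ratings the element passes both criteria, mirroring the accept/reject logic applied to each construct in the study.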

Paper 120: Towards Two-Step Fine-Tuned Abstractive Summarization for Low-Resource Language Using Transformer T5

Abstract: This study explores the potential of two-step fine-tuning for abstractive summarization in a low-resource language, focusing on Indonesian. Leveraging the Transformer-T5 model, the research investigates the impact of transfer learning across two tasks: machine translation and text summarization. Four configurations were evaluated, ranging from zero-shot to two-step fine-tuned models. The evaluation, conducted using the ROUGE metric, shows that the two-step fine-tuned model (T5-MT-SUM) achieved the best performance, with ROUGE-1: 0.7126, ROUGE-2: 0.6416, and ROUGE-L: 0.6816, outperforming all baselines. These findings demonstrate the effectiveness of task transferability in improving abstractive summarization performance for low-resource languages like Indonesian. This study provides a pathway for advancing natural language processing (NLP) in low-resource languages through two-step transfer learning.

Author 1: Salhazan Nasution
Author 2: Ridi Ferdiana
Author 3: Rudy Hartanto

Keywords: Abstractive summarization; low-resource language; Transformer T5; transfer learning

PDF
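ROUGE-1, the unigram variant of the metric reported above, reduces to clipped unigram overlap between candidate and reference; a minimal sketch (whitespace tokenization and single-reference scoring are simplifying assumptions):

```python
from collections import Counter

def rouge1(candidate, reference):
    """Unigram ROUGE-1: overlap counts clipped per token, reported as
    precision, recall, and F1 against the reference summary."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(c[t], r[t]) for t in c)  # clipped matches
    p = overlap / max(sum(c.values()), 1)
    rec = overlap / max(sum(r.values()), 1)
    f1 = 2 * p * rec / (p + rec) if p + rec else 0.0
    return p, rec, f1

p, rec, f1 = rouge1("the cat sat on the mat", "the cat lay on the mat")
```

Production evaluations would add stemming and ROUGE-2 / ROUGE-L variants, but the clipping idea is the same.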

Paper 121: AI-Driven Construction and Application of Gardens: Optimizing Design and Sustainability with Machine Learning

Abstract: The integration of artificial intelligence (AI) into environmental analysis has revolutionized various fields, including the construction and application of gardens, by enabling precise classification and decision-making for sustainable practices. This paper presents a robust AI-driven framework that uses a convolutional neural network (CNN) and pretrained models such as VGG16 and InceptionV3 to classify eight distinct environmental classes. Among the tested models, the CNN achieved superior performance, reaching an impressive 98% accuracy with optimized batch sizes, demonstrating its effectiveness for precise environmental condition classification. This work highlights the crucial role of AI in advancing the construction and application of gardens and offers insights into optimizing garden design through accurate environmental data analysis. The diverse dataset used ensures the framework's adaptability to real-world applications, making it a valuable resource for sustainable development and eco-friendly design strategies. This paper not only contributes to the field of AI-driven environmental analysis but also provides a foundation for future innovations in garden management and sustainability, paving the way for intelligent solutions in the evolving landscape of ecological design.

Author 1: Jingyi Wang
Author 2: Yan Song
Author 3: Haozhong Yang
Author 4: Han Li
Author 5: Minglan Zhou

Keywords: Artificial intelligence; machine learning; construction and application of garden design; convolutional neural network; VGG16; InceptionV3

PDF

Paper 122: Multi-Objective Osprey Optimization Algorithm-Based Resource Allocation in Fog-IoT

Abstract: Fog Computing (FC) paradigm offers significant potential for hosting diverse delay-sensitive Internet of Things (IoT) applications. However, the limited resources of fog devices pose significant challenges for deploying multiple applications, particularly in heterogeneous and dynamic IoT scenarios, due to the absence of effective mechanisms for resource estimation and discovery. An efficient resource allocation strategy is crucial for meeting the Quality of Service (QoS) requirements of IoT applications while enhancing overall system performance. Identifying the optimal allocation strategy for IoT applications with multiple QoS parameters is a complex and computationally intensive challenge, classified as an NP-complete problem. This paper proposes a Multi-Objective Optimization Algorithm (MOOA) for optimal resource allocation using the Osprey Optimization Algorithm (OOA) to efficiently allocate available resources. The proposed algorithm was evaluated against existing approaches, including the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), under varying task loads ranging from 100 to 500 tasks. The simulation results demonstrate significant performance improvements, including an average reduction in execution time by 12.45% compared to PSO and 22.97% compared to GA, response time by 32.57% compared to GA and 24.45% compared to PSO, and completion time by 44.39% compared to GA and 33.23% compared to PSO. These findings highlight the proposed algorithm’s ability to efficiently handle task allocation in dynamic FC environments and its potential to address complex QoS requirements in real-world IoT applications.

Author 1: Nagarjun E
Author 2: Dharamendra Chouhan
Author 3: Dilip Kumar S M

Keywords: Fog computing; IoT; resource allocation and reallocation; task allocation

PDF

Paper 123: Leveraging Deep Semantics for Sparse Recommender Systems (LDS-SRS)

Abstract: Recommender Systems (RS) provide personalized suggestions to users by filtering through vast amounts of data on media content, e-commerce platforms, and social networks. Traditional RS methods encounter significant challenges: Collaborative Filtering (CF) is hindered by the lack of sufficient user-product engagement data, while Content-Based Filtering (CBF) depends extensively on feature extraction techniques to describe items, which requires an understanding of both the contextual and semantic relevance of the content. To address the sparsity issue, various matrix factorization methods have been developed, often incorporating pre-processed auxiliary information. However, existing feature extraction techniques generally fail to capture both the semantic richness and topic-level insights of textual data. This paper introduces a novel hybrid recommendation system, Leveraging Deep Semantics for Sparse Recommender Systems (LDS-SRS). The model leverages semantic features from item descriptions and incorporates topic-specific data to tackle the challenges posed by data sparsity. By extracting embeddings that capture the deep semantics of textual content, such as reviews, summaries, comments, and narratives, and embedding them into Probabilistic Matrix Factorization (PMF), the framework significantly alleviates data sparsity while remaining computationally efficient, with low deployment time and complexity. Experimental evaluations conducted on the publicly available MovieLens (1 Million and 10 Million) and AIV (Amazon Instant Video) benchmark datasets demonstrate the framework's exceptional ability to handle sparse user-item ratings, surpassing existing leading methods.

Author 1: Adel Alkhalil

Keywords: LDA-2-Vec technique; content representation; topic-based modeling; probabilistic matrix decomposition

PDF
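The PMF backbone referred to above can be sketched as SGD-fitted matrix factorization, where the regularization term corresponds to Gaussian priors on the latent factors; the toy ratings and hyperparameters below are illustrative, not the paper's:

```python
import random

def pmf_sgd(ratings, n_users, n_items, k=4, lr=0.05, reg=0.02,
            epochs=500, seed=0):
    """Probabilistic Matrix Factorization fitted by SGD: learn user and
    item factor vectors so that dot(U[u], V[i]) approximates rating r.
    The reg term plays the role of Gaussian priors on the factors."""
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(a * b for a, b in zip(U[u], V[i]))
            err = r - pred
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                U[u][f] += lr * (err * vf - reg * uf)
                V[i][f] += lr * (err * uf - reg * vf)
    return U, V

# Tiny observed (user, item, rating) triples; unobserved pairs are the
# sparsity the full LDS-SRS model addresses with textual embeddings.
data = [(0, 0, 5), (0, 1, 1), (1, 0, 4), (2, 1, 2)]
U, V = pmf_sgd(data, n_users=3, n_items=2)
pred = sum(a * b for a, b in zip(U[0], V[0]))  # should approach 5
```

The full model additionally conditions the item factors on document embeddings, which is what fills in predictions for users and items with few ratings.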

Paper 124: A Deep Learning Approach for Nepali Image Captioning and Speech Generation

Abstract: This article introduces a novel approach to image-to-speech generation that converts images into textual captions and spoken descriptions in the Nepali language using deep learning techniques. By leveraging computer vision and natural language processing, the system analyzes images, extracts features, generates human-readable captions, and produces intelligible speech output. The experimentation utilizes a state-of-the-art transformer architecture for image caption generation, complemented by ResNet and EfficientNet as feature extractors. The BLEU score is used as an evaluation metric for generated captions. The BLEU scores obtained for BLEU-1, BLEU-2, BLEU-3, and BLEU-4 n-grams are 0.4852, 0.2952, 0.181, and 0.113, respectively. A pretrained Tacotron2 model and HiFi-GAN vocoder are used for text-to-speech synthesis. The proposed approach contributes to the underexplored domain of Nepali-language AI applications, aiming to improve accessibility and technological inclusivity for the Nepali-speaking population.

Author 1: Sagar Sharma
Author 2: Samikshya Chapagain
Author 3: Sachin Acharya
Author 4: Sanjeeb Prasad Panday

Keywords: Image captioning; speech generation; image-to-speech generation; deep learning; BLEU score; HiFiGaN; TTS

PDF
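BLEU-1, the simplest of the n-gram scores reported above, combines clipped unigram precision with a brevity penalty; a minimal single-reference sketch (whitespace tokenization is a simplifying assumption, and full BLEU takes the geometric mean over n-gram orders):

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """BLEU-1: clipped unigram precision times the brevity penalty.
    BP = 1 when the candidate is longer than the reference, otherwise
    exp(1 - |ref| / |cand|)."""
    c, r = candidate.split(), reference.split()
    cc, rc = Counter(c), Counter(r)
    clipped = sum(min(cc[t], rc[t]) for t in cc)  # clipped matches
    precision = clipped / len(c)
    bp = 1.0 if len(c) > len(r) else math.exp(1 - len(r) / len(c))
    return bp * precision

score = bleu1("the cat is on the mat", "the cat sat on the mat")
```

Clipping prevents a candidate from being rewarded for repeating a reference word more times than it actually appears.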

Paper 125: Knowledge Graph Path-Enhanced RAG for Intelligent Residency Q&A

Abstract: As the demand for efficient information retrieval in specialized domains continues to rise, vertical domain question-answering systems play an increasingly important role in addressing domain-specific knowledge needs. This paper proposes a retrieval-augmented generation method that integrates path search in knowledge graphs to enhance intelligent question-answering systems for professional information retrieval. The proposed approach leverages fine-tuned large language models to identify entities and extract relations from user queries, combining a pruned marker method with a shortest-path generation tree algorithm to efficiently retrieve relevant information. The retrieval results are then integrated with user queries using prompt engineering to generate precise and contextually relevant answers. To validate the practicality of the proposed method, this paper develops a knowledge graph encompassing policies, regulations, and social services within the household registration vertical domain. The experimental results within this vertical domain reveal that the proposed method significantly outperforms existing methods on evaluation metrics such as BLEU, ROUGE, and METEOR, achieving improvements exceeding 3%. Furthermore, ablation experiments validate the importance of combining path search algorithms with fine-tuning techniques in enhancing question-answering performance.

Author 1: Jian Zhu
Author 2: Huajun Zhang
Author 3: Jianpeng Da
Author 4: Hanbing Huang
Author 5: Chongxin Luo
Author 6: Xu Peng

Keywords: Retrieval-augmented generation; path search; knowledge graph; household registration policy vertical field

PDF
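The path-search component can be illustrated with a plain BFS shortest path over a toy graph; this is a generic sketch, not the authors' pruned-marker or generation-tree algorithm, and the entities below are invented:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first shortest path over an unweighted knowledge graph;
    the returned entity chain is what would be handed to the prompt
    builder as retrieved context."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path between the two entities

# Toy household-registration graph: entities linked by policy relations.
kg = {
    "resident": ["permit"],
    "permit": ["resident", "bureau"],
    "bureau": ["policy"],
    "policy": [],
}
path = shortest_path(kg, "resident", "policy")
```

In a real system the edges would carry relation labels, and the path (entities plus relations) would be serialized into the prompt alongside the user query.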

Paper 126: Leveraging Machine-Aided Learning in College English Education: Computational Approaches for Enhancing Student Outcomes and Pedagogical Efficiency

Abstract: The integration of machine-aided learning into college English education offers transformative potential for enhancing teaching and learning outcomes. This paper investigates the application of computational models, including machine learning algorithms and natural language processing tools, to optimize pedagogical practices and improve student performance. A series of experiments were conducted to evaluate the effectiveness of machine-aided learning in various aspects of English language education. The study focuses on six key parameters: 1) student test scores, 2) learning engagement, 3) learning time efficiency, 4) language proficiency, 5) student retention, and 6) teacher workload. The results demonstrate significant improvements across these parameters: a 25% increase in student test scores, a 30% improvement in overall learning engagement, a 20% reduction in learning time for complex language tasks, a 15% enhancement in language proficiency, a 10% increase in student retention, and a 5% reduction in teacher workload. These findings underscore the potential of machine-aided learning to reshape college English education by promoting personalized, data-driven learning environments. This paper provides valuable insights for educators, researchers, and policymakers aiming to harness the power of computational methods in educational settings.

Author 1: Danxia Zhu

Keywords: Machine learning; natural language processing; computational intelligence; data analytics; pedagogy

PDF

Paper 127: A Novel Hybrid Attentive Convolutional Autoencoder (HACA) Framework for Enhanced Epileptic Seizure Detection

Abstract: Epilepsy, a prevalent neurological disorder, requires accurate and efficient seizure detection for timely intervention. This study presents a Hybrid Attentive Convolutional Autoencoder (HACA) framework designed to address challenges in EEG signal processing for seizure detection. The proposed method integrates signal reconstruction, innovative feature extraction, and attention mechanisms to focus on seizure-critical patterns. Compared to conventional CNN- and RNN-based approaches, HACA demonstrates superior performance by enhancing feature representation and reducing redundant computations. The proposed HACA framework achieved 99.4% accuracy, 99.6% sensitivity, and 99.2% specificity on the CHB-MIT dataset. Moreover, the training time is reduced by 40%, which makes the model more relevant for real-time applications and portable seizure monitoring systems.
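
The HACA architecture itself is not given in the abstract; the attention idea it relies on can be sketched as softmax-weighted pooling over per-window EEG features (all scores and feature values below are hypothetical):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(scores, features):
    """Pool per-window feature values by softmax attention weights."""
    weights = softmax(scores)
    return sum(w * f for w, f in zip(weights, features)), weights

# Hypothetical relevance scores for four EEG windows; the third window
# carries seizure-like activity and should dominate the pooled feature.
scores = [0.1, 0.2, 3.0, 0.3]
features = [0.4, 0.5, 2.1, 0.6]
pooled, weights = attend(scores, features)
print(round(pooled, 3), [round(w, 2) for w in weights])
```

In the full model the scores would themselves be learned from the convolutional feature maps rather than supplied by hand.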

Author 1: Venkata Narayana Vaddi
Author 2: Madhu Babu Sikha
Author 3: Prakash Kodali

Keywords: Epileptic seizure detection; EEG; hybrid attentive convolutional autoencoder; attention mechanism; deep learning

PDF

Paper 128: Deep Learning in Heart Murmur Detection: Analyzing the Potential of FCNN vs. Traditional Machine Learning Models

Abstract: This research investigates the performance of machine learning and deep learning models in detecting heart murmurs from audio recordings. Using the PhysioNet Challenge 2016 dataset, we compare several traditional machine learning models—Support Vector Machine, Random Forest, AdaBoost, and Decision Tree—with a Fully Convolutional Neural Network (FCNN). The findings indicate that while traditional models achieved accuracies between 0.85 and 0.89, they faced challenges with data complexity and maintaining a balance between precision and recall. Ensemble methods such as Random Forest and AdaBoost demonstrated improved robustness but were still outperformed by deep learning approaches. The FCNN model significantly outperformed all other models, achieving an accuracy of 0.99 with a precision of 0.94 and a recall of 0.96. These results highlight the potential of AI-driven cardiovascular diagnostics, as deep learning models exhibit superior capability in identifying intricate patterns in heart sound data. Our findings suggest that deep learning models offer substantial advantages in medical diagnostics, particularly for cardiovascular diagnostics, by providing scalable and highly accurate tools for heart murmur detection. Future work should focus on improving model interpretability and expanding dataset diversity to facilitate broader adoption in clinical settings.

Author 1: Hajer Sayed Hussein
Author 2: Hussein AlBazar
Author 3: Roxane Elias Mallouhy
Author 4: Fatima Al-Hebshi

Keywords: Heart murmur detection; machine learning; deep learning; cardiovascular diagnostics; artificial intelligence; PhysioNet dataset

PDF

Paper 129: Securing Internet of Medical Things: An Advanced Federated Learning Approach

Abstract: The Internet of Medical Things (IoMT) is transforming healthcare through extensive automation, data collection, and real-time communication among interconnected devices. However, this rapid expansion introduces significant security vulnerabilities that traditional centralized solutions or device-level protections often fail to adequately address due to challenges related to latency, scalability, and resource constraints. This study presents a novel federated learning (FL) framework tailored for IoMT security, incorporating techniques such as stacking, federated dynamic averaging, and active user participation to decentralize and enhance attack classification at the edge. Utilizing the CICIoMT2024 dataset, which encompasses 18 attack classes and 45 features, we deploy Random Forest (RF), AdaBoost, Support Vector Machine (SVM), and Deep Learning (DL) models across 10 simulated edge devices. Our federated approach effectively distributes computational loads, mitigating the strain on central servers and individual devices, thereby enhancing adaptability and resource efficiency within IoMT networks. The RF model achieves the highest accuracy of 99.22%, closely followed by AdaBoost, demonstrating the feasibility of FL for robust and scalable edge security. While this study validates the proposed framework using a single realistic dataset in a controlled environment, future work will explore additional datasets and real-world scenarios to further substantiate the generalization and effectiveness of the approach. This research underscores the potential of federated learning to address the unique security and computational constraints of IoMT, paving the way for practical, decentralized deployments that strengthen device-level defenses across diverse healthcare settings.
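
The stacking and federated dynamic averaging techniques are not detailed in the abstract; as a minimal sketch of the basic aggregation step such schemes build on, a sample-size-weighted federated average (the classic FedAvg rule) over simulated edge devices might look like this (client counts and weight vectors are illustrative):

```python
import random

def federated_average(client_weights, client_sizes):
    """Aggregate per-client model weight vectors by a sample-size-weighted
    mean (the FedAvg rule); dynamic variants reweight or subsample clients."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Three simulated edge devices with hypothetical local weight vectors.
random.seed(0)
clients = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
sizes = [120, 80, 200]                      # local training-set sizes
global_weights = federated_average(clients, sizes)
print([round(w, 3) for w in global_weights])
```

In a real IoMT deployment each client would train locally on its own traffic before this aggregation round, so raw medical data never leaves the device.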

Author 1: Anass Misbah
Author 2: Anass Sebbar
Author 3: Imad Hafidi

Keywords: Internet of Medical Things (IoMT); federated learning; machine learning; security; intrusion detection systems; decentralized framework

PDF

Paper 130: Chinese Relation Extraction with External Knowledge-Enhanced Semantic Understanding

Abstract: Relation extraction is the foundation of constructing knowledge graphs, and Chinese relation extraction is a particularly challenging aspect of this task. Most existing methods for Chinese relation extraction rely either on character-based or word-based features. However, the former struggles to capture contextual information between characters, while the latter is constrained by the quality of word segmentation, resulting in relatively low performance. To address this issue, a Chinese relation extraction model enhanced with external knowledge for semantic understanding is proposed. This model leverages external knowledge to improve semantic understanding in the text, thereby enhancing the performance of relation prediction between entity pairs. The approach consists of three main steps: first, the ERNIE pre-trained language model is used to convert textual information into dynamic word embeddings; second, an attention mechanism is employed to enrich the semantic representation of sentences containing entities, while external knowledge is used to mitigate the ambiguity of Chinese entity words as much as possible; and finally, the semantic representation enhanced with external knowledge is used as input for classification to make predictions. Experimental results demonstrate that the proposed model outperforms existing methods in Chinese relation extraction and offers better interpretability.

Author 1: Shulin Lv
Author 2: Xiaoyao Ding

Keywords: Chinese relation extraction; knowledge graph; external knowledge; semantic understanding; attention mechanism

PDF

Paper 131: Temperature Prediction for Photovoltaic Inverters Using Particle Swarm Optimization-Based Symbolic Regression: A Comparative Study

Abstract: Accurate temperature modeling is crucial for maintaining the efficiency and reliability of solar inverters. This paper presents an innovative application of symbolic regression based on particle swarm optimization (PSO) for predicting the temperature of photovoltaic inverters, offering a novel approach that balances accuracy and computational efficiency. The study evaluates the performance of a PSO-based symbolic regression model compared to multiple linear regression (MLR) and a symbolic regression model based on genetic algorithms (GA). The models were developed using a dataset that included inverter temperature, active power, and DC bus voltage, collected over a year in hourly intervals from a rooftop photovoltaic system in a tropical region. The dataset was divided, with 70% used for training and the remaining 30% for testing. The symbolic regression model based on PSO demonstrated superior performance, achieving lower values of the root mean square error (RMSE) and mean absolute error (MAE) of 3.97 and 3.31, respectively. Furthermore, the PSO-based model effectively captured the nonlinear relationships between variables, outperforming the MLR model. It also exhibited greater computational efficiency, requiring fewer iterations than traditional symbolic regression approaches. These findings open new possibilities for real-time monitoring of photovoltaic inverters and suggest future research directions, such as generalizing the PSO model to different environmental conditions and inverter types.
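
A minimal sketch of the optimization loop follows, assuming a fixed linear model form whose coefficients PSO tunes against synthetic inverter data (all constants, data, and hyperparameters below are illustrative; full symbolic regression would also search over the expression structure):

```python
import math
import random

random.seed(1)

# Synthetic data standing in for (active power, DC bus voltage) -> temperature;
# the "true" relationship and all constants here are illustrative only.
data = [((p, v), 0.8 * p + 0.05 * v + 25.0)
        for p in range(0, 50, 5) for v in (300, 400, 500)]

def rmse(coeffs):
    """Root mean square error of the candidate model a*p + b*v + c."""
    a, b, c = coeffs
    err = [(a * p + b * v + c - t) ** 2 for (p, v), t in data]
    return math.sqrt(sum(err) / len(err))

def pso(fitness, dim=3, particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm: each particle tracks its personal best, and
    velocities blend inertia with pulls toward personal and global bests."""
    pos = [[random.uniform(-5, 30) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = pbest[min(range(particles), key=lambda i: pbest_f[i])][:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < fitness(g):
                    g = pos[i][:]
    return g

best = pso(rmse)
print([round(x, 2) for x in best], round(rmse(best), 3))
```

The GA variant compared in the paper would instead evolve a population of expression trees; PSO's continuous position updates are what give it the lower iteration count reported.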

Author 1: Fabian Alonso Lara-Vargas
Author 2: Jesus Aguila-Leon
Author 3: Carlos Vargas-Salgado
Author 4: Oscar J. Suarez

Keywords: Particle swarm optimization; photovoltaic inverters; multiple linear regression; symbolic regression; temperature prediction

PDF

Paper 132: Towards Effective Anomaly Detection: Machine Learning Solutions in Cloud Computing

Abstract: Cloud computing has transformed modern Information Technology (IT) infrastructures with its scalability and cost-effectiveness but introduces significant security risks. Moreover, existing anomaly detection techniques are not well equipped to deal with the complexities of dynamic cloud environments. This systematic literature review surveys the advancements in Machine Learning (ML) solutions for anomaly detection in cloud computing. The study categorizes ML approaches, examines the datasets and evaluation metrics utilized, and discusses their effectiveness and limitations. We analyze supervised, unsupervised, and hybrid ML models, showing the advantages of each in dealing with particular threat vectors. The review also discusses how advanced feature engineering, ensemble learning, and real-time adaptability can improve detection accuracy and reduce false positives. Key challenges, such as dataset diversity and computational efficiency, are highlighted, along with future research directions to improve ML-based anomaly detection for robust and adaptive cloud security. Hybrid approaches are found to increase accuracy to as high as 99.85% while reducing false positives. This review provides a comprehensive guide for researchers aiming to enhance anomaly detection in cloud environments.

Author 1: Hussain Almajed
Author 2: Abdulrahman Alsaqer
Author 3: Abdullah Albuali

Keywords: Anomaly; cloud; machine learning; detection

PDF

Paper 133: Enhanced Fuzzy Deep Learning for Plant Disease Detection to Boost the Agricultural Economic Growth

Abstract: Plant disease detection is a crucial technology for ensuring agricultural productivity and sustainability. However, traditional methods tend to fail because they do not handle imprecise and uncertain data satisfactorily. We propose the Enhanced Fuzzy Deep Neural Network (EFDNN), which integrates fuzzy logic with deep neural networks. This study aims to assess the economic impact of the EFDNN on agricultural productivity when applied to plant disease detection. Data for the research framework were collected from remote sensing and economic sources. The data were preprocessed, namely normalized and subjected to feature extraction, to ensure high-quality inputs. Deep Belief Networks (DBNs) were used to pretrain the EFDNN model, which was then fine-tuned with supervised learning. The model was evaluated with accuracy, precision, recall, and area under the receiver operating characteristic curve (AUC-ROC), and compared against baseline models: convolutional neural networks (CNNs), traditional DNNs, and fuzzy neural networks (FNNs). The EFDNN model achieved 95.2% accuracy, 94.8% precision, 95.6% recall, and 0.978 AUC-ROC, exceeding the accuracy of CNNs (92.3%), traditional DNNs (89.7%), and FNNs (90.4%). The economic analysis indicated a 14.3% reduction in pesticide use and an increase in crop yield worth USD 120 per acre, leading to higher farmer revenues. The EFDNN model is an effective enhancement to plant disease detection that offers economic and agricultural benefits, validating the potential of combining fuzzy logic with deep learning to enhance the performance and sustainability of agricultural practices.
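
The EFDNN's fuzzy layer is not specified in the abstract; one common way to combine fuzzy logic with a neural network is to fuzzify crisp inputs into membership degrees that the network then consumes as features. A sketch with triangular membership functions follows (the "lesion ratio" input and all breakpoints are hypothetical, not the paper's calibrated values):

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_severity(lesion_ratio):
    """Map a crisp lesion-area ratio in [0, 1] to fuzzy severity memberships.
    These three labels and breakpoints are illustrative only."""
    return {
        "healthy": triangular(lesion_ratio, -0.01, 0.0, 0.2),
        "mild": triangular(lesion_ratio, 0.1, 0.3, 0.5),
        "severe": triangular(lesion_ratio, 0.4, 0.8, 1.01),
    }

m = fuzzify_severity(0.25)
print({k: round(v, 2) for k, v in m.items()})
```

The resulting membership vector, rather than the raw ratio, would be fed into the DBN-pretrained network, which is what lets the model represent gradations between "healthy" and "diseased" instead of a hard threshold.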

Author 1: Mohammad Abrar

Keywords: Deep learning; plant disease; fuzzy deep learning; agricultural production

PDF

Paper 134: Investigating Retrieval-Augmented Generation in Quranic Studies: A Study of 13 Open-Source Large Language Models

Abstract: Accurate and contextually faithful responses are critical when applying large language models (LLMs) to sensitive and domain-specific tasks, such as answering queries related to quranic studies. General-purpose LLMs often struggle with hallucinations, where generated responses deviate from authoritative sources, raising concerns about their reliability in religious contexts. This challenge highlights the need for systems that can integrate domain-specific knowledge while maintaining response accuracy, relevance, and faithfulness. In this study, we investigate 13 open-source LLMs categorized into large (e.g., Llama3:70b, Gemma2:27b, QwQ:32b), medium (e.g., Gemma2:9b, Llama3:8b), and small (e.g., Llama3.2:3b, Phi3:3.8b) models. A Retrieval-Augmented Generation (RAG) pipeline is used to compensate for the limitations of the standalone models. This research utilizes a descriptive dataset of Quranic surahs, including the meanings, historical context, and qualities of the 114 surahs, allowing the model to gather relevant knowledge before responding. The models are evaluated using three key metrics set by human evaluators: context relevance, answer faithfulness, and answer relevance. The findings reveal that large models consistently outperform smaller models in capturing query semantics and producing accurate, contextually grounded responses. The Llama3.2:3b model, despite its small size, performs remarkably well on faithfulness (4.619) and relevance (4.857), showing the promise of well-optimized smaller architectures. This article examines the trade-offs between model size, computational efficiency, and response quality when using LLMs in domain-specific applications.
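
As a minimal sketch of the retrieve-then-prompt pattern the study evaluates, a naive lexical retriever and prompt builder might look like this (the corpus entries, scoring, and prompt template are purely illustrative, not the study's actual dataset or retriever):

```python
def overlap_score(query, doc):
    """Naive lexical retrieval score: fraction of query terms found in the doc.
    A real system would use embeddings or a proper IR ranking function."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q)

# Hypothetical descriptive entries (not actual surah descriptions).
corpus = {
    "doc1": "revealed in mecca with themes of patience and gratitude",
    "doc2": "revealed in medina with legal rulings and community guidance",
}

def build_prompt(query, corpus, k=1):
    """Rank documents by score and prepend the top-k as grounding context."""
    ranked = sorted(corpus, key=lambda d: overlap_score(query, corpus[d]),
                    reverse=True)
    context = "\n".join(corpus[d] for d in ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("which themes were revealed in mecca", corpus)
print(prompt)
```

Grounding the generation step in retrieved context like this is what the study relies on to curb hallucination, independent of which of the 13 LLMs consumes the prompt.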

Author 1: Zahra Khalila
Author 2: Arbi Haza Nasution
Author 3: Winda Monika
Author 4: Aytug Onan
Author 5: Yohei Murakami
Author 6: Yasir Bin Ismail Radi
Author 7: Noor Mohammad Osmani

Keywords: Large-language-models; retrieval-augmented generation; question answering; Quranic studies; Islamic teachings

PDF

Paper 135: Advanced Optimization of RPL-IoT Protocol Using ML Algorithms

Abstract: This study explores the transformative potential of machine learning (ML) algorithms in optimizing the Routing Protocol for Low-Power and Lossy Networks (RPL), addressing critical challenges in Internet of Things (IoT) networks, such as Expected Transmission Count (ETX), latency, and energy consumption. The research evaluates the performance of Random Forest, Gradient Boosting, Artificial Neural Networks (ANNs), and Q-Learning across IoT network simulations with varying scales (50, 100, and 150 nodes). Results indicate that tree-based models, particularly Random Forest and Gradient Boosting, demonstrate robust predictive capabilities for ETX and latency, achieving consistent results in smaller and medium-sized networks. Specifically, for 50-node networks, Neural Networks achieved the best performance with the lowest latency (2.43862 ms) and the best ETX (5.29557), despite slightly higher energy consumption. For 100-node networks, Q-Learning stood out with the lowest energy consumption (1.62973 J) and competitive ETX (2.70647), though at the cost of increased latency. In 150-node networks, Q-Learning again outperformed other models, achieving the lowest latency (0.68 ms) and energy consumption (2.21 J), though at the cost of higher ETX. Neural Networks excel in capturing non-linear dependencies but face limitations in energy-related metrics, while Q-Learning adapts dynamically to network changes, achieving remarkable latency reductions at the cost of transmission efficiency. The findings highlight key trade-offs between performance metrics and emphasize the need for algorithmic strategies tailored to specific IoT applications. This work not only validates the scalability and adaptability of ML approaches but also lays the groundwork for intelligent, efficient IoT network optimization and for future advancements in sustainable and scalable IoT networks.
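
As a sketch of how Q-Learning can drive RPL parent selection, a tabular agent can learn cost-to-sink values over candidate parents (the topology, link costs, and hyperparameters below are hypothetical, with cost standing in for a blend of ETX, latency, and energy):

```python
import random

random.seed(42)

# Hypothetical link costs from each node to its candidate RPL parents;
# "sink" is the DODAG root.
links = {
    "A": {"B": 1.0, "C": 2.5},
    "B": {"sink": 1.0},
    "C": {"sink": 0.5},
}

ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
Q = {node: {p: 0.0 for p in parents} for node, parents in links.items()}

def choose(node):
    """Epsilon-greedy parent selection (lower Q = lower expected cost)."""
    if random.random() < EPS:
        return random.choice(list(Q[node]))
    return min(Q[node], key=Q[node].get)

def episode(start="A"):
    node = start
    while node != "sink":
        nxt = choose(node)
        cost = links[node][nxt]
        future = 0.0 if nxt == "sink" else min(Q[nxt].values())
        # Q-learning update on cost-to-sink; since we minimize, the
        # bootstrap term takes the min over the next node's parents.
        Q[node][nxt] += ALPHA * (cost + GAMMA * future - Q[node][nxt])
        node = nxt

for _ in range(300):
    episode()
print({node: min(q, key=q.get) for node, q in Q.items()})
```

Here node A learns to prefer parent B (total cost 1.0 + 1.0) over the cheaper-last-hop route via C (2.5 + 0.5), which mirrors the ETX-versus-latency trade-offs the study reports.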

Author 1: Mansour Lmkaiti
Author 2: Ibtissam Larhlimi
Author 3: Maryem Lachgar
Author 4: Houda Moudni
Author 5: Hicham Mouncif

Keywords: IoT; RPL; machine learning; routing efficiency; energy consumption; expected transmission count; network optimization; Artificial Intelligence (AI)

PDF


© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org