IJACSA Volume 16 Issue 3

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: Federated Learning-Driven Privacy-Preserving Framework for Decentralized Data Analysis and Anomaly Detection in Contract Review

Abstract: Contract review is a critical legal task that involves several processes, such as compliance validation, clause classification, and anomaly detection. Traditional centralized models for contract analysis raise significant data privacy and compliance challenges due to the highly sensitive nature of legal documents. This paper proposes a contract review-oriented federated learning framework in which model training is performed in a completely decentralized way while preserving data confidentiality. It leverages privacy-preserving methods such as Differential Privacy (DP) and Secure Multi-Party Computation (SMPC) to protect sensitive information during collaborative learning. The proposed framework reaches a clause classification accuracy of 94.2% while satisfying privacy requirements. Analysis of training efficiency revealed that the federated model needed 13.1 hours, compared with 10.4 hours for a centralized model, while still preserving the privacy guarantees of the system. This research offers a scalable and secure approach to contract review and a path forward for privacy-conscious, AI-driven legal solutions.

Author 1: Raj Sonani
Author 2: Vijay Govindarajan
Author 3: Pankaj Verma

Keywords: Federated learning; privacy preservation; clause classification; compliance validation; anomaly detection
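
A minimal sketch, not the authors' implementation, of the core step the abstract describes: federated averaging of client updates with L2 clipping and Gaussian differential-privacy noise. The function name, clip norm, and noise scale are illustrative assumptions.

import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_std=0.01, rng=None):
    """Average L2-clipped client weight deltas and add Gaussian DP noise."""
    rng = rng or np.random.default_rng(0)
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]          # per-client L2 clipping
    avg = np.mean(clipped, axis=0)               # federated averaging
    return avg + rng.normal(0.0, noise_std, size=avg.shape)  # DP noise

# Each round, the server would apply this noisy average to the global model.
updates = [np.random.randn(10) for _ in range(5)]
delta = dp_federated_average(updates)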


Paper 2: Distributed Identity for Zero Trust and Segmented Access Control: A Novel Approach to Securing Network Infrastructure

Abstract: Distributed identity is the transition from centralized identity to Decentralized Identifiers (DID) and Verifiable Credentials (VC) for secure, privacy-preserving authentication. With distributed identity, identity data is brought back under the control of the user, removing the single point of failure presented by centralized credential stores and thereby preventing credential-based attacks. This study evaluates the security improvements that distributed identity brings to a Zero Trust Architecture (ZTA), especially against lateral movement within segmented networks. It also discusses the implementation specifics of the framework, its benefits and disadvantages for organizations, and compatibility and generalizability issues. The study further considers privacy and regulatory concerns, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), along with possible solutions. Results show that integrating distributed identity into ZTA reduces unauthorized lateral movement by approximately 65%, increases authentication security by 78% relative to traditional approaches, and prevents credential compromise through phishing in more than 80% of cases. GDPR and CCPA compliance are also bolstered by increased user control over identity data. The findings indicate that a substantial improvement in overall security posture can be achieved by incorporating distributed identities and promoting contextual, least-privilege authorization while protecting user privacy. The research suggests that technical standards need to be refined, that distributed identity should be expanded into practice, and that it should be discussed as an application within the current digital security landscape.

Author 1: Sina Ahmadi

Keywords: Distributed identity; ZTA; DID; VC; lateral movement; privacy; credential security


Paper 3: A Novel System for Managing Encrypted Data Using Searchable Encryption Techniques

Abstract: The motivation for this study arises from the insufficient security measures provided by cloud service providers, particularly with regard to data integrity and confidentiality. In today’s digital landscape, nearly every international organization stores data in the cloud, whether through in-house servers or third-party providers. While encrypting data prior to storage addresses certain security concerns, it does not fully resolve the issue: specifically, how can a server effectively process or search the data without decrypting it? This challenge is addressed by the concept of searchable encryption. The objective of this study is therefore to implement and evaluate a contemporary set of searchable encryption algorithms within a web-based platform. The study includes a comprehensive performance analysis of the implemented algorithms and an evaluation of the system based on their statistical outcomes, thereby contributing to the advancement of secure and efficient methods for managing encrypted data in cloud environments. This study evaluates an image search system using the FAST protocol, achieving an average search time of 28.696 ms per image and an average deletion time of 0.557 seconds. While slower than FAST’s benchmarks due to limited computational resources and additional processing steps, the system demonstrated reliable performance within its constraints. These results highlight the trade-offs between security, functionality, and performance, offering valuable insights for future optimizations in resource-constrained environments.

Author 1: Vijay Govindarajan

Keywords: Cloud service providers; encrypting; security; web-based platform
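
A toy illustration of the general searchable-encryption idea: an HMAC-keyed inverted index with keyword trapdoors, so the server matches tokens without seeing plaintext keywords. This is a generic sketch, not the FAST protocol evaluated in the paper, and all names are assumptions.

import hmac, hashlib, os
from collections import defaultdict

KEY = os.urandom(32)                  # secret shared by data owner and client

def trapdoor(keyword: str) -> bytes:
    # Deterministic keyword token; the server only ever sees this digest.
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).digest()

index = defaultdict(list)             # server side: token -> document ids

def add_document(doc_id, keywords):
    for kw in keywords:
        index[trapdoor(kw)].append(doc_id)

def search(keyword):
    # The server answers queries without decrypting any keyword.
    return index.get(trapdoor(keyword), [])

add_document("doc-1", ["invoice", "2022"])
print(search("invoice"))              # -> ['doc-1']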


Paper 4: Emotional Engagement and Teaching Innovations for Deep Learning and Retention in Education: A Literature Review

Abstract: The goal of this review is to identify key factors that enhance educational settings through innovative teaching methods and the integration of technology. It emphasizes the transformative role of digital tools, particularly in mathematics and science education, and their impact on student engagement, problem-solving skills, and conceptual understanding. The increasing digitalization of education necessitates the adoption of pedagogical strategies that enhance both cognitive and emotional engagement, ensuring students develop critical thinking and long-term knowledge retention skills. Various educational theories, including Behaviorism, Cognitivism, Constructivism, and Social Learning Theory, are analyzed to demonstrate their relevance in both traditional and online learning environments. Emotional engagement is explored as a crucial element in learning, focusing on its connection to memory retention and cognitive development. Pedagogical recall is highlighted as essential for optimizing long-term knowledge retention, particularly in online and blended learning environments, while the effectiveness of different teaching strategies in fostering deep learning and sustaining knowledge over time is evaluated. The findings advocate for a holistic educational approach that integrates both cognitive and emotional factors, leveraging technological advancements and innovative pedagogical methods to create inclusive, adaptive, and effective learning environments. Continuous pedagogical evolution is necessary to address emerging educational challenges and enhance student success in an increasingly digitalized academic landscape.

Author 1: Samer Alhebaishi
Author 2: Richard Stone
Author 3: Mohammed Ameen

Keywords: Emotional engagement; pedagogical recall; long-term knowledge retention; augmented reality in education; blended learning


Paper 5: A Hybrid AI-Based Risk Assessment Framework for Sustainable Construction: Integrating ANN, Fuzzy Logic, and IoT

Abstract: The construction industry is central to economic growth worldwide, but it faces various problems in risk management, especially concerning sustainable construction projects. Standard risk management techniques such as AHP and Monte Carlo simulation do not afford the flexibility and accuracy needed on construction sites. Based on these limitations, this study offers a new risk assessment system that combines Artificial Neural Networks (ANN), Fuzzy Logic, and Internet of Things (IoT) technologies. Real-time IoT sensor data and historical project data are integrated into an adaptive system that can identify, assess, and mitigate potential risks for improved decision making. The ANN component excels at pattern recognition and risk prediction, while Fuzzy Logic brings interpretability and reasoning under uncertainty. Raw IoT data are live readings that are processed and updated frequently as the devices and their environment change. The framework’s effectiveness is demonstrated experimentally: it achieves 92.7% accuracy while reducing project delays and costs. The results reveal that the presented framework is highly resistant to noise and that its performance degrades only gradually as project requirements change. This integrative approach provides a comprehensive solution for sustainable construction risk management, which may support the development of safer, more efficient, and environmentally friendly construction techniques.

Author 1: André Luís Barbosa Gomes Góes
Author 2: Rafaqat Kazmi
Author 3: Aqsa
Author 4: Siddhartha Nuthakki

Keywords: Risk assessment; sustainable construction; artificial neural networks; fuzzy logic; predictive analytics


Paper 6: Smart Insoles for Multi-User Monitoring: A Case Study on Received Signal Strength Indicator-Based Distance Measurement

Abstract: In the current context of high adoption of wearables and Internet of Things (IoT) devices, this work develops a smart insole system to measure the distance between users using the Received Signal Strength Indicator (RSSI). ESP32 WROOM microcontrollers with Bluetooth Low Energy, Wi-Fi, and multiple functionalities were used. The prototype includes sensors to count steps and detect activity (walking/running), plus a configurable alarm that alerts when the distance falls below a threshold. Collected data are sent directly and in real time to a database through the ThingSpeak web platform, which allows visualization of the data acquired from the insole sensors. The RSSI signal provided by the Bluetooth LE module was interpreted and modeled using a multilayer perceptron (MLP) neural network, achieving an average distance estimation accuracy of 90.89% on data measured in real time.

Author 1: Victor Huilca Cabay
Author 2: Alexandra Flores
Author 3: Paul Hernan Machado Herrera
Author 4: Byron Paul Huera Paltan

Keywords: Internet of Things; RSSI; smart insole; distance; wearables; neural network
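
A minimal sketch of the modeling step the abstract describes: learning an RSSI-to-distance mapping with an MLP. The training data here are synthetic, generated from the standard log-distance path-loss relation RSSI(d) = RSSI(d0) - 10 n log10(d / d0); the path-loss exponent, noise level, and network size are assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
d = rng.uniform(0.5, 10.0, 500)                       # distances in metres
rssi = -45.0 - 10 * 2.2 * np.log10(d) + rng.normal(0, 2.0, d.shape)

mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
mlp.fit(rssi.reshape(-1, 1), d)                       # learn RSSI -> distance
print(mlp.predict([[-60.0]]))                         # estimated distance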


Paper 7: Privacy Protection in JPEG XS: A Lightweight Spatio-Color Scrambling Approach

Abstract: This paper presents a lightweight JPEG XS coding scheme incorporating spatio-color scrambling for privacy protection. The proposed approach follows an Encryption-then-Compression (EtC) framework, maintaining compatibility with the JPEG XS standard. Prior to encoding, input images undergo scrambling operations, including line permutation, line reversal, and color permutation. Security analysis indicates that the scrambling technique provides a large key space, making brute-force attacks computationally challenging. Experimental results demonstrate that the proposed method achieves a rate-distortion (RD) performance nearly equivalent to conventional JPEG XS compression while enhancing visual security. Additionally, a rectangular block-based scrambling technique is explored, which offers a trade-off among low latency, reduced memory usage, and visual concealment performance. While real-time processing is possible with or without block-based scrambling, the block-based approach is particularly beneficial for applications that demand lower latency and reduced memory usage. The effectiveness of the proposed method is validated through simulations on 8K ultra-high-definition (UHD) images.

Author 1: Takayuki Nakachi
Author 2: Yasuhisa Kato
Author 3: Mitsuru Maruyama

Keywords: JPEG XS; UHD video; Encryption-then-Compression; privacy protection; perceptual scrambling
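
A minimal numpy sketch of the three scrambling operations the abstract names (line permutation, line reversal, color permutation), keyed by a seed so a receiver holding the key can invert them; the 50% reversal rate and key handling are illustrative assumptions, not the paper's exact scheme.

import numpy as np

def scramble(img: np.ndarray, key: int) -> np.ndarray:
    """Spatio-color scrambling of an HxWx3 image prior to encoding."""
    rng = np.random.default_rng(key)
    h = img.shape[0]
    out = img[rng.permutation(h)].copy()     # line permutation
    flip = rng.random(h) < 0.5               # keyed subset of lines
    out[flip] = out[flip, ::-1]              # line reversal
    return out[..., rng.permutation(3)]      # color permutation

img = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
enc = scramble(img, key=42)                  # applied before JPEG XS encoding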


Paper 8: Knowledge Management Application for Small and Medium-Sized Service-Oriented Enterprises Based on the SECI Model

Abstract: This paper analyzes the current situation and development bottlenecks of small and medium-sized service industry enterprises, using the T nail salon as an example. It emphasizes the importance of knowledge management and proposes establishing a knowledge system within the company that combines both humanistic and technological aspects. From the practice of using the SECI model in the T nail salon, we conclude that small and medium-sized service-oriented enterprises can, with appropriate means and at modest cost, achieve effective knowledge conversion among individuals, teams, organizations, and customers; achieve orderly knowledge management; and ultimately improve the quality of the enterprise’s services and its competitiveness.

Author 1: Chen Chang
Author 2: Manabu Sawaguchi
Author 3: Yasuaki Mori

Keywords: Knowledge management; Socialization Externalization Combination Internalization (SECI); nail salon; Small and Medium-Sized Enterprises (SMEs)


Paper 9: A Model for Simulation of the Energy Flows in a Heat Pipe Solar Collector

Abstract: The domestic sector is one of the major energy consumers, and hot water is a compulsory service in modern society. Therefore, one of the possibilities for reducing energy expenses is heating water using solar collectors. However, the optimization of such installations requires careful planning and preliminary simulations. This study presents a model for simulating the energy flows in a heat pipe solar collector. Unlike previous studies, it also accounts for the self-shading of the vacuum tubes at certain hours of the day. An experimental setup was organized to collect reference data for model validation, and the data were automatically stored in a database by a microcontroller-based electronic system. The modeled and experimental data were compared, and a PME of 1.55% and a PMAE of 16.33% were obtained. The proposed model could be used for simulating the useful power of hybrid hot-water systems under different application scenarios.

Author 1: Boris Evstatiev
Author 2: Nadezhda Evstatieva

Keywords: Model; simulation; heat pipe solar collector; useful power


Paper 10: Evaluation of the Usability and User Experience of a Digital Platform for Mental Health Assessment

Abstract: This study evaluated the usability and user experience of a mental health digital platform among college students. Usability tests were conducted using quantitative measures, user feedback, and direct observations. The user experience evaluation also aimed to gain insight into what works and what does not in the system. A total of 3,396 second-year students participated in the assessment, with university guidance counselors serving as facilitators. Results from the usability test indicated an above-average score among students, suggesting high satisfaction in terms of ease of use, well-integrated functions, and performance. Strengths of the platform identified from user feedback are effectiveness and efficiency, ease of use, innovation, organization and structure, and reliability and performance. Enhancements in functionality, including loading time, usability, readability, language preference, and lengthy questionnaires, were identified as key concerns among respondents. These findings highlight the usability of the platform while also identifying areas for improvement to ensure continuous engagement and a user-friendly experience.

Author 1: Jerina Jean M. Ecleo
Author 2: Mia Amor C. Tinam-isan
Author 3: Kristine Mae E. Galera
Author 4: Ric Adrian C. Balaton
Author 5: Imelu G. Mordeno
Author 6: Cenie M. Vilela-Malabanan

Keywords: Mental health; usability testing; user experience; mental health assessment; digital platform


Paper 11: Development of an Algorithm-Based Analysis and Compression Integrated Communication Tracking Management Information System (iCTMIS)

Abstract: This study addresses the challenges of administrative tasks and communication tracking at Visayas State University Alangalang (VSUA), highlighting the inefficiencies of the current manual processes. The objective is to develop an Integrated Communication Tracking Management Information System (iCTMIS) that enhances operational efficiency by integrating Optical Character Recognition (OCR) with Lempel-Ziv-Welch (LZW) lossless and Zlib compression algorithms. Employing a developmental research design and the ADDIE model, the system demonstrates improved data analysis and reduced disk-space usage through efficient compression. Significant findings reveal that OCR achieves up to 90% accuracy in text conversion, while LZW compression substantially deflates data sizes. Evaluated against the ISO 9126 software quality characteristics, the iCTMIS was shown to optimize storage and address VSUA's operational challenges effectively. This research concludes that the systematic integration of advanced algorithmic frameworks in iCTMIS significantly enhances the efficiency of organizational communication and administrative workflows.

Author 1: Carlo Jude P. Abuda
Author 2: Ritchell S. Villafuerte

Keywords: Information system; optical character recognition; Lempel-Ziv-Welch lossless compression; zlib compression; communication tracking
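
For reference, a textbook LZW compressor, the generic algorithm underlying the system's LZW component (a sketch of the standard method, not the authors' code):

def lzw_compress(data: str) -> list:
    """Classic LZW: emit the dictionary code of the longest known prefix."""
    table = {chr(i): i for i in range(256)}   # seed with single bytes
    w, out = "", []
    for ch in data:
        if w + ch in table:
            w += ch                           # extend the current phrase
        else:
            out.append(table[w])
            table[w + ch] = len(table)        # grow the dictionary
            w = ch
    if w:
        out.append(table[w])
    return out

print(lzw_compress("TOBEORNOTTOBEORTOBEORNOT"))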


Paper 12: Implementation of a Web System to Optimize the Quotation Process in the Company KSF Representaciones EIRL, 2022

Abstract: This research seeks to demonstrate whether the implementation of a web-based system influences the optimization of activities related to the quoting process, saving time and money for KSF Representaciones EIRL. The following question therefore arises: to what extent does the implementation of a web-based system optimize the quoting process? This is an applied, pre-experimental design with a quantitative approach. The population consists of average daily quote records for 24 business days per month. For the convenience sample, an average of 24 quote records from May were used for the pre-test and an average of 24 quote records from June for the post-test, collected using an observation sheet. The results for the quoting-process variable show that the application reduces the time needed to generate quotes. For the second dimension, the application results in a higher percentage of quote fulfillment. In conclusion, the implementation of the web-based system reduced quote-generation time by an average of 28 minutes and increased the compliance rate of submitted quotes by an average of 89.8%.

Author 1: Betsy Nataly Llacchuarimay-De La Cruz
Author 2: Segundo Alexandher Gutierrez Argomedo
Author 3: Luis Alberto Torres-Cabanillas

Keywords: Web system; optimization; quotation; customer satisfaction; efficiency


Paper 13: Application of the Business Process Management (BPM) Methodology in the Process of Incorporating Human Talent in the Retail Business Sector

Abstract: The lack of a well-defined onboarding process for new talent in a retail company specializing in beauty products and accessories for women motivated this research, the objective of which was to evaluate the positive impact that implementing business process management (BPM) could generate in this area, whose deficiencies lay in inadequate communication and the lack of appropriate digital tools. The study focused on three key dimensions to understand how this improvement could transform the process of integrating new talent. An applied, pre-experimental design with a quantitative approach was chosen as the research method, and a survey was applied to collect data, using a questionnaire as the measurement instrument. Following the characteristics and life cycle of the BPM methodological framework, it proved necessary to implement digital actions and tools to optimize the process, generating positive impacts in its three dimensions: a 44% increase in the satisfaction and commitment of the participants in the process, a 47% increase in the positive perception of monitoring and tracking the entry of new talent, and a 38% increase in the perception of the distribution of tasks among the actors in the process. In conclusion, applying the methodology has generated a notable improvement in the process, directly enriching the experience of new talent during the retail company's onboarding process.

Author 1: Anyela Alanya-Ramos
Author 2: Argenis Moreno-Rosales
Author 3: Luis Acosta-Medina

Keywords: BPM; human talent; incorporation process; process optimization; methodology


Paper 14: Security Onion as a Network Auditing Tool at the San Cristóbal de Huamanga National University

Abstract: In a context of evolving cyber threats, the San Cristobal de Huamanga National University (UNSCH) faces the need to improve its network security infrastructure. This study implements Security Onion as a network auditing tool at this institution with the objective of evaluating its effectiveness in three key areas: security monitoring, log management, and intrusion detection. The study employs an applied, descriptive, and experimental approach to demonstrate that Security Onion is a robust solution for incident detection. It enables comprehensive analysis of network logs and early identification of suspicious activities, providing a holistic view of the network. Based on the results, the study suggests best practices for protecting institutional information and the network, and contributes to understanding Security Onion's capabilities in similar network infrastructures. Furthermore, it provides a replicable model for other institutions.

Author 1: Kimberlly Nena Barraza Tudela
Author 2: Hubner Janampa Patilla

Keywords: Network security; network auditing; Security Onion; IDS; CIS Controls


Paper 15: Business Intelligence in Public Management

Abstract: The present research seeks to demonstrate the improvement in the visualization of indicators achieved by applying Business Intelligence in the district municipality of Lince. Among its institutional objectives, the entity aims to strengthen the modernization of its administrative and functional management systems. The research was of an applied type, with a pre-experimental, quantitative design. A sample of 10 users from the Tax Administration Management office was surveyed using a questionnaire. The pre-test and post-test data collected with this instrument allowed us to determine a positive relationship with decision-making for tax collection: the pre-test yielded a score of 50% at the low level, whereas the post-test yielded 50% at the general level. The investigation showed a meaningful change in decision-making supported by indicators generated by Business Intelligence, with respondents reporting changes in time, productivity, and presentation of information. Decision-making was also positively affected in its direction, control, evaluation, and organization aspects, as respondents perceived the value of a business tool capable of providing the information the institution needs for decision-making focused on tax collection. The research is structured in six sections. The first details the problem and the justification of the study in relation to the research objectives. The second presents the background and previous research supporting the problem, based on the key constructs of the work. The third describes the methodology, which follows a quantitative approach, and the fourth presents the results obtained. The fifth compares what the study achieved with previous studies, and finally, the conclusions summarize the achievement of the objectives and the contribution to future research.

Author 1: Javier Benavides-Redhead
Author 2: Jenny Gutiérrez-Flores

Keywords: Business intelligence; municipality; taxation; decision making; indicators; public management; information presentation; technology tool; modernization; productivity


Paper 16: Bioplastic Thickness Estimation Using Terahertz Time-Domain Spectroscopy and Machine Learning

Abstract: In the sustainable packaging industry, multiple parameters require regulation to achieve a high-quality final product that meets contemporary demands. In bioplastic manufacturing, the control of the film thickness is critical because it influences the mechanical properties and other key characteristics. Terahertz time-domain spectroscopy (THz-TDS) has emerged as a promising technology for the non-invasive characterization of polymeric materials. The present study evaluates the integration of THz-TDS with chemometric techniques and machine learning models to predict the thickness of bioplastic samples fabricated from potato and maize starch. Three distinct thickness levels were produced by solution casting, and a spectral analysis was performed in the range of 0.5 to 1.2 THz. Four regression models were developed, including partial least squares regression, support vector regression, binary regression tree, and a feedforward neural network. The performance of the models was assessed using the coefficient of determination (R2), root mean square error (RMSE) and the ratio of performance to deviation (RPD). R2 values ranged from 0.8379 to 0.9757, the RMSE values ranged from 0.1259 to 0.3368, and the RPD values ranged from 2.4399 to 6.8106. These findings underscore the potential of THz-TDS and machine learning for non-invasive analysis of thin polymeric films and lay the groundwork for future research aimed at enhancing reliability and functionality.

Author 1: Juan-Jesús Garrido-Arismendis
Author 2: Luis Juarez
Author 3: Jorge Mogollon
Author 4: Brenda Acevedo-Juárez
Author 5: Himer Avila-George
Author 6: Wilson Castro

Keywords: Terahertz spectroscopy; machine learning; chemometrics; thickness; bioplastic
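
For reference, the three reported figures of merit in their standard forms (the paper's exact conventions are assumed to match):

R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}, \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_i (y_i - \hat{y}_i)^2}, \qquad
\mathrm{RPD} = \frac{\mathrm{SD}(y)}{\mathrm{RMSE}},

where y_i are the reference thicknesses, \hat{y}_i the predictions, and SD(y) the standard deviation of the reference values; RPD values above roughly 2 are conventionally read as good predictive ability.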


Paper 17: Optimization of IIR Digital Filters Using Differential Evolution: A Comparative Analysis of FDDE and AMECoDEs Algorithms

Abstract: Infinite impulse response (IIR) digital filters are fundamental components in various digital signal processing applications, particularly those requiring optimized use of computational resources, such as memory and processing power. This study presents the design of classical IIR filters, including low-pass, high-pass, band-pass, and band-stop configurations, as well as multiple-passband filters featuring dual and triple passbands. Two differential evolution algorithms are utilized: FDDE (Differential Evolution Algorithm with Fitness and Diversity Ranking-Based Mutation Operator) and AMECoDEs (Adaptive Multiple-Elites-Guided Composite Differential Evolution Algorithm with a Shift Mechanism). To date, no study has investigated the application of the FDDE algorithm to IIR digital filter design, whereas the AMECoDEs algorithm has seen limited application in this context. Consequently, this work investigates the design of IIR filters using these algorithms and assesses their performance based on the mean squared error (MSE). Comparative analysis reveals that, for classical filters, the FDDE algorithm yields a slightly lower MSE in the magnitude response compared to the AMECoDEs algorithm. Conversely, for multiple-passband filters, the AMECoDEs algorithm outperforms FDDE by achieving a lower MSE. In the proposed model, IIR filters are implemented using a cascade structure of second-order sections (SOS), with their fitness function evaluated based on the MSE, computed using a constant weight function within each frequency band. Additionally, the magnitude response characteristics of the designed filters are compared with those of classical and dual-passband filters designed with the AMECoDEs algorithm in recent studies. The results indicate that the filters designed in this study show significant improvements across most evaluated metrics, particularly in terms of improved stopband attenuation. One of the key contributions of this work is the novel application of differential evolution algorithms to the design of triple-passband IIR filters, demonstrating their effectiveness through successful validation on a development board.

Author 1: Wildor Ferrel Serruto

Keywords: IIR digital filter; differential evolution; FDDE algorithm; AMECoDEs algorithm; triple-passband IIR filter
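
A minimal sketch of the fitness evaluation the abstract describes: the weighted MSE between the magnitude response of a cascade of second-order sections (SOS) and a desired response on a frequency grid, the quantity a differential-evolution candidate would minimize. The grid, weights, and candidate section are illustrative assumptions.

import numpy as np
from scipy.signal import sosfreqz

def mse_fitness(sos, w, desired, weight):
    """Weighted MSE of |H(e^jw)| for an SOS cascade vs. the desired magnitude."""
    _, h = sosfreqz(sos, worN=w)
    return float(np.mean(weight * (np.abs(h) - desired) ** 2))

w = np.linspace(0, np.pi, 512)                        # rad/sample grid
desired = (w < 0.4 * np.pi).astype(float)             # ideal low-pass magnitude
weight = np.ones_like(w)                              # constant weight per band
sos = np.array([[0.1, 0.2, 0.1, 1.0, -0.5, 0.25]])    # one candidate section
print(mse_fitness(sos, w, desired, weight))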


Paper 18: Machine Learning-Based Terahertz Spectroscopy for Starch Concentration Prediction in Biofilms

Abstract: Food preservation and safety require advanced detection methods to ensure transparency in supply chains. Terahertz (THz) spectroscopy has emerged as a powerful, non-invasive tool for material characterization. This study explores the integration of THz spectroscopy and machine learning for accurately quantifying maize starch adulteration in bioplastics derived from potato starch. Bioplastic samples with varying concentrations of maize starch were prepared, molded into three different thicknesses, and subjected to a two-stage drying process, resulting in 81 samples (27 treatments with three replicates each). The spectral profiles at THz (0.5 to 2 THz) were recorded and analyzed using three regression models: support vector regression, partial least squares regression, and multiple linear regression. The models were evaluated using the coefficient of determination (R2), Root Mean Square Error (RMSE), and the Residual Predictive Deviation (RPD). The results showed R2 values ranging from 0.7283 to 0.9495, RMSE between 0.0594 and 0.1393, and RPD values from 1.8753 to 4.4479, demonstrating strong predictive performance. These findings highlight the potential of THz spectroscopy and machine learning in the noninvasive detection of starch adulterants in bioplastics, paving the way for future research to enhance model robustness and applicability.

Author 1: Juan-Jesus Garrido-Arismendis
Author 2: Jimy Oblitas
Author 3: Cesar Nino
Author 4: Himer Avila-George
Author 5: Wilson Castro

Keywords: Terahertz spectroscopy; machine learning; chemometrics; starch detection; biofilms


Paper 19: Unified Deep Learning for Real-Time Pedestrian Detection, Pose Estimation, and Tracking

Abstract: This study introduces a novel unified deep learning framework for real-time pedestrian and Vulnerable Road User (VRU) detection, pose estimation, and tracking using YOLOv8. Unlike traditional approaches that separately handle these tasks, our integrated multi-task model leverages YOLOv8’s advanced multi-scale feature extraction and optimized architecture to efficiently perform simultaneous detection, pose estimation, and tracking. Experimental evaluations demonstrate superior performance compared to baseline YOLOv8 configurations, achieving an mAP@0.5 of 57.2%, OKS of 76.1% (COCO dataset), MOTA of 67.1%, and IDF1 of 64.3%. The framework's robust performance is validated through comprehensive testing under realistic urban scenarios and challenging conditions. By effectively addressing limitations in current autonomous vehicle (AV) perception systems, such as handling occlusions, varying lighting, and dense pedestrian environments, this integrated approach significantly enhances AV safety and navigation reliability at critical junctions and pedestrian crossings.

Author 1: Joseph De Guia
Author 2: Madhavi Deveraj

Keywords: Pedestrian detection; pose estimation; tracking; YOLOv8; deep learning


Paper 20: Impact of Emerging Technologies on Customer Loyalty: A Systematic Review

Abstract: The rapid evolution of emerging technologies has generated growing interest in their potential to transform customer loyalty in digital environments. This study conducts a systematic literature review (SLR) to analyze how emerging technologies influence customer loyalty, focusing on how these technologies affect loyalty indicators in markets with developed digital environments. A total of 453 articles from the Scopus database were identified by applying the PRISMA methodology. After removing duplicates and applying filters by language and document type, 103 relevant articles were selected; a detailed review based on inclusion and exclusion criteria then narrowed these to 51 documents included for analysis. Big Data and Data Analytics were the most researched technologies, followed by IoT and Machine Learning. The systematic review demonstrated that emerging technologies significantly impact customer loyalty: artificial intelligence and data analytics are key tools for improving customer experience and retention, which contributes to business growth. It is concluded that adopting these technologies enhances customer experience by offering personalization, behavior prediction, and inventory optimization, resulting in greater customer satisfaction and loyalty.

Author 1: Jonattan Andia-Reyna
Author 2: Yorhs Malasquez-Villanueva

Keywords: Emerging technologies; loyalty programs; customer loyalty; business growth


Paper 21: Unmasking AI-Generated Texts Using Linguistic and Stylistic Features

Abstract: As Artificial Intelligence (AI) generated texts become increasingly sophisticated, distinguishing between human-written and AI-generated content presents a growing challenge. Reliably detecting AI-generated texts is of primary importance in text-heavy fields such as journalism, education, and law. In this study, several methods for detecting AI-generated texts were investigated by analysing a range of linguistic and stylistic features, including text length, punctuation count, vocabulary richness, readability indices, and sentiment polarity, to identify patterns in AI-generated content. A dataset of 483,360 essays was used. Of the six machine learning classifiers tested, the Random Forest classifier achieved the highest accuracy, at 82.6%. The findings of this study thus provide a framework for developing more sophisticated detection tools that can be applied to various real-world scenarios.

Author 1: Muhammad Irfaan Hossen Rujeedawa
Author 2: Sameerchand Pudaruth
Author 3: Vusumuzi Malele

Keywords: AI-generated texts; human-written texts; machine learning; linguistic features; stylistic features
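
A minimal sketch of the feature-plus-classifier pipeline the abstract describes, computing three of the named features (text length, punctuation count, vocabulary richness) and fitting a Random Forest; the toy texts and labels are assumptions.

import string
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(text: str) -> list:
    words = text.split()
    return [
        len(text),                                             # text length
        sum(c in string.punctuation for c in text),            # punctuation count
        len({w.lower() for w in words}) / max(len(words), 1),  # type-token ratio
    ]

X = np.array([features(t) for t in ["Hello, world!", "Indeed. Indeed. Indeed."]])
y = np.array([0, 1])            # 0 = human-written, 1 = AI-generated (toy labels)
clf = RandomForestClassifier(random_state=0).fit(X, y)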


Paper 22: Abnormal Data Detection Model Based on Autoencoder and Random Forest Algorithm: Camera Sensor Data in Autonomous Driving Systems

Abstract: This project develops an AI-based anomaly detection system for autonomous driving, where abnormal data directly affect the safety of the driving system, especially abnormal camera sensor data. Sensor failure, environmental changes, or bad weather can produce abnormal data, which can distort the decision-making process and may have disastrous consequences. To address this challenge, this study proposes a hybrid anomaly detection model (called CAE-RF) that combines a convolutional autoencoder and the random forest algorithm to identify abnormal data patterns efficiently and accurately, improving the safety of autonomous driving systems. The proposed method uses the convolutional autoencoder to compute the reconstruction error and combines it with the hidden features extracted by the encoder as input to the random forest, which distinguishes normal data from abnormal data. Key performance indicators such as accuracy, precision, recall, and F1 score are used to evaluate the model, and robustness is ensured through cross-validation. Experimental results show that the CAE-RF model achieves 92% accuracy in distinguishing normal from abnormal data, with higher accuracy and reliability than traditional methods. Implementing this model makes it possible to identify and process abnormal data in a timely manner, reduce the risks brought by sensor failure or changes in the external environment, prevent potential accidents, and improve the safety and reliability of autonomous driving systems.

Author 1: Geng Shengwen
Author 2: Mohd Hafeez Osman

Keywords: Automatic driving; anomaly data detection; convolutional autoencoder; random forest; CAE-RF
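
A minimal sketch of the hybrid idea behind CAE-RF: an autoencoder is trained to reconstruct its input, and its reconstruction error together with the encoder's latent code feeds a Random Forest. For brevity this uses a fully-connected autoencoder on random stand-in features rather than a convolutional one on camera frames; sizes, labels, and training settings are assumptions.

import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class AE(nn.Module):
    def __init__(self, d=32, z=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, z), nn.ReLU())
        self.dec = nn.Linear(z, d)
    def forward(self, x):
        h = self.enc(x)
        return self.dec(h), h

X = torch.randn(256, 32)                       # stand-in for camera features
ae = AE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(200):                           # train to reconstruct the input
    xr, _ = ae(X)
    loss = ((xr - X) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    xr, h = ae(X)
    err = ((xr - X) ** 2).mean(dim=1, keepdim=True)   # reconstruction error
    feats = torch.cat([err, h], dim=1).numpy()        # RF input: error + latent code

labels = (err.squeeze() > err.mean()).int().numpy()   # toy labels; real ones come from data
rf = RandomForestClassifier(random_state=0).fit(feats, labels)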


Paper 23: Career Recommendation Based on Feature Selection for Undergraduate Students Using Machine Learning Techniques

Abstract: Undergraduate students worldwide face difficulties choosing career paths that will stay with them for at least several years. It is widespread for graduates to work in jobs or join career paths they are not interested in, and sometimes these jobs do not suit their skills and preferences. On the other hand, some jobs require certain criteria and varied skills that not all undergraduates possess. Although undergraduates can study a major they are interested in, this does not guarantee success in their future career path. Undergraduates across majors therefore need advice on career paths that suit their skills and interests. When graduates feel dissatisfied with their jobs, this dissatisfaction can impact their productivity and performance in their assigned tasks and job responsibilities; moreover, the overall performance of the organizations where they work can be negatively affected by having less talented and less motivated workers. As a result, this paper designs and proposes a recommendation system to guide undergraduates in choosing an optimal career path. Various machine learning techniques were used in the recommendation system. The proposed system was applied to two datasets related to Information Technology jobs: “Dataset A”, consisting of 20,000 records, and “Dataset B”, consisting of 500 records. Feature selection techniques were applied to “Dataset A” to determine the most important features, enhancing the accuracy of the proposed recommendation system. The random forest technique performed best among the machine learning techniques tested.

Author 1: Samar El-Keiey
Author 2: Dina ElMenshawy
Author 3: Ehab Hassanein

Keywords: Career path; feature selection; machine learning techniques; recommendation systems


Paper 24: Flood Prevention System Using IoT

Abstract: Floods are among the most severe natural disasters in Malaysia, occurring frequently in recent years and causing significant socio-economic and environmental impacts. These recurring disasters lead to huge losses and prolonged recovery periods. Flood management involves four phases: prevention, preparedness, response, and recovery. However, existing flood management systems primarily focus on preparedness, response, and recovery, often neglecting preventive measures, especially in river basins, which serve as the primary channels for water flow. The lack of emphasis on the prevention phase has resulted in frequent flood occurrences, economic losses, loss of lives, and extensive environmental damage. To address this gap, this study proposes an IoT-based flood prevention system specifically designed for river basin management to mitigate flood risks. The system regulates and maintains river water flow and quality by integrating the Internet of Things (IoT) with automated water turbines. By combining real-time data from IoT sensors with historical flood data, the system can autonomously take appropriate actions to regulate and maintain the water flow and water level in the river basin. These proactive measures allow for better water discharge to the sea, even during periods of heavy rainfall. The implementation of this system contributes to sustainable flood mitigation strategies, with advanced technologies enhancing disaster management capabilities.

Author 1: Balasubramaniam Muniandy
Author 2: Siti Sarah Maidin
Author 3: M. Batumalay
Author 4: Lakshmi Dhandapani
Author 5: Prakash. S

Keywords: Flood prevention system; Internet of Things (IoT); automated water turbines; river basin management; real-time monitoring; AI-based flood prediction; environmental sustainability; smart infrastructure


Paper 25: Improved CNN Recognition Algorithm for Identifying Bird Hazards in Transmission Lines

Abstract: With the expansion of the power grid, bird activity has become a main cause of transmission line failures, and how to accurately identify hazard birds has received widespread attention from all sectors of society. However, current methods for identifying birds that endanger transmission lines suffer from low accuracy due to the small size of bird targets. This study proposes an enhanced Convolutional Neural Network (CNN) combined with a Support Vector Machine (SVM) to improve the accuracy of identifying hazardous birds on transmission lines. A dataset of bird species affecting transmission lines is constructed, and data augmentation methods and a denoising deep convolutional network are used to process the data. Combining the three yields a bird identification algorithm for transmission line hazards based on the improved CNN and SVM. A performance comparison shows that the algorithm's average recognition speed and accuracy are 9.8 frames per second and 97.4%, respectively, significantly better than the compared algorithms. An analysis of the algorithm in application further shows that it can accurately identify hazard birds: in sample recognition results, the confirmation probabilities for Pica pica, Ciconia boyciana, Egretta garzetta, and Hirundo rustica are 98.73%, 97.68%, 96.54%, and 91.34%, respectively, all above 90%. These findings indicate that the proposed identification algorithm has good performance and practical value, helping to improve the accuracy of identifying hazard birds on transmission lines.

Author 1: Junzhou Li
Author 2: Yao Li
Author 3: Wen Wang

Keywords: CNN; hazard birds; transmission line; distinguish; support vector machine
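
A minimal sketch of the CNN-plus-SVM pairing the abstract describes: feature vectors that would come from a CNN's penultimate layer are classified by an SVM in place of a softmax head. The random features and four-class labels are stand-ins.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))        # stand-in CNN penultimate-layer features
y = rng.integers(0, 4, 200)            # four bird species (toy labels)

svm = SVC(kernel="rbf", C=10.0).fit(X, y)   # SVM replaces the softmax head
print(svm.predict(X[:3]))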


Paper 26: Super-Twisting Sliding Mode Distributed Consensus for Nonlinear Multi-Agent Systems with Unknown Bounded External Disturbances

Abstract: This paper addresses the distributed consensus tracking problem for nonlinear multi-agent systems subject to unknown but bounded external disturbances by leveraging a super-twisting sliding mode (STSM) control framework. Two STSM-based consensus algorithms are proposed—one for first-order and another for second-order multi-agent systems—to achieve finite-time convergence despite disturbances. A disturbance observer is integrated into the consensus control protocols to estimate and compensate for these disturbances, ensuring robust tracking without requiring time-derivative sliding variables or smoothing algorithms. The proposed consensus protocols build upon the concepts of finite-time stability, Lipschitz-bounded functions, relative degree analysis of input-output dynamics, and positive-definite matrix properties. Stability and finite-time convergence are rigorously established using Lyapunov-based proofs, Rayleigh’s inequality, and finite-time settling results. Unstructured disturbances are modelled as zero-mean Gaussian noise and structured disturbances are expressed via a regressor formulation. Numerical simulations confirm that the integrated STSM-based consensus approach and disturbance observer ensure high tracking accuracy, robustness, and smooth control performance under diverse disturbance conditions.

Author 1: Belkacem Kada
Author 2: Khalid Munawar

Keywords: Distributed consensus; cooperative control; nonlinear multiagent systems; robustness; super-twisting sliding mode
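
For context, the standard super-twisting law for a sliding variable s, the textbook form on which STSM consensus protocols are typically built (the paper's protocols add the consensus and disturbance-observer terms), is

u = -k_1 |s|^{1/2} \operatorname{sign}(s) + v, \qquad \dot{v} = -k_2 \operatorname{sign}(s),

with gains k_1, k_2 > 0 chosen against the disturbance bound so that s and \dot{s} reach zero in finite time without requiring the time derivative of the sliding variable.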


Paper 27: AI-Driven Intrusion Detection in IoV Communication: Insights from CICIoV2024 Dataset

Abstract: The increasing interconnectivity of vehicular networks through the Internet of Vehicles (IoV) introduces significant security challenges, particularly for the Controller Area Network (CAN), a widely adopted protocol vulnerable to cyberattacks such as spoofing and Denial-of-Service (DoS). To address these challenges, this study explores the potential of Intrusion Detection Systems (IDSs) leveraging artificial intelligence (AI) techniques to detect and mitigate malicious activities in CAN communications. Using the CICIoV2024 dataset, which provides a realistic testbed of vehicular traffic under benign and malicious conditions, we evaluate 25 machine learning (ML) models across multiple metrics, including accuracy, balanced accuracy, F1-score, and computational efficiency. A systematic and repeatable approach was proposed to facilitate testing multiple models and classification scenarios, enabling a comprehensive exploration of the dataset's characteristics and providing insights into various ML algorithms' effectiveness. The findings highlight the strengths and limitations of various algorithms, with ensemble-based and tree-based models demonstrating superior performance in handling imbalanced data and achieving high generalization. This study provides insights into optimizing IDSs for vehicular networks and outlines recommendations for improving the robustness and applicability of security solutions in real-world IoV scenarios.

Author 1: Nourah Fahad Janbi

Keywords: Intrusion Detection System; controller area network; Internet of Vehicles; CICIoV2024; machine learning; Artificial Intelligence; security


Paper 28: Modification of C-Grabcut for Segmentation and Classification of Coffee Leaf Diseases in Complex Backgrounds

Abstract: Visual changes, including spots, discoloration, and deformation, characterize coffee leaf diseases. In real-world image data, complex backgrounds present challenges for classification using deep learning models. Irrelevant objects, such as soil, other leaves, and miscellaneous items, can hinder the model's ability to accurately recognize disease patterns. Furthermore, the absence of effective segmentation techniques has resulted in low accuracy in previous studies. This work aims to address these limitations by enhancing the performance of the MobileNet-V2 model for coffee leaf disease classification. We applied a modified C-Grabcut segmentation technique to improve the isolation of diseased areas from complex backgrounds. The results demonstrate a significant performance improvement, achieving an Intersection over Union (IoU) of 0.8369 and an accuracy of 94.83%. These findings suggest that the MobileNet-V2 model, combined with the modified C-Grabcut segmentation, offers robust performance for in-field coffee leaf disease classification, striking a better balance between effectiveness and accuracy compared to previous studies.

Author 1: Anastia Ivanabilla Novanti
Author 2: Agus Harjoko

Keywords: Image segmentation; in-field image; mobilenet-v2; coffee leaf diseases; background complexity


Paper 29: Adaptive Deep Learning Framework with Unicintus Optimization for Anomaly Detection in Streaming Data

Abstract: Anomaly detection in streaming data is crucial for identifying unusual patterns or outliers that may indicate significant issues. Traditional methods struggle to handle high-velocity data efficiently, adapt to changing data distributions, and maintain performance over time; they also struggle with scalability, adaptability, and computational efficiency, leading to delays in detection or an increased rate of false positives. To address these limitations, the Unicintus Escape Energy enabled Sampling based Drift Deep Belief Network-Bidirectional Long Short Term Memory (UES2-DTM) is proposed in this research. The model incorporates adaptive reservoir sampling and an adaptive sliding-window mechanism into the base model, which improves its efficiency on streaming data. The adaptive sliding-window mechanism for drift detection integrates the Unicintus Escape Energy Optimization (UE2O) algorithm to boost efficiency by dynamically adjusting the sliding-window size and parameters based on the characteristics of the real-time stream, while adaptive reservoir sampling maintains a representative sample of the data stream for effective detection. Overall, the UES2-DTM model demonstrates superior adaptability and accuracy, attaining a precision of 97.199%, recall of 94.827%, F1-score of 95.998%, and Mean Square Error (MSE) of 3.461.

Author 1: Srividhya V R
Author 2: Kayarvizhy N

Keywords: Streaming data; sliding window; anomaly detection; reservoir sampling; Unicintus escape energy optimization
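
For reference, textbook reservoir sampling (Algorithm R), the generic building block behind the adaptive variant the abstract names; the fixed seed and stream are illustrative.

import random

def reservoir_sample(stream, k, rng=random.Random(0)):
    """Keep a uniform sample of size k from a stream of unknown length in O(k) memory."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)            # fill the reservoir
        else:
            j = rng.randint(0, i)          # uniform over [0, i], inclusive
            if j < k:
                sample[j] = item           # replace with probability k/(i+1)
    return sample

print(reservoir_sample(range(10_000), k=5))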


Paper 30: A Deep Learning Ordinal Classifier

Abstract: Deep learning models such as TabNet have gained popularity for handling tabular data. However, most existing architectures treat categorical variables as nominal, ignoring the inherent ordering in ordinal data, which can lead to suboptimal classification performance, particularly in tasks where ordinal relationships carry meaningful information, such as quality assessment, disease severity staging, and risk prediction. This study investigates the impact of explicitly modeling ordinal relationships in deep learning by developing an ordinal classification model and comparing it with its nominal counterpart. The proposed approach integrates TabNet, a deep learning framework, with ordinal constraints, leveraging a proportional odds model to better capture the ordinal structure and Beta cross-entropy as the loss function to enforce ordering during training. To evaluate the effectiveness of the proposed ordinal classification approach, experiments were conducted on two publicly available datasets: the White Wine Quality dataset and the Hepatitis C dataset. The results demonstrate that incorporating ordinal constraints leads to improvements across multiple evaluation metrics, including 1-off accuracy, average mean absolute error (AMAE), maximum mean absolute error (MMAE), and quadratic weighted kappa (QWK), compared to a nominal classification model trained under the same conditions. These findings underscore the importance of ordinal modeling in tabular classification and contribute to the advancement of deep learning techniques for structured data.

Author 1: Tiphelele Lwazi Nxumalo
Author 2: Richard Maina Rimiru
Author 3: Vusi Mpendulo Magagula

Keywords: Ordinal classification; TabNet; proportional odds model; tabular data
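
For context, the proportional odds (cumulative logit) model referenced in the abstract takes the standard form

P(y \le j \mid x) = \sigma(\theta_j - f(x)), \qquad j = 1, \dots, J-1, \qquad \theta_1 < \dots < \theta_{J-1},

where \sigma is the logistic function and the ordered thresholds \theta_j enforce the class ordering; taking f(x) to be the TabNet output score is an assumption about the pairing, not the paper's exact parameterization, and class probabilities follow as P(y = j \mid x) = P(y \le j \mid x) - P(y \le j-1 \mid x).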


Paper 31: Intelligent Real-Time Air Quality Index Classification for Smart Home Digital Twins

Abstract: This paper investigates the application of machine learning and deep learning models for intelligent real-time Air Quality Index (AQI) classification within a smart home digital twin context. Leveraging sensor data encompassing CO2 and TVOC levels, we perform a comparative analysis of eight models: Transformer Neural Network (TNN), Convolutional Neural Networks (CNN), Gated Recurrent Units (GRU), Recurrent Neural Networks (RNN), Support Vector Machines (SVM), Random Forest (RF), Gradient Boosting (GB), and K-Nearest Neighbors (KNN). These models aim to accurately classify air quality into six categories corresponding to AQI levels, ranging from Good to Hazardous, which are critical for assessing health risks. The performance of each model is rigorously evaluated using metrics including accuracy, precision, recall, F1-score, and ROC curves. Our findings demonstrate that the implemented models exhibit strong performance. This high-accuracy classification enables the smart home digital twin to move beyond passive monitoring, enabling proactive environmental control. For instance, the digital twin can use this real-time AQI classification to automatically adjust HVAC systems, trigger air purifiers when indoor air quality degrades, and potentially inform occupancy schedules. This integration allows for intelligent, adaptive management of the home's environment, ensuring optimal indoor air quality and occupant well-being. The paper also discusses the limitations of each model and suitable application scenarios for intelligent AQI management within the digital twin framework, offering valuable insights for the selection of appropriate air quality classification models in smart home environments.

Author 1: Saley Saleh
Author 2: A. S. Abohamama
Author 3: A. S. Tolba

Keywords: Air quality classification; machine learning; deep learning; Convolutional Neural Networks; Recurrent Neural Networks; transformer; Support Vector Machines; Random Forest; Gradient Boosting; k-nearest neighbors; CCS811 sensor data


Paper 32: Sentiment Analysis and Emotion Detection Using Transformer Models in Multilingual Social Media Data

Abstract: The rapid expansion of multilingual social media platforms has resulted in a surge of user-generated content, introducing challenges in sentiment analysis and emotion detection due to code-switching, informal text, and linguistic diversity. Traditional rule-based and machine learning models struggle to process multilingual complexities effectively, necessitating advanced deep-learning approaches. This study develops a transformer-based sentiment analysis and emotion detection system capable of handling multilingual and code-mixed social media text. The proposed fine-tuned Cross-lingual Language Model – Robust (XLM-R) model is compared against state-of-the-art transformer models (mBERT, T5) and traditional classifiers (support vector machine (SVM), Random Forest) to assess its cross-lingual sentiment classification performance. A multilingual dataset was compiled from Twitter, YouTube, Facebook, and Amazon Reviews, covering English, Spanish, French, Hindi, Arabic, Tamil, and Portuguese. Data preprocessing included tokenization, stopword removal, emoji normalization, and code-switching handling. Transformer models were fine-tuned using cross-lingual embeddings and transfer learning, with accuracy, F1-score, and confusion matrices for performance evaluation. Results show that XLM-R outperformed all baselines, achieving an F1-score of 90.3%, while multilingual Bidirectional Encoder Representations from Transformers (mBERT) and T5 scored 84.5% and 87.2%, respectively. Preprocessing improved performance by 7%, particularly in code-mixed datasets. Handling code-switching increased accuracy by 8.9%, confirming the model’s robustness in multilingual sentiment analysis. The findings demonstrate that XLM-R effectively classifies sentiments and emotions in multilingual social media data, surpassing existing approaches. This study supports integrating transformer-based models for cross-lingual natural language processing (NLP) tasks, paving the way for real-time multilingual sentiment analysis applications.

Author 1: Sultan Saaed Almalki

Keywords: Multilingual sentiment analysis; emotion detection; transformer models; XLM-R; mBERT; T5; code-switching; cross-lingual NLP; social media text processing; deep learning

PDF

Paper 33: Popularity-Correction Sampling and Improved Contrastive Loss Recommendation

Abstract: In recommendation systems, negative sampling strategies are crucial for the calculation of contrastive learning loss. Traditional random negative sampling methods may lead to insufficient quality of negative samples during training, thereby affecting the convergence and performance of the model. In addition, the Bayesian Personalized Ranking (BPR) loss function usually converges slowly and is prone to falling into suboptimal local solutions. To address the above problems, this paper proposes a recommendation algorithm based on popularity-corrected sampling and improved contrastive loss. First, a dynamic negative sampling method with popularity correction is proposed, which reduces the impact of item popularity distribution bias on model training and dynamically screens out negative samples to improve the quality of model recommendations. Second, an improved contrastive loss is proposed, which selects the most challenging negative samples and introduces a boundary threshold to control the sensitivity of the loss, enabling the model to focus more on samples that are difficult to distinguish and further optimize the recommendation effect. Experimental results on the Amazon-Book, Yelp2018, and Gowalla datasets show that the proposed model significantly outperforms mainstream state-of-the-art models in recommendation tasks. Specifically, the Recall metric, which reflects model accuracy, improves by 16.8%, 12.9%, and 5.72% respectively on these three datasets. The NDCG metric, which measures ranking quality, increases by 20.7%, 16.4%, and 7.76% respectively. These results confirm the effectiveness and superiority of the recommendation algorithm across different scenarios. Compared with baseline models, it demonstrates stronger adaptability in complex situations, such as the sparse dataset Gowalla and the long-tail distribution dataset Amazon-Book, with the highest improvement in core metrics exceeding 20%.
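
The two named ideas can be sketched under a simplified formulation of our own: a negative sampler whose probabilities are popularity counts raised to an exponent below one (flattening the popularity skew), and a margin loss computed over the hardest sampled negative. The exponent, margin, and tensor shapes are illustrative assumptions, not the paper's exact design.

```python
import torch

def sample_negatives(pop_counts, n_samples, alpha=0.75):
    # p(i) proportional to pop(i)^alpha; alpha < 1 damps the popularity bias
    probs = pop_counts.float().pow(alpha)
    return torch.multinomial(probs / probs.sum(), n_samples, replacement=True)

def hardest_negative_margin_loss(user_e, pos_e, neg_e, margin=0.5):
    pos_score = (user_e * pos_e).sum(-1)               # (B,)
    neg_score = (user_e.unsqueeze(1) * neg_e).sum(-1)  # (B, K)
    hardest = neg_score.max(dim=1).values              # most confusable negative
    return torch.clamp(margin - pos_score + hardest, min=0).mean()

B, K, d = 32, 8, 64
item_emb = torch.randn(5000, d)
pop = torch.randint(1, 1000, (5000,))                  # per-item interaction counts
neg_ids = sample_negatives(pop, B * K).view(B, K)
loss = hardest_negative_margin_loss(torch.randn(B, d), torch.randn(B, d),
                                    item_emb[neg_ids])
print(loss)
```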

Author 1: Wei Lu
Author 2: Xiaodong Cai
Author 3: Minghui Li

Keywords: Recommendation algorithms; contrastive loss; hard negative samples; popularity bias

PDF

Paper 34: Developing Motion Templates of Sport Training Using R-GDL Approach for Evaluating Extrinsic Feedback of Penalty Kicks

Abstract: The study developed Motion Templates (MTs) using the Reverse-Gesture Description Language (R-GDL) method to evaluate extrinsic feedback in football penalty kick training. Traditional coaching methods often rely on subjective and qualitative assessments. To address this, motion capture (MoCap) technology was employed to collect kinematic data from two university football players (right- and left-footed) performing penalty kicks toward the left (Set 1) and right (Set 2) goalposts, and a Score Rubric Assessment (SRA) form was used by a professional coach to evaluate the performance. From the collected MoCap data, 40 successful penalty kicks were selected, converted into SKL format, and used to generate MTs through the Gesture Description Language (GDL) system using R-GDL, which standardized movement patterns through adaptive machine-learning-derived rules. The MTs incorporated features such as joint angles and limb trajectories, producing five rules per template for comparative analysis. Results demonstrated that MTs effectively differentiated players’ techniques across sets (e.g., Player A required fewer attempts in Set 1 than Player B in Set 2). Cross-validation against coach-evaluated SRA outcomes revealed that extrinsic feedback scores from MTs did not surpass SRA benchmarks, confirming the uniqueness of each player’s motion patterns. This highlights MTs’ reliability in providing objective, granular feedback for skill improvement. The study concludes that R-GDL-based MTs offer a robust tool for enhancing sports training analytics, enabling data-driven coaching strategies. Future work will focus on scalability, cost reduction, and extending this approach to other sports.

Author 1: Amir Irfan Mazian
Author 2: Wan Rizhan
Author 3: Normala Rahim
Author 4: Muhammad D. Zakaria
Author 5: Mohd Sufian Mat Deris
Author 6: Fadzli Syed Abdullah
Author 7: Ahmad Rafi

Keywords: Motion templates; motion capture; penalty kick; extrinsic feedback; reverse-gesture description language

PDF

Paper 35: Data Segmentation and Concatenation for Controlling K-Means Clustering-Based Gamelan Musical Nuance Classification

Abstract: A musical nuance classification model is proposed using a clustering-based classification approach. Gamelan, a traditional Indonesian music ensemble, is used as the subject of this study. The proposed approach employs initial and final data segmentation to analyze symbolic music data, followed by concatenation of the clustering results from both segments to generate a more complex label. Structural-based segmentation divides the composition into an initial segment, representing theme introduction, and a final segment, serving as a closing or resolution. This aims to capture the distinct characteristics of the initial and final segments of the composition. The approach reduces clustering complexity while maintaining the relevance of local patterns. The clustering process, performed using the K-Means algorithm, demonstrates strong performance and promising results. Furthermore, the classification rules derived from data segmentation and concatenation help mitigate clustering complexity, resulting in an effective classification outcome. The model evaluation was conducted by measuring the similarity within the classes formed from data merging using the Euclidean distance score, where values below three indicate high similarity and values greater than ten indicate strong dissimilarity. Of the 13 formed classes with more than one data point, three (Class 5, Class 12, and Class 18) demonstrate high similarity, with values below three. Five other classes (Class 7, Class 10, Class 11, Class 15, and Class 20) exhibit near-high similarity, with values ranging from three to four, while the remaining five classes fall within the range of four to five.
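
A minimal sketch of the segment-and-concatenate scheme: cluster features of the initial and final segments separately with K-Means, then join the two cluster labels into one composite class. Random vectors stand in for the symbolic gamelan features, and k is an illustrative choice.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
initial_feats = rng.normal(size=(40, 16))   # features of each piece's opening segment
final_feats = rng.normal(size=(40, 16))     # features of each piece's closing segment

k = 5
lab_init = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(initial_feats)
lab_final = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(final_feats)

# Concatenating the two labels yields up to k*k composite classes such as "2-4"
composite = [f"{a}-{b}" for a, b in zip(lab_init, lab_final)]
print(sorted(set(composite)))
```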

Author 1: Heribertus Himawan
Author 2: Arry Maulana Syarif
Author 3: Ika Novita Dewi
Author 4: Abdul Karim

Keywords: Musical emotion clustering; classification; clustering-based classification; K-Means algorithm; symbolic music; gamelan music

PDF

Paper 36: Micro Laboratory Safety Hazard Detection Based on YOLOv4: A Lightweight Image Analysis Approach

Abstract: In hazardous chemical laboratories, identifying and managing safety hazards is critical for effective safety management. This study, grounded in safety engineering principles, focuses on laboratory environments to develop an efficient hazard detection model using deep learning and object detection techniques. The lightweight YOLOv4-Tiny algorithm, with fewer parameters, was selected and optimized for detecting unsafe factors in laboratories. The CIoU loss function was employed to enhance the stability of candidate box regression, while three attention mechanism modules were embedded into the backbone feature extraction network and the feature pyramid's upsampling layer, forming an improved YOLOv4-Tiny object detection algorithm. To support the detection tasks, a specialized dataset for laboratory hazards was created. The improved YOLOv4-Tiny model was then used to construct two detection models: one for identifying the status of chemical bottles and another for detecting general laboratory safety hazards. The chemical bottle status detection model achieved AP values of 93.06% (normal), 95.31% (disorderly stacking), and 90.72% (label detachment), with an mAP of 93.03% and an FPS of 272, demonstrating both high accuracy and speed. The laboratory hazard detection model achieved AP values of 97.40%, 90.14%, 96.80%, and 68.95% for normal experimenters, individuals not wearing protective equipment, individuals smoking, and open flames, respectively, with an mAP of 88.32% and an FPS of 116. These results confirm the effectiveness of the proposed models in accurately and efficiently identifying laboratory safety hazards.
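
For reference, the following is a generic implementation of the CIoU loss the abstract mentions, which augments IoU with a center-distance penalty and an aspect-ratio consistency term; boxes are (x1, y1, x2, y2). This is the standard formulation, not the authors' code.

```python
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    # Intersection-over-Union
    ix1, iy1 = torch.max(pred[:, 0], target[:, 0]), torch.max(pred[:, 1], target[:, 1])
    ix2, iy2 = torch.min(pred[:, 2], target[:, 2]), torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Squared center distance over the enclosing box's squared diagonal
    cpx, cpy = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    ctx, cty = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    ex1, ey1 = torch.min(pred[:, 0], target[:, 0]), torch.min(pred[:, 1], target[:, 1])
    ex2, ey2 = torch.max(pred[:, 2], target[:, 2]), torch.max(pred[:, 3], target[:, 3])
    rho2 = (cpx - ctx) ** 2 + (cpy - cty) ** 2
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps

    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (
        torch.atan((target[:, 2] - target[:, 0]) / (target[:, 3] - target[:, 1] + eps))
        - torch.atan((pred[:, 2] - pred[:, 0]) / (pred[:, 3] - pred[:, 1] + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return (1 - iou + rho2 / c2 + alpha * v).mean()

pred = torch.tensor([[10., 10., 50., 60.]])
gt = torch.tensor([[12., 15., 48., 58.]])
print(ciou_loss(pred, gt))
```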

Author 1: Yuan Lin

Keywords: Hazardous chemical safety; unsafe factors; deep learning; target detection; YOLOv4-Tiny; laboratory safety

PDF

Paper 37: Machine Learning-Based Identification of Cellulose Particle Pre-Bridging and Bridging Stages in Transformer Oil

Abstract: The deterioration of transformer oil quality is influenced by factors including the presence of acids, water, and other contaminants such as cellulose particles and metal dust. The dielectric strength of the oil decreases over time, depending on the service conditions. This study introduces an efficient machine learning method to classify the pre-bridging and bridging stages by analyzing the formation of cellulose particle bridges in synthetic ester transformer oil. It is important to note that the pre-bridging and bridging stages indicate a pre-breakdown condition. The machine learning approach implements a combination of digital image processing (DIP) techniques and a support vector machine (SVM). The DIP technique, specifically the feature extraction method, captures feature descriptors from the cellulose particle bridging images, including area, MajorAxisLength, MinorAxisLength, orientation, contrast, correlation, homogeneity, and energy. These descriptors are used in the SVM to assess the pre-bridging and bridging stages in transformer oil without human intervention. Various SVM models were implemented, including linear, quadratic, cubic, fine Gaussian, medium Gaussian, and coarse Gaussian. The results achieved 96.5% accuracy using the quadratic and cubic SVM models with the eight feature descriptors. This research has significant implications, allowing early detection of transformer breakdown, prolonging transformer lifespan, ensuring uninterrupted power plant operations, and potentially reducing replacement costs and electricity disruptions due to late breakdown detection.

Author 1: Nur Badariah Ahmad Mustafa
Author 2: Marizuana Mat Daud
Author 3: Hidayat Zainuddin
Author 4: Nik Hakimi Nik Ali
Author 5: Fadilla Atyka Nor Rashid

Keywords: Cellulose bridging; feature classification; feature extraction; oil deterioration; support vector machine; synthetic transformer oil

PDF

Paper 38: Related Applications of Deep Learning Algorithms in Medical Image Fusion Systems

Abstract: With the continuous advancement of medical technology, image fusion technology has been increasingly applied in medicine. However, current medical image fusion systems still have drawbacks such as low image clarity, low accuracy, and slow computing speed. To address these drawbacks, this study used the Speeded-Up Robust Features (SURF) image recognition algorithm to optimize a deep residual network and proposed an optimization algorithm based on residual-network deep learning. Based on this optimization algorithm, a medical image fusion system was constructed. Comparative experiments on the improved algorithm showed that the accuracy of image feature extraction was 0.98, the average time for feature extraction was 0.12 seconds, and the extraction capability was significantly better than that of the comparison algorithms HPF-CNN, PSO, and PCA-CNN. Subsequently, experiments were conducted on the image fusion system, and the results showed that the accuracy and clarity of the fused images were 0.98 and 0.97, respectively, which were superior to other systems. These results indicate that the proposed medical image fusion system based on optimized deep learning algorithms can not only improve the speed of image fusion but also enhance the clarity and accuracy of fused images. This study not only improves the accuracy of medical diagnosis but also provides a theoretical basis for the field of image fusion.

Author 1: Hua Sun
Author 2: Li Zhao

Keywords: Image fusion; image recognition; residual network; medical image; speeded up robust features; medical diagnosis

PDF

Paper 39: Carbon Pollution Removal in Activated Sludge Process of Wastewater Treatment Systems Using Grey Wolf Optimization-Based Approach

Abstract: Managing wastewater to effectively remove water pollution is inherently difficult. Ensuring that the treated water meets stringent standards is a main priority for several countries. Advances in control and optimization strategies can significantly improve the elimination of harmful substances, particularly in the case of carbon pollution. This paper presents a novel optimization-based approach for carbon removal in the Activated Sludge Process (ASP) of Wastewater Treatment Plants (WWTPs). The developed pollution removal algorithm combines the concepts of Takagi-Sugeno (TS) fuzzy modeling, Model Predictive Control (MPC), and Grey Wolf Optimization (GWO), a parameter-free metaheuristic algorithm, to boost carbon elimination in terms of standard metrics, namely Chemical Oxygen Demand (COD), Biochemical Oxygen Demand (BOD5), and Total Suspended Solids (TSS). To enhance this pollution removal, the proposed fuzzy predictive control for all wastewater variables, i.e. effluent volume and concentrations of heterotrophic biomass, biodegradable substrate, and dissolved oxygen, is formulated as a constrained optimization problem. The MPC parameter-tuning process is therefore performed to select appropriate values for the weighting coefficients and the prediction and control horizons of the local TS sub-models. To demonstrate the effectiveness of the proposed parameter-free GWO algorithm, comparisons with homologous state-of-the-art solvers such as Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), as well as the standard, commonly used Parallel Distributed Compensation (PDC) technique, are carried out in terms of the key purification indices COD, BOD5, and TSS. Additionally, an ANOVA study is conducted to evaluate the competing metaheuristics using Friedman ranking and post-hoc tests. The main findings highlight the superiority of the proposed GWO-based carbon pollution removal in WWTPs, with elimination efficiencies of 93.9% for COD, 93.4% for BOD5, and 94.1% for TSS, compared with lower percentages for the PSO, GA, and PDC techniques.
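
The Grey Wolf Optimization loop named above can be sketched as follows on a toy objective; in the paper's setting, the objective would instead score candidate MPC weighting coefficients and horizons. All constants here are generic defaults.

```python
import numpy as np

def gwo(obj, dim, bounds, n_wolves=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        fit = np.apply_along_axis(obj, 1, X)
        alpha, beta, delta = X[np.argsort(fit)[:3]]   # three best wolves lead the pack
        a = 2 - 2 * t / iters                         # exploration factor decays 2 -> 0
        moves = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random((n_wolves, dim)), rng.random((n_wolves, dim))
            A, C = 2 * a * r1 - a, 2 * r2
            moves.append(leader - A * np.abs(C * leader - X))
        X = np.clip(sum(moves) / 3, lo, hi)           # average of the three pulls
    fit = np.apply_along_axis(obj, 1, X)
    return X[fit.argmin()], fit.min()

best, val = gwo(lambda x: np.sum(x ** 2), dim=5, bounds=(-10, 10))
print(best, val)
```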

Author 1: Saïda Dhouibi
Author 2: Raja Jarray
Author 3: Soufiene Bouallègue

Keywords: Wastewater treatment systems; carbon pollution removal; fuzzy predictive control; metaheuristics optimization; Grey Wolf Optimizer; ANOVA tests

PDF

Paper 40: Big Data Privacy Protection Technology Integrating CNN and Differential Privacy

Abstract: To solve the difficulty of balancing privacy and availability in big data privacy protection technology, this study integrates the powerful feature extraction ability of convolutional neural network models with the efficiency of differential privacy technology in data privacy protection. An innovative privacy protection method combining gradient-adaptive noise and adaptive step-size control is proposed. The experimental findings show that the proposed method outperforms existing advanced privacy protection technologies in terms of performance, with an average accuracy of 97.68% and a performance improvement of about 20% to 30%. In addition, for larger privacy budgets, increasing the threshold appropriately can further optimize the method's effectiveness. This indicates that through refined noise control and step-size adjustment, not only can the privacy protection process be optimized, but the high efficiency and accuracy of data processing can also be maintained. In summary, while ensuring data utility, the proposed method not only significantly reduces the risk of privacy breaches but also optimizes the privacy protection mechanism, achieving an ideal balance between protecting personal privacy and maximizing data utility. This innovative approach provides an efficient probability distribution function solution for the field of privacy protection, with the potential to promote further development of related technologies and applications.
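
The gradient-perturbation idea can be sketched as per-sample clipping followed by Gaussian noise, as below; the paper's adaptive noise and step-size schedules are reduced to fixed constants here, so this is only a baseline illustration of the mechanism.

```python
import numpy as np

def privatize_gradient(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    rng = np.random.default_rng(seed)
    clipped = []
    for g in per_sample_grads:                 # g: flattened gradient of one sample
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)              # bound each sample's contribution
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)   # Gaussian noise calibrated to the bound
    return mean_grad + noise                   # noisy gradient used for the update

grads = [np.random.default_rng(i).standard_normal(10) for i in range(32)]
print(privatize_gradient(grads))
```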

Author 1: Yanfeng Liu
Author 2: Ping Li
Author 3: Min Zhang
Author 4: Qinggang Liu

Keywords: Convolutional neural network; differential privacy; adaptive noise addition; big data; privacy protection

PDF

Paper 41: Multi-Strategy Improved Rapid Random Expansion Tree (RRT) Algorithm for Robotic Arm Path Planning

Abstract: The purpose of this paper is to propose an improved RRT algorithm that incorporates multiple improvement strategies to solve the problems of low efficiency, long and unsmooth paths in the traditional rapid random expansion tree (RRT) algorithm for path planning of robotic arms. The algorithm first uses a bidirectional tree extension strategy to generate trees from both the starting point and the target position simultaneously, improving search efficiency and reducing redundant paths. Secondly, the algorithm introduces target bias sampling in combination with local Gaussian sampling, which renders the sampling points more focused on the target area, and dynamically adjusts the distribution to improve sampling efficiency and path connection speed. Concurrently, the algorithm is equipped with an adaptive step size strategy, which dynamically adjusts the expansion step size according to the target distance, thereby achieving a balance between rapid expansion over long distances and precise search at close range. Finally, a collision-free operation is ensured by a path verification mechanism, and the path is smoothed using cubic B-splines and minimum curvature optimisation techniques, significantly improving the smoothness of the path and the feasibility of the robot arm movement. As demonstrated by simulation experiments, the improved RRT algorithm exhibits a reduction in the average path length by 18.15%, planning time by 96.29%, the number of nodes by 92.13%, and the number of iterations by 91.60%, in comparison with the conventional RRT algorithm, when operating in complex map mode. These findings substantiate the efficacy and practicality of the improved RRT algorithm in the domain of robotic arm path planning.
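
Two of the listed strategies, goal-biased sampling mixed with local Gaussian sampling and a distance-adaptive step size, can be sketched in isolation as below; collision checking, the bidirectional trees, and the B-spline smoothing are elided, and the bias and step constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_point(goal, bounds, goal_bias=0.2, sigma=0.5):
    r = rng.random()
    if r < goal_bias:
        return goal                              # pull the tree toward the goal
    if r < 2 * goal_bias:
        return rng.normal(goal, sigma)           # Gaussian sample near the goal
    return rng.uniform(bounds[0], bounds[1])     # uniform exploration elsewhere

def adaptive_step(nearest, target, base_step=0.8, min_step=0.1):
    d = np.linalg.norm(target - nearest)
    step = max(min_step, min(base_step, d / 2))  # long strides far away, fine near
    return nearest + (target - nearest) / (d + 1e-12) * min(step, d)

goal, bounds = np.array([9.0, 9.0]), (np.zeros(2), np.full(2, 10.0))
node = np.array([1.0, 1.0])
for _ in range(5):
    node = adaptive_step(node, sample_point(goal, bounds))
print(node)
```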

Author 1: Yuan Sun
Author 2: Shoujun Zhang

Keywords: Robotic arm; RRT algorithm; path planning; target-biased sampling; Gaussian sampling; bidirectional tree extension; adaptive step-size

PDF

Paper 42: Comparative Analysis of YOLO and Faster R-CNN Models for Detecting Traffic Object

Abstract: The identification of traffic objects is a basic aspect of autonomous vehicle systems. It allows vehicles to detect different traffic entities such as cars, pedestrians, cyclists, and trucks in real-time. The accuracy and efficiency of object detection are crucial in ensuring the safety and reliability of autonomous vehicles. The focus of this work is a comparative analysis of two object detection models, YOLO (You Only Look Once) and Faster R-CNN (Region-based Convolutional Neural Networks), using the KITTI dataset, a widely accepted reference dataset for work in autonomous vehicles. The evaluation covered the performance of YOLOv3, YOLOv5, and Faster R-CNN on three established levels of difficulty, ranging from Easy to Moderate to Hard based on object exposure, lighting, and the existence of obstacles. The results show that Faster R-CNN achieves maximum precision in the detection of pedestrians and cyclists, while YOLOv5 offers a good balance of speed and precision; as a result, YOLOv5 is found to be highly suitable for real-time applications. YOLOv3 shows computational efficiency but poorer performance in more demanding scenarios. The work presents useful insights into the strengths and limitations of these models. The results support the development of more resilient and efficient traffic object detection systems, advancing the construction of safer and more reliable self-driving cars. Moreover, this study provides a comparative analysis of YOLO and Faster R-CNN models, highlighting key trade-offs and identifying YOLOv5 as a strong real-time candidate while emphasizing Faster R-CNN’s precision in challenging conditions.

Author 1: Iqbal Ahmed
Author 2: Roky Das

Keywords: Faster R-CNN; YOLOv3; YOLOv5; traffic object detection; image detection; autonomous driving

PDF

Paper 43: A Deep Learning-Based Framework for Real-Time Detection of Cybersecurity Threats in IoT Environments

Abstract: The rapid adoption of Internet of Things (IoT) devices has led to an exponential increase in cybersecurity threats, necessitating efficient and real-time intrusion detection systems (IDS). Traditional IDS and machine learning models struggle with evolving attack patterns, high false positive rates, and computational inefficiencies in IoT environments. This study proposes a deep learning-based framework for real-time detection of cybersecurity threats in IoT networks, leveraging Transformers, Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) architectures. The proposed framework integrates hybrid feature extraction techniques, enabling accurate anomaly detection while ensuring low latency and high scalability for IoT devices. Experimental evaluations on benchmark IoT security datasets (CICIDS2017, NSL-KDD, and TON_IoT) demonstrate that the Transformer-based model outperforms conventional IDS solutions, achieving 98.3% accuracy with a false positive rate as low as 1.9%. The framework also incorporates adversarial defense mechanisms to enhance resilience against evasion attacks. The results validate the efficacy, adaptability, and real-time applicability of the proposed deep learning approach in securing IoT networks against cyber threats.

Author 1: Sultan Saaed Almalki

Keywords: IoT security; intrusion detection system; cybersecurity threats; deep learning; real-time detection; adversarial robustness; anomaly detection

PDF

Paper 44: Enhancing Visual Communication Design and Customization Through the CLIP Contrastive Language-Image Model

Abstract: This study explores the impact of the CLIP (Contrastive Language-Image Pretraining) model on visual communication design, particularly focusing on its application in design innovation, personalized element creation, and cross-modal understanding. The research addresses how CLIP can meet the increasing demand for personalized and diverse design solutions in the context of digital information overload. Through a comprehensive analysis of the CLIP model’s capabilities in image-text pairing and large-scale learning, this study examines its ability to enhance design efficiency, customization, and creative expression. Quantitative data is presented, showcasing improvements in design processes and outcomes. The use of the CLIP model has resulted in a 30% increase in design efficiency, with a 20% improvement in originality and a 15% boost in market relevance of creative solutions. Personalized design solutions have seen a 40% increase in accuracy and user satisfaction. Additionally, the model’s cross-modal understanding has enhanced the coherence and immersion of visual experiences, improving user satisfaction by 25%. This research highlights the transformative potential of AI-driven models like CLIP in revolutionizing visual communication design, offering insights into how AI can foster design innovation, optimize user experience, and respond to the growing demands for personalized visual solutions in the digital age.
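
The core CLIP operation behind such design tools, scoring candidate images against a text brief through joint embeddings, can be sketched with the public OpenAI checkpoint; the checkpoint name, prompt, and dummy images below are placeholders, not necessarily what the study used.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Two dummy images stand in for candidate design drafts
images = [Image.new("RGB", (224, 224), color) for color in ("white", "navy")]
inputs = processor(text=["minimalist poster for a jazz festival"],
                   images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
print(out.logits_per_image.squeeze(-1))   # similarity of each draft to the brief
```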

Author 1: Xiujie Wang

Keywords: CLIP; language image model; visual communication design; element customization

PDF

Paper 45: Optimization of Automated Financial Statement Information Disclosure System Based on AI Models

Abstract: In the context of the digital transformation of the global economy and the rapid advancement of enterprise informatization, ensuring accurate and timely financial statement disclosure has become a critical priority for businesses and regulatory bodies. This study aims to address the inefficiencies, high error rates, and slow response times inherent in traditional financial information disclosure processes, which fail to meet the real-time data accuracy demands of modern enterprises. The study introduces an AI-driven optimization scheme for an automated processing network system for financial statement information disclosure. By leveraging advanced machine learning techniques and large language models, the proposed system enhances the accuracy, speed, and cost-effectiveness of disclosure processes. The system was tested and compared against traditional manual methods, focusing on processing time, accuracy rates, and operational cost savings. The optimized system significantly reduces the average processing time from three hours to 20 minutes, achieving a 90% efficiency improvement. Accuracy is enhanced from 92% to over 97%, while the response speed increases by 40%. Additionally, the system reduces operational costs by 15%, resulting in annual labor cost savings of approximately 12 million yuan. These findings demonstrate the transformative potential of AI technologies in addressing the limitations of traditional financial disclosure processes. This study highlights an innovative application of AI in the realm of intelligent finance, offering a scalable solution that aligns with the evolving demands for real-time, accurate financial information. The research contributes to the growing field of AI-driven automation by showcasing its practical implications and substantial benefits in financial statement disclosure.

Author 1: Yonghui Xiao
Author 2: Haikuan Zhang

Keywords: Information disclosure of financial statements; artificial intelligence; automated processing; system optimization

PDF

Paper 46: Bibliometric Analysis of the Evolution and Impact of Short Videos in E-Commerce (2015-2024): New Research Trends in AI

Abstract: Over a decade of rapid growth in short video content has opened increasingly in-depth perspectives on this topic, with a growing diversity of scientific publications exploring different aspects of the phenomenon. Short videos have rapidly transformed the e-commerce landscape, influencing consumer behavior, marketing strategies, and technological advancements. This study used bibliometric analysis to evaluate existing research on short videos in e-commerce and identify key trends, research clusters, and influential publications. Using Scopus (2015-2024) data, co-citation, keyword co-occurrence, and bibliographic matching analyses were conducted. Publication analysis revealed three stages: initial (2015-2018) with limited research, growth (2019-2020) with increased interest, and explosive growth (2021-2024). Keyword co-occurrence analysis highlights interconnected research topics, with "video platforms," "short video," and "social media" forming a central cluster. The cluster indicates a recent focus on the "social context" of short videos in e-commerce. Co-citation analysis identifies key research clusters covering e-commerce and user behavior, user experience, advertising effectiveness of short videos, methodology, and underlying theories. These findings are helpful for researchers seeking to understand short-form video utilization in e-commerce, and the insights can inform effective marketing strategies, improved user experiences, and technological innovation in this rapidly evolving space.

Author 1: Duy Nguyen Binh Phuong
Author 2: Tien Ngo Thi My
Author 3: Thuy Nguyen Binh Phuong
Author 4: Thi Pham Nguyen Anh
Author 5: Hung Le Huu

Keywords: Short video; AI; co-citation analysis; keyword co-occurrence analysis; bibliographic coupling

PDF

Paper 47: Classroom Behavior Recognition and Analysis Technology Based on CNN Algorithm

Abstract: Students’ classroom behavior can effectively reflect their learning efficiency and the teaching quality of teachers, but the accuracy of current methods for identifying students’ classroom behavior is not high. To address this research gap, an improved algorithm based on a multi-task learning cascaded convolutional neural network architecture is proposed. Using the improved algorithm, a face recognition model is constructed to identify students’ classroom behavior more accurately. In the performance comparison experiment of the improved convolutional network algorithm, the recall rate of the improved algorithm was 88.8%, higher than that of the three comparison models, demonstrating that the improved algorithm performed better than the comparison models. In the empirical analysis of the face recognition model based on the improved algorithm, the accuracy of the proposed face recognition model was 90.2%, higher than that of the traditional face recognition model. The findings indicate that the model developed in this study is capable of accurately reflecting students’ state in the classroom, thereby facilitating the formulation of targeted teaching strategies to enhance classroom efficiency.

Author 1: Weihua Qiao

Keywords: Convolution neural network; multi-task learning; face recognition; classroom; student behavior

PDF

Paper 48: Malicious Domain Name Detection Using ML Algorithms

Abstract: With the ever-increasing rate of cyber threats, especially through malicious domain names, effective detection and prevention have become urgent. This study investigates the classification of domain names into benign or malicious classes based on DNS logs using machine learning. We evaluated five strong ML models: XGBoost, LightGBM, CatBoost, a Stacking ensemble, and a Voting Classifier, aiming for high accuracy, F1 score, AUC, recall, and precision. The challenge is to achieve a strong solution without using deep learning techniques, keeping computational cost low. Moreover, this project aims to strengthen the cybersecurity landscape by embedding the best-performing model into a DNS firewall to enable protection against harmful domains in real time. Our dataset was collected and curated to include 90,000 domain names, with an equal number of benign and malicious entries, extracting 34 features from DNS logs, further enriched using publicly available data.

Author 1: Lamis Alshehri
Author 2: Samah Alajmani

Keywords: DNS Security; machine learning; malicious domain detection; XGBoost; LightGBM; CatBoost

PDF

Paper 49: Defect Detection of Photovoltaic Cells Based on an Improved YOLOv8

Abstract: Currently, defect detection in photovoltaic (PV) cells faces challenges such as limited training data, data imbalance, and high background complexity, which can result in both false positives and false negatives during the detection process. To address these challenges, a defect detection network based on an improved YOLOv8 model is proposed. Firstly, to tackle the data imbalance problem, five data augmentation techniques (Mosaic, Mixup, HSV transformation, scale transformation, and flip) are applied to improve the model's generalization ability and reduce the risk of overfitting. Secondly, SPD-Conv is used instead of Conv in the backbone network, enabling the model to better detect small objects and defects in low-resolution images, thereby enhancing its performance and robustness in complex backgrounds. Next, the GAM attention mechanism is applied in the detection head to strengthen global channel interactions, reduce information dispersion, and enhance global dependencies, thereby improving network performance. Lastly, the CIoU loss function in YOLOv8 is replaced with the Focal-EIoU loss function, which accelerates model convergence and improves bounding-box regression accuracy. Experimental results show that the optimized model achieves an mAP of 86.6% on the augmented EL2021 dataset, representing a 5.1% improvement over the original YOLOv8 model, which has 11.24 × 10^6 parameters. The improved algorithm outperforms other widely used methods in photovoltaic cell defect detection.

Author 1: Zhihui LI
Author 2: Liqiang WANG

Keywords: Photovoltaic cells; defect detection; YOLOv8; loss function

PDF

Paper 50: Virtual Reality (VR) Technology in Civics Practice Teaching: Evaluating the Effect of Immersive Experience

Abstract: In order to improve the low precision of current immersive experience effect assessment methods, an immersive experience effect assessment method for virtual reality Civics practice teaching is proposed, combining the enterprise development optimisation algorithm with a mixed-kernel extreme learning machine. Firstly, the current status of research on virtual reality Civics practice teaching is analysed, the idea of assessing the application of VR technology in Civics practice teaching is designed, the relevant assessment features are extracted, and the effect assessment system is constructed. Secondly, the enterprise development optimisation algorithm is used to optimise the parameters of the mixed-kernel extreme learning machine, and the immersive experience effect assessment model is constructed. Finally, data from VR-based Civics practice teaching are used to verify and analyse the proposed model. The results show that the proposed model effectively improves the assessment accuracy of the immersive experience effect assessment method and achieves higher precision in assessing the effect of Civics practice teaching.

Author 1: Hao Qin
Author 2: Yangqing Zhang
Author 3: Jiali Wei

Keywords: Virtual reality technology; civics practice teaching; immersive experience effect assessment; enterprise development optimisation algorithm

PDF

Paper 51: Sentiment Analysis: An Insightful Literature Review

Abstract: Understanding the consumer is becoming crucial in today's customer-focused company culture. Sentiment analysis is one of many methods that can be used to evaluate the public’s sentiment toward a specific entity in order to generate actionable knowledge. In the commercial sector, sentiment analysis is critical in enabling businesses to establish strategy and obtain insight into user feedback on their products. Unfortunately, many companies still do not listen to customer feedback and run the business as usual, even though sentiment analysis can reflect how their services and products are perceived. When a company implements sentiment analysis, it can more easily discover what consumers want, what they disapprove of, and what measures can be taken in response, helping the company improve the performance of its products and services. The purpose of this paper is to examine the uses of sentiment analysis in companies and the methodologies companies use to implement it, based on a review of 22 papers that discuss sentiment analysis.

Author 1: Indrajani Sutedja
Author 2: Hendry

Keywords: Sentiment analysis; sentiment analysis approach; text mining

PDF

Paper 52: Detection Optimization of Brute-Force Cyberattack Using Modified Caesar Cipher Algorithm Based on Binary Codes (MCBC)

Abstract: Information security is considered a vital aspect of protecting user credentials and digital information from cyber security threats. The Caesar cipher is an ancient cryptography algorithm that is easily broken and vulnerable to brute-force attack. A brute-force attack is a cyberattack that uses trial and error to crack passwords, login credentials, and encryption keys in order to gain unauthorized, illegal access to systems and individual accounts. Several studies have been developed to defeat the existing vulnerabilities in the Caesar cipher, but they still suffer from limitations and fail to provide a high level of attack detection and encryption strength. Therefore, the Modified Caesar Cipher Algorithm Based on Binary Codes (MCBC) is proposed to mitigate brute-force attacks based on two scenarios: in the first scenario, the message is converted to the binary numbering system; in the second, a binary shifting technique is employed and the result is converted to hexadecimal code. The performance metrics considered to evaluate the proposed MCBC algorithm are detection rate, strength rate, true positive rate, and time required for decryption. The experimental results show that the proposed MCBC approach outperformed other algorithms against brute-force attack while ensuring the confidentiality of information.
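
A sketch of the two described scenarios under assumed parameters (the actual shift amounts and key handling of MCBC are not reproduced here): scenario one emits the Caesar-shifted message in the binary numbering system, and scenario two additionally rotates each byte's bits before emitting hexadecimal.

```python
def caesar_shift(text, key=3):
    # Classic Caesar shift on A-Z; non-letters pass through unchanged
    return "".join(chr((ord(c) - 65 + key) % 26 + 65) if c.isalpha() else c
                   for c in text.upper())

def scenario1_binary(text, key=3):
    # Scenario 1: shifted message rendered in the binary numbering system
    return " ".join(format(ord(c), "08b") for c in caesar_shift(text, key))

def scenario2_hex(text, key=3, bit_shift=2):
    # Scenario 2: rotate each byte's bits left, then print as hexadecimal
    out = []
    for c in caesar_shift(text, key):
        b = ord(c)
        rotated = ((b << bit_shift) | (b >> (8 - bit_shift))) & 0xFF
        out.append(format(rotated, "02X"))
    return " ".join(out)

msg = "ATTACK AT DAWN"
print(scenario1_binary(msg))
print(scenario2_hex(msg))
```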

Author 1: Muhannad Tahboush
Author 2: Adel Hamdan
Author 3: Mohammad Klaib
Author 4: Mohammad Adawy
Author 5: Firas Alzobi

Keywords: Brute-force attack; encryption; Caesar cipher; binary code; security

PDF

Paper 53: The Power of Digitalization: How Information Disclosure Shapes Company Value

Abstract: This study aims to explore how business digitalization influences firm value within the Indonesia Stock Exchange (IDX). It seeks to offer a thorough examination of the effects of digital transformation on corporate valuation. The findings highlight a strong positive correlation between digitalization and firm valuation, supporting signaling theory, which asserts that a company's transparency in disclosing its digital transformation efforts serves as a strategic indicator for investors and consumers. Greater transparency and specificity in disclosing digitalization information improve perceptions of corporate stability and future growth prospects, ultimately increasing firm value. As Indonesia undergoes rapid digital transformation, this research gains heightened relevance by offering critical insights into how companies that proactively communicate their digitalization strategies can strengthen their market positioning and secure a competitive edge in the financial landscape. This study makes a significant contribution by providing empirical evidence on the role of business digitalization in shaping firm value, particularly in an emerging market context where digital adoption is accelerating. This investigation highlights the strategic importance of digitalization disclosure in the Indonesian market, offering novel insights into how transparency in digital initiatives can serve as a competitive advantage.

Author 1: Lina Nur Hidayati
Author 2: Muniya Alteza
Author 3: Mahendra Ryansa Gallen Gagah Pratama

Keywords: Information; digitalization; business; firm value

PDF

Paper 54: A Systematic Literature Review on the Sand Cat Swarm Algorithm: Enhancements, Applications, and Future Directions

Abstract: The Sand Cat Swarm Algorithm (SCSA) has emerged as a promising metaheuristic optimization technique inspired by the behavior of sand cats in their natural habitat. This paper presents a systematic literature review that synthesizes SCSA enhancements, performance comparisons with other algorithms, applications across various domains, and future directions for SCSA development. The study provides a comprehensive analysis of SCSA's evolution, performance evaluation, applications, limitations, and future research opportunities in solving optimization problems. The SLR methodology was applied, and a total of 77 scientific articles were analyzed. The analysis reveals that SCSA demonstrates competitive performance across a wide range of benchmark problems and real-world applications in engineering, computer science, and other fields, such as engineering design optimization, feature selection, energy systems optimization, flexible job shop scheduling, and medical diagnosis. This review also identifies several key strengths of SCSA, including its ability to balance exploration and exploitation effectively, its adaptability to various problem domains, and its potential for hybridization with other algorithms. Lastly, this paper outlines potential improvements and future research directions, such as the development of multi-objective SCSA variants, integration with machine learning techniques, and exploration of parallel and distributed implementations. Overall, this paper provides researchers and practitioners with valuable insights into the current state of SCSA, its practical applications, and promising avenues for future research in the field of metaheuristic optimization.

Author 1: Wirawati Dewi Ahmad
Author 2: Azuraliza Abu Bakar
Author 3: Mohd Nor Akmal Khalid

Keywords: Sand cat swarm algorithm; sand cat optimization; optimization; metaheuristic

PDF

Paper 55: Designing Minimum Data Set and Data Model for Electronic Health Record Systems in Indonesia

Abstract: This study aimed to design a minimum data set (MDS) and data model for an electronic health record system (EHRS) in Indonesia. The content of the MDS in this study differs from the MDSs reported in studies from other advanced countries. The technical preparation of the MDS follows the medical service process provided to patients from the time they first enter the hospital until they complete receiving services, with the aim that the designed MDS is aligned with real-world hospital workflows. The initial stage of this research began by identifying data elements through literature reviews sourced from medical record documents of general hospitals and psychiatric hospitals in Indonesia, papers regarding minimum data sets in other advanced countries, websites, and clinical guidelines. The Delphi technique was employed to validate the identified data elements through a survey of medical experts. A questionnaire was designed to determine data elements in both administrative and clinical departments. There were 5 and 21 data classes agreed upon by experts in the administrative and clinical sections, with 28 and 858 data elements, respectively. This MDS could be a reliable tool for data standardization in EHRS that can improve the quality of data and medical services in hospitals. The designed data model consists of conceptual, logical, and physical components. This MDS and data model can help system developers build a physical EHRS database and health surveillance center for more efficient health data management.

Author 1: Teddie Darmizal
Author 2: Nor Hasbiah Ubaidullah
Author 3: Aslina Saad

Keywords: Minimum data set; data element; data model; electronic health record; electronic health record system

PDF

Paper 56: Optimization of LED Luminaire Life Prediction Algorithm by Integrating Feature Engineering and Deep Learning Models

Abstract: With the wide application of LED luminaires in various fields, it has become particularly important to accurately predict their lifetime. The lifetimes of LED luminaires are affected by a variety of factors, including temperature, current, voltage, light intensity, and operating time, and there are complex interactions among these factors. Traditional prediction methods often struggle to capture these nonlinear relationships, so a more powerful prediction model is needed. In this study, we aim to develop an efficient life prediction model for LED luminaires, and propose a hybrid neural network structure that incorporates a convolutional neural network (CNN), a long short-term memory network (LSTM), and an attention mechanism by combining feature engineering and deep learning techniques. In the research process, we first collected the operation record data provided by a well-known LED lighting manufacturer and performed detailed data preprocessing, including missing value processing, outlier detection, normalization/standardization, data smoothing, and time series segmentation. Then, we designed and implemented several benchmark models (e.g., linear regression, support vector machine regression, random forest regression, and a deep learning model using only LSTM) as well as the proposed hybrid neural network model. Through a detailed experimental design including parameter setting, training, and testing, we evaluate the performance of these models and analyze the results. The experimental results show that the proposed hybrid neural network model significantly outperforms the conventional models in key performance metrics such as root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R²). In particular, the hybrid model also leads in Mean Absolute Percentage Error (MAPE) and Maximum Absolute Error (Max AE). In addition, through cross-validation and testing on different datasets, the model shows stable performance under various environments and conditions, verifying its good generalization ability and robustness.
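
A minimal sketch of the hybrid architecture described above: Conv1D feature extraction over sensor windows, an LSTM for temporal dependencies, a simple learned attention pooling, and a regression head for lifetime. Layer sizes and the five-channel input are illustrative, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(128, 5))   # 128 timesteps x [temp, current, voltage, lux, hours]
x = layers.Conv1D(32, 5, padding="same", activation="relu")(inp)
x = layers.MaxPooling1D(2)(x)
x = layers.LSTM(64, return_sequences=True)(x)

# Attention: weight each timestep's LSTM state by a learned score, then pool
scores = layers.Dense(1, activation="tanh")(x)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

out = layers.Dense(1)(context)       # predicted remaining lifetime (e.g. hours)
model = Model(inp, out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```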

Author 1: Xiongbo Huang

Keywords: Feature engineering; deep learning; LED lamps; life prediction; algorithm optimization

PDF

Paper 57: Study on Human Hazardous Behavior Recognition and Monitoring System in Slide Facilities Based on Improved HRNet Network

Abstract: In recent years, accidents involving slide playground equipment have frequently occurred due to various reasons, attracting significant attention. Reducing or even eliminating these accidental injuries has become an urgent technical issue to address. Currently, the safety management of slide playground facilities still relies on manual monitoring, and the level of technology for detecting and intelligently recognizing hazardous behaviors on slides needs improvement. This paper proposes a behavior detection system based on human skeleton sequence information to address the issue of recognizing hazardous behaviors on slides. To resolve the feature fusion loss problem that arises when HRNet extracts feature information from images of different resolutions, this paper introduces a Flow Alignment Module (FAM) and an Attention-aware Feature Fusion (AFF) module to improve the network structure. Experimental results show that the improved skeleton sequence extraction model exhibits good computational efficiency and accuracy on the dataset, achieving an accuracy rate of over 90%. The human behavior recognition system proposed in this paper effectively meets detection requirements, providing new technical assurance for the safe use of slide playground equipment.

Author 1: Chen Chen
Author 2: Huiyu Xiang
Author 3: Song Huang
Author 4: Yanpei Zhang

Keywords: Playground equipment; object detection; skeleton sequence; flow alignment module; human behavior recognition

PDF

Paper 58: Improving Road Safety in Indonesia: A Clustering Analysis of Traffic Accidents Using K-Medoids

Abstract: Traffic accidents pose a significant public health and safety challenge in Indonesia, ranking fifth globally in terms of traffic fatality rates. This study aims to identify patterns in traffic accident data to inform effective mitigation strategies. Utilizing the K-Medoids algorithm, we clustered traffic accident data from the Indonesian Central Bureau of Statistics for the period 1992–2022. Prior to clustering, rigorous data preprocessing was conducted to ensure accuracy. The K-Medoids algorithm successfully partitioned the data into distinct clusters, revealing variations in accident patterns across different regions of Indonesia, including disparities in accident frequency and severity. This research provides valuable insights for policymakers and transportation authorities to develop targeted interventions and improve road safety in Indonesia. Additionally, this study successfully applied the K-Medoids algorithm to cluster traffic accident data in Indonesia using data from 2018 to 2022.
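
For reference, here is a minimal alternating K-Medoids sketch: assign each point to its nearest medoid, then re-pick each medoid as the cluster member minimizing total within-cluster distance. Production work would typically use a library implementation, and the random vectors below stand in for per-region accident statistics.

```python
import numpy as np

def k_medoids(X, k=3, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    medoids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - medoids[None], axis=2)  # (n, k)
        labels = dists.argmin(axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                continue
            # Choose the member with minimal summed distance to its cluster
            within = np.linalg.norm(members[:, None] - members[None], axis=2).sum(axis=1)
            new_medoids[j] = members[within.argmin()]
        if np.allclose(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

X = np.random.default_rng(1).normal(size=(120, 4))   # e.g. per-region accident stats
labels, medoids = k_medoids(X, k=3)
print(np.bincount(labels))
```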

Author 1: Handrizal
Author 2: Hayatunnufus
Author 3: Maryo Christopher Davinci Nababan

Keywords: Traffic accidents; K-Medoids; clustering; data mining

PDF

Paper 59: Tree Seed Algorithm-Based Optimized Deep Features Selection for Glaucoma Disease Classification

Abstract: Glaucoma is a common eye condition that can cause irreversible blindness if left untreated. Glaucoma can be identified through optic nerve damage, a perilous condition that carries the risk of blindness. Therefore, early glaucoma detection is critical for optimizing treatment outcomes and preserving vision. The majority of afflicted people typically do not exhibit any overt symptoms; since many consequently go untreated, early detection is essential for successful therapy. Systems for detecting glaucoma have been developed through a great deal of research, but these manual, time-consuming, and frequently erroneous traditional diagnostic methods are not suitable for glaucoma diagnosis; thus, automated methods are required. This research study proposes a novel glaucoma diagnosis model that addresses the difficulty of determining the complex cup-to-disc ratio. For accurate feature extraction, a publicly available dataset with two classes (glaucoma positive and negative) from Kaggle is utilized. The dataset is augmented using the flip technique and resized. A two-step approach using the MobileNetV2 model is used to extract features from positive and negative classes. Accurate features are selected with the help of the Tree Seed Algorithm (TSA). The enriched features are then classified using three different classifiers: Cubic SVM, Ensemble Subspace KNN, and Fine KNN. The experimental evaluation comprises 7- and 8-fold cross-validation. On 7 folds, Ensemble Subspace KNN provides an accuracy of 97.33%, and on 8 folds, Fine KNN provides the best accuracy of 97.92%.

Author 1: Sherif Tawfik Amin

Keywords: Deep learning; tree seed algorithm; feature extraction; MobileNetV2

PDF

Paper 60: The Effect of Climate Change on Animal Diseases by Using Image Processing and Deep Learning Techniques

Abstract: Climate change is one of the most talked-about topics of this decade, affecting all economic output sectors, including the economy of cattle farming. In many scenarios, exceptionally severe climate change is predicted for the Mediterranean region. As a result, practical measures must be taken to strengthen the sector's resilience, particularly for smallholders involved in the cattle production industry, and technology is required to stop animal disease outbreaks. There are clear benefits to using automatic methods for detecting animal diseases such as cellulitis. Climate change seriously threatens animal health: it is changing ecosystems, altering weather patterns, and posing new difficulties for animal survival. Yet this crisis also offers a chance for imagination and cooperation; in a changing climate, a comprehensive strategy that includes adaptation and mitigation measures can boost resilience and safeguard animal populations. In conclusion, knowledge of climate change and adaptation measures is central to responding to the rising demand for animal products. Furthermore, we have a variety of adaptation strategies at our disposal to mitigate the effects of climate change, which must be used to limit its further expansion.

Author 1: Gehad K. Hussien
Author 2: Mohamed H. Khafagy
Author 3: Hossam M. Elbehiery

Keywords: Climate change; sustainability; smallholder; animal disease; image processing; deep learning; animal skin diseases

PDF

Paper 61: The Application of Optimized JPEG-LS Algorithm in Efficient Transmission of Multi-Spectral Images

Abstract: Currently, multi-spectral image transmission faces challenges such as high storage costs and low transmission efficiency. Although various technologies have recently been attempted to solve these problems, such as improved encoding methods in some algorithms, issues such as insufficient compression ratio and slow processing speed remain. Therefore, this research focuses on optimizing the Joint Photographic Experts Group Lossless Standard (JPEG-LS) algorithm and constructing a multi-spectral image processing system. Within the JPEG-LS algorithm pipeline, the conventional encoding method is improved by adopting a sub-block compression strategy and a block compression algorithm based on dynamic image bit width. The results show that the optimized JPEG-LS algorithm has an average compression ratio of 5.81, which is higher than that of the comparison algorithm. The average compression time is 0.35 seconds, the average peak signal-to-noise ratio (PSNR) is 43.6, and the average structural similarity (SSIM) is 0.97, all of which are better than the comparison algorithm. In terms of system performance, stability testing of each module shows that the overall system tends to be stable, and the resource utilization rate of the image compression module is low, with a large resource margin that can meet practical application needs.
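
The two quality metrics reported above, PSNR and SSIM, can be computed as follows on a reconstructed band versus its original; the synthetic noisy band is a placeholder, and the SSIM implementation comes from scikit-image.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(original, reconstructed, max_val=255.0):
    # PSNR = 10 * log10(MAX^2 / MSE)
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
band = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)   # one spectral band
noisy = np.clip(band + rng.normal(0, 3, band.shape), 0, 255).astype(np.uint8)

print("PSNR:", psnr(band, noisy))
print("SSIM:", structural_similarity(band, noisy))
```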

Author 1: Huanping Hu
Author 2: Xing Wang

Keywords: Multi-spectral; image transmission; JPEG-LS algorithm; compression ratio; signal-to-noise ratio

PDF

Paper 62: Early Warning Model Construction for Deformation Monitoring and Management of Deep Foundation Pit Project Combined with Artificial Intelligence

Abstract: In various engineering construction projects, construction safety problems caused by pit deformation continue to arise. Existing early warning models for pit deformation management cannot effectively meet the needs of actual construction for complex pit projects. Artificial intelligence technology has clear advantages in foundation pit deformation detection due to its wide applicability and flexibility. This study uses a Gaussian regression analysis model to construct a corresponding early warning model for deep foundation pit deformation monitoring and management, with the aim of better monitoring and managing the deformation of deep foundation pits and ensuring the smooth and stable progress of the entire construction project. In the experimental analysis, different performance indicators were used to verify the effectiveness of the method, including error indicators, precision, recall, and F1 score. MAE evaluates the deviation between predicted and actual values, with lower values indicating that the model is closer to the truth, while precision, recall, and F1 score evaluate the proportion of correctly classified samples and the model's discriminative ability; together, these indicators measure the performance of the model from different perspectives. In a specific construction project, the proposed method achieved an RMSE of 0.012 and an MAE of 0.015, both significantly lower than the comparison methods, indicating better performance. The precision, recall, and F1 score of GRGA were 92.37%, 47.52%, and 0.17, respectively. For the existing foundation pit deformation monitoring methods BPNN, CNN, and GM, the precision was 90.52%, 90.03%, and 89.95%, the recall was 34.20%, 32.01%, and 29.67%, and the F1 score was 0.10, 0.13, and 0.14, respectively, so the research method has clear advantages. The results demonstrate that the early warning model is an effective method for analyzing and predicting the deformation of deep foundation pits. Combining Gaussian regression with a genetic algorithm for deep excavation management makes it possible to model and predict nonlinear deformation data, optimize the parameters of the Gaussian regression process, and improve prediction accuracy. Compared with existing warning methods, the method proposed in this study uses the Gaussian regression process to better model and analyze the deformation process of foundation pits, thereby accurately capturing its detailed changes.

Author 1: Xiaoyuan Zhang
Author 2: Xin Wang

Keywords: Deep foundation pit; deformation; Gaussian regression analysis; management warning; artificial intelligence

PDF

Paper 63: A Deep Learning-Based Generative Adversarial Network for Digital Art Style Migration

Abstract: This study introduces the ConvNeXt-CycleGAN, a novel deep learning-based Generative Adversarial Network (GAN) designed for digital art style migration. The model addresses the time-consuming and expertise-driven nature of traditional artistic creation, aiming to automate and accelerate the style transfer process using artificial intelligence. The ConvNeXt-CycleGAN integrates ConvNeXt blocks within the CycleGAN framework, enhancing convolution capabilities and leveraging self-attention mechanisms for precise and nuanced artistic style capture. The model undergoes rigorous evaluation using multiple performance metrics, including Inception Score (IS), Peak Signal-to-Noise Ratio (PSNR), and Fréchet Inception Distance (FID), ensuring its effectiveness in generating high-quality, diverse images while retaining fidelity during style transfer. The ConvNeXt-CycleGAN surpasses traditional GAN models across key metrics: it achieves an IS of 12.7004 (higher image diversity), a PSNR of 14.0211 (better preservation of original artwork integrity), and an FID of 234.1679 (closer resemblance to real artistic distributions). Additionally, its ability to efficiently train on unpaired images via unsupervised learning enhances its real-world applicability. This research presents an architectural innovation by combining ConvNeXt blocks with the CycleGAN framework, offering robust performance across diverse datasets and artistic styles. The ConvNeXt-CycleGAN represents a significant advancement in the integration of AI with creative processes, providing a powerful tool for rapid prototyping in digital art creation and innovation.

Author 1: Wenting Ou

Keywords: Generative Adversarial Networks (GANs); deep learning; style transfer; unsupervised learning; neural style transfer

PDF

Paper 64: On the Impact of Various Combinations of Preprocessing Steps on Customer Churn Prediction

Abstract: This paper investigates various combinations of preprocessing methods (attribute selection, normalization, resampling, and imputation) and evaluates their impact on the performance of decision tree models for predicting customer churn. The experiments were performed on the benchmark Cell2Cell dataset due to its ability to address diverse aspects of customer behavior, including value-added services, usage patterns, demographic information, customer service interactions, personal data, and billing data. This comprehensive view of client activities makes it ideal for studying customer churn. The aim of this work is to identify the most effective preprocessing method that can be applied to a real-world telecommunications dataset to improve the effectiveness of customer churn prediction methods. The study systematically examines the effects of imputation methods (K-Nearest Neighbors and statistical imputation), normalization techniques (Median and Median Absolute Deviation Normalization, Min-Max Scaling, and Z-Score Standardization), feature selection using Lasso regression, and resampling using SMOTE Tomek. This results in 16 distinct preprocessed datasets, each reflecting a unique combination of preprocessing steps. An analysis of these datasets was conducted, evaluating the performance metrics of the Decision Tree model on each dataset, including accuracy, precision, recall, F1 score, and ROC-AUC. Key findings highlight that Statistical Imputation, Median and Median Absolute Deviation Normalization, and Lasso feature selection achieved the highest performance, with 0.78 in precision, 0.77 in accuracy, recall, and F1 Score, and 0.74 in ROC-AUC.
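
To make the combinatorial setup concrete, the following scaled-down sketch enumerates eight of the preprocessing combinations with scikit-learn pipelines and scores a decision tree on each; the synthetic data stands in for Cell2Cell, and the SMOTE Tomek resampling step (available in the imbalanced-learn package) is omitted for brevity:

    # Sketch: enumerating preprocessing combinations and scoring a decision tree
    # on each, in the spirit of the paper's multi-dataset comparison.
    import itertools
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.impute import SimpleImputer, KNNImputer
    from sklearn.preprocessing import MinMaxScaler, StandardScaler
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import Lasso
    from sklearn.pipeline import Pipeline
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X[np.random.default_rng(0).random(X.shape) < 0.05] = np.nan  # inject missing values

    imputers = {"stat": SimpleImputer(strategy="median"), "knn": KNNImputer()}
    scalers = {"minmax": MinMaxScaler(), "zscore": StandardScaler()}
    selectors = {"lasso": SelectFromModel(Lasso(alpha=0.01)), "none": "passthrough"}

    for im, sc, se in itertools.product(imputers, scalers, selectors):
        pipe = Pipeline([("impute", imputers[im]), ("scale", scalers[sc]),
                         ("select", selectors[se]), ("tree", DecisionTreeClassifier())])
        acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
        print(f"{im:4s} | {sc:6s} | {se:5s} -> accuracy {acc:.3f}")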

Author 1: Mohamed Ezzeldin Saleh
Author 2: Nadia Abd-Alsabour

Keywords: Attribute selection; churn prediction; decision trees; imputation methods; machine learning; normalization techniques

PDF

Paper 65: IoT-Based Smart Accident Detection and Early Warning System for Emergency Response and Risk Management

Abstract: Driving in dense fog creates significant challenges, particularly in Asian countries such as Pakistan, where increasing traffic and air pollution reduce visibility and elevate the risk of accidents, property damage, and fatalities. Accidents in such conditions are worsened by vehicle congestion and poor weather, such as dense fog. To address these issues, this study proposes an IoT-based intelligent accident detection and early warning system that uses integrated smartphone sensors to detect and monitor vehicular collisions. The system enhances risk management by autonomously detecting accidents and instantly transmitting essential information, including precise location, to emergency response networks for timely intervention and decision-making. Additionally, the system alerts drivers to possible near-collisions or hazardous conditions through real-time warning alerts displayed via the Blynk application. Utilizing a smartphone's built-in sensors to detect vehicular collisions and notify the nearest first responders, along with providing real-time location tracking for paramedics and emergency victims, can significantly enhance recovery chances for victims while reducing both time and costs. The operational reliability and accuracy of the IoT-based framework for smart transportation are evaluated through numerical and simulation-based experiments, validating its efficacy in harsh environmental conditions.
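
The core detection idea can be sketched as a threshold on the magnitude of smartphone acceleration, with an alert payload sent onward when it is exceeded. The threshold value, sensor feed, and payload format below are assumptions, not the paper's specification:

    # Sketch: flag a collision when total g-force spikes past a threshold,
    # then emit a location-tagged alert for emergency responders.
    import json, math, time

    CRASH_G = 4.0  # assumed g-force threshold for a collision event

    def magnitude_g(ax, ay, az):
        """Convert a raw accelerometer sample (m/s^2) to total g-force."""
        return math.sqrt(ax * ax + ay * ay + az * az) / 9.81

    def on_sample(ax, ay, az, lat, lon):
        if magnitude_g(ax, ay, az) >= CRASH_G:
            alert = {"event": "collision", "lat": lat, "lon": lon, "time": time.time()}
            # A deployed system would POST this payload to an emergency-response
            # service and push a warning to nearby drivers via the Blynk app.
            print("ALERT ->", json.dumps(alert))

    on_sample(3.0, 2.0, 55.0, 31.52, 74.35)   # simulated hard-impact sample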

Author 1: Jinsong Tao
Author 2: Rahat Ali
Author 3: Shakeel Ahmad
Author 4: Fasahat Ali

Keywords: IoT; Blynk application; smart transportation; accident detecting and early warning system; risk management

PDF

Paper 66: Analysis of Estimation Methods for Submarine Towing Resistance

Abstract: To estimate the drag of submarine towing effectively, the friction resistance and residual resistance of a towed submarine are first estimated from the empirical formulas used for towing surface ships, based on an analysis of the drag components of submarine towing. CFD is then used to simulate the towing resistance of a submarine on the water surface, and the simulation results are compared with the empirical estimates. The comparison shows that the friction resistance of a submarine towed on the surface can be calculated with the "Towing Guide at Sea" and "Towing" empirical formulas, while the residual resistance can be estimated by the "Towing" formula or Shen Pugen's formula. However, a head shape coefficient of approximately 1.5 is found to be more suitable in the residual resistance estimation formula for a towed submarine.
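
For readers unfamiliar with empirical resistance estimation, the following sketch computes a friction-resistance estimate using the standard ITTC-1957 correlation line; it illustrates the style of formula involved rather than the specific "Towing Guide at Sea" or "Towing" formulas analyzed in the paper, and the hull dimensions are invented:

    # Illustrative friction-resistance estimate via the ITTC-1957 friction line.
    # Hull dimensions and speed are invented for the example.
    import math

    rho, nu = 1025.0, 1.19e-6      # seawater density (kg/m^3), kinematic viscosity (m^2/s)
    L, S, V = 70.0, 1800.0, 2.5    # waterline length (m), wetted surface (m^2), speed (m/s)

    Re = V * L / nu                          # Reynolds number
    Cf = 0.075 / (math.log10(Re) - 2) ** 2   # ITTC-57 friction coefficient
    Rf = 0.5 * rho * S * V**2 * Cf           # friction resistance (N)
    print(f"Re = {Re:.3e}, Cf = {Cf:.5f}, Rf = {Rf/1000:.1f} kN")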

Author 1: Shancheng Li
Author 2: Guanghui Zeng
Author 3: Guangda Wang

Keywords: Submarine; towing resistance; CFD simulation; empirical formulas; maritime rescue

PDF

Paper 67: Machine Learning Applications in Workforce Management: Strategies for Enhancing Productivity and Employee Engagement

Abstract: Workforce management is a critical component of organizational success, encompassing employee scheduling, task allocation, and engagement strategies. Traditional methods rely heavily on rule-based systems and manual supervision, leading to inefficiencies and suboptimal workforce utilization. Existing machine learning (ML) approaches, such as supervised learning and statistical models, have improved certain aspects but often fail to dynamically adapt to evolving workforce demands. Additionally, these models struggle with real-time decision-making, requiring constant retraining and manual intervention. This study introduces a reinforcement learning (RL)-based workforce management framework to optimize productivity and employee engagement. Unlike conventional ML models, RL enables adaptive decision-making by continuously learning from interactions within the workforce environment. The proposed method employs deep Q-networks (DQN) and policy gradient techniques to enhance scheduling, task distribution, and incentive structures, leading to a more efficient and responsive workforce management system. The methodology involves collecting real-time workforce data, pre-processing it for feature extraction, and training the RL model using simulated and historical workforce scenarios. The model’s performance is evaluated based on efficiency gains, employee satisfaction, and task completion rates compared to traditional workforce management techniques. Experimental results demonstrate that the RL-based approach significantly improves task allocation accuracy by 18%, reduces scheduling conflicts by 22%, and enhances employee satisfaction scores by 15%. These findings underscore the potential of reinforcement learning in revolutionizing workforce management by fostering data-driven, real-time optimization, ultimately leading to enhanced organizational productivity and employee well-being.
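
A minimal sketch of the DQN component, assuming a toy state encoding of workload features and a small set of assignable tasks (the paper's actual environment and reward design are not public), could look like this in PyTorch:

    # Minimal DQN skeleton for a workforce-scheduling agent: state -> Q-values
    # over candidate task assignments. Sizes and rewards are placeholders.
    import random
    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS = 12, 5   # e.g. workload features -> assignable tasks

    q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                          nn.Linear(64, N_ACTIONS))
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    gamma, epsilon = 0.99, 0.1

    def act(state):
        if random.random() < epsilon:                 # epsilon-greedy exploration
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(q_net(state).argmax())

    def td_update(state, action, reward, next_state):
        target = reward + gamma * q_net(next_state).max().detach()
        loss = (q_net(state)[action] - target) ** 2   # one-step TD error
        optimizer.zero_grad(); loss.backward(); optimizer.step()

    s = torch.randn(STATE_DIM)
    a = act(s)
    td_update(s, a, reward=1.0, next_state=torch.randn(STATE_DIM))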

Author 1: Mano Ashish Tripathi
Author 2: Joel Osei-Asiamah
Author 3: Avanti Chinmulgund
Author 4: Aanandha Saravanan
Author 5: T Subha Mastan Rao
Author 6: Ramya H P
Author 7: Yousef A. Baker El-Ebiary

Keywords: Machine learning; workforce management; employee engagement; task allocation; productivity optimization

PDF

Paper 68: Chronic Kidney Disease Classification Using Bagging and Particle Swarm Optimization Techniques

Abstract: Chronic kidney disease (CKD) is a serious chronic illness without a definitive cure. According to the WHO in 2015, 10% of the population suffers from CKD, with 1.5 million patients undergoing haemodialysis worldwide. The incidence of CKD is increasing by 8% annually, ranking it as the 20th highest cause of global mortality. This study employs the Random Forest (RF) technique, which uses decision trees as an ensemble model: class predictions are derived by combining the results of the individual trees, and the final decision is the class predicted most often across them. In testing, Random Forest with PSO-based Bagging achieved the highest performance, with precision of 98.12%, recall of 100.00%, and AUC of 0.999. The model demonstrates high performance in CKD detection, but metrics like precision, recall, and AUC alone do not guarantee clinical applicability; balancing false positives and negatives is crucial, and real-world integration should be evaluated to assess its impact on patient outcomes and clinical workflows. Research on predicting chronic kidney disease using the Random Forest algorithm with Bagging based on Particle Swarm Optimization (PSO) indicates that Bagging with PSO feature selection can enhance accuracy and kappa values. These findings contribute to understanding the roles of the Bagging and PSO methods in improving the performance of several algorithms, including Random Forest.
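
As a simplified sketch of the ensemble side of this design, the snippet below wraps a Random Forest in a bagging ensemble on synthetic data; the PSO feature-selection step is represented by a fixed feature mask for brevity:

    # Sketch: Random Forest inside a bagging ensemble, scored with precision,
    # recall, and AUC. The feature mask stands in for a PSO-chosen subset.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score, recall_score, roc_auc_score

    X, y = make_classification(n_samples=400, n_features=24, random_state=1)
    mask = np.ones(24, dtype=bool); mask[18:] = False   # stand-in for PSO selection
    Xtr, Xte, ytr, yte = train_test_split(X[:, mask], y, test_size=0.3, random_state=1)

    model = BaggingClassifier(RandomForestClassifier(n_estimators=50),
                              n_estimators=10, random_state=1)
    model.fit(Xtr, ytr)
    proba = model.predict_proba(Xte)[:, 1]
    pred = model.predict(Xte)
    print("precision:", precision_score(yte, pred))
    print("recall   :", recall_score(yte, pred))
    print("AUC      :", roc_auc_score(yte, proba))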

Author 1: Suhendro Y. Irianto
Author 2: Dephi Linda
Author 3: Immaniar I. M. Rizki
Author 4: Sri Karnila
Author 5: Dona Yuliawati

Keywords: Kidney disease; PSO; bagging; Random Forest

PDF

Paper 69: Fuzzy Logic with Kalman Filter Model Framework for Children’s Personal Health Apps

Abstract: The increasing prevalence of obesity among children under five has led to a growing demand for improved food nutrition advisory systems. Current food nutrition recommendation models struggle with parameter estimation, contextual adaptation, and real-time accuracy, often relying on traditional fuzzy logic models that lack responsiveness to evolving dietary needs. This study proposes an Adaptive Extended Kalman Filter Fuzzy Logic (AEKFFL) model to enhance the accuracy and reliability of food nutrition recommendations. The AEKFFL model integrates the Extended Kalman Filter (EKF) for dynamic estimation of nutritional values and Fuzzy Logic for adaptive decision-making, effectively addressing parametric uncertainties in nutrition estimation. The research employs a Design Science Research Methodology (DSRM), incorporating stakeholder interviews, literature review, and data from food composition databases, user reviews, and ingredient information. The proposed hybrid model is tested against baseline methods, including standalone Fuzzy Logic, Support Vector Machine (SVM), Neural Networks (NN), and a hybrid Fuzzy-NN approach. Experimental results demonstrate that the AEKFFL model achieves the highest accuracy (94.8%) with the lowest error rates (MAE = 0.031, RMSE = 0.045), outperforming alternative models. Additionally, AEKFFL exhibits superior classification performance (F1-score = 94.4%) and usability (SUS score = 92.1%), indicating its effectiveness in real-time nutritional guidance. These findings suggest that AEKFFL provides an innovative and computationally efficient framework for personal health and food recommendations, contributing to enhanced dietary management and obesity prevention among children. Future work will focus on refining model adaptability and integrating real-time IoT data for further improvements in precision and responsiveness.
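
The Extended Kalman Filter at the heart of AEKFFL can be sketched for a single scalar state; the paper's model is multivariate, and the noise values and measurement stream below are illustrative:

    # Sketch of an EKF predict/update cycle tracking an evolving nutrition
    # estimate (scalar state for clarity). Noise values are assumed.
    x, P = 50.0, 1.0          # initial estimate and variance
    Q, R = 0.01, 0.5          # process and measurement noise (assumed)

    def ekf_step(x, P, z, f=lambda x: x, F=1.0, h=lambda x: x, H=1.0):
        # Predict through the (possibly nonlinear) process model f, Jacobian F.
        x_pred = f(x)
        P_pred = F * P * F + Q
        # Update with measurement z through measurement model h, Jacobian H.
        K = P_pred * H / (H * P_pred * H + R)      # Kalman gain
        x_new = x_pred + K * (z - h(x_pred))
        P_new = (1 - K * H) * P_pred
        return x_new, P_new

    for z in [52.0, 49.5, 51.2, 50.3]:             # streamed measurements
        x, P = ekf_step(x, P, z)
        print(f"estimate {x:.2f}  variance {P:.3f}")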

Author 1: Noorrezam Yusop
Author 2: Massila Kamalrudin
Author 3: Nuridawati Mustafa
Author 4: Nor Aiza Moketar
Author 5: Tao Hai
Author 6: Siti Fairuz Nurr Sardikan

Keywords: Fuzzy logic; Kalman filter; food nutrition; personal health; food recommendations

PDF

Paper 70: Enhanced Reconstruction of Occluded Images Using GAN and VGG-Net Preprocessing

Abstract: Facial recognition is widely used in security and identification systems, but occlusions like masks or glasses remain a major challenge. Recent approaches, such as GANs and partial feature extraction methods, attempt to reconstruct or identify occluded facial images. However, these approaches still have limitations in handling severe occlusions, computational efficiency, and dependency on large labeled datasets. In this paper, a GAN-based framework for synthetic reconstruction of occluded facial images is proposed, incorporating multiple specialized modules including a VGG-Net-based perceptual loss component to enhance visual quality. Our architecture improves the fidelity and robustness of reconstructed faces under varied occlusion types. Experimental evaluation on different occlusion scenarios demonstrated high reconstruction quality, with PSNR up to 33.106 and SSIM up to 0.983. The model also maintained strong recognition performance across diverse occlusion combinations. These findings support the framework's potential to enhance face recognition systems in real-world, unconstrained environments.
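
The VGG-Net-based perceptual loss can be sketched as a comparison of frozen VGG-16 feature maps between the reconstructed and ground-truth images; the layer cut-off and L1 distance below are assumptions rather than the paper's exact configuration:

    # Sketch of a VGG-based perceptual loss: compare feature maps of generated
    # and target images through a frozen VGG-16 (downloads pretrained weights).
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    features = vgg16(weights="DEFAULT").features[:9].eval()  # up to relu2_2
    for p in features.parameters():
        p.requires_grad_(False)                               # frozen extractor

    def perceptual_loss(generated, target):
        return nn.functional.l1_loss(features(generated), features(target))

    fake = torch.rand(1, 3, 224, 224, requires_grad=True)     # generator output
    real = torch.rand(1, 3, 224, 224)                         # ground-truth face
    loss = perceptual_loss(fake, real)
    loss.backward()                          # gradients flow back to the generator
    print(float(loss))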

Author 1: Salamun
Author 2: Shamsul Kamal Ahmad Khalid
Author 3: Ezak Fadzrin Ahmad Shaubari
Author 4: Noor Azah Samsudin
Author 5: Luluk Elvitaria

Keywords: Face recognition; occlusion; image reconstruction; generative adversarial networks; VGG-Net; occluded images; feature extraction

PDF

Paper 71: Parameter Adaptation of Enhanced Ant Colony System for Water Quality Rules Classification

Abstract: Water quality monitoring in aquaculture involves classifying and analyzing the collected data to assess the water quality that is appropriate for breeding, rearing and harvesting aquatic organisms. Systematic data classification is essential when it comes to managing large amounts of data that are continuously sensed in real time and have various attributes in each instance of a sequence. Ant Colony System (ACS) has been employed in optimizing the data classification in smart aquaculture, where the majority of the research focuses on enhancing the classification procedure using predetermined parameters within a specified range. Nevertheless, this approach does not guarantee ideal performance. This paper enhances the ACS algorithm by introducing the Enhanced Ant Colony System-Rule Classification (EACS-RC) algorithm, which improves rule construction by integrating pheromone and heuristic values while incorporating advanced pheromone update techniques. The optimal parameter values to be used by the proposed algorithm are obtained from parameter adaptation experiments in which different values within the defined range were applied to obtain the optimal value for each parameter. Experiments were performed on the Kiribati water quality dataset and the results of the EACS-RC algorithm were evaluated against the AntMiner and AGI-AntMiner algorithms. Based on the results, the proposed algorithm outperforms the benchmark algorithms in classification accuracy and processing time. The output of this study can be adopted by the other ACS variants to achieve optimal performance for data classification in smart aquaculture.
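
The two pheromone updates that characterize Ant Colony System, a local update as each ant extends its rule and a global update reinforcing the best rule, can be sketched as follows; parameter values and the pseudo-random proportional selection threshold are illustrative, not the tuned values from the paper's adaptation experiments:

    # Sketch of ACS term selection and pheromone updates for rule construction.
    import numpy as np

    n_terms = 6                       # candidate attribute-value terms for rules
    tau = np.full(n_terms, 0.5)       # pheromone per term
    eta = np.random.default_rng(1).random(n_terms)  # heuristic value per term
    rho, alpha, tau0, q0, beta = 0.1, 0.1, 0.5, 0.9, 2.0

    def choose_term():
        score = tau * eta**beta
        if np.random.random() < q0:               # pseudo-random proportional rule
            return int(score.argmax())            # exploit the best term
        return int(np.random.choice(n_terms, p=score / score.sum()))

    def local_update(term):                       # applied as an ant builds a rule
        tau[term] = (1 - rho) * tau[term] + rho * tau0

    def global_update(best_terms, best_quality):  # reinforce the best rule only
        for t in best_terms:
            tau[t] = (1 - alpha) * tau[t] + alpha * best_quality

    t = choose_term(); local_update(t)
    global_update([t], best_quality=0.8)
    print(tau)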

Author 1: Husna Jamal Abdul Nasir
Author 2: Mohd Mizan Munif
Author 3: Muhammad Imran Ahmad
Author 4: Tan Shie Chow
Author 5: Ku Ruhana Ku-Mahamud
Author 6: Abu Hassan Abdullah

Keywords: Parameter adaptation; rules classification; water quality monitoring; ant colony system; pheromone update techniques

PDF

Paper 72: The Application of Face Recognition Model Based on MLBP-HOG-G Algorithm in Smart Classroom

Abstract: The development of Internet and Internet of Things technologies has accelerated the informatization of smart education. However, the traditional face recognition algorithms used in smart classrooms suffer from heavy computation, high resource and memory consumption, and limited recognition accuracy. To support the informatization of colleges and universities and improve face recognition accuracy, a face recognition model based on a multi-feature Local Binary Pattern, Histogram of Oriented Gradients, and Gabor filter (MLBP-HOG-G) algorithm is proposed. The model first extracts a binary texture image, then carries out secondary feature extraction, dimensionality reduction, and serial fusion with gray-level co-occurrence matrix feature weighting to improve recognition accuracy. The results show that the recognition rate of the proposed method on the ORL, CMU_PIE, and Yale databases reaches 95%, 94.12%, and 93.33%, respectively, outperforming the comparison algorithms. On the combined dataset, the training and validation recognition accuracies of the proposed method are approximately 98% and 97.23%, showing good generalization and stability, and its cumulative error in facial key point detection is lower than that of the other methods compared. The proposed method offers new opportunities for face recognition applications, smart classroom construction, and teaching development.
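
The three feature families fused by the model can be sketched with scikit-image; the parameters and the random stand-in image below are illustrative, and the paper's secondary extraction and GLCM weighting steps are omitted:

    # Sketch of the LBP, HOG, and Gabor features the model serially fuses.
    import numpy as np
    from skimage.feature import local_binary_pattern, hog
    from skimage.filters import gabor

    img = np.random.default_rng(0).random((64, 64))          # stand-in grayscale face

    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, density=True)   # LBP texture histogram

    hog_vec = hog(img, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))                    # gradient-direction features

    gabor_real, _ = gabor(img, frequency=0.2)                # Gabor filter response
    gabor_vec = gabor_real.ravel()[:128]                     # truncated for the sketch

    fused = np.concatenate([lbp_hist, hog_vec, gabor_vec])   # serial feature fusion
    print(fused.shape)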

Author 1: Xiaoxia Li

Keywords: Multi-feature local binary pattern; directional gradient histogram; Gabor filter; face recognition; smart classroom

PDF

Paper 73: AI-Driven NAS-GBM Model for Precision Agriculture: Enhancing Crop Yield Prediction Accuracy

Abstract: Precision agriculture has emerged as a vital approach for optimizing crop yield prediction, enabling data-driven decision-making to improve agricultural productivity. Traditional forecasting methods struggle with the complexity of environmental factors under dynamic farming conditions. An AI framework combining Neural Architecture Search (NAS) and a Gradient Boosting Machine (GBM) is proposed to address these issues and enhance predictive capability. The aim is an automated system that selects and optimizes models for more accurate crop yield forecasts: the NAS component searches for the optimal neural network architecture, while the GBM component captures non-linear dependencies in the data, leading to superior predictive performance. Data preprocessing precedes model development, with Recursive Feature Elimination (RFE) used for feature selection before the NAS-optimized deep learning architectures are trained together with the GBM. The model was applied to real agricultural datasets covering essential variables such as soil conditions, weather elements, and crop health measurements. Experimental results show that the NAS-GBM framework outperforms standard models in predictive accuracy, computational efficiency, and generalization capability. The project uses TensorFlow and Scikit-learn alongside Optuna for model optimization, relying on cloud-based computational resources for large-scale processing. The results demonstrate the capability of AI-driven hybrid models to improve decision-making for farmers and agronomists.
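
Only the GBM half of the search is easy to sketch compactly; the snippet below shows an Optuna-driven hyperparameter search over a gradient boosting model on synthetic data, with search ranges that are assumptions rather than the paper's:

    # Sketch: Optuna hyperparameter search for the GBM side of NAS-GBM.
    import optuna
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

    def objective(trial):
        model = GradientBoostingRegressor(
            n_estimators=trial.suggest_int("n_estimators", 50, 300),
            max_depth=trial.suggest_int("max_depth", 2, 6),
            learning_rate=trial.suggest_float("learning_rate", 0.01, 0.3, log=True))
        return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=20)
    print(study.best_params, study.best_value)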

Author 1: Sudhir Anakal
Author 2: Poornima N
Author 3: Abdurasul Bobonazarov
Author 4: Janjhyam Venkata Naga Ramesh
Author 5: Elangovan Muniyandy
Author 6: Mandava Manjusha
Author 7: Yousef A. Baker El-Ebiary

Keywords: Network sensor; crop yield prediction; neural architecture search; Gradient Boosting Machine (GBM)

PDF

Paper 74: Challenges and Solutions in Agile Software Development: A Managerial Perspective on Implementation Practices

Abstract: Agile software development is widely used for its flexibility and customer-centric style, yet organizations transitioning from traditional project management frameworks still face substantial implementation challenges. This research elaborates on the challenges of Agile implementation and the methods managers use to overcome them, providing a managerial perspective on Agile adoption. The main challenges derived from the reviewed literature and case studies are resistance to change, lack of Agile expertise, poor team coordination, and inconsistent stakeholder buy-in. These usually lead to performance degradation because teams cannot maintain productivity and meet deadlines while delivering quality work. This paper outlines a number of managerial interventions that help mitigate such challenges, including Agile training, leadership support, incremental transition plans, and effective communication strategies. These interventions are assessed using performance indicators such as team productivity, stakeholder satisfaction, and time-to-market to establish the role they play in smoothing transitions to Agile frameworks. The paper also compares how Agile frameworks such as Scrum, Kanban, and SAFe perform relative to traditional project management practices in regard to risk management, team integration, and return on investment. Data from industry reports and surveys show that Agile methodologies are generally faster, more flexible, and better at engaging stakeholders than traditional methods, although success with Agile depends significantly on the maturity level of the organization and the managerial support provided. While Agile offers great advantages, implementing it successfully remains highly challenging. Managerial involvement has been the theme of this research in overcoming these barriers through continuous improvement, adaptive practices, and the creation of a collaborative environment for sustainable success in Agile adoption.

Author 1: Geetha L S
Author 2: Yousef A. Baker El-Ebiary
Author 3: Bandla Srinivasa Rao
Author 4: Revati Ramrao Rautrao
Author 5: T Subha Mastan Rao
Author 6: Janjhyam Venkata Naga Ramesh
Author 7: Omaia Al-Omari

Keywords: Agile software development; implementation challenges; managerial interventions; agile frameworks; performance evaluation

PDF

Paper 75: AEDGAN: A Semi-Supervised Deep Learning Model for Zero-Day Malware Detection

Abstract: Malware presents an increasing threat to cyberspace, drawing significant attention from researchers and industry professionals. Many solutions have been proposed for malware detection; however, zero-day malware detection remains challenging due to the evasive techniques used by malware authors and the limitations of existing solutions. Traditional supervised learning methods assume a fixed relationship between malware and their class labels over time, but this assumption does not hold in the ever-changing landscape of evasive malware and its variants. That is, malware developers intentionally design malicious software to share features with benign programs, producing zero-day malware. This study introduces the AEDGAN model, a zero-day malware detection framework based on a semi-supervised learning approach. The model leverages a generative adversarial network (GAN), an autoencoder, and a convolutional neural network (CNN) classifier to build an anomaly-based detection system. The GAN is used to learn representations of benign applications, while the autoencoder extracts latent features that effectively characterize benign samples. The CNN classifier is trained on an integrated feature vector that combines the latent features from the autoencoder with hidden features extracted by the GAN's discriminator. Extensive experiments were conducted to evaluate the model's effectiveness. Results from two benchmark datasets show that the AEDGAN model outperforms existing solutions, achieving a 5% improvement in overall accuracy and an 11% reduction in false alarms compared to the best-performing related model.
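
The anomaly-detection core, an autoencoder trained only on benign samples whose reconstruction error flags suspicious binaries, can be sketched as follows; feature width, threshold calibration, and data are assumptions:

    # Sketch: autoencoder trained on benign feature vectors; a high
    # reconstruction error at test time flags potential zero-day malware.
    import torch
    import torch.nn as nn

    D = 64                                           # length of a feature vector
    ae = nn.Sequential(nn.Linear(D, 16), nn.ReLU(),  # encoder -> latent
                       nn.Linear(16, D))             # decoder
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

    benign = torch.rand(256, D)                      # stand-in benign training set
    for _ in range(200):
        loss = nn.functional.mse_loss(ae(benign), benign)
        opt.zero_grad(); loss.backward(); opt.step()

    def anomaly_score(x):
        with torch.no_grad():
            return float(nn.functional.mse_loss(ae(x), x))

    threshold = 1.5 * anomaly_score(benign)          # simple benign-only calibration
    sample = torch.rand(1, D) * 3                    # out-of-distribution sample
    print("malicious?", anomaly_score(sample) > threshold)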

Author 1: Abdullah Marish Ali
Author 2: Fuad A. Ghaleb
Author 3: Faisal Saeed

Keywords: Malware detection; zero-day; anomaly detection; generative adversarial network; autoencoder; convolutional neural network

PDF

Paper 76: Development and Evaluation of Accounting Information System and Shopee Open Application Programming Interface for a Small Business, Thailand

Abstract: This research aimed to develop and evaluate an integrated Accounting Information System (AIS) with Shopee Open API for the Ban Huai Luek Agricultural Community Enterprise in Thailand, designed to enhance financial data management efficiency and optimize online marketing operations. The research employed a mixed-method approach, combining qualitative interviews with 30 stakeholders in three groups and quantitative assessments of system effectiveness with 388 consumers and 30 farmers. Interview findings revealed diverse stakeholder needs: Enterprise members prioritized financial management and operational costs, farmers emphasized security and technology access, while customers focused on e-commerce capabilities and market positioning. The developed AIS features 41 database tables and nine core functions, incorporating Shopee's e-commerce platform through Application Programming Interface (API) integration, enabling automated product listing, inventory management, and financial calculations. System evaluation demonstrated high user satisfaction across all groups. Consumer analysis showed an overall strong approval, with security and perceived benefits ranking highest, while performance efficiency scored lowest. Farmer assessments indicated high satisfaction, with ease of use and system accuracy rated highest, though security concerns emerged during initial technology adoption. Demographic factors, particularly age and income, significantly influenced user perceptions.

Author 1: Kewalin Angkananon
Author 2: Piyabud Ploadaksorn

Keywords: Accounting information system; e-commerce integration; agricultural community enterprise; shopee open API

PDF

Paper 77: Detection of Structural Vulnerabilities in Multi-Cavity Steel Plate Shear Walls Using Improved Deep Neural Networks

Abstract: Steel Plate Shear Walls (SPSWs) are a significant structural system because they can dissipate energy and provide very high lateral stiffness. However, the discovery and elimination of critical structural vulnerabilities, particularly in multi-cavity configurations, remains a major challenge. This study draws on recent developments in deep learning to improve the identification and characterization of such vulnerabilities. An improved DNN architecture was employed to analyze the behavior of multi-cavity SPSWs under different loading conditions. The proposed method combines hybrid information extraction techniques with various geometries and materials to ensure reliable prediction of structural element failures. The tests showed highly positive results, with the enhanced DNN outperforming conventional procedures through higher accuracy, lower false-positive rates, and superior generalization across various test cases. This work demonstrates a new way to detect structural weaknesses, providing engineers with an effective tool for preserving the sustainability and safety of SPSWs in critical infrastructure.

Author 1: Zhang Bo
Author 2: Xu Dabin

Keywords: Structural vulnerabilities; deep neural networks; steel plate shear walls; seismic design; machine learning

PDF

Paper 78: Intrusion Detection System-Based Network Behavior Analysis: A Systemic Literature Review

Abstract: An Intrusion Detection System (IDS) primarily serves as a means of detecting illegal access and activity in a network. Due to rapidly evolving cyber threats, traditional signature-based IDS have started losing their effectiveness, leading to the emergence of advanced alternatives such as Network Behavior Analysis (NBA). Unlike conventional signature-based systems, NBA monitors behavioral patterns for deviations and potential threats, a far more flexible and powerful way of detecting intrusions. While NBA-based IDS is a growing field of interest, existing research in this area is largely fragmented, mostly concentrating on single features such as machine learning or deep learning algorithms, specific detection processes, or particular environments such as IoT and cloud systems. This systematic literature review (SLR) follows the guidelines proposed by Kitchenham to collect relevant studies, highlight research gaps, and provide an overview of the existing evidence. Spanning literature from January 2014 to April 2024, it comprehensively covers the methods, datasets, types of detectable cyber-attacks, performance metrics, and challenges that confront existing NBA-based IDS. The review underscores the urgent need for more flexible and robust solutions based on advanced Artificial Intelligence (AI) techniques in response to increasing cyberspace complexity. It thereby provides fundamental perspectives for researchers and practitioners and makes an important contribution towards stimulating future research efforts to design more effective and robust IDS solutions.

Author 1: Mohammed Janati
Author 2: Fayçal Messaoudi

Keywords: Artificial Intelligence (AI); deep learning; machine learning; cybersecurity; Intrusion Detection System; Network Behavior Analysis (NBA); Systematic Literature Review (SLR)

PDF

Paper 79: Dynamic Obstacle Avoidance and Path Planning for Mobile Robots Integrating Improved Rapidly-Exploring Random Tree-Star and Improved Dynamic Window Approach

Abstract: With the application and popularization of artificial intelligence and intelligent robots in daily life, the autonomous navigation and flexible operation capabilities of mobile robots have become particularly critical. Mobile robots perform well in regular environments but face problems such as low accuracy in dynamic obstacle avoidance and weak adaptability to complex terrain. This study enhances the adaptability of the Rapidly-exploring Random Tree Star (RRT*) algorithm and integrates it with the A-Star algorithm, the Dynamic Window Approach, and visual sensors to construct an obstacle avoidance model. The objective is to enable the improved model to recognize various terrain features and to improve the accuracy of the path planning algorithm. The proposed model performed well in obstacle avoidance, with a success rate of 95.78% after ten training epochs and no more than four collisions within 4 minutes. In the experiment, as obstacles were added every minute, the response time of the proposed model remained below 25 seconds. These results indicate that the quality of the planned paths is higher than that of the other three models. The path optimization combined with the A* algorithm is effective and offers high real-time performance and accuracy, supporting the wide deployment of mobile robots in industries such as services, navigation, and logistics.
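
The basic RRT growth step that the study builds on (before the rewiring of RRT* and the DWA/A* integration) can be sketched in a few lines; the obstacle model and step size are simplified placeholders:

    # Sketch of the core RRT growth step: sample a point, extend the nearest
    # tree node toward it, and keep the move if it is collision-free.
    import math, random

    obstacles = [((5.0, 5.0), 1.0)]                   # (center, radius) discs
    step = 0.5

    def collision_free(p):
        return all(math.dist(p, c) > r for c, r in obstacles)

    tree = [(0.0, 0.0)]                               # start node
    for _ in range(500):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        near = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collision_free(new):
            tree.append(new)                          # RRT* would also rewire here

    print(len(tree), "nodes grown")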

Author 1: Xianyong Wei
Author 2: Hongying Si

Keywords: Rapidly-exploring random tree-star; dynamic window approach; A-star algorithm; dynamic obstacle avoidance; path planning; mobile robot

PDF

Paper 80: Resource Utilization Prediction Model for Cloud Datacentre: Survey

Abstract: This survey aims to analyze resource prediction models in cloud environments to improve resource allocation strategies. It can be difficult for cloud service providers to maintain required Quality of Service (QoS) levels without violating a service level agreement (SLA). Improving cloud performance requires accurate workload prediction. To enhance QoS for customers, cloud computing provides virtualisation, scalability, and on-demand services. Resource provisioning is a major challenge in the cloud environment due to its dynamic nature and the rapid increase in resource demand. Over-provisioning of resources leads to energy waste and increased expenses, while under-provisioning can result in SLA breaches and reduced QoS. It is crucial to allocate resources as closely as possible to current demands. Cloud elasticity plays a key role in adapting to workload changes and maintaining performance levels. Predicting future resource demand is essential for effective resource allocation, which is the focus of this survey. Our survey uniquely focuses on comparing univariate and multivariate input cases for cloud resource prediction, a perspective that has not been deeply explored in similar surveys. Unlike existing works that primarily categorize models by methodologies or application characteristics, our study offers a novel analysis of how different input scenarios impact prediction accuracy, resource efficiency, and scalability. By addressing this overlooked aspect, our survey provides unique insights and practical guidance for researchers and practitioners aiming to optimize resource utilization in cloud environments. A thorough analysis of resource prediction models in cloud systems is presented in this research, including a comparison of predicted resources, prediction algorithms, datasets, performance metrics, a prediction summary, and a taxonomy of prediction methods. This survey not only synthesizes current knowledge but also identifies key gaps and future directions for the development of more robust and efficient resource prediction models.

Author 1: Doaa Bliedy
Author 2: Mohamed H. Khafagy
Author 3: Rasha M. Badry

Keywords: Cloud computing; resource utilization; prediction; cloud datacenter; machine learning models; resource allocation

PDF

Paper 81: Handwritten Arabic Calligraphy Generation: A Systematic Literature Review

Abstract: Arabic calligraphy is famous for its distinct artistic style. It is written by skilled calligraphers to highlight the beauty of Arabic letters and represent its rich artistry. Due to the complexity of Arabic text compared to other languages' scripts, Arabic calligraphy writing demands a significant investment of time and effort, as well as the acquisition of high skills from calligraphers to correctly form the curves of Arabic script and accurately represent its various styles. This Systematic Literature Review (SLR) aims to provide a comprehensive analysis of the current state of research in Arabic calligraphy generation using deep learning and generative models. The review follows the PRISMA guidelines and examines 19 primary studies selected from a systematic search of academic databases, with publications spanning from January 2009 to December 2024. The findings indicate that Generative Adversarial Networks (GANs) and their variants are the most commonly used models for generating Arabic calligraphy. Additionally, the review highlights a significant gap in the availability of large, standardized handwritten datasets for model training and evaluation, as most existing datasets are small, custom-made, or privately held. In conclusion, the review offers valuable insights that can help researchers and practitioners advance the field, enabling the generation of high-quality Arabic calligraphy that satisfies both artistic and functional needs.

Author 1: Afnan Sumayli
Author 2: Mohamed Alkaoud

Keywords: Arabic calligraphy; deep learning; generative models; handwritten dataset; Generative Adversarial Networks

PDF

Paper 82: Music Emotion Recognition and Analysis Based on Neural Network

Abstract: The close connection between music and human emotions has always been an important topic of research in psychology and musicology. Scientists have proven that music can affect a person's emotional state, thereby possessing the potential for therapy and stress relief. With the development of information technology, automatic music emotion recognition has become an important research direction. The MultiSpec-DNN model proposed in this article is a multi-spectral deep neural network that integrates multiple features and modalities of music, including but not limited to melody, rhythm, harmony, and lyrical content, thus achieving efficient and accurate recognition of music emotions. The core of the MultiSpec-DNN model lies in its ability to process and analyze various types of data inputs. By combining audio signal processing and natural language processing technologies, the MultiSpec-DNN model can extract and analyze the comprehensive emotional characteristics in music files, thereby achieving more accurate emotion classification. In the experimental section, the MultiSpec-DNN model was tested on two standard emotional speech databases: EmoDB and IEMOCAP. The experimental results show that the MultiSpec-DNN model has a significant improvement in accuracy compared to traditional single-modal recognition methods, which proves the effectiveness of integrated features in emotion recognition.
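
The multi-spectral input side of such a model can be sketched with librosa, stacking several spectral views of one clip into a single feature matrix; the example clip and parameters are placeholders:

    # Sketch: multiple spectral views (mel spectrogram, MFCCs, chroma) of one
    # audio clip, stacked into a single multi-spectral input matrix.
    import numpy as np
    import librosa

    y, sr = librosa.load(librosa.ex("trumpet"))          # bundled example clip

    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # timbre features
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)     # harmony features

    frames = min(mel.shape[1], mfcc.shape[1], chroma.shape[1])
    fused = np.vstack([mel[:, :frames], mfcc[:, :frames], chroma[:, :frames]])
    print(fused.shape)                                   # (64+13+12, frames)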

Author 1: Zhao Hanbing
Author 2: Jin Xin
Author 3: Guo Jinfeng

Keywords: Music emotion recognition; multimodal fusion; audio signal processing; neural network; sentiment analysis; user experience

PDF

Paper 83: Medical Named Entity Recognition for Enhanced Electronic Health Record Maintenance

Abstract: The increasing use of electronic health records (EHRs) has led to a surge in unstructured data, making it challenging to extract valuable insights. This study proposes Natural Language Processing (NLP) based techniques to standardize Electronic Health Record (EHR) data. Conducted in a healthcare setting, the research focuses on transforming unstructured EHR text into structured data using Part-of-Speech tagging and Named Entity Recognition (NER). NER techniques are applied to extract and categorize medical terms, enhancing data accuracy and consistency. The framework’s performance is evaluated using precision and recall rates. Experimental results demonstrate that NER effectively identifies and organizes medical entities, facilitating improved data analysis and decision-making in healthcare. This approach promises to enhance interoperability and the overall utility of EHR systems.
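
A minimal sketch of the POS-tagging and NER steps with spaCy follows; the general-purpose en_core_web_sm model is used purely for illustration, whereas a production EHR pipeline would load a clinical model (for example one of the scispaCy models):

    # Sketch of POS tagging and NER on a clinical note with spaCy.
    import spacy

    nlp = spacy.load("en_core_web_sm")   # assumes the model has been downloaded
    note = "Patient prescribed 500 mg amoxicillin on 12 March for acute sinusitis."

    doc = nlp(note)
    for token in doc:
        print(token.text, token.pos_, sep="\t")   # part-of-speech tagging
    for ent in doc.ents:
        print(ent.text, ent.label_, sep="\t")     # extracted named entities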

Author 1: Muralikrishna S. N
Author 2: Raghavendra Ganiga
Author 3: Raghurama Holla
Author 4: Ruppikha Sree Shankar

Keywords: Electronic health records; named entity recognition; natural language processing; part-of-speech

PDF

Paper 84: Optimizing Large Language Models for Low-Resource Languages: A Case Study on Saudi Dialects

Abstract: Large Language Models (LLMs) have revolutionized natural language processing (NLP); however, their effectiveness remains limited for low-resource languages and dialects due to data scarcity. One such underrepresented variety is the Saudi dialect, a widely spoken yet linguistically distinct variant of Arabic. NLP models trained on Modern Standard Arabic (MSA) often struggle with dialectal variations, leading to suboptimal performance in real-world applications. This study aims to enhance LLM performance for the Saudi dialect by leveraging the MADAR dataset, applying data augmentation techniques, and fine-tuning a state-of-the-art LLM. Experimental results demonstrate the model’s effectiveness in Saudi dialect classification, achieving 91% accuracy, with precision, recall, and F1-scores all exceeding 0.90 across different dialectal variations. These findings underscore the potential of LLMs in handling dialectal Arabic and their applicability in tasks such as social media monitoring and automatic translation. Future research can further improve performance by refining fine-tuning strategies, integrating additional linguistic features, and expanding training datasets. Ultimately, this work contributes to democratizing NLP technologies for low-resource languages and dialects, bridging the gap in linguistic inclusivity within AI applications.
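
A compact sketch of the fine-tuning setup with the Hugging Face transformers library is shown below; the encoder name, label count, and toy samples are placeholders rather than the paper's exact configuration:

    # Sketch: fine-tuning a transformer for dialect classification.
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)
    import torch

    name = "aubmindlab/bert-base-arabertv2"          # one possible Arabic encoder
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=5)

    texts = ["وش تبي من السوق؟", "كيف حالك اليوم؟"]     # toy dialect samples
    labels = torch.tensor([0, 1])
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")

    class DialectSet(torch.utils.data.Dataset):
        def __len__(self):
            return len(labels)
        def __getitem__(self, i):
            return {**{k: v[i] for k, v in enc.items()}, "labels": labels[i]}

    trainer = Trainer(model=model,
                      args=TrainingArguments(output_dir="out", num_train_epochs=1,
                                             per_device_train_batch_size=2),
                      train_dataset=DialectSet())
    trainer.train()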

Author 1: Bayan M. Alsharbi

Keywords: LLM; Saudi Dialect; deep learning

PDF

Paper 85: Smart Homes, Family Bonds, and Societal Resilience: A Comparative Analysis of AraBERT, MarBERT, and DistilBERT on Arabic Twitter Data

Abstract: This study explores the concept of Smart Homes & Families by analyzing 1,174,912 Arabic tweets from Saudi Arabia to understand societal perceptions, challenges, and expectations. Recognizing that homes play a vital role in nurturing relationships, values, morals, and societal cohesion, the research emphasizes that the "smartness" of homes lies not only in technological advancements but also in supporting core family functions and contributing to sustainability. A machine learning tool was developed, integrating data collection, preprocessing, embedding generation, dimensionality reduction, clustering, visualization, and validation. The study conducts a comparative analysis of AraBERT, MarBERT, and DistilBERT (models based on Bidirectional Encoder Representations from Transformers, or BERT), identifying AraBERT as the optimal model for Arabic X (formerly Twitter) analysis. Coherence metrics and thematic evaluation were used to assess model performance. Thematic analysis revealed 22 key parameters grouped into three macro-parameters, offering a structured understanding of public discourse. The study provides policy recommendations and outlines future research directions, delivering actionable insights for stakeholders to support family well-being, societal resilience, and sustainable development through smart home technologies.

Author 1: Eman Alqahtani
Author 2: Rashid Mehmood
Author 3: Sanaa Sharaf
Author 4: Saad Alqahtany

Keywords: Smart homes; smart families; sustainability; Bidirectional Encoder Representations from Transformers (BERT); AraBERT; MarBERT; DistilBERT; coherence metrics; Twitter

PDF

Paper 86: Improving Financial Forecasting Accuracy Through Swarm Optimization-Enhanced Deep Learning Models

Abstract: Financial forecasting is crucial for decision-making in numerous fields and demands highly accurate predictive models. Traditional methods such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Gradient Boosting Machines (GBM) perform adequately but have proven inefficient on complex, high-dimensional financial data. This paper introduces a new approach combining swarm-based algorithms and deep learning architectures to improve predictive accuracy in financial forecasting. The proposed method relies on careful data preprocessing to optimize the learning process and prevent overfitting. In experiments on a wide variety of datasets, the optimized model achieved an accuracy of 98%, outperforming traditional models such as CNN (80%), RNN (83%), and GBM (95.6%). Furthermore, the model achieved a good precision-recall trade-off, strengthening its applicability to real-world predictive tasks such as stock price prediction and market trend analysis. By optimizing essential hyperparameters through swarm intelligence, the framework handles the non-linear dependencies and volatility of financial data. The study demonstrates the high robustness and adaptability of the proposed approach and addresses the shortcomings of conventional financial forecasting tools. It advances intelligent financial analytics by proposing a reference framework for further studies combining deep learning and optimization technologies, and the results support the application of swarm-optimized models for overcoming the reliability limitations of financial forecasting systems and for future research in machine-learning-driven economic modelling and risk analysis.
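
To illustrate the mechanics, the sketch below runs a small hand-written particle swarm over two hyperparameters of a gradient-boosted forecaster on synthetic data; swarm size, coefficients, and bounds are assumptions:

    # Minimal PSO sketch tuning (learning_rate, n_estimators) of a forecaster.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=200, n_features=8, noise=10, random_state=0)

    def fitness(p):                     # p = (learning_rate, n_estimators)
        model = GradientBoostingRegressor(learning_rate=p[0], n_estimators=int(p[1]))
        return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

    rng = np.random.default_rng(0)
    lo, hi = np.array([0.01, 50]), np.array([0.3, 300])
    pos = rng.uniform(lo, hi, size=(8, 2))            # 8 particles
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmax()]

    for _ in range(5):                                # a few PSO iterations
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()]

    print("best (learning_rate, n_estimators):", gbest)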

Author 1: Balakrishnan S
Author 2: Y. Srinivasa Rao
Author 3: Karaka Ramakrishna Reddy
Author 4: Janjhyam Venkata Naga Ramesh
Author 5: Elangovan Muniyandy
Author 6: M. V. A. L. Narasimha Rao
Author 7: Yousef A. Baker El-Ebiary
Author 8: B Kiran Bala

Keywords: Financial forecasting; deep learning; swarm optimization; predictive modeling; machine learning

PDF

Paper 87: A Fuzzy-Neural Network Approach to Market Supervision and Product Recall Prediction

Abstract: The paper proposes a fuzzy-neural network method for market monitoring and product recall prediction. The method uses fuzzy logic and neural networks to handle complex and ambiguous input: the fuzzy logic component fuzzifies the input variables (product quality rating, customer complaint rate, and market trend index), and the neural network component learns patterns in the fuzzified data to predict product recalls. Online information on product recalls is used, forming a dataset containing customer complaint rates, product quality ratings, and market trend indices. Fuzzy sets and membership functions complete the fuzzification of the input variables, and a neural network trained on the fuzzified data predicts product recalls. The proposed method is assessed in terms of accuracy, precision, recall, and F1-score. In testing, the suggested technique achieved an accuracy of 0.863, precision of 0.854, recall of 0.872, F1-score of 0.863, and MSE of 0.123. The fuzzy-neural network approach improves market monitoring and product recall prediction: fuzzy logic and neural networks together analyze complicated and uncertain data, improving prediction accuracy. This strategy may assist market supervisors and manufacturers in deciding on product recalls.
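
The fuzzification stage can be sketched with triangular membership functions mapping a crisp complaint rate into low/medium/high degrees; the breakpoints are illustrative:

    # Sketch of fuzzification: a crisp complaint rate becomes membership
    # degrees that a downstream neural network consumes.
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership with feet a, c and peak b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def fuzzify_complaint_rate(x):          # x in [0, 1]
        return {"low":    tri(x, -0.4, 0.0, 0.4),
                "medium": tri(x,  0.1, 0.5, 0.9),
                "high":   tri(x,  0.6, 1.0, 1.4)}

    print(fuzzify_complaint_rate(0.72))
    # e.g. {'low': 0.0, 'medium': 0.45, 'high': 0.3} -> fed to the neural network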

Author 1: Wei Chen

Keywords: Fuzzy-neural network; customer complaint rate; product quality rating; market trend index; market supervision; accuracy; precision; recall; F1-Score and MSE

PDF

Paper 88: Analysis of the Application and Potential of Renewable Energy in Landscape Architecture

Abstract: The field of landscape architecture is constantly evolving to address sustainability and climate change. As renewable energy sources become more prevalent, there is a growing opportunity to integrate these technologies into landscape design. An effective technique for evaluating the potential of incorporating renewable energy management into landscape architecture is currently lacking; as a result, decision-making procedures remain manual and subjective and require greater precision and consistency. Deep learning algorithms can be used to examine the potential for renewable energy management in landscape architecture, helping to solve this problem. Deep learning is a branch of artificial intelligence that automatically extracts complicated relationships and patterns from data using multi-layer neural networks. With inputs such as topography, solar radiation, and climate, the algorithm can determine where in a particular landscape renewable energy installations would be most effective.

Author 1: YaWei Wu
Author 2: Xiang Meng

Keywords: Landscape architecture; sustainability; renewable energy; decision-making; deep learning; artificial intelligence

PDF

Paper 89: Performance Evaluation of Machine Learning-Based Cyber Attack Detection in Electric Vehicles Charging Stations

Abstract: Electric Vehicles (EV) chargers rely on resource-constrained embedded hardware to execute critical charging operations. However, conventional security solutions may not adequately meet the needs of these devices. Increasingly, machine learning techniques are being leveraged to detect cyber attacks during electric vehicle charging. This study aims to evaluate various base machine learning methods and conduct binary and multi-class classification experiments to enhance security and operational efficiency in EV charging stations. The experiments utilize the CICEVSE2024 dataset, curated by the Canadian Institute for Cybersecurity at the University of New Brunswick, designed specifically for anomaly detection and establishing behavioral patterns in EV charging stations. The analysis highlights nuances in performance across different machine learning classifiers. For instance, Random Forest achieved 95.07% accuracy in binary classification by constructing robust decision trees. Ensemble methods such as CatBoost and LightGBM further improved binary classification to 95.37% and 95.41%, respectively through gradient boosting techniques. In multi-class attack classification, ensemble methods demonstrated superior performance, with the Stacking Ensemble achieving 91.1% accuracy by combining multiple models, and Voting Ensemble achieving 90.7%. Notably, among homogeneous base classifiers, Extra Trees and HistGradient Boosting were particularly effective, achieving 90.2% and 89.8% accuracy respectively in multi-class classification tasks. These findings underscore the efficacy of machine learning in enhancing cybersecurity measures for EV charging infrastructure.

Author 1: Mutaz A. B. Al-Tarawneh
Author 2: Omar Alirr
Author 3: Hassan Kanj

Keywords: Machine learning; cyber attack detection; cyber threats; distributed denial of service attack; charging stations

PDF

Paper 90: Adaptive Ensemble Selection for Personalized Cardiovascular Disease Prediction Using Clustering and Feature Selection

Abstract: Cardiovascular disease (CVD) remains one of the leading causes of mortality worldwide, highlighting the need for early and precise prediction to support timely intervention. This study introduces an ensemble-based adaptive approach that personalizes CVD prediction by dynamically adjusting model configurations based on patient subgroups. To achieve this, various clustering techniques, including KMeans, DBSCAN, and MeanShift, are employed alongside feature selection methods such as chi-square, Mutual Information, and a baseline that incorporates all features. By tailoring classifier selection to each cluster, the proposed approach optimizes predictive performance, with ensemble models configured using Multi-Layer Perceptron (MLP) or Decision Tree classifiers. Through extensive experiments utilizing 10-fold cross-validation, results indicate that the adaptive ensemble consistently surpasses the static ensemble in key performance metrics, including accuracy, precision, recall, F1 score, and AUC. In particular, the highest accuracy of 95.57% was achieved using MeanShift clustering with the entire set of features, demonstrating the effectiveness of density-based clustering in improving classification performance. Notably, this accuracy exceeds the best-reported results in previous studies, establishing a new benchmark for CVD prediction. These findings highlight the potential of adaptive ensemble selection to significantly improve diagnostic precision, providing valuable insights for personalized CVD prediction and broader applications in medical decision making.
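
The cluster-then-classify routing at the core of the adaptive ensemble can be sketched as follows; the clustering method, classifier, and synthetic data are simplified stand-ins for the paper's configuration search:

    # Sketch: cluster patients, fit one classifier per cluster, and route each
    # test sample to its cluster's model.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=600, n_features=13, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xtr)
    models = {}
    for c in range(3):
        idx = km.labels_ == c
        models[c] = MLPClassifier(max_iter=500, random_state=0).fit(Xtr[idx], ytr[idx])

    pred = np.array([models[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(km.predict(Xte), Xte)])
    print("accuracy:", accuracy_score(yte, pred))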

Author 1: Mutaz A. B. Al-Tarawneh
Author 2: Khaled S. Al-Maaitah
Author 3: Ashraf Alkhresheh

Keywords: Cardiovascular disease prediction; adaptive ensemble selection; clustering techniques; feature selection; personalized healthcare

PDF

Paper 91: MAHYA: Facial Recognition-Based Pilgrim Identification System for Enhanced Health Monitoring and Assistance

Abstract: During the Hajj season, Saudi Arabia experiences the arrival of millions of pilgrims from diverse linguistic and geographical backgrounds. This influx poses significant challenges for emergency medical care services. The primary objective of this study is to explore the technological shortcomings and difficulties encountered by healthcare teams during such large-scale gatherings and to propose improvements for more effective emergency medical response systems. This study introduces MAHYA, a mobile health technology application designed to enhance emergency medical responses. MAHYA integrates advanced facial recognition technology, utilizing Inception ResNet V1 and Siamese network algorithms, to quickly and accurately identify individuals and retrieve their medical histories. This quick access to vital medical information is crucial for timely and efficient emergency medical care. The app incorporates a few-shot learning approach to bolster its facial recognition capabilities, which is vital to manage the large number of pilgrims. Further technical aspects of MAHYA include its use of Flask for back-end operations, Python for data processing, and NGROK to ensure secure external connectivity. These features collectively empower the application to offer a highly effective, secure, and adaptive facial recognition service, tailored for the dynamic and densely populated environment of the Hajj. The findings of the deployment of this application indicate a substantial improvement in the operational efficiency of healthcare professionals on the ground, leading to faster response times and improved overall quality of emergency medical services.

Author 1: Shahad Albalawi
Author 2: Lujin Alamri
Author 3: Jumanah Atut
Author 4: Shatha Albalawi
Author 5: Reem Haddaddi
Author 6: A’aeshah Alhakamy

Keywords: Facial recognition; emergency medical care; ResNet inception; siamese network; mobile health technology

PDF

Paper 92: Machine Learning-Driven Preventive Maintenance for Fibreboard Production in Industry 4.0

Abstract: The transition to Industry 4.0 has necessitated the adoption of intelligent maintenance strategies to enhance manufacturing efficiency and reduce operational disruptions. In fibreboard production, conventional preventive maintenance, reliant on fixed schedules, often leads to inefficient resource allocation and unexpected failures. This study proposes a machine learning-driven predictive maintenance (PdM) framework that utilises real-time sensor data and predictive analytics to optimise maintenance scheduling and improve system reliability. The proposed approach is validated using real-world industrial data, where Random Forest and Gradient Boosting regression models are applied to predict machine wear progression and estimate the remaining useful life (RUL) of critical components. Performance evaluation shows that Random Forest outperforms Gradient Boosting, achieving a lower Mean Squared Error (MSE) of 0.630, a lower Mean Absolute Error (MAE) of 0.613, and a higher R-squared score of 0.857. Feature importance analysis further identifies surface grade as a key determinant of equipment wear, suggesting that redistributing production across lower-impact grades can significantly reduce long-term wear and extend machine lifespan. These findings underscore the potential of artificial intelligence in predictive maintenance applications, contributing to the advancement of smart manufacturing in Industry 4.0. This research lays the foundation for further investigations into adaptive, real-time maintenance frameworks, supporting sustainable and efficient industrial operations.

Author 1: Sirirat Suwatcharachaitiwong
Author 2: Nikorn Sirivongpaisal
Author 3: Thattapon Surasak
Author 4: Nattagit Jiteurtragool
Author 5: Laksiri Treeranurat
Author 6: Aree Teeraparbseree
Author 7: Phattara Khumprom
Author 8: Sirirat Pungchompoo
Author 9: Dollaya Buakum

Keywords: Predictive maintenance; machine learning; fibre-board production; operational efficiency; Industry 4.0; smart manufacturing

PDF

Paper 93: Small Object Detection in Complex Images: Evaluation of Faster R-CNN and Slicing Aided Hyper Inference

Abstract: Small object detection has many applications, including maritime surveillance, underwater computer vision, agriculture, traffic flow analysis, drone surveying, etc. Object detection has made notable improvements in recent years. Despite these advancements, there is a notable disparity in performance between detecting small and large objects. This gap arises because small objects carry less information and have weaker feature representations. This paper investigates the performance of Faster Region-Based Convolutional Neural Networks (R-CNN), one of the most popular and user-friendly object detection models, for head detection and counting in artworks rather than images of real humans. The impact of Slicing Aided Hyper Inference (SAHI) on the enhancement of the model's capability to detect small heads in large-size images is also analyzed. The Kaggle-hosted Artistic Head Detection dataset was used to train and evaluate the proposed model. The effectiveness of the proposed methodology was demonstrated by integrating SAHI into two other object detection models, Cascaded R-CNN and Adaptive Training Sample Selection (ATSS). The experimental results reveal that applying SAHI on top of any object detector enhances its ability to recognize and detect tiny and variously scaled heads in large-scale images, which is a significant challenge in numerous applications. At a confidence level of 0.8, the SAHI-enhanced Faster R-CNN achieved the best private Root Mean Square Error (RMSE) score of 5.31337, while the SAHI-enhanced Cascaded R-CNN obtained the highest public RMSE score of 3.47005.
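
The slicing idea behind SAHI can be sketched independently of any particular detector: run inference on overlapping tiles and map boxes back to full-image coordinates (edge handling and cross-tile NMS are simplified away here):

    # Sketch of slicing-aided inference: tile a large image, detect per tile,
    # and shift boxes back to global coordinates. `detect` is a placeholder
    # for any detector (e.g. a Faster R-CNN forward pass on one tile).
    def sliced_inference(image_w, image_h, detect, tile=512, overlap=0.2):
        stride = int(tile * (1 - overlap))
        detections = []
        for y0 in range(0, max(image_h - tile, 0) + 1, stride):
            for x0 in range(0, max(image_w - tile, 0) + 1, stride):
                for (x, y, w, h, score) in detect(x0, y0, tile):
                    detections.append((x + x0, y + y0, w, h, score))  # to global
        return detections   # a real pipeline would apply NMS across tiles here

    fake_detect = lambda x0, y0, t: [(10, 20, 30, 40, 0.9)]   # stub detector
    print(len(sliced_inference(1024, 1024, fake_detect)))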

Author 1: Fatma Mazen Ali Mazen
Author 2: Yomna Shaker

Keywords: Faster R-CNN; Cascaded R-CNN; SAHI; ATSS; artistic head detection; small object detection

PDF

Paper 94: Enhancing Vision-Based Religious Tourism Systems in Makkah Using Fine-Tuned YOLOv11 for Landmark Detection

Abstract: Makkah, one of the most significant cities in the Islamic world, possesses a rich architectural and cultural heritage that requires precise detection and identification of its landmarks. Accurate landmark detection plays a vital role in urban planning, cultural preservation, and enhancing tourism experiences. In this study, fine-tuned versions of the YOLOv11 network, specifically the nano and small variants, are proposed for efficient and precise detection of Makkah's landmarks. The YOLOv11 framework, renowned for its real-time object detection capabilities, was carefully adapted to address the unique challenges posed by the diverse visual characteristics of Makkah's landmarks, including varying scales, intricate textures, and challenging environmental conditions. To further enhance the models for deployment in embedded systems with low-latency requirements, a quantization technique is applied. This process significantly reduces model size and increases inference speed, optimizing the network for resource-constrained environments while maintaining high detection accuracy. Beyond technical improvements, this approach supports real-world applications such as interactive tourism via mobile and AR systems, automated heritage documentation, and continuous monitoring of historic sites for conservation efforts. Additionally, integration into smart city infrastructures can enhance security and management of cultural landmarks. Experimental results show that the fine-tuned YOLOv11 models, particularly the small version, achieve high accuracy, with notable improvements in precision and recall compared to baseline models. This research demonstrates the potential of deep learning techniques for cultural heritage detection and lays the foundation for future applications in urban analytics, geospatial mapping, and real-time vision-based systems for tourism and heritage preservation.
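
A minimal fine-tuning and export sketch with the ultralytics package follows; the dataset YAML name is a hypothetical placeholder for a Makkah landmarks dataset, and the training and quantization arguments are illustrative:

    # Sketch: fine-tune and quantize a YOLOv11 nano model with ultralytics.
    from ultralytics import YOLO

    model = YOLO("yolo11n.pt")                        # pretrained nano checkpoint
    model.train(data="makkah_landmarks.yaml",         # hypothetical dataset config
                epochs=100, imgsz=640)
    metrics = model.val()                             # precision/recall/mAP report
    model.export(format="tflite", int8=True)          # quantized export for embedded use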

Author 1: Kaznah Alshammari

Keywords: YOLOv11; object detection; Makkah landmark

PDF

Paper 95: Automated DoS Penetration Testing Using Deep Q Learning Network-Quantile Regression Deep Q Learning Network Algorithms

Abstract: Penetration testing is essential to determine the security level of a network. A penetration test attack path simulates an attack to identify vulnerabilities, reduce likely losses, and continuously enhance security. It facilitates the simulation of different attack scenarios, supports the development of robust security measures, and enables proactive risk assessment. We combined MulVAL with the DQN and QR-DQN algorithms to address the incorrect route prediction and problematic convergence associated with attack path planning training. This approach generates an attack tree, searches for paths within the attack graph, and uses a depth-first search method to create a transfer matrix. The QR-DQN and DQN algorithms then determine the optimal attack path for the target system. The results of this study show that although the QR-DQN algorithm requires more resources and takes longer to train than the traditional DQN algorithm, it is effective in identifying vulnerabilities and optimizing attack paths.
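
The value-update idea underlying DQN/QR-DQN attack-path planning can be illustrated with a tabular Q-learning toy on a hand-made transfer matrix; the graph, rewards, and hyperparameters below are invented for illustration and are not the paper's setup.

```python
# Toy tabular Q-learning over a hand-made attack-graph transfer matrix.
# The paper uses DQN/QR-DQN; this stand-in only shows the value-update idea.
import numpy as np

T = np.array([[0, 1, 1, 0, 0, 0],      # T[s, a] = 1: host a reachable from host s
              [0, 0, 1, 1, 0, 0],
              [0, 0, 0, 1, 1, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0, 0]])
GOAL = 5                               # hypothetical target host
Q = np.zeros_like(T, dtype=float)
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != GOAL:
        valid = np.flatnonzero(T[s])
        a = rng.choice(valid) if rng.random() < eps else valid[np.argmax(Q[s, valid])]
        r = 10.0 if a == GOAL else -1.0                 # reward for reaching the goal
        Q[s, a] += alpha * (r + gamma * Q[a].max() - Q[s, a])
        s = a

path, s = [0], 0
while s != GOAL:                                        # greedy rollout of learned policy
    valid = np.flatnonzero(T[s])
    s = int(valid[np.argmax(Q[s, valid])])
    path.append(s)
print("optimal attack path:", path)
```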

Author 1: Mariam Alhamed
Author 2: M M Hafizur Rahman

Keywords: DQN; QR-DQN; MulVAL; DFS; penetration testing; DoS

PDF

Paper 96: Capacity Analysis of MIMO Channels Under High SNR Using Nakagami-q Fading Distribution

Abstract: This study explores the capacity of multiple-input multiple-output (MIMO) wireless channels under high signal-to-noise ratio (SNR) conditions, incorporating Nakagami-q fading distribution alongside Rayleigh and Rician fading models. The main objective is to develop an analytical framework that accurately models MIMO channel capacity under high-SNR conditions using Nakagami-q fading and compares its performance with conventional fading models. By employing a robust wireless channel modeling approach, the study examines the impact of various antenna configurations on system performance. The derived framework assesses how different fading conditions affect capacity, showing that MIMO systems effectively mitigate multipath effects. The results reveal that channel capacity improves with an increasing number of antennas and favorable fading parameters, emphasizing the significance of antenna configurations in enhancing performance. The comparative analysis highlights substantial differences in capacity across fading models, offering critical insights to optimize next-generation wireless channel modeling in diverse environments.
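
For context, the high-SNR analysis builds on the standard ergodic MIMO capacity expression (textbook results, not formulas quoted from the paper):

```latex
% Ergodic capacity of an N_t x N_r MIMO channel with matrix H and SNR \rho:
C \;=\; \mathbb{E}\!\left[\log_2 \det\!\left(\mathbf{I}_{N_r} + \frac{\rho}{N_t}\,\mathbf{H}\mathbf{H}^{\mathsf{H}}\right)\right]
% At high SNR the capacity scales with the spatial multiplexing gain:
C \;\approx\; \min(N_t, N_r)\,\log_2 \rho + O(1), \qquad \rho \to \infty
```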

Author 1: Syeda Anika Tasnim
Author 2: Md. Mazid-Ul-Haque
Author 3: Md. Sajid Bin Faisal
Author 4: Rakin Sad Aftab

Keywords: MIMO systems; Nakagami-q; high-SNR capacity; antenna configurations; wireless channel modeling

PDF

Paper 97: Integrating BDI Cognitive Intelligence in IIoT: A Framework for Advanced Decision-Making in Manufacturing and Policy Development

Abstract: This paper presents an innovative system framework that integrates multiple domains—Smart Cities, Underwater Environments, and Healthcare—using advanced Data Analytics Platforms enhanced by BDI (Belief-Desire-Intention) cognitive intelligence. Current data analytics systems, while capable of collecting and processing large amounts of data, exhibit significant gaps in intelligent decision-making, particularly in dynamic and context-sensitive environments. By leveraging the BDI model, which mimics human cognitive processes through beliefs, desires, and intentions, the system offers a context-aware, adaptive approach to decision-making that outperforms traditional AI-based analytics by enabling dynamic, goal-driven responses to real-time data in IIoT environments. The system is designed to dynamically respond to real-time data collected from IoT-enabled devices and actuators, improving efficiency, safety, and adaptability. The proposed framework addresses the limitations of existing platforms by incorporating the latest technology and techniques for proactive, intelligent decision-making. The qualitative analysis of the proposed model shows promising results, particularly in its ability to respond to rapid environmental changes, highlighting its potential for transformative applications in urban management, marine conservation, and healthcare delivery.
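
A minimal deliberation cycle of a BDI agent can be sketched as follows; the percept and plan structures are illustrative placeholders rather than the paper's IIoT framework.

```python
# A minimal sketch of a BDI (Belief-Desire-Intention) deliberation cycle;
# the goal/plan structures here are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)     # current world model
    desires: list = field(default_factory=list)     # candidate goals
    intentions: list = field(default_factory=list)  # committed goals

    def perceive(self, percepts: dict):
        self.beliefs.update(percepts)               # belief revision

    def deliberate(self):
        # commit to desires whose preconditions hold under current beliefs
        self.intentions = [g for g in self.desires if g["when"](self.beliefs)]

    def act(self):
        for goal in self.intentions:
            goal["plan"](self.beliefs)              # execute the attached plan

agent = BDIAgent(desires=[{
    "when": lambda b: b.get("temperature", 0) > 80,
    "plan": lambda b: print("throttle machine, alert operator"),
}])
agent.perceive({"temperature": 92})                 # one sense-deliberate-act cycle
agent.deliberate()
agent.act()
```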

Author 1: Ammar Ahmed E. Elhadi

Keywords: BDI cognitive intelligence; IIoT; smart manufacturing; decision-making; adaptive systems

PDF

Paper 98: The Impact of Cybersecurity Through Knowledge Sharing Practices: Limitations, Analysis of Current Trends and Future Research Directions

Abstract: This research examines Saudi Arabian cybersecurity knowledge-sharing programs during the country’s digital transformation under Vision 2030, combining literature reviews with expert specialist insights to analyze current information transfer practices among cybersecurity professionals. The analysis shows how technological developments, along with organizational and cultural factors, shape these practices. The researchers found that successful knowledge sharing is limited by cultural obstacles such as resistance to openness, lack of trust, hierarchical structures, division within organizations, insufficient workflow systems, and outdated technological capabilities. Through analysis of knowledge-sharing programs established by the National Cybersecurity Authority (NCA), Saudi Aramco, and the King Abdulaziz City for Science and Technology (KACST), the researchers show that strategic programs improve national cybersecurity readiness. The research provides actionable advice that combines the design of a national security plan and secure technology funding with cross-sector mentorship initiatives, integrated incident reporting, educational programs, and performance-driven reward systems for motivation. The research offers combined theory- and practice-oriented guidance that helps Saudi Arabia’s policymakers, organizations, and cybersecurity practitioners build effective strategies as they establish a leadership position in collaborative cybersecurity practices internationally.

Author 1: Moneer Alshaikh
Author 2: Sajid Mehmood
Author 3: Rashid Amin
Author 4: Faisal S. Alsubaei

Keywords: Cybersecurity; knowledge sharing; Saudi Arabia; Vision 2030; digital transformation; cybersecurity education; cyber threats; cybersecurity framework; cultural barriers; National Cybersecurity Authority (NCA)

PDF

Paper 99: Exploring the Synergy Between Digital Twin Technology and Artificial Intelligence: A Comprehensive Survey

Abstract: The integration of Digital Twin Technology with Artificial Intelligence (AI) represents a transformative advancement across multiple domains. Digital twins are dynamic, real-time virtual representations of physical systems, leveraging technologies such as Internet of Things (IoT), augmented and virtual reality (AR/VR), big data analytics, 3D modeling, and cloud computing. Initially conceptualized by Michael Grieves in 2003 and further developed by organizations such as NASA, digital twins have been widely adopted in manufacturing, healthcare, smart cities, and energy systems. This paper provides a comprehensive analysis of how real-time data streams, continuous feedback loops, and predictive analytics within digital twins enhance AI capabilities, enabling anomaly detection, predictive maintenance, and data-driven decision-making. Additionally, the study examines technical and operational challenges, including data integration, sensor accuracy, cybersecurity, and computational overhead. By evaluating current methodologies and identifying future research directions, this survey underscores the potential of digital twins to drive adaptive, intelligent, and resilient systems in an increasingly data-driven world.

Author 1: Wael Y. Alghamdi
Author 2: Rayan M. Alshamrani
Author 3: Ruba K. Aloufi
Author 4: Shaikhah O. Ba Lhamar
Author 5: Retaj A. Altwirqi
Author 6: Fatimah S. Alotaibi
Author 7: Shahad M. Althobaiti
Author 8: Hadeel M. Altalhi
Author 9: Shatha A. Alshamrani
Author 10: Atouf S Alazwari

Keywords: Digital twin; artificial intelligence; internet of things; big data; predictive analytics; real-time monitoring

PDF

Paper 100: Improved Monte Carlo Localization for Agricultural Mobile Robots with the Normal Distributions Transform

Abstract: Localization is crucial for robots to navigate autonomously in agricultural environments. This paper introduces an improved Adaptive Monte Carlo Localization (AMCL) algorithm integrated with the Normal Distributions Transform (NDT) to address the challenges of navigation in agricultural fields. 2D Light Detection and Ranging (LiDAR) measures distances to surrounding objects using laser light, and captures distance data in a single horizontal plane, making it ideal for detecting obstacles and field features such as trees and crop rows. While conventional AMCL has been studied for indoor environments, there is a lack of research on its application in outdoor agricultural settings, particularly when using 2D LiDAR. The proposed method enhances localization accuracy by applying the NDT after the conventional AMCL estimation, refining the pose estimate through a more detailed alignment of the 2D LiDAR data with the map. Simulations conducted in a palm oil plantation environment demonstrate a 53% reduction in absolute pose error and a 50% reduction in relative position error compared to conventional AMCL. This highlights the potential of the AMCL-NDT approach with 2D LiDAR for cost-effective and scalable deployment in precision agriculture.
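
The AMCL-then-NDT refinement can be illustrated schematically: bin map points into cells with Gaussian statistics, then score candidate poses near the AMCL estimate against them. The grid search below stands in for the usual Newton-style NDT optimization and is an assumption, not the authors' implementation.

```python
# Illustrative 2D NDT scoring used to refine a coarse AMCL pose estimate.
import numpy as np

def build_ndt(map_points, cell=1.0):
    """Bin map points into cells; keep per-cell mean and covariance."""
    cells = {}
    for p in map_points:                               # map_points: (N, 2) array
        cells.setdefault(tuple((p // cell).astype(int)), []).append(p)
    stats = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= 3:                              # enough points for a covariance
            stats[key] = (pts.mean(0), np.cov(pts.T) + 1e-3 * np.eye(2))
    return stats

def ndt_score(scan, pose, stats, cell=1.0):
    """Sum of Gaussian likelihoods of transformed scan points."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    score = 0.0
    for q in scan @ R.T + np.array([x, y]):
        s = stats.get(tuple((q // cell).astype(int)))
        if s is not None:
            mu, cov = s
            d = q - mu
            score += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    return score

def refine(amcl_pose, scan, stats, step=(0.1, 0.1, 0.02)):
    """Grid search around the AMCL pose (stand-in for NDT's Newton steps)."""
    best, best_score = amcl_pose, ndt_score(scan, amcl_pose, stats)
    for dx in (-step[0], 0, step[0]):
        for dy in (-step[1], 0, step[1]):
            for dth in (-step[2], 0, step[2]):
                cand = (amcl_pose[0] + dx, amcl_pose[1] + dy, amcl_pose[2] + dth)
                sc = ndt_score(scan, cand, stats)
                if sc > best_score:
                    best, best_score = cand, sc
    return best
```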

Author 1: Brian Lai Lap Hong
Author 2: Mohd Azri Bin Mohd Izhar
Author 3: Norulhusna Binti Ahmad

Keywords: Adaptive monte carlo localization; normal distributions transform; pose estimation; precision agriculture; agricultural robotics; outdoor localization

PDF

Paper 101: Improving Satellite Flood Image Classification Using Attention-Based CNN and Transformer Models

Abstract: Floods are among the most frequent and devastating natural disasters, significantly impacting infrastructure, ecosystems, and human communities. Accurate satellite-based flood image classification is crucial for assessing flood-affected regions and supporting emergency response efforts. This study uses Convolutional Neural Networks (CNNs) and transformer-based architectures to enhance flood classification, integrating the Convolutional Block Attention Module (CBAM) to improve feature extraction. Using the xView2 xBD dataset, we classify houses as completely or partially surrounded by floodwater. Experimental evaluations demonstrate that ResNet101v2 achieved an accuracy of 86.87%, while a hybrid CNN model (MobileNetV2-DenseNet201) attained 85.83%, further improving to 89.54% with CBAM. The Vision Transformer (ViT) with CBAM achieved the highest accuracy of 90.75%, showcasing the effectiveness of attention-based hybrid models for flood image classification. These results highlight the potential of integrating CBAM with deep learning architectures to enhance classification accuracy and improve flood impact assessment.
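
The CBAM module used here has a well-known compact form in PyTorch, following the original CBAM formulation; the hyperparameters below are typical defaults, not values reported by the paper.

```python
# Compact CBAM (channel + spatial attention) in PyTorch.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(                     # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel, padding=kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))            # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))             # global max pooling branch
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)     # channel attention
        sp = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(sp))           # spatial attention
```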

Author 1: Sanket S Kulkarni
Author 2: Ansuman Mahapatra

Keywords: CNN; DenseNet; ResNet101v2; VGG16; hybrid CNN model; CBAM; vision transformer; xView2 Building Damage (xBD)

PDF

Paper 102: Deep Learning-Based Behavior Analysis in Basketball Video: A Spatiotemporal Approach

Abstract: Video-based sports movement analysis technologies have significant practical applications; digital video footage, human-computer interaction, and related technologies can greatly improve the effectiveness of sports training. This research examines players’ technical proficiency in basketball contest footage and proposes a behaviour assessment technique based on deep learning and attention mechanisms. First, we develop an approach for seamlessly extracting the marking lines of the basketball arena and stadium. Then, the most significant frames of the footage are selected using a spatial and temporal ranking technique. Next, we design a behaviour comprehension and prediction technique by implementing an autoencoder design. The results of the analysis can be sent to instructors and data scientists instantly to support them in determining their strategies and professional decisions. An extensive dataset of basketball videos is used to test the proposed method. The outcomes demonstrate that the recommended attention mechanism-based strategy competently recognises the movement of individuals in video while attaining substantial behavioural assessment efficiency.

Author 1: Jingyi Wang

Keywords: Basketball; player movement analysis; player technique analysis; deep learning; attention mechanism

PDF

Paper 103: Enhancing Agile Requirements Change Management: Integrating LLMs with Fuzzy Best-Worst Method for Decision Support

Abstract: Agile Requirements Change Management (ARCM) in Global Software Development (GSD) poses significant challenges due to the dynamic nature of project requirements and the complexities of distributed team coordination. One approach used to mitigate these challenges and ensure efficient collaboration is the identification and prioritization of success factors. Traditional Multi-Criteria Decision-Making methods, such as the Best-Worst Method (BWM), have been employed successfully to prioritize success factors. However, these methods often fail to capture the inherent uncertainties of decision-making in a GSD setting. To address this limitation, this study integrates Large Language Models (LLMs) with the Fuzzy Best-Worst Method (FBWM) to enhance prioritization accuracy and decision support. We propose a model for comparing the prioritization outcomes of human expert assessments and LLM-generated decisions to evaluate the consistency and effectiveness of machine-generated decisions relative to those made by human experts. The findings indicate that the LLM-driven FBWM exhibits high reliability in mirroring expert judgments, demonstrating the potential of LLMs to support strategic decision-making in ARCM. This study contributes to the evolving landscape of AI-driven project management by providing empirical evidence of LLMs’ utility in improving ARCM for GSD.
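
For intuition, the crisp linear variant of the Best-Worst Method can be solved as a small linear program; the fuzzy extension (FBWM) used in the study adds fuzzy comparison values on top of this idea. The comparison vectors below are made up for illustration.

```python
# Crisp linear Best-Worst Method as an LP (SciPy); the study uses the fuzzy
# extension (FBWM) on top of this idea. Comparison vectors are made up.
import numpy as np
from scipy.optimize import linprog

a_B = np.array([1, 2, 4, 8])      # best-to-others (criterion 0 assumed best)
a_W = np.array([8, 4, 2, 1])      # others-to-worst (criterion 3 assumed worst)
n, best, worst = len(a_B), 0, 3

c = np.zeros(n + 1); c[-1] = 1.0  # variables: w_0..w_{n-1}, xi; minimize xi
A_ub, b_ub = [], []
for j in range(n):
    for sign in (1.0, -1.0):
        row = np.zeros(n + 1)     # |w_best - a_B[j] * w_j| <= xi
        row[best] += sign; row[j] += -sign * a_B[j]; row[-1] = -1.0
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(n + 1)     # |w_j - a_W[j] * w_worst| <= xi
        row[j] += sign; row[worst] += -sign * a_W[j]; row[-1] = -1.0
        A_ub.append(row); b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=[np.append(np.ones(n), 0.0)], b_eq=[1.0],  # weights sum to 1
              bounds=[(0, None)] * (n + 1))
print("weights:", res.x[:n].round(4), "consistency xi:", round(res.x[-1], 4))
```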

Author 1: Bushra Aljohani
Author 2: Abdulmajeed Aljuhani
Author 3: Tawfeeq Alsanoosy

Keywords: Fuzzy Best-Worst Method; Large Language Models; Agile Requirements Change Management; Global Software Development

PDF

Paper 104: Detection of Wheat Pest and Disease in Complex Backgrounds Based on Improved YOLOv8 Model

Abstract: Detecting wheat diseases and pests, particularly those characterized by small targets amidst complex background interference, presents a significant challenge in agricultural re-search. To address this issue and achieve precise and efficient detection, we propose an enhanced version of YOLOv8, termed MGT-YOLO, which incorporates multi-scale edge enhancement and visual remote dependency mechanisms. Our methodology begins with the creation of a comprehensive dataset, WheatData, comprising 2393 high-resolution images capturing various wheat diseases and pests across different growth stages in diverse agricultural settings. To improve the detection of small targets, we implemented a multi-scale edge amplification technique within the backbone network of YOLOv8, enhancing its ability to capture minute details of wheat diseases and pests. Furthermore, we introduced the C2f GlobalContext module in the neck network, which integrates global contextual relationships and facilitates the fusion of features from small-sized objects by leveraging remote dependencies in visual imagery. Additionally, we incorporated a Vision Transformer module into the neck network to enhance the processing efficiency of small-scale disease and pest features. The proposed MGT-YOLO network was rigorously evaluated on the WheatData dataset. The results demonstrated significant improvements, with mAP@0.5 values of 90.0% for powdery mildew and 65.5% for smut disease, surpassing the baseline YOLOv8 by 5.3% and 6.8%, respectively. The overall mAP@0.5 reached 89.5%, representing a 2.0% improvement over YOLOv8 and outperforming other state-of-the-art detection methods. These findings suggest that MGT-YOLO is a promising solution for real-time detection of agricultural diseases and pests, offering enhanced accuracy and efficiency in complex agricultural environments.

Author 1: Dandan Zhong
Author 2: Penglin Wang
Author 3: Jie Shen
Author 4: Dongxu Zhang

Keywords: Wheat disease and pest; YOLOv8; edge amplification; visual remote dependency; global context; vision transformer

PDF

Paper 105: MEXT: A Parameter-Free Oversampling Approach for Multi-Class Imbalanced Datasets

Abstract: Machine learning classifiers face significant challenges when confronted with class-imbalanced datasets, particularly in multi-class scenarios. The inherent skewness in class distributions often leads to biased model predictions, with classifiers struggling to accurately identify instances from underrepresented classes. This paper introduces MEXT, a novel parameter-free oversampling technique specifically designed for multi-class imbalanced datasets. Unlike conventional approaches that often rely on the one-against-all strategy and require manual parameter tuning for each class, MEXT addresses these limitations by simultaneously balancing all classes. By leveraging anomalous score analysis, MEXT automatically determines optimal locations for synthesizing new instances of minority classes, eliminating the need for manual parameter selection. The technique aims to achieve a balanced class distribution where each class has an equal number of instances. To evaluate MEXT’s effectiveness, extensive experiments were conducted on a collection of multi-class datasets from the UCI repository. The proposed MEXT algorithm was evaluated against a suite of state-of-the-art SMOTE-based oversampling techniques, including SMOTE, ADASYN, Safe-Level SMOTE, MDO, and DSRBF. All comparative algorithms were implemented within the one-against-all framework. Hyperparameter optimization for each algorithm was performed using grid search. An automated machine learning pipeline was employed to identify the optimal classifier-hyperparameter combination for each dataset and oversampling technique. The Wilcoxon signed-rank test was subsequently utilized to statistically assess the performance of MEXT relative to the other oversampling techniques. The results demonstrate that MEXT consistently outperforms the other methods in terms of average ranking of key evaluation metrics, including macro-precision, macro-recall, F1-measure, and G-mean, indicating its superior ability to address multi-class imbalanced learning problems.
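
Since MEXT's exact scoring is not spelled out in the abstract, the following sketch only illustrates the family of ideas: score minority instances by an anomaly proxy (here, mean distance to same-class neighbours, an assumption) and synthesize SMOTE-style interpolations anchored at the least anomalous points until all classes reach the majority size. This is not a reproduction of MEXT.

```python
# Generic anomaly-score-guided oversampling sketch for multi-class data.
# Assumes each class has at least a few samples; not the MEXT algorithm.
import numpy as np
from collections import Counter

def balance(X, y, k=5, rng=np.random.default_rng(0)):
    target = max(Counter(y).values())              # every class grows to majority size
    X_out, y_out = [X], [y]
    for cls, count in Counter(y).items():
        Xc = X[y == cls]
        if count >= target:
            continue
        d = np.linalg.norm(Xc[:, None] - Xc[None, :], axis=2)
        scores = np.sort(d, axis=1)[:, 1:k + 1].mean(1)   # anomaly proxy (assumption)
        safe = np.argsort(scores)                  # least anomalous instances first
        need = target - count
        anchors = Xc[safe[np.arange(need) % len(safe)]]
        mates = Xc[rng.integers(0, len(Xc), need)]
        lam = rng.random((need, 1))
        X_out.append(anchors + lam * (mates - anchors))   # SMOTE-style interpolation
        y_out.append(np.full(need, cls))
    return np.vstack(X_out), np.concatenate(y_out)
```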

Author 1: Chittima Chiamanusorn
Author 2: Krung Sinapiromsaran

Keywords: Class imbalance; classification; extreme anomalous; multiclass; oversampling; parameter-free

PDF

Paper 106: Genetic Algorithm-Driven Cover Set Scheduling for Longevity in Wireless Sensor Networks

Abstract: This paper develops an efficient scheduling approach based on Genetic Algorithms to optimize energy consumption and maximize the operational lifetime of Wireless Sensor Networks (WSNs). Effective energy management is crucial for prolonging the operational lifespan of WSNs that include a substantial number of sensors. Simultaneously activating all sensors results in fast depletion of energy, thus diminishing the overall lifespan of the network. To address this issue, sensor activity must be scheduled effectively. This task, known as the maximum coverage set scheduling (MCSS) problem, is highly complex and has been demonstrated to be NP-hard. This article presents a customized genetic algorithm designed to tackle the MCSS problem, aiming to improve the longevity of WSNs. Our methodology effectively detects and enhances combinations of coverage sets and their corresponding schedules. The algorithm incorporates key criteria such as the detection ranges of individual sensors, their energy levels, and activity durations to optimize the overall energy efficiency and operational sustainability of the network. The performance of the proposed algorithm is assessed through simulations and compared to that of the Greedy algorithm and the Pattern Search algorithm. The results indicate that our genetic algorithm not only maximizes network lifetime but also enhances the efficiency and efficacy of solving the MCSS problem. This represents a significant improvement in managing energy consumption in WSNs.
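
A toy version of GA-based cover-set scheduling might look as follows: a chromosome assigns each sensor to one of K cover sets, and fitness counts the sets that fully cover all targets (a proxy for network lifetime, since valid sets can be activated in turn). The coverage matrix and GA settings are invented, not the paper's.

```python
# Illustrative GA for the cover-set scheduling idea (toy problem instance).
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_targets, K = 20, 8, 4
covers = rng.random((n_sensors, n_targets)) < 0.3   # covers[i, t]: sensor i sees target t

def fitness(chrom):
    # number of cover sets in which every target is seen by some sensor
    return sum(covers[chrom == k].any(0).all() for k in range(K))

pop = rng.integers(0, K, (40, n_sensors))           # chromosome: sensor -> cover set
for gen in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-20:]]         # truncation selection
    cuts = rng.integers(1, n_sensors, 20)
    kids = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 20][c:]])
                     for i, c in enumerate(cuts)])  # one-point crossover
    mut = rng.random(kids.shape) < 0.02
    kids[mut] = rng.integers(0, K, mut.sum())       # point mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("disjoint full-coverage sets found:", fitness(best))
```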

Author 1: Ibtissam Larhlimi
Author 2: Mansour Lmkaiti
Author 3: Maryem Lachgar
Author 4: Hicham Ouchitachen
Author 5: Anouar Darif
Author 6: Hicham Mouncif

Keywords: Maximum network lifetime; wireless sensor network; coverage; sets scheduling; genetic algorithm; pattern search algorithm

PDF

Paper 107: A Cross-Layer Framework for Optimizing Energy Efficiency in Wireless Sensor Networks: Design, Implementation, and Future Directions

Abstract: Environmental monitoring, healthcare, and industrial automation are among the numerous modern applications in which Wireless Sensor Networks (WSNs) are becoming increasingly indispensable. Despite this, the scalability and endurance of these networks are still significantly impeded by the energy constraints of sensor nodes. This study proposes a novel cross-layer framework that dynamically optimizes energy consumption across the entire communication hierarchy by integrating the Application, Network, Data Link, and Physical layers to address this issue. The framework introduces significant innovations, including an adaptive Low-Traffic Aware Hybrid Medium Access Control (LTH-MAC) protocol that is intended to adjust transmission schedules in response to real-time traffic conditions, and energy-aware routing algorithms that consider both node energy levels and network topology when determining the most energy-efficient communication paths. The framework exhibits substantial enhancements in energy efficiency, reaching a reduction in energy consumption of up to 43%, as evidenced by extensive simulations conducted with OPNET. Furthermore, the network lifetime is extended by 8%, and transmission is improved by 10% compared to conventional statically defined layered architectures. These findings underscore the potential of the proposed cross-layer framework to not only improve overall network performance but also reduce energy consumption, thereby guaranteeing sustainable and efficient operation in resource-constrained environments. Additionally, the solution’s scalability renders it suitable for a diverse array of WSN applications, providing a promising solution for overcoming the constraints of energy and establishing the foundation for more durable and efficient sensor networks. This study establishes the foundation for future research on adaptive, cross-layer protocols that can further enhance energy-efficient communication in WSNs.

Author 1: Sami Mohammed Alenezi

Keywords: Wireless sensor network; cross-layer; energy efficient; performance; OPNET

PDF

Paper 108: A Novel Paradigm for Parameter Optimization of Hydraulic Fracturing Using Machine Learning and Large Language Model

Abstract: Hydraulic fracturing is a common practice in the oil and gas industry meant to increase the production of oil and natural gas. In this process, appropriate fracturing design parameters are important to maximize the efficiency of fracture propagation. However, conventional fracturing parameter design methods often rely on expert experience or fail to take into account complex geological conditions, resulting in suboptimal parameter design schemes. Therefore, this paper presents PPOHyFrac, a novel paradigm for optimizing hydraulic fracturing parameters with a large language model and machine learning, which aims to automatically extract, assess, and optimize fracturing parameters. PPOHyFrac uses an advanced large language model to extract key parameters from hundreds of fracturing design documents, and then refines the extracted data using statistical methods such as missing value imputation and feature normalization. In addition, correlation analysis techniques are used to identify key influencing factors, and machine learning methods are then applied to optimize and predict them. This paper also presents a comparative study of five machine learning methods. Experiments show that random forest is the best choice for parameter optimization and can improve the prediction and optimization accuracy of key parameters.

Author 1: Chunxi Yang
Author 2: Chuanyou Xu
Author 3: Yue Ma
Author 4: Bang Qu
Author 5: Yiquan Liang
Author 6: Yajun Xu
Author 7: Lei Xiao
Author 8: Zhimin Sheng
Author 9: Zhenghao Fan
Author 10: Xin Zhang

Keywords: Hydraulic fracturing; parameter optimization; large language model; machine learning

PDF

Paper 109: The Optimization Design of the Pattern Matrix Based on EXIT Chart for PDMA Systems

Abstract: The maximum function node degree of the pattern matrix (PM) dominates the detection complexity of the belief propagation algorithm for pattern division multiple access (PDMA) systems. This work proposes a method to search for the optimal PM ensemble for a PDMA system under constrained detection complexity. The issue is converted into finding the optimal variable node (VN) degree distribution (DD) of the PM with the function node DD concentrated. Utilizing extrinsic information transfer (EXIT) chart techniques, the DD of a PM with an overload rate of 150% is obtained, and the corresponding PM is constructed using the progressive edge growth (PEG) algorithm. The performance of this PDMA system is evaluated and compared with systems of the same overload rate in the literature to verify the effectiveness of the proposed method. Furthermore, for iterative detection and decoding (IDD), the concatenated LDPC code is optimized to enhance the overall performance. EXIT analysis and Monte Carlo simulations confirm that the designed pattern matrix outperforms other pattern matrices by about 2.3 dB in bit error rate when both schemes employ the same LDPC code, and by 0.2 dB when using the respective optimized codes.

Author 1: Hanqing Ding
Author 2: Jiaxue Li
Author 3: Jin Xu

Keywords: PM optimization; EXIT chart; PDMA system

PDF

Paper 110: Vulnerability Testing of RESTful APIs Against Application Layer DDoS Attacks

Abstract: In recent years, modern mobile and web applications have been shifting from monolithic architectures to microservice-based architectures because of issues such as scalability and ease of maintenance. These services are exposed to clients through Application Programming Interfaces (APIs), which are built, integrated, and deployed quickly. Because APIs interact directly with backend servers, their security is paramount. Denial-of-service (DoS) attacks are among the most serious attacks, denying service to legitimate requests. Rate-limiting policies are used to stop API DoS attacks, but bypassing rate limits or mounting flooding attacks can still overload the backend server. A sophisticated attack using HTTP/2 multiplexing with multiple clients can lead to severe disruption of service. This research shows how a sophisticated multi-client attack on a high-workload endpoint leads to a DoS condition.

Author 1: Sivakumar K
Author 2: Santhi Thilagam P

Keywords: DDoS; rate-limiting; HTTP/1.1; HTTP/2; API; micro service; multiplexing; security; DoS; security testing

PDF

Paper 111: Adaptive Sine-Cosine Optimization Technique for Stability and Domain of Attraction Analysis

Abstract: In the last few years, researchers have concentrated on estimating and maximizing the Domain of Attraction of autonomous nonlinear systems. Based on Lyapunov theory, the approach proposed in this paper aims to give an accurate estimation of the Domain of Attraction with high performance compared to existing conventional methods. The Adaptive Sine-Cosine Algorithm is considered one of the most advanced metaheuristics: it combines broad exploration with a strong local search and provides high-quality convergence behaviour. This paper uses the benefits of the Adaptive Sine-Cosine Algorithm to develop a flexible method for estimating the Domain of Attraction through oriented sampling, guaranteeing the largest sublevel set of the given Lyapunov function. The approach is applied to benchmark examples, validating its efficiency and its ability to provide performant results.
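
The basic Sine-Cosine Algorithm update rule at the heart of the method is compact; the adaptive variant and the Lyapunov sublevel-set objective from the paper are not reproduced here, and the sphere function below is a placeholder objective.

```python
# Basic Sine-Cosine Algorithm loop (the standard SCA update rule).
import numpy as np

def sca(obj, dim=2, agents=30, iters=200, lo=-5.0, hi=5.0, a=2.0):
    rng = np.random.default_rng(0)
    X = rng.uniform(lo, hi, (agents, dim))
    best = X[np.argmin([obj(x) for x in X])].copy()
    for t in range(iters):
        r1 = a - t * a / iters                     # shrinks: exploration -> exploitation
        for i in range(agents):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.random(dim)
            step = r1 * np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
            X[i] = np.clip(X[i] + step * np.abs(r3 * best - X[i]), lo, hi)
        cand = X[np.argmin([obj(x) for x in X])]
        if obj(cand) < obj(best):
            best = cand.copy()
    return best

print(sca(lambda x: (x ** 2).sum()))               # converges near the origin
```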

Author 1: Messaoud Aloui
Author 2: Faical Hamidi
Author 3: Mohammed Aoun
Author 4: Houssem Jerbi

Keywords: Domain of Attraction; nonlinear autonomous systems; Lyapunov function; Lyapunov’s theory; stability; optimization; Adaptive Sine-Cosine Algorithm

PDF

Paper 112: SSFed: Statistical Significance Aggregation Algorithm in Federated Learning

Abstract: Federated learning enables collaborative model training across multiple clients without sharing raw data, with a global server aggregating the local models. Primary challenges in this setting include non-i.i.d data, which can lead to biased aggregations, and the overhead of frequent communication between clients and the server. Our approach improves state-of-the-art aggregation by adding statistical significance testing. This step assigns greater weight to client updates with higher statistical impact, and only statistically significant updates are included in the global model. The process begins with each client training a local model on its dataset. Clients then send the trained parameters to the server, where statistical significance testing is applied by calculating z-scores for each parameter. Updates with z-scores below a set threshold are included, with each update weighted based on its significance. SSFed achieves a final accuracy of 88.71% in just 20 rounds, outperforming baseline algorithms and yielding an average improvement of 25% over traditional federated learning methods. This demonstrates faster convergence and stronger performance, especially under highly non-i.i.d client data distributions. Our SSFed implementation is available on GitHub.
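
The aggregation step can be sketched directly from the description above: compute per-parameter z-scores across client updates, keep coordinates whose |z| falls below the threshold, and weight the rest by significance. The inverse-|z| weighting chosen here is an assumption, since the paper's exact weighting is not given in the abstract.

```python
# Sketch of z-score-based federated aggregation in the spirit of SSFed.
import numpy as np

def ssfed_aggregate(client_updates, z_thresh=2.0, eps=1e-8):
    U = np.stack(client_updates)                   # shape: (clients, params)
    z = (U - U.mean(0)) / (U.std(0) + eps)         # per-coordinate z-scores
    keep = np.abs(z) < z_thresh                    # exclude outlying coordinates
    w = np.where(keep, 1.0 / (1.0 + np.abs(z)), 0.0)   # assumed significance weighting
    return (w * U).sum(0) / np.maximum(w.sum(0), eps)  # weighted average per parameter

# toy usage: five clients, ten model parameters each
updates = [np.random.default_rng(i).normal(0, 1, 10) for i in range(5)]
global_update = ssfed_aggregate(updates)
```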

Author 1: Yousef Alsenani

Keywords: Federated learning; non-i.i.d data; model aggregation; privacy-preserving AI; federated optimization; decentralized learning; data heterogeneity; distributed machine learning

PDF

Paper 113: Image-Based Air Quality Estimation Using Convolutional Neural Network Optimized by Genetic Algorithms: A Multi-Dataset Approach

Abstract: Air pollution poses significant threats to human health and the environment, making effective monitoring increasingly essential. Traditional methods using fixed monitoring stations have challenges related to high costs and limited coverage. This paper proposes a new approach using convolutional neural networks with genetic algorithms for estimating air quality directly from images. The convolutional neural network is optimized using genetic algorithms, which dynamically tune hyperparameters such as learning rate, batch size, and momentum to improve performance and generalizability across diverse environmental conditions. Our approach improves performance and reduces the risk of overfitting, thus ensuring balanced and robust results. To mitigate overfitting, we implemented dropout layers, batch normalization, and early stopping, significantly enhancing the model’s generalization capability. Specifically, three different open-access datasets were combined into a single training dataset, capturing extensive temporal, spatial, and environmental variability. Extensive testing of the model performance was conducted with a broad set of metrics, including precision, recall, and F1 score. The results demonstrate that our model not only achieves high accuracy but also maintains well-balanced performance across all metrics, ensuring robust classification of different air quality levels. For instance, the model achieved a precision of 0.97, a recall of 0.97, and an overall accuracy of 95.44%, significantly outperforming baseline methods in all metrics. These improvements underscore the effectiveness of Genetic Algorithms in optimizing the model.
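
GA-driven hyperparameter tuning of the kind described typically follows this skeleton; `train_and_score` is a placeholder for the actual CNN training and validation loop, and the search space values are illustrative assumptions.

```python
# Skeleton of GA-based hyperparameter search (learning rate, batch, momentum).
import random

SPACE = {"lr": [1e-4, 3e-4, 1e-3, 3e-3], "batch": [16, 32, 64], "momentum": [0.8, 0.9, 0.99]}

def train_and_score(cfg):            # hypothetical: returns validation accuracy
    return random.random()           # replace with the real CNN train/validate loop

def evolve(pop_size=10, gens=5, mut_rate=0.2):
    pop = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=train_and_score, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = {k: random.choice([a[k], b[k]]) for k in SPACE}   # uniform crossover
            if random.random() < mut_rate:
                k = random.choice(list(SPACE))
                child[k] = random.choice(SPACE[k])                    # point mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=train_and_score)

print(evolve())
```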

Author 1: Arshad Ali Khan
Author 2: Mazlina Abdul Majid
Author 3: Abdulhalim Dandoush

Keywords: Convolutional neural network; Genetic Algorithm; air quality estimation; image processing

PDF

Paper 114: Analyzing Consumer Decision-Making in Digital Environments Using Random Forest Algorithm and Statistical Methods

Abstract: In an era characterized by the rapid digital transformation of the marketplace, understanding consumer behavior is essential for effective decision-making and the development of marketing strategies. This study investigates the impact of demographic attributes such as age, income, education, and lifestyle preferences, alongside social media engagement, on the consumer decision-making process in the Al-Qassim region of Saudi Arabia. A survey was distributed, gathering responses from 684 participants. The study specifically tests the hypotheses that demographic factors significantly influence each stage of the decision-making journey: problem recognition, information search, evaluation of alternatives, purchase decision, and post-purchase behavior, with social media engagement acting as a mediating factor in these stages. By utilizing management information systems to analyze this comprehensive dataset, a Random Forest Classifier was employed, achieving an overall accuracy of 88% and revealing significant correlations between demographic characteristics and consumer behavior. The model demonstrated particularly strong performance in the Evaluation of Alternatives stage, with a precision of 0.90 and a recall of 0.95. Additionally, the findings underscore the critical role of social media engagement in enhancing consumer awareness and influencing purchasing decisions. This study provides actionable insights for marketers in the Al-Qassim region, equipping them with the necessary tools to optimize their strategies in the rapidly evolving digital landscape, ultimately improving consumer satisfaction and fostering long-term loyalty.
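
A minimal scikit-learn pipeline of the kind described (demographic features predicting a decision-making stage) might look like this; the column names and data are invented placeholders, not the study's survey.

```python
# Minimal Random Forest pipeline over survey-style features (toy data).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.DataFrame({
    "age": [22, 35, 41, 29, 50, 33], "income": [3, 5, 7, 4, 8, 5],
    "education": [1, 2, 3, 2, 3, 1], "sm_engagement": [0.9, 0.4, 0.2, 0.8, 0.1, 0.6],
    "stage": ["search", "purchase", "post", "search", "post", "evaluate"],
})
X_tr, X_te, y_tr, y_te = train_test_split(
    df.drop(columns="stage"), df["stage"], test_size=0.33, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), zero_division=0))
```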

Author 1: Hussain Mohammad Abu-Dalbouh
Author 2: Mushira Mustafa Freihat
Author 3: Rayah Ismaeel Jawarneh
Author 4: Mohammed Abdalwahab Mohammed Salim
Author 5: Sulaiman Abdullah Alateyah

Keywords: Consumer behavior; demographics marketing strategies; data analysis; digital transformation

PDF

Paper 115: A Comparative Evaluation of Ontology Learning Techniques in the Context of the Qur’an

Abstract: Ontology Learning refers to the automatic or semi-automatic process of creating ontologies by extracting terms, concepts, and relationships from text written in natural languages. This process is essential, as manually building ontologies is time-consuming and labour-intensive. The Qur'an, a vast source of knowledge for Muslims, presents linguistic and cultural complexities, with many words carrying multiple meanings depending on context. Ontologies offer a structured way to represent this knowledge, linking concepts systematically. Although various ontologies have been developed from the Qur'an for purposes such as advanced querying and analysis, most rely on manual creation methods. Few studies have examined the use of Ontology Learning for Qur'anic ontologies. Thus, this study evaluates three Ontology Learning techniques: Named Entity Recognition (NER), statistical methods, and Qur'anic patterns. NER aims to find names represented as entities, statistical techniques aim to find frequently occurring words, and pattern-based techniques aim to identify complex relationships and multi-word expressions. The Ontology Learning techniques were evaluated based on precision, recall, and F-measure to assess extraction accuracy. The NER technique achieved an average precision of 0.62, statistical methods of 0.45, and pattern-based techniques of 0.58, indicating the strengths and weaknesses of each approach for extracting relevant terms as concepts, instances, or relations. This indicates that improvements or enhancements to the existing techniques are necessary for more accurate results. Future work will focus on refining or adapting patterns based on the structure of the Qur'an translation using LLMs.
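
The evaluation metrics used in the study reduce to a small helper computing precision, recall, and F-measure of extracted terms against a gold set; the example term sets are invented.

```python
# Precision/recall/F-measure of extracted terms against a gold-standard set.
def prf(extracted: set, gold: set):
    tp = len(extracted & gold)                         # correctly extracted terms
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

print(prf({"hajj", "pilgrim", "mecca", "fasting"}, {"hajj", "pilgrim", "zakat"}))
```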

Author 1: Rohana Ismail
Author 2: Mokhairi Makhtar
Author 3: Hasni Hasan
Author 4: Nurnadiah Zamri
Author 5: Azilawati Azizan

Keywords: Ontology learning; Qur’an; NER; statistical; pattern-Based; hajj

PDF

Paper 116: Design of a Rural Tourism Satisfaction Monitoring System Based on the Improved INFO Algorithm

Abstract: The increasing influx of tourists to scenic areas has raised significant security concerns, often surpassing the management capacity of these locations. Despite the growing need for effective solutions, many regions have not yet developed strategies to address these issues. This study aims to enhance rural tourist satisfaction monitoring systems to better manage tourist flows and improve security. The research explores rural tourist satisfaction, which has significant potential for large-scale monitoring due to its self-expanding nature. The paper discusses the critical role of tourist satisfaction within scenic areas, particularly focusing on tourist tracking systems. It also introduces key features and positioning algorithms used for monitoring satisfaction. A new collaborative positioning approach, based on subnetwork fusion, is proposed to address the limitations of traditional non-line-of-sight INFO positioning algorithms. The proposed subnetwork fusion method outperforms the traditional INFO algorithm, with a 39.7% reduction in localization error when more than 130 nodes are used. Furthermore, when anchor nodes exceed 10%, the DPeNet algorithm achieves an average precision value of 0.768, surpassing the 0.75 threshold due to its enhanced multi-channel convolution and downsampling structure, which optimally utilizes the deep features of small-sized targets. This paper introduces an innovative collaborative positioning strategy for rural tourist satisfaction monitoring, overcoming existing algorithm limitations and enhancing localization accuracy in real-time tourist management systems. The findings contribute to improving both tourist experience and safety in rural scenic areas, offering a scalable solution for broader applications in tourist destinations.

Author 1: Meihua Qiao

Keywords: Enhanced INFO algorithm; rural tourism satisfaction; tourist monitoring system design; collaborative positioning methodology

PDF

Paper 117: Development of Cybersecurity Awareness Model Based on Protection Motivation Theory (PMT) for Digital IR 4.0 in Malaysia

Abstract: This study aims to examine the complex interplay among perceived threat severity, perceived threat vulnerability, fear, perceived response efficacy, perceived self-efficacy, and response cost using Partial Least Squares Structural Equation Modelling (PLS-SEM) via SmartPLS 4.0, grounded in the Protection Motivation Theory (PMT). The analysis is situated within the context of cyber security and information security in Industrial Revolution 4.0 (IR 4.0) environments, where interconnected systems are increasingly exposed to cyber threats. Both measurement and structural model assessments were performed, revealing strong indicator loadings, high Cronbach’s alpha, composite reliability (CR), and adequate average variance extracted (AVE), confirming the model’s reliability and validity. The Fornell-Larcker criterion and heterotrait-monotrait (HTMT) ratio confirmed discriminant validity, while variance inflation factor (VIF) values under 5 and an R² value of 0.554 indicated no collinearity issues and moderate explanatory power in the structural model. Findings demonstrate that perceived threat severity and vulnerability significantly increased fear, which mediated the threat perception-protection motivation relationship, emphasising the role of emotional responses in decision-making. Coping appraisal components, namely perceived response efficacy and self-efficacy, were strong positive predictors of protection motivation, while response cost negatively influenced protective behaviour intentions. Although intrusion detection systems are essential in mitigating cyber risks, this study highlights the equally critical behavioural component of cyber defence. The outcomes underscore the value of PMT in modelling security behaviour, offering theoretical and practical implications for behavioural interventions, public health strategies, and policy design in IR 4.0 domains. These insights contribute to strengthening cybersecurity and information security culture across digitally-driven industries.
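
Of the reliability statistics reported, Cronbach's alpha is straightforward to reproduce from item scores with the standard formula; the data below are random placeholders, not the study's survey responses.

```python
# Cronbach's alpha from a (respondents x items) score matrix (standard formula).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                                 # number of items in the scale
    item_vars = items.var(axis=0, ddof=1).sum()        # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)          # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

scores = np.random.default_rng(0).integers(1, 6, (100, 5)).astype(float)  # 5-point Likert
print(round(cronbach_alpha(scores), 3))
```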

Author 1: Siti Fatiha Abd Latif
Author 2: Noor Suhana Sulaiman
Author 3: Nur Sukinah Abd Aziz
Author 4: Azliza Yacob
Author 5: Akhyari Nasir

Keywords: Cyber security; information security; intrusion detection; IR 4.0; PLS SEM

PDF
