The Science and Information (SAI) Organization
IJACSA Volume 15 Issue 6

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: Integrating Advanced Language Models and Vector Database for Enhanced AI Query Retrieval in Web Development

Abstract: In the dynamic field of web development, the integration of sophisticated AI technologies for query processing has become increasingly crucial. This paper presents a framework that significantly improves the relevance of web query responses by leveraging cutting-edge technologies like Hugging Face, FAISS, Google PaLM, Gemini, and LangChain. We explore and compare the performance of both PaLM and Gemini, two powerful LLMs, to identify strengths and weaknesses in the context of web development query retrieval. Our approach capitalizes on the synergistic combination of these freely accessible tools, ultimately leading to a more efficient and user-friendly query processing system.
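The retrieval step this abstract relies on — embedding documents and matching a query against a vector index — can be sketched with plain cosine similarity; FAISS performs the same nearest-neighbor search at scale. The toy 3-dimensional vectors below stand in for real model embeddings and are not taken from the paper:

```python
import numpy as np

def retrieve(query_vec, doc_vecs, doc_texts, k=2):
    """Return the k documents whose embeddings are closest to the query
    by cosine similarity (the role a FAISS index plays at scale)."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                      # cosine similarity per document
    top = np.argsort(-sims)[:k]      # indices of the best matches
    return [doc_texts[i] for i in top]

# Toy 3-dimensional "embeddings" standing in for model output.
docs = ["css grid layout", "flask routing", "react hooks"]
vecs = np.array([[1.0, 0.1, 0.0],
                 [0.0, 1.0, 0.1],
                 [0.1, 0.0, 1.0]])
query = np.array([0.0, 0.9, 0.2])    # closest to the second document
print(retrieve(query, vecs, docs, k=1))
```

In a retrieval-augmented pipeline, the texts returned here would be passed to the LLM as context for answer generation.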

Author 1: Xiaoli Huan
Author 2: Hong Zhou

Keywords: LLM (Large Language Model); vector databases; retrieval-augmented generation

PDF

Paper 2: Designing a Conversational Agent for Education using a Personality-based Approach

Abstract: Conversational agents (CAs) for education are dialog systems that can interact with students intelligently. They are gaining popularity because of their potential benefits for education. However, very little research has focused on personality-based educational CA design. Therefore, we designed and built a high-fidelity educational CA prototype with four personality dimensions via Juji. This personality-based UX design supports interaction between the CA and diverse users with eight personality styles across four dimensions. During the analysis and design phase, we extracted keywords, attributes, distinctive behaviors, and interaction expectations to streamline the literal description of personalities into concrete design guidelines applicable to the prototype. The guidelines specify interaction features, user expectations, and potential behaviors or actions that should be avoided. Based on these guidelines, we further developed four personality-based design logics in this integrated prototype. This work provides design guidelines for future personality-based educational CA design. Moreover, the design is among the first to provide four personality dimensions of design logic in one integrated prototype to better serve students. It sheds light on the future development of human-centered, personality-based AI design in industry at a time when most chatbots are still developing rapidly.

Author 1: Jieyu Wang
Author 2: Jim Q. Chen
Author 3: Dingfang Kang
Author 4: Susantha Herath
Author 5: Abdullah AbuHussein

Keywords: Conversational agent/chatbot; personality-based UX design; human-centered AI

PDF

Paper 3: A Quantitative Study on Real-Time Police Patrol Route Optimization using Dynamic Hotspot Allocation

Abstract: A quantitative study on the optimization of police patrol routes in real time using dynamic hotspot allocation is presented in this article. Ensuring public safety necessitates addressing the difficulties law enforcement agencies encounter in optimizing patrol routes with limited resources. In dynamic environments, static patrol route planning and traditional random routing are inadequate. To prevent crime, this study proposes using big data analysis to pinpoint crime hotspots and create the most effective patrol routes. Our approach, paired with the Random Forest algorithm, predicts crime-prone areas by combining 911 incident response data and crime datasets, allowing for the efficient use of police resources and successful preventive measures. A greedy algorithm steers patrol units toward the best routes, maximizing their presence close to hotspots. In addition, a Hamiltonian path is dynamically constructed based on updated hotspots and emergency call nodes. While the spatial selection technique addresses the limitations of randomized exploration, effective policing remains pivotal for societal well-being and economic development. Advances in technology empower decision-makers with real-time data on criminal activity, enabling resource-efficient strategies within budgetary constraints. Effective communication with the public is also crucial, as security affects many aspects of society, including investment decisions. Hence, cutting-edge approaches are essential for informed decision-making and maintaining public safety.
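The routing step described here can be sketched as a greedy nearest-hotspot heuristic. The coordinates below are hypothetical, standing in for hotspots a Random Forest risk model might output:

```python
import math

def greedy_patrol_route(start, hotspots):
    """Visit every predicted hotspot by always moving to the nearest
    unvisited one -- the greedy heuristic the abstract describes."""
    route, current = [start], start
    remaining = list(hotspots)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

# Hypothetical hotspot coordinates on a city grid.
route = greedy_patrol_route((0, 0), [(5, 5), (1, 1), (2, 0)])
print(route)
```

Greedy routing is fast but not optimal; the paper's Hamiltonian-path construction addresses the ordering of all nodes rather than only the locally nearest one.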

Author 1: Rakesh Ramakrishnan
Author 2: Soumithri Chilakamarri
Author 3: Roopalatha Mangalseth Budda
Author 4: Ashik Dawood Mohammed Anifa

Keywords: Route optimization; redesigning police patrol; data-driven strategies; novel patrol routing; random forest; real-time crime prediction; crime data; 911 incident response; hamilton path

PDF

Paper 4: Operator Machine Augmentation Resource Framework

Abstract: The growing number of people gathering in public spaces and the massive incidents that have occurred in recent years raise questions about public safety and security. This paper illustrates the technical implementation of the Operator Machine Augmentation Resource (OMAR) framework, which integrates advanced technologies, including a computer vision model and CCTV operator training techniques, to address the limitations of traditional surveillance systems. The OMAR framework enhances the productivity of surveillance systems by facilitating operators' tasks and improving their performance. The framework's components, including alert triggers, a computer vision model, and human training, work together to create better output and a more effective system that improves the quality of security and reduces human effort. Although the OMAR framework represents a potentially significant step forward in surveillance security systems, it remains a theoretical model requiring further investigation and rigorous testing. Future work will focus on evaluating the effectiveness of the OMAR framework through an empirical study, examining its impact on various aspects of human performance and adaptation.

Author 1: Mohammed Ameen
Author 2: Richard Stone
Author 3: Majed Hariri
Author 4: Faisal Binzagr

Keywords: Crowd monitoring; public security; Operator Machine Augmentation Resource (OMAR) framework; CCTV operator; surveillance system; crowd monitoring systems

PDF

Paper 5: Word-Pattern: Enhancement of Usability and Security of User-Chosen Recognition Textual Password

Abstract: Knowledge-based authentication systems, especially textual passwords, are the most common methods used to verify users' identity. However, periodic changes in password complexity requirements exacerbate humans' limitations in remembering hard passwords over time. Therefore, a novel authentication method called Word Pattern Recognition Textual Password (WPRTP) was proposed to overcome these issues. WPRTP is based on drawing a pattern on a grid under specific security requirements to balance usability and security. This paper compares WPRTP with a recall passphrase to explore its potential for enhancing user experience, usability, and security. Fifty-four users evaluated the efficiency of WPRTP on memorability, registration time, and login time. The results indicated that WPRTP is significantly more memorable over long-term periods, with a 100% success rate, and required less registration time (29 seconds for WPRTP versus 122 seconds for the recall passphrase). Additionally, WPRTP users demonstrated faster login times (20 seconds for WPRTP versus 42 seconds for the recall passphrase). Thus, WPRTP is a potential alternative to conventional authentication methods. Future work will focus on systematically managing and reducing users' tendency to depend on familiar, repetitive patterns that create weak passwords.
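The general idea of deriving a textual secret from a pattern drawn on a word grid can be sketched in a few lines. The grid words, cell coordinates, and joining rule below are invented for illustration only and are not the paper's actual WPRTP scheme:

```python
def pattern_password(grid, pattern):
    """Join the words under the cells the user traces: the secret is the
    drawn pattern, while the words make the result memorable."""
    return "-".join(grid[r][c] for r, c in pattern)

# Hypothetical 3x3 word grid and an L-shaped trace down the first column.
grid = [["sun", "map", "ice"],
        ["oak", "red", "fog"],
        ["cat", "sky", "jam"]]
print(pattern_password(grid, [(0, 0), (1, 0), (2, 0), (2, 1)]))
# -> "sun-oak-cat-sky"
```

At login, recognizing the grid and re-tracing the pattern reproduces the same password, which is why recognition-based schemes tend to outperform pure recall on memorability.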

Author 1: Hassan Wasfi
Author 2: Richard Stone
Author 3: Ulrike Genschel

Keywords: Authentication; password; passphrase; recognition; recall; pattern; usability; security

PDF

Paper 6: Revolutionizing Campus Communication: NLP-Powered University Chatbots

Abstract: Artificial intelligence (AI) based chatbots leverage programmed software instructions to simulate human speech and user interaction. These versatile tools can be employed in various domains, from managing smart home devices to serving as personal virtual assistants. They can also respond to common queries and make information easier to access. In response to this need, we developed a specialized chatbot tailored for the academic environment by training an NLP model to answer frequently asked questions (FAQs) without the need to search through the university website. The main goal is to optimize user engagement and streamline information retrieval within a university setting. By employing ML and NLP techniques, we enhance the chatbot's capabilities, enabling it to provide effective and precise answers and contributing to a more seamless and efficient experience for users seeking information about the university. The study discusses the pivotal decision between implementing a custom neural network and the BERT model. Through a comparative analysis, the custom neural network emerges as the preferred solution, displaying efficiency, quick deployment, and superior accuracy in handling task-specific queries. While BERT offers unparalleled versatility in natural language processing, its resource-intensive pre-training and challenges in adapting to the intricacies of the university-specific dataset limit its efficiency in this application. This research emphasizes the importance of customization to meet the unique demands of a university chatbot, providing valuable insights for developers seeking to strike a balance between efficiency and specialization in similar applications.
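A minimal FAQ-matching loop illustrates the retrieval idea. The paper trains a neural NLP model; this sketch substitutes simple bag-of-words cosine similarity, with a hypothetical two-entry FAQ:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b[t] for t in a)
    return num / (math.sqrt(sum(v * v for v in a.values())) *
                  math.sqrt(sum(v * v for v in b.values())) or 1.0)

def answer(query, faq):
    """Pick the FAQ entry whose bag-of-words vector is closest to the query."""
    qv = Counter(query.lower().split())
    best = max(faq, key=lambda q: cosine(qv, Counter(q.lower().split())))
    return faq[best]

faq = {"how do i apply for admission": "See the admissions portal.",
       "where is the library": "The library is in Building B."}
print(answer("library where", faq))
```

A trained intent classifier replaces the `cosine`/`max` step in a real deployment, handling paraphrases that literal word overlap misses.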

Author 1: Ritu Ramakrishnan
Author 2: Priyanka Thangamuthu
Author 3: Austin Nguyen
Author 4: Jinzhu Gao

Keywords: Artificial intelligence; natural language processing; chatbot; machine learning; recommender systems; neural network; BERT

PDF

Paper 7: Capability Assessment Framework for Artificial Intelligence and Blockchain Adoption in Public Sector of United Arab Emirates (UAE)

Abstract: This is an ongoing study that aims to develop a maturity model for the efficient deployment of Artificial Intelligence (AI) and Blockchain (BC) in the United Arab Emirates (UAE) public sector. Organizations would leverage this maturity model to assess their efficacy in deploying AI and BC technologies in their operations, highlighting their capabilities for successful integration of these technologies while underscoring shortcomings and directing attention toward areas of improvement. To achieve this aim, a conceptual framework is first proposed, which acts as the primary frame of reference for conducting empirical research and developing the maturity model. This study presents that conceptual framework, which highlights the essential dimensions and factors that should be assessed and enhanced for successful implementation of AI and BC technologies. The framework also introduces five stages of maturity/development to mark the progress of each dimension. The conceptual framework is a 4×5 grid that presents four dimensions vertically and five stages of maturity horizontally. Strategy & Governance, Technology, People, and Process are the dimensions of the framework, whereas Initial, Developed, Defined, Managed, and Optimized are the stages of maturity.

Author 1: Ahmad Mofleh Al Graibeh
Author 2: Saba Khan
Author 3: Salah Al-Majeed
Author 4: Shujun Zhang

Keywords: Conceptual framework; Artificial Intelligence; Blockchain; maturity model

PDF

Paper 8: Utilizing Machine Learning Techniques to Assess Technical Document Quality

Abstract: Information is disseminated through images in newspapers, periodicals, the internet, and academic journals. With the aid of tools such as Adobe, GIMP, and Corel Draw, distinguishing between an original image and a forgery has become increasingly challenging. Most conventional methods rely on hand-crafted features for detecting image forgery. Image verification plays a crucial role in securing and ensuring the authenticity of individuals' identities in sensitive documents. This research proposes a machine learning approach, combining a Support Vector Machine (SVM) with the Histogram of Oriented Gradients (HOG), to identify images and confirm their authenticity. HOG is employed to extract diverse features, including matching, image size, and dimensions, for image verification. The training and testing phases are carried out using an SVM. The proposed image verification technique is evaluated on extensive datasets to ascertain image recognition accuracy, alongside metrics such as specificity, sensitivity, and precision. Comparative analysis with existing techniques reveals that the average image verification accuracy of the proposed method stands at 98%, surpassing previous image verification methods.
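The HOG side of such a pipeline can be sketched as a gradient-orientation histogram; real HOG adds cell/block structure and block normalization, and the SVM classification stage is omitted here. The test image is synthetic:

```python
import numpy as np

def orientation_histogram(img, bins=9):
    """Crude HOG-style descriptor: gradient magnitudes accumulated into
    orientation bins over 0-180 degrees."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                         # gradient strength
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() or 1.0)              # normalize to sum 1

img = np.tile(np.arange(8.0), (8, 1))  # pure horizontal intensity ramp
h = orientation_histogram(img)
print(h.argmax())  # all gradient energy falls in the 0-degree bin
```

Descriptors like `h` would then be fed to an SVM trained on genuine and forged examples.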

Author 1: Muhammad Junaid Iqbal
Author 2: Fabio Massimo Zanzotto
Author 3: Usman Nawaz

Keywords: Image verification; machine learning; ensemble approach; multi-feature image recognition

PDF

Paper 9: Evaluating the Effect on Heart Rate Variability of Adults Exposed to Radio-Frequency Electromagnetic Fields in Modern Office Environment

Abstract: The objective of the study was to investigate whether heart rate variability (HRV) is an appropriate method to describe potential effects of RF-EMF on humans considering a modern office environment radiation level with the frequencies 1.8 GHz (DECT) and 2.45 GHz (Wi-Fi) and an exposure time of 10 min. The emitters were 1 m distant from the test subjects. The HRV parameters SDNN, RMSSD, LF and HF were recorded from 60 adults in three runs, totaling up to 154 recordings. Effects were evident for the parameter SDNN. In two runs, HRV changed from control to exposure phase, in one run from exposure phase to control. The cofactors smoking, coffee consumption, and the use of strong medications did not modulate EMF effects. HRV seems to be suitable to detect effects of radio-frequency electromagnetic fields on humans under certain conditions. In the future, prolonged exposure and new frequencies (5G) should be included in order to provide a better description of RF-EMF effects in modern office environments.
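The time-domain HRV parameters named above have standard definitions computable directly from RR intervals; the interval values below are illustrative, not study data (LF and HF would require a frequency-domain analysis):

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """SDNN: standard deviation of RR intervals. RMSSD: root mean square
    of successive RR differences. Both in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

# Five hypothetical RR intervals (ms) from an ECG recording.
sdnn, rmssd = hrv_time_domain([800, 810, 790, 805, 795])
print(round(sdnn, 2), round(rmssd, 2))  # -> 7.91 14.36
```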

Author 1: Sanda Dale
Author 2: Romulus Reiz
Author 3: Sorin Popa
Author 4: Andreea Ardelean-Dale
Author 5: Julian Keller
Author 6: Jens Uwe Geier

Keywords: Radio frequency electromagnetic fields; heart rate variability; office environment; Wi-Fi; DECT

PDF

Paper 10: Can Semi-Supervised Learning Improve Prediction of Deep Learning Model Resource Consumption?

Abstract: As computational demands for deep learning models escalate, accurately predicting training characteristics like training time and memory usage has become crucial. These predictions are essential for optimal hardware resource allocation. Traditional performance prediction methods primarily rely on supervised learning paradigms. Our novel approach, TraPPM (Training characteristics Performance Predictive Model), combines the strengths of unsupervised and supervised learning to enhance prediction accuracy. We use an unsupervised Graph Neural Network (GNN) to extract complex graph representations from unlabeled deep learning architectures. These representations are then integrated with a sophisticated, supervised GNN-based performance regressor. Our hybrid model excels in predicting training characteristics with greater precision. Through empirical evaluation using the Mean Absolute Percentage Error (MAPE) metric, TraPPM demonstrates notable efficacy. The model achieves a MAPE of 9.51% for predicting training step duration and 4.92% for memory usage estimation. These results affirm TraPPM’s enhanced predictive accuracy, significantly surpassing traditional supervised prediction methods. Code and data are available at: https://github.com/karthickai/trappm
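The evaluation metric is straightforward to reproduce; the predicted and measured values below are hypothetical, not TraPPM outputs:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, the metric TraPPM is evaluated with."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical step-time measurements (ms) vs. model predictions.
print(round(mape([100, 200, 400], [110, 190, 420]), 2))  # -> 6.67
```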

Author 1: Karthick Panner Selvam
Author 2: Mats Brorsson

Keywords: Performance model; deep learning; Graph neural network

PDF

Paper 11: PhyGame: An Interactive and Gamified Learning Support System for Secondary Physics Education

Abstract: With the rapid development of affordable digital technology, digital transformation is progressing in different sectors of society. Education is no exception; online education in particular has been spreading widely since the coronavirus pandemic. While online education enables individuals to overcome the constraints associated with traditional offline formats (e.g. flexibility regarding time and place), it also poses several challenges. Particularly in STEM subjects that require hands-on experience, there are limits to what online education can offer. Therefore, online education platforms for such subjects should be developed with the goal of replicating offline hands-on experience as much as possible. It has been reported that many learners lose their motivation and drop out of online courses, and previous research has shown that virtual hands-on experiments are vital for enhancing learners' motivation. Taking these factors into consideration, we have developed a system called PhyGame for secondary-level physics education using interactive elements and gamification. In an evaluation by 44 secondary-level students, the system proved to be an effective platform for learning physics with enjoyment while maintaining a high level of student motivation and engagement.

Author 1: Toshiki Katanosaka
Author 2: M. Fahim Ferdous Khan
Author 3: Ken Sakamura

Keywords: Gamification; interactive learning; online education; engagement; STEM

PDF

Paper 12: Modified SFWBP Framework for Vocal Teaching Quality Evaluation Based on the MEREC Technique

Abstract: With the gradual improvement of living standards, people's pursuit of art is constantly increasing. Vocal music is not only an important course in the training of music majors but also an important factor in improving personal qualities and expanding one's abilities. Many factors affect the quality of vocal teaching, among which the teacher-student factor is one of the most important; how to enhance its role in improving teaching quality has become one of the main directions for the development of vocal teaching. Vocal teaching quality evaluation can be regarded as a multiple-attribute group decision-making (MAGDM) problem. Spherical fuzzy sets (SFSs) can portray the uncertainty and fuzziness in vocal teaching quality evaluation more effectively and deeply. In this paper, based on bidirectional projection, we propose the spherical fuzzy bidirectional projection (SFBP) technique and the spherical fuzzy weighted bidirectional projection (SFWBP) technique. First, the definition of SFSs is introduced. Then, the SFBP and SFWBP techniques with SFSs are developed based on bidirectional projection. Building on the SFWBP technique, the MAGDM procedure is organized and all computing steps are detailed. Finally, a numerical example of vocal teaching quality evaluation is employed to verify the SFWBP technique, and comparisons are made to verify its advantages.

Author 1: Lei Huang

Keywords: Multiple-attribute group decision-making; Spherical fuzzy sets (SFSs); MEREC; bidirectional projection technique; vocal teaching quality evaluation

PDF

Paper 13: Advanced IoT Techniques for Detecting Water Leaks in Supply Networks with LoRaWAN

Abstract: Water leaks are a common problem when water flows through pipes, causing significant losses of this valuable resource. Our solution uses the Internet of Things (IoT) to address these losses. We employ LoRaWAN (Long Range Wide Area Network) technology to collect data from sensors, allowing real-time monitoring of pipelines and the detection of leaks and bursts as soon as they occur. Our goal is to contribute to the preservation of available water resources. We propose non-destructive ultrasonic level sensors to mitigate this issue, thereby avoiding water supply interruptions. These sensors are easy to install and maintain, with a cost that is affordable compared to other existing solutions. Our work aims to gather as much information as possible from water pipelines to ensure rapid leak detection. By using IoT and the LoRaWAN communication protocol, we automate the management of water supply facilities, enhancing efficiency and reducing wastage of this precious resource. We achieved satisfactory results using this solution on our test water pipe.
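The leak-flagging logic such a sensor network enables can be sketched as a simple balance check between two monitoring points. Note the paper uses ultrasonic level sensors, so the flow values and threshold here are purely illustrative:

```python
def detect_leak(upstream_lph, downstream_lph, tolerance=0.05):
    """Flag a leak when downstream flow falls short of upstream flow by
    more than the tolerance fraction -- a minimal balance check of the
    kind continuous pipeline monitoring allows."""
    loss = upstream_lph - downstream_lph
    return loss / upstream_lph > tolerance

print(detect_leak(1000.0, 990.0))  # 1% loss, within tolerance -> False
print(detect_leak(1000.0, 900.0))  # 10% loss -> True
```

In a LoRaWAN deployment, each sensor node would transmit its reading to a gateway, and a check like this would run server-side on each pair of consecutive readings.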

Author 1: Essouabni Mohammed
Author 2: El Mhamdi Jamal
Author 3: Jilbab Abdelilah

Keywords: Internet of things; LoRaWAN; leak detection; pipeline monitoring; ultrasonic liquid level sensor

PDF

Paper 14: The Utilization of a Multi-Layer Perceptron Model for Estimation of the Heating Load

Abstract: The growing significance of energy-efficient building management techniques has led to research that combines precise heating demand predictions with sophisticated optimization algorithms. This research seeks a comprehensive solution to enhance building energy efficiency, addressing the growing concern for sustainability and responsible resource use in contemporary research and practice. In this research endeavor, the complex topic of energy optimization within the complex domain of heating, ventilation, and air conditioning (HVAC) systems is being tackled with a combination of creative problem-solving techniques and thorough examination. The significance of accurate heating load forecasts for raising HVAC system efficiency and cutting expenses is emphasized in this study. It introduces innovative methods by combining two advanced optimization algorithms, the Artificial Hummingbird Algorithm (AHA) and the Improved Arithmetic Optimization Algorithm (IAOA), with the Multi-Layer Perceptron (MLP) model. The main objective is to improve heating load forecast accuracy and expedite HVAC system optimization procedures. This study emphasizes how important precise heating load forecasts are to attaining energy efficiency, cost savings, and the ultimate objective of encouraging environmental sustainability in building management. The assessments unequivocally illustrate that the MLAH (Multi-Layer Perceptron with Artificial Hummingbird Algorithm) model in the second layer emerges as the most exceptional predictor. It attains an impressive maximum Coefficient of Determination (R2) value of 0.998 during the testing phase, reflecting a remarkable explanatory capacity and displaying remarkably low Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) values of 0.43 and 0.337, indicating minimal prediction discrepancies compared to alternative models.
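The three reported criteria (R2, RMSE, MAE) can be computed as follows; the values are toy data, not the study's results:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R2, RMSE and MAE -- the three criteria reported for the
    heating-load predictors."""
    y = np.asarray(y_true, dtype=float)
    p = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y - p) ** 2)              # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)       # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y - p) ** 2))
    mae = np.mean(np.abs(y - p))
    return r2, rmse, mae

r2, rmse, mae = regression_metrics([10, 20, 30], [11, 19, 31])
print(round(r2, 3), rmse, mae)  # -> 0.985 1.0 1.0
```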

Author 1: Ken Chen
Author 2: Wenyao Zhu

Keywords: Heating energy consumption; heating load; Multi-Layer Perceptron; Artificial Hummingbird Algorithm; Improved Arithmetic Optimization Algorithm

PDF

Paper 15: Obtaining the California Bearing Ratio Prediction via Hybrid Composition of Random Forest

Abstract: Artificial intelligence algorithms have become much more sophisticated, so the most complex and challenging problems can now be solved with them. The California Bearing Ratio (CBR) is a time-consuming testing parameter, and univariate and multivariate regression methods are used to address this challenge. The CBR value is an essential parameter in indexing the resistance provided by a structure's subterranean formation or foundation soil, and it is a crucial factor in pavement design. However, its determination in laboratory conditions can be a time-consuming process, making it necessary to look for an alternative method to estimate CBR in the soil subgrade, especially in developed soil layers. This study developed a machine learning (ML) model, Random Forest (RF), to predict the CBR. Additionally, three meta-heuristic algorithms were used to improve accuracy and optimize the prediction output: the Gold Rush Optimizer (GRO), the Stochastic Paint Optimizer (SPO), and the Electrostatic Discharge Algorithm (EDA). The results of the hybrid models were compared via several criteria to choose the best model. SPO had the most desirable performance when coupled with RF compared to the other optimizers, exhibiting high R2 and low RMSE.
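The optimizer-plus-model pattern can be sketched with a plain random search standing in for GRO/SPO/EDA. The objective below is a toy function, not an actual RF cross-validation score, and the hyperparameter names are hypothetical:

```python
import random

def random_search(objective, bounds, iters=200, seed=1):
    """Minimal stand-in for a metaheuristic: sample candidate
    hyperparameters uniformly within bounds, keep the best scorer."""
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(iters):
        cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        score = objective(cand)
        if score < best_score:
            best, best_score = cand, score
    return best, best_score

# Toy objective standing in for RF validation error; minimum at (8, 2).
obj = lambda p: (p["max_depth"] - 8) ** 2 + (p["min_split"] - 2) ** 2
best, score = random_search(obj, {"max_depth": (1, 20), "min_split": (2, 10)})
print(round(score, 2))
```

A real metaheuristic (GRO, SPO, EDA) differs in how it proposes the next candidates — guided by the population's best solutions rather than uniform sampling — but the evaluate-and-keep-best loop is the same.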

Author 1: Bensheng Wu
Author 2: Yan Zheng

Keywords: California bearing ratio; gold rush optimizer; stochastic paint optimizer; electrostatic discharge algorithm; random forest

PDF

Paper 16: Optimization of Body Pressure Relief Support Wearable Devices Integrating 3D Printing and Gait Recognition Algorithms

Abstract: To improve wearing comfort and achieve individual recognition, this study designs an ankle exoskeleton that simulates natural human movement based on the joint structure of the human lower limbs. The function of the sole spring is achieved through compression springs on the exoskeleton framework coupled with the foot, and a customized insole is designed using 3D printing technology. This study uses a gait recognition algorithm based on a convolutional gated recurrent unit fully convolutional network with a dual attention mechanism to achieve individual recognition. The results showed that compared to the natural state, when walking with exoskeletons, the integrated electromyographic signals of the gastrocnemius and tibialis anterior muscles decreased by 5.4% and 3.6%, respectively, and the intelligent insole reduced plantar pressure to a certain extent. The accuracy of the proposed gait recognition algorithm could reach 95.26%, which was 2.03% higher than that of fully convolutional networks. In addition, the fuzzy output signals of the left and right feet were combined to obtain the proportions of single support phase and double support phase during walking, which were 92.7% and 7.3%, respectively. This study indicates that a body pressure reducing support wearable device that integrates 3D printing and gait recognition algorithms can reduce lower limb joint pressure, providing a new possibility for improving wearing comfort and achieving individual recognition. It also helps to improve the quality of life for the target audience.

Author 1: Yaqiong Zhou
Author 2: Bing Hu

Keywords: 3D printing; gait recognition; body decompression support; wearing devices; electromyographic signal

PDF

Paper 17: Implementation of Improved Raft Consensus Algorithm in IoT Information Security Management

Abstract: In the context of the rapid expansion of the Internet of Things (IoT), information security management has become particularly crucial. In response to the performance bottleneck of the traditional Raft consensus algorithm, this study proposes an improved Raft algorithm that combines a density-based spatial clustering algorithm and a vote change mechanism, aiming to improve the processing efficiency and consistency of IoT systems in large-scale environments. First, a density-based spatial clustering algorithm is added to the traditional Raft algorithm to partition all consensus nodes into multiple sub-clusters. Subsequently, a vote change mechanism is introduced to optimize the leader election process. Finally, an IoT information security management model is built using the improved Raft algorithm. The results showed that the improved Raft algorithm could complete 500 client requests in just 9.5 minutes of consensus transaction time. The log replication accuracy of the management model built with this algorithm under four bandwidth conditions of 0.5 Mbps, 5 Mbps, 50 Mbps, and 500 Mbps was as high as 0.98, 0.99, 0.98, and 0.97, respectively. Therefore, the designed consensus algorithm not only has good data processing capabilities, but the model built with it also achieves good performance in practical applications.
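The majority rule at the heart of Raft's leader election, which the vote change mechanism refines, can be sketched as follows (node names are hypothetical):

```python
def elect_leader(votes, cluster_size):
    """A candidate becomes leader only with a strict majority of the
    cluster -- the core Raft election rule."""
    tally = {}
    for voter, candidate in votes.items():
        tally[candidate] = tally.get(candidate, 0) + 1
    for candidate, n in tally.items():
        if n > cluster_size // 2:
            return candidate
    return None  # split vote: a new election term is needed

# Five nodes: n1 receives 3 of 5 votes and wins.
votes = {"n1": "n1", "n2": "n1", "n3": "n1", "n4": "n5", "n5": "n5"}
print(elect_leader(votes, 5))  # -> n1
```

Split votes force a new randomized election timeout in standard Raft; a vote change mechanism, as proposed here, instead lets nodes switch their vote to converge on a leader faster.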

Author 1: Mingzhen Zhang

Keywords: Blockchain; consensus algorithm; Internet of Things; information; management; raft

PDF

Paper 18: Smart City Traffic Data Analysis and Prediction Based on Weighted K-means Clustering Algorithm

Abstract: Urban traffic congestion is becoming a more serious issue as urbanization picks up speed. This study improved the conventional K-means method to create a new traffic flow prediction algorithm that can more accurately estimate a city's traffic flow. First, the traditional K-means algorithm is extended by assigning different weights, so that traffic congestion in five urban areas of Chengdu can be analyzed by varying the weight values; on this basis, a traffic flow prediction model is further designed by combining it with Holt's exponential smoothing algorithm. The findings showed that the weighted K-means method can accurately identify the patterns of traffic congestion in Chengdu's five urban regions and that the prediction model combined with Holt's exponential smoothing algorithm had better prediction performance. Under high-traffic conditions, when the time was close to 12:00, the designed model obtained a prediction value of 9.81 pcu/h, which was consistent with the actual situation. This shows that this study not only provides new ideas and methods for traffic management in smart cities but also provides a reference for the design of traffic prediction models.
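Holt's exponential smoothing, the forecasting half of the proposed model, can be sketched in a few lines; the traffic series and smoothing constants below are illustrative, not the study's values:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, steps=1):
    """Holt's linear exponential smoothing: a level and a trend term,
    each updated recursively, forecast h steps ahead as level + h*trend."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + steps * trend

# Hypothetical traffic counts; a perfectly linear series forecasts exactly.
print(holt_forecast([4, 5, 6, 7, 8]))  # -> 9.0
```

In the paper's pipeline, the weighted K-means step would first group road segments by congestion pattern, and a smoother like this would then forecast flow within each group.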

Author 1: Lei Li

Keywords: K-means; smart cities; traffic flow; prediction; holt; weight

PDF

Paper 19: Application of Improved Deep Convolutional Neural Network Algorithm in Damaged Information Restoration

Abstract: The repair of damaged documents has practical significance in multiple fields and can help people better analyze data. This study proposes an improved algorithm model based on deep convolutional neural networks to address the poor restoration performance and insufficient restored information of current methods for restoring damaged document information. The new model improves document image classification and recognition by using deep convolutional neural networks and incorporates grayscale rules to enhance edge information restoration in the document restoration process. The results indicated that the research model could achieve good document repair results. The average accuracy of the research model was 94.2%, which was 4.6% higher than the 89.6% of other models. The average percentage error of the model was around 3.6, which was about 2.2 lower than that of other models. The model also had the lowest average root mean square error, only 4.4, which was 1.9 lower than the highest model, and its stability was the best among the compared models. Therefore, the new model has a good repair effect in document information restoration and offers useful guidance for research on damaged information restoration.

Author 1: Wenya Jia

Keywords: Damaged document information; restoration; deep convolutional neural network; grayscale rules

PDF

Paper 20: Designing the VPN with Top-Down to Improve Information Security

Abstract: This article presents a systematic review of virtual private networks (VPNs) and their contribution to improving information security, with a particular focus on the Andia Consortium. It examines how VPN technology, through its ability to provide a secure communication channel between devices, can protect organizations' valuable digital data against cyber-attacks. Various types of VPN systems, their security strategies, advantages and disadvantages, and their dependence on different protocols and standards are discussed. Additionally, tunneling technology, a key technology in VPN implementation, is explored. Through this study, we seek to identify the benefits and limitations of using VPNs to improve information security. This work aims to provide a deeper understanding of how VPNs can be designed top-down to improve information security in organizations.

Author 1: Valero Andia Billy Scott
Author 2: Sanchez Atuncar Giancarlo

Keywords: VPN; cyber-attacks; security information

PDF

Paper 21: Design of Network Attack Intrusion Detection System Based on Improved FWA Algorithm

Abstract: The increasing diversity of network attack behaviors has led to increasingly serious network security issues. Based on this, this study proposes an optimized fireworks algorithm to build an intrusion detection model. Firstly, the traditional algorithm is optimized by improving the uniformity of the initial individual distribution and designing a fitness-value update strategy, which greatly reduces the computational burden of the model and improves recognition accuracy. Then, a feature analysis detection strategy is selected and fused with the model to ensure system stability. Finally, to validate the effectiveness of the model, a comparative experimental analysis is conducted. The results showed that the average accuracy of the research model was 99.06%, with an average detection rate of 96.98%, 2.57% higher than that of the other models. The error warning rate was only 0.13%, lower than the 1.60% of the other models. In summary, the proposed intrusion detection model based on the fireworks algorithm and feature analysis can effectively identify attack behaviors and classify them correctly.

Author 1: Qingsong Chang
Author 2: Weiyan Feng
Author 3: Xingguo Wang

Keywords: Fireworks algorithm; fitness; initial cluster; characteristics; intrusion detection; network

PDF

Paper 22: Fuzzy Control-based Adaptive Adjustment of Dynamic Stiffness for Stewart Platforms

Abstract: An adaptive adjustment strategy for Stewart platform dynamic stiffness based on fuzzy control is explored in this paper. The transient response, steady-state accuracy, anti-disturbance capability, and robustness of the Stewart platform are improved remarkably. Simulation experiments and data analysis show that, compared with traditional fixed-stiffness or PID control, this fuzzy control strategy can quickly reach steady state under various operating conditions, effectively handle load mutation, parameter change, and model uncertainty, and greatly enhance the overall stability and performance of the Stewart platform. In an application example, the strategy is used in the precision machining field to optimize Stewart platform support and accurately control a high-speed machine table facing frequent fluctuations of dynamic load. The fuzzy controller takes displacement error, speed error, cutting force, and material hardness as inputs and dynamic stiffness as output, and constructs a fuzzy rule base and optimized membership functions suitable for various machining conditions. The evaluation shows that fuzzy control performs well in transient response, with response time shortened by about 30% in the face of large sudden load changes. In steady-state accuracy, displacement error is strictly controlled within ±0.05 mm and velocity error within ±0.1°/s, which is better than pure PID control. In the anti-disturbance test, fuzzy control successfully reduces the influence of random disturbance on the platform trajectory by 70%. Robustness tests show that the fuzzy controller maintains a stable control effect even when the system parameters vary by ±10%, and the system performance score is above 8.5, far superior to that of a traditional PID controller under the same conditions.
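The rule-evaluation step of a fuzzy controller like the one described can be sketched with triangular membership functions and weighted-average defuzzification. The one-input toy below maps displacement-error magnitude to a stiffness command; all breakpoints and stiffness levels are hypothetical illustrations, not the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_stiffness(disp_err):
    """Toy one-input fuzzy rule base: larger |error| -> higher stiffness.
    Membership breakpoints and stiffness outputs are illustrative only."""
    e = abs(disp_err)
    rules = [
        (tri(e, -0.05, 0.00, 0.05), 100.0),  # error small  -> low stiffness
        (tri(e,  0.00, 0.05, 0.10), 300.0),  # error medium -> medium stiffness
        (tri(e,  0.05, 0.10, 0.15), 600.0),  # error large  -> high stiffness
    ]
    num = sum(mu * out for mu, out in rules)  # weighted-average defuzzification
    den = sum(mu for mu, _ in rules)
    return num / den if den else 100.0
```

A full controller would use four inputs (displacement error, speed error, cutting force, material hardness) and a two-dimensional rule table, but the fire-and-defuzzify pattern is the same.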

Author 1: Zhiqiang Zhao
Author 2: Yuetao Liu
Author 3: Changsong Yu

Keywords: Fuzzy control; regulation methods; Stewart platform; stiffness adaptive

PDF

Paper 23: Financial Risk Prediction and Management using Machine Learning and Natural Language Processing

Abstract: With the continuous development of and changes in the global financial markets, financial risk management has become increasingly important for the stable operation of enterprises. Traditional financial risk management methods, which rely primarily on financial statement analysis and historical data statistics, show clear limitations when dealing with large-scale unstructured data. The rapid development of machine learning and Natural Language Processing (NLP) technologies in recent years offers new perspectives and methods for financial risk prediction and management. This paper explores and conducts an empirical analysis of financial risk management using these advanced technologies, with a particular focus on the application of NLP in measuring financial risk tendencies and on financial risk prediction and management based on a Deep Neural Network-Factorization Machine (DeepFM) model. Through in-depth analysis and research, this paper proposes a new financial risk management model that combines NLP and deep learning technologies, aimed at improving the accuracy and efficiency of financial risk prediction. This study not only broadens the theoretical horizons of financial risk management but also provides effective technical support and decision-making references for practical operations.

Author 1: Tianyu Li
Author 2: Xiangyu Dai

Keywords: Financial risk management; machine learning; Natural Language Processing (NLP); Deep FM model; risk prediction

PDF

Paper 24: Computer Image Encryption Technology Based on Chaotic Sequence Algorithm

Abstract: With the wide application of computer images and the popularization of network transmission, the public demand for image encryption technology is becoming more and more urgent. Privacy and data security can be effectively guaranteed through image encryption, but existing encryption techniques still suffer from problems such as high overhead and poor encryption performance. Therefore, to improve the processing efficiency of encryption technology, this study constructs a two-dimensional composite chaotic system based on an analysis of existing chaotic sequence algorithms. On this basis, a novel image encryption approach is proposed by combining the composite chaotic system with algorithmic optimization of the scrambling and diffusion stages of image encryption. According to the experimental results, the chaotic mapping performed best when the chaotic system's parameters were between 10 and 75. Under these settings, the algorithm achieved the highest encryption speed of 632 Mbit/s and decryption speed of 583 Mbit/s, the lowest resource consumption rate of 21.4%, and the lowest delay rate of 11.5%. The proposed method thus shows significant advantages in the security and effectiveness of image encryption and is capable of realizing high-quality, highly secure encryption of computer images.
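The diffusion stage of a chaos-based cipher is typically realized by XOR-ing pixel bytes with a keystream drawn from a chaotic map. The sketch below uses a single logistic map for brevity (the paper's system is a two-dimensional composite map, not shown here); the seed `x0` and parameter `r` are illustrative values:

```python
def logistic_keystream(n, x0=0.6180339887, r=3.99):
    """Generate n keystream bytes from the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(200):            # discard transient iterations
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantize chaotic state to a byte
    return bytes(out)

def chaotic_xor(data: bytes, x0=0.6180339887, r=3.99) -> bytes:
    """XOR diffusion: the same key (x0, r) both encrypts and decrypts."""
    ks = logistic_keystream(len(data), x0, r)
    return bytes(b ^ k for b, k in zip(data, ks))
```

Because XOR is its own inverse, applying `chaotic_xor` twice with the same key recovers the plaintext; a complete scheme would add a scrambling (pixel permutation) pass before diffusion.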

Author 1: Li Shen

Keywords: Chaotic sequence algorithm; image encryption; mapping effect; pixel code; security

PDF

Paper 25: Neural Network-Powered Intrusion Detection in Multi-Cloud and Fog Environments

Abstract: Cloud computing has revolutionized the technological landscape, offering a platform for resource provisioning where organizations can access computing resources, storage, applications, and services. The shared nature of these resources introduces complexities in ensuring security and privacy. With the advent of edge and fog computing alongside cloud technologies, the processing, data storage, and management paradigm faces challenges in safeguarding against potential intrusions. Attacks on fog computing, the IoT cloud, and related advancements can have pervasive and detrimental consequences. To address these concerns, various security standards and schemes have been suggested and deployed to enhance fog computing security. In particular, these security measures have become vital due to the involvement of multiple networks and the numerous fog nodes through which end-users interact. These nodes facilitate the transfer of sensitive information, amplifying privacy concerns. This paper proposes a multi-layered intermittent neural network model tailored specifically for enhancing security in fog computing, especially in proximity to end-users and IoT devices. Emphasizing the need to mitigate privacy risks inherent in extensive network connections, the model leverages a customized adaptation of the NSL-KDD dataset, a challenging dataset commonly applied to evaluate intrusion detection systems. A range of current models and feature sets are rigorously investigated to quantify the effectiveness of the proposed approach. Through comprehensive research findings and replication studies, the paper demonstrates the stability and robustness of the suggested method across the various performance metrics employed for intrusion detection. The assessment illustrates the model's superior capability in addressing privacy and security challenges in hybrid cloud environments incorporating intrusion detection systems, offering a promising solution for the evolving landscape of cloud-based computing technologies.

Author 1: Yanfeng ZHANG
Author 2: Zhe XU

Keywords: Cloud computing; fog computing; intrusion detection; privacy protection; neural network

PDF

Paper 26: Multi-Sensor Fusion and YOLOv5 Model for Automated Detection of Aircraft Cabin Door

Abstract: This study investigated perception technology for an autonomous driving system to enable independent connection between an aircraft and a boarding bridge. GigE video sensors and solid-state lidars were installed on the cabin side of the boarding bridge, and a technology that fuses the data from these two different sensors was developed and applied. Using the fused data, a technology for identifying the aircraft door was developed using YOLOv5, a deep learning-based feature extractor, which was able to identify the door after being trained on more than 10,000 frames of images under predetermined weather and time conditions. Additionally, a parallel alignment control function was applied between the aircraft body and the cabin of the boarding bridge to increase the reliability of the aircraft door identification technology based on the fused data. To achieve this, a region of interest was set within the fused data so that the distance deviation to the left and right of the cabin could be calculated. Finally, to verify the research results, tests were conducted to identify aircraft doors under various environmental conditions, with more than six airlines selected. The baseline YOLOv5 model achieved 93.5% accuracy; through this study, the detection accuracy for aircraft doors in the constrained environment was increased to over 95%.

Author 1: Ihnsik Weon
Author 2: Soon-Geul Lee

Keywords: Jet bridge; Yolo-v5; sensor fusing; segmentation; door detects; automation docking system

PDF

Paper 27: Developing a Digital Twin Model for Improved Pasture Management at Sheep Farm to Mitigate the Impact of Climate Change

Abstract: Small-scale livestock farmers experience significant losses because of decreased productivity caused by the decline in pasture production brought on by climate change. Technology in livestock farming has introduced the idea of "smart farming," which has simplified pasture management. Smart farming incorporates cutting-edge technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and data analytics. Digital twin technology is proposed in this study to alleviate the challenge of changing weather patterns that affect pasture management. A digital twin model is developed to predict pasture height, ascertain the predicted amount of pasture, and ensure that the sheep have access to enough food for sustainable production. Pasture growth is influenced by temperature, rainfall, and soil moisture; thus, pasture height predictions depend on these factors. The digital twin is made of predictive models built on historical and real-time data collected from IoT sensors and stored in the ThingSpeak® cloud. Data analysis was performed in MATLAB® using a neural network algorithm, and the system's predictions are modelled on the SIMULINK® platform. The digital twin predicted the pasture height to be 52 cm while the observed reading was 56 cm. Therefore, with a prediction error of -4 cm, the digital twin can serve to enhance pasture management through its capabilities and assist farmers in decision making.

Author 1: Ntebaleng Junia Lemphane
Author 2: Ben Kotze
Author 3: Rangith Baby Kuriakose

Keywords: Artificial intelligence; artificial neural network; climate change; digital twin; Internet of Things; machine learning; pasture management; smart farming

PDF

Paper 28: A Theoretical Framework for Temporal Graph Warehousing with Applications

Abstract: The evolution of data management systems has witnessed a paradigm shift towards dynamic and temporal representations of relationships. Graph databases, positioned as key players in managing highly-connected data with a fundamental requirement for relationship analysis, have recognized the need for incorporating temporal features. These features are crucial for capturing the temporal dynamics inherent in various applications, offering a more comprehensive understanding of relationships over time. This theoretical exploration emphasizes the importance of incorporating temporal dimensions into graph data warehousing for contemporary applications. Temporal features introduce a dynamic dimension to graph data, enabling a more nuanced understanding of relationships and patterns over time. The integration of temporal features in graph data management and analysis not only addresses the dynamic nature of contemporary applications but also contributes to enhanced modeling and analytical capabilities.

Author 1: Annie Y. H. Chou
Author 2: Frank S. C. Tseng

Keywords: Data warehousing; graph database; graph warehousing; social computing; temporal data

PDF

Paper 29: Analysis of the Entropy of the Heart Rate Signal During the Creative Process

Abstract: Among the most important cognitive behaviors, creativity is essential for the flourishing of societies and mastery of various aspects of life around us. To date, the effects of creative activities on the brain have been examined in only a few limited studies, and their effects on the autonomic nervous system have not been extensively studied. In this study, changes in the heart rate signal before and during creative activity were examined using methods based on extracting chaotic and non-linear features from the heart rate signal. In particular, this study explores the qualitative changes in entropy during creative thinking and compares them with the resting state to determine whether creative activity is progressing. Based on analyzing the heart rate signals of 52 people while performing the three activities of the Torrance creativity test and comparing them with the resting state, approximate entropy and fuzzy entropy increased with the progress of the creative process. Comparing each stage of creativity with the previous stage during each activity shows, for both types of entropy, an increase in the average value at the end of each activity; comparing each stage with the stage two minutes earlier shows consistently increasing changes in both entropies during activity 3. These entropies increase as the signal becomes more irregular and complex during the creative process. Our findings reveal significant increases in both approximate entropy and fuzzy entropy during creative activities compared to the resting state, suggesting heightened complexity and irregularity in heart rate dynamics as creativity unfolds. These results not only advance our understanding of the physiological correlates of creativity but also highlight the potential of heart rate entropy analysis as a tool for evaluating and enhancing creative abilities.
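Approximate entropy can be computed directly from its definition: ApEn(m, r) measures how often length-m templates of the signal repeat (within tolerance r) compared with length-(m+1) templates; more irregular signals score higher. A minimal pure-Python version is sketched below, using the common default tolerance r = 0.2 × SD; the paper's exact m and r parameters are not stated in the abstract:

```python
import math

def approximate_entropy(signal, m=2, r=None):
    """ApEn(m, r): regularity statistic of a time series (lower = more regular)."""
    n = len(signal)
    if r is None:
        mean = sum(signal) / n
        sd = (sum((x - mean) ** 2 for x in signal) / n) ** 0.5
        r = 0.2 * sd                       # conventional default tolerance

    def phi(m):
        # Fraction of template pairs whose Chebyshev distance is within r.
        templates = [signal[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for t1 in templates:
            count = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            total += math.log(count / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)
```

A strictly periodic signal yields ApEn near zero, while a noisy one yields a clearly larger value, which is the contrast the study exploits between resting and creative states.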

Author 1: Ning Zhu

Keywords: Heart rate signal; creative process; entropy; autonomous signals

PDF

Paper 30: Designing an Experimental Setup for Data Provenance Tracking using a Public Blockchain: A Case Study using a Water Bottling Plant

Abstract: Data provenance, in an end-to-end supply chain context, refers to tracking the origin and history of every raw material, process, packaging step, and distribution step involved in a manufacturing network. The traditional client-server architecture used in centralised systems stores data in a single location, making it vulnerable to single points of failure, data tampering, and unauthorised access. The result is a lack of data provenance and standardisation for products in a manufacturing supply chain, which leads to a lack of traceability and transparency. Therefore, this article presents the hypothesis that these challenges can be overcome by incorporating data provenance into blockchain-based smart contracts for traceability and transparency. The article first discusses data provenance traceability, focusing on cloud-based storage system architecture domains for data provenance traceability across end-to-end supply chains. It then describes the design of an experimental setup for blockchain-based data provenance traceability in a manufacturing supply chain, using a case study of a water bottling plant. Finally, it showcases and discusses the results of the experiments conducted for this purpose.

Author 1: O. L. Mokalusi
Author 2: R. B. Kuriakose
Author 3: H. J. Vermaak

Keywords: Data provenance; public blockchain; smart contracts; supply chain; smart manufacturing

PDF

Paper 31: Increasing the Accuracy of Writer Identification Based on Bee Colony Optimization Algorithm and Hybrid Deep Learning Method

Abstract: Identifying a writer's identity from offline handwriting images is one of the most important and challenging classification problems and has been the focus of many researchers in recent years. This article presents a novel approach to identifying the author of offline Persian manuscripts from scanned images based on deep convolutional neural networks. For the first time, the bee colony algorithm is used in the middle layers of a deep convolutional neural network in order to improve author-identification accuracy, optimize the parameters, and improve learning performance. The presented scenario was tested independently of the written language, in both Persian and English. The proposed method is more accurate than previous studies on the IMA dataset, with an accuracy of 97.60%. Moreover, on the Firemaker dataset, the proposed model significantly improves over existing results, achieving an accuracy of 99.71%, which is 1.78% higher than that of previous models.

Author 1: Hao Libo
Author 2: Xu Jingqi

Keywords: Optimization; bee colony algorithm; deep learning; author identity recognition; handwriting

PDF

Paper 32: An IoT Solution to Detect Overheated Idler Rollers in Belt Conveyors

Abstract: It is common knowledge that mechanical systems need oversight and maintenance procedures. There are numerous prevalent operation monitoring techniques, and in the era of IoT and predictive maintenance, multiple solutions exist to supervise these systems. This article describes the design and implementation of a low-cost system that uses an IoT approach to detect overheated idlers in conveyor belts at mining facilities. The system uses temperature sensors in coordination with heat map image sensors. Users (i.e., mining operators) can monitor overheated idlers along the whole conveyor belt, making on-demand queries through Telegram or a website, and also receiving autonomous warnings. Prototypes of this system were installed on a conveyor belt at a construction materials manufacturing company and at a copper mining company, both located in Apurimac, Peru. The usability and usefulness of the system were evaluated by 20 experts in the maintenance and operation of conveyor belts, who completed the questionnaire proposed by the Technology Acceptance Model (TAM). The results show that 91% of them consider the system useful for detecting overheated idlers in a conveyor belt, and 93% consider the solution easy to use.

Author 1: Manuel J. Ibarra-Cabrera
Author 2: Jaime Guevara Rios
Author 3: Dennis Vargas Ovalle
Author 4: Mario Aquino-Cruz
Author 5: Hugo D. Calderon-Vilca
Author 6: Sergio F. Ochoa

Keywords: IoT system; overheated idler detection; conveyor belts; mining companies; autonomous and on-demand monitoring

PDF

Paper 33: Incremental Learning for GRU and RNN-based Assamese UPoS Tagger

Abstract: This research paper introduces a novel approach to enhance the performance of Universal Part-of-Speech (UPoS) tagging for the low-resource language Assamese, employing Recurrent Neural Networks (RNNs) and Gated Recurrent Units (GRUs). The novelty of this study is its experimentation with Incremental Learning, a dynamic paradigm allowing the models to continually refine their understanding as they encounter new sets of linguistic data. The proposed model utilizes the strengths of GRUs and traditional RNNs to capture long-range sequential dependencies and contextual information within Assamese sentences. The incorporation of Incremental Learning ensures the model's adaptability to evolving linguistic patterns, which is particularly crucial for under-resourced languages like Assamese. Experimental results showcase the superiority of the proposed approach, achieving state-of-the-art accuracy in Assamese UPoS tagging. The research not only contributes to the field of natural language processing but also addresses the specific challenges posed by under-resourced languages. The significance of Incremental Learning is highlighted, showcasing its role in dynamically updating the model's knowledge base with new UPoS-tagged data. This feature proves essential in real-world scenarios where language evolves, ensuring sustained optimal performance in Assamese UPoS tagging. The paper presents the details of this innovative framework for UPoS tagging in Assamese, combining Incremental Learning with deep learning techniques and pushing the boundaries of natural language processing models for low-resource languages while exploring the importance of dynamic learning paradigms.
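The GRU at the heart of such a tagger combines a reset gate and an update gate to control how much of the previous hidden state survives each step. The scalar toy below shows the standard gate equations; in a real tagger these would be vector operations with learned weight matrices over word embeddings:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def gru_step(x, h, w):
    """One GRU update for scalar input x and hidden state h.
    w holds the six gate weights (biases omitted for brevity)."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)                 # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)                 # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde                       # gated interpolation
```

With all weights zero, both gates sit at 0.5 and the candidate is 0, so each step simply halves the hidden state; trained weights instead learn when to keep and when to overwrite context, which is what lets the tagger track long-range dependencies across a sentence.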

Author 1: Kuwali Talukdar
Author 2: Shikhar Kumar Sarma

Keywords: Assamese UPoS; PoS tagger; RNN; GRU; incremental learning

PDF

Paper 34: A Smart Construction Benefit Evaluation Method Combining C-OWA Operator and Grey Clustering

Abstract: Currently, there is a lack of effective objective quantitative methods for evaluating the benefits of smart construction. Therefore, this study proposes a comprehensive method for evaluating the benefits of smart construction. This method establishes an indicator system from the perspective of evaluation objectives, and on this basis, uses a continuous ordered weighted average operator to ensure the objectivity of indicator weight allocation. Afterwards, the grey clustering method is used to form a scoring matrix, achieving effective comprehensive quantitative evaluation. The results showed that for the selected project, the comprehensive benefit value evaluated was 8.342, indicating that the smart construction efficiency of the project had reached a good level. Meanwhile, the extensive benefits of the project showed a stepwise upward trend from 2021 to 2023. This study aims to design and apply a smart construction benefit evaluation method that integrates continuous ordered weighted average operator and grey clustering, which is practical and can provide data reference for project management of smart buildings.
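The continuous ordered weighted averaging (C-OWA) operator mentioned in the abstract aggregates expert scores by sorting them in descending order and applying position weights. A minimal sketch is given below, assuming the common combinatorial weighting w_j = C(n-1, j) / 2^(n-1); the paper's exact weighting scheme may differ:

```python
from math import comb

def c_owa(values):
    """C-OWA aggregation: order inputs descending and weight them with
    normalized binomial coefficients (assumed weighting; sums to 1)."""
    n = len(values)
    weights = [comb(n - 1, j) / 2 ** (n - 1) for j in range(n)]
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))
```

Because the binomial weights are symmetric and peak in the middle, the operator damps the influence of extreme high and low scores relative to a plain mean, which is what lends objectivity to the indicator-weight allocation.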

Author 1: Yunzhu Sun
Author 2: Yunfeng Zhang

Keywords: C-OWA; Grey system; clustering; intelligent construction; sustainability

PDF

Paper 35: Deep Learning Algorithm Research and Performance Optimization of Financial Treasury Big Data Monitoring Platform

Abstract: With the rapid development of information technology and the advent of the digital age, the management of the fiscal treasury faces unprecedented challenges and opportunities. To improve the efficiency and effectiveness of deep learning algorithms in financial treasury big data monitoring platforms, this paper studies methods for optimizing model performance. It reviews the basic concepts, methods, and applications of deep learning and their use in such platforms, where deep learning algorithms are widely applied in image recognition, natural language processing, recommendation systems, and other fields. The paper first conducts in-depth theoretical research on deep learning algorithms, including various neural network structures (such as convolutional neural networks and recurrent neural networks), optimization algorithms (such as gradient descent and its variants), and regularization techniques. To verify the effectiveness of deep learning algorithms in the financial treasury big data monitoring platform, we designed corresponding experiments, including using deep learning algorithms for image recognition of financial documents, natural language processing, and building recommendation systems. We collected real fiscal treasury data as the experimental dataset and preprocessed and annotated the data.

Author 1: Yanbing Wang
Author 2: Ding Ding

Keywords: Deep learning; financial database big data monitoring; algorithm research; performance optimization

PDF

Paper 36: Postpartum Depression Identification: Integrating Mutual Learning-based Artificial Bee Colony and Proximal Policy Optimization for Enhanced Diagnostic Precision

Abstract: Postpartum depression (PPD) affects approximately 12% of mothers, posing significant challenges for maternal and child health. Despite its prevalence, many affected women lack adequate support. Early identification of those at high risk is cost-effective but remains challenging. This study introduces an innovative model for PPD detection, combining the Mutual Learning-based Artificial Bee Colony (ML-ABC) method with Proximal Policy Optimization (PPO). This model uses a PPO-based algorithm tailored to the imbalanced dataset characteristics, employing an artificial neural network (ANN) for policy formation in categorization tasks. PPO enhances stability by preventing drastic policy shifts during training, treating the training process as a series of interconnected decisions, with each data point considered a state. The network, acting as an agent, improves at recognizing fewer common classes through rewards or penalties. The model incorporates an advanced pre-training strategy using ML-ABC to adjust initial weight configurations to increase classification precision, enhancing early pattern recognition. Evaluated on a Swedish study (2009-2018) dataset comprising 4313 cases, the model demonstrates superior precision and accuracy, with accuracy and F-measure scores of 0.91 and 0.88, respectively, proving highly effective for identifying PPD.

Author 1: Yayuan Tang
Author 2: Tangsen Huang
Author 3: Xiangdong Yin

Keywords: Postpartum depression; imbalanced classification; Proximal Policy Optimization; Artificial Bee Colony; reinforcement learning

PDF

Paper 37: A GAN-based Hybrid Deep Learning Approach for Enhancing Intrusion Detection in IoT Networks

Abstract: The Internet of Things (IoT) involves intelligent objects sharing information to accomplish tasks in the environment and improve living standards. In resource-constrained IoT networks, providing security against intrusion is an extremely difficult task. Such networks are vulnerable to Distributed Denial of Service (DDoS), gray hole, sinkhole, wormhole, spoofing, and Sybil attacks. In recent years, deep neural network (DNN) methodologies have been widely used to detect malicious attacks. We develop a hybrid deep learning-based GAN network (HDGAN) to detect malicious attacks in IoT networks. Due to the complex and time-varying dynamic environment of IoT networks, training samples are often insufficient, and mixing intrusion samples with normal samples can lead to a high false detection rate. We created a dynamic distributed IDS to detect malicious behaviors without centralized controllers; preprocessing sets threshold values to identify malicious behaviors. Experimental results show that HDGAN outperforms existing algorithms, with higher accuracy (98%) and precision (98%) and a 95% lower False Positive Rate (FPR).

Author 1: S. Balaji
Author 2: G. Dhanabalan
Author 3: C. Umarani
Author 4: J. Naskath

Keywords: Distributed Denial of Service (DDoS); Internet of Things (IoT); Deep Neural Network (DNN); intrusion detection; Generative Adversarial Network (GAN)

PDF

Paper 38: Natsukashii: A Sentiment Emotion Analytics Based on Recent Music Choice on Spotify

Abstract: Natsukashii offers a delightful platform for users to seamlessly connect with their Spotify accounts and delve into cherished musical moments, fostering a profound emotional connection with their recent experiences. This platform harnesses the power of Spotify's data, facilitating a secure connection to users' accounts while ensuring that no Spotify data is stored locally. Its array of features includes captivating data visualizations, such as display cards, radar charts, and area charts, elegantly showcasing both recent favorites and top-listened tunes. However, the crowning jewel of Natsukashii lies in its ability to provide users with a heartfelt insight into their current mood, derived from the audio features of their recent playlist selections. By meticulously preparing and analyzing the audio features provided by Spotify, Natsukashii delivers a personalized sentiment analysis, offering users a poignant glimpse into their emotional state through the lens of their musical preferences. Moreover, this enriching experience is seamlessly accessible across desktop and mobile platforms, compatible with popular web browsers like Google Chrome, Firefox, and Microsoft Edge.

Author 1: Khor Zhen Win
Author 2: Mafas Raheem

Keywords: Spotify; sentiment analysis; data preparation; data visualization; web development

PDF

Paper 39: Latent Variables Improve Hard-Constrained Controllable Text Generation on Weak Correlation

Abstract: Hard-constrained controllable text generation aims to forcefully generate texts that contain a specified constrained vocabulary, meeting the demands of more specialized application scenarios than soft-constrained controllable text generation. However, when a constraint set contains multiple weakly correlated constraints, soft-constrained controllable models aggravate the constraint-loss phenomenon, while hard-constrained controllable models suffer significant quality degradation. To address this problem, a latent-variable-based method for hard-constrained controllable text generation under weak correlations is proposed. The method uses latent variables to capture both global and local constraint correlation information to guide the language model in generating hard-constrained controllable text at the macro and micro levels, respectively. The introduction of latent variables not only reveals the latent correlation between constraints but also helps the model precisely satisfy these constraints while maintaining semantic coherence and logical correctness. Experimental findings reveal that under weakly correlated hard constraints, the quality of text generated by the proposed method exceeds that of currently established strong baseline models.

Author 1: Weigang Zhu
Author 2: Xiaoming Liu
Author 3: Guan Yang
Author 4: Jie Liu
Author 5: Haotian Qi

Keywords: Latent variables; controllable text generation; weak correlation; hard constraint

PDF

Paper 40: Foliar Nitrogen Estimation with Artificial Intelligence and Technological Tools: State of the Art and Future Challenges

Abstract: Nitrogen plays a fundamental role in plant growth, but its excessive application has significant negative impacts on farmers and the environment. This nutrient is often provided in excess to prevent plant growth limitations, when it ought to be administered in exact quantities, because many farmers do not have access to technology or to affordable soil and plant chemical analyses. Precision agriculture through monitoring of crop nutrition may be possible with quantitative, non-destructive methods and technological tools that allow farmers to conduct a rapid and representative verification of their fertilizer applications. In this sense, we carried out a systematic review and bibliometric analysis of recent scientific research to answer the questions: 1) Can artificial intelligence-based, non-destructive analysis of plant nutrition provide relevant information for decision-making in agricultural systems?, 2) Have recent studies reached the stage of developing technological tools to be applied in agricultural systems and field conditions?, and 3) What is the way forward to achieve popularization of the application and development of technological tools in agricultural systems? We found that non-destructive analyses of foliar nutrition need to provide more supportive information for decision-making, given the challenge of interpreting and replicating results in agricultural systems operating under uncontrolled conditions, such as field conditions. To address this issue, we propose developing accessible technological tools, such as mobile applications, tailored to farmers' needs. However, most studies had not yet considered developing a technological tool as part of their objectives. Therefore, it is critical to develop accessible and affordable technologies and monitoring systems that approach precision agriculture, since the conservation and sustainable management of natural resources demands translating scientific knowledge into supporting tools that reach farmers and decision-makers worldwide. The way forward is innovation through technological developments that enhance current agricultural systems.

Author 1: Angeles Gallegos
Author 2: Mayra E. Gavito
Author 3: Heberto Ferreira-Medina

Keywords: Digital images; spectral data; estimation models; technological tools; nitrogen

PDF

Paper 41: Image Technology Investigation Based on Fingerprint Devices and Artificial Intelligence

Abstract: In response to the inaccurate visual positioning of fingerprint data images in investigative techniques, a new method based on wireless networks and artificial intelligence is proposed. The new method integrates wireless networks and image vision, while enhancing fingerprint data and images using cross-temporal generative networks and channel state information. The research results indicated that the maximum positioning error of the new model was 1.3 m, which was 0.7 m, 0.2 m, and 0.4 m lower than that of the other models. The minimum positioning error in indoor environments was 0.9 m, lower than the 1.0 m, 1.4 m, and 1.6 m of the other models. The model used in the study had higher localization performance and recognition accuracy: the average accuracy was improved by about 4.5% compared with the TDF method, which had the lowest accuracy, and the average root mean square error was relatively low, with a minimum of 2.15, which was 4.43 lower than that of the highest, the SDF model. Therefore, the proposed method offers better fingerprint localization and investigation performance, providing useful guidance for research on fingerprint localization and image recognition localization.

Author 1: Xuemei Zhao

Keywords: Investigation technology; fingerprint devices; image vision; fingerprint localization; image recognition

PDF

Paper 42: Artistic Color Matching Technology Based on Silhouette Coefficient and Visual Perception

Abstract: Nowadays, traditional color matching methods cannot meet current market demand, and design involves many factors that affect design efficiency; it is therefore necessary to seek more efficient design methods. This study proposed an improved K-Means based on silhouette coefficients and designed an adaptive image main-color extraction model. Subsequently, an evaluation method for artistic color matching schemes based on visual perception and similarity measurement was introduced. Finally, a Pix2Pix model based on visual aesthetics was designed to develop color matching schemes. The results confirmed that, in the objective evaluation of the main color extraction results, the structural similarity of the main color images generated using the silhouette coefficient was superior to that of other methods: the maximum structural similarity of this method was 0.675, with an average of 0.663, while the peak signal-to-noise ratio of the generated main color images reached a maximum of 21.49 dB, with an average of 21.05 dB. In the validation of the visual-aesthetics-based Pix2Pix, the average color palette similarity of its design schemes was 0.807, and its average comprehensive evaluation index was 0.798, better than Pix2Pix without integrated visual aesthetics. In the experimental verification of computational efficiency, the average color matching time of the visual-aesthetics-based Pix2Pix network model was only 13.75 ms, whereas the average time consumption of the K-Means clustering algorithm model was as high as 135.67 ms. Overall, the designed image main-color adaptive extraction model and color matching model have strong practical applicability; they provide effective auxiliary design solutions for the design and development of artistic products, helping to improve design efficiency.

Author 1: Huizhou Li
Author 2: Wubin Zhu

Keywords: Silhouette coefficient; visual perception; K-Means; color matching; similarity measurement; Pix2Pix

PDF
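As a rough illustration of the silhouette-coefficient idea behind the improved K-Means above, the sketch below picks the number of main colours by maximising the silhouette score on toy RGB pixel data (a generic scikit-learn sketch; the data, cluster range, and parameters are assumptions, not the authors' model):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy "pixel" data: three colour clusters in RGB space (stand-ins for image pixels).
pixels = np.vstack([
    rng.normal(loc=c, scale=8.0, size=(100, 3))
    for c in ([200, 30, 30], [30, 200, 30], [30, 30, 200])
])

# Choose the number of main colours k by maximising the silhouette coefficient.
best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    score = silhouette_score(pixels, labels)
    if score > best_score:
        best_k, best_score = k, score

print(best_k)  # the silhouette-optimal number of main colours
```

The chosen k then defines the main-colour palette used for the downstream matching step.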

Paper 43: Virtual Second Life Affects the Existence of Arab Residents

Abstract: The 3D virtual community known as Second Life (SL), available on the Internet (www.secondlife.com), represents the latest generation of online services for business, learning, training, and entertainment. People, regional and ethnic groups, business organizations, social activities, and various societal environments populate this world. Its inhabitants, known as residents, represent themselves through personal avatars. People from the Arab region also exist in this world, carrying out their activities, emotions, and actions as human beings. For Arab residents, as for others, there is no escape from living in these communities within this unlimited space and time. SL societies honour their own traditions, ethics, and behaviors as personal values. And since Arab society, in particular, has its own values, traditions, and ethics, could there be a significant reflection of these values in Second Life society? This paper aims to pinpoint the possible consequences that certain ethical attitudes have for Arab residents, while also posing the crucial question of whether these values and ethics align with the diverse societies within the SL realm. The paper identifies the possibility of a decline in the popularity and population of SL if Arab societies remain reluctant, even though a large number of Arabs have access to Internet services, given the technical infrastructure and Internet provision available in most Arab countries.

Author 1: Galal Eldin Abbas Eltayeb

Keywords: Internet; second life; virtual worlds; Arab; values; ethics

PDF

Paper 44: Multi-Class Flower Counting Model with Zha-KNN Labelled Images Using Ma-Yolov9

Abstract: The flowering period is a critical time for plant growth, and counting flowers can help farmers predict the yield of the corresponding fields. Although several works have been proposed for flower counting, they lack the ability to predict counts for different flower types. Hence, a novel model is proposed in this study. Initially, the model is fed with images of different flowers as input, and these images undergo pre-processing: they are converted to grayscale for improved accuracy, and their noise is removed using bilateral filters. The noise-removed images are then passed to edge detection using GI-CED. Edge-detected images are augmented to improve the learning rate of the model, and the augmented images are labeled using ZHA-KNN. Features are extracted from the labeled images and given to MA-YoloV9, which is pre-trained to detect flowers in the image; the flower count is obtained as output. Overall, the proposed model achieved an accuracy of about 98.8% and an F1-score of 92.2%, which is far better than previous counting models.

Author 1: A. Jasmine Xavier
Author 2: S. Valarmathy
Author 3: J. Gowrishankar
Author 4: B. Niranjana Devi

Keywords: Flower counting; bilateral filter; Zhang Shasha Algorithm distance measured-K-Nearest Neighbor (ZSA-KNN); Gradient Intensity-Canny Edge Detection (GI-CED); mish-activated YoloV9

PDF

Paper 45: Friend Recommender System to Influence Friends on Social Networks Based on B-Mine Method

Abstract: Social networks are linked by one or more particular kinds of connections, including web links, friendship, family, and the sharing of ideas and money. In social network analysis, graph theory is used to investigate social relationships: the individuals within the networks are the vertices, and the connections among them are the edges, of which there can be a wide variety between vertices. Due to the rise in Internet usage, online shopping, and social media usage in recent years, recommender systems have become increasingly popular, and numerous websites have successfully put them into place. This study introduces an approach that uses the B-mine method to explore common patterns and enhance the accuracy of identifying influential nodes in social networks. In this method, two user similarity criteria, coverage and confidence, are used simultaneously to improve the recommender system. The behavior of previous users is analyzed, and recommendations are made to the current user based on friends' behavior and similarity, as well as on their interactions and preferences across different groups. According to the simulation results, the suggested approach performs satisfactorily, with accuracy and sensitivity of 89% and 76%, respectively.

Author 1: Tingting Feng
Author 2: Wenya Jin
Author 3: Wei Li

Keywords: Influential nodes; recommender; social networks and B-mine

PDF
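The two similarity criteria named above, coverage and confidence, admit a simple set-based reading; the sketch below is one plausible interpretation (the item sets, the exact definitions, and the equal-weight blend are illustrative assumptions, not the paper's formulas):

```python
# Hypothetical item sets: which items each user has interacted with.
likes = {
    "u1": {"a", "b", "c", "d"},
    "u2": {"b", "c", "d"},
    "u3": {"x", "y"},
}

def confidence(a, b):
    """Fraction of user a's items also liked by user b (rule-style confidence)."""
    return len(likes[a] & likes[b]) / len(likes[a])

def coverage(a, b):
    """Fraction of the combined item universe covered by the overlap."""
    universe = likes[a] | likes[b]
    return len(likes[a] & likes[b]) / len(universe)

def similarity(a, b, w=0.5):
    # Blend the two criteria with equal weight, as one simple combination.
    return w * confidence(a, b) + (1 - w) * coverage(a, b)

print(round(similarity("u1", "u2"), 3))  # u1 and u2 overlap heavily
print(round(similarity("u1", "u3"), 3))  # disjoint users score zero
```

Neighbours with the highest blended similarity would then supply the candidate recommendations.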

Paper 46: Transfer Learning-based Weed Classification and Detection for Precision Agriculture

Abstract: Artificial intelligence (AI) technologies, including deep learning (DL), have seen a sharp rise in agricultural applications in recent years. Numerous issues in agriculture have led to crop losses and detrimental effects on the environment. Precision agriculture tasks are becoming increasingly complicated; however, advancements in deep learning techniques have brought about huge improvements in learning capacity. This study examined how a CNN and VGG16 (transfer learning) were used for weed classification, for the application of selectively spraying herbicides in palm oil plantations, depending on the type of optimizer and the learning rate and weight decay values used in the models. The results show that the VGG16-BN model with the Adagrad optimizer, a learning rate of 0.001, and a weight decay of 0.0001 achieved an average accuracy of 97.6% and a highest accuracy of 99%.

Author 1: Nurul Ayni Mat Pauzi
Author 2: Seri Mastura Mustaza
Author 3: Nasharuddin Zainal
Author 4: Muhammad Faiz Bukhori

Keywords: Artificial intelligence; deep learning; CNN; transfer learning; VGG16

PDF

Paper 47: A Hybrid Framework to Implement DevOps Practices on Blockchain Applications (DevChainOps)

Abstract: As the adoption and utilization of blockchain technology continue to expand in enterprise software development, integrating the established DevOps approach emerges as a rational decision. This integration has the potential to accelerate software development and delivery, enhance software quality, and improve overall productivity. However, there is currently a lack of guidance on a structured DevOps approach specifically within the realm of blockchain-based software development. This paper emphasizes the importance of implementing an effective DevOps process and investigates its utilization in the development of blockchain smart contracts. Specifically, this study introduces a framework that aims to seamlessly integrate DevOps into the process of smart contract development. The primary focus of this framework is to streamline the continuous delivery and deployment of blockchain smart contracts packaged in containers. It comprises two fundamental components, delivery and deployment, which communicate through Git-distributed version control. Smart contract applications and node-specific deployment configurations are stored in dedicated GitHub repositories. The delivery component guarantees the synchronization of the deployment package with the most recent version of the smart contract application and the node deployment configuration files. The deployment component, meanwhile, is responsible for executing blockchain-decentralized applications in containers across all blockchain nodes, leveraging GitHub, Jenkins, and Docker for this purpose. To validate the effectiveness of the proposed method, multiple tests have been conducted on Quorum's simple storage, Sawtooth's XO Integerkey, and Corda's token decentralized applications (dapps).

Author 1: Ramadan Nasr
Author 2: Mohamed I. Marie
Author 3: Ahmed El Sayed

Keywords: Blockchain; decentralized applications (dapps); DevOps; smart contracts; continuous integration (CI); continuous deployment (CD); model-driven development (MDD)

PDF

Paper 48: Developing a Reliable Hybrid Machine Learning Model for Objective Soccer Player Valuation

Abstract: Football is both a popular sport and a big business. Team managers face important decisions regarding player transfers and player valuation, particularly the determination of market values and transfer fees. Market values are important because they can be thought of as estimates of the transfer fees or prices that could be paid for a player on the transfer market. Football specialists have historically estimated these market values; however, expert opinions are opaque and imprecise. Thus, data analytics may offer a reliable substitute for, or supplement to, expert-based market value estimates. This paper suggests a quantitative, objective approach to valuing football players on the market, based on applying machine learning algorithms to football player performance data. To achieve this objective, Decision Tree Regression (DTR) was employed to predict the market value of football players. Additionally, two novel metaheuristic algorithms, the Honey Badger Algorithm (HBA) and the Jellyfish Search Optimizer (JSO), were utilized to enhance the performance of the DTR model. The experiment used FIFA 20 game data gathered from sofifa.com and also aimed to examine the information and pinpoint the key elements influencing market value assessment. The results showed that the DTJS hybrid model performed better than the other algorithms in predicting players' market values, achieving the highest accuracy with an R2 value of 0.984 and the lowest error ratio compared with the baseline. Lastly, these findings may be valuable in negotiations between football teams and players' agents: the approach may serve as a springboard to expedite the negotiation process and provide a quantifiable, objective assessment of a player's market worth.

Author 1: Hongtao Yu
Author 2: Jialiang Li

Keywords: Market value; machine learning; soccer player; decision tree regression; Honey Badger Algorithm; Jellyfish Search Optimizer

PDF

Paper 49: Security and Privacy Issues in Network Function Virtualization: A Review from Architectural Perspective

Abstract: Network Function Virtualization (NFV) delivers numerous benefits to customers since it is a cost-effective evolution of legacy networks, allowing for rapid network augmentation and extension at a low cost as network functions are virtualized. However, there is a significant security concern for NFV users because of the shared infrastructure. There are many studies in the literature that report various NFV security threats. In this paper, we categorize these threats according to the alignment of NFV architecture and delineate a taxonomy for NFV security threats. This work provides detailed information about security threats, causes, and countermeasures to reduce the security vulnerabilities of NFV. We believe that the study of NFV security threats from an architectural perspective is a step forward for better insight into these threats, since the roots of many NFV threats are connected to their architecture. We also present how NFV design should be revamped to mitigate NFV security threats, something that is a recent trend in this area. Finally, we highlight future research directions to provide enhanced security for future NFV-based networks.

Author 1: Bilal Zahran
Author 2: Naveed Ahmed
Author 3: Abdel Rahman Alzoubaidi
Author 4: Md Asri Ngadi

Keywords: Network functions virtualization; virtualized network function; network security; security threat; cloud computing

PDF

Paper 50: An Anomaly Detection Model Based on Pearson Correlation Coefficient and Gradient Booster Mechanism

Abstract: Anomaly detection aims to build a decision model that estimates the class of new data based on historical sample features. However, the distance between samples in the feature space is sometimes very close, making such samples effectively invisible to the detection model; this is known as the class overlap problem. To address this issue, an anomaly detection model based on the Pearson correlation coefficient and a gradient booster mechanism is proposed in this paper. Different from traditional resampling methods, the proposed method first groups and sorts features along different dimensions such as feature correlation, feature importance, and feature exclusivity. It then selects features with higher correlation and lower importance for deletion to improve the training accuracy of the detector. Furthermore, through a unilateral gradient sampling mechanism, ineffective or inefficient training samples can be further reduced to improve the training efficiency of the detector. Finally, the proposed method was compared with three feature selection methods and six anomaly detection ensemble models on six datasets. The experimental results show that the proposed method has significant advantages in feature selection, detection performance, detection stability, and computational cost.

Author 1: Tuo Ding
Author 2: He Sui

Keywords: Anomaly detection; class overlap; Pearson correlation coefficient; gradient booster mechanism

PDF
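The feature-pruning step described above (delete features with high correlation and low importance) can be sketched as follows; the correlation threshold, the toy data, and the stand-in importances are assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
f0 = rng.normal(size=n)
f1 = f0 + rng.normal(scale=0.01, size=n)   # near-duplicate of f0
f2 = rng.normal(size=n)                    # independent feature
X = np.column_stack([f0, f1, f2])
importance = np.array([0.5, 0.1, 0.4])     # stand-in feature importances

# Absolute pairwise Pearson correlations between feature columns.
corr = np.abs(np.corrcoef(X, rowvar=False))
keep = set(range(X.shape[1]))
for i in range(X.shape[1]):
    for j in range(i + 1, X.shape[1]):
        if corr[i, j] > 0.95 and i in keep and j in keep:
            # Of a highly correlated pair, drop the less important feature.
            keep.discard(i if importance[i] < importance[j] else j)

print(sorted(keep))  # feature 1 is pruned: [0, 2]
```

The surviving columns would then feed the gradient-boosted detector.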

Paper 51: Image Change Detection Based on Fuzzy Clustering and Neural Networks

Abstract: In the change detection of synthetic aperture radar images, image quality and change detection accuracy often fail to meet application requirements due to the influence of speckle noise. Therefore, this study improved the fuzzy C-means algorithm by introducing fuzzy membership degrees and Gabor texture features. Features were weighted through channel attention, resulting in an image change detection model, namely the fuzzy local information C-means for Gabor textures with a multi-scale channel attention wavelet convolutional neural network. The segmentation accuracy of the model was 0.995, an improvement of 0.119 over the traditional fuzzy C-means algorithm. When multiplicative noise with different variances was added, the accuracy of the algorithm still reached 0.982 at a noise variance of 0.30. In practical application analysis, the detection and segmentation accuracy for river images was 0.983 with a partition coefficient of 0.935, and the segmentation accuracy for farmland images was 0.960 with a partition coefficient of 0.902. Therefore, the algorithm has good stability and anti-noise performance and can be widely applied in various fields of synthetic aperture radar image change detection, such as disaster assessment, urban development monitoring, and environmental change monitoring. This paper provides more accurate analysis results, which help with policy formulation and effective resource management.

Author 1: Chenwei Wang
Author 2: Xiating Li

Keywords: Fuzzy C-means algorithm; fuzzy membership degree; Gabor texture; channel attention; neural networks; synthetic aperture radar images

PDF
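For readers unfamiliar with the base algorithm being improved above, a plain (un-improved) fuzzy C-means clustering loop looks roughly like this; the toy data and parameters are assumptions, and the paper's Gabor-texture and channel-attention extensions are not reproduced:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means: returns membership matrix U (n x c) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated blobs; hard labels follow the largest membership.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
U, centers = fuzzy_c_means(X)
labels = U.argmax(axis=1)
print(len(set(labels)))  # both clusters recovered
```

The soft memberships in U are what the fuzzy local-information variants weight with spatial and texture cues.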

Paper 52: Adaptive Channel Coding to Enhance the Performance in Rayleigh Channel

Abstract: The Rayleigh fading channel model is commonly used to model real-time wireless mobile communication, as it can emulate multipath scattering, dispersion, fading, reflection, refraction, and Doppler shift. Mobility and interference change the channel conditions over time, and with them the error environment, resulting in variable bit error rates (BER). Fixed channel coding schemes have proven able to provide data reliability despite poor channel conditions, but they fail to contend with time-varying channel conditions and hence suffer a loss in information rate during good channel conditions. There is a need for an adaptive scheme that adjusts dynamically to channel conditions, improving overall performance and communication reliability. An adaptive channel coding (ACC) technique is proposed in this paper that requires only simple statistics from the receiver and dynamically switches between two channel coding schemes in response to the changing environment, which distinguishes it from other schemes that dynamically tune the parameters of a single error control coding (ECC) scheme. This strategy guarantees not only reliability but also spectral efficiency, as channel capacity is utilized effectively by switching between two ECCs: a less robust (high data rate) convolutional ECC is used when channel conditions are good, and a more robust (low data rate) turbo ECC is used when channel conditions degrade. The proposed concept was implemented in MATLAB, and the results outperform conventional fixed ECC schemes: an effective reduction in the Eb/N0 requirement is obtained for a target BER compared with fixed or predetermined ECCs. ACC was tested under various mobile channel environments and proven resilient to varying channel conditions. It also provides flexibility in QoS by changing the switching criteria according to the application.

Author 1: Srividya L
Author 2: Sudha P. N

Keywords: Adaptive error control coding; turbo coding; convolutional coding; bit error rate; throughput; Rayleigh channel

PDF
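The switching logic described above (one receiver statistic choosing between a high-rate convolutional code and a robust turbo code) can be sketched as follows; the BER threshold, code rates, and channel trace are illustrative assumptions, not the paper's MATLAB implementation:

```python
# Hypothetical switching rule: use the high-rate code while the channel is
# good, fall back to the robust low-rate code when estimated BER degrades.
SCHEMES = {
    "convolutional": {"rate": 1 / 2, "robust": False},  # high throughput
    "turbo":         {"rate": 1 / 3, "robust": True},   # high protection
}

def select_scheme(estimated_ber, threshold=1e-3):
    """Simple receiver-feedback rule: one BER statistic drives the switch."""
    return "convolutional" if estimated_ber < threshold else "turbo"

# A channel that degrades over time (e.g. a deepening Rayleigh fade).
trace = [1e-5, 5e-4, 2e-3, 8e-2]
choices = [select_scheme(ber) for ber in trace]
avg_rate = sum(SCHEMES[c]["rate"] for c in choices) / len(choices)

print(choices)               # switches from convolutional to turbo as BER worsens
print(round(avg_rate, 3))    # average code rate actually used over the trace
```

Adjusting the threshold is the QoS knob the abstract mentions: a lower threshold trades throughput for protection earlier.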

Paper 53: Evaluating the Effectiveness of Brain Tumor Image Generation using Generative Adversarial Network with Adam Optimizer

Abstract: Deep learning models known as Generative Adversarial Networks (GANs) have shown great potential in several applications, such as computer vision and image synthesis. They are now a viable tool in medical imaging, useful for tasks like improving diagnostic model performance, generating new images, and augmenting existing data. This paper aims to utilize the capabilities of GANs to produce synthetic MRI images, with the purpose of enhancing the training dataset for tumor classification. A new method is presented to classify tumors in MRI images by combining GANs and Convolutional Neural Networks (CNNs). This method employed the Adam optimizer and Binary Cross Entropy (BCE) with Logits Loss as the criterion, which contributed to optimizing the training process and stabilizing the GAN. The proposed method achieved an average accuracy of 95.1% and an average loss of 0.080 with large images. Furthermore, the proposed method is evaluated based on Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) and compared with existing GAN models. These outcomes highlight the potential of the GAN-based approach in contributing to improved medical diagnostics and treatments.

Author 1: Aryaf Al-Adwan

Keywords: Generative Adversarial Networks; images; medical; Convolutional Neural Networks

PDF

Paper 54: Elevating Aspect-Based Sentiment Analysis in the Moroccan Cosmetics Industry with Transformer-based Models

Abstract: In navigating the dynamic consumer landscape, this study emphasizes the collaborative synergy between influencers and brands, focusing on a cosmetics brand in the Moroccan market. Employing advanced Natural Language Processing (NLP) models, the research explores multifaceted aspects to provide a comprehensive insight into consumer sentiments and product aspects. The primary objective is to empower decision-makers by identifying both the strengths and weaknesses of their products, including evaluating how effectively the influencer promotes their product. Central to this study is the introduction of the MultiLingual Aspect-Based Sentiment Transformer (MABST) framework, a hybrid sentiment analysis model tailored for the beauty and cosmetics industry. MABST integrates cutting-edge transformer models such as Albert, DistillBERT, Electra, and XLNet, enabling advanced sentiment extraction across diverse linguistic contexts in cosmetic product reviews and influencer collaborations. This framework enhances understanding of influencer marketing dynamics and equips businesses with insights to inform strategic decisions and refine promotional strategies in the competitive digital landscape.

Author 1: Kawtar Mouyassir
Author 2: Abderrahmane Fathi
Author 3: Noureddine Assad

Keywords: MABST; Aspect-Based Sentiment Analysis (ABSA); transformer-based models; Moroccan cosmetics industry; natural language processing (NLP); influencer marketing; albert; DistillBERT; electra; XLNet (Transformer models)

PDF

Paper 55: Efficient Squeeze-and-Excitation-Enhanced Deep Learning Method for Automatic Modulation Classification

Abstract: The rapid proliferation of mobile devices and Internet of Things (IoT) gadgets has led to a critical shortage of spectral resources. Cognitive Radio (CR) emerges as a propitious technology to tackle this issue by enabling the opportunistic use of underexploited frequency bands. Automatic Modulation Classification (AMC), which serves as a technique to blindly identify modulation types of received signals, plays a pivotal role in carrying out several CR functions, including interference detection and link adaptation. Recent research has turned to Deep Learning (DL) networks to overcome the shortcomings of traditional AMC techniques. However, most existing DL approaches are impractical for resource-limited systems. To address this challenge, we propose a novel lightweight hybrid neural network for AMC that fuses Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs) layers, along with a customized Squeeze and Excitation (SE) block. The integration of CNNs and GRUs allows for the learning of both spatial and temporal dependencies in modulated signals, while the SE block recalibrates features by modeling interdependencies between CNN network channels. Our experimental results, using the RadioML 2016.10A dataset, clearly demonstrate the superior performance of our approach in effectively managing the tradeoff between accuracy and complexity compared to baseline methods. Specifically, our approach achieves the highest accuracy of 91.73%, surpassing all reference models while reducing the memory footprint by at least 45%. In future work, further investigation is warranted to differentiate modulations sharing temporal or frequency domain characteristics and enhance classification accuracy in high-noise environments.

Author 1: Nadia Kassri
Author 2: Abdeslam Ennouaary
Author 3: Slimane Bah

Keywords: Cognitive radio; modulation classification; deep learning; convolutional neural networks; Gated Recurrent Units; squeeze and excitation

PDF

Paper 56: Personalized Art Design of Wheel Rims Based on Image Mapping of Image Requirements

Abstract: In the customization of wheel rims, to convert users' emotional images and needs into design solutions, research was conducted based on pixel theory, using clustering algorithms, principal component analysis, and other techniques to establish an image-association sample library, obtain the image mapping relationship, and construct a wheel-rim shape design platform system, followed by improvements to the system design. The results showed that, unlike methods such as support vector machines, the K-means algorithm had higher classification accuracy and smaller mean absolute error: the classification accuracy of the K-means algorithm was 93.15%, versus 84.33% for the support vector machine, and the minimum mean absolute error of the K-means algorithm was 0.56. In the application of the personalized wheel customization platform system, the improved design increased user satisfaction and ease of use, with corresponding scores of 4.40 and 4.35, respectively. The research method can transform users' image needs into wheel shape design schemes that meet user needs.

Author 1: Jianhui Li

Keywords: Wheels; art design; styling design; user needs; image clustering

PDF

Paper 57: Enhanced CoCoSo Technique for Sport Teaching Quality Evaluation with Double-Valued Neutrosophic Number Multiple-Attribute Decision-Making

Abstract: Only by effectively combining online and offline teaching, and vigorously promoting their integration in college physical education, can we maximize the reform and innovation of college physical education teaching and continuously improve teaching quality. Although blended teaching has become one of the important techniques in college physical education and has continuously achieved new results, there are still problems in its organization and implementation that need to be seriously addressed. Blended teaching quality evaluation is treated here as a defined multiple-attribute decision-making (MADM) problem. Recently, the CoCoSo and entropy techniques have been utilized to cope with MADM, and double-valued neutrosophic sets (DVNSs) are utilized as a technique for characterizing fuzzy information during blended teaching quality evaluation. In this study, CoCoSo is constructed for MADM under DVNSs, and the double-valued neutrosophic number CoCoSo (DVNN-CoCoSo) technique is constructed for MADM. Finally, a numerical example of blended teaching quality evaluation is put forward to demonstrate the DVNN-CoCoSo technique.

Author 1: Xuan Wen
Author 2: Changhong Pan

Keywords: Multiple-attribute decision-making (MADM); double-valued neutrosophic sets (DVNSs); CoCoSo technique; blended teaching quality evaluation

PDF
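For context, the classical crisp CoCoSo aggregation that the DVNN extension builds on can be sketched as below (a generic sketch of the standard CoCoSo steps on an assumed toy decision matrix of benefit criteria; the paper's double-valued neutrosophic operators are not reproduced):

```python
import numpy as np

# Toy decision matrix: 3 alternatives (rows) x 3 benefit criteria (columns).
X = np.array([[7.0, 8.0, 6.0],
              [6.0, 6.5, 8.0],
              [8.0, 7.0, 7.0]])
w = np.array([0.4, 0.3, 0.3])   # assumed criterion weights
lam = 0.5                       # balance parameter, commonly 0.5

r = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # min-max normalise
S = (r * w).sum(axis=1)         # weighted-sum comparability measure
P = (r ** w).sum(axis=1)        # weighted-power comparability measure

# Three appraisal scores combined into the final CoCoSo index.
ka = (S + P) / (S + P).sum()
kb = S / S.min() + P / P.min()
kc = (lam * S + (1 - lam) * P) / (lam * S.max() + (1 - lam) * P.max())
k = (ka * kb * kc) ** (1 / 3) + (ka + kb + kc) / 3

print(int(k.argmax()))  # index of the best-ranked alternative
```

In the DVNN variant, the crisp entries of X are replaced by double-valued neutrosophic numbers and the weights may come from the entropy technique, but the aggregation skeleton is the same.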

Paper 58: An Enhanced Secure User Authentication and Authorized Scheme for Smart Home Management

Abstract: Owing to rapid technological advances, home automation has gained popularity, making daily life easier. Digitalization and automation encompass a wide range of activities and industries, and IoT makes home automation more affordable and appealing. With IoT-enabled remote appliance control, smart home automation can improve living standards; a home gateway configures smart, multimedia, and home networks for IoT devices. Safety of life and property is essential to human fulfilment, and home automation and its related apps have increased the ease, comfort, security, and safety of living at home. Home automation systems have motion detection and surveillance features to enhance home security, although the logic of avoiding excessive or fraudulent notifications remains difficult; intelligent response and monitoring mechanisms improve the efficiency of smart home automation. This study introduces a smart home automation system designed to control household devices, monitor environmental conditions, and identify unauthorized entry into the smart home network and its immediate surroundings. It presents a smart home network design and configuration that enables secure IoT services with an Access Control List (ACL) for home networks. The research aims to design a robust authentication scheme that guarantees secure communication in a smart home environment, implementing a Next Generation Access Control (NGAC) technique together with Telnet, SSH, IPSec, and VPN to detect unauthorized access and mitigate security issues. The efficacy of the suggested design and configuration is validated through simulation, demonstrating notable performance in the context of enhanced security measures.

Author 1: Md. Razu Ahmed
Author 2: Mohammad Osiur Rahman

Keywords: Smart home automation; Internet of Things; security and privacy; ACL; IPSec; VPN

PDF

Paper 59: Integrating Causal Inference and Machine Learning for Early Diagnosis and Management of Diabetes

Abstract: In the context of the increasing prevalence of diabetes, this work focuses on integrating causal inference with Machine Learning (ML) for early diagnosis and effective management of diabetes. We applied a series of advanced techniques to improve model performance, including the use of data preprocessing methods, evaluation of variable importance and causal analysis, Feature Engineering methods, and hyperparameter optimization. The diabetes prediction model is a Stacking ensemble model that combines the predictions of several base models (namely: Random Forest Classifier, XGBClassifier, Gradient Boosting Classifier). Initial results showed a precision of 0.70, a recall of 0.70, an Area Under Curve (AUC) of 0.768, and a Mean Cross Entropy (MCE) of 0.299. After optimization, precision increased to 0.73, recall to 0.73, AUC to 0.798, and MCE improved to 0.271. This approach has demonstrated a significant improvement in diabetes prediction, suggesting that the integration of causal inference and Machine Learning is a promising path for the diagnosis and management of diabetes. The reduction in MCE, alongside improvements in precision, recall, and AUC, underscores the effectiveness of our optimization techniques in enhancing model reliability and performance.
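The stacking design described above can be illustrated with a minimal sketch (not the authors' implementation): the real base learners are Random Forest, XGBoost, and Gradient Boosting classifiers, replaced here by toy scoring functions over one feature, with a simple averaging meta-rule.

```python
# Illustrative sketch of the stacking idea: each base model emits a
# probability-like score, and the meta-learner combines them.

def base_model_a(x):          # stand-in for RandomForestClassifier
    return 1.0 if x[0] > 0.5 else 0.0

def base_model_b(x):          # stand-in for XGBClassifier
    return min(1.0, max(0.0, x[0]))

def base_model_c(x):          # stand-in for GradientBoostingClassifier
    return 0.9 if x[0] > 0.4 else 0.1

def stack_predict(x, threshold=0.5):
    """Meta-learner: average the base predictions, then threshold."""
    probs = [base_model_a(x), base_model_b(x), base_model_c(x)]
    meta = sum(probs) / len(probs)
    return 1 if meta >= threshold else 0

print(stack_predict([0.8]))  # strongly positive example -> 1
print(stack_predict([0.1]))  # strongly negative example -> 0
```

In practice the meta-learner is itself a trained model fitted on out-of-fold base predictions; the averaging rule above only shows the data flow.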

Author 1: Sahar Echajei
Author 2: Mohamed Hafdane
Author 3: Hanane Ferjouchia
Author 4: Mostafa Rachik

Keywords: Machine learning; classification; causal inference; Bayesian networks; ensemble technique; diabetes diagnosis

PDF

Paper 60: Evaluating Noise-Robustness of Convolutional and Recurrent Neural Networks for Baby Cry Recognition

Abstract: Reliable baby cry recognition plays a crucial role in infant care and monitoring, yet real-world environments pose challenges to system accuracy due to background noise. This study proposes a novel CNN architecture for baby cry recognition under varying noise conditions, featuring three convolutional layers, a max pooling layer, and a dropout rate of 0.5, and compares its performance against standard RNN models. The models were trained for 100 epochs with a batch size of 64 and evaluated in both clean and noisy environments. To simulate real-world scenarios, recordings were transformed into audio signals and subjected to varying levels of background noise at different signal-to-noise ratios (SNRs). Results indicate that both models achieved high accuracy (>89%) in noise-free conditions. However, the proposed CNN maintained higher precision (93%) and overall accuracy (91%) than the RNN under 10 dB noise, demonstrating its superior noise robustness for baby cry recognition. This improvement is attributed to the CNN’s capacity to capture spatial features in audio signals, making it less susceptible to noise disruptions. These findings contribute to the development of more reliable and robust baby cry recognition systems.
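The noise-mixing step described above, adding background noise to a clean recording at a target SNR, can be sketched as follows. This is an illustrative implementation, not the authors' code: the noise is scaled so that 10·log10(P_signal / P_noise) equals the requested SNR in dB.

```python
import math
import random

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so the signal-to-noise ratio of the mix is `snr_db`,
    then add it to `signal` (lists of float samples, equal length)."""
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Required noise power for the target SNR: P_s / P_n' = 10^(snr/10)
    target_noise_power = p_signal / (10 ** (snr_db / 10))
    scale = math.sqrt(target_noise_power / p_noise)
    return [s + scale * n for s, n in zip(signal, noise)]

random.seed(0)
signal = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
noise = [random.gauss(0, 1) for _ in range(8000)]
mixed = mix_at_snr(signal, noise, 10)   # 10 dB, as in the evaluation above
```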

Author 1: Medhanita Dewi Renanti
Author 2: Agus Buono
Author 3: Karlisa Priandana
Author 4: Sony Hartono Wijaya

Keywords: Baby cry recognition; deep learning; gated recurrent unit; long short-term memory; noise robustness; signal-to-noise ratio

PDF

Paper 61: Automated Detection of Learning Styles using Online Activities and Model Indicators

Abstract: Understanding learning styles is essential for learners and instructors to identify strengths and weaknesses in the education system. Although the Felder-Silverman Learning Style Model (FSLSM) is commonly used for this purpose, its reliance on in-person surveys can be time-consuming and prone to inaccuracies. This paper proposes an automated approach using Machine Learning (ML) to detect learning styles. This method extracts features from online activity data in Learning Management System (LMS) databases, aligning them with FSLSM indicators to label different learning styles. The dataset is divided into training and testing sets to build and evaluate Support Vector Machine (SVM) classifiers. Feature selection is performed using the Recursive Feature Elimination (RFE) algorithm to improve classifier performance, yielding the SVM-RFE algorithm. The experimental results showed promising accuracy for all model dimensions, i.e., 95.76% for processing, 85.88% for perception, 93.16% for input, and 96.42% for understanding dimensions. This approach offers a robust framework for automated learning style detection, which significantly reduces reliance on manual surveys and improves efficiency in educational settings.
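The RFE loop at the heart of SVM-RFE can be sketched as follows. This is a generic illustration, not the paper's code: the linear-SVM weight ranking is replaced by a toy covariance-based feature score, and the tiny dataset is made up for the example.

```python
def train_linear(X, y, features):
    """Toy stand-in for fitting a linear SVM: score each feature by the
    absolute covariance between the feature column and the label."""
    n = len(X)
    y_mean = sum(y) / n
    weights = {}
    for f in features:
        f_mean = sum(row[f] for row in X) / n
        cov = sum((row[f] - f_mean) * (yi - y_mean)
                  for row, yi in zip(X, y)) / n
        weights[f] = abs(cov)
    return weights

def rfe(X, y, n_keep):
    """Recursive Feature Elimination: repeatedly refit and drop the
    weakest feature until n_keep features remain."""
    features = list(range(len(X[0])))
    while len(features) > n_keep:
        weights = train_linear(X, y, features)
        weakest = min(features, key=lambda f: weights[f])
        features.remove(weakest)
    return features

# Feature 0 tracks the label, feature 1 is constant, feature 2 is anti-correlated.
X = [[1.0, 0.5, 0.0], [0.9, 0.5, 0.1], [0.1, 0.5, 0.9], [0.0, 0.5, 1.0]]
y = [1, 1, 0, 0]
print(rfe(X, y, 2))  # the constant feature 1 is eliminated first
```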

Author 1: Alia Lestari
Author 2: Armin Lawi
Author 3: Sri Astuti Thamrin
Author 4: Nurul Hidayat

Keywords: Learning style; Felder-Silverman Learning Style Model; machine learning; support vector machine; recursive feature elimination; accuracy

PDF

Paper 62: Strategies for Optimizing Personalized Learning Pathways with Artificial Intelligence Assistance

Abstract: With the deepening application of artificial intelligence (AI) in the field of education, Personalized Learning Pathways (PLPs) as a strategy to revolutionize traditional educational models have garnered widespread attention. This paper aims to explore strategies for optimizing PLPs with the aid of AI, in order to enhance learning efficiency, stimulate students' interest in learning, and foster their holistic development. The background section discusses the "one-size-fits-all" teaching methods prevalent in traditional education models and the importance and necessity of PLPs. Following this, the study delves into the limitations of existing methods for optimizing PLPs, especially in terms of dynamic adaptability and real-time feedback mechanisms. The paper consists of two main parts: the first part constructs a dynamic model to simulate the impact of PLP design features on the student learning process; the second part proposes a dynamic PLP resource recommendation algorithm based on incremental learning. By updating students' abilities, preferences, and knowledge states in real-time, the algorithm can provide more precise learning resource recommendations. The experimental results demonstrate that the proposed dynamic PLP resource recommendation algorithm based on incremental learning exhibits significant effects in optimizing PLP design. It can improve the accuracy of the recommendation system and positively influence the long-term learning state transition of students. This proves the potential and practical application value of dynamic models in the field of personalized education. The methods and findings of this study not only enrich the theoretical foundation of the field of personalized learning but also offer robust technical support for practical educational practices, holding significant academic and practical value.

Author 1: Weifeng Deng
Author 2: Lin Wang
Author 3: Xue Deng

Keywords: Personalized learning pathways (PLPs); artificial intelligence (AI); dynamic model; incremental learning; resource recommendation

PDF

Paper 63: ERFN: Leveraging Context for Enhanced Emotion Detection

Abstract: The majority of previous methods for identifying emotions concentrate on facial expressions rather than taking into account the rich contextual information that signals significant emotional states. To fully utilize this contextual information and compensate for the lack of emotion cues, this work introduces the Emotion Recognition Fusion Network (ERFN), a novel model that uses advanced techniques for efficient context-aware human emotion recognition. It incorporates the Flow Context Aware Loss Fusion (FCALF) model, which focuses on emotion analysis in a video sequence. The model uses deep feature extraction (VGG16), the Farnebäck optical flow model, and L1 loss to calculate the Average Contextual Loss (ACL) for selecting key frames. The selected frames are used to obtain the resultant optical flow images. Data augmentation techniques are applied exclusively to the training images. The resultant optical flow images undergo feature extraction using both InceptionResNetV2 and VGG16, fine-tuned by adding a layer followed by GlobalMaxPool2D and a dense layer, capturing intricate details and flow-contextual information from face, body, and scene. The fused features are fed into a Softmax layer for classification. Experimental results show that ERFN outperforms existing models in terms of accuracy and generalization, demonstrating its effectiveness in capturing context-aware emotions. The proposed approach shows promising results on both a real-world uncontrolled-environment dataset (CAER-S) and a laboratory-controlled dataset (CK+).

Author 1: Navneet Gupta
Author 2: R. Vishnu Priya
Author 3: Chandan Kumar Verma

Keywords: Context-based emotion recognitions; deep learning; optical flow; CNN

PDF

Paper 64: Educational Big Data Mining: Comparison of Multiple Machine Learning Algorithms in Predictive Modelling of Student Academic Performance

Abstract: Educational Data Mining (EDM) can be used to predict students' academic performance, helping to mitigate student attrition, guide resource allocation, and aid decision-making in higher education institutions. This article uses a large dataset from the Programme for International Student Assessment (PISA), consisting of 612,004 participants from 79 countries, together with a machine learning approach to predict student academic performance. Unlike most of the literature, which is confined to one geographical location or to limited datasets and factors, this article studies additional factors that contribute to academic success and uses student data from diverse backgrounds. The proposed model predicts student performance with 74% accuracy. Gradient Boosted Trees surpass the other classification models considered (Logistic Regression, Naïve Bayes, Deep Learning, Random Forest, Fast Large Margin, Generalised Linear Model, Decision Tree and Support Vector Machine). Reading skills and habits are of the highest importance in predicting the academic performance of students.

Author 1: Ting Tin Tin
Author 2: Lee Shi Hock
Author 3: Omolayo M. Ikumapayi

Keywords: Academic performance; CGPA; education data mining; machine learning; predictive modelling; R&D investment

PDF

Paper 65: Maximizing Human Capital: Talent Decision-Making Using Information Technology

Abstract: In the current fiercely competitive landscape, an organization’s success depends on its ability to leverage information technology to support personnel decisions that optimise the use of its human resources. This research examines five strategies for optimising human capital through the use of information technology within the framework of multi-criteria decision-making (MCDM): data-driven performance monitoring systems, artificial intelligence-driven talent acquisition platforms, virtual reality (VR) onboarding and training simulations, predictive analytics tools for succession planning and talent forecasting, and machine learning algorithms for skill assessment and development. Eight criteria (efficacy, efficiency, accuracy, accessibility, scalability, ethical concerns, influence on organizational success, and trend adaptability) were developed to assess these options. By applying the entropy-weighted WASPAS (weighted aggregated sum product assessment) approach on top of T-spherical fuzzy set (T-SFS) theory, we determine the weights associated with each criterion and rank the alternatives. This study adds to our understanding of how businesses can use information technology wisely to enhance human resource management, in addition to providing guidance on how to assess various approaches based on their performance across a variety of metrics. Human resource specialists and organizational leaders can use the study’s suggestions to improve personnel decision-making procedures and make the most of their workforce’s potential in the digital age.

Author 1: Rui Zhang
Author 2: Xiaobai Li
Author 3: Gang Liu

Keywords: WASPAS; information technology; virtual reality; entropy; machine learning; T-spherical fuzzy Sets

PDF

Paper 66: Power Up on the Go: Designing a Piezoelectric Shoe Charger

Abstract: As modern society continues to thrive, electricity has become an essential component of daily life. However, as the demand for electricity rises, some electrical loads struggle to perform, affecting even simple tasks such as charging a mobile phone. To meet ever-expanding energy demands, it is crucial to explore cleaner and renewable power sources. This paper highlights a promising electricity generation method that utilizes piezoelectric materials. Specifically, the study employs piezoelectric (PZT) material to convert pressure from human movements into electrical power. A bridge rectifier circuit is designed to store this power in a battery, which can be used to charge mobile phones. In addition, a microcontroller is implemented to program the auto-lacing light function, with the piezoelectric material serving as the microcontroller's power supply. The circuit is designed to calculate the total power produced by the piezoelectric material. Multisim software was used to simulate the circuit design, and the results indicate that the power generated is sufficient to charge mobile phones. The study finds that a single piezoelectric plate can generate 5 mA in one second when placed under mechanical stress (i.e., human movement). By utilizing four piezoelectric materials, the study generated 13.48 V in one second when mechanical force was applied. This is sufficient to charge a mobile phone, as well as power an LED and a 5 V servomotor.
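A back-of-the-envelope check of the reported figures (13.48 V across four plates, 5 mA from a single plate under stress) can be done with a naive P = V·I calculation. This is an illustration only, not the authors' Multisim circuit analysis, and it ignores rectifier and regulator losses.

```python
# Naive instantaneous-power estimate from the reported figures.
voltage = 13.48          # volts, four piezoelectric plates under force
current = 0.005          # amps (5 mA) from a single plate under stress
power = voltage * current
print(f"Instantaneous power: {power * 1000:.1f} mW")

# A USB trickle charge needs roughly 5 V; the rectified, battery-buffered
# output would have to be regulated down to that rail.
usb_voltage = 5.0
print(f"Headroom above the 5 V rail: {voltage - usb_voltage:.2f} V")
```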

Author 1: Jamil Abedalrahim Jamil Alsayaydeh
Author 2: Rex Bacarra
Author 3: Abdul Halim Bin Dahalan
Author 4: Pugaaneswari Velautham
Author 5: Khaled Abidallah Salameh Aldarab’ah

Keywords: Piezoelectric (PZT); generates electricity; energy harvesting; eco-friendly charging; servomotor; sustainable technology; kinetic power generation; Arduino control

PDF

Paper 67: Validation of a Supply Chain Innovation System Based on Blockchain Technology

Abstract: In the dynamic field of global supply chain management, the adoption of cutting-edge technologies is critical for securing a competitive edge and enhancing operational efficiency. This paper explores the transformative potential of blockchain technology within supply chain operations. While the theoretical promise of blockchain as a secure, transparent, and decentralized transaction recording system is undeniable, its practical adoption in supply chain systems remains hampered by skepticism and caution. This investigation aims to unravel the efficacy of blockchain in enhancing security, efficiency, accuracy, and cost-effectiveness within supply chain systems. By bridging theoretical aspirations with practical realities, the study sheds light on both the advantages and constraints of incorporating blockchain into supply chain management. The application of a blockchain-based system in this research demonstrates significant enhancements in supply chain processes and supplier selection within a decentralized framework. Key performance indicators underscore the system's robustness and utility. Furthermore, the deployment of smart contracts, which automatically verify data modifications and access rights, underscores the platform's capability to handle diverse operations. Despite ongoing concerns regarding blockchain's performance and scalability, this study observes a positive trend towards overcoming these challenges. The findings contribute to the growing body of knowledge on blockchain technology, marking a significant step forward in its application within supply chain management.

Author 1: Ahmed El Maalmi
Author 2: Kaoutar Jenoui
Author 3: Laila El Abbadi

Keywords: Supply chain management; blockchain technologies; traceability; security validation; business validation

PDF

Paper 68: Acne Severity Classification on Mobile Devices using Lightweight Deep Learning Approach

Abstract: Acne is a prevalent skin condition affecting millions of people globally, impacting not just physical health but also mental well-being. Early detection of skin diseases such as acne is important for making treatment decisions to prevent the spread of the disease. The main goal of this project is to develop an Android mobile application with deep learning that allows users to diagnose skin diseases and detect the severity of the disease at three levels: mild, moderate, and severe. Most deep learning methods require devices with high computational resources, which mobile devices can hardly provide. To overcome this problem, this research focuses on lightweight Convolutional Neural Networks (CNNs), specifically the efficiency of MobileNetV2, for detecting skin diseases and their severity levels in an Android application. Android Studio is used to create the GUI, and the model is deployed successfully using TensorFlow Lite. Classification of acne images into severity levels (mild, moderate, and severe) achieves 92% accuracy. The system also demonstrated good results when implemented in an Android application with live camera input.

Author 1: Nor Surayahani Suriani
Author 2: Syaidatus Syahira Ahmad Tarmizi
Author 3: Mohd Norzali Hj Mohd
Author 4: Shaharil Mohd Shah

Keywords: Acne detection; severity level; MobileNetV2; convolutional neural network

PDF

Paper 69: SVNN-ExpTODIM Technique for Maturity Evaluation of Digital Transformation in Retail Enterprises Under Single-Valued Neutrosophic Sets

Abstract: The digital economy has become an important force driving the transformation of old and new driving forces in China's economy, and it also provides an opportunity for retail enterprises to "overtake" by changing lanes. Evaluating the maturity of digital transformation in retail enterprises plays an important role in their digital transformation process. Although more and more retail enterprises are realizing the important role of digital transformation in their own development, the digital transformation of retail enterprises is a complex issue that involves all aspects of retail enterprise management. Many retail enterprises still lack clear strategic goals and practical paths, as well as effective supporting assessments and institutional incentives, which may further widen the digital gap between retail enterprises. The maturity evaluation of digital transformation in retail enterprises is a multiple-attribute group decision-making (MAGDM) problem. Recently, the Exponential TODIM (ExpTODIM) technique has been employed to cope with MAGDM. Single-valued neutrosophic sets (SVNSs) are presented as a decision tool for characterizing fuzzy information during the maturity evaluation of digital transformation in retail enterprises. In this study, the single-valued neutrosophic number Exponential TODIM (SVNN-ExpTODIM) technique is presented to solve MAGDM under SVNSs. Finally, a numerical study of the maturity evaluation of digital transformation in retail enterprises is presented to validate the SVNN-ExpTODIM technique through comparative analysis.
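SVNSs describe each evaluation by truth, indeterminacy, and falsity degrees (T, I, F). A commonly used score function for ranking single-valued neutrosophic numbers is s = (2 + T − I − F) / 3; whether the paper uses exactly this score is not stated in the abstract, and the example values below are made up for illustration.

```python
def svnn_score(t, i, f):
    """Score of a single-valued neutrosophic number (T, I, F), each in [0, 1].
    A commonly used score function: s = (2 + T - I - F) / 3."""
    return (2 + t - i - f) / 3

# Ranking two alternatives by score (illustrative values, not from the paper).
a = (0.8, 0.1, 0.2)   # high truth, low indeterminacy and falsity
b = (0.5, 0.4, 0.3)
print(svnn_score(*a) > svnn_score(*b))  # a ranks above b -> True
```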

Author 1: Xiaoling Yang

Keywords: Multiple-attribute group decision-making (MAGDM); single-valued neutrosophic sets (SVNSs); information entropy; exponential TODIM; maturity evaluation of digital transformation

PDF

Paper 70: Analysis of Research Trends in Maritime Communication

Abstract: The maritime industry plays an important role in the transport of goods and passengers and is a major contributor to global trade. With the advent of new communication technologies, advances in Artificial Intelligence, and the ubiquitous Internet of Things, the maritime industry is evolving day by day. Effective communication plays a key role in ensuring the smooth operation of maritime activities, and researchers in this domain need to understand and analyze research trends that can offer various insights. To this end, this paper provides a clear picture of the scientific landscape in maritime communication based on data available in the Scopus database. Scopus is the largest abstract and citation database from Elsevier and provides comprehensive detail about the literature in various subject fields. This research considers data from 2013 to 2023 for the analysis. A total of 505 publications were obtained from the database, including various document types such as articles, conference papers, and reviews. The analysis is carried out from various perspectives, including year, country, subject area, funding sponsor, document type, affiliation, author, and source. Further, to understand mutual relations, the collaborations between different countries, the co-occurrence of various keywords, and the bibliographic coupling among diverse sources are also analyzed. This analysis serves researchers willing to work in this area and other stakeholders seeking to understand various perspectives in this domain.

Author 1: G. Pradeep Reddy
Author 2: Shrutika Sinha
Author 3: Soo-Hyun Park

Keywords: Artificial intelligence (AI); internet of things (IoT); maritime communication; maritime research trends; Scopus

PDF

Paper 71: Adaptive Residual Attention Recommendation Model Based on Interest Social Influence

Abstract: Existing social recommendation models mostly directly use original social data in the social space. However, original social data may contain a large amount of redundant and noisy social relationships. Additionally, existing feature fusion methods struggle to adaptively fuse features between nodes deeply, which can degrade the recommendation performance of the model. Addressing these issues, this paper proposes an Adaptive Residual Attention Recommendation Model based on Interest Social Influence. Firstly, we construct a novel Interest Social Mapping Module to model the confidence of social relationships based on user interests and map original social data to interest social space, thereby gaining a deeper understanding of user interest relationships in social networks. Secondly, we introduce a unique Social Selection Mechanism that dynamically filters and removes meaningless social interactions in the interest social space using social confidence scores, effectively filtering out social information that may interfere with or mislead users. Finally, we design an Adaptive Residual Attention Mechanism to flexibly adjust the feature fusion method of nodes, thereby obtaining more effective node information to improve recommendation accuracy. Experimental results show that compared to several state-of-the-art methods, the proposed model exhibits significant improvements on the Ciao and Epinions datasets.

Author 1: Sheng Fang
Author 2: Xiaodong Cai
Author 3: Yun Xue
Author 4: Wei Lu

Keywords: Social recommendation; redundant and noisy; interest social mapping; social selection mechanism; adaptive residual attention mechanism

PDF

Paper 72: Receive Satellite-Terrestrial Networks Data using Multi-Domain BGP Protocol Gateways

Abstract: In terms of communication media, computer network technology has advanced significantly as a means of communication between devices. The Border Gateway Protocol (BGP) is an Internet protocol used to route traffic and share data between autonomous systems (AS). Currently, however, BGP version 5 (BGP-5) has a fairly prevalent problem that degrades the performance of modern IP networks: high convergence delay when making routing changes. Since their emergence at the start of the twenty-first century, satellite-terrestrial networks (STN) have drawn attention. Particularly in data centers and enterprise networks, this technology has greatly improved traffic control, administration, and monitoring. When adopting the STN paradigm, difficulties were discovered in providing administrative control, security, and monitoring across domain borders. BGP-5 is used in a multi-domain STN to route traffic and communicate data across many domains or autonomous systems. Through fewer advertisement pathways, BGP-5 shields terrestrial networks from the high dynamics of satellites. Furthermore, a genuine network environment is constructed for authentic testing. According to the findings, BGP-5 can lower CPU consumption by 8.23% to 9.56% and the bandwidth resource occupancy of the terrestrial network by 32.12% to 73.26%.

Author 1: Tieshi Song
Author 2: Zhanbo Liu

Keywords: Internet of Things; satellite-terrestrial networks; multi-domain; BGP-5; protocol gateways

PDF

Paper 73: High-Resolution Remote Sensing Image Object Detection System for Small Unmanned Aerial Vehicles Based on MPSOC

Abstract: With the maturation of remote sensing, the applications of small unmanned aerial vehicles are rapidly expanding, and efficient image object detection algorithms have become crucial for information extraction by unmanned aerial vehicles. To meet this demand, an improved YOLOv5s algorithm was developed and deployed within a multi-processor system to optimize object detection performance in high-resolution remote sensing images captured by small unmanned aerial vehicles. Through adjustments to the structure and parameters of YOLOv5s, the algorithm was enhanced to improve object recognition in high-resolution remote sensing imagery. Experimental results demonstrated that the improved YOLOv5s (I-YOLOv5s) algorithm effectively mitigates interference from shadows and other external factors, enabling precise identification of objects. During training, I-YOLOv5s exhibited faster convergence, reaching an optimal state after approximately 176 iterations. In the performance evaluation, the algorithm achieved F1 and Recall values of 0.92 and 0.94, respectively, significantly outperforming single-shot multibox detectors. I-YOLOv5s attained a maximum average precision of 0.96, markedly higher than comparative algorithms, with its loss value reduced to a mere 0.06. The enhanced algorithm not only improves the accuracy and efficiency of object detection but also substantially advances the application of unmanned aerial vehicles in fields such as environmental monitoring, traffic management, and disaster assessment.

Author 1: Hui Xia

Keywords: UAVs; remote sensing images; object recognition; deep learning

PDF

Paper 74: Dynamic Shader Termination and Throttling for Side-Channel Security on GPUOwl

Abstract: GPUs are becoming increasingly appealing targets for side-channel attacks because of their high levels of parallelism and shared hardware resources. To mitigate side-channel attacks on GPUs, this research presents a novel dynamic shader termination and throttling approach. The main concept is to use runtime profiling and heuristics to dynamically terminate shader programs and restrict their frequency and concurrency. We implement the suggested method on the open-source GPGPU simulator GPUOwl. Our findings show that the proposed method can successfully thwart a variety of side-channel attacks while introducing only modest overhead: across a range of benchmarks, the average overhead of dynamic shader termination and throttling is 5.6%. At the same time, it successfully thwarts recently demonstrated cache-based and timing-based side-channel attacks on GPUs. Thus, the proposed technique offers an efficient software-based defence to enhance the side-channel security of GPUs.

Author 1: Nelson Lungu
Author 2: Satyendr Singh
Author 3: Simon Tembo
Author 4: Manoj Ranjan Mishra
Author 5: Hani Moaiteq Aljahdali
Author 6: Lalbihari Barik
Author 7: Parthasarathi Pattnayak
Author 8: Mahendra Kumar Gourisaria
Author 9: Sudhansu Shekhar Patra

Keywords: Graphics processing units; security; side-channel attacks; shader throttling; GPUOwl

PDF

Paper 75: LSTM-GNOG: A New Paradigm to Address Cold Start Movie Recommendation System using LSTM with Gaussian Nesterov’s Optimal Gradient

Abstract: On modern streaming platforms, the movie recommendation system is an important tool for enabling users to find new content tailored to their interests. To address the cold start problem prevalent in movie recommendation systems, we introduce the Long Short-Term Memory-Gaussian Nesterov’s Optimal Gradient (LSTM-GNOG) approach. This model utilizes both implicit and explicit feedback to effectively manage sparse rating data. By integrating Bayesian Personalized Ranking (BPR) and Probabilistic Matrix Factorization (PMF) algorithms with preprocessing via Singular Value Decomposition (SVD), our system enhances data robustness. Our empirical results on the MovieLens 100K, MovieLens 1M, FilmTrust, and Ciao datasets demonstrate significant improvements, with Mean Absolute Error (MAE) values of 0.4962, 0.5249, 0.4625, and 0.5341, respectively. Compared to traditional methods such as Unsupervised Boltzmann Machine-based Time-aware Recommendation (UBMTR) and the Efficient Gowers-Jaccard-Sigmoid Measure (EGJSM), LSTM-GNOG achieves higher prediction accuracy. These results underscore the effectiveness of LSTM-GNOG in overcoming data sparsity issues in movie recommendations.
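The abstract does not spell out the GNOG update rule, but the optimizer's name points to the Nesterov accelerated-gradient family, whose standard step can be sketched on a toy quadratic (this is a generic illustration, not the paper's optimizer):

```python
def nesterov_minimize(grad, x0, lr=0.1, momentum=0.9, steps=200):
    """Nesterov's accelerated gradient: evaluate the gradient at the
    look-ahead point x + momentum * v, then update velocity and position."""
    x, v = x0, 0.0
    for _ in range(steps):
        g = grad(x + momentum * v)     # look-ahead gradient
        v = momentum * v - lr * g
        x = x + v
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_star = nesterov_minimize(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_star, 4))  # converges near 3.0
```

The look-ahead gradient evaluation is what distinguishes Nesterov momentum from plain heavy-ball momentum, which evaluates the gradient at the current point.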

Author 1: Ravikumar R N
Author 2: Sanjay Jain
Author 3: Manash Sarkar

Keywords: Cold start; Gaussian Nesterov’s optimal gradient; long short-term memory; movie recommendation system; probabilistic matrix factorization

PDF

Paper 76: Artificial Intelligence-based Real-Time Electricity Metering Data Analysis and its Application to Anti-Theft Actions

Abstract: This study focuses on the key issue of identifying electricity-theft behavior in power systems, aiming to improve the security and efficiency of electrical energy management. Against the background of intelligent power grids, electricity theft not only causes serious economic losses but also threatens the stability of power grid operation. To address this, the paper proposes a novel and effective feature extraction and optimization method that utilizes the recursive feature elimination (RFE) technique, combined with correlation and exclusion analysis of the features, to achieve deep screening and dimensionality reduction of a large amount of raw data, thereby refining the core feature set that best discriminates electricity-theft behavior. The paper constructs a hybrid model integrating a long short-term memory network (LSTM) and an autoencoder. The model combines the advantages of LSTM in capturing time-series dependencies with the powerful ability of autoencoders in feature learning and noise reduction, and it is specifically designed for targeted detection of electricity-theft behavior, achieving real-time and accurate recognition. To verify the performance and practicality of the proposed method, this paper carries out rigorous simulation experiments and practical case studies. Compared with classical electricity-theft recognition methods, the results show that the proposed hybrid model exhibits significant advantages in both recognition accuracy and response speed. In both simulation environments and actual application scenarios, this method can effectively identify and warn of potential electricity theft, providing strong technical support for power companies’ anti-theft management.

Author 1: Kai Liu
Author 2: Anlei Liu
Author 3: Xun Ma
Author 4: Xuchao Jia

Keywords: Artificial intelligence; real-time electrical energy; metering data analysis; anti-power theft

PDF

Paper 77: An Efficient Ensemble Algorithm for Boosting k-Nearest Neighbors Classification Performance via Feature Bagging

Abstract: This paper proposes a novel ensemble algorithm aimed at improving the performance of k-Nearest Neighbors (KNN) classification by incorporating feature bagging techniques, which help overcome the inherent limitations of KNN in Big Data scenarios. The proposed algorithm, termed FBE (Feature Bagging-based Ensemble), employs an efficient ensemble strategy with sorted feature subset techniques to reduce the time complexity from linear to logarithmic. By focusing on essential features during iterative training and utilizing a binary search in the testing phase, FBE boosts computational efficiency and accuracy in high-dimensional and imbalanced datasets. Our study rigorously evaluates the proposed FBE algorithm against traditional KNN, Random Forest (RF), and AdaBoost algorithms across ten benchmark datasets from the UCI Machine Learning Repository. The experimental results demonstrate that FBE not only outperforms the conventional KNN and AdaBoost across all evaluated metrics (accuracy, precision, recall, and F1 score) but also shows competitive performance compared to RF. Specifically, FBE exhibits remarkable improvements in datasets characterized by high dimensionality and class imbalances. The main contributions of this research include the development of an adaptive KNN framework that addresses the typical computational demands and vulnerability to noise in the data, making it well-suited for large-scale datasets. The ensemble methodology within FBE also helps reduce overfitting, a common challenge in standard KNN models, by diversifying the decision-making process across multiple data subsets. This strategy ensures robustness and reliability, positioning FBE as a suitable tool for classification tasks in diverse domains such as healthcare and image processing.
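The feature-bagging idea behind FBE can be sketched as follows: several k-NN voters, each restricted to a random feature subset, combined by majority vote. This is a hedged illustration only; the subset sizes, k, and the sorted-subset/binary-search machinery of the actual FBE algorithm are not reproduced here.

```python
# Feature-bagging ensemble of k-NN classifiers (illustrative sketch).
import random
from collections import Counter

def knn_predict(train, labels, x, feats, k=3):
    # Rank training points by squared distance restricted to `feats`.
    dists = sorted(
        (sum((p[f] - x[f]) ** 2 for f in feats), lab)
        for p, lab in zip(train, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

def fbe_predict(train, labels, x, n_estimators=5, subset_size=2, seed=0):
    rng = random.Random(seed)
    n_feats = len(train[0])
    votes = Counter()
    for _ in range(n_estimators):
        feats = rng.sample(range(n_feats), subset_size)  # random feature subset
        votes[knn_predict(train, labels, x, feats)] += 1
    return votes.most_common(1)[0][0]

# Two well-separated classes in 3-D; any feature subset separates them.
train = [(0, 0, 0), (1, 0, 1), (9, 9, 8), (8, 9, 9)]
labels = ["low", "low", "high", "high"]
print(fbe_predict(train, labels, (8, 8, 8)))  # "high"
```

Diversifying the feature view per voter is what gives the ensemble its robustness to noisy or irrelevant dimensions in high-dimensional data.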

Author 1: Huu-Hoa Nguyen

Keywords: Bagging; ensemble; feature; k-nearest neighbors

PDF

Paper 78: A Novel Framework for Sentiment Analysis: Dimensionality Reduction for Machine Learning (DRML)

Abstract: Sentiment analysis is vital for understanding public opinion, but improving its performance is challenging due to the complexities of high-dimensional text data and diverse user-generated content. We propose a novel framework based on Dimensionality Reduction for Machine Learning (DRML) that enhances classification performance by 21.55% while reducing the dimension of the feature matrix by 99.63%. Our research addresses the fundamental question of whether it is possible to reduce the feature space significantly while improving sentiment analysis performance. Our approach employs Principal Component Analysis (PCA) to effectively capture essential textual features and includes an algorithm for identifying principal components from positive and negative reviews; we then create a supervised dataset by combining these components. Furthermore, we integrate a range of state-of-the-art machine learning algorithms (Decision Tree, K-Nearest Neighbours, Bernoulli Naïve Bayes, and Majority Voting Ensemble) into our framework, along with a custom tokenizer, to harness the full potential of reduced-dimensional data for sentiment classification. We have conducted extensive experiments using gold-standard multi-domain benchmark datasets from Amazon to show that DRML outperforms other state-of-the-art approaches. Our methodology achieves an average performance of 98.38%, a significant increase of 21.55% over the baseline methodology using Bag of Words (BoW). In terms of individual evaluation parameters, DRML shows an increase of 21.84% in Accuracy, 20.4% in Precision, 21.84% in Recall, and 22.11% in F1-score. Compared with state-of-the-art (SOTA) methodologies applied to the same benchmark dataset in recent years, our framework demonstrates a significant average increase in Accuracy for sentiment analysis of 10.96%. This substantial improvement underscores the effectiveness of our approach. To conclude, our research contributes to the field of sentiment analysis by introducing an innovative framework that not only improves the efficiency of sentiment analysis but also paves the way for the analysis of extensive textual data in diverse real-world applications.

Author 1: Dhamayanthi N
Author 2: Lavanya B

Keywords: Machine learning; text mining; natural language processing; sentiment analysis; opinion classification

PDF

Paper 79: Text Matching Model Combining Ranking Information and Negative Example Smoothing Strategies

Abstract: Current text matching methods struggle to capture fine-grained ranking information between texts and suffer from insufficient information interaction between different negative examples. To address these problems, a text matching model combining ranking information and a negative example smoothing strategy is proposed. First, Jensen-Shannon Divergence is used to keep consistent the rankings of the two sentence representations obtained from the input text under different Dropout masks. Second, the pre-trained SimCSE serves as the teacher model to obtain coarse-grained ranking information, which is distilled into the student model through the ListNet ranking algorithm to obtain fine-grained ranking information. Finally, negative examples are augmented by a negative example smoothing strategy, which effectively solves the problem of insufficient information interaction between negative examples without increasing the batch size. Experimental results on standard semantic textual similarity tasks show that the proposed model achieves a significant improvement in the Spearman correlation coefficient over existing state-of-the-art methods, proving its effectiveness.
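The Jensen-Shannon Divergence used to align the two Dropout-masked representations can be computed as below. The inputs here are toy probability distributions, not real model outputs; this only illustrates the symmetric, non-negative consistency measure the abstract refers to.

```python
# Jensen-Shannon Divergence: JSD(p, q) = (KL(p || m) + KL(q || m)) / 2,
# where m is the elementwise mean of p and q.
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.7, 0.2, 0.1]
q = [0.6, 0.3, 0.1]
print(jsd(p, p))      # 0.0 for identical distributions
print(jsd(p, q) > 0)  # strictly positive when they differ
```

Unlike plain KL divergence, JSD is symmetric, which is what makes it suitable as a consistency penalty between two interchangeable views of the same sentence.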

Author 1: Xiaodong Cai
Author 2: Lifang Dong
Author 3: Yeyang Huang
Author 4: Mingyao Chen

Keywords: Text matching; ranking information; negative example smoothing strategy; jensen-shannon divergence; listnet sorting algorithm

PDF

Paper 80: Pest Detection in Agricultural Farms using SqueezeNet and Multi-Layer Perceptron Model

Abstract: Pest detection is essential to protect agricultural systems from economic losses, lower food production, and environmental degradation. Detecting pests is a crucial aspect of agricultural sustainability because it helps to allocate resources, reduce production costs, and increase producers' profits. Artificial intelligence (AI) has revolutionized the detection of agronomic pests by employing deep learning models that accurately detect individual pests and differentiate between species and life stages. Combining SqueezeNet and a Multi-Layer Perceptron (MLP), this study extracts feature vectors from image data to detect pests. There are four primary phases: preprocessing, image embedding with SqueezeNet, final classification with the MLP, and 10-fold cross-validation. The data consist of 3150 plant-pest images, 350 from each class. The combined model demonstrates excellent performance, with accuracy greater than 99% in every experiment. This shows that SqueezeNet can effectively extract features from the data, while the Multi-Layer Perceptron can process these features for optimal classification performance. Nevertheless, several classes, such as mites, sawflies, and stem borers, are still misclassified, largely because each image's background is unique. These promising findings have broad implications for boosting agricultural output and decreasing pest-related losses. Optimal use of this approach in a variety of agricultural contexts requires further study and field testing.
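The 10-fold cross-validation step above can be sketched as an index split: the 3150 image indices are divided into ten folds, and each round holds one fold out for testing. This is a generic sketch; the per-class balancing the paper may use is not reproduced.

```python
# Generate 10-fold cross-validation train/test index splits.

def k_fold_indices(n, k=10):
    folds = [list(range(i, n, k)) for i in range(k)]  # round-robin assignment
    for held_out in range(k):
        test = folds[held_out]
        train = [i for f in range(k) if f != held_out for i in folds[f]]
        yield train, test

splits = list(k_fold_indices(3150))
print(len(splits))        # 10 rounds
print(len(splits[0][1]))  # 315 test images per fold
```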

Author 1: Intan Nurma Yulita
Author 2: Anton Satria Prabuwono
Author 3: Firman Ardiansyah
Author 4: Juli Rejito
Author 5: Asep Sholahuddin
Author 6: Rudi Rosadi

Keywords: Pest detection; Squeezenet; multi-layer perceptron; deep learning

PDF

Paper 81: Lightweight Fire Detection Algorithm Based on Improved YOLOv5

Abstract: Among all kinds of disasters, fire is one of the most frequent major threats to public safety and social development. The widely used smoke-sensor approach to fire detection is susceptible to factors such as distance, resulting in delayed detection. With the development of computer vision, image-based detection built on machine learning has surpassed traditional methods in both accuracy and speed and has gradually become the mainstream in fire detection. However, most methods proposed in related studies depend on high-performance hardware, which limits their practical application. This paper proposes an improved fire detection algorithm based on the YOLOv5 model to address the common issues of high memory usage, slow detection speed, and high operating cost in current fire detection algorithms. The algorithm introduces the FasterNet network into the backbone to reduce memory usage and improve detection speed, uses Ghost-Shuffle Convolution (GSConv) in the neck network to reduce the number of parameters and the computational cost, and introduces a one-shot aggregation cross-stage partial network module (VoV-GSCSP) to enhance feature extraction capability and improve detection accuracy. The experimental results show that, compared with the original YOLOv5 model, the improved model achieves better recognition performance, with an average accuracy of 98.3%, a 31.4% reduction in memory usage, and a 13% increase in detection speed. The number of parameters decreased by 33%, and the computational workload decreased by 35%. The improved algorithm can identify fires quickly and accurately, and the lightweight model is well suited for deployment on general embedded hardware.

Author 1: Dawei Zhang
Author 2: Yutang Chen

Keywords: YOLOv5; FasterNet; GSConv; VoV-GSCSP; fire detection

PDF

Paper 82: A Taxonomy of IDS in IoTs: ML Classifiers, Feature Selection Models, Datasets and Future Directions

Abstract: The applications of the Internet of Things (IoT) are becoming increasingly popular nowadays. Network security and privacy are major concerns for the IoT: because many IoT devices connect to the network via the Internet, IoT networks are vulnerable to various cyber-attacks. An Intrusion Detection System (IDS) addresses these security and privacy issues by protecting IoT networks from different types of attacks. In this paper, we provide a taxonomy of IDS in IoT. Different Machine Learning (ML) classifiers, feature selection models, and datasets with high detection accuracy are presented. Our analysis indicates a heightened emphasis on ML-based IDS, with Support Vector Machines (SVMs) at 33% and Random Forests (RFs) at 31% being the most widely used classifiers. Despite the diversity of datasets used for IDS, the NSL-KDD is the most common, appearing in 49% of studies. In the realm of feature selection, the K-means and SMO algorithms emerge with an impressive 99.33%, the highest figure reported in previous research on feature selection for ML-based IDS. Moreover, we address future pathways and challenges of IDS detection.

Author 1: Hessah Alqahtani
Author 2: Monir Abdullah

Keywords: Intrusion detection system; feature selection; support vector machine; random forest; decision tree; NSL-KDD

PDF

Paper 83: Two-Step Classification for Solving Data Imbalance and Anomalies in an Altman Z-Score-based Bankruptcy Prediction Model

Abstract: Differences in bankruptcy regulations with varying value parameters cause data anomalies when implemented in the Altman Z-Score model. Another common problem in bankruptcy predictions is imbalanced data; the number of companies that fall into the bankruptcy category is much smaller than those that do not. Therefore, a novel method was proposed to address data imbalance and anomalies in an Altman Z-Score-based bankruptcy prediction model. The proposed method employs a two-step classification controlled with data binning. Assumption values were used to set the proportion of distress and non-distress classes. Quartile calculation-based data binning is then used to ordinally rank the non-distress category into three classes. Furthermore, a two-step classification was performed using the Long-Short Term Memory (LSTM) method, followed by a rule-based classification method. The LSTM method predicts output in the form of one class representing the distress zone and three classes representing non-distress zone subcategories. The results are then processed using a rule-based classification to summarize the output into a two-class classification, where all data not in the distress zone class is part of the non-distress zone. The performance evaluation shows promising results, with outcomes closely matching the source bankruptcy data. These findings strengthen the evidence that the Altman Z-Score is a powerful tool for bankruptcy prediction and demonstrate that the proposed method can improve the Altman Z-Score model in handling differences in data value parameters.

Author 1: Abdul Syukur
Author 2: Arry Maulana Syarif
Author 3: Ika Novita Dewi
Author 4: Aris Marjuni

Keywords: Bankruptcy prediction; Altman Z-Score; data imbalance and anomaly; data binning; two-step classification; LSTM; rule-based classification

PDF

Paper 84: Real-Time Air Quality Monitoring Model using Fuzzy Inference System

Abstract: Air pollution is a serious environmental and social issue that affects people's health, ecosystems, and the environment, and it currently poses a number of health problems to the ecosystem. Urban and metropolitan air quality is the most important factor with a direct impact on disease occurrence and people's quality of life. It is critical to establish real-time air quality monitoring in order to make timely decisions based on measurements and evaluations of environmental factors. Monitoring systems are influential in many smart city initiatives for tracking air quality and reducing pollutant concentrations in metropolitan areas. The Internet of Things (IoT) is becoming increasingly important in a variety of sectors, including air quality monitoring. In this research, a real-time air quality monitoring model employing fuzzy inference is proposed, monitoring air pollution through multiple parameters such as Sulphur Dioxide (SO2), Nitrogen Dioxide (NO2), Carbon Monoxide (CO), Ozone (O3), and Suspended Particulates (PM10). The proposed fuzzy inference system presents a novel technique for improving air quality monitoring and delivers better results, monitoring air quality more efficiently and effectively.
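A single-pollutant slice of such a fuzzy inference system can be sketched as below: triangular membership functions, two rules, and weighted-average defuzzification. The membership breakpoints, rule set, and crisp output scores are illustrative assumptions, not the thresholds used in the paper.

```python
# Mamdani-style fuzzy inference sketch for one input (PM10 concentration).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def air_quality_index(pm10):
    # Rule 1: PM10 is "low"  -> air quality "good" (crisp score 25)
    # Rule 2: PM10 is "high" -> air quality "poor" (crisp score 85)
    mu_low = tri(pm10, -1, 0, 60)
    mu_high = tri(pm10, 40, 100, 161)
    total = mu_low + mu_high
    # Weighted-average defuzzification over the fired rules.
    return (mu_low * 25 + mu_high * 85) / total if total else 50.0

print(air_quality_index(10) < air_quality_index(90))  # worse air -> higher score
```

A full system would combine memberships for SO2, NO2, CO, O3, and PM10 through a larger rule base, but the fire-and-defuzzify pattern is the same.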

Author 1: Muhammad Saleem
Author 2: Nitinkumar Shingari
Author 3: Muhammad Sajid Farooq
Author 4: Beenu Mago
Author 5: Muhammad Adnan Khan

Keywords: IoT; fuzzy inference system; smart city; air quality monitoring

PDF

Paper 85: From Technical Indicators to Trading Decisions: A Deep Learning Model Combining CNN and LSTM

Abstract: Stock market prediction is a highly attractive and popular field within finance, driven by the potential for significant profits that come with substantial risks due to data non-linearity and complex economic principles. Extracting features from trading data is crucial in this domain, and numerous strategies have been developed. Among these, deep learning has achieved impressive results in financial applications because of its robust data processing capabilities. In our study, we propose a hybrid deep learning model, the CNN-LSTM, which combines the 2D Convolutional Neural Network (CNN) for image processing with the Long Short-Term Memory (LSTM) network for managing image sequences and classification. We transformed the top 15 of 21 technical indicators from financial time series into 15x15 images for 21 different day periods. Each image is then categorized as Sell, Hold, or Buy based on the trading data. Our model demonstrates superior performance in stock predictions over other deep learning models.
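The image-construction step above can be sketched as a windowing operation: 15 indicator series are stacked over a 15-day window into a 15x15 matrix for the 2D CNN. The indicator values here are synthetic, and the Sell/Hold/Buy labeling is omitted.

```python
# Build a 15x15 "image" from 15 technical-indicator series ending at `day`.

def indicators_to_image(series_by_indicator, day):
    """Rows = indicators, columns = the 15 trading days ending at `day`."""
    return [s[day - 14 : day + 1] for s in series_by_indicator]

# 15 synthetic indicator series, each 30 days long.
series = [[(i + t) % 7 for t in range(30)] for i in range(15)]
img = indicators_to_image(series, day=29)
print(len(img), len(img[0]))  # 15 15
```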

Author 1: SAHIB Mohamed Rida
Author 2: ELKINA Hamza
Author 3: ZAKI Taher

Keywords: Stock market prediction; CNN-LSTM hybrid model; financial time series; technical indicators; CNN; LSTM

PDF

Paper 86: Multimodal Sentiment Analysis using Deep Learning Fusion Techniques and Transformers

Abstract: Multimodal sentiment analysis extracts sentiments from multiple modalities like text, images, audio, and videos. Most current sentiment classifiers are based on a single modality, which is less effective due to their simple architectures. This paper studies multimodal sentiment analysis by combining several deep learning text and image processing models. The fusion techniques are RoBERTa with EfficientNet-b3, RoBERTa with ResNet50, and BERT with MobileNetV2. This paper focuses on improving sentiment analysis through the combination of text and image data. The performance of each fusion model is carefully analyzed using accuracy, confusion matrices, and ROC curves. The fusion techniques implemented in this study outperformed the previous benchmark models. Notably, the EfficientNet-b3 and RoBERTa combination achieves the highest accuracy (75%) and F1 score (74.9%). This research contributes to the field of sentiment analysis by showing the potential of combining textual and visual data for more accurate sentiment analysis, laying the groundwork for future work on multimodal sentiment analysis.

Author 1: Muhaimin Bin Habib
Author 2: Md. Ferdous Bin Hafiz
Author 3: Niaz Ashraf Khan
Author 4: Sohrab Hossain

Keywords: Multimodal sentiment analysis; deep learning; transfer learning; natural language processing; image processing; BERT

PDF

Paper 87: Assessing the Impact of Digitalization on Internal Auditing Function

Abstract: Over the past decades, the business environment has become increasingly digitized, and advances in new technologies are driving significant organizational change. Over the years, internal audit, as a governance actor, has adapted to meet the demands of the evolving business environment, and its role in consulting activities has been a significant topic of debate in the literature. This research studies the impact of the digitalization of organizations on the internal audit function. The method used to achieve this goal is a survey of 175 internal auditors and managers working for companies in various sectors. The results indicate a positive relationship between the level of digitalization of the organization and the diversion of risks. This requires greater agility on the part of internal audit, achieved by strengthening the digital skills of auditors, particularly in data analysis, to meet the needs of different stakeholders. The results also indicate that the level of digitalization of the organization has an indirect effect on the level of integration of consulting missions in the internal audit plan, a new role that internal audit is developing to support added value.

Author 1: Khawla Karimallah
Author 2: Hicham Drissi

Keywords: Digitalization; data analytics; organization; Internal Audit Function (IAF); agility

PDF

Paper 88: A Comprehensive Machine Learning Framework for Anomaly Detection in Credit Card Transactions

Abstract: Cybercrimes take a variety of forms, and the majority involve credit cards. Despite various steps taken to prevent credit card fraud, it is crucial to alert customers to unusual attempts at fraudulent transactions, and the internet has been largely geared to meet this challenge. Many studies have been published over the years on identifying anomalies in credit card transactions, and machine learning (ML) has played a significant role in this. Though various anomaly detection techniques are in place, transaction irregularities remain, especially during banking card transactions. The objective of this work is to develop an efficient machine learning model for identifying anomalies in credit card transactions, taking into account the limitations of existing frameworks. The proposed research employs an ML framework comprising data preprocessing, discovering correlations, outlier removal, feature reduction, and classification with a sampling trade-off. The framework uses classifiers such as logistic regression, kNN, support vector machines, and decision trees. The NearMiss and SMOTE approaches are used to address overfitting and underfitting issues through a sampling trade-off, which is the defining feature of this research. Significant improvement was noticed when the machine learning models were evaluated on fresh data after the sampling trade-off.
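The SMOTE side of the sampling trade-off above can be sketched as follows: synthesize minority-class (fraud) samples by interpolating between a minority point and a minority neighbour. This is a simplified illustration, using only the single nearest neighbour rather than SMOTE's usual k-nearest choice; the data are toy values.

```python
# SMOTE-style oversampling sketch: new points are convex combinations of
# existing minority points, so they stay inside the minority region.
import random

def nearest(points, i):
    return min(
        (j for j in range(len(points)) if j != i),
        key=lambda j: sum((a - b) ** 2 for a, b in zip(points[i], points[j])),
    )

def smote(minority, n_new, seed=0):
    rng = random.Random(seed)
    synth = []
    for _ in range(n_new):
        i = rng.randrange(len(minority))
        j = nearest(minority, i)
        gap = rng.random()  # interpolation factor in [0, 1)
        synth.append(tuple(a + gap * (b - a) for a, b in zip(minority[i], minority[j])))
    return synth

fraud = [(0.9, 0.8), (1.0, 1.1), (1.2, 0.9)]
new_points = smote(fraud, 4)
print(len(new_points))  # 4 synthetic fraud samples
```

NearMiss works in the opposite direction, undersampling the majority class; using both lets the trade-off be tuned against overfitting and underfitting.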

Author 1: Fathe Jeribi

Keywords: Cybersecurity; anomaly detection; machine learning; optimization; nearmiss; SMOTE

PDF

Paper 89: Defect Prediction of Finite State Machine Models Based on Transfer Learning

Abstract: As software systems become increasingly intricate, predicting cache defects has emerged as a crucial aspect of maintaining software quality. This article introduces a novel approach for predicting cache defects that combines transfer learning (TL) with a deterministic finite state machine (DFSM) model of the software, aiming to improve the effectiveness and accuracy of cache defect prediction. By merging the precision of the DFSM with the versatility of TL, the proposed technique transfers knowledge learned from source projects to target projects through training, addressing the data scarcity faced by new or evolving projects. Experimental findings reveal that as the training data grows, the method's test coverage and fault detection rate steadily increase, and it demonstrates strong execution efficiency and stability. Compared with traditional methods, the approach offers substantial benefits in elevating software quality and reliability, providing a fresh and efficient tool for quality assurance. Thanks to the TL strategy, the method rapidly adapts to the unique environments and requirements of new or evolving projects, thereby enhancing prediction accuracy and efficiency.

Author 1: Wei Zhang

Keywords: Transfer learning; DFSM; software defects; defect prediction

PDF

Paper 90: A Novel Fuzzy-based Spectrum Allocation (FBSA) Technique for Enhanced Quality of Service (QoS) in 6G Heterogeneous Networks

Abstract: This research focuses on Device-to-Any-device (D2A) communication for 6G in unpredictable circumstances, where the topology of the D2A network changes over time as a result of device mobility. Highly sophisticated applications demanding ultra-low latency and ultra-high data rates can be made achievable by cellular D2A communications in 6G. The best way to ensure Quality of Service (QoS) is to make the most of the scarce MAC-layer resources, and spectrum allocation is crucial for sharing information between D2A systems and a variety of devices. In this paper, a novel Fuzzy-Based Spectrum Allocation (FBSA) approach is established to distribute resources for D2A efficiently and rationally. A system model for D2A transmission is established for metropolitan regions, and both secure and non-secure services are implemented in the network to assess its performance under this technique. Prior works, by contrast, could not deliver guaranteed services due to low resource utilization. Riverbed Modeler simulation results show that the proposed approach can significantly enhance resource usage and satisfy the requirements of D2A systems.

Author 1: S. B. Prakalya
Author 2: Samuthira Pandi V
Author 3: S. Sujatha
Author 4: R. Thangam
Author 5: D. Karunkuzhali
Author 6: G. Keerthiga

Keywords: FBSA; D2A; 6G; spectrum allocation; QoS

PDF

Paper 91: Quality of Service-Oriented Data Optimization in Networks using Artificial Intelligence Techniques

Abstract: This paper outlines a comprehensive AI-driven Quality of Service (QoS) optimization method, presenting a rigorous examination of its effectiveness through extensive experimentation and analysis. By applying real-world datasets to simulate network environments, the study systematically evaluates the proposed method's impact across various QoS metrics. Key findings reveal substantial enhancements in reducing average latency, minimizing packet loss, and boosting bandwidth utilization compared to baseline scenarios, with the Deep Deterministic Policy Gradient (DDPG) model showcasing the most notable improvements. The research demonstrates that AI optimization strategies, particularly those leveraging DQN and DDPG algorithms, significantly improve upon conventional methods. Specifically, post-migration optimizations lead to a recovery and even surpassing of pre-migration QoS levels, with delays dropping below initial readings, packet loss nearly eliminated, and bandwidth utilization markedly improved. The study further illustrates that while lower learning rates necessitate longer convergence times, they ultimately facilitate superior model performance and stability. In-depth case studies within a cloud data center setting underscore the system's proficiency in handling large-scale Virtual Machine (VM) migrations with minimal disruption to network performance. The AI-driven optimization successfully mitigates the typical latency spikes, packet loss increases, and resource utilization dips associated with VM migrations, thereby affirming its practical value in maintaining high network efficiency and stability during such operations. Comparative analyses against traditional traffic engineering methods, rule-based controls, and other machine learning approaches consistently place the AI optimization method ahead, achieving up to an 8% increase in throughput alongside a 2 ms decrease in latency. Furthermore, the technique excels in reducing packet loss by 25% and elevating resource utilization rates, underscoring its prowess in enhancing network efficiency and stability. Robustness and scalability assessments validate the method's applicability across diverse network scales, traffic patterns, and congestion levels, confirming its adaptability and effectiveness in a wide array of operational contexts. Overall, the research conclusively evidences the AI-driven QoS optimization system's capacity to tangibly enhance network performance, positioning it as a highly efficacious solution for contemporary networking challenges.

Author 1: Zhenhua Yang
Author 2: Qiwen Yang
Author 3: Minghong Yang

Keywords: Artificial intelligence; networking; quality of service-oriented; data optimization

PDF

Paper 92: Evolving Security for 6G: Integrating Software-Defined Networking and Network Function Virtualization into Next-Generation Architectures

Abstract: As technology continues to advance, the emergence of 6G networks is imminent, promising unprecedented levels of connectivity and innovation. A critical aspect of designing the security architecture for 6G networks revolves around the utilization of Software-Defined Networking (SDN) and Network Function Virtualization (NFV) technologies. By harnessing the capabilities of SDN and NFV, the security infrastructure of 6G networks stands to gain significant advantages in terms of flexibility, scalability, and agility. SDN facilitates the decoupling of the network control plane from the data plane, enabling centralized management and control of network resources. This article examines the synergistic relationship between SDN and NFV in enhancing the resilience and adaptability of 6G security architectures, offering insights into key challenges, emerging trends, and future directions in securing the next generation of wireless networks.

Author 1: JAADOUNI Hatim
Author 2: CHAOUI Habiba
Author 3: SAADI Chaimae

Keywords: 6G Network; network function virtualization; software defined network; security; architecture

PDF

Paper 93: Improving Image Stitching Effect using Super-Resolution Technique

Abstract: This paper aims to present a novel methodology that merges image stitching with super-resolution techniques, enabling the creation of a high-resolution panoramic image from several low-resolution inputs. The proposed approach comprehensively addresses challenges throughout the process, encompassing image preprocessing, alignment and handling of mismatches, stitching, super-resolution reconstruction, and post-processing. Employing advanced methodologies such as Convolutional Neural Networks (CNNs), Scale-Invariant Feature Transform (SIFT), Random Sample Consensus (RANSAC), GrabCut algorithm, Super-Resolution Convolutional Neural Network (SRCNN), gradient domain optimization, and Structural Similarity Index Measure (SSIM), each step meticulously tackles specific issues inherent to image stitching tasks. A key innovation lies in the synergy of image stitching and super-resolution techniques, yielding a solution that boasts high robustness and efficiency. This versatile method is adaptable to diverse image processing contexts. To validate its effectiveness, experiments were conducted on two established datasets, USIS-D and VGG, where a quartet of quantitative metrics – Peak Signal-to-Noise Ratio (PSNR), SSIM, Entropy (EN), and Quality Assessment of Blurred Faces (QABF) – were employed to gauge the quality of stitched images against alternative methods. The outcomes decisively illustrate the superiority of our proposed method, achieving superior performance across all metrics and producing panoramas devoid of seams and distortions. This work thereby contributes a significant advancement in the realm of high-fidelity panoramic image reconstruction.
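Of the four quantitative metrics listed above, PSNR is the simplest to state precisely. The sketch below computes it on tiny grayscale value lists rather than real stitched panoramas, purely to illustrate the metric.

```python
# Peak Signal-to-Noise Ratio: 10 * log10(MAX^2 / MSE), in decibels.
import math

def psnr(ref, test, max_val=255.0):
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical signals
    return 10 * math.log10(max_val ** 2 / mse)

ref = [52, 55, 61, 59, 79, 61, 76, 61]
noisy = [r + 2 for r in ref]  # uniform +2 error -> MSE of 4
print(round(psnr(ref, noisy), 2))  # 42.11
```

Higher PSNR means the stitched result deviates less from the reference; values above roughly 30 dB are generally considered good for 8-bit images.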

Author 1: Jinjun Liu

Keywords: Image; stitching; super-resolution technology; vision and image processing

PDF

Paper 94: The Design and Execution of a Multimedia Information Intelligent Processing System Oriented to User Experience

Abstract: With the rapid growth of the world economy and the increasing pursuit of culture and entertainment, the integration of multimedia database technology and networks has become crucial. This integration allows seamless handling of multimedia information (MI) and accelerates the development of cultural exchange on the internet. This article studies and designs an MI intelligent processing system oriented to user experience (UE). The system integrates multimedia database and network technologies, aiming to provide seamless integration of multimedia information, accelerate cultural exchange on the network, and enrich users' cultural experience. In the system design, we propose a UE mode based on context-aware technology and develop an innovative access selection algorithm that dynamically selects the best access path based on network status and user preferences. The experimental results show that the algorithm performs well in terms of throughput, latency, and link load, effectively meeting users' Quality of Experience (QoE) requirements. In addition, the system is highly scalable and can cope with constantly growing data and computing needs without sacrificing performance. The system not only provides users with a richer and more personalized cultural experience but also offers strong support for building a more interconnected global community.

Author 1: Hongmei Liu

Keywords: User experience; multimedia information; intelligent processing; wireless network

PDF

Paper 95: Optimized Task Scheduling in Cloud Manufacturing with Multi-level Scheduling Model

Abstract: Cloud Manufacturing (CMfg) utilizes the cloud computing paradigm to provide manufacturing services over the Internet flexibly and cost-effectively, where users only pay for what they use and may access services as needed. The scheduling method directly impacts the overall efficiency of CMfg systems. Manufacturing industries supply services aligned with customer-specific needs recorded in CMfg systems. CMfg managers develop manufacturing strategies based on real-time demand to establish service delivery timing. Many elements influence customer satisfaction, including dependability, timeliness, quality, and pricing. Therefore, CMfg depends on the use of multi-objective and real-time task scheduling. Multi-objective evolutionary algorithms have effectively examined many solutions, such as non-dominated, Pareto-efficient, and Pareto-optimal solutions, using both actual and synthetic workflows. This study introduces a new Multi-level Scheduling Model (MSM) and evaluates its effectiveness by comparing it with other multi-objective algorithms, including the weighted genetic algorithm, the Non-dominated Sorting Genetic Algorithm II, and the Strength Pareto Evolutionary Algorithm. The primary emphasis is on assessing the efficacy of algorithms and their suitability in commercial multi-cloud setups. The MSM's dynamic nature and adaptive features are emphasized, indicating its ability to effectively handle the complexity and demands of CMfg and resolve the scheduling issue within this environment. Experimental results suggest that MSM outperforms other algorithms by achieving a 20% improvement in makespan.
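
The non-dominated (Pareto) solutions that these algorithms search for are defined by a simple dominance test; a minimal sketch, assuming all objectives (e.g. makespan, cost) are minimized:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]
```

Multi-objective evolutionary algorithms maintain and refine such a front across generations rather than collapsing the objectives into one weighted sum.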

Author 1: Xiaoli ZHU

Keywords: Cloud manufacturing; multi-level scheduling model; task scheduling; multi-objective optimization; resource allocation

PDF

Paper 96: Creativity in the Digital Canvas: A Comprehensive Analysis of Art and Design Education Pedagogy

Abstract: Promoting creativity in the dynamic field of education has become a critical goal for educators, aiming to prepare students with the essential abilities for success in various professional and personal situations. As educational institutions globally attempt to promote creative learning outcomes, there is still a notable lack of knowledge regarding efficient techniques for teaching creativity. In this paper, we address the pressing need to bridge the knowledge gap associated with teaching creativity in artistic disciplines. The goal is to offer educators and researchers detailed knowledge of the methods used to promote creativity in art and design education by combining research, historical insights, and modern advancements. We explore the complexities of creative ideas, both classic and current educational methods, as well as the distinct problems and possibilities in art and design education. Finally, the study provides insights into the ongoing debate about creativity with respect to art and design education, offering suggestions for pedagogical innovation in the future to meet the dynamic challenges and potentials within the artistic and design disciplines.

Author 1: Qian TONG

Keywords: Creativity; art and design education; pedagogical practices; learning outcomes; assessment; grounded theory

PDF

Paper 97: Identification of the Main Traditional Project Management Methods Through a Systematic Literature Review

Abstract: Traditional project management methods are specific and predictable and seek to keep planning as detailed as possible, and even over time, companies continue to integrate them into their processes. The present study aims to survey the main traditional methods of Project Management and present them in more detail through a Systematic Literature Review. In this review, 37 articles were found and analyzed to answer five research questions. The research questions focused on: the main traditional project management methods, the most relevant maturity models, trends in the area, and the challenges and future directions for project management. As the main results, PMBOK was identified as the main traditional method, followed by PRINCE2, the ISO 21500 standard, and the CTCR methodology. In addition, among the tools, the Gantt Chart, Earned Value Management, Critical Chain Project Management, and the TOC Method stand out as the most relevant. It is therefore possible to obtain a broad and detailed view of the main traditional methods of PM, so that researchers in the area can make better decisions in choosing the appropriate method for their type of project. As for challenges and future directions, the article pointed out that project processes are currently complex and therefore often miss their initial deadline, cost, quality, and business goals. Notable difficulties in PM include: delays in the schedule, lack of clearly defined objectives and of support from leadership, scope changes, insufficient resources, poor risk management and measurement of project performance, and lack of communication.
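
The Earned Value Management tool highlighted among the results reduces to a few standard formulas; a small illustrative helper (not taken from the reviewed articles):

```python
def earned_value_metrics(pv, ev, ac):
    """Standard Earned Value Management indices from Planned Value (PV),
    Earned Value (EV), and Actual Cost (AC):
    CPI > 1 means under budget; SPI > 1 means ahead of schedule."""
    return {
        "cost_variance": ev - ac,       # CV = EV - AC
        "schedule_variance": ev - pv,   # SV = EV - PV
        "cpi": ev / ac,                 # Cost Performance Index
        "spi": ev / pv,                 # Schedule Performance Index
    }
```

For example, a project with PV = 90, EV = 80, and AC = 100 is both behind schedule (SPI < 1) and over budget (CPI < 1).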

Author 1: Fernanda Souza Valadares
Author 2: Naira Cristina Souza Moura
Author 3: Tábata Nakagomi Fernandes Pereira
Author 4: Milena De Oliveira Arantes

Keywords: Traditional methods; project management; framework; PMBOK®

PDF

Paper 98: Intelligent Transport Systems: Analysis of Applications, Security Challenges, and Robust Countermeasures

Abstract: Intelligent Transport Systems (ITS) are instrumental in optimizing transportation networks, enhancing efficiency, and promoting sustainable mobility in smart cities and advanced technological environments. However, the increasing integration of digital technologies in transportation infrastructure introduces cyber-physical risks and privacy concerns. This paper explores the diverse applications of ITS and their impact on traffic management, vehicle communication, and urban mobility, examining real-world deployments and emerging trends to illustrate ITS's transformative potential. It critically assesses the security vulnerabilities inherent in intelligent transport systems, including cyber threats targeting communication protocols, data integrity, and network interconnectedness; privacy issues related to data collection and utilization are also scrutinized. The paper further emphasizes the importance of proactive security measures to mitigate threats and ensure the resilience of ITS. Finally, the research proposes robust security methodologies, such as encryption techniques, anomaly detection systems, and secure communication routes, drawing upon theoretical frameworks and empirical case studies. Legislative recommendations and collaborative initiatives are advocated to foster a trustworthy intelligent transport ecosystem and address security challenges comprehensively.

Author 1: Mada Alharb
Author 2: Abdulatif Alabdulatif

Keywords: Intelligent Transport Systems (ITS); cybersecurity; urban mobility; anomaly detection systems; privacy concerns

PDF

Paper 99: Spectral Mixture Analysis-based WQI with Convolutional Long Short-Term Memory Techniques

Abstract: Surface water, including river water, is an important natural resource for human life. However, river water quality in Indonesia often declines due to various factors, such as excessive water consumption, waste pollution, and natural disasters. This study aims to predict the Water Quality Index (WQI) of rivers using Spectral Mixture Analysis with a deep learning architecture. The methods used in this study are Spectral Mixture Analysis (SMA) and Convolutional Long Short-Term Memory (ConvLSTM). SMA is used to decompose the spectral signatures of water quality components and provide insight into the composition of water bodies. ConvLSTM, a deep learning architecture, is used to capture temporal dependencies and spatial patterns in water quality data. The results showed that the WQI prediction accuracy of the 345-band model was better than that of the 234-band model, reaching 34.78%. The visible color spectrum that represents the Meets (M) and Light (R) Pollution Index is Blue (0, 0, 255), with wavelengths ranging from 0.53 μm to 0.88 μm. In tests of the hybrid ConvLSTM model on the 8 mandatory parameters of river WQI measurements at 30 watershed monitoring points of North Musi Rawas Regency from 2021 to 2023, the accuracy reached 96%, which is considered acceptable performance. This research shows that Spectral Mixture Analysis with the hybrid Convolutional Long Short-Term Memory technique is effectively capable of predicting and monitoring the WQI of rivers, and these results can be used to take appropriate steps in determining policy.
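
The unmixing step at the heart of SMA is commonly posed as a least-squares problem; a rough sketch assuming a linear mixing model (function name and renormalization are illustrative, not the paper's procedure):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate endmember fractions for one pixel spectrum by
    unconstrained least squares, then clip to non-negative values
    and renormalize so the fractions sum to 1."""
    E = np.asarray(endmembers, dtype=float).T  # shape: bands x endmembers
    f, *_ = np.linalg.lstsq(E, np.asarray(pixel, dtype=float), rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()
```

The per-pixel fraction maps produced this way are what a downstream ConvLSTM would consume as spatio-temporal input.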

Author 1: Ika Oktavianti
Author 2: Yusuf Hartono
Author 3: Sukemi

Keywords: Water quality index; Spectral Mixture Analysis; remote sensing; deep learning; convolutional long short-term memory

PDF

Paper 100: UAV Path Planning Method Considering Safety and Signal Shielding Risk

Abstract: To meet the needs of safe operation of unmanned aerial vehicles (UAVs) in cities, this paper proposes a multi-objective path planning method based on a particle swarm optimization algorithm. First, a complex urban environment model is constructed using the grid method. Then, taking the total path length and the minimum flight risk as objectives, a multi-objective path optimization problem is established that accounts for the obstacle avoidance requirements and performance constraints of the UAV. Finally, the optimization problem is solved by a multi-objective particle swarm optimization algorithm, and the path curve is smoothed by cubic B-splines. The simulation results show that the multi-objective path planning method proposed in this paper is more reasonable than methods that consider only the lowest security risk or the shortest path.
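
The particle swarm optimizer at the core of the method can be sketched in its basic single-objective form (the paper's multi-objective variant additionally maintains a Pareto archive); this is a generic illustration, not the authors' code:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization: each particle is pulled
    toward its personal best and the global best position."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```

In the paper's setting the objective would combine path length and flight risk over grid waypoints, with the resulting waypoint sequence smoothed by cubic B-splines.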

Author 1: Xiaoyong Chen
Author 2: Jiajun Fang
Author 3: Yanjie Zhai

Keywords: Multi-objective particle swarm optimization; path planning; cubic B-splines

PDF

Paper 101: The Application of AES-SM2 Hybrid Encryption Algorithm in Big Data Security and Privacy Protection

Abstract: In the era of big data, information security and privacy protection have become important issues facing today's society. To address big data's security and privacy problems, this research designs and implements a hybrid encryption method that combines the Advanced Encryption Standard (AES) algorithm with the Standard Encryption Module 2 (SM2) algorithm for encryption operations. This method uses the AES algorithm to encrypt plaintext data without calling any encryption libraries, and improves the key expansion method and security analysis of the AES algorithm. The experimental results show that when one key is changed, the confusion range of the improved AES algorithm is 62 ± 6, while that of the traditional AES algorithm is 63 ± 7. The encryption time of the RSA algorithm is 16.50 ms higher than that of SM2. The improved SM2+AES scheme has the fastest decryption speed, followed by the RSA+AES scheme, and finally the original SM2+AES scheme. The hybrid encryption algorithm proposed in this research can encrypt sensitive information in big data without leaking plaintext information, effectively protecting sensitive information in big data and providing new ideas for network security and privacy protection in big data.
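
A "confusion range" of the kind measured above can be estimated by flipping one key bit and counting how many ciphertext bits change. In this sketch SHA-256 stands in for the AES implementation (the paper builds its own), so it illustrates only the measurement, not the cipher:

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def confusion(key: bytes, plaintext: bytes) -> int:
    """Flip the lowest bit of the key and count the changed output bits.
    SHA-256 keyed by concatenation is a stand-in for a block cipher here."""
    flipped = bytes([key[0] ^ 1]) + key[1:]
    c1 = hashlib.sha256(key + plaintext).digest()
    c2 = hashlib.sha256(flipped + plaintext).digest()
    return bit_diff(c1, c2)
```

A cipher with a good avalanche effect changes roughly half the output bits for a one-bit key change; repeating the measurement over many keys yields the mean ± spread reported in the abstract.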

Author 1: Pingyun Huang
Author 2: Guizhou Liao
Author 3: Jianhong Ren

Keywords: AES; SM2; privacy protection; encryption algorithm; data security

PDF

Paper 102: Bionic Hand Movements Recognition: A Unified Framework with Attention-Guided ROI Identification and the Bionic Fusion Net Approach

Abstract: In prosthetics, bionic hand movement recognition is crucial to developing sophisticated systems that can effectively understand and react to human motions. Recent advances in image processing, feature extraction, and deep learning have improved the accuracy and flexibility of bionic hand movement detection systems. This study proposes a unified framework that combines attention-guided ROI identification with a unique Bionic Fusion-Net architecture to overcome these difficulties, contributing towards the Sustainable Development Goal (SDG) of Good Health and Well-Being. Pre-processing first applies dataset augmentation and image enhancement. The ROI identification stage uses an attention-guided U-Net with sophisticated convolutional components. During feature extraction, Spatial Features, BionicNet-1, and BionicNet-2 jointly learn spatial and temporal features. The Optimized Red Fox Falcon Algorithm (O-RFF), a hybrid of the Red Fox and Falcon Optimization Algorithms, improves feature selection. The Bionic Fusion-Net architecture combines Xception, Squeeze-Net, Shuffle-Net, an optimized Bi-LSTM, and the Huber loss function. The recommended technique improves the flexibility of bionic hand movement recognition, attaining an accuracy of about 99% and outperforming other approaches in use for well-being and future health policy.
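
The Huber loss used in the architecture is straightforward to state: quadratic for small residuals, linear for large ones, making training robust to outliers. A minimal NumPy version (illustrative, not the paper's implementation):

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Mean Huber loss: 0.5*r^2 for |r| <= delta, else delta*(|r| - delta/2)."""
    r = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    quad = 0.5 * r ** 2
    lin = delta * (r - 0.5 * delta)
    return float(np.where(r <= delta, quad, lin).mean())
```

The `delta` threshold controls where the loss switches from the squared-error to the absolute-error regime.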

Author 1: Prakash. S
Author 2: Josephine H. H
Author 3: Priya. S
Author 4: M. Batumalay

Keywords: Bionic Hand; Optimized Red Fox Falcon Algorithm; Xception; Squeeze-Net; Shuffle-Net; Bi-LSTM; Huber Loss; Sustainable Development Goals (SDG); good health; well-being; health policy

PDF

Paper 103: Blockchain-based and IoT-based Health Monitoring App: Lowering Risks and Improving Security and Privacy

Abstract: Blockchain technology is known for its decentralized and immutable nature, which makes it highly resistant to hacking and unauthorized access. This ensures that patients' private health information remains secure and protected from potential breaches. Moreover, the use of blockchain can also enhance data integrity by creating a transparent and tamper-proof record of all health updates, further increasing trust in the system. The COVID-19 epidemic has made human health one of the most crucial things to focus on in day-to-day life, and social distancing could help contain the pandemic; people are therefore urged to avoid physical contact with one another where conditions permit. It is suggested that medical professionals use the proposed Internet of Things (IoT)-based Health Monitoring Application to keep an eye on their patients via their mobile devices. With the suggested system, patients can update the system with their daily health status, and medical professionals can monitor their patients from their mobile devices to inform future health policy. Because the suggested system is an application that users can access from their mobile devices, rather than only by browsing a website on a laptop or computer, it is more practical than most current systems. Patients do not need to visit the hospital for a check-up because they can update the system with their health information, and if physicians discover unusual symptoms in a patient's medical record, they can advise the patient to seek medical attention. Furthermore, private health information is regarded as confidential. Consequently, this study examines the security threats and risks associated with the backend of the suggested solution. Additionally, by utilizing blockchain technology, improvements in security and privacy can be achieved.
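
The tamper-evidence property claimed for blockchain can be illustrated with a minimal hash-linked chain of health updates (a toy sketch, not the app's actual backend):

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """A block ties its payload to the previous block's hash, so altering
    any past record invalidates every later link."""
    block = {"timestamp": time.time(), "data": data, "prev": previous_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    """Recompute each block's hash and check the back-links."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("timestamp", "data", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

Changing a single past health reading breaks verification, which is the integrity guarantee the abstract relies on.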

Author 1: Chelsey C. Y. Hang
Author 2: M. Batumalay
Author 3: T D Subash
Author 4: R. Thinakaran
Author 5: B. Chitra

Keywords: IoT health monitoring system; security and privacy; blockchain technology; health policy

PDF

Paper 104: Classification of Pneumonia from Chest X-ray images using Support Vector Machine and Convolutional Neural Network

Abstract: Pneumonia presents a global health challenge, especially in distinguishing bacterial and viral types via chest X-ray diagnostics. This study focuses on deep learning models Convolutional Neural Networks (CNN) and Support Vector Machines (SVM) for pneumonia classification. Our findings highlight CNN's superior performance. It achieves 91% accuracy overall, outperforming SVM's 79% in differentiating normal lungs and pneumonia-affected lungs. Specifically, CNN excels in distinguishing between bacterial and viral pneumonia with 92% accuracy, compared to SVM's 88%. These results underscore deep learning models' potential to enhance diagnostic precision, improve treatment efficacy and reduce pneumonia-related mortality. In the context of Society 5.0, which integrates technology for societal well-being, deep learning in healthcare emerges as transformative. Enabling early and accurate pneumonia detection, this research aligns with the United Nations Sustainable Development Goals (SDGs). It supports Goal 3 (Good Health and Well-being) by advancing healthcare outcomes and Goal 9 (Industry, Innovation, and Infrastructure) through innovative medical diagnostics. Therefore, this study emphasizes deep learning's pivotal role in revolutionizing pneumonia diagnosis, offering efficient healthcare solutions aligned with current global health challenges.

Author 1: M. Fariz Fadillah Mardianto
Author 2: Alfredi Yoani
Author 3: Steven Soewignjo
Author 4: I Kadek Pasek Kusuma Adi Putra
Author 5: Deshinta Arrova Dewi

Keywords: Pneumonia; chest X-ray; Support Vector Machine; Convolutional Neural Network; SDGs; Society 5.0

PDF

Paper 105: Multimodal Application of GAN in the Image Recognition of Wheat Diseases and Insect Pests

Abstract: “Food is the most important thing for the people.” Food is intricately linked to both the national economy and the livelihood of the people, serving as a vital material for our daily existence. Wheat, standing as one of the three core grain crops, holds paramount importance in safeguarding national food security. However, the wheat planting process remains constantly exposed to a diverse array of environmental factors, ranging from the intensity of light to fluctuations in temperature, soil fertility, fertilizer application methods, and water availability. Occasionally, these variables trigger diseases and insect infestations that can seriously affect wheat yield and quality if not promptly and effectively addressed. It is therefore imperative to manage these challenges in a timely and effective manner, ensuring the safety and integrity of wheat production, which in turn guarantees the stability of the national food supply. Traditional methods of manual detection of pests and diseases rely mainly on naked-eye observation and manual statistics; such solutions are highly subjective, slow, and lack consistent precision. With the development of computer technology and deep learning, more and more research and applications have been carried out to address the shortcomings of traditional manual detection methods. In this study, deep learning is applied to disease and insect pest recognition. Focusing on wheat powdery mildew, scab, leaf rust, and midge, convolutional and capsule networks are investigated for pest recognition, establishing an image recognition system for wheat diseases and pests.

Author 1: Bing Li
Author 2: Shaoqing Yang
Author 3: Zeqiang Wang

Keywords: Deep learning; identification of diseases and insect pests; image classification; system development

PDF

Paper 106: Improving the Prediction of Student Performance by Integrating a Random Forest Classifier with Meta-Heuristic Optimization Algorithms

Abstract: Anticipating student performance in higher education is crucial for informed decision-making and the reduction of dropout rates. This study focuses on the intricate analysis of diverse educational datasets using machine learning, particularly emphasizing dimensionality reduction. The aim is to empower educators with data-driven insights, enabling timely interventions for academic improvement. By categorizing individuals based on their inherent aptitudes, the study seeks to mitigate failure rates and enhance the overall educational experience. The integration of predictive modeling, particularly employing the robust Random Forest Classifier (RFC), allows the academic community to proactively address challenges and foster a supportive learning environment, thereby improving student outcomes. To bolster predictive capabilities, the study adopts the RFC model and enhances its efficacy through advanced optimization algorithms, specifically Electric Charged Particles Optimization (ECPO) and Artificial Rabbits Optimization (ARO). These sophisticated algorithms are strategically integrated to refine decision-making processes and enhance predictive precision. Furthermore, the analysis of the input variables has been conducted to assess their individual impact on student performance. This analysis can help institutions identify and address areas for improvement in their management practices. The study's commitment to leveraging state-of-the-art machine learning and bio-inspired algorithms underscores its dedication to achieving precise and resilient predictions of the performance of 4424 students, ultimately contributing to the advancement of educational outcomes. The research outcomes highlight the superiority of the ECPO-optimized RFC model in aligning with actual measured values, affirming its efficacy in predictive accuracy.

Author 1: Chao Ma

Keywords: Classification; student performance; machine learning; Random Forest Classifier; Electric Charged Particles Optimization; Artificial Rabbits Optimization

PDF

Paper 107: A Novel Hybrid Deep Neural Network Classifier for EEG Emotional Brain Signals

Abstract: The field of brain-computer interface (BCI) is one of the most exciting areas of scientific research, as it can overlap with any field that needs intelligent control, especially the medical industry. There are many ways to collect a dataset of brain signals, the most important of which is the non-invasive EEG method. The collected data must be classified, and the features driving changes in it must be selected, to become useful in different control applications. Because some BCI applications require high accuracy and speed to keep up with an environment's motion sequences, this paper explores the classification of brain signals for use as control signals in BCI research, with the aim of integrating them into different control systems. The objective of the study is to investigate EEG brain signal classification using different techniques such as Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN), as well as a machine learning approach represented by the Support Vector Machine (SVM). We also present a novel hybrid classification technique called CNN-LSTM, which combines CNNs with LSTM networks. The proposed model processes the input data through one or more of the CNN's convolutional layers to identify spatial patterns, and the output is fed into the LSTM layers to capture temporal dependencies and sequential patterns. This combination uses the CNN's spatial feature extraction and the LSTM's temporal modelling to achieve high efficacy across domains. A test was conducted to determine the most effective approach for classifying emotional brain signals that indicate the user's emotional state. The dataset used in this research was generated from a widely available MUSE EEG headset with four dry extra-cranial electrodes.
The comparison came in favor of the proposed hybrid model (CNN-LSTM) in first place with an accuracy of 98.5% and a step speed of 244 milliseconds/step; the CNN model came in the second place with an accuracy of 98.03% and a step speed of 58 milliseconds/step; and in the third place, the LSTM model recorded an accuracy of 97.35% and a step speed of 2 sec/step; finally, in last place, SVM came with 87.5% accuracy and 39 milliseconds/step running speed.

Author 1: Mahmoud A. A. Mousa
Author 2: Abdelrahman T. Elgohr
Author 3: Hatem A. Khater

Keywords: BCI; EEG; Brain Signals Classification; SVM; LSTM; CNN; CNN-LSTM

PDF

Paper 108: Three-Dimensional Animation Capture Driver Technology for Digital Media

Abstract: To drive three-dimensional animation from motion capture, this study combines skeleton extraction methods with human motion pose data to construct the human skeleton of three-dimensional animated characters. Combining matching algorithms and action recognition techniques, the postures of the human three-dimensional model were tested and analyzed. The experimental results showed that the level-set central clustering method extracted shoulder joint position values of 0.26, 0.24, 0.28, and 0.21 in the four models, respectively. Its error value was the smallest among the skeleton extraction algorithms, indicating that this skeleton extraction algorithm had high accuracy in extracting human skeleton information. In addition, the depth information of human joint points was compared using the parallax ranging method, and the highest error was 1.57%. This further demonstrated that the coordinates of the three-dimensional joints were relatively accurate, which also proved the effectiveness of the binocular stereo vision system. The system had an accuracy of over 80% in recognizing joint rotation information and dynamic movements in the human three-dimensional model. Finally, the highest accuracy of inertial sensors in capturing human movements was 97%, indicating the superiority of digital media in capturing three-dimensional animation. This also provides a theoretical basis and technical reference for animation production and other applications.

Author 1: Wanjie Dong

Keywords: 3D animation; computer vision; motion matching algorithm; human 3D skeletal model; motion capture technology

PDF

Paper 109: The Impact of Path Planning Model Based on Improved Ant Colony Optimization Algorithm on Green Traffic Management

Abstract: In response to the demand for green city construction, low-carbon travel standards have been further implemented. This research focuses on intelligent transportation management and designs path planning algorithms. First, the basic model of the proposed ant colony optimization algorithm was constructed. To address the poor convergence of traditional algorithms, a rollback strategy was introduced to optimize the model's tabu table. Subsequently, to handle dynamic obstacle avoidance in practical applications, an optimized A* algorithm was studied and applied to global path planning, while the improved ant colony algorithm was applied to local obstacle-avoidance planning, further enhancing the accuracy and practicality of the approach. In simulation analysis, facing more complex simulation environments, this method achieved obstacle-avoidance path planning more effectively: the average number of search nodes decreased by 6, the average search time decreased by 4.11%, and the average path length decreased by 22.07%. In summary, the ant colony optimization algorithm designed in this research is better suited to path planning needs in different scenarios, with the best overall performance; it can plan the shortest driving path while ensuring precise obstacle avoidance, helping to achieve green traffic management.
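
The global planning stage built on A* can be sketched on a 4-connected grid with a Manhattan heuristic; this is the generic textbook version, not the paper's optimized variant:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; cells with value 1 are obstacles.
    Returns the path as a list of (row, col) tuples, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if best_g.get(node, float("inf")) <= g:
            continue  # already reached this cell at equal or lower cost
        best_g[node] = g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                nxt = (r, c)
                heapq.heappush(open_set,
                               (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None
```

In the paper's hybrid scheme, a route like this serves as the global plan, with the improved ant colony algorithm handling local obstacle avoidance around it.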

Author 1: Huan Yu

Keywords: Ant colony optimization; A*; path planning; obstacle avoidance; traffic control

PDF

Paper 110: A Study on Life Insurance Early Claim Detection Modeling by Considering Multiple Features Transformation Strategies for Higher Accuracy

Abstract: Early claims in the life insurance sector can lead to significant financial losses if not properly managed. This paper experiments with a number of feature transformation strategies, such as value regrouping, over- and undersampling, and encoding, that aim to enhance early claim detection, considering five (5) different machine learning algorithms. Utilizing the built-in feature importance from Random Forest, along with regrouping and correlation techniques, we identify the top seven (7) most significant features from a total of 800 feature candidates. Our proposed strategy provides a streamlined and effective way to focus on the most relevant features, thereby improving the accuracy and precision of early claim predictive models for the life insurance domain. The results of this study offer practical insights into reducing fraudulent claims and mitigating financial risk. We used Random Forest alongside LightGBM, XGBoost, Feed Forward Neural Network, and CatBoost to train our models and achieved a maximum accuracy of 0.92 across three samples, indicating that our approach can effectively identify critical features and produce reliable results.
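
The value-regrouping transformation can be illustrated by collapsing rare categories into a single bucket before encoding; the threshold and bucket name below are illustrative assumptions, not the paper's settings:

```python
from collections import Counter

def regroup_rare(values, min_count=2, other="OTHER"):
    """Collapse categories seen fewer than min_count times into one
    'OTHER' bucket, shrinking high-cardinality features before encoding."""
    counts = Counter(values)
    return [v if counts[v] >= min_count else other for v in values]
```

Regrouping like this keeps one-hot or target encoding tractable when a raw feature has hundreds of distinct values.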

Author 1: Tham Hiu Huen
Author 2: Lim Tong Ming

Keywords: Machine learning; feature selection; life insurance; binary classification; Random Forest

PDF

Paper 111: A Hybrid Framework for Evaluating Financial Market Price: An Analysis of the Hang Seng Index Case Study

Abstract: The accurate prediction of financial outcomes presents a considerable challenge as a result of the intricate interaction of economic fundamentals, market dynamics, and investor psychology. Accurately forecasting stock prices in the securities market is a challenging undertaking owing to the non-stationarity, non-linearity, and significant volatility of stock price time series. Conventional approaches can enhance the precision of predictive modeling, but they also involve computational intricacies that can increase the likelihood of prediction errors. This work introduces a methodology that addresses these issues by integrating support vector regression with the Aquila optimizer procedure. The results of this investigation suggest that, compared to the other models, the hybrid model performed better and had more efficacy. The proposed model performed at an ideal level and demonstrated a significant level of effectiveness, with a low number of errors. Hang Seng Index data covering the years 2015 through 2023 was analyzed to assess the predictive model's accuracy in stock price forecasting. The results show that the proposed framework performs well and is reliable when analyzing and predicting the price time series of equities. Empirical data suggests that, in comparison to other methods presently in use, the suggested model forecasts outcomes with a higher degree of accuracy.

Author 1: Runhua Liu
Author 2: Zhengfeng Yang
Author 3: Juan Su
Author 4: Yu Cao

Keywords: Efficient market; Hang Seng Index; stock forecasting; support vector regression; Aquila optimizer

PDF

Paper 112: A Multi-Modal CNN-based Approach for COVID-19 Diagnosis using ECG, X-Ray, and CT

Abstract: Controlling the spread of Coronavirus Disease 2019 (COVID-19) and reducing its impact on public health need prompt identification and treatment. To improve diagnostic accuracy, this study attempts to create and assess a Multi-Modality COVID-19 Diagnosis System that integrates X-ray, Electrocardiogram (ECG), and Computed Tomography (CT) images utilizing Convolutional Neural Network (CNN) algorithms. To increase the accuracy of COVID-19 diagnosis, the suggested system incorporates data from many imaging modalities in a novel way, including cardiac symptoms identified by ECG data; this approach has not been thoroughly studied in the literature to date. The system analyses CT, ECG, and X-ray images using CNN algorithms, including Visual Geometry Group 19 (VGG19) and Deep Convolutional Networks (DCNN). While ECG data helps detect related cardiac symptoms, CT and X-ray images offer precise insights into lung abnormalities indicative of COVID-19 pneumonia. Noise reduction and image smoothing are accomplished through Gaussian filtering. After extracting characteristics suggestive of either bacterial or viral pneumonia, a deep neural network refines them for accurate COVID-19 identification. Python is employed throughout the system's implementation. A thorough evaluation of the trained CNN model using separate datasets revealed a 99.12% accuracy rate in COVID-19 detection from chest imaging data. The diagnostic accuracy of the suggested DCNN model was much higher than that of current models, including Random Forest and Linear Ridge. The Multi-Modality COVID-19 Diagnosis System uses cutting-edge CNN algorithms to seamlessly combine ECG, X-ray, and CT imaging data into a highly accurate diagnostic tool. With this approach, medical personnel could diagnose COVID-19 more quickly and accurately, improving the disease's treatment and control.
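
The Gaussian filtering step used for noise reduction can be sketched as a direct 2-D convolution with a normalized Gaussian kernel (a naive illustration; a real pipeline would use an optimized library routine):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel (sums to 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(image, size=5, sigma=1.0):
    """Smooth an image by sliding the kernel over it (edge padding)."""
    k = gaussian_kernel(size, sigma)
    arr = np.asarray(image, dtype=float)
    pad = size // 2
    padded = np.pad(arr, pad, mode="edge")
    out = np.empty_like(arr)
    for i in range(arr.shape[0]):
        for j in range(arr.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

Because the kernel is normalized, flat image regions pass through unchanged while high-frequency noise is averaged out.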

Author 1: Kumar Keshamoni
Author 2: L Koteswara Rao
Author 3: D. Subba Rao

Keywords: COVID-19 Diagnosis; Multi-Modality Imaging; Convolutional Neural Networks (CNN); CT imaging; Gaussian filtering

PDF

Paper 113: Advancing Healthcare Anomaly Detection: Integrating GANs with Attention Mechanisms

Abstract: Early illness diagnosis, treatment monitoring, and healthcare administration all depend heavily on the identification of abnormalities in medical data. This paper proposes a way to improve healthcare anomaly detection through the integration of attention mechanisms and Generative Adversarial Networks (GANs). By integrating GANs, artificial data that closely mimics the distributions of actual healthcare data can be produced, supplementing the dataset and strengthening the resilience of anomaly detection algorithms. Simultaneously, the Convolutional Block Attention Module (CBAM) helps the model concentrate on informative features in the data, augmenting its capacity to identify minute deviations from the norm. The suggested method is assessed on a large dataset from healthcare settings that includes both typical and anomalous cases. Compared to current techniques, the results show notable gains in anomaly detection performance. The model also shows resilience to noise, small abnormalities, and class imbalance, indicating its potential for practical clinical applications. The suggested strategy has the potential to improve clinical decision-making and patient care by giving doctors faster, more precise insights into anomalous health states. With an accuracy of around 99.12%, the suggested GAN-CBAM, implemented in Python, outperforms other current techniques such as Gaussian Distribution Anomaly detection (GDA), Augmented Time Regularized GAN (ATR-GAN), and Convolutional Long Short-Term Memory (ConvLSTM) by 2.97%. With potential benefits for improving patient outcomes and the effectiveness of the healthcare system, the suggested strategy is a major step forward in anomaly identification in medicine.
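The channel-attention half of CBAM can be sketched in a few lines; this toy version replaces CBAM's learned shared MLP with a plain sigmoid over each channel's global average, so it only illustrates the reweighting idea, not the paper's model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """feature_maps: dict of channel name -> 2D list of activations.
    Each channel is scaled by a sigmoid of its global average
    (a stand-in for CBAM's learned channel-attention MLP)."""
    out = {}
    for name, fm in feature_maps.items():
        avg = sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
        weight = sigmoid(avg)
        out[name] = [[v * weight for v in row] for row in fm]
    return out
```

CBAM would follow this with a spatial-attention map computed across channels; both modules multiply the features rather than replace them.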

Author 1: Thakkalapally Preethi
Author 2: Afsana Anjum
Author 3: Anjum Ara Ahmad
Author 4: Chamandeep Kaur
Author 5: Vuda Sreenivasa Rao
Author 6: Yousef A.Baker El-Ebiary
Author 7: Ahmed I. Taloba

Keywords: Generative Adversarial Networks (GANs); Convolutional Block Attention Module (CBAM); anomaly detection; attention mechanism; healthcare

PDF

Paper 114: BrainLang DL: A Deep Learning Approach to FMRI for Unveiling Neural Correlates of Language across Cultures

Abstract: Employing deep learning techniques on fMRI data enables the exploration of universal and culturally specific neural correlates underlying language processing across diverse populations. The study presents "BrainLang DL," a novel deep learning (DL) approach leveraging functional Magnetic Resonance Imaging (fMRI) data to unveil neural correlates of language processing across diverse cultural backgrounds. To bridge the knowledge gap in the universal and culture-specific aspects of language processing, we engaged participants from various cultural groups in a series of linguistic tasks while recording their brain activity using fMRI. Our data preprocessing pipeline included motion correction, slice timing correction, and spatial smoothing to enhance data quality for subsequent analysis. For feature extraction, the research utilized the Crocodile Hunting Optimization (CHO) algorithm to pinpoint critical brain regions and connectivity patterns linked to language functions. To capture the temporal dynamics of neural activity related to language processing, we deployed recurrent neural networks, specifically Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models, which enabled us to unravel how linguistic information is encoded and processed over time. Our findings reveal both common and unique neural activation patterns in language processing across different cultures: universally shared neural mechanisms highlight the fundamental aspects of language processing, while distinct variations underscore the influence of cultural context on brain activity. By integrating DL with fMRI analysis, our study provides a nuanced understanding of the neural correlates of language across cultures. It reveals both shared neural mechanisms underlying language processing across diverse populations and culturally specific variations in brain activation patterns. These findings contribute to a more comprehensive understanding of the neural basis of language and its modulation by cultural factors. Ultimately, our approach offers insights into the complex interplay between language, cognition, and culture, with implications for fields such as linguistics, neuroscience, and cross-cultural psychology.
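The GRU update used to model temporal dynamics follows the standard gate equations; the scalar sketch below uses illustrative (untrained) parameter values, not weights from the study:

```python
import math

def _sig(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, p):
    """One scalar GRU update with parameter dict p (illustrative values)."""
    z = _sig(p["wz"] * x + p["uz"] * h)                   # update gate
    r = _sig(p["wr"] * x + p["ur"] * h)                   # reset gate
    h_tilde = math.tanh(p["wh"] * x + p["uh"] * (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde                      # interpolate

params = {"wz": 0.5, "uz": 0.5, "wr": 0.5, "ur": 0.5, "wh": 1.0, "uh": 1.0}
h = 0.0
for x in [0.2, -0.4, 0.9]:  # a short fMRI-like activation time series
    h = gru_step(h, x, params)
```

Because the new state is a convex combination of the old state and a tanh candidate, the hidden state stays bounded as the sequence unfolds.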

Author 1: A. Greeni
Author 2: Yousef A.Baker El-Ebiary
Author 3: G. Venkata Krishna
Author 4: G. Vikram
Author 5: Kuchipudi Prasanth Kumar
Author 6: Ravikiran K
Author 7: B Kiran Bala

Keywords: Long Short-Term Memory; Gated Recurrent Unit; deep learning; functional magnetic resonance imaging; language

PDF

Paper 115: Navigating XRP Volatility: A Deep Learning Perspective on Technical Indicators

Abstract: Cryptocurrencies have dramatically reshaped the landscape of financial transactions, enabling seamless cross-border exchanges without centralized oversight. This revolutionary shift, powered by blockchain technology, has democratized currency control, entrusting it to a widespread network of participants rather than a single entity. Originating with Satoshi Nakamoto's introduction of Bitcoin, this digital currency model operates on a decentralized framework, contrasting starkly with traditional, centrally governed monetary systems. This research delves into forecasting the price of Ripple (XRP) by leveraging advanced deep-learning approaches and various technical indicators. Through meticulous preprocessing of the data and the application of neural networks, particularly a convolutional neural network-gated recurrent unit hybrid model, this study achieves remarkable precision in its predictions. Technical indicators further refined these forecasts, highlighting the effective collaboration between machine learning techniques and financial market analysis. Despite the volatile nature of the cryptocurrency market, this work makes a substantial contribution to cryptocurrency prediction strategies, advocating for further investigation into the effects of macroeconomic factors and the use of more extensive datasets to deepen our understanding of market dynamics.
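The technical indicators feeding such a CNN-GRU model are standard price transforms; two common ones can be sketched as follows (the window lengths are conventional choices, not taken from the paper):

```python
def sma(prices, window):
    """Simple moving average, a classic smoothing indicator."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def rsi(prices, period=14):
    """Relative Strength Index over the series (simplified, first window only)."""
    gains = [max(prices[i] - prices[i - 1], 0) for i in range(1, len(prices))]
    losses = [max(prices[i - 1] - prices[i], 0) for i in range(1, len(prices))]
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    if avg_loss == 0:
        return 100.0          # pure uptrend saturates the index
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

Indicator series like these are typically stacked alongside raw prices as extra input channels for the network.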

Author 1: Susrita Mahapatro
Author 2: Prabhat Kumar Sahu
Author 3: Asit Subudhi

Keywords: Cryptocurrency; ripple; convolutional neural network; gated recurrent unit; technical indicators

PDF

Paper 116: Cross-Cultural Language Proficiency Scaling using Transformer and Attention Mechanism Hybrid Model

Abstract: Assessing language competency across diverse linguistic and cultural situations requires a cross-cultural language proficiency scale. This study proposes a hybrid model that accounts for cross-cultural characteristics and scales language competency effectively by combining a Transformer architecture with attention mechanisms. The approach seeks to improve the precision and consistency of language competency evaluation by capturing both cross-cultural subtleties and linguistic context. The suggested hybrid model is made up of several essential parts. To capture semantic information, the incoming text is first tokenized into subword units and then transformed into embeddings using word2vec, a pre-trained word embedding algorithm. The contextual information is then extracted from the input sequence using a Transformer encoder stack, which uses multi-head self-attention to focus on distinct textual elements. In addition to the Transformer encoder, an attention mechanism layer (or layers) tailored to attend to cross-cultural traits is introduced. By learning cross-cultural patterns and links between different languages and cultural settings, this attention mechanism improves the model's comprehension and incorporation of cross-cultural subtleties. A representation that blends linguistic context and cross-cultural elements is produced by fusing the outputs of the Transformer encoder and the cross-cultural attention layer(s). This fused representation is then passed to a classifier to predict language competency levels. The hybrid model uses categorical cross-entropy as the objective function and is trained on datasets spanning several languages and cultural situations. Python is used to implement the suggested work. The suggested model achieves 97.3% accuracy, outperforming the T-TC-INT model and BERT + MECT.
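Per head, the multi-head self-attention in the Transformer encoder reduces to scaled dot-product attention, which can be sketched in pure Python:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors:
    softmax(QK^T / sqrt(d)) V, computed row by row."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)          # weights sum to 1
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

The cross-cultural attention layer described above would use the same mechanism, with queries drawn from the fused representation and keys/values encoding cultural context.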

Author 1: Anna Gustina Zainal
Author 2: M. Misba
Author 3: Punit Pathak
Author 4: Indrajit Patra
Author 5: Adapa Gopi
Author 6: Yousef A.Baker El-Ebiary
Author 7: Prema S

Keywords: Cross-cultural; language proficiency; transformer; attention mechanism; hybrid model

PDF

Paper 117: Utilizing Machine Learning and Deep Learning Approaches for the Detection of Cyberbullying Issues

Abstract: This research paper delves into the intricate domain of cyberbullying detection on social media, addressing the pressing issue of online harassment and its implications. The study encompasses a comprehensive exploration of key aspects, including data collection and preprocessing, feature engineering, machine learning model selection and training, and the application of robust evaluation metrics. The paper underscores the pivotal role of feature engineering in enhancing model performance by extracting relevant information from raw data and constructing meaningful features. It highlights the versatility of supervised machine learning techniques such as Support Vector Machines, Naïve Bayes, Decision Trees, and others in the context of cyberbullying detection, emphasizing their ability to learn patterns and classify instances based on labeled data. Furthermore, it elucidates the significance of evaluation metrics like accuracy, precision, recall, F1-score, and AUC-ROC in quantitatively assessing model effectiveness, providing a comprehensive understanding of the model's performance across different classification tasks. By providing valuable insights and methodologies, this research contributes to the ongoing efforts to combat cyberbullying, ultimately promoting safer online environments and safeguarding individuals from the pernicious effects of online harassment.
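The evaluation metrics named above are simple counts over the confusion matrix; a minimal implementation for binary labels (1 = cyberbullying) is:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}
```

In practice libraries such as scikit-learn provide these (plus AUC-ROC), but the formulas are exactly these ratios.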

Author 1: Aiymkhan Ostayeva
Author 2: Zhazira Kozhamkulova
Author 3: Zhadra Kozhamkulova
Author 4: Yerkebulan Aimakhanov
Author 5: Dina Abylkhassenova
Author 6: Aisulu Serik
Author 7: Kuralay Turganbay
Author 8: Yegenberdi Tenizbayev

Keywords: Machine learning; cyberbullying; feature engineering; feature extraction; feature selection

PDF

Paper 118: Quantum-Enhanced Security Advances for Cloud Computing Environments

Abstract: Recent developments in quantum-enhanced security have shown encouraging promise for enhancing the security of cloud computing environments. Utilizing quantum physics, in particular Quantum Key Distribution (QKD), provides a new method for generating cryptographic keys and improves cloud data transport security. The present study offers a thorough investigation of the integration of QKD with conventional encryption techniques, including the Advanced Encryption Standard (AES), to address the evolving cyber security landscape in cloud computing. The approach combines AES for encryption and decryption procedures with a QKD layer established within the cloud architecture to produce true quantum keys utilizing Quantum in Cloud technology. Data transmission security is greatly improved by the smooth integration of AES with QKD-generated keys, guaranteeing confidentiality, integrity, and authenticity. In addition, strong key management practices are put in place to handle cryptographic keys safely at every stage of their lifespan, reducing the possibility of unwanted access or interception. The suggested approach successfully addresses the difficulties presented by cyber threats by offering a robust and flexible means of enhancing security in cloud-based systems. Using both traditional and quantum encryption methods, this strategy provides a strong barrier against cyber-attacks, data leaks, and other security flaws. After 70 simulation rounds, the suggested strategy, implemented as a QKD-AES framework in Python, achieved a data access rate of 820 MB/s, providing a quantitative assessment of performance and demonstrating a high data access rate under simulated conditions. Key generation took 15 milliseconds, guaranteeing the quick creation of secure cryptographic keys in cloud environments. Overall, there is a lot of potential in using quantum-enhanced security techniques to protect sensitive data and guarantee the integrity of cloud computing infrastructures.
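The QKD layer's key agreement can be illustrated with a toy BB84 sifting step (ideal channel, no eavesdropper; the real protocol adds error estimation and privacy amplification before the key is handed to AES):

```python
import random

def bb84_sifted_key(n=256, seed=7):
    """Toy BB84 sifting: Alice sends random bits in random bases (X/Z);
    Bob measures in random bases; only positions where the two bases
    happened to match are kept as key material."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("XZ") for _ in range(n)]
    bob_bases = [rng.choice("XZ") for _ in range(n)]
    return [bit for bit, a, b in zip(bits, alice_bases, bob_bases) if a == b]

key = bb84_sifted_key()
```

On average half the transmitted qubits survive sifting; the resulting bit string would then be condensed into an AES session key.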

Author 1: Devulapally Swetha
Author 2: Shaik Khaja Mohiddin

Keywords: Quantum-enhanced security; cloud computing; quantum key distribution; advanced encryption standard; key management

PDF

Paper 119: Harnessing Machine Learning and Meta-Heuristic Algorithms for Accurate Cooling Load Prediction

Abstract: Precisely calculating the cooling load is essential to improving the energy efficiency of cooling systems and to maximizing the performance of chillers and air conditioning controls. Machine learning (ML) offers capabilities in this area that conventional techniques and regression analysis lack. ML models are capable of automatically recognizing complex patterns influenced by various factors, including occupancy, building materials, and weather. They enable responsive predictions that enhance energy optimization and efficient building management because they scale well with data and adapt to changing scenarios. This research acknowledges the difficulties presented by the intricacies of energy optimization while exploring the intricate world of cooling load systems; solving these issues requires in-depth research and creative approaches to problem-solving. The Weevil Damage Optimization Algorithm (WDOA) and the Improved Manta-Ray Foraging Optimizer (IMRFO) are two meta-heuristic algorithms that are seamlessly combined with the Gaussian Process Regression (GPR) model in this study to increase accuracy. Previous stability tests have provided extensive validation for the cooling load data used in these algorithms. The research presents three different models, each of which offers important insights for precise cooling load prediction: GPWD, GPIM, and an independent GPR model. With an RMSE of 1.004 and an R2 of 0.990, the GPWD model stands out as the best performer among these models. These outcomes demonstrate the precision of the GPWD model in forecasting the cooling load, highlighting its applicability to real building management situations.
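The GPR core of models like GPWD and GPIM is the posterior mean under a kernel; a minimal pure-Python sketch with an RBF kernel follows (the metaheuristics would tune hyperparameters such as the length scale, which is fixed here for illustration):

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel on scalars."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(xs, ys, x_star, noise=1e-6):
    """GPR posterior mean at x_star: k_*^T (K + noise*I)^-1 y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(a * rbf(x, x_star) for a, x in zip(alpha, xs))
```

With a small noise term, the posterior mean interpolates the training points and decays toward the prior mean (zero) far from the data.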

Author 1: Yanfang Zhang

Keywords: Building energy; cooling load; machine learning; Gaussian Process Regression; Improved Manta-Ray Foraging Optimizer; Weevil Damage Optimization Algorithm

PDF

Paper 120: A New Complementary Empirical Ensemble Mode Decomposition Method for Respiration Extraction

Abstract: Respiration monitoring is essential for diagnosing and managing a variety of diseases, and deriving breathing from ECG signals is a non-invasive, convenient, and effective way to do so. This paper proposes a new complementary ensemble empirical mode decomposition (NCEEMD) method for respiration extraction. By additionally applying ensemble empirical mode decomposition (EEMD) to the auxiliary white Gaussian noise, the noise residue in the respiratory band left after EEMD decomposition of the original ECG signal is subtracted. The new IMF is selected for correlation analysis with the measured respiratory signal, and the optimal amplitude noise coefficient is determined adaptively by the principle of maximum correlation increment. The IMFs in the respiratory band are then selected to reconstruct the respiratory signal, the ECG-derived respiration (EDR). A comparative respiration-extraction experiment was conducted on data from the MIT-BIH Polysomnographic database. The experimental results show that, compared with the complementary ensemble empirical mode decomposition (CEEMD) method, the proposed EDR extraction method reduces the average MSE by 3.95%, RMSE by 2.74%, and MAE by 2.52%, and the physical significance of the IMF components is more explicit. The method has good accuracy, robustness, and adaptability, and provides a new approach to the extraction of respiratory signals.
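The correlation-based IMF selection mentioned above amounts to picking the component with maximal Pearson correlation against the measured respiratory reference; a minimal sketch:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def best_imf(imfs, reference):
    """Select the IMF most correlated with the respiratory reference."""
    return max(imfs, key=lambda imf: pearson(imf, reference))
```

In the NCEEMD pipeline this selection is repeated while sweeping the amplitude noise coefficient, keeping the coefficient that maximizes the correlation increment.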

Author 1: Xiangkui Wan
Author 2: Wenxin Gong
Author 3: Yunfan Chen
Author 4: Yang Liu

Keywords: ECG; white gaussian noise; complementary ensemble empirical mode decomposition; ECG-derived respiration (EDR)

PDF

Paper 121: Sleep Apnea and Rapid Eye Movement Detection using ResNet-50 and Gradient Boost

Abstract: Sleep apnea is a prevalent sleep problem marked by interruptions in breathing or superficial breaths while asleep. It frequently results in disrupted sleep patterns and can pose significant health risks such as cardiovascular issues and daytime exhaustion. The Rapid Eye Movement (REM) sleep stage is easily identifiable due to rapid eye movements, intense dreaming, and muscle immobility; this stage is vital for cognitive processes, the strengthening of memories, and the regulation of emotions. Detection of REM sleep is essential for understanding sleep architecture and diagnosing various sleep disorders. This paper proposes two machine learning models to detect these disorders from physiological signals. The study employs the Apnea-ECG dataset from PhysioNet for sleep apnea detection and the Sleep-EDF dataset for REM detection. For sleep apnea, a ResNet-50 deep learning model is adapted to process ECG signals, treating them as image-like representations. ResNet-50 is trained on the Apnea-ECG dataset, which provides annotated electrocardiogram recordings for supervised learning. For REM detection, Gradient Boosting, an ensemble machine learning technique, is applied to EEG signals from the Sleep-EDF dataset. Relevant features associated with REM sleep phases are extracted from EEG signals and used to train the model. This paper contributes to automated sleep disorder diagnosis by presenting tailored machine learning models for detecting sleep apnea and REM from physiological signals.
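Gradient Boosting, as used here for the REM detector, fits weak learners to residuals; a toy squared-error version with one-feature decision stumps (not the paper's configuration) looks like:

```python
def fit_stump(xs, residuals):
    """Best single-split stump (threshold, left mean, right mean) by SSE."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1:]

def gradient_boost(xs, ys, rounds=20, lr=0.5):
    """Additively fit stumps to residuals; returns training predictions."""
    preds = [0.0] * len(xs)
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, preds)]
        t, lm, rm = fit_stump(xs, resid)
        preds = [p + lr * (lm if x <= t else rm)
                 for x, p in zip(xs, preds)]
    return preds
```

Each round shrinks the residual, so the ensemble converges toward the targets on the training data; real EEG features would be multi-dimensional and the stumps deeper trees.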

Author 1: Ganti Venkata Varshini
Author 2: Sakthivel V
Author 3: Prakash P
Author 4: Mansoor Hussain D
Author 5: Jae Woo Lee

Keywords: Sleep Apnea; Rapid Eye movement; ResNet-50; Gradient boost; sleep stage; sleep disorders

PDF

Paper 122: Advanced Diagnosis of Polycystic Ovarian Syndrome using Machine Learning and Multimodal Data Integration

Abstract: Polycystic Ovary Syndrome (PCOS) is a common endocrine disorder that affects women in their reproductive years, characterized by irregular menstrual cycles, hyperandrogenism, and polycystic ovaries. PCOS presents significant challenges in diagnosis due to its heterogeneous nature and varied clinical manifestations. This project aimed to develop a comprehensive system for PCOS detection, integrating ultrasound images and clinical data through advanced machine learning techniques and using the Rotterdam criteria for diagnostic decisions. Feature extraction from ultrasound images was conducted using the ResNet-50 deep learning model, while clinical data underwent correlation-based feature selection. Three classification algorithms (Support Vector Machine (SVM), Random Forest, and Logistic Regression) were used to categorize the features extracted from ultrasound images. The integration of image-based and clinical-based features was explored and evaluated, revealing the potential for enhancing PCOS diagnosis accuracy. The developed system holds promise for assisting doctors in PCOS diagnosis, offering a holistic approach that leverages both imaging and clinical information.
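The Rotterdam criteria used for the diagnostic decision require at least two of three findings (with other causes excluded), which makes the final decision rule simple to express once the classifiers have produced the three findings:

```python
def rotterdam_pcos(oligo_anovulation, hyperandrogenism, polycystic_ovaries):
    """Rotterdam consensus: PCOS is diagnosed when at least two of the
    three findings are present (assuming other aetiologies are excluded)."""
    findings = [oligo_anovulation, hyperandrogenism, polycystic_ovaries]
    return sum(findings) >= 2
```

In the described system, the polycystic-ovaries finding would come from the ultrasound classifiers and the other two from the clinical-data features.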

Author 1: Nethra Sai M
Author 2: Sakthivel V
Author 3: Prakash P
Author 4: Vishnukumar K
Author 5: Dugki Min

Keywords: PCOS; ultrasound images; clinical data; feature extraction; classification; Rotterdam criteria

PDF

Paper 123: Predictive Modeling of Student Performance Through Classification with Gaussian Process Models

Abstract: In the contemporary educational landscape, proactively engaging in predictive assessment has become indispensable for academic institutions. This strategic imperative involves evaluating students based on their innate aptitude, preparing them adequately for impending examinations, and fostering both academic and personal development. Alarming statistics underscore a notable failure rate among students, particularly in language courses. This article employs predictive methodologies to assess and anticipate the academic performance of students in language courses during the G2 and G3 academic exams. The study utilizes the Gaussian Process Classification (GPC) model in conjunction with two optimization algorithms, the Population-based Vortex Search Algorithm (PVS) and the COOT Optimization Algorithm (COA), resulting in the GPPV and GPCO models. The classification of students into distinct performance categories based on their language scores reveals that the GPPV model exhibits the highest concordance between measured and predicted outcomes. In G2, the GPPV model correctly categorized 51.1% of students as Poor, 25.57% as Acceptable, 14.17% as Good, and 7.7% as Excellent. This performance surpasses both the optimized GPCO model and the standalone GPC model, signifying its efficacy in predictive analysis and educational advancement.
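The four-band labelling of scores into Poor / Acceptable / Good / Excellent reduces to thresholding; the cut-off values below are hypothetical, since the paper does not publish its thresholds:

```python
def grade_category(score, cuts=(10, 12, 14)):
    """Map a language score to a performance band.
    The cut-offs are illustrative placeholders, not the study's values."""
    if score < cuts[0]:
        return "Poor"
    if score < cuts[1]:
        return "Acceptable"
    if score < cuts[2]:
        return "Good"
    return "Excellent"
```

The GPC-based models predict these band labels directly; a banding function like this is only needed to derive the ground-truth categories from raw scores.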

Author 1: Xiaowei ZHANG
Author 2: Junlin YUE

Keywords: Academic performance; language; hybrid algorithms; Gaussian Process Classification; population-based Vortex Search Algorithm; COOT Optimization Algorithm

PDF

Paper 124: Fiber Tracking Method with Adaptive Selection of Peak Direction Based on CSD Model

Abstract: As a multi-fiber tracking model, the constrained spherical deconvolution (CSD) model is widely used in the field of fiber reconstruction. The CSD model has shown good reconstruction capability for crossing fibers in low-anisotropy regions and can achieve more accurate brain fiber reconstruction. However, current fiber tracking algorithms based on the CSD model have drawbacks in their tracking strategies, especially in certain crossing regions, which may lead to isotropic diffusion signals, premature termination of fibers, high computational complexity, and low efficiency. In this study, we propose a fiber tracking method with adaptive selection of the peak direction based on the CSD model, called FTASP_CSD, for fiber reconstruction. The method first filters the fiber orientation distribution (FOD) peaks by a threshold and eliminates peak directions below the set threshold. Secondly, a priority strategy is used to select the direction, and the tracking direction is adaptively adjusted according to the overall shape and needs of the FOD. Through dynamic selection among the maximum peak direction, the second maximum peak direction, and the nearest peak direction, the tracking direction that best matches the true fiber direction is found. This method not only ensures spatial consistency but also avoids the influence on the tracking direction of stray FOD peaks that may be introduced by imaging noise. Experimental results on simulated and in vivo data show that the fiber bundles tracked by the FTASP_CSD method are much smoother in overall visual effect than those of state-of-the-art methods, and the fiber bundles tracked in regions of crossing or bifurcating fibers are more complete. This improves the angular resolution of fiber-crossing recognition and lays a foundation for further in-depth research on fiber tracking technology.
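The threshold-then-prioritize selection can be sketched as follows; the threshold values and the exact fallback order are illustrative simplifications of the FTASP_CSD strategy, not the paper's implementation:

```python
import math

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def _norm(v):
    n = math.sqrt(_dot(v, v))
    return [x / n for x in v]

def next_direction(peaks, amplitudes, prev_dir,
                   amp_threshold=0.1, angle_threshold_deg=45.0):
    """Discard weak FOD peaks, then prefer the admissible peak most
    aligned with the previous tracking direction; fall back to the
    strongest remaining peak, or terminate if none survive."""
    cos_lim = math.cos(math.radians(angle_threshold_deg))
    kept = [(p, a) for p, a in zip(peaks, amplitudes) if a >= amp_threshold]
    if not kept:
        return None                                  # terminate streamline
    prev = _norm(prev_dir)
    aligned = [(abs(_dot(_norm(p), prev)), p, a) for p, a in kept]
    aligned = [t for t in aligned if t[0] >= cos_lim]
    if aligned:
        return max(aligned)[1]                       # most consistent peak
    return max(kept, key=lambda t: t[1])[0]          # strongest peak
```

Filtering first removes noise-induced stray peaks; the alignment preference is what keeps the streamline spatially consistent through crossing regions.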

Author 1: Qian Zheng
Author 2: Kefu Guo
Author 3: Jiaofen Nan
Author 4: Lujuan Deng
Author 5: Junying Cheng

Keywords: Diffusion magnetic resonance imaging; constrained spherical deconvolution; fiber orientation distribution; fiber tractography

PDF

Paper 125: Federated LSTM Model for Enhanced Anomaly Detection in Cyber Security: A Novel Approach for Distributed Threat

Abstract: Technological improvements have led to a rapid expansion of the digital realm, raising concerns about cyber security. The last ten years have seen an enormous rise in Internet applications, which has greatly raised the requirement for information network security. In the realm of cyber security, detecting anomalies efficiently and effectively is paramount to safeguarding digital assets and infrastructure. Traditional anomaly detection methods often struggle with the evolving landscape of cyber threats, particularly in distributed environments. To address this challenge, the research proposes a novel approach leveraging federated learning and Long Short-Term Memory (LSTM) networks. Federated learning permits training models across decentralised data sources without sacrificing data privacy, and LSTM networks are highly effective at identifying temporal correlations in sequential data, which makes them suitable for analysing cyber security time-series data. In this paper, the study presents a federated LSTM model architecture tailored for anomaly detection in distributed environments. By allowing model updates to be performed locally on individual devices or servers without sharing raw data, federated learning mitigates the privacy concerns associated with centralized data aggregation. This decentralized approach not only safeguards sensitive information but also fosters collaboration among diverse stakeholders, empowering them to contribute to model improvement without relinquishing control over their data. Python software is used to implement the method. The research demonstrates its effectiveness through experiments on real-world cyber security datasets, showcasing improved detection rates compared to traditional methods. The suggested Fed LSTM method reaches 98.9% accuracy, 2.28% higher than RNN, SVM, and CNN baselines. Additionally, the research discusses the practical implications and scalability of the approach, highlighting its potential to enhance cyber security measures in distributed threat scenarios.
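The aggregation step of such a federated scheme is typically FedAvg: each client trains its LSTM locally, and the server averages parameter vectors weighted by client data size, so only parameters (never raw data) leave the client. A minimal sketch over flat parameter lists:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: weighted mean of client parameter vectors.
    client_weights: list of equal-length parameter lists, one per client.
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]
```

The averaged vector becomes the next global model, which is broadcast back to clients for another round of local training.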

Author 1: Aradhana Sahu
Author 2: Yousef A.Baker El-Ebiary
Author 3: K. Aanandha Saravanan
Author 4: K. Thilagam
Author 5: Gunnam Rama Devi
Author 6: Adapa Gopi
Author 7: Ahmed I. Taloba

Keywords: Federated learning; LSTM; anomaly detection; cyber security; distributed threats; privacy-preserving model training

PDF

Paper 126: Optimizing Industrial Engineering Performance with Fuzzy CNN Framework for Efficiency and Productivity

Abstract: In industrial engineering, efficiency is paramount. Convolutional Neural Networks (CNNs) are commonly used to identify and detect labour activity in industrial environments. Accurate fault detection is crucial for identifying and classifying defects in production. This research proposes a novel approach to enhancing industrial performance by predicting defects in manufacturing processes using a fuzzy-based CNN technique. The framework integrates cutting-edge fuzzy logic with CNNs, improving diagnostic model efficacy through fuzzy logic-based weight adjustments during training. Additionally, a novel fuzzy classification method is used for defect detection, followed by a demand forecast error simulation tailored to specific regions. The framework begins with initial training data, which is then combined with multiple classifiers to form a comprehensive dataset. The CNN, enhanced by fuzzy logic for weight updates, first employs fuzzy classification to diagnose errors, then simulates demand forecast errors regionally. This refined dataset is subsequently used as input for the CNN. Implementation in a manufacturing organization demonstrates the proposed framework's effectiveness, significantly improving fault diagnostic accuracy compared to traditional methods. By leveraging the latest advances in CNNs and fuzzy logic, the framework offers a robust solution for boosting industrial efficiency. This comprehensive approach to defect detection in industrial processes seamlessly integrates CNNs with fuzzy logic, highlighting the framework's utility and potential impact on industrial efficiency. The results underscore the viability of this innovative technology in enhancing industrial engineering performance.
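One way fuzzy logic can modulate CNN weight updates, as the abstract describes, is to scale the learning rate by a fuzzy membership of the current error; the rule and the triangular membership shape below are illustrative assumptions, not the paper's design:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_weight_update(w, grad, error, lr=0.1):
    """Scale the step by membership in the fuzzy set 'large error':
    larger errors push the effective learning rate up to 2*lr."""
    mu = triangular(abs(error), 0.0, 1.0, 2.0)
    return w - lr * (1.0 + mu) * grad
```

A full fuzzy-CNN would define several such sets and combine them with inference rules, but the weight adjustment reduces to modulating the gradient step as above.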

Author 1: Suraj Bandhekar
Author 2: Abdul Hameed Kalifullah
Author 3: Venkata Krishna Rao Likki
Author 4: Hatem S. A. Hamatta
Author 5: Deepa
Author 6: Tumikipalli Nagaraju Yadav

Keywords: Industrial Engineering performance; manufacturing industry; fuzzy-based convolutional neural network; fault diagnostic

PDF

Paper 127: A Comparative Study Between Linear and Affine Multi-Model in Predictive Control of a Nonlinear Dynamic System

Abstract: Model Predictive Control (MPC) is one of the most successful control strategies and has been applied in many areas. However, the success of an MPC scheme lies in the accuracy of the adopted prediction model. This paper treats the problem of MPC when a larger domain of set-point values and the best tracking performance are needed. It presents a novel modeling structure for representing a nonlinear dynamic system based on its static nonlinear characteristic. The Multiple Affine Model (MAM) structure is then compared with Multiple Linear Models (MLM) in a Linear MPC (LMPC) scheme. The MAM structure offers more modeling precision with a much smaller number of models, and therefore guarantees the best tracking performance in terms of stability, speed, and accuracy.
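A multiple-affine-model scheme keeps a bank of local models y+ = a*y + b*u + c, each valid on a range of operating points; the sketch below (with made-up coefficients and ranges) shows model selection and one-step prediction, the core of what the LMPC optimizer would iterate over the prediction horizon:

```python
def select_model(models, setpoint):
    """Pick the affine local model whose validity range covers the set-point.
    Each model is a tuple (lo, hi, a, b, c)."""
    for lo, hi, a, b, c in models:
        if lo <= setpoint <= hi:
            return a, b, c
    raise ValueError("set-point outside the modeled domain")

def predict(models, y, u, setpoint):
    """One-step prediction y+ = a*y + b*u + c with the selected model."""
    a, b, c = select_model(models, setpoint)
    return a * y + b * u + c

# Hypothetical two-model bank over operating ranges [0,1] and (1,2].
bank = [(0.0, 1.0, 0.8, 0.2, 0.0), (1.0, 2.0, 0.6, 0.3, 0.4)]
```

The affine offset c is what lets each local model match the static nonlinear characteristic at its operating point, which is why fewer models are needed than with purely linear local models.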

Author 1: Houda Mezrigui
Author 2: Wassila Chagra
Author 3: Maher Ben Hariz

Keywords: Affine models; linear models; static characteristic; linear model predictive control; prediction horizon; tracking performances

PDF

Paper 128: Blockchain-Enabled Decentralized Trustworthy Framework Envisioned for Patient-Centric Community Healthcare

Abstract: Ethereum has gained significant attention from businesses as a blockchain technology since its conception. Beyond its original use for cryptocurrency, it provides many additional features. In the pharmaceutical sector, where reliable supply chains are necessary for cross-border transactions, Ethereum shows promise. Thanks to its decentralized structure, it addresses problems of quality, traceability, and transparency in a sector defined by complexity and strict regulation. This study therefore looks at how Ethereum is used in the pharmaceutical sector, namely the networks that allow smart contracts to communicate with one another on the Ethereum network. These concepts are examined via communication networks, inter-contract owner interactions, and simulation analysis, which seeks to identify dubious practices and unjust contracts inside the supply chain. The study suggests effective manufacturing techniques that call for reduction rather than storage to technological obstacles. With this endeavor, we hope to provide insights into Ethereum-based contract ecosystems and assist in anomaly identification for enhanced security and transparency. The main objective is to support patient record methodology and transform the way healthcare data is managed. The suggested model integrates front-end interfaces, back-end optimization, distributed storage, proof-of-work techniques, and training to establish a safe and efficient ecosystem for healthcare data. These elements can be combined through the blockchain-enabled architecture to transform manufacturing-protecting chemicals in handling, distribution, and necessary training.

Author 1: Mohammad Khalid Imam Rahmani
Author 2: Javed Ali
Author 3: Surbhi Bhatia Khan
Author 4: Muhammad Tahir

Keywords: Blockchain; smart contract; externally owned accounts; decentralized trustworthy framework; community healthcare; Ethereum; supply chain management

PDF

Paper 129: Design and Optimization of Reversible Information Hiding Image Encryption Algorithms in the Context of Electronic Information Security

Abstract: With the widespread application of electronic information, and in order to meet the growing security needs in the field of electronic information security, a new encryption algorithm based on a novel chaotic map with traversal and chaos characteristics is proposed. By introducing a hash algorithm and a chaotic map, the randomness and nonlinear characteristics of the system are enhanced, improving data confidentiality and system security. The encryption process includes generating chaotic sequences, constructing permutation boxes, and performing DNA encoding operations, ultimately producing cipher-text images with high randomness. Meanwhile, an information-hiding encryption algorithm with a four-dimensional conservative chaotic system is designed, which improves the randomness and initial-value sensitivity of the algorithm and optimizes reversible information hiding and image encryption. The algorithm includes chaotic-system encryption, additional data embedding, a rearrangement strategy, and symmetric-structure data extraction and image restoration. The algorithm was robust to images with a 50% tampering degree, with an average peak signal-to-noise ratio of 31.26 dB, demonstrating high key sensitivity. In the light home plot test, the peak signal-to-noise ratio reached 57.2 dB. Under the same QF value but different embedding amounts, the signal-to-noise ratio of the algorithm was 46.9 dB, superior to other algorithms, highlighting its outstanding performance under different challenges.
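
The chaotic-sequence generation and XOR-based diffusion mentioned above can be illustrated with the classical logistic map. This is a minimal sketch (the paper's novel map, hash stage, permutation boxes, and DNA encoding are not reproduced), and the seed and parameters are arbitrary examples.

```python
def logistic_keystream(x0, r=3.99, n=16, burn_in=100):
    """Generate a byte keystream from the logistic map x <- r*x*(1-x).
    For r near 4 the map is chaotic, so a tiny change in the seed x0
    yields a completely different stream (key sensitivity)."""
    x = x0
    for _ in range(burn_in):  # discard the transient
        x = r * x * (1 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)  # quantize the state to a byte
    return stream

def xor_diffuse(data, keystream):
    """Diffusion step: XOR the plaintext bytes with the chaotic keystream."""
    return bytes(b ^ k for b, k in zip(data, keystream))

plain = b"pixel data bytes"
key = logistic_keystream(0.3141592, n=len(plain))
cipher = xor_diffuse(plain, key)
```

Because XOR is an involution, applying the same keystream to the ciphertext restores the plaintext, which is what makes the hiding reversible with the correct key.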

Author 1: Li Zhang
Author 2: Keke Shan

Keywords: Information security; reversible information hiding; key sensitivity; chaos system optimization

PDF

Paper 130: Smart Parking: An Efficient System for Parking and Payment

Abstract: In addition to being a time-consuming and frustrating driving experience, searching for a cheap, empty parking space wastes fuel and pollutes the air. Densely populated cities have limited and expensive public parking spaces. Private parking spaces, on the other hand, are typically underutilized, and their owners are willing to charge higher parking fees to cover the expense of maintaining excess parking capacity. In light of these circumstances, it is essential to look for a smart parking system that aggregates private parking spaces and makes them available to ease the pressure on public parking. An Internet of Things (IoT) enabled parking-space recommendation system is proposed in this paper. It makes recommendations by utilizing IoT technology (traffic and parking sensors). The recommended system helps users automatically pick a spot at the lowest charge by accounting for metrics such as distance, slot vacancy, and charges. To accomplish this, the user's parking cost is calculated using performance measures. The system lets the user request a parking spot when one is available and recommends a new parking lot if the present one is full. Based on the simulation results, the proposed model reduces user waiting time and increases the likelihood of finding an empty slot, besides offering an anonymous payment method. The proposed system also exploits the concept of a VANET, as it uses on-board and roadside units. The novelty of the research is that, apart from calculating the cost function, each node maintains a neighbor table that is shared with all neighbors whenever it changes. We simulated the environment in Network Simulator 3 (NS3).
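
A weighted cost function over distance, fee, and vacancy, as described above, might look like the following sketch; the weights and slot data are hypothetical, not the paper's calibrated performance measures.

```python
def parking_cost(distance_km, fee, vacancy, w_dist=0.5, w_fee=0.4, w_occ=0.1):
    """Weighted parking cost: lower is better. `vacancy` is the fraction
    of free slots in the lot (1.0 = completely empty)."""
    return w_dist * distance_km + w_fee * fee + w_occ * (1 - vacancy)

# slot -> (distance in km, hourly fee, vacancy fraction); made-up data
slots = {"A": (1.2, 3.0, 0.8), "B": (0.4, 5.0, 0.2), "C": (0.9, 2.5, 0.6)}
best = min(slots, key=lambda s: parking_cost(*slots[s]))
```

Here slot C wins: it is not the closest, but its low fee and decent vacancy give it the lowest combined cost under these weights.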

Author 1: Md Ezaz Ahmed
Author 2: Mohammad Arif
Author 3: Mohammad Khalid Imam Rahmani
Author 4: Md Tabrez Nafis
Author 5: Javed Ali

Keywords: Smart parking; Internet of Things; sensors; simulation; NS3; VANET

PDF

Paper 131: Design of Network Security Assessment and Prediction Model Based on Improved K-means Clustering and Intelligent Optimization Recurrent Neural Network

Abstract: Aiming at security problems in cyberspace, this study proposes a cybersecurity assessment and prediction model based on improved K-means and an intelligently optimized recurrent neural network. First, building on the traditional autoencoder and K-means algorithm, a sparse autoencoder and the K-means++ algorithm are used to construct a cybersecurity posture assessment model based on improved K-means. Then, a bidirectional gated recurrent unit is used for security posture prediction; a particle swarm optimization algorithm enhances the bidirectional gated recurrent unit, and prediction is performed jointly with a convolutional neural network based model. The results show that the proposed security assessment model reacts quickly when a fault occurs, is not prone to misjudgment, and has good stability. The accuracy of the security assessment model was 99.8%, the running time was 0.277 s, the recall rate was 96.67%, and the F1 score was 96.49%. The proposed security prediction model has the lowest mean absolute error and root mean square error, 0.18 and 0.30 respectively. Its running times of 703.23 s and 787.46 s are relatively long but still within an acceptable range. The model-predicted posture values fit the actual posture values well. In summary, the constructed model performs well in application and helps to ensure the security of cyberspace.
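
The K-means++ component mentioned above refers to the standard seeding rule, sketched here in one dimension: each new center is drawn with probability proportional to its squared distance from the nearest existing center. This is the generic algorithm, not the study's full assessment model, and the sample points are invented.

```python
import random

def kmeans_pp_init(points, k, seed=0):
    """K-means++ seeding: after a random first center, each new center is
    drawn with probability proportional to its squared distance from the
    nearest center already chosen (1-D points for simplicity)."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:  # weighted roulette-wheel selection
                centers.append(p)
                break
    return centers

centers = kmeans_pp_init([0.0, 0.1, 0.2, 10.0, 10.1, 20.0], k=3)
```

Compared with uniformly random seeds, this spreading of the initial centers is what makes K-means++ converge faster and more reliably than plain K-means.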

Author 1: Qianqian Wang
Author 2: Xingxue Ren
Author 3: Lei Li
Author 4: Huimin Peng

Keywords: K-means; cybersecurity; situational assessment; situational prediction; autoencoder

PDF

Paper 132: Robust Chaos Image Encryption System using Modification Logistic Map, Gingerbread Man and Arnold Cat Map

Abstract: In the field of security, information must be protected from unauthorized use because it contains a great deal of sensitive content, especially in images. Image encryption is now recognized as an outstanding strategy for protecting images from attackers. Despite numerous advancements, an efficient image encryption method remains essential to achieving high image security. An accurate encryption algorithm therefore requires a formidable random key generator with regeneration abilities, in addition to a new strategy for confusion and diffusion with distinct processes. To accomplish these objectives, a framework for image encryption with three main phases has been created. First, a new key generator with a high level of randomness was created based on different chaotic maps and the proposed Modification Logistic Map function. Second, the confusion phase sorts the generated key in ascending order and then permutes the image pixels according to the sorting key. Last, the diffusion phase is based on the Gingerbread Man Method (GGM), the Arnold Cat Map (ACM) transform, and an XOR between the confused image and the Arnold image. The ACM is used to remove flat areas from the image. Various parameters were used to assess the experimental results. In conclusion, the suggested image encryption approach is confirmed to be a solid success in the field of encryption.
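
The Arnold Cat Map step used to scramble flat areas can be sketched directly: for an N x N image the map (x, y) → (x + y, x + 2y) mod N is a bijection on pixel positions, and for N = 4 it returns to the identity after three iterations. The 4 x 4 test image below is illustrative.

```python
def arnold_cat(img, iterations=1):
    """Arnold Cat Map on an N x N image: pixel (x, y) moves to
    ((x + y) mod N, (x + 2y) mod N), scrambling flat regions."""
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img

img = [[4 * r + c for c in range(4)] for r in range(4)]
scrambled = arnold_cat(img)
```

Because the map only permutes positions, every pixel value is preserved, which is exactly the confusion property an encryption scheme needs before the XOR diffusion step.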

Author 1: Lina Jamal Ibrahim
Author 2: John Bush Idoko
Author 3: Almuntadher M. Alwhelat

Keywords: Arnold cat map; confusion; diffusion; image encryption; modification logistic map

PDF

Paper 133: A Data Sharing Privacy Protection Model Based on Federated Learning and Blockchain Technology

Abstract: As the main driving force of social development in the new era, data sharing raises controversy over privacy and security. Traditional privacy protection methods struggle when faced with complex, massive shared data. Given this, the Byzantine consensus algorithm in blockchain technology was first elaborated. Meanwhile, a decision tree algorithm was introduced to optimize node classification, and a new consensus algorithm was proposed. In addition, local data training and updating were achieved through federated learning, and a new data-sharing privacy protection model was proposed after jointly optimizing the consensus algorithms. The maximum throughput of the optimized consensus algorithm was 1560, and the maximum consensus delay was 110 milliseconds. After multiple iterations, the removal rate of Byzantine nodes reached 56.6%. The optimal reputation value of the new data-sharing privacy protection model was 0.75, and the lowest reputation value after 10 iterations was 0.32. This proposed model thus achieves excellent results in data-sharing privacy protection tasks, demonstrating high feasibility and effectiveness. The research aims to provide a reliable method for data-sharing privacy protection in the field.
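
The federated learning component described above rests on the standard federated averaging rule: clients train locally and share only model weights, which the server averages weighted by local data size. This generic sketch, with invented weight vectors, is not the paper's joint consensus optimization.

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: the server's global model is the data-size
    weighted mean of the clients' locally trained weight vectors, so raw
    data never leaves a client."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two clients: the second holds three times as much data, so it gets
# three times the weight in the average.
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```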

Author 1: Fei Ren
Author 2: Zhi Liang

Keywords: Federated learning; blockchain; data sharing; privacy; reputation

PDF

Paper 134: Time Window NSGA-II Route Planning Algorithm for Home Care Appointment Scheduling in the Elderly Industry

Abstract: Given the shortage of healthcare resources, the home care sector faces a serious challenge in maximizing the effectiveness of healthcare workers' services and raising customer satisfaction. In this study, a model for healthcare worker scheduling and path planning is built. Fuzzy time window theory is used to discuss how to determine service duration in both fixed and fuzzy service-duration situations. A path-planning algorithm based on the non-dominated sorting genetic algorithm is used to optimize the decision-making process. Simulation experiments based on real data were conducted to analyze the factors that affect the model's results and use them as a foundation for effective planning recommendations. According to the findings, customer demand under a defined service hour reaches a threshold of 343 before additional man-hour expenses start to accrue; decision-makers must therefore make adequate staffing adjustments before this happens. The appointment time window has a greater impact on customer satisfaction and can be suitably extended in the customer appointment interface to raise satisfaction. The -value, which can be calculated from the carer's fuzzy service hours, high and low peak demand, and the percentage of urgent tasks, relates time cost to satisfaction under fuzzy service hours. The corresponding optimal -values are 0.6, 0.3, 0.6, and 0.6, which balance time cost and customer satisfaction in this scenario.
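
NSGA-II, the genetic algorithm named above, is built around non-dominated sorting; the first Pareto front (e.g., over time cost and dissatisfaction, both minimized) can be computed as in this generic sketch. The sample points are hypothetical, and this is the textbook rule rather than the paper's full FTWNSGA-II.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_pareto_front(points):
    """First non-dominated front: the points no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (time cost, dissatisfaction) pairs for candidate routes
front = first_pareto_front([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)])
```

NSGA-II repeats this sorting to rank the whole population into successive fronts, then uses crowding distance to keep the fronts well spread.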

Author 1: Guoping Xie

Keywords: FTWNSGA-II; aging in place; path planning; appointment scheduling; fuzzy time windows

PDF

Paper 135: The Application of Optimization Algorithms for Workflow Scheduling Based on Cloud Computing IaaS Environment in Industry Multi-Cloud Scenarios

Abstract: The advancement of cloud computing has enabled workflow scheduling to provide users with more network resources. However, scheduling issues remain between resource allocation and user needs for workflows in IaaS environments. This study therefore adopts a heuristic scheduling model based on deadlines and lists and constructs a deadline-based single-objective workflow scheduling model. Traditional non-dominated sorting is improved with fuzzy-dominated sorting to construct a time-cost dual-objective workflow scheduling model. Introducing evolutionary algorithms with a reliability index as an additional scheduling objective, a time-cost-reliability three-objective workflow scheduling model is constructed. The results showed that the total execution time of the single-objective workflow scheduling model on four standard workflows was 92 s, 106 s, 113 s, and 105 s, and the throughput was 144 b/s, 138 b/s, 140 b/s, and 142 b/s, all superior to other models. Compared with other models, the dual-objective and three-objective workflow scheduling models had higher HV values, less execution time, and better Pareto frontier solutions. This study solves the time-cost-reliability three-objective scheduling problem in IaaS environments and has reference value for resource scheduling on cloud platforms.
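
A deadline- and list-based heuristic of the kind described above can be sketched as follows: tasks are visited in deadline order and each is placed on the VM that currently finishes earliest. The task and VM data are hypothetical, and this is a generic sketch rather than the paper's exact model.

```python
def deadline_list_schedule(tasks, vms):
    """List-scheduling heuristic: visit tasks in deadline order and place
    each on the VM with the earliest current finish time. Each task is
    (name, runtime, deadline); returns (name, vm, finish, met_deadline)."""
    finish = {vm: 0.0 for vm in vms}
    plan = []
    for name, runtime, deadline in sorted(tasks, key=lambda t: t[2]):
        vm = min(finish, key=finish.get)  # earliest-available VM
        finish[vm] += runtime
        plan.append((name, vm, finish[vm], finish[vm] <= deadline))
    return plan

plan = deadline_list_schedule(
    [("t1", 2.0, 5.0), ("t2", 3.0, 4.0), ("t3", 1.0, 10.0)],
    ["vm1", "vm2"])
```

The multi-objective variants keep the same assignment skeleton but score candidate plans on time, cost, and reliability instead of deadline feasibility alone.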

Author 1: Cunbing Li

Keywords: Cloud computing; IaaS; scheduling model; evolutionary algorithms; heuristic model

PDF

Paper 136: Optimization of Robot Environment Interaction Based on Asynchronous Advantage Actor-Critic Algorithm

Abstract: With the continuous advancement of automation and intelligent technology, the application of robots in environment interaction and delivery tasks has become increasingly important. A new noise network and advantage function were constructed, and an asynchronous update mechanism was adopted to enhance the exploration ability and learning efficiency of the Asynchronous Advantage Actor-Critic (A3C) algorithm. Simulation tests were conducted on classic control tasks on the Gym platform and in complex Atari game environments, and experimental verification was carried out on actual robot grasping tasks to evaluate the proposed method's performance. The improved A3C reduced training steps by 14.4% in the "CartPole-v0" task, improved scores by 31.9% in the "BreakoutNoFrameskip-v4" game, and increased scores by 7.74% in the "PongNoFrameskip-v4" game. In actual testing, the position error was controlled at the pixel level, demonstrating the algorithm's accuracy in delivery tasks. This study provides new technological support for the advancement of robotics.
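
The advantage function at the heart of A3C is typically an n-step return minus the critic's value estimate; the following generic sketch computes it backwards from a bootstrap value. The reward and value numbers are illustrative, not from the paper's experiments.

```python
def n_step_advantages(rewards, values, bootstrap, gamma=0.99):
    """A3C-style advantages: A_t = R_t - V(s_t), where R_t is the n-step
    discounted return, accumulated backwards from the critic's bootstrap
    value for the state after the last step."""
    R = bootstrap
    adv = []
    for r, v in zip(reversed(rewards), reversed(values)):
        R = r + gamma * R          # discounted return, built back-to-front
        adv.append(R - v)          # advantage relative to the critic
    return list(reversed(adv))

adv = n_step_advantages([1.0, 1.0], [0.5, 0.5], bootstrap=0.0, gamma=0.5)
```

In A3C, each asynchronous worker computes these advantages from its own short rollout and uses them to weight the policy-gradient update it sends to the shared model.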

Author 1: Jitang Xu
Author 2: Qiang Chen

Keywords: A3C; robot environment interaction; intelligent technology; simulation testing

PDF

Paper 137: Appraising the Building Cooling Load via Hybrid Framework of Machine Learning Techniques

Abstract: The overarching objective of this study lies in the thorough evaluation of the effectiveness of K-nearest neighbors (KNN) models in the precise estimation of building cooling load consumption. This assessment holds significant importance as it pertains to the feasibility and reliability of implementing machine learning techniques, particularly the KNN algorithm, within the domain of building energy management. This evaluation process centers on scrutinizing five distinct spatial metrics closely associated with the KNN algorithm. To refine and enhance the algorithm's predictive capabilities, this endeavor incorporates utilizing test samples drawn from an extensive database. These test samples serve as valuable resources for augmenting the overall predictive accuracy of the model, ultimately leading to more robust and reliable predictions of cooling load consumption within the building systems. Ultimately, the research endeavors to contribute substantially to advancing more energy-efficient and automated cooling system control strategies. Developed models encompass a single base model, another model optimized through the application of African Vultures Optimization, and a third model optimized using the Sand Cat Swarm Optimization technique. The training dataset includes 70% of the data, with eight input variables relating to the geometric and glazing characteristics of the buildings. After validating 15% of the dataset, the performance of the remaining 15% is tested. An analysis of various evaluation metrics reveals that KNSC (K-Nearest Neighbors optimized with the Sand Cat Swarm Optimization) demonstrates remarkable accuracy and stability among the three candidate models. It achieves a substantial reduction in the prediction Root Mean Square Error (RMSE) of 32.8% and 21.5% in comparison to the other two models (KNN and KNAV) and attains a maximum R2 value of 0.985 for cooling load prediction.
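
The core KNN estimate evaluated above is simply the mean target of the k nearest training samples; the minimal sketch below omits the metaheuristic tuning (AVO or SCSO), and the feature and cooling-load values are invented.

```python
def knn_predict(train, query, k=3):
    """Predict a cooling load as the mean target of the k training samples
    nearest to the query under Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    return sum(y for _, y in nearest) / k

# (geometric/glazing features) -> cooling load; invented numbers
train = [((1.0, 0.1), 10.0), ((1.1, 0.1), 12.0),
         ((1.2, 0.2), 11.0), ((5.0, 0.9), 40.0)]
load = knn_predict(train, (1.05, 0.1), k=3)
```

The optimizers in the paper would tune hyperparameters such as k and the distance metric; the prediction rule itself stays this simple.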

Author 1: Longlong Yue
Author 2: Xiangli Liu
Author 3: Shiliang Chang

Keywords: K-nearest-neighbors; machine learning; cooling load prediction; African Vultures Optimization; Sand Cat Swarm Optimization

PDF

Paper 138: Enhancing Hand Sign Recognition in Challenging Lighting Conditions Through Hybrid Edge Detection

Abstract: Edge detection is essential for image processing and recognition. However, single methods struggle under challenging lighting conditions, limiting the effectiveness of applications like sign language recognition. This study aimed to improve edge detection in critical lighting for better sign language interpretation. The experiment compared conventional methods (Prewitt, Canny, Roberts, Sobel) with hybrid ones. Effectiveness was gauged across multiple evaluations on datasets portraying critical lighting conditions, tested on English-alphabet hand signs with different threshold values. Evaluation metrics included pixel-value improvement, algorithm processing time, and sign language recognition accuracy. The findings demonstrate that combining the Prewitt and Sobel operators, as well as integrating Prewitt with Roberts, yielded superior edge quality and efficient processing times for hand sign recognition. The hybrid method excelled in backlight conditions at a threshold of 100 and in direct-light conditions at a threshold of 150. It improved pixel values by more than 100% and hand sign recognition accuracy by up to 11.5%. Overall, the study highlights the hybrid method's efficacy for hand sign recognition, offering a robust solution to lighting challenges. These findings not only advance image processing but also have significant implications for technology reliant on accurate segmentation and recognition, particularly in critical applications like sign language interpretation.
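
Combining operators as described above can be as simple as averaging their gradient responses before thresholding. The sketch below uses only the horizontal Prewitt and Sobel kernels for brevity (a full detector would also combine vertical gradients); the test image and thresholds are illustrative.

```python
PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

def conv2(img, kernel):
    """3x3 convolution; border pixels are left at zero."""
    n, m = len(img), len(img[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i][j] = sum(img[i + a][j + b] * kernel[a + 1][b + 1]
                            for a in (-1, 0, 1) for b in (-1, 0, 1))
    return out

def hybrid_edges(img, threshold):
    """Average the Prewitt and Sobel responses, then mark pixels whose
    combined gradient magnitude exceeds the threshold as edges."""
    p, s = conv2(img, PREWITT_X), conv2(img, SOBEL_X)
    return [[1 if abs(pv + sv) / 2 > threshold else 0
             for pv, sv in zip(pr, sr)] for pr, sr in zip(p, s)]

step = [[0, 0, 100, 100, 100] for _ in range(5)]  # vertical intensity step
edges = hybrid_edges(step, 150)
```

Averaging lets Sobel's center-weighted smoothing temper Prewitt's uniform response, one plausible reason hybrid pairs hold up better in harsh lighting.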

Author 1: Fairuz Husna Binti Rusli
Author 2: Mohd Hilmi Hasan
Author 3: Syazmi Zul Arif Hakimi Saadon
Author 4: Muhammad Hamza Azam

Keywords: Critical lighting; edge detection; image recognition; image segmentation; sign language

PDF

Paper 139: Football Video Image Restoration Based on Generalized Equalized Fuzzy C-mean Clustering Algorithm

Abstract: With the development of image processing techniques, the quality of visual content has become crucial for acquiring and analyzing information, especially in sports applications such as football match videos. Conventional image restoration techniques have limitations in dealing with motion blur and noise interference, especially in preserving edge information and texture details. To address these challenges, this study presents a generalized equalized fuzzy C-means clustering algorithm that combines fuzzy logic and cluster analysis. By introducing local spatial information and adaptive edge protection factors, the algorithm optimizes the update strategies of the membership function and cluster centers to enhance detail preservation and noise suppression, aiming to improve the restoration quality of football video images. The results demonstrate that the average gradient ratio, edge strength, standard deviation, and information entropy of the designed algorithm were 1.77, 0.92, 0.26, and 1.73, respectively, significantly better than those of other algorithms, proving its superiority in image restoration. With the help of the generalized equalized fuzzy C-means clustering technique, football video images can be made clearer and more detailed, which also advances motion analysis and automatic identification technologies.
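
The membership update in standard fuzzy C-means, which the algorithm above extends with spatial information and edge protection, is u_ik = 1 / Σ_j (d_ik / d_jk)^(2/(m−1)). A one-dimensional sketch of the plain rule (assuming no point coincides exactly with a center; the data are invented) is:

```python
def fcm_memberships(points, centers, m=2.0):
    """Standard fuzzy C-means membership degrees for 1-D data:
    u_ik = 1 / sum_j (d_ik / d_jk) ** (2 / (m - 1)).
    Each point belongs to every cluster with a degree in [0, 1],
    and the degrees for one point sum to 1."""
    exp = 2.0 / (m - 1.0)
    U = []
    for p in points:
        d = [abs(p - c) for c in centers]
        U.append([1.0 / sum((d[i] / d[j]) ** exp for j in range(len(d)))
                  for i in range(len(d))])
    return U

U = fcm_memberships([1.0, 9.0], centers=[0.0, 10.0])
```

It is these soft membership degrees that the paper's variant re-weights with neighborhood information, so noisy pixels are pulled toward the cluster of their surroundings without blurring genuine edges.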

Author 1: Shaonan Liu

Keywords: Generalized equilibrium; fuzzy c-mean clustering algorithm; image restoration; local spatial information; adaptive edge protection factor

PDF

Paper 140: Method for Ripeness Classification of Harvested Strawberries using Hue Information of Images Acquired After the Harvest

Abstract: Hakata Amaou is the most popular strawberry in Fukuoka Prefecture. However, Amaou farmers face a significant challenge due to a shortage of labor and successors, primarily caused by an aging workforce. This labor shortage is particularly severe during the harvest season, when work must be completed within a short timeframe. To address this issue, INAK System Co., Ltd. has developed an automatic harvesting system called "Robotsumi," which utilizes image recognition technology. Despite this advancement, the current image recognition method has not yet been able to classify Amaou strawberries into 10 quality grades, and the recognition process is affected by image defects, varying light conditions, and shadows. To overcome these challenges, this study first conducted questionnaires to gather information on the ripeness of harvested strawberries as classified by humans. Based on the questionnaire results, maturity classification was performed using the mode of hue. The discrimination results are verified and reported here.
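
Classification by the mode of hue can be sketched as follows; this simplified two-class version (ripe vs. unripe, with an arbitrary hue cutoff and bin count) stands in for the paper's 10-grade scheme, and the pixel values are invented.

```python
import colorsys
from statistics import mode

def ripeness_by_hue(pixels, bins=10):
    """Classify ripeness from the mode of hue: red hues sit near 0 (or 1)
    on the [0, 1) hue circle, while greenish hues sit near 1/3."""
    hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
            for r, g, b in pixels]
    h = mode(round(h_ * bins) / bins for h_ in hues)  # mode of binned hue
    return "ripe" if h < 0.1 or h > 0.9 else "unripe"

red_patch = [(200, 30, 40)] * 5     # invented reddish fruit pixels
green_patch = [(60, 180, 70)] * 5   # invented greenish fruit pixels
```

Using the mode rather than the mean hue makes the decision less sensitive to a minority of shadowed or defective pixels, which matches the abstract's concern about lighting and image defects.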

Author 1: Jin Sawada
Author 2: Kohei Arai
Author 3: Souichiro Tashi
Author 4: Shigenori Inakazu
Author 5: Mariko Oda

Keywords: Amaou; Robotsumi; hue; strawberry; automatic harvest; 10 grades classification; questionnaire; image defects

PDF

Paper 141: Short Video Recommendation Method Based on Sentiment Analysis and K-means++

Abstract: With the explosive growth of short video content, effectively recommending videos that interest users has become a major challenge. In this study, a short video recommendation model based on barrage (bullet comment) sentiment analysis and improved K-means++ is proposed to address the interest-matching problem in short video recommendation systems. The model uses sentiment vectors to represent bullet-comment content, clusters short videos through sentiment similarity calculation, and uses clustering density to eliminate abnormal sample points during the clustering process. The model's effectiveness was validated through simulation experiments. The outcomes show that when the historical data size increased to 7000, the model's prediction accuracy reached 0.81, the recall rate was 0.822, and the F1 value was 0.832. Compared with four mainstream recommendation algorithms, this model showed advantages in clustering time and complexity, with clustering time reduced to 8.2 seconds, demonstrating its efficiency in improving recommendation efficiency and accuracy. In summary, the proposed model achieves high recommendation accuracy in short video recommendation systems and meets the real-time demands of short video recommendation, effectively raising the quality of short video recommendations.

Author 1: Rong Hu
Author 2: Wei Yue

Keywords: Short videos; barrage; sentiment analysis; K-means++; recommendation; cluster density

PDF

Paper 142: FEC-IGE: An Efficient Approach to Classify Fracture Based on Convolutional Neural Networks and Integrated Gradients Explanation

Abstract: In this paper, we propose the FEC-IGE framework, which includes data preprocessing, data augmentation, transfer learning, and fine-tuning of pre-trained convolutional neural network (CNN) models for the problem of bone fracture classification. Bone fractures are a widespread medical issue globally, with significant prevalence and substantial burdens on individuals and healthcare systems. Their impact extends beyond physical injury, often leading to pain, reduced mobility, and decreased quality of life, and they incur substantial economic costs through medical expenses, rehabilitation, and lost productivity. In recent years, progress in machine learning has shown potential for fracture diagnosis and classification: by harnessing deep learning frameworks, researchers aim to design precise and effective mechanisms for automatically detecting and classifying bone fractures from medical imaging data. In this study, the FEC-IGE framework demonstrated its strength when applying pre-trained CNN models to the task of classifying X-ray bone fracture images, with accuracies of 98.48%, 96.92%, and 97.24% in three experimental scenarios. These outcomes result from fine-tuning and transfer learning on an augmented dataset of 1129 X-ray images covering ten kinds of fracture: avulsion, comminuted, fracture dislocation, greenstick, hairline, impacted, longitudinal, oblique, pathological, and spiral. To increase transparency and understanding of the model, Integrated Gradients explanations were also applied. Finally, precision, recall, F1 score, and the confusion matrix were used to evaluate performance and for other in-depth analysis.

Author 1: Triet Minh Nguyen
Author 2: Thuan Van Tran
Author 3: Quy Thanh Lu

Keywords: Convolutional neural network; transfer learning; fine-tuning; X-ray image classification; EfficientNet; classification break bone; deep learning; integrated gradients explanation

PDF

Paper 143: Dynamic Gesture Recognition using a Transformer and Mediapipe

Abstract: There is rising interest in dynamic gesture recognition as a research area, driven by recent global pandemics and the need to avoid touching shared surfaces. Most previous research has focused on deep learning algorithms for the RGB modality; however, despite its potential to enhance performance, the concept of attention has not been widely utilized in gesture recognition. Most research has also used three-dimensional convolutional networks with long short-term memory networks, which can be computationally expensive. As a result, this paper employs pre-trained models in conjunction with the skeleton modality to address the challenges posed by background noise. The goal is to present a comparative analysis of various gesture recognition models, divided into frame-based and skeleton-based approaches. The performance of the models was evaluated using a 2 GB dataset taken from Kaggle, in which each video contains 30 frames for recognizing five gestures. The transformer model for skeleton-based gesture recognition achieves 0.99 accuracy and captures temporal dependencies in sequential data.

Author 1: Asma H. Althubiti
Author 2: Haneen Algethami

Keywords: Gesture recognition; self-attention; transformer encoder; skeleton; transfer learning

PDF

Paper 144: Blockchain-based Decentralised Management of Digital Passports of Health (DPoH) for Vaccination Records

Abstract: With the recent impact of viral infections and pandemics, such as the global healthcare emergency caused by COVID-19, there is an urgent need for mass-scale testing and vaccination initiatives to tackle the health and economic crises. However, the centralized storage of patient information has given rise to significant concerns regarding privacy, transparency, and the efficient transmission of vaccination records. This paper presents a blockchain-based solution that seamlessly integrates identity verification, encryption protocols, and decentralized storage via IPFS (InterPlanetary File System), giving rise to the concept of the Digital Passport of Health (DPoH). The proposed solution introduces the DPoH specifically for test certification and leverages smart contracts on Ethereum-based blockchain technology to securely create, manage, and transmit data in the form of DPoHs. The solution is evaluated along three dimensions: (i) gas cost (energy efficiency), (ii) data storage (storage efficiency), and (iii) data access (response time) for the creation and transmission of DPoHs. The developed solution and its criteria-based validation are complemented with algorithmic implementations that can advance existing research and development on blockchain-based management of health-critical systems.

Author 1: Abdulrahman Alreshidi

Keywords: Smart healthcare; blockchain; software architecture; digital passport of health; software engineering

PDF

Paper 145: Hybrid Emotion Detection with Word Embeddings in a Low Resourced Language: Turkish

Abstract: Through natural language processing, subjective information can be obtained from written sources such as suggestions, reviews, and social media posts. Understanding users' feelings and emotions about a product or situation directly affects the decisions to be taken on that product or service. In this study, we focus on a hybrid approach to text-based emotion detection, combining keyword- and lexicon-based approaches through word embeddings. In emotion detection, lexicon words or keywords are compared with text units in several different ways, and the comparison results are used in emotion identification experiments. Examining this identification procedure makes it explicit that performance depends mainly on two factors: the lexicon/keyword list and the representation of the text unit. We propose to employ word vectors (embeddings) for both. First, we propose a hybrid approach that uses word-vector similarities to determine lexicon words, in contrast to traditional approaches that employ all arbitrary words in a given text. Our approach reduces the overall effort in emotion identification by decreasing the number of arbitrary words that do not carry emotive content, and it decreases the need for crowdsourcing in lexicon word labelling. Second, we propose to build the representations of text units by measuring their word-vector similarities to a given lexicon. We built two lexicons with our approach and present three comparison metrics based on embedding similarities. Emotion identification experiments were performed using both unsupervised and supervised methods on Turkish text. The experimental results show that the hybrid approach involving word embeddings is promising for Turkish texts, and thanks to its flexible, language-independent structure it can be improved and applied to other languages.
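
Filtering lexicon candidates by embedding similarity, as proposed above, reduces to a cosine-similarity threshold against an emotion seed vector; the toy two-dimensional vectors and the threshold below are illustrative, not the study's lexicons or metrics.

```python
def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: sum(x * x for x in w) ** 0.5
    return dot / (norm(u) * norm(v))

def emotive_lexicon(candidates, seed_vec, threshold=0.6):
    """Keep only candidate words whose embedding is similar enough to an
    emotion seed vector, shrinking the lexicon before it is compared
    against text units."""
    return [w for w, vec in candidates.items()
            if cosine(vec, seed_vec) >= threshold]

# Toy 2-D "embeddings": a real setting would use trained word vectors
words = {"joyful": (0.9, 0.1), "table": (0.0, 1.0)}
lexicon = emotive_lexicon(words, seed_vec=(1.0, 0.0))
```

The same similarity function can then score a whole text unit against the pruned lexicon, which is the second use of embeddings the abstract describes.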

Author 1: Senem Kumova Metin
Author 2: Hatice Ertugrul Giraz

Keywords: Emotion detection; word embedding; vector similarity; Turkish

PDF

Paper 146: Language Models for Multi-Lingual Tasks - A Survey

Abstract: These days, online media platforms such as social media allow their users to exchange and engage in different languages; it is no longer surprising to see comments in many languages on posts published by international celebrities and public figures. In this era, understanding cross-language content and multilingualism is crucial in natural language processing (NLP), and a huge amount of effort has been dedicated to leveraging existing NLP technologies to tackle this challenging research problem, especially with advances in language analysis and the introduction of large language models. In this survey, we provide a comprehensive overview of the existing literature on the evolution of language models with a focus on multilingual tasks, and we identify potential opportunities for further research in this domain.

Author 1: Amir Reza Jafari
Author 2: Behnam Heidary
Author 3: Reza Farahbakhsh
Author 4: Mostafa Salehi
Author 5: Noel Crespi

Keywords: Language models; transfer learning; BERT; NLP; multilingual task; low resource languages; LLMs

PDF

Paper 147: Automatic Flipper Control for Crawler Type Rescue Robot using Reinforcement Learning

Abstract: In recent years, many natural disasters have occurred, and rescue robots have been used to gather information at disaster sites. Rescue robots, particularly crawler-type rescue robots, are operated by their operators through remote control via wireless or wired communication. However, some robots have failed to return owing to tipping over or severed communication wires caused by operator error. Therefore, studies have focused on automatic control of rescue robots. Adapting a rescue robot to uneven terrain or unexpected obstacle shapes under autonomous control is challenging; it requires not only fully autonomous control but also partial control of the robot to assist teleoperation. This study proposes automatic flipper control of rescue robots using reinforcement learning for stepping over steps. The proposed method involves the design of the learning environment, the reward setting, and the system configuration for reinforcement learning. The training inputs were coarse-grained information from a distance sensor, a gyro sensor, and GPS coordinates. Reinforcement learning was performed through physical simulation in an environment in which the shape of the step changed once every 100 episodes. The reward was designed to reduce the impact on the robot's body by changing the robot's attitude angle. The learned knowledge, contained in the action-value function, was reused to verify that the flippers could be controlled automatically while the operator drove the robot remotely in a given direction, and that the robot could step over steps.
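The learning of an action-value function as described above follows the standard tabular Q-learning pattern. The sketch below is a generic illustration with a made-up toy environment, not the paper's physics-simulation setup, sensors, or reward design:

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=200,
               alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning loop. env_step(s, a) -> (s', reward, done).
    Returns the learned action-value function Q as a table."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        for _ in range(100):  # step cap per episode
            if done:
                break
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)  # explore
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])  # exploit
            s2, r, done = env_step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Hypothetical step-climbing environment: state = progress over a step
# (0..4); action 0 raises the flipper (progress, small attitude-change
# penalty), action 1 keeps it flat (no progress, large impact penalty).
def env_step(s, a):
    if a == 0:
        s2 = min(s + 1, 4)
        return s2, (1.0 if s2 == 4 else -0.1), s2 == 4
    return s, -1.0, False

Q = q_learning(env_step, n_states=5, n_actions=2)
print(Q[0][0] > Q[0][1])  # raising the flipper is preferred at the step
```

The learned Q table can then be reused for assistance: the operator commands the driving direction while the flipper action is read off as the argmax over Q at the current state.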

Author 1: Hitoshi Kono
Author 2: Sadaharu Isayama
Author 3: Fukuro Koshiji
Author 4: Kaori Watanabe
Author 5: Hidekazu Suzuki

Keywords: Rescue robot; sub-crawler control; reinforcement learning; physics simulation

PDF

Paper 148: On Constructing a Secure and Fast Key Derivation Function Based on Stream Ciphers

Abstract: Pseudorandom cryptographic keys generated by a standard function known as a key derivation function play an important role in protecting electronic data. The inputs to the function, known as initial keying materials, include passwords, shared secret keys, and non-random strings. Existing standard secure key derivation functions are based on stream ciphers, block ciphers, and hash functions. The latest secure and fast design is a stream cipher-based key derivation function (SCKDF2). The security levels of key derivation functions based on stream ciphers, block ciphers, and hash functions are equal; however, key derivation functions based on stream ciphers execute faster than the other two. This paper proposes an improved design for a key derivation function based on stream ciphers, namely I-SCKDF2. We simulate instances of the proposed I-SCKDF2 using Trivium. The results show that the execution time taken by I-SCKDF2 to generate an n-bit cryptographic key is almost 50 percent lower than that of SCKDF2, and that I-SCKDF2 passed all the security tests in the Dieharder test tool. This demonstrates that the proposed I-SCKDF2 is secure and faster in simulation than SCKDF2.
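For readers unfamiliar with the general shape of a key derivation function, the extract-then-expand structure can be illustrated with the standard HKDF construction (RFC 5869) from the Python standard library. Note this is HMAC-SHA256-based, not the stream-cipher (Trivium) design the paper proposes; it only shows how initial keying material becomes an n-bit key:

```python
import hmac
import hashlib

def kdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """Extract a fixed-length pseudorandom key from the initial
    keying material (password, shared secret, non-random string)."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def kdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """Expand the pseudorandom key into `length` bytes of key material
    by chaining HMAC blocks with a counter (HKDF-Expand)."""
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

prk = kdf_extract(b"public-salt", b"shared secret or password")
key = kdf_expand(prk, b"app-context", 32)
print(len(key))  # 32-byte derived key
```

A stream-cipher-based design such as SCKDF2 replaces the HMAC primitive with a keystream generator, which is where the reported speed advantage comes from.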

Author 1: Chai Wen Chuah
Author 2: Janaka Alawatugoda
Author 3: Nureize Arbaiy

Keywords: Key derivation functions; extractors; expanders; stream ciphers; hash functions; symmetric-key cryptography

PDF

Paper 149: Design and Development of an Efficient Explainable AI Framework for Heart Disease Prediction

Abstract: Heart disease remains a global health concern, demanding early and accurate prediction for improved patient outcomes. Machine learning offers promising tools, but existing methods face issues with accuracy, class imbalance, and overfitting. In this work, we propose an efficient Explainable Recursive Feature Elimination with eXtreme Gradient Boosting (ERFEX) framework for heart disease prediction. ERFEX leverages Explainable AI techniques to identify crucial features while addressing class imbalance. We implemented various machine learning algorithms within the ERFEX framework, utilizing the Support Vector Machine-based Synthetic Minority Over-sampling Technique (SVMSMOTE) to handle imbalanced classes and SHapley Additive exPlanations (SHAP) for explainable feature selection. Among these models, the Random Forest and XGBoost classifiers within the ERFEX framework achieved 100% training accuracy and 98.23% testing accuracy. Furthermore, SHAP analysis provided interpretable insights into feature importance, improving model trustworthiness. The findings of this work demonstrate the potential of ERFEX for accurate and explainable heart disease prediction, paving the way for improved clinical decision-making.
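The recursive feature elimination at the heart of ERFEX can be sketched in a few lines. This is a deliberately simplified stand-in: it uses Pearson correlation as the importance score where the paper uses SHAP values, and toy data in place of a heart disease dataset:

```python
def pearson(xs, ys):
    """Pearson correlation, standing in for a SHAP-style importance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def recursive_feature_elimination(X, y, n_keep, importance):
    """Repeatedly drop the least important feature until n_keep remain."""
    active = list(range(len(X[0])))
    while len(active) > n_keep:
        scores = {j: abs(importance([row[j] for row in X], y))
                  for j in active}
        active.remove(min(active, key=lambda j: scores[j]))
    return sorted(active)

# Toy data: feature 0 copies the label, feature 2 negates it,
# feature 1 is uncorrelated noise and should be eliminated.
X = [[0, 0, 1], [1, 0, 0], [0, 1, 1], [1, 1, 0]]
y = [0, 1, 0, 1]
print(recursive_feature_elimination(X, y, n_keep=2, importance=pearson))
# -> [0, 2]
```

Swapping the importance callback for per-feature mean absolute SHAP values gives the explainable variant the abstract describes.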

Author 1: Deepika Tenepalli
Author 2: Navamani T M

Keywords: Machine learning; heart disease; explainable AI; XGBoost; SHAP

PDF

Paper 150: A Differential Evolution-based Pseudotime Estimation Method for Single-cell Data

Abstract: The analysis of single-cell genomics data creates an intriguing opportunity for researchers to examine complex biological systems more closely, but is challenging due to inherent biological and technical noise. One popular approach involves learning a lower-dimensional manifold or pseudotime trajectory through the data that captures the primary sources of variation. A smooth function of pseudotime can then be used to align gene expression patterns along the lineages in the trajectory, which facilitates downstream analysis such as heterogeneous cell type identification. Here, we propose a differential evolution-based pseudotime estimation method. The model operates on a continuous search space and allows easy integration of cell capture time information into the inference process. The suitability of the proposed model is investigated by applying it to benchmark single-cell data sets collected from different organisms using different assaying techniques. The experimental results show the model's capability to produce plausible biological insights about cell ordering, which makes it an appealing choice for pseudotime estimation using single-cell transcriptome data.
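Differential evolution itself, the continuous-space optimizer the method builds on, follows a simple mutate-crossover-select loop (DE/rand/1/bin). The sketch below minimizes a toy sphere function standing in for a pseudotime fitness; the paper's actual objective and capture-time integration are not reproduced here:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, F=0.8,
                           CR=0.9, generations=100, seed=0):
    """Minimal DE/rand/1/bin minimizer over a continuous search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct other members.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            f = objective(trial)
            if f < fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy objective in place of a pseudotime fitness function.
x, f = differential_evolution(lambda v: sum(t * t for t in v),
                              bounds=[(-5, 5)] * 3)
print(round(f, 6))
```

In a pseudotime setting, each candidate vector would encode cell orderings or trajectory parameters, and capture-time information can be folded directly into the objective.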

Author 1: Nazifa Tasnim Hia
Author 2: Ishrat Jahan Emu
Author 3: Muhammad Ibrahim
Author 4: Sumon Ahmed

Keywords: Pseudotime estimation; trajectory inference; single-cell; differential evolution; RNA-seq

PDF

Paper 151: Human IoT Interaction Approach for Modeling Human Walking Patterns Using Two-Dimensional Levy Walk Distribution

Abstract: This work presents a novel approach to modeling and analyzing human walking patterns using a two-dimensional Levy walk distribution and the Internet of Sensing Things. The study proposes the strategic placement of MPU6050 sensors within a garment worn on the human leg to capture motion data during walking activities. Random samples are generated from the Levy distribution through numerical modeling, simulating normal human walking patterns. A real-world experiment involving five male participants wearing the sensor-equipped garments during normal walking validates the proposed methodology. Statistical analysis, including the Kolmogorov-Smirnov test, confirms the agreement between the simulated Levy distributions and the observed step distance data, supporting the hypothesis that deviations indicate abnormal walking patterns. The study contributes to advancing sensor-based systems for human activity recognition and health monitoring, offering insights into the feasibility of using Levy walk distributions for gait analysis.
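The simulate-then-compare workflow described above can be sketched with inverse-transform sampling from a heavy-tailed step distribution and a two-sample Kolmogorov-Smirnov statistic. The tail exponent and sample sizes below are arbitrary illustrative choices, not the paper's fitted parameters, and the "observed" sample is synthetic:

```python
import random

def levy_steps(n, alpha=1.5, x_min=1.0, seed=0):
    """Sample n step lengths from a power-law (Levy-like) tail,
    P(X > x) = (x_min / x)**alpha, via inverse-transform sampling."""
    rng = random.Random(seed)
    return [x_min * (1.0 - rng.random()) ** (-1.0 / alpha)
            for _ in range(n)]

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in sorted(set(a + b)):
        fa = sum(v <= x for v in a) / len(a)
        fb = sum(v <= x for v in b) / len(b)
        d = max(d, abs(fa - fb))
    return d

sim = levy_steps(500, seed=1)  # simulated Levy walk steps
obs = levy_steps(500, seed=2)  # stand-in for sensor-derived step data
print(ks_statistic(sim, obs))  # small distance -> compatible distributions
```

A large KS distance between the fitted Levy model and the measured step distances is then the signal for an abnormal walking pattern.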

Author 1: Tajim Md. Niamat Ullah Akhund
Author 2: Waleed M. Al-Nuwaiser

Keywords: Internet of Things (IoT); wearable sensors; Human-Computer Interaction (HCI); 3-axis accelerometer gyroscope; walking pattern; levy walk distribution; abnormal walk prediction

PDF

Paper 152: Blockchain-based System Towards Data Security Against Smart Contract Vulnerabilities: Electronic Toll Collection Context

Abstract: Electronic Toll Collection (ETC) systems have been proposed as a replacement for traditional toll booths, where vehicles must queue to make payments, particularly during holiday periods. The primary advantage of ETC is thus improved traffic efficiency. However, existing ETC systems lack the security necessary to protect vehicle information privacy and prevent fund theft. As a result, automatic payments become inefficient and susceptible to attacks such as the reentrancy attack. In this paper, we use the Ethereum blockchain and smart contracts as the automatic payment method. The biggest challenges are to authenticate vehicle data, automatically deduct fees from the user's wallet, and protect against smart contract reentrancy attacks, all without leaking distance information. To address these challenges, we propose end-to-end verification algorithms at both entry and exit toll points that incorporate measures to protect distance-related information from potential leaks. The proposed system's performance was evaluated on a private blockchain. The results demonstrate that our approach enhances transaction security and ensures accurate payment processing.
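The reentrancy attack mentioned above exploits a contract that makes an external call before updating its own state. A common defence combines a mutex with the checks-effects-interactions pattern; the toy Python model below illustrates the idea (a real ETC contract would be Solidity, and the names here are hypothetical):

```python
class TollContract:
    """Toy model of a toll-payment contract guarded against reentrancy."""

    def __init__(self):
        self.balances = {}
        self._locked = False  # mutex against nested (reentrant) calls

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, amount, send):
        if self._locked:
            raise RuntimeError("reentrant call blocked")
        self._locked = True
        try:
            if self.balances.get(user, 0) < amount:  # checks
                raise ValueError("insufficient balance")
            self.balances[user] -= amount            # effects first
            send(user, amount)                       # interaction last
        finally:
            self._locked = False

c = TollContract()
c.deposit("car1", 10)

def attacker_send(user, amount):
    # A malicious payee trying to re-enter withdraw mid-transfer.
    try:
        c.withdraw(user, amount, attacker_send)
    except RuntimeError:
        pass  # the mutex blocks the nested call

c.withdraw("car1", 10, attacker_send)
print(c.balances["car1"])  # -> 0: funds deducted exactly once
```

Because the balance is deducted before the external call and the mutex rejects nested entry, the attacker cannot drain the wallet twice.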

Author 1: Olfa Ben Rhaiem
Author 2: Marwa Amara
Author 3: Radhia Zaghdoud
Author 4: Lamia Chaari
Author 5: Maha Metab Alshammari

Keywords: Blockchain; Ethereum; smart contracts; Reentrancy Attacks; security; ETC

PDF

Paper 153: Comparing AI Algorithms for Optimizing Elliptic Curve Cryptography Parameters in e-Commerce Integrations: A Pre-Quantum Analysis

Abstract: This paper presents a comparative analysis of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), two vital artificial intelligence algorithms, focusing on optimizing Elliptic Curve Cryptography (ECC) parameters. These encompass the elliptic curve coefficients, prime number, generator point, group order, and cofactor. The study provides insights into which of the bio-inspired algorithms yields better optimization results for ECC configurations, examining their performance under the same fitness function. This function incorporates methods to ensure robust ECC parameters, including checking for singular or anomalous curves and applying Pollard's rho attack and Hasse's theorem for optimization precision. The optimized parameters generated by GA and PSO are tested in a simulated e-commerce environment and contrasted with well-known curves such as secp256k1 during the transmission of order messages using Elliptic Curve Diffie-Hellman (ECDH) and Hash-based Message Authentication Code (HMAC). Focusing on traditional computing in the pre-quantum era, this research highlights the efficacy of GA and PSO in ECC optimization, with implications for enhancing cybersecurity in third-party e-commerce integrations. We recommend the immediate consideration of these findings before quantum computing's widespread adoption.
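The genetic-algorithm side of this comparison follows the usual select-crossover-mutate loop. The sketch below is generic: the fitness function is a hypothetical toy distance (the paper's real fitness would reject singular/anomalous curves and score Pollard's-rho resistance via Hasse's theorem), and the "target parameters" are invented for illustration:

```python
import random

def genetic_search(fitness, dim, bounds, pop_size=30, generations=60,
                   mutation_rate=0.1, seed=0):
    """Generic GA: tournament selection, one-point crossover, mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.randint(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, dim)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [rng.randint(lo, hi)         # random-reset mutation
                     if rng.random() < mutation_rate else g
                     for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Hypothetical "good" parameter vector (e.g. prime p and coefficients
# a, b) used only to give the toy fitness something to climb toward.
target = [97, 2, 3]
best = genetic_search(lambda g: -sum(abs(x - t)
                                     for x, t in zip(g, target)),
                      dim=3, bounds=(0, 100))
print(best)
```

PSO would replace the crossover/mutation step with velocity updates toward personal and global bests over the same fitness landscape, which is exactly the comparison the paper sets up.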

Author 1: Felipe Tellez
Author 2: Jorge Ortiz

Keywords: Artificial intelligence; genetic algorithms; particle swarm optimization; elliptic curve cryptography; e-commerce; third-party integrations; pre-quantum computing

PDF

Paper 154: Bone Quality Classification of Dual Energy X-ray Absorptiometry Images Using Convolutional Neural Network Models

Abstract: The assessment of bone trabecular quality degradation is important for the detection of diseases such as osteoporosis. The gold standard for its diagnosis is the Dual Energy X-ray Absorptiometry (DXA) imaging modality. The analysis of these images is a topic of growing interest, especially with artificial intelligence techniques. This work proposes the detection of a degraded bone structure from DXA images using approaches based on learning Trabecular Bone Score (TBS) ranges. The proposed models are supported by intelligent systems based on convolutional neural networks using two kinds of approaches: ad hoc architectures, and knowledge-transfer systems built on deep network architectures such as AlexNet, ResNet, VGG, SqueezeNet, and DenseNet retrained with DXA images. For both approaches, experimental studies were conducted comparing the proposed models in terms of effectiveness and training time, achieving an F1-score of approximately 0.75 in classifying the bone structure as degraded or normal according to its TBS range.

Author 1: Mailen Gonzalez
Author 2: Jose M. Fuertes Garcia
Author 3: Manuel J. Lucena Lopez
Author 4: Ruben Abdala
Author 5: Jose M. Massa

Keywords: Osteoporosis; Dual Energy X-ray Absorptiometry (DXA); Trabecular Bone Score (TBS); classification; Convolutional Neural Network (CNN)

PDF

Paper 155: LBPSCN: Local Binary Pattern Scaled Capsule Network for the Recognition of Ocular Diseases

Abstract: Glaucoma and cataracts are leading causes of blindness worldwide, resulting in significant vision loss and impaired quality of life. Early detection and diagnosis are crucial for effective treatment and prevention of further damage. However, diagnosis is challenging, especially when intraocular pressure is low or cataracts are present. Deep learning algorithms, particularly Convolutional Neural Networks (CNNs), have shown promise in detecting eye diseases but require large training datasets to achieve high performance. To address this limitation, this work proposes a modified Capsule Network algorithm with a novel scaled processing algorithm and a local binary pattern layer, enabling robust and accurate diagnosis of glaucoma and cataracts. The proposed model demonstrates performance comparable to state-of-the-art methods, achieving high accuracy on combined, cataract-only, and glaucoma-only datasets (94.32%, 96.87%, and 95.23%, respectively). This work introduces enhanced feature extraction and robustness to illumination variations, addressing critical limitations of existing methods. The proposed model offers a promising tool for ophthalmologists and glaucoma specialists to accurately diagnose glaucoma and cataract-compromised eyes, potentially improving patient outcomes.
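The local binary pattern (LBP) layer mentioned above encodes each pixel by thresholding its 8 neighbours against the centre value, which is what gives the robustness to illumination variations. A minimal pure-Python sketch of the classic 3x3 LBP operator (not the paper's capsule-network integration):

```python
def lbp_image(img):
    """8-neighbour local binary pattern of a grayscale image given as a
    list of rows; each interior pixel becomes an 8-bit code."""
    h, w = len(img), len(img[0])
    # Clockwise neighbour offsets starting at the top-left pixel.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            out[y][x] = code
    return out

# A uniform patch maps to code 255 regardless of its brightness, which
# is why LBP features are insensitive to global illumination shifts.
flat = [[5] * 3 for _ in range(3)]
print(lbp_image(flat)[1][1])  # -> 255
```

Feeding such LBP maps (instead of, or alongside, raw intensities) into the capsule network is the texture-oriented preprocessing idea the abstract describes.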

Author 1: Mavis Serwaa
Author 2: Patrick Kwabena Mensah
Author 3: Adebayo Felix Adekoya
Author 4: Mighty Abra Ayidzoe

Keywords: Glaucoma; cataracts; capsule network; convolutional neural network

PDF

Paper 156: Text Extraction and Translation Through Lip Reading using Deep Learning

Abstract: Deep learning has revolutionized fields such as natural language processing and computer vision. This study explores the fusion of these domains by proposing a novel approach for text extraction and translation through lip reading using deep learning. Lip reading, the process of interpreting spoken language by analyzing lip movements, has garnered interest due to its potential applications in noisy environments, silent communication, and accessibility enhancements. This study employs deep learning architectures such as CNNs and RNNs to accurately extract text content from lip movements captured in video sequences. The proposed model consists of multiple stages: lip region detection, feature extraction, text recognition, and translation. Initially, the model identifies and isolates the lip region within video frames using a CNN-based object detection approach. Subsequently, relevant features are extracted from the lip region using CNNs to capture intricate motion patterns, and these visual features are converted into textual information. The extracted text is then processed and translated into the desired language using machine translation techniques.

Author 1: Sai Teja Krithik Putcha
Author 2: Yelagandula Sai Venkata Rajam
Author 3: K. Sugamya
Author 4: Sushank Gopala

Keywords: Deep Learning (DL); Convolutional Neural Networks (CNN); Lip Reading; Recurrent Neural Networks (RNN)

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org