The Science and Information (SAI) Organization

IJACSA Volume 15 Issue 1

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Reliability Evaluation Framework for Centralized Agricultural Internet of Things (Agri-IoT)

Abstract: This paper presents a holistic reliability evaluation framework for Agri-IoT based on a real-world testbed and mathematical modeling of network failure prediction. A testbed has been designed, implemented, and deployed in an experimental farm at Saint-Louis, Senegal, as a representative area of Sahel conditions. The data collected have been used for real-world reliability analysis and to feed mathematical modeling of network reliability based on energy and environmental conditions data with the Kaplan-Meier and Nelson-Aalen estimators. Key factors affecting the network's lifespan, such as network coverage and density, are explored, along with a comprehensive evaluation of energy consumption to understand the impact of node discharge rates. The survival analysis, employing the Kaplan-Meier and Nelson-Aalen estimators, establishes network stability and the probability of node survival over time. The findings contribute to the understanding of Agri-IoT reliability in a real-world Sahel environment, offering practical insights for system optimization and environmental challenge mitigation in real-world deployments.

Author 1: Fatoumata Thiam
Author 2: Maissa Mbaye
Author 3: Maya Flores
Author 4: Alexander Wyglinski

Keywords: Energy; IoT; reliability; real-world testbed; optimization; Agri-IoT

PDF
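The Kaplan-Meier estimator used in the paper's survival analysis computes the probability that a node is still alive at each observed failure time, as a running product over failure events. A minimal pure-Python sketch (the data values are illustrative, not from the testbed):

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier survival curve.

    durations: observed lifetime of each node (failure or censoring time)
    events:    1 if the node failed at that time, 0 if it was censored
    Returns a list of (time, survival probability) points at each failure time.
    """
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    times = [durations[i] for i in order]
    fails = [events[i] for i in order]

    n_at_risk = len(times)
    survival = 1.0
    curve = []
    i = 0
    while i < len(times):
        t = times[i]
        deaths = removed = 0
        # group all observations sharing this timestamp
        while i < len(times) and times[i] == t:
            deaths += fails[i]
            removed += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / n_at_risk   # product-limit step
            curve.append((t, survival))
        n_at_risk -= removed
    return curve
```

A node whose deployment ends without a failure is recorded as censored (event 0); it contributes no step in the curve but shrinks the at-risk count for later times.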

Paper 2: A Hybrid Approach for Automatic Question Generation from Program Codes

Abstract: Generating questions is one of the most challenging tasks in the natural language processing discipline. With the significant emergence of electronic educational platforms such as e-learning systems, and the large scalability achieved with e-learning, there is an increasing need to generate intelligent and deliberate questions to measure students' understanding. Much work has been done in this field with different techniques; however, most approaches extract questions from text. This research aims to build a model that can conceptualize and generate questions on the Python programming language from program codes. Different models have been proposed that take text as input and generate questions; the challenge, however, is understanding the concepts in the code snippets and linking them to the lessons so that the model can generate relevant and reasonable questions for students. The standards applied to measure the results are therefore code complexity and question validity. The method used to achieve this goal combines the QuestionGenAi framework and an ontology based on semantic code conversion. The results produced are questions based on the code snippets provided. The evaluation criteria were code complexity, question validity, and question context. This work has great potential to improve e-learning platforms and the overall experience for both learners and instructors.

Author 1: Jawad Alshboul
Author 2: Erika Baksa-Varga

Keywords: Question generation; e-learning; Python question generator; semantic code conversion

PDF

Paper 3: An Enhanced Anti-Phishing Technique for Social Media Users: A Multilayer Q-Learning Approach

Abstract: As social media usage grows in popularity, so does the risk of encountering malicious Uniform Resource Locators (URLs). Determining the authenticity of a URL can be a highly challenging task, primarily due to the sophisticated attack structures employed by phishing attempts. Phishing exploits the vulnerabilities of computer users, making it difficult to discern between genuine and fraudulent URLs. To address this issue, a self-learning AI framework is required to warn social media users of potentially dangerous links. While several anti-phishing techniques exist, including blacklists, heuristics, and machine learning-based techniques, there is still a need for improvement in terms of detection accuracy. Hence, this study proposes a novel approach to combat phishing attacks using artificial neural networks, with the main aim of creating and validating an anti-phishing tool for detection accuracy. Initially, the URL data is collected, followed by preprocessing and then analysis for malicious activity using the Logistic Bayesian Long Short-Term Memory (LB-LSTM) model. The observed malicious URL features are extracted using multilayer Q-learning with the CaspNet and swarm optimization models. Analysis of these features enables the identification of a malicious URL, which is then removed, and the social media user is warned. The proposed technique attained a detection accuracy of 94.33%, an Area under the ROC Curve (AUC) of 98.71%, a Mean Squared Error (MSE) of 5.67%, a mean average precision of 88.67%, a recall of 98.67%, and an F1 score of 94.34%.

Author 1: Asif Irshad Khan
Author 2: Bhuvan Unhelkar

Keywords: Multilayer Q-learning; anti-phishing model; social media users; machine learning; optimization; URLs; logistic Bayesian LSTM model

PDF
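The paper's multilayer Q-learning with CaspNet and swarm optimization is far richer than a toy, but the core temporal-difference update it builds on can be sketched in a few lines. The environment and hyperparameters below are illustrative, not from the paper:

```python
import random

def q_learning(n_states, n_actions, step, episodes, alpha=0.5, gamma=0.9, eps=0.3):
    """Minimal tabular Q-learning; step(s, a) -> (next_state, reward, done)."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # temporal-difference update toward the bootstrapped target
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy chain environment: states 0 -> 1 -> 2; action 1 advances, and
# reaching state 2 pays reward 1; action 0 ends the episode unrewarded.
def chain_step(s, a):
    if a == 0:
        return s, 0.0, True
    if s + 1 == 2:
        return 2, 1.0, True
    return s + 1, 0.0, False
```

After enough episodes the learned table prefers the advancing action in every state, with values discounted by distance to the reward.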

Paper 4: ML-based Meta-Model Usability Evaluation of Mobile Medical Apps

Abstract: Mobile medical applications (MMAPPs) are one of the recent trends in mobile applications (Apps). MMAPPs permit users to resolve health issues easily and effectively wherever they are. However, the primary issue is effective usability for users. Hardly any study breaks down usability issues by the user's age, gender, accessories, or experience. The purpose of this study is to determine the level of usability issues with respect to the attributes and experience of mobile medical users. The study uses a quantitative method, performing user experiments and gathering theoretical perceptions through a survey of 677 participants given six distinct tasks on the applications' interfaces. A post-experiment survey is then completed with the participants concerned. The response surface method (RSM) is used for the perceptional and experimental designs. In each case, participants are divided into 13 runs or groups. Experimental groups are involved after checking the perceptions about theoretical usability for different attributes according to the usability model through the questionnaire. The difference is recorded between users' perception of usability (theoretical usability) and their actual performance for usability. Analysis of variance (ANOVA) showed that mobile medical applications need improvement, and it is also recommended to minimize the gap between the perception level of laymen and the actual performance of IT-literate users in the context of usability. The experimentation measures the task usability of various mobile medical applications with respect to their effectiveness, efficiency, completeness, learnability, memorability, easiness, complexity, number of errors, and satisfaction. Each design model also produces a mathematical expression to calculate usability from its attributes. The results of this study will help to improve the usability of MMAPPs for users in their own contexts.

Author 1: Khalid Hamid
Author 2: Muhammad Ibrar
Author 3: Amir Mohammad Delshadi
Author 4: Mubbashar Hussain
Author 5: Muhammad Waseem Iqbal
Author 6: Abdul Hameed
Author 7: Misbah Noor

Keywords: ANOVA; completeness; efficiency; effectiveness; perceptional usability; response surface methodology; actual usability

PDF

Paper 5: Development of a Framework for Predicting Students' Academic Performance in STEM Education using Machine Learning Methods

Abstract: In the continuously evolving educational landscape, the prediction of students' academic performance in STEM (Science, Technology, Engineering, Mathematics) disciplines stands as a paramount component for educational stakeholders aiming at enhancing learning methodologies and outcomes. This research paper delves into a sophisticated analysis, employing Machine Learning (ML) algorithms to predict students' achievements, focusing explicitly on the multifaceted realm of STEM education. By harnessing a robust dataset drawn from diverse educational backgrounds, incorporating myriad factors such as historical academic data, socioeconomic demographics, and individual learning interactions, the study innovates by transcending traditional prediction parameters. The research meticulously evaluates several machine learning models, juxtaposing their efficacies through rigorous methodologies, including Random Forest, Support Vector Machines, and Neural Networks, subsequently advocating for an ensemble approach to bolster prediction accuracy. Critical insights reveal that customized learning pathways, preemptive identification of at-risk candidates, and the nuanced understanding of contributing influencers are significantly enhanced through the ML framework, offering a transformative lens for academic strategies. Furthermore, the paper confronts the ethical quandaries and challenges of data privacy emerging in the wake of advanced analytics in education, proposing a holistic guideline for stakeholders. This exploration not only underscores the potential of machine learning in revolutionizing predictive strategies in STEM education but also advocates for continuous model optimization, embracing a symbiotic integration between pedagogical methodologies and technological advancements, thereby redefining the trajectories of educational paradigms.

Author 1: Rustam Abdrakhmanov
Author 2: Ainur Zhaxanova
Author 3: Malika Karatayeva
Author 4: Gulzhan Zholaushievna Niyazova
Author 5: Kamalbek Berkimbayev
Author 6: Assyl Tuimebayev

Keywords: Load balancing; machine learning; server; classification; software

PDF
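The ensemble approach the study advocates can be realized as simply as majority voting over the individual models' per-student predictions. A minimal sketch (model names and labels hypothetical):

```python
from collections import Counter

def majority_vote(model_predictions):
    """Combine several models' predicted labels by majority vote.

    model_predictions: one list of labels per model, all the same length.
    Returns the ensemble label for each student.
    """
    ensemble = []
    for labels in zip(*model_predictions):
        ensemble.append(Counter(labels).most_common(1)[0][0])
    return ensemble

# Hypothetical pass/fail predictions from RF, SVM, and a neural network
rf  = ["pass", "fail", "pass"]
svm = ["pass", "pass", "pass"]
nn  = ["fail", "fail", "pass"]
verdict = majority_vote([rf, svm, nn])  # one ensemble label per student
```

Weighted voting or probability averaging would follow the same shape; only the aggregation step changes.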

Paper 6: Automatic Recognition of Marine Creatures using Deep Learning

Abstract: The identification of marine species is a challenge for people all over the world, and the situation is not different for Mauritians. It is of utmost importance to create an automated system to correctly identify marine species. In the past, researchers have used machine learning to address the issue of marine creature recognition. The manual feature extraction part of machine learning complicates model creation as features have to be extracted manually using an appropriate filter. In this work, we have used deep learning models to automate the feature extraction procedure. Currently, there is no publicly available dataset of marine creatures from the Indian Ocean. We created one of the biggest datasets used in this field, consisting of 51 different marine species collected from the Odysseo Oceanarium in Mauritius. The original dataset has a total of 5,709 images and is imbalanced. Image augmentations were performed to create an oversampled version of the dataset with 171 images per class, for a total of 8,721 images. The MobileNetV1 model trained on the oversampled dataset with a split ratio of 80% for training and 10% for validation and testing was the best performing one in terms of classification accuracy and inference time. The model had the smallest inference time of 0.10 seconds per image and attained a classification accuracy of 99.89% and an F1 score of 99.89%.

Author 1: Oudayrao Ittoo
Author 2: Sameerchand Pudaruth

Keywords: Marine creature identification; machine learning; deep learning; MobileNetV1; Mauritius

PDF
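The oversampling described, padding each of the 51 classes to 171 images for a balanced total of 8,721, comes down to computing how many augmented copies each class still needs. A small sketch (species names and counts hypothetical):

```python
def augmentation_plan(class_counts, target=171):
    """Augmented images to generate per class so every class reaches `target`."""
    return {cls: max(target - n, 0) for cls, n in class_counts.items()}

# Hypothetical counts for three of the 51 species
counts = {"clownfish": 171, "moray_eel": 95, "lionfish": 40}
plan = augmentation_plan(counts)
```

With all 51 classes padded to 171 images, the balanced dataset size is 51 × 171 = 8,721, matching the figure in the abstract.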

Paper 7: Enhanced Linear Regression Models for Resource Usage Prediction in Dynamic Cloud Environments

Abstract: Retracted: After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

Author 1: Xiaoxiao Ma

Keywords: Cloud computing; resource utilization; prediction; linear regression; metaheuristics

PDF

Paper 8: Efficient Processing of Large-Scale Medical Data in IoT: A Hybrid Hadoop-Spark Approach for Health Status Prediction

Abstract: In the realm of Internet of Things (IoT)-driven healthcare, diverse technologies, including wearable medical devices, mobile applications, and cloud-based health systems, generate substantial data streams, posing challenges for real-time operations, especially during emergencies. This study recommends a hybrid architecture utilizing Hadoop for real-time processing of extensive medical data within the IoT framework. By employing distributed machine learning models, the system analyzes health-related data streams ingested into Spark streams via Kafka threads. The aim is to transform conventional machine learning methodologies within Spark's real-time processing, crafting scalable and efficient distributed approaches for predicting health statuses related to diabetes and heart disease while navigating the landscape of big data. Furthermore, the system provides real-time health status forecasts based on a multitude of input features, disseminates alert messages to caregivers, and stores this valuable information within a distributed database, which is instrumental in health data analysis and the production of flow reports. We compute a range of evaluation parameters to assess the proposed methods' efficacy. This assessment phase encompasses measuring the performance of the Spark-based machine learning algorithm in a distributed parallel computing environment.

Author 1: Yu Lina
Author 2: Su Wenlong

Keywords: Internet of Things; big data; Hadoop; Spark-based machine learning

PDF

Paper 9: A Yolo-based Approach for Fire and Smoke Detection in IoT Surveillance Systems

Abstract: Fire and smoke detection in IoT surveillance systems is of utmost importance for ensuring public safety and preventing property damage. While traditional methods have been used for fire detection, deep learning-based approaches have gained significant attention due to their ability to learn complex patterns and achieve high accuracy. This paper addresses the current research challenge of achieving high accuracy rates with deep learning-based fire detection methods while keeping computation costs low. This paper proposes a method based on the Yolov8 algorithm that effectively tackles this challenge through model generation using a custom dataset and the model's training, validation, and testing. The model's efficacy is succinctly assessed by the precision, recall and F1-curve metrics, with notable proficiency in fire detection, crucial for early warnings and prevention. Experimental results and performance evaluations show that our proposed method outperforms other state-of-the-art methods. This makes it a promising fire and smoke detection approach in IoT surveillance systems.

Author 1: Dawei Zhang

Keywords: IoT; surveillance systems; fire detection; deep learning; Yolov8

PDF
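The precision, recall, and F1 metrics used to assess the detector follow directly from the counts of true detections, false alarms, and missed events. A minimal sketch (counts illustrative):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from detection counts.

    tp: correct fire/smoke detections, fp: false alarms, fn: missed events.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For early-warning systems the recall term matters most: a missed fire (fn) is far more costly than a false alarm (fp).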

Paper 10: Design and Analysis of Deep Learning Method for Fragmenting Brain Tissue in MRI Images

Abstract: Brain tumour segmentation is an essential component of medical image processing. Image segmentation is the process of assigning each pixel a label so that pixels bearing the same label share characteristics, which helps distinguish the target. Early identification can avoid a higher fatality rate and additional dangers. Manually segmenting brain tumours from the numerous MRI images generated during medical procedures in order to diagnose malignancy can be challenging and time-consuming. This is the fundamental reason why brain tumour imaging has to be automated. This work examines and enhances a deep learning technique for the segmentation of brain tissue in magnetic resonance imaging (MRI) images. Researchers are using deep learning techniques, convolutional neural networks in particular, to tackle the complex problem of object recognition in biological image segmentation. In contrast to traditional classification techniques that take in manually constructed features, convolutional neural networks automatically extract the required complicated features from the data itself, which solves a number of problems.

Author 1: Ting Yang
Author 2: Jiabao Sun

Keywords: Brain tumor; deep learning; neural networks; magnetic resonance imaging

PDF

Paper 11: Brightness Equalization Algorithm for Chinese Painting Pigments in Low-Light Environment Based on Region Division

Abstract: With the promotion and development of Chinese painting and the advancement of photography technology, people can appreciate various types of Chinese paintings through images and other media. However, Chinese painting images in low-light environments suffer from extremely uneven brightness distribution, and the solutions proposed so far are not sufficient. Therefore, this research proposes a brightness equalization algorithm for Chinese painting pigments in low-light environments based on region division. The algorithm also utilizes guided filtering for image denoising. In performance testing, the proposed method has a runtime of 16.63 seconds at a scaling factor of 1 and 8.37 seconds at a scaling factor of 0.1, the fastest among the compared algorithms. In simulation experiments, the brightness equalization value of the proposed method is 198.93, the best among all the compared algorithms. This research provides a valuable research direction for the brightness equalization of Chinese painting pigments.

Author 1: Lijuan Cheng

Keywords: Chinese painting; low-light; region division; guided filtering; scaling factor

PDF

Paper 12: Anomaly Detection in Structural Health Monitoring with Ensemble Learning and Reinforcement Learning

Abstract: This research introduces a novel approach for improving the analysis of Structural Health Monitoring (SHM) data in civil engineering. SHM data, essential for assessing the integrity of infrastructures like bridges, often contains inaccuracies because of sensor errors, environmental factors, and transmission glitches. These inaccuracies can severely hinder identifying structural patterns, detecting damages, and evaluating overall conditions. Our method combines advanced techniques from machine learning, including dilated convolutional neural networks (CNNs), an enhanced differential equation (DE) model, and reinforcement learning (RL), to effectively identify and filter out these irregularities in SHM data. At the heart of our approach lies the use of CNNs, which extract key features from the SHM data. These features are then processed to classify the data accurately. We address the challenge of imbalanced datasets, common in SHM, through a RL-driven method that treats the training procedure as a sequence of choices, with the network learning to distinguish between less and more common data patterns. To further refine our method, we integrate a novel mutation operator within the DE framework. This operator identifies key clusters in the data, guiding the backpropagation process for more effective learning. Our approach was rigorously tested on a dataset from a large cable-stayed bridge in China, provided by the IPC-SHM community. The results of our experiments highlight the effectiveness of our approach, demonstrating an Accuracy of 0.8601 and an F-measure of 0.8540, outperforming other methods compared in our study. This underscores the potential of our method in enhancing the accuracy and reliability of SHM data analysis in civil infrastructure monitoring.

Author 1: Nan Huang

Keywords: Structural health monitoring; anomaly detection; reinforcement learning; differential equation; imbalanced classification

PDF

Paper 13: Application Effect of Human-Computer Interactive Gymnastic Sports Action Recognition System Based on PTP-CNN Algorithm

Abstract: With the rapid development of artificial intelligence technology, the recognition accuracy of traditional gymnastic sports action recognition systems can no longer meet the needs of today's society. To address these problems, an improved action recognition algorithm combining the Precision Time Protocol (PTP) and Convolutional Neural Networks (CNN) is proposed, and a human-computer interaction gymnastic action recognition system based on the PTP-CNN algorithm is constructed. A performance test of the proposed PTP-CNN algorithm found that its accuracy was 92.8% and its recall rate was 95.2%, better than the comparison algorithms. Performance comparison experiments on the gymnastic action recognition system based on the PTP-CNN algorithm found that its recognition accuracy was 96.3% and its running time was 3.4 s, better than the other comparison systems. The comprehensive results show that the proposed PTP-CNN recognition algorithm and improved gymnastic action recognition system can effectively improve the performance of traditional algorithms and models, and have practical application value and great application potential.

Author 1: Yonge Ren
Author 2: Keshuang Sun

Keywords: PTP; CNN; human-computer interaction; gymnastic sports; action recognition

PDF

Paper 14: A Lean Service Conceptual Model for Digital Transformation in the Competitive Service Industry

Abstract: In today's competitive service industry, the pressure to boost productivity, cut costs, and improve service quality is immense. By integrating lean principles and digital transformation, organizations can streamline processes and reduce waste. Although various lean models have been developed for different service industries, there is no universal standard. Hence, this study aims to address this gap by proposing a Lean Service Conceptual Model through qualitative research, identifying nine types of waste and seven lean dimensions. Interviews, observations, and audio-visual materials are the data collection methods used in this study. The model aligns seamlessly with modern digital technologies such as big data, the Internet of Things, blockchain, cloud computing, and artificial intelligence, making it adaptable for service organizations to excel in the digital age. The model focuses on enhancing efficiency and effectiveness while primarily reducing waste in service operations. Due to restrictions during the pandemic and the interest expressed by the informants in participating in this study, the focus is on a single case study, which may lead to biased findings; future studies will be performed on multiple case studies to enhance the findings. Exploring and reviewing an array of best practices, techniques, and tools available for waste reduction within organizational operations is paramount.

Author 1: Nur Niswah Hasina Mohammad Amin
Author 2: Amelia Natasya Abdul Wahab
Author 3: Nur Fazidah Elias
Author 4: Ruzzakiah Jenal
Author 5: Muhammad Ihsan Jambak
Author 6: Nur Afini Natrah Mohd Ashril

Keywords: Lean principles; digital transformation; conceptual model; service industry; waste; dimension; qualitative research

PDF

Paper 15: The Scheme Design of Wearable Sensor for Exercise Habits Based on Random Game

Abstract: The development of random game theory has enabled wearable sensors to capture actuator evolution in sports exercise, and the design of user exercise habits during the exercise process has thus begun to be studied. Conventional devices focus only on the automatic adjustment of sports design, with shortcomings in personalization. To address this issue, this study added an anchor node localization device to the adaptive search hybrid learning algorithm and analyzed the exercise goals of athletes. At the same time, a semidefinite programming method was installed in the wearable sensors to attend to the physical condition of athletes. To verify the performance of the fused device, this study conducted experiments on the Physical dataset and compared it with three models, including Harris Eagle Optimization. The accuracy rates of the exercise habit schemes designed by the four devices were 97.4%, 96.5%, 94.7%, and 91.2%, respectively, indicating that the proposed model is the most stable. Under the same running time, the energy loss of this model was 0.11 kW·h, the best among the four models. When the athletes differ in age, the F1 values of the four devices are 5.9, 4.5, 4.2, and 3.6, respectively. The results indicate that the proposed fusion model has strong robustness and is suitable for designing exercise habit schemes in the evolution of sports exercise actuators.

Author 1: Youqin Huang
Author 2: Zhaodi Feng

Keywords: Random game; adaptive search hybrid learning algorithm; wearable sensors; physical exercise; evolution of actuators; exercise habits; anchor node positioning; semidefinite programming method

PDF

Paper 16: The Construction and Application of Library Intelligent Acquisition Decision Model Based on Decision Tree Algorithm

Abstract: In today's digital age, libraries, as the core institutions of knowledge management and information services, face increasing demand from readers. In order to provide more efficient, accurate, and personalized acquisition services, intelligent acquisition decision-making in libraries has become an important research field. Traditional manual acquisition services face challenges such as personnel training and knowledge updates, making it difficult to adapt quickly to new needs and changes. To address these issues, this research uses machine learning techniques to perform post-pruning on a standard decision tree and combines it with fuzzy logic to design a fuzzy decision tree. The experimental results show that the false negative rate (FN) of the model rapidly decreases to about 0.1 as the number of training iterations increases, and stabilizes at around 0.05 after 210 rounds of training, which is 0.10 lower than the FN of the rule-based decision model. The intelligent acquisition decision-making model designed in this study has higher accuracy and stability, and has application potential in the field of intelligent acquisition decision-making in libraries.

Author 1: Hong Pan

Keywords: Decision tree; machine learning; fuzzy logic; intelligent acquisition model; post-pruning

PDF

Paper 17: A Predictive Sales System Based on Deep Learning

Abstract: There are several techniques for predictive sales systems. In this study, a system based on different machine learning algorithms is developed for a trading company in Lima. Like any company, it needs to be accurate in its sales calculations to manage the volume of production or product purchases. With the system, the trading company has a mechanism to order products from its supplier based on the predictions and estimates of its needs according to the projection of its sales. For the predictive sales system, deep learning technology and the neural network architectures GRU (Gated Recurrent Unit), LSTM (Long Short-Term Memory) and RNN (Recurrent Neural Network) were used; 10 products were sampled, and the sales quantities of the last 12 months were obtained for the evaluation. The study found that the LSTM architecture excels in accuracy, significantly outperforming GRU and RNN in terms of Mean Absolute Percentage Error (MAPE), achieving an average MAPE of 7.07%, in contrast to 27.14% for GRU and 36.17% for RNN. These findings support the effectiveness and versatility of LSTM in time series prediction, demonstrating its usefulness in a variety of real-world applications.

Author 1: Jean Paul Luyo Ballena
Author 2: Cristhian Pool Ortiz Pallihuanca
Author 3: Ernesto Adolfo Carrera Salas

Keywords: Deep learning; neural network architectures; sales prediction; neural networks

PDF
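The MAPE figures quoted (7.07% for LSTM versus 27.14% and 36.17% for GRU and RNN) are the mean absolute deviation of the forecast from the actual sales, expressed as a percentage. A one-function sketch (sales figures hypothetical):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (actual values must be nonzero)."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actual, forecast)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical monthly sales for one product vs. a model's forecast
actual = [120, 135, 150, 160]
forecast = [110, 140, 145, 170]
score = mape(actual, forecast)
```

Because each error is scaled by the actual value, MAPE is comparable across products with very different sales volumes, which is why it suits a 10-product evaluation.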

Paper 18: Telemedicine and its Impact on the Preoperative Period

Abstract: The application of telemedicine has aroused great interest in the field of chronic disease care, which is associated with clinical medicine. The aim of this research is to systematically evaluate the published evidence on telemedicine in the preoperative period. A systematic search was conducted over the last five years, excluding secondary research. Selection criteria were applied, yielding 68 articles that met these criteria and the quality criteria. The results show that the largest production comes from the United States and the United Kingdom, with collaboration between institutions and countries. The main use of telemedicine was in teleconsultation and telecounseling activities. In addition, telemedicine in the preoperative period was applied mostly to general procedures without distinction of surgical specialty, oncological surgery, and traumatology. The increased production observed can be related to the need for physical distancing due to the pandemic. Future research could include the co-occurrence of search terms, the impact of smartphones, NER terms, and the impact of polarity and objectivity on readers' choice of articles to read, share, and cite.

Author 1: Raquel Elisa Apaza-Avila

Keywords: Telemedicine; digital health; e-health; preoperative care; preoperative period; systematic review

PDF

Paper 19: A Solution to Improve the Detection of the Nominal Value of the Financial Market: A Case Study of the Alphabet Stocks

Abstract: Given the regular occurrence of non-stationarity, non-linearity, and high levels of noise in time series data, predicting the value of stocks is considerably difficult. Traditional methods have the potential to enhance the precision of forecasting, but they concurrently introduce computational complexity, augmenting the probability of prediction inaccuracies. To effectively tackle these concerns, this research proposes a novel approach that combines a light gradient boosting machine, a machine learning methodology, with artificial bee colony optimization. In the dynamic stock market examined, the proposed model demonstrated better efficiency and performance than alternative models, exhibiting a low error rate and high efficacy. The analysis utilized data on Alphabet stock over the period spanning from January 2, 2015, to June 29, 2023. The outcomes of the study provide evidence of the predictive accuracy of the proposed model in determining stock prices, and the model offers a pragmatic methodology for evaluating and forecasting stock price time series. In terms of forecast accuracy, the proposed model performs better than the methods currently in use.

Author 1: Zhaohua Li
Author 2: Xinyue Chang

Keywords: Alphabet stock; machine learning; light gradient boosting machine; optimization; artificial bee colony algorithm

PDF

Paper 20: Analysis of the Financial Market via an Optimized Machine Learning Algorithm: A Case Study of the Nasdaq Index

Abstract: The complex interaction among economic variables, market forces, and investor psychology presents a formidable obstacle to accurate forecasting in finance. Moreover, the non-stationary, non-linear, and highly volatile nature of stock price time series further compounds the difficulty of predicting stock prices in the securities market. Traditional methods can enhance forecasting precision, but they simultaneously introduce computational complexities that may increase prediction errors. This paper presents a unique model that effectively handles these challenges by integrating the Moth Flame optimization technique with the random forest method. The hybrid model demonstrated superior efficacy and performance compared to other models in the present investigation, with little error and optimal performance. The study evaluated the proposed predictive model by analyzing data from the Nasdaq index for the period from January 1, 2015, to June 29, 2023. The results indicate that the proposed model is a reliable and effective approach for analyzing and forecasting stock price time series, and the experimental findings show superior predictive accuracy compared to other contemporary methodologies.

Author 1: Lei Wang
Author 2: Mingzhu Xie

Keywords: Stock market prediction; Nasdaq index; random forest; moth-flame optimization; MFO-RF

PDF

Paper 21: Improving Smart Health Houses: Identifying Emotion Recognition using Facial Expression Analysis

Abstract: Smart health houses have shown great potential for providing advanced healthcare services and support to individuals. Although various computer-vision-based approaches have been developed, current facial expression analysis methods still have limitations that need to be addressed. This research paper introduces a facial expression analysis technique for emotion recognition based on a YOLOv4-based algorithm. The proposed method involves the use of a custom dataset for training, validation, and testing of the model. By overcoming the limitations of existing methods, the proposed technique delivers precise and accurate results in detecting subtle changes in facial expressions. Through several experimental and performance evaluation tasks, we have assessed the efficacy of our proposed method and demonstrated its potential to enhance the accuracy of smart health houses. This study emphasizes the importance of addressing emotional well-being in healthcare. As the experimental results show, the proposed method achieved a satisfactory accuracy rate, and the effectiveness of the YOLOv4 model for emotion detection suggests that emotional intelligence training can be a valuable tool in achieving this goal.

Author 1: Yang SHI
Author 2: Yanbin BU

Keywords: Smart health houses; computer vision; facial expression; emotion recognition; YOLO

PDF

Paper 22: Perceived Benefits and Challenges of Implementing CMMI on Agile Project Management: A Systematic Literature Review

Abstract: In an era where the agility and responsiveness of Agile project management are paramount, the integration of structured models like the Capability Maturity Model Integration (CMMI) presents a blend of unique opportunities and challenges. This study conducts a comprehensive systematic literature review of 23 scientific articles, chosen through the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, to explore the benefits and challenges of CMMI and software development integration within the context of Agile project management. Emphasizing the enhancement of Agile project management maturity, the research delves into the role of CMMI, particularly CMMI-DEV, as a pivotal element in Software Process Improvement (SPI) models tailored to Agile environments. The study’s novelty lies in its systematic and in-depth investigation of CMMI’s integration with Agile project management methodologies, a critical yet underexplored area in the existing literature. Addressing the urgency highlighted by global trends of resource inefficiencies and project management challenges, this research offers timely insights for both academia and industry. This study also categorizes key benefits while identifying prevalent challenges, such as resource constraints and organizational resistance. Additionally, this research also suggests solutions and improvements to these challenges. By offering a comprehensive evaluation, the research significantly advances the understanding of the complexities and potential of CMMI and Agile project management integration. It provides valuable insights for practical applications in organizational settings, emphasizing the potential of integrating structured models like CMMI-DEV with Agile project management methodologies. This integration is essential for enhancing project management maturity, marking a significant step forward in academic research and practical applications in this vital domain.

Author 1: Anggia Astridita
Author 2: Teguh Raharjo
Author 3: Anita Nur Fitriani

Keywords: CMMI; SPI; Agile project management; systematic literature review; PRISMA

PDF

Paper 23: Crime Prediction Model using Three Classification Techniques: Random Forest, Logistic Regression, and LightGBM

Abstract: Predicting the likelihood of a crime occurring is difficult, but machine learning can be used to develop models that do so. Random forest, logistic regression, and LightGBM are three well-known classification methods that can be applied to crime prediction. Random forest is an ensemble learning algorithm that predicts by combining multiple decision trees. It is an effective method for classification tasks and is frequently employed for crime prediction because it handles imbalanced datasets well. Logistic regression is a linear model that can be used to predict the probability of a binary outcome, such as the occurrence of a crime. It is a relatively straightforward technique that can be effective for crime prediction if the features are carefully chosen. LightGBM is a gradient-boosting decision tree algorithm with a reputation for speed and precision. It is a relatively new algorithm, but because it can achieve high accuracy even on small datasets, it has rapidly gained popularity for crime prediction. The experimental results show that LightGBM performs best for binary classification, followed by random forest and logistic regression.
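The three-way comparison described above can be sketched as follows. This is a minimal illustrative example on synthetic data, not the authors' experiment: scikit-learn's GradientBoostingClassifier stands in for LightGBM (both are gradient-boosted decision-tree ensembles), and the dataset parameters are invented.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for a crime dataset (illustrative only).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2],
                           class_sep=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Gradient boosting (LightGBM stand-in)": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.3f}")
```

With a real crime dataset, the same loop would simply take the loaded feature matrix and labels in place of the synthetic ones.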

Author 1: Abdulrahman Alsubayhin
Author 2: Muhammad Sher Ramzan
Author 3: Bander Alzahrani

Keywords: Crime prediction; random forest; logistic regression; LightGBM

PDF

Paper 24: Machine Learning-Driven Integration of Genetic and Textual Data for Enhanced Genetic Variation Classification

Abstract: Precision medicine and genetic testing have the potential to revolutionize disease treatment by identifying driver mutations crucial for tumor growth in cancer genomes. However, clinical pathologists face the time-consuming and error-prone task of classifying genetic variations using Textual clinical literature. In this research paper, titled “Machine Learning-Driven Integration of Genetic and Textual Data for Enhanced Genetic Variation Classification”, we propose a solution to automate this process. We aim to develop a robust machine learning algorithm with a knowledge base foundation to streamline precision medicine. Our methods leverage advanced machine learning and natural language processing techniques, coupled with a comprehensive knowledge base that incorporates clinical and genetic data to inform mutation significance. We use text mining to extract relevant information from scientific literature, enhancing classification accuracy. Our results demonstrate significant improvements in efficiency and accuracy compared to manual methods. Our system excels at identifying driver mutations, reducing the burden on clinical pathologists and minimizing errors. Automating this critical aspect of precision medicine promises to empower healthcare professionals to make more precise treatment decisions, advancing the field and improving patient care.

Author 1: Malkapurapu Sivamanikanta
Author 2: N Ravinder

Keywords: Precision medicine; genetic testing; driver mutations; cancer genomes; textual clinical literature; text mining; genetic variations

PDF

Paper 25: Performance Evaluation of Machine Learning Classifiers for Predicting Denial-of-Service Attack in Internet of Things

Abstract: Eliminating security threats on the Internet of Things (IoT) requires recognizing threat attacks. IoT and its applications currently form one of the most active scientific fields. When it comes to real-world implementations, IoT's attributes, on the one hand, make it simple to apply, but on the other hand, they expose it to cyber-attacks. The Denial of Service (DoS) attack is a type of threat that is now widespread in the field of IoT. Its primary goal is to stop or damage a service or capability on a target. Conventional Intrusion Detection Systems (IDS) are no longer sufficient for detecting these sophisticated attacks with unpredictable behaviors. Machine learning (ML)-based intrusion detection does not need a massive list of expected activities or a variety of threat signatures to create detection rules. This study aims to evaluate different ML classifiers for network intrusion detection that focus on DoS attacks in the IoT environment, to determine the best ML classifier for detecting the DoS attack. The XGBoost, Decision Tree (DT), Gaussian Naive Bayes (NB), Random Forest (RF), Logistic Regression (LR), and Support Vector Machine (SVM) ML classifiers are used to evaluate the DoS attack. The UNSW-NB15 dataset was used for this study. The obtained accuracy rates were 98.92% for XGBoost, 98.62% for SVM, 83.75% for Gaussian NB, 97.74% for LR, 99.48% for RF, and 99.16% for DT. The precision rates for XGBoost, SVM, Gaussian NB, LR, RF, and DT were 98.40%, 98.29%, 77.50%, 97.14%, 99.21%, and 99.12%, respectively. The sensitivity rates for XGBoost, SVM, Gaussian NB, LR, RF, and DT were 99.29%, 98.76%, 91.87%, 98.06%, 99.69%, and 99.08%, respectively. The results show that the RF classifier outperformed the other classifiers in terms of accuracy, precision, and sensitivity.
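The three metrics reported above (accuracy, precision, and sensitivity, i.e. recall) can be computed from any prediction vector; the sketch below uses invented labels for illustration, not the paper's UNSW-NB15 results.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Invented ground-truth and predicted labels (1 = attack, 0 = benign).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]

acc = accuracy_score(y_true, y_pred)    # (TP + TN) / total
prec = precision_score(y_true, y_pred)  # TP / (TP + FP)
sens = recall_score(y_true, y_pred)     # TP / (TP + FN), i.e. sensitivity
print(f"accuracy={acc:.2f} precision={prec:.2f} sensitivity={sens:.2f}")
# → accuracy=0.80 precision=0.80 sensitivity=0.80
```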

Author 1: Omar Almomani
Author 2: Adeeb Alsaaidah
Author 3: Ahmad Adel Abu Shareha
Author 4: Abdullah Alzaqebah
Author 5: Malek Almomani

Keywords: Cybersecurity; IDS; DoS attack; IoT; machine learning

PDF

Paper 26: Improving the Trajectory Clustering using Meta-Heuristic Algorithms

Abstract: The rapid growth of GPS trajectory data conceals valuable information regarding urban road infrastructure, urban traffic patterns, and population mobility. An innovative method termed trajectory regression clustering is introduced to improve the extraction of this hidden data and generate more precise clustering results. This approach belongs to the unsupervised trajectory clustering category and has the objective of minimizing the loss of local information inside the trajectory. It also seeks to prevent the algorithm from getting stuck in a suboptimal solution. The methodology we employ consists of three primary stages. To begin with, we present the notion of trajectory clustering and devise a distinctive approach known as angle-based partitioning to segment line segments. The evaluation results indicate a significant improvement in the clustering accuracy of the proposed method compared to existing methodologies, especially for a high number of clusters. The HCMGA and HCMMOPSO algorithms improved clustering accuracy for MBP values by 0.61% and 0.64%, respectively, compared to previous approaches. Moreover, based on the implementation findings, the ant colony approach demonstrates superior accuracy compared to alternative methods, while the particle swarm method exhibits faster convergence.

Author 1: Haiyang Li
Author 2: Xinliu Diao

Keywords: Ant colony method; particle swarm algorithm; HCM clustering; trajectory lines

PDF

Paper 27: Sustainability and Resilience Analysis in Supply Chain Considering Pricing Policies and Government Economic Measures

Abstract: Sustainability and resilience are becoming increasingly critical in shaping supply chain pricing strategies. They ensure that supply chains can withstand disruptions while adhering to environmental and social standards, thereby securing long-term economic viability. Despite their importance, the integration of these two pillars with the promotion of domestic products remains under-explored, especially concerning their influence on the competitive dynamics within supply chains. This study seeks to bridge this gap by examining the influence of sustainability, resilience, and domestic product promotion on supply chain pricing strategies. We introduce a model that captures the interactions among a central supplier, multiple stores, and the government, focusing on strategies adopted by each stakeholder to maximize its profit while adhering to sustainability and resilience requirements. The study reveals that stores' pricing strategies are significantly influenced by their sustainability efforts, with the cost coefficient of these efforts and the elasticity of sustainability efforts directly affecting profit margins. It also finds that the supplier's resilience strategy involves allocating inventory reserves to manage wholesale pricing effectively. Governmental regulatory measures, through taxation and subsidies, are shown to play a crucial role in maintaining the balance between domestic and foreign products and providing flexibility to diversify product sources to cope with local disruptions. Finally, perspectives are provided to enrich the understanding of how sustainability and resilience can be considered and impact pricing policies of the whole network.

Author 1: Dounia SAIDI
Author 2: Aziz AIT BASSOU
Author 3: Jamila EL ALAMI
Author 4: Mustapha HLYAL

Keywords: Supply chain management; pricing policies; sustainability; resilience; government regulation

PDF

Paper 28: Investigating Agile Values and Principles in Real Practices

Abstract: Software engineering is the field concerned with the development of information systems. However, the development process can often be complicated, and many researchers have introduced approaches to manage this complexity, leading to new subfields such as change management and organisational change. Agile can be regarded as a collection of best practices sharing the same values and principles. Since the introduction of the Agile manifesto, many researchers, manufacturers, and organisations have introduced their thoughts, tools, and models to enhance the understanding and adoption of Agile. Sharing a similar understanding of Agile among the people involved is essential in order to adopt it. This paper investigates the understanding of Agile among IT professionals. In addition, the factors that impact the understanding and adoption of Agile are highlighted and studied. A survey methodology was employed in this research among IT professionals from different organisations. The results of this study show that productivity and the ability to accept change are areas of conflicting understanding among participants. Furthermore, the experience of participants has an impact on the ways in which Agile is adopted.

Author 1: Abdullah A H Alzahrani

Keywords: Agile; software engineering; information systems; change management; organisational change

PDF

Paper 29: Category Decomposition-based Within Pixel Information Retrieval Method and its Application to Partial Cloud Extraction from Satellite Imagery Pixels

Abstract: A category decomposition-based within-pixel information retrieval method is proposed, together with its application to partial cloud extraction from satellite imagery pixels. A comparative study was conducted on estimating the sea surface temperature of pixels suffering from partial cloud cover. Three methods for estimating partial cloud cover within a pixel were compared: the proposed category decomposition-based method with the Generalized Inverse Matrix Method (GIMM), the well-known Least Square Method (LSM), and the Maximum Likelihood Method (MLH). It was found that an RMS (Root Mean Square) error of around 9% can be achieved. It was also found that estimation accuracy depends strongly on the variance of the representative vectors for cloud and ocean, and on observation noise. The experimental results with simulated data show that the RMS error of GIMM is highly sensitive to noise, followed by MLH and LSM, and that the best estimation accuracy is achieved by MLH, followed by LSM and GIMM.

Author 1: Kohei Arai
Author 2: Yasunori Terayama
Author 3: Masao Moriyama

Keywords: Category decomposition; information retrieval; cloud cover estimation; Generalized Inverse Matrix Method (GIMM); Least Square Method (LSM); Maximum Likelihood Method (MLH)

PDF

Paper 30: Costless Expert Systems Development and Re-engineering

Abstract: Symbolic AI is indispensable for current LLM agents, which use it, for example, to reason about the context of questions. An expert system is a symbolic AI that can explain how it reached its conclusions; typically a rule-based system, it has been attractive for different domains such as medicine, agriculture, and operations. On average, these systems involve hundreds of rules that are unstable; moreover, they are coded at low levels of abstraction. Therefore, designing and re-engineering an expert system is still costly and requires technical knowledge, because of the manual process and the maintenance of a low-level abstraction. On the other hand, model-driven architecture (MDA) has proven to be a successful technology that raised the abstraction level and formalized it to automate software development. It specifies business aspects in the platform-independent model (PIM) and implementation aspects in a platform-specific model (PSM), and automates the mapping between them using a standard mapping language called Query-View-Transform (QVT). This paper argues that utilizing MDA principles such as automation and the abstractions represented by the descriptor PIM, PSM, and mapping metamodels will not only overcome the instability of expert system rules but also provide new insights for their usage. Therefore, this work proposes an MDA-compliant methodology that adopts a UML sequence diagram and a class diagram for the PIM descriptor, and a generic PSM based on production rules. Moreover, a UML profile has been developed to support features lacking in the sequence model. The paper further argues for a new kind of process-oriented expert system. This not only allows domain experts to develop or participate in expert systems but also reduces the cost of developing new systems and of re-engineering or maintaining critical, large-scale legacy expert systems.

Author 1: Manal Alsharidi
Author 2: Abdelgaffar Hamed Ali

Keywords: Model-Driven Architecture (MDA); Unified Modelling Language (UML); Platform-Independent Model (PIM); Platform-Specific Model (PSM); Query-View-Transform (QVT)

PDF

Paper 31: Comparison of SVM kernels in Credit Card Fraud Detection using GANs

Abstract: The technological evolution of smartphones and telecommunication systems has led people to be more dependent on online shopping and electronic payments, which has created a burdensome transaction-validation task for many financial institutions. This paper examined and evaluated the efficacy of Support Vector Machine (SVM) kernels on Generative Adversarial Network (GAN)-generated synthetic data for detecting fraudulent credit card transactions. Four SVM kernels were investigated and compared: linear, polynomial, sigmoid, and radial basis function. The accuracy results indicated that the linear and polynomial kernels reached over 91%, while the sigmoid and radial basis function kernels reached 79% and 83%, respectively. The linear and polynomial models received over 90% ROC and F1 scores; in contrast, the ROC scores were lower for sigmoid (81%) and radial basis function (83%). Both sigmoid and radial basis function achieved over 80% in terms of F1 score. The precision scores were high for both the linear and polynomial kernels, reaching 99%, while sigmoid and radial basis function achieved over 80%. These results address the imbalanced-dataset issue by applying the SVM kernels to synthetic data generated with a GAN.
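A minimal sketch of the four-kernel comparison, assuming scikit-learn's SVC and a synthetic imbalanced dataset in place of the paper's GAN-generated transactions; the scores printed here will not match the paper's figures.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic imbalanced data (fraud is the rare class) -- illustrative only.
X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9, 0.1],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Standardize features, as SVMs are sensitive to feature scale.
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

results = {}
for kernel in ("linear", "poly", "sigmoid", "rbf"):
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    results[kernel] = clf.score(X_te, y_te)
    print(f"{kernel}: accuracy = {results[kernel]:.3f}")
```

In the paper's setting, the minority class would additionally be rebalanced with GAN-generated synthetic samples before fitting.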

Author 1: Bandar Alshawi

Keywords: Fraud transactions; credit card; Generative Adversarial Network; Support Vector Machine kernels; imbalanced dataset

PDF

Paper 32: A Cost-Efficient Approach for Creating Virtual Fitting Room using Generative Adversarial Networks (GANs)

Abstract: Customers all over the world want to see whether clothes fit them before purchasing. Therefore, customers naturally prefer brick-and-mortar clothes shopping, where they can try on products before buying them. After the COVID-19 pandemic, however, many sellers either shifted to online shopping or closed their fitting rooms, which made the shopping process hesitant and doubtful. The fact that clothes may not suit their buyers after purchase led us to use new AI technologies to create an online platform, a virtual fitting room (VFR), in the form of a mobile application and a model deployed on a webpage that can later be embedded in any online store, where customers can try on any number of clothing items without physically trying them. Besides saving much of the time spent searching, it will reduce crowding and inconvenience in physical shops when the same technology is applied using a special type of mirror that enables customers to try on clothes faster. From the business owners' perspective, this work can substantially increase online sales and preserve product quality by avoiding the issues of physical try-ons. The main approach used in this work is applying Generative Adversarial Networks (GANs) combined with image processing techniques to generate one output image from two input images: the person image and the cloth image. This work achieved results that outperformed the state-of-the-art approaches found in the literature.

Author 1: Kirolos Attallah
Author 2: Girgis Zaky
Author 3: Nourhan Abdelrhim
Author 4: Kyrillos Botros
Author 5: Amjad Dife
Author 6: Nermin Negied

Keywords: Generative Adversarial Networks (GANs); virtual reality; human body segmentation; image generator; conditional generator; background removal

PDF

Paper 33: Observational Quantitative Study of Healthy Lifestyles and Nutritional Status in Firefighters of the Fifth Command of Callao, Ventanilla 2023

Abstract: Given the high concern for human health, the aim is to determine the relationship between healthy lifestyles and nutritional status among firefighters of the VCD Callao Ventanilla 2023. This study was conducted in four volunteer fire companies, namely B-75, B-184, B-207, and B-232, located in the districts of Ventanilla and Mi Perú. The population consists of 291 personnel, with a sample of 168 participants. It was observed that 58.9% (99) of the participants are under 36 years old, 29.8% (50) are between 36 and 45 years old, and 11.3% (19) are 46 years or older. In terms of gender, 62.5% (105) are male. Regarding the duration of their firefighting service, 70.2% (118) have at most 10 years of seniority. Concerning lifestyles, 57.7% (97) of the participants have an unhealthy lifestyle, 40.5% (68) have a healthy lifestyle, and 1.8% (3) have a very healthy lifestyle. Regarding the nutritional status of the firefighters in this study, it was found that 53.3% (89) are overweight, 26.8% are of normal weight, 19.6% (33) are obese, and 0.6% (1) are underweight. It is worth mentioning that, according to Rodríguez C's study, 95.2% of volunteers belonging to the B107 Fire Company lead a healthy lifestyle, while 4.8% do not. Statistically, we can assert that there is no significant relationship between healthy lifestyles and nutritional status. However, a direct relationship is observed between nutritional status and age. Likewise, it can be affirmed that at least 72.9% of the studied population has excess weight, being either overweight or obese.

Author 1: Genrry Perez-Olivos
Author 2: Exilda Garcia-Carhuapoma
Author 3: Ethel Gurreonero-Seguro
Author 4: Julio Méndez-Nina
Author 5: Sebastian Ramos-Cosi
Author 6: Alicia Alva Mantari

Keywords: BMI; firemen; lifestyles; excess weight

PDF

Paper 34: Enhanced Emotion Analysis Model using Machine Learning in Saudi Dialect: COVID-19 Vaccination Case Study

Abstract: Sentiment Analysis (SA) and Emotion Analysis (EA) are active areas of research aimed at automatically detecting and recognizing the sentiment expressed in a text and identifying the underlying opinion towards a specific topic. Although they are often considered interchangeable terms, they differ slightly. The primary purpose of SA is to find the polarity expressed in a text by distinguishing between positive, negative, and neutral opinions, whereas EA is concerned with detecting finer-grained emotion categories, such as happiness, anger, sadness, and fear. EA thus allows the analysis to extract more accurate and detailed results suited to the field in which it is applied. This work delves into EA within the Saudi Arabian dialect, focusing on sentiments related to COVID-19 vaccination campaigns. Our endeavor addresses the absence of research on developing an effective EA machine learning model for Saudi dialect texts, particularly within the healthcare and vaccination domain, exacerbated by the lack of a manually labeled EA corpus. Using a systematic approach, a dataset of 33,373 tweets is collected, annotated, and preprocessed. Thirty-six machine learning experiments encompassing SVM, Logistic Regression, and Decision Tree models, three stemming techniques, and four feature extraction methods enhance the understanding of public sentiment surrounding COVID-19 vaccination campaigns. Our Logistic Regression model achieved 74.95% accuracy. Findings reveal a predominantly positive sentiment, particularly happiness, among Saudi citizens. This research contributes valuable insights for healthcare communication, public sentiment monitoring, and decision-making, while providing a labeled corpus and ML model comparison results for improving model performance and exploring broader linguistic and dialectal applications.
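One of the thirty-six pipeline variants described above could plausibly look like the sketch below: text features feeding a Logistic Regression classifier. The toy English texts, the invented emotion labels, and the choice of TF-IDF as the feature extraction method are all illustrative assumptions; the paper worked with Saudi-dialect tweets and its own stemming and feature extraction configurations.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for annotated tweets (illustrative, not the paper's corpus).
texts = ["great news about the vaccine", "so happy I got my dose",
         "terrified of side effects", "this makes me so angry",
         "vaccination centre was quick", "furious about the long queue"]
labels = ["happiness", "happiness", "fear", "anger", "happiness", "anger"]

# TF-IDF vectorization followed by a Logistic Regression emotion classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["happy about my vaccine dose"]))
```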

Author 1: Abdulrahman O. Mostafa
Author 2: Tarig M. Ahmed

Keywords: Data mining; natural language processing; sentiment analysis; emotion analysis; machine learning; support vector machine; logistic regression; decision tree; COVID-19

PDF

Paper 35: Dimensionality Reduction: A Comparative Review using RBM, KPCA, and t-SNE for Micro-Expressions Recognition

Abstract: Facial expressions are the main way humans display emotions. Emotions can also appear in a special form known as micro-expressions. A micro-expression is a very brief facial expression that appears on a person's face under certain circumstances, typically when the person tries to lie or hide something. Studying micro-expressions is attractive, but the number of pixels an image contains makes it difficult. Feature extraction techniques are the most popular means of reducing data dimensionality. These techniques create a new low-dimensional dataset that tries to represent as much information as the original dataset. Many methods are used for dimensionality reduction; the Restricted Boltzmann Machine (RBM), Kernel Principal Component Analysis (KPCA), and t-distributed Stochastic Neighbor Embedding (t-SNE) are currently widely used by researchers, and choosing the right technique is time-consuming. This study proposes a framework for micro-expression recognition. Its two key processes are facial feature extraction (Dlib) and dimensionality reduction using RBM, KPCA, and t-SNE; we select the technique that generates the new dataset representing the original dataset as faithfully as possible. The framework is trained with images from the CASME II database, which was built specially for research purposes, and tested with new, previously unseen images. The experiments are conducted in Python.
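The three reduction techniques named above are all available in scikit-learn; the sketch below applies them to a small random matrix standing in for extracted facial features (an illustrative assumption, not the paper's CASME II pipeline), reducing 64 "pixel" features to 2 components each.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.manifold import TSNE
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = rng.random((100, 64))  # 100 "images" with 64 pixel features in [0, 1)

# Three dimensionality reduction techniques, each producing 2 components.
X_kpca = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)
X_tsne = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)
X_rbm = BernoulliRBM(n_components=2, random_state=0).fit_transform(X)

print(X_kpca.shape, X_tsne.shape, X_rbm.shape)  # each (100, 2)
```

A subsequent classifier would then be trained on whichever reduced representation preserves the original information best.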

Author 1: Viola Bakiasi
Author 2: Markela Muça
Author 3: Rinela Kapçiu

Keywords: Dimensionality reduction; Kernel Principal Component Analysis (KPCA); t-distributed Stochastic Neighbor Embedding (t-SNE); Restricted Boltzmann Machine (RBM); facial feature extraction

PDF

Paper 36: A Method for Extracting Traffic Parameters from Drone Videos to Assist Car-Following Modeling

Abstract: A new method for extracting traffic parameters from UAV videos to assist in establishing a car-following model is proposed in this paper. In the target detection stage, an improved ShuffleNet network and the GSConv module are introduced into the Yolov7-tiny neural network model. In the tracking and matching stage, HOG features and IoU motion metrics are introduced into the DeepSort multi-object tracking algorithm. Experiments on a self-built UAV aerial traffic dataset show that the new method improves several detection and tracking indicators; it reduces the false detections, missed detections, and erroneous ID switches of the previous algorithm, and improves the accuracy and lightweight design of multi-target tracking. Finally, grey relational analysis was applied to the traffic parameters extracted by the new method, and the driver's visual perception of collision was introduced into the car-following model. Through stability analysis, small-disturbance simulation, and collision risk assessment, the newly proposed traffic flow parameter extraction method is shown to improve the dynamic characteristics and safety of the car-following model, and can be used to alleviate traffic congestion and improve driving safety.

Author 1: Xiangzhou Zhang
Author 2: Zhongke Shi

Keywords: UAV; Yolov7-tiny; DeepSort; car-following model; stability analysis; traffic congestion; safety assessment

PDF

Paper 37: A Review of Fake News Detection Techniques for Arabic Language

Abstract: The growing proliferation of social networks provides users worldwide with access to vast amounts of information. However, although social media users have benefitted significantly from the rise of various platforms in terms of interacting with others, e.g., expressing their opinions, finding products and services, and checking reviews, it has also raised critical problems, such as the spread of fake news. Spreading fake news affects not only individual citizens but also governments and countries. This situation necessitates the integration of artificial intelligence methodologies to address and alleviate this issue effectively. Researchers in the field have leveraged different techniques to mitigate this problem. However, research on fake news detection in the Arabic language is still in its early stages compared with other languages, such as English. This review paper intends to provide a clear view of Arabic research in the field. In addition, the paper aims to give other researchers working on Arabic fake news detection a better understanding of the common features used in extraction, machine learning, and deep learning algorithms. Moreover, a list of publicly available datasets is provided to give an idea of their characteristics and facilitate researcher access. Furthermore, some of the limitations and challenges related to Arabic fake news and rumor detection are discussed to encourage other researchers.

Author 1: Taghreed Alotaibi
Author 2: Hmood Al-Dossari

Keywords: Fake news detection; rumors; classification; Arabic language

PDF

Paper 38: Enhancing Quality-of-Service in Software-Defined Networks Through the Integration of Firefly-Fruit Fly Optimization and Deep Reinforcement Learning

Abstract: The Software Defined Networking (SDN) paradigm has emerged as a critical tool for meeting the dynamic demands of network management with respect to efficiency and flexibility. Quality of Service (QoS) optimization, which encompasses essential features including bandwidth allocation, latency, and packet loss, is a major problem in SDN systems due to its direct influence on network application performance and user experience. To address this critical problem, this paper proposes the Firefly-Fruit Fly Optimized Deep Reinforcement Learning (DQ-FFO-DRL) framework, a novel combination of optimization techniques derived from Fruit Fly and Firefly behaviors with Deep Q-Learning. The framework effectively explores ideal network configurations by utilizing the distinct advantages of the Fruit Fly and Firefly optimization components, while the Deep Q-Learning component dynamically adjusts to changing network circumstances by drawing on prior experiences. Extensive testing and modeling reveal that the DQ-FFO-DRL approach performs very well in SDNs compared to conventional QoS management solutions. The algorithm demonstrates exceptional adaptability in navigating the ever-changing landscape of resource allocation, network usage, and overall network performance. The suggested system, implemented in Python, offers an advanced and flexible method for enhancing QoS in SDN systems.

Author 1: Mahmoud Aboughaly
Author 2: Shaikh Abdul Hannan

Keywords: Software Defined Network (SDN); Quality of Service (QoS); firefly-fruit fly optimization; Deep Reinforcement Learning (DRL); adaptive QoS enhancement; network optimization

PDF
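The exact DQ-FFO-DRL update rules are not given in the abstract, but the firefly component of such hybrids conventionally moves a dimmer solution toward a brighter one with distance-decaying attractiveness. A minimal sketch of one such move (parameter names `beta0`, `gamma`, `alpha` follow the standard firefly algorithm, not necessarily this paper):

```python
import math
import random

def firefly_step(x_i, x_j, f_i, f_j, beta0=1.0, gamma=1.0, alpha=0.0, rng=None):
    """One firefly move: x_i is attracted toward the brighter firefly x_j.
    f_i, f_j are fitness values (lower is better); alpha adds random jitter."""
    if f_j >= f_i:                 # x_j is not brighter: no attraction
        return list(x_i)
    rng = rng or random.Random(0)
    r2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))   # squared distance
    beta = beta0 * math.exp(-gamma * r2)               # attractiveness decays with distance
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            for a, b in zip(x_i, x_j)]
```

In the hybrid framework described, candidates moved this way would encode network configurations whose fitness is a QoS score.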

Paper 39: Revolutionizing Magnetic Resonance Imaging Image Reconstruction: A Unified Approach Integrating Deep Residual Networks and Generative Adversarial Networks

Abstract: Advancements in data capture techniques in the field of Magnetic Resonance Imaging (MRI) offer faster retrieval of critical medical imagery. Even with these advances, reconstruction techniques are generally slow and visually poor, making it difficult to incorporate compressed sensing. To address these issues, this work proposes a novel hybrid GAN-DRN architecture for MRI reconstruction. This approach greatly improves texture, boundary characteristics, and image fidelity over previous methods by combining Generative Adversarial Networks (GANs) with Deep Residual Networks (DRNs). One important innovation is the GAN's all-encompassing learning mechanism, which modifies the generator's behaviour to protect the network against corrupted input, while the discriminator simultaneously and thoroughly assesses prediction validity. With this technique, intrinsic features in the original image are skillfully extracted and managed, producing excellent results that adhere to predetermined quality criteria. The hybrid GAN-DRN technique's effectiveness is demonstrated by experimental findings, obtained using Python software, of a 0.99 SSIM (Structural Similarity Index) and a peak signal-to-noise ratio of 50.3. This achievement is a significant advancement in MRI reconstruction and has the potential to transform the medical imaging industry. In the future, efforts will be directed towards improving real-time MRI reconstruction, pursuing multi-modal MRI fusion, confirming clinical effectiveness via trials, and investigating robustness, intuitive interfaces, transfer learning, and explanatory techniques to improve clinical interpretation and adoption.

Author 1: M Nagalakshmi
Author 2: M. Balamurugan
Author 3: B. Hemantha Kumar
Author 4: Lakshmana Phaneendra Maguluri
Author 5: Abdul Rahman Mohammed ALAnsari
Author 6: Yousef A.Baker El-Ebiary

Keywords: Magnetic Resonance Imaging (MRI); deep learning; generative adversarial network; deep residual network; ResNet50

PDF
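The PSNR figure the abstract reports is a standard metric; a self-contained sketch of how it is computed for two equal-sized single-channel images (nested lists here, rather than the paper's actual tensors):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between two equal-sized images."""
    flat_r = [p for row in ref for p in row]
    flat_t = [p for row in test for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_r, flat_t)) / len(flat_r)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher is better; a reconstruction scoring around 50 dB, as reported, implies a very small mean squared error against the reference scan.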

Paper 40: Hybrid Vision Transformers and CNNs for Enhanced Transmission Line Segmentation in Aerial Images

Abstract: This paper presents a novel architecture for the segmentation of transmission lines in aerial images, utilizing a hybrid model that combines the strengths of Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs). The proposed method first employs a Swin Transformer backbone (Swin-B) that processes the input image through a hierarchical structure, effectively capturing multi-scale contextual information. Following this, an upsampling strategy is employed, wherein the features extracted by the transformer are refined through convolutional layers, ensuring that the resolution is maintained, and spatial details are recovered. To integrate multi-level feature maps, a feature fusion module with a squeeze-and-excitation (SE) layer is introduced, which consolidates the benefits of both high-level and low-level feature extractions. The SE layer plays a pivotal role in augmenting the feature channels, focusing the model's attention on the most informative features for transmission line detection. By leveraging the global receptive field of ViTs for comprehensive context and the local precision of CNNs for fine-grained detail, our method aims to set a new benchmark for transmission line segmentation in aerial imagery. The effectiveness of our approach is demonstrated through extensive experiments and comparisons with existing state-of-the-art methods.

Author 1: Hoanh Nguyen
Author 2: Tuan Anh Nguyen

Keywords: Vision transformers; convolutional neural networks; transmission lines segmentation; hybrid model; feature fusion

PDF
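The squeeze-and-excitation (SE) layer in the fusion module re-weights feature channels by a learned gate. A dependency-free sketch of the SE idea (toy dense weights `w1`, `w2` stand in for the learned reduction/expansion layers; the paper's actual layer sizes are not specified here):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excitation over C feature maps (each a flat list of values).
    w1: (C//r x C) reduction weights; w2: (C x C//r) expansion weights, no bias."""
    # Squeeze: global average pooling per channel
    z = [sum(fm) / len(fm) for fm in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid, producing one gate per channel
    h = [max(0.0, sum(w * zz for w, zz in zip(row, z))) for row in w1]
    s = [sigmoid(sum(w * hh for w, hh in zip(row, h))) for row in w2]
    # Scale: reweight each channel by its gate in (0, 1)
    return [[v * g for v in fm] for fm, g in zip(feature_maps, s)]
```

This is how the SE layer can "focus the model's attention on the most informative features": channels with low gate values are suppressed before fusion.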

Paper 41: Dynamic Object Detection Revolution: Deep Learning with Attention, Semantic Understanding, and Instance Segmentation for Real-World Precision

Abstract: Semantic and instance segmentation are critical goals that span a wide range of applications, from autonomous driving to object recognition in different fields. Existing approaches have limitations, especially when it comes to the difficult task of identifying and detecting minute objects in intricate real-world situations. This work presents a novel method that uses a hybrid deep learning architecture, implemented in Python, to seamlessly combine semantic and instance segmentation. The suggested approach addresses the pressing need in challenging real-world settings for accurate localization and fine-grained object detection. By combining the strengths of a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory network (BiLSTM), the hybrid model effectively achieves semantic segmentation using sequential input and spatial information. A parallel attention mechanism is seamlessly incorporated into the segmentation process to further improve the model's capabilities and enable the recognition of important object attributes. This study highlights the difficulties caused by changing environmental elements, emphasizing the need for precise object localization and understanding in addition to the complexities of fine-grained object detection. The suggested approach achieves an outstanding accuracy rate of 99.66%, outperforming existing approaches by 25.22%. This significant increase highlights the benefits that the hybrid design has over individual techniques and shows how effective it is at resolving issues that arise in dynamic real-world circumstances. The research highlights the importance of attention mechanisms in deep learning and demonstrates how they can improve the specificity and accuracy of object detection and localization in intricate real-world scenarios. The improved performance of the suggested methodology is compared with well-known techniques such as RCNN, CNN, and DNN, reaffirming its status as a reliable means of advancing object localization and recognition in difficult situations.

Author 1: Karimunnisa Shaik
Author 2: Dyuti Banerjee
Author 3: R. Sabin Begum
Author 4: Narne Srikanth
Author 5: Jonnadula Narasimharao
Author 6: Yousef A.Baker El-Ebiary
Author 7: E. Thenmozhi

Keywords: Semantic segmentation; instance segmentation; convolutional neural network; bidirectional long short-term memory; attention mechanism

PDF

Paper 42: Improved Algorithm with YOLOv5s for Obstacle Detection of Rail Transit

Abstract: As infrastructure for urban development, ensuring the safe operation of urban rail transit is particularly important. Foreign object intrusion into urban rail transit areas is one of the main causes of train accidents. To tackle the obstacle detection challenge in rail transit, this paper introduces the CS-YOLO urban rail foreign object intrusion detection model. It builds on an improved YOLOv5s algorithm, incorporating an enhanced convolutional attention (CBAM) module to replace the C3 module in the original YOLOv5s backbone network. In addition, the KM-Decoupled Head is proposed to decouple the detection head, and SIoU is applied as the loss function. Tested on the WZ dataset, the average accuracy increased from 0.844 to 0.893. The research method in this paper provides a reference basis for urban rail transit safety detection and has practical reference value.

Author 1: Shuangyuan Li
Author 2: Zhengwei Wang
Author 3: Yanchang Lv
Author 4: Xiangyang Liu

Keywords: Railroad track intrusion detection; CBAM (Convolutional Block Attention Module) attention; activation function; decoupling probe; loss function

PDF

Paper 43: Students' Perception of ChatGPT Usage in Education

Abstract: This research article delves into the impact of ChatGPT on education, focusing on the perceptions and usage patterns among high school and university students. The article begins by introducing ChatGPT, emphasizing its rapid user adoption and widespread interest. It explores the application of ChatGPT in various fields, including healthcare, agriculture, and education. A comprehensive survey involving 102 students, both high school and university, is detailed, covering aspects like familiarity with ChatGPT, reasons for usage, self-assessment of its effectiveness, and attitudes toward informing teachers about its use. The findings reveal varied perspectives on the benefits and challenges of incorporating ChatGPT in the learning process. The article concludes by emphasizing the need for careful consideration and integration of AI technologies in education, highlighting the risks of uncritical reliance on such tools and advocating for a balanced approach to foster students' critical thinking and intellectual growth.

Author 1: Irena Valova
Author 2: Tsvetelina Mladenova
Author 3: Gabriel Kanev

Keywords: Artificial intelligence in education; assessment; ChatGPT; Generative Pretrained Transformer 3; GPT-3; higher education; learning; teaching; Natural Language Processing (NLP)

PDF

Paper 44: From Time Series to Images: Revolutionizing Stock Market Predictions with Convolutional Deep Neural Networks

Abstract: Predicting the trend of stock prices is a hard task due to the numerous factors and prerequisites that can affect price movement in a specific direction. Various strategies have been proposed to extract relevant features of stock data, which is crucial in this domain. Due to its powerful data processing capabilities, deep learning has demonstrated remarkable results in the financial field among modern tools. This research proposes a convolutional deep neural network model that utilizes a 2D-CNN to process and classify images. Images are created by transforming the top technical indicators from a financial time series, each calculated for 21 different day periods, into images of specific sizes. The images are labeled Sell, Hold, or Buy based on the original trading data. Compared to a Long Short-Term Memory model and a one-dimensional Convolutional Neural Network, the proposed model exhibits the best performance.

Author 1: TATANE Khalid
Author 2: SAHIB Mohamed Rida
Author 3: ZAKI Taher

Keywords: Technical indicators; convolutional neural networks; stock trend forecasting; deep learning

PDF
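The paper's exact indicator set and image layout are not given in the abstract; the following sketch illustrates the general idea with a single hypothetical indicator (a simple moving average) computed over windows 1 through 21 and stacked into a 21x21 array, one row per period length:

```python
def sma(prices, window):
    """Simple moving average; one value per position where a full window fits."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def indicators_to_image(prices, n_periods=21):
    """Build an n_periods x n_periods 'image': row k holds the most recent
    n_periods values of the SMA with window k+1. The paper uses several
    technical indicators, each over 21 day periods; this toy uses only SMA."""
    image = []
    for window in range(1, n_periods + 1):
        values = sma(prices, window)
        image.append(values[-n_periods:])   # keep the most recent n_periods values
    return image
```

Each such image is then assigned a Sell/Hold/Buy label from the trading data and fed to the 2D-CNN as an ordinary classification sample.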

Paper 45: An Explainable and Optimized Network Intrusion Detection Model using Deep Learning

Abstract: In the current age, the internet and its usage have become a core part of human existence, and with it we have developed technologies that seamlessly integrate with various phases of our day-to-day activities. The main challenge with most modern-day infrastructure is that security requirements are often an afterthought. Despite growing awareness, current solutions are still unable to completely protect computer networks and internet applications from the ever-evolving threat landscape. In recent years, deep learning algorithms have proved to be very efficient in detecting network intrusions. However, it is exhausting, time-consuming, and computationally expensive to manually adjust the hyperparameters of deep learning models. It is also important to develop models that not only make accurate predictions but also help in understanding how those predictions are made; model explainability thus helps increase users' trust. The current research gap in the domain of Network Intrusion Detection is the absence of a holistic framework that incorporates both optimization and explainability methods. In this research article, a hybrid approach to hyperparameter optimization using Hyperband is proposed. An overall accuracy of 98.58% is achieved by considering all the attack types of the CSE CIC IDS 2018 dataset. The proposed hybrid framework enhances the performance of Network Intrusion Detection by choosing an optimized set of parameters and leverages explainable AI (XAI) methods such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to understand model predictions.

Author 1: Haripriya C
Author 2: Prabhudev Jagadeesh M. P

Keywords: Network Intrusion Detection; deep learning; hyperparameter optimization; Hyperband; CSE CIC IDS 2018 dataset; XAI methods; LIME; SHAP

PDF
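Hyperband's core subroutine is successive halving: evaluate many hyperparameter configurations cheaply, keep the best fraction, and re-evaluate the survivors with a larger training budget. A minimal sketch of that inner loop (the paper's actual search space and budgets are not specified here):

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Hyperband's inner loop: score all configs on a small budget, keep the
    best 1/eta fraction, multiply the budget by eta, repeat until one remains.
    `evaluate(config, budget)` must return a validation loss (lower is better)."""
    budget = min_budget
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = scored[:max(1, len(configs) // eta)]   # keep the top 1/eta
        budget *= eta                                    # give survivors more budget
    return configs[0]
```

Hyperband proper runs several such brackets with different trade-offs between the number of configurations and the starting budget, which is what makes it cheaper than exhaustive tuning.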

Paper 46: Low-Light Image Enhancement using Retinex-based Network with Attention Mechanism

Abstract: Images captured in low-light conditions typically exhibit significant degradation such as low contrast, color shift, noise, and artifacts, which diminish the accuracy of recognition tasks in computer vision. To address these challenges, this paper proposes a low-light image enhancement method based on Retinex. Specifically, a decomposition network is designed to acquire high-quality illumination and reflectance maps, complemented by a comprehensive loss function. A denoising network is proposed to mitigate the noise in low-light images with the assistance of the images' spatial information. Notably, an extended convolution layer is employed to replace the maximum pooling layer, and the Basic-Residual-Module (BRM) from the decomposition network has been integrated into the denoising network. To address challenges related to shadow blocks and halo artifacts, an enhancement module is integrated into the skip connections of the U-Net. This enhancement module leverages the Feature-Extraction-Module (FEM) attention mechanism, which improves the network's capacity to learn meaningful features by combining image features across the channel dimension with a spatial attention mechanism, capturing more detailed illumination information about the object while suppressing useless information. In experiments on the public datasets LOL-V1 and LOL-V2, our method demonstrates noteworthy performance improvements, achieving averages of 23.15, 0.88, 0.419, and 0.0040 on four evaluation metrics (PSNR, SSIM, NIQE, and GMSD). These results are superior to mainstream methods.

Author 1: Shaojin Ma
Author 2: Weiguo Pan
Author 3: Nuoya Li
Author 4: Songjie Du
Author 5: Hongzhe Liu
Author 6: Bingxin Xu
Author 7: Cheng Xu
Author 8: Xuewei Li

Keywords: Low-light image enhancement; decomposition network; FEM attention mechanism; denoising network; detail enhancement

PDF
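The Retinex model underlying the method factors an image as I = R × L (reflectance times illumination). The paper learns this decomposition with a network; purely as an illustration of the factorization itself, here is a crude hand-rolled version that estimates illumination as a local mean:

```python
def retinex_decompose(image, eps=1e-6):
    """Toy Retinex split I = R * L for a single-channel image (nested lists):
    estimate illumination L as a local mean, then reflectance R = I / L."""
    h, w = len(image), len(image[0])
    L = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # 3x3 neighbourhood mean as a crude smooth illumination estimate
            vals = [image[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            L[y][x] = sum(vals) / len(vals)
    R = [[image[y][x] / (L[y][x] + eps) for x in range(w)] for y in range(h)]
    return R, L
```

Enhancement then amounts to brightening/denoising L and R separately and recombining them, which is what the decomposition and denoising networks replace with learned components.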

Paper 47: Double Branch Lightweight Finger Vein Recognition based on Diffusion Model

Abstract: Aiming at the problems of high complexity, insufficient global information extraction, and easy overfitting in finger vein recognition, a finger vein recognition method based on a diffusion model is proposed. Firstly, finger vein images are generated from the dataset by a diffusion model, which is used to prevent overfitting; secondly, a streamlined convolutional neural network is combined with an improved multi-head self-attention mechanism to form a two-branch lightweight backbone network, which effectively reduces the complexity of the model; and finally, in order to maximally extract the image's overall information, convolution is used to merge the extracted local and global features, and the recognition results are output. The algorithm can reach a maximum recognition rate of 99.78% on multiple datasets, while the number of parameters is only 2.15M, which further reduces the complexity of the algorithm while maintaining high accuracy compared to other novel finger vein recognition algorithms as well as lightweight convolutional neural network models. As the first attempt in this field, it will provide new ideas for future research work.

Author 1: Zhiyong Tao
Author 2: Yajing Gao
Author 3: Sen Lin

Keywords: Finger vein recognition; convolution neural network; diffusion model; multi-head self-attention mechanism; lightweight network

PDF

Paper 48: An Ensemble Approach to Question Classification: Integrating Electra Transformer, GloVe, and LSTM

Abstract: Natural Language Processing (NLP) has emerged as a critical technology for understanding and generating human language, with applications including machine translation, sentiment analysis, and, most importantly, question classification. As a subfield of NLP, question classification focuses on determining the type of information being sought, which is an important step for downstream applications such as question answering systems. This study introduces an innovative ensemble approach to question classification that combines the strengths of the Electra, GloVe, and LSTM models. Evaluated thoroughly on the well-known TREC dataset, the model shows that combining these different technologies can produce better outcomes: Electra uses transformers to understand complex language; GloVe provides global vector representations for word-level meaning; and LSTM models long-term relationships through sequence learning. By combining these components judiciously, the ensemble offers a strong and effective way to solve the hard problem of question classification, achieving an accuracy of 80% on the test dataset when compared against well-known models such as BERT, RoBERTa, and DistilBERT.

Author 1: Sanad Aburass
Author 2: Osama Dorgham
Author 3: Maha Abu Rumman

Keywords: Ensemble learning; long short term memory; transformer models; Electra; GloVe; TREC dataset

PDF

Paper 49: SpanBERT-based Multilayer Fusion Model for Extractive Reading Comprehension

Abstract: Extractive reading comprehension is a prominent research topic in machine reading comprehension, which aims to predict the correct answer from the given context. Pre-trained models have recently shown considerable effectiveness in this area. However, during the training process, most existing models face the problem of semantic information loss. To address this problem, this paper proposes a model based on the SpanBERT pre-trained model to predict answers using a multi-layer fusion method. Both the outputs of the intermediate layer and the prediction layer of the transformer are fused to perform answer prediction, thereby improving the model's performance. The proposed model achieves F1 scores of 92.54%, 84.02%, 80.86%, 71.32%, and EM scores of 86.27%, 81.25%, 69.10%, 56.42% on the SQuAD1.1, SQuAD2.0, Natural Questions and NewsQA datasets, respectively. Experimental results show that our model outperforms a number of existing models and has excellent performance.

Author 1: Pu Zhang
Author 2: Lei He
Author 3: Deng Xi

Keywords: Machine reading comprehension; pre-trained model; transformer

PDF
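The abstract describes fusing the transformer's intermediate-layer and prediction-layer outputs before answer prediction. The paper's exact fusion operator is not given here; one common realization is a weighted sum of per-layer hidden vectors, sketched below (the weights would normally be learned):

```python
def fuse_layers(layer_outputs, weights):
    """Weighted sum of per-layer hidden vectors (one flat vector per layer),
    combining intermediate and final transformer layers into one representation."""
    assert len(layer_outputs) == len(weights)
    dim = len(layer_outputs[0])
    return [sum(w * layer[i] for w, layer in zip(weights, layer_outputs))
            for i in range(dim)]
```

The fused vector then feeds the span-prediction head, so semantic information from earlier layers is not lost at the final layer.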

Paper 50: Topology Approach for Crude Oil Price Forecasting of Particle Swarm Optimization and Long Short-Term Memory

Abstract: Forecasting crude oil prices holds significant importance in finance, energy, and economics, given its extensive impact on worldwide markets and socio-economic equilibrium. Long Short-Term Memory (LSTM) neural networks have exhibited noteworthy achievements in time series forecasting, specifically in predicting crude oil prices. Nevertheless, LSTM models frequently depend on the manual adjustment of hyperparameters, a task that can be laborious and demanding. This study presents a novel methodology incorporating Particle Swarm Optimization (PSO) into LSTM networks to optimize the network architecture and minimize the error. Using historical data on crude oil prices, the method autonomously explores and identifies optimal hyperparameters, employing the star and ring topologies of PSO to balance global and local search capabilities. The findings demonstrate that LSTM+starPSO is superior in predictive accuracy to LSTM+ringPSO, a previous hybrid LSTM-PSO, conventional LSTM networks, and statistical time series methods. The LSTM+starPSO model improves RMSE by about 0.16% and 22.82% for the WTI and BRENT datasets, respectively. The results indicate that the LSTM model, when enhanced with PSO, demonstrates better proficiency in capturing the patterns and inherent dynamics of crude oil price changes. The proposed model offers a dual benefit: it alleviates the need for manual hyperparameter tuning and serves as a valuable resource for stakeholders in the energy and financial industries interested in dependable insights into crude oil price fluctuations.

Author 1: Marina Yusoff
Author 2: Darul Ehsan
Author 3: Muhammad Yusof Sharif
Author 4: Mohamad Taufik Mohd Sallehud-din

Keywords: Crude oil; deep learning; Particle Swarm Optimization; Long Short-Term Memory; forecasting

PDF
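The star/ring distinction the paper exploits is about which neighbours a particle consults when choosing the best-known position to move toward. A minimal sketch of that neighbourhood selection (velocity and position updates omitted):

```python
def neighborhood_best(positions, fitness, i, topology="ring"):
    """Index of the best particle visible to particle i (lower fitness is better).
    Star: every particle sees the whole swarm (the global best).
    Ring: particle i sees only itself and its two ring neighbours."""
    n = len(positions)
    if topology == "star":
        candidates = range(n)
    else:  # ring
        candidates = [(i - 1) % n, i, (i + 1) % n]
    return min(candidates, key=lambda j: fitness[j])
```

Star topology propagates the global best immediately (fast convergence); ring topology spreads information slowly around the ring, preserving diversity and improving local search, which is the trade-off the study compares.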

Paper 51: Explore Innovative Depth Vision Models with Domain Adaptation

Abstract: In recent years, deep learning has garnered widespread attention in graph-structured data. Nevertheless, due to the high cost of collecting labeled graph data, domain adaptation becomes particularly crucial in supervised graph learning tasks. The performance of existing methods may degrade when there are disparities between training and testing data, especially in challenging scenarios such as remote sensing image analysis. In this study, an approach to achieving high-quality domain adaptation without explicit adaptation was explored. The proposed Efficient Lightweight Aggregation Network (ELANet) model addresses domain adaptation challenges in graph-structured data by employing an efficient lightweight architecture and regularization techniques. Through experiments on real datasets, ELANet demonstrated robust domain adaptability and generality, performing exceptionally well in cross-domain settings of remote sensing images. Furthermore, the research indicates that regularization techniques play a crucial role in mitigating the model's sensitivity to domain differences, especially when incorporating a module that adjusts feature weights in response to redefined features. Moreover, the study finds that under the same training and validation set configurations, the model achieves better training outcomes with appropriate data transformation strategies. The achievements of this research extend not only to the agricultural domain but also show promising results in various object detection scenarios, contributing to the advancement of domain adaptation research.

Author 1: Wenchao Xu
Author 2: Yangxu Wang

Keywords: Deep learning; neural network; domain adaptation; lightweight; regularization techniques

PDF

Paper 52: Improving Brain Tumor MRI Image Classification Prediction based on Fine-tuned MobileNet

Abstract: Brain tumors are a prevalent issue in contemporary society as they impact human health. The location of the tumor in the brain determines the variety of symptoms that may manifest. Some frequent symptoms are cephalalgia, convulsions, visual impairments, nausea, emesis, asthenia, paresthesia, dysphasia, personality alterations, and amnesia. The prognosis for brain cancer differs considerably depending on the cancer type. Nevertheless, brain tumors are amenable to treatment with surgical intervention, chemotherapy, and radiotherapy if the diagnosis is timely. Furthermore, artificial intelligence and machine learning can assist in the detection of brain tumors, with significant implications for the analysis of Magnetic Resonance Imaging (MRI). To accomplish this objective, automated measurement instruments based on MRI processing have been proposed. In this study, we employed the latest developments in deep transfer learning and fine-tuning to identify tumors without many complex steps. We gathered authentic MRI data from 3264 subjects (926 glioma tumors, 937 meningioma tumors, 901 pituitary tumors, and 500 normal). With the MobileNet model from the Keras library, the highest validation accuracy, test accuracy, and F1 score in four-class classification were 97.24%, 97.86%, and 97.85%, respectively. For two-class classification, high accuracy values (~100%) were obtained for most of the models. These outcomes and other performance indicators demonstrate a strong capability to diagnose brain tumors from conventional MRI. The current research developed a supportive machine learning tool that can aid doctors in making an accurate diagnosis with less time and fewer mistakes.

Author 1: Quy Thanh Lu
Author 2: Triet Minh Nguyen
Author 3: Huan Le Lam

Keywords: Brain tumor; fine-tuning; transfer learning; Magnetic Resonance Imaging (MRI); MobileNet

PDF

Paper 53: DDoS Classification using Combined Techniques

Abstract: Nowadays, attackers commonly aim to disrupt network systems. An attacker can generate various types of DDoS attacks simultaneously, including the Smurf attack, ICMP flood, UDP flood, and TCP SYN flood. This DDoS problem motivated the design of a classification technique against DDoS attacks entering a computer network environment. The technique, called the Packet Threshold Algorithm (PTA), is combined with several machine learning methods to classify incoming packets that have been captured and recorded. The combined techniques can differentiate between normal packets and DDoS attacks. All techniques in this research achieved high detection accuracy while mitigating the issue of a high false positive rate. The four techniques examined are PTA-SVM, PTA-NB, PTA-LR, and PTA-KNN. Based on the detection accuracy and false positive rate results for all the techniques involved, PTA-KNN proves the most effective technique for classifying incoming packets as DDoS attacks or normal packets.

Author 1: Mohd Azahari Mohd Yusof
Author 2: Noor Zuraidin Mohd Safar
Author 3: Zubaile Abdullah
Author 4: Firkhan Ali Hamid Ali
Author 5: Khairul Amin Mohamad Sukri
Author 6: Muhamad Hanif Jofri
Author 7: Juliana Mohamed
Author 8: Abdul Halim Omar
Author 9: Ida Aryanie Bahrudin
Author 10: Mohd Hatta Mohamed Ali @ Md Hani

Keywords: DDoS; machine learning; accuracy; false positive rate

PDF
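The PTA details are not given in the abstract; the sketch below illustrates the general two-stage shape of such a pipeline (a rate threshold as the first filter, then a nearest-neighbour vote as the ML stage). The feature layout, threshold, and label names here are hypothetical:

```python
def classify_packet(features, threshold, train, labels):
    """Two-stage sketch of a threshold + KNN (k=1) classifier: the first
    feature is treated as a packet rate; below the threshold the packet
    passes as normal, otherwise a 1-nearest-neighbour vote over labelled
    training vectors decides the class."""
    if features[0] <= threshold:
        return "normal"                # below threshold: pass without ML stage
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(range(len(train)), key=lambda i: dist(features, train[i]))
    return labels[nearest]
```

The threshold stage keeps cheap filtering in the fast path and reserves the ML classifier for suspicious traffic, which is one way such a combination can keep the false positive rate down.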

Paper 54: Association Model of Temperature and Cattle Weight Influencing the Weight Loss of Cattle Due to Stress During Transportation

Abstract: This study aimed to enhance animal welfare in the context of modern agriculture. The Association Rule analysis method using the FP-Growth and Apriori algorithms was employed to identify patterns and factors influencing animal welfare, particularly live cattle weight loss (shrink) due to stress during transportation. Data obtained from several farms and clinical tests were used to develop insights into the relationship between farming practices, data science, and animal welfare. The research stages included data preprocessing, initial analysis, modeling, evaluation and interpretation of results, recommendations and implications, and conclusions. The results indicate that the FP-Growth and Apriori algorithms uncovered hidden patterns in the data, yielding four association rules from FP-Growth and five rules from Apriori. These rules aid in designing recommendations to enhance animal welfare, improve agricultural efficiency, and support the sustainability of the cattle sector. Our findings have significant implications for animal welfare and sustainable farm management.

Author 1: Jajam Haerul Jaman
Author 2: Agus Buono
Author 3: Dewi Apri Astuti
Author 4: Sony Hartono Wijaya
Author 5: Burhanuddin

Keywords: Association rule; animal welfare; cattle management; animal product quality; modern agriculture; recommendations; sustainability

PDF
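Both FP-Growth and Apriori score candidate rules by support and confidence. A self-contained sketch of those two metrics over a list of transactions (the item names below are hypothetical placeholders for the study's temperature/weight attributes):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent over a list
    of transactions (each a set of items), as Apriori/FP-Growth would score it."""
    a = set(antecedent)
    ac = a | set(consequent)                       # items of the whole rule
    n_a = sum(1 for t in transactions if a <= t)   # transactions matching antecedent
    n_ac = sum(1 for t in transactions if ac <= t) # transactions matching the rule
    support = n_ac / len(transactions)
    confidence = n_ac / n_a if n_a else 0.0
    return support, confidence
```

Rules whose support and confidence exceed chosen minimums are kept; the two algorithms differ only in how efficiently they enumerate the frequent itemsets behind these counts.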

Paper 55: Image Caption Generation using Deep Learning For Video Summarization Applications

Abstract: In the area of video summarization applications, automatic image caption synthesis using deep learning is a promising approach. This methodology utilizes the capabilities of neural networks to autonomously produce detailed textual descriptions for significant frames or instances in a video. Through the examination of visual elements, deep learning models possess the capability to discern and classify objects, scenarios, and actions, hence enabling the generation of coherent and useful captions. This paper presents a novel methodology for generating image captions in the context of video summarization applications. The DenseNet201 architecture is used to extract image features, enabling the effective extraction of comprehensive visual information from keyframes in the videos. For text processing, GloVe embeddings, which are pre-trained word vectors that capture semantic associations between words, are employed to efficiently represent textual information. The utilization of these embeddings establishes a fundamental basis for comprehending the contextual variations and semantic significance of words contained within the captions. LSTM models are subsequently utilized to process the GloVe embeddings, facilitating the generation of captions that maintain coherence, context, and readability. The integration of GloVe embeddings with LSTM models in this study facilitates the effective fusion of visual and textual data, leading to the generation of captions that are both informative and contextually relevant for video summarization. The proposed model significantly enhances performance by combining the strengths of convolutional neural networks for image analysis and recurrent neural networks for natural language generation. The experimental results demonstrate the effectiveness of the proposed approach in generating informative captions for video summarization, offering a valuable tool for content understanding, retrieval, and recommendation.

Author 1: Mohammed Inayathulla
Author 2: Karthikeyan C

Keywords: Video summarization; deep learning; image caption synthesis; densenet201; GloVe embeddings; LSTM

PDF

Paper 56: Evolving Adoption of eLearning Tools and Developing Online Courses: A Practical Case Study from Al-Baha University, Saudi Arabia

Abstract: eLearning or online learning has gained acceptance worldwide, particularly after the COVID-19 pandemic. Although the pandemic forced the shift towards this learning mode, there is still a continuous need to improve instructors' cognitive and practical competencies to effectively design and deliver online courses. In this paper, a practical case study from Al-Baha University, a Higher Education Institution (HEI) in Saudi Arabia, is presented, showing the development stages of eLearning at the university and how effective utilization of eLearning tools through a structured methodology, in a short time and with minimum resources, helped to improve the teaching and learning experiences for both instructors and students at the university before the pandemic. Various standards and research techniques have been adopted to develop and assess the methodology and the viability of its implementation in other higher education institutions. The findings show the methodology's effectiveness and how it helped Al-Baha University smoothly adapt to the online shift at the onset of the pandemic. The methodology was presented to, and gained acceptance and a recommendation for application in other HEIs in Saudi Arabia from, the committee of eLearning and distance education deans in Saudi universities in March 2023. It also received the Anthology Middle East award for community engagement in November 2023.

Author 1: Hassan Alghamdi
Author 2: Naif Alzahrani

Keywords: eLearning; ICT competencies; Higher Education Institutions (HEIs); Learning Management System (LMS)

PDF

Paper 57: Implementation of Machine Learning Classification Algorithm Based on Ensemble Learning for Detection of Vegetable Crops Disease

Abstract: In India, plant diseases pose a significant threat to food security, requiring precise detection and management protocols to minimize potential damage. This research introduces an innovative ensemble machine learning model for precise disease detection in tomato, potato, and bell pepper crops. Utilizing transfer learning, pre-trained models such as MobileNet and Inception are fine-tuned on a dataset of 10,403 images of diseased and healthy plant leaves. The models are combined into a diverse ensemble, enhancing the precision and robustness of disease detection. The proposed ensemble models achieve an impressive accuracy rate of 98.95%, demonstrating their superiority over individual models in reducing misclassification and false positives. This advancement in plant disease detection provides valuable support to farmers and agricultural experts by enabling early disease identification and intervention.

Author 1: Pradeep Jha
Author 2: Deepak Dembla
Author 3: Widhi Dubey

Keywords: DNN; transfer learning; crop; ensemble model; deep stacking and stacking approach; image pre-processing; tomato; bell pepper; potato; disease
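
The ensemble step described above, combining the predictions of several fine-tuned base models, can be illustrated with a minimal soft-voting sketch in plain Python; the per-class probabilities below are hypothetical stand-ins for MobileNet- and Inception-style outputs, and the class names are assumptions.

```python
# Minimal soft-voting ensemble: average per-class probabilities from
# several (hypothetical) base models and pick the arg-max class.
def soft_vote(prob_lists):
    """prob_lists: list of per-model probability vectors over the same classes."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Stand-ins for two fine-tuned base models scoring one leaf image;
# classes: 0 = healthy, 1 = early blight, 2 = late blight (invented labels)
mobilenet_like = [0.10, 0.55, 0.35]
inception_like = [0.05, 0.40, 0.55]
label, avg = soft_vote([mobilenet_like, inception_like])  # label 1 wins on average
```

Averaging smooths out individual-model mistakes, which is one way an ensemble reduces misclassification relative to any single network.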

PDF

Paper 58: Revolutionizing Software Project Development: A CNN-LSTM Hybrid Model for Effective Defect Prediction

Abstract: Within the domain of software development, the practice of software defect prediction (SDP) holds a central and critical position, significantly contributing to the efficiency and ultimate success of projects. It embodies a proactive approach that harnesses data-driven techniques and analytics to preemptively identify potential defects or vulnerabilities within software systems, thereby enhancing overall quality and reliability while significantly impacting project timelines and resource allocation. The efficiency of software development projects hinges on their ability to adhere to deadlines and budget constraints and to deliver high-quality products, and SDP contributes to these objectives in various ways. This paper introduces a novel SDP model that harnesses the combined capabilities of Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) units. CNNs excel at extracting features from structured data, enabling them to discern patterns and dependencies within code repositories and change histories. LSTMs, conversely, excel in handling sequential data, which is pivotal for capturing the temporal aspects of software development and tracking the evolution of defects over time. The outcomes of the proposed CNN-LSTM hybrid model showcase its superior predictive performance, and simulation results affirm the substantial potential of this model to bolster the efficiency and reliability of software development processes. As technology advances and data-driven methodologies become increasingly prevalent in the software industry, the integration of such hybrid models presents a promising avenue for continually elevating software quality and ensuring the success of software projects. In summary, the utilization of this innovative SDP model offers a transformative approach to efficient software development, positioning it as a vital tool for project success and quality assurance.

Author 1: Selvin Jose G
Author 2: J Charles

Keywords: Data driven software development; proactive defect identification; software quality; predictive analytics; software defect prediction; artificial intelligence; long short term memory

PDF

Paper 59: US Road Sign Detection and Visibility Estimation using Artificial Intelligence Techniques

Abstract: This paper presents a fully automated system for detecting road signs in the United States and assessing their visibility during daytime from the driver's perspective, using images captured by an in-vehicle camera. The system deploys YOLOv8 to build a multi-label detection model and then calculates various readability and detectability factors, including the simplicity of the surroundings, potential obstructions, and the angle at which the road sign is positioned, to determine the overall visibility of the sign. The proposed system can be integrated into Driver Assistance Systems (DAS) to manage the information delivered to drivers, as an excess of information could potentially distract them. Road signs are categorized based on their visibility levels, allowing Driver Assistance Systems to caution drivers about signs that may have lower visibility but are of significant importance. The system comprises four main stages: 1) identifying road signs using YOLOv8; 2) segmenting the surrounding areas; 3) measuring visibility parameters; and 4) determining visibility levels through a fuzzy logic inference system. This paper thus introduces a visibility estimation system for road signs specifically tailored to the United States. Experimental results showcase the system's effectiveness: the visibility levels generated by the proposed system were subjectively compared to decisions made by human experts, revealing substantial agreement between the two approaches.

Author 1: Jafar AbuKhait

Keywords: Road sign detection; YOLOv8; driver assistance system; fuzzy logic; detectability; visibility estimation
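
Stage 4, mapping measured factors to a visibility level by fuzzy inference, can be sketched with a toy Mamdani-style rule base in plain Python; the two inputs, membership functions, and rules below are illustrative assumptions, not the paper's actual system.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def visibility(clutter, angle):
    """Toy fuzzy visibility from two hypothetical inputs, both in [0, 1]:
    background clutter and viewing angle (0 = head-on)."""
    low_clutter = tri(clutter, -0.5, 0.0, 0.6)
    high_clutter = tri(clutter, 0.4, 1.0, 1.5)
    small_angle = tri(angle, -0.5, 0.0, 0.6)
    large_angle = tri(angle, 0.4, 1.0, 1.5)
    # Rules: low clutter AND small angle -> high visibility (1.0);
    #        high clutter OR large angle -> low visibility (0.0).
    w_high = min(low_clutter, small_angle)
    w_low = max(high_clutter, large_angle)
    if w_high + w_low == 0:
        return 0.5  # no rule fires: neutral visibility
    # Weighted-average defuzzification over the two rule outputs
    return (w_high * 1.0 + w_low * 0.0) / (w_high + w_low)
```

A clean, head-on sign scores high; a cluttered, sharply angled sign scores low, which is the graded judgement a DAS could use to decide which signs to call out.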

PDF

Paper 60: Dual-Branch Grouping Multiscale Residual Embedding U-Net and Cross-Attention Fusion Networks for Hyperspectral Image Classification

Abstract: Due to the high cost and time-consuming nature of acquiring labelled samples of hyperspectral data, classifying hyperspectral images with a small number of training samples has been an urgent problem. In recent years, U-Net has shown that high-precision models can be trained with a small amount of data, demonstrating good performance on small samples. To this end, this paper proposes a dual-branch grouping multiscale residual embedding U-Net and cross-attention fusion network (DGMRU_CAF) for hyperspectral image classification. The network contains two branches, spatial GMRU and spectral GMRU, which reduce the interference between the two types of features, spatial and spectral. Each branch introduces U-Net and designs a grouped multiscale residual block (GMR), which is used in spatial GMRUs to compensate for the loss of feature information caused during down-sampling, and in spectral GMRUs to address redundancy in the spectral dimensions. To fuse the spatial and spectral features of the two branches effectively, a spatial-spectral cross-attention fusion (SSCAF) module is designed to enable their interactive fusion. Experimental results on the WHU-Hi-HanChuan and Pavia Center datasets show the superiority of the proposed method.

Author 1: Ning Ouyang
Author 2: Chenyu Huang
Author 3: Leping Lin

Keywords: U-Net; multiscale; cross-attention; hyperspectral image classification

PDF

Paper 61: FPGA-based Implementation of a Resource-Efficient UNET Model for Brain Tumour Segmentation

Abstract: In this study, an optimized UNET model is used for FPGA-based inference in the context of brain tumour segmentation using the BraTS dataset. The presented model features reduced depth and fewer filters, tailored to enhance efficiency on FPGA hardware. The implementation leverages High-Level Synthesis for Machine Learning (HLS4ML) to optimize and convert a Keras-based UNET model to a Hardware Description Language (HDL) design for the Kintex Ultrascale (xcku085-flva1517-3-e) FPGA. A resource strategy, First-In-First-Out (FIFO) depth optimization, and precision adjustment were employed to optimize FPGA resource utilization. The resource strategy is demonstrated to be effective, with resource utilization reaching a saturation point at a reuse factor of 1000. Following FIFO optimization, significant reductions are observed, including a 55 percent decrease in Block RAM (BRAM) usage, a 43 percent reduction in Flip-Flops (FF), and a 49 percent reduction in Look-Up Tables (LUT). In C/RTL co-simulation, the proposed FPGA-based UNET model achieves an Intersection over Union (IoU) score of 74 percent, demonstrating segmentation accuracy comparable to the original Keras model. These findings underscore the viability of the optimized UNET model for efficient brain tumour segmentation on FPGA platforms.

Author 1: Modise Kagiso Neiso
Author 2: Nicasio Maguu Muchuka
Author 3: Shadrack Maina Mambo

Keywords: UNET; field programmable gate array; high-level synthesis for machine learning; brain tumour segmentation
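
The IoU score used to compare the FPGA inference against the Keras original is simply intersection over union of the predicted and reference masks; a minimal sketch over flat binary masks (toy data, not BraTS):

```python
def iou(pred, target):
    """Intersection-over-Union for two flat binary masks (lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0  # both masks empty: perfect agreement

pred   = [0, 1, 1, 1, 0, 0, 1, 0]
target = [0, 1, 1, 0, 0, 1, 1, 0]
score = iou(pred, target)  # 3 overlapping pixels / 5 in the union = 0.6
```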

PDF

Paper 62: Enhancing Diabetes Management: A Hybrid Adaptive Machine Learning Approach for Intelligent Patient Monitoring in e-Health Systems

Abstract: The goal of the present research is to better understand the need for accurate and ongoing monitoring in diabetes, a complicated chronic metabolic disease. With the integration of an intelligent system utilising a hybrid adaptive machine learning classifier, the suggested method presents a novel approach to tracking individuals with diabetes. The system uses cutting-edge technologies like intelligent tracking and machine learning (ML) to improve the efficacy and accuracy of diabetes patient monitoring. The architectural basis is formed by integrating smart gadgets, sensors, and telephones in key locations to gather full-body measurement data that is essential for diabetic health. Using a dataset that includes comprehensive data on the patients' characteristics and glucose levels, this investigation looks at sixty-two diabetic patients who were followed up daily for sixty-seven days. The study presents a hybrid architecture that combines a Convolutional Neural Network (CNN) with a Support Vector Machine (SVM) in order to optimise system performance. To train and optimise the hybrid model, Grey Wolf Optimisation (GWO) is utilised, drawing inspiration from collaborative hunting in wolf packs. Thorough assessment, utilising standardised performance criteria including recall, F1-score, accuracy, precision, and the Receiver Operating Characteristic (ROC) curve, methodically verifies the suggested solution. The results reveal a remarkable 99.6% accuracy rate, which shows a considerable increase throughout training epochs. The CNN-SVM hybrid model achieves a classification accuracy advantage of around 4.15% over traditional techniques such as SVM, Decision Trees, and Sequential Minimal Optimisation. Python software is used to implement the suggested CNN-SVM technique.
This research advances e-health systems by presenting a novel framework for effective diabetic patient monitoring that integrates machine learning, intelligent tracking, and optimisation techniques. The results point to a great deal of promise for the proposed method in the field of medicine, especially in the accurate diagnosis and follow-up of diabetic patients, which would provide opportunities for tailored and adaptable patient care.

Author 1: Sushil Dohare
Author 2: Deeba K
Author 3: Laxmi Pamulaparthy
Author 4: Shokhjakhon Abdufattokhov
Author 5: Janjhyam Venkata Naga Ramesh
Author 6: Yousef A.Baker El-Ebiary
Author 7: E. Thenmozhi

Keywords: Diabetes; machine learning; convolutional neural network; support vector machine; grey wolf optimization; e-health systems
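
Grey Wolf Optimisation steers a pack of candidate solutions toward the three current best wolves (alpha, beta, delta); a toy pure-Python sketch minimizing a sphere function illustrates the mechanism. The parameter values are illustrative, and this is not the paper's CNN-SVM training setup.

```python
import random

def gwo_minimize(f, dim=2, wolves=12, iters=80, lo=-5.0, hi=5.0, seed=1):
    """Toy Grey Wolf Optimizer minimizing f over a box; illustrative only."""
    rng = random.Random(seed)
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    best = min(pack, key=f)  # keep the best position ever seen
    for t in range(iters):
        pack.sort(key=f)
        if f(pack[0]) < f(best):
            best = list(pack[0])
        alpha, beta, delta = pack[0], pack[1], pack[2]  # three leading wolves
        a = 2.0 * (1 - t / iters)  # exploration coefficient decays to 0
        for i in range(wolves):
            new = []
            for d in range(dim):
                est = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    est += leader[d] - A * abs(C * leader[d] - pack[i][d])
                new.append(min(hi, max(lo, est / 3.0)))  # average of leader pulls
            pack[i] = new
    return min(pack + [best], key=f)

best = gwo_minimize(lambda x: sum(v * v for v in x))
```

In the paper's setting, a wolf's position would encode the hybrid model's hyperparameters and `f` would be a validation loss rather than the sphere function.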

PDF

Paper 63: Feature Selection Model Development on Near-Infrared Spectroscopy Data

Abstract: This study aims to develop a feature selection model for Near-Infrared Spectroscopy (NIRS) data. The object used is beef with six quality parameters: color, drip loss, pH, storage time, Total Plate Colony (TPC), and water moisture. The prediction model is a Random Forest Regressor (RFR) with default parameters. The feature selection model works by mapping the spectroscopic data into line form. The collection of lines is reduced to a single line by taking the mean value. Next, a line simplification method based on angle elimination is applied, proceeding from the smallest angle to the largest. Each iteration eliminates one vertex, removing one column of data from the corresponding dataset. The predicted values, expressed as R2, are then collected, and the highest value is taken as the best feature selection configuration. RFR prediction results with R2 values are as follows: color R2=0.597, drip loss R2=0.891, pH R2=0.797, storage time R2=0.889, TPC R2=0.721, and water moisture R2=0.540. After applying the feature selection model, the R2 values for all parameters increased: color R2=0.877, drip loss R2=0.943, pH R2=0.904, storage time R2=0.917, TPC R2=0.951, and water moisture R2=0.893. Based on the increases in the R2 values of the six parameters, an average improvement in prediction accuracy of 17.49% is obtained. Thus, the feature selection method based on line simplification with angle elimination can provide very good results.

Author 1: Ridwan Raafi’udin
Author 2: Y. Aris Purwanto
Author 3: Imas Sukaesih Sitanggang
Author 4: Dewi Apri Astuti

Keywords: Beef quality prediction; feature selection; machine learning; Random Forest Regressor
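
The angle-elimination step can be sketched directly: repeatedly drop the interior vertex with the smallest turning angle, since a near-straight vertex carries the least shape information. The short line below is a hypothetical mean spectrum, not NIRS data.

```python
import math

def turn_angle(p, q, r):
    """Absolute turning angle at q formed by segments p->q and q->r (radians)."""
    a1 = math.atan2(q[1] - p[1], q[0] - p[0])
    a2 = math.atan2(r[1] - q[1], r[0] - q[0])
    d = abs(a2 - a1)
    return min(d, 2 * math.pi - d)

def simplify(points, keep):
    """Repeatedly drop the interior vertex with the smallest turning angle
    until only `keep` points remain (endpoints always survive)."""
    pts = list(points)
    while len(pts) > keep:
        i = min(range(1, len(pts) - 1),
                key=lambda k: turn_angle(pts[k - 1], pts[k], pts[k + 1]))
        del pts[i]
    return pts

# Hypothetical mean-spectrum line: x = wavelength index, y = reflectance
line = [(0, 0.0), (1, 0.02), (2, 0.5), (3, 0.51), (4, 0.1), (5, 0.09)]
reduced = simplify(line, 4)  # each dropped vertex = one dropped feature column
```

Each eliminated vertex corresponds to removing one wavelength column from the dataset, and the R2 of the downstream regressor decides which reduced configuration to keep.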

PDF

Paper 64: Research on Spatial Accessibility Measurement Algorithm for Sanya Tourist Attractions Based on Seasonal Factor Adjustment Analysis

Abstract: Seasonal factors change tourists' demand for scenic spots across the year, which affects the traffic network and road conditions and, in turn, the convenience and efficiency with which tourists reach those spots. Based on seasonal factor adjustment analysis, this study proposes an algorithm for measuring the spatial accessibility of Sanya tourist attractions. Principal component analysis is used to denoise the attraction data for different seasons, and independent component analysis is used to extract features from the denoised data. On this basis, the spatial accessibility index of the attractions is calculated by combining their spatial information with GIS technology, the spatial accessibility is analyzed, and a spatial accessibility measurement model is constructed to realize the measurement. The experimental results show that the proposed method is effective: it improves the accuracy of accessibility measurement and shortens the measurement time. The aim is to help decision makers plan and optimize tourist routes and improve the efficiency and convenience of tourists arriving at their destinations.

Author 1: Xiaodong Mao
Author 2: Yan Zhuang

Keywords: Seasonal factors; adjustment analysis; Sanya Tourist Attractions; spatial accessibility measure; GIS technology
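
A common GIS formulation of such an index is the gravity model, in which nearby and attractive spots contribute more to an origin's accessibility. The sketch below uses it only to illustrate how seasonal congestion (longer travel times) lowers the index; the formulation and numbers are invented for illustration, not taken from the paper.

```python
def accessibility(travel_minutes, attractiveness, beta=2.0):
    """Gravity-style accessibility index for one origin: sum of each spot's
    attractiveness discounted by travel time to the power beta."""
    return sum(a / (t ** beta) for t, a in zip(travel_minutes, attractiveness))

# Same three spots, but peak-season congestion doubles every travel time
off_peak = accessibility([10.0, 20.0, 30.0], [5.0, 8.0, 3.0])
peak = accessibility([20.0, 40.0, 60.0], [5.0, 8.0, 3.0])  # index drops fourfold
```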

PDF

Paper 65: Decoding the Narrative: Patterns and Dynamics in Monkeypox Scholarly Publications

Abstract: This study conducts a bibliometric analysis of monkeypox research to uncover trends, influential publishers, and key research topics. A dataset of Google Scholar-indexed articles was analyzed using bibliometric methods and tools such as Publish or Perish (PoP), VOSviewer, and Bibliometrix. The study reveals a growing research interest in monkeypox, with a notable increase in publications over the past decade. The Wiley Online Library emerged as the leading publisher, while highly cited articles covered various aspects of the disease. Cluster analysis identified key research topics, including clinical features, zoonotic transmission, and outbreak patterns. Network visualization and bigram analysis showcased relationships between authors, keywords, and publishers, with "monkeypox" being the most frequent keyword. By visualizing topic trends over time, the study identified emerging areas of investigation. The findings contribute to a comprehensive understanding of monkeypox research, aiding in identifying research gaps and guiding future studies. This research highlights the relevance of bibliometric analysis in health and information sciences. By uncovering trends, influential publishers, and key topics in monkeypox research, this study informs prevention, vaccination, and treatment strategies for mitigating the impact of monkeypox on public health.

Author 1: Muhammad Khahfi Zuhanda
Author 2: Desniarti
Author 3: Anil Hakim Syofra
Author 4: Andre Hasudungan Lubis
Author 5: Prana Ugiana Gio
Author 6: Habib Satria
Author 7: Rahmad Syah

Keywords: Bibliometrics; monkeypox virus; research trends; publication patterns; research impact
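
The bigram analysis step mentioned above can be sketched as a simple count of adjacent word pairs; the toy titles below are invented, not drawn from the Google Scholar dataset.

```python
from collections import Counter

def bigrams(titles):
    """Count adjacent word pairs across a list of titles, as in the
    bigram analysis step of a bibliometric study."""
    counts = Counter()
    for title in titles:
        words = title.lower().split()
        counts.update(zip(words, words[1:]))
    return counts

titles = [
    "monkeypox outbreak patterns",
    "zoonotic transmission of monkeypox",
    "monkeypox outbreak response",
]
top = bigrams(titles).most_common(1)[0]  # most frequent adjacent pair
```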

PDF

Paper 66: Healthcare Intrusion Detection using Hybrid Correlation-based Feature Selection-Bat Optimization Algorithm with Convolutional Neural Network

Abstract: Cloud computing is popular among users in various areas such as healthcare, banking, and education due to its low-cost services alongside increased reliability and efficiency. However, security is a significant problem in cloud-based systems because cloud services are accessed via the Internet by a variety of users. Therefore, the patient's health information needs to be kept confidential, secure, and accurate; moreover, any change in actual patient data potentially results in errors during diagnosis and treatment. In this research, the hybrid Correlation-based Feature Selection-Bat Optimization Algorithm (HCFS-BOA) based on the Convolutional Neural Network (CNN) model is proposed for intrusion detection to secure the entire network in the healthcare system. Initially, the data is obtained from the CIC-IDS2017 and NSL-KDD datasets, after which min-max normalization is performed to normalize the acquired data. HCFS-BOA is employed in feature selection to identify features that not only have significant correlations with the target variable but also contribute to the optimal performance of intrusion detection in the healthcare system. Finally, CNN classification is performed to identify and classify intrusions accurately and effectively in the healthcare system. The existing methods, namely SafetyMed, the Hybrid Intrusion Detection System (HIDS), and the Blockchain-orchestrated Deep learning method for Secure Data Transmission in IoT-enabled healthcare systems (BDSDT), are employed to evaluate the efficacy of the HCFS-BOA-based CNN. The proposed HCFS-BOA-based CNN achieves a better accuracy of 99.45% when compared with these existing methods.

Author 1: H. Kanakadurga Bella
Author 2: S. Vasundra

Keywords: Convolutional neural network; deep learning; intrusion detection system; healthcare; security
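
The min-max normalization step rescales every feature column to [0, 1] before feature selection; a minimal column-wise sketch (toy numbers, not CIC-IDS2017 or NSL-KDD records):

```python
def min_max_normalize(rows):
    """Column-wise min-max scaling of a list of equal-length numeric rows."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0  # constant column -> 0.0
             for v, l, h in zip(row, lo, hi)]
            for row in rows]

scaled = min_max_normalize([[2.0, 10.0], [4.0, 30.0], [6.0, 20.0]])
```

Scaling keeps features with large raw ranges (e.g. byte counts) from dominating features with small ranges (e.g. flag counts) during correlation-based selection and CNN training.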

PDF

Paper 67: Context-Aware Transfer Learning Approach to Detect Informative Social Media Content for Disaster Management

Abstract: In the wake of disasters, timely access to accurate information about the on-the-ground situation is crucial for effective disaster response. In this regard, social media (SM) platforms like Twitter have emerged as an invaluable source of real-time user-generated data during such events. However, accurately detecting informative content in large amounts of unstructured user-generated data under such time-sensitive circumstances remains a challenging task. Existing methods predominantly rely on non-contextual language models, which fail to accurately capture the intricate context and linguistic nuances within disaster-related tweets. While some recent studies have explored context-aware methods, they are based on computationally demanding transformer architectures. To strike a balance between effectiveness and computational efficiency, this study introduces a new context-aware transfer learning approach based on DistilBERT for the accurate detection of disaster-related informative content on SM. Our novel approach integrates DistilBERT with a Feed-Forward Neural Network (FFNN) and involves multistage finetuning of the model on balanced benchmark real-world disaster datasets. The integration of DistilBERT with an FFNN provides a simple and computationally efficient architecture, while the multistage finetuning facilitates a deeper adaptation of the model to the disaster domain, resulting in improved performance. Our proposed model delivers significant improvements over state-of-the-art (SOTA) methods. This suggests that our model not only addresses the computational challenges but also enhances contextual understanding, making it a promising advancement for accurate and efficient disaster-related informative content detection on SM platforms.

Author 1: Saima Saleem
Author 2: Monica Mehrotra

Keywords: Disaster management; Twitter; DistilBERT; deep learning; multistage finetuning; transfer learning
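
The classifier head placed on top of the pooled DistilBERT embedding can be sketched as a tiny feed-forward pass. The weights below are hypothetical, not trained, and the two-dimensional "embedding" is a stand-in for DistilBERT's 768-dimensional output.

```python
import math

def ffnn_head(embedding, w1, b1, w2, b2):
    """A tiny feed-forward head of the kind placed on a pooled transformer
    embedding: one ReLU layer, then a sigmoid 'informative' score."""
    hidden = [max(0.0, sum(e * w for e, w in zip(embedding, col)) + b)
              for col, b in zip(w1, b1)]
    logit = sum(h * w for h, w in zip(hidden, w2)) + b2
    return 1.0 / (1.0 + math.exp(-logit))  # P(tweet is informative)

score = ffnn_head([0.5, -0.2],              # stand-in pooled embedding
                  w1=[[1.0, 0.0], [0.0, 1.0]], b1=[0.0, 0.0],
                  w2=[2.0, 2.0], b2=-0.5)
```

During multistage finetuning, both these head weights and the underlying DistilBERT layers would be updated on successively more domain-specific data.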

PDF

Paper 68: Practical Application of AI and Large Language Models in Software Engineering Education

Abstract: Subjects with previously limited application in the software industry, like AI, have recently received a tremendous boost due to the development and rising prominence of LLMs. LLM-powered software has a wide array of practical applications that must be taught to Software Engineering students so that they can remain relevant in the field. The speed of technological change is extremely fast, and university curricula must incorporate those changes. Renewing and creating new methodologies and workshops is a difficult task to complete successfully in such a dynamic environment full of cutting-edge technologies. This paper aims to showcase our approach to using LLM-powered software for AI-generated images, like Stable Diffusion, and code generation tools, like ChatGPT, in workshops for two relevant subjects: Analysis of Software Requirements and Specifications, and Artificial Intelligence. A comparison between the different available LLMs that generate images is made, and the choice between them is explained. Student feedback is shown, and a generally positive and motivational impact is noted during and after the workshops. A brief introduction covering the subjects where AI is applied is given, and the proposed solutions for several uses of AI in higher education, more specifically software engineering, are presented. Several workshops have been created and included in the curriculum, and the results of their application have been noted and analyzed. Further propositions for development based on the gained experience, feedback, and retrieved data are made, and conclusions are drawn on the application of AI in higher education and the different ways to utilize such tools.

Author 1: Vasil Kozov
Author 2: Galina Ivanova
Author 3: Desislava Atanasova

Keywords: Application of AI-powered software; AI generated images; software engineering; stable diffusion; higher education

PDF

Paper 69: A Novel Approach to Data Clustering based on Self-Adaptive Bacteria Foraging Optimization

Abstract: Data clustering reduces the number of data objects by grouping similar data objects together. In this process, data are divided into valuable, meaningful groups (clusters) without any prior information. This manuscript presents a new clustering algorithm based on an adaptive strategy known as Self-Adaptive Bacterial Foraging Optimization (SABFO). It is an optimization strategy for clustering problems in which a colony of bacteria forages and converges to definite locations, taken as the final cluster centers, by minimizing a fitness function. The quality of this method is assessed on several well-known benchmark datasets. In this paper, the authors compare the proposed technique with some well-known advanced clustering approaches: the k-means algorithm, the Particle Swarm Optimization algorithm, and the Fitness-Based Adaptive Differential Evolution (FBADE) scheme. The experimental findings demonstrate the usefulness of the proposed algorithm as a clustering method that can operate on datasets with different densities and cluster sizes.

Author 1: Tanmoy Singha
Author 2: Rudra Sankar Dhar
Author 3: Joydeep Dutta
Author 4: Arindam Biswas

Keywords: Data clustering; Self-Adaptive Bacterial Foraging Optimization (SABFO); Particle Swarm Optimization (PSO); FBADE scheme; k-means algorithm; classical BFO
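
The fitness function such bacteria minimize can be sketched as the usual within-cluster scatter: each bacterium encodes a candidate set of cluster centers, scored by the summed squared distance of every point to its nearest center. The objective below is a standard clustering criterion used for illustration, and the points are toy data, not the benchmark datasets.

```python
def clustering_fitness(centers, points):
    """Sum of squared distances from each point to its nearest center:
    the score a foraging bacterium (candidate center set) would minimize."""
    total = 0.0
    for p in points:
        total += min(sum((pi - ci) ** 2 for pi, ci in zip(p, c))
                     for c in centers)
    return total

points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
good = clustering_fitness([(0.05, 0.0), (5.05, 5.0)], points)  # centers on clusters
bad = clustering_fitness([(0.0, 0.0), (0.0, 0.0)], points)     # both centers far off
```

Bacteria whose center sets yield lower fitness survive and attract the colony, so the swarm converges toward center placements like `good` rather than `bad`.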

PDF

Paper 70: Traffic Flow Prediction in Urban Networks: Integrating Sequential Neural Network Architectures

Abstract: The rapid growth of urban areas has significantly compounded traffic challenges, amplifying concerns about congestion and the need for efficient traffic management. Accurate short-term traffic flow prediction remains important for strategic infrastructure planning within these expanding urban networks. This study explores a Transformer-based model designed for traffic flow prediction, conducting a comprehensive comparison with established models such as Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM), Bidirectional Gated Recurrent Unit (BiGRU), and Time-Delay Neural Network (TDNN). Our approach integrates traditional time series values with derived time-related features, enhancing the model's predictive capabilities. The aim is to effectively capture temporal dependencies within operational data. Despite the effectiveness of existing models, internal complexities persist due to diverse road conditions that influence traffic dynamics. The proposed Transformer model consistently demonstrates competitive performance and offers adaptability when learning from longer time spans. However, the simpler BiLSTM model proved to be the most effective when applied to the utilized data.

Author 1: Eva Lieskovska
Author 2: Maros Jakubec
Author 3: Pavol Kudela

Keywords: Traffic flow; short-term prediction; machine learning; transformer
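
The derived time-related features mentioned above can be encoded cyclically so that, for example, 23:00 and midnight stay close in feature space; the particular sin/cos encoding below is an assumption for illustration, not necessarily the paper's exact feature set.

```python
import math

def time_features(hour, weekday):
    """Encode hour-of-day and day-of-week as points on the unit circle."""
    return [
        math.sin(2 * math.pi * hour / 24), math.cos(2 * math.pi * hour / 24),
        math.sin(2 * math.pi * weekday / 7), math.cos(2 * math.pi * weekday / 7),
    ]

def dist(a, b):
    return math.dist(a, b)

# 23:00 and 00:00 are adjacent on the circle, far from 12:00,
# unlike raw hour values where 23 and 0 look maximally distant.
near = dist(time_features(23, 0), time_features(0, 0))
far = dist(time_features(12, 0), time_features(0, 0))
```

Such features let any of the compared models (LSTM, BiLSTM, BiGRU, TDNN, or the Transformer) exploit daily and weekly periodicity without learning the wrap-around discontinuity from scratch.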

PDF

Paper 71: Experience Replay Optimization via ESMM for Stable Deep Reinforcement Learning

Abstract: The memorization and reuse of experience, popularly known as experience replay (ER), has improved the performance of off-policy deep reinforcement learning (DRL) algorithms such as deep Q-networks (DQN) and deep deterministic policy gradients (DDPG). Despite its success, ER faces the challenges of noisy transitions, large memory sizes, and unstable returns. Researchers have introduced replay mechanisms focusing on experience selection strategies to address these issues. However, the choice of experience retention strategy has a significant influence on the selection strategy. Experience Replay Optimization (ERO) is a novel reinforcement learning algorithm that uses a deep replay policy for experience selection. However, ERO relies on the naïve first-in-first-out (FIFO) retention strategy, which seeks to manage replay memory by constantly retaining recent experiences irrespective of their relevance to the agent’s learning. FIFO sequentially overwrites the oldest experience with a new one when the replay memory is full. To improve the retention strategy of ERO, we propose an experience replay optimization with enhanced sequential memory management (ERO-ESMM). ERO-ESMM uses an improved sequential retention strategy to manage the replay memory efficiently and stabilize the performance of the DRL agent. The efficacy of the ESMM strategy is evaluated together with five additional retention strategies across four distinct OpenAI environments. The experimental results indicate that ESMM performs better than the other five fundamental retention strategies.

Author 1: Richard Sakyi Osei
Author 2: Daphne Lopez

Keywords: Experience replay; experience replay optimization; experience retention strategy; experience selection strategy; replay memory management
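
The baseline FIFO retention strategy that ERO-ESMM improves upon is easy to sketch: a bounded buffer that silently evicts the oldest transition, however useful it might still be. The sketch shows only this baseline; the ESMM retention logic itself is the paper's contribution and is not reproduced here.

```python
import random
from collections import deque

class FIFOReplay:
    """Baseline FIFO retention: when the memory is full, the oldest
    transition is overwritten regardless of its relevance to learning,
    which is the weakness a smarter sequential retention strategy targets."""
    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)  # deque drops the oldest item

    def store(self, transition):
        self.memory.append(transition)

    def sample(self, batch_size, rng=random):
        return rng.sample(list(self.memory), min(batch_size, len(self.memory)))

buf = FIFOReplay(capacity=3)
for t in ["t1", "t2", "t3", "t4"]:  # "t1" is evicted when "t4" arrives
    buf.store(t)
```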

PDF

Paper 72: Real Time FPGA Implementation of a High Speed for Video Encryption and Decryption System with High Level Synthesis Tools

Abstract: The development of communication networks has made information security more important than ever for both transmission and storage. Since the majority of network traffic involves images, image security is becoming a difficult challenge. In order to provide real-time image encryption and decryption, this study proposes a well-optimized FPGA implementation of a video cryptosystem based on high-level synthesis. The MATLAB HDL Coder and Vivado tools from Xilinx are used in the design, implementation, and validation of the algorithm on the Xilinx Zynq FPGA platform. The hardware architecture is well suited to low resource consumption and pipelined processing, and the proposed hardware approach is widely applicable to real-time secret image encryption and decryption. This study presents an implementation of the encryption-decryption system that is both highly efficient and area-optimized. A unique high-level synthesis (HLS) design technique based on application-specific bit widths for intermediate data nodes was used to realize the proposed implementation. For HLS, MATLAB HDL Coder was used to generate the register transfer level (RTL) design. Using Vivado software, the RTL design was implemented on the Xilinx ZedBoard, and its functioning was tested in real time using an input video stream. The results produced are faster and more area-efficient (the target FPGA uses fewer gates than before) than those of earlier solutions for the same target board.

Author 1: Ahmed Alhomoud

Keywords: Security; encryption; decryption; AES; HDL coder; high level synthesis; FPGA; Zynq7000
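
The frame-by-frame encrypt/decrypt round trip the hardware pipeline performs can be illustrated with a toy XOR stream cipher; this stands in for the AES core only to show the symmetry of the operation, is NOT secure, and is not part of the paper's design.

```python
def xor_stream(data, key):
    """Toy stream cipher: XOR each byte with a repeating key. Applying the
    same keystream twice restores the original data, mirroring the
    encrypt-then-decrypt round trip tested on the FPGA (NOT secure)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

frame = bytes(range(16))          # one hypothetical 16-byte pixel block
key = b"\x3a\x5c\x7e\x19"
cipher = xor_stream(frame, key)
plain = xor_stream(cipher, key)   # same keystream decrypts the block
```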

PDF

Paper 73: A Method to Increase the Analysis Accuracy of Stock Market Valuation: A Case Study of the Nasdaq Index

Abstract: For a significant period, conventional methodologies have been employed to assess fundamental and technical aspects in forecasting and analyzing stock market performance. Machine learning has enhanced the precision and availability of stock market predictions, and various machine learning methods have been utilized for this purpose. This study aims to introduce a novel, optimized machine-learning approach for financial market analysis. The present work presents a unique method for improving the accuracy of stock price forecasting by incorporating support vector regression with the slime mould algorithm. Two further optimization algorithms, Biogeography-Based Optimization and the Gray Wolf Optimizer, were employed to enhance the prediction accuracy and the convergence speed of the network. The proposed model's effectiveness in predicting stock prices was assessed using Nasdaq index data extending from January 1, 2015, to June 29, 2023. The results indicated substantial improvements in accuracy for the proposed model compared to other models, with an R-squared value of 0.991, a root mean square error of 149.248, a mean absolute percentage error of 0.930, and a mean absolute error of 116.260. Furthermore, the integration of the proposed model not only enhances prediction accuracy but also increases the model's adaptability to dynamic market conditions.

Author 1: Haixia Niu

Keywords: Machine learning; Nasdaq index; support vector regression; gray wolf optimizer; slime mould algorithm

PDF

Paper 74: Presenting an Optimized Hybrid Model for Stock Price Prediction

Abstract: In the finance sector, stock price forecasting is crucial for traders and investors. This study presents a detailed comparison and analysis of various machine learning models for stock price forecasting, using historical stock data and an array of technical indicators. The focus is on enhancing the Histogram-Based Gradient Boosting Regression (HGBR) method for predicting the Nasdaq stock index. Optimization techniques were applied, namely the genetic algorithm, biogeography-based optimization, and the grasshopper optimization algorithm, of which the grasshopper optimization algorithm showed the most promising results. The optimized models, GA-HGBR, BBO-HGBR, and GOA-HGBR, achieved significant improvements, with coefficient of determination values of 0.96, 0.98, and 0.99, respectively, underscoring their substantial advance over the baseline HGBR model. Mean Absolute Error, Root Mean Square Error, Mean Absolute Percentage Error, and the Coefficient of Determination were employed to assess model performance.
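For intuition about the histogram idea behind HGBR (split thresholds restricted to histogram bin edges), here is a minimal gradient-boosting sketch with decision stumps on synthetic one-dimensional data. The bin count, learning rate, and data are assumptions for illustration only, not the paper's configuration.

```python
import numpy as np

def fit_stump(x, residual, n_bins=16):
    """Best threshold split on one feature, minimizing squared error.
    Candidate thresholds come from histogram bin edges -- the core trick
    of histogram-based gradient boosting."""
    best_err, best = np.inf, None
    for t in np.histogram_bin_edges(x, bins=n_bins)[1:-1]:
        left = x <= t
        if left.all() or not left.any():
            continue
        lv, rv = residual[left].mean(), residual[~left].mean()
        err = ((residual[left] - lv) ** 2).sum() + ((residual[~left] - rv) ** 2).sum()
        if err < best_err:
            best_err, best = err, (t, lv, rv)
    return best

def boost(x, y, n_rounds=100, lr=0.1):
    """Gradient boosting with stumps under squared loss (residual fitting)."""
    pred = np.full(y.shape, y.mean())
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= t, lv, rv)
    return pred

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 400)
y = np.sin(x) + rng.normal(0, 0.1, x.size)
pred = boost(x, y)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Binning the thresholds is what makes production implementations fast: split search cost depends on the bin count, not the sample count.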

Author 1: Liangchao LIU

Keywords: Stock prediction; machine learning approaches; ensemble learning; grasshopper optimization; histogram-based gradient boosting

PDF

Paper 75: Scalable Accelerated Intelligent Charging Strategy Recommendation for Electric Vehicles Based on Deep Q-Networks

Abstract: With the rapid development of electric vehicles, their charging strategies significantly impact the overall power grid. Solving the spatiotemporal scheduling problem of vehicle charging has become a hot research topic. This paper focuses on recommending suitable charging stations for electric vehicles and proposes a scalable accelerated intelligent charging strategy recommendation algorithm based on Deep Q-Networks (DQN). The strategy recommendation problem is formulated as a Markov decision process, where the continuous sequence of regional charging requests within a time slice is fed into the DQN as the input state, enabling optimal charging strategy recommendations for each electric vehicle. The algorithm aims to maintain regional load balance while minimizing user waiting time. To enhance the algorithm's applicability, a scalable, accelerated charging strategy framework is further proposed, which incorporates information filtering and shared experience pool mechanisms to adapt to different expansion scenarios and expedite strategy iterations in new scenarios. Simulation results demonstrate that the proposed DQN-based strategy recommendation algorithm outperforms the shortest path-first strategy, and the scalable, accelerated charging strategy framework achieves a 64.3% improvement in iteration speed in new scenarios, helping to reduce cloud server load and overhead.
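A heavily simplified, tabular stand-in for the DQN formulation above: states are discretized regional load levels, actions are candidate stations, and the reward is the negative waiting time. The station count, waiting-time model, and hyperparameters are hypothetical; the paper's DQN replaces the Q-table with a neural network over continuous request sequences.

```python
import numpy as np

rng = np.random.default_rng(1)

N_STATIONS = 3        # actions: which charging station to recommend
N_LOAD_LEVELS = 4     # states: discretized regional load within a time slice

# Hypothetical per-station base waiting times; station 2 is fastest.
base_wait = np.array([8.0, 5.0, 2.0])

def step(state, action):
    """Toy environment: waiting time grows with the regional load level."""
    wait = base_wait[action] * (1 + 0.5 * state) + rng.normal(0, 0.1)
    next_state = int(rng.integers(N_LOAD_LEVELS))  # load evolves randomly
    return next_state, -wait                       # reward = negative waiting time

Q = np.zeros((N_LOAD_LEVELS, N_STATIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(20000):
    if rng.random() < eps:                         # epsilon-greedy exploration
        action = int(rng.integers(N_STATIONS))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])
    state = next_state
```

After training, the greedy policy `argmax(Q[state])` plays the role of the recommendation: the station with the best learned long-run waiting time for the current load level.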

Author 1: Xianhao Shen
Author 2: Zhen Wu
Author 3: Yexin Zhang
Author 4: Shaohua Niu

Keywords: Scalable acceleration; smart charging; deep Q-network; Markov decision process

PDF

Paper 76: Geospatial Pharmacy Navigator: A Web and Mobile Application Integrating Geographical Information System (GIS) for Medicine Accessibility

Abstract: This project introduces a web and mobile application that integrates Geographic Information Systems (GIS) to identify pharmacies with available prescription drugs, addressing the expanding role of Information and Communication Technology (ICT) in healthcare. The primary objective is to offer the general public an easy-to-use platform that locates the closest pharmacy stocking the searched drugs or medicines. Adopting the Rapid Application Development methodology ensures continuous engagement with stakeholders, allowing developers to closely align the application with user requirements. Essential elements of the web platform include chat functionality, inventory management, pharmacy oversight, and the display of medication listings. With the mobile application, general users may browse medication lists, search for pharmacies, find pharmacy locations and the best routes, search for specific medications, and access comprehensive medication information. Fifty respondents, comprising five pharmacists and forty-five general users, expressed overall satisfaction with the system's functionality, emphasizing its ease of use and straightforward navigation across most features. This project not only underscores the importance of ICT in the healthcare industry, but also shows how technology can be successfully integrated to improve accessibility and expedite healthcare procedures for both the general public and professionals.
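The core "closest pharmacy with the searched medicine" query can be sketched with a haversine great-circle distance over pharmacy records. The pharmacy names, coordinates, and stock sets below are hypothetical, not data from the project.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS-84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical pharmacy records: (name, lat, lon, stocked medicines).
pharmacies = [
    ("Pharmacy A", 8.2280, 124.2452, {"amoxicillin", "paracetamol"}),
    ("Pharmacy B", 8.2310, 124.2500, {"ibuprofen"}),
    ("Pharmacy C", 8.2405, 124.2600, {"amoxicillin"}),
]

def nearest_with_medicine(user_lat, user_lon, medicine):
    """Closest pharmacy that stocks the requested medicine, or None."""
    candidates = [(haversine_km(user_lat, user_lon, lat, lon), name)
                  for name, lat, lon, stock in pharmacies if medicine in stock]
    return min(candidates)[1] if candidates else None
```

A production GIS backend would compute routes over the road network rather than straight-line distance, but the stock-filter-then-rank structure is the same.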

Author 1: Mia Amor C. Tinam-isan
Author 2: Sherwin D. Sandoval
Author 3: Nathanael R. Neri
Author 4: Nasrollah L. Gandamato

Keywords: ICT in health; mobile application; web application; GIS; pharmacy mapping

PDF

Paper 77: Hybrid Bio-Inspired Optimization-based Cloud Resource Demand Prediction using Improved Support Vector Machine

Abstract: To furnish diverse resource requirements in cloud computing, numerous resources are integrated into a data centre. Delivering resources in a timely and accurate manner to meet user expectations is a significant concern: user resource demands fluctuate greatly and change frequently, so resource provision may not happen on time. Furthermore, because some physical resources are shut down to save energy, there may occasionally not be enough of them to meet user requests. It is therefore critical to provision resources proactively to ensure positive user engagement with cloud computing, and accurate estimation of future resource demands is essential to enable provisioning in advance. Using machine learning techniques, this study offers a unique approach that identifies key features to accelerate the forecasting of cloud resource consumption. Finding the best-fitting classification method with maximum classification accuracy is crucial when predicting cloud resource consumption. An attribute selection method is used to reduce the dataset, and the reduced data is then passed to the classification process. The hybrid attribute selection method used in the investigation, which combines the bio-inspired genetic algorithm, a pulse-coupled neural network, and particle swarm optimization, improves classification accuracy. Prediction accuracy is examined using a variety of performance criteria, and the experimental results show that the suggested machine learning method predicts cloud resource demand more effectively than traditional machine learning models.
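A toy sketch of bio-inspired attribute selection in the spirit of the hybrid approach above, using a plain genetic algorithm (not the paper's GA + pulse-coupled neural network + PSO hybrid) with a least-squares R^2 fitness proxy. The synthetic workload data and GA settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic workload data: 10 candidate features, but only the first
# three actually drive resource demand.
n, n_feat = 300, 10
X = rng.normal(size=(n, n_feat))
y = X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.1, n)

def fitness(mask):
    """R^2 of a least-squares fit on the selected features, minus a small
    per-feature penalty so compact subsets are preferred."""
    if not mask.any():
        return -1.0
    A = X[:, mask]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1.0 - ((y - A @ coef) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return r2 - 0.01 * mask.sum()

pop = rng.random((20, n_feat)) < 0.5          # random binary feature masks
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[-10:]]     # keep the better half
    pa = elite[rng.integers(10, size=10)]     # pick parent pairs
    pb = elite[rng.integers(10, size=10)]
    kids = np.where(rng.random((10, n_feat)) < 0.5, pa, pb)  # uniform crossover
    kids ^= rng.random((10, n_feat)) < 0.05   # bit-flip mutation
    pop = np.vstack([elite, kids])

best = pop[int(np.argmax([fitness(m) for m in pop]))]
```

The reduced mask is then what would be handed to the downstream classifier; in the paper's pipeline the fitness is driven by classification accuracy rather than this regression proxy.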

Author 1: Nisha Sanjay
Author 2: Sasikumaran Sreedharan

Keywords: Cloud computing; resource demand; machine learning; cloud resource demand prediction; bio-inspired algorithm

PDF

Paper 78: Spatial Display Model of Oil Painting Art Based on Digital Vision Design

Abstract: Oil painting, owing to its unique expressive approach, holds infinite charm in classical artistic creation, yet introduces complexities in terms of manual maintenance. In pursuit of digital spatial visualization of oil painting art, this study employs a stereo matching algorithm based on Efficient Large-Scale Stereo matching (ELAS), focusing on aspects such as disparity maps and pixel contrasts. Furthermore, the algorithm is enhanced by incorporating a cross-arms strategy for image registration and by selecting auxiliary point sets to optimize the handling of image features. Results indicate that the proposed model, evaluated on the Middlebury dataset, achieves high accuracy, recall, and F1 scores of 97.2%, 95.0%, and 97.5% respectively, surpassing the DecStereo algorithm by 3.4%, 8.2%, and 5.7%. When tested on the Photo2monet oil painting dataset, the proposed model achieves peak signal-to-noise ratio and average structural similarity index values of 16.781 and 0.833 respectively. This suggests that the proposed model excels in the digital visual representation of oil paintings, exhibiting higher image precision, stronger stereo matching capability, and superior spatial display performance.
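For intuition about what a stereo matcher computes, here is a naive sum-of-absolute-differences block matcher (far simpler than ELAS) recovering a known disparity on a synthetic image pair; the window size, disparity range, and images are illustrative assumptions.

```python
import numpy as np

def disparity_sad(left, right, max_disp=8, win=3):
    """Naive block matching: for each left-image pixel, pick the horizontal
    shift into the right image with the lowest sum of absolute differences
    over a (2*win+1)^2 window."""
    h, w = left.shape
    L = np.pad(left.astype(float), win)
    R = np.pad(right.astype(float), win)
    disp = np.zeros((h, w), dtype=int)
    k = 2 * win + 1
    for y in range(h):
        for x in range(w):
            patch = L[y:y + k, x:x + k]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                sad = np.abs(patch - R[y:y + k, x - d:x - d + k]).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right view is the left view shifted by 4 pixels.
rng = np.random.default_rng(2)
left = rng.integers(0, 256, (32, 32))
right = np.empty_like(left)
right[:, :-4] = left[:, 4:]
right[:, -4:] = left[:, -4:]   # border columns have no true correspondence
disp = disparity_sad(left, right)
```

ELAS accelerates and regularizes this exhaustive search by matching a sparse set of support points first and interpolating the disparity prior between them.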

Author 1: Qiong Yang
Author 2: Zixuan Yue

Keywords: Oil painting; spatial visualization; stereo matching; spatial display; ELAS

PDF

Paper 79: Research on Neural Network-based Automatic Music Multi-Instrument Classification Approach

Abstract: The automatic classification of multiple instruments plays a crucial role in providing services for music retrieval and recommendation. This paper focuses on automatic multi-instrument classification. Firstly, instrument features were analyzed, and Mel-frequency cepstral coefficients (MFCC) and perceptual linear predictive coefficients (PLPC) were extracted from instrument signals. Features were selected using the entropy weight method. The optimal initial weights and thresholds of a back-propagation neural network (BPNN) were obtained using the sparrow search algorithm (SSA), yielding an SSA-BPNN classifier. Experiments were conducted on the IRMAS dataset. The results demonstrated that the combination of MFCC and PLPC features selected through the entropy weight method achieved the best performance in automatic multi-instrument classification, with a precision, recall, and F1 score of 0.72, 0.71, and 0.71, respectively. Moreover, it outperformed other algorithms such as support vector machine and XGBoost. These results confirm the reliability of the proposed automatic multi-instrument classification method, making it suitable for practical applications.
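The entropy weight method used for feature selection can be stated compactly: features whose values are more dispersed across samples have lower entropy and receive larger weights. A minimal NumPy sketch (for non-negative feature matrices; the paper's exact normalization may differ):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: each feature's weight grows as its values
    are more dispersed across samples (lower entropy => more information)."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                              # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(X.shape[0])   # entropies in [0, 1]
    d = 1.0 - e                                        # degree of diversification
    return d / d.sum()

# A constant feature carries no information; a spread-out one carries more.
w = entropy_weights([[1, 1], [1, 2], [1, 3], [1, 10]])
```

In the paper's pipeline these weights would rank MFCC and PLPC components, with low-weight features dropped before the SSA-BPNN classifier.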

Author 1: Ribin Guo

Keywords: Neural network; musical instrument; automatic classification; auditory feature; sparrow search algorithm

PDF

Paper 80: HarborSync: An Advanced Energy-efficient Clustering-based Algorithm for Wireless Sensor Networks to Optimize Aggregation and Congestion Control

Abstract: In the ever-evolving landscape of Wireless Sensor Networks (WSNs), the demand for cutting-edge algorithms has never been more critical. This paper proposes an algorithm, HarborSync, to improve stability, energy efficiency, durability, and congestion control in WSNs. When selecting cluster heads and backup nodes, HarborSync applies the Optimised Stable Clustering Algorithm (OSCA) and the Weighted Clustering Algorithm (WCA). This fresh method lays the groundwork for better performance by introducing techniques that intentionally postpone cluster-head changes and compute priorities. Using the innovative Cluster-based Aggregation and Congestion Control (CACC) features, HarborSync provides enhanced routing, adaptive reconfiguration, efficient aggregation techniques, and dynamic congestion monitoring. Among HarborSync's strengths, stability stands out with a 90% rating, surpassing LEACH (78%), LEACH-C (82%), TEEN (88%), and PEGASIS (76%). For durability, HarborSync scores 88%, better than LEACH (75%), LEACH-C (80%), TEEN (85%), and PEGASIS (72%). For congestion control, HarborSync achieves 3.85%, compared with LEACH and LEACH-C managing 5.22%, TEEN achieving 4.98%, and PEGASIS 7.32%. Regarding adaptability, HarborSync earns an 85% rating, surpassing LEACH (72%) and PEGASIS (68%) and competitive with LEACH-C (78%) and TEEN (90%). In the critical realm of packet loss management, HarborSync demonstrates efficiency with a reduced rate of 6.179%, outperforming LEACH (7.811%), LEACH-C (6.897%), and PEGASIS (7.973%), though TEEN (4.953%) remains lower.
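A sketch of WCA-style weighted cluster-head scoring as described above: each node gets a combined score from normalized residual energy, node degree, and distance to the base station, and the top scorers become cluster head and backup. The attribute ranges and weight vector are assumptions, not HarborSync's actual formula.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical node attributes: residual energy (J), node degree, and
# distance to the base station (m).
n_nodes = 20
energy = rng.uniform(0.2, 2.0, n_nodes)
degree = rng.integers(1, 8, n_nodes)
dist_bs = rng.uniform(10, 100, n_nodes)

def wca_scores(energy, degree, dist_bs, w=(0.5, 0.3, 0.2)):
    """WCA-style combined score: prefer high energy, high connectivity,
    and short distance to the base station (each normalized to [0, 1])."""
    e = energy / energy.max()
    d = degree / degree.max()
    s = 1.0 - dist_bs / dist_bs.max()
    return w[0] * e + w[1] * d + w[2] * s

scores = wca_scores(energy, degree, dist_bs)
order = np.argsort(scores)[::-1]
cluster_head, backup = order[0], order[1]   # best node leads, runner-up backs up
```

Keeping a ranked list rather than a single winner is what enables the postponed cluster-head changes: the role rotates to the backup only when the head's score degrades past a threshold.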

Author 1: Ibrahim Aqeel

Keywords: Clustering; congestion control; cluster head selection; energy-efficient clustering; wireless sensor networks; energy optimization

PDF

Paper 81: Application of Skeletal Skinned Mesh Algorithm Based on 3D Virtual Human Model in Computer Animation Design

Abstract: 3D virtual character animation is a core technology of games, animation, and virtual reality. To improve its visual realism, this research focused on the skeletal skinned mesh algorithm. Firstly, a three-dimensional human body model was established based on motion capture data. Then, skin vertex weight calculation and bone-skin animation design were completed for the human body model. Experiments confirm that the designed weight calculation method yields smooth weight transitions and good computational stability. The designed skinned mesh algorithm outperforms existing skinned mesh algorithms in accuracy, recall, and area-under-curve values, with a maximum area-under-curve value of 0.927. Its smoothness and volume retention rates are both above 90.00%, with no obvious collapse. Its other objective and subjective evaluation indicators are also superior to those of existing advanced skinned mesh algorithms, and the skinning effect is realistic and smooth. Overall, this study contributes to the creation of 3D virtual character animation, enhances the visual realism of virtual creations, and provides key support for the animation performance of virtual characters.
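The baseline that skinning algorithms build on, linear blend skinning, is easy to state: a skinned vertex is the weight-blended sum of each bone's transform applied to the rest-pose vertex. The sketch below (bones and weights are hypothetical) also shows the well-known shrinkage under large rotations that motivates dual quaternion skinning, one of the paper's keywords.

```python
import numpy as np

def blend_skin(vertex, bones, weights):
    """Linear blend skinning: v' = sum_i w_i * (R_i @ v + t_i)."""
    v = np.zeros(3)
    for (R, t), w in zip(bones, weights):
        v += w * (R @ vertex + t)
    return v

# Two hypothetical bones: the identity, and a 90-degree rotation about Z.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
bones = [(np.eye(3), np.zeros(3)), (Rz, np.zeros(3))]
vertex = np.array([1.0, 0.0, 0.0])

# Equal weights place the vertex halfway between the two transforms --
# and the blended point is shorter than the original (the classic
# "candy wrapper" collapse that dual quaternion skinning avoids).
v_half = blend_skin(vertex, bones, [0.5, 0.5])
```

Because averaging rotation matrices does not preserve length, blended vertices drift toward the joint; dual quaternion skinning blends rigid transforms instead, which is why the paper's volume retention metric matters.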

Author 1: Zhongkai Zhan

Keywords: 3D virtual human; skinned mesh algorithm; weight; character animation; dual quaternion; motion capture data

PDF

Paper 82: Applying Computer Vision and Machine Learning Techniques in STEM-Education Self-Study

Abstract: In this innovative exploration, "Applying Computer Vision Techniques in STEM-Education Self-Study," the research delves into the transformative intersection of advanced computer vision (CV) technologies and self-directed learning within Science, Technology, Engineering, and Mathematics (STEM) education. Challenging traditional educational paradigms, this study posits that sophisticated CV algorithms, when judiciously integrated with modern educational frameworks, can profoundly augment the efficacy of self-study models for students navigating the increasingly intricate STEM curricula. By leveraging state-of-the-art facial recognition, object detection, and pattern analysis, the study underscores how CV can monitor, analyze, and thereby enhance students' engagement and interaction with digital content, a pioneering stride that addresses the prevalent disconnect between static study materials and the dynamic nature of learner engagement. Furthermore, the research illuminates the critical role of CV in generating personalized study roadmaps, effectively responding to individual learners' behavioral patterns and cognitive absorption rhythms, identified through meticulous analysis of captured visual data, thereby transcending the one-size-fits-all educational approach. Through rigorous qualitative and quantitative research methods, the paper offers groundbreaking insights into students' study habits, proclivities, and the nuanced obstacles they face, facilitating the creation of responsive, adaptive, and deeply personalized learning experiences. In conclusion, this research serves as a clarion call to educators, technologists, and policy-makers, emphatically demonstrating that the thoughtful application of computer vision techniques not only catalyzes a more engaging self-study landscape but also holds the latent potential to revolutionize the holistic STEM education ecosystem.

Author 1: Rustam Abdrakhmanov
Author 2: Assyl Tuimebayev
Author 3: Botagoz Zhussipbek
Author 4: Kalmurat Utebayev
Author 5: Venera Nakhipova
Author 6: Oichagul Alchinbayeva
Author 7: Gulfairuz Makhanova
Author 8: Olzhas Kazhybayev

Keywords: Load balancing; machine learning; server; classification; software

PDF

Paper 83: Automated Fruit Sorting in Smart Agriculture System: Analysis of Deep Learning-based Algorithms

Abstract: Automated fruit sorting plays a crucial role in smart agriculture, enabling efficient and accurate classification of fruits based on various quality parameters. Traditionally, rule-based and machine-learning methods have been employed for fruit sorting, but in recent years, deep learning-based approaches have gained significant attention. This paper investigates deep learning methods for fruit sorting and justifies their prevalence in the field, while also identifying the limitations of CNN-based approaches that must be addressed to improve their effectiveness. It presents a comprehensive analysis of CNN-based methods, highlighting their strengths and limitations. The analysis aims to contribute to advancing automated fruit sorting in smart agriculture and to provide insights for future research and development in deep learning-based fruit sorting techniques.

Author 1: Cheng Liu
Author 2: Shengxiao Niu

Keywords: Smart agriculture; automated fruit sorting; deep learning; Convolutional Neural Network (CNN); analysis

PDF

Paper 84: Artificial Intelligence-driven Training and Improvement Methods for College Students' Line Dancing

Abstract: With the advancement of computer technology, artificial intelligence has gradually become a research focus, and researchers' attention has shifted from the computer itself to the interaction between computers and humans. Artificial intelligence has begun to appear in various industries; with its rigorous computing logic and efficient computing speed, it is gradually replacing high-precision or highly repetitive work. However, little concrete data supports its specific effect on work efficiency and output. In this context, this paper studies AI-driven methods for training and improving college students' line dancing. Virtual reality technology mainly undertakes functions such as virtual space modeling, sound positioning, sensory feedback, voice interaction, and visual and spatial tracking, ensuring accurate positioning during choreography and motion capture; mechanical capture devices are used for motion capture in the virtual reality space. This article uses intelligent capture technology based on virtual reality and artificial intelligence algorithms to capture and analyze dance postures, generate analysis reports in a timely manner, and provide corrections and suggestions for the dance posture. The final results show that AI can improve the line-dancing training efficiency of college students and can increase the degree and variety of innovation in dance postures by 7% to 13% compared with purely manual training. This indicates that artificial intelligence technology plays a beneficial role in college students' overall line dance training; the paper further argues that it can effectively improve the overall productivity of traditional industries.

Author 1: Xiaohui WANG

Keywords: Motion capture; artificial intelligence technology; virtual reality; college students’ line dance training; dance ascension

PDF

Paper 85: State-of-the-Art Review of Deep Learning Methods in Fake Banknote Recognition Problem

Abstract: In the burgeoning epoch of digital finance, the exigency for fortified monetary transactions is paramount, underscoring the need for advanced counterfeit deterrence methodologies. The research paper provides an exhaustive analysis, delving into the profundities of employing sophisticated deep learning (DL) paradigms in the battle against fiscal fraudulence through fake banknote detection. This comprehensive review juxtaposes the traditional machine learning approaches with the avant-garde DL techniques, accentuating the conspicuous superiority of the latter in terms of accuracy, efficiency, and the diminution of human oversight. Spanning multiple continents and currencies, the discourse highlights the universal applicability and potency of DL, incorporating convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs) in discerning the most cryptic of counterfeits, a feat unachievable by obsolete technologies. The paper meticulously dissects the architectures, learning processes, and operational facets of these systems, offering insights into their convolutional strata, pooling heuristics, backpropagation, and loss minimization algorithms, alluding to their consequential roles in feature extraction and intricate pattern recognition - the quintessentials of authenticating banknotes. Furthermore, the exploration broaches the ethical and privacy concerns stemming from DL, including data bias and over-reliance on technology, suggesting the harmonization of algorithmic advancements with robust legislative frameworks. Conclusively, this seminal review posits that while DL techniques herald a revolutionary competence in fake banknote recognition, continuous research, and multi-faceted strategies are imperative in adapting to the ever-evolving chicanery of counterfeit malefactors.

Author 1: Ualikhan Sadyk
Author 2: Rashid Baimukashev
Author 3: Cemil Turan

Keywords: Fake banknote; detection; classification; recognition; review

PDF

Paper 86: Development of Intellectual Decision Making System for Logistic Business Process Management

Abstract: This research paper delves into the design and development of an Intellectual Decision Making System (IDMS) incorporated into a Logistic Business Process Management System (LBPSMS), employing advanced Machine Learning (ML) models. Aimed at streamlining and optimizing logistics business operations, the focal point of this study is to significantly elevate efficiency, enhance decision-making precision, and substantially reduce operational costs. This research introduces a pioneering hybrid approach that amalgamates both supervised and unsupervised machine learning algorithms, creating a unique paradigm for predictive analytics, trend analysis, and anomaly detection in logistics business processes. The practical application of these combined methodologies extends to diverse areas such as accurate demand forecasting, optimal route planning, efficient inventory management, and predictive customer behavior analysis. Empirical evidence from experimental trials corroborates the efficacy of the proposed IDMS, showcasing its profound impact on the decision-making process, with clear and measurable enhancements in operational efficiency and overall business performance within the logistics sector. This study thus delivers invaluable insights into the realm of machine learning applications within logistics, extending a comprehensive blueprint for future research undertakings and practical system implementations. With its practical significance and academic relevance, this research underscores the transformative potential of machine learning in revolutionizing the logistics business process management systems.

Author 1: Zhadra Kozhamkulova
Author 2: Leilya Kuntunova
Author 3: Shirin Amanzholova
Author 4: Almagul Bizhanova
Author 5: Marina Vorogushina
Author 6: Aizhan Kuparova
Author 7: Mukhit Maikotov
Author 8: Elmira Nurlybayeva

Keywords: Decision making; logistics; business process; machine learning; management

PDF

Paper 87: Construction of Short-Term Traffic Flow Prediction Model Based on IoT and Deep Learning Algorithms

Abstract: On a global scale, traffic problems are an essential factor affecting urban operations, with the frequent occurrence of traffic congestion and accidents posing a particular challenge. Solving the problem requires real-time and accurate prediction of traffic flow. This article explores the application of the Internet of Things and deep learning to traffic flow prediction, aiming to overcome the inability of existing methods to meet real-time and accuracy requirements. IoT devices, such as road sensors and in-vehicle GPS units, provide rich information for traffic flow prediction, while deep learning can learn and abstract large amounts of complex traffic data and handle prediction tasks in various complex situations. During model construction, the complexity of the road network was fully considered, practical algorithms were designed to fuse multi-source data, and the model structure was optimized to meet the needs of real-time prediction. The experimental results show that the absolute error of the test results is generally less than 6 km/h, so the model reflects the future traffic speed of a road section well.

Author 1: Xiaowei Sun
Author 2: Huili Dou

Keywords: Internet of things; deep learning algorithm; short-term traffic flow; prediction model

PDF

Paper 88: Deep Learning for Early Detection of Tomato Leaf Diseases: A ResNet-18 Approach for Sustainable Agriculture

Abstract: The paper explores the application of Convolutional Neural Networks (CNNs), specifically ResNet-18, in revolutionizing the identification of diseases in tomato crops. Facing threats from pathogens like Phytophthora infestans, timely disease detection is crucial for mitigating economic losses and ensuring food security. Traditionally, manual inspection and labour-intensive tests posed limitations, prompting a shift to CNNs for more efficient solutions. The study uses a well-organized dataset, data preprocessing techniques, and the ResNet-18 architecture. The model achieves remarkable results, with a 91% F1 score, indicating its proficiency in distinguishing healthy from unhealthy tomato leaves. Metrics such as accuracy, sensitivity, specificity, and a high AUC score on the ROC curve underscore the model's exceptional performance. The significance of this work lies in its practical applications for early disease detection in agriculture: the ResNet-18 model, with its high precision and specificity, presents a powerful tool for crop management, contributing to sustainable agriculture and global food security.
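The reported metrics (F1, sensitivity, specificity) follow directly from the binary confusion matrix. A small self-contained helper, independent of the paper's model, with label 1 standing for a diseased leaf:

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall (sensitivity), specificity and F1 for labels
    in {0, 1}, with 1 meaning 'diseased leaf'."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0          # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, specificity, f1
```

Reporting specificity alongside F1, as the paper does, matters in screening settings where the healthy class dominates and accuracy alone is misleading.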

Author 1: Asha M S
Author 2: Yogish H K

Keywords: Convolutional neural networks; tomato crop health; deep learning; binary classification; disease detection

PDF

Paper 89: EmotionNet: Dissecting Stress and Anxiety Through EEG-based Deep Learning Approaches

Abstract: Amid global health crises such as the COVID-19 pandemic, the heightened prevalence of mental health disorders like stress and anxiety has underscored the importance of understanding and predicting human emotions. This paper introduces "EmotionNet," an advanced system that leverages deep learning and state-of-the-art hardware capabilities to predict emotions, specifically stress and anxiety. Through the analysis of electroencephalography (EEG) signals, EmotionNet is uniquely poised to decode human emotions in real time. To extract information from pre-processed EEG signals, the EmotionNet architecture combines convolutional neural networks (CNNs) and long short-term memory (LSTM) networks in a complementary fashion. The approach first decomposes EEG signals into their core alpha, beta, and theta rhythms; these decomposed signals are preprocessed and fed to a CNN-LSTM-based architecture for feature extraction, where the LSTM captures the intricate temporal dynamics of the EEG. The final stage classifies signals into "stress" or "anxiety" states through an AdaBoost classifier. Evaluation against the esteemed DEAP, SEED, and DASPS datasets showcased EmotionNet's exceptional performance, achieving a remarkable accuracy of 98.6%, which surpasses even human detection rates. Beyond its technical accomplishments, EmotionNet emphasizes the paramount importance of addressing and safeguarding mental health.
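The first stage described above, decomposing EEG into alpha, beta, and theta rhythms, can be approximated with a simple FFT band-power estimate. The 256 Hz sampling rate and band edges below are common conventions assumed for illustration, not taken from the paper.

```python
import numpy as np

FS = 256  # assumed EEG sampling rate in Hz

def band_powers(signal, fs=FS):
    """Relative power in the theta (4-8 Hz), alpha (8-13 Hz) and
    beta (13-30 Hz) bands, from the FFT magnitude spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    total = power[(freqs >= 4) & (freqs < 30)].sum()
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    return {name: power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

# A pure 10 Hz tone concentrates its power in the alpha band.
t = np.arange(0, 2, 1.0 / FS)
powers = band_powers(np.sin(2 * np.pi * 10 * t))
```

A full pipeline like EmotionNet's would apply band-pass filters per rhythm and feed the filtered time series (not just band powers) into the CNN-LSTM feature extractor.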

Author 1: Yassine Daadaa

Keywords: Electroencephalography (EEG); long short-term memory (LSTM); convolutional neural network (CNN); human stress; anxiety detection; deep learning

PDF

Paper 90: Target Detection in Martial Arts Competition Video using Kalman Filter Algorithm Based on Multi-target Tracking

Abstract: To address the low accuracy and poor stability of traditional object tracking methods for martial arts competition videos, a Kalman filtering algorithm based on feature matching and multi-target tracking is proposed for object detection in martial arts competition videos. Firstly, feature matching in multi-target tracking is studied. Then, based on target feature matching, the Kalman filtering algorithm is fused to construct a target detection model for martial arts videos. Finally, simulation experiments verify the performance and application effectiveness of the model. The results showed that the average tracking errors of the model on the X and Y axes were 3.86% and 3.38%, respectively, while the average accuracy and recall rate in video target tracking were 93.64% and 95.48%, respectively. After 100 iterations, the results gradually stabilized, indicating that the constructed model can accurately detect targets in martial arts competition videos with high tracking accuracy and robustness. Compared with traditional object detection methods, the algorithm offers better performance and effectiveness, and it has broad application prospects and research value for target detection in martial arts competition videos.
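A standard constant-velocity Kalman filter over 2D position measurements illustrates the core of such a tracking model (without the paper's feature-matching stage); the motion model, noise covariances, and synthetic trajectory are illustrative assumptions.

```python
import numpy as np

dt = 1.0
# State [x, y, vx, vy] under a constant-velocity motion model.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = 0.01 * np.eye(4)                        # process noise covariance
R = np.eye(2)                               # measurement noise covariance

def kalman_track(measurements):
    x, P = np.zeros(4), 10.0 * np.eye(4)
    track = []
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (z - H @ x)             # update with the measurement
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)

rng = np.random.default_rng(3)
truth = np.column_stack([np.arange(50) * 2.0, np.arange(50) * 1.0])
meas = truth + rng.normal(0, 1.0, truth.shape)
track = kalman_track(meas)
err = float(np.abs(track[10:] - truth[10:]).mean())   # skip the warm-up steps
```

In a multi-target setting, feature matching decides which detection feeds which filter's update step; each tracked athlete then carries its own state and covariance.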

Author 1: Zhiguo Xin

Keywords: Multi-target tracking; Kalman filtering algorithm; martial arts competition videos; target detection; feature matching

PDF

Paper 91: 2D-CNN Architecture for Accurate Classification of COVID-19 Related Pneumonia on X-Ray Images

Abstract: In the wake of the COVID-19 pandemic, the use of medical imaging, particularly X-ray radiography, has become integral to the rapid and accurate diagnosis of pneumonia induced by the virus. This research paper introduces a novel two-dimensional Convolutional Neural Network (2D-CNN) architecture specifically tailored for the classification of COVID-19 related pneumonia in X-ray images. Leveraging the advancements in deep learning, our model is designed to distinguish between viral pneumonia, typical of COVID-19, and other types of pneumonia, as well as healthy lung imagery. The architecture of the proposed 2D-CNN is characterized by its depth and a unique layer arrangement, which optimizes feature extraction from X-ray images, thus enhancing the model's diagnostic precision. We trained our model using a substantial dataset comprising thousands of annotated X-ray images, including those of patients diagnosed with COVID-19, patients with other pneumonia types, and individuals with no lung infection. This dataset enabled the model to learn a wide range of radiographic features associated with different lung conditions. Our model demonstrated exceptional performance, achieving high accuracy, sensitivity, and specificity in preliminary tests. The results indicate that our 2D-CNN model not only outperforms existing pneumonia classification models but also provides a valuable tool for healthcare professionals in the early detection and differentiation of COVID-19 related pneumonia. This capability is crucial for prompt and appropriate treatment, potentially reducing the pandemic's burden on healthcare systems. Furthermore, the model's design allows for easy integration into existing medical imaging workflows, offering a practical and efficient solution for frontline medical facilities. Our research contributes to the ongoing efforts to combat COVID-19 by enhancing diagnostic procedures through the application of artificial intelligence in medical imaging.

Author 1: Nurlan Dzhaynakbaev
Author 2: Nurgul Kurmanbekkyzy
Author 3: Aigul Baimakhanova
Author 4: Iyungul Mussatayeva

Keywords: Machine learning; deep learning; X-Ray; CNN; detection; classification

PDF

Paper 92: Revolutionizing Generalized Anxiety Disorder Detection using a Deep Learning Approach with MGADHF Architecture on Social Media

Abstract: In the contemporary landscape, social media has emerged as a dominant medium through which individuals articulate a wide range of emotions, both positive and negative, thereby offering significant insight into their psychological well-being. Identifying these emotional signals plays a vital role in the timely detection of individuals experiencing depression and other mental health difficulties, facilitating potentially life-saving interventions. Although numerous machine learning (ML) techniques already report high accuracy in predicting depression, the overall effectiveness of these systems remains unsatisfactory. To overcome this limitation, the present study introduces an innovative deep learning (DL) methodology for identifying depression: the Multi-Aspect Generalized Anxiety Disorder Detection with Hierarchical-Attention Network and Fuzzy (MGADHF) architecture. Feature selection is conducted by employing the Adaptive Particle and Grey Wolf optimization techniques together with fuzzy logic. The Multi-Aspect Depression Detection with Hierarchical Attention Network (MDHAN) model is subsequently utilized to categorize Twitter data, differentiating between users exhibiting symptoms of depression and those who do not. Comparative assessments are performed against established methods such as Convolutional Neural Network (CNN), Support Vector Machine (SVM), Minimum Description Length (MDL), and MDHAN. The proposed MGADHF architecture achieves a notable accuracy of 99.19%, surpassing the performance of frequency-based DL models while attaining a reduced false-positive rate.

Author 1: Faisal Alshanketi

Keywords: Deep learning; machine learning; anxiety disorder; social media; grey wolf optimization technique

PDF
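The abstract above relies on grey-wolf-style optimization for feature selection. As a hedged sketch of the generic Grey Wolf Optimizer (not the paper's adaptive hybrid variant), the following minimizes a toy sphere function; the wolf count, iteration budget, and bounds are arbitrary illustrative choices:

```python
import numpy as np

def gwo(f, dim, n_wolves=10, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Generic Grey Wolf Optimizer: wolves move toward the three best solutions."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        order = np.argsort(fit)
        alpha, beta, delta = X[order[:3]]      # three leading wolves
        a = 2.0 - 2.0 * t / iters              # exploration factor decays to 0
        for i in range(n_wolves):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - X[i])  # distance to the leader
                cand.append(leader - A * D)
            X[i] = np.clip(np.mean(cand, axis=0), lb, ub)
    fit = np.apply_along_axis(f, 1, X)
    best = X[np.argmin(fit)]
    return best, f(best)

best_x, best_f = gwo(lambda x: float(np.sum(x ** 2)), dim=2)
```

In a feature-selection setting such as the paper's, the fitness function would score a candidate feature subset by classifier performance rather than a sphere function.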

Paper 93: Intelligent Temperature Control Method of Instrument Based on Fuzzy PID Control Technology

Abstract: Intelligent instrument temperature control is generally realized with conventional PID control, whose efficiency and precision are too low to meet actual production requirements. To address this issue, a fuzzy PID (FPID) control technique is proposed, which increases control precision by adjusting the PID parameters in real time with a fuzzy algorithm. In addition, a multi-strategy-fused Improved Grey Wolf Optimization (MGWO) algorithm is used to obtain the optimal fuzzy rule parameters for the fuzzy controller, thereby optimizing the FPID. On this basis, an MGWO-FPID-based intelligent instrument temperature control model is created to enhance the instrumentation's ability to regulate temperature. Testing demonstrated that the MGWO-FPID model outperformed the two comparison models, with an objective function value of 5×10⁻⁸, an adaptation degree of 13.1, a control regulation time of 2.08 s, an F1 value of 96.14%, an MAE of 8.53, a Recall of 95.37%, and an AUC of 0.995. These results show that the proposed MGWO-FPID model achieves high accuracy and efficiency, effectively realizing intelligent instrument temperature control in industrial production. The model can monitor and regulate temperature in real time during production, avoiding safety accidents caused by temperature anomalies and ensuring safe industrial operation, and its application can improve production efficiency and product quality, reduce production costs, and increase economic benefits. This can promote not only the development of related industries but also broader economic development.

Author 1: Wenfang Li
Author 2: Yuqiao Wang

Keywords: Fuzzy PID control; instrumentation; intelligent temperature control; differential negative feedback; grey wolf optimization algorithm

PDF
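The core FPID idea above, adjusting PID gains in real time from fuzzy rules on the error, can be sketched as follows. The one-rule "fuzzification" and the first-order plant are invented toy stand-ins, not the paper's controller or its MGWO-tuned rule base:

```python
def fuzzy_pid_step(err, prev_err, integ, dt, base=(2.0, 0.5, 0.1)):
    """One step of a PID controller whose gains are scaled by a crude
    fuzzy rule: large errors boost Kp (fast correction), small errors
    boost Ki (remove steady-state offset)."""
    kp0, ki0, kd0 = base
    e = min(abs(err), 1.0)          # membership of "error is large" on [0, 1]
    kp = kp0 * (1.0 + e)            # large error -> stronger proportional action
    ki = ki0 * (2.0 - e)            # small error -> stronger integral action
    kd = kd0
    integ += err * dt
    deriv = (err - prev_err) / dt
    return kp * err + ki * integ + kd * deriv, integ

# Regulate a first-order "instrument" plant y' = (u - y) / tau toward 1.0.
tau, dt, y, integ, prev = 0.5, 0.01, 0.0, 0.0, 0.0
setpoint = 1.0
for _ in range(2000):
    err = setpoint - y
    u, integ = fuzzy_pid_step(err, prev, integ, dt)
    prev = err
    y += (u - y) / tau * dt
```

A real FPID uses a full rule table over error and error rate; an optimizer such as MGWO would tune the membership functions and rule parameters instead of the hand-picked scalings used here.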

Paper 94: Designing an Adaptive Effective Intrusion Detection System for Smart Home IoT

Abstract: As the ubiquity of IoT devices in smart homes escalates, so does the vulnerability to cyber threats that exploit weaknesses in device security. Timely and accurate detection of attacks is critical to protect smart home networks. Intrusion Detection Systems (IDS) are a cornerstone of any layered security defense strategy. However, building such a system is challenging given the resource constraints and behavioral diversity of smart home devices. This paper presents an adaptive IDS based on a device-specific approach and SDN deployment. We categorize devices based on traffic profiles to enable specialized architectural design and dynamically assign the suitable detection model. We demonstrate the IDS's efficiency, effectiveness, and adaptability by thoroughly benchmarking an ensemble of machine learning models, mainly tree ensembles and extreme learning machine variants, on the up-to-date CICIoT2023 IoT security dataset. Our multi-component, device-aware IDS architecture leverages software-defined networking and virtualized network functions for scalable deployment, with an edge computing design to meet strict latency requirements. The results reveal that our adaptive model selection ensures detection accuracy while maintaining low latency, meeting the critical requirements of real-time accuracy and adaptability to smart home devices' traffic patterns.

Author 1: Hassen Sallay

Keywords: Smart home; IoT; IDS; taxonomy; architecture; SDN; ELM

PDF
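One of the model families benchmarked above, the extreme learning machine (ELM), is simple enough to sketch: hidden-layer weights stay random and untrained, and only the output weights are solved in closed form by least squares. The toy regression target is invented for illustration; the paper applies ELM variants to intrusion detection features instead:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random nonlinear feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.linspace(0, 3, 40).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

The single least-squares solve is what gives ELMs their training-speed advantage over backpropagated networks, which matters for the strict latency budgets this abstract mentions.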

Paper 95: Audio Style Conversion Based on AutoML and Big Data Analysis

Abstract: In the field of audio style conversion research, AutoML and big data analysis have shown great potential. This study applied AutoML and big data analysis to deep learning of audio styles, focusing in particular on style transfer between flute and violin. The results show that when iterative learning is used for audio style conversion training, the training curve stabilizes after 100 iterations, while the validation curve stabilizes after 175 iterations. In the efficiency analysis, the two evaluated configurations (the yellow and green curves) reached efficiencies of 1.05 and 1.34, respectively, with the latter being significantly more efficient. Through the application of AutoML and big data analysis, this study achieved significant results in audio style conversion, successfully improving conversion accuracy. This progress has practical application value in multiple fields, including music production and sound effect design.

Author 1: Dan Chi

Keywords: AutoML; audio style conversion; machine learning; big data analysis; adain module

PDF

Paper 96: Attraction Recommendation and Itinerary Planning for Smart Rural Tourism Based on Regional Segmentation

Abstract: As the rural tourism industry develops, effective attraction recommendation and itinerary planning are crucial to the tourist experience. Accordingly, a recommendation and itinerary-planning technique for rural scenic spots based on regional segmentation is proposed. The scenic area was divided into multiple grids based on tourist check-in behaviour, and the interest in and influence of the scenic area were associated with grid check-in behaviour; content recommendation was achieved through two factors, popularity and regional location. To address the sparsity of recommendation data, clustering algorithms were introduced to model tourist check-in behaviour based on factors such as time and regional location, and content recommendation was achieved through tourist preferences. In the performance analysis of recommendation models, the proposed model attains an accuracy of 0.965 and 0.956 on the Gowalla and Yelp datasets, respectively, which is superior to other models. Comparing the recommendation loss of different models, the proposed model has an RMSE of 0.120 on the Gowalla dataset, again superior to other models. In practical application analysis, when the number of recommendations is 5, the accuracy and recall of the proposed model are 0.138 and 0.069, respectively, which are superior to other models. In tourism itinerary planning, the proposed model has the shortest overall planning time. The model therefore has excellent application effects, and the research provides important technical references for tourist travel and rural tourism destination planning.

Author 1: Ruiping Chen
Author 2: Yanli Zhou
Author 3: Dejun Zhang

Keywords: Regional division; trip planning; recommended tourist attractions; clustering algorithm; time factor

PDF

Paper 97: A Hybrid GAN-BiGRU Model Enhanced by African Buffalo Optimization for Diabetic Retinopathy Detection

Abstract: Diabetic retinopathy (DR) is a severe complication of diabetes mellitus that can lead to vision impairment or even blindness if not diagnosed and treated early. Manual inspection of the patient's retina is the conventional approach to diagnosing the disease. This study offers a novel method for identifying diabetic retinopathy in medical diagnosis: a hybrid Generative Adversarial Network (GAN) and Bidirectional Gated Recurrent Unit (BiGRU) model, further refined with the African Buffalo Optimization algorithm. The GAN's ability to extract complex characteristics from retinal images improves the model's capacity to identify the minute patterns suggestive of diabetic retinopathy; this feature-extraction step plays a critical role in revealing information that may be hidden yet is essential for a precise diagnosis. The BiGRU component then operates on the extracted features, efficiently maintaining temporal relationships and enabling thorough information absorption. Combining GAN-based feature extraction with BiGRU's sequential processing creates a synergistic interaction that gives the model a comprehensive grasp of retinal images. Moreover, the African Buffalo Optimization technique fine-tunes the model's parameters to improve accuracy in identifying diabetic retinopathy. Implemented in Python, the proposed approach attains a 98.5% accuracy rate, demonstrating its ability to reach high levels of accuracy in diabetic retinopathy detection.

Author 1: Sasikala P
Author 2: Sushil Dohare
Author 3: Mohammed Saleh Al Ansari
Author 4: Janjhyam Venkata Naga Ramesh
Author 5: Yousef A.Baker El-Ebiary
Author 6: E. Thenmozhi

Keywords: African Buffalo Optimization (ABO); Bidirectional Gated Recurrent Unit (BI-GRU); Generative Adversarial Network (GAN); diabetic retinopathy; medical diagnosis

PDF
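The BiGRU component named above is built from gated recurrent unit cells. An illustrative single GRU step in NumPy is shown below, with toy sizes and random untrained weights; a bidirectional model would run a second pass over the reversed sequence and concatenate the two final states:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: gates decide how much of the previous state to keep."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1.0 - z) * h + z * h_tilde         # blend old state and candidate

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_in, d_h), (d_h, d_h)] * 3]
h = np.zeros(d_h)
for t in range(5):                             # run a length-5 toy sequence forward
    h = gru_cell(rng.normal(size=d_in), h, params)
```

In the paper's pipeline the inputs would be GAN-extracted retinal features rather than random vectors, and the weights would be trained, with the African Buffalo Optimization algorithm tuning hyperparameters.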

Paper 98: The Application of Artificial Intelligence Technology in Ideological and Political Education

Abstract: For many schools, artificial intelligence is more than a practical backdrop; it is also a technical tool and an opportunity for development. The deep integration and standardization of artificial intelligence can inject new technological momentum into identifying the ideological dynamics of educational subjects, improving the accuracy of educational content, and expanding the spatial dimension of education, and it has become an inevitable trend of innovation and development. However, many potential risks and practical problems remain at the level of value premises, technical limits, and concrete operation, such as privacy protection and ideological security risks, the loss of educational subjectivity, the digitization of educational relations, and the shortage of specialized talent. It is therefore necessary to view the technical momentum and potential risks of artificial intelligence dialectically: promote the rationality of educational values, strengthen technical supervision, build an intelligent education team, reasonably define the integration boundary and application scope of artificial intelligence, and combine human initiative with machine intelligence. Such a combination of strengths actively explores a path of coexistence and co-prosperity between education and technology and consciously constructs an intelligent form of both.

Author 1: Chao Xu
Author 2: Lin Wu

Keywords: Artificial intelligence; ideological and political education; wisdom development; semantic understanding and emotional analysis

PDF

Paper 99: Predicting Students' Academic Performance Through Machine Learning Classifiers: A Study Employing the Naive Bayes Classifier (NBC)

Abstract: Modern universities must strategically analyze and manage student performance, utilizing knowledge discovery and data mining to extract valuable insights and enhance efficiency. Educational Data Mining (EDM) is a theory-oriented approach in academic settings that integrates computational methods to improve academic performance and faculty management. Machine learning algorithms are essential for knowledge discovery, enabling accurate performance prediction and early student identification, with classification being a widely applied method in predicting student performance based on various traits. Utilizing the Naive Bayes classifier (NBC) model, this research predicts student performance by harnessing the robust capabilities inherent in this classification tool. To bolster both efficiency and accuracy, the model integrates two optimization algorithms, namely Jellyfish Search Optimizer (JSO) and Artificial Rabbits Optimization (ARO). This underscores the research's commitment to employing cutting-edge machine learning and algorithms inspired by nature to achieve heightened precision in predicting student performance through the refinement of decision-making and prediction quality. To classify and predict G1 and G3 grades and evaluate students' performance in this study, a comprehensive analysis of the information pertaining to 395 students has been conducted. The results indicate that in predicting G1, the NBAR model, with an F1_Score of 0.882, performed almost 1.03% better than the NBJS model, which had an F1_Score of 0.873. In G3 prediction, the NBAR model outperformed the NBJS model with F1_Score values of 0.893 and 0.884, respectively.

Author 1: Xin ZHENG
Author 2: Conghui LI

Keywords: Machine learning; Naive Bayes Classifier; Artificial Rabbits Optimization; Jellyfish Search Optimizer; student performance

PDF
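The Naive Bayes classifier at the heart of the study above is easy to sketch for continuous features. The following Gaussian NB (a common variant; the abstract does not state which likelihood the authors use) fits per-class means and variances on an invented toy grade dataset:

```python
import numpy as np

def gnb_fit(X, y):
    """Per-class mean, variance, and prior for Gaussian Naive Bayes."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return stats

def gnb_predict(X, stats):
    """Pick the class maximizing log prior + independent Gaussian log-likelihoods."""
    scores = []
    for c, (mu, var, prior) in stats.items():
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        scores.append(log_lik + np.log(prior))
    classes = list(stats)
    return np.array([classes[i] for i in np.argmax(scores, axis=0)])

# Invented toy data: two exam scores per student; 1 = passing profile, 0 = at risk.
X = np.array([[15, 14], [16, 15], [14, 13], [6, 5], [5, 7], [7, 6]], float)
y = np.array([1, 1, 1, 0, 0, 0])
stats = gnb_fit(X, y)
preds = gnb_predict(X, stats)
```

Metaheuristics such as JSO and ARO, as described in the abstract, would wrap a model like this and tune its configuration against validation performance.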

Paper 100: A Deep Learning-based Framework for Vehicle License Plate Detection

Abstract: In the contemporary landscape of smart transportation systems, the imperative role of intelligent traffic monitoring in bolstering efficiency, safety, and sustainability cannot be overstated. Leveraging recent strides in computer vision, machine learning, and data analytics, this study addresses the pressing need for advancements in car license plate recognition within these systems. Employing an innovative approach based on the YOLOv5 deep learning architecture, the study focuses on refining the accuracy of license plate recognition. A bespoke dataset is meticulously curated to facilitate a comprehensive evaluation of the proposed methodology, with extensive experiments conducted and metrics such as precision, recall, and F1-score employed for assessment. The outcomes underscore the efficacy of the approach in significantly enhancing the precision and accuracy of license plate recognition. This tailored dataset ensures a rigorous evaluation, affirming the practical viability of the proposed approach in real-world scenarios. The study not only showcases the successful application of deep learning and YOLOv5 in achieving accurate license plate detection and recognition but also contributes to the broader discourse on advancing intelligent traffic monitoring for more robust and efficient smart transportation systems.

Author 1: Deming Yang
Author 2: Ling Yang

Keywords: Intelligent traffic monitoring; smart transportation; deep learning; Yolov5; performance evaluation

PDF

Paper 101: Estimation of Heating Load Consumption in Residential Buildings using Optimized Regression Models Based on Support Vector Machine

Abstract: Accurate energy consumption forecasting and assessing retrofit options are vital for energy conservation and emissions reduction. Predicting building energy usage is complex due to factors like building attributes, energy systems, weather conditions, and occupant behavior. Extensive research has led to diverse methods and tools for estimating building energy performance, including physics-based simulations. However, accurate simulations often require detailed data and vary with modeling sophistication. The growing availability of public building energy data offers opportunities for applying machine learning to predict building energy performance. This study evaluates three Support Vector Regression (SVR) models for estimating building heating load consumption: a standalone SVR, one optimized with the Transit Search Optimization Algorithm (TSO), and one optimized with the Coot Optimization Algorithm (COA). The training set consists of 70% of the data and incorporates eight input variables describing the geometric and glazing characteristics of the buildings. Following validation on 15% of the dataset, performance on the remaining 15% is evaluated using five assessment metrics. Among the three candidate models, the SVR optimized with the Coot optimization algorithm (SVCO) demonstrates remarkable accuracy and stability, reducing prediction errors by an average of 20% to over 50% compared with the other two models and achieving a maximum R² value of 0.992 for heating load prediction.

Author 1: Chao WANG
Author 2: Xuehui QIU

Keywords: Heating load demand; prediction models; building energy consumption; support vector machine; metaheuristic optimization algorithms

PDF
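The pattern in the abstract above, a search algorithm tuning a regressor's hyperparameters against validation error, can be sketched with stand-ins: random search replaces TSO/COA, and a closed-form ridge regressor replaces SVR so the example stays dependency-free. The data are synthetic:

```python
import numpy as np

def ridge_fit_predict(X_tr, y_tr, X_te, alpha):
    """Ridge regression in closed form: (X'X + alpha*I) w = X'y."""
    A = X_tr.T @ X_tr + alpha * np.eye(X_tr.shape[1])
    w = np.linalg.solve(A, X_tr.T @ y_tr)
    return X_te @ w

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 8))                  # eight synthetic building features
w_true = rng.normal(size=8)
y = X @ w_true + 0.1 * rng.normal(size=80)    # synthetic "heating load"
X_tr, y_tr, X_val, y_val = X[:60], y[:60], X[60:], y[60:]

best_alpha, best_err = None, np.inf
for _ in range(50):                           # candidates scored on validation error
    alpha = 10 ** rng.uniform(-4, 2)
    err = np.mean((ridge_fit_predict(X_tr, y_tr, X_val, alpha) - y_val) ** 2)
    if err < best_err:
        best_alpha, best_err = alpha, err
```

A metaheuristic such as TSO or COA replaces the blind sampling line with guided moves through the hyperparameter space, but the evaluate-on-validation loop is the same.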

Paper 102: Application of Ant Colony Optimization Improved Clustering Algorithm in Malicious Software Identification

Abstract: Due to the increasing threat of malware to computer systems and networks, traditional malware detection and recognition technologies face difficulties and limitations, so exploring new methods to improve the accuracy and efficiency of malware identification has become an urgent need. This study introduces the ant colony algorithm to optimize traditional clustering algorithms and their parameters. The experimental results showed that the improvement rates of the improved algorithm in accuracy, echo value, and false alarm rate were 0.253, 0.115, and 0.056, respectively. Accuracy on the training and validation sets continued to increase while the loss curve continued to decrease. In addition, the improved algorithm modeled data feature relationships and temporal information more strongly, which helps improve the recognition of virus and worm software. The improved algorithm occupied fewer computing resources than other algorithms while still effectively monitoring device operation. Compared with traditional methods, this method can identify malicious software more accurately and effectively pick out malicious samples from large-scale datasets, which is of great significance for protecting computer systems and network security.

Author 1: Yong Qian

Keywords: Ant colony algorithm; clustering algorithm; malicious software identification; computer security; optimization algorithm

PDF

Paper 103: The Application of MIR Technology in Higher Vocational English Teaching

Abstract: The traditional teaching model is teacher-centered, with conservative textbooks and methods. Multimedia information retrieval technology can, to some extent, provide relevant information based on user query conditions, thereby alleviating the problem of information overload. This study applies the image, audio, and video retrieval techniques of multimedia information retrieval technology to vocational English education. It is recommended that visual, auditory, and video materials be included in the course plan to meet the needs of all students and help ensure that the teaching objectives of each unit are achieved. Multimedia information retrieval technology can create a new learning mode in which vocational college students use mobile terminals for learning activities anytime and anywhere, making learning more comfortable and personalized. A random double-blind survey questionnaire was designed to investigate student satisfaction and test the effectiveness of multimedia information retrieval technology in vocational college English teaching. According to the survey results, the majority of students acknowledge the performance of multimedia information retrieval technology in English teaching. The application of multimedia information retrieval technology in vocational English teaching is therefore conducive to cultivating students' self-learning ability and creative thinking, and it has improved the quality and level of information literacy education for college students.

Author 1: Xiaoting Deng

Keywords: English teaching in higher vocational colleges; multimedia information retrieval technology; applied research; modern teaching models

PDF

Paper 104: Meta-Model Classification Based on the Naïve Bias Technique Auto-Regulated via Novel Metaheuristic Methods to Define Optimal Attributes of Student Performance

Abstract: Accurately assessing and predicting student performance is critical in today’s educational environment. Schools depend on evaluating students’ skills, forecasting their grades, and providing customized instruction to improve academic performance, and early intervention is essential for pinpointing areas in need of development. By predicting students’ futures in particular subjects, data mining, a potent technique for revealing hidden patterns within large datasets, helps lower failure rates. These methods are combined in the field of educational data mining, which analyzes data from educators and students with the aim of raising academic achievement. In this study, the Naive Bayes classification (NBC) model serves as the primary predictor of student performance, and two recent optimization strategies, Alibaba and the Forty Thieves (AFT) and Leader Harris Hawk’s optimization (LHHO), are used to improve the model’s accuracy. The findings show that the NBC+AFT model performs more accurately than the other models, with Accuracy, Precision, Recall, and F1-Score values of 0.891, 0.9, 0.89, and 0.89, respectively. These metrics outperform those of competing models, highlighting the effectiveness of this strategy. The NBC+AFT model’s strong performance brings educational institutions closer to predicting students’ success more precisely and supporting them along the way.

Author 1: Zhen Ren
Author 2: Mingmin He

Keywords: Student performance; machine learning; classification; Naive Bayes Classification; Alibaba and the forty thieves; Leader Harris Hawk’s Optimization

PDF

Paper 105: Design of Teaching Mode and Evaluation Method of Effect of Art Design Course from the Perspective of Big Data

Abstract: Modern curriculum teaching should fully leverage the advantages of modern technology, especially in teaching methods, and deeply understand and apply big data technology. This article explores the design and effectiveness-evaluation methods of curriculum teaching models from the perspective of big data. Using big data thinking, we conducted research and practical exploration to compare and evaluate teaching mode designs. In the art and design course, we adopted a blended learning model combining MOOC and SPOC, innovating on traditional teaching methods and plans, and investigated the teaching effectiveness and feasibility of this blended model. By extensively evaluating teaching techniques, evaluation methods, and technologies that support the learning process, we reconstructed blended learning evaluation indicators and evaluated the effectiveness of learning outcomes and processes under different teaching modes. The results show that the blended learning model based on a big data perspective can significantly improve the effectiveness of classroom teaching, while learners' self-learning ability and practical innovation ability are also further improved.

Author 1: Danjun ZHU
Author 2: Gangtian LIU

Keywords: Big data perspective; teaching mode; evaluation system; art and design; hybrid teaching

PDF

Paper 106: Research on Evaluation and Improvement of Government Short Video Communication Effect Based on Big Data Statistics

Abstract: Mainstream media is no longer the only way for people to obtain information, and official media no longer hold absolute control: people can choose the form and content of the information they receive according to their preferences, which poses a new challenge to traditionally formal government departments. Since the rise of short video, government bodies have shown great interest in its characteristics and functions, opening accounts and laying out government-affairs short video on platforms such as TikTok and Kwai, and actively participating in content production and dissemination. Through the continuous release of well-designed hit content, the popularity of government-affairs short videos on TikTok and other platforms has continued to rise, attracting large followings and social attention and producing good results. This paper proposes an optimized design scheme for evaluating and improving the dissemination effect of government short video based on big data statistics. The basic situation of government video is obtained through content analysis, and the coefficient of determination and linear regression from big data statistics are then used to extract common factors that improve the dissemination effect, and thus the influence, of government short video. Finally, a simulation test and analysis are carried out. Simulation results show that the proposed algorithm attains an accuracy 8.24% higher than the traditional algorithm. Research on promotion planning and design centered on the dissemination of government-affairs short videos has important practical significance for guiding local grass-roots governments in building public services and public feedback channels.

Author 1: Man Xu

Keywords: Big data statistics; short videos of government affairs; communication effect; linear regression; mainstream media

PDF

Paper 107: Improved Ant Colony Algorithm Based on Binarization in Computer Text Recognition

Abstract: Pheromones, path selection, and probability transfer functions are the main factors affecting the performance of computer text recognition, with the path selection function being the most important factor for the recognition rate. In response to the difficulties of path selection and slow algorithm convergence in text recognition, an edge detection algorithm based on an improved ant colony optimization algorithm is proposed. The strong denoising performance of the ant colony optimization algorithm reduces the interference of textured backgrounds, and the edge extraction effect is analyzed in the connected domain to overcome complex effects. Finally, an improved Otsu binarization algorithm is used to recognize the text. According to the results, the proposed method effectively preserves the edge information of characters in images, the text regions are well localized, and the accuracy rate reaches around 85%. The tuned threshold improves the binarization effect. The text recognition rate of the improved ant colony algorithm has generally reached 80%, with good text positioning accuracy and recognition rate, which gives it great practical significance in computer text recognition.

Author 1: Zhen Li

Keywords: Binarization; ant colony algorithm; text recognition; edge detection; Otsu algorithm

PDF
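The Otsu binarization step named in the abstract above is a standard algorithm and can be shown exactly: it scans all 256 gray-level thresholds and keeps the one maximizing between-class variance. The bimodal toy "text" image is invented for illustration:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights below/above t
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal toy image: dark "text" strokes (~40) on a light page (~200).
img = np.full((32, 32), 200, dtype=np.uint8)
img[8:24, 8:12] = 40
t = otsu_threshold(img)
binary = img < t                                # True where "ink"
```

The "improved" Otsu variant in the paper tunes this threshold further; the baseline above already separates the two intensity modes cleanly on bimodal images.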

Paper 108: Application of Style Transfer Algorithm in Artistic Design Expression of Terrain Environment

Abstract: Using artistic expression to depict terrain and landforms can not only convey terrain information but also spread art and culture. Existing landscape design methods focus on the accurate expression of terrain height and the realistic expression of form, while neglecting the aesthetic aspect of landscape design. In view of this, this paper studied the use of generative adversarial networks, constructed a presentation mode for landscape plane style, and realized the expression of landscape art style. A terrain style transfer model based on a pre-trained deep neural network and a style transfer algorithm was constructed to achieve a variety of terrain style expressions. The results showed that the proposed style transfer algorithm achieved a higher Peak Signal-to-Noise Ratio (PSNR) than the style attention network and adaptive instance normalization, with the PSNR index value increased by 7.5% and 16.5%, respectively. This indicates that the proposed style transfer model has advantages in image diversity and fidelity. The Structural Similarity Index of the proposed algorithm was also greatly improved. This research expands the methods for computer rendering of terrain environment art, which is of great significance for the preservation of traditional Chinese culture.

Author 1: Yangfei Chen

Keywords: Generative adversarial network; terrain; style transfer; artistic; peak signal-to-noise ratio; Structural Similarity Index

PDF
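The Peak Signal-to-Noise Ratio used as the evaluation metric above has a standard closed form, PSNR = 10·log₁₀(MAX²/MSE). A minimal sketch with an invented uniform-error image pair:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                     # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((16, 16), dtype=np.uint8)
shifted = ref + np.uint8(16)    # uniform error of 16 gray levels -> MSE = 256
value = psnr(ref, shifted)      # about 24.05 dB
```

Higher PSNR means the stylized image deviates less from the reference, which is why the abstract reports relative PSNR gains of 7.5% and 16.5%.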

Paper 109: An Improved K-means Clustering Algorithm Towards an Efficient Educational and Economical Data Modeling

Abstract: Education is one of the most crucial pillars for the sustainable development of societies. It is essential for each country to assess its level of access to education. However, the conventional methods of ranking access to education have their limitations, so strategic planning is needed to develop new classification methods. This study aims to address this need by developing an innovative and efficient unsupervised K-Means model capable of predicting global access to education. The novel approach adopted in this research fills a gap in traditional ranking methods for assessing access to education. Utilizing statistical analysis of data sourced from the World Bank, we evaluated education access across 217 countries spanning various continents and levels of development. By employing economic and educational factors as input for the K-Means algorithm, we successfully identified three distinct clusters, each comprising countries with similar levels of education access. The reliability of our approach was reinforced through rigorous statistical testing to validate the results. Furthermore, we compared the economies of countries within each cluster using primary data, enabling specific recommendations at the economic level to assist countries with limited education access in enhancing their circumstances. Finally, this study makes a significant contribution by introducing a new approach to globally assess education access. The findings provide practical recommendations to aid countries in improving their educational opportunities.

Author 1: Rabab El Hatimi
Author 2: Cherifa Fatima Choukhan
Author 3: Mustapha Esghir

Keywords: Education assessment; unsupervised learning; statistical analysis; world bank data; K-means

PDF

Paper 110: Investigating the Impact of Preprocessing Techniques and Representation Models on Arabic Text Classification using Machine Learning

Abstract: Arabic Text Classification (ATC) is a crucial step for various Natural Language Processing (NLP) applications. It emerged as a response to the exponential growth of online content such as social posts and review comments. In this study, preprocessing techniques and representation models are used to evaluate the effectiveness of ATC using Machine Learning (ML). Generally, the ATC operation depends on various factors, such as stemming in preprocessing, feature extraction and selection, and the nature of the dataset. To enhance overall classification performance, preprocessing methodologies are primarily employed to transform each Arabic term into its root form and reduce the dimensionality of the representation. In the representation of Arabic text, feature extraction and selection processes are imperative, as they significantly enhance the performance of ATC. This study implements the chosen classifiers using various feature selection algorithms. A comprehensive assessment of classification outcomes is conducted by comparing various classifiers, including Multinomial Naive Bayes (MNB), Bernoulli Naive Bayes (BNB), Stochastic Gradient Descent (SGD), Support Vector Classifier (SVC), Logistic Regression (LR), and linear Support Vector Classifier (LSVC). These ML classifiers are assessed on short and long Arabic text benchmark datasets: the BBC Arabic corpus and the COVID-19 dataset. The assessment findings indicate that classification efficacy is significantly influenced by the preprocessing methods, representation model, classification algorithm, and the datasets’ characteristics. In most cases, the SGD and LSVC classifiers have consistently surpassed the other classifiers for the datasets under consideration when significant features are chosen.
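A common representation model in such pipelines is TF-IDF weighting over stemmed tokens. The sketch below is a generic illustration with invented English tokens, not the paper's Arabic pipeline:

```python
import math

def tf_idf(docs):
    """TF-IDF weights per document; docs are lists of (already stemmed) tokens."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        tf = {t: doc.count(t) / len(doc) for t in set(doc)}
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

docs = [["economy", "market"], ["economy", "health"]]
w = tf_idf(docs)
print(w[0])  # "economy" appears everywhere, so its IDF (and weight) is zero
```

Terms occurring in every document get zero weight, which is exactly the dimensionality-reducing effect the abstract attributes to good representation models.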

Author 1: Mahmoud Masadeh
Author 2: Moustapha. A
Author 3: Sharada B
Author 4: Hanumanthappa J
Author 5: Hemachandran K
Author 6: Channabasava Chola
Author 7: Abdullah Y. Muaad

Keywords: Arabic Text Classification (ATC); Text Mining (TM); Machine Learning (ML); preprocessing methods; representation models; Feature Extraction (FE); Feature Selection (FS)

PDF

Paper 111: Evaluating Tree-based Ensemble Strategies for Imbalanced Network Attack Classification

Abstract: With the continual evolution of cybersecurity threats, the development of effective intrusion detection systems is increasingly crucial and challenging. This study tackles these challenges by exploring imbalanced multiclass classification, a common situation in network intrusion datasets mirroring real-world scenarios. The paper aims to empirically assess the performance of diverse classification algorithms in managing imbalanced class distributions. Experiments were conducted using the UNSW-NB15 network intrusion detection benchmark dataset, comprising ten highly imbalanced classes. The evaluation includes basic, traditional algorithms like the Decision Tree, K-Nearest Neighbor, and Gaussian Naive Bayes, as well as advanced ensemble methods such as Gradient Boosted Decision Trees (GraBoost) and AdaBoost. Our findings reveal that the Decision Tree surpassed the Multi-Layer Perceptron, K-Nearest Neighbor, and Naive Bayes in terms of overall F1-score. Furthermore, thorough evaluations of nine tree-based ensemble algorithms were performed, showcasing their varying efficacy. Bagging, Random Forest, ExtraTrees, and XGBoost achieved the highest F1-scores. In per-class analysis, however, XGBoost demonstrated exceptional performance relative to the other algorithms, achieving the highest F1-scores in eight of the ten classes in the dataset. These results establish XGBoost as a predominant method for handling multiclass imbalanced classification, with Bagging being the closest feasible alternative, as it attains nearly the same accuracy and F1-score as XGBoost.
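The per-class and macro-averaged F1-scores used for comparisons like this one follow directly from one-vs-rest counts; a minimal sketch on an invented three-class toy example (not the UNSW-NB15 data):

```python
def f1_per_class(y_true, y_pred, labels):
    """One-vs-rest F1 for each class label."""
    scores = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

y_true = [0, 0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 0, 1, 1, 2]
f1 = f1_per_class(y_true, y_pred, labels=[0, 1, 2])
macro_f1 = sum(f1.values()) / len(f1)
print(f1, macro_f1)
```

Because macro averaging weights every class equally regardless of size, it is the usual headline metric for imbalanced multiclass problems like this one.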

Author 1: Hui Fern Soon
Author 2: Amiza Amir
Author 3: Hiromitsu Nishizaki
Author 4: Nik Adilah Hanin Zahri
Author 5: Latifah Munirah Kamarudin
Author 6: Saidatul Norlyana Azemi

Keywords: Multiclass imbalanced classification; ensemble algorithm; network attack; UNSW-NB15 dataset; F1-score

PDF

Paper 112: Bystander Detection: Automatic Labeling Techniques using Feature Selection and Machine Learning

Abstract: Hostile or aggressive behavior on an online platform by an individual or a group is termed cyberbullying. A bystander is one who sees or knows about such incidents of cyberbullying. Bystanders fall into three roles: a defender, who intervenes and can mitigate the impact of bullying; an instigator, who abets the bully and can add to the victim’s suffering; and an impartial onlooker, who remains neutral and observes the scenario without getting engaged. Studying the behavior of bystander roles can help in understanding the scale and progression of bullying incidents. However, the lack of data hinders research in this area. Recently, a dataset of Twitter threads, CYBY23, containing main tweets and bystander replies was published on Kaggle in October 2023. The dataset includes extracted features related to the toxicity and sensitivity of the main tweets and reply tweets. The authors engaged manual annotators to assign the labels of bystanders’ roles. Manually labeling bystander roles is a labor-intensive task, which raises the need for an automatic labeling technique for identifying the bystander role. In this work, we aim to propose a highly efficient machine-learning model for the automatic labeling of bystanders. Initially, the dataset was re-sampled using SMOTE to make it balanced. Next, we experimented with 12 models using various feature engineering techniques. The best features were selected for further experimentation by removing highly correlated and less relevant features. The models were evaluated on the metrics of accuracy, precision, recall, and F1 score. We found that the Random Forest Classifier (RFC) model with a certain set of features was the highest scorer among all 12 models. The RFC model was further tested against various splits of training and test sets. The highest results were achieved using a training set of 85% and a test set of 15%: 78.83% accuracy, 81.79% precision, 74.83% recall, and 79.45% F1 score. The automatic labeling proposed in this work will help in scaling the dataset, which will be useful for further studies related to cyberbullying.
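The SMOTE re-sampling step mentioned above synthesizes new minority-class samples by interpolating between a sample and one of its nearest minority neighbors. A deterministic sketch follows (the two-feature vectors are invented; real SMOTE draws the interpolation gap uniformly from [0, 1] and considers k nearest neighbors):

```python
def nearest_neighbor(x, others):
    """Euclidean nearest neighbor of x among the other minority samples."""
    return min(others, key=lambda o: sum((a - b) ** 2 for a, b in zip(x, o)))

def smote_sample(x, minority, gap=0.5):
    """Synthetic point on the segment between x and its nearest minority neighbor."""
    nb = nearest_neighbor(x, [m for m in minority if m != x])
    return [a + gap * (b - a) for a, b in zip(x, nb)]

minority = [[1.0, 2.0], [3.0, 6.0], [10.0, 10.0]]
synthetic = smote_sample(minority[0], minority, gap=0.5)
print(synthetic)  # midpoint between [1, 2] and its nearest neighbor [3, 6]
```

Repeating this for many minority samples balances the class distribution without duplicating existing rows.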

Author 1: Anamika Gupta
Author 2: Khushboo Thakkar
Author 3: Veenu Bhasin
Author 4: Aman Tiwari
Author 5: Vibhor Mathur

Keywords: Bystanders; cyberbullying; machine learning; defender; instigator; impartial; toxicity; twitter

PDF

Paper 113: Using Deep Learning to Recognize Fake Faces

Abstract: In recent times, many fake faces have been created using deep learning and machine learning. Most fake faces made with deep learning are referred to as “deepfake photos.” Our study’s primary goal is to propose a useful framework for recognizing deepfake photos using deep learning and transfer learning techniques. This paper proposes convolutional neural network (CNN) models based on deep transfer learning methodologies, in which the final fully connected layer of each pretrained model is replaced by a classifier composed of global average pooling (GAP), dropout, and a dense layer with two SoftMax neurons. Within the suggested framework, DenseNet201 produced the best accuracy of 86.85% on the combined deepfake and real picture datasets, while MobileNet produced a lower accuracy of 82.78%. The experimental results showed that the proposed method outperformed other state-of-the-art fake picture discriminators. The proposed architecture helps cybersecurity specialists fight deepfake-related cybercrimes.
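The replacement head described above (GAP followed by a two-neuron SoftMax layer) amounts to a short forward pass. The numpy sketch below uses random stand-in features and weights, not the trained model; dropout is omitted because it is the identity at inference time:

```python
import numpy as np

def gap(feature_map):
    """Global average pooling: (H, W, C) feature map -> length-C vector."""
    return feature_map.mean(axis=(0, 1))

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
features = rng.standard_normal((7, 7, 4))  # stand-in for a backbone's output
w = rng.standard_normal((4, 2))            # dense layer: 4 channels -> 2 classes
b = np.zeros(2)

pooled = gap(features)
probs = softmax(pooled @ w + b)            # [P(real), P(deepfake)]
print(probs)
```

GAP collapses each channel to its spatial mean, so the head has far fewer parameters than a flattened fully connected layer, which is why it is a popular transfer-learning head.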

Author 1: Jaffar Atwan
Author 2: Mohammad Wedyan
Author 3: Dheeb Albashish
Author 4: Elaf Aljaafrah
Author 5: Ryan Alturki
Author 6: Bandar Alshawi

Keywords: Deep learning; machine learning; deepfake; convolutional neural network; global average pooling

PDF

Paper 114: Enhancing Adversarial Defense in Neural Networks by Combining Feature Masking and Gradient Manipulation on the MNIST Dataset

Abstract: This research investigates the escalating issue of adversarial attacks on neural networks within AI security, specifically targeting image recognition using the MNIST dataset. Our exploration centered on the potential of a combined approach incorporating feature masking and gradient manipulation to bolster adversarial defense. The main objective was to evaluate the extent to which this integrated strategy enhances network resilience against such attacks, contributing to the advancement of more robust AI systems. In our experimental framework, we utilized a conventional neural network architecture, integrating various levels of feature masking alongside established training protocols. A baseline model, devoid of feature masking, functioned as a comparative standard to gauge the efficacy of our proposed technique. We assessed the model’s performance in standard scenarios as well as under Fast Gradient Sign Method (FGSM) adversarial attacks. The outcomes provided significant insights. The baseline model demonstrated a high test accuracy of 98% on the MNIST dataset, yet it showed limited resistance to adversarial incursions, with accuracy diminishing to 60% under FGSM attacks. Conversely, models incorporating feature masking exhibited an inverse relationship between masking proportion and accuracy, counterbalanced by an enhancement in adversarial resilience. Specifically, a 10% masking ratio achieved a 96% accuracy rate coupled with 75% robustness against attacks, 30% masking led to 94% accuracy with an 80% robustness level, and a 50% masking threshold resulted in 92% accuracy, attaining the apex of robustness at 85%. These results affirm the efficacy of feature masking in augmenting adversarial defense, highlighting a pivotal equilibrium between accuracy and resilience.
The study lays the groundwork for further investigations into refined masking methodologies and their amalgamation with other defensive strategies, potentially broadening the scope of neural network security against adversarial threats. Our contributions are significant to the realm of AI security, showcasing an effective strategy for the development of more secure and dependable neural network frameworks.
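The FGSM attack evaluated above perturbs an input by x_adv = x + ε·sign(∇x L). For a toy logistic classifier the input gradient of the cross-entropy loss has the closed form (p − y)·w, so the attack fits in a few lines; the weights and input below are invented for illustration, not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic model: p = sigmoid(w . x), cross-entropy loss vs. true label y
w = np.array([2.0, -3.0, 1.0])
x = np.array([0.5, 0.1, -0.2])
y = 1.0

p = sigmoid(w @ x)
grad_x = (p - y) * w               # d(loss)/dx for binary cross-entropy
eps = 0.1
x_adv = x + eps * np.sign(grad_x)  # FGSM: one signed step per input dimension
print(x_adv)
```

Every feature moves by exactly ±ε in the loss-increasing direction, which is what makes FGSM cheap to run and a standard first robustness test.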

Author 1: Ganesh Ingle
Author 2: Sanjesh Pawale

Keywords: Feature masking; neural networks; gradient manipulation; adversarial resilience; fast gradient sign method

PDF

Paper 115: Automated Paper-based Multiple Choice Scoring Framework using Fast Object Detection Algorithm

Abstract: Optical mark reader (OMR) technology is an important research topic in artificial intelligence, with a wide range of applications such as text processing, document recognition, surveying, statistics, and process automation. Researchers have proposed many methods employing either traditional image processing and statistics or complex machine learning models. This paper presents a feasible solution to the OMR problem. It uses a fast object detection model to detect markers effectively and then segments the answer sheet into smaller regions for the mark reader model to recognize the user’s selections accurately. The experimental results on actual answer sheets from college exams show that the error is less than 0.5 percent, and the processing speed can reach up to 50 answer sheets per minute on a standard Core i5 personal computer.

Author 1: Pham Doan Tinh
Author 2: Ta Quang Minh

Keywords: Optical mark reader; multiple choice exam; automatic scoring; segmentation; fast object detection

PDF

Paper 116: EpiNet: A Hybrid Machine Learning Model for Epileptic Seizure Prediction using EEG Signals from a 500 Patient Dataset

Abstract: The accurate prognosis of epileptic seizures has great significance in enhancing the management of epilepsy, necessitating the creation of robust and precise predictive models. EpiNet, our hybrid machine learning model for EEG signal analysis, incorporates key elements of computer vision and machine learning, positioning it within this advancing technological domain for enhanced seizure prediction accuracy. Hence, this research aims to provide a thorough investigation using the Bonn Electroencephalogram (EEG) signals dataset as an alternative method. The methodology used in this study encompasses the training of five machine learning models, namely Support Vector Machines (SVM), Gaussian Naive Bayes, Gradient Boosting, XGBoost, and LightGBM. Performance criteria, including accuracy, sensitivity, specificity, precision, recall, and F1-score, are extensively used to assess the efficacy of each model. A unique contribution is the development of a hybrid model, integrating predictions from the individual models to enhance the overall accuracy of epilepsy identification. Experimental results demonstrate notable success, with the hybrid model achieving an accuracy of 99.81%. Performance matrices for both classes demonstrate the hybrid model’s reliability for epileptic seizure prediction. Visualizations, including ROC-AUC curves and accuracy curves, provide a nuanced understanding of the models’ discriminative abilities and performance improvement with increasing sample size. A comparative analysis with existing studies reaffirms the advancement of our research, positioning it at the forefront of epileptic seizure prediction. This study not only highlights the promising integration of machine learning in medical diagnostics but also emphasises areas for future refinement. The achieved results open avenues for proactive healthcare management and improved patient outcomes.
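One common way to integrate predictions from several base models, as the hybrid model above does, is majority voting. The sketch below is a generic illustration with invented predictions from three hypothetical base models; the paper's hybrid may combine models differently:

```python
from collections import Counter

def majority_vote(model_predictions):
    """Combine per-model prediction lists into one label per sample."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*model_predictions)]

# Hypothetical seizure (1) / non-seizure (0) predictions from three models
svm_pred = [0, 1, 1]
xgb_pred = [0, 1, 0]
lgbm_pred = [1, 1, 0]
print(majority_vote([svm_pred, xgb_pred, lgbm_pred]))
```

With an odd number of binary classifiers there are no ties, and the ensemble can only err on a sample when a majority of the base models err together.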

Author 1: Oishika Khair Esha
Author 2: Nasima Begum
Author 3: Shaila Rahman

Keywords: Epilepsy; seizure prediction; computer vision; hybrid model; electroencephalography; bonn dataset; proactive healthcare

PDF

Paper 117: A Comparative Study of ChatGPT-based and Hybrid Parser-based Sentence Parsing Methods for Semantic Graph-based Induction

Abstract: Sentence parsing is a fundamental step in the conversion of a text document into semantic graphs. In this research, novel sentence parsing techniques for semantic graph-based induction are presented, namely the ChatGPT-based and Hybrid Parser-based approaches. The performance of these two approaches in the context of inducing semantic networks from textual data is assessed through a comprehensive analysis in this study. The primary purpose is to enhance the construction of semantic graphs, specifically focusing on capturing detailed event descriptions and relationships within text. The research finds that the Hybrid Parser-based approach exhibits a slight advantage in accuracy (acc_hybrid = 0.87) compared to ChatGPT (acc_GPT = 0.85) in sentence parsing tasks. Furthermore, the efficiency analysis reveals that ChatGPT’s response quality varies with different prompt sizes, while the Hybrid Parser-based method consistently maintains an “excellent” response quality rating.

Author 1: Walelign Tewabe
Author 2: Laszlo Kovacs

Keywords: Adverb prediction; ChatGPT; hybrid parser-based; natural language processing; sentence parsing; semantic graph induction

PDF

Paper 118: Overview of Data Augmentation Techniques in Time Series Analysis

Abstract: Time series data analysis is vital in numerous fields, driven by advancements in deep learning and machine learning. This paper presents a comprehensive overview of data augmentation techniques in time series analysis, with a specific focus on their applications within deep learning and machine learning. We commence with a systematic methodology for literature selection, curating 757 articles from prominent databases. Subsequent sections delve into various data augmentation techniques, encompassing traditional approaches like interpolation and advanced methods like Synthetic Data Generation, Generative Adversarial Networks (GANs), and Variational Autoencoders (VAEs). These techniques address complexities inherent in time series data. Moreover, we scrutinize limitations, including computational costs and overfitting risks. However, it’s essential to note that our analysis does not end with limitations. We also comprehensively analyzed the advantages and applicability of the techniques under consideration. This holistic evaluation allows us to provide a balanced perspective. In summary, this overview illuminates data augmentation’s role in time series analysis within deep and machine-learning contexts. It provides valuable insights for researchers and practitioners, advancing these fields and charting paths for future exploration.
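Two of the traditional augmentation techniques surveyed above, interpolation and jittering, can each be sketched in a few lines. The short synthetic series and noise bound are invented for illustration:

```python
import random

def interpolate(series):
    """Insert the midpoint between consecutive samples (linear upsampling)."""
    out = [series[0]]
    for a, b in zip(series, series[1:]):
        out += [(a + b) / 2, b]
    return out

def jitter(series, sigma=0.05, seed=0):
    """Add bounded uniform noise to each point (deterministic via seed)."""
    rng = random.Random(seed)
    return [v + rng.uniform(-sigma, sigma) for v in series]

ts = [1.0, 2.0, 4.0]
print(interpolate(ts))        # [1.0, 1.5, 2.0, 3.0, 4.0]
print(jitter(ts, sigma=0.1))  # same length, slightly perturbed
```

Both transforms preserve the overall shape of the series while producing new training examples, which is the essence of the "traditional" augmentation family before GAN- or VAE-based generation.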

Author 1: Ihababdelbasset ANNAKI
Author 2: Mohammed RAHMOUNE
Author 3: Mohammed BOURHALEB

Keywords: Time series; data augmentation; machine learning; deep learning; synthetic data generation

PDF

Paper 119: Utilizing UAV Data for Neural Network-based Classification of Melon Leaf Diseases in Smart Agriculture

Abstract: Integrating unmanned aerial vehicle (UAV) technology with plant disease detection is a significant advancement in agricultural surveillance, marking the beginning of a transformational era characterised by innovation. Traditionally, farmers have had to rely on manual visual inspections to identify melon leaf diseases, which proves to be a time-consuming and costly process in terms of labour. This paper aims to use UAV technology for plant disease detection to achieve notable progress in agricultural surveillance. Incorporating UAV technology, specifically utilising the You Only Look Once version 8 (YOLOv8) deep-learning model, is revolutionary in precision agriculture. This study uses UAV imagery in precision agriculture to explore the utility of YOLOv8, a powerful deep-learning model, for detecting diseases in melon leaves. The labelled dataset is created by annotating disease-affected areas using bounding boxes. The YOLOv8 model has been trained using a labelled dataset to detect and classify various diseases accurately. Following the training, the performance of YOLOv8 stands out significantly compared to other models, boasting an impressive accuracy of 83.2%. This high level of accuracy underscores its effectiveness in object detection tasks and positions it as a robust choice in computer vision applications. It has been shown that rigorous evaluation can help find diseases, which suggests that it could be used for early intervention in precision farming and to change how crop management systems work. This has the potential to assist farmers in promptly identifying and addressing plant issues, hence altering their crop management practices.

Author 1: Siti Nur Aisyah Mohd Robi
Author 2: Norulhusna Ahmad
Author 3: Mohd Azri Mohd Izhar
Author 4: Hazilah Mad Kaidi
Author 5: Norliza Mohd Noor

Keywords: Smart agriculture; plant disease; melon leaf disease; image processing; neural network; UAV

PDF

Paper 120: Guiding 3D Digital Content Generation with Pre-Trained Diffusion Models

Abstract: The production technology of 3D digital content involves multiple stages, including 3D modeling, simulation animation, visualization rendering, and perceptual interaction. It is not only the core technology supporting the creation of 3D digital content but also a key element in enhancing immersive application experiences in virtual reality and the metaverse. A primary focus in computer vision and computer graphics research has been on how to create 3D digital content that is efficient, convenient, controllable, and editable. Currently, producing high-quality 3D digital content still requires significant time and effort from a large number of designers. To address this challenge, leveraging artificial intelligence-generated methods to break down production barriers has emerged as an effective strategy. With the substantial breakthroughs achieved by diffusion models in the field of image generation, they also demonstrate tremendous potential in 3D digital content generation, potentially becoming a foundational model in this area. Recent studies have shown that diffusion model-based techniques for generating 3D digital content can significantly reduce production costs and enhance efficiency. Therefore, it is essential to summarize and categorize existing methods to facilitate further research. This paper systematically reviews 3D digital content generation methods, introducing related 3D representation techniques and focusing on 3D digital content generation schemes, algorithms, and pipeline based on diffusion models. We perform a horizontal comparison of different approaches in terms of generation speed and quality, deeply analyze existing challenges, and propose viable solutions. Furthermore, we thoroughly explore future research themes and directions in this domain, aiming to provide guidance and reference for subsequent research endeavors.

Author 1: Jing Li
Author 2: Zhengping Li
Author 3: Peizhe Jiang
Author 4: Lijun Wang
Author 5: Xiaoxue Li
Author 6: Yuwen Hao

Keywords: 3D Digital content; computer vision; artificial intelligence; diffusion models; 3D representation

PDF

Paper 121: A Robust Deep Learning Model for Terrain Slope Estimation

Abstract: Interest in autonomous robots has grown significantly in recent years, motivated by the many advances in computational power and artificial intelligence. Space probes landing on extra-terrestrial celestial bodies, as well as vertical take-off and landing on unknown terrains, are two examples of high levels of autonomy being pursued. These robots must be endowed with the capability to evaluate the suitability of a given portion of terrain to perform the final touchdown. In these scenarios, the slope of the terrain where a lander is about to touch the ground is crucial for a safe landing. The capability to measure the slope of the terrain underneath the vehicle is essential to perform missions where landing on unknown terrain is desired. This work attempts to develop algorithms to assess the slope of the terrain below a vehicle using monocular images in the visible spectrum. A lander takes these images with a camera pointing in the landing direction at the final descent before the touchdown. The algorithms are based on convolutional neural networks, which classify the perceived slope into discrete bins. To this end, three convolutional neural networks were trained using images taken from multiple types of surfaces, extracting features that indicate the existing inclination in the photographed surface. The metrics of the experiments show that it is feasible to identify the inclination of surfaces, along with their respective orientations. Our overall aim is that if a hazardous slope is detected, the vehicle can abort the landing and search for another, more appropriate site.
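Classifying a continuous slope angle into discrete bins, as the networks above do, reduces to a simple threshold lookup at the label level; the bin edges below are invented for illustration and are not the paper's actual discretization:

```python
def slope_bin(angle_deg, bin_edges=(5, 15, 30)):
    """Map a slope angle in degrees to a discrete class index.
    Hypothetical bins: [0,5) flat, [5,15) mild, [15,30) steep, >=30 hazardous."""
    for i, edge in enumerate(bin_edges):
        if angle_deg < edge:
            return i
    return len(bin_edges)

print([slope_bin(a) for a in (2, 10, 20, 45)])  # [0, 1, 2, 3]
```

A lander's abort logic can then be a single comparison against the index of the first bin deemed unsafe.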

Author 1: Abdulaziz Alorf

Keywords: Terrain slope estimation; spacecrafts; robotics; artificial intelligence; machine learning techniques; deep neural network; computer vision

PDF

Paper 122: Transformative Automation: AI in Scientific Literature Reviews

Abstract: This paper investigates the integration of Artificial Intelligence (AI) into systematic literature reviews (SLRs), aiming to address the challenges associated with the manual review process. SLRs, a crucial aspect of scholarly research, often prove time-consuming and prone to errors. In response, this work explores the application of AI techniques, including Natural Language Processing (NLP), machine learning, data mining, and text analytics, to automate various stages of the SLR process. Specifically, we focus on paper identification, information extraction, and data synthesis. The study delves into the roles of NLP and machine learning algorithms in automating the identification of relevant papers based on defined criteria. Researchers now have access to a diverse set of AI-based tools and platforms designed to streamline SLRs, offering automated search, retrieval, text mining, and analysis of relevant publications. The dynamic field of AI-driven SLR automation continues to evolve, with ongoing exploration of new techniques and enhancements to existing algorithms. This shift from manual efforts to automation not only enhances the efficiency and effectiveness of SLRs but also marks a significant advancement in the broader research process.

Author 1: Kirtirajsinh Zala
Author 2: Biswaranjan Acharya
Author 3: Madhav Mashru
Author 4: Damodharan Palaniappan
Author 5: Vassilis C. Gerogiannis
Author 6: Andreas Kanavos
Author 7: Ioannis Karamitsos

Keywords: Artificial intelligence; systematic literature review; scholarly data analysis; machine learning algorithms; natural language processing; scientific publication automation

PDF

Paper 123: Reciprocal Bucketization (RB) - An Efficient Data Anonymization Model for Smart Hospital Data Publishing

Abstract: With the lightning growth of the Internet of Things (IoT), enormous numbers of applications have been developed to serve industries, the environment, society, etc. Smart healthcare is one of the significant applications of the IoT, where intelligent environments enrich safety and ease of surveillance. The database of a smart hospital records patients’ sensitive information, which could face various potential privacy breaches through linkage attacks. Publishing such sensitive data is challenging, requiring adoption of the best privacy preservation model to defend against linkage attacks. In this paper, we propose a novel Reciprocal Bucketization (RB) anonymization model as the privacy preservation method to defend against identity, attribute, and correlated linkage attacks. The proposed anonymization method creates buckets of patient records and then partitions the data into sensor trajectories and Multiple Sensitive Attributes (MSA). Local suppression is employed on the sensor trajectory data and slicing on the MSA; the anonymized data to be published is then gathered by combining the anonymized sensor trajectories and MSA. The proposed method is validated on synthetic and real-time datasets by comparing its data utility loss on both the sensor trajectories and the MSA. The experimental results demonstrate that RB-Anonymization exhibits the best privacy preservation against identity, attribute, and correlated linkage attacks with negligible utility loss compared with existing methods.
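Local suppression on trajectory data can be illustrated as removing trajectory points whose support across records falls below a threshold k, since rare points are the most linkable. This is a generic sketch with invented sensor IDs, not the paper's exact procedure:

```python
from collections import Counter

def suppress_rare(trajectories, k=2):
    """Replace trajectory points seen in fewer than k records with '*'."""
    support = Counter(p for traj in trajectories for p in set(traj))
    return [[p if support[p] >= k else "*" for p in traj]
            for traj in trajectories]

records = [["s1", "s2", "s3"], ["s1", "s2"], ["s1", "s4"]]
print(suppress_rare(records, k=2))  # s3 and s4 appear only once and are masked
```

After suppression, no published trajectory contains a point unique to a single patient, blunting identity linkage at the cost of some utility.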

Author 1: Rajesh S M
Author 2: Prabha R

Keywords: Anatomization; anonymization; entropy; Pearson’s contingency coefficient; KL-divergence

PDF

Paper 124: Machine Learning in Malware Analysis: Current Trends and Future Directions

Abstract: Malware analysis is a critical component of cybersecurity due to the increasing sophistication and proliferation of malicious software. Machine learning is highly significant in malware analysis because it can process huge amounts of data, identify complex patterns, and adjust to changing threats. This paper provides a comprehensive overview of existing work on Machine Learning (ML) methods used to analyze malware, along with a description of each trend. The results of the survey demonstrate the effectiveness and importance of three trends: deep learning, transfer learning, and explainable machine learning (XML) techniques in the context of malware analysis. These approaches improve accuracy, interpretability, and transparency in detecting and analyzing malware. Moreover, the related challenges and issues are presented. After identifying these challenges, we highlight future directions and potential areas that require more attention and improvement, such as distributed computing and parallelization techniques, which can reduce training time and memory requirements for large datasets. Further investigation is also needed to develop image resizing techniques for the visual representation of malware that minimize information loss while maintaining consistent image sizes. These areas can contribute to the enhancement of machine learning-based malware analysis.

Author 1: Safa Altaha
Author 2: Khaled Riad

Keywords: Malware; malware analysis; machine learning; deep learning; transfer learning

PDF

Paper 125: Towards a Continuous Temporal Improvement Approach for Real-Time Business Processes

Abstract: Time is relative, which makes interaction with it delicate. Indeed, the concept of the real-time enterprise long resembled an idealized notion that seemed unattainable and impracticable in reality. Consequently, we give a new definition of the real-time concept according to our needs and targets for a successful business process. Based on this definition, we can move towards a real-time business process validation algorithm, whose goal is to ensure quality in terms of time, i.e., time latency ≃ 0. Put simply, it serves as a method to assess the temporal consistency of a process. This approach aids in comprehending the temporal patterns inherent in a process as it evolves, empowering decision-makers to glean insights and swiftly form initial judgments for effective problem-solving and the identification of appropriate solutions. Thus, our main purpose is to deliver the right information and knowledge to the right person at the right time. To achieve this, we introduce a novel real-time component within Business Process Model and Notation (BPMN), encompassing various attributes that facilitate process monitoring. This extension transforms BPMN into a unified real-time business process meta-model. More specifically, our contribution proposes continuous temporal improvement assessment and knowledge management, as temporal knowledge helps to evaluate the real-time situation of the business process.

Author 1: Asma Ouarhim
Author 2: Karim Baina
Author 3: Brahim Elbhiri

Keywords: Real-time business process; real-time enterprises; temporal latency; process validation; continuous improvement approach

PDF

Paper 126: Modern Education: Advanced Prediction Techniques for Student Achievement Data

Abstract: Enhancing educational outcomes across varied institutions like universities, schools, and training centers necessitates accurately predicting student performance. Such systems aggregate data from multiple sources—exam centers, virtual courses, registration departments, and e-learning platforms. Analyzing this complex and diverse educational data is a challenge, thus necessitating the application of machine learning techniques. Utilizing machine learning algorithms for dimensionality reduction simplifies intricate datasets, enabling more comprehensive analysis. Through machine learning, educational data is refined, uncovering valuable patterns and forecasts by simplifying complexities via feature selection and dimensionality reduction methods. This refinement significantly amplifies the efficacy of student performance prediction systems, empowering educators and institutions with data-driven insights and thereby enriching the overall educational landscape. In this particular research, the Decision Tree Classification (DTC) model is used for forecasting student performance. DTC stands out as a potent machine-learning method for classification purposes. Two optimization algorithms, namely the Fox Optimization (FO) and the Black Widow Optimization (BWO), are integrated to heighten the model's accuracy and efficiency further. The amalgamation of DTC with these pioneering optimization techniques underscores the study's dedication to harnessing the forefront of machine learning and bio-inspired algorithms, ensuring more precise and resilient predictions of student performance, ultimately culminating in improved educational outcomes. From the results garnered for G1 and G3, it is evident that the DTBW model demonstrated the most exceptional performance in both predicting and categorizing G1, achieving an Accuracy and Precision value of 93.7 percent.
Conversely, the DTFO model emerged as the most precise predictor for G3, achieving an Accuracy and Precision of 93.4 and 93.5 percent, respectively, in the prediction task.
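Decision tree classifiers like the DTC model above typically choose splits by minimizing Gini impurity; a minimal sketch of the criterion on invented pass/fail labels (a generic illustration, not the paper's implementation):

```python
from collections import Counter

def gini(labels):
    """Gini impurity: probability that two random draws disagree."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_gini(left, right):
    """Weighted impurity of a candidate split into two child nodes."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# Hypothetical pass/fail labels before and after a candidate split
parent = ["pass", "pass", "fail", "fail"]
print(gini(parent))                                    # 0.5, maximally impure
print(split_gini(["pass", "pass"], ["fail", "fail"]))  # 0.0, a pure split
```

Optimizers such as FO or BWO would tune the tree's hyperparameters (e.g. depth, minimum split size) around this criterion rather than replace it.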

Author 1: Xi LU

Keywords: Student performance; classification; decision tree classification; fox optimization; black widow optimization

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org