The Science and Information (SAI) Organization
IJACSA Volume 14 Issue 8

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: An Empirical Internet Protocol Network Intrusion Detection using Isolation Forest and One-Class Support Vector Machines

Abstract: With the increasing reliance on web-based applications and services, network intrusion detection has become a critical aspect of maintaining the security and integrity of computer networks. This paper presents an empirical study of internet protocol network intrusion detection comparing two machine learning algorithms, Isolation Forest (IF) and One-Class Support Vector Machines (OC-SVM), each combined with ANOVA F-test feature selection. The study used the NSL-KDD dataset, encompassing attacks on the hypertext transfer protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP) web services as well as normal traffic patterns, to comprehensively evaluate the algorithms. Performance is evaluated on several metrics: F1-score, detection rate (recall), precision, false alarm rate (FAR), and area under the receiver operating characteristic curve (AUCROC). Additionally, the study investigates the impact of different hyper-parameters on the performance of both algorithms. Our empirical results demonstrate that while both IF and OC-SVM exhibit high efficacy in detecting intrusion attacks on HTTP, SMTP, and FTP web services, OC-SVM outperforms IF in F1-score (SMTP), detection rate (HTTP, SMTP, and FTP), AUCROC, and a consistently low false alarm rate (HTTP). A t-test confirms that OC-SVM statistically outperforms IF on detection rate and FAR.
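The pipeline the abstract describes — ANOVA F-test feature selection followed by one-class anomaly detectors trained on normal traffic — can be sketched as below. This is a minimal illustration on synthetic data standing in for NSL-KDD records, with illustrative feature counts and hyper-parameters, not the paper's actual setup.

```python
# Sketch: one-class intrusion detection with ANOVA F-test feature selection.
# Synthetic data stands in for NSL-KDD traffic records (all sizes illustrative).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# 500 normal records, 50 attack records, 20 features (5 informative).
X_normal = rng.normal(0.0, 1.0, size=(500, 20))
X_attack = rng.normal(0.0, 1.0, size=(50, 20))
X_attack[:, :5] += 4.0                    # attacks shift the informative features
X = np.vstack([X_normal, X_attack])
y = np.array([0] * 500 + [1] * 50)        # 1 = attack

# ANOVA F-test keeps the k features that best separate the two classes.
X_sel = SelectKBest(f_classif, k=5).fit_transform(X, y)

# Train each one-class model on normal traffic only, then score everything.
train = X_sel[:400]                       # first 400 normal records
scores = {}
for name, model in [("IF", IsolationForest(random_state=0)),
                    ("OC-SVM", OneClassSVM(nu=0.1, gamma="scale"))]:
    model.fit(train)
    pred = (model.predict(X_sel) == -1).astype(int)   # -1 = outlier = attack
    scores[name] = f1_score(y, pred)
print({k: round(v, 3) for k, v in scores.items()})
```

The same loop extends naturally to the other metrics the paper reports (precision, recall, FAR, AUCROC).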

Author 1: Gerard Shu Fuhnwi
Author 2: Victoria Adedoyin
Author 3: Janet O. Agbaje

Keywords: HTTP; SMTP; FTP; ANOVA F-test; AUCROC; OC-SVMs; FAR; DR; IF


Paper 2: Ensemble Security and Multi-Cloud Load Balancing for Data in Edge-based Computing Applications

Abstract: Edge computing has gained significant attention in recent years due to its ability to process data closer to the source, resulting in reduced latency and improved performance. However, ensuring data security and efficient data management in edge-based computing applications poses significant challenges. This paper proposes an ensemble security approach and a multi-cloud load-balancing strategy to address these challenges. The ensemble security approach leverages multiple security mechanisms, such as encryption, authentication, and intrusion detection systems, to provide a layered defense against potential threats. By combining these mechanisms, the system can detect and mitigate security breaches at various levels, ensuring the integrity and confidentiality of data in edge-based environments. The multi-cloud load-balancing strategy aims to optimize resource utilization and performance by distributing data processing tasks across multiple cloud service providers. This approach takes advantage of the flexibility and scalability offered by the cloud, allowing for dynamic workload allocation based on factors like network conditions and computational capabilities. To evaluate the effectiveness of the proposed approach, we conducted experiments using a realistic edge-based computing environment. The results demonstrate that the ensemble security approach effectively detects and prevents security threats, while the multi-cloud load-balancing strategy, combined with edge computing, improves overall system performance and resource utilization.
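The multi-cloud dispatch idea — send each task to the provider that is currently least utilized relative to its capacity — can be sketched with a simple greedy allocator. This is an illustrative stand-in, not the paper's algorithm; provider names and capacity weights are hypothetical.

```python
# Sketch: least-utilized dispatch across multiple cloud providers.
def dispatch(tasks, capacities):
    """Assign each task's cost to the provider with the lowest
    load-to-capacity ratio at the moment the task arrives."""
    load = {p: 0.0 for p in capacities}
    assignment = {}
    for task, cost in tasks:
        # Pick the provider whose relative utilization is lowest right now.
        p = min(capacities, key=lambda p: load[p] / capacities[p])
        load[p] += cost
        assignment[task] = p
    return assignment, load

caps = {"cloud_a": 4.0, "cloud_b": 2.0, "cloud_c": 1.0}
tasks = [(f"t{i}", 1.0) for i in range(14)]
assignment, load = dispatch(tasks, caps)
print(load)   # final loads end up proportional to capacity
```

A production balancer would also fold in live network conditions, as the abstract notes, but the greedy ratio rule is the core of the allocation step.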

Author 1: Raghunadha Reddi Dornala

Keywords: Edge computing; cloud computing; dynamic load balancing; fog computing; multi-cloud load balancing


Paper 3: Converting Data for Spiking Neural Network Training

Abstract: The application of spiking neural networks (SNNs) to visual and auditory data necessitates the conversion of traditional neural network datasets into a format suitable for spike-based computations. Existing datasets designed for conventional neural networks are incompatible with SNNs due to their reliance on spike timing and specific preprocessing requirements. This paper introduces a comprehensive pipeline that converts common datasets into rate-coded spikes, meeting the processing demands of SNNs. The proposed solution is evaluated on a Spike-CNN trained on Time-to-First-Spike encoded MNIST and compared with a similar system trained on the neuromorphic dataset (N-MNIST). Both systems achieve comparable precision; however, the proposed solution is more energy efficient than the system based on neuromorphic computing. Moreover, the proposed solution is not limited to any specific data form and can be applied to various types of audio/visual content. By providing a means to adapt existing datasets, this research facilitates the exploration and advancement of SNNs across different domains.
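Rate coding, the encoding the pipeline targets, maps a pixel's intensity to a spike probability per timestep. A minimal sketch of this standard conversion (parameters illustrative, not the paper's pipeline):

```python
# Sketch: rate-coded spike conversion — intensity in [0, 1] becomes the
# per-timestep Bernoulli spike probability of that input neuron.
import numpy as np

def rate_encode(image, timesteps, max_rate=1.0, seed=0):
    """Return a (timesteps, *image.shape) binary spike train."""
    rng = np.random.default_rng(seed)
    p = np.clip(image * max_rate, 0.0, 1.0)          # spike probability
    return (rng.random((timesteps,) + image.shape) < p).astype(np.uint8)

img = np.array([[0.0, 0.5], [1.0, 0.25]])
spikes = rate_encode(img, timesteps=1000)
rates = spikes.mean(axis=0)
print(rates)   # empirical firing rates approach the pixel intensities
```

Averaging over enough timesteps recovers the original intensities, which is why rate coding preserves the information the downstream SNN needs.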

Author 1: Erik Sadovsky
Author 2: Maros Jakubec
Author 3: Roman Jarina

Keywords: SNN; rate coding; spike timing; data conversion; MNIST


Paper 4: A Secure and Scalable Behavioral Dynamics Authentication Model

Abstract: Various authentication methods have been proposed to mitigate data breaches. However, the increasing frequency of data breaches and users' lack of awareness have exposed traditional methods, including single-factor password-based systems and two-factor authentication systems, to vulnerabilities against attacks. While behavioral authentication holds promise in tackling these issues, it faces challenges concerning interoperability between operating systems, the security of behavioral data, accuracy enhancement, scalability, and cost. This research presents a scalable dynamic behavioral authentication model utilizing keystroke typing patterns. The model is constructed around five key components: human-computer interface devices, encryption of behavioral data, consideration of the authenticator's emotional state, incorporation of cross-platform features, and proposed implementation solutions. It addresses potential typing errors and employs data encryption for behavioral data, achieving a harmonious blend of usability and security by leveraging keyboard dynamics. This is accomplished through the implementation of a web-based authentication system that integrates Convolutional Neural Networks (CNNs) for advanced feature engineering. Keystroke typing patterns were gathered from participants and subsequently employed to evaluate the system's keystroke timing verification, login ID verification, and error handling capabilities. The web-based system uniquely identifies users by merging their username-password (UN-PW) credentials with their keyboard typing patterns, all while securely storing the keystroke data. Given the achievement of a 100% accuracy rate, the proposed Behavioral Dynamics Authentication Model (BDA) introduces future researchers to five scalable constructs. These constructs offer an optimal combination, tailored to the device and context, for maximizing effectiveness. This achievement underscores its potential applications in the realm of authentication.
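Keystroke dynamics systems like the one above start from timing features extracted from raw key events. A minimal sketch, assuming a hypothetical event format of (key, press time, release time) in milliseconds; the paper's own feature engineering is CNN-based:

```python
# Sketch: dwell and flight times, the basic keystroke-dynamics features.
def keystroke_features(events):
    """events: list of (key, press_ms, release_ms) in typing order.
    Dwell = how long each key is held; flight = gap between consecutive keys."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

events = [("p", 0, 90), ("a", 130, 210), ("s", 260, 340), ("s", 400, 470)]
dwell, flight = keystroke_features(events)
print(dwell)    # [90, 80, 80, 70]
print(flight)   # [40, 50, 60]
```

Vectors of such timings per login attempt are what a classifier then matches against a user's enrolled typing profile.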

Author 1: Idowu Dauda Oladipo
Author 2: Mathew Nicho
Author 3: Joseph Bamidele Awotunde
Author 4: Jemima Omotola Buari
Author 5: Muyideen Abdulraheem
Author 6: Tarek Gaber

Keywords: Behavioral authentication; keystroke dynamics; human-computer interface; two-factor authentication


Paper 5: Visualization of AI Systems in Virtual Reality: A Comprehensive Review

Abstract: This study provides a comprehensive review of the utilization of Virtual Reality (VR) in the context of Human-Computer Interaction (HCI) for visualizing Artificial Intelligence (AI) systems. Drawing from 18 selected studies, the results illuminate a complex interplay of tools, methods, and approaches, notably the prominence of VR engines like Unreal Engine and Unity. However, despite these tools, a universal solution for effective AI visualization remains elusive, reflecting the unique strengths and limitations of each technique. The application of VR for AI visualization across multiple domains is observed, despite challenges such as high data complexity and cognitive load. Moreover, the review briefly discusses the emerging ethical considerations pertaining to the broad integration of these technologies. Despite these challenges, the field shows significant potential, emphasizing the need for dedicated research efforts to unlock the full potential of these immersive technologies. This review, therefore, outlines a roadmap for future research, encouraging innovation in visualization techniques, addressing identified challenges, and considering the ethical implications of VR and AI convergence.

Author 1: Medet Inkarbekov
Author 2: Rosemary Monahan
Author 3: Barak A. Pearlmutter

Keywords: Virtual Reality (VR); Artificial Intelligence (AI) Visualization; VR in AI Visualization; Human-Computer Interaction (HCI)


Paper 6: Symbol Detection in a Multi-class Dataset Based on Single Line Diagrams using Deep Learning Models

Abstract: Single Line Diagrams (SLDs) are used in electrical power distribution systems. These diagrams are crucial to engineers during the installation, maintenance, and inspection phases. For the digital interpretation of these documents, deep learning-based object detection methods can be utilized. However, little effort has been made to digitize SLDs using deep learning methods, owing to the class-imbalance problem of these technical drawings. In this paper, a method to address this challenge is proposed. First, we use the latest variant of You Only Look Once (YOLO), YOLO v8, to localize and detect the symbols present in single-line diagrams. Our experiments determine that the accuracy of symbol detection based on YOLO v8 is almost 95%, more satisfactory than its previous versions. Second, we use a synthetic dataset generated using a multi-fake-class generative adversarial network (MFCGAN) and create fake classes to cope with the class-imbalance problem. The images generated using the GAN are then combined with the original images to create an augmented dataset, and YOLO v5 is used for the classification of the augmented dataset. The experiments reveal that the GAN model was able to learn properly from a small number of complex diagrams. The detection results show that the accuracy of YOLO v5 is more than 96.3%, higher than that of YOLO v8. From the experimental results, we can deduce that creating multiple fake classes improved the classification of engineering symbols in SLDs.
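The class-imbalance problem the paper tackles with MFCGAN-generated fake classes can be illustrated with its simplest baseline, random oversampling of minority symbol classes. A sketch with hypothetical symbol class names and counts (the paper's GAN approach generates new images rather than duplicating existing ones):

```python
# Sketch: naive random oversampling to balance symbol classes — a simple
# stand-in for GAN-based augmentation, for illustration only.
import random

def oversample(samples, seed=0):
    """samples: dict class_name -> list of items. Duplicate minority-class
    items at random until every class matches the largest class size."""
    rng = random.Random(seed)
    target = max(len(v) for v in samples.values())
    balanced = {}
    for cls, items in samples.items():
        extra = [rng.choice(items) for _ in range(target - len(items))]
        balanced[cls] = items + extra
    return balanced

data = {"breaker": list(range(120)), "fuse": list(range(30)), "relay": list(range(8))}
balanced = oversample(data)
print({k: len(v) for k, v in balanced.items()})   # every class reaches 120
```

GAN-based augmentation improves on this by synthesizing varied images instead of exact duplicates, which is what lets the detector generalize from few examples.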

Author 1: Hina Bhanbhro
Author 2: Yew Kwang Hooi
Author 3: Worapan Kusakunniran
Author 4: Zaira Hassan Amur

Keywords: Single line diagrams; engineering drawings; synthetic data; symbol detection; deep learning; augmented dataset


Paper 7: The Spatial Distribution of Atmospheric Water Vapor Based on Analytic Hierarchy Process and Genetic Algorithm

Abstract: The inversion of water vapor spatial distribution using ground-based global navigation satellite systems is a technique that utilizes the propagation delay of satellite signals in the atmosphere to retrieve atmospheric water vapor information. To further improve the accuracy of the information obtained by this method, a satellite system is designed to solve the spatial distribution of atmospheric water vapor based on tomography technology and a genetic algorithm. Firstly, the accuracy of the empirical air temperature and pressure model in calculating the zenith static delay is analyzed. To optimize the global weighted average temperature model, a model that considers the decreasing rate of atmospheric weighted average temperature and a model based on the linear relationship between surface heat and weighted average temperature are proposed. The idea of removal-interpolation-restoration is introduced to achieve regional interpolation of atmospheric precipitable water. Finally, in response to the problem of multiple solutions in the current water vapor tomography equation, a genetic algorithm-based tomography method is put forward to solve for the atmospheric water vapor spatial distribution. The experimental analysis shows that the average root mean square error and average absolute error of the proposed method are 1.78 g/m3 and 1.41 g/m3, respectively, enabling high-accuracy calculation of the atmospheric water vapor density distribution.
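The tomography equation has the linear form A x ≈ b (ray-path weights times voxel water-vapor densities equal observed slant delays), and a genetic algorithm searches for a non-negative x minimizing the misfit. A toy-scale sketch with illustrative GA settings, not the paper's configuration:

```python
# Sketch: genetic algorithm fitting x >= 0 in A x ≈ b, the shape of the
# water-vapor tomography inversion (toy sizes; all GA parameters illustrative).
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((12, 6))            # ray-path weight matrix (toy)
x_true = rng.random(6) * 10        # "true" voxel water-vapor densities
b = A @ x_true                     # simulated slant observations

def residual(pop):
    """Observation-space misfit ||A x - b|| for each candidate row x."""
    return np.linalg.norm(pop @ A.T - b, axis=1)

pop = rng.random((80, 6)) * 10
err0 = residual(pop).min()
for _ in range(300):
    err = residual(pop)
    parents = pop[np.argsort(err)[:40]]               # elitist selection
    mates = parents[rng.permutation(40)]
    children = 0.5 * (parents + mates)                # arithmetic crossover
    children += rng.normal(0.0, 0.3, children.shape)  # mutation
    pop = np.clip(np.vstack([parents, children]), 0.0, None)

best_err = residual(pop).min()
print(round(err0, 2), "->", round(best_err, 2))       # misfit shrinks markedly
```

Keeping the best parents unchanged (elitism) guarantees the best misfit never worsens between generations, which is why the loop converges on this toy problem.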

Author 1: Fengjun Wei
Author 2: Chunhua Liu
Author 3: Rendong Guo
Author 4: Xin Li
Author 5: Jilei Hu
Author 6: Chuanxun Che

Keywords: Global navigation satellite system; spatial distribution of water vapor; genetic algorithm; tomography technology


Paper 8: Detection of Tuberculosis Based on Hybridized Pre-Processing Deep Learning Method

Abstract: Tuberculosis (TB) is a serious health concern, as it primarily affects the lungs and can lead to fatalities. However, early detection and treatment can cure the disease. One potential method for detecting TB is Computer-Aided Diagnosis (CAD) systems, which can analyze Chest X-Ray (CXR) images for signs of TB. This paper proposes a new approach for improving the performance of CAD systems by using a hybrid pre-processing method for Convolutional Neural Network (CNN) models. The goal of the research is to enhance the accuracy and Area Under Curve (AUC) of TB detection in CXR images by combining two different pre-processing methods and multi-classifying different manifestations of the disease. The hypothesis is that this approach will result in more accurate detection of TB in CXR images. To achieve this, the research used augmentation and segmentation techniques to pre-process the CXR images before feeding them into a pre-trained CNN model for classification. The VGG16 model achieved an AUC of 0.935, an accuracy of 90%, and an F1-score of 0.8975 with the proposed pre-processing method.
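The augmentation half of the hybrid pre-processing can be sketched as simple geometric transforms of an image array; the transforms chosen here (flip, 90° rotations) are illustrative, and the paper's method additionally segments the lungs before classification.

```python
# Sketch: simple flip/rotate augmentation of a chest X-ray as a NumPy array.
import numpy as np

def augment(image):
    """Return the image plus horizontally flipped and rotated variants."""
    return [image,
            np.fliplr(image),            # horizontal flip
            np.rot90(image, k=1),        # 90 degrees counter-clockwise
            np.rot90(image, k=3)]        # 90 degrees clockwise

cxr = np.arange(16).reshape(4, 4)        # tiny stand-in for a CXR image
variants = augment(cxr)
print(len(variants), [v.shape for v in variants])
```

Each training image yields several variants, which is what lets a CNN like VGG16 train effectively on a limited medical dataset.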

Author 1: Mohamed Ahmed Elashmawy
Author 2: Irraivan Elamvazuthi
Author 3: Lila Iznita Izhar
Author 4: Sivajothi Paramasivam
Author 5: Steven Su

Keywords: Tuberculosis; CNN; pre-processing; CXR images; augmentation; segmentation


Paper 9: Automated CAD System for Early Stroke Diagnosis: Review

Abstract: Stroke is an important health issue that affects millions of people globally each year. Early and precise stroke diagnosis is crucial for efficient treatment and better patient outcomes. Traditional stroke detection procedures, such as manual visual evaluation of clinical data, can be time-consuming and error-prone. Computer-aided diagnostic (CAD) technologies have emerged as a viable option for early stroke diagnosis in recent years. These systems analyze medical images, such as magnetic resonance imaging (MRI), and identify indicators of stroke using modern algorithms and machine learning approaches. The goal of this review paper is to offer a thorough overview of the current state of the art in CAD systems for early stroke detection. We examine the merits and limits of this technology, as well as directions for future research and development in this field. Finally, we contend that CAD systems represent a promising solution for improving the efficiency and accuracy of early stroke diagnosis, resulting in better patient outcomes and lower healthcare costs.

Author 1: Izzatul Husna Azman
Author 2: Norhashimah Mohd Saad
Author 3: Abdul Rahim Abdullah
Author 4: Rostam Affendi Hamzah
Author 5: Adam Samsudin
Author 6: Shaarmila A/P Kandaya

Keywords: Stroke diagnosis; CAD system; machine learning; deep learning


Paper 10: The Current State of Blockchain Consensus Mechanism: Issues and Future Works

Abstract: Blockchain is a decentralized ledger that serves as the foundation of Bitcoin and has found applications in various domains due to its immutability. It has the potential to change digital transactions drastically and has been successfully used across multiple fields for record immutability and reliability. The consensus mechanism is the backbone of blockchain operations and validates newly generated blocks before they are added. To verify transactions in the ledger, validators in peer-to-peer (P2P) networks use different consensus algorithms to solve the reliability problem in a network with unreliable nodes. Blockchain security is mainly determined by the security and reliability of the inherent consensus algorithm. However, consensus algorithms consume significant resources for validating new blocks. Therefore, the safety and reliability of a blockchain system rest on the reliability and performance of its consensus mechanism. Although various consensus mechanisms and algorithms exist, there is no unified criterion for evaluating them. Evaluating consensus algorithms clarifies system reliability and provides a mechanism for choosing the best consensus mechanism for a defined set of problems. This article comprehensively analyzes existing and recent consensus algorithms' throughput, scalability, latency, and energy efficiency, along with other factors such as attacks, Byzantine fault tolerance, adversary tolerance, and decentralization levels. The paper defines consensus mechanism criteria, evaluates available consensus algorithms against them, and presents their advantages and disadvantages.
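The resource consumption the abstract mentions is easiest to see in the proof-of-work puzzle behind Nakamoto consensus: miners search for a nonce whose block hash meets a difficulty target, while any node verifies the result with a single hash. A toy sketch (the difficulty and block format here are illustrative, far below real-network settings):

```python
# Sketch: proof-of-work — find a nonce so that sha256(data + nonce) starts
# with a given number of zero hex digits.
import hashlib

def mine(block_data, difficulty=4):
    """Increment a nonce until the hash meets the target prefix."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block: tx1,tx2,prev=abc123")
print(nonce, digest[:12])
```

The asymmetry — expensive to find, cheap to verify — is exactly the throughput/energy trade-off the survey's evaluation criteria capture.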

Author 1: Shadab Alam

Keywords: Blockchain; consensus mechanism; consensus algorithm; data security; distributed systems; bitcoin


Paper 11: A Novel Approach for Identification of Figurative Language Types in Devanagari Scripted Languages

Abstract: Poetry can be defined as a form of literary expression that uses language and artistic techniques to evoke emotions, create imagery, and convey complex ideas in a concentrated and imaginative manner. It is a form of written or spoken art that often incorporates rhythm, meter, rhyme, and figurative language to engage the reader or listener on multiple levels. No automated system exists that can identify figures of speech (FsoS) in poetry using Natural Language Processing (NLP) methods. In this research paper, the authors categorized four types of FsoS, व्रत्या अनुप्रास (a type of alliteration), छेकानुप्रास (a type of alliteration), अन्तत्यानुप्रास (rhyme), and पुनरुक्ति (repetition), using two custom algorithms, Koshur and Awadhi (KA and AA), developed specifically for three different language corpora of poems: Koshur (K), Awadhi (A), and Hindi (H). To evaluate the effectiveness of these algorithms, the authors conducted tests on the three languages using four distinct approaches: with stopwords without optimization, with stopwords with optimization, without stopwords without optimization, and without stopwords with optimization. The authors identified FsoS not in a single language but in three Devanagari-scripted languages; this research is the first of its kind. The initial average accuracy without stopwords was unsatisfactory, so the authors optimized both algorithms and re-tested them on the same corpora with and without stopwords, resulting in a significant increase in accuracy.

Author 1: Jatinderkumar R. Saini
Author 2: Preety Sagar
Author 3: Hema Gaikwad

Keywords: Figures of speech (FsoS); natural language processing (NLP); Koshur; Awadhi


Paper 12: Machine Learning Model for Automated Assessment of Short Subjective Answers

Abstract: Natural Language Processing (NLP) has recently gained significant attention, and semantic similarity techniques are widely used in diverse applications, such as information retrieval, question-answering systems, and sentiment analysis. One promising area where NLP is being applied is personalized learning, where assessments and adaptive tests are used to capture students' cognitive abilities. In this context, open-ended questions are commonly used in assessments due to their simplicity, but their effectiveness depends on the type of answer expected. To improve comprehension, it is essential to understand the underlying meaning of short text answers, which is challenging due to their length, lack of clarity, and structure. Researchers have proposed various approaches, including distributed semantics and vector space models. However, assessing short answers using these methods presents significant challenges; machine learning methods, such as transformer models with multi-head attention, have emerged as advanced techniques for understanding and assessing the underlying meaning of answers. This paper proposes a transformer learning model that utilizes multi-head attention to identify and assess students' short answers and overcome these issues. Our approach improves assessment performance and outperforms current state-of-the-art techniques. We believe our model has the potential to revolutionize personalized learning and significantly contribute to improving student outcomes.
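The multi-head attention the paper builds on reduces, per head, to scaled dot-product attention: softmax(QKᵀ/√d)V. A single-head NumPy sketch with illustrative shapes (a full multi-head layer runs several of these in parallel and concatenates the outputs):

```python
# Sketch: scaled dot-product attention for one head.
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V — weights each value by query-key similarity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))   # 3 query tokens, dimension 8
K = rng.normal(size=(5, 8))   # 5 key tokens
V = rng.normal(size=(5, 8))   # values aligned with the keys
out, w = attention(Q, K, V)
print(out.shape, w.shape)     # (3, 8) (3, 5); each weight row sums to 1
```

In short-answer assessment, the attention weights let the model align words of a student answer with the relevant parts of a reference answer.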

Author 1: Zaira Hassan Amur
Author 2: Yew Kwang Hooi
Author 3: Hina Bhanbro
Author 4: Mairaj Nabi Bhatti
Author 5: Gul Muhammad Soomro

Keywords: Natural language processing; short text; answer assessment; BERT; semantic similarity


Paper 13: Sentiment Analysis in Indonesian Healthcare Applications using IndoBERT Approach

Abstract: The rapid growth of application development has made applications an integral part of people's lives, offering solutions to societal problems. Health service applications have gained popularity due to their convenience in accessing information on diseases, health, and medicine. However, many of these applications disappoint users with limited features, slow response times, and usability challenges. Therefore, this research focuses on developing a sentiment analysis system to assess user satisfaction with health service applications. The study aims to create a sentiment analysis model using reviews from health service applications on the Google Play Store, including Halodoc, Alodokter, and klikdokter. The dataset comprises 9,310 reviews, with 4,950 positive and 4,360 negative reviews. The IndoBERT pre-training method, a transfer learning model, is employed for sentiment analysis, leveraging its superior context representation. The study achieves impressive results with an accuracy score of 96%, precision of 95%, recall of 96%, and an F1-score of 95%. These findings underscore the significance of sentiment analysis in evaluating user satisfaction with health service applications. By utilizing the IndoBERT pre-training method, this research provides valuable insights into the strengths and weaknesses of health service applications on the Google Play Store, contributing to the enhancement of user experiences.

Author 1: Helmi Imaduddin
Author 2: Fiddin Yusfida A’la
Author 3: Yusuf Sulistyo Nugroho

Keywords: Application; healthcare; IndoBERT; sentiment analysis


Paper 14: The Medical Image Denoising Method Based on the CycleGAN and the Complex Shearlet Transform

Abstract: Medical image denoising plays an important role because noise in medical images can reduce visibility and thereby affect doctors' diagnoses. Although good results have been achieved by well-known deep learning-based denoising methods, owing to their strong learning ability, the loss of structural feature information and the preservation of edge information have not attracted considerable attention. To deal with these problems, a novel medical image denoising method based on an improved CycleGAN and the complex shearlet transform (CST) is proposed. The CST is used to construct the generator to embed more feature information in the training process, and the denoising process is modeled as adversarially learning the mapping between the noise-free image domain and the noisy image domain. With the cycle-consistent learning mechanism of the CycleGAN, the proposed method does not need paired training data, which obviously speeds up training and is more convenient than other classical methods. By comparison with five state-of-the-art denoising methods, experiments on an open dataset fully prove the accuracy and efficiency of the proposed method in terms of visual quality and the quantitative PSNR, SSIM, and EPI metrics.
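Of the quantitative metrics cited, PSNR is the simplest to state precisely. A minimal sketch assuming 8-bit images (SSIM and EPI need windowed statistics and edge maps, omitted here):

```python
# Sketch: peak signal-to-noise ratio (PSNR) in dB between two images.
import numpy as np

def psnr(reference, denoised, peak=255.0):
    """10 * log10(peak^2 / MSE); higher dB means closer to the reference."""
    mse = np.mean((reference.astype(float) - denoised.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ref = np.full((32, 32), 128, dtype=np.uint8)
noisy = ref + np.random.default_rng(0).integers(-10, 11, ref.shape)
print(round(psnr(ref, noisy), 2))
```

Denoising papers report PSNR of the denoised output against the clean reference; a perfect reconstruction gives infinite PSNR.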

Author 1: ChunXiang Liu
Author 2: Jin Huang
Author 3: Muhammad Tahir
Author 4: Lei Wang
Author 5: Yuwei Wang
Author 6: Faiz Ullah

Keywords: Medical image; image denoising; CycleGAN; complex shearlet transform


Paper 15: Comparing Scrum Maturity of Digital and Business Process Reengineering Groups: A Case Study at an Indonesian State-Owned Bank

Abstract: Bank XYZ, an Indonesian state-owned bank, has been conducting business and digital transformation throughout its organization. According to a recent McKinsey survey, fewer than 30% of organizations succeed in transformation. Fast-changing business requirements and various technology-based initiatives compel the organization to employ an Agile methodology, Scrum, to cope with the situation. Grp-DGT and Grp-BPR are two groups in Bank XYZ that manage their projects using Scrum. Grp-DGT develops digital projects, whereas Grp-BPR develops Business Process Reengineering (BPR) projects. Scrum maturity in both groups needs to be appraised to promote sustainability in the long run. Comparing Scrum maturity between digital and BPR projects has not been done in previous work, especially in a state-owned bank in Indonesia. This research helps the organization through its outputs: the Scrum maturity level of both groups and proposed recommendations to improve Scrum practices. Other organizations can benefit from the recommendations as well. The Scrum maturity model (SMM) is used to appraise the practices, while the Agile Maturity Model (AMM) is used to calculate the maturity rating. The research finds that Grp-DGT has reached maturity level 5 (optimizing), whereas Grp-BPR is still at level 1 (initial). Based on the assessment results and Scrum guides, recommendations are then drafted; 15 recommendations are proposed for Grp-BPR to reach level 2 and beyond.

Author 1: Gloria Saripah Patara
Author 2: Teguh Raharjo

Keywords: Transformation; scrum; digital project; BPR project; scrum maturity model; agile maturity model


Paper 16: Adaptive Learner-CBT with Secured Fault-Tolerant and Resumption Capability for Nigerian Universities

Abstract: Post-COVID-19 studies have reported a significant negative impact on global education and learning from the closure of schools' physical infrastructure between 2020 and 2022. Its effects continue to ripple through learning processes today, even with advances in e-learning and media literacy. The adoption and integration of e-learning in Nigeria has yet to be fully harnessed. From traditional to blended learning, and on to virtual learning, Nigeria must develop new strategies to address issues with her educational theories and to bridge the gap left by the pandemic. This study implements a virtual learning framework that fuses alternative-delivery asynchronous learning with traditional synchronous learning for adoption in the Nigerian educational system. Results showcase improved cognition in learners, engaged qualitative learning, and a learning scenario that ensures a power shift in the educational structure, further equipping learners to become knowledge producers and helping teachers to emancipate students academically, within a framework that measures the quality of engaged student learning.

Author 1: Bridget Ogheneovo Malasowe
Author 2: Maureen Ifeanyi Akazue
Author 3: Ejaita Abugor Okpako
Author 4: Fidelis Obukohwo Aghware
Author 5: Deborah Voke Ojie
Author 6: Arnold Adimabua Ojugo

Keywords: Adaptive blended learning; computer-based test; fault tolerant design; resumption capabilities; Nigeria; FUPRE


Paper 17: A Yolo-based Violence Detection Method in IoT Surveillance Systems

Abstract: Violence detection in Internet of Things (IoT)-based surveillance systems has become a critical research area due to its potential to provide early warnings and enhance public safety. There has been much research on vision-based systems for violence detection, including traditional and deep learning-based methods. Deep learning-based methods have shown great promise in improving the efficiency and accuracy of violence detection. Despite recent advances in deep learning-based violence detection, significant limitations and research challenges still need to be addressed, including the development of standardized datasets and real-time processing. This study presents a deep learning method based on the You Only Look Once (YOLO) algorithm for the violence detection task to overcome these issues. We generate a model for violence detection using violence and non-violence images in a prepared dataset divided into training, validation, and testing sets. The produced model is assessed using accepted performance indicators. The experimental results and performance evaluation show that the method accurately identifies violence and non-violence classes in real time.
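The dataset preparation step the abstract describes — partitioning labeled images into training, validation, and testing sets — can be sketched as below. File names, counts, and the 70/15/15 split are illustrative, not the paper's.

```python
# Sketch: shuffled train/validation/test split of a labeled image list.
import random

def split(items, train=0.7, val=0.15, seed=42):
    """Shuffle and cut items into train/val/test sublists."""
    items = list(items)
    random.Random(seed).shuffle(items)          # fixed seed for reproducibility
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

images = ([f"violence_{i}.jpg" for i in range(70)] +
          [f"non_violence_{i}.jpg" for i in range(30)])
tr, va, te = split(images)
print(len(tr), len(va), len(te))   # 70 15 15
```

Shuffling before cutting keeps both classes represented in every partition, which matters for the evaluation metrics the model is assessed on.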

Author 1: Hui Gao

Keywords: Violence detection; IoT; surveillance systems; Yolo; deep learning


Paper 18: Towards Automated Evaluation of the Quality of Educational Services in HEIs

Abstract: The provision of high-quality educational services is a matter of concern to all stakeholders in higher education (academic staff, administration, students, etc.). According to many researchers, student satisfaction is an indicator of service quality in higher education institutions (HEIs), and having students evaluate the quality of educational and administrative services is an effective tool for improving the quality of HEIs. To ensure a competitive advantage over other educational institutions, HEI leadership should take measures that improve student feedback on the quality of the administrative and educational services provided, seek ways to exceed student expectations, and provide high-quality services. Because students' opinions of the offered services matter greatly, many HEIs develop and use tools to assess student satisfaction with service quality. Little researched in the literature, however, is the need for tools that allow HEI leadership to analyze survey results, track trends over the years, and compare results across HEIs. Based on a detailed analysis of existing questionnaires for evaluating service quality, this paper explores the possibilities for automating the overall process of surveying student satisfaction with service quality. As a result, a software prototype of a tool to automate the entire assessment process is proposed, from questionnaire modelling through survey organization and administration to analysis of the collected data. The developed tool allows governing bodies in HEIs to make informed decisions to improve service quality and to compare their results with those of competing universities.
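The analysis step such a tool automates amounts to aggregating Likert-scale answers per question across responses. A minimal sketch with hypothetical question names and scores:

```python
# Sketch: per-question aggregation of Likert-scale survey responses.
from collections import defaultdict
from statistics import mean

responses = [
    {"teaching_quality": 5, "admin_services": 3},
    {"teaching_quality": 4, "admin_services": 4},
    {"teaching_quality": 5, "admin_services": 2},
]

totals = defaultdict(list)
for r in responses:
    for question, score in r.items():
        totals[question].append(score)

summary = {q: round(mean(scores), 2) for q, scores in totals.items()}
print(summary)   # {'teaching_quality': 4.67, 'admin_services': 3.0}
```

Storing such summaries per survey year is what enables the trend tracking and cross-HEI comparison the paper calls for.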

Author 1: Silvia Gaftandzhieva
Author 2: Rositsa Doneva
Author 3: Mariya Zhekova
Author 4: George Pashev

Keywords: Quality assurance; higher education; educational services; administrative services; data analysis

PDF

Paper 19: Machine-Learning-based User Behavior Classification for Improving Security Awareness Provision

Abstract: Users of information technology are regarded as essential components of information security. Users’ lack of cybersecurity awareness can result in external and internal security attacks and threats in any organization that has several users or employees. Although various security methods have been designed to protect organizations from external intrusions and attacks, the human factor is also essential because security risks by “insiders” can occur due to a lack of awareness. Therefore, instead of general nontargeted security training, comprehensive cybersecurity awareness should be provided based on employees’ online behavior. This study seeks to provide a machine-learning-based model that provides user behavior analysis in which organizations can profile their employees by analyzing their online behavior to classify them into different classes and, thus, help provide them with appropriate awareness sessions and training. The model proposed in this paper will be evaluated and assessed through its implementation on a sample dataset that reflects users’ online activities over a specific period to measure the model’s accuracy and effectiveness. A comparison between six classification techniques has been made, and random forest classification had the best performance regarding classification accuracy and performance time. After users are classified, each group can be provided with the appropriate training material. This study will stimulate additional research in this area, which has not been widely investigated, and it will provide a useful point of reference for other studies. Additionally, it should provide insightful information to help decision-makers in organizations provide necessary and effective security awareness.

Author 1: Alaa Al-Mashhour
Author 2: Areej Alhogail

Keywords: Machine learning; user behavior analysis; cybersecurity; classification; security awareness

PDF

Paper 20: Collateral Circulation Classification Based on Cone Beam Computed Tomography Images using ResNet18 Convolutional Neural Network

Abstract: Collateral circulation is a network of arterial anastomotic channels that supply nutrient perfusion to areas of the brain when the regular sources of flow are disrupted by an ischemic stroke. The most recent method, Cone Beam Computed Tomography (CBCT) neuroimaging, is able to provide specific details regarding the extent and adequacy of collaterals. Current approaches to collateral circulation classification are based on manual observation and lead to inter- and intra-rater inconsistency. This paper presents an automatic two-class classification, an approach growing rapidly within artificial intelligence disciplines, where the two classes differentiate between good and poor collateral circulation. A pre-trained convolutional neural network (CNN), namely ResNet18, was used to learn features and was trained on 4368 CBCT images. The dataset was first prepared, labeled, and augmented; the images were then used to train the ResNet18 model with certain specifications. Algorithm performance was evaluated on the CBCT images using metrics of accuracy, sensitivity, specificity, F1 score, and precision to classify collateral circulation accurately. The findings can automate collateral circulation classification to ease the limitations of standard clinical practice. It is a convincing method that supports neuroradiologists in assessing clinical scans and in making clinical decisions about stroke treatment.

Author 1: Nur Hasanah Ali
Author 2: Abdul Rahim Abdullah
Author 3: Norhashimah Mohd Saad
Author 4: Ahmad Sobri Muda

Keywords: Collateral circulation; CBCT; ResNet; convolutional neural network; classification

PDF

Paper 21: An Enhanced Algorithm of Improved Response Time of ITS-G5 Protocol

Abstract: This research article proposes an algorithm for improving the ITS-G5 protocol, which addresses the issue of response time. The algorithm includes the integration of Dijkstra's algorithm to prioritize shorter paths for message transmission, resulting in reduced delays. The initial algorithm for the ITS-G5 protocol is presented, followed by the modified algorithm that incorporates Dijkstra's algorithm. The modified algorithm utilizes a node-based approach and implements Dijkstra's algorithm to find the shortest path between two nodes. The algorithm is evaluated in a scenario involving 20 vehicles, where each vehicle has its own message. The results show improved communication efficiency and reduced response time compared to the original ITS-G5 protocol.
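
The shortest-path component the abstract describes can be illustrated with a standard Dijkstra implementation; the toy network below is a hypothetical stand-in for the 20-vehicle scenario, with edge weights representing link delays:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted graph.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict node -> shortest distance from source.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical road network: nodes are vehicles/roadside units, weights are delays.
net = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)], "D": []}
print(dijkstra(net, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

Prioritizing the path with the smallest accumulated delay is exactly what lets the modified protocol reduce response time relative to a fixed transmission order.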

Author 1: Kawtar Jellid
Author 2: Tomader Mazri

Keywords: ITS-G5 (Intelligent Transport Systems); V2V (Vehicle-to-Vehicle); V2I (Vehicle-to-Infrastructure); V2X (Vehicle-to-everything); autonomous vehicle

PDF

Paper 22: Design and Application of an Automatic Scoring System for English Composition Based on Artificial Intelligence Technology

Abstract: The automatic grading of English compositions involves utilizing natural language processing, statistics, artificial intelligence (AI), and other techniques to evaluate and score compositions. This approach is objective, fair, and resource-efficient. The current widely used evaluation system for English compositions falls short in off-topic assessment, as subjective factors in manual marking lead to inconsistent scoring standards, which affects objectivity and fairness. Hence, researching and implementing an AI-based automatic scoring system for English compositions holds significant importance. This paper examines various composition evaluation factors, such as vocabulary usage, sentence structure, errors, development, word frequency, and examples. These factors are classified, quantified, and analysed using methods such as standardization, cluster analysis, and TF word frequency. Scores are assigned to each feature factor based on fuzzy clustering analysis and the information entropy principle of rough set theory. The system can flexibly identify composition themes in batches and rapidly score English compositions, offering more objective and impartial quality control. The goal of the proposed system is to address existing issues in teacher corrections and evaluations, as well as low self-efficacy in students' writing learning. The test results demonstrate that the system expands the learning material collections, enhances the identification of weak points, optimizes the marking engine performance with the text matching degree, reduces the marking time, and ensures efficient and high-quality assessments. Overall, this system shows great potential for widespread adoption.

Author 1: Fengqin Zhang

Keywords: English composition; automatic scoring; artificial intelligence; text matching degree; natural language processing

PDF

Paper 23: An Efficient Deep Learning with Optimization Algorithm for Emotion Recognition in Social Networks

Abstract: Emotion recognition, or computers' ability to interpret people's emotional states, is a rapidly expanding topic with many life-improving applications. However, most image-based emotion recognition algorithms have flaws, since people can disguise their emotions by changing their facial expressions. As a result, brain signals are being used to detect human emotions with increased precision. Yet most proposed systems could do better, because electroencephalogram (EEG) signals are challenging to classify using typical machine learning and deep learning methods. Human-computer interaction, recommendation systems, online learning, and data mining all benefit from emotion recognition in images; however, removing irrelevant text aspects during emotion extraction remains a challenge, and as a consequence emotion prediction is inaccurate. This paper proposes Radial Basis Function Networks with Blue Monkey Optimization (RBFN-BMO) to address such challenges in human emotion recognition. The proposed RBFN-BMO detects faces in large-scale images before analyzing facial landmarks to predict facial expressions for emotion recognition. The RBFN-BMO comprises two stages, patch cropping and neural networks, and the proposed model has four phases: pre-processing, feature extraction, ranking, and organizing. In the ranking stage, appropriate features are extracted from the pre-processed information; the data are then classified, and accurate output is obtained from the classification phase. This study compares the results of the proposed RBFN-BMO algorithm to previous state-of-the-art algorithms on publicly available datasets. Furthermore, we demonstrate the efficacy of our framework in comparison to previous works. The results show that the proposed method can improve the rate of emotion recognition on datasets of various sizes.
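
The radial-basis building block of an RBFN can be sketched in a few lines; the centers, weights, and width below are illustrative placeholders for values the network would learn (e.g. via the BMO step), not parameters from the paper:

```python
import math

def rbf(x, center, sigma):
    """Gaussian radial basis activation: response peaks when x is near the center."""
    dist2 = sum((a - b) ** 2 for a, b in zip(x, center))
    return math.exp(-dist2 / (2 * sigma ** 2))

def rbfn_output(x, centers, weights, sigma=1.0):
    """Weighted sum of RBF units -- the forward pass of an RBF network."""
    return sum(w * rbf(x, c, sigma) for w, c in zip(weights, centers))

# Two hypothetical hidden units; in practice centers and weights are learned.
centers = [(0.0, 0.0), (1.0, 1.0)]
weights = [0.7, 0.3]
print(rbfn_output((0.0, 0.0), centers, weights))
```

Each hidden unit responds only to inputs near its center, which is why RBFNs localize well on clustered feature vectors such as facial-landmark descriptors.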

Author 1: Ambika G N
Author 2: Yeresime Suresh

Keywords: Blue monkey optimization (BMO); deep learning; electroencephalograph (EEG); emotion recognition; human-computer interaction (HCI); radial basis function networks (RBFN)

PDF

Paper 24: Improved Drosophila Visual Neural Network Application in Vehicle Target Tracking and Collision Warning

Abstract: To enable vehicle tracking and collision warning systems to handle more complex road information, the Drosophila visual neural network collision warning algorithm has been improved, including its image stabilization, target region synthesis, and target tracking algorithms. The results showed that the improved image stabilization algorithm produces significantly higher stabilization quality: the peak signal-to-noise ratio of the stabilized image ranged from 54 dB to 80 dB before improvement and from 60 dB to 82 dB after improvement. The improved algorithm produced no false alarms or missed alarms in collision warning, whereas the unimproved algorithm produced false alarms in video 1 and missed alarms in video 2. In video 1, all frames were in a safe state, but the original algorithm raised alarms in frames 7-12, 13-22, and 23-31. In video 2, frames 8-24 contained dangerous situations requiring an alarm, while the original algorithm raised alarms only in frames 8-17; the improved algorithm's alarms were consistent with the actual situation. The improved target tracking algorithm can extract target motion curves: it extracted the motion curve of one target in video 1 and of two targets in video 2, consistent with the video content. The improvement of the Drosophila visual neural network collision warning model is thus effective and can improve driving safety in complex road conditions.
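
The peak signal-to-noise ratios quoted above follow the standard PSNR definition for 8-bit images; a minimal sketch over two hypothetical 4-pixel frames:

```python
import math

def psnr(original, stabilized, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equally sized grayscale frames."""
    mse = sum((a - b) ** 2 for a, b in zip(original, stabilized)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)

# Two tiny hypothetical frames, flattened to pixel lists.
print(round(psnr([100, 120, 130, 140], [101, 119, 130, 142]), 2))  # 46.37
```

Higher PSNR means the stabilized frame is closer to the reference, which is why the post-improvement range of 60-82 dB indicates better stabilization quality.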

Author 1: Jianyi Wu

Keywords: Drosophila visual neural network; collision warning; target calibration; target tracking

PDF

Paper 25: Earth Observation Satellite: Big Data Retrieval Method with Fuzzy Expression of Geophysical Parameters and Spatial Features

Abstract: A method for fuzzy retrieval from an Earth observation satellite image database using geophysical parameters and spatial features is proposed. It is confirmed that the proposed method allows fuzzy expressions in queries over sea surface temperature, chlorophyll-a concentration, and cloud coverage, as well as circle, line, and edge features, for instance "rather cold sea surface temperature and a sort of circle feature". Thus users, in particular oceanographers, may access the most appropriate image data in the database for finding cold cores (circle features), fronts (arc and line features), etc. in a simple manner. Although this is just an example for oceanographers, it is found that the proposed method enables data mining with fuzzy expressions of geophysical queries on big data platforms hosting Earth observation satellite databases.
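
A fuzzy query term such as "rather cold sea surface temperature" can be modeled with a membership function plus a linguistic hedge; the trapezoid parameters and the square-root dilation hedge below are illustrative assumptions, not values from the paper:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, ramps up to 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical membership of "cold" sea surface temperature, in deg C.
def cold_sst(t):
    return trapezoid(t, 5.0, 8.0, 14.0, 18.0)

# A softening hedge like "rather" is commonly modeled as dilation (a root).
def rather_cold_sst(t):
    return cold_sst(t) ** 0.5

print(cold_sst(16.0), rather_cold_sst(16.0))
```

Ranking images by such membership degrees instead of crisp thresholds is what lets an oceanographer's imprecise query still surface the most appropriate scenes.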

Author 1: Kohei Arai

Keywords: Fuzzy retrieval; earth observation satellite; big data; geophysical parameter; oceanographer; circle feature; arc feature; line feature; fuzzy expression

PDF

Paper 26: Virtual Route Guide Chatbot Based on Random Forest Classifier

Abstract: Improvements in the quality of tourism services and in the number of human resources affect the quality of the social and information services provided to foreign tourists, thereby enhancing the quality of tourist destination information services in the Malang Raya area. Foreign tourists urgently need information on directions, routes, and access roads to their desired destinations, especially in East Java, given the limited data from the government agencies handling the tourism sector and the difficulty of communicating with residents who may not understand what foreign tourists are trying to say. An interactive chatbot that helps obtain route and access information to the desired tourist destinations would therefore greatly assist foreign tourists. To improve the accuracy of the chatbot's answer-sentence selection, artificial intelligence, specifically the Random Forest Classifier, is used. This study obtained its highest accuracy using 200 trees, a maximum tree depth of 20, and a minimum sample split of 5, which yielded an accuracy of 95.88%, precision of 96.29%, recall of 96.03%, and F-measure of 96.16%.
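
The aggregation step that makes a random forest robust is a majority vote over many trees; the toy "trees" below are hypothetical hand-written rules standing in for the 200 learned trees reported above:

```python
from collections import Counter

def forest_predict(trees, x):
    """Majority vote over an ensemble -- the aggregation step of a random forest.

    `trees` is a list of callables, each mimicking one decision tree's prediction.
    """
    votes = Counter(tree(x) for tree in trees)
    return votes.most_common(1)[0][0]

# Three toy "trees" classifying a user query as a route question vs. general chat.
trees = [
    lambda x: "route" if "road" in x else "general",
    lambda x: "route" if "direction" in x else "general",
    lambda x: "route" if "access" in x else "general",
]
print(forest_predict(trees, "which road and access to the museum"))  # route
```

In practice each tree is trained on a bootstrap sample with random feature subsets (e.g. scikit-learn's RandomForestClassifier with n_estimators=200, max_depth=20, min_samples_split=5, matching the hyper-parameters above).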

Author 1: Puspa Miladin Nuraida Safitri A. Basid
Author 2: Fajar Rohman Hariri
Author 3: Fresy Nugroho
Author 4: Ajib Hanani
Author 5: Firman Jati Pamungkas

Keywords: Tourism; chatbot; artificial intelligence; random forest classifier

PDF

Paper 27: Approaches and Tools for Quality Assurance in Distance Learning: State-of-play

Abstract: In recent years, distance learning has become an increasingly popular mode of education due to its flexibility and accessibility. However, the quality of distance learning programs has been a cause for concern, which has led to the development of various approaches and tools for quality assurance and assessment. This review article aims to provide an in-depth analysis of the current state of play of quality assurance in distance learning. The paper discusses the fundamental requirements to establish quality in distance learning and the challenges associated with ensuring quality in this mode of education. Then it explores the different approaches and tools used for quality assurance and assessment, such as course evaluations, self-assessments, and external reviews. In addition, the paper delves into the development of regulatory documents and manuals for quality assurance, which are essential for ensuring that distance learning programs adhere to established standards. It also discusses in detail the importance of audits and accreditations from assessment organizations in assuring quality in distance learning. As the satisfaction of all stakeholders (including students, faculty, and administrators) is crucial for ensuring the success of distance learning programmes, the paper highlights the various measures HEIs can take to ensure stakeholder satisfaction. Finally, the article discusses the processing of statistical data and performance indicators, which can provide valuable insights into the effectiveness of distance learning programmes.

Author 1: Silvia Gaftandzhieva
Author 2: Rositsa Doneva
Author 3: Senthil Kumar Jagatheesaperumal

Keywords: Distance learning; quality assurance; assessment; stakeholder satisfaction; regulatory documents; performance indicators

PDF

Paper 28: Efficient Parameter Estimation in Image Processing using a Multi-Agent Hysteretic Q-Learning Approach

Abstract: Optimizing image processing parameters is often a time-consuming and unreliable task that requires manual adjustments. In this paper, we present a novel approach that utilizes a multi-agent system with Hysteretic Q-learning to automatically optimize these parameters, providing a more efficient solution. We conducted an empirical study that focused on extracting objects of interest from textural images to validate our approach. Experimental results demonstrate that our multi-agent approach outperforms the traditional single-agent approach by quickly finding optimal parameter values and producing satisfactory results. Our approach's key innovation is the ability to enable agents to cooperate and optimize their behavior for the given task through the use of a multi-agent system. This feature distinguishes our approach from previous work that only used a single agent. By incorporating reinforcement learning techniques in a multi-agent context, our approach provides a scalable and effective solution to parameter optimization in image processing.
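
Hysteretic Q-learning differs from standard Q-learning in using two learning rates: a larger one for positive temporal-difference errors and a smaller one for negative errors, which keeps cooperating agents optimistic about joint actions. A minimal sketch with illustrative rates:

```python
def hysteretic_update(q, state, action, reward, next_q_max,
                      alpha=0.5, beta=0.05, gamma=0.9):
    """One hysteretic Q-learning update on a tabular Q function.

    A positive temporal-difference error is applied with rate alpha, a negative
    one with the much smaller rate beta, so a teammate's occasional bad
    exploration does not erase a good joint-action estimate.
    """
    delta = reward + gamma * next_q_max - q[(state, action)]
    rate = alpha if delta >= 0 else beta
    q[(state, action)] += rate * delta
    return q

q = {("s0", "a0"): 0.0}
hysteretic_update(q, "s0", "a0", reward=1.0, next_q_max=0.0)   # good news, fast update
hysteretic_update(q, "s0", "a0", reward=-1.0, next_q_max=0.0)  # bad news, slow decay
print(q[("s0", "a0")])
```

In the parameter-estimation setting above, states would encode current parameter values and actions their adjustments; those details are the paper's, not shown here.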

Author 1: Issam QAFFOU

Keywords: Parameter estimation; reinforcement learning; cooperative agents; hysteretic q-learning; optimistic agent; object extraction

PDF

Paper 29: Design and Implementation of an IoT Control and Monitoring System for the Optimization of Shrimp Pools using LoRa Technology

Abstract: The shrimp farming industry in Ecuador, renowned for its shrimp breeding and exportation, faces challenges due to diseases related to variations in abiotic factors during the maturation stage. This is partly attributed to the traditional methods employed in shrimp farms. Consequently, a prototype has been developed for monitoring and controlling abiotic factors using IoT technology. The proposed system consists of three nodes communicating through the LoRa interface. For control purposes, a fuzzy logic system has been implemented that evaluates temperature and dissolved oxygen abiotic factors to determine the state of the aerator, updating the information in the ThingSpeak application. A detailed analysis of equipment energy consumption and the maximum communication range for message transmission and reception was conducted. Subsequently, the monitoring and control system underwent comprehensive testing, including communication with the visualization platform. The results demonstrated significant improvements in system performance. By modifying parameters in the microcontroller, a 2.55-fold increase in battery durability was achieved. The implemented fuzzy logic system enabled effective on/off control of the aerators, showing a corrective trend in response to variations in the analyzed abiotic parameters. The robustness of the LoRa communication interface was evident in urban environments, achieving a distance of up to 1 km without line of sight.
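
The on/off aerator decision driven by temperature and dissolved oxygen can be sketched as two fuzzy rule memberships combined with a fuzzy OR; all thresholds below are hypothetical, not the paper's calibrated values:

```python
def aerator_state(temp_c, dissolved_oxygen_mgl):
    """Sketch of a two-input fuzzy on/off decision for a pond aerator.

    Hypothetical rule base: low dissolved oxygen OR hot water activates the
    aerator; the max operator implements the fuzzy OR across rule antecedents.
    """
    # Membership of "dissolved oxygen is low" (ramps up below 5 mg/L).
    low_do = max(0.0, min(1.0, (5.0 - dissolved_oxygen_mgl) / 2.0))
    # Membership of "water is hot" (ramps up above 28 deg C).
    hot = max(0.0, min(1.0, (temp_c - 28.0) / 4.0))
    activation = max(low_do, hot)  # fuzzy OR
    return "on" if activation >= 0.5 else "off"

print(aerator_state(29.0, 6.0))  # mildly warm, adequate oxygen -> off
print(aerator_state(26.0, 3.5))  # cool water but low oxygen -> on
```

A full controller would defuzzify over more rules, but this captures the corrective on/off trend described in the results.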

Author 1: José M. Pereira Pontón
Author 2: Verónica Ojeda
Author 3: Víctor Asanza
Author 4: Leandro L. Lorente-Leyva
Author 5: Diego H. Peluffo-Ordóñez

Keywords: Control and monitoring system; shrimp pools; IoT architecture; LoRa technology; fuzzy logic control

PDF

Paper 30: An Overview of Vision Transformers for Image Processing: A Survey

Abstract: Using image processing technology has become increasingly essential in the education sector, with universities and educational institutions exploring innovative ways to enhance their teaching techniques and provide a better learning experience for their students. Vision transformer-based models have been highly successful in various domains of artificial intelligence, including natural language processing and computer vision, which have generated significant interest from academic and industrial researchers. These models have outperformed other networks like convolutional and recurrent networks in visual benchmarks, making them a promising candidate for image processing applications. This article presents a comprehensive survey of vision transformer models for image processing and computer vision, focusing on their potential applications for student verification in university systems. The models can analyze biometric data like student ID cards and facial recognition to ensure that students are accurately verified in real-time, becoming increasingly vital as online learning continues to gain traction. By accurately verifying the identity of students, universities and educational institutions can guarantee that students have access to relevant learning materials and resources necessary for their academic success.

Author 1: Ch. Sita Kameswari
Author 2: Kavitha J
Author 3: T. Srinivas Reddy
Author 4: Balaswamy Chinthaguntla
Author 5: Senthil Kumar Jagatheesaperumal
Author 6: Silvia Gaftandzhieva
Author 7: Rositsa Doneva

Keywords: Vision transformers; image processing; natural language processing; image

PDF

Paper 31: Multimodal Contactless Architecture for Upper Limb Virtual Rehabilitation

Abstract: Virtual rehabilitation systems for upper limbs have been implemented using different devices, and their efficiency as a complement to traditional therapies has been demonstrated. Multimodal systems are necessary for virtual rehabilitation because they allow multiple sources of information for both input and output, so that the participant can have a personalized interaction. This work presents a simplified multimodal contactless architecture for virtual reality systems that focuses on upper limb rehabilitation. This research presents: 1) the proposed architecture, 2) the implementation of a virtual reality system oriented to activities of daily living, and 3) an evaluation of the user experience and the kinematic results of the implementation. The two experiments yielded positive results for the implementation of a multimodal contactless virtual rehabilitation system based on the architecture. The user experience evaluation showed positive values with regard to six dimensions: perspicuity=2.068, attractiveness=1.987, stimulation=1.703, dependability=1.649, efficiency=1.517, and novelty=1.401. The kinematic evaluation was consistent with the score of the implemented game.

Author 1: Emilio Valdivia-Cisneros
Author 2: Elizabeth Vidal
Author 3: Eveling Castro-Gutierrez

Keywords: Human computer interaction (HCI); multimodal; feedback; architecture; upper limb; rehabilitation; contactless

PDF

Paper 32: Attitude Synchronization and Stabilization for Multi-Satellite Formation Flying with Advanced Angular Velocity Observers

Abstract: This paper focuses on two aspects of satellite formation flying (SFF) control: finite-time attitude synchronization and stabilization under undirected time-varying communication topology and synchronization without angular velocity measurements. First, a distributed nonlinear control law ensures rapid convergence and robust disturbance attenuation. To prove stability, a Lyapunov function involving an integrator term is utilized. Specifically, attitude synchronization and stabilization conditions are derived using graph theory, local finite-time convergence for homogeneous systems, and LaSalle's non-smooth invariance principle. Second, the requirements for angular velocity measurements are loosened using a distributed high-order sliding mode estimator. Despite the failure of inter-satellite communication links, the homogeneous sliding mode observer precisely estimates the relative angular velocity and provides smooth control to prevent the actuators of the satellites from chattering. Simulations numerically demonstrate the efficacy of the proposed design scheme.

Author 1: Belkacem Kada
Author 2: Khalid Munawar
Author 3: Muhammad Shafique Shaikh

Keywords: Attitude synchronization; coordinated control; finite-time control; high-order sliding mode observer; inter-satellite communication links; leader-following consensus; switching communication topology

PDF

Paper 33: Hussein Search Algorithm: A Novel Efficient Searching Algorithm in Constant Time Complexity

Abstract: The Hussein search algorithm focuses on the fundamental concept of searching in computer science and aims to enhance the retrieval of data from various data warehouses. The efficiency of cloud systems is substantially influenced by the manner in which data is saved and retrieved, given the vast quantity of data being generated and stored in the cloud. Searching is the systematic endeavor of locating a particular item within a substantial volume of data, and searching algorithms offer methodical strategies for accomplishing this task. There exists a wide array of searching algorithms, each varying in search procedure, time complexity, and space complexity. The choice of a suitable algorithm is contingent upon various aspects, including the magnitude of the dataset, the distribution of the data, and the desired time and space complexity. This study presents a novel prediction-based searching algorithm named the Hussein search algorithm. The system is designed to operate in a straightforward manner and makes use of a simple data structure. It relies on fundamental mathematical computations and incorporates the interpolation search algorithm, a search-by-prediction method for uniformly distributed lists that forecasts the precise position of the queried item. The cost of prediction remains constant and in numerous instances falls within O(1). The Hussein search algorithm exhibits enhanced efficiency in comparison to the binary search and ternary search algorithms, both of which are widely regarded as the best methods for searching sorted data.
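
The interpolation (search-by-prediction) step the abstract builds on estimates the target's index from the value range of the sorted list; a standard sketch on uniformly spaced keys:

```python
def interpolation_search(arr, target):
    """Prediction-based search over a sorted list.

    On uniformly distributed data the linear position estimate typically lands
    within a constant distance of the true index, so a lookup often needs only
    a handful of probes -- the near-O(1) behavior the abstract refers to.
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= target <= arr[hi]:
        if arr[hi] == arr[lo]:
            pos = lo  # all remaining keys equal; avoid division by zero
        else:
            # Linear interpolation of the probable index.
            pos = lo + (target - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == target:
            return pos
        if arr[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1

data = list(range(0, 1000, 10))  # uniformly spaced keys
print(interpolation_search(data, 430))  # 43
```

Binary search would need about log2(100) = 7 probes on this list; the interpolation probe hits index 43 immediately because the keys are uniform.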

Author 1: Omer H Abu El Haijia
Author 2: Arwa H. F. Zabian

Keywords: Binary search; prediction search procedure; prediction cost; constant time complexity

PDF

Paper 34: Testing the Usability of Serious Game for Low Vision Children

Abstract: Serious games are prodigious tools for building language, science, and math knowledge and skills. Despite a growing number of studies on using serious games for learning, children with visual impairment face obstacles when playing the games. Low vision children retain some usable vision that can be supported with assistive technology. A 2D serious game for learning mathematics was developed in Unity for low vision children. To enhance the game's accessibility for low vision children, accessibility elements were implemented in the serious game prototype: screen design (buttons, menus, and navigation), multimedia (text, graphics, audio, and animation), object motion, and language. Upon completion of the serious game, usability testing was done to determine the accessibility of the serious game to low vision children based on the usability level, using the observation technique for analysis. The overall usability score is good with respect to the aspects of effectiveness, efficiency, and user satisfaction tested.

Author 1: Nurul Izzah Othman
Author 2: Hazura Mohamed
Author 3: Nor Azan Mat Zin

Keywords: Serious game; learning; low vision; usability; accessibility

PDF

Paper 35: Cybersecurity Advances in SCADA Systems

Abstract: The management of critical infrastructure heavily relies on Supervisory Control and Data Acquisition (SCADA) systems, but as they become more connected, insider attacks become a greater concern. Insider threat detection systems (IDS) powered by machine learning have emerged as a potential answer to this problem. This review paper examines the most recent developments in machine learning algorithms for insider IDS in SCADA security systems, aimed at identifying and neutralizing insider threats. A thorough analysis of research articles published in 2019 and later, covering a variety of machine learning methods, was adopted in this review to highlight the difficulties and challenges faced by professionals and how this study contributes to overcoming them. The results show that, in addition to conventional methods, machine learning-based intrusion detection techniques offer important advantages in identifying complex and covert insider attacks. Finding pertinent insider threat data for model training and guaranteeing data privacy and security remain difficult. Ensemble techniques and hybrid strategies show potential for improving detection resiliency. In conclusion, machine learning-based insider IDS has the potential to protect critical infrastructures by strengthening SCADA systems against insider attacks. The similarities and differences between cyber-physical systems and SCADA systems, emphasizing security challenges and the potential for mutual improvement, were also reviewed in this study. To be as effective as possible, future research should concentrate on addressing issues with data collection and privacy, investigating the latest developments in technology, and creating hybrid models. By integrating machine learning advancements, SCADA systems can mount a proactive and effective defence against insider attacks, maintaining their dependability and security in the face of emerging threats.

Author 1: Bakil Al-Muntaser
Author 2: Mohamad Afendee Mohamed
Author 3: Ammar Yaseen Tuama
Author 4: Imran Ahmad Rana

Keywords: Threat detection; SCADA security; machine learning-based intrusion detection; cyber-physical systems security; insider attack prevention

PDF

Paper 36: Model Classification of Fire Weather Index using the SVM-FF Method on Forest Fire in North Sumatra, Indonesia

Abstract: Indonesia, a tropical country situated in Southeast Asia, has vast forests. Forest fires occur and vary with land and forest conditions during the drought season. One indicator used to mitigate potential forest fires is the behavior of the fire weather index (FWI). Data were gathered from an observation station in North Sumatra province, and the FWI was computed and estimated with the Canadian Forest Fire Weather Index system. Outliers were found in the gathered data. To cope with this, the dataset is classified and predicted by a machine learning approach using Support Vector Machine Forest Fire (SVM-FF), a further development of the earlier c-SVM and v-SVM models. This method includes a balancing parameter determined by the lower and upper limits of a support vector, and it allows the balancing parameter to take negative values. The results showed that the FWI was classified into low, medium, high, and extreme levels, where the low FWI values average 0.5, within the 0 to 1 interval. The model's accuracy and performance improved over its predecessors, the c-SVM and v-SVM, which achieved 0.96 and 0.89 respectively, while the SVM-FF model reached 0.99, indicating that it is a useful alternative for classifying and predicting forest fires.
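
The four-level FWI classification can be sketched as simple thresholding; the cut-off values below are illustrative only, since the abstract fixes just the low class to the 0-1 interval:

```python
def fwi_class(fwi):
    """Map a fire weather index value to a danger class.

    Threshold values are hypothetical; the abstract states only that four
    classes are used and that low FWI values fall in the 0-1 interval.
    """
    if fwi <= 1.0:
        return "low"
    if fwi <= 7.0:
        return "medium"
    if fwi <= 17.0:
        return "high"
    return "extreme"

print([fwi_class(v) for v in (0.5, 4.0, 12.0, 30.0)])
```

The SVM-FF model learns these class boundaries from weather features rather than hand-setting them, which is where its accuracy gain over fixed rules comes from.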

Author 1: Darwis Robinson Manalu
Author 2: Opim Salim Sitompul
Author 3: Herman Mawengkang
Author 4: Muhammad Zarlis

Keywords: Fire weather index; forest fire; support vector machine; SVM-FF model

PDF

Paper 37: Prediction of Cryptocurrency Price using Time Series Data and Deep Learning Algorithms

Abstract: One of the most significant and extensively utilized cryptocurrencies is Bitcoin (BTC). It is used in many different financial and business activities. Forecasting cryptocurrency prices is crucial for investors and academics in this industry because of the frequent volatility in the price of this currency. However, the nonlinearity of the cryptocurrency market makes the unique character of its time-series data challenging to evaluate and accurate price forecasting difficult. Predicting cryptocurrency prices has been the subject of several research studies utilizing machine learning (ML) and deep learning (DL) based methods. This research applies five DL approaches to forecast the price of the Bitcoin cryptocurrency: recurrent neural networks (RNN), long short-term memory (LSTM), gated recurrent units (GRU), bidirectional long short-term memory (Bi-LSTM), and 1D convolutional neural networks (CONV1D). The experimental findings demonstrate that the LSTM outperformed RNN, GRU, Bi-LSTM, and CONV1D in terms of prediction accuracy using measures such as Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Squared Error (MSE), and R-squared score (R2). With RMSE=1978.68268, MAE=1537.14424, MSE=3915185.15068, and R2=0.94383, it may be considered the best method.
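
The evaluation metrics used to rank the models (MSE, RMSE, MAE, R2) can be computed directly from predictions; the short price series below is hypothetical:

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, MAE and R-squared -- the metrics used to compare the DL models."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e ** 2 for e in errors) / n
    rmse = math.sqrt(mse)
    mae = sum(abs(e) for e in errors) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - (mse * n) / ss_tot  # 1 - SSE / total sum of squares
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2}

# Hypothetical daily closing prices vs. model predictions (USD).
actual = [27000.0, 27500.0, 26800.0, 28200.0]
predicted = [27100.0, 27350.0, 26900.0, 28000.0]
print(regression_metrics(actual, predicted))
```

RMSE and MAE are in price units (USD here), which is why the reported RMSE of about 1979 must be read against Bitcoin's price scale, while R2 is scale-free.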

Author 1: Michael Nair
Author 2: Mohamed I. Marie
Author 3: Laila A. Abd-Elmegid

Keywords: Cryptocurrency; deep learning; prediction; LSTM

PDF

Paper 38: Advances in Value-based, Policy-based, and Deep Learning-based Reinforcement Learning

Abstract: Machine learning is a branch of artificial intelligence in which computers use data to teach themselves and improve their problem-solving abilities. In this case, learning is the process by which computers use data and algorithms to build models that improve performance, and it can be divided into supervised learning, unsupervised learning, and reinforcement learning. Among them, reinforcement learning is a learning method in which AI interacts with the environment and finds the optimal strategy through actions, and it means that AI takes certain actions and learns based on the feedback it receives from the environment. In other words, reinforcement learning is a learning algorithm that allows AI to learn by itself and determine the optimal action for the situation by learning to find patterns hidden in a large amount of data collected through trial and error. In this study, we introduce the main reinforcement learning algorithms: value-based algorithms, policy gradient-based reinforcement learning, reinforcement learning with intrinsic rewards, and deep learning-based reinforcement learning. Reinforcement learning is a technology that enables AI to develop its own problem-solving capabilities, and it has recently gained attention among AI learning methods as the usefulness of the algorithms in various industries has become more widely known. In recent years, reinforcement learning has made rapid progress and achieved remarkable results in a variety of fields. Based on these achievements, reinforcement learning has the potential to positively transform human lives. In the future, more advanced forms of reinforcement learning with enhanced interaction with the environment need to be developed.
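A minimal sketch of the value-based family surveyed above: one tabular Q-learning update on a made-up two-state environment (the states, actions, reward, and learning parameters are all hypothetical illustrations).

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One Bellman update: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q

# Toy two-state chain: reward 1.0 for moving "right" from state 0 to state 1.
Q = {0: {"left": 0.0, "right": 0.0}, 1: {"left": 0.0, "right": 0.0}}
q_learning_update(Q, s=0, a="right", r=1.0, s_next=1)
```

Repeating such updates over many trial-and-error episodes is what lets the agent discover patterns in the feedback it receives from the environment.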

Author 1: Haewon Byeon

Keywords: Reinforcement learning; value-based algorithms; policy gradient-based reinforcement learning; reinforcement learning with intrinsic rewards; deep learning-based reinforcement learning

PDF

Paper 39: Scalable Blockchain Architecture: Leveraging Hybrid Shard Generation and Data Partitioning

Abstract: Blockchain technology has gained widespread recognition and adoption in various domains, but its implementation beyond cryptocurrencies faces a significant challenge: poor scalability. The serial execution of transactions in existing blockchain systems hampers transaction throughput and increases network latency, limiting overall system performance. In response to this limitation, this paper proposes a static analysis-driven data partitioning approach to enhance blockchain system scalability. By enabling parallel and distributed transaction execution through a simultaneous block-level transaction approach, the proposed technique substantially improves transaction throughput and reduces network latency. The study employs a hybrid shard generation algorithm within the Geth node of the blockchain network to create multiple shards or partitions. Experimental results indicate promising outcomes, with miners experiencing a remarkable speedup of 1.91x and validators achieving 1.90x, along with a substantial 35.34% reduction in network latency. These findings provide valuable insights and offer scalable solutions, empowering researchers and practitioners to address scalability concerns and promoting broader adoption of blockchain technology across various industries.

Author 1: Praveen M Dhulavvagol
Author 2: Prasad M R
Author 3: Niranjan C Kundur
Author 4: Jagadisha N
Author 5: S G Totad

Keywords: Ethereum; shard generation; data partitioning; proof of work

PDF

Paper 40: Detection of Herd Pigs Based on Improved YOLOv5s Model

Abstract: Fast and accurate detection technology for individual pigs raised in herds is crucial for subsequent research on counting and disease surveillance. In this paper, we propose an improved lightweight object detection method based on YOLOv5s to improve the speed and accuracy of detection of herd-raised pigs in real-world and complex environments. Specifically, we first introduce a lightweight feature extraction module called C3S, then replace the original large object detection layer with a small object detection layer at the output (head) of YOLOv5s. Finally, we propose a dual adaptive weighted PAN structure to compensate for the feature-map information loss at the neck of YOLOv5s caused by downsampling. Experiments show that our method has an accuracy rate of 95.2%, a recall rate of 89.1%, a mean Average Precision (mAP) of 95.3%, a model parameter count of 3.64M, a detection speed of 154 frames per second, and a model depth of 183 layers. Compared with the original YOLOv5s model and current state-of-the-art object detection models, our proposed method achieves the best results in terms of mAP and detection speed.

Author 1: Jianquan LI
Author 2: Xiao WU
Author 3: Yuanlin NING
Author 4: Ying YANG
Author 5: Gang LIU
Author 6: Yang MI

Keywords: Pig; deep learning; computer vision; object detection

PDF

Paper 41: The Impact of Cyber Security on Preventing and Mitigating Electronic Crimes in the Jordanian Banking Sector

Abstract: As technology advances and cyber threats continue to evolve, cyber security professionals play a critical role in developing and implementing robust security measures, staying ahead of potential risks, and mitigating the impact of cyber incidents. Many studies have examined the impact of cyber security on banks without focusing on electronic crimes. Despite its importance, to the best of our knowledge, there are no studies on the impact of cyber security on mitigating electronic crimes in the banking sector. Therefore, the purpose of this study is to ascertain how cyber security affects electronic crimes in the Jordanian banking industry. The study sample consisted of 270 senior Jordanian managers and employees who understand the importance of cyber security in the banking sector, drawn from 14 Jordanian commercial banks listed on the Amman Stock Exchange. The study used SPSS to evaluate how banks can enhance their network security infrastructure to prevent unauthorized access and data breaches, and to determine the role of cyber security in granting banks a competitive advantage. A relative importance index (RII) was computed to rank the importance of the variables' statements and test the hypotheses. The results show that the most important method by which banks can effectively mitigate the risk of electronic crimes and secure customers' financial data is to utilize robust encryption technologies that protect customer financial data both in transit and at rest (RII = 0.740), with about 81.5% of the sample agreeing. In addition, banks with a strong cyber security system provide a secure platform for digital financial services, which increases their competitive advantage; this statement was ranked first in relative importance at both the category level and overall (RII = 0.754).
The study recommends that the banking industry consistently educate its customers on information security techniques and on avoiding account hacking, and develop an alert system that can warn both banks and their customers of any attempted access to a customer's account or to confidential organizational information.
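The relative importance index used above is conventionally computed as RII = sum(W) / (A * N), where W are the weights respondents assign, A is the highest possible weight, and N the number of respondents. The sketch below assumes a 5-point Likert scale and hypothetical responses.

```python
def relative_importance_index(responses, max_weight=5):
    """RII = sum(W) / (A * N): W are the Likert weights given by
    respondents, A the highest possible weight, N the respondent count."""
    return sum(responses) / (max_weight * len(responses))

# Hypothetical answers from four respondents on a 5-point scale.
rii = relative_importance_index([5, 4, 3, 4])
```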

Author 1: Tamer Bani Amer
Author 2: Mohammad Ibrahim Ahmed Al-Omar

Keywords: Cyber security; electronic crime; Jordanian banks; banking sector

PDF

Paper 42: Research on the Local Path Planning for Mobile Robots based on PRO-Dueling Deep Q-Network (DQN) Algorithm

Abstract: This paper proposes a Pro-Dueling DQN algorithm to solve the problems of slow convergence and wasted effective experience in the traditional DQN (Deep Q-Network) algorithm for local path planning of mobile robots. The new algorithm introduces a priority experience replay mechanism based on SumTree to avoid forgetting effective learned experiences as the number of samples in the experience pool increases. A more detailed reward and punishment function is designed to reduce the blindness of experience extraction in the early stages of training. The feasibility of the algorithm is verified by comparative experiments on the ROS simulation platform and in a real scene. The results show that the designed Pro-Dueling DQN algorithm converges faster and plans shorter paths than the original DQN algorithm.
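A minimal sketch of the SumTree structure underlying the priority experience replay mechanism described above. The capacity and priorities are illustrative, and this is not the authors' implementation; leaves hold transition priorities while internal nodes hold sums, so sampling proportionally to priority is a logarithmic-time walk down the tree.

```python
class SumTree:
    """Binary tree over transition priorities for prioritized replay."""

    def __init__(self, capacity):
        self.capacity = capacity
        # 1-based heap layout; leaves occupy [capacity, 2*capacity).
        self.tree = [0.0] * (2 * capacity)

    def update(self, idx, priority):
        """Set leaf idx's priority and refresh the sums above it."""
        pos = idx + self.capacity
        self.tree[pos] = priority
        pos //= 2
        while pos >= 1:
            self.tree[pos] = self.tree[2 * pos] + self.tree[2 * pos + 1]
            pos //= 2

    def total(self):
        return self.tree[1]

    def sample(self, value):
        """Return the leaf index whose cumulative range contains value."""
        pos = 1
        while pos < self.capacity:
            left = 2 * pos
            if value <= self.tree[left]:
                pos = left
            else:
                value -= self.tree[left]
                pos = left + 1
        return pos - self.capacity

tree = SumTree(4)
for i, p in enumerate([1.0, 2.0, 3.0, 4.0]):
    tree.update(i, p)
```

Drawing `value` uniformly from `[0, tree.total())` then calling `sample(value)` picks transitions with probability proportional to their priority, which is what keeps rare but informative experiences from being forgotten.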

Author 1: Yaoyu Zhang
Author 2: Caihong Li
Author 3: Guosheng Zhang
Author 4: Ruihong Zhou
Author 5: Zhenying Liang

Keywords: Deep Q-Network (DQN) algorithm; local path planning; mobile robot; Pro-Dueling DQN algorithm; SumTree

PDF

Paper 43: Prostate Cancer Detection and Analysis using Advanced Machine Learning

Abstract: Prostate cancer is one of the leading causes of cancer-related deaths among men. Early detection of prostate cancer is essential in improving the survival rate of patients. This study aimed to develop a machine-learning model for detecting and diagnosing prostate cancer using clinical and radiological data. The dataset consists of 200 patients with prostate cancer and 200 healthy controls, with features extracted from their clinical and radiological data. Several machine learning models, including logistic regression, decision tree, random forest, support vector machine, and neural network models, were then trained and evaluated using 10-fold cross-validation. Our results show that the random forest model achieved the highest accuracy of 0.92, with a sensitivity of 0.95 and a specificity of 0.89. The decision tree model achieved a nearly similar accuracy of 0.91, while the logistic regression, support vector machine, and neural network models achieved lower accuracies of 0.86, 0.87, and 0.88, respectively. Our findings suggest that machine learning models can effectively detect and diagnose prostate cancer using clinical and radiological data, and that the random forest model may be the most suitable for this task.
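The reported accuracy, sensitivity, and specificity follow directly from a confusion matrix. The counts below are hypothetical, chosen only to be consistent with the random forest figures quoted above on a balanced 400-case set.

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall on positives), and specificity."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts: 200 cancer cases, 200 healthy controls.
acc, sens, spec = diagnostic_metrics(tp=190, tn=178, fp=22, fn=10)
```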

Author 1: Mowafaq Salem Alzboon
Author 2: Mohammad Subhi Al-Batah

Keywords: Prostate cancer; machine learning; clinical data; radiological data; diagnosis; medical diagnosis

PDF

Paper 44: Application of Improved Ant Colony Algorithm Integrating Adaptive Parameter Configuration in Robot Mobile Path Design

Abstract: Against the background of the continuing Industry 4.0 reform, market demand for mobile robots in the world's major economies is gradually increasing. To improve the quality of mobile robot path planning and obstacle avoidance, this research adjusted the node selection method, pheromone update mechanism, transition probability, and volatility coefficient calculation of the ant colony algorithm, and improved the search direction setting and cost estimation calculation of the A* algorithm. A robot movement path planning model was then designed based on the improved ant colony and A* algorithms. Simulation results on grid maps show that the planning model built on the improved algorithm, the traditional ant colony algorithm, the longicorn whisker search algorithm, and the particle swarm algorithm converged after 8, 37, 23, and 26 iterations, respectively, with minimum path lengths after convergence of 13.24m, 17.82m, 16.24m, and 17.05m. When the edge length of the grid map is 100m, the minimum planned length and total moving time of the improved algorithm, the traditional ant colony algorithm, the longicorn whisker search algorithm, and the particle swarm algorithm are 49m, 104m, 75m, 93m and 49s, 142s, 93s, and 127s, respectively. This indicates that the model designed in this study can effectively shorten the moving path and training time while completing mobile tasks. The results have reference value for optimizing robot movement and obstacle avoidance.
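As a sketch of the transition-probability element of the ant colony algorithm adjusted above: an ant weights each feasible next node by pheromone level tau raised to alpha times heuristic desirability eta raised to beta. The alpha, beta, and node values below are illustrative, not the paper's tuned settings.

```python
def transition_probabilities(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Classic ant colony rule: p_j is proportional to
    pheromone_j**alpha * heuristic_j**beta over the feasible nodes."""
    weights = [(t ** alpha) * (h ** beta)
               for t, h in zip(pheromone, heuristic)]
    total = sum(weights)
    return [w / total for w in weights]

# Two candidate nodes with equal heuristic but doubled pheromone on node 1.
p = transition_probabilities(pheromone=[1.0, 2.0], heuristic=[0.5, 0.5])
```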

Author 1: Jinli Han

Keywords: Ant colony algorithm; robots; mobile path planning; obstacle avoidance

PDF

Paper 45: Simulation of Logistics Frequent Path Data Mining Based on Statistical Density

Abstract: The novel coronavirus outbreak brought sharp increases and rapid development to online sales while affecting the real economy. The Internet e-commerce mode has attracted much attention, and users are purchasing online on an unprecedented scale. Express delivery companies, as the ones closest to consumers, must still provide high-quality service in the face of huge market demand. Urban terminal logistics refers to express services aimed at meeting the needs of terminal customers under the requirements of logistics centralization and customer diversification. However, the geographical distribution of logistics services in China is extensive, and customers' requirements are complex; practical problems in Chinese logistics enterprises significantly restrict the quality of logistics services. The final kilometer of distribution comprises many links and is a very cumbersome process: it includes determining the distribution scope, loading goods, arranging the distribution sequence, scheduling vehicles or personnel, and planning distribution routes. A Genetic Algorithm (GA) fused with a local search method is proposed for fast logistics data modeling and mining simulation analysis. Practical examples and literature data demonstrate the method's accuracy.

Author 1: Fengju Hou

Keywords: Statistical density; logistics; the path; the simulation data

PDF

Paper 46: Simulation Analysis of Hydraulic Control System of Engineering Robot Arm Based on ADAMS

Abstract: Substantial trenching capacity, communication capability, simple configuration, and other benefits have made Hydraulic Control Systems (HCS) the basis of the physical devices used in geotechnical trenching. These characteristics have led to widespread application in water conservation and hydroelectric projects, architectural construction, local construction, and other technologies. This article proposes an HCS for an engineering robot arm. A digital model of the working device is then constructed in ADAMS (Automatic Dynamic Analysis of Mechanical Systems), a simulation program, by incorporating the associated constraints and workload. With the help of a simulation model of the HCS's working apparatus, this research obtains the fundamental parameters of the excavator's operating range and the pressure-condition variation curve at the location of every Hydraulic Actuator (HA). The findings, which provide a conceptual framework and enhancements for the control system equipment, significantly raise the standard of China's excavator architecture, expand excavator efficiency, and foster the industry's fast growth. An in-depth examination of the HCS's current operating condition, including an examination of the simulation model's transmission phase, can be carried out. The findings provide a theoretical foundation for designing an optimal HCS.

Author 1: Haiqing Wu

Keywords: Hydraulic control systems; ADAMS; simulation analysis; engineering robot arm

PDF

Paper 47: Enhanced Transfer Learning Strategies for Effective Kidney Tumor Classification with CT Imaging

Abstract: Kidney tumours (KTs) rank seventh in global tumour prevalence among both males and females, posing a significant health challenge worldwide. Early detection of KT plays a crucial role in reducing mortality rates, mitigating side effects, and effectively treating the tumor. In this context, computer-assisted diagnosis (CAD) offers promising benefits, such as improved test accuracy, cost reduction, and time-saving compared to manual detection, which is known to be laborious and time-consuming. This research investigates the feasibility of employing machine learning (ML) and fine-tuned transfer learning (TL) to improve KT detection. CT images of individuals with and without kidney tumors were utilized to train the models. The study explores three different image dimensions, 32x32, 64x64, and 128x128 pixels, employing the Grey Level Co-occurrence Matrix (GLCM) for feature engineering. The GLCM uses the distance (d) and angle (θ) of pixel pairs to calculate their co-occurrence in the image. Various detection approaches, including Random Forest (RF), Support Vector Machine (SVM), Gradient Boosting (GB), and Light Gradient Boosting Model (LGBM), were applied to identify KTs in CT images for diagnostic purposes. Additionally, the study experimented with fine-tuned ResNet-101 and DenseNet-121 models for more effective computer-assisted diagnosis of KT. The diagnostic efficiency of fine-tuned ResNet-101 and DenseNet-121 was evaluated by comparing their performance with the four ML models (RF, SVM, LGBM, and GB). Notably, ResNet-101 and DenseNet-121 achieved the highest accuracy of 94.09%, precision of 95.10%, recall of 93.5%, and F1-score of 93.95% when using 32x32 input images. These results outperformed the other models and even surpassed state-of-the-art methods. This research demonstrates the potential of accurately and efficiently classifying KT in CT kidney scans using ML approaches.
The use of fine-tuned ResNet-101 and DenseNet-121 shows promising results and opens up avenues for enhanced computer-assisted diagnosis of kidney tumors.
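A minimal GLCM sketch matching the (d, θ) description above, computed on a toy 4-level image. The image and offset are illustrative; dx=1, dy=0 corresponds to d=1 at θ=0 degrees.

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Count co-occurrences of grey-level pairs at offset (dx, dy)."""
    h, w = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[image[y][x]][image[ny][nx]] += 1
    return counts

# Toy 3x3 image with grey levels 0..3.
img = [[0, 0, 1],
       [1, 2, 2],
       [3, 3, 3]]
g = glcm(img, dx=1, dy=0)   # d = 1, theta = 0 degrees
```

Texture features such as contrast, energy, and homogeneity are then derived from the normalized matrix and fed to classifiers like RF or SVM.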

Author 1: Muneer Majid
Author 2: Yonis Gulzar
Author 3: Shahnawaz Ayoub
Author 4: Farhana Khan
Author 5: Faheem Ahmad Reegu
Author 6: Mohammad Shuaib Mir
Author 7: Wassim Jaziri
Author 8: Arjumand Bano Soomro

Keywords: Kidney; kidney tumor; automatic diagnosis; machine learning algorithms; CT imaging; deep learning; transfer learning

PDF

Paper 48: A Hybrid Metaheuristic Model for Efficient Analytical Business Prediction

Abstract: Accurate and efficient business analytical predictions are essential for decision making in today's competitive landscape. This involves using data analysis, statistical methods, and predictive modeling to extract insights and make decisions. Current trends focus on applying business analytics to predictions. Optimizing business analytics predictions means increasing the accuracy and efficiency of the predictive models used to forecast future trends, behavior, and outcomes in the business environment. By analyzing data and developing optimization strategies, businesses can improve their operations, reduce costs, and increase profits. The analytic business optimization method uses a hybrid PSO (Particle Swarm Optimization) and GSO (Gravitational Search Optimization) algorithm to increase the efficiency and effectiveness of business decision-making. In this approach, the PSO algorithm explores the search space to find the global best solution, while the GSO algorithm refines the search around it. The hybrid meta-heuristic method optimizes the three components of business analytics: descriptive, predictive, and prescriptive. The hybrid model is designed to strike a balance between exploration and exploitation, ensuring effective search and convergence to high-quality solutions. The results show that the R2 value for each optimization parameter is close to one, indicating a well-fitted model. The RMSE value measures the average prediction error, with a lower error indicating that the model performs well. MSE represents the mean of the squared differences between the predicted and optimized values; a lower value indicates higher accuracy.
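A bare PSO sketch of the exploration stage described above, minimizing a sphere function. The inertia and acceleration coefficients, swarm size, and bounds are simplifications; the GSO refinement stage of the paper's hybrid is not reproduced here.

```python
import random

def pso_minimize(f, dim=2, swarm=10, iters=50, seed=0):
    """Plain particle swarm optimization of a continuous function f."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in X]               # per-particle best positions
    pval = [f(x) for x in X]
    g = min(range(swarm), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]      # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (pbest[i][d] - X[i][d])
                           + 1.5 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval

best, val = pso_minimize(lambda x: sum(xi * xi for xi in x))
```

In the hybrid scheme, a gravitational-search step would then refine the neighborhood of `gbest` before the next PSO pass.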

Author 1: Marischa Elveny
Author 2: Mahyuddin K. M Nasution
Author 3: Rahmad B. Y Syah

Keywords: Efficiency; analytics business; predictions; Particle Swarm Optimization (PSO); Gravitational Search Optimization (GSO)

PDF

Paper 49: A Mechanism for Bitcoin Price Forecasting using Deep Learning

Abstract: Researchers and investors have recently become interested in cryptocurrency price forecasting, and the most important exchange rate to consider is that of Bitcoin. Some researchers have aimed at leveraging the technical and financial characteristics of Bitcoin to create predictive models, while others have utilized conventional statistical methods to explain these factors. This article explores an LSTM model for forecasting the value of Bitcoin from its historical price series. Future Bitcoin prices are predicted by developing an accurate baseline LSTM forecasting model, building an advanced LSTM forecasting model (LSTM-BTC), and comparing both against past Bitcoin prices; the resulting model shows high accuracy in predicting future prices. The performance of the proposed model is evaluated using five different datasets with monthly, weekly, daily, hourly, and minute-by-minute Bitcoin price data covering January 1, 2021, to March 31, 2022. The results confirm the better forecasting accuracy of the proposed LSTM-BTC model. The analysis includes the MSE, RMSE, MAPE, and MAE of the Bitcoin price forecasts, and on these measures the suggested LSTM-BTC model outperforms the conventional LSTM model. The contribution of this research is a new framework for predicting the price of Bitcoin that addresses the selection and evaluation of input variables in LSTM without making firm data assumptions. The outcomes demonstrate its potential use in industry forecasting applications, including other cryptocurrencies, health data, and economic time series.

Author 1: Karamath Ateeq
Author 2: Ahmed Abdelrahim Al Zarooni
Author 3: Abdur Rehman
Author 4: Muhammd Adna Khan

Keywords: Currency; bitcoin; LSTM; forecasting; models

PDF

Paper 50: Research on Improving Piano Performance Evaluation Method in Piano Assisted Online Education

Abstract: With the continuous progress of science and technology and the popularization of the Internet, online piano education has gradually emerged. This educational model provides piano learning resources and a communication platform through the network, so that students can learn piano at home anytime and anywhere. However, there are still problems in the evaluation methods of piano-assisted online education, which hinder its development. Aiming at the difficulty of evaluating piano-assisted online education correctly, this paper proposes integrating a bidirectional long short-term memory (BiLSTM) network into a Musical Instrument Digital Interface (MIDI) piano performance evaluation model, with an attention mechanism incorporated into the BiLSTM, in the hope of improving the model's evaluation accuracy. In a comparison experiment, the accuracy of the BiLSTM-based evaluation model is 0.91, significantly higher than the comparison models. In addition, empirical analysis shows that an online piano course incorporating the model can improve students' performance-level scores and promote their enthusiasm for participation. These results indicate that the MIDI piano performance evaluation model can not only evaluate MIDI piano performance more accurately but also promote the development of online piano education.

Author 1: Huayi Qi
Author 2: Chunhua She

Keywords: Short-term memory network; attention mechanism; musical instrument digital interface; online education; piano performance evaluation model

PDF

Paper 51: Methodological Insights Towards Leveraging Performance in Video Object Tracking and Detection

Abstract: Video Object Detection and Tracking (VODT), one of the integral operations of present-day surveillance systems, provides a mechanism to identify and track a target object autonomously and seamlessly within the visual field. However, the challenges associated with video feeds are immense, and the scene context is outside human control, posing an impediment to a successful VODT model. The presented work discusses the effectiveness of existing VODT approaches according to identified taxonomies, viz. satellite-based, remote sensing-based, unmanned-vehicle-based, real-time tracking-based, and behavioral analysis and event detection-based approaches, along with the integration of multiple data sources and privacy and ethics. It further examines the research trend in cumulative publications and evolving methods to identify the methodologies frequently used in VODT. The results of the review show a prominent research gap across manifold attributes that must be addressed to improve VODT performance.

Author 1: Divyaprabha
Author 2: M. Z Kurian

Keywords: Object detection; object tracking; video; visual field; surveillance system; video feed

PDF

Paper 52: Pairwise Test Case Generation using (1+1) Evolutionary Algorithm for Software Product Line Testing

Abstract: Software product lines (SPLs) are groups of similar software systems that share commonalities but are distinguished from one another by the features they offer. Over the past few decades, SPLs have been the focus of a great deal of study and implementation in both the academic and commercial sectors. Using SPLs has been shown to improve product customization and decrease time to market. Additional difficulties arise when testing SPLs because it is impractical to test all possible product permutations. The use of combinatorial testing in SPL testing has been the subject of extensive study in recent years. The purpose of this study is to gather and analyze data on combinatorial testing applications in SPL, apply pairwise testing using the (1+1) evolutionary algorithm to SPL across four case studies, and assess the algorithm's efficacy using predetermined evaluation criteria. The findings show that the technique performs better on larger case studies, that is, those with a higher number of features, than on smaller ones.
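A small sketch of what pairwise coverage means for SPL configurations: enumerate the feature-value pairs a set of products exercises. The feature names and configurations below are made up; a (1+1) EA would mutate a candidate suite and keep the mutant whenever this coverage count improves.

```python
from itertools import combinations

def covered_pairs(configs):
    """Set of (feature_i, value_i, feature_j, value_j) tuples exercised
    by a list of product configurations (dicts of feature -> bool)."""
    pairs = set()
    for cfg in configs:
        for (f1, v1), (f2, v2) in combinations(sorted(cfg.items()), 2):
            pairs.add((f1, v1, f2, v2))
    return pairs

# Two hypothetical products over three boolean features.
configs = [
    {"A": True,  "B": True, "C": False},
    {"A": False, "B": True, "C": True},
]
n = len(covered_pairs(configs))   # full pairwise here needs 3 * 4 = 12 pairs
```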

Author 1: Sharafeldin Kabashi Khatir
Author 2: Rabatul Aduni Binti Sulaiman
Author 3: Mohammed Adam Kunna Azrag
Author 4: Jasni Mohamad Zain
Author 5: Julius Beneoluchi Odili
Author 6: Samer Ali Al-Shami

Keywords: SPL; SPL testing; combinatorial testing; pairwise testing; evolutionary algorithm; 1+1 EA

PDF

Paper 53: Campus Network Intrusion Detection Based on Gated Recurrent Neural Network and Domain Generation Algorithm

Abstract: Network attacks are diverse, often rare, and demand broad generalization to detect. This has made the exploration and construction of threat detection systems for network information flow packets a hot research topic in preventing network attacks. This study therefore establishes a network data threat detection model based on traditional network threat detection systems and deep learning neural networks, and uses convolutional neural networks and data enhancement technology to optimize the model and improve the accuracy of recognizing rare data. The experiments confirm that this detection model has recognition probabilities of approximately 11% and 42% for two rare attacks when N=1. When N=2, the probabilities are 52% and 78%, respectively; when N=3, approximately 85% and 92%; and when N=4, about 58% and 68%, so N=3 gives the best recognition effect. In addition, the recognition accuracy of this model for malicious domain name attacks and normal data remains around 90%, a significant advantage over traditional detection systems. The proposed network data flow threat detection model, which integrates a gated recurrent neural network and a domain generation algorithm, is therefore practical and feasible.
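For context on the malicious domain name side discussed above, a toy domain generation algorithm (DGA) sketch is shown below; the hashing scheme and seed are illustrative inventions, not any actual malware family's algorithm. Malware uses such schemes to derive pseudo-random rendezvous domains, which is what DGA-aware detectors try to recognize.

```python
import hashlib

def generate_domains(seed: str, count: int = 5, tld: str = ".com"):
    """Toy DGA: hash a seed plus counter into pseudo-random hostnames."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)
    return domains

doms = generate_domains("2023-08-01")
```

Because both malware and defender can recompute the list from the shared seed (often a date), detectors can pre-register or flag these domains.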

Author 1: Qi Rong
Author 2: Guang Zhao

Keywords: Gated recurrent; domain generation algorithm; campus network; threat detection; neural network

PDF

Paper 54: Dynamic Modelling of Hand Grasping and Wrist Exoskeleton: An EMG-based Approach

Abstract: Human motion intention plays an important role in designing exoskeleton hand-wrist control for post-stroke survivors, especially for hand grasping movement. A key challenge is that the sEMG signal is frequently affected by noise from its surroundings. To overcome this, this paper aims to establish the relationship between the sEMG signal, wrist angle, and handgrip force. ANN and ANFIS were the two approaches used to design dynamic models of hand grasping and wrist movement at different MVC levels. Input sEMG signal values from the FDS and EDC muscles were used to predict hand grip force as the output signal. The experimental results show that the sEMG MVC signal level was directly proportional to hand grip force production, while the hand grip force value depends on the wrist angle position. It is also concluded that hand grip force production is higher with the wrist in flexion than in extension. The strong relationship between the sEMG signal and wrist angle improved the hand grip force estimation and thus the myoelectric control device for the exoskeleton hand. Moreover, ANN improved on the estimation accuracy of ANFIS by 0.22% in the summed integral absolute error on the same testing dataset.

Author 1: Mohd Safirin Bin Karis
Author 2: Hyreil Anuar Bin Kasdirin
Author 3: Norafizah Binti Abas
Author 4: Muhammad Noorazlan Shah Bin Zainudin
Author 5: Sufri Bin Muhammad
Author 6: Mior Muhammad Nazmi Firdaus Bin Mior Fadzil

Keywords: Hand grasping; wrist control; ANN; ANFIS; exoskeleton wrist design

PDF

Paper 55: Research on Semantic Segmentation Method of Remote Sensing Image Based on Self-supervised Learning

Abstract: To address the challenge of requiring a large amount of manually annotated data for semantic segmentation of remote sensing images using deep learning, a method based on self-supervised learning is proposed. Firstly, to simultaneously learn the global and local features of remote sensing images, a self-supervised learning network structure called TBSNet (Triple-Branch Self-supervised Network) is constructed. This network comprises an image transformation prediction branch, a global contrastive learning branch, and a local contrastive learning branch. The contrastive learning part of the network employs a novel data augmentation method to simulate positive pairs of the same remote sensing images under different weather conditions, enhancing the model's performance. Meanwhile, the model integrates channel attention and spatial attention mechanisms in the projection head structure of the global contrastive learning branch, and replaces a fully connected layer with a convolutional layer in the local contrastive learning branch, thus improving the model's feature extraction ability. Secondly, to mitigate the high computational cost during the pre-training phase, an algorithm optimization strategy is proposed using the TracIn method and sequential optimization theory, which increases the efficiency of pre-training. Lastly, by fine-tuning the model with a small amount of annotated data, effective semantic segmentation of remote sensing images is achieved even with limited annotated data. The experimental results indicate that with only 10% annotated data, the overall accuracy (OA) and recall of this model have improved by 4.60% and 4.88% respectively, compared to the traditional self-supervised model SimCLR (A Simple Framework for Contrastive Learning of Visual Representations). This provides significant application value for tasks such as semantic segmentation in remote sensing imagery and other computer vision domains.
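A minimal sketch of the contrastive idea above: an InfoNCE-style softmax score for an anchor embedding, its augmented positive view (e.g. the same scene under simulated different weather), and a negative. The embeddings and temperature are illustrative, not TBSNet's actual projection outputs.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_score(anchor, positive, negatives, tau=0.5):
    """Softmax probability that the anchor picks its positive view
    out of the positive plus the negatives (InfoNCE-style)."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return exps[0] / sum(exps)

# Toy 2-D embeddings: the positive is close to the anchor, the negative is not.
p = contrastive_score([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0]])
```

Training maximizes this probability (minimizes its negative log), pulling the two augmented views of one image together while pushing other images away.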

Author 1: Wenbo Zhang
Author 2: Achuan Wang

Keywords: Computer vision; deep learning; self-supervised learning; remote sensing image; semantic segmentation

PDF

Paper 56: Mechatronics Design and Robotic Simulation of Serial Manipulators to Perform Automation Tasks in the Avocado Industry

Abstract: Peru is considered one of the principal agroindustrial avocado exporters worldwide. At the beginning of 2022, the volume exported was 8.3% higher than in 2021; accordingly, the design and simulation of a pick-and-place and palletizing cell for agro-exporting companies in the La Libertad region was proposed. The methodology followed a flow diagram for the design of the cell, considering the size of the avocado and the dimensions of the box-type packaging. The forward and inverse kinematics of the Scara T6 and UR10 robots were developed in Matlab according to the Denavit-Hartenberg convention, while 3D CAD, dynamic modeling, and trajectory calculation were performed in Solidworks using a "planner" algorithm developed in Matlab that takes into account the start and end points, maximum speeds, and travel time of each robot. Then, in CoppeliaSim, the working environment of the cell and the robots with their respective configurations was created. Finally, the trajectory simulation was performed, confirming the expected movement, and the task completion time was measured: the Scara T6 robot had a working time of 1.18 s and the UR10 of 2.32 s. For 2023 - 2025, implementation is proposed at the Camposol Company, located in the district of Chao, La Libertad, considering the dynamic control of the system.

Author 1: Carlos Paredes
Author 2: Ricardo Palomares
Author 3: Josmell Alva
Author 4: José Cornejo

Keywords: Mechatronic design; inverse kinematics; dynamic modeling; pick and place; palletizing; Scara robot; universal robot; robot manipulators; path tracking simulation; kinematic control

PDF

Paper 57: Integrating Transfer Learning and Deep Neural Networks for Accurate Medical Disease Diagnosis from Multi-Modal Data

Abstract: Effective patient treatment and care depend heavily on accurate disease diagnosis. The availability of multi-modal medical data in recent years, such as genetic profiles, clinical reports, and imaging scans, has created new possibilities for increasing diagnostic precision. However, because of their inherent complexity and variability, analyzing and integrating these varied data types present significant challenges. To overcome the difficulties of precise medical disease diagnosis using multi-modal data, this research suggests a novel approach that combines Transfer Learning (TL) and Deep Neural Networks (DNN). An image dataset that included images from various stages of Alzheimer's disease (AD) was collected from the Kaggle repository. To improve the quality of the signals or images for further analysis, a Gaussian filter is applied during the preprocessing stage to smooth out and reduce noise in the input data. The features are then extracted using the Gray-Level Co-occurrence Matrix (GLCM). TL enables the model to use knowledge gained from models previously trained in other domains, requiring less training time and data; the pre-trained model used in this approach is AlexNet. The classification of the disease is done using a DNN. This integrated approach improves diagnostic precision, particularly in scenarios with limited data availability. The study assesses the effectiveness of the suggested method for diagnosing AD, focusing on evaluation metrics such as accuracy, precision, miss rate, recall, F1-score, and the Area under the Receiver Operating Characteristic Curve (AUC-ROC). The approach is a promising tool for medical professionals to make more accurate and timely diagnoses, which will ultimately improve patient outcomes and healthcare practices. The results show a significant improvement, with an accuracy of 99.32%.
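The GLCM feature-extraction step described above counts how often pairs of gray levels co-occur at a fixed pixel offset, then derives texture statistics from the normalized matrix. A minimal pure-Python sketch (one horizontal offset and two common Haralick-style features; the paper's exact offsets and feature set are not specified):

```python
def glcm(image, levels, dr=0, dc=1):
    """Gray-Level Co-occurrence Matrix for one pixel offset (dr, dc),
    normalized so entries sum to 1. `image` is a 2D list of integer
    gray levels in [0, levels)."""
    M = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                M[image[r][c]][image[r2][c2]] += 1
    total = sum(map(sum, M)) or 1
    return [[v / total for v in row] for row in M]

def contrast(P):
    """Weights co-occurrences by squared gray-level difference."""
    n = len(P)
    return sum(P[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def homogeneity(P):
    """Large when co-occurring gray levels are similar."""
    n = len(P)
    return sum(P[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))
```

The resulting feature values (contrast, homogeneity, and similar statistics such as energy and correlation) form the feature vector fed to the classifier.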

Author 1: Chamandeep Kaur
Author 2: Abdul Rahman Mohammed Al-Ansari
Author 3: Taviti Naidu Gongada
Author 4: K. Aanandha Saravanan
Author 5: Divvela Srinivasa Rao
Author 6: Ricardo Fernando Cosio Borda
Author 7: R. Manikandan

Keywords: Transfer learning; deep neural network; disease diagnosis; multi-modal data; Alexnet; GLCM; DNN; pre-trained model

PDF

Paper 58: An Integrated Instrument for Measuring Science, Technology, Engineering, and Mathematics: Digital Educational Game Acceptance and Player Experience

Abstract: Digital educational games (DEGs) are effective learning tools for subjects related to science, technology, engineering, and mathematics (STEM), yet they are still not widely used among students. Existing instruments typically assess player experience (PX) and acceptance separately, even though both are essential DEG evaluations that can be merged and analyzed concurrently in a thorough manner. This study, therefore, proposes an integrated instrument called DEGAPX that combines fundamental technology acceptance factors with a broad range of PX criteria. The proposed instrument can be used by educators and game designers in the selection and development of DEGs that satisfy the needs of target users. This article describes the process of developing the scale instrument and validating it through two rounds of expert judgment and among students after using three DEGs related to STEM. The proposed instrument, which comprised 15 constructs measured by 67 items, was proven to be reliable and valid.

Author 1: Husna Hafiza R. Azami
Author 2: Roslina Ibrahim
Author 3: Suraya Masrom
Author 4: Rasimah Che Mohd Yusoff
Author 5: Suraya Yaacob

Keywords: Game; education; acceptance; experience; STEM

PDF

Paper 59: Human-object Behavior Analysis Based on Interaction Feature Generation Algorithm

Abstract: To address the insufficient utilization of interactive feature information between humans and objects, this paper proposes a two-stream human-object behavior analysis network based on an interaction feature generation algorithm. The network extracts human-object feature information and interactive feature information separately. For human-object feature extraction, ResNeXt is used to extract features from images, given its powerful feature expression ability. For the interactive features between humans and objects, an interaction feature generation algorithm is proposed that exploits the feature reasoning ability of graph convolutional neural networks: a graph model is constructed with humans and objects as nodes and the interactions between them as edges. Following the interaction feature generation algorithm, the graph model is updated by traversing nodes, and new interactive features are generated in the process. Finally, the human and object feature information and the human-object interaction feature information are fused and sent to the classification network for behavior recognition, so that both kinds of information are fully utilized. The network is verified experimentally, and the results show that its accuracy is significantly improved on the HICO-DET and V-COCO datasets.

Author 1: Qing Ye
Author 2: Xiuju Xu
Author 3: Rui Li

Keywords: Two-stream human-object behavior analysis network; interaction feature generation algorithm; interactive feature information; ResNeXt; graph convolutional neural networks; graph model

PDF

Paper 60: A Proposed Framework for Context-Aware Semantic Service Provisioning

Abstract: Web-hosted Internet of Things (IoT) applications are the next logical step in the recent endeavor by academia and industry to design and standardize new communication protocols for smart objects. Context awareness is defined as the property of a system that employs context to provide related information or services to the user, where the relationship is based on the user's task. Context-aware service discovery can therefore be defined as utilizing context information to discover the most relevant services for the user. Merging context-aware concepts with the IoT facilitates the development of IoT systems that operate in complex environments with many sensors and actuators, users, and their environments. The main objective of this study is to design an abstract framework for provisioning smart objects as a service based on context-aware concepts while considering constraints of bandwidth, scalability, and performance. The building blocks of the proposed framework include data acquisition and management services, data aggregation, and rule reasoning. The proposed framework is validated and evaluated by constructing an IoT network simulation, accessing the service both in the traditional manner and through the proposed framework, and comparing the results.

Author 1: Wael Haider
Author 2: Hatem Abdelkader
Author 3: Amira Abdelwahab

Keywords: Internet of Things (IoT); Web of Things (WoT); Web of Objects (WoOs); context-awareness; service provisioning; interoperability; ontology; OWL

PDF

Paper 61: Impact of the Use of the Video Game SimCity on the Development of Critical Thinking in Students: A Quantitative Experimental Approach

Abstract: The objective of this research is to determine to what extent the use of the SimCity video game supports the development of critical thinking in students' teaching-learning processes. The methodology was an experimental quantitative study with a sample of 25 students selected through simple random sampling from a population of 100 students. Ten sessions were conducted using the SimCity video game, and a Watson-Glaser pretest and posttest of the skills and abilities required for critical thinking were applied, measuring the dimensions of inferences, assumptions, deductive reasoning, logical interpretation, and evaluation of arguments. The results show that with adequate stimulation through the SimCity video game, critical thinking can develop moderately but effectively in students: comparison of the pretest and posttest data shows significant progress in scores. The effectiveness of the video game is reflected most strongly in inferences and evaluation of arguments, which showed the greatest posttest progress, while interpretation of information progressed least; deductive reasoning, inferences, and evaluation of arguments were moderately developed. In conclusion, the use of the SimCity video game supports the development of critical-thinking skills and abilities depending on various factors, such as how the game is incorporated into the curriculum, the orientation and guidance of teachers, and how reflection and analysis are carried out after the game experience.

Author 1: Jorge Luis Torres-Loayza
Author 2: Grunilda Telma Reymer-Morales
Author 3: Benjamín Maraza-Quispe

Keywords: SimCity; video games; critical thinking; critical learning

PDF

Paper 62: An Automated Medical Image Segmentation Framework using Deep Learning and Variational Autoencoders with Conditional Neural Networks

Abstract: Achieving reliable correspondence between images is a highly difficult challenge, and it is essential for numerous therapeutic activities such as combining images, creating tissue atlases, and tracking the development of tumors. This research presents a framework for segmenting healthcare images using deep learning variational autoencoders (VAEs) and conditional neural networks. Image partitioning is one of the essential jobs in machine vision, and it is more challenging than other vision tasks because it requires low-level spatial data. By utilizing the VAE's capacity to develop hidden representations and combining CNNs in a conditioned setting, the algorithm generates accurate and efficient segmentation results. To learn a latent-space representation from labelled clinical images, the VAE is trained as part of the suggested system; the learned representations and true categorizations are then used to train the conditional neural network. At the inference stage, the trained model is used to accurately separate the regions of interest in new medical images. Experimental findings on several healthcare imaging databases show enhanced segmentation precision, highlighting the method's ability to improve automated diagnosis and treatment. The suggested Deep Learning and Variational Autoencoders with Conditional Neural Networks (DL-VAE-CNN) approach thereby addresses the pixel-level classification problem that hampered earlier investigations.

Author 1: Dustakar Surendra Rao
Author 2: L. Koteswara Rao
Author 3: Bhagyaraju Vipparthi

Keywords: Deep learning; variational autoencoders; CNN; medical image segmentation; automated diagnosis and treatment

PDF

Paper 63: Estimating Probability Values Based on Naïve Bayes for Fuzzy Random Regression Model

Abstract: In treating the uncertainties of fuzziness and randomness in real regression applications, fuzzy random regression was introduced to address the limitation of classical regression, which can only fit precise data. However, there is no systematic procedure for identifying randomness by means of probability theory. Moreover, existing models are mostly concerned with the fuzzy equation and omit discussion of the probability equation, even though randomness plays a pivotal role in the fuzzy random regression model. Hence, this paper proposes a systematic Naïve Bayes procedure for estimating the probability values needed to handle randomness. The results show that the accuracy of the Naïve Bayes model can be improved by considering the probability estimation.
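The paper's exact procedure for feeding Naïve Bayes probabilities into the fuzzy random regression model is not detailed in the abstract, but the core estimation step is standard: class priors and Laplace-smoothed class-conditional likelihoods, combined via Bayes' rule into posterior probabilities. A minimal pure-Python sketch for categorical features:

```python
from collections import Counter, defaultdict

def fit_naive_bayes(rows, labels):
    """Estimate priors P(c) and Laplace-smoothed likelihoods P(x_i | c)
    from categorical training rows; returns a posterior function."""
    n = len(labels)
    priors = {c: k / n for c, k in Counter(labels).items()}
    counts = defaultdict(Counter)   # (feature index, class) -> value counts
    values = defaultdict(set)       # feature index -> observed values
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(i, c)][v] += 1
            values[i].add(v)

    def posterior(row):
        """P(c | row) via Bayes' rule with add-one (Laplace) smoothing."""
        scores = {}
        for c, p in priors.items():
            s = p
            for i, v in enumerate(row):
                cnt = counts[(i, c)]
                s *= (cnt[v] + 1) / (sum(cnt.values()) + len(values[i]))
            scores[c] = s
        z = sum(scores.values())
        return {c: s / z for c, s in scores.items()}

    return posterior
```

The returned posterior values are the probability estimates that, per the abstract, would then parameterize the random component of the fuzzy random regression model.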

Author 1: Hamijah Mohd Rahman
Author 2: Nureize Arbaiy
Author 3: Chuah Chai Wen
Author 4: Pei-Chun Lin

Keywords: Naïve Bayes; fuzziness; randomness; probability estimation

PDF

Paper 64: A New Approach of Hybrid Sampling SMOTE and ENN to the Accuracy of Machine Learning Methods on Unbalanced Diabetes Disease Data

Abstract: The performance of machine learning methods in disease classification is affected by the quality of the dataset, one issue being unbalanced data. One example of health data with this problem is diabetes disease data. If unbalanced data is not addressed, it can degrade the performance of the classification method. Therefore, this research proposes a SMOTE-ENN approach to improve the performance of the Support Vector Machine (SVM) and Random Forest classification methods for diabetes prediction. The SMOTE-ENN method was used to balance the diabetes data and remove noisy data adjacent to the majority and minority classes. The balanced data were then classified using the SVM and Random Forest methods, with the training/testing split based on 10-fold cross-validation. The results show that the Random Forest method with SMOTE-ENN achieved the best performance compared to the SVM method, with an accuracy of 95.8%, a sensitivity of 98.3%, and a specificity of 92.5%. In addition, the proposed approach (Random Forest with SMOTE-ENN) also obtained the best accuracy compared with the previous studies referenced. Thus, the proposed method can be adopted to predict diabetes in a health application.
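The SMOTE half of the SMOTE-ENN procedure oversamples the minority class by interpolating each minority point toward one of its nearest minority neighbours. A hedged pure-Python sketch of just that interpolation step (the ENN noise-cleaning pass, which removes samples misclassified by their neighbours, is omitted for brevity; real pipelines typically use the imbalanced-learn library instead):

```python
import math
import random

def smote(minority, n_new, k=1, seed=0):
    """Generate n_new synthetic minority samples. Each synthetic point
    lies on the segment between a random minority point and one of its
    k nearest minority neighbours (SMOTE step only, no ENN cleaning)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        # k nearest neighbours of p within the minority class
        neighbours = sorted((q for q in minority if q is not p),
                            key=lambda q: math.dist(p, q))[:k]
        q = rng.choice(neighbours)
        gap = rng.random()  # random position along the p -> q segment
        synthetic.append([a + gap * (b - a) for a, b in zip(p, q)])
    return synthetic
```

Because every synthetic point is a convex combination of two real minority samples, the oversampled class stays inside the region the minority data already occupies.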

Author 1: Hairani Hairani
Author 2: Dadang Priyanto

Keywords: SMOTE-ENN; data imbalance; SVM; random forest; health dataset

PDF

Paper 65: An Ensemble Load Balancing Algorithm to Process the Multiple Transactions Over Banking

Abstract: The banking industry has been transformed by cloud computing, which provides scalable and cost-effective solutions for managing large volumes of transactions. However, as the number of transactions grows, efficient load-balancing algorithms become critical for ensuring optimal utilization of cloud resources and improving system performance. This paper proposes an ensemble cloud load-balancing algorithm (ECBA) specifically designed to process multiple banking transactions. The proposed algorithm combines the strengths of several load-balancing techniques to achieve a balanced distribution of transaction loads across cloud servers. It considers factors such as transaction types, server capacities, and network conditions to make intelligent load-distribution decisions, and it dynamically adapts to changing workload patterns and optimizes resource allocation by leveraging machine learning and predictive analytics. A simulation environment that mimics a banking system's transaction-processing workflow was created to evaluate the performance of the ensemble load-balancing algorithm, and extensive experiments with various workload scenarios were conducted to assess its effectiveness in load balancing, response time, resource utilization, and overall system performance. The results show that the proposed ECBA outperforms traditional banking load-balancing approaches: it reduces response time, improves resource utilization, and ensures that each server is assigned an appropriate share of transactions. The algorithm's adaptability and scalability make it well suited to handling dynamic and fluctuating workloads, providing a robust solution for processing multiple transactions in the banking sector.
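The components of the ECBA ensemble are not enumerated in the abstract, so the sketch below illustrates only the general idea with one simple, hypothetical stand-in policy: dispatch each transaction to the server with the lowest load-to-capacity ratio, so higher-capacity servers absorb proportionally more work:

```python
def dispatch(servers, transactions):
    """Capacity-weighted least-loaded dispatch (a simple stand-in for
    the paper's ensemble strategy, whose exact members are unspecified).
    servers: {name: capacity}; transactions: [(tx_id, cost), ...]."""
    load = {name: 0.0 for name in servers}
    placement = {}
    for tx_id, cost in transactions:
        # pick the server whose current load is smallest relative to capacity
        target = min(servers, key=lambda s: load[s] / servers[s])
        load[target] += cost
        placement[tx_id] = target
    return placement, load
```

With servers of capacity 2.0 and 1.0 and three unit-cost transactions, the larger server receives two of them, matching the 2:1 capacity ratio.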

Author 1: Raghunadha Reddi Dornala

Keywords: Cloud computing; load balancing; ensemble algorithm; banking; transaction processing; resource utilization; response time; scalability

PDF

Paper 66: Genetic Approach for Improved Prediction of Adaptive Learning Activities in Intelligent Tutoring System

Abstract: An intelligent tutoring system registers learners' reference data in a database, where it is stored for later use by the instructional module. Designing a student model is not an easy task: it is first necessary to identify the knowledge acquired by the learner, then the learner's level of understanding of the functionality, and finally the pedagogical strategies the learner uses to solve a problem. These elements must be taken into account in developing the learner model, and learner characteristics must be considered in several forms. To build an effective learner model, the system must take into consideration both static (learner preferences) and dynamic (behavioral actions) student characteristics. The objective of this article is to develop the learner model of an intelligent tutoring system by suggesting a new learning path. The proposal is based on the constructivist approach and the activist style (based on experimentation). Following the Kolb model, the authors propose a list of pedagogical activities according to the learners' profile. Based on the learners' actions, the system narrows the list of activities using two criteria, the learner's preference and the presence of one or more activist-style activities, with a genetic algorithm as the evolutionary method. The results obtained improve the learning process through a new conception of the ITS learner model.

Author 1: Fatima-Zohra Hibbi
Author 2: Otman Abdoun
Author 3: El Khatir Haimoudi

Keywords: Intelligent tutoring system; learner model; genetic algorithm; adaptive learning activities

PDF

Paper 67: Algorithm for Skeleton Action Recognition by Integrating Attention Mechanism and Convolutional Neural Networks

Abstract: An action recognition model based on 3D skeleton data may lose recognition accuracy against complex backgrounds, and it can easily overlook the local connection between dynamic gradient information and dynamic actions, lowering the model's fault tolerance. To capture human skeletal movements accurately and quickly, a directed graph convolutional network recognition model that integrates an attention mechanism with a convolutional neural network is proposed. By combining a spacetime converter with central differential graph convolution, a central differential converter graph convolutional network model is constructed to obtain the dynamic gradient information in actions and compute the local connections between dynamic actions. The results show that the directed graph convolutional network recognition model achieves a cross-target benchmark recognition rate of 92.3% and a cross-view benchmark recognition rate of 97.3%, with Top-1 accuracy of 37.6% and Top-5 accuracy of 60.5%. The central differential converter graph convolutional network model achieves a cross-target recognition rate of 92.9% and a cross-view benchmark recognition rate of 97.5%. Under the cross-target and cross-view benchmarks, the average recognition accuracy for similar actions is 81.3% and 88.9%, respectively. The accuracy of the complete action recognition model in single-person and multi-person recognition experiments is 95.0%. These outcomes indicate that the constructed model offers a higher recognition rate and more stable performance than existing neural network recognition models, and has clear research value.

Author 1: Jianhua Liu

Keywords: Attention mechanism; convolutional neural network; action recognition; central differential network; spacetime converter; directed graph convolution

PDF

Paper 68: A Population-based Plagiarism Detection using DistilBERT-Generated Word Embedding

Abstract: Plagiarism is the unacknowledged use of another person’s language, information, or writing without crediting the source. This manuscript presents an innovative method for detecting plagiarism utilizing attention mechanism-based LSTM and the DistilBERT model, enhanced by an enriched differential evolution (DE) algorithm for pre-training and a focal loss function for training. DistilBERT reduces BERT’s size by 40% while maintaining 97% of its language comprehension abilities and being 60% quicker. Current algorithms utilize positive-negative pairs to train a two-class classifier that detects plagiarism. A positive pair consists of a source sentence and a suspicious sentence, while a negative pair comprises two dissimilar sentences. Negative pairs typically outnumber positive pairs, leading to imbalanced classification and significantly lower system performance. To combat this, a training method based on a focal loss (FL) is suggested, which carefully learns minority class examples. Another addressed issue is the training phase, which typically uses gradient-based methods like back-propagation for the learning process. As a result, the training phase has limitations, such as initialization sensitivity. A new DE algorithm is proposed to initiate the back-propagation process by employing a mutation operator based on clustering. A successful cluster for the current DE population is found, and a fresh updating approach is used to produce potential solutions. The proposed method is assessed using three datasets: SNLI, MSRP, and SemEval2014. The model attains excellent results that outperform other deep models, conventional, and population-based models. Ablation studies excluding the proposed DE and focal loss from the model confirm the independent positive incremental impact of these components on model performance.
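The focal loss used to counter the imbalance between scarce positive (source/suspicious) pairs and abundant negative pairs follows the standard form FL(p_t) = -α_t (1 - p_t)^γ log(p_t), which down-weights easy, well-classified examples. A minimal binary sketch (the paper's exact α and γ settings are not stated; the defaults below are the commonly used ones):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one example.
    p: predicted probability of the positive class; y: true label (0/1).
    The (1 - p_t)**gamma factor shrinks the loss of confident, correct
    predictions, so the scarce positive pairs dominate training."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With γ = 0 and α = 1 the expression reduces to ordinary cross-entropy; increasing γ progressively mutes the contribution of examples the classifier already gets right.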

Author 1: Yuqin JING
Author 2: Ying LIU

Keywords: Plagiarism detection; LSTM; imbalanced classification; DistilBERT; differential evolution; focal loss

PDF

Paper 69: Enhancing Startup Efficiency: Multivariate DEA for Performance Recognition and Resource Optimization in a Dynamic Business Landscape

Abstract: Startups encounter a variety of difficulties in maximising their performance and resource allocation in today's dynamic business environment. This study introduces a novel two-stage methodology, combining advanced Data Envelopment Analysis (DEA) with predictive modeling, to uncover the key factors influencing startup efficiency. In the first stage, the relative efficiency of startups is assessed by comparing their inputs and outputs through DEA, a non-parametric approach. This analysis not only reveals the successful startups but also establishes benchmarks for others to aspire to; by examining the efficiency scores, the critical factors that significantly affect startup performance can be identified. In the second stage, a logistic approach is employed to predict the performance of startups based on these discovered factors. This prediction model can inform resource-allocation decisions, aiding startups in their survival and development efforts. Together, the two stages offer a comprehensive approach for startups to strategically allocate resources and enhance overall performance in the present dynamic business environment.

Author 1: K. N. Preethi
Author 2: Yousef A. Baker El-Ebiary
Author 3: Esther Rosa Saenz Arenas
Author 4: Kathari Santosh
Author 5: Ricardo Fernando Cosio Borda
Author 6: Anuradha. S
Author 7: R. Manikandan

Keywords: Startup efficiency; data envelopment analysis; logistic approach; resource allocation; dynamic business landscape

PDF

Paper 70: Design and Improvement of New Industrial Robot Mechanism Based on Innovative BP-ARIMA Combined Model

Abstract: The main innovation of Industry 4.0, which involves human-robot cooperation, is transforming industrial operation facilities. Robotic systems have been developed as modern industrial solutions to assist operators in carrying out manual tasks in cyber-physical industrial environments. These robots integrate unique human talents with the capabilities of intelligent machinery. Due to the increasing demand for modern robotics, numerous ongoing industrial robotics studies exist. Robots offer advantages over humans in various aspects, as they can operate continuously. Enhanced efficiency is achieved through reduced processing time and increased industrial adaptability. When deploying interactive robotics, emphasis should be placed on optimal design and improvisation requirements. Robotic design is a very challenging procedure that involves extensive development and modeling efforts. Significant progress has been made in robotic design in recent years, providing multiple approaches to address this issue. Considering this, we propose utilizing the Backpropagation Autoregressive Integrated Moving Average (BP-ARIMA) combination model for designing and improving a novel industrial robot mechanism. The design outcomes were evaluated based on performance indicators, including accuracy, optimal performance, error rate, implementation cost, and energy consumption. The evaluation findings demonstrate that the suggested BP-ARIMA model offers optimal design for industrial robotics.

Author 1: Yuanyuan Liu

Keywords: Industry 4.0; robotics; design; backpropagation autoregressive integrated moving average (BP-ARIMA); operation facilities

PDF

Paper 71: A Proposed Approach for Monkeypox Classification

Abstract: Public health concerns have been heightened by the emergence and spread of monkeypox, a viral disease that affects both humans and animals. The significance of early detection and diagnosis of monkeypox cannot be overstated, as it plays a crucial role in minimizing the negative impact on affected individuals and safeguarding public health. Monkeypox poses a considerable threat to human well-being, causing physical discomfort and mental distress, while also posing challenges to work productivity. This study proposes an applied model that combines deep learning models (ResNet-50, VGG16, and MobileNet) with machine learning models (Random Forest, K-Nearest Neighbors, Gaussian Naive Bayes, Decision Tree, Logistic Regression, and AdaBoost classifiers) to classify and detect monkeypox. The datasets used in this research are the Monkeypox Skin Lesion Dataset (MSLD) and the Monkeypox Image Dataset (MID), which together contain 659 images; subjects range from healthy cases to severe skin lesions. The test results show that the model combining deep learning and machine learning achieves positive results, with an accuracy of 0.97 and an F1-score of 0.98.

Author 1: Luong Hoang Huong
Author 2: Nguyen Hoang Khang
Author 3: Le Nhat Quynh
Author 4: Le Huu Thang
Author 5: Dang Minh Canh
Author 6: Ha Phuoc Sang

Keywords: Monkeypox; machine learning; deep learning; skin lesions

PDF

Paper 72: CryptoScholarChain: Revolutionizing Scholarship Management Framework with Blockchain Technology

Abstract: Scholarship management is a crucial aspect of higher education systems, aimed at supporting deserving students and reducing financial barriers. However, traditional scholarship management processes often suffer from challenges such as a lack of transparency, inefficient communication, and difficulty tracking and verifying scholarship applications. Blockchain technology has recently emerged as a promising solution to these issues, offering a decentralized, transparent, and secure framework for scholarship management. However, the existing literature lacks comprehensive solutions in critical areas such as scholarship management, storage facilities, payment systems, monitoring and auditing, and experimental validation. This research introduces an innovative smart scholarship management system leveraging Blockchain technology to overcome these limitations. It presents an Ethereum-based implementation utilizing Solidity for backend smart contracts and ReactJS for the front end; experimental evaluation validates the transaction execution gas costs and deployment cost.

Author 1: Jadhav Swati
Author 2: Pise Nitin

Keywords: Blockchain; smart scholarship management; smart contract; solidity

PDF

Paper 73: The Application of Decision Tree Classification Algorithm on Decision-Making for Upstream Business

Abstract: In today's rapidly advancing technological landscape and evolving business paradigms, the pursuit of insightful patterns and concealed knowledge beyond conventional big data becomes imperative. This pursuit plays a crucial role in aiding stakeholders, particularly in tactical decision-making and forecasting, with a particular focus on business strategy and risk management. Strategic and tactical decision-making holds the key to sustaining the longevity, profitability, and continuous enhancement of the oil and gas industry. It is therefore paramount to uncover the most effective Decision Tree (DT) techniques for various challenges and to identify their practical applications in real-life scenarios. The integration of big data with Machine Learning (ML) stands as a pivotal approach to foster data-driven innovation within the oil and gas sector. This study aims to offer valuable insights and methodologies for efficient decision-making, catering to the diverse stakeholders within the oil and gas industry. It focuses on the exploration of optimal DT techniques for specific problems and their relevance in practical situations. By harnessing the potential of machine learning and collaborative efforts among research scientists, big data practitioners, data scientists, and analysts, the study strives to provide more precise and effective insights. Furthermore, it is imperative to recognize that not all stakeholders are mathematicians. In project management, a holistic approach that considers humanistic perspectives, such as risk analysis, ethics, and empathy, is crucial. Ultimately, the output and findings of any system must be accessible, comprehensible, and interpretable by humans or human groups. The success of these insights lies not just in their mathematical precision but also in their ability to resonate with and guide human decision-makers. In this light, the study emphasizes the human element in data interpretation and decision-making, acknowledging that the system's output will require human interaction, analysis, and ethical considerations to be truly effective in driving positive outcomes in the industry.

Author 1: Mohd Shahrizan Abd Rahman
Author 2: Nor Azliana Akmal Jamaludin
Author 3: Zuraini Zainol
Author 4: Tengku Mohd Tengku Sembok

Keywords: Decision-making strategies; decision tree family; business decisions; upstream; oil & gas; predictive analysis; project control; project planning; machine learning algorithms

PDF
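As a hedged illustration of the core mechanic behind the Decision Tree family discussed above, the sketch below selects the split threshold that minimizes weighted Gini impurity. The toy upstream features (reservoir pressure, water cut) and labels are invented for illustration and are not taken from the paper.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a label list: 1 - sum over classes of p_k^2."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels, feature_idx):
    """Find the threshold on one numeric feature with the lowest weighted Gini."""
    best = (float("inf"), None)
    for t in sorted({r[feature_idx] for r in rows}):
        left = [y for r, y in zip(rows, labels) if r[feature_idx] <= t]
        right = [y for r, y in zip(rows, labels) if r[feature_idx] > t]
        if not left or not right:
            continue  # a split must put at least one sample on each side
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
        if score < best[0]:
            best = (score, t)
    return best  # (weighted impurity, threshold)

# Toy upstream data: [reservoir_pressure, water_cut] -> drill / hold decision
rows = [[3000, 0.1], [3200, 0.2], [1500, 0.8], [1400, 0.9]]
labels = ["drill", "drill", "hold", "hold"]
print(best_split(rows, labels, 0))
```

A full tree builder would apply `best_split` recursively over all features; this shows only the split-selection criterion.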

Paper 74: Deep Learning Enhanced Internet of Medical Things to Analyze Brain Computed Tomography Images of Stroke Patients

Abstract: In the realm of advancing medical technology, this paper explores a revolutionary amalgamation of deep learning algorithms and the Internet of Medical Things (IoMT), demonstrating their efficacy in decoding the labyrinthine intricacies of brain Computed Tomography (CT) images from stroke patients. Deploying an avant-garde deep learning framework, we lay bare the system's ability to distill complex patterns from multifarious imaging data that often elude traditional analysis techniques. Our research punctuates the pioneering leap from conventional, mostly uniform methods towards harnessing the power of a nuanced approach that embraces the intricacies of the human brain. This system goes beyond mere novelty, evidencing a substantial enhancement in early detection and prognosis of strokes, expediting clinical decisions, and thereby potentially saving lives. Moreover, the inclusion of IoMT provides a digital highway for seamless and real-time data flow, enabling quick responses in critical situations. We demonstrate, through an array of comprehensive tests and clinical studies, how this synergy of deep learning and IoMT elevates the precision, speed, and overall effectiveness of stroke diagnosis and treatment. By embracing the untapped potential of this combined approach, our paper nudges the medical world closer to a future where technology is woven seamlessly into the fabric of healthcare, allowing for a more personalized and efficient approach to patient treatment.

Author 1: Batyrkhan Omarov
Author 2: Azhar Tursynova
Author 3: Meruert Uzak

Keywords: Deep learning; machine learning; stroke; diagnosis; detection; computed tomography

PDF

Paper 75: Chatbot Program for Proposed Requirements in Korean Problem Specification Document

Abstract: In software engineering, requirement analysis is a crucial task throughout the entire process and holds significant importance. However, factors contributing to the failure of requirement analysis include communication breakdowns, divergent interpretations of requirements, and inadequate execution of requirements. To address these issues, the proposed approach applies NLP machine learning to Korean requirement documents to generate knowledge-based data and deduce actors and actions from that knowledge-based information. The derived actors and actions are then structured into a hierarchy of sentences through clustering, establishing a conceptual hierarchy between sentences. This hierarchy is transformed into ontology data, resulting in the final requirement list. A chatbot system provides users with the derived system event list, generating requirement diagrams and specification documents. Users can refer to the chatbot system's outputs to extract requirements. In this paper, the feasibility of this approach is demonstrated by applying it to a case involving Korean-language requirements for course enrollment.

Author 1: Young Yun Baek
Author 2: Soojin Park
Author 3: Young B. Park

Keywords: Requirement engineering; NLP machine learning; clustering; Korean document; chatbot

PDF

Paper 76: Applying Artificial Intelligence and Computer Vision for Augmented Reality Game Development in Sports

Abstract: This paper delineates the intricate process of crafting an Augmented Reality (AR)-enriched version of the Subway Surfers game, engineered with an emphasis on action recognition and the leverage of Artificial Intelligence (AI) principles, with the primary objective of boosting children's enthusiasm towards physical activity. The gameplay, fundamentally predicated on advanced computer vision methodologies for discerning player movements and reinforced with machine learning tactics for modulating the intricacy of the game in accordance with player capabilities, offers an immersive and engaging interface. This innovative amalgamation serves not only to catalyze children's interest in active exercise but also to introduce a playful aspect to it. The procedural development of the game required the cohesive assimilation of a diverse spectrum of technologies, encompassing Unity for game development, TensorFlow for implementing machine learning algorithms, and Vuforia for crafting the AR elements. A preliminary study, conducted to assess the efficacy of the game in fostering a pro-sport attitude in children, reported encouraging outcomes. Given the potential of the game to incite physical activity among young users, it could be construed as a promising antidote to sedentarism and a potent catalyst for endorsing a healthier lifestyle.

Author 1: Nurlan Omarov
Author 2: Bakhytzhan Omarov
Author 3: Axaule Baibaktina
Author 4: Bayan Abilmazhinova
Author 5: Tolep Abdimukhan
Author 6: Bauyrzhan Doskarayev
Author 7: Akzhan Adilzhan

Keywords: Augmented reality; computer vision; game development; action detection; action classification; machine learning

PDF

Paper 77: PMG-Net: Electronic Music Genre Classification using Deep Neural Networks

Abstract: With the rapid development of the electronic music industry, establishing automatic classification technology for electronic music genres has become an urgent problem. This paper utilized neural network (NN) technology to classify electronic music genres. The basic idea was to establish a deep neural network (DNN) based classification model to analyze audio signal processing and classification feature extraction for electronic music. In this paper, 2700 different types of electronic music were selected as experimental data from the publicly available dataset of W website and fed into the convolutional neural network (CNN) model, the PMG-Net electronic music genre classification model, and a traditional classification model for comparison. The results showed that the PMG-Net model had the best classification performance and the highest recognition accuracy. The classification error of the PMG-Net model in each round of training was smaller than that of the other two classification models, with little fluctuation. The PMG-Net model also processed music signals and extracted audio-sample features faster in each round than the traditional classification model and the CNN model. It can be seen that the PMG-Net electronic music genre classification model, customized based on DNNs, achieves a better classification effect for automatic classification of electronic music genres and can efficiently complete classification over massive data.

Author 1: Yuemei Tang

Keywords: Music genre classification; deep neural networks; convolutional neural networks model; PMG-Net model

PDF

Paper 78: Automatic Layout Algorithm for Graphic Language in Visual Communication Design

Abstract: As computer technology advances, people's capacity for visual perception improves and the demands placed on computerized layouts progressively rise. The simple style of graphics is no longer the only option for computer graphics and video creation; instead, there is a greater tendency to visually represent the effect and improve the aesthetics and expressiveness of visuals and images. Graphic language uses visual components, including shapes, colors, typography, images, and icons, in a visual communication context to express messages, ideas, and emotions. Against the backdrop of the information era, graphic language encounters greater opportunities and obstacles. Consequently, it is crucial to convert data into graphic language. Visual communication is evolving in several promising directions with technological advances and cultural convergence. Graphic language has its own distinct visual meaning, and each person's visual experience is extremely diverse, encountering various visual elements in different layouts in daily life. A hybridized Grid and Content-based Automatic Layout (HGC-AL) algorithm for graphic language in Visual Communication Design (VCD) has been developed to produce visually balanced layouts and establish a structured system for arranging content elements. The content-based layout uses design constraints for better alignment and avoids conflict loss. The hierarchical arrangement of graphic elements in a grid layout analyzes the types of visual elements, such as image, text, and color. Finally, the graphic language enhances the visual score and offers flexibility by allowing changes and modifications within the grid layout. As design requirements change, the responsive fluid grid supports various graphical content, sizes, and alignments. Compared with existing layout algorithms, the proposed algorithm is validated with metrics such as Intersection over Union (IoU), alignment accuracy, content coverage ratio, visual score, scalability ratio, and overall layout quality.

Author 1: Xiaofang Liao
Author 2: Xinqian Hu

Keywords: Graphic language; visual communication design; layout algorithm; design elements; grid layout; content layout

PDF

Paper 79: Smart Sensor Signal-Assisted Behavioral Model and Control of Live Interaction in Digital Media Art

Abstract: Digital media art immersive scene design is a type of art design based on flow theory from positive psychology, using digital media as the main technology and tool to build a scene that stimulates users' senses and perception so that they achieve a state of immersion and forget other things. In this paper, we discuss the application of digital experience technology in designing art scene interaction devices by combining intelligent sensor signal analysis with multimodal interaction. On this basis, a new inductive displacement sensing element is proposed, which adopts a square-wave driving mode and an op-amp circuit to extract signals. It overcomes the shortcomings of the traditional inductive displacement sensing element, gaining the advantages of small size, light weight, good linearity, high-frequency response, and simple driving and signal-detection circuits, and adapts more easily to microcomputer control. A comprehensive anti-interference and system fault self-diagnosis design is carried out for the sensor system to ensure its stability and reliability. An intelligent digital filtering algorithm with program judgment is proposed, with better smoothing ability and faster response speed. The multimodal interaction strategy for digital experience design is applied to design practice, and a series of diversified device design solutions suitable for on-site interaction behavior are proposed.

Author 1: Pujie Li
Author 2: Shi Bai

Keywords: Intelligent sensors; digital media; VR technology; artistic interaction

PDF

Paper 80: Research on Strategic Decision Model of Human Resource Management based on Biological Neural Network

Abstract: The human resource management system is an indispensable part of information strategy construction. Based on biological neural network theory, this paper constructs a strategic decision model for human resource management, then uses the micro-integration method to predict the demand for human resources and solve the quantification problem of human resource supply prediction. In the simulation process, the model analyzes the current situation of the personnel management system and the necessity of research, and plans and designs a computer-aided personnel management information system based on a Client/Server biological neural network structure. Personnel quality evaluation, through assessment and analysis of the evaluated staff, provides effective reference information for enterprise personnel decisions and index selection, which is of great significance for the allocation, use, training, and development of enterprise human resources. Neural networks rely on the powerful data storage, processing, and computing capabilities of computers to help enterprises respond quickly to changes in external market conditions, improve decision-making efficiency, and create greater value. Experimental testing found that at 5 iterations the network verification results have the best consistency, and at 7 iterations the training target error standard set in this paper is reached. When the samples reached 60, the screening accuracy of the network reached 92.18%; when the samples increased to 80, the accuracy improved further to 92.84%, indicating that screening accuracy increases with training samples and that the network can detect and classify samples quickly, objectively, and accurately.

Author 1: Ke Xu

Keywords: Biological neural network; human resources management; strategic decision making; index selection

PDF

Paper 81: Multimodal Deep Learning Approach for Real-Time Sentiment Analysis in Video Streaming

Abstract: Recognizing emotions from visual data, like images and videos, presents a daunting challenge due to the intricacy of visual information and the subjective nature of human emotions. Over the years, deep learning has showcased remarkable success in diverse computer vision tasks, including sentiment classification. This paper introduces a novel multi-view deep learning framework for emotion recognition from visual data. Leveraging Convolutional Neural Networks (CNNs), this framework extracts features from visual data to enhance sentiment classification accuracy. Additionally, we enhance the deep learning model through cutting-edge techniques like transfer learning to bolster its generalization capabilities. Furthermore, we develop an efficient deep learning classification algorithm, effectively categorizing visual sentiments based on the extracted features. To assess its performance, we compare our proposed model with state-of-the-art machine learning methods in terms of classification accuracy, training time, and processing speed. The experimental results unequivocally demonstrate the superiority of our framework, showcasing higher classification accuracy, faster training times, and improved processing speed compared to existing methods. This multi-view deep learning approach marks a significant stride in emotion recognition from visual data and holds the potential for various real-world applications, such as social media sentiment analysis and automated video content analysis.

Author 1: Tejashwini S. G
Author 2: Aradhana D

Keywords: Deep learning; emotion recognition; feature extraction; machine learning; sentiment analysis; visual data

PDF

Paper 82: 3D Magnetic Resonance Image Denoising using Wasserstein Generative Adversarial Network with Residual Encoder-Decoders and Variant Loss Functions

Abstract: Magnetic resonance imaging (MRI) is frequently contaminated by noise during scanning and transmission of images, which deteriorates the accuracy of quantitative measures derived from the data and limits disease diagnosis by doctors or computerized systems. MRI commonly suffers from so-called Rician noise: because uncorrelated Gaussian noise with zero mean and equal standard deviation is present in both the real and imaginary parts of the complex k-space image, the noise distribution in magnitude MR images tends to follow a Rician distribution. To remove Rician noise from MRI scans, deep learning has been used in MRI denoising methods to achieve improved performance. The proposed models were inspired by the Residual Encoder-Decoder Wasserstein Generative Adversarial Network (RED-WGAN). Specifically, the generator network is a residual autoencoder combining convolution and deconvolution operations, and the discriminator network consists of convolutional layers. By replacing the Mean Square Error (MSE) loss in RED-WGAN with a Structurally Sensitive Loss (SSL), RED-WGAN-SSL is proposed to overcome the loss of important structural details caused by over-smoothing of edges. A RED-WGAN-SSIM model has also been developed using the Structural Similarity (SSIM) loss. The proposed RED-WGAN-SSL and RED-WGAN-SSIM models incorporate the SSL, SSIM, Visual Geometry Group (VGG), and adversarial losses to form new loss functions. They preserve informative details and fine structures better than RED-WGAN, so our models can effectively reduce noise and suppress artifacts.

Author 1: Hanaa A. Sayed
Author 2: Anoud A. Mahmoud
Author 3: Sara S. Mohamed

Keywords: Deep learning; image denoising; MRI; Wasserstein GAN; loss function

PDF

Paper 83: A Framework for Patient-Centric Medical Image Management using Blockchain Technology

Abstract: In the smart systems context, the storage and distribution of health-critical data (medical images, test reports, clinical information, etc.) processed and transmitted via web portals and pervasive devices requires secure and efficient management of patients’ medical records. Reliance on centralized cloud data centers to process, store, and transmit patients’ medical records poses critical challenges including, but not limited to, operational costs, storage space requirements and, importantly, threats and vulnerabilities to the security and privacy of health-critical data. To address these issues, this research proposes a framework and provides a proof-of-concept named the Patient-Centric Medical Image Management System (PCMIMS). The proposed solution utilizes the Ethereum blockchain and the Inter-Planetary File System (IPFS) to enable the secure and decentralized storage capabilities that existing solutions for patients’ medical image management lack. The PCMIMS design facilitates secure access to patient-centric information for health units, patients, medics, and third-party requestors by incorporating a patient-centric access control protocol, ensuring privacy and control over medical data. The proposed framework is validated through the deployment of a prototype based on a smart contract executed on the Ethereum TESTNET blockchain, demonstrating the efficiency and feasibility of the solution. Validation results highlight a correlation between (i) the number of transactions (i.e., data storage and retrieval), (ii) gas consumption (i.e., energy efficiency), and (iii) data size (volume of patient-centric medical images) via repeated trials in a Microsoft Windows environment. Validation results also indicate the computational efficiency of the solution in processing the three most common types of patient-centric medical images, namely (a) magnetic resonance imaging (MRI), (b) X-radiation (X-ray), and (c) computed tomography (CT) scans. This research primarily contributes by designing, implementing, and validating a blockchain-based practical solution for efficient and secure management of patient-centric medical images in the context of smart healthcare systems.

Author 1: Abdulaziz Aljaloud

Keywords: Smart healthcare; medical imaging; blockchain; ethereum; distributed storage

PDF

Paper 84: An Ensemble Learning Approach for Multi-Modal Medical Image Fusion using Deep Convolutional Neural Networks

Abstract: Medical image fusion plays a vital role in enhancing the quality and accuracy of diagnostic procedures by integrating complementary information from multiple imaging modalities. In this study, we propose an ensemble learning approach for multi-modal medical image fusion utilizing deep convolutional neural networks (DCNNs) to predict brain tumours. The proposed method aims to exploit the inherent characteristics of different modalities and leverage the power of CNNs for improved fusion results. A Generative Adversarial Network (GAN) enhances the input images. The ensemble learning framework comprises two main stages. First, a set of DCNN models is trained independently on the respective input modalities, extracting high-level features that capture modality-specific information. Each DCNN model is fine-tuned to optimize its performance for fusion. Second, a fusion module is designed to aggregate the individual modality features and generate a fused image. The fusion module employs a weighted averaging technique to assign appropriate weights to the features based on their relevance and significance. The fused image obtained through this process exhibits enhanced spatial details and improved overall quality compared to the individual modalities. Thorough tests are carried out on a diversified dataset of multi-modal medical images to assess the efficacy of the proposed approach. The fused images exhibit improved visual quality, enhanced feature representation, and better preservation of diagnostic information. The BRATS 2018 dataset, which contains multi-modal MRI images and patients’ healthcare information, was used. The proposed method also demonstrates robustness across different medical imaging modalities, highlighting its versatility and potential for widespread adoption in clinical practice.

Author 1: Andino Maseleno
Author 2: D. Kavitha
Author 3: Koudegai Ashok
Author 4: Mohammed Saleh Al Ansari
Author 5: Nimmati Satheesh
Author 6: R. Vijaya Kumar Reddy

Keywords: Deep convolutional neural networks; image fusion; generative adversarial network; ensemble learning

PDF
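The weighted-averaging fusion step described in the abstract can be sketched as a pixel-wise weighted mean of same-sized modality feature maps. The maps and weights below are illustrative assumptions, not values from the paper.

```python
def fuse(feature_maps, weights):
    """Pixel-wise weighted average of same-sized 2-D feature maps."""
    total = sum(weights)
    norm = [w / total for w in weights]  # normalize so the weights sum to 1
    rows, cols = len(feature_maps[0]), len(feature_maps[0][0])
    return [[sum(w * m[i][j] for w, m in zip(norm, feature_maps))
             for j in range(cols)] for i in range(rows)]

mri = [[0.2, 0.8], [0.4, 0.6]]   # toy "MRI-derived" feature map
ct  = [[0.6, 0.4], [0.2, 1.0]]   # toy "CT-derived" feature map
fused = fuse([mri, ct], weights=[0.7, 0.3])  # MRI weighted as more relevant
print(fused)
```

In the paper's pipeline the inputs would be DCNN feature maps and the weights learned or assigned by relevance; here both are hand-picked to keep the sketch self-contained.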

Paper 85: Segmentation of Breast Cancer on Ultrasound Images using Attention U-Net Model

Abstract: Breast cancer (BC) is one of the most prevalent and life-threatening types of cancer impacting women worldwide. Early detection and accurate diagnosis are crucial for effective treatment and improved patient outcomes. Deep learning techniques have shown remarkable promise in medical image analysis tasks, particularly segmentation. This research leverages the Breast Ultrasound Images (BUSI) dataset to develop two variations of a segmentation model using the Attention U-Net architecture. In this study, we trained the Attention3 U-Net and the Attention4 U-Net on the BUSI dataset, consisting of normal, benign, and malignant breast lesions. We evaluated the models' performance based on standard segmentation metrics such as the Dice coefficient and Intersection over Union (IoU). The results demonstrate the effectiveness of the Attention U-Net in accurately segmenting breast lesions, with high overall performance indicating agreement between predicted and ground truth masks. The successful application of the Attention U-Net to the BUSI dataset holds promise for improving breast cancer diagnosis and treatment. It highlights the potential of deep learning in medical image analysis, paving the way for more efficient and reliable diagnostic tools in breast cancer management.

Author 1: Sara LAGHMATI
Author 2: Khadija HICHAM
Author 3: Bouchaib CHERRADI
Author 4: Soufiane HAMIDA
Author 5: Amal TMIRI

Keywords: Breast cancer; deep learning; segmentation; attention U-Net

PDF
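The two segmentation metrics named in the abstract, the Dice coefficient and Intersection over Union (IoU), can be computed from flat binary masks as follows; the masks here are toy examples, not BUSI data.

```python
def dice(pred, truth):
    """Dice coefficient: 2|P∩T| / (|P| + |T|) on flat binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def iou(pred, truth):
    """Intersection over Union: |P∩T| / |P∪T| on flat binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union

pred  = [1, 1, 0, 0, 1]  # toy predicted lesion mask (flattened)
truth = [1, 0, 0, 1, 1]  # toy ground-truth mask
print(dice(pred, truth), iou(pred, truth))
```

Note the standard relationship the sketch exhibits: Dice is always at least as large as IoU for the same masks.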

Paper 86: New Real Dataset Creation to Develop an Intelligent System for Predicting Chemotherapy Protocols

Abstract: Breast cancer is the most common cancer diagnosed in women. In developing countries, controlling this scourge is often problematic due to late diagnosis and the lack of medical and human resources. Automation and optimization of treatment are then needed to improve patient outcomes. The use of medical datasets could, according to medical staff and pharmacists, assist them in clinical decision-making and would allow for better use of resources, especially when they are limited. In this paper, a new real dataset was produced by collecting medical and personal data from 601 patients with breast cancer at the University Hospital Center (UHC) Mohammed VI of Marrakech. Data of women diagnosed with breast cancer from January 2018 at the UHC were assessed. Patients were 24 to 85 years old, with an average age of 48.84 years. Patient age, performance status (PS), cancer stage and subtype, treatment patterns, and correlations among the different variables were analyzed. The created dataset will help determine the most appropriate treatment regimen depending on the individual characteristics of patients, allowing for better use of limited resources.

Author 1: Houda AIT BRAHIM
Author 2: Mariam BENLLARCH
Author 3: Nada BENHIMA
Author 4: Salah EL-HADAJ
Author 5: Abdelmoutalib METRANE
Author 6: Ghizlane BELBARAKA

Keywords: Dataset; breast cancer; cancer stage; chemotherapeutic regimen; machine learning; prediction

PDF

Paper 87: Presenting a Novel Method for Identifying Communities in Social Networks Based on the Clustering Coefficient

Abstract: In recent decades, social networks have been considered one of the most important topics in computer science and social science. Identifying different communities and groups in these networks is very important because this information can be useful in analyzing and predicting various behaviors and phenomena, including the spread of information and social influence. One of the most important challenges in social network analysis is identifying communities. A community is a collection of people or organizations that are more densely connected than other network entities. In this article, a method to increase the accuracy, quality, and speed of community detection using the Fire Butterfly algorithm is presented; the algorithm is defined, and the parameters used in it and its implementation are fully introduced. In this method, the social network is first converted into a graph, and the clustering coefficient is then calculated for each node. A butterfly algorithm based on the clustering coefficient (CC-BF) is proposed to identify communities in complex social networks. The proposed algorithm is new both in how it generates the initial population and in its mutation method, which improves its efficiency and accuracy. This research is inspired by the Butterfly Flame meta-heuristic algorithm, based on the clustering coefficient, to find active nodes in the social network. The results show that the proposed algorithm improves by 23.6% compared to previous similar works. The findings of this research are valuable for researchers in computer science, social network managers, data analysts, organizations and companies, and the general public.

Author 1: Zhihong HE
Author 2: Tao LIU

Keywords: Social network; detection of communities; butterfly fire algorithm; clustering coefficient

PDF
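As a hedged sketch of the quantity the CC-BF method computes per node, the snippet below measures the local clustering coefficient: for a node v, the fraction of pairs of v's neighbours that are themselves connected. The toy graph is illustrative, not from the paper.

```python
from itertools import combinations

def clustering_coefficient(adj, v):
    """Local clustering coefficient of node v; adj maps node -> neighbour set."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0  # fewer than two neighbours: no pairs to check
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2 * links / (k * (k - 1))

# Toy social graph: a triangle {A, B, C} plus a pendant node D attached to A
adj = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}
print(clustering_coefficient(adj, "A"))  # 1 of A's 3 neighbour pairs is linked
print(clustering_coefficient(adj, "B"))  # B's only neighbour pair (A, C) is linked
```

In the proposed method these per-node coefficients would guide the initial population and mutation of the butterfly search; here only the coefficient itself is shown.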

Paper 88: Motor Imagery EEG Signals Marginal Time Coherence Analysis for Brain-Computer Interface

Abstract: The synchronization of neural activity in the human brain has great significance for coordinating its various cognitive functions. It changes over time and with frequency. The activity is measured in terms of brain signals, such as the electroencephalogram (EEG). In this research, the time-frequency (TF) synchronization among several EEG channels is measured using an efficient approach. Most frequently, the windowed Fourier transform, i.e., the short-time Fourier transform (STFT), as well as the wavelet transform (WT), are used to measure TF coherence. The information provided by these model-based methods in the TF domain is insufficient. The proposed synchrosqueezing transform (SST)-based TF representation is a data-adaptive approach for resolving the problems of the traditional ones. It enables more accurate estimation and better tracking of TF components. The SST generates a clearly defined TF depiction because of its data flexibility and frequency reassignment capabilities. Furthermore, a non-identical smoothing operator is used to smooth the TF coherence, which enhances the statistical consistency of neural synchronization. The experiment is run using both simulated and actual EEG data. The outcomes show that the suggested SST-based system performs significantly better than the aforementioned traditional approaches. As a result, coherences based on the suggested approach clearly distinguish between various forms of motor imagery movement. The TF coherence can be used to measure the interdependencies of neural activities.

Author 1: Md. Sujan Ali
Author 2: Jannatul Ferdous

Keywords: Brain-Computer Interface (BCI); Electroencephalogram (EEG); Short-time Fourier Transform (STFT); Synchrosqueezing Transform (SST); time-frequency coherence

PDF

Paper 89: Systematic Review for Phonocardiography Classification Based on Machine Learning

Abstract: Phonocardiography, the recording and analysis of heart sounds, has become an essential tool in diagnosing cardiovascular diseases (CVDs). In recent years, machine learning and deep learning techniques have dramatically improved the automation of phonocardiogram classification, making it possible to delve deeper into intricate patterns that were previously difficult to discern. Deep learning, in particular, leverages layered neural networks to process data in complex ways, mimicking how the human brain works. This has contributed to more accurate and efficient diagnoses. This systematic review aims to examine the existing literature on phonocardiography classification based on machine learning, focusing on algorithms, datasets, feature extraction methods, and classification models utilized. The materials and methods used in the study involve a comprehensive search of relevant literature and a critical evaluation of the selected studies. The review also discusses the challenges encountered in this field, especially when incorporating deep learning techniques, and suggests future research directions. Key findings indicate the potential of machine and deep learning in enhancing the accuracy of phonocardiography classification, thereby improving cardiovascular disease diagnosis and patient care. The study concludes by summarizing the overall implications and recommendations for further advancements in this area.

Author 1: Abdullah Altaf
Author 2: Hairulnizam Mahdin
Author 3: Awais Mahmood
Author 4: Mohd Izuan Hafez Ninggal
Author 5: Abdulrehman Altaf
Author 6: Irfan Javid

Keywords: Heart sounds classification; Phonocardiogram (PCG); CVDs; deep learning

PDF

Paper 90: A Hybrid Classification Approach of Network Attacks using Supervised and Unsupervised Learning

Abstract: The increasing scale and sophistication of network attacks have become a major concern for organizations around the world. As a result, there is an increasing demand for effective and accurate classification of network attacks to enhance cyber security measures. Most existing schemes assume that the available training data is labeled; that is, classification is based on supervised learning. However, this is not always the case since the available real data is expected to be unlabeled. In this paper, this issue is tackled by proposing a hybrid classification approach that combines both supervised and unsupervised learning to build a predictive classification model for classifying network attacks. First, unsupervised learning is used to label the data available in the dataset. Then, different supervised machine learning algorithms are utilized to classify data with the labels obtained from the first step and compare the results with the ground truth labels. Moreover, the issue of the unbalanced dataset is addressed using both over-sampling and under-sampling techniques. Several experiments have been conducted, using the NSL-KDD dataset, to evaluate the efficiency of the proposed hybrid model and the obtained results demonstrate that the accuracy of our proposed model is comparable to supervised classification methods that assume that all data is labeled.

Author 1: Rahaf Hamoud R. Al-Ruwaili
Author 2: Osama M. Ouda

Keywords: Network attacks; supervised learning; unsupervised learning; machine learning

PDF

Paper 91: Violent Physical Behavior Detection using 3D Spatio-Temporal Convolutional Neural Networks

Abstract: The widespread use of surveillance cameras has made it possible to analyze huge amounts of data for automated surveillance. Security systems in schools, hotels, hospitals, and other sensitive areas are required to identify violent activities that can cause social, economic, and environmental damage. Detecting moving objects in each frame is a fundamental step in analyzing the video stream and recognizing violence. Therefore, a three-step approach is presented in this article, in which the separation of frames containing motion information and the detection of violent behavior are performed at two levels of the network. First, the people in the video frames are identified using a convolutional neural network. In the second step, a sequence of 16 frames containing the identified people is fed into a 3D CNN. Furthermore, the 3D CNN is optimized for visual inference with a neural network optimization tool that transforms the pre-trained model into an intermediate representation; the OpenVINO toolkit is used to perform these optimization operations and increase performance. To evaluate the accuracy of the algorithm, two datasets were analyzed: Violence in Movies and Hockey Fight. The results show final accuracies of 99.9% and 96% on these two datasets, respectively.

Author 1: Xiuhong Xu
Author 2: Zhongming Liao
Author 3: Zhaosheng Xu

Keywords: Violence detection; surveillance cameras; 3D Convolutional Neural Network (3D CNN); Spatio-temporal convolution; deep learning; abnormal behavior

PDF

Paper 92: Construction of VR Video Quality Evaluation Model Based on 3D-CNN

Abstract: Virtual reality (VR) panoramic video content currently occupies a very important position on virtual reality platforms. Video quality directly affects the experience of platform users, and research on methods for evaluating VR video quality is increasing. This study therefore establishes a subjective evaluation library for VR video data and uses a viewport slicing method to segment VR videos, expanding the sample size. A classification prediction network was then constructed using a three-dimensional convolutional neural network (3D-CNN) to achieve objective evaluation of VR videos. However, during the research it was found that the increase in convolutional dimension inevitably leads to a significant increase in the parameter count of the entire network, resulting in a surge in algorithm time complexity. To address this defect, the study designs dual 3D convolutional layers and improves the 3D-CNN with residual networks, on which basis a VR video quality evaluation model based on the improved 3D-CNN is constructed. Experimental analysis shows that the constructed model achieves an average overall accuracy of 95.27%, an average accuracy of 95.94%, and an average Kappa coefficient of 96.18%; it can thus accurately and effectively evaluate the quality of virtual reality videos and promote the development of the virtual reality field.
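The parameter surge the abstract attributes to the extra convolutional dimension follows directly from the kernel size; a quick back-of-the-envelope check:

```python
def conv_params(c_in, c_out, *kernel):
    """Weights in a convolution layer (ignoring bias): c_in * c_out * prod(kernel)."""
    n = c_in * c_out
    for k in kernel:
        n *= k
    return n

p2d = conv_params(64, 64, 3, 3)     # 2D conv: 3x3 spatial kernel
p3d = conv_params(64, 64, 3, 3, 3)  # 3D conv: 3x3x3 kernel adds a temporal axis
print(p2d, p3d, p3d // p2d)         # the 3D layer has 3x the weights
```

This is why the paper's residual/dual-layer redesign targets parameter count rather than accuracy alone.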

Author 1: Hongxia Zhao
Author 2: Li Huang

Keywords: Virtual reality video; 3D convolutional neural network; residual network; quality evaluation

PDF

Paper 93: Design Strategy and Application of Headwear with National Characteristics Based on Information Visualization Technology

Abstract: With the rapid development of big data, information technology, and visualization technology, traditional national headdress design has gradually been combined with them. The strategies and applications of national headdress design now fully reflect the beauty of modern science and technology, a model of the combination of national classics and modern technique. On this basis, this paper analyzes in depth the links and processes of data-driven design using specific information on Yao ethnic headwear. Building on existing visual design, the paper takes Spring, Hibernate, and related frameworks as the basic software architecture of the design system, studies their visualization principles and data-visualization methods in depth, and applies data visualization to the design of national headwear, in order to build a digital material library with national characteristics and a digital design process for national headwear. Through digital processing and matching across the whole design, the current design of national headwear can be simplified and optimized, and design efficiency can be improved, providing reference samples for other designs with national characteristics. In the design section, the paper verifies the approach on a corresponding characteristic Yao headdress design and evaluates it from the perspectives of artistry, practicality, and nationality. The results show that the proposed information visualization design of national headwear has obvious advantages over traditional design, greatly improving design efficiency and simplifying the design process.

Author 1: Ting Zhang

Keywords: Information visualization; headwear national characteristics; digital material library; yao nationality characteristic headdress design

PDF

Paper 94: SLAM Mapping Method of Laser Radar for Tobacco Production Line Inspection Robot Based on Improved RBPF

Abstract: This study focuses on the laser radar SLAM mapping method employed by a tobacco production line inspection robot, utilizing an enhanced RBPF approach. It constructs a well-structured two-dimensional map of the inspection environment, with the aim of ensuring the seamless execution of inspection tasks along the tobacco production line. Wheel odometer and IMU data are fused using the extended Kalman filter algorithm, and the resulting fused odometer motion model and LiDAR observation model jointly serve as the hybrid proposal distribution. Within this distribution, the iterative closest point method is used to find sampling particles in the high-probability region; the matching score during particle scan matching serves as the fitness value, and the Drosophila (fruit fly) optimization strategy is used to adjust the particle distribution. The weight of each optimized particle is then computed, the particles are adaptively resampled according to these weights, and the inspection map is updated from the updated pose and observation information of the inspection robot's particles. Experimental results show that this method realizes laser radar SLAM mapping for the tobacco production line inspection robot and builds a near-ideal two-dimensional map of the inspection environment with fewer particles; applied in practical work, it can achieve good working results.
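The weight-based adaptive resampling step can be illustrated with a standard low-variance (systematic) resampler, a common choice in RBPF implementations (shown here as a generic sketch, not the paper's exact procedure):

```python
import numpy as np

def systematic_resample(weights, rng):
    """Low-variance resampling: draw indices in proportion to particle weights."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

rng = np.random.default_rng(0)
weights = np.array([0.05, 0.05, 0.8, 0.05, 0.05])  # one dominant particle
idx = systematic_resample(weights, rng)
print(idx)  # the heavy particle (index 2) is duplicated, light ones are dropped
```

After resampling, each surviving particle carries equal weight and the filter concentrates on high-probability poses.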

Author 1: Zhiyuan Liang
Author 2: Pengtao He
Author 3: Wenbin Liang
Author 4: Xiaolei Zhao
Author 5: Bin Wei

Keywords: Improved RBPF; tobacco production line; patrol robot; LiDAR; slam mapping; drosophila optimization strategy

PDF

Paper 95: Visual Image Feature Recognition Method for Mobile Robots Based on Machine Vision

Abstract: With the continuous advancement of machine vision and computer technology, mobile robots with visual systems have received widespread attention in fields such as industry, agriculture, and services. However, current methods for processing the visual images of mobile robots struggle to meet the requirements of practical applications, suffering from low efficiency and low accuracy. Therefore, spatial information is first integrated into the K-means algorithm and image spatial structure constraints are introduced for visual image segmentation. A densely connected network is then added to the convolutional neural network structure, and this structure is combined with a bidirectional long short-term memory network to achieve visual image feature recognition. The results show that the improved K-means algorithm achieves a maximum recall of 97.35% on the Berkeley image segmentation dataset, with a maximum Rand index of 86.18%. Combined with the proposed improved convolutional neural network, the highest feature recognition rate across five scenes (mining, risk elimination, agriculture, factory, and building) is 96.1%, and the lowest error rate is 1.2%. The method possesses a high degree of recognition accuracy and can be effectively applied to visual feature recognition on mobile robots, providing a novel reference for visual image processing on mobile robots.
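Integrating spatial information into K-means can be sketched by appending scaled pixel coordinates to the intensity feature, so that spatially close pixels tend to fall in the same segment; the toy image and the spatial weight below are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy "image": left half dark, right half bright, plus noise.
rng = np.random.default_rng(1)
h, w = 32, 32
img = np.hstack([np.full((h, w // 2), 0.2), np.full((h, w // 2), 0.8)])
img += rng.normal(0, 0.05, img.shape)

# Feature per pixel: intensity plus scaled (row, col) coordinates.
ys, xs = np.mgrid[0:h, 0:w]
alpha = 0.3  # weight of the spatial term (a tunable assumption)
features = np.column_stack(
    [img.ravel(), alpha * ys.ravel() / h, alpha * xs.ravel() / w])

labels = KMeans(n_clusters=2, n_init=10,
                random_state=0).fit_predict(features).reshape(h, w)
# The two halves should land in different clusters.
left, right = labels[:, : w // 2], labels[:, w // 2:]
print(round(float(left.mean()), 2), round(float(right.mean()), 2))
```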

Author 1: Minghe Hu
Author 2: Jiancang He

Keywords: Machine vision; mobile robots; image recognition; convolutional neural network; K-means algorithm

PDF

Paper 96: Explore Chinese Energy Commodity Prices in Financial Markets using Machine Learning

Abstract: This study simultaneously investigates the causality and dynamic links between international energy trade and economic price changes, especially in the Chinese commodity market. To obtain causal routes, it identifies linear and nonlinear causality among commodity prices, equities, and the exchange rate in China and the United States (US). We adapt multilayer perceptron networks to obtain a nonlinear autoregressive model for causality discovery. Comparing against methods without networks, this study shows that the nonlinear causality discovery method using machine learning performs best on simulated data. We then apply the method to actual data, combining the causal routes obtained from the machine learning methodology to investigate direct and indirect causal relationships among Chinese commodity prices, long-term interest rates, the stock index, and exchange rates in China and the US. The steady-state accuracy of cMLP Granger is 99%, and in most cases the order of judgment accuracy of causality is cMLP Granger > HSICLasso > ARD > LinSVR. The results show that energy trade is an element of the global economic system: the Chinese commodity price of energy has an interactive relationship with the Chinese commodity price of agricultural products, and the significant transmission runs from the commodity price of energy to equities, then to the exchange rate, and finally to the commodity price of agricultural products.
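A linear lag-regression baseline conveys the idea behind Granger-style causality discovery: a series x "causes" y if adding lagged x reduces the error of predicting y from its own lags. This is a simplified linear stand-in for the nonlinear cMLP approach the abstract mentions:

```python
import numpy as np

def granger_gain(x, y, lag=2):
    """Variance reduction in predicting y when lagged x is added to y's own lags."""
    n = len(y)
    Y = y[lag:]
    own = np.column_stack([y[lag - k : n - k] for k in range(1, lag + 1)])
    full = np.column_stack([own] + [x[lag - k : n - k] for k in range(1, lag + 1)])
    r_own = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    r_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return 1 - r_full.var() / r_own.var()

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.9 * x[t - 1] + 0.1 * rng.normal()  # x drives y with a one-step lag

# Large gain for x -> y, near-zero gain for y -> x.
print(round(granger_gain(x, y), 2), round(granger_gain(y, x), 2))
```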

Author 1: Yu Cui
Author 2: Tianhao Ma

Keywords: Chinese commodity price; exchange rate; stock markets; machine learning; international energy trade; global economic system

PDF

Paper 97: Research on the Application of Multi-Objective Algorithm Based on Tag Eigenvalues in e-Commerce Supply Chain Forecasting

Abstract: With the continuous development of Internet technology, the scale of Internet data is increasing day by day, and business forecasting has become more and more important in corporate decision-making. Therefore, to improve the accuracy of multi-target regression in actual e-commerce supply chain forecasting, this study optimizes the construction of label-specific features for each target, obtaining the Multi-Target Regression via Sparse Integration and Label-Specific Features algorithm, and experimentally analyzes both the performance of the algorithm and its application to an actual e-commerce supply chain. The experimental results show that the average Relative Root Mean Square Error of the proposed algorithm is the lowest on most datasets, with a minimum of 0.058 in the experiments on prediction and label-specific features; in the experiments on the effect and flexibility of sparse sets, the lowest average Relative Root Mean Square Error of the algorithm was likewise 0.058, with the smallest average rank. In addition, the algorithm's average Relative Root Mean Square Error is the smallest for target variable Y2 in the Enb data, at 0.075. In the actual e-commerce supply chain forecast, the algorithm achieves the highest score of 0.097. Overall, the proposed algorithm forecasts more accurately, performs better, and is more practical, and can be effective in actual e-commerce supply chain forecasting.

Author 1: Man Huang
Author 2: Jie Lian

Keywords: Label features; multi-objective algorithm; sparse set; e-commerce supply chain; multi target regression

PDF

Paper 98: Construction and Application of Automatic Scoring Index System for College English Multimedia Teaching Based on Neural Network

Abstract: With the continuous development of interactive multimedia, multimedia is increasingly integrated into college English teaching, providing advanced teaching equipment and resources. While enriching the teaching environment, it also brings new challenges to teaching ideas and strategies. Although the proportion of independent, selective learning by college students has increased, classroom teaching still constitutes the most essential unit of educational activity, and classroom evaluation is an important, institutionalized means of improving the quality of university teaching. This paper analyzes the elements of multimedia classroom teaching and constructs an evaluation index system for English multimedia teaching. An improved neural network model is used to grade teaching automatically, acquiring knowledge through environmental learning and improving its own performance, so that the mathematical model of the English multimedia teaching evaluation system established by neural network theory can be evaluated accurately and effectively. The paper compares the results of automatic scoring of multimedia English teaching in colleges and universities, and simulation software is used to verify the established neural network evaluation system. The simulation results show that the model fits the test data of English classroom teaching better than traditional methods and predicts better: all 15 English teachers had a predicted error rate below 2%, and 10 of them had a predicted error rate below 1%.

Author 1: Hui Dong
Author 2: Ping Wei

Keywords: Cognition of multimedia teaching in universities; scoring index; neural network; teaching system

PDF

Paper 99: Design of a Decentralized AI IoT System Based on Back Propagation Neural Network Model

Abstract: In the Internet of Things (IoT) era, with user needs continually evolving, the coupling of AI and IoT technologies is unavoidable. To improve the quality of service of IoT devices, fog devices are introduced into the IoT system and given the role of hidden-layer neurons of a back propagation neural network, and Docker containers are used to realize the mapping between devices and neurons. This study proposes the design of a decentralized AI IoT system based on a back propagation neural network model. The test data revealed that, at various data transfer intervals, the average transmission rate between the fog device and the sensing device was 8.265 Mbps, and the device's transmission rate could satisfy user demand. When the data transmission interval was 20 s, the network data transmission rate remained greater than 8.5 Mbps and did not vary much as the number of data transmissions rose. The research demonstrates that the network performance of the decentralized AI IoT system based on a back propagation neural network model can match user usage requirements with good stability.
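The hidden-layer analogy can be illustrated with a minimal back propagation network trained from scratch; the XOR task, layer sizes, and learning rate below are illustrative choices, not the system's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR target

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sig(X @ W1 + b1)                 # hidden layer (fog devices in the paper's analogy)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backward pass: output-layer delta
    d_h = d_out @ W2.T * h * (1 - h)     # error propagated back to the hidden layer
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```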

Author 1: Xiaomei Zhang

Keywords: BP neural networks; artificial intelligence; IoT systems; fog devices; Docker containers

PDF

Paper 100: Black Widow Optimization Algorithm for Virtual Machines Migration in the Cloud Environments

Abstract: Cloud data centers use virtualization technology to manage computing resources. Using a group of connected Virtual Machines (VMs), users can compute data efficiently and effectively; virtualization improves the utilization of resources, thereby reducing hardware requirements. Recovery of affected services requires VM-based infrastructure repair schemes, and solutions for dedicated routing are also desirable to improve the reliability of Domain Controller (DC) services. Migrating a VM experiencing a node failure makes maintaining reliability challenging, and the selection of VMs is influential in limiting the number of VM migrations: choosing one or more suitable candidate VMs for migration reduces the servers' workload. This paper presents an energy-aware VM migration method for cloud computing based on the Black Widow Optimization (BWO) algorithm. The proposed algorithm was implemented and evaluated in Java, and we compared our results against existing methodologies regarding resource availability, energy consumption, load, and migration cost.

Author 1: Chuang Zhou

Keywords: Cloud computing; migration; energy consumption; optimization; black widow algorithm

PDF

Paper 101: Towards Secure Blockchain-enabled Cloud Computing: A Taxonomy of Security Issues and Recent Advances

Abstract: Blockchain technology offers a promising solution for addressing performance and security challenges within distributed systems. This paper presents a comprehensive taxonomy of security issues in cloud computing and explores recent advances in utilizing blockchain to enhance security and efficiency in this domain. We employ a systematic literature review approach to analyze various blockchain-enabled solutions for cloud computing. Our findings reveal that blockchain's decentralized and immutable nature empowers cloud computing services to establish secure and private data interactions. By leveraging blockchain's consensus mechanism, we demonstrate the feasibility of creating a robust platform for authenticating transactions involving digital assets. Through cryptographic methods, blocks of transactions are securely linked, ensuring data integrity. This paper provides a roadmap for understanding security concerns in cloud computing and offers insights into the potential of blockchain technology. We conclude by outlining future research directions that can drive innovation in this exciting intersection of fields.
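The hash-linking of blocks the abstract mentions can be sketched in a few lines; this toy chain (with made-up transactions) only illustrates why tampering breaks the link:

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """A block commits to its transactions and to the previous block's hash."""
    body = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    return {"tx": transactions, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

genesis = make_block(["issue asset A to Alice"], "0" * 64)
block1 = make_block(["Alice sends asset A to Bob"], genesis["hash"])

# Tampering with the first block changes its hash, breaking the link from block1.
tampered = make_block(["issue asset A to Mallory"], "0" * 64)
print(block1["prev"] == genesis["hash"], block1["prev"] == tampered["hash"])
```

Real blockchains add consensus and signatures on top, but the integrity guarantee rests on exactly this chained hashing.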

Author 1: Shengli LIU

Keywords: Cloud computing; security; blockchain; review

PDF

Paper 102: Research on Enterprise Supply Chain Anti-Disturbance Management Based on Improved Particle Swarm Optimization Algorithm

Abstract: A supply chain that is effective and of the highest caliber boosts customer satisfaction as well as sales and earnings, increasing the company's competitiveness in the market. It has been found that the standard supply chain management technique leaves the supply chain with weak stability because it has a low ability to withstand the manufacturer's production behaviour. To solve this issue, an enterprise supply chain anti-disturbance management model is built using the study's proposed particle swarm optimisation technique, which is based on a genetic algorithm with a stochastic neighbourhood structure. The proposed technique outperformed the two algorithms used for comparison in a performance test, with a stable particle swarm fitness value of 0.016 after 800 iterations and the fastest convergence. The proposed model was then empirically examined, and the results revealed that the production team using the model completed the same volume of orders in 32 days while making $460,000 more in profit. With scores of 4.5, 4.5, 4.3, 4.3, 4.2, and 4.2, respectively, the team also had the lowest values for the six forms of employee anti-production behaviour, outperforming the comparative management style. In summary, the study proposes an anti-disturbance management model for enterprise supply chains that can rationalise the scheduling of manufacturers' production behaviour and thus improve the stability of the supply chain.
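A plain particle swarm optimiser (without the paper's genetic-algorithm neighbourhood extension) can be sketched as follows; the test function and coefficients are standard illustrative choices, not the study's configuration:

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, seed=0):
    """Minimal particle swarm: velocities pulled toward personal and global bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

sphere = lambda p: float((p ** 2).sum())  # toy fitness; the paper's is order scheduling
best, best_val = pso(sphere)
print(round(best_val, 6))  # converges toward the minimum at 0
```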

Author 1: Tongqing Dai

Keywords: Supply chain; particle swarm optimization algorithm; genetic algorithm; inverse production behaviour; neighbourhood structure

PDF

Paper 103: Automated Analysis of Job Market Demands using Large Language Model

Abstract: This paper presents a comprehensive analysis of labor market demands for Myanmar workers in Japan and Thailand, focusing on opportunities for individuals without higher education degrees. Leveraging ChatGPT’s text classification and summarization capabilities, we extracted vital insights from extensive job advertisements and social media groups. The dataset comprises 152 job advertisements from Thailand and 30 from Japan, collected in 2023. Our research provides a valuable snapshot of skill demands and job opportunities, offering insights for informed decision-making by both job seekers and international non-governmental organizations. The innovative approach of using ChatGPT highlights its efficacy in understanding labor market dynamics. These findings serve as a foundation for tailored interventions to bridge employment challenges faced by marginalized Myanmar youths.

Author 1: Myo Thida

Keywords: ChatGPT; labour market analysis; skills identification; online job adverts; skills demand

PDF

Paper 104: Decentralized Management of Medical Test Results Utilizing Blockchain, Smart Contracts, and NFTs

Abstract: In today’s medical landscape, the effective management and availability of diagnostic data, including current and historical medical tests, play a critical role in informing physicians’ therapeutic decisions. However, the conventional centralized storage system presents a significant impediment, particularly when patients switch healthcare providers. Given the sensitive nature of medical data, retrieving this information from a different healthcare facility can be fraught with challenges. While decentralized storage models using blockchain and smart contracts have been suggested as potential solutions, these methodologies often expose sensitive personal information due to the inherently open nature of data on the blockchain. Addressing these challenges, we present an innovative approach integrating Non-Fungible Tokens (NFTs) to facilitate the creation and sharing of medical document sets based on test results within a medical environment. This novel approach effectively balances data accessibility and security, introducing four key contributions: (a) we introduce a mechanism for sharing medical test results while preserving data privacy; (b) we offer a model for generating certified, NFT-based document sets that encapsulate these results; (c) we provide a proof-of-concept reflecting the proposed model’s functionality; and (d) we deploy this proof-of-concept across four EVM-supported platforms—BNB Smart Chain, Fantom, Polygon, and Celo—to identify the most compatible platform for our proposed model. Our work underscores the potential of blockchain, smart contracts, and NFTs to revolutionize medical data management, demonstrating a practical solution to the challenges posed by centralized storage systems.

Author 1: Quy T. L
Author 2: Khanh H. V
Author 3: Huong H. L
Author 4: Khiem H. G
Author 5: Phuc T. N
Author 6: Ngan N. T. K
Author 7: Triet M. N
Author 8: Bang L. K
Author 9: Trong D. P. N.
Author 10: Hieu M. D.
Author 11: Bao Q. T.
Author 12: Khoa D. T.

Keywords: Medical test result; blockchain; smart contract; NFT; Ethereum; Fantom; Polygon; Binance Smart Chain

PDF

Paper 105: Leveraging Blockchain, Smart Contracts, and NFTs for Streamlining Medical Waste Management: An Examination of the Vietnamese Healthcare Sector

Abstract: Medical waste is deemed hazardous due to its potential health implications and the predominant practice of discarding it after six months of utilization; furthermore, the reusable proportion of such waste is minimal. The implications of this scenario were brought to the fore during the COVID-19 pandemic, when sub-optimal medical waste management was identified as a factor exacerbating the spread of the virus worldwide. The predicament is particularly grave in developing nations, such as Vietnam, where the underdeveloped state of medical infrastructure renders efficient waste management a daunting task. The waste management challenge also stems from the significant roles played by different stakeholders (healthcare workers and patients confined to isolation wards), whose actions directly influence waste classification, impact the waste treatment process, and indirectly contribute to environmental pollution. Given that waste management involves a chain of activities requiring the coordinated efforts of medical, transportation, and waste treatment personnel, inaccuracies in the initial stages, such as waste sorting, can negatively impact subsequent processes. In light of these issues, our study puts forth a unique model aimed at enhancing waste classification and management practices in Vietnam. This model innovatively integrates Blockchain technology, smart contracts, and non-fungible tokens (NFTs) with the intent to foster an increased individual and collective consciousness towards effective waste classification within healthcare settings.
Our research is notable for its four-fold contribution: (a) suggesting a unique mechanism based on blockchain technology and smart contracts, designed specifically to improve medical waste classification and treatment in Vietnam; (b) introducing a model for instituting rewards or penalties based on NFT technology to influence the behaviors of individuals and organizations; (c) demonstrating the feasibility of the proposed model through a proof-of-concept; and (d) executing the proof-of-concept on four prominent platforms that support ERC721 (the NFT standard of Ethereum) and the EVM for executing smart contracts programmed in the Solidity language, namely BNB Smart Chain, Fantom, Polygon, and Celo.

Author 1: Triet M. N
Author 2: Khanh H. V
Author 3: Huong H. L
Author 4: Khiem H. G
Author 5: Phuc T. N.
Author 6: Ngan N. T. K.
Author 7: Quy T. L.
Author 8: Bang L. K.
Author 9: Trong D. P. N.
Author 10: Hieu M. D.
Author 11: Bao Q. T.
Author 12: Khoa D. T.
Author 13: Anh T. N.

Keywords: Medical waste management; blockchain; smart contracts; NFTs; ethereum; fantom; polygon; binance smart chain

PDF

Paper 106: A Novel Dual Confusion and Diffusion Approach for Grey Image Encryption using Multiple Chaotic Maps

Abstract: With the exponential growth of the internet and social media, images have become a predominant form of information transmission, including for confidential data, and ensuring their proper security has become crucial in today’s digital age. This research proposes a unique strategy for meeting this need: a dual confusion and diffusion technique for encrypting grey-scale images. To improve the effectiveness of the encryption process, the method uses several chaotic maps, including the logistic map, the tent map, and the Lorenz attractor, and is implemented in Python. Furthermore, a thorough assessment of the encryption mechanism is carried out to determine its efficacy and resilience. By employing the combined strength of chaotic maps and dual confusion and diffusion techniques, the proposed method aims to provide a high level of security for confidential image transmission. The experimental results demonstrate the algorithm’s effectiveness in terms of encryption speed, security, and resistance against common attacks. The encrypted images exhibit properties such as randomness, key sensitivity, and resilience against statistical analysis and differential attacks. Moreover, the proposed method maintains reasonable computational efficiency and is compatible with real-time applications. This study contributes to the growing area of image encryption by presenting an original and effective encryption method that overcomes the shortcomings of previously used approaches. Future work can explore additional security features and extend the proposed approach to encrypt other forms of multimedia data.
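The confusion (chaotic pixel permutation) and diffusion (XOR with a chaotic keystream) stages can be sketched with the logistic map alone; the key values and toy image below are illustrative, and this single-map scheme is a simplification of the paper's multi-map design:

```python
import numpy as np

def logistic_stream(x0, r, n):
    """Chaotic keystream from the logistic map x -> r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

KEY = (0.3141, 3.9999)  # (x0, r): illustrative key parameters

def encrypt(img):
    flat = img.ravel()
    stream = logistic_stream(*KEY, flat.size)
    perm = np.argsort(stream)                    # confusion: chaotic permutation
    keystream = (stream * 256).astype(np.uint8)
    return (flat[perm] ^ keystream).reshape(img.shape)  # diffusion: XOR

def decrypt(cipher):
    stream = logistic_stream(*KEY, cipher.size)
    perm = np.argsort(stream)
    keystream = (stream * 256).astype(np.uint8)
    flat = cipher.ravel() ^ keystream            # undo diffusion
    out = np.empty_like(flat)
    out[perm] = flat                             # undo confusion
    return out.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy grey image
cipher = encrypt(img)
restored = decrypt(cipher)
print(np.array_equal(restored, img))
```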

Author 1: S Phani Praveen
Author 2: V Sathiya Suntharam
Author 3: S Ravi
Author 4: U. Harita
Author 5: Venkata Nagaraju Thatha
Author 6: D Swapna

Keywords: Image encryption; dual confusion and diffusion; chaotic maps; grey images; robust encryption; key generation; image analysis; performance evaluation; histogram analysis

PDF

Paper 107: Implementing a Blockchain, Smart Contract, and NFT Framework for Waste Management Systems in Emerging Economies: An Investigation in Vietnam

Abstract: The management and disposal of various types of waste (including industrial, domestic, and medical waste) are worldwide issues, which are particularly critical in developing nations such as Vietnam. Given the extensive population and inadequate waste treatment facilities, addressing this challenge is of utmost importance. Predominantly, the majority of such waste is not processed for composting but is instead subjected to elimination, thereby posing severe threats to public health and environmental safety. Furthermore, insufficient standards in existing waste treatment plants contribute to the rising volume of environmental waste. Emphasizing the process of waste recycling instead of total elimination is an alternate strategy that needs to be considered seriously. However, the implementation of waste segregation in Vietnam is still not sufficiently prioritized by individuals or organizations. This study presents a unique model for waste segregation and treatment, leveraging the capacities of blockchain technology and smart contracts. We also scrutinize the adherence or non-compliance to waste segregation mandates as a mechanism to incentivize or penalize individuals and organizations, respectively. To address this, we employ Non-Fungible Token (NFT) technology for the storage of compliance proofs and associated metadata. The paper’s primary contributions can be delineated into four components: i) presentation of a waste segregation and treatment model in Vietnam, utilizing Blockchain technology and Smart Contracts; ii) application of NFTs for storage of compliance-related content and its metadata; iii) offering a proof-of-concept implementation rooted in the Ethereum platform; and iv) executing the proposed model on four EVM and ERC721 compliant platforms, namely BNB Smart Chain, Fantom, Polygon, and Celo, to identify the most suitable platform for our proposition.

Author 1: Khiem H. G
Author 2: Khanh H. V
Author 3: Huong H. L
Author 4: Quy T. L
Author 5: Phuc T. N.
Author 6: Ngan N. T. K.
Author 7: Triet M. N.
Author 8: Bang L. K.
Author 9: Trong D. P. N.
Author 10: Hieu M. D.
Author 11: Bao Q. T.
Author 12: Khoa D. T.

Keywords: Vietnam waste management; blockchain; smart contracts; NFT; Ethereum; Fantom; Polygon; Binance Smart Chain

PDF

Paper 108: Deep Learning-based Sentence Embeddings using BERT for Textual Entailment

Abstract: This study directly and thoroughly investigates the practicalities of utilizing sentence embeddings, derived from the foundations of deep learning, for textual entailment recognition, with a specific emphasis on the robust BERT model. As a cornerstone of our research, we incorporated the Stanford Natural Language Inference (SNLI) dataset. Our study emphasizes a meticulous analysis of BERT’s variable layers to ascertain the optimal layer for generating sentence embeddings that can effectively identify entailment. Our approach deviates from traditional methodologies, as we base our evaluation of entailment on the direct and simple comparison of sentence norms, subsequently highlighting the geometrical attributes of the embeddings. Experimental results revealed that the L2 norm of sentence embeddings, drawn specifically from BERT’s 7th layer, emerged superior in entailment detection compared to other setups.
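The norm-comparison idea can be reduced to a one-line decision rule; the embeddings and the direction of the inequality below are illustrative assumptions (the study derives its embeddings from BERT's 7th layer):

```python
import numpy as np

def entails_by_norm(premise_vec, hypothesis_vec, margin=0.0):
    """Illustrative rule: decide entailment by comparing L2 norms of the two
    sentence embeddings (the direction of the inequality is an assumption)."""
    return bool(np.linalg.norm(premise_vec) - np.linalg.norm(hypothesis_vec) > margin)

# Hypothetical vectors standing in for BERT layer-7 sentence embeddings.
premise = np.array([0.8, 0.6, 0.4])
hypothesis = np.array([0.3, 0.2, 0.1])
print(entails_by_norm(premise, hypothesis))
```

In practice one would extract per-layer embeddings with a transformer library and sweep the layer index and margin on SNLI.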

Author 1: Mohammed Alsuhaibani

Keywords: Textual entailment; deep learning; entailment detection; BERT; text processing; natural language processing systems

PDF
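The norm-comparison idea in Paper 108 can be illustrated with a toy sketch. All arrays and the decision rule below are hypothetical; the paper's actual embeddings come from BERT's 7th layer, and its exact decision criterion is not reproduced here.

```python
import numpy as np

def sentence_embedding(token_vectors):
    # Mean-pool per-token hidden states (e.g. from one BERT layer)
    # into a single sentence vector.
    return np.mean(token_vectors, axis=0)

def norms_suggest_entailment(premise_tokens, hypothesis_tokens):
    # Toy decision rule: compare the L2 norms of the two sentence
    # embeddings; the paper's actual threshold/direction may differ.
    p = np.linalg.norm(sentence_embedding(premise_tokens))
    h = np.linalg.norm(sentence_embedding(hypothesis_tokens))
    return h <= p

# Hypothetical layer-7 token embeddings (seq_len x hidden_dim)
premise = np.full((4, 3), 2.0)     # norm of mean vector: 2*sqrt(3)
hypothesis = np.full((2, 3), 1.0)  # norm of mean vector: sqrt(3)
print(norms_suggest_entailment(premise, hypothesis))  # True
```

The geometric intuition is that the embedding norm carries information about sentence content, so comparing norms alone can act as a cheap entailment signal.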

Paper 109: An Approach of Test Case Generation with Software Requirement Ontology

Abstract: Software testing plays an essential role in the software development process, since it helps to ensure that the developed software product is free from errors and meets the defined specifications before delivery. As software specifications are mostly written in natural language, this can lead to ambiguity and misunderstanding by software developers, resulting in incorrect test cases being generated from the unclear specification. To solve this problem, this paper presents a novel hybrid approach, Software Requirement Ontology-based Test Case Generation (ReqOntoTestGen), to enhance the reliability of existing software testing techniques. The approach provides a framework that combines ontology engineering with software test case generation approaches. The Controlled Natural Language (CNL) provided by the ROO (Rabbit to OWL Ontologies Authoring) tool is used by the framework to build the software requirement ontology from unstructured functional requirements, eliminating inconsistency and ambiguity in the requirements before test case generation. The OWL ontology resulting from ontology engineering is then transformed into an XML data dictionary. The Combination of Equivalence and Classification Tree Method (CCTM) is used together with a decision tree to generate test cases from this XML file, which reduces test case redundancy and increases testing coverage. The proposed approach is demonstrated with a developed prototype tool, whose contribution is confirmed by validation and evaluation results from two real case studies: a Library Management System (LMS) and a Kidney Failure Diagnosis (KFD) subsystem.

Author 1: Adisak Intana
Author 2: Kuljaree Tantayakul
Author 3: Kanjana Laosen
Author 4: Suraiya Charoenreh

Keywords: Software testing; software requirement specification; ontology; test case; equivalence and classification tree method

PDF

Paper 110: Eligible Personal Loan Applicant Selection using Federated Machine Learning Algorithm

Abstract: Loan sanctioning creates a critical financial dependency between banks and customers. Banks assess bundles of documents from individuals or business entities seeking loans, depending on the loan type, so that only reliable candidates are chosen. This reliability is established by assessing previous transaction history, financial stability, and other diverse criteria that justify the bank's reliance on an applicant. To reduce the workload of this laborious assessment, this research introduces a machine learning (ML) based web application that predicts eligible candidates using the multiple criteria banks generally apply, in short, loan eligibility prediction. Data from prior customers, who were approved for loans based on a set of criteria, are used in this research. As ML techniques, Random Forest, K-Nearest Neighbour, AdaBoost, Extreme Gradient Boost Classifier, and Artificial Neural Network algorithms are utilized for training and testing on the dataset. A federated learning approach is employed to ensure the privacy of loan applicants. Performance analysis reveals that the Random Forest classifier provides the best output, with an accuracy of 91%. Based on this prediction, the web application can decide whether a customer's requested loan should be accepted or rejected. The application was developed using Node.js, ReactJS, REST APIs, HTML, and CSS. In the future, parameter tuning can further improve the performance of the web application, along with a usable user interface ensuring global accessibility for various types of users.

Author 1: Mehrin Anannya
Author 2: Most. Shahera Khatun
Author 3: Md. Biplob Hosen
Author 4: Sabbir Ahmed
Author 5: Md. Farhad Hossain
Author 6: M. Shamim Kaiser

Keywords: Loan eligibility prediction; machine learning; random forest; K-Nearest Neighbour; Adaboost; extreme gradient boost; artificial neural network; federated learning

PDF
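The federated learning aggregation mentioned in Paper 110 can be sketched as federated averaging (FedAvg), where a server combines locally trained model parameters weighted by each client's dataset size. The client weights and sizes below are hypothetical, and the paper's exact aggregation scheme may differ.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    # FedAvg: weighted average of client model parameters,
    # weighted by each client's local dataset size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two hypothetical bank branches with local model parameters
w1 = np.array([1.0, 2.0])   # branch trained on 100 applicants
w2 = np.array([3.0, 4.0])   # branch trained on 300 applicants
global_w = federated_average([w1, w2], [100, 300])
print(global_w)  # [2.5 3.5]
```

Because only parameters (not raw applicant records) leave each branch, the aggregation step itself never sees individual loan applications.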

Paper 111: A Low-Cost Wireless Sensor System for Power Quality Management in Single-Phase Domestic Networks

Abstract: This article presents a novel low-cost hardware and software tool for monitoring power quality in single-phase domestic networks using an ESP32 microcontroller. The proposed embedded system allows remote evaluation and monitoring of electrical energy consumption behavior through non-invasive current measurement parameters. Based on these measurements, power, power factor, total harmonic distortion, and energy consumption are calculated. The collected data is then published and visualized on a free and open IoT application in the cloud. The tool was designed to be both cost-effective and high-quality. During laboratory testing, the equipment demonstrated a high level of precision, as compared to a network analyzer. Additionally, the design utilized the smallest number of components possible, while still maintaining quality performance. The ESP32 microcontroller enables wireless data transmission, making remote monitoring and management of energy consumption more accessible and efficient. Moreover, the non-invasive measurement method makes the tool safer and more user-friendly, as it does not require any interruption of power supply. The proposed tool can help identify and address power quality issues that arise in domestic networks, which can have a significant impact on energy consumption and costs. The IoT application enables users to access their power consumption data remotely, facilitating better energy management and reducing wastage.

Author 1: Cristian A. Aldana B
Author 2: Edison F. Montenegro A

Keywords: Cost-effective; current measurement; energy consumption; ESP32 microcontroller; non-invasive; power quality; remote monitoring

PDF

Paper 112: A Novel Convolutional Neural Network Architecture for Pollen-Bearing Honeybee Recognition

Abstract: Monitoring the pollen foraging behavior of honey-bees is an important task that is beneficial to beekeepers, allowing them to understand the health status of their honeybee colonies. To perform this task, monitoring systems should have the ability to automatically recognize images of pollen-bearing honeybees extracted from videos recorded at the beehive entrance. In this paper, a novel convolutional neural network architecture is proposed for recognizing pollen-bearing and non-pollen-bearing honeybees from their images. The performance of the proposed model is illustrated based on a real dataset and the obtained results show that it performs better than some other state-of-the-art deep learning architectures like VGG16, VGG19, or Resnet50 in terms of both accuracy and execution time. Thus, the proposed model can be considered an effective algorithm for designing automatic honeybee colony monitoring systems.

Author 1: Thi-Nhung Le
Author 2: Thi-Minh-Thuy Le
Author 3: Thi-Thu-Hong Phan
Author 4: Huu-Du Nguyen
Author 5: Thi-Lan Le

Keywords: Pollen-bearing honeybee; image classification; convolutional neural network; honeybee monitoring system; Pollen dataset

PDF

Paper 113: Tomato Disease Recognition: Advancing Accuracy Through Xception and Bilinear Pooling Fusion

Abstract: Accurate detection and classification of tomato diseases are essential for effective disease management and maintaining agricultural productivity. This paper presents a novel approach to tomato disease recognition that combines Xception, a pre-trained convolutional neural network (CNN), with bilinear pooling to advance accuracy. The proposed model consists of two parallel Xception-based CNNs that independently process input tomato images. Bilinear pooling is applied to combine the feature maps generated by the two CNNs, capturing intricate interactions between different image regions. This fusion of Xception and bilinear pooling results in a comprehensive representation of tomato diseases, leading to improved recognition performance. Extensive experiments were conducted on a diverse dataset of annotated tomato disease images to evaluate the effectiveness of the suggested approach. The model achieved a remarkable test accuracy of 98.7%, surpassing conventional CNN approaches. This high accuracy demonstrates the efficacy of the integrated Xception and bilinear pooling model in accurately identifying and classifying tomato diseases. The implications of this research are significant for automated tomato disease recognition systems, enabling timely and precise disease diagnosis. The model’s exceptional accuracy empowers farmers and agricultural practitioners to implement targeted disease management strategies, minimizing crop losses and optimizing yields.

Author 1: Hoang-Tu Vo
Author 2: Nhon Nguyen Thien
Author 3: Kheo Chau Mui

Keywords: Tomato disease recognition; Xception; Bilinear pooling; convolutional neural networks; disease management

PDF
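The bilinear pooling step in Paper 113 combines the feature maps of the two parallel branches by taking the outer product of their channel vectors at every spatial position and summing over positions. A minimal sketch with hypothetical feature-map shapes:

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    # Bilinear pooling of two CNN feature maps (H x W x C each):
    # outer product of the channel vectors at every spatial position,
    # summed over positions, giving a C x C descriptor.
    h, w, c = feat_a.shape
    a = feat_a.reshape(h * w, c)
    b = feat_b.reshape(h * w, c)
    return a.T @ b  # sum of per-position outer products

rng = np.random.default_rng(1)
fa = rng.normal(size=(4, 4, 8))  # hypothetical branch-A features
fb = rng.normal(size=(4, 4, 8))  # hypothetical branch-B features
descriptor = bilinear_pool(fa, fb)
print(descriptor.shape)  # (8, 8)
```

The resulting C x C descriptor captures pairwise channel interactions between the two branches, which is what gives bilinear models their fine-grained discriminative power.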

Paper 114: Predicting Quality Medical Drug Data Towards Meaningful Data using Machine Learning

Abstract: This research aims to improve the process of finding alternative drugs by utilizing artificial intelligence algorithms. Classifying drugs manually is not an easy task for humans, as it requires far more time and effort than using classifiers. The study focuses on predicting high-quality medical drug data by considering ingredients, dosage forms, and strengths as features. Two datasets were generated from the original drug dataset, and four machine learning classifiers were applied to them: Random Forest, Support Vector Machine, Naive Bayes, and Decision Tree. Classification performance was evaluated under three scenarios that varied the ratio of training to test data for both datasets: (i) 80% training and 20% test, (ii) 70% training and 30% test, and (iii) 50% training and 50% test. The results indicated that the Decision Tree, Naive Bayes, and Random Forest classifiers showed superior classification accuracy, exceeding 90% in all scenarios. The results also showed no significant difference between the two datasets. The findings of this study have implications for streamlining the process of identifying alternative drugs.

Author 1: Suleyman Al-Showarah
Author 2: Abubaker Al-Taie
Author 3: Hamzeh Eyal Salman
Author 4: Wael Alzyadat
Author 5: Mohannad Alkhalaileh

Keywords: Classification; alternative drugs; medical; decision tree; support vector machine; naive bayes; random forest

PDF
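The three evaluation scenarios in Paper 114 amount to shuffling the data once and cutting it at different fractions. A minimal sketch (array contents are illustrative):

```python
import numpy as np

def split(X, y, train_frac, seed=0):
    # Shuffle, then cut the dataset into train/test at the given
    # fraction, mirroring the 80/20, 70/30, and 50/50 scenarios.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(len(X) * train_frac)
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]

X = np.arange(20).reshape(10, 2)  # ten toy drug records, two features
y = np.arange(10) % 2             # toy class labels
for frac in (0.8, 0.7, 0.5):
    Xtr, ytr, Xte, yte = split(X, y, frac)
    print(frac, len(Xtr), len(Xte))
```

Fixing the shuffle seed keeps the comparison between classifiers fair, since every model sees the same partition in each scenario.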

Paper 115: Incorporating Learned Depth Perception Into Monocular Visual Odometry to Improve Scale Recovery

Abstract: A growing interest in autonomous driving has led to comprehensive study of visual odometry (VO), which estimates the pose of a moving platform by examining images taken from onboard cameras. Over the last decade, supervised deep learning has been proposed for estimating both depth maps and VO. In this paper, we propose a DPT (Dense Prediction Transformer)-based monocular visual odometry method for scale estimation. Scale-drift problems are common in traditional monocular systems and in recent deep learning studies, and accurate depth estimation is imperative for recovering the scale. The DPT, a framework for dense prediction tasks that builds on vision transformers instead of convolutional networks, is an accurate model that we utilize to estimate depth maps. Scale recovery and depth refinement are performed iteratively, which allows our approach to improve the depth estimates while eliminating scale drift. The depth map estimated by the DPT model is accurate enough to achieve strong performance on a VO benchmark, eliminating the scale-drift issue.

Author 1: Hamza Mailka
Author 2: Mohamed Abouzahir
Author 3: Mustapha Ramzi

Keywords: Visual odometry; scale recovery; depth estimation; DPT model

PDF

Paper 116: Enhancing Precision in Lung Cancer Diagnosis Through Machine Learning Algorithms

Abstract: Lung cancer continues to pose a significant threat worldwide, leading to high cancer-related mortality rates and underscoring the urgent need for improved early diagnosis approaches. Despite the valuable technology currently employed for lung cancer diagnosis, some limitations hinder timely and accurate diagnoses, resulting in delayed treatment and unfavorable outcomes. In this research, we propose a comprehensive methodology that harnesses the power of various machine learning algorithms, including Logistic Regression, Gradient Boost, LGBM, and Support Vector Machine, to address these challenges and improve patient care. These algorithms have been thoughtfully chosen for their ability to effectively handle the complexity of lung cancer data and enable accurate classification and prediction of cases. By leveraging these advanced techniques, our methodology aims to enhance the efficiency and accuracy of lung cancer diagnosis, enabling earlier interventions and tailored treatment plans that can significantly impact patient outcomes and quality of life. Through rigorous assessments conducted on benchmark datasets and real-world cases, our study has yielded promising results. Random Forest achieved an impressive accuracy of 97%, showcasing its ability to effectively capture complex patterns and features within the lung cancer dataset. By pushing the boundaries of medical innovation and precision medicine, we envision a future where machine learning algorithms seamlessly integrate into healthcare systems, leading to personalized and efficient care for lung cancer patients.

Author 1: Nasareenbanu Devihosur
Author 2: Ravi Kumar M G

Keywords: Lung cancer diagnosis; machine learning; precision medicine

PDF

Paper 117: Generating Nature-Resembling Tertiary Protein Structures with Advanced Generative Adversarial Networks (GANs)

Abstract: In the field of molecular chemistry, the functions, interactions, and bonds of proteins depend on their tertiary structures. Proteins naturally exhibit dynamism under different physiological conditions, altering their tertiary structures to accommodate interactions with other molecular partners. Significant advancements in Generative Adversarial Networks (GANs) have been leveraged to generate tertiary structures closely mimicking the natural features of real proteins, including the backbone and local and distal characteristics. Our research has led to the development of a stable model, ROD-WGAN, capable of generating tertiary structures that closely resemble those found in nature. Four key contributions have been made toward this goal: (1) utilizing the Ratio Of Distribution (ROD) as a penalty function in the Wasserstein Generative Adversarial Network (WGAN); (2) developing a GAN architecture that utilizes residual blocks in the generator; (3) increasing the length of the generated protein structures to 256 amino acids; and (4) revealing consistent correlations through the Structural Similarity Index Measure (SSIM) in protein structures of varying lengths. This model represents a significant step toward robust deep generative models that can explore the highly diverse set of protein structures supporting various cellular activities. Moreover, it provides a valuable source of data augmentation for critical applications such as molecular structure prediction, inpainting, dynamics, and drug design. Data, code, and trained models are available at https://github.com/mena01/Generating-Tertiary-Protein-Structures-Resembling-Nature-using-Advanced-WGAN.

Author 1: Mena Nagy A. Khalaf
Author 2: Taysir Hassan A Soliman
Author 3: Sara Salah Mohamed

Keywords: Molecular structure; protein structure; protein modeling; tertiary structure; generative adversarial learning; deep learning; proteomic

PDF

Paper 118: Prediction of Heart Disease using an Ensemble Learning Approach

Abstract: The ability to predict diseases early is essential for improving healthcare quality and can assist patients in avoiding potentially dangerous health conditions before it is too late. Various machine learning techniques are used in the medical field. Nonetheless, machine learning is critical in determining the future of pharmaceuticals and patients’ health. This is because the various classification techniques provide a high level of accuracy. However, because so much data are being gathered from patients, it becomes harder to find meaningful cardiac disease predictions. A vital research task is to identify these characteristics. Individual classification algorithms in this situation cannot generate flawless models capable of reliably predicting heart disease. As a result, higher performance might be achieved by using ensemble learning approaches (ELA), producing accurate cardiac disease predictions. In the present research work, we utilized an ELA for the early prediction of heart disease, using a new combination including four machine learning algorithms—adaptive boosting, support vector machine, decision tree, and random forest—to increase the accuracy of the prediction results. We used two wrapper methods for feature selection: forward selection and backward elimination. We used the proposed model with three datasets: the StatLog UCI dataset, the Z-Alizadeh Sani dataset, and the Cardiovascular Disease (CVD) dataset. We obtained the highest accuracy when using our proposed model with the Z-Alizadeh Sani dataset, where it was 0.91, while the StatLog UCI dataset was 0.83. The CVD dataset obtained the lowest accuracy, 0.73.

Author 1: Ghalia A. Alshehri
Author 2: Hajar M. Alharbi

Keywords: Machine learning; ensemble learning; classification; disease prediction; heart disease

PDF
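The abstract of Paper 118 does not specify exactly how the four base classifiers are combined; a common baseline for such ensembles is hard majority voting, sketched below with hypothetical predictions.

```python
import numpy as np

def majority_vote(predictions, n_classes=2):
    # Hard-voting ensemble: each base classifier casts one vote per
    # sample, and the most frequent class label wins.
    preds = np.asarray(predictions)  # shape (n_models, n_samples)
    return np.array([np.bincount(col, minlength=n_classes).argmax()
                     for col in preds.T])

# Hypothetical 0/1 (no disease / disease) predictions from AdaBoost,
# SVM, decision tree, and random forest on four patients
votes = [[1, 0, 1, 1],
         [1, 1, 0, 1],
         [0, 1, 1, 1],
         [1, 1, 1, 0]]
print(majority_vote(votes))  # [1 1 1 1]
```

With four diverse base learners, a single model's mistake on a patient is outvoted as long as the other three agree, which is the core intuition behind ensemble gains.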

Paper 119: A Framework for Agriculture Plant Disease Prediction using Deep Learning Classifier

Abstract: The agricultural industry in Saudi Arabia suffers from the effects of vegetable diseases in the Central Province. The primary causes of crop loss documented in this analysis were 32 fungal diseases, two viral diseases, two physiological diseases, and one parasitic disease. Because early diagnosis of plant diseases may boost the productivity and quality of agricultural operations, tomato, pepper, and onion were selected for the experiment. The primary goal is to fine-tune the hyperparameters of common machine learning classifiers and deep learning architectures in order to diagnose plant diseases precisely. The first stage uses common image processing methods with ML classifiers: the input picture is median-filtered and contrast-enhanced, and the background is removed using HSV color space segmentation. After shape, texture, and color features have been extracted using feature descriptors, hyperparameter-tuned machine learning (ML) classifiers such as k-nearest neighbor, logistic regression, support vector machine, and random forest are used to determine an outcome. Finally, the proposed Deep Learning Plant Disease Detection System (DLPDS) makes use of the tuned ML models. In the second stage, candidate Convolutional Neural Network (CNN) designs were evaluated on the input dataset with the SGD (Stochastic Gradient Descent) optimizer, and the best CNN model was then fine-tuned using several optimizers to increase classification accuracy. It is concluded that the MCNN (Modified Convolutional Neural Network) achieved 99.5% classification accuracy and an F1 score of 1.00 for pepper disease in the first-phase module, while an enhanced GoogleNet using the Adam optimizer achieved a classification accuracy of 99.5% and an F1 score of 0.997 for pepper diseases, much higher than previous models. The suggested strategy may thus be adapted to different crops to identify and diagnose diseases more effectively.

Author 1: Mohammelad Baljon

Keywords: Suggested agricultural plant disease prediction system; tuned ML models; machine learning classifiers; plant disease detection; deep learning architectures

PDF

Paper 120: Lung Cancer Classification using Reinforcement Learning-based Ensemble Learning

Abstract: Lung cancer is a significant health issue affecting millions of people worldwide annually. However, current manual detection methods used by physicians and radiologists to identify lung nodules are inefficient because of the diverse shapes and locations of the nodules in the lungs. New methods are needed to improve the accuracy and speed of detecting lung nodules, because early detection increases the likelihood of successful treatment and recovery. This paper introduces a new LLC-QE model that combines ensemble learning and reinforcement learning to classify lung cancer. Initially, the model is pre-trained using the Artificial Bee Colony (ABC) algorithm, which decreases the probability of the model getting stuck in a local optimum. Subsequently, a set of convolutional neural networks (CNNs) simultaneously derives feature vectors from input images, which are then combined for classification downstream. The LIDC-IDRI dataset, predominantly composed of cases without cancer, was employed to train and evaluate the model. To mitigate the dataset imbalance, the reinforcement learning training procedure is formulated as a series of interconnected decisions: images are regarded as states, the network acts as the agent, and the agent receives a greater reward or punishment for correctly or incorrectly classifying the underrepresented class than the overrepresented class. The LLC-QE model achieves excellent results (F-measure 89.8%; geometric mean 92.7%), outperforming other deep models. The optimal values for the reward function and the ideal number of CNN feature extractors in the ensemble are identified through experiments on the study dataset. Ablation studies that exclude ABC pre-training and reinforcement learning from the model confirm each component's independent positive impact on performance.

Author 1: Shengping Luo

Keywords: Lung cancer; ensemble learning; reinforcement learning; artificial bee colony; convolutional neural network

PDF
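The class-sensitive reward described in Paper 120 can be sketched as a simple function: decisions on the minority (cancer) class earn a larger reward or penalty than decisions on the majority class. The scale factor and label encoding below are illustrative; the paper determines the reward values experimentally.

```python
def classification_reward(true_label, predicted_label,
                          minority_label=1, scale=2.0):
    # Reward shaping for imbalanced data: correct/incorrect decisions
    # on the minority class are scaled up relative to the majority class.
    magnitude = scale if true_label == minority_label else 1.0
    return magnitude if predicted_label == true_label else -magnitude

print(classification_reward(1, 1))  # +2.0: correct on minority class
print(classification_reward(0, 1))  # -1.0: wrong on majority class
```

Asymmetric rewards push the agent to pay proportionally more attention to rare positive cases, countering the bias a plain accuracy objective would have toward the overrepresented class.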

Paper 121: Secure Data Sharing in Smart Homes: An Efficient Approach Based on Local Differential Privacy and Randomized Responses

Abstract: Smart homes are smart spaces containing interconnected devices that collect information and provide users with comfortable living, safety, and energy management features. To improve individuals' quality of life, smart device companies and service providers collect data about user activities, user needs, power consumption, etc.; these data need to be shared with companies under privacy-preserving practices. In this paper, an effective approach to securing data transmission to the service provider is based on local differential privacy (LDP), which enables residents of smart homes to report statistics on their power usage as perturbed Bloom filters. Randomized Aggregatable Privacy-Preserving Ordinal Response (RAPPOR) is a privacy technique that allows sharing of data and statistics while preserving the privacy of individual users. The proposed approach applies two randomized responses, the permanent randomized response (PRR) and the instantaneous randomized response (IRR), and then applies machine learning algorithms on the service provider side to decode the perturbed Bloom filters. Simulation results show that the proposed approach achieves good performance in terms of privacy preservation, accuracy, recall, and F-measure. The results indicate that the proposed LDP scheme for smart homes achieves a good privacy-utility balance at ϵ = 0.95, with classification accuracy between 95.4% and 98% for the utilized classification techniques.

Author 1: Amr T. A. Elsayed
Author 2: Almohammady S. Alsharkawy
Author 3: Mohamed S. Farag
Author 4: S. E. Abo-Youssef

Keywords: Smart homes; security; privacy-preserving; differential privacy; RAPPOR; randomized responses

PDF
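The two-stage randomized response in Paper 121 (PRR followed by IRR) follows the RAPPOR pattern: the permanent response perturbs each Bloom-filter bit once per user, and the instantaneous response re-perturbs that memoized value on every report. A simplified sketch with illustrative probabilities, not the paper's settings:

```python
import random

def permanent_rr(bits, f=0.5, seed=0):
    # PRR: with probability f a bit is replaced by a fair coin flip
    # (memoized per user in real RAPPOR); otherwise it is kept truthful.
    rng = random.Random(seed)
    out = []
    for b in bits:
        if rng.random() < f:
            out.append(int(rng.random() < 0.5))
        else:
            out.append(int(b))
    return out

def instantaneous_rr(perm_bits, p=0.75, q=0.25, seed=1):
    # IRR: report 1 with probability p if the permanent bit is 1,
    # else with probability q; applied freshly on every report.
    rng = random.Random(seed)
    return [int(rng.random() < (p if b else q)) for b in perm_bits]

bloom = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical Bloom filter of one reading
report = instantaneous_rr(permanent_rr(bloom))
print(len(report))  # 8
```

The PRR protects against an adversary correlating many reports from one user, while the IRR protects each individual report; the service provider only ever sees the doubly perturbed bits.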

Paper 122: The Implementation of Image Conceptualization Split-Screen Stitching and Positioning Technology in Film and Television Production

Abstract: In order to study image conceptualization, split-screen stitching, and positioning technology in film and television production, this paper first reviews the relevant research literature, then designs an improved biomedical image segmentation convolutional network model for film and television production, and finally verifies the effectiveness of the proposed model. Because the traditional image mosaic positioning model has poor robustness, owing to its insufficient feature extraction ability and inaccurate segmentation and positioning regions, this study proposes a biomedical image segmentation convolutional network model based on dense blocks and a void space convolutional pooling pyramid module, and further introduces an attention mechanism to enhance the model. The results show that the baseline model achieves accuracy, recall, and F1 values of 96.48%, 95.24%, and 95.96%, respectively, on the Columbia uncompressed image splicing detection dataset, while the improved model achieves 98.19%, 96.23%, and 97.21%. In summary, the improved convolutional network model for biomedical image segmentation performs excellently and has application value for image conceptualization, split-screen stitching, and positioning in film and television production.

Author 1: Zhouzhou Deng
Author 2: Rongshen Zhu

Keywords: Convolutional neural network; attention mechanism; null space convolutional pooling pyramid; spatial rich model; dense block

PDF

Paper 123: Rural Landscape Design Data Analysis Based on Multi-Media, Multi-Dimensional Information Based on a Decision Tree Learning Algorithm

Abstract: This paper analyzes and studies the design characteristics of multi-dimensional information rural scenes. Using data mining and the Decision Tree (DT) calculation method, a pre-processing system and method for multi-dimensional information rural landscape design is put forward. Through analysis of the multi-dimensional value of multimedia mountain villages, corresponding planning and design analysis methods are derived. Using one village as a case study, we investigate its villagers, roads, services, greening, ecology, and other aspects in detail, and then carry out the detailed planning and design of the multi-media, multi-resource village.

Author 1: Ning Leng
Author 2: Hongxin Wang

Keywords: Multi-media multi-dimensional information rural landscape; data mining; decision tree; data preprocessing

PDF

Paper 124: Intelligent Detection System for Electrical Equipment based on Deep Learning and Infrared Image Processing Technology

Abstract: The demand for power grid reliability is gradually increasing with the development of the power industry, making it necessary to promptly identify and eliminate hidden dangers. To meet the needs of online monitoring and early warning for electrical equipment, an intelligent detection system based on deep learning and infrared image processing technology is proposed in this study. First, the infrared image is preprocessed for noise reduction. Then, an improved SSD (Single Shot MultiBox Detector) network is used to optimize the infrared image detection method, on which an intelligent detection system for electrical equipment is designed. The results show that the mAP of the improved SSD network after 1200 iterations is about 92.58%, and its area under the Precision-Recall (PR) curve is higher than that of other algorithms. Simulation analysis of the detection system shows that the improved method detects a fault degree of 57.85%, closer to the true value of 59.74%. The experimental results indicate that the newly established intelligent detection system can effectively detect abnormal situations in electrical equipment.

Author 1: Mingxu Lu
Author 2: Yuan Xie

Keywords: Deep learning; infrared images; electrical equipment; intelligent detection; adaptive median filtering

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org