The Science and Information (SAI) Organization
IJACSA Volume 13 Issue 6

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.


Paper 1: Solutions to the Endless Addition of Transaction Volume in Blockchain

Abstract: In blockchain systems, the endless growth of transaction volume leads to larger storage requirements, a heavier network transmission burden, and related problems. Simply discarding historical data, however, conflicts with the tamper-proof property of the blockchain. To address this problem, this paper takes the Bitcoin system as an example and defines the notion of an expired transaction. By discarding expired transactions and packing the remaining transactions from several blocks into a new substitute block that replaces the old blocks, the difficulty of clearing historical data can be overcome. However, this solution fails to clear ineffective intermediate transactions. A follow-up solution is therefore proposed: discard transactions whose outputs have all been spent, retain transactions with unspent outputs, and additionally record the spending details of each transaction output, which makes it possible to clear ineffective intermediate transactions. Finally, an experiment confirms the effectiveness of the two solutions in clearing transactions.

Author 1: Hongping Cao
Author 2: Hongxing Cao

Keywords: Blockchain; endless addition; expired transactions; substitute; storage problem; consensus algorithm

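As a rough illustration of the pruning idea in the abstract above (not the authors' implementation), the sketch below treats a transaction as expired once all of its outputs are spent and packs the survivors into a single substitute block; the data layout is invented.

```python
# Toy sketch: prune "expired" transactions (all outputs spent) from several
# blocks and pack the remaining transactions into one substitute block.

def prune_expired(blocks):
    """blocks: list of blocks; each block is a list of tx dicts
    with a 'txid' and a count of 'unspent_outputs'."""
    survivors = [tx for block in blocks for tx in block
                 if tx["unspent_outputs"] > 0]   # expired = all outputs spent
    return {"kind": "substitute", "txs": survivors}

blocks = [
    [{"txid": "a", "unspent_outputs": 0}, {"txid": "b", "unspent_outputs": 2}],
    [{"txid": "c", "unspent_outputs": 0}],
    [{"txid": "d", "unspent_outputs": 1}],
]
new_block = prune_expired(blocks)
print([tx["txid"] for tx in new_block["txs"]])  # ['b', 'd']
```

The follow-up solution in the abstract additionally records per-output spending details, which this sketch omits.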

Paper 2: Blockchain Privacy Data Access Control Method Based on Cloud Platform Data

Abstract: With the growing digitalization and openness of the smart grid, sensitive and private data in the power grid inevitably face severe security threats and challenges. In this paper, we propose a privacy-protection scheme for multidimensional data aggregation and access control in the cloud Internet of Things for the smart grid. Scalable attribute-based access control secures power-user data shared on the blockchain under the heavy data traffic of the cloud platform, achieving privacy protection and fine-grained access control for demand-side multidimensional data. The EBGN homomorphic encryption algorithm encrypts the multidimensional data so that each dimension can be decrypted separately with its corresponding private key. Aggregation at the gateway combines the multidimensional data into ciphertext, and the control center does not need to decrypt the ciphertext of each dimension, which simplifies the operation of both the gateway and the control center and improves data security and privacy. Encrypting each dimension's EBGN private key with a ciphertext-policy attribute-based encryption algorithm realizes fine-grained access control at the dimension level. Experimental results show that the proposed method effectively improves the security of private data in multidimensional data privacy protection, reducing the risk of multidimensional data being illegally accessed. The approach also reduces communication overhead, computational complexity, and computational cost, making it suitable for the data security and privacy protection of the smart grid cloud Internet of Things.

Author 1: Biying Sun
Author 2: Qian Dang
Author 3: Yu Qiu
Author 4: Lei Yan
Author 5: Chunhui Du
Author 6: Xiaoqin Liu

Keywords: Cloud platform; blockchain; private data; data encryption; access control

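The gateway aggregation described above relies on additive homomorphism. The toy sketch below is not EBGN and has no randomness or security; it only shows the underlying property that multiplying ciphertexts of the form g^m mod p yields a ciphertext of the sum, so an aggregator can combine readings without decrypting them.

```python
# Toy "encryption in the exponent" (demo only, NOT EBGN and NOT secure):
# E(m) = g^m mod p, so E(m1) * E(m2) * ... mod p = E(m1 + m2 + ...).
p = 1_000_003          # small prime, demo only
g = 5                  # base, demo only

def enc(m):
    return pow(g, m, p)

readings = [12, 30, 7]                 # per-dimension meter readings (made up)
aggregate_ct = 1
for r in readings:
    aggregate_ct = (aggregate_ct * enc(r)) % p   # homomorphic aggregation

assert aggregate_ct == enc(sum(readings))        # E(m1)*E(m2)*E(m3) = E(Σm)
print("aggregated ciphertext equals encryption of the sum")
```

In a real scheme the plaintext is recovered from the exponent only by the key holder; here the point is solely that aggregation happens on ciphertexts.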

Paper 3: Deep Convolution Neural Networks for Image Classification

Abstract: Deep learning is a highly active area of research in the machine learning community. Deep Convolutional Neural Networks (DCNNs) are machine learning tools that enable a computer to learn from image samples and extract internal representations or properties underlying groupings or categories of images. DCNNs have been used successfully for image classification, object recognition, image segmentation, and image retrieval tasks. DCNN models such as AlexNet, VGGNet, and GoogLeNet have been used to classify large datasets containing millions of images into a thousand classes. In this paper, we present a brief review of DCNNs and the results of our experiments. We implemented AlexNet on a Dell Pentium processor using the MATLAB Deep Learning Toolbox and classified three image datasets. The first dataset contains four hundred images of two types of animals and was classified with 99.1 percent accuracy. The second dataset contains four thousand images of five types of flowers and was classified with 86.64 percent accuracy. For the first and second datasets, seventy percent of the samples, chosen randomly from each class, were used for training. The third dataset contains forty images of stained pleura tissue from rat lungs, classified into two classes with 75 percent accuracy; for this dataset, eighty percent of randomly chosen samples were used to train the model.

Author 1: Arun D. Kulkarni

Keywords: Deep learning; convolutional neural networks; image classification; machine learning; object recognition


Paper 4: An Improved Genetic Algorithm for the Multi-temperature Food Distribution with Multi-Station

Abstract: This paper studies the food distribution route planning problem with the goals of improving customer satisfaction and reducing the operating cost of food providers. The problem is first formulated as a combinatorial optimization problem that is hard to solve exactly. A polynomial-time algorithm combining a genetic algorithm with neighbourhood search is therefore proposed to increase the total amount of distributed food and reduce the distribution cost. The proposed algorithm employs a genetic algorithm with integer coding to assign customers to distribution vehicles and integrates a neighbourhood search strategy into the genetic algorithm to improve its performance. Experimental results show that the proposed method improves distribution performance by up to 111.09%, 73.10%, and 70.21% in the distributed food amount, cost efficiency, and customer satisfaction, respectively.

Author 1: Bo Wang
Author 2: Jiangpo Wei
Author 3: Bin Lv
Author 4: Ying Song

Keywords: Logistics; genetic algorithm; neighbourhood search; food distribution

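A minimal, hypothetical sketch of the encoding idea above: an integer-coded chromosome assigns each customer to a vehicle, and a neighbourhood search refines a candidate by reassigning one customer at a time. The cost model, operators, and data below are invented for illustration and are not the paper's algorithm.

```python
import random

random.seed(0)
DEMAND = [4, 8, 15, 16, 23, 42]       # per-customer demand (made up)
VEHICLES = 2

def cost(chrom):
    # stand-in objective: load imbalance between the two vehicles
    loads = [0] * VEHICLES
    for customer, vehicle in enumerate(chrom):
        loads[vehicle] += DEMAND[customer]
    return max(loads) - min(loads)

def neighbourhood_search(chrom):
    # try reassigning each customer to each vehicle; keep improvements
    best = chrom[:]
    for i in range(len(chrom)):
        for v in range(VEHICLES):
            cand = best[:]
            cand[i] = v
            if cost(cand) < cost(best):
                best = cand
    return best

# crude evolutionary loop: mutate the fittest, refine, replace the worst
pop = [[random.randrange(VEHICLES) for _ in DEMAND] for _ in range(10)]
for _ in range(20):
    pop.sort(key=cost)
    child = [g if random.random() < 0.8 else random.randrange(VEHICLES)
             for g in pop[0]]
    pop[-1] = neighbourhood_search(child)   # memetic refinement step
best = min(pop, key=cost)
print("assignment:", best, "imbalance:", cost(best))
```

The real method adds proper crossover and a routing-based cost; the memetic structure (GA plus local search on offspring) is the part sketched here.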

Paper 5: An Efficient System for Real-time Mobile Smart Device-based Insect Detection

Abstract: In recent years, the rapid spread of many pests and diseases has caused heavy damage to agricultural production in many countries. It is difficult for farmers to accurately identify each type of insect pest, and as a result they have used large quantities of pesticides indiscriminately, causing serious environmental pollution. Since spraying pesticides is also very expensive, a system that identifies crop-damaging pests early will help farmers save money while contributing to the development of sustainable agriculture. This paper presents a new, efficient deep learning system for real-time insect image recognition on mobile devices. With the YOLOv5-S model, our system achieved an mAP@0.5 of 70.5% on the 10-insect dataset and 42.9% on the IP102 large-scale insect dataset. In addition, our system provides farmers with further information about insects, such as biological characteristics, distribution, morphology, and pest control measures. Farmers can then take appropriate measures against pests and diseases, helping to reduce production costs and protect the environment.

Author 1: Thanh-Nghi Doan

Keywords: Deep learning; real-time insect pest detection; YOLOv5; mobile devices


Paper 6: Novel Framework for Enhanced Learning-based Classification of Lesion in Diabetic Retinopathy

Abstract: Diabetic retinopathy is an adverse medical condition resulting from high blood sugar; it can affect the retina and, in its advanced stage, lead to permanent vision loss. A literature review assessing the effectiveness of existing approaches finds that the Convolutional Neural Network (CNN) has frequently been adopted for analyzing fundus retinal images for detection and classification. However, existing methods are mainly inclined towards achieving accuracy in their learning techniques, without deeper investigation of how the CNN-based methodology itself might be improved. The proposed scheme therefore introduces a computational framework in which a simplified feature enhancement operation produces artifact-free images with better features. The enhanced image is then passed to a CNN for multiclass categorization of the potential stages of diabetic retinopathy, to determine whether the approach outperforms existing schemes.

Author 1: Prakruthi M K
Author 2: Komarasamy G

Keywords: Diabetic retinopathy; convolution neural network; classification; fundus retinal image; multi-class categorization


Paper 7: Hybrid Pelican Komodo Algorithm

Abstract: In this work, a new metaheuristic algorithm, the Hybrid Pelican Komodo Algorithm (HPKA), is proposed. It is developed by hybridizing two recent metaheuristic algorithms, the Pelican Optimization Algorithm (POA) and the Komodo Mlipir Algorithm (KMA), and is designed to inherit the advantages of both. The main improvements are as follows. First, the proposed algorithm replaces the randomized target with the preferred target in the first phase. Second, four possible movements are selected stochastically in the first phase. Third, in the second phase, the proposed algorithm replaces the agent's current location with the problem space width to control the local problem space. The proposed algorithm is then challenged with both theoretical and real-world optimization problems. The results show that it outperforms the Grey Wolf Optimizer (GWO), the Marine Predator Algorithm (MPA), KMA, and POA on 14, 12, 14, and 18 functions, respectively. On the portfolio optimization problem, the proposed algorithm achieves 109%, 46%, 47%, and 1% higher total capital gain than GWO, MPA, KMA, and POA, respectively.

Author 1: Purba Daru Kusuma
Author 2: Ashri Dinimaharawati

Keywords: Metaheuristic; Pelican Optimization Algorithm; Komodo Mlipir Algorithm; portfolio optimization algorithm; LQ45 index


Paper 8: Users’ Acceptance and Sense of Presence towards VR Application with Stimulus Effectors on a Stationary Bicycle for Physical Training

Abstract: The objective of this research is to identify lacking elements in the various effectors used in current physical training for cyclists, covering both virtual-reality-based systems and conventional indoor training. A further objective is to identify user acceptance of vProCycle, the primary instrument of this study. Virtual Reality (VR) technology is a computer-generated simulation experience in which immersive surroundings replicate lifelike environments, and it is used here for cyclists' physical training. Distinctive combinations of stimulus effectors (such as altitude, wind effect, visuals, and audio) were applied to simulate a real-world training environment and increase the participants' sense of presence, with emphasis on the five human senses; this research, however, focuses only on hearing, sight, and interaction. This mixed-mode pilot study involved two cyclists and a 30-minute training session inside a hypoxic chamber, during which they experienced a VR visual route replica of L'Étape du Tour, France. Variables composed of distinctive stimulus effectors were employed during the training, and survey interviews were used to gain the users' insights. The cyclists gave high presence scores, indicating that they were immersed while using the vProCycle system, and likewise gave high scores for technology acceptance. The main contribution of this study is an understanding of how various combinations of stimulus effectors can be applied in a VR-based training system.

Author 1: Imran Bin Mahalil
Author 2: Azmi Bin Mohd Yusof
Author 3: Nazrita Binti Ibrahim
Author 4: Eze Manzura Binti Mohd Mahidin
Author 5: Ng Hui Hwa

Keywords: Virtual reality; sense of presence; technology acceptance; stimulus effectors


Paper 9: Fast and Robust Fuzzy-based Hybrid Data-level Method to Handle Class Imbalance

Abstract: Conventional classification algorithms do not produce accurate results when the data distribution (class sizes) is unequal or the data is corrupted with noise, because the results are biased towards the larger class. In many real-life cases, however, the goal is to uncover the unusual, smaller classes, and there are numerous examples where the importance of the smaller or rarer class is far higher than that of the larger class, for example brain tumor detection, credit card fraud, and anomaly detection. This is usually called the class imbalance problem. The situation becomes worse when the data contains additional impurities such as noise or class overlap, in which case traditional methods produce even poorer results. This paper proposes a fast, simple, and effective data-level hybrid technique based on fuzzy concepts to overcome the class imbalance problem under noisy conditions. To appraise its classification performance, the proposed technique is tested on 40 real imbalanced UCI datasets with imbalance ratios ranging from 1.82 to 129.44 and compared with 12 other approaches. The results indicate that the presented hybrid data-level technique performs better, and faster, than the other approaches.

Author 1: Kamlesh Upadhyay
Author 2: Prabhjot Kaur
Author 3: Ritu Sachdeva

Keywords: Data level approaches; undersampling; oversampling; fuzzy concept; imbalanced data-sets; classification

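One plausible reading of a fuzzy data-level method, with all details invented (this is not the authors' algorithm): give each majority-class point a fuzzy membership that decays with distance from the class's median, then undersample by dropping low-membership points, which also removes noisy outliers.

```python
from statistics import median

def membership(x, centre, spread=1.0):
    # Cauchy-style fuzzy weight: 1 at the centre, decaying with distance
    return 1.0 / (1.0 + ((x - centre) / spread) ** 2)

majority = [1.0, 1.2, 0.9, 1.1, 9.0, 1.05]   # 9.0 acts as a noisy outlier
minority = [5.0, 5.2]                         # small class, kept untouched

centre = median(majority)                     # robust to the outlier
kept = [x for x in majority if membership(x, centre) >= 0.5]
print(kept)   # [1.0, 1.2, 0.9, 1.1, 1.05] — the outlier 9.0 is dropped
```

Using the median rather than the mean keeps the centre estimate from being dragged towards the very noise the filter is meant to remove.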

Paper 10: Proctoring and Non-proctoring Systems

Abstract: This research describes learning achievement assessment technology, especially proctor technology. This study compares and contrasts proctoring and non-proctoring procedures used for online exams. The sample case used was the test scores of students enrolled in Hasanuddin University's Indonesian Arabic translation course. The research method used was a non-experimental quantitative method that compared students' online test results using proctoring and non-proctoring systems during online exams. The test scores of 101 students (40 male and 61 female students) from two different classes were sampled. The results of the tests for both classes were collected six times: three times using the proctoring method and three times using the non-proctoring system. A trend analysis was performed on the data. SPSS 26 was used to analyze the data via the two-way ANOVA procedure. The results indicate that the online proctoring system resulted in lower test scores than the online non-proctoring system, while the variables of class and gender did not affect the learning results.

Author 1: Yusring Sanusi Baso

Keywords: Proctoring system; comparative study; Arabic translating course; online exam


Paper 11: Groundnuts Leaf Disease Recognition using Neural Network with Progressive Resizing

Abstract: Groundnut is an important oilseed crop worldwide, and India is the second-largest producer of groundnuts. The crop is prone to attack by numerous diseases, one of the most important factors contributing to loss of productivity and degradation in quality, both of which ultimately weaken the agricultural economy. It is therefore necessary to find better and more reliable automated solutions for recognizing groundnut leaf diseases. In this paper, a deep learning based model with progressive resizing is proposed for groundnut leaf disease recognition and classification. Five major categories are considered: leaf spot, armyworm damage, wilts, yellow leaf, and healthy leaf. The proposed model was trained with and without progressive resizing and validated using cross-entropy loss. The first-of-its-kind dataset used for training and validation was created manually from the Saurashtra region of the Indian state of Gujarat. Because the dataset is imbalanced, with a different number of samples per category, an extended focal loss function was used. Performance was evaluated with several measures, including precision, sensitivity, F1-score, and accuracy. The proposed model achieved a state-of-the-art accuracy of 96.12%, and the model with progressive resizing performed better than the traditional core neural-network-based model built on cross-entropy loss.

Author 1: Rajnish M. Rakholia
Author 2: Jinal H. Tailor
Author 3: Jatinderkumar R. Saini
Author 4: Jasleen Kaur
Author 5: Hardik Pahuja

Keywords: Groundnut leaf disease recognition; progressive resizing; deep learning; neural network

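Progressive resizing itself is simple to state: train in rounds at increasing input resolutions, reusing the weights learned at the previous, smaller size. A minimal schedule sketch (the sizes are illustrative, not the paper's):

```python
# Progressive-resizing schedule: double the input resolution each round.
def resize_schedule(start=64, final=256):
    size = start
    while size <= final:
        yield size
        size *= 2

for size in resize_schedule():
    # in a real pipeline: resize the dataset to (size, size) and fine-tune
    # the model, starting from the weights of the previous round
    print(f"train at {size}x{size}")
```

Early rounds at low resolution are cheap and learn coarse features; later rounds refine them at full resolution.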

Paper 12: A Proposed Architecture for Smart Home Systems Based on IoT, Context-awareness and Cloud Computing

Abstract: The main objective of this paper is to propose a simple, low-cost, reliable, and scalable architecture for building Smart Home Systems (SHSs) that can remotely automate and control home appliances using a microcontroller. The proposed architecture takes advantage of emerging technologies to make Smart Home Systems easier to develop and to provide better management by expanding their capabilities appropriately. The suggested design aims to make context data easier and more convenient for many applications to access, and to provide a new schematic guide for creating Smart Home Systems and data processing that are as complete and comprehensive as possible. Related topics, such as smart homes and their intelligent systems, are addressed by examining prior work and presenting the authors' views in order to motivate the new architecture. The building blocks of the proposed architecture include classic Smart Homes, the Internet of Things (IoT), Context-awareness (CA), Cloud Computing (CC), and Rule-based Event Processing Systems (RbEPS). Finally, the proposed architecture is validated and evaluated by constructing a smart home system.

Author 1: Samah A. Z. Hassan
Author 2: Ahmed M. Eassa

Keywords: Smart Home Systems (SHS); Internet of Things (IoT); Context-awareness (CA); Cloud Computing (CC); Rule-based Event Processing Systems (RbEPS); Smart Home System architecture


Paper 13: Shallow Net for COVID-19 Classification Based on Biomarkers

Abstract: In many cases, especially at the beginning of an epidemic disaster, it is very important to be able to determine the severity of a given patient's illness. Identifying severe cases early helps direct effort where it is needed. At the outset, the number of classified cases and the available data are limited, so a system is needed that can be trained on limited data and still give trustworthy results. The current work focuses on the value of biomarkers in differentiating between recovered patients and mortalities. Even with limited data, a decision tree (DT) was able to distinguish between recovered patients and mortalities with an accuracy of 94%. A shallow dense network achieved an accuracy of 75%; however, when a 10-fold technique was applied to the same data, the network achieved 99% accuracy. The data used in this work were collected from King Faisal Hospital in Taif city under formal permission from the health ministry. PCA confirmed that two parameters have the greatest ability to differentiate between recovered patients and mortalities, and the ROC curve reveals that these parameters are calcium and hemoglobin. The shallow net gives an accuracy of 92% when trained using calcium and hemoglobin only. This paper shows that, with suitable parameter selection, a small decision tree or shallow net can be trained quickly to decide which patients need more attention, so that hospital resources can be used more sensibly during the pandemic. All codes and data can be accessed from the following link “codes and data”.

Author 1: Mahmoud B. Rokaya

Keywords: COVID-19; pandemic; shallow net; deep learning; decision trees; ROC curve; PCA analysis; biomarkers

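To make the biomarker result concrete, here is a hypothetical two-feature decision stump on calcium and hemoglobin, the two parameters the ROC analysis singled out; the thresholds and patient values are invented and are not the paper's data.

```python
# Hypothetical decision stump (thresholds and data invented): flag a patient
# as high-risk when both biomarkers fall below their cut-offs.
def predict(calcium, hemoglobin, ca_thr=8.5, hb_thr=10.0):
    if calcium < ca_thr and hemoglobin < hb_thr:
        return "high-risk"
    return "recovered"

patients = [(9.4, 13.1), (7.9, 8.7), (8.1, 12.5), (7.2, 9.1)]
labels = [predict(ca, hb) for ca, hb in patients]
print(labels)  # ['recovered', 'high-risk', 'recovered', 'high-risk']
```

A real decision tree would learn the cut-offs from the labeled hospital data; the point is that two well-chosen biomarkers can yield a very small, fast model.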

Paper 14: Advanced Medicinal Plant Classification and Bioactivity Identification Based on Dense Net Architecture

Abstract: Plant species identification helps a wide range of stakeholders, including forestry services, botanists, taxonomists, physicians and pharmaceutical laboratories, endangered-species organizations, the government, and the general public. As a result, there has been a spike in interest in developing automated plant species recognition systems. Using computer vision and deep learning approaches, this work proposes a fully automated system for identifying medicinal plants and classifying the correct therapeutic plants based on their images. The training data consist of images from the Indian Medicinal Plants, Phytochemistry, and Therapeutics (IMPPAT) benchmark dataset. A Convolutional Neural Network (CNN) with the DenseNet architecture serves as the classification system for medicinal plants. This study also contributes a new dataset of medicinal plants found in various parts of Manipur, a state in northeastern India. The proposed DenseNet model achieves a recognition rate of 99.56% on the IMPPAT dataset and 98.51% on the Manipur dataset, suggesting that the DenseNet method is a promising technique for smart forestry.

Author 1: Banita Pukhrambam
Author 2: Arun Sahayadhas

Keywords: Indian medicinal plants; convolutional neural network; DenseNet; IMPPAT dataset


Paper 15: Short Words Signature Verification using Markov Chain and Fisher Linear Discriminant Approach

Abstract: Retracted: After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

Author 1: M. Nazir
Author 2: Surendra Singh Choudhary

Keywords: Human signature verification; morphological directional transformations; structuring element; optical character recognition; fisher linear discriminant


Paper 16: Face Mask Wear Detection by using Facial Recognition System for Entrance Authorization

Abstract: A face mask wear detection device for entrance authorization is designed to ensure that everyone wears a face mask at all times in a confined space, one of the easiest ways to lower the rate of coronavirus infection and hence save lives. For those infected with the novel coronavirus (nCoV-21), asthma, high blood pressure, heart failure, and many other chronic conditions can be fatal. Consequently, the goal of this research is a face mask wear detection device that helps reduce the rate of coronavirus infection on premises and in public places by ensuring that customers comply with the Standard Operating Procedures (SOP) set by the Malaysian Ministry of Health (MOH). The device recognizes customers' faces and determines whether or not they are covered by a face mask upon entry into a facility. It can also help ensure compliance with the maximum number of customers allowed on the premises. The study aims at a facial recognition system designed as an individual disciplinary aid that supports the safety procedures at this critical time. The research followed the engineering design process development model, which has four phases: identifying the problem, generating possible solutions, developing the prototype, and testing and evaluating the solution. Results indicate that the developed product functions effectively, and experts found that using it helps people keep to their face mask routines. The improved design raises the overall quality of the product so that it performs as intended in terms of intelligent technologies.

Author 1: Munirah Ahmad Azraai
Author 2: Ridhwan Rani
Author 3: Raja Mariatul Qibtiah
Author 4: Hidayah Samian

Keywords: Face recognition; face detection; face mask; coronavirus; intelligent system


Paper 17: A Novel Approach to Video Compression using Region of Interest (ROI) Method on Video Surveillance Systems

Abstract: With the increase in criminal activity, people use various surveillance techniques to create a sense of security. One of the most widely used techniques is installing CCTV cameras at various locations. Surveillance systems include supporting devices besides the CCTV cameras themselves; one such device is the hard disk that stores the recorded data. CCTV recording has two modes: motion detection mode and continuous mode. Continuous mode records without interruption, which increases the amount of hard disk space used. Motion detection mode records single events only, saving hard disk space, but it may miss some events. Given these two modes, compression technology is required. Current compression technology applies the ROI method: a ROI (Region of Interest) is the part of the image selected for further operations, and ROI coding allows certain areas of a digital image to have higher quality than the surrounding area (background). This paper offers a novel approach that saves the foreground frames generated by the ROI method and compresses them, applied to the AVI, MJPEG 2000, and MPEG-4 video formats. A decompression process restores the original video data so the method's performance can be measured. The proposed method is evaluated by comparing its compression ratio and Peak Signal-to-Noise Ratio (PSNR) with a traditional method that does not implement ROI-based compression. The PSNR values for the proposed method are above 40 dB, which indicates that the reconstructed video is similar to the original, even though the pixel values have changed slightly. The ROI-based compression method increases the compression ratio 5-7 times over the existing method for lossy AVI video, and 7-15 times and 1-3 times for MJPEG-2000 and MPEG-4 video, respectively.

Author 1: DewiAnggraini Puspa Hapsari
Author 2: Sarifuddin Madenda
Author 3: Muhammad Subali
Author 4: Aini Suri Talita

Keywords: Compression; decompression; foreground; region of interest; video surveillance systems

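The PSNR figure quoted above is computed from the mean squared error between original and reconstructed frames. A self-contained sketch for 8-bit samples (the frame values below are made up):

```python
import math

def psnr(original, reconstructed, peak=255):
    # PSNR = 10 * log10(peak^2 / MSE); higher means closer to the original
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")          # identical frames
    return 10 * math.log10(peak ** 2 / mse)

frame  = [120, 121, 119, 200, 50, 50]
almost = [120, 122, 119, 199, 50, 51]   # slight lossy-compression error
print(round(psnr(frame, almost), 1))    # 51.1 — well above the 40 dB bar
```

Values above roughly 40 dB, as reported in the abstract, correspond to reconstructions that are visually indistinguishable from the source.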

Paper 18: Data Augmentation Techniques on Chilly Plants to Classify Healthy and Bacterial Blight Disease Leaves

Abstract: Designing an automation system for the agriculture sector is difficult using a classical machine learning approach, so many researchers have proposed deep learning systems, which require huge amounts of data for training. The proposed system shows that geometric transformations on the original dataset help generate additional images that replicate real physical circumstances, a process known as “image augmentation”. This enhancement of the data helps produce more accurate systems across all metrics. Previously, researchers working with machine learning implemented traditional approaches that were time-consuming and expensive, whereas in deep learning most operations are handled automatically by the system. The proposed system applies neural style transfer and classifies the images using transfer learning. It utilizes images from the open-source repository Kaggle, which mainly contains images of chilly, tomato, and potato plants; this system focuses on chilly plants because they are the most productive crop in the South Indian regions. Image augmentation creates new images in different scenarios from the existing images using popular deep learning techniques. The model chosen is ResNet-50, a pre-trained model for transfer learning; the advantage of a pre-trained model is that it need not be developed from scratch and gives higher accuracy with fewer epochs. The model achieved an accuracy of 100%.

Author 1: Sudeepthi Govathoti
Author 2: A Mallikarjuna Reddy
Author 3: Deepthi Kamidi
Author 4: G BalaKrishna
Author 5: Sri Silpa Padmanabhuni
Author 6: Pradeepini Gera

Keywords: Image augmentation; geometric transformations; transfer learning; neural style learning; residual network

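The geometric transformations mentioned above can be illustrated on a tiny matrix standing in for an image; horizontal flip and 90° rotation are two standard augmentation operations (the exact set the authors used is not spelled out here):

```python
# Two geometric augmentations on a matrix standing in for an image.
def hflip(img):
    return [row[::-1] for row in img]            # mirror left-right

def rot90(img):
    return [list(row) for row in zip(*img[::-1])]  # rotate 90° clockwise

img = [[1, 2],
       [3, 4]]
augmented = [img, hflip(img), rot90(img)]
print(hflip(img))   # [[2, 1], [4, 3]]
print(rot90(img))   # [[3, 1], [4, 2]]
```

Each transformed copy counts as a new training sample, which is how augmentation multiplies a small dataset without new photography.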

Paper 19: Towards the Smart Industry for the Sustainability through Open Innovation based on ITSM (Information Technology Service Management)

Abstract: The Indonesian coffee industry has become a trend with a strategic role and potential for the livelihoods of the business people within it, as well as for Indonesia's economic growth. One trend attracting attention is the smart industry concept, a digital-based industry concept highly relevant to technological developments in this era. When companies want to implement a smart industry, they need a strategy for implementing IT (Information Technology) so that the investment made actually serves the company's targets. This study aims to design a systematic IS/IT strategy to realize an effective smart industry concept. The analysis and design method used is the Ward and Peppard framework, which consists of two phases: input and output. The input phase consists of internal business, external business, and internal and external IT analysis. The output phase covers the design of IT management strategies, business information systems, and IT strategies. The result of this study is a portfolio of IT designs for the Margamulya Coffee Producers Cooperative, consisting of business strategy designs and IT management.

Author 1: Asti Amalia Nur Fajrillah
Author 2: Muharman Lubis
Author 3: Arariko Rezeki Pasa

Keywords: Smart industry; Ward and Peppard; IS/IT strategy

PDF

Paper 20: Application of Machine Learning Algorithms in Coronary Heart Disease: A Systematic Literature Review and Meta-Analysis

Abstract: This systematic review relied on the Preferred Reporting Items for Systematic reviews and Meta-Analysis (PRISMA) statement and 37 relevant studies. The literature search used search engines including PubMed, Hindawi, SCOPUS, IEEE Xplore, Web of Science, Google Scholar, Wiley Online, Jstor, Taylor and Francis, Ebscohost, and ScienceDirect. This study focused on four aspects: Machine Learning Algorithms, datasets, best-performing algorithms, and software used in coronary heart disease (CHD) predictions. The empirical articles never mentioned 'Reinforcement Learning,' a promising aspect of Machine Learning. Ensemble algorithms showed reasonable accuracy rates but were not common, whereas deep neural networks were poorly represented. Only a few papers applied primary datasets (4 of 37). Logistic Regression (LR), Deep Neural Network (DNN), K-Means, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and boosting algorithms were the best performing algorithms. This systematic review will be valuable for researchers predicting coronary heart disease using machine learning techniques.

Author 1: Solomon Kutiame
Author 2: Richard Millham
Author 3: Adebayor Felix Adekoya
Author 4: Mark Tettey
Author 5: Benjamin Asubam Weyori
Author 6: Peter Appiahene

Keywords: Coronary heart diseases; algorithms; datasets; ensembling algorithms; machine learning; artificial intelligence

PDF

Paper 21: Vision based Human Activity Recognition using Deep Neural Network Framework

Abstract: Human Activity Recognition (HAR) has become a popular research subject because of its broad applications. With the growth of deep learning, novel ideas have emerged to tackle HAR problems; one example is recognizing human behaviors without exposing a person's identity. Advanced computer vision approaches, meanwhile, are still considered promising directions for constructing a human activity classification approach from a series of video frames. To address this issue, a deep learning neural network technique using Depthwise Separable Convolution (DSC) with Bidirectional Long Short-Term Memory (DSC-BLSTM) is proposed here. The redeeming feature of the proposed network is a DSC convolution that helps to reduce not only the number of learnable parameters but also the computational cost of both training and testing, while the bidirectional LSTM can combine the positive and negative time directions. The proposed method comprises three phases: video data preparation, feature extraction using a Depthwise Separable Convolution Neural Network algorithm, and the DSC-BLSTM algorithm. The proposed DSC-BLSTM method obtains high accuracy and F1-score when compared to other HAR algorithms such as MC-HF-SVM, baseline LSTM and Bidir-LSTM.

Author 1: Jitha Janardhanan
Author 2: S. Umamaheswari

Keywords: Activity recognition; long short-term memory (LSTM); deep learning; feature extraction

PDF
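The parameter savings that motivate the DSC layer in the abstract above can be seen from a simple count (bias terms omitted); this sketch is our own illustration, not the authors' implementation:

```python
def conv_params(k, c_in, c_out):
    # Standard 2-D convolution: one k x k filter per (input, output) channel pair.
    return k * k * c_in * c_out

def dsc_params(k, c_in, c_out):
    # Depthwise separable convolution = depthwise stage (one k x k filter
    # per input channel) followed by a pointwise 1x1 convolution.
    return k * k * c_in + c_in * c_out

# Example: 3x3 kernels, 64 input channels, 128 output channels.
std = conv_params(3, 64, 128)   # 73728 weights
dsc = dsc_params(3, 64, 128)    # 8768 weights
reduction = std / dsc           # > 8x fewer learnable parameters
```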

Paper 22: Internal Works Quality Assessment for Wall Evenness using Vision-based Sensor on a Mecanum-Wheeled Mobile Robot

Abstract: Robotics has been used in the construction industry for several decades. Various advanced robotic mechanisms and technologies have been developed to assist with specific construction tasks. However, little research has been found on the quality assessment of finished structures. This research proposes a quality assessment robot that assists in assessing the internal works of a building against the quality assessment criteria in the Malaysian Construction Industry Standards. There are various assessment criteria, such as hollowness, cracks and damage, finishing and jointing. This paper focuses on wall evenness, using a camera mounted on a mobile robot with a Mecanum wheel design. The wall evenness assessment was done by projecting a laser leveler on the wall and capturing the images with a camera; the images are then processed by a central controller. Results show that the deviation calculation method can be used to differentiate between even and uneven walls: pixel deviations for even walls show values of less than 15 pixels, while uneven walls show values of more than 20 pixels.

Author 1: Ahmad Zaki Shukor
Author 2: Muhammad Herman bin Jamaluddin
Author 3: Mohd Zulkifli bin Ramli
Author 4: Ghazali bin Omar
Author 5: Syed Hazni Abd Ghani

Keywords: Construction industry standards; internal works quality assessment; vision; Mecanum wheels

PDF
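The thresholds reported above (deviations below 15 pixels for even walls, above 20 for uneven) lend themselves to a small classification sketch. The deviation metric used here, maximum absolute distance of the laser-line pixels from their mean position, is our assumption, not necessarily the paper's exact calculation:

```python
def classify_wall(line_pixels, even_max=15, uneven_min=20):
    """Classify wall evenness from pixel positions of a projected laser line,
    using the thresholds reported in the paper."""
    mean = sum(line_pixels) / len(line_pixels)
    deviation = max(abs(p - mean) for p in line_pixels)
    if deviation < even_max:
        return "even", deviation
    if deviation > uneven_min:
        return "uneven", deviation
    return "borderline", deviation

label_a, _ = classify_wall([100, 102, 101, 99, 100])   # nearly straight line
label_b, _ = classify_wall([100, 130, 95, 140, 100])   # strongly bowed line
```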

Paper 23: The 4W Framework of the Online Social Community Model for Satisfying the Unmet Needs of Older Adults

Abstract: People's cherished and respected desires can be fulfilled by social integration through interaction with their friends and families. These kinds of interactions are critical for the elderly, particularly for someone who has retired. Online social communities could assist them and have a beneficial impact on the elderly. However, because elderly people are hesitant to use new technology, researchers have attempted to integrate specially built social networking applications into simple user-interface gadgets for the elderly through context-aware systems. A proper understanding between the aged and the people of the supporting community is needed for optimal execution of the platform. The study presents a 4W framework (Who, What, Where, When) to effectively comprehend and portray the application of the online social interaction community model in assisting the elderly in satisfying their unmet needs, as well as to improve the system's efficiency in addressing those unfulfilled demands. It is essential to discover what the users are keen on and to give the community group a chance to make good decisions by utilizing the insights gained from these events.

Author 1: Farhat Mahmoud Embarak
Author 2: Nor Azman Ismail
Author 3: Alhuseen Omar Alsayed
Author 4: Mohamed Bashir Buhalfaya
Author 5: Abdurrahman Abdulla Younes
Author 6: Blha Hassan Naser

Keywords: Online social community; elderly’s unmet needs; 4w framework; elderly’s requirements

PDF

Paper 24: Bayesian Network Modelling for Improved Knowledge Management of the Expert Model in the Intelligent Tutoring System

Abstract: The expert module is an essential part of the intelligent tutoring system. This module uses only declarative knowledge, excluding other types of domain knowledge: procedural and conditional. This limitation makes the expert module very fragile. To solve this issue, the authors propose to embed knowledge processing into the expert model. The contribution aims to empower the expert model via the fragmentation of the knowledge process into four categories: Analysis, Application, Conceptualization, and Experimentation, using the Bayesian Network method as an instrument for modelling expert systems in uncertain domains. Through the management of the expert system via a list of criteria, the expert module can suggest the correct type of knowledge and its corresponding status.

Author 1: Fatima-Zohra Hibbi
Author 2: Otman Abdoun
Author 3: El Khatir Haimoudi

Keywords: Smart tutoring system; expert model; knowledge processing; Bayesian network

PDF
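As a minimal illustration of the Bayesian machinery the abstract invokes (the probabilities and node names are ours, not the paper's), a two-node network with a latent "mastery" variable and an observed "answer" variable can be updated with Bayes' rule:

```python
def posterior_knowledge(prior, p_correct_given_known, p_correct_given_unknown,
                        answered_correctly):
    """Posterior probability that the learner has mastered the knowledge,
    after observing one correct or incorrect answer (Bayes' rule)."""
    if answered_correctly:
        num = prior * p_correct_given_known
        den = num + (1 - prior) * p_correct_given_unknown
    else:
        num = prior * (1 - p_correct_given_known)
        den = num + (1 - prior) * (1 - p_correct_given_unknown)
    return num / den

# Learner starts at 50% estimated mastery; a correct answer raises it.
p = posterior_knowledge(0.5, 0.9, 0.2, answered_correctly=True)
```

Real tutoring-system networks chain many such updates over a larger graph, but each edge follows this same conditional-probability pattern.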

Paper 25: Digital Storytelling Framework to Assist Young Children in Understanding Dementia

Abstract: A digital storytelling tool is one of the interactive technologies that can help youngsters better comprehend Dementia. Dementia makes it difficult for older people to maintain their daily routines. They have difficulties in effectively communicating with those around them. Similarly, children whose grandparents have Dementia will struggle to understand their grandparents' situation. It will also negatively influence children's relationships with their grandparents. Learning through interactive digital storytelling will affect younger people's entertainment experiences, which may help them better comprehend Dementia. As a result, the children's relationships with their grandparents may be strengthened. This study aims to present the framework of digital storytelling in helping young children understand more about Dementia. The framework was developed in a step-by-step procedure that included analyzing and synthesizing current applications and relevant research, constructing the framework, and having it confirmed by experts. Researchers and developers may use the framework as a guideline to build meaningful digital storytelling features.

Author 1: Noreena Yi-Chin Liu
Author 2: Nooralisa M Tuah
Author 3: Kevin Chi-Jen Miao

Keywords: Digital storytelling; Dementia; interactive learning; entertainment experience

PDF

Paper 26: Optimization of Small Sized File Access Efficiency in Hadoop Distributed File System by Integrating Virtual File System Layer

Abstract: Hadoop was invented to address the major highlights of big data: storage for large datasets, handling data in different formats, and data generated at high speed. This solution is proposed in order to improve, in terms of access efficiency and time, Hadoop's handling of small-sized files. A novel approach called the VFS-HDFS architecture is designed, in which the focus is on optimizing small-sized file access problems, with significant improvement over the existing solutions, i.e. HDFS sequence files, HAR and NHAR. In the proposed work, a virtual file system layer has been added as a wrapper on top of the existing HDFS architecture; the research is carried out without altering the existing HDFS architecture. In this paper, the drawbacks of the existing techniques, i.e. the flat file technique and table chain technique implemented in HDFS HAR, NHAR and sequence files, are overcome using a bucket chain technique. The files to merge into a single bucket are selected using an ensemble classifier, a combination of different classifiers, since combining multiple classifiers gives more accurate results. Using this proposed system, better results are obtained compared with the existing system in terms of access efficiency of small-sized files in HDFS.

Author 1: Neeta Alange
Author 2: Anjali Mathur

Keywords: HDFS; Small sizes files; virtual file system; bucket chain; ensemble classifiers; text classification

PDF
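The general idea behind packing small files, merging payloads into one large container and keeping a per-file index for random access, can be sketched as follows. This is our toy illustration of the concept, not the paper's VFS-HDFS or bucket chain implementation:

```python
class Bucket:
    """Merge many small files into one blob with an index of
    file name -> (offset, length), so each file stays addressable."""

    def __init__(self):
        self.data = bytearray()
        self.index = {}

    def add(self, name, payload: bytes):
        # Record where this file starts and how long it is, then append.
        self.index[name] = (len(self.data), len(payload))
        self.data.extend(payload)

    def read(self, name) -> bytes:
        # Random access via the index, without scanning the whole bucket.
        offset, length = self.index[name]
        return bytes(self.data[offset:offset + length])

bucket = Bucket()
bucket.add("a.txt", b"hello")
bucket.add("b.txt", b"world!")
```

In HDFS terms, one bucket would occupy one block instead of each small file consuming a full block and a NameNode entry.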

Paper 27: Survey on Highly Imbalanced Multi-class Data

Abstract: Machine learning technology has a massive impact on society because it offers solutions to many complicated problems, such as classification, clustering analysis, and prediction, especially during the COVID-19 pandemic. Data distribution in machine learning has been an essential aspect of providing unbiased solutions. From the earliest literature published on highly imbalanced data until recently, machine learning research has focused mostly on binary classification problems. Research on highly imbalanced multi-class data is still greatly unexplored, even as better analysis and predictions are required for handling Big Data. This study reviews the models and techniques for handling highly imbalanced multi-class data, along with their strengths, weaknesses and related domains. Furthermore, the paper uses a statistical method to explore a case study with a severely imbalanced dataset. This article aims to (1) understand the trend of highly imbalanced multi-class data through analysis of the related literature; (2) analyze the previous and current methods of handling highly imbalanced multi-class data; (3) construct a framework for highly imbalanced multi-class data. An analysis of the chosen highly imbalanced multi-class dataset will also be performed and adapted to current machine learning methods and techniques, followed by discussions of open challenges and the future direction of highly imbalanced multi-class data. Finally, this paper presents a novel framework for highly imbalanced multi-class data. We hope this research can provide insights into the potential development of better methods and techniques to handle and manipulate such data.

Author 1: Mohd Hakim Abdul Hamid
Author 2: Marina Yusoff
Author 3: Azlinah Mohamed

Keywords: Imbalanced data; highly imbalanced data; highly imbalanced multi-class; data strategies

PDF
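Two of the basic quantities in this area, the imbalance ratio and the simplest data-level remedy (random oversampling of minority classes), can be sketched in a few lines. This is an illustrative example of standard techniques from the surveyed field, not code from the paper:

```python
import random
from collections import Counter

def imbalance_ratio(labels):
    """Majority-class count divided by minority-class count."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples at random until every class
    matches the majority class size."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_x, out_y = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [x for x, y in zip(samples, labels) if y == cls]
        for _ in range(target - n):
            out_x.append(rng.choice(pool))
            out_y.append(cls)
    return out_x, out_y

X = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = ["a"] * 8 + ["b"] * 2          # imbalance ratio 4:1
Xb, yb = random_oversample(X, y)   # now 8 of each class
```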

Paper 28: Synthetic Data Augmentation of Tomato Plant Leaf using Meta Intelligent Generative Adversarial Network: Milgan

Abstract: Agriculture is one of the most famous case studies in deep learning. Most researchers want to detect different diseases at the early stages of cultivation to protect the farmer's economy. Deep learning techniques need more data to develop an accurate system. In traditional approaches, researchers generated more synthetic data using basic image operations, but these approaches are complicated and expensive. In deep learning and computer vision, the system's accuracy is the crucial component in deciding the system's efficiency, and the model's precision depends on the size and quality of the images. Obtaining many images from the real-world environment in medicine and agriculture is difficult. The image augmentation technique helps the system generate more images that replicate physical circumstances by performing various operations; it also prevents overfitting, especially when the system has fewer images than required. A few researchers have experimented with CNNs and simple Generative Adversarial Networks (GANs), but these approaches create images with more noise. The proposed research aims to generate more data using a meta approach. The images are processed using kernel filters, and different geometric transformations are passed as input to the enhanced GANs to reduce noise and create more fake images using latent points, which act as weights in the neural networks. The proposed system uses random sampling techniques, passes a few processed images to the generator component of the GAN, and uses the discriminator component to classify the synthetic data created by the meta-learning approach.

Author 1: Sri Silpa Padmanabhuni
Author 2: Pradeepini Gera

Keywords: Basic image operations; meta-learning techniques; generator; discriminator; synthetic data; sampling techniques; latent points; kernel filters

PDF

Paper 29: Comparison of Path Planning between Improved Informed and Uninformed Algorithms for Mobile Robot

Abstract: This work is concerned with Path Planning Algorithms (PPA), which hold an important place in robotics navigation. Navigation has become indispensable to most modern inventions. Mobile robots have to move to a relevant task point in order to achieve the tasks assigned to them. The actions planned in a structure may restrict the task duration and, in some situations, even determine whether the mission can be accomplished. This paper aims to study and compare six commonly used informed and uninformed algorithms. Three different maps have been created with gradually increasing difficulty levels related to the number of obstacles in the tested maps. The paper provides a detailed comparison between the algorithms under investigation over several parameters, such as total steps, straight steps, rotation steps, and search time. Promising results were obtained when the proposed algorithms were applied to a case study.

Author 1: Mohamed Amr
Author 2: Ahmed Bahgat
Author 3: Hassan Rashad
Author 4: Azza Ibrahim

Keywords: Mobile robots; informed algorithm; uninformed algorithm; path planning

PDF
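The informed/uninformed distinction the abstract compares can be made concrete with two classic representatives on a toy grid: breadth-first search (uninformed) and A* with a Manhattan-distance heuristic (informed). This sketch is ours and does not reproduce the paper's six algorithms or maps; it only shows why the heuristic typically reduces the number of expanded cells:

```python
import heapq
from collections import deque

def neighbors(pos, grid):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
            yield (nr, nc)

def bfs(grid, start, goal):
    # Uninformed: expands cells strictly in order of distance from start.
    frontier, seen, expanded = deque([(start, 0)]), {start}, 0
    while frontier:
        node, dist = frontier.popleft()
        expanded += 1
        if node == goal:
            return dist, expanded
        for nb in neighbors(node, grid):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))

def astar(grid, start, goal):
    # Informed: a Manhattan-distance heuristic steers expansion toward
    # the goal; ties on f are broken in favour of deeper nodes.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier, seen, expanded = [(h(start), 0, start)], {start}, 0
    while frontier:
        _, neg_depth, node = heapq.heappop(frontier)
        dist = -neg_depth
        expanded += 1
        if node == goal:
            return dist, expanded
        for nb in neighbors(node, grid):
            if nb not in seen:
                seen.add(nb)
                heapq.heappush(frontier, (dist + 1 + h(nb), -(dist + 1), nb))

grid = [[0] * 6 for _ in range(6)]          # empty 6x6 map, 0 = free cell
d_bfs, e_bfs = bfs(grid, (0, 0), (5, 5))
d_astar, e_astar = astar(grid, (0, 0), (5, 5))
```

Both find the same optimal path length, but A* expands far fewer cells here, which is the kind of gap the paper's step and search-time comparisons quantify.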

Paper 30: Modified Gradient Algorithm based Noise Subspace Estimation with Full Rank Update for Blind CSI Estimator in OFDM Systems

Abstract: This paper presents a modified Gradient-based method to directly and iteratively compute the noise subspace from the received Orthogonal Frequency Division Multiplexing (OFDM) symbols to estimate Channel State Information (CSI). By invoking the matrix inversion lemma, which is extensively used in Recursive Least Square (RLS) algorithms, the proposed computationally efficient method enables direct computation of the noise subspace using the inverse of the autocorrelation matrix of the received OFDM symbols. In the case of a vector input, the modified Gradient algorithm uses a rank-one update to calculate the noise subspace recursively; for an input in matrix form, it uses a full-rank update. The validity, efficacy, and accuracy of the proposed modified Gradient algorithm have been substantiated through a comparison of the results with the conventional Singular Value Decomposition (SVD) algorithm, which is widely used for subspace estimation. The simulation results obtained through the modified Gradient algorithm show a satisfactory correlation with the results of SVD, even though the computational complexity of the modified Gradient algorithm is relatively lower. Apart from the results encompassing various power levels of the multipath channel, this paper also discusses the adaptive tracking of CSI and presents a comparative study.

Author 1: Saravanan Subramanian
Author 2: Govind R. Kadambi

Keywords: Orthogonal Frequency Division Multiplexing (OFDM); Carrier Frequency Offset (CFO); Channel State Information (CSI); Recursive Least Square (RLS); Singular Value Decomposition (SVD); Channel Impulse Response (CIR); BPSK; QPSK; QAM

PDF
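The identity underlying the approach, that the eigenvectors of the inverse autocorrelation matrix with the largest eigenvalues span the same noise subspace as the smallest singular vectors of the autocorrelation matrix itself, can be checked numerically. This NumPy sketch uses synthetic data and explicit inversion for clarity; the paper's algorithm avoids explicit inversion via the matrix inversion lemma:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "received symbols": a 2-D signal subspace inside a 4-D space,
# plus small noise (an illustrative stand-in for OFDM data).
A = rng.standard_normal((4, 2))
S = rng.standard_normal((2, 200))
X = A @ S + 0.01 * rng.standard_normal((4, 200))

R = X @ X.T / X.shape[1]                  # autocorrelation matrix

# Reference: noise subspace = singular vectors of the smallest singular values.
U, s, _ = np.linalg.svd(R)
noise_svd = U[:, 2:]

# Same subspace from inv(R): its LARGEST eigenvalues correspond to noise.
w, V = np.linalg.eigh(np.linalg.inv(R))
noise_inv = V[:, np.argsort(w)[-2:]]

# Compare the two subspaces through their orthogonal projectors.
P1 = noise_svd @ noise_svd.T
P2 = noise_inv @ noise_inv.T
gap = np.linalg.norm(P1 - P2)             # ~0 when the subspaces coincide
```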

Paper 31: COVID-19: Challenges and Opportunities in the Online University Education

Abstract: The COVID-19 pandemic had a very severe impact on education both in schools and in universities. In the span of several weeks, educators around the world had to completely transform their teaching methods, and students had to adapt to the new form of learning. The following article reviews the opinions of university students based on three different studies: one before the pandemic and distance learning, one in the middle of it, and one at the end of distance learning. The goal is to see how students' thinking and perceptions of online learning have changed over the last three years as a result of the different conditions.

Author 1: Irena Valova
Author 2: Tsvetelina Mladenova

Keywords: e-Learning; online learning; students' attitude to e-learning; pandemic outbreak; COVID-19

PDF

Paper 32: Multi-modal Brain MR Image Registration using A Novel Local Binary Descriptor based on Statistical Approach

Abstract: Medical image registration (MIR) has played an important role in medical image processing during the last decade. Its main objective is to integrate the information inherent in two images of the same object, from different scanning sources, to guide medical treatments such as diagnosis, surgery and therapy. A challenging aspect of MIR arises from the complex relationships between the intensities of the two images, and its performance depends primarily on the chosen similarity measure. In this work, a statistical local binary descriptor (SLBD) is proposed as a novel local descriptor for similarity measurement; it is simple to compute and handles multi-modal registration more effectively. The proposed SLBD employs two statistical values, i.e., the mean and the standard deviation, of all intensities within the image patch for its computation. Experimental results show that SLBD outperforms other descriptors in terms of registration accuracy. In addition, SLBD is demonstrated to be robust across different modalities.

Author 1: Thuvanan Borvornvitchotikarn

Keywords: Local binary descriptor; multi-modal image registration; statistical approach; medical image registration; similarity measure

PDF
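To make the mean/standard-deviation idea concrete, here is one plausible form of such a descriptor: each pixel is encoded by comparing it with the patch mean, with the standard deviation as a tolerance band. This exact encoding is our assumption for illustration, not necessarily the SLBD defined in the paper; the point is that the code survives linear intensity changes of the kind seen between modalities:

```python
import numpy as np

def slbd(patch):
    """Ternary code per pixel: +1 above mean + std, -1 below mean - std,
    0 otherwise.  Invariant under affine intensity changes a*x + b (a > 0),
    since the mean and std transform the same way."""
    mu, sigma = patch.mean(), patch.std()
    code = np.zeros(patch.shape, dtype=int)
    code[patch > mu + sigma] = 1
    code[patch < mu - sigma] = -1
    return code

p = np.array([[10, 10, 10],
              [10, 55, 10],
              [10, 10, 90]], dtype=float)
# A linear intensity change (as between modalities) leaves the code intact.
same = np.array_equal(slbd(p), slbd(2.0 * p + 30.0))
```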

Paper 33: Core Elements Impacting Cloud Adoption in the Government of Saudi Arabia

Abstract: The Kingdom of Saudi Arabia is taking rapid steps towards digital transformation in the field of government services. Cloud computing adoption may be the next step supporting this digital transformation, by providing many features and reducing costs. Therefore, this paper presents multiple factors that may make the move to the cloud difficult, identified by conducting several interviews and questionnaires with technically experienced government-sector workers, so that caution can be exercised and suitable solutions developed in advance. This paper also presents some recommendations and suggestions that are useful to consider when adopting the cloud in the public sector.

Author 1: Norah Alrebdi
Author 2: Nabeel Khan

Keywords: Cloud computing; e-governance; cloud computing adoption; smart government; Saudi Arabia vision 2030

PDF

Paper 34: Decentralized Tribrid Adaptive Control Strategy for Simultaneous Formation and Flocking Configurations of Multi-agent System

Abstract: This paper focuses on the development of a tribrid control strategy for leader-follower flocking of multi-agents in an octagonal polygonal formation. The tribrid approach encompasses Reinforcement Learning (RL) together with centralized and decentralized control strategies. While the RL for multi-agent polygonal formation addresses the issue of scalability, the centralized strategy maintains the inter-agent distance in the formation and the decentralized strategy reduces the consensus error (in position and velocity). Unlike previous studies focusing only on a predefined trajectory, this paper deals with the leader-follower scenario through a decentralized tribrid control strategy. Two cases of initial positions of the multi-agents are dealt with in this paper: the octagonal pattern from RL, and agents randomly distributed in the spatial environment. The tribrid control strategy aims at simultaneous formation and flocking, with stability in a shorter response time. The convergence of the flocking error to zero in 3 s substantiates the validity of the proposed control strategy, which is faster than previous control methods. Implicit use of the centralized scheme in the decentralized control strategy facilitates retention of the formation structure of the initial configuration. The average position error of the agents with respect to the leader is within the position band in 3 s, confirming the maintenance of formation during flocking.

Author 1: B. K. Swathi Prasad
Author 2: Hariharan Ramasangu
Author 3: Govind R. Kadambi

Keywords: Simultaneous; flocking; polygonal formation; decentralized; hybrid; adaptive; control strategy; simulation

PDF
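The consensus-error mechanism at the heart of such strategies can be illustrated with a minimal first-order sketch: each follower moves toward the mean of the other agents plus a pull toward the leader, and the position error shrinks geometrically. The gains, topology, and scalar state are our simplifications, not the paper's tribrid controller:

```python
def consensus_step(positions, leader, gain_n=0.3, gain_l=0.2):
    """One discrete-time update: neighbour-averaging term plus leader term."""
    n = len(positions)
    new = []
    for i, p in enumerate(positions):
        mean_others = sum(positions[j] for j in range(n) if j != i) / (n - 1)
        new.append(p + gain_n * (mean_others - p) + gain_l * (leader - p))
    return new

leader = 10.0
agents = [0.0, 4.0, 7.0, 13.0]     # scattered initial positions
for _ in range(30):
    agents = consensus_step(agents, leader)
error = max(abs(a - leader) for a in agents)   # shrinks toward zero
```

With these gains the error contracts by a factor of at most 0.8 per step, so 30 steps bring all agents well within a small band around the leader.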

Paper 35: Merged Dataset Creation Method Between Thermal Infrared and Microwave Radiometers Onboard Satellites

Abstract: A merged dataset creation method between Thermal Infrared (TIR) and Microwave Scanning Radiometer (MSR) instruments onboard remote sensing satellites is proposed. One of the key issues here is the relation between thermal and microwave emissions from the same observation target, in particular Sea Surface Temperature (SST). An example based on the Tropical Rainfall Measuring Mission (TRMM) satellite's TIR and MSR instruments, the Visible Infrared Scanner (VIRS) and the TRMM Microwave Imager (TMI), is shown in this paper. SST is estimated independently with VIRS and with TMI. A method for interpolation of multi-sensor satellite images based on Multi-Resolution Analysis (MRA) is also proposed. The experimental results with the TMI/SST and VIRS/SST images show that the Root Mean Square (RMS) error ranges from 0.87 to 0.91 degrees C.

Author 1: Kohei Arai

Keywords: Wavelets; VIRS/SST; TMI/SST; MRA; Daubechies; TRMM; TIR; MSR

PDF

Paper 36: A Hybrid RNN based Deep Learning Approach for Text Classification

Abstract: As text classification has grown in relevance over the last decade, a plethora of approaches have been created to meet the difficulties associated with it. To handle the complexities involved in the text classification process, the focus has shifted away from traditional machine learning methods and toward neural networks. In this work, the traditional RNN model is embedded with different layers to test the accuracy of text classification. The work involves the implementation of an RNN+LSTM+GRU model, which is compared with RCNN+LSTM and RNN+GRU models. The models are trained using GloVe word embeddings, the accuracy and recall obtained from the models are assessed, and the F1 score is used to compare their performance. The hybrid RNN model has three LSTM layers and two GRU layers, whereas the RCNN model contains four convolution layers and four LSTM layers, and the RNN model contains four GRU layers. The weighted average F1 for the hybrid RNN model is found to be 0.74, for RCNN+LSTM 0.69, and for RNN+GRU 0.77. The RNN+LSTM+GRU model shows moderate accuracy in the initial epochs, but the accuracy slowly increases as the epochs increase.

Author 1: Pramod Sunagar
Author 2: Anita Kanavalli

Keywords: F1 score; gated recurrent unit; GloVe; long - short term memory; precision; recall; recurrent neural network; region-based convolutional neural network; text classification

PDF
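The "weighted average" F1 figures quoted above are support-weighted averages of per-class F1 scores. A small sketch of the computation, with hypothetical per-class numbers of our own, makes the metric explicit:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def weighted_f1(per_class):
    """Support-weighted average of per-class F1 scores."""
    total = sum(support for *_, support in per_class)
    return sum(f1(p, r) * support for p, r, support in per_class) / total

# (precision, recall, support) per class -- illustrative numbers only.
classes = [(0.80, 0.70, 50), (0.60, 0.75, 30), (0.90, 0.85, 20)]
score = weighted_f1(classes)   # close to 0.75 for these inputs
```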

Paper 37: Deep Learning Approach for Masked Face Identification

Abstract: COVID-19 is a global health emergency and a major concern in the industrial and residential sectors. It has the ability to spread, leading to health problems or death. Wearing a mask in public locations and busy areas is the most effective COVID-19 prevention measure. Face recognition provides an accurate identification method that overcomes uncertainties such as false prediction, high cost, and time consumption, since the primary identification for every human being is their face. As a result, masked face identification is required to solve the issue of recognizing individuals wearing masks in several applications, such as door access systems and smart attendance systems. This paper offers an important and intelligent method to solve this issue: we propose a deep transfer learning approach for masked-face human identification. We created a dataset of masked-face images and examined six convolutional neural network (CNN) models on this dataset. All models show great performance in terms of very high face recognition accuracy and short training time.

Author 1: Maad Shatnawi
Author 2: Nahla Almenhali
Author 3: Mitha Alhammadi
Author 4: Khawla Alhanaee

Keywords: Masked face human identification; face recognition; deep transfer learning; convolutional neural networks

PDF

Paper 38: MSA-SFO-based Secure and Optimal Energy Routing Protocol for MANET

Abstract: A Mobile Ad hoc Network (MANET) is a quickly deployable wireless mobile network with minimal infrastructure requirements. In these networks, autonomous nodes may function as routers, and because of node mobility the network topology is dynamic. Recent scientific emphasis has been placed on MANET security, yet only a few MANET attacks have been discussed in the existing literature; wired networks provide more security choices than wireless networks, and most routing protocols fail in a MANET with a malicious node. This research focuses on S-DSR, a novel hybrid secure routing scheme that safeguards the delivery and performance of packets across network nodes. The protocol leverages neighbor trust information to choose the most secure route for file transfer. Evaluated in OMNeT++, it offers a higher delivery rate and lower delay than AODV, AOMDV, and other similar protocols. MANETs will be used in the future communication protocols of industrial wireless networks, where they will decentralize the connection of smart devices. Because of the unidimensional nature of digital data, encryption methods cannot be applied indirectly, so a safe, lightweight key extraction technique is required to strengthen the privacy of e-healthcare MANETs. The purpose of this work is to develop a secure protocol for MANET wireless networks, and it proposes chaotic cryptography to enhance their security. Using Modified Self-Adaptive Sailfish Optimization (MSA-SFO), key maps can be constructed in a chaotic setting, and the method produces secure key pairs.

Author 1: D. Naga Tej
Author 2: K V Ramana

Keywords: MANET; sail fish optimization; energy; routing protocol

PDF
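To illustrate the chaotic-cryptography ingredient (only the generic idea, not the MSA-SFO scheme from the paper), a key stream can be drawn from the logistic map x → r·x·(1−x), whose extreme sensitivity to the seed is what makes chaotic keys attractive:

```python
def logistic_keystream(seed, r=3.99, n=16, skip=100):
    """Derive n key bytes from the logistic map; the transient is
    discarded so the stream depends sensitively on the seed.
    This construction is a textbook-style sketch, not the paper's."""
    x = seed
    for _ in range(skip):          # discard transient iterations
        x = r * x * (1 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return stream

k1 = logistic_keystream(0.123456789)
k2 = logistic_keystream(0.123456790)   # tiny seed change, different key
```

The same seed always reproduces the same stream, while a change in the ninth decimal place diverges completely after the transient, which is the property a chaotic key-agreement scheme exploits.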

Paper 39: An Efficient and Optimal Deep Learning Architecture using Custom U-Net and Mask R-CNN Models for Kidney Tumor Semantic Segmentation

Abstract: Today, kidney medical imaging has become the backbone for health professionals in diagnosing kidney disease and determining its severity. Physicians commonly use Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) scans to obtain kidney disease information. The significance and impact of kidney tumor analysis drew researchers to the semantic segmentation of kidney tumors. Traditional image processing methodologies generally require more computational power and manual assistance to analyze kidney medical images for tumor segmentation. Advances in deep learning are enabling less computationally demanding and automated models for kidney medical image analysis and tumor delineation. Blob (region of interest) detection from medical images is gaining popularity in kidney disease diagnosis and is widely used in detecting tumors, glomeruli, and cell nuclei, among other things. Kidney tumor segmentation is challenging compared to other segmentation tasks due to morphological diversity, object overlapping, intensity variance, and integrated noise. In this paper, we propose a kidney tumor semantic segmentation model based on CU-Net and Mask R-CNN to extract kidney tumor information from abdominal MR images. Initially, we train the Custom U-Net architecture on abdominal MR images with kidney masks for kidney image segmentation. The Mask R-CNN model is then used to delineate tumors from the kidney images. Experiments on abdominal MR images using Python image processing libraries revealed that the proposed deep learning architecture segmented the kidney images and delineated the tumors with high accuracy.

Author 1: Sitanaboina S L Parvathi
Author 2: Harikiran Jonnadula

Keywords: Kidney tumor (Blob) detection; custom U-Net; mask R-CNN; semantic segmentation; deep learning; medical image processing

PDF
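Segmentation accuracy in this setting is typically scored by mask overlap. The abstract does not name its metric, so as an illustrative addition here is the standard Dice coefficient on binary masks, with a toy shifted "tumor" prediction:

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1            # ground-truth "tumor" region, 16 pixels
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1             # prediction shifted down by one row
score = dice(pred, truth)      # 12 overlapping pixels -> 0.75
```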

Paper 40: GonioPi: Towards Developing a Scalable, Versatile, Reliable and Accurate Handheld-Wearable Digital Goniometer

Abstract: Range of Motion (ROM) testing is an important physical examination performed in physical therapy to assess the ROM of a patient's joint. The most commonly used instrument for ROM testing is the universal goniometer, and the most common cause of unreliable and inaccurate joint-angle ROM measurements is measurement error. Multiple studies have been done to mitigate measurement errors in clinical goniometry by designing and developing wearable digital goniometers using sensor technology. This study aims to design and develop a handheld-wearable digital goniometer called the GonioPi that is versatile, scalable, reliable and accurate, using the MPU-6050 IMU sensor and Raspberry Pi Pico as the main components. The results showed that the GonioPi is versatile and scalable, as it is able to support multiple ROM tests using multiple different positions on people of varying heights, weights, and BMI categories. The results also showed that the GonioPi is reliable and accurate, as it recorded joint-angle ROM measurement differences of less than 5 degrees and 10 degrees, the accepted standard values for reliability and accuracy, respectively.

Author 1: Thomas Jonathan R. Garcia
Author 2: Dhong Fhel K. Gom-os

Keywords: Range of Motion (ROM); goniometer; physical therapy; goniometry; wearable; sensors; MPU-6050; Raspberry Pi Pico

PDF
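For readers unfamiliar with how an IMU like the MPU-6050 yields a static joint angle: with the sensor at rest, gravity read through the accelerometer gives the tilt via atan2. The axis convention below is an assumption for illustration, not the GonioPi's exact firmware:

```python
import math

def tilt_deg(ax, ay, az):
    """Static tilt angle (degrees) about one axis from accelerometer
    readings in g, using the gravity vector."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

# Sensor flat on the table: gravity entirely on z, zero tilt.
flat = tilt_deg(0.0, 0.0, 1.0)
# Sensor rotated 45 degrees about y: gravity splits between x and z.
tilted = tilt_deg(math.sin(math.radians(45)), 0.0, math.cos(math.radians(45)))
```

A joint's ROM is then the difference between the tilt readings at the two ends of the movement (real devices also fuse the gyroscope to reject motion artifacts).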

Paper 41: Computational Approach to Identify Regulatory Biomarkers in the Pathogenesis of Breast Carcinoma

Abstract: Breast cancer is reckoned among the most common causes of morbidity and mortality among women, adversely affecting the female population irrespective of age. The poor survival rate reported in invasive carcinoma cases demands the identification of key markers at early developmental stages. MicroRNAs play a critical role in gene regulation and are potential markers. Over 2000 miRNAs have been identified and are considered to offer a unique opportunity for early detection of disease. In this study, a gene-miRNA-TF interaction network was constructed from the differentially expressed genes obtained from invasive lobular and invasive ductal carcinoma samples. Experimentally validated miRNAs and transcription factors were identified for the target genes, followed by thermodynamic studies to determine the binding free energy between mRNA and miRNA. Our analysis identified that hsa-miR-28-5p binds MAD2L1 with an unexpectedly high binding free energy of -92.54 kcal/mol and also forms a canonical triplex with hsa-miR-203a, which acts as a catalyst to initiate MAD2L1 regulation. For the identified regulatory elements, we propose a mathematical model and feed-forward loops that may serve in understanding the regulatory mechanisms in breast cancer pathogenesis and progression.

Author 1: Ghazala Sultan
Author 2: Swaleha Zubair
Author 3: Inamul Hasan Madar
Author 4: Harishchander Anandaram

Keywords: Breast cancer; invasive lobular carcinoma; invasive ductal carcinoma; biomarkers; MicroRNA; transcription factors; feed forward loops

PDF

Paper 42: Discourse-based Opinion Mining of Customer Responses to Telecommunications Services in Saudi Arabia during the COVID-19 Crisis

Abstract: This study used opinion mining theory and the potential of artificial intelligence to explore the opinions, sentiments, and attitudes of customers expressed on Twitter regarding the services provided by Saudi telecommunications companies during the COVID-19 crisis. A corpus of 12,458 Twitter posts was constructed covering the period 2020–2021. For data analysis, the study adopted a discourse-based mining approach, combining vector space classification (VSC) and collocation analysis. The results indicate that most users had negative attitudes and sentiments regarding the performance of the telecommunications companies during the pandemic, as reflected in both the lexical semantic properties and the discoursal and thematic features of their Twitter posts. The study of collocates and the discoursal properties of the data was useful in attaining a deeper understanding of the users’ responses and attitudes to the performance of the telecommunications companies during the COVID-19 pandemic. Text clustering based on the “bag of words” model alone could not address the discoursal features in the corpus. Opinion mining applications, especially in Arabic, thus need to integrate discourse approaches to gain a better understanding of people’s opinions and attitudes regarding given issues.

Author 1: Abdulfattah Omar

Keywords: Artificial intelligence; collocate analysis; COVID-19; discourse; opinion mining; vector space clustering

PDF

Paper 43: Building Footprint Extraction in Dense Area from LiDAR Data using Mask R-CNN

Abstract: Building footprint extraction is an essential process for various geospatial applications. City management is entrusted with eliminating slums, which are increasing in rural areas. Several recent research investigations have revealed that, compared with more traditional methods, extracting footprints in dense areas is challenging and suitable data are in limited supply. Deep learning algorithms provide a significant improvement in the accuracy of automated building footprint extraction from remote sensing data. The Mask R-CNN object detection framework, used to extract buildings in dense areas, sometimes fails to provide an adequate building boundary due to urban edge intersections and unstructured buildings. Thus, we introduce a modified workflow that trains an ensemble of Mask R-CNN models using two ResNet backbones (34 and 101). Furthermore, the results were stacked to refine the structure of the building boundaries. The proposed workflow, comprising data preprocessing and deep learning-based instance segmentation, was applied to a light detection and ranging (LiDAR) point cloud of a dense rural area. The proposed method produced better-regularized polygons and achieved an overall accuracy of 94.63%.

Author 1: Sayed A. Mohamed
Author 2: Amira S. Mahmoud
Author 3: Marwa S. Moustafa
Author 4: Ashraf K. Helmy
Author 5: Ayman H. Nasr

Keywords: Deep learning; object detection; mask R-CNN; point cloud; light detection and ranging (LiDAR)

PDF

Paper 44: Cricket Event Recognition and Classification from Umpire Action Gestures using Convolutional Neural Network

Abstract: The advancement of hardware and deep learning technologies has made it possible to apply these technologies to a variety of fields. The Convolutional Neural Network (CNN), a deep learning architecture, revolutionized the field of computer vision. One of the most popular applications of computer vision is in sports. There are different types of events in cricket, which makes it a complex game. This work introduces a new dataset, called SNWOLF, for detecting umpire postures and categorizing events in cricket matches. The proposed dataset serves as a preliminary resource and was assessed in developing a system for the automatic generation of cricket highlights. In cricket, the umpire has the authority to make crucial decisions about on-field incidents, and signals important incidents with distinctive hand signals and gestures. By detecting the umpire's stance from action frames of cricket video, the system identifies the most frequently used event classes: SIX, NO BALL, WIDE, OUT, LEG BYE, and FOUR. The proposed method utilizes a Convolutional Neural Network (CNN) architecture to extract features and classify detected frames into umpire postures for the six event classes. We created a completely new dataset of 1040 umpire action images covering these six events. Our method trains the CNN classifier on 80% of the SNWOLF images and tests on the remaining 20%. Our approach achieves an average overall accuracy of 98.20% and converges with very low cross-entropy loss. The proposed system is an effective solution for the generation of cricket sport highlights.

Author 1: Suvarna Nandyal
Author 2: Suvarna Laxmikant Kattimani

Keywords: Cricket match; computer vision; deep learning; SNWOLF dataset; umpire recognition; umpire action images; CNN; event classification

PDF

Paper 45: Optimization Performance Analysis for Adaptive Genetic Algorithm with Nonlinear Probabilities

Abstract: The Genetic Algorithm (GA) is prone to falling into local optima due to its fixed crossover and mutation probabilities, while the Adaptive Genetic Algorithm (AGA) has strong global search capability because the two probabilities adjust adaptively. AGAs fall into two categories according to how the crossover and mutation probabilities are adjusted: AGAs with linear probability adjustment and AGAs with nonlinear probability adjustment. AGAs with linear adjustment of the probability values cannot solve the problems of local optima and premature convergence, whereas a nonlinear adaptive probability adjustment strategy can avoid premature convergence, poor stability, and slow convergence. In this paper, typical AGAs with nonlinear probability adjustment are compared and analyzed on 10 benchmark functions. Compared with the traditional GA and other AGA variants, the AGA whose crossover and mutation probabilities are adjusted nonlinearly on both sides of the average fitness value has higher computational stability and more readily finds the global optimal solution, which provides ideas for the application of adaptive genetic algorithms.

Author 1: Wenjuan Sun
Author 2: Qiaoping Su
Author 3: Hongli Yuan
Author 4: Yan Chen

Keywords: Adaptive genetic algorithm; genetic algorithm; nonlinear adjustment; probability

PDF
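The abstract does not give the exact nonlinear formulas. As an illustration only, a sigmoid-style adjustment often used in nonlinear AGAs lowers the crossover probability for above-average individuals while below-average individuals keep the maximum rate (the constants, bounds, and function name here are assumptions, not the paper's):

```python
import math

def adaptive_pc(f_prime, f_avg, f_max, pc_max=0.9, pc_min=0.6):
    """Nonlinearly lower the crossover probability for individuals whose
    fitness f_prime exceeds the population average, via a sigmoid ramp."""
    if f_prime < f_avg or f_max == f_avg:
        return pc_max  # below-average individuals keep the highest rate
    x = (f_prime - f_avg) / (f_max - f_avg)   # position in [0, 1] above average
    return pc_min + (pc_max - pc_min) / (1.0 + math.exp(9.903438 * (2 * x - 1)))

# Near-average individual: probability stays close to pc_max.
print(round(adaptive_pc(50.1, 50, 100), 3))
# Best individual: probability drops toward pc_min, protecting good genes.
print(round(adaptive_pc(100, 50, 100), 3))
```

The smooth sigmoid avoids the abrupt linear ramp that can drive the probabilities of the best individuals to zero and cause premature convergence.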

Paper 46: Improved Particle Swarm Approach for Dynamic Automated Guided Vehicles Dispatching

Abstract: Automated guided vehicle dispatching is one of the important operations in a container terminal because it affects the loading/unloading process. This operation has become faster and more complex since the advent of automation; despite this evolution, the environment has become dynamic and uncertain. This paper proposes an improved particle swarm approach for solving the bi-objective problem of dispatching and routing automated guided vehicles in the dynamic environment of a container terminal. The objectives are to minimize the total travel distance of all automated guided vehicles and to maximize the workload balance between them. The particle swarm algorithm in its basic form shows premature convergence; to remedy this, the authors apply a method that lets the worst particles escape from local optima. The new Hybrid Guided Particle Swarm approach is a hybridization of the Dijkstra algorithm and a Guided Particle Swarm algorithm: the routing problem is solved with the Dijkstra algorithm and the dispatching problem with the guided particle swarm approach. As a first step, this approach was applied in a static environment where the dispatching and routing parameters are fixed in advance. The second step consists of applying the approach in a dynamic environment, where the number of containers associated with each automated guided vehicle, the shortest path, and the container locations can all change during algorithm execution. The numerical results in a static environment show good Hybrid Guided Particle Swarm performance, with faster and more stable convergence that surpasses previous approaches such as the Hybrid Genetic Approach; they also show the efficiency of its extension, the Dynamic Hybrid Guided Particle Swarm, in a dynamic environment.

Author 1: Radhia Zaghdoud
Author 2: Marwa Amara
Author 3: Khaled Ghedira

Keywords: Dispatching; automated guided vehicles; dynamic; containers; particle swarm; genetic algorithm

PDF
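The routing sub-problem is solved with the Dijkstra algorithm, which can be sketched in a few lines (the toy terminal graph and node names are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over a weighted adjacency dict,
    as used here for the AGV routing sub-problem."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy quay-to-yard network: edge weights are travel distances.
grid = {"quay": [("a", 2), ("b", 5)], "a": [("b", 1), ("yard", 7)], "b": [("yard", 3)]}
print(dijkstra(grid, "quay")["yard"])   # 6 (quay -> a -> b -> yard)
```

In the dynamic setting the graph weights change during execution, so such shortest paths would be recomputed as the terminal state evolves.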

Paper 47: A New Approach for Detecting and Mitigating Address Resolution Protocol (ARP) Poisoning

Abstract: The Address Resolution Protocol (ARP) poisoning attack is considered one of the most devastating attacks in a network context. As a result of its stateless nature and lack of authentication, this protocol suffers from many spoofing attacks in which attackers poison the cache of hosts on the network by sending spoofed ARP requests and replies. This paper proposes an approach for detecting and mitigating ARP poisoning. The approach includes three modules: Module 1 grants permission on first contact and stores host information in the database, using an MD5 hash as a security measure; Module 2 avoids internal ARP traffic; Module 3 detects whether a MAC address has two IPs or an IP address has two MACs. The architecture includes a database that facilitates storing ARP table information; since ARP table entries generally expire after a short amount of time, this ensures that changes in the network are accounted for. Experiments were conducted in a real-life network environment using Ettercap to check the functionality of the proposed mechanism. The results show that the proposed approach was able to detect and mitigate ARP poisoning, in particular the cases where a MAC has two IPs or an IP has two MACs.

Author 1: Ahmed A. Galal
Author 2: Atef Z. Ghalwash
Author 3: Mona Nasr

Keywords: Address Resolution Protocol (ARP); ARP detecting; ARP mitigation; ARP spoofing

PDF
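Module 3's check, one MAC claiming two IPs or one IP claimed by two MACs, reduces to bookkeeping over the observed ARP bindings. A minimal sketch (the data, addresses, and function name are illustrative):

```python
from collections import defaultdict

def arp_conflicts(entries):
    """Flag an IP claimed by two MACs (a classic poisoning sign) or a MAC
    claiming two IPs, from a list of (ip, mac) ARP observations."""
    ip_to_macs, mac_to_ips = defaultdict(set), defaultdict(set)
    for ip, mac in entries:
        ip_to_macs[ip].add(mac)
        mac_to_ips[mac].add(ip)
    return ({ip for ip, macs in ip_to_macs.items() if len(macs) > 1},
            {mac for mac, ips in mac_to_ips.items() if len(ips) > 1})

observed = [("10.0.0.1", "aa:aa"), ("10.0.0.1", "bb:bb"),  # gateway IP poisoned
            ("10.0.0.2", "cc:cc"), ("10.0.0.3", "cc:cc")]  # one MAC, two IPs
print(arp_conflicts(observed))
```

Persisting these bindings in a database, as the paper's architecture does, is what makes the check survive the short expiry time of live ARP cache entries.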

Paper 48: Cross-Layer based TCP Performance Enhancement in IoT Networks

Abstract: The Transmission Control Protocol (TCP) can use multiple paths to transmit data simultaneously and improve its performance. However, previous TCP protocols in Internet of Things (IoT) networks had difficulty transmitting a larger number of subflows. To overcome these issues, we introduce a cross-layer framework that performs efficient packet scheduling and congestion control to increase the performance of TCP in IoT networks. Initially, the proposed IoT network is constructed on a grid topology using the Manhattan distance, which improves the scalability and flexibility of the network. After network construction, packet scheduling is performed by considering numerous parameters such as bandwidth, delay, and buffer rate using a fitness-based proportional fair (FPF) scheduling algorithm, which selects the best subflow to reduce transmission delay. The scheduled subflow is sent over an optimal path to improve throughput and goodput. After packet scheduling, congestion control in TCP is performed using the cooperative constraint approximation 3+ (CoCoA3+-TCP) algorithm, which comprises three stages: congestion detection, fast retransmission, and recovery. Congestion detection in the TCP-IoT environment considers several parameters, and cat and mouse-based optimization (CMO) is utilized to adaptively estimate the retransmission timeout (RTO), reducing delay and improving convergence during retransmission. Fast retransmission and recovery improve network performance by adjusting the congestion window size, thereby avoiding congestion. The cross-layer approach is simulated using network simulator NS-3.26, and the simulation results show that the proposed work achieves high TCP performance in terms of throughput, goodput, packet loss, transmission delay, jitter, and congestion window size.

Author 1: Sultana Parween
Author 2: Syed Zeeshan Hussain

Keywords: Internet of things (IoT); transmission control protocol (TCP); cross-layer approach; packet scheduling; congestion control; fast retransmission; recovery

PDF
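The paper's CMO-based RTO estimation is not detailed in the abstract; for orientation, the classic RFC 6298-style smoothing that adaptive RTO schemes build on looks like this (the parameters are the RFC defaults, not the paper's values):

```python
def rto_update(srtt, rttvar, rtt_sample, alpha=0.125, beta=0.25, g=0.1):
    """One RFC 6298-style RTO update from a new RTT sample (seconds):
    exponentially smooth the RTT and its variance, then pad the timeout."""
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt_sample)
    srtt = (1 - alpha) * srtt + alpha * rtt_sample
    rto = srtt + max(g, 4 * rttvar)
    return srtt, rttvar, rto

srtt, rttvar = 1.0, 0.5
for sample in (1.2, 1.1, 0.9):          # three measured round trips
    srtt, rttvar, rto = rto_update(srtt, rttvar, sample)
print(round(rto, 3))   # 2.164
```

An adaptive estimator like the paper's CMO would tune this trade-off at run time: too small an RTO triggers spurious retransmissions, too large a one delays loss recovery.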

Paper 49: Sena TLS-Parser: A Software Testing Tool for Generating Test Cases

Abstract: Currently, software complexity and size have been steadily growing, and the variety of testing has increased as well. The quality of software testing must be improved to meet deadlines and reduce development testing costs. Testing software manually is time-consuming, while automation saves time and money while increasing test coverage and accuracy. Over the last several years, many approaches to automating test case creation have been proposed. Model-based testing (MBT) is a test design technique that supports the automation of software testing processes by generating test artefacts based on a system model that represents the behavioral aspects of the system under test (SUT). The optimization technique for automatically generating test cases using Sena TLS-Parser is discussed in this paper. Sena TLS-Parser is developed as a plug-in tool to generate test cases automatically and reduce the time spent creating them manually. The process of generating test cases automatically with Sena TLS-Parser is presented through several case studies. Experimental results on six publicly available Java applications show that the proposed Sena TLS-Parser framework outperforms other automated test case generation frameworks. Sena TLS-Parser has been shown to solve the problem of software testers manually creating test cases, while completing optimization in a shorter period of time.

Author 1: Rosziati Ibrahim
Author 2: Samah W. G. AbuSalim
Author 3: Sapiee Jamel
Author 4: Jahari Abdul Wahab

Keywords: Software testing; schema parser; software under test (SUT); model based testing (MBT); java applications

PDF

Paper 50: Deep Sentiment Extraction using Fuzzy-Rule Based Deep Sentiment Analysis

Abstract: In the world of social media, the amount of textual data on the internet is increasing exponentially, and a large portion of it expresses subjective opinions. Sentiment Analysis (SA), also named opinion mining, is used to automatically identify and extract subjective sentiments from text. In recent years, research on sentiment analysis has taken off because a huge amount of data is available on social media like Twitter, and machine learning algorithms have grown in popularity in IR (Information Retrieval) and NLP (Natural Language Processing). In this work, we propose a three-phase system for the Twitter sentiment classification task of the SemEval competition. The task is predicting the sentiment (negative, positive, or neutral) of a tweet by analyzing the whole tweet. The first system uses the Artificial Bee Colony (ABC) optimization technique with the Bag-of-Words (BoW) technique, in association with Naive Bayes (NB) and k-Nearest Neighbor (kNN) classifiers and a combination of various categories of features, to identify the sentiment of a given tweet. In the second system, to preserve context, a Rider Feedback Artificial Tree Optimization-enabled Deep Recurrent Neural Network (RFATO-enabled Deep RNN) is developed for the efficient classification of sentiments into various grades. Further, to improve the accuracy of classification on an n-valued scale, an Adaptive Rider Feedback Artificial Tree (Adaptive RiFArT)-based Deep Neuro-fuzzy network is devised for efficient sentiment grade classification. Finally, this work proposes a Fuzzy-Rule Based Deep Sentiment Extraction (FBDSE) algorithm with deep sentiment score computation. Accuracy is used to evaluate the performance of the proposed systems. It was observed that the fuzzy-rule based system achieved good accuracy compared with the machine learning and deep learning based approaches.

Author 1: SIREESHA JASTI
Author 2: G. V. S. RAJ KUMAR

Keywords: Sentiment analysis; SemEval; recurrent neural networks; LSTM; word embeddings; accuracy; f1-score; fuzzy –rule; deep sentiment extraction

PDF

Paper 51: RS Invariant Image Classification and Retrieval with Pretrained Deep Learning Models

Abstract: Content-Based Image Retrieval (CBIR), which deals with seeking related images from a large dataset such as the Internet, is a demanding task. For the last two decades, scientists have been working in this area from various angles. Deep learning has provided state-of-the-art results for image categorization and retrieval, but pre-trained deep learning models are not robust to rotation and scale variations. A technique is proposed in this work to improve the precision and recall of image retrieval. This method concentrates on the extraction of rotation- and scale-invariant high-level features from the ResNet18 CNN (Convolutional Neural Network) model. These features are used for classification of images with the VGG19 deep learning model. Finally, after classification, if the class of a given query image is correct, we obtain 100% results for both precision and recall, the ideal requirement of an image retrieval technique. Our experimental results show that the proposed technique not only outperforms current techniques for rotated and scaled query images but also has preferable retrieval time requirements. The performance investigation exhibits that the presented method raises the average precision value from 76.50%, for combined DCD (Dominant Color Descriptor), wavelet, and curvelet features, to 99.1%, and the average recall value from 14.21% to 19.82% for rotated and scaled images on the Corel dataset. Also, the average retrieval time required is 1.39 sec, which is lower than existing modern techniques.

Author 1: D. N. Hire
Author 2: A. V. Patil

Keywords: CBIR; CNN; deep learning; ResNet18; rotation; scale; VGG19

PDF

Paper 52: Accuracy Enhancement of Prediction Method using SMOTE for Early Prediction Student's Graduation in XYZ University

Abstract: According to 2014 regulations of the Minister of Education and Culture of the Republic of Indonesia, one of the essential elements in implementing higher education is the student's study duration. Higher education institutions can use early graduation prediction as a guide when developing policy. According to XYZ University data, Grade Point Average (GPA), gender, and age are all aspects to consider for the student study period. Using a dataset of 8491 records, this study examined the early prediction of student graduation based on XYZ University data, particularly in the information systems and informatics study programs. The aim is to find significant features and compare three prediction models: Artificial Neural Networks (ANN), the K-Nearest Neighbor (K-NN) method, and Support Vector Machines (SVM). The challenge in developing a prediction model is imbalanced data; the Synthetic Minority Oversampling Technique (SMOTE) handles the class imbalance problem. Next, the machine learning models are trained and compared, and the prediction results improve. The best test accuracy is achieved by the ANN, rising from 62.5% on the imbalanced data to 70.5% after using SMOTE, compared with 69.3% for the K-NN method with SMOTE, while the SVM method increased to 69.8%. The most significant increase in recall, to 71.3%, occurred with the ANN.

Author 1: Ainul Yaqin
Author 2: Majid Rahardi
Author 3: Ferian Fauzi Abdulloh

Keywords: Prediction study period; SMOTE; neural network; k-nearest neighbors; support vector machine

PDF
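SMOTE's core idea, synthesizing minority samples by interpolating between a sample and one of its nearest minority-class neighbours, can be sketched as follows (the toy (GPA, age) data, parameter choices, and function name are illustrative, not the study's setup):

```python
import random

def smote(minority, n_new, k=2, seed=7):
    """Generate n_new synthetic minority samples by interpolating each
    picked sample toward one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neigh = sorted((m for m in minority if m != x),
                       key=lambda m: sum((a - b) ** 2 for a, b in zip(x, m)))[:k]
        n = rng.choice(neigh)
        gap = rng.random()  # random point on the segment between x and n
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, n)))
    return synthetic

dropouts = [(2.1, 24), (2.3, 26), (2.0, 27)]   # (GPA, age) minority samples
print(len(dropouts + smote(dropouts, 3)))       # 6: minority class rebalanced
```

Because the synthetic points lie between real minority samples rather than duplicating them, the classifier sees a denser but still plausible minority region.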

Paper 53: Research on the Classification Modeling for the Natural Language Texts with Subjectivity Characteristic

Abstract: The methods of natural language text classification are diverse, and the characteristics of the text are the basis of a method's effectiveness; this paper takes car service complaint data as an example to study classification modeling for texts with a subjectivity characteristic. The effective handling of car service complaints is important for improving user experience and maintaining brand reputation; manual classification commonly has the disadvantages of experience dependence, proneness to error, heavy workload, and so on, so research on corresponding automatic classification modeling is of great practical significance. The core links of the research method in this study include word segmentation, text vectorization, feature selection and dimensionality reduction based on correlation, classification modeling based on a progressive method and random forest, and model reliability analysis. The research results show that car service complaint texts can be effectively classified with the method in this study, which could provide a reference for related further research and application.

Author 1: Chen Xiao Yu
Author 2: Gao Feng
Author 3: Song Ying
Author 4: Zhang Xiao Min

Keywords: Car service complaint; text classification; machine learning; natural language texts

PDF

Paper 54: Multi-layer Stacking-based Emotion Recognition using Data Fusion Strategy

Abstract: Electroencephalography (EEG), or brain waves, is a commonly utilized biosignal in emotion detection, because data recorded from the brain has been found to connect emotions and physiological effects. This paper is based on a feature selection strategy using a data fusion technique on the EEG Brainwave Dataset for classification. A multi-layer stacking classifier with two layers of machine learning techniques is introduced to concurrently learn the features and distinguish the emotion of pure EEG signal states as positive, neutral, or negative. The first layer of the stack includes a support vector classifier and Random Forest, and the second layer includes a multilayer perceptron and a Nu-support vector classifier. Features are selected based on a Linear Regression based correlation coefficient (LR-CC) score over different subsets n1, n2, n3, and n4: dataset d1 combines n1 and n2, dataset d2 combines n3 and n4, and a new dataset d3 is the combination of d1 and d2. This feature selection strategy retains 997 of the 2548 features of the EEG Brainwave dataset and achieves an emotion recognition classification accuracy of 98.75%, which is comparable to many state-of-the-art techniques. This establishes some scientific groundwork for using a data fusion strategy in emotion recognition.

Author 1: Saba Tahseen
Author 2: Ajit Danti

Keywords: Electroencephalograph (EEG); linear regression based correlation coefficient; feature selection; multi-layer stacking model; machine learning techniques; emotion recognition

PDF
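For intuition, correlation-based feature selection of the kind the LR-CC score performs can be approximated by thresholding each feature's correlation with the labels. A sketch using the plain Pearson coefficient (the threshold, toy data, and function names are assumptions, not the paper's):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_features(columns, labels, threshold=0.5):
    """Keep indices of feature columns whose absolute correlation
    with the labels clears the threshold."""
    return [i for i, col in enumerate(columns)
            if abs(pearson(col, labels)) >= threshold]

labels = [0, 0, 1, 1]
features = [[1, 2, 9, 10],      # tracks the label -> kept
            [5, 1, 4, 2]]       # uncorrelated noise -> dropped
print(select_features(features, labels))   # [0]
```

Pruning weakly correlated channels like this is one way a 2548-dimensional EEG feature set could be reduced to a few hundred informative features before stacking.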

Paper 55: The Implementation of a Solution for Low-Power Wide-Area Network using LoRaWAN

Abstract: In recent years, there has been an increasing emphasis on Low-Power Wide-Area Network (LPWAN) technologies that allow efficient and fast data transfer, with the aim of large-scale integration of various devices facilitating long-distance communications in fields such as agriculture, logistics, and infrastructure. This category of technologies includes SigFox, LoRa, NB-IoT, and others. One area where these low-power technologies can be used successfully is agriculture, in which monitoring humidity and temperature is crucial. The socio-economic context of 2022 highlights as one of its main priorities the security of food and of the raw materials provided by agriculture, with the goal of large, efficient, and traceable production. Starting from this context, this paper proposes an architecture based on LoRa (Long-Range) technology and the LoRaWAN protocol. We place special emphasis on monitoring parameters that are extremely important in agriculture, namely temperature, humidity, and pressure. Although there are multiple works of research in this direction, or in similar directions in other fields of activity, each of them focuses on a certain, strictly delimited geographical area, and most of the time the results are purely theoretical. The contribution of this paper consists first in the fact that there is practical, implementable support, and second in that the described solution can be adapted to different geographical regions. Moreover, at the end of this paper, we compare and analyze, at the architectural level, two LPWAN technologies, SigFox vs. LoRa, implemented in the same context in order to find the best results.

Author 1: Nicoleta Cristina GAITAN
Author 2: Floarea PITU

Keywords: LoRa; low-power; LoRaWAN protocol; SigFox; LPWAN

PDF
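A LoRaWAN uplink for the three monitored parameters would typically be packed into a few fixed-point bytes to respect the protocol's small payload budget. A hedged sketch (the 6-byte layout and scale factors are an assumption for illustration, not the paper's format):

```python
import struct

def encode_reading(temp_c, humidity_pct, pressure_hpa):
    """Pack one sensor reading into 6 bytes for a small LoRaWAN uplink:
    signed temperature in centi-degrees, humidity in centi-percent,
    pressure in tenths of hPa, all big-endian."""
    return struct.pack(">hHH", round(temp_c * 100),
                       round(humidity_pct * 100), round(pressure_hpa * 10))

def decode_reading(payload):
    """Inverse of encode_reading, run on the network-server side."""
    t, h, p = struct.unpack(">hHH", payload)
    return t / 100, h / 100, p / 10

payload = encode_reading(21.57, 48.2, 1013.2)
print(len(payload), decode_reading(payload))   # 6 (21.57, 48.2, 1013.2)
```

Keeping the payload this small matters because LoRaWAN duty-cycle and data-rate limits make every byte of airtime expensive.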

Paper 56: Chaos Detection and Mitigation in Swarm of Drones Using Machine Learning Techniques and Chaotic Attractors

Abstract: Most existing work on identifying and tackling chaos in swarm drone missions focuses on single-drone scenarios. There is a need to assess the status of a system with multiple drones; hence, this research presents an on-the-fly chaotic behavior detection model for large numbers of flying drones using machine learning techniques. A succession of Artificial Intelligence knowledge discovery procedures, Logistic Regression (LR), Convolutional Neural Networks (CNN), Gaussian Mixture Models (GMMs), and Expectation-Maximization (EM), was employed to reduce the dimension of the actual flight data of the swarm of drones and classify it as non-chaotic or chaotic. A one-dimensional, multi-layer perceptive, deep neural network-based classification system was also used to collect the relevant characteristics and distinguish between chaotic and non-chaotic conditions. The Rössler system was then employed to deal with such chaotic conditions. Validation of the proposed chaos detection and mitigation technique was performed using real-world flight test data, demonstrating its viability for real-time implementation. The results demonstrated that swarm mobility horizon-based monitoring is a viable solution for real-time monitoring of a system's chaos with a significantly reduced commotion effect. The proposed technique has been shown to improve the performance of fully autonomous drone swarm flights.

Author 1: Emmanuel NEBE
Author 2: Mistura Laide SANNI
Author 3: Rasheed Ayodeji ADETONA
Author 4: Bodunde Odunola AKINYEMI
Author 5: Sururah Apinke BELLO
Author 6: Ganiyu Adesola ADEROUNMU

Keywords: Chaos detection; swarm of drones; machine learning; autoencoder; Rössler system

PDF
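The Rössler system referenced for chaos mitigation is the three-variable flow x' = -y - z, y' = x + ay, z' = b + z(x - c). A minimal Euler-integration sketch of its standard chaotic regime (the step size, horizon, and initial state are illustrative choices, not the paper's):

```python
def rossler_step(x, y, z, dt=0.01, a=0.2, b=0.2, c=5.7):
    """One explicit Euler step of the Rössler system with the classic
    chaotic parameter set a = b = 0.2, c = 5.7."""
    return (x + dt * (-y - z),
            y + dt * (x + a * y),
            z + dt * (b + z * (x - c)))

state = (1.0, 1.0, 1.0)
for _ in range(5000):                    # 50 simulated time units
    state = rossler_step(*state)
print(all(abs(v) < 50 for v in state))   # the orbit stays on the bounded attractor
```

The trajectory never settles or diverges but wanders over a bounded strange attractor, which is the structured chaotic reference such mitigation schemes exploit.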

Paper 57: Integrating Big Data Analytics into Business Process Modelling: Possible Contributions and Challenges

Abstract: Business Process Modelling (BPM) is a set of organised, structured, and related activities that boost the development and evolution of an organisation's success by understanding, improving, and automating existing business processes. Recently, the integration of Big Data Analytics (BDA) into BPM has gained wide attention as a unique opportunity for organisations to enhance their efficiency, effectiveness, added value, and competitive advantage. However, some organisations still rely on outdated data-driven strategies and are late in integrating BDA into their BPM. This study aims to explore the possible contributions and challenges of integrating BDA into BPM. This study found that better decision making, improving the organisation's performance, upgrading business process capabilities, and supporting supply chain management are the main contributions of BDA to BPM in organisations. However, poor data quality, a shortage of BDA professionals, and data security and protection are the main challenges that hinder organisations from implementing BDA. This study provides valuable insights for organisations that intend to implement big data technologies in their business processes.

Author 1: Zaeem AL-Madhrahi
Author 2: Dalbir Singh
Author 3: Elaheh Yadegaridehkordi

Keywords: Big data analytics; business process modeling; BPM; organisation’s performance

PDF

Paper 58: K-Means Customers Clustering by their RFMT and Score Satisfaction Analysis

Abstract: Businesses derive more revenue from building and maintaining long-term relationships with their customers. Therefore, it is essential to build refined strategies based on customer relationship management, with the purpose of increasing turnover and profits while retaining customers. In this context, customer segmentation, which is at the heart of marketing strategy, makes it possible to answer questions relating to the amount of investment to be released, the marketing campaigns to be organized, and the development strategy to be implemented. This paper develops an extended RFMT (Recency, Frequency, Monetary, and Interpurchase Time) model, namely the RFMTS model, by introducing satisfaction 'S' as a new dimension. The aim of this model is to analyze online consumer satisfaction over time and discern changes in order to implement customer segmentation. This article proposes a segmentation approach that clusters clients with the unsupervised machine learning method k-means, based on data generated using the proposed RFMTS model, in order to improve customer relationships and develop more effective personalized marketing strategies. The study shows that adding satisfaction to the existing RFM model for customer clustering has a major impact and helps identify customers who are satisfied and those who are not, unlike previous attempts to develop new RFM models. By ignoring the satisfaction indicator, what went well and what did not cannot be understood; consequently, the business loses its unsatisfied, loyal, and profitable customers and either fails or relies only on the satisfied ones to continue making profits for an indefinite period of time.

Author 1: Doae Mensouri
Author 2: Abdellah Azmani
Author 3: Monir Azmani

Keywords: Customer segmentation; customer satisfaction; RFMT model; machine learning; k-means

PDF
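The paper does not publish its clustering code; as a rough illustration of k-means over RFMTS-style customer vectors, here is a minimal pure-Python sketch of Lloyd's algorithm. All data values and the five-dimensional (recency, frequency, monetary, time, satisfaction) vectors are hypothetical, not taken from the study.

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign each point to the nearest centroid,
    then recompute centroids as cluster means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical normalised RFMTS vectors:
# (recency, frequency, monetary, interpurchase time, satisfaction)
customers = [
    (0.9, 0.8, 0.7, 0.2, 0.9),  # active, satisfied
    (0.8, 0.9, 0.8, 0.3, 0.8),
    (0.1, 0.2, 0.1, 0.9, 0.2),  # churn-risk, unsatisfied
    (0.2, 0.1, 0.2, 0.8, 0.1),
]
centroids, clusters = kmeans(customers, k=2)
```

With the satisfaction dimension included, the two well-separated customer profiles end up in different clusters, which is the effect the RFMTS model relies on.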

Paper 59: Indoor Positioning System: A Review

Abstract: The Global Positioning System (GPS) has matured in outdoor environments in recent years. GPS offers a wide range of outdoor applications, including military use, weather forecasting, vehicle tracking, mapping, and farming. In an outdoor environment, exact location, velocity, and time can be determined using GPS. GPS receivers do not emit satellite signals; they passively receive them. However, due to No Line-of-Sight (NLoS) conditions, low signal strength, and low accuracy, GPS is not suitable for indoor use. As a consequence, indoor environments necessitate a different approach, an Indoor Positioning System (IPS), capable of locating a position within a structure. IPS systems provide a variety of location-based indoor tracking solutions, such as Real-Time Location Systems (RTLS), indoor navigation, inventory management, and first-responder location systems. Different technologies, algorithms, and techniques have been proposed in IPS to determine position and system accuracy. This paper reviews indoor positioning technologies, algorithms, and techniques. The review is expected to give readers a better understanding and to compare solutions for IPS, helping them choose the technologies, algorithms, and techniques that suit their situation.

Author 1: N. Syazwani C. J
Author 2: Nur Haliza Abdul Wahab
Author 3: Noorhazirah Sunar
Author 4: Sharifah H. S. Ariffin
Author 5: Keng Yinn Wong
Author 6: Yichiet Aun

Keywords: Global positioning system (GPS); indoor positioning system (IPS); real-time location system (RTLS)

PDF
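Many of the IPS techniques surveyed in reviews like this reduce to multilateration: estimating a position from measured distances to anchors with known coordinates. As a hedged illustration (not code from the paper), a minimal 2-D trilateration sketch that linearises the three circle equations and solves the resulting 2x2 system with Cramer's rule:

```python
import math

def trilaterate(anchors, dists):
    """Estimate (x, y) from three anchors and their measured distances.
    Subtracting the first circle equation from the other two yields a
    2x2 linear system, solved here with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Anchors (e.g. Wi-Fi beacons) at known positions; distances to the
# true point (2, 3) are computed exactly here, while a real IPS would
# derive them from RSSI, ToF, or similar noisy measurements.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (2.0, 3.0)
dists = [math.dist(true_pos, a) for a in anchors]
x, y = trilaterate(anchors, dists)
```

In practice the distance estimates are noisy, so IPS systems combine this with filtering or fingerprinting, but the geometric core is the same.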

Paper 60: A Lightweight ECC-based Three-Factor Mutual Authentication and Key Agreement Protocol for WSNs in IoT

Abstract: The Internet of Things (IoT) represents a giant ecosystem in which many objects are connected, collecting and exchanging large amounts of data at very high speed. One of the main parts of IoT is the Wireless Sensor Network (WSN), deployed in critical applications such as military surveillance and healthcare that require high levels of security and efficiency. Authentication is a primary security factor that ensures the legitimacy of data requests and responses in a WSN. Moreover, sensor nodes are characterized by their limited resources, which raises the need for lightweight authentication schemes applicable in IoT environments. This paper presents an informal security analysis of X. Li et al.’s protocol, which was claimed to be efficient and resistant to various attacks. The analysis shows that the reviewed protocol does not provide user anonymity and is vulnerable to session key disclosure, many-time pad, and insider attacks. To address these requirements, a new three-factor authentication protocol is presented that guarantees higher security using a Physically Unclonable Function (PUF) and Elliptic Curve Cryptography (ECC). The protocol not only removes the security weaknesses of X. Li et al.’s scheme but also provides smart card revocation and resists cloning attacks. In terms of both computational and communication costs, results demonstrate that the proposed scheme is more efficient than related protocols, which makes it notably suitable for IoT environments.

Author 1: Meriam Fariss
Author 2: Hassan El Gafif
Author 3: Ahmed Toumanari

Keywords: Mutual authentication; elliptic-curve cryptography; Physically Unclonable Function; wireless sensor networks; key-agreement; internet of things

PDF
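The ECC building block that such key-agreement protocols rest on can be illustrated without the paper's full protocol. Below is a deliberately toy Elliptic-Curve Diffie-Hellman sketch over the small textbook curve y² = x³ + 2x + 2 mod 17 (generator (5, 1), order 19); real protocols use standardised curves, and the private keys here are arbitrary illustrative values, not anything from the paper.

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17); generator (5, 1) has order 19.
P, A = 17, 2

def ec_add(p1, p2):
    """Add two curve points (None represents the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

G = (5, 1)
alice_priv, bob_priv = 3, 7               # hypothetical secrets
alice_pub = ec_mul(alice_priv, G)
bob_pub = ec_mul(bob_priv, G)
shared_a = ec_mul(alice_priv, bob_pub)    # both sides derive the same point
shared_b = ec_mul(bob_priv, alice_pub)
```

Both parties compute the same shared point because scalar multiplication commutes: a·(b·G) = b·(a·G). The protocol in the paper layers PUF responses and three authentication factors on top of this primitive.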

Paper 61: Threshold Segmentation of Magnetic Column Defect Image based on Artificial Fish Swarm Algorithm

Abstract: Aiming at the low efficiency of magnetic column surface defect detection, its susceptibility to human error, and the insufficient noise resistance of the existing 2D-OTSU threshold segmentation algorithm, an improved artificial fish swarm algorithm combined with the 2D-OTSU algorithm is proposed to improve the accuracy and real-time performance of magnetic column surface defect detection. Firstly, a weight coefficient is added to the original 2D-OTSU algorithm, and a distance function is set to optimize it. The objective function is established by combining the inter-class and intra-class scatter matrices, and the optimal threshold is obtained. Secondly, a logistic model is used to optimize the perception range and moving step size of the artificial fish swarm algorithm, so as to balance the local and global search ability of the algorithm and improve its convergence speed. Finally, the optimal threshold is used to segment the image, and the algorithm is compared with others on four benchmark functions. Experimental results show that the improved algorithm effectively reduces the time complexity of threshold segmentation and improves efficiency. At the same time, the segmentation accuracy of the improved algorithm on magnetic column defects reaches 93%, demonstrating good practicability.

Author 1: Wang Jun
Author 2: Hou Mengjie
Author 3: Zhang Ruiran
Author 4: Xiao Jingjing

Keywords: Defect detecting; threshold segmentation; artificial fish swarm algorithm; improved 2D-OTSU algorithm

PDF
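The baseline the paper improves on is Otsu thresholding. As a point of reference (the paper's 2D variant and fish-swarm search are not reproduced here), a minimal sketch of the classic 1D Otsu algorithm, which picks the grey level that maximises between-class variance:

```python
def otsu_threshold(gray):
    """Classic 1D Otsu: choose the threshold t maximising the
    between-class variance w0*w1*(mu0 - mu1)^2."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(256):
        w0 += hist[t]                 # pixels <= t form class 0
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dark background around 20, bright defect around 200
pixels = [18, 20, 22, 19, 21] * 20 + [198, 200, 202, 199] * 5
t = otsu_threshold(pixels)
```

The 2D-OTSU used in the paper extends this by also histogramming each pixel's neighbourhood mean, which is what makes exhaustive search expensive and motivates the swarm-based optimisation.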

Paper 62: Deep Separable Convolution Network for Prediction of Lung Diseases from X-rays

Abstract: Accurate diagnosis of lung cancer is critical, and image segmentation and deep learning (DL) techniques have made it easier for medical staff. Yet the concept's effectiveness is limited by a scarcity of skilled radiologists, and emerging DL-based methods frequently require labelled feature maps to train the networks, which are difficult to obtain at scale. This study proposes a swarm-intelligence-based modified DL model, called MSCOA-DSCN, to classify and forecast various lung diseases from anterior X-rays. Image enhancement with a modified median filter and edge enhancement with the statistical range are applied for better image quality; the statistical range is the disparity between the minimum and maximum pixels of each 3×3 input image cluster. Enriched Auto-Seed Fuzzy Means Morphological Clustering (EASFMC) is used for segmentation; together these steps identify edges in X-ray imaging. A deep separable convolution network (DSCN) is used in the proposed system to predict the class of lung cancer, and a Modified Butterfly Optimization Algorithm (MBOA) is applied for feature selection. The present study is compared with various state-of-the-art classification algorithms on the NIH Chest-Xray-14 database.

Author 1: Geetha N
Author 2: S. J. Sathish Aaron Joseph S. J

Keywords: Lung diseases; X-rays; deep learning; filtering; edge detection; segmentation and swarm intelligence

PDF
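The statistical-range edge enhancement described in the abstract (max minus min of each 3×3 neighbourhood) is simple enough to sketch directly. This is an illustrative pure-Python version on a toy image, not the authors' implementation:

```python
def range_filter(img):
    """Edge map from the statistical range (max - min) of each
    3x3 neighbourhood; borders are left at 0 for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = max(window) - min(window)
    return out

# Flat regions give 0; the vertical step between columns lights up
img = [
    [10, 10, 10, 90, 90],
    [10, 10, 10, 90, 90],
    [10, 10, 10, 90, 90],
    [10, 10, 10, 90, 90],
]
edges = range_filter(img)
```

A large range means the window straddles an intensity edge, which is why this statistic works as a cheap edge detector before segmentation.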

Paper 63: Face Recognition System Design and Implementation using Neural Networks

Abstract: Face recognition technology is used in biometric security systems to identify a person digitally before granting access to a system or its data. Many kidnapping and abduction cases happen around us; however, kidnap suspects may be set free if there is a lack of evidence or when the victims are unable to testify in court because they suffer from post-traumatic stress disorder (PTSD). The objectives of this study are to develop a device that captures the image of a kidnapper as evidence for future reference and sends the captured image to the victim's family through email, to design a face recognition system for searching for kidnap suspects, and to determine the best training parameters for the convolutional neural network (CNN) layers used by the proposed face recognition system. The accuracy of the proposed system is tested with three different datasets, namely the AT&T database, the face database from [23], and a custom face dataset, yielding 87.50%, 92.19%, and 95.93% respectively. The overall face recognition accuracy of the proposed system is 98.48%. The best training parameters for the proposed CNN model are a kernel size of 5x5, 32 and 64 filters for the first and second convolutional layers, and a learning rate of 0.001.

Author 1: Jamil Abedalrahim Jamil Alsayaydeh
Author 2: Irianto
Author 3: Azwan Aziz
Author 4: Chang Kai Xin
Author 5: A. K. M. Zakir Hossain
Author 6: Safarudin Gazali Herawan

Keywords: Face recognition system; biometric identification; face detection; image processing; convolutional neural networks

PDF

Paper 64: Sparse Feature Aware Noise Removal Technique for Brain Multiple Sclerosis Lesions using Magnetic Resonance Imaging

Abstract: Magnetic Resonance Imaging (MRI) is non-radioactive medical imaging that provides super-resolution images of tissues. However, because of MRI's complex nature, existing deep-learning-based noise removal (i.e., denoising) techniques yield poor reconstruction quality and are time-consuming. An extensive study shows that very limited work has been done on MRI of brain Multiple Sclerosis (MS) lesions. Designing an efficient noise removal technique will improve MRI quality and thereby help achieve better segmentation and classification performance. To reduce computing time and enhance image quality (i.e., reduce noise), this paper presents the Sparse Feature Aware Noise Removal (SFANR) technique for brain MRI using a Convolutional Neural Network (CNN) architecture. A sparse-aware feature is incorporated into the patch-wise morphology learning model for removing noise in a large-scale MRI MS lesion dataset. Experimental results demonstrate that SFANR outperforms all other state-of-the-art noise removal techniques in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Metric (SSIM), with less running time.

Author 1: Swetha M D
Author 2: Aditya C R

Keywords: Convolution neural networks; deep learning; denoising; magnetic resonance imaging; morphology learning; multiple sclerosis; sparse features

PDF
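PSNR, the headline metric above, has a closed form worth spelling out: PSNR = 10·log10(MAX² / MSE). A minimal sketch on flat pixel lists (illustrative only; the paper evaluates on real MRI volumes):

```python
import math

def psnr(original, denoised, max_val=255):
    """Peak Signal-to-Noise Ratio between two equally sized images,
    given here as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, denoised)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10 * math.log10(max_val ** 2 / mse)

clean = [100] * 64
noisy = [110] * 64            # constant error of 10 -> MSE = 100
value = psnr(clean, noisy)    # 10 * log10(255^2 / 100) ~ 28.13 dB
```

Higher PSNR means the denoised image is closer to the reference; SSIM complements it by comparing local structure rather than raw pixel error.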

Paper 65: Sentiment Analysis of Covid-19 Vaccination using Support Vector Machine in Indonesia

Abstract: Along with the development of the Covid-19 pandemic, many responses and news items were shared through social media. The new Covid-19 vaccination promoted by the government raised pros and cons among the public, and public resistance to vaccination can lead to a higher fatality rate. This study carried out sentiment analysis about the Covid-19 vaccine using a Support Vector Machine (SVM). The research aims to study the public response to the vaccination program, and the results can be used to inform the direction of government policy. Data were collected via Twitter in 2021, pre-processed, classified with SVM, and finally evaluated with a confusion matrix. The experimental results show 56.80% positive, 33.75% neutral, and 9.45% negative sentiment. The highest model accuracy, 92%, was obtained with the RBF kernel; the linear and polynomial kernels obtained 90% accuracy, and the sigmoid kernel 89%.

Author 1: Majid Rahardi
Author 2: Afrig Aminuddin
Author 3: Ferian Fauzi Abdulloh
Author 4: Rizky Adhi Nugroho

Keywords: Covid-19; vaccination; support vector machine; twitter

PDF
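The four kernels compared above (RBF, linear, polynomial, sigmoid) differ only in how they score the similarity of two feature vectors. As a rough sketch of the standard kernel formulas (default parameter values here are illustrative, not the study's tuned settings):

```python
import math

def linear(x, z):
    """k(x, z) = <x, z>"""
    return sum(a * b for a, b in zip(x, z))

def polynomial(x, z, degree=3, coef0=1.0):
    """k(x, z) = (<x, z> + c)^d"""
    return (linear(x, z) + coef0) ** degree

def rbf(x, z, gamma=0.5):
    """k(x, z) = exp(-gamma * ||x - z||^2); always 1 when x == z."""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq)

def sigmoid(x, z, gamma=0.1, coef0=0.0):
    """k(x, z) = tanh(gamma * <x, z> + c)"""
    return math.tanh(gamma * linear(x, z) + coef0)

x, z = [1.0, 2.0], [3.0, 0.0]
# The SVM decision function is a weighted sum of kernel values
# between the input and the support vectors.
```

The RBF kernel's locality (it decays with distance) often makes it the strongest default on text-feature vectors, consistent with it scoring highest in the study.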

Paper 66: Application of Optimized SVM in Sample Classification

Abstract: Support vector machines (SVMs) have unique advantages in solving problems with small samples, nonlinearity, and high dimensionality. They have a relatively complete theory and have been widely used in various fields. The classification accuracy and generalization ability of SVMs are determined by the selected parameters, for which there is no solid theoretical guidance. To address this parameter optimization problem, we applied random selection, genetic algorithms (GA), particle swarm optimization (PSO), and K-fold cross validation (K-CV) to optimize the parameters of SVMs. Taking classification accuracy, mean squared error, and squared correlation coefficient as the goals, K-fold cross validation is chosen as the best way to optimize SVM parameters. To further verify the performance of the SVM whose parameters are optimized by K-fold cross validation, a back-propagation neural network and a decision tree are used as contrast models. The experimental results show that cross validation yields the highest classification accuracy in SVM parameter selection, leading to SVM classifiers that outperform both the BP neural network and the decision tree.

Author 1: Xuemei Yao

Keywords: Support vector machine; parameter optimization; K-fold cross validation; sample classification

PDF
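The K-fold procedure at the heart of the parameter search can be sketched independently of any SVM library: split the indices into K folds, train on K-1 of them, validate on the held-out fold, and average. The `evaluate` callback below is a hypothetical stand-in for "train an SVM with candidate parameters and return its validation accuracy":

```python
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for K-fold cross validation,
    spreading the remainder over the first n % k folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def cross_val_score(evaluate, n, k=5):
    """Mean score over the k folds; each (C, gamma) candidate would be
    ranked by this number during parameter selection."""
    scores = [evaluate(tr, te) for tr, te in kfold_indices(n, k)]
    return sum(scores) / len(scores)

# Dummy evaluate() that just returns the fold size, to show the plumbing
demo = cross_val_score(lambda train, test: len(test), n=10, k=5)
```

Grid search then picks the parameter pair with the best mean score, which is the scheme the paper found superior to random, GA, and PSO selection.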

Paper 67: A Distributed Intrusion Detection System using Machine Learning for IoT based on ToN-IoT Dataset

Abstract: The Internet of Things (IoT) is a collection of common physical things that can communicate and synthesize data using network infrastructure by connecting to the internet. IoT networks are increasingly vulnerable to security breaches as their popularity grows, and cyber attacks are among the most severe dangers to IoT security. Many academics are therefore interested in enhancing the security of IoT systems, and machine learning (ML) approaches have been employed in intrusion detection systems (IDSs) to provide better security capabilities. This work proposes a novel distributed detection system based on ML approaches to detect attacks in IoT and mitigate malicious occurrences. Furthermore, the great majority of current studies use the NSL-KDD or KDD-CUP99 datasets, which are not updated with new attacks. Consequently, the ToN-IoT dataset, created from a large-scale, heterogeneous IoT network, was used for training and testing. The ToN-IoT dataset reflects data from each layer of the IoT system: cloud, fog, and edge. Various ML methods were tested on each partition of the ToN-IoT dataset; the proposed model is the first to be based on data collected from all layers of the same IoT system. The Chi2 technique was used to select features in the network dataset, reducing the number of features to 20. For the Windows dataset, a correlation matrix was used to extract the most relevant features. To balance the classes, the SMOTE method was applied. This paper tests numerous ML approaches on both binary and multi-class classification problems. According to the findings, the XGBoost approach is superior to the other ML algorithms for each node in the suggested model.

Author 1: Abdallah R. Gad
Author 2: Mohamed Haggag
Author 3: Ahmed A. Nashat
Author 4: Tamer M. Barakat

Keywords: Intrusion detection system (IDS); internet of things (IoT); ToN-IoT dataset; machine learning (ML)

PDF
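The Chi2 feature selection mentioned above scores each feature by how far its observed co-occurrence with the class label deviates from independence. A minimal sketch for a binary feature against a binary label (toy data; the paper applies this to the ToN-IoT network features):

```python
def chi2_score(feature, labels):
    """Chi-square statistic between a binary feature and a binary label:
    sum over the 2x2 contingency table of (observed - expected)^2 / expected."""
    n = len(feature)
    obs = {(f, c): 0 for f in (0, 1) for c in (0, 1)}
    for f, c in zip(feature, labels):
        obs[(f, c)] += 1
    f_tot = {f: obs[(f, 0)] + obs[(f, 1)] for f in (0, 1)}
    c_tot = {c: obs[(0, c)] + obs[(1, c)] for c in (0, 1)}
    score = 0.0
    for f in (0, 1):
        for c in (0, 1):
            expected = f_tot[f] * c_tot[c] / n
            if expected:
                score += (obs[(f, c)] - expected) ** 2 / expected
    return score

# A perfectly predictive feature scores high; an independent one scores 0
informative = chi2_score([1, 1, 0, 0], [1, 1, 0, 0])
useless = chi2_score([1, 0, 1, 0], [1, 1, 0, 0])
```

Ranking features by this score and keeping the top 20, as the paper does, discards features whose values carry no information about the attack label.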

Paper 68: COVID-19 Detection on X-Ray Images using a Combining Mechanism of Pre-trained CNNs

Abstract: The COVID-19 infection is caused by the severe acute respiratory syndrome coronavirus SARS-CoV-2, as noted by the World Health Organization; it originated in Wuhan, China, and spread to every nation worldwide in 2020. This research aims to establish an efficient Medical Diagnosis Support System (MDSS) for recognizing COVID-19 in chest X-ray radiographs. To build a more efficient classifier, this MDSS employs a concatenation mechanism to merge pre-trained convolutional neural networks (CNNs) based on Transfer Learning (TL). In the feature extraction phase, the proposed classifier employs a parallel deep feature extraction approach based on Deep Learning (DL). This approach increases the accuracy of the proposed model, identifying COVID-19 cases with higher accuracy. The proposed concatenation classifier was trained and validated using a chest radiography image database with four categories: COVID-19, Normal, Pneumonia, and Tuberculosis. Four separate public X-ray imaging datasets were integrated to construct this database. The concatenation classifier achieved accuracy and sensitivity of 99.66% and 99.48%, respectively.

Author 1: Oussama El Gannour
Author 2: Soufiane Hamida
Author 3: Shawki Saleh
Author 4: Yasser Lamalem
Author 5: Bouchaib Cherradi
Author 6: Abdelhadi Raihani

Keywords: COVID-19; deep learning; transfer learning; feature extraction; concatenation technique

PDF

Paper 69: Sentiment Analysis of Tweets using Unsupervised Learning Techniques and the K-Means Algorithm

Abstract: Today, web content such as images, text, speech, and video is user-generated, and social networks have become increasingly popular as a means for people to share their ideas and opinions. Twitter is one of the most popular social media platforms for expressing feelings about current events. The main objective of this study is to classify and analyze the Twitter content published about the affiliates of the Pension and Funds Administration (AFP). The study incorporates machine learning techniques for data mining, cleaning, tokenization, exploratory analysis, classification, and sentiment analysis. Tweets with the hashtag #afp published in May 2022 were collected, followed by descriptive and exploratory analysis, including tweet metrics. Finally, a content analysis was carried out, including word frequency calculation, lemmatization, classification of words by sentiment and emotion, and a word cloud. Sentiment was distributed over three polarity classes, positive, neutral, and negative, representing 22%, 4%, and 74% respectively. Supported by unsupervised learning and the K-Means algorithm, the number of clusters was determined using the elbow method. The sentiment analysis and the resulting clusters indicate a very pronounced dispersion with dissimilar distances, even though the data were standardized.

Author 1: Orlando Iparraguirre-Villanueva
Author 2: Victor Guevara-Ponce
Author 3: Fernando Sierra-Linan
Author 4: Saul Beltozar-Clemente
Author 5: Michael Cabanillas-Carbonell

Keywords: Techniques; machine learning; classification; twitter

PDF
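The elbow method used above compares the within-cluster sum of squares (WCSS) across candidate values of k and picks the point where adding clusters stops paying off. A rough sketch of the WCSS computation on toy 2-D points (the centroids here are the groups' approximate means, supplied by hand rather than by a full k-means run):

```python
import math

def wcss(points, centroids):
    """Within-cluster sum of squares: each point is charged the squared
    distance to its nearest centroid."""
    return sum(min(math.dist(p, c) ** 2 for c in centroids) for p in points)

# Two tight groups standing in for tweet embeddings
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
curve = [
    wcss(points, [(5.33, 5.33)]),                   # k = 1 (global mean)
    wcss(points, [(0.33, 0.33), (10.33, 10.33)]),   # k = 2 (group means)
]
# The sharp drop from k=1 to k=2 is the "elbow" that selects k=2
```

Plotting `curve` against k and looking for the bend is exactly the elbow heuristic; after the bend, extra clusters only shave off small amounts of WCSS.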

Paper 70: Registration Methods for Thermal Images of Diabetic Foot Monitoring: A Comparative Study

Abstract: This paper presents a comparative study of image registration techniques for Diabetic Foot (DF) thermal images. Four registration methods (an intensity-based algorithm, Iterative Closest Point (ICP), a subpixel registration algorithm mainly based on the Fast Fourier Transform (FFT), and the pyramid approach for subpixel registration) have been implemented and analyzed. The performance of the four algorithms was evaluated using several overlap and symmetry metrics, such as the Dice similarity coefficient (DSC), Root Mean Square Error (RMSE), and Peak Signal-to-Noise Ratio (PSNR). The methods were analyzed, first, on images of the contralateral feet (right and left) of the same subject, called here "contralateral registration", and second, on a pair of images of the same subject acquired at two different times, T0 and T10, after applying a cold stress test, called "multi-temporal registration". Results showed that the intensity-based approach and the pyramid approach for subpixel registration give the best results in both types of registration (contralateral/multi-temporal) and can be used efficiently for these types of images even under changing conditions.

Author 1: Doha Bouallal
Author 2: Hassan Douzi
Author 3: Rachid Harba

Keywords: Medical imaging; diabetic foot; thermography; registration; mobile health

PDF
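The Dice similarity coefficient used to score the registrations has a compact definition: twice the overlap of two binary masks divided by the sum of their sizes. A minimal sketch on toy foot masks (illustrative only):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks
    (flat lists of 0/1): 2*|A ∩ B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2 * inter / (sum(a) + sum(b))

# Hypothetical foot-region masks before (T0) and after (T10) registration
mask_t0 = [1, 1, 1, 0, 0, 0]
mask_t10 = [0, 1, 1, 1, 0, 0]
overlap = dice(mask_t0, mask_t10)   # 2*2 / (3+3) = 2/3
```

A DSC of 1 means the registered masks coincide exactly; values near 1 after registration are what mark the intensity-based and pyramid approaches as the best performers in the study.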

Paper 71: Op-RMSprop (Optimized-Root Mean Square Propagation) Classification for Prediction of Polycystic Ovary Syndrome (PCOS) using Hybrid Machine Learning Technique

Abstract: Polycystic Ovary Syndrome (PCOS) is a common women's health problem caused by an imbalance in the reproductive hormones, which causes problems in the ovaries. An appropriate machine learning (ML) algorithm can be applied to analyze the data and validate the performance of the algorithm in terms of accuracy. In this paper, a unique hybrid and optimized methodology is proposed that combines an SVM linear kernel with Logistic Regression in a novel way. The output of this model is passed to the RMSprop optimizer, which trains the model iteratively to improve the output. For this research, 1600 records were collected from a leading hospital in the Bangalore Urban region. The optimized hybrid method was tested on the PCOS data and exhibited 89.03% accuracy. The results show that the optimized hybrid model works efficiently compared to existing ML algorithms such as SVM, Logistic Regression, decision tree, KNN, random forest, and AdaBoost. The optimized hybrid SVLR model also showed good results in terms of the F-measure, precision, and recall statistical criteria. Overall, this paper summarizes the working of the proposed optimized SVLR hybrid model and the prediction of PCOS.

Author 1: Rakshitha Kiran P
Author 2: Naveen N. C

Keywords: SVM; decision tree; logistic regression; RMSprop; frameworks

PDF
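The RMSprop update rule the hybrid model relies on can be stated in a few lines: keep an exponential moving average of squared gradients and divide each step by its square root. A sketch on a toy one-dimensional loss (hyperparameters and the loss are illustrative, not the paper's settings):

```python
import math

def rmsprop_minimize(grad, x0, lr=0.01, rho=0.9, eps=1e-8, steps=2000):
    """RMSprop: scale each gradient step by a running RMS of
    recent gradients, so steep and flat directions get comparable steps."""
    x, v = x0, 0.0
    for _ in range(steps):
        g = grad(x)
        v = rho * v + (1 - rho) * g * g        # moving average of g^2
        x -= lr * g / (math.sqrt(v) + eps)     # adaptively scaled update
    return x

# Minimise the toy loss (x - 3)^2, whose gradient is 2*(x - 3)
x_min = rmsprop_minimize(lambda x: 2 * (x - 3), x0=0.0)
```

Because the step size adapts to gradient magnitude, RMSprop converges steadily even when raw gradients vary widely, which is why it is a common choice for iterative refinement of classifier weights.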

Paper 72: Influence of Management Automation on Managerial Decision-making in the Agro-Industrial Complex

Abstract: The preservation and rational use of the harvest, and obtaining the maximum product output from raw materials, is today one of the most important state tasks. Automation of production processes is the main direction in which production is currently advancing around the world. Functions previously performed by people, not only physical but also intellectual, are gradually being transferred to automation systems that execute technological cycles and exercise control over them. The purpose of the article is to analyze the effect of automation on grain storage in elevators. The main research question is which factors should be considered when introducing an automation system into the grain storage process at elevators to improve the efficiency of process control at enterprises. To answer this question, a qualitative study was conducted using an expert survey. The article reveals the factors that affect grain quality; the tasks implemented in the computerized process control system (CPCS) and the management information and control system (MICS); the factors that hinder grain elevator automation; and the tasks solved by the automation of grain elevators within autonomous subsystems and integrated automatic control systems (ACS). It is concluded that implementing automation in the grain storage process in elevators leads to increased grain quality and productivity, reduction or elimination of losses caused by theft and the peculiarities of grain storage, savings in energy resources, and minimization of the human factor and the risk of accidents. Moreover, the inclusion of non-standard tools in the MICS and CPCS makes it easier to solve several current automation problems. Creating standard problem-oriented complexes for responsible decision-makers based on an integrated ACS, including certified object-oriented non-standard tools, is the most rational way to further improve the efficiency of the industry's automated control systems.

Author 1: Sergey Dokholyan
Author 2: Evgeniya Olegovna Ermolaeva
Author 3: Alexander Sergeyevich Verkhovod
Author 4: Elena Vladimirovna Dupliy
Author 5: Anna Evgenievna Gorokhova
Author 6: Vyacheslav Aleksandrovich Ivanov
Author 7: Vladimir Dmitrievich Sekerin

Keywords: Grain elevator; automation; grain quality; grain storage; grain drying; grain losses

PDF

Paper 73: Enhancing the Security of Data Stored in the Cloud using customized Data Visualization Patterns

Abstract: Cloud computing is being popularized by the latest technologies such as big data, artificial intelligence, and data science. A major challenge faced by researchers is finding efficient ways of accessing data and acquiring the required results, since system efficiency enables further progress in cloud computing. Alongside storing data optimally, another major challenge is security: how best to protect stored data from data theft and illegal attacks. The research proposed in this paper concentrates on customized data visualization patterns developed to store data and enhance its security. These visualization patterns are dynamic and can be extended based on the level of security required by the application. The proposed approach helps protect real-time data stored in the cloud from unauthorized access and attacks such as malware. The patterns are selected based on two factors: the number of fragments to be stored for a particular cluster or region, and the number of nodes available in the pattern. This research adds strength to cloud computing platforms such as AWS and Google Cloud, giving customers confidence that their data is in safe hands. In today's data-driven world, such a system is essential as it enhances the security of the data.

Author 1: Archana M
Author 2: Gururaj Murtugudde

Keywords: Artificial intelligence; big data; cloud computing; data science; data visualization

PDF

Paper 74: Incremental Learning based Optimized Sentiment Classification using Hybrid Two-Stage LSTM-SVM Classifier

Abstract: Sentiment analysis is a subtopic of Natural Language Processing (NLP) that involves extracting emotions from unprocessed text. It is commonly applied to customer review posts to automatically determine whether customer sentiment is negative or positive. However, the quality of the analysis depends heavily on the quantity of raw data, and conventional classifier-based sentiment prediction cannot handle large datasets. Hence, a deep learning approach is used for efficient and effective sentiment prediction. The proposed system consists of three main phases: 1) data collection and pre-processing; 2) feature extraction using a count vectorizer and dimensionality reduction; 3) a hybrid LSTM-SVM classifier with incremental learning. Raw product-review data is first gathered from e-commerce sites and pre-processed with tokenization, stop-word removal, and lemmatization for each review text. After pre-processing, features such as keywords, length, and word count are extracted and passed to the feature extraction stage. A hybrid classifier using a two-stage LSTM and SVM is then developed to train the sentiment classes, passing new features and classes for incremental learning. The proposed system is developed in Python and compared with state-of-the-art classification techniques on performance metrics such as accuracy, precision, recall, sensitivity, and specificity. The proposed model achieved an accuracy of 92%, better than the existing state-of-the-art techniques.

Author 1: Alka Londhe
Author 2: P. V. R. D. Prasada Rao

Keywords: Sentiment analysis; natural language processing; incremental learning; long short-term memory; support vector machine; hybrid; dimensionality reduction; principal component analysis

PDF
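The count vectorizer in phase 2 turns each review into a bag-of-words vector: build a vocabulary over all documents, then count term occurrences per document. A minimal sketch with whitespace tokenization (real pipelines would first apply the stop-word removal and lemmatization described above):

```python
def count_vectorize(docs):
    """Minimal bag-of-words: sorted vocabulary over all documents,
    then one count vector per document."""
    vocab = sorted({tok for d in docs for tok in d.lower().split()})
    index = {tok: i for i, tok in enumerate(vocab)}
    matrix = []
    for d in docs:
        row = [0] * len(vocab)
        for tok in d.lower().split():
            row[index[tok]] += 1
        matrix.append(row)
    return vocab, matrix

reviews = ["great product great price", "bad product"]
vocab, X = count_vectorize(reviews)
```

The resulting matrix `X` is what dimensionality reduction (e.g. PCA, per the keywords) and the LSTM-SVM classifier would consume downstream.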

Paper 75: Data Recovery Approach with Optimized Cauchy Coding in Distributed Storage System

Abstract: In the professional world, the impact of big data is changing things rapidly. Data is currently generated by a wide range of sensors in smart devices, which necessitates fault-tolerant data storage and retrieval. Data loss can be caused by natural calamities, human error, or mechanical failure, and several security threats and data degradation attacks attempt to destroy storage disks, causing partial or complete data loss. Data encoding and data recovery mechanisms are proposed in this research. The suggested system uses an efficient Optimized Cauchy Coding (OCC) approach that produces a set of matrices using matrix heuristics. The Cauchy matrix is used as the generator matrix in a Reed-Solomon (RS) code to encode data blocks with fewer XOR operations, reducing the time complexity of the encoding algorithm. Furthermore, in the event of a disk failure, missing data from any data block is made available through the codeword. In terms of data recovery, the approach outperforms the Optimal Weakly Secure Minimum Storage Regenerating (OWSPM-MSR) and Product-Matrix Minimum Storage Regenerating (PM-MSR) methods. For data coding, a 1024 KB file with various combinations of data blocks l and parity blocks m is evaluated: in the first scenario m is 1 and l ranges from 4 to 10; in the second, l is 4 and m ranges from 1 to 10. The existing OWSPM-MSR approach takes an average of 0.125 seconds to encode and 0.22 seconds to decode, whereas PM-MSR takes an average of 0.045 seconds to encode and 0.16 seconds to decode. The proposed OCC approach speeds up data coding, taking an average of 0.035 seconds to encode and 0.116 seconds to decode.

Author 1: Snehalata Funde
Author 2: Gandharba Swain

Keywords: Optimized cauchy coding; fault tolerant; data availability; reed solomon code

PDF
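The Cauchy matrix underlying the coding scheme has a simple definition: entry (i, j) is the field inverse of x_i - y_j for two disjoint sets of field elements, and every square submatrix of such a matrix is invertible, which is what allows recovery of any lost blocks. A hedged sketch over the prime field GF(257) for readability (practical coders, including the paper's, work over GF(2^8) with XOR arithmetic; the x/y values are arbitrary illustrative choices):

```python
P = 257  # small prime field for illustration; real systems use GF(2^8)

def cauchy_matrix(xs, ys):
    """Cauchy matrix over GF(P): entry (i, j) = 1 / (x_i - y_j) mod P.
    xs and ys must be pairwise distinct so every difference is invertible."""
    return [[pow((x - y) % P, -1, P) for y in ys] for x in xs]

# 3 parity rows over 4 data symbols: every square submatrix of a Cauchy
# matrix is invertible, so any 3 lost blocks can be solved for
G = cauchy_matrix([5, 6, 7], [1, 2, 3, 4])
```

Using this as the Reed-Solomon generator matrix, parity blocks are the matrix-vector product of G with the data symbols, and decoding after a disk failure inverts the surviving square submatrix.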

Paper 76: Impact of COVID-19 Pandemic Measures and Restrictions on Cellular Network Traffic in Malaysia

Abstract: Due to the COVID-19 pandemic, intensive controls were put in place to prevent it from spreading. People's habits were altered by the imposed COVID-19 measures and restrictions, such as social distancing and lockdowns. These unexpected changes had a significant impact on cellular networks: increased use of online services and content streaming raised the burden on wireless networks. This case study examines cellular network performance, including upload speed, download speed, and latency, during two periods (MCO and CMCO) in three regions, Kuala Lumpur, Selangor (Cheras), and Johor Bahru, to observe the effects of lockdown enforcement and other restrictions in Malaysia on cellular network traffic. The Speedtest™ phone application was used for data collection at different times of the day, considering peak times in the morning, evening, and night. The findings show how significantly COVID-19 affected internet traffic in Malaysia. This research can help developers and companies better understand and prepare for high load on cellular networks in future pandemics such as COVID-19.

Author 1: Sallar Salam Murad
Author 2: Salman Yussof
Author 3: Rozin Badeel
Author 4: Reham A. Ahmed

Keywords: Cellular networks; COVID-19; network performance; pandemic

PDF

Paper 77: A Deep Learning Classification Approach using Feature Fusion Model for Heart Disease Diagnosis

Abstract: Early diagnosis plays a critical role in medical data processing and automated systems. In medical diagnosis, automation is applied in different areas, among which heart disease diagnosis is a prominent domain. Early detection of heart disease can save many lives and reduce criticality issues in diagnosing patients. In heart disease diagnosis, spatial and frequency domain features are used by the automated system in making decisions. The processed features are observed to be time variant or invariant in nature, and the criticality of the observed feature varies with the diagnostic need. While current automated systems extract a large number of features to attain higher accuracy, the processing overhead and delay are considerable. Different regression approaches were developed in the recent past to minimize the feature-processing overhead, in which features are optimized based on gain performance or distance factors; however, the characteristic variation of features and the significance of the feature vector are not addressed. This paper outlines a method of feature selection for heart disease diagnosis based on a weighted feature vector that considers feature significance and probability of estimate. A new optimizing function for feature selection is proposed as a dual function of probability factor and feature weight value. Simulation results illustrate the improvement in accuracy and speed of computation of the proposed method compared to other existing methods.

Author 1: Bhandare Trupti Vasantrao
Author 2: Selvarani Rangasamy
Author 3: Chetan J. Shelke

Keywords: Deep learning approach; heart disease diagnosis; feature fusion model; ECG analysis; weighted clustering; F-Score

PDF

Paper 78: An Effective Demand based Optimal Route Generation in Transport System using DFCM and ABSO Approaches

Abstract: Transportation network service quality generally depends on providing demand-based routing. Different existing approaches focus on enhancing the service quality of transportation, but they fail to satisfy demand. This work presents effective demand-based objectives for optimal route generation in a public transport system; its importance lies in providing demand-based optimal routing for large-city transportation. The proposed demand-based optimal route generation process proceeds in stages. Initially, the passengers in each route are clustered using a distance-based adaptive fuzzy C-means clustering approach (DFCM) to collect the passenger count at each stop; the number of cluster members in each cluster is equivalent to the passenger count of the stop. After the clustering process, routing is performed on the clustered data with an adaptive-objectives-based beetle swarm optimization (ABSO) approach. Re-routing is then performed using ABSO based on demand-based objectives such as passenger count, passenger comfort level, route distance, and average travel time, yielding routes that are optimal with respect to these objectives. The presented methodology is implemented in the MATLAB working platform, using Surat city transport historical data for analysis. The experimental results are compared with different existing approaches in terms of root mean square error (9.5%), mean error (0.254%), mean absolute error (0.3007%), correlation coefficient (0.8993), vehicle occupancy (85%), and accuracy (99.57%).
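As a rough illustration of the clustering stage, the sketch below runs classic fuzzy C-means on hypothetical 1-D passenger positions; the paper's DFCM is a distance-based adaptive variant whose exact modifications are not given in the abstract, so only the standard membership and center updates are shown.

```python
def fcm(points, centers, m=2.0, iters=50):
    """Classic fuzzy C-means on 1-D data from given initial centers
    (the paper's DFCM is a distance-adaptive variant of this)."""
    k = len(centers)
    for _ in range(iters):
        # membership u[i][j] of point i in cluster j
        u = []
        for p in points:
            d = [abs(p - c) or 1e-9 for c in centers]
            u.append([1.0 / sum((d[j] / d[l]) ** (2 / (m - 1)) for l in range(k))
                      for j in range(k)])
        # center update: membership-weighted mean of the points
        centers = [sum(u[i][j] ** m * points[i] for i in range(len(points))) /
                   sum(u[i][j] ** m for i in range(len(points)))
                   for j in range(k)]
    return centers, u

# Hypothetical 1-D passenger positions around two stops.
positions = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
centers, u = fcm(positions, [0.0, 10.0])
print([round(c, 2) for c in centers])
```

The fuzzy memberships, rather than hard assignments, are what let the per-stop passenger counts feed smoothly into the subsequent ABSO routing objectives.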

Author 1: Archana M. Nayak
Author 2: Nirbhay Chaubey

Keywords: Clustering; optimization; demand based objectives; comfort level; optimal routing

PDF

Paper 79: Implementation of Electronic Health Record and Health Insurance Management System using Blockchain Technology

Abstract: Electronic health records (EHR) play an important role in the digital health transition. EHRs contain medical information such as demographics, laboratory test results, radiological images, vaccination status, insurance policy, and claims. The EHR is essential for doctors and healthcare organizations to analyze a patient's profile and provide appropriate therapy. Despite this, current EHR systems are hampered by difficulties such as interoperability and security. Better and faster care could be provided with an integrated and secure health record for each patient that can be transmitted easily in real time across countries. People with health insurance policies are often confronted by insurance jargon and the insurer's cumbersome requirements while filing a claim for treatment, and claims processing sometimes takes longer than expected. The insurer, Third-Party Administrators (TPAs), and network provider hospitals examine, approve, and initiate the sum claimed. The use of blockchain in the process allows for more efficient information sharing at a lower cost and with more security. Only authorized individuals have access to the shared ledger on a blockchain, making it more confidential and secure. All parties engaged in a health insurance policy, including the insurer, the insured, the TPA, and the network provider hospital, may be members of the blockchain network and have access to the same set of policy data. In our proposed work, we implemented a blockchain-based EHR and health insurance management system on Ethereum, deployed smart contracts written in Solidity, and created a web application with Web3.js and the React framework.

Author 1: Lincy Golda Careline S
Author 2: T. Godhavari

Keywords: Electronic health records; insurance policy and claim processing; smart contracts; Ethereum; homomorphic encryption; edge computing

PDF

Paper 80: Predicting Blocking Bugs with Machine Learning Techniques: A Systematic Review

Abstract: The application of machine learning (ML) techniques to predict blocking bugs has emerged for the early detection of Blocking Bugs (BBs) in software components, mitigating the adverse effect of BBs on software release and project cost. This study presents a systematic literature review of the trends in the application of ML techniques in BB prediction, existing research gaps, and possible research directions to serve as a reference for future research and an application insight for software engineers. We constructed search phrases from relevant terms and used them to extract peer-reviewed studies from the databases of five famous academic publishers, namely Scopus, SpringerLink, IEEE Xplore, ACM Digital Library, and ScienceDirect. We included primary studies published between January 2012 and February 2022 that applied ML techniques to building Blocking Bug Prediction Models (BBPMs). Our result reveals a paucity of literature on BBPMs. Also, previous researchers employed ML techniques such as Decision Trees, Random Forest, Bayes Network, XGBoost, and DNN in building existing BB prediction models. However, the publicly available datasets for building BBPMs are significantly imbalanced. Despite the poor performance of the Accuracy metric where imbalanced datasets are concerned, some primary studies still utilized the Accuracy metric to assess the performance of their proposed BBPM. Further research is required to validate existing and new BBPMs on datasets of commercial software projects. Also, future researchers should mitigate the effect of class imbalance before training a BBPM.

Author 1: Selasie Aformaley Brown
Author 2: Benjamin Asubam Weyori
Author 3: Adebayo Felix Adekoya
Author 4: Patrick Kwaku Kudjo
Author 5: Solomon Mensah

Keywords: Blocking bugs; systematic review; software maintenance; bug report; reliability; machine learning

PDF

Paper 81: Trace Learners Clustering to Improve Learning Object Recommendation in Online Education Platforms

Abstract: E-learning platforms propose pedagogical pathways where learners are invited to mobilize their autonomy to achieve the learning objectives. However, some learners face a set of cognitive barriers that require additional learning objects to progress in the course. A mediating recommendation system is an efficient way to reinforce the resilience of online platforms, suggesting learning objects that will interest learners according to their needs. The objective of this contribution is to design a new mediator recommendation model for e-learning platforms that suggests learning objects to the learner based on collaborative filtering. To this end, the proposed system relies on an implicit behavior estimation function as an underlying technique to convert tacit traces into explicit preferences, making it possible to compute the similarity between learners.

Author 1: Zriaa Rajae
Author 2: Amali Said
Author 3: El Faddouli Nour-eddine

Keywords: e-learning; recommendation system; learning objects; tacit behaviors

PDF

Paper 82: DDoS Intrusion Detection Model for IoT Networks using Backpropagation Neural Network

Abstract: In today's digital landscape, Internet of Things (IoT) networking has grown dramatically. The major feature of IoT network devices is their ability to connect to the internet and interact with it through data collection and exchange. Distributed Denial of Service (DDoS) is a form of cyber-attack in which hackers penetrate a single connection and then operate multiple machines together to attack one target. The direct connectivity of IoT devices to the internet makes DDoS attacks worse and more dangerous, and the more businesses adopt IoT networks to streamline operations, the more room there is for DDoS intrusions at small and large scales. Therefore, an intrusion detection module in IoT networks is not optional in today's business environment. To achieve this objective, this paper proposes an intelligent intrusion detection model to detect DDoS attacks in IoT networks. The intelligent model is a backpropagation neural network-based framework. The results are analyzed using different performance measures. The proposed model achieves a detection rate of 99.46% and detection accuracy of 95.76% on the up-to-date benchmark CICDDoS2019 dataset. Furthermore, the proposed model has been compared with the most recent DDoS intrusion detection schemes and achieves competitive performance.
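A backpropagation network of the kind the model is built on updates its weights by propagating the output error backward through each layer. The following minimal sketch uses a hypothetical 2-2-1 sigmoid network with squared-error loss (not the paper's actual architecture or features) to show one such update, and that repeated updates shrink the error on a training sample.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, W2):
    # hidden activations, then the single output unit
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)))
    return h, y

def backprop_step(x, t, W1, W2, lr=0.5):
    """One backpropagation update for a 2-2-1 sigmoid network with
    squared-error loss E = 0.5 * (y - t)^2 (a didactic sketch)."""
    h, y = forward(x, W1, W2)
    # output delta: dE/dnet_out
    d_out = (y - t) * y * (1 - y)
    # hidden deltas propagated back through the (old) output weights
    d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(len(h))]
    W2 = [W2[j] - lr * d_out * h[j] for j in range(len(h))]
    W1 = [[W1[j][i] - lr * d_hid[j] * x[i] for i in range(len(x))]
          for j in range(len(W1))]
    return W1, W2

random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(2)]
x, t = [1.0, 0.0], 1.0          # one hypothetical labeled sample
_, y0 = forward(x, W1, W2)
for _ in range(100):
    W1, W2 = backprop_step(x, t, W1, W2)
_, y1 = forward(x, W1, W2)
assert abs(y1 - t) < abs(y0 - t)  # the prediction error shrinks
```

The paper's framework applies the same gradient rule, at scale, to traffic features extracted from the CICDDoS2019 flows.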

Author 1: Jasem Almotiri

Keywords: DDoS; backpropagation neural network; IoT network; intrusion detection; CICDDoS2019

PDF

Paper 83: Analysis and Evaluation of Two Feature Selection Algorithms in Improving the Performance of the Sentiment Analysis Model of Arabic Tweets

Abstract: Recently, sentiment analysis of Twitter data has become one of the most interesting research disciplines; it combines data mining technologies with natural language processing techniques. A sentiment analysis system aims to evaluate texts posted on social platforms to determine whether they express positive, negative, or neutral feelings regarding a certain domain. The high dimensionality of the feature vector is one of the most common problems in Arabic sentiment analysis. The main contribution of this paper is to address the dimensionality problem by presenting a comparative study between two feature selection algorithms, namely Information Gain (IG) and Chi-Square, to choose the one that better improves classification accuracy. In this paper, the Arabic Jordanian sentiment analysis model is built in four steps. First, a preprocessing step is applied to the database, including removal of non-Arabic symbols, tokenization, Arabic stop-word removal, and stemming. In the second step, the TF-IDF algorithm is used as a feature extraction method to represent the text as feature vectors. Then, IG and Chi-Square are utilized as feature selection steps to obtain the best subset of features and decrease the total number of features. Finally, different algorithms (SVM, DT, and KNN) are used in the classification step to classify the views people have shared on Twitter into two classes (positive and negative). Several experiments were performed on Jordanian dialect tweets using the AJGT database.
The experimental results show the following: 1) The Information Gain algorithm outperformed the Chi-Square algorithm in the feature selection step, as it was able to reduce the number of features from 1170 to 713 and increase the accuracy of the classifiers by 10%; 2) The SVM classifier shows the best classification performance among all the classifiers tested, giving the highest accuracy of 85% with the IG algorithm.
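Both feature selection scores compared in the paper can be computed from a term's 2x2 term/class contingency table. The sketch below shows the standard formulas on hypothetical counts; the actual study computes them over TF-IDF features of the AJGT tweets.

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_gain(n11, n10, n01, n00):
    """IG of a term for a binary class from a 2x2 contingency table:
    n11 = docs with term & positive, n10 = term & negative, etc."""
    n = n11 + n10 + n01 + n00
    h_class = entropy([(n11 + n01) / n, (n10 + n00) / n])
    p_t = (n11 + n10) / n
    h_given_t = entropy([n11 / (n11 + n10), n10 / (n11 + n10)])
    h_given_not = entropy([n01 / (n01 + n00), n00 / (n01 + n00)])
    return h_class - p_t * h_given_t - (1 - p_t) * h_given_not

def chi_square(n11, n10, n01, n00):
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return num / den

# Toy counts: a term that appears mostly in positive tweets.
print(round(info_gain(40, 10, 10, 40), 3))   # bits of class information
print(round(chi_square(40, 10, 10, 40), 3))  # term/class dependence
```

Ranking all terms by either score and keeping the top-k is what shrinks the feature vector (1170 to 713 features in the paper's IG case).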

Author 1: Maria Yousef
Author 2: Abdulla ALali

Keywords: Sentiment analysis; Information Gain (IG); Chi-Square; AJGT database

PDF

Paper 84: Domain Human Recognition Techniques using Deep Learning

Abstract: As a key research subject in the fields of health and human-machine interaction, human activity recognition (HAR) has been a major research focus over the past few decades. Many artificial intelligence-based models have been created for activity recognition. However, these algorithms fail to extract spatial and temporal properties, resulting in poor performance on real-world long-term HAR. A drawback in the literature is that only a small number of publicly available datasets for physical activity recognition exist, and they contain a small number of activities. In this paper, a hybrid model for activity recognition that incorporates convolutional neural networks (CNN) and long short-term memory (LSTM) networks is developed. The CNN network is used for extracting spatial characteristics, while the LSTM network is used for learning time-related information. Using a variety of traditional and deep machine learning models, an extensive ablation investigation is carried out in order to find the best possible HAR solution. The CNN approach achieves a precision of 90.89%, indicating that the model is suitable for HAR applications.

Author 1: Seshaiah Merikapudi
Author 2: Murthy SVN
Author 3: Manjunatha. S
Author 4: R. V. Gandhi

Keywords: Human recognition; deep learning; hybrid model; CNN; HAR

PDF

Paper 85: Tourist Reviews Sentiment Classification using Deep Learning Techniques: A Case Study in Saudi Arabia

Abstract: Nowadays, social media sites and travel blogs have become among the most vital sources of expression. Tourists express everything related to their experiences, reviews, and opinions about the places they visited. The sentiment classification of tourist reviews on social media sites plays an increasingly important role in tourism growth and development. Accordingly, these reviews are valuable for both new tourists and officials, who can understand tourists' needs and improve services based on their assessments. The tourism industry anywhere also relies heavily on the opinions of former tourists. However, most tourists write their reviews in their local dialect, making sentiment classification more difficult because there are no specific rules controlling the writing system, and there is a gap between Modern Standard Arabic (MSA) and local dialects. One of the most prominent issues in sentiment analysis is that local dialect lexicons have not seen significant development; although a few lexicons are publicly available, they are sparse and small. Thus, this paper aims to build a model capable of accurate sentiment classification of Saudi-dialect Arabic reviews of tourist places using deep learning techniques. Machine learning techniques help classify these reviews into positive, negative, and neutral. In this paper, three algorithms were used: Support Vector Machine (SVM), Long Short-Term Memory (LSTM), and Recurrent Neural Network (RNN). These algorithms are evaluated on a Google Maps dataset of tourist places in Saudi Arabia, with performance measured by accuracy, precision, recall, and F-score. The results show that the SVM algorithm outperforms the deep learning techniques: SVM achieved 98%, while LSTM and RNN both achieved 96%.

Author 1: Banan A. Alharbi
Author 2: Mohammad A. Mezher
Author 3: Abdullah M. Barakeh

Keywords: Sentiment classification; Saudi dialect; support vector machine; recurrent neural network; long short-term memory

PDF

Paper 86: An Outlier Detection and Feature Ranking based Ensemble Learning for ECG Analysis

Abstract: Automated classification of each heartbeat class from the ECG signal is important to diagnose cardiovascular diseases (CVDs) more quickly. ECG data acquired from real-time or clinical databases contain exceptional or extreme values called outliers. The separation and removal of outliers is useful for improving data quality, since the presence of outliers influences the results of machine learning (ML) methods such as classification and regression. Outlier identification and removal plays a significant role in this area of research and is a part of signal denoising. Also, most traditional ECG-signal processing methods face difficulty in finding the essential key features of the recorded signal. In this work, an extreme outlier detection technique known as the improved inter-quartile range (IIQR) filtering method is used to find the outliers of the signal for the feature ranking process. In addition, an optimized random forest (ORF) based heterogeneous ensemble classification model is proposed to improve the true positive rate and runtime on the ECG data. Each heartbeat type is classified with a majority voting technique; ensemble learning and the majority voting rule are used to enhance the accuracy of heart disease prediction. The proposed feature ranking based ORF ensemble classification model (LR + SVM + ORF + XGBoost + KNN) is evaluated on the MIT-BIH arrhythmia database and produces an overall accuracy of 99.45%, which significantly outperforms state-of-the-art methods such as (LR + SVM + RF + XGBoost + KNN) with 96.17% accuracy, ensemble deep learning with 95.81%, and ensemble SVM with 94.47%.
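As a rough sketch of two building blocks named in the abstract, the code below applies a standard IQR outlier filter (the paper's IIQR is an improved variant whose details are not given here) and a majority vote over hypothetical ensemble predictions.

```python
def quartiles(data):
    """Median-split quartiles (one common convention)."""
    s = sorted(data)
    n = len(s)
    mid = n // 2
    def median(v):
        m = len(v) // 2
        return v[m] if len(v) % 2 else (v[m - 1] + v[m]) / 2
    return median(s[:mid]), median(s[mid + (n % 2):])

def iqr_filter(data, k=1.5):
    """Keep samples within [Q1 - k*IQR, Q3 + k*IQR]; drop extreme outliers."""
    q1, q3 = quartiles(data)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if lo <= x <= hi]

def majority_vote(predictions):
    """Final heartbeat label = most frequent label among the ensemble."""
    return max(set(predictions), key=predictions.count)

signal = [0.8, 0.9, 1.0, 1.1, 0.85, 9.7, 1.05, -6.2]  # toy ECG amplitudes
print(iqr_filter(signal))                   # spikes 9.7 and -6.2 removed
print(majority_vote(["N", "V", "N", "N", "A"]))
```

In the proposed pipeline the filtered signal feeds the feature ranking step, and the vote is taken across the five heterogeneous base classifiers.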

Author 1: Venkata Anuhya Ardeti
Author 2: Venkata Ratnam Kolluru
Author 3: George Tom Varghese
Author 4: Rajesh Kumar Patjoshi

Keywords: Feature ranking; improved inter quartile range; majority voting; outlier detection; optimized random forest

PDF

Paper 87: Improved Data Segmentation Architecture for Early Size Estimation using Machine Learning

Abstract: Software size estimation plays an important role in project management. According to the Standish Chaos report, about 65% of software projects run over budget or schedule, many of which could have been saved if an early estimate had been made. Though software size cannot be measured directly, it is related to effort, and hence a low effort will lead to a low size. The calculation of effort depends upon how the data is organized or segmented. This research paper focuses on improving data segmentation in order to reduce the effort and, in parallel, the size. To improve the segmentation architecture, the project data is divided based on the similarity indexes between project attributes. Three similarity measures were used, namely Cosine Similarity (CS), Soft Cosine Similarity (SC), and a hybrid similarity index which combines CS and SC. Based on these similarity indexes, the project data is divided into groups by the K-means algorithm. To estimate the size, the correlation between the formed groups is calculated using Mean Square Error (MSE), Square Error (SE), and Standard Deviation (STD), and the normalized parameters are used to evaluate the software size.
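The similarity indexes can be sketched as follows. The soft cosine needs a feature-similarity matrix, and the hybrid combination shown here (a simple weighted average of CS and SC) is an assumption, since the abstract does not specify how the two are combined; the project vectors are hypothetical.

```python
import math

def cosine(a, b):
    """Ordinary cosine similarity between two attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def soft_cosine(a, b, s):
    """Soft cosine with a feature-similarity matrix s; when s is the
    identity, it reduces to ordinary cosine."""
    def form(u, v):
        return sum(s[i][j] * u[i] * v[j]
                   for i in range(len(u)) for j in range(len(v)))
    return form(a, b) / math.sqrt(form(a, a) * form(b, b))

def hybrid(a, b, s, w=0.5):
    """One plausible hybrid index: a weighted mix of CS and SC."""
    return w * cosine(a, b) + (1 - w) * soft_cosine(a, b, s)

p1 = [3.0, 1.0, 0.0]   # hypothetical project attribute vectors
p2 = [2.0, 0.0, 1.0]
s = [[1, 0.3, 0.0], [0.3, 1, 0.0], [0.0, 0.0, 1]]  # attribute relatedness
print(round(hybrid(p1, p2, s), 3))
```

K-means then groups projects whose pairwise index is high, and the MSE/SE/STD correlations are computed between the resulting groups.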

Author 1: Manisha
Author 2: Rahul Rishi
Author 3: Sonia Sharma

Keywords: Cosine similarity; hybrid similarity; machine learning; size estimation; soft cosine similarity

PDF

Paper 88: Use of Information and Computer-based Distance Learning Technologies during COVID-19 Active Restrictions

Abstract: Despite the reduction of restrictive measures imposed due to the COVID-19 pandemic, the problem of organizing distance learning continues to be topical. Distance learning imposes a much greater responsibility on teachers, giving them more of a workload as learning technologies change rapidly and teachers have to actively adapt to innovations, devoting a lot of time to preparing appropriate materials to ensure the best learning outcomes. The aim of the study is to identify the most effective means of organizing distance learning used by teachers. The study is based on a survey of university professors who taught in distance mode during the active administrative restrictions of 2020-2021. Opportunities for the use of various services in the organization of distance learning are analyzed, and the drawbacks and advantages of the distance learning system are highlighted. The study reveals previously unapparent issues that arose in the course of distance work in quarantine. These include, first and foremost, the high physical workload of teachers, the many technical problems that arose in the transition to distance learning, the lack of teachers' competencies, which needs to be urgently addressed, and the complicated coordination of the learning process. Despite the problems identified, the authors argue that the system of distance learning can and must be adopted and further developed as an additional supporting direction in the organization of the learning process, which will allow educational institutions to promptly shift to distance learning as needed.

Author 1: Irina Petrovna Gladilina
Author 2: Lyudmila Nikolaevna Pankova
Author 3: Svetlana Alexandrovna Sergeeva
Author 4: Vladimir Kolesnik
Author 5: Alexey Vorontsov

Keywords: Distance learning; teachers; electronic service; online class

PDF

Paper 89: Detection of COVID-19 from Chest X-Ray Images using CNN and ANN Approach

Abstract: The occurrence of coronavirus (COVID-19), which causes respiratory illnesses, is higher than that of the 2003 severe acute respiratory syndrome (SARS) outbreak. COVID-19, like SARS, spread across regions and infected living beings, with more than 73,435 deaths documented as of August 12, 2020; in contrast, SARS claimed 774 lives in 2003, whereas COVID-19 claimed more in a far shorter time. However, the fundamental difference between them is that, 17 years after SARS, powerful new tools have been developed that can be utilized to combat the virus and keep it within reasonable boundaries. One of these tools is machine learning (ML). Recently, ML has caused a paradigm shift in the healthcare industry, and its use in the COVID-19 outbreak could be profitable, especially in forecasting the location of the next outbreak. The use of AI can accelerate COVID-19 diagnosis and monitoring, reducing the time and cost of these processes. As a result, this study uses ANN and CNN techniques to detect COVID-19 from chest X-ray images, with 95% and 75% accuracy, respectively. Machine learning has greatly enhanced the monitoring, diagnosis, analysis, forecasting, contact tracing, and medication/vaccine production processes for the COVID-19 disease outbreak, reducing human involvement in nursing treatment.

Author 1: Micheal Olaolu Arowolo
Author 2: Marion Olubunmi Adebiyi
Author 3: Eniola Precious Michael
Author 4: Happiness Eric Aigbogun
Author 5: Sulaiman Olaniyi Abdulsalam
Author 6: Ayodele Ariyo Adebiyi

Keywords: Machine learning; COVID-19; ANN; CNN; X-ray images

PDF

Paper 90: A Novel Region Growing Algorithm using Wavelet Coefficient Feature Combination of Image Dynamics

Abstract: Moving object detection has versatile and potential applications in video surveillance, traffic monitoring, human motion capture, etc., where detecting object(s) in a complex scene is vital. In the existing background subtraction method based on frame differencing, the false positive and misclassification rates increase as the background becomes more complex and also in the presence of multiple moving objects in the scene. In this work, an approach is made to enhance the detection performance of the background subtraction method by exploiting the dynamism available in the scene. The differencing frame obtained by the spatial background subtraction method is subjected to wavelet transformation. By extracting and combining wavelet features from the dynamics of the scene, a novel region growing technique is further utilized to detect the moving object(s) in the scene. Simulation on various video sequences from the CDnet, SBMnet, AGVS, I2R, and Urban Tracker databases shows that the method provides satisfactory detection of moving objects in complex scenes. Quantitative measures like Recall, Precision, F1-measure, and Specificity computed for the algorithm indicate that it can be a suitable candidate for surveillance applications.
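A minimal sketch of the pipeline's two spatial steps, frame differencing followed by region growing, is shown below on tiny hypothetical grayscale frames; the paper additionally transforms the difference frame with wavelets and grows regions on combined wavelet features, which this sketch omits.

```python
from collections import deque

def frame_difference(bg, frame, threshold=25):
    """Spatial background subtraction: mark pixels whose absolute
    difference from the background model exceeds a threshold."""
    h, w = len(bg), len(bg[0])
    return [[1 if abs(frame[r][c] - bg[r][c]) > threshold else 0
             for c in range(w)] for r in range(h)]

def region_grow(mask, seed):
    """Grow a 4-connected region from a seed over foreground pixels."""
    h, w = len(mask), len(mask[0])
    seen, q = {seed}, deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append((nr, nc))
    return seen

# Tiny grayscale frames: a bright "object" enters the lower-right corner.
bg    = [[10, 10, 12], [11, 10, 10], [12, 11, 10]]
frame = [[12, 9, 13], [10, 11, 200], [11, 210, 205]]
mask = frame_difference(bg, frame)
print(mask)
print(sorted(region_grow(mask, (2, 2))))  # connected object pixels
```

The region growing step is what turns isolated above-threshold pixels into a coherent object, reducing the false positives that plain differencing produces.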

Author 1: Tamanna Sahoo
Author 2: Bibhuprasad Mohanty

Keywords: Moving object; dynamism; wavelet transformation; region growing

PDF

Paper 91: Driver Drowsiness Detection and Monitoring System (DDDMS)

Abstract: The purpose of this paper is to develop a driver drowsiness detection and monitoring system that acts as an assistant to the driver during the driving process. The system is aimed at reducing fatal crashes caused by the driver's drowsiness and distraction. For drowsiness, the system operates by analysing the eye blinks and yawn frequency of the driver, while for distraction, the system works based on head pose estimation and eye tracking. An alarm is triggered if any of these conditions occurs. The main part of the system is implemented in Python with computer vision, with a uniquely designed Raspberry Pi as the hardware platform and a speaker for the alarm. In short, this driver drowsiness monitoring system can continuously monitor drivers so as to avoid accidents in real time.

Author 1: Raz Amzar Fahimi Rozali
Author 2: Suzi Iryanti Fadilah
Author 3: Azizul Rahman Mohd Shariff
Author 4: Khuzairi Mohd Zaini
Author 5: Fatima Karim
Author 6: Mohd Helmy Abd Wahab
Author 7: Rajan Thangaveloo
Author 8: Abdul Samad Bin Shibghatullah

Keywords: Distraction; drowsiness; eye blink; yawn; head pose estimation; eye tracking; computer vision; Raspberry Pi

PDF

Paper 92: BiDLNet: An Integrated Deep Learning Model for ECG-based Heart Disease Diagnosis

Abstract: Every year, around 10 million people die due to heart attacks. The use of electrocardiograms (ECGs) is a vital part of diagnosing these conditions; these signals collect information about the heart's rhythm. Currently, various limitations hinder the diagnosis of heart diseases. The BiDLNet model is proposed in this paper to examine the capability of electrocardiogram data to diagnose heart disease. Through a combination of deep learning techniques and structural design, BiDLNet extracts two levels of features from the data: a discrete wavelet transform takes advantage of the features extracted from higher layers and adds them to lower layers. An ensemble classification scheme is then built to combine the predictions of various deep learning models. The BiDLNet system can classify features of different types of heart disease using two classification schemes, binary and multiclass, on which it performs remarkably well, achieving accuracies of 97.5% and 91.5%, respectively.
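The discrete wavelet transform used for the feature combination step can be illustrated with its simplest instance, a one-level Haar transform that splits a signal into pairwise averages (approximation) and differences (detail); the samples below are hypothetical, not real ECG data.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: scaled pairwise
    sums (approximation) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

beat = [1.0, 1.0, 4.0, 2.0, 0.0, 0.0, -1.0, 1.0]  # toy samples
approx, detail = haar_dwt(beat)
print([round(a, 3) for a in approx])  # coarse shape of the beat
print([round(d, 3) for d in detail])  # sharp transitions
```

The approximation band carries the coarse beat morphology while the detail band highlights sharp transitions, which is why mixing the two levels can enrich the features passed to the ensemble classifiers.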

Author 1: S. Kusuma
Author 2: Jothi. K. R

Keywords: Heart disease; ECG; deep learning; machine learning models; discrete wavelet transform

PDF

Paper 93: Deep-Learning Approach for Efficient Eye-blink Detection with Hybrid Optimization Concept

Abstract: In this research work, a novel eye-blink detection model is developed. The proposed eye-blink detection model comprises seven major phases: (a) video-to-frame conversion, (b) pre-processing, (c) face detection, (d) eye region localization, (e) eye landmark detection and eye status detection, (f) eye blink detection, and (g) eye blink classification. Initially, individual frames are extracted from the collected raw video sequence (input) in the video-to-frame conversion phase. Each frame is then subjected to a pre-processing phase, where the image quality is improved using the proposed kernel median filtering (KMF) approach. In the face detection phase, the Viola-Jones model is utilized. Then, from the detected faces, the eye region is localized in the proposed eye region localization phase, which encapsulates two major steps: feature extraction and landmark detection. Features like improved active shape models (I-ASMs) and local binary patterns (LBPs) are extracted from the detected facial images. Then, the eye region is localized using a new optimized convolutional neural network (CNN) framework trained with the extracted features (I-ASM and LBP). Moreover, to enhance the classification accuracy of eye localization, the weights of the CNN are fine-tuned using a new Seagull Optimization with Enhanced Exploration (SOEE), an improved version of the standard Seagull Optimization Algorithm (SOA). The outcome of the optimized CNN framework is the exact location of the eye region. Once the eye region is detected, it is essential to detect the status of the eye (whether open or closed); the status is detected by computing the eye aspect ratio (EAR). The identified eye blinks are then classified as long or short blinks based on the computed correlation coefficient. Finally, a comparative evaluation is accomplished to validate the projected model.
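The eye status step relies on the eye aspect ratio, commonly computed from six eye landmarks as EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|); the ratio drops toward zero as the eye closes. The landmark coordinates below are hypothetical.

```python
import math

def ear(landmarks):
    """Eye aspect ratio over six eye landmarks p1..p6: the two vertical
    eyelid distances normalized by the horizontal eye width."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = landmarks
    return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4))

# Hypothetical landmark sets: corners at p1/p4, upper lid p2/p3, lower lid p5/p6.
open_eye   = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]
print(round(ear(open_eye), 3), round(ear(closed_eye), 3))
```

A blink is typically registered when the EAR stays below a threshold for a few consecutive frames, and the blink's frame count is what the model's long/short classification then works from.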

Author 1: Rupali Gawande
Author 2: Sumit Badotra

Keywords: Eye localization; CNN; Seagull Optimization with Enhanced Exploration (SOEE); improved active shape model (I-ASM); eye aspect ratio (EAR); eye-blink detection

PDF

Paper 94: A Hybrid Quartile Deviation-based Support Vector Regression Model for Software Reliability Datasets

Abstract: Software reliability estimation using machine learning plays a major role on different software quality reliability databases. Most conventional software reliability estimation models fail to predict test samples due to the high true positive rate of traditional support vector regression models. Most traditional machine learning-based fault prediction models are integrated with standard software reliability growth measures for reliability severity classification; however, these models predict the reliability level of a binary class with less standard error. In this paper, a hybrid support vector regression-based quartile deviation growth measure is implemented on the training fault datasets. Experimental results are simulated on various reliability datasets with different configuration parameters for fault prediction.
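The quartile deviation underlying the growth measure is the semi-interquartile range QD = (Q3 - Q1) / 2. A minimal sketch on hypothetical inter-failure times follows (the median-split quartile convention is used here; the paper may use another).

```python
def quartile_deviation(data):
    """Quartile deviation (semi-interquartile range) QD = (Q3 - Q1) / 2,
    using median-split quartiles."""
    s = sorted(data)
    n = len(s)
    def median(v):
        m = len(v) // 2
        return v[m] if len(v) % 2 else (v[m - 1] + v[m]) / 2
    q1 = median(s[:n // 2])
    q3 = median(s[n // 2 + (n % 2):])
    return (q3 - q1) / 2

# Hypothetical inter-failure times from a reliability dataset.
times = [3, 7, 8, 5, 12, 14, 21, 13, 18]
print(quartile_deviation(times))
```

Being based on quartiles rather than the mean, this spread measure is robust to the extreme fault counts that skew standard-deviation-based growth measures.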

Author 1: Y. Geetha Reddy
Author 2: Y Prasanth

Keywords: Software fault detection; reliability prediction; support vector machine; exponential distribution; quartile deviation

PDF

Paper 95: EAGL: Enhancement Algorithm based on Gamma Correction for Low Visibility Images

Abstract: Under poor light conditions or improper acquisition settings, an image degrades due to low contrast and poor brightness and suffers from poor visual quality. Enhancement is required to manipulate the scale of pixel intensity for significant improvement in the image. This paper proposes a gamma correction method with a value that self-adapts to the intensity scale of the image. After transformation to the HSI (hue, saturation, and intensity) channels, a multi-scale wavelet transform is applied to the intensity component of the image. The gamma scale is computed from the combination of a reformed scale constant of the logarithm function and the Minkowski distance measure. Lastly, a wavelet-based de-noising technique is applied to suppress high noise coefficients and improve image quality. The proposed method is evaluated in terms of visual appearance, measure of information content, signal-to-noise ratio, and universal image quality index, demonstrating its efficacy in terms of quality and improved visibility.
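Gamma correction itself is the mapping out = 255 · (in/255)^(1/γ), where γ > 1 brightens a dark image; the paper's contribution is deriving γ adaptively from wavelet and Minkowski-distance statistics, which the sketch below replaces with a fixed hypothetical γ.

```python
def gamma_correct(pixels, gamma):
    """Apply out = 255 * (in / 255) ** (1 / gamma) to 8-bit intensities;
    gamma > 1 lifts dark values more than bright ones."""
    return [round(255 * (p / 255) ** (1 / gamma)) for p in pixels]

dark_row = [10, 30, 60, 120]  # low-intensity pixel values
print(gamma_correct(dark_row, 2.2))
```

Because the exponent is less than one for γ > 1, the curve lifts dark pixels strongly while leaving bright pixels nearly unchanged, which is why the correct γ must track the image's intensity scale.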

Author 1: Navleen S Rekhi
Author 2: Jagroop S Sidhu

Keywords: Low scale intensity images; discrete wavelet decomposition; gamma correction; quality metrics

PDF
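The central operation of the paper, gamma correction with an image-dependent exponent, can be sketched as follows. The mid-tone heuristic used here to pick gamma is an illustrative assumption only; the paper derives its gamma from a logarithm-based scale constant combined with a Minkowski distance:

```python
import numpy as np

def adaptive_gamma(intensity):
    """Illustrative heuristic: choose gamma so the mean intensity maps
    to 0.5 (assumes intensities lie strictly inside (0, 1)). This is a
    sketch of the idea, not the paper's exact formula."""
    mean = float(np.mean(intensity))
    return np.log(0.5) / np.log(mean)

def gamma_correct(intensity, gamma):
    """Apply gamma correction to an intensity channel scaled to [0, 1]."""
    return np.clip(intensity, 0.0, 1.0) ** gamma

dark = np.array([[0.1, 0.2], [0.3, 0.4]])  # a low-visibility patch
g = adaptive_gamma(dark)                    # gamma < 1 brightens dark images
bright = gamma_correct(dark, g)
```

For this dark patch (mean 0.25) the heuristic yields gamma = 0.5, so every pixel is raised toward the mid-tones.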

Paper 96: Approval Rating of Peruvian Politicians and Policies using Sentiment Analysis on Twitter

Abstract: Nowadays, using the social network Twitter, a subject can easily access, post, and share information about news, events, and incidents taking place in the world. Recently, due to the high number of users and the capability to transfer information instantly, Twitter has attracted the interest of politicians seeking to interact with their followers and to communicate their policies. Fearing the disagreements and disturbances that the application of some policies might cause, politicians usually use surveys to support their actions. However, such studies still use traditional questionnaires to recover information and are costly and time-consuming. Recent advances in automatic natural language processing have allowed the extraction of information from textual data, like tweets. In this work, we present a method to analyze Twitter data related to Peruvian politicians and to score the latent sentiment polarity of such messages. Our proposal is based on an embedding representation of tweets, which are classified by a convolutional neural network. For evaluation, we collected a new dataset related to the current President of Peru, on which the model achieved 91.2% sensibility and 94.4% specificity. Furthermore, we evaluated the model on two political topics that were totally unknown to the model. In all of them, our approach gives results comparable to renowned Peruvian pollsters.

Author 1: Jose Yauri
Author 2: Luis Solis
Author 3: Efrain Porras
Author 4: Manuel Lagos
Author 5: Enrique Tinoco

Keywords: Twitter data analytic; sentiment analysis; Peruvian politicians; approval rating; convolutional neural networks

PDF

Paper 97: Construction of a Repeatable Framework for Prostate Cancer Lesion Binary Semantic Segmentation using Convolutional Neural Networks

Abstract: Prostate cancer is the 3rd most diagnosed cancer overall. Current screening methods such as the prostate-specific antigen test can result in overdiagnosis and overtreatment, while other methods such as transrectal ultrasonography are invasive. Recent medical advancements have allowed the use of multiparametric MRI, a noninvasive and reliable screening process for prostate cancer. However, assessment still varies between professionals, introducing subjectivity. While convolutional neural networks have been used in multiple studies to objectively segment prostate lesions, the sensitivity of the datasets and the varying ground truths established in these studies make it impossible to reproduce and validate the results. In this study, we executed a repeatable framework for segmenting prostate cancer lesions using annotated apparent diffusion coefficient maps from the QIN-PROSTATE-Repeatability dataset, a publicly available dataset that includes multiparametric MRI images of 15 patients with confirmed or suspected prostate cancer, with two studies each. We used a main architecture of U-Net with batch normalization, tested with different encoders, varying image augmentation combinations, and hyperparameters adopted from various published frameworks to validate which combination of parameters works best for this dataset. The best performing framework achieved a Dice score of 0.47 (0.44-0.49), which is comparable to previously published studies. The results from this study can be objectively compared and improved in further studies, whereas this was previously not possible.

Author 1: Ian Vincent O. Mirasol
Author 2: Patricia Angela R. Abu
Author 3: Rosula S. J. Reyes

Keywords: Convolutional neural networks; binary semantic segmentation; prostate cancer; computer vision; deep learning

PDF

Paper 98: Methods and Directions of Contact Tracing in Epidemic Discovery

Abstract: The contact tracing process is a mitigation and monitoring strategy that aims to capture infectious diseases in order to control their outbreak in a practical time. Various applications have been proposed and developed for the contact tracing process; most of these applications utilize smartphone technologies to record all movements of contacts and send notifications to those expected to be infected, whether at high or low risk. On the other side, several challenges limit the functionality of contact tracing applications and processes; these limitations include (1) privacy concerns, (2) the inability to fully identify contacts, and (3) delays in identification. In this paper, we survey the functionality of the contact tracing process, how it works, open directions and challenges, applications, and its domains of use.

Author 1: Mohammed Abdalla
Author 2: Amr M. AbdelAziz
Author 3: Louai Alarabi
Author 4: Saleh Basalamah
Author 5: Abdeltawab Hendawi

Keywords: Contact tracing; routes analysis; epidemic discovery; big spatial health applications

PDF

Paper 99: Emergency Decision Model by Combining Preference Relations with Trapezoidal Pythagorean Fuzzy Probabilistic Linguistic Priority Weighted Averaging PROMETHEE Approach

Abstract: The outbreak of COVID-19 in 2019 has brought greater international attention to emergency decision making and management. Since emergency situations are often uncertain, prevention and control are crucial. For better prevention and control, according to the characteristics of emergency incidents, this paper proposes a new form of linguistic expression, trapezoidal Pythagorean fuzzy probabilistic linguistic variables, to express decision-making information. Next, the paper develops the operational rules, value index and ambiguity of trapezoidal Pythagorean fuzzy probabilistic linguistic variables. Then, a new trapezoidal Pythagorean fuzzy probabilistic linguistic priority weighted averaging PROMETHEE approach is introduced to aggregate the trapezoidal Pythagorean fuzzy probabilistic linguistic information in combination with preference relations. Finally, an emergency decision making case on the prevention of infectious diseases illustrates the necessity and effectiveness of this method; the results of comparative and experimental analyses demonstrate that the constructed approach performs better in terms of effectiveness and reasonability.

Author 1: Xiao Yue
Author 2: Li jianhui

Keywords: COVID-19; emergency decision model; trapezoidal Pythagorean fuzzy probabilistic linguistic variables; preference relations; PROMETHEE approach

PDF

Paper 100: Analysis of the Influence of De-hazing Methods on Vehicle Detection in Aerial Images

Abstract: In recent years, object detection from aerial imagery in adverse weather, especially fog, has been challenging. In this study, we conduct an empirical experiment using two de-hazing methods, DW-GAN and Two-Branch, for removing fog, then evaluate the detection performance of six advanced object detectors belonging to four main categories (two-stage, one-stage, anchor-free and end-to-end) on original and de-hazed aerial images to find the most suitable solution for vehicle detection in foggy weather. We use the UIT-DroneFog dataset, a challenging dataset that includes many small, dense objects captured at various altitudes, as the benchmark to evaluate the effectiveness of the approaches. From the experiments, we observe that each de-hazing method has a different impact on the six experimental detectors.

Author 1: Khang Nguyen
Author 2: Phuc Nguyen
Author 3: Doanh C. Bui
Author 4: Minh Tran
Author 5: Nguyen D. Vo

Keywords: Foggy weather; vehicle detection; DW-GAN; two-branch; YOLOv3; sparse R-CNN; deformable DETR; cascade R-CNN; CrossDet; adverse weather

PDF

Paper 101: Designing a Mobile Application using Augmented Reality: The Case of Children with Learning Disabilities

Abstract: Children with learning disorders face several difficulties in learning correctly; in many cases they experience more stress because they do not understand the subjects proposed by the teacher. The aim of the research is to propose an innovative plan to design a mobile application for the treatment of learning disabilities using augmented reality in primary education. We used a methodology called Design Thinking, which has five phases (empathize, define, ideate, prototype and test) and facilitates identifying the problems in order to devise solutions to them. For the prototype we used tools such as Marvel App, which is responsible for the layout of the mobile application; TinkerCad, which allows us to design the 3D models of the educational games; and finally App Augmented Class to create the augmented reality model. The results were obtained through a survey about the prototype, identifying its acceptance by parents regarding the usefulness of this idea for their children with learning disabilities, with 76% agreeing that the prototype is ideal for children. In addition, the prototype was validated by five experts, resulting in 85.4% acceptance. The research concludes with a good design for a solution that helps children with learning disabilities achieve better understanding and be free from stress.

Author 1: Misael Lazo-Amado
Author 2: Leoncio Cueva-Ruiz
Author 3: Laberiano Andrade-Arenas

Keywords: App augmented class; design thinking; marvel App; learning disorder; TinkerCad

PDF

Paper 102: Implementation of a Web System: Prevent Fraud Cases in Electronic Transactions

Abstract: The purpose of this project is to prevent cases of fraud in person-to-person e-commerce conducted through social networks. For the development of the research work, the Scrum methodology was used, allowing the project to be carried out in an agile and flexible way, adapting to the changes that could arise along the way. The technological tools that made this project possible were SQL Server, C++, Visual Studio and Marvel App, the latter for prototype design. In addition, there was the support of an artificial intelligence technique known as Optical Character Recognition, which allowed the document recognition process to be completed. The social network Facebook was also relevant to the development process, since the dataset for training the system was obtained from there, guaranteeing its functionality. The results obtained benefit both parties, sellers/suppliers and consumers, reducing the impact of fraud cases and guaranteeing safer online operations. In addition, a validation was carried out by experts in the development of web applications, taking usability, feasibility, scalability, innovation, and technology as criteria. The application obtained approval on all criteria, with a total mean value of 2.76.

Author 1: Edwin Kcomt Ponce
Author 2: Katherine Escobedo Sanchez
Author 3: Laberiano Andrade-Arenas

Keywords: Artificial intelligence; e-commerce; fraud; optical character recognition; scrum; social networks; web system

PDF

Paper 103: Benchmarking of Motion Planning Algorithms with Real-time 3D Occupancy Grid Map for an Agricultural Robotic Manipulator

Abstract: The performance evaluation of motion planning algorithms for agricultural robotic manipulators is commonly performed via benchmarking platforms. However, creating a realistic benchmarking scene that constrains the motion planning algorithms with the characteristics of a real-world environment has always been a challenge worthy of research. In this paper, we present a lab-setup benchmarking platform to evaluate Open Motion Planning Library (OMPL) motion planners for a robotic harvester of a palm-like tree, using a real-time 3D occupancy grid map. First, three motion problems were defined with different levels of complexity based on a real oil palm fruit harvesting task. To achieve reliable outcomes, the benchmarking scene was modeled by converting point cloud data from a stereo-depth sensor into a 3D occupancy grid map using the OctoMap algorithm. Then the benchmarking was performed, all within a real-time process. According to the results, a fair performance evaluation was achieved by modeling a realistic benchmarking scene, which can help in choosing a high-performing algorithm and efficiently conducting such harvesting tasks in real practice.

Author 1: Seyed Abdollah Vaghefi
Author 2: Mohd Faisal Ibrahim
Author 3: Mohd Hairi Mohd Zaman

Keywords: Motion planning; agricultural; harvesting; robot manipulator; benchmarking; oil palm

PDF

Paper 104: Unsupervised Domain Adaptation using Maximum Mean Covariance Discrepancy and Variational Autoencoder

Abstract: Face recognition has progressed tremendously from its initial use of holistic learning models to hand-crafted, shallow, and deep learning models. DeepFace, a nine-layer Deep Convolutional Neural Network (DCNN), reached near-human performance on unconstrained face recognition for the Labeled Faces in the Wild (LFW) dataset. These models performed very well on the benchmark datasets, but their performance sometimes deteriorated in real-world applications. The problem arose when there was a domain shift due to the different distribution spaces of the training and testing data. A few researchers looked at Unsupervised Domain Adaptation (UDA) to find domain-invariant feature spaces. They tried to minimize the domain discrepancy using a static loss of maximum mean discrepancy (MMD). From MMD, researchers delved into the higher-order statistics of maximum covariance discrepancy (MCD). MMD and MCD were combined to obtain maximum mean and covariance discrepancy (MMCD), which captured more information than MMD alone. We use a Variational Autoencoder (VAE) with joint mean and covariance discrepancy to offer a solution for domain adaptation. The proposed MMCD-VAE model uses the VAE to measure the discrepancy in the spread of variance around the mean value and uses MMCD to measure the directional discrepancy in the variance. Analysis was done using the TinyFace benchmark dataset and the Bollywood Celebrities dataset. Three objective image quality parameters, namely SSIM, PieAPP, and SIFT feature matching, demonstrate the superiority of MMCD-VAE over the conventional KL-VAE model. MMCD-VAE shows an 18% improvement in SSIM and a remarkable improvement in the perceptual quality of the image over the conventional KL-VAE model.

Author 1: Fabian Barreto
Author 2: Jignesh Sarvaiya
Author 3: Suprava Patnaik
Author 4: Sushilkumar Yadav

Keywords: Deep learning; domain adaptation; face recognition; maximum mean covariance discrepancy; transfer learning; variational autoencoders

PDF
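The two discrepancy terms combined in the abstract above can be sketched in their simplest form, a linear-kernel MMD plus a Frobenius-norm covariance discrepancy. This is a generic illustration of the MMD/MCD idea, not the authors' exact MMCD-VAE loss, and the weighting in `mmcd` is a hypothetical choice:

```python
import numpy as np

def mmd_linear(X, Y):
    """Squared maximum mean discrepancy with a linear kernel:
    the squared distance between the feature means of the two domains."""
    delta = X.mean(axis=0) - Y.mean(axis=0)
    return float(delta @ delta)

def mcd(X, Y):
    """Maximum covariance discrepancy (second-order statistics):
    Frobenius norm of the difference between feature covariances."""
    cx = np.cov(X, rowvar=False)
    cy = np.cov(Y, rowvar=False)
    return float(np.linalg.norm(cx - cy, ord="fro"))

def mmcd(X, Y, alpha=0.5):
    """Weighted combination of the mean and covariance terms
    (the weight alpha is an illustrative assumption)."""
    return alpha * mmd_linear(X, Y) + (1 - alpha) * mcd(X, Y)
```

Note that a pure translation of the target domain moves only the mean term: shifting every sample by a constant changes `mmd_linear` but leaves `mcd` at zero, which is why the second-order term adds information.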

Paper 105: Prediction of Quality of Water According to a Random Forest Classifier

Abstract: Potable or drinking water is a daily life necessity for humans. The safety of this water is a concern in many regions around the world, since polluted waters are increasing and causing the spread of disease among populations. Continuous management and evaluation of water meant for drinking is essential and must be taken seriously. Often, the quality of water is evaluated through regular laboratory testing and analysis, which can be tiresome and time-consuming. On the other hand, advanced technologies using big data with the help of machine learning can achieve better results in terms of potability evaluation. For this reason, several studies have been conducted on predicting the quality of water and the factors and classifications that affect the prediction model. In this study, a random forest model was developed using PySpark classification to predict the potability of river water by relying on ten different features: pH, hardness, presence of solids, presence of chloramines, presence of sulfate, conductivity, organic carbon, trihalomethanes, turbidity, and finally potability. The developed model was able to predict the water potability classification with 1.0 accuracy and a 1.0 F1-score.

Author 1: Shahd Maadi Alomani
Author 2: Najd Ibrahim Alhawiti
Author 3: A’aeshah Alhakamy

Keywords: Big data; machine learning; classification; random forest; water quality; PySpark

PDF

Paper 106: Virtual Reality Platform for Sustainable Road Education among Users of Urban Mobility in Cuenca, Ecuador

Abstract: A traffic accident is an unforeseen event beyond the control of the people involved, which can produce bodily, functional, or organic injuries, leading to death or disability in the worst cases. According to the Empresa Pública Municipal de Movilidad, Tránsito y Transporte de Cuenca (EMOV-EP), of the total accidents recorded in 2021, 24.97% were due to ignoring traffic signs, 21.11% due to not paying attention to traffic, and 16.94% due to driving under the influence of alcohol. The EMOV-EP is responsible for the regulation of human mobility. Thus, the EMOV-EP in conjunction with the Universidad Politécnica Salesiana (UPS) introduced the following research question: How can a road safety education strategy, supported by Information and Communication Technologies (ICTs), be developed to contribute to improving the behavior of citizens, increase their knowledge of traffic laws and regulations, and thus reduce the number of accidents in the city of Cuenca? In this paper we present the development of a Virtual Reality (VR) platform designed for road safety education. The platform is composed of a Web system and 4 VR systems (games) designed for 4 common causes of accidents respectively (drunk drivers, high-speed drivers, cyclists riding in bicycle lanes, and users of the tram transport system), using a serious games approach and Oculus Rift/Quest technology. We have found that more than 80% of users have had a very good experience playing and learning through the VR systems. Hence, this virtual reality platform constitutes a technological proposal with social impact because it creates an entertainment environment that can raise awareness among citizens, thereby strengthening road safety education and reducing the number of accidents in the city of Cuenca.

Author 1: Gabriel A. Leon-Paredes
Author 2: Omar G. Bravo-Quezada
Author 3: Erwin J. Sacoto-Cabrera
Author 4: Wilson F. Calle-Siavichay
Author 5: Ledys L. Jimenez-Gonzalez
Author 6: Juan Aguirre-Benalcazar

Keywords: Virtual reality; road safety education; virtual scenarios; serious games; educational experience

PDF

Paper 107: An E2ED-based Approach to Custom Robot Navigation and Localization

Abstract: Simultaneous localization and mapping, or SLAM, is a basic strategy used with robots and autonomous vehicles to identify unknown environments. It is of great interest in robotics due to its importance in the development of motion planning schemes in unknown and dynamic environments, which are close to the real application cases of a robot. This is why, in parallel with research, such schemes are also important in specialized robotics training processes. However, access to robotic platforms and laboratories is often complex and costly, with high demands on time and resources, particularly for small research centers. A more efficient and affordable approach to working with autonomous algorithms and motion planning schemes is often the use of the ROS-Gazebo simulator, which allows high integration with customized non-commercial robots and the possibility of an end-to-end design (E2ED) solution. This research adopts this approach as a training and research strategy with our ARMOS TurtleBot robotic platform, creating an environment for working with navigation algorithms in localization, mapping, and path planning tasks. This paper shows the integration of ROS into the ARMOS TurtleBot project and the design of several ROS-based subsystems to improve interaction in the development of service robot tasks. The project's source code is available to the research community.

Author 1: Andres Moreno
Author 2: Daniel Paez
Author 3: Fredy Martinez

Keywords: End-to-End design; localization; navigation; path planning; robotics; SLAM

PDF

Paper 108: Identifying Community-Supported Technologies and Software Developments Concepts by K-means Clustering

Abstract: Working on technologies that have community support is one of the most important factors in software development. Software developers often face difficulties during software development, and community support from other developers helps them significantly. This paper presents an approach based on the K-means clustering technique to identify the level of community support for software technologies and development concepts using Stack Overflow (SO) discussion forums. To test the approach, a case study was performed by gathering data from SO and preparing a dataset that contains over a million Java developers' questions. Then, K-means clustering was applied to identify the community support levels. The goal is to find the best features for grouping community-supported software technologies and development concepts, and to identify the number of groups that determine the community support levels. Statistical error, clustering and classification evaluation metrics were applied. The results indicate that the best features for formulating community support levels for technologies and development concepts are Failure Rate and Wait Time. The results show that the approach identifies two groups of community support levels based on the best silhouette index value of 97%. According to the results, the majority of Java technologies and development concepts are labeled as less community-supported (Cluster 2). A Random Forest classifier was applied to indirectly evaluate the approach's ability to detect the identified community support class. The result shows that the RF classifier performs well, with a high accuracy value of 99.49%, which indicates that the identified groups improve the performance of the classifier. The approach can assist software developers and researchers in utilizing the SO platform and in developing SO-based recommendation systems.

Author 1: Farag Almansoury
Author 2: Segla Kpodjedo
Author 3: Ghizlane El Boussaidi

Keywords: Stack overflow; unsupervised machine learning; k-means clustering; empirical study; machine learning; random for-est; software development; Java; classification; community support

PDF
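The clustering step described above, applied to the two selected features (Failure Rate, Wait Time), can be sketched with a plain K-means implementation; the per-tag data below are hypothetical:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each point to the nearest centre,
    then move each centre to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep the old centre if a cluster empties
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical (failure_rate, wait_time) rows for technology tags:
# two well-supported tags (low failure, fast answers) and two poorly supported.
X = np.array([[0.10, 2.0], [0.15, 2.5], [0.80, 40.0], [0.85, 42.0]])
labels, _ = kmeans(X, k=2)
```

With k = 2, the two well-separated groups of tags end up in different clusters, mirroring the paper's two community-support levels.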

Paper 109: On the Role of Text Preprocessing in BERT Embedding-based DNNs for Classifying Informal Texts

Abstract: Due to highly unstructured and noisy data, analyzing society reports in written texts is very challenging. Classifying informal text data is still considered a difficult task in natural language processing, since the texts can contain abbreviated words, repeated characters, typos, slang, et cetera. Therefore, text preprocessing is commonly performed to remove the noise and make the texts more structured. However, we argue that most preprocessing tasks are no longer required if a suitable word embedding approach and deep neural network (DNN) architecture are correctly chosen. This study investigated the effects of text preprocessing in fine-tuning a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model using various DNN architectures such as the multilayer perceptron (MLP), long short-term memory (LSTM), bidirectional long short-term memory (Bi-LSTM), convolutional neural network (CNN), and gated recurrent unit (GRU). Various experiments were conducted with numerous learning rates and batch sizes. As a result, text preprocessing had insignificant effects on most models, such as LSTM, Bi-LSTM, and CNN. Moreover, the combination of BERT embeddings and a CNN produced the best classification performance.

Author 1: Aliyah Kurniasih
Author 2: Lindung Parningotan Manik

Keywords: Natural language processing; BERT embeddings; deep neural network; text preprocessing

PDF
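The kind of preprocessing the paper evaluates, and argues is largely unnecessary once subword BERT tokenisation is used, looks like this in practice. The specific cleaning rules below are a common illustration, not the authors' exact pipeline:

```python
import re

def light_preprocess(text):
    """Typical normalisation for informal text: lowercase, collapse
    characters repeated three or more times, and squeeze whitespace."""
    text = text.lower()
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)  # "soooo" -> "soo"
    text = re.sub(r"\s+", " ", text).strip()
    return text

print(light_preprocess("This   is SOOOO good!!!"))  # -> this is soo good!!
```

A BERT tokenizer would split an unseen form like "soooo" into known subwords anyway, which is the intuition behind the paper's finding that such cleaning rarely changes results.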

Paper 110: VIHS with ROTR Technique for Enhanced Light-Weighted Cryptographic System

Abstract: Developing a bypass parallel processing block is one of the emerging and exciting research areas in system encrypt/decrypt applications. A partial pseudo-random-based hashing VIHS is the most suitable methodology for designing a system to encrypt/decrypt a block in cryptography. For this purpose, various VIHS and register techniques have been developed to process storage system data. However, they are limited by reduced efficiency, increased computational complexity, high area consumption, and high cost. Thus, this research intends to develop a novel dynamic system register with hashing and an optimal hash signature design to process the system's encrypt/decrypt data. The main intention of this paper is to analyze the transfer characteristics of the current based on the pseudo-differential pair for proficient system detection. Then, a system window can be created and adjusted to obtain an optimized power flow with less sensitivity to data loss. The major stages involved in the proposed block design are the register, the partition design, and the VIHS design. The dynamic system register is designed first to reach a fast decision and to enable a low input-referred offset value. Then, the partition is formed with respect to the output of the register, and the VIHS is used to produce the highly proportional logical work. During the performance evaluation, various measures were utilized to analyze the performance of the proposed dynamic system register-based hashing with optimal hash signature design. In addition, the results of existing techniques are compared with the proposed technique to prove its efficiency.

Author 1: Sanjeev Kumar A N
Author 2: Ramesh Naik B

Keywords: Cryptography; partial pseudo-random based hashing technique; logical to sequential VIHS; system encrypt/decrypt data; dynamic system register; bypass parallel processing

PDF

Paper 111: Classification of Palm Trees Diseases using Convolution Neural Network

Abstract: The palm tree is considered one of the most durable trees, and it occupies an advanced position as one of the most famous and most important trees planted in different regions around the world; it enters into many uses and has a number of benefits. In recent years, date palms have been exposed to a large number of diseases. These diseases differ in their symptoms and causes, and sometimes overlap, making diagnosis with the naked eye difficult, even for an expert in this field. This paper proposes a CNN model to detect and classify four common diseases threatening palms today (bacterial leaf blight, brown spots, leaf smut, and white scale) in addition to healthy leaves. The proposed CNN structure includes four convolutional layers for feature extraction followed by a fully connected layer for classification. For performance evaluation, we investigate the performance of the proposed model and compare it to other CNN structures, VGG-16 and MobileNet, using four evaluation metrics: accuracy, precision, recall and F1 score. Our proposed model achieves a 99.10% accuracy rate, while VGG-16 and MobileNet achieve 99.35% and 99.56% accuracy rates, respectively. In general, the performance of our model and the other models is very close, with a minor advantage to MobileNet over the others. In contrast, our model is characterized by its simplicity and shows low computational training time compared to the others.

Author 1: Marwan Abu-zanona
Author 2: Said Elaiwat
Author 3: Shayma’a Younis
Author 4: Nisreen Innab
Author 5: M. M. Kamruzzaman

Keywords: Palm trees diseases; convolutional neural networks; mobileNet; VGG-16

PDF

Paper 112: Dynamic Spatial-Temporal Graph Model for Disease Prediction

Abstract: Advances in the field of neural networks, especially Graph Neural Networks (GNNs), have helped many fields, mainly the areas of chemistry and biology where recognizing and utilising hidden patterns is of great importance. In Graph Neural Networks, the input graph structures are exploited by using the dependencies formed by the nodes. Data can also be transformed into the form of graphs, which can then be used in such models. In this paper, a method is proposed to make appropriate transformations and then use the resulting structure to predict diseases. Current models in disease prediction do not fully use the temporal features associated with diseases, such as the order of the occurrence of symptoms and their significance. In the proposed work, the presented model takes into account the temporal features of a disease and represents them in terms of a graph to fully utilize the power of Graph Neural Networks and spatial-temporal models, which take into consideration the underlying structures that change over time. The model can be efficiently used to predict the most likely disease given a set of symptoms as input. The accuracy of the algorithm is determined by its performance on the given dataset. The proposed model is compared with the existing baseline models and proves to be more promising in disease prediction.

Author 1: Ashwin Senthilkumar
Author 2: Mihir Gupte
Author 3: Shridevi S

Keywords: Spatial temporal graph convolution network; disease prediction; graph neural network; graph convolutional network; deep learning; knowledge graph

PDF

Paper 113: Password Systems: Problems and Solutions

Abstract: In a security environment featuring subjects and objects, we consider an alternative to the classical password paradigm. In this alternative, a key includes a password, an object identifier, and an authorization. A master password is associated with each object. A key is valid if the password in that key descends from the master password by using a validity relation expressed in terms of a symmetric-key algorithm. We analyse a number of security problems. For each problem, a solution is presented and discussed. In certain cases, extensions to the original key paradigm are introduced. The problems considered include the revocation of access authorizations; bounded keys expressing limitations on the number of iterated utilizations of the same key to access the corresponding object; repositories, which are objects aimed at storing keys, possibly organized into hierarchical structures; and the merging of two keys into a single key featuring a composite authorization that includes the access rights in the two keys.

Author 1: Lanfranco Lopriore

Keywords: Access authorization; key; password; revocation; security

PDF
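The validity relation described in the abstract above, where a key's password descends from the object's master password via a symmetric-key algorithm, can be sketched with a keyed hash. HMAC-SHA256 is an illustrative choice here; the paper only requires some symmetric-key construction, and the field layout is hypothetical:

```python
import hashlib
import hmac

def derive_password(master, object_id, authorization):
    """Derive the password of a key (object_id, authorization) from the
    object's master password. Only a holder of the master password can
    mint valid keys or check a key's validity."""
    msg = f"{object_id}:{authorization}".encode()
    return hmac.new(master, msg, hashlib.sha256).hexdigest()

def key_is_valid(key, master):
    """A key is valid if its password descends from the master password."""
    obj, auth, pwd = key
    return hmac.compare_digest(pwd, derive_password(master, obj, auth))

master = b"object-master-password"  # hypothetical per-object secret
key = ("doc42", "read", derive_password(master, "doc42", "read"))
```

Changing the object identifier or the authorization in a key invalidates it, since the derived password no longer matches.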

The Science and Information (SAI) Organization

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org