The Science and Information (SAI) Organization

IJACSA Volume 13 Issue 4

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: CISE: Community Engagement of CEB Cloud Ecosystem in Box

Abstract: The explosion of digital and observational data is having a profound effect on the nature of scientific inquiry, requiring new approaches to manipulating and analyzing large and complex data and increasing the need for strong collaborative research teams to address these challenges. These data, along with the availability of computational resources and recent advances in artificial intelligence, machine learning software tools, and methods, can enable unprecedented science and innovation. Unfortunately, these software tools and techniques are not uniformly accessible to all communities, particularly scientists and engineers at Minority Serving Institutions (MSI). Cloud computing resources are natural channels for enhancing these institutions' research productivity. However, utilizing cloud computing resources effectively for research requires a significant investment of time and effort, awkward manipulation of data sets, and the deployment of cloud-based application workflows that support analysis and visualization tools.

Author 1: Benjamin Garlington

Keywords: Collaboration; outreach; engagement; narrowcasting; conceptual; methodological; storage-as-a-service; software-as-a-service; data-as-a-service; infrastructure-as-a-service; platform-as-a-service; cloud-ecosystem; minority serving institutions (MSI)

PDF

Paper 2: BEAM: A Network Topology Framework to Detect Weak Signals

Abstract: These days, strategic decision making and immediate action are becoming complex tasks for companies and policymakers, since the environment is subject to emerging changes that may involve unknown factors. When facing these challenges, companies are exposed to opportunities for growth, but also to threats. Therefore, they seek to explore and analyze large amounts of data to detect emerging changes, or so-called weak signals, that can help them maintain their competitive advantages and shape their future operational environments. Due to the increasing volume of daily produced data, however, scalable and automated computer-aided systems are needed to explore and extract these weak signals. To overcome the automation and scalability challenges, and to capture early signs of change in a big data environment, we propose a framework for weak signal detection relying on network topology. It is implemented under the Cocktail project, whose goal is to create a real-time observatory of trends, innovations, and weak signals circulating in the discourses of the food and health sectors on Twitter. The method quantitatively analyses the local network structure using graphlets (a particular type of motif) to find weak signals. It accordingly provides qualitative elements that contextualise the identified signals, allowing business experts to interpret and evaluate their dynamics and determine which ones may have a relevant future. After testing this method on different types of networks (two of which are presented in this paper), we showed that it is able to detect weak signals and provides a quantifiable signature that supports better decision making.

Author 1: Hiba Abou Jamra
Author 2: Marinette Savonnet
Author 3: Eric Leclercq

Keywords: Weak signals; network analysis; network topology; graphlets

PDF
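The graphlet-based signature described in the abstract can be illustrated with the simplest case, size-3 graphlets (open wedges and triangles). This is a generic counting sketch, not the authors' Cocktail implementation:

```python
from itertools import combinations

def count_size3_graphlets(edges):
    """Count connected 3-node graphlets: triangles and open wedges.

    Counts like these form the quantitative signature of local network
    structure that the framework inspects for weak signals.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    tri_apex = 0
    wedges = 0
    for node, nbrs in adj.items():
        for a, b in combinations(sorted(nbrs), 2):
            if b in adj[a]:
                tri_apex += 1  # each triangle is seen once per apex (3x total)
            else:
                wedges += 1    # open wedge centered at `node`
    return tri_apex // 3, wedges

# A square with one diagonal: two triangles sharing edge A-C, plus two wedges.
tri, wed = count_size3_graphlets(
    [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")])
```

Comparing such counts over time, or against a randomized network, is one common way to flag unusual local structure.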

Paper 3: Soft-sensor of Carbon Content in Fly Ash based on LightGBM

Abstract: The soft-sensor method for carbon content in fly ash predicts the carbon content of boiler fly ash by modeling distributed control system (DCS) data from thermal power stations. A novel data-driven soft-sensor model that combines data pre-processing, feature engineering, and hyperparameter optimization is presented for application to the carbon content of fly ash. First, steady-state data are extracted using data mining techniques. Second, twenty characteristics that may affect the carbon content in fly ash are identified as variables through feature engineering. Third, a LightGBM prediction model that captures the relation between the carbon content in fly ash and various DCS parameters is established, and its prediction accuracy is improved with the Bayesian optimization (BO) algorithm. Finally, to verify the prediction accuracy of the proposed model, a case study is carried out using data from a coal-fired boiler in China. Results show that the proposed method yields the best prediction accuracy and closely approximates the non-linear relationships between variables.

Author 1: Liu Junping
Author 2: Luo Hairui
Author 3: Huang Xiangguo
Author 4: Peng Tao
Author 5: Zhu Qiang
Author 6: Hu XinRong
Author 7: He Ruhan

Keywords: LightGBM; carbon content; fly ash; soft-sensor; feature engineering; Bayesian optimization

PDF
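The steady-state extraction step mentioned in the abstract can be sketched with a simple rolling-deviation filter; the window size and threshold below are hypothetical, and the paper's actual data mining procedure may differ:

```python
import statistics

def steady_state_windows(series, window=5, max_std=0.5):
    """Start indices of windows whose population std-dev stays below max_std.

    A crude steady-state detector: DCS samples inside a low-variance
    window are treated as steady-state operating data; transient swings
    are discarded before model training.
    """
    return [i for i in range(len(series) - window + 1)
            if statistics.pstdev(series[i:i + window]) <= max_std]

# Five steady samples followed by a transient swing.
load = [100.0, 100.1, 99.9, 100.0, 100.05, 120.0, 90.0, 150.0, 80.0, 110.0]
starts = steady_state_windows(load)
```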

Paper 4: Research on Intelligent Natural Language Texts Classification

Abstract: Natural language texts are ubiquitous in many aspects of social life, and classification is of great significance for their efficient use and normalized preservation. Manual text classification is labor-intensive, experience-dependent, and error-prone; therefore, research on intelligent classification of natural language texts has great social value. In recent years, machine learning technology has developed rapidly, and researchers have carried out a great deal of work on text classification based on machine learning, with research methods showing increasing diversity. This paper summarizes and compares text classification methods from three main aspects, namely technical routes, text vectorization methods, and classification information processing methods, in order to provide a reference for further research and to explore the development direction of text classification.

Author 1: Chen Xiao Yu
Author 2: Zhang Xiao Min

Keywords: Machine learning; natural language texts; text vectorization; classification information processing

PDF

Paper 5: Image Analysis of Heat-Affected Zone of Laser-Cut Heat-Resistant Paper using Otsu Thresholding Technique

Abstract: Since ancient times, natural fibers have been essential in paper production and packaging fabrication. However, beauty-marring carbonization, or the heat-affected zone (HAZ) generated during the laser cutting of paper materials, has led to an intriguing discussion on the possibility of reducing this defect zone. Thus, paper loaded with aluminum hydroxide [Al(OH)3] (AH) was prepared and tested with laser cutting. There were two input parameters of laser processing: the ratio of laser power to its maximum, and the cutting speed. The study examined the HAZ area of paper with AH loaded at 0–40% on a dry pulp basis. The HAZ area was measured using image processing software, with the Otsu thresholding technique (OTT) applied for HAZ area determination. The image analysis showed that the smallest HAZ area was achieved on samples with AH loaded at 40%; the optimal condition for these samples was a 60% power ratio and a cutting speed of 20 mm/s. Cutting speed was the most significant parameter for producing the smallest HAZ area; the laser processing parameters were therefore optimized to achieve a minimum HAZ area, making it possible to reduce the dark color appearance of the material surfaces. The study also found that the Otsu thresholding technique was valuable for HAZ area determination and for reducing the time consumed by the image analysis.

Author 1: Shalida Mohd Rosnan
Author 2: Kong Peifu
Author 3: Toshiharu Enomae
Author 4: Nakagawa-Izumi Akiko

Keywords: Heat-affected zone; image analysis; image processing; laser cutting; thresholding

PDF
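The Otsu thresholding technique (OTT) used here for HAZ area determination picks the grayscale threshold that maximizes the between-class variance of the image histogram. A minimal pure-Python sketch of the standard algorithm (pixel values are assumed to be integers in 0-255):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: the threshold maximizing between-class variance.

    Pixels at or below the returned threshold form one class (e.g. the
    dark HAZ region); the rest form the other.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Once thresholded, the HAZ area is simply the count of pixels in the dark class times the per-pixel area.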

Paper 6: Hadoop as a Service: Integration of a Company’s Heterogeneous Data to a Remote Hadoop Infrastructure

Abstract: Data analysis is very important for the development of any business today. It helps to identify organizational bottlenecks, optimize business processes, and foresee customers’ demands and behavior, and it provides summarized data that can help reduce costs and increase profits. Having this information when designing new products or services greatly increases their chances of success, and thus provides an additional competitive advantage over other businesses. However, a single data analyst with a computer is far from enough in the era of big data. Powerful data analytical software tools exist, but they are either expensive or hard to deploy and require multiple high-performance servers to run. Buying expensive hardware and software and hiring highly qualified IT experts is not affordable for all companies, especially smaller ones and start-ups. Therefore, this article proposes an architecture for integrating a company’s heterogeneous data (stored in a database of any type, or in the file system) with a remote Hadoop cluster, providing powerful data analytical services on demand. This is an affordable and cost-effective cloud-based solution, suitable for a company of any size. Businesses are not required to buy any hardware or software, but instead use the data analytical services on demand, paying a small processing fee per request or by subscription.

Author 1: Yordan Kalmukov
Author 2: Milko Marinov

Keywords: Hadoop integration; data analytical tools; heterogeneous data integration; Hadoop distributed file system (HDFS); HBase; hive

PDF
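The first step of such an integration, mapping heterogeneous source records onto one common schema before loading them into HDFS or HBase, can be sketched as follows. The field names and mapping are hypothetical illustrations, not the authors' actual design:

```python
def to_common_schema(record, mapping):
    """Rename source-specific fields to the shared target schema.

    `mapping` is {target_field: source_field}; fields absent from the
    source record are skipped rather than raising an error, so records
    from different systems can share one target layout.
    """
    return {target: record[source]
            for target, source in mapping.items()
            if source in record}

# Hypothetical mapping from a legacy export to a common schema.
mapping = {"customer_id": "custno", "amount": "total"}
row = to_common_schema({"custno": 17, "total": 99.5, "extra": "x"}, mapping)
```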

Paper 7: Method for Estimation of Oleic Acid Content in Soy Plants using Green Band Data of Sentinel-2/MSI

Abstract: A method for estimating oleic acid content in soy plants using green band data of the Sentinel-2/MSI (Multi Spectral Imager) is proposed. Conventionally, the vitality of agricultural plants is estimated with the NDVI (Normalized Difference Vegetation Index). However, the spatial resolution of the Near Infrared (NIR) band of Sentinel-2/MSI used for calculating NDVI is 20 m. Therefore, a method for estimating vitality with only the green band data of Sentinel-2/MSI is proposed here. Through regression analysis of the satellite data, drone-mounted NDVI camera data, and component analysis data obtained by gas chromatography, correlations were found between NDVI, the green band data of the optical sensor (MSI) onboard Sentinel-2, and the component analysis data. It was also found that the new soy plant variety, Saga University brand HO1, contains about 50% more oleic acid than the conventional variety, Fukuyutaka.

Author 1: Kohei Arai
Author 2: Yoshitomo Hideshima
Author 3: Yuuhi Iwaki
Author 4: Ryota Ito

Keywords: Oleic acid; NDVI (normalized difference vegetation index); regressive analysis; soy plant; Multi Spectral Imager: MSI

PDF
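The two quantities behind the abstract's regression analysis are standard: NDVI computed from near-infrared and red reflectance, and an ordinary least-squares fit relating one band to another. A minimal sketch of the arithmetic only (the sample values below are illustrative, not the paper's data):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx
```

With such a fit between green-band reflectance and drone-measured NDVI, the 10 m green band can stand in for the 20 m NIR-based NDVI.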

Paper 8: Portable ECG Monitoring System

Abstract: The number of patients with cardiovascular diseases (CVD) is rapidly increasing worldwide. Many CVDs manifest their symptoms some time before the onset of any adverse or catastrophic event, so early detection of cardiac abnormalities is extremely important. To reduce the risks of life-threatening arrhythmia, it is necessary to develop and introduce portable systems for monitoring the state of the heart during free activity. This paper presents the second-generation prototype of a portable cardiac analyzer and the developed system for non-invasive cardiac diagnostics. The portable cardiac analyzer mainly consists of an ADC for acquiring the electrocardiosignal (ECS) and an STM32L151xD microcontroller. To record operational data on the current ECS, a block of non-volatile high-speed MRAM memory is connected to the microcontroller. The communication unit is based on the universal combo module SIM868 from SIMCOM, which supports data exchange in GSM/GPRS networks. The developed ECG monitoring system enables decision-making at different levels (cardiac analyzer, server, doctor), as well as the exchange of information necessary for an effective diagnostic and treatment process. We evaluated the performance of the developed system: the signal-to-noise ratio of the output signal is favorable, and all the features needed for clinical evaluation (P waves, QRS complexes, and T waves) are clearly readable.

Author 1: Zhadyra N. Alimbayeva
Author 2: Chingiz A. Alimbayev
Author 3: Nurlan A. Bayanbay
Author 4: Kassymbek A. Ozhikenov
Author 5: Oleg N. Bodin
Author 6: Yerkat B. Mukazhanov

Keywords: Electrocardiography; portable ECG device; ECG monitoring systems; cardiovascular diseases; mobile healthcare

PDF
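The signal-to-noise ratio cited in the performance evaluation is conventionally expressed in decibels from the mean power of the signal and noise samples. A minimal sketch of that calculation (the sample values are illustrative, not the device's measurements):

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels from raw sample sequences.

    Uses mean power (mean of squared samples) for both sequences:
    SNR_dB = 10 * log10(P_signal / P_noise).
    """
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10.0 * math.log10(p_signal / p_noise)
```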

Paper 9: Development of an IoT Device for Measurement of Respiratory Rate in COVID-19 Patients

Abstract: During the COVID-19 pandemic, patients who required face-to-face attention and tested positive, even showing signs of high risk, were forced to isolate themselves in their own homes immediately, without adequate medical monitoring. Continuous remote monitoring of their vital signs would have helped avoid subsequent hospitalization caused by the progression of the virus. Using deterministic design methods, a system to measure respiratory rate through impedance pneumography was proposed, amplifying microvolt signals so that the data could be read and processed by a microcontroller. An embedded algorithm was designed to measure inspiration and expiration time. The captured values were sent via WiFi to a server for later evaluation by the clinician. The key findings of this study are as follows: (1) a remote respiratory-rate monitoring system was developed, displaying values calculated from impedance pneumography signals; (2) the correlation between the respiratory rate values of a patient during exercise and at rest, as measured by a physician and by the device, was 0.96; and (3) when analyzing separately the data from the resting test and the exercise test, the device showed average error percentages of -5.36% and +1.97%, respectively. In conclusion, this device has practical applications for acute and chronic respiratory diseases, where respiratory rate is an indicator of the progression of these conditions.

Author 1: Jean Pierre Tincopa
Author 2: Paulo Vela-Anton
Author 3: Cender U. Quispe-Juli
Author 4: Anthony Arostegui

Keywords: Covid-19; respiration rate; internet of things; vital signs; hardware

PDF
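The final step of the embedded algorithm, turning measured per-breath inspiration and expiration times into a respiratory rate, reduces to a small calculation. A sketch of that arithmetic only, not the authors' firmware:

```python
def respiratory_rate_bpm(insp_times, exp_times):
    """Breaths per minute from per-breath inspiration/expiration durations.

    Each full breath cycle lasts inspiration + expiration (seconds);
    the rate is 60 seconds divided by the mean cycle duration.
    """
    cycles = [i + e for i, e in zip(insp_times, exp_times)]
    return 60.0 / (sum(cycles) / len(cycles))
```

For example, 2 s inspirations and 2 s expirations give 4 s cycles, i.e. 15 breaths per minute.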

Paper 10: Design and Implementation of Teaching Assistant System for Mechanical Course based on Mobile AR Technology

Abstract: Augmented reality technology has become a hot spot in many fields because of its unique real-time interaction and its ability to add virtual objects to 3D video space. To help students better understand and master traditional mechanical courses, and to overcome the limitations of courseware-based teaching, a teaching assistant system based on the Vuforia platform was designed and developed by combining augmented reality technology with existing teaching methods. On the smartphone application side, the 3D model of a part can be displayed by scanning its mechanical engineering drawing, and users can rotate, zoom, and take section views of the model. The system realizes 3D visualization of, and interactive operation on, the 2D drawings in textbooks, making teaching more vivid, diverse, and efficient. The results show that this method can not only promote the cognition of spatial relations through visualization, but also create an experience-based situational learning environment, which can effectively improve learning outcomes and enhance students' interest and enthusiasm.

Author 1: Jinglei Qu
Author 2: Bingxin Ma
Author 3: Lulu Zheng
Author 4: Yuhui Kang

Keywords: Augmented reality; teaching assistance; mechanical teaching; visual operation

PDF

Paper 11: Extended Max-Occurrence with Normalized Non-Occurrence as MONO Term Weighting Modification to Improve Text Classification

Abstract: The increased volume of data due to advancements in the internet and related technology has made the classification of text documents a popular demand. Providing better representations of the feature vector, by setting appropriate term weight values using supervised term weighting schemes, improves performance in classifying text documents. The state-of-the-art term weighting scheme MONO, with variants TF-MONO and SRTF-MONO, improves text classification by considering the values of non-occurrences. However, the MONO strategy suffers setbacks when weighting terms with non-uniform values in their interclass distinguishing power. In this study, extended max-occurrence with normalized non-occurrence (EMONO), with variants TF-EMONO and SRTF-EMONO, is proposed, in which the EMO value is determined as an interclass extension of MO; this addresses the problematic weighting behavior of MONO, which neglects the occurrence of classes with short-distance document frequencies among the non-uniform values. The classification performance of the proposed schemes is compared with the MONO variants on the Reuters-21578 dataset using the KNN classifier. Chi-square-max was used to conduct experiments at different feature sizes using micro-F1 and macro-F1. The experimental results explicitly showed that the proposed EMONO outperforms the MONO variants at all feature sizes with an EMO parameter value of 2 for the number of classes in the MO extension. SRTF-EMONO showed the best performance, with micro-F1 scores of 94.85% and 95.19% for the smallest and largest feature sizes, respectively. Moreover, this study also emphasizes the significance of interclass document frequency values, alongside non-occurrence values, in improving text classification through term weighting schemes.

Author 1: Cristopher C. Abalorio
Author 2: Ariel M. Sison
Author 3: Ruji P. Medina
Author 4: Gleen A. Dalaorao

Keywords: Extended MO; normalized NO; text classification; term weighting scheme

PDF
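The max-occurrence (MO) idea underlying the MONO/EMONO family can be illustrated with the per-class document-frequency ratio of a term. This is a simplified illustrative statistic only, not the authors' exact weighting formula:

```python
def mo_statistic(doc_freq_by_class, docs_per_class):
    """Highest per-class document-frequency ratio for a term.

    A term that appears in most documents of one class but few of the
    others gets a high value, signalling interclass distinguishing power.
    Illustrative only; the full MONO/EMONO formulas are not reproduced.
    """
    return max(df / n for df, n in zip(doc_freq_by_class, docs_per_class))
```

For example, a term in 8 of 10 documents of class 1 but 1 of 10 of class 2 scores 0.8.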

Paper 12: Assessing Digital Readiness of Small Medium Enterprises: Intelligent Dashboard Decision Support System

Abstract: The implications of the Covid-19 global pandemic are driving the transition of SMEs’ business towards digitalization. However, despite using digital platforms, many SMEs are unable to survive. Therefore, this study focuses on a Decision Support System (DSS)-based dashboard model as a new approach to assessing SMEs’ digitalization readiness. Twenty-four appraisal criteria are considered across two dimensions, business and Information Technology (IT), combining the Fuzzy Analytical Hierarchy Process (F-AHP) method for weighting measurement and the Objective Matrix (OMAX) for performance mapping analysis, both embedded in the Business Intelligence (BI) dashboard development. In Riau Province, Indonesia, a total of 118 SMEs participated in this study, and the results revealed the general performance of SMEs to be at an “Average” level, with an index value of 4.95 and index contributions of 3.79, 3.84, 7.75, 4.68, 4.32, and 5.43 for Business Activity (BA), Transaction (TC), Marketing (MC), Management (MG), Micro Environment (MI), and Macro Environment (MA), respectively. Furthermore, the dashboard provides a tracking and analysis system with graphical diagrams tracing each criterion's hierarchy from root cause down to sub-criteria. The DSS dashboard’s information and knowledge have been developed into a promotional framework for stakeholders relevant to a digital business’s success and sustainability performance initiatives.

Author 1: Okfalisa
Author 2: Mahyarni
Author 3: Wresni Anggraini
Author 4: Saktioto
Author 5: B. Pranggono

Keywords: Decision support system; digital readiness; fuzzy analytical hierarchy process; business intelligent dashboard; objective matrix

PDF
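The weighting step can be illustrated with the classic crisp AHP geometric-mean method, a simplification of the paper's Fuzzy-AHP; the 2x2 judgment matrix below is a hypothetical example, not the study's data:

```python
import math

def ahp_weights(matrix):
    """Priority weights from a pairwise-comparison matrix.

    Uses the geometric-mean method: each row's geometric mean is
    normalized so the weights sum to 1. A crisp simplification of the
    fuzzy (F-AHP) weighting used in the paper.
    """
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical judgment: the first criterion matters 3x more than the second.
weights = ahp_weights([[1.0, 3.0], [1.0 / 3.0, 1.0]])
```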

Paper 13: A Novel IoT Architecture for Seamless IoT Integration into University Systems

Abstract: IoT architectures play critical roles in guiding IoT system construction and enhancing IoT integration. However, there is still no standardized IoT architecture that meets the varying requirements of different IoT deployments. Although this has been a focus within the research community, no specific attention has been paid to optimizing an IoT architecture toward seamless IoT integration in educational environments. Moreover, several advanced system aspects have not been considered in designing an optimized IoT architecture, including the need for complete security and privacy support, a highly responsive system, dynamic interactivity, and wide-range IoT connectivity. Such considerations are important given the complexity and multidimensionality of integrating IoT in educational environments. In this paper, we introduce a novel IoT architecture whose main objective is to facilitate the effective integration of IoT into university systems. It also aims at optimizing the IoT-integrated system with advanced aspects that enhance system security, responsiveness, and IoT connectivity. The proposed architecture provides a modular and scalable design of six architectural layers, in addition to a vertical layer that provides security support across the architecture. Only the most relevant and critical layers are included, to maintain a practical trade-off between effective modularity and low complexity. Compared with other IoT architectures, the proposed one ensures high reliability, data management, full security support, responsiveness, and wide coverage while maintaining acceptable complexity.

Author 1: Wafa Altwoyan
Author 2: Ibrahim S. Alsukayti

Keywords: Internet of things; architecture; smart campus; education

PDF
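A layered architecture with a cross-cutting security layer can be sketched as a message pipeline; the layer behaviors below are illustrative stand-ins, not the paper's six-layer design:

```python
def run_layers(message, layers):
    """Pass a message up through ordered architectural layers.

    Each layer is a function that returns an enriched message, or None
    to drop it (e.g. when the cross-cutting security layer rejects it).
    """
    for layer in layers:
        message = layer(message)
        if message is None:
            return None
    return message

# Hypothetical layers: a security check, then a normalization step.
security = lambda m: m if m.get("token") == "valid" else None
normalize = lambda m: {**m, "temp_c": round(m["temp_c"], 1)}

ok = run_layers({"token": "valid", "temp_c": 21.456}, [security, normalize])
rejected = run_layers({"token": "bad", "temp_c": 5.0}, [security, normalize])
```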

Paper 14: Software Defined Network based Load Balancing for Network Performance Evaluation

Abstract: Load balancing distributes incoming network traffic across multiple controllers, improving the availability of the internet for users. The load balancer is responsible for maintaining internet availability to users 24 hours a day, 7 days a week. However, the internet can become unavailable because traditional load balancers are inflexible, costly, and non-programmable when settings must be adjusted, especially when managing network traffic congestion. With increasing numbers of users relying on mobile devices and cloud facilities, the current load balancer has limitations, creating demand for the deployment of a Software-Defined Network (SDN). SDN decouples network control, applications, network services, and forwarding roles, making the network more flexible, affordable, and programmable. Furthermore, SDN load balancing has been found to perform intelligent actions, operate efficiently, and maintain better QoS (Quality of Service) performance. This study proposes the application of SDN-based load balancing, in which pre-defined servers in a server farm receive Internet Protocol (IP) data packets from various clients with an equal share of the load, and orders are processed by each server. Experiments were conducted using Mininet™ with several network topology scenarios (Scenario A, Scenario B, and Scenario C). The parameters used to evaluate load balancing in SDN are throughput, delay, and jitter. The findings indicate that Scenario A gives high throughput, Scenarios B and C produce low jitter values, and Scenario C produces the lowest delay. SDN enables multi-path adaptive routing to find the best route for better network performance.

Author 1: Omran M. A. Alssaheli
Author 2: Z. Zainal Abidin
Author 3: N. A. Zakaria
Author 4: Z. Abal Abas

Keywords: Load balancing; software-defined network (SDN); SDN load balancing; network performance; Mininet

PDF
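The "equal share of the load" policy described in the abstract is classically round-robin. A minimal sketch of that policy only; a real SDN controller would install flow rules in switches rather than route packets in Python:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hand incoming packets to the pre-defined server farm in equal turns."""

    def __init__(self, servers):
        self._servers = cycle(servers)  # endless round-robin iterator

    def route(self, packet):
        """Return the server that should handle this packet."""
        return next(self._servers)

lb = RoundRobinBalancer(["s1", "s2", "s3"])
assigned = [lb.route(f"pkt-{i}") for i in range(6)]
```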

Paper 15: An Efficient Productive Feature Selection and Document Clustering (PFS-DocC) Model for Document Clustering

Abstract: In mining, document clustering aims to reduce the effective document size by constructing a clustering model, which is essential in various web-based applications. Over the past few decades, various mining approaches have been analysed and evaluated to enhance the document clustering process and attain better results; however, in most cases the documents are disorganized, which degrades performance by reducing accuracy. The data instances need to be organized, and a productive summary has to be generated for each cluster. The summary, or description of the document, should convey the information to users without requiring any further analysis and should make it easier to scan associated clusters. This is achieved by identifying the most relevant and influential features used to generate the cluster. This work provides a novel approach known as the Productive Feature Selection and Document Clustering (PFS-DocC) model. Initially, productive features are selected from the input dataset, DUC2004, a benchmark dataset. Next, the document clustering model is applied to single and multiple clusters, where the generated output is required to be extractive and generic. The model provides appropriate and suitable summaries, well-suited to web-based applications. Experimentation was carried out on the publicly available benchmark dataset, and the evaluation shows that the proposed PFS-DocC model gives superior outcomes with a higher ROUGE score.

Author 1: Perumal Pitchandi

Keywords: Benchmark standards; document clustering; productive feature selection; multiple clustering; web applications

PDF
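The idea of describing a cluster by its most influential terms can be illustrated with a frequency-based summary. This is a toy stand-in for the PFS-DocC feature selection, not the authors' method:

```python
from collections import Counter

def cluster_summary(docs, top_n=3):
    """Most frequent terms across a cluster's documents.

    A crude descriptive summary: the terms kept are those that best
    characterize the cluster, so users can scan clusters without
    reading every document.
    """
    counts = Counter(word for doc in docs for word in doc.lower().split())
    return [word for word, _ in counts.most_common(top_n)]

summary = cluster_summary(["cloud storage cloud", "cloud compute storage"])
```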

Paper 16: Smart Elevator Obstruction Detection System using Image Classification

Abstract: This paper proposes an approach that leverages real-time image classification to improve elevator safety. Elevators are a necessity for most multi-story buildings and, as a result, play a crucial role in the lives of millions of people around the world. Despite this, there has been limited advancement in the technology used for elevator door operators. In the current system, elevators use multiple infrared transmitter/receiver sensor pairs to detect obstructions between the doors. This does not effectively detect smaller objects, such as pets, small children, and pet leashes, between the elevator doors, which has led to thousands of tragic fatalities. This paper proposes an approach to tackle this challenge by leveraging binary image classification to determine whether there is an obstruction between the elevator doors. The study includes the construction of a novel dataset of over 10,000 images and a comprehensive evaluation and comparison of several machine learning models for the proposed system. The results have produced novel findings that can be used to significantly improve the safety and reliability of elevator door operators, preventing tragic fatalities every year.

Author 1: Preethi Chandirasekeran
Author 2: Shridevi S

Keywords: Binary image classification; machine learning; deep learning; elevator safety

PDF
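The detection task can be contrasted with a naive rule-based baseline: flag an obstruction when enough pixels deviate from an empty-door reference frame. This sketch is such a baseline, not the paper's learned classifier; frames are flat grayscale pixel lists and the thresholds are hypothetical:

```python
def obstruction_detected(frame, empty_frame, diff_thresh=30, frac_thresh=0.05):
    """Flag an obstruction when enough pixels differ from the reference.

    Counts pixels whose absolute difference from the empty-door frame
    exceeds diff_thresh; returns True when their fraction exceeds
    frac_thresh. Thin objects like leashes change only a few pixels,
    which is exactly why a learned classifier outperforms such rules.
    """
    changed = sum(1 for a, b in zip(frame, empty_frame)
                  if abs(a - b) > diff_thresh)
    return changed / len(frame) > frac_thresh

empty = [100] * 100               # reference: doors clear
leash = [200] * 10 + [100] * 90   # a thin object alters a few pixels
```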

Paper 17: The SMILE, A Cyber Pedagogy based Learning Management System Models

Abstract: The study's purpose is to create an LMS model adapted to the characteristics of university students, enhancing the learning experience by utilizing various multidimensional learning resources in cyber pedagogy. This research and development study used the Analyze, Design, Develop, Implement, and Evaluate (ADDIE) instructional design framework, as well as the Waterfall system development model, to develop learning materials and infrastructure. The study involved 50 students from the Bali Institute of Technology and Business, as well as five lecturers and six judges at the expert test stage, namely learning media experts, learning design experts, and teaching experts, chosen through purposive sampling. The SMILE Model (Simple, Multidimensional, and Interactive Learning Ecosystem) is designed to meet the learning needs and expectations of today's largest market share in higher education, the millennial generation. The SMILE Model was developed successfully with ongoing assistance from the researchers' students, particularly in the E-Tourism course. The implementation combines the university's E-Learning system with Microsoft Teams as an alternative virtual learning platform. During the COVID-19 pandemic, this was considered the new face-to-face norm.

Author 1: Paula Dewanti
Author 2: I Made Candiasa
Author 3: I Made Tegeh
Author 4: I Gde Wawan Sudatha

Keywords: ADDIE; cyber pedagogy; e-learning; higher education; learning management system

PDF

Paper 18: Location-based Mobile Application for Blood Donor Search

Abstract: Technological advances and the massive use of mobile devices have led to the exponential evolution of mobile applications in the health sector. Blood donation centers frequently suffer blood shortages due to a lack of donations, which is why requests for blood donors of a specific blood group, urgently needed for a transfusion, are frequently seen on social networks. Mobile applications for blood donation are crucial in the health sector, since they allow donors and blood donation centers to communicate immediately and coordinate with each other, minimizing the time needed to complete the donation process. The present work developed a location-based mobile application for the search of blood donors, with the objective of increasing the number of donors, achieving greater population reach, and reducing the time required to find blood donors. The results obtained show a significant increase of 39.58% in the number of donors, a 53.2% reduction in search time, and a greater population reach.

Author 1: Orlando Iparraguirre-Villanueva
Author 2: Fernando Sierra-Linan
Author 3: Michael Cabanillas-Carbonell

Keywords: Blood banks; location; mobile applications; donor; applicant; blood bank; mobile apps

PDF
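The core of a location-based donor search is ranking compatible donors by great-circle distance from the patient. A minimal sketch using the haversine formula; the donor records and the exact-group compatibility rule are simplifying assumptions, not the app's actual data model:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_donors(patient, donors, blood_group, k=3):
    """Names of the k closest donors with a matching blood group.

    Donors are (name, group, lat, lon) tuples; compatibility is
    simplified to an exact group match here.
    """
    matches = [d for d in donors if d[1] == blood_group]
    matches.sort(key=lambda d: haversine_km(patient[0], patient[1], d[2], d[3]))
    return [d[0] for d in matches[:k]]

donors = [("Ana", "O+", 0.0, 1.0), ("Luis", "O+", 0.0, 0.1),
          ("Eva", "A+", 0.0, 0.01)]
closest = nearest_donors((0.0, 0.0), donors, "O+", k=2)
```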

Paper 19: Soil Nutrients Prediction and Optimal Fertilizer Recommendation for Sustainable Cultivation of Groundnut Crop using Enhanced-1DCNN DLM

Abstract: The cultivation of crops and their production yields depend hugely on the fertility composition of the soil in which the crops are cultivated. The prime fertility factors contributing to the health of the soil are the available soil nutrients. Varying climatic conditions and improper cultivation patterns have resulted in unpredictable growth and yield of groundnut crops; one of the major causes of the fluctuations seen in groundnut pod growth patterns and production is the differing soil nutrient composition of the land under cultivation. The unnecessary use of excessive artificial fertilizers to boost soil strength, without properly diagnosing the exact nutritional needs of the soil required for conducive crop growth, has led to an imbalanced distribution of the soil’s major macro-nutrient constituents, namely phosphorus (P), potassium (K), and nitrogen (N). In this research article, we present a detailed investigation of a nutrient prediction mechanism on soil nutrient datasets from a specific geographic location in one of the major groundnut-cultivating districts (Villupuram) in the state of Tamil Nadu, and propose a soil nutrient prediction scheme and optimal fertilizer recommendation model for sustainable cultivation of the groundnut crop using an Enhanced 1DCNN DLM. The model utilizes the naturally compact, robust features of a 1DCNN to classify the major macro-nutrients (N, P, K) into low, medium, and high values. Based on the generated heatmap results, the correlation between certain macronutrients and the presence of their corresponding micronutrients is classified. The proposed model has been compared, in terms of performance and error measures, with existing SVM, Naïve Bayes, and ANN models, and it outperforms all the compared baseline models while preserving the original data distribution, with an overall accuracy of 99.78%.

Author 1: Sivasankaran S
Author 2: K. Jagan Mohan
Author 3: G. Mohammed Nazer

Keywords: Soil nutrients; Enhc-ID-CNN DLM; nutrient classification; fertilizer recommendation

PDF

Paper 20: Performance Evaluation of Different Supervised Machine Learning Algorithms in Predicting Linear Accelerator Multileaf Collimator Positioning’s Accuracy Problem

Abstract: Radiation oncology is one of the fields that employs machine learning to automate quality assurance tests so that errors and defects can be reduced, avoided, or eliminated as much as possible during tumor therapy with a Linear Accelerator with MultiLeaf Collimator (Linac MLC). The majority of machine learning applications have used supervised rather than unsupervised learning algorithms. However, in most cases there is a clear bias in deciding which supervised algorithm to use, and prediction findings may be less accurate as a result. In this study, evidence is presented for a novel application of the logistic regression technique to predict Linac MLC positioning accuracy, which achieved 98.68 percent prediction accuracy with robust and consistent performance across several sets of Linac data. This evidence was obtained by comparing the performance of various supervised machine learning algorithms (Logistic Regression, Decision Tree, Support Vector Machine, Random Forest, Naive Bayes, and K-Nearest Neighbor) in predicting the Linac MLC positioning accuracy problem, using labelled leaf-positioning displacement datasets for training and testing. For each method, two parameters were used to evaluate performance: prediction accuracy and the receiver operating characteristic curve. Based on that evaluation, a selection sequence was proposed for supervised machine learning algorithms to achieve near-optimal prediction performance for the Linac MLC leaf positioning accuracy problem. As a result, the selection bias, as well as the negative side effects that could have occurred (e.g., an ineffective preventive maintenance plan for the Linac MLC to avoid and solve causes of inaccurate leaf displacement such as motor fatigue and stuck leaves), were successfully avoided.

Author 1: Hamed S. El-Ghety
Author 2: Ismail Emam
Author 3: AbdelMagid M. Ali

Keywords: Linear accelerators; logistic regression; performance evaluation; prediction methods; supervised learning

PDF

Paper 21: Collaborative Course Assignment Problem to Minimize Unserved Classes and Optimize Education Quality

Abstract: This work proposes a collaborative course assignment model among universities. It differs from existing studies on educational assignment problems or course timetabling, whose scope is only within a single institution or department. In this work, the system consists of several universities that collaborate so that lecturer exchange is possible and conducted automatically: each university shares its courses and lecturers. The optimization minimizes the number of unserved classes and improves education quality, and cloud-theory-based simulated annealing is deployed to optimize the assignment. This model is then benchmarked against two non-collaborative models: the first minimizes the unserved classes only, while the second minimizes the unserved classes and improves education quality. The simulation results show that the proposed assignment model is better at minimizing unserved classes and improving education quality, reducing the unserved-classes ratio by 89 to 92 percent compared with the non-collaborative models.

Author 1: Purba Daru Kusuma
Author 2: Ratna Astuti Nugrahaeni

Keywords: Course assignment problem; simulated annealing; collaborative model; online teaching; combinatorial problem

PDF
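The cloud-theory-based simulated annealing used in the paper above builds on the standard simulated-annealing loop. As a minimal, generic illustration of that core loop (a toy one-dimensional objective stands in for the course-assignment model; the cloud-theory variant itself is not shown):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.95, steps=500, seed=42):
    """Generic simulated annealing: always accept improvements, and accept
    worse moves with probability exp(-delta / T), where T cools each step."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # Downhill moves are always taken; uphill moves pass a Metropolis test.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy objective: minimize (x - 3)^2, starting from x = 0.
best_x, best_f = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    x0=0.0,
)
```

In an assignment setting, `x` would be a lecturer-to-class mapping and `neighbor` a small swap or reassignment; the acceptance rule is unchanged.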

Paper 22: A Novel Morphological Analysis based Approach for Dynamic Detection of Inflected Gujarati Idioms

Abstract: The Gujarati language is primarily spoken by Gujarati people living in the state of Gujarat, India, where it is the medium of communication. In Gujarati, the ‘rudhiprayog’, i.e. idiom, is a very popular form of expression and represents the real flavour of the language. An idiom is a group of words that says one thing literally but means something else when explored in context. Like Gujarati verbs, idioms can be written in many forms. Because the same Gujarati idiom has different morphological forms, idiom identification is a challenging task for any machine translation system (MTS); the many forms also make idiom translation more complicated. In the current paper, Gujarati idioms in their different inflected forms are collected, analyzed and classified based on their ending words. After classification, their base or root forms are identified. The base idiom forms and their possible inflected forms are morphologically analyzed, and rules are generated based on the relationship between the base form and its possible inflected forms. These rules are used to generate possible idiom forms from the base form, so that a Gujarati idiom in any valid inflected form can be dynamically detected in Gujarati input text using the proposed novel morphological-analysis-based approach. The results are encouraging enough to apply the proposed rule model to natural language processing tasks as well as machine translation of Gujarati idioms.

Author 1: Jatin C. Modh
Author 2: Jatinderkumar R. Saini
Author 3: Ketan Kotecha

Keywords: Gujarati; idiom; machine translation system (MTS); morphological analysis; natural language processing (NLP); rudhiprayog

PDF

Paper 23: Towards Employee Perceived Satisfaction in using Citrix Workspace Technology

Abstract: In view of the increasing use of technologies and their impact on businesses, companies now use technology to accelerate business processes. However, some companies still fail to achieve the desired results. Technology in a company succeeds only when the people working there accept it wholeheartedly and show diligence in learning and using it. In this research we evaluated perceptions of a technology called Citrix Workspace in a Saudi company, to see whether workers are satisfied with it and how useful they find it at work. The Technology Acceptance Model (TAM) was used as the theoretical framework to study the different variables. Data collected from employees within the organization were analyzed to determine the level of satisfaction with the technology and to identify the factors that keep workers away from its remote use. The results also help managers decide how Citrix Workspace or similar technologies can be used at the company’s remote sites.

Author 1: Bandar Ali Al Fehaidi
Author 2: Muhammad Asif Khan

Keywords: Citrix workspace; employee satisfaction; secure remote access; mobile worker; TAM

PDF

Paper 24: Bearing Fault Detection based on Internet of Things using Convolutional Neural Network

Abstract: In the age of the industrial revolution, industry and machinery are of the utmost importance to the development of human civilization. As industries depend on their machines, regular maintenance is required; when a machine is too large for humans to monitor, a system is needed to observe it. This paper proposes a convolutional neural network-based system that detects faults in industrial machines by diagnosing motor sounds captured with accelerometer sensors. The sensor data are collected and augmented into 261,756 samples, split into training (70%) and test (30%) sets. The data are sent to a server through a wireless sensor network and decomposed using the discrete wavelet transform (DWT) before fault detection. The study shows that custom CNN architectures surpass the transfer-learning-based MobileNetV2 fault diagnosis model: the system detects faults with up to 99.64% accuracy and 99.83% precision with MobileNetV2 pre-trained on the ImageNet dataset, while the 1D and 2D convolutional architectures perform excellently with 100% accuracy and 100% precision.

Author 1: Sovon Chakraborty
Author 2: F. M. Javed Mehedi Shamrat
Author 3: Rasel Ahammad
Author 4: Md. Masum Billah
Author 5: Moumita Kabir
Author 6: Md Rabbani Hosen

Keywords: Accuracy; convolution 1D; convolution 2D; data loss; faulty machinery; mobileNetV2; precision

PDF

Paper 25: Three Layer Authentications with a Spiral Block Mapping to Prove Authenticity in Medical Images

Abstract: Digital medical images can be manipulated by unauthorized persons due to advanced communication technology, so verifying their integrity and authenticity has become an important issue. This paper proposes a self-embedding watermark using a spiral block mapping for tamper detection and restoration. Block-based coding with a block size of 3×3 is applied to perform self-embedding watermarking with two authentication bits and seven recovery bits. The authentication bits are obtained from a set of conditions between the sub-block and the block image, together with the parity bits of each sub-block. The authentication and recovery bits are embedded in the least significant bits, with the recovery bits placed into different sub-blocks according to the proposed spiral block mapping. The watermarked images were tested under various tampering attacks such as blurring, unsharp masking, copy-move, mosaic, noise, removal, and sharpening. The experimental results show that the scheme achieved a PSNR value of about 51.29 dB and an SSIM value of about 0.994 on the watermarked image, with tamper localization accuracy of 93.8%. In addition, the proposed scheme does not require external information to generate the recovery bits, and it was able to recover the tampered image with a PSNR value of 40.45 dB and an SSIM value of 0.994.

Author 1: Ferda Ernawan
Author 2: Afrig Aminuddin
Author 3: Danakorn Nincarean
Author 4: Mohd Faizal Ab Razak
Author 5: Ahmad Firdaus

Keywords: Fragile watermarking; self-embedding; image authentication; self-recovery; medical image; spiral block mapping

PDF
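The least-significant-bit embedding that the scheme above relies on can be illustrated in isolation. This sketch shows generic LSB write/read on a flattened 3×3 pixel block; the paper’s actual authentication/recovery bit generation and spiral mapping are omitted, and the pixel and bit values below are made up:

```python
def embed_lsb(pixels, bits):
    """Write one watermark bit into the least significant bit of each pixel."""
    assert len(bits) <= len(pixels)
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the watermark bit
    return out

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

cover = [200, 113, 54, 97, 255, 0, 181, 66, 42]   # one 3x3 block, flattened
mark  = [1, 0, 1, 1, 0, 0, 1, 0, 1]               # e.g. 2 auth bits + 7 recovery bits
stego = embed_lsb(cover, mark)
```

Since each pixel changes by at most 1 grey level, distortion stays small, which is why LSB schemes can reach the high PSNR values reported.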

Paper 26: Assessing Node Trustworthiness through Adaptive Trust Threshold for Secure Routing in Mobile Ad Hoc Networks

Abstract: Mobile ad-hoc networks (MANETs) have become popular and widely used in the field of communication. However, communication through these networks faces many security challenges due to the presence of malicious nodes. The aim of this article is to present a novel adaptive-threshold, trust-based approach for isolating malicious nodes to establish secure routing between source and destination. Many existing cryptographic methods are complex and do not properly address the elimination of malicious nodes. Several trust-dependent mechanisms have been proposed to supplement traditional cryptography-based security schemes, but most of these approaches use direct trust and compare it with a static trust threshold. This article proposes a novel method, secured trust with adaptive threshold (STAT), which combines an adaptive threshold technique (APTT) with a secure trust-based approach (STBA) to evaluate node trustworthiness for efficient routing. A node’s secure trust is calculated from three tiers of observations (direct, neighbor, and self-historical) to enrich the trust factor, and the adaptive trust threshold is determined dynamically from network parameters. Each node’s secure trust is compared with the computed adaptive threshold to isolate malicious nodes from routing. The proposed method is compared with routing without any trust calculation and with trust-based routing using a static threshold. Results show significant performance gains in metrics such as packet delivery ratio, delay, throughput, false-positive detection ratio and packet drop ratio. STAT effectively isolates malicious nodes and establishes secure routing.

Author 1: M Venkata Krishna Reddy
Author 2: P. V. S. Srinivas
Author 3: M. Chandra Mohan

Keywords: Node trustworthiness; misbehaving nodes; secure trust; static threshold; adaptive threshold; secure routing

PDF
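The three-tier trust combination described above can be illustrated with a toy sketch. The weights, the mean-based adaptive rule, and the node values below are hypothetical stand-ins for illustration only, not the paper’s actual STAT formulas:

```python
def secure_trust(direct, neighbor, historical, w=(0.5, 0.3, 0.2)):
    """Combine three observation tiers into one trust score in [0, 1].
    The weights are illustrative, not taken from the paper."""
    return w[0] * direct + w[1] * neighbor + w[2] * historical

def adaptive_threshold(trusts):
    """A simple adaptive rule for illustration: the threshold follows the
    current mean trust, so it shifts as network conditions change."""
    return sum(trusts) / len(trusts)

# Hypothetical per-node observations (direct, neighbor, self-historical).
nodes = {
    "A": secure_trust(0.9, 0.8, 0.95),
    "B": secure_trust(0.2, 0.3, 0.1),   # behaves like a malicious node
    "C": secure_trust(0.7, 0.75, 0.6),
}
thr = adaptive_threshold(list(nodes.values()))
malicious = [n for n, t in nodes.items() if t < thr]  # excluded from routing
```

The point of the adaptive rule is that the cut-off tracks the network’s current trust distribution instead of being fixed in advance, which is the contrast the abstract draws with static-threshold schemes.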

Paper 27: An Ontology Model for Medical Tourism Supply Chain Knowledge Representation

Abstract: This study developed an application ontology for the medical tourism supply chain (MTSC) domain. The motivation is that current MTSC studies use a descriptive approach to provide knowledge, which is difficult to comprehend and apply. Formalizing MTSC knowledge can give medical tourism stakeholders a shared understanding of the medical tourism business; the MTSC domain therefore requires efficient semantic knowledge representation. Ontology is a popular approach for integrating knowledge and comprehension because it presents the schema and knowledge base in an accurate and relevant manner. This paper employed the ontology engineering methodology, comprising specification, conceptualization, and implementation steps. The conceptual model and facets of the MTSC are proposed. The MTSC objective and scope are tested with semantic competency questions against SPARQL query formulations. Ontology metrics were used to verify ontology quality, including external validation by domain experts. The results show that the MTSC ontology has an appropriate schema design, terminology, and query results.

Author 1: Worawit Janchai
Author 2: Abdelaziz Bouras
Author 3: Veeraporn Siddoo

Keywords: Medical tourism; application ontology; supply chain; ontology development; knowledge representation

PDF

Paper 28: Stochastic Marine Predator Algorithm with Multiple Candidates

Abstract: This work proposes a metaheuristic algorithm that modifies the marine predator algorithm (MPA): the stochastic marine predator algorithm with multiple candidates (SMPA-MC). The modification is conducted in several aspects. First, the proposed algorithm replaces the three fixed, equal-size iteration phases with a linear probability: unlike the original MPA, the selection between exploration and exploitation is made stochastically during the iteration. At the beginning, an exploration-dominant strategy is implemented to increase the exploration probability; during the iteration, the exploration probability then decreases linearly while the exploitation probability increases linearly. The second modification is in the prey’s guided movement. Unlike the basic MPA, where the prey moves toward the elite with a small step size, here several candidates are generated with equal inter-candidate distance, and the best candidate replaces the prey’s current location. The proposed algorithm is then applied to mathematical benchmark functions and a real-world optimization problem in production planning. The simulation results show that, in average fitness score, the proposed algorithm is better than MPA, especially on multimodal functions. They also show that the proposed algorithm creates 9%, 19%, and 30% higher total gross profit than particle swarm optimization, the marine predator algorithm, and the Komodo mlipir algorithm, respectively.

Author 1: Purba Daru Kusuma
Author 2: Ratna Astuti Nugrahaeni

Keywords: Metaheuristic; marine predator algorithm; stochastic system; production planning

PDF

Paper 29: A Novel High-Speed Key Transmission Technique to Avoid Fiddling Movements in e-Commerce

Abstract: To prevent fraud in e-shopping, the High-Speed Key Transmission Technique (HSKT) focuses primarily on safe and effective transmission of payments and product content. The privacy of users, traders, trader information, product content, and the payment procedure are all protected by this framework. High-speed key transmission is also committed to providing an effective, sensible approach to privacy in order to minimize the complexity of applications and the load on consumers. This paper proposes a new key transmission technique that allows users to conduct multi-bank account transactions from a single location. The proposed system prevents unwanted access: the secret key represents the access mechanism, allowing authorized users to verify a transaction if and only if its characteristics meet the secret-key access requirements. Finally, the proposed HSKT improves success rate and throughput by reducing encryption and decryption time, energy consumption, and average delay.

Author 1: A. B. Hajira Be
Author 2: R. Balasubramanian

Keywords: Secret key; encryption; decryption; transaction; HSKT

PDF

Paper 30: Machine Learning Algorithms for Document Classification: Comparative Analysis

Abstract: Automated document classification is a machine learning fundamental that refers to automatically assigning categories to scanned document images. Although it has reached a state-of-the-art stage, the performance and efficiency of the algorithms still need to be verified through comparison. The objective was to identify the most efficient classification algorithms. Experimental data were collected from a total of 1080 students and researchers from Ethiopian universities, together with the Banknote, Crowdsourced Mapping, and VxHeaven datasets provided by UC Irvine. 25% of the respondents felt that KNN is better than the other models. Performance was analyzed across several parameters, namely accuracy (99.85%), precision (0.996), recall (100%), F-score (0.997), classification time, and running time, for KNN, SVM, Perceptron and Gaussian NB. KNN performed better than the other classification algorithms, with a lower error rate of 0.0002 and the shortest classification and running times of ~413 and 3.6978 microseconds, respectively. Considering all parameters, the KNN classifier is recognized as the best algorithm.

Author 1: Faizur Rashid
Author 2: Suleiman M. A. Gargaare
Author 3: Abdulkadir H. Aden
Author 4: Afendi Abdi

Keywords: Document classification; machine learning algorithms; text classification; analysis

PDF
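The KNN classifier that came out on top in the comparison above works by majority vote among the nearest training points. A minimal pure-Python sketch on toy 2-D feature vectors (the data and labels are illustrative, not the study’s datasets):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))  # squared Euclidean
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy feature vectors with two classes (names chosen to echo the datasets used).
train = [
    ((1.0, 1.0), "banknote"), ((1.2, 0.8), "banknote"), ((0.9, 1.1), "banknote"),
    ((5.0, 5.0), "mapping"),  ((5.2, 4.9), "mapping"),  ((4.8, 5.1), "mapping"),
]
pred = knn_predict(train, (1.1, 0.9))  # the three nearest samples are all "banknote"
```

KNN has no training phase beyond storing the data, which is one reason its classification and running times can be very short on small datasets, as the study reports.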

Paper 31: A Novel Evolving Sentimental Bag-of-Words Approach for Feature Extraction to Detect Misinformation

Abstract: State-of-the-art misinformation detection techniques mainly focus on static datasets. However, a massive amount of information is generated online, and websites are flooded with both legitimate information and misinformation. It is difficult to keep track of this changing information and provide an up-to-date, accurate status of whether a webpage gives legitimate information or misinformation. Therefore, to keep the features up to date, the authors propose an evolving sentimental Bag-of-Words approach, in which sentimental features are updated every time new or changed web content is read. This process accumulates sentimental features at different time intervals, which can be utilized to detect misinformation in URLs and update the status of a webpage with timely information. Apart from sentimental features, other state-of-the-art features, viz. syntactical, Part-Of-Speech Tagging (POST), and Term-Frequency (TF), are updated in a timely manner and utilized to detect misinformation. The model performed well with the support vector machine, showing an accuracy of 80%, while the decision tree classifier showed a lower accuracy of 56.66%.

Author 1: Yashoda Barve
Author 2: Jatinderkumar R. Saini
Author 3: Kaushika Pal
Author 4: Ketan Kotecha

Keywords: Misinformation detection; healthcare; incremental learning; sentiment analysis; bag-of-words

PDF

Paper 32: Data Security Awareness in Online Learning

Abstract: The Covid-19 pandemic has intensified online learning activities, which have become the most crucial platforms for learning sessions. With these activities, a major concern arises, particularly over the security of educators’ and students’ data. Every element in an online learning system is a potential target of hacking or attack. Therefore, this paper examines students’ awareness of data security in online learning. Google Forms was used to gather students’ feedback. Forty-two (42) students, comprising secondary school, pre-university and university students, responded to the survey. The results show that most respondents are well aware of how to keep their data safe in online learning, and the level of awareness of data security among the students is commendable. The findings also indicate various ways to secure students’ data, especially during online learning.

Author 1: Rosilah Hassan
Author 2: Wahiza Wahi
Author 3: Nurul Halimatul Asmak Ismail
Author 4: Samer Adnan Bani Awwad

Keywords: Covid-19; IR4.0; education 4.0; online learning; data security

PDF

Paper 33: KP-USE: An Unsupervised Approach for Key-Phrases Extraction from Documents

Abstract: Automatic key-phrase extraction (AKE) is one of the most popular research topics in natural language processing (NLP). Several techniques have been used to extract key-phrases: statistical, graph-based, classification algorithms, deep learning, and embedding techniques. AKE approaches that use embedding techniques calculate the semantic similarity between a vector representing the document and the vectors representing the candidate phrases. However, most of these methods only give acceptable results on short texts such as paper abstracts; their performance remains weak on long documents, because the entire document is represented by a single vector. In general, the key phrases of a document are often expressed in certain parts of it, such as the title, the summary, and to a lesser extent the introduction and the conclusion, rather than throughout the entire document. For this reason, we propose KP-USE, a method that extracts key-phrases from long documents based on the semantic similarity of candidate phrases to the parts of the document containing key-phrases. KP-USE uses the Universal Sentence Encoder (USE) as the embedding method for text representation. We evaluated the proposed method on three datasets of long papers, namely NUS, Krapivin2009, and SemEval2010, where it outperformed recent embedding-based AKE methods.

Author 1: Lahbib Ajallouda
Author 2: Fatima Zahra Fagroud
Author 3: Ahmed Zellou
Author 4: Elhabib Ben Lahmar

Keywords: Key-phrase extraction; natural language processing; semantic similarity; embedding technique; universal sentence encoder

PDF
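The core step described above, scoring candidate phrases by similarity to key-phrase-bearing parts of the document (title, summary) rather than to one whole-document vector, can be sketched with cosine similarity. The short vectors below are hypothetical stand-ins for real USE embeddings, which are 512-dimensional:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy embeddings: the parts of the document likely to carry key-phrases.
doc_parts = {"title": [0.9, 0.1, 0.2], "summary": [0.8, 0.2, 0.3]}
candidates = {
    "key-phrase extraction": [0.85, 0.15, 0.25],
    "weather report":        [0.10, 0.90, 0.40],
}

def score(phrase_vec):
    # Average similarity to the key-phrase-bearing parts, not the whole document.
    return sum(cosine(phrase_vec, p) for p in doc_parts.values()) / len(doc_parts)

ranked = sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)
```

Scoring against a few targeted part-vectors instead of a single document vector is what lets this style of method stay sharp on long texts.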

Paper 34: Design of a Multiloop Controller for a Nonlinear Process

Abstract: Among nonlinear processes, the Continuous Stirred Tank Reactor (CSTR) is a popular unit that finds application in various verticals of the chemical process industries. The process variables within the CSTR are highly interactive, so developing control strategies becomes a laborious task, as the CSTR must be viewed as a Multi Input Multi Output (MIMO) system. Often the CSTR is treated as a Single Input Single Output (SISO) system, and during the development of control strategies or algorithms the main objective is to maintain only a single process variable close to its set point, even though many measured variables form part of the process. By contrast, MIMO control sustains the different controlled variables at their appropriate set points concurrently, thereby achieving improved efficiency. The component concentrations and the temperature inside the CSTR are highly interactive and exhibit markedly nonlinear steady-state behaviour; both the interaction and the nonlinearity pose challenges to overall system stability. A stabilizing Proportional + Integral (PI) controller employing the Stability Boundary Locus (SBL) concept is designed for a CSTR, encapsulating both stability and closed-loop performance in its design procedure, and analysed through simulation in MATLAB with the results presented.

Author 1: S Anbu
Author 2: M Senthilkumar
Author 3: T S Murugesh

Keywords: Nonlinear process; interaction; multi input multi output control; closed loop performance; stability

PDF

Paper 35: License Plates Detection and Recognition with Multi-Exposure Images

Abstract: Automatic License Plate Recognition (ALPR) has been an important research topic for many years in the intelligent transportation system and image recognition fields. License Plate (LP) detection and recognition has always been a challenging issue due to several factors, including different weather and lighting, unavoidable data acquisition noise, and requirement for real-time performance in state-of-the-art Intelligent Transportation Systems (ITS) applications. Different techniques have been proposed based on machine learning, deep learning, and image processing for the detection and recognition of LPs. This paper proposes a method that performs vehicle LP detection and character recognition with high accuracy by using artificially generated multi-exposure images of the LP. First, one under-exposed and three over-exposed images are generated from a reference image taken from the camera. Then, LP detection and character recognition algorithms are applied on these five images – one real image and four synthesized images. At each character location in LP, the character detected with the highest confidence level among these images is selected as the final predicted character. The system is fully automated, and no pre-processing, calibration, or configuration procedures are needed. Experimental results show that the proposed method achieves high accuracy and works robustly even in challenging conditions. The proposed method can be used in any existing ALPR system and the results on three recent ALPR techniques show that the accuracies are further improved when they are combined with the proposed method.

Author 1: Seong-O Shim
Author 2: Romil Imtiaz
Author 3: Asif Siddiq
Author 4: Ishtiaq Rasool Khan

Keywords: License plate detection; character recognition; multi-exposure images; convolutional neural networks

PDF

Paper 36: An Urban Water Infrastructure Management System Design with Storm Water Intervention for Smart Cities

Abstract: Cauayan City is one of the hubs of economic development and activity in the northern Philippines. As an urban area, people and businesses tend to converge there, resulting in higher water demand. At present, the combined distribution efficiency of its water infrastructure under the management and supervision of the Cauayan City Water District (CCWD) is only 87%, with a combined distribution loss of 10%, or 1282.50 m3 per day. This study suggests the need to introduce new and innovative water management technologies and systems, adapted to climate change, to address the city’s needs. Problems to be addressed include the low efficiency of the existing water infrastructure, the lack of management tools for more efficient delivery of water services, the limited service coverage of the water district due to limited water resources, and the depletion and contamination of aquifers and other water sources, since the shallow aquifer is mainly utilized. Hence, a decision-support application based on geographic information systems (GIS) for managing urban water infrastructure with storm water intervention is designed to address the needs of the city. The combination of decision support systems (DSS) and GIS is presented in this paper to maximize and properly utilize the water infrastructure. One of the tools used as the DSS is MIKE Operation, a complete GIS-integrated modeling tool that enables users to make sound decisions and create future concepts for urban storm water systems that are cost-effective and resilient to change. A conceptual framework and relevant methodologies are presented as a guide for the success of the designed technology.

Author 1: Edward B. Panganiban
Author 2: Rafael J. Padre
Author 3: Melanie A. Baguio
Author 4: Oliver B. Francisco
Author 5: Orlando F. Balderama

Keywords: Water infrastructure; geographic information systems; storm water; decision support systems; water management

PDF

Paper 37: Sentiment Analysis on Amazon Product Reviews using the Recurrent Neural Network (RNN)

Abstract: In this paper, the problem of sentiment analysis on Amazon product reviews is tackled. Sentiment analysis systems are applied in all business and social fields, because opinions are at the center of all human activities and are key influencers of our behavior. In this study, a recurrent neural network (RNN) model is used to classify the reviews, and three Amazon review datasets were used to predict the sentiments of the authors. Our results are comparable to the best state-of-the-art models, with accuracies of 85%, 70% and 70% on the three datasets.

Author 1: Roobaea Alroobaea

Keywords: Sentiment analysis; natural language processing; deep learning; RNN

PDF

Paper 38: Arabic Sign Language Recognition using Lightweight CNN-based Architecture

Abstract: Communication is a critical skill for humans. People who are deprived of communicating through spoken words usually use sign language, whose main features are the handshape, the location, the movement, the orientation and the non-manual component. The vast spread of mobile phones presents an opportunity for hearing-disabled people to engage more with their communities, and designing and implementing a novel Arabic Sign Language (ArSL) recognition system would significantly affect their quality of life. Deep learning models are usually too heavy for mobile phones: the more layers a neural network has, the heavier it is, yet a typical deep neural network requires many layers to attain adequate classification performance. This project addresses the Arabic Sign Language recognition problem while ensuring a trade-off between optimizing classification performance and scaling down the deep network architecture to reduce computational cost. Specifically, we adapted Efficient Network (EfficientNet) models and generated lightweight deep learning models to classify Arabic Sign Language gestures. Furthermore, a real dataset was collected from many different signers performing hand gestures for thirty different Arabic alphabet letters. Appropriate performance metrics were then used to assess the classification outcomes of the proposed lightweight models, and preprocessing and data augmentation techniques were investigated to enhance model generalization. The best results were obtained using the EfficientNet-Lite 0 architecture with label smoothing as the loss function. Our model achieved 94% and proved to be effective against background variations.

Author 1: Batool Yahya AlKhuraym
Author 2: Mohamed Maher Ben Ismail
Author 3: Ouiem Bchir

Keywords: Arabic sign language recognition; supervised learning; deep learning; efficient lightweight network based convolutional neural network

PDF

Paper 39: Review of Industry Workpiece Classification and Defect Detection using Deep Learning

Abstract: Object detection and classification is one of the most extensively utilized machine vision applications, given the high requirements put forward for object classification and defect detection with the rise of object recognition scenes. Nevertheless, conventional image recognition technology encounters specific drawbacks. Several typical conventional image recognition techniques were selected and their benefits and limitations duly compared; such recognition approaches were found to require extensive manual participation and manpower while offering restricted object identification. As a branch of machine learning, deep learning has attained better results in the image recognition discipline. For the classification and defect detection of industrial workpieces, over 70 publications on deep learning algorithms across multiple application scenarios were reviewed to assess classical algorithm models and network structures based on deep learning theory. Relevant network model performance was compared and analyzed based on network intricacies, in parallel to natural image classification. Six research gaps were identified based on the pros and cons of the reviewed algorithms, and six corresponding research proposals for workpiece image classification were highlighted, with prospects on the future development of workpiece image classification and defect detection. This review provides an empirical basis for selecting deep learning models for workpiece classification and defect detection in the future.

Author 1: Changxing Chen
Author 2: Azween Abdullah
Author 3: S. H. Kok
Author 4: D. T. K. Tien

Keywords: Convolutional neural network; image processing; image recognition; defect detection; deep learning

PDF

Paper 40: Enterprise Architecture for Smart Enterprise System

Abstract: The chili agrosystem faces many challenges, and the enterprise architecture (EA) artifacts that would form the building blocks of a chili enterprise system (ES) do not exist. This research is a qualitative systematic literature review conducted as part of developing an intelligent ES to examine chili's production, consumption, and price. The first step toward ES development is recognizing worthy chili EAs and EA frameworks (EAFs) that characterize existing chili market conditions. The study aims to answer three research questions (RQs) and uses a state-of-the-art approach, employing predetermined keywords, across six research databases along with data gathered from the corresponding institutional agencies. The findings on RQ1 revealed eight dynamic main chili supply-chain patterns and data segregation among institutional agencies. RQ2 disclosed numerous studies on EA; however, none was offered for the chili agrosystem. The RQ3 results are likewise multiple and expose different EAF characteristics; again, no study considers their applicability to the chili agrosystem. To conclude, the strength of enterprise architecture for the chili enterprise system lies in the resulting deliverables, which fall into three categories: chili agrosystem factors, enterprise factors, and architecture factors. Of the many available frameworks, the Zachman framework - The Ontology offers the most.

Author 1: Meuthia Rachmaniah
Author 2: Arif Imam Suroso
Author 3: Muhamad Syukur
Author 4: Irman Hermadi

Keywords: Chili agrosystem; enterprise architecture; intelligent enterprise; supply chain; Zachman framework

PDF

Paper 41: Virtual Reality Application for Pain Management: User Requirements

Abstract: The usage of fully immersive Virtual Reality applications for pain relief is still in an exploratory stage. Consequently, there is a need to understand the user perspective and wishes regarding this kind of product. To address this issue, this paper presents quantitative research conducted to establish the functional and non-functional requirements of our application. Voluntary response sampling was used for the research (N = 55). The inquiry form contained questions regarding serious game content, eagerness for testing, performance, resource consumption optimization, portability, data security, and accessibility. The questionnaire was shared via Google Forms, and the answers were collected and interpreted. The study revealed that a significant part of the participants was willing to test the application and would use an immersive Virtual Reality application during a normal treatment session if the opportunity were available. The following functional requirements were considered important: the presence of animals in the game, a bright environment, and nature-based background sounds. The following non-functional requirements were considered important: game optimization, portability, data security, accessibility, graphics quality, and a short learning curve.

Author 1: Ioan Alexandru Bratosin
Author 2: Ionel Bujorel Pavaloiu
Author 3: Nicolae Goga
Author 4: Andreea Iuliana Luca

Keywords: Virtual Reality; user requirements; serious games; therapy

PDF

Paper 42: Integration of Convolutional Neural Networks and Recurrent Neural Networks for Foliar Disease Classification in Apple Trees

Abstract: Automated methods for image classification have become increasingly popular in recent years, with applications in agriculture including weed identification, fruit classification, and disease detection in plants and trees. In image classification, convolutional neural networks (CNNs) have already shown exceptional results, but these models cannot extract some relevant features of the input image. On the other hand, a recurrent neural network (RNN) can fully exploit the relationships among image features. In this paper, the performance of combined CNN and RNN models is evaluated by extracting relevant image features from images of diseased apple leaves. This article suggests a combination of a pre-trained CNN and an LSTM, a particular type of RNN. Using transfer learning, deep features were extracted from the fully connected layers of several pre-trained deep models, i.e., Xception, VGG16, and InceptionV3. The extracted deep features from the CNN layer and the RNN layer were concatenated and fed into a fully connected layer, allowing the proposed model to focus on finding relevant information in the input data. Finally, the class labels of apple foliar disease images are determined by the integrated model. Experimental findings demonstrate that the proposed approach outperforms the individual pre-trained models.
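The fusion step described above — concatenating CNN-branch and RNN-branch feature vectors before a fully connected layer — reduces to a simple operation. A toy sketch with hypothetical dimensions and random weights (not the paper's Xception/VGG16/InceptionV3 features):

```python
import random

random.seed(0)

def dense(x, weights, bias):
    """A fully connected layer: y_i = W_i . x + b_i."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# Hypothetical feature vectors from the two branches
cnn_features = [0.2, 0.5, 0.1]   # e.g. pooled CNN activations
rnn_features = [0.7, 0.3]        # e.g. final LSTM hidden state

# Fusion by concatenation, as described in the abstract
fused = cnn_features + rnn_features

# Illustrative 4-class output layer on the fused vector
W = [[random.uniform(-1, 1) for _ in fused] for _ in range(4)]
b = [0.0] * 4
logits = dense(fused, W, b)
```

The fully connected layer sees both branches' features jointly, so its weights can learn which combination of CNN and RNN features is informative for each class.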

Author 1: Disha Garg
Author 2: Mansaf Alam

Keywords: Artificial intelligence; deep learning; transfer learning; convolutional neural network; recurrent neural network; LSTM

PDF

Paper 43: Modern City Issues, Management and the Critical Role of Information and Communication Technology

Abstract: Cities are currently dealing with major difficulties that no longer allow slight adjustments to the way cities operate; instead, local officials must devise creative, transformative solutions. Fortunately, novel approaches to municipal administration and technological advancements are providing city officials with new and beneficial tools. As a result of these improvements, citizens, businesses, and other groups in the city will be able to actively participate in implementing the reforms. In a nutshell, technology can assist cities in becoming smarter. This paper highlights the implications of the management challenges of cities, the types of cities, and the issues that cities face during epidemics. A coordinated approach that reacts to both COVID-19 and climate change is required to avoid negative outcomes from both. To figure out how Malaysian city administration works and how important information and communication technology is to Malaysian smart cities, it is essential to look into technology and data opportunities.

Author 1: Qasim Hamakhurshid Hamamurad
Author 2: Normal Mat Jusoh
Author 3: Uznir Ujang

Keywords: Cities; modern city; smart city; COVID-19; information communication technology (ICT); challenges; data; geographic information system (GIS)

PDF

Paper 44: A Novel Framework for Sanskrit-Gujarati Symbolic Machine Translation System

Abstract: Sanskrit falls under the Indo-European language family. Gujarati, which has descended from Sanskrit, is a widely spoken language, particularly in the Indian state of Gujarat. The proposed and realized Machine Translation framework uses a grammatical transfer approach to translate written Sanskrit to Gujarati. Because both languages are morphologically rich, studying the morphology of each item is difficult but necessary to incorporate into the implementation. To improve implementation accuracy and translation clarity, in-depth research on the formation of nouns, verbs, pronouns, and indeclinables, as well as their mappings, has been carried out. Tokenization, lemmatization, morphological analysis, a Sanskrit-Gujarati bilingual synonym-based dictionary, language synthesis, and transliteration are the proposed framework's primary components. The implementation was tested on 1,000 phrases using the automated Bilingual Evaluation Understudy (BLEU) scale, which yielded a value of 58.04. It was also tested on the ALPAC scale, yielding an Intelligibility score of 69.16 and a Fidelity score of 68.11. The results are encouraging and show that the proposed system is promising and robust for implementation in real-world applications.
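BLEU, the automatic metric used above, combines clipped n-gram precision with a brevity penalty. A simplified single-reference, sentence-level sketch (illustrative only, not the paper's evaluation code):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty (single reference)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:          # avoid log(0); real BLEU smooths instead
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * geo_mean

cand = "the cat sat on the mat".split()
ref = "the cat sat on the mat".split()
score = bleu(cand, ref)   # identical sentences score 1.0
```

Clipping prevents a candidate from being rewarded for repeating a reference word more often than the reference contains it, and the brevity penalty keeps very short candidates from scoring high on precision alone.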

Author 1: Jaideepsinh K. Raulji
Author 2: Jatinderkumar R. Saini
Author 3: Kaushika Pal
Author 4: Ketan Kotecha

Keywords: Bilingual synonym dictionary; Gujarati; lemmatization; machine translation system (MTS); morphological analyzer; Sanskrit; synthesizer; transliteration

PDF

Paper 45: Detection and Analysis of Oil Spill using Image Processing

Abstract: Thousands of oil spills occur every year on offshore oil production platforms; moreover, ships that cross rivers to reach harbor cause spills each year. The current study focuses on Iraqi marine areas and rivers, especially the Al-Bakr, Khor al-Amaya, and ABOT oil terminals and the Shatt al-Arab river inside the Al Başrah oil terminal. In order to mitigate and manage oil spill impacts, an unmanned aerial vehicle has proven to be a valuable tool. To achieve high accuracy, the objective of the current research is to analyze captured images of rivers, identify oil pollution, and determine its location. The images were taken from the Iraqi Regional Organization for the Protection, the General Company for Ports of Iraq, the Iraqi Ministry of Environment, and online websites. The current paper presents a software framework for detecting oil spills, river pollution, and other kinds of garbage. The framework, based on artificial intelligence, is divided into two parts: a training model and an operational model. In the training part, a machine learning model is applied, one of the fastest and most accurate methods, integrated inside PipelineMLML. The object detection technique used can identify one or more categories of objects in a picture or video, and the locations of objects can be identified with the help of neural networks. In the operational part, the models can identify oil spills in images.

Author 1: Myssar Jabbar Hammood AL-BATTBOOTTI
Author 2: Nicolae GOGA
Author 3: Iuliana MARIN

Keywords: PipelineMLML; oil spill; artificial intelligence; machine learning

PDF

Paper 46: Smart Information Retrieval using Query Transformation based on Ontology and Semantic-Association

Abstract: A notable problem with current information retrieval systems is that input queries cannot express user information needs properly. This imprecise representation of the query hampers the effectiveness of the retrieval system. One method to solve this problem is to transform the original query into a more meaningful form. This paper proposes an ontology-based retrieval system that transforms initial user queries using domain ontologies and applies semantic association during the indexing process. The proposed system performs semantic matching between an ontologically enhanced query and the index to capture query-related terms. To show the performance of the proposed system, it is evaluated using standard parameters like precision, recall, and NDCG. In addition, the authors present a comparison between the proposed and existing retrieval systems on three test datasets. Experimental results on these datasets indicate that the use of ontology and semantics has significantly increased retrieval efficiency over the baseline. This work highlights the importance of ontology and semantics in information retrieval.
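Matching an expanded query against an index typically rests on a vector-space score. A minimal sketch of cosine-based ranking of an ontology-expanded query (toy bag-of-words vectors, not the paper's system):

```python
import math

def cosine(u, v):
    """Cosine similarity between two term-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical vectors over a shared vocabulary: the expanded query
# contains the original terms plus a related term from the ontology
query = [1, 1, 1, 0]   # e.g. "car", "engine", plus ontology term "automobile"
doc_a = [1, 0, 1, 0]
doc_b = [0, 0, 0, 1]

ranked = sorted([("doc_a", cosine(query, doc_a)),
                 ("doc_b", cosine(query, doc_b))],
                key=lambda t: -t[1])
```

Adding ontology terms to the query vector lets documents that use a synonym of the original query words still accumulate a nonzero cosine score.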

Author 1: Ram Kumar
Author 2: S. C. Sharma

Keywords: Ontology; semantics; information retrieval; query transformation; indexing

PDF

Paper 47: Characters Segmentation from Arabic Handwritten Document Images: Hybrid Approach

Abstract: Character segmentation in unconstrained Arabic handwriting is a complex and challenging task due to the overlapping and touching of words or letters. Such issues have not been widely investigated in the literature. Addressing them in the segmentation stage reduces errors in the segmentation process, which plays a significant role in enhancing the accuracy of Arabic optical character recognition. Therefore, this paper proposes a hybrid approach to improve the accuracy of segmenting interconnected, overlapping, or touching characters. The proposed method includes several stages: first, extra shapes such as signatures are removed from the document; next, individual words are detected and extracted directly from the document using morphological operations, connected components, and bounding-box detection; finally, touching characters are segmented based on background thinning and computational analysis of the word's region. The proposed method has been tested on the KHATT and IFN/ENIT databases and our own collected dataset. The experimental results showed that the proposed method obtained high performance and improved accuracy compared to other methods.

Author 1: Omar Ali Boraik
Author 2: M. Ravikumar
Author 3: Mufeed Ahmed Naji Saif

Keywords: Arabic handwritten character recognition; connected components; word segmentation; character segmentation; morphological operators; overlapping and touching characters

PDF

Paper 48: Machine Learning for Exhibition Recommendation in a Museum’s Virtual Tour Application

Abstract: Museum visits have been in crisis during the COVID-19 pandemic; the SMBII Museum in Palembang saw a remarkable decrease in visitors of up to 90%. A strategy is needed to increase museum visits and enable educational and tourism roles in a pandemic situation. This paper evaluates machine learning models for exhibition recommendations given to visitors through a virtual tour application. Exploring unfamiliar museum exhibitions through a virtual museum application can be tedious: if virtual collections seem ancient and uninteresting, they quickly lead to boredom and reluctance to explore the virtual museum. For this reason, an effective method is needed to provide suggestions or recommendations that meet the interests of visitors based on their profiles, making it easier for visitors to find exciting exhibition rooms for learning and tourism. Machine learning has proven its effectiveness for predictions and recommendations. This study evaluates several machine learning classifiers for exhibition recommendation and develops a virtual tour application that applies the classifier with the best performance based on the model evaluation. The experimental results show that the KNN model performs best for exhibition recommendations, with cross-validation accuracy of 89.09% and F-Measure of 90.91%. The SUS usability evaluation of the exhibition recommender feature in the virtual tour application of the SMBII museum shows an average score of 85.83. The usability of the machine learning-based recommender feature is acceptable, making it easy and attractive for visitors to find an exhibition that might match their interests.
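The KNN classifier reported above as best performing can be illustrated with a tiny sketch: classify a visitor by majority vote among the k most similar profiles. Visitor features and room labels here are hypothetical, not the paper's data:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest neighbours (Euclidean distance)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical visitor profiles: (age, interest score) -> recommended room
train = [((20, 0.9), "history"), ((22, 0.8), "history"),
         ((60, 0.2), "art"), ((65, 0.1), "art"), ((25, 0.7), "history")]

room = knn_predict(train, (23, 0.85), k=3)
```

In practice the features would be normalized to a common scale first, since Euclidean distance is dominated by large-range features such as age.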

Author 1: Shinta Puspasari
Author 2: Ermatita
Author 3: Zulkardi

Keywords: Machine learning; recommender system; museum exhibition; virtual tour; pandemic

PDF

Paper 49: Classification of Osteoporosis in the Lumbar Vertebrae using L2 Regularized Neural Network based on PHOG Features

Abstract: One of the most common bone diseases in humans is osteoporosis, which is a major concern for public health. Osteoporosis can be prevented if it is detected at an early stage. The research agenda consists of two phases: pre-processing of X-ray images of the spine, and analysis of texture features from the trabecular bone of lumbar vertebrae L1-L4 for detecting osteoporosis. The preprocessing involves image enhancement of texture features and co-registering the images in order to segment the L1-L4 regions of the lumbar spine. Range filtering and the Pyramid Histogram of Orientation Gradients (PHOG) are used to analyze texture features. Input images are filtered with a range filter, which adjusts the local sub-range intensities in a specified window to detect edges. A PHOG descriptor is then computed to determine both the local shape of an image texture and its spatial layout. Based on the texture features of lumbar vertebrae L1-L4, the vertebrae are classified as normal or osteoporotic using neural network (NN) models with L2 regularization. In an experiment, X-ray images and dual-energy X-ray absorptiometry (DXA) reports of individual patients are used to verify the system. DXA reports describe a statistical analysis of normal and osteoporotic results, whereas the proposed work categorizes subjects according to texture features as normal or osteoporotic. A classification accuracy of 99.34% is achieved; cross-validation of these classified results is done against the DXA reports. The diagnostic accuracy of the proposed method is higher than that of the existing DXA with X-ray. Further, the area under the Receiver Operating Characteristic (ROC) curve for L1-L4 had a significantly higher sensitivity for osteoporosis.
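A range filter, as used above, replaces each pixel with the max-minus-min of its neighbourhood, so high responses mark edges and high local contrast. A small illustrative sketch on toy intensities (not the paper's implementation):

```python
def range_filter(image, w=3):
    """Replace each pixel with max - min over a w x w neighbourhood,
    clipped at the image border; edges get high responses."""
    h, width = len(image), len(image[0])
    r = w // 2
    out = [[0] * width for _ in range(h)]
    for i in range(h):
        for j in range(width):
            vals = [image[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(width, j + r + 1))]
            out[i][j] = max(vals) - min(vals)
    return out

# Hypothetical 4x4 patch with a vertical edge between columns 1 and 2
patch = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
edges = range_filter(patch)   # high values only along the edge columns
</antml>```

Pixels whose window straddles the edge respond with the full intensity range (9), while flat regions respond with 0.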

Author 1: Kavita Avinash Patil
Author 2: K. V. Mahendra Prashanth
Author 3: A Ramalingaiah

Keywords: Cross validation; image enhancement; lumbar spine; lumbar vertebrae; neural network; osteoporosis; PHOG; regularization; trabecular bone; texture features; x-ray images

PDF

Paper 50: Deep Learning Approach for Spoken Digit Recognition in Gujarati Language

Abstract: Speech recognition is an emerging field in the area of Natural Language Processing which facilitates human-machine interaction through speech. Speech recognition for digits is useful for number-oriented communication such as mobile numbers, scores, account numbers, registration codes, social security codes, etc. This research paper seeks to achieve recognition of the ten Gujarati digits from zero to nine (૦ to ૯) using a deep learning approach. The dataset was generated with a total of 8 native speakers (4 male, 4 female) in the age group of 20 to 40, and includes 2,400 labeled audio clips of both genders. To implement the deep learning approach, a Convolutional Neural Network (CNN) with MFCC is used to analyze the audio clips and generate spectrograms. For the proposed approach, three different experiments were performed with dataset sizes of 1,200, 1,800, and 2,400. With this approach, a maximum accuracy of 98.7% is achieved for spoken digits in the Gujarati language, with 98% precision and 98% recall. Analysis of the various experiments shows that an increase in dataset size improves the accuracy rate for spoken digit recognition; the number of epochs in the CNN also improves accuracy to some extent.

Author 1: Jinal H. Tailor
Author 2: Rajnish Rakholia
Author 3: Jatinderkumar R. Saini
Author 4: Ketan Kotecha

Keywords: CNN; Deep learning; digit; Gujarati; speech recognition

PDF

Paper 51: Tweet Credibility Detection for COVID-19 Tweets using Text and User Content Features

Abstract: The deadly COVID-19 pandemic is currently sweeping the globe, and millions of people have been exposed to false information about the disease, its remedies, prevention, and origins. During such perilous times, the propagation of fake news and misinformation can have serious implications, causing widespread panic and exacerbating the pandemic's threat. This increasing threat has given rise to considerable research challenges. This article is mainly concerned with fake news identification, and experimentation is specifically performed considering COVID-19 fake news as a case study. Fake news is spread intentionally to mislead people, and therefore we need to identify the user's involvement and its correlation with additional features. The aim of this research is to develop a model that can predict the essence of a tweet given as input with the help of multiple features. Our strategy is to make use of the tweet's text as well as the user's metadata, developing a model using natural language processing techniques and deep learning methods. In this process, we analyzed the behavior of the accounts and observed the impact of the various factors that can lead to fake news. The experimental analysis shows that the hybrid model with text and content features generates benchmark results compared to the existing state-of-the-art techniques. We obtained a best F1-score of 0.976 during the experimentation.

Author 1: Vaishali Vaibhav Hirlekar
Author 2: Arun Kumar

Keywords: Fake news; machine learning; natural language processing; deep learning

PDF

Paper 52: Integration of Ensemble Variant CNN with Architecture Modified LSTM for Distracted Driver Detection

Abstract: Driver decisions and behaviors are the major factors in on-road driving safety. Most significantly, traffic injuries and accidents can be reduced using an accurate driver behavior monitoring system. However, challenges occur in understanding human behaviors in practical environments due to uncontrolled scenarios like cluttered and dynamic backgrounds, occlusion, and illumination variation. Recently, traffic accidents have mainly been caused by distracted drivers, a problem that has increased with the popularization of smartphones. Therefore, a distracted driver detection model is necessary to appropriately identify the behavior of the distracted driver and warn the driver to prevent accidents. The main intention of this paper is to design and implement a novel deep learning framework for driver distraction detection. First, the datasets for driver distraction detection are gathered from public sources. Then, Optimal Fusion-based Local Gradient Pattern (LGP) and Local Weber Pattern (LWP) descriptors perform the pattern extraction of the images. These patterns are input to a new deep learning framework with an Ensemble Variant Convolutional Neural Network (EV-CNN) for feature learning. The EV-CNN includes three different models: ResNet50, InceptionV3, and Xception. The extracted features are subjected to an architecture-optimized Long Short-Term Memory (LSTM) network. The Hybrid Squirrel Whale Optimization Algorithm (HSWOA) performs both the pattern extraction and the LSTM optimization. The experimental results demonstrate the effective classification performance of the suggested model in terms of accuracy during the detection of distracted driving, which is helpful for maintaining safe driving habits.

Author 1: Zakaria Boucetta
Author 2: Abdelaziz El Fazziki
Author 3: Mohamed El Adnani

Keywords: Distracted driver detection; ensemble variant convolutional neural network; hybrid squirrel whale optimization algorithm; local gradient pattern; local weber pattern; optimal fusion-based pattern descriptors; long short term memory

PDF

Paper 53: A Review of the Integration of Cyber-Physical System and Internet of Things

Abstract: Cyber-physical systems have a significant impact on the present and future of engineered systems. The integration of cyber-physical systems with other technologies often poses challenges that are unique in the fields of design, implementation, and applications. This study explores the definitions of the integrated cyber-physical system from different perspectives, how cyber-physical systems evolved, and emerging fields of research in cyber-physical systems. Efficiency, reliability, predictability, and security are some of the challenges that cyber-physical systems pose in the overall implementation. When integrated with the Internet of Things, cyber-physical systems evolve into a hybrid technology that advances both technologies. The CPS-IoT models complement each other, offering useful insights for dealing with engineering systems and control modules.

Author 1: Ramesh Sneka Nandhini
Author 2: Ramanathan Lakshmanan

Keywords: Cyber-physical system; internet of things; industrial internet (ii); industry 4.0

PDF

Paper 54: Is Deep Learning on Tabular Data Enough? An Assessment

Abstract: It is critical to select the model that best fits the situation when analyzing data. Many scholars have offered ensemble techniques for classification and regression problems on tabular data (like boosting and logistic model tree ensembles). Furthermore, various deep learning algorithms have recently been applied to tabular data, with their authors claiming that the deep models outperform boosting and model tree approaches. On a range of datasets, including historical geographical data, this study compares the new deep models (TabNet, NODE, and DNF-net) against the boosting model (XGBoost) to see if they should be regarded as a preferred choice for tabular data. We look at how much tweaking and computation they require, as well as how well they perform, based on metric evaluation and statistical significance tests. According to our study, XGBoost outperforms these deep models across all datasets, including the datasets used in the papers that presented the deep models. We further show that, compared to the deep models, XGBoost requires considerably less tweaking. In addition, we confirm that a combination of the deep models with XGBoost outperforms XGBoost alone on almost all datasets.
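The boosting family that XGBoost belongs to can be illustrated with a bare-bones sketch: shallow regression trees fitted sequentially to residuals. This is generic gradient boosting for squared loss with depth-1 trees (stumps), not XGBoost's regularized algorithm; all data are toy values:

```python
def fit_stump(xs, residuals):
    """Best single-feature threshold split minimizing squared error
    (a depth-1 regression tree)."""
    best = None
    for f in range(len(xs[0])):
        for t in sorted({x[f] for x in xs}):
            left = [r for x, r in zip(xs, residuals) if x[f] <= t]
            right = [r for x, r in zip(xs, residuals) if x[f] > t]
            if not left or not right:
                continue
            lmean, rmean = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lmean) ** 2 for r in left)
                   + sum((r - rmean) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, f, t, lmean, rmean)
    return best[1:]

def boost(xs, ys, rounds=20, lr=0.5):
    """Each stump fits the current residuals; predictions are the
    shrunken sum of all stumps so far."""
    preds = [0.0] * len(ys)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        f, t, lmean, rmean = fit_stump(xs, residuals)
        preds = [p + (lr * lmean if x[f] <= t else lr * rmean)
                 for x, p in zip(xs, preds)]
    return preds

# Tiny tabular dataset: the target depends on the first feature only
xs = [[1, 5], [2, 1], [3, 7], [4, 2], [5, 9], [6, 3]]
ys = [1.0, 1.0, 1.0, 4.0, 4.0, 4.0]
preds = boost(xs, ys)
```

Each round shrinks the residuals, so the ensemble converges toward the target values; XGBoost adds second-order gradients, regularization, and column/row subsampling on top of this basic scheme.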

Author 1: Sheikh Amir Fayaz
Author 2: Majid Zaman
Author 3: Sameer Kaul
Author 4: Muheet Ahmed Butt

Keywords: Deep learning; XGBoost; NODE; TabNet; DNF-net; statistical significance test; tabular geographical data

PDF

Paper 55: Smart Home Automation by Internet-of-Things Edge Computing Platform

Abstract: The Internet of Things (IoT) offers significant benefits to various applications, including home automation (HA), environmental monitoring, healthcare, homeland security, agriculture, and many others. Consequently, the IoT trend is rapidly expanding into many new sectors, leading to higher comfort, better quality of life, and conveniences that can offer optimum consumption of valuable resources for users. This paper presents a low-cost and flexible HA system that uses different sensors and other resources to control commonly used home appliances and connected devices by establishing IP connectivity to access, manage, and monitor them remotely. To enable remote access, an Android-based application was developed to monitor and control the home devices. An edge computing (EC) platform is used in the proposed system to enhance reliability and robustness when computation offloading is required. The system operates automatically according to the environmental conditions in the home and the occupants' requirements. The outcomes of the proposed system reveal that it strongly supports the concept of smart HA and is capable of significantly reducing the wastage of electricity via optimum utilization.

Author 1: Zubair Sharif
Author 2: Low Tang Jung
Author 3: Muhammad Ayaz
Author 4: Mazlaini Yahya
Author 5: Dodo Khan

Keywords: Smart home; Internet of Things (IoTs); home automation; android application; raspberry pi; edge computing

PDF

Paper 56: An Integrated Approach to Research Paper and Expertise Recommendation in Academic Research

Abstract: Research paper and expertise recommendation systems have been operating independently of one another, resulting in avoidable search queries and unsatisfactory recommendations. This study developed an integrated recommendation system to harmonize research paper and expertise recommendations in the academic domain for enhanced research collaboration, aiming to address issues related to the isolated existence of these systems. A recommendation algorithm was developed to synergize the research paper and expertise recommendation models: the cosine similarity between the user query and the available research papers, as well as experts, was combined with selected criteria to produce recommendations. The synergized model was simulated and evaluated using precision, recall, F-measure, and root mean square error as performance metrics. The findings showed that harmonizing the research paper and expertise recommendation approaches provides a holistic and enhanced approach to research paper and expertise recommendation. Thus, academic researchers now have a reliable way to find experts and research papers, which will lead to more collaborative research activities.

Author 1: Charles N. Mabude
Author 2: Iyabo O. Awoyelu
Author 3: Bodunde O. Akinyemi
Author 4: Ganiyu A. Aderounmu

Keywords: Research paper; expertise; recommendation systems; academic

PDF

Paper 57: Organizational Architecture and Service Delivery Re-Alignment based on ITIL and TOGAF: Case Study of the Provincial Development Bank

Abstract: The operations function area is one of the core function areas of the development bank, serving its customers' financial needs. Interestingly, the total scope of services provided in this case can be categorized as small, or not optimal, compared to the total population of its coverage area in West Java and Banten. The lack of customer confidence in the services offered can be attributed in part to the organization's unpreparedness to adopt business agility and technological innovation as its alignment framework. Thus, as a starting point, IT planning in this function area can be utilized as a solution to increase service delivery performance by strengthening the organizational architecture. To support this, Enterprise Architecture (EA) should align business and IT, mapping ITIL best practice as a foundation and practical direction to bring the company's operational services toward sustainability in growth, profit, and satisfaction. This study delivers a roadmap design using the TOGAF framework to identify the current state of the company and the desired IT architecture, together with business strategies, in the area of operations functions.

Author 1: Asti Amalia Nur Fajrillah
Author 2: Muharman Lubis
Author 3: Irmayanti Syam

Keywords: Organization; service; innovation; alignment; ITIL; TOGAF

PDF

Paper 58: A Lasso-based Collaborative Filtering Recommendation Model

Abstract: This paper proposes a new approach to the problem of a lack of information in rating data in collaborative filtering recommendation (CFR) models, which arises with new users or new items, or when a user has rated too few items. In this approach, we compute the similarity between users or items based on lasso regression to build the CFR models. In the commonly used CFR models, the recommendation results are built only from the users' feedback matrix. The results of our model are predicted from two similarity values: (1) the similarity computed from the rating matrix; and (2) the similarity computed from the prediction results of the lasso regression. The proposed models were implemented and integrated into the recommenderlab package; experimental results on two popular datasets showed that the suggested models have higher accuracy than the commonly used CFR models. This result confirms that lasso regression helps deal with the lack of information in the rating data of CFR models.
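Lasso's ability to zero out uninformative predictors, which underpins the similarity idea described above, can be sketched with plain coordinate descent. The rating columns below are toy values, not the paper's recommenderlab implementation:

```python
def soft_threshold(z, g):
    """Soft-thresholding operator used in lasso coordinate descent."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso(X, y, lam=0.1, iters=200):
    """Coordinate-descent lasso: minimize
    (1/2n)||y - Xw||^2 + lam * ||w||_1  (features of comparable scale)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / z if z else 0.0
    return w

# Hypothetical ratings: predict user A from users B and C; user B's
# ratings match A exactly, user C's (mean-centered) are uncorrelated,
# so lasso drives C's weight to zero
X = [[5, 1], [4, -1], [1, 1], [2, -1], [3, 0]]   # columns: users B, C
y = [5, 4, 1, 2, 3]                              # user A
w = lasso(X, y, lam=0.1)
```

The nonzero pattern of `w` can then be read as a sparse similarity profile: only the users whose ratings actually help predict the target user get weight.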

Author 1: Hiep Xuan Huynh
Author 2: Vien Quang Dam
Author 3: Long Van Nguyen
Author 4: Nghia Quoc Phan

Keywords: UBCF-LASSO; IBCF-LASSO; Lasso regression

PDF

Paper 59: Face Age Estimation using Shortcut Identity Connection of Convolutional Neural Network

Abstract: Depletion of skin and muscle tone has a considerable impact on the appearance of the face, which evolves constantly, so age-estimation algorithms need a large number of aging faces. Convolutional neural networks, a popular deep learning technique, have recently been used to tackle many computer vision and pattern recognition problems successfully, but these methods have architectural issues (e.g., in the training process) that hurt their age-estimation performance. As a result, a new approach is proposed in this research to address the issue: detecting the age of a human face with a convolutional neural network framework based on the ResNet50 architecture. The proposed shortcut identity connection strategy, which enables age estimation from the face image, improves on the ResNet50 architecture. Telling a person's age from a picture of their face requires knowing the characteristics of aging, so a classification method built on the ResNet50 structure maps face-aging levels to probabilities. All 50 layers in the proposed residual network are connected through residual blocks. Among face-aging databases, ImageNet and FG-NET are both good choices for the proposed age-estimation process. In training, the mean absolute errors are 2.27 and 2.38. The test accuracy is 81.75% on the FG-NET dataset and 57% on the ImageNet dataset.
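The shortcut identity connection the abstract refers to can be illustrated with a tiny numpy residual block (a generic sketch, not the paper's ResNet50 implementation; shapes and weights are hypothetical):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def residual_block(x, W1, W2):
    """A block with a shortcut identity connection: the input x bypasses the
    stacked layers and is added back before the final activation, so the
    layers only have to learn a residual correction to x."""
    return relu(W2 @ relu(W1 @ x) + x)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
# With zero weights the residual branch contributes nothing and the block
# degenerates to the identity (followed by ReLU) -- the property that makes
# very deep networks such as ResNet50 trainable.
W1 = np.zeros((8, 8))
W2 = np.zeros((8, 8))
y = residual_block(x, W1, W2)
```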

Author 1: Shohel Pramanik
Author 2: Hadi Affendy Bin Dahlan

Keywords: Shortcut identity connection; ImageNet; residual connection; face aging

PDF

Paper 60: Framework to Deploy Containers using Kubernetes and CI/CD Pipeline

Abstract: Containers are steadily replacing virtual machines and gaining popularity in the IT industry for their scalability and agility. The key concept behind containers is operating-system-level virtualization. In the cloud, containers are deployed as computing instances, whereas on premises they are deployed with Docker as part of CI/CD pipelines using a Jenkins server. As the number of containers grows, their deployment and resource management become a concern, which is handled by Kubernetes. Kubernetes deploys and manages containers autonomously, and Rancher manages the Kubernetes cluster efficiently. We first analyze the scheduler and resource management that Kubernetes uses to deploy containers, and then propose a framework that automates the whole process, from container deployment to scalable management of the Kubernetes cluster, using Helm charts and Ansible scripts. The framework is fully automated and can deploy scalable applications as containers from Docker images. A CI/CD pipeline using a Jenkins server is also considered.

Author 1: Manish Kumar Abhishek
Author 2: D. Rajeswara Rao
Author 3: K. Subrahmanyam

Keywords: Containers; Docker; Jenkins; Kubernetes; Rancher; virtualization

PDF

Paper 61: Adaptive Generation-based Approaches of Oversampling using Different Sets of Base and Nearest Neighbor’s Instances

Abstract: Standard classification algorithms often struggle to learn from imbalanced datasets. While several approaches have been employed to address this problem, methods that oversample minority samples remain more widely used than algorithmic modifications. Most variants of oversampling are derived from the Synthetic Minority Oversampling Technique (SMOTE), which generates a synthetic minority sample at a point in feature space between two minority-class instances. These variants produce different results mainly because of (1) the samples they use as initial selections (base samples) and as nearest neighbors, and (2) how they handle minority noise. This paper therefore presents combinations of base and nearest-neighbor samples that have not been used before and monitors their effect in comparison to standard oversampling techniques. Six methods are proposed: three combinations of Only Danger Oversampling (ODO) techniques and three combinations of Danger Noise Oversampling (DNO) techniques. The ODO and DNO methods use different groups of samples as base samples and nearest neighbors. While the three ODO methods do not consider minority noise, the three DNO methods include minority noise in both the base and neighbor samples. The performance of the proposed methods is compared to that of several standard oversampling algorithms, and we present experimental results demonstrating a significant improvement in the recall metric.
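The SMOTE generation step that all these variants share can be sketched in a few lines of numpy (toy data; which points serve as base samples and neighbors is precisely what the ODO/DNO variants vary):

```python
import numpy as np

def smote_sample(base, neighbor, rng):
    """SMOTE's core step: a synthetic minority point at a random position on
    the segment between a base sample and one of its minority neighbors."""
    gap = rng.random()                      # uniform in [0, 1)
    return base + gap * (neighbor - base)

# Hypothetical minority class samples.
rng = np.random.default_rng(42)
minority = np.array([[1.0, 2.0], [3.0, 6.0], [1.5, 2.5], [8.0, 9.0]])
base = minority[0]
dists = np.linalg.norm(minority - base, axis=1)
nn = minority[np.argsort(dists)[1]]         # nearest other minority sample
synthetic = smote_sample(base, nn, rng)
```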

Author 1: Hatem S Y Nabus
Author 2: Aida Ali
Author 3: Shafaatunnur Hassan
Author 4: Siti Mariyam Shamsuddin
Author 5: Ismail B Mustapha
Author 6: Faisal Saeed

Keywords: Class imbalance; nearest neighbors; base samples; initial selection; SMOTE

PDF

Paper 62: Social Group Optimization-based New Routing Approach for WMN’s

Abstract: Wireless Mesh Networks (WMNs) are hop-to-hop communication networks that are quickly deployable and dynamically self-organizing, self-configuring, self-healing, self-balancing, and self-aware. In WMNs, a node can leave or join the network at any time. Because the nodes are mobile, the routes between source and destination can change frequently, and computing the shortest path under dynamic conditions within the time constraint imposed by node mobility is a highly complex problem. Moreover, as the network grows, node performance decreases, so soft computing approaches are required. This article proposes a Social Group Optimization (SGO) based routing approach for wireless mesh networks. The proposed approach was implemented in MATLAB and tested on several dynamic-node network scenarios. We compare its performance with Ant Colony Optimization (ACO), Ad-hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR), BAT, Biogeography-Based Optimization (BBO), and Firefly Algorithm based routing approaches, and observe that the proposed approach outperforms all of them on network scenarios with more than 1000 nodes.
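The improving phase of SGO can be sketched as a generic minimization toy (not the paper's MATLAB routing implementation; the objective, the self-introspection parameter `c`, the population size, and the bounds are all hypothetical):

```python
import numpy as np

def sgo_minimize(f, dim=4, pop=20, iters=100, c=0.2, seed=1):
    """Improving phase of Social Group Optimization: every person moves
    toward the group's best solution, keeping the move only if it helps."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (pop, dim))
    for _ in range(iters):
        best = X[np.argmin([f(x) for x in X])]
        r = rng.random((pop, dim))
        X_new = c * X + r * (best - X)
        for i in range(pop):                 # greedy selection
            if f(X_new[i]) < f(X[i]):
                X[i] = X_new[i]
    return X[np.argmin([f(x) for x in X])]

# Stand-in objective (hypothetical): a route cost to be minimized.
best = sgo_minimize(lambda x: float(np.sum(x * x)))
```

In a routing setting, `f` would score a candidate path (hop count, delay, congestion) instead of this stand-in.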

Author 1: Bhanu Sharma
Author 2: Amar Singh

Keywords: ACO; AODV; DSR; BAT; BBO; wireless mesh network; social group optimization

PDF

Paper 63: A New Combination Approach to CPU Scheduling based on Priority and Round-Robin Algorithms for Assigning a Priority to a Process and Eliminating Starvation

Abstract: The main purpose of an operating system is to control a group of processes through a method known as CPU scheduling, and the performance and efficiency of multitasking operating systems are determined by the CPU scheduling algorithm used. Round-robin scheduling is the best solution for time-shared systems, but it is not ideal for real-time systems, as it causes more context switches, longer waiting times, and slower turnaround times; its performance is mostly determined by the time quantum, and processes cannot be assigned priorities. Because round-robin scheduling does not give more critical work greater consideration, it can hurt system performance. A priority algorithm, on the other hand, can handle processes' priority levels: each process is assigned a priority, and the processes with the highest priority execute first. However, if the order of processes and their CPU waiting times are not considered, this can cause a starvation problem. In this paper, a new CPU scheduling algorithm called the mix PI-RR algorithm is developed. The proposed algorithm combines round-robin (RR) and priority-based (PI) scheduling to determine which tasks run and which wait, addressing the disadvantages of both round-robin and priority CPU scheduling. Performance measures with the proposed mix PI-RR algorithm indicate improved CPU scheduling: one process's CPU requirements should not hold up the others, and the algorithm helps the CPU overcome some of the problems of both algorithms.
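The combination described above can be sketched as a toy scheduler (an assumption-level illustration, not the paper's exact mix PI-RR algorithm):

```python
def pi_rr_schedule(procs, quantum=2, aging=1):
    """Priority + round-robin hybrid sketch. procs: list of
    (pid, priority, burst); a lower number means higher priority.
    Each quantum the highest-priority process runs, least-recently-run
    breaking ties (round-robin); waiting processes age so that low-priority
    work eventually runs, which eliminates starvation."""
    remaining = {pid: burst for pid, _, burst in procs}
    prio = {pid: p for pid, p, _ in procs}
    last_run = {pid: -1 for pid, _, _ in procs}
    slices, t = [], 0
    while remaining:
        pid = min(remaining, key=lambda p: (prio[p], last_run[p]))
        run = min(quantum, remaining[pid])
        slices.append((pid, t, t + run))
        last_run[pid] = t
        t += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            del remaining[pid]
        for q in remaining:            # aging: waiting processes gain priority
            if q != pid:
                prio[q] -= aging
    return slices

slices = pi_rr_schedule([("A", 1, 4), ("B", 3, 2), ("C", 1, 3)])
```

Average waiting and turnaround times can then be computed from the returned time slices.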

Author 1: Hussain Mohammad Abu-Dalbouh

Keywords: Average turnaround time; average waiting time; utilization; performance measures; operating system; process

PDF

Paper 64: Balancing a Practical Inverted Pendulum Model Employing Novel Meta-Heuristic Optimization-based Fuzzy Logic Controllers

Abstract: This paper proposes a new, effective control approach to balance an inverted pendulum (IP) system consisting of a freely rotating rod and a small cart. The novel idea is an effective integration of a PD-like fuzzy logic architecture with a modified genetic algorithm (mGA). The mGA runs as a suitably fast initial optimization phase to tune the scaling factors of the fuzzy logic controller. There are six meaningful scaling factors in total, corresponding to the two fuzzy logic controllers applied in the IP balance control system, and these coefficients strongly affect the control quality of such a PD-like fuzzy logic control methodology. Excellent numerical simulation results, compared with those of both a conventional PID controller and existing fuzzy logic counterparts, together with practical experiments on a real IP setup, verify the promising applicability of the control methodology proposed in this study.

Author 1: Dao-Thi Mai-Phuong
Author 2: Pham Van-Hung
Author 3: Nguyen Ngoc-Khoat
Author 4: Pham Van-Minh

Keywords: mGA; inverted pendulum (IP); balance control; scaling factors; optimization

PDF

Paper 65: Application of Affective Computing in the Analysis of Advertising Jingles in the Political Context

Abstract: Affective computing is an emerging research area focused on developing systems that can recognize, process, and simulate human emotions in order to improve the user's experience in an interactive system. One possible field of application is marketing and advertising, where emotion analysis techniques have been applied to user opinions in different contexts; analyzing other types of multimedia content, such as advertising jingles, remains a challenge. In this article we therefore contribute an emotion analysis study of the advertising jingles of the main candidates for mayor of Cartagena, Colombia for the 2020-2023 term. The study considers the acoustic properties of arousal and valence throughout the audio track of each jingle, through which an audio fragment can be classified into an emotion of the circumplex (Russell) model. To segment the audio track and extract the arousal and valence properties, we developed the MUSEMAN tool, which traces the fluctuation of emotions across a jingle's audio track.
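The arousal-valence classification step can be sketched as a simple quadrant lookup (coarse, hypothetical labels; the MUSEMAN tool's actual mapping may differ):

```python
def circumplex_quadrant(valence, arousal):
    """Coarse mapping of (valence, arousal), each in [-1, 1], onto the four
    quadrants of Russell's circumplex model; finer-grained emotions lie at
    angles around the circle."""
    if valence >= 0 and arousal >= 0:
        return "excited/happy"
    if valence < 0 and arousal >= 0:
        return "angry/tense"
    if valence < 0:
        return "sad/depressed"
    return "calm/relaxed"

label = circumplex_quadrant(0.7, 0.6)       # a high-energy, positive fragment
```

Applied per audio segment, such a mapping yields the emotion fluctuation curve over a jingle's track.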

Author 1: Gabriel Elias Chanchi Golondrino
Author 2: Manuel Alejandro Ospina Alarcon
Author 3: Luz Marina Sierra Martínez

Keywords: Affective computing; advertising jingles; emotion analysis; arousal; valence

PDF

Paper 66: An Intelligent Approach based on the Combination of the Discrete Wavelet Transform, Delta Delta MFCC for Parkinson's Disease Diagnosis

Abstract: Diagnosing Parkinson’s disease (PD) requires monitoring the progression of symptoms; unfortunately, the diagnosis is often confirmed years after the onset of the disease. Communication problems are often among the first symptoms to appear in people with Parkinson’s disease. In this study, we focus on the speech signal to discriminate between people with and without PD. We used a Spanish database containing 50 recordings, of which 28 are from patients with Parkinson’s disease and 22 from healthy people; the recordings contain five types of sustained vowels (/a/, /e/, /i/, /o/, and /u/). The proposed treatment decomposes each sample with the Discrete Wavelet Transform (DWT), testing several kinds of wavelets, then extracts the delta-delta Mel Frequency Cepstral Coefficients (delta-delta MFCC) from the decomposed signals, and finally applies a decision tree as the classifier. The purpose of this process is to determine the appropriate analyzing wavelet for each type of vowel for diagnosing Parkinson’s disease.
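The DWT decomposition step can be illustrated with the simplest wavelet, Haar (a numpy sketch; in practice a library such as PyWavelets provides the many wavelet families the paper tests, and the delta-delta MFCCs are then extracted from the decomposed signals):

```python
import numpy as np

def haar_dwt(x):
    """One DWT level with the Haar wavelet: orthonormal pairwise averages
    (approximation coefficients) and differences (detail coefficients)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of the one-level Haar transform."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

approx, detail = haar_dwt([4.0, 2.0, 5.0, 5.0])
```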

Author 1: BOUALOULOU Nouhaila
Author 2: BELHOUSSINE DRISSI Taoufiq
Author 3: NSIRI Benayad

Keywords: Parkinson’s disease; discrete wavelet transform; delta delta MFCC; decision tree classifier

PDF

Paper 67: Efficient Intrusion Detection System for IoT Environment

Abstract: These days, the Internet is subject to a variety of attacks that can harm network devices or allow attackers to steal their most sensitive data. The IoT environment brings new perspectives and requirements for intrusion detection due to its heterogeneity. This paper proposes a newly developed Intrusion Detection System (IDS) that relies on machine learning and deep learning techniques to identify new attacks that existing systems fail to detect in such an IoT environment. The experiments use the benchmark ToN_IoT dataset, which includes IoT service telemetry, Windows and Linux operating system data, and network traffic. Feature selection plays a key role in building an efficient IDS, so a new feature selection module has been introduced; it is based on the ReliefF algorithm, which outputs the most essential features. The extracted features are fed into selected machine learning and deep learning models. The proposed ReliefF-based IDSs are compared to existing correlation-based IDSs and outperform them: the Medium Neural Network, Weighted KNN, and Fine Gaussian SVM models achieve accuracies of 98.39%, 98.22%, and 97.97%, respectively.
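The core of the ReliefF weighting can be sketched for the two-class, one-neighbor case (hypothetical toy data; full ReliefF averages over k neighbors and handles multiple classes and missing values):

```python
import numpy as np

def relief_weights(X, y):
    """Basic Relief weighting for two classes (the single-neighbor core of
    ReliefF): a feature scores high when it agrees with each sample's
    nearest hit (same class) and differs from its nearest miss (other class)."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                       # never pick the sample itself
        same = y == y[i]
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(same, np.inf, dist))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n

# Hypothetical toy data: feature 0 separates the classes, feature 1 is noise.
X = np.array([[0.0, 0.5], [0.1, 0.4], [1.0, 0.5], [0.9, 0.4]])
y = np.array([0, 0, 1, 1])
w = relief_weights(X, y)
```

Features with the highest weights are kept and passed on to the classifiers.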

Author 1: Rehab Hosny Mohamed
Author 2: Faried Ali Mosa
Author 3: Rowayda A. Sadek

Keywords: Intrusion detection systems (IDSs); TON IoT dataset; machine learning; deep learning; ReliefF

PDF

Paper 68: Augmented System for Food Crops Production in Agricultural Supply Chain using Blockchain Technology

Abstract: An elevated agricultural traceability system for food production is of utmost significance, not only in addressing food insecurity and providing smart contracts and agri-insurance to farmers, but also in guaranteeing payouts to them during natural disasters. The proposed, improved traceability system for food cultivation covers agri-crops and farmers and employs blockchain technology to guarantee safety, consensus, a distributed ledger, immediate payment, and decentralization, thereby minimizing the cost incurred in the food processing system and building trust. Smart contracts play a pivotal role in agricultural insurance: blockchain-based agricultural insurance lists major weather incidents and their associated payouts on a smart contract connected to mobile wallets, with timely weather updates from field sensors cross-checked against data from nearby weather stations, enabling prompt payouts during a natural calamity such as a flood or drought. A panel of advisers in the decentralized system, professionally governed and managed by retired officers, makes the traceability system more trustworthy; these professionals can offer sound suggestions that help planters achieve productive outcomes.

Author 1: Dayana D. S
Author 2: Kalpana G

Keywords: Blockchain; distributed ledger; consensus; decentralized; stakeholders; agricultural insurance; payouts; trust-based farming system; food safety; panel of advisers; agri-supply chain

PDF

Paper 69: A Comprehensive Analysis of Blockchain-based Cryptocurrency Mining Impact on Energy Consumption

Abstract: Blockchain has gained popularity for its highly secure network, and at the same time its enormous computational power consumption has become a constant debate among users. A blockchain network is reliable, secure, transparent, and immutable, and transactions between sender and receiver cannot be reversed. Blockchain technology is not only used for mining cryptocurrency; it has applications in sectors such as agriculture, education, and insurance, but energy consumption remains the notable concern. The excessive energy used for mining cryptocurrency also has a significant environmental impact, releasing more carbon dioxide (CO2) into nature. The Proof-of-Work (PoW) algorithm used for mining Bitcoin consumes enormous computational power. An alternative, the Proof-of-Stake (PoS) consensus protocol, has been proposed to replace Proof-of-Work for mining cryptocurrencies and can reduce energy consumption significantly. Beyond that, powering Proof-of-Work mining with renewable energy can be an environmentally friendlier option. This paper aims to highlight blockchain technology, its energy consumption and environmental impact, the energy-reduction method of using the PoS consensus protocol instead of the PoW algorithm, and a discussion with some recommendations.
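The PoW energy argument comes down to a brute-force search loop, which a toy miner makes concrete (illustrative only: a tiny difficulty and hypothetical block data; real Bitcoin runs this same search at vastly larger difficulty, whereas under PoS a validator is chosen in proportion to its stake and no search loop runs at all):

```python
import hashlib

def mine(block_data: str, difficulty: int = 2):
    """Toy Proof-of-Work: try nonces until the block's SHA-256 digest starts
    with `difficulty` zero hex digits. The expected number of hash attempts
    grows by a factor of 16 per extra digit -- the source of the energy cost."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block#1", difficulty=2)
```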

Author 1: Md Rafiqul Islam
Author 2: Muhammad Mahbubur Rashid
Author 3: Mohammed Ataur Rahman
Author 4: Muslim Har Sani Bin Mohamad
Author 5: Abd Halim Bin Embong

Keywords: Blockchain; cryptocurrency; bitcoin; Proof-of-Work (PoW); Proof-of-Stake (PoS)

PDF

Paper 70: Bridge Pillar Defect Detection using Close Range Thermography Imagery

Abstract: Radiometric thermography imagery has recently been explored as an advanced alternative for Non-Destructive Testing (NDT), especially for early-detection analysis in various applications. With systematic image calibration, higher spatial resolution, and high-order image processing, thermography imagery has the potential to be used for detecting defects in concrete structures. This study therefore examines defects on the surface concrete of bridge pillars using a drone-based thermography sensor (7–13 µm). Close-range remote sensing NDT from a drone platform, together with imagery segmentation analysis, was applied to interpret the crack lines on two pillars of the North-South Expressway Central Link (ELITE) Highway. Thermography imagery segmentation, supported by multispectral radiometric (RGB) imagery, successfully delineated the micro crack lines on the bridge pillar concrete using the K-means clustering method. Overall, this study demonstrates a higher-order optional platform, using a drone and a thermography sensor, that can potentially be applied to forensic defect detection in the concrete structures of tall buildings.
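The K-means segmentation step can be sketched on scalar intensities (hypothetical temperature values; the study applies it to full thermography imagery):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Plain k-means on scalar pixel intensities: assign each pixel to the
    nearest center, then move each center to its cluster mean."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Hypothetical pixel temperatures: background near 20 C, crack line near 35 C.
pixels = np.array([19.8, 20.1, 20.3, 35.2, 34.9, 20.0, 35.1])
labels, centers = kmeans_1d(pixels)
```

With k=2, the warmer cluster delineates the crack pixels against the background.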

Author 1: Abd Wahid Rasib
Author 2: Muhammad Latifi Mohd Yaacob
Author 3: Nurul Hawani Idris
Author 4: Khairulazhar Zainuddin
Author 5: Rozilawati Dollah
Author 6: Norbazlan Mohd Yusof
Author 7: Norisam Abd Rahaman
Author 8: Shahrin Ahmad
Author 9: Norhadi A. Hamid
Author 10: Abdul Manaf Mhapo

Keywords: Defect; detection; bridge pillar; drone; thermography; close range remote sensing

PDF

Paper 71: Learning Effectiveness of Virtual Land Surveying Simulator for Blended Open Distance Learning Amid Covid-19 Pandemic

Abstract: Many universities worldwide were forced to physically close campuses during lockdowns and resumed in-person classes under a stringent set of standard operating procedures (SOPs) as Covid cases dropped. This profoundly disrupted hands-on, face-to-face lab learning, which is harder to move online. Virtual simulation labs could be the answer, and their use in many courses has been studied extensively; however, relatively little is known about their use in land surveying courses. The purpose of this study is to explore the learning effectiveness of a virtual surveying field lab for blended open distance learning (ODL) students at Wawasan Open University (WOU) in the time of Covid-19. The study used a mixed method combining qualitative and quantitative approaches to get a fuller picture and deeper understanding of learning behavior, applying descriptive and inferential statistical methods in SPSS. Respondents were selected with purposive sampling. Survey questionnaires were designed and distributed to students before and after the lab simulation class, and instructors were interviewed afterwards. Students' results for the surveying course were compared with past-year examination results from pre-Covid-19 times, before the virtual simulator was introduced. Both qualitative and quantitative data were collected and analyzed. The findings reveal that the virtual simulator enhanced students' learning interest and efficiency in the surveying course in an ODL setting. Both students and instructors responded positively to the virtual simulator learning experience, and students' achievement in the final examination amid Covid-19 was better than pre-Covid-19 performance. It is recommended that the virtual simulator serve not as a replacement for physical instruments but as a complement.

Author 1: Yeap Chu Im
Author 2: Muhammad Norhadri Bin Mohd Hilmi
Author 3: Tan Cheng Peng
Author 4: Azrina Jamal
Author 5: Noor Halizah binti Abdullah
Author 6: Suzi Iryanti Fadilah

Keywords: Covid-19; 3D simulator; virtual laboratory; land surveying; blended open distance learning

PDF

Paper 72: Fusion of Statistical Reasoning for Healing Highly Corrupted Image

Abstract: The need to approximate pixel values accurately, preserving image details at high noise concentrations, has led researchers to improve filter performance; many image restoration filters are effective only at lower noise densities. Filters are commonly deployed in cameras, image processing tasks, medical image analysis, guided-media data transmission, and real-time machine learning. This article proposes a mathematical model for exact pixel value estimation at high noise density for RGB and grayscale images. The model fuses statistical reasoning over optimized mask sizes while preserving image details: a mathematical function is formed from the parameters returned by the median filter, the trimmed median filter, the trimmed mean filter, and mode analysis. The filter iteratively selects different schemes to calculate pixel values at different noise densities with minimal image information, and different processing masks are analyzed to correctly preserve local data at specific image locations under high density. A robust estimator counts false approximations of pixel values as discontinued; they are identified and removed. In the post-smoothing process, the filter recovers misclassified noise-free pixels and blur effects in the image. Qualitative experiments show satisfactory results in restoring image details from any image, and the performance of the fusion filter is verified with visual quality and performance analysis metrics such as the image enhancement factor, the structural similarity index, and the peak signal-to-noise ratio.
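One building block of the proposed fusion, a trimmed median over a 3x3 mask, can be sketched as follows (an assumption-level illustration, not the paper's full mathematical model):

```python
import numpy as np

def heal_salt_pepper(img):
    """Trimmed-median sketch for salt-and-pepper noise: treat 0 and 255 as
    corrupted, and replace each corrupted pixel with the median of its
    uncorrupted 3x3 neighbors, falling back to the plain median when every
    neighbor is corrupted."""
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] not in (0, 255):
                continue                      # noise-free pixels are kept
            win = img[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
            clean = win[(win != 0) & (win != 255)]
            out[i, j] = np.median(clean) if clean.size else np.median(win)
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                               # a single salt impulse
restored = heal_salt_pepper(img)
```

The paper's fusion filter switches among several such estimators (median, trimmed mean, mode) depending on the local noise density.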

Author 1: Golam Moktader Daiyan
Author 2: Leiting Chen
Author 3: Chuan Zho
Author 4: Golam Moktader Nayeem

Keywords: Salt and pepper noise; median filter; statistical reasoning; performance analysis metrics; high-density noise; mode; trimmed median; trimmed mean; peak signal-to-noise ratio; image enhancement factor; structural similarity index

PDF

Paper 73: Framework of Infotainment using Predictive Scheme for Traffic Management in Internet-of-Vehicle

Abstract: Infotainment systems can potentially help control accident fatalities in the era of the Internet of Vehicles (IoV). A review of existing systems finds that, irrespective of the various methods for building infotainment systems, the quality of retrieved data and the issues of power and traffic congestion in vehicular communication remain pressing challenges. This manuscript therefore introduces a novel predictive scheme that offers an enriched set of information from the environment to assist decision making. Reinforcement learning is adopted to control traffic signals and power, while the proposed system introduces an augmented Long Short-Term Memory scheme to predict the best possible traffic scenario, helping the infotainment system make precise decisions. Simulations of the proposed system against existing learning schemes show that it offers better performance in every respect over a challenging IoV scene.

Author 1: Reshma S
Author 2: Chetanaprakash

Keywords: Infotainment system; internet-of-vehicle; reinforcement learning; decision making; power; long short term memory

PDF

Paper 74: Internet of Things (IoT) Application for Management in Automotive Parts Manufacturing

Abstract: Automotive parts manufacturing focuses on sustaining manufacturing capabilities through future technology changes. The Internet of Things (IoT) plays an important role in applying internet technology to machines and equipment in manufacturing processes for the transformation toward Industry 4.0, creating added value and a stronger competitive advantage for the sustainability of the industry. This research studies the factors that influence decision-makers in selecting IoT applications for managing auto parts production: connectivity, telepresence, intelligence, security, and value, together with their fifteen sub-factors. The Fuzzy Analytic Network Process (FANP), a Multiple Criteria Decision Making (MCDM) technique, is used to analyze, identify, and prioritize these factors. A questionnaire designed around the FANP technique surveyed the importance weight of each factor among executives of 88 auto parts manufacturers who are the authorized decision-makers for selecting IoT applications. The results indicate that telepresence is the most important factor, helping them control production so that capabilities meet the objectives, and connectivity is the second most important, ensuring that IoT applications are compatible with their machinery and that equipment can be controlled smoothly and precisely. Meanwhile, performance is the most important sub-factor, followed by functional orientation, data management, control, and compatibility. Manufacturers can therefore use this research as a criterion for selecting appropriate IoT applications to control their manufacturing for sustainable effectiveness.

Author 1: Apiwat Krommuang
Author 2: Opal Suwunnamek

Keywords: Internet of things; auto parts industry; FANP; MCDM; sustainable manufacturing

PDF

Paper 75: Investigation of Hybrid Feature Selection Techniques for Autism Classification using EEG Signals

Abstract: Autism Spectrum Disorder (ASD) is a non-uniform neurodevelopmental condition characterized by impaired communication and social interaction together with restricted, repetitive behaviour. The voltage created during brain activity is measured with electroencephalography (EEG), and the wavelet transform is used for time-frequency decomposition of the EEG signal. Feature selection significantly reduces the dimensionality of the feature space while maintaining a faithful representation of the original data. In this work, a metaheuristic algorithm is utilized for feature selection: the proposed method is based on River Formation Dynamics (RFD), and a hybrid Greedy RFD is presented. The Support Vector Machine (SVM), a set of supervised learning methods for pattern recognition that has proven successful in regression and classification analysis, is used as the classifier. Experimental results show that the proposed Greedy RFD feature selection improves classifier performance and enhances the accuracy of classifying ASD.

Author 1: S. Thirumal
Author 2: J. Thangakumar

Keywords: Autism spectrum disorder (ASD); electroencephalography (EEG); feature selection; River Formation Dynamics (RFD); Support Vector Machine (SVM); hybrid greedy RFD

PDF

Paper 76: An Optimized Hybrid Fuzzy Weighted k-Nearest Neighbor with the Presence of Data Imbalance

Abstract: We present an optimized hybrid fuzzy Weighted k-Nearest Neighbor classification model for imbalanced data. More attention is placed on data points in the boundary area between two classes, yielding better overall classification of imbalanced data for both the minority and the majority classes. The fuzzy weighted approach assigns large weights to small classes and small weights to large classes, which improves classification performance for the minority class. Experimental results show higher average performance than other relevant algorithms, e.g., variants of kNN with SMOTE such as Weighted kNN alone and Fuzzy kNN alone. The results also show that the proposed approach makes the overall solution more robust, while classification performance on the complete dataset is also increased.
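One way to combine the two ingredients can be sketched as follows (an assumption, not the authors' exact model: inverse-squared-distance fuzzy votes, scaled by class weights that are large for small classes and small for large ones):

```python
import numpy as np

def fuzzy_weighted_knn(X, y, query, k=3):
    """Fuzzy weighted kNN sketch: each of the k nearest neighbors votes with
    weight class_w / distance^2, so close neighbors and minority-class
    neighbors both count for more."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    class_w = {c: 1.0 / n for c, n in zip(classes, counts)}   # favor minority
    d = np.linalg.norm(X - query, axis=1)
    votes = {}
    for i in np.argsort(d)[:k]:
        votes[y[i]] = votes.get(y[i], 0.0) + class_w[y[i]] / (d[i] ** 2 + 1e-9)
    return max(votes, key=votes.get)

# Hypothetical imbalanced data: class 1 is a one-sample minority.
X = [[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0], [2.0, 2.0], [5.0, 5.0]]
y = [0, 0, 0, 0, 0, 1]
pred_minor = fuzzy_weighted_knn(X, y, np.array([4.5, 4.5]))
pred_major = fuzzy_weighted_knn(X, y, np.array([0.1, 0.0]))
```

Near the boundary, the class weighting keeps a lone minority neighbor from being outvoted by the majority class.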

Author 1: Soha A. Bahanshal
Author 2: Rebhi S. Baraka
Author 3: Bayong Kim
Author 4: Vaibhav Verdhan

Keywords: Imbalanced data; fuzzy weighted kNN; SMOTE; classification model; optimized hybrid kNN

PDF

Paper 77: SPKP: A Web-based Application System for Managing Mental Health in Higher Institution

Abstract: The client psychology profile is one of the methods used by UUM's Counselling Centre to analyze clients' psychological health. This profile is currently in physical form, meaning the questions are answered on paper; it consists of three psychological modules, each containing psychology-related questions, and takes an estimated 10 minutes to complete. The paper-based method can cause data to be lost and creates problems when counselors want to retrieve the data later. This paper aims to develop Sistem Profil Kesejahteraan Psikologi (SPKP) for the Counselling Centre of Universiti Utara Malaysia (UUM) to help assess a client's psychological health before the client meets the counselor. By focusing on analyzing the Counselling Centre's client data, this web application system creates a new space that helps the Centre improve how it collects and stores data and improves its time management: data can be collected, stored, retrieved, and analyzed in a single click. With this system, the Counselling Centre can monitor clients' psychological health before they meet the counselor.

Author 1: Mohamad Fadli Zolkipli
Author 2: Zahidah Mohamad Said
Author 3: Massudi Mahmuddin

Keywords: Psychology tests; psychology health; counselling; counselor

PDF

Paper 78: Breast Cancer Classification using Decision Tree Algorithms

Abstract: Cancer is a major health issue that affects individuals all over the world. This disease has claimed the lives of many people and will continue to do so in the future. Breast cancer has recently surpassed cervical cancer as the most frequent cancer among women in both industrialized and developing countries, and it is now the second leading cause of cancer mortality among women. A high number of women die each year as a result of this disease, yet breast cancer is significantly easier to treat if caught early. This paper introduces a decision tree-based data mining technique for early breast cancer detection with high accuracy, which helps patients to recover. Breast growths are classed as benign (unable to penetrate surrounding tissue) or malignant (able to infiltrate adjacent tissue). The study comprises two experiments: the primary study uses 10 breast cancer samples from the Kaggle archive, whereas the follow-up study uses 286 breast cancer samples from the same pool. The Decision Tree's accuracy was 100% in the first trial and 97.9% in the follow-up. These findings justify the use of the proposed machine learning-based Decision Tree classifier in pre-evaluating patients for triage and decision-making prior to the availability of data.
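Decision trees of the kind used here grow by repeatedly choosing the split that minimizes node impurity. A minimal, hypothetical sketch of the Gini criterion on a single numeric feature (an illustration, not the paper's implementation):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum of squared
    class proportions (0 means the node is pure)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(values, labels):
    """Find the threshold on one numeric feature that minimizes the
    weighted Gini impurity of the two resulting child nodes."""
    best = (None, float('inf'))
    for t in sorted(set(values)):
        left = [y for v, y in zip(values, labels) if v <= t]
        right = [y for v, y in zip(values, labels) if v > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best
```

A tree builder would apply `best_split` recursively to each child node until the leaves are pure or a depth limit is reached.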

Author 1: Omar Tarawneh
Author 2: Mohammed Otair
Author 3: Moath Husni
Author 4: Hayfa.Y. Abuaddous
Author 5: Monther Tarawneh
Author 6: Malek A Almomani

Keywords: Data mining; decision tree; classifier; breast cancer

PDF

Paper 79: Development and Validation of e-Books during the Post-Pandemic to Improve Attitude towards Environmental Care in Case of Indonesia

Abstract: Indonesia has natural diversity in the form of tropical rain forests, the sea, flora, and fauna. Information technology adaptation is needed considering that its development and use are growing, especially during the pandemic and post-pandemic periods. Through information technology in the form of e-books based on local wisdom, an attitude of caring for the environment can grow. This research aims to combine information technology, environment-based science learning, and local wisdom to grow students' environmental care character through Google Sites-based e-books that implement a green science learning model oriented to local wisdom. The study develops and validates such an e-book through the ADDIE research design. The results show that the developed e-book is declared valid and feasible to use, with an average percentage of 90.74% from the three experts; student responses were likewise positive, at 90.29%. The application of Google Sites-based e-books in the green science model oriented to local wisdom can increase the growth of environmental care character in junior high school students, as evidenced by a multiple regression analysis with significance 0.00, below 0.05.

Author 1: Anggraeni Mashinta Sulistyani
Author 2: Zuhdan Kun Prasetyo
Author 3: Farida Hanum
Author 4: Rizki Noor Prasetyono

Keywords: e-Book; Google site; green science learning model; local wisdom; environmental care character

PDF

Paper 80: A Two-Stage Assessment Approach for QoS in Internet of Things based on Fuzzy Logic

Abstract: In the sphere of IoT, one of the most significant issues is quality of service (QoS), which is critical for both developers and customers. IoT platform developers are therefore working to enhance models so that IoT services meet consumers' expected levels of quality. The multidimensional architecture of the IoT platform, combined with the ambiguity of consumers' thinking, makes QoS evaluation a difficult process. This study seeks to solve these issues and proposes a new paradigm for assessing QoS in IoT ecosystems. The proposed approach evaluates QoS in two steps, with the goal of assessing QoS at all levels. To address the issue of uncertainty, the metric values and QoS were represented using a fuzzy logic method. The model correctly estimated the QoS for 50 services in the dataset; the results show that 16 services are classed as high quality, 25 as medium quality, and the rest as low quality.
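To make the fuzzy-logic step concrete, the sketch below maps a crisp QoS score to a low/medium/high label via triangular membership functions. The breakpoints (0/50/100) are invented for demonstration and are not taken from the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def qos_level(score):
    """Map a crisp QoS score in [0, 100] to the fuzzy set with the
    highest membership degree (low / medium / high)."""
    memberships = {
        'low': triangular(score, -1, 0, 50),      # peak at 0
        'medium': triangular(score, 25, 50, 75),  # peak at 50
        'high': triangular(score, 50, 100, 101),  # peak at 100
    }
    return max(memberships, key=memberships.get)
```

A full fuzzy QoS evaluator would fuzzify each metric this way, combine them with inference rules, and defuzzify the result; only the membership step is shown here.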

Author 1: Mutasim Elsadig Adam
Author 2: Yasir Abdalgadir Ahmed Hamid

Keywords: IoT; internet of things; QoS; quality of services; fuzzy logic; evaluate; assess

PDF

Paper 81: Design and Usability Study of Hypertension Management Guideline Mobile Application with Hypertension and Non-hypertension Patients

Abstract: Hypertension is rising steadily among the world population. A first level of screening to know whether one suffers from hypertension is essential, as this lays the foundation for the actual diagnosis. This research details the user interface design and usability evaluation of a hypertension management guideline. The proposed mobile application prototype assists people in screening themselves for hypertension based on symptoms. The prototype also acts as a sharing platform that helps hypertension patients share their concerns and advice within the related online community. An eye-tracker experiment was used to support the visual strategy of the prototype design. To study the usability of the mobile application, an experiment was carried out with two groups of people, one with hypertension and one without. An independent-samples t-test was conducted to compare the user performance scores using the proposed prototype. Based on the usability study, both user groups understood and used the application with ease. However, the findings revealed a significant difference in overall scores between hypertension and non-hypertension patients. The findings of this study could help software developers design an effective hypertension guideline application for monitoring health and well-being.
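The independent-samples t-test used in the usability study can be sketched as follows. This is Welch's variant (which does not assume equal variances), shown only to make the statistic concrete; the paper does not state which variant was used.

```python
import math

def welch_t(sample_a, sample_b):
    """Independent-samples t statistic (Welch's version) comparing the
    mean performance scores of two groups."""
    def mean_var(s):
        m = sum(s) / len(s)
        v = sum((x - m) ** 2 for x in s) / (len(s) - 1)  # sample variance
        return m, v
    ma, va = mean_var(sample_a)
    mb, vb = mean_var(sample_b)
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))  # standard error
    return (ma - mb) / se
```

The statistic would then be compared against the t distribution (with Welch-Satterthwaite degrees of freedom) to obtain a p-value.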

Author 1: Nor Azman Ismail
Author 2: Nor Atiqah Mohd Fuaad
Author 3: Muhammad Syahmi Zulkifli
Author 4: Farhat Embarak
Author 5: Nur Zuhairah Afiqah Husni
Author 6: Su Elya Mohamed
Author 7: Puteri Syaza Kamarina Megat Mohd Zainon

Keywords: Hypertension management guideline; hypertension patient; hypertension symptoms; user interface; user experience

PDF

Paper 82: A CNN based Approach for Handwritten Character Identification of Telugu Guninthalu using Various Optimizers

Abstract: Handwritten character recognition is a critical and challenging area of research in image processing. A computer's ability to detect handwriting input from various original sources, such as paper documents, images, touch screens, and other online and offline devices, falls under this form of recognition. Identifying handwriting in Indian languages like Hindi, Tamil, Telugu, and Kannada has received less attention than in languages like English and Asian languages like Japanese and Chinese. Adaptive Moment Estimation (ADAM), Root Mean Square Propagation (RMSProp), and Stochastic Gradient Descent (SGD) optimization methods employed in a Convolutional Neural Network (CNN) produced good recognition accuracy and training and classification times for Telugu handwritten character recognition. CNNs make it possible to overcome the limitations of classic machine learning methods. We collected numerous handwritten Telugu guninthalu to construct our own dataset for the proposed model. Comparatively, the RMSProp optimizer outperforms the ADAM and SGD optimizers, achieving 94.26% accuracy.
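The three optimizers differ only in their parameter-update rules. A toy sketch minimizing a one-dimensional quadratic shows the rules side by side; the hyperparameters are common illustrative defaults, not the paper's settings.

```python
import math

def minimize(optimizer, steps=200, lr=0.1):
    """Minimize f(x) = (x - 3)^2, whose gradient is f'(x) = 2(x - 3),
    with one of three update rules. Returns the final iterate."""
    x, cache, m, v = 0.0, 0.0, 0.0, 0.0
    beta1, beta2, eps = 0.9, 0.999, 1e-8
    for t in range(1, steps + 1):
        g = 2.0 * (x - 3.0)
        if optimizer == 'sgd':
            x -= lr * g                                  # plain gradient step
        elif optimizer == 'rmsprop':
            cache = 0.9 * cache + 0.1 * g * g            # EMA of squared grads
            x -= lr * g / (math.sqrt(cache) + eps)
        elif optimizer == 'adam':
            m = beta1 * m + (1 - beta1) * g              # first moment
            v = beta2 * v + (1 - beta2) * g * g          # second moment
            m_hat = m / (1 - beta1 ** t)                 # bias correction
            v_hat = v / (1 - beta2 ** t)
            x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x
```

In a real CNN, the same scalar rules are applied elementwise to every weight tensor.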

Author 1: B. Soujanya
Author 2: Suresh Chittineni
Author 3: T. Sitamahalakshmi
Author 4: G. Srinivas

Keywords: Character recognition; Adam; RMSProp; SGD; CNN

PDF

Paper 83: Information Security Enhancement by Increasing Randomness of Stream Ciphers in GSM

Abstract: Information security is a crucial issue and needs to be addressed efficiently. Encryption of the original information is used to ensure privacy during the exchange of information. In the GSM (Global System for Mobile) standard, once voice traffic initiates after signaling and authentication, encryption comes into the picture to ensure privacy during the call. In this process, the plaintext is encrypted into ciphertext using stream ciphers. Stronger security requires strong ciphers with strong randomness. The Linear Feedback Shift Register (LFSR) based A5 algorithm family is used for encryption in GSM. This cipher has many shortcomings, so privacy cannot be assured. This paper proposes several ways to ensure better security by enhancing the randomness of the generated bit stream used for encryption: incorporation of the user's current location, reuse of the 32-bit SRES already generated during the authentication process, and conversion of linear FSRs into nonlinear FSRs. The NIST Statistical Test Suite is used to test the various properties of the random bit stream, and an attempt has been made to achieve better randomness and hence more security.
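A Fibonacci LFSR and the NIST frequency (monobit) test can both be sketched in a few lines. The 4-stage register and tap positions below are toy values, far smaller than A5/1's 19/22/23-bit registers.

```python
import math

def lfsr_stream(seed, taps, nbits):
    """Fibonacci LFSR: the feedback bit is the XOR of the tapped stages;
    the last stage is output and the feedback is shifted in at the front."""
    state = list(seed)
    out = []
    for _ in range(nbits):
        fb = 0
        for t in taps:
            fb ^= state[t]
        out.append(state[-1])
        state = [fb] + state[:-1]
    return out

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value based on the
    normalized difference between the counts of ones and zeros."""
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))
```

A sequence passes the monobit test when its p-value exceeds the chosen significance level (typically 0.01); a heavily biased stream fails immediately.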

Author 1: Ram Prakash Prajapat
Author 2: Rajesh Bhadada
Author 3: Arjun Choudhary

Keywords: Security; encryption; A5/1 stream cipher; randomness; NIST test suite

PDF

Paper 84: Plant Disease Detection using AI based VGG-16 Model

Abstract: Agriculture and modern farming is one of the fields where IoT and automation can have a great impact. Maintaining healthy plants and monitoring their environment in order to identify or detect diseases is essential to maintaining maximum crop yield. The implementation of rapidly advancing technologies, including artificial intelligence (AI), machine learning, and deep learning, has proved extremely important in modern agriculture, particularly in the domain of advanced image analysis. Artificial intelligence adds time efficiency and the possibility of identifying plant diseases, in addition to monitoring and controlling the environmental conditions in farms. Several studies showed that machine learning and deep learning technologies can detect plant diseases by analyzing plant leaves with great accuracy and sensitivity. In this study, considering the worth of machine learning for disease detection, we present a convolutional neural network VGG-16 model to detect plant diseases, allowing farmers to take timely treatment actions without further delay. To carry this out, 19 different classes of plant diseases were chosen, and 15,915 plant leaf images (both diseased and healthy leaves) were acquired from the Plant Village dataset for training and testing. Based on the experimental results, the proposed model achieves an accuracy of about 95.2% with a testing loss of only 0.4418. The proposed model provides a clear direction toward deep learning-based plant disease detection applicable on a large scale in the future.

Author 1: Anwar Abdullah Alatawi
Author 2: Shahd Maadi Alomani
Author 3: Najd Ibrahim Alhawiti
Author 4: Muhammad Ayaz

Keywords: Machine learning; VGG-16; disease detection; convolutional networks; Plant Village; modern farming

PDF

Paper 85: Analysis of Factors Influencing the COVID-19 Mortality Rate in Indonesia using Zero Inflated Negative Binomial Model

Abstract: This research aims to create a model and analyze the factors that influence the COVID-19 mortality rate in Indonesia. Five independent variables and one dependent variable are used. The independent variables are the percentage of poor people, the percentage of households using shared toilet facilities, the percentage of households using wood as the main cooking fuel, the percentage of the population whose drinking water source is pumped water, and the percentage of the population who have private health insurance. The dependent variable is the Annual Parasite Incidence of COVID-19. The results are as follows. First, a Zero-Inflated Negative Binomial (ZINB) regression model was obtained for COVID-19 morbidity; this model can overcome overdispersion and excess zero values in observations. Second, four independent variables have a significant effect on the count model, and no independent variable has a significant effect on the zero-inflation model. Third, a web application is produced that can display the ZINB regression model.
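The ZINB model mixes a point mass at zero with a negative binomial count distribution; the pmf sketch below shows how structural zeros inflate P(Y = 0). This is the standard textbook parameterization (mean mu, dispersion k, zero-inflation probability pi), not code from the paper.

```python
import math

def nb_pmf(y, mu, k):
    """Negative binomial pmf with mean mu and dispersion k
    (log-gamma form to avoid overflow for large counts)."""
    coef = math.exp(math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1))
    p = k / (k + mu)
    return coef * p ** k * (1 - p) ** y

def zinb_pmf(y, mu, k, pi):
    """Zero-inflated NB: with probability pi the count is a structural
    zero; otherwise it follows NB(mu, k)."""
    base = nb_pmf(y, mu, k)
    return pi + (1 - pi) * base if y == 0 else (1 - pi) * base
```

Fitting the full regression model ties mu (and pi) to covariates through log and logit links, which is what the paper's web application displays.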

Author 1: Maria Susan Anggreainy
Author 2: Abdullah M. Illyasu
Author 3: Hanif Musyaffa
Author 4: Florence Helena Kansil

Keywords: Mortality rate; overdispersion; zero-inflated negative binomial; Poisson regression; correlation

PDF

Paper 86: Automatic Healthy Sperm Head Detection using Deep Learning

Abstract: Infertility is one of the diseases in which researchers are interested. Infertility is a global health concern, and andrologists are constantly looking for more advanced solutions for this disease. The intracytoplasmic sperm injection (ICSI) process is considered one of the most common procedures for achieving fertilization. Sperm selection is performed using visual assessment, which is dependent upon the skills of the laboratory technicians and as such prone to human error. Therefore, an automatic detection system is needed for quick and more accurate results. This study utilizes a deep learning technique for classifying human sperm heads, which indicate healthy human sperms. The 16-layer Visual Geometry Group Convolutional Neural Network (VGG16), one of the best architectures for image classification, was used. The dataset consists of 1200 images of human sperm heads divided into healthy and unhealthy. Here, the VGG16 model is fine-tuned and achieved an accuracy of 97.92%, a sensitivity of 98.82%, and an F1 score of 98.53%. The model is an effective and real-time system for detecting healthy sperms that can be injected into eggs to achieve successful fertilization. It quickly recognizes healthy sperms and makes the sperm selection process more accurate and easier for andrologists.

Author 1: Ahmad Abdelaziz Mashaal
Author 2: Mohamed A. A. Eldosoky
Author 3: Lamia Nabil Mahdy
Author 4: Kadry Ali Ezzat

Keywords: Infertility; sperm morphology; deep learning; human sperm head; healthy sperms

PDF

Paper 87: RTL Design and Testing Methodology for UHF RFID Passive Tag Baseband-Processor

Abstract: With the rapid growth and widespread implementation of Internet-of-Things (IoT) technology, Radio Frequency Identification (RFID) has become a vital supporting technology for it. Various researchers have studied the design of digital or analog blocks for RFID readers; however, most of these works did not provide a comprehensive design methodology. Hence, the motivation of this study is to fill this research gap. This paper proposes a comprehensive design and testing methodology for the Ultrahigh Frequency (UHF) RFID passive tag baseband processor at the register transfer level (RTL). A complete design procedure for each block from state diagram to schematic level is presented; the processor comprises several blocks, i.e., transmitter, receiver, Cyclic Redundancy Check (CRC), command processing, and Pseudorandom Number Generator (PRNG). Each block produces low latency (<400 ns). Two CRCs were applied to this system for different purposes: CRC-5 and CRC-16. To exercise as many as 1,344 multi-parameter combinations (including timing parameters, query response, state transitions, and BLF), a Universal Verification Methodology (UVM)-based test is conducted. The simulation results reveal that the proposed RFID baseband processor passes all testing scenarios using UVM (version 1.1d). Moreover, we also implemented the proposed design on an FPGA board (ALTERA DE2-115). The system consumes 976 logic elements and 173.14 mW of total power dissipation (i.e., 0.13 mW dynamic, 98.6 mW static, and 74.34 mW I/O), which is reasonably low. This demonstrates that our design is synthesizable and ready for further processing. All system design and test criteria follow the EPC Gen-2 standard. The developed chip can be a solution for various kinds of RFID chip-based IoT applications.
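As an illustration of the CRC-16 block, here is a bitwise software model of CRC-16/CCITT-FALSE (polynomial 0x1021, initial value 0xFFFF), the CRC-16 variant commonly associated with EPC Gen-2 framing; this is a reference model for verification, not the paper's RTL.

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE (MSB-first, no reflection)."""
    crc = init
    for byte in data:
        crc ^= byte << 8                      # bring next byte into the high bits
        for _ in range(8):
            if crc & 0x8000:                  # MSB set: shift and apply polynomial
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

In a UVM flow, a golden model like this is compared bit-for-bit against the RTL CRC block across the randomized command sequences.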

Author 1: Syifaul Fuada
Author 2: Aris Agung Pribadi
Author 3: Trio Adiono
Author 4: Tengku Ahmad Madya

Keywords: UHF RFID passive tag; baseband processor; register transfer level; universal verification methodology; Internet-of-things enabler; FPGA

PDF

Paper 88: IAGA: Interference Aware Genetic Algorithm based VM Allocation Policy for Cloud Systems

Abstract: Increasingly diverse systems hosted on cloud infrastructure must share physical servers. Cloud applications running on physical machines require diverse resources, and their requirements fluctuate with the resource intensity of the applications. Multi-tenancy of cloud servers is achieved through effective resource utilization. Optimum resource utilization, maximum service level agreement compliance, and minimization of interference are the major objectives to be achieved. Using live Virtual Machine (VM) migration techniques, cloud resources can be utilized efficiently, but migrated VMs can interfere with applications already running on the target server, which may lead to service level agreement violations (SLAV) and performance degradation. Resolving this issue requires understanding the current state of cloud hosts before allocating a newly migrated VM. This paper presents an Interference Aware Genetic Algorithm (IAGA) based VM allocation strategy to achieve the aforementioned objectives. The proposed IAGA policy outperforms existing policies on quantifiable performance metrics such as energy consumed by cloud systems, count of hosts shut down, average SLAV, and count of VM migrations.
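A compact, hypothetical sketch of an interference-aware GA for VM placement: each chromosome assigns VMs to hosts, and fitness penalizes host overload plus pairwise interference between co-located VMs. All weights, operators, and parameters here are illustrative and are not the IAGA policy's actual design.

```python
import random

def fitness(assign, vm_load, capacity, interference):
    """Lower is better: SLA-style overload penalty per host plus
    pairwise interference between VMs placed on the same host."""
    hosts = {}
    for vm, h in enumerate(assign):
        hosts.setdefault(h, []).append(vm)
    cost = 0.0
    for vms in hosts.values():
        load = sum(vm_load[v] for v in vms)
        cost += max(0.0, load - capacity) * 10
        for i in range(len(vms)):
            for j in range(i + 1, len(vms)):
                cost += interference[vms[i]][vms[j]]
    return cost

def evolve(vm_load, capacity, interference, n_hosts, pop=30, gens=60, seed=1):
    """Elitist GA with one-point crossover and random-reset mutation."""
    rng = random.Random(seed)
    n = len(vm_load)
    popn = [[rng.randrange(n_hosts) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: fitness(a, vm_load, capacity, interference))
        elite = popn[:pop // 2]                    # keep the best half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:                 # mutation: reassign one VM
                child[rng.randrange(n)] = rng.randrange(n_hosts)
            children.append(child)
        popn = elite + children
    return min(popn, key=lambda a: fitness(a, vm_load, capacity, interference))
```

On a tiny instance with two mutually interfering VMs, the GA learns to place them on different hosts while keeping each host within capacity.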

Author 1: Tarannum Alimahmad Bloch
Author 2: Sridaran Rajagopal
Author 3: Prashanth C. Ranga

Keywords: Cloud computing; interference; VM allocation; SLA violation; resource utilization

PDF

Paper 89: A Hybrid Deep Learning Approach for Freezing of Gait Prediction in Patients with Parkinson's Disease

Abstract: The main objective of this work is to enhance the prediction of Freezing of Gait (FoG) episodes for patients with Parkinson's Disease (PD). This paper proposes a hybrid deep learning approach that considers FoG prediction as an unsupervised multiclass classification problem with three classes: normal walking, pre-FoG, and FoG events. The proposed hybrid approach, Deep Conv-LSTM, is based on Convolutional Neural Network (CNN) layers and Long Short-Term Memory (LSTM) units, with spectrogram images generated from angular-axes features instead of the usual principal-axes features as the model input. Experimental results showed that the proposed approach achieved an average accuracy of 94.55% for early detection of FoG episodes using the publicly available Daphnet and Opportunity benchmark datasets. Furthermore, the proposed approach achieved an accuracy of 93.5% for FoG event prediction using the Daphnet dataset in subject-independent mode. The significance of this study is thus to investigate and validate the impact of using a hybrid deep learning method for improving FoG episode prediction.

Author 1: Hadeer El-ziaat
Author 2: Nashwa El-Bendary
Author 3: Ramadan Moawad

Keywords: Freezing of Gait (FoG); Parkinson's disease (PD); angular axes features; spectrogram; convolutional neural network (CNN); long short-term memory (LSTM)

PDF

Paper 90: Emotions Classification from Speech with Deep Learning

Abstract: Emotions are essential parts that convey meaning to interlocutors during social interactions. Hence, recognising emotions is paramount in building a good and natural affective system that can interact naturally with human interlocutors. However, recognising emotions from social interactions requires temporal information in order to classify the emotions correctly. This research proposes an architecture that extracts temporal information using a temporal Convolutional Neural Network (CNN) model combined with a Long Short-Term Memory (LSTM) architecture, applied to the speech modality. Several combinations and settings of the architectures were explored and are presented in the paper. The results show that the best classifier was achieved by the model trained with four layers of CNN combined with one layer of Bidirectional LSTM. Furthermore, the model was trained with an augmented training dataset containing seven times more data than the original. The best model resulted in 94.25%, 57.07%, 0.2577, and 1.1678 for training accuracy, validation accuracy, training loss, and validation loss, respectively. Moreover, Neutral (Calm) and Happy are the easiest classes to recognise, while Angry is the hardest to classify.

Author 1: Andry Chowanda
Author 2: Yohan Muliono

Keywords: Emotions recognition; speech modality; temporal information; affective system

PDF

Paper 91: A Comprehensive Overview on Biometric Authentication Systems using Artificial Intelligence Techniques

Abstract: Biometric authentication is becoming more prevalent as it allows consumers to authenticate themselves without entering a physical address or a personal identification number; a simple finger gesture or a glance at a camera can prove one's identity. In this review, we explain in detail the concept of authentication and how the various types of biometric techniques are used for user identification. We then discuss the various ways these techniques can be combined to create a truly multimodal authentication system. For a more organized approach, our overview is classified into two main categories based on human biometric traits. First, the physiological traits include fingerprint, facial, iris/retina, hand, and finger-vein recognition. Second, the behavioral traits include voice, signature, and keystroke recognition systems. Finally, we offer a comprehensive comparison of selected methods and techniques, focusing on three criteria: algorithms, merits, and drawbacks. Based on this comparison, we provide insight into our future research in iris recognition, in which we combine several artificial intelligence algorithms to develop our system.

Author 1: Shoroog Albalawi
Author 2: Lama Alshahrani
Author 3: Nouf Albalawi
Author 4: Reem Kilabi
Author 5: Aaeshah Alhakamy

Keywords: Biometric authentication; physiological traits; behavioral traits; facial recognition; iris recognition; voice recognition; signature

PDF

Paper 92: Protecting User Preference in Ranked Queries

Abstract: Protecting data privacy is of extreme importance for users who outsource their data to a third party in cloud computing. Although there is plenty of research on data privacy protection, the problem of protecting users' preference information has received less attention. In this paper, we consider how to prevent user preference information from leaking to the third party when processing ranked queries. We propose two algorithms to solve the problem: the first is based on distortion of the preference vector, and the second is based on homomorphic encryption. We conduct extensive experiments to verify the effectiveness of our approaches.
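The homomorphic-encryption approach relies on an additively homomorphic scheme such as Paillier, under which a server can combine encrypted preference scores without decrypting them. A toy sketch with insecurely small primes, for illustration only (the paper's exact scheme and parameters are not specified here):

```python
import math
import random

def paillier_keygen(p=293, q=433):
    """Toy Paillier keypair from two small primes; g = n + 1."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                 # valid because g = n + 1
    return (n,), (lam, mu, n)

def encrypt(pub, m):
    (n,) = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:           # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n       # L(x) = (x - 1) / n, then scale by mu
```

Multiplying ciphertexts decrypts to the sum of the plaintexts, and raising a ciphertext to a power decrypts to a scalar multiple, which is exactly what an encrypted dot product between a preference vector and item scores needs. Real deployments use keys of 2048 bits or more.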

Author 1: Rong Tang
Author 2: Xinyu Yu

Keywords: Preference privacy; distortion; homomorphic encryption

PDF

Paper 93: Towards Security Awareness of Mobile Applications using Semantic-based Sentiment Analysis

Abstract: With the rapid increase of smartphones and the growing interest in their applications, e.g., Google Play apps, it becomes necessary to analyze users' reviews, whether expressed as ratings or comments. Recent studies reported that users' reviews can provide useful clues and valuable features that help in understanding the broad opinion about applications in terms of security awareness. Several techniques have been developed for this crucial task, and significant progress has been achieved, such as semantic and sentiment analysis, topic modelling, and clustering. The majority of existing methods mainly represent reviews' words in a Bag-of-Words vector space with string-matching approaches, without considering the common polysemy and synonymy problems of words. This matters because users of these applications often come from diverse backgrounds and thus use different vocabulary. This paper proposes a new approach to classifying security opinions about applications from users' reviews while considering the special features of synonymous and polysemous words. To achieve this, the proposed model makes use of word embedding, topic modelling, Bi-LSTM, and an n-gram approach. For the proposed model, a new dataset is built that contains reviews about 18 popular applications, selected primarily to make the dataset diverse in its domains. The experimental results showed that the proposed ensemble model, which combines predictions over the extracted features and in turn captures synonymy, polysemy, and word dependency, is significantly useful, achieving better results with an accuracy approaching 90% compared to the use of each technique separately. The model could contribute to preventing mobile users from installing unsafe applications.
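One of the lexical components, the n-gram features, can be sketched as a simple bag-of-n-grams vectorizer; this is a generic illustration, not the paper's pipeline, which additionally uses embeddings, topic modelling, and a Bi-LSTM.

```python
from collections import Counter

def word_ngrams(text, n=2):
    """Extract word-level n-grams from a review."""
    tokens = text.lower().split()
    return [' '.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bag_of_ngrams(reviews, n=2):
    """Build a vocabulary over all reviews and represent each review as
    a sparse count vector (dict of vocabulary index -> count)."""
    vocab = sorted({g for r in reviews for g in word_ngrams(r, n)})
    index = {g: i for i, g in enumerate(vocab)}
    return vocab, [
        dict(Counter(index[g] for g in word_ngrams(r, n))) for r in reviews
    ]
```

These sparse count vectors would then be concatenated with the embedding-based features before classification.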

Author 1: Ahmed Alzhrani
Author 2: Abdulmjeed Alatawi
Author 3: Bandar Alsharari
Author 4: Umar Albalawi
Author 5: Mohammed Mustafa

Keywords: Security awareness; semantic analysis; sentiment analysis; mobile applications; topic modelling; clustering

PDF

Paper 94: Deep Multi View Spatio Temporal Spectral Feature Embedding on Skeletal Sign Language Videos for Recognition

Abstract: The primary objective of this work is to build, from multiple views, a competitive global view that represents all the views within a class label. The first phase involves the extraction of spatio-temporal features from videos of skeletal sign language using a 3D convolutional neural network. In phase two, the extracted spatio-temporal features are ensembled into a latent low-dimensional subspace for embedding in the global view. This is achieved by learning the weights of a linear combination of Laplacian eigenmaps of the multiple views. Subsequently, the constructed global view is applied as training data for sign language recognition.

Author 1: SK. Ashraf Ali
Author 2: M. V. D. Prasad
Author 3: P. Praveen Kumar
Author 4: P. V. V. Kishore

Keywords: Laplacian eigenmaps; 3D convolutional networks; sign language recognition; multi view; skeletal data

PDF

Paper 95: Human Activity Recognition in Car Workshop

Abstract: Human activity recognition has become widespread in recent times. Due to modern advancements in technology, it has become an important solution to many problems in various fields such as medicine, industry, and sports, and the subject has attracted the attention of many researchers. To address problems such as wasted time in maintenance centers, we propose a system that extracts worker poses from videos using pose classification. In this paper, we tested two algorithms to detect worker activity. The system aims to detect and classify positive and negative worker activities in car maintenance centers (changing a tire, changing oil, using the phone, standing without work). We conducted two experiments. The first compared the two algorithms, the $1 recognizer and fast dynamic time warping (FastDTW), on 3 participants in a controlled area to determine which recognizes the performed activities more accurately; the $1 recognizer achieved 97% accuracy compared to 86% for FastDTW. The second experiment measured the performance of the $1 algorithm with different participants. The results show that the $1 recognizer achieved an accuracy of 94.2% when tested on 420 different videos.
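FastDTW approximates classic dynamic time warping; the exact O(nm) DTW distance that it approximates can be sketched as follows (shown on 1-D sequences for illustration; pose trajectories would be multi-dimensional).

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    filled as an (n+1) x (m+1) cumulative-cost table."""
    inf = float('inf')
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Because the alignment can stretch time, a slowed-down copy of a gesture still matches its template with zero cost, which is what makes DTW suitable for comparing activities performed at different speeds.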

Author 1: Omar Magdy
Author 2: Ayman Atia

Keywords: Machine learning; human activity recognition; pose identification; industry analysis

PDF

Paper 96: BMP: Toward a Broker-less and Microservice Platform for Internet of Things

Abstract: The Internet of Things (IoT) is currently one of the most interesting technology trends. IoT is the foundation and driving force for the development of other scientific fields, based on its ability to connect things and the huge amount of data it collects. The IoT platform is considered the backbone of every IoT architecture: it not only allows the transfer of data between user and device but also feeds high-level applications such as big data or deep learning. As a result, the optimal design of the IoT platform is a very important issue, which should be carefully considered in many aspects. Although the IoT is applied in multiple domains, there are three indispensable features: (a) data collection, (b) device and user management, and (c) remote device control. These functions usually come with requirements such as security, high-speed transmission, low energy consumption, reliable data exchange, and scalable systems. In this paper, we propose an IoT platform, called BMP (Broker-less and Microservice Platform), designed according to microservice and broker-less architecture combined with the gRPC protocol to meet the requirements of the three features mentioned above. Our IoT platform addresses five issues: (1) the limited processing capacity of devices, (2) energy consumption, (3) transmission rate and accuracy of data exchange, (4) security mechanisms, and (5) scalability of the system. Moreover, we describe an evaluation (i.e., proof-of-concept) to prove the effectiveness of the BMP in three scenarios. Finally, the source code of the BMP is published in a GitHub repository to support reproducibility and further improvement.

Author 1: Lam Nguyen Tran Thanh
Author 2: Khoi Le Quoc
Author 3: The Anh Nguyen
Author 4: Huong Hoang Luong
Author 5: Hong Khanh Vo
Author 6: Tuan Dao Anh
Author 7: Hy Nguyen Vuong Khang
Author 8: Khoi Nguyen Huynh Tuan
Author 9: Hieu Le Van
Author 10: Nghia Huynh Huu
Author 11: Khoa Tran Dang
Author 12: Khiem Huynh Gia

Keywords: Internet of Things (IoT); gRPC; single sign-on; broker-less; Kafka; microservice; Role-based Access Control (RBAC)

PDF

Paper 97: CNN-LSTM Based Approach for Dos Attacks Detection in Wireless Sensor Networks

Abstract: A denial-of-service (DoS) attack is a coordinated attack by many endpoints, such as computers or networks. These attacks are often performed by a botnet, a network of malware-infected computers controlled by an attacker. The endpoints are instructed to send traffic to a particular target, overwhelming it and preventing legitimate users from accessing its services. In this project, we used a CNN-LSTM network to detect and classify DoS intrusion attacks. Attack detection is treated as a classification problem; the main aim is to classify the attack as Flooding, Blackhole, Normal, TDMA, or Grayhole. This research uses a computer-generated wireless sensor network detection dataset. The wireless sensor network environment was simulated using the network simulator NS-2 with the LEACH routing protocol to gather data from the network, which was preprocessed to produce 23 features classifying the state of the respective sensor, and to simulate five forms of DoS attacks. The developed CNN-LSTM model, evaluated over 25 epochs, achieves accuracy, precision, and recall scores of 0.944, 0.959, and 0.922, respectively, on a scale of 0-1.

Author 1: Salim Salmi
Author 2: Lahcen Oughdir

Keywords: Denial of Service (DoS); Wireless Sensor Networks (WSN); Convolutional Neural Network (CNN); Long Short-Term Memory (LSTM)

PDF

Paper 101: An Enhanced Predictive Approach for Students’ Performance

Abstract: Applying data mining to improve the outcomes of the educational process has become one of the most significant areas of research. The most important cornerstone of the educational process is students’ performance; early prediction of students’ performance therefore aims to assist at-risk students by providing appropriate and early support and intervention. The objective of this paper is to propose an enhanced predictive model for students’ performance. Selecting the most important features helps academic institutions make an appropriate intervention for students with poor performance; in the feature selection step, the top influencing features were selected, which also reduces dimensionality and supports an efficient predictive model. The DBSCAN clustering technique is applied in the preprocessing step to enhance the performance of the proposed predictive model. Various classification techniques are used, such as Decision Tree, Logistic Regression, Naive Bayes, Random Forest, and Multilayer Perceptron. Moreover, an ensemble method is used to resolve the trade-off between bias and variance, and two proposed ensemble methods are compared in the experiments. The proposed model is an ensemble classifier of Multilayer Perceptron, Decision Tree, and Random Forest classifiers, and it achieves an accuracy of 83.16%.
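As a sketch of how such an ensemble combines its members' outputs, hard majority voting is one common scheme (the paper's exact combination rule may differ; labels below are illustrative):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by majority vote.
    predictions: a list of equal-length label lists, one per classifier."""
    combined = []
    for labels in zip(*predictions):          # labels for one student across classifiers
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Hypothetical per-classifier predictions for four students:
mlp  = ["pass", "fail", "pass", "pass"]
tree = ["pass", "fail", "fail", "pass"]
rf   = ["fail", "fail", "pass", "pass"]
final = majority_vote([mlp, tree, rf])        # -> ['pass', 'fail', 'pass', 'pass']
```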

Author 1: Mohamed Farouk Yacoub
Author 2: Huda Amin Maghawry
Author 3: Nivin A Helal
Author 4: Sebastian Ventura
Author 5: Tarek F. Gharib

Keywords: Educational data mining; students’ performance; classification; feature selection; machine learning

PDF

Paper 102: A Comparative Performance of Optimizers and Tuning of Neural Networks for Spoof Detection Framework

Abstract: Securing speaker verification systems has been challenging and has been explored by many researchers over the past five years. The security of these systems is compromised by natural-sounding synthetic speech and the handiness of recording devices. In a spoof detection system, the back-end classifier plays an integral role in differentiating spoofed speech from genuine speech. This work conducts an experimental analysis and comparison of up-to-date optimization techniques for a modified Convolutional Neural Network (CNN) architecture, the Light CNN (LCNN). The network is standardized by exploring various optimizers, such as Adaptive Moment Estimation and other adaptive algorithms, Root Mean Square Propagation, and Stochastic Gradient Descent (SGD), for the spoof detection task. Furthermore, activation functions and learning rates are also tested to investigate the hyperparameter configuration for faster convergence and improved training accuracy. The countermeasure systems are trained and validated on the ASVspoof 2019 dataset with Logical Access (LA) and Physical Access (PA) attack data. The experimental results show that the optimizers perform better for LA attacks than for PA attacks. Additionally, the lowest Equal Error Rate (EER) of 9.07 is obtained with softmax activation and SGD with momentum for the LA attack, and 9.951 with SGD with Nesterov momentum for the PA attack.
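The SGD-with-momentum update that the paper tunes can be sketched on a toy quadratic objective (the learning rate, momentum, and objective below are illustrative, not the paper's settings):

```python
def sgd_momentum(grad_fn, w0, lr=0.1, mu=0.9, steps=300):
    """Minimize a scalar objective with SGD + momentum:
    v <- mu*v - lr*grad(w);  w <- w + v."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad_fn(w)
        w = w + v
    return w

# Minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3):
w_star = sgd_momentum(lambda w: 2 * (w - 3), w0=0.0)
```

With momentum, the iterate overshoots and oscillates around the minimum before settling, which is why adaptive variants and learning-rate tuning matter in practice.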

Author 1: Ankita Chadha
Author 2: Azween Abdullah
Author 3: Lorita Angeline

Keywords: Spoof detection; speech synthesis; voice conversion; convolutional neural networks; optimizers; gradient descent algorithm; spoofed speech; automatic speaker verification

PDF

Paper 103: Hybrid Deep Learning Approach for Sentiment Classification of Malayalam Tweets

Abstract: Social media content in regional languages is expanding day by day. People use different social media platforms to express their suggestions and thoughts in their native languages. Sentiment Analysis (SA) is the procedure for identifying the hidden sentiment in sentences and categorizing it as positive, negative, or neutral. SA of Indian languages is challenging due to the unavailability of benchmark datasets and lexical resources. The analysis has been done using lexicon-based, Machine Learning (ML), and Deep Learning (DL) techniques. In this work, baseline models and hybrid models of Deep Neural Network (DNN) architectures are used to classify Malayalam tweets as positive, negative, or neutral. Since a sentiment-tagged dataset for Malayalam is not readily available, the analysis was done on a manually created dataset and a translated Kaggle dataset. The hybrid models used in this study combine Convolutional Neural Networks (CNN) with variants of Recurrent Neural Networks (RNN): Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Gated Recurrent Unit (GRU). All these hybrid models improve the performance of Sentiment Classification (SC) compared to the baseline LSTM, Bi-LSTM, and GRU models.

Author 1: Soumya S
Author 2: Pramod K V

Keywords: Bi-LSTM; CNN; NLP; Malayalam; Twitter

PDF

Paper 104: IoDEP: Towards an IoT-Data Analysis and Event Processing Architecture for Business Process Incident Management

Abstract: IoT is becoming a hot spot of technological innovation and promises economic development for many industries and services. This paradigm shift affects all enterprise architecture layers, from infrastructure to business. Business Process Management (BPM) is one field affected by this new technology. To cope with the explosion of data and events resulting from, among others, IoT, data analytics processes combined with event processing techniques examine large data sets to uncover hidden patterns and unknown correlations between collected events, either at a very technical level (incident/anomaly detection, predictive maintenance) or at the business level (customer preferences, market trends, revenue opportunities), in order to provide improved operational efficiency, better customer service, and competitive advantages over rival organizations. To capitalize on the business value of the data and events generated by IoT sensors, IoT, data analytics, and BPM need to meet in the middle. In this paper, we propose an end-to-end IoT-BPM integration architecture (IoDEP: IoT-Data-Event-Process) for proactive business process incident management. A case study is presented, and the results obtained from our experiments demonstrate the benefit of our approach and confirm the efficiency of our assumptions.

Author 1: Abir Ismaili-Alaoui
Author 2: Karim Baina
Author 3: Khalid Benali

Keywords: Business process management; internet of things; machine learning; complex event processing; data analytics

PDF

Paper 105: Route Planning using Wireless Sensor Network for Garbage Collection in COVID-19 Pandemic

Abstract: Garbage collection is a responsibility faced by all cities and, if not properly carried out, can generate greater costs or sanitary problems. Considering the sanitary situation due to the COVID-19 pandemic, sanitary safety measures must be taken to prevent its spread. The challenge of the present work is to provide an efficient and effective solution that guarantees garbage collection that optimizes the use of resources and prioritizes garbage containers located in or near contagion risk zones. To this end, this research proposes the integration of a basic garbage monitoring system, consisting of a wireless sensor network, with a route planning system that decomposes the Vehicle Routing Problem into the subproblems of clustering and sequencing of containers using the K-Means and Ant Colony algorithms. For garbage monitoring, a significant reduction in the measurement error of the waste level in the containers was achieved compared to other authors. For route planning, adequate error ranges were obtained in the calculation of the optimal distance-traveled and travel-time indicators with respect to an exhaustive enumeration of routes.
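The clustering subproblem can be sketched with plain Lloyd's K-Means over container coordinates (illustrative only; the paper pairs this with Ant Colony sequencing of each cluster, which is omitted here, and the coordinates are invented):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's K-Means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)         # initialize from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two visually obvious groups of containers:
containers = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
cents, groups = kmeans(containers, k=2)
```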

Author 1: Javier E. Ramirez
Author 2: Caleb M. Santiago
Author 3: Angelica Kamiyama

Keywords: K-Means; ant colony optimization; route planning; vehicle routing problem; garbage collection; wireless sensor network

PDF

Paper 106: Empirical Analysis of Learning-based Malware Detection Methods using Image Visualization

Abstract: Malware, short for malicious software, is an emerging cyber threat. Various researchers have proposed ways to build advanced malware detectors that can mitigate threat actors and enable effective cybersecurity decisions. Recent research implements malware detectors based on visualized images of malware executable files. In this framework, a malware binary is converted into an image; by extracting image features and applying machine learning methods, the malware is identified based on image similarity. In this work, we implement the image-visualization-based malware detection method and conduct an empirical analysis of various learners to select a candidate classifier with better prediction performance. We evaluate our framework on the following malware datasets: Search And RetrieVAl of Malware (SARVAM), the Xue dataset, and the Canadian Institute for Cybersecurity (CIC) datasets. Our experiments include the following learning algorithms: Linear Regression, Random Forest, K-Nearest Neighbor (KNN), Classification and Regression Tree (CART), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and the deep learning-based Convolutional Neural Network (CNN). The image-visualization-based method proves effective in terms of prediction accuracy. Our initial study finds that the CNN provides relatively better performance on SARVAM and the other malware datasets, achieving an F1-score and accuracy of 95.70% and 99.50%, respectively, in the binary classification task, and 95.96% and 99.30%, respectively, in the multi-class task of detecting malware types. Among the traditional classifiers, KNN performs best.
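The binary-to-image conversion at the core of this framework can be sketched as follows (the row width and sample bytes are arbitrary choices for illustration):

```python
def bytes_to_image(data, width=16):
    """Reshape a byte string into a 2-D grayscale image (a list of pixel rows),
    zero-padding the last row - the core step of image-based malware visualization."""
    pixels = list(data)                       # each byte becomes one 0-255 pixel
    if len(pixels) % width:
        pixels += [0] * (width - len(pixels) % width)
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

# 33 bytes -> padded to 40 -> a 5x8 grayscale image:
img = bytes_to_image(b"\x4d\x5a\x90" * 11, width=8)
```

Once every executable is rendered this way, standard image classifiers (KNN, SVM, CNN, etc.) can be applied to the pixel grids directly.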

Author 1: Abdullah Sheneamer
Author 2: Essa Alhazmi
Author 3: James Henrydoss

Keywords: Malware detection; malware analysis; deep learning; machine learning; malware features

PDF

Paper 107: Eye-movement Analysis and Prediction using Deep Learning Techniques and Kalman Filter

Abstract: Eye movement analysis has gained significant attention from the eye-tracking research community, particularly for real-time applications. Eye movement prediction is predominantly required to compensate for sensor lag. Previously introduced eye-movement approaches focused on classifying eye movements into two categories: saccades and non-saccades. Although these approaches are practical and relatively simple, they conflate fixations and smooth pursuit by placing both in the non-saccadic category. Moreover, eye movement analysis has been integrated into different applications, including psychology, neuroscience, human attention analysis, industrial engineering, marketing, and advertising. This paper introduces a low-cost eye-movement analysis system that uses Convolutional Neural Network (CNN) techniques and the Kalman filter to estimate and analyze eye position. The experimental results reveal that the proposed system can accurately classify and predict eye movements and detect pupil position in frames, independently of face tracking and detection. Additionally, the results reveal that the overall performance of the proposed system is more efficient and effective compared to a Recurrent Neural Network (RNN).
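A scalar Kalman filter of the kind used here for position smoothing can be sketched as follows (the process and measurement noise values, and the sample measurements, are illustrative; the paper's filter operates on 2-D pupil coordinates):

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a constant-position model:
    predict (uncertainty grows by q), then correct with each noisy measurement."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                    # predict: process noise inflates uncertainty
        k = p / (p + r)              # Kalman gain: trust in the new measurement
        x = x + k * (z - x)          # correct the estimate with the residual
        p = (1 - k) * p              # corrected estimate is less uncertain
        estimates.append(x)
    return estimates

# Noisy position readings hovering around 1.0:
smooth = kalman_1d([1.2, 0.9, 1.1, 1.0, 0.95])
```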

Author 1: Sameer Rafee
Author 2: Xu Yun
Author 3: Zhang Jian Xin
Author 4: Zaid Yemeni

Keywords: Eye Movement Classification; Eye Movement Prediction; Convolutional Neural Network (CNN); Recurrent Neural Network (RNN)

PDF

Paper 108: Independent Channel Residual Convolutional Network for Gunshot Detection

Abstract: The main purpose of this work is to propose a robust approach for detecting dangerous sound events (e.g., gunshots) to improve recent surveillance systems. Although the detection and classification of sound events has a long history in signal processing, the analysis of environmental sounds is still challenging. Most recent works prefer a time-frequency 2-D representation of sound as the input to convolutional neural networks. This paper includes an analysis of known architectures as well as a newly proposed Independent Channel Residual Convolutional Network architecture based on standard residual blocks. Our approach processes three different types of features in individual channels. The UrbanSound8k and Free Firearm Sound Library audio datasets are used to generate training and testing data, achieving a 98% F1 score. The model was also evaluated in the wild on a manually annotated movie audio track, achieving a 44% F1 score, which, while modest, is still better than other state-of-the-art techniques.
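The residual blocks mentioned above implement a skip connection, output = F(x) + x, so each block only has to learn the change to apply to its input; a toy sketch with a plain list standing in for a feature map (not the paper's network):

```python
def residual_block(x, transform):
    """Residual (skip) connection: add the block's transform back onto its input."""
    return [xi + ti for xi, ti in zip(x, transform(x))]

# A stand-in "learned" transform that halves each feature:
out = residual_block([1.0, 2.0, 3.0], lambda v: [0.5 * xi for xi in v])  # -> [1.5, 3.0, 4.5]
```

In the proposed architecture, three such residual stacks run in independent channels, one per input feature type, before their outputs are merged.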

Author 1: Jakub Bajzik
Author 2: Jiri Prinosil
Author 3: Roman Jarina
Author 4: Jiri Mekyska

Keywords: Acoustic signal processing; gunshot detection systems; audio signal analysis; machine learning; deep learning; residual networks

PDF

Paper 109: Improving Intrusion Detection for Imbalanced Network Traffic using Generative Deep Learning

Abstract: Network security has become a serious issue since networks are vulnerable and subject to increasing intrusive activity. Network intrusion detection systems (IDSs) are therefore an essential component of the defense against such activity. One of the biggest issues encountered by IDSs is the class imbalance problem, which biases the performance of most machine learning models toward normal activities (the majority class). Several techniques have been proposed to overcome the class imbalance problem, such as resampling, cost-sensitive learning, and ensemble learning. Other issues with intrusion detection data include mixed data types and non-Gaussian, multimodal distributions. In this study, we employed a conditional tabular generative adversarial network (CTGAN) model with common machine learning algorithms to construct more effective detection systems while addressing the imbalance issue. CTGAN can generate samples of the minority class during training to make the dataset more balanced. To assess the effectiveness of the proposed IDS, we combined CTGAN with three machine learning algorithms: support vector machine (SVM), K-nearest neighbor (KNN), and decision tree (DT). Several experiments were conducted on the imbalanced NSL-KDD dataset. The results showed that CTGAN can improve the performance of imbalanced learning for intrusion detection with SVM and DT. KNN, on the other hand, showed no improvement, since it is less sensitive to the class imbalance problem. Moreover, the results showed that CTGAN captures the distribution of discrete features better than that of continuous features.
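CTGAN synthesizes new minority-class rows; its effect on class balance can be illustrated with the much simpler random oversampling baseline (a stand-in, not the paper's method; the toy dataset is invented):

```python
import random
from collections import Counter

def oversample_minority(X, y, seed=0):
    """Balance a binary dataset by randomly duplicating minority-class samples
    until both classes have the same count."""
    rng = random.Random(seed)
    counts = Counter(y)
    minority = min(counts, key=counts.get)
    majority = max(counts, key=counts.get)
    deficit = counts[majority] - counts[minority]
    pool = [x for x, label in zip(X, y) if label == minority]
    X_bal = list(X) + [rng.choice(pool) for _ in range(deficit)]
    y_bal = list(y) + [minority] * deficit
    return X_bal, y_bal

X = [[0], [1], [2], [3], [4]]
y = [0, 0, 0, 0, 1]          # class 1 is the minority (attack traffic)
X_bal, y_bal = oversample_minority(X, y)
```

Unlike this baseline, CTGAN generates *new* synthetic rows rather than duplicates, which is what lets it model mixed-type, multimodal tabular data.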

Author 1: Amani A. Alqarni
Author 2: El-Sayed M. El-Alfy

Keywords: Intrusion detection; machine learning; imbalance learning; conditional tabular generative adversarial networks

PDF

Paper 110: A Machine Learning Model for the Diagnosis of Coffee Diseases

Abstract: The growing and marketing of coffee is an important source of economic resources for many countries, especially those with economies dependent on agricultural production, as is the case of Colombia. Although the country has done a great deal of research to develop the sector, most of its cultivation is carried out by small coffee-growing families without a high degree of technology, and without major resources to access it. The quality of the coffee bean is highly sensitive to diverse diseases related to environmental conditions, fungi, bacteria, and insects, which directly and strongly affect the economic income of the entire production chain. In many cases the diseases are transmitted rapidly, causing great economic losses, so a quick and reliable diagnosis would have an immediate effect on reducing them. In this sense, this research advances the development of an embedded system based on machine learning that is capable of performing on-site diagnoses by untrained personnel while taking advantage of the know-how of expert coffee growers. Such a system seeks to capture the visual characteristics of the most common plant diseases on low-cost, robust, and highly reliable hardware. We identified a deep network architecture with high performance in disease categorization and adjusted the hyperparameters of the model to maximize its characterization capacity without incurring overfitting. The prototype was evaluated in the laboratory on real plants with recognized disease cases, in tests that matched the performance on the model's validation dataset.

Author 1: Fredy Martinez
Author 2: Holman Montiel
Author 3: Fernando Martinez

Keywords: Cercospora Coffeicola; convolutional neural network; coffee leaf miner; coffee leaf rust; deep learning; image processing; phoma leaf spot

PDF

Paper 111: Deep Learning-based Hybrid Model for Efficient Anomaly Detection

Abstract: It is common among security organizations to analyze process system call trace data to predict anomalous behavior, and this is still an active area of study. Since it is a typical pattern recognition problem, learning-based algorithms can be employed to solve it. With the rapid progress of operating systems, some datasets have become outdated and irrelevant. System call datasets such as the Australian Defence Force Academy Linux Dataset (ADFA-LD) are among the current cohort containing labeled system call traces for normal and malicious processes across various applications. In this paper, we propose a hybrid deep learning-based anomaly detection system. To improve the detection accuracy and efficiency of anomaly detection systems, a Convolutional Neural Network (CNN) is combined with Long Short-Term Memory (LSTM). The raw system call trace sequence is first fed to the CNN, reducing the trace's dimensionality. The reduced trace vector is then fed to the LSTM network to learn the sequences of system calls and produce the final detection outcome. TensorFlow-GPU was used to implement and train the hybrid model, which was evaluated on the ADFA-LD dataset. Experimental results showed that the proposed method reduces training time while enhancing the anomaly detection rate, thereby lowering false alarm rates.

Author 1: Frances Osamor
Author 2: Briana Wellman

Keywords: Anomaly detection; system call sequence; convolution neural network; long short term memory

PDF

Paper 112: An Ensemble Deep Learning Approach for Emotion Detection in Arabic Tweets

Abstract: Nowadays people use social media websites for different activities such as business, entertainment, following the news, and expressing their thoughts and feelings. This has generated great interest in analyzing and mining such user-generated content. In this paper, the problem of emotion detection (ED) in Arabic text is investigated by proposing an ensemble deep learning approach that analyzes user-generated text from Twitter in terms of the emotional insights reflecting different feelings. The proposed model is based on three state-of-the-art deep learning models. Two are special types of Recurrent Neural Networks (RNNs), Bi-LSTM and Bi-GRU, and the third is a pre-trained language model (PLM) based on BERT, called the MARBERT transformer. The experiments were evaluated on the SemEval-2018-Task1-Ar-Ec dataset, published for the multilabel Emotion Classification (EC) task of the SemEval-2018 competition. MARBERT is compared to AraBERT, one of the best-known PLMs for the Arabic language. Experiments showed that MARBERT achieved better results, with improvements of 4%, 2.7%, 4.2%, and 3.5% in Jaccard accuracy, recall, macro F1, and micro F1 scores, respectively. Moreover, the proposed ensemble model outperformed the individual models (Bi-LSTM, Bi-GRU, and MARBERT). It also outperforms the most recent related work, with an improvement ranging from 0.2% to 4.2% in accuracy and from 5.3% to 23.3% in macro F1 score.
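The Jaccard accuracy used to score this multilabel task averages the per-tweet overlap between predicted and true label sets; a minimal sketch (the emotion labels below are invented for illustration):

```python
def jaccard_accuracy(y_true, y_pred):
    """Multilabel Jaccard accuracy: mean of |T ∩ P| / |T ∪ P| over samples
    (scored 1.0 when both label sets are empty)."""
    scores = []
    for t, p in zip(y_true, y_pred):
        t, p = set(t), set(p)
        scores.append(1.0 if not (t | p) else len(t & p) / len(t | p))
    return sum(scores) / len(scores)

# Two tweets: one predicted perfectly, one missing a label:
score = jaccard_accuracy([{"joy"}, {"anger", "fear"}], [{"joy"}, {"anger"}])
```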

Author 1: Alaa Mansy
Author 2: Sherine Rady
Author 3: Tarek Gharib

Keywords: Deep learning; emotion detection; transformers; RNNs; Bi-LSTM; Bi-GRU

PDF

Paper 113: Chatbots for the Detection of Covid-19: A Systematic Review of the Literature

Abstract: At present, the development of chatbots is one of the key activities for the diagnosis of Covid-19. The aim is to understand how these chatbots operate in the health area to make the respective diagnosis. The purpose of the research is to determine the state of the art on the use of chatbots and their impact on Covid-19 diagnosis during the last two years. The data sources consulted are IEEE Xplore, Taylor & Francis Online, ProQuest, World Wide Science, Science Direct, Microsoft Academic, Google Scholar, ACM Digital Library, Wiley Online Library, and ETHzurich. The search strategy identified 5701 papers, of which 101 were selected through 8 selection criteria and 7 quality assessments. This review discusses the methodologies used for chatbot development, i.e., the purposes and impact of using chatbots for Covid-19 diagnosis. In addition, it presents results on how important the development and implementation of chatbots in the health area are in the face of this pandemic.

Author 1: Antony Albites-Tapia
Author 2: Javier Gamboa-Cruzado
Author 3: Junior Almeyda-Ortiz
Author 4: Alberto Moreno Lázaro

Keywords: Covid-19 diagnosis; chatbot; NLP; digital assistants; health; systematic review

PDF

Paper 114: An Enhanced Genetic Algorithm (EGA)-based Multi-Hop Path for Energy Efficient in Wireless Sensor Network (WSN)

Abstract: Wireless Sensor Networks (WSNs) encounter a number of performance issues. In a WSN, the majority of the energy is used to transfer data from sensor nodes to a central station or base station (BS). Many different routing protocols have therefore been devised to help distribute data in WSNs. Thanks to recent improvements in wireless communication and networking technology, large-scale networks have been designed with minimal power consumption and multipurpose processing. For the time being, despite advances in energy harvesting, sensor energy remains a restricted resource when constructing routing protocols between sensor nodes and the base station. For wireless sensor networks with far-flung cluster heads (CHs) and base stations, direct transmission is a critical component, since it affects the network's efficiency in terms of power consumption and lifespan. This study investigates a new approach (OMPFM) for identifying an effective multi-hop route between a source (CH) and a destination (BS) in order to decrease power consumption and hence increase the lifetime of the network. The suggested technique uses a genetic algorithm and a novel fitness metric to discover the best route. For selecting CHs and enhancing the speed and quality of the generated chromosomes, we propose two pre-processes, called CH selection and Chromosome Quality Improvement (CHI). The proposed method is evaluated and compared to others of its kind using the MATLAB simulator. It outperforms LEACH, GCA, EAERP, GAECH, and HiTSeC on the first-node-die metric by 35%, 34%, 26%, 19%, and 50%, respectively, and outperforms the other methods by 100% on the last-node-die metric.
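The reason multi-hop relaying saves energy, and the GA selection step that exploits it, can be sketched as follows (the squared-distance energy model, node coordinates, and tournament selection are illustrative assumptions, not the paper's fitness metric):

```python
import math
import random

def path_energy(path, coords):
    """Energy proxy for a multi-hop route: the sum of squared hop distances
    (radio transmit energy grows roughly with the square of the distance)."""
    return sum(math.dist(coords[a], coords[b]) ** 2
               for a, b in zip(path, path[1:]))

def tournament_select(population, coords, rng, k=2):
    """GA selection: pick the lowest-energy path among k random candidates."""
    return min(rng.sample(population, k), key=lambda p: path_energy(p, coords))

coords = {"CH": (0, 0), "A": (1, 1), "BS": (2, 2)}
direct = ["CH", "BS"]          # one long hop: energy 8
relayed = ["CH", "A", "BS"]    # two short hops via relay A: energy 4
```

Splitting one hop of length d into two hops of length d/2 halves the squared-distance energy, which is why the GA tends to evolve multi-hop chromosomes.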

Author 1: Battina Srinuvasu Kumar
Author 2: S. G. Santhi
Author 3: S. Narayana

Keywords: Cluster head; energy efficient; multi-hop path; enhanced genetic algorithm; wireless sensor network (WSN)

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org