The Science and Information (SAI) Organization

IJACSA Volume 12 Issue 12

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: Machine Learning Augmented Breast Tumors Classification using Magnetic Resonance Imaging Histograms

Abstract: At present, the breast cancer survival rate varies significantly with the stage at which the disease is first detected. It is crucial to achieve early detection of malignant tumors to reduce their negative effects. Magnetic resonance imaging (MRI) is currently an important imaging modality in the detection of breast tumors, and a need exists to develop computer-aided methods that provide early diagnosis of malignancy. In this study, I present machine learning models utilizing new image histogram features based on the pixels' least significant bits. The models were first trained on an MRI breast dataset of 227 images captured using the short TI inversion recovery (STIR) sequence and diagnosed as either benign or malignant. Three classification methods were used to differentiate between the tumor classes: Discriminant Analysis (DA), K-Nearest Neighbors (KNN), and Random Forest (RF). Testing was performed on a completely separate dataset of another 186 MRI STIR images showing breast tumors with biopsy-verified diagnoses. Significant tumor classification efficiency was found, as judged against the pathological diagnosis. Classification accuracy was 94.1% for the DA, 94.6% for the KNN, and 80.6% for the RF algorithm. Receiver operating characteristic curves also showed significant classification performance. The proposed tumor classification techniques can serve as non-invasive and fast diagnostic tools for breast tumors, with the capability of significantly reducing the errors associated with common MRI-based diagnosis.
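As a rough illustration of the classification stage described above, the sketch below trains the three classifier families (DA, KNN, RF) and compares their accuracies. All data here are randomly generated stand-ins for the histogram feature vectors; the paper's actual least-significant-bit feature extraction from STIR images is not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-image histogram feature vectors (32 bins);
# the paper derives its features from the pixels' least significant bits.
n_images, n_bins = 227, 32
X_benign = rng.normal(0.4, 0.1, size=(n_images // 2, n_bins))
X_malignant = rng.normal(0.6, 0.1, size=(n_images - n_images // 2, n_bins))
X = np.vstack([X_benign, X_malignant])
y = np.array([0] * len(X_benign) + [1] * len(X_malignant))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

models = {
    "DA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(random_state=0),
}
accuracies = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    accuracies[name] = accuracy_score(y_te, model.predict(X_te))
print(accuracies)
```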

Author 1: Ahmed M. Sayed

Keywords: Tumor classification; histogram analysis; magnetic resonance imaging; breast cancer; machine learning

Paper 2: New Feature Engineering Framework for Deep Learning in Financial Fraud Detection

Abstract: Total losses through online banking fraud in the United Kingdom have increased as fraudulent techniques have progressed and adopted advanced technology. Relying on historical transaction data alone limits the discovery of the varied patterns fraudsters use. An autoencoder is well suited to discovering fraudulent actions without being hindered by the imbalanced fraud class data. Although the autoencoder model uses only the majority-class data, our hypothesis is that if the original data are enriched with various transaction-related feature vectors before being input to the autoencoder, the performance of the detection model improves. We built a new feature engineering framework that can create and select effective features for deep learning in remote banking fraud detection. Based on our proposed framework [19], new features were created using feature engineering methods, and effective features were selected based on their importance. In the experiment, we used a real-life transaction dataset provided by a private bank in Europe and built autoencoder models with three types of datasets: the original data, the created features, and the selected effective features. We also adjusted the threshold values (1 and 4) in the autoencoder and evaluated them with the different types of datasets. The results demonstrate that, using the new framework, deep learning models with the selected features significantly outperform those trained on the original data.
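A minimal sketch of the threshold-based autoencoder idea: train only on the majority (legitimate) class, then flag transactions whose reconstruction error exceeds a threshold. A small sklearn MLP stands in for the paper's deep autoencoder, and all data and numbers below are synthetic and illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic transactions: legitimate (majority) vs. fraudulent (minority)
legit = rng.normal(0.0, 1.0, size=(500, 8))
fraud = rng.normal(4.0, 1.0, size=(20, 8))

scaler = StandardScaler().fit(legit)
legit_s, fraud_s = scaler.transform(legit), scaler.transform(fraud)

# Train the autoencoder on legitimate transactions only, as in the paper;
# the 8 -> 4 -> 8 bottleneck forces it to learn the majority-class structure.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
ae.fit(legit_s, legit_s)

def reconstruction_error(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

# Flag transactions whose reconstruction error exceeds a threshold
# (cf. the paper's threshold values of 1 and 4)
threshold = 1.0
flags_fraud = reconstruction_error(fraud_s) > threshold
flags_legit = reconstruction_error(legit_s) > threshold
print(flags_fraud.mean(), flags_legit.mean())
```

Out-of-distribution transactions reconstruct poorly, so the flagged fraction should be much higher for the fraud group.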

Author 1: Chie Ikeda
Author 2: Karim Ouazzane
Author 3: Qicheng Yu
Author 4: Svetla Hubenova

Keywords: Financial fraud; online banking; feature engineering; unbalanced class data; deep learning; autoencoder

Paper 3: Multifractal Analysis of Heart Rate Variability by Applying the Wavelet Transform Modulus Maxima Method

Abstract: The analysis of heart rate variability is based on the intervals between successive heartbeats; from it, information about a person's functional state can be obtained and the dynamics of its change can be traced. Nonlinear dynamics methods provide additional prognostic information about the patient's health, complementing traditional analyses, and are considered potentially promising tools for assessing heart rate variability. In this article, studies were carried out to identify the mono- and multifractal properties of heart rate data from two groups of people, healthy controls and patients with arrhythmia, using the Wavelet Transform Modulus Maxima method. The results show that for healthy subjects the multifractal spectrum is broader than the spectrum of patients with arrhythmia. The value of the Hurst exponent is lower in healthy controls, while in patients with arrhythmia this parameter tends to one. For healthy subjects the scaling exponent showed nonlinear behaviour, while for patients with arrhythmia it was linear. This indicates that heart rate variability in healthy controls has multifractal behaviour, whereas in patients with arrhythmia it is monofractal. These findings may be useful in diagnosing subjects with cardiovascular disease, as well as in predicting future disease, since heart rate variability changes at the slightest deviation in a subject's health status before the onset of the relevant signs of disease.
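The paper estimates the Hurst exponent via the Wavelet Transform Modulus Maxima method; as a simpler, related illustration, the sketch below estimates it by rescaled-range (R/S) analysis and shows how an uncorrelated series (H near 0.5) separates from a strongly persistent one (H near 1).

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D series by rescaled-range (R/S)
    analysis: the slope of log(R/S) versus log(window size)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            seg = x[start:start + size]
            dev = np.cumsum(seg - seg.mean())  # cumulative deviation profile
            r = dev.max() - dev.min()          # range of the profile
            s = seg.std()
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(2)
white = rng.normal(size=4096)               # uncorrelated: H close to 0.5
walkish = np.cumsum(rng.normal(size=4096))  # strongly persistent: H toward 1
print(hurst_rs(white), hurst_rs(walkish))
```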

Author 1: Evgeniya Gospodinova
Author 2: Galya Georgieva-Tsaneva
Author 3: Penio Lebamovski

Keywords: RR time series; heart rate variability; wavelet transform modulus maxima method; monofractal; multifractal

Paper 4: Neural Network Model for Artifacts Marking in EEG Signals

Abstract: Electroencephalography (EEG) is one of the main methods for studying the holistic activity of the human brain. However, eye movements, blinks, heart activity, and muscle activity produce artifacts that interfere with the cerebral activity recorded in the EEG signal. The paper describes the development of an intelligent neural network model aimed at detecting artifacts in EEG signals. A series of experiments was conducted to investigate the performance of different neural network architectures for the task of artifact detection, and performance rates for the different ML methods were obtained. A neural network model based on the U-net architecture with recurrent network elements was developed. The system detects artifacts in 128-channel EEG signals with 70% accuracy and can be used as an auxiliary instrument for EEG signal analysis.

Author 1: Olga Komisaruk
Author 2: Evgeny Nikulchev

Keywords: Artifacts in EEG signal; neural network model; recurrent neural network; U-net architecture

Paper 5: Changing Communication Path to Maintain Connectivity of Mobile Robots in Multi-Robot System using Multistage Relay Networks

Abstract: Mobile robots are increasingly used to gather information from disaster sites and prevent further damage in disaster areas. Previous studies discussed a multi-robot system that uses a multistage relay backbone network to gather information in a closed space after a disaster. In this system, the mobile robot explores its search range by switching the connected nodes. It is necessary to maintain the communication quality required for teleoperation of the mobile robot and to send and receive packets between the operator PC and the mobile robot. However, the mobile robot can become isolated when it cannot maintain the required communication quality in the communication path after changing nodes. This paper proposes a method to change the communication path of a mobile robot while maintaining its communication connectivity. In the proposed method, the mobile robot changes its route while maintaining communication connectivity, without any communication loss time, by connecting to two nodes.
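The make-before-break idea in the last sentence can be sketched as follows: the robot connects to the next relay node before releasing the current one, so at least one link is up at every moment. This is a toy model with hypothetical node names, not the authors' implementation.

```python
class RobotLink:
    """Make-before-break node switching for a teleoperated mobile robot:
    connect to the new relay node first, then drop the old one, so the
    set of live connections is never empty (no communication loss time)."""

    def __init__(self, first_node):
        self.connected = {first_node}
        self.log = [set(self.connected)]  # history of connection states

    def switch(self, new_node):
        self.connected.add(new_node)          # connect to the new node first
        self.log.append(set(self.connected))  # transient dual connection
        self.connected = {new_node}           # then release the old node
        self.log.append(set(self.connected))

link = RobotLink("node_A")
link.switch("node_B")
link.switch("node_C")
print(all(len(state) >= 1 for state in link.log))
```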

Author 1: Ryo Odake
Author 2: Kei Sawai

Keywords: Multi-robot; multistage relay network; communication connectivity; changing communication path

Paper 6: A Conceptual Design Framework based on TRIZ Scientific Effects and Patent Mining

Abstract: Conceptual design is a critical initial design stage that involves both technical and creative thinking to develop and derive concept solutions that meet design requirements. TRIZ Scientific Effects (TRIZSE) is one of the TRIZ tools; it utilizes a database of functional, transformation, and parameterization scientific effects to provide conceptual solutions to engineering and design problems. Although TRIZSE has been introduced to help engineers solve design problems in the conceptual design phase, the current TRIZSE database presents general scientific concept solutions with only a few patent-based example solutions, which are very abstract and have not been updated since its introduction. This research work derives a novel framework that integrates TRIZ scientific effects with current patent information (USPTO) using data mining techniques, to develop a better design support tool that assists engineers in deriving innovative design concept solutions. The framework provides better, updated, relevant, and specific examples of conceptual design ideas from patents. The research used Python as the base programming platform to develop a conceptual design software prototype based on this new framework, in which both the TRIZSE database and the patents database (USPTO) are searched and processed in order to build a Doc2Vec similarity model. A case study on the corrosion of copper pipelines by seawater is presented to validate the framework, and results from the novel TRIZSE database and patent examples are presented and discussed. The results of the case study indicate that the Doc2Vec model is able to perform its intended similarity queries, and the patent examples returned warrant further consideration in conceptual design activities.
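The similarity-query step can be illustrated with a TF-IDF/cosine stand-in for the paper's Doc2Vec model. The corpus and query below are invented miniatures of TRIZSE entries and USPTO abstracts, loosely echoing the copper-pipeline corrosion case study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative corpus standing in for TRIZSE entries and USPTO patent
# abstracts; the paper builds a Doc2Vec model instead of this TF-IDF proxy.
corpus = [
    "sacrificial anode protects copper pipeline from seawater corrosion",
    "thermal expansion joint for steel bridge structures",
    "coating composition inhibiting corrosion of metal pipes in saline water",
    "gear transmission for wind turbine drive train",
]
query = ["prevent corrosion of copper pipelines exposed to seawater"]

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(corpus)
sims = cosine_similarity(vec.transform(query), doc_matrix).ravel()
ranking = sims.argsort()[::-1]  # documents ordered by similarity to the query
print(ranking)
```

The two corrosion-related documents should rank above the unrelated ones.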

Author 1: E-Ming Chan
Author 2: Ah-Lian Kor
Author 3: Kok Weng Ng
Author 4: Mei Choo Ang
Author 5: Amelia Natasya Abdul Wahab

Keywords: TRIZ; patent mining; natural language processing; product design

Paper 7: On Validating Cognitive Diagnosis Models for the Arithmetic Skills of Elementary School Students

Abstract: Cognitive diagnosis models (CDMs) have been shown to provide detailed evaluations of students' achievement in terms of proficiency in individual cognitive attributes. The attribute hierarchy model (AHM), a variant of CDM, takes into account the hierarchical structure of those cognitive attributes to provide more accurate and interpretable measurements of learning achievement. However, the advantages of the richer model come at the expense of increased difficulty in designing the hierarchy of the cognitive attributes and developing corresponding test sets. In this study, we propose quantitative tools for validating the hierarchical structures of cognitive attributes. First, a method to quantitatively compare alternative cognitive hierarchies is established by computing the inconsistency between a given cognitive hierarchy and students' responses. This method is then generalized to validate a cognitive hierarchy without real responses. Numerical simulations were performed starting from an AHM designed by experts and responses of elementary school students. Results show that the expert-designed cognitive hierarchy explains the students' responses better than most, but not all, alternative hierarchies; a superior cognitive hierarchy is identified. This discrepancy is discussed in terms of internalization of cognitive attributes.
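A toy version of the inconsistency computation might look like the following, assuming a linear three-attribute hierarchy and measuring inconsistency as the minimal Hamming distance from a response pattern to a permissible knowledge state. This is a simplification for illustration, not the paper's exact measure.

```python
from itertools import product

# Toy linear hierarchy over three attributes: A0 -> A1 -> A2
# (attribute 0 is prerequisite for 1, and 1 for 2)
prereq = {1: [0], 2: [1]}

def permissible(state):
    """A knowledge state (tuple of 0/1 mastery flags) is permissible iff
    every mastered attribute's prerequisites are also mastered."""
    return all(all(state[p] for p in prereq.get(a, []))
               for a, mastered in enumerate(state) if mastered)

states = [s for s in product([0, 1], repeat=3) if permissible(s)]
print(states)  # the states allowed by the hierarchy

def inconsistency(response, states):
    """Minimal Hamming distance from an observed response pattern to any
    permissible state -- a simple stand-in for the hierarchy/response
    inconsistency the paper computes."""
    return min(sum(r != s for r, s in zip(response, st)) for st in states)

print(inconsistency((1, 0, 1), states))
```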

Author 1: Hyejung Koh
Author 2: Wonjin Jang
Author 3: Yongseok Yoo

Keywords: Cognitive diagnosis model; attribute hierarchy model; cognitive hierarchy; model validation

Paper 8: Gabor Descriptor for Representation of Spatial Feature

Abstract: A new spatial feature descriptor based on the Gabor wavelet function is proposed and compared to the Fourier descriptor. Experimental results with an Advanced Earth Observing Satellite (ADEOS) / Advanced Visible and Near-Infrared Radiometer (AVNIR) image show the effectiveness of the proposed method. It is found that the restored image quality, in terms of the root mean square error between the original and restored images, depends on the support length of the mother wavelet and is much better than that of the conventional Fourier descriptor method for spatial feature description.
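A minimal sketch of a real-valued 2-D Gabor kernel and its orientation selectivity; the parameters are illustrative and this is not the paper's full descriptor pipeline.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued 2-D Gabor kernel: a cosine plane-wave carrier modulated
    by an isotropic Gaussian envelope, oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# Responses to a vertical-stripe pattern: the kernel tuned to horizontal
# frequency (theta = 0) should respond far more strongly than the 90-degree one.
img = np.tile(np.cos(2 * np.pi * np.arange(32) / 8.0), (32, 1))
k0 = gabor_kernel(15, 8.0, 0.0, 4.0)
k90 = gabor_kernel(15, 8.0, np.pi / 2, 4.0)

resp0 = np.abs(np.sum(img[8:23, 8:23] * k0))
resp90 = np.abs(np.sum(img[8:23, 8:23] * k90))
print(resp0 > resp90)
```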

Author 1: Kohei Arai

Keywords: Spatial feature; Gabor wavelet descriptor; Fourier descriptor; wavelet transformation; ADEOS (advanced earth observing satellite); AVNIR (advanced visible and near-infrared radiometer)

Paper 9: Comparative Analysis of National Cyber Security Strategies using Topic Modelling

Abstract: Comprehensive comparative analyses of national cyber security strategies (NCSSs) have thus far been limited or complicated by the unique nature of cybersecurity, which combines areas such as technology, industry, economy, and defense in a complex manner. This study aims to characterize the NCSSs of major countries quantitatively, considering the time series, and to identify further cybersecurity agendas for the benefit of NCSS revision in South Korea, by applying topic modelling to the analysis of eight NCSSs from the US, UK, Japan, and EU. As a result, fifteen agendas were identified and grouped into four sectors. We determined from the agenda distribution that each country approaches cybersecurity differently. Furthermore, additional agendas worthy of consideration for future NCSS revisions in South Korea were proposed, based on a comparison of the fifteen identified agendas with those of South Korea. This study is significant for cybersecurity policy in that it enables quantitative analysis in a single framework via latent Dirichlet allocation (LDA) topic modelling and derives further cybersecurity agendas for future NCSS revisions in South Korea.

Author 1: Minkyoung Song
Author 2: Dong Hee Kim
Author 3: Sunha Bae
Author 4: So-Jeong Kim

Keywords: Cybersecurity policy; national cyber security strategy (NCSS); policy analysis; quantitative analysis

Paper 10: Digital Transformation of Human Resource Processes in Small and Medium Sized Enterprises using Robotic Process Automation

Abstract: The aim of this paper was to obtain data and information on the digital transformation of human resource (HR) processes in small- and medium-sized enterprises (SMEs) with the help of robotic process automation (RPA), in order to increase competitiveness in the digital age. Romanian businesses are attempting to close the gap with companies in developed countries by implementing projects that allow the adoption of emerging technologies in HR departments. This paper presents some preliminary findings, resulting from a collaboration between a university and an SME, on the efficient implementation of specific HR processes using RPA. The paper provides a brief introduction to the RPA concept as well as a list of HR processes that can be automated within enterprises, with the benefits brought to the enterprise and employees presented in both qualitative and quantitative terms for each HR process. In addition, a case study on the automatic collection of candidates' documents and extraction of primary information about them was considered. The problems encountered during implementation are then listed, along with potential solutions. Given the benefits offered, RPA could play an important role in transitioning HR functions into the digital era.

Author 1: Cristina Elena Turcu
Author 2: Corneliu Octavian Turcu

Keywords: Robotic process automation (RPA); small- and medium-sized enterprises (SME); human resource (HR); digital HR; recruitment

Paper 11: Computerization of Local Language Characters

Abstract: The objective of this study is to provide an innovative model for language preservation. It is necessary to maintain indigenous languages in order to avoid language death, and script applications for indigenous languages are one of the solutions being pursued. Such a script application facilitates written communication between speakers of indigenous languages. The study illustrates the implementation of the Lontara script (the letters and characters of the Bugis-Makassar local language). The script application is compatible with the Microsoft Windows operating system and the Hypertext Transfer Protocol (HTTP). This study employed the research and development (R&D) approach in six stages: 1) conducting a requirements analysis to determine the viability of Bugis-Makassar indigenous languages in everyday life and ways to retain them; 2) designing and constructing the Lontara script as a hypertext-based application; 3) producing the Lontara script as a hypertext-based application; 4) validating the hypertext-based application through one-to-one, small-group, and large-group testing; 5) revising the Lontara application; and 6) delivering the Lontara application as a finished product. The product is designed to be used in conjunction with other interactive applications.

Author 1: Yusring Sanusi Baso
Author 2: Andi Agussalim

Keywords: Innovative model; language maintenance; Lontara script; Makassarese; local language; hypertext-based application

Paper 12: Trend of Bootstrapping from 2009 to 2016

Abstract: Bootstrapping, which allows unlimited processing of encrypted data, is the pedestal of fully homomorphic encryption; the technique is also the bottleneck in its practicability. From 2009 to 2016, the execution time of bootstrapping decreased from several hours to a few thousandths of a second for processing a logic gate on two encrypted bits. This paper makes a comparative study of the evolution of bootstrapping during that period. An implementation of multiplication of 16-bit integers on an Intel i7 architecture, using three schemes through their respective libraries DGHV, FHEW, and TFHE, corroborates the trend: to date the best bit-level bootstrapping is that of TFHE, which executes this multiplication in 29 seconds, improving on FHEW by a factor of 30, regardless of the multiplication algorithm used.

Author 1: Paulin Boale Bomolo
Author 2: Eugene Mbuyi Mukendi
Author 3: Simon Ntumba Badibanga

Keywords: Bootstrapping; homomorphic encryption; binary multiplication; logic gates

Paper 13: A Hybrid Similarity Measure for Dynamic Service Discovery and Composition based on Mobile Agents

Abstract: With the ever-present competition among companies, the prevalence of web services (WSs) is increasing dramatically. This leads to a diversity of similar services and their evolving nature, which makes the discovery of a relevant service during the composition phase a complex task, since most competing companies aim to discover high-quality services at minimum cost in order to increase their customers and profit. Semantic WSs allow dynamic service discovery to be performed through software entities and intelligent agents. However, existing solutions to the discovery process are limited in their performance in terms of quickness of response to a request in real time, without considering constraints such as accuracy in the discovery phase and the quality of the similarity evaluation mechanism. They are usually based on a measure of distance between concepts in the ontology, rather than considering relationships semantically and the strength of the semantic relationship between concepts in context. In this paper, we propose a novel hybrid semantic similarity method to improve the service discovery process. The hybrid method is applied to an architecture based on mobile agents, where cooperative agents are integrated to facilitate and speed up the discovery process. In the first part of the hybrid method, we combine Latent Semantic Analysis (LSA) with a semantic relatedness measure to avoid term ambiguity and obtain purely semantic relatedness at the level of the service description. The second part, called IO-MATCHING, analyzes relationships at the level of the service inputs and outputs based on subsumption reasoning. Experimental results on a real dataset demonstrate that our solution outperforms state-of-the-art approaches in terms of precision, recall, F-measure, and service discovery time.
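The LSA component can be illustrated with TF-IDF plus truncated SVD on toy service descriptions. The texts are invented, and the paper's full pipeline (the relatedness measure, IO-MATCHING, and the mobile-agent architecture) is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy service descriptions; names and texts are illustrative only.
services = [
    "book a flight ticket between two cities",
    "reserve a flight seat for a journey",
    "convert currency from dollars to euros",
]
vec = TfidfVectorizer()
tfidf = vec.fit_transform(services)

# LSA: project the TF-IDF vectors onto a low-rank latent semantic space
lsa = TruncatedSVD(n_components=2, random_state=0)
latent = lsa.fit_transform(tfidf)

sims = cosine_similarity(latent)  # service-to-service similarity in LSA space
print(sims.round(2))
```

In the latent space the two travel-booking services end up far more similar to each other than to the currency converter.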

Author 1: Naoufal EL ALLALI
Author 2: Mourad FARISS
Author 3: Hakima ASAIDI
Author 4: Mohamed BELLOUKI

Keywords: IO-MATCHING; latent semantic analysis; mobile agents; OWL-S; semantic web services; semantic similarity; semantic relatedness

Paper 14: Linear Mixed Effect Modelling for Analyzing Prosodic Parameters for Marathi Language Emotions

Abstract: Along with the linguistic message, prosody is an essential paralinguistic component of emotional speech. Prosodic parameters such as intensity, fundamental frequency (F0), and duration have been studied worldwide to understand the relationship between emotions and the corresponding prosodic features in various languages, but the Marathi language has received less attention with regard to evaluating the prosodic aspects of emotional speech. This study examines how different emotions affect suprasegmental properties such as pitch, duration, and intensity in Marathi emotional speech. It investigates the changes in prosodic features by emotion, gender, speaker, utterance, and other aspects using a database of 440 utterances in happiness, fear, anger, and neutral emotions, recorded by eleven professional Marathi artists in a recording studio. The acoustic analysis of the prosodic features was performed using Praat, a speech analysis framework. A statistical study using a two-way analysis of variance (two-way ANOVA) explores emotion, gender, and their interaction for mean pitch, mean intensity, and sentence utterance time. In addition, three distinct linear mixed-effect models (LMMs), one for each prosodic characteristic, were designed with emotion and gender as fixed-effect variables and speakers and sentences as random-effect variables. The relevance of the fixed and random effects for each prosodic variable was verified using likelihood ratio tests that assess goodness of fit. The linear mixed models for mean pitch, mean intensity, and sentence duration were examined in the R programming language on the Marathi emotional speech data.

Author 1: Trupti Harhare
Author 2: Milind Shah

Keywords: Prosodic parameters; Marathi language prosody model; two-way analysis of variance; linear mixed-effect models; R programming language

Paper 15: Low Time Complexity Model for Email Spam Detection using Logistic Regression

Abstract: Spam emails have become a serious concern on the Internet. Machine learning techniques such as neural networks, Naïve Bayes, and decision trees have frequently been used to combat spam. Despite their efficiency, time complexity in high-dimensional datasets remains a significant challenge: due to the large number of features, the intricacy of the problem grows exponentially, and existing approaches suffer a computational burden when thousands of features are used. To reduce time complexity and improve accuracy on high-dimensional datasets, extra steps of feature selection and parameter tuning are necessary. This work proposes a hybrid logistic regression model with a feature selection approach and parameter tuning that can effectively handle a high-dimensional dataset. The model employs the Term Frequency-Inverse Document Frequency (TF-IDF) feature extraction method to mitigate the drawbacks of plain Term Frequency (TF) and obtain balanced feature weights. Using publicly available datasets (Enron and Lingspam), we compared the model's performance to that of other contemporary models. The proposed model achieved low time complexity while maintaining a high spam detection rate of 99.1%.
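A minimal sketch of the TF-IDF plus logistic regression pipeline on a miniature invented corpus; the paper's feature selection and parameter tuning steps are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Miniature invented stand-in for the Enron/Lingspam corpora
emails = [
    "win a free prize claim your money now",
    "cheap loans act now limited offer",
    "meeting agenda attached for monday review",
    "please review the project report draft",
    "free money winner claim prize today",
    "lunch with the project team on friday",
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = spam, 0 = ham

# TF-IDF weighting feeds a linear classifier; logistic regression keeps
# training and prediction cheap even with large vocabularies.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["claim your free prize money now"]))
```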

Author 1: Zubeda K. Mrisho
Author 2: Jema David Ndibwile
Author 3: Anael Elkana Sam

Keywords: Machine learning; feature selection; feature extraction; parameter tuning

Paper 16: Securing Images through Cipher Design for Cryptographic Applications

Abstract: The emphasis of this work is image encoding based on permutation, together with transformations that utilize Latin cube and Latin square image ciphers, for both color and gray images. Since multimedia data are widely transmitted over networks and websites, numerous methods have been established for securing the information without compromise. Security of information is required in all areas to ensure that information retains its privacy and remains presentable for recovery and governance purposes. Data can be secured by applying the CIA triad (Confidentiality, Integrity, and Availability): confidentiality means the data are kept undisclosed from illegitimate users or sources, integrity means the data remain unaltered by unauthorized parties, and availability means resources are accessible to authorized personnel who need to retrieve the information. Authentication of a person involves identification, conservation of information, and validation of data. Implementing such authentication stores the data in the required format for exchange or transmission in internet applications, so that breaches and misuse of confidential and sensitive data can be prevented. To achieve security and maintain confidentiality, cryptographic methods are implemented.
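A toy sketch of a Latin-square row-permutation cipher on a small gray image, one ingredient of the scheme described above; the full color/Latin-cube construction is not reproduced.

```python
import numpy as np

def latin_square(n, key_row):
    """Build an n x n Latin square by cyclically shifting a key permutation:
    every row and every column then contains each symbol exactly once."""
    return np.array([np.roll(key_row, k) for k in range(n)])

def permute_rows(img, square):
    """Encrypt by permuting the pixels of image row i with row i of the
    Latin square used as a permutation of column indices."""
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        out[i] = img[i][square[i]]
    return out

def unpermute_rows(img, square):
    """Decrypt by applying the inverse of each row permutation."""
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        out[i][square[i]] = img[i]
    return out

rng = np.random.default_rng(3)
n = 8
key = rng.permutation(n)          # secret key: a permutation of 0..n-1
square = latin_square(n, key)
img = rng.integers(0, 256, size=(n, n), dtype=np.uint8)  # toy gray image

cipher = permute_rows(img, square)
restored = unpermute_rows(cipher, square)
print(np.array_equal(restored, img))
```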

Author 1: Punya Prabha V
Author 2: M D Nandeesh
Author 3: Tejaswini S

Keywords: Decryption; encryption; Latin square generator; sequence generator

Paper 17: A Review of Feature Selection Algorithms in Sentiment Analysis for Drug Reviews

Abstract: Social media contain various sources of big data, including data on drugs, diagnoses, treatments, diseases, and indications. Sentiment analysis (SA) is a technology that analyses text-based data using machine learning techniques and Natural Language Processing to interpret and classify emotions in subjective language. Data sources in the medical domain may exist in the form of clinical documents, nurses' letters, drug reviews, MedBlogs, and Slashdot interviews. It is important to analyse and evaluate these data sources to identify positive or negative sentiment that could bear on the well-being of the users or patients being treated. Sentiment analysis technology can be used in the medical domain to help identify positive or negative issues, improving the quality of health services offered to consumers. This paper reviews feature selection algorithms, sentiment classification methods, and the standard measurements used to evaluate the performance of these techniques in previous studies. Combining feature extraction techniques based on Natural Language Processing with machine learning techniques for feature selection can reduce the number of features, while selecting relevant features can improve the performance of sentiment classification. This study also describes the use of metaheuristic algorithms for feature selection in sentiment analysis, which can help achieve higher accuracy in optimal subset selection tasks. The review also identifies previous studies that applied metaheuristic algorithms for feature selection in the medical domain, especially studies that used drug review data.
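As a concrete toy of wrapper-style feature selection, the sketch below uses a simple hill climber over feature subsets, standing in for the metaheuristics the paper surveys; the data are synthetic, with only the first two features carrying signal.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
# Synthetic "review" term-count matrix: 6 features, but only the first two
# actually carry the sentiment signal.
n = 200
signal = rng.integers(0, 2, size=n)
X = rng.poisson(1.0, size=(n, 6)).astype(float)
X[:, 0] += 3 * signal
X[:, 1] += 3 * (1 - signal)
y = signal

def score(mask):
    """Cross-validated accuracy of a classifier on the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(MultinomialNB(), X[:, mask], y, cv=3).mean()

# Hill climbing over feature subsets: flip one bit at a time and keep the
# move if accuracy improves (a toy stand-in for metaheuristic search).
mask = rng.integers(0, 2, size=6).astype(bool)
best = score(mask)
improved = True
while improved:
    improved = False
    for j in range(6):
        cand = mask.copy()
        cand[j] = not cand[j]
        s = score(cand)
        if s > best:
            mask, best = cand, s
            improved = True
print(mask, round(best, 3))
```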

Author 1: Siti Rohaidah Ahmad
Author 2: Nurhafizah Moziyana Mohd Yusop
Author 3: Afifah Mohd Asri
Author 4: Mohd Fahmi Muhamad Amran

Keywords: Sentiment analysis; drug reviews; feature selection; metaheuristic

Paper 18: Detection of Covid-19 through Cough and Breathing Sounds using CNN

Abstract: Covid-19 was declared a global pandemic by the WHO due to its high infectivity rate. Medical attention is required to test and diagnose those with Covid-19-like symptoms, who must take an RT-PCR test that takes about 10-15 hours to return a result, and in some cases up to 3 days when demand is too high. The majority of cases go unnoticed because people are unwilling to get tested. The commonly used RT-PCR technique requires human contact to obtain the swab samples, there is a shortage of testing kits in some areas, and a need exists for self-diagnostic testing. This solution is a preliminary analysis. The basic idea is to use sound data, in this case cough, breathing, and speech sounds, to isolate their characteristics and deduce whether they belong to an infected person, based on the trained model's analysis. An ensemble of convolutional neural networks is used to classify samples based on cough, breathing, and speech recordings; the model also considers symptoms exhibited by the person, such as fever, cold, and muscle pain. The audio samples are pre-processed and converted into Mel spectrograms, and Mel Frequency Cepstral Coefficients (MFCCs) are obtained and fed as input to the model. The model gave an accuracy of 88.75%, with a recall of 71.42% and an area under the curve of 80.62%.
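The mel-filterbank step behind Mel spectrograms and MFCCs can be sketched in plain NumPy. The filter count and FFT size below are illustrative, and the final DCT step that turns log mel energies into MFCCs is omitted.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel filterbank mapping an FFT power spectrum
    (n_fft // 2 + 1 bins) to n_filters mel bands."""
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

sr, n_fft = 16000, 512
fb = mel_filterbank(26, n_fft, sr)

# Power spectrum of one frame of a synthetic 440 Hz tone
t = np.arange(n_fft) / sr
frame = np.sin(2 * np.pi * 440.0 * t) * np.hamming(n_fft)
power = np.abs(np.fft.rfft(frame)) ** 2
log_mel = np.log(fb @ power + 1e-10)  # log mel energies (pre-DCT MFCC step)
print(log_mel.shape)
```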

Author 1: Evangeline D
Author 2: Sumukh M Lohit
Author 3: Tarun R
Author 4: Ujwal K C
Author 5: Sai Viswa Sumanth D

Keywords: Coronavirus; cough sounds; mel frequency cepstral coefficients; convolutional neural network; reverse transcription–polymerase chain reaction (RT-PCR)

Paper 19: An Empirical Study on Fake News Detection System using Deep and Machine Learning Ensemble Techniques

Abstract: With the revolution in electronic gadgets over the past few years, information sharing has entered a new era in which news can spread globally in a fraction of a minute, whether through yellow media or through satellite communication, without any proper authentication. At the same time, with the rise of different social media platforms, many organizations try to grab people's attention by creating fake news about celebrities, politicians or politics, branded products, and other subjects. There are three ways to generate fake news. The first is tampering with an image using advanced morphing tools, a technique popular for posting phony information about celebrities or in cybercrimes targeting women. The second is reposting old events with new fake content injected into them: for example, some social media platforms, either to raise their TRP ratings or to grow their subscriber base, present news that happened years ago as the latest, changing the date, time, location, and other important information, and try to make it go viral across the globe. The third uses an image or video that genuinely occurred at an event or place, but the media pair it with a false claim instead of the original context. Researchers began working on fake news detection a few decades back with the help of textual data. In the recent era, a few researchers have worked on image and text data using traditional and ensemble deep and machine learning algorithms, but these either suffer from overfitting due to insufficient data or are unable to extract the complex semantic relations between documents. The proposed system designs a transfer learning environment in which Neural Style Transfer Learning takes care of the size and quality of the datasets. It also enhances the autoencoders by customizing the hidden layers to handle complex real-world problems.

Author 1: T V Divya
Author 2: Barnali Gupta Banik

Keywords: Transfer learning; GANS; glove algorithms; word2vec; ensemble techniques; auto encoders; pre-trained models; word embeddings; BERT models

PDF
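The ensemble idea named in this abstract's keywords can be illustrated with a minimal majority-vote combiner. This is a generic sketch, not the authors' pipeline; the label strings are hypothetical placeholders.

```python
from collections import Counter

def ensemble_vote(predictions):
    """Combine base-classifier labels by majority vote.

    Ties are broken in favor of the label encountered first,
    which is Counter.most_common's documented ordering."""
    return Counter(predictions).most_common(1)[0][0]
```

For example, if three base models predict `["fake", "real", "fake"]`, the ensemble emits `"fake"`.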

Paper 20: DoItRight: An Arabic Gamified Mobile Application to Raise Awareness about the Effect of Littering among Children

Abstract: Littering contributes significantly to environmental pollution. Previous studies have noted that children are more likely to litter than adults, and this age group can easily be reached through mobile applications and games. Therefore, this study investigates the effect of a gamified application in raising awareness of the effects of littering on the environment. We developed a gamified app, called DoItRight, to promote environment-friendly behavior and improve the littering behavior of children. The DoItRight app is in Arabic and targets children between 5 and 13 years old. It is a gamified application that enables kids to learn the importance of picking up litter and dropping it in trash cans. The app was evaluated using the System Usability Scale (SUS), a standardized instrument, which was administered to the target audience. The evaluation showed that the DoItRight app has an SUS score of 93.25, which represents an A+ grade and a percentile range of 96 to 100. This indicates that the DoItRight app is technically usable and can potentially serve the purpose of increasing kids' awareness of the downsides of littering on the environment.

Author 1: Ayman Alfahid
Author 2: Hind Bitar
Author 3: Mayda Alrige
Author 4: Hend Abeeri
Author 5: Eman Sulami

Keywords: Littering; mobile application; gamification; children intention; raise awareness; behavior change; Saudi Arabia

PDF
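The SUS score of 93.25 reported above follows the standard System Usability Scale formula: odd-numbered (positively worded) items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the sum is scaled by 2.5 to a 0–100 range. A minimal sketch of that calculation:

```python
def sus_score(responses):
    """Standard SUS scoring for ten 1-5 Likert responses, in item order."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten items")
    odd = sum(r - 1 for r in responses[0::2])   # items 1,3,5,7,9 (positive)
    even = sum(5 - r for r in responses[1::2])  # items 2,4,6,8,10 (negative)
    return (odd + even) * 2.5
```

A respondent answering 5 to every positive item and 1 to every negative item scores a perfect 100.0; all-neutral answers (3 throughout) score 50.0.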

Paper 21: Noise Cancellation in Computed Tomography Images through Adaptive Multi-Stage Noise Removal Paradigm

Abstract: Image de-noising removes noise from a noisy image while protecting its significant features, namely corners, edges, textures, and sharp structures. Computed tomography (CT) images are widely used for medical diagnosis, but noise introduced during acquisition and transmission in CT imaging leads to poor image quality. To overcome this problem, an efficient noise cancellation approach for computed tomography images using an adaptive multi-stage noise removal paradigm is proposed. The proposed approach consists of three phases: an optimal Discrete Wavelet Transform (DWT), first-stage noise removal using the Block Matching and 3D filtering (BM3D) filter, and second-stage noise removal using the bilateral filter (BF). Initially, the DWT is applied to the input image to diminish noise in CT images; the coefficient ranges are selected optimally with the help of the Crow Search Optimization (CSO) algorithm. Secondly, the BM3D algorithm is applied to remove the noise present in the sub-bands. Finally, the bilateral filter is applied to the BM3D output image to further enhance it. The performance of the proposed methodology is analyzed in terms of Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE), and Structural Similarity Index (SSIM). The multi-stage noise removal model achieves the best PSNR values compared with other techniques.

Author 1: Jenita Subash
Author 2: Kalaivani S

Keywords: De-noising; computed tomography; discrete wavelet transform; crow search optimization; bilateral filter

PDF
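Two of the evaluation metrics cited above, RMSE and PSNR, have standard textbook definitions that can be sketched directly. This illustration treats images as flat pixel sequences and assumes an 8-bit peak value of 255; it is a generic metric sketch, not the authors' evaluation code.

```python
import math

def rmse(ref, test):
    """Root mean square error between two equal-length pixel sequences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical inputs."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)
```

Higher PSNR means the denoised image is closer to the reference, which is why the paper reports it as its headline comparison metric.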

Paper 22: Smart Tourism Recommendation Model: A Systematic Literature Review

Abstract: The tourism industry has become a potential sector for leveraging economic growth. Many attractions are promoted on several platforms. Machine learning and data mining are promising technologies for improving tourism services by recommending specific attractions to tourists according to their location and profile. This research applied a systematic literature review to tourism, digital tourism, smart tourism, and recommender systems in tourism, aiming to evaluate the most relevant and accurate techniques focused on recommendations or similar efforts. Several research questions were defined and translated into search strings. The review yielded 41 studies discussing tourism, digital tourism, smart tourism, and recommender systems. All of the literature was reviewed on several aspects, for example the problem addressed, the methodology used, the data used, the strengths, and the limitations that present opportunities for improvement in future research. This study proposes references for further work based on the reviewed papers regarding tourism management, tourist experience, tourist motivation, and tourist recommendation systems. Further research can be conducted with more data, especially for smart recommender systems in tourism, through many types of recommendation techniques such as content-based, collaborative filtering, demographic, knowledge-based, community-based, and hybrid recommender systems.

Author 1: Choirul Huda
Author 2: Arief Ramadhan
Author 3: Agung Trisetyarso
Author 4: Edi Abdurachman
Author 5: Yaya Heryadi

Keywords: Systematic review; tourism; smart tourism; digital tourism; recommender system

PDF
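Of the recommendation families listed at the end of this abstract, content-based filtering is the simplest to sketch: score each attraction's feature vector against the tourist's profile by cosine similarity and return the top matches. The profile/attraction vectors below are hypothetical illustrations, not data from any reviewed study.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(profile, attractions, k=2):
    """Rank (name, feature-vector) pairs by similarity to the profile."""
    ranked = sorted(attractions, key=lambda item: cosine(profile, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

A tourist profile of `[1, 0]` (say, prefers beaches over museums) would rank a `[1, 0]`-featured beach above a `[0, 1]`-featured museum.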

Paper 23: Predicting Aesthetic Preferences: Does the Big-Five Matters?

Abstract: User experience is imperative for the success of interactive products. It is notably affected by user preferences: the higher the preference, the better the user experience. The way users develop their preferences is closely related to personality traits. However, there is a void in understanding the association between personality traits and aesthetic dimensions that might explain how users develop their preferences. This paper examines the relationship between the Big-Five personality traits (Openness to Experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) and the two dimensions of aesthetics (classical aesthetics and expressive aesthetics). Two hundred twenty participants completed the Big-Five questionnaire and rated their preference for each of ten images of web pages on a 7-point Likert scale. Results show that Openness to Experience, Conscientiousness, Extraversion, and Neuroticism were not significantly correlated with the aesthetic dimensions; only Agreeableness showed a significant (although weak) correlation with both classical and expressive aesthetics. The finding conforms to the literature indicating that personality traits influence the preference for individual design features rather than aesthetic dimensions. In other words, personality traits are inapt predictors of aesthetic dimensions. Therefore, more studies are needed to explore other factors that could help predict aesthetic dimensions.

Author 1: Carolyn Salimun
Author 2: Esmadi Abu bin Abu Seman
Author 3: Wan Nooraishya binti Wan Ahmad
Author 4: Zaidatol Haslinda binti Abdullah Sani
Author 5: Saman Shishehchi

Keywords: User experience; aesthetic dimensions; personality traits; big-five

PDF
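The correlations between trait scores and aesthetic ratings discussed above are typically Pearson product-moment coefficients. A minimal stdlib sketch of that statistic (the sample vectors in the test are invented, not the study's data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near +1 or -1 indicate a strong linear association; the "weak but significant" result reported for Agreeableness corresponds to a coefficient of small magnitude.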

Paper 24: An Ontology-based Decision Support System for Multi-objective Prediction Tasks

Abstract: Student profile modeling is a topic that continues to attract the interest of both academics and researchers because of its crucial role in the development of predictive and decision support systems. It provides platforms to build intelligent systems such as e-orientation, e-recruitment, recommendation, and prediction systems. The purpose of this research is to propose an ontology-based decision support system that can be used for multi-objective prediction tasks such as prediction of failure/abundance, orientation, or decision-making. Two major contributions are proposed here: a new domain ontology that models the profile of a student, and a system based on this ontology that performs multiple prediction tasks. The proposed approach relies on the efficiency of the ontology to ensure semantic interoperability and on the benefits of machine learning techniques to build an intelligent system for multipurpose decision support objectives. The proposed system uses the C5.0 decision tree algorithm, but other machine learning models can be added if they prove to be more efficient. Furthermore, the performance of the developed method, computed using standard metrics, reached 83.6% accuracy and 81.9% recall.

Author 1: Touria Hamim
Author 2: Faouzia Benabbou
Author 3: Nawal Sael

Keywords: Profile modeling; student; ontology; machine learning; academic domain

PDF
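The accuracy and recall figures quoted above follow the usual classification-metric definitions, which can be sketched in a few lines. The label vectors in the test are illustrative, not the paper's data.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive):
    """True positives over all actual positives for the given class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if tp + fn else 0.0
```

Reporting recall alongside accuracy matters for decision support: a model can be accurate overall yet still miss most at-risk students, which recall on the at-risk class would expose.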

Paper 25: Towards a New Metamodel Approach of Scrum, XP and Ignite Methods

Abstract: The agile approach is a philosophy that aims to avoid the problems of traditional management approaches. It focuses on collaboration, using iterative and incremental development; thanks to agile methodologies, the client receives a first production version (increment) of the software faster. Project needs are influenced by the rapid expansion of technologies, particularly since the emergence of the Internet of Things (IoT), and are becoming larger and more complex. IoT provides standardization and unification of electronic identities, digital entities, and physical objects, so interconnected devices can retrieve, store, send, and process data more easily from both the physical and virtual worlds. Scalable methods such as SAFe, LeSS, and SPS are existing methodologies improved for and dedicated to large projects. According to IoT enterprise teams, these methods are tough to adopt and do not consider the physical side of the project; based on their managerial and IoT expertise, such teams have suggested their own methods (Ignite | IoT Methodology and IoT Methodology). Model Driven Architecture (MDA) was coined by the Object Management Group (OMG) in 2000 to develop durable models that are independent of the technical intricacies of execution platforms. The purpose of this paper is to propose a metamodel for each of the following methodologies: Scrum, XP, and Ignite.

Author 1: Merzouk Soukaina
Author 2: Elkhalyly Badr
Author 3: Marzak Abdelaziz
Author 4: Sael Nawal

Keywords: Agile software development; scrum; extreme programming; XP; internet of things; IoT; Ignite | IoT Methodology; IoT Methodology; metamodel; MDA; OMG

PDF

Paper 26: Towards a Computational Model to Thematic Typology of Literary Texts: A Concept Mining Approach

Abstract: In recent years, computational linguistic methods have been widely used in literary studies, where they have proved useful in breaking into the mainstream of literary critical scholarship and in addressing inherent challenges long associated with such studies. These computational approaches have revolutionized literary studies through their potential for dealing with large datasets. They have bridged the gap between literary studies and computational and digital applications by integrating these applications, most notably data mining, into the way literary texts are analyzed and processed. Thus, this study seeks to use computational linguistic methods to propose a computational model for the thematic typology of literary texts. The study adopts concept mining methods using semantic annotators to generate a thematic typology of literary texts and to explore their thematic interrelationships through the arrangement of texts by topic, taking the prose fiction of Thomas Hardy as an example. Findings indicate that concept mining was useful in extracting the distinctive concepts and revealing the thematic patterns within the selected texts. These thematic patterns are best described by the categories of class conflict, Wessex, religion, female suffering, and social realities. It can finally be concluded that computational approaches, as well as scientific and empirical methodologies, are useful adjuncts to literary criticism. Nevertheless, conventional literary criticism and human reasoning remain crucial and irreplaceable by computer-assisted systems.

Author 1: Abdulfattah Omar

Keywords: Computational linguistics; concept mining; data mining; empirical methodologies; semantic annotators; text clustering; typology

PDF

Paper 27: Educational Data Mining in Predicting Student Final Grades on Standardized Indonesia Data Pokok Pendidikan Data Set

Abstract: Educational data mining has been used to predict students' final grades in Indonesia. It can improve learning efficiency by directing more attention to students who are predicted to have low scores, but in practice each algorithm performs differently depending on the attributes and data set used. This study uses standardized Indonesian student data, named Data Pokok Pendidikan, to predict the grades of junior high school students. Several prediction techniques, K-Nearest Neighbor, Naive Bayes, Decision Tree, and Support Vector Machine, are compared, with parameter optimization and feature selection applied to each algorithm. Accuracy, precision, recall, and F1-score show that the algorithms perform differently on the high school data set, but in general Decision Tree with parameter optimization and feature selection outperforms the other classification algorithms, with a peak F1-score of 61.48%; the most significant attributes for predicting students' final scores are the first-semester Natural Science and Social Science scores.

Author 1: Nathan Priyasadie
Author 2: Sani Muhammad Isa

Keywords: Educational data mining; student performance; classification models; feature selection; parameter optimization

PDF
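Feature selection for decision trees, as applied above, is commonly driven by information gain: how much a candidate attribute reduces the entropy of the class labels. A minimal sketch of that criterion (the toy feature/label arrays in the test are invented, not the Data Pokok Pendidikan attributes):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Entropy reduction from splitting the labels on a feature's values."""
    n = len(labels)
    split = {}
    for f, y in zip(feature, labels):
        split.setdefault(f, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder
```

An attribute that perfectly separates the grades has gain equal to the full label entropy; an uninformative attribute has gain near zero, marking it for removal.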

Paper 28: Cyberbullying Detection in Textual Modality

Abstract: Cyberbullying is the use of technology to harass, threaten, or target another individual. Online bullying can be particularly damaging and upsetting, since it is usually anonymous and it is often hard to trace the bully. Cyberbullying can sometimes lead to issues such as anxiety, depression, shame, and even suicide. Most cyberbullying cases are not revealed to the public, and only a few are reported to the legal system; certain victims do not reveal their bullying experiences out of shame or because of the difficult reporting procedures. Our cyberbullying detection system aims to bring such cases under control by detecting the bullying and warning the bully. Cases are also reported to appropriate authorities, which can then verify them and take the necessary actions depending on the situation. The technology stack used for implementation includes Flask, scikit-learn, chat application APIs, Firebase, HTML, JavaScript, and CSS. The model was tested on classifiers such as SVM, KNN, logistic regression, and random forest, with the F1 score used as the assessment metric. Analysis of the performance of these models showed that the random forest classifier outperformed all the others, achieving an F1 score of 93.48%.

Author 1: Evangeline D
Author 2: Amy S Vadakkan
Author 3: Sachin R S
Author 4: Aakifha Khateeb
Author 5: Bhaskar C

Keywords: Cyberbullying detection; support vector machine (SVM); kNN (k nearest neighbor); logistic regression; random forest classifier

PDF
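The F1 score used to compare the four classifiers above is the harmonic mean of precision and recall, a standard definition that can be sketched directly. The label vectors in the test are illustrative only.

```python
def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the given positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

F1 is a sensible choice here because cyberbullying messages are a minority class: plain accuracy would reward a model that flags nothing, while F1 penalizes both missed bullying (low recall) and false accusations (low precision).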

Paper 29: Customers’ Opinions on Mobile Telecommunication Services in Malaysia using Sentiment Analysis

Abstract: Mobile telecommunication services in Malaysia have been widely used in the recent decade, and there is intense competition among the companies to keep and gain customers by offering various services. Customers commonly share reviews of these services on social media such as Twitter. Those reviews are essential for mobile telecommunication companies to improve their services and, at the same time, keep their customers from churning to another company. Hence, this study focuses on public sentiment on Twitter towards mobile telecommunication services in Malaysia. Twitter data was scraped using three keywords, Celcom, Digi, and Maxis, referring to Malaysia's top three mobile telecommunication companies. The timeline for the tweets was from December 2020 to January 2021, based on the Year End Sales promotion commonly used by these organisations to boost their sales. A corpus-based approach and machine learning models in RapidMiner were used in this study, namely Support Vector Machine (SVM), Naïve Bayes, and Deep Learning. The corpus determines the sentiment of each tweet as positive, negative, or neutral. The models' performances were compared in terms of accuracy, and the outcome shows that the Deep Learning classifier achieved the highest performance. The results of the sentiment analysis are visualised for easy understanding.

Author 1: Muhammad Radzi Abdul Rahim
Author 2: Shuzlina Abdul-Rahman
Author 3: Yuzi Mahmud

Keywords: Sentiment analysis; predictive analytics; RapidMiner; mobile telecommunications

PDF
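The corpus-based labelling step described above can be sketched as a lexicon lookup: count positive and negative words in a tweet and assign the sign of the balance. The tiny word lists below are invented placeholders, not the study's corpus.

```python
# Hypothetical mini-lexicon; a real corpus-based approach would use a
# curated sentiment dictionary covering Malay and English terms.
POSITIVE = {"good", "great", "fast", "love"}
NEGATIVE = {"bad", "slow", "drop", "hate"}

def label_tweet(text):
    """Label a tweet positive/negative/neutral by lexicon word counts."""
    tokens = text.lower().split()
    score = (sum(t in POSITIVE for t in tokens)
             - sum(t in NEGATIVE for t in tokens))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

These corpus-derived labels then serve as training targets for the SVM, Naïve Bayes, and Deep Learning models compared in the study.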

Paper 30: Detecting Server-Side Request Forgery (SSRF) Attack by using Deep Learning Techniques

Abstract: Server-side request forgery (SSRF) is a security vulnerability that arises in web applications: when services are accessed via URL, an attacker supplies or modifies a URL to access services on servers that they are not permitted to use. In this research, various types of SSRF attacks are discussed, and ways to secure web applications are explained. Various techniques have been used to detect and mitigate these attacks, most of which rely on machine learning. The main focus of this research is the application of deep learning techniques (LSTM networks) to create an intelligent model capable of detecting these attacks. The resulting deep learning model achieved an accuracy of 0.969, which indicates the strength of the model and its ability to detect SSRF attacks.

Author 1: Khadejah Al-talak
Author 2: Onytra Abbass

Keywords: Server-side request forgery (SSRF); machine learning (ML); deep learning (DL); long short-term memory (LSTM)

PDF
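Before a learned model like the paper's LSTM is applied, SSRF defenses often start from a rule-based pre-filter: flag request URLs whose host resolves to loopback, private, or link-local address space. The sketch below is such a heuristic, not the paper's deep learning model; the hostname allow-list entries are illustrative.

```python
import ipaddress
from urllib.parse import urlparse

def looks_like_ssrf(url):
    """Heuristically flag URLs targeting internal/loopback hosts."""
    host = urlparse(url).hostname or ""
    if host in ("localhost", "metadata.google.internal"):
        return True
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # not a literal IP; a real system would resolve DNS too
    return ip.is_private or ip.is_loopback or ip.is_link_local
```

A learned model complements this: the heuristic misses obfuscated encodings and DNS-rebinding tricks, which is where sequence models over URL characters can help.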

Paper 31: English Semantic Similarity based on Map Reduce Classification for Agricultural Complaints

Abstract: Environmental changes, including global warming, climatic change, and ecological impact, have been compounded by dangerous diseases such as the coronavirus epidemic. Since coronavirus is a hazardous disease that causes many deaths, the government of Egypt imposed many strict regulations, including lockdowns and social distancing measures. These circumstances have prevented agricultural experts from being present to help farmers or to advise on solving agricultural problems. To address this issue, this work focuses on improving support for farmers of the major field crops in Egypt by retrieving solutions corresponding to a farmer's query. We mainly focus on detecting the semantic similarity between a large agriculture dataset and user queries using Latent Semantic Analysis (LSA) based on the Term Frequency-Inverse Document Frequency (TF-IDF) method. In this paper, we apply an SVM MapReduce classifier as a framework for parallelizing and distributing the classification work on the dataset, and then apply different approaches for computing sentence similarity. We present a system based on semantic similarity methods and the support vector machine algorithm to detect the complaints similar to the user query. Finally, we run different experiments to evaluate the performance and efficiency of the proposed system: it achieves approximately 77.8%-94.8% in F-score, the accuracy of the SVM classifier is approximately 88.68%-89.63%, and the results show the leverage the SVM classification gives to the semantic similarity measure between sentences.

Author 1: Esraa Rslan
Author 2: Mohamed H. Khafagy
Author 3: Kamran Munir
Author 4: Rasha M.Badry

Keywords: Agricultural system; semantic textual similarity; text classification; latent semantic analysis; part of speech

PDF
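The TF-IDF similarity step at the core of this abstract can be sketched compactly: weight each term by its frequency times inverse document frequency, then rank stored complaints by cosine similarity to the query vector. The complaint strings in the test are invented examples, and a real system would add the LSA dimensionality reduction the paper describes.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors over a shared vocabulary (smoothed idf)."""
    vocab = sorted({w for d in docs for w in d.split()})
    n = len(docs)
    idf = {w: math.log(n / sum(w in d.split() for d in docs)) + 1
           for w in vocab}
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        vecs.append([tf[w] * idf[w] for w in vocab])
    return vecs

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def most_similar(query, complaints):
    """Return the stored complaint closest to the query in TF-IDF space."""
    vecs = tfidf_vectors(complaints + [query])
    q = vecs[-1]
    best = max(range(len(complaints)), key=lambda i: cosine(vecs[i], q))
    return complaints[best]
```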

Paper 32: Multi-objective based Cloud Task Scheduling Model with Improved Particle Swarm Optimization

Abstract: Advanced technologies have emerged from the parallel, cluster, client-server, distributed, and grid computing paradigms. Cloud computing is one such paradigm, delivering services to users on demand, billed per usage, over the internet. The number of cloud services has increased rapidly to meet user requirements, and the cloud can provide anything as a service over web networks, from hardware to applications, on demand. Due to the cloud's complex infrastructure, resources must be managed efficiently and monitored constantly. Task scheduling plays an integral role in improving cloud performance by reducing the number of resources used and efficiently allocating tasks to the requested resources. This paper attempts to assign and schedule resources efficiently in the cloud environment using the proposed Multi-Objective based Hybrid Initialization of Particle Swarm Optimization (MOHIPSO) strategy, considering both the cloud vendor's and the cloud user's sides. The proposed algorithm is a novel hybrid approach for initializing particles in PSO instead of using random values. This strategy can obtain the minimum total task execution time for the benefit of the cloud user and maximum resource usage for the benefit of the cloud provider. The proposed strategy improves on standard PSO and other heuristic PSO initialization approaches; makespan, execution time, waiting time, and virtual machine imbalance parameters are considered for the comparison.

Author 1: Chaitanya Udatha
Author 2: Gondi Lakshmeeswari

Keywords: Cloud computing; task scheduling; cloud service provider; virtual machines; PSO; multi-objective; cloud service broker

PDF
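The baseline that MOHIPSO modifies, standard PSO with randomly initialized particles, can be sketched in one dimension: each particle is pulled toward its personal best and the swarm's global best. This is a generic textbook PSO, not the paper's multi-objective scheduler; the inertia and acceleration constants (0.7, 1.5) are conventional illustrative choices.

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=42):
    """Minimal 1-D particle swarm optimizer with random initialization."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                       # per-particle best positions
    pbest_val = [f(x) for x in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i], v
    return gbest
```

The paper's contribution is precisely to replace the `rng.uniform` initialization line with a heuristic seeding, so the swarm starts nearer to good task-to-VM assignments.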

Paper 33: GML_DT: A Novel Graded Multi-label Decision Tree Classifier

Abstract: The goal of Graded Multi-label Classification (GMLC) is to assign a degree of membership or relevance of each class label to each data point, as opposed to multi-label classification tasks, which can only predict whether a class label is relevant or not. The graded multi-label setting generalizes the multi-label paradigm to allow prediction on a gradual scale, in agreement with practical real-world applications where labels differ in their level of relevance. In this paper, we propose a novel decision tree classifier (GML_DT) adapted to the graded multi-label setting. It fully models the label dependencies, which sets it apart from the transformation-based approaches in the literature and increases its performance. Furthermore, our approach yields comprehensive and interpretable rules that efficiently predict all the degrees of membership of the class labels at once. To demonstrate the model's effectiveness, we tested it on real-world graded multi-label datasets and compared it against a baseline transformation-based decision tree classifier. To assess its predictive performance, we conducted an experimental study with different evaluation metrics from the literature. Analysis of the results shows that our approach has a clear advantage across the utilized performance measures.

Author 1: Wissal Farsal
Author 2: Mohammed Ramdani
Author 3: Samir Anter

Keywords: Graded multi-label classification; algorithm adaptation; decision tree classifier; label dependencies

PDF
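One common way to evaluate graded multi-label predictions, in the spirit of the metrics this abstract mentions, is the mean absolute error between predicted and true grades, averaged over every label of every instance. This is a generic illustration of the setting, not necessarily the metric set used in the paper.

```python
def graded_label_mae(y_true, y_pred):
    """Mean absolute grade error over all (instance, label) pairs.

    Each row is one instance's grade vector, e.g. [2, 0, 3] on a 0-3 scale."""
    total = count = 0
    for t_row, p_row in zip(y_true, y_pred):
        for t, p in zip(t_row, p_row):
            total += abs(t - p)
            count += 1
    return total / count
```

Unlike binary multi-label loss, this measure credits a prediction of grade 2 for a true grade of 3 as a smaller error than a prediction of 0, which is the whole point of the graded setting.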

Paper 34: A Recognition Method for Cassava Phytoplasma Disease (CPD) Real-Time Detection based on Transfer Learning Neural Networks

Abstract: Object detection technology aims to detect target objects with the theories and methods of image processing and pattern recognition, determine the semantic categories of these objects, and mark the specific position of each target object in the image. This study aims to establish a recognition method for real-time Cassava Phytoplasma Disease (CPD) detection based on transfer learning neural networks. Several methods and procedures were carried out: testing two methods of transmitting long-distance high-definition (HD) video capture; establishing a compact setup for a long-range wireless video transmission system; developing and testing the real-time CPD detection and quantification monitoring system; and providing a comparative performance analysis of the three models used. We successfully custom-trained three artificial neural networks using transfer learning: Faster Regions with Convolutional Neural Networks (Faster R-CNN) Inception v2, Single Shot Detector (SSD) MobileNet v2, and You Only Look Once (YOLO) v4. These deep learning models can detect and recognize CPD in actual environment settings. The developed real-time CPD detection and quantification monitoring system was successfully integrated into the wireless video receiver and seamlessly visualized all incoming data using the three CNN models. If image processing speed is the main consideration, YOLOv4 is better than the other models; if accuracy is the priority, Faster R-CNN Inception v2 performs better. Since CPD detection is the main purpose of this study, the Faster R-CNN model is recommended for detecting CPD in a real-time environment.

Author 1: Irma T. Plata
Author 2: Edward B. Panganiban
Author 3: Darios B. Alado
Author 4: Allan C. Taracatac
Author 5: Bryan B. Bartolome
Author 6: Freddie Rick E. Labuanan

Keywords: Cassava phytoplasma disease; faster regions with convolutional neural networks (R-CNN) inception v2; you only look once (YOLO) v4; object detection; precision agriculture

PDF

Paper 35: Optimizing Smartphone Recommendation System through Adaptation of Genetic Algorithm and Progressive Web Application

Abstract: The ubiquity of smartphones is undeniable: their use is growing exponentially, and they have replaced cell phones and, to a certain degree, a host of other gadgets, including personal computers. Varied smartphone specifications and overwhelming smartphone advertisements give customers a broad range of choices. Many qualitative and quantitative criteria need to be considered, and customers want to select the most suitable smartphone; they face difficulties deciding on the best smartphone according to their budget and desires. Thus, a new method is needed to make recommendations to customers according to their preferences and budget. This study proposes a method for optimizing a smartphone recommendation system using a genetic algorithm (GA). Moreover, it is implemented on a progressive web application (PWA) platform to ensure that customers can use it on multiple platforms; they can input any smartphone specification preferences in addition to the budget. Functional testing confirmed the achievement of the study's objectives, and usability testing using the UEQ received feedback of 93.64%, with an overall average mean of 4.682. According to these outcomes, it can be concluded that optimizing smartphone recommendations through a GA helps customers compare options easily based on the obtained optimum result.

Author 1: Khyrina Airin Fariza Abu Samah
Author 2: Nursalsabiela Affendy Azam
Author 3: Raseeda Hamzah
Author 4: Chiou Sheng Chew
Author 5: Lala Septem Riza

Keywords: Genetic algorithm; progressive web application; recommendation; smartphone

PDF
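The GA mechanics behind such a recommender can be sketched on a toy encoding: represent a candidate as a bit-string of desired features, score it against the customer's preference vector, and evolve the population with tournament selection, single-point crossover, and mutation. This is a generic illustrative GA, not the paper's implementation; the target vector and operator rates are arbitrary assumptions.

```python
import random

def ga_best_match(target, pop_size=30, gens=60, seed=7):
    """Evolve bit-strings toward a target preference vector."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # Tournament selection of two parents, then one-point crossover.
            a = max(rng.sample(pop, 3), key=fitness)
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:           # occasional bit-flip mutation
                child[rng.randrange(n)] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

In the real system the fitness function would additionally penalize candidates whose price exceeds the customer's budget, which is what lets the GA trade specifications off against cost.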

Paper 36: SG-TSE: Segment-based Geographic Routing and Traffic Light Scheduling for EV Preemption based Negative Impact Reduction on Normal Traffic

Abstract: Emergency Vehicles (EVs) play a significant role in giving timely assistance to the general public, saving lives and avoiding property damage. EV preemption models help EVs maintain their speed along their path by pre-clearing normal vehicles from it. However, the few preemption models designed in the literature fail to minimize the negative impacts of EV preemption on normal vehicle traffic, and of normal vehicle traffic on EV speed. To accomplish these goals, this work proposes Segment-based Geographic routing and Traffic light Scheduling based EV preemption (SG-TSE), which incorporates two mechanisms for efficient EV preemption: Segment-based Geographic Routing (SGR) and Dynamic Traffic light Scheduling and EV Preemption (DTSE). First, the SGR uses a geographic routing model through the Segment Heads (SHs) along the selected route and passes EV arrival messages to the traffic light controller to pre-clear normal traffic. Second, the DTSE designs effective scheduling at traffic lights by dynamically adjusting the green phase based on the minimum detection distance of EVs to the intersections. Thus, EVs pass through intersections quickly without negatively impacting normal traffic, even when the signal head is in the red phase. Moreover, the proposed SG-TSE activates the green phase at the correct time and minimizes the negative impacts of the EV preemption model. Finally, the performance of SG-TSE is evaluated using Network Simulator-2 (NS-2) with different performance metrics and various network traffic scenarios.

Author 1: Shridevi Jeevan Kamble
Author 2: Manjunath R Kounte

Keywords: Emergency vehicle (EV) preemption; Segment-based Geographic routing and Traffic light Scheduling based EV preemption (SG-TSE); geographic routing; Segment based Geographic Routing (SGR); dynamic traffic light scheduling; Dynamic Traffic light Scheduling and EV preemption (DTSE); green phase adjustment

PDF
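The green-phase adjustment described above reduces, at its simplest, to an arrival-time calculation: hold the green long enough for the EV to travel its detection distance plus a clearance margin. The sketch below is an illustrative back-of-the-envelope model, not the DTSE scheduler; the 3-second clearance default is an assumption.

```python
def green_extension(distance_m, speed_mps, clearance_s=3.0):
    """Seconds of green needed for an EV `distance_m` away travelling at
    `speed_mps` to clear the intersection, plus a safety margin."""
    if speed_mps <= 0:
        raise ValueError("EV speed must be positive")
    return distance_m / speed_mps + clearance_s
```

For example, an EV detected 300 m out at 15 m/s needs the green held for 23 seconds; the shorter the minimum detection distance, the less normal traffic is disrupted.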

Paper 37: Detection of Data Leaks through Large Scale Distributed Query Processing using Machine Learning

Abstract: With the growth of distributed data processing, and with data being the fuel for each process, query processing times are expected to be significantly lower. Hence, distribution of the data is widely expected, and during distribution the chances of data leakage increase to a significant extent. Data leakage problems are generally caused not by intentional errors but by the higher visibility of the data across multiple clusters; the detection process is therefore critical. Many parallel research attempts have demonstrated various detection as well as prevention methods. Work on detecting data leaks depends either on historical information about leaks or on the contextual importance of the data; in both cases, the accuracy of the detection process cannot be ensured. On the other hand, preventive measures can be turned into a reactive detection process by reversing the principles proposed in those research outcomes, but the computational complexity is significantly higher. Thus, this work proposes a novel strategy for detecting data leakages after data distribution during query processing events. It proposes an initial Occurrence Based Rule Set Extraction method using an Adaptive Threshold for generating the rule sets; further, to reduce time complexity and the loss of dataset attribute information, it introduces another algorithm for Dynamic Inference-based Rule Set Reduction. After the inferences are generated, the work finally deploys the Attribute Subset Equivalence-based Leak Detection mechanism for the final detection of clusters with data leaks, demonstrating nearly 89% accuracy for the detection process.

Author 1: Kiranmai MVSV
Author 2: D Haritha

Keywords: Distributed query processing; distributed data leak; data leak detection; attribute subset equivalence; dynamic inference; adaptive threshold model

PDF

Paper 38: Knowledge Graph-based Framework for Domain Expertise Elicitation and Reuse in e-Learning

Abstract: Reusing the knowledge expertise of different domains in e-learning is an ideal approach to sustain knowledge and disseminate it throughout an organization's processes. It generates a valuable source of instruction which can significantly enrich the quality of teaching and training, as it effortlessly draws expertise from its original sources. It is also very useful for teaching activities, since it connects learners with real-life scenarios involving field experts and relieves instructors of the tedious task of authoring teaching material. In this paper we propose a framework that automatically gathers expertise from domain experts while they carry out their activities and then represents it in a form that can be shared and reused in e-learning by different types of learners. The framework relies on knowledge graphs, knowledge representation structures that facilitate mapping expertise to e-learning objects. A case study is presented showing how inspector reports are handled to generate on-demand e-courses specifically adapted to learners' needs.

Author 1: Jawad Berri

Keywords: Knowledge graph; domain expertise; e-learning; knowledge elicitation; learning web

PDF

Paper 39: Leveraging Artificial Intelligence–enabled Workflow Framework for Legacy Transformation

Abstract: The rapid advancement of web technologies, coupled with evolving business needs, makes legacy transformation a necessity for enterprises around the world. However, the risks of such a transformation must be mitigated with an approach flexible enough to allow a gradual, low-risk transformation process. This paper presents a Service Oriented Architecture (SOA) workflow-based legacy transformation approach that allows phased transformation, in which a legacy system is first transformed into self-contained modular services accessible via a dedicated service layer. These modular services are managed through an AI-enabled workflow management layer that interacts with an improved UI front end for the system's end users. The paper presents a hypothetical prototype in which an Oracle 5 legacy system is transformed using the proposed architecture. ASP.NET Core MVC and the Pega business process management platform are used to practically assess the feasibility of the proposed approach.

Author 1: Abdullah Al-Barakati

Keywords: Legacy systems; service oriented architecture; workflow management; legacy transformation; digital transformation; artificial intelligence (AI)

PDF

Paper 40: Developing the Mathematical Model of the Bipedal Walking Robot Executive Mechanism

Abstract: The paper considers the accuracy of footstep control in the vicinity of the application object. A methodology for simulating the executive electro-hydraulic servomechanism is developed, and control algorithms for the dynamic walking mode are presented. The stabilization of the sensors installed in the soles is investigated. A description is given of the laboratory model and of the simulation of the main links of the exoskeleton, approximated to human parameters, which allows the studied motion algorithms of the executive mechanism to be inserted into the program that automates the calculation of the motion links. The authors, for the first time, simulated the bipedal walking robot using modern digital technologies, including the joint use of a pneumatic electric drive. The paper proposes an automated control scheme for manipulators controlling immobilized human limbs. Considering the functions of the leg and the phases of movement, the structural scheme is chosen so that the same actuator performs several functions. This construction partially reduces the load on the person, because the drives of the various links could otherwise overturn a person due to their gravity. Using the kinematic structure of the model and the method of adaptive control of the manipulator, as well as replacing some moving parts with plastic material, the authors succeeded in reducing the total weight to a third of that of foreign analogues, which is important for a sick person.

Author 1: Zhanibek Issabekov
Author 2: Nakhypbek Aldiyarov

Keywords: Exoskeleton; manipulator; model; kinematics; dynamics

PDF

Paper 41: Data Backup Approach using Software-defined Wide Area Network

Abstract: Over the past several years, the traditional approaches to managing and utilizing hybrid Wide Area Network (WAN) connections between sites across geographical regions have posed many challenges to enterprises. Software-Defined Wide Area Network (SD-WAN) has emerged as a new paradigm that can overcome traditional WAN challenges such as the lack of visibility into WAN bandwidth utilization and the inefficient usage of expensive WAN resources. The flexibility and agility that the SD-WAN paradigm brings to the WAN help to improve the efficiency of bandwidth utilization and to address the surge in bandwidth demands. SD-WAN capabilities have become essential for handling the heavy inter-data-center traffic exchange required for business continuity and disaster recovery operations. In this paper, a data backup approach is introduced using SD-WAN, which makes the network centrally programmable. This leverages the ability to apply fine-grained traffic engineering to different data flows over the WAN, optimizing the bandwidth utilization of expensive WAN resources by balancing the traffic load across network links between data centers and minimizing the time required to transfer backup data to disaster recovery sites. The proposed approach proved its efficiency in terms of bandwidth utilization when compared with related work.

Author 1: Ahmed Attia
Author 2: Nour Eldeen Khalifa
Author 3: Amira Kotb

Keywords: Wide area networks; software defined network; software defined wide area network

PDF

Paper 42: Critical Data Consolidation in MDM to Develop the Unified Version of Truth

Abstract: Organizations seeking growth and a competitive lead should use Master Data Management (MDM) as a foundation for efficient decision making. An MDM framework creates a trusted and reliable continuous record of customers, products, suppliers and other shared data sets. In master data, the critical data is consolidated to portray essential business entities as a unified version of truth. Creating a trusted view of master data faces challenges such as quality, identity resolution, analytics and investment. In the proposed research, a technique has been designed to generate master data that assists policy makers in addressing these issues. In this paper, four steps are taken for master data creation, namely data enrichment, data matching, data merging and data governance. To achieve legitimate data quality, TALEND Open Studio has been used for data pre-processing and enrichment. An algorithm is designed to match and merge the master records. To validate the designed approach, results are evaluated using a Pandas DataFrame on the Python platform. This paper will assist the policy makers of organizations in formulating business strategies.

Author 1: Dupinder Kaur
Author 2: Dilbag Singh

Keywords: Master data management (MDM); master record; TALEND; data matching and merging

PDF

Paper 43: “Digital Influencer”: Development and Coexistence with Digital Social Groups

Abstract: Digital identities, also known as virtual influencers, are humanlike characters created by humans with digital tools and creative design that mimic human behavior. This has given rise to a group of well-liked and trendy figures called "virtual influencers", particularly in the modern day. With their rise, virtual influencers are increasingly used as tools in marketing and media, particularly in the online world, because such characters can overcome a variety of limitations that humans cannot. Character styles that need not share the look or composition of real people are among the factors that make these characters popular. The development of the virtual influencer, however, depends on the social and cultural factors of the people of the era, and relies on technology so that humans can apply and integrate these elements with existing virtual influencers to grow and develop further.

Author 1: Jirawat Sookkaew
Author 2: Pipatpong Saephoo

Keywords: Virtual influencer; online social; virtual character; media

PDF

Paper 44: Modified Deep Residual Quantum Computing Optimization Technique for IoT Platform

Abstract: The Internet of Things (IoT) is defined as millions of interconnections between wireless devices that obtain data globally. When multiple data streams are observed through a common platform, it becomes essential to investigate accuracy in order to realize the best IoT platform. To address the growing demand for time-sensitive data analysis and real-time decision-making, accuracy in IoT data collection has become critical. The Res-HQCNN is a hybrid quantum-classical neural network with deep residual learning. The model is trained in an end-to-end analog method; as in a traditional neural network, backpropagation is used. To discover how efficiently Res-HQCNN performs on a classical computer, quantum data with and without noise have been investigated extensively. The focus then turns to the application of artificial neural networks to analyze the threats to these IoT networks. For data recording purposes, and to undertake in-depth analysis of threat severity, kind and source, a model is trained using recurrent and convolutional neural networks. The intrusion detection system (IDS) explored in this study has a success rate of 99% based on the empirical data supplied to the model. Due to its robust execution under irregular distributions, greater sensitivity to the introduction of the authority dimension, steadiness, and extremely large key space, a quantum hash function has been proposed as a powerful method for secure communication between the IoT and the cloud.

Author 1: Rasha M. Abd El-Aziz
Author 2: Alanazi Rayan
Author 3: Osama R. Shahin
Author 4: Ahmed Elhadad
Author 5: Amr Abozeid
Author 6: Ahmed I. Taloba

Keywords: Internet of things (IoT); cloud; Res-HQCNN; intrusion detection system (IDS); optimization

PDF

Paper 45: Adding Water Path Capabilities to QWAT Databases

Abstract: The main purpose of this article is to show how to extend an existing open source database, namely QWAT (an acronym for Quantum GIS Water plugin), by using pgRouting (the PostgreSQL routing extension) in order to find the water flow in a water network. The water path in a water network is key information needed by any water supply company for activities such as customer identification, metering the water flow or isolating areas of the network. In our environment an open source database was used that had no means of identifying the water path, so our research is directed at that problem. Once a water path is found, our next goal was to show that identifying the customers of a water supply company is just a click away (by using undirected graphs). Another piece of key information needed by water supply companies is which valves should be closed in order to shut off the water in an area of the network. The second purpose of the article is therefore to show how to identify the necessary valves to be closed or opened in order to shut the water off or on within the pipe network.
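The valve-identification step described above reduces to a graph traversal: starting from the affected pipe junction, explore the network in every direction and stop at each valve reached; those valves together isolate the area. A minimal pure-Python sketch (not the authors' pgRouting implementation; the toy network and node names are hypothetical):

```python
from collections import deque

def valves_to_close(adjacency, valve_nodes, start):
    """BFS from the affected junction; traversal stops at valves,
    which together form the isolating set."""
    needed, seen = set(), {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adjacency[node]:
            if nxt in seen:
                continue
            if nxt in valve_nodes:
                needed.add(nxt)      # closing this valve cuts the flow
            else:
                seen.add(nxt)
                queue.append(nxt)
    return needed

# Hypothetical pipe network: junctions J*, valves V*
net = {"J1": ["V1", "J2"], "J2": ["J1", "V2"],
       "V1": ["J1", "J3"], "V2": ["J2", "J4"],
       "J3": ["V1"], "J4": ["V2"]}
print(valves_to_close(net, {"V1", "V2"}, "J1"))  # {'V1', 'V2'}
```

In the paper's setting the graph would come from the QWAT pipe tables via pgRouting rather than a Python dictionary.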

Author 1: Bogdan Vaduva
Author 2: Honoriu Valean

Keywords: Relational database; graphs; water network; water path; open source; QWAT

PDF

Paper 46: An Integrated Reinforcement DQNN Algorithm to Detect Crime Anomaly Objects in Smart Cities

Abstract: In the past it was difficult to identify suspicious activities happening in society, but with the advancement of smart devices, governments have started constructing smart cities using IoT devices that capture suspicious events happening in the surroundings, in order to reduce the crime rate. Unfortunately, hackers or criminals access these devices to protect themselves by remotely stopping them. Society therefore needs a strong security environment, which can be achieved with reinforcement learning algorithms that detect anomalous activities. The main reason for choosing reinforcement algorithms is that they efficiently handle a sequence of decisions based on input captured from videos. In the proposed system, the major objective is to minimize the identification time for each frame by defining if-then decision rules. It is a sort of autonomous system in which the system tries to learn from the penalties imposed on it during the training phase. The proposed system obtained an accuracy of 98.34%, and the time to encrypt the attributes is also low.

Author 1: Jyothi Mandala
Author 2: Pragada Akhila
Author 3: Vulapula Sridhar Reddy

Keywords: HybridFly; Advanced Encryption Standard (AES); reinforcement; anomaly detection; crime rate prediction; security attacks; RCNN

PDF

Paper 47: Monitoring the Growth of Tomatoes in Real Time with Deep Learning-based Image Segmentation

Abstract: Agricultural productivity of crops such as tomatoes needs to be increased, considering that consumption growth reaches 6.34% per year. Productivity can be increased through several methods, such as counting fruit and predicting the time at which it can be harvested. This information is a visual problem, so computer vision should solve it as an automation method in the industrial world. With this information, the farmer can monitor tomato fruit growth. The proposed method is a framework that has been implemented in real-time processing. To obtain growth information, the tomato area can be used as a region of interest (ROI) every week or at another scheduled time. As the challenge of this research, this ROI is extracted using segmentation analysis. The segmentation method used is the Mask Region-based Convolutional Neural Network (Mask R-CNN) with the ResNet101 architecture. The accuracy of the method is obtained from the similarity between the proposed method's output and the ground truth, namely 97.34% using the Dice coefficient and 94.83% using the Jaccard coefficient. This result indicates that the method can extract the ROI information with high accuracy, so the result can be used as a reference for the farmer in treating each tomato plant.
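The two similarity measures reported above have compact definitions over the predicted and ground-truth masks: Dice = 2|A∩B| / (|A|+|B|) and Jaccard = |A∩B| / |A∪B|. A minimal sketch over sets of foreground-pixel coordinates (the toy masks are hypothetical, not the paper's data):

```python
def dice(a, b):
    """Dice coefficient between two sets of foreground pixels."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard index (intersection over union)."""
    return len(a & b) / len(a | b)

pred  = {(0, 0), (0, 1), (1, 1)}   # predicted ROI pixels (toy example)
truth = {(0, 0), (0, 1), (1, 2)}   # ground-truth pixels
print(round(dice(pred, truth), 3), round(jaccard(pred, truth), 3))  # 0.667 0.5
```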

Author 1: Sigit Widiyanto
Author 2: Dheo Prasetyo Nugroho
Author 3: Ady Daryanto
Author 4: Moh Yunus
Author 5: Dini Tri Wardani

Keywords: Deep learning; Mask R-CNN; segmentation; tomato; growth

PDF

Paper 48: Personalized Recommender System for Arabic News on Twitter

Abstract: Reading online news is the most popular way to read articles from news sources worldwide. Nowadays we observe a massive increase in the information shared through social media, especially news. Many researchers have proposed techniques that provide recommendations for news articles, but most of this research has focused on solutions for English text. This research aimed to develop a personalized news recommender system for Arabic newsreaders that displays news articles based on readers' interests instead of presenting them only in order of occurrence. To develop the system we created an Arabic dataset of tweets and a set of Arabic news articles to serve as the source of recommendations. We then used the CAMeL tools for Arabic natural language processing to preprocess the collected data. After that, we built a hybrid recommender system by combining two filtering approaches: a content-based filtering approach that considers the user's profile to recommend news articles, and a collaborative filtering approach that considers an article's popularity with the support of Twitter. The system's performance was evaluated using two metrics. We conducted a user study with 25 respondents to gather users' feedback, and we used the Mean Absolute Error (MAE) as another way to evaluate the system's accuracy. Based on the evaluation results, we found that the hybrid recommender system recommends more relevant articles to users than the other two types of recommender system.
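The MAE metric used for the second evaluation is simply the mean of the absolute differences between users' actual relevance ratings and the recommender's predicted scores. A minimal sketch with hypothetical ratings (not the study's data):

```python
def mean_absolute_error(actual, predicted):
    """MAE: average absolute difference between paired values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

ratings   = [5, 3, 4, 2]   # hypothetical user ratings of recommended articles
predicted = [4, 3, 5, 2]   # scores produced by the recommender
print(mean_absolute_error(ratings, predicted))  # 0.5
```

Lower MAE means the predicted relevance tracks the users' actual judgments more closely.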

Author 1: Bashaier Almotairi
Author 2: Mayada Alrige
Author 3: Salha Abdullah

Keywords: Hybrid recommender system; online social network; Arabic news recommendation

PDF

Paper 49: Machine Learning Model through Ensemble Bagged Trees in Predictive Analysis of University Teaching Performance

Abstract: The objective of this study is to analyze and discuss the metrics of a machine learning model based on the Ensemble Bagged Trees algorithm, applied to data on satisfaction with teaching performance in a virtual environment. Initial classification analysis with the Matlab R2021a software identified an accuracy of 81.3% for the Ensemble Bagged Trees algorithm. Validating the collected data and obtaining the predictive model for the four classes (satisfaction levels) yielded a total precision of 82.21%, sensitivity of 73.40%, specificity of 91.02% and accuracy of 90.63%. In turn, the highest area under the curve (AUC) of the receiver operating characteristic (ROC) is 0.93, indicating a sensitivity of the predictive model of 93%. The validation of these results will give the directors of the higher-education institution a database to be used in improving the quality of the educational service in relation to teaching performance.
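The precision, sensitivity, specificity and accuracy figures reported above all derive from the entries of a confusion matrix. A minimal sketch with hypothetical counts (not the study's data), using the standard per-class definitions:

```python
def metrics(tp, fp, fn, tn):
    """Standard classification metrics from confusion-matrix counts."""
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)            # a.k.a. recall
    specificity = tn / (tn + fp)
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    return precision, sensitivity, specificity, accuracy

# Hypothetical counts: 80 true positives, 20 false positives,
# 10 false negatives, 90 true negatives
print([round(m, 3) for m in metrics(80, 20, 10, 90)])
```

For a multi-class problem such as the four satisfaction levels, these are computed per class (one-vs-rest) and then averaged.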

Author 1: Omar Chamorro-Atalaya
Author 2: Carlos Chávez-Herrera
Author 3: Marco Anton-De los Santos
Author 4: Juan Anton-De los Santos
Author 5: Almintor Torres-Quiroz
Author 6: Antenor Leva-Apaza
Author 7: Abel Tasayco-Jala
Author 8: Gutember Peralta-Eugenio

Keywords: Machine learning; ensemble; bagged trees; predictive analysis; teaching performance

PDF

Paper 50: Feature Extraction based Breast Cancer Detection using WPSO with CNN

Abstract: Cancer reports of the past few years in India indicate that 30% of cases are breast cancer, and the number may increase in the near future. It is reported that one woman is diagnosed every two minutes and one dies every nine minutes. Early diagnosis of cancer saves the lives of the individuals affected. For detecting breast cancer in its early stages, microcalcifications are considered one key symptom. Several scientific investigations have been performed to fight this disease, for which machine learning techniques can be extensively used. Particle swarm optimization (PSO) is recognized as one of several efficient and promising approaches for diagnosing breast cancer, assisting medical experts in timely and apt treatment. This paper uses a weighted particle swarm optimization (WPSO) approach to extract textural features from the segmented mammogram image for classifying microcalcifications as normal, benign or malignant, thereby improving accuracy. In the breast region, the tumour part is extracted using optimization methods. Convolutional Neural Networks (CNNs) are proposed for detecting breast cancer, reducing manual overheads. A CNN framework is constructed for extracting features efficiently. The designed model detects cancer regions in mammogram (MG) images and rapidly classifies those regions as normal or abnormal. The model uses MG images obtained from various local hospitals.

Author 1: Naga Deepti Ponnaganti
Author 2: Raju Anitha

Keywords: Breast cancer; microcalcifications; weighted particle swarm optimization (WPSO); Convolutional Neural Networks (CNNs) mammogram

PDF

Paper 51: Real-Time Emotional Expression Generation by Humanoid Robot

Abstract: Emotion integrates different aspects of a person, including mood (the current emotional state), personality, voice or speech, color around the eyes, and the movement of the facial organs. We consider mood because a person's current emotional state always affects upcoming emotions. All these parameters lie behind an emotion, and a human being can easily recognize it by seeing a face, even when more than one person is present; for a robot to produce human-like emotion, all these parameters have to be considered to imitate an artificial facial expression for that emotion. Most researchers working in this area still find it difficult for the robot to determine the exact emotion, because facial information is not always available, especially when interacting with a group of people, and to mimic an emotion that the user can effectively recognize. In our study, the loudest speech among the people, sensed by the robot, and the color around the eyes are used to cope with these issues. Another issue is the rise time and fall time of emotional intensity; in other words, how long should the robot hold an emotion? An experimental approach is applied to obtain these values. The proposed method uses an emotional speech database to recognize human emotion with a convolutional neural network (CNN) and RGB patterns to mimic the emotion, simulating an improved humanoid robot that can express emotion like a human being and give real-time responses to a user or group of users, making Human-Robot Interaction (HRI) more effective.

Author 1: Master Prince

Keywords: Artificial facial expression; emotional speech database; convolutional neural network; RGB pattern; humanoid robot; human-robot interaction

PDF

Paper 52: Deep Learning-enabled Detection of Acute Ischemic Stroke using Brain Computed Tomography Images

Abstract: Stroke is the second leading cause of death globally. Computed Tomography (CT) plays a significant role in the initial diagnosis of suspected stroke patients. Currently, stroke is subjectively interpreted on CT scans by domain experts, and significant inter- and intra-observer variation has been documented. Several methods have been proposed to detect ischemic brain stroke automatically on CT scans using machine learning and deep learning, but they are not robust and their performance is not ready for clinical practice. We propose a fully automatic method for acute ischemic stroke detection on brain CT scans. The system’s first component is a brain slice classification module that eliminates the CT scan’s upper and lower slices, which do not usually include brain tissue. In turn, a brain tissue segmentation module segments brain tissue from CT slices, followed by tissue contrast enhancement using the Extreme-Level Eliminating Histogram Equalization technique. Finally, the processed brain tissue is classified as either normal or ischemic stroke using a classification module, to determine whether the patient is suffering from an ischemic stroke. We leveraged the pre-trained ResNet50 model for slice classification and tissue segmentation, while we propose an efficient lightweight multi-scale CNN model (5S-CNN), which outperformed state-of-the-art models for brain tissue classification. Evaluation included the use of more than 130 patient brain CT scans curated from King Fahad Medical City (KFMC). The proposed method, using 5-fold cross-validation to validate generalization and susceptibility to overfitting, achieved accuracies of 99.21% in brain slice classification, 99.70% in brain tissue segmentation, 87.20% in patient-wise brain tissue classification, and 90.51% in slice-wise brain tissue classification. The system can assist both expert and non-expert radiologists in the early identification of ischemic stroke on brain CT scans.

Author 1: Khalid Babutain
Author 2: Muhammad Hussain
Author 3: Hatim Aboalsamh
Author 4: Majed Al-Hameed

Keywords: Acute ischemic brain stroke; deep learning; convolutional neural network; CT brain slice classification; brain tissue segmentation; brain tissue contrast enhancement; brain tissue classification

PDF

Paper 53: Learning Cultural Heritage History in Muzium Negara through Role-Playing Game

Abstract: Traditional classroom-based teaching and learning of the History subject is ineffective and less interactive, which dampens students’ interest and motivation to learn history. Museum-based learning has therefore been proposed to supplement classroom-based learning for effective teaching and learning of the subject. However, excursions to the museum are often hindered by the geographical location, the museum’s policies, and student commitments. These hindrances motivated the researchers to design and develop a role-playing game (RPG) set in Muzium Negara (the National Museum of Malaysia), known as “Waktu Silam”, to enhance students’ interest in, motivation for, and knowledge of the cultural and historical heritage of Malaysia. A survey questionnaire was distributed to assess the level of enjoyment provided by the game. The results showed that 84.8% of participants experienced the element of enjoyment in the game. This study is anticipated to enhance student interest and knowledge in history, enhance visitors’ experience, and promote tourism to Muzium Negara. In future work, the project is expected to include multiplayer functionality to add more interactivity to the game.

Author 1: Nor Aiza Moketar
Author 2: Nurul Hidayah Mat Zain
Author 3: Siti Nuramalina Johari
Author 4: Khyrina Airin Fariza Abu Samah
Author 5: Lala Septem Riza
Author 6: Massila Kamalrudin

Keywords: Muzium Negara; history; role-playing games; gamification; museum-based learning; enjoyable

PDF

Paper 54: Smart Irrigation and Precision Farming of Paddy Field using Unmanned Ground Vehicle and Internet of Things System

Abstract: Paddy is one of the most widely consumed staple foods across the globe, especially in Asian countries. With the population growing and agricultural land shrinking, the crop yield must be increased to meet the ever-growing food demand. The yield of paddy depends largely on the irrigation of the field, that is, on maintaining the optimum water level. This paper proposes a solution to this irrigation problem by addressing the various challenges of implementing an Unmanned Ground Vehicle (UGV) and Internet of Things (IoT) system in paddy cultivation. A UGV carrying the sensors was used to collect sensor data (water level, rainwater, humidity, temperature, light intensity) from the paddy field, controlled by a cloud-based solution and by a mobile-application-based solution. The data was then processed and used to control the water valves, which can again be controlled through the cloud and the mobile application. The water level maintained by the mobile-application-based solution, the cloud-based solution and the traditional method of irrigation was compared, and the cloud-based solution was found to be the most efficient. The approach thus reduces the manpower required for irrigation compared with the traditional method, and also reduces water wastage, thereby conserving water.
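The core of the valve-control loop described above is a threshold rule on the sensed water level. A minimal sketch; the function name, the centimetre thresholds and the hysteresis band are hypothetical illustrations, not the paper's calibrated values:

```python
def valve_command(water_level_cm, low=5.0, high=10.0):
    """Hypothetical hysteresis rule: open the valve when the field is
    too dry, close it when flooded, otherwise keep the current state."""
    if water_level_cm < low:
        return "OPEN"
    if water_level_cm > high:
        return "CLOSE"
    return "HOLD"

print(valve_command(3.2), valve_command(12.0), valve_command(7.5))
# OPEN CLOSE HOLD
```

In the paper's architecture this decision would run in the cloud (or the mobile app) on readings uploaded by the UGV, with the resulting command sent back to the valve actuators.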

Author 1: Srinivas A
Author 2: J Sangeetha

Keywords: Sensor; cloud; mobile application; agriculture; water valve

PDF

Paper 55: Adaptive Trajectory Control Design for Bilateral Robotic Arm with Enforced Sensorless and Acceleration based Force Control Technique

Abstract: This study offers an approach for tackling the instability of the computed force generated at a joint of a robotic arm by improving the model of a bilateral master-slave haptic system with an adaptive technique known as the Reaction Force Observer (RFOB). The purpose of the recommended modelling is to correct unwanted signals, coming from the standard controller and the surroundings, produced within the moving joint of the articulated robotic arm. The RFOB adjusts for signal interference by modifying the position response to reach the desired final location. The investigation was carried out in two separate tests to compare the outcomes of the recommended integration technique with those of the former system, which enforced only a Disturbance Observer (DOB). The feedback produced in the organised experiments was measured on a simulation platform. All numerical records and signal charts illustrate the robustness of the proposed method, since the system integrated with acceleration-based force control is more precise and quicker.

Author 1: Nuratiqa Natrah Mansor
Author 2: Muhammad Herman Jamaluddin
Author 3: Ahmad Zaki Shukor

Keywords: Force and position controller; reaction force observer; bilateral control robotic arm; sensorless; system response

PDF

Paper 56: Assessment System of Local Government Projects Prototype in Indonesia

Abstract: The purpose of this research is to build an application used for project evaluation and for providing recommendations on project performance in local government agencies. Project evaluation is carried out using a Group Decision Making (GDM) model based on the Group Decision Support System (GDSS) concept. The project output and outcome parameters used by each Decision Maker (DM) follow a hybrid of the Multi-Criteria Decision Making (MCDM) and Project Management Body of Knowledge (PMBOK) methods to reduce subjectivity in scoring qualitative data, and the project ratings from all DMs are determined with the Copeland score voting method. The computational results of implementing GDSS and MCDM indicate that the project ranking process becomes faster and more accurate. The sensitivity test shows that two criteria have a great influence on project performance and therefore play a very important role in project evaluation.
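Copeland score voting, used above to aggregate the DMs' ratings, ranks alternatives by pairwise duels: a project gains a point for each rival it beats in a majority of DMs' scores and loses a point for each rival that beats it. A minimal sketch; the projects and scores are hypothetical, not the paper's data:

```python
from itertools import combinations

def copeland_ranking(scores):
    """scores: {project: [score from each decision maker]}.
    Rank projects by pairwise wins minus losses (Copeland score)."""
    wins = {p: 0 for p in scores}
    for a, b in combinations(scores, 2):
        a_pref = sum(x > y for x, y in zip(scores[a], scores[b]))
        b_pref = sum(y > x for x, y in zip(scores[a], scores[b]))
        if a_pref > b_pref:
            wins[a] += 1
            wins[b] -= 1
        elif b_pref > a_pref:
            wins[b] += 1
            wins[a] -= 1
    return sorted(wins, key=wins.get, reverse=True)

# Three hypothetical projects scored by three decision makers
votes = {"P1": [3, 2, 3], "P2": [2, 3, 2], "P3": [1, 1, 1]}
print(copeland_ranking(votes))  # ['P1', 'P2', 'P3']
```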

Author 1: Herri Setiawan
Author 2: Husnawati
Author 3: Tasmi

Keywords: Group Decision Making (GDM); Group Decision Support System (GDSS); Multi-Criteria Decision Making (MCDM); Project Management Body of Knowledge (PMBOK); local government

PDF

Paper 57: M-SVR Model for a Serious Game Evaluation Tool

Abstract: Today, due to their interactive, participatory and entertaining nature, Serious Games set themselves apart from other learning methods used in teaching. Much progress has been made in the design techniques and methods of Serious Games, but little in their evaluation. To fill this gap, we proposed in previous work an evaluation tool capable of helping practitioners evaluate Serious Games in different training contexts. This evaluation tool is designed around four dimensions, namely the pedagogical, technological, ludic and behavioral dimensions, which are measured by clearly defined criteria. During this process, it was highlighted that the human factor (the evaluator) considerably influences the result of the weightings through the choice of weights for the evaluation dimensions. To reduce this influence during the evaluation process while keeping the correlation between the variables of our evaluation system, we present in this paper an improvement of our evaluation tool, equipping it with an intelligent supervised self-learning algorithm that self-regulates the weights according to the context of use of the Serious Game being evaluated. Experimental verification of the optimization results gives a root mean square error of 0.016 and a coefficient of determination of 98.59 percent, indicating that the model has high precision, which guarantees better predictive performance. A comparison between this intelligent model and the models presented in our previous work showed the same ordering of the four dimensions, while reducing the influence of the human factor in the Multi-Output Support Vector Regression weighting process.

Author 1: Kamal Omari
Author 2: Said Harchi
Author 3: Mohamed Moussetad
Author 4: El Houssine Labriji
Author 5: Ali Labriji

Keywords: Serious game; evaluation tool; multi-output support vector regression

PDF

Paper 58: Modified Method of Traffic Engineering in DCN with a Ramified Topology

Abstract: This article considers two main local network topologies. Based on the basic DFS protocol, a mathematical model has been developed for a new method of multipath routing and traffic engineering in data centers with a ramified topology. The method was designed with the features and benefits of SDN in mind. A simulation of the developed method was also carried out on the two local topologies considered earlier.

Author 1: As’ad Mahmoud As’ad Alnaser
Author 2: Yurii Kulakov
Author 3: Dmytro Korenko

Keywords: Local networks; traffic engineering; SDN; DCN; DFS; Mininet

PDF

Paper 59: Analysis of Crime Pattern using Data Mining Techniques

Abstract: Advances in Information Technology permit high volumes of data to be generated in the databases of institutions, organizations and governments, including Law Enforcement Agencies (LEAs). Technologies have also been developed to store and manipulate these data to enhance decision making. Crime remains a severe threat to humanity, and criminals now exploit highly sophisticated technologies to perform criminal activities. To combat crime effectively, LEAs must be adequately equipped with technological tools such as data mining technology to enable useful discoveries from databases. To achieve this, a Real-time Integrated Crime Information System (RICIS) was developed, and mobile phones were used by informants (the general public) to capture information about crimes being committed within South-Eastern Nigeria. Each piece of crime information captured is sent to the LEA responsible for that crime type and stored in the agency's database for analysis. This study uses data mining algorithms, namely Classification and Rule Induction, to analyze crime trends and patterns in the South-Eastern part of Nigeria between 2012 and 2013. A data set of 973 records was collected from Eleme Police station, Port Harcourt (2012) and Nsukka Police station (2013). The analysis enables the identification of trends in crimes and criminal activities across various LEA databases, enhancing crime control and public safety.

Author 1: Chikodili Helen Ugwuishiwu
Author 2: Peter O. Ogbobe
Author 3: Matthew Chukwuemeka Okoronkwo

Keywords: Information technology; law enforcement agency; data mining; crime; classification and rule induction

PDF

Paper 60: Human Face Recognition from Part of a Facial Image based on Image Stitching

Abstract: Most current techniques for face recognition require the presence of the full face of the person to be recognized, a condition that is difficult to guarantee in practice: the person of interest may appear with only part of the face visible, which requires predicting the part that does not appear. Most current approaches rely on image interpolation, which does not give reliable results, especially if the missing part is large. In this work, we complete the face by stitching the visible part to its horizontal flip, relying on the fact that the human face is symmetric in most cases. Two face recognition methods, Eigenfaces and a geometrical method, were then applied to the completed model to demonstrate the efficiency of the algorithm. Image stitching is the process by which distinct photographic images are combined to make a complete scene or a high-resolution image; several images are integrated to form a wide-angle panoramic image. The quality of the stitching is determined by calculating the similarity between the stitched image and the original images and by the presence of seam lines in the stitched image. The Eigenfaces approach uses PCA to reduce the dimensionality of the feature vectors, providing an effective way of discovering the lower-dimensional space and a fast, effective means of classifying faces. The feature extraction phase is followed by the classification phase, using distance classifiers based on squared Euclidean and City-Block distances. The test results demonstrate that the proposed algorithm achieves a recognition rate of around 95%; to validate it, the algorithm was compared to existing CNN and Multibatch estimator methods.
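The symmetry-based completion step described in the abstract can be sketched as follows. This is only a minimal illustration of the mirroring idea, assuming a frontal, roughly centered face; the function name and interface are illustrative, not the authors' implementation:

```python
import numpy as np

def complete_by_mirroring(face, visible="left"):
    """Complete a face image by reflecting its visible half,
    exploiting the approximate bilateral symmetry of a frontal face."""
    w = face.shape[1]
    half = face[:, : w // 2] if visible == "left" else face[:, w - w // 2:]
    mirrored = half[:, ::-1]  # horizontal flip of the visible half
    return (np.hstack([half, mirrored]) if visible == "left"
            else np.hstack([mirrored, half]))
```

The completed image can then be fed to any recognizer (Eigenfaces, geometrical features, a CNN) exactly as a full-face photograph would be.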

Author 1: Osama R. Shahin
Author 2: Rami Ayedi
Author 3: Alanazi Rayan
Author 4: Rasha M. Abd El-Aziz
Author 5: Ahmed I. Taloba

Keywords: Face recognition; image stitching; principal component analysis; Eigenfaces distance classifiers; geometrical approach

PDF

Paper 61: Comparative Heart Rate Variability Analysis of ECG, Holter and PPG Signals

Abstract: The article presents a demonstrative software system with built-in procedures for the input, preprocessing and mathematically based analysis of cardiac data. The program works with the following signals: ECG, Holter recordings and PPG signals. The system uses real cardiological data from patients, obtained with modern medical devices: electrocardiography, continuous Holter monitoring, photoplethysmography and others. It allows mathematically based study of cardiac records through linear, nonlinear, fractal and wavelet-based methods. A comparative analysis was made of the results obtained when evaluating the HRV parameters in the three types of signals. The difference between the HRV time series (cardiac intervals and HRV analysis) of individuals diagnosed with heart failure and of healthy individuals is presented graphically. The findings indicate that studies of heart rate variability on ECG, Holter and PPG records can be used to support the cardiac practice of physicians.
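The linear (time-domain) HRV parameters mentioned above have standard definitions over a series of RR intervals; a minimal sketch of the common measures (the function name is illustrative, not taken from the paper's software):

```python
import math

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV measures from RR intervals in milliseconds:
    SDNN (std. dev. of intervals), RMSSD (root mean square of successive
    differences) and pNN50 (% of successive differences above 50 ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    pnn50 = 100.0 * sum(abs(d) > 50 for d in diffs) / len(diffs)
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd, "pnn50": pnn50}
```

The same interval series can come from an ECG R-peak detector, a Holter record, or PPG pulse peaks, which is what makes the three modalities comparable.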

Author 1: Galya N. Georgieva-Tsaneva
Author 2: Evgeniya Gospodinova

Keywords: Heart rate variability; cardiovascular diseases; mathematical analysis; holter records; software system

PDF

Paper 62: Multistage Relay Network Topology using IEEE802.11ax for Construction of Multi-robot Environment

Abstract: This paper describes an information gathering system comprising multiple mobile robots and a wireless sensor network. In general, a single robot searches an environment using a teleoperation system in a multistage relay network while maintaining communication quality. However, the search range of a single robot is limited, and it is difficult to gather comprehensive information in large-scale facilities. This paper proposes a multistage relay network topology using IEEE802.11ax for information gathering by multiple robots. In this multi-robot operation, a mobile robot carries wireless relay nodes and deploys them into the environment. After the network is constructed, each robot connects to it and gathers information. An operator then controls each robot remotely while monitoring the end-to-end communication quality with each mobile robot in the network. The paper proposes a method for estimating the end-to-end throughput with multiple mobile robots. The validity of the proposed method is then inspected via an evaluation experiment on multi-robot teleoperation. The experimental results show that the network constructed with the proposed topology is capable of maintaining the communication connectivity of more than three mobile robots.

Author 1: Ryo Odake
Author 2: Kei Sawai
Author 3: Noboru Takagi
Author 4: Hiroyuki Masuta
Author 5: Tatsuo Motoyoshi

Keywords: Multi-robot system; IEEE802.11ax; information gathering; multistage relay network; network topology

PDF

Paper 63: Use of Value Chain Mapping to Determine R&D Domain Knowledge Retention Framework Extended Criteria

Abstract: Implementing a knowledge retention (KR) strategy is crucial to overcome the loss of expert knowledge due to employee turnover and retirement. The knowledge loss phenomenon exposes organizations to enormous risks that affect performance. KR frameworks and models are available beyond research and development (R&D) organizations, addressing knowledge retention strategies for administrative, operational, and manufacturing organizations. For research-intensive portfolios within R&D organizations, however, the available KR frameworks require fitting. The difficulty of addressing knowledge loss, due to the uniqueness of an R&D organization's knowledge artifacts, requires an extended KR framework, and before designing it, it is crucial to determine the framework's additional criteria. The paper reports the use of value chain mapping to determine the extended criteria of a KR framework fit for R&D organizations. The value chain mapping method identifies the knowledge activities in R&D using the Porter Value Chain (PVC) as the reference model. The output is a Knowledge Chain Model (KCM) that defines the critical points of knowledge loss in the R&D value chain. These critical points, project-based expert critical knowledge focus, project-based tacit knowledge transfer, and project-based knowledge repository, are the nominated extended criteria of the KR framework fit for R&D organizations.

Author 1: Mohamad Safuan Bin Sulaiman
Author 2: Ariza Nordin
Author 3: Nor Laila Md Noor
Author 4: Wan Adilah Wan Adnan

Keywords: Knowledge retention framework; research and development; porter value chain; knowledge management; knowledge loss; research intensive portfolio; knowledge chain model

PDF

Paper 64: Adaptive Deep Learning based Cryptocurrency Price Fluctuation Classification

Abstract: This paper proposes a deep learning based predictive model for forecasting the price of cryptocurrency and classifying the direction of its movement. These two tasks are challenging since cryptocurrency prices fluctuate with extremely high volatility. However, it has been shown that the cryptocurrency trading market does not exhibit the perfect-market property, i.e., price is not a totally random walk. On this basis, this study shows that both price value forecasting and price movement direction classification are predictable. A recurrent neural network based predictive model is built to regress and classify prices. With adaptive dynamic feature selection and the use of external dependable factors with a potential degree of predictability, the proposed model achieves unprecedented performance in terms of movement classification. A naïve simulation of a trading scenario shows a 69% profitability score across a six-month trading period for Bitcoin.
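The classification target mentioned above, the direction of price movement, is conventionally derived from consecutive closing prices; a minimal sketch of that labeling step (the threshold parameter is an illustrative assumption, not from the paper):

```python
def movement_labels(prices, threshold=0.0):
    """Label the direction of movement between consecutive closing prices:
    1 = up, 0 = down or flat. These labels form the classification target
    trained alongside the price regression."""
    return [1 if b - a > threshold else 0 for a, b in zip(prices, prices[1:])]
```

A non-zero threshold can be used to ignore moves smaller than trading fees, which matters for the profitability simulation.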

Author 1: Ahmed Saied El-Berawi
Author 2: Mohamed Abdel Fattah Belal
Author 3: Mahmoud Mahmoud Abd Ellatif

Keywords: Computer intelligence; cryptocurrency; deep learning; market movement; recurrent neural network; timeseries forecasting

PDF

Paper 65: User-centric Activity Recognition and Prediction Model using Machine Learning Algorithms

Abstract: Human Activity Recognition has been a dynamic research area in recent years. Various methods of collecting data and analyzing them to detect activity have been well investigated, and some machine learning algorithms have shown excellent performance in activity recognition, on which many applications and systems are being built. In contrast, prediction of the next activity is an emerging field of study. This work proposes a conceptual model that uses machine learning algorithms to detect activity from sensor data and predict the next activity from the previously seen activity sequence. We created our own activity recognition dataset and used six machine learning algorithms to evaluate the recognition task. We propose a method for next-activity prediction that converts the sequence prediction problem into a supervised learning problem using a windowing technique; three classification algorithms were used to evaluate this task. Gradient Boosting performs best, yielding 87.8% accuracy for next-activity prediction over a 16-day timeframe. We also measured the performance of an LSTM sequence prediction model for predicting the next activity, where the optimum accuracy is 70.90%.
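The windowing technique referred to above, turning a sequence prediction problem into a supervised one, can be sketched generically (the function name and window size are illustrative assumptions):

```python
def windowize(seq, window):
    """Convert a sequence-prediction task into supervised learning:
    each sample is the previous `window` activities and the label is
    the activity that follows them."""
    X, y = [], []
    for i in range(len(seq) - window):
        X.append(seq[i : i + window])
        y.append(seq[i + window])
    return X, y
```

After this transformation, any ordinary classifier (e.g. gradient boosting) can be trained on the (X, y) pairs.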

Author 1: Namrata Roy
Author 2: Rafiul Ahmed
Author 3: Mohammad Rezwanul Huq
Author 4: Mohammad Munem Shahriar

Keywords: Machine learning algorithms; activity recognition; gradient boosting; next activity prediction; LSTM sequence prediction model

PDF

Paper 66: Study of Haar-AdaBoost (VJ) and HOG-AdaBoost (PoseInv) Detectors for People Detection

Abstract: Object detection in general, and pedestrian detection in particular, in images and videos is a very popular research topic within the computer vision community and is currently at the heart of much research. Detecting people is particularly difficult because of the great variability of appearances and situations in which a person may be found: a person is not a rigid object but articulated and unpredictable, and the body's shape changes during movement. The situations are very varied: a person may be alone, near a group of people, in a crowd, or obscured by an object. In addition, characteristics vary from one person to another (color of the skin, hair, clothes, etc.), and a simple, clear or complex background, the lighting or weather conditions, and the shadows cast by different light sources greatly complicate the problem. In this article, we present a comparative study of the performance of two detectors, Haar-AdaBoost and HOG-AdaBoost, in detecting people in the INRIA person image database. An evaluation of the experiments is presented after certain modifications to the detection parameters.

Author 1: Nagi OULD TALEB
Author 2: Mohamed Larbi BEN MAATI
Author 3: Mohamedade Farouk NANNE
Author 4: Aicha Mint Aboubekrine
Author 5: Adil CHERGUI

Keywords: Pedestrian detection; learned-based methods; Haar like features; HOG descriptor; AdaBoost; behavior analysis

PDF

Paper 67: Towards Stopwords Identification in Tamil Text Clustering

Abstract: Nowadays, digital documents have become the primary source of information, and natural language processing is widely utilized in information retrieval, topic modeling, document classification, and document clustering. Preprocessing plays a significant role in all of these applications, and one of its critical steps is removing stopwords. Many languages have defined lists of stopwords; however, no publicly available stopwords list exists for the Tamil language, since it is under-resourced. This study identified 93 general and several domain-specific stopwords for sports, entertainment, and local and foreign news by analyzing more than 1.7 million Tamil documents containing more than 21 million words. The study also shows that removing stopwords improves the accuracy of a Tamil document clustering system, with F-score improvements of 2.4% and 0.95% for the TF-IDF and FastText one-pass clustering algorithms, respectively.
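A common way to surface stopword candidates from a corpus, as in the analysis described above, is by document frequency: words that occur in almost every document carry little discriminative weight. A minimal sketch (the threshold and function name are illustrative assumptions, not the paper's procedure):

```python
from collections import Counter

def candidate_stopwords(docs, df_threshold=0.8):
    """Flag words appearing in at least `df_threshold` of all documents
    as stopword candidates (high document frequency, low discriminative
    power); candidates are then typically reviewed manually."""
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc.split()))  # count each word once per document
    return sorted(w for w, c in df.items() if c / n_docs >= df_threshold)
```

For a real Tamil corpus, tokenization would need a language-aware tokenizer rather than whitespace splitting.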

Author 1: M. S. Faathima Fayaza
Author 2: F. Fathima Farhath

Keywords: Stopwords; Tamil; pre-processing; TF-IDF; clustering

PDF

Paper 68: Improving Chi-Square Feature Selection using a Bernoulli Model for Multi-label Classification of Indonesian-Translated Hadith

Abstract: Hadith is foundational knowledge in Islam that must be studied and practiced by Muslims. The Hadith contain several types of teachings that are beneficial to Muslims and all of mankind: some serve as advice, others contain prohibitions that Muslims should adhere to, and yet others belong to neither category and serve only as information. This study focuses on increasing the performance of Chi-Square feature selection to obtain relevant features for multi-label classification of Indonesian-translated Bukhari Hadith data. It proposes a Bernoulli-model-based Chi-Square method to improve Chi-Square feature selection, which is appropriate for short-text data such as Hadith. The findings show that the proposed method can select relevant features based on data classes, thereby improving Hadith classification performance with an error value of 9.38%, compared to 9.91% obtained using basic Chi-Square feature selection.
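Under a Bernoulli document model, the Chi-Square score of a term for a class is computed from binary presence/absence document counts in a 2x2 contingency table. A sketch of that standard statistic (not the paper's specific modification):

```python
def chi_square_bernoulli(n11, n10, n01, n00):
    """Chi-square score of a term for a class from Bernoulli
    (presence/absence) document counts:
      n11 = docs in the class containing the term
      n10 = docs outside the class containing the term
      n01 = docs in the class missing the term
      n00 = docs outside the class missing the term"""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return num / den if den else 0.0
```

Terms are then ranked by this score per class and the top-ranked ones kept as features.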

Author 1: Fahmi Salman Nurfikri
Author 2: Adiwijaya

Keywords: Bernoulli model; Chi-Square; feature selection; hadith classification

PDF

Paper 69: Transfer Learning-based One Versus Rest Classifier for Multiclass Multi-Label Ophthalmological Disease Prediction

Abstract: The main objective of this paper is to propose a transfer learning technique for multiclass multi-label ophthalmological disease prediction in fundus images using the one-versus-rest strategy. The proposed technique detects eight categories (seven diseases and one normal class): Normal, Diabetic retinopathy, Cataract, Glaucoma, Age-related macular degeneration, Myopia, Hypertension and Other abnormalities, in fundus images collected and augmented from the Ocular Disease Intelligent Recognition (ODIR) dataset. To enlarge the dataset, no differentiation between left and right eye images was made; the images were fed to a VGG-16 CNN, and eight separate binary models were trained using the one-versus-rest strategy, one per disease plus the normal class. The paper reports the accuracy of each class and of the overall model compared to benchmark papers. Baseline accuracy increased from 89% to almost 91%, and the proposed model drastically improved disease identification: glaucoma prediction increased from 54% to 91%, normal image prediction from 40% to 85.28%, and other-disease prediction from 44% to 88%. Of the eight categories, the proposed model improved the prediction rate in six by using the VGG-16 transfer learning technique with eight one-versus-rest classifiers.
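The one-versus-rest strategy reduces a multi-label problem to independent binary problems, one per class. A minimal sketch of that target decomposition (class names below are from the abstract; the function itself is illustrative):

```python
def one_vs_rest_targets(y_multilabel, classes):
    """Decompose multi-label targets into one binary target vector per
    class, so an independent binary classifier (e.g. a VGG-16 head)
    can be trained for each disease."""
    return {c: [1 if c in labels else 0 for labels in y_multilabel]
            for c in classes}
```

Each binary vector then trains its own model; at inference, every model scores the image and all classes above threshold are predicted, which is what makes the output multi-label.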

Author 1: Akanksha Bali
Author 2: Vibhakar Mansotra

Keywords: Fundus images; one versus rest strategy; transfer learning; VGG-16; augmentation

PDF

Paper 70: Collaborative Multi-Resolution MSER and Faster RCNN (MRMSER-FRCNN) Model for Improved Object Retrieval of Poor Resolution Images

Abstract: Object detection and retrieval is an active area of research. This paper proposes a collaborative approach based on multi-resolution maximally stable extremal regions (MRMSER) and a faster region-based convolutional neural network (FRCNN), suitable for efficient object detection and retrieval in poor-resolution images. The proposed method focuses on improving retrieval accuracy, and the collaborative model overcomes the problems of a faster RCNN model by making use of multi-resolution MSER. Two datasets were used: a vehicle dataset containing three classes of vehicles and the Oxford building dataset with 11 different landmarks. The proposed MRMSER-FRCNN method gives a retrieval accuracy of 84.48% on the Oxford 5k building dataset and 92.66% on the vehicle dataset. Experimental results show that the proposed collaborative approach outperforms the faster RCNN model for poor-resolution query images.

Author 1: Amitha I C
Author 2: N S Sreekanth
Author 3: N K Narayanan

Keywords: Faster RCNN; feature representation; multi-resolution MSER; object detection; object retrieval

PDF

Paper 71: A Framework for Weak Signal Detection in Competitive Intelligence using Semantic Clustering Algorithms

Abstract: Companies nowadays share a lot of data on the web in structured and unstructured formats. These data hold many signals from which innovation can be analyzed and detected using weak signal detection approaches. To gain a competitive advantage, the velocity and volume of data available online must be exploited and processed to extract and monitor any kind of strategic challenge or surprise, whether in the form of opportunities or threats. To capture early signs of change in the environment in a big data context, where data is voluminous and unstructured, this paper presents a framework for weak signal detection that relies on crawling a variety of web sources and on a big-data implementation of text mining techniques for the automatic detection and monitoring of weak signals, using an aggregation of semantic clustering algorithms. The novelty of this paper resides in the framework's capability to scale to an unlimited amount of unstructured data, which requires novel analysis approaches, and in the aggregation of semantic clustering algorithms for better automation and higher accuracy of weak signal detection. A corpus of scientific articles and patents is collected in order to validate the framework and to provide a use case for researchers interested in identifying weak signals in a corpus of data from a specific technological domain.

Author 1: Adil Bouktaib
Author 2: Abdelhadi Fennan

Keywords: Competitive intelligence; apache spark; big data; weak signal detection; web mining; semantic clustering

PDF

Paper 72: Virtual Reality Simulation to Help Decrease Stress and Anxiety Feeling for Children during COVID-19 Pandemic

Abstract: The COVID-19 pandemic has changed people's lives in every aspect: social distancing and the transition from offline to online activity have been applied in order to slow and stop the spread of the virus. This sudden change causes a fairly high level of anxiety and stress in society, especially in children, because of activity restrictions. Various innovations, especially technological ones, have been developed to overcome the problems of distance restrictions that have arisen due to the COVID-19 pandemic. Virtual reality is believed to be one innovation that can reduce anxiety and boredom during activity restrictions, because it creates an artificial environment in which humans can socialize. This research combines the Unity3D and Blender software to build a virtual reality simulation, with VR glasses used to give a real impression of the virtual rooms that have been created. The VR application consists of three environments that children can use to explore a virtual room without needing to be in a crowded atmosphere. Based on the results of pretest and posttest questionnaires administered to 30 participants aged seven to ten, the VR application decreased the level of stress and anxiety in children by one to two levels. Moreover, the application falls in the acceptable area of the SUS scoring system.
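The SUS score mentioned above is a standard usability measure computed from ten 1-5 Likert responses with Brooke's formula; a sketch of that computation (the example responses are hypothetical, not the study's data):

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.
    Odd-numbered items are positively worded (contribute score - 1),
    even-numbered items negatively worded (contribute 5 - score);
    the sum is scaled to a 0-100 score by multiplying by 2.5."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5
```

Scores of roughly 70 and above are conventionally read as acceptable usability.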

Author 1: Devi Afriyantari Puspa Putri
Author 2: Ratri Kusumaningtyas
Author 3: Tsania Aldi
Author 4: Fikri Zaki Haiqal

Keywords: Blender; COVID-19; Unity3D; virtual reality

PDF

Paper 73: Gesture based Arabic Sign Language Recognition for Impaired People based on Convolution Neural Network

Abstract: Research on Arabic Sign Language has achieved outstanding results in identifying gestures and hand signs using deep learning methodology. These gestures, the actions used by hearing-impaired people to communicate, are difficult for ordinary people to comprehend. The recognition of Arabic Sign Language (ArSL) has become a difficult research subject due to variations in ArSL from one territory to another and even within states. The proposed system encapsulates a Convolutional Neural Network, based on machine learning techniques, and utilizes a wearable sensor for the recognition of Arabic Sign Language. The approach employs a system designed to suit all Arabic gestures, which could be used by the impaired people of the local Arabic community with reasonable accuracy. A deep convolutional network is first developed for feature extraction from the data gathered by the sensing devices, which can reliably recognize the 30 hand-sign letters of the Arabic sign language. The hand movements in the dataset were captured using DG5-V hand gloves with wearable sensors, and the CNN technique is used for categorization. The suggested system takes Arabic sign language hand gestures as input and produces vocalized speech as output, achieving a recognition rate of 90%.

Author 1: Rady El Rwelli
Author 2: Osama R. Shahin
Author 3: Ahmed I. Taloba

Keywords: Arabic sign language; convolution neural network; hand movements; sensing device

PDF

Paper 74: Micro Expression Recognition: Multi-scale Approach to Automatic Emotion Recognition by using Spatial Pyramid Pooling Module

Abstract: Facial expression is one of the obvious cues that humans use to express their emotions and a necessary aspect of everyday social communication. However, humans do hide their real emotions in certain circumstances, so facial micro-expressions have been observed and analyzed to reveal true human emotions. Micro-expression is a complicated type of signal that manifests only briefly; hence, machine learning techniques have been used to perform micro-expression recognition. This paper introduces a compact deep learning architecture to classify and recognize human emotions in three categories: positive, negative, and surprise. The study utilizes a deep learning approach so that optimal features of interest can be extracted even with a limited number of training samples. To further improve recognition performance, a multi-scale module based on a spatial pyramid pooling network is embedded into the compact network to capture facial expressions of various sizes. The base model is derived from the VGG-M model and validated using the combined CASMEII, SMIC, and SAMM datasets. Moreover, various configurations of the spatial pyramid pooling layer were analyzed to find the most optimal network setting for the micro-expression recognition task. The experimental results show that the addition of a multi-scale module increases recognition performance. The best network configuration from the experiment is composed of five parallel network branches placed after the second layer of the base model, with pooling kernel sizes of two, three, four, five, and six.
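Spatial pyramid pooling produces a fixed-length multi-scale descriptor by max-pooling a feature map over grids of several sizes and concatenating the results. A minimal NumPy sketch of the general operation (a single-channel map and illustrative grid sizes; not the paper's exact configuration):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a 2-D feature map over an n-by-n grid for each level in
    `levels` and concatenate the pooled maxima into one fixed-length
    multi-scale descriptor."""
    h, w = fmap.shape
    out = []
    for n in levels:
        row_groups = np.array_split(np.arange(h), n)
        col_groups = np.array_split(np.arange(w), n)
        for ri in row_groups:
            for ci in col_groups:
                out.append(fmap[np.ix_(ri, ci)].max())
    return np.array(out)
```

The descriptor length depends only on `levels`, not on the input size, which is what lets the module handle faces of various sizes.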

Author 1: Lim Jun Sian
Author 2: Marzuraikah Mohd Stofa
Author 3: Koo Sie Min
Author 4: Mohd Asyraf Zulkifley

Keywords: Micro expression recognition; facial expression; spatial pyramid pooling module; multi-scale approach; deep learning

PDF

Paper 75: Automated Telugu Printed and Handwritten Character Recognition in Single Image using Aquila Optimizer based Deep Learning Model

Abstract: Machine-printed and handwritten character recognition is a major research topic in several real-time applications, and recent advancements in deep learning and image processing can be employed for it. Telugu Character Recognition (TCR), which transforms printed and handwritten characters into their respective text formats, remains a difficult task in optical character recognition (OCR). In this context, this study introduces an effective deep learning based TCR model for printed and handwritten characters (DLTCR-PHWC). The proposed DLTCR-PHWC technique aims to detect and recognize printed as well as handwritten characters that exist in the same image. First, image preprocessing is performed using an adaptive fuzzy filtering technique. Next, line and character segmentation processes are performed to derive useful regions. In addition, a fusion of EfficientNet and CapsuleNet models is used for feature extraction. Finally, the Aquila optimizer (AO) with a bi-directional long short-term memory (BiLSTM) model is utilized for the recognition process. Detailed experimentation on a Telugu character dataset demonstrates the supremacy of the proposed DLTCR-PHWC technique over recent state-of-the-art approaches.

Author 1: Vijaya Krishna Sonthi
Author 2: S. Nagarajan
Author 3: N. Krishnaraj

Keywords: Optical character recognition; Telugu; deep learning; Aquila optimizer; BiLSTM; handwritten characters; printed characters

PDF

Paper 76: Industrial Revolution 5.0 and the Role of Cutting Edge Technologies

Abstract: IR 4.0 emphasizes the interconnection of machines and systems to achieve optimal performance and productivity gains. IR 5.0 is said to take this a step further by fine-tuning the human-machine connection: it is a collaboration in which automated technology's ultra-fast accuracy combines with human intelligence and creativity. The driving force behind IR 5.0 is customer demand for customization and personalization, necessitating greater human involvement in the production process. As IR 5.0 evolves, we may expect a slew of breakthroughs across various industries. However, merely automating jobs or digitizing processes will not be enough; the finest and most successful businesses will be those that can combine the dual powers of technology and human ingenuity. IR 5.0 focuses on the use of modern cutting-edge technologies (CET), namely AI, IoT, big data, cloud computing, Blockchain, digital twins, edge computing, collaborative robots, and 6G, while leveraging human creativity and intelligence. Wherever possible, IR 5.0 will change industrial processes worldwide by removing mundane, filthy, and repetitive activities from human workers, and intelligent robotics and systems will have unparalleled access to industrial supply networks and production floors. However, to understand and leverage the benefits of IR 5.0 better, there is a need to understand the role of modern CET in Industrial Revolution 5.0. To fill this gap, this article examines IR 5.0's prospects, uses, supporting technologies, and opportunities, along with the issues that must be understood to leverage its potential.

Author 1: Mamoona Humayun

Keywords: Industry 5.0; cutting-edge technologies; Internet of Things; artificial intelligence; big data

PDF

Paper 77: Detecting Distributed Denial of Service Attacks using Machine Learning Models

Abstract: Software Defined Networking (SDN) is a vital technology that decouples the control and data planes in the network. The advantages of this separation include a dynamic, manageable, flexible, and powerful platform. At the same time, a centralized network platform creates security challenges, for instance a Distributed Denial of Service (DDoS) attack on the centralized controller. A DDoS attack is a well-known malicious attempt to disrupt the normal traffic of a targeted server, network, or service by overwhelming the target's infrastructure with a flood of Internet traffic. This paper investigates several machine learning models and employs them in a DDoS detection system, addressing the issue of enhancing DDoS attack detection accuracy using the well-known CICDDoS2019 dataset. The dataset was preprocessed using two main approaches to obtain the most relevant features, and four different machine learning models were trained on it. According to the results obtained from real experiments, the Random Forest model offered the best detection accuracy (99.9974%), an enhancement over recently developed DDoS detection systems.
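The Random Forest detection pipeline can be sketched generically with scikit-learn. The flow features and data below are synthetic stand-ins, not the CICDDoS2019 dataset or the paper's preprocessing; the sketch only illustrates the train/evaluate loop:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical flow features: [packet rate, bytes/packet, flow duration].
benign = rng.normal([100, 500, 2.0], [30, 150, 0.5], size=(500, 3))
attack = rng.normal([5000, 60, 0.1], [1000, 20, 0.05], size=(500, 3))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = DDoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

On the real dataset, the feature-selection step described in the abstract would replace the hand-picked columns here.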

Author 1: Ebtihal Sameer Alghoson
Author 2: Onytra Abbass

Keywords: Cybersecurity; distributed denial of service (DDoS); machine learning (ML); Canadian Institute for Cybersecurity - Distributed Denial of Service (CICDDoS2019) dataset

PDF

Paper 78: A Patient Care Predictive Model using Logistic Regression

Abstract: Medical treatments and operations in hospitals are divided into in-patient and out-patient procedures. It is critical for patients to know and understand the difference between these two forms of treatment, since it affects the duration of a patient's stay in a hospital or medical institution as well as the cost of treatment. In today's era of information, a person's skills and expertise may be put to good use by automating activities wherever possible. A medical service is termed in-patient care if a doctor issues an order and the patient is admitted to the hospital on that order, whereas a patient seeking out-patient care does not need to spend the night in a hospital. Choosing between in-patient and out-patient care is usually a matter of how involved the doctor wants to be with the patient's treatment. With the aid of numerous data points regarding the patients, their illnesses, and lab tests, our main objective is to develop a system, as part of the hospital automation system, that predicts whether a patient should be given in-patient or out-patient care. The main idea of the paper is to develop a logistic regression model that predicts whether a patient needs to be treated as an in-patient or an out-patient depending on the results of laboratory tests; no prior research had examined how logistic regression performs on this dataset. The results show that logistic regression gives an accuracy of 75%, an F1-score of 73%, a precision of 74%, and a recall of 74%.
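A minimal sketch of this setup, with the same four evaluation metrics the abstract reports. The lab-test features and the synthetic labelling rule are assumptions, not the paper's data:

```python
# Logistic regression on laboratory-test features to predict
# in-patient (1) vs. out-patient (0) care. Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

rng = np.random.default_rng(1)
n = 1000
# Hypothetical standardized lab features, e.g. haemoglobin, leucocytes, glucose
X = rng.normal(size=(n, 3))
# Synthetic rule: a noisy linear combination of lab values drives admission
y = (X @ np.array([1.5, -1.0, 2.0]) + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print("accuracy ", accuracy_score(y_te, pred))
print("F1       ", f1_score(y_te, pred))
print("precision", precision_score(y_te, pred))
print("recall   ", recall_score(y_te, pred))
```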

Author 1: Harkesh J. Patel
Author 2: Jatinderkumar R. Saini

Keywords: Health-care; inpatient care; logistic regression; machine learning model; outpatient care; stacking classifier

PDF

Paper 79: Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications

Abstract: 3D gesture recognition and tracking based on augmented reality and virtual reality have become a major research interest because of advanced technology in smartphones. By interacting with 3D objects in augmented reality and virtual reality, users gain a better understanding of the subject matter, though customized hardware support is often required and overall experimental performance needs to be satisfactory. This research investigates various current vision-based 3D gestural architectures for augmented reality and virtual reality. The core goal of this research is to present an analysis of methods and frameworks, followed by experimental performance, on recognition and tracking of hand gestures and interaction with virtual objects in smartphones. This research categorized experimental evaluation for existing methods into three categories: hardware requirements, documentation before the actual experiment, and datasets. These categories are expected to ensure robust validation for practical usage of 3D gesture tracking based on augmented reality and virtual reality. Hardware setup includes types of gloves, fingerprints, and types of sensors. Documentation includes classroom setup manuals, questionnaires, recordings for improvement, and stress test applications. The last part of the experimental section covers the datasets used by existing research. The overall comprehensive illustration of various methods, frameworks, and experimental aspects can significantly contribute to 3D gesture recognition and tracking based on augmented reality and virtual reality.

Author 1: Zainal Rasyid Mahayuddin
Author 2: A F M Saifuddin Saif

Keywords: Augmented reality; virtual reality; 3D gesture tracking

PDF

Paper 80: A Framework for Secure Healthcare Data Management using Blockchain Technology

Abstract: In the current era of smart cities and smart homes, patient data such as names, personal details, and disease descriptions are highly insecure and frequently violated. These details are stored digitally in an Electronic Health Record (EHR). The EHR can be useful for future medical research to enhance patients' healthcare and the performance of clinical practices. These data are often not accessible to patients and their caretakers, yet they are readily available to unauthorized external agencies and are easily breached by hackers. This creates an imbalance between data accessibility and security, which can be resolved by using blockchain technology. The blockchain creates an immutable ledger and decentralizes transactions. The blockchain has three key features, namely security, transparency, and decentralization. These key features make the system highly secure, prevent data manipulation, and ensure it can only be accessed by authorized persons. In this paper, a blockchain-based security framework is proposed to secure the EHR and provide a safe way for patients, their caretakers, doctors, and insurance agents to access patients' clinical data using cryptography and decentralization. The proposed system also maintains the balance between data accessibility and security. This paper also establishes how the proposed framework helps doctors, patients, caretakers, and external authorities to securely store and access patients' medical data in the EHR.
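The "immutable ledger" property the abstract relies on can be illustrated with a minimal hash-chained record store. This is a conceptual sketch only, not the paper's framework; the record fields are made up:

```python
# Minimal hash-chained ledger: each block commits to the previous block's
# hash, so tampering with any stored EHR entry is detectable on verification.
import hashlib, json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev": prev}
    block["hash"] = block_hash({"record": record, "prev": prev})
    chain.append(block)

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != block_hash({"record": block["record"], "prev": block["prev"]}):
            return False
        prev = block["hash"]
    return True

chain = []
append(chain, {"patient": "P-001", "note": "admitted"})
append(chain, {"patient": "P-001", "note": "lab results recorded"})
print(verify(chain))                        # intact chain verifies
chain[0]["record"]["note"] = "discharged"   # simulated tampering
print(verify(chain))                        # tampering breaks verification
```

A permissioned blockchain adds consensus and access control on top of this basic integrity mechanism.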

Author 1: Ahmed I. Taloba
Author 2: Alanazi Rayan
Author 3: Ahmed Elhadad
Author 4: Amr Abozeid
Author 5: Osama R. Shahin
Author 6: Rasha M. Abd El-Aziz

Keywords: Blockchain; electronic health record (EHR); storage; security; accessibility; cryptography; decentralization

PDF

Paper 81: Usability Evaluation of Web Search User Interfaces from the Elderly Perspective

Abstract: The elderly population is increasing in many countries, often with health and incapacity challenges that largely disengage them from the world of digital tools such as the Internet. The elderly browse the Internet daily to obtain needed information through various search engines via search user interfaces (UIs). Earlier technologies were developed to improve daily life, but the specific needs of the elderly were often neglected. Currently available online search UIs are well developed, but their designs did not specifically consider usability for the elderly. This research aims to evaluate web search UIs from the elderly perspective to identify usability issues in existing search UIs and recommend improvements to web search UI designs. The observation technique was used to evaluate two web search UIs (the Google interface and the Bing interface) with fifteen participants aged 60 years and above. The System Usability Scale (SUS) questionnaire was applied to measure user satisfaction with the two interfaces. The data collected from the observations were analyzed using content analysis, while the data acquired from the questionnaires were analyzed using the t-test. The results revealed a statistically significant difference in SUS ratings, with Google scoring 73.5 and Bing scoring 66.5, indicating that users prefer the Google interface over the Bing interface. In addition, usability issues were identified, and recommendations to improve the design of the search UI were suggested. These findings contribute to a better understanding of the issues that prevent elderly users from using web search UIs and provide valuable feedback to designers on improving the UI to better suit the elderly.
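Scores such as 73.5 and 66.5 come from the standard SUS scoring rule (odd items contribute rating − 1, even items 5 − rating, the sum scaled by 2.5 onto 0-100). A sketch, with made-up responses:

```python
# Standard System Usability Scale (SUS) scoring for one respondent's
# ten Likert ratings (1-5, item 1 first). Yields a score between 0 and 100.
def sus_score(responses):
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expect ten ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        # odd-numbered items (index 0, 2, ...) score r-1; even items score 5-r
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible: 100.0
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

The per-interface scores in the study are the mean of such per-respondent scores across the fifteen participants.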

Author 1: Khalid Krayz Allah
Author 2: Nor Azman Ismail
Author 3: Layla Hasan
Author 4: Wong Yee Leng

Keywords: Usability; Google interface; Bing interface; SUS questionnaire; web search user interfaces; observation method

PDF

Paper 82: A Novel Framework for Cloud based Virtual Machine Security by Change Management using Machine Learning

Abstract: With the increased growth in cloud-based application development and hosting, the demand for higher application and data security is also increasing. Cloud-based applications are hosted on virtual machines, and the data generated or used by these applications are also hosted inside the virtual machines. Hence, the security of the applications and the data can be achieved only by securing the virtual machines. There are a number of challenges in achieving this. Firstly, virtual machines are large, and generic cryptographic methods are primarily designed to handle smaller amounts of data, so the applicability of these methods to virtual machines must be analyzed. Secondly, the additional time required for applying cryptographic algorithms to the virtual machines impacts the response time of the applications, which in turn impacts service level agreements. Finally, virtual machines are highly vulnerable during migration, as they are transferred inside the data center networks as plain text. A good number of research attempts have tried to solve these challenges. Nonetheless, most of the parallel research works have compromised either on the strength of the security protocols or on the time taken to apply the cryptographic methods. The need of the research is therefore to identify attacks based on the characteristics of connection requests and to reduce the time for the encryption and decryption of the virtual machines. This work proposes a novel framework that detects attacks with a machine learning driven algorithm by analyzing connection properties, and prevents attacks by selectively encrypting the virtual machines using another machine learning driven algorithm. This work demonstrates nearly 98% accuracy in detecting both new and existing attack types.

Author 1: S. Radharani
Author 2: V. B. Narasimha

Keywords: DevOps; deep clustering; VM security; cloud security; VM versioning; progression cryptography

PDF

Paper 83: Comparison of Convolutional Neural Network Architectures for Face Mask Detection

Abstract: In 2020, the World Health Organization (WHO) declared that the coronavirus (COVID-19) pandemic was causing a worldwide health disaster. One of the most effective protections for reducing the spread of COVID-19 is wearing a face mask in densely populated areas. In various countries, it has become mandatory to wear a face mask in public areas. Monitoring large numbers of individuals for compliance with the new rule can be a challenging task. A cost-effective way to monitor compliance with this new law is through computer vision and Convolutional Neural Networks (CNNs). This paper demonstrates the application of transfer learning on pre-trained CNN architectures, namely AlexNet, GoogleNet, ResNet-18, ResNet-50, and ResNet-101, to classify whether or not a person in an image is wearing a face mask. The number of training images is varied in order to compare the performance of these networks. It is found that AlexNet performed the worst, requiring 400 training images to achieve specificity, accuracy, precision, and F-score above 95%, whereas GoogleNet and ResNet achieve the same level of performance with 10 times fewer training images.
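The four reported metrics all derive from the binary confusion matrix. A sketch of their definitions; the counts below are illustrative, not the paper's results:

```python
# Specificity, accuracy, precision, and F-score from a binary confusion
# matrix (masked = positive class). Counts here are made-up examples.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "specificity": specificity, "f_score": f_score}

# e.g. 95 of 100 masked faces detected, 97 of 100 unmasked correctly rejected
print(metrics(tp=95, tn=97, fp=3, fn=5))
```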

Author 1: Siti Nadia Yahya
Author 2: Aizat Faiz Ramli
Author 3: Muhammad Noor Nordin
Author 4: Hafiz Basarudin
Author 5: Mohd Azlan Abu

Keywords: Convolution neural network; deep learning; transfer learning; computer vision; facemask detection; COVID-19

PDF

Paper 84: Encoding LED for Unique Markers on Object Recognition System

Abstract: In this paper, a new approach to unique markers for detecting and tracking moving objects with an encoding LED marker is presented. In addition, an LED spotlight system that can direct light toward the target is proposed. The encoding is done by giving the LED a unique blinking pattern, so that the camera and a servomotor acting as an object recognition system recognize only the unique marker given by the LED. In this work, the camera with OpenCV could capture the unique marker in all variants of blinking patterns. A unique marker is important in an object recognition system so the camera can identify the object marked by our unique marker and ignore all other markers that might be captured by the camera. In addition, an analysis of the PWM signal of the LED is carried out to determine the characteristics of the LEDs in each color, determine the accuracy of the duty cycle, and assess the use of the bright-dim method on the LEDs. The results show that the highest accuracy is obtained with a 50% duty cycle, with the on and off times set to 1 second for all LED colors. The benefit of the proposed system is confirmed by implementing an integrated control system with the unique marker. The effectiveness of the blinking LED against other laser interference is also discussed.
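The blink-pattern idea can be simulated in a few lines. This is an illustrative sketch, not the paper's firmware or OpenCV pipeline; the marker code, bit period, and sampling rate are assumptions:

```python
# Simulate an LED blinking a unique bit pattern (one bit per period), then
# recover the pattern on the camera side by majority vote per bit period.
def sample_led(pattern, period_s=1.0, sample_hz=10):
    """Return on/off samples of an LED blinking `pattern`."""
    samples = []
    for bit in pattern:
        samples.extend([bit] * int(period_s * sample_hz))
    return samples

def decode(samples, period_s=1.0, sample_hz=10):
    """Recover the bit pattern by majority vote within each bit period."""
    step = int(period_s * sample_hz)
    bits = []
    for i in range(0, len(samples), step):
        window = samples[i:i + step]
        bits.append(1 if sum(window) * 2 >= len(window) else 0)
    return bits

marker = [1, 0, 1, 1, 0, 1, 0, 0]   # a hypothetical unique marker code
received = sample_led(marker)
print(decode(received) == marker)   # the decoder recovers the code
```

The majority vote per period gives some robustness against dropped or noisy frames, in the same spirit as matching the marker against other captured light sources.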

Author 1: Wildan Pandji Tresna
Author 2: Umar Ali Ahmad
Author 3: Isnaeni
Author 4: Reza Rendian Septiawan
Author 5: Iyon Titok Sugiarto
Author 6: Alex Lukmanto Suherman

Keywords: Encoding LED; PWM signal; target markers; object recognition system

PDF

Paper 85: Inherent Feature Extraction and Soft Margin Decision Boundary Optimization Technique for Hyperspectral Crop Classification

Abstract: Crop productivity and disaster management can be enhanced by employing hyperspectral images. Hyperspectral imaging is widely utilized in identifying and classifying objects on the ground surface for various agriculture applications such as crop mapping, flood management, and identifying crops damaged by flood or drought. Hyperspectral imaging-based crop classification is a very challenging task because of spectral dimensions and poor spatial feature representation. Designing efficient feature extraction and dimension reduction techniques can address high dimensionality problems. Nonetheless, achieving good classification accuracy with minimal computation overhead is a challenging task in hyperspectral imaging-based crop classification. To meet these research challenges, this work presents hyperspectral imaging-based crop classification using a soft-margin decision boundary optimization (SMDBO) based Support Vector Machine (SVM) along with an Image Fusion-Recursive Filter (IFRF) and Inherent Feature Extraction (IFE). In this work, IFRF is used for reducing spectral features with meaningful representation. Then, IFE is used for differentiating the physical properties and shading elements of different objects spatially. Both extracted spectral and spatial features are trained using SMDBO-SVM for hyperspectral object classification. Using SMDBO-SVM for hyperspectral object classification aids in addressing class imbalance issues; thus, the proposed IFE-SMDBO-SVM model achieves better accuracy with minimal misclassification in comparison with state-of-the-art statistical and Deep Learning (DL) based hyperspectral object classification models on the standard Indian Pines and Pavia University datasets.

Author 1: M. C. Girish Babu
Author 2: Padma M. C

Keywords: Crop classification; decision boundary; deep learning; dimensionality; feature selection; hyperspectral image; support vector machines

PDF

Paper 86: Arabic Sentiment Analysis for Multi-dialect Text using Machine Learning Techniques

Abstract: Social media networks have facilitated the availability and accessibility of a wide range of information and data. They allow users to share and express their opinions, and present appraisals of top news and evaluations of movies, products, and services. This development has been driven by a well-known field called Sentiment Analysis (SA). Compared to the research conducted on English Sentiment Analysis (ESA), little effort has been devoted to Arabic Sentiment Analysis (ASA). Arabic is a morphologically rich language that poses significant challenges to Natural Language Processing (NLP) systems. The purpose of this paper is to enrich Arabic Sentiment Analysis by proposing a sentiment analysis model for analyzing Arabic multi-dialect text using machine learning algorithms. The proposed model is applied to two datasets: ASTD Egyptian-dialect tweets and RES multi-dialect restaurant reviews. Different evaluation measures were used to evaluate the proposed model and identify the best-performing classifiers. The findings revealed that the developed model outperformed two other research works in terms of accuracy, precision, and recall. The Bernoulli Naive Bayes (B-NB) classifier achieved the best accuracy of 82% on the ASTD Egyptian-dialect tweets dataset, while the SVM classifier scored the best accuracy of 87.7% on the RES multi-dialect reviews dataset.
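A minimal sketch of the Bernoulli Naive Bayes pipeline the paper evaluates. Toy English reviews stand in for the Arabic multi-dialect datasets, and the preprocessing is reduced to binary term presence (which is what the Bernoulli event model expects):

```python
# Bernoulli Naive Bayes sentiment classifier over binary term-presence
# features; the training texts are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

train_texts = ["great food and service", "really loved this place",
               "terrible service", "awful food never again",
               "loved the friendly staff", "terrible awful experience"]
train_labels = [1, 1, 0, 0, 1, 0]   # 1 = positive, 0 = negative

# binary=True records term presence/absence, matching the Bernoulli model
clf = make_pipeline(CountVectorizer(binary=True), BernoulliNB())
clf.fit(train_texts, train_labels)
print(clf.predict(["loved the food", "awful place"]))  # → [1 0]
```

For Arabic text, the vectorizer would additionally need dialect-aware tokenization and normalization, which is where much of the paper's effort lies.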

Author 1: Aya H. Hussein
Author 2: Ibrahim F. Moawad
Author 3: Rasha M. Badry

Keywords: Arabic sentiment analysis (ASA); Arabic tweets; sentiment analysis (SA); natural language processing (NLP); machine learning (ML)

PDF

Paper 87: A Systematic Review on e-Wastage Frameworks

Abstract: Electronic devices targeted at end users have become essential parts of daily life. Traditional methodologies have changed drastically, resulting in efficient modes of communication and fast information retrieval. As demand and production grow exponentially, patterns of sales, storage, destruction, and collection have also changed. This paper analyses many such behaviors of electronic waste management and recommends solutions such as recycling management and the different directives and policies that need to be followed. The authors have emphasized providing substantial information useful to the regulating authorities responsible for waste management, the manufacturers of various electronic products, and policy makers. Through an extensive review of electronic wastage, the authors have emphasized three variables (sales, stock, and lifespan) for replacing or upgrading older products with advanced versions. The root causes of electronic wastage are found in industrializing countries like India, China, Vietnam, Pakistan, the Philippines, Ghana, and Nigeria, whereas industrialized countries also play an equally important role in its generation. This paper signifies the importance of e-waste management practice in reducing emerging electronic waste hazards. The authors focus on today's demand for electronic devices, the importance of e-waste management, and management practices. The paper presents key findings based on survey data regarding the lack of regulation to manage e-waste. The review concludes that lack of regulation and improper awareness are the basic factors responsible for e-wastage and require major focus to manage e-waste.

Author 1: Sultan Ahmad
Author 2: Sudan Jha
Author 3: Abubaker E. M. Eljialy
Author 4: Shakir Khan

Keywords: e-Wastage; e-Wastage management; barriers; policy; findings; e-Wastage regulations; industrializing countries; industrialized countries

PDF

Paper 88: SCADA and Distributed Control System of a Chemical Products Dispatch Process

Abstract: The objective of this article is to show the application of a supervisory control and data acquisition system for an industrial network handling chemical products. The design of the control logic and the architecture of the industrial network over Profibus-DP and Ethernet protocols is described, with an ET-200 peripheral terminal station, the Siemens CP433 programmable logic controller, and level sensors coupled to radar-type transmitters with an accuracy of ±0.5 mm. As findings of the implementation of the control system, it was possible to demonstrate optimal regulation of the filling system for three-compartment trucks with a capacity of 300 kilograms each, eliminating spills of the chemical product and reducing polluting particles in the work environment. Finally, as a direct consequence, the productivity of the company was improved, which is a relevant aspect at the level of planning, management, and direction.

Author 1: Omar Chamorro-Atalaya
Author 2: Dora Arce-Santillan
Author 3: Guillermo Morales-Romero
Author 4: Nicéforo Trinidad-Loli
Author 5: Adrián Quispe-Andía
Author 6: César León-Velarde

Keywords: Distributed control; supervision; acquisition; chemical products; dispatch of chemical supplies

PDF

Paper 89: Supervised Learning through Classification Learner Techniques for the Predictive System of Personal and Social Attitudes of Engineering Students

Abstract: In this competitive scenario of the educational system, higher education institutions use intelligent learning tools and techniques to predict the factors of student academic performance. Given this, the article aims to determine a supervised learning model for a predictive system of the personal and social attitudes of university engineering students. For this, the machine learning Classification Learner technique is used by means of the Matlab R2021a software. The results reflect a predictive system capable of classifying the four satisfaction classes (1: dissatisfied, 2: not very satisfied, 3: satisfied, and 4: very satisfied) with an accuracy of 91.96%, a precision of 79.09%, a sensitivity of 75.66%, and a specificity of 92.09%, regarding the students' perception of their personal and social attitudes. As a result, the institution will be able to take measures to monitor and correct the strengths and weaknesses of each variable related to satisfaction with the quality of the educational service.

Author 1: Omar Chamorro-Atalaya
Author 2: Soledad Olivares-Zegarra
Author 3: Alejandro Paredes-Soria
Author 4: Oscar Samanamud-Loyola
Author 5: Marco Anton-De los Santos
Author 6: Juan Anton-De los Santos
Author 7: Maritte Fierro-Bravo
Author 8: Victor Villanueva-Acosta

Keywords: Supervised learning; classification learner; predictive system; personal and social attitudes; engineering students

PDF

Paper 90: Workflow Scheduling and Offloading for Service-based Applications in Hybrid Fog-Cloud Computing

Abstract: Fog and edge computing have emerged as important paradigms to address many challenges related to time-sensitive and real-time applications, high network loads, user privacy, security, and others. While these developments offer huge potential, many efforts are needed to study and design applications and systems for these emerging computing paradigms. This paper provides a detailed study of workflow scheduling and offloading for service-based applications. We develop different models of cloud, fog, and edge systems and study the scheduling of workflows (such as scientific and machine learning workflows) using a range of system sizes and application intensities. Firstly, we develop several Markov models of cloud, fog, and edge systems and compute the steady-state probabilities for system utilization and stability. Secondly, using the steady-state probabilities, we define a range of system metrics to study the performance of workflow scheduling and offloading, including network load, response delay, energy consumption, and energy costs. An extensive investigation of application intensities and cloud, fog, and edge system sizes reveals that significant benefits can be accrued from the use of fog and edge computing in terms of low network loads, response times, energy consumption, and costs.
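The steady-state computation underlying such Markov models solves π P = π with Σπ = 1. A sketch on an illustrative three-state chain (the states and transition matrix are made up, e.g. idle / busy / overloaded server states):

```python
# Steady-state distribution of a discrete-time Markov chain: solve
# (P^T - I) pi = 0 together with the normalisation constraint sum(pi) = 1.
import numpy as np

P = np.array([[0.7, 0.3, 0.0],    # illustrative transition probabilities
              [0.2, 0.6, 0.2],
              [0.0, 0.5, 0.5]])

A = np.vstack([P.T - np.eye(3), np.ones(3)])  # append normalisation row
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)        # long-run fraction of time spent in each state
print(pi @ P)    # stationarity check: equals pi
```

System metrics such as utilization or expected response delay are then weighted sums over these steady-state probabilities.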

Author 1: Saleh M. Altowaijri

Keywords: Workflow scheduling; workflow offloading; cloud computing; fog computing; edge computing; scientific workflows

PDF

Paper 91: Advanced Machine Learning Algorithms for House Price Prediction: Case Study in Kuala Lumpur

Abstract: House prices are affected significantly by several factors, and determining a reasonable house price involves a calculative process. This paper proposes advanced machine learning (ML) approaches for house price prediction. Two recent advanced ML algorithms, namely LightGBM and XGBoost, were compared with two traditional approaches: multiple regression analysis and ridge regression. This study utilizes a secondary dataset called 'Property Listing in Kuala Lumpur', gathered from Kaggle and Google Maps, containing 21984 observations with 11 variables, including a target variable. The performance of the ML models was evaluated using mean absolute error (MAE), root mean square error (RMSE), and the adjusted R-squared value. The findings revealed that the house price prediction model based on XGBoost showed the highest performance, generating the lowest MAE and RMSE and the adjusted R-squared value closest to one, consistently outperforming the other ML models. A new dataset consisting of 1300 samples was used at the model deployment stage. It was found that the percentage variance between the actual and predicted prices was relatively small, which indicates that this model is reliable and acceptable. This study can greatly assist in predicting future house prices and the establishment of real estate policies.
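The three evaluation measures can be sketched directly from their definitions; here n is the number of samples and p the number of predictors, and the price data are synthetic stand-ins for the Kuala Lumpur listings:

```python
# MAE, RMSE, and adjusted R-squared for a regression model's predictions.
import numpy as np

def evaluate(y_true, y_pred, p):
    n = len(y_true)
    err = y_true - y_pred
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    ss_res = (err ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)  # penalise extra predictors
    return mae, rmse, adj_r2

rng = np.random.default_rng(2)
y_true = rng.normal(500_000, 150_000, size=200)    # hypothetical prices
y_pred = y_true + rng.normal(0, 20_000, size=200)  # a good model's output
print(evaluate(y_true, y_pred, p=10))
```

An adjusted R-squared close to one with low MAE and RMSE is exactly the profile the paper reports for XGBoost.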

Author 1: Shuzlina Abdul-Rahman
Author 2: Nor Hamizah Zulkifley
Author 3: Ismail Ibrahim
Author 4: Sofianita Mutalib

Keywords: House price; house price prediction; machine learning; property; regression analysis

PDF

Paper 92: Prediction of Tourist Visit in Taman Negara Pahang, Malaysia using Regression Models

Abstract: Tourism is among the significant sources of income for Malaysia, and Taman Negara Pahang is one of Malaysia's tourism spots and part of Malaysia's heritage in achieving the Sustainable Development Goals (SDG). It has attracted many international and local tourists with its richness in flora and fauna. Currently, the information on tourists' visits is not properly analyzed. This study integrates internal and public information to analyze the visits. The regression models used are multiple linear regression, support vector regression, and decision tree regression to predict the tourism demand for Taman Negara, Malaysia, and the best model was deployed. Predictive analytics can support the decision-making process for tourism destination management. When the management gets a heads-up on future demand, they can plan strategically and become more aware of the factors influencing tourism demand, such as tourists' web search engine behavior regarding accommodation, facilities, and attractions. Determining the factors affecting tourism demand is the first objective. The total number of visitors was set as the target variable in the modeling process. A total of 30 models were generated by tuning the cross-validation parameters. This study concluded that the best model is multiple linear regression, due to its lower root mean square error (RMSE) value.

Author 1: Sofianita Mutalib
Author 2: Athila Hasya Razali
Author 3: Siti Nur Kamaliah Kamarudin
Author 4: Shamimi A Halim
Author 5: Shuzlina Abdul-Rahman

Keywords: Regression models; SDG; Taman Negara Pahang; tourist analytics

PDF

Paper 93: Non-functional Requirements (NFR) Identification Method using FR Characters based on ISO/IEC 25023

Abstract: Research shows that software quality depends on Functional Requirements (FR) and Non-Functional Requirements (NFR). Developers identify NFR attributes by interviewing stakeholders. The difficulty of identifying NFR attributes means quality requirements are often ignored. The basic concept of software quality measurement is measuring the quality of the software product. During product-based quality measurement, repetition of the software development process may occur. The factors for measuring software product quality are not suitable for NFR identification. These differences result in the software development process repeating itself and in additional costs. This research proposes easy NFR attribute identification using FR characters. The tight relations between NFR and FR are obtained by extending the NFR measurement in ISO/IEC 25023 to the programming code level, then generalizing to obtain the FR character. The generalization uses the Grounded Theory method. The result is an NFR attribute identification method using FR characters based on ISO/IEC 25023. The analyst or programmer can identify NFR attributes from the FR using the FR character in the requirements stage. This research produces an NFR identification method that has been validated through experiments with several programmers and experts. In the tests, programmers identify NFR using the FR character method, and the similarity of the resulting NFR is measured. The results show a similarity level above 75%.

Author 1: Nurbojatmiko
Author 2: Eko K. Budiardjo
Author 3: Wahyu C. Wibowo

Keywords: Non functional requirements; FR character; ISO/IEC 25023; NFR identification

PDF

Paper 94: Blockchain-oriented Inter-organizational Collaboration between Healthcare Providers to Handle the COVID-19 Process

Abstract: Collaborative business activities have aroused great interest from organizations because of the benefits they offer. However, sharing data, services, and resources and exposing them to external use can discourage organizations from engaging in collaboration. Therefore, the need for advanced mechanisms to ensure trust between the different parties involved is paramount. In this context, blockchain and smart contracts are promising solutions for performing choreography processes. However, the seamless integration of these technologies as non-functional requirements in the design and implementation phases of inter-organizational collaborative activities is a challenging task, as reported in the literature. Consequently, the proposed approach aims to extend the modeling and implementation of the choreography lifecycle based on service-oriented processes. This is fulfilled by integrating blockchain transactions and smart contract calls to allow collaboration and interoperability between different entities while guaranteeing trust and auditability. Moreover, to conduct this extension efficiently, we use a BPMN choreography diagram combined with Finite State Automata to ensure meticulous modeling that targets the processes' internal interactions. Hyperledger Fabric is used as a permissioned blockchain for the proof-of-concept implementation. A use case of COVID-19 collaborative processes is used to evaluate our approach, which aims to guarantee fluid collaboration between healthcare providers and epidemiological entities at a national scale in Morocco.

Author 1: Ilyass El Kassmi
Author 2: Zahi Jarir

Keywords: Blockchain; inter-organizational collaboration; choreography; permissioned blockchain; business process management; COVID-19

PDF

Paper 95: A Language Tutoring Tool based on AI and Paraphrase Detection

Abstract: A language tutoring tool (LTT) helps users learn a language through casual, human-like conversations. Natural language understanding (NLU) and natural language generation (NLG) are two key components of an LTT. In this paper, we propose a paraphrase detection algorithm that is used as the building block of the NLU. Our proposed tree-LSTM with self-attention for paraphrase detection achieves an accuracy of 87% with only 6.5M parameters, making it more robust and lighter than existing paraphrase detection algorithms. Furthermore, we discuss an LTT prototype using the proposed algorithm, featuring components for message analysis, grammar detection, dialogue management, and response generation. Each component is discussed in detail in the methodology section of the paper.

Author 1: Anas Basalamah

Keywords: LTT; NLG; NLU; paraphrase detection; LSTM

PDF

Paper 96: Secured 6-Digit OTP Generation using B-Exponential Chaotic Map

Abstract: Today, traditional username and password systems are becoming less popular on the internet due to their vulnerabilities: they are prone to replay attacks and eavesdropping. During the Coronavirus pandemic, most important transactions take place online, so a more secure method such as one-time password generation is required to avoid online fraud. Several techniques exist for generating one-time passwords, and with them it has become possible to overcome the drawbacks posed by traditional username and password systems. The one-time password is a two-way authentication technique, so secure one-time password generation is very important. Current methods of one-time password generation are time-consuming and consume a lot of memory on backend servers. The 4-digit one-time password system limits the password space to 9,999 possibilities, and with advanced deep learning approaches and faster computing it is possible to break the existing generation methods. Hence we need a system that is not vulnerable to predictive learning algorithms. We propose a 6-digit one-time password generation technique based on a B-exponential chaotic map. The proposed 24-bit (6-digit) one-time password system offers 120 times higher security compared to traditional 4-digit systems, with a faster backend computing system that selects 24 bits out of 10^8 bits in 89 seconds, at 1.09 kilobits per millisecond. The proposed method can be used for online transactions, online banking, and even automated teller machines.
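The chaotic-map idea can be sketched in a few lines. The B-exponential map itself is defined in the paper; as a stand-in, this sketch iterates the classic logistic map x_{n+1} = r·x_n·(1 − x_n) and harvests one decimal digit per iterate, so the map choice, the burn-in length, and the digit-extraction rule are all illustrative assumptions rather than the authors' scheme.

```python
def chaotic_otp(seed: float, r: float = 3.99, digits: int = 6) -> str:
    """Derive a 6-digit OTP from a chaotic map (illustrative sketch).

    The logistic map stands in for the paper's B-exponential map; the
    seed would come from a shared secret plus a per-request nonce.
    """
    x = seed
    for _ in range(100):            # burn-in: decorrelate the orbit from the seed
        x = r * x * (1.0 - x)
    out = []
    for _ in range(digits):
        x = r * x * (1.0 - x)
        out.append(str(int(x * 10) % 10))   # first decimal digit of each iterate
    return "".join(out)

print(chaotic_otp(0.4213))   # same seed -> same OTP; a tiny seed change diverges
```

Because the map is deterministic, server and client derive the same OTP from the same seed, while the chaotic sensitivity to the seed is what resists prediction.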

Author 1: Rasika Naik
Author 2: Udayprakash Singh

Keywords: One-time password generation; B-exponential chaotic map; 6-digit one-time password; online transactions; security

PDF

Paper 97: Mobile Application Aimed at Older Adults to Increase Cognitive Capacity

Abstract: This research work focuses on people with dementia of the Alzheimer’s type since, among the types of dementia, it is the most common worldwide. In Peru, more than 200 thousand adults over 60 years of age suffer from this disease, and many others do not yet know it or are in its initial stage. Therefore, it was decided to create a prototype of a mobile application with memory games, riddles, reminders, and different types of physical activities to perform during the day. The Scrum methodology was implemented to promote good practices for team and collaborative work across its phases, from inception to launch of the product, which is the mobile application. In addition, Balsamiq was used as a prototype design tool, and the goal of creating the prototype for the application was achieved. Positive results were obtained in terms of user and customer satisfaction. This will benefit older adults by improving their cognitive ability, enabling them to perform their daily activities in the best way and to socialize with family and friends.

Author 1: Ricardo Leon-Ayala
Author 2: Gerald Gómez-Cortez
Author 3: Laberiano Andrade-Arenas

Keywords: Alzheimer’s; balsamiq; mobile; prototype; scrum

PDF

Paper 98: Implementation of an Expert System for Automated Symptom Consultation in Peru

Abstract: Human life is fragile, and people are attacked by different diseases throughout their lives. Neglecting or ignoring a disease because it is considered minor can be fatal, yet many people do not want to attend a health center; instead, they search for their symptoms on the Internet and find pages with false information. That is the problem addressed in this investigation. The objective of the research is to implement an expert system by creating a web page that provides reliable information when a user enters their symptoms. This was achieved based on rule logic developed in Prolog: when a user fills out the questionnaire, the expert system follows the rules to arrive at the desired diagnosis. All of these steps were carried out using the Buchanan methodology. The result was improved accessibility of truthful information through the Internet, facilitating appointment management for users with a serious illness, or treatment in the case of a minor illness. The beneficiaries of the research were the population requiring the automated consultation application.

Author 1: Gilson Vasquez Torres
Author 2: Luis Lunarejo Aponte
Author 3: Laberiano Andrade-Arenas

Keywords: Automated query; buchanan; expert system; prolog; symptoms

PDF

Paper 99: Implementation of a Web System to Detect Anemia in Children of Peru

Abstract: Nowadays, anemia is considered a worldwide problem that not only seriously affects health but also has economic and social consequences. Therefore, this work seeks to detect anemia with a non-invasive method in a quick, simple, and low-cost way. In this research, a web system was designed applying the Scrum methodology to detect anemia and simplify the detection process in Peruvian children. This study presents as a result a technological prototype that helped in the diagnosis of anemia; at the same time, it provides food recommendations to patients to combat anemia efficiently, with a variety of recipes and ingredients available in any home, helping in the recovery process. In addition, the analysis carried out on children with anemia in Peru shows that Puno is the most affected department; in the capital, Lima, the most affected district is Callao. However, these numbers are expected to drop considerably in the coming years.

Author 1: Ricardo Leon Ayala
Author 2: Noe Vicente Rosas
Author 3: Laberiano Andrade-Arenas

Keywords: Anemia; diagnosis; health; scrum; web system

PDF

Paper 100: Relationship between Stress and Academic Performance: An Analysis in Virtual Mode

Abstract: This research work analyzes the relationship between stress and academic performance of engineering students at the University of Sciences and Humanities in Peru, in the context of the pandemic. During this period, classes at the university were held virtually, and the difficulties students face in attending their classes were identified, such as lack of connectivity, family and financial problems, and anxiety. The objective of the research is to analyze the relationship between stress and academic performance of engineering students. The work follows a mixed approach, and data collection was carried out through interviews and surveys of engineering students based on the two variables identified: stress and academic performance. A descriptive analysis was performed first, followed by an inferential analysis for hypothesis testing. SPSS was used for the reliability analysis using Cronbach’s alpha, with 0.84 as the result, and validation by expert judgment reached 84.5% acceptance. The result was that there is no relationship between the stress variable, across its three dimensions, and academic performance, since the P-value obtained was greater than 0.05. It is concluded that stress is not only academic; other types, such as work-related and social stress, should also be considered. In addition, positive stress that drives academic performance emerged. The beneficiaries of the research are the students and the university.
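The Cronbach's alpha reliability figure reported in the abstract comes from a short formula over item variances and total-score variance. A minimal pure-Python version, using population variances and hypothetical item scores, looks like this:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of k item-score lists,
    one list per questionnaire item, each with one score per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # total score per respondent
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))

# Hypothetical scores for two items across three respondents:
print(round(cronbach_alpha([[1, 2, 3], [2, 4, 6]]), 3))   # → 0.889
```

Values closer to 1 indicate higher internal consistency; 0.84, as reported, is conventionally taken as good reliability.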

Author 1: Janet Corzo Zavaleta
Author 2: Roberto Yon Alva
Author 3: Samuel Vargas Vargas
Author 4: Eleazar Flores Medina
Author 5: Yrma Principe Somoza
Author 6: Laberiano Andrade-Arenas

Keywords: Academic performance; anxiety; stress; teaching-learning; virtual mode

PDF

Paper 101: Implementation of a Web System to Improve the Evaluation System of an Institute in Lima

Abstract: In Peru and around the world, millions of students saw their education interrupted by the problems brought by the COVID-19 virus. Because of this, many educational entities began to adopt learning platforms and web systems so that the teaching process would not be affected, while complying with all the guidelines and requirements of the institution to resolve any academic difficulty. That is why the present work proposes the implementation of a web system to improve the grading and evaluation processes of an institution using the Scrum methodology, an agile framework based on empiricism that offers adaptation and flexibility in projects. For the software development, the open-source language PHP was used, since it is well suited to these web systems, along with MySQL, a manager for relational databases. The result of this research was the correct implementation of the system at the educational institution, verifying the absence of errors and the improvement of the processes involved, so that the institution can provide students with an adequate learning process.

Author 1: Franco Manrique Jaime
Author 2: Laberiano Andrade-Arenas

Keywords: COVID-19; evaluation; learning platform; scrum; web system

PDF

Paper 102: Design of an Anti-theft Alarm System for Vehicles using IoT

Abstract: Automobiles have become one of the most sought-after targets for criminals due to their worldwide popularity. This is reflected in the statistics, which show that over the years the rate of vehicle theft has been on the rise. As part of the fight against this crime, vehicles come with certain systems incorporated to avoid this type of situation, with many outstanding results. In this research project, a system was developed that, through the application of the Internet of Things (IoT), manages software and hardware technologies giving the user access to various actions, such as vehicle location through the global positioning system (GPS) and identification of the offender through radio frequency identification (RFID), as well as the global system for mobile communications (GSM). The objective of the research is to design a mobile and IoT application to reduce robberies in the department of Lima, Peru, using the Scrum methodology. The result obtained is the design of the mobile application, with its anti-theft system, vehicle blocking, and notification of unauthorized ignition.

Author 1: Jorge Arellano-Zubiate
Author 2: Jheyson Izquierdo-Calongos
Author 3: Laberiano Andrade-Arenas

Keywords: Global mobile communications system; global positioning system; internet of things; radio frequency identification; scrum

PDF

Paper 103: Framework and Method for Measurement of Particulate Matter Concentration using Low Cost Sensors

Abstract: Rapid urbanisation and infrastructure shortcomings leading to heavy traffic and heavy construction activities are major contributors to the emission of particulate matter into the ambient atmosphere. This is especially true in developing countries such as India and China. There have been numerous attempts by government authorities and civic agencies to curtail pollution, but these efforts have been in vain. Cities like Beijing and New Delhi suffer from extremely unhealthy air quality during multiple months of the year. Hence, the onus of keeping oneself safe from the extreme effects of air pollution falls on the individual. The following study presents a method and framework to measure particulate matter (PM2.5) concentration using low-cost sensors and to infer patterns from the data collected. The study uses an SDS011 high-precision laser PM2.5 detector module combined with a Raspberry Pi, which communicates the measurements through the message queueing telemetry transport (MQTT) protocol to a Ponte server, which in turn persists the data into MongoDB, where it can be consumed by algorithms for further analysis. For example, the data obtained from the sensors can be fused with data from measurement stations and geographical land-use information to estimate dense spatio-temporal pollution maps, which are the basis for computing individual exposure to pollutants.
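The SDS011 module in the pipeline above reports readings as 10-byte serial frames: header 0xAA, command 0xC0, PM2.5 and PM10 as little-endian tenths of µg/m³, a two-byte device ID, a checksum over the six data bytes, and tail 0xAB. Before publishing over MQTT, the Raspberry Pi must decode such a frame; a minimal parser might look like this (the frame layout is the sensor's documented protocol, the function name is ours):

```python
def parse_sds011(frame: bytes):
    """Decode a 10-byte SDS011 data frame into (pm25, pm10) in ug/m3."""
    if len(frame) != 10 or frame[0] != 0xAA or frame[1] != 0xC0 or frame[9] != 0xAB:
        raise ValueError("not an SDS011 data frame")
    if sum(frame[2:8]) & 0xFF != frame[8]:       # checksum over the six data bytes
        raise ValueError("checksum mismatch")
    pm25 = (frame[3] << 8 | frame[2]) / 10.0     # little-endian, tenths of ug/m3
    pm10 = (frame[5] << 8 | frame[4]) / 10.0
    return pm25, pm10

# Example frame encoding PM2.5 = 12.3 and PM10 = 25.7:
frame = bytes([0xAA, 0xC0, 0x7B, 0x00, 0x01, 0x01, 0xA1, 0x60, 0x7E, 0xAB])
print(parse_sds011(frame))   # → (12.3, 25.7)
```

The decoded tuple is what would then be serialized and published on an MQTT topic for the Ponte server to persist.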

Author 1: Shree Vidya Gurudath
Author 2: Krishna Raj P M
Author 3: Srinivasa K G

Keywords: Air pollution; low cost sensor; optical dust sensors; particulate matter; MQTT; ponte

PDF

Paper 104: Sustainable Android Malware Detection Scheme using Deep Learning Algorithm

Abstract: The immense popularity of smartphones has led to the constant use of these devices for productive and entertainment purposes in daily life. Among the different operating systems, Android plays a very important role in the development of mobile technology, as it is the most popular operating system. This makes it a target for cyberattacks, with severe negative effects in terms of monetary and privacy costs. Thus, this study implements a detection scheme using effective deep learning algorithms (LSTM and MLP) and tests their ability to detect malware on private and public datasets, achieving accuracy of over 99%.

Author 1: Abdulaziz Alzubaidi

Keywords: Smartphone security; machine learning; mobile malware; classification; big data

PDF

Paper 105: Solving the Steel Continuous Casting Problem using an Artificial Intelligence Model

Abstract: Over the past decade, approaches to the steel continuous casting problem have evolved in important and remarkable ways. In this paper, we consider a multiple parallel device version of the steel continuous casting problem (SCC), known as one of the hardest scheduling problems. The SCC problem is an important NP-hard combinatorial optimization problem and can be seen as a three-stage hybrid flowshop problem. We propose solving it with a recurrent neural network (RNN) with LSTM cells executed in the cloud. For our problem, we consider several machines at each stage: the converter stage, the refining stage, and the continuous casting stage. We formulate the mathematical model and implement an RNN with LSTM cells to approximately solve the problem. The proposed neural network was trained on a large dataset containing 10,000 real use cases plus others generated randomly. The performance of the proposed model is very promising: the success rate is 93%, and it is able to solve very large instances, whereas traditional approaches are limited and fail on them. We analyzed the results taking into account the quality of the solution and the prediction time to highlight the performance of the approach.

Author 1: Achraf BERRAJAA

Keywords: Artificial intelligence; SCC Program; RNN; LSTM; big data

PDF

Paper 106: Predicting Stock Closing Prices in Emerging Markets with Transformer Neural Networks: The Saudi Stock Exchange Case

Abstract: Deep learning has transformed many fields including computer vision, self-driving cars, product recommendations, behaviour analysis, natural language processing (NLP), and medicine, to name a few. The financial sector is no exception, where the use of deep learning has produced some of the most lucrative applications. This research proposes a novel fintech machine learning method that uses Transformer neural networks for stock price predictions. Transformers are relatively new and, while they have been applied to NLP and computer vision, they have not been explored much with time-series data. In our method, self-attention mechanisms are utilized to learn nonlinear patterns and dynamics from time-series data with high volatility and nonlinearity. The model makes predictions about closing prices for the next trading day by taking into account various stock price inputs. We used pricing data from the Saudi Stock Exchange (Tadawul) to develop this model and validated it using four error evaluation metrics. The applicability and usefulness of our model to fintech are demonstrated by its ability to predict closing prices with a probability above 90%. To the best of our knowledge, this is the first work where Transformer networks are used for stock price prediction. Our work is expected to make significant advancements in fintech and other fields that depend on time-series forecasting.
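Because self-attention itself is order-agnostic, Transformer models applied to sequences typically inject the position of each step (here, each trading day) with the sinusoidal encoding from the original Transformer. A pure-Python sketch of that standard encoding is below; whether this paper uses this exact variant is an assumption:

```python
import math

def positional_encoding(pos: int, d_model: int):
    """Sinusoidal positional encoding from the original Transformer:
    PE(pos, 2i) = sin(pos / 10000^(2i/d)), PE(pos, 2i+1) = cos(same angle)."""
    pe = []
    for i in range(d_model):
        angle = pos / (10000 ** ((i - i % 2) / d_model))   # pair index 2i
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

print(positional_encoding(0, 4))   # → [0.0, 1.0, 0.0, 1.0]
```

Each day's price features would be summed with its encoding vector before entering the self-attention layers.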

Author 1: Nadeem Malibari
Author 2: Iyad Katib
Author 3: Rashid Mehmood

Keywords: Stock price prediction; time-series forecasting; transformer deep neural networks; Saudi Stock Exchange (Tadawul); financial markets

PDF

Paper 107: Real Time Multi-Object Tracking based on Faster RCNN and Improved Deep Appearance Metric

Abstract: Computer Vision has set a new trend in image resolution, object detection, object tracking, and more by incorporating advanced techniques from Artificial Intelligence (AI). Object detection and tracking have many use cases, such as driverless cars, security systems, patient monitoring, and so on. Various methods have been proposed to overcome challenges such as long-term occlusion, identity switching, and fragmentation in real-time multi-object detection and tracking. However, how to reduce the number of identity switches and fragmentations remains an open question. Hence, in this paper, we propose a multi-object detection and tracking technique that involves two stages: the first detects multiple objects with high distinctiveness using Faster RCNN, and the second, an improved sqrt-cosine similarity, tracks the multiple objects using appearance and motion features. Finally, we evaluated the proposed technique on the Multi-Object Tracking (MOT) benchmark dataset against current state-of-the-art methods. The proposed technique yields enhanced accuracy and reduced identity switching and fragmentation.

Author 1: Mohan Gowda V
Author 2: Megha P Arakeri

Keywords: Multi-object detection; tracking; faster RCNN; convolution neural network; data association

PDF

Paper 108: OBEInsights: Visual Analytics Design for Predictive OBE Knowledge Generation

Abstract: Gaining traction in modern higher education, outcome-based education (OBE) focuses on strategizing pedagogical approaches to help students achieve specified learning outcomes. In the context of Malaysia, OBE is oriented towards the holistic development of graduates to ensure readiness for the working sector. To empower OBE implementation, the standardized measuring instrument iCGPA was introduced to higher education institutions nationwide. With lower dependency on the provided curriculum, graduate abilities and values can also be developed via extracurricular activities. However, analyzing curriculum results together with extracurricular activities can be a daunting task, despite the potential for enriched performance assessment. In addition, the current iCGPA instrument employs a radar map that restricts data exploration despite its capability for visualizing multivariate information. This study aims to enable predictive knowledge generation for understanding the relationship between learning activities and performance in OBE. Therefore, a predictive visual analytics system named OBEInsights is proposed to facilitate education analysts in assessing OBE. The system development started with the identification of crucial design and analytic requirements via a domain expert case study. The system was then built with deliberate consideration of the guiding factors of a design framework conceptualized from the case study, and subsequently evaluated in usability testing with 10 domain experts, consisting of usability rating and expert validation. The evaluation and expert validation results demonstrated the effectiveness and usability of OBEInsights in facilitating predictive OBE assessment. Several design insights on constructing visual analytics for OBE assessment were also discovered, in terms of effective visualization, predictive modeling, and knowledge generation. Analytic designers and builders can leverage the reported design insights to enhance knowledge generation tools for OBE assessment.

Author 1: Leona Donna Lumius
Author 2: Mohammad Fadhli Asli

Keywords: Visual analytics; visualization; learning analytics; outcome-based education (OBE)

PDF

Paper 109: Modelling and Simulating Exit Selection during Assisted Hospital Evacuation Process using Fuzzy Logic and Unity3D

Abstract: Evacuation procedures are an integral aspect of a hospital’s emergency response strategy. Evacuation simulation models help to properly evaluate and improve evacuation strategies. However, the issue of exit selection during evacuation is often overlooked and oversimplified in evacuation simulation models. Moreover, most of the available models lack integration of movement devices and assisted evacuation features. Addressing these limitations is necessary to properly evaluate evacuation strategies. To tackle this problem, we propose an effective approach to model exit selection using a fuzzy logic controller (FLC) and simulate assisted hospital evacuation using the Unity3D game engine. Our research demonstrates that selecting exits based on distance alone is not sufficient for real-life situations because it ignores the unpredictability of human behavior. On the contrary, the use of the proposed FLC for exit selection makes the simulation more realistic by addressing the uncertainty and randomness in an evacuee’s decision-making process. This research can play a vital role in future developments of evacuation simulation models.
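To give a flavor of how an FLC turns crisp inputs into an exit preference, the toy controller below uses two normalized inputs (distance to an exit, congestion at that exit), shoulder-shaped membership functions, three rules, and weighted-average defuzzification. The membership functions and rule base are illustrative assumptions, not the paper's actual controller:

```python
def exit_attractiveness(distance: float, congestion: float) -> float:
    """Toy Mamdani-style fuzzy controller; both inputs are in [0, 1]."""
    near = max(0.0, 1.0 - 2.0 * distance)        # membership degrees
    far = max(0.0, 2.0 * distance - 1.0)
    clear = max(0.0, 1.0 - 2.0 * congestion)
    crowded = max(0.0, 2.0 * congestion - 1.0)
    # (firing strength, consequent score): high = 1.0, medium = 0.5, low = 0.0
    rules = [(min(near, clear), 1.0),             # near AND clear  -> high
             (min(far, crowded), 0.0),            # far AND crowded -> low
             (max(far, crowded), 0.5)]            # far OR crowded  -> medium
    num = sum(w * score for w, score in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5              # weighted-average defuzzification

print(exit_attractiveness(0.1, 0.1) > exit_attractiveness(0.9, 0.9))  # → True
```

A simulated evacuee would score every visible exit this way and head for the highest-scoring one, which is where the non-distance factors enter the decision.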

Author 1: Intiaz Mohammad Abir
Author 2: Ali Ahmed Ali Moustafa Allam
Author 3: Azhar Mohd Ibrahim

Keywords: Evacuation simulation; exit selection; fuzzy logic; unity3D

PDF

Paper 110: Lattice-based Group Enlargement for a Robot Swarm based on Crystal Growth Models

Abstract: Swarm robotic systems control multiple robots in a coordinated manner, using this flexible coordination to solve complex tasks in various environments. Such systems can utilize the individual capabilities of robots scattered within the swarm as well as the collective capabilities of assembled robots. By coordinating these capabilities, swarms can solve tasks with a range of purposes, including carrying out rough sweeps of the overall environment using scattered robots or detailed observation of a part of the environment using assembled robots. This study developed a self-organization method for constructing regular groups of robots from scattered robots to achieve coordination between individual and collective states. An approach that integrates elements of self-organization with different input information would normally require centralized control to manage them. To provide this self-organization without centralized control, we focus on the phase-field method and cellular automata, which facilitate crystal growth that produces ordered structures from scattered particles. We formulate a method for arranging robots in a self-organizing manner based on the geometrical regularities of tileable lattices (honeycomb, square, and hexagonal) on a two-dimensional plane, demonstrate the process of carrying out the proposed method, and quantitatively evaluate the effectiveness of the lattice-based geometrical regularity approach. The proposed method contributes to carrying out tasks with a range of purposes by organizing states with either individual or collective capabilities of robot groups.
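The three tileable lattices named above differ in coordination number: each site has 3 nearest neighbors in a honeycomb lattice, 4 in a square lattice, and 6 in a hexagonal one. A small sketch of the neighbor target positions a robot at the origin would aim to fill, purely illustrative geometry rather than the paper's control law:

```python
import math

def lattice_neighbors(lattice: str, spacing: float = 1.0):
    """Nearest-neighbor target positions around a robot at the origin
    for the honeycomb (3), square (4), and hexagonal (6) lattices."""
    coordination = {"honeycomb": 3, "square": 4, "hexagonal": 6}
    n = coordination[lattice]
    step = 2.0 * math.pi / n                     # equal angular spacing
    return [(spacing * math.cos(k * step), spacing * math.sin(k * step))
            for k in range(n)]

print(len(lattice_neighbors("hexagonal")))   # → 6
```

In a crystal-growth-style scheme, scattered robots would dock at whichever of these slots around an already-ordered robot is still vacant.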

Author 1: Kohei Yamagishi
Author 2: Tsuyoshi Suzuki

Keywords: Multi-robot systems; self-organization; distributed control; crystal growth

PDF

Paper 111: Secure and Efficient Proof of Ownership Scheme for Client-Side Deduplication in Cloud Environments

Abstract: Data deduplication is an effective mechanism that reduces the required storage space of cloud storage servers by avoiding storing several copies of the same data. In contrast with server-side deduplication, client-side deduplication can not only save storage space but also reduce network bandwidth. Client-side deduplication schemes, however, can suffer from serious security threats. For instance, an adversary can spoof the server and gain access to a file he/she does not possess simply by claiming to own it. To thwart such threats, the concept of proof-of-ownership (PoW) has been introduced, but the security of existing PoW schemes cannot be assured without increasing the computational complexity of client-side deduplication. This paper proposes a secure and efficient PoW scheme for client-side deduplication in cloud environments with minimal computational overhead. The proposed scheme utilizes convergent encryption to encrypt a sufficiently large block specified by the server to challenge the client that claims possession of the file requested to be uploaded. To ensure that the client owns the entire file contents, and hence to resist collusion attacks, the server challenges the client to split the file into fixed-size blocks and then encrypt a randomly chosen block using a key formed by extracting one bit at a specified location from every other block. This ensures a significant reduction in the communication overhead between the server and the client. Computational complexity analysis and experimental results demonstrate that the proposed PoW scheme outperforms state-of-the-art PoW techniques.
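The challenge–response step described here (derive a key from one bit per non-challenged block, then encrypt the challenged block) can be sketched as follows. The block size, hash choices, and XOR keystream all stand in for the paper's convergent encryption, so treat every detail as an illustrative assumption:

```python
import hashlib

BLOCK = 16   # fixed block size in bytes (illustrative choice)

def pow_response(data: bytes, block_idx: int, bit_pos: int) -> bytes:
    """Answer a PoW challenge: the key is a hash of one bit taken at
    bit_pos from every block except the challenged one; the key then
    encrypts the challenged block (XOR keystream stands in for the
    paper's convergent encryption)."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    byte_i, off = divmod(bit_pos, 8)
    bits = bytes((b[byte_i] >> off) & 1
                 for i, b in enumerate(blocks) if i != block_idx)
    key = hashlib.sha256(bits).digest()
    target = blocks[block_idx]
    stream = hashlib.sha256(key).digest()[:len(target)]
    return bytes(c ^ s for c, s in zip(target, stream))
```

Only a client holding the complete file can answer correctly: the server checks the response against its own copy, and changing even one bit at bit_pos in any other block changes the derived key, and hence the ciphertext.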

Author 1: Amer Al-Amer
Author 2: Osama Ouda

Keywords: Client-side deduplication; proof of ownership; convergent encryption; cloud storage services

PDF

Paper 112: A Secure Fog-cloud Architecture using Attribute-based Encryption for the Medical Internet of Things (MIoT)

Abstract: The medical internet of things (MIoT) has brought about radical transformations in people’s lives by offering innovative solutions to health-related issues. It enables healthcare professionals to continually monitor various medical concerns in their patients without requiring visits to hospitals or healthcare professionals’ offices. The various MIoT systems and applications promote healthcare services that are more readily available, accessible, quality-controlled, and cost-effective. An essential requirement when developing MIoT architectures is to secure medical data, as MIoT devices produce considerable amounts of highly sensitive, diverse real-time data. The MIoT architectures discussed in previous works possess numerous security issues. The integration of fog computing and MIoT is acknowledged as an encouraging and suitable solution for addressing the challenges of data security. To ensure data security and prevent unauthorized access, medical information is kept in fog nodes and safely transported to the cloud. This paper presents a secure fog-cloud architecture using attribute-based encryption for MIoT to protect medical data. It investigates the feasibility of the proposed architecture and its ability to intercept security threats. The results demonstrate the feasibility of adopting the fog-based implementation to protect medical data while conserving MIoT resources, and its capability to prevent various security attacks.

Author 1: Suhair Alshehri
Author 2: Tahani Almehmadi

Keywords: MIoT; fog computing; cloud computing; ciphertext-policy; attribute-based encryption; security

PDF

Paper 113: Efficient Weighted Edit Distance and N-gram Language Models to Improve Spelling Correction of Segmentation Errors

Abstract: Most research dealing with the correction of spelling errors does not tackle errors caused by the misuse of space (deletion or insertion of a space). Failing to handle this type of error poses problems of understanding and of ambiguity in the meaning of the sentence containing it. In this article, we propose a new approach to correct errors due to the insertion of a space in a word while also correcting other types of editing errors. This approach is based on the edit distance and uses bigram language models to correct words in context. Tests conducted on hundreds of erroneous words (with an inserted space and/or simple editing errors) made it possible to assess the relevance and validity of the methods developed to correct this type of error. Comparing the proposed approaches with existing ones highlights their contribution.
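A minimal version of the space-insertion repair described here can be sketched as follows. The toy vocabulary and bigram counts stand in for the article's language models, and the merge rule is a simplification of its edit-distance scoring:

```python
# Toy resources standing in for a real lexicon and bigram language model.
VOCAB = {"the", "correction", "of", "spelling", "errors", "spel", "ling"}
BIGRAMS = {("of", "spelling"): 5, ("spelling", "errors"): 7}

def fix_inserted_spaces(tokens, vocab=VOCAB, bigrams=BIGRAMS):
    """Merge adjacent tokens when the joined form is a known word and
    either a fragment is out-of-vocabulary or the bigram model supports
    the merged word in its left context."""
    out = []
    i = 0
    while i < len(tokens):
        if i + 1 < len(tokens):
            joined = tokens[i] + tokens[i + 1]
            prev = out[-1] if out else "<s>"
            fragment = tokens[i] not in vocab or tokens[i + 1] not in vocab
            if joined in vocab and (fragment or bigrams.get((prev, joined), 0) > 0):
                out.append(joined)
                i += 2
                continue
        out.append(tokens[i])
        i += 1
    return out

print(fix_inserted_spaces("the correction of spel ling errors".split()))
# → ['the', 'correction', 'of', 'spelling', 'errors']
```

The bigram lookup is what supplies the "in context" part: "spel ling" is merged because "spelling" is attested after "of", while valid adjacent words are left alone.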

Author 1: Hicham GUEDDAH

Keywords: Spelling correction; error; natural language; insertion; space; distance; language models; probability

PDF

Paper 114: New SARIMA Approach Model to Forecast COVID-19 Propagation: Case of Morocco

Abstract: The aim of this paper is to help avoid future health crises by analysing Moroccan COVID-19 data using time series methods to better understand how the pandemic is spreading. To this end, we used a statistical model called Seasonal Autoregressive Integrated Moving Average (SARIMA) to forecast new confirmed cases, new deaths, and cumulative cases and deaths. Besides predicting the spread of COVID-19, this study will also help decision makers take the right decisions at the right time. Finally, we evaluated the performance of our model by measuring metrics such as the Mean Squared Error (MSE). We applied our SARIMA model to forward forecasting over a period of 50 days; the reported MSE was 62196.46 for cumulative case forecasting and 621.14 for cumulative death forecasting.
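The seasonal part of SARIMA rests on seasonal differencing, and the paper evaluates its forecasts with MSE; both are a few lines in plain Python. The weekly period s = 7 below is an illustrative choice for daily counts, not necessarily the paper's, and the numbers are made up:

```python
def seasonal_difference(series, s=7):
    """First seasonal difference: y_t - y_{t-s}, removing a seasonal
    pattern of period s before ARIMA-style modeling."""
    return [series[t] - series[t - s] for t in range(s, len(series))]

def mse(actual, predicted):
    """Mean Squared Error between observed and forecast values."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

print(seasonal_difference(list(range(10)), s=7))   # → [7, 7, 7]
print(mse([100, 110, 120], [100, 110, 126]))       # → 12.0
```

In practice the full SARIMA fit would be delegated to a statistics library; the differenced series above is the input that makes the remaining process stationary enough to model.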

Author 1: Ibtissam CHOUJA
Author 2: Sahar SAOUD
Author 3: Mohamed SADIK

Keywords: COVID-19, machine learning, seasonal autoregres-sive integrated moving average, SARIMA, statistical modeling, time series forecasting

PDF

Paper 115: Text to Image GANs with RoBERTa and Fine-grained Attention Networks

Abstract: Synthesizing new images from textual descriptions requires understanding the context of the text and is a very challenging problem in natural language processing and computer vision. Existing systems use a Generative Adversarial Network (GAN) to generate images from captions using a simple text encoder. This paper synthesizes images from textual descriptions on the Caltech-UCSD Birds dataset, baselining the generative model with Attentional Generative Adversarial Networks (AttnGAN) and using the RoBERTa pre-trained neural language model for word embeddings. The results are compared with the baseline AttnGAN model, and various analyses are conducted on incorporating the RoBERTa text encoder in place of the simple encoder in the existing system. Various performance improvements were noted compared to the baseline attentional generative networks: the FID score decreased from 23.98 with AttnGAN to 20.77 when RoBERTa is incorporated into AttnGAN.

Author 1: Siddharth M
Author 2: R Aarthi

Keywords: Natural language processing; computer vision; GANs; AttnGAN; RoBERTa

PDF

Paper 116: Performance Evaluation of BDAG Aided Blockchain Technology in Clustered Mobile Ad-Hoc Network for Secure Data Transmission

Abstract: In a mobile ad-hoc network (MANET) environment, routing data packets is a challenging task due to rapid changes in mobility and network topology. In addition, the security of routing is disturbed by attacks from malicious nodes, which greatly affect the Quality of Service. To overcome the challenges faced in routing message packets, the Bayesian Directed Acyclic Graph (B-DAG) Aided Blockchain model is proposed for a clustered MANET environment. The proposed model encompasses the following processes: (i) multi-factor authentication of users using the BLISS algorithm, which involves acquiring user credentials, generating hash values for those credentials with the CubeHash algorithm, and using these hash values to generate public and private keys with BLISS; (ii) weighted-sum computation for clustering to reduce complexity in the MANET environment, where cluster heads (CH) and cluster members (CM) are classified based on energy status, geometric distance, link quality, and direction; (iii) a secure AODV-based routing protocol using the Dolphin Swarm Optimization (DSO) algorithm, which selects reputed nodes based on link stability, relative velocity, available bandwidth, energy, queue length, and trust, with packet forwarding based on the reputation value of the node, eliminating the trust provided by malicious nodes to improve security; and (iv) a Bayesian DAG aided blockchain, in which user authenticity, data integrity of packets, and signatures are verified to mitigate routing attacks created by nodes in the MANET environment. The proposed model is evaluated in the NS-3.26 network simulator, and its performance is measured in terms of multiple QoS metrics.

Author 1: B. Harikrishnan
Author 2: T. Balasubaramanian

Keywords: Mobile ad-hoc network; node authentication; BLISS algorithm; blockchain; Bayesian directed acyclic graph

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org