The Science and Information (SAI) Organization
IJACSA Volume 12 Issue 6

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.


Paper 1: Real-Time Driver’s Focus of Attention Extraction and Prediction using Deep Learning

Abstract: Driving is one of the most common activities in modern life. Every day, millions of people drive to and from their schools or workplaces. Even though this activity seems simple and everyone knows how to drive on roads, it actually requires drivers’ complete attention, keeping their eyes on the road and the surrounding cars for safe driving. However, most research has focused either on further improving the configurations of active safety systems with high-cost components such as Lidar, night-vision cameras, and radar sensor arrays, or on finding the optimal way of fusing and interpreting sensor information, without considering the impact of drivers’ continuous attention and focus. We observe that effective safety technologies and systems are greatly affected by drivers’ attention and focus. In this paper, we design, implement, and evaluate DFaep, a deep learning network for automatically examining, estimating, and predicting a driver’s focus of attention in real time with dual low-cost dash cameras providing driver-centric and car-centric views. Based on the raw stream data captured by the dash cameras during driving, we first detect the driver’s face and eyes and generate augmented face images to extract facial features and enable real-time head movement tracking. We then parse the driver’s attention behaviors and gaze focus together with the road scene data captured by the front-facing dash camera. Facial features, augmented face images, and gaze focus data are then fed to our deep learning network for modeling drivers’ driving and attention behaviors. Experiments were conducted on the large DR(eye)VE dataset and our own dataset under realistic driving conditions. The findings of this study indicate that the distribution of a driver’s attention and focus is highly skewed. Results show that DFaep can quickly detect and predict the driver’s attention and focus, with an average prediction accuracy of 99.38%. This provides a basis and a feasible solution, with a computationally learned model, for capturing and understanding drivers’ attention and focus, to help avoid fatal collisions and reduce the probability of unsafe driving behavior in the future.

Author 1: Pei-heng Hong
Author 2: Yuehua Wang

Keywords: Driving; attention; interesting zones; deep neural network; deep learning; models

PDF

Paper 2: Predicting Strength Ratio of Laminated Composite Material with Evolutionary Artificial Neural Network

Abstract: In this paper, an alternative methodology to obtain the strength ratio for laminated composite material is presented. Traditionally, classical lamination theory and related failure criteria are used to calculate the numerical value of the strength ratio of laminated composite material under in-plane and out-of-plane loading from knowledge of the material properties and layup. In this study, an alternative approach to calculating the strength ratio is proposed using an artificial neural network, in which a genetic algorithm optimizes the search process at four different levels: the architecture, parameters, connections of the neural network, and activation functions. The results of the present method are compared to those obtained via classical lamination theory and failure criteria. The results show that an artificial neural network is a feasible method for calculating the strength ratio under in-plane loading, in place of classical lamination and associated failure theory.

Author 1: Huiyao Zhang
Author 2: Atsushi Yokoyama

Keywords: Classical lamination theory; genetic algorithm; artificial neural network; optimization

PDF
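The abstract above describes a genetic algorithm searching over a neural network’s architecture and parameters. As an illustrative sketch only (not the authors’ four-level encoding), a minimal real-valued GA with truncation selection, one-point crossover, and Gaussian mutation might look like this:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def evolve(fitness, dim, pop_size=20, generations=100, mut_rate=0.1):
    """Minimal genetic algorithm: evolves real-valued vectors to maximize `fitness`."""
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.2) if random.random() < mut_rate else g
                     for g in child]                 # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: all genes should converge toward 0.5 (stand-in for a real
# objective such as minimizing strength-ratio prediction error).
best = evolve(lambda v: -sum((x - 0.5) ** 2 for x in v), dim=4)
```

In the paper’s setting, the genome would instead encode layer sizes, weights, connections, and activation choices, and the fitness would measure agreement with classical lamination theory.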

Paper 3: UIP2SOP: A Unique IoT Network applying Single Sign-On and Message Queue Protocol

Abstract: The Internet of Things (IoT) currently plays an important role in our lives and is one of the most rapidly developing technology trends. However, the present structure has some limitations, one of which is communication via the client-server model, in which users, devices, and applications use IoT services while all connections and requests are managed by the IoT service providers. On the one hand, IoT service providers (e.g., individuals, organizations) have different methods of managing their devices, services, and users; thus, a unique standard (i.e., a communication method among service providers and between client and server) remains a challenge for developers. On the other hand, Message Queuing Telemetry Transport (MQTT), one of the most popular protocols in IoT deployments, has significant security and privacy issues of its own (e.g., authentication, authorization, and privacy problems). Therefore, this paper proposes UIP2SOP, a unique IoT network that uses Single Sign-On (SSO) and a message queue to address the MQTT protocol’s security problems. Moreover, this model allows organizations providing IoT services to connect into a single network without changing their architecture at all. The evaluation section demonstrates the effectiveness of the proposed model. In particular, we consider the number of concurrent users publishing messages simultaneously in two scenarios: i) internal communication and ii) external communication. In addition, we evaluate the recovery ability of the system when a connection is broken. Finally, to encourage reproducibility and further improvement, we publicize a complete code solution on a GitHub repository.

Author 1: Lam Nguyen Tran Thanh
Author 2: Nguyen Ngoc Phien
Author 3: The Anh Nguyen
Author 4: Hong Khanh Vo
Author 5: Hoang Huong Luong
Author 6: Tuan Dao Anh
Author 7: Khoi Nguyen Huynh Tuan
Author 8: Ha Xuan Son

Keywords: Internet of Things (IoT); MQTT; OAuth; Single Sign-On; Kafka; message queue

PDF

Paper 4: A Hybrid Encryption Solution to Improve Cloud Computing Security using Symmetric and Asymmetric Cryptography Algorithms

Abstract: Ensuring the security of cloud computing is one of the most critical challenges facing the cloud services sector. Dealing with data in a cloud environment, which uses shared resources, and providing reliable and secure cloud services, requires a robust encryption solution with no or low negative impact on performance. Thus, this study proposes an effective cryptography solution to improve security in cloud computing with a very low impact on performance. A complex cryptography algorithm is not helpful in cloud computing as computing speed is essential in this environment. Therefore, this solution uses an improved Blowfish algorithm in combination with an elliptic-curve-based algorithm. Blowfish will encrypt the data, and the elliptic curve algorithm will encrypt its key, which will increase security and performance. Moreover, a digital signature technique is used to ensure data integrity. The solution is evaluated, and the results show improvements in throughput, execution time, and memory consumption parameters.

Author 1: Hossein Abroshan

Keywords: Cloud computing; security; cryptography; digital signature

PDF
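The hybrid pattern the abstract describes, a symmetric cipher for the bulk data and an asymmetric cipher wrapping its key, can be sketched with deliberately toy primitives: an XOR keystream stands in for the improved Blowfish, and textbook small-prime RSA stands in for the elliptic-curve algorithm. Neither toy is secure, and the paper’s actual algorithms differ; only the key-wrapping structure is the point here.

```python
import hashlib, secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256 keystream in counter mode."""
    out, ctr = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Toy RSA-style key pair (tiny primes, illustrative only -- NOT secure).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                                 # private exponent

def asym_encrypt(m: int) -> int: return pow(m, e, n)
def asym_decrypt(c: int) -> int: return pow(c, d, n)

# Hybrid pattern: a fresh session key encrypts the data,
# and the asymmetric cipher encrypts only that small key.
session_key = secrets.token_bytes(16)
ciphertext = xor_stream(session_key, b"cloud data")
wrapped = [asym_encrypt(b) for b in session_key]    # wrap key byte-by-byte (toy)

# Receiver unwraps the key, then decrypts the data.
recovered_key = bytes(asym_decrypt(c) for c in wrapped)
plaintext = xor_stream(recovered_key, ciphertext)
```

The design pay-off the abstract claims follows from this split: the fast symmetric cipher handles the large payload, while the slower asymmetric operation touches only the 16-byte key.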

Paper 5: Evil Twin Attack Mitigation Techniques in 802.11 Networks

Abstract: The Evil Twin Wi-Fi attack targets the IEEE 802.11 standard and poses a threat to wireless connections. It is one of the longest-standing attacks of its kind: once performed, it acts as a gateway to many other attacks, such as DNS spoofing, SSL stripping, and IP spoofing. Preventing the attack is therefore essential for privacy and data security. This paper describes in detail how the attack is performed and presents different measures to prevent it. The proposed algorithm sniffs for fake APs across all channels using a whitelist. Once an unauthorized AP is detected, the user has the option to de-authenticate any client in the unauthorized network in case any clients connect to it by accident; the algorithm also checks whether de-authentication frames are being sent to any of the APs, to identify which AP is being compromised. The efficiency of the proposed approach is verified by simulating and mitigating the evil twin attack.

Author 1: Raja Muthalagu
Author 2: Sachin Sanjay

Keywords: Evil twin Wi-Fi attack; DNS spoofing; SSL strip; IP spoofing

PDF
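The whitelist check at the heart of the proposed algorithm can be sketched as follows. The SSIDs and BSSIDs are made-up placeholders; a real detector would read them from captured beacon frames rather than hard-coded data.

```python
# Known-good access points: SSID -> set of authorized BSSIDs (MAC addresses).
# These values are hypothetical placeholders for illustration.
WHITELIST = {
    "CampusNet": {"aa:bb:cc:11:22:33", "aa:bb:cc:11:22:34"},
}

def classify_ap(ssid: str, bssid: str) -> str:
    """Flag an observed beacon as authorized, an evil twin, or unknown."""
    if ssid not in WHITELIST:
        return "unknown"                  # an SSID we are not protecting
    if bssid.lower() in WHITELIST[ssid]:
        return "authorized"
    return "evil-twin"                    # whitelisted SSID, foreign BSSID

print(classify_ap("CampusNet", "AA:BB:CC:11:22:33"))  # authorized
print(classify_ap("CampusNet", "de:ad:be:ef:00:01"))  # evil-twin
```

An evil twin must clone the SSID to lure clients but broadcasts from its own radio, so the (SSID, BSSID) pair is the natural key for the whitelist lookup.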

Paper 6: Solving Reactive Power Scheduling Problem using Multi-objective Crow Search Algorithm

Abstract: This paper presents the solution of the multi-objective optimal reactive power dispatch (MO-ORPD) problem by optimizing two objectives: system power losses and the voltage stability enhancement index (VSEI)/L-index. The ORPD problem is an important issue from the system security and operational point of view for optimal steady-state operation of a power system. Here, the single-objective ORPD problem is solved using the Crow Search Algorithm (CSA), and the multi-objective ORPD problem is solved using the multi-objective CSA (MO-CSA). The CSA is an efficient and robust algorithm that determines the global optimal solution for non-linear and discontinuous objective functions. Two standard test systems, the IEEE 30-bus and 57-bus systems, are used to show the effectiveness, suitability, and robustness of CSA and MO-CSA for solving the ORPD problem.

Author 1: Surender Reddy Salkuti

Keywords: Optimal power flow; crow search algorithm; evolutionary algorithms; loss minimization; reactive power scheduling; multi-objective optimization

PDF
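A minimal single-objective Crow Search Algorithm, of the kind the paper extends to the multi-objective case, can be sketched on a toy function. The flight length `fl` and awareness probability `ap` values here are common illustrative defaults, not the paper’s settings, and the sphere function stands in for the loss/VSEI objectives.

```python
import random

random.seed(1)  # reproducible toy run

def crow_search(f, dim, n=20, iters=200, fl=2.0, ap=0.1, bounds=(-5.0, 5.0)):
    """Minimal Crow Search Algorithm minimizing f over a box (toy sketch)."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    mem = [row[:] for row in x]                   # each crow's best-known position
    for _ in range(iters):
        for i in range(n):
            j = random.randrange(n)               # crow i tails crow j
            if random.random() >= ap:             # j unaware: move toward j's cache
                r = random.random()
                new = [xi + r * fl * (mj - xi) for xi, mj in zip(x[i], mem[j])]
            else:                                 # j aware: fly to a random position
                new = [random.uniform(lo, hi) for _ in range(dim)]
            if all(lo <= v <= hi for v in new):   # accept only feasible moves
                x[i] = new
                if f(new) < f(mem[i]):
                    mem[i] = new                  # update the hiding place (memory)
    return min(mem, key=f)

best = crow_search(lambda v: sum(t * t for t in v), dim=3)
```

For MO-CSA, the scalar comparison `f(new) < f(mem[i])` is replaced by a Pareto-dominance test over both objectives, which is the essential change the paper builds on.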

Paper 7: Controlling a Wheelchair using a Brain Computer Interface based on User Controlled Eye Blinks

Abstract: Data published by organizations such as the United Nations indicates that a large number of people suffer from different types of movement disabilities. In many cases, the disability is so severe that they cannot make any kind of movement. Faced with this situation, Brain Computer Interface technology has taken up the challenge of developing solutions that deliver a better quality of life to those people; one of the most important areas has been mobility, where brain computer interface enabled electric wheelchairs are among the most helpful solutions. Accordingly, the present work has developed a Brain Computer Interface solution that allows users to control the movement of their wheelchairs using the brain waves generated when they blink their eyes. For the creation of this solution, the Incremental Prototyping methodology was used to optimize the development process by generating independent modules. The solution is made up of several components, i.e., an EEG System (OpenBCI), a Main Controller, a Wheelchair Controller, and the Wheelchair itself; this modularity allows its functionalities to be updated and improved in a simple way. The developed system requires a small amount of training time and has a response time applicable to the real world. Experimental results show that users can perform different tasks with an acceptable degree of error within a period of time that can be considered acceptable for the system. Considering that the prototype was created for people with disabilities, the system could grant them a certain level of independence.

Author 1: Sebastián Poveda Zavala
Author 2: Sang Guun Yoo
Author 3: David Edmigio Valdivieso Tituana

Keywords: BCI; EEG; brain computer interface; OpenBCI; eye blink detection

PDF
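Blinks show up in an EEG channel as large-amplitude artifacts, so the control signal described above can be illustrated with a simple threshold detector. The threshold, refractory window, and synthetic trace below are illustrative stand-ins, not values from the paper.

```python
def count_blinks(signal, threshold=100.0, refractory=5):
    """Count eye blinks as large-amplitude threshold crossings (toy sketch).
    After each detection, `refractory` samples are skipped so one blink
    spanning several samples is not counted more than once."""
    blinks, skip = 0, 0
    for v in signal:
        if skip:
            skip -= 1                 # still inside the refractory window
        elif abs(v) > threshold:
            blinks += 1
            skip = refractory
    return blinks

# Synthetic EEG-like trace: baseline noise with two blink artifacts.
trace = [3, -2, 5, 180, 190, 120, 4, -1, 2, -150, -170, 3, 1]
```

In a full system such as the one described, each counted blink (or blink pattern) would be mapped by the Main Controller to a wheelchair command.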

Paper 8: IoT based Smart Water Quality Prediction for Biofloc Aquaculture

Abstract: Traditional fish farming faces several challenges, including water pollution, temperature imbalance, feed, space, cost, etc. Biofloc technology in aquaculture transforms this manual practice into an advanced system that allows unused feed to be reused by converting it into microbial protein. The objective of the research is to propose an IoT-based solution for aquaculture that increases efficiency and productivity. The article presents a system that collects data using sensors, analyzes it using a machine learning model, generates decisions with the help of Artificial Intelligence (AI), and sends notifications to the user. The proposed system has been implemented and tested, achieving satisfactory results.

Author 1: Md. Mamunur Rashid
Author 2: Al-Akhir Nayan
Author 3: Sabrina Afrin Simi
Author 4: Joyeta Saha
Author 5: Md. Obaidur Rahman
Author 6: Muhammad Golam Kibria

Keywords: Smart aquaculture system; biofloc technology; machine learning; life below water

PDF

Paper 9: A Novel High-order Linguistic Time Series Forecasting Model with the Growth of Declared Word-set

Abstract: Existing research has shown that fuzzy forecasting methods based on high-order fuzzy time series outperform those based on first-order fuzzy time series. A linguistic forecasting method based on first-order linguistic time series, which can directly handle the word-set of the linguistic variable, was examined by Hieu et al. This paper examines a novel model of high-order linguistic time series with a growing declared word-set. Based on the proposed model, a procedure is developed for forecasting the enrollments of the University of Alabama and the Lahi crop production of India. In the proposed forecasting method, high-order linguistic logical relationship groups are established and used to calculate the forecasted values, based on the quantitative semantics of the words generated by the hedge algebras structure. The experimental results show that the forecasting accuracy of the proposed high-order method is better than that of its counterparts, and that the growth of the word-set of the linguistic variable contributes significantly to increasing the accuracy of the forecasted results.

Author 1: Nguyen Duy Hieu
Author 2: Pham Dinh Phong

Keywords: Linguistic time series; high-order linguistic time series; linguistic logical relationship; hedge algebras; time series forecasting

PDF

Paper 10: Opinion Mining of Saudi Responses to COVID-19 Vaccines on Twitter

Abstract: In recent months, many governments have announced COVID-19 vaccination programs and plans to help end the crises the world has been facing since the emergence of the coronavirus pandemic. In Saudi Arabia, the Ministry of Health called for citizens and residents to take up the vaccine as an essential step to return life to normal. However, the take-up calls were made in the face of profound disagreements on social media platforms and online networks about the value and efficacy of the vaccines. Thus, this study seeks to explore the responses of Saudi citizens to the COVID-19 vaccines and their sentiments about being vaccinated using opinion mining methods to analyze data extracted from Twitter, the most widely used social media network in Saudi Arabia. A corpus of 37,467 tweets was built. Vector space classification (VSC) methods were used to group and categorize the selected tweets based on their linguistic content, classifying the attitudes and responses of the users into three defined categories: positive, negative, and neutral. The lexical semantic properties of the posts show a prevalence of negative responses. This indicates that health departments need to ensure citizens are equipped with accurate, evidence-based information and key facts about the COVID-19 vaccines to help them make appropriate decisions when it comes to being vaccinated. Although the study is limited to the analysis of attitudes of people to the COVID-19 vaccines in Saudi Arabia, it has clear implications for the application of opinion mining using computational linguistic methods in Arabic.

Author 1: Fahad M. Alliheibi
Author 2: Abdulfattah Omar
Author 3: Nasser Al-Horais

Keywords: Computational linguistics; COVID-19 vaccines; lexical semantics; opinion mining; vector space classification

PDF
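Nearest-centroid classification in a term-frequency vector space, in the spirit of the VSC methods described above, can be sketched as follows. The tiny English stand-in “tweets” and per-class seed texts are illustrative only (the study worked on a 37,467-tweet Arabic corpus).

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical seed documents per sentiment class (English stand-ins).
TRAIN = {
    "positive": "vaccine safe effective glad vaccinated protect family",
    "negative": "vaccine dangerous side effects refuse distrust rushed",
    "neutral":  "vaccine appointment center schedule dose today",
}
CENTROIDS = {label: vectorize(doc) for label, doc in TRAIN.items()}

def classify(tweet):
    """Assign the class whose centroid is closest in cosine similarity."""
    v = vectorize(tweet)
    return max(CENTROIDS, key=lambda lab: cosine(v, CENTROIDS[lab]))
```

Grouping posts by their linguistic content this way is what lets the study report the prevalence of negative responses across the corpus.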

Paper 11: Indoor Localization and Navigation based on Deep Learning using a Monocular Visual System

Abstract: Nowadays, artificial vision systems depend on computing to analyze acquired data and carry out crucial tasks such as localization and navigation. For successful navigation, a robot must interpret the acquired data and determine its position to decide how to move through the environment. This paper proposes an indoor mobile-robot visual localization and navigation approach for autonomous navigation. A convolutional neural network and background modeling are used to locate the system in the environment. Object detection is based on copy-move detection, an image forensic technique that extracts features from the image to identify similar regions. An adaptive threshold is proposed to handle illumination changes. The detected object is classified, using a control deep neural network, so that it can be evaded. A U-Net model is implemented to track the path trajectory. Experimental results obtained from real data prove the efficiency of the proposed algorithm, and the adaptive threshold solves illumination-variation issues for object detection.

Author 1: Rodrigo Eduardo Arevalo Ancona
Author 2: Leonel Germán Corona Ramírez
Author 3: Oscar Octavio Gutiérrez Frías

Keywords: Visual localization; visual navigation; autonomous navigation; feature extractor; object detection

PDF

Paper 12: Firm Performance Prediction for Macroeconomic Diffusion Index using Machine Learning

Abstract: Utilizing firm performance in the prediction of macroeconomic conditions is a research trend with increasing momentum, supporting the construction of nowcasting and early warning systems for macroeconomic management. Firm-level data is typically high volume, and traditional statistics-based prediction models are inefficient on it. This study therefore assesses the achievements of machine learning in firm performance prediction and proposes the emerging idea of applying it to macroeconomic prediction. Inspired by the “micro-meso-macro” framework, this study compares different machine learning algorithms on each Vietnamese firm group categorized by the Vietnamese Industry Classification Standard. This approach identifies the most suitable classifier for each group, which has specific characteristics of its own. The selected classifiers are then used to predict firms’ performance in the short term, on data collected in wide-ranging enterprise surveys conducted by the General Statistics Office of Vietnam. Experiments showed that Random Forest and J48 outperformed the other ML algorithms. The prediction results capture the fluctuation of firms’ performance across industries and support the construction of a diffusion index that is a potential early warning indicator for macroeconomic management.

Author 1: Cu Nguyen Giap
Author 2: Dao The Son
Author 3: Dinh Thi Ha
Author 4: Vu Quang Huy
Author 5: Do Thi Thu Hien
Author 6: Le Mai Trang

Keywords: Firm performance prediction; machine learning algorithms; diffusion index

PDF

Paper 13: A Review on SDR, Spectrum Sensing, and CR-based IoT in Cognitive Radio Networks

Abstract: The inherent scarcity of frequency spectrum, along with the fixed spectrum allocation adopted policy, has led to a dire shortage of this indispensable resource. Furthermore, with the tremendous growth of wireless applications, this problem is intensified as the unlicensed frequency spectrum becomes overcrowded and unable to meet the requirement of emerging radio devices operating at higher data rates. Additionally, the already assigned spectrum is underutilized. That has prompted researchers to look for a way to address spectrum scarcity and enable efficient use of the available spectrum. In this context, Cognitive Radio (CR) technology has been proposed as a potential means to overcome this issue by introducing opportunistic usage to less congested portions of the licensed spectrum. In addition to outlining the fundamentals of Cognitive Radio, including Dynamic Spectrum Access (DSA) paradigms and CR functions, this paper has a three-fold objective: first, providing an overview of Software Defined Radio (SDR), in which the architecture, benefits, and ongoing challenges of SDR are presented; second, giving an extensive review of spectrum sensing, covering sensing types, narrowband and wideband sensing schemes with their pros and cons, Machine Learning-based sensing, and open issues that need to be further addressed in this field; third, exploring the use of Cognitive Radio in the Internet of Things (IoT) while highlighting the crucial contribution of CR in enabling IoT. This Review is elaborated in an informative fashion to help new researchers entering the area of Cognitive Radio Networks (CRN) to easily get involved.

Author 1: Nadia Kassri
Author 2: Abdeslam Ennouaary
Author 3: Slimane Bah
Author 4: Hajar Baghdadi

Keywords: Cognitive radio; cognitive radio networks; software defined radio; spectrum sensing; machine learning; CR-based IoT

PDF

Paper 14: Artificial Intelligence based Recommendation System for Analyzing Social Business Reviews

Abstract: Recently, analysing the reviews that clients post on products provided by e-commerce companies, such as Amazon, to produce efficient recommendations has received a lot of attention. However, generating effective recommendations on time is a challenge. This research paper proposes an artificial intelligence-based system. The proposed system uses the Incremental Learning-based Method (ILbM) to learn a neural network classifier, employing the bagging technique in the process of training the classifier. To ensure a high degree of performance, the ILbM is implemented on Hadoop, which allows parallel execution. Compared to a similar system, the proposed system shows better results in terms of accuracy (97.5%), precision (95.7%), recall (91.5%), and response time (36 seconds).

Author 1: Asma Alanazi
Author 2: Marwan Alseid

Keywords: ILbM; reviews; classifier; text analysing; training bagging; MapReduce; big data

PDF

Paper 15: Comprehensive Analysis for Sensor-Based Hydraulic System Condition Monitoring

Abstract: Condition monitoring of equipment can be very effective in predicting faults and taking early corrective actions. As hydraulic systems constitute the core of most industrial plants, predictive maintenance of such systems is of vital importance. Due to the availability of huge amounts of data collected from industrial plants, machine learning can be used for this purpose. In this work, hydraulic system condition monitoring (HSCM) is addressed via a public dataset with 17 sensors distributed throughout the system. Using a set of 6 features extracted from the sensory data, the random forest classifier has been shown in the literature to achieve a classification rate exceeding 99% for four independent target classes, namely Cooler, Valve, Pump, and Accumulator. In this paper, sensor dependency is examined, and experimental results show that a reduced set of important sensors may be sufficient for the addressed classification task. In addition, feature importance as well as implementation issues, i.e., training time and model size on disk, are analyzed. It is found that, using the optimized models with only the important sensors employed, training time can be reduced by 25.7% to 36.4% and size on disk by 70.3% to 85.5%, in comparison with the basic model using the full set of sensors, while maintaining classification precision.

Author 1: Ahmed Alenany
Author 2: Ahmed M. Helmi
Author 3: Basheer M. Nasef

Keywords: Condition monitoring; sensory data analysis; machine learning; classification

PDF
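Turning a raw sensor cycle into a small statistical feature vector, as the abstract describes, can be sketched as below. The six features chosen here (mean, standard deviation, min, max, median, RMS) are common stand-ins, since the abstract does not list the paper’s exact feature set, and the sample cycle is synthetic.

```python
import math
import statistics

def extract_features(cycle):
    """Summarize one load cycle of a sensor into six statistical features
    (illustrative choice of features, not necessarily the paper's)."""
    return {
        "mean":   statistics.fmean(cycle),
        "std":    statistics.pstdev(cycle),          # population std deviation
        "min":    min(cycle),
        "max":    max(cycle),
        "median": statistics.median(cycle),
        "rms":    math.sqrt(statistics.fmean(v * v for v in cycle)),
    }

# Synthetic pressure-like readings from one hydraulic cycle.
feats = extract_features([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
```

With 17 sensors and 6 features each, every cycle becomes a fixed-length vector of around a hundred numbers, which is what the random forest classifier consumes; dropping unimportant sensors shrinks this vector, and with it training time and model size.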

Paper 16: A New Communication Protocol for Drones Cooperative Network: 5G Site Survey Case Study

Abstract: Optimally directing the antennas of 5th-generation (5G) mobile networks has become a hard and tedious process, owing to the abundance of antennas that 5G networks rely on and to the traditional way of measuring their signal strength, a process that can take weeks to complete. The solution is to automate signal strength measurement and antenna direction using drones rather than human labor, which shortens the process from weeks to a matter of hours while achieving low cost and high accuracy. To this end, a cooperative network of drones and a new communication protocol to support that network are designed. The drones communicate with each other and with the antennas by exchanging messages over an MQTT cloud using the newly designed communication protocol. A Raspberry Pi platform is used as a server to control the direction of the antennas. Each drone carries a 4G mobile device and a Raspberry Pi with a Building Identification System (BIS) set up on it. The BIS assigns every building a number and recognizes building entrances so that signal strength can be measured at every floor. Performance analysis metrics (throughput) are then measured. OMNeT++ is used for simulation, and the Raspberry Pi platform is used to implement the system and measure the performance of the new communication protocol.

Author 1: Youssef Shawky Othman
Author 2: Mohamed Helmy Megahed
Author 3: Mohamed Abo Rezka
Author 4: Fathy Ahmed Elsayed Amer

Keywords: Drones; communication protocol; Message Queuing Telemetry Transport (MQTT); cooperative network

PDF

Paper 17: Real Time Face Expression Recognition along with Balanced FER2013 Dataset using CycleGAN

Abstract: Human facial expression recognition is an active research area with massive applications in the medical field, crime investigation, marketing, online learning, automobile safety, and video games. The first part of this research defines a deep neural network model-based framework for recognizing the seven main types of facial expression, which are found in all cultures. The proposed methodology involves four stages: (a) pre-processing the FER2013 dataset by relabeling it to avoid misleading results and removing non-face and non-frontal faces; (b) designing an efficient, stable Cycle Generative Adversarial Network (CycleGAN), which provides unsupervised expression-to-expression translation and has been designed and trained with a new cycle consistency loss; (c) generating new images to overcome the class imbalance; and finally (d) building the DNN architecture for recognizing facial expressions, using the pretrained VGG-Face model with vggface weights. The second part encompasses the design of a GPU-accelerated facial expression recognition system for real-time video sequences using NVIDIA’s Compute Unified Device Architecture (CUDA). The OpenCV library was compiled from scratch with CUDA and the NVIDIA CUDA Deep Neural Network library, cuDNN. For the face detection stage, Haar cascades and deep learning were used and tested with both CPU and GPU backends. Results show that the designed model’s run time to recognize a facial expression is 0.44 seconds. Moreover, the average test accuracy increased from 64% on the original FER2013 dataset to 91.76% on the modified balanced version using the same transfer learning model.

Author 1: Fatma Mazen Ali Mazen
Author 2: Ahmed Aly Nashat
Author 3: Rania Ahmed Abdel Azeem Abul Seoud

Keywords: Facial expressions detection and recognition; multi-task cascaded convolutional networks; transfer learning; residual neural network; CycleGAN; FER2013; GPU and CUDA; HAAR

PDF
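The class-rebalancing step in stage (c) can be sketched by computing how many synthetic images each minority class needs to match the largest class; actually generating those images is the CycleGAN’s job. The toy label counts below are made up, merely mimicking FER2013’s imbalance.

```python
from collections import Counter

def augmentation_plan(labels):
    """For each class, how many synthetic images are needed to match the
    largest class (the generation itself would be done by the CycleGAN)."""
    counts = Counter(labels)
    target = max(counts.values())
    return {cls: target - n for cls, n in counts.items()}

# Hypothetical label distribution with a strong minority class.
labels = ["happy"] * 50 + ["sad"] * 20 + ["disgust"] * 5
plan = augmentation_plan(labels)   # {'happy': 0, 'sad': 30, 'disgust': 45}
```

Balancing the per-class counts before training is what removes the bias toward majority expressions, which is consistent with the accuracy jump the abstract reports on the balanced version of the dataset.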

Paper 18: A Deep Learning Approach Combining CNN and Bi-LSTM with SVM Classifier for Arabic Sentiment Analysis

Abstract: Deep learning models have recently been proven successful in various natural language processing tasks, including sentiment analysis. Conventionally, a deep learning model’s architecture includes a feature extraction layer followed by a fully connected layer used to train the model parameters and perform classification. In this paper, we employ a deep learning model with a modified architecture that combines a Convolutional Neural Network (CNN) and Bidirectional Long Short-Term Memory (Bi-LSTM) for feature extraction with a Support Vector Machine (SVM) for Arabic sentiment classification. In particular, we use a linear SVM classifier that utilizes the embedded vectors obtained from the CNN and Bi-LSTM for polarity classification of Arabic reviews. The proposed method was tested on three publicly available datasets. The results show that the method achieved performance superior to the two baseline algorithms of CNN and SVM on all datasets.

Author 1: Omar Alharbi

Keywords: Sentiment analysis; Arabic sentiment analysis; deep learning approach; convolutional neural network CNN; bidirectional long short-term memory Bi-LSTM; support vector machine; SVM

PDF

Paper 19: Emotional Intelligence Robotics to Motivate Interaction in E-Learning: An Algorithm

Abstract: The development of emotional intelligence robotics in the learning environment provides valuable support for social interaction among students. Emotional intelligence robots should be able to recognize emotions, appear empathetic in learning situations, and build students’ confidence for active interaction. This paper presents related issues around integrating emotional intelligence robotics in E-learning, such as its role and outcomes in motivating interaction during education, and explores the main aspects of emotional intelligence between humans and robots. This paper aims to determine the design requirements of emotional robots. In addition, it proposes a framework for educational Robotics with Emotional Intelligence in Learning (EREIL). EREIL consists of three main units: student emotions discovery, student emotions representation, and EREIL-Student Communication (RSC). An overview of how EREIL works is also introduced. In future work, we plan to incorporate more sensor devices and machine learning algorithms to integrate face analysis with speech recognition, and to add a persuasion unit to the EREIL robot that guides students toward learning choices better suited to their abilities.

Author 1: Dalia khairy
Author 2: Salem Alkhalaf
Author 3: M. F. Areed
Author 4: Mohamed A. Amasha
Author 5: Rania A. Abougalala

Keywords: Robotics; emotional intelligence; interaction; E-learning; motivation; robot with emotional intelligence; machine learning algorithms; face analysis; speech recognition

PDF

Paper 20: Building Research Productivity Framework in Higher Education Institution

Abstract: The purpose of this study is to build a framework for improving research productivity in higher education institutions. The research begins by collecting data and defining candidate variables. The next step is to select variables from these candidates. Variable selection is carried out in three stages: univariate selection, feature importance, and a correlation matrix. After the variable selection stage, eight input variables and one target variable were obtained. The eight input variables are Article (C), Conference (CO), Grant (GT), Research Grantee (RG), Rank (R), Degree (D), IPR, and Citation (C). The target variable is Research Productivity (RP). These selected variables are used to build the framework. The final step is to test the framework. The testing process involves four data mining classifiers: Support Vector Machine, Decision Tree, K-Nearest Neighbor, and Naïve Bayes. The classification results are evaluated with confusion matrix-based metrics: accuracy, precision, sensitivity, and F-measure. The results show that the proposed framework obtains high accuracy scores for each classification algorithm, indicating that it is fit for use.
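The three-stage variable selection the abstract describes can be sketched with scikit-learn. This is illustrative only: a synthetic dataset stands in for the faculty data, and the stage parameters (k=8, a random forest for importances) are assumptions, not the paper’s settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in for the research-productivity dataset (12 candidate variables).
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           random_state=0)

# Stage 1: univariate selection (ANOVA F-test score per candidate variable).
univariate = SelectKBest(f_classif, k=8).fit(X, y)
kept = univariate.get_support(indices=True)

# Stage 2: feature importance from a tree ensemble.
importances = RandomForestClassifier(random_state=0).fit(X, y).feature_importances_

# Stage 3: correlation matrix, used to drop highly inter-correlated variables.
corr = np.corrcoef(X, rowvar=False)

print("kept after univariate stage:", kept)
```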

Author 1: Ahmad Sanmorino
Author 2: Ermatita
Author 3: Samsuryadi
Author 4: Dian Palupi Rini

Keywords: Framework; research productivity; variable selection; data mining classifier

PDF

Paper 21: Method for Determination of Support Length of Daubechies Basis Function for Wavelet MRA based Moving Characteristic Estimation

Abstract: A method is proposed for determining the support length of the Daubechies basis function for wavelet Multi-Resolution Analysis (MRA) based moving-characteristic estimation. The method is based on the root mean square difference between the original image and the image reconstructed from only the low-frequency MRA component. One application of the method, the detection and tracking of moving targets (a typhoon and the boundary between warm and cold currents) observed in satellite remote sensing images, is also shown. It is found that the proposed method can detect and track a moving typhoon and a current boundary in time series of GOES images.
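The core quantity, the RMS difference between a signal and its reconstruction from only the low-frequency MRA component, can be illustrated with a one-level Haar (Daubechies-1) decomposition in plain NumPy. This is a toy stand-in for the paper’s image-based computation, not its implementation.

```python
import numpy as np

def haar_lowpass_reconstruct(x):
    """One-level Haar MRA: keep only the approximation (low-frequency)
    coefficients and reconstruct, discarding the detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / 2.0   # low-frequency component
    return np.repeat(approx, 2)          # reconstruction without details

def rms_difference(x, y):
    return float(np.sqrt(np.mean((np.asarray(x) - np.asarray(y)) ** 2)))

# A smooth test signal with a little noise (stand-in for an image row).
signal = np.sin(np.linspace(0, 4 * np.pi, 64))
signal += 0.1 * np.random.default_rng(1).normal(size=64)
recon = haar_lowpass_reconstruct(signal)
err = rms_difference(signal, recon)
print(f"RMS difference (original vs. low-pass reconstruction): {err:.4f}")
```

In the paper’s setting this RMS difference, computed for varying Daubechies support lengths, guides the choice of basis function.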

Author 1: Kohei Arai

Keywords: Multi-dimensional wavelet transformation; multi resolution analysis: MRA; moving target detection; support length; typhoon movement; boundary detection and tracking

PDF

Paper 22: DeepfakeNet, an Efficient Deepfake Detection Method

Abstract: Existing CNN models do not perform well in deepfake detection across datasets. This paper proposes a deepfake detection model called DeepfakeNet, which consists of 20 network layers. Its block structure follows the stacking idea of ResNet and the split-transform-merge idea of Inception, i.e., the ResNeXt block. The study uses portions of the FaceForensics++, Kaggle, and TIMIT datasets, and data enhancement is used to expand the data for training and testing the models. The experimental results show that, compared with current mainstream models including VGG19, ResNet101, ResNeXt50, XceptionNet, and GoogleNet, under the same datasets and preset parameters, the proposed model not only achieves higher accuracy and a lower error rate in cross-dataset detection but also improves performance significantly.

Author 1: Dafeng Gong
Author 2: Yogan Jaya Kumar
Author 3: Ong Sing Goh
Author 4: Zi Ye
Author 5: Wanle Chi

Keywords: DeepfakeNet; deepfake detection; data enhancement; CNNs; cross dataset

PDF

Paper 23: A New Key Generation Technique based on Neural Networks for Lightweight Block Ciphers

Abstract: In recent years, small computing devices used in wireless sensors, radio frequency identification (RFID) tags, and the Internet of Things (IoT) have been increasing rapidly. However, the resources and capabilities of these devices are limited, and conventional encryption ciphers are computationally expensive and unsuitable for lightweight devices. Hence, research into lightweight ciphers is important. In this paper, a new key scheduling technique based on a neural network (NN) is introduced for lightweight block ciphers. The proposed NN approach is based on a multilayer feedforward neural network with a single hidden layer and a nonlinear activation function that satisfies Shannon’s confusion property. It is shown that an NN consisting of 4 input, 4 hidden, and 4 output neurons performs best in the key scheduling process; with this architecture, 5 unique keys are generated from 64-bit input data. Nonlinear bit shuffling is applied to create sufficient diffusion. The 4-4-4 NN approach generates secure keys with an avalanche effect of more than 50 percent and consumes less power and memory, thus ensuring better performance than existing algorithms. In our experiments, the memory usage and execution cycles of the NN key scheduling technique are evaluated with the Fair Evaluation of Lightweight Cryptographic Systems (FELICS) tool, which runs on the Linux operating system. The proposed NN approach is also implemented in MATLAB 2021a to test key sensitivity via the histograms and correlation graphs of several encrypted and decrypted images. Results also show that, compared to existing algorithms, the proposed NN-cipher requires fewer execution cycles and hence consumes less power.
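The avalanche-effect criterion the abstract reports (flipping one input bit should change roughly half of the output bits) can be measured generically. In this sketch SHA-256 stands in for the NN key scheduler, which is not publicly available; the metric itself is the same.

```python
import hashlib

def keybits(data: bytes) -> str:
    """Stand-in key-derivation function (SHA-256, not the paper's NN)."""
    digest = hashlib.sha256(data).digest()
    return "".join(f"{b:08b}" for b in digest)

def avalanche(data: bytes) -> float:
    """Average % of output bits that flip when one input bit is flipped."""
    base = keybits(data)
    nbits = len(data) * 8
    total = 0.0
    for i in range(nbits):
        flipped = bytearray(data)
        flipped[i // 8] ^= 1 << (i % 8)   # flip input bit i
        out = keybits(bytes(flipped))
        total += sum(a != b for a, b in zip(base, out)) / len(base)
    return 100.0 * total / nbits

block = b"\x01\x23\x45\x67\x89\xab\xcd\xef"  # a 64-bit input block
print(f"avalanche effect: {avalanche(block):.1f}%")  # ~50% for a good scheduler
```

An avalanche value near 50% is the benchmark the paper’s 4-4-4 NN key scheduler is reported to exceed.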

Author 1: Sohel Rana
Author 2: M. Rubaiyat Hossain Mondal
Author 3: A. H. M. Shahariar Parvez

Keywords: Lightweight cryptography; IoT; resource limited devices; neural network; avalanche effect; FELICS; MATLAB

PDF

Paper 24: Fully Automated Ontology Increment's User Guide Generation

Abstract: This research focuses on domain- and schema-independent user guide generation for ontology increments. Having a user guide or a catalogue/manual is vital for quick and effective knowledge dissemination, and if a user guide can be generated for an ontology as well, there are ample advantages. Stakeholders can scan the ontology's user guide and verify its eligibility against the intended purposes. Additionally, this could be useful for the ontology's version management and knowledge verification requirements. Because ontology construction is an iterative and incremental operation, there will be several intermediate versions before the ontology reaches its fine-tuned final version, so manual user guide creation would be tedious, if not impossible. Consequently, this research proposes a novel algorithmic approach to domain- and schema-independent ontology verbalization. A special algorithm is created to alter the functionality of Google's AliceBot to work as a verbalizer instead of a chatterbot. Artificial Intelligent Modelling Language (AIML) technology is utilized to create the templates for the ontology-specific knowledge embeddings. This entire process is fully automated via the proposed novel algorithm, which is a key contribution of this research. Finally, the user guide generation tool was evaluated against three different domains with the involvement of fifteen stakeholders, yielding an average acceptance of 82%.

Author 1: Kaneeka Vidanage
Author 2: Rosmayati Mohemad
Author 3: Noor Maizura Mohamad Noor
Author 4: Zuriana Abu Bakar

Keywords: AliceBot; artificial intelligent modelling language; ontologist; verbalizing

PDF

Paper 25: Design of an Efficient RPL Objective Function for Internet of Things Applications

Abstract: Over the past decade, rapid growth in the use of smart devices connected and communicating over the Internet has been seen in various domains. The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) is the routing backbone of such IoT networks. RPL is a proactive, distance-vector protocol that constructs routes based on an Objective Function, on whose design the protocol's performance largely depends. Depending on application requirements, the RPL standard offers flexibility in the design of the objective function and scope for improving the routing process. In this paper, an efficient Objective Function, RPL-FZ, is proposed. Speedy communication across nodes, low energy consumption, and reliable data delivery are key to achieving Quality of Service. Accordingly, RPL-FZ uses the metrics Residual Energy of the node, Delay, and ETX (Expected Transmission Count) to make routing decisions. The metrics are combined using a fuzzy logic technique to obtain a single metric, Quality, for each neighbor node. The neighbor with the highest Quality is chosen as the best parent to forward sensed data toward the collection unit. RPL-FZ is integrated into Contiki OS, and network simulations are performed using the COOJA simulator. The performance evaluation reveals that RPL-FZ achieves a 7% higher packet delivery rate, 8% lower energy consumption, and 8% lower latency compared to the single-metric standard objective functions OF0 and MRHOF.
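Combining ETX, delay, and residual energy into a single Quality score can be sketched with simple fuzzy-style memberships. The membership shapes, ranges, and equal weighting here are illustrative assumptions, not RPL-FZ's actual rule base.

```python
def tri_up(x, lo, hi):
    """Membership rising linearly from 0 at `lo` to 1 at `hi` (clamped)."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def quality(residual_energy_pct, delay_ms, etx):
    """Fuzzify each metric into a 'good' degree, then average the degrees.
    Higher residual energy, lower delay, and lower ETX => higher Quality."""
    e_good = tri_up(residual_energy_pct, 0, 100)   # more energy is better
    d_good = 1.0 - tri_up(delay_ms, 0, 500)        # less delay is better
    x_good = 1.0 - tri_up(etx, 1, 10)              # fewer retransmissions
    return (e_good + d_good + x_good) / 3.0

# Candidate parents: the one with the highest Quality is preferred.
a = quality(residual_energy_pct=80, delay_ms=50, etx=1.5)
b = quality(residual_energy_pct=30, delay_ms=300, etx=4.0)
print(f"Quality A={a:.2f}, Quality B={b:.2f}")
```

Node A, with more energy, less delay, and a better link, scores higher and would be selected as the preferred parent.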

Author 1: Sonia Kuwelkar
Author 2: H.G. Virani

Keywords: Internet of things; low power Lossy Networks; IPv6 routing protocol for LLN; objective function; fuzzy logic

PDF

Paper 26: Enhance Risks Management of Software Development Projects in Concurrent Multi-Projects Environment to Optimize Resources Allocation Decisions

Abstract: In software development project management, risk management represents critical knowledge and skills both at the level of a single software project and at the level of an enterprise that executes multiple software projects concurrently. Sound risk management decisions help optimize resource allocation at the enterprise level so the enterprise can achieve its goals. The issue therefore calls for centralized risk management at the enterprise level as a whole, not per project. Risk management is implemented through several stages and with different methods, and various studies deal with multiple aspects of software management. This research provides an analytical view of risk assessment in an environment of multiple software development projects that run simultaneously. The study uses a public dataset, used in previous research, covering several simultaneous projects in one organization; it describes the multi-project risks through 12 variables. A comparative analysis uses classification methods (Random Forest, J48, REP Tree, and Simple Logistic) to assess risks and present them in a central view. The experiment achieved high accuracy in determining risk levels in a multi-project environment, reaching approximately 98% using the REP Tree technique.

Author 1: Ibraheem M Alharbi
Author 2: Adel A Alyoubi
Author 3: Majid Altuwairiqi
Author 4: Mahmoud Abd Ellatif

Keywords: Risk management; multiple software projects; risk assessment; software development projects

PDF

Paper 27: Fog-based Remote in-Home Health Monitoring Framework

Abstract: As a result of the worldwide spread of the COVID-19 epidemic over the past and current year, a reliable health care system for remote observation has become necessary, especially in care homes for the elderly. Many research works have been done in this field, but they still have limitations in terms of latency, security, response delay, and long execution times. To remove these limitations, this paper introduces a smart healthcare framework called Remote in-Home Health Monitoring (RHHM), which provides an architecture and functionalities that facilitate monitoring patients' conditions while they are at home. The framework exploits the benefits of fog layers with high-level services such as local storage, local real-time data processing, and embedded data mining, taking over some burdens of the sensor network and the cloud and acting as a decision maker. In addition, it incorporates cameras alongside body sensors in diagnosis for more reliability and efficiency while preserving privacy. The performance of the proposed framework was evaluated using the popular iFogSim toolkit. The results show the proposed system's ability to reduce latency, energy consumption, network communications, and overall response time. This work supports the overall goal of establishing a high-performance, secure, and reliable smart healthcare system.

Author 1: Fatma H. Elgendy
Author 2: Amany M. Sarhan
Author 3: Mahmoud A. M. Alshewimy

Keywords: Fog computing; health monitoring; iFogSim; IoT; cloud computing

PDF

Paper 28: What Drives Airbnb Customers’ Satisfaction in Amsterdam? A Sentiment Analysis

Abstract: The sharing economy is a new socio-economic system that allows individuals to rent out their personal belongings, such as a private car or a room in their home, for a short period. This study investigates the attributes that impact customers’ satisfaction when using sharing economy property rental websites. Large datasets of Airbnb’s online reviews and listings in Amsterdam were analyzed using sentiment analysis, word clustering, ordinal logistic regression, and visualization techniques. Findings reveal that the polarity of Airbnb guest reviews in Amsterdam is significantly impacted by property price, value, cleanliness, rating, host communication, ease of check-in, the accuracy of the property description, and whether the host is a superhost. Surprisingly, the property neighborhood was not found to impact customers’ sentiment in Amsterdam. In addition, Airbnb guests in Amsterdam tend to express their satisfaction positively mainly based on the property’s exact location and host interaction, followed by the facilities surrounding the property, property cleanliness, and room quality. On the other hand, negative online reviews tend to be linked mainly to problems with check-in services, followed by aspects related to weak host interaction, location, and room quality. The results indicate that Airbnb hosts need to offer clear and easy check-in services while keeping a good communication channel with their guests to enhance customers’ experience and increase their satisfaction level. Future studies should investigate the applicability of these findings in the context of other cities.

Author 1: Heyam Abdullah Bin Madhi
Author 2: Muna M. Alhammad

Keywords: Airbnb; customer satisfaction; customer experience; big data; sentiment analysis; ordinal logistic regression

PDF

Paper 29: Motivational Factors Impacting the Use of Citizen Reporting Applications in Saudi Arabia: The Case of Balagh Application

Abstract: Citizen reporting applications are considered a new approach for interaction between government authorities and citizens. They are implemented to collectively gather information from citizens on issues of public interest such as accidents, traffic violations, and commercial fraud. Through such applications, citizens can provide information about incidents efficiently and conveniently to the local authorities via mobile applications designed for these specific purposes. For such applications to succeed, citizens must be willing to participate continually and to become daily users. This paper applies self-determination theory to investigate the factors that encourage citizens to participate in citizen reporting applications. The factors impacting behavioural intention to use the applications are divided into two categories: intrinsic motivation factors, which include self-concern, social responsibility, and revenge; and extrinsic motivation factors, which include output quality and rewards. The study empirically surveyed 297 Saudi citizens from different age groups, and the partial least squares (PLS) approach validated the research model. Findings reveal that output quality, revenge, and self-concern are significantly associated with citizens’ motivation to use the applications, whereas rewards and social responsibility do not significantly influence citizens’ motivation to engage with such applications. This study contributes theoretically by enriching the literature on the factors behind users’ engagement with citizen reporting applications. It also contributes practically by helping developers of citizen reporting applications consider these factors when designing and marketing this kind of application.

Author 1: Muna M. Alhammad
Author 2: Layla Hajar
Author 3: Sahar Alshathry
Author 4: Mashael Alqasabi

Keywords: Crowdsourcing; self-determination theory; intrinsic motivation; extrinsic motivation; citizens reporting

PDF

Paper 30: Securing Student Data Privacy using Modified Snake and Ladder Cryptographic Algorithm

Abstract: Transformed by the advent of the Digital Revolution, the world deals with a gold mine of data every day. Along with improvements in processing methods for the data, data security is of utmost importance. Recently, there was a noticeable surge in online learning during the pandemic. Modifying their workflow strategies, educational institutions provided courses designed to suit the need of the hour, which opened the avenue for a greater number of students to take part in online learning. With the increase in the number of registered students, there exists a substantial repository of data to deal with, and hackers have been targeting student data and using it for illegal purposes. In this research paper, the classic Snake and Ladder game is modified to perform encryption on short-text student data and ensure data privacy. The novel algorithm maintains simplicity yet produces a strong ciphertext, and it stands strong against brute-force attacks, ciphertext-only attacks, and similar threats. Decryption uses the same key as encryption, the scheme being symmetric, and new variable keys are generated every time the algorithm is used.

Author 1: Kamaladevi Kunkolienker
Author 2: Vaishnavi Kamat

Keywords: Student data; privacy; encryption; decryption; snake and ladder; variable keys

PDF

Paper 31: A Method to Accommodate Backward Compatibility on the Learning Application-based Transliteration to the Balinese Script

Abstract: This research proposes a method to accommodate backward compatibility in a learning-application-based transliteration to the Balinese Script. The objective is to accommodate the standard transliteration rules from the Balinese Language, Script, and Literature Advisory Agency. This is considered the main contribution, since there has been no prior workaround in this research area. This multi-disciplinary collaboration is one of the efforts to digitally preserve the endangered Balinese local language knowledge in Indonesia. The proposed method covers two aspects: (1) its backward compatibility allows interoperability at a certain level with the older transliteration rules; and (2) breaking backward compatibility at a certain level is unavoidable, since for some aspects the standard rule and the old one prescribe contradictory treatments. The study was conducted on the developed web-based transliteration learning application, BaliScript, where Latin text input is converted into Balinese Script output using a dedicated Balinese Unicode font. Through the experiment, the proposed method gave the expected transliteration results in accommodating backward compatibility.

Author 1: Gede Indrawan
Author 2: I Ketut Paramarta
Author 3: I Gede Nurhayata
Author 4: Sariyasa

Keywords: Backward compatibility; Balinese Script; learning application; transliteration

PDF

Paper 32: Validation: Conceptual versus Activity Diagram Approaches

Abstract: A conceptual model is used to support development and design within the area of systems and software modeling. The notion of validation refers to representing a domain in a model accurately and generating results using an executable model. In UML specifications, validation verifies the correctness of UML diagrams against any constraints and rules defined within the model. Currently, significant research has been conducted on generating test sets to validate that UML diagrams conform to requirements. UML activity diagrams are a specific focus of such efforts. An activity diagram is a flexible instrument for describing a system’s behaviors and the internal logic of complex operations. This paper focuses on the notion of validation using activity diagrams and contrasts that process with a proposed method that involves an informal validation procedure. Accordingly, this informal validation involves comparing requirements to specifications expressed by a diagram of a modeling language called thinging machine (TM) modeling. The informal validation is a type of model checking that requires the model to be small enough for the verification to be done in a limited space or time period. In the proposed method, the model diagram is divided into subdiagrams to achieve this purpose. We claim the TM behavioral model comes with a particular dispositional structure that allows a designer to “carve” a model into smaller components for informal validation, which is shown through two case studies.

Author 1: Sabah Al-Fedaghi

Keywords: Validation; conceptual model; activity diagram; thinging machine; informal validation

PDF

Paper 33: Identifying Small and Medium Enterprise Smart Entrepreneurship Training Framework Components using Thematic Analysis and Expert Review

Abstract: Small and Medium Enterprises (SMEs) today face a competitive business environment that is complex and rapidly changing. Technology is seen as a mediator capable of transforming SMEs to greater heights amid the vigorous pace of a borderless world. The agenda of SMEs to generate national income and create more employment opportunities has led the government to focus on improving business opportunities for SMEs to boost the country's economic growth. To sustain their businesses, SME owners should be able to adopt the Internet as a key component in designing new business model values, customer experiences, and internal capabilities that support key operations. However, some SME owners still do not leverage Information and Communication Technology (ICT) in their business operations. This study interviewed eight SME owners operating businesses in Kuala Lumpur and Selangor to identify the most important business training courses needed by SMEs in Malaysia. The data were analyzed using the Thematic Analysis method, and five main components of courses for SMEs were found: Business Management, Sales and Marketing, Accounting and Finance, ICT and Technology, and Production and Operations. Based on this analysis, the researchers developed a smart entrepreneurship training framework around the five components and produced a system called the Malaysian SMEs Psychometric Test (U-PPM), which has been reviewed and endorsed by the respective panels of experts. The proposed framework is important for SME owners and management, as well as the government and stakeholders, when making decisions about business training courses, and for increasing the use of ICT and digital technologies to provide a positive impact to all SMEs in Malaysia.

Author 1: Anis Nur Assila Rozmi
Author 2: Puteri N.E. Nohuddin
Author 3: Abdul Razak Abdul Hadi
Author 4: Mohd Izhar A. Bakar

Keywords: Small Medium Enterprise (SME); business owner; thematic analysis method; expert panel; Information and Communication Technology (ICT); course selection system; smart entrepreneurship training framework

PDF

Paper 34: A Framework for Protecting Teenagers from Cyber Crimes and Cyberbullying

Abstract: Social applications are powerful tools that allow people to connect and interact with each other. However, their negative use cannot be ignored. Cyberbullying is a new and serious Internet problem and one of the most common risks teenagers face online. More than half of young people report that they do not tell their parents when it occurs, and it can have significant psychological consequences. Cyberbullying involves the deliberate use of digital media on the Internet to convey false or embarrassing information about others. Therefore, this article provides a way for parents to detect cyberbullying in social media applications. The purpose of our work is to develop an architectural model for identifying and measuring the state of cyberbullying faced by children on social media applications. For parents, this will be a good tool for monitoring their children without invading their privacy. Finally, some interesting open-ended questions are raised, suggesting promising ideas for new research in this field.

Author 1: Sultan Saud Alanazi
Author 2: Adwan Alownie Alanazi

Keywords: Cyberbullying; cyber bullying; internet crimes; social media security; e-crimes

PDF

Paper 35: Anonymity Feature in Android Mobile Apps for Interest Groups

Abstract: Mobile apps that provide platforms for interest-oriented communities (or interest groups) allow people with common interests to gather virtually and share their passions and ideas. While participation in such apps is voluntary, some people feel uncomfortable revealing their real identities due to privacy and safety concerns. Thus, some mobile apps provide an anonymity feature that allows people to join a group anonymously. Nevertheless, little is known about how the anonymity feature relates to people who prefer to join interest groups. Thus, in this paper, we hypothesize that mobile app users who highly value interest groups will also highly value the anonymity feature provided in those groups. In particular, we explored the market segment of Android mobile apps with anonymity features within selected interest groups. A pilot study was conducted in which 34 Android app users, primarily Malaysian, filled out questionnaires designed to investigate the anonymity feature in the apps. The results of the pilot study show that most Android apps nowadays offer their users the option to remain anonymous. The findings show that most users who give a high score to the importance of interest-based groups also give a high score to the importance of the anonymity feature offered by mobile app providers.

Author 1: Lee Sin Yi
Author 2: Nurul A. Emran
Author 3: Norharyati Harum

Keywords: Anonymity feature; mobile social apps; quantitative observation; android apps; anonymous social media; interest groups component

PDF

Paper 36: A Method to Prevent SQL Injection Attack using an Improved Parameterized Stored Procedure

Abstract: Structured Query Language (SQL) injection is one of the critical threats to database security. SQL injection attacks put the data contained in the database at risk of being exploited by irresponsible parties, compromise data integrity, disrupt server operations, and in turn damage the organization's image. Although SQL injection is an attack performed at the application level, preventing it requires security controls at all levels: the application level, the database level, and the network level. The absence of SQL injection prevention measures at the application level leaves the database vulnerable to attack. Reviews indicate that current approaches are still insufficient in addressing three issues: i) improper use of dynamic SQL, ii) lack of an input validation process, and iii) inconsistent error handling. Currently, program and database code security relies solely on basic measures focused at the network level, such as network firewalls, database access control, and web server request filtering. Unfortunately, these measures are inadequate to safeguard the program code and databases from attack. To address these three issues, a new comprehensive method is proposed that uses an improved parameterized stored procedure to enhance database security. Experimental results show that the proposed method prevents SQL injection and shortens processing time compared with existing methods, hence improving database security.
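The core defense, binding user input as parameters instead of concatenating it into dynamic SQL, can be shown with Python's built-in sqlite3 module. A plain parameterized query stands in for the paper's improved stored procedure, since SQLite has no stored procedures.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"  # classic tautology-based injection payload

# VULNERABLE: dynamic SQL built by string concatenation.
vulnerable = f"SELECT * FROM users WHERE name = '{malicious}'"
print("dynamic SQL rows:", conn.execute(vulnerable).fetchall())  # leaks all rows

# SAFE: the driver binds the value; the payload is treated as literal data.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (malicious,))
print("parameterized rows:", safe.fetchall())  # no rows match
```

The tautology turns the dynamic query's WHERE clause into an always-true condition, while the parameterized version compares against the literal string and matches nothing.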

Author 1: Kamsuriah Ahmad
Author 2: Mayshara Karim

Keywords: SQL injection prevention; database security; parameterized stored procedure; network firewall

PDF

Paper 37: From User Stories to UML Diagrams Driven by Ontological and Production Model

Abstract: The user story format has become the most popular way of expressing requirements in Agile methods. However, a requirement does not state how a solution will be physically achieved. The purpose of this paper is to present a new approach that automatically transforms user stories into UML diagrams, namely class, use case, and package diagrams. User stories are written in natural language (English), so a natural language processing tool was necessary for their processing; in our case, we used Stanford CoreNLP. The automation approach combines rules formulated as predicates with an ontological model. Prolog rules are used to extract relationships between classes and eliminate those at risk of error; to extract the design elements, the Prolog rules use the dependencies provided by Stanford CoreNLP. An ontology representing the components of the user stories was created to identify equivalent relationships and inclusion use cases. The tool was implemented in the Python programming language and has been validated on several case studies.
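The extraction step can be illustrated with a simple pattern over the canonical user-story template. This regex is a stand-in for the Stanford CoreNLP dependency parsing the paper actually uses; the mapping of roles to UML elements is the illustrative idea.

```python
import re

STORY = re.compile(
    r"As an? (?P<actor>[^,]+), I want to (?P<action>\w+)(?: (?P<object>.+?))?\.?$",
    re.IGNORECASE,
)

def parse_story(story: str) -> dict:
    """Map a user story to candidate UML elements:
    actor -> use-case actor, action -> use case, object -> candidate class."""
    m = STORY.match(story.strip())
    if not m:
        raise ValueError(f"not in user-story format: {story!r}")
    return {k: (v.strip() if v else None) for k, v in m.groupdict().items()}

print(parse_story("As a librarian, I want to register a new member."))
```

A dependency parse does this far more robustly (handling compound verbs, conjunctions, and free word order), which is why the paper relies on CoreNLP dependencies plus Prolog rules rather than surface patterns.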

Author 1: Samia Nasiri
Author 2: Yassine Rhazali
Author 3: Mohammed Lahmer
Author 4: Amina Adadi

Keywords: Ontology; prolog rules; natural language processing; UML diagrams; user stories

PDF

Paper 38: Law Architecture for Regulatory-Compliant Public Enterprise Model: A Focus on Healthcare Reform in Egypt

Abstract: Public business operations are governed by a set of legal sources that regulate their implementation under administrative laws, and these laws increasingly influence software system design and development. Enterprise Architecture (EA) is a critical approach for driving business and Information Systems (IS) transformation in the public sector. EA frameworks, however, lack representation schemas that support law models. Understanding the law architecture in the government domain is required for EA work and is thus the first architecture activity that must be completed. Because law compliance reviews of EA approaches are performed by legal experts, there is a gap between law experts and technical system architects. To cover these gaps, this paper proposes a novel framework for analyzing administrative laws, extracting legal policies and legal rules, identifying their relationships with other EA domains, and identifying law compliance requirements. Moreover, the integration of the proposed law architecture framework with existing EA frameworks to reach a law-compliant public enterprise model is described. Finally, the applicability of the proposed framework is shown and validated through a case study, and subject matter experts in the legal domain evaluated the extracted legal policies and rules during the implementation of the framework.

Author 1: Alsayed Abdelwahed Mohamed
Author 2: Nashwa El-bendary
Author 3: A. Abdo

Keywords: Law architecture; regulatory compliance; requirements engineering; enterprise architecture; law ontology; TOGAF

PDF

Paper 39: Microorganisms: Integrating Augmented Reality and Gamification in a Learning Tool

Abstract: Microorganisms is a Year 6 Science topic that primary school students find considerably less attractive because of the large number of facts that require good imagination to understand. Only limited applications are available on smartphones as tools for learning subjects, especially Science, Technology, Engineering and Mathematics (STEM). Since the young generation is very much into current technology, there is a need to develop an application to gain students' interest and improve their understanding of microorganisms. Therefore, a microorganism learning application called Microorganisms, which combines augmented reality (AR) and gamification, was developed. The Microorganisms prototype includes two modules: learning and training. The learning module uses AR technology that scans marker images to display a digital layer of microorganisms in three dimensions (3D). Meanwhile, the training module is delivered through gamification consisting of quiz questions, a timer, and a score. The application was designed using Agile methodology and developed using software such as Unity, Autodesk 3ds Max, Vuforia, and Firebase. Ten respondents, nine students and one teacher at a primary school, assessed the prototype through experimental testing. The results showed that, on average, the user satisfaction score was 4.6 out of 5. Thus, the Microorganisms application based on AR and gamification can be considered a good learning tool for primary school students to learn about microorganisms.

Author 1: Ratna Zuarni Ramli
Author 2: Nor Athirah Umairah Marobi
Author 3: Noraidah Sahari Ashaari

Keywords: Augmented reality; digital game-based learning (DGBL); game; learning tool; microorganisms; usability testing

PDF

Paper 40: Emotional Evocative User Interface Design for Lifestyle Intervention in Non-communicable Diseases using Kansei

Abstract: The advancement of technology has led to the development of artificial intelligence-based healthcare applications that can be easily accessed and used to assist people with lifestyle interventions for preventing the development of non-communicable diseases (NCDs). Previous research suggested that users demand a more emotionally evocative user interface design. However, most of the time this has been ignored due to the lack of a model that could be referred to when developing emotionally evocative user interface designs. This creates a gap in user interface design that could lead to ineffective content delivery in the NCD domain. This paper aims to investigate emotion traits and their relationship with user interface design for lifestyle intervention. The Kansei Engineering method was applied to determine the dimensions for constructing an emotionally evocative user interface design. Data analysis was done using the SPSS statistics tool, and the results showed the emotional concepts that are significant and impactful for user interface design for lifestyle intervention in the NCD domain. The outcome of this research shall create new research fields that incorporate multiple research domains, including user interface design and emotions.

Author 1: Noor Afiza Mat Razali
Author 2: Normaizeerah Mohd Noor
Author 3: Norulzahrah Mohd Zainudin
Author 4: Nur Atiqah Malizan
Author 5: Nor Asiakin Hasbullah
Author 6: Khairul Khalil Ishak

Keywords: Emotion; Kansei; non-communicable diseases; lifestyle intervention; user-interface design

PDF

Paper 41: Exploring the Socio-economic Implications of Artificial Intelligence from Higher Education Student’s Perspective

Abstract: As a result of the instability of oil prices, the economies of the Gulf region are increasingly focusing on new technologies. Thus, Saudi Arabia has demonstrated a strong commitment to the development and implementation of Artificial Intelligence (AI) technologies as alternative sources of revenue and growth, in line with globalisation, development, and Vision 2030. This paper examines the impact of AI on the Saudi community, especially its social and economic evolution. Special focus is placed on the use of smart cars and smart cameras to intelligently monitor traffic, public services and national security. A total of 424 participants from the Eastern Province took part in this study. Analysis and discussion of the obtained results are also presented. The findings showed that 75.71% of participants largely agreed about the economic impact of AI leading to an increase in both government and business financial incomes, whereas only 59.84% largely agreed about the social impact of AI, as they are worried about AI ethical concerns, job loss and the changing workforce.

Author 1: Sarah Bamatraf
Author 2: Lobna Amouri
Author 3: Nahla El-Haggar
Author 4: Aishah Moneer

Keywords: Artificial intelligence; Saudi community; data analysis; social efficiency impact; economic productivity impact

PDF

Paper 42: Feistel Network Assisted Dynamic Keying based SPN Lightweight Encryption for IoT Security

Abstract: In the last few years, Internet-of-Things (IoT) technology has emerged significantly to serve varied purposes including healthcare, surveillance and control, business communication, civic administration and even varied financial activities. Despite such broad applications, being distributed, wireless-based systems, IoTs are often considered vulnerable to intrusion or malicious attacks, where, exploiting the benefits of loosely connected peers, attackers intend to gain device or data access without authorization. However, being resource-constrained in nature while demanding time-efficient computation, the majority of classical cryptosystems are either computationally exhaustive or too limited to avoid attacks like Brute-Force, Smart Card Loss Attack, Impersonation, and Linear and Differential attacks. The assumption that increasing key size and encryption rounds can achieve augmented security often fails in IoT due to increased complexity, overhead and eventual resource exhaustion. Considering this limitation, in this paper we propose a state-of-the-art Generalized Feistel Network assisted Shannon-Conditioned and Dynamic Keying based SPN (GFS-SSPN) Lightweight Encryption System for IoT security. Unlike classical cryptosystems or even substitution and permutation network (SPN) based methods, we designed a Shannon-criteria bounded SPN model with Generalized Feistel Network (SPN-GFS) that employs a 64-bit dynamic key with five rounds of encryption to enable highly attack-resilient IoT security. The proposed model was designed in such a manner that it suits both data-level security and device-level access-credential security, enabling a “Fit-To-All” security solution for IoTs. Simulation results revealed that the proposed GFS-SSPN model exhibits very small encryption time with optimal NPCR and UACI. Additionally, the correlation output was also found to be encouragingly fair, indicating higher attack resilience.
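As a point of reference (a toy sketch, not the paper's GFS-SSPN cipher), a Type-II generalized Feistel round on four 16-bit branches XORs a keyed round function of the even branches into the odd branches and then rotates the branches; decryption undoes the rotation and repeats the same XORs. The round function `f` below is an arbitrary placeholder:

```python
def f(x, k):
    """Toy keyed round function on 16-bit values (placeholder, not the paper's)."""
    return ((x * 31 + k) ^ (x >> 3)) & 0xFFFF

def gfs_round(b, k):
    """One Type-II generalized Feistel round over branches (b0, b1, b2, b3)."""
    b0, b1, b2, b3 = b
    return (b1 ^ f(b0, k), b2, b3 ^ f(b2, k), b0)   # XOR into odd branches, rotate

def gfs_round_inv(c, k):
    """Inverse round: undo the rotation, then cancel the XORs."""
    c0, c1, c2, c3 = c            # c == (b1 ^ f(b0), b2, b3 ^ f(b2), b0)
    b0, b2 = c3, c1
    return (b0, c0 ^ f(b0, k), b2, c2 ^ f(b2, k))

state = (0x0123, 0x4567, 0x89AB, 0xCDEF)
keys = [0x1111, 0x2222, 0x3333, 0x4444, 0x5555]     # five rounds, as in the paper
ct = state
for k in keys:
    ct = gfs_round(ct, k)
pt = ct
for k in reversed(keys):
    pt = gfs_round_inv(pt, k)
```

Note that `f` need not be invertible for decryption to work; this is the structural property that makes Feistel networks attractive for lightweight designs.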

Author 1: Krishna Priya Gurumanapalli
Author 2: Nagendra Muthuluru

Keywords: Internet-of-Things; dynamic programming; lightweight encryption; generalized Feistel network; substitution and permutation network

PDF

Paper 43: Cloud-based Secure Healthcare Framework by using Enhanced Ciphertext Policy Attribute-Based Encryption Scheme

Abstract: Cloud computing is an emerging technology that has been used to provide better healthcare services to users because of its convenient and economical features. Notably, healthcare services require fast and reliable data sharing at any time, from anywhere, for better monitoring and decision making in medical requirements. However, the privacy and integrity of electronic healthcare records become a significant issue during data sharing and outsourcing in the Cloud. The data privacy of clients/patients is important in healthcare services, where exposure of the data to unauthorized parties is unacceptable. In order to address this security loophole, this paper presents a Cloud-based Secure Healthcare Framework (SecHS) to offer safe access to healthcare and medical data. Specifically, this paper enhances the Ciphertext Policy Attribute-Based Encryption (CP-ABE) scheme by adding two more modules which aim to provide fine-grained access control and offer privacy and integrity of data; it facilitates encryption and hashing schemes. The proposed framework is compared with existing frameworks that use the CP-ABE scheme, and the comparison shows that SecHS offers better features for securing healthcare service data. Data security requirements such as privacy, integrity and fine-grained access control are thus effectively addressed for assuring data sharing in the Cloud environment.

Author 1: Siti Dhalila Mohd Satar
Author 2: Mohamad Afendee Mohamed
Author 3: Masnida Hussin
Author 4: Zurina Mohd Hanapi

Keywords: Cloud computing; privacy and integrity; fine-grained access control; Ciphertext policy attribute-based encryption; electronic health record

PDF

Paper 44: Determining Optimal Number of K for e-Learning Groups Clustered using K-Medoid

Abstract: e-Learning is appropriate when learners are grouped and facilitated to learn according to their learning style and at their own pace. Extensive research has been proposed to categorize learners based on various e-learning parameters. Most of these studies have deployed clustering principles for grouping e-learners and, in particular, have utilized the K-Medoid principle for better clustering. In the classical K-Medoid algorithm, predicting or determining the value of K is critical; two methods, namely the Elbow and Silhouette methods, are widely applied. In this paper, we experiment with the application of both methods to determine the value of K for clustering e-learners with K-Medoid and show that the Silhouette method best predicts the value of K.
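The Silhouette method the abstract favours can be sketched directly: for each point, compare its mean distance to its own cluster (a) with its mean distance to the nearest other cluster (b), and pick the K whose clustering maximizes the mean of (b - a) / max(a, b). The 1-D data and candidate labelings below are illustrative, not the paper's:

```python
def silhouette(points, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point."""
    clusters = {c: [p for p, l in zip(points, labels) if l == c]
                for c in set(labels)}
    scores = []
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q != p]
        if not own:                       # singleton cluster contributes 0
            scores.append(0.0)
            continue
        a = sum(abs(p - q) for q in own) / len(own)
        b = min(sum(abs(p - q) for q in clusters[c]) / len(clusters[c])
                for c in clusters if c != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

points = [1.0, 1.2, 1.1, 8.0, 8.3, 8.1, 15.0, 15.2]
# Candidate clusterings (as a K-Medoid run might produce) for K = 2 and K = 3.
candidates = {
    2: [0, 0, 0, 1, 1, 1, 1, 1],
    3: [0, 0, 0, 1, 1, 1, 2, 2],
}
best_k = max(candidates, key=lambda k: silhouette(points, candidates[k]))
```

Here K = 3 wins because K = 2 forces the two well-separated upper groups into one cluster, inflating intra-cluster distances.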

Author 1: S. Anthony Philomen Raj
Author 2: Vidyaathulasiraman

Keywords: Clustering; e-learning; elbow method; k-means; k-medoid; machine learning; silhouette method

PDF

Paper 45: Abnormal Pulmonary Sounds Classification Algorithm using Convolutional Networks

Abstract: In the world and in Peru, Acute Respiratory Infections are the main cause of death, especially in the most vulnerable populations: children under 5 years of age and older adults. Pneumonia is the leading cause of death of children in the world, and 60.2% of pneumonia cases affect children under 5 years of age. Thus, prevention and timely treatment of lung diseases are crucial to reduce infant mortality in Peru. Among the main problems associated with this high percentage is the lack of medical professionals and resources, especially in remote areas, such as Puno, Huancavelica and Arequipa, which experience temperatures as low as -20°C during the cold season. This study develops an algorithm based on computational neural networks to differentiate between normal and abnormal lung sounds. An initial base of 917 sounds was used; through a process of data augmentation, carried out because neural networks require a large amount of data, this base was increased to 8253 sounds in total. From each signal, features were extracted using three methods: MFCC, Mel-spectrogram and STFT. Three models were generated. The first, which classifies normal versus abnormal, obtained a training accuracy of 1.0 and a testing accuracy of 0.998. The second, which classifies normal sound, pneumonia and other abnormalities, obtained a training accuracy of 0.9959 and a testing accuracy of 0.9885. Finally, we classified by specific ailment, obtaining a training accuracy of 0.9967 and a testing accuracy of 0.9909. This research provides interesting findings about the automatic diagnosis and classification of lung sounds using convolutional neural networks, and is the beginning of the development of a platform to assess the risk of pneumonia at the first moment of care, allowing rapid care and referral that seeks to reduce mortality associated mainly with pneumonia.
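As a minimal illustration of one of the three feature extractors named above, a short-time Fourier transform (STFT) slides a window over the signal and takes the magnitude spectrum of each frame. This pure-Python sketch uses a rectangular window and arbitrary frame/hop sizes, unlike a production extractor:

```python
import cmath

def stft_magnitudes(signal, frame=8, hop=4):
    """Magnitude spectra of successive frames (rectangular window, naive DFT)."""
    frames = []
    for start in range(0, len(signal) - frame + 1, hop):
        chunk = signal[start:start + frame]
        spectrum = [abs(sum(chunk[n] * cmath.exp(-2j * cmath.pi * k * n / frame)
                            for n in range(frame)))
                    for k in range(frame // 2 + 1)]   # keep non-negative bins
        frames.append(spectrum)
    return frames

# A constant signal concentrates all energy in the DC bin of every frame.
mags = stft_magnitudes([1.0] * 16)
```

Mel-spectrograms and MFCCs are built on top of exactly these per-frame spectra (mel filter banks, then a log and DCT for MFCC).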

Author 1: Alva Mantari Alicia
Author 2: Arancibia-Garcia Alexander
Author 3: Chávez Frías William
Author 4: Cieza-Terrones Michael
Author 5: Herrera-Arana Víctor
Author 6: Ramos-Cosi Sebastian

Keywords: Algorithm; classification; computational neural networks; lung sounds; mortality; pneumonia

PDF

Paper 46: Cost Effective Hybrid Fault Tolerant Scheduling Model for Cloud Computing Environment

Abstract: Cloud computing provides a flexible and cost-effective way for end users to access data from a multiplatform environment. Despite the support offered by the features of cloud computing, there are also chances of resource failure; hence, a fault-tolerant mechanism is needed to achieve undisrupted performance of cloud services. Task reallocation and duplication are the two commonly used fault-tolerance mechanisms, but the task replication method results in huge storage and computational overhead as the number of tasks increases. If the number of faults is high, it incurs more storage overhead and time complexity depending on task criticality. In order to solve these issues, we propose a Cost Effective Hybrid Fault Tolerant Scheduling (CEHFTS) Model for cloud computing. In this model, the Failure Occurrence Probability (FoP) of each VM is estimated from its previous failures and successful executions. Then an adaptive fault recovery timer is maintained during a fault, which is adjusted based on the type of fault. Experimental results have shown that the CEHFTS model achieves 43% reduced storage cost and 13% reduced response delay for critical tasks when compared to the existing technique.
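A hedged sketch of the FoP estimate described above, read as a simple empirical failure rate (the adaptive recovery timer and scheduling logic are the paper's contribution and are not reproduced here):

```python
def failure_occurrence_probability(history):
    """Estimate a VM's FoP from its record of past outcomes.

    history is a list of booleans: True = failed execution, False = success.
    """
    if not history:
        return 0.0            # no evidence yet: assume reliable
    return sum(history) / len(history)

# A VM that failed 2 of its last 10 executions:
fop = failure_occurrence_probability([True, False, False, True] + [False] * 6)
```

A scheduler could then prefer VMs with low FoP for critical tasks, reserving replication for the few placements that remain risky.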

Author 1: Annabathula. Phani Sheetal
Author 2: K. Ravindranath

Keywords: Cloud computing; failures; fault tolerant; critical tasks; scheduling; fault recovery; overhead

PDF

Paper 47: An Efficient Image Clustering Technique based on Fuzzy C-means and Cuckoo Search Algorithm

Abstract: Clustering is a predominant technique used in image segmentation due to its simple and efficient approach. It is very important for the analysis, extraction and interpretation of images, which makes it useful in multiple applications and in various fields. In this article, we propose a different image segmentation technique based on the cooperation between an optimization algorithm, the Cuckoo Search Algorithm (CSA), and a clustering technique, Fuzzy C-means (FCM). The clustering method we propose goes through two major steps. In the first step, CSA explores the entire search space of the specified data to find the optimal clustering centers; these centers are then evaluated using a new objective function. The result of the first step is used to initialize the FCM algorithm in the second step. The efficiency of the suggested method is measured on several images selected from the BSD300 database, and we compare it with other algorithms such as FCM optimized by genetic algorithms (FCM-GA) and FCM optimized by particle swarm optimization (FCM-PSO). The experimental results for the different algorithms used in this paper show that the proposed method improves the segmentation results, based on the analysis of the best values of fitness, MSE, PSNR, CC, RI, GCE, BDE and VOI.
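A minimal sketch of the second stage only: standard Fuzzy C-means refinement (fuzzifier m = 2), seeded with centers that are assumed to come from a Cuckoo Search run. The 1-D grey-level data and seed values are illustrative:

```python
def fcm(data, centers, m=2.0, iters=20):
    """Standard FCM alternating updates; returns refined centers and memberships."""
    u = []
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - c) or 1e-9 for c in centers]   # avoid divide-by-zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # Center update: mean weighted by u^m
        centers = [sum(u[k][i] ** m * x for k, x in enumerate(data)) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(len(centers))]
    return centers, u

data = [10, 12, 11, 200, 205, 198]      # two obvious intensity groups
seed = [50.0, 150.0]                    # pretend these came from Cuckoo Search
centers, memberships = fcm(data, seed)
```

The point of the hybrid is that a good CSA seed spares FCM from its known sensitivity to random initialization.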

Author 1: Lahbib KHRISSI
Author 2: Nabil EL AKKAD
Author 3: Hassan SATORI
Author 4: Khalid SATORI

Keywords: Clustering; classification; image segmentation; fuzzy c-means; cuckoo search algorithm

PDF

Paper 48: Prediction of Cantilever Retaining Wall Stability using Optimal Kernel Function of Support Vector Machine

Abstract: The Support Vector Machine is one of the artificial intelligence techniques that can be applied to forecast the stability of cantilever retaining walls. The selection of the right Kernel function is very important so that the Support Vector Machine model can make good predictions. However, there are no general guidelines for selecting a Kernel function. Therefore, the Kernel functions consisting of Linear, Polynomial, Radial Basis Function and Sigmoid were evaluated to determine the optimal Kernel function using 10-fold cross-validation (V-fold cross-validation). The achievement of each function is evaluated based on the mean square error and the squared correlation coefficient: a mean square error closer to zero and a squared correlation coefficient closer to one indicate a more accurate Kernel function. Results show that the Support Vector Machine model with the Radial Basis Function Kernel can successfully predict the stability of cantilever retaining walls with good accuracy and reliability in comparison to the various other Kernel functions.
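The two selection criteria named in the abstract are standard and can be sketched directly (illustrative code, not the authors'):

```python
def mse(y_true, y_pred):
    """Mean square error: lower (closer to zero) is better."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def squared_correlation(y_true, y_pred):
    """Squared Pearson correlation coefficient: closer to one is better."""
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((a - mt) * (b - mp) for a, b in zip(y_true, y_pred))
    vt = sum((a - mt) ** 2 for a in y_true)
    vp = sum((b - mp) ** 2 for b in y_pred)
    return cov * cov / (vt * vp)

# A kernel would be scored by averaging both metrics over the 10 held-out folds.
```

Note the two metrics can disagree: a prediction that is perfectly correlated with the target but biased scores 1 on squared correlation while still having a large MSE, which is why both are reported.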

Author 1: Rohaya Alias
Author 2: Siti Jahara Matlan
Author 3: Aniza Ibrahim

Keywords: Cantilever retaining wall; kernel function; prediction; stability; support vector machine

PDF

Paper 49: Analyzing the Performance of Anomaly Detection Algorithms

Abstract: An outlier is a data observation that is considerably irregular compared with the rest of the dataset. An outlier present in the dataset may compromise the integrity of the dataset. Implementing machine learning techniques in various real-world applications and applying those techniques to healthcare-related datasets will completely change the present scenario of the field. These applications can highlight physiological data having anomalous behavior, which can ultimately lead to a fast and necessary response and help to gather more critical knowledge about the particular area. A broad amount of study is available about the performance of anomaly detection techniques applied to popular public datasets, but there is only a minimal amount of analytical work on various supervised and unsupervised methods applied to physiological datasets. The breast cancer dataset is both a universal and a numeric dataset. This paper utilized and analyzed four machine learning techniques and their capacity to distinguish anomalies in the breast cancer dataset.

Author 1: Chiranjit Das
Author 2: Akhtar Rasool
Author 3: Aditya Dubey
Author 4: Nilay Khare

Keywords: Anomaly; machine learning; outlier detection; minimum covariance determinant

PDF

Paper 50: Current Perspective of Symbiotic Organisms Search Technique in Cloud Computing Environment: A Review

Abstract: Nature-inspired algorithms in computer science and engineering take their inspiration from living things and imitate their actions in order to construct functional models. The Symbiotic Organisms Search (SOS) algorithm is a new and promising metaheuristic algorithm. It is based on the symbiotic relationships that exist between different species in an ecosystem: organisms develop symbiotic bonds such as mutualism, commensalism, and parasitism to survive in their environment. Standard SOS has since been modified several times, either by hybridization or as improved versions of the original algorithm. Most of these modifications came from engineering construction work and other disciplines such as medicine and finance. However, little improvement on the standard SOS has been seen in its application to the cloud computing environment, especially cloud task scheduling. As a result, this paper provides an overview of SOS applications to the task scheduling problem and suggests a new enhanced method for better performance of the technique in terms of convergence speed.
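For reference, the first of the three phases named above, mutualism, can be sketched as in standard SOS: two organisms move toward the current best solution using a shared "mutual vector" and random benefit factors (the full algorithm only keeps the new positions if they are fitter, which is omitted here):

```python
import random

def mutualism_phase(xi, xj, xbest, rng):
    """Standard SOS mutualism update for organisms xi and xj (lists of floats)."""
    mutual = [(a + b) / 2 for a, b in zip(xi, xj)]       # mutual vector
    bf1, bf2 = rng.choice((1, 2)), rng.choice((1, 2))    # benefit factors
    xi_new = [a + rng.random() * (g - m * bf1)
              for a, g, m in zip(xi, xbest, mutual)]
    xj_new = [b + rng.random() * (g - m * bf2)
              for b, g, m in zip(xj, xbest, mutual)]
    return xi_new, xj_new    # accepted only if fitter, in the full algorithm

rng = random.Random(7)
xi_new, xj_new = mutualism_phase([0.2, 0.4], [0.6, 0.8], [0.1, 0.1], rng)
```

In cloud task scheduling, each organism would encode a task-to-VM assignment, and it is this update's pull toward `xbest` that the convergence-speed enhancements target.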

Author 1: Ajoze Abdulraheem Zubair
Author 2: Shukor Bin Abd Razak
Author 3: Md. Asri Bin Ngadi
Author 4: Aliyu Ahmed

Keywords: Cloud computing; cloud resource management; cloud task scheduling; symbiotic organisms search; entrapment; convergence speed

PDF

Paper 51: Sarcasm Detection in Tweets: A Feature-based Approach using Supervised Machine Learning Models

Abstract: Sarcasm (i.e., the use of irony to mock or convey contempt) detection in tweets and other social media platforms is one of the problems facing the regulation and moderation of social media content. Sarcasm is difficult to detect, even for humans, due to the deliberate ambiguity in the use of words. Existing approaches to automatic sarcasm detection primarily rely on lexical and linguistic cues, but these approaches have produced little or no significant improvement in sentiment accuracy. We propose implementing a robust and efficient system to detect sarcasm to improve the accuracy of sentiment analysis. In this study, four feature sets covering the various types of sarcasm commonly used in social media are used to classify tweets as sarcastic or non-sarcastic. This study identifies a sarcasm feature set and effective supervised machine learning models that lead to better accuracy. Results show that Decision Tree (91.84%) and Random Forest (91.90%) outperform the other supervised machine learning algorithms in terms of accuracy given the right feature selection. The paper highlights suitable supervised machine learning models along with their appropriate feature sets for detecting sarcasm in tweets.

Author 1: Arifur Rahaman
Author 2: Ratnadip Kuri
Author 3: Syful Islam
Author 4: Md. Javed Hossain
Author 5: Mohammed Humayun Kabir

Keywords: Machine learning; detection; sarcasm; sentiment; tweets

PDF

Paper 52: Evaluation of Agent-Network Environment Mapping on Open-AI Gym for Q-Routing Algorithm

Abstract: Changes in network dynamics demand a routing algorithm that adapts intelligently to changing requirements and parameters. In this regard, an efficient routing mechanism plays an essential role in supporting the requirements of dynamic, QoS-aware network services. This paper introduces a self-learning, intelligent approach to route selection in the network. A Q-Routing approach is designed based on a reinforcement learning algorithm to provide reliable and stable packet transmission for different network services with minimal delay and low routing overhead. The novelty of the proposed work is that a new customized environment for the network, namely Net-AI-Gym, has been integrated into Open-AI Gym. Besides, the proposed Q-routing with Net-AI-Gym offers optimization in exploring paths to support multi-QoS-aware services in different networking applications. The performance assessment of Net-AI-Gym is carried out with small, medium, and large numbers of nodes, and the results of the proposed system are compared with an existing rule-based method. The study outcome shows Net-AI-Gym's potential to effectively support varied scales of nodes in the network. Apart from this, the proposed Q-routing approach outperforms the rule-based routing technique in terms of episodes versus rewards and path length.
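For context, the classic Q-routing update (Boyan and Littman's rule), which reinforcement-learning route selection of this kind builds on, nudges node x's estimated delivery time to destination d via neighbour y toward y's own best estimate plus the observed queue and transmission delays. The network values below are illustrative, not the paper's experiment:

```python
def q_routing_update(Q, x, d, y, queue_delay, trans_delay, alpha=0.5):
    """Q[x][d][y]: node x's estimated total delivery time to d when forwarding via y."""
    best_from_y = min(Q[y][d].values())              # y's best estimate to d
    target = queue_delay + trans_delay + best_from_y
    Q[x][d][y] += alpha * (target - Q[x][d][y])      # standard TD-style nudge
    return Q[x][d][y]

Q = {
    "A": {"C": {"B": 10.0}},     # A thinks: reaching C via B costs 10
    "B": {"C": {"C": 2.0}},      # B thinks: reaching C directly costs 2
}
new_estimate = q_routing_update(Q, "A", "C", "B", queue_delay=1.0, trans_delay=1.0)
# A's estimate moves from 10 toward 1 + 1 + 2 = 4, landing at 7.0 with alpha = 0.5
```

An OpenAI-Gym-style environment then supplies exactly these per-hop delay observations as rewards to the learning agent.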

Author 1: Varshini Vidyadhar
Author 2: R. Nagaraja

Keywords: Reinforcement learning; environment; agent; network; Net-AI-Gym; Q-routing; rule-based routing

PDF

Paper 53: Internet of Things (IoT) Based ECG System for Rural Health Care

Abstract: Nearly 30% of the people in the rural areas of Bangladesh are below the poverty level. Moreover, due to the unavailability of modern healthcare-related technology, nursing and diagnosis facilities are limited for rural people, who are therefore deprived of proper healthcare. In this perspective, modern technology can be used to mitigate their health problems. ECG sensing tools are interfaced with the human chest, and the requisite cardiovascular data is collected through an IoT device. These data are stored in the cloud, which incorporates MQTT and HTTP servers. An innovative IoT-based method for an ECG monitoring system for cardiovascular patients has been suggested in this study. The ECG signal parameters P, Q, R, S, T are collected, pre-processed, and predicted to monitor cardiovascular conditions for further health management. A machine learning algorithm is used to determine the significance of the ECG signal parameters and the error rate. The logistic regression model showed better agreement between the training and test data. The prediction has been performed to determine the variation of PQRST quality and its suitability for the ECG monitoring system. Considering the values of the quality parameters, satisfactory results are obtained. The proposed IoT-based ECG system reduces the health care cost and complexity of cardiovascular disease care in the future.

Author 1: Md. Obaidur Rahman
Author 2: Mohammod Abul Kashem
Author 3: Al-Akhir Nayan
Author 4: Most. Fahmida Akter
Author 5: Fazly Rabbi
Author 6: Marzia Ahmed
Author 7: Mohammad Asaduzzaman

Keywords: Internet of things (IoT); electrocardiogram (ECG) monitoring system; ECG signal parameters; cardiovascular disease; logistic regression model

PDF

Paper 54: Design of a Plastic Shredding Machine to Obtain Small Plastic Waste

Abstract: One of the biggest environmental problems in the world is the excess of plastic waste. Around the world there are companies responsible for recycling plastic waste, since 42% of the plastics generated worldwide have a single use. In Peru, the culture of recycling is not promoted: each person uses approximately 30 kilos of plastic per year, and Metropolitan Lima and Callao alone generate 46% of the plastic waste produced nationwide. In view of this problem, this article presents the design of a plastic shredding machine to obtain small plastic waste, to help people dedicated to the recycling industry work in an automated way. It would also generate jobs, since it requires staff in charge of the machine, and it will be extremely useful in reducing plastic pollution in the environment, which is increasing due to COVID-19. With the designed machine, the plastic will first be selected by color and type of plastic composition, whether Polyethylene Terephthalate (PET), High Density Polyethylene (HDPE), Low Density Polyethylene (LDPE), Polyvinyl chloride (PVC) or Others (Plastic Mix); it will then go through the shredding process to become small plastic waste, which could be turned into filament for 3D printers.

Author 1: Witman Alvarado-Diaz
Author 2: Jason Chicoma-Moreno
Author 3: Brian Meneses-Claudio
Author 4: Luis Nuñez-Tapia

Keywords: Automation; pollution; filaments; recycling; plastic waste

PDF

Paper 55: An HC-CSO Algorithm for Workflow Scheduling in Heterogeneous Cloud Computing System

Abstract: Many scientists use meta-heuristic techniques for dynamic workflow task scheduling in cloud computing systems to obtain optimum solutions. Many swarm intelligence algorithms have been designed so far, but they have several limitations: some get trapped in local optima, a few have low convergence speed, and some have poor global search capabilities. There is still a need to design new algorithms or modify existing ones to overcome these limitations. A Hybrid Cat Swarm Optimization algorithm named H-CSO was previously designed, inspired by the HEFT algorithm, which overcame the initialization problem of Cat Swarm Optimization. Still, that algorithm has the limitation of getting stuck in local minima. To overcome this, a part of the Crow Search Algorithm has been integrated into H-CSO, as described in this paper. After simulation, it was found that the new hybrid algorithm, named HC-CSO, outperforms CSO and H-CSO.

Author 1: Jai Bhagwan
Author 2: Sanjeev Kumar

Keywords: Cloud computing; Crow Search Algorithm (CSA); Cat Swarm Optimization (CSO); H-CSO; HC-CSO; HEFT; Self-Motivated Inertia Weight (SMIW); Virtual Machines (VMs)

PDF

Paper 56: Improving Imbalanced Data Classification in Auto Insurance by the Data Level Approaches

Abstract: Predicting the frequency of insurance claims has become a significant challenge due to imbalanced datasets, since the number of occurring claims is usually significantly lower than the number of non-occurring claims. As a result, classification models tend to have a limited ability to predict the occurrence of claims. In this paper, we use various data-level approaches to address the imbalanced data problem in the insurance industry. We developed 32 machine learning models for predicting insurance claim occurrence {(under-sampling, over-sampling, the combination of over- and under-sampling (hybrid), and SMOTE) × (three decision tree models, three boosting models, and two bagging models) = 32}, and we compared the models' accuracies, sensitivities, and specificities to assess the prediction performance of the built models. The dataset contains 81628 car insurance claims, of which 5714 occurred and 75914 did not occur. According to the findings, the AdaBoost classifier with over-sampling and with the hybrid method gave the most accurate predictions, with a sensitivity of 92.94%, a specificity of 99.82%, and an accuracy of 99.4%, and with a sensitivity of 92.48%, a specificity of 99.63%, and an accuracy of 99.1%, respectively. This paper confirmed that, when analyzing imbalanced data, the AdaBoost classifier, whether using over-sampling or the hybrid process, can generate more accurate models than the other boosting, decision tree, and bagging models.
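A minimal sketch of the simplest of the data-level approaches named above, random over-sampling (SMOTE differs in that it synthesizes interpolated minority points rather than duplicating existing ones); the toy feature rows and seed are illustrative:

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows until every class matches the majority count."""
    rng = random.Random(seed)
    counts = {label: y.count(label) for label in set(y)}
    majority = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        indices = [i for i, l in enumerate(y) if l == label]
        for _ in range(majority - n):
            i = rng.choice(indices)       # resample with replacement
            X_out.append(X[i])
            y_out.append(label)
    return X_out, y_out

# 4 non-claims vs 1 claim -> balanced to 4 vs 4
X_bal, y_bal = random_oversample([[0], [1], [2], [3], [9]], [0, 0, 0, 0, 1])
```

Crucially, resampling is applied only to the training split; evaluating on resampled data would inflate the reported sensitivity.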

Author 1: Mohamed Hanafy
Author 2: Ruixing Ming

Keywords: Machine learning; classification; insurance; imbalanced data problem; resampling methods

PDF

Paper 57: A New Feature Filtering Approach by Integrating IG and T-Test Evaluation Metrics for Text Classification

Abstract: High dimensionality is one of the main issues associated with text classification, as selecting the most discriminative feature subset for a classifier's effective utilization is a difficult task. This significant preprocessing stage of selecting the relevant features is often called feature selection or feature filtering. Eliminating non-relevant and noisy features from the original feature set drastically reduces the size of the feature set and the time complexity of the classification models, while improving or maintaining their performance. Most existing filtering methods either produce a subset with a relatively high number of features without much significant impact on running time, or produce a subset with fewer features but at the cost of performance degradation. In this paper, we propose a new bi-strategy filtering approach that integrates Information Gain with the t-test and selects a subset of informative features by considering both the scores and the rankings of the respective features. Our approach considers the disparity between the results produced by the benchmark metrics in order to maximize their advantages and lessen their disadvantages. The approach sets a new threshold parameter by computing the V-score of the features with minimum scores present in both subsets and further refines the selected features, thereby reducing the size of the feature subset without losing many informative features. Experimental results on three different text datasets show that the proposed method is able to select features that are highly discriminative while achieving a significant improvement in classification accuracy and F-score at the cost of minimal running time.
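A hypothetical sketch of a bi-strategy selection in this spirit; the feature names, the top-k cut-off, and this stand-in for the V-score threshold are illustrative assumptions, not the paper's exact definitions:

```python
def bi_strategy_filter(ig_scores, t_scores, k):
    """Keep features ranked highly by IG, by |t|, or by both, then refine with a
    threshold derived from the weakest scores of features common to both subsets."""
    top_ig = set(sorted(ig_scores, key=ig_scores.get, reverse=True)[:k])
    top_t = set(sorted(t_scores, key=t_scores.get, reverse=True)[:k])
    common = top_ig & top_t
    # Stand-in for the paper's V-score threshold: the minimum score among
    # features ranked highly by BOTH metrics (0 if the subsets are disjoint).
    v = 0.0 if not common else min(min(ig_scores[f] for f in common),
                                   min(t_scores[f] for f in common))
    return {f for f in top_ig | top_t
            if max(ig_scores[f], t_scores[f]) >= v}

ig = {"w1": 0.9, "w2": 0.5, "w3": 0.1, "w4": 0.4}
t = {"w1": 0.8, "w2": 0.2, "w3": 0.6, "w4": 0.5}
selected = bi_strategy_filter(ig, t, k=2)
```

The intent mirrors the abstract: the union keeps each metric's strengths, while the threshold prunes features that only one metric weakly favoured.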

Author 1: Abubakar Ado
Author 2: Mustafa Mat Deris
Author 3: Noor Azah Samsudin
Author 4: Aliyu Ahmed

Keywords: Dimensional reduction; feature filtering; feature selection; t-test; information gain; V-score

PDF

Paper 58: An Ontological Framework for Healthcare Web Applications Security

Abstract: The current era of digitization and transformation brings both advantages and issues to the healthcare sector. The advantages are exceptionally good, but the issues that feed the business of attackers are serious and require effective prevention. Current statistics and attack-vector analysis show that technical breaches are the most common and highest in number. This information highlights the need to develop an effective model that helps experts and healthcare practitioners in security management. To achieve this goal, we adopt an ontology-based security and development methodology and provide a model that produces systematic, secure pathways for designing healthcare web applications. The conceptual framework discussed in the study has several benefits: it gives a unified pathway to future developers, focuses on requirement identification during development, and portrays its significance.

Author 1: Mamdouh Alenezi

Keywords: Security; web application; healthcare; digitization; requirement

PDF

Paper 59: A Proposal to Improve the Bit Plane Steganography based on the Complexity Calculation Technique

Abstract: The video steganography technique is widely studied and applied today because of its benefits. In particular, video steganography using Bit-Plane Complexity Segmentation (BPCS) has increasingly proven its effectiveness compared to other methods. In this paper, based on the theoretical basis of the BPCS method, we propose a new method to improve the efficiency of the steganography process by improving the complexity formula of the bit planes. Our new formula not only improves the steganographic thresholds in the bit planes, finding more planes for hiding secret information, but also preserves the amount of information hidden in the video and its safety. The experimental results not only demonstrate the effectiveness of our proposed method but also provide a new mechanism for digital image analysis in general and video steganography techniques in particular.

Author 1: Cho Do Xuan

Keywords: Steganography; video steganography technique; bit-plane complexity segmentation (BPCS); complexity formula

PDF
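For context, the quantity BPCS builds on can be sketched as follows: the complexity of a bit-plane block is the number of adjacent 0/1 transitions divided by the maximum possible number of transitions, and blocks above a threshold (commonly 0.3) are considered noisy enough to carry hidden data. This is the classical BPCS complexity measure, not the paper's improved formula.

```python
def bitplane_complexity(block):
    """Classical BPCS complexity: bit transitions / maximum possible transitions."""
    m = len(block)
    changes = 0
    for i in range(m):
        for j in range(m - 1):
            changes += block[i][j] ^ block[i][j + 1]   # horizontal transitions
            changes += block[j][i] ^ block[j + 1][i]   # vertical transitions
    return changes / (2 * m * (m - 1))

def is_embeddable(block, threshold=0.3):
    # Noisy-looking (high-complexity) planes can carry hidden bits
    return bitplane_complexity(block) >= threshold
```

A checkerboard block has complexity 1.0 (maximally noisy); a uniform block has complexity 0.0.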

Paper 60: Multiplicative Iterative Nonlinear Constrained Coupled Non-negative Matrix Factorization (MINC-CNMF) for Hyperspectral and Multispectral Image Fusion

Abstract: Hyperspectral and Multispectral (HS-MS) image fusion is a trending technology that enhances the quality of hyperspectral images: by combining the precise information retrieved from both HS and MS images, it increases the spatial and spectral quality of the result. Over the past decades, many image fusion techniques have been introduced in the literature. Most of them use the Coupled Non-negative Matrix Factorization (CNMF) technique, which is based on the Linear Mixing Model (LMM) and therefore neglects nonlinearity in the unmixing and fusion of hyperspectral images. To overcome this limitation, we propose an unmixing-based fusion algorithm, Multiplicative Iterative Nonlinear Constrained Coupled Non-negative Matrix Factorization (MINC-CNMF), that enhances the spatial quality of the image by considering the nonlinearity associated with the unmixing process. The method not only considers spatial quality but also enhances the spectral data by imposing a minimum-volume (MV) constraint, which helps estimate accurate endmembers. We measure the strength of our method against baseline methods on four public datasets and find that it outperforms all the baselines.

Author 1: Priya K
Author 2: Rajkumar K K

Keywords: Hyperspectral data; multispectral data; minimum volume; nonlinear mixing model; spectral variability; spectral image fusion

PDF

Paper 61: Optimal Operation of Smart Distribution Networks using Gravitational Search Algorithm

Abstract: This paper proposes a methodology for the optimal operation of a smart distribution network considering network reconfiguration, distributed generation (DG) unit allocation, and optimal placement of shunt capacitors for reactive power compensation. In this work, the objective is to minimize total power losses; optimizing this objective also reduces voltage deviation. The proposed problem is solved using the evolutionary gravitational search algorithm (GSA). Simulation studies are performed on the 33-bus radial distribution system (RDS). Simulation results reveal a drastic reduction in power losses when network reconfiguration, DG allocation, and reactive power compensation are utilized.

Author 1: Surender Reddy Salkuti

Keywords: Distributed generation; renewable energy; meta-heuristic algorithms; network reconfiguration; smart grid; reactive power compensation

PDF
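To make the optimizer concrete, below is a minimal sketch of the gravitational search algorithm for a generic box-constrained minimization: agents are given masses proportional to their fitness, attract each other under a decaying gravitational constant, and drift toward good regions. This is a textbook-style GSA on a benchmark function, not the paper's distribution-network formulation; all parameter values are illustrative defaults.

```python
import math
import random

def gsa_minimize(f, dim, bounds, n_agents=20, iters=100, g0=100.0, alpha=20.0, seed=1):
    """Minimal Gravitational Search Algorithm for box-constrained minimisation."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    V = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(x) for x in X]
        fb, fw = min(fit), max(fit)
        if fb < best_f:
            best_f, best_x = fb, X[fit.index(fb)][:]
        # Mass: better (lower-fitness) agents are heavier and attract the rest
        raw = [(fw - fi) / (fw - fb + 1e-12) for fi in fit]
        total = sum(raw) + 1e-12
        M = [r / total for r in raw]
        G = g0 * math.exp(-alpha * t / iters)  # gravitational "constant" decays over time
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = math.sqrt(sum((X[j][d] - X[i][d]) ** 2 for d in range(dim))) + 1e-12
                for d in range(dim):
                    acc[d] += rng.random() * G * M[j] * (X[j][d] - X[i][d]) / dist
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + acc[d]
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
    return best_x, best_f
```

Applying a real power-loss objective in place of the benchmark function would follow the same pattern.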

Paper 62: Analyzing the Performance of Stroke Prediction using ML Classification Algorithms

Abstract: A stroke is a health condition that causes damage by tearing blood vessels in the brain. It can also occur when blood flow and other nutrients to the brain are interrupted. According to the World Health Organization (WHO), stroke is a leading cause of death and disability globally. Most prior work has focused on the prediction of heart stroke, while very few works address the risk of a brain stroke. With this in mind, various machine learning models are built to predict the possibility of stroke in the brain. This paper uses various physiological factors and machine learning algorithms, namely Logistic Regression, Decision Tree Classification, Random Forest Classification, K-Nearest Neighbors, Support Vector Machine, and Naïve Bayes Classification, to train several models for accurate prediction. The best-performing algorithm was Naïve Bayes, with an accuracy of approximately 82%.

Author 1: Gangavarapu Sailasya
Author 2: Gorli L Aruna Kumari

Keywords: Stroke; machine learning; logistic regression; decision tree classification; random forest classification; k-nearest neighbors; support vector machine; naïve bayes classification

PDF
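The winning classifier here, Gaussian Naïve Bayes, fits a per-class mean and variance for each feature and picks the class with the highest posterior. The minimal from-scratch sketch below illustrates that mechanic; the toy "physiological" numbers are hypothetical and are not the paper's dataset.

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means/variances + priors."""
    def fit(self, X, y):
        groups = defaultdict(list)
        for row, label in zip(X, y):
            groups[label].append(row)
        self.priors, self.stats = {}, {}
        for label, rows in groups.items():
            self.priors[label] = len(rows) / len(X)
            cols = list(zip(*rows))
            self.stats[label] = []
            for c in cols:
                mu = sum(c) / len(c)
                var = sum((v - mu) ** 2 for v in c) / len(c) + 1e-9  # smoothed
                self.stats[label].append((mu, var))
        return self

    def predict_one(self, x):
        def log_post(label):
            lp = math.log(self.priors[label])
            for v, (mu, var) in zip(x, self.stats[label]):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
            return lp
        return max(self.priors, key=log_post)
```

Trained on two well-separated toy groups (e.g. age and glucose level), the model assigns new points to the nearer class profile.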

Paper 63: A Novel Threshold based Method for Vessel Intensity Detection and Extraction from Retinal Images

Abstract: Retinal vessel segmentation is an active research area in medical image processing. Several research outcomes on retinal vessel segmentation have emerged in recent years. Each method has its own pros and cons, either in the vessel detection stage or in its extraction. Based on a detailed empirical investigation, a novel retinal vessel extraction architecture is proposed, which makes use of a couple of existing algorithms. In the proposed algorithm, vessel detection is carried out using a cumulative distribution function-based thresholding scheme. The resultant vessel intensities are extracted based on the hysteresis thresholding scheme. Experiments are carried out with retinal images from DRIVE and STARE databases. The results in terms of Sensitivity, Specificity, and Accuracy are compared with five standard methods. The proposed method outperforms all methods in terms of Sensitivity and Accuracy for the DRIVE data set, whereas for STARE, the performance is comparable with the best method.

Author 1: Farha Fatina Wahid
Author 2: Sugandhi K
Author 3: Raju G
Author 4: Debabrata Swain
Author 5: Biswaranjan Acharya
Author 6: Manas Ranjan Pradhan

Keywords: Retinal images; blood vessel detection and segmentation; hysteresis thresholding; cumulative distribution function

PDF
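The extraction stage above relies on hysteresis thresholding: pixels above a high threshold seed the vessel map, and pixels above a low threshold are kept only if connected to a seed. A minimal 8-connected sketch on a plain 2D intensity grid (illustrative thresholds, not the paper's CDF-based ones):

```python
from collections import deque

def hysteresis_threshold(img, low, high):
    """Keep pixels >= high, plus pixels >= low that connect to them (8-neighbourhood)."""
    h, w = len(img), len(img[0])
    keep = [[False] * w for _ in range(h)]
    q = deque((i, j) for i in range(h) for j in range(w) if img[i][j] >= high)
    for i, j in q:                      # strong pixels seed the result
        keep[i][j] = True
    while q:                            # grow through weak-but-connected pixels
        i, j = q.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and not keep[ni][nj] and img[ni][nj] >= low:
                    keep[ni][nj] = True
                    q.append((ni, nj))
    return keep
```

Weak responses that touch no strong seed (typical noise) are discarded, while faint vessel segments attached to strong ones survive.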

Paper 64: Integration of Identity Governance and Management Framework within Universities for Privileged Users

Abstract: High-tech progress around the world has drawn the attention of governing bodies and private companies towards well-organized access control measures within organizations. Given the importance of this perception, this article proposes an integrated approach that assimilates IAM (identity and access management) as an authentication tool and PAM (privileged access management) as a restrictive access control measure within an active directory. The experimental setup was originally organized within Prince Sattam Bin Abdulaziz University and analyzed using real-time data available in the university database. We found that the proposed mechanism can be a vital method for protecting governance data or key business-oriented data from unauthorized or adversarial attacks. We also reviewed and compared other access control methods and found that the integrated method has a relative advantage in handling access tasks in any premier organization.

Author 1: Shadma Parveen
Author 2: Sultan Ahmad
Author 3: Mohammad Ahmar Khan

Keywords: Identity management; governance framework; privileged access management; security; privacy

PDF

Paper 65: Dorsal Hand Vein Extraction in Uncontrolled Environment

Abstract: Biometrics is an inseparable part of our day-to-day life, and major developments in this area have been observed in the past few decades. In recent years, dorsal hand veins have emerged as a promising biometric attribute due to their universality, stability, and anti-forgery characteristics. However, detecting veins of different thickness under different illumination is a challenging task, and traditional vein extraction approaches based on thresholding are not applicable in these situations. This paper presents a hybrid approach for vein segmentation from these hand images, combining two techniques: repeated line tracking and maximum curvature points. The technique has been tested on the Bosphorus hand vein dataset, which contains 1575 images of different age groups captured under different illumination conditions. The results show that the technique is suitable for extracting vein patterns from all types of images. Further, these images yielded an accuracy of more than 98% when subjected to feature extraction and classification.

Author 1: Nisha Charaya
Author 2: Anil Kumar
Author 3: Priti Singh

Keywords: Biometrics; security; forgery; hand vein; vein segmentation; vein extraction; thick veins; unclear veins

PDF

Paper 66: An Ensemble GRU Approach for Wind Speed Forecasting with Data Augmentation

Abstract: This paper proposes an ensemble model for wind speed forecasting using the recurrent neural network known as the Gated Recurrent Unit (GRU) and data augmentation. For the experimentation, a single wind speed time series is used, from which four augmented time series are generated; these train four GRU sub-models, whose results are averaged to produce the results of the proposed ensemble model (E-GRU). The results achieved by E-GRU are compared with those of each sub-model, showing that E-GRU outperforms them. Likewise, E-GRU is compared with benchmark models without data augmentation, such as Long Short-Term Memory (LSTM) and the plain Gated Recurrent Unit (GRU), showing that E-GRU is much more precise, with a difference of around 15% in the Relative Root Mean Square Error (RRMSE) and 11% in the Mean Absolute Percentage Error (MAPE).

Author 1: Anibal Flores
Author 2: Hugo Tito-Chura
Author 3: Victor Yana-Mamani

Keywords: Wind speed forecasting; recurrent neural networks; gated recurrent unit; ensemble GRU; data augmentation

PDF
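The ensemble step and the two error metrics quoted above are simple to state. The sketch below averages sub-model forecasts point by point and computes RRMSE and MAPE; the tiny forecast vectors are hypothetical placeholders, and the GRU sub-models themselves are outside the scope of this illustration.

```python
import math

def ensemble_forecast(sub_forecasts):
    """Average the forecasts of the sub-models point by point."""
    return [sum(vals) / len(vals) for vals in zip(*sub_forecasts)]

def rrmse(actual, pred):
    """Root mean square error relative to the mean of the actual series (in %)."""
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))
    return 100 * rmse / (sum(actual) / len(actual))

def mape(actual, pred):
    """Mean absolute percentage error (in %)."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)
```

In the paper's setting the four averaged series would come from the four GRU sub-models trained on the augmented time series.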

Paper 67: Relative Merits of Data Mining Algorithms of Chronic Kidney Diseases

Abstract: Early prediction of Chronic Kidney Disease in human subjects is considered a critical factor for diagnosis and treatment. The use of data mining algorithms to reveal the hidden information in clinical and laboratory samples helps physicians with early diagnosis, thus contributing to increased accuracy in the prediction and detection of Chronic Kidney Disease. In this work, experimental results are obtained by subjecting both clinical and laboratory samples to data mining algorithms, namely K-Nearest Neighbors, Support Vector Machine, Multilayer Perceptron, and Random Forest, to identify the optimal algorithm for classification and prediction. Our findings show that the K-Nearest Neighbors algorithm provides the best classification for clinical data, and Random Forest for laboratory samples, when compared on performance parameters such as precision, accuracy, recall, and F1 score.

Author 1: Harsha Herle
Author 2: Padmaja K V

Keywords: Ultrasound images; support vector machine (SVM); k-nearest neighbors (K-NN); multilayer perceptron (MLP); random forest (RF); clinical data

PDF
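Of the compared algorithms, k-nearest neighbors is the simplest to sketch: classify a sample by a majority vote among its k closest training samples. A minimal Euclidean-distance version (the toy clusters are hypothetical, not the paper's clinical data):

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbours majority vote with Euclidean distance."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    # Sort training samples by distance to x and vote among the k closest
    nearest = sorted(zip(train_X, train_y), key=lambda p: dist(p[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```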

Paper 68: Comparing the Balanced Accuracy of Deep Neural Network and Machine Learning for Predicting the Depressive Disorder of Multicultural Youth

Abstract: Multicultural youth are more likely to experience negative emotions (e.g., depressive symptoms) due to social prejudice and discrimination. Nevertheless, previous studies that analyzed the emotional aspects of multicultural youth mainly compared their characteristics with those of other youth or identified individual risk factors using a regression model. This study developed models to predict the depressive disorders of multicultural youth based on the Quick Unbiased Efficient Statistical Tree (QUEST), Classification And Regression Trees (CART), gradient boosting machine (G-B-M), random forest, and deep neural network (deep-NN) using epidemiological data representing multicultural youth, and compared the balanced accuracy of the developed models to evaluate their prediction performance (PRED PER). Our study analyzed 19,431 youths (9,835 males and 9,596 females) aged between 19 and 24 years. Among the 19,431 subjects, 42.9% (5,838 people) had experienced a depressive disorder in the past year. The results confirmed that deep-NN had the best PRED PER, with a specificity of 0.85, a sensitivity of 0.71, and a balanced accuracy of 0.78. It will be necessary to develop a model with optimal PRED PER by tuning the hyperparameters of deep-NN (e.g., number of hidden layers, number of hidden nodes, number of iterations, and activation function).

Author 1: Haewon Byeon

Keywords: Quick unbiased efficient statistical tree; gradient boosting machine; deep neural network; classification and regression trees; balanced accuracy

PDF
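The comparison metric used throughout this study is balanced accuracy, the mean of sensitivity and specificity, which is robust to class imbalance. A one-line sketch (with a variant computed directly from confusion counts) reproduces the abstract's reported figures:

```python
def balanced_accuracy(sensitivity, specificity):
    # Mean of recall on the positive class and recall on the negative class
    return (sensitivity + specificity) / 2

def balanced_accuracy_from_counts(tp, fn, tn, fp):
    sens = tp / (tp + fn)   # sensitivity = TP / (TP + FN)
    spec = tn / (tn + fp)   # specificity = TN / (TN + FP)
    return (sens + spec) / 2
```

With the deep-NN's reported sensitivity of 0.71 and specificity of 0.85, this yields the stated balanced accuracy of 0.78.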

Paper 69: Privacy Preserving Dynamic Provable Data Possession with Batch Update for Secure Cloud Storage

Abstract: The Cloud Server (CS) is an untrusted entity in the cloud paradigm that may hide accidental data loss to maintain its reputation. Provable Data Possession (PDP) is a model that allows a Third Party Auditor (TPA) to verify the integrity of outsourced data on behalf of the cloud user without downloading the data files. But this public auditing model faces many security and performance issues, such as unnecessary computational burden on the user as well as on the TPA, preserving the identities of users from the TPA during auditing, and support for dynamic updates. Many PDP schemes create a computational burden either on the TPA or on the cloud user. To balance this overhead between the TPA and the user, this paper proposes the Privacy-Preserving Dynamic Provable Data Possession (P2DPDP) scheme, which is based on the ODPDP scheme. In ODPDP, the user is relieved of the burden by signing a contract with the TPA regarding verification of the outsourced data, but the scheme generates computation overhead on the TPA. To reduce this overhead, our P2DPDP scheme uses Indistinguishability Obfuscation (IO) with a one-way function, such as a message authentication code, to make the auditing process lightweight. P2DPDP uses a Rank-based Merkle Tree (RBMT) to support dynamic updates in batch mode, which greatly reduces the computation overhead of the TPA. ODPDP lacks privacy, which P2DPDP provides using a ring signature technique. Our experimental results demonstrate reduced verification time and computation cost compared to existing schemes.

Author 1: Smita Chaudhari
Author 2: Gandharba Swain

Keywords: Public auditing; ring signature; Indistinguishability Obfuscation; Rank-based Merkle Tree (RBMT)

PDF
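The Merkle-tree machinery underpinning such auditing lets a verifier check one outsourced block against a stored root using only a logarithmic-size proof, instead of downloading the whole file. Below is a plain (not rank-based) Merkle tree sketch to illustrate that principle; the rank annotations and batch updates of the paper's RBMT are not reproduced.

```python
import hashlib

def _h(data):
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    level = [_h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last hash on odd levels
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(blocks, index):
    """Sibling hashes (and their side) needed to recompute the root for one block."""
    level = [_h(b) for b in blocks]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        path.append((level[sib], sib < index))   # True => sibling sits on the left
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_block(block, path, root):
    h = _h(block)
    for sib, sib_is_left in path:
        h = _h(sib + h) if sib_is_left else _h(h + sib)
    return h == root
```

An auditor holding only `root` can thus confirm the server still possesses an untampered block.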

Paper 70: Classification Model Evaluation Metrics

Abstract: The purpose of this paper was to examine the basic assumption that classification models are suitable for solving data set classification problems. We selected four representative models, BayesNet, NaiveBayes, MultilayerPerceptron, and J48, and applied them to a four-class classification of a specific hepatitis C virus data set for Egyptian patients, using the WEKA classification software developed at the University of Waikato, New Zealand. Poor results were obtained: none of the four envisaged classes was determined reliably. We describe all 16 metrics used to evaluate classification models, listing their characteristics, mutual differences, and the parameter each of them evaluates. We present comparative, tabular values of each metric for each classification model in a concise form, detailed class accuracy with a table of best and worst metric values, confusion matrices for all four classification models, and a table of type I and type II errors for all four models. In addition to the 16 metrics we describe, we list seven other metrics, which we did not use because we had no opportunity to show their application on the selected data set. The metrics rated the selected, standard, reliable classification models negatively. This leads to the conclusion that the data in the selected data set should be pre-processed before it can be reliably classified by a classification model.

Author 1: Željko Ð. Vujovic

Keywords: Classification models; evaluation of classification models; evaluation metrics; four-class classification; detailed class accuracy

PDF
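Most of the metrics discussed here derive from the confusion matrix. The sketch below computes one-vs-rest precision, recall, and F1 per class plus overall accuracy from a multi-class confusion matrix; the small example matrix is hypothetical, not the paper's hepatitis C results.

```python
def per_class_metrics(cm):
    """One-vs-rest precision/recall/F1 per class from a confusion matrix,
    where cm[i][j] = number of samples of true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    out = {}
    for k in range(n):
        tp = cm[k][k]
        fn = sum(cm[k]) - tp                       # true k, predicted elsewhere
        fp = sum(cm[i][k] for i in range(n)) - tp  # predicted k, true elsewhere
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out[k] = {"precision": prec, "recall": rec, "f1": f1}
    accuracy = sum(cm[k][k] for k in range(n)) / total
    return out, accuracy
```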

Paper 71: Cluster based Certificate Blocking Scheme using Improved False Accusation Algorithm

Abstract: The aggregation of mobile nodes without the use of a base station is known as a Mobile Ad Hoc Network (MANET). The nodes are mobile in nature, and because these networks are infrastructure-less, their mobility makes them subject to security attacks. Several mechanisms have been proposed to prevent mishaps while routing packets in such networks. Methods: The methodology outlined here for protecting Mobile Ad Hoc Networks against various types of assaults is based on a recent method known as the Cooperative Bait Detection Scheme. Its implementation scenario demonstrates that, in the event of Sybil assaults, the packet delivery ratio and performance of the network are low. Our goal is to propose a cluster-based methodology to improve delay, packet delivery ratio, and other performance assessment criteria. Improved Cooperative Bait Detection recommends a disjoint multipath technique to avoid attacks. To date, the dropped-packet delivery ratio has been key to preventing collaborative and Sybil assaults. In the Hybrid Cooperative Bait Detection Scheme, nodes are verified in two stages: first on the basis of packet delivery ratio, and then, in the second stage, the exact cause of performance decline is explored to check node behavior. To improve security, certification procedures must be applied to clustered networks. For malevolent entities, the false accusation algorithm provides certificate revocation and blocking approaches. An algorithm is proposed that remembers false accusations for a set period of time in order to increase the number of normal nodes in the network and hence improve system performance. Results: With the help of NS2 simulation, the clustering approach was evaluated by considering several Sybil-attack network scenarios.
When the proposed work is compared to approaches such as the Cooperative Bait Detection Scheme, Improved Cooperative Bait Detection Scheme, and Hybrid Cooperative Bait Detection Scheme, the results show that the packet delivery ratio and performance are improved under Sybil attacks. In conjunction with the certifying authority, the cluster head in the network identifies and prevents false complaints. Comparison over several performance parameters reveals that the new strategy outperforms the existing ones. As the number of normal nodes in the system grows, the system will be able to work at its best, preventing various types of attacks.

Author 1: Chetan S Arage
Author 2: K. V. V. Satyanarayana

Keywords: Mobile Ad Hoc Networks; cooperative bait detection scheme; cluster; cluster head; certifying authority; certificate revocation

PDF

Paper 72: Augmented Reality based Adaptive and Collaborative Learning Methods for Improved Primary Education Towards Fourth Industrial Revolution (IR 4.0)

Abstract: Existing primary education is not organized to fulfil the demands of the fourth industrial revolution (IR 4.0), which results in a lack of engagement with learning materials, a lack of spatial ability, and low motivation towards learning among young students. Students at the initial education level in particular tend to learn from seeing rather than from reading or memorizing. In this context, technologies like augmented reality and virtual reality offer such visual power, along with interactive and engaging characteristics, by bringing the virtual and real worlds together. In addition, flexible presentation mechanisms combined with tagging and tracking technology with various degrees of freedom provide an effective ground for integrating augmented reality and virtual reality with current primary education. This research presents a comprehensive and critical review of previous research in terms of several aspects, such as methods, research design, and experimental validation. Each aspect is discussed with its advantages and disadvantages for designing the integration of augmented reality and virtual reality into primary education methodology. In addition, this research provides an extensive critical explanation of the various challenges in integrating augmented reality and virtual reality with primary education to motivate students towards education, effective activity, and memorization with visualization. The overall investigation proposed in this research is expected to fulfil future demands of the Fourth Industrial Revolution (IR 4.0).

Author 1: A F M Saifuddin Saif
Author 2: Zainal Rasyid Mahayuddin
Author 3: Azrulhizam Shapi'i

Keywords: Augmented reality; virtual reality; adaptive and collaborative learning; fourth industrial revolution

PDF

Paper 73: Feasibility Study of a Small-Scale Grid-Connected PV Power Plants in Egypt; Case Study: New Valley Governorate

Abstract: Constructing photovoltaic power plants (PVPPs) in the right place is an important task when planning the development of the power system and choosing investors. In this paper, the technical, environmental, and economic feasibility of installing a 50 kW solar power plant in different places in the New Valley Governorate in Egypt is assessed using the RETScreen Expert software. The input data used in the study are obtained from NASA's Surface Meteorology and Solar Energy dataset. In total, five sites for the construction of 50 kW power stations were assessed, representing the five administrative regions of the New Valley governorate. The study is based on annual electricity production, greenhouse gas (GHG) emissions, and financial analysis. With the proposed PV power plant, up to 100 MWh of electricity can be produced and a minimum of 43.3 tons of GHG emissions can be kept out of the local atmosphere annually. The results obtained from the RETScreen program proved the viability of installing the proposed 50 kW photovoltaic power plant in any of the proposed locations. This study provides important information and feedback that can be used as a database for upcoming investments in photovoltaic generation projects in Egypt.

Author 1: Mahmoud Saad
Author 2: Hamdy M. Sultan
Author 3: Ahmed Abdeltwab
Author 4: Ahmed A. Zaki Diab

Keywords: RETScreen; new valley; solar energy; energy cost; feasibility analysis; greenhouse gases

PDF
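For orientation, a first-order estimate of annual PV yield is capacity times equivalent full-sun hours times a performance ratio, and avoided emissions follow from a grid emission factor. The sketch below is a back-of-the-envelope illustration with assumed inputs (6.5 peak sun hours, performance ratio 0.8, emission factor 0.5 tCO2/MWh); it is not the RETScreen model, though the order of magnitude is consistent with the abstract's ~100 MWh figure.

```python
def pv_annual_energy_mwh(capacity_kw, peak_sun_hours, performance_ratio=0.8):
    """First-order PV yield: capacity x daily full-sun hours x 365 x losses."""
    return capacity_kw * peak_sun_hours * 365 * performance_ratio / 1000

def ghg_avoided_tonnes(energy_mwh, grid_factor_t_per_mwh):
    """Avoided emissions = displaced grid energy x grid emission factor."""
    return energy_mwh * grid_factor_t_per_mwh
```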

Paper 74: Augmented Reality Adapted Book (AREmotion) Design as Emotional Expression Recognition Media for Children with Autistic Spectrum Disorders (ASD)

Abstract: One characteristic of Autism Spectrum Disorder (ASD) is difficulty understanding other people's emotions, including expressions and the appropriate emotional response for a given situation. This paper proposes an adapted book that helps therapists and parents guide children with ASD in learning facial emotional expressions. The adapted book, combined with video, animation, and Augmented Reality, improves the ability of children with ASD to recognize emotional expressions. This research uses a User-Centered Design (UCD) approach to design the AR adapted book application and the social story so that it is observable in school and family environments. Usability testing of the application shows that AREmotion has an average score of 82.73 percent, with the learnability aspect scoring highest at 86.7 percent. This preliminary usability testing indicates that the design of the AREmotion application is ready for real-world use with children with ASD.

Author 1: Tika Miningrum
Author 2: Herman Tolle
Author 3: Fitra A Bachtiar

Keywords: Autism Spectrum Disorder (ASD); adapted book; Augmented Reality (AR); User-Centered Design (UCD)

PDF

Paper 75: Towards Evaluating Adversarial Attacks Robustness in Wireless Communication

Abstract: The emerging new technologies, such as autonomous vehicles, augmented reality, IoT, and other aspects that are revolutionising our world today, have highlighted new requirements that wireless communications must fulfil. Wireless communications are expected to have a high optimisation capability, efficient detection ability, and prediction flexibility to meet today's cutting-edge telecommunications technologies' challenges and constraints. In this regard, the integration of deep learning models in wireless communications appears to be extremely promising. However, the study of deep learning models has exhibited inherent vulnerabilities that attackers could harness to compromise wireless communication systems. The examination of these vulnerabilities and the evaluation of the attacks leveraging them remains essential. Therefore, this paper's main objective is to address the alignment of security studies of deep learning models with wireless communications' specific requirements, thereby proposing a pattern for assessing adversarial attacks targeting deep learning models embedded in wireless communications.

Author 1: Asmaa FTAIMI
Author 2: Tomader MAZRI

Keywords: Adversarial attacks; deep learning; wireless communication; security; robustness; vulnerability; threat

PDF

Paper 76: An Approach for Optimal Feature Selection in Machine Learning using Global Sensitivity Analysis

Abstract: Feature selection is an important procedure in classification applications, where classification is mainly based on the features extracted from the object. The best features can be selected using three kinds of methods: wrapper, filter, and embedded procedures. All three practices have been implemented singly or by combining two approaches, yet important features may still be missed in the classification process. This problem is addressed by the proposed integrated global sensitivity analysis (GSA). In this integrated sensitivity and correlation approach, each feature is selected for classification based on its sensitivity and its correlation with the target vector. The GSA approach also uses a variety of filtering techniques for ranking attributes, with optimization by the particle swarm technique. The optimal attributes are then trained and tested using a Random Forest classifier with grid search in MATLAB. The performance of our integrated model is measured using sensitivity, specificity, and accuracy, in comparison to the existing wrapper-based selection. Experimentally, our proposed approach achieves sensitivities of 93.72% and 94.74% with accuracies of 89.921% and 90%, whereas the wrapper selection approach achieves a sensitivity of 89.83% and an accuracy of 93%.

Author 1: G. Saranya
Author 2: A. Pravin

Keywords: Feature selection; feature sensitivity; feature correlation; global sensitivity analysis; classification

PDF

Paper 77: Energy Material Network Data Hubs

Abstract: In early 2015, the United States Department of Energy conceived of a consortium of collaborative bodies based on shared expertise, data, and resources that could be targeted at the more difficult problems in energy materials research. The concept of virtual laboratories had been envisioned and discussed earlier in the decade in response to the advent of the Materials Genome Initiative and similar scientific thrusts. To be effective, any virtual laboratory needs a robust method for data management, communication, security, data sharing, dissemination, and demonstration so that groups of remote researchers can work efficiently and effectively. With the accessibility of new, easily deployed cloud technology and software frameworks, such individual elements could be integrated, and the required collaboration architecture is now possible. The developers have leveraged open-source software frameworks, customized them, and merged them into a platform to enable collaborative energy materials science, regardless of the geographic dispersal of people and resources. After five years in operation, the systems are demonstrably an effective platform for enabling research within the Energy Material Networks (EMN). This paper shows the design and development of a secured scientific data sharing platform, the ability to customize the system to support diverse workflows, and examples of the enabled research and results connected with some of the Energy Material Networks.

Author 1: Robert R. White
Author 2: Kristin Munch
Author 3: Nicholas Wunder
Author 4: Nalinrat Guba
Author 5: Chitra Sivaraman
Author 6: Kurt M. Van Allsburg
Author 7: Huyen Dinh
Author 8: Courtney Pailing

Keywords: Energy materials research; cloud computing; virtual laboratories; data management; consortium; network

PDF

Paper 78: A New Approach based on Fuzzy MADM for Enhancing Infrastructure Selection in the Case of VANET Network

Abstract: The emergence of the 5G mobile network has a huge impact on the evolution of services and functionalities offered to customers; this latest generation of mobile networks will allow the simultaneous connection of a significant number of people and IoT devices, in addition to improving several other features. 5G will serve a large part of smart cities and especially the field of intelligent transportation systems (ITS). The Vehicular Ad-Hoc Network (VANET) is one of the promising projects on which ITS relies. Its main purpose is to provide communication and information-sharing support for the vehicles in its network. VANET is based on a heterogeneous network architecture composed mainly of two infrastructures: the cellular infrastructure and the road infrastructure. This paper proposes a new approach based on fuzzy multiple attribute decision making (MADM) methods for selecting the most appropriate infrastructure in the VANET network, and consequently improving the vertical handovers executed to move from one infrastructure to another without loss of connection.

Author 1: Abdeslam Houari
Author 2: Tomader Mazri

Keywords: 5G; Vehicular adhoc network; internet of things; intelligent transportation system; cellular infrastructure; road infrastructure; fuzzy; vertical handover; MADM

PDF

Paper 79: A Modified Particle Swarm Optimization Approach for Latency of Wireless Sensor Networks

Abstract: Wireless sensor networks are employed in time-sensitive applications such as detecting environmental and individual nuclear radiation exposure. Such applications require timely detection of radiation levels so that appropriate emergency measures can be applied to protect people and the environment from radiation hazards. In these networks, collision and interference in communication between sensor nodes increase end-to-end delay and reduce the network's performance. A time-division multiple-access (TDMA) media access control protocol guarantees minimum latency and low power consumption, and also overcomes the problem of interference. The TDMA scheduling problem determines the minimum-length conflict-free assignment of slots in a TDMA frame in which each node or link is activated at least once. This paper proposes a meta-heuristic centralized contention-free approach based on TDMA, a modified particle swarm optimization, which realizes TDMA scheduling more efficiently than other existing algorithms. Extensive simulations were performed to evaluate the modified approach. The simulation results show that the proposed scheduling algorithm performs better in wireless sensor networks than the interference degree leaves order algorithm and the interference degree remaining leaves order algorithm. The results also demonstrate that integrating the proposed algorithm in TDMA protocols significantly reduces communication latency and increases network reliability.
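The paper's specific modifications to PSO are in the full text; as background, a minimal textbook particle swarm optimization loop looks like the sketch below. It minimizes a continuous test function rather than a TDMA schedule length, and all parameter values are illustrative:

```python
import random

def pso(f, dim=2, n_particles=20, iters=150, w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimal particle swarm optimization minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                 # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)           # toy objective
best, best_val = pso(sphere)
```

A scheduling variant would encode a slot assignment per particle and penalize conflicting slots in the objective.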

Author 1: Jannat H. Elrefaei
Author 2: Ahmed Yahya
Author 3: Mouhamed K. Shaat
Author 4: Ahmed H. Madian
Author 5: Refaat M. Fikry

Keywords: Wireless sensor networks; media access control protocol; scheduling algorithms; meta-heuristics; particle swarm optimization

PDF

Paper 80: Case Study of Self-Organization Processes in Information System Caching Components

Abstract: Most modern Information Systems (IS) are designed with Object-Relational Mapping components (ORM). Such components bring down the number of queries to the database server and therefore increase the system performance. Caching mechanisms in software components are complex dynamic systems of open type. A simulation model of cache-application interaction has been created to assess the critical modes of the system. The authors suggest accumulating the cache elements states during the application run at discrete moments of time and present them as multi-variable time series. This work also suggests a method for reconstruction of phase-plane portrait of the system with multidimensional dynamics of the cache elements states. The article shows self-organization processes in caching components of information systems and illustrates the variability of system states for various initial conditions followed by transition to steady-state conditions. In particular, the paper illustrates dissipative structures as well as deterministic chaos with the complete determinism of queries in simulated information systems.
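The phase-portrait reconstruction step can be illustrated with standard time-delay (Takens-style) embedding, which is a common way to rebuild a phase-space trajectory from a scalar observable. A minimal sketch, with an invented series standing in for a cache element's state over discrete moments of time:

```python
def delay_embed(series, dim=3, tau=2):
    """Reconstruct a phase-space trajectory from a scalar time series using
    time-delay embedding: x_t -> (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau})."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[t + k * tau] for k in range(dim)) for t in range(n)]

# invented cache-state series sampled at discrete moments of time
series = [0.1, 0.4, 0.5, 0.3, 0.6, 0.8, 0.7, 0.9, 0.85, 0.9]
points = delay_embed(series, dim=3, tau=2)   # points of the reconstructed portrait
```

The paper works with multi-variable series; the same idea extends by embedding each variable and concatenating the coordinates.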

Author 1: Pavel Kurnikov
Author 2: Nina Krapukhina

Keywords: Caching mechanisms; ORM; phase space reconstruction; self-organization; dissipative structures; deterministic chaos

PDF

Paper 81: Dynamic Management of Security Policies in PrivOrBAC

Abstract: This article is a continuation of our previous work on identifying and developing tools and concepts to provide automatic management and derivation of security and privacy policies. In this document we are interested in the extension of the PrivOrBAC model in order to ensure a dynamic management of privacy-aware security policies. Our approach, based on smart contracts (SC) and the WS-Agreement Specification, allows automatic agents representing data providers and access requesters to enter into an access agreement that takes into consideration not only service level clauses but also security rules to protect the privacy of individuals. Our solution can be deployed in such a way that no human intervention is required to reach this type of agreement. This work shows how to use the WS-Agreement Specification to set up a process for negotiating, creating and monitoring Service Level Agreements (SLAs) in accordance with a predefined access control policy. This article concludes with a case study accompanied by a representative implementation of our solution.

Author 1: Jihane EL MOKHTARI
Author 2: Anas ABOU EL KALAM
Author 3: Siham BENHADDOU
Author 4: Jean-Philippe LEROY

Keywords: Access control; privacy; PrivOrBAC; PrivUML; smart contract; WS-agreement

PDF

Paper 82: Hyperspectral Image Classification using Convolutional Neural Networks

Abstract: Hyperspectral imagery is well known for identifying objects on the earth's surface. Most classifiers use only spectral features, and do not consider spatial features, to perform classification and recognize the various objects in the image. In this paper, a hyperspectral image is classified based on spectral and spatial features using a convolutional neural network (CNN). The hyperspectral image is divided into small patches. The CNN constructs high-level spectral and spatial features of each patch, and a multi-layer perceptron classifies the image features into different classes. Simulation results show that the CNN achieves the highest classification accuracy on the hyperspectral image compared with other classifiers.
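As background on the feature-extraction step, the core CNN operation applied to each patch is a 2-D convolution (implemented, as in most deep learning libraries, as cross-correlation). A minimal, library-free sketch over an invented single-band patch:

```python
def conv2d(patch, kernel):
    """Valid 2-D convolution (no padding, cross-correlation convention) of a
    single-band image patch, the basic spatial feature extractor in a CNN."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(patch) - kh + 1
    out_w = len(patch[0]) - kw + 1
    return [[sum(patch[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# invented 4x4 spectral-band patch and a 2x2 diagonal-difference kernel
patch = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
feature_map = conv2d(patch, [[1, 0], [0, -1]])   # 3x3 feature map
```

A real hyperspectral CNN stacks many such kernels across all spectral bands and learns the kernel values during training.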

Author 1: Shambulinga M
Author 2: G. Sadashivappa

Keywords: Convolutional neural network; hyperspectral image; classification

PDF

Paper 83: Skeletonization of the Straight Leg Raise Movement using the Kinect SDK

Abstract: Biomechanics is widely used as a basis for research into disorders of the movement system in the human body. The development of biomechanics has influenced the field of physiotherapy. Researchers have developed various physiotherapy tests to identify and analyze the causes of movement disorders in the human body, including the Hawkins Test, Standing Hip Flexion, the Standing Trunk Sidebend Test, and the Straight Leg Raise Test. The Straight Leg Raise (SLR) test lifts one leg straight up from a lying-down position; in the medical field, the SLR movement is very specific for lower lumbar disc herniation. The movement can be captured with the Kinect, a combined color (RGB) and depth camera that can detect movement and track the human skeleton. The human body skeleton is obtained through the stages of RGB image acquisition, pose calibration, depth imaging, and skeletonization, followed by feature extraction of the two legs to form the range of motion (ROM). The results of this study show that the Kinect is able to track the user's leg frames in the SLR pose. The Kinect can be relied upon as a tool for detecting and tracking skeletons, and can then be used to measure SLR ROM in real time, analyze low back pain problems, and measure effectiveness in physiotherapy.
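Once the skeleton joints are tracked, the ROM measurement reduces to the angle between joint-direction vectors. A minimal sketch with hypothetical joint coordinates (not the paper's actual Kinect data):

```python
import math

def angle_deg(a, b):
    """Angle in degrees between two 3-D vectors (e.g. hip-to-ankle directions)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(dot / (na * nb)))

# hypothetical joint positions (metres, camera coordinates)
hip         = (0.0, 0.0, 2.0)
ankle_rest  = (0.0, -0.9, 2.0)   # leg flat on the bench
ankle_raise = (0.0, -0.6, 1.4)   # leg lifted towards the camera

rest_dir  = tuple(r - h for r, h in zip(ankle_rest, hip))
raise_dir = tuple(r - h for r, h in zip(ankle_raise, hip))
rom = angle_deg(rest_dir, raise_dir)   # SLR range-of-motion estimate, degrees
```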

Author 1: Hustinawaty
Author 2: T. Rumambi
Author 3: M. Hermita

Keywords: Straight leg raise; range of motion; skeleton; kinect

PDF

Paper 84: MAGITS: A Mobile-based Information Sharing Framework for Integrating Intelligent Transport System in Agro-Goods e-Commerce in Developing Countries

Abstract: Technological advancements in Intelligent Transport Systems (ITS) and mobile phones enable massive numbers of collaborating devices to collect, process, and share information to support the sales and transportation of agricultural goods (agro-goods) from farmer to market within the agriculture supply chain. Mobile devices, especially smartphones and intelligent Point of Sale (PoS) devices, provide features such as the Global Positioning System (GPS) and accelerometers to complement infrastructure requirements. Despite this opportunity, the development and deployment of innovative platforms integrating agro-goods transport services with e-commerce and e-payment systems is still challenging in developing countries. Noted challenges include the high cost of infrastructure, implementation complexities, and technology and policy issues. Therefore, this paper proposes a framework for integrating ITS services in agro-goods e-commerce, taking advantage of mobile device functionalities and their massive usage in developing countries. The framework components identified and discussed are Stakeholders and roles, User Services, Mobile Operations, Computing environment with Machine Learning support, Service goals and Information view, and Enabling Factors. A Design Science Research (DSR) method is applied to produce the framework as an artifact using a six-step model. A case study of potato sales and transportation from the Njombe region to Dar es Salaam city in Tanzania is also presented. The framework is able to improve the quality of information shared among stakeholders and provides a cost-effective and efficient approach for buying, selling, paying for, and transporting agricultural goods.

Author 1: Stivin Aloyce Nchimbi
Author 2: Mussa Ally Dida
Author 3: Gerrit K. Janssens
Author 4: Janeth Marwa
Author 5: Michael Kisangiri

Keywords: Intelligent transport system; stakeholders; mobile phone; agro-goods; information sharing; agriculture supply chain; machine learning

PDF

Paper 85: Software Project Estimation with Machine Learning

Abstract: This project involves research on software effort estimation using machine learning algorithms. Software cost and effort estimation are crucial parts of software project development: they determine the budget, time, and resources needed to develop a software project. One well-established software project estimation model is the Constructive Cost Model (COCOMO), developed in the 1980s. Even though the model is still in use, COCOMO has some weaknesses, and software developers still face a lack of accuracy in effort and cost estimation. Inaccuracy in the estimated effort affects the schedule and cost of the whole project as well. The objective of this research is to use several machine learning algorithms to estimate the effort of software project development. The best machine learning model is then chosen for comparison with COCOMO.
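For reference, the basic COCOMO effort equation the paper compares against is E = a · KLOC^b in person-months, with Boehm's standard mode-dependent coefficients:

```python
def cocomo_basic(kloc, mode="organic"):
    """Basic COCOMO effort estimate in person-months: E = a * KLOC^b,
    using Boehm's standard coefficients per development mode."""
    a, b = {"organic": (2.4, 1.05),
            "semi-detached": (3.0, 1.12),
            "embedded": (3.6, 1.20)}[mode]
    return a * kloc ** b

effort = cocomo_basic(10)   # roughly 26.9 person-months for 10 KLOC, organic mode
```

The machine-learning models in the paper replace this fixed power law with estimators fitted to historical project data.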

Author 1: Noor Azura Zakaria
Author 2: Amelia Ritahani Ismail
Author 3: Afrujaan Yakath Ali
Author 4: Nur Hidayah Mohd Khalid
Author 5: Nadzurah Zainal Abidin

Keywords: Software effort estimation; project estimation; constructive cost model; COCOMO; machine learning

PDF

Paper 86: Do Learning Styles Enhance the Academic Performance of University Students? A Case Study

Abstract: Higher education models appear not to be entirely designed to support students in facing severe challenges, such as failing exams and dropping out of school. To address these challenges, several models of learning styles have been proposed, under the premise that such studies contribute to improving the student's learning experience. This research aims at quantifying the impact of learning styles (learning preferences/dimensions) on students' academic performance at a higher education institution. Ninety-six undergraduate students were surveyed during the 2018-2019 school year and randomly divided into two groups: control (CG) and experimental (EG). The learning preferences of the students were identified using the Unified Learning Style Model (ULSM) instrument. Subsequently, the students' level of knowledge of the course was determined by means of a pre-test exam. Next, the students of the EG consulted learning objects designed to cover different learning styles, while the CG attended their lessons in a face-to-face environment; both groups then answered a post-test exam to assess their learning. The effect of learning styles on academic performance was quantified using an ANOVA analysis. The results differ from those postulated in previous research based on the ULSM, since there is no statistical evidence that learning styles influence students' academic performance. Therefore, it is necessary to explore other cognitive and affective factors that make the student's learning experience efficient and effective.

Author 1: Jorge Muñoz-Mederos
Author 2: Elizabeth Acosta-Gonzaga
Author 3: Elena Fabiola Ruiz-Ledesma
Author 4: Aldo Ramírez-Arellano

Keywords: Learning styles; learning objects; unified learning style model (ULSM); academic performance; higher education

PDF

Paper 87: Application of Expert System with Smartphone-based Certainty Factor Method for Hypertension Risk Detection

Abstract: Hypertension is a major health problem with a high prevalence in Indonesia: the national prevalence is 34.11%, of which only half of the cases are diagnosed and the remaining half are undiagnosed. A certainty factor expresses how certain a fact is in metric form and is commonly used in expert systems; the method is well suited to diagnosing something uncertain. This research aimed to build a smartphone-based hypertension risk detection system. The system was built from an MPX5500DP pressure sensor for blood pressure measurement, a Bluetooth HC-05 module for microcontroller data transmission, an Arduino Uno for sensor data processing, and a smartphone as the hypertension risk detection interface and database access point. The system was tested by measuring blood pressure on the user's right or left arm. The data was then sent to the smartphone to be classified and processed using the certainty factor method implemented on the Android smartphone to detect the risk of hypertension, followed by the selection of symptoms and risk factors of hypertension according to previous experience. The results showed that the success rate of the system in detecting the risk of hypertension was 100%; the mean error of the systolic value of the right arm was 2.092%, the mean error of the diastolic value of the right arm was 2.977%, the mean error of the systolic value of the left arm was 1.944%, and the mean error of the diastolic value of the left arm was 2.800%.
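For reference, the standard (MYCIN-style) rule for combining two certainty factors for the same hypothesis is shown below; the symptom CF values in the example are invented:

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors for the same hypothesis (MYCIN-style)."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# invented evidence: two hypertension risk indicators with CF 0.6 and 0.4
cf = combine_cf(0.6, 0.4)   # 0.76
```

Evidence from additional symptoms folds in by applying the same rule repeatedly.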

Author 1: Dodon Yendri
Author 2: Derisma
Author 3: Desta Yolanda
Author 4: Sabrun Jamil

Keywords: Hypertension; certainty factor; blood pressure; mpx5500dp

PDF

Paper 88: Using API with Logistic Regression Model to Predict Hotel Reservation Cancellation by Detecting the Cancellation Factors

Abstract: Hotels are established to provide a service to their customers with the aim of making a profit, and cancellations are a key aspect of hotel revenue management because of their effect on room reservation systems: cancelling a reservation eliminates the expected outcome. Many factors are expected to affect this problem, and by knowing these factors hotel management can devise a suitable cancellation policy. This project aims to create an API that provides a function to predict whether a reservation is likely to be cancelled. The API can be integrated with hotel management systems to evaluate each reservation with the same parameters. To do this, the study starts by identifying the factors using a Chi-squared test and correlation analysis to find the effective variables, together with the coefficients of the variables in logistic regression. The factors found are: is repeated guest, previous cancellations, previous bookings not cancelled, required car parking spaces, and deposit type. For the API function, the intercept and coefficients from the logistic regression model are used to create a scoring function: the score is the sum of the factors multiplied by their coefficients, plus the intercept, and it is then converted to a probability using the logistic function.
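The scoring function described in the abstract can be sketched directly; the coefficient and intercept values below are hypothetical placeholders, not the ones fitted in the paper:

```python
import math

def cancellation_probability(features, coefs, intercept):
    """Score = intercept + sum of factor values times their coefficients,
    mapped to a probability with the logistic (sigmoid) function."""
    score = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-score))

# hypothetical coefficients for the five factors found in the paper:
# [is_repeated_guest, previous_cancellations, previous_bookings_not_cancelled,
#  required_car_parking_spaces, deposit_type]
coefs, intercept = [-0.8, 1.2, -0.5, -1.1, 2.0], -1.0
p = cancellation_probability([0, 1, 0, 0, 1], coefs, intercept)
```

An integrating hotel system would call such a function per reservation and flag bookings whose probability exceeds a chosen threshold.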

Author 1: Sultan Almotiri
Author 2: Nouf Alosaimi
Author 3: Bayan Abdullah

Keywords: Prediction; API; factors; logistics regression

PDF

Paper 89: A Technique for Constrained Optimization of Cross-ply Laminates using a New Variant of Genetic Algorithm

Abstract: The main challenge presented by the design of laminated composite materials is the laminate layup, involving a set of fiber orientations, composite material systems, and stacking sequences. By nature, this is a combinatorial optimization problem with constraints that can be solved by a genetic algorithm. The traditional approach to solving a constrained problem is to reformulate the objective function. In the present study, a new variant of the genetic algorithm is proposed for the design of composite materials that uses a mix of selection strategies instead of modifying the objective function. To check the feasibility of a laminate subject to in-plane loading, the effect of the fiber orientation angles and material components on first-ply failure is studied. The algorithm has been validated by successfully optimizing the design of cross-ply laminates under different in-plane loading cases. The results obtained by this algorithm are better than those reported in the related literature.

Author 1: Huiyao Zhang
Author 2: Atsushi Yokoyama

Keywords: Laminated composite; classical lamination theory; genetic algorithm; optimal design

PDF

Paper 90: Kinematic Analysis for Trajectory Planning of Open-Source 4-DoF Robot Arm

Abstract: Many small and large industries use robot arms to perform a range of tasks, such as picking and placing, and painting, in today’s world. However, to complete these tasks, one of the most critical problems is obtaining the desired position of the robot arm’s end-effector. There are two methods for analyzing the robot arm: forward kinematic analysis and inverse kinematic analysis. This study aims to model the forward and inverse kinematics of an open-source 4 degrees of freedom (DoF) articulated robotic arm. A kinematic model is designed, and all the joint parameters are evaluated to calculate the end-effector’s desired position. Forward kinematics is straightforward to derive, but inverse kinematics requires a closed-form solution. The developed kinematic model’s performance is assessed on a simulated robot arm, and the results were analyzed to verify that the errors fell within the accepted range. At the end of this study, forward and inverse kinematic solutions of a 4-DoF articulated robot arm are successfully modeled, which provides the theoretical basis for the subsequent analysis and research of the robot arm.
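A forward-kinematics sketch for one common 4-DoF layout (base yaw plus three pitch joints) is given below; the link lengths and joint convention are illustrative assumptions, not necessarily the paper's model:

```python
import math

def forward_kinematics(t1, t2, t3, t4, l1=0.1, l2=0.12, l3=0.12, l4=0.06):
    """End-effector position of an assumed 4-DoF articulated arm:
    t1 = base yaw; t2..t4 = pitch joints of a planar 3-link chain mounted on
    a base column of height l1. Angles in radians, lengths in metres."""
    # reach and height of the planar chain in the arm's vertical plane
    r = (l2 * math.cos(t2)
         + l3 * math.cos(t2 + t3)
         + l4 * math.cos(t2 + t3 + t4))
    z = l1 + (l2 * math.sin(t2)
              + l3 * math.sin(t2 + t3)
              + l4 * math.sin(t2 + t3 + t4))
    # rotate that vertical plane about the base yaw axis
    return (r * math.cos(t1), r * math.sin(t1), z)

x, y, z = forward_kinematics(0.0, 0.0, 0.0, 0.0)   # fully stretched pose
```

The inverse problem inverts these equations in closed form: the base yaw from atan2(y, x), then the planar joint angles from the wrist position via the law of cosines.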

Author 1: Han Zhong Ting
Author 2: Mohd Hairi Mohd Zaman
Author 3: Mohd Faisal Ibrahim
Author 4: Asraf Mohamed Moubark

Keywords: Robot arm; kinematic analysis; forward kinematic; inverse kinematic; open-source

PDF

Paper 91: Fake News Detection in Arabic Tweets during the COVID-19 Pandemic

Abstract: In March 2020, the World Health Organization declared the COVID-19 outbreak to be a pandemic. Soon afterwards, people began sharing millions of posts on social media without considering their reliability and truthfulness. While there has been extensive research on COVID-19 in the English language, there is a lack of research on the subject in Arabic. In this paper, we address the problem of detecting fake news surrounding COVID-19 in Arabic tweets. We collected more than seven million Arabic tweets related to the coronavirus pandemic from January 2020 to August 2020 using the trending hashtags during the time of the pandemic. We relied on two fact-checkers, Agence France-Presse and the Saudi Anti-Rumors Authority, to extract a list of keywords related to misinformation and fake news topics. A small corpus was extracted from the collected tweets and manually annotated into fake or genuine classes. We used a set of features extracted from tweet contents to train a set of machine learning classifiers. The manually annotated corpus was used as a baseline to build a system for automatically detecting fake news from Arabic text. Classification of the manually annotated dataset achieved an F1-score of 87.8% using Logistic Regression (LR) as a classifier with n-gram-level Term Frequency-Inverse Document Frequency (TF-IDF) as a feature, and a 93.3% F1-score on the automatically annotated dataset using the same classifier with a count vector feature. The introduced system and datasets could help governments, decision-makers, and the public judge the credibility of information published on social media during the COVID-19 pandemic.
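The TF-IDF features fed to the LR classifier can be illustrated at the unigram level. A minimal, library-free sketch over an invented toy corpus (the paper works on n-grams over Arabic tweets, and libraries such as scikit-learn normalize differently):

```python
import math

def tfidf(docs):
    """Per-document TF-IDF weights: tf(t, d) * ln(N / df(t)), unigrams only."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = {}
    for toks in tokenized:
        for term in set(toks):                 # document frequency per term
            df[term] = df.get(term, 0) + 1
    weights = []
    for toks in tokenized:
        tf = {t: toks.count(t) / len(toks) for t in set(toks)}
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

# invented toy corpus standing in for tweet texts
w = tfidf(["virus cure found", "virus spreads fast", "weather is fine"])
```

Terms that appear in fewer documents ("cure") get a higher weight than common ones ("virus"), which is exactly what makes TF-IDF useful as a classifier feature.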

Author 1: Ahmed Redha Mahlous
Author 2: Ali Al-Laith

Keywords: Fake news; Twitter; social media; Arabic corpus

PDF

Paper 92: Detecting COVID-19 Utilizing Probabilistic Graphical Models

Abstract: Probabilistic graphical models are employed in a variety of areas, such as artificial intelligence and machine learning, to depict causal relations among sets of random variables. In this research, we employ probabilistic graphical models in the form of Bayesian networks to detect coronavirus disease 2019 (COVID-19). We propose two efficient Bayesian network models that are potent in encoding causal relations among random variables, i.e., COVID-19 symptoms. The first Bayesian network model, denoted BN1, is built from knowledge acquired from medical experts. We collected data from clinics and hospitals in Saudi Arabia for our research, and we name this authentic dataset DScovid. The second Bayesian network model, denoted BN2, is learned from the real dataset DScovid using the Chow-Liu tree approach. We also implement our proposed Bayesian network models and present our experimental results. Our results show that the proposed approaches are capable of modeling the issue of making decisions in the context of COVID-19. Moreover, our experimental results show that the two Bayesian network models we propose in this work are effective for not only extracting causal relations but also reducing uncertainty and increasing the effectiveness of causal reasoning and prediction.
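The Chow-Liu approach behind BN2 builds a maximum-weight spanning tree over variables, with edges weighted by pairwise mutual information. A minimal sketch of the empirical mutual-information computation, over invented symptom samples:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information I(X;Y) in bits from (x, y) samples --
    the pairwise edge weight the Chow-Liu algorithm maximizes over a tree."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# invented symptom samples: (fever, cough), perfectly correlated here
mi = mutual_information([(1, 1), (0, 0), (1, 1), (0, 0)])
```

Two perfectly correlated fair binary variables yield 1 bit; independent ones yield 0, so the spanning tree keeps the most informative dependencies.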

Author 1: Emad Alsuwat
Author 2: Sabah Alzahrani
Author 3: Hatim Alsuwat

Keywords: Coronavirus disease 2019; COVID-19; artificial intelligence; machine learning; probabilistic graphical models; causal models; Bayesian networks; detection methods

PDF

Paper 93: An Optimized Artificial Neural Network Model using Genetic Algorithm for Prediction of Traffic Emission Concentrations

Abstract: Global warming and climate change have become universal issues recently. One of the leading sources of climate change is automobiles, which are the prime source of air pollution in urban areas globally. This has complicated the development of automatic traffic management systems for capturing and monitoring vehicles’ hourly and daily passage. With the significant advancement of sensor technology, atmospheric information such as air pollution, meteorological, and motor vehicle data can be harvested and stored in databases. However, due to the complexity and non-linear associations between air quality, meteorological, and traffic variables, it is difficult for traditional statistical and mathematical models to analyze them. Recently, machine learning algorithms have become a popular tool in the field of traffic emissions prediction. Meteorological and traffic variables influence the variation and the trend of the traffic pollutants. In this paper, an optimized artificial neural network (OANN) was developed to enhance the existing artificial neural network (ANN) model by updating the initial weights in the network using a Genetic Algorithm (GA). The OANN model was implemented to predict the concentration of CO, NO, NO2, and NOx pollutants produced by motor vehicles in Kuala Lumpur, Malaysia. OANN was compared with Artificial Neural Network (ANN), Random Forest (RF), and Decision Tree (DT) models. The results show that the developed OANN model performed better than the ANN, RF, and DT models, with the lowest MSE values of 0.0247 for CO, 0.0365 for NO, 0.0542 for NO2, and 0.1128 for NOx. It can be concluded that the developed OANN model is a better choice for predicting traffic emission concentrations. The developed OANN model can help environmental agencies monitor traffic-related air pollution levels efficiently and take necessary measures to ensure the effectiveness of traffic management policy. It can also help decision-makers mitigate traffic emissions to protect citizens living in the neighborhood of highways.
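The GA-based weight-initialization idea can be illustrated with a generic real-coded genetic algorithm. This is a hedged sketch, not the paper's OANN: the toy fitness below fits a linear model standing in for network weights, and all hyperparameters are invented:

```python
import random

def ga_optimize(fitness, dim, pop_size=30, gens=50, mut_sigma=0.3, seed=1):
    """Minimal real-coded GA: tournament selection, arithmetic crossover,
    Gaussian mutation, and elitism, minimizing `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=fitness)[:]]                 # elitism: keep the best
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)    # tournament selection
            p2 = min(rng.sample(pop, 3), key=fitness)
            alpha = rng.random()                         # arithmetic crossover
            child = [alpha * a + (1.0 - alpha) * b for a, b in zip(p1, p2)]
            child = [w + rng.gauss(0.0, mut_sigma) for w in child]  # mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# toy stand-in for "initial network weights": fit w . x ~ y on made-up samples
data = [([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0)]
mse = lambda w: sum((w[0] * x[0] + w[1] * x[1] - y) ** 2
                    for x, y in data) / len(data)
best = ga_optimize(mse, dim=2)
```

In the paper's setting, the evolved vector would seed an ANN's weights before gradient training rather than being the final model.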

Author 1: Akibu Mahmoud Abdullah
Author 2: Raja Sher Afgun Usmani
Author 3: Thulasyammal Ramiah Pillai
Author 4: Mohsen Marjani
Author 5: Ibrahim Abaker Targio Hashem

Keywords: Optimized Artificial Neural Network (OANN); Genetic Algorithm; traffic emissions

PDF

Paper 94: Discrete Time-Space Stochastic Mathematical Modelling for Quantitative Description of Software Imperfect Fault-Debugging with Change-Point

Abstract: Statistics and stochastic-process theories, along with mathematical modelling and the respective empirical evidence, describe the software fault-debugging phenomenon. In the software-reliability engineering literature, stochastic mathematical models based on the non-homogeneous Poisson process (NHPP) are employed to measure and boost reliability. Since reliability evolves with the running of computer test-runs, NHPP models of the discrete time-space (difference-equation) type are superior to their continuous time-space counterparts. The majority of these models assume either a constant, monotonically increasing, or monotonically decreasing fault-debugging rate under an imperfect fault-debugging environment. However, in most debugging scenarios, a sudden change may occur to the fault-debugging rate due to an addition to, deletion from, or modification of the source code. Thus, the fault-debugging rate may not always be smooth and is subject to change at some point in time called the change-point. Very few studies have addressed the problem of the change-point in the discrete-time modelling approach. The paper examines the combined effects of change-point and imperfect fault-debugging with the learning process on software-reliability growth phenomena based on the NHPP type of discrete time-space modelling approach. The performance of the proposed modelling approach is compared with other existing approaches on an actual software-reliability dataset cited in the literature. The findings reveal that incorporating the effect of the change-point in software-reliability growth modelling enhances the accuracy of software-reliability assessment, because the stochastic characteristics of the software fault-debugging phenomenon alter at the change-point.
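The discrete change-point idea can be illustrated with the simplest difference-equation growth model, m(n+1) − m(n) = b(n)·(a − m(n)), where the fault-debugging rate b(n) jumps at the change-point. The paper's model is richer (imperfect debugging and learning); parameter values below are invented:

```python
def discrete_srgm(a=100.0, b1=0.05, b2=0.12, change_point=20, runs=60):
    """Discrete-time exponential NHPP growth model with a change-point:
    m(n+1) - m(n) = b(n) * (a - m(n)), where the fault-debugging rate b(n)
    jumps from b1 to b2 at the change-point test-run. Returns mean-value
    function m per test-run; a = total expected faults."""
    m = [0.0]
    for n in range(runs):
        b = b1 if n < change_point else b2
        m.append(m[-1] + b * (a - m[-1]))
    return m

mean_faults = discrete_srgm()   # expected cumulative faults per test-run
```

The kink in the curve at the change-point is what a single-rate model cannot reproduce, which is why ignoring it degrades reliability assessment.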

Author 1: Mohd Taib Shatnawi
Author 2: Omar Shatnawi

Keywords: Stochastic mathematical modelling; discrete time-space; non-homogeneous Poisson process; change-point; imperfect fault-debugging; software-reliability

PDF

Paper 95: The Role of Data Pre-processing Techniques in Improving Machine Learning Accuracy for Predicting Coronary Heart Disease

Abstract: These days, in light of rapid developments, people work day and night to live at a good level. This often causes them to pay little attention to a healthy lifestyle, such as what they eat or what physical activities they do, and such people are often the most likely to suffer from coronary heart disease. The heart is a small organ responsible for pumping oxygen-rich blood to the rest of the human body through the coronary arteries. Accordingly, any blockage or narrowing in one of these coronary arteries may prevent blood from being pumped to the heart, and from it to the rest of the body, and thus cause what is known as a heart attack. Hence the importance of early prediction of coronary heart disease: it can help these people change their lifestyle and eating habits to become healthier and thus prevent coronary heart disease and avoid death. This paper improves the accuracy of machine learning techniques in predicting coronary heart disease using data preprocessing techniques. Data preprocessing is used to improve the efficiency of a machine learning model by improving the quality of the features. The popular Framingham Heart Study dataset was used for validation purposes. The results indicate that the use of data preprocessing techniques improved the predictive accuracy of poorly performing classifiers and showed satisfactory performance in determining the risk of coronary heart disease. For example, the Decision Tree classifier reached a predictive accuracy of 91.39% (an increase of 1.39% over previous work), the Random Forest classifier 92.80% (an increase of 2.7%), the K-Nearest Neighbor classifier 92.68% (an increase of 2.58%), the Multilayer Perceptron (MLP) neural network classifier 92.64% (an increase of 2.64%), and the Naïve Bayes classifier 90.56% (an increase of 0.66%).
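One representative preprocessing step is feature scaling. A minimal min-max scaling sketch over invented readings (the paper's full pipeline involves more than this single step):

```python
def min_max_scale(column):
    """Rescale a numeric feature to [0, 1], a common preprocessing step that
    keeps one large-valued feature from dominating distance-based classifiers."""
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

# invented systolic blood pressure readings from a heart-disease dataset
scaled = min_max_scale([110, 120, 150, 190])
```

Scaling like this particularly benefits K-Nearest Neighbor and MLP classifiers, two of the models compared in the paper.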

Author 1: Osamah Sami
Author 2: Yousef Elsheikh
Author 3: Fadi Almasalha

Keywords: Coronary heart disease; heart; machine learning; data preprocessing; classification technique

PDF

Paper 96: Numerical Investigation on System of Ordinary Differential Equations Absolute Time Inference with Mathematica®

Abstract: The purpose of this research is to perform a comparative numerical analysis of efficient numerical methods for second-order ordinary differential equations, by reducing the second-order ODE to a system of first-order differential equations and then obtaining approximate solutions to the system. To validate the accuracy of the algorithms, a comparison between Euler’s method and the Runge-Kutta method of order four was carried out, and an exact solution was found to verify the efficiency and accuracy of the methods. Graphical representations of the parametric plots are also presented. A timing analysis checks the time taken to execute each algorithm in Mathematica® 12.2.0. The approximate solutions obtained show that the Runge-Kutta method of order four is more efficient for solving systems of linear ordinary differential equations.
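The paper runs its comparison in Mathematica®; the same reduce-and-compare experiment can be sketched generically, here for y'' = −y reduced to the first-order system (y, v)' = (v, −y) with exact solution y(t) = cos t:

```python
import math

def euler_step(f, t, y, h):
    """One forward-Euler step for a first-order ODE system."""
    return [yi + h * ki for yi, ki in zip(y, f(t, y))]

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * k for yi, k in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * k for yi, k in zip(y, k2)])
    k4 = f(t + h, [yi + h * k for yi, k in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# y'' = -y rewritten as the system (y, v)' = (v, -y), with y(0)=1, v(0)=0
f = lambda t, y: [y[1], -y[0]]

def solve(step, h=0.1, n_steps=20):
    t, y = 0.0, [1.0, 0.0]
    for _ in range(n_steps):
        y = step(f, t, y, h)
        t += h
    return y

exact = math.cos(2.0)                       # exact solution at t = 2
err_euler = abs(solve(euler_step)[0] - exact)
err_rk4 = abs(solve(rk4_step)[0] - exact)   # RK4 is far more accurate
```

With the same step size, RK4's global error is O(h⁴) versus Euler's O(h), which matches the paper's conclusion that RK4 is the more efficient method.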

Author 1: Adeniji Adejimi
Author 2: Surulere Samuel
Author 3: Mkolesia Andrew
Author 4: Shatalov Michael

Keywords: Euler’s method; Runge-Kutta method; System of ODE; Mathematica®; AbsoluteTiming

PDF

Paper 97: Maneuverable Autonomy of a Six-legged Walking Robot: Design and Implementation using Deep Neural Networks and Hexapod Locomotion

Abstract: Automatically synthesizing real-time behaviors for a six-legged walking robot poses several exciting challenges, which can be categorized into mechanics design, control software, and the combination of both. Due to the complexity of control and automation, numerous studies gear their attention to a specific aspect of the whole challenge, either proposing valid and low-power assumptions for the mechanical parts or implementing software solutions built upon sensor capabilities and cameras. Therefore, a complete solution combining mechanical moving parts, hardware components, and software that encourages generalization should be adequately addressed. The architecture proposed in this article orchestrates (i) interlocutor face detection and recognition utilizing ensemble learning and convolutional neural networks, (ii) maneuverable automation of a six-legged robot via hexapod locomotion, and (iii) deployment on a Raspberry Pi, a combination that has not been previously reported in the literature. Going one step further, the authors also enable real-time operation. We believe that our contributions will stimulate multiple research disciplines, ranging from IoT and computer vision to machine learning and robot autonomy.

Author 1: Hiep Xuan Huynh
Author 2: Nghia Duong-Trung
Author 3: Tran Nam Quoc Nguyen
Author 4: Bao Hoai Le
Author 5: Tam Hung Le

Keywords: Six-legged walking robot; hexapod locomotion; Raspberry Pi; deep neural networks

PDF

Paper 98: Comprehensive Analysis of Augmented Reality Technology in Modern Healthcare System

Abstract: Recent advances of Augmented Reality (AR) in healthcare have shown that the technology is a significant part of the current healthcare system. In recent years, augmented reality has enabled numerous intelligent applications in the healthcare domain, including wearable access, telemedicine, remote surgery, diagnosis of medical reports, and emergency medicine. These augmented healthcare applications aim to improve patient care, increase efficiency, and decrease costs. To identify the advances of AR-based healthcare applications, this article analyzes 45 peer-reviewed journal and conference articles from scholarly databases published between 2011 and 2020. It also addresses current concerns and relevant future challenges, including user satisfaction, convenient prototypes, service availability, and maintenance cost. Despite the development of several AR healthcare applications, there is untapped potential regarding secure data transmission, which is an important factor for advancing this cutting-edge technology. Therefore, this paper also analyzes distinct AR security and privacy aspects, including security requirements (i.e., scalability, confidentiality, integrity, resiliency, etc.) and attack terminologies (i.e., sniffing, fabrication, modification, interception, etc.). Based on these security issues, we propose an artificial intelligence-based dynamic solution to build an intelligent security model that minimizes data security risks. This intelligent model can identify seen and unseen threats in the threat detection layer and thus protect data during transmission. In addition, it prevents external attacks in the threat elimination layer using threat reduction mechanisms.

Author 1: Jinat Ara
Author 2: Faria Benta Karim
Author 3: Mohammed Saud A Alsubaie
Author 4: Yeasin Arafat Bhuiyan
Author 5: Muhammad Ismail Bhuiyan
Author 6: Salma Begum Bhyan
Author 7: Hanif Bhuiyan

Keywords: Augmented Reality (AR); healthcare applications; healthcare challenges; AR-based healthcare security issues; dynamic security solution

PDF

Paper 99: Identification of Abusive Behavior Towards Religious Beliefs and Practices on Social Media Platforms

Abstract: The ubiquitous use of social media has enabled many people, including religious scholars and priests, to share their religious views. Unfortunately, exploiting people’s religious beliefs and practices, some extremist groups intentionally or unintentionally spread religious hatred among different communities and thus hamper social stability. This paper proposes an abusive behavior detection approach to identify hatred, violence, harassment, and extremist expressions against people of any religious belief on social media. First, religious posts are captured from social media users’ activities, and then abusive behaviors are identified through a number of sequential processing steps. In the experiment, Twitter was chosen as an example of social media, and a dataset covering six major religions in the English Twittersphere was collected. To evaluate the proposed approach, five classic classifiers over an n-gram TF-IDF model were used, along with Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) classifiers on trained embeddings and pre-trained GloVe word embeddings. The experimental results showed 85% precision. To the best of our knowledge, this is the first work able to distinguish between hateful and non-hateful content on social media in application domains beyond the religious context.
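
A minimal sketch of the n-gram TF-IDF step the abstract describes, with invented toy documents and labels and a 1-nearest-neighbour stand-in for the classic classifiers (the paper's dataset, preprocessing, and classifiers are not reproduced here):

```python
import math
from collections import Counter

def ngrams(text, n=2):
    """Unigrams plus word n-grams, as a simple feature set."""
    toks = text.lower().split()
    return toks + [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def tfidf_vectors(docs):
    """TF-IDF weight each document's n-gram counts."""
    tfs = [Counter(ngrams(d)) for d in docs]
    df = Counter(t for tf in tfs for t in tf)
    n_docs = len(docs)
    idf = {t: math.log(n_docs / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in tf.items()} for tf in tfs], idf

def cosine(a, b):
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy training posts (invented for illustration only)
train = ["peaceful interfaith dialogue event",
         "hate and violence against believers",
         "community prayer gathering",
         "threats and harassment of worshippers"]
labels = ["non-abusive", "abusive", "non-abusive", "abusive"]
vecs, idf = tfidf_vectors(train)

def classify(text):
    """Label a new post by its nearest training post in TF-IDF space."""
    qv = {t: c * idf.get(t, 0.0) for t, c in Counter(ngrams(text)).items()}
    best = max(range(len(train)), key=lambda i: cosine(qv, vecs[i]))
    return labels[best]

print(classify("violence against worshippers"))
```

Replacing the nearest-neighbour step with any off-the-shelf classifier trained on the same TF-IDF matrix gives the pipeline shape the paper evaluates.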

Author 1: Tanvir Ahammad
Author 2: Md. Khabir Uddin
Author 3: Tamanna Yesmin
Author 4: Abdul Karim
Author 5: Sajal Halder
Author 6: Md. Mahmudul Hasan

Keywords: Social media; religious abuse detection; religious keywords; religious hatred; feature extraction; classifier

PDF

Paper 100: Fish Disease Detection System: A Case Study of Freshwater Fishes of Bangladesh

Abstract: The proposed system is designed for automatic detection and classification of fish diseases in freshwater, especially in the Rangamati Kaptai Lake and Sunamganj Hoar areas of Bangladesh, where detecting fish diseases at an early stage is a challenging task for fisheries farming. This study presents fish disease detection based on K-means and fuzzy C-means clustering to segment the filtered image. Gabor filters and the Gray Level Co-occurrence Matrix (GLCM) are used to extract features from the segmented regions. Finally, a Multi-class Support Vector Machine (M-SVM) is used to classify the test image. The proposed system includes a comparison between K-means clustering and fuzzy C-means: the methodology gave 96.48% accuracy using K-means and 97.90% using fuzzy C-means, the highest accuracy rates compared with other existing methods. The system was evaluated in the MATLAB environment on infected fish images from the Rangamati Kaptai Lake and Sunamganj Hoar areas. Our experimental results indicate that the proposed approach detects and recognizes fish diseases accurately and automatically, can detect and classify different fish diseases at early stages, and contributes improved results for fish disease detection.
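
A minimal sketch of the GLCM texture-feature step in the pipeline above (the 4x4 image is a synthetic stand-in for a segmented fish region; the Gabor-filter and M-SVM stages are omitted):

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalized."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Three classic Haralick-style statistics of the co-occurrence matrix."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

# Synthetic 4-level "segmented region" with four uniform texture patches
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
print(glcm_features(glcm(img)))
```

In the full pipeline, such features (computed per segmented region, typically over several offsets and angles) form the vector handed to the M-SVM classifier.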

Author 1: Juel Sikder
Author 2: Kamrul Islam Sarek
Author 3: Utpol Kanti Das

Keywords: K-means; c-means fuzzy logic; multi-SVM; detection

PDF

Paper 101: Improving Data Services of Mobile Cloud Storage with Support for Large Data Objects using OpenStack Swift

Abstract: Providing data services that support large file upload and download is increasingly vital for mobile cloud storage. The number of mobile users is increasing, and their data access trends show more access and more large file sharing. Handling the upload and retrieval of large files to/from a mobile app is a challenging task for mobile application developers because of latency, bandwidth, speed, errors, and service disruptions in a wireless mobile environment. Some scenarios require these large files to be used offline, sometimes updated by a single user, and sometimes shared among all other users. The wireless mobile environment must consider mobile users’ constraints, such as frequent disconnections and low bandwidth, which affect data and transaction management. The primary objective of this study is to propose a cloud-based Mobile Sync service (sometimes referred to as Mobile Backend as a Service) with OpenStack Swift object storage to manage large objects efficiently, using the two main techniques of segmentation and object chunking with compression in a mobile cloud environment. This work further contributes a prototype implementation of the proposed framework and provides an Application Programming Interface (API) consisting of Create, Read, and Delete queries, chunking operations, and a lightweight sync protocol that can manage large file synchronization and access. The experimental findings with tested object-chunking size settings show that the proposed Mobile Sync framework can accommodate large files ranging from 100 MB to 1 GB and decreases upload/download synchronization times by 63.203%/92.987% compared with other frameworks.
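
The chunking-with-compression idea can be sketched on the client side as below; the segment size, naming scheme, and zlib compression are illustrative assumptions, and OpenStack Swift's actual segment-upload API is not called here:

```python
import hashlib
import io
import zlib

CHUNK = 4 * 1024 * 1024  # 4 MiB segments (illustrative size)

def chunk_and_compress(stream, chunk_size=CHUNK):
    """Yield (segment_name, compressed_bytes) pairs for a large object."""
    index = 0
    while True:
        block = stream.read(chunk_size)
        if not block:
            break
        comp = zlib.compress(block, 6)
        # Name carries an ordering index and a content digest (hypothetical scheme)
        name = f"segment_{index:05d}_{hashlib.md5(block).hexdigest()[:8]}"
        yield name, comp
        index += 1

def reassemble(segments):
    """Decompress and concatenate segments to recover the original object."""
    return b"".join(zlib.decompress(c) for _, c in segments)

data = b"large object payload " * 500000  # ~10 MB of repetitive test data
segments = list(chunk_and_compress(io.BytesIO(data)))
assert reassemble(segments) == data
print(len(segments), sum(len(c) for _, c in segments))
```

In a Swift deployment, each compressed segment would be uploaded as its own object and stitched together with a large-object manifest, so an interrupted transfer only re-sends the missing segments.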

Author 1: Aslam B Nandyal
Author 2: Mohammed Rafi
Author 3: M Siddappa
Author 4: Babu B. Sathish

Keywords: Mobile cloud computing; mobile backend as a service; large files; distributed systems

PDF

Paper 102: A Comparative Study of Stand-Alone and Hybrid CNN Models for COVID-19 Detection

Abstract: The COVID-19 pandemic continues to impact both the international economy and individual lives. A fast and accurate diagnosis of COVID-19 is required to limit the spread of this disease and reduce the number of infections and deaths. However, the test used to diagnose COVID-19, Real-Time Reverse Transcription-Polymerase Chain Reaction (RT-PCR), is a time-consuming biological test. Furthermore, the test sometimes produces ambiguous results, especially when samples are taken in the early stages of the disease. As a potential solution, machine learning algorithms could help enhance the process of detecting COVID-19 cases. In this paper, we provide a study that compares a stand-alone CNN model and hybrid machine learning models in their ability to detect COVID-19 from chest X-ray images. We present four models that classify such images into COVID-19 and normal. Visual Geometry Group (VGG-16) is the architecture used to develop the stand-alone CNN model. Each hybrid model consists of two parts: VGG-16 as a feature extractor, and a conventional machine learning algorithm, namely Support Vector Machines (SVM), Random Forests (RF), or Extreme Gradient Boosting (XGBoost), as a classifier. Even though several studies have investigated this topic, the dataset used in this study is considered one of the largest because we combined five existing datasets. The results illustrate that there is no noticeable improvement in performance when hybrid models are used as an alternative to the stand-alone CNN model. The VGG-16 and VGG-16+SVM models provide the best performance, with 99.82% accuracy and 100% sensitivity. In general, all four presented models are reliable, and the lowest accuracy obtained among them is 98.73%.
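
The hybrid split described above (frozen pretrained feature extractor, separate classical classifier) can be sketched generically. In this toy sketch a fixed random projection stands in for VGG-16's frozen convolutional stack, a perceptron stands in for the SVM/RF/XGBoost head, and the data is synthetic; nothing here reproduces the paper's models or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.normal(size=(64, 16))  # "pretrained" weights, never updated

def extract_features(x):
    """Frozen feature extractor: fixed projection plus ReLU."""
    return np.maximum(x @ W_frozen, 0.0)

def train_head(feats, y, epochs=200, lr=0.1):
    """Train only the classifier head on the extracted features."""
    w = np.zeros(feats.shape[1])
    for _ in range(epochs):
        for f_vec, t in zip(feats, y):
            pred = 1 if f_vec @ w > 0 else -1
            if pred != t:
                w += lr * t * f_vec  # perceptron update
    return w

# Synthetic two-class "images" (flattened), well separated by construction
x_pos = rng.normal(loc=1.0, scale=0.3, size=(20, 64))
x_neg = rng.normal(loc=-1.0, scale=0.3, size=(20, 64))
X = np.vstack([x_pos, x_neg])
y = np.array([1] * 20 + [-1] * 20)

feats = extract_features(X)  # extractor stays frozen throughout
w = train_head(feats, y)     # only the head is trained
acc = np.mean((feats @ w > 0) == (y > 0))
print(acc)
```

The design point the comparison probes is exactly this split: when the frozen extractor's features are already discriminative, swapping the trainable head (dense layer vs. SVM vs. RF vs. XGBoost) changes little, which matches the paper's finding.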

Author 1: Wedad Alawad
Author 2: Banan Alburaidi
Author 3: Asma Alzahrani
Author 4: Fai Alflaj

Keywords: COVID-19; convolutional neural network; hybrid models; chest X-Ray; deep learning

PDF

Paper 103: A Self-adaptive Algorithm for Solving Basis Pursuit Denoising Problem

Abstract: In this paper, we further consider a method for solving the basis pursuit denoising problem (BPDP), which has received considerable attention in signal processing and statistical inference. To this end, a new self-adaptive algorithm is proposed, and its global convergence is established. Furthermore, we show that the method has a sublinear convergence rate of O(1/k). Finally, the effectiveness of the given method is demonstrated via some numerical examples.
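
For reference, the basis pursuit denoising problem is commonly stated as the following unconstrained convex program (a standard formulation; the paper's exact notation may differ), and the sublinear O(1/k) rate means the objective gap at iterate k decays at least like 1/k:

```latex
\min_{x \in \mathbb{R}^n} \; f(x) := \frac{1}{2}\|Ax - b\|_2^2 + \mu \|x\|_1,
\qquad
f(x_k) - f(x^{*}) \le \frac{C}{k}
```

Here A is the measurement matrix, b the noisy observation, \mu > 0 the regularization parameter promoting a sparse recovery x, and C a constant depending on the starting point.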

Author 1: Mengkai Zhu
Author 2: Xu Zhang
Author 3: Bing Xue
Author 4: Hongchun Sun

Keywords: Basis pursuit denoising problem; algorithm; global convergence; sublinearly convergent rate; sparse signal recovery

PDF

Paper 104: Wheelchair Control System based Eye Gaze

Abstract: The inability to control the limbs is the main factor that affects the daily activities of disabled people and causes social restrictions and isolation. Many studies have been conducted to help people with disabilities communicate easily with the outside world, and various techniques have been designed to help them carry out daily activities. Among these technologies is the smart wheelchair. This research aims to develop a smart eye-controlled wheelchair whose movement depends on eye movement tracking. The proposed wheelchair is simple in design, easy to use, and low cost compared with previous wheelchairs. Eye movement is detected through a camera fixed on the chair, and the user's gaze direction is obtained from the captured image after some processing and analysis. The resulting command is sent to an Arduino Uno board, which controls the wheelchair movement. The wheelchair's performance was checked with different volunteers, and its accuracy reached 94.4% with a very short response time compared with other existing chairs.
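
A toy version of the gaze-ratio idea behind such systems: compare bright sclera pixels in the two halves of a thresholded eye region to decide which side the dark pupil sits on. The threshold, the synthetic eye images, and the direction labels are all illustrative assumptions; the paper's landmark-based pipeline is not reproduced here.

```python
import numpy as np

def gaze_direction(eye_gray, thresh=128):
    """Classify gaze from the left/right balance of bright (sclera) pixels."""
    binary = (eye_gray > thresh).astype(int)   # bright sclera -> 1, dark pupil -> 0
    h, w = binary.shape
    left_white = binary[:, : w // 2].sum()
    right_white = binary[:, w // 2:].sum()
    ratio = (left_white + 1) / (right_white + 1)  # +1 avoids division by zero
    if ratio > 1.5:
        return "RIGHT"   # sclera dominates the left half: pupil is on the right
    if ratio < 1 / 1.5:
        return "LEFT"    # sclera dominates the right half: pupil is on the left
    return "CENTER"

# Synthetic 10x20 grayscale eye region: bright sclera with a dark pupil band
eye = np.full((10, 20), 200)
eye[:, 14:19] = 40  # pupil toward the right side
print(gaze_direction(eye))
```

In the full system, each returned direction would be mapped to a movement command and written over serial to the Arduino Uno that drives the wheelchair motors.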

Author 1: Samar Gamal Amer
Author 2: Rabie A. Ramadan
Author 3: Sanaa A. kamh
Author 4: Marwa A. Elshahed

Keywords: dlib; numpy; gaze ratio; facial landmark points; deep learning

PDF

Paper 105: Design and Implementation of a Most Secure Cryptographic Scheme for Lightweight Environment using Elliptic Curve and Trigonohash Technique

Abstract: The Internet of Things (IoT) is an emerging development and is a network of all devices that can be accessed through the Internet. As a core building block of the IoT, wireless sensor networks (WSN) can be used to collect the environmental parameters required by specific applications. Owing to the resource limitations of sensor devices and the open nature of the wireless channel, security has become a major challenge in WSN. Authentication, as an essential security service, can be used to guarantee the legitimacy of data access in WSN. The proposed three-factor scheme using a one-way hash function is characterized by low computational cost and limited overhead, while retaining all other benefits. Session keys are generated from the secret key to improve security. We compared the scheme's security and performance with some lightweight schemes; as the analysis shows, the proposed scheme provides greater security with low communication overhead. Encryption and decryption are performed using mathematical concepts together with the one-way hash function. These mathematical concepts are lightweight and improve security considerably by reducing the chances of cryptanalysis. When compared with other algorithms, the proposed algorithm gives better performance results.

Author 1: Bhaskar Prakash Kosta
Author 2: PasalaSanyasi Naidu

Keywords: Internet of Things (IoT); authentication; one way hash function; lightweight environment; secret key

PDF

Paper 106: Detecting Malware Infection on Infrastructure Hosted in Iaas Cloud using Cloud Visibility and Forensics

Abstract: Cloud computing has been adopted very rapidly by organizations of different businesses and sizes. The use of cloud services, especially IaaS services, is rising at an unparalleled rate as cloud providers offer more powerful resources with flexible offerings and models. This rapid adoption opens new attack surfaces that attackers abuse with their malware to take advantage of these powerful resources and the valuable data that resides on them. Therefore, to defend well against malware attacks, organizations need full visibility not only into their data centers but also into their resources hosted in the cloud, and must not take that security for granted. This paper aims to provide the best approaches to achieving continuous monitoring of malware attacks on the cloud along with their phases (before, during, and after), and discusses the limitations of today's available techniques, suggesting needed developments. Logging and forensics techniques have always been the cornerstone of continuous monitoring and detection of malware attacks on-premises. This paper defines the best methods to bring logging and forensics to the cloud and integrate them with on-premises visibility, thus achieving full monitoring over the whole security posture of an organization's assets, whether on-premises or in the cloud.

Author 1: Lama Almadhoor
Author 2: A. A. bd El-Aziz
Author 3: Hedi Hamdi

Keywords: Malware attacks; infrastructure as a service (IaaS); amazon web services (AWS); malware detection; cloud forensics; visibility

PDF

Paper 107: Reputation Measurement based on a Hybrid Sentiment Analysis Approach for Saudi Telecom Companies

Abstract: Thousands of active people on social media share their thoughts and opinions daily about different subjects and issues. Many social media platforms are used to express feelings or opinions, and at the top of the list is Twitter. On Twitter, opinions are expressed in many fields, such as movies, events, products, and services; this data is considered a valuable resource that helps companies and decision-makers make decisions. This study uses a hybrid approach to extract opinions from Arabic tweets in order to measure service providers' reputation, with the Saudi telecom companies as a case study. The research concentrates on determining people's opinions more accurately by utilizing Retweet and Favorite counts. The numbers of positive and negative tweets, after applying the polarity equation, were used to estimate reputation scores. The results indicate that the STC company has a high reputation compared with other companies. The proposed approach shows promising results for expanding existing knowledge of sentiment analysis in the domain of reputation measurement.
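
One way Retweet and Favorite counts could weight a polarity-based reputation score is sketched below; the weighting function, the score formula, and the toy tweets are illustrative assumptions, not the paper's actual polarity equation or data:

```python
def tweet_weight(retweets, favorites):
    """Engagement weight: widely shared tweets count for more (assumed form)."""
    return 1 + retweets + favorites

def reputation_score(tweets):
    """Fraction of engagement-weighted polarity that is positive, in [0, 1]."""
    pos = sum(tweet_weight(t["rt"], t["fav"]) for t in tweets if t["polarity"] > 0)
    neg = sum(tweet_weight(t["rt"], t["fav"]) for t in tweets if t["polarity"] < 0)
    return pos / (pos + neg) if (pos + neg) else 0.5  # neutral when no signal

tweets = [
    {"polarity": +1, "rt": 10, "fav": 20},  # widely shared praise
    {"polarity": -1, "rt": 1,  "fav": 0},   # little-noticed complaint
    {"polarity": -1, "rt": 2,  "fav": 1},
]
print(reputation_score(tweets))  # pulled above 0.5 by the popular positive tweet
```

Scoring each company's tweet stream this way and ranking the results gives the kind of cross-company reputation comparison the study reports.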

Author 1: Bayan Abdullah
Author 2: Nouf Alosaimi
Author 3: Sultan Almotiri

Keywords: Reputation; sentiment analysis; Arabic language; social media

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org