The Science and Information (SAI) Organization
IJACSA Volume 11 Issue 9

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: Efficient GPU Implementation of Multiple-Precision Addition based on Residue Arithmetic

Abstract: In this work, the residue number system (RNS) is applied for efficient addition of multiple-precision integers using graphics processing units (GPUs) that support the Compute Unified Device Architecture (CUDA) platform. The RNS allows calculations with the digits of a multiple-precision number to be performed in an element-wise fashion, without the overhead of communication between them, which is especially useful for massively parallel architectures such as the GPU architecture. The paper discusses two multiple-precision integer algorithms. The first algorithm relies on if-else statements to test the signs of the operands. In turn, the second algorithm uses radix complement RNS arithmetic to handle negative numbers. While the first algorithm is more straightforward, the second one avoids branch divergence among threads that concurrently compute different elements of a multiple-precision array. As a result, the second algorithm shows significantly better performance compared to the first algorithm. Both algorithms running on an NVIDIA RTX 2080 Ti GPU are faster than the multi-core GNU MP implementation running on an Intel Xeon 4100 processor.

Author 1: Konstantin Isupov
Author 2: Vladimir Knyazkov

Keywords: Multiple-precision algorithm; integer arithmetic; residue number system; GPU; CUDA

PDF
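The element-wise property the abstract relies on can be illustrated with a minimal pure-Python sketch of RNS addition; the moduli set below is a toy choice for illustration, not the one used in the paper, which would use many machine-word-sized moduli.

```python
from math import prod

MODULI = [7, 11, 13, 15]  # pairwise coprime toy moduli
M = prod(MODULI)          # dynamic range of the RNS

def to_rns(x):
    return [x % m for m in MODULI]

def rns_add(a, b):
    # Digit-wise addition with no carries between residues: each element
    # is independent, which is what maps well onto parallel GPU threads.
    return [(x + y) % m for x, y, m in zip(a, b, MODULI)]

def from_rns(r):
    # Chinese Remainder Theorem reconstruction back to a plain integer
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)
    return x % M

a, b = 1234, 5678
assert from_rns(rns_add(to_rns(a), to_rns(b))) == (a + b) % M
```

Because no residue depends on any other, a GPU kernel can assign one thread per residue with no inter-thread communication, which is the point the abstract makes.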

Paper 2: Classification of Pulmonary Nodule using New Transfer Method Approach

Abstract: Lung cancer is among the deadliest cancers worldwide and accounted for 27% of all cancers in 2018. Despite substantial recent improvements in diagnosis and medication, the five-year survival rate is just 19%. Classification of lung nodules is an essential step before diagnosis, particularly because early detection can provide doctors with a highly valued opinion. With advanced vision devices and machine-learning technology, CT images can be detected and classified easily and accurately, and this field of work has been extremely successful. Researchers have already attempted to improve the accuracy of computed tomography (CT) based CAD structures for lung cancer screening with several deep learning models. In this paper, we propose a fully automated lung CT system for lung nodule classification, namely the new transfer method (NTM), which has two parts. First, features are extracted by applying different VOI and feature extraction techniques; we used intensity, shape, border contrast, and spicula extraction to extract the lung nodule. These nodules are then transferred to the classification part, where we used an advance-fully convolution network (A-FCN) to classify lung nodules as benign or malignant. Our A-FCN network contains three types of layers that help to enhance the performance and accuracy of the NTM network: convolution layers, pooling layers, and fully connected layers. The proposed model was trained on the LIDC-IDRI dataset and attained an accuracy of 89.90% with an AUC of 0.9485.

Author 1: Syed Waqas Gillani
Author 2: Bo Ning

Keywords: New transfer method; VOI extraction; feature extraction; classification; LIDC-IDRI dataset

PDF

Paper 3: 6G: Envisioning the Key Technologies, Applications and Challenges

Abstract: In 2030, 6G is going to bring a remarkable revolution in communication technologies, as it will enable the Internet of Everything. Many countries are still working on 5G, and B5G has yet to be developed, while some research groups have already initiated projects on 6G. 6G will provide high and sophisticated QoS, e.g., virtual reality and holographic communication. At this stage, it is impossible to speculate on every detail of 6G and which key technologies will mark it. The wide applications of ICT, such as IoT, AI, blockchain technology, XR (Extended Reality), and VR (Virtual Reality), have driven the emergence of 6G technology. Building on 5G techniques, 6G will have a profound impact on ubiquitous connectivity, holographic connectivity, deep connectivity, and intelligent connectivity. Notably, the research fraternity should focus on the challenges and issues of 6G and explore various alternatives to meet its desired parameters. Thus, there are many potential challenges to be envisioned. This review study outlines future challenges and issues that can hamper the deployment of 6G. We subsequently define key potential features of 6G to provide the state of the art of 6G technology for future research. We provide a review of extant research on 6G, in which technology prospects, challenges, key areas, and related issues are briefly discussed. In addition, we provide a technology breakdown and framework of 6G, and we shed light on future directions, applications, and practical considerations of 6G to help researchers toward possible breakthroughs. Our aim is to aggregate these efforts and eliminate technical uncertainties toward breakthrough innovations for 6G.

Author 1: Syed Agha Hassnain Mohsan
Author 2: Alireza Mazinani
Author 3: Warda Malik
Author 4: Imran Younas
Author 5: Nawaf Qasem Hamood Othman
Author 6: Hussain Amjad
Author 7: Arfan Mahmood

Keywords: IoT; AI; communication technologies; holographic communication; blockchain

PDF

Paper 4: Maximum Likelihood Classification based on Classified Result of Boundary Mixed Pixels for High Spatial Resolution of Satellite Images

Abstract: Maximum Likelihood Classification (MLC) based on the classified result of boundary mixed pixels (mixels) for high-spatial-resolution remote sensing satellite images is proposed and evaluated with Landsat Thematic Mapper (TM) images. The optimum threshold differs between TM and Multispectral Scanner (MSS) data. This may be because the TM spatial resolution is 2.7 times finer than that of MSS; consequently, TM imagery has more spectral variability within a class. The increase in spectral heterogeneity within a class and the higher number of channels used in the classification process may play a significant role. For example, the optimum threshold for classifying an agricultural scene using MSS data is about 2.5 standard deviations, while that for TM corresponds to more than four standard deviations. This paper compares the optimum thresholds of MSS and TM and suggests a method of using unassigned boundary pixels to determine the optimum threshold. Further, it describes the relationship of the optimum threshold to the class variance, fully illustrated with TM data. The experimental conclusions suggest to the user some systematic methods for obtaining an optimal classification with MLC.

Author 1: Kohei Arai

Keywords: Maximum likelihood classification; optimum threshold; Landsat TM; MSS; Mixed Pixel; spatial resolution

PDF
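The thresholding idea, assigning a pixel to its most likely class only when it lies within a set number of standard deviations, can be sketched in one dimension; the class names and statistics below are invented for illustration, not real TM or MSS band values.

```python
import math

# Toy 1-D class statistics: (mean, standard deviation) per class
CLASSES = {"water": (20.0, 3.0), "crop": (60.0, 8.0)}

def classify(pixel, threshold_sd):
    # Pick the class with the highest likelihood, but leave the pixel
    # unassigned when it lies farther than `threshold_sd` standard
    # deviations from every class mean (a boundary/mixed pixel).
    best, best_ll = None, -math.inf
    for name, (mu, sd) in CLASSES.items():
        z = abs(pixel - mu) / sd
        ll = -math.log(sd) - 0.5 * z * z  # log-likelihood up to a constant
        if z <= threshold_sd and ll > best_ll:
            best, best_ll = name, ll
    return best  # None means unassigned

assert classify(22.0, 2.5) == "water"
assert classify(39.0, 2.5) is None     # boundary pixel at a 2.5-sd threshold
assert classify(39.0, 4.0) == "crop"   # a looser, TM-like threshold assigns it
```

Raising the threshold from 2.5 to 4 standard deviations, as the abstract reports for TM versus MSS, turns formerly unassigned boundary pixels into classified ones.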

Paper 5: Weather Variability Forecasting Model through Data Mining Techniques

Abstract: Climate and weather variability are pressing concerns for communities worldwide: weather variability has a broad impact on the economy and the survival of living entities. For the African country Ethiopia, weather variability deserves particular attention. The Ethiopian Dodota Woreda region is continuously affected by repeated droughts, which motivates investigating and analyzing the major causes of their frequent occurrence. Although weather scientists and domain experts are overwhelmed with meteorological data, they lack the means of analyzing it and revealing the hidden knowledge or patterns behind weather variability. This paper is an effort to design an enhanced predictive model for weather variability forecasting through data mining techniques. The parameters used in this research are temperature, dew point, sunshine, rainfall, wind speed, maximum temperature, minimum temperature, and relative humidity. To improve accuracy, we used the multilayer perceptron (MLP), Naïve Bayes, and multinomial logistic regression algorithms to design the proposed predictive model, with the knowledge discovery in databases (KDD) process model as a framework. The research findings revealed that the aforementioned parameters have a strong positive relationship with weather forecasting in the meteorology sector. The MLP model with selected parameters achieves an interesting predictive accuracy of 98.3908% correctly classified instances. The best-performing algorithm, MLP, was chosen and used to generate interesting patterns, which the domain experts (meteorologists) validated for improved accuracy of weather variability forecasting.

Author 1: Sultan Shekana
Author 2: Addisu Mulugeta
Author 3: Durga Prasad Sharma

Keywords: Meteorological data; weather forecasting; multilayer-perceptron; Naïve Bayes; multinomial logistic regression algorithms

PDF
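To illustrate how an MLP maps weather parameters to a forecast, here is a toy forward pass; the weights, inputs, and layer sizes are invented for illustration and are not the trained model from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp_forward(inputs, w_hidden, w_out):
    # One hidden layer: each hidden unit takes a weighted sum of the
    # inputs through a sigmoid; the output unit does the same over the
    # hidden activations, yielding a probability-like score.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Three toy inputs (e.g. temperature, humidity, wind speed, rescaled to [0, 1])
x = [0.7, 0.2, 0.5]
w_hidden = [[0.4, -0.6, 0.1], [0.3, 0.8, -0.5]]  # 2 hidden units
w_out = [1.2, -0.7]
p = mlp_forward(x, w_hidden, w_out)
assert 0.0 < p < 1.0  # e.g. interpreted as "rain likely" above a cutoff
```

Training (e.g. by backpropagation, as Weka's MLP does) would fit `w_hidden` and `w_out` to the historical meteorological records; only the forward computation is shown here.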

Paper 6: A Recommender System for Mobile Applications of Google Play Store

Abstract: With the growth of the smartphone market, users can download a great many applications, and they struggle to find, among the massive number of mobile applications available, one that meets their needs. Indeed, there is a critical demand for personalized application recommendations. To address this problem, we propose a model that seamlessly combines content-based filtering with application profiles. We analyzed the applications available on the Google Play app store to extract the essential features for choosing an app and then used these features to build app profiles. Based on the number of installations, the number of reviews, app size, and category, we developed a content-based recommender system that can suggest apps matching what users have searched for in the application profiles. We tested our model using a k-nearest neighbor algorithm and demonstrated that our system achieves good and reasonable results.

Author 1: Ahlam Fuad
Author 2: Sahar Bayoumi
Author 3: Hessah Al-Yahya

Keywords: Application profile; content-based filtering; Google play; mobile applications; recommender systems

PDF
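The k-nearest-neighbour lookup over app profiles can be sketched as follows; the app names and feature values (installs, reviews, size, category, all rescaled) are invented for illustration.

```python
import math

# Toy app profiles: [installs, reviews, size, category id], normalised
apps = {
    "AppA": [0.90, 0.80, 0.30, 1],
    "AppB": [0.85, 0.75, 0.35, 1],
    "AppC": [0.10, 0.20, 0.90, 2],
}

def recommend(query, k=2):
    # Rank every profile by Euclidean distance to the query profile and
    # return the k closest apps: a plain k-nearest-neighbour lookup.
    return sorted(apps, key=lambda name: math.dist(query, apps[name]))[:k]

# A query profile close to AppA/AppB should surface those two first.
assert recommend([0.88, 0.78, 0.32, 1]) == ["AppA", "AppB"]
```

A production system would normalise each feature and index many thousands of profiles, but the ranking step is the same.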

Paper 7: Factored Phrase-based Statistical Machine Pre-training with Extended Transformers

Abstract: This paper presents the development of a cascaded hybrid multilingual automatic translation system that allows tight coupling between the two underlying research approaches in machine translation, namely the neural (deterministic) and statistical (probabilistic) approaches, while fully taking advantage of each method in order to improve translation performance. This architecture addresses two major problems frequently occurring when dealing with morphologically richer languages in MT: the significant number of unknown tokens generated due to the presence of out-of-vocabulary (OOV) words, and the size of the output vocabulary. Additionally, we incorporated factors (additional word-level linguistic information) in order to alleviate the data sparseness problem and potentially reduce language ambiguity; the factors we considered are lemmatization and part-of-speech tags (taking their various compounds into consideration). We combined a fully-factored transformer and a factored PB-SMT: the training data is pre-translated using the trained fully-factored transformer and afterwards employed to build a PB-SMT system, while the pre-translated development set is used in parallel to tune parameters. Finally, to produce the desired results, the FPB-SMT system re-decodes the pre-translated test set in a post-processing step. Experiments performed on translations from Japanese to English and English to Japanese reveal that our proposed cascaded hybrid framework outperforms the strong HMT state of the art by over 8.61% and 7.25% BLEU, respectively, on the validation set, and by over 8.70% and 7.70% BLEU, respectively, on the test set.

Author 1: Vivien L. Beyala
Author 2: Marcellin J. Nkenlifack
Author 3: Perrin Li Litet

Keywords: Machine translation; transformer; statistical machine; morphologically rich; hybrid

PDF

Paper 8: Reward-Based DSM Program for Residential Electrical Loads in Smart Grid

Abstract: There is a positive attitude towards using different strategies for engaging in demand response (DR) programs in energy markets through the innovation and trend of smart grid technologies. In this paper, a reward-based approach is proposed that enhances the involvement of customers in the DR program while assuring the customer's comfort level. Most previous works considered only limited controllable loads, such as thermal loads, for demand side management (DSM); in this approach, thermal and all active electrical loads are considered in the analysis. A comfort indicator, an important parameter for measuring the comfort of each resident, is used in the analysis. This technique significantly reduces the utility reward cost and maximizes the user satisfaction level compared with the existing approach. The numerical example considered in this work illustrates the effectiveness of the proposed approach. The problem is formulated as a mixed-integer linear program (MILP) and solved using the CPLEX solver in the General Algebraic Modeling System (GAMS).

Author 1: Muthuselvi G
Author 2: Saravanan B

Keywords: DSM; LSE; RLA; smart grid; reward

PDF

Paper 9: SQ-Framework for Improving Sustainability and Quality into Software Product and Process

Abstract: Sustainability is one of the most important quality factors, and it integrates other quality factors into the product as well. Moreover, it makes for an effective workflow and improves user satisfaction. A manager can meet a target by controlling a project, but sustainability is more versatile: quality factors are the measuring criteria of a product, while sustainability drives the making of a quality product, an efficient project, and a successful organization, so it is a package of strategy, tasks, processes, technologies, and stakeholders. It is observed that sustainability practice is lacking in software engineering, unlike in other engineering communities, and the many existing software development models have only limited scope for quality control for sustainability. Given this viewpoint, this research proposes a new software project management framework, the "SQ-Framework". Its hybrid structure consists of features of methods, quality models, and sustainability, and execution guidance for the SQ-Framework is provided according to the Karlskrona Manifesto. A manager can use the framework to improve the management process of a project, a developer can integrate quality factors with sustainability into the product, an executive can be motivated to integrate quality and sustainability strategy in the organization, and users get inspiration to practice sustainability.

Author 1: Kamal Uddin Sarker
Author 2: Aziz Bin Deraman
Author 3: Raza Hasan
Author 4: Ali Abbas

Keywords: Software project management; sustainable project; sustainable product; sustainable and quality model; system development methodology

PDF

Paper 10: Forecasting the Global Horizontal Irradiance based on Boruta Algorithm and Artificial Neural Networks using a Lower Cost

Abstract: More solar-based electricity generation stations have been established markedly in recent years as a new and important source of renewable energy. Ensuring a more efficient, reliable integration of solar power requires overcoming several challenges, such as future forecasting and the costly equipment in meteorological stations. One effective prediction method is artificial neural networks (ANN), combined here with the Boruta algorithm for optimal attribute selection, to train the proposed prediction model to obtain highly accurate prediction performance at a lower cost. The precise goal of this research is to predict the Global Horizontal Irradiance (GHI) by building an ANN model, while reducing the total number of GHI prediction attributes/features and consequently the cost of the devices and equipment required to predict this important factor. The dataset applied in this research is real data, collected from 2015-2018 by solar and meteorological stations in KSA and provided by King Abdullah City for Atomic and Renewable Energy (KA CARE). The findings emphasize the achievement of accurate predictions of solar radiation at a minimum cost, which is highly important in KSA and all other countries with a similar environment.

Author 1: Abdulatif Aoihan Alresheedi
Author 2: Mohammed Abdullah Al-Hagery

Keywords: Global horizontal irradiance; artificial neural networks; feature selection; boruta algorithm; cost reduction; machine learning

PDF

Paper 11: Towards Computational Models to Theme Analysis in Literature

Abstract: Recent years have witnessed the development of numerous computational methods that are widely used in humanities and literary studies. In spite of the potential of such methods for providing workable solutions to inherent problems within these domains, including selectivity, objectivity, and replicability, very little has been done on thematic studies in literature. Almost all the work is done through traditional methods based on individual researchers' reading of texts and intuitive abstraction of generalizations from that reading. These approaches have negative implications for objectivity and replicability. Furthermore, it is challenging for such traditional methods to deal effectively with the hundreds of thousands of new novels published every year. In the face of these problems, this study proposes an integrated computational model for the thematic classification of literary texts based on lexical clustering methods. As an example, the study uses a corpus of Thomas Hardy's novels and short stories. Computational semantic analysis based on the vector space model (VSM) representation of the lexical content of the texts is used. Results indicate that the selected texts were thematically grouped based on their semantic content. It can be claimed that text clustering approaches, which have long been used in computational theory and data mining applications, can be usefully applied in literary studies.

Author 1: Abdulfattah Omar

Keywords: Computational models; computational semantics; lexical clustering; lexical content; philological methods; Thomas Hardy; Vector Space Model (VSM)

PDF
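The VSM grouping idea can be illustrated with term-frequency vectors compared by cosine similarity; the three word lists below are invented stand-ins, not text from Hardy's corpus.

```python
import math
from collections import Counter

def tf_vector(text):
    # A document becomes a bag-of-words term-frequency vector
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine of the angle between two sparse term-frequency vectors
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

rural1 = tf_vector("heath farm shepherd harvest heath")
rural2 = tf_vector("farm harvest shepherd meadow")
urban = tf_vector("city street architect railway")

# Texts sharing vocabulary end up closer, so a clustering step would
# group rural1 with rural2 rather than with urban.
assert cosine(rural1, rural2) > cosine(rural1, urban)
```

A full pipeline would weight terms (e.g. tf-idf) and feed the pairwise similarities into a clustering algorithm, but the grouping signal is this similarity measure.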

Paper 12: Pynq-YOLO-Net: An Embedded Quantized Convolutional Neural Network for Face Mask Detection in COVID-19 Pandemic Era

Abstract: The recent coronavirus COVID-19 is a very infectious disease transmitted through droplets generated when an infected person coughs, sneezes, or exhales, so people must wear a face mask to reduce transmission of the virus. Governments around the world have imposed the use of face masks in public spaces and supermarkets. In this paper, we propose to build a face mask detection system based on a lightweight convolutional neural network (CNN) and the YOLO object detection framework, and to implement it on an embedded low-power device. The object detection framework was designed around a single convolutional neural network for object detection in real time. To make the YOLO framework suitable for embedded implementation, we build a lightweight CNN and quantize it using a single bit for weights and 2 bits for activations. The proposed network, called Pynq-YOLO-Net, was implemented on the Pynq Z1 platform, with the computation divided between software and hardware: the feature extraction part is executed on the hardware device and the output part on the software. This configuration allowed real-time processing with a very good detection accuracy of 97% when tested on a combination of collected datasets.

Author 1: Yahia Said

Keywords: Face mask detection; Coronavirus COVID-19; YOLO; Convolutional Neural Network (CNN); embedded device; Pynq Z1 board

PDF
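The quantization scheme the abstract describes, a single bit for weights and 2 bits for activations, can be sketched as below; the clamping range and rounding rule are illustrative assumptions, not the paper's exact scheme.

```python
def quantize_weight(w):
    # 1-bit (binary) weight: only the sign survives
    return 1.0 if w >= 0 else -1.0

def quantize_activation(a, max_val=3.0):
    # 2-bit activation: clamp to [0, max_val] and round onto the
    # four representable levels {0, 1, 2, 3}
    a = min(max(a, 0.0), max_val)
    return round(a)

weights = [0.7, -0.2, 0.05, -1.3]
acts = [0.4, 2.6, 5.0, -1.0]
assert [quantize_weight(w) for w in weights] == [1.0, -1.0, 1.0, -1.0]
assert [quantize_activation(a) for a in acts] == [0, 3, 3, 0]
```

With weights reduced to signs, multiply-accumulate collapses to add/subtract, which is why such networks fit on low-power FPGA fabric like the Pynq Z1.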

Paper 13: Best Path in Mountain Environment based on Parallel Hill Climbing Algorithm

Abstract: Heuristic search is a search process that uses domain knowledge, in the form of heuristic rules or procedures, to direct the progress of a search algorithm. Hill climbing is a heuristic search technique for solving certain mathematical optimization problems in the field of artificial intelligence. In this technique, starting with a suboptimal solution is compared to starting from the base of a hill, and improving the solution is compared to walking up the hill. Hill climbing can reach a locally optimal solution in polynomial time, but finding the global optimum is NP-complete, and the number of local maxima can lead to an exponential increase in computational time. To address these problems, the proposed hill climbing algorithm based on locally optimal solutions is implemented with the Message Passing Interface, a library of routines that can be used to create parallel programs, using commonly available operating system services to create parallel processes and exchange information among them. Experimental results show that parallel hill climbing outperforms the sequential method.

Author 1: Raja Masadeh
Author 2: Ahmad Sharieh
Author 3: Sanad Jamal
Author 4: Mais Haj Qasem
Author 5: Bayan Alsaaidah

Keywords: Hill climbing; heuristic search; parallel processing; Message Passing Interface (MPI)

PDF
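The multi-start idea behind the parallel algorithm can be sketched without MPI by running several independent climbs and keeping the best, one climb standing in for each MPI process; the landscape and step size below are toy choices.

```python
import random

def landscape(x):
    # Toy 1-D "mountain": single global maximum of 9 at x = 3
    return -(x - 3) ** 2 + 9

def hill_climb(x, step=0.5, iters=100):
    # Greedy ascent: move to the best neighbour until no neighbour improves
    for _ in range(iters):
        best = max((x, x + step, x - step), key=landscape)
        if best == x:
            break  # local maximum reached
        x = best
    return x

random.seed(0)
starts = [random.uniform(-10, 10) for _ in range(4)]  # one per "process"
results = [hill_climb(x) for x in starts]
best = max(results, key=landscape)   # the MPI reduce step in the paper
assert abs(landscape(best) - 9) < 1.0
```

In the paper's setting each climb would run as a separate MPI process and the final `max` would be a reduction across processes; the search logic is unchanged.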

Paper 14: High-Speed and Secure Elliptic Curve Cryptosystem for Multimedia Applications

Abstract: The last few years have witnessed a rapid increase in the use of multimedia applications, which has led to an explosion in the amount of data sent over communication networks. It has therefore become necessary to find an effective security solution that preserves the confidentiality of the enormous amounts of data sent through insecure network channels and, at the same time, meets the performance requirements of the applications that process the data. This research introduces a high-speed and secure elliptic curve cryptosystem (ECC) appropriate for multimedia security. The proposed ECC improves the performance of the data encryption process by accelerating the scalar multiplication operation, while strengthening the immunity of the cryptosystem against side-channel attacks. The speed of the encryption process is increased via parallel implementation of ECC computations at both the upper scalar multiplication level and the lower point operations level. To accomplish this, a modified version of the right-to-left binary algorithm as well as eight parallel multipliers (PM) were used to allow parallel implementation of point doubling and addition. Moreover, projective coordinate systems were used to remove the time-consuming inversion operation. The proposed 8-PM Montgomery ECC achieves a higher performance level than previous ECC implementations and can reduce the risk of side-channel attacks, although it consumes more resources. In addition, the current research provides a performance and resource-consumption analysis of Weierstrass and Montgomery elliptic curve representations over a prime field. The presented ECCs were implemented in VHDL and synthesized using the Xilinx tool with a target FPGA.

Author 1: Mohammad Alkhatib

Keywords: Elliptic curves cryptosystem; performance; binary method; projective coordinates; security applications

PDF
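Scalar multiplication via the Montgomery ladder, whose uniform add-and-double pattern per key bit is what helps against side-channel leakage, can be sketched on a toy curve; the parameters below are a small textbook curve over F_17, not a production curve, and affine coordinates are used for clarity where the paper uses projective ones.

```python
# Toy Weierstrass curve y^2 = x^3 + 2x + 2 over F_17
P_MOD, A = 17, 2

def ec_add(p1, p2):
    # Affine point addition/doubling; None is the point at infinity
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ladder(k, p):
    # Montgomery ladder: one add and one double per key bit regardless
    # of the bit's value, giving a uniform operation pattern.
    r0, r1 = None, p
    for bit in bin(k)[2:]:
        if bit == "1":
            r0, r1 = ec_add(r0, r1), ec_add(r1, r1)
        else:
            r0, r1 = ec_add(r0, r0), ec_add(r0, r1)
    return r0

def naive(k, p):
    # Reference: k repeated additions
    acc = None
    for _ in range(k):
        acc = ec_add(acc, p)
    return acc

G = (5, 1)  # a point on the toy curve
assert ladder(7, G) == naive(7, G)
```

The `pow(..., -1, P_MOD)` inversion in each step is exactly the cost the paper's projective coordinates remove; the ladder structure itself is unchanged by that optimization.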

Paper 15: A Survey on Privacy Vulnerabilities in Permissionless Blockchains

Abstract: Blockchain decentralization not only ensures transparency of transactions, eliminating the need to trust a third party, but also makes the transactions of the network publicly accessible to all participating peers. As a result, data anonymity and confidentiality are compromised, making several business enterprises and industrialists hesitant to adopt the technology. Although the research community has proposed various privacy-preserving solutions for blockchain, they still lack efficiency, resulting in the distrust of industries in opting for the technology. This study contributes to the existing body of knowledge on privacy in blockchains. Its fundamental goal is to delve into the privacy vulnerabilities of the blockchain network in a permissionless setting by identifying the non-trivial root factors causing privacy breaches in blockchain and presenting the limitations of existing privacy-preserving mechanisms. Studies with superficial comparisons of privacy-preserving techniques are available in the literature, but a detailed and in-depth analysis of their limitations and of the causes of privacy breaches in blockchain has not yet been done. Therefore, in this paper we first present a comprehensive analysis of the various privacy-breaching factors of blockchain networks. Next, we discuss existing cryptographic and non-cryptographic solutions in the literature. We found that these existing privacy-preserving mechanisms have their own sets of limitations and hence are inefficient at the current point in time; they need further consideration from the research community before they are widely adopted and benchmarked. Therefore, in the end, we identify some future directions that need to be addressed to model an efficient privacy-preserving mechanism for wider adoption of blockchain technology.

Author 1: Aisha Zahid Junejo
Author 2: Manzoor Ahmed Hashmani
Author 3: Abdullah Abdulrehman Alabdulatif

Keywords: Blockchains; privacy vulnerabilities; cryptographic primitives; anonymity; confidentiality

PDF

Paper 16: Efficient Method for Three Loop MMSE-SIC based Iterative MIMO Systems

Abstract: Iterative decoding is one of the promising methods to improve the performance of MIMO systems. In iterative processing, the channel decoder and MIMO detector share information in order to enhance overall system performance. However, iterative processing requires many computations and is therefore considered a computationally complex approach, due to the complex detection schemes involved. Several promising detection methods require further improvement before they can be candidates for the practical implementation of iterative processing. In this paper, we propose a method to improve the efficiency of the three-loop minimum mean squared error with soft interference cancellation (MMSE-SIC) method by reducing its complexity with a single inverse operation. Simulation results are given to provide a detailed analysis of the proposed MMSE-SIC based approach for iterative detection and decoding (IDD).

Author 1: Zuhaibuddin Bhutto
Author 2: Saleem Ahmed
Author 3: Syed Muhammad Shehram Shah
Author 4: Azhar Iqbal
Author 5: Faraz Mehmood
Author 6: Imdadullah Thaheem
Author 7: Ayaz Hussain

Keywords: MIMO; Iterative Detection and Decoding (IDD); sphere decoding; Minimum Mean Squared Errors with Soft Interference Cancellation (MMSE-SIC)

PDF

Paper 17: An Empirical Comparison of Fake News Detection using different Machine Learning Algorithms

Abstract: Relying on social networks to follow the news has its pros and cons. Social media websites indeed allow information to spread among people quickly. However, such websites can be leveraged to circulate low-quality news full of misinformation, i.e., "fake news." The wide distribution of fake news has a considerable negative impact on individuals and society as a whole. Thus, detecting fake news published on the various social media websites has lately become an evolving research area that is drawing great attention. Detecting widespread fake news across the numerous social media platforms presents new challenges that make the currently deployed algorithms ineffective or no longer applicable. Fake news is deliberately written in the first place to mislead readers into accepting false information as true, which makes it difficult to detect based on news content alone; consequently, auxiliary information, such as user social engagements on social media websites, needs to be taken into account to help make a better detection. Using such auxiliary information is challenging because users' social engagements with fake news produce noisy, unstructured, and incomplete big data. Because fake news detection on social media is fundamental, this research examines four well-known machine learning algorithms, namely random forest, Naïve Bayes, neural network, and decision trees, to validate the efficiency of their classification performance in detecting fake news. We conducted an experiment on a widely used public dataset, i.e., LIAR, and the results show that the Naïve Bayes classifier remarkably outperforms the other algorithms on this dataset.

Author 1: Abdulaziz Albahr
Author 2: Marwan Albahar

Keywords: Fake news; classification; machine learning; performance comparison

PDF
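A minimal multinomial Naïve Bayes text classifier of the kind evaluated in the paper can be sketched as follows; the six training headlines are invented toy examples, not drawn from the LIAR dataset.

```python
import math
from collections import Counter, defaultdict

train = [
    ("aliens endorse candidate miracle cure", "fake"),
    ("shocking miracle cure doctors hate", "fake"),
    ("celebrity secretly aliens", "fake"),
    ("senate passes budget bill", "real"),
    ("county reports quarterly budget", "real"),
    ("court rules on appeal", "real"),
]

# Count words per class and documents per class
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def predict(text):
    def log_post(label):
        total = sum(word_counts[label].values())
        lp = math.log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            # Laplace smoothing so unseen words don't zero the probability
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        return lp
    return max(class_counts, key=log_post)

assert predict("miracle cure for aliens") == "fake"
assert predict("budget bill passes") == "real"
```

Real fake-news pipelines add tf-idf weighting and the social-engagement features the abstract mentions, but the class-conditional word model is the core of the Naïve Bayes baseline.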

Paper 18: Cloud-Based Outsourcing Framework for Efficient IT Project Management Practices

Abstract: The optimum utilization of human resources is one of the crucial exercises in IT organizations. To provide a well-organized and cohesive working environment, organizations need to review their work culture in reference to newly evolved tools and techniques. To reduce the development cost of the IT projects and the optimum utilization of human resources, organizations need to review and redesign the project development processes. The significant challenges faced by IT organizations are the rapid switch-over (attrition) of IT professionals, physical migration or deployment, and redeployment of the human resources. This research paper is an effort towards the multilateral exploration of the techniques to adapt and improve the ICT enabled project management practices in an outsourced environment. This research is an effort with special reference to developing countries such as Ethiopia, where an acute shortage of high skilled IT human resources and their physical migration from one project location to another project location is a costly and challenging task. Ethiopia as a developing country and its IT industry is challenged by several issues like the capacity of ICT infrastructure and the skilled human resources. In such situations, IT projects are either challenged, impaired, or completed failed due to lack of IT human resources with desired skills and ultramodern up to date IT infrastructure. In this research paper, cloud computing technology is assumed as a key to the solution. For this, a systematic and careful investigation using mixed data analysis approach was used to adopt cloud-based outsourcing in IT project management practices i.e. design, development, and testing over outsourced systems by outsourced IT human resources. The major findings of this paper are to investigate and analyze how these cloud-based resources can be explored without physical movement or migration. 
To improve existing IT project management practices, salient stakeholders' views were collected and analyzed in order to design a cloud-based outsourcing IT project management framework for the Ethiopian IT industry. The framework was functionally tested on the cloud-based Bitrix24 platform.

Author 1: Mesfin Alemu
Author 2: Abel Adane
Author 3: Bhupesh Kumar Singh
Author 4: Durga Prasad Sharma

Keywords: Outsourcing; project management; cloud; IT industry; framework

PDF

Paper 19: A Clustering Hybrid Algorithm for Smart Datasets using Machine Learning

Abstract: In data science, machine learning is treated as a sub-field that primarily deals with designing algorithms able to learn from previous information and make future predictions accordingly. Traditionally, machine learning was performed on high-performance servers and machines. Applying these concepts to Big Data analytics algorithms has high potential and is still in its early stages. In machine learning, performance is an important measure for evaluating the overall functionality of an algorithm: a model learns from a training set, and its performance is then measured on unseen data called the test set. Data mining makes extensive use of learning algorithms for data analysis and for formulating future predictions based on archived data. The research presented here is a step towards making smart data sets out of training data sets by evaluating machine learning algorithms. It presents a novel hybrid algorithm that incorporates a similarity feature into the Random Forest machine learning algorithm to improve classification accuracy and efficiency.
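The similarity measure named in the paper's keywords can be illustrated with a minimal sketch; the feature sets and their contents here are hypothetical examples, not data from the paper:

```python
def jaccard_similarity(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)

# Two hypothetical feature sets sharing two of four distinct features
print(jaccard_similarity({"f1", "f2", "f3"}, {"f2", "f3", "f4"}))  # 0.5
```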

Author 1: Dar Masroof Amin
Author 2: Munishwar Rai

Keywords: Random Forests (RF); Jaccard Similarity (JS); triangle; smart data; root mean square error; mean absolute error; machine learning

PDF

Paper 20: Grid Search Tuning of Hyperparameters in Random Forest Classifier for Customer Feedback Sentiment Prediction

Abstract: Text classification is a common task in machine learning. Random Forest, a supervised classification algorithm, is widely used for this task. The Random Forest classifier has a group of hyperparameters that need to be tuned; when they are tuned properly, the classifier gives better results. This paper proposes a hybrid approach combining the Random Forest classifier with the Grid Search method for customer feedback data analysis. Grid Search is applied to tune the hyperparameters of the Random Forest classifier. The Random Forest classifier is first used on the customer feedback data, and its result is then compared with the result obtained after applying the Grid Search method. The proposed approach provided promising results: the experiments in this work show that the accuracy of the tuned model in predicting the sentiment of customer feedback data is greater than the accuracy obtained by the model without parameter tuning.
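The exhaustive tuning idea behind Grid Search can be sketched in a few lines; the parameter grid and the toy scoring function below are illustrative stand-ins for cross-validated Random Forest accuracy, not the paper's actual setup:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Evaluate every combination of hyperparameter values and
    return the best (params, score) pair."""
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical grid; the score function stands in for CV accuracy
grid = {"n_estimators": [50, 100, 200], "max_depth": [4, 8]}
score = lambda p: p["n_estimators"] / 200 - abs(p["max_depth"] - 8) / 10
best, s = grid_search(grid, score)
print(best)  # {'n_estimators': 200, 'max_depth': 8}
```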

Author 1: Siji George C G
Author 2: B. Sumathi

Keywords: Classification; grid search; hyperparameters; parameter tuning; random forest classifier; sentiment analysis

PDF

Paper 21: Aspect-Based Sentiment Analysis and Emotion Detection for Code-Mixed Review

Abstract: Reviews can affect customer decision making because by reading them, people learn whether a review is positive or negative. However, positive, negative, and neutral labels alone are not enough; considering emotion as well can strengthen the sentiment result. This study compares machine learning and deep learning for sentiment and emotion classification using multi-label classification. For machine learning, the problem transformations used are Binary Relevance (BR), Classifier Chain (CC), and Label Powerset (LP), with Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), and Extra Tree Classifier (ET) as algorithms. The features compared are n-gram language models (unigram, bigram, unigram-bigram). For deep learning, the algorithms applied are Gated Recurrent Unit (GRU) and Bidirectional Long Short-Term Memory (BiLSTM), using self-developed word embeddings. The comparison results show that RF dominates with F1-scores of 88.4% and 89.54% using the CC method for the food aspect and LP for price, respectively. For the service and ambience aspects, ET leads with 92.65% and 87.1% using the LP and CC methods, respectively. In the deep learning comparison, GRU and BiLSTM obtained a similar F1-score of 88.16% for the food aspect. On the price aspect, GRU leads with 83.01%. However, for service and ambience, BiLSTM achieved higher F1-scores of 89.03% and 84.78%.
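Of the three problem transformations compared, Label Powerset is the simplest to illustrate: each distinct combination of labels becomes one class of an ordinary multi-class problem. A minimal sketch with hypothetical labels:

```python
def label_powerset(label_rows):
    """Map each distinct label combination to a single multi-class id,
    turning a multi-label problem into a multi-class one."""
    mapping = {}
    encoded = []
    for labels in label_rows:
        key = tuple(sorted(labels))          # order-independent combination
        if key not in mapping:
            mapping[key] = len(mapping)      # assign next class id
        encoded.append(mapping[key])
    return encoded, mapping

# Hypothetical multi-label rows (sentiment plus an emotion)
rows = [{"positive"}, {"positive", "joy"}, {"negative"}, {"positive"}]
encoded, mapping = label_powerset(rows)
print(encoded)  # [0, 1, 2, 0]
```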

Author 1: Andi Suciati
Author 2: Indra Budi

Keywords: Sentiment analysis; emotion; multi-label classification; machine learning; deep learning

PDF

Paper 22: Pseudo Amino Acid Feature-based Protein Function Prediction using Support Vector Machine and K-Nearest Neighbors

Abstract: Bioinformatics faces a vital challenge in protein function prediction because protein data are available as primary structure, i.e., an amino acid sequence. Every protein sequence differs in length and residue order. Proteins are written in a 20-letter amino acid alphabet; however, the corresponding information in a membrane protein sequence is insufficient to capture the function and structure of the protein from primary sequence datasets alone, so correctly identifying protein structure and function from the amino acid sequence is a challenging task. The basic principle of PseAAC (Pseudo Amino Acid Composition) is to generate a discrete set of numbers for every protein sample. Sequence length varies with protein function: some protein sequences are shorter than 50 residues while others are large, so differently sized amino acid samples risk losing sequence-order information. The PseAAC feature generates a fixed-size descriptor vector to overcome this loss of sequence information and is used for further systematic evaluation. Machine learning computational tools can then accurately identify the structural and functional class of membrane proteins. In this study, SVM (Support Vector Machine) and KNN (K-nearest neighbors) based prediction classifiers are used to identify membrane proteins and their types.
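The fixed-size descriptor idea can be illustrated with the classic amino acid composition, the length-independent 20-dimensional vector that PseAAC extends with sequence-order terms; the toy sequence below is hypothetical:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def aa_composition(sequence):
    """Fixed-length (20-dim) descriptor: the frequency of each residue,
    so sequences of any length map to vectors of the same size."""
    seq = sequence.upper()
    n = len(seq)
    return [seq.count(a) / n for a in AMINO_ACIDS]

vec = aa_composition("ACDA")  # hypothetical 4-residue fragment
print(len(vec))   # 20
print(vec[0])     # 0.5  (two 'A' out of four residues)
```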

Author 1: Anjna Jayant Deen
Author 2: Manasi Gyanchandani

Keywords: Membrane protein types; classifiers; SVM (RBF); KNN; Random Forest; PseAAC

PDF

Paper 23: Real Time Implementation and Comparison of ESP8266 vs. MSP430F2618 QoS Characteristics for Embedded and IoT Applications

Abstract: This research article proposes a novel Smart Communication Platform (SCP) to improve Quality of Service (QoS) parameters in real time using the MSP430F2618. A static network of 10 nodes has been implemented with a narrow-band Internet of Things (IoT) architecture. SCP tracks environmental parameters such as temperature, humidity, pressure, proximity, and light. A prototype has been developed using open source Red Hat Linux 14.4 and programmed in Embedded C. The MSP430F2618 has been configured as master and slave nodes, and the output is observed in a serial monitor as well as at the gateway. The QoS parameters of the MSP430F2618 and ESP8266 are compared in terms of power: a power consumption improvement of around 1.01 mW was observed with the experimental setup. These empirical results are useful for wireless sensor network and IoT applications.

Author 1: Krishnaveni Kommuri
Author 2: Venkata Ratnam Kolluru

Keywords: Communication; ESP8266; gateway; Internet of Things (IoT); MSP430; power; sensor

PDF

Paper 24: A Critical Analysis of IS Governance Frameworks: A Metamodel of the Integrated Use of CobiT Framework

Abstract: Information Systems Governance (ISG) is an essential component of corporate governance; it refers to the implementation of the means of decision-making. A considerable number of studies on ISG have been published; nevertheless, there is a need to conceptualize and model this theoretical context. The aim of this paper is to study the frameworks that make up this domain, to model the concepts that structure it, and to provide a profound and clear understanding of the IS process, treating IS governance as a concept. The results demonstrate that adopting the COBIT repository could amplify an organization's efforts. This input enables the organization to capitalize on and build up knowledge in the field of IS governance, and to propose models for delivering an integrated, business-aligned IS.

Author 1: Lamia MOUDOUBAH
Author 2: Abir EL YAMAMI
Author 3: Mansouri KHALIFA
Author 4: Mohammed QBADOU

Keywords: Information Systems Governance (ISG); IS process; business process; COBIT

PDF

Paper 25: Prioritization of Software Functional Requirements from Developers Perspective

Abstract: Prioritizing software requirements is an important and difficult task during the requirements management phase of requirements engineering. To ensure timely delivery of a project, software developers have to prioritize functional requirements, and the importance of prioritization increases with the number of requirements. Software for large enterprises, such as Enterprise Resource Planning (ERP) systems, is typically developed by a team of software developers among whom a large set of requirements is distributed in parallel. However, requirements depend on each other, so the development of prerequisite requirements must be carefully timed and implemented first. Assigning importance and priority to some requirements over others is therefore necessary so that requirements become available to developers on time. This paper proposes an approach for prioritizing functional requirements on the basis of their importance during implementation. The research method applies the Analytical Hierarchical Process (AHP) technique on top of spanning trees: dependent requirements are linked in a hierarchical structure through spanning trees, and AHP is then applied. As a result of prioritization, requirements are distributed in such a way that dependencies among the requirements assigned to different developers are kept to a minimum, reducing the time requirements spend waiting on their prerequisites. With the effect of dependencies among parallel developers' requirements reduced, timely delivery of software projects can be assured.
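The AHP step can be sketched with the standard column-normalisation approximation of the priority vector; the 3×3 pairwise comparison matrix below is a hypothetical example, not one of the paper's requirement sets:

```python
def ahp_priorities(matrix):
    """Approximate the AHP priority vector: normalise each column of the
    pairwise comparison matrix, then average across each row."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    norm = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(norm[i]) / n for i in range(n)]

# Hypothetical comparisons: R1 is twice as important as R2, four times R3
m = [[1,   2,   4],
     [1/2, 1,   2],
     [1/4, 1/2, 1]]
w = ahp_priorities(m)
print([round(x, 3) for x in w])  # [0.571, 0.286, 0.143]
```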

Author 1: Muhammad Yaseen
Author 2: Aida Mustapha
Author 3: Noraini Ibrahim

Keywords: Requirements prioritization; Functional Requirements (FRs); directed graph; spanning tree (ST); Analytical Hierarchical Process (AHP)

PDF

Paper 26: Understanding user Emotions Through Interaction with Persuasive Technology

Abstract: Emotions play a vital role in persuasion; thus, persuasive applications should affect and appeal to users' emotions. However, studies in persuasive technology have yet to discover what triggers users' emotions. The objectives of this study are therefore to examine user emotions and to identify the factors that affect them when using persuasive applications. The study was conducted in three stages (pre-interaction, during-interaction, and post-interaction) and employed a mixed-method approach using the Geneva Emotion Wheel (GEW) and open-ended survey questions analyzed through thematic analysis. The results show that most of the emotions users felt belong to high-control, positive-valence emotions: interest, joy, and pleasure. User, system, and interaction are the three factors that triggered the emotions, encompassing elements such as Individual Awareness, Personality, Interface Design, Persuasive Function, Content Presentation, System Quality, Usability, and Tasks. The findings contribute to the body of knowledge of persuasive technology: the discovered factors and their elements are antecedents that should be considered in constructing an emotion-based trust design framework that brings emotional impact to users and ensures successful persuasion.

Author 1: Wan Nooraishya Wan Ahmad
Author 2: Nazlena Mohamad Ali
Author 3: Ahmad Rizal Ahmad Rodzuan

Keywords: Emotion; emotional states; interaction; persuasive technology; captology

PDF

Paper 27: Decision-Making Analysis using Arduino-Based Electroencephalography (EEG): An Exploratory Study for Marketing Strategy

Abstract: Business technology has brought conventional marketing methods to the next level. These emerging integrated technologies have contributed to the growth and understanding of the consumer decision-making process. Several studies have attempted to evaluate media content, especially video advertising or TV commercials, using various neuroimaging techniques such as electroencephalography (EEG). Currently, the use of neuroscience in Malaysia's marketing research is very limited because of its limited adoption as an emerging technology in this field. This research takes a neuroscientific approach, particularly through the use of an EEG device, examining consumers' responses in terms of brain wave signals and cognition. A theoretical framework on the factors affecting visual stimulus (movement, color, shape, and light) during the decision-making process of watching video advertising was customized using two conceptual models of sensory stimulation. Ten respondents participated in the experiment, which investigated spectral changes in alpha brain wave signals detected over the occipital lobe. A 2-channel Arduino-based EEG device from Backyard Brains and the Spike Recorder software were used to analyze the EEG signal through the Fast Fourier Transform (FFT) method. Results obtained from the investigated population showed statistically significant brain wave activity during observation of the video advertising, demonstrating the interconnection with short-term memory through visual stimulus. The neuroscience tool helped explore consumer brain responses towards marketing stimuli with regard to consumers' decision-making processes. This study presents a useful tool for neuromarketing and concludes with a discussion and recommendations on the way forward.
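The FFT analysis step can be sketched with a naive DFT locating the dominant alpha-band frequency in a synthetic signal; the 128 Hz sampling rate and the 10 Hz tone are illustrative assumptions, not the study's recordings:

```python
import cmath
import math

def dft_power(signal, fs):
    """Naive DFT: return (frequency, power) pairs up to the Nyquist bin."""
    n = len(signal)
    out = []
    for k in range(n // 2):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        out.append((k * fs / n, abs(s) ** 2 / n))
    return out

# Synthetic 10 Hz "alpha" oscillation sampled at 128 Hz for one second
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
peak_freq = max(dft_power(sig, fs), key=lambda fp: fp[1])[0]
print(peak_freq)  # 10.0
```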

Author 1: Ahmad Faiz Yazid
Author 2: Siti Munirah Mohd
Author 3: Abdul Razzak Khan Rustum Ali Khan
Author 4: Shafinah Kamarudin
Author 5: Nurhidaya Mohamad Jan

Keywords: Arduino-based electroencephalography (EEG); neuromarketing; short-term memory; TV commercials; visual cognition

PDF

Paper 28: Finger Movement Discrimination of EMG Signals Towards Improved Prosthetic Control using TFD

Abstract: A prosthesis is an artificial substitute or replacement for a missing part of the body. A prosthesis can restore the function of the missing body part and help disabled people carry out their activities easily. A myoelectric control system is a fundamental part of modern prostheses. Electromyogram (EMG) signals taken from a person's muscles are used in this system to control the prosthesis movements. The problem is that finger control has not received the same attention in myoelectric control systems, because individual and combined finger movements are more dexterous and harder to discriminate within a signal. Thus, this paper proposes a time-frequency distribution (TFD) method to address this problem. The EMG features of individual and combined finger movements for ten subjects and ten different movements are extracted using a TFD, i.e., the spectrogram. Three machine learning algorithms, Support Vector Machine (SVM), k-Nearest Neighbor (KNN), and Ensemble Classifier, are then used to classify the individual and combined finger movements based on the EMG features extracted from the spectrogram. The performance of the proposed method is verified using classification accuracy. Based on the results, the overall classification accuracies are 90% (SVM), 100% (KNN), and 100% (Ensemble Classifier), respectively. The findings of the study could serve as an insight to improve conventional prosthetic control strategies.
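Of the three classifiers compared, KNN is the simplest to sketch from scratch; the 2-D feature points and class names below are hypothetical stand-ins for real spectrogram-derived EMG features:

```python
import math

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], x))[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical 2-D "EMG features" for two finger-movement classes
train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = ["index", "index", "thumb", "thumb"]
print(knn_predict(train, labels, (0.15, 0.15), k=3))  # index
```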

Author 1: E. F. Shair
Author 2: N.A. Jamaluddin
Author 3: A.R. Abdullah

Keywords: Electromyography; feature extraction; time-frequency distribution; spectrogram; classification; machine learning

PDF

Paper 29: Autism Spectrum Disorder Diagnosis using Optimal Machine Learning Methods

Abstract: Autism spectrum disorder (ASD) is a disorder of communication and behavior that affects children and adults. It can be diagnosed at any stage of life, most importantly in the first two years, regardless of ethnicity, race, or economic group. ASD varies according to the severity and type of symptoms people experience. It is a lifelong disorder, but treatment and services can improve the symptoms. The literature focuses on the main methods physicians use to diagnose ASD. Many research studies and medical reports have been reviewed; however, only a few give good medical results for strongly differentiating ASD patients from healthy people. This paper focuses on using machine learning algorithms to predict whether an individual with specific symptoms has ASD, and on finding the best machine learning model for diagnosis. Further, the paper aims to make autism diagnosis faster so that the required treatment can be delivered at an early stage of child development.

Author 1: Maitha Rashid Alteneiji
Author 2: Layla Mohammed Alqaydi
Author 3: Muhammad Usman Tariq

Keywords: Autism diagnosis; autism disorder; autism detection; machine learning; ASD

PDF

Paper 30: Educational Tool for Generation and Analysis of Multidimensional Modeling on Data Warehouse

Abstract: The curricular inclusion of topics, study plans, and teaching programs related to Data Science has been trending in higher education over the last years. However, the prior knowledge students need to assimilate these lessons adequately is more specialised than what they obtain during secondary education. On the one hand, interaction with complex techniques and materials is needed; on the other, tools to practice on demand are required in current learning. This is therefore an excellent opportunity to create data analysis tools for educational purposes that can serve as a starting point for a broad area of application. This paper presents a pedagogical support tool aimed at facilitating the student's approach to the basic knowledge of data mining through practice with online analytical processing (OLAP). The prototype allows the visualisation of the multidimensional cubes generated from all possible combinations of the dimensions of the data set, as well as their storage in databases, recovery operations for views, and the implementation of an algorithm that selects the optimal view set for materialising the records returned by a database search, computing the materialisation costs and the total records recovered. The prototype also derives and presents recurrent patterns and association rules, considering factors such as support and confidence. All of these steps are performed explicitly to help students comprehend how data cubes are generated in the data mining discipline.
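Generating all possible combinations of the dimensions yields the cuboid lattice of a data cube, which can be sketched with itertools; the three dimension names below are hypothetical:

```python
from itertools import combinations

def cuboid_lattice(dimensions):
    """All possible group-by views (cuboids) of a data cube: every
    subset of the dimension set, from the apex () to the base cuboid."""
    views = []
    for r in range(len(dimensions) + 1):
        views.extend(combinations(dimensions, r))
    return views

views = cuboid_lattice(["time", "product", "region"])
print(len(views))  # 8, i.e. 2^3 cuboids for 3 dimensions
```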

Author 1: Elena Fabiola Ruiz Ledesma
Author 2: Elizabeth Moreno Galván
Author 3: Enrique Alfonso Carmona García
Author 4: Laura Ivoone Garay Jiménez

Keywords: Educational data mining; data cube; view materialization; educational software

PDF

Paper 31: The Effects of Speed and Altitude on Wireless Air Pollution Measurements using Hexacopter Drone

Abstract: Air pollution has a severe impact on human beings and is one of the top risks facing human health. Data collection near pollution sources is difficult due to obstacles in industrial and rural areas, where sensing usually fails to give enough information about air quality. Unmanned Aerial Vehicles (UAVs) equipped with different sensors offer new approaches and opportunities for air pollution and atmospheric studies. Nevertheless, new challenges emerge when using UAVs, in particular the effect of the wind generated by UAV propeller rotation on the ability to sense and measure gas concentrations: gas measurements are affected by propeller rotation and wind resistance. The effect of changing UAV speed and altitude on gas measurement therefore needs to be studied both vertically and horizontally. The aim of this paper is to propose a new mobile wireless air pollution system composed of a UAV equipped with low-cost sensors and LoRa transmission. The proposed system is evaluated by studying the effect of changing altitude and speed on the measured concentrations of CO, LPG, H2, and smoke when flying in horizontal and vertical directions. The results showed that the system is capable of measuring CO, LPG, H2, and smoke in vertical mode in both hovering and deploying scenarios. In horizontal mode, the system can detect and measure gas concentrations at speeds of up to 6 m/s, while at higher speeds of 8 and 10 m/s there is an impact on its performance and accuracy in detecting the targeted gases. The results also showed that the LoRa shield and the AT9S radio transmitter can successfully transmit up to 800 m horizontally and 400 feet vertically.

Author 1: Rami Noori
Author 2: Dahlila Putri Dahnil

Keywords: Unmanned Aerial Vehicles (UAVs); low-cost sensors; air pollution; LoRa; air quality; radio transmitter; atmosphere

PDF

Paper 32: An Improved Image Retrieval by Using Texture Color Descriptor with Novel Local Textural Patterns

Abstract: This paper proposes new local color-texture descriptors known as the Median Binary Pattern for Color images (MBPC) and the Median Binary Pattern of the Hue (MBPH). The suggested methods extract discriminative features for color image retrieval. Within a local window, the suggested descriptor uses a thresholding plane to distinguish two classes of color pixels. The median binary patterns of the hue features are derived in the HSI color space, called MBPH, to maximise the discriminatory power of the proposed MBPC operator. MBPC and MBPH are fused into MBPC+MBPH, resulting in an efficient image retrieval method when combined with the color histogram (CH). The two suggested descriptors, MBPC and MBPH, are further combined with a fuzzified color histogram descriptor to form MBPC+MBPH+FCH and improve the performance of the suggested method. The proposed methods are applied to the Wang, Corel-5K, and Corel-10K datasets. Experimental results show that the proposed methods outperform existing methods in retrieval accuracy, with recognition accuracies of 60.1 and 63.9 on the Wang dataset, 41.88 and 42.47 on Corel-5K, and 32.89 and 33.89 on Corel-10K. The proposed hybrid method deals well with different textural patterns and is able to grasp minute color details.
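A median-thresholded binary pattern of the kind these descriptors build on can be sketched for a single grey-level 3×3 window; the neighbour ordering is an assumption here, and the colour-plane handling of the actual MBPC/MBPH operators is omitted:

```python
from statistics import median

def median_binary_pattern(window):
    """3x3 window -> 8-bit code: each neighbour is compared with the
    median of the whole window (1 if >= median, else 0)."""
    flat = [v for row in window for v in row]
    m = median(flat)
    # neighbours read clockwise from top-left, skipping the centre pixel
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = ["1" if window[i][j] >= m else "0" for i, j in order]
    return int("".join(bits), 2)

w = [[10, 20, 30],
     [40, 50, 60],
     [70, 80, 90]]
print(median_binary_pattern(w))  # 30, i.e. binary 00011110
```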

Author 1: Punit Kumar Johari
Author 2: Rajendra Kumar Gupta

Keywords: Image retrieval; Binary pattern; feature extraction; Median Binary Pattern for Color (MBPC) image; Median Binary Pattern for Hue (MBPH)

PDF

Paper 33: A Review of Recommender Systems for Choosing Elective Courses

Abstract: In higher education, students face challenges when choosing elective courses in their study programmes. Most higher education institutions employ advisors to assist with this task. Recommender systems have their origins in commerce and are used in other sectors such as education. Recommender systems offer an alternative to the use of human advisors. This paper aims to examine the scope of recommender systems that assist students in choosing elective courses. To achieve this, a systematic literature review (SLR) on recommender systems corpus for choosing elective courses published from 2010–2019 was conducted. Of the 16 981 research articles initially identified, only 24 addressed recommender systems for choosing elective courses and were included in the final analysis. These articles show that several recommender systems approaches and data mining algorithms are used to achieve the task of recommending elective courses. This study identified gaps in current research on the use of recommender systems for choosing elective courses. Further work in several unexplored areas could be examined to enhance the effectiveness of recommender systems for elective courses. This study contributes to the body of literature on recommender systems, in particular those applied for assisting students in choosing elective courses within higher education.

Author 1: Mfowabo Maphosa
Author 2: Wesley Doorsamy
Author 3: Babu Paul

Keywords: Recommender systems; elective courses; data mining algorithms; systematic literature review; higher education

PDF

Paper 34: Susceptible, Infectious and Recovered (SIR Model) Predictive Model to Understand the Key Factors of COVID-19 Transmission

Abstract: On 31 December 2019, WHO was alerted to several cases of pneumonia in Wuhan City, Hubei Province, China. The virus did not match any other known virus. This raised concern because, when a virus is new, its general behavior and how it affects people are unknown. The first few cases reportedly had some link to a large seafood and animal market, suggesting animal-to-person spread. However, a growing number of patients reportedly had no exposure to animal markets, indicating that person-to-person spread was occurring. At the time, it was unclear how easily or sustainably the virus was spreading between people. At any given time during an epidemic, one must first know the number of people who are currently infected, and second the number who have been infected and have recovered, because those people now have immunity to the disease. The well-established SIR modeling methodology is used to develop a predictive model in order to understand the key factors that impact COVID-19 transmission.
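The SIR methodology can be sketched as a discrete-time simulation over population fractions; the contact rate beta, recovery rate gamma, and initial conditions below are illustrative assumptions, not fitted COVID-19 values:

```python
def sir_simulate(s, i, r, beta, gamma, days):
    """Discrete-time SIR model over population fractions, updated once
    per day: beta governs new infections, gamma governs recoveries."""
    history = [(s, i, r)]
    for _ in range(days):
        new_inf = beta * s * i   # susceptibles becoming infectious
        new_rec = gamma * i      # infectious becoming recovered/immune
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

# Illustrative run: 1% initially infected, R0 = beta/gamma = 3
hist = sir_simulate(s=0.99, i=0.01, r=0.0, beta=0.3, gamma=0.1, days=160)
peak_i = max(i for _, i, _ in hist)
print(round(peak_i, 3))  # peak infectious fraction
```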

Author 1: DeepaRani Gopagoni
Author 2: P V Lakshmi

Keywords: COVID-19; SIR modeling; WHO; disease spread

PDF

Paper 35: Automated Estrus Detection for Dairy Cattle through Neural Networks and Bounding Box Corner Analysis

Abstract: Thorough and precise estrus detection plays a crucial role in the fertility of dairy cows. Farmers commonly use direct visual monitoring to recognize estrus signs, which demands time and effort and causes misinterpretations. The primary sign of estrus is standing heat, where a dairy cow stands to be mounted by other cows for a few seconds. Over the years, researchers have developed various detection methods, yet most involve contact-based and invasive approaches that affect the estrus behaviors of cows. The proponents therefore developed a non-invasive, non-contact estrus detection system that uses image processing to detect standing heat behaviors. Using the TensorFlow Object Detection API, the proponents trained two custom neural network models capable of visualizing bounding boxes around predicted cow objects in image frames. The proponents also developed an object-overlapping algorithm that uses the bounding box corners to detect estrus activities. Based on the conducted tests, an estrus event occurs when the centroids of the detected objects are less than 360 px apart and form two interior angles with another fixed point of less than 25° and greater than 65° about the Y and X axes, respectively. If the conditions are met, the program saves the image frame and declares an estrus activity; otherwise, it restarts its estrus detection and counting. The system observed 17 cows, a carabao, and a bull through cameras installed atop a cowshed, and detects estrus events with an efficiency of 50%.
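The distance part of the corner-based test can be sketched from bounding-box geometry; the boxes, the helper names, and the omission of the two interior-angle checks are all simplifications of the paper's full rule:

```python
import math

def centroid(box):
    """Centre point of a bounding box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def mounting_candidate(box_a, box_b, max_dist=360):
    """Flag a possible standing-heat event when two detected cows'
    centroids are closer than the pixel threshold from the paper."""
    (ax, ay), (bx, by) = centroid(box_a), centroid(box_b)
    return math.hypot(ax - bx, ay - by) < max_dist

a = (100, 100, 300, 250)   # hypothetical cow 1
b = (250, 120, 450, 270)   # hypothetical cow 2, partly overlapping cow 1
print(mounting_candidate(a, b))  # True
```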

Author 1: Nilo M. Arago
Author 2: Chris I. Alvarez
Author 3: Angelita G. Mabale
Author 4: Charl G. Legista
Author 5: Nicole E. Repiso
Author 6: Rodney Rafael A. Robles
Author 7: Timothy M. Amado
Author 8: Romeo Jr. L. Jorda
Author 9: August C. Thio-ac
Author 10: Jessica S. Velasco
Author 11: Lean Karlo S. Tolentino

Keywords: Dairy cows; estrus detection; image processing; TensorFlow Object Detection API; custom neural network; object overlapping

PDF

Paper 36: An Efficient Cluster-Based Approach to Thwart Wormhole Attack in Adhoc Networks

Abstract: Mobile ad-hoc networks are a fast-evolving domain with promising advancements, attracting researchers with scope for enhancements and evolution. These networks lack a definite structure and are autonomous and dynamic in nature. The strength of an ad-hoc network lies in its routing protocols, making protocol choice crucial for transmission. Among the several types of routing protocols available, our focus is on the LGF (Location-based Geo-casting and Forwarding) protocol, which falls into the position-based category. LGF offers low bandwidth consumption and low routing overhead, but at the cost of vulnerability to attacks that compromise the security of data. In our approach, we present a technique to overcome profound attacks such as the wormhole and blackhole attacks by combining LGF with K++ Means clustering, aiming at route optimization and stronger security services. The proposed mechanism is evaluated against QoS factors such as end-to-end delay, delivery ratio, and load balancing of LGF using the NS3.2 simulator, which showed drastic performance acceleration in the aforementioned model.

Author 1: Kollu Spurthi
Author 2: T N Shankar

Keywords: MANET; LGF; K++ Means clustering; network security; wormhole attack; blackhole attack; secured algorithm

PDF

Paper 37: A Proposed User Requirements Document for Children’s Learning Application

Abstract: User requirements are the highest level of requirements. A flawed user requirements document can cause defects in the software being built: aspects of the application that are not presented in the user requirements document lead to defects. In learning applications for children, there are aspects of pedagogy that need to be well documented. These aspects are not covered in a general user requirements document, so they are often not well presented. The learning style and thinking skills level are crucial to present well in the user requirements document, because children's personas cannot be compared across every range of developmental age, and that factor will undoubtedly affect the specifications of the software to be built. Users' differing viewpoints about requirements can also lead developers to determine requirements wrongly; applying requirements prioritization in the user requirements document can help resolve this problem. Document quality was also measured using parameters for assessing the quality of a user requirements document, and the results show that the proposed document is reliable for use.

Author 1: Mira Kania Sabariah
Author 2: Paulus Insap Santosa
Author 3: Ridi Ferdiana

Keywords: User requirements; user requirements document; learning application; aspect of pedagogy children

PDF

Paper 38: Design and Implementation of Real Time Data Acquisition System using Reconfigurable SoC

Abstract: System-on-chip (SoC) technology is widely used for high-speed, efficient embedded systems in various computing applications. Until now, application-specific IC (ASIC)-based systems on chip have generally been used for this purpose, at the cost of considerable research and development time and money. However, this is not a comfortable choice for small and medium-sized industries, because with ASIC or standard-IC design implementation it is very difficult to achieve quick time to market, upgradability, and flexibility. A better solution to this problem is to design with reconfigurable SoCs: FPGAs can replace ASICs to provide a more flexible, reconfigurable platform. In the embedded world, data access and control are two important tasks in many applications. There are several ways of acquiring data, and corresponding data acquisition systems are available in the market; for defence, avionics, aerospace, and automotive applications, high-performance, accurate data acquisition systems are desirable. The proposed work therefore discusses how a high-performance, reconfigurable SoC-based data acquisition system is designed and implemented. It is a semi-custom design implemented with the Zynq processing system IP, a reconfigurable 7-series FPGA as programmable logic, and hygro, ambient light sensor, and OLEDrgb peripheral module IPs. All these sensor and display peripheral modules are interfaced with the processing unit via AXI interconnect. The proposed system is a reconfigurable SoC for high-speed data acquisition with an operating frequency of 100 MHz, and is well suited to high-speed, economical real-time embedded systems.

Author 1: Dharmavaram Asha Devi
Author 2: Tirumala Satya Savithri
Author 3: Sai Sugun.L

Keywords: Application Specific Integrated Circuit (ASIC); Advanced eXtensible Interface (AXI); data acquisition system; Field Programmable Gate Array (FPGA); Peripheral Module (PMOD); System on Chip (SoC)

PDF

Paper 39: GAIT based Behavioral Authentication using Hybrid Swarm based Feed Forward Neural Network

Abstract: Authenticating the appropriate users for access to devices is one of the prime themes in security models. Illegal access to devices such as smartphones and laptops carries unwanted consequences such as data theft, privacy breaches, and more. Straightforward approaches such as pattern-, password-, and PIN-based security are quite expensive in terms of memory: the user has to remember the passwords and, whenever a security issue arises, change the password and remember the new one. To avoid these issues, this paper proposes an effective gait-based model that hybridizes an artificial neural network, namely a feedforward neural network (FNN), with a swarm-based algorithm, namely the Krill Herd (KH) optimization algorithm. The task of KH is to optimize the weights of the FNN, which leads to convergence to an optimal solution at the end of the run. The proposed model is examined with six different performance measures and compared with four existing classification models. The performance analysis shows the significance of the proposed model compared with the existing algorithms.

Author 1: Gogineni Krishna Chaitanya
Author 2: Krovi Raja Sekhar

Keywords: Gait behavioral pattern recognition; feedforward neural network; krill herd algorithm

PDF

Paper 40: Electricity Cost Prediction using Autoregressive Integrated Moving Average (ARIMA) in Korea

Abstract: Electricity cost plays a vital role owing to the immense increase in power utilization, rising energy rates, and concerns about environmental variations and impact, all of which ultimately affect electricity cost. We argue that electrical power utilization data becomes more beneficial when presented to customers together with predictions of power consumption, energy prices, and expected electricity cost. This will help residents alter their power utilization behavior, and will thus have a positive influence on electricity production companies, the distribution network, and the electricity grid. In this study, we present residential electricity cost prediction for Korean apartments using the Autoregressive Integrated Moving Average (ARIMA) technique. We investigated energy utilization data on a daily, weekly, and monthly basis, selecting the data accumulated at each granularity, and then predicted the maximum and average power consumption cost for each of the predicted daily, weekly, and monthly power consumptions. Korean power consumption and general price data (the General Electricity Price in Korea) are used to analyze the efficiency of the prediction algorithm. The accuracy of the power cost prediction using the ARIMA model is verified using the absolute error.
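
As a minimal illustration of the ARIMA idea used here (difference the series to remove trend, then fit an autoregressive term), the sketch below fits an ARIMA(1,1,0)-style model by least squares on the differenced series and integrates the forecast back. The cost series is invented, and this is a sketch of the model class, not the authors' full pipeline.

```python
def arima_110_forecast(series, steps=1):
    """Fit x'_t = phi * x'_{t-1} on the differenced series (an ARIMA(1,1,0)
    model without intercept) and forecast by re-integrating the AR predictions."""
    diff = [b - a for a, b in zip(series, series[1:])]
    num = sum(diff[t - 1] * diff[t] for t in range(1, len(diff)))
    den = sum(d * d for d in diff[:-1])
    phi = num / den if den else 0.0          # least-squares AR(1) coefficient
    last, last_diff, out = series[-1], diff[-1], []
    for _ in range(steps):
        last_diff = phi * last_diff          # predict the next difference
        last = last + last_diff              # integrate back to the level
        out.append(last)
    return phi, out

# toy daily electricity-cost series (units arbitrary, values invented)
costs = [100, 104, 106, 110, 113, 116, 120]
phi, forecast = arima_110_forecast(costs, steps=3)
```

The paper's verification metric, the absolute error, would then be `abs(forecast[0] - actual)` once the next day's actual cost is observed.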

Author 1: Safdar Ali
Author 2: Do-Hyeun Kim

Keywords: Electricity price; electricity cost; Autoregressive Integrated Moving Average (ARIMA); prediction; energy consumption

PDF

Paper 41: Deep Learning Bidirectional LSTM based Detection of Prolongation and Repetition in Stuttered Speech using Weighted MFCC

Abstract: Stuttering is a neuro-developmental disorder in which the normal flow of speech is disrupted. Traditionally, speech-language pathologists assess the extent of stuttering by counting speech disfluencies manually. Such stuttering assessments are arbitrary, inconsistent, lengthy, and error-prone. The present study focuses on objective assessment of speech disfluencies such as prolongation and syllable, word, and phrase repetition. The proposed method is based on the Weighted Mel-Frequency Cepstral Coefficient (WMFCC) feature extraction algorithm and a deep-learning bidirectional long short-term memory (Bi-LSTM) neural network for classification of stuttered events. The work utilizes the UCLASS stuttering dataset for analysis. The speech samples of the database are first pre-processed, manually segmented, and labeled by type of disfluency. The labeled speech samples are parameterized into WMFCC feature vectors, which are then input to the Bi-LSTM network for training and testing of the model. The effect of different hyper-parameters on classification results is examined. The test results show that the proposed method reaches a best accuracy of 96.67%, outperforming the LSTM model. Promising recognition accuracies of 97.33%, 98.67%, 97.5%, 97.19%, and 97.67% were achieved for the detection of fluent speech, prolongation, and syllable, word, and phrase repetition, respectively.

Author 1: Sakshi Gupta
Author 2: Ravi S. Shukla
Author 3: Rajesh K. Shukla
Author 4: Rajesh Verma

Keywords: Speech; stuttering; deep learning; WMFCC; Bi-LSTM

PDF

Paper 42: Using of Redundant Signed-Digit Numeral System for Accelerating and Improving the Accuracy of Computer Floating-Point Calculations

Abstract: The article proposes a method for software implementation of floating-point computations on a graphics processing unit (GPU) with increased accuracy, which eliminates the sharp increase in rounding errors that occurs when performing addition, subtraction, or multiplication on numbers that differ significantly in magnitude. The method is based on representing floating-point numbers as decimal fractions uniformly distributed within a range, and on the use of a redundant signed-digit numeral system to speed up calculations. Results of computational experiments evaluating the effectiveness of the proposed approach are presented. A computational speed-up is obtained for the problems of summing an array of numbers and computing the dot product of vectors. The proposed approach is also applicable to the discrete Fourier transform.
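
The speed-up from redundant signed-digit arithmetic comes from carry-free addition: each transfer digit depends only on one digit position, so all positions can be processed in parallel. The pure-Python sketch below illustrates that general principle for base-10 digits in the range -9..9 (little-endian digit lists); it is an illustration of the textbook algorithm, not the authors' GPU implementation.

```python
def sd_add(x, y):
    """Carry-free addition of base-10 signed-digit numbers (digits -9..9,
    little-endian). Each transfer digit is computed from a single position,
    so no carry chain propagates across the number."""
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    interim, transfer = [], [0]
    for xi, yi in zip(x, y):
        w = xi + yi                  # position sum, in -18..18
        if w >= 6:
            t, s = 1, w - 10         # pass +1 to the next position
        elif w <= -6:
            t, s = -1, w + 10        # pass -1 to the next position
        else:
            t, s = 0, w
        interim.append(s)
        transfer.append(t)
    # final digit = interim sum + incoming transfer; never overflows -9..9
    return [s + t for s, t in zip(interim + [0], transfer)]

def sd_value(d):
    return sum(di * 10 ** i for i, di in enumerate(d))

a, b = [7, 3, 9], [5, 8, 1]          # 937 and 185, digits little-endian
s = sd_add(a, b)
```

On a GPU, the loop body would become one thread per digit position, which is exactly the element-wise structure the paper exploits.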

Author 1: Otsokov Sh. A
Author 2: Magomedov Sh.G

Keywords: High-precision computation; redundant signed-digit numeral system; signed-digit floating-point format; redundant signed-digit arithmetic; decimal fractions

PDF

Paper 43: Self-Configurable Current-Mirror Technique for Parallel RGB Light-Emitting Diodes (LEDs) Strings

Abstract: Traditional current-mirror circuits require a buck converter to handle one fixed current load. This paper presents improved self-adjustable current-mirror methods that can drive different LED loads under different conditions with the help of a single buck converter. The operating principle revolves around a dynamic, self-configurable combination of a transistor- and op-amp-based current-balancing circuit, together with op-amp-based dimming circuits. The proposed circuit guarantees uniformity of the circuit outputs. This current-balancing scheme eliminates the need for a separate power supply to control the load currents through different kinds of LEDs, i.e., RGB LEDs. The proposed circuits are identical and modular, and can be scaled to any number of parallel current sources. The methodology has been successfully tested in the Simulink environment to verify the current balancing of parallel LED strings.

Author 1: Shaheer Shaida Durrani
Author 2: Asif Nawaz
Author 3: Muhamamd Shahzad
Author 4: Rehan Ali Khan
Author 5: Abu Zaharin Ahmad
Author 6: Ahmed Ali Shah
Author 7: Sheeraz Ahmed
Author 8: Zeeshan Najam

Keywords: Current-balancing; LED driver; current mirror

PDF

Paper 44: Implementation of a Clinical Decision Support Systems-Based Neonatal Monitoring System Framework

Abstract: A clinical decision support-based information system to monitor the vital signs of prematurely born babies placed in infant incubators of a Neonatal Intensive Care Unit (NICU) is developed in this work. The developed monitoring system (DMS) consists of a supervisory microcomputer and sensitive sensors for measuring the vital signs. A conventional monitoring system (CMS) was used simultaneously with the DMS to collect the vital-sign readings of thirty (30) neonates over a period of one week. A Fuzzy Inference System CDSS (FIS-CDSS) was developed for the three inputs Temperature, Heart rate, and Respiration rate (THR), based on their membership function values (low, medium, high) and twenty-seven (27) IF-THEN fuzzy rules, using the Fuzzy Logic Toolbox in Matrix Laboratory 8.1 (R2014a). The FIS-CDSS maps the THR inputs to an output status (Normal, Abnormal, or Critical). The performance of the FIS-CDSS was evaluated using a confusion matrix. The results showed sensitivity ranges of 90-100, 80-89, 70-79, 60-69, and 50-59% for five, eleven, seven, six, and one neonate(s), respectively, with an average sensitivity of 77.92%. The specificity of the system ranged from 5.00 to 66.67%, with an average specificity of 35.10%. The accuracy of the FIS-CDSS ranged from 70 to 100, 60 to 69, 50 to 59, and 0 to 49% for nine, nine, eight, and four neonates, respectively, with an average accuracy of 60.94%. The developed system provides adequate and accurate information for on-the-spot assessment of neonates, supporting decision making that helps reduce the mortality rate and shorten the recovery period of neonates.
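
The shape of such a fuzzy inference step can be sketched in a few lines: triangular membership functions over the THR inputs, rules fired with min(), and the strongest rule deciding the status. The breakpoints below and the two rules shown (plus an "Abnormal" fallback) are invented for illustration; the paper's actual 27-rule base and MATLAB membership functions are not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(temp, hr, rr):
    """Tiny Mamdani-style inference: fire each rule with min(), then pick the
    strongest label. All breakpoints and rules are illustrative assumptions."""
    t_med = tri(temp, 36.0, 36.8, 37.6)        # 'medium' temperature (deg C)
    hr_med = tri(hr, 110, 135, 160)            # 'medium' heart rate (bpm)
    rr_med = tri(rr, 30, 45, 60)               # 'medium' respiration (breaths/min)
    t_high = tri(temp, 37.4, 39.0, 41.0)
    hr_high = tri(hr, 150, 180, 220)
    rr_high = tri(rr, 55, 75, 100)
    scores = {
        "Normal":   min(t_med, hr_med, rr_med),      # IF all medium THEN Normal
        "Critical": min(t_high, hr_high, rr_high),   # IF all high THEN Critical
    }
    # fallback label when neither rule fires strongly (illustrative shortcut)
    scores["Abnormal"] = max(0.0, 1.0 - max(scores.values()))
    return max(scores, key=scores.get), scores

status, scores = classify(36.8, 135, 45)
```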

Author 1: Sobowale A. A
Author 2: Olaniyan O. M
Author 3: Adetan. O
Author 4: Adanigbo. O
Author 5: Esan. A
Author 6: Olusesi. A.T
Author 7: Wahab. W.B
Author 8: Adewumi. O. A

Keywords: Clinical Decision Supports Systems (CDSS); Fuzzy Inference System (FIS); Neonatal Intensive Care Unit (NICU); vital signs; neonates

PDF

Paper 45: Machine Learning-Based Phishing Attack Detection

Abstract: This paper explores machine learning techniques and evaluates their performance when trained on datasets of features that can differentiate a phishing website from a safe one. The capability to tell these sites apart is vital in modern-day internet use: as more of our resources shift online, a single vulnerability and leak of sensitive information could bring everything down in a connected network. The objective of this research is to highlight the best technique for identifying one of the most commonly occurring cyberattacks, allowing faster identification and blacklisting of such sites and therefore a safer and more secure web-surfing experience for everyone. To achieve this, we describe each of the techniques in detail and use different evaluation techniques to portray their performance visually. After pitting all of these techniques against each other, we conclude that the Random Forest classifier does indeed work best for phishing website detection.
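
The evaluation side of such a comparison rests on standard confusion-matrix metrics. A self-contained sketch (labels and predictions invented):

```python
def confusion_metrics(y_true, y_pred, positive="phish"):
    """Accuracy, precision, recall and F1 from true/predicted labels."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

labels = ["phish", "phish", "safe", "safe", "phish", "safe"]
preds  = ["phish", "safe",  "safe", "phish", "phish", "safe"]
m = confusion_metrics(labels, preds)
```

Ranking classifiers by these metrics, rather than accuracy alone, matters for phishing detection because false negatives (missed phishing sites) and false positives (blacklisted legitimate sites) carry very different costs.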

Author 1: Sohrab Hossain
Author 2: Dhiman Sarma
Author 3: Rana Joyti Chakma

Keywords: Phishing attack; phishing attack detection; phishing website detection; machine learning; random forest classifier

PDF

Paper 46: Identifying Critical Success Factors of Financial ERP System in Higher Education Institution using ADVIAN® Method

Abstract: Enterprise resource planning (ERP) has been widely accepted by many organizations as an information technology process to seamlessly integrate, manage, and boost performance across different units of an organization. However, an unpleasant chasm lingers between the success and satisfaction rates of ERP system implementations, which has limited the effective use of such systems. Moreover, the critical success factors (CSFs) of ERP system implementation have not been investigated in the literature for the case of financial functions in higher education institutions. This paper applies the advanced impact analysis (ADVIAN®) method to explore the CSFs of an ERP system supporting financial functions in a higher education institution. The ADVIAN® method highlights CSFs measured according to criticality, integration, and stability. Furthermore, using precarious, driving, and driven measures to rank the factors, an effective model of CSFs for a financial ERP system implementation is obtained to support financial functions. The study findings provide a comprehensive methodological scheme that can serve as a reference guide and orientation point for successfully planning, implementing, and using ERP systems to support financial functions in higher education institutions.

Author 1: Ayogeboh Epizitone
Author 2: Oludayo. O. Olugbara

Keywords: Cross impact; enterprise resource; impact analysis; resource planning; success factor

PDF

Paper 47: Machine Learning based Analysis on Human Aggressiveness and Reactions towards Uncertain Decisions

Abstract: Tweet data can be processed into useful information. Social media sites like Twitter, Facebook, and Google+ are rapidly growing in popularity. These sites provide a platform for people to share and express views about daily life, discuss particular topics, engage with different communities, and connect with the globe by posting messages. Tweets posted on Twitter express opinions, which can be used for purposes such as gauging public views on uncertain decisions, for example the Muslim ban in America, the war in Syria, or American soldiers in Afghanistan. Such decisions have a direct impact on users' lives, with violations and aggressiveness as common consequences. For this purpose, we collect opinions from Twitter on some prominent decisions taken in the past decade. We divide the sentiments into two classes, anger (hatred) and positive, and propose a hypothesis model for such data for future use. We use Support Vector Machine (SVM), Naive Bayes (NB), and Logistic Regression (LR) classifiers for the text classification task, and furthermore compare the SVM results with NB and LR. This research helps predict the early behaviors and reactions of people before the big consequences of such decisions unfold.
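
Of the three classifiers named, Naive Bayes is compact enough to sketch end to end. Below is a minimal multinomial Naive Bayes with Laplace (add-one) smoothing over word counts; the toy tweets and the two class labels are invented for illustration, not drawn from the authors' dataset.

```python
import math
from collections import Counter

def train_nb(docs):
    """Multinomial Naive Bayes: per-class word counts plus class priors."""
    classes, vocab = {}, set()
    for text, label in docs:
        words = text.lower().split()
        classes.setdefault(label, {"n_docs": 0, "counts": Counter()})
        classes[label]["n_docs"] += 1
        classes[label]["counts"].update(words)
        vocab.update(words)
    total_docs = sum(c["n_docs"] for c in classes.values())
    return classes, vocab, total_docs

def predict_nb(model, text):
    """Pick the class maximizing log prior + smoothed log likelihoods."""
    classes, vocab, total_docs = model
    best, best_lp = None, float("-inf")
    for label, c in classes.items():
        lp = math.log(c["n_docs"] / total_docs)
        n_words = sum(c["counts"].values())
        for w in text.lower().split():
            lp += math.log((c["counts"][w] + 1) / (n_words + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

tweets = [
    ("this decision is hateful and wrong", "anger"),
    ("angry crowds protest the ban", "anger"),
    ("great step forward for peace", "positive"),
    ("hopeful and happy about the talks", "positive"),
]
model = train_nb(tweets)
label = predict_nb(model, "hateful ban")
```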

Author 1: Sohaib Latif
Author 2: Abdul Kadir Abdullahi Hasan
Author 3: Abdaziz Omar Hassan

Keywords: Opinion mining; Naïve Bayes; logistic regression; support vector machine

PDF

Paper 48: A Cluster-Based Mitigation Strategy Against Security Attacks in Wireless Sensor Networks

Abstract: Wireless sensor network (WSN) applications range across distinct domains, including real-time event detection. WSNs can be deployed with both mobile and static sensor nodes (SNs) for applications including health-care systems, smart parking, and environmental monitoring. Sensor nodes in a WSN are constrained by the energy content of each node, and because they are accessible to other nodes over a wireless medium they are susceptible to various categories of attack. One such attack, mounted by a malicious attacker, can shorten the lifetime of the network and, in an adverse scenario, even lead to congestion across the entire network. This paper presents an overview of various attacks and their consequences at different layers, and evaluates defense strategies used to mitigate the various categories of attacks on wireless sensor networks. The study proposes a cluster-based approach in which the energy-constrained nodes organize themselves and perform network duties according to network performance. One node performs the role of cluster head (CH), elected on the basis of its "Reputation", an indicator of the node's individual behavior in the network, and its "Net_Credit_Score", which measures the cooperative behavior of the sensor node in the cluster. Further, the study highlights parameters that can be implemented to enhance the defense strategy, such as cluster count, the stability factor of both the cluster and the cluster head, and the intra-cluster topology, all of which can be crucial. The result is a road map for designing a secure and resistant reputation-based system for WSNs to overcome the various security-related attacks.

Author 1: Jahangir Khan
Author 2: Ansar Munir Shah
Author 3: Babar Nawaz
Author 4: Khalid Mahmood
Author 5: Muhammad Kashif Saeed
Author 6: Mehmood ul Hassan

Keywords: Wireless sensor network; security attacks; security issues; clusters

PDF

Paper 49: A Predictive Model for the Determination of Academic Performance in Private Higher Education Institutions

Abstract: The growth and development of predictive models in the current world has driven considerable change. Today, predictive modelling of academic performance has transformed more than a few institutions by improving their students' academic performance. This paper presents a computational predictive model that uses artificial neural networks to predict whether a student will pass or fail. The model is unique in the current literature in that it is specifically designed to evaluate the effectiveness of the predictive strategies of neural networks as well as of five additional algorithms. Analysis of the experimental results shows that artificial neural networks outperformed the eXtreme Gradient Boosting (XGBoost), Linear Regression, Support Vector Machine, Naive Bayes, and Random Forest algorithms for academic performance prediction.

Author 1: Francis Makombe
Author 2: Manoj Lall

Keywords: Classification modelling; data mining; higher education institutions; accuracy; academic performance

PDF

Paper 50: The Effect of Requirements Quality and Requirements Volatility on the Success of Information Systems Projects

Abstract: This study identifies the effect of poorly written requirements specifications in software development, and of their continuous change, on the success of information systems projects, and their influence on project time and cost overruns, based on empirical understanding of practice. As the world moves towards the Internet of Things, and with the dramatic increase in demand for complex information systems projects, the development of information systems has become more difficult and handling customer requirements has become very challenging. This research follows a conclusive design: a descriptive study was first conducted to reveal the characteristics of a good requirement, and a quantitative method was then applied through a questionnaire distributed to more than 400 participants in the software industry in Egypt, to understand the relationships between variables and how to improve quality based on real-world observations. The collected data were analyzed using Python and R analysis techniques. The results indicate that organizations with the highest requirements quality and the least requirements volatility have higher software success rates in terms of project efficiency as well as business and direct organizational success, while requirements volume does not have a significant effect on success rates. From this analysis we developed an initial model.

Author 1: Eman Osama
Author 2: Ayman Khedr
Author 3: Mohamed Abdelsalam

Keywords: Requirement engineering; Software Requirements Specification (SRS); requirements quality; requirements volatility; project success factor

PDF

Paper 51: Dissemination and Implementation of THK-ANEKA and SAW-Based Stake Model Evaluation Website

Abstract: The purpose of this study was to provide information about the dissemination and implementation of the THK-ANEKA and SAW-based Stake model evaluation website at vocational IT schools in Bali. THK is an acronym for Tri Hita Karana. ANEKA is an acronym for Akuntabilitas, Nasionalisme, Etika publik, Komitmen mutu, dan Anti korupsi (in Indonesian), or Accountability, Nationalism, Public ethics, Quality commitment, and Anti-corruption (in English). SAW is an acronym for Simple Additive Weighting. The study used a development approach based on the Borg and Gall model, which consists of 10 development stages; the research in 2020 focused on the dissemination and implementation stages. The research was located at several vocational IT schools in Bali Province. The subjects involved in assessing the website implementation were 110 respondents, and the assessment instrument was a questionnaire. The analysis interpreted the effectiveness level of dissemination and implementation with reference to the eleven-scale effectiveness standard. The results showed that the dissemination and implementation of the THK-ANEKA and SAW-based Stake model evaluation website at vocational IT schools in Bali had gone well. This can be seen from the documentary evidence of the dissemination activities, the website implementation effectiveness of 88.973%, and the already-accurate simulation results of the SAW method. The website highlights the evaluation aspects that support the realization of positive morals and the quality of students' learning.
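
The SAW method itself is compact enough to sketch: normalize each criterion column of a decision matrix, then score each alternative by a weighted sum. The alternatives, criteria, and weights below are invented for illustration and are not the study's actual evaluation data.

```python
def saw_rank(matrix, weights, benefit):
    """Simple Additive Weighting: max-normalize benefit criteria,
    min/value-normalize cost criteria, then take the weighted sum per row."""
    cols = list(zip(*matrix))
    norm_cols = []
    for j, w in enumerate(weights):
        col = cols[j]
        if benefit[j]:                       # benefit criterion: higher is better
            m = max(col)
            norm_cols.append([v / m for v in col])
        else:                                # cost criterion: lower is better
            m = min(col)
            norm_cols.append([m / v for v in col])
    return [sum(w * norm_cols[j][i] for j, w in enumerate(weights))
            for i in range(len(matrix))]

# three website-evaluation alternatives scored on three criteria (invented)
matrix = [[80, 70, 3], [90, 60, 2], [70, 90, 4]]   # rows: alternatives
weights = [0.5, 0.3, 0.2]                          # criterion weights, sum to 1
benefit = [True, True, False]                      # last criterion is a cost
scores = saw_rank(matrix, weights, benefit)
best = scores.index(max(scores))
```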

Author 1: Dewa Gede Hendra Divayana
Author 2: I Putu Wisna Ariawan
Author 3: Agus Adiarta

Keywords: Evaluation website; stake model; THK; ANEKA; SAW

PDF

Paper 52: Physically-Based Animation in Performing Accuracy Bouncing Simulation

Abstract: This study investigates the use of physics formulas to achieve plausible bouncing simulation in animation. Physics-based animation is needed to produce visually believable animations that adhere to the basic laws of physics. Based on the review, creating accurately timed bouncing simulations is significantly difficult, particularly when setting keyframes: setting the value of a keyframe is unambiguous, while specifying its timing is a harder and often time-consuming task. A case study of bouncing-ball simulation was carried out in this research, with the variables of mass, velocity, acceleration, force, and gravity taken into consideration in the motion. The bouncing dynamic is a significant study in animation: it is used often and illustrates many different aspects of animation, such as falling objects, walking, running, hopping, and juggling. Therefore, a physical framework based on numerical simulation is proposed in this study, so that real-time animation can be used to control the motion of a bouncing dynamic object. The physics-based algorithm gives the animator the ability to control the realism of the animation without setting keyframes manually, providing an extra layer of visually convincing simulation.
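
The kind of numerical simulation proposed can be sketched with a semi-implicit Euler integrator and a restitution coefficient: the solver, not hand-set keyframes, determines when each bounce occurs. The constants (drop height, restitution, time step) are invented for illustration.

```python
def simulate_bounce(h0, restitution=0.7, g=9.81, dt=0.001, t_max=5.0):
    """Drop a ball from height h0 and record bounce times.
    Semi-implicit Euler: update velocity from gravity, then position
    from the new velocity; on ground contact, reflect and damp velocity."""
    y, v, t = h0, 0.0, 0.0
    bounce_times = []
    while t < t_max:
        v -= g * dt                  # gravity acts on velocity
        y += v * dt                  # new velocity moves the position
        if y <= 0.0 and v < 0.0:     # ground contact
            y = 0.0
            v = -v * restitution     # bounce with energy loss
            bounce_times.append(t)
        t += dt
    return bounce_times

bounces = simulate_bounce(h0=2.0)
```

The successive bounce intervals shrink geometrically with the restitution coefficient, which is exactly the timing pattern that is tedious to reproduce by hand-placed keyframes.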

Author 1: Loh Ngiik Hoon

Keywords: Bouncing simulation; physics algorithm; physics animation; real time animation; animation

PDF

Paper 53: Physiotherapy: Design and Implementation of a Wearable Sleeve using IMU Sensor and VR to Measure Elbow Range of Motion

Abstract: Range of motion (RoM) is the measurement of the angular movement of joints, which defines joint flexibility. It is crucial to measure RoM while performing musculoskeletal diagnostics. Physiotherapy and hospital visits can be very costly and demand a great deal of time; moreover, most current digital instruments used to measure RoM are expensive and hard to use. In this paper, a digital wearable sleeve device is designed and tested that is cheap, time-efficient, and easy to use. The designed device was found to be within 95% agreement with a universal goniometer (UG) when tested using Bland-Altman plots. Patients can take measurements on their own and visualize the results on their desktops or mobile phones. Patients also receive graphical feedback highlighting the extent of variation between their exercise performance and the standard exercise, and can automatically compare their current exercise with a previous one using the Kolmogorov-Smirnov (K-S) test. To make exercising more fun, we developed a 3D virtual reality (VR) gaming environment for elbow flexion, elbow supination and pronation, and elbow extension exercises, where patients can exercise in an interactive environment and visualize their progress side by side.
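
The Bland-Altman agreement check used to validate the sleeve is simple to compute: the bias is the mean of the paired differences, and the 95% limits of agreement are bias ± 1.96 × SD of the differences. The paired angle readings below are invented for illustration, not the paper's measurements.

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman limits of agreement between two measurement methods:
    mean difference (bias) +/- 1.96 * SD of the paired differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# elbow angles (degrees): wearable sleeve vs. universal goniometer (invented)
sleeve = [30.1, 45.3, 60.2, 89.8, 120.4, 150.1]
gonio  = [30.0, 45.0, 61.0, 90.0, 119.5, 150.5]
bias, (lo, hi) = bland_altman(sleeve, gonio)
agree = all(lo <= s - g <= hi for s, g in zip(sleeve, gonio))
```

A plot of each pair's mean against its difference, with the bias and limit lines drawn in, is the usual Bland-Altman figure.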

Author 1: Anzalna Narejo
Author 2: Attiya Baqai
Author 3: Neha Sikandar
Author 4: Absar Ali
Author 5: Sanam Narejo

Keywords: Range of Motion (RoM); physiotherapy; Inertial Measurement Unit (IMU); Virtual Reality (VR)

PDF

Paper 54: Outlier Detection using Nonparametric Depth-Based Techniques in Hydrology

Abstract: Several issues arise when extending outlier detection methods from a single dimension to higher dimensions, including limited visualization methods, the inadequacy of marginal methods, the lack of a natural order, and limitations of parametric modeling. To overcome these limitations, nonparametric outlier identifiers based on depth functions are introduced. These identifiers comprise four threshold-type outlyingness functions for outlier detection: Mahalanobis distance, Tukey depth, spatial Mahalanobis depth, and projection depth. The object of the present research is the application of the proposed nonparametric technique in hydrology. The study is executed in two frameworks: multivariate hydrological data analysis and functional hydrological data analysis. A flood event is graphically represented by a hydrograph, whose components are used to compute the flood characteristics peak (p) and volume (v); these characteristics are frequently employed in various types of multivariate analysis, whereas the full hydrograph is employed in functional data analysis so that no important information about the flood event is missed. In the multivariate framework the proposed technique is applied to the bivariate flood characteristics (p, v), while in the functional framework it is applied to the first two principal component scores, denoted (z_1, z_2), since the first two principal components capture the major variation of the data employed for the analysis.
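
The first of the four outlyingness functions, Mahalanobis distance, can be sketched directly for the bivariate (p, v) case: distance from the sample mean weighted by the inverse sample covariance, with the largest distance flagging the most outlying flood event. The (p, v)-like values below are invented toy data, not hydrological records.

```python
def mahalanobis_2d(data):
    """Mahalanobis distance of each 2-D point from the sample mean,
    using the hand-inverted 2x2 sample covariance matrix."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    sxx = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in data) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)
    det = sxx * syy - sxy * sxy
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det   # inverse covariance
    out = []
    for x, y in data:
        dx, dy = x - mx, y - my
        out.append((ixx * dx * dx + 2 * ixy * dx * dy + iyy * dy * dy) ** 0.5)
    return out

# toy bivariate observations with one exaggerated event (values invented)
pv = [(9, 100), (11, 100), (10, 99), (10, 101), (10, 100), (16, 106)]
dist = mahalanobis_2d(pv)
flagged = dist.index(max(dist))
```

A threshold-type identifier then declares any point with distance above a chosen cutoff to be an outlier; the depth-based functions in the paper generalize this idea beyond ellipsoidal contours.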

Author 1: Insia Hussain

Keywords: Outlyingness functions; nonparametric techniques; flood characteristics; principal component scores; multivariate analysis; functional analysis

PDF

Paper 55: A Hybrid Approach to Enhance Scalability, Reliability and Computational Speed in LoRa Networks

Abstract: The spread of the Internet of Things (IoT) is facilitated by new wireless communication technologies. For reliable, long-duration communication, low-power wide-area networking can be targeted at sensor nodes, and LoRa has become a recognizable choice for IoT-based solutions. In this paper, LoRa network performance is analyzed. A hybrid technique is proposed to overcome the scalability, reliability, and computational-speed issues in LoRa networks: a lightweight scheduling technique addresses the scalability and reliability issues, and a pruning algorithm is incorporated to enhance the computational speed of the LoRa network for IoT applications. Further, the LoRa network is analyzed using the LoRaWAN NS-3 module, and the packet error ratio (PER), network throughput, and fairness are illustrated for improved reliability and scalability. Simulation results are obtained for multiple-gateway scenarios. The analysis shows that the LoRa network addresses the scalability and reliability issues using the lightweight scheduling technique, and that computational speed is also enhanced by the pruning algorithm; the hybrid technique illustrated for LoRa networks for IoT applications is therefore in good agreement with the requirements.

Author 1: S. Raja Gopal
Author 2: V S V Prabhakar

Keywords: Hybrid technique; Internet of Things (IoT); lightweight scheduling; LoRa; pruning algorithm; scalability; reliability

PDF

Paper 56: A New Online Plagiarism Detection System based on Deep Learning

Abstract: Plagiarism is an increasingly widespread and growing problem in the academic field. Fraudsters use several plagiarism techniques, ranging from simple synonym replacement and sentence-structure modification to more complex methods involving several types of transformation. Human-based plagiarism detection is a difficult, inaccurate, and time-consuming process. In this paper, we propose a plagiarism detection framework based on three deep learning models: Doc2vec, Siamese Long Short-Term Memory (SLSTM), and Convolutional Neural Network (CNN). Our system uses three layers: a preprocessing layer including word embedding, learning layers, and a detection layer. To evaluate our system, we carried out a study of plagiarism detection tools in the academic field and compared them on a set of features. Compared with other works, our approach achieves a good accuracy of 98.33%, can detect different types of plagiarism, allows another dataset to be specified, and supports comparing the document against an internet search.
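
The final comparison in a detection layer of this kind typically reduces to a similarity score between two document vectors and a threshold. The sketch below uses cosine similarity over toy bag-of-words count vectors; real Doc2vec embeddings and the learned threshold of the paper's framework are not reproduced here.

```python
import math
from collections import Counter

def doc_vector(text, vocab):
    """Bag-of-words count vector over a shared vocabulary (toy stand-in
    for a learned document embedding)."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def is_plagiarized(a, b, threshold=0.8):
    """Flag a pair whose vector similarity exceeds a (made-up) threshold."""
    vocab = sorted(set(a.lower().split()) | set(b.lower().split()))
    return cosine(doc_vector(a, vocab), doc_vector(b, vocab)) >= threshold

src = "the experiment measures rounding error growth on the gpu"
susp = "the experiment measures rounding error growth on the gpu today"
other = "completely unrelated text about palm date classification"
```

Synonym replacement defeats raw word counts, which is precisely why the paper moves to learned representations (Doc2vec, SLSTM, CNN) for the vectors being compared.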

Author 1: El Mostafa Hambi
Author 2: Faouzia Benabbou

Keywords: Plagiarism detection; plagiarism detection tools; deep learning; Doc2vec; Stacked Long Short-Term Memory (SLSTM); Convolutional Neural Network (CNN); Siamese neural network

PDF

Paper 57: Impact of Project-Based Learning on Networking and Communications Competences

Abstract: The objective of this article is to establish the impact of project-based learning on the Networking and Communications I competences of engineering students in Lima. The study was of an applied type with a quasi-experimental design, and the population comprised 39 students of the sixth cycle of engineering; an objective test was applied to measure the impact of project-based learning on the Networking and Communications I competences. The results determined a statistically significant relationship between project-based learning and the Networking and Communications I competences of engineering students, with pretest values of Z = -0.498, greater than -1.96 (critical point), and significance p-value = 0.618, greater than α = 0.05 (p > α), and posttest values of Z = -4.488, less than -1.96 (critical point), and significance p-value = 0.000, less than α = 0.05 (p < α). Therefore project-based learning has a positive and significant impact on the Networking and Communications I competences of engineering students, supporting the alternative hypothesis and rejecting the null hypothesis. Consequently, we conclude that applying the project-based learning methodology has a positive and significant impact on the Networking and Communications I competences of engineering students.

Author 1: Cristian Castro-Vargas
Author 2: Maritza Cabana-Caceres
Author 3: Laberiano Andrade-Arenas

Keywords: Project-based learning; competencies; networking; communications; network convergence

PDF

Paper 58: Artificial Intelligent Techniques for Palm Date Varieties Classification

Abstract: The demand for high-quality palm dates is increasing due to their energy value and nutrient content, which are of great importance in the human diet. To meet consumer and market standards with large-scale production in Oman, one of the top date producers, an inline classification system is of great importance. This paper addresses the potential of Machine-Learning (ML) techniques for automatically classifying, without any physical measurement, the six most popular date fruit varieties in Oman. The effect of color, shape, size, and texture features and of the critical parameters of the classifiers on classification efficiency is investigated. Three different ML techniques are used for automatic classification and qualitative comparison: (i) Artificial Neural Networks (ANN), (ii) Support Vector Machine (SVM), and (iii) K-Nearest Neighbor (KNN). Merging the color, shape and size features contributes to achieving the highest accuracy. Experimental results show that the ANN classifier outperforms both SVM and KNN with the highest classification accuracy of 99.2%. The vision system developed in this paper can be successfully integrated into date packaging factories.
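
To illustrate the KNN branch of the comparison, here is a minimal sketch of majority-vote k-nearest-neighbour classification over color/shape/size feature vectors. The feature values and variety labels in the usage below are illustrative assumptions, not the paper's dataset.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    # train: list of (feature_vector, variety_label) pairs.
    # Classify by majority vote among the k nearest neighbours
    # in Euclidean feature space (e.g., color/shape/size descriptors).
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```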

Author 1: Lazhar Khriji
Author 2: Ahmed Chiheb Ammari
Author 3: Medhat Awadalla

Keywords: Palm date; feature extraction; machine learning; computer vision

PDF

Paper 59: Computational Analysis of Arabic Cursive Steganography using Complex Edge Detection Techniques

Abstract: The Arabic language contains multiple features that complicate the process of embedding and extracting text from a compressed image. In particular, the Arabic language covers numerous textual styles and shapes of Arabic letters. This paper investigates Arabic cursive steganography using complex edge detection techniques on compressed images comprising several content lengths (short, medium, and long sentences), as per the research interest. A sample of images from the Berkeley Segmentation Database (BSD) was utilized and compressed with diverse numbers of bits per pixel through Least Significant Bit (LSB) technology. The method presented in this paper was evaluated based on five complex edge detectors (Roberts, Prewitt, Sobel, LoG, and Canny) via MATLAB. The Canny edge detector proved to be the best solution when superior edge detection over the compressed image is vital, while Sobel appears to be better in terms of execution time for long sentence contents.
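
The LSB embedding the paper builds on can be sketched on a flat pixel array: each message bit replaces the least significant bit of one cover pixel, so each pixel value changes by at most one. This is a generic illustration of the technique, not the authors' MATLAB code.

```python
def embed_lsb(pixels, message_bits):
    # Replace the least significant bit of each cover pixel with one
    # message bit; pixels beyond the message keep their original value.
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, n_bits):
    # Recover the first n_bits message bits from the pixel LSBs.
    return [p & 1 for p in pixels[:n_bits]]
```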

Author 1: Anwar H. Ibrahim
Author 2: Abdulrahman S. Alturki

Keywords: Arabic language; Berkeley Segmentation Database (BSD); Least Significant Bit (LSB); Roberts; Prewitt; Sobel; LoG; Canny

PDF

Paper 60: The Most Efficient Classifiers for the Students’ Academic Dataset

Abstract: Educational institutions hold vast collections of data accumulated over years, and it is difficult to use this data to solve problems related to the progress of the educational process and to contribute to achieving quality. For this reason, data mining techniques help to extract hidden knowledge that supports the decisions necessary to develop education and meet quality requirements. The data for this study were obtained from the College of Business and Economics at Qassim University. Three classifiers were compared in this study: Decision Tree, Random Forest and Naïve Bayes. The results showed that Random Forest outperforms the other algorithms with 71.5% Precision, 71.2% F1-score, and 71.3% Recall and Classification Accuracy (CA). This study helps reduce failure by providing an academic advisor to students who have weaknesses in achieving a high Grade Point Average (GPA). It also helps in developing the educational process by discovering and overcoming weaknesses.
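
The reported Precision, Recall, and F1 figures follow the standard per-class definitions, which can be sketched as below; the labels in the usage example are illustrative, not the Qassim University data.

```python
def classification_metrics(y_true, y_pred, positive):
    # Per-class precision, recall, and F1 from true vs. predicted labels.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```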

Author 1: Ebtehal Ibrahim Al-Fairouz
Author 2: Mohammed Abdullah Al-Hagery

Keywords: Data mining; student performance; classification algorithms; evaluation

PDF

Paper 61: An Empirical Study of e-Learning Interface Design Elements for Generation Z

Abstract: E-learning is the latest evolution of electronic-based learning that creates, fosters, delivers and facilitates the learning process anytime and anywhere using interactive network technology. The use of e-learning as a learning platform makes users expect a high-quality interface design when interacting with the e-learning system. Interface design that meets students’ needs and expectations may increase their involvement in and satisfaction with e-learning, especially for generation Z students. However, interface design is frequently criticized and has become one of the issues that cause the failure of e-learning. A lack of understanding of students’ cultural backgrounds and preferences regarding e-learning interface design is the major factor that contributes to this phenomenon. To ensure the success of e-learning, a model of interface design specifically for generation Z students’ culture, consisting of interface elements and design characteristics, needs to be developed. This study aimed to address this subject by identifying e-learning interface elements and design characteristics from the existing literature, confirming the elements and design characteristics, and discovering related elements for e-learning interface design from generation Z students’ perspective. The study used a semi-structured focus group interview that included seven students. The focus group interview involved three main steps: sampling, protocol and research instruments. This study validated several interface elements and design characteristics that contribute to the model of e-learning interface design. The findings could guide interface designers in designing e-learning interfaces for generation Z students.

Author 1: Hazwani Nordin
Author 2: Dalbir Singh
Author 3: Zulkefli Mansor

Keywords: e-Learning; interface design; generation Z; culture; focus group

PDF

Paper 62: Cotton Leaf Image Segmentation using Modified Factorization-Based Active Contour Method

Abstract: The cotton plant is one of the most widely cultivated crops worldwide. The leaf is one of the important parts that help in food production. There are different cotton leaf diseases, such as Alternaria spot, foliar disease, and bacterial blight, which affect the agricultural yield. To detect these diseases, leaf region extraction becomes a significant task, and image processing techniques are used to achieve it. Hence, this paper presents a novel method to extract the leaf region from a complex background. The algorithm used is the modified factorization-based active contour method (MFACM), which helps in obtaining better output images. The database images used for this research were acquired from the field using a digital camera. The proposed work is compared with existing active contour algorithms such as Gradient Vector Flow (GVF), Adaptive Diffusion Flow (ADF), and Vector Flow Convolution (VFC). From the experiments, it can be observed that the proposed method outperforms the other active contour methods in terms of computation time and number of iterations. In addition, the segmented results are analyzed using specificity, sensitivity, and precision, which showed that the proposed method is better than the other methods.

Author 1: Bhagya M Patil
Author 2: Basavaraj Amarapur

Keywords: Cotton leaf; active contour; Gradient Vector Flow (GVF); Adaptive Diffusion Flow (ADF); Vector Flow Convolution (VFC); Modified factorization based active contour (MFACM)

PDF

Paper 63: Trading Saudi Stock Market Shares using Multivariate Recurrent Neural Network with a Long Short-term Memory Layer

Abstract: This study tests the weak form of the efficient market hypothesis on the Saudi stock market and proposes a recurrent neural network (RNN) to produce a trading signal. To predict the next-day trading signal of several shares in the Saudi stock market, we designed the RNN with a long short-term memory architecture. The network input comprises several time series features that contribute to the classification process. The proposed RNN output is fed to a trading agent that buys or sells shares based on the share's current value, the current available balance, and the current number of shares owned. To evaluate the proposed neural network, we used the historical price data of Brent crude oil in combination with other stock features (e.g., the previous day's opening and closing prices of the evaluated share). The results indicate that oil price variations affect the Saudi stock market. Furthermore, the proposed RNN model produces the next-day trading signal with 55% accuracy. For the same period, the proposed RNN trading method achieves an investment gain of 23%, whereas the buy-and-hold method obtained 1.2%.
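
The trading agent described above can be sketched as a simple loop over next-day signals. The greedy all-in buy/liquidate policy and the prices in the usage below are illustrative assumptions, not the authors' exact agent.

```python
def run_trading_agent(prices, signals, balance=10000.0):
    # Execute next-day buy/sell signals: buy as many shares as the
    # balance allows on 'buy', liquidate on 'sell', then report the
    # final portfolio value (cash plus shares at the last price).
    shares = 0
    for price, signal in zip(prices, signals):
        if signal == 'buy' and balance >= price:
            n = int(balance // price)
            shares += n
            balance -= n * price
        elif signal == 'sell' and shares > 0:
            balance += shares * price
            shares = 0
    return balance + shares * prices[-1]
```

A buy-and-hold baseline is simply `run_trading_agent(prices, ['buy'] + ['hold'] * (len(prices) - 1))`, which lets the two strategies' final values be compared on the same price series.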

Author 1: Fahd A. Alturki
Author 2: Abdullah M. Aldughaiyem

Keywords: Time series; neural network; long short-term memory; stock price; Tadawul

PDF

Paper 64: Implementing the Behavioral Semantics of Diagrammatic Languages by Co-simulation

Abstract: Due to the multidisciplinary nature of cyber-physical systems, it is impossible for any existing modeling language to be used effectively in all cases. For this reason, the development of domain-specific modeling languages is becoming an integral part of the modeling process. This diversification of modeling languages often implies the need to co-simulate subsystems in order to obtain the effect of a complete system. This paper presents how the behavioral semantics of a diagrammatic DSML can be implemented by co-simulation. For the formal specification of the language we used mechanisms from category theory. To specify behavioral semantics, we introduced the notion of a behavioral rule as an aggregation of a graph transformation and a behavioral action. The paper also contains a relevant example and demonstrates that the implementation of the behavioral semantics of a diagrammatic model can be achieved by co-simulating standalone FMUs associated with behavioral rules.

Author 1: Daniel-Cristian Craciunean

Keywords: DSML; cyber-physical systems; behavioral semantics; standalone FMU; FMI; diagrammatic language

PDF

Paper 65: A Cluster based Non-Linear Regression Framework for Periodic Multi-Stock Trend Prediction on Real Time Stock Market Data

Abstract: Trend prediction has been one of the most important tasks in the stock market since day one. For sophisticated trend prediction using real-time stock market data, stock sentiment news and technical analysis play a vital role. When predicting the trend in the conventional way, technical indicators are delayed because of temporal data and limited historical data. Conventional stock trend prediction methods have operated without sentiment scores, technical scores, and time periods for trend prediction. Whereas previous conventional methods were bound to a single stock for trend prediction because of high computational memory and time requirements, the proposed prototype focuses on trend prediction with multi-stock data. This multi-stock trend prediction model implements the proposed algorithms on a real-time stock market dataset. In this model, a new stock technical indicator and a new stock sentiment score are proposed to improve stock feature selection for trend prediction. To find the best real-time feature selection model, a technical feature selection measure and a stock news sentiment score are developed and incorporated. We used integrated stock market data to build a hybrid clustering model that finds related multi-stocks. In summary, this is a cluster-based nonlinear regression multi-stock framework for time-based trend prediction. In the experimental results, the multi-stock trend regression accuracy is improved by 12% and recall by 11%, making this model more accurate and precise.

Author 1: Lakshmana Phaneendra Maguluri
Author 2: R. Ragupathy

Keywords: Multi-stock trend prediction; stock market; clustering; nonlinear regression

PDF

Paper 66: Development of a Graphic Information System Applied to Quality Statistic Control in Production Processes

Abstract: One of the advantages that organizations gain from using an Information System is control over their activities. This article develops an Information System that allows an organization to graphically obtain the real results of a production process by applying Nelson's eight rules to determine whether any measured variable is out of control. The software architecture pattern used is Model View Controller (MVC), which keeps the functionality of the application separate. The front-end, the part that interacts with users, was developed in ASP.NET as a web platform to provide the required services, together with JavaScript, HTML 5, Razor and Bootstrap. The back-end, the part that processes front-end input and performs the calculations, operations, database communication and file reading, was developed with the C# programming language, the SQL Server database management system and Entity Framework. As a result, when the system detects that a measured variable is out of control according to Nelson's rules, it sends an e-mail alarm with an explanation of what has happened. This allows the organization to make effective decisions in the processes involved.
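
Two of Nelson's eight rules give a flavor of the out-of-control tests such a system applies: rule 1 fires on any point more than three standard deviations from the mean, and rule 2 on nine or more consecutive points on the same side of the mean. The sketch below is a generic illustration in Python, not the system's C# code.

```python
def nelson_rule_1(data, mean, sigma):
    # Rule 1: indices of points more than 3 sigma from the process mean.
    return [i for i, x in enumerate(data) if abs(x - mean) > 3 * sigma]

def nelson_rule_2(data, mean, run=9):
    # Rule 2: indices at which a run of `run` consecutive points on the
    # same side of the mean is completed.
    out, count, side = [], 0, 0
    for i, x in enumerate(data):
        s = 1 if x > mean else -1 if x < mean else 0
        count = count + 1 if s == side and s != 0 else (1 if s != 0 else 0)
        side = s
        if count >= run:
            out.append(i)
    return out
```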

Author 1: Laura Vazquez
Author 2: Alicia Valdez
Author 3: Griselda Cortes
Author 4: Mariana Rosales

Keywords: Information system; Nelson’s rules; Model View Controller pattern; C#; ASP.NET

PDF

Paper 67: Netnography and Text Mining to Understand Perceptions of Indian Travellers using Online Travel Services

Abstract: Advancements in the electronic commerce industry have helped online travel agents (OTAs) in many ways. This paper examines the overall impact of travellers using online services and the sentiments derived from a collection of reviews of online travel service providers, known as online travel agents (OTAs), in India. Customer reviews from different identified sources are collected, and the satisfaction of travellers using various online travel services is analyzed using netnographic analysis and text mining. The paper also covers the detailed process of data collection and analysis using netnography and text mining methods, which supports the analysis and the derivation of sentiments from the collected reviews. The results obtained are presented as token lists, keyword analysis, and service-specific analysis. The statistical significance of the different results is tested to understand the relationship between the various services and OTAs.

Author 1: Dashrath Mane
Author 2: Prateek Srivastava

Keywords: Consumer; travellers; netnography; text mining; OTA; sentiment; perception

PDF

Paper 68: Multi-Dimensional Fraud Detection Metrics in Business Processes and their Application

Abstract: Occupational fraud is defined as the deliberate misuse of one’s occupation for personal enrichment. It poses a significant challenge for organizations and governments. Estimates indicate that the funds involved in occupational fraud cases investigated across 125 countries between 2018 and 2019 exceeded US$3.6 billion. Process-based fraud (PBF) is a form of occupational fraud that is perpetrated inside business processes. Business processes underlie the logic of the work that organizations undertake, and they are used to execute an organization’s strategies to achieve organizational goals. Business processes should be examined for potential fraud risks to ensure that businesses achieve their objectives. While it is impossible to prevent fraud entirely, it must be detected. However, PBF detection metrics are not well developed at present: they are scattered, unstandardized, not validated, and, in some cases, absent. This study aimed to develop a comprehensive PBF detection metric by leveraging and operationalizing a taxonomy of fraud detection metrics for business processes as an underlying theory. Forty-one PBF detection metrics were deduced from the taxonomy using design science research. To evaluate their utility, the metrics were applied to illustrative scenarios, and a real example of the implementation of the metrics was provided. The developed metrics form a complete, classified, validated, and standardized list of PBF detection metrics, which includes all the necessary PBF detection dimensions. It is expected that the stakeholders involved in PBF detection will use the metrics established in this work in their practice to increase the effectiveness of the PBF detection process.

Author 1: Badr Omair
Author 2: Ahmad Alturki

Keywords: Business process fraud; fraud detection; fraud indicators; fraud measures; fraud metrics; PBF; red flags

PDF

Paper 69: Video Processing for Animation at Key Points of Movement in the Mimosa Pudica

Abstract: Processing a single image of a moving plant is inadequate; for this reason, digital video processing must be incorporated, which allows the behavior of an algorithm to be analyzed over time. A method is presented that takes images of a plant with autonomous movement filmed on video; the frames are digitally processed, and the information is used to generate animations. Our representation of the structure is derived from an analysis of the image where the plant is deformed; the projections of the plant's movement are recovered from the video frames and used as a basis to generate videograms in an animation based on key points taken from an image; the Harris and BRISK algorithms are applied. The main plant used is Mimosa pudica. Once the frames have been obtained, correlation is proposed as a mechanism to find movement. The techniques are equally useful for any other moving plant, such as carnivorous plants or sunflowers.

Author 1: Rodolfo Romero-Herrera
Author 2: Laura Mendez-Segundo

Keywords: Harris; Brisk; correlation; ROI (Region of Interest); Canny; Sobel; Mimosa Pudica; movement

PDF

Paper 70: Population based Optimized and Condensed Fuzzy Deep Belief Network for Credit Card Fraudulent Detection

Abstract: In this information era, with the advancement of technology, financial fraud poses a high risk and is a continually increasing menace during online transactions. Credit card fraud detection is a tough challenge because of two important issues: the profile of credit card users’ behavior changes constantly, and credit card datasets are skewed. The factors that greatly affect credit card fraudulent transaction detection are primarily the data sampling models, the features involved in feature selection, and the detection approaches applied. To overcome these issues, instead of using certainty theory, this paper deploys three empowered models for intelligent fraudulent transaction detection. In this work, the uncertainty theory of intuitionistic fuzzy sets is used to determine the significant features that influence the detection process effectively. Maximized relevancy among the dependent and independent features of the credit card dataset is determined using the grade of membership and non-membership information of each feature. Intuitionistic fuzzy mutual information, with the knowledge of entropy, selects the features with the highest information scores as the significant feature subset. The proposed model devises a Fuzzy Deep Belief Network enriched with Sea Turtle Foraging for credit card fraudulent detection (EFDBN-STFA). The fuzzy deep belief network handles the complex patterns of credit card transactions with its deep knowledge, and the patterns of the dataset are analyzed with its stacked restricted Boltzmann machines. The weights assigned to the hidden nodes are fine-tuned by sea turtle foraging using its fitness measure, which improves the detection accuracy of the FDBN. Simulation results on two different credit card datasets prove the efficacy of EFDBN-STFA: with its gained ability to handle the hesitation factor and its metaheuristic optimization, it achieves a higher detection rate with fewer false alarms compared to other existing detection models.

Author 1: Jisha M. V
Author 2: D. Vimal Kumar

Keywords: Credit card fraudulent; uncertainty; intuitionistic fuzzy; fuzzy deep belief network; sea turtle foraging

PDF

Paper 71: Meta-Analysis of Artificial Intelligence Works in Ubiquitous Learning Environments and Technologies

Abstract: Ubiquitous learning (u-learning) refers to anytime and anywhere learning. U-learning has progressed to be considered a conventional teaching and learning approach in schools and is adopted to continue with the school curriculum when learners cannot attend schools for face-to-face lessons. Computer Science, namely the field of Artificial Intelligence (AI), presents tools and techniques to support the growth of u-learning and provides recommendations and insights to academic practitioners and AI researchers. Aim: The aim of this study was to conduct a meta-analysis of Artificial Intelligence works in ubiquitous learning environments and technologies to present the state of the art from the plethora of research. Method: The mining of related articles was devised according to the technique of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The complement of included research articles was sourced from broadly used databases, namely, Science Direct, Springer Link, Semantic Scholar, Academia, and IEEE. Results: A total of 16 scientific research publications were shortlisted for this study from 330 articles identified through database searching. Using a random-effects model, the estimated pooled estimate of artificial intelligence works in ubiquitous learning environments and technologies was 10% (95% CI: 3%, 22%; I2 = 99.46%, P = 0.00), which indicates the presence of considerable heterogeneity. Conclusion: It can be concluded from the experimental results of the subgroup analysis that machine learning studies [18% (95% CI: 11%, 25%), I2 = 99.83%] were considerably more heterogeneous (I2 = 99.83%) than intelligent decision support systems, intelligent systems and educational data mining studies. However, this does not mean that intelligent decision support systems, intelligent systems and educational data mining are not efficient.
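
The I2 heterogeneity statistic reported above is derived from Cochran's Q. A minimal sketch of that computation under inverse-variance (fixed-effect) weighting follows; it illustrates the standard formula I2 = max(0, (Q - df) / Q) x 100 and is not the meta-analysis software the authors actually used.

```python
def i_squared(effects, variances):
    # Cochran's Q for k study effect sizes under inverse-variance
    # weighting, then the I^2 heterogeneity statistic in percent.
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
```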

Author 1: Caitlin Sam
Author 2: Nalindren Naicker
Author 3: Mogiveny Rajkoomar

Keywords: Educational data mining; intelligent systems; artificial intelligence; PRISMA; machine learning; ubiquitous learning

PDF

Paper 72: Hate Speech Detection in Twitter using Transformer Methods

Abstract: Social media networks such as Twitter are increasingly utilized to propagate hate speech while facilitating mass communication. Recent studies have highlighted a strong correlation between hate speech propagation and hate crimes such as xenophobic attacks. Due to the size of social media and the consequences of hate speech in society, it is essential to develop automated methods for hate speech detection on different social media platforms. Several studies have investigated the application of different machine learning algorithms for hate speech detection. However, the performance of these algorithms is generally hampered by inefficient sequence transduction. Vanilla recurrent neural networks and recurrent neural networks with attention have been established as state-of-the-art methods for the tasks of sequence modeling and sequence transduction. Unfortunately, these methods suffer from intrinsic problems such as long-term dependency and lack of parallelization. In this study, we investigate a transformer-based method and test it on a publicly available multiclass hate speech corpus containing 24,783 labeled tweets. The DistilBERT transformer method was compared against attention-based recurrent neural networks and other transformer baselines for hate speech detection in Twitter documents. The study results show that the DistilBERT transformer outperformed the baseline algorithms while allowing parallelization.

Author 1: Raymond T Mutanga
Author 2: Nalindren Naicker
Author 3: Oludayo O Olugbara

Keywords: Attention transformer; deep learning; neural network; recurrent network; sequence transduction

PDF

Paper 73: VerbNet based Citation Sentiment Class Assignment using Machine Learning

Abstract: Citations are used to establish a link between articles. This intent has changed over the years: citations are now used as a criterion for evaluating research work or authors and have become one of the most important criteria for granting rewards or incentives. As a result, many unethical activities related to the use of citations have emerged. That is why content-based citation sentiment analysis techniques have been developed, on the hypothesis that all citations are not equal. There are several pieces of research on finding the sentiment of a citation; however, only a handful of techniques have used citation sentences for this purpose. In this research, we propose a verb-oriented citation sentiment classification for researchers by semantically analyzing verbs within a citation text using the VerbNet ontology, natural language processing and four different machine learning algorithms. Our proposed methodology emphasizes the verb as a fundamental element of opinion. According to benchmark results from developing and assessing the proposed methodology, it performs well across a variety of datasets. The technique has shown promising results using a Support Vector Classifier.
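
The verb-oriented idea can be sketched with a toy verb-class lexicon that maps citation verbs to sentiment classes. The lexicon entries and class assignments below are illustrative stand-ins, not actual VerbNet data or the paper's classifier.

```python
# Hypothetical verb-class lexicon in the spirit of VerbNet classes;
# these assignments are illustrative only.
VERB_SENTIMENT = {
    'improve': 'positive', 'extend': 'positive', 'confirm': 'positive',
    'fail': 'negative', 'contradict': 'negative', 'overlook': 'negative',
    'use': 'neutral', 'describe': 'neutral', 'report': 'neutral',
}

def citation_sentiment(citation_text):
    # Assign a sentiment class from the first known verb in the
    # citation sentence; default to 'neutral' when no verb matches.
    for token in citation_text.lower().split():
        if token in VERB_SENTIMENT:
            return VERB_SENTIMENT[token]
    return 'neutral'
```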

Author 1: Zainab Amjad
Author 2: Imran Ihsan

Keywords: Citation content analysis; sentiment analysis; semantic analysis; ontology; natural language processing

PDF

Paper 74: DBSR: A Depth-Based Secure Routing Protocol for Underwater Sensor Networks

Abstract: The Depth-Based Routing (DBR) protocol has gained considerable attention as an efficient routing scheme for Underwater Wireless Sensor Networks (UWSNs). It requires only depth information to perform the routing process. Despite this feature, UWSNs that operate with the DBR protocol are vulnerable to depth-spoofing attacks. In this paper, a Depth-Based Secure Routing (DBSR) protocol is proposed to overcome this vulnerability. DBSR modifies the traditional DBR routing algorithm by securing the depth information embedded in the header of the DBR packet. In addition, each node verifies the sender’s identity based on a digital signature scheme. We extensively evaluate the overhead and performance gain of DBSR for two signature schemes based on the Elliptic Curve Cryptography method under various network conditions. The simulation study is performed using an NS3-based simulator. Our results show that DBSR can prevent the depth-spoofing attack while achieving 95% and 85% delivery ratios under low and high network loads, respectively. Contrary to popular belief, the results show that careful utilization of cryptographic techniques is justifiable without significant overhead on the communication cost.
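
The two ingredients of such a scheme can be sketched separately: the DBR forwarding rule (forward only when the node is shallower than the previous sender by some threshold) and an authenticated depth field so a malicious node cannot advertise a spoofed shallow depth. HMAC is used below as a stand-in for the ECC signature schemes evaluated in the paper; the whole sketch is illustrative, not the DBSR implementation.

```python
import hashlib
import hmac

def should_forward(my_depth, sender_depth, threshold=0.0):
    # Core DBR rule: forward a packet only when this node is shallower
    # than the previous sender, so packets move toward surface sinks.
    return sender_depth - my_depth > threshold

def sign_depth(depth, key):
    # Stand-in for DBSR's signature: authenticate the depth field
    # carried in the packet header.
    return hmac.new(key, str(depth).encode(), hashlib.sha256).hexdigest()

def verify_depth(depth, tag, key):
    # A receiver rejects packets whose depth field fails verification.
    return hmac.compare_digest(sign_depth(depth, key), tag)
```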

Author 1: Ayman Alharbi

Keywords: UWSN; DBR; ECC

PDF

Paper 75: Product Recommendation in Offline Retail Industry by using Collaborative Filtering

Abstract: The variety of purchased products is important for retailers. When a customer buys a specific product in large numbers, the customer might get benefits such as larger discounts. On the contrary, this could harm retailers, since only some products are sold quickly. Due to this problem, big retailers try to entice customers to buy many product variations. For an offline retailer, promoting specific products based on the market’s taste is quite challenging because of the unavailability of information regarding customers’ preferences. This study utilized four years of purchase transaction data to implicitly find customers’ ratings or feedback on the specific products they have purchased. The study employed two Collaborative Filtering methods to generate product recommendations for customers and to find the best method. The results show that the memory-based approach (k-NN algorithm) outperformed the model-based one (SVD matrix factorization). Another finding is that the more training data used, the better the performance of the recommendation system. To cope with the data scalability issue, customer segmentation through k-Means clustering was applied; the results imply that this is not necessary, since it failed to boost the models' accuracy. The output of the recommendation system is then applied in a suggested business process for a specific offline retailer shop.
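
The memory-based approach can be sketched as user-based k-NN rating prediction: score a candidate product for a customer by a similarity-weighted average of the ratings of the k most similar customers who rated it. The rating dictionary in the usage below is an illustrative assumption, not the retailer's data.

```python
import math

def predict_rating(ratings, user, item, k=2):
    # ratings: {user: {item: implicit rating}}. Predict user's rating
    # of item from the k most cosine-similar users who rated that item.
    def cos(u, v):
        common = set(u) & set(v)
        if not common:
            return 0.0
        dot = sum(u[i] * v[i] for i in common)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv)
    neighbours = [(cos(ratings[user], r), r[item])
                  for name, r in ratings.items()
                  if name != user and item in r]
    neighbours.sort(reverse=True)
    top = neighbours[:k]
    total = sum(s for s, _ in top)
    return sum(s * r for s, r in top) / total if total else 0.0
```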

Author 1: Bayu Yudha Pratama
Author 2: Indra Budi
Author 3: Arlisa Yuliawati

Keywords: Recommendation system; offline retail store; memory-based collaborative filtering; customer segmentation

PDF

Paper 76: Using Wearable Sensors for Human Activity Recognition in Logistics: A Comparison of Different Feature Sets and Machine Learning Algorithms

Abstract: The topic of human activity recognition has gained a lot of attention due to its usage in exercise monitoring, smart health and assisted living. Even though the aforementioned domains have received significant interest from researchers, activity recognition for industrial settings has received comparatively little attention. Industry 4.0 involves the assimilation of industrial workers with robots and other equipment used in industry and necessitates the development of recognition methodologies for activities performed in industries. In this regard, this paper presents a performance comparison of various time/frequency domain features and popular machine learning algorithms for activity recognition in a logistics scenario. Experiments were conducted on inertial measurement sensor data from the recently released LARa dataset, with three feature sets used with four machine learning algorithms: Support Vector Machines, Decision Trees, Random Forests and Extreme Gradient Boosting (XGBoost). The best result achieved in the experiments was an average accuracy of 78.61%, using the XGBoost classifier with both time and frequency domain features. This work serves as a baseline for activity recognition in logistics using IMU sensors and enables the development of solutions to support fulfillment of Industry 4.0 goals.
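
Typical time-domain features of the kind compared here are computed per sliding window of the inertial signal. The sketch below shows four common ones (mean, standard deviation, RMS, peak-to-peak); the exact feature sets used with LARa may differ.

```python
import math

def time_domain_features(window):
    # Per-window statistics commonly used as time-domain HAR features.
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    return {'mean': mean, 'std': math.sqrt(var), 'rms': rms,
            'ptp': max(window) - min(window)}
```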

Author 1: Abbas Shah Syed
Author 2: Zafi Sherhan Syed
Author 3: Muhammad Shehram Shah
Author 4: Salahuddin Saddar

Keywords: Human Activity Recognition (HAR); inertial sensors; LARa dataset; smart industry

PDF

Paper 77: Real-Time Healthcare Monitoring System using Online Machine Learning and Spark Streaming

Abstract: Real-time monitoring and tracking systems play a critical role in the healthcare field. Wearable medical devices with sensors, mobile applications, and the health cloud continuously generate an enormous amount of data, often called streaming big data. Due to the high speed of streaming data, it is difficult to ingest, process, and analyze such huge data in real time to take immediate action in case of emergencies; traditional methods are inadequate and time-consuming. Therefore, there is a significant need for real-time big data stream processing to guarantee an effective and scalable solution. We therefore propose a new online prediction system to predict health status using the Spark Streaming framework. The proposed system focuses on applying streaming machine learning models (i.e., streaming linear regression with SGD) to streaming health data events ingested into Spark Streaming through Kafka topics. The experiments were conducted on historical medical datasets (a diabetes dataset, a heart disease dataset, and a breast cancer dataset) and a generated dataset that simulates wearable medical sensors. The historical datasets showed that the accuracy improvement ratio obtained using the diabetes dataset is the highest of the three, with an accuracy of 81%. For the generated dataset, the online prediction system achieved 98% accuracy at a 5-second window size. Beyond this, the experimental results proved that the online prediction system can learn online and update the model according to new data arrivals and the window size.
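
Streaming linear regression with SGD can be sketched as an online model that updates its weights on each arriving window of data, rather than retraining from scratch. This pure-Python sketch mirrors the idea for illustration; it is not Spark MLlib's implementation.

```python
class StreamingLinearRegressionSGD:
    # Online linear model updated one window (mini-batch) at a time:
    # each arriving batch nudges the weights toward the new data.
    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def partial_fit(self, batch):
        # batch: list of (feature_vector, target) pairs from one window.
        for x, y in batch:
            err = self.predict(x) - y
            self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b -= self.lr * err
```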

Author 1: Fawzya Hassan
Author 2: Masoud E. Shaheen
Author 3: Radhya Sahal

Keywords: Online machine learning; streaming data; Apache Spark; Apache Kafka; spark streaming machine learning

PDF
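The core idea of the abstract above, streaming linear regression trained incrementally by SGD, can be sketched in a few lines. This is a self-contained toy under assumptions: Spark's StreamingLinearRegressionWithSGD updates on DStream micro-batches rather than single records, and the class and parameter names here are illustrative only.

```python
class StreamingLinearRegressionSGD:
    """Minimal online linear regression trained by SGD, one event at a time.
    A sketch of the streaming-ML idea, not Spark's actual implementation."""
    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def partial_fit(self, x, y):
        err = self.predict(x) - y          # gradient of 0.5 * err**2
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = StreamingLinearRegressionSGD(n_features=1, lr=0.05)
for _ in range(200):                       # simulate an arriving event stream
    for x in ([0.0], [1.0], [2.0], [3.0]):
        model.partial_fit(x, 2.0 * x[0] + 1.0)   # synthetic target: y = 2x + 1
```

In the paper's setting, each `partial_fit` call would correspond to a health event consumed from a Kafka topic, which is how the model adapts as new data arrive.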

Paper 78: Fundamental Capacity Analysis for Identically Independently Distributed Nakagami-q Fading Wireless Communication

Abstract: With the advancement in technology, a decent data transfer rate for fast communication is an exigency. Different distributions have previously been used to model different wireless communication channels and to analyze system performance. In this work, a capacity analysis of identically independently distributed Nakagami-q fading single-input multiple-output (SIMO) wireless communication is presented. The channel capacity is derived analytically using the small limit argument approximation, which corresponds to the low signal-to-noise ratio (SNR) regime. The behavior of the SIMO channel capacity with respect to the number of receiver antennas and with respect to the SNR is explored in depth, and the improvement of capacity is depicted rigorously. It is found that, under the Nakagami-q distribution, the capacity of the system increases as the number of receiver antennas increases, and that the capacity of this SIMO wireless system can be further improved by changing certain parameters.

Author 1: Siam Bin Shawkat
Author 2: Md. Mazid-Ul-Haque
Author 3: Md. Sohidul Islam
Author 4: Borshan Sarker Sonok

Keywords: Wireless communication; SIMO channel capacity; Nakagami-q fading; Hoyt distribution; low SNR regime

PDF
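The trends the abstract reports can be seen from the standard textbook form of SIMO capacity with receive maximal-ratio combining; this is a generic sketch, not necessarily the paper's exact derivation, with $\bar{\gamma}$ the average per-antenna SNR, $N_r$ the number of receive antennas, and $h_i$ the per-antenna fading coefficients:

```latex
% Instantaneous SIMO capacity (bits/s/Hz):
C = \log_2\!\Big(1 + \bar{\gamma}\sum_{i=1}^{N_r} |h_i|^2\Big)

% Small limit argument (low-SNR) approximation, \ln(1+x) \approx x:
C \approx \frac{\bar{\gamma}}{\ln 2}\sum_{i=1}^{N_r} |h_i|^2,
\qquad
\mathbb{E}[C] \approx \frac{\bar{\gamma}\, N_r\, \mathbb{E}\!\left[|h|^2\right]}{\ln 2}
```

In this regime the ergodic capacity grows linearly with $N_r$, consistent with the reported result that capacity increases with the number of receiver antennas; for Nakagami-q (Hoyt) fading, $\mathbb{E}[|h|^2]$ is the fading power parameter.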

Paper 79: Unified Approach for White Blood Cell Segmentation, Feature Extraction, and Counting using Max-Tree Data Structure

Abstract: Accurate identification and counting of White Blood Cells (WBCs) in microscopy blood cell images are vital for diagnosing several blood-related diseases such as leukemia. The need for automated cell image analysis in medical diagnosis has driven a plethora of research over the last few decades. Microscopic blood cell image analysis involves three major steps: cell segmentation, classification, and counting, and several techniques have been employed separately to solve these three problems. In this paper, a simple unified model is proposed for White Blood Cell segmentation, feature extraction for classification, and counting with connected mathematical morphological operators implemented using the max-tree data structure. The max-tree creates a hierarchical representation of the connected components at all gray levels present in an image, such that the root holds the connected components comprising the pixels with the lowest intensity value, while the connected components comprising the pixels with the highest intensity values sit in the leaves. Associated attributes such as the size or shape of each connected component can be efficiently calculated on the fly and stored in this data structure. Utilizing this knowledge-rich data structure, we obtain a better segmentation that preserves the morphology of the cells and consequently achieve better accuracy in cell counting.

Author 1: Bilkis Jamal Ferdosi

Keywords: Segmentation; feature extraction; White Blood Cell (WBC); mathematical morphology; max-tree

PDF
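The nesting property the abstract relies on (connected components of upper level sets, organized by the max-tree from root to leaves) can be illustrated without building the tree itself: computing the 4-connected components of each level set {p : img[p] >= t} shows exactly the components the max-tree stores. The tiny image and thresholds below are hypothetical.

```python
from collections import deque

def components(img, t):
    """4-connected components of the level set {p : img[p] >= t}.
    The max-tree organizes these components, nested across all levels t:
    the root is the lowest level, the brightest peaks are the leaves."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if img[y][x] >= t and not seen[y][x]:
                comp, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:                      # BFS flood fill of one component
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                                and img[ny][nx] >= t:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

img = [[0, 0, 0, 0, 0],
       [0, 5, 5, 0, 7],
       [0, 5, 9, 0, 7],
       [0, 0, 0, 0, 0]]
```

At t = 0 there is a single root component; at t = 1 it splits into two "cells"; at t = 8 only the brightest peak survives. Attributes such as component size (`len(comp)`) are what a max-tree filter would use to keep cell-like components and discard debris.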

Paper 80: DistB-SDoIndustry: Enhancing Security in Industry 4.0 Services based on Distributed Blockchain through Software Defined Networking-IoT Enabled Architecture

Abstract: The concept of Industry 4.0 is a newly emerging focus of research throughout the world. It nevertheless poses many data-management challenges, which can be addressed with technologies such as the Internet of Things (IoT), Big Data, Artificial Intelligence (AI), Software Defined Networking (SDN), and Blockchain (BC) for managing data securely. Further, the complexity of sensors, appliances, and sensor networks connected to the internet under the Industry 4.0 model has created the challenge of designing systems, infrastructure, and smart applications capable of continuously analyzing the data produced. To this end, the authors present distributed Blockchain-based security for Industry 4.0 applications in an SDN-IoT enabled environment, where the Blockchain provides robustness, privacy, and confidentiality to the desired system. In addition, SDN-IoT incorporates the different services of Industry 4.0 with greater security as well as flexibility. Furthermore, the authors offer an effective combination of the IoT, SDN, and Blockchain technologies to properly improve the security and privacy of Industry 4.0 services. Finally, the authors evaluate the performance and security of the presented architecture in a variety of ways.

Author 1: Anichur Rahman
Author 2: Umme Sara
Author 3: Dipanjali Kundu
Author 4: Saiful Islam
Author 5: Md. Jahidul Islam
Author 6: Mahedi Hasan
Author 7: Ziaur Rahman
Author 8: Mostofa Kamal Nasir

Keywords: IoT; SDN; BC; AI; security; privacy; industry 4.0

PDF

Paper 81: Small-LRU: A Hardware Efficient Hybrid Replacement Policy

Abstract: The replacement policy plays a major role in improving the performance of modern highly associative cache memories. As the demands of data-intensive applications increase, the size of the Last Level Cache (LLC) must be increased as well, which also increases the associativity of the cache. Modern LLCs are divided into multiple banks, where each bank is a set-associative cache, and the replacement policy implemented on such highly associative banks consumes significant hardware (storage and area) overhead. The Least Recently Used (LRU) replacement policy additionally suffers from dead blocks: a block in the cache is called dead if it is not used again before its eviction from the cache. Under LRU, a dead block cannot be evicted early; it must first become the LRU block. We therefore propose a replacement technique that is capable of evicting dead blocks early while reducing hardware cost by 77% to 91% in comparison to baseline techniques. In this policy, random replacement is used for 70% of the ways and LRU is applied to the rest. The early eviction of dead blocks also improves system performance by 5%.

Author 1: Purnendu Das
Author 2: Bishwa Ranjan Roy

Keywords: Replacement policies; cache memories; last level cache; hardware overheads; dead block

PDF
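A behavioral sketch of the hybrid split described above (70% of the ways under random replacement, the rest under true LRU) is given below. The promote-on-reuse detail and all names are illustrative assumptions, not the paper's exact mechanism; the point is that only a small fraction of ways needs LRU recency state, which is where the hardware saving comes from.

```python
import random

class HybridSet:
    """One set of a hybrid cache: 70% of ways use random replacement,
    the remaining ways use true LRU (illustrative policy sketch)."""
    def __init__(self, ways, seed=0):
        self.rng = random.Random(seed)
        self.n_rand = int(ways * 0.7)    # ways under random replacement
        self.n_lru = ways - self.n_rand  # ways needing LRU recency state
        self.rand_part = []              # tags in the randomly managed ways
        self.lru_part = []               # tags, least recently used first

    def access(self, tag):
        if tag in self.lru_part:         # LRU-partition hit: update recency
            self.lru_part.remove(tag)
            self.lru_part.append(tag)
            return True
        if tag in self.rand_part:        # random-partition hit: reuse observed,
            self.rand_part.remove(tag)   # promote the block into the LRU ways
            self._insert_lru(tag)
            return True
        self._insert_rand(tag)           # miss: new blocks fill the random ways
        return False

    def _insert_lru(self, tag):
        if len(self.lru_part) >= self.n_lru:
            self._insert_rand(self.lru_part.pop(0))  # demote the LRU block
        self.lru_part.append(tag)

    def _insert_rand(self, tag):
        if len(self.rand_part) >= self.n_rand:
            # Random eviction: a dead block need not age through an LRU stack.
            self.rand_part.pop(self.rng.randrange(len(self.rand_part)))
        self.rand_part.append(tag)

s = HybridSet(ways=16)
first = s.access("A")    # cold miss
second = s.access("A")   # hit; the reused block is promoted to an LRU way
```

For a 16-way set, only 5 ways carry LRU ordering state here, which illustrates how restricting LRU to a minority of ways shrinks the replacement-policy hardware.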

Paper 82: Parameter Estimation of the ALBA Autonomous Surface Craft

Abstract: The Arequipa region holds the largest extension of the Peruvian littoral on the Pacific and also has freshwater resources composed of rivers and lagoons stretching from the coast to the Andean highlands. The ALBA vehicle is a low-cost autonomous surface vessel with an open-source architecture that is being developed to support water monitoring tasks in the region. This article deals with the nonlinear identification problem for an autonomous surface craft, using the maximum likelihood estimation approach to estimate its parameters. The parametric nonlinear model is considered with both simulated and experimental data. The results show good fitting values when two, three, and at most four parameters are estimated.

Author 1: Melanie M. Valdivia-Fernandez
Author 2: Brayan A. Monroy-Ochoa
Author 3: Daniel D. Yanyachi
Author 4: Juan C. Cutipa-Luque

Keywords: Autonomous surface craft; parameter estimation; modeling; maximum likelihood; nonlinear; zigzag

PDF
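The identification idea above (fit the parameters of a craft model to data by maximum likelihood) can be sketched with a deliberately simple stand-in: a first-order Nomoto yaw-rate model T·r' + r = K·δ fitted by grid search. This is an illustrative assumption; the ALBA paper uses a more general nonlinear parametric model, and K, T, δ here are hypothetical.

```python
def simulate(K, T, delta, dt=0.1, n=200):
    """Yaw-rate response of a first-order Nomoto model T*r' + r = K*delta,
    integrated by forward Euler (a toy surface-craft model)."""
    r, out = 0.0, []
    for _ in range(n):
        r += dt * (K * delta - r) / T
        out.append(r)
    return out

def nll(params, data, delta):
    """Under i.i.d. Gaussian measurement noise, maximizing the likelihood
    is equivalent to minimizing the sum of squared residuals."""
    pred = simulate(*params, delta)
    return sum((d - p) ** 2 for d, p in zip(data, pred))

# Synthetic "experiment" with true K = 0.5, T = 2.0, then grid-search MLE.
data = simulate(0.5, 2.0, delta=0.2)
grid = [(k / 10, t / 10) for k in range(1, 11) for t in range(5, 45, 5)]
K_hat, T_hat = min(grid, key=lambda p: nll(p, data, 0.2))
```

With real zigzag-manoeuvre data one would replace the grid search by a numerical optimizer, but the estimator structure (simulate, compare, minimize the negative log-likelihood) is the same.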

Paper 83: Mobility-Aware Container Migration in Cloudlet-Enabled IoT Systems using Integrated Multicriteria Decision Making

Abstract: Service migration plays a vital role in continuous service delivery in Internet of Things (IoT) systems. This paper presents a mobility-aware container migration algorithm for Cloudlet-enabled IoT systems, based on an integrated multicriteria decision making (MCDM) approach. The algorithm has been implemented using a specialized simulation tool and compared to other existing migration algorithms. Simulation results demonstrate that the proposed algorithm achieves improvements of up to 48%, 48%, 20%, and 36% in migration time, service downtime, migration reliability, and service loss rate, respectively, compared to other migration algorithms. The proposed algorithm perceives the run-time dynamics of IoT systems and appropriately manages the process of container migration.

Author 1: Mutaz A. B. Al-Tarawneh

Keywords: Internet of Things (IoT); container; migration; Cloudlet; criteria; decision making

PDF
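A minimal form of the MCDM step described above is weighted-sum scoring over normalized criteria. The sketch below is a generic illustration, not the paper's integrated method; the criteria (latency, bandwidth, load), weights, and candidate values are hypothetical.

```python
def normalize(col, benefit):
    """Min-max normalize one criterion column.
    For benefit criteria higher raw values are better; for cost criteria lower."""
    lo, hi = min(col), max(col)
    if hi == lo:
        return [1.0] * len(col)
    return [(v - lo) / (hi - lo) if benefit else (hi - v) / (hi - lo) for v in col]

def best_target(candidates, weights, benefit_flags):
    """Weighted-sum MCDM: return the index of the best migration target."""
    cols = list(zip(*candidates))                      # one column per criterion
    norm = [normalize(col, b) for col, b in zip(cols, benefit_flags)]
    scores = [sum(w * norm[j][i] for j, w in enumerate(weights))
              for i in range(len(candidates))]
    return max(range(len(candidates)), key=scores.__getitem__)

# Hypothetical criteria per cloudlet: latency ms (cost), bandwidth Mbps
# (benefit), current load (cost).
cloudlets = [[20, 100, 0.9], [5, 80, 0.3], [15, 120, 0.7]]
choice = best_target(cloudlets, weights=[0.5, 0.3, 0.2],
                     benefit_flags=[False, True, False])
```

Re-evaluating such scores as device mobility changes the criterion values is what makes the migration decision mobility-aware.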

Paper 84: Disaster Recovery in Cloud Computing Systems: An Overview

Abstract: With the rapid growth of internet technologies, large-scale online services, such as data backup and data recovery, are increasingly available. Since these large-scale online services require substantial networking, processing, and storage capacities, it has become a considerable challenge to design equally large-scale computing infrastructures that support them cost-effectively. In response to this rising demand, cloud computing has been refined during the past decade and has turned into a lucrative business for organizations that own large datacenters and offer their computing resources. Undoubtedly, cloud computing provides tremendous benefits for data storage, backup, and data accessibility at a reasonable cost. This paper surveys and analyzes previous works proposed for disaster recovery in cloud computing, concentrating on the positive aspects and limitations of each proposal. Also discussed are the current challenges in handling data recovery in the cloud context and the impact of a data backup plan on preserving data in the event of natural disasters. A summary of the leading research work is provided, outlining its weaknesses and limitations in the area of disaster recovery in the cloud computing environment, together with an in-depth discussion of current and future research trends in this area. Several research directions that ought to be explored are pointed out as well, which may help researchers to discover and further investigate unresolved problems related to disaster recovery in the cloud environment.

Author 1: Abedallah Zaid Abualkishik
Author 2: Ali A. Alwan
Author 3: Yonis Gulzar

Keywords: Cloud computing; data backup; disaster recovery; multi-cloud

PDF

Paper 85: An IoT based Urban Areas Air Quality Monitoring Prototype

Abstract: According to the World Health Organization, the places most affected by polluting gases and suspended particles are urban areas, owing to emissions from human activities; such pollution has also caused disease and death for millions of people around the world. This paper describes the design and implementation of an electronic prototype that applies the Internet of Things concept together with a cloud storage and processing service. The device monitors air quality in real time by detecting pollutant gases and PM10 and PM2.5 suspended particles, enabling later studies that contribute to preventive measures for the health of the population.

Author 1: Martin M. Soto-Cordova
Author 2: Martha Medina-De-La-Cruz
Author 3: Anderson Mujaico-Mariano

Keywords: Air pollution; air quality; Arduino; Internet of Things (IoT); cloud service; MQTT; Air Quality Index (AQI); sensors

PDF

Paper 86: Modeling and Interpretation of Covid-19 Infections Data at Peru through the Mitchell’s Criteria

Abstract: In this paper, the criteria of Tom Mitchell, rooted in the philosophy of Machine Learning, are used to interpret data on new weekly cases of Covid-19 infections in Peru. For this, a mathematical scheme was constructed that encloses Mitchell's criteria as well as the idea of propagation as commonly used in modern physics to attack complex interaction problems. With this, both the 2009 AH1N1 flu outbreak season and the ongoing Covid-19 data were analyzed in terms of task, performance, and experience. In contrast with the AH1N1 case, the Covid-19 data do not exhibit any performance in terms of minimizing infections in the first weeks of the outbreak, suggesting that precise actions to reduce infections were not taken appropriately.

Author 1: Huber Nieto-Chaupis

Keywords: Covid-19; epidemiology; machine learning; Tom Mitchell; Monte Carlo

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org