The Science and Information (SAI) Organization
IJACSA Volume 11 Issue 3

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Apple Carving Algorithm to Approximate Traveling Salesman Problem from Compact Triangulation of Planar Point Sets

Abstract: We propose a modified version of the Convex Hull algorithm for approximating the minimum-length Hamiltonian cycle (TSP) in planar point sets. Starting from a full compact triangulation of a point set, our heuristic “carves out” candidate triangles with the minimal Triangle Inequality Measure until all points lie on the outer perimeter of the remaining partial triangulation. The initial candidate list consists of triangles on the convex hull of a given planar point set; the list is updated as triangles are eliminated and new triangles are thereby exposed. We show that the time and space complexity of the “apple carving” algorithm are O(n²) and O(n), respectively. We test our algorithm on a well-known problem subset and demonstrate that it outperforms nearly all other TSP tour construction heuristics.
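
As an illustration of the carving loop described in this abstract, the sketch below greedily removes exposed triangles from a Delaunay triangulation until every point reaches the perimeter. The triangle-inequality measure used here, (d(a,c) + d(c,b)) - d(a,b) for a triangle exposed along edge (a,b), and the stranding guard are our assumptions, not the authors' exact definitions.

```python
# Hedged sketch of the carving idea, not the authors' implementation:
# repeatedly remove the exposed triangle with the smallest
# triangle-inequality measure until all points lie on the perimeter.
import itertools
import numpy as np
from scipy.spatial import Delaunay

def dist(p, q):
    return float(np.linalg.norm(p - q))

def boundary_edges(triangles):
    """Edges belonging to exactly one remaining triangle."""
    count = {}
    for t in triangles:
        for e in itertools.combinations(sorted(t), 2):
            count[e] = count.get(e, 0) + 1
    return {e for e, c in count.items() if c == 1}

def carve(points):
    pts = np.asarray(points, dtype=float)
    triangles = {tuple(sorted(t)) for t in Delaunay(pts).simplices}
    while True:
        bnd = boundary_edges(triangles)
        if {i for e in bnd for i in e} == set(range(len(pts))):
            return triangles                     # all points on the perimeter
        best, best_tim = None, float("inf")
        for t in triangles:
            # Skip triangles whose removal would strand one of their vertices.
            if any(sum(v in u for u in triangles) == 1 for v in t):
                continue
            for a, b in itertools.combinations(t, 2):
                if (a, b) in bnd:                # triangle exposed along (a, b)
                    c = (set(t) - {a, b}).pop()
                    tim = dist(pts[a], pts[c]) + dist(pts[c], pts[b]) - dist(pts[a], pts[b])
                    if tim < best_tim:
                        best, best_tim = t, tim
        if best is None:                         # no carvable triangle left
            return triangles
        triangles.remove(best)

rng = np.random.default_rng(0)
print(len(carve(rng.random((30, 2)))), "triangles remain")
```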

Author 1: Marko Dodig
Author 2: Milton Smith

Keywords: TSP; heuristics; combinatorial optimization; computational geometry; compact triangulation

PDF

Paper 2: Real-Time Cryptocurrency Price Prediction by Exploiting IoT Concept and Beyond: Cloud Computing, Data Parallelism and Deep Learning

Abstract: Cryptocurrency has recently attracted considerable attention in the fields of economics, cryptography, and computer science, as it is an encrypted, peer-to-peer digital currency generated from code that serves as a medium of exchange much like real cash. This study focuses on combining deep learning with data parallelism and a cloud computing machine learning engine into a “hybrid architecture” to predict cryptocurrency prices from historical cryptocurrency data. The study uses 266,776 cryptocurrency price values from the pilot experiment, with a deep learning algorithm used for the price prediction. Four hybrid architecture models, namely (i) a standalone PC, (ii) cloud computing without data parallelism (GPU-1), (iii) cloud computing with data parallelism (GPU-4), and (iv) cloud computing with data parallelism (GPU-8), are introduced and used for the analysis. The performance of each model is evaluated using different performance evaluation parameters, and the efficiency of each model is compared across different batch sizes. The experimental results reveal that cloud computing technology, by performing parallel computing for the IoT, can reduce the computation time of the deep learning based cryptocurrency price prediction model by up to 90%, with similar benefits for many other IoT applications such as character recognition, the biomedical field, industrial automation, and natural disaster prediction.

Author 1: Ajith Premarathne
Author 2: Malka N. Halgamuge
Author 3: R. Samarakody
Author 4: Ampalavanapillai Nirmalathas

Keywords: Internet of things; IoT; data parallelism; deep learning; cloud computing

PDF

Paper 3: The New High-Performance Face Tracking System based on Detection-Tracking and Tracklet-Tracklet Association in Semi-Online Mode

Abstract: Despite recent advances in multiple object tracking and pedestrian tracking, multiple-face tracking remains a challenging problem. In this work, the authors propose a framework that solves the problem in a semi-online manner (the framework runs at real-time speed with a two-second delay). The proposed framework consists of two stages: detection-tracking and tracklet-tracklet association. The detection-tracking stage creates short tracklets; the tracklet-tracklet association stage merges those tracklets and assigns identities to them. To the best of the authors’ knowledge, the contributions are threefold: 1) a principle often used in online approaches is adopted as part of the framework, and a tracklet-tracklet association stage is introduced to leverage future information; 2) a motion affinity metric is proposed to compare the trajectories of two tracklets; 3) an efficient way to employ deep features in comparing tracklets of faces is proposed. The framework achieved 78.7% precision plot AUC and 68.1% success plot AUC on the MobiFace dataset (test set). On the OTB dataset, it achieved 78.2% and 72.5% precision plot AUC and 51.9% and 43.9% success plot AUC on the normal and difficult face subsets, respectively. The average speed was maintained at around 44 FPS. In comparison to state-of-the-art methods, the proposed framework ranks in the top 3 on both datasets while keeping a higher processing speed than the other top-3 methods.

Author 1: Ngoc Q. Ly
Author 2: Tan T. Nguyen
Author 3: Tai C. Vong
Author 4: Cuong V. Than

Keywords: Face tracking; face re-identification; detection-tracking; tracklet-tracklet association

PDF

Paper 4: Mobile Sensor Node Deployment Strategy by using Graph Structure based on Estimation of Communication Connectivity and Movement Path

Abstract: We propose a multiple mobile sensor node (MSN) deployment strategy that considers the wireless communication quality and operation time of underground wireless sensor networks. After an underground disaster, it is difficult to perform a rescue operation because the internal situation cannot be confirmed, so gathering information using a teleoperated robot has been widely discussed. However, wireless communication is unstable underground, and the wireless infrastructure needed to operate the teleoperated robot is unavailable. We have therefore studied a disaster information-gathering support system using wireless sensor networks and a rescue robot. In this study, the movement path information of the teleoperated robot is fed to MSNs as a graph structure. MSNs are deployed in the underground environment by adding an evaluation of communication quality and operation status to the given graph structure. The strategy was evaluated by simulation in an assumed underground environment. The results confirmed that the wireless communication quality between MSNs was maintained and energy consumption was balanced during deployment.

Author 1: Koji Kawabata
Author 2: Tsuyoshi Suzuki

Keywords: Wireless sensor networks; deployment strategy; communication connectivity

PDF

Paper 5: Classification of Malignant and Benign Lung Nodule and Prediction of Image Label Class using Multi-Deep Model

Abstract: Lung cancer is one of the world’s leading causes of death, and early diagnosis of lung nodules is of great significance for its prevention. Despite major improvements in modern diagnosis and treatment, the five-year survival rate is only 18%. Classification of lung nodules is an important step before diagnosis, in particular because automatic classification may provide doctors with a valuable second opinion. Although deep learning has improved image classification over traditional approaches that focus on handcrafted features, lung nodule classification remains challenging because of the large intra-class variation and inter-class similarity of images produced by various imaging modalities. In this paper, a multi-deep model (MD model) is proposed for lung nodule classification as well as for predicting the image label class. The model comprises three phases: multi-scale dilated convolutional blocks (MsDc), dual deep convolutional neural networks (DCNN A/B), and a multi-task learning component (MTLc). Initially, multi-scale features are derived through the MsDc process by using different dilation rates to enlarge the receptive field. This technique is applied to a pair of images. The images are fed to the dual DCNNs, and the two models learn from each other in order to enhance overall accuracy. To further improve performance, the output from both DCNNs is split into two portions. The multi-task learning part evaluates whether the input image pair belongs to the same group and also helps to classify the images as benign or malignant; furthermore, it can provide positive guidance if there is an error. Exploiting both the intra-class variation and inter-class similarity of the dataset itself increases the efficiency over a single DCNN. The effectiveness of the technique is tested empirically on the popular Lung Image Database Consortium (LIDC) dataset. The results show that the strategy is highly efficient, with a sensitivity of 90.67%, a specificity of 90.80%, and an accuracy of 90.73%.
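
The multi-scale dilated convolution idea in the MsDc phase can be sketched as parallel 3x3 convolutions with growing dilation rates whose outputs are concatenated. A minimal PyTorch sketch follows; the channel counts and dilation rates (1, 2, 4) are our assumptions, not the paper's configuration.

```python
# Illustrative multi-scale dilated convolution block; all sizes are assumptions.
import torch
import torch.nn as nn

class MsDcBlock(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding = dilation keeps the spatial size for a 3x3 kernel
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])

    def forward(self, x):
        # Concatenate same-resolution feature maps from all scales.
        return torch.cat([b(x) for b in self.branches], dim=1)

block = MsDcBlock(1, 16)
ct_patch = torch.randn(2, 1, 64, 64)   # a pair of nodule patches (toy input)
print(block(ct_patch).shape)           # torch.Size([2, 48, 64, 64])
```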

Author 1: Muahammad Bilal Zia
Author 2: Zhao Juan Juan
Author 3: Xujuan Zhou
Author 4: Ning Xiao
Author 5: Jiawen Wang
Author 6: Ammad Khan

Keywords: Lung nodule classification; dilated blocks; dual DCNNs; multi-task learning; multi-deep model

PDF

Paper 6: ECG and EEG Pattern Classifications and Dimensionality Reduction with Laplacian Eigenmaps

Abstract: In this paper, we investigate the effect of dimensionality reduction using Laplacian Eigenmaps (LE) on several classes of electroencephalogram (EEG) and electrocardiographic (ECG) signals. Classification results based on a boosting method for EEG signals exhibiting the P300 wave, and on k-nearest neighbours for ECG signals belonging to 8 classes, are computed and compared. For EEG signals, the difference between the classification rate in the original space and in the reduced space obtained with LE is relatively small, only a few percent (at most 10% for the 3-dimensional space), the original EEG signals belonging to a 128-dimensional space. This means that, for classification purposes, the dimensionality of EEG signals can be reduced without significantly affecting the global and local arrangement of the data. Moreover, for EEG signals collected at high frequencies, a first stage of data preprocessing can be performed by reducing the dimensionality. For ECG signals, for segmentation with and without centering on the R wave, there is a slight decrease in the classification rate at small data sizes. It is found that, for an initial dimensionality of 301, the size of the signals can be reduced to 30 without significantly affecting the classification rate. Below this dimension the classification rate decreases, but the results remain very good even for very small dimensions, such as 3. The classification results in the reduced space are remarkably close to those obtained in the initial spaces, even for small dimensions.
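
The evaluation pipeline the abstract describes, classifying in the original space and then in Laplacian Eigenmap embeddings of decreasing dimension, can be sketched with scikit-learn's SpectralEmbedding; the synthetic 8-class, 301-dimensional data below merely stands in for the ECG/EEG corpora.

```python
# Compare k-NN classification rates before and after Laplacian Eigenmaps.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.manifold import SpectralEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for the 301-dimensional, 8-class ECG data described above.
X, y = make_classification(n_samples=400, n_features=301, n_informative=20,
                           n_classes=8, n_clusters_per_class=1, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5)
full = cross_val_score(knn, X, y, cv=5).mean()

for dim in (30, 3):   # the reductions highlighted in the abstract
    X_low = SpectralEmbedding(n_components=dim, n_neighbors=10,
                              random_state=0).fit_transform(X)
    low = cross_val_score(knn, X_low, y, cv=5).mean()
    print(f"dim {dim}: accuracy {low:.3f} vs {full:.3f} in the 301-D space")
```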

Author 1: Monica Fira
Author 2: Liviu Goras

Keywords: Laplacian Eigenmaps; dimensionality reduction; biosignals; electrocardiographic signal (ECG); electroencephalogram (EEG)

PDF

Paper 7: A Solution to the Hyper Complex, Cross Domain Reality of Artificial Intelligence: The Hierarchy of AI

Abstract: Artificial Intelligence (AI) is an umbrella term used to describe machine-based forms of learning. This can encapsulate anything from Siri, Apple’s smartphone-based assistant, to Tesla’s autonomous vehicles (self-driving cars). At present, there are no set criteria for classifying AI, the implications of which include public uncertainty, corporate scepticism, diminished confidence, insufficient funding and limited progress. Substantial challenges currently exist with AI, such as combinatorially large search spaces, prediction errors against ground truth values, and the use of quantum error correction strategies. These are discussed in addition to fundamental data issues across collection, sampling error and quality. The concept of cross realms and domains used to inform AI is considered. Furthermore, there is the issue of the confusing range of current AI labels. This paper aims to provide a more consistent form of classification, to be used by institutions and organisations alike as they endeavour to make AI part of their practice. In turn, this seeks to promote transparency and increase trust. This has been done through primary research, including a panel of data scientists and experts in the field, and through a literature review of existing research. The authors propose a model solution in the form of the Hierarchy of AI.

Author 1: Andrew Kear
Author 2: Sasha L. Folkes

Keywords: Artificial intelligence; classification; ground truth value; Hierarchy of AI; Model of AI

PDF

Paper 8: An Ontology Driven ESCO LOD Quality Enhancement

Abstract: The labor market is a system that is complex and difficult to manage. To overcome this challenge, the European Union has launched the ESCO project, a language that aims to describe the labor market. To support the spread of this project, its dataset was published as linked open data (LOD). For LOD to be usable and reusable, a set of conditions has to be met: the LOD must be feasible and of high quality, it must provide the user with the right answers, and it has to be built according to a clear and correct structure. This study investigates the ESCO LOD, focusing on data quality and data structure. The former is evaluated by applying a set of SPARQL queries, which yields solutions for improving its quality via a set of rules built in first-order logic. This process was conducted based on a newly proposed ESCO ontology.

Author 1: Adham Kahlawi

Keywords: ESCO; linked open data; ontology; semantic web; data quality; SPARQL; OWL; metadata

PDF

Paper 9: Implementation of a Proof of Concept for a Blockchain-based Smart Contract for the Automotive Industry in Mauritius

Abstract: In recent years, there has been growing interest in blockchain technology across a wide range of industries. Blockchain technology has the potential to transform the way businesses operate, especially in the automotive industry. The distributed infrastructure and the secure nature of blockchain technology encourage trust among businesses and consumers. In Mauritius, the automotive industry faces challenges such as tampering with vehicle information, falsification of mileage and poor traceability, which lead to a lack of trust from customers. In this work, a proof of concept (POC) for a blockchain-based smart contract application has been proposed and implemented to mitigate these challenges. The automotive use cases (a) vehicle importation and (b) vehicle sale and registration have been implemented on the IBM blockchain platform, which provides a secure and transparent way to invoke transactions. Finally, the performance and benefits of the Hyperledger Fabric vehicle application have been assessed in terms of transparency, security, traceability and efficiency.

Author 1: Keshav Luchoomun
Author 2: Sameerchamd Pudaruth
Author 3: Somveer Kishnah

Keywords: Blockchain; smart contract; hyperledger fabric; vehicles; Mauritius

PDF

Paper 10: Method for Rainfall Rate Estimation with Satellite based Microwave Radiometer Data

Abstract: A method for rainfall rate estimation with satellite-based microwave radiometer data is proposed. A way to account for the geometric relationship between the observed ice particles and the microwave radiometer in the estimation of precipitation is presented, and its validity is shown by comparison with ground-based precipitation radar data. For scatterers at high altitude, such as ice particles, the location of the observation projected onto the ground surface differs greatly from the location in the upper troposphere where the scatterers actually exist. This effect was insignificant when precipitation was small, because ice particles were often absent, but it was found to be large when precipitation was large. In other words, the proposed method is effective for Advanced Microwave Scanning Radiometer (AMSR) data over Houston, shown as an example of a highly developed convective rain cloud, whereas in the case of Kwajalein the effect is insignificant. In addition, the proposed method requires an assumption about ice particle height, which must be based on climatic values. Furthermore, microwaves in the 89 GHz band, which are considered sensitive to ice particles, are not sensitive to ice particles alone, so it must be taken into account that they are also affected by the presence of non-ice particles.

Author 1: Kohei Arai

Keywords: Rainfall rate estimation; Advanced Microwave Scanning Radiometer (AMSR); geometric relation

PDF

Paper 11: Image-based Individual Cow Recognition using Body Patterns

Abstract: Illumination variation, non-rigid objects, occlusion, non-linear motion, and real-time implementation requirements make tracking in computer vision a challenging task. In order to recognize individual cows and mitigate these challenges, an image processing system is proposed that uses body pattern images of the cows. The system accepts an input image, performs processing operations on it, and outputs results in the form of classification into certain categories. Technically, a convolutional neural network is modelled for training and testing on each body pattern image from 1000 acquired images of 10 breeds of cow, passing it through a series of convolution layers with filters, pooling, fully connected layers and a softmax function to classify the pattern images with probabilistic values between 0 and 1. The performance of the proposed system was evaluated for each cow’s identification on both training and testing data, achieving accuracies of 92.59% and 89.95%, respectively.

Author 1: Rotimi-Williams Bello
Author 2: Abdullah Zawawi Talib
Author 3: Ahmad Sufril Azlan Mohamed
Author 4: Daniel A. Olubummo
Author 5: Firstman Noah Otobo

Keywords: Cow; body patterns; convolutional neural network; image; recognition

PDF

Paper 12: Modeling Network Security: Case Study of Email System

Abstract: We study operational security in computer network security, including infrastructure, internal processes, resources, information, and the physical environment. Current work on developing a security framework focuses on a security ontology that contributes a common vocabulary, but such an approach does not help construct a foundation for a holistic security methodology. We focus on defining the bounds of a security system and creating a representation of it by developing a diagrammatic representation (i.e., a model) as a means to describe computer network processes. The model, referred to as a thinging machine, is a first step toward developing a security strategy and plan. The general aim is to demonstrate that the representation of the security system plays a key role in making thinking visible through a conceptual description of the operational environment, the region in which active security operations are undertaken. We apply the proposed model to email security by conceptually describing a real email system.

Author 1: Sabah Al-Fedaghi
Author 2: Hadeel Alnasser

Keywords: Network security; conceptual model; diagrammatic representation; email system

PDF

Paper 13: Usability Study of Smart Phone Messaging for Elderly and Low-literate Users

Abstract: Smartphones are electronic devices that people can carry around and extend with compatible third-party apps. Smartphones were mainly developed for calling and messaging purposes, and all application interfaces are designed for current trends. Senior citizens and low-literate users therefore face difficulties using smartphones owing to the perceived complexity of the interface and functionality. This paper analyzes senior citizens’ and low-literate users’ requirements for reading and writing messages from the perspectives of “memory load”, “navigation consistency”, “consistency and standards”, and “touch screen finger-based tapping”. A framework based on “visual representation”, “navigation” and “miss-click avoidance” is then developed, and a comparison between the proposed application and other messaging applications is provided. This work focuses on senior citizens and low-literate users in order to improve their experience of smartphone messaging applications.

Author 1: Rajibul Anam
Author 2: Abdelouahab Abid

Keywords: Smartphone interface; smartphone messaging; visual color; adaptation

PDF

Paper 14: Improved Control Strategies of Electric Vehicles Charging Station based on Grid Tied PV/Battery System

Abstract: In this paper, improved control strategies for a smart topology of an EV charging station (CS) based on a grid-tied PV/battery system are designed and analyzed. The proposed strategies consist of three operating modes: Pv2B, charging a battery storage buffer (BSB) of the CS from solar energy; V2G, discharging an EV battery via the grid; and Pv2G, injecting the power produced by the PV system into the energy distribution system. The BSB is connected to the PV system through a single-ended primary inductor converter, while the V2G operating mode is emulated by an EV lithium-ion battery tied to the grid via a high-frequency full-bridge inverter and a bidirectional dc/dc converter. The aim of this work is to improve the energy efficiency of the CS by using a hybrid energy system. Simulation studies are performed in Matlab/Simulink to operate the proposed solar CS with multiple control strategies for each scenario, based on a CS management algorithm (CSMA). To provide credible findings, a low-power prototype was developed to validate the proposed CSMA and its associated controls.

Author 1: Abdelilah Hassoune
Author 2: Mohamed Khafallah
Author 3: Abdelouahed Mesbahi
Author 4: Tarik Bouragba

Keywords: Electric vehicle charging station; solar energy; battery storage buffer; electrical grid; charging station management algorithm

PDF

Paper 15: Problem based Learning with Information and Communications Technology Support: An Experience in the Teaching-Learning of Matrix Algebra

Abstract: Students and teachers face problems in the teaching-learning process of matrix algebra owing to the level of abstraction required, the difficulty of calculation and the way in which the contents are presented. Problem-Based Learning (PBL) arises as a solution, as it contextualizes the contents in everyday life, allows students to actively build that knowledge and contributes to the development of skills. The proposal describes a didactic sequence based on PBL that uses cooperative techniques and MATLAB as instruments to facilitate the resolution of problems close to the student's experience. The features of the Moodle platform are used to support the face-to-face educational process. Regarding students' perception of the activity, 83% believe that it contributed to their understanding of the topics covered and 79% think that it allowed them to develop their creativity and capacity for expression.

Author 1: Norka Bedregal-Alpaca
Author 2: Olha Sharhorodska
Author 3: Doris Tupacyupanqui-Jaen
Author 4: Victor Corneko-Aparicio

Keywords: Problem-based learning; cooperative techniques; constructionism; matrix algebra

PDF

Paper 16: Project based Learning Application Experience in Engineering Courses: Database Case in the Professional Career of Systems Engineering

Abstract: In many universities, formative research is applied in courses as a basic and fundamental element of research in the professional training of every student, strengthening and increasing knowledge of certain areas and developing skills, competences, abilities and attitudes. The present work reports the formal application of the Project-Based Learning (ABPr) methodology in the Database (BD) course at the Professional School of Systems Engineering (EPIS) of the National University of San Agustín (UNSA), Arequipa, Peru, accommodating the nature of the course, in which theory is taught by one teacher and laboratory practice by another. The goal is to apply an active teaching strategy to an engineering training course. The methodology is Project-Based Learning applied to a formative research project on a real problem in an organization, developed by each team during the semester, with deliverables evaluated on a grade scale and a formative research report assessed through a rubric; the teacher's input and feedback serve to improve the student's training. The results show that the training objectives were achieved, as was the development of the competencies related to the course, and that applying ABPr gives good results for engineering courses, serving as feedback for the continuous improvement of this course and as experience for implementing ABPr in other curriculum courses. We conclude that formative research, as a pillar of basic research initiation, runs in a cross-cutting way through the curriculum, and that active teaching strategies, properly planned and applied to each reality, achieve the desired results: increased knowledge of the area and strengthened skills, abilities and attitudes, as in the present case.

Author 1: César Baluarte-Araya

Keywords: Project based learning; formative research; competences; skills; formative research report

PDF

Paper 17: Heart Rate Monitoring with Smart Wearables using Edge Computing

Abstract: The heart is a vital component of human health, and the development of wearables and their sensors enables easy-to-use real-time monitoring. The goal of this study is to improve an IoT monitoring system by enabling real-time heart rate monitoring and analysis, and to assess the use of PPG sensors in smart wearables compared with other clinically tested heart rate sensors. The PPG sensor records the user's heart rate data physically, and the measurements are sent to the application for pre-processing. The application can then transmit the pre-processed measurements to the cloud server for monitoring or further analysis, i.e. to assess the health of the user's heart. A comparison with measurements collected by a BCG sensor is carried out in this paper. While neither is a clinical standard for heart rate measurement, the evaluation shows that the PPG sensor yields quite similar input data and assessment results during awake stages. The Fitbit sensor tested often underestimates heart rate, and sometimes responds with delay to, or fails to detect, a sudden increase in heart rate during sleep.

Author 1: Stephen Dewanto
Author 2: Michelle Alexandra
Author 3: Nico Surantha

Keywords: Internet-of-things; heart rate; smart wearables; real-time monitoring

PDF

Paper 18: Prediction of Prostate Cancer using Ensemble of Machine Learning Techniques

Abstract: Several diseases affect humans; some are specific to females and some to males. An example of a disease specific to males is prostate cancer (PC), which occurs when cells in the prostate gland start to grow uncontrollably. Statistics show that prostate cancer is becoming an epidemic among men, and several research works have tried to address this problem using various methods. Although numerous medical studies are ongoing in the area, the need to introduce technology to battle the epidemic is paramount. Researchers have therefore developed several models to help address prostate cancer in men, but the area is still open to contribution. Recently, some researchers have adopted well-established machine learning (ML) techniques to predict and diagnose the occurrence of prostate cancer, but issues of low prediction accuracy, inability to implement the model, and low sensitivity, among others, still linger. This paper approaches these challenges by developing an ensemble model that combines three ML techniques, Support Vector Machine (SVM), Decision Tree (DT), and Multilayer Perceptron (MP), to predict PC in men. The developed model was evaluated using sensitivity, specificity and accuracy as performance metrics, and the results showed a prediction accuracy of 99.06%, sensitivity of 98.09% and specificity of 99.54%, a relative improvement on existing systems.
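
A minimal sketch of such an SVM + decision tree + MLP ensemble using scikit-learn's VotingClassifier; the stand-in dataset and all hyper-parameters are placeholders, not the authors' prostate cancer data or settings.

```python
# Majority-vote ensemble of the three learners named in the abstract.
from sklearn.datasets import load_breast_cancer  # stand-in tabular medical data
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

ensemble = VotingClassifier([
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("mlp", make_pipeline(StandardScaler(),
                          MLPClassifier(max_iter=1000, random_state=0))),
], voting="soft")   # soft voting averages predicted class probabilities

X, y = load_breast_cancer(return_X_y=True)
print(cross_val_score(ensemble, X, y, cv=5, scoring="accuracy").mean())
```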

Author 1: Oyewo O. A
Author 2: Boyinbode O.K

Keywords: Prostate cancer; machine learning; support vector machine; decision tree; multilayer perceptron; diseases

PDF

Paper 19: WoT Communication Protocol Security and Privacy Issues

Abstract: In this paper, we propose a novel approach for protecting the Internet of Things (IoT) from fake devices and highlight privacy issues arising from the use of third-party Application Program Interfaces (RestAPIs) in the Web of Things (WoT). For the ease of everyday life, the use of IoT devices, sensors, and Radio-Frequency Identification (RFID) has increased rapidly, for example in transport for monitoring vehicles and taxi services, in healthcare for monitoring patients' health conditions, and in smart cars, smart grids, and smart homes. Consequently, attackers target these networks and protocols for financial gain, and adversaries try to damage the reputation of organizations or to steal intellectual property. For more than two decades, injection vulnerabilities have remained among the most threatening security risks for web applications. New security challenges arise for security professionals and researchers from the implementation of IoT and WoT communication protocols: Message Queuing Telemetry Transport (MQTT), the Constrained Application Protocol (CoAP), WebSockets, and RestAPIs each have different security issues, respectively the insertion of fake devices, the lack of authentication in WebSocket connections, and the leakage of users' privacy through RestAPIs used without validation. We have developed a program in PHP for the detection of new devices in the IoT network, protecting users' privacy and data and addressing some critical security issues of the underlying WoT protocols.

Author 1: Sadia Murawat
Author 2: Fahima Tahir
Author 3: Maria Anjum
Author 4: Mudasar Ahmed Soomro
Author 5: Saima Siraj
Author 6: Zojan Memon
Author 7: Anees Muhammad
Author 8: Khuda Bux

Keywords: Internet of Things; Web of Things; WoT; security issues; privacy issues; protocols; MQTT; CoAP

PDF

Paper 20: Smart Energy Control Internet of Things based Agriculture Clustered Scheme for Smart Farming

Abstract: The era of smart farming has already begun, and its consequences for society and the environment are expected to be massive. Internet of Things (IoT) technologies have become a key route towards new agricultural practices: IoT nodes detect and track physical or environmental conditions and transmit data through multihop routing to their base station. However, these IoT nodes face energy constraints and complex routing processes due to their limited capacities, which lead to data transmission failure and delay in IoT-based farming. Because of these limitations, IoT nodes distant from the base station depend on their cluster heads (CHs), placing an additional load on the CHs that leads to high energy consumption and shortens their lifetime. To address these issues, this research proposes a smart energy control IoT-based agriculture clustered scheme that reduces the load on CHs by introducing a novel clustering scheme. Simulations are conducted for validation, with a comparison against the LEACH protocol in agriculture; the results show that the proposed scheme has much lower energy consumption and a longer network lifetime than its counterparts.
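
For context, the LEACH baseline the scheme is compared against rotates cluster heads each round using the threshold T(n) = p / (1 - p (r mod 1/p)); a sketch of that election step is below. The authors' load-balancing refinement would adjust this probability in a way not specified in the abstract.

```python
# Classic LEACH-style cluster-head election (the baseline, not the
# proposed scheme).
import random

def leach_threshold(p, r):
    """LEACH rotation threshold for election probability p and round r.
    (Nodes that already served as CH in the last 1/p rounds are normally
    excluded; that bookkeeping is omitted here.)"""
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(node_ids, p=0.1, r=0):
    t = leach_threshold(p, r)
    return [n for n in node_ids if random.random() < t]

random.seed(1)
print(elect_cluster_heads(range(100), p=0.1, r=3))  # CH ids for this round
```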

Author 1: Sabir Hussain Awan
Author 2: Sheeraz Ahmed
Author 3: Zeeshan Najam
Author 4: Muhammad Yousaf Ali Khan
Author 5: Asif Nawaz
Author 6: Muhammad Fahad
Author 7: Muhammad Tayyab
Author 8: Atif Ishtiaq

Keywords: Agriculture; IoT; network; energy; scheme

PDF

Paper 21: Video Genre Classification using Convolutional Recurrent Neural Networks

Abstract: A vast amount of media on the internet is in the form of video files with different formats and encodings. Identifying and sorting videos becomes a mammoth task if done manually. With an ever-increasing demand for video streaming and downloads, the video classification problem comes to the fore for managing such large and unstructured data, both over the internet and locally. We present a solution for classifying videos into genres and locality by training a convolutional recurrent neural network. It involves feature extraction from video files in the form of frames and audio; the network then makes a prediction, and the final output layer places the video in a certain genre. This approach could be applied to a vast number of applications, including but not limited to search optimization, grouping, critic reviews, piracy detection, and targeted advertisements. We expect our fully trained model to identify, with acceptable accuracy, any video or video clip on the internet and thus eliminate the cumbersome problem of manual video classification.
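
The frame-then-sequence pipeline described here can be sketched as a small CNN feature extractor feeding an LSTM; all layer sizes below are illustrative assumptions, not the authors' architecture.

```python
# Toy convolutional recurrent network: per-frame CNN features -> LSTM -> genre.
import torch
import torch.nn as nn

class VideoGenreCRNN(nn.Module):
    def __init__(self, n_genres=8, feat=128):
        super().__init__()
        self.cnn = nn.Sequential(            # per-frame feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat),
        )
        self.rnn = nn.LSTM(feat, 64, batch_first=True)
        self.head = nn.Linear(64, n_genres)

    def forward(self, clips):                # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)          # last hidden state summarizes clip
        return self.head(h[-1])

logits = VideoGenreCRNN()(torch.randn(2, 16, 3, 64, 64))
print(logits.shape)                          # torch.Size([2, 8])
```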

Author 1: K Prasanna Lakshmi
Author 2: Mihir Solanki
Author 3: Jyothi Swaroop Dara
Author 4: Avinash Bhargav Kompalli

Keywords: Convolutional recurrent neural networks; video classification; temporal and spatial aspects; machine learning; computer vision; images classification; audio classification

PDF

Paper 22: Mobility Management Challenges and Solutions in Mobile Cloud Computing System for Next Generation Networks

Abstract: Of late, there has been dynamic development in the field of mobile computing, and mobile cloud computing (MCC) has become known as a promising direction for mobile services. Mobile phones and their applications enjoy a strong supporting infrastructure and are growing rapidly, and MCC is expected to deliver fundamentally more innovative multi-purpose applications. Mobile computing encompasses mobile hardware, mobile communication, and mobile software, and there are now numerous mobile cloud applications. Mobility management, which supports location tracking and location-based services, is a significant issue for smart cities, as it provides the means for the smooth transportation of people and goods; mobility contributes to the development of both public and private transportation systems in smart cities. Since the data resides in the cloud and is accessed with mobile phones, all transactions traverse the network and are therefore vulnerable to attack. To keep this essential tool dependable in a developing world, we present some solutions to the challenges to be addressed in the field of MCC. In this paper, the main challenges faced by enterprises and their corresponding solutions are discussed in the context of mobile cloud computing.

Author 1: L. Pallavi
Author 2: A. Jagan
Author 3: B. Thirumala Rao

Keywords: Mobility management; energy consumption; network resource management; traffic management; security management

PDF

Paper 23: State-of-the-Art Reformation of Web Programming Course Curriculum in Digital Bangladesh

Abstract: For the last 15 years, universities around the world have been continuously developing effective curricula for web engineering in order to create good opportunities for graduates to cope with the IT/software industry. In this study we show the gap between the skill requirements of the IT/software industry and universities' web course curricula, and we provide a balanced and structured web course curriculum for any university. Nowadays there is rapid development in web-based applications everywhere, but most of our students are late bloomers in programming, so to ease their difficulties in the web sector we need a balanced web curriculum and an effective teaching method. With this curriculum, one can achieve an overall idea and a minimum view of web engineering that is beneficial for further web development. Students gain only a little knowledge of web engineering at university because of the vastness of the content and the short duration of a semester. Our two-semester web course curriculum helps them overcome this problem: it has a substantial impact on achieving the minimum skills required in the web development field by the IT/software industry, covers most areas of web-related content, and increases problem-solving skills and versatile knowledge of web engineering at the undergraduate level.

Author 1: Susmita Kar
Author 2: Md. Masudul Islam
Author 3: Mijanur Rahaman

Keywords: Web engineering; web development; outcome based learning; CDIO; web course curriculum; web ecosystem; digital Bangladesh

PDF

Paper 24: Improving Performance of the Multiplexed Colored QR Codes

Abstract: The vast popularity and useful applications of the QR code have been incentives encouraging research towards improving its storage capacity and performance. A colored quick response (QR) algorithm is proposed, devised and tested to expand the capacity of the black-and-white modules to colored modules that can hold as many values as are available in the 8-bit Red-Green-Blue (RGB) color code. A Fast Multiplexing Technique (FMT) is established to improve the performance and storage capacity of the color QR code by multiplexing black/white QR codes into RGB color shades and then folding them into the QR code modules. Comparative experiments with the classical multiplexing technique (the MUX system) proved that FMT performs much better (exponentially faster), while maintaining a capacity of 24 times that of the classical QR code.
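
The folding idea, black-and-white module grids multiplexed into the channels of one colored grid, can be sketched in a few lines. The sketch below folds only three grids, one bit per channel; it illustrates channel multiplexing only, not the authors' FMT speed optimizations or the full 24-fold packing across the 8-bit shades of each channel.

```python
# Fold three binary QR module grids into the R, G and B channels and back.
import numpy as np

def multiplex(bw_grids):
    """Stack three binary module grids (0 = black, 1 = white) into RGB."""
    r, g, b = (np.asarray(m, dtype=np.uint8) * 255 for m in bw_grids)
    return np.stack([r, g, b], axis=-1)        # (rows, cols, 3) color image

def demultiplex(color_grid):
    """Recover the three binary grids by thresholding each channel."""
    return [(color_grid[..., i] > 127).astype(np.uint8) for i in range(3)]

rng = np.random.default_rng(0)
grids = [rng.integers(0, 2, (21, 21)) for _ in range(3)]   # version-1 sized
folded = multiplex(grids)
assert all((a == b).all() for a, b in zip(grids, demultiplex(folded)))
print(folded.shape)                                        # (21, 21, 3)
```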

Author 1: Islam M. El-Sheref
Author 2: Fatma A. El-Licy
Author 3: Ahmed H. Asad

Keywords: Quick response codes; colored quick response codes; data capacity enhancement; multiplexing quick response color coding

PDF

Paper 25: An Enhanced Twitter Corpus for the Classification of Arabic Speech Acts

Abstract: Twitter has gained wide attention as a major social media platform where many topics are discussed on a daily basis through millions of tweets. A tweet can be viewed as a speech act (SA): an utterance that presents information, hides an indirect meaning, or carries out an action. According to SA theory, an SA can represent an assertion, a question, a recommendation, or many other things. In this paper, we tackle the problem of constructing a reference corpus of Arabic tweets for the classification of Arabic speech acts. We refer to this corpus as the Arabic Tweets Speech Act Corpus (ArTSAC). It is an enhancement of ArSAS, a modern standard Arabic (MSA) tweet corpus of speech acts, and is more advantageous than ArSAS in the richness of its annotated features. The goal of ArTSAC is twofold: firstly, to understand the purpose and intention of tweets in accordance with SA theory, thereby positively influencing the development of many natural language processing (NLP) applications; secondly, as a future goal, to serve as a benchmark annotated dataset for testing and evaluating state-of-the-art Arabic SA classification algorithms and applications. ArTSAC has been put into practice to classify Arabic tweets containing speech acts using the Support Vector Machine (SVM) classification algorithm. The experiments show that the enhanced ArTSAC corpus achieved an average precision of 90.6% and an F-score of 89.6%, substantially outperforming the results of its predecessor, the ArSAS corpus.
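
The SVM classification step can be sketched as a standard TF-IDF plus linear SVM pipeline in scikit-learn; the toy utterances and labels below are invented placeholders, since ArTSAC's Arabic tweets, features and pre-processing are not reproduced here.

```python
# TF-IDF features + linear SVM over speech-act labels (toy placeholder data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "the match starts at nine tonight",       # placeholder "assertion"
    "when does the match start tonight",      # placeholder "question"
]
train_labels = ["assertion", "question"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_labels)
print(model.predict(["does the train leave at nine"]))
```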

Author 1: Majdi Ahed
Author 2: Bassam H. Hammo
Author 3: Mohammad A. M. Abushariah

Keywords: Arabic speech acts; twitter; modern standard Arabic; speech act classification

PDF

Paper 26: Recognition of Image in Different Cameras using an Improved Algorithm in Viola-Jones

Abstract: Technological evolution through computer tools has given rise to recognition tasks impossible for an ordinary person, and at the same time favorable for people's safety. Deep learning is considered a tool that uses images and video to detect and interpret real-world scenes. It is therefore necessary to validate the application of an algorithm with different cameras for the recognition of people, as a contribution to surveillance in domestic and corporate environments. In this research, an algorithm is presented that detects the image of a person through a camera. The objective is to validate the image recognition process with four cameras through the application of the improved Viola-Jones algorithm. The validation was carried out through a mathematical analysis, which allowed us to substantiate the recognition of the image using four different cameras. The study yielded an effective and functional validation of the results achieved with the algorithm across the four cameras, effective in speed-based recognition over the different tests performed on the capture and recognition of each image, reducing the recognition time and optimizing the software and hardware used.
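
As a point of reference, the stock Viola-Jones detector ships with OpenCV; the sketch below runs it against a single camera. The paper's improved variant and four-camera validation are not reproduced, and the camera index and detection parameters are placeholders.

```python
# Baseline Viola-Jones detection with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)        # placeholder: first of the four cameras
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"detected {len(faces)} person(s)")
cap.release()
```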

Author 1: Washington Garcia-Quilachamin
Author 2: Luzmila Pro - Concepción

Keywords: Algorithm; video surveillance; cameras; image of a person

PDF

Paper 27: An Improved RDWT-based Image Steganography Scheme with QR Decomposition and Double Entropy

Abstract: This paper introduces an improved RDWT-based image steganography scheme with QR decomposition and a double entropy system. It demonstrates an image steganography method that hides a grayscale secret image in a grayscale cover image using the RDWT, QR decomposition and entropy calculation. The proposed scheme makes use of the human visual system (HVS) in the embedding process. Both the cover and secret images are segmented into non-overlapping blocks of identical block size. The entropy values generated for every image block are then sorted from lowest to highest. The embedding process starts by embedding the secret image block with the lowest entropy value into the cover image block with the lowest entropy value, and continues until all image blocks have been embedded. Embedding the secret image into the cover image according to the entropy values causes differences that the HVS is less likely to detect because of the small changes in image texture. By applying the double entropy system, the proposed scheme achieves a higher PSNR value of 60.3773, compared with 55.5771 in previous work, and an SSIM value of 0.9998 compared with the previous work's 0.9967. The proposed scheme eliminates the false-positive issue and requires a low computational time of only 0.72 seconds for embedding and 1.14 seconds for extraction. It also shows better imperceptibility than previous work.
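
The double-entropy pairing at the core of the embedding order can be sketched directly: compute the Shannon entropy of each non-overlapping block in both images and match blocks lowest-to-lowest. The block size and entropy estimator below are standard choices, not taken from the paper; the RDWT/QR embedding itself is omitted.

```python
# Entropy-ranked pairing of cover and secret image blocks.
import numpy as np

def block_entropy(block):
    """Shannon entropy of an 8-bit grayscale block."""
    hist = np.bincount(block.ravel(), minlength=256).astype(float)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def blocks(img, size=8):
    h, w = img.shape
    return [(i, j, img[i:i + size, j:j + size])
            for i in range(0, h, size) for j in range(0, w, size)]

def pair_blocks(cover, secret):
    """Return (cover_block, secret_block) pairs matched by entropy rank."""
    def ordered(img):
        return sorted(blocks(img), key=lambda b: block_entropy(b[2]))
    return list(zip(ordered(cover), ordered(secret)))

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
secret = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(len(pair_blocks(cover, secret)), "matched block pairs")
```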

Author 1: Ke-Huey Ng
Author 2: Siau-Chuin Liew
Author 3: Ferda Ernawan

Keywords: Steganography; image steganography; transform domain; Redundant Discrete Wavelet Transform (RDWT); QR decomposition; entropy; human visual system (HVS); imperceptibility

PDF

Paper 28: Spectrum Occupancy Measurement of Cellular Spectrum and Smart Network Sharing in Pakistan

Abstract: In wireless communication, the radio spectrum is a scarce and precious resource, and efficiently exploiting the underutilized portions of statically allocated licensed bands has become a major problem. Recently, cognitive radio (CR) has emerged as a promising technology to overcome the spectrum crisis: a licensed band can be utilized by an unlicensed user as long as this does not affect the transmissions of the licensed user. In this paper, the spectrum occupancy of three bands, the GSM 900, 1800 and 2100 bands, has been measured with a spectrum analyzer in indoor and outdoor environments. The measured results for all three bands were processed in MATLAB as power spectral density versus frequency plots, and they show that the majority of the licensed band is underutilized. CR can therefore play a pivotal role in efficiently utilizing the unused spectrum and overcoming the cellular wireless spectrum crisis in Pakistan. The second part of this paper deals with the emerging concept of network sharing among mobile operators and its impact on cost. Network sharing has become the standard among mobile operators worldwide, and likewise in Pakistan: capital (CAPEX) and operational (OPEX) expenditure together with rapid technological advancement have encouraged all operators to adopt sharing business models. In Pakistan, all four mobile operators, Jazz, Telenor, Zong, and Ufone, are actively adopting this model to maintain EBITDA (earnings before interest, taxes, depreciation, and amortization). There are mainly two types of network sharing, passive infrastructure sharing and active resource sharing; passive network sharing is widely used among operators in Pakistan.
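
The occupancy statistic behind such measurements reduces to energy detection: the fraction of power spectral density samples above a decision threshold counts as occupied. A toy sketch follows; the synthetic PSD samples and the threshold value are placeholders, not the paper's measurements.

```python
# Energy-detection style band occupancy from PSD samples (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
psd_dbm = rng.normal(-100, 5, size=10_000)   # noise-floor PSD samples
psd_dbm[:1_500] += 30                        # a few active carriers

threshold_dbm = -90.0                        # placeholder decision threshold
occupancy = float(np.mean(psd_dbm > threshold_dbm))
print(f"band occupancy: {occupancy:.1%}")    # low value -> underutilized band
```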

Author 1: Aftab Ahmed Mirani
Author 2: Sajjad Ali Memon
Author 3: Saqib Hussain
Author 4: Muhammad Aamir Panhwar
Author 5: Syed Rizwan Ali Shah

Keywords: Cognitive radio; spectrum occupancy; cellular networks; spectrum analyzer; mobile network operators; sharing models; passive infrastructure sharing

PDF

Paper 29: Analysis on the Requirements of Computational Thinking Skills to Overcome the Difficulties in Learning Programming

Abstract: Programming has evolved as an effort to strengthen science, technology, engineering and mathematics (STEM). Programming is a complex process, especially for novices, since it requires problem-solving skills to develop algorithms and programme code. Problem-solving competencies, which are necessary 21st-century skills, include a set of cognitive skills related to problem-solving and programme development, specifically known as computational thinking (CT) skills. This study quantitatively assessed computational thinking skills in the context of programming, specifically with respect to the difficulties in learning programming. From the perspective of the instructors, the survey results highlight the need to implement CT skills as an approach in the teaching and learning of programming. A model for teaching and learning programming is necessary as a guide for instructors in the teaching and learning process.

Author 1: Karimah Mohd Yusoff
Author 2: Noraidah Sahari Ashaari
Author 3: Tengku Siti Meriam Tengku Wook
Author 4: Noorazean Mohd Ali

Keywords: Problem-solving; STEM; difficulties in learning programming; cognitive; novice

PDF

Paper 30: Adapted Lesk Algorithm based Word Sense Disambiguation using the Context Information

Abstract: The process of correctly identifying the meaning of a polysemous word from a given context is known as Word Sense Disambiguation (WSD) in natural language processing (NLP). An adapted Lesk algorithm based system is proposed that makes use of a knowledge-based approach, utilizing WordNet as the knowledge source (lexical database). The proposed system has three units: input query, pre-processing and WSD classifier. The task of the input query unit is to take the input sentence (an unstructured query) from the user and render it to the pre-processing unit. The pre-processing unit converts the received unstructured query into a structured query by adding features such as Part-of-Speech (POS) tags and grammatical identification (subject, verb, object), and this structured query is transferred to the WSD classifier. The WSD classifier uniquely identifies the sense of the polysemous word using the context information of the query and the lexical database.
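
The classifier stage can be illustrated with a bare-bones Lesk overlap over NLTK's WordNet (NLTK also ships a ready-made implementation as nltk.wsd.lesk); the plain gloss/context overlap count below is the simplest variant, not the paper's adapted weighting.

```python
# Simplified Lesk: pick the sense whose gloss overlaps the context most.
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def lesk_overlap(context_words, word):
    context = {w.lower() for w in context_words}
    best, best_score = None, -1
    for sense in wn.synsets(word):
        gloss = set(sense.definition().lower().split())
        score = len(gloss & context)            # gloss/context overlap
        if score > best_score:
            best, best_score = sense, score
    return best

sentence = "I went to the bank to deposit my money".split()
sense = lesk_overlap(sentence, "bank")
print(sense.name(), "-", sense.definition())
```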

Author 1: Manish Kumar
Author 2: Prasenjit Mukherjee
Author 3: Manik Hendre
Author 4: Manish Godse
Author 5: Baisakhi Chakraborty

Keywords: Word Sense Disambiguation; natural language processing; WordNet; context; machine translation

PDF

Paper 31: An Application of Zipf's Law for Prose and Verse Corpora Neutrality for Hindi and Marathi Languages

Abstract: The availability of text in different languages has become possible, as almost all websites offer a multilingual option. Hindi is considered an official language in one of the states of India, and Hindi text analysis is dominated by corpora of stories and poems. Before performing any text analysis, token extraction is an important step that supports many applications such as text summarization and text categorization. Token extraction is part of natural language processing (NLP), which includes many steps such as preprocessing the corpus, lemmatization and so on. In this paper, tokens are extracted by two methods on two corpora: BaSa, a context-based term extraction technique comprising several NLP activities including Term Frequency-Inverse Document Frequency (TF-IDF), and Zipf's law, which are used to count and compare the extracted tokens; a token comparison between the two methods is then carried out. The corpora contain prose and verse in Hindi as well as in the Marathi language. Common tokens from the prose and verse corpora of Marathi as well as Hindi are identified to show that both behave the same as far as NLP activities are concerned, and the improvement of BaSa over Zipf's law is demonstrated. The Hindi corpus includes 820 stories and 710 poems, and the Marathi corpus includes 610 stories and 505 poems.
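
The Zipf's-law side of the comparison can be sketched in a few lines: rank tokens by frequency and check that rank times frequency stays roughly constant. The three-line English corpus below merely stands in for the Hindi/Marathi collections.

```python
# Rank-frequency check of Zipf's law on a toy corpus.
from collections import Counter

corpus = "the cat sat on the mat and the dog sat on the log".split()
ranked = Counter(corpus).most_common()

for rank, (token, freq) in enumerate(ranked, start=1):
    # Under Zipf's law, rank * freq should be roughly constant.
    print(f"{rank:>2}  {token:<5} freq={freq}  rank*freq={rank * freq}")
```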

Author 1: Prafulla B. Bafna
Author 2: Jatinderkumar R. Saini

Keywords: Marathi; NLP; Synset; Zipf’s law

PDF

Paper 32: Logical Intervention in the Form of Work Breakdown Strategy using Object Constraint Language for e-Commerce Application

Abstract: This paper proposes a framework for rule-based inhibition on e-commerce websites for the prevention of double payment and for computing time invariants for concurrent event handling. The authors have analyzed computational models in terms of customer segmentation replicating buying characteristics and dialogue-level constraint establishment through OCL. The tools used are MDT-OCL and matching logic for logic-level interpretation; the MDT tool generates a context syntax tree. We have used an LPG grammar applied to WBS codification, differentiating descriptive nouns containing descriptions of the products from the verbs associated with the products. The authors have used Eclipse plug-ins to embed the logic constraint mapping and to check for ontological errors and double selection and payment errors.

Author 1: Shikha Singh
Author 2: Manuj Darbari

Keywords: OCL; e-commerce; concurrent handling; work breakdown structure; augmented querying

PDF

Paper 33: Adaptive Scheduling Design for Time Slotted Channel Hopping Enabled Mobile Adhoc Network

Abstract: Industrial Internet of Things (IIoT) applications comprise wearable sensor devices for human activity monitoring; these devices generate continuous data at a high data rate and are battery powered, which restricts the use of wireless protocols such as IEEE 802.15.4 and BLE (Bluetooth Low Energy). TSCH (Time Slotted Channel Hopping) MAC (Medium Access Control) is a promising technology that can be deployed in different environments prone to interference. In this research work we focus on designing Adaptive Scheduling (AS) for TSCH-MANET (Mobile Adhoc Network); it is very difficult to design a scheduling technique given the unpredictable nature of data source locations and wireless links, which results in wasted reserved resources. The proposed Adaptive Scheduling model allows both shared and dedicated slots, and it allows a communicating device to activate its assigned slots adaptively. Hence, the proposed AS model achieves higher overall access fairness, minimal idle-listening overhead and a higher packet delivery rate; further, to cope with higher traffic loads, a MANET device can activate additional slots dynamically. The evaluation of the Adaptive Scheduling model shows higher data transmission and lower energy consumption.

Author 1: Sridhara S. B
Author 2: Ramesha M
Author 3: Veeresh Patil

Keywords: 6TiSCH; access fairness; energy efficiency; MANET (Mobile Ad Hoc Network); TSCH (Time Slotted Channel Hopping); scheduling

PDF

Paper 34: JeddahDashboard (JDB): Visualization of Open Government Data in Kingdom of Saudi Arabia

Abstract: Open data is data that anyone can freely use, access and redistribute without financial, legal or even technical restrictions. Accordingly, all governmental and non-governmental organizations may publish the data they own openly on the Internet for various purposes without any restrictions (climate statistics, education statistics, transportation, industry, water abstraction, etc.). Open Government Data (OGD) initiatives have proliferated in every country, including the Kingdom of Saudi Arabia (KSA). OGD should supposedly escalate transparency, collaboration, and citizens' participation in using OGD. However, the presentation format of OGD may not be attractive enough to users, and the data may not be easy for them to understand and interpret; these stumbling blocks may dampen the use of OGD among citizens. The problems can be resolved by visualizing the available datasets and representing the data according to user preference. This research focuses on an OGD visualization effort in KSA, the JeddahDashboard (JDB) website, whose aim is to visualize the published government data in KSA. The idea was inspired by the DublinDashboard in Ireland, where data and real-time information, time series index data, and interactive maps on vast aspects of the city are provided, mostly in interactive ways and in attractive charts that are easy to understand. To create JeddahDashBoard, two tools were used, first Tableau and then Chart.js, because the latter is simple and flexible. Finally, this paper shares the researchers' experience and the challenges in establishing JDB.

Author 1: Mashael Khayyat

Keywords: Open Data; Open Government Data; visualization; Dashboard; Saudi Arabia; KSA

PDF

Paper 35: Optimizing Genetic Algorithm Performance for Effective Traffic Lights Control using Balancing Technique (GABT)

Abstract: A Genetic Algorithm (GA) is implemented and simulation tested for adaptable traffic light management at a four-road intersection. The employed GA uses hybrid Boltzmann Selection (BS) and Roulette Wheel Selection techniques (BS-RWS). Selection Pressure (SP) and Population (Pop) parameters are used to tune and balance the designed GA to obtain optimized and correct control of passing vehicles. A very successful implementation of these parameters resulted in a minimum number of iterations (IRN) over a wide spectrum of SP and Pop. The algorithm is mathematically modeled and analyzed, and a proof is obtained regarding the condition for a balanced GA. Such a balanced GA is most useful in traffic management for optimized Intelligent Transportation Systems, as it requires minimum iterations for convergence with faster dynamic control times.
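
The hybrid BS-RWS selection step can be sketched as roulette wheel sampling over Boltzmann-rescaled fitness, with the selection pressure acting like an inverse temperature; the fitness function and parameter values below are placeholders, not the paper's traffic model.

```python
# Boltzmann-rescaled roulette wheel selection of one parent.
import math
import random

def boltzmann_roulette(population, fitness, sp=2.0):
    """Pick one parent; a larger sp sharpens selection toward the best."""
    weights = [math.exp(sp * fitness(x)) for x in population]
    total = sum(weights)
    pick, acc = random.uniform(0, total), 0.0
    for x, w in zip(population, weights):
        acc += w
        if acc >= pick:
            return x
    return population[-1]

random.seed(0)
pop = [random.random() for _ in range(20)]   # candidate green-time splits
parent = boltzmann_roulette(pop, fitness=lambda x: 1 - abs(x - 0.6), sp=4.0)
print(parent)
```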

Author 1: Mahmoud Zaki Iskandarani

Keywords: Genetic algorithm; traffic lights; intelligent transportation systems; correlation; roulette wheel selection; boltzmann selection; selection pressure; population

PDF

Paper 36: A New Approach for Multi-Level Evaluation of Strategic Educational Goals

Abstract: Educational organizations with multiple levels of management promote their strategic educational goals as correlated and clustered data. Typical assessment and feedback approaches are paper-based, where text documents and flowcharts are used to evaluate strategic educational goals augmented with quantitative indicators. Unfortunately, the paper-based approach often neglects the relationships and dependencies between educational goals defined at different levels. This can complicate analysis, reduce clarity, and leave the goals open to different interpretations across management levels. We propose a multi-level, model-driven approach that improves the assessment of strategic educational goals, handles clustered data efficiently, and allows individual- and group-level assessment to take effect simultaneously. The approach also allows decision makers in academic institutions to extract valuable information from goal models at different academic levels and to measure the fulfilment of educational goals against target performance in a formal way.

Author 1: Mohammad Alhaj
Author 2: Mohammad Hassan
Author 3: Abdullah Al-Refai

Keywords: Evaluation process; goal model; multi-level modelling; goal requirement language; program educational goals

PDF

Paper 37: A Modified Weight Optimization for Artificial Higher Order Neural Networks in Physical Time Series

Abstract: Many methods have been proposed for analyzing and forecasting time series data, including Neural Network (NN) variants for specific tasks (e.g., deep learning and recurrent neural networks). Time series forecasting is a crucial component of many important applications, from stock markets to energy load forecasts. Recently, Swarm Intelligence (SI) techniques, including Cuckoo Search (CS), have been established as practical approaches for optimizing the parameters of time series forecasting models. Several modifications of CS have been proposed, including the Modified Cuckoo Search (MCS), which adjusts the parameters of the standard CS to improve convergence rates. Motivated by these advantages, we use an enhanced MCS known as the Modified Cuckoo Search-Markov Chain Monte Carlo (MCS-MCMC) learning algorithm for weight optimization in Higher Order Neural Network (HONN) models. The Levy-flight function in MCS is replaced with Markov Chain Monte Carlo (MCMC), which reduces the complexity of generating candidates for the objective function. To show that MCS-MCMC is suitable for forecasting, its performance was compared with the standard Multilayer Perceptron (MLP), standard Pi-Sigma Neural Network (PSNN), Pi-Sigma Neural Network-Modified Cuckoo Search (PSNN-MCS), Pi-Sigma Neural Network-Markov Chain Monte Carlo (PSNN-MCMC), standard Functional Link Neural Network (FLNN), Functional Link Neural Network-Modified Cuckoo Search (FLNN-MCS) and Functional Link Neural Network-Markov Chain Monte Carlo (FLNN-MCMC) on various physical time series and a benchmark dataset in terms of accuracy. The simulation results show that the HONN-based model combined with the MCS-MCMC learning algorithm improves accuracy by 0.007% to 0.079% on three (3) physical time series datasets.
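
A rough sketch of the MCS-MCMC idea as the abstract describes it: a cuckoo-search population loop in which the Levy-flight move is replaced by a Gaussian random-walk proposal accepted with the Metropolis rule. All constants and the toy objective (a stand-in for a HONN training loss) are assumptions, not the authors' settings.

```python
import numpy as np

def mcs_mcmc(objective, dim, n_nests=15, iters=200, step=0.1, pa=0.25, seed=0):
    """Cuckoo-search loop with an MCMC-style move in place of Levy flight
    (illustrative sketch of the MCS-MCMC idea, not the paper's algorithm)."""
    rng = np.random.default_rng(seed)
    nests = rng.normal(size=(n_nests, dim))          # candidate weight vectors
    cost = np.array([objective(n) for n in nests])
    n_drop = int(pa * n_nests)                        # nests abandoned per round
    for _ in range(iters):
        i = rng.integers(n_nests)
        proposal = nests[i] + rng.normal(scale=step, size=dim)  # random walk
        c = objective(proposal)
        # Metropolis acceptance: take improvements, sometimes accept worse.
        if c < cost[i] or rng.random() < np.exp(cost[i] - c):
            nests[i], cost[i] = proposal, c
        # Abandon a fraction pa of the worst nests (standard cuckoo-search step).
        worst = np.argsort(cost)[-n_drop:]
        nests[worst] = rng.normal(size=(n_drop, dim))
        cost[worst] = [objective(n) for n in nests[worst]]
    best = int(np.argmin(cost))
    return nests[best], cost[best]

# Example: minimize a toy quadratic standing in for HONN weight optimization.
w, c = mcs_mcmc(lambda v: float(np.sum((v - 1.0) ** 2)), dim=5)
print("best cost:", round(c, 4))
```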

Author 1: Noor Aida Husaini
Author 2: Rozaida Ghazali
Author 3: Nureize Arbaiy
Author 4: Norhamreeza Abdul Hamid
Author 5: Lokman Hakim Ismail

Keywords: Modified Cuckoo Search Markov Chain Monte Carlo; MCS-MCMC; neural networks; higher order; time series forecasting

PDF

Paper 38: Remote Sensing Satellite Image Clustering by Means of Messy Genetic Algorithm

Abstract: A Messy Genetic Algorithm (GA) is applied to satellite image clustering. A Messy GA can maintain long schemata, because schemata are expressed with variable-length codes, so more suitable clusters can be found than with the existing Simple GA clustering. Results with simulated data show that the proposed Messy GA-based clustering achieves four times better cluster separability than the Simple GA, while results with Landsat TM data of Saga show almost 65% better clustering performance.

Author 1: Kohei Arai

Keywords: Genetic Algorithm (GA); Messy GA; Simple GA; clustering

PDF

Paper 39: Improve Speed Real-Time Rendering in Mixed Reality HOLOLENS during Training

Abstract: Augmented reality (AR), virtual reality (VR), and mixed reality (MR) are advanced computer-visualization applications whose hybrid structure lets users explore novel interactions; used together with other technologies in healthcare, among other sectors, they promise a future in which medical personnel can carry out surgical operations precisely. HoloLens 1, an MR product by Microsoft, is one of the first such devices to be widely applied in medicine for the treatment of complex diseases, including operations that require great care, such as liver surgery. The main objective of this research is to use the HoloLens in performing surgeries while improving the rendering time interval and controlling semantic segmentation, maintaining the fidelity of the patient's liver data during surgery for the segmented 3D liver model. We then describe a technique that increases the number of light points: the greater the 3D intensity, the brighter the images and the easier they are to interact with. The holographic intensity is also tuned to avoid blurring the images the user sees through the transparent HoloLens lenses, improving time-interval lens sensitivity and detection of the user in the environment. Finally, we describe a new framework for improving real-time rendering speed and model segmentation using hybrid visualization between VR and AR (i.e., MR), in which render time is reduced by increasing point light through color calculations and an energy function that speeds up sending and receiving data via the Wi-Fi unit.

Author 1: Rafeek Mamdouh
Author 2: Hazem M. El-Bakry
Author 3: Alaa Riad
Author 4: Nashaat El-Khamisy

Keywords: Mixed reality; time interval; semantic segmentation; Microsoft HOLOLENS; computer visualization

PDF

Paper 40: Model of Tools for Requirements Elicitation Process for Children’s Learning Applications

Abstract: Requirements elicitation is the initial stage of the application development process, in which the set of needs the system must satisfy is obtained by communicating with stakeholders who directly or indirectly influence those needs. Failure in the requirements elicitation process is often caused by weak communication, which is essential to carrying out the process. Selecting the right elicitation technique is not the only issue: the informants who provide the requirements also need to be considered, and a well-chosen technique can still fail if the supporting tools are not useful. Appropriate tools are needed so that communication between the elicitation team and the informants goes well. Children have characteristics different from adults; their psychomotor, cognitive, and emotional limitations, which vary with their developmental age range, must be considered when choosing elicitation techniques and tools. Digital elicitation tools are recommended for the requirements elicitation process, since interactive tools make it easier for children to convey their wishes. In learning applications for children, the pedagogical aspects that need to be explored are learning styles and children's thinking abilities. Children in each age range have different learning-style preferences, because they do not yet have learning experience; the same applies to their level of thinking ability. Therefore, these two aspects need to be properly explored during the development of a learning application. The proposed elicitation tool model was built taking both of these pedagogical components into account. Test results for the built model show satisfactory user acceptance, which means that children could communicate well in conveying their needs as requirements for the learning application.

Author 1: Mira Kania Sabariah
Author 2: Paulus Insap Santosa
Author 3: Ridi Ferdiana

Keywords: Requirements elicitation; communication; children learning application; pedagogical aspect; learning style

PDF

Paper 41: Intelligent System for Price Premium Prediction in Online Auctions

Abstract: The use of data mining techniques in the field of auctions has attracted considerable interest from the research community. In auctions, users try to achieve the highest gain and avoid loss as much as possible. Data mining techniques can therefore be applied in the auction domain to develop an intelligent method that users can rely on in online auctions. However, determining the factors that affect the result of an auction, especially the initial price, is critical. In addition, the intelligent system must be built on clean data to ensure the accuracy of the results. In this paper, we propose an intelligent system (classifier) to predict the initial price of auctions. The proposed system uses the double smoothing method (DSM) for data cleaning in the preprocessing stage. The system is applied to a dataset collected from the eBay website and cleaned using the proposed DSM. In the training phase, the CART technique is employed to construct the classifier. Compared with similar techniques, the proposed system exhibits better performance in terms of accuracy and robustness against noisy data, as determined using ROC curves.
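
The abstract names two ingredients: DSM cleaning and a CART classifier. The sketch below is one plausible reading, not the authors' exact DSM: double (Holt) exponential smoothing flags points that sit far from the trend as noise, and a CART tree is then trained on the cleaned rows. The features and labels here are toy stand-ins.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def double_smooth(series, alpha=0.5, beta=0.3):
    """Double (Holt) exponential smoothing; used here as a stand-in for the
    paper's DSM cleaning step, to expose points far from the trend."""
    level, trend = series[0], series[1] - series[0]
    smoothed = [level]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
        smoothed.append(level)
    return np.array(smoothed)

prices = np.array([10., 11., 10.5, 55., 12., 12.5, 13.])  # toy auction prices
sm = double_smooth(prices)
residual = np.abs(prices - sm)
keep = residual < 3 * residual.std()       # drop suspected noise/outliers

# Train a CART classifier on the cleaned rows (features/labels are toys).
X = np.column_stack([prices[keep], sm[keep]])
y = (prices[keep] > prices[keep].mean()).astype(int)
clf = DecisionTreeClassifier(criterion="gini").fit(X, y)
```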

Author 1: Mofareah Bin Mohamed
Author 2: Mahmoud Kamel

Keywords: Classification; auction; CART; training; testing; preprocessing; noise; outlier; DSM

PDF

Paper 42: Applying Social-Gamification for Interactive Learning in Tuberculosis Education

Abstract: There are several methods of tuberculosis education, one of which is the DOTS (Directly Observed Treatment, Short-course) program. Tuberculosis education through the DOTS program is delivered in clinics and hospitals, and only to patients and their families. The purpose of this study is to describe the development and testing of a prototype (social-game education) for interactive education of tuberculosis patients in particular and the general public at large. Data were collected through direct observation of tuberculosis patients and health professionals (doctors, nurses, and DOTS health workers). The game challenges in the prototype carry tuberculosis information previously validated by a specialist. In addition to tuberculosis information as the main content, two other elements make up the prototype: gamification and social media. For the gamification element, this study adopted leaderboards, badges/achievements, challenges, and levels; the social media element includes likes, comments, and shares. Prototype testing was conducted with two participant groups (N = 48) consisting of 23 tuberculosis patients and 25 random participants. Using the User Experience Questionnaire (UEQ) technique, with a confidence interval of 5% (p = 0.05) per scale, this research focuses on identifying users' motivation in absorbing the presented information as well as the clarity of the prototype. The results indicate that participants were highly motivated by the prototype, as seen in the stimulation scale with an average of 1.578; likewise, the effectiveness of the information, measured on the perspicuity scale with a mean of 1.224, was quite good.

Author 1: Dhana Sudana
Author 2: Andi W.R. Emanuel
Author 3: Suyoto
Author 4: Ardorisye S. Fornia

Keywords: Education; gamification; mobile application; social-media; tuberculosis

PDF

Paper 43: New Approach for the Detection of Family of Geometric Shapes in the Islamic Geometric Patterns

Abstract: This article proposes a new approach to detect families of geometric shapes in Islamic geometric patterns. This type of pattern is constructed by tracing grids that respect precise measurement criteria and the symmetry concepts of a method called 'Hasba'. Such geometric patterns are generally found in the tiles that cover the floors or walls of many buildings around the Islamic world, such as mosques. The article describes a new method based on calculating the Euclidean distance between the different geometric shapes that constitute an Islamic geometric pattern, in order to detect similar regions in this type of pattern encountered in Islamic art.
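
A minimal sketch of the distance test the abstract describes, under assumed details (the descriptor below is our invention, not the paper's representation): each shape is reduced to a translation- and scale-invariant feature vector, and two shapes are assigned to the same family when the Euclidean distance between their descriptors falls below a tolerance.

```python
import numpy as np

def shape_descriptor(vertices):
    """Translation/scale-invariant descriptor: sorted distances from the
    centroid, normalized. (Hypothetical stand-in; shapes compared this way
    must have the same vertex count.)"""
    v = np.asarray(vertices, float)
    d = np.sort(np.linalg.norm(v - v.mean(axis=0), axis=1))
    return d / d.max()

def same_family(shape_a, shape_b, eps=0.05):
    """Two shapes join one family when the Euclidean distance between
    their descriptors is below eps."""
    diff = shape_descriptor(shape_a) - shape_descriptor(shape_b)
    return float(np.linalg.norm(diff)) < eps

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
big_square = [(0, 0), (3, 0), (3, 3), (0, 3)]
print(same_family(square, big_square))   # True: same family up to scale
```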

Author 1: Ait Lahcen Yassine
Author 2: Jali Abdelaziz
Author 3: El Oirrak Ahmed
Author 4: Abdelmalek. Thalal
Author 5: Youssef. Aboufadil
Author 6: M. A. Elidrissi R

Keywords: Family of geometric shapes; Euclidean distance; ‘Hasba’; geometric art; Islamic patterns

PDF

Paper 44: Place-based Uncertainty Prediction using IoT Devices for a Smart Home Environment

Abstract: In this work, an uncertainty prediction method for the home environment is proposed that uses IoT devices (sensors) and a place-based approach. A neural network (NN) based smart communication system was implemented to test the results of the place-based approach using inputs from sensors linked to the Internet of Things (IoT). Many smart home-automation systems are available that alert owners via IoT, but they can communicate only after an accident has happened. Predicting a hazard before it happens is very important for a safe home environment, given the presence of children and pets at home in the absence of parents and guardians. Therefore, in this work, an uncertainty prediction component (UPC) using the place-based approach makes suitable prediction decisions and plays a vital role in predicting uncertain events in the smart home environment. Different classifiers, namely multi-layer perceptrons (MLP), Bayesian Networks (BN), Support Vector Machines (SVM), and Dynamic Time Warping (DTW), are compared to assess the accuracy of the results obtained with the proposed approach. The results show that the place-based approach performs far better than the global approach with respect to both training and testing time: the computing times differ by almost a factor of ten, which is a substantial improvement for predicting uncertainties quickly.

Author 1: Amr Jadi

Keywords: IoT; place-based approach; uncertainty prediction; MLP; SVM; BN; DTW

PDF

Paper 45: “Onto-Computer-Project”, a Computer Project Domain Ontology: Construction and Validation

Abstract: Ontologies nowadays play a primordial role in representing, reusing, and sharing the knowledge of a given domain in a consensual and explicit way, particularly in the computing field. In this context we propose a domain ontology, named Onto-Computer-Project, which is the key to our research goal: to build a knowledge-based system for reusing computer projects. The intended system rests on the construction of a project memory, defined as a collection of historical, completed projects from the computing sphere, a sphere wide enough to include subfields ranging from databases and software engineering to artificial intelligence, computer vision, and beyond. This work first requires constructing a well-defined ontology that structures and unifies the vocabulary shared by the many actors in the domain of computer projects. To that end, this paper describes a construction approach for the proposed domain ontology based mainly on the existing "Methontology" methodology. The proposed seven-step construction approach is the result of a comparative study of ontology construction approaches belonging to different methodological categories; four main categories can be distinguished: construction from scratch, text-based construction, construction based on the reuse of existing ontologies, and crowdsourcing-based approaches. In this work we follow the construction-from-scratch approach: the proposed ontology is built autonomously, neither derived from nor an update of any existing ontology. In addition, this paper addresses the problem of validating the content of a domain ontology, for which we propose an incremental six-step validation approach. We studied existing validation approaches, some questionnaire-based and others based on question answering; the problem is that all the approaches studied are single-actor approaches, in which one validation actor validates the entire ontology, applying semantic and structural validation definitively, with no feedback. The main originality of our validation approach consists of three criteria: incremental validation, multi-actor intervention, and adherence to the "V" cycle. The passage from one validation step to another updates the initial ontology through the intervention of three experts (a project management expert, a computer project expert, and an ontology engineering specialist). Our approach provides feedback between all validation phases and can return to any expert for revalidation if needed. The result of this research is a validated ontology that allows us to build our project memory and feed the knowledge base that will serve our knowledge-based system.

Author 1: Mejri Lassaad
Author 2: Hanafi Raja
Author 3: Henda Hajjami Ben Ghezala

Keywords: Domain ontology; ontology construction; ontology validation; computer project; project memory; knowledge representation

PDF

Paper 46: Comparison of Item Difficulty Estimates in a Basic Statistics Test using ltm and CTT Software Packages in R

Abstract: Two free software packages in the R environment, "ltm" and "CTT", were tested to demonstrate their usefulness in test item analysis. The item difficulty parameters were calibrated from the binary responses of 205 examinees to a fifteen-item multiple-choice test using both Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies. The latent trait model package "ltm" employs the IRT framework, while the classical test theory package "CTT" operates under CTT. The IRT Rasch model was used to model the examinees' responses, and the conditional maximum likelihood estimation method was used to estimate the item difficulty parameters for all items. All item difficulty indices were also calculated with the "CTT" package. Both statistical analyses were done in R. Results show that the estimated item difficulty values for the fifteen items mostly differed between the two methods; under the IRT framework, items showed more extreme difficulty or easiness than under CTT. However, when the estimated values were categorized into intervals and labelled with verbal difficulty descriptions, the two methodologies showed similarities in their item difficulties.
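
For intuition, here is a small Python sketch of the CTT side together with a logit-scale difficulty (illustrative only: the paper estimates Rasch difficulties by conditional maximum likelihood with the R package "ltm", and the response matrix and verbal bands below are made up).

```python
import numpy as np

# Toy binary response matrix: rows = 205 examinees, columns = 15 items.
rng = np.random.default_rng(1)
responses = (rng.random((205, 15)) > rng.random(15)).astype(int)

# CTT difficulty index: proportion of examinees answering each item correctly.
p = responses.mean(axis=0).clip(0.01, 0.99)   # clip to keep the logit finite

# A logit-scale difficulty, loosely analogous to a Rasch parameter
# (a crude approximation, not conditional maximum likelihood).
logit = np.log((1 - p) / p)

def label(pj):
    # Hypothetical verbal-difficulty bands, for illustration only.
    if pj < 0.3:
        return "difficult"
    if pj < 0.7:
        return "moderate"
    return "easy"

for j, pj in enumerate(p, start=1):
    print(f"item {j:2d}: p = {pj:.2f} ({label(pj)}), logit = {logit[j-1]:+.2f}")
```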

Author 1: Jonald L. Pimentel
Author 2: Marah Luriely A. Villaruz

Keywords: Classical test theory; indices; item calibration; item difficulty; item response theory; R software

PDF

Paper 47: Mosques Smart Domes System using Machine Learning Algorithms

Abstract: Millions of mosques around the world suffer from problems such as poor ventilation and difficulty eliminating bacteria, especially at rush hours, when congestion leads to air pollution, the spread of bacteria, unpleasant odors, and discomfort during prayer times; most mosques do not have enough windows to ventilate well. This paper aims to solve these problems by building a model of smart mosque domes that uses weather features and outside temperatures. Machine learning algorithms, namely k-Nearest Neighbors (k-NN) and Decision Tree (DT), were applied to predict the state of the domes (open or closed). The experiments were carried out on the Prophet's Mosque in Saudi Arabia, which has twenty-seven manually operated domes. Both machine learning algorithms were tested and evaluated using different evaluation methods. Comparing the results, the DT algorithm achieved 98% accuracy versus 95% for the k-NN algorithm. The results of this study are promising, and the proposed model should be helpful for mosques that want to control their domes automatically.
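
A minimal scikit-learn sketch of the open/close prediction task described above. The weather features, labels, and the rule generating them are invented stand-ins for the paper's data; only the classifier pairing (k-NN vs. DT) follows the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical weather features: [temperature C, humidity %, wind km/h].
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(15, 48, 500),
                     rng.uniform(5, 60, 500),
                     rng.uniform(0, 40, 500)])
# Assumed rule for the toy labels: open the dome in mild, calm weather.
y = ((X[:, 0] < 35) & (X[:, 2] < 25)).astype(int)   # 1 = open, 0 = close

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("DT", DecisionTreeClassifier(max_depth=5))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```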

Author 1: Mohammad Awis Al Lababede
Author 2: Anas H. Blasi
Author 3: Mohammed A. Alsuwaiket

Keywords: Decision tree; k-nearest neighbors; smart domes; weather prediction; machine learning

PDF

Paper 48: Accident Detection and Disaster Response Framework Utilizing IoT

Abstract: The Internet of Things (IoT) offers noteworthy advantages over conventional information and communication technologies (ICT) for Intelligent Transportation Systems (ITS). With the progression of transportation systems and the increase in vehicles, road accidents are accumulating to an alarming level. About 1.256 million people die in road accidents every year, and it is very difficult to find the precise accident location of a victim. If an accident occurs, the victim's survival rate increases if instantaneous remedial assistance is given, which is only possible once the precise accident location is identified. The main purpose of this system is to detect an accident and locate the user. After tracing the location, the system searches nearby hospitals for remedial treatment and, in case of an emergency, sends a message containing the user's current location to them. The system also acquires recommended contacts from the cloud and sends them a message requesting support for the user via an API. If the user is safe, he can cancel the message before it is sent to the nearest hospital and the recommended contacts. This system helps users save lives within minimal time.

Author 1: Shoaib ul Hassan
Author 2: Jingxia CHEN
Author 3: Tariq Mahmood
Author 4: Ali Akbar Shah

Keywords: Internet of things (IOT); accident detection; nearby places; nearby hospitals; cloud computing; intelligent transportation systems; information and communication technologies

PDF

Paper 49: Design and Development of AI-based Mirror Neurons Agent towards Emotion and Empathy

Abstract: Retracted: After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IJACSA's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

Author 1: Faisal Rehman
Author 2: Adeel Munawar
Author 3: Aqsa Iftikhar
Author 4: Awais Qasim
Author 5: Jawad Hassan
Author 6: Fouzia Samiullah
Author 7: Muhammad Basit Ali Gilani
Author 8: Neelam Qasim

Keywords: Mirror neurons functionalities; emotions; empathy; machine learning; artificial intelligence

PDF

Paper 50: Effect of Header-based Features on Accuracy of Classifiers for Spam Email Classification

Abstract: Emails are an integral part of communication in today's world, but spam emails are a hindrance, reducing efficiency, creating security threats, and wasting bandwidth. Hence, they need to be filtered at the first filtering station so that employees are spared the drudgery of handling them. Most earlier approaches focus on building content-based filters using the body of an email message. Using selected header features to filter spam is a better strategy, initiated by a few researchers. In this context, our research aims to find the minimum number of features required to classify spam and ham emails. A set of experiments was conducted with three datasets and five feature selection techniques, namely Chi-square, Correlation, Relief Feature Selection, Information Gain, and Wrapper. Five classification algorithms were used: Naive Bayes, Decision Tree, NBTree, Random Forest, and Support Vector Machine. In most approaches, a trade-off exists between improper filtering and the number of features, so arriving at an optimal feature set is a challenge. Our results show that in order to achieve satisfactory filtering, a minimum of 5 and a maximum of 14 features are required.
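
A compact way to reproduce this kind of pipeline with scikit-learn: chi-square feature selection followed by a Naive Bayes classifier. The header features and labels below are synthetic placeholders; k=5 merely echoes the paper's lower bound of five features.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Hypothetical header-based features (non-negative counts), e.g. number of
# recipients, hops in Received lines, digits in the sender, subject length.
rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(400, 14))
y = (X[:, 0] + X[:, 3] > 9).astype(int)             # toy spam/ham labels

pipe = Pipeline([
    ("select", SelectKBest(score_func=chi2, k=5)),  # keep the 5 strongest features
    ("nb", MultinomialNB()),
])
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```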

Author 1: Priti Kulkarni
Author 2: Jatinderkumar R. Saini
Author 3: Haridas Acharya

Keywords: Email classification; Chi-Square; correlation; relief feature selection; wrapper; information gain; Naive Bayes; J48; spam; support vector machine; random forest; NBTree

PDF

Paper 51: Enhancing the Quality of Service of Cloud Computing in Big Data using Virtual Private Network and Firewall in Dense Mode

Abstract: Cloud computing entails accessing and storing programs and data over the Internet instead of on the hard drive of a personal computer; it is the delivery of software and hardware as a service over the Internet. The cloud gives consumers the ability to access big data and use applications from any device with Internet access; the key problem, however, is security, which can be addressed with a firewall and a Virtual Private Network. Previous research has examined deploying firewalls and Virtual Private Networks with throughput and load parameters in sparse mode. In this paper, firewalls and Virtual Private Networks are examined in dense mode in terms of average throughput, average packet loss, and average end-to-end delay. The research goal is to examine the performance of cloud computing without a firewall or Virtual Private Network, with a firewall only, and with both a firewall and a Virtual Private Network. The simulation results show that a firewall combined with a Virtual Private Network offers better security, as demonstrated by a wide-ranging investigation, with only a slight impact on cloud performance.

Author 1: Hussain Shah
Author 2: Aziz ud Din
Author 3: Abizar
Author 4: Adil Khan
Author 5: Shams ud Din

Keywords: Cloud computing; big data; firewall; virtual private network; security; performance

PDF

Paper 52: Cloud Computing Adoption at Higher Educational Institutions in the KSA for Sustainable Development

Abstract: Rapid advances in information and communication technologies (ICT) have prompted Higher Educational Institutions (HEIs) to enhance teaching and learning. Over the years, cloud computing (CC) has become an emerging and adoptable paradigm in many industries, including healthcare, finance, and law, thanks to its promising benefits; the trend is also growing in education around the globe. Due to its inherent qualities of reliability, scalability, flexibility, and reasonable cost, the cloud is a solution that addresses the accessibility issue for quality education. CC plays an important role and will have major impacts on HEIs of the Kingdom of Saudi Arabia (KSA) in the near future. HEIs can utilize the benefits of CC-based services offered by cloud service providers (CSPs), which may be owned by the KSA government, private companies, or third-party vendors. By using cloud-based services at HEIs, staff, faculty, and students can perform various academic responsibilities on demand. This paper discusses CC adoption for HEIs and explores the prominent features and potential benefits of adopting cloud services in the HEIs of KSA. The paper also examines the numerous challenges, impacts, and major issues involved in adopting cloud services for HEIs.

Author 1: Ashraf Ali

Keywords: Cloud computing; higher educational institutions; Cloud Service Providers (CSP); Software-as-a-Service (SaaS); Platform-as-a-Service (PaaS); Infrastructure-as-a-Service (IaaS)

PDF

Paper 53: Automatic Assessment of Performance of Hospitals using Subjective Opinions for Sentiment Classification

Abstract: Social media is a venue where the public shares opinions in the form of text, images, and videos. Hospitals' performance can be judged from the opinions written by patients or their relatives, and machine learning techniques can be used to detect the sentiments of the opinion givers. For the research presented in this article, opinions about a few large hospitals were collected from Facebook, Twitter, and the hospitals' webpages. A corpus was constructed, and sentiment analysis was performed after several preprocessing tasks. Resources such as the Stanford POS Tagger and WordNet were used to discover aspects. The challenges of annotating subjective opinions are discussed in detail. Two sentiment lexicons, the NRC-Affect-Intensity lexicon and SentiWordNet 3.0, were used to calculate sentiment scores of the comments, which were then used by different machine learning classifiers. The results of the experiments on the constructed dataset are also provided. For the experiments aimed at discovering users' overall sentiment towards a hospital, Random Forest outperformed the other classifiers, achieving an accuracy of 76.49% using scores from the NRC-Affect-Intensity lexicon. For the experiments aimed at discovering users' sentiment towards a particular aspect of a hospital, Random Forest again surpassed the other classifiers, reaching an accuracy of 80.7339% with NRC-Affect-Intensity sentiment scores. The results show that machine learning can be very helpful in identifying users' sentiments from the textual comments that are vastly available on different social media platforms; they can help improve hospital performance and are expected to contribute to the growing field of health informatics.

Author 1: Muhammad Badruddin Khan

Keywords: Health informatics; Classification Algorithms; Sentiment Analysis; Sentiment Lexicons; Text Mining

PDF

Paper 54: Priority based Energy Distribution for Off-grid Rural Electrification

Abstract: Rural off-grid electrification is challenging, mostly because it relies on limited-output renewable energy sources such as solar power systems. Since power generation depends on weather conditions, the reliability of the power supply is often affected by uncontrolled heavy usage or bad weather. Frequent total system blackouts not only disturb the nightly routine but can also be life-threatening if the rural community cannot initiate telephone communication with the outside world during a state of emergency caused by the power outage. To reduce the frequency of total system blackouts from these causes, we propose a priority-based energy distribution scheme that helps an off-grid standalone solar power system improve the overall operating hours of critical appliances in rural areas. The scheme takes into account the criticality of home appliances as defined by the rural users, so that the system distributes power based on its current state with the objective of prolonging the service availability of the critical appliances that matter most to the users. The scheme has been evaluated under simulated scenarios and shows that 100% operational availability of the critical appliances is achievable even during bad weather seasons with very low solar input.
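
The priority-based idea can be sketched as a greedy allocator: appliances are sorted by user-defined criticality and powered until the current energy budget is exhausted. The names, wattages, and priorities below are hypothetical, not values from the paper.

```python
def distribute_power(appliances, available_watts):
    """Greedy priority-based distribution: critical loads are powered first,
    and the remaining budget goes to less critical loads (illustrative)."""
    powered, budget = [], available_watts
    for name, watts, priority in sorted(appliances, key=lambda a: a[2]):
        if watts <= budget:
            powered.append(name)
            budget -= watts
    return powered, budget

# Priority 0 = most critical (e.g. telephony); larger = less critical.
appliances = [("telecom_link", 15, 0), ("lighting", 40, 1),
              ("refrigerator", 120, 2), ("television", 90, 3)]
print(distribute_power(appliances, available_watts=180))
# -> (['telecom_link', 'lighting', 'refrigerator'], 5): the TV is shed.
```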

Author 1: Siva Raja Sindiramutty
Author 2: Chong Eng Tan
Author 3: Sei Ping Lau

Keywords: PI (Panel Input); BP (Battery Power); critical appliances; non-critical appliances; prioritization; operating hour

PDF

Paper 55: Beyond Sentiment Classification: A Novel Approach for Utilizing Social Media Data for Business Intelligence

Abstract: Extracting people's opinions from social media has attracted a large number of studies over the years, a result of the growing popularity of social media platforms, where people share their sentiments and opinions. Extracting and analyzing these sentiments is beneficial in many ways, for example, for business intelligence. However, despite the large number of studies on extracting and analyzing social media data, only a fraction of them focus on its practical application. In this study, we focus on using product reviews to identify whether or not a review signals an intention to purchase. We propose a novel lexicon-based approach for classifying product reviews into those that signify purchase intention and those that do not. We evaluated the proposed approach on a benchmark dataset using accuracy, precision, and recall; the experimental results demonstrate its efficiency for purchase intention identification.

Author 1: Ibrahim Said Ahmad
Author 2: Azuraliza Abu Bakar
Author 3: Mohd Ridzwan Yaakub
Author 4: Mohammad Darwich

Keywords: Purchase intention; sentiment analysis; lexicon; social media; product reviews

PDF

Paper 56: Efficient Mining of Maximal Bicliques in Graph by Pruning Search Space

Abstract: In this paper, we present a new algorithm for mining, or enumerating, maximal biclique (MB) subgraphs in an undirected general graph. Our algorithm improves on the best existing algorithms in theoretical time efficiency. For an undirected graph with n vertices, m edges, and k maximal bicliques, our algorithm requires O(kn²) time, which is state-of-the-art performance. Our main idea is a strategy of pruning the search space extensively. This strategy is made possible by storing maximal bicliques immediately after detection and allowing them to be looked up at runtime to make pruning decisions. The space complexity of our algorithm is O(kn) because of the space used for storing the MBs; however, much space is saved by using a compact representation of the MBs, which is an advantage of our method. Experiments show that our algorithm outperforms other state-of-the-art methods.
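
For intuition about what is being enumerated (this is not the paper's O(kn²) pruning algorithm, which is far more efficient), here is a brute-force sketch for tiny graphs based on the closed-pair characterization: (A, B) is a maximal biclique exactly when B is the common neighborhood of A and vice versa.

```python
from itertools import combinations

def common_neighbors(adj, group):
    """Vertices adjacent to every member of `group` (excluding the group)."""
    sets = [adj[v] for v in group]
    return set.intersection(*sets) - set(group)

def maximal_bicliques(adj):
    """Exponential brute-force MB enumeration for tiny graphs:
    (A, B) is a maximal biclique iff B = N(A) and A = N(B)."""
    found = set()
    nodes = sorted(adj)
    for r in range(1, len(nodes) + 1):
        for A in combinations(nodes, r):
            B = common_neighbors(adj, A)
            if B and common_neighbors(adj, B) == set(A):
                found.add(frozenset({frozenset(A), frozenset(B)}))
    return found

# Path graph 1-2-3: its only maximal biclique is ({2}, {1, 3}).
adj = {1: {2}, 2: {1, 3}, 3: {2}}
for pair in maximal_bicliques(adj):
    print([sorted(side) for side in pair])
```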

Author 1: Youngtae Kim
Author 2: Dongyul Ra

Keywords: Graph algorithms; maximal bicliques; maximal biclique mining; complete bipartite graphs; pruning search space; social networks; protein networks

PDF

Paper 57: Nabiha: An Arabic Dialect Chatbot

Abstract: Nowadays, we live in an era of technology and innovation that affects various fields, including the sciences. In computing, many outstanding and attractive programs and applications have emerged, including programs that try to mimic human behavior. A chatbot is an example of an artificial intelligence-based computer program that tries to simulate human behavior by conducting a conversation and interacting with users in natural language. Over the years, various chatbots have been developed for many languages (such as English, Spanish, and French) to serve many fields (such as entertainment, medicine, education, and commerce). Unfortunately, Arabic chatbots are rare, and to our knowledge there is no previous work on developing a chatbot for the Saudi Arabic dialect. In this study, we developed "Nabiha", a chatbot that can hold a conversation with Information Technology (IT) students at King Saud University using the Saudi Arabic dialect. Nabiha will therefore be the first Saudi chatbot to use the Saudi dialect. To facilitate access, Nabiha is available on different platforms: Android, Twitter, and the Web. When a student wants to talk with Nabiha, she can download the application, talk with her on Twitter, or visit her website. Nabiha was tested by students of the IT department, and the results were somewhat satisfactory, considering the difficulty of the Arabic language in general and the Saudi dialect in particular.

Author 1: Dana Al-Ghadhban
Author 2: Nora Al-Twairesh

Keywords: Artificial intelligence; natural language processing; chatbot; artificial intelligence markup language; Pandorabots; Arabic; Saudi dialect

PDF

Paper 58: Personality Classification from Online Text using Machine Learning Approach

Abstract: Personality refers to the distinctive set of characteristics of a person that affect their habits, behaviors, attitudes, and patterns of thought. Text available on social networking sites provides an opportunity to recognize individuals' personality traits automatically. In this work, a machine learning technique, the XGBoost classifier, is used to predict four personality traits of the Myers-Briggs Type Indicator (MBTI) model, namely Introversion-Extroversion (I-E), iNtuition-Sensing (N-S), Feeling-Thinking (F-T), and Judging-Perceiving (J-P), from input text. A publicly available benchmark dataset from Kaggle is used in the experiments. The skewness of the dataset is the main issue in prior work; it is reduced here by applying a re-sampling technique, namely random over-sampling, resulting in better performance. To extract more personality signal from the text, preprocessing techniques including tokenization, word stemming, stop-word elimination, and feature selection using TF-IDF are also exploited. This work provides the basis for a personality identification system that could assist organizations in recruiting and selecting appropriate personnel and in improving their business by knowing the personality and preferences of their customers. The results obtained by all classifiers across all personality traits are good; however, the performance of the XGBoost classifier stands out, achieving more than 99% precision and accuracy for different traits.
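
A condensed sketch of the described pipeline using scikit-learn, the imbalanced-learn package for random over-sampling, and the xgboost package. The toy posts and the single binary trait below stand in for the Kaggle MBTI data; stemming is omitted for brevity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import RandomOverSampler   # imbalanced-learn package
from xgboost import XGBClassifier

# Toy posts and I-E labels (1 = extrovert), stand-ins for the MBTI dataset.
posts = ["i love quiet evenings with a book", "let's go out and meet everyone",
         "planning my week alone in detail", "big parties give me energy"] * 25
labels = [0, 1, 0, 1] * 25

# TF-IDF features with stop-word elimination, as the abstract describes.
X = TfidfVectorizer(stop_words="english").fit_transform(posts)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          random_state=0)

# Random over-sampling to counter class skew.
X_tr, y_tr = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)

clf = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```

In the full task, one such binary classifier would be trained per MBTI dimension (I-E, N-S, F-T, J-P).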

Author 1: Alam Sher Khan
Author 2: Hussain Ahmad
Author 3: Muhammad Zubair Asghar
Author 4: Furqan Khan Saddozai
Author 5: Areeba Arif
Author 6: Hassan Ali Khalid

Keywords: Personality recognition; re-sampling; machine learning; XGBoost; class imbalanced; MBTI; social networks

PDF

Paper 59: Control BLDC Motor Speed using PID Controller

Abstract: At present, green technology is a major concern in every country around the world, and electricity is a clean energy that encourages the adoption of this technology. The major applications of electrical energy are accomplished through electric motors, which convert electric power into mechanical energy. Brushless direct current (BLDC) motors have become very attractive in many applications due to their low maintenance costs and compact structure, and they can make industrial drives more dynamic. To perform well, a BLDC motor requires a control drive that regulates its speed and torque. This paper describes the design of a BLDC motor control system in MATLAB/Simulink using a Proportional Integral Derivative (PID) algorithm that effectively improves the speed control of these motors. The purpose of the paper is to provide an overview of the functionality and design of the PID controller. Finally, the study includes tests supporting the conclusion that the PID regulator is more applicable, operationally better, and more effective in achieving satisfactory control performance than other controllers.
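
Although the paper's design is in MATLAB/Simulink, the discrete PID loop itself is easy to sketch in Python. Below, a toy first-order motor model stands in for the BLDC plant; all gains and constants are assumed values, not the paper's tuning.

```python
# Discrete PID speed loop on a toy first-order motor model (illustrative).
Kp, Ki, Kd = 0.8, 2.0, 0.001               # assumed PID gains
dt, tau, K_motor = 0.001, 0.05, 40.0       # step (s), time constant, rad/s per V
setpoint = 200.0                           # target speed, rad/s

speed, integral, prev_err = 0.0, 0.0, setpoint
for step in range(3000):
    err = setpoint - speed
    integral += err * dt
    derivative = (err - prev_err) / dt
    v = Kp * err + Ki * integral + Kd * derivative
    v = max(min(v, 24.0), 0.0)             # supply-voltage (actuator) limits
    # First-order motor response: d(speed)/dt = (K_motor*v - speed) / tau.
    speed += dt * (K_motor * v - speed) / tau
    prev_err = err
    if step % 1000 == 0:
        print(f"t={step*dt:.2f}s speed={speed:.1f} rad/s")
```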

Author 1: Md Mahmud
Author 2: S. M. A. Motakabber
Author 3: A. H. M. Zahirul Alam
Author 4: Anis Nurashikin Nordin

Keywords: PID controller; green technology; fuzzy logic control; speed control; BLDC motor

PDF

Paper 60: Recurrent Neural Networks for Meteorological Time Series Imputation

Abstract: The aim of this paper is to analyze the effectiveness of recurrent neural networks in imputing missing values in meteorological time series. Six different models based on recurrent neural networks, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), are implemented and evaluated on hourly meteorological time series of temperature, wind direction, and wind velocity. The implemented models have architectures of 2, 3, and 4 sequential layers, and their results are compared with each other as well as with other imputation techniques for univariate time series, mainly those based on moving averages. The results show that for temperature time series, the recurrent neural networks on average achieve better results than the moving-average-based imputation techniques; for wind direction time series, on average only one RNN-based model manages to exceed the moving-average-based models; and for wind velocity time series, on average no RNN-based model exceeds the results achieved by the moving-average-based models.
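
One such configuration can be sketched with Keras: train an LSTM to predict the next hourly value from a context window, then fill gaps by rolling prediction. Layer sizes, the window length, and the synthetic series below are assumptions, not the paper's settings.

```python
import numpy as np
import tensorflow as tf

def train_imputer(series, window=24):
    """Train an LSTM to predict the next hourly value from the previous
    `window` values (a minimal single-layer sketch)."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(window, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X[..., None], y, epochs=5, verbose=0)
    return model

def impute(series, model, window=24):
    """Fill NaN gaps by rolling one-step-ahead prediction."""
    filled = series.copy()
    for i in np.where(np.isnan(filled))[0]:
        ctx = filled[i - window:i]
        if len(ctx) == window and not np.isnan(ctx).any():
            filled[i] = float(model.predict(ctx[None, :, None], verbose=0)[0, 0])
    return filled

temps = np.sin(np.arange(500) * 2 * np.pi / 24) * 10 + 20  # synthetic hourly temps
model = train_imputer(temps)
gappy = temps.copy()
gappy[100:103] = np.nan
print(impute(gappy, model)[100:103])
```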

Author 1: Anibal Flores
Author 2: Hugo Tito
Author 3: Deymor Centty

Keywords: Recurrent neural network; long short-term memory; gated recurrent unit; univariate time series imputation

PDF

Paper 61: Enhance the Security and Prevent Vampire Attack on Wireless Sensor Networks using Energy and Broadcasts Threshold Values

Abstract: Measuring and monitoring the surrounding environment are the main tasks of most battery-powered Wireless Sensor Networks (WSNs), and communication between nodes is the main consumer of energy in a WSN. Much research has addressed how to preserve the energy of the nodes in such networks, and most of these methods can save energy and make the WSN live longer. However, energy can also be drained, and nodes lost from the network, by threats that target this kind of network, such as the vampire attack, which loads the WSN with fake traffic. In this paper, a method is proposed for preventing the vampire attack from wasting the energy of the sensor nodes, based on the energy level of the intermediate nodes on the way to the destination.
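
One way to read the energy and broadcast thresholds named in the title (an interpretation, not the paper's exact rule): an intermediate node stops forwarding when its residual energy drops below a floor, or when a sender exceeds a per-interval broadcast quota, throttling vampire-style flooding.

```python
# Hypothetical threshold-based forwarding rule; names and values are assumed.
ENERGY_THRESHOLD = 0.2       # fraction of initial battery
BROADCAST_LIMIT = 50         # max broadcasts per sender per interval

class SensorNode:
    def __init__(self, energy=1.0):
        self.energy = energy
        self.broadcast_count = {}          # per-sender counters

    def should_forward(self, sender_id):
        count = self.broadcast_count.get(sender_id, 0) + 1
        self.broadcast_count[sender_id] = count
        if self.energy < ENERGY_THRESHOLD:
            return False                   # preserve the remaining battery
        if count > BROADCAST_LIMIT:
            return False                   # likely vampire-style flooding
        return True

node = SensorNode(energy=0.6)
print([node.should_forward("n7") for _ in range(3)])   # [True, True, True]
```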

Author 1: Hesham Abusaimeh

Keywords: Network security; vampire attack; sensor nodes; energy; lifetime; power consumption; packets delivery ratio

PDF

Paper 62: Automated Measurement of Hepatic Fat in T1-Mapping and DIXON MRI as a Powerful Biomarker of Metabolic Profile and Detection of Hepatic Steatosis

Abstract: Abnormal or excessive accumulation of intraperitoneal fat at different anatomical sites (heart, kidneys, liver, etc.) alters the metabolic profile and generates diseases causing cardiovascular complications. These include hepatic steatosis, which requires increased surveillance before its severe progression to cirrhosis and its complications. Our objective in this in-vivo study was to propose a new approach to characterize and quantify hepatic fat, and then to differentiate patients with metabolic diseases, obesity, type 2 diabetes (T2D), and metabolic syndrome from healthy subjects. This distinction was made not only with traditional measurement tools such as body mass index (BMI) and waist circumference, but also according to the amount of fat derived from magnetic resonance imaging (MRI) DIXON images and T1-mapping at 1.5 Tesla (T). The evaluation results show that the proposed approach is reproducible, fast, and robust. The distribution of hepatic fat in a data cohort composed of four groups shows that hepatic fat is able to differentiate the metabolic populations in the study cohort. The relationship between hepatic fat and cardiovascular parameters shows that hepatic fat has a negative influence on the heart as its amount increases.

Author 1: Khouloud AFFI
Author 2: Mnaouer KACHOUT

Keywords: Nonalcoholic fatty liver disease; non-alcoholic steatohepatitis; image processing; metabolic diseases; magnetic resonance imaging; active contour

PDF

Paper 63: Producing Standard Rules for Smart Real Estate Property Buying Decisions based on Web Scraping Technology and Machine Learning Techniques

Abstract: Purchasing real estate property is a stressful and time-consuming activity, regardless of whether the individual in question is a buyer or a seller. It is also a major financial decision that can lead to numerous consequences if taken hastily. It is therefore encouraged that a person properly invest time and money in research relating to price demands, property type, location, and so on. Assessing which real estate property can be considered the best to buy is a difficult task. The key idea of the current study is to create a set of standard rules, based on web scraping technology and machine learning techniques, that can be embraced to make a smart real estate purchase decision.

Author 1: Haris Ahmed
Author 2: Tahseen Ahmed Jilani
Author 3: Waleej Haider
Author 4: Syed Noman Hasany
Author 5: Mohammad Asad Abbasi
Author 6: Ahsan Masroor

Keywords: Web scraping technology; HtmlAgilityPack; machine learning; C4.5 decision tree; Weka-J48

PDF

Paper 64: TADOC: Tool for Automated Detection of Oral Cancer

Abstract: Cancer is a group of related diseases, and it is necessary to classify its type and impact. In this paper, an automated learning-based system for detecting oral cancer from Whole Slide Images (WSI) is designed. The main challenges were handling the huge dataset and training the machine learning model, since each iteration was time-consuming; this increased the time needed to obtain a proper model and reduced the freedom to experiment. Other key features of the system are a modern deep learning architecture to classify small patches extracted from the large whole slide images, and carefully designed post-processing methods for slide-level classification.

Author 1: Khalid Nazim Abdul Sattar

Keywords: Cancer; CT Scan; MRI Scan; Machine Learning; Deep learning; Convolutional Neural Network (CNN); Whole Slide Image (WSI); Residual Networks (ResNets)

PDF

Paper 65: Deep Learning based, a New Model for Video Captioning

Abstract: Visually impaired individuals face many difficulties in their daily lives. In this study, a video captioning system has been developed for visually impaired individuals that analyzes events through real-time images and expresses them in meaningful sentences. The aim is to better understand the problems visually impaired individuals experience in their daily lives; for this reason, the opinions and suggestions of disabled individuals within the Altınokta Blind Association (a Turkish organization of blind people) were collected to produce more realistic solutions to their problems. In this study, MSVD, which consists of 1,970 YouTube clips, was used as the training dataset. All clips were first muted so that audio was not used in the sentence generation process. CNN and LSTM architectures were used to generate sentences, and the experimental results were compared using BLEU-4, ROUGE-L, CIDEr, and METEOR.

Author 1: Elif Güsta Özer
Author 2: Ilteber Nur Karapinar
Author 3: Sena Basbug
Author 4: Sümeyye Turan
Author 5: Anil Utku
Author 6: M. Ali Akcayol

Keywords: Video captioning; CNN; LSTM

PDF

Paper 66: Optimized Approach in Requirements Change Management in Geographically Dispersed Environment (GDE)

Abstract: Managing requirements is an essential trait of the engineering development process, as requirements change and emerge throughout development. In this research, the primary focus is managing requirements that change frequently in a geographically dispersed setup. Efficiently and effectively coping with changing requirements is the key to fulfilling customers' requirements in a geographically dispersed environment (GDE); an appropriate procedural model is therefore presented in this work to deal with changing requirements, cut the overall cost of the project, and increase profitability by satisfying the customers and the stakeholders. We propose an approach to tackle changing requirements in geographically dispersed software development and validate the presented procedural model through a case scenario. A comprehensive systematic literature review has been performed (Section II) to ground an efficient methodology for GDE, its traits and risks, and the effective management of evolving project requirements in a geographically dispersed environment. Changing requirements in a geographically dispersed environment can be managed effectively if the proposed MCR model is followed; it mitigates the risks and challenges faced in global software development, cuts the overall project cost, and is expected to increase profitability.

Author 1: Shahid N. Bhatti
Author 2: Frnaz Akbar
Author 3: Mohammad A. Alqarni
Author 4: Amr Mohsen Jadi
Author 5: Abdulrahman A. Alshdadi
Author 6: Abdulah J. Alzahrani

Keywords: CM (Change Moderator); GDE (Geographically Dispersed Environment); MCR (Managing Changing Requirements); MCR in the GDE framework; RM (Requirements Management)

PDF

Paper 67: Data Mining for Student Advising

Abstract: This paper illustrates how to use data mining techniques to help advise students and predict their academic performance. Data mining is used to extract previously unknown, hidden, and perhaps vital knowledge from large amounts of data. It combines domain knowledge, advanced analytical skills, and a vast knowledge base to reveal hidden patterns and trends applicable in virtually any sector, from engineering to medicine to business. Educational institutes can likewise use data mining to find useful information in their databases; this is usually called Educational Data Mining (EDM). Advancing the field of EDM with new data analysis techniques and new machine learning algorithms is vital. Classification and clustering techniques are used in this project to study and analyze student performance. A key contribution of this project is its literature review of different data mining techniques for studying student behavior based on performance; we tried to identify the most suitable algorithms among existing research methods to predict the success of students. Various data mining approaches were discussed and their results evaluated. In this paper, the J48 algorithm was applied to a dataset gathered from Umm Al-Qura University in Makkah.

Author 1: Hosam Alhakami
Author 2: Tahani Alsubait
Author 3: Abdullah Aljarallah

Keywords: Data mining; performance prediction; student analytics; academic advising; classification algorithms; decision tree; J48; neural network; Weka

PDF

Paper 68: Climate Change Adaptation and Resilience through Big Data

Abstract: The adverse effects of climate change are gradually increasing all over the world, and developing countries suffer the most. Big data can be an effective tool for forming appropriate adaptation strategies and enhancing people's resilience. This study aims to explore the potential of big data for devising proper strategies against climate change effects and for enhancing people's resilience in the face of those effects. A systematic review of the literature of the last ten years has been conducted. This study argues that resilience is a process of bouncing back to the previous condition after facing an adverse effect; it also focuses on the integrated function of the adaptive, absorptive, and transformative capacity of a social unit, such as an individual, a community, or a state, facing a natural disaster. Big data technologies have the capacity to provide information on upcoming issues, current issues, and the recovery stages of climate change's adverse effects. The findings of this study will enable policymakers and related stakeholders to adopt appropriate adaptation strategies for enhancing the resilience of people in the affected areas.

Author 1: Md Nazirul Islam Sarker
Author 2: Bo Yang
Author 3: Yang Lv
Author 4: Md Enamul Huq
Author 5: M M Kamruzzaman

Keywords: Disaster resilience; administrative resilience; community resilience; disaster management; environmental management

PDF

Paper 69: Enhanced Accuracy of Heart Disease Prediction using Machine Learning and Recurrent Neural Networks Ensemble Majority Voting Method

Abstract: Machine Learning (ML) techniques, a branch of artificial intelligence, are commonly used to solve many problems in data science. The major use of ML is to predict outcomes based on existing data: using an established dataset, the machine learns patterns and applies them to unfamiliar datasets to predict outcomes. Some classification algorithms achieve satisfactory prediction accuracy, while others perform with limited accuracy. Different ML and Deep Learning (DL) networks based on ANNs have been widely recommended for the detection of heart disease in previous research. In this paper, we used the UCI Heart Disease dataset to test conventional ML techniques (i.e. random forest, support vector machine, k-nearest neighbor) as well as deep learning models (i.e. long short-term memory and gated recurrent unit neural networks). To improve the accuracy of weak algorithms, we explore a voting-based model that combines multiple classifiers. A systematic approach was used to determine how the ensemble technique can be applied to improve accuracy in heart disease prediction. The strength of the proposed ensemble approach, a voting-based model, lies in improving the prediction accuracy of weak classifiers, and it achieved adequate performance in analyzing heart disease risk. A maximum accuracy increase of 2.1% over the weak classifiers was attained with the ensemble voting-based model.
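
The majority-voting ensemble maps directly onto scikit-learn's VotingClassifier. The sketch below votes across three conventional classifiers on stand-in data; the paper additionally folds LSTM/GRU models into the vote, which would require sklearn-compatible wrappers around the neural networks.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in data shaped like the UCI heart-disease table (13 features).
X, y = make_classification(n_samples=300, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC()),
                ("knn", KNeighborsClassifier())],
    voting="hard")                       # majority vote across classifiers
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```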

Author 1: Irfan Javid
Author 2: Ahmed Khalaf Zager Alsaedi
Author 3: Rozaida Ghazali

Keywords: Deep learning; machine learning; heart disease; majority voting ensemble; University of California, Irvine (UCI) dataset

PDF

Paper 70: Temporal Analysis of GDOP to Quantify the Benefits of GPS and GLONASS Combination on Satellite Geometry

Abstract: Global Navigation Satellite Systems (GNSS) have developed rapidly over the last few years, and there are now GNSS receivers that combine satellites from two or more different constellations. The geometry of the satellites relative to the receiver location, i.e. how nearly or distantly they are disposed in the sky, affects the quality of the survey, which is essential to achieve the highest level of positioning accuracy. A dimensionless number known as the Geometric Dilution of Precision (GDOP) represents the efficiency of the satellite distribution and can be easily calculated for each location and time using satellite ephemeris. This paper quantifies the influence of a multi-GNSS constellation, in particular the combination of GPS (Global Positioning System) and GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema), on satellite geometry over a precise period. A new index named the Temporal Variability of Geometric Dilution of Precision (TVGDOP) is proposed and analyzed in different scenarios (different cut-off angles as well as real obstacles such as terrain morphology and buildings). The new index is calculated for each of the two satellite systems (GPS and GLONASS) as well as for their integration. The TVGDOP values enable the three cases to be compared and quantify the benefits of GNSS integration on satellite geometry. The results confirm the effectiveness of the proposed index in highlighting the better performance of the GPS+GLONASS combination, especially in the presence of obstacles.
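
GDOP itself has a compact closed form: stacking the unit line-of-sight vectors from receiver to satellites into a design matrix A (with an extra clock column), GDOP = sqrt(trace((A^T A)^-1)). The sketch below uses toy coordinates (not real ephemeris); adding a second constellation simply adds rows to A, which is why combining GPS and GLONASS tends to lower GDOP.

```python
import numpy as np

def gdop(sat_positions, receiver):
    """GDOP from satellite/receiver geometry: unit line-of-sight rows plus
    a clock column, then sqrt(trace((A^T A)^-1))."""
    rows = []
    for sat in sat_positions:
        los = np.asarray(sat, float) - receiver
        u = los / np.linalg.norm(los)
        rows.append([-u[0], -u[1], -u[2], 1.0])
    A = np.array(rows)
    Q = np.linalg.inv(A.T @ A)
    return float(np.sqrt(np.trace(Q)))

# Toy ECEF-like coordinates in km; illustrative values only.
receiver = np.array([6371.0, 0.0, 0.0])
gps = [(26560, 0, 0), (20000, 15000, 8000),
       (18000, -14000, 11000), (21000, 4000, -16000)]
glonass = [(19100, 9000, 14000), (22000, -11000, -9000)]
print("GPS only      :", round(gdop(gps, receiver), 2))
print("GPS + GLONASS :", round(gdop(gps + glonass, receiver), 2))
```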

Author 1: Claudio Meneghini
Author 2: Claudio Parente

Keywords: GDOP (Geometric Dilution of Precision); GPS (Global Positioning System); GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema); Multi-GNSS (Global Navigation Satellite System) Constellation

PDF

Paper 71: Enhancing Educational Data Mining based ICT Competency among e-Learning Tutors using Statistical Classifier

Abstract: The implementation of computer-supported collaborative learning has come to play a pivotal role in e-learning platforms. Educational Data Mining (EDM) is a promising area for the skill development of e-learning tutors, the major concern being investigations over large datasets. Tutors possessing efficient and sufficient soft skills can teach students in less time and with greater productivity. EDM is an active research area that handles the development of methods to explore new ideas in the educational field. Computer-supported collaborative learning in e-learning and real-time competencies among teachers are assessed using statistical classifiers. This paper aims to identify a feasible perspective on EDM-based ICT competency of e-learning tutors using statistical classifiers. A set of tutors from diverse e-learning centers of various universities was selected for evaluation. Teachers from the departments of mathematics in these universities were selected to attend a professional Qualified Teacher Status (QTS) numeracy skills test and a tutors' online test. The results of the online tests were collected and analyzed with Naive Bayes Classifier algorithms, which are used in this paper to measure classification performance among teachers. Naive Bayes based classification is beneficial for skill identification and improvement among teachers. Significantly, the data mining classifiers performed well with the large dataset.
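
A minimal sketch of Naive Bayes classification of tutor skill levels, assuming numeric test-score features and scikit-learn; the data and the labelling rule are placeholders, not the study's dataset.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    # Hypothetical data: five test-score features per tutor, pass/fail skill label.
    rng = np.random.default_rng(0)
    X = rng.random((120, 5)) * 100           # e.g. numeracy and online-test scores
    y = (X.mean(axis=1) > 50).astype(int)    # placeholder labelling rule

    print("mean CV accuracy:", cross_val_score(GaussianNB(), X, y, cv=5).mean())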

Author 1: Lalbihari Barik
Author 2: Ahmad AbdulQadir AlRababah
Author 3: Yasser Difulah Al-Otaibi

Keywords: Data mining; e-learning tutors; Naive Bayes Classifiers algorithms; ICT; QTS numeracy

PDF

Paper 72: Histogram Equalization based Enhancement and MR Brain Image Skull Stripping using Mathematical Morphology

Abstract: In brain image processing applications, skull stripping is an essential step. In numerous medical image applications, the skull stripping stage acts as a pre-processing step, since it increases the accuracy of diagnosis manifold. The MR image skull stripping stage removes non-brain tissues such as the dura, skull, and scalp. Nowadays MRI is an emerging method for brain imaging. However, the existence of the skull region in the MR brain image and the low contrast are the two main drawbacks of magnetic resonance imaging. Therefore, we have proposed a method for contrast enhancement of brain MRI using histogram equalization techniques, while a morphological image processing technique is used for skull stripping from the MR brain image. We have implemented our proposed methodology on the MATLAB R2015a platform. Peak signal-to-noise ratio, signal-to-noise ratio, mean absolute error, and root mean square error have been used to evaluate the results of our presented method. The experimental results illustrate that our proposed method effectively enhances the image and removes the skull from the brain MRI.
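
Although the paper works in MATLAB, the two stages it names (histogram equalization, then morphological skull stripping) can be sketched in Python with OpenCV; the Otsu thresholding and structuring-element size below are illustrative choices, not the paper's tuned values.

    import cv2
    import numpy as np

    img = cv2.imread("brain_mri.png", cv2.IMREAD_GRAYSCALE)  # hypothetical slice

    # Stage 1: contrast enhancement via histogram equalization.
    enhanced = cv2.equalizeHist(img)

    # Stage 2: skull stripping with mathematical morphology
    # (assumes at least one foreground component is found).
    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    mask = cv2.erode(mask, kernel)               # detach skull from brain tissue
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    brain = (labels == 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])).astype(np.uint8)
    brain = cv2.dilate(brain, kernel)            # restore eroded brain boundary
    skull_stripped = cv2.bitwise_and(enhanced, enhanced, mask=brain * 255)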

Author 1: Zahid Ullah
Author 2: Su-Hyun Lee
Author 3: Donghyeok An

Keywords: Contrast enhancement; skull stripping; magnetic resonance imaging; mathematical morphology

PDF

Paper 73: Analysis of Web Content Quality Factors for Massive Open Online Course using the Rasch Model

Abstract: The lack of understanding among content providers of the quality of MOOCs motivates the development of several MOOC quality models. However, none has focused on web content from the perspective of content providers or experts, despite the fact that their views are important, particularly in the development phase. MOOC learners and instructors understand the functional external quality, but content providers have a better understanding of the internal qualities, which is required during the development phase. An initial quality model for MOOC web content, based on the 7C's of Learning Design and the PDCA model for continuity, was proposed, consisting of nine categories and 54 factors. This research focuses on the validation of the proposed model by content providers and experts to provide systematic evidence of construct validity. This involved two main processes: a content validity test and a survey on acceptability. The content validity test was conducted to confirm the agreeability of the proposed categories and factors among respondents. The Dichotomous Rasch model was utilized to explain the conditional probability of a binary outcome, given the person's agreeability level and the item's endorsability level. Subsequently, the survey on acceptability was conducted to obtain confirmation and verification from the expert group pertaining to MOOC web content quality factors. The Rasch Rating Scale model was used since it specifies a set of items that share the same rating scale structure. The usage of the Rasch Model in instrument development generally eases variable measurement by converting nonlinear raw data to a linear scale, while assisting researchers in tackling fitness validation and other instrumentation issues like person reliability and unidimensionality. This paper demonstrates the strengths of applying the Rasch Model in construct validation and instrument building, which provides a strong foundation for the model's adaptation as a methodological tool.
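
For reference, the dichotomous Rasch model mentioned here has a simple closed form: the probability of endorsement depends only on the gap between the person's agreeability level and the item's endorsability level. A one-function sketch:

    import math

    def rasch_probability(theta, b):
        """Dichotomous Rasch model: P(endorse) for a person with agreeability
        level theta facing an item with endorsability level b (logit scale)."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    print(rasch_probability(1.2, 0.5))  # ~0.67: person slightly above the item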

Author 1: Wan Nurhayati Wan Ab Rahman
Author 2: Hazura Zulzalil
Author 3: Iskandar Ishak
Author 4: Ahmad Wiraputra Selamat

Keywords: Web content; quality model; hierarchical model; Rasch Model; rating scale; survey reliability

PDF

Paper 74: Prediction Intervals based on Doubly Type-II Censored Data from Gompertz Distribution in the Presence of Outliers

Abstract: This study aims at obtaining Bayesian prediction intervals for some order statistics of future observations from the Gompertz distribution (Gomp(a; ß)). Doubly Type-II censored data are used in the presence of a single outlier arising from a different member of the same family of distributions. Single outliers of type ßß0 and ß+ß0 are considered, and an independent bivariate prior density for a and ß is used. The problem of solving the double integral to obtain a closed form for a and ß leads us to use MCMC for calculating the Bayesian prediction intervals. Numerical examples and statistical data are used to present and describe the procedure. We conclude that the Bayesian prediction intervals are shorter for y1 than for y5 as the ß0 value increases.

Author 1: S. F. Niazi Alil
Author 2: Ayed R. A. Alanzi

Keywords: Bayesian prediction; Gompertz distribution; predictive distribution; doubly Type-II censored data; Markov Chain Monte Carlo; single outliers

PDF

Paper 75: Code Readability Management of High-level Programming Languages: A Comparative Study

Abstract: Quality is never an accident, and therefore software engineers pay immense attention to producing quality software products. Source code readability is one of the important factors that play a vital role in producing quality software. Code readability is an internal quality attribute that directly affects the future maintenance of the software and the re-usability of the same code in similar projects. The literature shows that readability does not rely solely on a programmer's ability to write tidy code; it also depends on the programming language's syntax. Syntax is the most visible part of any programming language and directly influences the readability of its code. If readability is a major factor for a given project, programmers should know which language to choose to achieve the required level of quality. To this end, we compare the readability of three of the most popular high-level programming languages: Java, C#, and C++. We propose a comprehensive framework for readability comparison among these languages. The comparison has been performed on the basis of certain readability parameters referenced in the literature. We have also implemented an analysis tool and performed extensive experiments that produced interesting results. Furthermore, to judge the effectiveness of these results, we have performed statistical analysis using the SPSS (Statistical Package for the Social Sciences) tool, choosing Spearman's correlation and the Mann-Whitney U test. The results show that, among the three languages, Java has the most readable code. Programmers should use Java in projects that have code readability as a significant quality requirement.
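
Both tests named in the abstract are available in SciPy; a small sketch with made-up readability scores (not the paper's data):

    import numpy as np
    from scipy.stats import mannwhitneyu, spearmanr

    # Hypothetical readability-index scores for matched code samples.
    java_scores = np.array([7.2, 6.8, 7.9, 7.1, 6.5, 7.4])
    cpp_scores = np.array([5.9, 6.1, 5.4, 6.6, 5.8, 6.0])

    rho, p_rho = spearmanr(java_scores, cpp_scores)  # monotonic association
    u, p_u = mannwhitneyu(java_scores, cpp_scores)   # do the two groups differ?
    print(f"Spearman rho={rho:.2f} (p={p_rho:.3f}), "
          f"Mann-Whitney U={u} (p={p_u:.4f})")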

Author 1: Muhammad Usman Tariq
Author 2: Muhammad Bilal Bashir
Author 3: Muhammad Babar
Author 4: Adnan Sohail

Keywords: Source code; high-level programming languages; Java; C++; C#; code readability; code readability index

PDF

Paper 76: CA-PCS: A Cellular Automata based Partition Ciphering System

Abstract: In this paper, the authors present a modified version of the previously proposed Partition Ciphering System (PCS), which uses the partition problem to encrypt a message. The goals of the newly developed system are avoiding statistical and frequency attacks by providing a balance between 0s and 1s, ensuring a good level of entropy, and achieving confidentiality through encryption. One of the novelties of the new design compared to its predecessor is the use of cellular automata (CAs) during encryption. The use of CAs is justified by their good cryptographic properties, which provide a level of security against attacks and better confusion and diffusion properties. The new design is first presented with details of the encryption and decryption mechanisms. Then, the results of the DIEHARDER battery of tests, the results of the avalanche test, a security analysis, and the performance of the system are outlined. Finally, a comparison of CA-PCS with PCS as well as the AES encryption system is provided. The paper shows that the modified version of PCS displays better performance as well as a good level of security against attacks.
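
The CA-PCS design itself is not reproduced here, but the following toy sketch illustrates why CAs are attractive for ciphers: an elementary rule-30 automaton can generate a keystream whose XOR with the plaintext is trivially reversible. All parameters below are illustrative.

    def rule30_step(state):
        """One synchronous update of an elementary rule-30 CA with wraparound."""
        n = len(state)
        return [state[(i - 1) % n] ^ (state[i] | state[(i + 1) % n])
                for i in range(n)]

    def keystream(seed_bits, nbits):
        """Collect the centre cell over successive steps as a pseudo-random stream."""
        state, out = list(seed_bits), []
        for _ in range(nbits):
            state = rule30_step(state)
            out.append(state[len(state) // 2])
        return out

    message = [1, 0, 1, 1, 0, 0, 1, 0]               # toy plaintext bits
    seed = [0] * 15 + [1] + [0] * 15                 # 31-cell CA, one live cell
    ks = keystream(seed, len(message))
    cipher = [m ^ k for m, k in zip(message, ks)]    # XOR encryption
    assert [c ^ k for c, k in zip(cipher, ks)] == message  # XOR again decrypts
    print(cipher)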

Author 1: Fatima Ezzahra Ziani
Author 2: Anas Sadak
Author 3: Charifa Hanin
Author 4: Bouchra Echandouri
Author 5: Fouzia Omary

Keywords: Partition ciphering system; partition problem; frequency analysis; cellular automata; avalanche effect; confusion; diffusion; statistical properties; cryptographic properties

PDF

Paper 77: Performance Analysis of Machine Learning Techniques for Smart Agriculture: Comparison of Supervised Classification Approaches

Abstract: Agriculture is one of the most important aspects of life's necessities: it is responsible for feeding 7.7 billion people at present and is expected to supply more than 9.6 billion individuals by 2050. This has made classical farming insufficient and given birth to the notion of smart farming, and a race has begun toward using the latest technologies in the field, integrating the Internet of Things (IoT), automation, Artificial Intelligence (AI), and more. As researchers from a country that depends heavily on agriculture, we decided to contribute to this evolution, choosing Machine Learning (ML) as our entrance to the field to satisfy the need for automated classification of the different products produced by a farm. In this work, we address the problem of automatic classification of agricultural products without any human intervention, concentrating on the classification of red fruits due to our proximity to a region whose main product is red fruits. In other words, we conduct a comparative study among well-known approaches used in image classification and apply the best-found method to correctly classify pictures of red fruits. This empirically leads us to strong results, as shown in the numerical results section.
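
A small convolutional network of the kind such comparisons typically include, sketched with Keras; the architecture and the three-class output are illustrative assumptions, not the authors' best-found model.

    from tensorflow import keras
    from tensorflow.keras import layers

    # Illustrative small CNN for 32x32 RGB images (CIFAR-10-sized inputs);
    # the three output classes stand in for three red-fruit categories.
    model = keras.Sequential([
        keras.Input(shape=(32, 32, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()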

Author 1: Rhafal Mouhssine
Author 2: Abdoun Otman
Author 3: El khatir Haimoudi

Keywords: Support vector machine; K-nearest neighbor; deep neural networks; convolutional neural networks; smart agriculture; Cifar10

PDF

Paper 78: Minimal Order Linear Functional Observers Design for Multicell Series Converter

Abstract: The requirement of high voltage and power levels for many applications, such as energy conversion systems, gives rise to the use of multilevel converter structures like the multicell series converter. To benefit as much as possible from this power converter, an appropriate voltage distribution for each cell must be maintained, hence the need to estimate these voltages. This paper aims to design a minimal single linear functional observer for a multicell series converter to estimate the capacitor voltages. Based on its hybrid model, an observability study proves the ability to estimate these capacitor voltages. Linear functional observers are then proposed using a direct procedure, without solving the Sylvester equation, based on an operation-mode classification approach. Simulations of a four-cell multicell converter are given in order to check the efficiency of the converter's hybrid model and the performance of the proposed minimal single linear functional observers.

Author 1: Mariem Jday
Author 2: Paul-Etienne Vidal
Author 3: Joseph Haggège

Keywords: Multicell converter; voltage capacitor; hybrid model; Z(TN)-Observability; functional observer

PDF

Paper 79: Binning Approach based on Classical Clustering for Type 2 Diabetes Diagnosis

Abstract: In recent years, numerous studies have been focusing on metagenomic data to improve the ability of human disease prediction. Despite the complexity of disease, some proposed frameworks reveal promising performance in using metagenomic data to predict disease. Type 2 diabetes (T2D) diagnosis from metagenomic data is one of the more challenging tasks compared to other diseases: state-of-the-art prediction performance for T2D is usually poor, at around 65% accuracy. In this study, we propose a method combining the K-means clustering algorithm and unsupervised binning approaches to improve the performance of metagenome-based disease prediction. We illustrate by experiments on metagenomic datasets related to Type 2 diabetes that the proposed method, embedding clusters generated by K-means, increases prediction accuracy to approximately 70% or more.
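
One plausible reading of the proposed combination is to bin samples with K-means and expose the resulting bin membership as extra features for the disease classifier; a sketch under that assumption, with synthetic abundance data:

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic abundance matrix: rows = samples, columns = taxa features.
    rng = np.random.default_rng(0)
    X = rng.random((100, 50))

    # Bin samples with K-means, then append the one-hot bin id as extra
    # features that a downstream disease classifier can exploit.
    k = 8
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    X_augmented = np.hstack([X, np.eye(k)[labels]])
    print(X_augmented.shape)  # (100, 58)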

Author 1: Hai Thanh Nguyen
Author 2: Nhi Yen Kim Phan
Author 3: Huong Hoang Luong
Author 4: Nga Hong Cao
Author 5: Hiep Xuan Huynh

Keywords: Unsupervised binning; K-means clustering algorithm; metagenomics; metagenome-based disease prediction; Type 2 diabetes diagnosis

PDF

Paper 80: Enhanced Performance of the Automatic Learning Style Detection Model using a Combination of Modified K-Means Algorithm and Naive Bayesian

Abstract: A Learning Management System (LMS) may be well designed and operated by an exceptional teaching team, but it does not consider the needs and characteristics of each student's learning style. The LMS does not yet provide a feature to detect student diversity, but it has a track record of student learning activities known as log files. This study proposes a model for detecting students' learning styles by utilizing information in log file data, consisting of four processes. The first process is pre-processing to obtain the 29 features used as input to the clustering process. The second process is clustering using a modified K-Means algorithm to get a label for each test data set before the classification process is carried out. The third process is detecting learning styles for each data set using the Naive Bayesian classification algorithm, and the final process is the analysis of the performance of the proposed model. The test results using the validity value of the Davies-Bouldin Index (DBI) indicate that the modified K-Means algorithm achieved 2.54 DBI, compared with 2.39 DBI for the original K-Means. Besides its validity, the modification also makes the algorithm more stable than the original K-Means algorithm because the labels of each dataset do not change. The improved performance of the clustering algorithm also increases the precision, recall, and accuracy of the automatic learning style detection model proposed in this study. The average precision value rises from 65.42% to 71.09%, recall increases from 72.09% to 80.23%, and accuracy increases from 67.06% to 71.60%.
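
The DBI used for validation here is available directly in scikit-learn; a sketch with synthetic log-file features (the 29-feature count is taken from the abstract, the data are made up):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import davies_bouldin_score

    # Synthetic stand-in for the 29 activity features mined from LMS log files.
    rng = np.random.default_rng(1)
    X = rng.random((200, 29))

    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print("Davies-Bouldin Index:", davies_bouldin_score(X, labels))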

Author 1: Nurul Hidayat
Author 2: Retantyo Wardoyo
Author 3: Azhari SN
Author 4: Herman Dwi Surjono

Keywords: Learning management system; log file; K-means; Davies-Bouldin Index

PDF

Paper 81: Improved Candidate Generation for Pedestrian Detection using Background Modeling in Connected Vehicles

Abstract: Pedestrian detection is widely used in today's vehicle safety applications to avoid vehicle-pedestrian accidents. Current pedestrian detection technology utilizes onboard sensors such as cameras, radars, and Lidars to detect pedestrians; the information is then used in a safety feature like Automatic Emergency Braking (AEB). This paper proposes a pedestrian detection system using vehicle connectivity, image processing, and computer vision algorithms. In the proposed model, vehicles collect image frames using on-vehicle cameras, and the frames are transferred to the infrastructure database using Vehicle-to-Infrastructure (V2I) communication. Image processing and machine learning algorithms are used to process the infrastructure images for pedestrian detection. Background modeling is used to extract the foreground regions in an image and identify regions of interest for candidate generation. This paper explains the algorithms of the infrastructure pedestrian detection system, which include image registration, background modeling, image filtering, candidate generation, feature extraction, and classification. The paper describes the MATLAB implementation of the algorithm with a road-collected dataset and provides analysis of the detection results with respect to detection accuracy and runtime. The implementation results show an improvement in detection performance and algorithm runtime.
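
Background modeling for candidate generation can be sketched with OpenCV's Gaussian-mixture background subtractor; the video file name, area threshold, and post-processing below are illustrative, not the paper's MATLAB pipeline.

    import cv2

    # Gaussian-mixture background model: pixels that deviate from the learned
    # background become foreground candidates.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    cap = cv2.VideoCapture("road_camera.mp4")          # hypothetical camera feed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = subtractor.apply(frame)                       # foreground mask
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)  # remove speckle noise
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Sufficiently large blobs become pedestrian candidate regions.
        candidates = [cv2.boundingRect(c) for c in contours
                      if cv2.contourArea(c) > 500]
    cap.release()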

Author 1: Ghaith Al-Refai
Author 2: Osamah A. Rawashdeh

Keywords: Pedestrian detection; computer vision; image processing; machine learning; vehicle safety

PDF

Paper 82: Intelligent Parallel Mixed Method Approach for Characterising Viral YouTube Videos in Saudi Arabia

Abstract: In social networking platforms such as YouTube, comprehending virality is of great importance: it helps in understanding which characteristics are utilised to create content and which dynamics contribute to YouTube's strength as a platform for sharing content. The current literature on virality appears sparse concerning theory development, empirical investigation, and an understanding of what makes videos go viral. Our overarching objective is a deep understanding of the phenomenon of viral YouTube videos in Saudi Arabia. We therefore propose an intelligent convergent parallel mixed-methods approach that begins, as internal steps, with a qualitative thematic analysis method and an NLP-based quantitative method applied independently, followed by training an unsupervised clustering model that integrates the internal analysis outputs for deeper insights. We have empirically analysed trending YouTube videos along with their contents to study this phenomenon. One of our main findings is that emphasizing entertainment, traditions, politics, and/or religious issues when making a video, combined in some way with sarcastic or rude remarks, is likely the preeminent impulse for making a regular video go viral.

Author 1: Abdullah Alshanqiti
Author 2: Ayman Bajnaid
Author 3: Abdul Rehman Gilal
Author 4: Shuaa Aljasir
Author 5: Aeshah Alsughayyir
Author 6: Sami Albouq

Keywords: Virality; text mining; sentiment analysis; social media analysis; mixed method approach

PDF

Paper 83: Predicting Students’ Performance of the Private Universities of Bangladesh using Machine Learning Approaches

Abstract: Every year thousands of students are admitted into different universities in Bangladesh. Among them, a large number of students complete their graduation with low scores, which affects their careers. By predicting their grades before the final examination, they can take essential measures to improve their grades. This article proposes different machine learning approaches for predicting the grade of a student in a course, in the context of the private universities of Bangladesh. Using different features that affect the result of a student, seven different classifiers have been trained, namely: Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Logistic Regression, Decision Tree, AdaBoost, Multilayer Perceptron (MLP), and Extra Tree Classifier, for classifying students' final grades into four quality classes: Excellent, Good, Poor, and Fail. Afterwards, the outputs of the base classifiers are aggregated using a weighted voting approach to attain better results. This study achieved an accuracy of 81.73%, with the weighted voting classifier outperforming the base classifiers.

Author 1: Md. Sabab Zulfiker
Author 2: Nasrin Kabir
Author 3: Al Amin Biswas
Author 4: Partha Chakraborty
Author 5: Md. Mahfujur Rahman

Keywords: Prediction; machine learning; weighted voting approach; private universities of Bangladesh

PDF

Paper 84: Vehicle Routing Optimization for Surplus Food in Nonprofit Organizations

Abstract: Non-profit organizations mitigate the problem of food insecurity by collecting surplus food from donors and delivering it to underprivileged people. In this paper, we focus on a single non-profit organization located in Makkah city (Saudi Arabia), referred to as Ekram. Ekram's current surplus food pickup/delivery and operational routing model has several apparent deficiencies. First, we model the surplus pickup/delivery and operational routing model as a vehicle routing problem (VRP). Then, we optimize the pickup/delivery of different types of food groups through the different available routes. Finally, we solve the formulated VRP by minimizing the total route distance. Our proposed model reduces the total time and effort necessary to send the collecting vehicles to the donors of surplus food.
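
The paper formulates and optimizes a full VRP; as a much simpler illustration of route construction over donor locations, here is a nearest-neighbour sketch with made-up coordinates:

    import math

    def nearest_neighbour_route(depot, stops):
        """Toy VRP-style construction heuristic: repeatedly visit the closest
        remaining donor, then return to the depot."""
        route, pos, remaining = [depot], depot, list(stops)
        while remaining:
            nxt = min(remaining, key=lambda p: math.dist(pos, p))
            remaining.remove(nxt)
            route.append(nxt)
            pos = nxt
        route.append(depot)
        total = sum(math.dist(a, b) for a, b in zip(route, route[1:]))
        return route, total

    # Hypothetical depot and donor coordinates (km grid).
    donors = [(2.0, 3.0), (5.0, 1.0), (1.0, 6.0), (4.0, 4.0)]
    print(nearest_neighbour_route((0.0, 0.0), donors))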

Author 1: Ahmad Alhindi
Author 2: Abrar Alsaidi
Author 3: Waleed Alasmary
Author 4: Maazen Alsabaan

Keywords: Non-profit organization; vehicle routing problem; donor; surplus food; decision support

PDF

Paper 85: Development of an Interactive Tool based on Combining Graph Heuristic with Local Search for Examination Timetable Problem

Abstract: Every university faces many challenges in solving the examination timetabling problem, because the problem is NP-hard and contains numerous institutional constraints. Although several attempts have been made to address the issue, there is a scarcity of interactive and automated tools in this domain that can schedule exams effectively by considering institutional resources, different constraints, and student enrolment in courses. This paper presents the development of a graphical and interactive tool for the examination timetabling problem. To develop the system, a graph coloring heuristic is combined with local search meta-heuristic algorithms: the graph heuristic ordering is incorporated for constructing initial solutions, whereas the local search meta-heuristic algorithms are used to produce quality exam timetables. Different constraints and objective functions from the ITC2007 exam competition rules are adopted, as it is a complex real-world exam timetabling problem. Finally, the system is tested on the ITC2007 exam benchmark dataset, and test results are presented. The main aim of the system is to deliver an easy-to-handle tool that can generate quality timetables based on institutional demands and smoothly manage several key components: collecting data associated with the enrolment of students in exams, defining hard and soft constraints, and allocating times and resources. Overall, this software can be used as a commercial scheduler to provide institutions with automated, accurate, and quick exam timetables.
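
The construction stage can be illustrated as graph coloring: exams are vertices, exams sharing a student are adjacent, and a greedy largest-degree-first coloring assigns conflict-free timeslots, which local search would then refine. A toy sketch (not the tool's actual code):

    def greedy_timetable(conflicts):
        """conflicts: dict mapping each exam to the set of exams sharing a student."""
        slot = {}
        # Colour high-conflict exams first (largest-degree-first ordering).
        for exam in sorted(conflicts, key=lambda e: len(conflicts[e]), reverse=True):
            taken = {slot[n] for n in conflicts[exam] if n in slot}
            slot[exam] = next(t for t in range(len(conflicts)) if t not in taken)
        return slot

    conflicts = {
        "math": {"physics", "chemistry"},
        "physics": {"math"},
        "chemistry": {"math", "biology"},
        "biology": {"chemistry"},
    }
    print(greedy_timetable(conflicts))
    # {'math': 0, 'chemistry': 1, 'physics': 1, 'biology': 0}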

Author 1: Ashis Kumar Mandal

Keywords: Examination timetable; graph heuristic; local search meta-heuristic; ITC2007 exam dataset; interactive tool; NP-hard problem

PDF

Paper 86: Invariant Feature Extraction for Component-based Facial Recognition

Abstract: This paper proposes an alternative invariant feature extraction technique for facial recognition using facial components. Can facial recognition over age progression be improved by analyzing individual facial components? The individual facial components (eyes, mouth, nose) are extracted using face landmark points. Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) features are extracted from the individually detected facial components, followed by random subspace principal component analysis and cosine distance. One of the preprocessing steps implemented is facial image alignment using the angle of inclination. The experimental results show that facial recognition over age progression can be improved by analyzing individual facial components: the entire facial image can change over time, but the appearance of some individual facial components is invariant.
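
Both descriptors are available in scikit-image; a sketch of extracting HOG and LBP features from a single (hypothetical) facial-component patch:

    import numpy as np
    from skimage import io
    from skimage.feature import hog, local_binary_pattern

    # Hypothetical pre-cropped component patch (e.g. an eye region), as grayscale.
    patch = io.imread("eye_component.png", as_gray=True)

    # HOG: histograms of gradient orientations over small cells.
    hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))

    # LBP: per-pixel binary texture codes, summarised as a histogram.
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    feature_vector = np.concatenate([hog_vec, lbp_hist])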

Author 1: Adam Hassan
Author 2: Serestina Viriri

Keywords: Invariant features; facial components; facial recognition; age progression; HOG; LBP

PDF

Paper 87: Feature Selection for Learning-to-Rank using Simulated Annealing

Abstract: Machine learning is being applied to almost all corners of our society today. The inherent power of large amounts of empirical data, coupled with smart statistical techniques, makes it a natural choice for almost all prediction tasks. Information retrieval is a discipline that deals with fetching useful information from a large number of documents. Given that today millions, even billions, of digital documents are available, it is no surprise that machine learning can be tailored to this task. The task of learning-to-rank has thus emerged as a well-studied domain in which the system retrieves the relevant documents from a document corpus with respect to a given query. To be successful in this retrieval task, machine learning models need a highly useful set of features, and meta-heuristic optimization algorithms may be utilized to this end. The aim of this work is to investigate the applicability of a notable meta-heuristic algorithm, simulated annealing, to selecting an effective subset of features from the feature pool. To be precise, we apply the simulated annealing algorithm to well-known learning-to-rank datasets to methodically select the best subset of features. Our empirical results show that the proposed framework achieves gains in accuracy while using a smaller subset of features, thereby reducing training time and increasing the effectiveness of learning-to-rank algorithms.
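
A generic simulated-annealing loop over binary feature masks, as a sketch of the idea; in the paper the objective would train a ranker and return a ranking metric, whereas the toy objective below is a placeholder.

    import math
    import random

    def anneal_features(n_features, score, iters=500, t0=1.0, alpha=0.995):
        """Search binary feature masks; `score(mask)` returns higher-is-better."""
        random.seed(0)
        cur = [random.random() < 0.5 for _ in range(n_features)]
        cur_score = score(cur)
        best, best_score, t = cur[:], cur_score, t0
        for _ in range(iters):
            cand = cur[:]
            cand[random.randrange(n_features)] ^= True   # flip one feature bit
            cand_score = score(cand)
            # Always accept improvements; accept worse moves with
            # Boltzmann probability exp(delta / t) to escape local optima.
            if (cand_score > cur_score
                    or random.random() < math.exp((cand_score - cur_score) / t)):
                cur, cur_score = cand, cand_score
                if cur_score > best_score:
                    best, best_score = cur[:], cur_score
            t *= alpha                                   # cooling schedule
        return best, best_score

    # Placeholder objective; the paper would train a ranker and return e.g. NDCG.
    toy = lambda mask: -abs(sum(mask) - 10)
    print(anneal_features(46, toy)[1])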

Author 1: Mustafa Wasif Allvi
Author 2: Mahamudul Hasan
Author 3: Lazim Rayan
Author 4: Mohammad Shahabuddin
Author 5: Md. Mosaddek Khan
Author 6: Muhammad Ibrahim

Keywords: Information retrieval; learning-to-rank; feature selection; meta-heuristic optimization algorithm; simulated annealing

PDF

Paper 88: Software-Defined Networking (SDN) based VANET Architecture: Mitigation of Traffic Congestion

Abstract: In VANETs (Vehicular Ad-hoc Networks), the number of vehicles increases continuously, leading to significant traffic problems such as congestion, the need for feasible paths, and associated events like accidents. Although the Intelligent Transportation System (ITS) provides excellent services, such as safety applications and emergency warnings, it has limitations regarding traffic management tasks, scalability, and flexibility because of the enormous number of vehicles. Therefore, extending the traditional VANET architecture is a must. In the recent period, the design of SD-VANETs (Software-Defined Networking based VANETs) has gained significant interest and made VANETs more intelligent. The SD-VANET architecture can handle the aforesaid VANET challenges: the logically centralized SDN architecture is programmable and has global information about the VANET, so it can effortlessly handle scalability, traffic management, and traffic congestion issues. Traffic congestion leads to longer trip times, decreases vehicle speeds, and prolongs the average end-to-end delay, even while some routes in the network have spare capacity that could mitigate the congestion problem. Therefore, we propose heuristic algorithms called Congestion-Free Path (CFP) and Optimized CFP (OCFP) in the SD-VANET architecture. The proposed algorithms address the traffic congestion issue and also provide a feasible path (less end-to-end delay) for a vehicle in a VANET. We used the NS-3 simulator to evaluate the performance of the proposed algorithms, with the SUMO module generating a realistic VANET traffic scenario. The results show that the proposed algorithms decrease road traffic congestion drastically compared to existing approaches.
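
The CFP/OCFP algorithms themselves are not published in this listing; as a sketch of the underlying idea (a controller with a global view steering vehicles onto routes with spare capacity), here is delay-weighted shortest-path routing over a toy road graph:

    import heapq

    def congestion_aware_path(graph, src, dst):
        """Dijkstra over current edge delays; graph[u] maps neighbour -> delay,
        which an SDN controller with a global view could keep updated."""
        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, delay in graph[u].items():
                nd = d + delay
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return path[::-1], dist[dst]

    # Toy road graph: edge weights are current delays (congested link A-B).
    roads = {
        "A": {"B": 9.0, "C": 2.0},
        "B": {"D": 1.0},
        "C": {"B": 2.0, "D": 6.0},
        "D": {},
    }
    print(congestion_aware_path(roads, "A", "D"))  # (['A', 'C', 'B', 'D'], 5.0)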

Author 1: Tesfanesh Adbeb
Author 2: Wu Di
Author 3: Muhammad Ibrar

Keywords: Software-Defined Networking; VANET; congestion; feasible path; NS3; SUMO

PDF

Paper 89: Perceived Usability of Educational Chemistry Game Gathered via CSUQ Usability Testing in Indonesian High School Students

Abstract: Educational games are now commonplace among students and teachers alike. Recent research shows that studies of the general effectiveness of educational games in the learning environment are nothing new; however, usability studies of educational games are rather rare compared to general non-game-related usability studies. This research synthesizes results obtained from the Computer System Usability Questionnaire (CSUQ), separated across students' pre-existing groupings such as gender and prior knowledge, as well as experimental treatment setups such as materials given before the game session. The metrics were tested in an Indonesian high school using an educational chemistry game on the topic of reaction rate, with a total of 53 participants. The results show that many differences in perceived usability aspects exist between male and female students, between students given learning materials before the game session and those who were not, and between students with and without prior knowledge. Overall, the main finding of this research is that the perceived usability of an educational game is affected by gender, the availability of materials, and prior knowledge.

Author 1: Herman Tolle
Author 2: Muhammad Hafis
Author 3: Ahmad Afif Supianto
Author 4: Kohei Arai

Keywords: Usability testing; CSUQ; educational game; male students; female students

PDF
