The Science and Information (SAI) Organization
IJACSA Volume 10 Issue 12

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.


Paper 1: Local-Set Based-on Instance Selection Approach for Autonomous Object Modelling

Abstract: With the increasing presence of robotic agents in our daily life, computationally efficient modelling of real-world objects by autonomous systems is of prime importance for enabling these artificial agents to automatically and effectively perform tasks such as visual object recognition. For this purpose, we introduce a novel machine-learning approach for instance selection called Approach for Selection of Border Instances (ASBI). This method adopts the notion of local sets to select the most representative instances at the boundaries of the classes, in order to reduce the set of training instances and, consequently, the computational resources needed for learning real-world objects. Our new algorithm was validated on 27 standard datasets and applied to 2 challenging object-modelling datasets to test the automated object recognition task. ASBI's performance was compared to that of 6 state-of-the-art algorithms on three standard metrics, namely, accuracy, reduction, and effectiveness. All the obtained results show that the proposed method is promising for the autonomous recognition task, while presenting the best trade-off between classification accuracy and data size reduction.
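
As a rough illustration of the local-set idea described above (a sketch of the general principle, not ASBI's actual algorithm; the toy data and `keep_ratio` parameter are hypothetical), an instance's local set can be taken as the points closer to it than its nearest enemy, and border instances are those with the smallest local sets:

```python
import math

def local_set_sizes(X, y):
    """For each instance, count the points closer to it than its nearest
    enemy (nearest instance of a different class) -- its 'local set'.
    Assumes every class has at least one enemy instance."""
    sizes = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        nearest_enemy = min(math.dist(xi, xj)
                            for xj, yj in zip(X, y) if yj != yi)
        size = sum(1 for j, xj in enumerate(X)
                   if j != i and math.dist(xi, xj) < nearest_enemy)
        sizes.append(size)
    return sizes

def select_border_instances(X, y, keep_ratio=0.5):
    """Keep the instances with the smallest local sets (closest to a class
    boundary); interior points with large local sets are discarded."""
    sizes = local_set_sizes(X, y)
    order = sorted(range(len(X)), key=lambda i: sizes[i])
    k = max(1, int(len(X) * keep_ratio))
    return sorted(order[:k])
```

On two well-separated 1-D classes, the instances nearest the boundary have empty local sets and are selected first.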

Author 1: Joel Luis Carbonera
Author 2: Joanna Isabelle Olszewska

Keywords: Machine learning; instance selection; autonomous systems; object modelling; visual object recognition; computer vision; machine vision

PDF

Paper 2: Activation and Spreading Sequence for Spreading Activation Policy Selection Method in Transfer Reinforcement Learning

Abstract: This paper proposes an automatic policy selection method for transfer learning in reinforcement learning, using spreading activation theory from cognitive psychology. Intelligent robot systems have recently been studied for practical applications such as home robots, communication robots, and warehouse robots. Learning algorithms are key to building useful robot systems. For example, a robot can search for an optimal policy by trial and error using reinforcement learning. Moreover, transfer learning enables the reuse of prior policies and is effective for adapting to new environments. However, humans must determine which methods are applicable in transfer learning. A policy selection method for transfer learning in reinforcement learning has been proposed using the spreading activation model from cognitive psychology. In this paper, a novel activation function and spreading sequence are discussed for the spreading-activation policy selection method. Further, computer simulations are used to examine the effectiveness of the proposed method for automatic policy selection in a simplified shortest-path problem.
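
The spreading-activation mechanism the abstract builds on can be sketched as follows. This is a generic illustration under assumed parameters (the graph, decay factor, and step count are hypothetical), not the authors' activation function or spreading sequence:

```python
def spread_activation(graph, sources, decay=0.5, steps=2):
    """Seed the source nodes with activation 1.0, then propagate decayed
    activation along directed edges for a fixed number of steps."""
    activation = {node: 0.0 for node in graph}
    for s in sources:
        activation[s] = 1.0
    for _ in range(steps):
        nxt = dict(activation)
        for node, neighbours in graph.items():
            for m in neighbours:
                nxt[m] += decay * activation[node]
        activation = nxt
    return activation

def select_policy(graph, sources, candidates):
    """Pick the candidate policy node with the highest spread activation."""
    activation = spread_activation(graph, sources)
    return max(candidates, key=lambda c: activation[c])
```

A policy linked (even indirectly) to the current task accumulates activation and is preferred over an unconnected one.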

Author 1: Hitoshi Kono
Author 2: Ren Katayama
Author 3: Yusaku Takakuwa
Author 4: Wen Wen
Author 5: Tsuyoshi Suzuki

Keywords: Reinforcement learning; transfer learning; spreading activation theory; policy selection

PDF

Paper 3: Best-Choice Topology: An Optimized Array-based Maximum Finder

Abstract: Extracting the maximum from an unsorted set of binary elements is important in many signal processing applications. Since few maximum-finder implementations appear in the recent literature, this paper provides an update on the topic. Maximum-finders are generally either array-based, with parallel bit-by-bit comparison of the elements, or more efficient tree-based structures, with hierarchical maximum extraction. In this paper, we concentrate on array-based topologies only, since our goal is to propose a new maximum-finder design called Best-Choice Topology (BCT), an optimized version of the standard Array Topology (AT). The usual bit-by-bit parallel comparison is applied for extracting the maximum and its one-of-N address. Boolean expressions are derived for the BCT logical design and its minimum-finder equivalent. The functionality of the proposed architecture and the reference designs is verified with Xilinx ISE Design Suite 14.5. Synthesis is done on Application Specific Integrated Circuit (ASIC) TSMC 65nm technology. The conclusion of the paper is two-fold. First, we confirm the timing efficiency of BCT compared to AT. Second, we show that BCT is more efficient than the recent maximum-finder design called Maximum Magnitude Generator (MaxMG) and has great potential for real-time signal processing applications.
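
The bit-by-bit parallel elimination performed in hardware by array topologies can be mimicked behaviourally in software. This sketch illustrates the principle only; it is not the BCT Boolean design:

```python
def array_max_finder(values, width=8):
    """Bit-serial elimination, MSB first: at each bit position, if any
    surviving candidate has a 1, candidates holding a 0 are knocked out.
    Mirrors the parallel bit-by-bit comparison of array topologies.
    Returns the maximum and the indices of all elements equal to it
    (the one-of-N address, with ties kept)."""
    alive = [True] * len(values)
    for bit in range(width - 1, -1, -1):
        mask = 1 << bit
        if any(alive[i] and (values[i] & mask) for i in range(len(values))):
            for i in range(len(values)):
                if alive[i] and not (values[i] & mask):
                    alive[i] = False
    winners = [i for i, a in enumerate(alive) if a]
    return values[winners[0]], winners
```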

Author 1: Marina Prvan
Author 2: Julije Ožegovic
Author 3: Ivan Soco
Author 4: Duje Coko

Keywords: Array topology; best-choice topology; maximum finder; maximum magnitude generator

PDF

Paper 4: Words Segmentation-based Scheme for Implicit Aspect Identification for Sentiments Analysis in English Text

Abstract: Implicit and explicit aspect extraction is a growing research area within natural language processing (NLP) and opinion mining. It has become an essential part of a large collection of applications, including e-commerce, social media, and marketing. These applications help customers buy products online and collect feedback on products and their aspects. Such feedback is qualitative (comments) and helps to enhance product quality and delivery service. The main problem, however, is analyzing this qualitative feedback: performing the analysis manually requires a lot of effort and time. In this paper, we develop an automatic solution for extracting implicit aspects and analyzing comments. The problem of implicit aspect extraction and sentiment analysis is solved by splitting the text at defined boundaries and extracting each sentence into an isolated list; each element of this list is a complete sentence. The sentences are further separated into words, which are filtered to remove extraneous words, and the remaining words are saved in a word list for aspect matching; this technique is used to measure polarity and perform sentiment analysis. We evaluate the solution using a dataset of online comments.
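
The pipeline described above (sentence splitting, word filtering, word-list matching) can be sketched as follows; the mini word lists are hypothetical stand-ins for the paper's lexicons:

```python
import re

# Hypothetical mini-lexicons; the paper builds its word lists from its dataset.
POSITIVE = {"good", "great", "fast", "excellent", "love"}
NEGATIVE = {"bad", "slow", "poor", "broken", "hate"}
STOPWORDS = {"the", "is", "was", "a", "an", "and", "it", "very"}

def split_sentences(text):
    """Split at defined boundaries (.!?) into an isolated list of sentences."""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def sentence_polarity(sentence):
    """Tokenize, drop stopwords, and score the remaining words against
    the positive/negative word lists."""
    words = [w for w in re.findall(r"[a-z']+", sentence.lower())
             if w not in STOPWORDS]
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```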

Author 1: Dhani Bux Talpur
Author 2: Guimin Huang

Keywords: Implicit aspect; explicit aspects; polarity; sentiments analysis

PDF

Paper 5: Energy Balanced Two-level Clustering for Large-scale Wireless Sensor Networks based on the Gravitational Search Algorithm

Abstract: Organizing sensor nodes in clusters is an effective method for energy preservation in a Wireless Sensor Network (WSN). In this work we present a novel hybrid clustering scheme that combines a typical gradient clustering protocol with an evolutionary optimization method based on the Gravitational Search Algorithm (GSA). The proposed scheme aims at improved performance over large networks, where classical schemes in most cases lead to inefficient solutions. It first creates suitably balanced multihop clusters, in which the sensors' energy increases with proximity to the cluster head (CH). In the next phase, a protocol based on the GSA associates sets of cluster heads with specific gateway nodes for the eventual relaying of data to the base station (BS). The fitness function was chosen to consider both the distance from the cluster heads to the gateway nodes and the remaining energy of the gateway nodes, and it was further optimized to obtain more accurate results for large instances. Extended experimental measurements demonstrate the efficiency and scalability of the presented approach over very large WSNs, as well as its superiority over other known clustering approaches in the literature.
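
One plausible form of the fitness function sketched in the abstract, balancing CH-to-gateway distance against residual gateway energy (the weighting, units, and `alpha` parameter are illustrative assumptions, not the paper's exact formula):

```python
import math

def assignment_fitness(assignment, cluster_heads, gateways, residual_energy,
                       alpha=0.7):
    """Score a CH-to-gateway assignment (lower is better): total CH-gateway
    distance, discounted by the mean residual energy of the used gateways.
    assignment[i] is the gateway index serving cluster head i."""
    total_distance = sum(math.dist(cluster_heads[ch], gateways[gw])
                         for ch, gw in enumerate(assignment))
    used = set(assignment)
    mean_energy = sum(residual_energy[gw] for gw in used) / len(used)
    return alpha * total_distance - (1 - alpha) * mean_energy
```

A GSA (or any metaheuristic) would then search over assignments for the lowest score: shorter relay distances and fresher gateways both win.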

Author 1: Basilis Mamalis
Author 2: Marios Perlitis

Keywords: Gravitational search algorithm; wireless sensors; network lifetime; nodes clustering; data collection

PDF

Paper 6: Application of Computer-Aided to Improve Industrial Productivity in Cement Factories by using a Novel Design of Quantitative Conveyor

Abstract: To keep pace with Industry 4.0, many cement enterprises must enhance their productive capacity. In this paper, a novel design of quantitative conveyor is introduced to improve industrial productivity. Firstly, the test hardware platform is set up based on customer requirements. Then, the analysis and control of the mechanical platform for the quantitative conveyor are investigated. The experimental results show that the operation of the conveyor is stable and precise, ensuring the required output for cement factories.

Author 1: Anh Son Tran
Author 2: Ha Quang Thinh Ngo

Keywords: Motion control; cement industry; conveyor; automation; robotics system

PDF

Paper 7: Software Design using Genetic Quality Components Search

Abstract: The paper presents a software design methodology based on computational experiments for effective selection of a software component set. The selection of components is performed with respect to numerical quality criteria evaluated in reproducible experiments with various sets of components, in a virtual infrastructure simulating the operating conditions of the software system being developed. To reduce the number of experiments with unpromising sets of components, a genetic algorithm is applied. To represent the sets of components in the form of natural genotypes, an encoding mapping is introduced; the reverse mapping is used to decode a genotype. In the first step of the technique, the genetic algorithm creates an initial population of random genotypes that are converted into the assessed sets of software components. The paper shows the application of the proposed methodology to find an effective choice of Node.js components. For this purpose, a MATLAB program for the genetic search and an experimental scenario for a virtual machine running the Ubuntu 16.04 LTS operating system were developed. To guarantee proper reproduction of the experimental conditions, the Vagrant and Ansible configuration tools were used to create the virtual environment of the experiment.
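
The encoding/decoding and generational loop described above can be sketched with a minimal genetic algorithm. The component names and the `evaluate` stub are hypothetical stand-ins for the paper's virtual-infrastructure experiments:

```python
import random

def decode(genotype, candidates):
    """Reverse mapping: a bit string selects a subset of components."""
    return [c for bit, c in zip(genotype, candidates) if bit]

def evolve(candidates, evaluate, pop_size=8, generations=20, seed=0):
    """Minimal generational GA: truncation selection with elitism,
    one-point crossover, single bit-flip mutation. `evaluate` stands in
    for the reproducible quality experiment (higher is better)."""
    rng = random.Random(seed)
    n = len(candidates)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: evaluate(decode(g, candidates)), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            child[rng.randrange(n)] ^= 1        # bit-flip mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=lambda g: evaluate(decode(g, candidates)))
    return decode(best, candidates)
```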

Author 1: Evgeny Nikulchev
Author 2: Dmitry Ilin
Author 3: Aleksander Gusev

Keywords: Software design; selection of software components set; numerical quality criteria evaluated; genetic algorithm

PDF

Paper 8: Resonance Mitigation and Performance Improvement in Distributed Generation based LCL Filtered Grid Connected Inverters

Abstract: Resonance has become an issue of paramount importance for the stable operation of LCL-filtered grid-connected inverters. Active damping algorithms are widely adopted to restrain the resonance peak associated with LCL filters. The focus of this paper is to develop an improved active damping solution based on the filter capacitor current, for better control performance of three-phase LCL grid-connected inverters. In the proposed solution, an improved compensator is included across the LCL filter system together with capacitor-current feedback. The damping loop implemented with this combination is fed back at a reference voltage point of the three-phase inverter to damp the resonance peak. The substantial features of the proposed configuration are a wide damping range of resonance frequency and high control bandwidth, which result in a faster dynamic response compared with conventional proportional capacitor-current feedback. Moreover, the stability of the current loop is examined in detail by implementing the proposed damping method under filter parameter variations. Finally, the efficacy of the proposed method is validated through steady-state and transient responses in simulations and experimental results from the laboratory prototype.

Author 1: Danish Khan
Author 2: Muhammad Mansoor Khan
Author 3: Yaqoob Ali
Author 4: Abdar Ali
Author 5: Imad Hussain

Keywords: LCL filter; high pass filter; grid connected inverters (GCI); stability analysis; robustness; SPWM technique; D-space

PDF

Paper 9: Automatically Extract Vertebra and Compute the Cobb Angle based on Spine’s Features and Adaptive ASMs in Posteroanterior Radiographs

Abstract: Nowadays, clinical diagnoses are increasingly supported by medical equipment, but doctors still need a lot of time and effort to make a diagnosis. A system that can diagnose automatically will help doctors greatly when they must handle many medical records. Many diseases are diagnosed from radiographs and can be diagnosed automatically, especially bone diseases. The purpose of this paper is to introduce a new method to measure the curvature of the spine in X-ray images. We split this task into two problems. The first is to extract the spine from the X-ray image. We use thresholding to remove redundant information from the images. Then an automatic mask is created to record the position of the spine and smooth its boundary. Based on the extracted spine, an Active Shape Model (ASM) designed from the characteristics of the vertebrae is used to extract each vertebra. Finally, we measure the Cobb angle formed by the vertebrae. On high-quality X-ray images, more than 80% of the spine area is extracted and the Cobb angle is measured correctly; the accuracy of our method decreases if the image quality is low.
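
The final measurement step can be sketched directly: given two landmark points per endplate (e.g. from the ASM fit), the Cobb angle is the angle between the two most oppositely tilted endplates. The landmark coordinates below are illustrative:

```python
import math

def endplate_tilt(p_left, p_right):
    """Tilt of an endplate line in degrees from horizontal, from two
    landmark points on the vertebra (e.g. ASM corner points)."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.degrees(math.atan2(dy, dx))

def cobb_angle(vertebra_endplates):
    """Cobb angle: the angle between the two most oppositely tilted
    endplates among the detected vertebrae."""
    tilts = [endplate_tilt(left, right) for left, right in vertebra_endplates]
    return max(tilts) - min(tilts)
```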

Author 1: Pham The Bao

Keywords: Spine detection; spine extraction; vertebrae detection; cobb angle; adaptive active shape model

PDF

Paper 10: System for Monitoring People with Disabilities in the Event of an Accident using Mobile Terminals

Abstract: In this fast-paced century, people around the world have busy daily schedules and therefore cannot spend enough time with the elderly, people with disabilities, or people with chronic illnesses. These persons need much more attention and care because they cannot cope with daily activities as a healthy person would. Daily monitoring and assistance of elderly or disabled people is a very important task, both in everyday activity and, especially, when emergencies occur. Fortunately, today's constantly developing technologies make it easy to monitor them remotely. This paper seeks a solution to reduce fatalities due to accidents by using today's advanced technologies, e.g. smartphones and fast communications. These technologies can provide permanent monitoring of the elderly and persons with disabilities without restricting their mobility or affecting their quality of life. In this way, if an emergency arises for an elderly person or a person with a disability or chronic disease, measures can be taken as soon as possible. The development of a mobile application capable of monitoring the occurrence of accidents for the above-mentioned persons is an obvious help to the doctors responsible for their health. Thus, the main objective of the application is to detect accidental falls in the shortest possible time. Another objective is to provide an application that runs in the background of the mobile operating system, using as little power as possible.
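
A common minimal rule for smartphone fall detection, consistent with the objective described above though not necessarily the authors' algorithm, is a free-fall dip in total acceleration followed shortly by an impact spike; the thresholds here are assumptions:

```python
import math

FREE_FALL_G = 0.4   # total acceleration well below 1 g suggests free fall
IMPACT_G = 2.5      # a sharp spike soon after suggests impact

def detect_fall(samples, window=10):
    """Two-phase rule over accelerometer samples (x, y, z in g): a
    free-fall dip followed within `window` samples by an impact spike."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G:
            if any(m2 > IMPACT_G for m2 in mags[i + 1 : i + 1 + window]):
                return True
    return False
```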

Author 1: Alexandra Fanca
Author 2: Monica Cujerean
Author 3: Adela Puscasiu
Author 4: Dan-Ioan Gota
Author 5: Honoriu Valean

Keywords: Smartphones; built-in smartphone sensors; monitoring people; android applications; accident detecting system

PDF

Paper 11: Embedding Adaptation Levels within Intelligent Tutoring Systems for Developing Programming Skills and Improving Learning Efficiency

Abstract: Intelligent Tutoring Systems (ITSs) are virtual learning environments that address learning needs and adapt to the characteristics of learners, according to their cognitive and behavioral aspects, to reach the desired learning outcomes. The purpose of this study is to investigate the impact of embedding adaptation levels within intelligent tutoring systems on developing Object-Oriented Programming (OOP) skills, as well as on learning efficiency, for students of the computer science department, Faculty of Science and Arts at Qassem University. In this context, the author developed an Intelligent Tutoring System (ITS) that provides multiple levels of adaptation (learner level, links level) to adapt automatically to each student's characteristics, and investigated the effectiveness of the system on the dependent variables. The random sample consisted of 44 students, divided into two similar groups: experimental (the ITS) and control (face-to-face, traditional). The findings revealed a noticeable improvement in programming skills and learning efficiency for the students in the experimental group compared with the control group, who used the face-to-face method.

Author 1: Mohamed A Elkot

Keywords: Intelligent Tutoring Systems (ITS); programming skills; adaptive e-learning; learning style; learning efficiency

PDF

Paper 12: Neural Network-based Diabetic Type II High-Risk Prediction using Photoplethysmogram Waveform Analysis

Abstract: This work aims to predict and classify patients into diabetic and nondiabetic subjects based on age and four independent variables extracted from the analysis of photoplethysmogram (PPG) morphology in the time domain. The study has two main stages. The first was the analysis of the PPG waveform to extract the b/a, RI, DiP, and SPt indices. These parameters contribute to the prediction of diabetes; they were statistically significant and correlated with the HbA1c test. The second stage was building a neural-network-based classifier to predict diabetes. The model showed an accuracy of 90.2% in the training phase and 85.5% in the testing phase. The findings of this work may contribute towards the prediction of diabetes in its early stages. The proposed classifier also showed high accuracy in predicting the existence of diabetes in the Saudi population.
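
As a stand-in for the paper's neural classifier, a single logistic neuron trained by gradient descent over the extracted features can be sketched as follows (the feature vectors in the test are illustrative, not real PPG indices):

```python
import math
import random

def train_logistic(X, y, epochs=500, lr=0.5, seed=0):
    """Single-neuron stand-in for a neural classifier: logistic regression
    over feature vectors, trained by stochastic gradient descent."""
    rng = random.Random(seed)
    n = len(X[0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - yi                      # gradient of log-loss wrt z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Class 1 if the neuron's pre-activation is positive (p > 0.5)."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0
```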

Author 1: Yousef K Qawqzeh

Keywords: Diabetes; prediction; classification; photoplethysmogram; neural networks; diagnosis

PDF

Paper 13: Clustering Analysis for Malware Behavior Detection using Registry Data

Abstract: The increase of malware attacks raises risk across the information technology industry, including the multiple sectors of Industry 4.0, especially cyber security. Malware detection techniques therefore play a vital role in detecting attacks that can have a high impact on the cyber world. Among these techniques, unsupervised machine learning can detect malware attacks by identifying the behavior of the malware; this is the clustering technique. Current research, however, shows a paucity of analysis of malware behavior detection and limited sources that can be used to identify malware attacks. Thus, this paper introduces a clustering detection model that uses the K-Means clustering approach to detect malware behavior in registry data based on malware features. Clustering techniques, which use unsupervised machine learning algorithms, play an important role in grouping similar malware characteristics by studying the behavior of the malware. In the experiment, malware features were selected and extracted from computer registry data and then used in the proposed clustering detection model to be clustered as normal or suspicious behavior. The results indicate that the proposed model is capable of clustering normal and suspicious data into two separate groups with a high detection rate of more than 90 percent accuracy. Ultimately, the main contribution of these findings is a framework that can be used to cluster registry data to detect malware.
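
The clustering step can be sketched with plain Lloyd's K-Means over registry-derived feature vectors; the two-dimensional toy features below are hypothetical, not the paper's actual registry features:

```python
import math
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each point to the nearest centre,
    then recompute centres as group means. With k=2 this separates the
    data into two groups (e.g. normal vs suspicious behaviour)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[j].append(p)
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g
                   else centers[j]
                   for j, g in enumerate(groups)]
    labels = [min(range(k), key=lambda c: math.dist(p, centers[c]))
              for p in points]
    return labels, centers
```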

Author 1: Nur Adibah Rosli
Author 2: Warusia Yassin
Author 3: Faizal M.A
Author 4: Siti Rahayu Selamat

Keywords: Malware; malware detection; behavior analysis; k-means clustering; data registry

PDF

Paper 14: Convolutional Neural Network Considering Physical Processes and its Application to Diatom Detection

Abstract: A Convolutional Neural Network (CNN) that considers physical processes, with time series of stages, is proposed for diatom detection using remote-sensing-satellite-derived physical data (chlorophyll-a, Photosynthetically Available Radiation (PAR), turbidity, Sea Surface Temperature (SST)) and meteorological data. Diatoms bloom under the conditions of suitable sea water temperature, nutrient-rich water (chlorophyll-a derived from river water flow), photosynthetically available radiation derived from solar irradiance, sea water transparency sufficient for photosynthesis (turbidity), and sea water convection between the bottom and surface water. Almost all of these conditions can be monitored by satellite-based radiometers. The proposed CNN-based diatom prediction with remote sensing and meteorological data is validated through experiments in the Ariake Bay area, Kyushu, Japan, using time series of Moderate Resolution Imaging Spectroradiometer (MODIS)-derived turbidity and chlorophyll-a data estimated for the winter seasons (January to March) from 2010 to 2018, together with meteorological data measured and acquired for the same winter seasons.

Author 1: Kohei Arai

Keywords: Chlorophyll-a concentration; red tide; diatom; MODIS; satellite remote sensing; neural network; meteorological data

PDF

Paper 15: Designing an Automated Intelligent e-Learning System to Enhance the Knowledge using Machine Learning Techniques

Abstract: The modern digital world requires its users to learn continuously in order to enhance their knowledge in the working environment and the academic sector. This kind of learning is significantly facilitated by E-Learning platforms, which improve on traditional methods. As E-Learning offers benefits like time and space independence, many learners have made it their choice. However, since an abundance of E-Learning courses is available on websites, learners are confused as to which is the right one to choose. This paper proposes an Automated Intelligent Learning (AIL) methodology which covers the entire Teaching-Learning Process (TLP) to overcome this issue. It enables the selection of suitable topics and the framing of an appropriate course syllabus and assessment questions for the users. In it, topic selection for the learner is based on Bloom's taxonomy, which enables high-quality knowledge outcomes in the learner. The subject curriculum is framed using hierarchical clustering techniques. This helps the user fix suitable topics and conveniently generate questions using machine learning techniques. The proposed methodology was evaluated by carrying out pre- and post-assessment tests on undergraduate students from computer science courses. The performance of the proposed methodology was compared with that of the existing methodology. It was observed that the proposed methodology is effective in applying the hierarchical topic selection method to build a well-formed course syllabus and assessment questions. Besides, it was found to enable the learner to learn without confusion or distraction.

Author 1: G Deena
Author 2: K. Raja

Keywords: e-Learning; teaching learning process; pre-assessment and post-assessment; Bloom's taxonomy; machine learning

PDF

Paper 16: A Comparative Study of Supervised Machine Learning Techniques for Diagnosing Mode of Delivery in Medical Sciences

Abstract: Machine learning techniques are very helpful tools in medical diagnosis nowadays. Using machine learning algorithms, many complex medical problems can be solved easily and quickly; without them, it is difficult to find the causes of a problem or to suggest the most appropriate solution with high accuracy. Machine learning techniques are used in almost every field of medical science, such as heart disease, diabetes, cancer prediction, blood transfusion, gender prediction, and many more. Both supervised and unsupervised machine learning techniques are applied in medical and health sciences to find the best solution for a medical condition. In this paper, supervised machine learning techniques are implemented for classifying data of pregnant women on the basis of mode of delivery, either C-section or normal delivery. This analysis classifies the subjects into caesarean and normal delivery cases, providing insight that helps physicians take precautionary measures to ensure the health of the expecting mother and the expected child.

Author 1: Syeda Sajida Hussain
Author 2: Tooba Fatima
Author 3: Rabia Riaz
Author 4: Sanam Shahla Rizvi
Author 5: Farina Riaz
Author 6: Se Jin Kwon

Keywords: Machine learning; supervised learning; bioinformatics; medical sciences

PDF

Paper 17: Affective Educational Application of Fish Tank Hydroponics System

Abstract: This project develops algorithms for the design and implementation of an embedded system for hydroponic gardens in homes, located on roofs or terraces, or even in the kitchen, since a fishbowl is used. Vegetables, flowers, etc. are contemplated; one plant was obtained per seed. The development and care of nature is also a special theme of this project, with the objective of educating about care for the environment, for which it is essential not to overlook the affection one has for life. Here it is important to ensure that plants can convey their physical condition through an emotional interface that translates a lack of water into an emotional state of sadness, or sufficient moisture into a state of joy. Thus a technique is presented that uses affective or emotional interfaces to educate owners about plant care and takes advantage of people's emotional states for the development of educational software.

Author 1: Rodolfo Romero Herrera
Author 2: Francisco Gallegos Funes

Keywords: Hydroponics; affective interface; embedded system; gardens; educational software

PDF

Paper 18: Memory-based Collaborative Filtering: Impacting of Common Items on the Quality of Recommendation

Abstract: In this study, the impact of the common items between a pair of users on the accuracy of memory-based collaborative filtering (CF) is investigated. Although CF systems are widely used recommender systems, data sparsity remains an issue: the similarity weight between a pair of users with few ratings is almost a fake relationship. In this work, the similarity weight of traditional similarity methods is adjusted using exponential functions with various thresholds. These thresholds specify the required number of common items between users. The exponential functions devalue the similarity weight between a pair of users who have few common items and increase the similarity weight for users who have sufficient co-rated items. Therefore, a pair of users with sufficient co-rated items obtains a stronger relationship than one with few common items. The significance of this paper is that it succinctly tests the impact of common items on the quality of recommendation, creating an understanding for researchers through the findings presented. The MovieLens datasets are used as benchmarks to measure the effect of the ratio of common items on accuracy. The results verify the considerable impact exerted by the common-items factor.
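
One plausible form of the exponential devaluation described above (the paper's exact function and threshold values may differ): scale a traditional similarity, here Pearson over co-rated items, by 1 − e^(−n/threshold), where n is the number of common items:

```python
import math

def pearson(a, b):
    """Pearson correlation over the items two users (dicts item->rating)
    have both rated; 0.0 when fewer than two co-rated items exist."""
    common = [i for i in a if i in b]
    if len(common) < 2:
        return 0.0
    ma = sum(a[i] for i in common) / len(common)
    mb = sum(b[i] for i in common) / len(common)
    num = sum((a[i] - ma) * (b[i] - mb) for i in common)
    da = math.sqrt(sum((a[i] - ma) ** 2 for i in common))
    db = math.sqrt(sum((b[i] - mb) ** 2 for i in common))
    return num / (da * db) if da and db else 0.0

def weighted_similarity(a, b, threshold=50):
    """Devalue similarities built on few co-rated items: the exponential
    significance factor approaches 1 as common items grow past `threshold`."""
    n_common = len([i for i in a if i in b])
    significance = 1.0 - math.exp(-n_common / threshold)
    return significance * pearson(a, b)
```

Two neighbours with identical rating agreement thus rank differently when one shares far more items with the target user.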

Author 1: Hael Al-bashiri
Author 2: Hasan Kahtan
Author 3: Mansoor Abdullateef Abdulgabber
Author 4: Awanis Romli
Author 5: Mohammad Adam Ibrahim Fakhreldin

Keywords: Collaborative filtering; memory-based; similarity method; data sparsity

PDF

Paper 19: Winning the Polio War in Pakistan

Abstract: Polio is one of the most important issues to have caught global attention. It has been eradicated globally except in Pakistan and Afghanistan. It is quite alarming that, while the rest of the world is polio free, polio cases still emerge from Pakistan. The major motivation behind this research is to study and analyze past cases (trend analysis) and to predict the number of future cases and the obstacles hindering Pakistan from eliminating polio. Areas with peak levels of incidence could be prioritized for effective tracking, planning, and monitoring of vaccination activities, and for better utilization of human resources for targeted and controlled interventions. This shall support better management and resource-allocation decisions for speedy eradication of this epidemic disease. Polio cases are displayed on Google Maps for localization and clustering, and trend analysis is performed for future prediction using linear regression.
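
The trend-analysis step can be sketched with ordinary least squares; the case counts below are illustrative, not the actual polio data:

```python
def linear_trend(years, cases):
    """Ordinary least squares fit cases = a * year + b; returns a
    function that extrapolates the trend to any year."""
    n = len(years)
    mx = sum(years) / n
    my = sum(cases) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(years, cases))
         / sum((x - mx) ** 2 for x in years))
    b = my - a * mx
    return lambda year: a * year + b
```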

Author 1: Toorab Khan
Author 2: Waheed Noor
Author 3: Junaid Babar
Author 4: Maheen Bakhtyar

Keywords: Prediction; visualization; regression; clustering

PDF

Paper 20: Investigation of Different Modulation Formats for Extended Reach NG-PON2 using RSOA

Abstract: Global market forecasts predicted that by 2020, more than 26 billion universally interconnected internet devices and connections would generate nearly 3 times the data traffic of 2015. The increase in data traffic demands enormous bandwidth capacity. The potential to deliver 10 Gbps of data to individual businesses and households will be of paramount importance and a challenging issue for present-day service providers. An intensive study is carried out on the Fiber-To-The-Home Passive Optical Network (FTTH PON) for use in optical communication, due to its high data rates and greater bandwidth. The current evolution of the Next Generation Passive Optical Network Stage 2 (NG-PON2) is the primary key technology for the growing demands of higher bandwidth and transmission of data from the service providers to the subscribers in the access network. The Time and Wavelength Division Multiplexing PON (TWDM-PON) architecture is the viable essential solution for NG-PON2, providing more bandwidth for bidirectional transmission. This article proposes a design for an extended-reach TWDM-PON based on a reflective semiconductor optical amplifier (RSOA). The exclusive feature of the RSOA is wavelength conversion, which replaces the transmitters at the subscriber end. The Quality of Service (QoS) performance is critically analyzed for different optical modulation formats in the proposed extended-reach TWDM-PON using RSOA. The TWDM-PON using RSOA is simulated and investigated for different photodetectors, and the analysis is also carried out for various distances and data rates. The results exhibited that APD receivers perform better than PiN receivers, with a minimum bit error rate of 10^-11 and a minimum Q factor of 6.2. The comparative analysis of different modulation formats shows that Carrier-Suppressed Return-to-Zero Differential Phase Shift Keying (CSRZ-DPSK) gives the best performance for longer distances and large data rates, and Return-to-Zero (RZ) gives the least.

Author 1: S Rajalakshmi
Author 2: T. Shankar

Keywords: Fiber To The Home (FTTH); Passive Optical Network (PON); Next Generation-Passive Optical Networks Stage 2 (NG-PON2); Quality of Service (QoS); Reflective Semiconductor Optical Amplifier (RSOA); Time and Wavelength Division Multiplexing (TWDM)

PDF

Paper 21: Distributed Shadow Controllers based Moving Target Defense Framework for Control Plane Security

Abstract: Moving Target Defense (MTD) has drawn substantial attention from the research community in the recent past for designing secure networks. MTD significantly reduces the asymmetric advantage of attackers by constantly changing the attack surface. In this paper, a Software Defined Networking (SDN) based MTD framework, SMTSC (SDN based MTD framework using Shadow Controllers), is proposed. While previous work in SDN based MTD targets data plane security, we exploit MTD for the protection of the control plane of SDN. The proposed solution uses the concept of shadow controllers to produce dynamism, providing security at the control plane of the SDN environment and throttling reconnaissance attacks targeting controllers. The advantages of our approach are multifold. First, it exploits the mechanism of MTD to provide security in the control plane. Second, the multi-controller approach provides higher availability in the SDN network. Another critical gain is the lower computational overhead of SMTSC. Mininet and the ONOS Controller are used to implement the proposed framework. The effectiveness and overheads of the framework are evaluated in terms of attacker's effort, defender cost, and complexity introduced in the network. The results demonstrate promising trends for the protection of the control plane of the SDN environment.

Author 1: Muhammad Faraz Hyder
Author 2: Muhammad Ali Ismail

Keywords: Control plane security; moving target defense; shadow controllers; software defined networks

PDF

Paper 22: Scientific Text Sentiment Analysis using Machine Learning Techniques

Abstract: Over time, textual information on the World Wide Web (WWW) has increased exponentially, opening potential research directions in machine learning (ML) and natural language processing (NLP). Sentiment analysis of scientific-domain articles is a very trendy and interesting topic nowadays. The main purpose of this research is to help researchers identify quality research papers based on sentiment analysis. In this research, sentiment analysis of scientific articles is carried out on citation sentences using an existing annotated corpus, which consists of 8736 citation sentences. Noise was removed using different data normalization rules in order to clean the corpus. To perform classification on this dataset, we developed a system in which six different machine learning algorithms are implemented: Naïve Bayes (NB), Support Vector Machine (SVM), Logistic Regression (LR), Decision Tree (DT), K-Nearest Neighbor (KNN), and Random Forest (RF). The accuracy of the system is then evaluated using different evaluation metrics, e.g. F-score and accuracy score. To improve the system's accuracy, additional feature selection techniques such as lemmatization, n-gramming, tokenization, and stop-word removal were applied; the system provided significantly better performance than the base system in every case, achieving up to about 9% improvement over the base system.
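
The preprocessing steps the abstract lists (tokenization, case folding, stop-word removal, n-grams) can be sketched in a few lines; the stop-word list and regex below are illustrative stand-ins, not the authors' actual pipeline:

```python
import re

STOP_WORDS = {"the", "is", "a", "an", "of", "and", "to", "in"}  # illustrative subset

def preprocess(sentence, n=2):
    """Tokenize, case-fold, remove stop words, and build n-grams."""
    tokens = re.findall(r"[a-z]+", sentence.lower())        # tokenization + case folding
    tokens = [t for t in tokens if t not in STOP_WORDS]     # stop-word removal
    ngrams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return tokens, ngrams

tokens, bigrams = preprocess("The method outperforms the baseline approach")
# tokens -> ['method', 'outperforms', 'baseline', 'approach']
```

The resulting tokens or n-grams would then be fed, as features, into any of the six classifiers listed above.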

Author 1: Hassan Raza
Author 2: M. Faizan
Author 3: Ahsan Hamza
Author 4: Ahmed Mushtaq
Author 5: Naeem Akhtar

Keywords: Sentiment analysis; scientific citations; machine learning; scientific literature; classification

PDF

Paper 23: Enrichment Ontology with Updated user Data for Accurate Semantic Annotation

Abstract: Annotation is considered one of the main applications of the semantic web. The idea behind annotation is to add metadata to existing information, which enables machines to deal with data that carry meaning and are machine-readable. Semantic annotation is one of the techniques used to enrich web content semantically; it facilitates writing comments and evaluating previously annotated resources, which can lead to better search results. Our framework aims to enrich an ontology by embedding data directly into the ontology in order to obtain complete and accurate data.

Author 1: Haytham Al-Feel
Author 2: Hanaa Ghareib Hendi
Author 3: Heba Elbeh

Keywords: Ontology; semantic web; semantic annotation; RSS news

PDF

Paper 24: Detecting Fake Images on Social Media using Machine Learning

Abstract: In this technological era, social media plays a major role in people's daily lives. Most people frequently share text, images, and videos on social media (e.g. Twitter, Snapchat, Facebook, and Instagram). Images are one of the most common types of media shared among users, so there is a need to monitor the images circulating on social media. It has become easy for individuals and small groups to fabricate images and disseminate them widely in a very short time, which threatens the credibility of news and public confidence in social media. This research proposes an approach to extract image content, classify it, verify the authenticity of digital images, and uncover manipulation. Instagram is one of the most important image-sharing websites and mobile applications on social media; it allows users to take photos, add digital photographic filters, and upload pictures. There is much unwanted content in Instagram posts, such as threats and forged images, which may cause problems for society and national security. This research aims to build a model that can classify Instagram content (images) to detect threats and forged images. The model was built using deep learning algorithms, namely a Convolutional Neural Network (CNN), the AlexNet network, and transfer learning using AlexNet. The results showed that the proposed AlexNet network offers more accurate detection of fake images compared to the other techniques, with 97% accuracy. The results of this research will be helpful in monitoring images shared on social media for unusual content and forged-image detection, and in protecting social media from electronic attacks and threats.

Author 1: Njood Mohammed AlShariah
Author 2: Abdul Khader Jilani Saudagar

Keywords: Convolutional Neural Network (CNN); image forgery; classification; AlexNet; Rectified Linear Unit (ReLU); SoftMax function; feature extraction

PDF

Paper 25: Optimal Global Threshold based on Two Dimension Otsu for Block Size Decision in Intra Prediction of H.264/AVC Coding

Abstract: Advanced Video Coding (H.264/AVC) has proved its ability to find the trade-off between the compressed bit rate and the visual quality of video, compared to traditional coding methods. One of the most time-consuming encoder stages is intra prediction, in which different block sizes are exhaustively examined to select the block size suited to the best block mode decision. In this paper, an efficient approach is suggested to adaptively select the best block size for intra prediction and achieve high compression efficiency. The proposed approach exploits the idea of quad-tree decomposition for block partitioning based on a predefined threshold value. An optimal global threshold value based on the two-dimensional Otsu technique is suggested for the block division decision. The proposed technique is carried out on different sets of video resolutions with different quantization parameters using Matlab software. The proposed approach is compared with the reference JM18.6 video coding in terms of bit rate (BR), time saving, and peak signal to noise ratio (PSNR). A tangible acceleration of the running time is accomplished, besides improvement in both visual quality and bit rate with some QCIF and CIF video resolutions. The simulation results demonstrate time savings of 42% to 68% on average with CIF and QCIF videos. Concerning visual quality in terms of Bjontegaard Delta parameters, the PSNR improved by 0.2 to 1.6, while the BR was reduced by 0.79 to 15.3, for some videos of QCIF, CIF, and 720p resolutions. For high-resolution videos, the suggested approach achieves minor improvement with some videos and slight degradation with others.
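
For intuition, here is a minimal sketch of the classic one-dimensional Otsu method, which the paper extends to two dimensions; this is a generic textbook version assumed for illustration, not the authors' implementation:

```python
def otsu_threshold(pixels, levels=256):
    """Classic 1-D Otsu: pick the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]                     # background weight up to t
        if w0 == 0:
            continue
        w1 = total - w0                   # foreground weight
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                   # background mean
        mu1 = (total_sum - sum0) / w1     # foreground mean
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

The 2-D variant adds a second axis (typically a neighbourhood-averaged intensity) to the histogram, making the threshold more robust to noise.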

Author 1: Sawsan Morkos Gharghory

Keywords: H.264/AVC coding; intra prediction; block size decision; Otsu two dimensions method

PDF

Paper 26: A Framework for Cloud Security Risk Management based on the Business Objectives of Organizations

Abstract: Security is considered one of the top-ranked risks of Cloud Computing (CC) due to the outsourcing of sensitive data to a third party. In addition, the complexity of the cloud model results in a large number of heterogeneous security controls that must be consistently managed. Hence, no matter how strongly the cloud model is secured, organizations continue to suffer from a lack of trust in CC and remain uncertain about its security risk consequences. Traditional risk management frameworks do not consider the impact of CC security risks on the business objectives of organizations. In this paper, we propose a novel Cloud Security Risk Management Framework (CSRMF) that helps organizations adopting CC to identify, analyze, evaluate, and mitigate security risks in their cloud platforms. Unlike traditional risk management frameworks, CSRMF is driven by the business objectives of organizations. It allows any organization adopting CC to be aware of cloud security risks and to align its low-level management decisions with high-level business objectives. In essence, it is designed to address the impact of cloud-specific security risks on the business objectives of a given organization. Consequently, organizations are able to conduct a cost-value analysis regarding the adoption of CC technology and gain an adequate level of confidence in cloud technology. On the other hand, Cloud Service Providers (CSPs) are able to improve productivity and profitability by managing cloud-related risks. The proposed framework has been validated and evaluated through a use-case scenario.

Author 1: Ahmed E Youssef

Keywords: Information security; data privacy; cloud security risks; risk management; business objectives; cloud computing

PDF

Paper 27: Handwritten Arabic Text Recognition using Principal Component Analysis and Support Vector Machines

Abstract: In this paper, an offline holistic handwritten Arabic text recognition system based on Principal Component Analysis (PCA) and Support Vector Machine (SVM) classifiers is proposed. The proposed system consists of three primary stages: preliminary processing, feature extraction using PCA, and classification using polynomial, linear, and Gaussian SVM classifiers. In this system, the text skeleton is first extracted and the text images are normalized to a uniform size for extraction of the global features of the Arabic words using PCA. Recognition performance was evaluated on version 2 of the IFN/ENIT database of handwritten Arabic text using the polynomial, linear, and Gaussian SVM classifiers. The classification results of the proposed system were compared with the results produced by a benchmark TRS based on the Discrete Cosine Transform (DCT) method using numerous normalization sizes of Arabic text images. The experimental results support the effectiveness of the proposed system in holistic recognition of handwritten Arabic text.

Author 1: Faisal Al-Saqqar
Author 2: Atallah. M AL-Shatnawi
Author 3: Mofleh Al-Diabat
Author 4: Mesbah Aloun

Keywords: Handwritten Arabic text; holistic recognition; principal component analysis; support vector machines

PDF

Paper 28: 5G Enabled Technologies for Smart Education

Abstract: 5G technology use cases depict the prospect of the 5G network model to revolutionize industry, and education is no exception. The 5G model is generally made up of three main blocks: Enhanced Mobile Broadband, Massive Machine Type Communication, and Ultra Reliable and Low Latency Communication. Within these blocks are the services 5G offers to users. In this paper, we focus on educational users as beneficiaries of 5G technologies. Modern-day educational institutions can benefit from the deployment of 5G-enabled services adapted to this sector. We propose frameworks relating 5G and its disruptive technologies to advancing tools that will propel the idea of a smart educational system. This paper hence provides a comprehensive discussion of the 5G technologies that will facilitate new teaching and learning trends in the educational environment.

Author 1: Delali Kwasi Dake
Author 2: Ben Adjei Ofosu

Keywords: 5G Networks; smart education; smart campus; machine learning; artificial intelligence; big data; internet of things

PDF

Paper 29: A Multi-Layered Security Model for Learning Management System

Abstract: A learning management system (LMS) is a web-based software application used for the documentation, administration, tracking, reporting, and delivery of training programs and educational courses. It is an efficient and effective way to give valuable information to students in a short time. With the evolution of e-learning, the learning management system has been widely adopted in the education sector as well as in the corporate market. Thus, it has become a valued target for attackers, who focus their attacks on LMS platforms. Most of the popular learning management systems available nowadays do not pay enough attention to security mechanisms, which gives intruders the opportunity to gain unauthorized access by exploiting security gaps and breaching the system. The result is information leakage, unwanted data deletion or modification, and compromised data integrity. The aim of this research paper is to address these security concerns and to provide a solution that can secure a learning management system against potential threats and attacks. In this paper, a complete multi-layered security model is proposed. The implementation of the proposed model will provide a very secure environment for any learning management system.

Author 1: Momeen Khan
Author 2: Tallat Naz
Author 3: Mohammad Awad Hamad Medani

Keywords: Multi-layered security model; designing a security model for learning management system; learning management system

PDF

Paper 30: Comparative Study between Lean Six Sigma and Lean-Agile for Quality Software Requirement

Abstract: Requirement elicitation is one of the most challenging phases of the entire software development life cycle. It is the process of extracting and analyzing requirements from customers to thoroughly understand what system needs to be built. Despite all the advances in methodologies and practice approaches, extracting and establishing the right requirements is still a matter of research debate. The objective of this paper is to compare the characteristics of two hybrid development approaches: Lean Six Sigma vs. Lean Agile. Most existing comparative studies have remained within their respective domains, such as Lean vs. Six Sigma, Define-Measure-Analyze-Improve-Control vs. Design-For-Six-Sigma, or Lean vs. Six Sigma vs. Lean Six Sigma; in the software industry, comparative studies have focused on Lean vs. Agile, Agile vs. Waterfall, or Lean vs. Kanban vs. Agile, comparing project size, process cycle time, and sequential versus iterative processes. This study explores the differences and similarities in principles and practices. It contributes significantly to helping business analysts systematically identify solutions and actions that ensure continuous improvement in producing quality software requirements.

Author 1: Narishah Mohamed Salleh
Author 2: Puteri NE Nohuddin

Keywords: Lean Six Sigma; Lean Agile; DMAIC; SCRUM; requirement elicitation

PDF

Paper 31: Indonesian Words Error Detection System using Nazief Adriani Stemmer Algorithm

Abstract: Stemming in each language involves a different process, determined by the structure of the language. Stemming is mostly used as a step in word and phrase processing, and many stemming algorithms are available. One application of stemming is detecting word errors in Indonesian. In this study, the researchers created an Indonesian word error detection system using the Nazief and Adriani algorithm. In the trials conducted, the system accepts text input from the user and then preprocesses it in three stages: tokenization, case folding, and filtering. After preprocessing, the system passes each word to the stemming process. The stemming results are compared with the base words available in the database; if a word does not match, it is highlighted and considered an error word. The first finding is that the Nazief and Adriani algorithm can detect word errors with up to 100% accuracy. The second finding is that the algorithm also detects non-word errors, with an accuracy of 97.464%.
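
The detection flow described above (tokenize, case-fold, stem, compare against a base-word dictionary) can be sketched as follows; the toy suffix-stripping stemmer and the tiny dictionary are hypothetical simplifications of the Nazief-Adriani algorithm and the paper's database:

```python
# Hypothetical dictionary of Indonesian base words; the paper uses a database.
BASE_WORDS = {"makan", "minum", "baca"}

def simple_stem(word):
    """Toy stand-in for the Nazief-Adriani stemmer: strip a few common suffixes."""
    for suffix in ("kan", "an", "i"):
        if word.endswith(suffix) and word[: -len(suffix)] in BASE_WORDS:
            return word[: -len(suffix)]
    return word

def detect_errors(text):
    """Tokenize, case-fold, stem, and flag words whose stem is not in the dictionary."""
    tokens = text.lower().split()   # tokenization + case folding
    return [t for t in tokens if simple_stem(t) not in BASE_WORDS]

detect_errors("Makanan mnum baca")  # flags the misspelled 'mnum'
```

The real algorithm also handles prefixes, infixes, and recoding rules, which is what makes it suitable for Indonesian morphology.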

Author 1: Anton Yudhana
Author 2: Abdul Fadlil
Author 3: Muhamad Rosidin

Keywords: Indonesian; word error; stemming; Nazief and Adriani stemmer algorithm; detection system

PDF

Paper 32: Problems Solving of Cell Subscribers based on Expert Systems Neural Networks

Abstract: With the growing demand for telecommunications services, the number of calls to telecommunications companies concerning the use of services, the setup and maintenance of equipment, and the resolution of problems arising in the process of using services is also growing. From the point of view of system analysis, a problem is a mismatch between the existing and the required (target) state of a system for a given state of the environment at a given moment in time. Based on this definition, we consider a problem of a cellular network subscriber to be a mismatch between the existing and the required state of the cellular network in a given state of the environment at a given moment in time. The state of the cellular network is characterized by the functioning of all devices and the offered range of services. The short time available for analyzing problem situations and making decisions, the large amount of information characterizing the current situation, and the difficulty of solving poorly formalized and poorly structured tasks in the absence of complete and reliable information about the state of the cellular network and the functioning of its elements make it impossible for humans alone to solve these problems effectively. In this regard, the development and implementation of a precedent-based neural network expert system for solving the problems of cellular network subscribers is an urgent scientific and technical task.

Author 1: Ahmad AbdulQadir AlRababah

Keywords: Neural network expert system; telecommunications companies; system analysis; cellular network; structured tasks; reliable information; human capabilities; telecommunications services

PDF

Paper 33: Understanding Students’ Motivation and Learning Strategies to Redesign Massive Open Online Courses based on Persuasive System Development

Abstract: Electronic learning, or e-learning, is currently flourishing immensely in areas such as secondary and tertiary education, lifelong learning programs, and adult education. Within recent years, massive open online courses (MOOCs) have received profound attention within the field of e-learning. Persuasive principles can be implemented to enhance the system design and motivate students to engage with the system. The aim of this study is to identify the motivation and learning strategies that affect the academic performance of tertiary education students using MOOCs. 40 students enrolled in the Ethnic Relations course participated in the online survey. The Motivated Strategies for Learning Questionnaire (MSLQ) is the instrument used in this study, while Automatic Linear Modelling (ALM) and Multiple Linear Regression (MLR) were used in the analysis. The results show that there is a correlation between students' motivation, learning strategies, and academic performance. It was found that resource management, cognitive and metacognitive strategies, and the value component are the main scales that influenced motivation and learning strategies towards excellent academic performance. The results can be used to fulfil the first phase of designing a persuasive system based on the Persuasive System Design (PSD) model, which is to understand the issues behind a system.

Author 1: Mohamad Hidir Mhd Salim
Author 2: Nazlena Mohamad Ali
Author 3: Mohamad Taha Ijab

Keywords: Persuasive; MOOCs; motivation; learning strategies

PDF

Paper 34: Integrated Methodological Framework for Digital Transformation Strategy Building (IMFDS)

Abstract: There is still conflict among the definitions, frameworks, and formulations of digital transformation strategy in the literature. Despite extensive research on digital transformation strategies and digital transformation assessment, there is no clear and global meta-model describing the general concepts and guidelines of digital transformation to frame and drive a successful transformation. Several digital transformation approaches have been presented in the literature, but these approaches focus on specific cases and specific concepts. The present paper describes digital transformation and its relationship with IT governance, and presents how IT governance can lead digital transformation. A literature review has been conducted on the best-known IT frameworks (COBIT, ITIL, CMMI) and their structure in order to provide a standard framework known to practitioners. This paper proposes an Integrated Methodological Framework for Digital Transformation Strategy Building, called IMFDS. It is based on IT governance elements (business strategic planning, IT strategic planning, IT organizational structure, IT reporting, IT budgeting, IT investment decisions, steering committee, IT prioritization process, and IT reaction capacity) and provides specific guidelines to help organizations formulate, implement, and monitor their transformation strategies. IMFDS is articulated across 9 blocks (steps) and 34 processes.

Author 1: Zineb Korachi
Author 2: Bouchaib Bounabat

Keywords: Digital transformation strategy; digital transformation assessment; IT governance; IT framework

PDF

Paper 35: Real-Time Carpooling Application based on k-NN Algorithm: A Case Study in Hashemite University

Abstract: The current revolution in mobile technology across different aspects of the community directs researchers and scientists to employ this technology to identify practical, mobile-based solutions for daily life problems. One of the major challenges in developing countries is the public transportation system. Public transportation is an essential requirement for the welfare of modern society and has a critical impact on people's productivity and thus on the entire economic development process. Therefore, different solutions have been investigated. Carpooling is one such solution, based on the use of a single shared car by a group of people heading to the same location on a daily basis. Carpooling can be considered an efficient alternative that overcomes the limitations of the conventional transportation system with easier, quicker, and more environmentally friendly car journeys. This paper presents an intelligent carpooling mobile app for commuting students of the Hashemite University. The proposed solution is founded on a data mining technique, specifically the k-Nearest-Neighbour (k-NN) technique.
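
As a rough illustration of the k-NN matching idea, the sketch below ranks candidate drivers by distance to a rider's pickup point; the coordinates, the plain Euclidean metric, and the data layout are assumptions for illustration, not the paper's implementation:

```python
import math

def knn_match(rider, drivers, k=3):
    """Return the k candidate drivers nearest to the rider's pickup point.

    Each driver is ((lat, lon), name); plain Euclidean distance on
    coordinates is a simplification of a real road-distance metric.
    """
    return sorted(drivers, key=lambda d: math.dist(rider, d[0]))[:k]

drivers = [((32.10, 36.19), "A"), ((32.09, 36.20), "B"), ((31.95, 35.91), "C")]
nearest = knn_match((32.10, 36.18), drivers, k=2)  # drivers A and B
```

A production app would replace Euclidean distance with route distance or travel time and add schedule constraints.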

Author 1: Subhieh El Salhi
Author 2: Fairouz Farouq
Author 3: Randa Obeidallah
Author 4: Yousef Kilani
Author 5: Esraa Al Shdaifat

Keywords: Mobile application; carpooling; data mining; classification; k-NN algorithms

PDF

Paper 36: Cardiovascular Disease Diagnosis: A Machine Learning Interpretation Approach

Abstract: Research on heart disease has always been at the center of attention of the World Health Organization. More than 17.9 million people died from it in 2016, representing 31% of all deaths globally. Machine learning techniques have been used extensively in this area to help physicians develop a firm opinion about the condition of their heart disease patients. Some existing machine learning models still suffer from limited prediction ability, and the chosen analysis approaches are not always suitable. It was also noticed that existing approaches pay more attention to building high-accuracy models while overlooking the ability to interpret and understand the recommendations of these models. In this research, different renowned machine learning techniques (Artificial Neural Networks, Support Vector Machines, Naïve Bayes, Decision Trees, and Random Forests) have been investigated to help build, understand, and interpret different heart disease diagnosis models. The Artificial Neural Networks model showed the best accuracy of 84.25% compared to the other models. In addition, it was found that although some models have higher accuracies than others, it may be safer to choose a lower-accuracy model as the final design of this study. This sacrifice was essential to ensure that a more transparent and trusted model is used in the heart disease diagnosis process. This transparency validation was conducted using a newly suggested metric: the Feature Ranking Cost index. The use of that index showed promising results by making it clear which machine learning model balances accuracy and transparency. It is expected that the detailed analyses and findings of this research will be useful to the machine learning community, as they could be the basis for post-hoc interpretation of prediction models on different clinical data sets.

Author 1: Hossam Meshref

Keywords: Heart diseases; machine learning; artificial neural networks; support vector machines; Naïve Bayes; decision trees; random forests; model interpretation; feature ranking cost index

PDF

Paper 37: Towards the Development of Collaborative Learning in Virtual Environments

Abstract: The objective of this research is to evaluate strategies such as wikis, forums, and chat in the development of collaborative learning among higher education students. A collaborative experience was developed with 25 students in an asynchronous e-learning environment. The activities consisted of forum discussions, chat, and project development in a wiki environment. The research method includes a quantitative analysis in which forum contributions were rated by applying a rubric. The use of didactic strategies such as wikis, forums, and chat in learning sessions promotes collaborative learning; the main factors for this to happen are the degree to which students appropriate these technologies and the teachers' mastery of their use. It is not possible to affirm the superiority of one tool over another, because each has its own characteristics and could be used for different purposes; having complementary functions, they must be organized and combined to develop collaborative learning.

Author 1: Benjamin Maraza-Quispe
Author 2: Nicolás Caytuiro-Silva
Author 3: Eveling Castro-Gutierrez
Author 4: Melina Alejandro-Oviedo
Author 5: Walter Choquehuanca-Quispe
Author 6: Walter Fernandez-Gambarini
Author 7: Luis Cuadros-Paz
Author 8: Betsy Cisneros-Chavez

Keywords: Wikis; forums; chat; learning; collaborative

PDF

Paper 38: Modification of Manual Raindrops Type Observatory Ombrometer with Ultrasonic Sensor HC-SR04

Abstract: Water, in whatever form it comes, is important for the life of all living things. Indonesia is a tropical equatorial region with quite high rainfall variation, and the regularity of rainfall distribution is one of the aspects most important to community activities. Rainfall intensity can be measured manually using the Observatory Ombrometer. With this manual instrument, samples should be taken at 7.00 a.m. every day using a measuring cup to determine the height of the collected water. However, this type of gauge is prone to error at high rainfall intensities, since the samples are drained only every 24 hours and much water is wasted. To solve this problem, a modified rainfall gauge was built: an Observatory Ombrometer with an HC-SR04 ultrasonic sensor. The height of the water in the container is sent to a server, where the data are stored in a database every ten minutes to reduce the risk of evaporation and to minimize errors in measuring rainfall intensity. The results were compared with those of BMKG (the Meteorology, Climatology, and Geophysics Agency); the correlation of the measurements reached 0.9739, or 97.39%.
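
The water-height computation implied by the design can be sketched as follows; the container depth and the echo-time conversion here are illustrative assumptions, since the HC-SR04 reports a round-trip ultrasonic echo time from which distance to the water surface is derived:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343   # ~343 m/s in air at 20 degrees C
CONTAINER_DEPTH_CM = 30.0           # hypothetical container depth

def water_height_cm(echo_time_us):
    """Convert the HC-SR04 round-trip echo time (microseconds) to water height.

    distance = t * v / 2 (sound travels to the surface and back);
    the water height is the container depth minus that distance.
    """
    distance_to_surface = echo_time_us * SPEED_OF_SOUND_CM_PER_US / 2
    return CONTAINER_DEPTH_CM - distance_to_surface

water_height_cm(1166)  # echo of ~1166 us -> surface ~20 cm away -> ~10 cm of water
```

The device would send this height to the server every ten minutes; rainfall is then the increase in height between readings.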

Author 1: Anton Yudhana
Author 2: Jessy Rahmayanti
Author 3: Son Ali Akbar
Author 4: Subhas Mukhopadhyay
Author 5: Ismail Rakip Karas

Keywords: Observatory Ombrometer; rainfall; database; ultrasonic sensor; IoT; rain gauge

PDF

Paper 39: Clustering based Privacy Preserving of Big Data using Fuzzification and Anonymization Operation

Abstract: Big data, which may contain sensitive information, is used by data miners for analysis purposes; this raises certain privacy challenges for researchers. Existing privacy-preserving methods use algorithms that limit data reconstruction while securing the sensitive data. This paper presents a clustering-based probabilistic model for the privacy preservation of big data that secures sensitive information while attaining minimum perturbation and maximum privacy. In our model, sensitive information is secured by identifying the sensitive data within data clusters and modifying or generalizing it. The resulting dataset is analyzed to calculate the accuracy of our model in terms of hidden data and data lost as a result of reconstruction. Extensive experiments were carried out to demonstrate the results of the proposed model. Clustering-based privacy preservation of individual data in big data, with minimum perturbation and successful reconstruction, highlights the significance of our model, evaluated using standard performance measures.

Author 1: Saira Khan
Author 2: Khalid Iqbal
Author 3: Safi Faizullah
Author 4: Muhammad Fahad
Author 5: Jawad Ali
Author 6: Waqas Ahmed

Keywords: Big data; clustering; privacy preservation; reconstruction; perturbation

PDF

Paper 40: Flooding and Oil Spill Disaster Relief using Sentinel of Remote Sensing Satellite Data

Abstract: Flooding and oil spill disaster relief using Sentinel remote sensing satellite data is conducted. Kyushu, Japan had severe heavy rain from 26 August to 30 August 2019. The optical sensor and the Synthetic Aperture Radar (SAR) onboard the remote sensing satellites are used for disaster relief. NDVI and SWIR data derived from the Sentinel data are used for disaster relief. The merits and demerits of the optical sensor and the SAR instrument are compared from the disaster relief point of view.

Author 1: Kohei Arai

Keywords: Sentinel; disaster relief; satellite remote sensing; flooding; oil spill; synthetic aperture radar; optical sensor; vegetation index

PDF

Paper 41: Analysis of Multi-hop Wireless Sensor Networks using Probability Propagation Models

Abstract: This paper presents a formula for estimating the probability of collecting a given amount of data in multi-hop wireless sensor networks (WSNs) with a cluster-tree topology, based on a propagation model and Monte Carlo simulation. The probabilistic model is based on an analytical model of the IEEE 802.15.4 MAC protocol. The probability of successful node transmission is extended to the probabilities of successful collection at the cluster P(X=k) and at the sink node P(X ≥ k). A numerical example is provided for comparing the probabilities. We propose a model to calculate the probability from the ratio of the collection rate to the total number of nodes and therefore provide the likelihood of complete data collection. Finally, the results from our analysis provide an estimate of the probability of achieving successful transmission in WSNs.
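The quantity P(X ≥ k), the probability that at least k nodes deliver their data to the sink, can be estimated by Monte Carlo simulation as the abstract describes. A hedged sketch, assuming an i.i.d. per-node success probability `p_success` (the paper derives this probability from the IEEE 802.15.4 MAC model; here it is a free parameter):

```python
import random

def prob_at_least_k(n_nodes, k, p_success, trials=20000, seed=42):
    """Monte Carlo estimate of P(X >= k): the probability that at
    least k of n_nodes nodes successfully deliver their data,
    assuming independent per-node success probability p_success."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        delivered = sum(rng.random() < p_success for _ in range(n_nodes))
        if delivered >= k:
            hits += 1
    return hits / trials
```

For example, `prob_at_least_k(20, 10, 0.5)` should land near the binomial tail value of about 0.59.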

Author 1: Komgrit Jaksukam
Author 2: Teerawat Tongloy
Author 3: Santad Chuwongin
Author 4: Siridech Boonsang

Keywords: Probabilistic modelling; wireless sensor network; multi-hop networks; data collection scheme; Monte Carlo simulation; probability propagation models; probabilistic analysis

PDF

Paper 42: Cloud-Edge Network Data Processing based on User Requirements using Modify MapReduce Algorithm and Machine Learning Techniques

Abstract: Edge computing extends cloud computing to enhance network performance, in terms of latency and network traffic, for many applications such as the Internet of Things (IoT), Cyber-Physical Systems (CPS), Machine-to-Machine (M2M) technologies, the Industrial Internet, and smart cities. This extension aims at reducing data communication and transmission through the network. However, data processing is the main challenge facing edge computing. In this paper, we propose a data processing framework based on both edge computing and cloud computing, performed by partitioning (classification and restructuring) the data schema at the edge computing level based on feature selection. These features are detected using the MapReduce algorithm and a proposed machine learning subsystem built on user requirements. Our approach mainly relies on the assumption that the data sent by edge devices can be used in two forms: as control data (i.e. real-time analytics) and as knowledge extraction data (i.e. historical analytics). We evaluated the proposed framework in terms of the amount of transmitted and stored data and the data retrieval time; the results show that the amount of data sent was optimized and the data retrieval time was greatly decreased. Our evaluation was applied experimentally and theoretically on a hypothetical system in a kidney disease center.

Author 1: Methaq Kadhum
Author 2: Saher Manaseer
Author 3: Abdel Latif Abu Dalhoum

Keywords: Edge computing; cloud computing; data processing; data partitioning; MapReduce; machine learning; feature selection; user requirement

PDF

Paper 43: Proof of Credibility: A Blockchain Approach for Detecting and Blocking Fake News in Social Networks

Abstract: Detecting and preventing rumors and misleading information still represents a big challenge for social network developers and researchers. Since propagating newsworthy information is typical behavior of most social media users, verifying the credibility and reliability of information is a vital security requirement for social network platforms. Owing to its immutability, security, tamper-proof records, and P2P design, blockchain is a powerful technology that can provide a solution to this challenge. This paper introduces a novel blockchain approach called Proof of Credibility (PoC) for detecting fake news and blocking its propagation in social networks. The functionality of the PoC protocol has been simulated on two datasets of newsworthy tweets collected from different news sources on Twitter. The results show satisfying performance and efficiency of the proposed approach in detecting rumors and blocking their propagation.

Author 1: Mohamed Torky
Author 2: Emad Nabil
Author 3: Wael Said

Keywords: Blockchain technology; social networks; fake news detection

PDF

Paper 44: Predictive Control for Distributed Smart Street Light Network

Abstract: With the advent of smart cities embedding smart technology, such as smart streetlights, in urban development, the quality of living for citizens has been vastly improved. TALiSMaN is one of the most promising smart streetlight schemes to date; however, it possesses certain limitations that lead to network congestion and packet drops during peak road traffic periods. Traffic prediction is vital in network management, especially for real-time decision-making and latency-sensitive applications. With that in mind, this paper analyses three real-time short-term traffic prediction models, specifically the simple moving average, exponential moving average and weighted moving average, to be embedded into TALiSMaN with the aim of easing network congestion. Additionally, the paper proposes a traffic categorisation and packet propagation control mechanism that uses historical road traffic data to protect the network from overload. We evaluate the performance of these models with TALiSMaN in a simulated environment and compare them with TALiSMaN without a traffic prediction model. Overall, the weighted moving average showed promising results in reducing packet drops while maintaining the usefulness of the streetlights compared to the original TALiSMaN scheme, especially during rush hour.
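The three predictors named in the abstract are textbook short-term forecasters. A minimal sketch in their standard forms (the window size, smoothing factor, and traffic counts below are illustrative, not the paper's settings):

```python
def sma(series, window):
    """Simple moving average: mean of the last `window` samples."""
    recent = series[-window:]
    return sum(recent) / len(recent)

def wma(series, window):
    """Weighted moving average: newer samples get larger weights."""
    recent = series[-window:]
    weights = range(1, len(recent) + 1)  # oldest -> 1, newest -> window
    return sum(w * x for w, x in zip(weights, recent)) / sum(weights)

def ema(series, alpha=0.5):
    """Exponential moving average folded over the whole series."""
    value = series[0]
    for x in series[1:]:
        value = alpha * x + (1 - alpha) * value
    return value

traffic = [10, 12, 18, 30, 28]  # hypothetical vehicle counts per interval
forecasts = (sma(traffic, 3), wma(traffic, 3), ema(traffic))
```

The WMA's recency weighting is a plausible reason it tracked rush-hour transitions better than the plain SMA in the paper's evaluation.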

Author 1: Pei Zhen Lee
Author 2: Sei Ping Lau
Author 3: Chong Eng Tan

Keywords: Traffic prediction; adaptive street lighting; smart cities; energy efficient; network congestion

PDF

Paper 45: Developing a Framework for Potential Candidate Selection

Abstract: Recruitment is the process of hiring the right person for the right job. In today's competitive world, recruiting the right person from thousands of applicants is tedious work. In addition, analyzing this huge number of applications manually may produce biased and erroneous output, which can eventually cause problems for companies. If these pools of resumes can be analyzed automatically and presented to employers in a systematic way for choosing the appropriate person, it can help both the applicants and the employers. To address this need, we have developed a framework that takes the candidates' resumes, extracts information from them by recognizing named entities using machine learning, and scores the applicants according to predefined rules and employer requirements. Employers can then select the best suited candidates for their jobs from these scores by using skyline filtering.

Author 1: Farzana Yasmin
Author 2: Mohammad Imtiaz Nur
Author 3: Mohammad Shamsul Arefin

Keywords: Information extraction; named entity recognition; machine learning; skyline queries

PDF

Paper 46: Identification of People with Parkinson's Suspicions through Voice Signal Processing

Abstract: Parkinson's is considered a disease with a very unpredictable prognosis. It originates from a multisystemic neurodegenerative process that affects the central nervous system, which is responsible for motor control of the body, and it produces chronic joint pain and states of depression if the patient is not treated. The disease currently has no cure, so the patient's family is advised to provide quality of life; the age of incidence is from 40 years, and according to the INCN (Instituto Nacional de Ciencias Neurológicas) there are 3,000 cases of Parkinson's in Peru annually. This research paper proposes the creation of an algorithm in MATLAB capable of extracting the characteristics of the voice spectrum through voice signal processing, to provide early detection so that patients can receive treatment to ease and slow down Parkinson's disease. The processing consists of submitting the audio to the Fast Fourier Transform (FFT), identifying the signal bodies, separating them by frequency periods, and finally finding the average and maximum values. It was identified that the major differences appear in the lower frequencies; in addition, the test was done with patients suspected of having Parkinson's and the same differences were obtained, in the frequency periods [9 Hz – 13 Hz], [20 Hz – 30 Hz] and [40 Hz – 54 Hz]. Note also that in the 20 Hz to 30 Hz period, amplitude values below 3.5 are a first indication of suspicion of Parkinson's disease.
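The band analysis the abstract describes can be sketched as follows. This is a hedged illustration, not the paper's MATLAB code: the frequency bands and the 3.5 amplitude threshold come from the abstract, and `spectrum` is assumed to be a list of (frequency in Hz, amplitude) pairs already produced by an FFT of the voice recording.

```python
def band_stats(spectrum, lo, hi):
    """Mean and maximum amplitude within the [lo, hi] Hz band.
    `spectrum` is a list of (frequency_hz, amplitude) pairs, e.g.
    the magnitude output of an FFT of the voice signal."""
    amps = [a for f, a in spectrum if lo <= f <= hi]
    return sum(amps) / len(amps), max(amps)

def parkinson_suspicion(spectrum, threshold=3.5):
    """Flag suspicion when the 20-30 Hz band stays below the
    amplitude threshold reported in the abstract."""
    _, peak = band_stats(spectrum, 20, 30)
    return peak < threshold

# Hypothetical spectrum: low 20-30 Hz amplitudes -> flagged
spec = [(21, 1.0), (25, 2.0), (29, 3.0), (45, 9.0)]
```

A clinical screening tool would of course need validated thresholds; this only mirrors the decision rule stated in the abstract.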

Author 1: Brian Meneses-Claudio
Author 2: Witman Alvarado-Diaz
Author 3: Avid Roman-Gonzalez

Keywords: Voice signal processing; Parkinson Disease (PD); Fast Fourier Transform; speech signal segmentation; audio treatment

PDF

Paper 47: Towards the Identification of Student Learning Communities using Centrality

Abstract: The evolution of universities towards the “digital university” has been under way for some years. Digital tools are widely used to ensure a good quality of education. Universities therefore use large-scale learning management systems to manage the interaction between learners and teachers. Teachers can provide online training and educational materials for students following their classes and courses, monitor their participation and evaluate their performance. Students can use interactive features such as discussion threads, videoconferences, and discussion forums. These online tools make it possible to create new social networks or to connect online social interactions, which allows us to understand the structure of this complex network and extract useful information. In this article, we report our research on the detection of student learning communities based on learner activity. We found that it is possible to group students into communities through their messages and response structures using standard community detection algorithms, and that their behaviours can be strongly correlated with those of their closest peers in the same community.

Author 1: Intissar Salhi
Author 2: Hanaa El Fazazi
Author 3: Mohammed Qbadou
Author 4: Khalifa Mansouri

Keywords: Student’s learning communities; complex network; learner activity; community detection

PDF

Paper 48: Visualising Image Data through Image Retrieval Concept using a Hybrid Technique: Songket Motif’s

Abstract: Massive datasets are a well-known source of complexity in Content-Based Image Retrieval (CBIR), because present CBIR strategies face difficulties in extracting features from the images. Moreover, a technological constraint encountered in analysing and extracting image arrays is how the system customizes the primitive geometric structures known as polygonal approximation structures. This study applies the Principal Component Analysis (PCA) technique for image feature extraction, which is primarily based on the matrix of the image representation and enlarges the similarity of detection. The PCA approach needs to be enhanced owing to its shortcomings in extracting features from songket motif images. Therefore, this study proposes a new hybrid model that integrates PCA with geometric techniques for image feature extraction to increase recall and precision. The study employs a qualitative experimental design model that involves three phases of activities: first, the analysis and design phase; second, the development phase; and lastly, the testing and evaluation phase. This paper focuses on the design and development phases. The outcome of the empirical phase is followed by designing the algorithm and model based on the results of the literature review. The study found that a hybrid of the principal component analysis model and the geometry technique helps to reduce the problems faced by the basic technique, namely the constraints in analysing and extracting image features to customize the geometric primitive structure.

Author 1: Nadiah Yusof
Author 2: Amirah Ismail
Author 3: Nazatul Aini Abd Majid

Keywords: Multimedia; image; content-based image retrieval (CBIR); image retrieval; near-duplicate; principal component analysis (PCA); geometric

PDF

Paper 49: Accurate Speech Emotion Recognition by using Brain-Inspired Decision-Making Spiking Neural Network

Abstract: Emotion recognition is an important extension of speech recognition and a necessary step towards smarter speech interfaces. Feature selection is an indispensable stage in developing schemes for classifying sentiment in speech. The interaction among features extracted from the same audio source has rarely been considered so far, which may yield redundant features and increase computational costs. To resolve these defects, a deep learning-based feature extraction technique is used; a notable advance in speech recognition in recent years combines machine learning techniques with deep architectures for feature extraction. In this paper, speech signals obtained from the SAVEE database are used as input to a deep belief network. To pre-train the network, a greedy layer-wise feature extraction tactic is implemented, and back-propagation over systematic samples is applied for fine-tuning. A brain-inspired decision-making spiking neural network (SNN) is used to recognize different emotions; training deep SNNs remains a challenge, but it improves the quality of the result. To tune the parameters of the SNN, the social ski-driver (SSD) evolutionary optimization algorithm is used. The results of the SNN-SSD algorithm are compared to artificial neural networks and long short term memory across different emotions to validate the classification.

Author 1: Madhu Jain
Author 2: Ms. Shilpi Shukla

Keywords: Brain-inspired decision-making spiking neural network (BDM-SNN); deep belief network; social ski-driver (SSD) optimization; emotion recognition

PDF

Paper 50: Power Quality Evaluation for Electrical Installation of Hospital Building

Abstract: This paper presents improvements to power quality in hospital building installations using power capacitors. Power quality in the distribution network is an important issue that must be considered in the electric power system. One important variable in the quality of the power distribution system is the power factor, which plays an essential role in determining the efficiency of a distribution network. A good power factor makes the distribution system very efficient in its use of electricity. The hospital building installation is a component of the distribution network that is very important to analyze. Nowadays, hospitals have a lot of computer-based medical equipment, which contains many electronic components that significantly affect the power factor of the system. In this study, a power quality analysis has been carried out on the building installation of one of the largest hospitals in Yogyakarta, Indonesia. In the initial condition, the power losses at the facility were quite high. Installing power capacitors in these installations improves the power factor, and ultimately the performance of the electrical installation system in the hospital building.

Author 1: Agus Jamal
Author 2: Sekarlita Gusfat Putri
Author 3: Anna Nur Nazilah Chamim
Author 4: Ramadoni Syahputra

Keywords: Power quality; power capacitor; hospital building; electrical installation

PDF

Paper 51: Dynamic Performance of Synchronous Generator in Steam Power Plant

Abstract: This paper presents the dynamic performance of a synchronous generator in a steam power plant. Steam power plants are the most popular power plants to date; until the end of 2018, 48.43% of the total installed power plant capacity in Indonesia was of this type. The largest steam power plant in Indonesia is in Paiton, Probolinggo, East Java, which is the object of this research. In operation, the generator in this plant experiences dynamics as the electricity load changes. This study analyses the performance of the synchronous generator under changes in electrical load. The analysis includes the voltage, active power, reactive power, power factor, and generator efficiency variables. The results show that the generator performance remained good despite serving a very dynamic electricity load.

Author 1: Ramadoni Syahputra
Author 2: Andi Wahyu Nugroho
Author 3: Kunnu Purwanto
Author 4: Faaris Mujaahid

Keywords: Synchronous generator; steam power plant; dynamic performance; efficiency

PDF

Paper 52: The Impact of using Social Network on Academic Performance by using Contextual and Localized Data Analysis of Facebook Groups

Abstract: Social networks, due to their intrinsically addictive nature, have become an integral part of our civilization and play an important role in our daily interactions. Facebook, being the largest global online network, is used as the primary platform for carrying out our study and hypothesis testing. We built a web crawler for data extraction and used that data for our analysis. The primary goal of this study is to identify patterns among members of a Facebook group using a contextual and localised approach. We also validate several hypotheses using a data-driven approach, such as comparing students' social participation and activeness with actual class participation and its impact on their grades. We have also used user interactions in Facebook groups to identify close relationships. The polarity of the content in a group's comments and posts says a lot about that group and is also discussed in this paper.

Author 1: Muhammad Aqeel
Author 2: Mukarram Pasha
Author 3: Muhammad Saeed
Author 4: Muhammad Kamran Nishat
Author 5: Maryam Feroz
Author 6: Farhan Ahmed Siddiqui
Author 7: Nasir Touheed

Keywords: Social networks; data analysis; data mining; NLP; sentiment analysis

PDF

Paper 53: GPLDA: A Generalized Poisson Latent Dirichlet Topic Model

Abstract: The earliest modifications of Latent Dirichlet Allocation (LDA) in terms of word or document attributes relax its exchangeability assumption via the bag-of-words (BoW) matrix. Several authors have proposed modifications of the original LDA focusing on models that assume the current topic depends on the words from the previous topic. Most of the earlier work ignored the document length distribution, assuming its effect fizzles out at the modelling stage. Thus, in this paper, the Poisson document length distribution of the LDA model is replaced with the Generalized Poisson (GP) distribution, which has the strength of capturing complex structures. The main strengths of the GP are in capturing overdispersed (variance larger than mean) and underdispersed (variance smaller than mean) count data. The Poisson distribution used by LDA relies strongly on the assumption that the mean and variance of document lengths are equal. This assumption is often unrealistic with real-life text data, where the variance of document length may be greater or less than the mean. Approximate estimates of the GPLDA model parameters were obtained using Newton-Raphson approximation of the log-likelihood. Performance and comparative analysis of GPLDA against LDA using accuracy and F1 showed improved results.
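The Generalized Poisson distribution the abstract refers to is commonly written in Consul's form, P(X = x) = θ(θ + xλ)^(x-1) e^(−θ−xλ) / x!, where λ = 0 recovers the ordinary Poisson. A sketch of the pmf and its moments (this is the distribution itself, not the paper's Newton-Raphson estimation code):

```python
import math

def gp_pmf(x, theta, lam):
    """Generalized Poisson pmf (Consul's parameterization):
    P(X = x) = theta * (theta + x*lam)**(x-1) * exp(-theta - x*lam) / x!
    lam > 0 models overdispersion, lam < 0 underdispersion,
    and lam = 0 reduces to the ordinary Poisson(theta)."""
    return (theta * (theta + x * lam) ** (x - 1)
            * math.exp(-theta - x * lam) / math.factorial(x))

def gp_mean(theta, lam):
    """Mean: theta / (1 - lam)."""
    return theta / (1 - lam)

def gp_var(theta, lam):
    """Variance: theta / (1 - lam)**3 -- exceeds the mean when lam > 0."""
    return theta / (1 - lam) ** 3
```

With θ = 2 and λ = 0.5 the variance (16) is four times the mean (4), the overdispersed regime the plain Poisson cannot represent.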

Author 1: Ibrahim Bakari Bala
Author 2: Mohd Zainuri Saringat

Keywords: Bag-of-word; generalized Poisson distribution; topic model; latent Dirichlet allocation

PDF

Paper 54: A Mobile Agent Team Works based on Load-Balancing Middleware for Distributed Computing Systems

Abstract: The aim of this paper is to present a load balancing middleware for parallel and distributed systems. The great challenge is to balance tasks between heterogeneous distributed nodes in parallel and distributed computing models, in a way that ensures the high performance computing (HPC) of these models. Accordingly, the proposed middleware is based on mobile agent teamwork that implements an efficient method with two strategies: (i) a load balancing strategy that determines the node task assignment based on node performance, and (ii) a rebalancing strategy that detects unbalanced nodes and enables task migration. The paper focuses on the proposed middleware and its cooperative mobile agent teamwork strategies to dynamically balance the nodes and scale up distributed computing systems. Experimental results that highlight the performance and efficiency of the proposed middleware are presented.
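The two strategies can be illustrated with a small sketch: assigning tasks in proportion to node performance, and flagging nodes whose load has drifted from that quota as candidates for migration. The function names and the tolerance threshold are hypothetical, not from the paper:

```python
def assign_tasks(num_tasks, node_performance):
    """Strategy (i): split num_tasks across nodes in proportion to
    each node's performance score; leftover tasks go to the nodes
    with the largest fractional quota."""
    total = sum(node_performance)
    quotas = [num_tasks * p / total for p in node_performance]
    shares = [int(q) for q in quotas]
    leftover = num_tasks - sum(shares)
    order = sorted(range(len(quotas)),
                   key=lambda i: quotas[i] - shares[i], reverse=True)
    for i in order[:leftover]:
        shares[i] += 1
    return shares

def is_unbalanced(loads, node_performance, tolerance=0.25):
    """Strategy (ii) trigger: report imbalance when some node's load
    deviates from its performance-proportional quota by more than
    `tolerance` (a hypothetical threshold), suggesting migration."""
    total_tasks, total_perf = sum(loads), sum(node_performance)
    for load, perf in zip(loads, node_performance):
        quota = total_tasks * perf / total_perf
        if quota and abs(load - quota) / quota > tolerance:
            return True
    return False
```

In the middleware the mobile agents would carry out the migration itself; this only captures the assignment arithmetic.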

Author 1: Fatéma Zahra Benchara
Author 2: Mohamed Youssfi

Keywords: Load balancing; middleware; parallel and distributed systems; parallel and distributed computing models; high performance computing; mobile agents; distributed computing

PDF

Paper 55: An Efficient Method for Speeding up Large-Scale Data Transfer Process to Database: A Case Study

Abstract: Among the characteristics of Big Data complexity, comprising volume, velocity, variety, and veracity (the 4Vs), this paper focuses on volume to ensure better performance of data extract, transform, and load processes in the context of migrating data from one server to another, necessitated by an update to the population data of Tegal City. The approach often used by programmers in the Department of Population and Civil Registration of Tegal City is to transfer all available data (in a specific file format) to the database server regardless of file size. This is prone to errors that may disrupt the transfer, such as timeouts, oversized data packages, or lengthy execution times due to the large data size. This research compares several approaches to extracting, transforming, and loading large data to a new database server using the command line and native PHP (object-oriented and procedural style) with different file format targets, namely SQL, XML, and CSV. Our performance analysis showed that transferring large-scale data using the LOAD DATA INFILE statement with a comma-separated values (CSV) data source is the fastest and most effective method, and is therefore recommended.
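The recommended technique is MySQL's server-side bulk load. A small sketch that builds such a statement for execution through any client or connector; the file path, table name, and clause choices (header skipping, quoting) are illustrative assumptions, not the paper's exact configuration:

```python
def load_csv_statement(csv_path, table):
    """Build a MySQL LOAD DATA INFILE statement for a CSV source,
    the bulk-load path the paper found fastest. The path and table
    name here are placeholders."""
    return (
        f"LOAD DATA INFILE '{csv_path}' "
        f"INTO TABLE {table} "
        "FIELDS TERMINATED BY ',' "
        "ENCLOSED BY '\"' "
        "LINES TERMINATED BY '\\n' "
        "IGNORE 1 LINES"  # skip a header row, if the CSV has one
    )

stmt = load_csv_statement('/var/lib/mysql-files/population.csv', 'residents')
```

Because the server parses the file directly, this avoids the per-row round trips and oversized packet errors that plague INSERT-based transfers.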

Author 1: Ginanjar Wiro Sasmito
Author 2: M. Nishom

Keywords: Big data; speeds up; data processing; data transfer

PDF

Paper 56: A Systematic TRMA Protocol for Yielding Secure Environment for Authentication and Privacy Aspects

Abstract: RFID is a system that uses radio waves to scrutinize and capture data pertaining to a tag attached to an object. In spite of RFID's wide application in industry, it poses severe security issues: there is high susceptibility that an RFID system might be attacked to invade the privacy and data in the system. To protect the RFID system against such attacks, the Pad-generation (Pad-Gen) function is used. This paper presents a mutual authentication scheme, Tag Reader Mutual Authentication (TRMA), that is implemented using two approaches, the XOR operation and the MOD operation, by modifying the Pad-Gen function. The proposed framework is implemented on a low-cost Artix-7 FPGA XC7A100T-3CSG324, and its hardware verification is done with the ChipScope Pro tool.
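The XOR variant of such schemes typically uses the pad to cover-code a password so it is never sent in the clear, relying on XOR being its own inverse. A heavily simplified sketch; the `pad_gen` here is a hypothetical stand-in, not the paper's modified Pad-Gen function:

```python
def pad_gen(apwd, rn_tag, rn_reader):
    """Hypothetical 16-bit pad derived from an access password and
    the random numbers exchanged by tag and reader."""
    return (apwd ^ rn_tag ^ rn_reader) & 0xFFFF

def cover_code(word, pad):
    """XOR cover-coding: applying the same pad twice recovers the word."""
    return (word ^ pad) & 0xFFFF

# Round trip: both sides derive the same pad from shared values,
# so the reader can recover what the tag covered.
apwd, rn_t, rn_r = 0xBEEF, 0x1234, 0xABCD
pad = pad_gen(apwd, rn_t, rn_r)
sent = cover_code(0xCAFE, pad)       # what travels over the air
recovered = cover_code(sent, pad)    # reader undoes the cover-coding
```

The security of the real scheme rests on the Pad-Gen construction itself; this only shows the XOR round-trip structure.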

Author 1: Anusha R
Author 2: Veena Devi Shastrimath V

Keywords: Mutual Authentication; Modified Pad-Gen; Radiofrequency Identification (RFID); Privacy; Security; Tag-Reader Mutual Authentication (TRMA)

PDF

Paper 57: Face Recognition on Low-Resolution Image using Multi Resolution Convolution Neural Network and Antialiasing Method

Abstract: Video surveillance applications usually capture face images at low resolution (12x12) due to distance, lighting and shooting angles, and most face recognition algorithms perform poorly at such resolutions. Consequently, identifying a low-resolution query face against high-resolution (64x64) gallery images proves to be a huge challenge. The aim of this research is to develop a new model for face recognition on low-resolution images in order to increase recognition accuracy. A Multi-Resolution Convolutional Neural Network (MRCNN) is proposed to address the problem. First, antialiasing is used in the preprocessing phase, then the MRCNN is used to extract image features. LFW (Labeled Faces in the Wild) is used to evaluate the model. The result of this study is increased accuracy of face recognition on low-resolution images compared to the previous MRCNN model.

Author 1: Mario Imandito
Author 2: Suharjito

Keywords: Face recognition; low resolution; convolutional neural network; antialiasing

PDF

Paper 58: Research Trends in Surveillance through Sousveillance

Abstract: Collective intelligence is an immense research area with wide application across disciplines such as social science, law, and computation. Research trends in surveillance find their place in this area, generating curated datasets helpful in answering complex queries. Sousveillance is a term recently coined by researchers and discussed in the literature. However, our findings suggest that integrating surveillance with sousveillance datasets has not been given much importance in a collective fashion. In this work we introduce an effective model of collective intelligence by integrating surveillance with sousveillance in a campus environment. For the testbed, networking devices are used to generate sousveillance data, with validation and cleaning to enable reliability and trust in the target object.

Author 1: Siraj Munir
Author 2: Syed Imran Jami

Keywords: Semantics; querying; profiling; IoT; surveillance; sousveillance

PDF

Paper 59: Outlier Detection using Graphical and Nongraphical Functional Methods in Hydrology

Abstract: Graphical methods are introduced in hydrology for visualizing functional data and detecting outliers among smooth curves. The proposed methods comprise a rainbow plot for visualizing large amounts of data, and bivariate and functional bagplots and boxplots for detecting outliers graphically. The bagplot and boxplot are constructed from the first two robust principal component score series, following Tukey's depth and highest density regions. These methods produce not only a graphical display of the hydrological data but also the detected outliers. The outliers are compared with those obtained from several existing nongraphical methods of outlier detection in the functional context, so that the superiority of the proposed graphical methods for identifying outliers can be established. Hence, the present paper aims to demonstrate that graphical methods for outlier detection are authentic and reliable approaches compared to nongraphical methods.
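The functional bagplot and boxplot generalize Tukey's classical boxplot to curves via their principal component scores. As a point of reference, here is the univariate fence rule that the functional versions extend, applied to a single score series; this is a textbook sketch, not the paper's bivariate depth-based construction:

```python
def quartiles(values):
    """Lower and upper quartiles via linear interpolation."""
    v = sorted(values)
    n = len(v)
    def q(p):
        idx = p * (n - 1)
        lo = int(idx)
        frac = idx - lo
        return v[lo] + frac * (v[min(lo + 1, n - 1)] - v[lo])
    return q(0.25), q(0.75)

def fence_outliers(scores, k=1.5):
    """Tukey's rule: points outside [Q1 - k*IQR, Q3 + k*IQR]
    are flagged as outliers."""
    q1, q3 = quartiles(scores)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in scores if x < lo or x > hi]
```

In the functional setting, a curve whose first two robust PC scores fall outside the corresponding bivariate fence is flagged the same way.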

Author 1: Insia Hussain

Keywords: Rainbow plot; bivariate bagplot; functional bagplot; bivariate boxplot; functional boxplot

PDF

Paper 60: Heart Disease Prediction based on External Factors: A Machine Learning Approach

Abstract: Technology has immensely changed the world over the last decade. As a consequence, people's lives are undergoing multiple changes that directly have positive and negative effects on health. Less physical activity and extensive virtual involvement are pushing people into various health-related issues, and heart disease is one of them; it has currently gained a great deal of attention among life-threatening diseases. Heart disease can be detected or diagnosed by different medical tests considering various internal factors, but this type of approach is not only time-consuming but also expensive. At the same time, very few studies have been conducted on heart disease prediction based on external factors. To bridge this gap, we propose a heart disease prediction model based on a machine learning approach that predicts heart disease with 95% accuracy. To acquire the best result, six distinct machine learning classifiers (Decision Tree, Random Forest, Naive Bayes, Support Vector Machine, Quadratic Discriminant Analysis, and Logistic Regression) were used. In addition, sklearn.ensemble.ExtraTreesClassifier was used to extract relevant features to improve predictive accuracy and control over-fitting. The findings reveal that the Support Vector Machine (SVM) outperforms the others with the greatest accuracy (95%).

Author 1: Maruf Ahmed Tamal
Author 2: Md Saiful Islam
Author 3: Md Jisan Ahmmed
Author 4: Md. Abdul Aziz
Author 5: Pabel Miah
Author 6: Karim Mohammed Rezaul

Keywords: Heart disease; Risk prediction; Decision Tree (DT); Support Vector Machine (SVM); Naive Bayes (NB); Random Forest (RF); Logistic Regression (LR); Quadratic Discriminant Analysis (QDA); Machine learning

PDF

Paper 61: Embracing Localization Inaccuracy with a Single Beacon

Abstract: This paper illustrates a new mechanism to determine the coordinates of sensors using a beacon node, and determines the definitive error associated with it. In underwater wireless sensor networks (UWSNs), the actual and precise location of the deployed sensors that accumulate data is vital, because accumulated data without location information has less significance and limited value in the domain of location-based services. In UWSNs, trilateration or multilateration is exploited to assess the location of the deployed hosts, but having three or more reference nodes to localize a deployed sensor is not always pragmatic. Moreover, conventional methods usually solve non-linear equations where the degrees of freedom are insufficient to lead to a unique solution. In this paper, the associated localization inaccuracies are shown for a unique configuration where a single beacon is used to determine the coordinates of three deployed sensors simultaneously. The Cayley-Menger determinant is used for the configuration, and the system of nonlinear distance equations has been linearized for better accuracy and convergence. Simulations with Euclidean distances validate the propounded model and reflect the acquired accuracy in the sensors' coordinates and bearings. Moreover, an experiment has been conducted with ultrasonic sensors in a terrestrial environment to validate the proposed model; the associated inaccuracies were found to be generated from distance measurement errors, while using Euclidean distances shows the model to be precise and accurate.

Author 1: Anisur Rahman
Author 2: Vallipuram Muthukkumarasamy

Keywords: Underwater localization; linearization; mobile beacon; Cayley-Menger determinant; bearing; underwater wireless sensor network

PDF

Paper 62: Joint Demographic Features Extraction for Gender, Age and Race Classification based on CNN

Abstract: Automatic verification and identification of faces from facial images, with good accuracy on huge training and testing datasets using face attributes, is still challenging. Hence, proposing efficient and accurate facial image identification and classification based on facial attributes is an important task, since prediction from a human face image is complex. The proposed research work for automatic gender, age and race classification is based on facial features and a Convolutional Neural Network (CNN). The proposed study uses the physical appearance of the human face to predict age, gender and race. The methodology consists of three subsystems, for gender, ageing and race, and different features are extracted for every subsystem. These features are extracted using primary and secondary features, face angle, wrinkle analysis, LBP, and WLD. The accuracy of classification is based on these features, which the CNN uses to classify. The proposed study has been evaluated and tested on the large databases MORPH II and UTKFace. The performance of the proposed system is compared with state-of-the-art techniques.
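Of the descriptors the abstract lists, LBP (Local Binary Patterns) has a compact standard definition: each pixel gets an 8-bit code from thresholding its 3x3 neighbourhood against the centre. A minimal sketch (the bit ordering is one common convention; the paper may use another):

```python
def lbp_code(img, r, c):
    """Basic 3x3 Local Binary Pattern code for pixel (r, c):
    each of the 8 neighbours contributes a bit set to 1 when the
    neighbour's intensity is >= the centre's, read clockwise
    from the top-left. `img` is a 2D list of intensities."""
    centre = img[r][c]
    neighbours = [
        img[r-1][c-1], img[r-1][c], img[r-1][c+1],
        img[r][c+1],   img[r+1][c+1], img[r+1][c],
        img[r+1][c-1], img[r][c-1],
    ]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code

# A flat patch yields the all-ones code; a bright centre yields zero.
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
peak = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
```

Histograms of these codes over face regions (e.g. around wrinkles) form the texture features fed to the classifier.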

Author 1: Zaheer Abbas
Author 2: Sajid Ali
Author 3: Muhammad Ashad Baloch
Author 4: Hamida Ilyas
Author 5: Moneeb Ahmad
Author 6: Mubasher H. Malik
Author 7: Noreen Javaid
Author 8: Tanvir Fatima Naik Bukht

Keywords: Appearance features; age; gender; wrinkle analysis; face angle; classification; race; LBP

PDF

Paper 63: A Novel Method for Patients Identification in Emergency Cases using RFID based RADIO Technology

Abstract: Medical records play an important role in the process of providing health care in hospitals and in various types of medical institutions. They maintain the information of patients, including basic information, medical information, history of operations, medication, etc. These medical records are produced for the purpose of identifying a patient. In this paper, a novel method for the identification of patients using Radio Frequency Identification (RFID) technology is proposed. This paper explains the concept of the electronic medical record and how RFID-based technology can be used to create an electronic medical card for patients. The proposed methodology also aims to identify patients quickly in emergencies using a magnetic card reader device, which provides detailed medical information from the patient file. It also helps the doctors accompanying the patient in the ambulance. The proposed methodology is important in emergency cases where patients cannot provide their information to the hospital, because their identity and medical history would otherwise be unknown.

Author 1: Eman Galaleldin Ahmed Khalil
Author 2: Asim Seedahmed Ali Osman

Keywords: Medical records; radio frequency identification; magnetic card reader; patient; emergency; electronic health record; laboratory

PDF

Paper 64: Multi-Label Classification using an Ontology

Abstract: In recent years, the problem of multi-label classification (ML) has been studied in several domains, such as text categorization. Multi-label classification is a challenging task because each instance can be assigned to multiple classes simultaneously. This paper studies the problem of multi-label classification in the context of web page categorization, where the categories are defined in an ontology. One weakness of multi-label classification methods lies in the number of positive and negative examples used to build the training dataset of a specific label; the challenge comes from the huge number of label combinations, which grows exponentially. In this paper, we present an ontology-based multi-label classification approach that exploits dependencies between labels. Our approach uses the ontology to take relationships between labels into account and to guide the selection of positive and negative examples in the learning phase. In the prediction phase, if a label is not predicted, the ontology is used to prune the set of its descendant labels. The results of the experimental evaluation show the effectiveness of our approach.
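The prediction-phase pruning rule described above can be sketched directly: if a label is absent from the prediction set, all of its descendants in the ontology are removed too. The toy label hierarchy below is hypothetical, and the paper's actual ontology traversal may differ:

```python
def prune_with_ontology(predicted, children):
    """Drop any predicted label whose ancestor was not predicted.

    `children` maps each label to its direct sub-labels in the
    ontology; if a label is missing from the prediction set, its
    whole descendant subtree is pruned as well.
    """
    kept = set(predicted)

    def drop_descendants(label):
        for child in children.get(label, []):
            kept.discard(child)
            drop_descendants(child)

    for label in children:
        if label not in kept:
            drop_descendants(label)
    return kept

# Tiny hypothetical web-page category ontology:
children = {"Science": ["Physics", "Biology"], "Physics": ["Optics"]}
# "Optics" is predicted but its parent "Physics" is not, so it is pruned:
print(prune_with_ontology({"Science", "Optics"}, children))
```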

Author 1: Yaya TRAORE
Author 2: Sadouanouan MALO
Author 3: Didier BASSOLE
Author 4: Abdoulaye SERE

Keywords: Multi-label classification (ML); Binary Relevance (BR); ontology; categorization; prediction

PDF

Paper 65: Energy Efficient Cluster Head Selection using Hybrid Squirrel Harmony Search Algorithm in WSN

Abstract: The Wireless Sensor Network (WSN) has found an extensive variety of applications, including battlefield surveillance, environment and traffic monitoring, and modern agriculture, due to its effectiveness in communication. Clustering is one of the significant mechanisms for enhancing the lifespan of a WSN: it is exploited to improve the sensor network's lifespan by decreasing the network's energy consumption and increasing its stability. Existing cluster head selection algorithms suffer from an inconsistent trade-off between exploration and exploitation and from global search constraints. Therefore, in this research, a hybridization of two popular optimization algorithms, the Harmony Search Algorithm (HSA) and the Squirrel Search Algorithm (SSA), is executed for the optimal selection of cluster heads in WSNs with respect to distance and energy. The proposed Hybrid Squirrel Harmony Search Algorithm (HSHSA) is found to be energy efficient when compared on first node death (FND) and last node death (LND) with existing Cluster Head Selection (CHS) techniques. In addition, the proposed HSHSA improves the overall throughput and residual energy of the wireless sensor network by 31.02% and 85.69%, respectively, over the existing algorithms.
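As a minimal sketch of one half of the hybrid, the code below implements the basic Harmony Search loop (memory consideration, pitch adjustment, random selection) on a toy continuous cost function standing in for the paper's distance-and-energy objective; the SSA half, the hybridization scheme, and all parameter values here are not taken from the paper:

```python
import random

def harmony_search(cost, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   iters=500, seed=1):
    """Minimal Harmony Search minimiser.

    hms  : harmony memory size
    hmcr : probability of recalling a value from memory
    par  : probability of pitch-adjusting a recalled value
    """
    rng = random.Random(seed)
    lo, hi = bounds
    new = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    memory = sorted((new() for _ in range(hms)), key=cost)
    for _ in range(iters):
        cand = []
        for d in range(dim):
            if rng.random() < hmcr:
                v = rng.choice(memory)[d]        # recall from memory
                if rng.random() < par:           # small pitch adjustment
                    v += rng.uniform(-0.1, 0.1) * (hi - lo)
            else:
                v = rng.uniform(lo, hi)          # fresh random value
            cand.append(min(hi, max(lo, v)))
        # Replace the worst harmony if the candidate improves on it.
        if cost(cand) < cost(memory[-1]):
            memory[-1] = cand
            memory.sort(key=cost)
    return memory[0]

# Toy stand-in for the cluster-head selection cost:
best = harmony_search(lambda x: sum(v * v for v in x), dim=3,
                      bounds=(-5.0, 5.0))
print(best)
```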

Author 1: N Lavanya
Author 2: T. Shankar

Keywords: Cluster head selection; clustering; harmony search algorithm; squirrel search algorithm; wireless sensor network

PDF

Paper 66: Knowledge based Soil Classification Towards Relevant Crop Production

Abstract: Pakistan’s economy is strongly associated with the agriculture sector. For a country with 25% of GDP contributed by agriculture, there is a need to modernize agriculture by adopting contemporary approaches. Unfortunately, it has become a common trend among farmers to cultivate crops used in food items, or which can easily be sold in the market, without using knowledge about the suitability or relevancy of the crops to the soil environment. Consequently, the farmers face financial losses. Many researchers have proposed soil classification methods for various soil-related studies, but these contribute little towards guiding farmers to select the most suitable crops for cultivation on a particular soil type. Without technology and computer-assisted approaches, the process of classifying soil environments cannot help farmers take decisions regarding appropriate crop selection in their respective fields. In this paper, an effective knowledge-oriented approach for soil classification in Pakistan is presented using crowd-sourced data obtained from 1557 users covering 103 agricultural zones. Data were also obtained from AIMS (Govt. of Punjab) and the Ministry of National Food Security & Research. In this work, a random forest classifier is used for processing and predicting the complex tiered relationship among soil types belonging to agricultural zones and the major suitable crops for improving yield production. The proposed model helps in computing the degree of relevancy of a crop to an agricultural region, which helps farmers select suitable crops for their cultivated lands.

Author 1: Waleej Haider
Author 2: M. Nouman Durrani
Author 3: Aqeel ur Rehman
Author 4: Sadiq ur Rehman

Keywords: Knowledge creation; agriculture; soil classification; random forest; knowledge distribution; crop relevancy

PDF

Paper 67: HCAHF: A New Family of CA-based Hash Functions

Abstract: Cryptographic hash functions (CHF) represent a core cryptographic primitive. They have applications in digital signature and message authentication protocols. Their main building blocks are Boolean functions, which provide pseudo-randomness and sensitivity to the input and help prevent and lower the risk of attacks targeted at CHF. Cellular automata (CA) are a class of Boolean functions that exhibit good cryptographic properties and display chaotic behavior. In this article, a new hash function based on CA is proposed. A description of the algorithm and of the security measures taken to increase the robustness of the construction is presented, along with a security analysis against generic and dedicated attacks. The analysis shows that the hashing algorithm has good security features and meets the security requirements of a good hashing scheme. The results of the tests and the properties of the CA used demonstrate the good statistical and cryptographic properties of the hash function.
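The abstract does not specify which CA rules the construction uses; as an illustration of why chaotic CA make attractive mixing layers, the sketch below iterates elementary rule 30 (a common example of a chaotic, cryptographically interesting rule, chosen here purely for illustration) and shows a single flipped input bit diffusing across the whole state:

```python
def ca_step(state, rule=30):
    """One synchronous update of an elementary CA with periodic
    boundaries; `state` is a list of 0/1 cells. The 3-cell
    neighbourhood indexes into the rule's 8-bit truth table."""
    n = len(state)
    out = []
    for i in range(n):
        neigh = (state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]
        out.append((rule >> neigh) & 1)
    return out

def ca_mix(bits, rounds=8):
    """Iterate the CA for a few rounds as a toy mixing layer."""
    for _ in range(rounds):
        bits = ca_step(bits)
    return bits

seed = [0] * 15 + [1] + [0] * 16   # 32 cells, a single 1 in the middle
print(ca_mix(seed))
```

After eight rounds the single set bit has spread into a wide, irregular pattern, the avalanche behaviour a hash construction relies on.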

Author 1: Anas Sadak
Author 2: Fatima Ezzahra Ziani
Author 3: Bouchra Echandouri
Author 4: Charifa Hanin
Author 5: Fouzia Omary

Keywords: Hash function; boolean function; cellular automata; cryptography; information security; avalanche; NIST statistical suite; DIEHARDER battery of tests; generic attacks; dedicated attacks

PDF

Paper 68: Classification Performance of Violence Content by Deep Neural Network with Monarch Butterfly Optimization

Abstract: Violence detection is perplexing due to the visible dissimilarities among the positive instances displayed in media. Besides, the ever-increasing demand on the internet, with various types of videos and genres, makes a proper search of these videos difficult because the volume of content is enormous. The task involves aiding users in choosing movies or web videos suitable for their audience by classifying violent content. Nevertheless, this is a cumbersome job, since the definition of violence is broad and subjective, and detecting such nuances from videos without human supervision can lead to conceptual problems. Generally, violence classification is performed based on text, audio, and visual features; of these, the audio and visual features are the most relevant. From this perspective, the deep neural network is the current machine learning approach of choice for classification problems. In this research, audio and visual features are learned by a deep neural network for more specific violence content classification. This study explores the implementation of a deep neural network with monarch butterfly optimization (DNNMBO) to effectively classify violent content in web videos. The experiments are conducted using YouTube videos from the VSD2014 dataset made publicly available by the Technicolor group. The results are compared with similarly modified approaches such as DNNPSO and the original DNN. DNNMBO achieved a violence classification rate of 94%.

Author 1: Ashikin Ali
Author 2: Norhalina Senan
Author 3: Iwan Tri Riyadi Yanto
Author 4: Saima Anwar Lashari

Keywords: Deep learning; monarch butterfly; violence video; classification

PDF

Paper 69: Object Detection and Tracking using Deep Learning and Artificial Intelligence for Video Surveillance Applications

Abstract: Data is the new oil of today's technological society. Efficient use of data has changed performance benchmarks in terms of speed and accuracy, driven by two industry buzzwords: Computer Vision (CV) and Artificial Intelligence (AI). These two technologies have empowered major tasks such as object detection and tracking for traffic vigilance systems. As the number of features in an image increases, the demand for efficient algorithms to excavate hidden features also increases. A Convolution Neural Network (CNN) model is designed on an urban vehicle dataset for single object detection, and YOLOv3 is used for multiple object detection on the KITTI and COCO datasets. Model performance is analyzed, evaluated, and tabulated using performance metrics such as True Positives (TP), True Negatives (TN), False Positives (FP), False Negatives (FN), accuracy, precision, the confusion matrix, and mean Average Precision (mAP). Objects are tracked across frames using YOLOv3 and Simple Online Real Time Tracking (SORT) on traffic surveillance video. This paper upholds the uniqueness of state-of-the-art networks like DarkNet. Efficient detection and tracking on the urban vehicle dataset is witnessed; the algorithms give real-time, accurate, precise identifications suitable for real-time traffic applications.
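All the detection metrics listed above (TP/FP counts, precision, mAP) rest on matching predicted boxes to ground truth via Intersection-over-Union; the sketch below computes IoU for axis-aligned boxes, with made-up coordinates and the conventional (not paper-specified) 0.5 matching threshold:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (clamped to zero if the boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# A detection is commonly counted as a true positive when IoU >= 0.5:
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```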

Author 1: Mohana
Author 2: HV Ravish Aradhya

Keywords: Artificial Intelligence (AI); Computer Vision (CV); Convolution Neural Network (CNN); You Only Look Once (YOLOv3); Urban Vehicle Dataset; Common Objects in Context (COCO); Object detection; object tracking

PDF

Paper 70: KNN and SVM Classification for Chainsaw Sound Identification in the Forest Areas

Abstract: We present in this paper a comparative study of two classifiers, SVM (support vector machine) and KNN (K-Nearest Neighbors), which we combine with MFCC (Mel-Frequency Cepstral Coefficients) to make possible the detection of chainsaw sounds in a forest environment. Optimized computation of the relevant characteristics of the sounds recorded in the forest, together with a judicious choice of the key parameters of the classifiers, allows us to obtain a true positive rate of 95.63% for the SVM-LOG-KERNEL and 94.02% for KNN. The SVM-LOG-KERNEL classifier offers a better classification result and a processing time 30 times faster than KNN.
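The KNN half of the comparison is simple enough to sketch end-to-end: classify a query feature vector by majority vote of its k nearest training vectors. The tiny 2-D vectors below are hypothetical stand-ins for real MFCC frames, and k=3 is an illustrative choice, not the paper's tuned value:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training vectors (Euclidean distance). `train` is a list of
    (feature_vector, label) pairs, e.g. per-frame MFCC vectors."""
    neighbours = sorted(train, key=lambda fv: math.dist(fv[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy 2-D stand-ins for MFCC feature vectors:
train = [((0.1, 0.2), "chainsaw"), ((0.2, 0.1), "chainsaw"),
         ((0.0, 0.3), "chainsaw"), ((0.9, 0.8), "ambient"),
         ((0.8, 0.9), "ambient"), ((1.0, 1.0), "ambient")]
print(knn_predict(train, (0.15, 0.15)))  # nearest cluster wins
```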

Author 1: N’tcho Assoukpou Jean GNAMELE
Author 2: Yelakan Berenger OUATTARA
Author 3: Toka Arsene KOBEA
Author 4: Geneviève BAUDOIN
Author 5: Jean-Marc LAHEURTE

Keywords: KNN Algorithm; SVM Algorithm; MFCC; sound recognition; forest monitoring; machine learning

PDF

Paper 71: An Efficient Algorithm to Find the Height of a Text Line and Overcome Overlapped and Broken Line Problem during Segmentation

Abstract: Line segmentation is a critical phase of Optical Character Recognition (OCR), separating the individual lines from document images. The accuracy rate of an OCR tool is directly proportional to the line segmentation accuracy, followed by word/character segmentation. In this context, an algorithm named height_based_segmentation is proposed for the text line segmentation of printed Odia documents. The proposed algorithm finds the average height of a text line, which helps minimize overlapped text line cases. The algorithm also includes post-processing steps to combine the modifier zone with the base zone. The performance of the algorithm is evaluated against ground truth and by comparing it with existing segmentation approaches.
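The common starting point for height-based line segmentation is the horizontal projection profile: rows containing ink form runs, each run is a candidate text line, and the average run height can flag overlapped or broken lines. The sketch below shows only that baseline step on a made-up binary page; the paper's overlapping/broken-line handling and modifier-zone merging are not reproduced:

```python
def segment_lines(img):
    """Split a binary page image (list of rows, 1 = ink) into text
    lines via its horizontal projection profile, and report the
    average line height used to detect anomalous lines."""
    profile = [sum(row) for row in img]
    lines, start = [], None
    for r, ink in enumerate(profile):
        if ink and start is None:
            start = r                     # a line begins
        elif not ink and start is not None:
            lines.append((start, r - 1))  # the line ends
            start = None
    if start is not None:
        lines.append((start, len(img) - 1))
    heights = [b - a + 1 for a, b in lines]
    avg_h = sum(heights) / len(heights) if heights else 0
    return lines, avg_h

page = [[0, 0, 0],
        [1, 1, 0],
        [1, 0, 1],
        [0, 0, 0],
        [0, 1, 1],
        [0, 0, 0]]
print(segment_lines(page))
```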

Author 1: Sanjibani Sudha Pattanayak
Author 2: Sateesh Kumar Pradhan
Author 3: Ramesh Chandra Mallik

Keywords: Document image analysis; line segmentation; word segmentation; database creation; printed Odia document

PDF

Paper 72: Adverse Impacts of Social Networking Sites on Academic Result: Investigation, Cause Identification and Solution

Abstract: Social networking sites (SNS) have become more prevalent over the previous decade. Interactive design and addictive characteristics have made SNS an almost indispensable part of life, particularly among university learners. Previous studies have shown that excessive use of SNS adversely affects learners' academic success as well as their mental health. However, there is still a lack of clear evidence of the actual rationale behind these adverse effects. At the same time, no significant preventive measures have yet been introduced to counter the excessive use of SNS, particularly by students. To bridge this gap, drawing on the views of 1862 students (male = 1183, female = 659), the current study investigates how and in what way time spent on SNS negatively influences students' academic performance. Correlation and regression analyses showed a strong negative correlation between students' time spent in social media (STISM) and their educational outcomes. Our investigation further indicates that social media use during class and late at night results in poor educational outcomes for students. Based on these findings, an Android-based application framework called SMT (Social Media Tracker) is designed and partially implemented to minimize the engagement between students and SNS.
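The correlation analysis mentioned above boils down to Pearson's r between hours spent on SNS and academic outcome; the sketch below computes it on a small invented sample (the five data points are purely illustrative, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: daily hours on SNS vs. CGPA for five students.
stism = [1.0, 2.0, 3.0, 4.0, 5.0]
cgpa = [3.9, 3.7, 3.4, 3.0, 2.5]
print(pearson_r(stism, cgpa))  # strongly negative
```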

Author 1: Maruf Ahmed Tamal
Author 2: Maharunnasha Antora
Author 3: Karim Mohammed Rezaul
Author 4: Md. Abdul Aziz
Author 5: Pabel Miah

Keywords: Social networking sites; SNS; social media; SM; addiction; mental health; poor academic outcome; sleep disorder; social media tracker

PDF

Paper 73: A New Method to Find Image Recovery

Abstract: Images captured through scattering media are degraded during the physical process of image formation, which shifts contrast and color and turns the overall scene whitish. With a computer vision system, visibility can be remarkably restored, although where the medium transmission of distant objects is small, the result is vulnerable to noise amplification. Here we propose an L0-gradient-based image recovery method that addresses this issue: compared to the raw image, the recovered single image is significantly improved while noise amplification is suppressed. The state-of-the-art studies on dehazing are reviewed in this paper. In addition, L0-gradient-minimization image smoothing is studied in combination with Koschmieder's physical image formation model to solve the dehazing problem, as L0 smoothing yields better approximations at a higher false discovery rate (FDR). Recovery using L0-gradient minimization is formalized on a depth chart that reduces noise adaptively, recovering estimated structures under spatially varying media transmission while keeping the minimal gradient non-zero. As a result, noise and blur in nearby objects with low measurement difficulty are effectively removed, raising the transmission approximation and enhancing the recovered image. We experiment qualitatively and quantitatively with atmospheric, underwater, night-time, and indoor turbid-medium images.

Author 1: Nouf Saeed Alotaibi

Keywords: Computer vision; image enhancement; digital image processing

PDF

Paper 74: Distributed SDN Deployment in Backbone Networks for Low-Delay and High-Reliability Applications

Abstract: Internet applications, such as video streaming, mission-critical, and health applications, require real-time or near-real-time data delivery. In this context, Software Defined Networking (SDN) has been introduced to simplify network management, providing a more dynamic and flexible configuration by centralizing the network intelligence. One of the main challenges in SDN applications is selecting the number of deployed SDN controllers, and their locations, so as to improve the network performance in terms of low delay and high reliability. Traditional k-center and k-median methods have been fairly successful in reducing propagation latency, but ignore other important network aspects such as reliability. This paper proposes a new approach to controller placement that addresses both network reliability and reduced network delay. The proposed heuristic algorithm focuses on four different robustness functions, viz. algebraic connectivity (AC), network criticality (NC), load centrality (LC), and communicability, and has been applied to four different real-world physical networks, with performance evaluated under degree-, closeness-, and betweenness-centrality-based attacks. Experimental results show that the proposed controller selection algorithms based on AC, NC, LC, and communicability achieve high network resilience and low C2C delays, outperforming the latest, widely used baseline methods, such as k-median and k-center, especially when using the NC method.
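One of the centrality notions the attack model above relies on is closeness centrality; the sketch below computes it by BFS on a tiny hypothetical backbone graph (the paper's robustness functions such as algebraic connectivity need spectral machinery and are not shown):

```python
from collections import deque

def closeness(adj, node):
    """Closeness centrality of `node` in an unweighted graph given
    as an adjacency dict: (n - 1) / sum of shortest-path distances.
    High-closeness nodes are prime targets in centrality-based
    attacks and natural candidates for controller placement."""
    dist = {node: 0}
    queue = deque([node])
    while queue:                      # plain BFS for shortest paths
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(adj) - 1) / total if total else 0.0

# Small hypothetical backbone: node "b" sits on every path.
adj = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b"], "d": ["b"]}
print({n: closeness(adj, n) for n in adj})
```

Removing the highest-closeness node ("b" here) disconnects the graph, which is exactly why robustness-aware placement avoids depending on such single nodes.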

Author 1: Mohammed J.F Alenazi

Keywords: Software Defined Networking (SDN); Controller Placement Problem (CPP); physical network; graph robustness metrics; reliability; resilience

PDF

Paper 75: Vulnerable Road User Detection using YOLO v3

Abstract: Detection and classification of vulnerable road users (VRUs) is one of the most crucial blocks in vision-based navigation systems used in Advanced Driver Assistance Systems. This paper evaluates the performance of the object classification algorithm You Only Look Once (YOLO v3) for detecting a major subclass of VRUs, namely cyclists and pedestrians, using the Tsinghua-Daimler dataset. The YOLO v3 algorithm used here requires fewer computational resources and hence promises real-time performance compared to its predecessors. The model has been trained using the training images of the mentioned benchmark and tested on the corresponding test images. The average IoU over all ground-truth objects is calculated, and the precision-recall graph for different thresholds is plotted.

Author 1: Saranya K C
Author 2: Arunkumar Thangavelu

Keywords: Yolo v3; Tsinghua-Daimler cyclist benchmark; cyclist detection; pedestrian detection; IoU

PDF

Paper 76: On Developing an Integrated Family Mobile Application

Abstract: Nowadays, mobile applications are seen as among the most effective, popular, and powerful technologies, due to the widespread adoption of mobile devices. Moreover, the rising power of mobile devices has a great impact on people of all ages, and more specifically on social relationships, including interaction between parents and kids. Therefore, this paper presents a highly integrated Family Mobile Application (FMA) that provides a wide range of services to control, manage, organize, and support the different daily tasks of family members effectively. The essential tasks of the FMA are mainly described in terms of facilitating the daily life routine and responsibilities, enhancing communication between family members (in different aspects), and supporting Augmented Reality (AR), which is directed at the children of the family to support educational goals in particular. Moreover, a website has been established to enrich the functionality of the proposed FMA application. The FMA has been analysed, designed, implemented, and evaluated with real-world users of the system. The evaluation was conducted in terms of usability testing covering satisfaction, simplicity, and ease of use. More details of the real evaluation are illustrated and presented.

Author 1: Subhieh El-Salhi
Author 2: Fairouz Farouq
Author 3: Randa Obeidallah
Author 4: Mo’taz Al-Hami

Keywords: Mobile technology; social apps; family mobile application; Augmented Reality (AR)

PDF

Paper 77: Assessing Architectural Sustainability during Software Evolution using Package-Modularization Metrics

Abstract: The sustainability of software architectures is largely dependent on cost-effective evolution and modular architecture. Careful modularization, characterizing the proper design of a complex system, is a cognitively challenging task for ensuring improved sustainability. Moreover, failure to modularize software systems during their evolution phases often results in extra effort towards managing design deterioration and solving unforeseen inter-dependencies. In this paper, we present an empirical perspective on the package-level modularization metrics proposed by Sarkar, Kak and Rama to characterize modularization quality through packages. In particular, we explore the impact of these design-based modularization metrics on other well-known modularity metrics and software quality metrics. Our experimental examination of open source Java software systems illustrates that package-level modularization metrics significantly correlate with architectural sustainability measures and quality metrics of software systems.

Author 1: Mohsin Shaikh
Author 2: Dilshod Ibarhimov
Author 3: Baqir Zardari

Keywords: Software architecture; software modularity; software quality; packages

PDF

Paper 78: Knowledge Construction by Immersion in Virtual Reality Environments

Abstract: The objective of this work is to analyze the potential use of immersive Virtual Reality technologies as a teaching/learning tool to enrich the organization of the learning environments of educational programs. The study and analysis of human cognition is theoretically grounded, also considering the Biology of Cognition and various approaches proposed by the theoreticians and researchers of the Education Sciences. The work establishes the state of the art of immersive technologies and analyzes their contributions to the construction of knowledge by cognitive subjects, as a means for the development of teaching/learning activities supported by emerging immersive technologies. The methodology used is a bibliographic review of the classic works of printed literature relating to the Biology of Cognition, together with searches across diverse databases of theses and other works in universities and digital repositories. The main weakness of the research lies in the fact that the search was limited to documents in English, Spanish, and Portuguese. Finally, conclusions and recommendations for future work are established.

Author 1: Luis Alfaro
Author 2: Claudia Rivera
Author 3: Jorge Luna-Urquizo
Author 4: Sofia Alfaro
Author 5: Francisco Fialho

Keywords: Computer assisted learning environments; immersive technologies; virtual reality; full immersion in virtual reality environments; knowledge construction by immersion in virtual reality

PDF

Paper 79: A Technical Guide for the RASP-FIT Tool

Abstract: Fault injection tools are designed to serve various purposes, such as validating the design under test with respect to reliability requirements, finding sensitive/critical locations that require error mitigation, and determining the expected circuit response in the presence of faults. Fault Simulation/Emulation (S/E) applications are involved in the verification and simulation of Field Programmable Gate Array (FPGA) based designs at the Hardware Description Language (HDL) code level. A tool named RASP-FIT has been developed to perform code modification of FPGA designs, test such designs, and find their sensitive areas. The tool works on FPGA designs written in Verilog HDL at various abstraction levels: gate, data-flow, and behavioural. This paper presents the technical aspects of and user guide for the proposed tool in detail, including the generation of the standalone application (an executable file of the tool for the Windows operating system) and the installation method.

Author 1: Abdul Rafay Khatri

Keywords: Code-modifier; fault injection; FPGA designs; fault injection tool; Verilog HDL

PDF

Paper 80: Internet of Things Cyber Attacks Detection using Machine Learning

Abstract: The Internet of Things (IoT) combines hundreds of millions of devices which are capable of interacting with each other with minimal user involvement. IoT is one of the fastest-growing areas of computing; however, the reality is that in the extremely hostile environment of the internet, IoT is vulnerable to numerous types of cyberattacks. To resolve this, practical countermeasures need to be established to secure IoT networks, such as network anomaly detection. Although attacks cannot be wholly avoided, early detection is crucial for a practical defense. Since IoT devices have low storage capacity and low processing power, traditional high-end security solutions for protecting an IoT system are not appropriate. Moreover, IoT devices are now connected without human intervention for long periods. This implies that intelligent, network-based security solutions such as machine learning must be developed. Although many studies in recent years have discussed the use of Machine Learning (ML) in attack detection problems, little attention has been given to detecting attacks specifically in IoT networks. In this study, we aim to contribute to the literature by evaluating various machine learning algorithms that can quickly and effectively detect IoT network attacks. The new Bot-IoT dataset is used to evaluate the various detection algorithms. In the implementation phase, seven different machine learning algorithms were used, and most of them achieved high performance. New features were extracted from the Bot-IoT dataset during the implementation; compared with studies from the literature, the new features gave better results.

Author 1: Jadel Alsamiri
Author 2: Khalid Alsubhi

Keywords: Network anomaly detection; machine learning; Internet of Things (IoT); cyberattacks; bot-IoT dataset

PDF

Paper 81: UAV Path Planning for Civil Applications

Abstract: We present a simple and efficient algorithm for solving the path planning problem for civil UAVs operating in a dynamic or incomplete environment. This algorithm searches for a continuous waypoint sequence starting from the initial configuration, visiting all the desired locations, and reaching the final position. The proposed algorithm proceeds in two steps: the first produces a sorted location set, and the second generates an optimal path for the overall mission. The same algorithm constructs the initial path or re-plans a new one when changes occur in the configuration space. To prove the effectiveness of the proposed algorithm, we provide computer simulations. A comparison across many results shows that this algorithm yields good performance over a wide variety of examples.
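The first step above, producing a sorted location set, can be illustrated with a greedy nearest-neighbour sweep; this ordering heuristic and the coordinates are assumptions for illustration, not the paper's actual sorting criterion, and the second optimisation step is not shown:

```python
import math

def plan_path(start, goal, waypoints):
    """Order the desired locations with a greedy nearest-neighbour
    sweep from `start`, then append `goal`, yielding a continuous
    waypoint sequence for the mission."""
    order, current = [start], start
    remaining = list(waypoints)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    order.append(goal)
    return order

path = plan_path((0, 0), (10, 10), [(9, 9), (1, 1), (5, 5)])
print(path)
```

Re-planning after an environment change amounts to rerunning the same routine from the UAV's current position over the remaining locations.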

Author 1: IDALENE Asmaa
Author 2: BOUKHDIR Khalid
Author 3: MEDROMI Hicham

Keywords: Unmanned Aerial Vehicle (UAV); path planning; path re-planning; computer simulation

PDF

Paper 82: Evaluating Programmed Artificial Insemination for Cattle Production

Abstract: Cattle productivity in Japan has been declining even though livestock farmers and breeders have tried to use artificial insemination regularly. The reason behind this declining productivity is poor evaluation of the applicability of artificial insemination. To address this issue, this research proposes an objective evaluation method to estimate the applicability of programmed Artificial Insemination (pAI). The method estimates the applicability of pAI based on the analysis of various indices from dairy and beef cattle using a Bayesian Network Model (BNM). The estimation considers 14 and 17 physiological indices for dairy and beef cattle, respectively. These indices include basic information (days after childbirth, parity, etc.), diagnosis of appearance, diagnosis of genital organs, and the veterinarians' judgments. The overall success rate in estimating the applicability is 89.8% for 1051 records of dairy cattle and 95.6% for 1128 records of beef cattle. The proposed method avoids subjective error in estimating the applicability of pAI. In addition, the experiment revealed that the applicability of pAI can be evaluated even when the number of measured indices is small.

Author 1: Takuya Yoshihara
Author 2: Yunan He
Author 3: Osamu Fukuda
Author 4: Hiroshi Okumura
Author 5: Kohei Arai
Author 6: Iqbal Ahmed
Author 7: Kenji Endo
Author 8: Naoki Takenouchi
Author 9: Hideo Matsuda
Author 10: Tadayuki Yamanouchi
Author 11: Junki Egashira
Author 12: Kenichi Yamashita

Keywords: Bayesian network; programmed artificial insemination; cattle production

PDF

Paper 83: Employing Takaful Islamic Banking through State of the Art Blockchain: A Case Study

Abstract: Takaful, an Islamic alternative to conventional insurance, is fast becoming one of the most important constituents of the modern Islamic financial market. The fundamental difference between the two forms of risk mitigation is entrenched in the type of contract selected. Conventional insurance works on the principle of a bilateral contract between the customer (insured) and the insurance provider, where the insured pays a regular premium in return for compensation in case a predefined event occurs. Takaful, on the other hand, works on the principle of mutual guarantee, cooperation, and indemnity, where the participants in the scheme mutually insure each other. Takaful providers are mainly responsible for managing, administering, and investing the Takaful funds according to Islamic laws. This study provides a decentralized architecture that securely implements a Takaful risk mitigation system according to its principles. All major banking sectors are shifting towards Blockchain technology, as it is currently the only viable solution that offers security, transparency, and integrity of resources while ensuring trustworthiness among customers. The proposed study applies state-of-the-art Blockchain technology and focuses on providing a Takaful system that strictly follows the underlying Islamic laws for this risk mitigation system. Moreover, the proposed platform records all Takaful transactions on the Blockchain, which brings confidence and transparency to the community involved in the process.

Author 1: Mohammad Abdeen
Author 2: Salman Jan
Author 3: Sohail Khan
Author 4: Toqeer Ali

Keywords: Takaful; hyperledger; blockchain; consensus; decentralized network; muzariba and wakalah

PDF

Paper 84: BulkSort: System Design and Parallel Hardware Implementation Considerations

Abstract: Algorithms are commonly perceived as difficult subjects, and many applications today require complex ones; researchers therefore look for ways to make them as simple as possible. In time-critical fields, sorting is one of the foremost data-structure operations underlying searching and optimization algorithms. In parallel processing, program instructions are divided among multiple processors by breaking problems into modules that can be executed concurrently, reducing execution time. In this paper, we propose a novel parallel, reconfigurable and adaptive sorting network for the BulkSort algorithm. Our architecture is based on simple, elementary operations such as comparison and binary shifting. The main strength of the proposed solution is its ability to sort in parallel without memory usage. Experimental results show that the proposed model is promising in terms of required resources and its ability to perform high-speed sorting. In this study, we use the analysis results of the Simulink design to establish the hardware resources required by the proposed system.
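
To illustrate the general class of memory-light, compare-exchange sorting networks the paper targets, the following is a minimal software sketch of odd-even transposition sort; it is not the BulkSort algorithm itself, whose details are in the paper, and within each phase every comparison is independent and could run in parallel (e.g. as one comparator column in hardware).

```python
def odd_even_transposition_sort(values):
    """Sort via n phases of independent compare-exchange steps;
    each phase is one comparator column of the sorting network."""
    a = list(values)
    n = len(a)
    for phase in range(n):
        start = phase % 2  # alternate even/odd comparator columns
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]  # compare-exchange
    return a

print(odd_even_transposition_sort([7, 3, 9, 1, 4]))  # [1, 3, 4, 7, 9]
```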

Author 1: Soukaina Ihirri
Author 2: Ahmed Errami
Author 3: Mohammed Khaldoun
Author 4: Essaid Sabir

Keywords: Sorting; FPGA; bulk-sort; parallel processing

PDF

Paper 85: Semantic Knowledge Transformation for Context-aware Heterogeneous Formalisms

Abstract: In recent years, an increasing social dependency on cell phones, which have since evolved into smart devices, has been observed. Owing to the rapid proliferation of these devices, users have become accustomed to consuming services through smartphones and/or wearable devices on which different applications run to assist and facilitate them in daily-life activities. Mobility and context-awareness are the core features of pervasive computing. A context-aware system can identify the current situation and respond accordingly in the environment whenever and wherever needed. However, it is quite challenging to detect and sense the most appropriate contextual information when various interactive devices communicate among themselves. This paper presents semantic knowledge transformation techniques for ontology-driven context-aware formalisms to model heterogeneous systems. We propose theoretical as well as practical approaches to transform semantic knowledge into first-order Horn-clause rules, a format that context-aware multi-agent systems can use to achieve their desired goals.
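
As a rough illustration of the target rule format, the sketch below encodes Horn-clause rules (a set of body atoms implying a head atom) and derives new facts by naive forward chaining. The predicate and fact names are invented for illustration and are not taken from the paper.

```python
# Each rule is (body, head): if every atom in body holds, head holds.
rules = [
    ({"in_meeting(user)", "phone(user)"}, "mute(phone)"),
    ({"location(user, office)"}, "in_meeting(user)"),
]
facts = {"location(user, office)", "phone(user)"}

# Naive forward chaining: apply rules until no new fact is derived.
changed = True
while changed:
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)
            changed = True

print("mute(phone)" in facts)  # True
```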

Author 1: Hafiz Mahfooz Ul Haque
Author 2: Sajid Ullah Khan
Author 3: Ibrar Hussain

Keywords: Context-aware system; semantic knowledge transformation; ontology; interoperability; smart spaces

PDF

Paper 86: Performance Analysis of Network Intrusion Detection System using Machine Learning

Abstract: With the advent of the Internet and the growing number of Internet users in recent years, the number of attacks has also increased, and protecting computers and networks is a hard task. An intrusion detection system is used to detect attacks and to protect computers and network systems from them. This paper compares the performance of Random Forests, Decision Tree, Gaussian Naïve Bayes, and Support Vector Machines in detecting network attacks. An up-to-date dataset was chosen for the comparison. The results of the conducted experiments demonstrate that both Random Forests and Decision Tree detect attacks effectively.
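
The comparison methodology can be sketched as follows with scikit-learn: train the four classifier families named in the abstract on labelled traffic features and compare test accuracy. Synthetic data stands in here for the KDD99/CICIDS2017 datasets, which must be obtained separately; hyperparameters are defaults, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a labelled intrusion-detection dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(random_state=0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "GaussianNB": GaussianNB(),
    "SVM": SVC(),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy {acc:.3f}")
```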

Author 1: Abdullah Alsaeedi
Author 2: Mohammad Zubair Khan

Keywords: Intrusion Detection System (IDS); classifiers; AI; machine learning; KDD99; CICIDS2017; DoS; U2R; R2L

PDF

Paper 87: Embedded Mission Decision-Making based on Dynamic Decision Networks in SoPC Platform

Abstract: This paper presents a Bayesian decision-making approach for unmanned aerial vehicle (UAV) missions that allows a UAV to react quickly to unexpected events in dynamic environments. From online observations and the mission statement, the proposed approach is designed by means of Dynamic Bayesian Networks (DBN) arising from safety or performance failure analysis. After proposing a DBN model, a probabilistic approach based on Multiple-Criteria Decision-Making (MCDM) is applied to find the configuration that best balances performance and energy consumption, thereby deciding which tasks are implemented as software and which as hardware execution units with respect to the mission requirements. The proposed UAV mission decision-making is three-pronged, providing: (1) real-time image pre-processing of sensor observations; (2) a temporal, probabilistic approach based on Bayesian networks to continuously update the mission plan during flight; and (3) low-power hardware and software implementations for online, real-time embedded decision-making on a Xilinx System on Programmable Chip (SoPC) platform. The approach is validated on a practical UAV mission-planning case using the proposed dynamic decision-maker implemented on an embedded system based on a hybrid device.
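
The MCDM step of scoring candidate hardware/software task mappings can be sketched as a simple weighted-sum trade-off. The candidate mappings, normalized scores, and weights below are made up for illustration; the paper's actual criteria are driven by the DBN's probabilistic outputs.

```python
# Candidate task mappings: (normalized performance, normalized energy cost).
candidates = {
    "all_SW":      (0.40, 0.20),
    "mixed_HW_SW": (0.75, 0.45),
    "all_HW":      (0.90, 0.85),
}
w_perf, w_energy = 0.6, 0.4  # mission-dependent criterion weights

def score(perf, energy):
    # Reward performance, penalize energy consumption.
    return w_perf * perf - w_energy * energy

best = max(candidates, key=lambda k: score(*candidates[k]))
print(best)  # mixed_HW_SW
```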

Author 1: Hanen Chenini

Keywords: Bayesian Decision Making; Dynamic Bayesian Net-works (DBN); Multiple-Criteria Decision-Making (MCDM); SoPC; practical case

PDF

Paper 88: Adaptive Cluster based Model for Fast Video Background Subtraction

Abstract: Background subtraction (BGS) is one of the important steps in many automatic video analysis applications. Several researchers have attempted to address the challenges posed by illumination variation, shadow, camouflage, dynamic changes in the background, and bootstrapping requirements. In this paper, a method to perform BGS using dynamic clustering is proposed. A background model is generated using the K-means algorithm. The normalized, γ-corrected distance values and an automatic threshold value are used to perform the background subtraction. The background models are updated online to handle slow illumination changes. Experiments were conducted on the CDNet2014 dataset. The results show that the proposed method is fast and performs well on the baseline, camera-jitter and dynamic-background categories of video.
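
The cluster-based idea can be sketched as follows: model a pixel's intensity history with K-means centroids, then label a new value as foreground if it is far from every centroid. This toy single-pixel NumPy version uses a fixed distance threshold; the paper adds γ-corrected distances, automatic thresholding, and online model updates.

```python
import numpy as np

def kmeans_1d(samples, k=2, iters=20):
    """Tiny 1-D K-means over a pixel's intensity history."""
    centers = np.linspace(samples.min(), samples.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = samples[labels == j].mean()
    return centers

# Bimodal pixel history, e.g. waving foliage alternating with sky.
history = np.array([10., 12, 11, 13, 60, 62, 61, 12, 11, 63])
centers = kmeans_1d(history)

def is_foreground(value, centers, threshold=15.0):
    # Foreground if the new intensity is far from all background modes.
    return np.min(np.abs(centers - value)) > threshold

print(is_foreground(120.0, centers))  # True
print(is_foreground(12.0, centers))   # False
```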

Author 1: Muralikrishna SN
Author 2: Balachandra Muniyal
Author 3: U Dinesh Acharya

Keywords: Background subtraction; Gaussian mixture model; K-means; clustering; object detection; transform

PDF

Paper 89: High Predictive Performance of Dynamic Neural Network Models for Forecasting Financial Time Series

Abstract: This study demonstrates the high predictive performance of dynamic neural network models on noisy time series data, specifically forecasting financial time series from the stock market. Several dynamic neural networks with different architectures are implemented for forecasting stock market prices and oil prices. A comparative analysis of eight dynamic neural network architectures was carried out and is presented. The study explains the techniques used, including data processing, the handling of noisy data, and transformations of the series towards stationarity. Forecast accuracy is evaluated using the mean square error and the mean absolute percentage error. The results show that the different dynamic neural network structures can be used successfully to predict nonstationary financial signals, which is considered very challenging since these signals suffer from noise and volatility. The nonlinear autoregressive neural network with exogenous inputs (NARX) does considerably better than the other network models, achieving better performance in terms of profit return in the comparative evaluation. On non-stationary signals, Long Short-Term Memory gives the best results on mean square error and mean absolute percentage error.
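
The two forecast-accuracy metrics used in the study have the standard definitions sketched below; the toy true/predicted values are illustrative only.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean square error."""
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred):
    """Mean absolute percentage error (assumes y_true has no zeros)."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

y_true = np.array([100.0, 102.0, 101.0, 105.0])
y_pred = np.array([ 99.0, 103.0, 102.0, 104.0])
print(mse(y_true, y_pred))   # 1.0
print(mape(y_true, y_pred))  # just under 1%
```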

Author 1: Haya Alaskar

Keywords: Dynamic neural network; financial time series; prediction stock market; financial forecasting; deep learning-based technique

PDF

Paper 90: Improving Gated Recurrent Unit Predictions with Univariate Time Series Imputation Techniques

Abstract: The main objective of the work presented in this paper is to improve the quality of the predictions made with the recurrent neural network known as the Gated Recurrent Unit (GRU). For this, instead of making different adjustments to the architecture of the network, univariate time series imputation techniques such as Local Average of Nearest Neighbors (LANN) and Case-Based Reasoning Imputation (CBRi) are used. Experiments with different gap sizes, from 1 to 11 consecutive NAs, show that the best gap size is six consecutive NA values for LANN and two NA values for CBRi. The results show that both imputation techniques improve the prediction quality of the Gated Recurrent Unit, with LANN outperforming CBRi; the best configurations of LANN and CBRi thus surpassed the techniques with which they were compared.
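
A minimal sketch of local-average imputation in the spirit of LANN, assuming the common reading of the technique: fill each run of NAs with the average of the nearest known values before and after the gap. Edge handling (gaps at the start or end of the series) is omitted, and the paper's exact LANN formulation may differ.

```python
import numpy as np

def local_average_impute(series):
    """Fill NaN gaps with the mean of the nearest known neighbors."""
    s = np.array(series, dtype=float)
    known = np.flatnonzero(~np.isnan(s))
    for i in np.flatnonzero(np.isnan(s)):
        prev = known[known < i].max()  # nearest known value before the gap
        nxt = known[known > i].min()   # nearest known value after the gap
        s[i] = (s[prev] + s[nxt]) / 2.0
    return s

nan = float("nan")
print(local_average_impute([1.0, 2.0, nan, nan, 6.0]))  # [1. 2. 4. 4. 6.]
```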

Author 1: Anibal Flores
Author 2: Hugo Tito
Author 3: Deymor Centty

Keywords: Gated recurrent unit; local average of nearest neighbors; case based reasoning imputation; GRU+LANN; GRU+CBRi

PDF

© The Science and Information (SAI) Organization Limited. All rights reserved. Registered in England and Wales. Company Number 8933205. thesai.org