IJACSA Volume 9 Issue 10

Copyright Statement: This is an open access publication licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

View Full Issue

Paper 1: Coronary Heart Disease Diagnosis using Deep Neural Networks

Abstract: According to the World Health Organization, cardiovascular disease (CVD) is the top cause of death worldwide. In 2015, over 30% of global deaths were due to CVD, amounting to over 17 million deaths and a global health burden. Of those deaths, over 7 million were caused by heart disease, and more than 75% of CVD deaths occurred in developing countries. In the United States alone, 25% of deaths are attributed to heart disease, which kills over 630,000 Americans annually. Among heart disease conditions, coronary heart disease is the most common, causing over 360,000 American deaths due to heart attacks in 2015. Thus, coronary heart disease is a serious public health issue. In this research paper, an enhanced deep neural network (DNN) learning model was developed to aid patients and healthcare professionals and to increase the accuracy and reliability of heart disease diagnosis and prognosis. The developed model is based on a deep multilayer perceptron architecture with regularization and dropout. It includes a classification model based on training data and a prediction model for diagnosing new patient cases, using a dataset of 303 clinical instances from patients diagnosed with coronary heart disease at the Cleveland Clinic Foundation. The testing results showed that the DNN classification and prediction model achieved a diagnostic accuracy of 83.67%, sensitivity of 93.51%, specificity of 72.86%, precision of 79.12%, F-score of 0.8571, area under the ROC curve of 0.8922, Kolmogorov-Smirnov (K-S) statistic of 66.62%, diagnostic odds ratio (DOR) of 38.65, and a 95% confidence interval for the DOR of [38.65, 110.28]. Clinical diagnoses of coronary heart disease were therefore reliably and accurately derived from the developed classification and prediction models. The models can thus aid healthcare professionals and patients throughout the world to advance both public health and global health, especially in developing countries and resource-limited areas with fewer cardiac specialists available.

Author 1: Kathleen H. Miao
Author 2: Julia H. Miao

Keywords: Cardiovascular disease; heart disease; coronary artery disease; classification; accuracy; diagnosis; diagnostic odds ratio; deep learning; deep neural network; machine learning; F-score; global health; public health; K-S test; precision; prediction; prognosis; ROC curve; specificity; sensitivity

PDF
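
To make the approach concrete, here is a minimal Keras sketch of a multilayer perceptron with L2 regularization and dropout of the kind the abstract describes; the layer sizes, dropout rate, and training settings are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch only: an MLP with regularization and dropout for the 13-feature
# Cleveland heart disease data. Architecture details are assumptions.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_model(n_features=13):
    model = keras.Sequential([
        layers.Dense(64, activation="relu", input_shape=(n_features,),
                     kernel_regularizer=regularizers.l2(1e-3)),
        layers.Dropout(0.3),                    # dropout against overfitting
        layers.Dense(32, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-3)),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # CHD present / absent
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# X: (303, 13) array of clinical attributes; y: (303,) binary labels.
# model = build_model(); model.fit(X, y, validation_split=0.2, epochs=100)
```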

Paper 2: Human Related-Health Actions Detection using Android Camera based on TensorFlow Object Detection API

Abstract: A new method is proposed to detect human health-related actions (HHRA) from a video sequence using an Android camera. The Android platform works not only to capture video through its camera but also to detect emergency actions. An application of HHRA is to help monitor unattended children, individuals with special needs, or the elderly. The application was built on the TensorFlow Object Detection Application Program Interface (API) with Android Studio. This paper fundamentally focuses on comparison in terms of detection speed and accuracy. In this work, two promising approaches for HHRA detection have been proposed: the SSD MobileNet and Faster R-CNN ResNet models. The proposed approaches are evaluated on the NTU RGB+D dataset, currently the largest publicly available 3D action recognition dataset, which was split into training and testing sets. The total confidence-score detection quality over all action classes (total mAP) is 95.8% for the SSD MobileNet model and 93.8% for the Faster R-CNN ResNet model. Detection performance was evaluated in two ways, using an Android camera (Galaxy S6) and using a TensorFlow Object Detection notebook, in terms of accuracy and detection speed. Experimental results demonstrate valuable improvements in detection accuracy and efficiency for identifying human health-related actions. The experiments were executed on an Ubuntu 16.04 LTS system with a GTX 1070 and a 2.80 GHz x8 CPU.

Author 1: Fadwa Al-Azzo
Author 2: Arwa Mohammed Taqi
Author 3: Mariofanna Milanova

Keywords: Android camera; TensorFlow object detection API; emergency actions; detection accuracy

PDF

Paper 3: Isolated Automatic Speech Recognition of Quechua Numbers using MFCC, DTW and KNN

Abstract: Automatic Speech Recognition (ASR) is defined as the transformation of acoustic signals into strings of words. The area has been under development for many years, facilitating people's lives, and has been implemented in several languages. However, ASR development is very limited for some languages that have few database resources despite large speaking populations. The development of ASR for the Quechua language is almost nonexistent, which isolates its culture and population from technology and information. In this work, an ASR system for isolated Quechua numbers is developed in which Mel-Frequency Cepstral Coefficients (MFCC), Dynamic Time Warping (DTW), and K-Nearest Neighbor (KNN) methods are implemented, using a database composed of recordings of the numbers one to ten in Quechua. The recordings were uttered by native male and female speakers of Quechua. The recognition accuracy reached in this research work was 91.1%.

Author 1: Hernan Faustino Chacca Chuctaya
Author 2: Rolfy Nixon Montufar Mercado
Author 3: Jeyson Jesus Gonzales Gaona

Keywords: Automatic Speech Recognition; MFCC; DTW; KNN

PDF
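
The MFCC/DTW/KNN pipeline of Paper 3 can be sketched in a few lines of Python; the sampling rate, the number of coefficients, and the plain O(nm) DTW below are illustrative assumptions rather than the paper's exact settings.

```python
# Sketch of an isolated-word recognizer: MFCC features, DTW distance,
# and a k-nearest-neighbour vote over reference recordings.
import numpy as np
import librosa

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, 13)

def dtw_distance(a, b):
    """Classic dynamic-time-warping cost between two MFCC sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_predict(query, references, k=1):
    """references: list of (mfcc_sequence, label) pairs."""
    dists = sorted((dtw_distance(query, seq), label)
                   for seq, label in references)
    labels = [label for _, label in dists[:k]]
    return max(set(labels), key=labels.count)  # majority vote
```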

Paper 4: RASP-FIT: A Fast and Automatic Fault Injection Tool for Code-Modification of FPGA Designs

Abstract: Fault Injection (FI) is the most popular technique used to evaluate fault effects and the dependability of a design. Fault Simulation/Emulation (S/E) is involved in several applications, such as test data generation, test set evaluation, circuit testability, fault detection and diagnosis, and many others. These applications require a faulty module of the original design for fault injection testing. Currently, Hardware Description Languages (HDL) are involved in improving methodologies for digital system testing on Field Programmable Gate Arrays (FPGA), and designers can perform advanced testing and fault S/E methods directly on HDL. However, modifying an HDL design by hand is a cumbersome and time-consuming task. Therefore, a fault injection tool (RASP-FIT) was developed and is presented here; it consists of a code modifier, a fault injection control unit, and a result analyser. In this paper, the code modification techniques of RASP-FIT are explained for Verilog code at different abstraction levels. Code modification means that a faulty module of the original design is generated, which includes different permanent and transient faults at every possible location. The RASP-FIT tool is automatic and fast and does not require much user intervention. To validate these claims, various faulty modules for different benchmark designs are generated and presented.

Author 1: Abdul Rafay Khatri
Author 2: Ali Hayek
Author 3: Josef Borcsok

Keywords: Code generator; Fault emulation; Fault injection; Fault simulation; Instrumentation; Parser

PDF

Paper 5: A P System for K-Medoids-Based Clustering

Abstract: The membrane computing model, also known as the P system, is a parallel and distributed computing model. The k-medoids algorithm is one of the best-known partition-based clustering algorithms and has been widely used in data analysis and modern scientific research. By combining the P system with the k-medoids algorithm, the maximal parallelism of the P system can effectively reduce the time complexity of k-medoids clustering. On this basis, this paper proposes a cell-like P system with promoters and inhibitors for k-medoids-based clustering, and an instance is given to illustrate the practicability and effectiveness of the designed P system.

Author 1: Ping Guo
Author 2: Jingya Xie

Keywords: P systems; Clustering; K-medoids-based clustering; Membrane computing; Parallel and distributed computing

PDF
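
For reference, here is a sequential sketch of the k-medoids clustering that the proposed P system parallelizes; the membrane-computing machinery itself is not shown, and the update scheme is a standard one rather than the paper's exact construction.

```python
# Sequential k-medoids: assign points to the nearest medoid, then move
# each medoid to the cluster member minimizing total intra-cluster distance.
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(X), size=k, replace=False)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assignment step: distance of every point to every medoid.
        d = np.linalg.norm(X[:, None, :] - X[medoids][None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            # Update step: pick the member with minimal summed distance.
            intra = np.linalg.norm(
                X[members][:, None] - X[members][None], axis=2)
            new_medoids[c] = members[intra.sum(axis=1).argmin()]
        if np.array_equal(new_medoids, medoids):
            break                      # converged
        medoids = new_medoids
    return medoids, labels
```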

Paper 6: Emotional Changes Detection for Dementia People with Spectrograms from Physiological Signals

Abstract: Due to aging societies, the percentage of people with serious cognitive decline and dementia has recently been increasing around the world. Such patients often lose their diversity of facial expressions and even their ability to speak, rendering them unable to express their feelings to their caregivers. However, emotions and feelings are strongly correlated with physiological signals detectable with EEG, ECG, and similar sensors. Therefore, this research develops an emotion prediction system for people with dementia using bio-signals to support their interaction with their caregivers. In this paper, we build on a previous study of binary classification of emotional changes by applying a CNN to spectrograms of EEG and RRI, verifying the effectiveness of the method. First, participants watched stimulus videos while their EEG and ECG data were collected. Then, STFT was performed on the raw signals to extract time-frequency domain features as spectrograms. Finally, deep learning was used to detect the emotional changes: a CNN was used for arousal classification, achieving an accuracy of 90.00% with EEG spectrograms, 91.67% with RRI spectrograms, and 93.33% with both EEG and RRI spectrograms.

Author 1: Zeng Fangmeng
Author 2: Liao Peijia
Author 3: Miyuki Iwamoto
Author 4: Noriaki Kuwahara

Keywords: Emotion classification; people with dementia; EEG and ECG; spectrograms; CNN

PDF

Paper 7: The User Behavior Analysis Based on Text Messages Using Parafac and Block Term Decomposition

Abstract: Tensor decompositions are a starting point for big data analysis, dimensionality reduction, object detection, clustering, and related tasks. This paper presents a method to study the behavior of users in the online environment and beyond, taking as its starting point the combination of the PARAFAC tensor decomposition and the Block Term Decomposition.

Author 1: Bilius Laura Bianca

Keywords: Parafac decomposition; block term decomposition; clustering

PDF
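
A minimal sketch of the PARAFAC step using the TensorLy library follows; the Block Term Decomposition stage is not shown, and the user x term x time tensor layout is an illustrative assumption rather than the paper's data model.

```python
# PARAFAC (CP) decomposition of a third-order tensor with TensorLy.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

T = tl.tensor(np.random.rand(50, 200, 30))  # users x terms x time slots
cp = parafac(T, rank=5)                     # CP/PARAFAC with 5 components
user_f, term_f, time_f = cp.factors         # one factor matrix per mode
# Rows of user_f embed users; clustering these rows groups users with
# similar messaging behavior.
```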

Paper 8: Artificial Intelligence based Fertilizer Control for Improvement of Rice Quality and Harvest Amount

Abstract: Artificial Intelligence (AI)-based fertilizer control for improving rice quality and harvest amount is proposed, together with an intelligent drone-based rice field monitoring system. Through experiments at the rice paddy fields of the Saga Prefectural Research Institute of Agriculture (SPRIA) in Saga city, Japan, it is found that the proposed system allows rice crop quality and harvest amount to be controlled by changing the fertilizer type and supply amount. The most appropriate fertilizer supply management method, which maximizes rice crop quality and harvest amount, is also identified. Furthermore, rice crop quality and harvest amount can be predicted at an early stage of leaf growth. Therefore, rice crop quality and harvest amount become controllable.

Author 1: Kohei Arai
Author 2: Osamu Shigetomi
Author 3: Yuko Miura

Keywords: Nitrogen content; protein content; rice paddy field; remote sensing; regression analysis; rice crop quality; harvest amount; fertilizer

PDF

Paper 9: Determination of Weighting Assessment on DREAD Model using Profile Matching

Abstract: Web application developers often lack an understanding of the security threats that can occur in the applications they build, while such threats can create new, more complex problems. These security threats pose risks and can even result in large losses. Determining risk ratings within a web application development team is still problematic: not all team members agree during the risk rating assessment process. This is caused by differences in the members' opinions and assumptions about threats, and by the fact that assessors have different types of expertise, whereas the DREAD model places every expert in the same position, with no difference in weight at the time of assessment. DREAD stands for five aspects related to security threats in web applications: D (Potential Damage), R (Reproducibility), E (Exploitability), A (Affected User), and D (Discoverability). The proposal weights the assessors using the profile matching method, producing an assessment that involves assessors with different types of expertise, weights each assessor according to their relevance to the assessed aspects, and rates each type of expertise according to the aspects assessed in the DREAD model. The results show that the proposed method can produce assessment weights close to the target.

Author 1: Didit Suprihanto
Author 2: Retantyo Wardoyo
Author 3: Khabib Mustofa

Keywords: DREAD; risk; assessment; profile matching

PDF
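
A minimal sketch of weighted DREAD scoring of the kind the abstract describes: each assessor rates the five aspects, and per-assessor weights scale their contributions. The rating scale and the weight values below are invented placeholders standing in for the output of profile matching.

```python
# Weighted DREAD score: aspect ratings averaged with per-assessor weights.
ASPECTS = ["damage", "reproducibility", "exploitability",
           "affected_users", "discoverability"]

def dread_score(ratings, weights):
    """ratings: {assessor: {aspect: value}}; weights: {assessor: weight}."""
    total_w = sum(weights.values())
    # Weighted average rating per aspect, then summed into a DREAD score.
    per_aspect = {
        a: sum(weights[p] * ratings[p][a] for p in ratings) / total_w
        for a in ASPECTS
    }
    return sum(per_aspect.values()), per_aspect

ratings = {
    "security_expert": dict(zip(ASPECTS, [3, 2, 3, 3, 2])),
    "developer":       dict(zip(ASPECTS, [2, 2, 2, 3, 1])),
}
weights = {"security_expert": 0.7, "developer": 0.3}  # from profile matching
score, detail = dread_score(ratings, weights)
print(score, detail)
```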

Paper 10: Dynamic Weight Dropping Policy for Improving High-Priority Message Delivery Delay in Vehicular Delay-Tolerant Network

Abstract: The Vehicular Delay-Tolerant Network (VDTN) is a special case of the Delay-Tolerant Network (DTN) in which connectivity is provided by the movement of vehicles, with traffic prioritization to meet the requirements of different applications. Due to high node mobility, short contact times, and intermittent connectivity, VDTNs use multi-copy routing protocols to increase message delivery rates and reduce delay. However, because of limited resources (bandwidth and storage capacity), these protocols cause rapid buffer overflow and therefore degrade overall network performance. In this paper, we propose a buffer drop policy based on message weight, incorporating traffic prioritization to improve the delivery delay of high-priority messages. The memory is subdivided into a high-weight queue and a low-weight queue. When the buffer is full and a new message arrives, the algorithm determines which message to drop from the queues, considering whether the current node is the destination of the message, the position of the current node relative to the message's destination, and the age of the messages in the network.

Author 1: GBALLOU Yao Théophile
Author 2: GOORE Bi Tra
Author 3: Brou Konan Marcelin

Keywords: Vehicular delay-tolerant network; dropping policies; traffic differentiation; message weight; high priority message

PDF

Paper 11: Normalization of Unstructured and Informal Text in Sentiment Analysis

Abstract: Sentiment analysis is a natural language processing problem that deals with the extraction and analysis of public sentiment about target entities shared on microblogging websites. The field has gained great attention due to the huge availability of decision-relevant textual content. Sentiment analysis has enormous application areas, such as market analysis, service analysis, showbiz, movies, and sports; even the popularity and acceptance rate of political policies can be predicted via sentiment analysis systems. Although a tremendous volume of opinionated text is available, it is unstructured and noisy, so sentiment classifiers cannot achieve good outcomes. Normalization is the process used to clean noise from unstructured text for sentiment analysis. In this study, we propose a mechanism for the normalization of informal and unstructured text, comprising four essential phases: noise reduction, part-of-speech tagging, stop word removal, and stemming and lemmatization. Numerous experiments were performed on a Twitter dataset with unsupervised lexicons and dictionaries. Python and the Natural Language Toolkit were used to perform all four essential steps. This study demonstrates that normalization of informal tokens in tweets improved the overall classification accuracy from 75.42% to 82.357%.

Author 1: Muhammad Javed
Author 2: Shahid Kamal

Keywords: Informal; normalization; opinion mining; roman; sentiment analysis; text preprocessing

PDF
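
The four normalization phases map directly onto Python and NLTK calls. The sketch below is an assumption about the details rather than the authors' exact pipeline; in particular, the regular expressions used for noise reduction are illustrative.

```python
# Four-phase tweet normalization with NLTK: noise reduction, POS tagging,
# stop-word removal, and lemmatization.
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

# One-time downloads: punkt, stopwords, averaged_perceptron_tagger, wordnet.

def normalize(tweet):
    # 1) Noise reduction: strip URLs, mentions, hashtags, non-letters.
    text = re.sub(r"http\S+|@\w+|#", " ", tweet.lower())
    text = re.sub(r"[^a-z\s]", " ", text)
    tokens = nltk.word_tokenize(text)
    # 2) Part-of-speech tagging.
    tagged = nltk.pos_tag(tokens)
    # 3) Stop-word removal.
    stops = set(stopwords.words("english"))
    kept = [(w, t) for w, t in tagged if w not in stops]
    # 4) Lemmatization, treating verbs and nouns separately.
    lem = WordNetLemmatizer()
    return [lem.lemmatize(w, "v" if t.startswith("V") else "n")
            for w, t in kept]

print(normalize("Loving the new phone!!! http://t.co/x #happy"))
```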

Paper 12: Development of Purchasing Module for Agriculture E-Commerce using Dynamic System Development Model

Abstract: Trading models have been changing since the widespread implementation of Information and Communication Technology in every sector; the resulting model is known as e-commerce. However, there are still few companies that specifically trade agricultural products. Agricultural e-commerce is a platform for buying and selling agricultural products, and it has an important role in supporting economic development and market expansion for farmers in particular and people in rural areas in general. Access to providers that buy and sell agricultural products to farmers and their representatives is still limited. Therefore, this research develops a specific agricultural e-commerce system with two main modules: purchasing and sales. In this article, we develop the first of these, the purchasing module, using the Dynamic System Development Method (DSDM). The development phases include a feasibility study, a business study, functional model iteration, and design-and-build model iteration; testing is conducted at the end. The result of this study is a prototype agricultural e-commerce product with predefined functions. The purchasing module gives farmers the opportunity to buy tools and materials, and has two main functions: purchasing system management and reporting management. System testing was also conducted.

Author 1: Rosa Delima
Author 2: Halim Budi Santoso
Author 3: Novan Andriyanto
Author 4: Argo Wibowo

Keywords: Agricultural e-commerce; dynamic system development method; DSDM; purchase module

PDF

Paper 13: Formalization of UML Composite Structure using Colored Petri Nets

Abstract: Design specification and requirement analysis, which during the development process transform real-world problems into software systems, are subject to severe issues owing to semantics. Although the Unified Modeling Language (UML) is now recognized as the standard language for the design and specification of object-oriented systems, its structures have numerous drawbacks, including a lack of semantic definition and undetected deadlocks. This research proposes a model to avoid deadlocks, specifically in the UML composite structure. Verification of system models by formal methods is significant, particularly at the requirement specification and design levels, to ensure the accuracy of models and highlight design problems before implementation. The paper proposes rules that allow software engineers to formalize the behavior of the UML 2.0 composite structure using Colored Petri Nets. Using these rules, the corresponding Colored Petri Nets can be analyzed and the properties of the original workflow concluded, drawing on theoretical results from the Colored Petri Nets domain.

Author 1: Rao Sohail Iqbal
Author 2: Ramzan Talib
Author 3: Muhammad Awais
Author 4: Haseeb Ur Rehman
Author 5: Wajid Raza

Keywords: Design specification; UML (Unified Modeling Language); semantic; transformation; deadlocks

PDF

Paper 14: Tracking Systems as Thinging Machine: A Case Study of a Service Company

Abstract: Object tracking systems play important roles in tracking moving objects and in safety, security, and other location-related applications. Problems arise from the difficulty of creating a well-defined and understandable description of tracking systems. Currently, describing such processes yields fragmentary representations that often lead to difficulties in creating documentation. Additionally, once assigned personnel have learned repeated tasks, they tend to continue on autopilot in a way that often degrades their effectiveness. This paper proposes modeling tracking systems with a new diagrammatic methodology that produces engineering-like schemata. The resulting diagrams can be used in documentation, explanation, communication, education, and control.

Author 1: Sabah S. Al-Fedaghi
Author 2: Yousef Atiyah

Keywords: Tracking systems; system documentation; system control; abstract machine; conceptual model; thinging

PDF

Paper 15: A Novel Rule-Based Root Extraction Algorithm for Arabic Language

Abstract: Non-vocalized Arabic words are ambiguous: a non-vocalized word may have different meanings and therefore more than one root. Many Arabic root extraction algorithms have been developed to extract the roots of non-vocalized Arabic words; however, most return only one root and produce lower accuracy than reported when tested on different datasets. Arabic root extraction is urgently needed for applications such as information retrieval systems, indexing, text mining, text classification, data compression, spell checking, text summarization, question answering systems, and machine translation. In this work, a new rule-based Arabic root extraction algorithm is developed that focuses on overcoming the limitations of previous work. The proposed algorithm is compared to the algorithm of Khoja, a well-known Arabic root extraction algorithm with high accuracy. Testing was conducted on the corpus of Thalji, which was built specifically to test and compare Arabic root extraction algorithms; it contains 720,000 word-root pairs derived from 12,000 roots, 430 prefixes, 320 suffixes, and 4,320 patterns. The experimental results show that the algorithm of Khoja achieved 63% accuracy, while the proposed algorithm achieved 94%.

Author 1: Nisrean Thalji
Author 2: Nik Adilah Hanin
Author 3: Walid Bani Hani
Author 4: Sohair Al-Hakeem
Author 5: Zyad Thalji

Keywords: Root; stem; rules; affix; pattern; corpus

PDF

Paper 16: Runtime Reasoning of Requirements for Self-Adaptive Systems Using AI Planning Techniques

Abstract: Over the years, the domain of Self-Adaptive Systems (SAS) has gained significant importance in the software engineering community. Such systems must ensure high customizability and, at the same time, effective reasoning so as to meet end-user goals more effectively and efficiently. In this context, Automated Planning techniques have acquired substantial precedence owing to their adaptability to diverse scenarios, based on enhanced knowledge extraction from the available knowledge base. These AI planning techniques help support the self-adaptation mechanism of SAS. We have investigated these techniques to perform runtime reasoning over SAS requirements. This paper proposes an architecture for implementing the reasoning component of the previously proposed Continuous Adaptive Requirement Engineering (CARE) framework. The proposed architecture has been experimentally verified by implementing a prototype application using JSHOP2 (a Java implementation of SHOP2, an HTN planner).

Author 1: Zara Hassan
Author 2: Nauman Qureshi
Author 3: Muhammad Adnan Hashmi
Author 4: Arshad Ali

Keywords: Self-Adaptive Systems (SAS); reasoning; requirement engineering; AI planning; CARE framework; runtime reasoning of requirements

PDF

Paper 17: Design of Strategic Management System for Northern Border University using Unified Modeling Language

Abstract: All organizations engage in the strategy management process either formally or informally. Strategy management refers to the entire scope of strategic decision-making activity in an organization to ensure its continuous success; hence, a strategic management system is viewed as an important tool for strategy management. Northern Border University initiated its first five-year strategy plan for the years 1435-1439H (2013-2018). However, the strategy plan is managed without a strategic management system. Thus, the university has a fundamental disconnect between the formulation of its strategy and the execution of that strategy into useful action: there is no integration between strategy formulation and implementation, which are treated separately instead of as an integrated system. It is therefore difficult for the university to translate its strategies into operational objectives, processes, and activities. This paper presents the design process of a strategic management system for the university, whose main purpose is to manage the university's strategy plan throughout its life cycle. The design is based on an object-oriented approach using the Unified Modeling Language. The system will be used to formulate, implement, monitor, and control the university's strategy plan and to support strategic decision-making. The solution will thus contribute to improving the university's performance.

Author 1: Shahrin Azuan Nazeer

Keywords: Strategy management; strategic management system; object-oriented analysis and design; unified modeling language

PDF

Paper 18: The Design and Evaluation of a User-Centric Information Security Risk Assessment and Response Framework

Abstract: The risk of sensitive information disclosure and modification through the use of online services has increased considerably and may result in significant damage. While the management and assessment of such risks is a well-established discipline for organizations, it is a challenge for users from the general public, who have difficulties in using, understanding, and reacting to security-related threats. Moreover, users only try to protect themselves from risks salient to them. Motivated by the lack of risk assessment solutions and the limited impact of awareness programs tailored to users of the general public, this paper aims to develop a structured approach to help protect users from threats and vulnerabilities and thus reduce overall information security risk. By focusing on the user, and on the fact that different users react differently to the same stimuli, the authors developed a user-centric risk assessment and response framework that assesses and communicates risk on both the user and system levels in an individualized, timely, and continuous way. Three risk assessment models are proposed that depend on user-centric and behavior-related factors when calculating risk. The framework was evaluated using a scenario-based simulation of a number of users, and the results were analyzed. The analysis demonstrated the effectiveness and feasibility of the proposed approach. Encouragingly, it indicated that risk can be assessed differently for the same behavior, based on a number of user-centric and behavior-related factors, resulting in an individualized, granular risk score and level. This granular risk assessment provided a more insightful evaluation of both risk and response. The analysis also demonstrated that risk is not the same for all users and that the proposed model adapts effectively to differences between users, offering a novel approach to assessing information security risks.

Author 1: Manal Alohali
Author 2: Nathan Clarke
Author 3: Steven Furnell

Keywords: Risk; analysis; security behavior; BFI; correlation

PDF

Paper 19: ABJAD Arabic-Based Encryption

Abstract: This paper introduces an enhanced classical Arabic-based encryption technique designed primarily for Arab nations. The new algorithm uses a shared-key technique in which a keyword and a system modulus are employed to add randomness and confusion to the alphabet table being used. The results show that the technique is resistant to brute-force and cryptanalysis attacks: the time needed to break the algorithm is huge, and decrypting the ciphertext using language frequency and other language characteristics is infeasible. The technique assumes the existence of a secure channel for the keyword exchange.

Author 1: Ahmad H. Al-Omari

Keywords: Arabic-based cryptography; classical encryption; Arabic language encryption; shared key; keyword

PDF
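
Since the abstract does not fully specify the ABJAD algorithm, the sketch below only illustrates the general idea of a keyword-mixed substitution alphabet combined with a modular shift, using Latin letters for readability in place of the Arabic alphabet; it is not the paper's cipher.

```python
# Generic keyword-mixed substitution cipher with a modular shift.
# Illustration only: not the ABJAD algorithm itself.
import string

def keyed_alphabet(keyword, alphabet=string.ascii_lowercase):
    """Build a permuted alphabet: keyword letters first, rest in order."""
    seen = []
    for ch in keyword.lower() + alphabet:
        if ch in alphabet and ch not in seen:
            seen.append(ch)
    return "".join(seen)

def encrypt(plaintext, keyword, shift):
    table = keyed_alphabet(keyword)
    out = []
    for ch in plaintext.lower():
        if ch in string.ascii_lowercase:
            i = string.ascii_lowercase.index(ch)
            out.append(table[(i + shift) % 26])  # modulus adds confusion
        else:
            out.append(ch)                       # pass through non-letters
    return "".join(out)

print(encrypt("attack at dawn", keyword="secret", shift=7))
```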

Paper 20: Negotiation as a Collaborative Tool for Determining Permissions and Detection of Malicious Applications

Abstract: Users of the Android OS find it very difficult to understand its permission mechanism. Users frequently ignore permission negotiation dialogs during the installation of an application, and those who do pay attention to these dialogs find it hard to comprehend the descriptions and to evaluate the permission procedure. They do not know the impact of granting these permissions on their data. One major issue is that users are unaware of how an application uses their data: after granting permissions to an application, they have no insight into the effect of those permissions on the privacy and security of their data. This research shows that discrete permission settings help users secure their device resources and data. The study uses a distinct technique to detect the danger of unnecessary permissions. It helps end users of the Android OS understand the problems and provides them with a better way to deal with those problems and grounds to explore alternatives.

Author 1: Rabia Riaz
Author 2: Sanam Shahla Rizvi
Author 3: Mubashar Ahmad
Author 4: Sana Shokat
Author 5: Se Jin Kwon

Keywords: Collaborative learning; intrusion detection; mobile applications; information security; web based learning

PDF

Paper 21: Assessment of Groundwater Vulnerability to Pollution using DRASTIC Model and Fuzzy Logic in Herat City, Afghanistan

Abstract: Groundwater (GW) vulnerability maps have become a standard tool for protecting groundwater resources from pollution because, on the one hand, groundwater represents the main source of drinking water and, on the other hand, concentrations of human activities such as industry, agriculture, and households represent real or potential sources of groundwater contamination. The main objective of this study is to assess the groundwater-vulnerable zones of Herat city, the second fastest growing big city in Afghanistan, using the DRASTIC model and fuzzy logic. DRASTIC is based on seven data layers, i.e., Depth to water, net Recharge, Aquifer media, Soil media, Topography, Impact of the vadose zone, and hydraulic Conductivity, which provide the input to the model. The study shows that 51% of the city's groundwater is highly vulnerable to pollution. Validation showed that the vulnerability map integrated from kriging-interpolated layers has better accuracy than the inverse distance weighting (IDW) method. The study suggests that the proposed model, which assigns rating values to the DRASTIC parameters using a fuzzy inference system, can be an effective tool for local authorities responsible for managing groundwater resources, especially in Afghanistan.

Author 1: Nasir Ahmad Gesim
Author 2: Takeo Okazaki

Keywords: Afghanistan; DRASTIC; de-fuzzification; groundwater; modeling; vulnerability; herat

PDF
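
The DRASTIC index itself is a weighted sum of the seven layer ratings. The sketch below uses the standard weights of the original DRASTIC method; the paper's fuzzy-logic adjustment of the ratings is not reproduced, and the example ratings are invented.

```python
# DRASTIC vulnerability index: weighted sum of seven layer ratings.
# Weights are the standard DRASTIC weights; ratings are on a 1-10 scale.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """ratings: {layer: rating in 1..10} for the seven DRASTIC layers."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# Example cell: shallow water table and permeable vadose zone -> high index.
cell = {"D": 9, "R": 6, "A": 8, "S": 6, "T": 10, "I": 8, "C": 6}
print(drastic_index(cell))  # index ranges from 23 (least) to 230 (most)
```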

Paper 22: CryptoROS: A Secure Communication Architecture for ROS-Based Applications

Abstract: Cyber-attacks are a growing threat to future robots. The shift towards automation has increased the relevance of, and reliance on, robots. Securing robots has been a secondary or tertiary priority, and thus robots are vulnerable to cyber-attacks. Securing robots must become an essential (built-in) part of the design rather than a subsequent (later) add-on. ROS is a widely used and popular open-source framework, and robots using ROS are increasing in popularity. However, ROS is vulnerable to cyber-attacks and needs to be secured before ROS-based robots reach the mass market. This study proposes an architecture to secure ROS using cryptographic mechanisms, addressing the most common ROS security issues. The advantages of our proposed secure architecture, CryptoROS, are that no changes to ROS software libraries and tools are required, that it works with all ROS client libraries (e.g., rospy, roscpp), and that rebuilding nodes is not necessary.

Author 1: Roham Amini
Author 2: Rossilawati Sulaiman
Author 3: Abdul Hadi Abd Rahman Kurais

Keywords: Robotics; ROS; cyber security; cryptography; access control

PDF

Paper 23: Evaluating the Effectiveness of Decision Support System: Findings and Comparison

Abstract: Nowadays, despite the popularity and credibility of Decision Support Systems (DSS), measuring the efficacy of the decisions they produce remains unproven. Previous work identifies the complexities involved in measuring DSS efficiency, which is, most of the time, case dependent. The methods for collecting and analyzing data, building models, deploying models, integrating data and models, and finally taking decisions are some of the major issues related to measuring DSS effectiveness. This paper focuses on measuring the effectiveness of DSS and highlights the issues that still need to be addressed with efficient frameworks. Based on the literature review and discussion presented in Sections I and II, this study proposes a framework and its implementation, and presents how the proposed model improves on previous work. The major finding of this study is that every decision made by a DSS rests on the collected data, as analyzed by the DSS tools, as well as on the developed models. This study therefore illustrates that each component of a DSS plays a vital role in measuring its effectiveness, whatever the case and problem for which the DSS has been built and implemented. In addition, the supporting methods and measuring factors for each component are further findings of this study. Any decision taken by the DSS is evaluated separately in order to measure the effectiveness of the system. The proposed framework thus represents a new framework for decision makers working in any industry.

Author 1: Ayman G. Fayoumi

Keywords: DSS; effectiveness of decisions; framework; measurement phase

PDF

Paper 24: A Study of Mobile Forensic Tools Evaluation on Android-Based LINE Messenger

Abstract: Limitations of forensic tools and of mobile devices' operating systems are two problems for researchers in the mobile forensics field. Nevertheless, testing forensic tools on several devices can be helpful in an investigation; therefore, the evaluation of forensic tools is one gateway to the goals of digital forensics research. Mobile forensics, the branch of digital forensics focusing on data recovery from mobile devices, has problems of analytical ability because of the differing features of forensic tools. In this research, we present studies and techniques on tool capability and evaluate the tools based on digital evidence from LINE analysis. The experiment combined VV methods and NIST standard forensic methods to produce a model of forensic tool evaluation steps. In the experiment, Oxygen Forensic achieved an index of 61.90%, while MOBILedit Forensic achieved the highest index, 76.19%, in messenger application analysis. This research successfully assessed the performance of the forensic tools.

Author 1: Imam Riadi
Author 2: Abdul Fadlil
Author 3: Ammar Fauzan

Keywords: Forensic; investigation; mobile; evaluation; performance

PDF

Paper 25: Moving from Heterogeneous Data Sources to Big Data: Interoperability and Integration Issues

Abstract: Heterogeneous databases now face the emerging challenge of moving towards big data. These databases are ad hoc, complex polyglot systems and semantically annotated NoSQL tools. Integrating them is very challenging because big data analytics integrates human and machine contexts. In this paper, an attempt is made to study heterogeneous databases, their interoperability and integration issues, and the impact of these issues on data analysis. Data science has grown exponentially, and a new paradigm has emerged: the integration of heterogeneous data into big data. Information, knowledge, and decision-making have become easier, but databases have grown in size and become big data.

Author 1: Mohamed Osman Hegazi
Author 2: Dinesh Kumar Saini
Author 3: Kashif Zia

Keywords: Heterogeneous databases; interoperability; integration; big data; analytics and intelligence

PDF

Paper 26: An Evaluation of the Proposed Framework for Access Control in the Cloud and BYOD Environment

Abstract: As the bring your own device (BYOD) to work trend grows, so do the network security risks. This fast-growing trend has huge benefits for both employees and employers. With malware, spyware, and other malicious downloads tricking their way onto personal devices, organizations need to reconsider their information security policies. Malicious programs can be downloaded onto a personal device without the user even knowing, with disastrous results for both the organization and the device: BYODs risk making unauthorized changes to policies and leaking sensitive information into the public domain. A privacy breach can cause a domino effect with huge financial and legal implications and loss of productivity for organizations. This is a difficult challenge, as organizations must consider user privacy and rights together with protecting networks from attacks. This paper evaluates a new architectural framework to control the risks that BYODs pose to organizations. Analysis of a large volume of research shows that previous studies addressed single issues; we integrated parts of these single solutions into a new framework to develop a complete solution for access control. With too many organizations failing to implement and enforce adequate security policies, the process needs to be simpler. This framework reduces system restrictions while enforcing access control policies for BYOD and cloud environments using an independent platform. Preliminary results of the study are positive, with the framework reducing access control issues.

Author 1: Khalid Almarhabi
Author 2: Kamal Jambi
Author 3: Fathy Eassa
Author 4: Omar Batarfi

Keywords: Bring your own device; access control; policy; security

PDF

Paper 27: Model Development for Predicting the Occurrence of Benign Laryngeal Lesions using Support Vector Machine: Focusing on South Korean Adults Living in Local Communities

Abstract: Disease is a consequence of interactions between many complex risk factors rather than of a single cause; it is therefore necessary to develop disease prediction models that use multiple risk factors instead of a single one. The objective of this study was to develop a model for predicting the occurrence of benign laryngeal lesions based on a support vector machine (SVM), using ear, nose, and throat (ENT) data from a national-level survey, and to provide a basis for selecting high-risk groups and preventing voice disorders. The study targeted 16,938 adults (≥19 years) who took the ENT examination among the people who completed the Korea National Health and Nutrition Examination Survey from 2010 to 2012. The predictive power of the Gaussian kernel used in this study was compared with those of linear, polynomial, and sigmoid kernels; moreover, each of the four kernels was run as both C-SVM and Nu-SVM to compare their prediction accuracies. The SVM-based benign laryngeal lesion prediction model could derive both preventive factors and risk factors. The final prediction rate of this SVM, using 479 support vectors, was 97.306%. The fitness results indicated that the difference between C-SVM and Nu-SVM was not large for this model. In terms of kernel type, the prediction accuracy of the Gaussian kernel was the highest and that of the sigmoid kernel the lowest. The results of this study provide an important basis for preventing and managing benign laryngeal lesions.

Author 1: Haewon Byeon

Keywords: Support vector machine; SVM; dysphonia; voice disorder; prediction model; risk factor; data mining

PDF
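
The kernel and formulation comparison can be sketched with scikit-learn, fitting C-SVM (SVC) and Nu-SVM (NuSVC) with each of the four kernels; the feature matrix and the cross-validation setup are placeholders, not the study's configuration.

```python
# Compare C-SVM vs Nu-SVM across four kernels with cross-validation.
from sklearn.svm import SVC, NuSVC
from sklearn.model_selection import cross_val_score

KERNELS = ["rbf", "linear", "poly", "sigmoid"]  # rbf = Gaussian kernel

def compare(X, y):
    results = {}
    for kernel in KERNELS:
        for name, clf in [("C-SVM", SVC(kernel=kernel, gamma="scale")),
                          ("Nu-SVM", NuSVC(kernel=kernel, gamma="scale"))]:
            # Mean 5-fold cross-validated accuracy for this configuration.
            results[(name, kernel)] = cross_val_score(clf, X, y, cv=5).mean()
    return results

# X: ENT survey features for the 16,938 adults; y: benign lesion (0/1).
# for (name, kernel), acc in sorted(compare(X, y).items()):
#     print(f"{name:7s} {kernel:8s} {acc:.3f}")
```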

Paper 28: Construction Project Quality Management using Building Information Modeling 360 Field

Abstract: A quality management process plays a vital role in the success of engineering and construction projects; it needs to be effective and efficient if projects are to be completed on time and within budget. Many construction projects' quality management processes are paper-based, which makes them time-consuming and inefficient. The next generation of Building Information Modeling (BIM) is the BIM-cloud, which can enhance the effectiveness of a quality management process and save an organization time and money. This paper proposes a quality management model based on cloud computing, mobile devices, and the Autodesk BIM 360 Field software, which serves as a platform for gathering and managing quality data and controlling quality. The process is then applied to a real project in Vietnam to verify the benefits and barriers of using BIM 360 Field on a construction project.

Author 1: Phong Thanh Nguyen
Author 2: Thu Anh Nguyen
Author 3: Tin Minh Cao
Author 4: Khoa Dang Vo
Author 5: Vy Dang Bich Huynh
Author 6: Quyen Le Hoang Thuy To Nguyen
Author 7: Phuong Thanh Phan
Author 8: Loan Phuc Le

Keywords: BIM 360 field; cloud computing; project management; quality management

PDF

Paper 29: Task Scheduling Frameworks for Heterogeneous Computing Toward Exascale

Abstract: The race for Exascale Computing has naturally led computer architecture to transition from the multicore era into the heterogeneous era. Many systems ship with integrated CPUs and graphics processing units (GPUs), and various applications need to utilize the execution resources of both, as the unique features of each kind of processing unit (PU) offer significant strengths. Several research studies consider partitioning applications, scheduling their execution, and allocating them onto PU resources; they investigate the important role of optimization and of intelligently scheduling tasks across combined CPU and GPU cores in reaching the performance and power-consumption targets of Exascale Computing. In this paper, the evolution of heterogeneous computing architectures, the approaches and challenges toward achieving Exascale Computing, and the various algorithms and techniques used to partition and schedule tasks are all reviewed. The existing frameworks and runtime systems used to optimize performance and improve energy efficiency on discrete and fused chips toward the objectives of Exascale Computing are also reviewed.

Author 1: Suhelah Sandokji
Author 2: Fathy Eassa

Keywords: Exascale computing; heterogeneous computing; task scheduler framework

PDF

Paper 30: Heterogeneous HW/SW FPGA-Based Embedded System for Database Sequencing Applications

Abstract: Database sequencing applications, including sequence comparison, searching, and analysis, are among the heaviest consumers of computational power and time. Heuristic algorithms suffer from reduced sensitivity, while traditional sequencing methods require searching the whole database to find the best-matched sequences, which demands high computational power and time. This paper introduces a dynamic programming technique based on a measure of similarity between two sequential objects in the database using two components, namely frequency and mean. Additionally, database sequences with the lowest scores in the comparison process are excluded, so that the similarity algorithm between a query sequence and the database sequences is applied only to meaningful parts of the database. The proposed technique was implemented and validated on a heterogeneous HW/SW FPGA-based embedded system platform, partitioned into (1) a hardware part running in the FPGA logic fabric and (2) a software part running on the FPGA's ARM processor. The validation study showed a significant reduction in computation time, accelerating the database sequencing processes by 60% compared to traditional methods.

Author 1: Talal Bonny

Keywords: Database; sequence comparison; dynamic programming; FPGA

PDF

Paper 31: Convolutional Neural Network Hyper-Parameters Optimization based on Genetic Algorithms

Abstract: In machine learning for computer vision applications, the Convolutional Neural Network (CNN) is the most widely used technique for image classification. Despite the efficiency of these deep neural networks, choosing an optimal architecture for a given task remains an open problem. In fact, CNN performance depends on many hyper-parameters, namely network depth, the number of convolutional layers, and the number of filters and their respective sizes. Many CNN structures have been designed manually by researchers and then evaluated to verify their efficiency. In this paper, our contribution is an innovative approach, labeled Enhanced Elite CNN Model Propagation (Enhanced E-CNN-MP), to automatically learn the optimal structure of a CNN. To traverse the large search space of candidate solutions, our approach is based on Genetic Algorithms (GA), meta-heuristics well known for non-deterministic problem resolution. Simulations demonstrate the ability of the designed approach to compute optimal CNN hyper-parameters for a given classification task. The classification accuracy of the CNN designed with the Enhanced E-CNN-MP method exceeds that of public CNNs, even those using the transfer learning technique. Our contribution advances the current state of the art by offering scientists, regardless of their field of research, the ability to design optimal CNNs for any particular classification problem.

Author 1: Sehla Loussaief
Author 2: Afef Abdelkrim

Keywords: Machine learning; computer vision; image classification; convolutional neural network; CNN hyper parameters; enhanced E-CNN-MP; genetic algorithms; learning accuracy

PDF
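
A minimal sketch of a genetic algorithm over CNN hyper-parameters in the spirit of the abstract follows; the gene pool, elitism scheme, and toy fitness function are assumptions, with the fitness standing in for "build a CNN from the chromosome, train it, and return validation accuracy".

```python
# GA over CNN hyper-parameters: selection, crossover, mutation, elitism.
import random
random.seed(0)

GENES = {"layers": [2, 3, 4, 5], "filters": [16, 32, 64, 128],
         "kernel": [3, 5, 7]}

def random_chrom():
    return {g: random.choice(v) for g, v in GENES.items()}

def fitness(c):
    # Placeholder: in the real setting, build a CNN from c, train it,
    # and return the validation accuracy.
    return -abs(c["layers"] - 4) - abs(c["filters"] - 64) / 32.0

def crossover(a, b):
    return {g: random.choice([a[g], b[g]]) for g in GENES}

def mutate(c, rate=0.2):
    return {g: (random.choice(GENES[g]) if random.random() < rate else v)
            for g, v in c.items()}

pop = [random_chrom() for _ in range(10)]
for generation in range(20):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:4]                       # elitism: keep the best candidates
    children = [mutate(crossover(*random.sample(elite, 2)))
                for _ in range(len(pop) - len(elite))]
    pop = elite + children
print(max(pop, key=fitness))              # best hyper-parameter set found
```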

Paper 32: E2-Invisible Watermarking for Protecting Intellectual Rights of Medical Images and Records

Abstract: In today’s digital era, the practice of telemedicine, which involves the transmission of medical images and MyHealthRecord (MHR) data for remote diagnosis in emergencies, has become common, and maintaining the integrity, robustness, authentication, and confidentiality of such patient data is necessary. Many works have shown that digital watermarking is one solution, but it is also known that no single algorithm fulfils all the requirements of the field. Until watermarking techniques become sufficiently robust, encryption can be considered one of the best solutions for protecting the data. Encoding transforms information into another form; in the proposed digital watermarking (DWM) scheme, encoding is combined with encryption and DWM to enhance data protection while maintaining the above constraints. In this paper, DWM for medical images is implemented through a joint combination of the spatial- and frequency-domain techniques Singular Value Decomposition and Integer Wavelet Transform (SVD-IWT), the 64-bit Rivest-Shamir-Adleman (RSA) cryptosystem, and a new encoding procedure. To avoid degrading the medical image, which is essential in the medical field, the data payload should be small; this is achieved by using a quick response (QR) code, which encodes large amounts of information in little space. Finally, the proposed system is compared with traditional methods and evaluated against various image processing and geometric attacks.

Author 1: Kavitha K. J.
Author 2: Dr. B. Priestly Shan

Keywords: Myhealthrecord; SVD; IWT; RSA; QR code; encoding

PDF

Paper 33: A Hybrid Background Subtraction and Artificial Neural Networks for Movement Recognition in Memorizing Quran

Abstract: Movement change over time and variation in object appearance have become interesting topics for research in computer vision. Object behavior can be recognized through movement changes in video; during recognition, the target and the trace of an object must be determined across the sequence of frames. To date, object detection in video has been widely used in areas such as surveillance, robotics, agriculture, health, sports, education, and traffic. This research focuses on the field of education by recognizing the movement of Quantum Maki Quran memorization in video. The purpose of this study is to enhance existing computer vision techniques for detecting the Quantum Maki Quran memorization movement in video. It combines the background subtraction method with artificial neural networks and evaluates the combination to optimize system accuracy. Background subtraction is used for object detection, and backpropagation in artificial neural networks is used for object classification. Nine videos were obtained from three volunteers and divided into six training and three testing videos. The experimental results show a system accuracy of 91.67%. It can be concluded that several factors influence the accuracy, such as video capture conditions, video enhancement, the models, feature extraction, and parameter settings during artificial neural network training.

Author 1: Anton Satria Prabuwono
Author 2: Ismatul Maula
Author 3: Wendi Usino
Author 4: Arif Bramantoro

Keywords: Movement recognition; computer vision; Quran memorization movement; background subtraction; back propagation; artificial neural networks

PDF
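
A sketch of the two-stage pipeline the abstract describes, using OpenCV background subtraction followed by a small backpropagation-trained MLP; the shape features and network size are illustrative assumptions, not the authors' exact design.

```python
# Stage 1: background subtraction isolates the moving region per frame.
# Stage 2: simple shape features feed a backpropagation-trained classifier.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

subtractor = cv2.createBackgroundSubtractorMOG2(history=200)

def frame_features(frame):
    mask = subtractor.apply(frame)          # foreground mask
    mask = cv2.medianBlur(mask, 5)          # suppress salt-and-pepper noise
    ys, xs = np.nonzero(mask > 127)
    if len(xs) == 0:
        return np.zeros(4)                  # no motion in this frame
    # Bounding-box geometry of the moving region as a tiny feature vector:
    # centroid x/y, aspect ratio, and foreground area.
    w, h = xs.max() - xs.min() + 1, ys.max() - ys.min() + 1
    return np.array([xs.mean(), ys.mean(), w / (h + 1e-6), len(xs)])

# X: per-frame features from the six training videos; y: movement labels.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
# clf.fit(X_train, y_train); print(clf.score(X_test, y_test))
```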

Paper 34: Evaluation of Distance Measures for Feature based Image Registration using AlexNet

Abstract: Image registration is a classic problem of computer vision with several applications across areas like defence, remote sensing, and medicine. Feature-based image registration methods have traditionally used hand-crafted feature extraction algorithms, which detect key points in an image and describe them using a region around each point. Such features are matched using a threshold either on distances or on the ratio of distances computed between the feature descriptors. The evolution of deep learning, in particular convolutional neural networks (CNNs), has enabled researchers to address several vision problems such as recognition, tracking, and localization. Outputs of convolutional or fully connected layers of a CNN trained for applications like visual recognition have proved effective when used as features in other applications such as retrieval. In this work, a deep CNN, AlexNet, is used in place of hand-crafted features for feature extraction in the first stage of image registration. However, a suitable distance measure and matching method must be identified for effective results. Several distance metrics are evaluated within the frameworks of nearest neighbour and nearest neighbour ratio matching using a benchmark dataset. Evaluation is done by comparing matching and registration performance using metrics computed from ground truth.

Author 1: K. Kavitha
Author 2: B. Sandhya
Author 3: B. Thirumala Rao

Keywords: Distance measures; deep learning; feature detection; feature descriptor; image matching

PDF
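
A sketch of AlexNet-based description and nearest-neighbour-ratio matching follows; the torchvision model, the chosen layer, and the ratio threshold are assumptions standing in for the paper's setup.

```python
# AlexNet activations as patch descriptors, matched by nearest-neighbour
# ratio under a configurable distance metric.
import torch
import torchvision.models as models
from scipy.spatial.distance import cdist

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def describe(patches):
    """patches: (N, 3, 224, 224) tensor of image patches around keypoints."""
    with torch.no_grad():
        feats = alexnet.features(patches).flatten(1)  # conv-stack output
    return feats.numpy()

def match_nn_ratio(desc1, desc2, metric="euclidean", ratio=0.8):
    d = cdist(desc1, desc2, metric=metric)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = row.argsort()[:2]       # nearest and second nearest
        if row[j1] < ratio * row[j2]:    # keep only distinctive matches
            matches.append((i, j1))
    return matches
```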

Paper 35: Greedy Algorithms to Optimize a Sentence Set Near-Uniformly Distributed on Syllable Units and Punctuation Marks

Abstract: An optimum sentence set that is near-uniformly distributed over syllable units and punctuation marks is important for developing a syllable-based automatic speech recognition (ASR) system. It is usually extracted from a mother set of millions of unique sentences using the Modified Least-to-Most (LTM) Greedy algorithm, which is capable of minimizing the number of syllables but ignores the distribution of their frequencies. Hence, two schemes are proposed to minimize the number of syllables while also distributing their frequencies near-uniformly. Testing on a mother set of 10 million Indonesian sentences shows that both schemes outperform the Modified LTM Greedy algorithm for two syllable units: monosyllables and bisyllables.

Author 1: Bagus Nugroho Budi Nurtomo
Author 2: Suyanto

Keywords: read-speech corpus; optimum sentence set; syllable; punctuation marks; Modified Least-to-Most Greedy

PDF
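
The core of a least-to-most greedy selection can be sketched as follows; syllabification is assumed to be done already (each sentence is a list of syllables), and the scoring rule is a simplified stand-in for the paper's two schemes.

```python
# Greedy sentence selection: repeatedly pick the sentence that covers the
# most syllables still missing from the selected set.
def greedy_select(sentences, target_syllables, max_sentences=1000):
    selected, covered = [], set()
    remaining = list(sentences)
    while remaining and len(selected) < max_sentences:
        if covered >= target_syllables:
            break                        # all target syllables covered
        # Score = number of still-uncovered syllables a sentence adds.
        best = max(remaining, key=lambda s: len(set(s) - covered))
        if not set(best) - covered:
            break                        # no sentence adds new syllables
        selected.append(best)
        covered |= set(best)
        remaining.remove(best)
    return selected

corpus = [["sa", "ya"], ["ma", "kan"], ["sa", "ma"], ["bu", "ku", "ya"]]
target = {"sa", "ya", "ma", "kan", "bu", "ku"}
print(greedy_select(corpus, target))
```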

Paper 36: Data Governance Cloud Security Checklist at Infrastructure as a Service (IaaS)

Abstract: A security checklist is an important element in measuring the level of computing security, especially in cloud computing, where vulnerabilities are a major concern because they lead to security issues. While security awareness and training can educate users about the severe impact of malware, implementing data governance and a security checklist can also help reduce the risk of attack: the checklist measures the security level, and data governance helps manage data correctly with proper procedures. Due to increasing threats and attacks, service providers and service consumers need to adhere to guidelines and/or checklists when measuring the security level of services and must be prepared for unforeseen circumstances, especially on the IaaS platform. As the IaaS platform lies at the lowest level of cloud computing, where data are stored, it is vital that IaaS security be given serious consideration to prevent not only data breaches but also data loss. The objective of this paper is to discuss the implementation of a security checklist at the IaaS layer. The paper also discusses several security assessments and checklists developed by previous researchers and professional bodies, as well as the results of interview sessions conducted by the authors with several data centers (DCs) and experts regarding the implementation of security measures in small cloud DCs.

Author 1: Kamariah Abu Saed
Author 2: Norshakirah Aziz
Author 3: Said Jadid Abdulkadir
Author 4: Noor Hafizah Hassan
Author 5: Izzatdin A Aziz

Keywords: IaaS; security checklist; guidelines; threats; cloud computing

PDF

Paper 37: An Investigational Study and Analysis of Cloud-based Content Delivery Network: Perspectives

Abstract: Content management on the internet relies on a major technical strategy called the Content Delivery Network (CDN), whose design and deployment must ensure optimal Quality of Service (QoS). This paper outlines the taxonomy of CDNs along with their typical architecture. Recent advances in content-hungry smartphones and smart devices require more efficient and reliable mechanisms for cost-effective content delivery irrespective of bottleneck constraints, which has led to redesigning the entire CDN architecture on the cloud as the cloud CDN (CCDN), or as a new business model of CCDN as a service. The design challenges of CCDN, along with the evolved architecture, are discussed in this paper.

Author 1: Suman Jayakumar
Author 2: Prakash .S
Author 3: C.B Akki

Keywords: Content delivery network; cloud computing; distribution network; mobility; scalability; distribution

PDF

Paper 38: Adapted Speed Mechanism for Collision Avoidance in Vehicular Ad hoc Networks Environment

Abstract: Failure to respect the safety distance between vehicles is the cause of many road accidents. This distance cannot be estimated arbitrarily, because it is governed by physical rules. The higher the speed, the longer the stopping distance, especially in dangerous situations; thus, the distance between two vehicles must be calculated accordingly. In this paper, we present a mechanism called the Adapted Speed Mechanism (ASM) that adapts speed to keep the necessary safety distance between vehicles. This mechanism is based on VANET network operation and Multi-Agent System integration to ensure communication and collaboration between vehicles. Real-time calculations are therefore necessary to make adequate and relevant decisions.
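
The physics behind the safety distance can be illustrated with the standard stopping-distance relation (reaction distance plus braking distance). The reaction time and friction coefficient below are assumed example values, not parameters from the paper:

    # Stopping distance = reaction distance + braking distance:
    #   d = v * t_react + v**2 / (2 * mu * g)
    def stopping_distance(v_mps, t_react=1.0, mu=0.7, g=9.81):
        """v_mps: speed in m/s; t_react: reaction time in s;
        mu: tyre-road friction coefficient (assumed values)."""
        return v_mps * t_react + v_mps ** 2 / (2 * mu * g)

    for kmh in (50, 90, 130):
        v = kmh / 3.6
        print(f"{kmh} km/h -> safety distance ~ {stopping_distance(v):.1f} m")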

Author 1: Said Benkirane
Author 2: Ahmed Jadir

Keywords: VANET; multi-agent systems; safety distances; stopping distance; JADE framework

PDF

Paper 39: Software Components’ Coupling Detection for Software Reusability

Abstract: Most software system design and modeling techniques concentrate on capturing the functional aspects that comprise a system's architecture; non-functional aspects are rarely considered. One of the most important aspects of a software component is reusability. Software reusability may be understood by identifying components' dependence, which can be measured by measuring the coupling between a system's components. In this paper, an approach to detecting the coupling between a software system's components is introduced for the purpose of identifying software components' reusability, which may help in refining the system design. The proposed approach uses a dynamic notion of the sequence diagram to understand the dynamic behavior of a software system. The notion of data and control dependence is used to detect the dependences among software components; a dependence is identified where one component contributes to the output computation of another component. The results of the experiments show that the proposed algorithm can help software engineers understand the dependences among software components and optimize the software system model by eliminating unnecessary dependences to enhance the components' cohesiveness. Such detection provides a better understanding of the software system model in terms of its components' dependences and their influence on reusability, since eliminating them may enhance software reusability.

Author 1: Zakarya A. Alzamil

Keywords: Software component coupling; software component dependence; software component reusability; components interdependence; components dependence testing

PDF

Paper 40: Analysis of Coauthorship Network in Political Science using Centrality Measures

Abstract: In the recent era, data networks have been growing massively and taking the shape of complex structures. Data scientists try to analyze such complex networks and utilize them to understand a network's structure in a meaningful way. There is a need to detect and identify such complex networks in order to know how they provide communication across their complex structure. Social network analysis provides methods to explore and analyze such networks using graph theory, network properties, and community detection algorithms. In this paper, an analysis of the co-authorship network of the Public Relations and Public Administration subjects of the Microsoft Academic Graph (MAG) is presented, using common centrality measures. The authors belong to different research and academic institutes around the world. Cohesive groups of authors have been identified and ranked on the basis of centrality measures such as betweenness, degree, PageRank, and closeness. Experimental results show the discovery of authors who are good in a specific domain, have strong field knowledge, and maintain collaboration among their peers in the fields of Public Relations and Public Administration.
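
The centrality measures named here are standard graph metrics; a minimal sketch using the networkx library on a toy co-authorship graph (the edge list is invented for illustration) might look like this:

    import networkx as nx

    # Toy co-authorship graph: nodes are authors, edges are joint papers.
    G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])

    # The four centrality measures used to rank authors in the paper.
    rankings = {
        "degree": nx.degree_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
        "closeness": nx.closeness_centrality(G),
        "pagerank": nx.pagerank(G),
    }
    for name, scores in rankings.items():
        top = max(scores, key=scores.get)
        print(f"{name:12s} top author: {top} ({scores[top]:.3f})")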

Author 1: Adeel Ahmed
Author 2: Muhammad Fahad Khan
Author 3: Muhammad Usman
Author 4: Khalid Saleem

Keywords: Social networks; undirected graph; centrality measures; community detection; data visualization

PDF

Paper 41: A Study of Retrieval Methods of Multi-Dimensional Images in Different Domains

Abstract: Huge numbers of multi-dimensional images are being created, and most of them are available on the internet free of cost. 3D images have three characteristics, namely width, height, and depth. Images created in 3D can describe geometry in terms of 3D co-ordinates; these co-ordinates make it easier and more accurate to extract an object from the image. In this paper, we present a review of multi-dimensional image retrieval, the process of extracting relevant 2D or 3D images from a huge database. To perform image retrieval on large databases, several methods such as text-based, content-based, annotation-based, semantic-based, and sketch-based retrieval have been used. Image retrieval techniques are mostly used in fields such as digital libraries, medicine, and forensic science. A systematic literature review is given for image retrieval methods reported from 2010 to 2017. The aim of this article is to show the various concepts and efforts of different authors on image retrieval techniques.

Author 1: Shruti Garg

Keywords: Image retrieval techniques; 3D image retrieval; image retrieval survey

PDF

Paper 42: HOG-AdaBoost Implementation for Human Detection Employing FPGA ALTERA DE2-115

Abstract: A human detection system using the Histogram of Oriented Gradients (HOG) feature and an AdaBoost classifier (HOG-AdaBoost) on the FPGA ALTERA DE2-115 is presented in this paper. This work is an expanded version of our previous study. This paper discusses: 1) the HOG performance in detecting humans in static images from other points of view (30, 40, 50, 60, and up to 70 degrees); 2) an FPS test with various image sizes (320 x 240, 640 x 480, 800 x 600, and 1280 x 1024); 3) re-measurement of the FPGA's power consumption; and 4) simulation of the architecture at RTL. We used three databases for testing, i.e., INRIA, MIT, and Daimler.
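
For readers unfamiliar with the HOG descriptor, a minimal software sketch using scikit-image is given below; the parameters are common defaults, not necessarily those of the FPGA design:

    from skimage import color, data
    from skimage.feature import hog

    # Grayscale test image; in the paper's setting this would be a camera frame.
    image = color.rgb2gray(data.astronaut())

    # Gradient-orientation histograms over 8x8-pixel cells, block-normalized.
    features = hog(image, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm="L2-Hys")
    print(features.shape)  # flat descriptor fed to a classifier (e.g. AdaBoost)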

Author 1: Trio Adiono Adiono
Author 2: Kevin Shidqi Prakoso
Author 3: Christoporus Deo Putratama
Author 4: Bramantio Yuwono
Author 5: Syifaul Fuada

Keywords: FPGA; human detection; adaboost classifier; ALTERA DE2-115; Histogram Oriented Gradients (HOG) feature

PDF

Paper 43: Implementation of Intelligent Automated Gate System with QR Code

Abstract: This paper describes a QR code-based automated gate system. The aim of the research is to develop and implement a medium-security gate system, especially for small companies that cannot afford to install high-tech auto-gate systems. IAGS is a system that uses a valid staff QR code pass card to activate the gate without triggering the alarm. It is developed to connect to the internet and provide a real-time email notification if any unauthorized activity is detected. Besides that, it is also designed to record all the incoming and outgoing activities of all staff. All QR code pass cards generated for staff are encrypted to provide data integrity. The system is built from components such as a PIR motion sensor, servo motor, Arduino microcontroller, Piezo buzzer, and camera. The software is implemented using VB.NET, and QR recognition is about 99% accurate.
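
A minimal Python sketch of the pass-card idea is shown below. It protects the payload with an HMAC integrity tag rather than encryption, the secret key and payload format are invented for illustration, and the paper's own implementation is in VB.NET:

    import hashlib
    import hmac
    import qrcode  # third-party library: pip install qrcode

    SECRET_KEY = b"gate-server-secret"   # assumed shared secret

    def make_pass_payload(staff_id: str) -> str:
        # Append an HMAC tag so the gate can verify the card's integrity.
        tag = hmac.new(SECRET_KEY, staff_id.encode(), hashlib.sha256).hexdigest()
        return f"{staff_id}|{tag}"

    def verify_pass_payload(payload: str) -> bool:
        staff_id, tag = payload.rsplit("|", 1)
        expected = hmac.new(SECRET_KEY, staff_id.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected)

    payload = make_pass_payload("STAFF-0042")
    qrcode.make(payload).save("pass_card.png")  # printable pass card
    print(verify_pass_payload(payload))         # True at the gate side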

Author 1: Erman Hamid
Author 2: Lim Chong Gee
Author 3: Nazrulazhar Bahaman
Author 4: Syarulnaziah Anawar
Author 5: Zakiah Ayob
Author 6: Akhdiat Abdul Malek

Keywords: Component; internet of things; gate system; VB.NET; QR code

PDF

Paper 44: Missing Values Imputation using Similarity Matching Method for Brainprint Authentication

Abstract: This paper proposes a similarity-matching imputation method to deal with missing values in electroencephalogram (EEG) signals. EEG signals with unusually high amplitude can be considered noise, and they are normally removed. The occurrence of missing values after this artefact rejection process increases the complexity of computational modelling due to incomplete data input for model training. The fundamental concept of the proposed similarity-matching imputation method is founded on the assumption that similar stimulation of a particular subject will produce comparable EEG signal responses over the related EEG channels. Hence, we replace the missing values using the highest-similarity amplitude measure across different trials. Wavelet phase stability (WPS) was then used to evaluate the performance of the proposed method, since WPS portrays the signal information better than the amplitude measure in this situation. A statistical paired-sample t-test was used to compare the performance of the proposed similarity-matching imputation method with the preceding mean-substitution imputation method; a lower mean difference indicates a better approximation of the imputed data to its original form. The proposed method is able to treat 9.75% more missing-value trials, with significantly better imputation values, than the mean-substitution method. Future work will focus on evaluating the robustness of the proposed method in dealing with different rates of missing data.
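
A simplified numpy sketch of the similarity-matching idea follows; the similarity metric and data are invented stand-ins, and the paper's exact procedure may differ. Missing samples in one trial are filled from the most similar complete trial:

    import numpy as np

    rng = np.random.default_rng(0)
    trials = rng.standard_normal((5, 100))        # 5 trials x 100 samples
    target = trials[0].copy()
    target[20:30] = np.nan                        # a rejected (missing) segment

    ok = ~np.isnan(target)                        # samples still observed
    # Correlation with each candidate trial on the overlapping samples.
    sims = [np.corrcoef(target[ok], t[ok])[0, 1] for t in trials[1:]]
    best = trials[1:][int(np.argmax(sims))]

    imputed = np.where(np.isnan(target), best, target)
    print(f"filled {(~ok).sum()} samples from the most similar trial")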

Author 1: Siaw-Hong Liew
Author 2: Yun-Huoy Choo
Author 3: Yin Fen Low

Keywords: Similarity matching; data imputation; wavelet phase stability; missing values; artefact rejection

PDF

Paper 45: Topology-Aware Mapping Techniques for Heterogeneous HPC Systems: A Systematic Survey

Abstract: At the present time, modern high-performance computing (HPC) platforms consist of heterogeneous computing devices connected through complex hierarchical networks. Moreover, as HPC moves towards the exascale era, both the number of nodes and the number of cores within each node are increasing; as a consequence, communication costs and data movement are increasing. Efficient topology-aware process mapping has therefore become vital to optimizing data locality management in order to improve system performance and energy consumption. It also decreases the communication cost of processes by matching the application's virtual topology (exploited by the system for assigning processes to physical processors) to the target underlying hardware architecture, called the physical topology, and it improves locality, one of the most challenging issues faced by current parallel applications. In this survey paper, we study various topology-aware mapping techniques and algorithms.

Author 1: Saad B. Alotaibi
Author 2: Fathy alboraei

Keywords: Virtual topology; physical topology; topology-aware mapping; parallel applications; communication pattern

PDF

Paper 46: Detecting and Classifying Crimes from Arabic Twitter Posts using Text Mining Techniques

Abstract: Crime analysis has become a critical area in helping law enforcement agencies to protect civilians. As a result of a rapidly increasing population, crime rates have increased dramatically, and appropriate analysis has become a time-consuming effort. Text mining is an effective tool that may help solve this problem by classifying crimes in an effective manner. The proposed system aims to detect and classify crimes in Twitter posts written in Arabic, one of the most widespread languages today. In this paper, classification techniques are used to detect crimes and identify their nature using different classification algorithms. The experiments evaluate different algorithms, such as SVM, DT, CNB, and KNN, in terms of accuracy and speed in the crime domain. Different feature extraction techniques are also evaluated, including root-based stemming, light stemming, and n-grams. The experiments revealed the superiority of n-grams over the other techniques. Specifically, the results indicate the superiority of SVM with tri-grams over the other classifiers, with 91.55% accuracy.
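
A minimal scikit-learn sketch of the kind of pipeline evaluated here pairs character tri-gram TF-IDF features with a linear SVM; the toy training data below is invented, whereas a real system would train on labelled Arabic tweets:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Invented toy data; real training data would be labelled Arabic tweets.
    texts = ["theft reported downtown", "assault near the market",
             "car stolen overnight", "fight broke out at the mall"]
    labels = ["theft", "assault", "theft", "assault"]

    # Character tri-grams cope well with rich morphology (e.g. Arabic).
    clf = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(3, 3)),
        LinearSVC(),
    )
    clf.fit(texts, labels)
    print(clf.predict(["bike stolen near the park"]))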

Author 1: Hissah AL-Saif
Author 2: Hmood Al-Dossari

Keywords: Crimes; text mining; classification; feature extraction techniques; Arabic posts; Twitter

PDF

Paper 47: Pipeline Hazards Resolution for a New Programmable Instruction Set RISC Processor

Abstract: The work presented in this paper is part of a project that aims to design and implement a hardwired programmable processor. A 32-bit RISC processor with a customizable ALU (Arithmetic and Logic Unit) is designed, and the pipeline technique is then implemented in order to reach better performance. However, the use of this technique can lead to several problems, called hazards, that can affect the correct execution of a program. In this context, this paper identifies and analyzes all the different hazards that can occur in this processor's pipeline stages. Detailed solutions are then proposed, implemented, and validated.
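
To illustrate the kind of hazard being resolved, the small Python sketch below flags read-after-write (RAW) data hazards between nearby instructions in a simplified pipeline; the instruction format and the hazard window are simplified assumptions, not the paper's design:

    # Each instruction: (dest_register, [source_registers]).
    program = [
        ("r1", ["r2", "r3"]),   # r1 = r2 + r3
        ("r4", ["r1", "r5"]),   # r4 = r1 - r5  -> RAW on r1
        ("r6", ["r7", "r8"]),
    ]

    HAZARD_WINDOW = 2  # simplified: result ready 2 instructions later w/o bypass

    for i, (dest, _) in enumerate(program):
        for j in range(i + 1, min(i + 1 + HAZARD_WINDOW, len(program))):
            _, sources = program[j]
            if dest in sources:
                print(f"RAW hazard: instr {j} reads {dest} written by instr {i}"
                      " -> stall or bypass/forward")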

Author 1: Hajer Najjar
Author 2: Riad Bourguiba
Author 3: Jaouhar Mouine

Keywords: Processor; RISC; hardware; instruction set; pipeline; hazards; branch predictor; bypass

PDF

Paper 48: Automatic Short Answer Scoring based on Paragraph Embeddings

Abstract: Automatic scoring systems for students' short answers can relieve instructors of the burden of grading large numbers of test questions and facilitate performing even more assessments during lectures, especially when the number of students is large. This paper presents a supervised learning approach for automatic short answer scoring based on paragraph embeddings. We review significant deep learning-based models for generating paragraph embeddings and present a detailed empirical study of how the choice of paragraph embedding model influences accuracy in the automatic scoring task.
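
The core idea can be sketched in a few lines: embed the student answer and a reference answer as vectors, then map their similarity to a score. The toy "embeddings" below are invented averaged word vectors; the paper instead compares real deep-learning paragraph embedding models:

    import numpy as np

    rng = np.random.default_rng(1)
    # Invented word vectors; a real system would use a trained embedding model.
    vocab = {w: rng.standard_normal(16) for w in
             "gravity pulls objects toward earth mass attracts".split()}

    def embed(text):
        vecs = [vocab[w] for w in text.lower().split() if w in vocab]
        return np.mean(vecs, axis=0)   # crude paragraph embedding: mean pooling

    def score(student, reference, max_points=5.0):
        a, b = embed(student), embed(reference)
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max_points * max(cos, 0.0)  # similarity scaled to points

    print(round(score("gravity pulls objects toward earth",
                      "mass attracts objects toward earth"), 2))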

Author 1: Sarah Hassan
Author 2: Aly A. Fahmy
Author 3: Mohammad El-Ramly

Keywords: Automatic scoring; short answer; Pearson correlation coefficient; RMSE; deep learning

PDF

Paper 49: Improving Recommendation Techniques by Deep Learning and Large Scale Graph Partitioning

Abstract: Recommendation is a crucial technique for social networking sites and business organizations. It provides suggestions based on users' personalized interests, pointing them to the movies, books, and topics most suitable for them; applied intelligently, it can improve user effectiveness and business revenue by approximately 30%. Social recommendation systems for traditional datasets have already been analyzed in detail by researchers and practitioners, and several researchers have improved recommendation accuracy and throughput using various innovative approaches. Deep learning, a machine learning technique in which hidden layers are used to improve outcomes, has been proven to provide significant improvements in image processing and object recognition. In traditional recommendation techniques, sparsity and cold start are limitations caused by scarce user-item interactions; these can be alleviated by deep learning models, which can enrich the user-item matrix through feature learning. In this paper, various models are explained together with their applications, so that readers can identify the deep learning model best suited to their recommendation needs and incorporate it into their techniques. When these recommendation systems are deployed at large data scale, accuracy degrades significantly; the social big graph is the most suitable representation for large-scale social data, and further improvements to recommendation through large-scale graph partitioning are explained. MAE (Mean Absolute Error) and RMSE (Root Mean Squared Error) are used as evaluation metrics to demonstrate better recommendation accuracy. Epinions, MovieLens, and FilmTrust are presented as the most commonly used datasets for recommendation.
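
The two evaluation metrics used throughout are simple to state; a minimal numpy sketch over invented rating predictions:

    import numpy as np

    actual = np.array([4.0, 3.5, 5.0, 2.0])       # true ratings (invented)
    predicted = np.array([3.8, 3.9, 4.5, 2.4])    # model output (invented)

    mae = np.mean(np.abs(actual - predicted))           # mean absolute error
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))  # root mean squared error
    print(f"MAE={mae:.3f}  RMSE={rmse:.3f}")  # lower is better for both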

Author 1: Gourav Bathla
Author 2: Rinkle Rani
Author 3: Himanshu Aggarwal

Keywords: Social big data; social recommendation; deep learning; graph partitioning; social trust

PDF

Paper 50: Parameters Affecting Underwater Channel Communication Performance

Abstract: Underwater (acoustic) propagation is characterized by three major factors: attenuation that increases with signal frequency, time-varying multipath propagation, and the low speed of sound. The background noise, often characterized as Gaussian, is not white but has a decaying power spectral density. Channel capacity depends on the distance and may be extremely limited. As acoustic propagation is best supported at low frequencies, an acoustic communication system is inherently wideband, and the bandwidth is not negligible with respect to its center frequency. The channel has a sparse impulse response, where each physical path acts as a time-varying low-pass filter, and motion introduces Doppler spreading and shifting. Surface waves, internal turbulence, and fluctuations in sound speed contribute to random signal variations. To date, there are no standardized models for acoustic channel fading, and experimental measurements are often made to assess the statistical properties of the underwater channel.
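
One widely used empirical model for the frequency-dependent absorption mentioned above is Thorp's formula; a short sketch (frequency in kHz, absorption in dB/km) follows. Note this is a standard textbook model, not one proposed in the paper:

    # Thorp's empirical absorption formula: f in kHz, result in dB/km.
    def thorp_absorption(f_khz):
        f2 = f_khz ** 2
        return (0.11 * f2 / (1 + f2)
                + 44 * f2 / (4100 + f2)
                + 2.75e-4 * f2
                + 0.003)

    for f in (1, 10, 50, 100):
        print(f"{f:>3} kHz: {thorp_absorption(f):7.2f} dB/km")
    # Absorption grows rapidly with frequency, which is why underwater
    # acoustic links favour low carrier frequencies.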

Author 1: Sheeraz Ahmed
Author 2: Malik Taimur Ali
Author 3: Saqib Shahid Rahim
Author 4: Zahid Farid
Author 5: Owais Amanullah Khan
Author 6: Zeeshan Najam

Keywords: UWSN; doppler effect; attenuation; noise; salinity

PDF

Paper 51: Resource Management in Cloud Data Centers

Abstract: Vast volumes of big data are a consequence of data arriving from diverse sources. Conventional computational frameworks and platforms are incapable of processing complex big data sets at a fast pace. Cloud data centers, with their massive virtual and physical resources and computing platforms, can support big data processing. In addition, the most well-known framework, MapReduce, in conjunction with cloud data centers provides fundamental support to scale up and speed up the classification, investigation, and processing of huge, massive, and complex big data sets. Inappropriate handling of cloud data center resources will not yield significant results and eventually leads to poor overall system utilization. This research aims at analyzing and optimizing the number of compute nodes following the MapReduce framework on computational resources in a cloud data center, focusing on the key issue of computational overhead due to inappropriate parameter selection and on reducing overall execution time. The evaluation has been carried out experimentally by varying the number of compute nodes, that is, map and reduce units. The results show evidently that appropriate handling of compute nodes has a significant effect on the overall performance of the cloud data center in terms of total execution time.
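
The map and reduce units being tuned can be pictured with a minimal in-process word-count sketch, where n_reduce controls how intermediate keys are partitioned across reduce units; this is a didactic simulation, not the paper's cluster setup:

    from collections import defaultdict

    docs = ["big data in the cloud", "cloud data centers process big data"]
    n_reduce = 2   # number of reduce units (the kind of parameter being tuned)

    # Map phase: each document emits (word, 1) pairs.
    mapped = [(w, 1) for doc in docs for w in doc.split()]

    # Shuffle: hash-partition keys across the reduce units.
    partitions = [defaultdict(list) for _ in range(n_reduce)]
    for word, count in mapped:
        partitions[hash(word) % n_reduce][word].append(count)

    # Reduce phase: each unit sums the counts of its keys.
    counts = {w: sum(v) for part in partitions for w, v in part.items()}
    print(counts)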

Author 1: Aisha Shabbir
Author 2: Kamalrulnizam Abu Bakar
Author 3: Raja Zahilah Raja Mohd. Radzi
Author 4: Muhammad Siraj

Keywords: Big data; cloud data center; MapReduce; resource utilization

PDF

Paper 52: A Multi-Energetic Modeling Approach based on Bond Graph Applied to In-Wheel-Motor Drive System

Abstract: This paper proposes a multi-energetic modeling approach based on the Bond Graph tool to model a mechatronic system. This approach allows a better understanding of the real behavior of such a system and expresses the interactions between its elements and their environments. Firstly, the dynamic model of the in-wheel-motor drive system is built using the Bond Graph tool, which is well suited to multi-energetic modeling of systems in which several types of energy are involved. Secondly, the control system is established, based on the pulse width modulation (PWM) technique. Finally, the dynamic model is coupled to the control system. Both are then successfully implemented and simulated in the 20-Sim environment. The simulation results demonstrate the performance and efficiency of the adopted tool, not only for dynamic modeling of synergetic systems but also for elaborating their control systems.

Author 1: Sihem Dridi
Author 2: Ines Ben Salem
Author 3: Lilia El Amraoui

Keywords: Multi-energetic approach; bond graph tool; PWM; 20-Sim environment; in-wheel motor drive system; mechatronic system

PDF

Paper 53: Zynq FPGA based and Optimized Design of Points of Interest Detection and Tracking in Moving Images for Mobility System

Abstract: In this paper, an FPGA-based mobile feature detection and tracking solution is proposed for complex video processing systems. The presented algorithms include feature (corner) detection and a robust memory allocation solution for tracking corners in real time using the extended Kalman filter. The target implementation environment is a Xilinx Zynq SoC FPGA. Using the HW/SW partitioning flexibility of the Zynq, the performance of the dual-core ARM processor and the hardware accelerators generated by the Xilinx SDSoC and Vivado HLS tools improve the system's ability to process video accurately at a high frame rate. Several original innovations improve the processing time of the whole system (detection and tracking) by 50%, as shown in experimental validation (tracking of visually impaired users during their outdoor navigation).
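
On the software side, the corner detection stage corresponds to the classic Harris & Stephens detector; a minimal OpenCV sketch is given below, with common default parameters rather than the paper's hardware configuration:

    import cv2
    import numpy as np

    # Synthetic grayscale frame as a stand-in for a camera image.
    rng = np.random.default_rng(5)
    gray = np.float32(rng.random((240, 320)) * 255)

    # Harris response: neighbourhood size 2, Sobel aperture 3, k = 0.04.
    response = cv2.cornerHarris(gray, 2, 3, 0.04)

    # Strong responses become corner candidates handed to the Kalman tracker.
    corners = np.argwhere(response > 0.01 * response.max())
    print(f"{len(corners)} corner candidates detected")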

Author 1: Abdelkader BEN AMARA
Author 2: Mohamed ATRI
Author 3: Edwige PISSALOUX
Author 4: Richard GRISEL

Keywords: Feature detection; harris & stephens corner detector; tracking; extended kalman filter; HW/SW partitioning; zynq SoC; computer vision; memory access; ARM A9; HLS; interlacing; blanking; progressive video

PDF

Paper 54: Information Processing in EventWeb through Detection and Analysis of Connections between Events

Abstract: Information over the Web is rapidly becoming event-centric, with the next age of the WWW projected to be an EventWeb in which nodes are interconnected through diverse types of links. These nodes represent events carrying informational and experiential content, and the analysis of these events has a substantial semantic impact on the enhancement of information search, visualization, and story link detection. Information about the semantics of EventWeb connections is also important for event planning and web management tasks. In this paper, we devise and implement an event algebra for the detection and analysis of event connections. Compared to traditional solutions, we process both context-match operators and analytical operators, cater for all event information attributes, and define the strength of connections. We implement a tool to evaluate our algebra over events occurring in the academic domain, demonstrating almost perfect precision and recall for context-match operators and high precision and recall for analytical operators.

Author 1: Tariq Mahmood
Author 2: Shaukat Wasi
Author 3: Khalid Khan
Author 4: Syed Hammad Ahmed
Author 5: Zubair. A. Shaikh

Keywords: EventWeb; information processing; event algebra; operators; link detection; link analysis; information analysis; context-match

PDF

Paper 55: Liver Extraction Method from Magnetic Resonance Cholangio-Pancreatography (MRCP) Images

Abstract: Liver extraction from medical images such as CT scans and MR images is a challenging task. There are many manual, semi-automatic, and automatic methods available to extract the liver from computerized tomography (CT) scan images and MR images. However, no method is available in the literature to extract the liver from Magnetic Resonance Cholangio-pancreatography (MRCP) images. Extracting the liver accurately from MRCP images is needed so that physicians can diagnose disease easily and plan preoperative liver surgery accordingly. In this paper, we propose a liver extraction method based on the graph cut algorithm for MRCP images. The experimental results show that the proposed method is very effective for liver extraction from MRCP images.
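
As an accessible illustration of graph-cut segmentation, OpenCV's GrabCut (a graph-cut-based segmenter) can be applied as below; the synthetic image and rough bounding box are invented, and the paper's MRCP-specific pipeline will differ:

    import cv2
    import numpy as np

    # Synthetic image: a brighter noisy region on a dark noisy background,
    # standing in for an organ region in a medical slice.
    rng = np.random.default_rng(6)
    img = rng.integers(0, 60, (200, 300, 3)).astype(np.uint8)
    img[60:160, 80:220] += rng.integers(120, 180, (100, 140, 3)).astype(np.uint8)

    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal GMM state
    fgd_model = np.zeros((1, 65), np.float64)
    rect = (60, 40, 180, 140)                   # rough box around the region

    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    region = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    print(region.sum(), "pixels extracted")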

Author 1: Sajid Ur Rehman Khattak
Author 2: Dr. Mushtaq Ali
Author 3: Faqir Gul
Author 4: Nadir Hussian Khan
Author 5: Amanullah Baloch
Author 6: M. Shoaib Ahmed

Keywords: Liver extraction; graph cut algorithm; MRCP images; liver surgery; medical images; liver mask; adaptive thresholding method

PDF

Paper 56: An Enhanced Method for Detecting the Shaded Images of the Car License Plates based on Histogram Equalization and Probabilities

Abstract: Shadow is one of the major and most significant challenges for detection algorithms that track objects such as license plates. The quality of images captured by cameras is influenced by weather conditions, low ambient light, and low camera resolution. Shadows in images reduce the reliability of the device's vision algorithms as well as the visual quality of the images. Previous papers indicate that no effective method has been presented to improve the license plate detection accuracy of shaded images. In other words, the methods presented so far for automatic license plate detection in shadowed images use a combination of color and texture features of the image; all of them require sufficient light in the image to detect the shadow frame and the image texture, a condition that does not hold for most regular images captured by road cameras. To solve this problem, an improved license plate detection method is presented in this research, which can effectively detect the license plate area in shadowed images. It is a contrast-improving method that utilizes a dual binary method for automatic plate detection and is introduced to analyze interior images with low contrast as well as night shots and blurred and shadowed images. In this method, the histogram of the image is first calculated for each dimension, and the probability of each pixel in the whole image is then obtained. After calculating the cumulative distribution of the pixels and substituting it back into the image, the shadow can easily be removed from the image. This new detection method was tested and simulated on 1000 images of vehicles under different conditions. The results indicated detection accuracies of 90.30, 97.87, and 98.70 percent for license plate detection on three databases: the University of Zagreb, Numberplates.com, and the National Technical University of Athens, respectively. Comparing the performance of the proposed method with two similar recent methods, namely Hommos and Azam, shows average improvements of 26.70 and 72.95 percent in plate detection and of 32.38 and 36.53 percent in the time required for rapid and correct license plate detection, even in shaded images.
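
The histogram/CDF step described here is classic histogram equalization; a compact numpy sketch on an assumed 8-bit grayscale image:

    import numpy as np

    rng = np.random.default_rng(2)
    img = rng.integers(0, 120, size=(64, 64), dtype=np.uint8)  # dark test image

    # 1) Histogram, 2) pixel probabilities, 3) cumulative distribution (CDF).
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size

    # 4) Remap intensities through the CDF to spread them over [0, 255].
    equalized = np.round(255 * cdf[img]).astype(np.uint8)
    print(img.min(), img.max(), "->", equalized.min(), equalized.max())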

Author 1: Mohammad Faghedi
Author 2: Behrang Barekatain
Author 3: Kaamran Raahemifar

Keywords: Automatic license plate detection; shadowed images; histogram equalization; cumulative distribution; pixel probability

PDF

Paper 57: A Real-Time Algorithm for Tracking Astray Pilgrim based on in-Memory Data Structures

Abstract: Large crowd management presents a significant challenge to organizers, to the success of the event, and to achieving its objectives. One of the biggest events with the largest crowds in the world is the Muslim pilgrimage to Mecca, which takes place every year and lasts for five days. The event hosts over two million people from over 80 countries across the world: men, women, and children of various age groups speaking many languages. One of the challenges facing the authorities in Saudi Arabia is that many pilgrims become astray during the event due to the relative complexity of the rituals, the mainly mountainous landscape, and the language barrier. This leaves them unable to perform the required rituals at the prescribed time(s), with the possibility of invalidating the whole pilgrimage and jeopardizing their once-in-a-lifetime journey. Last year over 20,000 pilgrims went astray during the pilgrimage season. In this paper we present a tracking algorithm to help track, alarm, and report astray pilgrims. The algorithm is implemented on a server that stores pilgrims' data such as geolocation, timestamp, and personal information including name, age, gender, and nationality. Each pilgrim is equipped with a wearable device that reports the geolocation and timestamp to the centralized server. Pilgrims are organized in groups of at most 20 persons. By identifying the distance of a pilgrim to the group's centroid and whether or not the pilgrim's geolocation is where it is supposed to be according to the pilgrimage schedule, the algorithm determines if the pilgrim is astray or on the verge of becoming astray. Algorithm complexity analysis is performed; for better performance and a shorter time to determine a pilgrim's status in real time, the algorithm employs an in-memory data structure, and the analysis shows that the time complexity is O(n). The algorithm has also been tested using simulation runs based on synthesized data randomly generated within a specified geographical zone and according to the pilgrimage plan. The simulation results showed good agreement with the analytical performance analysis.
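
The centroid-distance test at the heart of the algorithm can be sketched with the standard haversine formula; the 200 m astray threshold below is an invented example, not a figure from the paper:

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (lat, lon) points in meters."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp/2)**2 + math.cos(p1)*math.cos(p2)*math.sin(dl/2)**2
        return 2 * r * math.asin(math.sqrt(a))

    group = [(21.4225, 39.8262), (21.4228, 39.8265), (21.4223, 39.8260)]
    centroid = (sum(p[0] for p in group) / len(group),
                sum(p[1] for p in group) / len(group))

    pilgrim = (21.4250, 39.8300)                    # latest reported location
    ASTRAY_THRESHOLD_M = 200                        # invented example threshold
    d = haversine_m(*pilgrim, *centroid)
    print(f"{d:.0f} m from group centroid ->",
          "astray" if d > ASTRAY_THRESHOLD_M else "with group")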

Author 1: Mohammad A.R. Abdeen
Author 2: Ahmad Taleb

Keywords: In-Memory structure; real-time; tracking algorithm for astray pilgrim; large crowd management

PDF

Paper 58: A Decision Support Platform based on Cross-Sorting Methods for the Selection of Modeling Methods

Abstract: Hospital supply chain performance is a concept that qualifies the good governance, continuous improvement, and optimization of the human and material resources of the hospital system. Several performance analysis methods have thus been proposed for qualifying organizational flows and resource management. The main goal of the present study is to provide a literature review of the main graphical modeling and performance analysis techniques used in different research projects in the hospital field. The literature review is analyzed and complemented by a classification study of these techniques, on the basis of which a computer platform based on multi-criteria decision analysis is proposed. This platform uses fuzzy pairwise comparisons and cross-sorting methods. Finally, the classification study is designed to highlight the techniques best adapted to the different characteristics and components of the hospital system as part of the overall decision support process.

Author 1: Manal Tamir
Author 2: Fatima Ouzayd
Author 3: Raddouane Chiheb

Keywords: Hospital supply chain; graphical modeling and performance analysis techniques; multi-criteria decision analysis; fuzzy pairwise comparisons; support decision process; computer platform; cross-sorting methods

PDF

Paper 59: Opinion Mining and thought Pattern Classification with Natural Language Processing (NLP) Tools

Abstract: Opinion mining from digital media is becoming the easiest way to obtain insight into thinking trends. Currently, there exists no established modeling or classification of such trends for any society or the global community; marketing companies currently rely on sentiment analysis for their products. In this paper, social sentiment is considered in the form of collective sentiment and individual sentiment, which we classify as macro- and micro-social sentiment. Sentiment varies among groups, sects, and the various classes of society, depending on many other characteristics of the society. Social media makes it possible to explore certain ideas, various trends, and their significance; this significance requires further exploration of more patterns, and the cycle continues until it converges on a research outcome. Based on all of the above, the study focuses on opinion classes with respect to general think patterns. Think patterns (TPs) develop over time due to social traditions, fashions, family norms, etc. The think patterns of specific communities, such as women in the restricted or rural societies of our country, are very difficult to classify. Such trends and patterns are the focus of this study, based on various defined parameters. The opinion and sentiment data will be analyzed using natural language processing (NLP) tools such as Twitter, GATE, and the Google APIs.

Author 1: Sayyada Muntaha Azim Naqvi
Author 2: Muhammad Awais
Author 3: Muhammad Yahya Saeed
Author 4: Muhammad Mohsin Ashraf

Keywords: Opinion mining; sentiment analysis; natural language processing; think pattern; GATE

PDF

Paper 60: Undergraduate’s Perception on Massive Open Online Course (MOOC) Learning to Foster Employability Skills and Enhance Learning Experience

Abstract: The Massive Open Online Course (MOOC) is a very recent development in higher education institutions in Malaysia. In September 2015, Universiti Teknikal Malaysia Melaka (UTeM) introduced a Mandarin course under Malaysia MOOCs. The study focused on undergraduates' perception of the Mandarin MOOC in fostering their employability skills as a research variable. The researchers used qualitative and quantitative methods as the research design: an interview was used to investigate students' perception of the Mandarin MOOC in fostering their employability skills, and an online survey was conducted to investigate the effectiveness of MOOC learning. Undergraduates at UTeM were selected as the respondents of this study. The findings show that, among all the employability skills, students believe the Mandarin massive open online learning course fosters two: 'information gaining skill' and 'system and technology skill'. This study of MOOCs is important for the decision-making of the government and relevant institutions, and it also contributes towards teaching practices in higher education institutions.

Author 1: Cheong Kar Mee
Author 2: Sazilah binti Salam
Author 3: Linda Khoo Mei Sui

Keywords: MOOCs; mandarin; employability skills; perception; undergraduates

PDF

Paper 61: Roadmap to Project Management Office (PMO) and Automation using a Multi-Stage Fuzzy Rules System

Abstract: The Project Management Office (PMO) has proven to be a successful approach to enhancing control over projects and improving their success rate. One of the main functions of the PMO is monitoring projects and ensuring that adequate processes are applied if a project starts to slip. Due to the high complexity of the parameters involved in choosing the actions to take, depending on the type and status of the projects, organizations face difficulties in applying the same standards and processes to all projects across the organization. In this paper, the authors provide an overview of the main functions of the PMO, suggest a roadmap for starting a PMO function within an organization, and propose an architecture to automate the monitoring and control function of a PMO using a multi-stage fuzzy rules system.
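
A single stage of a fuzzy rules system can be sketched as follows; the membership functions, linguistic labels, and the rule itself are invented examples of the general technique, not rules from the proposed architecture:

    def tri(x, a, b, c):
        """Triangular membership function peaking at b over [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Invented fuzzy sets over "schedule slip (%)" and "budget overrun (%)".
    slip_high = lambda x: tri(x, 5, 15, 30)
    overrun_high = lambda x: tri(x, 5, 20, 40)

    def escalation_degree(slip, overrun):
        # Rule: IF slip is high AND overrun is high THEN escalate (min = AND).
        firing = min(slip_high(slip), overrun_high(overrun))
        return firing  # in a multi-stage system this feeds the next stage

    print(f"escalation degree: {escalation_degree(12, 18):.2f}")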

Author 1: Magdi Amer
Author 2: Noha Elayoty

Keywords: Roadmap to build a PMO; automating the PMO; multi-stage Fuzzy System

PDF

Paper 62: Fixed Point Implementation of Tiny-Yolo-v2 using OpenCL on FPGA

Abstract: The deep convolutional neural network (CNN) algorithm has recently gained popularity in many applications such as image classification, video analytics, and object detection. Being compute-intensive and memory-expensive, CNN-based algorithms are hard to implement on embedded devices. Although recent studies have explored hardware implementations of CNN-based object classification models such as AlexNet and VGG, implementations of CNN-based object detection models on Field Programmable Gate Arrays (FPGAs) remain rare. Consequently, this study proposes a fixed-point (16-bit) implementation of the CNN-based object detection model Tiny-Yolo-v2 on a Cyclone V PCIe Development Kit FPGA board using the High-Level Synthesis (HLS) tool OpenCL. Considering FPGA resource constraints in terms of computational resources, memory bandwidth, and on-chip memory, a data pre-processing approach is proposed that merges batch normalization into the convolution layer. To the best of our knowledge, this is the first implementation of the Tiny-Yolo-v2 object detection algorithm on an FPGA using the Intel FPGA Software Development Kit (SDK) for OpenCL. The proposed implementation achieves a peak performance of 21 GOPs at a 100 MHz working frequency.
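
Merging batch normalization into a convolution is a standard algebraic identity: since BN applies y = gamma*(x - mu)/sqrt(var + eps) + beta after the convolution, the scale can be folded into the weights and the shift into the bias. A minimal numpy sketch of the folding (shapes and values are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    W = rng.standard_normal((16, 3, 3, 3))   # conv weights: (out_ch, in_ch, k, k)
    b = np.zeros(16)                          # conv bias
    gamma, beta = rng.standard_normal(16), rng.standard_normal(16)
    mu, var, eps = rng.standard_normal(16), rng.random(16) + 0.5, 1e-5

    # Fold BN into W and b.
    scale = gamma / np.sqrt(var + eps)
    W_folded = W * scale[:, None, None, None]
    b_folded = (b - mu) * scale + beta
    # At inference, conv(W_folded, b_folded) == BN(conv(W, b)) exactly,
    # removing a separate BN pass on the FPGA.
    print(W_folded.shape, b_folded.shape)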

Author 1: Yap June Wai
Author 2: Zulkalnain bin Mohd Yussof
Author 3: Sani Irwan bin Salim
Author 4: Lim Kim Chuan

Keywords: FPGA; CNN; Tiny-Yolo-v2; OpenCL; detection

PDF

Paper 63: Designing a Switching based Workflow Scheduling Framework for Networked Environments

Abstract: In non-dedicated computing environments such as grids, resource processing power is dynamic; moreover, the autonomy of these environments makes it impossible to repeat operating scenarios when comparing scheduling algorithms developed in this context. Creating an environment that provides such conditions is therefore necessary. In this paper, a framework for evaluating workflow scheduling algorithms is created, focusing on the dynamics of resource power in distributed environments. The framework is based on a switching model that can account for changes in the processing power of resources with high precision. Using this framework, the effectiveness of several different workflow scheduling algorithms is evaluated.

Author 1: Hamid Tabatabaee
Author 2: Mohamad Reza Mohebbi
Author 3: Hosein Salami

Keywords: Scheduling framework; workflow scheduling; grid; switching based framework

PDF

Paper 64: Reliable and Energy Efficient MAC Mechanism for Patient Monitoring in Hospitals

Abstract: In a medical body area network (MBAN), sensors are attached to a patient's body for continuous, real-time monitoring of biomedical vital signs. The sensors send the patient's data to a hospital base station so that doctors and caregivers can access it and be informed promptly if the patient's condition becomes critical. These tiny sensors have low data rates, small transmission ranges, limited battery power, and limited processing capability. Ensuring reliability in an MBAN is important due to the critical nature of the patient's data, because any wrong, missing, or delayed data can lead doctors to take wrong decisions about the patient's health, with potentially fatal results. Data transmission reliability in an MBAN can be ensured by retransmissions, acknowledgments, or a guaranteed time slot mechanism, but this causes more power consumption. We propose an efficient MAC mechanism to achieve both reliability and energy efficiency at an acceptable trade-off level. The proposed MAC mechanism not only overcomes the limitations of the ZigBee MAC mechanism, such as inefficient CSMA/CA and underutilization of guaranteed time slots, but also adapts to different traffic types, such as emergency and normal traffic. Our results show that application-level throughput and packet delivery ratio increase while packet loss decreases. We also optimize energy utilization by tuning the macMaxCSMABackoffs and macMinBE parameters of the ZigBee MAC mechanism.
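
The two tuned parameters govern the CSMA/CA backoff loop; a simplified simulation sketch of that loop follows, with channel-busy behaviour randomized purely for illustration:

    import random

    def csma_ca_attempt(mac_min_be=3, mac_max_be=5, mac_max_backoffs=4,
                        p_channel_busy=0.6):
        """Simplified slotted CSMA/CA: returns the number of backoffs used,
        or None if the frame is dropped after macMaxCSMABackoffs failures."""
        be = mac_min_be
        for nb in range(mac_max_backoffs + 1):
            random.randint(0, 2 ** be - 1)        # random backoff delay (slots)
            if random.random() > p_channel_busy:  # clear channel assessment
                return nb                         # channel idle -> transmit
            be = min(be + 1, mac_max_be)          # busy -> widen backoff window
        return None                               # channel access failure

    results = [csma_ca_attempt() for _ in range(1000)]
    print("drop rate:", results.count(None) / len(results))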

Author 1: Madiha Fatima
Author 2: Adeel Baig
Author 3: Irfan Uddin

Keywords: Medical Body Area Network; MAC Protocols; ZigBee MAC Mechanism; Guaranteed Time Slot Allocation Scheme

PDF

Paper 65: Towards Evaluating Web Spam Threats and Countermeasures

Abstract: Web spam is a deception technique that aims to obtain high ranks for retrieved web pages at the top of Search Engine Result Pages (SERPs). This paper provides an evaluation of web spam threats and countermeasures. It starts by presenting the different types of web spam threats, which aim to deceive users with incorrect information, distribute phishing, and propagate malware. It then presents a detailed description of the proposed anti-web-spam tools, systems, and countermeasures, and conducts a comparison between them. The results indicate that online real-time tools are the most highly recommended solutions against web spam threats.

Author 1: Lina A. Abuwardih

Keywords: SEO; web spam threats; phishing; malware; web attacks

PDF

Paper 66: Data Modeling Guidelines for NoSQL Document-Store Databases

Abstract: Good database design is key to high data availability and consistency in traditional databases, and numerous techniques exist to help designers model schemas appropriately. These schemas are strictly enforced by traditional database engines. However, with the emergence of schema-free (NoSQL) databases coupled with voluminous and highly diversified datasets (big data), such aid becomes even more important, as schemas in NoSQL are enforced by application developers, which requires a high level of competence. Existing modeling techniques and guides used for traditional databases are insufficient for big-data storage settings. As a synthesis, new modeling guidelines for NoSQL document-store databases are posed. These guidelines cut across both the logical and physical stages of database design. Each is developed from solid empirical insights, yet they are prepared to be intuitive to developers and practitioners. To realize this goal, we employ an exploratory approach involving the investigation of techniques, empirical methods, and expert consultations. We analyze how industry experts prioritize requirements and analyze the relationships between datasets on the one hand and error prospects and awareness on the other. A few proprietary guidelines were extracted from a heuristic evaluation of five NoSQL databases. In this regard, the proposed guidelines have great potential to function as an instrument of knowledge transfer from academia to NoSQL database modeling practice.
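
A recurring modeling decision in document stores is embedding versus referencing related data; the two Python-dict documents below sketch both options for an invented blog example (field names are illustrative, not from the paper's guidelines):

    # Option 1: embed comments inside the post document.
    # Reads are one fetch, but the document grows without bound.
    post_embedded = {
        "_id": "post-1",
        "title": "Schema design in document stores",
        "comments": [
            {"author": "amina", "text": "Very helpful."},
            {"author": "li", "text": "What about indexing?"},
        ],
    }

    # Option 2: reference comments stored in their own collection.
    # Documents stay small; reads need a second lookup (application-side join).
    post_referenced = {"_id": "post-1", "title": "Schema design in document stores"}
    comments = [
        {"_id": "c-1", "post_id": "post-1", "author": "amina", "text": "Very helpful."},
        {"_id": "c-2", "post_id": "post-1", "author": "li", "text": "What about indexing?"},
    ]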

Author 1: Abdullahi Abubakar Imam
Author 2: Shuib Basri
Author 3: Rohiza Ahmad
Author 4: Junzo Watada
Author 5: Maria T. Gonzlez-Aparicio
Author 6: Malek Ahmad Almomani

Keywords: Big Data; NoSQL; Logical and Physical Design; Data Modeling; Modeling Guidelines; Document-Stores; Model Quality

PDF

Paper 67: A Routing Calculus with Distance Vector Routing Updates

Abstract: We propose a routing calculus in a process algebraic framework to implement dynamic updates of routing tables using distance vector routing. This calculus is an extension of an existing routing calculus, DRωπ, in which routing tables are fixed except when new nodes are created, in which case the routing tables are appended with the relevant entries. The main objective of implementing dynamic routing updates is to demonstrate formal modeling of distributed networks that is closer to networks in practice. We justify our calculus by showing its reduction equivalence with its specification Dπ (distributed π-calculus) after abstracting away unnecessary details from our calculus, which is in fact one of the implementations of Dπ. We name our calculus with routing table updates DRϕπ.
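
The distance vector update being modeled is the classic Bellman-Ford relaxation: a node adopts a neighbour's route when the advertised cost plus the link cost beats its current entry. A minimal Python sketch with an invented topology:

    # Each node keeps {destination: (cost, next_hop)}.
    def dv_update(table, neighbour, neighbour_table, link_cost):
        """Merge a neighbour's advertised distance vector into our table.
        Returns True if any route changed (i.e. we should re-advertise)."""
        changed = False
        for dest, (cost, _) in neighbour_table.items():
            new_cost = link_cost + cost
            if dest not in table or new_cost < table[dest][0]:
                table[dest] = (new_cost, neighbour)
                changed = True
        return changed

    a = {"A": (0, "A")}
    b = {"B": (0, "B"), "C": (2, "C")}
    dv_update(a, "B", b, link_cost=1)   # A learns routes to B and C via B
    print(a)  # {'A': (0, 'A'), 'B': (1, 'B'), 'C': (3, 'B')}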

Author 1: Priyanka Gupta
Author 2: Manish Gaur

Keywords: Routing Calculi; Routing Protocols; Well Formed Configuration; Reduction Semantics

PDF

Paper 68: TokenSign: Using Revocable Fingerprint Biotokens and Secret Sharing Scheme as Electronic Signature

Abstract: The electronic signature is a quick and convenient tool, used for legal documents and payments since business practices shifted from traditional paper-based to computer-based systems. The growing use of electronic signatures means they appear daily in many applications, in both government and private organizations such as financial services, where an electronic signature is taken from a group of people at once to cash checks or approve a transaction. However, non-repudiation and authentication remain highlighted concerns for electronic signatures. To overcome these obstacles, we propose TokenSign, a system that uses revocable fingerprint biotokens with secret sharing as an electronic signature. TokenSign maintains two layers of security: first, it transforms and encrypts the user's fingerprint data; second, it embeds a shared secret inside the encrypted fingerprints. The TokenSign scheme then distributes all shares of the electronic signatures over multiple clouds. During the matching/signing process, TokenSign utilizes threading to match the fingerprints in parallel in their secure encrypted form, without decrypting the data. Finally, the TokenSign scheme applies a secret sharing scheme to compute the shared secret, producing an electronic signature. Our experiments show that the TokenSign scheme achieves comparable accuracy and improved performance compared to the two baselines.
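
The secret sharing building block is typically Shamir's (k, n) threshold scheme; a compact sketch over a prime field follows. The prime and parameters are toy values for illustration, and the paper's exact scheme may differ:

    import random

    P = 2 ** 61 - 1  # a prime field; toy choice for illustration

    def make_shares(secret, k, n):
        """Split `secret` into n shares; any k of them reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        def poly(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, poly(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x=0 recovers the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = make_shares(123456789, k=3, n=5)
    print(reconstruct(shares[:3]))  # any 3 of the 5 shares -> 123456789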

Author 1: Fahad Alsolami

Keywords: Signature; Fingerprint; Electronic; Security

PDF

Paper 69: KNN and ANN-based Recognition of Handwritten Pashto Letters using Zoning Features

Abstract: This paper presents an intelligent recognition system for handwritten Pashto letters. Handwritten character recognition is challenging due to variations in shape and style; in addition, these characters naturally vary among individuals. Identification becomes even more daunting due to the lack of standard datasets of inscribed Pashto letters. In this work, we have designed a database of moderate size, which encompasses a total of 4488 images, stemming from 102 distinct samples for each of the 44 letters in Pashto. Furthermore, the recognition framework extracts zoning features, followed by K-Nearest Neighbour (KNN) and Neural Network (NN) classification of the individual letters. Based on the evaluation, the proposed system achieves an overall classification accuracy of approximately 70.05% using KNN, and an accuracy of 72% using NN at the cost of increased computation time.
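
Zoning features reduce a character image to per-zone ink densities; a minimal sketch with invented stand-in glyphs, feeding a KNN classifier from scikit-learn:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def zoning_features(img, zones=(4, 4)):
        """Split a binary glyph into a grid of zones; feature = ink density."""
        zh, zw = img.shape[0] // zones[0], img.shape[1] // zones[1]
        return np.array([img[r*zh:(r+1)*zh, c*zw:(c+1)*zw].mean()
                         for r in range(zones[0]) for c in range(zones[1])])

    rng = np.random.default_rng(4)
    # Invented stand-in data: 40 random 32x32 "glyphs" from 4 letter classes.
    X = np.array([zoning_features(rng.random((32, 32)) < 0.3) for _ in range(40)])
    y = np.repeat(np.arange(4), 10)

    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    print(knn.predict(X[:2]))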

Author 1: Sulaiman Khan
Author 2: Hazrat Ali
Author 3: Zahid Ullah
Author 4: Nasru Minallah
Author 5: Shahid Maqsood
Author 6: Abdul Hafeez

Keywords: KNN; deep neural network; OCR; zoning technique; Pashto; character recognition; classification

PDF

Paper 70: Towards Secure IoT Communication with Smart Contracts in a Blockchain Infrastructure

Abstract: The Internet of Things (IoT) is undergoing rapid growth in the IT industry, but it continues to be associated with several security and privacy concerns as a result of its massive scale, decentralized topology, and resource-constrained devices. Blockchain (BC), a distributed ledger technology used in cryptocurrency, has attracted significant attention in the realm of IoT security and privacy. However, adopting BC for IoT is not straightforward in most cases, due to the overheads and delays caused by BC operations. In this paper, we apply a BC technology known as Hyperledger Fabric to an IoT network. This technology introduces an execute-order technique for transactions that separates transaction execution from consensus, resulting in increased efficiency. We demonstrate that our proposed IoT-BC architecture is sufficiently secure with regard to the fundamental security goals, i.e., confidentiality, integrity, and availability. Finally, simulation results show that the performance overheads associated with our approach are as minimal as those of the Hyperledger Fabric framework itself and negligible given the security and privacy benefits.

Author 1: Jawad Ali
Author 2: Toqeer Ali
Author 3: Shahrulniza Musa
Author 4: Ali Zahrani

Keywords: IoT; Blockchain Authorization; Hyperledger Fabric; BC; Blockchain Integrity

PDF

Paper 71: An Spin / Promela Application for Model checking UML Sequence Diagrams

Abstract: UML sequence diagrams usually represent the behavior of system executions. Automated verification of the correctness of UML sequence diagrams is necessary because they can model critical algorithmic behaviors of information systems. UML sequence diagrams are often applied in the requirements and design phases of the software development process, and their correctness guarantees the accurate and transparent implementation of software products. The primary goal of this article is to review and improve the translation of basic and complex UML sequence diagrams into Spin / Promela code, taking into account the behavioral properties and elements of combined fragments of UML sequence diagrams for synchronous and asynchronous messages. This article also redefines a previous proposal for a transition system for UML sequence diagrams by specifying Linear Temporal Logic (LTL) formulas to verify model correctness. We present an application example of our modeling proposal on a modified version of a traditional case study, using UML sequence diagrams translated into Promela code to verify their properties and correctness.

Author 1: Cristian L. Vidal-Silva
Author 2: Rodolfo Villarroel
Author 3: Jos´e Rubio
Author 4: Franklin Johnson
Author 5: Erika Madariaga
Author 6: Camilo Campos
Author 7: Luis Carter

Keywords: Spin / Promela; UML Sequence Diagrams; Fault Tolerance; LTL formulas; Combined Fragment

PDF
